    Character.ai to ban teens from talking to its AI chatbots

By The Daily Fuse | October 29, 2025


Chatbot website Character.ai is cutting off teenagers from having conversations with its virtual characters, after facing intense criticism over the kinds of interactions young people have been having with online companions.

The platform, founded in 2021, is used by millions of people to talk to chatbots powered by artificial intelligence (AI).

But it is facing several lawsuits in the US from parents, including one over the death of a teenager, with some branding it a “clear and present danger” to young people.

Now, Character.ai says that from 25 November under-18s will only be able to generate content such as videos with their characters, rather than talk to them as they currently can.

Online safety campaigners have welcomed the move but said the feature should never have been available to children in the first place.

Character.ai said it was making the changes after “reports and feedback from regulators, safety experts, and parents”, which have highlighted concerns about its chatbots’ interactions with teenagers.

Experts have previously warned that the potential for AI chatbots to make things up, be overly encouraging, and feign empathy can pose risks to young and vulnerable people.

“Today’s announcement is a continuation of our general belief that we need to keep building the safest AI platform in the world for entertainment purposes,” Character.ai boss Karandeep Anand told BBC News.

He said AI safety was “a moving target” but something the company had taken an “aggressive” approach to, with parental controls and guardrails.

Online safety group Internet Matters welcomed the announcement, but said safety measures should have been built in from the start.

“Our own research shows that children are exposed to harmful content and put at risk when engaging with AI, including AI chatbots,” it said.

Character.ai has been criticised in the past for hosting potentially harmful or offensive chatbots that children could talk to.

Avatars impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her own life at the age of 14 after viewing suicide material online, were found on the site in 2024 before being taken down.

Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on the paedophile Jeffrey Epstein which had logged more than 3,000 chats with users.

The outlet reported that the “Bestie Epstein” avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ that were subsequently taken down by Character.ai.

The Molly Rose Foundation, which was set up in memory of Molly Russell, questioned the platform’s motivations.

“Yet again it has taken sustained pressure from the media and politicians to make a tech firm do the right thing, and it appears that Character AI is choosing to act now before regulators make them,” said Andy Burrows, its chief executive.

Mr Anand said the company’s new focus was on providing “even deeper gameplay [and] role-play storytelling” features for teenagers, adding these would be “far safer than what they might be able to do with an open-ended bot”.

New age verification methods will also come in, and the company will fund a new AI safety research lab.

Social media expert Matt Navarra said it was a “wake-up call” for the AI industry, which is moving “from permissionless innovation to post-crisis regulation”.

“When a platform that builds a teen experience still pulls the plug, it’s saying that filtered chats aren’t enough when the tech’s emotional pull is strong,” he told BBC News.

“This isn’t about content slips. It’s about how AI bots mimic real relationships and blur the lines for young users,” he added.

Mr Navarra also said the big challenge for Character.ai will be to create an engaging AI platform that teenagers still want to use, rather than one they abandon for “less safe alternatives”.

Meanwhile Dr Nomisha Kurian, who has researched AI safety, said it was “a smart move” to restrict teenagers’ use of chatbots.

“It helps to separate creative play from more personal, emotionally sensitive exchanges,” she said.

“This is so important for young users still learning to navigate emotional and digital boundaries.

“Character.ai’s new measures may reflect a maturing phase in the AI industry: child safety is increasingly being recognised as an urgent priority for responsible innovation.”
