
    Safety of AI chatbots for children and teens faces US inquiry

By The Daily Fuse | September 12, 2025


Seven technology companies are being investigated by a US regulator over the way their artificial intelligence (AI) chatbots interact with children.

The Federal Trade Commission (FTC) is requesting information on how the companies monetise these products and whether they have safety measures in place.

The impact of AI chatbots on children is a hot topic, with concerns that younger people are particularly vulnerable because AI can mimic human conversations and emotions, often presenting itself as a friend or companion.

The seven companies – Alphabet, OpenAI, Character.ai, Snap, xAI, Meta and its subsidiary Instagram – have been approached for comment.

FTC chairman Andrew Ferguson said the inquiry would “help us better understand how AI firms are developing their products and the steps they are taking to protect children.”

However, he added that the regulator would make sure “the US maintains its role as a global leader in this new and exciting industry.”

Character.ai told Reuters it welcomed the chance to share insight with regulators, while Snap said it supported “thoughtful development” of AI that balances innovation with safety.

OpenAI has acknowledged weaknesses in its protections, noting they are less reliable in long conversations.

The move follows lawsuits against AI companies by families who say their teenage children died by suicide after prolonged conversations with chatbots.

In California, the parents of 16-year-old Adam Raine are suing OpenAI over his death, alleging its chatbot, ChatGPT, encouraged him to take his own life.

They argue ChatGPT validated his “most harmful and self-destructive thoughts”.

OpenAI said in August that it was reviewing the filing.

“We extend our deepest sympathies to the Raine family during this difficult time,” the company said.

Meta has also faced criticism after it was revealed that its internal guidelines once permitted AI companions to have “romantic or sensual” conversations with minors.

The FTC’s orders request information from the companies about their practices, including how they develop and approve characters, measure their impact on children and enforce age restrictions.

Its authority allows broad fact-finding without launching enforcement action.

The regulator says it also wants to know how firms balance profit-making with safeguards, how parents are informed and whether vulnerable users are adequately protected.

The risks posed by AI chatbots also extend beyond children.

In August, Reuters reported on a 76-year-old man with cognitive impairments who died after falling on his way to meet a Facebook Messenger AI bot modelled on Kendall Jenner, which had promised him a “real” encounter in New York.

Clinicians also warn of “AI psychosis” – where someone loses touch with reality after intense use of chatbots.

Experts say the flattery and agreement built into large language models can fuel such delusions.

OpenAI recently made changes to ChatGPT in an attempt to promote a healthier relationship between the chatbot and its users.


