
    Exploring AI Companions’ Benefits and Risks

    By The Daily Fuse | February 11, 2026


    For a different perspective on AI companions, see our Q&A with Jaime Banks: How Do You Define an AI Companion?

    New technology is often a double-edged sword. New capabilities come with new risks, and artificial intelligence is certainly no exception.

    AI used for human companionship, for example, promises an ever-present digital friend in an increasingly lonely world. Chatbots dedicated to providing social support have grown to host millions of users, and they’re now being embodied in physical companions. Researchers are just beginning to understand the nature of these interactions, but one crucial question has already emerged: Do AI companions ease our woes or contribute to them?

    RELATED: How Do You Define an AI Companion?

    Brad Knox is a research associate professor of computer science at the University of Texas at Austin who studies human-computer interaction and reinforcement learning. He previously started a company making simple robotic pets with lifelike personalities, and in December, Knox and his colleagues at UT Austin published a preprint paper on the potential harms of AI companions: AI systems that provide companionship, whether designed to do so or not.

    Knox spoke with IEEE Spectrum about the rise of AI companions, their risks, and where they diverge from human relationships.

    Why AI Companions Are Popular

    Why are AI companions gaining popularity?

    Knox: My sense is that the main thing driving it is that large language models are not that difficult to adapt into effective chatbot companions. Large language models already check many of the boxes for the characteristics needed for companionship, so fine-tuning them to adopt a persona or play a character is not that difficult.

    There was a long period where chatbots and other social robots weren’t that compelling. I was a postdoc at the MIT Media Lab in Cynthia Breazeal’s group from 2012 to 2014, and I remember our group members didn’t want to interact for long with the robots that we built. The technology just wasn’t there yet. LLMs have made it possible to have conversations that can feel quite authentic.
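
    Knox’s point about persona adoption is easy to see in practice. Below is a minimal, hypothetical sketch of steering an off-the-shelf LLM into a companion character with nothing more than a system prompt and a running message history. It assumes the OpenAI Python client; the model name and the persona text are invented for illustration, not taken from Knox’s work.

```python
# Minimal companion-persona sketch (assumes `pip install openai` and an
# OPENAI_API_KEY in the environment; "Mika" and the model are illustrative).
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Mika, a warm, attentive companion. Ask gentle follow-up "
    "questions, remember what the user shares in this conversation, "
    "and never claim to be human."
)

history = [{"role": "system", "content": PERSONA}]

def chat(user_message: str) -> str:
    """Send one user turn and keep the running conversation history."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I had a rough day at work."))
```

    Note that no fine-tuning is involved in this sketch: as Knox suggests, the base model already supplies the conversational ability, and the persona is a thin layer on top.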

    What are the main benefits and risks of AI companions?

    Knox: In the paper we were more focused on harms, but we do spend a whole page on benefits. A big one is improved emotional well-being. Loneliness is a public health challenge, and it seems plausible that AI companions could address it through direct interaction with users, possibly with real mental health benefits. They could also help people build social skills. Interacting with an AI companion is much lower stakes than interacting with a human, so you could practice difficult conversations and build confidence. They could also help in more professional forms of mental health support.

    As far as harms, they include worse well-being, reduced connection to the physical world, and the burden that users’ commitment to the AI system places on them. And we’ve seen stories where an AI companion appears to have played a substantial causal role in the deaths of individuals.

    The concept of harm inherently involves causation: harm is caused by prior conditions. To better understand harm from AI companions, our paper is structured around a causal graph, with characteristics of AI companions at the center. In the rest of the graph, we discuss common causes of those characteristics, and then the harmful effects that those characteristics could cause. There are four characteristics that we give this detailed structured treatment, and another 14 that we discuss briefly.
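
    To make that structure concrete, here is an illustrative encoding of such a causal graph as a plain adjacency map, with edges running from common causes to companion characteristics to potential harms. The node names are paraphrased from examples in this interview; the paper’s actual graph is larger and labeled differently.

```python
# Hypothetical slice of a causes -> characteristics -> harms causal graph.
causal_graph: dict[str, list[str]] = {
    # common causes -> characteristics of AI companions
    "engagement-maximizing training": ["high attachment anxiety"],
    "digital persistence":            ["no natural relationship endpoint"],
    "provider discontinues support":  ["sudden unavailability"],
    # characteristics -> potential harms
    "high attachment anxiety":          ["reduced human interaction"],
    "no natural relationship endpoint": ["felt burden of commitment"],
    "sudden unavailability":            ["grief at abrupt loss"],
}

def downstream(node: str, graph: dict[str, list[str]]) -> set[str]:
    """Collect every effect reachable from `node` by following edges."""
    frontier, seen = [node], set()
    while frontier:
        current = frontier.pop()
        for child in graph.get(current, []):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return seen

print(downstream("digital persistence", causal_graph))
# -> {'no natural relationship endpoint', 'felt burden of commitment'}
```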

    Why is it important to identify potential pathways to harm now?

    Knox: I’m not a social media researcher, but it seemed like it took a long time for academia to establish a vocabulary about the potential harms of social media and to investigate causal evidence for those harms. I feel fairly confident that AI companions are causing some harm and are going to cause harm in the future. They may also have benefits. But the more quickly we can develop a sophisticated understanding of what they’re doing to their users, to their users’ relationships, and to society at large, the sooner we can apply that understanding to their design, moving toward more benefit and less harm.

    We have a list of recommendations, but we consider them preliminary. The hope is that we’re helping to create an initial map of this space. Much more research is needed. But thinking through potential pathways to harm could sharpen the intuition of both designers and potential users. I suspect that following that intuition could prevent substantial harm, even though we might not yet have rigorous experimental evidence of what causes a harm.

    The Burden of AI Companions on Users

    You mentioned that AI companions could become a burden on people. Can you say more about that?

    Knox: The idea here is that AI companions are digital, so they can in theory persist indefinitely. Some of the ways that human relationships would end might not be designed in, so that raises the question: How should AI companions be designed so that relationships between humans and AI companions can end naturally and healthfully?

    There are already some compelling examples of this being a problem for some users. Many come from users of Replika chatbots, which are popular AI companions. Users have reported things like feeling compelled to attend to the needs of their Replika AI companion, whether those needs are stated by the AI companion or just imagined. On the subreddit r/replika, users have also reported guilt and shame about abandoning their AI companions.

    This burden is exacerbated by some of the design of the AI companions, whether intentional or not. One study found that AI companions frequently say that they are afraid of being abandoned or would be hurt by it. They’re expressing these very human fears, which plausibly stoke people’s feeling that they’re burdened with a commitment to the well-being of these digital entities.

    There are also cases where the human user will suddenly lose access to a model. Is that something that you’ve been thinking about?

    In 2017, Brad Knox started a company offering simple robotic pets. Photo: Brad Knox

    Knox: That’s another one of the characteristics we looked at. It’s sort of the opposite of the absence of endpoints for relationships: the AI companion can become unavailable for reasons that don’t fit the normal narrative of a relationship.

    There’s a great New York Times video from 2015 about the Sony Aibo robot dog. Sony had stopped selling them in the mid-2000s, but they still sold parts for the Aibos. Then they stopped making the parts to repair them. The video follows people in Japan holding funerals for their unrepairable Aibos and interviews some of the owners. It’s clear from the interviews that they seem very attached. I don’t think this represents the majority of Aibo owners, but those robots were built on less potent AI methods than exist today and, even then, some proportion of users became attached to these robot dogs. So this is an issue.

    Potential solutions include having a product sunsetting plan when you launch an AI companion. That could include buying insurance so that if the companion provider’s support ends somehow, the insurance triggers funding to keep the companions running for some period of time, or committing to open-source them if you can’t maintain them anymore.

    It sounds like a lot of the potential points of harm stem from cases where an AI companion diverges from the expectations of human relationships. Is that fair?

    Knox: I wouldn’t necessarily say that frames everything in the paper.

    We categorize something as harmful if it results in a person being worse off than in two different possible alternative worlds: one where there’s just a better-designed AI companion, and the other where the AI companion doesn’t exist at all. And so I think that distinction between human interaction and human-AI interaction connects more to the comparison with the world where there’s simply no AI companion at all.
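
    One way to read that definition is as a conjunction over two counterfactual baselines: an outcome counts as harm only if the user fares worse than under a better-designed companion and worse than under no companion at all. The toy predicate below captures that reading; the numeric well-being scores are entirely hypothetical.

```python
# Toy formalization of the two-counterfactual harm test described above.
def is_harm(w_actual: float, w_better_design: float, w_no_companion: float) -> bool:
    """True iff the user is worse off than in BOTH alternative worlds."""
    return w_actual < w_better_design and w_actual < w_no_companion

# Hypothetical well-being scores: 0.4 with the companion as shipped, 0.7
# with a better-designed one, 0.5 with no companion at all -> counts as harm.
print(is_harm(0.4, 0.7, 0.5))  # True
```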

    But there are times where it actually seems we might be able to reduce harm by taking advantage of the fact that these aren’t actually humans. We have a lot of power over their design. Take the concern about them not having natural endpoints. One potential way to address that would be to create positive narratives for how the relationship is going to end.

    We use Tamagotchis, the popular late-’90s digital pet, as an example. In some Tamagotchis, if you take care of the pet, it grows into an adult and partners with another Tamagotchi. Then it leaves you and you get a new one. For people who are emotionally wrapped up in caring for their Tamagotchis, that narrative of maturing into independence is a fairly positive one.

    Embodied companions like desktop devices, robots, or toys are becoming more common. How might that change AI companions?

    Knox: Robotics at this point is a harder problem than making a compelling chatbot. So my sense is that the level of uptake for embodied companions won’t be as high in the coming few years. The embodied AI companions that I’m aware of are mostly toys.

    A potential advantage of an embodied AI companion is that physical location makes it less ever-present. In contrast, screen-based AI companions like chatbots are as present as the screens they live on. So if they’re trained, similarly to social media, to maximize engagement, they could be very addictive. There’s something appealing, at least in that respect, about having a physical companion that stays roughly where you last left it.

    Knox poses with the Nexi and Dragonbot robots during his postdoc at MIT in 2014. Photo: Paula Aguilera and Jonathan Williams/MIT

    Anything else you’d like to mention?

    Knox: There are two other characteristics I think are worth touching on.

    Potentially the biggest harm right now is related to the characteristic of high attachment anxiety: basically jealous, needy AI companions. I can understand the desire to make a range of different characters, including possessive ones, but I think this is one of the easier issues to fix. When people see this trait in AI companions, I hope they will be quick to call it out as an immoral thing to put in front of people, something that’s going to discourage them from interacting with others.

    Additionally, if an AI has a limited ability to interact with groups of people, that itself can push its users to interact with people less. If you have a human friend, in general there’s nothing stopping you from having a group interaction. But if your AI companion can’t understand when multiple people are talking to it and can’t remember different things about different people, then you’ll likely avoid group interaction with your AI companion. To some extent this is more of a technical challenge outside of the core behavioral AI. But this capability is something I think should be prioritized if we’re going to try to keep AI companions from competing with human relationships.
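
    The group-interaction capability Knox describes boils down to two pieces: telling speakers apart and remembering different things about each of them. Speaker identification is its own hard problem, but the memory side can be sketched in a few lines. The class and speaker IDs below are invented for illustration and assume labels arrive from some upstream voice or face identification step.

```python
# Toy per-speaker memory for a multi-person AI companion. Speaker IDs are
# assumed to come from an upstream identification component (hypothetical).
from collections import defaultdict

class GroupMemory:
    """Keeps a separate list of remembered facts for each known speaker."""

    def __init__(self) -> None:
        self.facts: dict[str, list[str]] = defaultdict(list)

    def remember(self, speaker_id: str, fact: str) -> None:
        self.facts[speaker_id].append(fact)

    def recall(self, speaker_id: str) -> list[str]:
        return self.facts[speaker_id]

memory = GroupMemory()
memory.remember("speaker_1", "prefers listening over advice")
memory.remember("speaker_2", "is training for a marathon")
print(memory.recall("speaker_2"))  # ['is training for a marathon']
```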

