    AI chatbots unable to accurately summarise news, BBC finds

The Daily Fuse | February 11, 2025


    Imran Rahman-Jones

Technology reporter

[Image: A phone screen displaying the app icons for ChatGPT, Copilot, Gemini and Perplexity. Credit: Getty Images]

Four major artificial intelligence (AI) chatbots are inaccurately summarising news stories, according to research carried out by the BBC.

The BBC gave OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI content from the BBC website, then asked them questions about the news.

It said the resulting answers contained "significant inaccuracies" and distortions.

In a blog post, Deborah Turness, the CEO of BBC News and Current Affairs, said AI brought "endless opportunities" but the companies developing the tools were "playing with fire".

"We live in troubled times, and how long will it be before an AI-distorted headline causes significant real-world harm?" she asked.

The tech companies that own the chatbots have been approached for comment.

'Pull back'

In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer.

It had journalists who were relevant experts in the subject of each article rate the quality of the answers from the AI assistants.

It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

In her blog post, Ms Turness said the BBC was seeking to "open up a new conversation with AI tech providers" in order to "work together in partnership to find solutions".

She called on the tech companies to "pull back" their AI news summaries, as Apple did after complaints from the BBC that Apple Intelligence was misrepresenting news stories.

Some examples of inaccuracies found by the BBC included:

    • Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
    • ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
    • Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" and describing Israel's actions as "aggressive"

In general, Microsoft's Copilot and Google's Gemini had more significant issues than OpenAI's ChatGPT and Perplexity, which counts Jeff Bezos as one of its investors.

The BBC usually blocks its content from AI chatbots, but it opened up its website for the duration of the tests in December 2024.

The report said that as well as containing factual inaccuracies, the chatbots "struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context".

The BBC's Programme Director for Generative AI, Pete Archer, said publishers "should have control over whether and how their content is used, and AI companies should show how assistants process news along with the scale and scope of errors and inaccuracies they produce".


