
    AI chatbots unable to accurately summarise news, BBC finds

By The Daily Fuse | February 11, 2025


    Imran Rahman-Jones

Technology reporter

[Image: A phone screen displaying the app icons for ChatGPT, Copilot, Gemini and Perplexity. Credit: Getty Images]

Four major artificial intelligence (AI) chatbots are inaccurately summarising news stories, according to research carried out by the BBC.

The BBC gave OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI content from the BBC website, then asked them questions about the news.

It said the resulting answers contained “significant inaccuracies” and distortions.

In a blog, Deborah Turness, the CEO of BBC News and Current Affairs, said AI brought “endless opportunities” but that the companies developing the tools were “playing with fire.”

“We live in troubled times, and how long will it be before an AI-distorted headline causes significant real-world harm?”, she asked.

The tech companies which own the chatbots have been approached for comment.

‘Pull back’

In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer.

It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

In her blog, Ms Turness said the BBC was seeking to “open up a new conversation with AI tech providers” so they could “work together in partnership to find solutions.”

She called on the tech companies to “pull back” their AI news summaries, as Apple did after complaints from the BBC that Apple Intelligence was misrepresenting news stories.

Some examples of inaccuracies found by the BBC included:

• Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
• ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
• Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and describing Israel’s actions as “aggressive”

Overall, Microsoft’s Copilot and Google’s Gemini had more significant issues than OpenAI’s ChatGPT and Perplexity, which counts Jeff Bezos as one of its investors.

The BBC normally blocks its content from AI chatbots, but it opened its website up for the duration of the tests in December 2024.

The report said that as well as containing factual inaccuracies, the chatbots “struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context.”

The BBC’s Programme Director for Generative AI, Pete Archer, said publishers “should have control over whether and how their content is used, and AI companies should show how assistants process news, including the scale and scope of errors and inaccuracies they produce.”



