    Business

    This AI scans Reddit for ‘extremist’ terms and plots bot-led intervention

By The Daily Fuse · May 24, 2025 · 3 min read

A computer science student has built a new AI tool designed to track down Redditors showing signs of radicalization and deploy bots to “deradicalize” them through conversation.

First reported by 404 Media, PrismX was built by Sairaj Balaji, a computer science student at SRMIST in Chennai, India. The tool works by analyzing posts for specific keywords and patterns associated with extremist views and assigning those users a “radical score.” High scorers are then targeted by AI bots programmed to attempt “deradicalization” by engaging the users in conversation.
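The scoring step described above can be pictured as a simple weighted keyword match. The sketch below is purely illustrative: PrismX’s actual keyword lists, weights, thresholds, and function names are not public, so everything here is a hypothetical stand-in for the general technique the article describes.

```python
# Illustrative sketch of keyword-based risk scoring, loosely modeled on the
# article's description of PrismX. All terms, weights, and thresholds below
# are hypothetical placeholders, not the tool's real internals.

RADICAL_KEYWORDS = {
    "keyword_a": 0.4,  # placeholder terms; a real system would use curated lists
    "keyword_b": 0.7,
}

def radical_score(posts: list[str]) -> float:
    """Return a 0-1 score from weighted keyword hits across a user's posts."""
    if not posts:
        return 0.0
    hits = 0.0
    for post in posts:
        text = post.lower()
        for term, weight in RADICAL_KEYWORDS.items():
            if term in text:
                hits += weight
    # Normalize by post count and cap the score at 1.0.
    return min(hits / len(posts), 1.0)

def flag_for_intervention(posts: list[str], threshold: float = 0.5) -> bool:
    """Users scoring above the threshold would be queued for a bot-led chat."""
    return radical_score(posts) >= threshold
```

In a real deployment the hard problems are exactly the ones the article raises: who curates the keyword list, how false positives are handled, and whether flagged users ever consented to being engaged by a bot.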

According to the U.S. government, the primary terror threat to the U.S. is now individuals radicalized to violence online through social media. At the same time, there are fears about surveillance technology and AI infiltrating online communities, not to mention concerns about the ethical minefield of deploying such a tool.

Responding to these concerns, Balaji clarified in a LinkedIn post that the conversation component of the tool has not been tested on real Reddit users without consent. Instead, the scoring and conversation components were used in simulated environments for research purposes only.

“The tool was designed to provoke dialogue, not controversy,” he explained in the post. “We’re at a point in history where rogue actors and nation-states are already deploying weaponized AI. If a college student can build something like PrismX, it raises urgent questions: Who’s watching the watchers?”

While Balaji doesn’t claim to be an expert in deradicalization, as an engineer he is interested in the ethical implications of surveillance technology. “Discomfort sparks debate. Debate leads to oversight. And oversight is how we prevent the misuse of emerging technologies,” he said.

This isn’t the first time Redditors have been used as guinea pigs recently. Just last month, researchers from the University of Zurich faced intense backlash after experimenting on an unsuspecting subreddit.

That research involved deploying AI-powered bots into the Change My View subreddit, which positions itself as a “place to post an opinion you accept may be flawed,” in an experiment to see whether AI could be used to change people’s minds. When Redditors found out they had been experimented on without their knowledge, they weren’t impressed. Neither was the platform itself.

Ben Lee, Reddit’s chief legal officer, wrote in a post that neither Reddit nor the r/changemyview mods knew about the experiment ahead of time. “What this University of Zurich team did is deeply wrong on both a moral and legal level,” Lee wrote. “It violates academic research and human rights norms, and is prohibited by Reddit’s user agreement and rules, in addition to the subreddit rules.”

While PrismX is not currently being tested on real, unconsenting users, it adds to the ever-growing question of the role of artificial intelligence in human spaces.




