    In the age of AI, seeing is no longer believing

By The Daily Fuse | March 19, 2026 | Opinions


Photographs from war zones have always shaped public opinion. Images of bombed cities, missile strikes and fleeing civilians influence how citizens perceive conflicts and how governments respond. A Vietnam War photograph of children fleeing a napalm attack became one of the most powerful symbols of that conflict. For decades, many assumed such images represented objective truth.

Intelligence professionals have long known that assumption is risky.

As early as the 1980s and 1990s, analysts warned that adversaries could manipulate video evidence to influence public opinion and diplomacy. This concern, part of what is now called information operations, recognized that media itself can be a weapon.

Until recently, creating convincing false imagery required specialized equipment and expertise. Today, artificial intelligence has changed that. Generative tools can produce realistic images and video from little more than a written prompt, flooding the public with material that may be authentic, misleading or entirely fabricated.

Recent conflicts illustrate the scale of the problem. During Russia's 2022 invasion of Ukraine, a deepfake video showed President Volodymyr Zelenskyy appearing to urge troops to surrender. It was quickly debunked, but it demonstrated how synthetic media can be weaponized. Other widely shared "battlefield" clips have been traced to the video game Arma 3.

Similar misinformation has circulated during fighting involving Iran and Israel, where AI-generated imagery, recycled footage and mis-captioned videos have spread widely online, often reaching millions before being corrected.

The result is a new kind of battlefield: the battle for narrative control.

Investigators now use open-source intelligence techniques, including reverse-image searches, geolocation, satellite comparisons and metadata analysis, to verify visual evidence. But verification often lags behind the viral whirlwind.

What can be done? The first line of defense is the public. Before sharing dramatic content, users can check trusted fact-checking organizations such as PolitiFact, Snopes, Reuters Fact Check, ProPublica and investigative groups like Bellingcat. A quick search often reveals whether an image has already been debunked.

Basic techniques also help. Reverse-image searches can show whether a photo appeared earlier in another context. Viewers should look for warning signs such as distorted text, repeating patterns or unnatural lighting, all common in AI-generated images. If a dramatic claim appears only on anonymous accounts and not in reputable news outlets, that alone should raise caution.
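One building block behind reverse-image search is perceptual hashing: reduce an image to a tiny grayscale grid, then hash the brightness gradients so that recompressed or lightly edited copies of the same photo still match. The sketch below is a minimal difference-hash (dHash) in plain Python; the hand-built 9x8 pixel grids stand in for a real downscaled image, which actual tools would produce with an imaging library.

```python
def dhash(grid):
    """64-bit difference hash: 1 if a pixel is brighter than its right neighbor.

    `grid` is 8 rows of 9 grayscale values (0-255), i.e. an image already
    shrunk to 9x8 pixels; each row yields 8 comparison bits.
    """
    bits = 0
    for row in grid:
        for x in range(len(row) - 1):
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same source image."""
    return bin(a ^ b).count("1")

# A synthetic "photo" and a slightly brightened copy of it (as might result
# from recompression) hash to nearly identical values.
grid1 = [[(x * 30 + y * 5) % 256 for x in range(9)] for y in range(8)]
grid2 = [[min(255, v + 2) for v in row] for row in grid1]
print(hamming(dhash(grid1), dhash(grid2)))  # small distance: likely the same image
```

Because the hash encodes relative brightness rather than exact pixel values, it survives resizing and recompression, which is why a photo lifted from an old conflict can still be found in an index of previously published images.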

Institutions also have obligations. News organizations should develop rigorous verification practices before publishing dramatic visuals, including the routine use of geolocation, satellite imagery and metadata analysis. Social media platforms such as Facebook, Instagram and TikTok, where much of this content originates and spreads, should strengthen detection and labeling of manipulated media, reduce algorithmic amplification of unverified viral content, and give users clearer context when images are disputed or debunked. Technology companies should go further than voluntary labeling by embedding content provenance standards: digital "chain-of-custody" markers that let users see where an image originated and whether it has been altered.

Artificial intelligence companies, in particular, should design systems that default to transparency, including watermarking or cryptographic signatures for AI-generated content, and should limit the ability to create deceptive real-world scenarios intended to mislead. These measures will not eliminate falsehoods, but they can raise the cost of deception.
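The cryptographic idea behind such provenance markers can be sketched in a few lines: hash the content, record its claimed origin, and sign that record so any later edit is detectable. The toy below uses an HMAC with a shared key as a stand-in for the public-key signatures a real standard would use; the key, field names and origin string are illustrative, not taken from any actual specification.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # hypothetical key held by the publisher

def make_manifest(content: bytes, origin: str) -> dict:
    """Build a signed provenance record for a piece of media."""
    manifest = {"origin": origin, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Re-derive the record from the content we actually have and check the signature."""
    claim = {"origin": manifest["origin"],
             "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"...image bytes..."
m = make_manifest(photo, "example-news-org/camera-7")
print(verify(photo, m))            # True: content matches the signed record
print(verify(photo + b"edit", m))  # False: content was altered after signing
```

A single flipped byte changes the SHA-256 digest and breaks verification, which is exactly the property a chain-of-custody marker needs: viewers cannot be sure an image is true, but they can be sure it is unchanged since a named party signed it.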

Government also has a role, though it must be carefully bounded. Lawmakers can require clear disclosure of synthetic media used in political advertising or public communications, establish penalties for malicious deepfakes intended to deceive voters or incite harm, and support research into authentication technologies. At the same time, any regulatory framework must be crafted to protect First Amendment rights and avoid overreach into legitimate expression.

No single solution will solve the problem. But a combination of public awareness, technological safeguards and targeted policy can help restore a measure of trust in what we see.

Authentic images remain essential for documenting events and holding governments accountable. But the digital age has changed the relationship between images and truth.

Today, seeing and believing are no longer the same. The old adage is obsolete.

Richard Badalamente was a senior scientist at the Pacific Northwest National Laboratory. Before that he was a commissioned officer in the United States Air Force. He lives in Kennewick.



