Members of Congress are once again preparing to roll out a bill that would preempt state laws on artificial intelligence.
Strip away the polish and you'll recognize a familiar playbook: granting broad immunity to Big Tech with minimal safeguards and potentially no end date in sight. It's essentially version 2.0 of Section 230, the liability shield that has allowed social media platforms to escape accountability for over a decade. This time, it's aimed at letting tech off the hook for AI harms.
This new preemption push follows Washington's failed attempt to impose a decadelong moratorium on state AI regulation earlier this year, which the Senate overwhelmingly struck from the One Big Beautiful Bill in July. Now lawmakers are reviving the idea in an anticipated preemption package that would bar states from enacting AI guardrails tailored to local needs and emerging harms.
That matters because while Congress has largely stalled on passing AI safeguards, states are responding to growing public concerns. Lawmakers from Tennessee to California are enacting important protections for young people online, for artists and creators, and for voters in our elections. The anticipated preemption bill threatens to wipe out these safeguards and instead continue a system of zero accountability for the largest tech companies.
If this feels familiar, it should. Section 230 offered near-total immunity to online platforms for third-party content, and courts interpreted it broadly. The result was a regime that incentivized toxic content and addictive engagement over responsibility. We saw the viral spread of disinformation, the monetization of outrage and the normalization of products that hook kids while exposing them to exploitation and self-harm.
Now imagine handing tech a similar shield that blocks any state law that threatens to hold companies accountable for AI harms. That's the preemption proposal's trajectory.
Consider three parallels.
First, child safety. Section 230 dulled the incentive to design for child well-being on social media, and families paid the price. With frontier AI systems, we're already starting to see the same story play out. This month, parents testified before a Senate Judiciary subcommittee about the devastating impact of AI tools on their children. These parents, who watched their own children fall down a dark hole of mental health harms, self-harm and even suicide after engaging with AI chatbots, urged senators not to eliminate accountability for AI companies by preempting state AI safeguards.
Second, election integrity. Platforms flourished under Section 230 while disinformation metastasized. But if social media handed a megaphone to those seeking to undermine our democracy, AI models threaten to hand those same bad actors a loudspeaker stack worthy of a concert arena. Looking at the policy landscape, state lawmakers are the ones taking the lead on legislation that cracks down on deepfakes, voice clones and AI-enabled disinformation in elections. A federal preemption bill threatens not only to sweep these protections aside, but also to prevent state lawmakers from passing new laws as future AI harms emerge.
Third, accountability. Section 230 made it nearly impossible for victims of harmful social media products to seek redress. Preemption would copy-paste that error into the AI era, insulating model providers and large platforms from state-level liability and consumer remedies. Voters don't want that. In a recent poll by the Artificial Intelligence Policy Institute, 73% of Americans said AI companies should be liable for harms caused by their technology.
Over the past couple of decades, the Section 230 model for regulating tech has failed badly, not just in its consequences for users online, but in the inability of lawmakers in Congress to fix a legal framework that has become the foundation for much of today's tech industry. The lesson is clear: grow a high-powered industry in a low-accountability environment, and the political will to address its harms later will fail to materialize.
Preemption's defenders insist that a patchwork of state laws is overwhelming frontier AI labs, some of the best-funded companies in the world, and that national leadership demands a single rulebook. Setting aside the question of whom preemption legislation is designed to benefit, such a strategy only makes sense if the proposed federal rulebook for regulating AI is real.
A substantive national framework would set enforceable duties of care, require risk assessments and incident reporting for high-risk systems, guarantee transparency to researchers and regulators, keep dangerous systems offline, and preserve state authority in domains where harms manifest, like consumer protection and child welfare. Anything less is not harmonization; it's abdication.
Preemption isn't just unpopular with voters; it's a lightning rod in Congress. Lawmakers already demonstrated in July that there is broad, bipartisan discomfort with blanket preemption. Members of Congress recognized that bulldozing state safeguards isn't "pro-innovation," as proponents contend; it's pro-immunity for Big Tech. Innovation and adoption flourish when the rules reward trust and quality, not just reckless speed.
We don't need round two of Section 230. We need thoughtful policies that protect people while letting responsible innovators compete and win. If we learned anything from the last 25 years, it's that immunity without responsibility doesn't make technology better; it makes it much worse.
This was originally published by Tech Policy Press.