The pre-AI world is gone. Estimates suggest that already, as many as one in eight children personally knows someone who has been the target of a deepfake photo or video, and as many as one in four have seen a sexualized deepfake of someone they recognize, whether a friend or a celebrity. It is a real problem, and it’s one that lawmakers are suddenly waking up to.
In the 1980s, when I was a kid, it was the picture of a missing child on a milk carton from across the country that encapsulated parental fears. In 2026, it’s an AI-generated suggestive image of a loved one.
The growing availability of AI nudification tools, such as those associated with Grok, has fueled skyrocketing reports of AI-generated child sexual abuse material: from roughly 4,700 in 2023 to over 440,000 in the first half of 2025 alone, according to the National Center for Missing & Exploited Children.
This is horrific, dirty stuff. It is particularly hard to read about, and to write about, as a mom, because the ability to shield your child from it feels so far beyond your control. Parents already struggle just to keep kids off social media, get screens out of classrooms or lock up household devices at night. And that’s after a decade’s worth of data on social media’s impact on kids.
Before we’ve even solved that problem, AI is taking the world by storm, especially among the young. More than 4 in 10 American teens (42%) report talking to AI chatbots as a friend or companion. The vast majority of students (86%) report using AI during the school year, according to Education Week. Even kids ages 5 to 12 are using generative AI. In several high-profile cases, parents say AI chatbots encouraged their teens to take their own lives.
Too many parents are out of the loop. Polling from Common Sense Media shows that parents consistently underestimate their children’s use of AI. Schools are out of the loop, too: the same survey found that few schools had communicated, or arguably even developed, an AI policy.
But there is a shared sense of foreboding. Americans remain far more concerned (50%) than excited (10%) about the increased use of AI in daily life, and the overwhelming majority (87%) believe they have little to no ability to control it.
Policymakers are on the move. On Jan. 13, the Senate unanimously passed a bill, the Defiance Act, that would allow victims of deepfake porn to sue the people who created the images. The U.K. and the EU are investigating whether Grok was used to generate sexually explicit deepfake images of women and children without their consent, in violation of laws such as the U.K.’s Online Safety Act.
In the U.S., the Take It Down Act, passed by Congress and signed into law last year, criminalized sexual deepfakes and requires platforms to remove the images within 48 hours; those who share them can face jail time.
In my home state of Texas, we have some of the most aggressive AI laws in the country. The Securing Children Online through Parental Empowerment Act of 2024, among other things, requires platforms to implement a strategy to prevent minors from being exposed to “harmful material.” It has been illegal since Sept. 1, 2025, to create or distribute any sexually suggestive images of a person without their consent. Punishments range from felony charges and imprisonment to recurring fines. And starting this year, the Texas Responsible AI Governance Act goes into effect, banning the development of AI with the sole intent of creating deepfakes.
Texas may not be known for its bipartisanship, but these efforts have been driven in a bipartisan manner and framed (correctly) as protecting Texas children and parental rights. “In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” said Attorney General Ken Paxton, announcing his investigation into Meta AI Studio and Character.AI.
But we don’t yet know whether these laws will be effective. For one, it’s all still so new. For another, the technology keeps changing.
And it doesn’t help that the creators of AI are tight with Washington. Big Tech companies are the big players in D.C. these days; their lobbying has grown significantly. Closer to home, Texas Democrats worry that Paxton won’t press Musk over the Grok debacle, given the billionaire’s deep GOP connections.
Under the Trump administration, the Federal Trade Commission launched a formal inquiry into Big Tech, asking companies to detail how they test and monitor for the potential negative impacts of chatbots on kids. But that is essentially self-disclosure, and these same companies haven’t exactly inspired confidence on that score with social media, or, in Grok’s case, with deepfake child nudes.
More outside accountability is needed, and on multiple fronts. I’d like to see Health and Human Services take up AI’s challenge to kids’ well-being as part of the MAHA movement. A bipartisan commission could explore AI age limits, school policies and children’s relational skills. (Concerningly, there was little mention of AI in MAHA’s comprehensive report on child health last year.)
But even with federal and state action, the reality is that much of the AI world will be navigated by parents ourselves. While there are steps that could limit children’s exposure to AI at younger ages, avoidance alone is not the answer. We are only at the start of the AI age, and already the technology is unavoidable: it’s in our computers, homes, schools, toys and workplaces.
More scaffolding is needed, but the deep work will fall to parents. Parents have always needed to raise children with strong spines, thick skins and moral virtue. The struggles of each era change; that duty doesn’t. We will now need to raise children who have the sense of purpose, the critical-thinking skills and the relational know-how to live with this new and already ubiquitous technology, with all its great promise and its dangers.
It’s a brave new world out there, indeed.

