Like many ambitious tech companies before it, OpenAI introduced itself to the culture at large with big claims about how its technology would improve the world, from boosting productivity to enabling scientific discovery. Even the caveats and warnings were de facto ads for the existential potential of artificial intelligence: We needed to be careful with this stuff, or it might actually wipe out humanity.
Fast-forward to the present day, and OpenAI is still driving culture-wide conversations, but its attention-grabbing offerings aren't quite so lofty. Its Sora 2 video platform, which makes it easy to generate and share AI-derived fictions, was greeted as a TikTok for deepfakes. That is, a mash-up of two of the most heavily criticized developments in recent memory: addictive algorithms and misinformation.
As that launch was settling in (and being tweaked to address intellectual property complaints), OpenAI promised a forthcoming change to its flagship ChatGPT product, enabling "erotica for verified adults." These products aren't exactly curing cancer, as CEO Sam Altman has suggested artificial intelligence may someday do. On the contrary, the moves have struck many as weirdly off-key: Why is a company that took its mission (and itself) so seriously doing . . . this?
An obvious risk here is that OpenAI is watering down a previously high-minded brand. There are several major players in AI at this point, including Anthropic, the maker of ChatGPT rival Claude, as well as Meta, Microsoft, Elon Musk's Grok, and more. As they seek to attract an audience, they have to differentiate themselves through how their technologies are deployed and what they make possible, or easy. In short, what the technology stands for. That's why slop, memes, and sex seem like such a comedown from OpenAI's carefully cultivated reputation as an ambitious but responsible pioneer.
To underscore the point, rival Anthropic recently enjoyed a surprising amount of positive attention (an estimated 5,000 visitors and 10 million social media impressions) for a pop-up event in New York's West Village, dubbed a "no slop zone," that emphasized analog creativity tools. That's part of a "Keep Thinking" branding campaign aimed at burnishing the reputation of its Claude chatbot. The company has positioned itself as taking a cautious approach to developing and deploying the technology (one that's attracted some criticism from the Trump administration). That stance has also made Anthropic stand out in what is often a move-fast-and-break-things competitive field.
AI is a field that is spending, and losing, enormous sums, and lately casting about for revenue streams in the here and now while working toward that promised lofty future. According to The Information, OpenAI lost $7.8 billion on revenue of $4.5 billion in the first half of 2025, and expects to spend $115 billion by 2029. ChatGPT has 800 million monthly users, but paid accounts are closer to 20 million, and these recent moves suggest that it needs to build and leverage engagement. As Digiday recently noted, OpenAI increasingly appears to be at least considering ad-driven models (once dubbed a "last resort" by Altman).
Author and podcaster Cal Newport has made the case that developments like viral-video tools and erotica chat are emblematic of a deeper shift away from grandiose economic impacts and toward "betting [the] company on its ability to sell ads against AI slop and computer-generated pornography." It's almost like a sped-up version of Cory Doctorow's notorious enshittification process, pivoting from a quality user experience to an increasingly degraded one designed for near-term profit.
This isn't entirely fair to OpenAI, whose every move is scrutinized partly because it's the best-known brand in a singularly hyped category. All of its rivals will also have to deliver real value in exchange for their enormous costs to investors and society at large. But precisely because it is a leading brand, it is particularly susceptible to dilution if it's seen as straying from its idealistic promise, and rhetoric. A cutting-edge AI pioneer doesn't want to be perceived as an existential threat, but it also doesn't want to be branded as just another source of crass distraction.

