Elon Musk’s social media platform X has provoked outrage after people used its AI chatbot Grok to alter images of women by removing their clothes.
The BBC has seen a number of examples of it undressing women to make them appear in bikinis without their consent, as well as placing them in sexual situations.
XAI, the company behind Grok, did not respond to the BBC’s requests for comment, other than with an automatically generated reply stating “legacy media lies”.
A Home Office spokesperson said it was legislating to ban nudification tools, and that under a new criminal offence, anyone who offered such technology would “face a jail sentence and substantial fines”.
The regulator Ofcom said tech firms must “assess the risk” of people in the UK viewing illegal content on their platforms, but did not confirm whether it was currently investigating X or Grok in relation to AI images.
Grok is a free AI assistant – with some paid-for premium features – which responds to X users’ prompts when they tag it in a post.
It is often used to give reactions or extra context to other posters’ remarks, but people on X are also able to edit an uploaded image via its AI image-editing feature.
It has been criticised for allowing users to generate images and videos with nudity and sexualised content, and it was previously accused of creating a sexually explicit clip of Taylor Swift.
Clare McGlynn, a law professor at Durham University, said X or Grok “could prevent these kinds of abuse if they wanted to”, adding that they “appear to enjoy impunity”.
“The platform has been allowing the creation and distribution of these images for months without taking any action, and we have yet to see any challenge by regulators,” she said.
XAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”.
In a statement to the BBC, Ofcom said it was illegal to “create or share non-consensual intimate images or child sexual abuse material”, and confirmed this included sexual deepfakes created with AI.
It said platforms such as X were required to take “appropriate steps” to “reduce the risk” of UK users encountering illegal content, and to take it down quickly once they become aware of it.
Additional reporting by Chris Vallance.