On Monday, January 5, the European Commission said it’s looking into complaints against Elon Musk’s Grok, which is reportedly being used to generate sexually explicit images of children. This comes just after the public prosecutor’s office in Paris launched an investigation into X for the same reason.

A spokesperson for the European Commission told journalists in Brussels on Monday that it was “very seriously looking into” the creation of sexually explicit images of minors by Grok.

“This is not ‘spicy’. This is illegal. This is appalling. This is disgusting. This has no place in Europe,” the spokesperson said. We reached out to the European Commission for additional information but did not immediately receive a response. 

While the images have been removed from the platform and the users involved banned, the incident raises questions as to why the users were allowed to produce such content in the first place. 

Grok faces scrutiny from European regulators 

Both x.AI and X appear to be skating on thin ice with European regulators. Grok in particular has drawn widespread criticism for its lack of content moderation, which has enabled the creation of hateful content in the past. In November 2025, Holocaust denial claims generated by the chatbot prompted an investigation by French authorities.

At the same time, x.AI’s flagship product has come under fire for enabling users to remove clothing from images of women, depicting them in bikinis without their consent. 

While the company’s acceptable use policy forbids “depicting likenesses of persons in a pornographic manner” and “the sexualization or exploitation of children,” these controls don’t appear to be enforced effectively.

The lack of content guardrails on Grok’s outputs goes beyond avoiding censorship and comes across as outright negligence – particularly when the creation of such content can put minors at risk of harm.

“Through Childline, we hear firsthand from young people about the devastating impact on their safety, mental health, and wellbeing when nude images of them are created and shared,” Rani Govender, policy manager for Child Safety Online at the National Society for the Prevention of Cruelty to Children (NSPCC), told European Business Magazine.

“AI apps like Grok are alarmingly easy to exploit. These technologies are putting children at risk of having illegal material generated of them, which can later be used to bully, extort, or torment victims.”

“It is unacceptable for X to continue neglecting their legal responsibility to protect children from this horrific form of abuse. Given this failure, Ofcom must act swiftly to take enforcement action – making it clear that no service can ignore their duties to keep children safe online,” Govender continued.

How does x.AI plan to respond to the scandal? 

After widespread backlash following the incident, Grok’s X account issued an apology statement for sharing an AI-generated image of two young girls, estimated to be between 12 and 16 years old, in sexualized attire.

While Musk has not addressed the incident directly, he did post on X on January 3, stating that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

We reached out to x.AI for comment on the incident but did not immediately receive a response. Based on the comments from Grok and Musk, however, it appears that x.AI is content to take a reactive approach to illegal depictions of minors. 

And, although x.AI’s acceptable use policy forbids the creation of sexualized content of persons, the fact that these images were made public suggests that any content guardrails in place are not effective and can be sidestepped with certain prompts. 

Grok also noted that the x.AI team is “reviewing to prevent future issues,” but the organization’s response leaves a lot to be desired – particularly its decision to let the AI assistant itself “comment” on such a serious incident.

x.AI and X are leaving women and children at risk of harm 

Merely warning that users who create illegal content will face consequences does not absolve x.AI and X of the need to implement effective moderation that prevents this type of content from being produced. Failure to do so leaves women and children particularly at risk of harm on the platform and beyond.

Not only is there the direct risk of children having their likenesses used to sexualize, bully, and extort them, as Rani Govender of the NSPCC stated, but there is also the risk of such material being normalized on the platform. 

A study of 75 child sexual abuse material offenders, for instance, found that most did not initially seek out child sexual abuse material (CSAM), but first encountered it inadvertently.

Featured image: xAI founder Elon Musk emphasizes the potential of the technology, but also points to the responsibility.
Source: Heute.at
License: Creative Commons Licenses