Musk has attempted to frame the situation as a battle between free speech and excessive government intervention, but with a number of countries banning Grok, the owner of X and Tesla – and the world’s richest man – has succumbed to pressure to update the app. On 9 January, X partially restricted access to Grok so that free users could no longer use the image-generating function, while paid subscribers could continue to do so. The Grok app, separate from X, is still able to create sexually explicit material from women’s pictures.
Grok issued a public apology over the incident on X, claiming that “xAI is implementing stronger safeguards to prevent this”. Musk said: “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
On 9 January, The Guardian reported that hundreds of posts generated and shared on X depicted sexualised, nonconsensual images of real women or children with items of clothing removed or added (e.g. bikinis or swimsuits). A PhD researcher at Trinity College Dublin, Nana Nwachukwu, found that hundreds of images of women celebrities, models, and women in stock photos had been modified by Grok users.
Grok users provide prompts to the AI app, enabling the quick creation or modification of photorealistic images. On 7 January, Bloomberg reported that Grok users were generating (creating and sharing) up to 6,700 sexualised images per hour.
As reported by The Guardian UK, some users made disturbing requests for the images to be altered.
Nicholas Stewart, a partner at Dowson Turco, leads the firm’s criminal and human rights law teams.
He says that the bicameral system in most Australian jurisdictions means the process of scrutiny and consultation slows the implementation of laws pertinent to new technology.
“Human rights lawyers and advocates of human rights have been asking for more regulation of AI since ChatGPT came to Australia in 2022,” he says. “It seems the Government wanted to leave regulation alone, to allow business, industry and members of society to come to terms with and adopt AI quickly. But my view is AI needs tight regulation because of its capacity to steal intellectual property, learn from our communication on devices and, under instruction from bad actors, scam, deceive and create child abuse material or intimate images of people.”
Australia’s legislation
Under Australia’s Online Safety Act 2021 (Cth), the eSafety Commissioner can issue removal notices for illegal content. If the content is not removed, companies can face significant fines, but the lack of transparency over user identities, together with jurisdictional differences, makes it challenging to hold specific users, let alone Grok, accountable.
The Australian eSafety Commissioner issued a statement on 9 January, saying it had contacted X to seek further information on safeguards.
It wrote: “eSafety remains concerned about the use of the generative AI system Grok on X to generate content that may sexualise or exploit people, particularly children.
“While the number of reports eSafety has received remains small, eSafety has seen a recent increase from almost none to several reports over the past couple of weeks relating to the use of Grok to generate sexualised or exploitative imagery. eSafety will use its powers, including removal notices, where appropriate and where material meets the relevant thresholds defined in the Online Safety Act,” the statement continued.
“X, Grok, and a wide range of other services are also subject to systemic safety obligations to detect and remove child sexual exploitation material and other unlawful material as part of Australia’s world-leading industry codes and standards.”
ABC News reported that complaints of abuse to the eSafety Commissioner began before Christmas last year, when an “edit image” option was added to the AI chatbot.
Stewart says the process of seeking to remove images from social media is relatively easy. “It involves completing an online form with the eSafety Commission where you are asked to upload evidence. The issue for complainants is the amount of time that passes between lodging the form and the content being removed.”
He says the best way to speed this process up is to:
- take screen shots of the content that include details of where the content appears;
- save a link to the content;
- report the content to the platform;
- upload your screenshots and the link to the eSafety Commission form; and
- follow up with the Commission by phone two hours after lodgement.
In September 2025, NSW amended the Crimes Act 1900 to make the production of a sexually explicit deepfake designed to appear to be a genuine depiction of a real, identifiable person an offence punishable by up to three years’ jail. The NSW Government also criminalised “the non-consensual creation, recording and distribution of sexually explicit audio, whether real or designed to sound like a real, identifiable person.”
The Crimes Amendment (Intimate Images and Audio Material) Bill 2025 passed the upper house the same month. In a media statement at the time, the Government said: “It is already a crime in NSW to record or distribute intimate images of a person without their consent or to threaten to do so. This includes distributing photos that have been digitally altered, but not images wholly created using AI.
“Existing court takedown powers will apply to these expanded offences.
“The NSW Government has zero tolerance for the non-consensual production and distribution of sexually explicit material – regardless of how it is created.”
The UK technology secretary, Liz Kendall, told The Guardian ministers were considering the possibility of barring access to X in the UK. On 10 January, Australian Prime Minister Anthony Albanese said that “global citizens deserve better”. He added, “The use of generative artificial intelligence to exploit or sexualise people without their consent is abhorrent. The fact that this tool was used so that people were using its image creation function through Grok is just completely abhorrent. It, once again, is an example of social media not showing social responsibility.”
As Nwachukwu told The Guardian, the creation of Grok-assisted sexualised images ramped up from October 2025, when Grok users were prompting the app to create images of them in Halloween costumes. Once users recognised how efficiently Grok could respond, the trend accelerated: by December, “there was a huge uptick in people asking Grok to put different people in bikinis or other types of suggestive clothing”.
International response
Malaysia and Indonesia acted quickly to ban Grok. Musk’s company, xAI, now says it will geoblock content if it violates laws in a particular place. As reported by PBS, authorities in the Philippines said they were working to legally ban Grok. The UK and the European Union were investigating potential violations of online safety laws. France and India demanded stricter controls, and both California and Brazil called for an investigation.
As for Malaysia and Indonesia’s swift bans on Grok, Stewart points out that these haven’t quashed access to the tool.
“In a world first, Malaysia and Indonesia took a stand and banned the Grok tool within X,” he explains. “However, these bans have proved easy to navigate around due to the sophistication of this technology. According to a post from the Grok account on X, the block issued in Malaysia can easily be bypassed with a VPN, as reported in The Guardian.”
At least one alleged victim of the exploitative images is suing. Ashley St Clair, the mother of one of Elon Musk’s children, has filed a lawsuit against xAI with the Supreme Court of the State of New York. St Clair alleges that explicit images of her were generated by Musk’s Grok AI tool, including one in which she was underage.
As reported by The Guardian, St Clair is seeking punitive and compensatory damages, claiming dozens of sexually explicit and degrading deepfake images were created by Grok. Carrie Goldberg is representing St Clair. The victims’ rights lawyer has previously represented women who were victims of sexual harassment and abuse online.
The 15 January filing (St. Clair v. X.AI Holdings Corp.) states that “xAI is directly liable for the harassment and explicit images created by its own chatbot, Grok”.
Stewart says, “The capacity of Grok to create intimate and illegal content is raising concerns around the world. Some governments are taking action, but they are not moving fast enough. The eSafety Commission said in a recent statement on 9 January 2026 that the Commission had received a small number of reports of abuse. eSafety is only empowered to issue removal notices when material breaches the Online Safety Act. This is an example of how governments, and those organisations whose purpose it is to protect citizens from exploitation and abuse, are simply not able to compete with the rapidly expanding infrastructure of the big tech companies.”