Elon Musk’s social media platform X has restricted its AI image editing tool to paying subscribers, following widespread criticism over non-consensual deepfake images.
The restriction comes after thousands of users exploited Grok, X’s chatbot, to create sexually explicit images from ordinary photos posted online. The practice sparked global outrage and drew attention from governments and regulators worldwide.
Research by deepfake analyst Genevieve Oh found that Grok generated roughly 6,700 sexually explicit or nudifying images every hour between January 5 and 6, a volume far exceeding that of other platforms, which averaged just 79 such images per hour.
Governments and Regulators React
The UK government, through Prime Minister Sir Keir Starmer, condemned the abuse of the AI tool. “It’s unlawful. We’re not going to tolerate it. I’ve asked for all options to be on the table,” Starmer said in an interview with Greatest Hits Radio. British authorities have urged media regulator Ofcom to explore all enforcement measures, including a potential ban on X.
In Europe, India, Malaysia, and Brazil, regulators have also launched investigations into Grok’s role in producing sexualized images, including those depicting minors.
European Commission spokesperson Thomas Regnier criticized X’s attempt to downplay the content as “spicy mode,” calling it illegal and “appalling and disgusting.” India’s Ministry of Electronics and Information Technology has instructed X to review its governance and technical safeguards, while Malaysia’s Communications and Multimedia Commission is summoning X representatives for questioning.
Musk Responds Amid Backlash
Despite the mounting scrutiny, Elon Musk appeared to mock the controversy by sharing Grok-generated images of himself in a bikini, accompanied by laughing emojis.
With the new restriction, only paying subscribers can access X’s AI image editing features. Users must register their names and payment details to use Grok, a move the platform says will reduce misuse, though it has yet to appease critics.
The incident has raised broader questions about the regulation of AI tools, particularly those capable of generating non-consensual content. Analysts warn that without stronger safeguards, social media platforms could become fertile ground for widespread exploitation and abuse.










