Elon Musk’s social media platform X is under intense scrutiny in a controversy that raises serious questions about online safety laws. Its AI chatbot Grok stands accused of generating sexualised deepfake images from photographs of real people. Although X has now pledged new restrictions to stop such images being generated, if Ofcom concludes that the platform has failed in its duty to protect users from illegal content, it could face a fine, restrictions or, in the most extreme scenario, a UK ban.
Such an outcome would be both unprecedented and politically charged. X has become a rallying ground for populist and free-speech arguments, and any attempt to curtail its use would almost certainly be framed as ideologically driven censorship.
The debate ignited by Grok’s behaviour is a symptom of wider pressures now emerging as AI systems move from experimentation to mass deployment. AI hype is giving way to real-world impact at speed, exposing gaps between technological capability, governance frameworks, and enforcement capacity.
The UK’s Online Safety Act (OSA) was framed around three core ambitions: making the UK the safest place in the world to be online, delivering the strongest protections for children, and equipping Ofcom with proactive enforcement powers. The Grok case represents the first meaningful test of whether those ambitions can be delivered in an environment increasingly shaped by generative AI.
Grok was launched in 2023 as a permissive generative AI system embedded within a social media platform. Conceived as “TruthGPT”, it was envisaged as an unconstrained alternative to what Musk characterised as ‘politically correct’ AI models. That design choice has brought predictable consequences, triggering widespread offence and anger. UK law criminalises the sharing of non-consensual intimate images, including AI-generated deepfakes, yet requesting or creating such content via an AI system is not currently prohibited. Legislators are now seeking to close this loophole in response to Grok’s antics.
However, the politics are complicated. In opposition, Labour pledged robust AI regulation and tougher online safety enforcement. In government, the emphasis has shifted. The comprehensive standalone AI bill once trailed now seems less likely, while ministers’ rhetoric has moved away from framing AI as a risk to be tightly constrained and towards presenting it as a solution to the UK’s productivity and economic woes. The UK Government was the first to sign a technology prosperity deal with the US, an agreement framed by President Donald Trump around deregulation and innovation.
Musk has cited potential UK enforcement action against Grok as evidence that the Government is “looking for any excuse” to suppress free speech. For the UK Government, the need to act may be clear, but the political context is being shaped by wider geopolitical sensitivities. Trump opposes regulatory constraints on free speech and is sceptical of foreign governments imposing rules on US-based platforms. How Keir Starmer’s government navigates this moment will signal more than its immediate approach to AI. The uncertainty is compounded by questions about how far UK digital regulation will be shaped by alignment with US political and economic priorities, particularly as AI governance becomes more entangled in debates around freedom of expression.
Viewed in this light, Grok is a test case. The outcome will indicate whether the UK’s online safety framework can deliver in an AI-driven platform environment, and whether enforcement can be applied with sufficient speed and authority to make a significant impact. The effort to tame Grok may not determine the future of AI regulation, but it offers an early signal of how effective and resilient that framework is likely to be in practice.