What's the development?
Revelations that X's AI chatbot Grok has been regularly used to create intimate and sexualised images, including of children, without consent have thrown the UK's online safety regime into the spotlight.
Researchers cited by Bloomberg News claimed Grok users were generating up to 6,700 undressed images per hour. While evidence suggests the tool has been used in this way for some time, the introduction of Grok's "spicy mode" for its image and video generators is reportedly responsible for the huge upsurge in this type of use. Unlike its competitors, which include OpenAI's Sora and Google's Veo, Grok Imagine appears not to have had safeguards in place to prevent the creation of intimate and nudified images.
Grok's terms of use do require users to comply with the law, including by refraining from "depicting likenesses of persons in a pornographic manner" and from "the sexualisation or exploitation of children". However, these terms do not appear to be regularly enforced, and its age checking is easy to circumvent as it does not rely on proof of age.
In response to public outrage, X initially announced that it would limit access to its image creation tool to paying subscribers whose details could be verified. On 14 January, Elon Musk posted saying he was "not aware of any naked underage images generated by Grok. Literally zero. Obviously, Grok does not spontaneously generate images, it does so only according to user requests. When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately". By the following morning, X had announced it had acted to prevent its tool altering images of real people to put them in revealing clothing. The changes, which involved geo-blocking, were reportedly applied to publicly available tweets on the Grok chatbot but not to the Grok Assistant on X. However, reports suggested that on the morning of 15 January, it was still possible to generate images of real people in swimwear in the UK, France and Belgium.
This will not, however, be sufficient to head off regulatory scrutiny. Ofcom announced an Online Safety Act compliance investigation on 12 January 2026, and the ICO reported on 7 January that it had contacted X and xAI to seek clarity on measures to ensure data protection compliance after receiving complaints.
Government ministers have been making sweeping claims about the ability of the Online Safety Act (OSA) to tackle the issues thrown up, but the government is also scrambling to introduce or upgrade some of the offences it has announced over the past year.
So what is the current framework and what can we expect?
Online Safety Act - Ofcom investigation
Ofcom's investigation focuses on whether X has complied with its obligations under the OSA in relation to user-generated content. In particular, it will examine whether X has complied with its obligations to:
- assess the risk of people in the UK encountering illegal content and carry out updated risk assessments before making significant changes to its service
- take appropriate steps to prevent people in the UK from encountering priority illegal content, including CSAM and non-consensual intimate images
- take down illegal content swiftly on becoming aware of it
- have regard to protecting users from a breach of privacy laws
- compile children's risk assessments and update them before making significant changes to its services
- use highly effective age assurance to protect UK children from encountering pornographic content.
Criminal offences and deepfakes
By way of refresher, the OSA introduces duties in relation to illegal content, which is classified into priority illegal content (ie content which amounts to a priority offence) and other illegal content, and in relation to content harmful to children, which is sub-categorised into primary priority content, priority content harmful to children, and non-designated content harmful to children. It also covers certain types of adult user content for Category 1 services. The distinctions are significant as there is a duty to prevent users encountering priority illegal content (adults) and primary priority content (children). Services are also required to use age verification to prevent children encountering primary priority content, and Part 5 services are required to use highly effective age assurance to stop children accessing pornographic content. Safety duties relating to other types of illegal and harmful content relate more to risk assessment and management, including takedown requirements and safety by design.
The OSA amended the Sexual Offences Act 2003 (SOA) to introduce new criminal offences relating to online or digital content. However, over the course of the last two years, the government has proposed additional offences and has also changed, or plans to change, the status of some of the offences and some of the content to which they relate. A number of these are intended to address deepfakes and nudification. Some of the announced offences have not yet been enacted or brought into force, and the government is now under pressure to act.
Online Safety Act
The current deepfake and intimate images offences are in s66B of the Sexual Offences Act 2003 (inserted by the OSA). They occur where:
- Person A intentionally shares a photograph or film which shows, or appears to show, person B in an intimate state, where: B does not consent and A does not reasonably believe B consents; A does so with the intention of causing B alarm, distress or humiliation and B does not consent; or A does so for the purpose of A's or another person's sexual gratification, B does not consent and A does not reasonably believe B consents.
- A threatens to share such a photograph or film with the intention that, or being reckless as to whether, B or a person who knows B will fear the threat will be carried out.
There is a defence where the person charged under the first offence can prove they had a reasonable excuse for sharing the photograph or film. There are various exceptions in s66C.
Data (Use and Access) Act
The government announced on 12 January that it will bring s138 of the Data (Use and Access) Act 2025 into force imminently. This covers the creation of deepfakes ("purported intimate images") and inserts a new s66D into the SOA. It is an offence if person A intentionally creates a purported intimate image of person B, B does not consent and A does not reasonably believe B consents. Similarly, it is an offence if A intentionally requests creation of such an image. A also commits an offence if they intentionally request creation of a purported intimate image that includes or excludes something in particular, B does not consent to A requesting the inclusion or exclusion, and A does not reasonably believe B consents.
Crime and Policing Bill
The Crime and Policing Bill has completed its passage through the Commons and is currently at Committee Stage in the House of Lords. It will make a number of changes to the SOA relating to deepfake intimate images, CSAM, and nudification apps including:
- Schedule 9 Part I amends the SOA to create a new s66AA, establishing three new offences (taking or recording offences): where person A intentionally takes a photograph or records a film showing person B in an intimate state, B does not consent and A does not reasonably believe B consents; taking or recording with the intention of causing B alarm, distress or humiliation, where B does not consent; and taking or recording for the purpose of obtaining sexual gratification, where B does not consent and A does not reasonably believe B consents.
- A new s66AC creates offences for installing, adapting, preparing or maintaining equipment with the intention of enabling the commission of the taking or recording offences.
- For the purposes of ss66B and 66C, references to a photograph or film also include an image made or altered by computer graphics which appears to be a photograph or film, a copy of a photograph or film, or data which is capable of being converted into a film, image or photograph.
- Clause 63 of the Bill creates a new s46A SOA – it is an offence to make, adapt, possess, supply or offer to supply a child sexual abuse image-generator. There are various defences to s46A offences for ISPs, along 'mere conduit'/hosting lines.
- New s46C SOA (also inserted by clause 63 Crime and Policing Bill) – where a s46A offence is committed by a body corporate, partnership or unincorporated association, with the consent or connivance of a relevant person (eg director, manager), both the person and the body commit the offence.
- On 14 January 2026, the government confirmed it plans to amend the Bill to include a ban on the creation and supply of AI nudification tools, saying: "this will apply to applications that have one despicable purpose only: to use generative AI to turn images of real people into fake nude pictures and videos without their permission. The new legislation will allow the police to target the firms and individuals who design and supply these disgusting tools".
The Crime and Policing Bill also makes clear that where senior managers acting within their apparent or actual authority commit an offence, the corporate body for which they work will also commit the offence, regardless of where they are incorporated (subject to limited exclusions for partnerships and corporations sole).
What does this mean for you?
As we discuss in our article on Ofcom's statement and guidance, 'A safer life online for women and girls', and the government's strategy and action plan to tackle violence against women and girls, it has long been clear that protecting women and girls online is a major priority.
X is facing scrutiny not just in the UK but also in the EU, under the Digital Services Act, the GDPR and local laws, and further afield, including in California. Calls for X to be banned and for social media to be restricted to under-16s in the UK have, unsurprisingly, prompted a less than positive response, not only from X, with Elon Musk accusing the UK of looking for "any excuse for censorship", but also, potentially, from the Trump administration. A State Department official said on 13 January that "from America's perspective, nothing is off the table…when it comes to free speech…Let's wait and see what Ofcom does and we'll see what America does in response".
This is a major test for the UK's online safety regime. If Ofcom finds X has failed to comply with the OSA, it has a full arsenal of enforcement measures at its disposal, including substantial fines and, potentially, business disruption measures. Whichever way the prevailing political winds are blowing, it will need to deploy these appropriately if it is to demonstrate that the UK's online safety regime is effective.
The wider picture is not only about Ofcom and the OSA though. On 19 January, the government announced a consultation on the use of technology and social media by children, which will look at a range of options including raising the age of digital consent, introducing measures to limit screen time and even introducing an Australia-style ban on the use of social media by under-16s. The resilience of the OSA may well have a bearing on which, if any, of these options are eventually adopted.