22 January 2026
Radar – January 2026 – Opinion (1/3)
The revelations that X's AI chatbot Grok has been regularly used to create intimate and sexualised images, including of children, without consent have thrown the UK's online safety regime into the spotlight.
Researchers cited by Bloomberg News claimed Grok users were generating up to 6,700 undressed images per hour. While evidence suggests the tool has been used in this way for some time, it was the introduction of Grok "spicy mode" for its image and video generators which is reportedly responsible for the huge upsurge in this type of use. Unlike competitors such as OpenAI's Sora and Google's Veo, Grok Image appears not to have had safeguards in place to prevent the creation of intimate and nudified images.
Grok's terms of use do require users to comply with the law, including by refraining from "depicting likenesses of persons in a pornographic manner" and from "the sexualisation or exploitation of children". However, these terms do not appear to be regularly enforced, and its age checking is easy to circumvent as it does not rely on proof of age.
In response to public outrage, X initially announced that it would limit access to its image creation tool to paying subscribers whose details could be verified. On 14 January, Elon Musk posted that he was "not aware of any naked underage images generated by Grok. Literally zero. Obviously, Grok does not spontaneously generate images, it does so only according to user requests. When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately". By the following morning, X had announced it had acted to prevent its tool altering images of real people to put them in revealing clothing. The changes, which involved geo-blocking, were reportedly applied to publicly available tweets on the Grok chatbot but not when using the Grok Assistant on X. However, reports suggested that on the morning of 15 January it was still possible to generate images of real people in swimwear in the UK, France and Belgium.
This will not, however, be sufficient to head off regulatory scrutiny. Ofcom announced an Online Safety Act compliance investigation on 12 January 2026, and the ICO reported on 7 January that it had contacted X and xAI to seek clarity on measures to ensure data protection compliance after receiving complaints.
Government ministers have been making sweeping claims about the ability of the Online Safety Act (OSA) to tackle the issues raised, but the government is also scrambling to introduce, or strengthen, some of the offences it has announced over the past year.
So what is the current framework and what can we expect?
Ofcom's investigation focuses on whether X has complied with its obligations under the OSA in relation to user-generated content. The investigation will focus, in particular, on whether X has complied with its obligations to:
By way of refresher, the OSA introduces duties in relation to illegal content, which is classified into priority illegal content (ie content amounting to a priority offence) and other illegal content, and in relation to content harmful to children, which is sub-categorised as primary priority content, priority content harmful to children, and non-designated content harmful to children. It also covers certain types of adult user content for Category 1 services. The distinctions are significant: there is a duty to prevent users encountering priority content (adults) and primary priority content (children). Services are also required to use age verification to prevent children encountering primary priority content, and Part 5 services are required to use highly effective age assurance to stop children accessing pornographic content. Safety duties relating to other types of illegal and harmful content relate more to risk assessment and management, including takedown requirements and safety by design.
The OSA amended the Sexual Offences Act 2003 (SOA) to introduce new criminal offences relating to online or digital content. However, over the course of the last two years, the government has proposed additional offences and has also changed or planned to change the status of some of the offences and some of the content to which they relate. A number of these are intended to address deepfakes and nudification. Some of the announced offences have not yet been enacted or brought into force, and the government is now under pressure to act.
The current deepfake and intimate images offences are in s66B of the Sexual Offences Act 2003 (inserted by the OSA). They occur where:
There is a defence where the person charged under the first offence can prove they had a reasonable excuse for sharing the photo or film. There are various exceptions in s66C.
The government announced on 12 January that it will bring s138 of the Data (Use and Access) Act 2025 into force imminently. This covers the creation of deepfakes ("purported intimate images") and inserts a new s66D into the SOA. It is an offence if person A intentionally creates a purported intimate image of person B, B does not consent and A does not reasonably believe B consents. Similarly, it is an offence if A intentionally requests creation of the image. A also commits an offence if they intentionally request creation of a purported intimate image that includes or excludes something in particular, B does not consent to A requesting the inclusion or exclusion, and A does not reasonably believe B consents.
The Crime and Policing Bill has completed its passage through the Commons and is currently at Committee Stage in the House of Lords. It will make a number of changes to the SOA relating to deepfake intimate images, CSAM material, and nudification apps including:
The Crime and Policing Bill also makes clear that where senior managers acting within their apparent or actual authority commit an offence, the corporate body for which they work will also commit the offence, regardless of where they are incorporated (subject to limited exclusions for partnerships and corporations sole).
As we discuss in our article on Ofcom's statement and guidance: a safer life online for women and girls and the government's strategy and action plan to tackle violence against women and girls, it has long been clear that protecting women and girls online is a major priority.
X is facing scrutiny not just in the UK but also in the EU under the Digital Services Act and the GDPR as well as local laws, and further afield, including in California. Calls for X to be banned and for social media to be restricted to under-16s in the UK have, unsurprisingly, prompted a less than positive response, not only from X, with Elon Musk accusing the UK of looking for "any excuse for censorship", but also, potentially, from the Trump administration. A State Department official said on 13 January that "from America's perspective, nothing is off the table…when it comes to free speech…Let's wait and see what Ofcom does and we'll see what America does in response".
This is a major test for the UK's online safety regime. If Ofcom finds X has failed to comply with the OSA, it has a full arsenal of enforcement measures including substantial fines and, potentially, business disruption measures at its disposal. Whichever way prevailing political winds are blowing, it will need to deploy these appropriately if it is to demonstrate that the UK's online safety regime is effective.
The wider picture is not only about Ofcom and the OSA though. On 19 January, the government announced a consultation on the use of technology and social media by children, which will look at a range of options including raising the age of digital consent, introducing measures to limit screen time, and even introducing an Australia-style ban on the use of social media by under-16s. The resilience of the OSA may well have a bearing on which, if any, of these options are eventually adopted.