As generative AI technologies rapidly evolve and become increasingly integrated into online platforms of all types, we look at how the UK's Online Safety Act applies to them.
The UK Online Safety Act 2023 (OSA) is now two years old and the majority of its provisions have been live and enforceable since July 2025. The OSA places new legal requirements on providers of three types of internet service: user-to-user (U2U) services, search services, and services featuring pornographic content.
Regulated services under the OSA
The online safety regime applies to internet services that enable users of the service to generate, share or upload content (such as messages, images, videos, comments, audio) on the service that may be encountered by other users of the service. A "user-to-user service" is defined as an internet service by means of which content that is generated directly on the service by a user of the service, or uploaded to or shared on the service by a user of the service, may be encountered by another user, or users, of the service. A U2U service that includes a search engine is referred to as a 'combined service' and is subject to the duties applicable to search services as well.
There are certain exemptions from U2U service regulation. A U2U service is exempt if the only user-generated content enabled by the service is email, SMS, MMS or one-to-one live aural communication. A U2U service is also exempt if the only way users can communicate on it is by posting comments or reviews on the service provider's own content (as distinct from another user's content).
The OSA also places duties on "search services" - internet services which are, or include, a search engine. A search engine is defined as a service or functionality that enables users to search more than one website and/or database or, in principle, to search all websites and/or databases.
Applicability of OSA definitions to generative AI services – Ofcom's view
In response to distressing incidents involving generative AI - including the tragic death of an American teenager who had developed a relationship with a chatbot based on a Game of Thrones character, and cases where users created chatbots to act as 'virtual clones' of real people and deceased children, including Molly Russell and Brianna Ghey - Ofcom issued an open letter to UK online service providers in late 2024 to clarify how the OSA applies to generative AI.
Where a site or app includes a generative AI chatbot that enables users to share text, images or videos generated by the chatbot with other users, it will be a U2U service. This includes, for example, services with 'group chat' functionality that enable multiple users to interact with a chatbot at the same time. Where a site or app allows users to upload or create their own generative AI chatbots - 'user chatbots' - which are also made available to other users, it is also a U2U service.
Ofcom's letter emphasised that services allowing users to create chatbots that mimic the personas of real and fictional people, which can be submitted to a chatbot library for others to interact with, are U2U services. Any text, images or videos created by these 'user chatbots' are user-generated content and are regulated by the OSA.
Indeed, any AI-generated content shared by users on a U2U service is regulated identically to human-generated content. For example, deepfake fraud material is regulated no differently to human-generated fraud material, regardless of whether that content was created on the platform where it is shared, or has been uploaded by a user from elsewhere.
Regarding search functionality, generative AI tools that modify, augment or facilitate the delivery of search results on an existing search engine, or that provide 'live' internet results to users on a standalone platform, would be considered search services regulated by the OSA. For example, in response to a user query about health information, a standalone generative AI tool might serve up live results drawn from health advice websites and patient chat forums.
Ofcom also noted that sites and apps that include generative AI tools that can generate pornographic material are regulated under the OSA, requiring highly effective age assurance to ensure children cannot normally access pornographic material.
Detailed analysis: chatbots and the OSA
A more detailed analysis of how chatbots interact with the OSA framework reveals three distinct regulatory contexts:
- users who generate content using chatbots (eg integrating customer service into social media)
- social media or search services that integrate chatbots into their service to respond to or interact with users, and
- the chatbot as a freestanding service, such as ChatGPT, with a further distinction between platforms like Character.AI or Replika Pro that allow users to create or personalise their own chatbots, compared to those that don't.
User-controlled chatbots
For user-controlled chatbots on regulated U2U services, the OSA addresses whether chatbot-generated content constitutes user-generated content. Section 55(4)(a) of the Act specifies that the reference to content generated, uploaded or shared by a user includes content generated, uploaded or shared by means of software or an automated tool applied by the user. This would cover chatbots - and the test is the application of the tool, not the control of the tool. So users 'hiring' chatbots from third-party providers would seem to be covered too.
As regards content harmful to children, chatbot outputs will be treated as if generated by a human. The question is whether the outputs fall within either of the categories of primary priority content or priority content, or satisfy the test in s60(2)(c) for non-designated content harmful to children. This question focusses on the impact of the content on children rather than who (or what) created the content.
However, there is a question about illegal content, notably the impact that the requirement to assess the mental element of crimes has (see s192(6)). Can chatbots have the requisite mental element or can we ascribe their actions to their provider or user? The OSA does not expressly deal with this question, though it does note at s59(12) that references to conduct of particular kinds are not to be taken to prevent content generated by a bot or other automated tool from being capable of amounting to an offence. This provision does not go as far as to say that the mental element can be satisfied by a bot - it focusses on the action part of the crime, not the mental element.
In terms of whether it could be reasonable to infer that the requirements of an offence have been satisfied (s192), it might be more convincing to ascribe intent where the chatbot is rules-based rather than when it is AI-based and potentially less predictable in its responses. The significance of this under the Act is that if you cannot satisfy the definition of illegal content, then you do not have content in relation to which the regulated services are required to act.
Integration into regulated services
As noted above, where a site or app includes a generative AI chatbot that enables users to share text, images or videos generated by the chatbot with other users, it will be a U2U service. However, there is a question as to whether the output of provider-controlled chatbots constitutes regulated content that triggers the safety duties - and this is true whether we are talking about Meta's AI-driven Instagram accounts or the summary a virtual assistant might produce when asked a question.
As regards U2U services, the service provider only has to take action in relation to user-generated content; the output of provider-controlled accounts - whether driven by chatbots or people - is not user-generated content. So while the existence of the tool may fall within the scope of a risk assessment, does it lead to content about which a service provider must take action? The relevant risks are those arising from illegal content (s59(14)) and content harmful to children (s60(6)), both of which are limited to regulated user-generated content.
The scope of obligations for search services is defined differently. Search results "means content presented to a user of the service by operation of the search engine in response to a search request made by the user". It is not limited to the replication of third-party content. The limitation in s59(14) in relation to U2U services does not apply to search - though the question of the extent to which chatbot content can be criminal content is still open.
Freestanding chatbots
These come in a range of forms, so the analysis may be category-specific. Certainly it would seem as if something like ChatGPT search could be a search engine for the purposes of the OSA. The definition of a search engine in the OSA (s229) is somewhat circular (a search engine allows you to search) and seems more focussed on distinguishing between search functions within a website and general search services.
As regards other sorts of chatbot, the question would seem to be: can other users encounter the content you have generated? Ofcom noted that where a site or app allows users to upload or create their own generative AI chatbots - 'user chatbots' - which are also made available to other users, it is also a U2U service. For something like Replika (which allows a user to engage one-to-one with an AI girlfriend), the answer would be 'no' - analogous to games where users do not encounter other users. However, services like Girlfriend GPT that allow users to share characters would seem to make the underlying platform a U2U service. And insofar as chatbot content is made public - as Meta made the conversations of users of Meta AI public - the answer would seem to be 'yes', though there may be questions as to what counts as regulated content.
The definition of user-to-user service requires the possibility for other users to encounter that content whether or not the uploading/sharing user intended for that to happen and whether or not other users do actually encounter the content.
Mitigations and practical challenges
It seems in principle that at least some chatbots and their output could be caught by the OSA, and that mitigations will need to be applied as for other sorts of risks and content. In this context, the 'small print' attached to AI outputs (eg warnings about accuracy) will be insufficient - the entire range of relevant obligations will apply (in particular the obligation relating to age verification for primary priority content). Ofcom noted the need for an effective takedown system and an appropriate complaints mechanism. These, however, are not particularly tailored to the chatbot context, and are almost certainly insufficient. Some consideration could be given to whether mechanisms deployed in livestream contexts could be appropriate here.
Regulatory uncertainties and future enforcement
While Ofcom has addressed the applicability of the OSA to generative AI services through its guidance documents and open letter, significant regulatory questions remain that will likely require further clarification through enforcement action or additional guidance. Some chatbots and their outputs will be caught within the OSA regime but, despite the government's encouragement of Ofcom to be proactive in the face of new technologies, particularly AI, the coverage appears incomplete, and some unanswered technical questions may affect the extent of protection.
One particularly complex issue concerns whether some generative AI services have control over the search engine functionality that underpins the service. This question is crucial because it determines whether a service should be classified as a search service, with attendant regulatory obligations, or whether it falls outside this category. The distinction becomes especially nuanced for generative AI tools that retrieve and synthesise information from across the internet. Do these services exercise sufficient control over the search functionality to be considered search service providers, or are they merely users of third-party search engines? The answer has significant implications for regulatory compliance, as search services face specific duties around illegal content in search results and the protection of children from harmful content.
There are also boundary issues - for example, ChatGPT now has a search function: does this make it a search engine? And where would any boundary between search and chatbot lie? These questions remain largely unanswered and create uncertainty for service providers seeking to understand their obligations.
While the regulatory landscape for generative AI under the OSA will continue to evolve through a combination of guidance, industry engagement, and strategic enforcement action, service providers operating in this space should pay close attention to Ofcom's developing position and be prepared for potential test cases that may establish important precedents about how the OSA applies to novel AI functionalities. There may even be scope for chatbot-specific obligations.
Providers of generative AI services would be well-advised to engage proactively with Ofcom, conduct thorough assessments of their services against the OSA's definitions, and implement robust safety measures that can adapt to evolving regulatory expectations. As the regulatory framework matures through practical application, greater clarity will emerge about the boundaries of the OSA's application to generative AI - but this clarity may come through enforcement action as much as through guidance.