Control of mis- and dis-information is a highly contentious issue. While their proliferation online can cause serious real-world problems - as graphically illustrated by the riots last summer following the tragic Southport mass stabbings - trying to define what they are, and to balance any restrictions against free speech concerns, is fraught with difficulty.
The UK and EU have approached this issue differently - although neither has gone so far as to impose a general ban on such information - and meanwhile the US accuses the UK and EU of "trampling on democracy" and being a "hotbed of digital censorship". So what are the rules in the UK and EU in this area, how do they differ from each other, and will they help in mitigating the harms they are intended to address?
The UK position
Ofcom, the UK media regulator, has recently established the slightly Orwellian-sounding ‘Online Information Advisory Committee’ to provide it with advice about areas of Ofcom's work relating to mis- and dis-information. In particular, the Committee has a statutory remit to advise on how regulated providers under the Online Safety Act 2023 (OSA) should deal with this type of information, and on the exercise of Ofcom’s power to require information from regulated services in annual “transparency reports”. These reports must cover, among other things, “content that terms of service indicate is prohibited on the service or the user's access to the kind of content is restricted” (excluding consumer content, meaning content that constitutes an offer to buy or sell goods or services or that constitutes a consumer offence).
In October last year, Peter Kyle, the Secretary of State for Science, Innovation and Technology, in an open letter to Ofcom’s Chief Executive about the implementation of the OSA, referred to the issue of misinformation in the context of the summer riots in the following terms:
“One of the most alarming aspects of this unrest was how quickly and widely content spread. In light of this, I would appreciate an update from you on the assessment Ofcom has made about how illegal content, particularly disinformation, spread during the period of disorder; and if there are targeted measures which Ofcom is considering for the next iteration of the illegal harms code of practice in response. I also want to emphasise the importance of the Advisory Committee on Disinformation and Misinformation that Ofcom are establishing under the OSA. I look forward to hearing about Ofcom’s progress with the committee and what its key areas of focus are likely to be following the events of this summer".
However, the Secretary of State appears to have thought that the ambit of the OSA in this area is broader than it is. Control of harmful but legal content was originally proposed in the Online Harms White Paper but later dropped after strong lobbying from politicians, the media and civil society groups concerned over the impact of such restrictions on freedom of expression. As a result, Ofcom’s powers over mis- and dis-information are limited, particularly for adults.
Regulated services have safety duties in relation to illegal content on their services (s10). Under s59, illegal content is content that amounts to a “relevant offence”, which includes any offence (including those created by the OSA) where the victim or intended victim is an individual or individuals (with some limited exceptions). The offences potentially relevant to mis- and dis-information are:
- fraud by false representation
- stirring up of hatred or inciting violence offences
- misleading statements or impressions in relation to financial services, and
- the new OSA offence of false communication (s179).
The new false communication offence is often the most relevant. It prohibits sending, or causing to be sent, a message (electronic or physical) that the sender knows to be false, intending the message, or the information in it, to cause non-trivial psychological or physical harm to a likely audience (judged by what is reasonably foreseeable), where there is no reasonable excuse for sending the message. A "recognised news publisher" cannot commit the false communication offence, nor can licensed broadcasters or on-demand programme services (s180).
The illegal content safety duties have to be balanced under the OSA with the freedom of expression duties (s22) and the duties to protect content of democratic importance (s17). All providers “must have particular regard to the importance of protecting users’ right to freedom of expression within the law” when deciding on, and implementing, safety measures and policies, including in relation to illegal content (s22(2)). Category 1 (larger) providers have a duty to operate their services in a way that ensures that the importance of the free expression of content of democratic importance is taken into account when making such decisions.
In addition to potential regulatory intervention for over- or under-moderating illegal content, providers can be sued by their users for breach of contract if content the users generate, upload or share is taken down, or access to it is restricted, in breach of the terms of service, or if they are suspended or banned from using the service in breach of those terms (s72(1)).
Ofcom has issued guidance for regulated services on how they should approach their responsibilities in relation to the false communication offence. This is in Annex 10 of its ‘Protecting People from Online Harms, Online Safety Guidance for Judgement for Illegal Content’ (starting at A13.19). It is quite illuminating as to the problems and limitations of this offence (and fairly hard to find!), so worth quoting from at some length.
It first acknowledges that in relation to ‘state of mind’:
“It will not be possible for services to identify all instances of content amounting to this offence. A service will not always be in a position to know whether a user posting the content knows it is false and what the intent of the person is in making it".
It continues:
“However, in some cases, it will be appropriate for services to draw these inferences. When making an illegal content judgement, services will need to have reasonable grounds to infer that both the following are true:
- the user sending the message knew it was false
- the user sending the message conveying false information intended to cause non-trivial psychological or physical harm to a likely audience".
It “notes the issues around freedom of expression and the difficulty for services in determining falsity. However, we anticipate there are certain instances when services may be able to infer that the user posting the content knows the content to be false". In particular:
“Services should consider the following questions:
- Is the message actually false? If there are no reasonable grounds for the service to infer that it is, the content cannot be judged to be illegal.
- Is there evidence (either as part of the content or established through contextual information) to illustrate that the user posting the content knows the content is false?
- Is there evidence (either as part of the content or established through contextual information) that suggests that the user posting the content intends to cause non-trivial psychological or physical harm?”
It then concludes that “If the questions above are answered in the affirmative, then it is likely that the service will have reasonable grounds to infer that the content is illegal content for the purposes of the Online Safety Act".
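Purely as an illustration, and not as a substitute for the guidance itself, those three questions can be read as a simple decision procedure. The sketch below uses Python with hypothetical names of my own; it does no more than combine the answers in the way the guidance describes, and it leaves the separate freedom of expression and democratic importance duties out of scope.

```python
from dataclasses import dataclass

@dataclass
class FalseCommunicationSignals:
    """Hypothetical record of a moderation team's answers to the Ofcom questions."""
    message_inferred_false: bool    # reasonable grounds to infer the message is actually false
    evidence_user_knew_false: bool  # content or contextual evidence the poster knew it was false
    evidence_intent_to_harm: bool   # evidence of intent to cause non-trivial psychological or physical harm

def reasonable_grounds_to_infer_illegal(signals: FalseCommunicationSignals) -> bool:
    """Illustrative reading of the guidance on the s179 false communication offence.

    If the message cannot be inferred to be false, the content cannot be judged
    illegal; otherwise all three questions must be answered in the affirmative.
    """
    if not signals.message_inferred_false:
        return False
    return signals.evidence_user_knew_false and signals.evidence_intent_to_harm
```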
In addition, “reasonable grounds to infer that content amounts to a false communications offence will also exist where information justifying this inference has been made available to services by law enforcement or a court order (except where the service has evidence to suggest the contrary)". In this regard, it was interesting to see how quickly the police issued a statement about the driver of the car involved in the recent incident in Liverpool resulting in multiple injuries.
The Guidance frankly acknowledges that “We anticipate that it will be challenging for service providers to make these judgements based on content alone". This must be right. The Guidance also addresses only whether an offence has been committed; it says nothing about whether, and if so how, the freedom of expression and democratic importance duties might come into play when deciding whether to remove content that appears to amount to an offence.
Finally, the Guidance cautions that “This offence is not intended to capture all ‘fake news’. Misinformation – that is, misleading or untrue information which is shared by a user who genuinely believes it to be true – is not captured by this offence".
This has led the Mayor of London, among others, to say, in the wake of last summer’s riots, that the OSA in this area is “not fit for purpose”. The government has also said that it stands ready to make changes if necessary, with the PM warning that “we are going to have to look more broadly at social media after this disorder”.
The false communication offence clearly does not cover reckless or negligent dissemination of false information. It is therefore very hard to see how a case under s179 could be successfully brought against, for example, Bernie Spofforth, who was accused of posting, to her more than 50,000 followers, the first post - which then went viral - wrongly suggesting that the Southport attacker was a recently arrived asylum seeker. She stated at the time that if the details were true, “all hell is about to break loose”. She later stated, “I did not make it up. I first received this information from somebody in Southport”. To come within the offence, in addition to establishing knowledge of falsity, it would also be necessary to show an intention to cause harm; and to fall within the illegal content duties under the OSA, the victim or intended victim of the offence would need to be an individual or individuals. There would also need to be an assessment of whether the post should nonetheless be permitted on freedom of expression grounds or as content of democratic importance.
It is also difficult to see how any other offence is likely to apply to this type of activity. The offence of fraud by false representation under s2 of the Fraud Act 2006 requires the dishonest making of a false representation with an intention to make a gain for the perpetrator or another, or to cause loss to another - not generally factors in typical misinformation cases. While the use of threatening, abusive or insulting words, including in written material online, is unlawful under s18 of the Public Order Act 1986, it requires proof of an intention to stir up racial hatred (defined as hatred against a group of persons, not individual(s)), or that, in all the circumstances, racial hatred is likely to be stirred up. The similar offences of stirring up religious hatred or hatred on the grounds of sexual orientation are subject to freedom of expression savings, which provide that nothing in them prohibits or restricts "discussion, criticism or expressions of antipathy, dislike, ridicule, insult or abuse of particular religions or the beliefs or practices of their adherents" (s29J) or of "sexual conduct or practices or the urging of persons to refrain from or modify such conduct or practices" (s29JA). The CPS guidance on race crimes states that:
"Stirring up hatred means more than just causing hatred, and is not the same as stirring up tension. It must be a hatred that manifests itself in such a way that public order might be affected. The offences that have been successfully prosecuted go well beyond the voicing of an opinion or the causing of offence. When considering whether or not to prosecute stirring up offences, there is a need to bear in mind that people have a right to freedom of speech. It is essential in a free, democratic and tolerant society, people are able to exchange views, even when these may cause offence. The issues involved in such cases are highly sensitive and charges for stirring up hatred require the consent of the Attorney General in addition to the consent of the Crown Prosecution Service".
Following an investigation (which was itself subject to criticism), no further action was taken against Ms Spofforth due to insufficient evidence (despite the false communication offence having been in force since 31 January 2024).
So there is a gap in UK law in relation to non-commercial statements made with a genuine belief in their truth, even if that belief has been arrived at negligently or recklessly, and which do not target specific groups on racial, religious or sexual orientation grounds so seriously that public order might be affected (at least outside certain content on regulated broadcast services and in the press). The Ofcom Broadcasting Code requires due accuracy and impartiality in news (5.2) and due impartiality on matters of political or industrial controversy and matters relating to current public policy (5.5). It also requires "generally accepted standards" to be applied to the content of television and radio services (2.1), and that factual programmes or items or portrayals of factual matters must not materially mislead the audience (2.2). The IPSO Editors' Code of Practice requires the Press to "take care not to publish inaccurate, misleading or distorted information or images" (1(i)).
What about the DSA?
It is interesting to compare the position in the UK under the OSA with that in the EU under the Digital Services Act 2022 (DSA).
The preamble to the DSA (paragraph 9) states its objective of "ensuring a safe, predictable and trusted online environment, addressing the dissemination of illegal content online and the societal risks that the dissemination of disinformation or other content may generate".
A particular category of systemic risk is identified as "relating to the design, functioning or use, including through manipulation, of very large online platforms", including risks stemming from "coordinated misinformation campaigns related to public health" (recital 83). The preamble continues (at recital 84) that providers of such platforms should "pay particular attention to how their services are used to disseminate or amplify misleading or deceptive content, including disinformation", and that "where the algorithmic amplification of information contributes to the systemic risks, those providers should duly reflect this in their risk assessments".
Examples of specific risks include "the creation of fake accounts, the use of bots or deceptive use of a service, and other automated or partially automated behaviours, which may lead to the rapid and widespread dissemination to the public of information that is illegal content or incompatible with an online platform's … terms and conditions and that contributes to disinformation campaigns".
These underlying policy objectives are reflected in the DSA itself, Article 34 of which requires very large online platforms to "diligently identify, analyse and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services", and to carry out such a risk assessment annually and prior to deploying functionalities "that are likely to have a critical impact on the risks identified". The assessment must also analyse whether and how the identified risks "are influenced by intentional manipulation of their service, including by inauthentic use or automated exploitation of the service, as well as the amplification and potentially rapid and wide dissemination of illegal content and of information that is incompatible with their terms and conditions". Article 35 requires providers to put in place "reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks identified".
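To make the structure of Articles 34 and 35 a little more concrete, the following sketch (again Python, with hypothetical names, and not drawn from any platform's actual compliance tooling) models the kind of risk register a very large platform might maintain: each identified systemic risk records whether it stems from manipulation or algorithmic amplification, and maps to the mitigation measures addressing it.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SystemicRisk:
    """Hypothetical Article 34 risk register entry (illustrative only)."""
    description: str                      # e.g. "coordinated inauthentic amplification of disinformation"
    from_intentional_manipulation: bool   # inauthentic use or automated exploitation of the service
    from_algorithmic_amplification: bool  # recommender systems contributing to rapid, wide dissemination
    mitigations: list[str] = field(default_factory=list)  # Article 35 measures addressing this risk

@dataclass
class RiskAssessment:
    """One assessment cycle: annual, or prior to deploying functionality with critical impact."""
    carried_out_on: date
    trigger: str                          # "annual" or "new functionality"
    risks: list[SystemicRisk] = field(default_factory=list)

    def unmitigated_risks(self) -> list[SystemicRisk]:
        # Risks still lacking any mapped mitigation measure.
        return [r for r in self.risks if not r.mitigations]
```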
Article 45 of the DSA provides for the Commission and the European Board for Digital Services to encourage and facilitate the drawing up of "voluntary codes of conduct at Union level to contribute to the proper application" of the Regulation. In the case of "systemic failure to comply with the codes of conduct", signatories may be "invited" by the Commission and the Board "to take the necessary action". Established in 2018, the Code of Practice on Disinformation was significantly strengthened in 2022, and on 13 February 2025 the Commission and the Board endorsed its integration into the framework of the DSA as a Code of Conduct on Disinformation.
It proclaims itself to be a "pioneering framework to address the spread of disinformation, agreed upon by a number of relevant stakeholders". The 34 signatories include many of the major tech players, such as Google, Meta, Microsoft and TikTok. It is said to have "proven to be an effective tool to limit the spread of online disinformation, including during electoral periods and to quickly respond to crises, such as the coronavirus pandemic and the war in Ukraine", and contains 44 commitments and 128 specific measures. These cover demonetisation, transparency, integrity of service, and the empowerment of users, researchers and the fact-checking community.
So the DSA does not ban misinformation unless it constitutes illegal content under national or EU law. Rather, it is focussed on processes: identifying systemic risks and putting in place effective mitigations. Unlike the OSA, it makes no attempt to create a specific misinformation offence. However, the DSA goes much further in putting in place both legal and voluntary obligations on very large platforms to take specific steps to deal with the dissemination of disinformation on their services.
What does this mean?
Both the OSA and DSA have strong sanctions, including the potential for very large fines calculated as a percentage of the global turnover of businesses in scope. It will be interesting to see over the next few years which approach is more successful in addressing the issue of mis- and dis-information.
As stated, the UK Government has already come under pressure to strengthen the OSA in this area. However, any attempt to expand its scope further, such as the creation of further specific offences, will undoubtedly run into the same forceful opposition as did previous attempts to legislate to control "awful but lawful" content directed at adults.
Although there would likely be reluctance to follow the EU approach, it may be more acceptable, and more practical and effective, for the UK also to focus more on process - identifying and mitigating specific risks relating to the dissemination of misinformation - rather than attempt to engage further in the almost impossible task of delineating acceptable from unacceptable lawful content.