18 January 2022
The Joint Parliamentary Committee on the Draft Online Safety Bill has recommended significant changes to the draft Online Safety Bill published by the government in May 2021.
The Bill as currently drafted would create very significant new responsibilities for services which host user-generated content and for search services. For more information, see our set of articles here.
If the Committee's recommendations were to be adopted by the government into a revised draft Bill, those responsibilities would apply to more services, content and activity, and would be more exacting and subject to more scrutiny. Some of the recommendations would require services to redesign the very features that make them popular with and convenient for users.
The tenor of the report is that online services have potential for significant and widespread harm and, before they can consider the good they can achieve, they have to be designed to avoid harms and then continually take action to prevent harmful content and activity arising. In a concept which runs through the report, the Committee's view is that service providers should be "liable for failure to take reasonable steps to combat reasonably foreseeable harm resulting from the operation of their services".
There's no guarantee as to what may happen with the recommendations but it is worth services carefully considering the direction of travel and bearing the recommendations in mind in any public policy engagement on the Bill.
Even without any change to the Bill, the recommendations may indicate the approach services should expect to be followed in secondary legislation defining illegal and harmful content and/or by Ofcom in its risk assessments and Codes of Practice.
This note brings together the key themes of the Report and suggests areas where the Committee may have fallen into error.
The Report recommends that the Bill be restructured to reflect its key objectives, namely to ensure that services (among other things):
This restructuring is not simply a matter of form – the stated objectives shape services' key responsibilities as can be seen in the Report's other, more detailed, recommendations.
The Bill currently covers "user-to-user" services (in which "user-generated content", ie content that is generated, uploaded or shared by a user, may be encountered by another user) and search engines. For user-to-user services, it is user-generated content that is subject to the safety duties. Broadly speaking, that has meant that services like Facebook are in scope whereas services like Netflix should be out of scope.
The Report does not expressly recommend that this approach be entirely replaced. However, there is some suggestion that it is not the demarcation between UGC and provider-controlled content which should determine the content which is in scope but, rather, whether the content (and related activity) could cause harm. If that is right, then the Committee is implicitly recommending a significant extension in scope of the Bill. At the very least, the Report suggests that the largest online pornography providers whose services don't have user-to-user elements should still be in scope of safety duties in respect of children.
Services likely to be accessed by children
The Report also aligns, for the purposes of protecting children, the services in scope of the Bill with those in scope of the Age Appropriate Design Code (or Children's Code). This implies that all services likely to be accessed by children (as defined by the Age Appropriate Design Code, ie those that are more likely than not to be accessed by children, including services accessed by users of all ages) are in scope of the Bill regardless of their status as user-to-user services.
The Committee concludes that all statutory requirements on user-to-user services, for both adults and children, should also apply to Information Society Services likely to be accessed by children. This represents a significant widening of the services in scope beyond that envisaged by the government since its initial Online Harms White Paper published in Spring 2019, which could have unintended consequences if adopted.
Paid-for advertising
The Committee wants paid-for advertising to come in scope of the Bill. While it would remain for the ASA to regulate the ads (and advertisers), Ofcom would be responsible for acting against service providers who consistently allow paid-for advertisements that create a risk of harm to be placed on their platforms (including ads which contain misinformation and disinformation in breach of ASA rules).
Advertising doesn't obviously fit into the definition of user-generated content (not being generated, uploaded or shared by a user of the service) but this is another indication that the Committee's view is that the scope of the Bill should go beyond user-generated content on user-to-user services.
Categorisation of services
The Bill currently envisages that Ofcom will categorise services (into Categories 1, 2A, 2B and those not in a specified category) depending on their number of users, functionalities and other factors specified by the Secretary of State. The duties that apply will depend on the category – for example, the current duties in relation to "lawful but harmful" content and protecting content of democratic importance apply only to Category 1 services.
The Report recommends an overhaul of that categorisation, adopting a more nuanced approach based not just on size and high-level functionality, but also on factors such as risk (including based on algorithmic design, engagement/sharing features, surveillance advertising, and end-to-end encryption), reach, user base, business model, and previous history of complaints, proceedings, or poor safety performance (including on the basis of independent research or whistleblowers' evidence).
The duties that apply to a service would continue to depend on its categorisation, but it appears that all services, rather than just the largest, would have obligations in respect of content that is legal but harmful to adults, as well as active obligations to protect freedom of expression in relation to certain kinds of content.
The Committee's view is that the Bill's focus on harmful content is incomplete, and it should also cover the activities, design and operation of services. For the Committee, online safety is not just about moderating content but also about the design of platforms and the ways people interact with content. Its view is that safety by design is critical to reducing the prevalence and reach of harmful online activity. And the activity the Committee is most concerned about is the propensity for services to facilitate and bring about sharing and spread of harmful material.
The Committee reaches this conclusion in part because of its belief that services prioritise engagement and that potentially harmful content is more engaging. So it's not just the harmful content which is the evil to be cured, but also the design of the services which leads to that harmful content being over-presented to users.
The Report highlights how the same (algorithmic) technology which allows a joke to go viral would allow COVID-19 disinformation to go viral. Platforms should therefore proactively tackle these design risks, identifying them and putting in place systems and processes to mitigate them, rather than relying on the last resort of taking down the harmful content. The result is that the Committee sees as inherently risky many of the features of user-to-user services that make them user-friendly, convenient and appealing to users.
The Report recommends that services should have a responsibility to have in place systems and processes to identify reasonably foreseeable risks of harm arising from the design of their platforms and take proportionate steps to mitigate those risks.
Risks and possible mitigation measures may include:
Specifically in relation to disinformation, for example, the Committee recommends content-neutral safety by design requirements, set out as minimum standards in mandatory Codes of Practice. The Committee believes that changes to introduce friction in sharing could play a vital part in tackling activity which creates a risk of societal harm and recommends that Ofcom should use its Safety by Design Code of Practice to address the spread of misinformation.
It seems that, in the view of the Committee, the risk of the harmful content going viral is sufficiently serious to justify restricting the potential for harmless content going viral; the measures outlined above would all inevitably slow down the sharing of perfectly harmless content while also negatively impacting user experience. The Committee seems to presume that recommender algorithms, or sharing tools, are intrinsically harmful, without acknowledging the possibility that they could be trained/adjusted to reduce the likelihood that they would allow the recommendation or sharing of harmful content.
The Bill currently has a very complicated approach to identifying what is illegal content, and leaves much to the Secretary of State to identify relevant offences. The Committee, rightly, believes this is too dependent on his or her discretion.
Instead, the Committee prefers for criminal activity which can be committed online to appear on the face of the Bill as illegal content, and this would be the starting point for regulating potentially harmful online activity. This would include hate crime offences, the offence of assisting or encouraging suicide, the new communications offences recommended by the Law Commission, offences relating to illegal extreme pornography and soon-to-be-introduced electoral offences. The Secretary of State would then have a restricted power to add to the definition of priority illegal content in exceptional circumstances.
The Committee also wants to include fraud offences as a "relevant offence" to which the illegal content duties apply, alongside terrorism and CSEA offences. Platform operators would therefore be required to be proactive in stopping fraudulent material from appearing in the first instance, not simply removing it when reported.
Content harmful to adults
The Committee criticises the Bill's definition of content that is harmful to adults (in respect of which Category 1 services currently have safety duties) – ie content giving rise to a material risk of significant adverse physical or psychological impact on an adult of ordinary sensibilities. It is open-ended and requires service providers to make difficult judgments. The Committee is concerned about the concept's implications for free speech, its susceptibility to legal challenge and its effectiveness in tackling harm.
It therefore recommends removing the clause of the Bill which imposes safety duties in relation to content harmful to adults. It would be replaced by the type of general duty, already mentioned, to have in place proportionate systems and processes to identify and mitigate reasonably foreseeable risks of harm arising from "regulated activities" defined under the Bill.
There would then be definitions referring to specific areas of law that either apply in the offline world (but have not yet been extended to the online world) or have been specifically recognised in other contexts as legitimate grounds to interfere with freedom of expression. According to the Report, these could include abuse based on protected characteristics under the Equality Act 2010, threatening communications, disinformation likely to endanger public health and content and activity that promotes eating disorders and self-harm. The Secretary of State would have a restricted power to amend this list. The Report doesn't seem to suggest that this duty (applying in relation to legal but harmful content) would be limited to the largest services, as the Bill does.
Content harmful to children
Similar to its recommendations in respect of harmful content generally, the Committee also recommends a tightening of the definition of content that is harmful to children. Content would be caught if it is specified in the Bill, or in regulations, or if there is a reasonably foreseeable risk that it would be likely to cause significant physical or psychological distress to children who are likely to encounter it on the platform.
The Committee recommends key known risks of harm to children be included in the Bill. The Report envisages that these may include:
The current draft Bill envisages that Ofcom will produce Codes of Practice containing recommended steps for compliance. Compliance with the Codes would not be mandatory but would provide a compliance "safe harbour".
The Report instead recommends that there should be mandatory Codes of Practice in place in relation to:
In some circumstances these would be supported by binding minimum standards, for example as to the accuracy and completeness of risk assessments, age assurance technology, transparency reporting, and complaint and redress mechanisms.
The Report also recommends that Ofcom should begin drawing up risk profiles based on the characteristics of services including:
Should service providers be obliged to report on illegal content? It is currently a general principle of UK criminal law that people who come across the commission of a crime are not obliged to report it (with exceptions for terrorism and money laundering offences), although a service provider may choose, for example, to report CSEA imagery to the Internet Watch Foundation or the police. The Report seems to suggest that that principle should be subject to a horizontal exception, across all offences, requiring providers to report offences.
The Committee recommends that the highest risk service providers be required to archive and securely store, for a set period of time, all evidence of content removed from online publication, unless to do so would in itself be unlawful. It is unlawful in most circumstances, for example, to possess CSEA imagery once on notice of it. In those cases, service providers should store records of having removed the content, its nature and any referrals made to law enforcement.
Since the first government proposal on online harms, each subsequent proposal has purported to ratchet up the protection for freedom of expression but has not succeeded in reconciling its fundamental incompatibility with the safety duties to be imposed on services. This Report continues that trend and creates further tension for services between their safety duties and duties to protect freedom of expression.
What duties should be owed in relation to lawful but harmful content and how they sit with protection of freedom of expression has always been one of the most difficult issues to address.
The Report makes recommendations that purport to protect freedom of expression. For example, it proposes tighter definitions around content that creates a risk of harm. The intention is to narrow the requirements in relation to this content by basing them on those which already apply to content in relation to which society has recognised that there are legitimate reasons to interfere with free speech rights.
This approach would remove the broad delegation of decisions on what is harmful from service providers. The Report also proposes a greater emphasis on safety by design, stronger minimum standards and mandatory Codes of Practice set by Ofcom (again to reduce the burden on services having to make difficult judgment calls), and a strengthening of protections for content originating from news publishers.
Protection of 'public interest' content
The Bill currently imposes duties on Category 1 services to protect content of democratic importance and journalistic content (outside of this context, services only have a duty to have regard to users' right to freedom of expression). The Committee disagrees with the approach the Bill takes to those terms. Instead, it recommends replacing their protections with a single statutory requirement to have proportionate systems and processes to protect content where there are reasonable grounds to believe it will be in the public interest. Examples of content likely to be in the public interest would be journalistic content, contributions to political or societal debate and whistleblowing. That is a narrow list (compared to, for example, section 4 of the Defamation Act 2013) but may be broadened by Ofcom in its Code of Practice.
This protection for content in the public interest is a useful counterweight to the freedom of expression risks posed by the general safety duty but is not without its problems. It will inevitably clash with the requirement to prevent harmful content and activity; providers will have to resolve that clash because they will need to avoid unjustified take down of content likely to be in the public interest.
Moreover, a material, but unspoken, risk runs through the Report that an unavoidable consequence of complying with the safety duties will be to stop the wide and free dissemination of content which is not harmful, illegal or in the public interest (as narrowly defined in the Report). The possible restrictions on this 'everyday' content demonstrate the potentially disproportionate approach adopted by the Committee; preventing this collateral damage will be a key concern for services in complying with the Bill and should, it is suggested, be a key concern for Ofcom in drafting its Codes of Practice and enforcing the duties. The government should also tackle this issue if it is to redraft the Bill, otherwise the Committee's acknowledgement of the importance of freedom of expression could amount to little more than lip service.
To take down or keep up?
As a general matter, the takedown or keep up decision is very difficult. Take the terrorism offences, for example: they are spread across multiple pieces of legislation, are complicated and require specialist knowledge. Decisions about whether or not a piece of content amounts to one of these offences can be finely balanced and require detailed review of the content. What is a provider to do? Take the 'safety first' approach and remove the content without doing a detailed review, or keep the content up and rely on its obligation to respect freedom of expression – which is to take priority? In other words, which compliance failure would have less of a downside?
The Committee's apparent answer is to:
This provides very little comfort or direction in individual cases.
Appeals and redress against takedown
The Report recognises that users whose content has been removed should be able to go through an appeals process, and recommends that Ofcom provide guidance on how appeals can be swiftly and fairly considered. It also notes that cases of systemic, unjustified takedown of content that is likely to be in the public interest would amount to a failure to safeguard freedom of expression as required by the objectives of the legislation.
The Committee recommends the establishment of an external redress process available to someone who has been banned from a service or who has had their posts repeatedly and systematically removed; this would be an additional body to which those decisions could be appealed once a service provider's internal process has been exhausted.
The Committee also recommends creating an Online Safety Ombudsman to consider complaints about actions by higher risk service providers where either moderation or failure to address risks leads to significant, demonstrable harm (including to freedom of expression) and recourse to other routes of redress has not resulted in a resolution. It would be open, for example, to users who have had their content repeatedly taken down.
Finally, the Committee thinks that users should have a right of redress in the courts via a bespoke route of appeal to allow users to sue providers for failure to meet their obligations under the Bill.
Are services therefore now subject to a private law duty, owed to their users, to respect their freedom of expression, breach of which could create a cause of action and leave them susceptible to litigation? This seems to be what the Committee is recommending. It asserts as a fact that service providers’ user complaints processes are often obscure, undemocratic, and without external safeguards to ensure that users are treated fairly and consistently.
The Committee then asserts that it is only through the introduction of an external redress mechanism that service providers can truly be held to account for their decisions as they impact individuals. The Committee fails to recognise that what puts the services most at risk of failing to protect freedom of expression is the very extensive safety duties imposed on them – these are conflicting objectives that will be difficult or impossible for services to achieve at the same time.
It also appears to impose a requirement on services to host content they may not wish to, for whatever reason, even if it is in the public interest. That is a very novel proposition. Imagine a newspaper being compelled by a regulator to report on an issue of public importance or to publish content by a particular columnist; it wouldn't happen. But that appears to be effectively what the Committee wants to bring about: providers should be obliged to host any content provided it is in the public interest and not unlawful or harmful.
We suggest that the Committee and government should think again on this point and the approach taken in the Bill (and Report) to protecting freedom of expression generally.
In various places, the Report recommends that services should be subject to greater transparency requirements including:
The Committee recommends that the Bill should require that companies’ risk assessments be reported at board level, to ensure that senior management know and can be held accountable for the risks present on the service, and the actions being taken to mitigate those risks.
It also recommends that a senior manager at board level or reporting to the board should be designated the “Safety Controller” and made liable for a new offence: the failure to comply with their obligations as regulated service providers when there is clear evidence of repeated and systemic failings that result in a significant risk of serious harm to users.
In addition to the role of Online Safety Ombudsman (see 'Freedom of Expression' above), the Report recommends that a Joint Committee of both Houses of Parliament oversee regulation in this area, including the work of Ofcom and the Secretary of State. It seeks to limit the powers of the Secretary of State to interfere in Ofcom's work, for example by modifying Codes of Practice.
The Report also supports the introduction of criminal sanctions, including for the individual taking the role of "Safety Controller" within an in-scope organisation.
It remains to be seen how influential this Report will prove. If the government takes some of the more significant recommendations on board, the Bill will inevitably be delayed. Either way, the Bill's passage through Parliament is likely to prove contentious. With much of the detail around implementation still to be decided in Codes of Practice (perhaps even more so if the Committee's recommendations are adopted), much remains unclear – not least how relevant service providers should prepare for compliance and which service providers are in scope.
By Adam Rendle and Xuyang Zhu