13 April 2022
Radar - April 2022
In May 2021, the government published the long-awaited Online Safety Bill (2021 Bill). As we discussed, it applied to user-to-user services and search services and included a statutory duty of care on certain online providers to protect their users from harm. The 2021 Bill focused on user generated content and left much of the detail to be set out in supplementary Codes of Practice and secondary legislation. It was also criticised both for not doing enough to protect users from online harms, and for not doing enough to protect freedom of expression.
Nearly a year later, a revised version of the Online Safety Bill (OSB) has been laid before Parliament with a number of key changes and additions, which we discuss below.
This is an extremely long and complex piece of legislation which will be made all the more convoluted when the relevant Codes of Practice and secondary legislation are published. While more detail has been included on the face of the OSB than in the 2021 Bill, businesses will still need that additional information to help them prepare for compliance. This places a considerable burden on Ofcom, and there are few clues as to when it might publish draft Codes.
It is not at all clear that the government has worked out how to resolve the tension between limiting harmful content and protecting free speech, nor how to satisfactorily define what is meant by "harm". The scope of the OSB has widened beyond user generated content in some areas and new offences have been added to the face of the Bill, but many of the Joint Parliamentary Committee's recommendations on freedom of expression have not been adopted.
A risk-based approach is appropriate but difficult to manage, and compliance with the OSB will be a significant challenge for social media platforms and search engines. The OSB is expected to have its second reading in May, but it is unclear how long it will be before it is enacted and whether there will be significant changes during its passage through Parliament.
Here are some of the main features and changes since the first draft.
Scope and categorisation
The OSB continues to apply to user-to-user and search services with no material changes to the definitions of each (s2). The circumstances in which services are considered "regulated", including exemptions, similarly continue to materially track the 2021 Bill (s3 and Schedule 1).
The exception to this is that providers of pornographic content are also now brought within scope of the OSB.
The OSB retains the categorisation of services as Category 1, Category 2A and Category 2B, with the criteria for each still to be set out in secondary legislation.
Illegal and harmful content
Illegal content is now defined as content which (or the dissemination/possession/accessing of which) amounts to a relevant offence. The OSB removes the concept that illegal content consists of content which the provider of the service has reasonable grounds to believe amounts to a relevant offence (s52). Offences in relation to IP infringement and the safety or quality of goods are excluded, as are consumer offences under the CPUT Regulations.
Priority offences (other than terrorism and CSEA) are now defined in Schedule 7 – the list includes offences in relation to assisting suicide, threats to kill, fear or provocation of violence, harassment, stirring up hatred on grounds of race, religion or sexual orientation, stalking, putting people in fear of violence, racially or religiously aggravated public order offences, drugs offences, firearms and weapons offences, assisting unlawful immigration, sexual exploitation offences, sexual images offences, proceeds of crime offences, fraud and financial services offences, and related inchoate offences. The list does not at present include the new communications/cyberflashing offences introduced elsewhere in the OSB.
Categories of content harmful to children and to adults have not materially changed, save that non-priority content is now defined simply as "content … of a kind which presents a material risk of significant harm to an appreciable number of children/adults" (ss53 and 54). The definition no longer refers to the service having "reasonable grounds to believe" the content gives rise to risks. The government's press release was somewhat misleading in this regard – while priority content harmful to adults will be designated by the Secretary of State as under the 2021 Bill, it seems at present that services may still need to make some calls as to what non-priority content presents material risks.
"Harm" is now defined in s87 as including physical or psychological harm, including harm caused by individuals to themselves and by individuals to other individuals.
Safety duties
The illegal content and children's safety duties now apply to how a service is operated as well as to content present on a service. They therefore require services to take measures in relation to regulatory compliance and risk management arrangements, design of functionalities, algorithms and features, policies on terms of use, policies on user access to the service or content on the service (including blocking users from accessing the service or particular content), content moderation, functionalities allowing users to control the content they encounter, user support measures, and staff policies and practices (ss9 and 11).
This could be a result of the Joint Parliamentary Committee's focus on safety by design and its concern that the very features which make it easy to share content can themselves be harmful. The OSB does not, however, go as far as to say that services should be designed to prevent content from going viral, and the government's response to the Joint Parliamentary Committee does not flag this as a change made in response to the Committee's report.
With respect to safety duties protecting adults, changes largely relate to the details that need to be provided in terms of service (s13). There is also a new concept of user empowerment duties (s14) – Category 1 services have a duty to provide features that adult users can apply if they wish to increase their control over the likelihood of encountering harmful content and the labelling of such content, or to filter out non-verified users.
There is a new obligation for Category 1 services to offer all adult users (old and new) the option to verify their identity. There is no prescribed process and the verification need not require documentation to be provided (s57). This is connected to the user empowerment duty, noted above, to provide features allowing adults to filter out non-verified users.
There is also a new obligation for all services to report all detected CSEA content present on the service to the National Crime Agency (s59). Further details regarding this obligation will be set out in regulations made by the Secretary of State. It will be an offence to knowingly provide materially false information, punishable by imprisonment for up to two years or a fine.
Freedom of expression
The new OSB does not implement the wide-reaching recommendations regarding protection of freedom of expression made by the Joint Parliamentary Committee (which we discussed here).
All services still only have a duty to have regard to the importance of users' freedom of expression within the law. They are now also required to inform users in terms of service of their rights to bring a claim for breach of contract if their content is taken down or restricted in breach of the terms of service (s19), but there will be no other cause of action created in the OSB for individuals who consider their rights to freedom of expression have not been adequately protected.
Duties for Category 1 companies to protect content of democratic importance and journalistic content, and to carry out impact assessments and publish steps taken in response remain materially unchanged (ss15, 16 and 19).
There are only incremental additions to the situations in which services are required to apply a complaints procedure in relation to the exercise of the safety duties (eg where the use of proactive technology on the service results in a user's content being given a lower priority or where such technology is used in a way not contemplated by the terms of service, or where a user is unable to access content due to an incorrect assessment of their age) (s18).
Fraudulent advertising
New obligations have been introduced for Category 1 services to take measures to address fraudulent advertising. Category 1 services must take measures to prevent individuals encountering fraudulent ads, minimise the length of time for which such ads are present, and take down such ads when alerted to them or otherwise becoming aware of them (s34). Similar but slightly weaker obligations also apply to Category 2A search services.
A "Fraudulent advertisement" is defined as a paid-for advertisement that amounts to (or encourages, assists, etc) a specified offence under the Financial Services and Markets Act 2000, the Fraud Act 2006, or the Financial Services Act 2012.
Ofcom's role, enforcement and penalties
The OSB retains the concept that Ofcom will issue risk assessment profiles (s83) and detailed Codes of Practice on compliance with all aspects of services' obligations under the OSB (including as to freedom of expression, privacy, fraudulent advertising and adult user verification), and that compliance with the Codes will provide a safe harbour (s45). As part of this, Ofcom may include in a Code of Practice measures describing the use of proactive technology (Schedule 4, para 12).
Ofcom's investigatory powers substantially track the 2021 Bill (ss85-91), including (among other things) powers to require information, carry out investigations, and exercise powers of entry and inspection. Ofcom also now has powers to carry out audits (s91).
The OSB broadly retains the same enforcement regime as the 2021 Bill, including powers for Ofcom to issue confirmation decisions and penalty notices, although there are some revisions in relation to the detailed implementation of some of these powers.
Penalties
The OSB now expresses more clearly that penalties under a confirmation decision may take the form of a single amount, a daily rate penalty for continuing failure to comply with a notified requirement, or both (s117). This was contemplated by the 2021 Bill but not addressed in detail.
Maximum penalties are unchanged at the greater of £18 million or 10% of qualifying worldwide revenue (Schedule 12).
Offences
Information offences – in addition to the offences of failing to comply with an Ofcom information notice, knowingly/recklessly providing false information, or intentionally providing information that is encrypted such that it cannot be understood by Ofcom, it is now also an offence for a person to intentionally suppress, destroy or alter information (s92).
Liability of senior managers for failure to take reasonable measures to prevent commission of information offences remains substantially the same as set out in the 2021 Bill (albeit that the defence for not complying with an information order has been drafted more flexibly to apply where an individual was a senior manager for such a short time after the information notice was given that they could not reasonably have been expected to take steps) (s93).
It is now also an offence for a person to fail to comply with, or to intentionally obstruct or delay, Ofcom's exercise of its powers of entry, inspection and audit.
As noted above, it is now also an offence to knowingly provide materially false information in relation to reporting of CSEA content.
There is a new provision that corporate officers are considered to commit any offence committed by a service/company if the offence is committed with the consent or connivance of, or is attributable to the negligence of, the officer (s164). "Officer" includes a director, manager, associate, secretary, other similar officer, or anyone purporting to act in such a capacity. This is in addition to the liability of senior managers for information offences provided in s93 and addressed above.
Unlike in relation to the liability of senior managers for failure to prevent information offences, there does not appear to be any provision stating that where a penalty is imposed on the corporate entity in relation to an offence, no proceedings can be brought against an individual. Penalties for these offences typically include up to two years' imprisonment (in some cases less) or a fine.
Communications/cyberflashing offences
The OSB introduces the new communications offences recommended by the Law Commission, including in relation to harmful, false and threatening communications (ss150-152) and sending a photograph/film of genitals (cyberflashing) (s156).
The harmful, false and threatening communications offences give rise to liability for corporate officers if committed with the consent or connivance of, or as a result of the neglect of, an officer (s155). The offences can give rise to imprisonment of up to two years for harmful communications, 51 weeks for false communications, and five years for threatening communications, or a fine (ss150-152).
A person/entity is considered to "send a message" for the purposes of the offences if it sends, transmits or publishes a communication by electronic means or causes such a communication to be sent (s153). There are exceptions for news publishers, broadcasters, and on-demand programme service providers (ss150-152) and for ISPs (s153), but the position of other intermediaries is not expressly addressed.
As the OSB progresses, look out for more analysis from us or contact our Technology, IP and Media team.