The obligations of services in scope of the Online Safety Act (OSA) start with risk assessments. These are both the first major obligation that services need to comply with, and the basis for services' safety duties and the transparency, reporting and redress duties that flow from them. This article addresses the risk assessment requirements set out in the OSA and in Ofcom's draft guidance in its consultation on "Protecting people from illegal harms online".
Who needs to do what and when?
All in-scope services need to carry out both an illegal content risk assessment and a children's access assessment to determine whether the service is likely to be accessed by children. Services that are likely to be accessed by children must then carry out a further children's risk assessment, and Category 1 services must also carry out an adult user empowerment risk assessment.
For services operating today, the deadline for carrying out these assessments is three months from the date that Ofcom publishes its guidance on the risk assessment in question. Going forward, for new services, or existing services whose operations change such that they fall within scope of new obligations, the deadline will be three months from the date the service becomes in scope, becomes likely to be accessed by children, or becomes a Category 1 service, as applicable. Services also need to keep risk assessments up to date (Ofcom's guidance suggests reviewing them annually) and to carry out a new assessment before making any significant change to the design or operation of the service.
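As a purely illustrative sketch of the three-month rule, the Python snippet below computes an assessment deadline from a trigger date; the function name and the example date are placeholders assumed for this article, not dates Ofcom has announced.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

def assessment_deadline(trigger: date) -> date:
    """Return the deadline three months after the trigger event,
    e.g. publication of the relevant Ofcom guidance, or the date a
    service comes within scope of a new obligation."""
    return trigger + relativedelta(months=3)

# Placeholder trigger date, for illustration only
print(assessment_deadline(date(2024, 10, 1)))  # 2025-01-01
```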
How to carry out risk assessments
The OSA sets out, for each kind of risk assessment, what a service needs to consider. Broadly speaking, this includes the service's user base, the risk of individuals encountering different kinds of content, and the risk of harm presented by that content. Services also need to consider the design and functionalities of the service, its algorithms, how quickly content spreads, the different ways the service is used, and broader matters relating to its operation, such as its business model and governance. Services must also take account of risk profiles published by Ofcom.
Ofcom's draft guidance
Ofcom's draft guidance adds significantly to the information provided in the OSA on how illegal content risk assessments should be carried out. The consultation on the draft guidance closed on 23 February 2024 and the final version may contain changes. Even when the guidance is finalised, it will be non-binding – services will be treated as complying with a relevant duty if they follow Ofcom's Codes of Practice, but can also comply in other ways.
4-step assessments
Ofcom sees illegal content risk assessments as comprising 4 steps – (i) understanding the relevant harms, (ii) assessing the risk of those harms arising, (iii) implementing mitigating measures, and (iv) reporting and updating the risk assessments.
Illegal priority harms
The guidance breaks down the list of priority offences in the Act into 15 different kinds of illegal priority harm. These are:

(i) terrorism offences;
(ii) child sexual exploitation and abuse (CSEA) offences (including grooming and child sexual abuse material (CSAM));
(iii) encouraging or assisting suicide (or attempted suicide) or serious self-harm offences;
(iv) harassment, stalking, threats and abuse offences;
(v) hate offences;
(vi) controlling or coercive behaviour (CCB) offence;
(vii) drugs and psychoactive substances offences;
(viii) firearms and other weapons offences;
(ix) unlawful immigration and human trafficking offences;
(x) sexual exploitation of adults offence;
(xi) extreme pornography offence;
(xii) intimate image abuse offences;
(xiii) proceeds of crime offences;
(xiv) fraud and financial services offences; and
(xv) Foreign Interference Offence (FIO).
Understanding and assessing the risk of harm
Services need to understand each of these types of harm, separately assess the risk of each type arising, and grade each risk as low, medium or high. Ofcom envisages different illegal content safety duties applying depending on whether a service is low risk, high risk or multi-risk, which is determined by the outcome of the risk assessment. A service is high risk where the risk assessment finds a high risk in relation to any one category of illegal priority harm, and multi-risk where it finds a high risk of two or more illegal priority harms.
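By way of illustration only, the distinction just described could be expressed as follows; the function name and harm labels are assumptions made for this sketch, not terms taken from Ofcom's guidance.

```python
def classify_service(gradings: dict[str, str]) -> str:
    """Classify a service from its per-harm risk gradings.

    `gradings` maps each of the 15 kinds of illegal priority harm
    (e.g. "terrorism", "CSEA", "fraud") to "low", "medium" or "high",
    as found by the service's own risk assessment.
    """
    highs = sum(1 for level in gradings.values() if level == "high")
    if highs >= 2:
        return "multi-risk"   # high risk of two or more priority harms
    if highs == 1:
        return "high-risk"    # high risk in any one category
    return "low-risk"

# Example: a service grading two harms as high risk is multi-risk
print(classify_service({"terrorism": "low", "CSEA": "high", "fraud": "high"}))
```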
Evidence
Services must assess the risk of these harms arising on the basis of evidence. All services need to consider, as core evidence, Ofcom's risk factors, user complaints, user data and analysis of any previous incidents of harm. Larger or multi-risk services may also need to consider additional enhanced evidence, such as product testing results, content moderation results, views of external experts, audits, and the views of users and representative groups.
Risk factors and risk profiles
In its guidance Ofcom has set out in granular detail the service characteristics that it considers can give rise to risks – the risk factors – together with the risks it considers those characteristics present and the types of harm to which they relate – the risk profiles.
The guidance invites services to ask themselves what kind of service is being provided, who the user base is, what level of user identification or anonymity applies, and in what ways users can connect and communicate with each other (for example, whether the service offers livestreaming, direct messaging, encrypted messaging, commenting, or image or location sharing, among others). Other factors include whether the service allows users to sell goods and services, how users search for content, whether recommender systems are used, and – beyond questions of how the service works – the service's commercial profile and business model, including its revenue model and growth strategy.
In its risk profiles Ofcom sets out, for each of these very detailed characteristics, the risks of different types of illegal harm that the feature might give rise to. The conclusions Ofcom reaches are not always intuitively applicable to all of the different types of service that might fall within a particular risk factor. This may be because Ofcom has sought to be comprehensive, accounting for all potential harm scenarios even if they do not arise for every service, and/or because Ofcom has an archetypal user-to-user service in mind (e.g. a social media service or messaging forum). Where user-to-user services depart from that archetype, or arise in a specific context, they may find that Ofcom's risk profiles are less suited to their situation. Given the importance of the risk profiles in shaping services' risk assessments, and therefore their safety duties, it would be helpful for the next iteration of Ofcom's guidance to acknowledge that services need a measure of flexibility in assessing risks, and that not all risk profiles will necessarily apply to every service meeting a particular risk factor.
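To give a feel for how risk factors feed into risk profiles, the sketch below pairs a handful of illustrative factors with the kinds of priority harm they might be linked to. Both the factor names and the pairings are hypothetical and heavily simplified; Ofcom's actual risk profiles are far more granular.

```python
# Hypothetical, simplified pairing of risk factors with kinds of
# priority harm; not a reproduction of Ofcom's actual tables.
RISK_FACTOR_PROFILES: dict[str, list[str]] = {
    "livestreaming": ["terrorism", "CSEA"],
    "direct_messaging": ["grooming", "harassment"],
    "anonymous_accounts": ["harassment", "hate"],
    "goods_and_services_marketplace": ["drugs", "firearms", "fraud"],
    "recommender_system": ["terrorism", "hate"],
}

def applicable_harms(service_factors: set[str]) -> set[str]:
    """Collect the kinds of harm linked to a service's risk factors."""
    return {
        harm
        for factor in service_factors
        for harm in RISK_FACTOR_PROFILES.get(factor, [])
    }

print(applicable_harms({"livestreaming", "recommender_system"}))
# e.g. {'terrorism', 'CSEA', 'hate'}
```

As the paragraph above notes, a mapping of this kind will over-include for services that depart from the archetype, which is why a measure of flexibility in applying the risk profiles matters.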
What happens next?
Ofcom's consultation has now closed and it plans to publish a statement of its final decisions on illegal harms in Autumn 2024, covering both the guidance and the Codes of Practice on illegal harms. The final Codes of Practice on illegal content will be submitted to the Secretary of State for approval and then laid before Parliament. This means the final Codes are unlikely to come into effect until Q4 2024 at the earliest, which suggests services' deadline to complete illegal content risk assessments will fall in Q4 this year or later.
You can access Part 1 of our Interface content on the OSA here, Part 2 here, and our full range of content on the OSA and the DSA here.