
1 May 2020

Protecting children in the online and social media age – 2 / 4

Children and online harms

As the UK government's approach to tackling online harms evolves, protecting vulnerable groups, especially children, is increasingly at the forefront of proposals.


On 12 February 2020, the government published its Initial Consultation Response to the April 2019 Online Harms White Paper, which set out the government's plans to tackle a wide range of online harms, and "make the UK the safest place in the world to be online".

The proposed regulatory framework will apply to companies that provide services or use functionality on their websites which facilitate the sharing of user generated content or user interactions (such as comments, forums or video sharing). You can read more about the proposed legislation in our summary of the White Paper.

While the response contains limited further detail in terms of how the new regulatory scheme will work, one of the more noticeable changes is an increased emphasis on the protection of children, driven by a strong response to the White Paper stressing the importance of higher levels of protection for young people online. Although this principle is laudable, its application raises some difficult practical issues which touch on sensitive topics around privacy and internet censorship.

Defining harms

The White Paper itself set out an initial, non-exhaustive list of harmful content and activities in scope of the new regime which – aside from covering child sexual exploitation and abuse (CSEA), and access by children to pornography – also extended to cyberbullying and trolling. The chosen regulator for enforcement (likely to be Ofcom) will set out steps that should be taken to tackle bullying, insulting, intimidating and humiliating conduct online, such as ensuring that those who have suffered from this type of harm have access to adequate support.

However, the response notes that harms like cyberbullying are particularly difficult to define and acknowledges online platform operators' concerns that imposing duties to monitor content and pass judgment on what constitutes cyberbullying will inevitably lead to censorship. Its solution is that the regulator will not investigate or adjudicate on individual complaints or force companies to remove specific pieces of lawful content.

Instead, the aim is to ensure protections for freedom of expression by establishing differentiated expectations on companies for illegal content and activity versus conduct that is not illegal but has the potential to cause harm. Rather than setting out and enforcing its own definitions of such harms, the new regulatory framework will instead require companies to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently.

In the interim, the consultation response makes clear that the government expects all social media companies to adhere to the principles set out in the statutory Social Media Code of Practice, and to the good practice guidance for implementing them, until the new regulatory requirements come into force. These principles – which are expected to shape the forthcoming codes of practice – include requiring social media companies to maintain clear, accessible and efficient reporting processes for dealing with notifications from users about harmful conduct, and to explain the processes and action taken clearly, including in their platform terms and conditions.

Private vs public communications

Harmful activity online frequently involves a combination of public and private communication channels, and the consultation response also explores the difficulty of balancing the protection of children online with privacy considerations.

For example, people targeting children to commit serious online harms often make initial contact with a child on public social media platforms before moving to private messaging services to continue the grooming process. While it seems uncontroversial that the regulatory framework should address harms in public forums, this is less straightforward when regulation starts to move towards monitoring private communications.

Having said that, drawing a line between public forums and private communications is not as simple as it first appears. Although it is generally accepted that one-to-one phone calls, messages, and video calls should not be within the scope of any monitoring obligations, the distinction is less clear when it comes to the greyer areas of group chats and private forums.

Many respondents to the consultation cautioned against using the number of participants as the sole – and somewhat arbitrary – measure of whether a communication is private. Others suggested that identifying a maximum number of participants, beyond which a conversation is no longer considered private, could be helpful.

Overall, the majority of consultation responses agreed that private communications should be out of scope for monitoring. Instead, it seems that the new regime will lean towards alternative solutions, such as requiring platforms to offer reporting mechanisms allowing users to report abusive or offensive content sent to them privately or posted in closed community forums or chat rooms.

Age verification

Another particular difficulty in establishing higher protections for young people online arises from age verification. This has been widely debated in recent years, not least as a result of the UK government's ill-fated proposal to block adult content online under Part 3 of the Digital Economy Act 2017. After several false starts due to administrative issues, the initiative – widely known as the "porn block" – was dropped by the Department for Digital, Culture, Media and Sport (DCMS) in October 2019. However, former Culture Secretary Nicky Morgan stated that the objectives of the Digital Economy Act would instead be delivered through the online harms initiative.

Much of the criticism levelled at the porn block focused on the practical methods of achieving age verification – suggestions for which ranged from credit card checks to the purchase of "porn passes" with ID from newsagents. Many opponents pointed towards the impracticality of these measures and the potential for damaging security breaches as a result of collecting such particularly sensitive data about users.

Nevertheless, preventing children from accessing online pornography remains a focus of the new regime, and the use of age-assurance tools to keep children from accessing inappropriate content is still a prominent part of the proposals. The consultation response points towards the work being conducted by the VoCO (Verification of Children Online) project, a cross-sector research initiative undertaken in partnership between DCMS, the Home Office and GCHQ, dedicated to exploring age assurance as a risk-based approach to recognising child users online without undermining their privacy. Whether a solution that is practical for both platforms and users can be found remains to be seen.

Next steps

While the proposed legislation will not come into force for some time yet, the government "expects companies to take action now to tackle harmful content or activity on their services", and more detailed proposals for online harms regulation are due to be released in the spring, including interim voluntary codes on tackling online terrorist and CSEA content and activity. These codes are intended to bridge the gap and incentivise companies to take early action before the regulator becomes operational.

Plans to progress the work on online harms may well be derailed by the focus on the current pandemic. In many ways, however, the pandemic highlights the issue of online harms to children: they are kept at home with screens for entertainment, often in a less supervised environment than usual, as parents struggle to continue working from home.

If you have any questions on this article, please contact us.
