From cyberbullying to terrorist attacks, the role of the internet is coming under increasing scrutiny with a number of initiatives at national and supra-national level to assess and mitigate the risks.
This is particularly true in the UK, with the government recently publishing a proposal to regulate a wide range of harms caused by user-generated content (UGC): the Online Harms White Paper (OHWP).
The key proposal is the introduction of a new statutory duty of care on online operators, overseen by an independent regulator. The proposal is ambitious both in the types of harm covered and the operators caught. While many have welcomed this attempt to regulate the area, others have questioned the approach.
Historically, online operators have enjoyed considerable protection regarding content uploaded by their users. For 20 years, under the EU's e-Commerce Directive, operators who merely host – rather than create – content have benefited from a "safe harbour" from liability for illegal content unless and until they have notice of it, provided they act expeditiously to remove or disable access to it once on notice. They are also exempt from any general obligation to monitor content.
The Directive was considered necessary to allow the early internet to flourish. It worked. UGC has not only transformed the way we interact with one another but has turned areas from journalism to advertising on their heads. But there is a trade-off. When users are given the opportunity to create and share content, there is always scope for misuse. That misuse has come into sharp focus in recent years through such things as the live streaming of terrorist attacks and examples of the promotion of self-harm and suicide.
In view of this, the public and interest groups ask why there is not greater regulation, in particular why online operators are not forced to proactively monitor and filter content. Regulation is not, however, as straightforward as it might initially seem. How should society guard against damaging misuse without severely impacting the positive aspects of the online environment?
Even for the largest online operators, the sheer volume of UGC makes this particularly difficult. Technology has developed sufficiently to allow proactive monitoring and filtering of large volumes of content but it remains a somewhat blunt instrument; no technology yet exists that can differentiate cyberbullying from satire. As well as the risk of filtering out legitimate content, innovation may be stifled and smaller, early stage companies may find themselves squeezed out if they lack the required filtering technology.
The EU has grappled with this for some time with mixed results. A provision in the proposed Regulation on Terrorist Activity Online imposing a positive monitoring obligation for terrorist material was recently watered down because it was deemed incompatible with the e-Commerce Directive.
Conversely, the Directive on Copyright in the Digital Single Market – which effectively imposes a positive monitoring obligation for material infringing copyright – was recently adopted. Despite the controversy, the general direction of travel at EU level is towards some degree of proactive monitoring by platforms, at least in relation to certain types of content (see our article for more).
The number and range of online harms also makes a coherent regulatory framework difficult. Regulating modern slavery is not the same as regulating disinformation. The former has a clear legal definition, the latter does not. Making sure that any regulatory framework minimises the obligation on online operators to work out what is and is not acceptable content is obviously key. Without such clarity, legitimate content will almost certainly be removed.
It is no surprise then that it has taken time to formulate a coherent proposal for regulation. Past proposals – such as that contained in the Internet Safety Strategy Green Paper – focused on self-regulation. This appears no longer to be the preferred approach of the UK government.
Indeed, some in the industry are calling for greater guidance, with a number of large technology companies writing to the UK government in February 2019, setting out what they believe a new regulatory framework should look like.
The OHWP proposes significantly increasing operators' responsibilities to tackle harmful content and activity online. A new statutory duty of care would require operators to do what is "reasonably practicable" to tackle specified types of online harm, with compliance overseen and enforced by an independent regulator.
While the proposals have been broadly welcomed, some elements are proving controversial.
One of the main criticisms concerns the range of harms the proposals attempt to regulate. By extending to content that is legal but nonetheless considered harmful, it is argued, they open the way to censorship of the internet. There is a vast grey area between free speech and hate speech, and people will disagree on where the line should be drawn.
Absent any clear legal definition (in legislation or case law) of harms such as cyberbullying, the line will be a difficult one for operators to draw and some legitimate content will inevitably be removed. This is particularly so where the sanctions for failing to do so are significant and potentially include individual liability for senior managers.
Central to the proposal is the creation of a new statutory duty of care on online operators, overseen by an independent regulator. The regulator would not, however, have power to determine individual disputes; it could take enforcement action only where there is evidence of a systemic failure to fulfil the duty of care. There are also questions as to whether the regulator will have sufficient resources to bring action in appropriate cases.
Moreover, the statutory duty of care does not create a new right for individuals to enforce. Any action against online operators will have to be based on existing laws (such as negligence, breach of contract or defamation). Despite this, the OHWP says that there will be "…scope to use the regulator's findings in any claim against a company in the courts on the grounds of negligence or breach of contract". More needs to be done to make it clear how (if at all) existing law is changing, if speculative or vexatious claims are to be avoided.
A further criticism relates to the positive monitoring obligation. The UK government's claim that this is compatible with the provisions of the e-Commerce Directive must be open to doubt. If the UK leaves the EU, the point will probably become moot; if it does not, we can expect a challenge to this provision. How that challenge would play out is difficult to predict, given the direction of travel at EU level towards positive monitoring.
Some argue that the provisions of the e-Commerce Directive themselves need to be amended. At present, operators face an active disincentive to monitor content proactively – although many of the larger operators do so – since monitoring may put them on notice of illegal content and therefore expose them to liability. Allowing some form of positive monitoring without risking liability might be particularly beneficial for start-ups and smaller enterprises.
Some question whether the proposals are appropriately targeted. Even the most comprehensive regulatory framework aimed at online operators will not prevent a determined user posting harmful content online. It could be argued that giving users greater scope to trace and take action against those who post harmful content online would be equally or more effective.
Likewise, some argue that the proposals should contain more on technological measures to protect users, as well as education and awareness programmes. More generally, there is a question about whether national regulation, uncoordinated with other countries, is the best approach.
The OHWP has now been publicly consulted upon, and the government is expected to respond to the results of the consultation later this year. Notwithstanding the criticisms discussed above, the OHWP has broadly received cross-party political support, and a new legal framework of some sort is likely to result.
Operators would be prudent to begin reviewing their systems and procedures to make sure that they conform with key elements of the proposals. Indeed, the government has specifically said that it expects operators to take action now.
Taylor Wessing is able to advise on the implications of and compliance with the new proposals. Further details on the proposals are available here.
If you have any questions on this article please contact us.