
The EU's first steps in tackling online disinformation

There is no quick fix to the problem of online disinformation, widely perceived as a threat to democracy itself. The European Commission is engaging with stakeholders in an effort to tackle the issue and has published a voluntary Code of Practice, signed up to by leading social media platforms and search engine providers. Just how much of an impact this will have remains to be seen.

November 2018

In the past few years, false or inaccurate news and automated social media account activity have become an online phenomenon, especially in connection with elections and terrorist attacks. While commonly referred to as 'fake news', the whole range of activities and behaviours which constitute the creation and distribution of fabricated, exaggerated and/or manipulated content is better described collectively as 'disinformation'.

Recent examples of disinformation include false reports of victims of various horrific incidents, ranging from the Westminster and Manchester terrorist attacks to the Grenfell Tower fire, and allegations that disinformation has played a role in US, Australian and European elections. It is clear that there can be a political agenda behind some disinformation, but other stories, such as the now-debunked myth of a baby being found in rubble nearly two weeks after the Grenfell tragedy, appear to be motivated only by the generation of advertising revenue.

Some of the problems in combating disinformation are identifying it, tracking its creation, slowing or stopping its distribution, making it transparent to users, and ensuring there are genuine alternatives available. Due to these complexities, there is no quick fix; a combined technological, financial and public policy effort is needed. There does appear to be increasing public awareness of the need to engage critically with online content. However, the meteoric rise of the term 'fake news' also means that, at times, it can be used by anyone who wants to dismiss an unfavourable report without actually needing to demonstrate that it is false or misleading. An assertion of fake news risks becoming fake news itself, thereby perpetuating the disinformation cycle.

There are many players (witting or unwitting) in the disinformation industry, from teenagers in their bedrooms churning out blog posts for click revenue, to the websites and social media platforms on which content is posted, the advertisers who fund them, the people who share content, and the personalities who take advantage of the confusion caused. The constant news cycle encourages some advertisers to pay for quick, cheap and dirty content, and this, in turn, can lead to journalistic integrity being shed in favour of speed of production.

Political attention has, however, focused on the big technology companies, as they provide access to the internet and to social media platforms. For this reason, many see them as having a vital role to play in fighting disinformation, through their ability to flag inaccurate content or minimise its spread. Yet this is a delicate area for all stakeholders, due to the potential conflict between protecting freedom of speech and opinion and preventing the spread of disinformation. There is a very fine line between identifying and/or de-prioritising disinformation and stifling freedom of speech on the internet.

Recognising the need to engage directly with the technology platforms and advertisers, the European Commission formed an expert group on disinformation and issued a Communication in April 2018 as part of its Digital Single Market strategy. As a result, major players in the technology and advertising industries have now come together to sign up to a Code of Practice published by the Commission to address the challenges of disinformation, hoping, no doubt, to avoid regulation in this area, which could move them ever further away from the role of intermediary. Reported signatories of the Code so far include Facebook, Twitter, Google and Mozilla.

The Code of Practice seeks to balance protecting the public from disinformation against maintaining citizens' rights to freedom of expression. In part, it seeks to achieve this by defining "disinformation". First, the information must be verifiably false or misleading. Next, it must have been created or distributed for financial gain or with the intent to deceive. Finally, it must have the potential to cause public harm, which includes political, security, environmental and health threats. The Code specifically excludes advertising, satire and parody, and other types of information regulated under other codes or standards.

The Code focuses on five pillars identified by the expert group: transparency, literacy, empowerment, diversity and research. In practical terms, signatories to the Code are looking to disrupt the flow of advertising revenue to disinformation creators and to implement strategies that mark political or issue-based advertising more clearly as separate from editorial content. The signatories also agree to invest in technologies that help people find diverse content, think critically, and understand the advertisements with which they are presented. There is also a focus on research, with the signatories agreeing that disinformation needs to be tracked and its impact understood. The Code includes best practice policies, with reference to existing policies and practices established by some of the signatories.

The Code of Practice is voluntary and self-regulating. This has already led to criticism that it lacks teeth and fails to provide an incentive to make meaningful progress. The commitments made are somewhat generic and imprecise, meaning that any review will likely struggle to come to any real conclusion about compliance with KPIs and improvements made.

The signatories have agreed that the Code will be reviewed 12 months after coming into force. It will be interesting to see what the Commission's promised report at the end of the year has to say about the effectiveness of the Code, although it may be too early for it to reach any firm conclusion.

If you have any questions on this article please contact us.

Kelly Burke

