
1 October 2019

Regulating the internet – 3 of 7

Fake news – How it became public enemy number one, and the challenges to fighting back

In this post-truth, online world, vast swathes of the global population no longer obtain their information from traditional news organisations or broadcasters.

Author

Michael Yates

Partner


Earlier this year, fake news, or disinformation, was designated (along with many other things) as a "threat to our way of life" by the UK Government in its Online Harms White Paper (OHWP). It said "inaccurate information, regardless of intent, can be harmful" and "disinformation threatens [our] values and principles, and can threaten public safety, undermine national security, fracture community cohesion and reduce trust".

In the current climate, this might seem like an ironic statement. The white paper followed the Digital, Culture, Media and Sport Committee's (DCMS) report on "Disinformation and 'fake news'", which made many recommendations to address the issue. Whatever your take on fake news, the fact is that it has the potential to influence millions of people.

Fake news on the rise?

Information received, consumed, liked, commented on and shared online outside traditional news organisations or broadcasters can be less accurate or verifiable, misleading and distorted. Some say this has become a major threat to democracy because voters, having provided their data in exchange for free use of social media, can be micro-targeted with specific information, via Facebook for example, much more precisely than via mainstream media advertising.

Put another way, we are no longer susceptible only to the narratives and agendas of mainstream media conglomerates, but to anyone who can access or reach our screens (using our data) and get our attention.

Media lawyers have been fighting 'fake news' published by the mainstream media for years, and voter manipulation via such means is nothing new. Just look at the vilification of Ed Miliband over his awkward consumption of a bacon sandwich, which was used to help derail his 2015 election campaign. However, given that social media platforms have billions of daily users, rather than millions of readers, governments worldwide have become worried about forces beyond the media which they cannot see or control and, until recently, have failed to understand.

The sea change was triggered by the widely reported global data scandal that broke in 2018, involving Cambridge Analytica and the data of an alleged 50 million Facebook users. The fallout highlighted dangers, including:

  • The unlawful collection, sale and use, by companies hired by political candidates, of personal data for the purpose of psychologically profiling voters and micro-targeting them with adverts on social media to influence their voting decisions.
  • The sale by Facebook of its users' data to many other app developers for advertising purposes without consent.
  • Conclusions (including in the Mueller report) by various intelligence agencies that foreign nation states used fake news to influence elections.

Many now believe that fake news is used to change the political climate and that it threatens democratic processes and elections through its influence over online debate. This has raised questions about how much responsibility social media companies should bear for what is processed and published on their platforms. Since then, dozens of regulators have conducted investigations and governments around the world are introducing new laws and regulations. Mark Zuckerberg was even questioned before the US Congress and the European Parliament.

As a result, changes are afoot but there are significant challenges to fighting fake news.

What exactly is fake news?

The first problem comes with the definition itself. The DCMS's report rejected the use of the term "fake news" altogether, using "disinformation" instead. In its response to the DCMS's interim report on fake news, the UK Government defined "disinformation" as "the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purpose of causing harm, or for political, personal or financial gain".

The OHWP says disinformation is "information which is created or disseminated with the deliberate intent to mislead; this could be to cause harm, or for personal, political or financial gain". The Cambridge dictionary says it is "false stories that appear to be news, spread on the internet or using other media, usually created to influence political views or as a joke".

Others say it can include parody, satire or propaganda consisting of deliberate disinformation or hoaxes spread via traditional news media. Another view is that it consists of stories which simply confirm readers' own beliefs or biases (so not that different from reading the same newspaper every day).

Where is the line? Does fake news include print and broadcast media or not? If not, why is there a difference between using those mediums to influence public opinion with false stories and using social media? Does it include information published by governments and would that be subject to any regulation?

Spotting fake news

The Electoral Commission has called for a change in the law to make online political adverts show clearly who paid for them. It wants online adverts to carry the same information as printed election material, which has to say who has produced it. Facebook has recently started an online archive of political adverts on its site, with information about who is behind them and how they are targeted.

However, as a general point, it is considered that much of the fake news used to influence the 2016 US presidential election was spread via unpaid posts, not paid adverts, which were voluntarily shared by other users; regulation or legislation targeting paid advertising would not have affected these posts at all.

Between October 2017 and September 2018, Facebook says it shut down 2.8 billion fake accounts. The company says people trying to abuse its systems often set up a computer which creates a new account every 10 seconds and it is engaged in a "constant war" to remove them. We have been told that Russia used thousands of false US identities supported by fake documents and imposter social media profiles to disseminate fake news or blend into real social media activities and influence online public discourse.
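To make that "constant war" concrete, the sketch below shows one signal a platform might use: a signup-velocity check that flags a source creating accounts at machine speed. The fingerprint key, window and threshold here are illustrative assumptions, not any platform's actual method.

    from collections import defaultdict, deque

    # Illustrative sketch only: a minimal signup-velocity check of the kind
    # platforms might use as one signal among many. The thresholds and the
    # idea of keying on a device/network fingerprint are assumptions.

    WINDOW_SECONDS = 60          # look-back window
    MAX_SIGNUPS_PER_WINDOW = 3   # more than this from one source looks automated

    class SignupVelocityMonitor:
        def __init__(self):
            self._events = defaultdict(deque)  # source fingerprint -> signup timestamps

        def record_signup(self, source_id: str, timestamp: float) -> bool:
            """Record a signup; return True if the source should be flagged."""
            events = self._events[source_id]
            events.append(timestamp)
            # Drop timestamps that have fallen out of the window.
            while events and timestamp - events[0] > WINDOW_SECONDS:
                events.popleft()
            return len(events) > MAX_SIGNUPS_PER_WINDOW

    monitor = SignupVelocityMonitor()
    # A script creating an account every 10 seconds trips the check quickly.
    for t in range(0, 60, 10):
        flagged = monitor.record_signup("ip-203.0.113.7", float(t))
    print("flagged:", flagged)  # flagged: True after the 4th signup in the window

In practice such a check would be only one of many signals, which is precisely the problem the next question raises.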

How should social media companies identify such fake accounts, given they can be indistinguishable from legitimate US users' profiles? If there is no way to verify authenticity, it becomes very difficult to prevent others sharing fake news more widely, potentially leading to republication by high profile users which could cause such news to permeate the mainstream media.

The spread of fake news using such methods is also difficult to prevent from a data protection point of view, because users are not always targeted by advertisers using their personal data to micro-target them, but may instead be engaging one to one on a closed platform with fake social media profiles.

Further, much of this fake news (paid or unpaid) does not reference the election or voting or endorse a specific candidate, but focuses on subjects such as race, ethnicity, immigration, religion and support for law enforcement in order to provoke emotional responses.

So how can such posts be identified as relating to an election or to politics, and be defined as "political"? Individuals expressing views on matters of public interest quite obviously fall within the concept of freedom of expression, under the First Amendment in the USA for example. It would be very hard to legislate against "promoting division", and if no individual is identified in either a paid or unpaid post, then no legal cause of action could arise upon which to base any claim.

Whose responsibility is it?

Fake news, personalised to every user via algorithms, can generate large sums of advertising revenue for the social media companies which publish it: revenue that grows as web traffic drives up audience levels, clicks and ad impressions.

Social media platforms have this in mind when proposing changes and their stance has traditionally been that they are not responsible for fake news and should not determine what users see and what they can access. However, sensing the direction of travel and eager to avoid incoming regulation, Facebook and other social media platforms are developing tools to allow their users to flag and report fake news.

This falls in line with the DCMS's new Code of Practice for providers of online social media platforms, a key principle of which states: "Social media providers should maintain a clear and accessible reporting process to enable users to notify social media providers of harmful conduct".

But what if users cannot identify fake news and why should we presume that there is any motivation for users to report content? If fake news is identified, platforms will only remove it if they think it violates their rules, which can be a very high threshold.
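As an illustration of the kind of reporting process the Code of Practice envisages, here is a minimal sketch in which a post is queued for human review once enough distinct users flag it. The threshold and the hand-off to moderators are assumptions made for the example, not any platform's actual rules.

    # Illustrative sketch only: how a user-report threshold might gate
    # content into human review. The threshold value and the auto-queuing
    # behaviour are assumptions, not any platform's actual policy.

    REVIEW_THRESHOLD = 5  # distinct reporters before a post is queued for review

    class ReportQueue:
        def __init__(self):
            self._reporters = {}   # post_id -> set of user ids who reported it
            self.review_queue = []

        def report(self, post_id: str, reporter_id: str) -> None:
            reporters = self._reporters.setdefault(post_id, set())
            reporters.add(reporter_id)  # a set, so repeat reports don't stack
            if len(reporters) == REVIEW_THRESHOLD:
                self.review_queue.append(post_id)  # hand off to a human moderator

    queue = ReportQueue()
    for user in ["u1", "u2", "u3", "u4", "u5"]:
        queue.report("post-123", user)
    print(queue.review_queue)  # ['post-123']

Even a simple pipeline like this depends entirely on users noticing and reporting the content in the first place, and on the platform's rules for what happens next.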

Even when platforms have removed content, their decisions have often attracted criticism (such as Facebook's removal of Nick Ut's Pulitzer Prize-winning photograph). There is likely to be a significant clash between the social media platforms' terms and conditions and whatever regulation governments create.

But who will enforce these new rules and will platforms be compelled to remove content? Who should decide what you see or read, the platforms or the government – either way, this is a real issue in terms of freedom of speech.

Will regulation work?

The UK government has proposed a new regulatory framework which will increase the responsibilities of platforms or operators to tackle harmful content and activities online as set out in the OHWP. It will apply to any operator which allows users to share or discover user-generated content or interact with each other online.

Under the proposals, a new statutory duty of care applicable to various online harms will be imposed on operators, overseen by an independent regulator. The regulator will set out in Codes of Practice how operators can comply with that duty of care by doing what is 'reasonably practicable'. This will include obligations to proactively monitor or scan for certain tightly defined categories of illegal content, and disinformation is one of the harms specifically covered.

While further Codes of Practice will not be established until the regulator is operational, the government expects operators to take action now to tackle harmful content and activity on their services.

The Codes of Practice are likely to include requirements to:

  • make content that has been disputed by reputable fact-checking services less visible to users
  • use fact-checking services, particularly during elections
  • promote authoritative news sources
  • make it clear when users are dealing with automated accounts
  • sanction users who deliberately misrepresent their identity to spread and strengthen disinformation.

Ultimately, failure to comply with the duty of care could lead to significant fines and individual liability for senior management. While these new proposals will likely help in the fight, enforcement by whichever regulator is created to police the Codes of Practice will be a key issue, and there is inconsistency among the various existing regulators.

The ICO has launched a new framework code of practice regarding the use of personal data in political campaigning. This sits oddly alongside advertising regulation by the ASA under the CAP and BCAP codes. The CAP code states that "claims in marketing communications, whenever published or distributed, whose principal function is to influence voters in a local, regional, national or international election or referendum are exempt from the Code", leaving political parties free to distribute what they wish. However, the BCAP code prohibits advertising of a political nature or directed towards a political end on television or radio services, presumably because those mediums are considered more powerful. Such services are also regulated by OFCOM, which imposes obligations of accuracy and fairness under the OFCOM Code. Why should social media now be considered any different?

Will fines focus the mind?

Facebook has already faced a number of fines in relation to the use of its data to spread fake news. In the UK, the ICO fined Facebook £500,000 in October 2018 for its involvement in the Cambridge Analytica scandal, finding that between 2007 and 2014, Facebook processed the personal information of users unfairly by allowing application developers access to their information without sufficiently clear and informed consent, and allowing access even if users had not downloaded the app, but were simply ‘friends’ with people who had.

However, this level of fine (although the maximum under the old law) is unlikely to deter companies which make billions of dollars in profit from processing users' data in breach of the law. Will the GDPR, with its far higher maximum fines, move the goalposts to a sufficient extent?

Larger fines have been levied against Facebook in the US, where it has agreed to pay a record $5bn fine to the US Federal Trade Commission to settle privacy concerns and $100 million to the US Securities and Exchange Commission. It must also establish an independent privacy committee that Facebook's chief executive Mark Zuckerberg will not have control over. Facebook was also fined more than €100 million by the European Commission in 2017.

Other investigations (by the DoJ and the CMA) are ongoing with regard to competition issues, online advertising and whether making users' data available to advertisers in return for payment is producing good outcomes for consumers.

Many say companies with deep pockets like Facebook will easily absorb fines, but if large sums don't have an effect, it is hard to imagine what will. Perhaps in the end it will be public opinion that forces the issue. Social media platforms may well be able to weather fines, but they will not want to lose users and revenue.

Who do we trust?

Ultimately, disinformation has to be fought via changes to the law, which has to catch up in this area. Where it lags, new technologies like NewsGuard, which rates and reviews news and information websites using nine standards of credibility and transparency, allocating a "nutrition label", may also help in the fight.
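By way of illustration, a NewsGuard-style "nutrition label" might be computed along the following lines: nine credibility and transparency criteria, each weighted, rolled up into a single score and a green or red label. The criteria names, weights and cut-off below are simplified assumptions, not NewsGuard's actual methodology.

    # Illustrative sketch of a NewsGuard-style "nutrition label": nine
    # credibility and transparency criteria rolled up into one rating.
    # The criteria, weights and cut-off are simplified assumptions.

    CRITERIA_WEIGHTS = {
        "does_not_publish_false_content": 22,
        "gathers_and_presents_info_responsibly": 18,
        "corrects_errors": 12.5,
        "separates_news_and_opinion": 12.5,
        "avoids_deceptive_headlines": 10,
        "discloses_ownership_and_financing": 7.5,
        "labels_advertising": 7.5,
        "reveals_who_is_in_charge": 5,
        "provides_author_information": 5,
    }  # weights sum to 100

    def nutrition_label(criteria_met: dict) -> tuple:
        """Score a site on the criteria it meets and assign a label."""
        score = sum(w for name, w in CRITERIA_WEIGHTS.items() if criteria_met.get(name))
        return score, ("green" if score >= 60 else "red")  # 60 as an assumed cut-off

    # A site meeting everything except ownership disclosure and ad labelling:
    met = {name: True for name in CRITERIA_WEIGHTS}
    met["discloses_ownership_and_financing"] = False
    met["labels_advertising"] = False
    print(nutrition_label(met))  # (85.0, 'green')

Of course, any such scheme simply relocates the trust question: someone still has to choose the criteria and the weights.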

But the OHWP states: "Importantly, the code of practice that addresses disinformation will ensure the focus is on protecting users from harm, not judging what is true or not. There will be difficult judgment calls associated with this. The government and the future regulator will engage extensively with civil society, industry and other groups to ensure action is as effective as possible, and does not detract from freedom of speech online".

This perfectly summarises the dichotomy at the heart of regulating fake news online. Ultimately, who do we trust?

If you have any questions on this article please contact us.
