Nearly two and a half thousand years ago (stay with us here), the Athenian historian Thucydides identified the types of misinformation and disinformation that he felt posed a threat to Athenian democracy and civic life, all of which sound familiar today.
More than two millennia after Thucydides, Mark Twain didn’t say, “If you don’t read the newspaper, you’re uninformed. If you do, you’re misinformed.”
With an irony that the author might have enjoyed, the much-cited but misattributed quotation made its way onto social media after appearing in newspaper columns around 2007, according to the Center for Mark Twain Studies. It is now almost impossible to tell a real quote from a false one in the online world, and Thucydides’ concerns continue to play out across new platforms and global democracies.
Fake quotes may be the least of our collective worries in 2024. Countries containing over half of the world’s population have held or will hold elections this year, and concerns that fake news is misleading voters are as high as they have ever been. The twin threats of misinformation and disinformation have been exacerbated by the rise of social media and sophisticated digital technologies, not least AI-generated content such as deepfakes (read more here), which poses a threat to a fair and informed electoral process.
Electoral authorities have limited resources and even less time to respond to increasingly sophisticated threats, meaning that by the time they react to false information, it is often too late to curb its effects. Many believe the only way to halt the spread of fake news and prevent its influence on democracy is to impose more responsibility on the publishers and hosts of online news, something democracies around the world are currently grappling with given the potential conflict with free speech.
Defining misinformation and disinformation
Misinformation generally refers to false or misleading information spread without malicious intent. It often arises from misunderstandings or incorrect reporting and can be perpetuated by well-meaning individuals who believe they are sharing accurate information. Disinformation, by contrast, is deliberately false information created and spread with the intent to deceive or manipulate public opinion. It is a tool often employed for political gain, to undermine trust in institutions, or to sow public discord.
In reality, the line between misinformation and disinformation is often blurred. Bad actors seize on rumours and inaccuracies, spinning them to their own advantage, and use deliberate falsehoods to take in innocent social media users and reputable news outlets alike, extending their reach to new audiences. Motivations matter but, in an increasingly complex digital ecosystem, effects matter more. Truth may be subjective, but the threat to democracy is real and can only be addressed if we learn to preserve and respect evidence and accuracy. This cannot be done without the support of those publishing and disseminating the news on which the voting public relies.
Digital fast food – clicks, likes and shares
The digital age has revolutionised the dissemination of information. Within twenty years, social media platforms have sprung to life and become primary sources of news for millions, if not billions, of people. While these platforms enable rapid information sharing and democratise content creation, they can also present significant challenges to democracy. Algorithms are designed to maximise engagement because engagement drives advertising revenue; this in turn can lead platforms to prioritise sensational content, which often contains misinformation and disinformation. Fake news is the fast food of the information world: full of algorithmic fat, sugar and salt, and addictive to those who click on it.
The speed at which false information can spread on social media platforms is staggering. A 2018 study by the Massachusetts Institute of Technology found that false news stories are 70% more likely to be retweeted than true ones. This virality is often driven by the emotional response false stories evoke, which encourages clicks, likes and shares. In the context of elections, this promotion of extremes can distort public perception of candidates and issues, influencing voting behaviour.
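By way of illustration – and purely as a hypothetical sketch on our part, not any platform’s actual algorithm, with all field names and weights invented – a ranking function that weights the strongest virality signals (shares and comments) most heavily will naturally surface the most provocative posts, however inaccurate:

```python
# Purely illustrative: a toy engagement-ranking score. The field names
# and weights are invented; real platform ranking systems are far more
# complex and are not public.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    views: int
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments (the strongest virality signals) are weighted
    # most heavily, so sensational posts float to the top regardless of
    # their accuracy.
    return 0.1 * post.views + 1.0 * post.likes + 5.0 * post.shares + 3.0 * post.comments

posts = [
    Post("Measured, accurate report", views=10_000, likes=200, shares=20, comments=30),
    Post("Sensational false claim", views=8_000, likes=900, shares=400, comments=600),
]

# Ranking purely by engagement puts the sensational post first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.1f}  {post.title}")
```

On these invented numbers, the false claim outranks the accurate report by roughly four to one – the algorithmic fat, sugar and salt at work.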
A review of recent electoral cycles demonstrates the impact of misinformation and disinformation. In the 2016 US Presidential election, Russian interference through disinformation campaigns on social media platforms was well documented. Troll farms and bots spread divisive content with the clear aim of polarising American voters. In the 2020 US election, misinformation about mail-in voting and COVID-19 led to widespread confusion and mistrust in the electoral process.
Legislative attempts to tackle fake news
Government efforts to tackle unsanctioned propaganda are not new, but in recent years free speech has won out over attempts to curb misinformation and disinformation in most democracies. The UK's Online Safety Act is a case in point: original proposals to regulate harmful online content, including misinformation, were significantly curtailed during the legislative process. A fundamental change in publishing has left social media platforms hosting huge amounts of content for a proliferation of small publishers (including millions of individuals), and the EU and US approach (in the e-Commerce Directive and Communications Decency Act respectively) of protecting content hosts from most forms of legal liability – provided they lack editorial oversight of offending content and act expeditiously to take it down once on notice – has come under intense scrutiny.
The EU has responded with the Digital Services Act (DSA), which creates new rules on how platforms moderate content, as well as on advertising, algorithmic processes and risk mitigation. The DSA aims to ensure that platforms – particularly the very largest – are more accountable and assume responsibility for the actions they take and the systemic risks they pose, including disinformation and manipulation of electoral processes. The DSA is accompanied by the updated Code of Practice on Disinformation and the new EU Commission guidance announced in the European Democracy Action Plan. Signatories to the code, including major tech companies, commit to actions such as closing fake accounts, demonetising purveyors of disinformation and improving transparency around political advertising.
In addition (as discussed here), the EU recently passed the Political Advertising Regulation (PAR), which will, once fully applicable, explicitly regulate political advertising. In the United States, the Federal Election Commission (FEC) and other bodies are also working to improve transparency in online political advertising. FEC regulations now require platforms to disclose who is paying for political ads and to maintain public archives of those ads, though determining what constitutes a ‘political ad’ is an ever-evolving challenge.
Significant fines can be levied against platforms failing to meet their obligations under the EU’s DSA and the PAR, and a range of other legal areas – public health, consumer protection and advertising among them – seems set to evolve to counter inaccurate information more aggressively.
The role of technology companies – addressing misinformation
Tech companies – particularly social media platforms, search engines and messaging services – by their nature play a primary role in spreading misinformation and disinformation, so they also have a leading role in combating fake news. In response to past criticism, and in the face of the DSA and other threatened legislative initiatives, many tech companies have implemented measures to identify and, in some cases, remove false content from their platforms and services. The range of measures taken is constantly growing, along with financial investment in the fight against fake news. Actions include:
Fact checking
Meta has enhanced its content moderation systems and partnered with independent fact-checkers such as FactCheck.org and Snopes to identify and label false information on Facebook. Meta and TikTok have also announced mandatory labelling of AI-generated images across their platforms, and Apple has announced similar measures with respect to its Image Playground to help its users identify what is real and what is not.
The platform formerly known as Twitter introduced a system of labels and warnings for tweets containing misleading information, particularly about elections and COVID-19. X, as it is now known, has recently taken the interesting step of crowd-sourcing its fact-checking, allowing readers to add ‘context’ that is then upvoted if others find it useful.
Algorithmic adjustments
YouTube has adjusted its recommendation algorithms to reduce the spread of misleading content, prioritising authoritative sources such as news organisations and health authorities. Meta has similarly tweaked its Facebook News Feed algorithm to demote false information and boost content from credible sources. Both platforms have had to correct for the natural drift towards sensationalism, and arguably they have yet to strike what is admittedly a difficult balance.
User reporting
Instagram and WhatsApp, both owned by Meta, have improved tools for users to report false information. For instance, WhatsApp has limited the number of times a message can be forwarded in order to reduce the spread of viral misinformation. Messages that have been forwarded through chats more than five times are labelled “forwarded many times”, though no evidence has yet been produced to show that such a label alone makes readers less trusting of the content.
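A rough sketch of how such a limit might work follows – an assumption on our part, since WhatsApp’s actual, end-to-end encrypted implementation is not public – with a counter carried on each message and the label applied once a threshold is crossed:

```python
# Hypothetical sketch only: WhatsApp's real forwarding logic is not
# public. We assume each message carries a forward counter and that the
# client applies a label once the counter passes a threshold.
FORWARD_THRESHOLD = 5  # the "more than five chats" behaviour described above

def forward(forward_count: int) -> tuple[int, str | None]:
    """Forward a message once more, returning the new count and any label."""
    forward_count += 1
    label = "forwarded many times" if forward_count > FORWARD_THRESHOLD else None
    return forward_count, label

count = 0
for hop in range(1, 8):
    count, label = forward(count)
    print(f"hop {hop}: label = {label!r}")  # label appears from hop 6 onwards
```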
Oversight
Mindful of their role as key sources of news, platforms are increasingly looking to establish independent bodies to offer oversight. X has launched a transparency centre where it publishes data on actions taken against rule-breaking content and accounts. Meta’s Oversight Board, an independent body, reviews and provides assessments of content decisions to ensure transparency and accountability on Facebook, Instagram and Threads.
Getting tough – removing and banning accounts
Facebook has publicised its efforts to take down thousands of accounts linked to disinformation campaigns in the run-up to the 2024 election cycle, particularly those associated with state-backed actors. The platform has focused on removing accounts and pages that consistently spread disinformation, such as those linked to coordinated inauthentic behaviour from Russia, Iran and China. These accounts usually feature fake, often AI-generated photos, names and locations, and are designed to look like those of ordinary users engaging with political topics.
Education
TikTok has run educational campaigns to help users identify and understand misinformation, providing tips on critical thinking and media literacy. Meta has also launched initiatives such as its ‘Get Digital’ programme to promote online safety and literacy.
Too much intervention or not enough?
Engagement and collaboration with public authorities has proved helpful in some contexts: Google has worked with health organisations such as the World Health Organization to provide accurate information about COVID-19 in search results and on YouTube, and Facebook and X have worked with election authorities to combat misinformation related to voting processes. However, such engagements carry risks of their own. Not all governments are interested in accuracy over control of messaging, and tech companies may find themselves in the challenging position of determining which is more harmful: the false news they wish to tackle or the government asking them to remove unfavourable press. The line between authoritarian and democratic regimes is not always as clear-cut as we might wish, and businesses need to distinguish legitimate concerns raised at state level from inappropriate political pressure.
The many efforts undertaken by businesses to tackle false information are not without challenges of their own. Algorithms can mistakenly flag legitimate content, and the sheer volume of posts makes it difficult for human fact-checkers to keep up. Moreover, the global nature of social media means that disinformation can quickly cross borders, complicating efforts to contain it.
The complex digital advertising ecosystem faces particular problems. Advertisers increasingly rely on ratings agencies to assess sites and platforms for fake-news risk, and sites rated as high risk for disinformation may be added to a “dynamic exclusion list”, leading to an effective boycott by advertisers. The criteria used by ratings agencies are not always clear: the British publication Unherd has recently complained that one such agency, the Global Disinformation Index, rated its content as high risk on the basis of gender-critical editorial opinion pieces, effectively cutting off its advertising revenue. Unherd’s editor-in-chief, Freddie Sayers, has spoken extensively about the lack of accountability for ratings agencies and their alleged left-wing bias.
Are we being taken in?
The 2024 elections will be a test of our collective ability to address the challenges of misinformation and disinformation. Governments, NGOs and tech companies themselves are pursuing many options to impede the spread of political disinformation and misinformation. Legal solutions may incentivise more aggressive action from platforms, and technical solutions, if tempered to avoid free-speech violations, may be the short-term answer.
If, however, the goal of a digital ecosystem largely cleansed of fake news is ever to be attained, it must start with education across the board (as the UK’s Online Safety Act, among others, recognises), beginning when children first pick up a device and continuing for us all.