I spend too much time online – you probably do too. If it's for work, well, that's fine. If it's just a 'cheeky' scroll, that's fine too. If you have children, you probably think they also spend too much time online. When you're online, you might feel pretty safe. You know the deal. What's that – a free gift card? No thank you! But what about children? Maybe less safe, so it's no surprise that the main focus of online safety has been protecting children. But there's more to it.
A few years ago, everything received an app. Last year, everything received an avatar. This year, everything has been AI-enabled. The unstoppable AI wave is changing the ways in which we interact with the internet, as are evolving consumer trends – perhaps in response to apps getting a little too addictive. As we approach the end of 2024, we take stock and ask: how will we be safer online in 2025?
Disclaimer: the ball used for some of these trends is more Magic 8 than crystal.
We'll have better ways of getting things taken down
Two major pieces of recent legislation with implications for online safety and protecting children online have dominated in the UK and EU in 2024: the UK's Online Safety Act 2023 (OSA) and the EU's Digital Services Act (DSA). Both aim to make the internet a safer place for UK and EU users, but in different ways. Common to both are enhanced obligations on in-scope service providers to implement systems and processes enabling the easy reporting of content, together with transparent complaints and redress procedures. These obligations have already kicked in under the DSA. The OSA's obligations on online platforms will likely enter into force in early 2025. From next year, then, it should be easier for service users to flag content and have it taken down, meaning fewer instances of harmful content appearing on feeds and an overall safer user experience.
It will be harder to beat age verification mechanisms
Parents often bemoan how easy it is for children to bypass self-declaration age verification measures. Gen Z remembers all too well how easy it was to create various social media accounts at the click of a mouse. The OSA will introduce an obligation on services likely to be accessed by children to take enhanced measures to mitigate and manage the risks and impact of harm to children, and to use proportionate systems and processes to prevent children from encountering harmful content, including through age verification and assurance mechanisms. Ofcom is due to publish children's access assessment guidance in January 2025. In-scope services will then have a few months to assess whether they are likely to be accessed by children, before Ofcom publishes the Protection of Children Codes of Practice and risk assessment guidance in April 2025.
The UK government also recently launched a study into children's use of social media, and there is talk of a ban on under-16s using social media and/or having phones in school. The UK is not alone in looking at this – Spain, for example, recently introduced a draft law guaranteeing the rights of minors in the digital environment and providing enhanced data protection. Social media companies are responding to this increased scrutiny. Meta, for instance, recently introduced teen accounts for under-18s on Instagram and announced it will use AI-powered software to check profile account data to determine whether users are under 18.
The upshot of these considerable regulatory efforts to make the digital space safer for children is that, as online safety regimes ramp up, higher-risk services will continue to introduce harder-to-beat age verification mechanisms.
LLMs will be used to combat online misinformation
Many hoped the OSA would be the answer to the growing spread of misinformation online in the UK. It fell short, with the new s.179 communications offence restricted to the spread of disinformation: law makers recognised the difficulties in criminalising unintentional false speech. The answer to tackling misinformation may lie in cure rather than prevention. Enter, AI (of course): the European Broadcasting Union recently announced the launch of a fake news analyser, an LLM which analyses "the linguistic properties of news articles" to detect potentially misleading information, assigning a reliability score based on lexical, grammatical and semantic features. This goes much further than most online platforms, which rely on users to report misinformation. Ofcom recently announced the formation of its Advisory Committee on Disinformation and Misinformation. The Committee will take the lead in the war against online misinformation in the UK, and LLMs similar to the EBU's may finally turn the tide both in the UK and more widely.
Online political advertising will be (further) regulated
Although this year's UK and US elections did not see the same volume of false and misleading political advertising we witnessed in 2016 and 2019, 2024 saw its fair share of political AI deepfakes. Non-broadcast political advertising remains largely unregulated in the UK – with only high-level regulation in the Elections Act 2022 and the OSA – against the general direction of travel. In March, the EU enacted the Political Advertising Regulation, with obligations aimed at greater transparency and preventing political disinformation. Social media platforms are now major news sources, and yet political advertising on social media falls outside the remit of the UK's Advertising Standards Authority. Platforms themselves recognise the need for intervention: during the US election, many temporarily banned political advertising to avoid undermining the electoral process. 2025 might finally be the year the UK government bites the bullet and commits to comprehensively regulating online political advertising.
Consumers will get enhanced digital rights
It's a familiar frustration – you find a great deal online for a flight, but by the time you check out, additional charges have doubled the price. Well, change is on the way. The UK consumer protection regime recently received its first makeover in nearly ten years with the Digital Markets, Competition and Consumers Act 2024 (DMCCA). Among other things, the DMCCA modernised the Consumer Protection from Unfair Trading Regulations 2008, expanded the list of blacklisted practices (ie those deemed automatically unfair and prohibited without having to assess their effect on consumers) and introduced new requirements for other business practices. In the crosshairs are several online practices relating to fake reviews, drip pricing, and subscription traps. The DMCCA also armed the CMA with a better regulatory toolkit, allowing it to take consumer protection enforcement action directly rather than having to apply to the courts, and to impose significant financial penalties. Most of the consumer protection sections of the DMCCA are expected to commence in April 2025, with the subscription contracts regime coming into effect in Spring 2026. For more on consumer protection predictions see here.
Imitation AI chatbots will get a talking to
Generative AI can be as harmful as it can be ingenious. In October, a mother in the US issued legal proceedings against a prominent AI company, alleging that one of its chatbots initiated a romantic relationship with her teenage son which tragically led him to take his own life. The same AI company also came under fire for enabling the creation of distressing AI chatbots imitating deceased children. In November, apparently in response to pressure from online safety groups, Ofcom published a sabre-rattling open letter to online service providers on the OSA's application to generative AI and chatbots, confirming the extent to which they fall within its scope. With the OSA's first duties taking effect imminently, 2025 will likely see AI companies in the UK crack down on user-created chatbots on user-to-user services, and restrict the creation of chatbots impersonating real or fictional people where these could prove distressing or harmful. For more AI predictions see here.
Phone, wallet, keys… and you'll need your digital ID
Or not exactly – not having to carry anything is kind of the point. 2025 is likely to be the year that secure digital identification is rolled out en masse across the UK and EU. In the UK, the Office for Digital Identities & Attributes recently published an updated pre-release of the UK digital identity and attributes trust framework setting out government-backed rules and standards for trustworthy and secure digital identity services. The Data (Use and Access) Bill, meanwhile, gives powers to the Secretary of State to set up a regime for trusted digital identity verification and to recognise EU and potentially other third country trust services. Given that the UK government wants closer alignment with the EU, it may introduce a UK version of the European Digital Identity Framework, which mandates Member States to provide all EU citizens, residents and businesses with means of digital identification. The EU Digital Identity Wallets promise online safety and data privacy in a myriad of ways: they can be used to securely store and share digital documents, log in to social media, access social security benefits, authorise payments online, open bank accounts, register SIM cards, book travel, and sign contracts. In an ideal world, the UK's version would be interoperable with the EU's. Sounding too good to be true? Let's hope not.
There'll be a boom in encrypted smartphones
Ever think of ditching your smartphone and becoming a digital hermit? Not to live off-grid, but to leave behind modern life's seemingly excessive and unavoidable data collection and tracking. Data exhaustion is getting to us – and encryption is an increasing priority for many. In the past decade, encrypted chat apps – once seen as the preserve of criminals and spies – have exploded in popularity. But the apps are under attack. In August, France arrested Telegram's founder for failing to provide access to chats for the purposes of criminal investigations. In September, the controversial draft EU law "Chat Control" – which would give law enforcement a back door into encrypted chats made available in the EU – narrowly lost support. In 2025, many may turn to the next step up – encrypted smartphones. They protect users from compromise at the hardware level, meaning they are less prone to spyware and malware.
In November 2024, UK consumer protection magazine Which? tested a number of connected devices and found that some – including air fryers and smart watches – processed excessive amounts of personal data. Which? particularly highlighted its finding that three air fryers which asked for permission to record audio on users' phones through a connected app were then transferring personal data to China and/or connecting to trackers. The ICO said the tests "show that many products not only fail to meet our expectations for data protection but also consumer expectations". For many, encrypted smartphones (and other connected devices) may be the answer in 2025.
We'll get to the end of the scroll
"Doomscrolling" has been in the regulatory spotlight since the European Parliament's November 2023 report on addictive design of online services, in which it was identified as one of the key features exploiting psychological vulnerabilities to maximise interaction, alongside autoplay, pull-to-refresh and ephemeral content ("stories"). The viral Guardian headline from July told us all we needed to know: "Doomscrolling linked to existential anxiety, distrust, suspicion and despair, study finds". An expert report commissioned by the French government in April recommended introducing an outright ban on infinite scrolling and other harmful features. The European Commission's fitness check on EU consumer protection legislation published in October identified significant risks to children associated with addictive design, including infinite scrolling. We may finally have reached the end of the scroll. 2025 may see the first legislative attempt at tackling the key technology that keeps us glued to our screens and spending too much time online.
Whatever life online looks like in 2025, Taylor Wessing is monitoring online safety developments closely. To discuss any of these predictions, or other topics covered in our thought leadership on online safety which you can access here, please reach out to a member of our Technology, Media & Communications team.