26 June 2023
There has been plenty of press attention on the impact of both generative and 'general' artificial intelligence (AI) in the workplace. There is no single definition of AI, but it is helpful to think of AI as the 'science of making machines smart'. Core to this is the idea that machines can work in the same way as humans, only faster, better, and more reliably.
Financial services firms are embracing the use of AI in their business operations, and the benefits for task automation, fraud detection and delivering personalised recommendations are transformative. Firms are using AI in their back-office operations, to make more informed risk assessments, to offer a personal service without branches, to reduce and prevent fraud, and to speed up their operations, and they are exploring ways of using AI even more widely.
Can similar benefits be achieved from using AI in the workplace? In a 2020 report, 'My boss the algorithm: an ethical look at algorithms in the workplace', ACAS highlighted the opportunities and risks associated with greater reliance on algorithms in the workplace, and its key recommendations remain relevant.
Most financial services businesses have not yet taken the bold step of replacing human bosses with algorithms. Many firms, however, are already using AI to hire and train staff, to monitor staff and conduct surveillance, for disciplinary and performance management, for work allocation, to decide on terms of employment and, in more limited cases, even to end employment and withdraw work.
There are many differing views about whether this is a positive or negative development, but overall governments and businesses support it. Governments are focused on the use of AI to promote the economy, and AI is certainly a key focus of the UK government's growth strategy. The UK is home to a third of Europe's AI companies, twice as many as any other European country.
PwC research estimates that UK GDP will be 10.3% higher in 2030 because of the use of AI, equivalent to £232 billion. This will come partly from consumer product enhancements stimulating demand, and partly from labour productivity improvements. It is predicted that AI will create 97 million new jobs worldwide by 2025. However, for AI to develop and drive these goals, it is important that people trust and have confidence in AI systems. That is hard when senior players in the sector are issuing strong words of caution.
At the end of March, senior figures in AI, including Elon Musk, Apple co-founder Steve Wozniak and researchers at DeepMind, called for a halt to the training of AI systems amid fears of a threat to humanity.
The letter, published by the Future of Life Institute, highlighted the risks that future, more advanced systems might pose. Advanced AIs need to be developed with care, the letter says, but instead, "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one — not even their creators — can understand, predict, or reliably control".
The letter warns that AIs could flood information channels with misinformation and replace jobs with automation. Elon Musk and the letter's other signatories called for a halt of at least six months in further development of AI, to be imposed by governments if necessary. There is no indication that this will happen.
What is needed internationally is a regulatory system that supports the drive for development while minimising the risks. The UK government has taken a different approach from the EU when it comes to regulation: it proposes light-touch regulation, with compliance monitored by existing industry regulators.
In March, the government published a consultation paper, 'AI regulation: a pro-innovation approach — policy proposals', seeking views on a policy approach it describes as 'pro-innovation'. The Information Commissioner's Office has responded positively to this approach, and a response from industry is awaited.
The EU's approach places a greater focus on regulation, and it is currently proposing a new regulatory framework in the draft Artificial Intelligence Act. This would introduce a risk-based system under which some uses of AI would be prohibited outright, while others identified as 'high risk' would be subject to greater regulatory requirements.
The EU is also introducing the Platform Workers Directive, which aims to provide greater transparency and rights regarding algorithmic decisions that significantly affect platform workers' working conditions, such as access to work assignments. Although this is an EU proposal, it may affect UK businesses that use workers in the EU.
Without a clear framework for safe integration, employers considering introducing AI into the workplace face many unanswered questions. Chief among these are concerns about the risk of breaching workers' privacy, of making decisions based on incorrect information, and of discriminatory algorithms.
Financial services employers can take comfort from the fact that clear rules apply to profiling and automated decision-making in the workplace, so there is some certainty when using automated recruitment tools. When doing so, employers will be controllers of the personal data processed.
They can undertake these activities provided they comply with the data protection principles established by the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, and have a lawful basis for processing.
The rules include a general restriction on making solely automated decisions that have legal or similarly significant effects on individuals, rights for those individuals to obtain human intervention, express their point of view and contest such decisions, and an obligation to provide meaningful information about the logic involved.
UK financial services businesses have an opportunity now to engage in the debate about the use of AI in the workplace by responding to the consultation. It will be interesting to monitor developments in the EU and the UK, and to assess how far other governments will go to regulate in this area.
This article was first published in Thomson Reuters Regulatory Intelligence.