17 May 2023
Law at Work - May 2023 – 2 of 8 Insights
With ChatGPT taking up so much press coverage recently, there has been a lot of discussion about the impact of both generative and 'general' AI in the workplace. There is no single definition of AI, but for our purposes it is helpful to think of AI as the "science of making machines smart". Core to the idea is that machines can work in the same way as humans, only faster, better and more reliably.
In a 2020 report, Acas highlighted the opportunities and risks associated with greater reliance on algorithms in the workplace. Its key recommendations remain pertinent today.
For most of us, working life has not yet reached the point where our bosses are algorithms. However, many businesses are already using AI to hire and train staff, to carry out monitoring and surveillance, for disciplinary and performance management and work allocation, to decide terms of employment and, in more limited cases, even to end employment and withdraw work.
There are many differing views about whether this is a positive or negative development.
Of course, governments and businesses generally support it. Governments around the world see AI as a way to grow their economies, and AI is certainly a key focus of the UK Government's growth strategy. The UK is home to a third of Europe's AI companies, twice as many as any other European country.
PwC research estimates that UK GDP will be 10.3% higher in 2030 because of the use of AI, equivalent to £232 billion. This will come from consumer product enhancements stimulating demand, but also from improvements in labour productivity. It is predicted that AI will create 97 million new jobs globally by 2025. However, for AI to develop and deliver these glittering goals, people must trust and have confidence in AI systems. That is hard when senior players in the sector are issuing heavy words of caution.
At the end of March, key figures in AI, including Elon Musk, Apple co-founder Steve Wozniak and researchers at DeepMind, called for a halt to the training of AI systems amid fears of a threat to humanity. The open letter, from the Future of Life Institute, highlighted the risks that future, more advanced systems might pose. Advanced AIs need to be developed with care, the letter argues, but instead "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one - not even their creators - can understand, predict, or reliably control". The letter warns that AIs could flood information channels with misinformation and replace jobs with automation. Its signatories have asked governments worldwide to impose a halt of at least six months on further AI development. There is no indication that this will happen.
Clearly, what is needed globally is a regulatory system that supports the drive for development while minimising the risks. The UK Government has taken a very different approach from the EU when it comes to regulation. It proposes a light-touch regime, with compliance monitored by existing industry regulators, an approach that has been positively received by the ICO. The Government published a consultation paper in March seeking views on a policy approach it describes as 'pro-innovation'. A response from industry is awaited.
The EU's approach places a greater focus on regulation. It is currently proposing a new regulatory framework in the draft Artificial Intelligence Act, which would introduce a risk-based system: uses of AI judged to pose an unacceptable risk would be prohibited, while those identified as 'high risk' would be subject to stricter regulatory requirements. In addition, the EU is introducing the Platform Work Directive, which aims to provide greater transparency and rights in relation to algorithmic decisions that significantly affect platform workers' working conditions, such as access to work assignments.
Of course, without a clear framework for how AI may be integrated safely, employers considering its use in the workplace face many questions and concerns. Many are worried about breaching workers' privacy, about making decisions based on incorrect information and about the risk of discriminatory algorithms.
UK businesses have an opportunity now to engage in the debate about the use of AI in the workplace by responding to the current consultation. It will be interesting to monitor developments in the EU and the UK, and to assess how far other governments will go to regulate in this area. We will be monitoring this area closely and updating readers as the regulatory regime develops.
by Helen Farr
by Ruth Moffett