Author

Helen Farr

Partner

17 May 2023

Law at Work - May 2023 – 2 / 8 Insights

AI in the workplace: what's the direction of travel?


With ChatGPT taking up so much press coverage recently, there has been much discussion about the impact of both generative and 'general' AI in the workplace. There is no single definition of AI, but for our purposes it is helpful to think of AI as the "science of making machines smart". Core to this idea is the concept that machines can work in the same way as humans, only faster, better and more reliably.

In a 2020 report, Acas highlighted the opportunities and risks associated with greater reliance on algorithms in the workplace. Its key recommendations remain pertinent today:

  • Algorithms should be used to advise and work alongside human line managers but not to replace them.
  • A human manager should always have final responsibility for any workplace decisions.
  • Line managers need to be trained in how to understand algorithms and how to use an ever-increasing amount of data.

For most of us, working life has not yet reached the point where our bosses are algorithms. However, many businesses are already using AI to hire and train staff, to carry out monitoring and surveillance, for disciplinary and performance management and work allocation, to decide on terms of employment and, in more limited cases, even to end employment and withdraw work.

There are many differing views about whether this is a positive or negative development.

Of course, governments and businesses generally support it. Governments globally see AI as a way to grow their economies, and AI is certainly a key focus of the UK Government's growth strategy. The UK is home to a third of Europe's AI companies – twice as many as any other country in Europe.

PwC research estimates that UK GDP will be 10.3% higher in 2030 because of the use of AI – equivalent to £232 billion. This will come from consumer product enhancements stimulating demand, but also from labour productivity improvements. AI is predicted to create 97 million new jobs globally by 2025. However, for AI to develop and deliver these glittering goals, people must trust and have confidence in AI systems. That is hard when senior players in the sector are issuing heavy words of caution.

At the end of March, key figures in AI, including Elon Musk, Apple co-founder Steve Wozniak and researchers at DeepMind, called for a halt in the training of AI systems amid fears of a threat to humanity. The letter, from the Future of Life Institute, highlighted the risks that future and more advanced systems might pose. Advanced AIs need to be developed with care, the letter argues, but instead "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one - not even their creators - can understand, predict, or reliably control". The letter warns that AIs could flood information channels with misinformation and replace jobs with automation. Musk and the letter's other signatories have called on governments globally to impose at least a six-month halt on further development of AI. There is no indication that this will happen.

Clearly, what is needed globally is a system of regulation that supports the drive for development while minimising the risks. The UK Government has taken a very different approach from the EU when it comes to regulation. It proposes a light-touch approach, with compliance to be monitored by existing industry regulators. This approach has been positively received by the ICO. The Government published a consultation paper in March, seeking views on a policy approach which it describes as 'pro-innovation'. A response from industry is awaited.

The EU's approach places a greater focus on regulation. The EU is currently proposing a new regulatory framework in the draft Artificial Intelligence Act. This would introduce a risk-based system, either prohibiting the use of AI in areas identified as 'high risk' or subjecting it to greater regulatory requirements. In addition, the EU is introducing the Platform Workers Directive, which aims to provide greater transparency and rights regarding decisions made by algorithms that significantly affect platform workers' working conditions, such as access to work assignments.

Of course, for employers considering integrating AI in the workplace, the absence of a clear framework for how AI may be integrated safely raises many questions and concerns. Many employers worry about breaching workers' privacy, making decisions based on incorrect information and the risk of discriminatory algorithms.

UK businesses have an opportunity now to engage in the debate about the use of AI in the workplace by responding to the current consultation. It will be interesting to monitor developments in the EU and the UK, and to assess how far other governments will go to regulate in this area. We will be monitoring this area closely and updating readers as the regulatory regime develops.

In this series

Employment, pensions & mobility

Government announces employment law reforms

11 May 2023

By Shireen Shaikh

Employment, pensions & mobility

AI in the workplace: what's the direction of travel?

17 May 2023

By Helen Farr

Employment, pensions & mobility

Ethnicity Pay Gap Reporting: Guidance published

17 May 2023

By Shireen Shaikh

Employment, pensions & mobility

Court of Appeal enforces 12 month non-compete restriction

17 May 2023

By Ruth Moffett

Employment, pensions & mobility

Hot Topics

15 May 2023

By Shireen Shaikh

Employment, pensions & mobility

Auto-enrolment update: The ongoing battle against non-compliance

3 April 2023

By Afshan Mallik

Pensions

Pensions Bulletin - April 2023

28 April 2023

By Angela Sharma, Anna Taylor

Related Insights

Employment, pensions & mobility

COVID-19 and health testing

15 July 2020

By Helen Farr