AI in Employment: Recent UK Legal Updates

The UK government has, to date, adopted a 'pro-innovation' stance on regulating Artificial Intelligence ('AI'). Rather than legislating immediately, it has pursued a principles-based approach, tasking selected regulators with developing sector-specific regulatory guidance.

While many jurisdictions have taken a similar approach, in March 2024 the EU approved the Artificial Intelligence Act (EU AI Act), which is anticipated to become the international standard.

Against this backdrop, new UK industry guidance on AI has been published, alongside calls for AI legislation. Below is a summary of recent UK AI developments that may affect employment law:

  1. Responsible AI in recruitment guidance

In March 2024, the Department for Science, Innovation and Technology issued the 'Responsible AI in Recruitment Guidance'. This highlights some of the key risks associated with deploying AI-enabled tools in HR and recruitment processes, including potential breaches of: (1) the Equality Act 2010, for example by perpetuating existing biases inherent in training data; and (2) data protection law, for example by failing to meet transparency and/or accuracy requirements.

The guidance sets out the processes, assurance measures and mechanisms that employers should consider and put in place before, during and after procuring and deploying AI in the workplace. It also summarises the practical steps employers should take, including:

  1. conducting AI/bias impact assessments and audits/monitoring, performance testing and due diligence;
  2. training staff;
  3. putting in place ethical and governance frameworks, principles and policies, and reasonable adjustments; and
  4. providing channels for contestability of AI-based decisions.

The aim is to ensure that systems are set up responsibly, in accordance with the UK's AI regulatory principles.

  2. Equality and Human Rights Commission update on approach to regulating AI

On 30 April 2024, the Equality and Human Rights Commission ('EHRC') published an update on its 'approach to regulating AI'.

The EHRC has prioritised AI within its strategic plan for 2022–2025. In 2024–25 it will focus on reducing and preventing digital exclusion, particularly for older and disabled people. Its focus will predominantly be on access to local services, the use of AI in recruitment practices, the development of solutions to address bias and discrimination in AI systems, and the use of facial recognition technology ('FRT').

The EHRC has voiced concern that the use of FRT is becoming ingrained and normalised in a way that, it says, will not be possible to move away from once established. It has committed to partnering with the Centre for Data Ethics and Innovation (CDEI) on the Fairness Innovation Challenge to develop tools for tackling algorithmic bias and discrimination.

The EHRC has also shown an appetite for supporting claimants in Employment Tribunal discrimination cases involving FRT, such as the recently settled case of Manjang v Uber Eats UK Limited. Although that case did not ultimately test whether the use of FRT was discriminatory, it demonstrated the significant public interest in the potential for discrimination by AI in the workplace. We can expect continued interest in this area and an increasing emphasis on the need for transparency and good governance when implementing and using AI tools in the workplace.

  3. Regulating AI: the ICO's Strategic Approach

On 1 May 2024, the Information Commissioner's Office (ICO) published 'Regulating AI: The ICO's Strategic Approach'. This sets out the steps the ICO is taking to drive forward the principles set out in the AI Regulation White Paper, including providing:

  1. Guidance: for example, on AI and data protection, automated decision-making and profiling, explaining decisions made with AI, and biometric recognition technologies;
  2. Advice and support: for AI innovators, including the ICO Regulatory Sandbox, Innovation Advice and Innovation Hub services, and a programme of consensual audits;
  3. Enforcement action: which can include issuing information notices, assessment notices, enforcement notices and monetary penalty notices; and
  4. Collaboration: with other regulators, the government, standards bodies and international partners.

In spring 2025, the ICO plans to consult on updates to its guidance on AI and data protection, and on automated decision-making and profiling, to reflect changes introduced by the forthcoming Data Protection and Digital Information Bill. There will also be a continued focus on biometric technologies.

  4. TUC AI Employment Bill

In April 2024, the Trades Union Congress (TUC) published the draft Artificial Intelligence (Employment and Regulation) Bill (the 'TUC AI Bill'). This sets out a potential UK legislative framework for regulating the use of AI in the workplace, with a view to protecting the rights and interests of employees.

It aligns with the EU AI Act by focusing on 'high-risk' AI decision-making relating to employment matters. 'High-risk' is defined widely, broadly capturing any decision that could affect workers' legal rights or significant aspects of their employment, such as hiring, firing and/or assessing performance.

Key proposed requirements for employers when using AI to make ‘high-risk’ decisions would include:

  1. mandatory workplace AI risk assessments covering, for example, health and safety, data protection, equality and human rights, before any high-risk decision-making takes place;
  2. workers and unions to have significant access to information about how employer AI systems operate;
  3. mandatory consultation with unions before deploying high-risk AI decision-making systems;
  4. employers to establish and maintain a register of information about AI used in high-risk decision-making;
  5. employee rights to a personalised statement explaining how they may be impacted by AI decisions;
  6. employees entitled to human reconsideration of any automated high-risk decision;
  7. prohibition on the use of 'emotion recognition' technology when making any high-risk decision that could be detrimental to employees, workers or jobseekers; and
  8. a statutory right to disconnect for employees and protection from dismissal or detrimental treatment for exercising that right, subject to caveats.

The TUC AI Bill would also prohibit discrimination via the use of AI and proposes amending the Equality Act 2010 so that employers would be held liable for decisions made by AI. If implemented, the burden of proof would be on the employer to show that no discrimination occurred, whether by the AI system or by any human involved in its operation (subject to a defence).

The Labour Party has indicated that it would introduce a new 'right to switch off'. Depending on the outcome of the upcoming general election, this proposed new right may be implemented in due course. However, it remains uncertain whether the TUC AI Bill will pass into law, either as drafted or in a modified form (perhaps one that better balances the interests of employers and employees).

These recent updates reflect the evolving regulatory landscape surrounding AI in the UK and emphasise the importance of responsible and transparent AI deployment in the workplace. Employers should stay informed of further developments and ensure compliance with evolving legal standards.
