Some provisions of the EU AI Act become applicable on 2nd February 2025.
These are:
- AI literacy requirements; and
- the prohibition of certain AI practices (Chapter II).
See our blog on AI literacy requirements here. This blog deals with the prohibition of certain AI practices and how this may affect your day-to-day use of AI.
Article 5 prohibits the placing on the market, the putting into service, or the use of AI systems for certain purposes described below. Some of these use cases may currently be deployed in areas of security, recruitment or employment.
What AI systems are prohibited?
Under Article 5, AI systems used for the following purposes are prohibited*:
- subliminal techniques or manipulative or deceptive techniques to distort someone's behaviour, causing them significant harm;
- exploiting vulnerabilities of individuals or groups due to protected characteristics (e.g. age or disability), causing them significant harm;
- evaluation or classification of individuals or groups over a certain period of time for social scoring purposes, leading to detrimental or unfavourable treatment;
- assessing or predicting individuals' risk of committing a criminal offence based solely on profiling or assessment of their personality traits and characteristics;
- systems to create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
- systems for inferring the emotions of a natural person in the workplace or in education institutions;
- categorisation of individuals based on their biometric data to deduce or infer their protected characteristics;
- ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement (except in certain circumstances).
*The above is a high-level list. There are conditions for the application of the prohibitions, as well as exemptions to the prohibitions. For instance, "remote biometric identification systems" may be used for the purposes of law enforcement under certain circumstances and "emotion inferring systems" may be used for medical or safety purposes.
Who does it apply to?
The prohibition applies to all players involved in the development and deployment of an AI system, regardless of where they are located. The geographic scope of the EU AI Act is broad: so long as there is an EU 'link', the Act is likely to apply. For example, a provider of a system located outside the EU selling it to EU customers (likely deployers) will be subject to the AI Act. Furthermore, providers and deployers not based in the EU will also be caught where the output of the AI system is used in the EU.
Examples of use cases which are prohibited
- Emotion recognition systems in the context of employment / recruitment.
- Emotion recognition systems in education, for instance, in a recorded exam.
- Assessments of potential criminality of individuals in the context of recruitment based on their social media use.
- Scraping of images from CCTV footage to create an internal face database for security purposes (not law enforcement).
- Use of subliminal techniques causing individuals to make purchases beyond their means.
- Social scoring for the purposes of granting or denying a service based on publicly available information.
- Use of biometric categorisation systems to deduce or infer protected characteristics such as race, religious beliefs, etc. for inclusion and diversity (I&D) purposes.
Next steps
Watch out for the use cases listed above. Even if an exemption is available, it is likely that the relevant AI system will be considered high-risk.
As part of your AI governance efforts:
- Build into your AI risk assessment tools a consideration of whether AI use cases fall into the above categories.
- Educate your staff internally (especially those in HR/recruitment, marketing and security) so that AI systems for these use cases are not rolled out. Doing so dovetails with the AI literacy requirement under Article 4 of the EU AI Act.
- Build in due diligence questions within your vendor due diligence procedures to ensure that you are not inadvertently engaging vendors that use prohibited AI systems.
Even if you are not caught by the EU AI Act, it is important for your AI governance efforts to understand the reasons behind the prohibitions and to put policies and procedures in place to ensure that you use AI in a responsible way.
For more information on how organisations are implementing AI governance, please contact your usual Fieldfisher lawyers.