Key takeaways from the European Commission's Guidelines on prohibited AI practices

As of 2 February 2025, the EU AI Act ("AI Act") prohibits AI systems that fall within the prohibited AI practices listed under Article 5.

On 4 February 2025, the European Commission published its guidelines on prohibited practices (the "Guidelines"), which clarify key concepts regarding AI technologies posing an unacceptable risk to individuals' safety and fundamental rights. 

These Guidelines not only address prohibited AI practices but also define important concepts related to the AI Act and its scope.

Understanding the interpretation of these practices is essential for businesses that are conducting internal risk assessments and inventories to ensure that the AI systems they develop and/or use comply with the AI Act.

This article focuses on six of the prohibited practices listed under Art. 5 of the AI Act (it does not address real-time remote biometric identification or predictive policing, which mainly concern law enforcement activities) and summarizes the key takeaways for businesses.

  1. Harmful manipulation or deception

The AI Act prohibits certain AI systems that engage in manipulative practices. The Guidelines clarify each of the conditions as follows:

  • Placing on the market, putting into service or using an AI system
  • Deployment of subliminal, manipulative, or deceptive techniques:
    • Subliminal techniques: This may involve the use of visual or auditory subliminal messages designed to influence individuals without their conscious awareness.
    • Purposefully manipulative techniques: This could include AI systems using background sounds or images to alter someone’s mood or behaviour.
    • Deceptive techniques: This involves providing false or misleading information to deceive individuals, undermining their autonomy and decision-making, e.g. a chatbot impersonating a real person through a synthetic voice.
  • Material distortion of behaviour: The AI system must have the objective or the effect of substantially impacting people's behaviour in a manner that appreciably impairs their ability to make an informed decision.
  • Potential significant harm: The distortion of behaviour should potentially cause significant harm to persons. Harm may include physical, psychological, financial, and economic harm.
  2. Harmful exploitation of vulnerabilities

The AI Act prohibits AI systems that exploit any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.

  1. The AI system must exploit vulnerabilities due to age, disability, or socio-economic situations.
  • The AI Act does not define "vulnerabilities," but it may refer to various susceptibilities—cognitive, emotional, physical, or other factors—that affect an individual or group's ability to make informed decisions or influence their behavior.
  • Those vulnerabilities are linked to age (children and older people), disabilities (including a range of physical, mental, intellectual, and sensory impairments), and socio-economic situations (such as those in extreme poverty or from ethnic/religious minorities).
  • The prohibition aims at preventing AI technologies from worsening existing social inequalities by exploiting these vulnerabilities.
  2. The exploitation enabled by the AI system must have the objective, or the effect, of materially distorting the behaviour of a person or a group of persons in a manner that causes, or is likely to cause, harm to that person or group of persons.
  • The exploitation implies a substantial impact but does not necessarily require intent to cause harm.
  • The concept of "harm" includes various adverse impacts such as physical, psychological, financial, and economic.
  • For example, an AI system that uses emotion recognition to support mentally disabled individuals in their daily life may also manipulate them into making harmful decisions, like purchasing products promising unrealistic mental health benefits.
  3. Social scoring

The AI Act prohibits certain social scoring systems. The Guidelines clarify that this prohibition applies when the following conditions are met:

  • Placing on the EU market, putting into service or using an AI system
  • Evaluation or classification of individuals over time: The AI system must be intended or used to evaluate or classify individuals or groups based on:
    • Social behaviour: Actions, habits or interactions within society, e.g. the payment of debts.
    • Known, inferred or predicted personal or personality characteristics: Information such as age, gender, interests or financial situation.
  • Detrimental or unfavourable treatment resulting from social scores: The social score leads to detrimental or unfavourable treatment either:
    • In unrelated social contexts: For example, a tax authority using a predictive AI tool that relies on data about taxpayers' social habits to select certain tax returns for closer inspection.
    • Unjustified or disproportionate treatment: For example, a municipality using an AI system to score trustworthiness of residents based on data points such as insufficient volunteering, which may lead to withdrawal of public benefits.
  4. Untargeted scraping to develop facial recognition databases

The prohibition concerns the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Four conditions must be fulfilled:

  1. The prohibition applies to the placing on the market, the putting into service and the use of such AI systems, meaning that it covers both providers and deployers.
  2. The purpose of the AI system is creating or expanding facial recognition databases.
  • The Guidelines define a "database" as "any organized collection of data enabling rapid search and retrieval by a computer".
  • A "facial recognition database" refers to an organized collection of human faces from digital images or videos which are compared against stored images to identify potential matches. These databases can be temporary, centralized, or decentralized.
  • Importantly, the database does not need to be exclusively dedicated to facial recognition; it is sufficient that it has the capability for such use. This broadens the scope of the prohibition to AI systems that could be repurposed for facial recognition.
  3. The means of populating the database must be untargeted scraping using AI tools.
  • Scraping involves automated tools like web crawlers or bots to extract data automatically from sources such as CCTV, websites, and social media.
  • "Untargeted" scraping refers to the indiscriminate collection of data, similar to a 'vacuum cleaner,' without targeting specific individuals or groups. Therefore, the term "untargeted" refers to data collection without a particular focus on any individual or group.
  • This highlights the broad and potentially invasive nature of untargeted scraping, which can raise privacy concerns and contribute to a sense of "mass surveillance".
  4. The sources are either CCTV footage or the internet.
  • The Guidelines outline that posting facial images on social media does not imply consent for their inclusion in a facial recognition database.
  • Facial images can also be scraped from CCTV footage captured by surveillance cameras in public places such as airports, streets, and parks.
  5. Emotion recognition

The AI Act prohibits emotion recognition AI systems in certain contexts. According to the Guidelines, the following cumulative conditions must apply:

  • Placing on the market, putting into service, or using an AI system
  • Emotion inference via AI systems: The system must identify or infer emotions or intentions of natural persons based on biometric data such as facial expressions, voice characteristics or body gestures. Inferring emotions from written text is excluded.
  • Use in the workplace or educational institutions: The use of emotion recognition in other contexts is not prohibited.
  • Exclusions for medical or safety purposes: AI systems used for medical or safety reasons are exempted from the prohibition. For example, AI systems used for medical reasons to improve blind employees' accessibility are allowed.
  6. Biometric categorization

The AI Act prohibits biometric categorization systems that classify individuals based on their biometric data to deduce or infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

  1. The system must be a biometric categorization system aimed at classifying individuals (…)
  • The categorization of an individual by a biometric system typically involves determining whether their biometric data corresponds to a group with specific predefined characteristics. It is not focused on identifying or verifying the individual's identity, but rather on assigning them to a particular category.
  • It may use physical characteristics (e.g., facial features, skin color), DNA, or behavioural traits (such as keystroke analysis or gait) to assign individuals to specific categories, some of which may be sensitive or protected under Union non-discrimination law, such as race.
  • The prohibition does not apply if the categorization concerns a whole group rather than looking specifically at the individual.
  2. (…) to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
  • For example, a biometric categorization system capable of deducing an individual’s religious beliefs from their tattoos or facial features falls under the prohibition.

Sanctions

Companies can be fined up to EUR 35 million or 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
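By way of illustration, the applicable cap is simply the higher of the fixed amount and the turnover-based amount. A minimal sketch follows (the turnover figure is hypothetical):

```python
# Illustrative only: the maximum fine for prohibited practices is the higher of
# EUR 35 million and 7% of total worldwide annual turnover of the preceding
# financial year. The turnover figure used below is hypothetical.
FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07

def max_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper limit of the fine in EUR."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# Hypothetical example: EUR 1 billion turnover -> cap of EUR 70 million.
print(max_fine_cap(1_000_000_000))  # 70000000.0
```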

Next steps for businesses

Businesses should review whether any AI systems they develop, deploy or source are used in ways that could be considered a prohibited practice. This includes reviewing AI embedded in products, customer interfaces, employee monitoring tools or decision-making processes. The focus should not only be on the technology itself but also on how it is used in practice, as the prohibitions apply to specific use cases rather than to AI systems in the abstract. Businesses are advised to document these assessments and take appropriate action to adjust or discontinue any non-compliant use.
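As a purely illustrative sketch of how such an assessment could be recorded, the snippet below defines a minimal inventory entry; the field names, categories and example system are our own assumptions and are not prescribed by the AI Act or the Guidelines.

```python
# Minimal sketch of an internal AI-use inventory entry.
# Field names, categories and the example system are illustrative assumptions,
# not terminology prescribed by the AI Act or the Guidelines.
from dataclasses import dataclass, field

PROHIBITED_PRACTICES = [
    "harmful manipulation or deception",
    "harmful exploitation of vulnerabilities",
    "social scoring",
    "untargeted scraping for facial recognition databases",
    "emotion recognition in the workplace or education",
    "biometric categorization inferring sensitive attributes",
]

@dataclass
class AIUseCaseAssessment:
    system_name: str
    role: str                    # "provider" or "deployer"
    intended_use: str            # how the system is actually used in practice
    practices_triggered: list[str] = field(default_factory=list)
    action: str = ""             # e.g. "adjust", "discontinue", "no action"

# Hypothetical example entry
assessment = AIUseCaseAssessment(
    system_name="CallCentreSentiment",
    role="deployer",
    intended_use="infers employee emotions from voice during support calls",
    practices_triggered=["emotion recognition in the workplace or education"],
    action="discontinue, unless restricted to medical or safety purposes",
)
print(assessment)
```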

For further guidance on how the prohibitions may apply to your business, please contact the authors directly.

 
