The EU AI Act: between hope and illusion

Following a heated debate between the European co-legislators in 2023, the AI Act was finally adopted before the reshuffle of European representatives that followed the elections to the EU Parliament in June 2024. The AI Act was published in the EU's Official Journal on 12 July 2024 and entered into force on 1 August 2024. Since then, companies have been grappling with the new text and with how it applies to their business.

The AI Act is a very ambitious text. In many respects, it is more complicated than the GDPR. While it was initially conceived as product safety legislation, the rapid development of AI models (in particular, the large language models behind tools such as ChatGPT) pushed the legislator to broaden the scope of the AI Act to ensure it also applied to general-purpose AI models. The EU Parliament also imposed specific provisions aimed at protecting the fundamental rights of EU citizens. The result of this mishmash is a complex, hybrid law that regulates high-risk AI systems in two very distinct scenarios. On the one hand, product manufacturers that integrate AI as a safety component into a product, or develop AI as a product, must comply with both the AI Act and the sector-specific EU harmonized product safety legislation that applies to them (e.g. toys, cars, aircraft, lifts, radio equipment, medical devices, etc.). On the other hand, all companies, regardless of sector, that use AI in one of the specific, pre-defined "high-risk" scenarios set out in the law must also comply with the AI Act.

Unlike the GDPR, which sets out a harmonized set of rules and principles that apply across the board to all entities in the EU, the AI Act takes a risk-based approach and categorizes AI according to its level of risk. Essentially, AI is either prohibited, high-risk or subject to specific transparency requirements, with a stand-alone regime for general-purpose AI models that present a "systemic risk" for the EU market. The AI Act does not intend to regulate all types of AI. For example, AI used for scientific research purposes, as well as certain free and open-source AI systems, falls outside its scope. Nonetheless, the scope of the AI Act remains quite broad, and an extra-territorial provision ensures that entities that develop AI outside the EU with the intention of placing it on the market or putting it into service in the EU are also caught.

The application of the AI Act also differs markedly from that of the GDPR. The context and purpose for which an AI system is used will be key to determining an entity's role and responsibilities under the AI Act, and to understanding what governance and compliance measures it must put in place. These will vary substantially depending on the type of AI and the role that an entity holds in the overall AI ecosystem (e.g. provider, deployer, importer, distributor, etc.). Needless to say, a one-size-fits-all approach is not possible under the AI Act. On the contrary, companies must carefully assess the level of risk associated with the intended use of AI in order to determine how the AI Act applies to them.

Placed in the broader context of EU digital regulation (some may even say "overregulation"), complying with the AI Act may strike some as a daunting task. The EU digital space is becoming increasingly complex, with multiple layers of digital legislation overlapping with one another, whether in the field of data, cybersecurity or digital products and services. Inevitably, companies will have to prioritize their compliance efforts and make certain choices. To some degree, they can leverage the compliance work they have already done in other areas (e.g. GDPR, cybersecurity, product conformity assessments, etc.), but this presupposes a certain level of compliance maturity and does not remove the burden of complying with each of the laws that apply to them.

The AI Act will also require companies to upskill and train their people to ensure they have the skills and expertise needed to carry out their jobs. This may range from basic AI training to in-depth knowledge, depending on each employee's role within the organization. Privacy professionals have already seen their workload increase significantly since the AI Act was adopted, which raises the question of whether Data Protection Officers are sufficiently equipped, trained and prepared to advise on AI governance. Companies should therefore prioritize investing heavily in AI literacy and training.

Lastly, the impact of the AI Act on the rest of the world remains to be seen. While Eurocrats are secretly hoping that the AI Act will have a "Brussels effect" similar to that of the GDPR, such an effect is far from certain. AI is viewed quite differently around the world. While most would agree that certain risks linked to AI must be contained, and that some form of regulation is therefore needed, it seems unlikely that the leading countries in AI innovation, such as China and the USA, will follow in the EU's footsteps. AI is viewed by many as an opportunity for growth and development. Trump has announced his intention to pursue de-regulation in the US, and his priority seems to be boosting innovation. While Europe faces economic stagnation, political turmoil and heightened security threats, the risk is that it takes a back seat in AI innovation, which raises the question: can Europe remain competitive in a new world order dominated by AI?

In the meantime, the provisions of the EU AI Act on high-risk AI systems come into full application on 2 August 2026, which leaves companies less than two years to comply; given the amount of work required, that is not a lot of time. Companies that start implementing an AI governance framework now will be in a better position once the AI Act comes into full application and EU regulators start enforcing it. As a reminder, fines for failing to comply with the provisions on high-risk AI systems can reach 3% of a company's global annual turnover or €15 million, whichever is higher.
