The EU AI Regulation: How to Get Ahead of the Curve

Artificial intelligence is rapidly reshaping the landscape of regulatory compliance. With its ability to streamline processes, analyse vast quantities of data and increase efficiency, it is easy to see why AI is becoming embedded in compliance processes. Yet while the technology is clearly here to stay, and should be embraced, there are legitimate concerns around privacy, accuracy and transparency.

A significant milestone was reached in August 2024, when Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the AI Regulation) came into force. This marks a major step towards a harmonised regulatory framework that aims to foster responsible AI development and deployment in the EU. The Regulation becomes applicable in Member States via a phased approach over 36 months, beginning on 2 February 2025.

Essential Elements of the AI Regulation:

Risk-Based Approach:
In an effort to avoid over-regulation and keep in line with the principle of legislative proportionality, the AI Regulation implements a risk-based approach. There are four risk categories: Unacceptable Risk, High Risk, Limited Risk and Minimal Risk. AI systems are classified into one of these categories according to criteria set out in the Regulation. With Unacceptable Risk systems being prohibited outright and Minimal Risk systems enjoying minimal restriction, the Regulation seeks to strike a balance between innovation on the one hand, and the protection of fundamental rights on the other.

Non-Compliance Penalties
Sanctions apply to organisations that fail to implement changes on time or disregard their obligations under the Regulation. A graduated penalty structure has been adopted, with fines varying according to the severity and nature of the offence. With fines of up to €35 million or 7% of an organisation's worldwide annual turnover (whichever is higher), there is a clear intention to incentivise compliance.

GPAI Foundation Models
General Purpose AI (GPAI) models, which are commonly trained on extensive datasets using methods such as self-supervised, unsupervised or reinforcement learning, are highly versatile and applicable across numerous sectors and industries. It is therefore unsurprising that they have attracted specific and substantial regulatory focus.

The AI Regulation seeks transparency in this area, requiring technical documentation to be produced and users to be made aware of the intended use of a particular GPAI model, alongside its possible misuses and limitations.

GPAI models posing a systemic risk are subject to further obligations, including but not limited to: continuous assessment and mitigation of systemic risks, model evaluations, and the documenting and reporting of AI incidents to national competent authorities.

Impact of the AI Regulation on Regulatory Bodies: 

Creation of safe experimentation spaces
The EU aims to foster trustworthy AI in Europe through transparent, accountable and ethical means. This approach is evident throughout the AI Regulation, with particular emphasis on Article 57, which provides for AI regulatory sandboxes. Monitored through a combination of regulatory oversight, strict eligibility criteria and legal safeguards, these sandboxes create safe spaces in which innovative AI systems can be developed and tested under regulatory supervision, without the risk of disrupting existing programmes. This in turn promotes innovation while identifying and mitigating potential risks.

Expanded Responsibilities
Regulatory bodies will have greater responsibility as a result of the AI Regulation. Existing frameworks (such as those governing cybersecurity, data protection and privacy) will need to be adapted to incorporate AI-specific risks, especially in higher-risk areas like healthcare and finance. Regulatory bodies will need to develop technical knowledge in order to identify the risks posed by AI and ensure the best interests of the public are protected.

Cross-Border Co-operation
Harmonisation within the Union will be a direct consequence of the Regulation and should result in increased cross-border co-operation. This co-operation is likely to take many forms, such as shared incident reporting to pinpoint patterns and potential risks, sectoral codes of AI conduct and shared best practices. The result will be a consistent and more effective approach to AI regulation throughout the EU.

How To Prepare your Organisation: 
Although full compliance with the AI Regulation is not required until August 2027, many of the Regulation's core provisions take effect by August 2026. By adopting early compliance measures, organisations can establish themselves as leaders in their industry, setting the standard for AI use while enhancing their reputation and building trust among stakeholders.

With that in mind, here are some tips which will make compliance easier:

Stocktake of AI Inventory
Create an inventory of all AI systems and models your organisation currently uses or plans to adopt, and establish whether any fall within the scope of the Regulation. Identify the person or team who will be responsible for AI governance. By assessing the risk classification of every item listed, organisations can anticipate any obligations that may arise in the future.
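For organisations that prefer a structured starting point, the stocktake described above can be sketched as a simple machine-readable register. The sketch below is purely illustrative: the four risk tiers follow the categories named in the Regulation, but the field names, helper function and example entries are hypothetical, and any real classification should be made against the Regulation's own criteria with legal advice.

```python
from dataclasses import dataclass
from enum import Enum

# The four risk categories named in the AI Regulation.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # substantial compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # minimal restriction

@dataclass
class AISystem:
    name: str            # internal name of the system or model
    purpose: str         # what the system is used for
    owner: str           # person or team responsible for its governance
    risk_tier: RiskTier  # provisional classification (to be verified)

def systems_requiring_action(inventory):
    """Return systems that are prohibited or carry substantial obligations."""
    return [s for s in inventory
            if s.risk_tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)]

# Hypothetical example entries for illustration only.
inventory = [
    AISystem("cv-screening", "Recruitment shortlisting", "HR Ops", RiskTier.HIGH),
    AISystem("support-chatbot", "Customer FAQ assistant", "IT", RiskTier.LIMITED),
]

flagged = systems_requiring_action(inventory)
```

Even a register this simple answers the questions the stocktake is meant to surface: what AI is in use, who owns it, and which systems are likely to attract obligations under the Regulation.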

Update Internal Knowledge/Expertise
With the AI Regulation introducing an array of legal obligations and technical standards, it is essential that organisations equip employees with the necessary knowledge to navigate this highly technical area of law. An educated workforce will be able to adapt to any case law or guidance that may emerge, allowing quicker and more informed decisions. Educating your workforce is a strategic investment that should improve performance and make your organisation more efficient moving forward.

Prepare for External Oversight
Under the AI Regulation, the Irish government has designated an initial list of eight public bodies (alongside a further nine national public authorities responsible for safeguarding fundamental rights from the impact of high-risk AI systems) as competent authorities responsible for overseeing compliance. This oversight will take a variety of forms, such as audits, inspections and investigations. Organisations should establish governance programmes that allow up-to-date and transparent documentation to be provided to the regulator on request. Failure to supply adequate records may result in financial penalties and reputational damage.

How Fieldfisher can help
Fieldfisher's experts across its Public & Regulatory and Corporate departments have vast experience in advising on cutting-edge regulatory regimes and are well placed to help your organisation navigate the challenges that may lie ahead in implementing AI technology in a legally robust way. Reach out to one of our experts to find out more about how we can help your organisation.

Written by: Jonathan Moore and Liam Óg Beausang

Areas of Expertise

Public and Regulatory