New developments and ambitions in artificial intelligence | Fieldfisher


Locations

France

In the face of the technological revolution represented by Artificial Intelligence (AI), Europe has adopted a framework designed to guarantee the trust, rights and freedoms of individuals while promoting digital innovation. On 13 March 2024, the European Parliament adopted the Regulation on Artificial Intelligence (AI Act), establishing a uniform legal framework for the development, marketing and use of AI systems in the European Union (EU).

The broad definition of AI systems, combined with rules and requirements graduated according to the nature and level of risk, guarantees legal certainty while providing the flexibility needed to keep pace with rapid technological developments in this field.

The text applies to all operators and to any use of AI systems, or of their outputs, on EU territory or producing effects within the EU, thereby promoting international convergence.

The AI Act aims to harness the potential of AI systems while protecting fundamental rights, health and safety, and enabling democratic oversight. To that end, it requires all players in the value chain to master AI throughout the system's lifecycle (during development, during use and when interpreting results) by guaranteeing transparency about the system's capabilities and risks.

While the entry into force of the AI Act is imminent, the application of its provisions is staggered, taking into account not only the level of risk but also the rapid emergence of Generative AI (GenAI).

Latest news on generative AI and deployers

The emergence of so-called generative AI has given rise to new uses and challenges, leading the text to distinguish between the "general-purpose AI model" that "has significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how the model is brought to market, and which can be integrated into a variety of downstream systems or applications" and the AI system "based on a general-purpose AI model that has the capacity to serve a variety of purposes, both for direct use and for integration into other AI systems".

With regard to general-purpose AI models, those presenting a so-called "systemic" risk are subject to additional obligations, including the provider's duty to notify the Commission within two weeks, while providers of all such models are subject to specific transparency obligations and must implement a policy to ensure compliance with copyright rules. For general-purpose AI systems that generate synthetic content such as audio, image, video or text, providers must ensure that outputs are marked in a machine-readable format and identifiable as having been generated or manipulated by AI. In addition, providers are subject to appropriate effectiveness, interoperability, robustness and reliability obligations.

These specific obligations, which provide a flexible response to the urgent needs arising from new uses and from the risks to fundamental rights (notably those linked to "deepfakes") and to intellectual property rights, apply without prejudice to the additional requirements applicable where the AI system constitutes a high-risk system, as is notably the case in the health sector.

The responsibility of the "deployer", i.e. the person "using a high-risk AI system under their own authority", consists in particular of taking appropriate technical and organisational measures to ensure that the system is used in accordance with its instructions for use, ensuring that human oversight can be exercised, ensuring, where relevant, that input data is relevant and representative in view of the system's intended purpose, monitoring its operation, informing the provider of any risks, and keeping the automatically generated logs.

For the "deployer" of an AI system that does not constitute a "high-risk" system, the adoption of an AI user charter or "voluntary code of conduct" is encouraged, so as to promote trustworthy AI systems and their appropriate and transparent use.

French ambitions

On the very day that the European Parliament adopted the AI Act, the French Artificial Intelligence Commission, set up in September 2023 to "help make France a country at the forefront of the AI revolution", presented 25 operational recommendations for adapting the national AI strategy, organised around six main lines of action to meet the challenges of AI, including structurally redirecting savings towards innovation, facilitating access to data and adopting the principle of an "AI exception" in public research.

In terms of access to data, the AI Commission points out that data, and personal data in particular, is "an essential ingredient in the development of artificial intelligence" and calls for changes to French rules and practices that are currently more restrictive than the GDPR (particularly in the health sector), by abolishing prior-authorisation procedures and refocusing the CNIL's mandate on innovation.

From a legal point of view, the adoption of the AI Act raises new challenges, particularly as to how its requirements fit with those of the GDPR, which focuses on the protection of individuals. In particular, the requirements of purpose limitation, data minimisation and storage limitation will have to find new conditions of application in this context of massive data use.

Article also published in La Lettre des Juristes d'Affaires