Marguerite Brac de La Perrière and Nicolas Castoldi, both speakers at the Dialogues de la santé organised by Villa M in partnership with Le Point on 30 May in Paris, shed light on the contours of ethical AI.
Although most artificial intelligence (AI) models are still at the research stage, some fairly mature AI systems have been revolutionising hospital care for several years.
This is the case in biology, ophthalmology and medical imaging, where certain software programs help to interpret examinations. As the number of AI applications in healthcare grows, the ethical issues surrounding this transition are becoming increasingly important.
Transparency, accountability, access to data, reliability of algorithms, validation of models... These are just some of the issues that will be discussed at the Dialogues de la santé (DDLS) dedicated to AI.
An ongoing process of reflection
For Nicolas Castoldi, deputy director to the managing director of AP-HP, and a speaker at the DDLS, hospitals are the first to be concerned. “On the one hand, they have been collecting increasingly precise data over the years, while providing care. And, on the other hand, in partnership or alone, they are stakeholders in research, the exploitation of this data and the construction of AI models.”
And, because this is the world of healthcare, “healthcare data inherently carries a very particular level of sensitivity and protection,” he stresses. “To build AI models, we collectively need to be able to develop them, prove their effectiveness, experiment with them, test them and validate them.”
The question of validating and deploying new AI systems is one of the major challenges facing hospitals. How can the reliability of an algorithm be verified? How can it be deployed simply and effectively for caregivers? What is the medical impact and what is the business model?
Today, these are everyday questions that vary according to the type of AI used. “We're in a transition phase,” concludes Nicolas Castoldi. “The ultimate challenge is to adapt and develop models in small steps. To do this, we need to try to work concretely on the first use cases and experiments to collectively form a doctrine.”
GDPR versus AI Act
Given this constant need for adaptation, the AI Act, adopted by the European Union on 13 March, provides the flexibility needed both to innovate and to evolve in the face of future AI challenges. “We are dealing with a truly generic text, in line with fifteen years of thinking about AI in healthcare,” sums up Marguerite Brac de La Perrière, a partner at Fieldfisher specialising in digital and healthcare, and also a speaker at the DDLS.
One of the main sources of uncertainty for healthcare players is how the GDPR and the AI Act will fit together, as the two impose requirements that are sometimes difficult to reconcile. This is the case with the data retention limits imposed by the GDPR and the need to use long-term data as the basis for AI models. “While we await the European Commission's guidance and implementing legislation, we're moving forward based on common sense and the goals of the regulations, while documenting our actions to justify each trade-off we make.”
Europe, a value leader
What about non-European innovations? Should we reject them in the name of caution? The answer is no, provided that the AI system in question meets the requirements of the AI Act before it can be marketed. “The advantage is that the rules are so broad that everyone will be obliged to follow them,” says the lawyer. “This shouldn't stifle innovation, but rather bring everyone up to speed.”
In terms of partners, Hôtel-Dieu works mainly with French and sometimes European start-ups. Assistance Publique also has a number of partnerships with major industrial players, both French and non-French. “What we have in common is that we pay systematic attention to ensuring that the health data framework complies with both regulations and our standards. This is a sine qua non for us,” says Nicolas Castoldi.
Systemic trust
For patients, AI is an opportunity to access better care, but again, not without questions, particularly around consent to the use of data to develop AI models. “Although it's essential to inform patients so that they can object, the subject is so technical and the risks so difficult to grasp that it's almost impossible to give fully informed consent,” explains Marguerite Brac de La Perrière, who defends the idea of societal systemic consent, with clearly defined conditions to limit the risks of violating people's rights and freedoms.
Nicolas Castoldi agrees. “The basis of this transformation towards AI is trust and our ability to guarantee a very high level of security for all patients. This is an important point for us and one on which we are, by definition, very demanding.” For the lawyer, this requirement is in any case reflected in European regulations for high-risk AI (that is, almost all AI systems used in healthcare), which guarantee development, marketing, deployment and use in line with EU values.
AI vs. doctor?
This duty of transparency also applies to liability. Who is at fault in the event of a diagnostic error, for example? “The answer is contractual. Everything will depend on what the AI supplier has announced in the contract. The idea, of course, is not to exonerate manufacturers from liability, but to ensure that they are fully informed about the conditions of use and limitations of the system. And, in all cases, the doctor is only assisted by the AI.”
Sometimes, the debate can lead to surprising situations. Take, for example, this question raised by Marguerite Brac de La Perrière: which result should prevail if the doctor's diagnosis differs from that of the AI? “After all, there's a good chance that the AI has accumulated more experience than the doctor. And as for bias, the doctor has biases too. The most important thing, then, is for everyone to be informed of both the conditions of use and characteristics of the AI system and its results, so as to benefit from the best of both, i.e. an augmented doctor.”
Faced with these challenges, there is a collective need to think ahead. “AI will profoundly transform the face of healthcare. We need to rethink our entire social organization,” concludes Nicolas Castoldi.