The recent High Court Divisional Court decision in Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin) and Hamad Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC (heard together) issues an unambiguous warning to the legal profession: Generative AI (GenAI) cannot be relied upon without proper verification, and the full weight of professional regulation applies to any material a lawyer endorses, regardless of whether it was drafted by a person or a machine.
These judgments represent the first time the courts have directly addressed the risks of using GenAI to generate legal procedural documents, and the consequences of doing so without verifying the GenAI output.
Both cases were heard together under the Divisional Court’s Hamid jurisdiction, which relates to the Court's inherent power to regulate its own procedures and to enforce duties that lawyers owe to the court. Both Ayinde and Al-Haroun involved legal professionals placing material before the court that was suspected to have been drafted using unchecked, open-source GenAI tools.
In Ayinde, a barrister submitted grounds of claim containing fictitious case law, legal inaccuracies, and unverified content. In Al-Haroun, similar issues arose in a witness statement submitted by a solicitor. The result was wasted court and practitioner time, the submission of false information that risked interfering with the administration of justice, and conduct that the court considered improper, unreasonable, and negligent. In both cases, the practitioners were found to have either knowingly or recklessly misled the court (or attempted to do so) in breach of their professional regulatory obligations. The consequences included wasted costs orders, referrals to professional regulators, and public judicial criticism.
The Court’s position is clear: GenAI does not remove professional responsibility. Lawyers remain accountable for every submission made in their name.
GenAI in Professional Legal Work: Opportunity and Risk
GenAI refers to artificial intelligence systems that can generate (as the name implies) new content, including text, images, and code. These systems are built on large language models (LLMs) trained on vast datasets, which use predictive algorithms to produce fluent, convincing prose in response to user prompts.
Legal professionals are increasingly exploring these tools to expedite procedural drafting, summarise material, conduct legal research, and respond to legal queries.
There is an important distinction between the models available, namely free, general-purpose tools and paid-for, legal-specific platforms:
- Open-source GenAI models (e.g., ChatGPT) are publicly available, not legal-specific, and sometimes prone to "hallucinations", that is, confidently generating incorrect or fabricated content rather than declining to answer;
- Licensed GenAI models are integrated into legal platforms, include domain-specific safeguards, and limit outputs to verified sources.
Even where licensed tools are used, practitioners remain fully responsible for checking and validating all output. As the Court put it in Ayinde, GenAI tools are not “capable of conducting reliable legal research”. They may assist with drafting but cannot be trusted blindly, and cannot be used as a shield from professional accountability.
The Ayinde Case: Fabricated Cases and a Lack of Oversight
In the original Ayinde v The London Borough of Haringey matter, the claimant, Mr Frederick Ayinde brought a judicial review against Haringey Council concerning interim housing provision. He was represented by Haringey Law Centre, who instructed barrister Ms Sarah Forey to draft the grounds for judicial review.
The grounds submitted included:
- Five fictitious case citations
- A misstatement of section 188(3) of the Housing Act 1996
- Americanised spelling ("emphasized") and stylistic patterns indicative of GenAI drafting
When asked by both the defendant’s legal team and her own solicitors to produce the authorities cited, Ms Forey failed to do so. An initial explanation claimed the references were “minor errors” and "cosmetic errors" that would be corrected before the trial, but they were not.
At the hearing, Ritchie J found the errors could not be explained away as cosmetic. He described the conduct of Ms Forey and Haringey Law Centre as “improper, unreasonable, and negligent”. Each was ordered to pay £2,000 in wasted costs and referred to their respective regulators.
Most importantly, the judgment reaffirmed that placing one’s name on a legal document carries full professional responsibility for its content, regardless of the drafting method.
Al-Haroun: 18 Invented Authorities in a Commercial Claim
The Al-Haroun case offered a second example of GenAI misuse, this time in a high-value commercial dispute. The claimant, who sought over £89 million in damages, was represented by Mr Abid Hussain of Primus Solicitors.
In response to court directions for the hearing on paper of an application to strike out the matter or enter summary judgment, the claimant's solicitor submitted a witness statement citing 45 authorities in support of the claimant's position. Upon review by the court, 18 of those authorities did not exist, and many of the others were irrelevant, misquoted, or did not support the arguments advanced.
Dias J dismissed the application and referred the matter to the Hamid list. She expressed serious concerns about the integrity of the material, noting that regulated legal professionals owe a duty not to mislead the court; submitting fabricated or unreliable material breached this duty and undermined the proper administration of justice.
The Court concluded that this was either a deliberate attempt to mislead or a serious failure to exercise basic professional diligence, both unacceptable in legal practice.
Regulatory Duties: AI Does Not Diminish Accountability
The Divisional Court judgment reiterates that the professional duties owed by solicitors and barristers apply regardless of how content is generated. These duties include:
- Not misleading the court, whether intentionally or through carelessness
- Relying only on properly arguable legal arguments
- Ensuring written material is accurate, appropriate, and relevant
- Acting with integrity and maintaining public trust in the profession
- Being accountable for material submitted in their name (whether, as the judgment explains, it was drafted by juniors, colleagues, or GenAI tools)
The Court also noted that these duties apply from the earliest stages of training. Pupil supervisors and law firms must ensure that junior lawyers understand their obligations when using new technologies.
Enforcement Tools: Sanctions and Referrals
The Court has a wide range of enforcement options available where AI-generated material is submitted without proper oversight. These include:
- Referral to regulators such as the SRA or BSB
- Wasted costs orders under CPR 46.8
- Contempt of court proceedings under CPR Part 81
- Referral for criminal investigation in cases of deliberate deception (e.g. perverting the course of justice)
The court acknowledged that public judicial criticism can have a deterrent effect, but emphasised that the risks to the administration of justice from submitting false material are so serious that, except in rare cases, criticism alone will not be a sufficient response.
The seriousness of any sanction will depend on the nature of the breach, the practitioner’s seniority, whether the misconduct was deliberate, and whether steps were taken to mitigate harm.
Crucially, enforcement is not limited to deliberate dishonesty. It also extends to failures in supervision, verification, and legal training, especially where these affect the court’s ability to function properly.
Looking Ahead: Training, Supervision, and Adaptation
The Divisional Court went further than addressing the individuals involved. In both Ayinde and Al-Haroun, the judges emphasised the need for broader institutional reform. The message was clear:
- Regulatory bodies must go beyond issuing guidance. Formal training, supervision, and disciplinary processes must adapt to the realities of GenAI use in legal practice.
- Junior lawyers must be held to the same standards as their more experienced peers; lack of resources or access to legal databases is not an excuse.
- Firms must invest in internal policies and verification processes to govern AI use, especially as GenAI becomes embedded in legal workflows.
The Bar Council, Law Society, and Inns of Court were each called upon to take proactive steps to ensure that legal professionals are fit to practise in an AI-enabled environment.
Conclusion: A Defining Moment
Ayinde and Al-Haroun represent a turning point in the integration of GenAI into legal practice. They are more than cautionary tales: they are foundational authorities on the professional use of emerging technologies in the courtroom.
The message is unequivocal: lawyers may use AI to support their work, but they remain personally responsible for every word submitted to the court. No technology, however sophisticated, can absolve a lawyer from their professional obligations. As the Court made clear, GenAI may assist, but it does not excuse.