Earlier this month, we hosted a lunch-and-learn session with our regulator clients, bringing together colleagues from across the regulatory landscape to share insights and best practice on using artificial intelligence (AI) in regulatory decision-making.
As AI technologies become increasingly embedded across sectors, regulators must provide clear guidance about appropriate use of AI to their regulated communities, while also considering how to responsibly use AI in their own decision-making.
In this blog, we reflect on the key themes from our discussion with regulators, and offer practical insights into the opportunities and risks regulators face as they engage with AI.
Providing guidance on AI use
Regulators have a responsibility to guide their sectors on the appropriate use of AI. Without clear direction, regulated entities may hesitate to use AI in their practice, fearing legal uncertainty, reputational risk, or future enforcement action. This hesitancy could stifle innovation and prevent sectors from realising the benefits of AI.
A principles-based approach to guidance is likely to be most effective. Rather than prescribing specific technologies or use cases, regulators can set out high-level expectations - such as fairness, transparency, accountability, and proportionality - that apply regardless of how AI evolves. This approach allows flexibility while still providing a clear framework for compliance. The Council for Licensed Conveyancers recently published its guidance to the sector - 'AI and Technology Principles and Guidance' - which takes this principles-based approach.
The UK Government published an Artificial Intelligence Playbook earlier this year to help government departments and public sector organisations harness the power of a wider range of AI technologies safely, effectively, and responsibly. The AI Playbook sets out ten principles to be followed when using AI:
- Principle 1: You know what AI is and what its limitations are
- Principle 2: You use AI lawfully, ethically and responsibly
- Principle 3: You know how to use AI securely
- Principle 4: You have meaningful human control at the right stage
- Principle 5: You understand how to manage the AI life cycle
- Principle 6: You use the right tool for the job
- Principle 7: You are open and collaborative
- Principle 8: You work with commercial colleagues from the start
- Principle 9: You have the skills and expertise needed to implement and use AI
- Principle 10: You use these principles alongside your organisation's policies and have the right assurance in place
These principles represent a good starting point for any regulator developing guidance for their sector.
Guidance should not be developed in isolation. Ongoing engagement with the regulated community is essential to build trust and to ensure that stakeholders feel comfortable with the use of AI, both in their own practice and by their regulator. Proactive consultation, pilot programmes, and clear communication can help bring stakeholders along on the journey. Above all, each sector will need to grapple with a fundamental question: what should AI be used for? There may be a widespread feeling that certain decisions and processes should always rest with human decision-makers rather than with AI, and regulators will have a leadership role in steering that conversation.
Regulators should also monitor the impact of AI adoption across their sectors. There is a real risk that better-resourced organisations may gain a competitive advantage by deploying more sophisticated AI tools, potentially distorting markets or creating barriers to entry. Understanding these dynamics will be key to ensuring fair and proportionate regulation.
The Government's AI Opportunities Action Plan, published in January 2025, recommends that regulators publish information each year about how they have enabled AI-driven innovation and growth in their sector, including their timelines for publishing guidance and making licence decisions, and the resources they have allocated to AI-focused work.
Opportunities of using AI in regulatory decision-making
Many regulators are now exploring how AI can support their own decision-making processes. The potential benefits are significant:
- Automation of routine tasks: AI can streamline administrative processes and reduce manual workloads, which is particularly valuable in high-volume regulatory environments, such as licensing and compliance checks.
- Data-driven insights: Machine learning can identify patterns in large datasets, enabling more targeted interventions and risk-based regulation. For example, AI could help flag anomalies in financial reporting or detect emerging risks in environmental monitoring (a minimal sketch of this kind of flagging follows this list).
- Consistency and speed: AI systems can apply rules uniformly, reducing human error and bias. This can lead to faster responses to emerging risks or public concerns, as well as more consistent decision-making.
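To make the pattern-detection point concrete, here is a minimal sketch of anomaly flagging on a toy table of annual returns. It is illustrative only: the column names and figures are invented, and it uses a generic isolation forest from scikit-learn rather than any particular regulator's tooling.

```python
# Minimal sketch: flagging anomalous annual returns for human review.
# Toy data and column names are invented for illustration.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical annual-return figures submitted by regulated entities.
returns = pd.DataFrame({
    "turnover":         [1.2e6, 0.9e6, 1.1e6, 9.8e6, 1.0e6, 1.3e6],
    "claimed_expenses": [0.30e6, 0.22e6, 0.25e6, 4.50e6, 0.28e6, 0.31e6],
})

# An isolation forest scores each record by how easily it can be
# separated from the rest; records scored -1 are flagged as outliers.
model = IsolationForest(contamination=0.2, random_state=0)
returns["flagged"] = model.fit_predict(returns)

# Flagged records are queued for a human case officer - the model
# prioritises attention, it does not make the regulatory decision.
print(returns[returns["flagged"] == -1])
```

The key design point is that the model only triages: the outcome remains a human decision, which also speaks to the oversight risks discussed below.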
These opportunities are compelling, but they must be balanced against a range of legal, ethical, and operational risks.
Risks of using AI in regulatory decision-making
The use of AI in decision-making raises complex questions about accountability, transparency, and fairness. Regulators must tread carefully to ensure that legal powers are exercised appropriately and that public confidence is maintained.
- Accountability and discretion: It is essential to clarify who is responsible for AI-assisted decisions. Regulators must ensure legal powers are exercised appropriately and avoid unlawful delegation to AI or fettering of discretion.
- Automation bias: We tend to place too much trust in automated systems, a phenomenon known as automation bias. Regulators must ensure that staff retain their critical judgment, exercising their own discretion rather than blindly accepting AI outputs. Maintaining effective human oversight is essential, especially where rights or obligations are affected.
- Explainability and transparency: Many AI systems, particularly those using deep learning, operate as “black boxes,” making it difficult to understand how decisions are reached. Regulators must ensure that decisions are explainable to affected parties and be transparent about when and how AI is used.
The Government’s Algorithmic Transparency Recording Standard (ATRS) enables public sector organisations to publish information about the algorithmic tools they are using and why they are using them.
The ATRS is mandatory for all government departments and arm's-length bodies that deliver public services or interact with the public, and it is expected to be adopted more widely. Regulators should therefore consider publishing transparency records and proactively informing the public, in clear and accessible language, about when and how AI is used in decision-making.
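As a rough illustration of what a transparency record communicates, the sketch below describes a hypothetical triage tool as a simple Python dictionary. The tool, organisation, and contact details are invented, and the field names are paraphrased for readability - they are not the official ATRS schema, which should be taken from the published templates.

```python
# Hypothetical transparency record for an invented AI triage tool.
# Field names are paraphrased; the published ATRS templates define
# the authoritative structure.
transparency_record = {
    "tool_name": "Licence Application Triage Assistant",
    "organisation": "Example Regulator",
    "purpose": (
        "Ranks incoming licence applications by estimated complexity "
        "so that case officers can prioritise their review."
    ),
    "role_in_decision_making": (
        "Produces a priority score only; every application is still "
        "assessed by a human case officer before any decision is made."
    ),
    "data_used": (
        "Structured fields from the application form; "
        "no special category data."
    ),
    "human_oversight": (
        "Case officers can override the score and are trained to "
        "challenge unexpected rankings."
    ),
    "public_contact": "ai-governance@example-regulator.gov.uk",
}
```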
- Skills and training: Over-reliance on automation could lead to a loss of low-level training opportunities, creating a skills gap among junior staff. Regulators should ensure that staff continue to develop the analytical and legal skills needed to interpret and challenge AI outputs.
- Data protection and consent: AI systems often require large datasets for training. Regulators must ensure there is a lawful basis for processing data and that the AI technology itself complies with data protection legislation.
- Bias and discrimination: AI systems trained on historical data may perpetuate existing biases, leading to discriminatory outcomes. Regulators must assess and mitigate these risks, particularly in relation to protected characteristics under the Equality Act 2010. Conducting an equality impact assessment before deploying new AI technologies is strongly advisable.
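As a simple illustration of the kind of pre-deployment check that might sit alongside an equality impact assessment, the sketch below compares adverse-outcome rates across groups defined by a protected characteristic. The data is invented; a real assessment would cover multiple characteristics, intersections, and appropriate statistical testing.

```python
# Minimal disparate-outcome check on toy data: compare the rate of
# adverse AI-assisted outcomes across two illustrative groups.
import pandas as pd

decisions = pd.DataFrame({
    "group":           ["A", "A", "A", "A", "B", "B", "B", "B"],
    "adverse_outcome": [1, 0, 0, 0, 1, 1, 0, 1],
})

# Adverse-outcome rate per group, and the gap between them.
rates = decisions.groupby("group")["adverse_outcome"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")

# A large gap does not by itself prove discrimination, but it signals
# that the tool needs closer scrutiny before it is relied upon.
```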
Conclusion: A strategic and ethical approach to AI
AI presents both a challenge and an opportunity for regulators. By providing clear, principles-based guidance and engaging proactively with their sectors, regulators can help shape responsible AI adoption. At the same time, they must approach their own use of AI with care - ensuring that decisions remain lawful, fair, and transparent.
As the technology continues to evolve, so too must the regulatory response. We look forward to continuing the conversation with our clients and supporting them as they navigate this complex and exciting landscape.