The benefits of AI literacy extend beyond regulatory compliance. AI literacy is critical to the successful implementation of AI governance. Developing a baseline of AI knowledge, skills, and understanding, tailored to the individual, both increases trust in and engagement with AI governance approaches and promotes innovation using AI tools.
Decisions are better documented, human oversight serves its purpose, and potential issues are discovered earlier. The policies, technical guardrails, and other controls that businesses need to put in place can only achieve their purpose if staff know how to apply them.
An AI literate workforce is more capable of using AI well. A better understanding of the power at our fingertips means we know how to frame problems, craft better prompts, and assess the results. This improves output quality, increases efficiency, and shrinks the 'hallucination' risk.
AI literacy also drives cultural change. It provides a tool to open doors between legal, risk, IT, HR and product teams, sparking new innovations through more joined-up thinking. Better AI literacy can have measurable impacts: higher adoption of approved tools, fewer incidents, more use-cases, and clearer audit trails. In essence, AI literacy helps convert traditional 'governance' into good everyday behaviours.
The EU AI Act's AI literacy obligations have been in force since February 2025, but many businesses are failing to fully appreciate their specific duties. So, what should your business be doing, not just to comply with law but to reap these wider benefits?
What does the law say?
Article 4 requires "providers and deployers of AI systems to take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used". Let's take each key element in turn.
"providers and deployers"
Whether a business is caught by these requirements will depend on the AI use and the operator's role in the AI value chain. The AI literacy requirement applies to providers and deployers of AI systems but does not apply to importers or distributors. These roles are defined within the Regulation and each business will need to assess their own role.
In order to effectively fulfil their AI literacy obligations, deployers will need support from providers either in terms of information about the AI system, or training content that the provider may be better placed to provide or deliver. This can be a challenge if providers are unable or unwilling to provide the necessary support.
"to ensure, to their best extent, a sufficient level"
Whilst 'best extent' may sound a little vague, we suspect this could be interpreted as conceptually similar to the common contractual phrase 'best endeavours'. This is generally taken to indicate that the party will take all steps that are reasonable to achieve the requirement, as well as steps that may be more than merely reasonable. The potential impact here is that, when considering the business case for AI literacy measures, the regulation indicates that more weight should be given to ensuring sufficient literacy rather than focusing on cost and ease of implementing the measures.
This sounds like a high bar (and perhaps it is). However, the aim of the measures to be taken is to achieve a 'sufficient level' of AI literacy. Therefore, even if extraordinary steps must be taken, the requirement is not necessarily to turn everyone into experts overnight. What is 'sufficient' is expressly qualified with reference to both the existing level of technical knowledge, experience, education, and training of the persons becoming AI literate, together with the specific context for the relevant AI systems. It will depend on people's roles, and particularly on the nature of their involvement in operating or using the AI systems to be deployed.
"their staff and other persons dealing with the operation and use of AI systems on their behalf"
Whilst this could be interpreted to mean all staff plus other persons dealing with AI, we believe the actual intent is narrower: only those staff (and other persons) who are dealing with AI. On this reading, only the persons dealing with AI require the relevant training, which avoids placing an unnecessary burden on employers to train all staff irrespective of their interaction with AI. Having said that, some businesses may elect to roll out at least a basic level of training on the basis that an informed workforce is good for both the business and society.
The meaning of 'staff' is relatively straightforward, but 'other persons' dealing with the operation and use of AI systems is a broader concept and will likely include subcontractors and service providers. The wording used is "dealing with the operation and use", rather than "operating and using", and its full extent is currently unclear; it could apply more broadly to others in the supply chain, such as those reselling your products that contain AI.
The AI literacy requirement is unlikely to extend to affected persons more broadly unless they are using the AI systems on your behalf. Instead, there are transparency requirements and other provisions of law (e.g., GDPR) that address the explainability of AI systems. There are also obligations on the relevant authorities (under other provisions of the EU AI Act) to take steps to improve the AI literacy of the public more generally.
What is AI literacy?
So far, we've focused on who the rules affect and the extent to which steps must be taken. But just what is AI literacy anyway?
AI Literacy is defined as "skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause".
Firstly, it's important to note that this extends to skills, not just to knowledge and understanding. Simply providing information is therefore likely insufficient, and a slideshow presentation to staff is unlikely to meet the standard. Training must enhance the AI-related skills of participants. This could include, for example, prompt engineering, data science, AI coding, AI cybersecurity, and interpretation of outputs.
It's critical to realise that the definition is not limited to AI in general (and awareness of its opportunities, risks, and possible harms) but extends to skills, knowledge, and understanding of the specific AI systems that are to be deployed.
Recital 20 of the EU AI Act gives further guidance on what AI literacy means: "AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems". Relevant topics will vary depending on the context, but could include:
- "the correct application of technical elements during the AI system’s development phase". This is going to be particularly relevant to those involved in the development of AI systems, including those selecting the data on which AI models might be trained or fine-tuned;
- "measures to be applied during its use". This is of course relevant to those deploying and operating AI systems, but a necessary prerequisite of this is an assessment as to what measures ought to be applied in relation to any specific AI system in the context of deployment;
- "suitable ways in which to interpret the AI system’s output". This is relevant to those dealing with AI system outputs, including text or audio-visual materials made by generative AI, or predictions and recommendations. Understanding bias and safeguards in handling AI outputs will be relevant; and
- "how decisions taken with the assistance of AI will have an impact on [affected persons]". Whilst this is relevant to what AI literacy means for affected persons, it also applies for those operating and using AI and its outputs who will need to have an appreciation of how use may impact others.
Finally, Recital 20 goes on to say that AI literacy should include the insights required to ensure appropriate compliance with the EU AI Act. Therefore, any staff and other persons operating or using AI systems will need to understand how their actions can impact their business's EU AI Act compliance.
Consequences?
So far as enforcement goes, several regulatory bodies will be established, including an AI Office within the European Commission and an AI Board serving as an advisory body. National public authorities will be responsible for enforcement, akin to the role of data protection authorities under the GDPR. Fines for violations vary depending on the seriousness of the offence. Whether we see substantial fines and penalties for failures to implement AI literacy in isolation remains to be seen, but it would be surprising if poor AI literacy implementation was not taken into account as and when things go wrong.
But setting aside the regulatory requirement for a moment, we've already identified a variety of good reasons to put AI literacy on the agenda thanks to the collateral benefits to the business. From enabling staff to understand and measure the value of data, to empowering them to identify potential areas for innovation and business improvement, to helping them recognise and manage risk—having a high degree of AI literacy allows for AI capabilities to be more easily rolled out and deliver on their promise.
What next?
The AI literacy requirements under Article 4 are now in force, and it is abundantly clear that the law requires more than just a policy, or a memo to all staff accompanied by an hour's generic training. A simple tick-box exercise will not suffice, and businesses wanting to remain compliant will need to engage thoughtfully with the issues.
We're recommending that businesses take steps to maintain awareness of developments in this space. Articles 66(f) and 95(2)(c) contemplate the development of AI literacy tools and voluntary codes of conduct by the authorities, which may impact the interpretation of what is 'sufficient' for the purposes of Article 4, as well as what is considered best practice in terms of AI literacy provided by providers and deployers.