As we rightly reported back at the beginning of the year (2025 will be ‘a blockbuster year’ for AI and UK plans to be an "AI maker" not an "AI taker" with its new copyright consultation and AI Opportunities Action Plan), 2025 is proving to be a very busy year for copyright and AI, not only in the UK, but around the world, and the tension continues between technological advancement and the protection of copyright.
The main concern among the creative industries (authors, artists, content creators) is the unauthorised use of their copyright content to train AI models, and it is proving very difficult to find a solution that will satisfy both the technology and the creative industries. Below we look at various developments (or lack thereof) in the copyright and AI space and what is on the horizon.
UK developments
New AI bill to come in 2026
It was reported on 7 June 2025 that Peter Kyle, Secretary of State for Science, Innovation and Technology, intends to introduce a "comprehensive" AI bill to address concerns relating to safety and copyright, but that this will not be in place for at least another year. It is positive that the UK seems to have a plan for regulation, but the delay is frustrating. The Labour government had intended to introduce a short AI bill shortly after coming into office, but that was derailed by a decision to wait and align with Donald Trump's administration, and by a reluctance to weaken the UK's appeal to AI companies. Press reports indicate that Peter Kyle recently wrote to MPs pledging to establish a cross-party working group in parliament to tackle AI and copyright.
It will be interesting to see what this means for the AI and copyright proposals put forward by the House of Lords in the Data (Use and Access) Bill which is in the final stages of its parliamentary journey (see below).
The UK's Data (Use and Access) Bill controversy
The Data (Use and Access) Bill (DUA Bill) was introduced to Parliament on 23 October 2024 (after two previous incarnations under the Conservative government) and aims to modernise the UK’s data protection framework and enhance the use of data for public and economic benefits in the UK.
There has been a lot of debate and parliamentary ping-pong about various amendments introduced to the DUA Bill by the House of Lords regarding copyright and AI. The House of Lords has been pushing for proposals which would give greater protection and transparency to the creative industries, requiring AI firms to disclose which copyright materials they use for AI training.
Baroness Kidron, the British filmmaker and politician who put these proposals forward in the House of Lords, had been concerned about the limited scope of the UK copyright and AI consultation. She thought that the copyright proposals under the DUA Bill would help to make the copyright regime fit for the age of AI and would mean less regulation than the text and data mining (TDM) exception with opt-out mechanism (see below for further details on this).
However, in March 2025 the DUA Bill was sent for scrutiny by a Public Bill Committee, which voted to remove the proposed copyright provisions. A government spokesperson said that, while it wanted genuine transparency about what is used in training AI, alongside rights holders' control of their work and appropriate access to training material for AI developers, it did not believe the DUA Bill was the right vehicle for action. Introducing 'piecemeal' legislation would be of no benefit, and the government needed more time to review the responses to the copyright and AI consultation. It was not right to pre-empt the consultation with these proposed clauses, and it was of the utmost importance not to rush to legislate 'before everything is in order' but to take on board messages from creators.
On 12 May 2025, the House of Lords proposed further copyright and AI amendments. These amendments were backed by over 400 leading figures from the UK's creative industries, such as Sir Elton John, Sir Paul McCartney, Coldplay and Sir Ian McKellen, who wrote to the Prime Minister expressing their support. However, on 15 May 2025, the House of Commons rejected those proposals too, relying on the rarely used principle of financial privilege (which the Commons can assert over the Lords in matters related to public spending, i.e. it claimed the amendments would impose financial burdens). The government suggested instead that these issues could be addressed through the current consultation and impact assessments (how many more of these do we need?!).
At the time of writing, the DUA Bill is in its final stages with a fifth set of amendments proposed by Baroness Kidron being considered on 10 June 2025 (relating to the disclosure of AI training materials). It is unusual for a bill to still be in such a state of flux so late in its parliamentary journey and it may be that with the announcement of a more comprehensive AI bill next year, the copyright and AI proposals will be pulled altogether. If the two houses continue to disagree on the text for the bill, there is a risk the entire bill may not survive this parliamentary session.
Copyright and AI Consultation: December 2024
The government launched its latest consultation on copyright and AI in December 2024 and it closed at the end of February 2025. The consultation indicated that the government's preferred option for addressing the concerns of copyright holders whose content is being used for AI training was to introduce a new, broader TDM exception covering all purposes (under current UK law the exception applies only to research for non-commercial purposes), from which rightsholders could reserve their rights, underpinned by transparency measures in relation to the content AI developers use to train their models. This provision would be similar to the TDM exception provided for by Article 4 of the Digital Single Market Directive (2019/790) (DSM Directive), which the UK did not end up implementing due to Brexit.
In a nutshell, AI developers would be able to scrape content and rely on the TDM exception, if the copyright owner’s rights have not been reserved. The government is currently still digesting over 11,500 responses it received, a significant number for a government consultation, reflecting the importance of the subject matter in the UK.
The creative industries continue to lobby against introducing the government-favoured broader TDM exception, which they consider amounts to copyright theft on a huge scale, preferring instead licensing as a way of securing adequate remuneration for their works. That is already happening for some organisations, as we have seen from the Financial Times, which is the first UK publisher to sign a licensing deal with OpenAI. Rightsholders also have significant concerns about the enforceability and technical viability of the rights reservation system. Implementing and monitoring such a system could be challenging and costly, and the fact that the EU has yet to find an adequate technical solution for the opt-out speaks for itself.
It has not yet been announced when the response to the consultation will be released, but press reports indicate it will be over the summer or by October 2025. The government made it clear that it will not rush into anything before it has had time to carefully consider each and every response, and that it will continue to engage with interested stakeholders in order to come to an informed decision and a practical way forward that meets all objectives. The responses to the consultation are likely to inform any decision-making in relation to the new AI bill, but this will all take time, hence the delay to the bill.
Further policy developments
A huge part of the problem is that rights holders do not know exactly which works are being used, owing to the lack of transparency from AI developers about the materials they use to train their AI tools. At a Westminster parliamentary debate on 23 April 2025 on the impact of AI on IP, MPs were unanimous that the proposed TDM exception with opt-out was 'unworkable' and not the right solution; they favoured instead licensing options (which have been tried and tested for over a century) coupled with transparency requirements.
At the debate, the government was encouraged to "lead from the front by ensuring that the Data (Use and Access) Bill includes granular transparency requirements so that AI companies must disclose what they use for free, and to put a stop to the unregulated scraping of creative content online." However, Chris Bryant, Minister for Culture, Media and Sport, indicated that the government still favours the new TDM exception with opt-out and sought to allay concerns, saying that the issues can be addressed by 'clever use of technology'.
Shortly after that debate, in early May 2025, Peter Kyle said that the proposal to introduce a rights reservation system was no longer his preferred option but one of several under consideration. He clarified that the government is particularly interested in encouraging licensing agreements between AI companies and creators as a way of ensuring creators are adequately paid for their content.
On 10 April 2025, the Culture, Media and Sport Committee had also published a report into British high-end television (HETV) and film, which included a section on the impact of AI on the industry, particularly concerning the licensing of creative works to train AI models. The report recommends that the government abandon its preference for a TDM exception for AI training with a rights reservation model, and instead require AI developers to license any copyright works before using them to train their AI models. Labour MP James Frith made an important observation that, "It is absolutely imperative that we strike the right balance. This is not about pitting one side against the other; it is about coexistence and mutual interdependence".
On 3 June 2025, UK Culture Secretary Lisa Nandy delivered a keynote speech at the Media & Telecoms 2025 & Beyond Conference in London. She confirmed that the government would consult further with representatives from the creative industries on the impact of AI and vowed to 'find a way forward that works for the creative industry and creators, as well as the tech industries'. She acknowledged the importance and value of the creative industries in the UK and the challenges that copyright, authorship and fair compensation posed in this AI 'revolution'. She stressed that it was crucial to manage this period of transition and that as soon as the Data (Use and Access) Bill was passed (see above), she would work with Peter Kyle and start a series of roundtables with representatives from the creative industries to develop legislation with careful thought from Parliament and with 'no preferred option in mind'. She referred to the AI and copyright consultation and reassured interested parties, '[W]e have heard you loud and clear'.
On 9 June 2025, the British Film Institute (BFI) launched a report on generative AI for the screen industries in partnership with CoStar universities Goldsmiths, Loughborough and Edinburgh. The report, entitled 'AI in the Screen Sector: Perspectives and Paths Forward', addresses the impact of generative AI on the screen industries, including copyright issues. It highlights significant concerns about the use of copyright material, such as film and TV scripts, in training generative AI models without proper licensing or permission. The report offers various recommendations to be delivered over the next three years, including making the UK a 'world-leading IP licensing market'. It notes that 79 licensing deals for AI training were signed globally between March 2023 and February 2025, that the Copyright Licensing Agency is currently developing a generative AI training licence, and that other companies are enabling deals between rightsholders and AI developers. The report considers the UK to be well-positioned to lead in this space and states that, '[B]y formalising IP licensing for AI training and fostering partnerships between rightsholders and AI developers, the UK can protect creative value, incentivise innovation, and establish itself as a hub for ethical and commercially viable AI supported content production'.
We can see from the various developments and initiatives in the UK that whilst legislation is on the cards, the TDM exception with rights reservation is now perhaps a less favourable option and that licensing is a valued and preferred option, as long as the creative industries and the AI developers can collaborate effectively.
Artificial Intelligence (Regulation) Bill 2024-25
On 4 March 2025, the Artificial Intelligence (Regulation) Bill was re-introduced into the House of Lords as a Private Members’ Bill (PMB). This bill was first introduced in November 2023 and was then dropped as a result of the announcement of the General Election back in June 2024 when Parliament was dissolved.
PMBs are public bills introduced by members of the Commons and Lords who are not government ministers. Although few PMBs become law, they are a way of exerting political influence on particular issues.
The PMB is a short bill (about 7 pages) and its main purpose is to establish an 'AI Authority' to oversee the regulatory approach to AI in the UK. It also contains some provisions relating to the protection of IP in the use of AI:
- Section 2 - a general principle that businesses which develop, deploy or use AI should comply with intellectual property law
- Section 5 - a requirement for those involved in training AI to supply a record of all third-party data and IP used in that training
There has been very little movement on this bill which has been waiting for its second reading in the House of Lords for a very long time. It is unlikely to go anywhere now during this parliamentary session, especially given the announcement of another more comprehensive AI bill coming next year.
It is interesting to view this single bill in the UK against the backdrop of the number of pending AI bills in the US, which grew to around 780 over a few months in early 2025.
Legal disputes
Getty Images v Stability AI
Everybody will be watching the UK High Court action in Getty Images v Stability AI which headed to trial in the UK on 9 June 2025 (and in which we are privileged to act for Getty Images). Getty Images claims that Stability AI has infringed intellectual property rights including copyright in content owned or represented by Getty Images. Its position is that Stability AI has unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images without a licence.
A decision in Getty Images v Stability AI has the potential to significantly shape the AI copyright litigation landscape. It will provide important guidance on how business models like those of Stability AI are treated under UK copyright law, which will also be of substantial interest to other jurisdictions. It may also have a bearing on the trajectory of similar litigation in the pipeline, and on where the correct balance between rightsholders and AI developers should be struck.
EU developments
EU AI Act – Codes of Practice
Under Article 56 of Regulation (EU) 2024/1689 (EU AI Act), the AI Office is tasked with producing codes of practice to help providers of General Purpose AI (GPAI) models (e.g. ChatGPT) to comply with their obligations under the EU AI Act. These obligations include compliance with transparency requirements, such as drawing up and keeping up-to-date the technical documentation of the AI model including its training and testing process and providing clear information to companies intending to integrate a GPAI model into their AI systems. The codes of practice should inform providers of GPAI models about the level of detail required when publishing summaries of the content used for AI training.
Article 56 provided that these codes of practice would be ready 'at the latest by 2 May 2025', but once that deadline had passed, the EU AI Office confirmed the codes were delayed and are now expected 'by August'. That is dangerously close to 2 August 2025, the date on which the Act's provisions for GPAI models start to apply. It is not surprising that the codes of practice are proving difficult to finalise given the contentious AI and copyright subject matter, and there is no point rushing anything through just to meet a deadline; the result must be workable for all stakeholders. Interestingly, the regulation does anticipate that it might not be possible to produce comprehensive codes and allows for a fallback: in simple terms, if the industry cannot agree a suitable code of practice for AI by the set deadline, the European Commission can establish mandatory rules to ensure AI systems meet safety and transparency standards.
EUIPO study on generative AI and copyright
On 12 May 2025, the European Intellectual Property Office (EUIPO), following extensive research and analysis, published a study on generative AI and copyright entitled 'The development of Generative Artificial Intelligence from a copyright perspective'. The study is designed to clarify how generative AI systems interact with copyright – technically, legally and economically – and focuses on three interconnected areas: (1) the use of copyright-protected works as training data for generative AI models; (2) the generation of new content by these systems, and the legal questions this raises; and (3) the wider implications for creators, AI developers, and the copyright ecosystem.
The detailed study (a mere 436 pages!) examines how copyright-protected content is used for AI training, which EU legal framework applies, how creators are able to reserve their rights through opt-out mechanisms and what technologies exist to identify or mark AI-generated output. It also explores licensing opportunities. The study is aimed at experts in the field but it is hoped it will also be accessible to a wider audience with its breadth of information and resources.
In line with the 'EUIPO Strategic Plan 2030', the study also refers to the launch of an 'EUIPO Copyright Knowledge Centre' by the end of 2025 which aims to equip copyright holders with 'clear, practical information on how their works may be used in the development of GenAI'. It will be a hub where stakeholders can come together and share needs, identify gaps and collaborate.
It is hoped that the study will deepen the understanding of generative AI technical functions and its relationship with copyright law and provide a foundation for informed decision-making and coordinated action going forward. See here for the EUIPO's press release and a 12-page Executive Briefing.
Legal disputes
The EUIPO study offers some helpful statistics on legal disputes around the world highlighting that most have arisen in the US but there have also been cases in the UK (see above), China, Canada, India and the EU. In the EU, three cases have been launched in Germany and one in France. These cases generally relate to whether any copyright exceptions or limitations apply in the context of AI training and under what circumstances they are engaged.
CJEU referral
On 3 April 2025, the Court of Justice of the European Union (CJEU) received its first referral in a landmark case involving copyright and AI. Like Company v Google Ireland Limited (C-250/25) has been referred by the Hungarian court in Budapest and relates to how EU copyright law (in particular Articles 4 (TDM exception) and 15 (press publishers' rights in relation to online uses) of the DSM Directive) applies to the outputs of generative AI and the training of large language models (LLMs) like Google's AI chatbot, Gemini (formerly 'Bard').
Like Company is a Hungarian press publisher and operates various online news portals. Google Ireland provides access in the EEA to Gemini, an LLM-based chatbot (similar to a search engine); when given a certain prompt, the chatbot displays a response, which includes protected content from press publishers' webpages.
As a reminder, Article 15 of the DSM Directive grants press publishers exclusive rights over the reproduction and making available to the public of their press publications by information society service providers. The rights do not apply to hyperlinking or to the use of individual words or very short extracts of a press publication. Article 4 is the TDM exception and allows reproductions and extractions of lawfully accessible works for the purposes of text and data mining where the use of the works has not been expressly reserved by the rightsholders.
The Hungarian court has asked the CJEU for guidance on the following key issues:
- Does displaying content in a chatbot that matches protected content found on a press publisher's website constitute a communication to the public and is it relevant that the chatbot responses are the result of a process in which the chatbot merely predicts the next word based on observed patterns?
- Does the process of training an LLM-based chatbot constitute an act of reproduction, where the LLM is learning to recognise linguistic patterns from protected works?
- If such training is considered reproduction, does it qualify for the TDM exception, which permits free use for the purposes of text and data mining?
- Does generating a response by the chatbot that matches protected content in a press publication qualify as reproduction by the chatbot service provider (as opposed to the user inputting the prompts), requiring authorisation from the rightsholder?
This case will be avidly watched around the world as the EU's highest court has been tasked with clarifying the legal boundaries for AI providers and the scope of copyright protection for press publishers in the digital age. It is the first time the CJEU has been asked to address how Article 15 of the DSM Directive applies to generative AI. However, it will be a long time before we see a CJEU ruling (probably 2027) and it may well be superseded by other developments in the copyright and AI arena.
US and EU collaboration on AI challenges in the culture and creative sectors
A delegation of members of the European Parliament (MEPs) from the Committee on Culture and Education (CULT) visited Los Angeles from 26 to 29 May 2025 to gain direct insight into how AI and other transformative digital technologies are affecting the culture and creative industries and the news media sector.
The meetings were attended by representatives from film and music studios, streaming platforms, labour unions for writers, directors, actors, and other industry professionals, as well as public media representatives and Congresswoman Laura Friedman.
The press report issued by the European Commission states that the MEPs had 'observed a willingness to put in place solutions allowing the sectors to thrive mutually on both continents' and they also noted that major film studios preferred contractual relationships on copyright, rather than a regulatory approach. The music industry, however, would benefit from 'fit-for-purpose regulatory provisions, clarifying guidelines, and efficient enforcement tools'.
The meeting was said to have reinforced the EU's best practices and leading legislative role in addressing the AI and copyright challenges, as highlighted by its introduction of the EU AI Act. The MEPs considered that the meeting had been 'a highly useful and enriching way to foster transatlantic collaboration based on open and constructive dialogue in the culture and creative sectors' and that it was important to remain united to tackle the ongoing challenges.
US developments
Legal disputes
The number of legal disputes in the US related to copyright and AI has been increasing rapidly, far exceeding any other country. There are currently many high-profile cases addressing key areas, including the ownership of AI-generated content, the use of copyright-protected content to train AI models, and whether and when the doctrine of fair use applies. These cases involve authors, artists, music publishers and news media organisations challenging various big players in the tech/AI market, such as Microsoft, OpenAI, Stability AI, Perplexity AI, Meta and others, in relation to the unauthorised use of protected content.
It does appear that some cases are settling as deals are struck and licensing agreements put in place. For example, on 29 May 2025 it was reported that Amazon had struck a multi-year licensing deal with The New York Times (NYT) so that it can publish NYT content across various of its AI-powered customer services (Amazon has recently invested $4 billion in the AI company Anthropic, whose models compete with the likes of OpenAI's ChatGPT and Google's Gemini). Under the terms of the agreement, Amazon's virtual assistant Alexa will be able to use excerpts from NYT content and Amazon will be able to use NYT content to train its AI models.
The NYT has been in the press since late 2023, when it issued proceedings against OpenAI and Microsoft (owner of Copilot) for their use of millions of NYT articles to train their AI models without permission.
Some other media companies, such as Condé Nast and News Corp, have signed licensing agreements for the use of their content, and in April 2025 The Washington Post announced a partnership with OpenAI for the use of its news content on ChatGPT. Is this perhaps a sign of things to come? AI developers want access to high-quality content to train their models, and this is certainly one way of securing it that seems fair for all interested parties.
Perlmutter
On 10 May 2025, Shira Perlmutter, the (then) head of the US Copyright Office, received an email from the Trump administration terminating her employment. Perlmutter has hit back by filing a lawsuit against four White House officials and President Trump for unlawful dismissal alleging that they do not have the authority to appoint or remove the head of the Copyright Office. A federal judge recently refused Perlmutter's request for an emergency reinstatement, stating that she had not demonstrated she would be irreparably harmed if not immediately reinstated but the case is ongoing.
The day before she was fired, she had published a report on copyright and AI (Copyright and Artificial Intelligence, Part 3: Generative AI Training Pre-Publication Version). Some commentators observed that her firing could have had something to do with her stance on copyright and AI and that the administration's decision was part of a broader effort to replace officials perceived to be opposing the president's agenda – the administration may have seen Perlmutter as more pro-creatives than big tech.
Some may also remember Perlmutter from the widely publicised Thaler v Perlmutter AI case. In that case, computer scientist Dr Stephen Thaler created an AI system called the Creativity Machine, which autonomously generated an artwork entitled 'A Recent Entrance to Paradise'. Dr Thaler attempted to register the artwork for copyright protection, listing the machine as the sole author and himself as the owner. The US Copyright Office refused the application on the basis that only humans are eligible as authors for copyright protection. This decision was subsequently upheld by both the district court and the Court of Appeals for the District of Columbia Circuit, confirming that under US law human authorship is required for copyright protection (although this does not preclude copyright in a work made with the assistance of AI; much will depend on the facts). It may be that the Trump administration was not entirely on board with her decision-making in this case.
Final word (for now)
It is clear that the right balance must be found if society is to get the most out of both AI and the creative industries. The world benefits from both. As rightly stated by Labour MP James Frith, it is not about pitting one against the other but about finding a workable solution. The creative industries recognise the potential of AI, and AI relies on IP, but balance is key. It is, however, heartening for the creative industries that the value of copyright is being recognised, hence the delay to the introduction of any legislation. Hopefully it will be possible to bridge the gap between AI and the creative industries in the coming months, especially if the government has been through the consultation responses with a fine-tooth comb and has engaged constructively with the relevant stakeholders.
The outcome of the consultation will be important and the government must think carefully about its next steps, particularly in relation to any new AI bill. The preferred TDM option proposed by the government may well fall by the wayside in favour of more practical solutions such as licensing. Whilst the government and parliament have promised not to rush into anything, they do need to speed up progress if the UK is to keep up with the fast pace of AI and fulfil its desire to be an AI global leader. Concrete decisions need to be made sooner rather than later, unless the judiciary gets there first. It will be interesting to see what impact the ruling in the Getty Images case has on any new AI regulation in the UK.