
Copyright wars 3.0: the AI challenge

As Silicon Valley and the creative industries square up over AI and copyright, emotions are running high. What is at stake? And was Peter Kyle right to refuse copyright amendments to a data protection law? 

The musician Sir Elton John has called the UK government “absolute losers” over policy proposals that would exempt artificial intelligence (AI) firms from compliance with copyright law. With comments timed for a Parliamentary vote, he is also reported to have threatened to take the government to court if it does not rescind its position.

What we are witnessing is the latest round of the so-called “copyright wars”. Historically, the challengers were recorded music, movies, radio, television and video. It was always the incumbents who felt threatened by the new technology and tried to fight off the “pirates”, only eventually to bring them into the fold. In the first decade of this century, we saw the battle over peer-to-peer filesharing, which I’ve reported on extensively. You can read all about that elsewhere on this website.

In 2025, AI is the challenger. Silicon Valley’s global tech monopolies are seemingly pitching the creative destruction* of the entertainment industries’ business model. The stakes are higher than ever. Two leviathan** industries are pitched head-to-head in a massive power struggle over the commercial exploitation of digital content.

Copyright underpins the economics of the creative industries. Although it is often portrayed as being about authors’ rights, copyright is ultimately about the exploitation of creative works. These days, it’s about database-facilitated distribution agreements and the exchange of licence fees via an industry network spanning multiple countries and languages. The value in the system is powered by an inventory of high-profile chartbusters, ahead of a long tail*** of less well-known works with low-to-negligible copyright value. It is enforced by a global monitoring system that polices unlicensed uses of copyrighted content on online platforms.

So is AI a threat, and would it break that system? The AI companies want free access to copyrighted content in order to build systems known as large language models (LLMs). These are designed to learn from existing content and generate new material, hence they are collectively known as generative AI. Copyright protagonists fear that generative AI models may replicate the original content in some way, or may spit out new content that takes elements of the original.

The policy issue is explained by the United States Copyright Office, which recently released a report framing the problem as it sees it. The report outlines how “vast troves” of copyrighted works are being used by AI companies on an “unprecedented scope and scale” to power their models and to produce content that competes against the original works. The Copyright Office asserts that generative AI goes beyond what would be permitted under existing copyright rules, notably the “fair use” principle in US law. Fair use does not apply in the same way under UK or EU law, although the principle at stake remains broadly the same.

The content that the AI companies want to use comprises entire estates of copyrighted works, dating back decades. Some of these are extremely valuable, having made the artists very wealthy. The Rolling Stones’ back catalogue is valued at around $500 million [£338 million, according to Wikipedia]. Taylor Swift is worth a jaw-dropping $1.6 billion, and Sir Elton John’s net worth is £470 million as per the Sunday Times Rich List [source: Wikipedia].

The other beneficiaries of those back catalogues are the large music corporations, Universal, Sony and Warner, and entertainment companies such as Disney and Netflix. Their revenues depend on an industrial system comprising not only the artists but also the production facilities, supported by a global licensing and distribution system that collects the money. The creative industries are estimated to be worth £126 billion in gross value added to the UK economy.

Given the eye-watering sums at stake, it’s unsurprising that they will put up a strong defence against the AI attack.

It’s also unsurprising that the AI companies, attracting billions of dollars in funding, will fight for access. OpenAI, which is only 10 years old, has 500 million users worldwide and (in April 2025) a $300 billion valuation, compared with the 100-year-old Walt Disney Company’s $202 billion market capitalisation (20 May 2025) [data sourced from the Financial Times].

This explains why the copyright holders are highly litigious. High-profile cases include the New York Times v OpenAI and Getty Images v Stability AI; it is reported that the latter has the go-ahead to be heard in the UK courts. According to the UK Press Gazette, UK-based Mumsnet is also suing OpenAI, as is the international tech publisher Ziff Davis in the US. Mumsnet alleges that OpenAI has been scraping the 6 billion words on its website [scraping means using automated techniques to access and extract publicly available content from a website or platform].
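To make the scraping idea concrete, here is a minimal sketch using only Python’s standard library. It is illustrative, not a description of any particular company’s system: the HTML page is an invented stand-in for a downloaded one, and a real scraper would fetch pages over HTTP and run at vast scale.

```python
from html.parser import HTMLParser

class TextScraper(HTMLParser):
    """Collects the visible text from an HTML page, skipping scripts and styles."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0  # depth inside <script>/<style> tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

# In practice the page would be fetched over HTTP (e.g. with urllib);
# here a small inline page stands in for a downloaded one.
page = "<html><body><h1>Forum post</h1><p>Some user-written text.</p></body></html>"
scraper = TextScraper()
scraper.feed(page)
print(" ".join(scraper.chunks))  # prints: Forum post Some user-written text.
```

At scale, text harvested this way becomes training data, which is precisely what the lawsuits above contest.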

There is also quite a bit of deal-making. The UK Press Gazette counts nearly 30 publicly known agreements between publishing groups and AI companies, including OpenAI deals with the Washington Post, the Guardian and Agence France-Presse, among others.

AI companies counter the copyright argument by saying that they do not reproduce content: AI models search for patterns in the data, which enable them to learn and predict how ideas connect. However, there are huge risks associated with AI models. They can make mistakes and, importantly in this context, they don’t cite sources. Some AI companies lack stable corporate governance, which matters when they are making high-stakes deals with other businesses.

The complexities introduced by copyright and its importance to the creative economy would seem to make this area ripe for policy intervention. The question is: what should be done? The sheer scale of the copyright ecosystem, and of the databases that the AI will utilise, suggests that a policy response must similarly scale up. This is a point that policy-makers have been missing. Any proposals for copyright applied to generative AI models should be stringently examined before any new laws are firmed up; otherwise, the risk is that the laws will be unworkable.

The US Copyright Office favoured a licensing system built on the existing framework, but declined to put forward a concrete proposal. Shortly after the report was published, the head of the Copyright Office and veteran of copyright advocacy, Shira Perlmutter, was fired by the Trump Administration, amid rumours that the AI companies did not like the report.

The EU rather hastily added copyright provisions to the AI Act [Regulation (EU) 2024/1689] to address the issue. The Act relies on an exception for text and data mining in the 2019 Copyright Directive that could be applied to generative AI models. It puts AI model providers under an obligation to respect “opt-outs”, whereby rights-holders may reserve their rights and refuse permission, and calls for a summary of the content used for training the AI model [Article 53(1)(c) and (d)]. However, there is uncertainty around the implementation of these requirements.
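In practice, such opt-outs are often expressed in machine-readable form. One commonly used mechanism, though not one mandated by the Act, is a site’s robots.txt file; the crawler names below are the AI-training user agents that OpenAI and Google have publicly documented. A minimal sketch:

```text
# robots.txt - one way a rights-holder might signal a
# text-and-data-mining reservation in machine-readable form.
# GPTBot is OpenAI's documented training crawler;
# Google-Extended governs use of content for Google's AI models.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Ordinary search-engine indexing can still be permitted:
User-agent: *
Allow: /
```

Whether a robots.txt entry satisfies the Directive’s requirement for a machine-readable rights reservation is itself contested, which illustrates the uncertainty around implementation.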

The UK government has been consulting on a broad opt-out system that would allow generative AI companies to use copyrighted works without permission. This was the target of Sir Elton John’s wrath. The consultation closed on 25 February 2025 and the government’s response in the form of a new AI copyright bill is awaited later this year.

Believing that there is an urgent need for legislation, the UK House of Lords took an activist stance. Amendments were tabled to the Data (Use and Access) Bill - legislation that has no remit for AI and copyright.

The Lords wanted a transparency mechanism, forcing AI companies to disclose the copyrighted works they were using. Three amendments were tabled, each formulated differently and borrowing language from the Online Safety Act, suggesting preparatory haste. They risked creating a muddled statute by trying to shoehorn AI copyright enforcement into a data protection bill where it does not belong.

Hence, the amendments were rejected by the Government and dropped from the Bill. The Secretary of State, Peter Kyle, speaking in Parliament on 22 May, said he wants to work with both industries to find a balanced solution. However, the Lords re-tabled their main amendment several times, and each time the government rejected it. The government’s final view is awaited on 5 June.

As an aside, I’ve previously seen rightsholders attempt a similar hijacking of a telecoms law, and it did not end well for them – please see the ‘Telecoms Package’ sections of Iptegrity.com. In my opinion, copyright policy does not belong in either the telecoms or data protection frameworks. It belongs squarely in the copyright framework, where the economic impact and technical implications can be properly assessed.

Finally, though, it’s important to recognise the massive long tail of works that are in copyright but earn little or nothing, yet may still be valuable to their creators reputationally. A citation, or recognition of authorship, remains important even without payment. Whatever system is devised, these creators - including academic authors like myself - are likely to be the real victims of this corporate battle.

---

If you liked this article, you may also like to read my previous work on copyright enforcement and digital systems. It’s all in the sections on the “Telecoms Package” on this website and in my books.  

If you would like to contact me, please do so via the contact page. 

If you refer to this article in your own work or cite it, please remember to credit me as “Dr Monica Horten, Iptegrity.com” and provide a link back to here.  

*Creative destruction is a theory of the political economist Joseph Schumpeter.

**A reference to the work of the 17th century political philosopher Thomas Hobbes 

***See Chris Anderson, The Long Tail, Random House, 2006  - well worth reading.


About Iptegrity

Iptegrity.com is the website of Dr Monica Horten, independent policy advisor: online safety, technology and human rights. Advocating to protect the rights of the majority of law-abiding citizens online. Independent expert on the Council of Europe Committee of Experts on online safety and empowerment of content creators and users. Published author and post-doctoral scholar, with a PhD from the University of Westminster and a DipM from the Chartered Institute of Marketing. Former telecoms journalist, experienced panelist and Chair, cited in the media, e.g. BBC, iNews, Times, Guardian and Politico.