AI and copyright – an author’s viewpoint
- Author: Monica Horten
- Published: 28 November 2025
A personal viewpoint on AI copyright deals for authors:
I've received an offer from a publisher inviting me to opt in to artificial intelligence (AI) licensing rights. What should I do? The publisher is being approached by AI companies who want to license its content, which would potentially include my book, ironically entitled A Copyright Masquerade.
It's not easy to take a decision. There are so many unknowns at the moment about the kinds of deals that could be on the table. There is potentially a lot of money at stake for authors, but some hard legal battles with AI companies can be expected. A licensing system would seem to be a more equitable outcome, but we are very far from any kind of legislative resolution. In this article, I consider some recent developments and what they could mean.
Readers will know that I've written extensively about past battles over copyright, which inform the complex challenges we face with AI today. Those earlier battles were quite different from today's: they were about individuals targeted by large corporations, whereas today we are dealing with an inter-industry battle in which book authors, like myself, are stuck in the middle. I am trying to unpack what it means for me as an author.
Please read this article as a positive statement. I would like my work to be included in any potential AI deals. I think there are benefits for me as an author. These include the ability for my work, an academic book, to be discovered by new generations of researchers and students. Indeed, AI is probably vital to ensure the discoverability of the book.
However, it does feel like wading into the unknown. Copyright for AI is quite different from print. With generative AI, and the so-called large language models, it is about the right to use the text of the book for training and possibly other purposes. It may also entail the right of communication to the public, depending on the kind of outputs. Assessing the monetary value of one's work is tricky.
Let's turn the mirror onto the AI deals. The AI companies are awash with cash, and settlements can potentially be worth far more than most authors get from their printed books. That makes it worthwhile for authors to get a good deal from AI licensing.
The case of Bartz v Anthropic offers a perspective on what's at stake. This is a class action lawsuit filed in the US courts by a group of authors. A settlement of $1.5 billion was achieved for around half a million works. It is likely to realise around $3,000 per work, with a 50:50 author/publisher split. For many authors, such as academic writers, who don't earn much in royalties, this would be a welcome bonus.
This settlement was arrived at on the basis that Anthropic had downloaded pirated databases of books – so-called shadow libraries – to train its AI. Anthropic was deemed to be infringing copyright. However, it might not have turned out that way. Prior to the settlement being agreed, the case had been subject to a court ruling which held that Anthropic's use of the works was "fair use" under US law. That may sound like a contradiction, but it is actually a technical legal point. Anthropic had initially downloaded the pirated copies and subsequently purchased legitimate copies of the same books. The use of the purchased copies was deemed "fair use", but the use of the pirated copies was a copyright violation.
In the French courts, a similar legal point will be tested in a case litigated by the French Publishers' Association against Meta. The issue concerns the unlawful use of copyrighted material to train its large language models. The publishers are asking for compensation and for their content to be removed from the AI training datasets. Unfortunately, their claim isn't public, so there is no way of verifying its substance. The substance is so important in these cases because a court judgement will ultimately turn on highly technical detail.
Turning back to Bartz v Anthropic, I think one conclusion we can draw is that there is money to be claimed for authors. However, the case also illustrates some of the difficulties in making a claim against AI companies. In establishing the claim, each work had to be checked against a database, and with almost half a million works eventually making it into the final claim, that was a lot of checking.
Authors should not underestimate the tough legal battle to get there. They and their publishers will have to make their case at quite a technical level, and they need to be on the ball. The criterion for a book to be included in the Anthropic settlement was not only that it was on a list of books downloaded from the pirated libraries, but also that it was registered with the US Copyright Office. Books that were downloaded but unregistered were deleted from the list. My book A Copyright Masquerade can be found in a search of the shadow libraries that Anthropic downloaded, but it is not registered, and is likely to be one of those deleted books. This raises a crucial point: if the criterion is registration of the copyright in the US, how many British authors will miss out?
A licensing system might spare us some of the pain of litigation. It would make sense in the AI space, where publishers don't have to bear the costs of producing and distributing copies, which is the basis of the royalty system. However, there are question marks over whether it should be a statutory system, and there is some way to go before anything will be put in place. The US Copyright Office signalled the way in its report of May 2025, which recommended licensing as the way forward, but the report was immediately dumped by the Trump Administration. Could the UK and the EU pick up the idea and run with it?
The UK House of Lords made an attempt to address AI and copyright in the spring of this year with a proposal for “transparency” in law, but in my opinion that was never sufficient. It put a stake in the ground, highlighting the issue to government, but it needed more policy development before it could pass into law.
The EU AI Act includes a transparency requirement specifically for copyright purposes. AI companies are required to provide a “sufficiently detailed” summary of the training data they obtained, following an official EU template. I will let you decide for yourself whether this template can function in a technical environment. I will just say that there is a lot of work to do in order to create a viable legislative proposal.
And so we come back to where I started: authors are the engines of the publishing industry, and what's at stake is the way they will be remunerated in a new world where their work becomes known via AI but is also used for strange new purposes we had never before imagined. It is difficult to determine whether or not we are getting a good deal when so little substantive information is provided. Right now, the future of copyright for AI models is being decided in the courts and in obscure corporate back-rooms. That is not sustainable.
I find myself coming down in favour of a licensing system. However, it must centre on authors, and they must be fully involved in a transparent process, alongside their publishers, in order to take informed decisions about the options presented to them.
---
Please see also Copyright wars 3.0: the AI challenge
If you liked this, you may also like to read my earlier work on copyright and the Internet.
If you would like to contact me, please do so via the contact page. Please remember to credit me as “Dr Monica Horten” if you cite my article.
About Iptegrity
Iptegrity.com is the website of Dr Monica Horten, independent policy analyst: online safety, technology and human rights. Advocating to protect the rights of the majority of law-abiding citizens online. Independent expert on the Council of Europe Committee of Experts on online safety and empowerment of content creators and users. Published author and post-doctoral scholar, with a PhD from the University of Westminster and a DipM from the Chartered Institute of Marketing. Former telecoms journalist, experienced panelist and chair, cited in the media, e.g. BBC, iNews, The Times, The Guardian and Politico.