Anthropic was not willing to gamble. And equally, nor was the Luddite Defence. Both sides opted to play safe, and the industry will live with the consequences.
The Settlement That Isn’t What It Seems
The recent settlement (not yet finalised) between Anthropic and a class of US authors in their copyright infringement lawsuit has been hailed by some as a victory for writers’ rights. However, a closer examination reveals this may be one of the most strategically misguided outcomes for the publishing industry in recent memory – a settlement that has inadvertently legitimised precisely the AI training practices that authors sought to challenge.
The View from the Beach is that this celebration is premature and misplaced. Rather than strengthening copyright protections, this settlement may have irreversibly weakened them whilst simultaneously providing AI companies with a roadmap for future operations that completely bypasses meaningful compensation to creators.
The Legal Landscape Before Settlement
To understand why this settlement represents such a strategic miscalculation, we must first examine the legal position that emerged from Judge William Alsup’s landmark ruling in June 2025.
Alsup ruled that Anthropic did not infringe the copyrights of the three authors whose books were used to train its AI models. Rather, the court found the training “exceedingly transformative” and held that the fair use factors, taken as a whole, weighed in favour of Anthropic.
Crucially, however, Judge Alsup drew a clear distinction between legitimate and illegitimate acquisition of copyrighted material. While using books to train AI models constitutes fair use, downloading pirated books was a violation of copyright law. This created a bifurcated legal framework where the method of acquisition, not the use itself, became the determinative factor.
This distinction is legally sound and practically important. In the court’s view, Anthropic didn’t break the law when it trained its chatbot on copyrighted books, but it must still go to trial for allegedly using pirated copies. The court essentially established that whilst AI training on copyrighted works is fair use, companies cannot simply help themselves to pirated content to build their training corpora.
The Settlement’s Strategic Misstep
By settling this case, Anthropic and the authors have made what may prove to be a catastrophic error from the publishing industry’s perspective. The settlement prevents the establishment of crucial legal precedent regarding damages for the use of pirated content whilst simultaneously cementing the fair use doctrine for legitimate acquisition.
While Alsup ruled that training AI on copyrighted works is fair use, he left the piracy issue for a jury, certifying a class of affected authors. A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy. Under US copyright law, wilful infringement can attract statutory damages of up to $150,000 per work.
The point here is that Anthropic was not willing to gamble. And equally, nor was the Luddite Defence. If the latter seriously believed it had a strong case, it would have told Anthropic to stuff the settlement where the sun doesn’t shine and set about recruiting more claimants for the class action, chasing a mega-payout in December. Both sides opted to play safe, and the industry will live with the consequences.
Here’s the thing: that trial would have been pivotal. A substantial damages award would have sent a clear message to the AI industry about the costs of using pirated content, creating a meaningful deterrent. Instead the settlement, on terms we may never learn, removes that deterrent whilst, crucially, leaving the fair use precedent intact.
And more crucially still, we now have this Alice in Wonderland legal daze where copyright protection depends not on what you do with the content but on how you obtained it. WTF?!
The Bizarre Logic of Copyright Protection
The emerging legal framework creates an almost surreal situation where copyright protection appears stronger for pirated works than for legitimately acquired ones. This represents a fundamental inversion of copyright law’s intended operation.
Consider the practical implications: if an AI company legitimately purchases a book from Amazon, Waterstones, B&N or any other retailer, it can now point to Judge Alsup’s ruling as strong precedent that training on that content constitutes fair use. The purchase price for a paperback – or better still, an ebook – becomes the total cost of acquiring perpetual training rights to that work.
However, if the same company downloads the same book from a piracy site, it faces potential statutory damages of up to $150,000 per work. The legal framework now suggests that authors’ strongest copyright protections exist only when their works are stolen, not when they are legitimately purchased.
This creates perverse incentives and undermines the fundamental logic of copyright law, which is supposed to reward legal acquisition and use whilst penalising theft.
The Implications for Future AI Licensing
But the settlement’s most damaging long-term effect may be its impact on the emerging AI licensing market. Until recently, AI companies were negotiating substantial licensing deals with publishers and authors. Microsoft’s reported $500-per-title, three-year licensing agreements represented a meaningful revenue stream that recognised the value of creative works in AI training.
These licensing arrangements are now under threat. Why would an AI company pay $500 for a three-year licence when it can simply purchase the book at retail price and gain what appears to be perpetual training rights under the fair use doctrine? The Alsup ruling, now effectively cemented by the settlement, suggests that legitimate purchase confers training rights without additional licensing requirements.
This shift could devastate the nascent AI licensing market. Publishers and authors who were beginning to see AI training as a potential new revenue stream may find themselves with significantly reduced bargaining power. The economic incentive for meaningful licensing agreements has been substantially undermined.
The Class Action’s Limited Scope
The settlement’s structure further limits its benefit to creators. The 2024 class action lawsuit was brought by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who alleged that Anthropic used the contents of millions of digitised copyrighted books to train the large language models behind its chatbot, Claude.
The class action format, whilst ostensibly protecting a broader group of authors, actually serves to limit the pool of potential claimants. Only those authors who can demonstrate clear ownership of copyrighted works and who are included within the certified class will be eligible for compensation. This excludes many international authors, those who haven’t formally registered their copyrights, and those who weren’t part of the original action.
Moreover, the settlement effectively prevents future claimants from pursuing similar actions based on the same conduct. Anthropic has contained its liability to a defined group whilst gaining protection against broader claims. Nice one, Anthropic!
Why Meta’s Response Will Be Crucial
The next significant test of this legal framework will come with Meta’s pending case, which involves similar allegations of using pirated content for AI training. Meta faces a critical strategic decision: follow Anthropic’s lead and settle, or challenge the underlying legal reasoning that distinguishes between pirated and legitimate content.
A settlement by Meta would further entrench the current framework. However, a decision to fight could potentially establish more favourable precedent for the AI industry. Meta might argue that the distinction between pirated and legitimately acquired content is irrelevant to the fair use analysis – that if training is truly transformative fair use, the source of the training material shouldn’t matter.
Such an argument, whilst legally risky, could potentially eliminate even the limited protections that currently exist for pirated content. It would represent the AI industry’s most aggressive position: that copyright holders have no recourse against AI training regardless of how the content was acquired.
The Silence of Authors’ Advocates
Perhaps most telling is the muted response from publishing industry organisations and authors’ advocacy groups. One might expect vociferous objection to a legal framework that appears to strip away most meaningful copyright protections whilst legitimising AI training on purchased works.
This silence suggests either a fundamental misunderstanding of the settlement’s implications or a strategic calculation that the short-term benefits of a settlement outweigh the long-term costs. If it’s the former, the industry is sleepwalking into a future where its intellectual property rights are significantly diminished. If it’s the latter, it represents a startling abandonment of the industry’s long-term interests.
Bizarrely, much of the industry is managing to do both simultaneously.
The Road Ahead: A Diminished Future
The Anthropic settlement may be remembered as the moment when the publishing industry inadvertently negotiated away its strongest position in the AI era. By settling a case that could have established meaningful deterrents against the use of pirated content, authors and publishers have effectively legitimised a framework where AI companies can acquire training rights for the price of a book purchase.
The implications extend beyond immediate financial concerns. The settlement suggests that the creative industries lack the strategic vision or legal commitment necessary to preserve their economic interests in an AI-dominated future. When faced with the opportunity to establish strong legal precedent, they chose the certainty of an undisclosed settlement over the potential for meaningful industry-wide protections.
For publishing professionals, the message is clear: the industry cannot rely on copyright law to provide meaningful protection against AI training. The legal framework now strongly favours AI companies, and the window for establishing stronger protections appears to be closing rapidly.
And pro-AI as I am, that bothers me.
A Pyrrhic Victory for the Industry and for the Luddite Fringe
The authors who brought the case against Anthropic may have achieved a personal financial settlement, but at enormous cost to the broader creative community. They have helped establish a legal precedent that makes AI training on copyrighted works presumptively legal whilst eliminating the prospect of meaningful damages that might deter such conduct.
This is not a victory for authors’ rights. It is a capitulation that may have ensured AI companies can continue training on creative works with minimal legal or financial consequences.
The celebration within the Luddite Resistance is premature and misplaced. What appears to be a win is actually a comprehensive defeat, dressed up with the superficial trappings of legal settlement. The only win here for the Luddite Fringe is a show of power over an industry unwilling to get off the fence and do what is right: to embrace AI and be in the driving seat.
The publishing industry now faces a future where its intellectual property protections are weaker, its licensing revenues are under threat, and its strategic position vis-à-vis the AI industry has been fundamentally compromised. The Anthropic settlement may prove to be the moment when authors and publishers lost the AI war whilst claiming victory in a single, ultimately meaningless battle.
This post first appeared in the TNPS LinkedIn newsletter.