Be careful what we wish for. The possibility of an executive order granting AI companies broad exemptions from copyright claims poses an existential threat to the publishing industries as they currently operate.

It’s been a busy week for AI followers, with partial victories for both supporters and opponents, and a seeming own-goal from Meta. But then, Mark Zuckerberg is the king of own-goals.

The rapid advancement of AI this past year or three has ignited a complex legal battle, particularly over copyright law and its application to the vast datasets used to train these powerful systems.

But lately, new developments have further complicated the scene, in the form of competing Chinese AI systems and a candle-in-the-wind US president being urged to pen an executive order giving AI companies an exemption from copyright law, “in the national interest.”

Their patriotism is admirable, of course, but we all know what the true interest is here, and rightsholders should be worried. As with tariffs, the publishing industries (books, news, music) can see what’s coming, but we are too busy pandering to tabloid headlines about AI devouring our children, and jumping on feel-good class action lawsuits, to see the real threat to our industry: a possible executive order that would rip the carpet from beneath the feet of every class action in town and rewrite copyright law as we know it.

Three Big AI News Stories

I take a look at the executive order threat below, but first, a look at the three big AI news stories of the past week or so: the Meta piracy debacle; the New York Times vs OpenAI court ruling; and the Universal Music Group vs Anthropic court ruling.

At the heart of these (and countless similar) legal battles lies the tension between copyright protection, which grants creators notionally exclusive rights over their original works, and the doctrine of fair use, which permits limited use of copyrighted material without permission for purposes such as criticism, commentary, news reporting, teaching, scholarship, or research.

The problem being that “limited use”, along with teaching/training/research, is not defined in any meaningful way, leaving the door wide open for stakeholders on both sides of the divide to claim fair or foul play.

AI companies frequently argue that their use of copyrighted material for training large language models (LLMs) falls under fair use, as the process is (they say) transformative, creating new AI models rather than directly replicating the original works. Meanwhile publishers contend that this mass ingestion of their content without consent or compensation constitutes copyright infringement, potentially causing irreparable harm to their businesses. Keep that “irreparable harm” line in mind as we proceed.

NYT vs OpenAI

The New York Times lawsuit against OpenAI took a hit this past week, not that you’d know it from industry reportage, which focussed on only one element in the case update.

The New York judge allowed the core complaint of copyright infringement to move ahead, setting the stage for a potential landmark ruling on the applicability of fair use to AI training. That was the victory we all heard about.

But the judge also stiffed the extras the NYT threw in, dismissing, with prejudice, claims of unfair competition and violation of the Digital Millennium Copyright Act.

“With prejudice” meaning the judge has terminated that avenue for the NYT and, by precedent, erected another hurdle for other parties inclined to try the same claim.

The judge will give details of the decision-making at a later date, so for now we don’t know exactly how unfair competition is defined here. But this is a significant ruling, despite being largely played down by media reportage that focussed instead on the go-ahead for the main case, centred on fair use.

Now obviously the fair use issue is the biggie here. But to stop there is to miss an important point: by dismissing the unfair competition argument, the judge has set a bar for potential compensation remedies if OpenAI is later deemed to have infringed copyright. Which brings us neatly to an independent decision by another court.

UMG vs Anthropic

Anthropic, the AI company backed by Amazon, is being sued by Universal Music Group and others, who claim Anthropic violated copyright on the lyrics of 500 songs. This week the judge denied, with prejudice, a preliminary injunction, citing a lack of evidence of “irreparable harm.”

I’m no lawyer, but this strikes me as hugely important. Yes, if an AI company has crossed a legal boundary then it must be held accountable, but at the remedies stage a demonstration of harm goes a long way towards determining how much the company may be penalised.

The UMG-Anthropic case is not the NYT-OpenAI case, of course, and the NYT is asserting much wider infringement than that attributed to Anthropic, but it’s hard to imagine how the NYT might conceivably claim irreparable harm to its own business, were OpenAI’s actions to be declared illegal.

All of which serves to highlight the complexities of proving immediate and non-monetary damage in these novel legal contexts.

And of course it is precisely because these are novel legal contexts that, if these cases are allowed to run their course, it will likely take a Supreme Court ruling to have the final say. Unless the self-appointed king steps in first.

Publishing’s Facebook Friend Meta, the Piracy Advocate

Meanwhile, the revelations about Meta’s alleged use of pirated books from LibGen to train its AI models further intensify the debate around ethical sourcing of training data and the potential for widespread copyright violations, and open up a whole new can of worms.

It was only a few days ago that TNPS looked at the cack-handed juggling act that is the industry’s treatment of Amazon and AI.

As the Meta LibGen story made industry news, it was another visit to the hypocrisy factory. Industry spokesfolk called out Meta’s wicked deed on Meta-owned Facebook and Instagram, as if Meta AI is somehow unconnected with the rest of Meta, and as if the Mark Zuckerberg who allegedly approved using the pirate book site for AI training is not the same Mark Zuckerberg who owns Facebook, Instagram and WhatsApp.

Tesla owners upset by what Elon Musk is doing have the decency to match actions with words. I’m not seeing any calls among authors and publishers to boycott Facebook.

Again, it will be for the courts to decide who committed what crimes, if any (the presumption of innocence is not something the publishing industry has much time for). But behind all this is the unedifying scenario of rightsholders gleefully pointing to pirate sites they have no way of blocking and then screaming that Meta should compensate them.

Which of course brings us back to that UMG-Anthropic ruling, because no publisher or author can point to Meta’s exploitation of LibGen and claim irreparable harm. Indeed, defining any level of harm at all will be challenging.

I used the Atlantic’s search tool to see if any of my books were on the pirate site, and was appalled to find only three. Where are the rest? Should I be pleased or offended?

I then quickly totted up my losses from those books being on the pirate site, and then how much more I must have lost from Meta, if it even noticed my books, using them to train its LLM.

It didn’t take long, and no, I won’t be losing any sleep over it.

Yes, the ethics of a mega-corp like Meta using a piracy site absolutely stink. But Mark Zuckerberg and Facebook have a long history of disdain for ethics which appears not to have worried us too much thus far.

Outraged indignation may make us feel better, if that’s what floats our boat, but when it comes to demonstrating irreparable harm, plucking numbers out of the air will get us nowhere.

Is Regurgitation Irreparable Harm When It’s Manipulated?

The NYT case asserts ChatGPT regurgitated NYT news reportage pretty much word-for-word. OpenAI has already indicated this could only be achieved by prompt manipulation in breach of OpenAI’s rules. Authors claim AI has intimate knowledge of their books, but again, this has yet to be explained in court.

But no-one is asserting the big AI companies are copying entire books and regurgitating them to be sold, or even distributed free.

Which will leave the harm, should the fair use defence be ruled a non-runner, defined as the copyright violation itself, and that will likely incur no more than a slap on the cyber-wrist.

Deal or No Deal?

It’s not as if AI companies are not trying to come to arrangements with publishers. Rather, they are paying out huge sums already, and making new deals almost daily. It should be remembered the NYT sued OpenAI after first having tried to strike a deal. Those negotiations stalled because the parties could not agree on a price. Meaning, by definition, that the harm was not irreparable: it had a price.

By suing, the NYT is leveraging court action to raise the price at which it will eventually settle. OpenAI is playing along – it has deeper pockets than the NYT – but knows that at the end of the day a ruling against fair use would be crippling.

Which raises the question: how many of the myriad legal cases slowly moving through the system now will actually go all the way? Most will likely be settled, amicably and discreetly, before then, and we’ll all move on as if nothing happened.

The Elephant in the Room

Except that there’s an elephant in the room. The Trump executive order that publishing is determined to pretend isn’t on the table.

But this is real. We have the looming prospect of the US President simply writing the AI companies a blank cheque in the form of an executive order ruling that all AI companies have fair use privilege over copyrighted material “in the national interest.”

Even three months ago, that argument was a non-runner. Then DeepSeek came along and handed US AI companies a gift.

US AI companies were falling over themselves as they elevated mild concern to faux panic. The Chinese are coming! Chinese AI is leaving us behind! And they copied our code! Have they no respect for copyright?

The publishing industry was so enraptured by the irony of Sam Altman crying foul over alleged copyright theft that it totally missed that the AI companies are loving what the Chinese are doing.

Okay, so DeepSeek grabbed some headlines and a few followers, but clearly it made no difference to ChatGPT user numbers, and DeepSeek and its countless new Chinese buddy-bots like Manus, now flooding the scene, are all being lapped up by the tech guys and reverse-engineered to see what they are doing right.

But the real gift to the AI industry was simply that these systems are from China: for most of the past century America’s reserve bogeyman after Russia, but in the Trump era elevated to public enemy number one to take the heat off Trump-buddy Putin.

The AI industry already had a “Chinese threat to our national interest” precedent in TikTok, so it should have surprised no-one when AI big-knobs started talking about how Chinese AI was a threat to national security, playing to Trump’s validation needs.

Going Through The Motions

Right now the White House is going through the motions with its call for input into the decision-making about the future of AI.

And all credit to the Association of American Publishers, which has presented a well-balanced argument in favour of the status quo. Read the full submission here.

Full disclosure: I was impressed with the sober and rational thought that went into this, in stark contrast to the tabloid-headline grab that was the UK Publishers Association submission to the British government.

But this is the Trump Administration. Public opinion is neither here nor there.

In Trump’s pay-to-play world, the AI companies have already paid for front row seats, as we saw literally at the inauguration. And their self-righteous indignation and faux worries over Chinese AI companies and their new Chinese AI models are just the latest smoke and mirrors instalment to advance AI company interests.

New Avenues of Creativity and Remuneration

Regular TNPS readers will know I have no big issues with AI. The ethics and legalities need to be settled, of course, and concerns about jobs need to be addressed. But gen-AI is the best thing that ever happened to the publishing industry, and is opening up new avenues of creativity and remuneration for those willing to adapt.

But a shadow looms over the industry in the shape of an increasingly likely executive order pre-empting any final court decisions.

So let me begin to wind up this essay with a look at the three scenarios now on the table.

Scenario A: Ruling in Favour of Fair Use

If the courts ultimately rule that the use of copyrighted material for AI training constitutes fair use, the publishing industries would face a significant challenge in claiming “irreparable harm” in future cases. “Irreparable harm” typically refers to injury that cannot be adequately compensated by monetary damages. The “with prejudice” ruling in favour of Anthropic has already weakened this case.

Compensation in this scenario would likely be limited. While publishers might still pursue voluntary licensing agreements with AI companies, as some have already done, the dynamic will have changed.

Terms Dictated by the AI Companies

A fair use ruling would weaken publishers’ negotiating positions. The argument for mandatory compensation would be undermined, as the core activity of training would be legally protected.

This could lead to a future where AI models are predominantly trained on publicly available data or content licensed on terms dictated largely by the AI companies themselves. The impact could be particularly felt by news organisations, who fear that AI summarisation tools will reduce traffic to their websites, impacting advertising revenue.

Book publishers would see increased competition from AI-generated content, while music publishers could face challenges if AI can generate music based on the patterns legally learned from their copyrighted works.

That said, even under a fair use regime, AI could offer certain benefits to the publishing industries, such as enhanced productivity in editing and proofreading, personalised content creation, and improved marketing strategies. The key challenge would be adapting business models to thrive in an environment where their core content is freely used for AI development.

Scenario B: Ruling Against Fair Use

Conversely, a ruling against fair use in the context of AI training would significantly strengthen the publishing industries’ ability to claim “irreparable harm.” Such a ruling would establish that the unauthorised use of copyrighted material for training infringes upon the rights of creators. Publishers could argue that the continued use of their content without permission causes ongoing and irreparable damage to their businesses by devaluing their intellectual property, potentially substituting their original works, and undermining their revenue streams.

The Anthropic case suggests that proving this “irreparable harm” to the satisfaction of the courts remains a hurdle, but a definitive ruling against fair use would provide a stronger legal foundation for such claims.

In this scenario, the issue of compensation would take centre stage. AI companies would likely face significant financial liabilities for past and ongoing copyright infringement. This could lead to the establishment of mandatory licensing schemes or collective rights management organisations to ensure fair compensation for publishers and creators whose works have been used for AI training.

The class action lawsuit against Meta, alleging “mass theft” of books for AI training, exemplifies the potential scale of compensation claims in such a scenario. The Delaware District Court’s ruling in Thomson Reuters v. ROSS Intelligence, which found that AI training on copyrighted legal headnotes was not fair use, already offers a weak precedent for such a stance (weak because it did not directly involve generative AI).

While a ruling against fair use could potentially foster a more collaborative ecosystem, in which AI companies and publishers work together to ensure that creators are fairly rewarded for their contributions to the training process, an ‘against’ ruling could also dramatically slow the pace of AI development.

Many of us may argue that would be a good thing, but be careful what we wish for.

Which brings us to:

Scenario C: Ramifications of an Executive Order Exempting AI from Copyright Claims

The prospect of an executive order from Trump, self-evidently influenced by close ties between political figures and AI CEOs, declaring AI companies exempt from copyright claims presents a radical scenario with potentially devastating consequences for the publishing industries.

In such a situation, traditional arguments of “irreparable harm” based on copyright infringement would likely be rendered moot. The legal basis for preventing AI companies from using copyrighted material without permission would be significantly weakened, if not entirely eliminated.

Compensation in this context would become highly improbable. If AI companies are legally exempt from copyright claims, they would have little incentive to offer compensation in the form of deals for the use of copyrighted material in their training datasets.

This could lead to a situation where the publishing industries are forced to compete with AI-generated content that has been developed using their own intellectual property, without any form of remuneration. Publishers on the verge of signing deals might want to accelerate the process and get something in writing ASAP in case the option disappears.

Ramifications for News, Book and Music Publishers

The ramifications for news, book, and music publishers under such an executive order could be severe.

News organisations might struggle to maintain their operations if their content is freely used to train AI that then provides summaries and answers, drawing readers away from original sources. Right now that element is barely an issue. AI companies know they can only go so far.

Book authors and publishers could see the value of their backlists and future works diminish if AI can readily learn from and replicate their styles and content with no restriction.

Similarly, musicians and music publishers could face challenges if AI can openly generate music trained on their copyrighted songs.

The long-term implications for the creative economy of an executive order giving such sweeping exemptions would be profound and negative for publishers and creators.

Be Careful What We Wish For

Be careful what we wish for. By obsessing over as yet unproven legal infractions instead of trying to engage with AI companies to strike deals, we risk the very real possibility of an executive order granting AI companies broad exemptions from copyright claims, one that would pose an existential threat to the publishing industries as they currently operate.


This post first appeared in the TNPS LinkedIn newsletter.