Back in June, Jane Friedman, writing for The Writing Platform, addressed a topic close to my heart – the polarised debate about AI in publishing. This week that post made it onto Bo Sacks’ publishing blog, with an introduction by Sacks himself that built further on Friedman’s essay.

Both pieces form the basis for this TNPS op-ed, where I explore the issues raised in more depth than either Friedman or Sacks had space to do.

Before I plunge in, a quick thought on the “polarised debate” between “doomers” and “boosters”. My position on AI should be pretty clear by now. For me, it is the most exciting, invigorating and transformative development not just in my lifetime, but in history.

AI has been the topic of discussion at some level since TNPS launched way back in 2017 – I mentioned it in the very first post – and before that in my online manifestations elsewhere. So I’m used to the abuse that comes with the territory.

But here’s the thing: I don’t write TNPS to court popularity, build a readership to sell a product, or find a comfortable seat on the industry fence. So when it comes to calling a spade a spade, or a Luddite a Luddite, I go with what I think needs to be said, not what I think people would prefer to hear.

Friedman and Sacks have a similar philosophy. They have more patience with the Resistance than I do, granted. I have beyond infinite patience when teaching children, but little to no patience dealing with adults who choose not to know better. But they say what needs to be said. Because at the end of the day it is dealing with reality that ultimately delivers nuanced debate.

In this essay I try to do justice to Friedman and Sacks’ commentary while of course delivering the TNPS View From The Beach perspective that occasionally polarises opinion.

I’m sure this will be one such occasion. Yeah, call me a snake-oil salesman, hit my books and posts with one-star reviews, threaten the family you know nothing about – whatever makes you feel good. It won’t make AI go away. It won’t alter the fact that the world is moving on, with or without you. Your fear is not my fear. Which brings us neatly to the point both Friedman and Sacks wish to make.

The Binary Prison of Fear


Friedman and Sacks have identified a fundamental truth that the publishing industry desperately needs to confront: fear is killing nuanced conversation. As Sacks observes, this fear has created “a binary prison, and nuance has been tossed into solitary confinement.”

The industry has split into two camps – the doomers and the boosters – with little room for the thoughtful, pragmatic discourse that could actually help navigate this transition. This polarisation isn’t just unproductive, they argue; it’s dangerous. As Sacks warns: “The absence of open, intelligent conversation about AI isn’t just frustrating, it’s dangerous. It leaves the field wide open for bad actors, for sloppy ethics, for the very abuses we claim to fear.”

At which point, let me nail my colours to the mast a final time in case you missed it above: I’m a booster and proud of it. But no, I’d argue that’s not counterproductive in itself. If every post we read about AI were balanced and argued both sides, we’d get nowhere.

Our very industry is about debate and opposing perspectives. Rational debate, up to and including calling out Luddism for what it is, is necessary, in the same way we call Flat Earthers Flat Earthers.

Friedman and Sacks both talk about fear, and fear can be very rational. If you get the shivers when you encounter a rattlesnake, or you’re walking down a dark alley and hear footsteps sneaking up behind you, that’s not irrational.

But when we start to anthropomorphise something as abstract as AI into an existential threat, irrationality is king.

We saw this when Jane Friedman, a year or so back, found that that nasty old AI had been churning out low-quality books in her name and selling them on Amazon. Or at least, that’s how the popular legend went. AI is stealing our books, our brand, our children’s toys and our grandmothers’ boiled sweets, and next it will steal our fluffy kittens and our newborns. Sound the alarm! Smash the machines! Bring back typewriters and quill pens!

With serendipitous timing, Anne R. Allen posted today about how her inbox is being overrun with AI-empowered spam likely emanating from the Nigerian Princedom of Scamland.

But of course, this does not mean Nigeria is full of scammers or that every Nigerian is scamming you. It would seem the scammers have finally got the AI memo and updated a scam model designed and tested long before most folk had ever heard of AI.

The Real Threat Isn’t the Machine – It’s Human Agency


But Jane Friedman’s personal experience with AI-generated books bearing her name, and Anne R. Allen’s scam-filled inbox, illustrate a crucial point often lost in the hysteria: AI didn’t commit fraud – humans did.

The technology didn’t bypass Amazon’s safeguards or Google’s spam filters, exploit legal loopholes, or make the decision to steal someone’s identity or to assume everyone with a similar name is the same person. Humans made those choices. Humans implemented inadequate safeguards. Humans chose to act unethically.

This distinction matters profoundly because it reveals where our focus should lie: not on banning or fearing the tool, but on establishing proper frameworks, ethics, and accountability for its use. A car can kill pedestrians, but we don’t ban cars – we create driving tests, speed limits, and hold drivers accountable for their actions.

Historical Precedent: We’ve Been Here Before


The publishing industry’s current panic follows a familiar pattern. As Sacks reminds us: “We’ve been here before, folks. The music industry had its Napster moment. Photography faced the digital camera. Journalism stared down the blogosphere and then the smartphone.”

Each technological shift brought predictions of apocalypse. Each time, industries evolved rather than collapsed. The question isn’t whether change will come – it’s whether we’ll participate in shaping it or have it imposed upon us.

Consider the technologies we now take for granted:

Word processors replaced typewriters (typewriter manufacturers didn’t survive, but employees found new jobs)
Email replaced postal submissions (and countless postal workers found new roles)
Digital printing transformed production (and hot-metal typesetters became history – or rather, their skills became history. So they upskilled.)
E-books created new markets (bookshops adapted; those that failed went down because of online print sales, not e-books)

Nobody in publishing today mourns the passing of manual typesetting or loses sleep over displaced typewriter repairmen. Consumers certainly don’t care whether their favourite novel was written on a MacBook Pro or an Olivetti – they care whether it’s a good story.

The Mythology of Suffering


One of the most perceptive observations in this debate comes from Sacks: “We have romanticised writing as an act of suffering, ink and sweat, late nights and broken hearts. We value the process as much as the product, and AI disrupts that mythology.”

Oscar Wilde may have something to do with that: “I was working on the proof of one of my poems all the morning, and took out a comma. In the afternoon I put it back again.”

But Sacks’ point is clear. “We” value the process as much as the product. The consumer does not.

The consumer doesn’t know or care how hard we did or did not work, any more than they care how hard the fisherman worked or what dangers he faced to deliver that tasty morsel to our plate, or how many assembly-line workers were put out of work by car-manufacturing automation so that a new SUV could sit on our drive.

Show me the writer or publisher that drives a hand-made car, has the milkman deliver milk to their door, or has shunned central heating for a coal fire just to keep the coalman in work.

But as writers and creatives we like – nay, desperately want – to believe we are somehow different. Our job is special. We get emotionally attached to the very notion of our job.

Yes, I wept when I watched the manuscript thrown onto the fire in Little Women. I’m an author. I know how much work went into my manuscripts. My readers don’t care. Just as you don’t care that I spent almost the entire day writing this single post because the internet has been off and on like Christmas tree lights on steroids.

But does the post that could have been written in an hour or so gain authenticity or value by my having wasted most of the day sat here waiting for moments of connectivity? I feel the pain, no question. No-one else gives a fig.

This romanticisation has created a dangerous conflation between effort and value. We’ve convinced ourselves that difficulty equals authenticity, that struggle equals worth. But as Friedman asks: “How much do we value something because of the suffering that went into it?”

The uncomfortable truth is that most readers don’t care about the writer’s process – they only care about the reading experience. A thriller that keeps them turning pages at 2 AM has succeeded regardless of whether it took six months or six years to write, whether it was crafted through agonising revision or flowed in a fever dream of inspiration.

Friedman challenges us further: “If a writer tells us they were struck by inspiration and wrote the whole work down in a fever dream of a few sessions without editing, we accept that. Do we not accept that creativity has many more forms than just the ones we’re acquainted with?”

AI as Tool, Not Replacement


The most important reframe needed in this conversation is understanding AI’s true nature. As Sacks puts it: “AI is neither plagiarism incarnate nor a deus ex machina for the lazy. It is a prompt-hungry, context-dependent partner that reflects the quality and the vision of the human steering it. Without guidance, it’s noise. With guidance, it’s a megaphone.”

This tool metaphor is crucial. A paintbrush doesn’t create art – an artist does. A word processor doesn’t write novels – an author does. AI doesn’t generate meaningful content without human vision, curation, and purpose.

Friedman asks the essential question: “If a human writer uses AI to generate ideas or generate words, why is that a problem, if they’re still acting as the ultimate god over their story?”

The answer reveals our biases more than our logic. We accept:

Research assistants who gather information
Editors who refine prose
Ghostwriters who craft entire works
Translators who adapt across languages
Voice recognition software that converts speech to text

Why should AI assistance be fundamentally different?

The Commercial Reality


Here’s the uncomfortable truth publishing professionals must confront: publishing is a business. For all the industry’s rhetoric about supporting authors and nurturing creativity, the commercial reality is stark. Royalty rates, advance levels, and marketing budgets reveal the true priorities.

If AI can produce content that audiences want to read, faster and more cost-effectively than traditional methods, publishers will use it. They have shareholders, overheads, and profit margins to consider. They’ve already automated typesetting, embraced print-on-demand, and moved to digital distribution – each step reducing costs and human involvement.

Publishers didn’t abandon digital transformation to preserve typesetters’ jobs. They won’t abandon AI to preserve traditional writing processes.

Consumer Indifference to Process


The most sobering reality is that consumers largely don’t care about production methods – they care about value. Readers want engaging stories, useful information, and compelling content. Whether it’s crafted by quill pen, typed on a mechanical keyboard, or generated with AI assistance is irrelevant to their experience.

In fact, many consumers might – Luddite Fringe, look away now! – actively prefer AI assistance if it means:

Lower prices for books
Faster publication schedules
More diverse content
Better translations
Personalised recommendations

Our obsession with process purity may prove to be completely disconnected from market reality.

Questions That Demand Answers

Friedman poses essential questions that the industry must address rather than avoid:

“Should we police how other people get their work done if they’re not breaking the law?”

This cuts to the heart of creative freedom and professional autonomy. If AI use is disclosed appropriately and no laws are broken, on what basis do we restrict others’ tools? (While still leaving the question, what is appropriate disclosure? The subtitles and bullet points and formatting you see here in the final version will have been added by AI, if the internet stays on long enough. Did that AI disclosure serve any useful purpose? Probably not.)

“What makes a work human in the end? When does work cross a line from AI generated to human created and owned?”

These definitional questions will shape copyright, attribution, and commercial practices. Avoiding them won’t make them disappear.

“If the writing comes too easily because of AI, is that a problem? Are we damaging ourselves?”

This reveals our assumption that difficulty equals value – an assumption worth examining another time. Here just to say I know authors who can spend a year on a single book and authors who write a book every month. I envy the latter. But what next? A notice on the cover saying, “I wrote this book in five weeks”? Another saying, “I spent five years writing the first three lines of this book”? God help us.

The Path Forward: Purpose Over Process


Both Friedman and Sacks point toward the same solution: focus on purpose rather than process. As Friedman concludes: “Those who recognise their fear for what it is, and proceed with a purpose that is not changed or affected by AI, they will find a way forward, whether they use the technology or not.”

Writers who know why they write – not just how – will adapt. Publishers who understand their value proposition – not just their traditional methods – will evolve. The industry needs to ask itself:

What unique value do human creators bring that transcends their tools?
How can AI enhance rather than replace human creativity?
What ethical frameworks should govern AI use in publishing?
How do we maintain quality and authenticity in an AI-assisted world?
What new opportunities does this technology create?

The Cost of Denial

Sacks delivers the starkest warning: “The machine is here. The question isn’t whether it will replace you, it’s whether you’ll let fear do it first.”

The publishing industry can choose to engage thoughtfully with AI, establishing best practices, ethical guidelines, and quality standards. Or it can bury its head in the sand while others shape the future without input from publishing professionals.

Denial isn’t a strategy – it’s an abdication of responsibility. The technology will advance with or without the industry’s blessing. The only question is whether publishing professionals will help steer that development or be dragged along by it.

Courage Over Comfort

The choice facing the publishing industry isn’t between human creativity and artificial intelligence – it’s between fearful paralysis and thoughtful progress. AI is a tool that can be used ethically or unethically, creatively or destructively, as a crutch or as an amplifier.

The responsibility lies not with the technology but with the humans who wield it. And that responsibility starts with having the courage to engage in nuanced, open conversations about AI’s role in publishing’s future.

As Friedman notes: “The absence of open conversation creates barriers to progress and delays more intelligent use of a technology that can’t be stopped.”

The industry must choose: will it lead this conversation, or let others lead it instead?

The technology isn’t waiting for permission. The market isn’t pausing for comfort. The future is being written now – with or without the publishing industry’s input.

The only question is whether that future will reflect the industry’s values, expertise, and vision, or whether those will be left behind with the typewriters and telegram operators of previous eras.

Fear, as both writers observe, is the real enemy. Not the machine, not the change, not even the uncertainty – but the paralysis that prevents us from engaging thoughtfully with what comes next.

The choice is ours. But only for now.


This post first appeared in the TNPS LinkedIn newsletter.