Part 5 of the ‘Quiver, Don’t Quake’ Book Review Series


TNPS note: This series of reviews of Quiver, Don’t Quake summarises the gist of Nadim Sadek’s arguments and attempts to take that debate forward. It is not meant as a summary reference to save folk from reading the original, but as a supplement that develops the ideas further.

Originally I called Quiver, Don’t Quake the most important book publishers would read this decade, but as I’ve been through it again and again, line by line, for this series of reviews, it has become increasingly clear – and should be clear to anyone who has followed all the reviews – that Quiver, Don’t Quake is about far more than just the book publishing industry.

It’s a handbook for every creative out there, and for everyone whose lives are impacted by creatives – which means just about everybody. Because AI is about everybody, whether we like it or not. And I sincerely hope book three, if a third book is coming, will fully embrace how globally transformative AI is about to become.

All that said, this particular review, the fifth so far with more to come, does focus directly on the publishing industry. But rather than cite specific examples of publishers using AI, I’ve created generic examples and likely results from which those outside our industry will hopefully gain inspiration and insight.

That is to say, I’ve placed emphasis on how to prompt, because without a good prompt an AI is just a glorified search engine and will disappoint as many as it delights. With good prompting, a good AI is like nothing else on Earth. A true miracle of modern science and technology. And we’re still on Day One of this journey.

One final preamble note: As ever, I’ve tried to be clear where I am referencing Sadek’s views for review and where my own views are to the fore for argument development.

As a rule of thumb, if it’s politely said, it’s Nadim. If it’s blunt to the point of rudeness, I’m your guy. I have patience and understanding you would not believe when it comes to teaching little children. But when it comes to adults wilfully misunderstanding change because it makes them uncomfortable or challenges their career trajectory…

I also must note that this fifth part of the review series is late, mostly because I’ve been busy elsewhere. The kids at school take a dim view of me asking for time off. I wouldn’t dare! But eventually a school holiday rolls around and I get a few days to myself.

But the bigger delay was in deciding just what to say here. The previous parts, and the next (on education), are well within my comfort zone. This one, not so much. Your Publishing House’s AI Roadmap?!

That is serious food for procrastination. But as I tell the children, never put off until tomorrow what you can put off until next month.

However, my month is up. So let’s get this review underway.


The Gap Between Knowing and Doing

We’ve established the psychological framework for creativity (Article 2). We’ve seen how other industries adapted successfully (Article 3). We’ve engaged seriously with AI’s harshest critics (Article 4).

Now comes the uncomfortable question: What are you actually going to do on Monday morning?

This is where most AI discourse fails publishing professionals. Books and articles explain what AI is, why it matters, and whether we should use it – but rarely provide concrete guidance on how to start, where to focus limited resources, and which tools actually solve publishing’s specific problems.

And this gave me some leverage here that I did not initially expect, because this is precisely my objection to tech in the classroom. Teachers (in First World countries, anyway) are getting tech thrown at them as if it is the magical answer to every pedagogical problem ever, and they are shown – trained, even – how to use it. This is the on-switch. This is how you do this. This is how you do that.

But teaching with it? They mostly have no clue, because the tech inventors, clever as they are, were creating tech for teachers as adults, not for teachers of children. And teachers were taught how to teach using the factory education model of listen, copy, repeat rote learning. In doing so, children advanced from listen, copy and repeat on paper to listen, copy and paste on screen.

The tech in most modern classrooms in wealthier countries is truly magical in its capacity. And it’s barely understood, let alone used effectively, by teachers.

And in many ways I see that with AI for publishing. Yes, ChatGPT and Claude and Gemini can do amazing things (if you know how to prompt effectively and understand that iteration is essential – that this is not Google Search!). Yes, Grok can help with your sad sexual fantasies, Copilot can dance around your Word docs and Meta can do a great impression of Dick Van Dyke in Mary Poppins. But they are all just generic novelty toys unless we learn to leverage them for our own particular needs.

Keep that in mind as we move forward.

I’ve mentioned some AI models here in broad terms (ChatGPT, Claude, etc) and omitted others (Grok, Meta, Copilot, etc) for brevity and clarity. But models vary widely, and within each brand (most companies have many models) there are freemium and paid tiers, each with their own strengths and weaknesses. And no two months are the same as new abilities and new capacity releases are announced. Sometimes no two days are the same!

So what follows are broad guidelines and deliberately generic examples that will hopefully inspire you to explore AI further.

If you are reading this at the weekend, some of this will be out of date by Monday. Things are moving that fast. The most successful publishers of the AI transformation (and make no mistake, this is transformative like nothing else we have ever experienced) will be those that get off the fence, fart in the face of the Luddite Fringe, and embrace AI for what it is – the most significant milestone in publishing history since the printing press.

From Understanding to Action – Your Publishing House’s AI Roadmap

Nadim Sadek’s Chapter 10, “Becoming a Collaborative Creator,” offers a framework for individual creative practice. Incredibly useful. But TNPS is first and foremost aimed at the publishing industry, so, inspired by Sadek’s Collaborative Creator concept, I’ve pivoted to a publisher-orientated take on the same territory: organisational implementation of AI across editorial, marketing, rights, production, and discovery.

What follows is your roadmap. Not exhaustive (as said, AI capabilities evolve on a timescale that makes bacteria look like slow breeders), not prescriptive (your publishing house’s needs differ from others’), but actionable. You can begin tomorrow with freemium tools. You can scale strategically over quarters as you understand what works and invest in premium models.

The approach: Start small, measure results, expand what works, abandon what doesn’t. Treat AI integration as iterative experimentation, not a massive overnight transformation.

Let’s begin where it hurts most.

The Slush Pile Crisis: Triage, Not Surrender

The Problem:

Every publisher reports the same phenomenon: submission volume has exploded. Editorial assistants drown in manuscripts. Response times stretch to months. Quality manuscripts get lost in the noise.

The Temptation:

Use AI to automatically reject weak submissions, filter for “quality,” or score manuscripts algorithmically.

Why This Fails:

AI cannot reliably judge literary quality, authentic voice, or cultural significance. Automated rejection systems will eliminate precisely the unconventional voices that might be most valuable – the very democratisation Sadek celebrates.

The Smarter Approach: AI-Assisted Triage, Not AI Decision-Making

Stage 1: Initial Screening (Freemium Tools)

Tool Options:

  • ChatGPT Free / Claude (freemium) for basic manuscript analysis
  • Google’s Gemini for document processing
  • Copilot for quick assessments

Implementation:

Create a standard prompt template that editorial assistants use for initial manuscript review:

Example Prompt Framework:

Analyse this manuscript submission:

[Paste: Query letter + first chapter]

Provide:
1. Genre identification (be specific: not just "fantasy" but sub-genre(s))
2. Comparable published titles (3-5 similar books)
3. Distinctive elements (what makes this different from comparables)
4. Technical assessment (prose quality, structure, pacing - neutral observation only)
5. Red flags (factual errors, incoherence, plagiarism indicators)
6. Green flags (unique voice, cultural perspective, narrative innovation)

Do NOT judge whether we should publish this. Focus on factual description.

What This Achieves:

  • Speeds initial review from 30 minutes per submission to 10 minutes or less
  • Provides consistent framework across multiple assistants
  • Identifies comparables the assistant might not know
  • Flags obvious problems (incoherent plot, severe technical issues)
  • Highlights potential strengths that warrant deeper attention
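For houses that want every assistant working from an identical framework, the template above can be generated by a short script rather than copy-pasted by hand. A minimal Python sketch – the function name and structure are my own illustration, not anything from the book:

```python
def build_triage_prompt(query_letter: str, first_chapter: str) -> str:
    """Assemble the standard first-pass analysis prompt.

    Keeping one canonical template means every editorial assistant
    asks the model for exactly the same neutral, descriptive framework.
    """
    submission = f"{query_letter}\n\n---\n\n{first_chapter}"
    return (
        "Analyse this manuscript submission:\n\n"
        f"{submission}\n\n"
        "Provide:\n"
        "1. Genre identification (be specific: sub-genre(s), not just 'fantasy')\n"
        "2. Comparable published titles (3-5 similar books)\n"
        "3. Distinctive elements (what makes this different from comparables)\n"
        "4. Technical assessment (prose quality, structure, pacing - neutral observation only)\n"
        "5. Red flags (factual errors, incoherence, plagiarism indicators)\n"
        "6. Green flags (unique voice, cultural perspective, narrative innovation)\n\n"
        "Do NOT judge whether we should publish this. Focus on factual description."
    )

# Illustrative placeholders, not a real submission
prompt = build_triage_prompt("Dear Editor, ...", "Chapter One ...")
```

The returned string is pasted into whichever model the house uses (ChatGPT, Claude, Gemini); the point is consistency across assistants, not automation of judgment.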

What It Doesn’t Do:

  • Make acquisition decisions
  • Replace human judgment about voice authenticity
  • Assess commercial potential reliably
  • Understand cultural nuance or marginalised perspectives

Critical Rule: AI analysis goes in the assistant’s notes, never in rejection letters. Human editors make all author-facing decisions. Never make an author think they were rejected by a machine.

Stage 2: Pattern Recognition Across Submissions (Paid Tools Worth Considering)

For larger publishers with budget:

Tools:

  • Litmaps or similar bibliography tools (adapted for comparable title mapping)
  • Custom GPT models (ChatGPT Plus allows creating house-specific models)
  • Airtable + AI integration for submission database with pattern analysis

Implementation:

Feed AI your last 5 years of:

  • Accepted manuscripts (with eventual sales data)
  • Rejected manuscripts that succeeded elsewhere (the ones that got away)
  • Submissions from authors who became successful after rejection

Ask AI to identify:

  • Patterns in what you’ve historically acquired vs. what succeeded commercially
  • Blind spots (types of submissions you consistently undervalue)
  • Market gaps (genres/subgenres/perspectives under-represented in your list)

Example Analysis Request:

Here are descriptions of 50 manuscripts we rejected over 3 years that were published successfully elsewhere.

Compare these to 50 manuscripts we accepted in the same period.

Identify:
1. Thematic patterns in our rejections vs. acceptances
2. Commercial performance differences (if data available)
3. Descriptive patterns in rejection reasons vs. actual market reception
4. Author demographic patterns (where data permits)

What are we systematically undervaluing?

What This Reveals:

One UK independent publisher discovered through this analysis that they consistently rejected “gritty urban fantasy with LGBTQ+ protagonists” whilst these performed strongly for competitors. Not because of conscious bias, but because their editorial team lacked personal reading experience in that subgenre and couldn’t assess quality reliably.

The fix wasn’t AI acquisition – it was hiring an editor with that reading background.

The AI’s role: Revealing blind spots humans couldn’t see from inside the pattern.

Stage 3: Augmented First Readers (Hybrid Approach)

For publishers committed to finding unconventional voices:

Implementation:

Train AI on your house’s editorial values (not just successful titles, but why you acquired them).

Process:

  1. Editorial director writes 2-3 pages: “What makes a [Your Imprint] book? What voices do we champion? What risks do we take?”
  2. Include 10-15 examples: title, acquisition rationale, what made you say yes despite concerns
  3. Create custom GPT or use Claude Projects (freemium in Claude) to embed this context
  4. When assistants use AI in this way for manuscript analysis, it references your house’s specific values, not generic publishing criteria

Example Prompt with House Context:

[Your imprint's editorial philosophy here]

Given this manuscript submission, assess:
1. Alignment with our editorial values (specific to our mission)
2. Where this author's voice/perspective fills gaps in our current list
3. Commercial comparables within our typical range
4. Challenges we'd face positioning this (honest assessment)
5. Why this might be exactly the risk we should take

Remember: We value [your specific values]. We've succeeded with unconventional [examples].

What This Achieves:

AI becomes a house-specific first reader, not a generic quality filter. It knows you value “economically marginalised voices with literary ambition” or “science fiction exploring non-Western cultural perspectives” or whatever defines your editorial identity.

Asking for honest assessment means the AI won’t go into its automatic agree-with-user mode.

The Result:

Assistant says: “AI flagged this memoir for rejection due to ‘unclear structure,’ but noted it’s exactly the kind of experimental form we championed in [past success]. Worth deeper read.”

That’s AI supporting human judgment, not replacing it.


Translation: From Impossible to Inevitable

The Problem:

Global publishing is multilingual, but translation is expensive ($0.05-0.20 per word professionally) and slow (months for novels). Most publishers can’t afford to translate their backlists or consider international editions of new acquisitions.

What’s Changed in 2024-2025:

AI translation has crossed a critical threshold – not “perfect” (no translation is), but “good enough to be edited into excellent” for many language pairs.

The Economic Shift:

  • Traditional: £8,000-12,000 per novel translation, 3-6 month timeline
  • AI-assisted: £1,000-3,000 (AI + human editor), 2-6 week timeline
  • Savings: 70-80% cost reduction, 75% time reduction

By the time you read this, those costs may have come down!
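Those claimed savings are easy to sanity-check from the mid-points of the quoted ranges. A quick Python sketch (the helper function is my own; the figures are the ones quoted above):

```python
def savings_pct(traditional: float, ai_assisted: float) -> int:
    """Percentage saved by the AI-assisted route versus traditional."""
    return round((traditional - ai_assisted) / traditional * 100)

# Mid-points of the ranges quoted above (GBP per novel translation)
print(savings_pct(10_000, 2_000))  # → 80, within the quoted 70-80% range
```

At the extremes of the ranges (£12,000 down to £3,000) the same arithmetic gives 75%, so the 70-80% claim holds across the board.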

The Three-Tier Translation Approach

Tier 1: AI Draft + Professional Editor (Premium Quality)

Best For: Frontlist titles, literary fiction, complex non-fiction, books where voice/style is crucial

Tools:

  • DeepL Pro (paid, superior to most alternatives for European languages)
  • Google Cloud Translation (good Asian language coverage)
  • GPT-4 or Claude Opus (for context-aware translation maintaining style)

Process:

  1. AI generates first-pass translation. For literary work, use prompts like: “Translate this chapter maintaining the author’s distinctive voice. This is literary fiction with [describe style]. Preserve rhythm, maintain metaphors unless culturally impossible, keep the author’s sentence complexity.”
  2. Professional translator/editor reviews. Not translating from scratch, but editing AI output. Focus: voice preservation, cultural adaptation, subtle meaning. Timeline: 70% faster than traditional translation.
  3. Author review (if bilingual or using back-translation). Spot-check key passages; verify thematic preservation.

Example Workflow:

Sadek’s own book Shimmer, Don’t Shake used AI-assisted translation for 60+ language editions. Professional editors refined AI drafts, reducing per-language cost from £10,000+ to £2,000-3,000 whilst maintaining quality.

His reflection: “The AI captured meaning accurately. Human editors ensured my voice remained distinctive in each language. Together, we achieved scale impossible pre-AI.”

Tier 2: AI Translation + Native Speaker Review (Good Quality, Maximum Scale)

Best For: Backlist titles, genre fiction, practical non-fiction, books where content matters more than lyrical style

Tools:

  • DeepL Free (surprisingly capable for most European languages)
  • ChatGPT / Claude (freemium tiers adequate for straightforward prose)
  • Gemini (improving constantly)

Process:

  1. AI translation with genre-specific prompts
  2. Native speaker review (not a professional translator, but a fluent reader). Check: accuracy, readability, cultural appropriateness. Fix: obvious errors, awkward phrasing, cultural mismatches.
  3. Light copyedit by professional if budget permits

Cost: £500-1,500 per title
Quality: “Good enough” for commercial fiction, self-help, business books

When This Works:

A UK publisher used this approach for their romantic comedy backlist. Native Spanish speakers in their marketing team reviewed AI translations, fixed obvious issues, submitted to Spanish-language platforms.

Result: 15 backlist titles available in Spanish within 3 months. Combined sales: £23,000 in year one. Investment: ~£8,000 total. ROI: positive within 9 months.

Tier 3: AI-Only with Spot-Checking (Experimental/Low-Risk)

Best For: Exploration (testing market demand before investing in quality translation), marketing copy, metadata, short-form content

Tools:

  • Free AI translators (DeepL, Gemini, ChatGPT/Claude freemium)
  • Browser extensions for quick translation checks

Process:

  1. AI translates
  2. Quick spot-check by native speaker (informal, 15-30 minutes)
  3. Publish with disclaimer if appropriate: “AI-assisted translation, professional edition forthcoming if successful”

When This Makes Sense:

Testing whether your thriller series has an audience in Poland before investing in professional Polish translations. AI-translate the first book, publish as e-book only, gauge response. If it sells 500+ copies, invest in professional translation for the series.

Risk: Subpar translation might harm reputation.
Mitigation: Be transparent, price accordingly, treat as market test.
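That go/no-go rule is worth writing down explicitly, because it forces the house to commit to its success threshold before seeing the sales figures. A Python sketch, assuming the 500-copy threshold from the example above (the function and the sample sales figures are illustrative, not from the book):

```python
def invest_in_professional_translation(ebook_sales: int, threshold: int = 500) -> bool:
    """Decide whether the AI-only test edition justifies professional translation.

    The AI-translated e-book is the cheap market probe; professional
    translation of the whole series is the investment being gated.
    """
    return ebook_sales >= threshold

# Hypothetical outcomes of the Polish market test
print(invest_in_professional_translation(520))  # → True
print(invest_in_professional_translation(180))  # → False
```

Agreeing the threshold in advance also keeps the decision honest: a disappointing test can’t be retro-fitted into a success story.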

The Language Pairs That Work Best (2025-6 Reality)

Excellent AI Translation:

  • English ↔ Major European languages (French, German, Spanish, Italian, Portuguese)
  • English ↔ Chinese (Simplified)
  • English ↔ Japanese (improved dramatically over 2023-2024)
  • English ↔ Korean

Good AI Translation (with editor review essential):

  • English ↔ Arabic
  • English ↔ Hindi
  • English ↔ Russian
  • English ↔ Turkish
  • Major European language pairs

Adequate AI Translation (requires significant editing):

  • English ↔ many African languages (improving but limited training data)
  • English ↔ Southeast Asian languages (quality varies)
  • Any language pair involving “minor” languages (under 50M speakers)

The Ethical Consideration:

For under-represented language pairs, Sadek argues (as have I here, many times) for more AI translation, not less – specifically because professional human translation is economically unavailable.

The choice isn’t:

  • AI translation vs. human translation

The choice is:

  • AI translation (imperfect but accessible) vs. no translation at all

Given that choice, AI democratises access.

The Professional Translator Question

Will This Eliminate Translation Jobs?

Evidence from 2024: No – but it’s changing the work.

What’s Declining:

  • Straightforward technical translation (user manuals, business documents)
  • High-volume, low-complexity fiction translation

What’s Growing:

  • Literary translation (where voice/style matter most)
  • Specialized translation (legal, medical, technical)
  • Editorial/refinement work (editing AI drafts)
  • Cultural adaptation consulting (advising AI use)

Smart Translators:

  • Position themselves as AI editors/refiners
  • Specialise in complex literary work
  • Consult with publishers on AI translation workflows
  • Move up value chain to creative/cultural work

Publishers’ Responsibility:

Be transparent about AI use, compensate fairly (even if rates shift), credit human editors prominently.

Sadek’s position (which I endorse):

AI enables translation scale that creates more opportunity for translators willing to adapt, not less. The market expands; roles evolve.


Market Research: From Gut Feeling to Data-Informed Intuition

The Problem:

Publishers make acquisition decisions on: comp title performance, editor intuition, sales team gut feelings, recent auction results gossip.

None of this is systematically data-informed. Not because publishers don’t value data, but because comprehensive market research is prohibitively expensive and slow.

What AI Changes:

Rapid, comprehensive market analysis becomes accessible to any publisher with internet access.

Application 1: Comparable Title Analysis (Freemium)

Use Case: Editor considering memoir acquisition. Wants to understand market landscape.

Traditional Approach:

  • Ask colleagues for comps
  • Check Nielsen BookScan for similar titles (if you have access; expensive)
  • Browse Goodreads/Amazon reviews
  • Time: 2-3 hours, limited data points

AI-Assisted Approach:

Prompt Example (ChatGPT/Claude/Gemini – all freemium adequate):

I'm considering acquiring a memoir:
- Author: Immigrant author (Nigerian-British)
- Themes: Cultural identity, family trauma, food as cultural memory
- Style: Lyrical prose, non-chronological structure
- Similar to: "Crying in H Mart" (Michelle Zauner) meets "The Ungrateful Refugee" (Dina Nayeri)

Provide:
1. 10-15 comparable published memoirs (last 5 years)
2. Commercial performance indicators (if publicly available: awards, bestseller status, major reviews)
3. Publishers who've succeeded with similar memoirs
4. Apparent market trends: growing/saturated/declining
5. Positioning challenges and opportunities

Be specific with titles, authors, publishers, dates.

What You Get (in 2-3 minutes):

  • Comprehensive comp list you’d never have thought of
  • Pattern recognition: which publishers own this space
  • Market timing assessment: oversaturated or undersupplied?
  • Positioning insights: how to differentiate this memoir

Critical Step: Verify the titles exist. AI occasionally hallucinates. I find Claude the most reliable, but always check: a quick Google search confirms each title before you use it in acquisition discussion.

Value: 90% of the insight in 5% of the time.

Application 2: Trend Analysis and Gap Identification (Paid Tools Worth It)

For Publishers with Research Budget:

Tools:

  • Google Trends (free, but AI helps interpret)
  • Publisher’s Lunch / BookScan data + AI analysis (requires subscriptions)
  • Social listening tools (Brandwatch, Talkwalker) + AI synthesis

Use Case: Planning next year’s acquisition strategy. Want to identify emerging trends before they’re obvious.

Process:

  1. Gather data: bestseller lists, Goodreads trending, social media book conversations, BookTok/Bookstagram hashtags, literary agent MSWL (Manuscript Wish List) posts
  2. Feed to AI with prompt:
Here's data from:
- NYT Bestseller lists (fiction/non-fiction, last 12 months)
- Goodreads most-added books (last 6 months)
- Top 50 BookTok trending titles
- Literary agents' recent MSWL posts

Analyze:
1. Emerging themes/subgenres gaining momentum
2. Established categories declining
3. Reader demand vs. publisher supply mismatches
4. Demographic shifts (age/cultural background of trending authors)
5. Format trends (trilogy resurgence? standalone? duologies?)

Focus on *early* signals, not established trends everyone sees.

What This Reveals:

One publisher used this approach in late 2023 and identified “climate fiction with solutions focus” (not dystopian, but practical/hopeful) as an emerging trend before it became obvious in 2024.

The strategic value: Commissioned three titles in this space before competitors, established early market position.

The AI’s role: Pattern recognition across datasets too large for human analysis. Human judgment determines which patterns matter.

Application 3: Audience Research for Niche Titles (Freemium Adequate)

Use Case: You’re publishing a biography of an obscure but fascinating historical figure. Who’s the audience? How do you reach them?

Prompt Example:

I'm publishing a biography of [Historical Figure: brief description].

Research:
1. What online communities discuss this person or era?
2. What related topics attract similar audiences? (history podcasts, documentary subjects, fiction set in this period)
3. What comparable biographies succeeded? What was their positioning?
4. What keywords/hashtags connect to potential readers?
5. What publications/podcasts/influencers cover this area?

Provide specific names, URLs where possible.

What You Get:

Comprehensive audience map: where they congregate online, what else they read, how to reach them.

Real Example:

A publisher used this for a biography of a lesser-known female aviator. AI identified:

  • Aviation history subreddits
  • Feminist history podcasts
  • Three specific history YouTubers covering this era
  • Related biographies publisher hadn’t considered
  • Specific keywords for metadata/Amazon categories

Marketing team used this to:

  • Target outreach (contacted those podcasters specifically)
  • Optimise metadata (used AI-identified keywords)
  • Position the book (emphasised connections to better-known figures AI identified)

Result: Niche title found its audience, sold 3,000+ copies in year one – well above forecast for such a specialised biography.


Metadata and Discoverability: The Invisible Infrastructure

The Undervalued Crisis:

Most publishers treat metadata as an administrative task. But metadata is discovery infrastructure. Poor metadata = invisible books, regardless of quality.

The AI Opportunity:

Generating comprehensive, optimised metadata takes minutes instead of hours.

SEO-Optimised Book Descriptions (Freemium)

The Problem:

Publishers write jacket copy for bookshop browsing. But 70%+ of discovery happens online, where different copy conventions work better.

Print Jacket Copy:

  • Literary, evocative
  • Teases without revealing
  • Assumes reader is browsing physically, can examine the book

Online Description:

  • Front-loads key information
  • Includes searchable keywords naturally
  • Answers: What is this? Who is it for? Why now?

AI Can Generate Both from Single Source

Process:

  1. Start with your print jacket copy
  2. Prompt:
Here's our print jacket copy for [Title]:

[Paste copy or upload photo of the cover]

Generate:
1. SEO-optimised web description (200 words)
   - Front-load genre and key themes
   - Include likely search terms naturally
   - Maintain marketing appeal while maximising discoverability
   
2. Amazon "Editorial Review" style (150 words)
   - Pull-quote worthy opening
   - Comp titles prominently
   - Clear audience identification

3. Short pitch (50 words) for:
   - Social media
   - Email marketing
   - Trade catalogue

For each, maintain tone but optimise for platform.

What You Get:

Platform-specific descriptions in minutes. Human editor reviews for accuracy and tone, makes final call.
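For a full season’s list, the same request can be assembled per title by a short script, so the house asks for all three platform variants in one consistent prompt. A Python sketch (the structure and names are my own illustration, and a human editor still reviews every output):

```python
# Platform specs paraphrased from the prompt template above
PLATFORM_SPECS = {
    "web": ("SEO-optimised web description (200 words): front-load genre and "
            "key themes, include likely search terms naturally"),
    "amazon": ("Amazon 'Editorial Review' style (150 words): pull-quote worthy "
               "opening, comp titles prominently, clear audience identification"),
    "short": "Short pitch (50 words) for social media, email marketing, trade catalogue",
}

def metadata_prompt(title: str, jacket_copy: str) -> str:
    """Build one prompt asking the model for all platform variants at once."""
    tasks = "\n".join(f"{i}. {spec}" for i, spec in enumerate(PLATFORM_SPECS.values(), 1))
    return (
        f"Here's our print jacket copy for {title}:\n\n{jacket_copy}\n\n"
        f"Generate:\n{tasks}\n\n"
        "For each, maintain tone but optimise for platform."
    )

# Batch a season's list: one prompt per title, output reviewed before use
season = {"Example Title": "Evocative jacket copy goes here..."}
prompts = {t: metadata_prompt(t, copy) for t, copy in season.items()}
```

The saving comes from the assembly and the consistency; the editorial judgment on the returned copy stays exactly where it was.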

The Time Savings:

Traditionally: 30-45 minutes per title for multiple descriptions
AI-assisted: 5-10 minutes per title (including review/editing)

For 50-book list: Save 15-25 hours per season

BISAC and Keyword Optimisation (Freemium)

The Problem:

Proper BISAC categorisation and keyword selection determine whether readers find your book when browsing online retailers.

Most publishers:

  • Choose obvious BISAC codes (everyone in the category)
  • Use generic keywords (high competition, low specificity)
  • Miss niche categories where less competition = better visibility

AI-Assisted Approach:

Prompt:

Book: [Title, brief description, genre]

Provide:
1. Primary BISAC code (most obvious category)
2. 3-5 secondary BISAC codes (less obvious but valid)
   - Include niche categories with lower competition
3. 7-10 Amazon-specific keywords
   - Balance search volume with competition
   - Include specific subgenre terms
   - Add theme-based keywords readers might search
4. Goodreads shelf tags (10-15) actual readers use

Explain reasoning for non-obvious choices.

What This Reveals:

For a historical novel set in Victorian London with spiritualism themes, AI suggested:

Obvious: Fiction > Historical > Victorian

Non-obvious but valuable:

  • Body, Mind & Spirit > Spiritualism
  • Fiction > Gothic
  • Fiction > Literary > Historical
  • Social Science > Folklore & Mythology

Result: Book appeared in multiple categories, found audiences beyond typical historical fiction readers.

The Discoverability Impact:

Better categorisation = better browse visibility = more sales. One publisher reported 15-20% sales increase for backlist titles after metadata optimisation using this approach.

Geo-Targeting for Regional Editions (Paid Tools More Effective, but Freemium Usable)

The Problem:

Same book might need different positioning in UK vs. US vs. Australia. American references might confuse British readers; British idioms might need explanation for Americans.

AI-Assisted Solution:

Process:

  1. Start with master metadata (description, keywords, categories)
  2. Prompt for regional adaptation:
Adapt this book description for [UK/US/Australian] market:

[Master description]

Adjust:
1. Spelling/terminology conventions (honor/honour, etc.)
2. Cultural references (replace with local equivalents where necessary - example: hood/bonnet, trunk/boot, pants/trousers)
3. Comp titles (use titles well-known in target market)
4. Marketing emphasis (themes that resonate locally)

Maintain core message but optimise for regional discovery.

Real Example:

Cookbook originally US-marketed emphasising “convenience” and “time-saving.”

UK adaptation emphasised “traditional techniques with modern shortcuts.”

Australian adaptation emphasised “fresh, seasonal ingredients.”

Same book, regionally optimised messaging = 25% stronger performance in non-origin markets.

The Ongoing Metadata Maintenance Challenge

The Reality:

Metadata isn’t set-and-forget. Keywords that worked last year become crowded. New categories emerge. Trends shift.

AI-Assisted Maintenance:

Quarterly review prompt:

Analyze current metadata for [Title]:

Current BISAC codes: [list]
Current keywords: [list]
Current description: [paste]

Published: [date]

Evaluate:
1. Are keywords still optimal or now oversaturated?
2. Have new relevant categories emerged?
3. Does description reflect current market language?
4. Suggest updates to improve discoverability

Compare to successful comparable titles published recently.

What This Catches:

  • Keywords that worked at launch are now too competitive
  • New BISAC categories added since publication
  • Trending terminology in the genre
  • Opportunity to reposition backlist titles

The Backlist Revenue Impact:

Publishers systematically optimising metadata report 10-30% revenue increase on backlist titles, with no other marketing changes.

That’s significant return for minimal time investment.


Audiobook Narration: The Accessibility Revolution

The Landscape (2025):

Audiobook market growing 20%+ annually. But professional narration costs £150-250 per finished hour. A 10-hour novel: £1,500-2,500 production cost.

For backlist or niche titles, economics often don’t work.

What’s Changed:

AI narration crossed viability threshold in 2024. Not universally loved, but increasingly accepted for specific use cases.

The Three-Tier Audiobook Strategy

Tier 1: Human Narration (Premium, Frontlist Priority)

When to Use:

  • Lead titles
  • Literary fiction (where voice/interpretation matters)
  • Celebrity/author-read memoirs
  • Titles with multiple character voices
  • Books where audiobook is significant revenue stream

No AI substitute here. Human narration quality, interpretive nuance, emotional delivery remain superior for these applications.

Tier 2: Hybrid Approach – AI Narration + Human Direction (Emerging Option)

The Innovation (2024-2025):

Tools like Descript’s Overdub, ElevenLabs, or Speechify allow creating AI voice models with human oversight.

Process:

  1. Professional narrator records 2-3 hours of directed performance
  2. AI voice model trained on that narrator’s voice
  3. AI generates full audiobook in narrator’s voice
  4. Human narrator reviews, re-records segments needing human nuance (emotional peaks, complex dialogue)
  5. Final edit combines AI efficiency with human artistry where it matters

Cost: ~£500-800 for 10-hour audiobook (vs. £1,500-2,500 traditional)
Quality: 80-90% of full human narration
Timeline: Weeks instead of months

When This Works:

  • Non-fiction (where consistent delivery matters more than emotional interpretation)
  • Genre fiction backlist (where narration quality threshold is “good enough,” not “exceptional”)
  • Books with single POV (avoiding multiple character voice challenges)

Ethical Consideration:

Many narrators now offer this as a service – licensing their voice for AI generation under controlled conditions, with ongoing compensation. This is a collaborative model, not a replacement.

Several narrators have publicly discussed earnings: Base fee for voice licensing (£2,000-5,000), plus per-title fee (£200-500), plus performance royalties on sales.

Their calculation: Better to participate in expanding audiobook market than be cut out entirely.

Tier 3: AI-Only Narration (Budget/Experimental)

Tools:

  • Google Cloud Text-to-Speech (paid, high quality)
  • Amazon Polly (paid, integrated with Kindle ecosystem)
  • ElevenLabs (paid, excellent for quality AI voices)
  • NaturalReader (freemium option, adequate for testing)

When to Use:

  • Testing audiobook demand before investing in human narration
  • Backlist titles where production cost exceeds likely revenue
  • Educational/reference books (where information delivery matters more than performance)
  • Accessibility (offering audio format to vision-impaired readers)

Current Quality (2025):

Excellent: Straightforward non-fiction, business books, self-help
Good: Contemporary fiction with minimal dialogue
Adequate: Genre fiction with standard prose
Poor: Literary fiction, dialect-heavy dialogue, poetry, humour

The Reality Check:

Some readers won’t accept AI narration. That’s fine. The question isn’t whether to replace human audiobooks, but whether to make audiobooks available where they wouldn’t exist otherwise.

Example:

A publisher had a 150-book backlist. Only 12 had audiobook editions (the lead titles). They used AI narration for 50 backlist titles (mid-performing, not lead titles).

Results (year one):

  • Total AI audiobook production cost: £15,000
  • Revenue: £42,000
  • Equivalent cost with traditional human narration: £75,000-125,000 (economically unviable)

The economics work. Not for everything, but for expanding access where human narration isn’t economically viable.
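The example above can be sketched as a back-of-envelope model. The per-title cost below is an assumption (£15,000 spread across 50 titles); all figures are illustrative, taken from the example rather than real publisher data.

```python
# Hypothetical back-of-envelope model for the AI-narration example above.
# All figures are illustrative assumptions, not real publisher data.

def audiobook_economics(titles, cost_per_title, revenue_total):
    """Return total production cost, net margin, and average revenue per title."""
    total_cost = titles * cost_per_title
    net = revenue_total - total_cost
    return total_cost, net, revenue_total / titles

# 50 titles at an assumed £300 each in AI production cost (~£15,000 total),
# against the £42,000 year-one revenue cited in the example.
total_cost, net, rev_per_title = audiobook_economics(50, 300, 42_000)
print(total_cost, net, rev_per_title)  # 15000 27000 840.0
```

Even at a modest £840 average revenue per title, the margin is positive – which is the whole point: these are titles that would otherwise have no audio edition at all.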

The Listener Acceptance Question

Research from Spotify and Audible (2024):

  • ~40% of listeners accept AI narration for certain book types
  • ~25% actively prefer AI narration (consistent pacing, no narrator “performance”)
  • ~35% reject AI narration categorically

This splits by genre:

More Accepting:

  • Business/self-help readers (65%+ acceptance)
  • Academic/educational (70%+ acceptance)
  • Science/technology non-fiction (60%+ acceptance)

Less Accepting:

  • Literary fiction readers (30% acceptance)
  • Romance readers (35% acceptance)
  • Mystery/thriller readers (40% acceptance)

The Strategy:

Be transparent. Label clearly. Let readers choose. Offer human narration for lead titles, AI for backlist/experimental.

The market sorts itself.


The Implementation Roadmap: From Tomorrow to Next Quarter

You’ve seen the applications. Now: how to actually begin?

Week 1: The Assessment

Action: Audit current pain points and opportunities

Exercise (can use AI to help):

Evaluate our publishing house:

1. Biggest time sinks (tasks consuming disproportionate staff time)
2. Bottlenecks (where work queues up)
3. Missed opportunities (things we'd do if we had capacity)
4. Quality inconsistencies (where human error/fatigue impacts output)
5. Budget constraints preventing desirable outcomes

Rank by: Impact if solved, effort to implement, risk of getting it wrong.

Expected Output:

Priority list of where AI assistance would deliver highest value for your specific situation.
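The ranking step above can be made concrete with a simple scoring pass. This is a minimal sketch: the pain points, the 1-5 scores, and the scoring formula (impact minus effort minus risk) are all illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of the Week 1 ranking step, with hypothetical 1-5 scores.
# Higher impact is better; lower effort and risk are better.

pain_points = [
    # (name, impact, effort, risk) – illustrative scores only
    ("Metadata optimisation", 4, 1, 1),
    ("Submission triage",     3, 2, 3),
    ("AI translation pilot",  5, 3, 2),
]

# Simple priority score: impact minus the cost of effort and risk.
ranked = sorted(pain_points, key=lambda p: p[1] - p[2] - p[3], reverse=True)

for name, impact, effort, risk in ranked:
    print(f"{name}: priority {impact - effort - risk}")
```

Even a crude score like this forces the useful conversation: it surfaces the high-impact, low-effort candidates (metadata, in this hypothetical) as the obvious first pilots.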

Week 2-4: Pilot Projects

Action: Choose 2-3 small, low-risk experiments

Criteria for Good Pilot:

  • Contained scope (single task/workflow)
  • Clear success metrics (time saved, quality maintained, cost reduced)
  • Reversible (can stop if it fails)
  • Valuable if successful (worth scaling)

Example Pilots:

Pilot A (Metadata):

  • Take 10 backlist titles
  • Generate optimised metadata using AI
  • Compare discoverability metrics 90 days later
  • Metric: Sales increase vs. control group

Pilot B (Translation):

  • Choose one backlist title
  • AI-translate to Spanish (freemium tools)
  • Professional editor review
  • Publish as ebook
  • Metric: Sales vs. production cost

Pilot C (Market Research):

  • Next 5 acquisition considerations
  • Generate AI-assisted comp analysis
  • Compare to traditional research (time/insights)
  • Metric: Editor satisfaction, decision confidence
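The Pilot A metric – sales increase vs. a control group – reduces to a single uplift figure. A minimal sketch, with hypothetical 90-day sales numbers standing in for real data:

```python
# A minimal sketch of the Pilot A success metric: sales uplift of the
# optimised group vs. a matched control group. Figures are hypothetical.

def uplift(pilot_sales, control_sales):
    """Percentage change of the pilot group relative to the control group."""
    return (sum(pilot_sales) - sum(control_sales)) / sum(control_sales) * 100

pilot   = [120, 95, 210, 80, 160]   # 90-day unit sales, optimised metadata
control = [100, 90, 180, 85, 140]   # comparable titles, unchanged metadata

print(f"Uplift: {uplift(pilot, control):.1f}%")
```

The design choice that matters is the control group: without comparable unchanged titles measured over the same period, any uplift figure is just seasonal noise.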

The Golden Rule:

Start small. Measure results. Expand what works. Abandon what doesn’t.

Month 2: Evaluate and Expand

Action: Review pilot results, scale successes, learn from failures

For Successful Pilots:

  • Document workflow (create standard operating procedure)
  • Train additional staff
  • Scale to more titles/projects
  • Measure ongoing performance

For Failed Pilots:

  • Analyse why (Wrong tool? Wrong application? Insufficient training?)
  • Decide: Abandon, revise approach, or try different use case?

For Mixed Results:

  • Identify what worked and what didn’t
  • Refine approach
  • Re-pilot with adjustments

Quarter 2: Strategic Integration

Action: Move from experimentation to standard practice for proven applications

This Means:

  • Update job descriptions (incorporate AI tool use)
  • Develop internal training
  • Invest in paid tools for applications with proven ROI
  • Create quality control frameworks
  • Establish ethical guidelines

Critical: Document both successes and limitations. Be honest about where AI helps and where it doesn’t.

Ongoing: Iteration and Innovation

The Reality:

AI capabilities evolve constantly. What doesn’t work today might work in six months. What works today might (will!) be superseded by better approaches.

The Mindset:

Continuous experimentation, perpetual learning. This isn’t a project with an endpoint – it’s a new way of operating.


The Freemium vs. Paid Decision Framework

Throughout this article, I’ve noted freemium vs. paid options. How to decide?

Start Freemium, Scale to Paid

Phase 1 (Experimentation): Use freemium tools exclusively

  • Prove the use case works
  • Understand workflow integration
  • Develop staff capability
  • Document value generated

Phase 2 (Scaling): Invest in paid tools where:

  • You are comfortable with the AI. N.B. Claude is a different planet from Meta’s AI, and Kimi is very different from ChatGPT or Copilot – it is worth trying several (all!) of the freemium versions first, with the same test prompts and iterations, to get a feel for what is comfortable for you, your staff, and your specific needs
  • Freemium hits its limits (rate limits, feature restrictions). Right now Claude has the most generous freemium limits for serious work, IMHO
  • Paid tools can offer significant time savings
  • Quality difference justifies cost
  • Volume justifies subscription

Example:

Metadata generation: Start with ChatGPT free or another LLM. Once you’re generating metadata for 50+ titles quarterly and it’s proven valuable, £20/month for ChatGPT Plus gives faster processing, better quality, and priority access. You might find a subscription to Claude or Gemini better for your specific needs – but experiment with freemium first!

At scale (100+ titles/season), custom solutions or enterprise tools make sense. Smaller operations can still benefit from paid tiers, but freemium might be totally adequate. Why pay when you don’t need to?

The Cost-Benefit Calculation

Framework:

Value of AI Tool = (Time Saved × Hourly Staff Cost) + (Revenue Increase from Better Outcomes) - (Tool Cost + Learning Curve Time)

If Value > 0 → Worth it

Example Calculation (Metadata Optimisation):

  • Time saved: 20 hours/season (vs. manual optimisation)
  • Staff hourly cost: £30/hour (fully loaded)
  • Revenue increase: £5,000/season (from better discoverability)
  • Tool cost: £240/year (ChatGPT Plus as example only)
  • Learning curve: 3-5 hours (one-time)

Value Year 1: (20 hours × £30 × 4 seasons) + £20,000 – £240 – (5 hours × £30) = £2,400 + £20,000 – £240 – £150 = £22,010 net value

ROI: 917%

That’s an obvious investment decision.
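The worked example above translates directly into code. The figures are the article’s illustrative numbers (ChatGPT Plus as the example tool), not real data:

```python
# The cost-benefit example above as code. Figures are the article's
# illustrative numbers; the tool and prices are examples only.

hours_saved_per_season = 20
seasons = 4
hourly_cost = 30          # £, fully loaded staff cost
revenue_increase = 5_000  # £ per season, from better discoverability
tool_cost = 240           # £ per year (example subscription)
learning_hours = 5        # one-time learning curve

time_value = hours_saved_per_season * hourly_cost * seasons  # £2,400
revenue_value = revenue_increase * seasons                   # £20,000
learning_cost = learning_hours * hourly_cost                 # £150

net_value = time_value + revenue_value - tool_cost - learning_cost
print(net_value)  # 22010
```

One caveat: a quoted ROI percentage depends entirely on what you treat as the investment base (tool cost alone, or tool plus staff time), so the net value is the safer figure to compare across pilots.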

The Emerging Markets Consideration

For publishers in emerging markets with limited budgets, or in mature markets struggling with costs:

Freemium tools are genuinely sufficient for most applications in this article.

The paid tools offer:

  • Faster processing
  • Better quality (marginal in many cases)
  • Higher usage limits
  • Advanced features

But for basic translation, metadata, research, narration testing?

Freemium does 80% of what paid tools do.

Strategic Focus: Master freemium tools thoroughly before considering paid upgrades.

The Ethical Framework: Guiding Principles for AI Use

Before closing, essential considerations:

Transparency

These are best practice suggestions. Personally I’m not in favour of excessive labelling, but broadly speaking it may be best to go down that route at first. The benefits and pitfalls will become evident soon enough.

With Authors:

  • Disclose AI use in editing, translation, metadata
  • Provide opt-out mechanisms where feasible
  • Credit human editors/narrators prominently even in AI-assisted work

With Readers:

  • Label AI narration clearly
  • Don’t hide AI assistance in author bios/marketing
  • Be honest about production methods

Compensation

When AI Assists Human Work:

  • Pay fairly for the human contribution (editor, narrator, translator)
  • Don’t slash rates just because AI reduced time
  • Value expertise, judgment, refinement – not just hours worked

When Licensing Content for AI Training:

  • Ensure authors understand and consent
  • Provide fair compensation
  • Maintain ongoing dialogue as uses evolve

Quality Control

AI Makes Mistakes:

  • Always verify factual claims
  • Human review required before publication
  • Never publish AI output without human oversight

Red Lines (Don’t Cross):

  • Don’t use AI to write entire manuscripts and claim human authorship
  • Don’t use AI to fabricate author bios, reviews, or testimonials
  • Don’t use AI to generate plagiarised or copyright-infringing content
  • Don’t use AI to deliberately mislead readers about book content

Bias Mitigation

AI Reflects Training Data Bias:

  • Actively seek diverse voices and perspectives
  • Question AI assumptions about audience, quality, market
  • Use human judgment to override AI bias
  • Invest in diverse training data where possible

The Invitation to Begin

Nadim Sadek ends “Quiver, don’t Quake” with an invitation: not to fear AI, but to engage with it creatively, thoughtfully, ethically.

This TNPS review extends that invitation with specificity:

You know the pain points. You know your budget. You’ve seen concrete applications. You have the roadmap.

Monday morning, you can:

  • Begin one metadata optimisation pilot
  • Generate comp analysis for a pending acquisition
  • Test AI translation for one backlist title
  • Create a submission triage framework

Next week, you can:

  • Train editorial assistants on AI-assisted research
  • Evaluate freemium tools for your highest-value use case
  • Document your first successful AI application

Next quarter, you can:

  • Scale proven applications across your list
  • Invest strategically in paid tools where ROI justifies
  • Integrate AI assistance into standard workflows

This isn’t transformation. It’s augmentation.

AI won’t replace publishers, editors, translators, narrators, or marketers who do their jobs with judgment, taste, and cultural understanding.

But it will render obsolete those who refuse to augment their expertise with tools that expand capability.

The gap between understanding AI and using AI is where most publishing discourse fails. This article hopefully bridges that gap.

Now the question is simple:

Will you begin tomorrow?

Or will you still be debating whether to begin in 2027 when competitors have spent a year refining what works?

Sadek gave us the framework. The critics gave us the cautions. Other industries showed us the pattern.

Now it’s your turn to act.


This post first appeared in the TNPS LinkedIn Analysis Newsletter.