An audiobook translation slammed as low-quality AI output was actually the work of a human. Luddite Fringe, look away now!

The rapid advancement of artificial intelligence (AI) has ushered in an era of unprecedented technological capability, particularly in the realm of content creation. This progress, however, is often met with a complex and at times contradictory set of human perceptions and reactions, some natural, many irrational and self-serving.

This past week, the Swedish trade journal Boktugg carried a short item perfectly capturing the moment.

The discussion surrounding the Finnish translation of Carissa Broadbent’s The Serpent and the Wings of Night on Storytel centred on criticism of its awkward wording. This inevitably led to cries that the translation had been carried out by Nuanxed, a reputable company associated with AI translation services for Storytel.

Untouched By Inhuman Hands

The subsequent debate – a field day for the Luddite Fringe – centred on the supposed inferiority of AI in handling literary translation, as reported by YLE. However, the publisher’s podcast revealed a significant twist: Nuanxed was NOT involved, and the translation was, in fact, the work of a human translator.

This incident rams home a critical point: the readiness to blame AI for perceived low quality, even in the absence of evidence, reveals a pre-existing negative bias. The very act of encountering awkward wording triggered an immediate association with AI, demonstrating an expectation that AI translation is inherently flawed. This assumption preempted any consideration of human error, highlighting a potential blind spot in our evaluation process when AI is involved.

This rush to judgment suggests that the narrative around AI’s limitations in creative fields is so pervasive that it becomes the default explanation for any perceived shortcoming, and for that we can lay the blame fairly at the feet of the Luddite Fringe, which constantly delights in telling us, ad nauseam, that AI can never replace humans, while simultaneously ranting, without evidence, that AI is replacing humans.

While the reaction of the Luddite Fringe is self-serving and irrational, this dichotomy suggests a wider, deep-seated bias that influences how we evaluate creative work based on its presumed origin.

Practically Flawless

The case of Japan’s Akutagawa Prize, awarded in January 2024 to Rie Kudan for her novel “Tokyo Sympathy Tower”, provides another crucial lens through which to examine human bias in evaluating creative work. The novel, which even features AI as a theme, was lauded by the judges as “practically flawless” and “universally enjoyable”.

This high praise was bestowed upon the work without the judges’ prior knowledge that approximately 5% of the text was directly generated by the ubiquitous ChatGPT. Kudan herself openly admitted her use of AI in her acceptance speech, stating her intention to continue working with such technologies to express her creativity.

It is inconceivable the judges would have lavished the same praise on the work had they known in advance that AI played a part. Put simply, the knowledge of AI involvement would have triggered a different evaluative framework, potentially coloured by skepticism or a devaluation of the work’s “human” element.

Confirmation Bias

This inclination to judge content differently based on its perceived origin aligns with the well-established psychological phenomenon of confirmation bias. Confirmation bias describes the tendency to seek, interpret, favour, and recall information in a way that confirms or supports one’s prior beliefs or values.

In the context of evaluating creative work, if an individual holds a pre-existing belief that AI is incapable of producing high-quality content, they are more likely to interpret any ambiguity or perceived flaw in AI-generated work as confirmation of this belief. This can lead to a selective focus on negative aspects and a dismissal of any positive attributes.

Conversely, if the same individual were to evaluate a piece of work attributed to a human, they might be more lenient in their judgment or attribute flaws to human error rather than inherent limitations of the creator.

In the Boktugg example cited above, the critics’ immediate assumption of AI involvement and subsequent negative assessment stemmed from a pre-existing notion that AI translation is subpar. Their confirmation bias led them to readily accept this explanation without thoroughly investigating other possibilities.

Similarly, in the case of the Akutagawa Prize, the potential for a negative re-evaluation upon the disclosure of AI involvement suggests that the judges, and the public, might hold a bias that would lead them to view AI-assisted work through a more critical lens, seeking to confirm their potential skepticism about AI’s role in literature.

Mitigating Strategies

Strategies to mitigate confirmation bias, such as actively seeking contradictory evidence and fostering critical thinking, are crucial in ensuring a more objective evaluation of creative work in an age where AI is increasingly involved.

While the concept of automation bias traditionally refers to the over-reliance on automated systems, the examples discussed suggest the existence of a “reverse” automation bias in creative fields. This reverse bias manifests as an inherent skepticism or negative predisposition towards AI-generated content, potentially leading to unfairly critical evaluations.

The immediate assumption of AI error in the Boktugg incident, and the almost-certain negative re-evaluation of the Akutagawa Prize-winning novel had the AI involvement been known beforehand, both point towards this phenomenon.

Fuelled by the Luddite Fringe

There is a broader skepticism and often negative rhetoric surrounding AI-generated art and writing. Arguments against AI in creative fields often cite a perceived lack of genuine creativity, artistic depth, and concerns about copyright and the devaluation of human skill. This widespread skepticism, prevalent in both the creative community and the general public, fuelled daily by the Luddite Fringe, supports the notion of a reverse automation bias, where there is an underlying tendency to distrust and view AI contributions negatively.

This stands in contrast to the traditional automation bias, where individuals might uncritically accept the output of automated systems. The existence of this reverse bias highlights the nuanced and context-dependent nature of human interaction with AI, particularly in domains traditionally associated with human ingenuity and expression.

These biases, both confirmation and the reverse automation bias, have significant implications for how we move forward in the era of AI. (The Luddite Fringe would prefer we do not move forward. Period. But that’s another story.)

The tendency to readily attribute flaws to AI inevitably leads to an underestimation of its current abilities, while the inherent skepticism towards AI authorship prevents the recognition of genuinely high-quality AI-generated content.

This can create a cycle where negative expectations hinder the appreciation and further development of AI creative tools. These biases pose challenges for the wider adoption and integration of AI in creative workflows.

Predisposed to View AI Output Negatively

If human professionals are predisposed to view AI output negatively, they might be reluctant to utilise it as a tool or collaborate with AI in creative projects. We see this constantly in the debates about AI in publishing on social media, where the Luddite Fringe exploits every opportunity to poison the debate.

Take the Meta f**k-up with the piracy site LibGen. Yes, Meta was totally out of order in doing this, and rightly condemned, but the Luddite Fringe is presenting it as an example of how all AI companies operate in an ethical vacuum, conveniently overlooking that its own main platforms for disseminating these claims are Facebook and Instagram, both owned by Meta, both with a long history of ethics abuses, and both costing nothing to use.

Self-Righteous – Philip Pullman

When Meta gives, we cannot even be bothered to say thank you. When Meta takes, it is “yet more evidence of the catastrophic impact generative AI is having on our creative industries worldwide.”

That, of course, from the Society of Authors, an outfit even its own former president, Philip Pullman, called self-righteous, and one that misses no opportunity to tell its members how evil AI is.

And let’s not forget that these AI naysayers are usually the exact same people who make a living, or whose members make a living, selling books on Amazon, a company with a long history of ethical shortcomings up to and including criminal convictions.

The increasing involvement of AI in creative fields necessitates a re-evaluation of traditional notions of authorship and creativity.

The controversy surrounding the Akutagawa Prize winner exemplifies this need, as the lines between human creation, AI assistance, and collaboration become increasingly blurred. There is a growing potential for hybrid approaches where humans and AI work together, leveraging the strengths of both. Focusing on AI as a tool to augment human creativity, rather than a replacement, might help to mitigate some of the negative biases and foster a more collaborative creative landscape.

Clickbait Headlines and Self-Righteous Soundbites

But of course there are no clickbait headlines and no self-righteous soundbites in looking at the positives of AI.

The “blame game” can have, and is having, a detrimental effect on the development and public acceptance of AI technologies. Constant and deliberate negative attribution fosters a climate of distrust and fear, potentially slowing the adoption of beneficial AI applications and hindering investment in further research and development.

None of which is to excuse past actions by AI companies where, as in the use of piracy sites, ethical boundaries have clearly been crossed. Nor to excuse arguably illegal actions where copyright has allegedly been flouted, although the latter has yet to be established in any court of law.

Transformative is the Key

The Thomson Reuters vs Ross case, often cited, was not about generative AI, as the judge made clear, and while Thomson Reuters prevailed on its claim that Ross had exceeded the limits of fair use, we should be clear this was because the judge ruled Ross’s use was not transformative. AI companies have a pretty strong case that theirs is.

But the narrative being promoted by the Society of Authors and its ilk serves only to muddy the waters, ultimately harming the very creatives they purport to protect.

Not content with peddling its own toxic narrative, the SoA carries false assertions from author members in support of it. “(AI companies) don’t believe in paying for copyrighted materials and they never will.”

Real Facts and Luddite Fringe Facts

This “fact” conveniently ignores bona fide facts, such as that in August 2024 the self-same SoA ran a full post about the $10 million Taylor & Francis was collecting from AI companies for access to content. And in December 2024, the SoA ran a post covering the deal HarperCollins had made with an AI company.

Yet the SoA felt happy to include the misinformed author’s erroneous comments as part of yet another SoA post.

Curiously, at no point in the HarperCollins post did the SoA mention that the AI company concerned was handing over $5,000 per title to HarperCollins for a three-year non-exclusive training agreement, or that the books in question were deep backlist titles, some more than ten years old.

HarperCollins was pocketing half of that and offering $2,500 to the author. Keep in mind that, according to the self-same SoA, the median earnings of UK authors are just £7,000 ($9,000). I suspect any authors actually in that range would be fighting one another to get a hand-out of $2,500 per title, if only the SoA had told them how much was on the table.
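For perspective, here is a back-of-envelope sketch of those numbers, taking the figures exactly as reported above (the $5,000 per title, the 50/50 split, and the £7,000 ≈ $9,000 median are the reported figures, taken at face value):

```python
# Back-of-envelope check using the figures cited above.
# Assumptions: $5,000 per title (reported deal), a 50/50
# publisher/author split (reported), and median UK author
# earnings of ~$9,000 (the SoA's own figure, ~£7,000).
deal_per_title = 5_000                # USD paid per backlist title
author_share = deal_per_title / 2     # HarperCollins keeps half
median_annual_earnings = 9_000        # USD, per the SoA

print(f"Author payout per title: ${author_share:,.0f}")
print(f"As a share of median annual earnings: "
      f"{author_share / median_annual_earnings:.0%}")
```

By that arithmetic, a single deep-backlist title would have handed the median-earning UK author more than a quarter of a typical year’s income.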

Active Engagement in the Face of Toxic Disinformation

Right now, AI companies are actively trying to engage with publishers and, by extension, authors, in the face of toxic disinformation by the Luddite Fringe.

It’s a window of opportunity for publishers and authors alike, but one that may soon close.

As explored here at TNPS, a very real AI threat to the industry is looming: a possible Trump executive order that, if it happens, could rule all AI use of content to be “fair use”.

Such a ruling would give the AI companies a green light to use any and all content without compensation, or at best with compensation on the AI companies’ terms.

But the SoA and its Luddite Fringe cohorts are too busy crying wolf to see the real wolf at the door.


This post first appeared in the TNPS LinkedIn newsletter.