Self-publishing News: European Writers Use Frankfurt Book Fair To Demand AI Transparency

ALLi News Editor, Dan Holloway

It was inevitable that one of the biggest stories at one of the biggest book events of the year would involve AI. And Frankfurt last week was no exception. Publishers have used the platform that Frankfurt affords to launch a statement calling for a big push from Europe’s legislators on AI. It comes against the backdrop of the EU’s AI Act, which the European Parliament advanced earlier in the summer.

Publishers have pulled no punches about what they see as the threat from AI. It is, they claim, a threat to democracy. The statement from the European Writers' Council has an attention-grabbing header: “The EU must act now to ensure that Generative AI is more transparent – for the sake of the book chain and democracy.”

The statement opens by talking about the dangers of bias and misinformation. But the focus of their concern is clear. It is on the issue of rights and permissions. The AI Act as proposed requires organizations behind generative AI platforms to be transparent about the content on which the platforms have been trained. The Writers' Council welcomes that but wants to ensure the contents and enforcement of the Act are up to the task. They express their concern in one key sentence:

Transparency over inputs to AI is the only way to ensure quality and legitimacy of outputs.

And they also express urgency.

The horse, they hint, has long since bolted the stable in many ways. For years, there has been no transparency, and work has been fed into these models, “without consent, credit or compensation to the authors and publishers.”

The text of the AI Act will be finalised at the end of this year. At present, it distinguishes four different classes of AI based on the risk posed. Regulation will be proportionate to that level of risk. At the highest level is “unacceptable risk,” such as social credit or predictive policing systems. At the lowest “no or minimal” level are simple tools like spam filters. Generative AI platforms like ChatGPT currently come in the tier above that no/minimal risk level, classified as “limited risk.” Use of such platforms will need flagging (hence the recent policy announcements by the likes of Amazon). And there will need to be a list of copyrighted sources.

That transparency matters – but what matters most to the writing and publishing community is what it allows us to do once we know our work has been used.

Author: Dan Holloway

Dan Holloway is a novelist, poet and spoken word artist. He is the MC of the performance arts show The New Libertines, which has appeared at festivals and fringes from Manchester to Stoke Newington. In 2010 he was the winner of the 100th episode of the international spoken prose event Literary Death Match, and earlier this year he competed at the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available for Kindle at http://www.amazon.co.uk/Transparency-Sutures-Dan-Holloway-ebook/dp/B01A6YAA40

