Book fairs love themes, and the Frankfurt Book Fair, in particular, has tended to pursue the zeitgeist with enthusiasm. I still regularly reference the BookTok takeover that managed to capture the excitement of that fledgling influencer scene. This year’s fair promises a deep dive into an equally zeitgeisty and utterly expected area. The response of the creative world to the challenges and opportunities of AI is still a relatively new conversation, even if it feels like it’s been with us forever.
Publishing Perspectives has put together a sneak peek at a fascinating roster of AI-related events at this year’s Frankfurt Book Fair. The names of some of the talk and panel hosts will be familiar from recent editions of this column, most notably Created by Humans and the Copyright Clearance Center. Both organizations have recently launched initiatives to help preserve the value of creative rights and the special status of human creators, who still need to make a living in a landscape where technological involvement is seen as inevitable. The problem they seek to solve is a pragmatic one: how do we retain the greatest benefit for human creators in the face of inevitable change?
That might not be the headline-grabbing activism many creators would like to see, but it is increasingly the direction the conversation is taking. And it is doing so from both sides: the tone on the big tech side of what was previously the dividing line has also been ratcheted down a notch and become more conciliatory.
Some of this has also been pragmatic: securing the agreement of the creative industries for what seems, to the tech companies, like small change could buy a time advantage that would pay back in orders of magnitude. Some of it is in response to increasing regulatory pressure: first from the EU’s AI Act, more recently from the NO FAKES and COPIED Acts in the US, and now from the possibility that the new UK government may take a new tack that is less tech-friendly.
But some of this is also a response to wider pushback, public skepticism, and increased questioning of whether the hype is real. Some of that pushback comes from concerns about the uses to which technology in general, and AI in particular, might be put. The impending US elections are one focus for that concern, with last week’s news that OpenAI has shut down a political influence operation using its AI sparking obvious “tip of the iceberg” worries. There is also concern in the UK over the role of social media in spreading the disinformation that inflamed recent riots, as discussed in this TechCrunch article.
But some of the skepticism comes from the fact that AI still really isn’t that reliable. This is slightly different from the commentary I offered for a while on the insistence that it wasn’t very good. Whether in image, film, text, or sound, AI is getting very good very quickly. But it seems to remain persistently bad at avoiding hallucinations. A new study last week confirmed this. One of the Cornell-based co-authors states, “At present, even the best models can generate hallucination-free text only about 35% of the time.” That’s really not very impressive.
Whether the problem is solvable seems to depend on whether AI can be trained on really good data, which raises a whole other raft of questions. And this comes amidst a report that a growing share of websites (up from 1% to 5-7% in a year) are preventing scraping by some or all AI platforms.
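For readers curious about the mechanics, that blocking mostly happens through a site’s robots.txt file, which lists crawler user agents and the paths they may not fetch. Below is a minimal Python sketch, using only the standard library, of how a compliant crawler checks such a rule; example.com and the article path are placeholders, while GPTBot is OpenAI’s published crawler user agent.

```python
# Minimal sketch: how a crawler checks whether a site's robots.txt
# blocks it. Sites opting out of AI scraping typically publish rules like:
#
#   User-agent: GPTBot
#   Disallow: /
#
# example.com and the page URL below are placeholders.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

page = "https://example.com/some-article"
if rp.can_fetch("GPTBot", page):
    print("robots.txt permits this crawler to fetch the page")
else:
    print("robots.txt blocks this crawler from the page")
```

It’s worth noting that robots.txt is purely advisory: it signals a site’s wishes, and the blocking only works if the crawler chooses to honor it.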