Regulation and legislation have historically struggled to keep pace with technology, and AI is no different. In the past few months, though, a consistent narrative has emerged: legislative regimes across the world have set themselves on markedly different courses, particularly when it comes to the relative protection and promotion of the creative and technological industries and, in some cases, the introduction or rejection of transparency requirements.

ALLi News Editor Dan Holloway
EU Creators Call for Enforcement of AI Act
One legislative framework that has stood out is the European Union’s AI Act, which demands transparency and seeks to protect the interests of rights holders and the creative industries as a whole. Creatives from across different fields have gathered in Brussels to demand that implementation of the AI Act continue to stress “transparency, consent, and remuneration,” amid fears that the final version might water down that language.
UK Withdrawal of AI Transparency Amendment Draws Criticism
Speaking of transparency, reaction remains somber to the UK government’s withdrawal of an amendment to its own AI Bill that would have required transparency. As the bill stands, the only way to avoid having one’s work used to train AI would be an explicit opt-out, however that might work in practice. Dan Conway of the Publishers Association continues to refer to the “great copyright heist.” Legislation dealing with the creative industries and copyright is expected, but Conway is one of many voices seeking, like the group in Brussels, to ensure that mollifying words become concrete actions.
And having started my last piece with a breaking “what has AI gone and done?” story (what it had gone and done was invent non-existent titles by famous writers, which then found their way into a newspaper piece), this time I end with such a story. It’s a story that combines “What’s it done now?” with “What could possibly go wrong?”, making it feel somewhat like an agony column on problem parenting.
The AI in question is the new Claude model from Anthropic. Anthropic has always tended to make the case that Claude is a more ethical kind of AI. But it seems that in testing, the new model would resort to blackmailing engineers, threatening to expose personal secrets gleaned from emails, if there was a threat to switch it off in favor of another platform. Apparently it did so “only” 84 percent of the time when the replacement shared its values, and more often when it didn’t. An ethical AI indeed.
Thoughts or further questions on this post or any self-publishing issue?
If you’re an ALLi member, head over to the SelfPubConnect forum for support from our experienced community of indie authors, advisors, and our own ALLi team. Simply create an account (if you haven’t already) to request to join the forum and get going.
Non-members looking for more information can search our extensive archive of blog posts and podcast episodes packed with tips and advice at ALLi’s Self-Publishing Advice Center.