News Summary: Online Safety Act and EU AI Laws Raise Concerns for Writers and Publishers

The Online Safety Act is a piece of U.K. legislation, but as I have been reporting for some years, it affects all of us. And that’s because, as is the way with national legislation, it doesn’t just impact the citizens of a country—its reach extends to everyone who wants to deal with that country. And that includes those who want to sell into its market, which is a lot of us who write in, or translate into, English.

ALLi News Editor Dan Holloway

As of Friday, July 25, platforms that contain material that is “harmful but legal,” in the act’s notorious language, are required to implement robust age verification before users can view that material. Sites that fail to comply could be blocked.

Age Verification and Its Broader Impact

The legislation was primarily aimed at adult sites, but it is by no means restricted to them. X and Reddit have already implemented age verification before U.K.-based audiences can view certain posts, and Spotify is now considering the same for songs with adult lyrics. This means the act is inevitably going to have an impact on anyone who writes for a mature audience (or rather, anyone whose material is judged, by a person or an algorithm, to be aimed at such an audience). And with considerable nervousness around the security of the firms providing that age verification, many members of the target audience, not just minors, could find themselves unable to access people’s work.

Pushback on the EU’s AI Act

And while we are on the subject of legislation taking effect, so as not to saturate all of this week’s posts with it, I will bundle in here the news about the implementation of the European Union’s AI Act, something else I have been reporting on for some time.

The AI Act was originally hailed by creatives for its protection of rights holders in the face of pressure from tech companies. But now that the act is in place, publishers are not so keen on the way it has been implemented.

In question is the way Article 53 of the act, designed to protect rights holders, has been put into practice. A large coalition has come together to argue that the code of practice, guidelines, and template that tech firms must use to disclose how they trained their generative AI models are not sufficiently stringent or detailed to deliver the protections the act sets out to provide.


Thoughts or further questions on this post or any self-publishing issue?

If you’re an ALLi member, head over to the SelfPubConnect forum for support from our experienced community of indie authors, advisors, and our own ALLi team. Simply create an account (if you haven’t already) to request to join the forum and get going.

Non-members looking for more information can search our extensive archive of blog posts and podcast episodes packed with tips and advice at ALLi's Self-Publishing Advice Center.

Author: Dan Holloway

Dan Holloway is a novelist, poet and spoken word artist. He is the MC of the performance arts show The New Libertines, which has appeared at festivals and fringes from Manchester to Stoke Newington. In 2010 he was the winner of the 100th episode of the international spoken prose event Literary Death Match, and earlier this year he competed at the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available for Kindle at http://www.amazon.co.uk/Transparency-Sutures-Dan-Holloway-ebook/dp/B01A6YAA40
