The Online Safety Act is a piece of U.K. legislation, but as I have been reporting for some years, it affects all of us. And that’s because, as is the way with national legislation, it doesn’t just impact the citizens of a country—its reach extends to everyone who wants to deal with that country. And that includes those who want to sell into its market, which is a lot of us who write in, or translate into, English.

ALLi News Editor Dan Holloway
As of Friday, July 25, platforms that host material that is "legal but harmful," in the act's notorious language, are required to implement robust age verification before users can view that material. Sites that fail to comply could be blocked.
Age Verification and Its Broader Impact
The legislation was primarily aimed at adult sites, but it is by no means restricted to them. X and Reddit have already implemented age verification before U.K.-based audiences can view certain posts, and Spotify is now considering the same for songs with adult lyrics. The act is therefore inevitably going to affect anyone who writes for a mature audience (or rather, anyone whose material is judged, by a person or an algorithm, to be aimed at such an audience). And given considerable nervousness around the security of the firms providing that age verification, many members of the target audience, not just minors, may end up unable to access people's work.
Pushback on the EU’s AI Act
And on the subject of implementing legislation: rather than saturate all of this week's posts, I will bundle up news about the implementation of the European Union's AI Act here, something else I have been reporting on for some time.
The AI Act was originally hailed by creatives for its protection of rights holders in the face of pressure from tech companies. But now that the act is in place, publishers are not so keen on the way it has been implemented.
In question is the way Article 53 of the act, designed to protect rights holders, has been put into practice. A large coalition has come together to argue that the code of practice, guidelines, and template that tech firms must use to disclose how they trained their generative AI models are not sufficiently stringent and detailed to safeguard the rights the act sets out to protect.
Thoughts or further questions on this post or any self-publishing issue?
If you’re an ALLi member, head over to the SelfPubConnect forum for support from our experienced community of indie authors, advisors, and our own ALLi team. Simply create an account (if you haven’t already) to request to join the forum and get going.
Non-members looking for more information can search our extensive archive of blog posts and podcast episodes packed with tips and advice at ALLi's Self-Publishing Advice Center.