Self-Publishing News: UK Publishers And Writers Groups Call For AI Protection As A New Tool Enables “Data Poisoning”


ALLi News Editor, Dan Holloway

You will remember that European writers’ groups used the AI Safety Summit as a chance to call for greater legal protection from AI for creative artists. This week it is the turn of UK groups, including the Society of Authors, the Publishers Association, and the Authors’ Licensing and Collecting Society.

The groups have written an open letter to the UK government. The letter distinguishes, as we are seeing happen more and more, between AI tools seen as a benefit to the creative process and the generative AI platforms taken to be less benign. ALLi News Editor, Dan Holloway, reports.

A Call for AI Protection and Copyright

In particular, the letter focuses on copyright. It demands retrospective compensation for the use of copyrighted works without consent, citing the Books3 database used to train large platforms, and an end to such unconsented use in future. The groups also seek full credit for the creators of any work, as well as the fair operation of permission and payment when people’s work is used.

There have been occasional stories over the past few months of creatives taking matters into their own hands to counteract the perceived harms of AI. You might remember that Omegaverse writers took to dropping key words and phrases into their work in an attempt to “out” AI models as having been trained on it.

A Data Poisoning Tool

Now, though, a tool has been developed to outfox anyone looking to train an AI on your art. Nightshade, developed by a team at the University of Chicago, uses a more sophisticated form of obfuscation than simply filling your work with code words or nonsense: it subtly manipulates the pixels in works of digital art, so that the human eye still sees what the artist intended.

But any AI that tries to interpret the work will “see” something entirely different and utterly misleading, and so cannot generate meaningful works from it. The idea is that the whole stream of training data will be muddied by the presence of such poisoned images, because it will be impossible to untangle the clean images from the dirty ones.
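To give a flavour of the general principle, here is a toy sketch in Python. This is not Nightshade’s actual algorithm (which computes targeted adversarial perturbations); it simply shows how every pixel of an image can be nudged by an amount far too small for a human to notice, while still changing the exact numbers a machine-learning model reads. The function name and parameters are illustrative only.

```python
import numpy as np

# Toy illustration of imperceptible pixel perturbation -- NOT Nightshade's
# real method. We add bounded random noise so that every pixel value a
# model ingests changes, while the image looks identical to a human.

def perturb(image: np.ndarray, epsilon: int = 2, seed: int = 0) -> np.ndarray:
    """Shift each 8-bit pixel by at most `epsilon` intensity levels (of 255)."""
    rng = np.random.default_rng(seed)
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Clip keeps values in the valid 0-255 range for an 8-bit image.
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

# A flat grey 4x4 "image": visually unchanged after perturbation.
original = np.full((4, 4), 128, dtype=np.uint8)
poisoned = perturb(original)

# The largest per-pixel change is bounded by epsilon (here, 2 of 255 levels).
print(int(np.abs(poisoned.astype(int) - original.astype(int)).max()))
```

A real poisoning tool chooses these tiny shifts deliberately, so that a model training on the image learns something systematically wrong, rather than just noisy.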


Author: Dan Holloway

Dan Holloway is a novelist, poet and spoken word artist. He is the MC of the performance arts show The New Libertines, which has appeared at festivals and fringes from Manchester to Stoke Newington. In 2010 he was the winner of the 100th episode of the international spoken prose event Literary Death Match, and earlier this year he competed at the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available for Kindle at http://www.amazon.co.uk/Transparency-Sutures-Dan-Holloway-ebook/dp/B01A6YAA40



