Artificial Intelligence might not be dominating the news right now, but this week offers a reminder that it is still very much with us. Conveniently, I came across three stories that tie together the three great creative endeavours currently buckling under the strain: art, music, and writing. We'll be looking at all three today in ALLi's Self-publishing News update.
Art and AI
Let’s start with art. We know that AI can generate some pretty good art. Good enough, in fact, that it has won prizes. And we also know that artists are pretty cross that one of the things enabling this talent is that the best artistic AIs have been trained on their works. As yet, the position isn’t clear on whether images created by AI trained on work without the artists’ consent represent a breach of copyright. But clearly, it’s enough of a worry for those using AI-generated artwork that one of the biggest libraries of stock images has just launched a new copyright-safe gallery.
Getty Images guarantees that every image in the new gallery is safe to use without fear of copyright claims down the road. Whether this is a solution to a genuine problem, only time will tell. What's important to remember, though, is that when it comes to the indie writing world, this is not just something for writers who create their own covers. Most cover designers use stock images. So even if you go to a creative professional for your cover, AI and copyright affect you (as much as they affect anyone). If you believe it matters, you might want to protect yourself by seeking reassurance from your cover designer that they will use so-called copyright-friendly images such as these.
Music and AI
In the world of music, last week Spotify’s CEO Daniel Ek waded into the AI debate. Spotify, of course, has become much more interesting to us as writers since it launched so purposefully into the audiobook space. Ek confirmed that Spotify had no plans to remove AI-generated music from the platform. This follows its removal of tracks in which AI voices impersonated other artists. But it is clear that in those cases the problem was the impersonation, not the use of AI. Ek is, of course, as immersed in technology as it is possible to be, which has put him at odds with many creative artists over Spotify’s relatively brief history.
Also last week, the company announced that it would use AI for translating podcasts. The aim is to translate podcasts into popular languages while retaining the accent and voice qualities of the original speakers.
Writing and AI
Finally, to writers. The Australian Society of Authors has given its official line on AI, specifically a response to the use of Australian works to train large language models. It coincides with the emergence of a shadow library called Books3, containing almost 200,000 books collated from pirate sites.
The response calls for “transparency, permission and payment, unlocking new opportunities for our creative industries.” The argument is really interesting. The constant use of work by leading authors to train AI demonstrates that the big tech firms behind the models consider these works essential to what they do. They are not interchangeable with other works or random samples of prose. And if they are so essential, the argument goes, then tech firms should pay for them. “Instead,” the ASA says, “authors and artists are being locked out of the AI boom.”
Meanwhile in the UK, more writers are discovering Amazon has been selling titles purporting to have been authored by them. Of course, Jane Friedman brought this issue to prominence a few weeks ago. Now the alarm has been sounded by prominent TV presenter Rory Cellan-Jones.