In the first of the year’s two retrospectives, I want to get AI over and done with here on ALLi's News column. I could write a whole book on the way generative AI has burst onto and dominated the creative scene in the past 18 months. Of course, if I wanted to write such a book, I’d save myself some time and get ChatGPT to do it for me.
Such has been the dominance of AI over all other news themes that my quarterly recaps have now bifurcated, allowing me, as I am doing here, to get AI out of the way in one post while I dedicate another to the “rest of the news.” I don’t want simply to recap here what I talk about there. This has been a year when companies like Amazon have defined the terms on which we are permitted to use AI, and what we must declare when doing so; when other companies like Spotify decided how they would – and some, like Wired, decided how they wouldn’t – use AI; when author organizations started formulating AI clauses for contracts; and when a whole industry went on strike to curb the use of AI in screenwriting.
But for me the real story around generative AI this year has been how we have slowly started to learn (and, as a creative community, have still not internalised) the difficulties of applying existing copyright laws to the new world in which we find ourselves. A slew of writers’ groups, artists’ groups, and the bodies that represent us have launched lawsuits this year relating to copyright infringement. The substance of each has been roughly the same: generative AIs have been trained on the copyrighted work of creatives without their consent and without compensation, and this is a breach of the copyright those creatives hold.
Proof of this has tended to be presented in the form of those AI models’ ability to create works “in the style of” named creatives, or to provide detailed summaries of their work that couldn’t have been gleaned from secondary sources – and, in the case of some writers, to create complete versions of texts the writers themselves have so far failed to finish (ahem, Winds of Winter).
But so far, nothing has stuck. And that has led, as I’ve noted a few times, to a sense that something is “not right” coupled with an inability to express that in terms that demonstrate a particular breach of existing law. Generative AI models have, for example, been trained on material from shadow libraries. We know that shadow libraries are, well, shadowy (as the successful prosecution of Library Genesis – again – this year has shown). But using them doesn’t breach copyright law. Likewise, being able to produce work “in the style” of someone’s intellectual property fails to hit the legal sweet spot, because it has so far been impossible to show the specific manipulation that led from one particular copyrighted work to one infringing piece.
2024 will, undoubtedly, see more case law laid down. But what it will really need to see (and we are starting to see glimpses of this in the European Union’s AI Act) is a rethinking and redrafting of copyright law to account for the new landscape.
In short: generative AI has dominated the creative media landscape over the past 18 months, with ChatGPT the prime example. Companies like Amazon, Spotify, and Wired have each defined their own terms for AI use, and screenwriters went on strike to limit it. A series of lawsuits over AI’s infringement of copyrighted works has been brought, but none has yet succeeded, largely because of the difficulty of proving a direct connection between a specific copyrighted work and a specific output. Copyright law will need reform to accommodate the realities of AI.