OK, let’s draw breath after the recent utter chaos at ChatGPT’s parent company OpenAI, which I reported on last week in ALLi's news column. I will gather all the AI stories into one post so as not to over-egg that particular pudding for those of you who see AI as a cue to scroll on by. But I want to start with one of the more thoughtful posts looking back over last week’s fracas.
In what will be known as the Month o' the Two Sams, the key difference between Bankman-Fried, who is now awaiting prison, and Altman, who is now back at the head of a cleared-out and reformed OpenAI, is their respective positions on the Effective Altruism movement.
Effective Altruism (EA) is an ideology I won’t spend long explaining here, but it is at the heart of many of the debates about AI. In short, the idea is that you should use your life to do as much good as possible. Conveniently for tech billionaires, EA’s main answer on how to do the most good is “make lots of money so you can give it away.” It also places absolute value on human survival – which means anything that threatens human existence at all, however minimally (as opposed to real things that do a lot of harm to a lot of people but won’t wipe us out), should be eradicated. Including AI.
Sam Bankman-Fried had drunk the EA Kool-Aid. The problem was that this meant he believed he could break fraud law for a higher purpose. Sam Altman, on the other hand, was very wary of the movement. He and many others in the tech space worry that the movement's fear of what AI could do to humans blinds it to AI's potential to help human progress. This put him at loggerheads with an EA-sympathetic board that wanted more regulation to stop our potential annihilation.
What I find really interesting is how much the tangible changes in the way we work are driven by particular ideological divides in the technological community. As always, I’m not here to take sides. But I am here to inform, and I hope to show you that these items are not only interesting but things we should pay attention to, so we can use that knowledge to protect our futures as best we can.
Latest AI Lawsuits
In other AI news, it has been a week in which the slew of lawsuits currently in progress had a rough ride. Mark Williams has, as ever, a thoroughly entertaining take on the subject. Williams’ approach is one I would call pragmatic optimism, and one I largely share (this is going to happen, so you might as well find a way to try to make it work). The titles of his articles are works of literary art in themselves, and this week’s, “Law firms are throwing legal spaghetti at the wall to take down gen-AI, but judges are so far unimpressed,” is no exception.
For all his entertainment value, the points Williams makes are very serious. And the most serious is one that a lot of creators across many media have yet to grasp fully: there is a great difference between knowing that “something seems wrong” and being able to demonstrate in court that “there is a law being broken here.”
Part of this stems from the fact that copyright law is so far behind technology that if this were a race it would be in danger of being lapped. And we have seen that yet again in the news this week, where the latest suit to be rebuffed is Sarah Silverman’s case against Meta. The case is based on the use of copyrighted material to train Meta’s generative AI. Silverman’s lawyers claim that Meta relied on copyrighted material, used without consent or compensation, for the outputs it generates.
The way the judge phrased his denial of the case’s right to proceed is telling: “There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs’ books.” And that’s the problem creators face. The protection we have under copyright law relies on proving the closeness of the output to the original, to the extent that the former could only have arisen from unauthorised use of the latter. And the differences between what AI platforms produce and the materials on which they are trained, not to mention the way they break up the source material to derive outputs, make that really hard.
In short, what case law is rapidly showing is that in the long term what we need is less to bring tech firms to heel under existing law and more to write new laws.
Find Out More About Copyright: