In this episode of the Self-Publishing News Podcast, Dan Holloway discusses the controversy surrounding Facebook AI, specifically Meta's decision to use user content to train its AI platforms unless users opt out. He highlights a helpful guide on how to opt out, examines a new study revealing authors' ambivalence toward AI, and introduces an app that helps users write their autobiographies. Dan also covers the legal battle between the New York Times and the creators of the game Worldle.
Thoughts or further questions on this post or any self-publishing issue?
If you’re an ALLi member, head over to the SelfPubConnect forum for support from our experienced community of indie authors, advisors, and team. Simply create an account (if you haven’t already) to request to join the forum and get going.
Non-members looking for more information can search our extensive archive of blog posts and podcast episodes packed with tips and advice at ALLi's Self-Publishing Advice Center.
Listen to Self-Publishing News: Facebook AI
On the Self-Publishing News podcast, @agnieszkasshoes discusses Facebook AI, specifically Meta's decision to use user content to train its AI platforms unless users opt out.
Don't Miss an #AskALLi Broadcast
Subscribe to our Ask ALLi podcast on iTunes, Stitcher, Player.FM, Overcast, Pocket Casts, or Spotify.
Show Notes
About the Host
Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines, and he competed at the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.
Read the Transcripts to Self-Publishing News: Facebook AI
Dan Holloway: Hello and welcome to another self-publishing news broadcast from a very sunny morning here in Oxford where it is the morning after we had one of the work events that's actually of interest to our readers and listeners here, because we were inaugurating our new Professor of Linguistics, Professor Colin Phillips.
Do keep an eye out because his fascinating lecture titled, Now You Say It, Now You Don't, will be available on the university website shortly. It is always a really great reminder that I am lucky enough to do for a day job something that very much ties in with the passion I have for all things literary and all things language.
How To Opt Out of Meta’s Plan to Train AI On User Content
Talking of passion for all things language and literary, and people who don't necessarily have it, let's go to Facebook, and Meta's new decision, because it covers more than just Facebook, it covers Instagram, and so on, that it's going to use your content to train its AI platforms, unless you are, like our students back here, willing to write them an essay to explain why you shouldn't.
This, of course, has been causing some unhappiness amongst authors and creatives in general, who have done a lot to protect their intellectual property and to protect their work from AI. Everything we do when we are signing contracts says, please don't use this for AI, and all of a sudden Facebook is going to take what we post and use it.
As I say, you have to find the place where they are telling you that you can indeed opt out, and you have to then write them an essay explaining why you think you should be allowed to opt out.
Fortunately, ALLi's own Robin Phillips has written a very good post explaining how you can do that, how you can opt out of Facebook's AI, and I am sure Howard will share the link to that in the transcript, because it came out just after the column where I ran the story. So, thank you to Robin, it's a very helpful thing that they've put together, which explains exactly what you need to say to get Facebook to say, yes, of course, we will stop using your data and your posts.
Study Shows Divide on Academic Authors’ Feelings Towards AI
That's that, and it seems that after a few weeks off, a considerable number of AI-related stories have built up, so let's get them out of the way with two more stories related to AI.
The first ties back to what I was saying right at the start, because it's another study back here in Oxford, which talks about how authors, in this case academic authors, feel about AI and what it shows is really interesting.
It shows that they are still conflicted, and I say that they are still conflicted because obviously the Society of Authors did a big survey of authors and their opinions on AI, and found that lots of authors are using AI, but also that lots of authors are really worried about it.
It's one of those things that shows two things. It shows that AI is not just a monolith. It's not just what we think of when we think of AI: these generative models, these large language models, and so on, that take in everyone's input and then help people to write words they've put little or no thought into, words based on everyone else's words or images.
It's more than that. It's tools which help you with your grammar, for example, or help you to brainstorm, or just provide a little bit more.
What the study, a survey of more than 2,000 researchers, found is that people were very worried about privacy. They were worried about intellectual property.
Interestingly, a quarter of them were worried that it would lead to a diminished level of critical thinking, and this fits with a meme that you may have seen going round, that if your workforce uses AI, then the person you get on day 1000 is exactly the same as the person you get on day one, because they haven't actually learned anything on the job, they've just learned to type things into AI. This lack of critical thinking is something that is clearly highlighted in this survey.
On the other hand, three quarters of the people who responded were already using AI tools, and translation and chatbots are the most frequently used of those.
People are worried about AI, but people are also using AI, presumably from some kind of practicality angle.
So, it's something that content creators everywhere, from academics to artists to authors, are feeling slightly conflicted about: it's clearly here, it's here to stay, everyone's using it, but nonetheless the way it is trained and the possible consequences of its overuse are causing people concern.
So, it's something that we basically haven't settled how we think about yet.
New App Uses AI to Create Autobiography
Then finally, on the AI front, I came across the launch of an app that was bound to happen. It's one of those things, as I think I put it in my column, that will leave writers screaming into the void, but it will also leave a lot of other people thinking, what a great idea, this has enabled me to do what I otherwise wouldn't be able to do, which is to tell my story and to write a book. Because we all have a book inside us; some of us just need AI to help bring it out. It's called Autobiography, and it does exactly what it says: it is an AI that helps you to write your autobiography.
So, it's essentially a chatbot. You tell it episodes from your life. It has a conversation with you about those episodes, and then it turns them into an autobiography, a personalized memoir, which is in full prose, and you can then tap publish, and it publishes it.
In many ways, it's an automated version of those services that are already out there, where you give an editor some photos and some episodes from your life, they turn it into a book, and you self-publish the book as a present for your friends and family. It makes a nice gift. What it does show is that clearly there are people out there who already think that AI can write whole books quite happily, and inevitably this is going to lead to some of those making their way onto Amazon and other platforms.
That all feels rather gloomy, or rather hopeful depending on your perspective.
New York Times Sues Wordle Variations
What definitely felt rather gloomy was the story that caught my eye about something that many of us will spend much of our time doing, and that is playing Wordle, which is of course the game that became popular during the pandemic.
It was created by someone who wanted to give his partner something to do during the pandemic, turned into a global craze, and was bought by the New York Times, who said they would keep the spirit of the thing alive, as they always do, while handing over a seven-figure paycheck that you cannot blame someone for taking. Needless to say, they are realizing that they need to start monetizing this.
One of the ways they are starting to monetize it, you will have noticed, is all these annoying pop-ups telling you that you have to create an account because it will give you a much better experience, not realizing, of course, that you would get the best experience by not having pop-ups stopping you playing the game. Or maybe that's just me.
But what they have done now is decide to start a legal suit against the image and geographical recognition game Worldle, because they say it is confusing readers. Worldle is one of many sound-alike games that have cropped up on the back of the popularity of Wordle, including another, also called Worldle, which is one that I actually play, where you get outlines of countries.
This isn't the one with the outlines of countries; this is the one with images taken from Google Images, I think. It was developed by a single developer, Kory McDonald, and basically, the New York Times has come after him and said, we're suing you because you're confusing our readers. Basically, I think, because you're taking eyeballs away from us.
Anyway, this seems very much not in the original spirit of Wordle, but it also seems inevitable, and it makes you wonder which of the many games with some variation of ‘ordle' plus a starting consonant they're going to come after next. It just felt like rather a sad story about what media companies do.
Coming in the same week, of course, that media companies are still signing deals with OpenAI to sell their data to train AI. Yes, not necessarily a week in which the media has covered itself with glory. Are there such weeks, when the media covers itself with glory? Who knows, maybe next week will be one of them.
In which case, I will of course make sure that I share the news with you. In the meanwhile, I look forward to speaking to you soon. I will make sure to send Howard the link to Robin's excellent article on how to opt out of Facebook's AI training, and I will speak to you again soon. Thank you.