Politicians, corporations, and experts wrestle with how to govern technology we use. Welcome to Self-Publishing News with ALLi News editor Dan Holloway, bringing you the latest in indie publishing news and commentary.
Find more author advice, tips and tools at our Self-publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts, and a handy search box to find key info on the topic you need.
And you can catch up on all our advice and inspiration podcasts through your favourite podcast provider, or sign up for reminders so you don't miss a single one.
If you haven’t already, we also invite you to join our organization and become a self-publishing ally.
Listen to Self-Publishing News: Govern Technology
Don't Miss an #AskALLi Broadcast
Subscribe to our Ask ALLi podcast on iTunes, Stitcher, Player.FM, Overcast, Pocket Casts, or Spotify.
About the Host
Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines, and he competed in the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.
Read the Transcripts to Self-Publishing News: Govern Technology
Dan Holloway: Hello and welcome to a Halloween week Self-Publishing News.
We're featuring Elon Musk this week; I will leave you to draw your own conclusions as to whether or not that's a fitting use for a Halloween-themed podcast.
So, what I want to be able to look at in a little bit more depth this week is a series of stories around technology and the technology we use as writers, but from a slightly different angle.
One of the things that's been in the news a lot over the last few days is not the technology itself, but rules about how we govern technology. I say we; what I mean is, how do governments ensure that the technology we use is safe, in every respect of that word, and how do they go about defining words like safe?
That's a really meaty topic for what is, as I record, a fairly dismal Halloween evening.
X to Launch Two New Payment Tiers
So, let me start with the other big story which does indeed feature Elon Musk, which is that X has decided it's going to launch two new payment tiers.
So, as you know at the moment you can get yourself a blue checkmark by paying $8 a month, and with that comes all sorts of perks and benefits like getting actual people to see what you post.
Or the alternative at the moment is that you can use the service for free, but who knows whether anyone actually reads what you post, or whether you have any reach. Lots of people have found that interactions, engagement levels, and analytics in general have just fallen off a cliff.
It also comes shortly after X announced that it was introducing a pilot in New Zealand and the Philippines whereby all new users would have to pay at a much lower level: a dollar a year.
So, the two new levels will be called Basic, which will cost you $3 a month, and Premium Plus, for $16 a month. The real difference between them seems to be the level of reach they give you, or what X is calling the level of reply boost.
Both of them will enable you to edit your posts, and both will give you slightly more characters than free users. You won't get a checkmark if you have the Basic option; you will if you have Premium Plus.
But where Premium Plus gives you more than the $8-a-month Premium is in giving you, in theory, more reach. This comes after X released figures which show that people aren't spending as much time on the platform as they used to, but, not to put too fine a point on it, that people with premium accounts are spending three times as much time on the platform as people with regular accounts.
So, they want to make money, they want to convince advertisers that they are capable of making money, and they want to get people actually sitting on the platform and making posts and reading posts.
So, who knows whether this is where the tinkering will end, we will have to see, but we will move away now from X, and I've managed not to call it Twitter once. So, I'm going to draw a line and call that a win.
Online Safety Act Attempts to Govern Technology
We come now to this question of how you govern technology. What does it mean to govern technology?
We've talked in the past about the European Union's Digital Markets Act and Digital Services Act, how this affects Amazon, what they are trying to do to ensure that the platforms that we all use are transparent.
So, transparency is something where governments clearly feel they have a role to play and a role they can play.
This week, there are two slightly greyer areas that we're going to be looking at. The first is that the Online Safety Act in the UK has become the Online Safety Act rather than the Online Safety Bill. That is, it is now law.
This is an act of parliament that seeks to provide protection for people when they are online.
It wants to make the experience of being online safer, in particular for minors; it wants to protect them from harmful content.
The phrase that's used a lot is legal but harmful. It's one of those phrases that feels very much like the famous word from the 1960s, obscene. What does it mean to be obscene? I don't know, but I know it when I see it, as the judge famously said.
So, what is meant by harmful? The answer in basic terms is whatever the government of the day feels is harmful, and that's what has got people worried. So, it is designed to stop platforms from sharing harmful content, where harmful is not defined by what's legal, but what someone decides that minors shouldn't see, and this affects us because as writers, a lot of us write things that people might consider harmful.
I can't say what they are because this podcast is going on YouTube, and YouTube does things to people who mention such things. So, there we go. There are lots of code words I could use, but that just feels odd.
So, harmful content: we write about this all the time. We know that romance and erotica writers often get into trouble with payment processors for the content they write. This feels like the same thing could happen here, and likewise for horror writers and thriller writers. And because the Act had its origins in concerns about people being driven to self-harm, we could see people who write non-fiction having their words fall under scrutiny.
I mean, it could be that what we post on our website when we're advertising our books, when we're promoting our writing, could be considered harmful and we might therefore be subject to the Act; it's an extra thing we are going to have to think about as writers.
So, that's the Online Safety Act. There are all sorts of other things to do with it that just basically show what we already know, which is that legislators don't know anything about technology: stuff to do with encryption, and wanting platforms to be able to give them a backdoor into fully encrypted services, because somehow they think that is a thing.
UK Hosting Global AI Summit
We've got to this stage without mentioning AI, but as you may know, we are now on the eve of the UK hosting a global AI summit where people are going to get together and talk about AI, the impact it's going to have, and its implications for safety.
If you've been following the news or this column for any length of time, you will know that the UK government at the moment positions itself very much as the champion of technology. AI good, people who doubt the benefit of AI bad. Everything that the government says has this lens on it.
Nonetheless, on the eve of the conference, the UK has announced that it is starting an AI Safety Institute to discuss the safety of AI and every type of threat it poses.
As writers, I'm sure you'll be particularly interested to know that this is basically about the headline-making stuff. It's not about the stuff that is actually likely to make a concrete difference to our daily lives, like whether people use AI-generated art for their book covers; this is much more about existential threat.
So again, I guess if there's a theme running through this week, it's how we define harm. What counts as harmful online content, and now, what counts as the harmful consequences of AI?
The UK government is very much promoting the idea that harmful doesn't mean things that might interfere with the creative process, harmful means existential threat, largely because that's easier to conceptualize as harm.
But what they've said they won't do is support, coming back to Elon Musk, a pause on the development of AI. Elon Musk is one of many people in the tech industry who actually work on AI who have called for large AI platforms to be put on pause for at least six months, and the UK government has said no: pausing AI bad. So, we know where they stand on that.
Talking of setting up the study of catastrophic AI risks, OpenAI has set up its own team to study catastrophic AI risks, and again, these seem to focus on things like nuclear threat and biological warfare, not so much on the stuff that OpenAI is being accused of doing around taking shadow library content, training AI systems on it, and enabling them to mimic creators.
This week there seems to be a lot of setting up of these big, high-ticket, high-profile institutes that are going to solve the problem of AI threats without actually looking at the AI threats we want solving.
Finally, jumping on the bandwagon of overseeing and regulating AI, the UN, the United Nations, has decided that it is also going to set up a new AI advisory board. This feels slightly more interesting, because some of the people involved are actually more interested in the creation and privacy aspects of it than in the existential-threat stuff.
So, it will be interesting to see, as more and more organizations set up these oversight bodies, how much of a voice we as a creative community have in each of them, which of those voices get listened to, and which actually drive policy and the direction of progress.
That's the picture I try never to lose when I'm reporting on this: how do you, the writer worried about these things, or you, the voice artist or the cover artist worried about what is going to happen to your work in a year's time, feed your voice into these big oversight bodies, run by large governmental and non-governmental organizations, that may or may not do anything?
Inevitably, next week's podcast will feature some stuff from the AI Summit. I am sure it will feature lots of book-related stuff that is nothing to do with it as well.
In the meanwhile, happy Halloween, and welcome to November.