Generative AI is already doing one of the very worst things it could possibly do
It has escaped the lab, it is out in the world, it cannot and it will not be contained.
Hi everyone,
Welcome back to Techtris, your weekly dose of tech, the internet, culture, and how it all fits together (or doesn’t). I’m doing something slightly different this week – it’s just one item, it’s quite news-y, and it’s honestly a bit bleak.
This is the kind of thing that I’ll usually do on top of the paid weekly newsletter when it’s called for, and it’ll go to everyone on the free list. But given the subject matter at hand, I’m going to keep the other (somewhat lighter) items for next week and give over the letter to this.
But please do consider upgrading, either for £5 per month or £50 a year – and for the months of May and June, any annual subscribers from the UK will be entitled to a free copy of The System: Who Owns The Internet and How It Owns Us or Post Truth: How Bullshit Conquered The World.
AI is already doing one of the very worst things it could possibly do
Just a note to flag that this section features a discussion of AI generating (simulated) images of child abuse, should you wish to skip it for that reason.
The pace of development in AI is so fast that whatever it is you’re worried about, you’re probably already too late. There are two types of change going on in tandem: one is the actual advance of the models from the main players. That’s extremely fast and speeding up, and would be a huge challenge to tackle on its own.
But perhaps the even more significant change is how quickly open source models are catching up. Generally when we’re talking about software, “open source” is the good guy – it is code that is (almost always, though not universally) free for anyone to use, and free for anyone to edit.
Open source projects are inherently collaborative, and often supported and maintained by volunteers. Despite that, much of the architecture of the internet is built on the back of open source code – online space is not quite as hyper-capitalistic as it first appears.
A few short weeks ago, one of the biggest fears about the new generation of AI models was that they would hand another generation of online monopolies to the current big incumbents: Meta has an AI almost ready for showtime, Google’s Bard is live, and OpenAI’s GPT-4 is backed by Microsoft.
Despite the “OpenAI” name, all three of these models are closed and proprietary. This was part of the alarm they were causing: who gets to decide what questions an AI will or won’t answer, what it will consider, and so on?
For that reason, OpenAI CEO Sam Altman made an appearance before a Congressional committee last week. It was not at all Altman’s fault, but it made for dismal viewing – politicians simply do not understand the technology well enough to be remotely effective at governing it.
There is, though, something scarier than this technology in the hands of a few small companies – and it’s that technology being in the hands of everyone. I wrote a few weeks ago about a leaked Google memo that suggested open source models would match their proprietary equivalents and perhaps even exceed them.
We are starting to see what that means, and just how dark all of this can easily get. Henk van Ess, an OSINT specialist and online/AI researcher, discovered a genuinely horrifying example of just this: people had used generative AI to produce ‘fake’ images of child abuse.
Van Ess found that users had asked OpenJourney, an open-source product similar to the AI image generator MidJourney, to generate such images – and it appears to have done so (he has posted only a fully blurred-out image to social media, and that is all I have seen).
(Update: the original version of this wrongly said van Ess had tested this finding himself – he has not. Instead, he discovered images generated by other users. I regret the error.)
In his post, van Ess described how he came across the material: “I didn’t search actively for compromising explicit pictures of underaged children,” he writes, “but saw them by accident while research[ing] the output of OpenJourney on a Discord-channel.”
Both OpenJourney and MidJourney by default publish the results of almost all user queries to public Discord channels. That is generally innocuous for MidJourney, which has strict filters on what it will generate, but evidently very different for OpenJourney, which at the time of writing is a free-for-all.
It is important to note at this stage: you should absolutely not, under any circumstances, generate images like this. Nor should you, for any reason, deliberately seek them out. It is illegal in most jurisdictions to produce or possess images of child abuse even when such images are photoshopped, drawn, or otherwise simulated. Freelance research is not an excuse under the law for generating or possessing such images.
Van Ess is clearly acting with the best of intentions – demonstrated by how quickly he raised the alarm about this – and I think he is very unlikely to face legal consequences for the material he may briefly have possessed. But I hope absolutely no-one follows in his footsteps, and certainly not in the footsteps of those who generated the images, even if they can credibly claim they were trying to ‘test’ the technology.
Van Ess is already looking into using this to force OpenJourney to introduce at least some basic content moderation to prevent its AI producing illegal content. That would be a start, but it’s never going to be enough: now that there is open source code in the world that can generate this, someone will run it. Be it from some anonymous server somewhere in the world, or else on the dark web, there will be a version – probably several – of an AI willing to produce this.
It has escaped the lab, it is out in the world, it cannot and it will not be contained. This is generative AI’s Oppenheimer moment. Generative AIs themselves will prove impossible to regulate at the fringes. Now, it is about working out what we can do.
Most of this is about targeting what we can target and accepting what we cannot. What we can target is people: the people who generate prompts and the people who hold content. There are already laws against what people can make with AI – which is, after all, just a faster way of making what could be made with Photoshop.
More importantly, we can regulate companies who want to use these products in the mainstream. If you want to register a company, operate in the open, etc, there can be codes of conduct, laws, and taxes – after all, it is not just what is being generated that will change the world, it is the fact it is generated in seconds with minimal human input.
The naivety of the tech world was deeply charming in its early days, when the internet was a network of a few hundred room-sized computers that could be brought down by an over-excited student experimenting with what is possible.
That same naivety persists today, and it is now grossly irresponsible. Decade after decade, well-intentioned nerds end up staggered by the consequences of their actions. A cute and basic algorithm to show you videos you like ends up radicalising people into QAnon or even ISIS. Infinite scroll, a handy trick to keep you entertained, (arguably) pulls you into staying Too Online. The list goes on.
And now we’ve built a tool that follows any prompt any person can give it and are relying on the flimsiest of handrails to keep it in check. Everyone who’s ever seen a movie knows how that story ends.
Perhaps the dystopias got it wrong. Perhaps the scary bit isn’t when the AI has become self-aware and is taking over the world. Perhaps the scary bit is right now, when we’re still in control and we’re watching helplessly as it all goes wrong anyway.
The usual (and hopefully more cheerful) Techtris newsletter will go out midweek this week, as this has displaced it.
See you then,
James