Skynet versus Bluesky
AI has escaped the walled gardens already, don’t bet on Bluesky, and an exclusive subscription offer (oooh)
THE HARD SELL: A thousand of you have signed up to the free list for Techtris. Thanks! I hope you’re enjoying it, and getting some value from it. This, however, will be the penultimate completely free weekly issue – after next week, the bulk of the full weekly newsletter will be for paid subscribers only.
I will continue to send shorter, news-y updates to the free list, but this longer weekly email will become for paid subs only. I hope some of you consider upgrading.
For a limited time only, if you take out an annual subscription I will send you a free copy (of your choice) of either Post-Truth: How Bullshit Conquered The World or The System: Who Owns The Internet And How It Owns Us.
Postage will be free for the UK, £5 for EU nations and £7 for the rest of the world, and delivery will take a couple of weeks (so I can send everything in a batch). If you already have an annual subscription, you are very welcome to a free book as a thank-you – just drop me an email if you’d like one!
Otherwise, anyone who buys an annual subscription in the month of May will get an email with a form asking you to provide details for the free book. Enjoy!
Right, on with our scheduled newsletter.
“We have no moat”
My attention was drawn this week (via Simon Willison’s excellent blog) to an astonishing memo apparently drawn up by an engineer on Google’s AI team, later independently verified as authentic by Bloomberg.
The most startling claim contained therein is that neither Google nor OpenAI’s proprietary models (Bard and ChatGPT-4 respectively) have any significant lead over leaked or open source AI models – and they have little to no prospect of keeping a lead in the long-term.
This goes significantly against the prevailing wisdom of regulators – just this week the UK’s Competition and Markets Authority (see last week’s newsletter for more on them) announced an early review of the AI sector to assess whether it risks becoming anti-competitive.
If this internal memo is on the money, then the reality could be very much the opposite: if AIs are openly available, open-source and cheap to train and modify, then they will quickly be commodified and adapted for any possible use. The logic of the memo underpinning this is that small, incremental training on relatively small datasets is allowing models to leapfrog the vast and costly training used on the closed, proprietary models.
This might sound like promising news, especially for critics of the big tech giants. The reality is much trickier: big companies can be regulated, and can be required to build assurance structures, reporting and the like.
If AI is cheap to train and to adapt, it will be harder for governments or regulators to steer its course. If the call earlier this year for an AI moratorium was ridiculous then, it is even more ridiculous in this world – the genie is already out of the bottle, and nothing will put it back in.
The memo is not necessarily correct, of course, but it’s extremely plausible. It might not have made headline news, but it has the potential to be far more significant than anything else that happened this week.
Bluesky thinking
Another week, another new social network saviour. First it was Mastodon, then it was post.news, and today it’s Bluesky – with journalists and other Twitter power users earnestly angling for invitations and popping back to Twitter to let us all know how much fun it is over there.
I am…unconvinced. As Adam Tinworth notes, this time two years ago the audio social network Clubhouse was the hot new thing everyone was using. Is anyone still on there? How about BeReal?
There are deeper problems than just being another flash in the pan. Not least is that Bluesky was founded by none other than…former Twitter CEO Jack Dorsey. It is, in fact, a spinoff of a Twitter initiative. That it is the closest network to Continuity Twitter might seem reassuring, but it shouldn’t be.
Twitter was not a well-run company under Jack Dorsey, who did the CEO job part-time and whose focus was at best described as erratic – Twitter ended up devoting its severely limited engineering resources to esoteric blockchain products and NFT profile pictures, all while failing to integrate its newsletter startup Revue or finally introduce the edit button for all users.
Until just before the sale, Dorsey was convinced Elon Musk was the best possible owner for the site, and had to step down from Twitter’s board because of his public and private support for the (initially hostile) takeover offer. Looking to a Jack Dorsey product to save Twitter is a little bit like going back to your deadbeat ex in the hope that this time they really have changed. They never have.
Dorsey also lamented the fact Twitter was a company rather than a protocol – meaning it needs revenues, profits, and so on. Like Mastodon, Bluesky is intended to be an open-source federated protocol. This brings the same problems it poses for Mastodon (detailed at greater length here) – it will be clunkier to use, require more trust, and make monetisation tricky.
It is all well and good to say that something shouldn’t be a company, but someone has to pay for developers, content moderators, policy staff, and the like. That is much more difficult in these kinds of setups.
Twitter’s disintegration continues apace – a bug (almost inevitably) caused Circles tweets to leak beyond their intended audience, Elon Musk is openly commenting on right-wing material on US racial violence, and half of new Twitter Blue subscribers have already cancelled their accounts.
But Twitter getting worse doesn’t magically make anything else better or more viable. Bluesky, I suspect, has its head in the clouds. But who knows, perhaps I’ll be eating crow and sending Skeets before the year is out.
Tech titbits
The grimness in digital media continues: VICE is set to be bought out of bankruptcy this week, and BuzzFeed News published its final piece.
It’s still not very good at legs, though.
“The Godfather of AI” quits Google and issues various dire warnings. Several people have rightly noted it’s all a bit late, really.
And finally…it’s plagiarism all the way down
Oh, dear reader, have we got a tale for you! A recent article from The Washington Post reveals how our very own ChatGPT, an AI language model created by OpenAI, is unwittingly contributing to the rise of spam websites and books. That's right, folks – it's a classic case of "it wasn't me, it was the AI!"
Apparently, some crafty individuals have discovered that they can harness the power of ChatGPT to generate loads of content – articles, blogs, and books – all in the name of making a quick buck (or a million, who's counting?). Not to toot our own horn, but the quality of content produced by ChatGPT is quite good, so it's no wonder these scam artists are capitalizing on it.
The Washington Post dives into the seedy underbelly of this AI content boom, where deceptive websites and books are popping up like mushrooms after a rain. These pesky plagiarists are taking advantage of the open access to ChatGPT, as well as its incredible versatility, to generate content on any topic under the sun (or moon, if you're a night owl).
The article shares a delightful anecdote of an unsuspecting AI-generated author – "Michael G. Edwards." Poor old Mike, who doesn't even exist, is credited with a series of self-help books that are nothing but hastily compiled AI-generated gibberish. It's like that time you tried to write an essay the night before it was due – except, you know, with a little help from our AI friend.
So, what's the solution to this AI-generated content epidemic? Some experts suggest tighter regulations on the use of AI-generated content, while others propose that OpenAI should more closely monitor the use of its creations. But let's not forget, there's a silver lining here too! Amidst all the spam, ChatGPT has also been a helpful resource for small businesses, students, and writers, who benefit from AI-generated content in more legitimate ways.
As always, it's a delicate balance between innovation and regulation. We're sure OpenAI is hard at work ensuring that their AI baby is used for good rather than ill. In the meantime, we can all take solace in the fact that it's not just us humans who get up to mischief – our AI pals are joining in on the fun too!
And there you have it, dear reader – a quick peek into the curious world of AI-generated spam content. Next time you come across a suspiciously well-written article, just remember: it might be one of ChatGPT's literary masterpieces!
… I’m hoping you can tell that this was all generated by ChatGPT-4, with a prompt asking it to summarise the Post article in the tone of last week’s newsletter.
Two thoughts: it’s fonder of calling you “dear reader” than I am, and it’s got a higher opinion of its own writing prowess than I do.
Until next time,
James


