Techtris – we didn’t start the fire (that was Sam Bankman-Fried)
Yes, we're back, sorry to have been away so long…
Hi everyone,
It’s back! The danger with taking a hiatus is that it has a tendency to end up longer than you might hope – in this case largely because I ended up taking the role of lead author on a Demos paper with a deadline ahead of the government’s AI summit this week.
It turns out that producing a 6,000-word discussion paper, helping organise a roundtable, and then turning the results into a 7,500-word provocation paper within six weeks is somewhat all-consuming. I’ve talked about some of what we found lower down in this newsletter.
My intention is to return to publishing around three chunky Techtris issues a month from here on out. However, until that’s been a reliable schedule for a while, I’m going to keep everything outside the paywall for its first four weeks.
Contributions will help make this viable, but please consider them entirely voluntary for the moment (and if you made an annual subscription I am happy to either extend it at no cost to the end of 2024 or offer you a refund – just let me know).
Thanks for signing up to Techtris, and I hope you enjoy this newsletter.
Cheers,
James
Sam Bankman-Fried: we’ll learn, but not very much
Sam Bankman-Fried has set yet another new world record, though one far fewer of us might desire than some of his previous milestones: he’s now the richest paper billionaire to face a lengthy jail sentence for his ill-gotten gains.
There are several lessons we can take from this. The first is that “too big to jail” doesn’t really hold. SBF was enormously connected and successful, was shielded by huge political contributions, and had funnelled millions into media organisations that might otherwise have given him a sceptical eye.
None of that worked. This might be a nervous day for other billionaires standing on shaky foundations – you don’t get lasting gratitude for helping someone. It only holds while you’re still useful.
Another question will be asked of Stanford (Edit: updated from Yale, which was incorrect). Given SBF’s parents were both academics specialising in ethics (one of them from a legal perspective) they provide a fascinating case study in practical versus theoretical ethics – especially as both were involved, to different extents, in their son’s enterprise, and then both were ferocious in his defence. Had a fictional writer made SBF’s parents prestigious liberal professors, they’d be fired for being hokey – reality is written by a hack.
The big lesson of all of this will be far more disputed, though. Michael Lewis has in his podcast tried to distance himself, to an extent, from his own book – stating he always thought SBF was toast, and that he had quite an ambivalent view of what happened.
What shines through, though, is that he still thinks this was a fundamentally ‘good’ business, that if customers are largely compensated for their losses that would prove it, and Lewis still seems to believe crypto is a better, brighter future technology that’s being hampered by teething problems – the inevitable chaos of startup personalities.
There is a lot of money resting on Lewis being right, so you can be sure that in the short-run at least he will be. Expect the “one bad apple” analogy to be abused still further (the rest of the saying is “…ruins the barrel”, which is what happens in reality thanks to the chemicals one rotting apple releases). Crypto “good guys” will save the day.
The more costly reality is that there are no good crypto businesses. There is no crypto exchange or currency that is based in any real-world use case that is legal and ethical. Play2Earn is a pyramid scheme. DeFi is a pyramid scheme.
Crypto is a pyramid scheme, with a side business in enabling money laundering. There is no there there, there will continue to be no there there, and more people will go to jail if they try to be the next crypto billionaire.
That’s the obvious lesson here. We won’t learn it.
An aside: If you haven’t seen this courtroom sketch of SBF, then you’re missing out. I am genuinely fascinated as to what happened here: aren’t sketches supposed to look like the subject?
Oh Rishi, what a pity…
Rishi Sunak nearly had a good AI summit, and then he had to go and interview Elon Musk. That was not a clever move. I’ve written this up for The New European, but I couldn’t resist posting an extract here:
The event itself was just far weirder than anyone expected. Until the moment the discussion between Sunak and Musk began, it hadn’t occurred to anyone that Sunak – the PM of the world’s sixth largest economy, a nuclear power with a permanent UN Security Council seat – was going to be the interviewer. Nor that he’d be such an obvious fanboy of an incredibly divisive and mercurial billionaire …
Sunak’s obvious delight at meeting Musk could not be faked: it was clear this was a genuine case of a fan meeting their idol. One suspects this was why this ill-judged ‘chat’ went ahead: no sane communications professional would let a PM appear so obviously the subordinate in a public appearance if they could possibly stop it.
The pseudo-cynical view is Sunak wanted to do the event as part of a jobs fair for himself, showing he could be the next politician to follow in Nick Clegg’s footsteps and become a Silicon Valley exec.
This doesn’t quite pass the smell test: looking too keen in public is a big turn-off, while Musk himself is too erratic to repay the favour of a fawning interview – and too contentious for another exec to pick up a Musk fanboy. Sometimes the simple explanation is the best one: Sunak did this very silly event because he desperately wanted to do it.
You can read the full article here.
While we’re talking about Elon
It needs saying that Elon Musk is very deliberately turning what is still the only functional mass social network for real-time breaking news into a machine that monetises and makes viral the very worst of misinformation.
Rishi Sunak’s willingness to give a softball interview to Musk may look far worse this time next year than it does today. There are dozens of major elections that will be subject to interference through information operations – often these will be embraced by the campaigns concerned.
Yes, Donald Trump and the US are the standout here, but hardly the only one. We’re seeing, day in and day out, the damage done by rewarding misinformation, thanks to the relentless boosting of fake Israel/Hamas videos – as if the reality weren’t horrifying enough.
Another extract from The New European (but an older piece) feels quite relevant in the light of Rishi Sunak X Elon Musk (as it was styled):
Several decisions made personally by Musk have acted together to make it so that X/Twitter actively finances malicious misinformation during times of conflict. The most obvious change is making verification merely a paid-for service, and requiring news organisations to pay a much higher fee for verification.
Many have, understandably, decided not to foot the bill for tackling misinformation on a social network they don’t own. That means that blue tick promotion is open to anyone. It is then heavily prioritised both in feeds and in replies.
Musk has also reworked the algorithm to punish posts with links that might take people off Twitter – for example, to a full and reasoned argument – in favour of those that keep someone on the site for longer. This favours video and threads posted directly to Twitter.
To finish off this toxic cocktail, add in that X/Twitter now shares revenue with verified creators who meet certain view thresholds, and you have created huge incentives to post whatever will get the most attention – with seemingly no quality control, and no demonetisation for false content (a core component of YouTube’s safety features).
In other words, and to put it plainly: this foul miasma over what is supposed to be the online public square is not the consequence of Elon Musk’s neglect. It is the inevitable result of his deliberate decisions.
To take it one step further than I did in the article: if you advertise on X, this is where your ad budget is going. That is something that will eventually end up on the doorsteps of said companies, sooner than they know.
Full piece here.
Let’s talk about AI, then
There have been more than enough words written about the AI safety summit, so I am going to stick to some observations that have come out of my work with Demos on this paper trying to frame the future discussion on AI and open source in particular. The full paper is available here, and is a distillation of views shared by myself and CASM’s Carl Miller. The thoughts below are my own:
Sunak’s approach will likely prove better than Biden’s: President Biden somewhat rudely jumped the gun on the international AI summit by launching a sweeping executive order on AI. I’m not sure that this was the ‘win’ it appeared to be.
Executive orders are easily reversed and are almost always much less substantial than they appear – they can’t, by definition, have much serious money behind them. The US will struggle to impose its own rules on AI in the way it did with the internet: its head start is much smaller, and the world is multipolar now.
There is also extensive division, even within the AI world, on the right approach to regulation. Biden’s jumping the gun may sideline his team in that debate – so far as it is possible to sideline the US. Convening a regular forum to discuss how to handle all of this could prove much more effective in the long run than getting a shiny executive order.
Immediate risk versus frontier risk is a false divide: This is tackled at greater length in the paper, but worrying about the risk of the singularity or future bioweapons is no reason not to consider risks posed by AIs in the here and now. Finding regulatory models that work for today’s challenges will give us information on what might work tomorrow. Similarly, there’s no discrete switch between immediate and frontier risks: one will turn into the other eventually.
People pushing this divide hard should be regarded with a degree of suspicion: either they haven’t considered the issue deeply, or they have and their main motivation is dissuading regulators from acting against the problems they’re able to tackle today.
The UK is better-positioned to play a role in this than we might think: it’s fashionable in our liberal circles to refer to the UK as some kind of “Brexit island” basket case. That’s not necessarily the case.
Having dealt with UK-based AI businesses and civil servants working on AI for the Demos paper, I was genuinely impressed at the level of UK expertise on these issues – and while we tend to forget it, the UK has one of the largest tech sectors in Europe, which is still growing post-Brexit.
Not being a huge power bloc in our own right gives us an advantage of a sort: we’re not the EU, we’re not the US, but we’re significant enough that we can speak with both and get listened to, at least to an extent. That means we can be a useful convenor or outrider on this. We might not be able to drive the debate, but the ability to convene or to guide it should not be underrated.
Let’s cut this off here before it gets any more sincere. I’ll be back in the next week or two with some updates on populism, the metaverse, and a few other bits. I hope.
Cheers, and please do share if you’ve found this interesting,
James
PS. This newsletter was edited by Jasper Jackson, who is responsible for any bad opinions that have slipped in here somehow.