Brits keep getting arrested for tweets. Elon Musk is (part of) the reason.
Social media moderation is being outsourced to UK police forces. Unsurprisingly, they're terrible at it.
There is nothing like engaging in a modern debate on free speech to help you see the virtues of draconian censorship. He who would sacrifice essential liberties so as not to see numerous bad takes on this topic…might just come out ahead.
Nonetheless, let’s get into the UK free speech wars, because there’s a stronger tech angle on it than most of the current debate would indicate and it’s got implications that go well beyond the small island upon which I live.
The first thing to say is that the UK does have something of a problem here, even if it is fashionable in some left-wing or liberal circles to claim otherwise. The last Conservative government passed multiple new laws governing online speech that were often vaguely-worded and which – as some of us tried to warn – risked causing confusion, being enforced too strictly, and threatening free speech.
Keir Starmer’s Labour Party hasn’t actually passed any new laws governing speech, but right-wing newspapers and commentators tend to find it easier to criticise these laws when there’s a nominally left-wing government in power. Enforcing the existing laws is the job of the police and the Crown Prosecution Service, both of which operate independently of political interference. Starmer has many faults, but he is a former Director of Public Prosecutions himself and is a stickler on this stuff: the police aren’t knocking on people’s doors for bad tweets on his order. He is not a Trumpian figure on that front. Really.
But the police are knocking on people’s doors, in serious numbers, and arresting them for their social media posts. It is true that more than 1,000 people are arrested every month over things they posted to the internet. What is often left out is that most of these arrests eventually result in no further action, or in a police caution. Most of the remainder involves serious threatening behaviour, harassment or stalking. The number of people actually being jailed for borderline social posts in the UK remains small.
Still, when the Metropolitan Police catch the thief in just 39 of 6,752 bike thefts in London and shoplifting is widely perceived to go ignored, the idea that police are arresting people for Takes feels…wrong.
There are also some clearly bad decisions being made – whatever you think of the rights and wrongs of arresting Graham Linehan[1], surely everyone can agree that it didn’t need to happen at an airport and certainly didn’t need five officers to do it. That’s both straightforwardly bad on its merits, and tactically bad for his opponents: he can play the victim card on this one, because he very much does look like the victim.
It shouldn’t become normal that the police get involved in social media disputes. They have better things to do, and it’s bad for freedom of expression – even if people aren’t actually being sent to the gulags, self-censorship based on fear is real and a problem. We should be trying to tackle this. And that means looking at why it’s happening. And that’s what brings us around to Elon Musk and the tech bros.
But first: if you’re enjoying the posts here, please do consider becoming a subscriber. I’m trying to post here 2-3 times a month (depending how long they are – the last piece weighed in at just under 4,000 words), and those of you who support it financially make that time and effort so much easier to justify. But any and all subscribers are welcome.
So, you’ve seen a bad tweet…
Let’s briefly jump back to the sunlit uplands / hellish censorious dystopia (delete to suit) of pre-Elon Musk Twitter in 2022, and imagine that you’ve seen a tweet that is so seriously abusive that you think it might be criminal. Perhaps it uses a racial or ethnic slur, or an insult against religion that you think might trigger the UK’s laws against religious hatred[2]. It could be a ‘joking’ threat of violence that you worry might be serious. It’s not an outright threat of violence or a clear-cut criminal case, but it’s borderline.
What do you do? Back then, most people would just report the tweet. Twitter moderation had its failures, but generally in such cases the tweet would be investigated within a couple of hours and a decision reached – if it was found to be abusive or hateful, it would be deleted, and if it was serious enough, the account would be banned. In 99% of cases, that would then be the end of it. There was a simple and easy mechanism to deal with such cases.
And then Elon Musk took over, and he did many things all at once. He changed the moderation rules, but more significantly than that, he stopped paying many (if not most) of the moderators. At the same time, he allowed huge numbers of accounts that had been previously banned from the social network back onto it, without any requirements to change their behaviour. As if all of that wasn’t enough, Musk now posts content daily that is so extreme it would’ve been banned under the previous regime.
The result was predictable and has been widely reported: there is much more hate speech, of every sort, on Elon Musk’s X than there was on Twitter before it. If someone is sent a racist or menacing post, the chances of moderators taking it down are now vanishingly low – to use just one example, Sunder Katwala has extensively documented how targeted and sustained racist abuse aimed at him is left untouched on X even when he reports it. While some barebones moderation remains, X is now in practice ungoverned, and has been allowed to operate in that way.
Musk and his followers would portray that as a victory for free speech. If it leaves some people no longer able or willing to post online because they fear harassment, threats, doxxing or other abuse, that’s not their problem. They are only interested in the narrow definition of “free speech” that suits them – will the social network delete your posts, or will the authorities knock on your door? The positive framing of free speech – as something that enables everyone to contribute – is far too woke for Musk and X.
The problem is that Musk’s free-for-all “no moderation” policy turns into a mess when it operates in a country like the UK, which has laws governing speech online – almost all of which, we should remember, have significant public support.[3] Musk has got rid of the quick and easy way to deal with borderline content.
If someone sees a tweet they think might qualify as hate speech, reporting it to moderators is no longer a realistic option. They either have to tolerate it and do absolutely nothing, or they have to report it to the police. Even if most people will shrug it off, and perhaps just quit the social network or lock their account, some fraction will start reporting to police.
This is the inevitable result of Elon Musk’s changes to X: he has outsourced moderation to the UK’s police forces – and unsurprisingly, they are bad at it. They are having to make decisions, and use the blunt instruments of the law, because Musk is refusing to use the many tools available to him.
Of course, X – despite Musk’s protestations – is not a particularly large social network. Most people don’t use it. It does feature in a disproportionate number of online arrests and rows, by virtue of Musk’s policies and its userbase – and since Trump’s re-election, Mark Zuckerberg (with his much larger social networks) has jumped aboard the anti-moderation train. If the police are jumping onto our posts, it’s partly because the tech bosses have forced them into it.
Isn’t this just another way of blaming big tech for everything?
Honestly, yes, it is – at least a bit. It’s far too easy for Musk to present himself as a “free speech champion” when all he’s actually doing is boosting political views he agrees with, while he cuts the cost of operating his social network – and acting outraged when people get a knock on their door from police as a result of his decisions.
But this doesn’t let the UK government off the hook. In reality, this is still quite substantially their fault. The UK and EU keep passing laws that they insist are going to govern and regulate big tech – but passing a law means nothing if you’re not going to enforce it.
Elon Musk has openly defied multiple UK laws on the operation of a social network since he took over Twitter, and has faced nothing in the way of consequences. The EU has been much the same. The political reality for the UK and EU alike at the moment tends to discourage a fight with big tech. The UK’s politicians add insult to injury by continuing to use X daily even as it inflames their political problems.
But by refusing even to attempt to regulate social media networks, the government has tacitly accepted that the police will be left to patrol unmoderated social networks directly. This will necessarily lead to mistakes.
When regulators are dealing with a social network owned by a major company, specialists can work together to carefully define what a law does and doesn’t cover, helping clarify rules for edge cases and helping companies define very specific policies. When all of that work is thrown away, these decisions are left in the laps of general police constables with no specialist training. That results in a mess.
This all shows why internet regulation and governance exist in the first place: the surge in arrests in the UK – and the huge political rows stemming from it – is a direct result of a breakdown in social media regulation.
It also shows up the phoniness of Musk’s position. The UK is a democracy, and people can vote for the laws they would like. I personally think UK laws governing the internet are overly strict, threaten free speech, and violate privacy – but public opinion isn’t with me. The way to change the situation is to change the laws, and the way to do that is to win people over.
By trying to pretend you can shrug them off “because internet”, Musk isn’t actually advancing a cause, he’s just being a troll – and he’s leaving his followers more vulnerable to consequences, while turning the social network he bought into a sewer in the process.
The actual free speech debate is always more nuanced and more complex than the version people like to have on the internet. The reality is that there have been no new laws in the UK in the last 2-3 years governing free speech. What’s changed has been social media moderation, and it’s those changes that are the likeliest driver of arrests.
Weirdly, if the government wanted to do something to reduce arrests for posting without giving stalkers and harassers free rein to terrorise their victims, the quickest and best way it could do it would be to take enforcement action against social networks.
Does it have the nerve?
[1] And whatever you think of Linehan himself, which in my case is very little.

[2] An important but nerdy thing to know about this law is that it’s not a blasphemy law: you can’t break it by insulting a religion, only by stirring up hatred against people belonging to a particular religious group. It was introduced because of a wave of anti-Muslim hatred beginning in the 2000s. Attempts to prosecute those responsible under existing racial hatred laws failed, because Muslims aren’t a race or ethnic group, which persuaded New Labour of the need for an equivalent law governing religious hatred.

[3] If you’re very online, you might believe the public hate the age verification requirements of the Online Safety Act. In reality, 69% (nice) of the public backs them, and only 16% opposes. Or that’s what they tell pollsters, at least.