A reset, not a pivot
An apology, plus: does effective altruism have a future now that its poster boy is behind bars?
Hi everyone,
Sorry that this is the first Techtris you’ve seen in quite a few weeks. Our schedule got horribly thrown off by the confluence of two events – the very frenetic launch of my book, “The Other Pandemic: How QAnon Contaminated The World” and the unexpected and sudden death of one of my closest friends.
With those two pressures, something had to give, and I’m afraid it was this newsletter. I want to apologise to everyone for not delivering, and especially to those of you kind enough to pay to support the newsletter.
Please do drop me an email (jrball1@gmail.com) if that’s you, and I’ll add a three month complimentary subscription to anyone who has been paying to make up for the gap in service.
I intend to return to the usual four-newsletters-a-month schedule from the middle of September, in part to build up a library of content and segments to make sure we don’t have further gaps in the service – but there will be some posts between now and then.
The first is a reported essay on Sam Bankman-Fried, FTX, and effective altruism – especially timely this week, given that SBF has just been jailed ahead of his trial for tampering with witnesses. I’m pleased to say it’s in this very newsletter now.
The second will be a collection of work related to QAnon and my new book, which will be with you by the end of this month, and then normal service should resume by the middle of September.
Thanks everyone for sticking with Techtris, and please do continue to do so.
My thanks,
James
(This newsletter is edited by Jasper Jackson, who has agreed that the real blame for the gap in service lies with him, for not goading me to post often enough)
What next for “effective altruism”… does the term even have meaning?
(Midjourney’s four suggestions when asked for workers from a crypto startup working on a beach in the Bahamas. Making them all male was very much Midjourney’s own idea, not the prompt)
A little over one year ago, Sam Bankman-Fried (widely known as “SBF”) could not have been flying higher – aged just 30, he had an estimated net worth of $20 billion, making him the world’s second richest millennial, behind only Mark Zuckerberg. He was a new kind of billionaire, too, committed to giving away “99%” of his wealth to good causes, and – perhaps thanks to generous donations to multiple newsrooms – was a media darling to boot.
Today, he faces up to 150 years in prison, after his cryptocurrency and hedge fund empire collapsed amid allegations of fraud. Former employees have pleaded guilty and say they acted in concert with Bankman-Fried to defraud lenders. He denies all wrongdoing.
The full truth is yet to emerge, but Bankman-Fried’s meteoric rise and fall had a strange beginning: a chance lunch, when he was just an undergraduate, with an eccentric Scottish academic looking to revolutionise how the world thinks about altruism.
That academic was William MacAskill – whose book, “What We Owe the Future”, was endorsed by none other than Elon Musk. The idea of effective altruism was supposed to change the world – perhaps even to save it. The brainchild of a group of philosophers and academics around Oxford in 2010, EA, as its proponents call it, would overturn the clunky world of organised philanthropy by following the evidence and targeting money where it would be most beneficial – whether that’s direct cash transfers, mosquito nets or deworming pills.
MacAskill had been giving a talk in Cambridge, Massachusetts, in 2012, and agreed to have lunch afterwards with a promising undergraduate interested in his nascent movement. Bankman-Fried asked MacAskill about working in the field of animal welfare, but was told it made more sense under EA principles to “earn to give”: get as rich as you possibly can, give away as much of it as possible, and you’ll do far more good in the long run than if you worked for a charity.
Bankman-Fried seemingly took the advice to heart, becoming a billionaire in his 20s and holding true to his pledge – he kept in touch with MacAskill and donated millions to charities following his philosophy.
For most of the past decade, this looked like an idea that had paid off on an epic scale. Bankman-Fried went into finance and then cryptocurrency, founding the hedge fund Alameda Research and the crypto exchange FTX – and boosting his net worth to a peak of $26 billion. He held firm to his commitments to EA, too: of the $46 billion of future giving pledged through EA, an incredible $16 billion came from SBF.
And then, as 2022 came to a close, the whole thing collapsed when it was revealed Alameda was being propped up by billions in secret lending from FTX – leaving SBF facing fraud charges in the US, investors looking for their missing billions, and EA with some serious soul-searching on its hands.
MacAskill, who last year had even suggested SBF should buy Twitter alongside Musk, declined to comment on his fateful lunch for this article, but previously told the Times he was “shocked and disgusted” by his former friend, who he said had “sadly conned” him.
One staffer at an EA organisation – who asked not to be named when reflecting on the movement’s founder and its foundational moment – thought the lunch had changed the world, though perhaps not as MacAskill had intended.
“Clearly the world might look quite different if that lunch hadn’t happened,” they said. “Foreseeing that outcome, though…” They let the sentence trail off with a verbal shrug, the reasonable implication being that MacAskill could hardly have predicted what happened next.
Effective altruism is, then, a movement in the midst of a PR crisis. But its issues run deeper – EA is engaged in almost existential soul-searching, tackling not just the very public disgrace of its most famous adherent, but also a string of allegations of sexual assault and harassment, and accusations that the movement has become almost cult-like. How can a movement supposedly focused on doing good more effectively have gone so astray?
The core idea of the movement is uncontroversial: we should care about how well the money we give to charity is spent, and try to ensure it does the most good it can. The leading UK effective altruism organisations formalise this in a number of ways – some create a secular tithe of sorts, asking members to commit to donating at least 10% of their annual income; some stage regular discussions on how to solve the world’s problems.
Others go further. One organisation, 80,000 Hours – named for the duration of a typical 40-year, full-time career – suggests occupations to maximise the benefit of your working life. This includes a ranked list of problems it believes people should tackle: “risks from artificial intelligence” is first, “catastrophic pandemics” second, and “building effective altruism” third. “Climate change” ranks seventh.
That constant – arguably doomed – push for rationalism in all things has had personal costs for some involved, too. Over the last month, both TIME magazine and Bloomberg revealed that women within the effective altruism movement – which is overwhelmingly male – had faced harassment and pressure from their male peers. One form this took was the argument that monogamy was not rational, being less efficient than polyamory, with some women telling reporters they felt pressured into polyamorous relationships with multiple men.
This absence of boundaries – and the sense a woman is not genuinely part of the movement unless she acts in certain ways – can swing the other way, too. One woman in the UK charity sector recalled her surprise at being asked, by a stranger, on an effective altruism online forum whether she thought getting married or having children was against the principles of EA, given that they distracted from the cause.
Dr Beth Watts-Cobbe, a homelessness researcher at Heriot-Watt University, classes herself as a supporter of the ideas of effective altruism, but not necessarily of the movement itself.
“When a friend saw a piece I wrote supporting effective altruism, they sent me about ten things about the cultish weirdness of it all,” she says. “The movement has taken a very specific, mathematically-driven, quite purist approach and gone along – quite bizarrely – with a particular lifestyle, which I don’t go along with.”
Watts-Cobbe says the useful bit of EA for her is an approach that involves actually assessing the benefit of different interventions – instead of just supporting charity for a “happy glow” that doesn’t generate positive change.
“There is something to the core idea, that we should hold ourselves to account when we donate to charity, instead of just feeling good about ourselves,” she says.
“There’s an inbuilt human tendency to care about particular kinds of things – the one girl struggling in front of you is more compelling than millions dying of malaria thousands of miles away – I think there’s a role for EA to be a corrective to that.”
The hardcore mathematical approach of EA – coupled with the interests of its billionaire advocates – led the group into some seemingly esoteric interests. A movement that started by looking at interventions like deworming or malaria nets quickly became obsessed with developing safe artificial intelligence, or getting humanity off the planet.
“I wonder about the extent to which the movement’s leadership was seduced by its association with the world of elite wealth and capital, and maybe became slightly intoxicated with it,” says Charles Keidan, a philanthropy expert who edits the trade publication Alliance magazine.
Silicon Valley donors had proven more interested in some of the longer-term consequences of EA’s logic, Keidan explains.
The argument, as advanced by its proponents, is that potential future lives should be worth as much as those of people living today. Given the future stretches out billions of years ahead, there are trillions of potential future humans, versus a mere eight billion of us alive today.
If you follow that logic to its conclusion, then those future humans absolutely have to take priority over anyone living today – any money spent feeding a starving child is money wasted. Instead, all effective altruism funds should go towards threats that could stop the future having trillions of living humans. This lends itself to funding the mission to make humanity an interplanetary species, lest the Earth be destroyed, and making sure that if AI gains sentience, it acts in the interests of humanity.
This is not a majority view within EA, but it is the source of a possible schism. A movement founded on effective and efficient modern charity can quickly be led to strange places, as a result of its own logic.
One such apparently perverse outcome was the purchase of Wytham Abbey, a £15 million stately home outside Oxford in which Elizabeth I and Oliver Cromwell once lived (though not at the same time). That’s a far cry from mosquito nets.
The abbey – now used as a 25-bedroom convention centre – was bought by the Centre for Effective Altruism’s parent organisation, Effective Ventures, which was founded by MacAskill. Though the organisation had received funding from Sam Bankman-Fried, the abbey purchase was largely financed by a different body, Open Philanthropy, backed by Facebook co-founder Dustin Moskovitz and his wife, former Wall Street Journal reporter Cari Tuna.
In a post tackling the apparent contradiction of an organisation dedicated to effective philanthropy spending its money on a luxury property, Open Philanthropy staffer Claire Zabel explained the logic of the purchase: by hosting events at the abbey, EA could persuade many more people to commit to giving at least 10% of their income to effective altruism causes – meaning that in the long run, the purchase might prove more effective than giving directly to a well-evidenced cause. Plus, by the logic of EA, the ‘real’ cost is much less than £15 million, because the abbey is an asset that can one day be sold.
Zabel lamented that EA has less funding available now than it did in late 2021, but concluded “it’s currently too soon to say whether the usage will justify the investment” – and that she’d consider making the grant again.
That an organisation dedicated to effective philanthropy can so easily defend buying a literal mansion has not gone unnoticed by the wider charity sector, parts of which are frustrated by effective altruism – and which harbour a much broader critique of its goals.
“Effective altruism attracts funding because they don’t threaten the status quo – but it looks like they attract funding because they have the better ideas,” says Professor Linsey McGoey, a sociologist at the University of Essex.
McGoey argues that the technocratic approach of EA brushes aside structural or political issues, and involves simply accepting the world as it is and trying to tinker around the edges.
“It seems so sanguine that it’s hard to sense how it could be problematic for recipients…” she says. “EA acts as if it’s omniscient in some ways, but strategically ignorant in others.”
There are still people willing to sign up to the cause, though – and perhaps one of the tougher roles belongs to Shakeel Hashim, who joined the Centre for Effective Altruism from the Economist as head of communications in September last year.
“I had a baptism of fire,” he notes ruefully, before saying that some growing pains for the EA movement are to be expected.
“This was a very small community of people in a basement office trying to find money to fund malaria nets, and then it grew very quickly and got a lot of money very quickly,” he says. But in the spirit of making an opportunity out of a crisis, he says the current situation could prove useful for the movement.
“It’s created a moment for people to step back and think about things more,” he says. “If you look at the EA online forums, you’ll see tonnes of this reflection at all levels of the EA community. It’s a dialogue that’s happening in public.”
Earn to give, Hashim insists, was already firmly on the way out long before SBF’s ventures collapsed. The EA movement is “more talent constrained than funding constrained” – it turned out it needed skilled people to work in the sector after all (he explains his own move to the Centre from the Economist in such terms).
Effective altruism is finding that almost all of its core tenets are more difficult to define than they first appear. The mantra of earning to give is a much more complex proposition when Sam Bankman-Fried – its living, breathing emblem – has left so many investors without their life savings.
The question of what constitutes “effectiveness” is not the simple, calculable thing EAs might have imagined. Whom to care about – the currently living or the future, humanity or all species, or the environment itself – is a subjective question, not an objective one. How to weigh the benefit of direct interventions against buying mansions becomes a matter for debate: you can make a rationalist case for almost anything, up to and including buying a media outlet for better coverage.
But most of all, effective altruism needs to reckon with its core concept. The movement is based on the idea that evidence is king and charity is calculable. Effective altruism seems to be learning – just as most teenagers do – that the world is an awful lot more complicated than that.


