I live for a high disagree-to-upvote ratio
Hmm, no. I wouldn’t want Vida Plena to update without evidence that they have those secondary effects.
But I think it would also be misleading to compare direct effects + household spillovers (in the case of Vida Plena) to direct effects + household spillovers + community spillovers + mortality reduction + consumption increases (GiveDirectly), unless you had good reason to believe that Vida Plena’s secondary effects are much worse than GiveDirectly’s. So I suppose I would be wary of saying that GiveDirectly now have 3–4x the WELLBY impact relative to Vida Plena—or even to say that GiveDirectly have any more WELLBY impact relative to Vida Plena—without having a good sense of how Vida Plena performs on those secondary outcomes. (But I feel like maybe I’m misunderstanding what you meant by applying a discount?)
The cost-effectiveness of GiveDirectly has gone up by 3–4x (GW blog, GD blog). This is recent news and does not necessarily imply that WELLBYs will also go up by 3–4x (most of the increase is attributable to increased consumption), but it should constitute a discount at least.
I’m not sure about this; HLI’s analysis of GiveDirectly only looks at direct individual effects and household spillovers, whereas GiveWell’s update seemingly only found additional effects in terms of non-household spillovers, mortality, and consumption (based on a five-minute check, so I might be wrong here).
I think it’s reasonable to argue that depression prevention would also have effects on mortality, consumption (via productivity increases; my guesses here peg this quite high, especially in LMICs and UMICs), and non-household spillovers (via increased income being reinvested into communities, using the same mechanism as GiveDirectly). Unless there’s reason to believe that the unaccounted-for impacts on WELLBYs systematically favour GiveDirectly, I’d be cautious about applying a discount. But I’m curious for your take on that :)
AIM have already tried to do this for research, and they aren’t sure whether to continue their research fellowship in 2025. I imagine they’d have some very good learnings on this topic if you got in touch!
Interesting to see the same from the EA Animal Welfare Fund, who only gave ~1.2% of funds to explicit alternative protein work. I suspect this is emblematic of a broader shift within EA toward getting easy, quick wins in neglected countries (?)
I am surprised to see such a gap between Europe and North America, given that both are at least economically similar! Would love to hear more about this—in my mental model there is probably more regulatory capture in the U.S., compounded by generally less ideological willingness to help animals. Is this correct?
Military applications of AI are not an idle concern. AI systems are already being used to increase military capacity by generating and analysing targets faster than humans can (and in this case, seemingly without much oversight). Palantir’s own technology likely also allows police organisations to defer responsibility for racist policing to AI systems.
Sure, for the most part, Claude will probably just be used for common requests, but Anthropic have no way of guaranteeing this. You cannot do this by policy, especially if it’s on Amazon hardware that you don’t control and can’t inspect. Ranking agencies by ‘cooperativeness’ should also be taken as lip service until they have a proven mechanism for doing so.
So they are revealing that, to them, AI safety doesn’t mean that they try to prevent AI from doing harm, just that they try to prevent it from doing unintended harm. This is a significant moment for them and I fear what it portends for the whole industry.
If you’re inclined to defend Scott Alexander, I’d like to figure out where the crux is. So I’ll try to lay out some standards of evidence that would be enough to update my own beliefs after reading this article.
If you believe Scott doesn’t necessarily believe in HBD, but does believe it’s worth debating/discussing, why has he declined to explicitly disown or disavow the Topher Brennan email?
If you believe Scott doesn’t believe HBD is even worth discussing, what does he mean by essentially agreeing with the truth of Beroe’s final paragraph in his dialogue on ACX?
For both, why would he review Richard Hanania’s book on his blog without once mentioning Hanania’s past and recent racism? (To pre-empt ‘he’s reviewing the book, not the author’: the review’s conclusion is entirely about determining Hanania’s motivation for writing it.)
If you believe Scott has changed his positions, why hasn’t he shouted from the rooftops that he no longer believes in HBD / debating HBD? This should come with no social penalty.
I would set Julia Wise’s comments to Thorstad in this article as the kind of statement I would expect from Scott if he did not believe in HBD and/or the discussion of HBD.
This is an awesome post, and it's a strong update in the direction of EV & CEA being much more transparent under your leadership. Very keen on hearing more from you in the future!
One other risk vector to EV stood out to me as concerning, but went somewhat unaddressed in this post. Consider:
- EV was in a financial crisis; it had banked on receiving millions from FTX over the coming years.
- If a fraudulent or otherwise problematic individual hasn't been caught by the legal system, EV's donor due diligence tools may not catch them either.
I worry that the focus on legal risks potentially misses a counterfactual here where a funding source is systemically disrupted. EV was not just banking on FTX staying solvent and non-fraudulent; it was also implicitly depending on cryptocurrency remaining frothy (the same can be said for EA more broadly, especially long-term risk cause areas). Even if FTX had not been fraudulent, I still think it's likely that cryptocurrency would have collapsed over the following years. Assuming that the LTFF was receiving a proportion of FTX's funds, this could still have meant more than a 50% drop in funding from FTX (for example, Ethereum lost roughly three-fifths of its market cap between November 2021 and November 2022).
You note:
Guardrails to prevent projects from running out of funding in a disorderly way and runway requirements to maintain resilience to possible future crises.
I would love to understand more about these financial controls. I can imagine that EV could probably withstand a sudden halving in funding from a major donor by reallocating funding between projects, which is probably what's alluded to here.
(It's outside the scope of this post, but I'm not so sure that the broader long-term risk cause areas could have withstood this, and indeed, in the present scenario many organisations did not. I somewhat worry about this kind of systemic risk with Anthropic, who could be hit quite hard if the current AI bubble starts winding down, even if they aren't directly responsible for it; I'm sure there are others.)
So nice to see & thank you for sharing! If you’re ever interested in picking up habit tracking again, I’ve been through the wringer with apps and settled on Bearable; it has a lot of the more correlative stuff you were looking for (i.e. you can set up unlimited inputs & outcomes and it will correlate between them in a simple way).
But, also, I’ve been using habit/factor trackers for years and I’ve never gotten anything useful out of their analysis tools, because there are too many confounders. I mostly find them useful as a habit that causes me to be more mindful about those things, which is valuable in and of itself. (And since I have chronic pain, I can also keep an eye on my flare-ups for my rheumatologist.)