I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.
I have a website: https://mdickens.me/. Most of the content there gets cross-posted to the EA Forum.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
I get the sense that GiveWell would not recommend any animal welfare intervention (nor any x-risk or policy intervention). But I don't think that's because they believe interventions that don't meet their standards aren't worth funding: they fund plenty of more speculative interventions through Open Philanthropy. I think GiveWell wants to be viewed as a reliable source of high-quality charities, so they don't want to recommend more speculative ones even if the all-things-considered EV is good.
(I'm just speculating here.)
As with ~all criticisms of EA, this open letter doesn't give any concrete description of what would be better than EA. Just once, I would like to see a criticism say, "You shouldn't donate to GiveWell top charities; instead, you should donate to X, and here is my cost-effectiveness analysis."
The only proposal I saw was (paraphrased) "EA should be about getting teenagers excited to be effectively altruistic." OK, the movement-building arm of EA already does that. What is your proposal for what those teenagers should then actually do?
Juncture 1 – Animal Friendly Researchers
There is some reason to believe that virtually everyone is too animal-unfriendly, including animal welfare advocates:
Animal welfare has much higher EV even under conservative assumptions. IMO, the only plausible argument against is that the evidence base for animal welfare interventions is much worse, so if you're very skeptical of unproven interventions, you might vote the other way. But you'd have to be very skeptical.
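To show what "much higher EV even under conservative assumptions" means structurally, here's a toy back-of-the-envelope in Python. Every number is a hypothetical placeholder I made up for illustration, not an estimate from any actual cost-effectiveness analysis:

```python
# Toy EV comparison. All numbers below are made-up placeholders
# chosen to illustrate the structure of the argument, not real
# cost-effectiveness estimates.

# Conservative assumptions, stacked against animal welfare:
moral_weight = 0.01            # value a chicken-year at 1% of a human-year
p_success = 0.3                # 70% chance the intervention does nothing
animal_years_per_dollar = 50   # chicken-years improved per $1 (hypothetical)

# GiveWell-style benchmark (also hypothetical):
human_qalys_per_dollar = 0.01  # roughly $100 per QALY-equivalent

animal_ev = moral_weight * p_success * animal_years_per_dollar
print(f"animal welfare: {animal_ev:.2f} human-QALY-equivalents per $1")
print(f"global health:  {human_qalys_per_dollar:.2f} QALYs per $1")
# Even after a 100x moral-weight discount and a 70% failure rate,
# the animal intervention comes out ~15x ahead on these placeholders.
```

The point is just that you have to stack the moral-weight and skepticism discounts very aggressively before the comparison flips.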
“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders”
After reading your post, I wasn't sure you were right about this. But after thinking about it for a few minutes, I can't come up with any serious mistakes I've made that were "point-in-time blunders".
The closest thing I can think of is when I accidentally donated $20,000 to the GiveWell Community Foundation instead of The Clear Fund (aka GiveWell), but fortunately they returned the money so it all worked out.
Some interesting implications about respondents' median beliefs:
(A couple of these sound super wrong to me but I won't say which ones)
Can you explain how to use the discount rate parameter? From context, it looks like it's meant as the "social discount rate", not "pure time preference" (which I would think should be 0). But for EAs, I think of the social discount rate as positive mostly because of x-risk, and your model already accounts for AI x-risk separately. So should the social discount rate exclude discounting due to AI x-risk? In that case, I would expect it to be pretty low.
Edit: Never mind, I missed that you explained this in the post. Seems like you agree with my interpretation that the discount rate is based on non-AI x-risk.
(In which case 2% seems way too high to me, but that's just details.)
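To make my interpretation concrete, here's the decomposition I have in mind (my own reading, not something the post commits to):

$$r = \delta + h_{\text{AI}} + h_{\text{other}}$$

where $r$ is the social discount rate, $\delta$ is pure time preference (which I'd set to 0), $h_{\text{AI}}$ is the annual hazard from AI x-risk, and $h_{\text{other}}$ is the annual hazard from all other catastrophes. Since the model already handles $h_{\text{AI}}$ separately, the discount rate parameter should only capture $\delta + h_{\text{other}} \approx h_{\text{other}}$, and 2% seems high for non-AI catastrophic risk alone.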