
abrahamrowe

4421 karma · Joined · Working (6-15 years)

Bio

Principal — Good Structures

I previously co-founded and served as Executive Director at Wild Animal Initiative, and was the COO of Rethink Priorities from 2020 to 2024.

Comments (195)

Topic contributions (1)

Nice! This is great pushback! I think that most of my would-be responses are covered by other people, so I'll add one thing just on this:

"Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world."

My experience isn't this. I think that I have probably engaged with something like ~15 donors at the >$1M level in EA or adjacent fields. Doing a brief exercise in my head, thinking through everyone I could, I got to something like:

  • ~33% inherited wealth / family business
  • ~40% seems like they mostly "earned it" in the sense that it seems like they started a business or did a job well, climbed the ranks in a company due to their skills, etc. To be generous, I'm also including people here who were early investors in crypto, say, where they made a good but highly speculative bet at the right time.
  • ~20% seem like they did a lot of very difficult work, but also seem to have gotten really, really lucky - e.g. grew a pre-existing major family business a lot, were roommates with Mark Zuckerberg, etc.
    • Obviously we don't have the counterfactuals on these people's lucky breaks, so it's hard for me to guess what the world looks like where they didn't have this lucky break, but I'd guess they'd at least be at a much lower giving potential.
  • ~7% I'm not really sure.
     

So I'd guess that even trying to do this approach, only like 50% of major donors would pass this filter. Though it seems possible luck also played a major role for many of those 50% too and I just don't know about it. I'm surprised you find the overall claim bizarre though, because to me it often feels somewhat self-evident from interacting with people at different wealth levels within EA, where it seems like the best-calibrated people are often, like, mid-level non-executives at organizations, who don't have the information distortions that come from having power but do have deep networks / expertise and a sense of the entire space. I don't think ultra-wealthy people have worse views, to be clear — just that wealth and having well-calibrated, thoughtful views about the world seem unrelated (or to the extent they are correlated, those differences stop being meaningful below the wealth of the average EA donor), and certainly a default of "cause prioritization is directly downstream of the views of the wealthiest people" is worse than many alternatives.


I strongly agree about the clunkiness of this approach though, and with many of the downsides you highlight. In my ideal EA, lots and lots of things like this would be tried, the good ones would survive and iterate, and EAs would generally experiment with different models for distributing funding, so this is my humble submission to that project.

I agree! I think that these donors are probably the least incentivized to do this, but also where the most value would come from. Though I'll note that, as of writing this comment, the average is well above 10x the minimum donation.

Yeah, I agree that this seems tricky. I thought about sub-causes, but also worried they'd just make it really burdensome to participate every month.

I ended up making a Discord for participants, and added a channel where people can explain their allocation, so my hope is that this lets people who have strong sub-cause prioritization make the case for it to donors. Definitely interested in thoughts on how to improve this though, and it seems worth exploring further.

After some clarifying offline discussions, I want to explain the decrease in my confidence in the statement, "Farmed vertebrate welfare should be an EA focus".

I think my view is slightly more complicated than this implies. Given that OpenPhil and non-EA donors are basically able to fund what seems like the entirety of the good opportunities in this space, that these groups don't seem that talent constrained, and that the best bets (e.g. corporate campaigns) will likely continue to have decreasing cost-effectiveness, I think new animal-focused talent should probably mostly be going into earning-to-give for invertebrates/WAW, and donations should mostly go to groups there or to the EA AWF (which should in turn mostly fund invertebrates and WAW). I don't think farmed vertebrate welfare should be the default way that EAs recommend to help animals.

I mean something like directly implementing an intervention vs finance/HR/legal/back office roles, so ops just in the nonprofit sense.

Yeah, I think there are probably parts of EA that will look robustly good in the long run, and part of the reason I think EA as a whole is less likely to be positive (and more likely to be neutral or negative) is that actions in other areas of EA could impact those areas negatively. Though this could cut both in favor of or against GHD work. I think just having a positive impact is quite hard, even more so when doing a bunch of uncorrelated things where some of them have major downside risks.

I think it is pretty unlikely that FTX harm outweighs the good done by EA on its own, but it seems easy enough to imagine that, conditional on EA's net benefit being barely above neutral (which seems pretty possible to me for the other reasons mentioned above, along with EA increasingly working on GCRs, which directly increases the likelihood that EA work ends up being net-negative or neutral, even if in expectation that shift is positive value), the scale of the stress / financial harm caused by EA via FTX outweighs that remaining benefit. And then there is the brand damage to effective giving, etc.

But yeah, I agree that my original statement above seems a lot less likely than FTX just contributing to an overall EA portfolio of harm, or of work that doesn't matter in the long run.

I don't think it's all net-negative — I think there are lots of worlds where EA has lots of good and bad that kind of wash out, or where the overall sign is pretty ambiguous in the long run.

Here are some ways I think it's possible EA could end up causing a lot of harm. I don't really think any of these are that likely on their own — I just think it's generally easier to cause harm than to produce good, so there are lots of ways EA can accidentally fail to be overall positive, and I generally think it has an uphill climb to avoid ending up as a neutral or ambiguous quirk in the ash heap of history.

  • The various charities don't produce enough value to offset the harms of FTX (it seems likely to me that they already have produced more, but I haven't thought about it)
  • Things around accidentally accelerating AI capabilities in ways that end up being harmful
  • Things around accidentally accelerating various bio capabilities in ways that end up being harmful.
  • Enabling some specific person to enter a position of power where they end up doing a lot of harm.
  • X-risk from AI is overblown, and the E/accs are right about the potential of AI, and lots of harm is caused by trying to slow AI development/regulate it.
  • There is an even stronger reactionary response to some future EA effort that makes things worse in some way.
  • Most of the risk from AI is algorithmic bias and related harms, and EA AI folks' conflict with people in that field ends up being harmful for reducing it.
  • Using only EV for making decisions accidentally leads to a really bad world, even when all decisions made were positive EV.
  • EA crowds out other better effective giving efforts that could have arisen.

Two caveats on my view:

  • I think I'm skeptical of my own impact in ops roles, but it seems likely that senior roles are harder to hire for generally, which might mean that taking one could be more impactful (if you're good at it).
  • I think many other "doer" careers that aren't ops are very impactful in expectation — in particular founding new organizations (if done well or in an important and neglected area). I also think work like being a programs staff member at a non-research org is very much in the "doer" direction, and could be higher impact than ops or many research roles.

Also, I think our views as expressed here aren't exactly opposite — I think my work in ops has had relatively little impact ex post, but that's slightly different than thinking ops careers won't have impact in expectation (though I think I lean fairly heavily in that direction too, just due to the number of qualified candidates for many ops roles).

Overall, I suspect Peter and I don't disagree a ton on any of this (though I haven't talked with him about it), and I agree with his overall assertion (more people should consider "doer" careers over research careers); I just also think that more people should consider earning to give over any direct work.

Also, Peter hires for tons of research roles, and I hire for tons of ops roles, so maybe this is also just us having siloed perspectives on the spaces we work in?

Thanks for the questions!!

"What makes you hopeful that scalable interventions are coming, and can you say more about anything you're particularly excited about here?"

The ones that seem most likely in the near future are:

  • Insecticide interventions like alternative crop insect management approaches, including genetic ones
  • Less painful insecticides
  • Fertility control for urban wildlife
  • Probably a lot more that no one has considered

Things that make me think this is on the table:

  • I think there aren't great alternative animal welfare interventions, but animal interventions have really good returns if you get them right because you can impact so many animals.
  • We've made some cool progress on validating welfare measures that might be cheap to measure, which could be useful for assessing the sign of interventions.
  • It seems generally like the academic field building project is going well, so we should expect this to accelerate. 

In terms of timelines — I think this is more like 10-15 years. But part of the reason I think that's exciting is that I used to think it would be more like 2050+ before anything like this was on the table. I think I've also just generally decreased my confidence that the problems are as difficult as I thought before (though I definitely think they are still tricky).

For insecticides, I think my view remains that we are something like 2-5 years of specific lab/field research away from plausibly having a great intervention, so it is sad that progress hasn't been made on it, and given that this also seemed like the case a few years ago, funding the research should have been a priority earlier.
