Stylistically, some commenters don't seem to understand how this differs from a normal cause prioritisation exercise. Put simply, there's a difference between choosing to ignore the Drowning Child because there are even more children in the next pond over, and ignoring the drowning children entirely because they might grow up to do bad things. Most cause prioritisation is the former, this post is the latter.
As for why the latter is a problem, I agree with JWS's observation that this type of 'For The Greater Good' reasoning leads to great harm when applied at scale. This is not, or rather should not be, hypothetical for EA at this point. No amount of abstract reasoning for why this approach is 'better' is going to outweigh what seems to me to be very clear empirical evidence to the contrary, both within EA and without.
Beyond that issue, it's pretty easy to identify any person, grant, or policy as plausibly-very-harmful if you focus only on possible negative side effects, so you end up with motivated reasoning driving the conclusions about what to do.
For example, in this post Vasco recommends:
In addition, I encourage people there to take uncertainty seriously, and, before significant further investigation, only support interventions which are beneficial in the nearterm accounting for effects on farmed animals.
But why stop at farmed animals? What about wild animals, especially insects? What about the long-term future? If we take Expected Total Hedonistic Utilitarianism as seriously as Vasco does, I expect these effects to dominate those on farmed animals. My background understanding is that population increases lead to land being cultivated for farming, which reduces wild animal populations, and so wild animal suffering, quite a bit. So I could equivalently argue:
In addition, I encourage Vasco to take uncertainty seriously, and, before significant further investigation, only support interventions which are beneficial in the nearterm accounting for effects on wild animals.
These would then tend to be the opposite set of interventions to the prior set. It just goes round and round. I think there are roughly two reasonable approaches here:
By contrast, if your genuine goal is to pick an intervention with no plausible chance of causing significant harm, and you are being honest with yourself about possible backfires, you will do nothing.
I appreciate you writing this up at the top level, since it feels more productive to engage here than on one of a dozen comment threads.
I have substantive and 'stylistic' issues with this line of thinking, which I'll address in separate comments. Substantively, on the 'Suggestions' section:
At the very least, I think GiveWell and Ambitious Impact should practice reasoning transparency, and explain in some detail why they neglect effects on farmed animals. By ignoring uncertain effects on farmed animals, GiveWell and Ambitious Impact are implicitly assuming they are certainly irrelevant.
Why? It seems clear that you aren't GiveWell's target audience. You know that, and they know that. Unless someone gives me a reason to think that Animal Welfare advocates were expecting to be served by GiveWell, I don't see any value in them clarifying something that seems fairly obvious.
In addition, I encourage people there to take uncertainty seriously, and, before significant further investigation, only support interventions which are beneficial in the nearterm accounting for effects on farmed animals.
Unless the differences on human welfare are incredibly narrow or the impacts on animal welfare are enormous, this seems like a very bad idea. In general, donating $100 to a charity with suboptimal impacts on human welfare but improved impacts on animal welfare is going to be strictly worse - for both human and animal welfare - than donating $90 to the best human welfare charity and $10 to the best animal welfare charity.
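The splitting argument can be made concrete with a toy calculation. A minimal sketch, where every cost-effectiveness figure is invented purely for illustration (the point is the dominance, not the numbers):

```python
# Hypothetical cost-effectiveness figures (units of welfare per dollar),
# chosen only to illustrate why splitting beats a compromise charity.
best_human = 1.0         # best human-welfare charity: human welfare per $
best_animal = 5.0        # best animal-welfare charity: animal welfare per $
compromise_human = 0.8   # compromise charity: suboptimal on humans
compromise_animal = 0.2  # compromise charity: modest animal benefit

# Option A: $100 to the compromise charity
a_human, a_animal = 100 * compromise_human, 100 * compromise_animal

# Option B: $90 to the best human charity, $10 to the best animal charity
b_human, b_animal = 90 * best_human, 10 * best_animal

# B dominates A on both dimensions: 90 > 80 human welfare, 50 > 20 animal welfare
assert b_human > a_human and b_animal > a_animal
```

Whenever the compromise charity is strictly worse on each axis than the respective specialist, some split of the budget dominates it on both.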
Similarly, investigating the exact size of the effects mostly seems like a waste of time to me. I wrote this up in more detail a few years ago; it was addressing a longtermist cluelessness critique, but you can pretty much cut-and-paste the argument. To save a click-through, the key passage is:
Similar thoughts would seem to apply also to other possible side-effects of AMF donations: population growth impacts, impacts on animal welfare (wild or farmed), etc. In no case do I have reason to think that AMF is a particularly powerful lever to move those things, and so if I decided that any of them was the Most Important Thing, AMF would not even be on my list of candidate interventions.
GiveWell and Ambitious Impact could also offset the nearterm harm caused to farmed animals by funding the best animal welfare interventions. I calculate these are over 100 times as cost-effective as GiveWell’s top charities ignoring their effects on animals. If so, and some funding from GiveWell or Ambitious Impact is neutral due to negative effects on animals, these could be neutralised by donating less than 1 % (= 1/100) of that funding to the best animal welfare interventions.
Equally, GiveWell or AIM's donors can offset if they are worried about this. That seems much better than GiveWell making the choice for all their donors.
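For what it's worth, the offsetting arithmetic in the quoted suggestion is just a ratio. A minimal sketch, where the 100x figure comes from the quote and the grant size is a made-up placeholder:

```python
# Offsetting arithmetic from the quoted passage. The 100x ratio is the
# quote's claim; the grant size is purely hypothetical.
ratio = 100        # claimed cost-effectiveness of the best animal welfare
                   # interventions relative to GiveWell's top charities
grant = 1_000_000  # hypothetical grant, assumed to cause animal-welfare harm
                   # at most as large as the grant's own benefit

# An offset donation of grant / ratio to the best animal interventions
# then neutralises the assumed harm.
offset = grant / ratio
offset_fraction = offset / grant
print(offset, offset_fraction)  # 10000.0 0.01, i.e. 1% of the grant
```

The "less than 1%" claim therefore stands or falls entirely with the 100x cost-effectiveness ratio and the assumption that the harm is no larger than the grant's own scale of benefit.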
I think my if-the-stars-align minimum is probably around £45k these days. But then it starts going up once there are suboptimal circumstances like the ones you mention. In practice I might expect it to land at 125% to 250% of that figure depending how the non-salary aspects of the job look.
I'm curious about the motivation of the question; FWIW my figure here is a complicated function of my expenses, anticipated flexibility on those expenses, past savings, future plans, etc., such that I wouldn't treat it as much of a guide to what anyone else would or should say.
It does indeed depend a lot. I think the critical thing to remember is that the figure should be the minimum of what it costs to get a certain type of talent and how valuable that talent is. Clean Water is worth thousands of dollars per year to me, but if you turned up on my doorstep with a one-year supply of water for $1k I'd tell you to stop wasting my time because I can get it far more cheaply than that.
When assessing the cost of acquiring talent, the hard thing to track is how many people aren't in the pool of applicants at all due to funding constraints. That sounds like it's Abraham's position and I think it's more common than often given credit for; there's something very low-status in EA about saying 'I could be doing this more impactful thing, but I won't because it won't pay me enough'.
Funding isn't the only constraint on salaries, of course; appearances matter too. Once your org is paying enough that you can't really pay more without getting a lot of sideways glances you don't want, that's when I would mostly stop calling you funding-constrained*. At that point I imagine this number can get really, really high; the cost of talent becomes ~infinite and we're back to looking at 'value'. Open Phil's hiring is perhaps in approximately that position.
If you are still in a position where you could raise salaries if it weren't for funding constraints, I tend to think this number struggles to make it out of low six figures. Possible exceptions are positions that want a very specific combination of skills and experiences, like senior leadership at central EA orgs.
*Assuming you are mostly turning money into people into impact, rather than e.g. money into nets into impact.
I got very lucky that I was born in a city that is objectively one of the best places in the world to do what I do, so reasons to move location are limited.
More generally, I don't feel like I'm doing anything particularly out of the ordinary here compared to a world where I am not donating; I like money, and more of it is better than less of it, but there are sometimes costs to getting more money that outweigh the money. Though I would say that as you go up the earnings curve it gets easier and easier to mitigate the personal costs, e.g. by spending money to save time.
Perhaps the biggest risk is that if I set my marginal tax + donation rate too high I am insufficiently incentivised to earn more, to the detriment of both me and the world. Still working on that one.
This really depends how broadly I define things; does reading the EA Forum count? In terms of time that feels like it's being pretty directly spent on deciding, my sense is ~50 hours per year. That's roughly evenly split between checking whether the considerations that inform my cause prioritisation have changed - e.g. has a big new funder moved into a space - and evaluating individual opportunities.
I touched on the evaluation question in a couple of other answers.
My views have not changed directionally, but I do feel happier with them than I did at the time for a couple of reasons:
With my more recent work it seems much too soon to say anything definitive about social impact, so I always try to acknowledge some chance that I'll feel bad when I look back on this.
It has varied. Giving both of us half the budget is in some ways most natural but we quickly noticed it was gameable to the extent we can predict each other's actions, similar to what is described here. At the moment we're much closer to 'discuss a lot and fund based on consensus'.
I think the short answer is 'depends what you mean?'. Longer answer: