
JackM

4075 karma · Joined

Bio

Feel free to message me on here.

Comments (681)

Yes, but if I were to ask my non-EA friends what they give to (if they give to anything at all), they would say things like local educational charities, soup kitchens, animal shelters, etc. I do think EA generally has more of a focus on saving lives.

You're right that the issue at its core isn't the meat eater problem. The bigger issue is that we don't even know whether saving lives now will increase or decrease future populations (there are difficult arguments on both sides). If we don't even know that, then we will be at a complete loss when trying to assess impacts on animal welfare and climate change, even though we know there will be important impacts there.

I dispute this. I'm admittedly not entirely sure but here is my best explanation.

A lot of EA interventions involve saving lives, which influences the number of people who will live in the future. This in turn, we know, will influence the following (to give just two examples):

  • The number of animals who will be killed for food (i.e. impacting animal welfare).
  • CO2 emissions and climate change (i.e. impacting the wellbeing of humans and wild animals in the future).

Importantly, we don't know the sign or magnitude of these "unintended" effects (partly because we don't in fact know whether saving lives now leads to more or fewer people in the future). But we do know that these unintended effects will predictably happen and that they will swamp the "intended" effect of saving lives in magnitude. This is where the complex cluelessness comes in. Considering the predictable effects (both intended and unintended), we can't really weigh them against each other. If you think you can weigh them, then please tell me more.
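To make the structure of the worry more explicit, here is a rough expected-value sketch (my own framing, not Greaves' formal model; the $\Delta V$ labels are just placeholders I'm introducing):

$$
\mathbb{E}[\Delta V_{\text{total}}] = \underbrace{\mathbb{E}[\Delta V_{\text{intended}}]}_{\text{lives saved: sign known, modest size}} + \underbrace{\mathbb{E}[\Delta V_{\text{unintended}}]}_{\text{population, animal, climate effects: sign unknown, plausibly much larger}}
$$

If the second term plausibly dwarfs the first in magnitude and we can't settle its sign, then we can't settle the sign of the total either, which is the sense in which we "can't weigh them".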

So I think it's the saving lives that really gets us into a pickle here - it leads to so much complexity in terms of predictable effects.

There are some EA interventions that don't involve saving lives and don't seem to me to run into the cluelessness issue, e.g. expanding our moral circle through advocacy, building AI governance structures to (for instance) promote global cooperation, and global priorities research. I don't think these run into complex cluelessness because, in my opinion, it seems easy to say that the expected positives outweigh the expected negatives. I explain this a little more in this comment chain.

Also, note that under Greaves' model there are types of cluelessness that are not problematic, which she calls "simple cluelessness". An example is deciding whether to conceive a child on a Tuesday or a Wednesday. Any chance that one of the options has some long-run positive or negative consequence is counterbalanced by an equal chance that the other option has that consequence. In other words, there is evidential symmetry across the available choices.

A lot of "non-EA" altruistic actions I think we will have simple cluelessness about (rather than complex), in large part because they don't involve saving lives and are often on quite a small scale so aren't going to predictably influence things like economic growth. For example, giving food to a soup kitchen - other than helping people who need food it isn't at all predictable what other unintended effects will be so we have evidential symmetry and can ignore them. Basically, a lot of "non-EA" altruistic actions might not have predictable unintended effects, in large part because they don't involve saving lives. So I don't think they will run us into the cluelessness issue.

I need to think about this more but would welcome thoughts.

I see where Greaves is coming from with the longtermist argument. One way to avoid the complex cluelessness she describes is to ensure the direct/intended expected impact of your intervention is sufficiently large that it swamps the (foreseeable) indirect expected impacts. Longtermist interventions target astronomical / very large value, so they can in theory meet this standard.

I'm not claiming all longtermist interventions avoid the cluelessness critique. I do think you need to consider interventions on a case-by-case basis. But I think there are some fairly general things we can say. For example, the issue with global health interventions is that they pretty much all involve increasing the consumption of animals by saving human lives, so you have a negative impact that is hard to weigh against the benefits of saving a life. You don't have this same issue with animal welfare interventions.

Just to make sure I understand your position - are you saying that the cluelessness critique is valid and that it affects all altruistic actions? So Effective Altruism and altruism generally are doomed enterprises?

I don't buy that we are clueless about all actions. For example, I would say that something like expanding our moral circle to all sentient beings is robustly good in expectation. You can of course come up with stories about why it might be bad, but these stories won't be as forceful as the overall argument that a world which treats the welfare of all beings (that have the capacity for welfare) as important is likely better than one that doesn't.

I'm not so sure about that. The link above argues longtermism may evade cluelessness (which I also discuss here) and I provide some additional thoughts on cause areas that may evade the cluelessness critique here.

I'd also add the cluelessness critique as relevant reading. I think it's a problem for global health interventions, although I realize one could also argue that it is a problem for animal welfare interventions. In any case, it seems highly relevant for this debate.

Thanks for taking a balanced view, but I would have liked to see more discussion of the replaceability argument, which really is pivotal here.

You say that whoever is hired into a progress-accelerating role, even if they are safety-conscious, will likely be the most effective candidate in the role and so will accelerate progress more than an alternative candidate would. This is fair, but it may not be the whole story. Could the fact that they are safety-conscious mean they develop the AI in a safer way than an alternative candidate would? Maybe they would be more inclined to communicate and cooperate with the safety teams than an alternative candidate. Maybe they would be more likely to raise concerns to leadership, etc.

If these latter effects dominate, it could be worth suggesting that people in the EA community apply even for progress-accelerating roles, and it could be more important for them to take roles at less reliable places like OpenAI than at slightly more reliable places like Anthropic.

The person who gets the role is obviously going to be highly intelligent, probably socially adept, and highly qualified, with experience working in AI, etc. OpenAI wouldn't hire someone who wasn't.

The question is whether you also want this person to care about safety. If so, I would think advertising on the EA job board would increase the chance of that.

If you think EAs, or people who look at the 80,000 Hours job board, are for some reason epistemically worse than others, then you will have to explain why, because I believe the opposite.
