PabloAMC 🔸

Quantum algorithm scientist @ Xanadu.ai
1115 karma · Working (6–15 years) · Madrid, España

Bio


Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)

Comments (140)

Hey Vasco, in a constructive spirit, let me explain how I believe I can be a utilitarian (perhaps hedonistic to some degree), value animals highly, and still not justify letting innocent children die, which I take as a sign of the limitations of consequentialism. Basically, you can stop consequence flows (or discount them very significantly) whenever they pass through other people's choices. People are free to make their own decisions. I am not sure whether there is a name for this moral theory, but it is roughly what I subscribe to.

I do not think this is an ideal solution to the moral problem, but I think it is much better than advocating that we let innocent children die because of what they may end up doing.

I donated the majority of my yearly donations to an AMF campaign I ran through Ayuda Efectiva for my wedding. The goal was to promote effective donations among my family and friends. I also donated a small amount to the EA Forum election because I think it is good, for democratic reasons, to let the community decide where to allocate some funds.

Hi @Jbentham,

Thanks for the answer. See https://forum.effectivealtruism.org/posts/K8GJWQDZ9xYBbypD4/pabloamc-s-quick-takes?commentId=XCtGWDyNANvHDMbPj for some of the points. Specifically, the problem I have with the post is not about cause prioritization or cost-effectiveness.

Arguing that people should not donate to global health doesn't even contradict common-sense morality because as we see from the world around us, common-sense morality holds that it's perfectly permissible to let hundreds or thousands of children die of preventable diseases.

I think I disagree with this. Instead, I think most people find it hard to act on what they believe because of social norms. But I think it would be hard to find a significant percentage of people who believe that we should let innocent children die because of what they could do.

Utilitarians and other consequentialists are the ones who hold "weird" views here, because we reject the act/omission distinction in the first place.

You are probably somewhat right here, but I believe that "letting innocent children die" is an even weirder position to hold.

Hi there,

Let me try to explain myself a bit.

For example, global health advocates could similarly argue that EA pits direct cash transfers against interventions like anti-malaria bednets, which is divisive and counterproductive, and that EA forum posts doing this will create a negative impression of EA on reporters and potential 10% pledgers.

There is a difference between what the post does and what you mention. The post is not saying that you should prioritize animal welfare over global health (which I would find quite reasonable and totally acceptable); I would find that useful and constructive. Instead, the post claims you should simply not donate the money if you are considering antimalarial nets. In other words, that you should let children die because of the chicken they may end up eating.

Also, traditionally, criticism of "ends justifies the means" reasoning tends to object to arguments which encourage us to actively break deontological rules (like laws) to pursue some aggregate increase in utility, rather than arguments to prioritise one approach to improving utility over the other (which causes harm by omission rather than active harm), eg - prioritising animal welfare over global health, or vice-versa.

In fact, the deontological rule he is breaking seems clear to me: that innocent children should not be left to die because their statistical reference class says they will do something bad. And yes, they are still innocent. To me, any moral theory that dictates that innocent children should die is probably breaking apart at that point. Instead, he bites the bullet and assumes that the end (preventing suffering) justifies the means (letting innocent children die). I am sorry to say that I find that morally repugnant.

Also, let me say: I have no issue with discussing the implications of a given moral theory, even if they look terrible. But I think this should be a means to test and set limits on your moral theory, not a way to justify this sort of conclusion. Let me re-emphasize that my quarrel has nothing to do with cause prioritization or cost-effectiveness. Instead, I have a strong sense that innocent children should not be left to die. If my moral theory disagrees with that strong ethical sense, it is the strong ethical sense that should guide the moral theory, and not the other way around.

As I commented above, it would not make any sense for someone caring about animals to kill people.

You only rejected it on the grounds that it is not an effective method and that it would decrease support for animal welfare. Presumably, then, if you could press a button to kill many people without anyone attributing it to the animal welfare movement, you would?

Thanks MHR!

This is informative; I strongly upvoted it. A few comments, though:

  1. I find it okay to entertain the question of what the expected value of doing X or Y is as a function of their consequences, be it for longtermism or animal welfare.

  2. I would find it very morally unappealing to refuse to save lives on the grounds of convicting people of actions they have not yet committed. E.g., if a child is drowning before you, it would be wrong in my opinion to let her drown because she might cause animal suffering. A person can make her own decisions, and I would find it wrong to let her die because of what her statistical group does.

Agreed. Does the end "saving human lives" justify the means "increasing nearterm suffering a lot"?

I think this is mixing things up. Swapping "saving lives" for "increasing nearterm suffering a lot" is not symmetric, for two key reasons. First, one is the cause (saving the life) and the other the consequence, and as such the increased suffering is not really a means. Second, and most importantly, the suffering only happens if the saved child decides to actually eat chicken. This highlights the key issue I have with this line of reasoning: I think people can make their own decisions. After all, I heard the arguments for animal welfare and I switched to a plant-based diet. Convicting people because of something people in their statistical reference class do is morally wrong. For example, I would find it wrong to argue against letting an immigrant into a country because his or her reference class commits crimes with a certain frequency. And I would similarly find it dystopian to preventively incarcerate people because the statistical group they belong to tends to commit certain crimes.

When you argue that "we should let a child who lives in a certain village in Nigeria die" of malaria because Nigerians eat chicken, you are convicting the child for something she has not done yet, just something people in her country do. I find this morally repugnant. It is probably a result of using utilitarianism, but even utilitarianism has limits, and I strongly feel this is one of them.

The specific numbers I presented may well be off, as there is lots of uncertainty.

Let me emphasize: this is not an issue of cost-effectiveness or cause prioritization. You are not saying that it is preferable to prioritize spending the resources on cause X rather than on cause Y. You are saying that it is preferable not to spend the resources at all, and let the child die. I don't like that. You would be telling Peter Singer that, actually, the drowning child should drown not because of the ruined suit or whatever, but because the child might act immorally in the future.

Contra Vasco Grilo on GiveWell may have made 1 billion dollars of harmful grants, and Ambitious Impact incubated 8 harmful organisations via increasing factory-farming?

The post above explores how, under a hedonistic utilitarian moral framework, the meat-eater problem may result in GiveWell grants or AIM charities being net-negative. The post seems to argue that, on expected value grounds, one should let children die of malaria because they could end up eating chicken, for example.

I find this argument morally repugnant and want to highlight it. Using some of the words I have used in a reply:

Let me quote William MacAskill's comments on "What We Owe the Future" and his reflections on FTX (https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx):

A clear-thinking EA should strongly oppose “ends justify the means” reasoning.

First, naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct.

Second, plausibly it is wrong to do harm even when doing so will bring about the best outcome.

Finally, let me say the post itself seems to pit animal welfare against global poverty causes, which I find divisive and probably counterproductive.

I downvoted this post because it is not representative of the values I believe EA should strive for. Showing disagreement alone might have been sufficient, but if someone visits the forum for the first time and sees this post with many upvotes, their impression will be negative and they may not become engaged with the community. If a reporter reads this on the forum, they will cover both EA and animal welfare negatively. And if someone considering taking the 10% pledge or changing their career to support either animal welfare or global health reads this, they will be less likely to do so.

I am sorry, but I will strongly oppose the "ends justify the means" argument put forward by this post.

However, I think the large effects on animals should be seen as a motivation to help animals as cost-effectively as possible, and I do not see how killing people would fit this bill.

I think this is trying to dodge a bullet. It is not a matter of cost-effectiveness; it is that letting a child die of malaria because they could eat chicken is a terrible idea in many (most) ethical frameworks. Let me re-emphasize, but now in Eliezer Yudkowsky's words (https://www.lesswrong.com/posts/Tc2H9KbKRjuDJ3WSS/leaky-generalizations):

In my moral philosophy, the local negative utility of Hitler's death is stable, no matter what happens to the external consequences and hence to the expected utility.

Now, you could argue that, similarly to this case, the expected utility of saving the child might be negative even if the local utility is quite positive. It seems to me that this is convicting someone of something bad (eating chicken) that he has not had time to do yet, and furthermore, on very handwavy probability calculations that could turn out to be wrong!

Let me also quote William MacAskill's comments on "What We Owe the Future" and his reflections on FTX (https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx):

A clear-thinking EA should strongly oppose “ends justify the means” reasoning.

First, naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct.

Second, plausibly it is wrong to do harm even when doing so will bring about the best outcome.

Finally, let me say the post itself seems to pit animal welfare against global poverty causes, which I find divisive and probably counterproductive.

I downvoted this post because it is not representative of the values I believe EA should strive for. Showing disagreement alone might have been sufficient, but if someone visits the forum for the first time and sees this post with many upvotes, their impression will be negative and they may not become engaged with the community. If a reporter reads this on the forum, they will cover both EA and animal welfare negatively. And if someone considering taking the 10% pledge or changing their career to support either animal welfare or global health reads this, they will be less likely to do so.

I am sorry, but I will strongly oppose the "ends justify the means" argument put forward by this post.
