tl;dr: Just as not everyone is selfish, not everyone cares about the impartial good in a scope-sensitive way. Claims that effective altruism is "trivial" are silly in a way that's comparable to the error in tautological egoism.
 

Human motivations vary widely (at least on the margins; “human nature” may provide a fairly common core). Some people are more selfish than others. Some more altruistic. Among the broadly altruistic, I think there is significant variation along at least two dimensions: (i) the breadth of one’s “moral circle” of concern, and (ii) the extent to which one’s altruism is goal-directed and guided by instrumental rationality, for example seriously considering tradeoffs and opportunity costs in search of moral optimality.

I think some kinds of altruism—some points along these two dimensions—are morally much better than others. Something I really like about effective altruism is that it highlights these important differences. Not all altruism is equal, and EA encourages us to try to develop our moral concerns in the best possible ways. That can be challenging, but I think it’s a good kind of challenge to engage with.

As I wrote in Doing Good Effectively is Unusual:

We all have various “rooted” concerns, linked to particular communities, individuals, or causes to which we have a social or emotional connection. That’s all good. Those motivations are an appropriate response to real goods in the world. But we all know there are lots of other goods in the world that we don’t so easily or naturally perceive, and that could plausibly outweigh the goods that are more personally salient to us. The really distinctive thing about effective altruism is that it seriously attempts to take all those neglected interests into account.…

Few people who give to charity make any serious effort to do the most good they can with the donation. Few people who engage in political activism are seriously trying to do the most good they can with their activism. Few people pursuing an “ethical career” are trying to do the most good they can with their career. And that’s all fine—plenty of good can still be done from more partial and less optimizing motives (and even EAs only pursue the EA project in part of their life). But the claim that the moral perspective underlying EA is “trivial” or already “shared by literally everyone” is clearly false.

So I find it annoyingly stupid when people dismiss effective altruism (or the underlying principles of beneficentrism) as “trivial”. I think it involves a similar sleight-of-hand to that of tautological egoists, who claim that everyone is “by definition” selfish (because they pursue what they most want, according to their “revealed preferences”). The tautological altruist instead claims that everyone is “by definition” an effective altruist (because they pursue what they deem best, according to their “revealed values”).

Are everyone’s moral motivations really all the same?

Either form of tautological attribution is obviously silly. The extent to which you are selfish depends upon the content of what you want (that is, the extent to which you care non-instrumentally about other people’s interests). Likewise, the extent to which you have scope-sensitive beneficentric concern depends upon contingent details of your values and moral psychology. Innumerate (“numbers don’t count”) moral views are commonplace, and even explicitly defended by some philosophers. Much moral behavior, like much voting, is more “expressive” than goal-directed. To urge people to be more instrumentally rational in pursuit of the impartial good is a very substantive, non-trivial ask.

I think that most people’s moral motivations are very different from the scope-sensitive beneficentrism that underlies effective altruism. (I suspect the latter is actually extremely rare, though various approximations may be more common.) I also think that most people’s explicit moral beliefs make it hard for them to deny that scope-sensitive beneficentrism is more virtuous/ideal than their unreflective moral habits. So my hope is that prompting greater reflection on this disconnect could help to shift people in a more beneficentric direction. (Some may instead double-down on explicitly endorsing worse values, alas. One can but try.)

As with the “everyone is really selfish” move, I suspect that appeals to tautological altruism tend to reflect motivated reasoning from people who don’t want to endure the cognitive dissonance of confronting the disconnect between their everyday moral reasoning and the abstract moral claims they appreciate are undeniable. I think that’s super lame, and people who are opposed to the EA conception of beneficence should stop eliding the differences, grow a spine, and actually argue against it (and for some concrete, coherent alternative).

Comments

Richard - this is an important point, nicely articulated. 

My impression is that a lot of anti-EA critics actually see scope-sensitivity as actively evil, rather than just a neutral corollary of impartial beneficence or goal-directed altruism. One could psychoanalyze why they think this -- I suspect it's usually more of an emotional defense than a thoughtful application of deontology. But I think EAs need to contend with the fact that to many non-EAs, scope-sensitive reasoning about moral issues comes across as somewhat sociopathic. Which is bizarre and tragic, but often seems true.

I am a bit concerned about the "broad moral circle" being treated as definitional to Effective Altruism (though it accords with my own moral views and with those of EAs generally). If I recall correctly, EA, zoomed out as far as possible, has not committed itself to specific moral views. There are disagreements among EAs, for instance, as to whether deontological constraints should limit actions, or whether we should act wholly to maximize welfare as utilitarians. I had thought that the essence of effective altruism is to "do good", at least to the extent that we are trying to do so, as effectively as we can.

Consequently, I would see the fundamental difference between what EA altruists and non-EA altruists are doing as one of deliberateness, from which instrumental rationality would proceed. The non-EA altruist looks to do good without deliberation, or with only bounded deliberation, as to how to do so as well as he/she can. The EA looks to do good with deliberation as to how best to do so.

I would agree that setting a broad moral circle would be an early part of what one would do as an EA (before broader cause-prioritization, for instance), but EA has traditionally been open-minded as to which philosophies are morally true or false, and many have viewed this open-mindedness as an important part of the EA project. Consequently, I would put the adoption of a broad moral circle as a moral value at least one step beyond the definition of EA.

There's certainly room for disagreement over the precise details, but I do think of a broad moral circle as essential to the "A" part of "EA".  As a limiting case: an effective egoist is not an EA.

I feel like there might be two things going on here:

  1. an abstract argument that you need some altruism before you make it effective. This would have a threshold, but probably not a very broad one.

  2. a feeling like there's some important ingredient in the beliefs held by the cluster of people who associate with the label EA, which speaks to what their moral circles look like (at least moderately broad, but also probably somewhat narrowed in the sense of https://gwern.net/narrowing-circle ).

I in fact would advocate some version of EA-according-to-their-own-values to pretty much everyone, regardless of the breadth of their moral circle. And it seems maybe helpful to be able to talk about that? But it's also helpful to be able to talk about the range of moral circles that people around EA tend to feel good about. It could be nice if someone named these things apart.

"EA-according-to-their-own values", i.e. E, is just instrumental rationality, right?

ETA: or maybe you're thinking instead of something like actually internalizing/adopting their explicit values as ends, which does seem like an important separate step?

I was meaning "instrumental rationality applied to whatever part of their values is other-affecting".

I think this is especially important to pull out explicitly relative to regular instrumental rationality, because the feedback loops are less automatic (so a lot of the instrumental rationality people learn by default is in service of their prudential goals).

I think that a broad moral circle follows from EA in the same way that generally directing resources to the developing world rather than the developed world follows from EA. In fact, I think the adoption of a broad moral circle would come steps before the conclusion favoring developing-world assistance. However, I am not sure how wise it is to bundle particular moral commitments into the definition of EA when it could be defined simply as the deliberate use of reason to do the most good insofar as we are in the project of doing good, without specification of what "the good" is. Otherwise, there could be broad arguments about which moral commitments one must make in order to be an EA.

Of course, my definition would require me to bite the bullet that one could be an "effective 'altruist'" and be purely selfish if they adopted a position such as ethical egoism. But I think confining the definition of EA to the deliberate use of reason to best do good, leaving open what that consists of, is the cleaner path. The EA community's rejection of egoists would then follow from the fact that such egoism does not follow from their moral epistemology (or from whatever process they use to discern the good). This would be similar to the scientific community's rejection of a theory in which the sun revolves around the earth: they do not point to enumerations within the definition of science that rule out that possibility, but rather to a higher-order process which leads to its refutation. Moral epistemology would follow from the more basic requirement of reason and deliberateness (we can't do the most good unless we have some notion of what the good is).

Nice post, Richard!

So I find it annoyingly stupid when people dismiss effective altruism (or the underlying principles of beneficentrism) as “trivial”.

Potential nitpick. I would rather use "silly" instead of "stupid". At least in Portuguese, "estúpido" (the word closest to "stupid") is fairly offensive.

Executive summary: The claim that effective altruism is "trivial" or universally shared is misguided, as human moral motivations actually vary widely and most people's everyday moral reasoning differs significantly from the scope-sensitive beneficence that underlies effective altruism.

Key points:

  1. Human motivations vary in selfishness, the breadth of moral concern, and the extent to which altruism is guided by instrumental rationality.
  2. Effective altruism encourages developing moral concerns in the best possible ways, considering opportunity costs and tradeoffs in pursuit of moral optimality.
  3. Most people engaged in charitable giving, activism, or ethical careers do not seriously attempt to do the most good possible.
  4. Dismissing effective altruism as "trivial" is misguided, akin to the error of tautological egoism.
  5. Confronting the disconnect between everyday moral reasoning and the principles of effective altruism may help shift people in a more beneficent direction.
  6. Critics of effective altruism should directly argue against it and for a concrete alternative, rather than eliding the differences in moral motivations.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
