Some excerpts:
Philosophical discussion of utilitarianism understandably focuses on its most controversial features: its rejection of deontic constraints and the "demandingness" of impartial maximizing. But in fact almost all of the important real-world implications of utilitarianism stem from a much weaker feature, one that I think probably ought to be shared by every sensible moral view. It's just the claim that it's really important to help others—however distant or different from us they may be. [...]
It'd be helpful to have a snappy name for this view, which assigns (non-exclusive) central moral importance to beneficence. So let's coin the following:
Beneficentrism: The view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.
Clearly, you don't have to be a utilitarian to accept beneficentrism. You could accept deontic constraints. You could accept any number of supplemental non-welfarist values (as long as they don't implausibly swamp the importance of welfare). You could accept any number of views about partiality and/or priority. You can reject 'maximizing' accounts of obligation in favour of views that leave room for supererogation. You just need to appreciate that the numbers count, such that immensely helping others is immensely important.
Once you accept this very basic claim, it seems that you should probably be pretty enthusiastic about effective altruism. [...]
Even if theoretically very tame, beneficentrism strikes me as an immensely important claim in practice, just because most people don't really seem to treat promoting the general welfare as an especially important goal.
I'm a big fan of your philosophical writing and your attempts to philosophically defend and refine utilitarianism and effective altruism. I also really like your more general idea here of pushing people to think less about avoiding wrongdoing and more about rightdoing.
I think one thing I'd wonder is what it means to make something a "central life project" and what kind of demandingness this implies. Is GWWC membership sufficient? Is 30 minutes of volunteering a week sufficient? This, I think, is the hard part about satisficing views (even though I personally am definitely a satisficer when it comes to ethics).
I'm also curious what you mean by "[y]ou could accept any number of views about partiality and/or priority", since I think this actually runs counter to one of the core tenets of what I think of as effective altruism, which is the radical empathy/impartiality of extending our care to strangers, nonhuman animals, future people, etc. In fact, I often think you gain a lot more by convincing people to adopt the radical empathy and "per dollar effectiveness maximization" views of effective altruism even if they then don't maximize their efforts / make EA a central life project. That is, I think someone devoting 1% of their income to The Humane League will create more benefit for general welfare than another person devoting 10% of their income to the charities laypeople typically associate with helping the general welfare.
I think the main way to rescue this is to insist strongly on the radical impartiality part but not insist on making it the sole thing a person does with their resources, or even their resources set aside to philanthropy.
Thanks Peter!
Right, I agree that beneficence should be impartial. What I had in mind was that one can combine a moderate degree of impartial beneficence with significant partiality in other areas of one's life (e.g. parenting). Thanks for flagging that this didn't come through clearly enough.
re: "central life project", this is deliberately vague, and probably best understood in scalar terms: the more, the better. My initial aim here is just to get more people on board with adopting it as a project that they take seriously. I don't think I can give a precise specification of where to draw the line. But also, I don't really want to be drawing attention to the baseline minimum, because that shouldn't be the goal.
Thanks! Both of those approaches sound justifiable to me.
I like the idea, though I think it's funny that we go from "It'd be helpful to have a snappy name for this view" to another opaque and easily confused made-up philosophical term. Maybe 'Helping other peopleism'.
I think beneficentrism is a good word and works fine. Feels well-optimized for its target audience, which I gather is philosophers and philosophy-fans who object to EA because they think EA commits you to utilitarianism.
I don't like the name much, though I can't think of better alternatives. I think Will MacAskill had suggested "benetarianism" for this or a similar view, many years ago. But I don't like that name either.
(I shared the post not because I like "Beneficentrism" as a label, but because it identifies a core idea shared by all plausible moral views, and notes that this idea is probably enough to generate the most important practical implications of both utilitarianism and effective altruism. On reflection, perhaps Richard's post should have described the idea without also proposing a label for it, since I fear people who dislike the label will take the idea less seriously than they would otherwise.)
There was a bit of discussion on Twitter about this post. Rob Bensinger had a thread that included this comment:
One (maybe slightly boring) option would be something like "soft welfare-maximisation", where "soft" just means that it can be subjected to various constraints.
Another term for a related concept is Richard Ngo's "scope-sensitive ethics" (or "scale-sensitive", as Ben Todd suggests), which he takes to be "the core intuition motivating utilitarianism". However, that doesn't include any explicit reference to welfare or maximisation.
Is there anything wrong just with 'effective altruism' as the name?
Well, that's not what 'effective altruism' means, right? At least on some understandings of the term, EA is not even a normative view; it's rather a project that people can engage in for a variety of reasons. E.g. "excited altruists" do not, as such, embrace "beneficentrism". (Though I would personally agree that the latter is an excellent reason for becoming involved with EA.)
"Welfarism" would be a natural choice, but that term is already taken.
From the links you posted, the most powerful argument for effective altruism to me was this:
"(Try completing the phrase "no matter..." for this one. What exactly is the cost of avoiding inefficiency? "No matter whether you would rather support a different cause that did less good?" Cue the world's tiniest violin.)"
Unless someone had a kind of limited egoism (that perhaps favored only themselves and their friends, or themselves and their family, or themselves and their country, etc.) or were a sadist, I don't see how they could disagree that making the world a better place in the best way possible is the moral thing to do.
Here is one criticism of EA that I have found powerful:
"Since many people are driven by emotion when donating to charity, pushing them to be more judicious might backfire. Overly analytical donors might act with so much self-control that they end up giving less to charity."
However, many of the charities that one wouldn't give to might have been harmful. So while analytical donors might miss some opportunities, they would also avoid mistakes. Also, it would be desirable to know which actions are helpful and why, so that such actions can be sustained rather than happening only occasionally by chance. Sustaining those actions would be better over the long term.
This is a nice concept, and it reminds me of the beginning of Nate Soares' Replacing Guilt book/blog post series. Specifically, the idea "you're allowed to fight for something," featured on the main page. Both Beneficentrism and Soares' early posts are focused on convincing people that there are goals worth achieving. Beneficentrism promotes a specific goal (general welfare), while Soares is more generally saying that it's OK to pick a goal and try to achieve it, but the tone feels pretty similar to me.