Richard Y Chappell🔸

Associate Professor of Philosophy @ University of Miami
6256 karma · Joined
www.goodthoughts.blog/
Interests:
Bioethics

Bio

Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog

🔸10% Pledge #54 with GivingWhatWeCan.org

Comments (378)

Thanks for the feedback! It's probably helpful to read this in conjunction with 'Good Judgment with Numbers', because the latter post gives a fuller picture of my view whereas this one is specifically focused on why a certain kind of blind dismissal of numbers is messed up.

(A general issue I often find here is that when I'm explaining why a very specific bad objection is bad, many EAs instead want to (mis)read me as suggesting that nothing remotely in the vicinity of the targeted position could possibly be justified, and then complain that my argument doesn't refute this (very different) 'steelman' position that they have in mind. But I'm not arguing against the position that we should sometimes be concerned about over-quantification for practical reasons. How could I? I agree with it! I'm arguing against the specific position stated in the post, i.e. holding that different kinds of values can't -- literally, can't, like, in principle -- be quantified.)

I think this is confusing two forms of 'extreme'.

I'm actually trying to suggest that my interlocutor has confused these two things. There's what's socially extreme (as opposed to conventional), and there's what's epistemically extreme, and they aren't the same thing. That's my whole point in that paragraph. It isn't necessarily epistemically safe to do what's socially safe or conventional.

Yeah, I agree that one also shouldn't blindly trust numbers (and discounting for lack of robustness of supporting evidence is one reasonable way to implement that). I take that to be importantly different from - and much more reasonable than - the sort of "in principle" objection to quantification that this post addresses.

I think there could be ways of doing both. But yeah, I think the core idea of "it's good to actively help people, and helping more is better than helping less" should be a core component of civic virtue that's taught as plain commonsense wisdom alongside "racism is bad", etc.

Definitional clarity can be helpful if you think that people might otherwise be talking past each other (using the same word to mean something importantly different without realizing it). But otherwise, I generally agree with your take. (It's a classic failure-mode of analytic philosophy that some practitioners pretend not to know what a word means until it has been precisely defined. It's quite silly.)

Eh, I'm with Aristotle on this one: it's better to start early with moral education. If anything, I think EA leaves it too late. We should be thinking about how to encourage the virtues of scope-sensitive beneficentrism (obviously not using those terms!) starting in early childhood.

(Or, rather, since most actual EAs aren't qualified to do this, we should hope to win over some early childhood educators who would be competent to do this!)

I mean, it's undeniable that the best thing is best. It's not like there's some (coherent) alternative view that denies this. So I take it the real question is how much pressure one should feel towards doing the impartially best thing (at the cost of significant self-sacrifice): whether the maximum should be viewed as the baseline for minimal acceptability, so that anything short of it constitutes failure, or whether we should rather aim to normalize something more modest and simply celebrate further good beyond that point as an extra bonus.

I can see pathologies in both directions here. I don't think it makes sense to treat perfection as the baseline, such that any realistic outcome automatically qualifies as failure. For anyone to think that way would seem quite confused. (Which is not to deny that it can happen.) But also, it would seem a bit pathological to refuse to celebrate moral saints? Like, obviously there is something very impressive about moral heroism and extreme altruism that goes beyond what I personally would be willing to sacrifice for others? I think the crucial thing is just to frame it positively rather than negatively, and not to get confused about where the baseline or zero-point properly lies.

What do you mean by "maximization"? I think it's important to distinguish between:

(1) Hegemonic maximization: the (humanly infeasible) idea that every decision in your life should aim to do the most impartial good possible.

(2) Maximizing within specific decision contexts: insofar as you're trying to allocate your charity budget (or altruistic efforts more generally), you should try to get the most bang for your buck.

As I understand it, EA aims to be maximizing in the second sense only. (Hence the norm around donating 10%, not some incredibly demanding standard.)

On the broader themes, a lot of what you're pointing to involves potential conflicts between ethics and self-interest, and I think it's pretty messed up to use the language of psychological "health" to justify a wanton disregard for ethics. Maybe it's partly a cultural clash, and when you say things like "All perspectives are valid," you really mean them in a non-literal sense?

I'd like to see more basic public philosophy arguing for effective altruism and against its critics. (I obviously do this a bunch, and am puzzled that there isn't more of it, particularly from philosophers who - unlike me - are actually employed by EA orgs!)

One way that EAIF could help with this is by reaching out to promising candidates (well-respected philosophers who seem broadly sympathetic to EA principles) to see whether they could productively use a course buyout to provide time for EA-related public philosophy. (This could of course include constructively criticizing EA, or suggesting ways to improve, in addition to - what I tend to see as the higher priority - drawing attention to apt EA criticisms of ordinary moral thought and behavior and ways that everyone else could clearly improve by taking these lessons on board.)

A specific example that springs to mind is Richard Pettigrew. He independently wrote an excellent, measured criticism of Leif Wenar's nonsense, and also reviewed the Crary et al. volume in a top academic journal (Mind, iirc). He's a very highly regarded philosopher, and I'd love to see him engage more with EA ideas. Maybe a course buyout from EAIF could make that happen? Seems worth exploring, in any case.

Sounds like a good move! In my experience (both as an author and a reader), Substack is very simple and convenient, and the network effects (e.g. obtaining new readers via Substack's "recommendations" feature) are much larger than I would have predicted in advance.

My claim is not "too strongly stated": it accurately states my view, which you haven't even shown to be incorrect (let alone "unfair" or not "defensible" -- both significantly higher bars to establish than merely being incorrect!).

It's always easier to make weaker claims, but that raises the risk of failing to make an important true claim that was worth making. Cf. epistemic cheems mindset.
