Richard Y Chappell🔸

Associate Professor of Philosophy @ University of Miami
5226 karma
www.goodthoughts.blog/
Interests:
Bioethics

Bio

Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog

🔸10% Pledge #54 with GivingWhatWeCan.org

Comments (316)

I agree with all this. If any Forum moderators are reading this, perhaps they could share instructions for how to update our display names? (Bizarrely, I can't find any way to do this when I go to edit my profile.)

That's an interesting case! I am tempted to deny that this (putative unconscious desire to be near the ocean) is really a mental state at all. I get that it can be explanatorily convenient to model it as such, using folk (belief-desire) psychology, but the same is true of computer chess programs. I'd want to draw a pretty sharp distinction between the usefulness of psychological modelling, on the one hand, and grounds for attributing real mental states, on the other. And I think it's pretty natural (at least from a perspective like mine) to take consciousness to be the mark of the mental, such that any unconscious state is best understood as mere information-processing, not meaningful mentality.

That's an initial thought, anyway. It may be completely wrong-headed!

Hi Derek, great post! A couple of points of push-back:

Suppose that the global workspace theory of consciousness is true – to be conscious is to have a certain information architecture involving a central public repository — why should that structure be so important as to ground value? What about other information architectures that function in modestly different ways? The pattern doesn’t seem all that important when considered by itself.

Maybe this is my dualist intuitions speaking, but the suppositions here seem to be in some tension with each other. If there's nothing "all that important" about the identified pattern, whyever would we have identified it as the correct theory of consciousness to begin with? (The idea that "consciousness is just one specific algorithm among many" seems very odd to me. Surely one of the most central platitudes for fixing the concept is that it picks out something that is distinctive, or special in some way.)

If things can matter to us even though they don’t affect how we feel, we may be inclined to think that similar things can matter to systems that feel nothing at all.

One reason to reject this inference is if we accept the phenomenal intentionality thesis that consciousness is necessary for having genuinely representational states (including desires and preferences).  I agree that consciousness need not be what's represented as our goal-state; but it may still be a necessary background condition for us to have real goals at all (in contrast to the pseudo-intentionality of mere thermostats and the like).

"Eugenics" is the worst word. (Is there any other word in the English language where the connotations diverge so wildly from the literal denotation?) "Liberal eugenics" is effectively a scissor-statement to generate utterly unnecessary conflict between low and high decouplers. Imagine if the literal definition of "rape" didn't actually include anything about coercion or lack of consent, and then a bunch of sex-positive philosophers described themselves as being in favor of "consensual rape" instead of picking a less inflammatory way of describing being sex-positive. That's eugenics discourse today.

ETA: my point being that it would seem most helpful (both for clear thinking and for avoiding unnecessary conflict) for people to use more precise language when discussing technologically-aided reproductive freedom and technologically-aided reproductive coercion. The two are opposites; they don't become the same thing just because both involve technology and goal-directedness in relation to reproduction!

I think one could reasonably judge GiveWell-style saving and improving lives to constitute reliable global capacity growth, and (if very skeptical of attempts at more "direct", explicitly longtermist long-shots) think that this category of intervention is among our best longtermist options. I suggest something along these lines as a candidate EA "worldview" here.

I'd be curious to hear more about longtermist reasons to view GiveWell top charities as net-negative.

Yeah, that's interesting, but the argument "we should consider just letting people die, even when we could easily save them, because they eat too much chicken," is very much not what anti-EAs like Leif Wenar have in mind when they talk about GiveWell being "harmful"!

(Aside: have you heard anyone argue for domestic policies, like cuts to health care / insurance coverage, on the grounds that more human deaths would actually be a good thing? It seems to follow from the view you mention [not your view, I understand], but one doesn't hear that implication expressed so often.)

That seems reasonable to me! I'm most confident that the underlying principles of effective altruism are important and good, and you seem to agree on that. I agree there's plenty of room for people to disagree about speculative cause prioritization, and if you think the EA movement is getting things systematically wrong there then it makes sense to (in effect, not in these words) "do EA better" by just sticking with GiveWell or whatever you think is actually best.

Apologies for the delay! I've now re-posted the amalgamated full text to the two misdirection posts here, and the interlude on 'What EA means to me' here.

Hi Noah, since I drew the "potential rebuttal" to your attention, could you update your post with the link? Good citation practice :-)

Also, fwiw, I find the clickbaity title rather insulting. It's not really true that being willing to revise some commonsense moral assumptions in light of powerful arguments automatically makes one "bad at moral philosophy". It really depends on the strength of the arguments, and how counterintuitive it would be to reject those premises. Common sense is inconsistent, and the challenge of moral philosophy is to work out how best to resolve the conflicts. You can't do that without actually looking into the details.

Ok, so it sounds like your comparisons with GiveWell were an irrelevant distraction, given that you understand the point of "hits based giving". Instead, your real question is: "why not [hire] a cheap developer literally anywhere else?"

I'm guessing the literal answer to that question is that no such cheaper developer applied for funding in the same round with an equivalent project. But we might expand upon your question: should a fund like LTFF, rather than just picking from among the proposals that come to them, try taking some of the ideas from those proposals and finding different (perhaps cheaper) PIs to develop them?

It's possible that a more active role in developing promising longtermist projects would be a good use of their time. But I don't find it entirely obvious the way that you seem to. A few thoughts that immediately spring to mind:

(i) My sense of that time period was that finding grantmakers was itself a major bottleneck, and given that longtermism seemed more talent-constrained than money-constrained at that time, having key people spend more time just to save some money presumably would not have seemed a wise tradeoff.

(ii) A software developer who comes to you with an idea presumably has a deeper understanding of it, and so could be expected to do a better job of it, than an external contractor to whom you have to communicate the idea. (That is, external contractors increase the risk of project failure due to miscommunication or misunderstanding.)

(iii) Depending on the details, e.g. how specific the idea is, taking an idea from someone's grant proposal to a cheaper PI might constitute intellectual theft. It certainly seems uncooperative / low-integrity, and not a good practice for grant-makers who want to encourage other high-skilled people with good ideas to apply to their fund!
