
Guive

22 karma · Joined

Comments (7)

Scarce relative to the current level or just < 10x the current level? 

I also disagree with those comments, but can you provide more argument for your principle? If I understand correctly, you are suggesting the principle that X can be lexicographically[1] preferable to Y if and only if Y has zero value. But, conditional on saying X is lexicographically preferable to Y, isn't it better for the interests of Y to say that Y nevertheless has positive value? I mean, I don't like it when people say things like "no amount of animal suffering, however enormous, outweighs any amount of human suffering, however tiny." But I think it is even worse to say that animal suffering doesn't matter at all, and that there is no reason to alleviate it even if it could be alleviated at no cost to human welfare. 

 

Maybe your reasoning is more like this: in practice, everything trades off against everything else. So, in practice, there is just no difference between saying "X is lexicographically preferable to Y but Y has positive value" and saying "Y has no value"?

  1. ^

    From SEP: "A lexicographic preference relation gives absolute priority to one good over another. In the case of two-goods bundles, A ≻ B if a₁ > b₁, or a₁ = b₁ and a₂ > b₂. Good 1 then cannot be traded off by any amount of good 2."
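
As a concrete illustration (my own sketch, not from the SEP entry): for two-good bundles, this relation amounts to ordinary tuple comparison, and no amount of good 2 can compensate for even slightly less of good 1.

```python
# Sketch of a lexicographic preference over two-good bundles (a1, a2),
# where good 1 has absolute priority over good 2.

def lex_prefer(a, b):
    """True if bundle a is strictly lexicographically preferred to bundle b."""
    return a[0] > b[0] or (a[0] == b[0] and a[1] > b[1])

# A tiny edge in good 1 beats an enormous amount of good 2.
print(lex_prefer((1.0, 0.0), (0.9, 1_000_000.0)))  # True
# Good 2 only matters as a tie-breaker on good 1.
print(lex_prefer((1.0, 5.0), (1.0, 4.0)))          # True
```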

Well, they could have. A lot of things are logically possible. Unless there is some direct evidence that he was motivated by EA principles, I don't think we should worry too much about that possibility. 

(1) I also heard this, (2) I'm pretty sure his name is spelled "Kuhn" not "Khun".

This is not necessarily an insurmountable obstacle. If someone wants to make a statement anonymously on a podcast, they can write it out and have someone else read it. 

Yeah. The words "estimates" and "about" are right there in the quote. There is no pretense of certainty here, unless you think the mere use of numbers amounts to pretended certainty.

 

But what is decision-relevant is the expected value. So by "best estimate" do they mean the expected value, the maximum likelihood estimate, or something else? To my ear, "best estimate" sounds like it means the estimate most likely to be right, not the mean of the probability distribution. For instance, take option (B) in "Why it can be OK to predictably lose", where you have a 1% chance of saving 1,000 people and a 99% chance of saving no one, and the choice is non-repeatable. I would think the "best estimate" of the effectiveness of option (B) is that you will save 0 lives. But what matters for decision making is the expected value, which is 10 lives. 
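
To make the distinction concrete, here is a minimal sketch using only the numbers from option (B) above:

```python
# Option (B): a 1% chance of saving 1,000 people, a 99% chance of saving no one.
outcomes = {0: 0.99, 1000: 0.01}  # lives saved -> probability

# The single most likely outcome (one reading of "best estimate") is 0 lives.
most_likely = max(outcomes, key=outcomes.get)

# The expected value (the mean of the distribution) is what drives the decision.
expected_value = sum(lives * p for lives, p in outcomes.items())

print(most_likely)     # 0
print(expected_value)  # 10.0
```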

 

Sorry if this is a stupid question; I'm not very familiar with GiveWell. 

Answer by Guive

Richard Chappell on consequentialism, theories of well-being, and reactive vs goal-directed ethics.

Ege Erdil on AI forecasting, economics, and quantitative history.

Chad Jones on AI, risk tolerance, and growth.

Phil Trammell on growth economics (the part of his work more directly focused on philanthropy was covered in his previous appearance). 

Steven Pinker (a lot of what he has written is relevant to one aspect or another of EA). 

Amanda Askell on AI fine-tuning, AI moral status, and AIs expressing moral and philosophical views (she talks some about this in a video Anthropic put out).

Pablo Stafforini on the history of EA and translations of EA content. 

Finally, I think it would be good to have an episode with a historian of the world wars, similar to the episode with Christopher Brown. Anthony Beevor or Stephen Kotkin, maybe.