
If you had the power to ensure all EAs understood one concept back to front, what would it be?

Only one concept per answer.

Please do not read this and feel obliged to understand all of these concepts. I don't want obligation.

I am asking because I might create a test to see how well EAs actually know these concepts, and then see if I can produce content for the concepts people don't know. This is meant to make things easier, not harder.

11 Answers

Expected value seems very important. It underlies a lot of other important concepts, is relevant to both neartermism and longtermism, and is extremely frequently brought up in EA discussions and arguments.
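As a toy illustration (not from the answer above; the intervention names and numbers are made up), expected value is just probability-weighted value, which is what lets you compare a near-certain small win against a long shot:

```python
# Hypothetical illustration of expected value: multiply each outcome's value
# by its probability and sum. All numbers are invented for the example.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs; probabilities should sum to 1."""
    return sum(p * v for p, v in outcomes)

# A safe bet: almost certainly helps a little.
safe = [(0.95, 10), (0.05, 0)]

# A long shot: usually achieves nothing, occasionally achieves a lot.
long_shot = [(0.01, 2000), (0.99, 0)]

print(expected_value(safe))       # 9.5
print(expected_value(long_shot))  # 20.0, higher EV despite usually failing
```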

I've come, through the joking-to-serious pipeline, to telling people that EAs are just people who are really excited about multiplication, and who think multiplication is epistemically and morally sound.

I think this is right, and its prevalence is maybe the single most important difference between EA and the rest of the world.

Cause-Neutrality

I've been worried that the basic mental motions of evenhandedly considering switching between different causes within a single session of thought or conversation will be marginalized as people settle more into established hierarchies around certain causes.

(I will probably fill out my answer more sometime in the future; others are welcome to comment and add to it.)

Distribution of cost-effectiveness feels like one of the most important concepts from the EA community. The idea that, for a given goal, some ways of achieving it will be massively more cost-effective than others underlies a lot of cause comparisons, and the value of doing such comparisons at all.
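As a rough sketch of what a heavy-tailed distribution of cost-effectiveness looks like (the lognormal shape and its parameters are assumptions chosen for illustration, not a claim about any real dataset), the best options can be many times better than the typical one:

```python
# Illustrative sketch with an assumed lognormal distribution and made-up
# parameters: under a heavy tail, the top options dwarf the median option.
import random
import statistics

random.seed(0)
samples = [random.lognormvariate(0, 1.5) for _ in range(10_000)]  # arbitrary sigma

median = statistics.median(samples)
top_1_percent = sorted(samples)[int(0.99 * len(samples))]

print(f"median cost-effectiveness: {median:.2f}")
print(f"99th percentile:           {top_1_percent:.2f}")
print(f"ratio:                     {top_1_percent / median:.1f}x")
```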

The margin/marginal value.  

Anyone trying to think about how to do the most good will be very quickly and deeply confused if they aren't thinking at the margin. E.g. "if everyone buys bednets, what happens to the economy?" 
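A minimal sketch of thinking at the margin, assuming a made-up diminishing-returns curve: the value of the next dollar depends on how much funding already exists, and can sit far below the average value of all dollars spent so far.

```python
# Hypothetical sketch of marginal vs. average value under diminishing returns.
# The impact curve and all numbers are invented for illustration.
import math

def total_impact(funding):
    """Made-up diminishing-returns curve: impact grows with the log of funding."""
    return 1000 * math.log1p(funding)

already_funded = 1_000_000
my_donation = 1_000

average_value = total_impact(already_funded) / already_funded
marginal_value = total_impact(already_funded + my_donation) - total_impact(already_funded)

print(f"average impact per dollar so far: {average_value:.4f}")
print(f"impact of the next ${my_donation}:       {marginal_value:.4f}")
```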

Not following the rules here ... but HERE is the 'official' list

Virtue signaling -- as a game theoretic concept, a moral-psychological instinct, and a ritualized cultural manifestation of our desire to do good (and to appear good). 

Effective Altruism is basically the systematic, rational, evidence-based attempt to overcome our human instincts for virtue-signaling, and to harness our desire to do good in more effective directions. So it's important to know what we're fighting against.

AI timelines

I think if every EA knew the current best set of AI forecasts and P(AI doom), that would be really useful.

Can you recommend a place where I could find this information, or would that spoil your test? I have looked in various places, but I still have no idea what the current best set of AI forecasts or P(AI doom) would be.

Economizing on virtue. Good explanation here: https://link.springer.com/article/10.1007/BF01298375

2 Comments

I'd like to push back against [what I'm guessing is] the intended purpose of this question:

Do you want to make a list of things that all EAs should read?

If so, note that [I believe] there is currently a failure mode in the community where people "get stuck" reading more and more and more before they do something "concrete" (which is not "reading more").

I'm not saying nobody should read lots of things. Some should.

I AM saying that this is probably feeding into an already existing problem, and that adding another "list of things to know [with a subtext that it's 'so basic that every EA should know it']" could have negative value for the community.

I agree. I want concepts people should understand.

That's not about telling people what to read; it's about them having certain models in their heads.
