JDLC

26 karma · Joined

Comments (5)

Considered writing a similar post about the impact of anti-realism in EA, but I’m going to write here instead. In short, I think accepting anti-realism is a bit worse/weirder for ‘EA as it currently exists’ than you think:

Impartiality 

It broadly seems like the best version of morality available under anti-realism is contractualism. If so, this probably significantly weakens the core EA value of impartiality, in favour of only those with whom you have a ‘contract’. It might rule out spatially distant people; it might rule out temporally distant people (unless you have an ‘asymmetrical contract’ whereby we are obligated to future generations because past generations were obligated to us); and it probably rules out impartiality towards animals or non-agents/morally incapable beings.

‘Evangelism’

EA generally seems to think that we should put resources into convincing others of our views (bad phrasing, but the gist is there). This seems much less compelling under anti-realism, because your views are literally no more correct than anyone else’s. You could counter that ‘we’ have thought more and therefore can help people who are less clear. You could counter that other people have inconsistent views (“Suffering is really bad but factory farms are fine”), but there’s nothing compellingly bad about inconsistency on an anti-realist view either.

Demandingness

Broadly, turning morality into conditionals means a lot of the ‘driving force’ behind doing good is lost. It’s very easy to say “if I want to do good I should do X”, but then say “wow, X is hard, maybe I don’t really want to do good after all”. I imagine this affects a bunch of things that EA would like people to do, and makes it much harder in practice to cause change if you outright accept that it’s all conditional.

Note: I’m using Draft Amnesty rules for this comment, I reckon on a few hours of reflection I might disagree with some/all of these.

This is the most downvoted post I’ve seen on the forum so far. Why?

One key concern: the ideas all seem good, but it’s unclear to me if any/all are Attention Hazards / Opportunity Costs. Even if they are good, is the resource investment counterfactually harmful?

Not sure to what extent you considered this, or what breadth of expert views/consensus this doc got in order to account for this.

(Sorry for negativity on what is a cool idea :-) )

Thanks for writing this post; I’m currently reading it as part of the OSP syllabus. My thoughts below:

Epistemic Status: Pure armchair philosophy, informed by 2 years in a uni group as a participant. I’ll be involved with running the group this year; interested to see if/how that updates any of the below.

Backchaining: This seems excellent.

Goals: On an individual level, SMART goals are amazing. I'm concerned that, on the group level, SMART goals are over-specific and counterproductive. More specifically (pun intended), the SMART framework will (almost) inevitably lead to Goodharting due to the specificity/measurability requirements.

Outsourcing: Excellent. A possible downside: signposting too many people towards specific groups/opportunities and overwhelming them.

Personal Development: I love the sentiment of "you should treat yourself like one of your members that you are responsible for helping", but I disagree that practically acting towards yourself in the same manner as towards another group member is the best idea. Partly because you can't be sufficiently objective, partly because an external/second perspective adds a lot of value. It seems to me that asking a co-leader / experienced exec to take (some) responsibility for your (the leader's) personal development is a much better way to do this.

Safeguarding Values: Love this idea. It should be a forum post if it isn't already, and I want the link if it already is!

Opportunity vs Obligation: I strongly prefer (and feel more motivated by) an opportunity framing. BUT I don't know if this is a general reaction or a personal one. Perhaps both are required: some people are much more likely to put the obligation onto themselves, whilst others need more external 'pressure' on this. Unsure if there is any research on this (quantitative or qualitative).

Socials and Development: Great. One line that struck me is "We’ll then often have a social straight after". I suspect that separating the social from the development session, but keeping them very close together (spatially and temporally), is significantly better than having them on different nights (say). Mainly because it helps balance the twin considerations of a social dynamic and an action focus. I don't know if this is true.

Resources: All look super useful.

What’s the best (i.e. the one that has influenced you the most) criticism or development of your ‘key ideas’?

Specific papers/references/links would be ideal!

(By ‘key ideas’ I’m thinking of things like speciesism, your concept of persons, or the drowning child argument, but answer based on whatever you would yourself put in this category.)