Operations Associate @TFG, Team Lead for EAGxBerlin 2023, AIS Europe Retreat (AISER) 2023 co-organizer, EA Unconference organizer in 2019, 2020, 2021, and 2022. Interested in operations, AI policy, biosecurity, charity incubation, and mental health.
I have co-organized the Unconference with Manuel Allgaier for the last three years and share the impression that there is a huge need for support structures for struggling EAs. My current idea is to run an event in summer 2022 focused on burnout prevention and mental health for EAs.
If you are interested in giving input/sharing advice/organizing the event with me, don't hesitate to reach out!
I have a hypothesis for why people are motivated to rationalise: it is very uncommon in EA to justify one's choices by appealing to one's intrinsic values. It is assumed that EAs are "rational enough" to update their beliefs or change their career whenever they encounter arguments stronger than their previous ones. When pressed to explain themselves, people often fall into the pattern described above.
As an alternative, it could be beneficial to establish the norm that it is valid to care more about some causes than others. Most people hold intrinsic values that are not always morally justifiable.
This norm would encourage more intellectual honesty and stop people from disguising their true motives for working on specific causes.
I can also see a downside if this is taken to an extreme, for example an inability to update one's beliefs: "Making operas available to everyone is an intrinsic value of mine, so any critique of my chosen cause is futile."