I was chatting recently to someone who had difficulty knowing how to orient to x-risk work (despite being a respected professional working in that field). They expressed that they didn't find it motivating at a gut level in the same way they did with poverty or animal stuff; and relatedly that they felt some disconnect between the arguments that they intellectually believed, and what they felt to be important.
I think that existential security (or something like it) should be one of the top priorities of our time. But I usually feel good about people in this situation paying attention to their gut scepticism or nagging doubts about x-risk (or about parts of particular narratives about x-risk, or whatever). I’d encourage them to spend time trying to name their fears, and see what they think then. And I’d encourage them to talk about these things with other people, or write about the complexities of their thinking.
Partly this is because I don't expect people who use intellectual arguments to override their gut to do a good job of consistently tracking, on a micro-scale, what the most important things to do are. So it would be good to get the different parts of them more in sync.
And partly because it seems like it would be a public good to explore and write about these things. Either their gut is onto something with parts of its scepticism, in which case it would be great to have that articulated; or their gut is wrong, but if other people have similar gut reactions then playing out that internal dialogue in public could be pretty helpful.
It's a bit funny to make this point about x-risk in particular, because of course the above applies to any topic. But I think people normally grasp it intuitively, and somehow that's less universal around x-risk. I guess this may be because people don't have any first-hand experience with x-risk, so their introductions to it come entirely via explicit arguments. It's true that this is a domain where we should be unusually unwilling to trust our gut takes without hearing the arguments; but it seems to me that people are unusually likely to forget that they can know things which bear on the questions without those things already being explicit (and perhaps the social environment, in encouraging people to take explicit arguments seriously, can accidentally overstep and end up discouraging people from taking anything else seriously). These dynamics seem especially strong in the case of AI risk, which I regard as the most serious source of x-risk, but also the one where I most wish people spent more time exploring their nagging doubts.
In this spirit, here are some x-risk sceptical thoughts:
These thoughts make me hesitant about confidently acting as if x-risk is overwhelmingly important, even compared to other potential ways to improve the long-run future, or other framings of the importance of helping navigate the transition to very powerful AI.
But I still think existential risk matters greatly as an action-guiding idea. I like this snippet from the FAQ page for The Precipice:
[Edited a bit for clarity after posting]
Tooting my own trumpet: in this sequence I did a lot of work on improving the question x-riskers are asking.
I certainly think these are all good to express (and I could reply to them, though I won't right now). But also, they're all still pretty crisp/explicit. Which is good! But I wouldn't want people to think that sceptical thoughts have to get to this level of crispness before they can deserve attention.
Agree.
By the way, I'm curious which of these points give you personally the greatest hesitance in endorsing a focus on x-risk, or something.
I endorse many (more) people focusing on x-risk and it is a motivation and focus of mine; I don't endorse “we should act confidently as if x-risk is the overwhelmingly most important thing”.
Honestly, I think the explicitness of my points misrepresents what it really feels like to form a view on this, which is to engage with lots of arguments and see what my gut says at the end. My gut is moved by the idea of existential risk reduction as a central priority, and it feels uncomfortable being fanatical about it and suggesting others do the same. But it struggles to credit particular reasons for that.
To actually answer the question: (6), (5), and (8) stand out, and feel connected.
Great points, Fin!
On nuclear winter, besides my crosspost of Bean's analysis linked above, I looked more in-depth into the famine deaths and extinction risk (arriving at an annual extinction risk of 5.93*10^-12). I also got an astronomically low annual extinction risk from asteroids and comets (2.20*10^-14) and from volcanoes (3.38*10^-14).
I think this study also implies an astronomically low extinction risk from climate change.
I believe y is not supposed to be in the exponent.
Relatedly:
Thanks Vasco!
I resonate a lot with this post. Thank you for writing it and giving an opportunity to people like me to express their thoughts on the topic. I'm writing with an anonymous account because publicly stating things like, 'I'm not sure it would be bad for humanity to be destroyed' seems dangerous for my professional reputation. I don't like not being transparent, but the risks here seem too great.
I currently work in an organization dedicated to reducing animal suffering. I've recently wondered a lot whether I should go work on reducing x-risks from AI: it seems there's work where I could potentially be counterfactually useful in AI safety. But after having had about a dozen discussions with people from the AI safety field, I still don't have the gut feeling that reducing x-risks from AI deserves my energy more than reducing animal suffering in the short term.
I am not at all an expert on issues around AI, so take what follows as 'the viewpoint of someone outside the world of AI safety / x-risks trying to form an opinion on these issues, with the constraint of having a limited amount of time to do so'.
The reasons are:
Ultimately, the two questions I would like to answer are:
Interesting points!
AI not being aligned at all is not exactly a live option, is it? Pre-training relies on lots of human data, so that alone leads to some alignment with humanity. Then I would say that current frontier models, post-alignment, already have better values than a random human, so I assume alignment techniques will be enough to at least end up with better-than-typical human values, even if not great values.
I suppose that, in the vast majority of cases, trying to make a technology safer does in fact make it safer. So I believe there should be a strong prior that working on AI safety is good. However, I still think corporate campaigns for chicken welfare are more cost-effective.
I sympathize with working on a topic you feel in your stomach. I worked on climate and switched to AI because I couldn't get rid of a terrible feeling about humanity going to pieces without anyone really trying to solve the problem (that was ~4 years ago, but I'd say this is still mostly true). If your stomach feeling is about climate instead, or animal welfare, or global poverty, I think there is a case to be made that you should be working in those fields, both because your effectiveness will be higher there and because it's better for your own mental health, which is always important. I wouldn't say this cannot be AI x-risk: I have this feeling about AI x-risk, and I think many others, e.g. PauseAI activists, do too.
In the dis-spirit of this article, I'm going to take the opposite tack and explore the nagging doubts that I have about this line of argument.
To be honest, I'm starting to get more and more sceptical/annoyed about this behaviour (for want of a better word) in the effective altruism community. I'm certainly not the first to voice these concerns, with both Matthew Yglesias and Scott Alexander noting how weird it is (if someone tells you that your level of seeking criticism gives off weird BDSM vibes, you've probably gone too far).
Am I all in favour of going down intellectual rabbit holes to see where they take you? No, and I don't think it should be encouraged wholesale in this community. Maybe I just don't have the intellectual bandwidth to understand the arguments, but a lot of the time it just seems to lead to intellectual wank, the most blatant example I've come across being infinite ethics. If infinities mean that anything is both good and bad in expectation, that should set off alarm bells that that way madness lies.
The crux of this argument also reminds me of rage therapy. Maybe you shouldn't explore those nagging doubts and express them out loud, just like maybe you shouldn't scream and hit things based on the mistaken belief that it'll help get your anger out. Maybe you should just remind yourself that it's totally normal for people to have doubts about x-risk compared to other cause areas, because of a whole bunch of reasons that totally make sense.
Thankfully, most people in the effective altruism community do this. They just get on with their lives and jobs, and I think that's a good thing. There will always be some individuals that will go down these intellectual rabbit holes and they won't need to be encouraged to do so. Let them go for gold. But at least in my personal view, the wider community doesn't need to be encouraged to do this.
On a similarly simple intellectual level, I see "people should not suppress doubts about the critical shift in direction that EA has taken over the past 10 years" as a no-brainer. I do not see it as intellectual wank in an environment where every other person assumes p(doom) approaches 1 and timelines get shorter by a year every time you blink. EA may feature criticism circle-jerking overall, but I think this kind of criticism is actually important and not super well received (I perceive a frosty response whenever Matthew Barnett criticizes AI doomerism).
Noting another recent post doing this: https://forum.effectivealtruism.org/posts/RbCnvWyoiDFQccccj/on-the-dwarkesh-chollet-podcast-and-the-cruxes-of-scaling-to
Thanks for the post. Just today I was thinking through some aspects of expected value theory and fanaticism (i.e., being fanatical about applying expected value theory) that I think might apply to your post. I had read through some of Hayden Wilkinson’s 2021 Global Priorities Institute report, “In defense of fanaticism,” in which he brought up a hypothetical case of donating $2000 (or whatever it takes to statistically save one life) to the Against Malaria Foundation (AMF), or instead giving the money to fund a very speculative research project with a very tiny, non-zero chance of producing an amazingly valuable future. I changed the situation for myself to consider why you would give $2000 to AMF instead of donating it to try to reduce existential risk by some tiny amount, when the latter could have significantly higher expected value. I’ve come up with two possible reasons so far not to give your entire $2000 to reducing existential risk, even if you initially intellectually estimate it to have much higher expected value:
I don’t know if this is exactly what you were looking for, but these seem to me to be some things to think about to perhaps move your intellectual reasoning closer to your gut, meaning you could be intellectually justified in putting some of your effort into following your gut (how much exactly is open to argument, of course).
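To make the expected-value comparison concrete, here is a toy version of the calculation (the probability of success and the size of the future below are purely illustrative numbers I am assuming for the example, not figures from Wilkinson's report):

\[
\mathbb{E}[V_{\text{AMF}}] \approx 1 \times 1~\text{life} = 1~\text{life},
\qquad
\mathbb{E}[V_{\text{speculative}}] \approx 10^{-15} \times 10^{30}~\text{lives} = 10^{15}~\text{lives}.
\]

On these made-up numbers the speculative donation comes out ahead by fifteen orders of magnitude, which is exactly the kind of conclusion that fanatical expected value reasoning forces and that the gut tends to resist.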
As for how to make working on existential risk more “gut-wrenching,” I tend to think of things in terms of responsibility. If I think I have some ability to help save humanity from extinction or near-extinction, and I don’t act on that, and then the world does end, imagining that situation makes me feel like I really dropped the ball on my part of the responsibility for the world ending. If I don’t help people avoid dying from malaria, I do still feel a responsibility that I haven’t fully taken up, but that doesn’t hit me as hard as the chance of the world ending, especially if I think I have special skills that might help prevent it. By the way, if I felt that I could make the most difference personally, with my particular skill set and passions, in helping reduce malaria deaths, and other people were much more qualified in the area of existential risk, I’d probably feel more responsibility to apply my talents where I thought they could have the most impact, in that case malaria death reduction.