If the world is destroyed, it won’t be our fault and there may have been nothing we could do about it. It will take a lot of people contributing their small part to avert AI danger, and they might not all do it, and we could all end up dead.

I work against this outcome because I don’t want it to happen, and I believe I can play some small part in reducing the danger. It’s delusional and not helpful to work on this cause because you think there has to be some way you can single-handedly fix it (sometimes referred to as “heroic responsibility”).

There’s an impulse to keep the urgency of x-risk (or of animal suffering, or the global disease burden, etc.) in mind at all times, and for me this often comes with the delusional hope that, by refusing to live my normal life and accept the threat of x-risk hanging over it, I am somehow psychically fighting the threat. I am not.

I want to have as much impact as I can, but impact imagined through delusional emotional reasoning doesn’t count. I’m not having as much impact as I can if I’m locked in a futile psychic struggle. But if you leave this struggle, allies around you keep trying to pull you back in, because they mistake that urgency for seriousness. In fact, that urgency makes them flit between short-term, low-impact agendas instead of sticking to a sustained plan of impact that would actually be the most helpful, the highest-EV plan even given the risk of AI incidents coming first.

For example, people who are attracted to PauseAI US often become frustrated with protests because they take a lot of cooperation and are not immediately large. They prefer the fantasy of a tweet or video going viral and changing a lot of minds, something they can do by themselves in days or weeks instead of years, but whose actual EV is quite low. They tell me that I am not taking the problem seriously enough because I am committed to developing a longer-term strategy (“longer term” meaning something like 3-6 years, and delivering benefits throughout). Reading between the lines, they mean that I am not taking the problem seriously enough because I am not panicking about short timelines and thinking it’s all over after 3 years.

Sometimes I think people who want that sense of urgency are putting off processing the reality, and they are afraid they won’t keep fighting if they do. I suspect many people I know repeat the “3 year” timeline as a mantra: “I only have to do this for 3 years, and then we either succeed or I die.” This is awful. They can end up seeing an upside to dying because they are so burnt out.

We can do better. I want to create places (like PauseAI US) where we can support each other for the long war. That means not living in a constant state of acute panic, but living good lives while we contribute our part to a bigger war effort. Humanity has done this many times before (and does it now), and we must summon that resilience and courage to embrace what is good about life right now, even under trying circumstances, and to contribute sustained, sustainable effort, rather than refusing to relax until after we have solved AI danger.

It’s not your responsibility to feel bad and resist the reality of x-risk. You can feel okay knowing you’re up against great odds, because you’re embracing what’s good about life and tending to your needs in the meantime. This battle is not fought in your head, and it will not be quick or elegant. We need to settle in for the long war.

Comments

So no one should work on the worlds with the shortest timelines? Should they be given up?

Are they actually “working” on those worlds or are they just panicking and talking themselves out of doing any real work because it would take too long?

If it were the case that there was nothing productive we could do that we expect to “work” within 3 years (and I’m not saying it is), then people should certainly stop wasting time on that and do things that take longer.
