
You can be at peace even when thinking the world is doomed. And while at peace, you can still work against that doom, even while being aware that nothing you do will make a difference. I believe there are states of mind like this that humans can inhabit.

Here I am not going to argue for imminent doom, or that nothing you do matters. Rather, I want to point out that even when you believe in the dire circumstance of imminent, unpreventable doom, it is possible to be at peace, even while working hard against the doom, and even while believing this work to be futile. This is a possible state of mind for a human being.

And if it is possible to be at peace and work hard even in this dire circumstance, it should be possible in any less dire circumstance too.


There are many games about how long you can survive, e.g. Dawn of War 2: The Last Stand, Serious Sam's survival mode, and Project Zomboid. The very nature of these games is that you will soon die, and there is no saving. The difficulty keeps increasing until at some point you get crushed.

But loads of people play these games. The impossibility of achieving victory doesn't seem to detract from the fun you can have. Would this really change if these games couldn't be restarted?

There is also the game You Only Live Once, which you can only play once.

Do people not play these games? Do people not try hard when playing them? Of course they do. To be fair, there is a big difference between AI doom and these games: in these games, you can make visible progress. The natural way to define success is to ask: How long did you survive? Did you survive longer than last time?

This is where death with dignity and Duncan's advice come from, as far as I can tell. It's about redefining success as making as much progress as possible toward a good outcome, instead of directly aiming for a good outcome. Aiming to survive forever in Dawn of War 2: The Last Stand would probably be frustrating; you would be setting out for a goal that you know is unachievable.

I think these strategies are valuable, though to me it seems they also miss something very basic.

Maybe this is a fluke and I will feel differently soon, but today I felt like my expectation of doom did not influence me negatively. No negative qualia arose, generated by some heuristic in my brain, that "wants" to steer me away from executing a futile plan.

I didn't achieve this by pushing the doominess out of my mind, or by redefining success as getting as far as possible (getting as much dignity as possible). Instead, I was in a state of peace while contemplating the doom, with the relevant considerations plainly laid out in my mind. I think to achieve this you need to stop wanting the doominess to go away. And you need to stop grasping for straws of hope.

This might sound bleak, but the resulting first-person experience that you get is the opposite. There is no more aversion and craving arising. And giving up these negative emotions doesn't need to imply that you stop working on preventing the doom. Being in a state of frantic, continuous panic isn't actually that great for productivity anyway.

When I talk about giving up hope and giving up the craving for a better world, I'm talking about silencing the emotional components of your mind. I am not saying anything about changing your consequentialist, conscious reasoning. Mine is still targeted at making the biggest cumulative contribution I can toward preventing the doom. There is no contradiction here. In my model, the consequentialist reasoning component of your mind is separate from all the heuristic algorithms that compute feelings; those feelings then arise in your consciousness, carry a positive or negative valence, and steer you in particular ways.

Well, I don't think I have done a good job (or any job whatsoever) of conveying how I managed to do this. I think the fact that I can is related to meditation. For example, in the Waking Up app, Sam Harris sometimes gives the explicit instruction to "give up the struggle", and I think I just intuitively managed to apply this learned mental motion here. So my best (and very lazy) recommendation right now is to learn it from there as well.

Though it probably is worth trying directly first. I expect at least some people might be able to do this given only the following instruction: "Just give up the struggle."

Dirt Wins

All of this applies to the situation where you think that nothing you do actually matters. I want to tell a little story about how I was wrong about the futility of my own actions in the past.

Once upon a time, I played a round of Zero-K. I think it was my first ever match against another player. In the beginning, it seemed like we were evenly matched; maybe I even had a slight advantage. But after some time, it became very one-sided. All my troops got decimated and I was pushed back into my base. I thought that I would surely lose, but I was not giving up in the face of that. I wanted to fight it out until the end. I definitely felt a pull toward just calling it GG. But I didn't budge. I still tried to do my best. I had no more resources; all I could build was bags of dirt. But still, I didn't give up. I didn't continue because I thought there was a good chance I could make a comeback. It was simply raw, unfelt, maybe illogical determination not to give up.

After some time defending my base using mainly bags of dirt, I managed to push the enemy back slightly. However, it didn't take long before they reorganized their army and came back, and again I thought I would surely lose. But still, I didn't give up.

And then something unforeseen happened. My enemy got lazy, or careless. Or perhaps they simply got bored by my persistence, by the fact that I was stretching out the game like an old chewing gum? In any case, I soon managed to accumulate a critical mass of dirt bags. I started throwing them at the enemy, slowly but surely pushing them back. That push never ground to a halt for long. Soon I was in the enemy's base, and it was only a matter of time until the dirt prevailed.


Comments

Johannes - thanks for sharing a useful perspective. I think in many cases, you're right that a kind of cool, resigned, mindful, courage in the face of likely doom can be mentally healthy for individuals working on X risk issues. Like the chill of a samurai warrior who tries to face every battle as if his body was already dead -- the principle of hagakure. If our goal is to maximize the amount of X risk reduction research we can do as individuals, it can make sense to find some equanimity while living under the shadow of personal death and species-level extinction.

However, in many contexts, I think that a righteous fury at people who witlessly impose X risks on the rest of us can also be psychologically healthy. As a parent, I'm motivated to protect my kids, by almost any means necessary, against X risks. As a citizen, I feel moral outrage against politicians who ignore X risks. As a researcher, righteous fury against X risks makes me feel more motivated to band together with other, equally infuriated, like-minded researchers, rather than suffering doomy hopelessness alone.

Also, insofar as moral stigma against dangerous technologies (e.g. AI, bioweapons, nukes) might be a powerful way to fight those X risks, righteous anti-doom fury and moral outrage might be more effective than chill resignation. Moral outrage tends to spark moral stigma, which might be exactly what we need to slow the development of dangerous technologies. 

Of course, moral outrage tends to erode epistemic integrity, motivates confirmation bias, reinforces tribalism, can provoke violence (e.g. Butlerian jihads), etc. So there are downsides, but in some contexts, the ethical leadership power and social coordination benefits of moral outrage might outweigh those.

Hagakure is, I think, a useful concept and technique to know. Thank you for telling me about it. I think it is different from what I was describing in this article, but it seems like a technique you could layer on top. I haven't practiced it much yet, though I guess there is a good chance that it will work.

I can definitely see that being outraged can be useful on both the individual and the societal level. However, I think the major challenge is to steer the outrage correctly. As you say, epistemics can easily suffer. I encourage everybody who draws motivation from outrage to still think carefully through the reasoning for why they are outraged. These should be reasons such that, if you told them to a neutral, curious observer, the reasons alone would be enough to convince them (without the communication being optimized to persuade).

Johannes - I agree that it can be important to try to maintain epistemic integrity even if one feels deep moral outrage about something.

However, there are many circumstances in which people won't take empirically & logically valid arguments about important topics seriously if they're not expressed with an authentic degree of outrage. This is less often the case within EA culture. But it's frequently the case in public discourse. 

It seems that Eliezer Yudkowsky, for example, has often (for over 20 years) tried to express his concerns about AI X-risk fairly dispassionately. But he's often encountered people saying 'If you really took your own arguments seriously, you'd express a lot more moral outrage, and willingness to use traditional human channels for expressing and implementing outrage, such as calls for moral stigmatization of AI, outlawing AI, ostracizing practitioners of AI, etc.' (But then, of course, when he does actually argue that nation-states should be willing to enforce a hypothetical global moratorium on AI using the standard military intervention methods (e.g. drone strikes) that are routinely used to enforce international agreements in every other domain, people act all outraged, as if he's preaching Butlerian Jihad. Sometimes you just can't win....)

Anyway, if normal folks see a disconnect between (1) valid arguments that a certain thing X is really really bad and we should reduce it, and (2) a conspicuous lack of passionate moral outrage about X on the part of the arguer, then they will often infer that the arguer doesn't really believe their own argument, i.e. they're treating it as a purely speculative thought experiment, or they're arguing in bad faith, or they're trolling us, etc.

This is a very difficult issue to resolve, but I expect it to be increasingly important as EAs discuss practical ways to slow down AI capability development relative to AI alignment efforts.

I'm not sure if what you say is correct. Maybe. I think there is one difficulty that needs to be taken into account, which is that I think it is hard to elicit the appropriate reaction. When I see people arguing angrily, I am normally biased against what they are saying, so I need to make an effort to take them more seriously than I otherwise would. It is therefore unclear to me what percentage of people moral outrage would even affect in the way that we want it to affect them.

There's also another issue. When you are emotionally outraged, it may induce moral outrage in other people. Would it be a good thing to create lots of people who don't really understand the underlying arguments but are really outraged and vocal about the position that AGI is an existential risk? I expect most of these people will not be very good at arguing correctly for AGI being an existential risk. They will make the position look bad and will make other people less likely to take it seriously in the future. Or at least this is one of many hypothetical risks I see.

Johannes - these are valid concerns, I think. 

One issue is: what's the optimal degree of moral anger/outrage to express about a given issue that one's morally passionate about? It probably depends a lot on the audience. Among Rationalist circles, any degree of anger may be seen as epistemically disqualifying, socially embarrassing, ethically dubious, etc. But among normal folks, if one's arguing for an ethical position that they expect would be associated with a moderate amount of moral outrage (if one really believed what one was saying), then expressing that moderate level of outrage might be most persuasive. For example, a lot of political activism includes a level of expressed moral outrage that would look really silly and irrational to Rationalists, but that looks highly appropriate, persuasive, and legitimate to many onlookers. (For example, in protest marches, people aren't typically acting as cool-headed as they would be at a Bay Area Rationalist meet-up -- and it would look very strange if they were.)

Your second issue is even trickier: is it OK to induce strong moral outrage about an issue in people who don't really understand the issue very deeply at a rational, evidence-based level? Well, that's arguably about 98% of politics and activism and persuasion and public culture. If EA as a movement is going to position itself in an ethical leadership role on certain issues (such as AI risk), then we have to be willing to be leaders. This includes making decisions based on reasons and evidence and values and long-term thinking that most followers can't understand, don't understand, and may never understand.

I don't expect that the majority of humanity will ever be able to understand AI well enough (including deep learning, orthogonality, inner alignment, etc.) to make well-informed decisions about AI X risk. Yet the majority of humanity will be affected by AI, and by any X risks it imposes. So, either EA people make our own best judgments about AI risk based on our assessments, and then try to persuade people of our conclusions (even if they don't understand our reasoning), or... what? We try to do cognitive enhancement of humanity until they can understand the issues as well as we do? We hope everybody gets a master's degree in machine learning? I don't think we have the time.

I think we need to get comfortable with being ethical leaders on some of these issues -- and that includes using methods of influence, persuasion, and outreach that might look very different from the kinds of persuasion that we use with each other.

I rather liked this post (and I'll put this on both the EAF and LW versions):

https://www.lesswrong.com/posts/PQtEqmyqHWDa2vf5H/a-quick-guide-to-confronting-doom

In particular, the comment by Jakob Kraus reminded me that many people have faced imminent doom (not of the human species, but certainly quite terrible experiences).
