Joe Carlsmith's latest post discusses the difference between the probabilities that one puts on events at a gut level and at a cognitive level, and advocates updating your gut beliefs towards your cognitive beliefs insofar as the latter better track the truth.


The post briefly notes that there can be some negative mental health consequences of this. I would like to provide a personal anecdote of some of the costs (and benefits) of changing your gut beliefs to be in line with your cognitive ones.


Around 6 months ago my gut realised that one day I was going to die, in all likelihood well before I would wish to. During this period, my gut also adopted the same cognitive beliefs I have about TAI and AI x-risk. All things considered, I expect this to have decreased both my impact from an impartial welfarist perspective and my personal life satisfaction by a substantial amount. 


Some of the costs for me of this have been:

  • A substantial decrease in my altruistic motivation in favour of self-preservation 
  • A dramatic drop in my motivation to work 
  • Substantially worse ability to carry out cause prioritisation 
  • Depression 
  • Generically being a less clear thinker 
  • Deferring my exams 
  • I expect to receive a somewhat lower mark in my degree than I otherwise would have
  • Failing to run my university EA group well  

There have also been some benefits to this: 

  • I examined my beliefs about AI and AI x-risk much more closely 
  • I engaged quite deeply with some philosophy questions 

Note that this is just the experience of one individual and there are some good reasons to think that the net negative effects I've experienced won’t generalise: 

  • I’ve always been very good at acting on beliefs that I held at a cognitive level but not at a gut level. The upside to me of believing things at a gut level was therefore always going to be small. 
  • I have a history of ruminative OCD (also known as pure O) - I almost without caveat recommend that others with ruminative OCD do not try to engage on a gut level with potentially unpleasant beliefs they hold at a cognitive level. 
  • I’ve been experiencing some other difficulties in my life that probably made me more vulnerable to depression.

In some EA and Rationalist circles, there’s a norm of being quite in touch with one’s emotions. I’m sure this is very good for some people, but I expect it is quite harmful to others, including myself. For such individuals, there is an advantage to a certain level of detachment from one’s emotions. I say this because choosing not to engage with one’s emotions seems to be somewhat lower status in these circles, and I think that this is probably harmful. 

As a final point, note that you are probably bad at affective forecasting. I’ve spent quite a lot of time reading about how people have felt when close to death, and there is a wide variety of experiences. Some people find that they are afraid of their own deaths when close to them, and others find that they have no fear. I’m particularly struck by De Gaulle’s recollection that he felt no fear of death during the First World War, even after being shot while leading his men as a junior officer early in the war. 


 

Comments

I'm sorry to hear about this, Nathan. As I say in the post, I do think that the question of how to do gut-stuff right from a practical perspective is distinct from the epistemic angle that the post focuses on, and I think it's important to attend to both.

[anonymous]

I agree that ideally one would do gut stuff right both practically and epistemically. In my case, the tradeoff of lost productivity and general reasoning ability in exchange for some epistemic gains wasn't worth it.

I think it's plausible that people in a similar situation to me - people who are good at making decisions based on analytic reasoning alone and who have reason to think they might be vulnerable if they tried to believe things on a gut level as well as an analytic one - should consider not engaging with certain EA topics on a gut level. (I don't restrict this to AI safety - I know people who've had similar reactions thinking about nuclear risk, and I've personally decided not to think about s-risk or animal welfare on a gut level either.)

I do want to emphasise that there was a tradeoff here - I think I have somewhat better AI safety takes as a result of thinking about AI safety on a gut level. The benefit, though, was reasonably small and not worth the other costs from an impartial welfarist perspective. 

I think it depends on what role you're trying to play in your epistemic community.

If you're trying to be a maverick,[1] you're betting on a small chance of producing large advances, and then you want to be capable of building and iterating on your own independent models without having to wait on outside verification or social approval at every step. Psychologically, the most effective way I know to achieve this is to act as if you're overconfident.[2] If you're lucky, you could revolutionise the field, but most likely people will just treat you as a crackpot unless you already have very high social status.

On the other hand, if you're trying to specialise in giving advice, you'll have a different set of optima on several methodological trade-offs. On my model at least, the impact of a maverick depends mostly on the speed at which they're able to produce and look through novel ideas, whereas advice-givers depend much more on their ability to assign accurate probability estimates to ideas that already exist. They have less freedom to tweak their psychology to feel more motivated, given that it's likely to affect their estimates.

  1. "We consider three different search strategies scientists can adopt for exploring the landscape. In the first, scientists work alone and do not let the discoveries of the community as a whole influence their actions. This is compared with two social research strategies, which we call the follower and maverick strategies. Followers are biased towards what others have already discovered, and we find that pure populations of these scientists do less well than scientists acting independently. However, pure populations of mavericks, who try to avoid research approaches that have already been taken, vastly outperform both of the other strategies."[3]

  2. I'm skipping important caveats here, but one aspect is that, as a maverick, I mainly try to increase how much I "alieve" in my own abilities while preserving what I can about the fidelity of my "beliefs".

  3. I'll note that simplistic computer simulations of epistemic communities that have been specifically designed to demonstrate an idea are very weak evidence for that idea, and you're probably better off thinking about it theoretically.

I'm sorry you've had this experience :( 

I know of some other people who've suffered serious mental health damage from internalizing pessimistic beliefs about AI risk, as well.

I'm not sure what to do about this, because it seems bad to recommend 'try to not form pessimistic opinions about AI for the sake of your mental health, or remain very detached from them if you do form them', but being fully in touch with the doom also seems really bad.  

[anonymous]

To be clear, I'm not at all recommending changing one's beliefs here. My language of gut beliefs vs cognitive beliefs was probably too imprecise. I'm recommending that, for some people, particularly those who are able to act on beliefs they don't intuitively feel, it's better not to try to intuitively feel those beliefs. 

For some people, this may come at a cost to their ability to form true beliefs, and this is a difficult tradeoff. For me, I think that, all things considered, intuiting beliefs has made me worse at forming true beliefs. 
