AppliedDivinityStudies

I am around!
https://twitter.com/alexeyguzey/status/1668834171945635840

Does EA Forum have a policy on sharing links to your own paywalled writing? E.g. I've shared link posts to my blog, and others have shared link posts to their substacks, but I haven't seen anyone share a link post to their own paid substack before.

I think the main arguments against suicide are that it causes your loved ones a lot of harm, and that (for some people) there is a lot of uncertainty about the future. Bracketing really horrible torture scenarios, your life is an option with limited downside risk. So if you suspect your life (really, the remaining years of your life) is net-negative, rather than commit suicide you should increase variance, since you can only stand to benefit.
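To make the option-value point concrete, here's a toy simulation (all numbers are assumptions, chosen purely for illustration): if the downside is capped at some floor while the upside is unbounded, a mean-preserving increase in variance can only raise the expected payoff.

```python
import numpy as np

# Toy model of the "limited downside" argument: if outcomes below some
# floor never have to be accepted, the realised payoff is
# max(outcome, floor), and spreading the distribution out (holding the
# mean fixed) can only raise its expectation. All numbers here are
# assumptions, purely for illustration.
rng = np.random.default_rng(0)
floor = -1.0  # the worst outcome you actually have to accept

for sigma in [0.5, 1.0, 2.0, 4.0]:  # increasing spread, same mean of 0
    outcomes = rng.normal(loc=0.0, scale=sigma, size=1_000_000)
    payoff = np.maximum(outcomes, floor).mean()
    print(f"sigma={sigma}: E[max(outcome, floor)] ~ {payoff:.3f}")
# The printed expectation rises with sigma: the upside grows while the
# downside stays capped at the floor.
```

The specific distributions don't matter much; the point is just that max(outcome, floor) is convex in the outcome, so more variance at the same mean helps.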

The idea that "the future might not be good" comes up on the forum every so often, but it doesn't really undermine the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don't fall to the control of a stable totalitarian state)
- Since the error bars are ginormous and we're pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than locking in whatever vague sense we have today

Yeah, it's difficult to intuit, but I think that's pretty clearly because we're bad at imagining the aggregate harm of billions (or trillions) of mosquito bites. One way to reason around this is to think:
- I would rather get punched once in the arm than once in the gut, but I would rather get punched once in the gut than 10x in the arm
- I'm fine with disaggregating, and saying that I would prefer a world where 1 person gets punched in the gut to a world where 10 people get punched in the arm
- I'm also fine with multiplying those numbers by 10 and saying that I would prefer 10 people punched in the gut to 100 people punched in the arm (see the toy calculation below)
- It's harder to intuit this for really, really big numbers, but I'm happy to attribute that to a failure of my imagination, rather than to some bizarre effect where total utilitarianism only holds for small populations
- I'm also fine intensifying the first harm by a little bit so long as the populations are offset (e.g. I would prefer 1 person punched in the face to 1000 people punched in the arm)
- Again, it's hard to continue to intuit this for really extreme harms and really large populations, but I am more willing to attribute that to cognitive failures and biases than to a bizarre ethical rule

Etc etc. 
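Here's the toy calculation referenced above, with made-up disutility numbers (the specific values are assumptions, chosen only to show the structure of the argument) and the total-utilitarian assumption that harms aggregate by summation:

```python
# Toy aggregation under total utilitarianism: harms get made-up disutility
# scores (assumptions, purely for illustration), and worlds are compared by
# summing disutility across everyone affected.
HARM = {"arm": 1.0, "gut": 5.0, "face": 20.0}  # assumed per-punch disutilities

def total_disutility(punches):
    """punches: list of (harm_type, number_of_people) pairs."""
    return sum(HARM[harm] * n for harm, n in punches)

# 1 person punched in the gut vs. 10 people punched in the arm
print(total_disutility([("gut", 1)]) < total_disutility([("arm", 10)]))    # True

# Scaling both populations by 10 preserves the ranking
print(total_disutility([("gut", 10)]) < total_disutility([("arm", 100)]))  # True

# A somewhat worse harm to 1 person vs. a mild harm to 1000 people
print(total_disutility([("face", 1)]) < total_disutility([("arm", 1000)])) # True
```

The rankings are stable under scaling both populations by the same factor, which is exactly the step that's hard to intuit directly for huge numbers.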

Thanks for the link! I knew I had heard this term somewhere a while back, and may have been thinking about it subconsciously when I wrote this post.

Re:
> For instance, many people wouldn't want to enter solipsistic experience machines (whether they're built around eternal contentment or a more adventurous ideal life) if that means giving up on having authentic relationships with loved ones.


I just don't trust this intuition very much. I think there is a lot of anxiety around experience machines due to:
- Fear of being locked in (choosing to be in the machine permanently)
- Fear that you will no longer be able to tell what's real

And to be clear, I share the intuition that experience machines seem bad, and yet I'm often totally content to play video games all day long, because doing so doesn't violate either of those two conditions.

So what I'm roughly arguing is: we have some good reasons to be wary of experience machines, but I don't think that intuition does much to generate a belief that the ethical value of a life necessarily requires some nebulous thing beyond experienced utility.
 

> people alive today have negative terminal value

This seems entirely plausible to me. A couple of jokes that may help generate an intuition here (1, 2).

You could argue that suicide rates would be much higher if this were true, but there are lots of reasons people might not commit suicide despite experiencing net-negative utility over the course of their lives.

At the very least, this doesn't feel as obviously objectionable to me as the other proposed solutions to the "mere addition paradox".

 

The problem (of worrying that you're being silly and getting mugged) doesn't arise merely when probabilities are tiny; it arises when probabilities are tiny and you're highly uncertain about them. We have pretty good bounds in the three areas you listed, but I do not have good bounds on, say, the odds that "spending the next year of my life on AI Safety research" will prevent x-risk.

In the former cases, we have base rates and many trials. In the latter case, I'm just doing a very rough Fermi estimate: say I have 5 parameters, each with an order of magnitude of uncertainty, which, when multiplied out, is just really horrendous.
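For a rough sense of how badly that compounds, here's a quick Monte Carlo (modelling each parameter as log-uniform over its order of magnitude is an assumption, just to illustrate the spread):

```python
import numpy as np

# Toy Fermi estimate: 5 independent parameters, each uncertain over a full
# order of magnitude (modelled here, as an assumption, as log-uniform over
# that range). Multiplying them compounds the uncertainty badly.
rng = np.random.default_rng(0)
n_samples = 1_000_000

# Each parameter's multiplicative spread: 10**Uniform(0, 1) spans one
# order of magnitude; any central point estimate cancels out of the ratio.
factors = 10 ** rng.uniform(0.0, 1.0, size=(n_samples, 5))
product = factors.prod(axis=1)

lo, hi = np.percentile(product, [5, 95])
print(f"90% interval of the product spans a factor of ~{hi / lo:.0f}")
# Each input spans a factor of 10, but the product's 90% interval spans a
# factor of well over 100 (more than two orders of magnitude).
```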

Anyway, I mostly agree with what you're saying, but it's possible that you're somewhat misunderstanding where the anxieties you're responding to are coming from.


 

Thanks, this is interesting. I wrote a bit about my own experiences here:

https://applieddivinitystudies.com/subconscious/

Under mainstream conceptions of physics (as I loosely understand them), the number of possible lives in the future is unfathomably large, but not actually infinite.
