Imagine two valleys. One leads to billions and billions of years of extreme joy, while the other leads to billions and billions of years of extreme suffering. A superintelligence comes to you and tells you that there is a 98% chance you will end up in the valley of joy and a 2% chance you will end up in the valley of suffering. It also offers you a third option: non-existence. Would you take the gamble, or would you choose non-existence?

The argument presented in this post occurred to me several months ago. Since then, I have spent time thinking about it and discussing it with AI models, and I have not found a satisfactory answer to it given the real-world situation. The argument can be formulated for things other than advanced AI, but given the rapid progress in the AI field, and since the argument was originally formulated in the context of AI, I will present it in that context.

Now apply the reasoning about the two valleys to AGI/ASI. AGI could be here in about 15 months and ASI not long after that. Advanced AI(s) could prolong human life to billions and billions of years, take over the world, and create a world in its image - whatever that might be. People have various estimates of how likely it is that AGI/ASI will go wrong, but one thing many of them keep saying is that the worst-case scenario is that it kills us all. That is not the worst-case scenario. The worst-case scenario is that it causes extreme suffering or tortures us for billions and trillions of years.

Let's assume better odds than 2%; say the chance of hell is 0.5%. Would you be willing to take the gamble between heaven and hell even at a 0.5% chance of hell? And if not, at what odds would you be willing to take the gamble instead of choosing non-existence?

If some of you would say that you are willing to take the gamble at a 0.5% chance of a living hell, then consider the same odds expressed as a ratio of time: would you be willing to spend 1 hour in a real torture chamber now for every 199 hours you spend outside it in a positive mental state (1 hour out of every 200 is 0.5%)? An advanced ASI could not only create suffering at the levels of pain humans can currently experience, which are already horrible, but could also increase the human capacity for pain to unimaginable levels (for whatever reason - misalignment, indifference, evilness, sadism, counterintuitive ethical theories, whatever it might be).

If you do not assume a punitive afterlife for choosing non-existence, and if choosing non-existence is an option, what odds would you need in order to take the gamble between an almost literal heaven and hell instead of choosing non-existence? I've asked myself this question, and my answer is that, when it comes to extreme suffering lasting billions and trillions of years, the odds of the bad outcome would have to be very, very close to zero. What are your thoughts on this? If you think this argument is not valid, can you show me where the flaw is?
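To make the threshold question concrete, here is a minimal expected-value sketch. The utility numbers are purely illustrative assumptions of mine, not claims about the actual magnitudes involved: treating non-existence as utility zero, the gamble is only worth taking if the probability of the bad outcome stays below u_joy / (u_joy - u_suffering), and that threshold collapses toward zero as the bad outcome is assumed to be worse and worse relative to the good one.

```python
# Illustrative sketch with toy numbers (assumptions, not the post's claims):
# compare taking the gamble against non-existence, which is assigned utility 0.
# The gamble beats non-existence only if (1 - p) * u_joy + p * u_suffering > 0,
# i.e. only if p < u_joy / (u_joy - u_suffering).

def break_even_probability(u_joy: float, u_suffering: float) -> float:
    """Largest probability of the bad outcome at which the gamble still
    beats non-existence (assumed utility 0)."""
    return u_joy / (u_joy - u_suffering)

# If the hell outcome is assumed to be k times as bad as the heaven outcome
# is good, the threshold is 1 / (1 + k), which shrinks toward zero as k grows.
for k in [1, 10, 1_000, 1_000_000]:
    p_star = break_even_probability(u_joy=1.0, u_suffering=-float(k))
    print(f"hell {k:>9,}x as bad as heaven is good -> "
          f"acceptable risk below {p_star:.4%}")
```

On this simple model, if the worst outcome is a million times worse than the best outcome is good, the acceptable risk is already below one in a million, which is one way of cashing out why my answer is "very, very close to zero."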
