Frank_R
183 karma

Comments (49)

On the one hand, I am more concerned with learning about AI safety than with arranging my own cryopreservation or chemical brain preservation. This is mostly because I think it is more important to avoid the worst possible futures before planning to be alive in the future. On the other hand, I think life extension should receive much more funding (as long as it does not rely on AI with dangerous capabilities), I am against anti-natalism, and I think that brain preservation should be available to those who want it. There is a certain tension between these positions. I do not know how to resolve that tension, but thank you for pointing out the problem with my worldview.

To answer your question, I have to describe some scenarios of how a non-aligned AI might act. This is slightly cringe, since we do not know what an unaligned AI would do and such scenarios sound very sci-fi-like. In the case of something like a robot uprising or a nuclear war started by an AI, many people would die under circumstances in which uploading is impossible, but brain banks could still be intact. If an unaligned AI really has the aim of uploading and torturing everyone, there are probably better ways to do it. [Insert something with nanobots here.]

In my personal, very subjective opinion, there is a 10% chance of extinction caused by AI and a 1% chance of s-risks or something like Roko's basilisk. You may have different subjective probabilities, and even if we agreed on the probabilities, what to do would still depend very much on your preferred ethical theory.

The difference is that if you are biologically dead, there is nothing you can do to prevent a malevolent actor from uploading your mind. If you are terminally ill and pessimistic about the future, you can at least choose cremation.

I am not saying that there should be no funding for brain preservation, but personally I am not very enthusiastic, since there is a danger that we will not solve the alignment problem.

This may sound pessimistic, but the value of brain preservation also depends on your views about the long-term future. If you think there is a non-negligible chance that the future will be ruled by a non-aligned AI, or that it will be easy to create suffering copies of you, then it would be better to erase the information that is necessary to reconstruct you after your biological death.

Unfortunately, I have not found time to listen to the whole podcast, so maybe I am writing things that you have already said. The reason why everyone assumes that utility can be measured by a real number is the von Neumann–Morgenstern utility theorem: if you have a preference relation of the kind "outcome x is worse than outcome y" that satisfies certain axioms, you can construct a real-valued utility function. One of the axioms is called continuity:

"If x is worse than y and y is worse than z, then there exists a probability p, such that a lottery where you receive x with a probability of p and z with a probability of (1-p), has the same preference as y."

If x is a state of extreme suffering and you believe in suffering-focused ethics, you might disagree with the above axiom, and thus there may be no real-valued utility function. A loophole could be to replace the real numbers with another ordered field that contains infinitely large numbers. Then you could assign to x a utility of -Omega, where Omega is infinitely large.
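
A minimal sketch of that loophole (my own illustration, with hypothetical outcomes and made-up numbers): represent a utility a·Omega + b as the pair (a, b) and compare pairs lexicographically, so the infinite part always dominates. Expected utilities remain well defined, but continuity fails, because any positive probability of the -Omega outcome outweighs every finite gain.

```python
from fractions import Fraction

# A utility is a pair (a, b) standing for a*Omega + b, where Omega is an
# infinitely large positive number. Python tuples compare lexicographically,
# which is exactly the order we want: the infinite part dominates and the
# finite part only breaks ties.

def expected_utility(lottery):
    """Expected utility of a lottery given as [(probability, (a, b)), ...]."""
    a = sum(p * u[0] for p, u in lottery)
    b = sum(p * u[1] for p, u in lottery)
    return (a, b)

# Hypothetical outcomes (numbers are only for illustration):
extreme_suffering = (-1, 0)   # utility -Omega
ordinary_life     = (0, 50)   # finite utility 50
great_outcome     = (0, 100)  # finite utility 100

# Continuity would require some p in (0, 1) with
#   p * extreme_suffering + (1 - p) * great_outcome  ~  ordinary_life.
# But any positive chance of -Omega already makes the lottery worse:
for k in range(1, 6):
    p = Fraction(1, 10**k)
    lottery = [(p, extreme_suffering), (1 - p, great_outcome)]
    print(p, expected_utility(lottery) < ordinary_life)  # True for every p > 0
```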

Unfortunately, I do not have time for a long answer, but I can understand very well how you feel. Things that I find helpful are practising mindfulness and/or Stoicism and taking breaks from the internet. You said that you find it difficult to make future plans. In my experience, it can calm you down to focus on your career / family / retirement even if it is possible that AI timelines are short. If it turns out that the fear of AI is like the fear of grey goo in the 90s, making future plans is better anyway.

You may find this list of mental health suggestions helpful:

https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of

Do not be afraid to seek help if you develop serious mental health issues.

I have switched from academia to software development and I can confirm most of what you have written from my own experience. Although I am not very involved in the AI alignment community, I think that there may be similar problems to those in academia, mostly because the people interested in AI alignment are geographically scattered and there are too few senior researchers to advise all the new people entering the field.

In my opinion, it is not clear whether space colonization increases or decreases x-risk. See "Dark Skies" by Daniel Deudney or the article "Space colonization and suffering risks: Reassessing the 'maxipok rule'" by Torres for a negative view. Therefore, it is hard to say whether SpaceX or Bezos' Blue Origin is net positive or net negative.

Moreover, Google founded the life extension company Calico, and Bezos invested in Unity Biotechnology. Although life extension is not a classical EA cause area, it would be strange if the moral value of indefinite life extension were only a small positive or negative number.

I want to add that sleep training is a hot-button issue among parents. There is some evidence that starting to sleep-train your baby too early can be traumatic. My advice is simply to gather evidence from different sources before making a choice.

Otherwise, I agree with Geoffrey Miller's reply. Your working hours as a parent are usually shorter, but you learn how to set priorities and work more effectively.

Thank you for writing this post. I agree with many of your arguments, and criticisms like yours deserve more attention. Nevertheless, I still call myself a longtermist, mainly for the following reasons:

  • There exist longtermist interventions that are good with respect to a broad range of ethical theories and views about the far future, e.g. screening wastewater for unknown pathogens.
  • Sometimes it is possible to gather further evidence for counter-intuitive claims. For example, you could experiment with existing large language models and search for signs of misaligned behaviour.
  • There may exist as-yet-unknown longtermist interventions that satisfy all of our criteria. Therefore, a certain amount of speculative thinking is OK, as long as you keep in mind that most speculative theories will die.

All in all, one should keep a balance between overly conservative and overly speculative thinking.
