Below is a list of things which, in my view, could affect the wellbeing of all people, but which, as far as I know, are not part of existing EA research. Since I found these topics important but underexplored, I naturally tried to look into them as deeply as I could, so many of the ideas suggested below have links to my own work.
- Use the Moon as data storage about humanity. This data could be used by the next civilization on Earth and could help it escape global catastrophes, or even help it resurrect humans.
- Explore the dangers of passive SETI. We could download a dangerous alien AI. See also a recent post by Matthew Barnett.
- Study of UAP and their relation to our future prospects and global risks.
- Plastination as an alternative to cryonics. Some forms of chemical preservation are much cheaper than cryonics and do not require maintenance.
- Prove that death is bad (from the point of view of preference utilitarianism), and thus that we need to fight aging, strive for immortality, and research ways to resurrect the dead (unpublished working draft).
- Research the topic of so-called “quantum immortality”. Will it cause eternal suffering to anyone, or could it be used to increase one's chances of immortality?
- Explore ways to resurrect the dead.
- New approaches to digital immortality and life-logging, which is the cheapest path to immortality available to everyone. Explore active self-description as an alternative to life-logging.
- Explore how to “cure” past suffering. Past suffering is bad. If we had a time machine, it could be used to save past minds from suffering. But we could also save them by creating indexical uncertainty about their location, which would work similarly to a time machine.
- Global chemical contamination as an x-risk. Seems to be underexplored.
- Anthropic effects on the expected probability of runaway global warming: our world may be more fragile than we think, and thus a climate catastrophe is more probable (unpublished draft).
- Plan B in AI safety. Let’s speak seriously about AI boxing and the best ways to do it.
- Dig deeper into acausal deals and messaging to future AI. The utility of killing humans is small for an advanced superintelligent AI, so adding even a small value to our existence could help.
- How will a future nuclear war differ from 20th-century nuclear war scenarios?
- Explore and create refuges for surviving a global catastrophe, e.g. on an island or in a submarine. Create a general overview of survival options: surviving in caves, surviving a moist greenhouse (unpublished draft).
- How to survive the end of the universe. We may have to make important choices before we start space colonization.
- Simulation: Experimental and theoretical research. Explore simulation termination risks. Explore types of evidence that we are in a simulation, and analyze the topic of so-called “glitches in the matrix” – are they evidence that we are in a simulation?
- Psychology of human values: do they actually exist as a stable set of preferences, and what does psychology tell us about that?
- Doomsday argument: what if it is true after all? What can be done to escape its prediction?
- Explore the risks of wireheading as a possible cause of civilizational decline.
Could you elaborate on why we have to make choices before space colonisation if we want to survive beyond the end of the last stars? Until now, my opinion has been that we can "start solving heat death" a billion years in the future, while we have to solve AI alignment in the next 50–1000 years.
Another thought of mine is that it is probably impossible to resurrect the dead by computing the state of each neuron of a deceased person at the time of their death. I think you would need to measure the state of each particle in the present with very high precision, and/or the computational requirements for a backward simulation are much too high. Unfortunately, I cannot provide a detailed mathematical argument. This would be an interesting research project, even if the only outcome is that a small group of people should change their cause area.
If we start space colonisation, we may not be able to change the goal-systems of the spaceships we send to the stars, as they will be moving away at near-light speed. So we need to specify what we will do with the universe before starting space colonisation: either we spend all resources to build as many simulations with happy minds as possible, or we reorganise matter in ways that will help us survive the end of the universe, e.g. building Tipler's Omega Point or a wormhole into another universe.
---
Very high precision of brain details is not needed for resurrection, as we forget our exact mind-state every second. Only a core of long-term memory is sufficient to preserve what I call "information identity": the necessary condition for a person to regard himself as the same person, say, the next day. But the problem of identity is not solved yet, and solving it would be a strong EA cause: we want to help people in ways that will not destroy their personal identity, if that identity really matters.
Thank you for your answers. With better brain preservation and a more detailed understanding of the mind, it may be possible to resurrect recently deceased persons. I am more skeptical about the possibility of resurrecting a peasant from the Middle Ages by simulating the universe backwards, but of course these are different issues.
If we simulate all possible universes, we can do it. It is an enormous computational task, but it could be done via acausal cooperation between different branches of the multiverse, where each branch simulates only one history.
I see two problems with your proposal:
I am not against your ideas, but I am afraid there are many conceptual and physical problems that have to be solved first. What is even worse, there is no universally accepted method for resolving these issues. So a lot of further research is necessary.
1. The identity problem is known to be difficult, but here I assume that continuity of consciousness is not needed for it; informational identity alone is enough.
2. The difference from quantum (or big-world) immortality is that we can select which minds to create and exclude N+1 observer-moments which are damaged or suffering.
Let us assume that a typical large but finite volume contains n happy simulations of you and n·10^(-100) suffering copies of you, maybe Boltzmann brains or simulations made by a malevolent agent. If the universe is infinite, you have infinitely many happy and infinitely many suffering copies of you, and it is hard to see how to interpret this result.
I think there is a way to calculate relative probabilities even in the infinite case, and they will converge to 1 : 10^(-100); a sketch of the limiting calculation is below. For example, the article "Watchers of the multiverse" suggests a plausible way to do so.
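A minimal sketch of the intended calculation, assuming copies are counted inside growing finite volumes $V$ with the densities from the example above ($n$ happy copies and $n \cdot 10^{-100}$ suffering copies per typical volume):

$$\frac{P(\text{suffering})}{P(\text{happy})} = \lim_{V \to \infty} \frac{N_{\text{suffering}}(V)}{N_{\text{happy}}(V)} = \frac{n \cdot 10^{-100}}{n} = 10^{-100}$$

Both counts diverge, but the volume-regularised ratio stays 1 : 10^(-100). Whether this volume-based regularisation is the right one is exactly the cosmological measure problem the paper addresses.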
Thank you for the link to the paper. I find Alexander Vilenkin's theoretical work very interesting.
On UAP and glitches in the matrix: I sometimes joke that, if we ever build something like a time machine, we should go back in time and produce those phenomena as pranks on our ancestors, or to "ensure timeline integrity." I was even considering writing an April Fool's post on how creating a stable worldwide commitment around this "past pranks" policy (or, similarly, committing to go back in time to investigate those phenomena and "play pranks" only if no other explanation is found) would, by EDT, imply lower probabilities of scary competing explanations for unexplained phenomena - like aliens, supernatural beings or glitches in the matrix. (another possible intervention is to write a letter to superintelligent descendants asking them to, if possible, go back in time to enforce that policy... I mean, you know how it goes)
(crap I just noticed I'm plagiarizing Interstellar!)
So it turns out that, though I find this whole subject weird and amusing, and don't feel particularly willing to dedicate more than half an hour to it... the reasoning seems to be sound, and I can't spot any relevant flaws. If I ever find myself having one of those experiences, I do prefer to think "I'm either hallucinating, or my grandkids are playing with the time machine again."
Actually, I am going to write a short post someday: "time machine as existential risk".
Technically, any time travel is possible only if the timeline is branching, but that is fine in the quantum multiverse. However, some changes in the past will be invariants: they will not change the future in a way that causes a grandfather paradox. Such invariants will be loopholes and will have very high measure. UFOs could be such invariants, and this explains their strangeness: only strange things do not change the future in ways that would prevent their own existence.
Thank you for this list.
#2: I left a comment on Matthew’s post that I feel is relevant: https://forum.effectivealtruism.org/posts/CRvFvCgujumygKeDB/my-current-thoughts-on-the-risks-from-seti?commentId=KRqhzrR3o3bSmhM7c
#16: I gave a talk for Mathematical Consciousness Science in 2020 that covers some relevant items; I’d especially point to 7, 8, 9, and 10 in my list here: https://opentheory.net/2022/04/it-from-bit-revisited/
#18+#20: I feel these are ultimately questions for neuroscience, not psychology. We may need a new sort of neuroscience to address them. (What would that look like?)
SensorLog is an app that lets you continuously record iPhone sensor data as a stream to a file or web server. You might use it as a convenient form of life-logging. Presumably, resurrection is easier if the intelligence doing it has lots of information about your location, movements, environment, etc.
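If you want to capture such a stream yourself, here is a minimal sketch of a receiving server in Python. The port and payload format are assumptions rather than SensorLog's documented API (configure the app, or any streaming logger, to match); the idea is just to append whatever the phone sends to dated local files.

```python
# Minimal life-logging sink: accepts HTTP POSTs (e.g. CSV/JSON lines
# streamed from a phone app) and appends them to a dated log file.
# The port and payload format are assumptions, not SensorLog's
# documented API -- adapt this handler to whatever the app sends.
import datetime
import pathlib
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG_DIR = pathlib.Path("lifelog")  # hypothetical local archive directory
LOG_DIR.mkdir(exist_ok=True)

class LifelogHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read exactly the bytes the client declared, then append them
        # to today's file so records accumulate chronologically.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        day_file = LOG_DIR / f"{datetime.date.today().isoformat()}.log"
        with day_file.open("ab") as f:
            f.write(body)
            if not body.endswith(b"\n"):
                f.write(b"\n")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Listen on all interfaces so a phone on the same Wi-Fi can reach it.
    HTTPServer(("0.0.0.0", 8080), LifelogHandler).serve_forever()
```

Point the app at your computer's local IP on port 8080 and each day's records accumulate under lifelog/.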
Thanks, I do a lot of lifelogging, but didn't know about this app.
Just curious: Could you make the case for resurrecting people instead of just creating new ones? (Agree that having more persons with positive welfare is desirable but don't see why resurrection would be the most cost-effective.)
Humans have a strong preference not to die, and many of them would like to be resurrected if it were possible and done with high quality. I am a supporter of preference utilitarianism, so I care not only about the number of happy observer-moments, but also about what people really want.
Anyway, resurrection is a limited task: only about 100 billion people have ever lived, and resurrecting them all will not preclude us from creating trillions of trillions of new happy people.
Also, a mortal being can't be really happy, so new people would need to be immortal, or they will suffer from existential dread.
Interesting, thanks! Though I don't see why you'd only resurrect humans, since animals seem to have the preference to survive as well. Anyway, I think preferences are often misleading and not a good proxy for what would really be fulfilling. To me it also seems odd to say that a preference remains even when the person no longer exists. Do you believe in souls, or how do you make that work? (Sorry for the naivety; happy about any recs on the topic.)
I support animal resurrection too, but only after all humans have been resurrected, again starting from the animals most complex and closest to humans, like pets and primates. That said, it seems some animals will actually be resurrected before humans, like mammoths, nematodes and some pets.
When I speak about human preferences, I mean current preferences: people do not want to die now, and many would prefer to be resuscitated if it could be done without damage.
Not OP, but it seems reasonable that if you perform an action to help someone, and that person then agrees in retrospect that they preferred this to happen, that can be seen as "fulfilling a preference".
For a mundane example, imagine I'm ambivalent about mini-golfing. But you know me, and you suspect I'll love it, so you take me mini-golfing. Afterwards, I enthusiastically agree that you were right, and I loved mini-golfing. I see this as pretty similar to me saying beforehand "I love mini-golfing, I wish someone would go with me", and you fulfilling my preference by taking me. In both cases, the end result is the same, even though I didn't actually have a preference for mini-golfing before.
Similarly, even though it is impossible for a dead person to have a preference, I think that if you bring someone back to life and they then agree that this was a fantastic idea and they're thrilled to be alive, that would be morally equivalent to fulfilling an active preference to live.
Thanks for the explanation!
I agree that it is great to do something for people for which they will be thankful later. But newly created people seem just as good for this, and if you care a lot about preferences, you could create them in such a way that they will be very thankful and their mere creation is fulfilling for them. I still don't see the value of resurrection vs. new people. I think my main problem with preference utilitarianism is that you can't say whether it's good or bad to create preferences, since both options have unintuitive consequences.
It seems you can accommodate this just as well, if not better, within a hedonistic view—you didn't prefer to go mini-golfing, but mini-golfing made you happier once you tried it, so that's why you endorse people encouraging you to try new things. (Although I'm inclined to say, it really depends on what you would've otherwise done with your time instead of mini-golfing, and if someone is fine not wanting something, it's reasonable to err on the side of not making them want it.)