This is how our species is going to die. Not necessarily from nuclear war specifically, but from ignoring existential risks that don’t appear imminent at this moment. If we keep doing that, eventually, something is going to kill us – something that looked improbable in advance, but that, by the time it looks imminent, will be too late to stop.
That’s the problem with freedom, in an advanced society. What can be done about it?
a. Targeted restrictions: The most natural thought is that we should tightly control just the really dangerous technologies, the ones that could be used to kill millions of people. So far, that’s worked because there aren’t that many such technologies (esp. nuclear weapons). It may not work in the future, though, when there are more such technologies. [...]
b. Defensive technologies: We’ll build defenses against the main threats. E.g., we’ll build defenses against nuclear weapons, we’ll engineer ourselves to resist genetically engineered viruses, etc. Problem: same as above; we may not be able to anticipate all the threats in advance. Also, defense is generally a losing game. It’s easier and cheaper to destroy things than to protect them. That’s why we have the saying “the best defense is a good offense”.
[...]
c. Tyranny/the End of Privacy: Maybe in the future, everyone will need to be closely monitored at all times, so that, if someone starts trying to destroy the world, other people can immediately intervene. Sam Harris suggested this in a podcast somewhere. Note: obviously, this applies as well (especially!) to government officials.
d. A better alternative . . . ?
Someone please fill in (d) for me. Thanks.
I don't think (c) works much better than the others. It implies a single point of failure and bad incentives due to the lack of accountability, on top of the really hard problem of controlling everyone.
Transhumanists would say (d) is a super-AGI, but that's basically (c) with more tech.
(An interplanetary civilization might solve it... but, as Huemer remarks, we're closer to destruction than to spreading through the galaxy.)
It's weird that he doesn't cite Bostrom's "The Vulnerable World Hypothesis": https://nickbostrom.com/papers/vulnerable.pdf
Thanks for posting this! I think he regularly posts things that are of interest to EAs.
Dwarkesh Patel asks Huemer (2021) about The Vulnerable World Hypothesis, and Huemer calls it "the strongest argument for a strong state... for keeping the state".
In the five-minute discussion, Huemer mentions a couple of points from his "The Case for Tyranny" blog post, but he clearly still doesn't have a response he is satisfied with.
Huemer says he recently read Neal Stephenson discussing "distributed monitoring" as a possible solution. He seems interested in the possibility, but he doesn't appear to have thought about it much and isn't ready to advocate for it.
https://www.youtube.com/watch?v=--xKsIgv7tE&t=3727s