Here's Nick Bostrom briefly introducing the argument.
From what I've read, the doomsday argument is usually introduced by analogy, as follows:
Imagine there are two urns in front of you, one containing 10 balls, the other containing 1 million balls. You don't know which urn is which. The balls are numbered, and upon blindly picking a ball numbered "7", you reason (correctly) that you've most likely picked a ball from the 10-ball urn.

The doomsday argument posits this: when thinking about whether the future will be long (e.g. long enough for 10^32 humans to exist) or relatively short (say, long enough for 200 billion humans), we should think of our own birth rank (you're roughly the 100 billionth human) the way we think about picking ball number 7. In other words, as the 100 billionth human you're more likely to be in the set of 200 billion humans than in the set of 10^32 humans, and this should count as evidence for adjusting our prior expectations about how long the future will be.
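To make the update concrete, here's a minimal Bayes-rule sketch in Python. The `posterior` function and the 50/50 priors are my own illustrative assumptions, not part of the original argument:

```python
def posterior(drawn_number, sizes, priors):
    """Posterior over urn hypotheses after drawing a uniformly random ball number.

    sizes: number of balls in each candidate urn (balls numbered 1..N)
    priors: prior probability of each urn
    """
    # Likelihood of drawing this number is 1/N if the urn holds at
    # least that many balls, and 0 otherwise.
    likelihoods = [1 / n if drawn_number <= n else 0.0 for n in sizes]
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Urn analogy: ball #7, 50/50 prior over the 10-ball and million-ball urns.
print(posterior(7, [10, 10**6], [0.5, 0.5]))
# -> roughly [0.99999, 0.00001]: near-certainty it's the 10-ball urn.

# Doomsday analogue: birth rank ~10^11, "short" future of 2*10^11 total
# humans vs. "long" future of 10^32, again with an (assumed) 50/50 prior.
print(posterior(10**11, [2 * 10**11, 10**32], [0.5, 0.5]))
# -> the short-future hypothesis gets essentially all the posterior mass.
```

The update is driven entirely by the likelihood ratio: a low birth rank is far more probable under the small-population hypothesis, just as ball #7 is far more probable from the 10-ball urn.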
I've found few discussions of this in EA forums, so I'm curious to hear what you all think about this argument. Does it warrant thinking differently about the long-term future?
Indeed. It seems supported by a quantum-suicide-style argument: no matter how unlikely the observer, there always has to be a feeling of what-it's-like-to-be that observer.
https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality