
I have not researched longtermism deeply. However, what I have found out so far leaves me puzzled and skeptical. As I currently see it, you can divide what longtermism cares about into two categories:

1) Existential risk.

2) Common sense long-term priorities, such as:

  • economic growth
  • environmentalism
  • scientific and technological progress
  • social and moral progress 

Existential risk isn’t a new idea (it predates longtermism), and economic growth, environmentalism, and societal progress aren’t new ideas either. Suppose I already care a lot about low-probability existential catastrophes and I already buy into common sense ideas about sustainability, growth, and progress. Does longtermism have anything new to tell me?

2 Answers

Longtermism suggests a different focus within existential risks, because it feels very differently about "99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation" than about "100% of humanity is destroyed, civilisation ends", even though from the perspective of people alive today these outcomes are very similar.

I think that, relative to neartermist intuitions about catastrophic risk, the particular focus on extinction increases the threat from AI and engineered biorisks relative to e.g. climate change and natural pandemics. Basically, total extinction is quite a high bar, and it is most easily reached by something deliberately attempting to reach it, as opposed to natural disasters, which don't tend to counter-adapt when some people survive.

Longtermism also supports research into civilisational resilience measures, like bunkers, or research into how or whether civilisation could survive and rebuild after a catastrophe.

Longtermism also lowers the probability bar that an extinction risk has to reach before being worth taking seriously. I think this used to be a bigger part of the reason people worked on x-risk when typical risk estimates were lower; over time, as risk estimates increased, longtermism became less necessary to justify working on them.

because it feels very differently about "99% of humanity is destroyed, but the remaining 1% are able to rebuild civilisation" than about "100% of humanity is destroyed, civilisation ends"

Maybe? This depends on what you think about the probability that intelligent life re-evolves on earth (it seems likely to me) and how good you feel about the next intelligent species on earth vs humans.

the particular focus on extinction increases the threat from AI and engineered biorisks

IMO, most x-risk from AI probably doesn't come from literal human extinction but instead AI s...

Ben Millwood🔸
Yeah, it seems possible to be a longtermist but not think that human extinction entails the loss of all hope, though extinction still seems more important to the longtermist than to the neartermist. The point about AI seems valid, too; I guess longtermists and neartermists will also feel quite differently about that fate.

This is an interesting point, and I guess it’s important to make, but it doesn’t exactly answer the question I asked in the OP.

In 2013, Nick Bostrom gave a TEDx talk about existential risk where he argued that it’s so important to care about because of the 10^umpteen future lives at stake. In the talk, Bostrom referenced even older work by Derek Parfit. (From a quick Google, the Parfit stuff on existential risk was from his book Reasons and Persons, published in 1984.)

I feel like people in the EA community only started talking about "longtermism" in the la...

Ben Millwood🔸
I guess I think of caring about future people as the core of longtermism, so if you're already signed up to that, I would already call you a longtermist? I think most people aren't signed up for that, though.
Kevin Ulug
I agree that if you're already bought into moral consideration for 10^umpteen future people, that's longtermism.
Yarrow
Sorry for replying to this ancient post now. (I was looking at my old EA Forum posts after not being active on the forum for about a year.) Here's why this answer feels unsatisfying to me.

An incredibly mainstream view is to care about everyone alive today and everyone who will be born in the next 100 years. I have to imagine over 90% of people in the world would agree to that view, or one very close to it, if you asked them. That's already a reason to care about existential risks, and a reason people do care about what they perceive as existential risks or global catastrophic risks. It's the reason most people who care about climate change care about climate change.

I don't really know what the best way to express the most mainstream view(s) would be. I don't think most people have tried to form a rigorous view on the ethics of far-future people. (I have a hard enough time translating my own intuitions into a rigorous view, even with exposure to academic philosophy and to these sorts of ideas.) But maybe we could conjecture that most people mentally apply a "discount rate" to future lives, so that they care less and less about future lives as the centuries stretch into the future, until at some point their concern reaches zero.

Future lives in the distant future (i.e. people born significantly later than 100 years from now) only make an actionable difference to existential risk when the estimated risk is so low that accounting for 10^16 or 10^52 or however many hypothetical future lives changes the expected value math (a rough version of this calculation is sketched below). That feels like an important insight to me, but its applicability feels limited.

So: people who don't take a longtermist view of existential risk already have a good reason to care about existential risk. Also: people who take a longtermist view of ethics don't seem to have a good reason to think differently about any subject other than existential risk. At least, that's the impression I get from trying to engage open-mindedly and charitably with this answer.
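To make the "expected value math" in the comment above concrete, here is a minimal sketch. The specific numbers ($\Delta p = 10^{-6}$, $10^{10}$ near-term lives, $10^{16}$ potential future lives, a 2% annual discount rate) are illustrative assumptions, not figures from this thread:

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Expected lives saved by an intervention that lowers extinction probability by $\Delta p$.
% All numbers are illustrative assumptions, not figures from this thread.

% Counting only people alive now or born within ~100 years ($N_{\mathrm{near}} \approx 10^{10}$):
\[
  \Delta p \cdot N_{\mathrm{near}} \approx 10^{-6} \times 10^{10} = 10^{4} \text{ lives in expectation.}
\]

% Counting all potential future people with no discount ($N_{\mathrm{future}} \approx 10^{16}$):
\[
  \Delta p \cdot N_{\mathrm{future}} \approx 10^{-6} \times 10^{16} = 10^{10} \text{ lives in expectation.}
\]

% With a discount rate $r$ on future lives (weight $e^{-rt}$ for people born $t$ years from now),
% the total becomes
\[
  \Delta p \int_{0}^{\infty} n(t)\, e^{-rt}\, \mathrm{d}t ,
\]
% where $n(t)$ is the assumed number of births at time $t$. At $r = 0.02$, the weight on lives
% 500 years out is $e^{-10} \approx 4.5 \times 10^{-5}$, so the distant future barely moves the total.

\end{document}
```

The point of the sketch is only that distant-future lives dominate the calculation when they are counted at close to full weight, and drop out almost entirely under even a modest discount rate, which is roughly the boundary the comment seems to be pointing at.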

One takeaway, I think, is that these things, which already seem good under common sense, are much more important in the longtermist view. For example, I think a longtermist would want extinction risk to be much lower than a commonsense view would call for.

Does this apply to things other than existential risk?

Kevin Ulug
Yes. I think the commonsense priorities on your list are even more beneficial in the longtermist view. Factors like "would this have happened anyway, just a bit later?" may still apply and reduce the impact of any given intervention. Then again, notions like "we can reach more of the universe the sooner we start expanding" could be an argument that sooner is better for economic growth.
1 Comment

Answering this question depends a little on having a sense of what the "non-longtermist status quo" is, but:

  • I think there's more than one popular way of thinking about issues like this,
    • in particular I think it's definitely not universal to take existential risk seriously,
  • I think common sense and the status quo include some (at least partial) longtermism, e.g. I think popular rhetoric around climate change has often held the assumption that we were taking action primarily with our descendants in mind, rather than ourselves.