Reed Shafer-Ray

Co-Founder @ Lead For America
11 karma · Joined · Working (0-5 years) · Washington, DC, USA

Bio

Co-Founder of Lead For America, a nonprofit building a leadership force of our nation's most outstanding young people, starting with a paid, full-time Fellowship in which Fellows serve in their hometown or home state. LFA grew from a dorm-room startup into a more-than-$7M organization that has created over 250 full-time Fellowships across 40+ states.

Comments (2)

Hi Michael, thanks for the feedback and the interest in this post! I'll try to respond to both of your points below:

  1. I discuss the Charlemagne Effect as one of the most obvious and easy-to-illustrate examples of long-term effects of traditionally neartermist interventions (TNIs), but mention that there are likely many other significant long-term effects that would require more thought and research to better define.
    1. As I write: "The Charlemagne Effect, whereby present people will reproduce and create huge numbers of future people, is at least one highly significant long-term effect of TNIs."
    2. It's easy to miss, but I also discuss these nuances in greater depth in Footnotes 13 and 17 and in the Considering Potential Implications section.
  2. Regarding population growth, I respond to your concerns in the section Counterargument: Carrying Capacities. Additionally, Footnote 29 directly addresses this point, and my comment responding to Gregory Lewis touches on many of the same concerns. To briefly summarize here: if we do hit a global carrying capacity by 2100, you're right that the Charlemagne Effect is unlikely to have much impact. But under many other scenarios (space colonization, very slow but not completely static growth, or cyclic growth), it will absolutely matter. Of course, we won't know what actually happens until it happens, but this uncertainty is similar to the uncertainty that accompanies investing in traditionally longtermist interventions (TLIs) like AI safety or pandemic prevention: we can't really know how much we are reducing existential risk, we can only give our best estimates. (See the rough sketch just below this list.)
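
To make that last point concrete, here is a minimal expected-value sketch. Every probability and descendant count below is a hypothetical number chosen purely for illustration, not a figure from the post; the point is only the structure of the calculation.

```python
# Hypothetical expected-value sketch of the Charlemagne Effect under uncertainty.
# Every number below is an illustrative assumption, not a figure from the post.

scenarios = {
    # scenario name: (assumed probability, assumed long-run descendants per life saved)
    "hard carrying capacity by 2100": (0.5, 3),          # effect mostly washes out
    "slow but nonzero growth":        (0.3, 1_000),      # lineage keeps compounding
    "space colonization":             (0.2, 1_000_000),  # effectively unbounded growth
}

expected_descendants = sum(p * d for p, d in scenarios.values())
print(f"Expected long-run descendants per life saved: {expected_descendants:,.0f}")

# Even with a 50% chance that the effect washes out entirely, the expectation is
# dominated by the scenarios in which growth continues -- structurally the same
# move as an expected-value estimate of existential-risk reduction.
```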

The hope of this post is not to argue that TNIs are more impactful than TLIs, but rather to make the case that people could reasonably disagree about which are more impactful depending on any number of assumptions and forecasts, and therefore that even within a longtermist utilitarian analysis, it is not obviously better to invest only in TLIs.

Hi Greg,

Thanks for reading the post and for your feedback! I think David Mears did a good job responding in a way aligned with my thinking. I will add a few additional points:

  1. I don't think we can really know how future population will grow. To name one scenario aligned with exponential growth that I cite in my post, Greaves and MacAskill discuss the possibility of space colonization, which could open up expansion stretching on for millions or billions of years:
    1. "As Greaves and MacAskill argue, it is feasible that future beings could colonize the estimated over 250 million habitable planets in the Milky Way, or even the billions of other galaxies accessible to us.[25] If this is the case, there doesn’t seem to be an obvious limit to human expansion until an unavoidable cosmic extinction event."
  2. Second, even if population doesn't grow exponentially, it could at various times exhibit cyclic growth (booms and busts) or exponential decline. As I discuss, in both of these cases, where there is no carrying capacity, the Charlemagne Effect would still hold.
  3. Third, we can't know how long humanity will continue. For example, the average mammal species has a "lifespan" of roughly 1 million years, and humans are uniquely capable of creating existential catastrophes that could greatly shorten our species' lifespan. Over such shorter timescales, exponential growth may not be unrealistic.
  4. Last, I will point out that you could very well be right that future population growth follows a logistic curve and/or that humanity continues on for billions of years. But there is some significant probability that these conditions don't hold, just as we can't be certain that working to mitigate existential risk from AI, pandemics, etc. will prevent human extinction. Thus, within an expected-value calculation of long-term value, the Charlemagne Effect should still apply as long as there is some chance that the conditions it requires hold. (The toy simulation below this list illustrates how the different growth regimes compare.)
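
To illustrate how these regimes differ, here is a toy simulation. The growth rate, carrying capacity, cycle length, and horizon are all assumptions chosen for illustration, not estimates from the post. It tracks the descendants of one extra person saved today, assuming their lineage grows at the same per-capita rate as the overall population under each regime.

```python
# Toy comparison of growth regimes (all parameters are illustrative assumptions).
# We follow the lineage of one extra person saved today, assuming it grows at the
# same per-capita rate as the overall population.

import math

YEARS = 1000       # assumed horizon
r = 0.005          # assumed baseline per-capita growth rate per year
K = 11e9           # assumed carrying capacity (logistic case)
N0 = 8e9           # rough current world population

def per_capita_rate(model: str, N: float, t: int) -> float:
    """Stylized per-capita growth rate under each regime."""
    if model == "exponential":
        return r
    if model == "logistic":
        return r * (1 - N / K)                                   # growth stops at the cap
    if model == "cyclic":
        return 0.5 * r * (1 + math.cos(2 * math.pi * t / 200))   # booms and busts
    raise ValueError(model)

for model in ("exponential", "logistic", "cyclic"):
    N, lineage = N0, 1.0
    for t in range(YEARS):
        g = per_capita_rate(model, N, t)
        N += g * N
        lineage += g * lineage
    print(f"{model:>12}: ~{lineage:,.1f} descendants of one life saved after {YEARS} years")

# Note: under a hard carrying capacity the true counterfactual impact could be even
# smaller than shown here, since an extra lineage may simply displace other people
# once the cap binds.
```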