As part of my research for Will MacAskill’s planned book on longtermism, I carried out an attitude survey to find out how people react to some related arguments. Participants were asked to read two passages of text and indicate their level of agreement with various statements after each passage. The first passage argues that future generations matter just as much as the present one, that they are currently neglected, and that there are things we can do to help them. The second, more speculative passage argues that a bigger population is better than a smaller one (all else equal), and that in the long run, we should spread beyond the solar system. We also collected some demographic data. In total, we recruited 403 college-educated US respondents via Positly. If you want to have a look for yourself, the full data set is available here.
Survey Text
Below are the two passages that survey respondents read.
First Passage
If all goes well, humanity has a long and flourishing future ahead. But whether it does depends in part on the decisions we make today. Distant future generations are no less important than the current generation. Their joys and sorrows are just as real as ours. As an analogy, consider people who live on the other side of the planet. Surely, they don’t matter any less than people who are closer to you in space. Their well-being counts for just as much. In the same way, future generations don’t matter any less simply because they are far away in time.
Yet we rarely pause to think about just how many people there will be in the future. At the moment, there are around seven billion people in the world. The Earth will remain habitable for another 500 million years. If there are another seven billion people for each century until Earth becomes uninhabitable, then the future will contain five million times as many people as are alive today. Because there will be so many people in the future, anything we could do today to improve their lives would be of tremendous importance. The stakes are simply astronomical.
However, future generations are neglected in today’s society. In part, this is due to the short-term incentives we face. For example, politicians get rewarded or punished based on how they perform over the course of an election cycle. As a result, they don’t have much reason to think carefully about how the decisions they make today will affect future generations in centuries to come. Because future generations are so neglected, we should aspire to create a society that does more to help them.
You might think that the future is just so inherently difficult to predict that we can’t really know how to benefit the future. But there are in fact many things we can do to help future generations. For example, we can implement policies that increase the rate of sustainable growth. Economic growth has been one of the main forces behind the increase in quality of life that we’ve seen over the course of history. We’re 50 times richer today than we were prior to the Industrial Revolution. That wealth means we have to work fewer hours, have longer, healthier lives, and are able to engage in a much wider range of leisure activities. Further economic growth may bring comparable benefits to future generations.
Secondly, we can set up institutions for the political representation of future generations. The policies we adopt today are rarely evaluated for their long-term consequences, even though such consequences can often be very significant. By having, for example, an official representative for future generations, we can make sure that these long-term consequences are properly accounted for, so that we don’t choose policies that negatively affect future generations.
Thirdly, we can help future generations by taking action to reduce the risk of human extinction. If humanity goes extinct, our potential for a great future will be lost forever. Today, that future is threatened by climate change and the risk of nuclear war. Moreover, some technological developments that are just around the corner, such as biotechnology, may also bring risks of extinction. Therefore, anything we do to reduce these and other risks will greatly benefit future generations.
Second Passage
If we play our cards right, we can create a wonderful future. Through further technological development, we can create even larger improvements in quality of life than we’ve seen over the past few centuries. Through further medical advances, we can eliminate the many illnesses that plague us today, including cancer and cardiovascular disease. But it is not only through technological and scientific advances that we can create a better future. Through political changes, we can create a much juster world. Although we may not create a utopia, we should expect that quality of life is much higher in the future than it is today.
Because lives in the future could be so wonderful, we should create as many of them as we can, without sacrificing quality of life for those who are already alive. When life is good, being born is a tremendous benefit. In addition to the benefits to the individuals being born, a greater population also means greater opportunity for scientific discoveries, technological advances, cultural expression and many other things we value.
In the very long-run, this means that we must eventually spread beyond the solar system. In principle, there is no reason why we shouldn’t be able to spread to other solar systems. In our galaxy alone, there may be as many as 40 billion habitable planets. These are planets that could support communities of flourishing humans. Given the astronomical stakes, and to ensure that humanity reaches its full potential, it is therefore morally imperative that we ensure that civilization survives long enough that we can spread through the galaxy.
Main Findings
Respondents were asked to indicate their level of agreement, from 1 = “Strongly disagree” to 7 = “Strongly agree”.
1. To what extent do you agree with the argument in the first text?
(M = 5.50, SD = 1.22)
2. People on the other side of the planet matter just as much as those who are near to you.
(M = 6.10, SD = 1.26)
3. People in the distant future matter just as much as those alive today.
(M = 5.79, SD = 1.35)
4. Humanity will still exist in a thousand years.
(M = 5.36, SD = 1.42)
5. Humanity will still exist in a million years.
(M = 4.19, SD = 1.59)
6. We should do more to help future generations.
(M = 6.03, SD = 1.05)
7. I would be willing to accept 5 percentage point higher taxes that will be used to benefit future generations, but which won't at all benefit the current generation.
(M = 4.25, SD = 1.86)
8. There are meaningful ways of affecting things a thousand years from now.
(M = 5.44, SD = 1.41)
9. There are meaningful ways of affecting things a million years from now.
(M = 4.34, SD = 1.79)
10. To what extent do you agree with the argument in the second text?
(M = 4.28, SD = 1.77)
11. For the average person, life will be better in a thousand years than it is today.
(M = 4.49, SD = 1.39)
12. For the average person, life will be better in a million years than it is today.
(M = 3.97, SD = 1.35)
13. Consider two civilizations. Both of them last for a million years. In the first civilization, there are ten billion people in every generation. In the second civilization, there are one billion people in every generation. Other than their population size, the two civilizations are identical. Their members are equally happy, and there are no issues with resource depletion, environmental degradation, or overpopulation.
If only one civilization could come into existence, which would you prefer?
(M = 3.56, SD = 1.81, where 1 = 'Strongly prefer 1B civilization' and 7 = 'Strongly prefer 10B civilization')
For this question, I also looked at the qualitative answers of those who expressed a strong preference for either the 1B or the 10B civilization. I’ve tried to categorize their reasons below (noting that some respondents gave more than one reason):
There were 74 respondents who strongly preferred the 1B civilization.
- More resources (25)
- Less crowded (22)
- Overpopulation (10)
- Environmental issues (8)
- Humans are overrated (5)
- Other (12)
There were 34 respondents who strongly preferred the 10B civilization.
- More happy people (23)
- More ideas & scientific/technological advances (10)
- More diversity/creativity/culture (4)
- Increases the chance of human survival and an even better future (3)
- Other (2)
14. I hope that in the future, humanity will spread to other solar systems.
(M = 4.71, SD = 1.77)
I also looked at qualitative answers to this question.
75 respondents were strongly in favour of space settlement, and gave the following reasons:
- Awesome/Amazing/Unlock potential/Discovery (35)
- Needed for survival (28)
- More room to address overcrowding/limited resources (14)
- Find out if there’s intelligent life (3)
- More people get to exist (1)
24 respondents were strongly against space settlement, and gave the following reasons:
- This is unlikely/impossible/we won’t survive that long (11)
- Humanity is a disaster for other solar systems (8)
- Against colonization (2)
- Man is stupid (1)
- I don’t want to leave Earth (1)
- Would just be more of the same problems (1)
- Humans don’t belong on other planets (1)
Correlations and Comparisons
- Valuing spatially distant people was correlated with valuing temporally distant people (r = 0.63, p < 3e-16), as predicted by construal level theory.
- Willingness to accept a tax to help future generations was correlated with climate change concern (r = 0.53, p < 3e-16).
- Climate change concern was somewhat negatively correlated with thinking that humanity will still exist in a thousand years (r = –0.18, p < 0.0004) and in a million years (r = –0.15, p < 0.003).
- Social justice concern was correlated with agreeing with the first passage of text (r = 0.31, p = 3e-10), but not with agreeing with the second passage.
- Sci-fi enjoyment was somewhat correlated with agreeing with the first passage (r = 0.17, p < 0.0008), and more strongly correlated with agreeing with the second (r = 0.28, p < 2e-8).
- As one might expect, people were more willing to agree that we should do more to help future generations in the abstract than when this was phrased in terms of a cost. But the two variables were still correlated (r = 0.53, p < 3e-16).
- Women were somewhat more likely than men to value future generations (M = 5.97 vs. M = 5.61), t(397) = 2.67, p < 0.008, perhaps due to generally higher empathy levels.
- Men (M = 4.60) were more likely than women (M = 3.95) to agree with the second argument, t(401) = –3.72, p < 0.0003.
- Those who identified as economically free market were more likely than those who identified as economically socialist to think that humanity will still exist in a thousand years (M = 5.61 vs. M = 4.95), t(274) = 4.46, p < 2e-05.
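Since the full data set is public, statistics of this kind are easy to recompute. Below is a minimal sketch, not the analysis code actually used, of how the means, standard deviations, Pearson correlations, and t-tests reported above could be reproduced in Python; the file name and column names (longtermism_survey.csv, spatial_cosmopolitanism, temporal_cosmopolitanism, value_future_generations, gender) are hypothetical placeholders that would need to be matched to the published data.

```python
# Minimal reproduction sketch; not the original analysis code.
# File and column names below are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("longtermism_survey.csv")  # hypothetical file name

# Descriptive statistics (M, SD) for a 1-7 Likert item
item = df["value_future_generations"].dropna()
print(f"M = {item.mean():.2f}, SD = {item.std():.2f}")  # pandas .std() is the sample SD

# Pearson correlation between two Likert items
# (e.g. valuing spatially vs. temporally distant people)
pair = df[["spatial_cosmopolitanism", "temporal_cosmopolitanism"]].dropna()
r, p = stats.pearsonr(pair["spatial_cosmopolitanism"], pair["temporal_cosmopolitanism"])
print(f"r = {r:.2f}, p = {p:.1g}")

# Student's t-test comparing two groups on a Likert item
# (equal variances assumed, df = n1 + n2 - 2)
women = df.loc[df["gender"] == "female", "value_future_generations"].dropna()
men = df.loc[df["gender"] == "male", "value_future_generations"].dropna()
t, p = stats.ttest_ind(women, men)
print(f"t({len(women) + len(men) - 2}) = {t:.2f}, p = {p:.1g}")
```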
Lessons
In terms of making a convincing case for longtermism, what do these findings imply? Here are some tentative takeaways, though no doubt there are further lessons.
- Valuing the future. One striking thing is how strongly people agree that future generations matter just as much as the present one (M = 5.79, SD = 1.35), and that we should do more to help them (M = 6.03, SD = 1.05). Of course, when helping future generations is presented as involving a personal cost (in the form of a tax increase), there is less agreement (M = 4.25, SD = 1.86), so it’s not clear how these attitudes would translate into action. Nevertheless, it does suggest that people generally view some of the core ideas of longtermism in a favorable light.
- Population ethics. Another striking thing is how little people think that a larger population is better (M = 3.56, SD = 1.81, where 1 = ‘Strongly prefer 1B civilization’ and 7 = ‘Strongly prefer 10B civilization’). However, we also collected qualitative responses to this question, and found that many of the people who preferred the smaller civilization over the bigger one were unwilling to accept the stipulations. Among the 74 respondents who strongly preferred the smaller civilization, the most commonly given reasons were more resources (25), less crowded (22), overpopulation (10), and environmental issues (8), in spite of the explicit claim that “there are no issues with resource depletion, environmental degradation, or overpopulation.” Nevertheless, 125 of the 403 respondents reported being indifferent between the two civilizations, so an unwillingness to accept the stipulations cannot be everything that’s going on here.
- Climate change. One striking, but perhaps not very surprising, finding is just how strongly people associate talk of influencing and benefiting the future with climate change and sustainability. For example, when motivating their answers to the question about whether there are meaningful things we can do to affect things in a thousand years, over half of the respondents mentioned climate change and environmental issues. This suggests that a crucial aspect of communicating longtermism concerns how to position the view with respect to climate change.
- Weaker beliefs about the distant future. Many of the statements that concerned a million years into the future received responses that peaked around 4 (‘Neither agree nor disagree’). One hypothesis is that for the very distant future, people typically don’t have beliefs in any strong sense. Perhaps relatedly, as we expected, there was stronger agreement with the first text than the second (M = 5.50 vs. M = 4.28). This suggests that communicating the ‘weirder’ aspects of longtermism presents more of a challenge.
(Thanks to Will MacAskill and Lucius Caviola for discussion.)
Thanks for doing this work, and making it public. Similar to Max, I basically believe in the Total View, and am sympathetic to Temporal Cosmopolitanism, so consider this somewhat good news.
However, I am a little skeptical about some of the questions. To the extent you are trying to get at what people 'really' think (if they have real views on such a topic...), I worry that some of the questions were phrased in a somewhat biased manner - particularly the ones asking for agreement with the text.
When doing political polling, people generally don't ask questions like this:
... because people's level of agreement will be exaggerated. Instead, it's often considered better practice to phrase it more like:
Agreed. As I mentioned in this comment, people will tend to agree with any generally positive-sounding platitude, due to acquiescence bias and plausibly social desirability bias. On the whole, I would expect people to be extremely reluctant to explicitly deny that some people "matter just as much as" others if the affirmative is put to them. This may all be especially problematic when the issues in question are ones people haven't really thought about before and so don't have clear attitudes - such questions are particularly likely to elicit merely superficial agreement.
I think one of the best approaches to ameliorate this is to use reversed statements, i.e. to ask people whether they agree with an item expressing the opposite attitude (i.e. that people who are alive here and now matter more). Sanjay should be posting a report of the results from when we did this fairly soon. Quite often you will find that people agree both with a statement expressing an attitude and with a statement designed to capture the exact opposite view, and you then need to work to find a set of items that together actually seems to meaningfully capture the attitude of interest.
Thanks for the suggestion to use reversed statements. As I said in my response to Larks, I share this concern, so if we run further iterations of the survey, I'll include something along these lines.
I look forward to seeing Sanjay's report!
Thanks for this. I basically share the concern that you and David express, and it would be good to revise the statements accordingly if we run further versions of the survey. But even if the extent of agreement is inflated, it seems reasonable to think that the ordinal ranking should remain the same (so that people agree more strongly with the first text than with the second, and more strongly endorse the claim that people on the other side of the planet matter just as much than the claim that people in the distant future do).
Interesting, thanks for publishing!
Just a quick note: Your finding about population ethics - i.e. that many prefer smaller populations - is consistent with findings reported in the following paper.
Among other things, Spears also finds that women prefer smaller populations more strongly than men.
I learned all of this from an unpublished literature review by David Althaus, which might be interesting for your purposes if you haven't seen it. [ETA: Actually David has publicly linked to the lit review - see subsection "An experimental study of population ethics and policy", pp. 23ff., for a summary of Spears (2017) - in this EA Forum post.]
(I'm not mentioning this to argue for any view. I'm very sympathetic to the total view in population ethics, and I agree with your interpretation that many subjects simply failed to understand the scenario in the intended way.)
Thanks for the pointer!
Wow, some fascinating and surprising answers, e.g. that there was more support for the statement "I hope that in the future, humanity will spread to other solar systems" than support for a population of 10bn rather than a population of 1bn. I was also interested in the finding that "Valuing spatially distant people was correlated with valuing temporally distant people (r = 0.63, p < 3e-16)."
Beyond some of the discussion around the question wording raised by others, I'm also wondering why you chose to present people with these two articles, rather than just running the survey without any accompanying information? I'm not sure what was gained by providing people with this information and I think it makes the answers less representative and useful. For example, you state that the results "suggest that people generally view some of the core ideas of longtermism in a favorable light." I would more cautiously claim that the results "suggest that people who have just read two articles that are favorable to longtermism generally accept some of the core ideas of longtermism, at least temporarily."
I think I would have found this more useful either as a nationally representative survey (e.g. using Ipsos), to explore what people currently think, while attempting to minimise the effects of the survey design on the answers, or as an RCT, where a control group (no article) is compared to 1+ intervention group(s), testing for the effectiveness of possible pro-longtermist messaging on people's attitudes.
(But, to clarify, as a quick survey via Positly, which is cheaper than using Ipsos, I do think that these findings are still useful and interesting.)
Perhaps a way to avoid this problem would be to use numbers that are both significantly less than the current population, such as 2B vs 3B rather than 1B vs 10B.
Question 13 seems under-specified to me, specifically this part: "Their members are equally happy." Does this mean their level of welfare is the same, but it could be at any level for the purposes of this question? Does the use of "happy" in particular mean the question assumes this constant level of welfare is net positive? Could the magnitudes of happiness and suffering differ between people as long as the "net welfare" is positive, assuming it's possible to make that aggregation?
I think these questions matter because they influence your interpretation of the answers as either a result of population ethical factors, or other things like the respondents' beliefs about the moral weight of happiness vs suffering. Someone could coherently accept totalism yet consider the smaller world better if, for instance, they think the higher number of cases of the extreme tails of suffering in the larger population (just because there are more people that things could go very wrong for) makes it worse.
A priori I expect suffering focused intuitions to be in the minority, but in any case it's not obvious that the answers to #13 reveal non-totalist or irrational population ethics among the respondents.
(I think "if the contributive axiological value of people is negative, then preferring smaller populations is consistent with - indeed, implied by - totalism in population ethics" is a valid point, and obviously so. It is also mentioned by Spears in the paper I cite above. I therefore find it quite irritating that the parent comment was apparently strongly downvoted. Curious if I'm missing a reason for this?
NB I also think the point is trivial and has an implausible premise, but IMO it is the hallmark of good philosophy that each individual statement seems trivial - e.g., Reasons and Persons features many such claims that might strike some readers as trivial or pedantic.)
Cf. Russell:
"Trivial, but in a Derek Parfit way" is honestly the highest compliment I could ever receive.
Thanks, this is a good point. From looking at the qualitative answers that people provided in response to this question, it doesn't appear to have been much of an issue in practice, however.
I see, thank you - wasn't sure what might have been hidden in "Other." :)
Thanks for running this survey and sharing these results!
Minor point:
It seems like unwillingness to accept the stipulations is just/almost as plausible an explanation of the indifferent responses as it is of the responses favouring the smaller civilization. People may have thought the bigger civilization would be better if not for problems like resource issues or overcrowding, but that those problems would or might occur, and thus they ended up unsure which is better, or indifferent between them.