I recently read Greaves & MacAskill’s working paper “The case for strong longtermism” for a book/journal club, and noted some reactions to the paper. I’m making this post to share slightly-neatened-up versions of those reactions, and also to provide a space for other people to share their own reactions.[1] I’ll split my thoughts into separate comments, partly so it’s easier for people to reply to specific points.
I thought the paper outlined what (strong) longtermism is claiming - and many potential arguments for or against it - more precisely, thoroughly, and clearly than anything else I’ve read on the subject.[2] As such, it’s now one of the two main papers I’d typically recommend to someone who wanted to learn about longtermism from a philosophical perspective (as opposed to learning about what one’s priorities should be, given longtermism). (The other paper I’d typically recommend is Tarsney’s “The epistemic challenge to longtermism”.)
So if you haven’t read the paper yet, you should probably do that before / instead of reading my thoughts on it.
But despite me thinking the paper was a very useful contribution, my comments will mostly focus on what I see as possible flaws with the paper - some minor, some potentially substantive.
Here’s the paper’s abstract:
Let strong longtermism be the thesis that in a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best. If this thesis is correct, it suggests that for decision purposes, we can often simply ignore shorter-run effects: the primary determinant of how good an option is (ex ante) is how good its effects on the very long run are. This paper sets out an argument for strong longtermism. We argue that the case for this thesis is quite robust to plausible variations in various normative assumptions, including relating to population ethics, interpersonal aggregation and decision theory. We also suggest that while strong longtermism as defined above is a purely axiological thesis, a corresponding deontic thesis plausibly follows, even by non-consequentialist lights.
[1] There is already a linkpost to this paper on the Forum, but it was posted in a way that meant it never spent time on the front page, so there was never a point when people could comment and feel confident that their comments would be seen.
There's also the post Possible misconceptions about (strong) longtermism, which I think is good, but which serves a somewhat different role.
[2] Other relevant things I’ve read include, for example, Bostrom’s 2013 paper on existential risk and Ord’s The Precipice. The key difference is not that those works are lower quality but rather that they had a different (and also important!) focus and goal.
Note that I haven’t read Beckstead’s thesis, though I’ve heard that it was (or perhaps still is) the best work on this topic. Also, Tarsney’s “The epistemic challenge to longtermism” pursues a somewhat similar goal, and does so comparably well to Greaves and MacAskill.
This post does not necessarily represent the views of any of my employers.
I think the argument in the section “A meta-option: Funding research into longtermist intervention prospects” is important and is sometimes overlooked by non-longtermists.
Here’s a somewhat streamlined version of the section’s key claims:
Roughly the same argument has often occurred to me as one of the strongest arguments for at least doing longtermist research, even if one felt that all object-level longtermist interventions proposed so far are too speculative. (I’d guess that I didn’t independently come up with the argument, but rather heard a version of it somewhere else.)
One thing I’d add is that one could also do cross-cutting work, such as work on the epistemic challenge to longtermism, rather than just work to better evaluate the cost-effectiveness of specific interventions or classes of interventions.
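To spell out the structure of this meta-option argument, here's a minimal value-of-information sketch (the notation is mine, not the paper's): let $p$ be the probability that further research identifies a robustly good longtermist intervention, $V$ the expected value of then acting on it, and $c$ the cost of the research. Funding the research looks worthwhile ex ante whenever

$$p \cdot V > c,$$

and if longtermist premises make $V$ astronomically large, even a modest $p$ can justify a substantial $c$.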
Two possible objections:
It might be too difficult to ever identify a longtermist intervention ahead of time as robustly good, due to the absence of good feedback, and due to skepticism, cluelessness, or moral uncertainty.
Cross-cutting work, especially if public, can also benefit others whose goals/values are unaligned with your own, and so do more harm than good. More generally, the resources and capital you try to build, including knowledge, can also end up in the wrong hands eventually, which undermines patient philanthropy too.
On your specific points:
(All that said, you did just say "Two possible objections", and I do think pointing out possible objections is a useful part of the cause prioritisation project.)
I basically agree with those two points, but I also think they don't really defeat the case for strong longtermism, or at least the case for having e.g. tens, hundreds, or thousands of people doing "second- or third-order" research on these things.
This research could, for example, attempt to:
It's hard to know how to count these things, but, off the top of my head, I'd estimate that:
So I think we should treat "strong longtermism actually isn't right, e.g. due to the epistemic challenge" as a live hypothesis, but it seems too early to say either that we've reached that conclusion or that the question isn't worth looking into. We're sufficiently uncertain, the potential stakes are sufficiently high, and the questions have been looked into sufficiently little that, whether we're leaning towards thinking strong longtermism is true or false, it's worth having at least some people do serious, focused work to "double-check".
[This point is unrelated to the paper's main arguments]
It seems like the paper implicitly assumes that humans are the only moral patients (which I don't think is a sound assumption, or an assumption the authors themselves would actually endorse).
The authors imply (or explicitly state?) that any positive rate of pure time discounting would guarantee that strong longtermism is false (or at least that their arguments for strong longtermism wouldn’t work in that case).
I don’t think the authors ever make it very clear what “wide class of decision situations” means in the definitions of axiological and deontic strong longtermism.
They do give a rough sense of what they mean, and perhaps that suffices for now. But I think it’d be useful to be a bit clearer.
Here’s a relevant thing they do say:
They also say:
But, as noted, these quotes still seem to me to leave the question of what “wide class of decision situations” means to them fairly open.
I think the authors are a bit too quick and confident in dismissing the idea that population ethics could substantially change their conclusions.
They write:
I think it's worth clarifying that you mean worse-than-extinction futures according to asymmetric views. S-risks can still occur in a future that's better than extinction according to classical utilitarianism, say, and could still be worth reducing.
There might be other interventions to increase wellbeing according to some person-affecting views, by increasing positive wellbeing without requiring additional people, but do any involve attractor states? Maybe genetically engineering humans to be happier, or otherwise optimizing our (possibly non-biological) descendants for happiness? Maybe it's better to do this before space colonization, but I think intelligent moral agents would still be motivated to improve their own wellbeing after colonization, so it might not be so pressing for them, although it could be pressing for moral patients with too little agency, if we send them out on their own.
Yeah, this is true. On this, I've previously written that:
Your second paragraph makes sense to me, and is an interesting point I don't think I'd thought of.
[This point is unrelated to the paper's main arguments]
The authors write “If we create a world government, then the values embodied in the constitution of that government will constrain future decision-makers indefinitely.” But I think this is either incorrect or misleading.
(Whether it's incorrect or misleading depends on how narrowly the term “constitution” was intended to be interpreted.)
What's your take on this argument:
"Why do we need longtermism? Let's just do the usual approach of evaluating interventions based on their expected marginal utility per dollar. If the best interventions turn out to be aimed at the short-term or long-term, who cares?"
tl;dr:
[I wrote this all quickly; let me know if I should clarify or elaborate on things]
---
Here's one way to flesh out point 2:
Here's another way of fleshing out point 2, copied from a comment I made on a doc where someone essentially proposed evaluating all interventions in terms of WELLBYs:
Here's another way to flesh out point 2:
All that said:
It seems backwards to first "buy into" longtermism, and then use that to evaluate interventions. You should instead evaluate longtermist interventions, and use that to decide whether to buy into longtermism.
This seems fine; if you're focusing on percentage-point reductions in x-risk, you can abstract away from questions about the size of the future, population ethics, etc. But the key is having the exchange rate, which will be a function of those parameters. So you can work on a specific parameter (e.g. x-risk reduction), which is then plugged back into the exchange rate function.
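To make the exchange-rate point concrete, here's a minimal sketch (my notation, not anything from the paper or the comment above): suppose an intervention buys $\Delta p$ percentage points of x-risk reduction per dollar, and let $V(\theta)$ be the value of the long-run future conditional on survival, where $\theta$ bundles the contested parameters (the size of the future, one's population ethics, etc.). Then the intervention's expected value per dollar is roughly

$$\Delta p \cdot V(\theta).$$

Here $V(\theta)$ is the exchange rate: you can estimate $\Delta p$ for specific interventions without settling $\theta$, then plug those estimates back in once you've formed a view on the exchange rate.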
There are a few topics I don't remember the paper directly addressing, and that I'd be interested to hear people's thoughts on (including but not limited to the authors' thoughts). (Though it's also possible that I just forgot the bits of the paper where they were covered.)
Three specific good things from the paper which I’d like to highlight:
(These were not necessarily the most important good things about the paper, and were certainly not the only ones.)
Tangent: A quote to elaborate on why I think having multiple concepts/models/framings is often useful.
This quote is from Owen Cotton-Barratt on the 80,000 Hours Podcast, and it basically matches my own views:
Part of the authors' argument is that axiological/consequentialist considerations outweigh other kinds of considerations when the stakes are sufficiently high. But I don't think the examples they give are as relevant or as persuasive/intuitive as they think.
(I personally basically agree with their conclusion, as I'm already mostly a utilitarian, but they want to convince people who aren't sold on consequentialism.)
They write:
(I think the following point might be important, but I also think I might be wrong and that I haven't explained the point well, so you may want to skip it.)
The authors claim that their case for strong longtermism should apply even for actors that aren't cause-neutral, and they give an example that makes it appear that adopting strong longtermism wouldn’t lead to very strange conclusions for an actor who isn’t cause-neutral. But I think that the example substantially understates the counterintuitiveness of the conclusions one could plausibly reach.
The authors seem to make a good case for strong longtermism. But I don’t think they make a good case that strong longtermism has very different implications from what we’d do anyway (though I do think that case can be made).