
This post discusses the introduction and definition of the term ‘longtermism’. Thanks to Toby Ord, Matthew van der Merwe and Hilary Greaves for discussion. 

[Edit, Nov 2021: After many discussions, I've settled on the following informal definition:

  • Longtermism is the view that positively influencing the longterm future is a key moral priority of our time.

This is what I'm going with for What We Owe The Future.

With this in hand, I call ‘strong longtermism’ the view that positively influencing the longterm future is *the* key moral priority of our time. It turns out to be surprisingly difficult to define this precisely, but Hilary and I give it our best shot in our paper.]


Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common way of referring to this cluster of views was to say something like ‘people interested in x-risk reduction’. There are a few reasons why this terminology isn’t ideal:

  • It’s cumbersome and somewhat jargony
  • It’s a double negative: focusing on the positive (‘ensuring the long-run future goes well’) is more inspiring and captures more accurately what we ultimately care about
  • People tend to understand ‘existential risk’ as referring only to extinction risk, which is a strictly narrower concept 
  • You could care a lot about reducing existential risk even if you don’t care particularly about the long term: for example, you might think that extinction risk is high this century and that there’s a lot we can do to reduce it, such that doing so is very effective even by the lights of the present generation’s interests.
  • Similarly, you can care a lot about the long-run future without focusing on existential risk reduction, because existential risk is just about drastic reductions in the value of the future. (‘Existential risk’ is defined as a risk where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.) But, conceptually at least (and I think in practice, too), smaller improvements in the expected value of the long-run future could be among the things we want to focus on, such as changing people’s values, or changing political institutions (like the design of a world government) before some lock-in event occurs. You might also think (as Tyler Cowen does) that speeding up economic and technological progress is one of the best ways of improving the long-run future.

For these reasons, and with Toby Ord’s in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term ‘longtermism’, with the following definition:

Longtermism =df the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future.

Since then, the term ‘longtermism’ seems to have taken off organically. I think it’s here to stay. Unlike ‘existential risk reduction’, the idea behind ‘longtermism’ is that it should be compatible with any empirical view about the best way of improving the long-run future and, I hope, immediately convey the sentiment behind the philosophical position, in the same way that ‘environmentalism’ or ‘liberalism’ or ‘cosmopolitanism’ does.

But getting a good definition of the term is important. As Ben Kuhn notes, the term could currently be understood to refer to a mishmash of different views. I think that’s not good, and we should try to develop some standardisation before the term is locked in to something suboptimal. 

I think that there are three natural concepts in this area, which we should distinguish. My proposal is that we should name them as follows (stating the concepts imprecisely for now): 

(i) longtermism, which designates an ethical view that is particularly concerned with ensuring long-run outcomes go well;

(ii) strong longtermism, which, like my original proposed definition, is the view that long-run outcomes are the thing we should be most concerned about; 

(iii) very strong longtermism, the view on which long-run outcomes are of overwhelming importance. [1]

My initial proposal was that ‘longtermism’ (with no modifier) should refer to (ii), whereas now I think it should refer to (i). This is primarily because:

  • The first concept is intuitively attractive to a significant proportion of the wider public (including key decision-makers like policymakers and business leaders); my guess is that most people would find it intuitively attractive. In contrast, the second concept is widely regarded as unintuitive, including even by proponents of the view.
  • At the same time, it seems that we’d achieve most of what we want to achieve if the wider public came to believe that ensuring the long-run future goes well is one important priority for the world, and took action on that basis, even if they didn’t regard it as the most important priority. 

In general, if I imagine ‘longtermism’ taking off as a term, I imagine it getting a lot of support if it designates the first concept, and a lot of pushback if it designates the second concept. It’s also more in line with moral ideas and social philosophies that have been successful in the past: environmentalism claims that protecting the environment is important, not that protecting the environment is (always) the most important thing; feminism claims that upholding women’s rights is important, not that doing so is (always) the most important thing. I struggle to think of examples where the philosophy makes claims about something being the most important thing, and insofar as I do (totalitarian Marxism and fascism are examples that leap to mind), they aren’t the sort of philosophies I want to emulate.

Let’s now consider definitions of the variants of longtermism.


Longtermism

I think we have two paths forward for the definition of longtermism. The first is the ‘no definition’ approach, suggested to me by Toby Ord: 

Longtermism is a philosophy that is especially concerned with improving the long-term future.

This is roughly analogous to terms like ‘environmentalism’ and ‘feminism’.


The second approach is to have some minimal definition. For example:

Longtermism is the view that:

(i) Those who live at future times matter just as much, morally, as those who live today;

(ii) Society currently privileges those who live today above those who will live in the future; and

(iii) We should take action to rectify that, and help ensure the long-run future goes well.


I’m not confident at all about this precise definition, but I prefer the minimal definition approach over the no-definition approach for a few reasons:

  • When I look at other -isms, there is often a lot of confusion around what the concept denotes, and this hinders those who want to encourage others to take action in line with the -ism. Some examples:
    • Effective altruism is still widely conflated with utilitarianism, or with earning to give, or with the randomista movement. I’ve suggested a definition and I think that having this definition will both help with responses to critics and lessen the extent to which people misunderstand what effective altruism is about in the first place. I wish we’d had the existing definition much earlier.
    • Liberalism means two different things in the US and UK: in the US a liberal is a social progressive whereas in the UK a liberal is a proponent of free markets. 
    • Anecdotally, I see a lot of confusion and resultant fighting over the term ‘feminism’, where it seems to me that a precise definition could have helped mitigate this at least somewhat. 
  • In particular, I worry that without the minimal definition, ‘longtermism’ would end up referring to strong longtermism, or even to very strong longtermism. The analogy here would be ‘effective altruism’ referring simply to applied utilitarianism in many people’s minds. Or, alternatively, it might refer to an unattractive mishmash of concepts, with Ben Kuhn’s suggestion about what ‘longtermism’ currently refers to being an example of that. 

I also just don’t see much of a case against having a minimal definition. If the precise definition turns out to be unhelpful in the future, we can quietly drop it. Or the precise definition might be something we don’t often highlight, but is just something we can refer to if people are grossly misrepresenting the position. And the minimal definition is compatible with people using the ‘no definition’ version too.

The strongest case for the no-definition approach, in my view, is that it could enable the term to evolve so as to better fit future times, and any current definition could be myopic. Perhaps that flexibility helped explain why terms like ‘environmentalism’ and ‘liberalism’ took off. But my proposed definition is so minimal that I find it hard to see that there would be much benefit from even greater flexibility.

An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs. This rules out views on which we should discount the future or ignore the long-run indirect effects of our actions, but it would not rule out views on which it’s just empirically intractable to try to improve the long-term future. Part of the idea is that this definition would open the way to a debate about the relevant empirical issues, in particular on the tractability of affecting the long run. This definition makes ‘longtermism’ somewhat more like the terms ‘cosmopolitanism’ or ‘antispeciesism’, and less like ‘neoliberalism’ or ‘feminism’ or ‘environmentalism’.

In my view, this definition would be too broad. I think the distinctive idea that we should be trying to capture is the idea of trying to promote good long-term outcomes. I see the term 'longtermism' creating value if it results in more people taking action to help ensure that the long-run future goes well. But if one can endorse longtermism without thinking that we should, at least to some extent, try to promote good long-term outcomes, then it seems like we lose much of that value. And, insofar as the term has taken off so far, it has been used to refer to people who think that we should be trying to make the long-run future go better.

One implication of my definition, which one might object to, is that if, in the future, society starts to care about the long-term future exactly to the extent it should (or more than it should), then longtermism is no longer true. In my view, that seems like a good implication. Suppose that society started caring too much about the long term and was neglecting the interests of the present generation: then there would be no need for ‘longtermism’ as an idea; indeed, we would want to promote shorttermism instead! On my definition, longtermism stops being true exactly when it is no longer needed.


Strong Longtermism

The definition I initially proposed for longtermism was an attempt to capture the idea of strong longtermism. Here’s a stylistically modified version: 

Strong Longtermism is the view that the primary determinant of the value of our actions today is how those actions affect the very long-term future.

I think this definition is good enough for general use, but it doesn’t technically capture what we want. Perhaps most of the value of our actions comes from their long-run effects, but most of the differences in value between actions come from their short-run effects. If so, then we should spend our time trying to figure out which actions best improve the short run; this is not the spirit of longtermism.

Recently, Hilary Greaves and I have been working on a paper on the core case for longtermism, in which we propose the more unwieldy but more philosophically precise:

Axiological strong longtermism =df In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.

Deontic strong longtermism =df In a wide class of decision situations, the option one ought, ex ante, to choose is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.

Where by “the option whose effects on the very long-run future are best”, we mean “the option whose effects on the future from time t onwards are best”, where t is a surprisingly long time from now (say, 1000 years). My view is that we should choose the smallest t such that any larger choice of t makes little difference to what we would prioritise.

The key idea behind both the informal definition and the more precise definition is that, in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.
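To make the structure a bit more explicit, here is a rough formalisation of the axiological claim. This is just a sketch in my own notation, not the exact formulation from the paper: let A be the set of options available in a decision situation, let V(a) be the ex ante value of option a, and let V_{>t}(a) be the ex ante value of a's effects on the future from time t onwards. Then axiological strong longtermism is the claim that, in a wide class of decision situations,

$$\arg\max_{a \in A} V(a) \;\subseteq\; B, \qquad \text{where } B = \{\, a \in A : V_{>t}(a) \text{ is close to } \max_{b \in A} V_{>t}(b) \,\} \text{ and } B \text{ is a fairly small subset of } A.$$

Short-run effects then matter only for choosing within B.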


Hyphenation

People tend to naturally use both ‘long-termism’ and ‘longtermism’. I think it makes sense to decide on one as canonical, and I think the right choice is the un-hyphenated ‘longtermism’. There are a few reasons for this.

First, grammatically, either would be fine. ‘Long-term’ is a compound adjective (e.g. “She cares about the long-term future”), while ‘long term’ is an adjective-noun pair (e.g. “She cares about the long term”). And, in general, as long as a word is unambiguous, you don’t need to include a hyphen even in cases where it’s permissible to do so: so, for example, it’s ‘post-structuralism’ but ‘postfeminism’. [2] As the style manual of the Oxford University Press comments: “If you take hyphens seriously, you will surely go mad.”

Second, if you can make a term shorter and quicker to write without sacrificing much, you should do so. So, for example, ‘neoliberalism’ is clearly a better term than ‘neo-liberalism’, and either is grammatically permissible.

Third, hyphenated words tend to lose their hyphen over time as they become increasingly familiar. Examples: to-morrow, to-day, co-operative, pigeon-hole, e-mail, etc.  In 2007, the sixth edition of the Shorter Oxford English Dictionary removed the hyphens from 16,000 entries. So even if we adopted ‘long-termism’ it would probably change to ‘longtermism’ over time.

Fourth, the hyphenation makes the term ambiguous. Consider some other hyphenated -isms: ‘anarcho-capitalism’, or ‘post-structuralism’. Here the hyphenated prefix modifies an existing -ism. So the natural reading of ‘long-termism’ would be that ‘long’ modifies some other concept, ‘termism’. But of course that’s not what this term is supposed to convey. Since ‘termism’ isn’t a concept, I don’t expect this to cause confusion, but it’s still a mild reason to prefer the unhyphenated version.

The best counterargument I know is that, on this view, the opposite of longtermism would be ‘shorttermism,’ which has a strange-looking double ‘t’. But there are many compound words with double consonants that we’ve gotten used to, like ‘bookkeeping’, ‘earring’, and ‘newsstand,’ including at least one with a double ‘t’, namely ‘posttraumatic’ (though this is also written ‘post-traumatic’), and even some with double vowels as a result of hyphen loss, like ‘cooperation’. And I’m not sure how often ‘shorttermism’ will get used. So I don’t see this as a strong counterargument.


[1] Nick Beckstead’s Main Thesis in his dissertation makes a claim similar to strong longtermism: “Main Thesis: From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.” But the title of his thesis — ‘On the Overwhelming Importance of Shaping the Far Future’ — suggests an endorsement of very strong longtermism.


[2] Note that, for the compound adjective form, it’s grammatically preferred to say ‘the long-term future’, fine to say ‘the long term future’ (because there’s no ambiguity caused by dropping the hyphen), but currently not grammatical to say ‘the longterm future’. We could try using ‘longterm' with the aim of changing usage; my view is to stick with current grammar here, though, as we’re not using ‘long-term’ as a term of art or aiming to change its meaning.

Comments

Thanks for writing this; I thought it was good.

I wonder if we might consider weakening this a little:

(i) Those who live at future times matter just as much, morally, as those who live today;

Anecdotally, it seems that many people - even people I've spoken to at EA events! - consider future generations to have zero value. Caring any amount about future people at all is already a significant divergence, and I would instinctively say that someone who cared about the indefinite future, but applied a modest discount factor, was also longtermist, in the colloquial-EA sense of the word.

I second weakening the definition. As someone who cares deeply about future generations, I think it is infeasible to value them equally to people today in terms of actual actions. I sketched out an optimal mitigation path for asteroid/comet impact. Just valuing the present generation in one country, we should do alternate foods. Valuing the present world, we should do asteroid detection/deflection. Once you value hundreds of future generations, we should add in food storage and comet detection/deflection, costing many trillions of dollars. But if you value even further in the future, we should take even more extreme measures, like many redundancies. And this is for a very small risk compared to things like nuclear winter and AGI. Furthermore, even if one does discount future generations, if you think we could have many computer consciousnesses in only a century or so, again we should be donating huge amounts of resources to reducing even small risks. I guess one way of valuing future generations equally to the present generation is to value each generation an infinitesimal amount, but that doesn't seem right.

Is the argument here something along the lines of: "I find that I don't want to struggle to do what these values would demand, so they must not be my values"?

I hope I'm not seeing an aversion to surprising conclusions in moral reasoning. Science surprises us often, but it keeps getting closer to the truth. Technology surprises us all of the time, but it keeps getting more effective. If you won't accept any sort of surprise in the domain of applied morality, your praxis is not going to end up being very good.

Thanks for your comment. I think my concern is basically addressed by Will's comment below. That is, it is good to value everyone equally. However, it is not required in our daily actions to value a random person alive today as much as ourselves, or a random person in the future as much as ourselves. That is, it is permissible to have some special relationships and some personal prerogatives.

Thanks for this! Wasn’t expecting pushback on this aspect, so that’s helpful. 

I’ll start with a clarification. What I mean by that clause is: for any population of people p, and any population p’ obtained by permuting p with respect to time while keeping the wellbeing level of every individual the same, p and p’ are equally good.

I.e. if you take some world, and move everyone around in time but keep their levels of wellbeing the same, then the new world is just as good as the old world.
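In symbols (again, just a first pass, and the notation is mine): if p′ is a population containing the same individuals as p, with each individual i's wellbeing unchanged (w_i(p′) = w_i(p) for all i) but with the times at which people live permuted, then

$$V(p') = V(p).$$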

(Caveat: I’m sure that there will be technical problems with this principle, so I’m just suggesting it as a first pass. This is in analogy with how I’d define the ideas that race, gender, etc are morally irrelevant: imagine two worlds where everything is the same except that people’s races or genders are different; these are equally good.)

That *does* rule out pure time-discounting, which Haydn suggests. But I’d need quite a lot of convincing to allow that into a definition of longtermism. (The strongest case I could see would be if spatiotemporal discounting is the best solution to problems of infinite ethics.)

But it doesn’t rule out the following:

  • Special relationships. E.g. you can believe that everyone is equally valuable but, because of a special relationship you have with your children, it’s permissible for you to save your child’s life rather than two strangers. Ditto perhaps you have a special relationship with people in your own society that you're interacting with (and perhaps have obligations of reciprocity towards).
  • Personal prerogatives. E.g. you can believe that $10 would do more good buying bednets than paying for yourself to go to the movies, but that it’s permissible for you to go to the movies.  (Ditto perhaps for spending on present-day projects rather than entirely on long-run projects.)

If you also add the normative assumptions of agent-neutral consequentialism and expected utility theory, and the empirical assumptions that the future is big and affectable, then you do get led to strong or very strong longtermism. But, if you accept those assumptions, then strong or very strong longtermism seems correct. 

I’m worried that a weakening where we just claim that future people matter, to some degree, would create too broad a church. In particular:  Economists typically suppose something like a 1.5% pure rate of time preference. On this view, even people in a billion years’ time matter. But the amount by which they matter is tiny and, in practice, any effects of our actions beyond a few centuries are irrelevant. I want a definition of longtermism - even a minimal definition - to rule out that view. But then I can’t think of a non-arbitrary stopping point in between that view and my version. And I do think that there are some benefits in terms of inspiringness of my version, too — it’s clear and robust. 
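(To illustrate with rough numbers: on a 1.5% pure rate of time preference, a unit of welfare n years from now gets weight $1.015^{-n}$, which is roughly $0.23$ at $n = 100$, roughly $6 \times 10^{-4}$ at $n = 500$, and roughly $3 \times 10^{-7}$ at $n = 1000$. So, on that view, almost nothing beyond a few centuries makes a practical difference.)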

What would most move me is if I thought that the weaker version would capture a lot more people. And it’s interesting that Larks has had the experiences they have had. 

But it currently seems to me that’s not the case.  Aron Vallinder and I ran a survey on people’s attitudes on this issue: here were the results for how much people agree or disagree with the claim that ‘people in the distant future matter just as much as those alive today’:

Given this, I’m inclined to stick with the stronger version — it already has broad appeal, and has some advantages over the weaker version.

Hi Will,

I would expect the item "People in the distant future matter just as much as those alive today" to produce somewhat inflated levels of agreement. One reason is that I expect that any formulation along the lines of "X people matter just as much as Y people" will encourage agreement, because people don't want to be seen as explicitly saying that any people matter less than others and agreement seems pretty clearly to be the socially desirable option. Another is that acquiescence bias will increase agreement levels, particularly where the proposition is one that people haven't really considered before and/or don't have clearly defined attitudes towards.

Sanjay and I found pretty different results to this and, as I think he mentioned, we'll be sharing a writeup of the results soon.

Interesting! Can't wait. :)


(And agree that this would produce inflated levels of agreement, but feel like "do you endorse this statement?" is the relevant criterion for a definition even if that endorsement is inflated relative to action).

That does rule out pure time-discounting, which Haydn suggests. But I’d need quite a lot of convincing to allow that into a definition of longtermism. (The strongest case I could see would be if spatiotemporal discounting is the best solution to problems of infinite ethics.)

It seems quite plausible to me (based on intuitions from algorithmic complexity theory) that spatiotemporal discounting is the best solution to problems of infinite ethics. (See Anatomy of Multiversal Utility Functions: Tegmark Level IV for a specific proposal in this vein.)

I think the kinds of discounting suggested by algorithmic information theory are mild enough in practice to be compatible with our intuitive notions of longtermism (e.g., the discount factors for current spacetime and a billion years from now are almost the same), and I would prefer a definition that doesn't rule them out, in case we later determine that the correct solution to infinite ethics does indeed lie in that direction.

Given this, I’m inclined to stick with the stronger version — it already has broad appeal, and has some advantages over the weaker version.

Why not include this in the definition of strong longtermism, but not weak longtermism?

Having longtermism just mean "caring a lot about the long-term future" seems the most natural and least likely to cause confusion. I think for it to mean anything other than that, you're going to have to keep beating people over the head with the definition (analogous to the sorry state of the phrase, "begs the question").

When most people first hear the term longtermism, they're going to hear it in conversation or see it in writing without the definition attached to it. And they are going to assume it means caring a lot about the long-term future. So why define it to mean anything other than that?

On the other hand, anyone who comes across strong longtermism is much more likely to realize that it's a very specific technical term, so it seems much more natural to attach a very specific definition to it.

IMHO the most natural name for "people at any time have equal value" should be something like temporal indifference, which more directly suggests that meaning.

Edit: I retract temporal indifference in favor of Holly Elmore's suggestion of temporal cosmopolitanism.

I agree with the sentiment that clause (i) is stronger than it needs to be. I don't really think this is because it would be good to include other well-specified positions like exponential discounting, though. It's more that it's taking a strong position, and that position isn't necessary for the work we want the term to do. On the other hand I also agree that "nonzero" is too weak. Maybe there's a middle ground using something like the word "significant"?

[For my own part intellectual honesty might make me hesitate before saying "I agree with longtermism" with the given definition — I think it may well be correct, but I'm noticeably less confident than I am in some related claims.]

"Aron Vallinder and I ran a survey on people’s attitudes on this issue…"

Hey Will — who was this a survey of?

I think Positly and mTurk, for people with university degrees in the US. We'll share a proper write-up of the wider survey soon; we haven't really looked at the data yet.

I wonder if it might be helpful to modify your claim (i) to be more similar to Hilary’s definition by referring to intrinsic value rather than mattering morally. Eg. Something like:

(i) Lives situated in the future have just as much intrinsic value as lives situated in the present

I think that wording could be improved but to me it seems like it does a better job of conveying:

“For any populations of people p, and any permutation with respect to time of that population that keeps the wellbeing levels of all individuals the same, p’, p and p’ are equally good."

As well as making allowance for special relationships and personal prerogatives, this also allows for the idea that the current generation holds some additional instrumental value (in enabling/affecting future generations) in addition to our intrinsic value. To me this instrumental value would have some impact on the extent to which people matter morally.

I think if you acknowledge that current populations may have some greater value (eg by virtue of instrumental value) then you would need to make claim (ii) stronger, eg. "society currently over-privileges those who live today above those who will live in the future".

I appreciate that “matter just as much, morally” is a stronger statement (and perhaps carries some important meaning in philosophy of which I’m ignorant?). I think it also sounds nicer, which seems important for an idea that you want to have broad appeal. But perhaps its ambiguity (as I perceive it) leaves it more open to objections.

Also, FWIW I disagree with the idea that (i) could be replaced with “Those who live at future times matter morally”. It doesn’t seem strong enough and I don’t think (iii) would flow from that and (ii) as it is. So I think if you did change to this weaker version of (i) it would be even more important to make (ii) stronger.

Well, we should probably distinguish between:

  1. Whether creating a person with a positive quality of life is bestowing a benefit on them (which is controversial)
  2. Whether affecting the welfare of someone in the future matters morally (not really controversial)

The minimal definition should probably include something like 2, but not 1. I think the current definition leaves it somewhat ambiguous, although I'm inclined to interpret it as my 2). I'd be surprised if you think 2 is controversial.

It is not that easy to distinguish between these two theories! Consider three worlds:

  • Sam exists with welfare 20
  • Sam does not exist
  • Sam exists with welfare 30

If you don't value creating positive people, you end up being indifferent between the first and second worlds, and between the second and third worlds... But then by 2), you want to prefer the third to the first, suggesting a violation of transitivity.

suggesting a violation of transitivity

The (normal) person-affecting response here is to say that options 1 and 3 are incomparable in value to 2 - existence is neither better than, worse than, nor equally good as, non-existence for someone. However, if Sam exists necessarily, then 2 isn't an option, so then we say 3 is better than 1. Hence, no issues with transitivity.

Well, that doesn't show it's hard to distinguish between the views; it just shows a major problem for person-affecting views that want to hold 2) but not 1).

shows a major problem

You mean, shows a major finding, no? :)

‘Longtermism’ is a new term, which may well become quite common and influential. The aim in giving the term a precise meaning while we still have the chance is to prevent confusions before they arise. This is particularly important if you’re hoping that a research field will develop around the idea. I think that this is really crucial.

I don’t have an issue with EAs using ‘longtermism’, but it’s decidedly not a “new term” and already has an existing academic literature in non-EA disciplines. So any attempts at disambiguation (which I applaud) should address how the term is currently being used. If you search for it on Google Scholar, you’ll find lots of results on “long-termism” from a business perspective (typically related to investments or corporate governance). I looked through a few pages of search results without seeing anything related to EA.

Google also provides an interesting perspective on hyphenation. I was originally in the “who cares?” camp, until I noticed that Google returns different results for “longtermism” and “long-termism” (I used an incognito window and would advise the same for anyone trying to replicate this). “Long-termism” returns results associated with the business use cases (including various definitions); I don’t see anything EA related until halfway through the 2nd page of search results. This makes sense since the existing literature generally uses a hyphen.

Googling “Longtermism” returns some business/definition results, but has a lot of EA content on the first page including the first result (ForeThought Foundation). That said, Google asks if you meant “long termism” (which gives the same search results as the hyphenated version), suggesting there’s not a ton of people searching for the unhyphenated term. I don’t think EA should adopt a hyphenating convention based on short-term search engine optimization, but this does seem like a relevant consideration.

it’s decidedly not a “new term”

While the word long-termism itself isn't new, it's a relatively new way of describing the school of thought in moral philosophy being discussed here — if only because that school of thought itself has been quite small until recently.

I think that is what Will meant by it being a 'new term'.

There are plans to use longtermism (both the term and the idea) in disciplines beyond moral philosophy (e.g. the Global Priorities Institute’s longtermist research agenda which includes economics in addition to philosophy). So to “prevent confusion”, it’s important to understand whether other fields are using the term, and what other people are likely to think when they hear it.

FWIW, I think for most people something like “ultralongtermist” would do a better job of communicating the time frames Will is talking about.

I'm uneasy about loading empirical claims about how society is doing into the definition of longtermism (mostly part (ii) of your definition). This is mostly from wanting conceptual clarity, and wanting to be able to talk about what's good for society to do separately and considering what it's already doing.

An example where I'm noticing the language might be funny: I want to be able to talk about a hypothetical longtermist society, perhaps one that we aspire to, where essentially everyone is on board with longtermism. But if the definition is society-relative this is really hard to do. I might say: I think longtermism is true, but we should try to get more and more people to adopt longtermism; then longtermism will become false, and we won't actually want people to be longtermists any more — but we would still want them to be longtermist about 2019.

I think this happens because "longtermism" doesn't really sound like it's about a problem, so our brains don't want to parse it that way.

How about a minimal definition which tries to dodge this issue:
> Longtermism is the view that the very long term effects of our actions should be a significant part of our decisions about what to do today
?

(You gesture at this in the post, saying "I think the distinctive idea that we should be trying to capture is the idea of trying to promote good long-term outcomes"; I agree, and prefer to just build the definition around that.)

I think some of the differences in opinion about what the definition should be may be arising because there are several useful but distinct concepts:
A) an axiological position about the value of future people (as in Hilary's suggested minimal definition)
B) a decision-guiding principle for personal action (as I proposed in this comment)
C) a political position about what society should do (as in your suggested minimal definition)

I think it's useful to have terms for each of these. There is a question about which if any should get to claim "longtermism".

I think that for use A), precision matters more than catchiness. I like Holly's proposal of "temporal cosmopolitanism" for this.

To my mind B) is the meaning that aligns closest with the natural language use of longtermism. So my starting position is that it should get use of the term. If there were a strong reason for it not to do so, I suppose you could call it e.g. "being guided by long-term consequences".

I think there is a case to be made that C) is the one in the political sphere and which therefore would make best use of the catchy term. I do think that if "longtermism" is to refer to the political position, it would be helpful if it were as unambiguous as possible that it were a political position. This could perhaps be achieved by making "longtermist" an informal short form of the more proper "member of the longtermism movement". Overall though, I feel uncompelled by this case, and like just using "longtermist" for the thing it sounds most like — which in my mind is B).

An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs. This rules out views on which we should discount the future or ignore the long-run indirect effects of our actions, but it would not rule out views on which it’s just empirically intractable to try to improve the long-term future.

I’ve referred to this definition as “temporal cosmopolitanism.” Whatever we call it, I agree that we should have some way of distinguishing the view that the time at which something occurs is morally arbitrary from a view that prioritizes acting today to try to affect the long-run future.

An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs.

Just to make a brief, technical (pedantic?) comment, I don't think this definition would give you what you want. (Strict) Necessitarianism holds that the only persons who matter are those who exist whatever we do. On such a view, the practical implication is, in effect, that only present people matter. The view is thus not longtermist on your chosen definition. However, Necessitarianism doesn't discount for time per se (the effective discounting tracks contingency of existence rather than time) and hence is longtermist on the quoted definition.

Nice, thanks, this is a good point. And it would pose problems for my definition too (see my clarification comment in response to Larks).

Perhaps we should include the 'no-difference' view as part of the definition (in addition to just permutations across times). This is an intuitive view (I think Ben Grodeck has done a survey on this among the general public), but it would make the claim quite a bit more philosophically substantive.

Or we could just not worry about Necessitarianism counting - there'll always be some counterexamples, and if they are fringe views maybe that's ok. And, wouldn't someone who endorses Necessitarianism not believe (ii) of my definition? So it wouldn't be a counterexample to my definition after all. (Though it would for Hilary's).

(ii) Society currently privileges those who live today above those who will live in the future; and
(iii) We should take action to rectify that, and help ensure the long-run future goes well.

Do you mean Necessitarians wouldn't accept (iii) of the above? Necessitarians will agree with (ii) and deny (iii). (Not sure if this is what you were referring to).

I'm sympathetic to Necessitarianism, but I don't know how fringe it is. It strikes me as the most philosophically defensible population axiology that rejects long-termism, which leans me towards thinking the definition shouldn't fall foul of it. (I think Hilary's suggestion would fall foul of it, but yours would not).

I think it's worth pointing out that "longtermism" as minimally defined here is not pointing to the same concept that "people interested in x-risk reduction" was probably pointing at. I think the concept it was pointing at is generally called "futurism" (examples [1],[2]).

This could be a feature or a bug, depending on use case.

  • It could be a feature if you want a word to capture a moral underpinning common to many futurists' intuitions while, as you said, remaining "compatible with any empirical view about the best way of improving the long-run future", or to form a coalition among people with diverse views about the best ways to improve the long-run future.
  • It could be a bug if people started informally using "longtermism" interchangeably with "far futurism", especially if it created a motte-and-bailey style of argument in which the easily defensible minimal-definition claim that "future people matter equally" was used to respond to skepticism regarding claims that any specific category of efforts aiming to influence the far future is necessarily more impactful.

If you want to retain the feature of being "compatible with any empirical view about the best way of improving the long-run future" you might prefer the no-definition approach, because criterion (ii) is not philosophical, but an empirical view about what society currently wrongly privileges.

From the perspective of addressing the "bug" aspect, however, I think criteria (ii) and (iii) are good calls. They make some progress in narrowing who is a "longtermist", and they specify that it is ultimately a call to a specific action (so e.g. someone who thinks influencing the future would be awesome in theory but is intractable in practice can fairly be said not to meet criterion (iii)). In general, I think that in practice people are going to use "longtermist" and "far futurist" interchangeably regardless of what definition is laid out at this point. I therefore favor the second approach, with a minimal definition, as it gives a nod to the fact that it's not just a moral stance but advocates some sort of practical response.





Similar to Ollie and Larks, I'm slightly uncomfortable with

"(i) Those who live at future times matter just as much, morally, as those who live today;"

I'm pretty longtermist (I work on existential risk) but I'm not sure whether I think that those who live at future times matter "just as much, morally". I have some sympathy with the view that people nearer to us in space or time can matter more morally than those very distant - separately from the question of how much we can do to affect those people.

I also don't think it's necessary for the definition. A less strong definition would work as well. Something like:

"(i) Those who live at future times matter morally".

I thought this post was great for several reasons:
- It generated ideas and interesting discussion about the definition of one of the most important ideas that the community has developed.
- I regularly referred back to it as "the key discussion about what Longtermism means". I expect if Will published this as an academic paper, it would've taken years to come out and there wouldn't be as much public discussion. 
- I'm grateful Will used the forum to share his fairly early thoughts. This can be risky for a thinker like him, because it exposes him to public criticism.
- I'm glad Will shared his opinion that longtermism should be un-hyphenated. This has caught on (thankfully, in my view) and I think this post is partly why.

Really excited to see what Will does, what a promising young talent.

With regards to the phrase "the no definition approach", it seems to me that one should distinguish between the following two concepts:

1) Having an explicit, generally agreed-upon definition, but that being very unspecific and broad.

2) Having no explicit, generally agreed-upon definition at all.

In the below passage, the phrase "the no definition approach" seems to be used to express 1):

The first is the ‘no definition’ approach, suggested to me by Toby Ord:
Longtermism is a philosophy that is especially concerned with improving the long-term future.

This is because strictly speaking, the above statement seems to be an explicit definition (it clarifies the meaning of the term ‘longtermism’) that is very unspecific and broad.

On the other hand, it seems that the philosophies and movements discussed above which take the no definition approach don't have an explicit, generally agreed-upon definition at all (or at least I take it that you're implying that). Thus, it seems that you also use the phrase "the no definition approach" to cover 2).

It may be a bit confusing to use the phrase "the no definition approach" to express 1), since strictly speaking in those cases one does use an explicit definition. (However, maybe that phrase is sometimes used that way; I wouldn't know.) Also, you argue against the no definition approach with the following argument.

When I look at other -isms, there is often a lot of confusion around what the concept denotes, and this hinders those who want to encourage others to take action in line with the -ism.

As I understand the argument, it's saying that other movements and philosophies have experienced problems because they haven't had an explicit definition at all. If so that doesn't seem to be a good argument against having a very unspecific and broad explicit definition.

One possibility is to reserve the phrase "the no definition approach" for 2), whereas 1) could be called something else, e.g. "the ultraminimal definition approach". Then the argument could be something like this:

"One approach is the no definition approach. This leads to confusion as we've seen in other movements, etc.

Another approach is to use an ultraminimal definition. However, this is too permissive, and there is a risk that ‘longtermism’ would end up referring to strong longtermism, or even to very strong longtermism.

Hence we need a less permissive definition; a minimal definition."

(Obviously different terms could be used.)

Thus the arguments used above could still be used, but one would split the "no definition approach" into two different approaches, and use one of the two old arguments against each of the two new approaches.

Thanks for writing this! I agree it’s helpful to have terms clearly defined, and appreciate the degree of clarity you’re aiming for here.

I agree with Holly when she says:

“we should have some way of distinguishing the view that time at which something occurs is morally arbitrary from a view that prioritizes acting today to try to affect the long-run future”.

To me this seems to be differentiating between a strictly value claim (like Hilary’s) and one that combines value and empirical claims (like yours). So maybe it’s worth defining something like Hilary’s definition (or claim (i) in your definition) with another term (eg Holly’s suggestion of 'temporal cosmopolitanism' or something else like that). And at the same time defining longtermism as something that combines that value claim (or a version of it) with the empirical claim that society today is behaving in a way that is in conflict with this value claim, to create a normative claim about what we should do.

An alternative minimal definition [...] the (intrinsic) value of an outcome is the same no matter what time it occurs.

Why doesn't this do the job, if combined with the premise that we should maximize social welfare? I like to think in terms of a social planner maximizing welfare over all future generations. By assuming that value doesn't depend on time, we rule out pure time preference and thereby treat all generations equally. And maximizing social welfare gets us to stop privileging current generations (eg, by investing in reducing extinction risk at the expense of current consumption).

So I'd say: longtermism =df maximizing social welfare with no pure time preference.
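As a sketch of what I have in mind (standard notation, and the details are mine rather than anything canonical): the planner chooses a policy to maximize something like

$$W = \sum_{t=0}^{T} \beta^{t} N_t \, u(c_t),$$

where $N_t$ is the size of generation $t$, $u(c_t)$ its average welfare, and $\beta$ the pure-time-preference discount factor. Longtermism in this sense is just the requirement that $\beta = 1$: every generation's welfare gets equal weight, and the present generation is privileged only insofar as its choices affect everyone downstream.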

The only reason I don’t identify as longtermist is tractability. I would appreciate a definition that allowed me to affirm that the point in time at which a being exists is morally arbitrary without also committing me to focusing my efforts on the long term.

Yes, it's a bit question-begging to assert that the actions with the highest marginal utility per dollar are those targeting long-term outcomes.

This definition avoids issues with being falsified by empirical questions of tractability, as well as flip-flopping between short- and longtermism.

A complication perhaps worth noting is that although "longtermism" refers to the philosophical view that this post tries to characterize, "longtermist" may mean either "related to longtermism" (in that sense) or "related to the long-term future". Something—e.g. a policy—may be longtermist in the second sense without being longtermist in the first sense, both because it does not take a stance on the question concerning the relative moral importance of the short- vs the long-term, and because it may consider the "long-term future" as being decades, centuries or millennia from now, rather than millions, billions or trillions of years as longtermism understands that expression. (As an example, consider a recent post on "Singapore’s Long-termist Policy", which uses the term "longtermist" in the second of the two senses identified above.)

A quick note: from googling longtermism, the hyphenated version ('long-termism') is already in use, particularly in finance/investment contexts, but in a way that in my view is broadly compatible with the use here, so I personally think it's fine (Will's version being much broader/richer/more philosophical in its scope, obviously).

long-termism in British English

NOUN

the tendency to focus attention on long-term gains

https://www.collinsdictionary.com/dictionary/english/long-termism

Examples:

https://www.americanprogress.org/issues/economy/reports/2018/10/02/458891/corporate-long-termism-transparency-public-interest/

https://www.cisl.cam.ac.uk/business-action/sustainable-finance/investment-leaders-group/promoting-long-termism

https://www.institutionalinvestor.com/article/b14z9mxp09dnn5/long-termism-versus-short-termism-time-for-the-pendulum-to-shift

Thanks for writing this, Will!

I’ve been firmly in the no-hyphen camp since I came across the term but haven't been able to articulate my reasons. These arguments make a lot of sense to me.

Those who live at future times matter just as much, morally, as those who live today

This could be interpreted as “the sum value of present generations = the sum value of future generations”. I’d have thought something like “Those who live at future times matter morally” leaves room for the implication that our impact on the long-term future is of overwhelming importance. I haven’t thought about this much though, just a reaction.

Great post!

In general, if I imagine ‘longtermism’ taking off as a term, I imagine it getting a lot of support if it designates the first concept, and a lot of pushback if it designates the second concept. It’s also more in line with moral ideas and social philosophies that have been successful in the past: environmentalism claims that protecting the environment is important, not that protecting the environment is (always) the most important thing; feminism claims that upholding women’s rights is important, not that doing so is (always) the most important thing. I struggle to think of examples where the philosophy makes claims about something being the most important thing, and insofar as I do (totalitarian Marxism and fascism are examples that leap to mind), they aren’t the sort of philosophies I want to emulate.

Maybe this is the wrong reference class, but I can think of several others: utilitarianism, Christianity, consequentialism, where the "strong" definition is the most natural that comes to mind.

I.e., a naive interpretation of Christian philosophy is that following the word of God is the most important thing (not just one important thing among many). Similarly, utilitarians would usually consider maximizing utility to be the most important thing, and consequentialists would probably consider consequences to be more important than other moral duties, etc.

The minimal definition is:

>Longtermism is the view that:

>(i) Those who live at future times matter just as much, morally, as those who live today;

>(ii) Society currently privileges those who live today above those who will live in the future; and

>(iii) We should take action to rectify that, and help ensure the long-run future goes well.

It doesn't really tell people why we care about the long-term future so much. We do it because there could be many, many more people living in the long-term future than are living right now. This consideration seems more important to me than the fact that society privileges those who live today.

I wonder: did you not include this consideration in any of the definitions because it's too difficult for people to understand?

This post strikes me as fairly pedantic. Is there a live confusion it's intending to solve?

The Wittgensteinian / Eliezerian view (something like "words are labels pointing to conceptual clusters that have fuzzy boundaries") seems to fully dissolve the need to precisely specify definitions of words.

‘Longtermism’ is a new term, which may well become quite common and influential. The aim in giving the term a precise meaning while we still have the chance is to prevent confusions before they arise. This is particularly important if you’re hoping that a research field will develop around the idea. I think that this is really crucial.

Some confusions that happened in part because we were slow to give ‘Effective Altruism’ a precise definition: people unsure about how much sacrifice EA required, sometimes seeing it as extremely demanding; people unsure whether you could be focused on preserving nature for its own sake and count as an EA; people seeing it as no different from applied utilitarianism.

Some confusions that are apt to arise with respect to longtermism:

  • are we talking about the strong version or the minimal version? The former is a lot more unintuitive; do we want to push that?
  • How long is long term? Are you in the longtermist club if you’re focused on the next hundred years? What if you’re focused on climate change? (that’s a bit of important pedantry I didn’t get into in the post!)
  • Are you committed to a particular epistemology? Ben Kuhn seemed to think so. But my next post is on what I call ‘boring longtermism’, which separates out longtermism from some other claims that long-term oriented EAs tend to endorse.
  • Is this just a thing for sci-fi nerds? Is this intellectual movement just focused on existential risk or something broader? Etc.

Thanks – I agree that confusions are likely to arise somewhere as a new term permeates the zeitgeist.

I don't think longtermism is a new term within EA or on the EA Forum, and I haven't seen any recent debates over its definition.

[Edited: the Forum doesn't seem like a well-targeted place for clarification efforts intending to address potential confusions around this (which seem likely to arise elsewhere)]. Encyclopedia entries, journal articles, and mainstream opinion pieces all seem better targeted to where confusion is likely to arise.

Even if the Forum isn't a "well-targeted place" for a certain piece of EA content, it still seems good for things to end up here, because "getting feedback from people who are sympathetic to your goals and have useful background knowledge" is generally a really good thing no matter where you aim to publish something eventually.

Perhaps there will come a time in the future when "longtermism" becomes enough of a buzzword to justify clarification in a mainstream opinion piece or journal article. At that point, it seems good to have a history of discussion behind the term, and ideally one meaning that people in EA already broadly agree upon. ("This hasn't been debated recently" =/= "we all have roughly the same definition that we are happy with".)

The definitions of many words are fuzzy in practice, but that doesn't mean it's ideal for things to be that way. And I seriously doubt it's ideal in technical research fields like philosophy or engineering.

In those cases shared and precise meanings can speed up research progress, and avoid terrible mistakes, by preventing misunderstandings.

Establishing some consistency in our terminology, like a shared definition of longtermism, strikes me as highly worthwhile.

I suspect the goal here is less to deconfuse current EAs and more to make it easier to explain things to newcomers who don't have any context.

(It also seems like good practice to me for people in leadership positions to keep people up to date about how they're conceptualizing their thinking)

Basically agree about the first claim, though the Forum isn't really aimed at EA newcomers.

(It also seems like good practice to me for people in leadership positions to keep people up to date about how they're conceptualizing their thinking)

Eh, some conceptualizations are more valuable than others.

I don't see how six paragraphs of Will's latest thinking on whether to hyphenate "longtermism" could be important to stay up-to-date about.


I downvoted this comment because:

i) The hyphenation segment clearly isn't the central argument of the post. This is a straw man.

ii) It's generally a bit dismissive and unkind.

iii) If you don't think something's important to stay up-to-date on, you don't have to read it or engage with it.


I didn't downvote as that would have caused the comment to be invisible, but do want to note that:

  • I find having to read comments that are written in an unnecessarily fight-y or dismissive style to be quite a significant tax on publishing blogposts, increasing the cost of publishing a blog post by 20% or more.
  • There are almost always easy alternatives to that style of language. Like, I am (clearly!) pretty anal about grammar issues in a way that's ripe for gentle ribbing - seems like an emoticon-laden joke would have conveyed the same sentiment in a nicer way. Or the direct approach could have been simply saying, "I think the paragraphs on hyphenation could have been relegated to a footnote or appendix" (which seems very reasonable).

I downvoted the above comment, because I think it is more critical than helpful in a way that mildly frustrates me (because I'm not sure quite what you meant, or how to update my views of the post in response to your critique) and seems likely to frustrate the author (for similar reasons).

What is your goal in trying to make points about whether this information is "important to stay up-to-date about" or worth being "six paragraphs" long?

Do you think this post shouldn't have been published? That it should have been shorter? That it would have been good to include more justification of the content before getting into detail about these definitions?

Raemon thought that it seems good for leaders to keep people updated on how they are conceptualizing things.

I argued that this doesn't seem true in all cases, pointing out that six paragraphs on whether to hyphenate "longtermism" isn't important to stay updated on, even when it comes from a leader.

---

For stuff like this, my ideal goal is something like "converge on the truth."

I usually settle for consolation prizes like "get more clarity about where & how I disagree with other folks in EA" and/or "note my disagreements as they arise."

... or how to update my views of the post in response to your critique

For what it's worth, I suspect there's enough inferential distance between us on fundamental stuff such that I wouldn't expect either of us to be able to easily update while discussing topics on this level of abstraction.

"Update my views of the post" probably wasn't the right phrase to use -- better would be "update my views of whether the post is a good thing to have on the Forum in something like its current form".

In general, I have a strong inclination that people should post content on the Forum if it is related to effective altruism and might be valuable to read for even a small fraction of users. I'm not concerned about too many posts being made (at least at the current level of activity).

I might be concerned if people seem to be putting more time/effort into posts than is warranted by the expected impact of those posts, but I have a high bar to drawing that conclusion; generally, I'd trust the author's judgment of how valuable a thing is for them to publish over my own, especially if they are an expert writing about something on which I am a non-expert.

Even if the information in this post wasn't especially new (I'm not sure which bits have and haven't been discussed elsewhere), I expect it to be helpful to EA-aligned people who find themselves trying to communicate about longtermism with people outside of EA, for some of the reasons Will outlined. I can imagine referring to it as I work on an edition of the EA Newsletter or prepare for an interview with a journalist.

--

Finally, on hyphenation:

a. There are at least two occasions in the last two months that I, personally, have had to decide how to spell "longtermism" in something written for a public audience. And while I am an unusual case...

b. ...hyphenation matters! Movements look less professional when they can't decide how to express terms they often use in writing (hyphenation, capitalization, etc.). Something like this makes me say "huh?" before I even start reading the article (written by a critic of a movement in that case, but the general point stands).

These are far from the most important paragraphs ever published on the Forum, but they do take a stand on a point with two reasonable sides and argue convincingly for one of them, in a way that could change how many readers refer to a common term.

I downvoted your comments as well, Milan, because I think this is exactly the kind of thing that should go on the EA Forum. The emergence of this term “longtermism” to describe a vaguer philosophy that was already there has been a huge, perhaps the main EA topic for like 2 years. I don’t even subscribe to longtermism (well, at least not to strong longtermism, which I considered to be the definition before reading this post) but the question of whether to hyphenate has come up many times for me. This was all useful information that I’m glad was put up for engagement within EA.

And the objection that words can never be precise is pretty silly. Splitting hairs can be annoying but this was an important consideration of meaningfully different definitions of longtermism. It’s very smart for EA to figure this out now to avoid all the problems that Will mentioned, like vagueness, when the term has become more widely known.

It sounded like your objection was that this post was about words and strategy instead of about the concepts. I for one am glad that EA is not just about thinking but about doing what needs to be done, including reaching agreement about how to talk about ideas and what kind of pitches we should be making.
