Hello! My name is Vaden Masrani and I'm a grad student at UBC in machine learning. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become very worried about the new longtermist trend developing recently.
I've written a critical review of longtermism here in the hope that bringing an 'outsider's' perspective might help stimulate some new conversation in this space. I'm posting the piece on the forum hoping that William MacAskill and Hilary Greaves might see and respond to it. There's also a small reddit discussion forming that might be of interest to some.
Cheers!
Thanks! I think that there's quite a lot of good content in your critical review, including some issues that really should be discussed more. In my view there are a number of things to be careful of, but ultimately not enough to undermine the longtermist position. (I'm not an author on the piece you're critiquing, but I agree with enough of its content to want to respond to you.)
Overall I feel like a lot of your critique is not engaging directly with the case for strong longtermism; rather you're pointing out apparently unpalatable implications. I think this is a useful type of criticism, but one that often leads me to suspect that neither side is simply incorrect, and to look instead for a good synthesis position that incorporates all of the important points. (Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.)
The point I most appreciate you making is that it seems like strong longtermism could be used to justify ignoring all sorts of pressing present problems. I think that this is justifiably concerning, and deserves attention. However my view is more like "beware naive longtermism" (rather like "bew... (read more)
In response to the plea at the end (and the quoting of Popper) to focus on the now over the utopian future: I find myself sceptical and ultimately wanting to disagree with the literal content, yet feeling that there is a good deal of helpful practical advice there:
Regarding the point about the expectation of the future being undefined: I think this is correct and there are a number of unresolved issues around exactly when we should apply expectations, how we should treat them, etc.
Nonetheless I think that we can say that they're a useful tool on lots of scales, and many of the arguments about the future being large seem to bite without relying on getting far out into the tails of our hypothesis space. I would welcome more work on understanding the limits of this kind of reasoning, but I'm wary of throwing the baby out with the bathwater if we say we must throw our hands up rather than reason at all about things affecting the future.
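To make the "without getting far out into the tails" point concrete, here is a minimal sketch (every credence and population figure below is hypothetical, chosen purely for illustration): truncate the hypothesis space at a finite horizon so that every expectation is an ordinary finite sum, and the future can still dominate numerically.

```python
# A minimal sketch (all numbers hypothetical): restrict attention to a few
# bounded scenarios, so the expectation below is a plain finite sum, and
# compare the expected number of future people with the present population.

present_people = 8e9

# (credence, total future people) for some truncated, finite-horizon scenarios
truncated_futures = [
    (0.50, 1e11),  # humanity lasts about as long as a typical mammal species
    (0.10, 1e14),  # civilisation persists for millions of years
    (0.01, 1e16),  # a figure of the kind Greaves and MacAskill discuss
]

expected_future_people = sum(p * n for p, n in truncated_futures)
print(expected_future_people / present_people)  # ~1.4e4: the future dominates
```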
To see more discussion of this topic, I particularly recommend Daniel Kokotajlo's series of posts on tiny probabilities of vast utilities.
As a minor point, I don't think that discounting the future really saves you from undefined expectations, as you're implying. I think that on simple models of future growth -- such as are often used in practice -- it does, but if you give some credence to wild futures with crazy growth rates, then it's easy to make the entire thing undefined even with a positive discount rate for pure time preference.
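Here is a sketch of that point (all numbers hypothetical): with a pure-time-preference discount rate delta, a simple growth model converges, but mixing in even a tiny credence in a future whose growth rate exceeds delta makes the discounted partial sums grow without bound.

```python
# A sketch (all numbers hypothetical): a positive discount rate delta tames
# the simple growth models used in practice, but even a tiny credence in a
# "wild" future with growth rate g > delta makes the discounted expectation
# diverge -- the partial sums below never settle down.

delta = 0.03          # annual discount rate for pure time preference
g_wild = 0.05         # hypothetical wild growth rate, g > delta
credence_wild = 1e-6  # tiny credence assigned to the wild-growth future

partial_sum = 0.0
for t in range(1, 10_001):
    partial_sum += credence_wild * ((1 + g_wild) / (1 + delta)) ** t
    if t % 2_000 == 0:
        print(t, partial_sum)  # grows without bound as t increases
```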
Hey Owen - thanks for your feedback! Just to respond to a few points -
>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.
Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ). If we agree EVs are undefined over possible futures, then in the Shivani example, this is like comparing 3 lives to NaN. Does this not refute at least one of the two assumptions longtermism needs to 'get off the ground'?
> Overall I feel like a lot of your critique is not engaging directly with the case for strong longtermism; rather you're pointing out apparently unpalatable implications.
Just to comment here - yup I intentionally didn't address the philosophical arguments in favor of longtermism, just because I felt that criticizing the incorrect use of expected values was a "deeper" critique and one which I hadn't seen made on the forum before. What would the argument for strong longtermism look like without the expected val... (read more)
I think it proves both too little and too much.
Too little, in the sense that it's contingent on things which don't seem that related to the heart of the objections you're making. If we were certain that the accessible universe were finite (as is suggested by (my lay understanding of) current physical theories), and we had certainty in some finite time horizon (however large), then all of the EVs would become defined again and this technical objection would disappear.
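To spell out the finite case (a sketch of the standard definition, not anything taken from the paper):

```latex
% With a finite set F of possible futures, credences p(f), and bounded
% values V(f), the expectation is the finite sum
\mathbb{E}[V] \;=\; \sum_{f \in F} p(f)\,V(f),
\qquad \sum_{f \in F} p(f) \;=\; 1,
% which is always defined: with finitely many finite terms, no issue of
% convergence or measurability can arise.
```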
In that world, would you be happy to drop your complaints? I don't really think you should, so it would be good to understand what the real heart of the issue is.
Too much, in the sense that if we apply the argument naively then it appears to rule out using EVs as a decision-making tool in many practical situations (where subjective probabilities are fed... (read more)
Hi Owen! Really appreciate you engaging with this post. (In the interest of full disclosure, I should say that I'm the Ben acknowledged in the piece, and I'm in no way unbiased. Also, unrelatedly, your story of switching from pure maths to EA-related areas has had a big influence over my current trajectory, so thank you for that :) )
I'm confused about the claim
This seems in direct opposition to what the authors say (and what Vaden quoted above), namely that:
I understand that they may not feel this way, but it is what they argued for and is, consequently, the idea that deserves to be criticized. Next, you write that if
I don't t... (read more)
I can see two possible types of arguments here, which are importantly different.
[ETA: In this comment, which I hadn't seen before writing mine, Vaden seems to confirm that they were trying to make an argument of the second rather than the first kind.]
In this... (read more)
Technical comments on type-1 arguments (those aiming to show there can be no probability measure). [Refer to the parent comment for the distinction between type 1 and type 2 arguments.]
I basically don't see how such an argument could work. Apologies if that's totally clear to you and you were just trying to make a type-2 argument. However, I worry that some readers might come away with the impression that there is a viable argument of type 1 since Vaden and you mention issues of measurability and infinite cardinality. These relate to actual mathematical results showing that for certain sets, measures with certain properties can't exist at all.
However, I don't think this is relevant to the case you describe. And I also don't think it can be salvaged for an argument against longtermism.
First, in what sense can sets be "immeasurable"? The issue can arise in the following situation. Suppose we have some set (in this context a "sample space" - think of its elements as all the possible instances of things that can happen, at the most fine-grained level), and some measure (in this context "probability" - but it could also refer to something we'd intuitively call length or volume) we ... (read more)
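To gesture at why no type-1 argument can work without further requirements (this is a standard textbook observation, echoing the Dirac-measure point made elsewhere in this thread):

```latex
% For any nonempty sample space \Omega, of any cardinality, and any fixed
% point \omega_0 \in \Omega, the Dirac measure
\delta_{\omega_0}(A) \;=\;
  \begin{cases}
    1 & \text{if } \omega_0 \in A,\\
    0 & \text{otherwise,}
  \end{cases}
\qquad A \subseteq \Omega,
% is a countably additive probability measure defined on the whole power
% set of \Omega. So "no probability measure exists on the set of futures"
% is false as stated; the classic immeasurability results (e.g. Vitali
% sets) only rule out measures with extra properties, such as translation
% invariance on the reals.
```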
It's all good! Seriously, I really appreciate the engagement from you and Vaden: it's obvious that you both care a lot and are offering the criticism precisely because of that. I currently think you're mistaken about some of the substance, but this kind of dialogue is the type of thing which can help to keep EA intellectually healthy.
So my interpretation had bee... (read more)
You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments.
Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)
I second what Alex has said about this discussion being very valuable pushback against ideas that have got some traction - at the moment I think that strong longtermism seems right, but it's important to know if I'm mistaken! So thank you for writing the post & taking some time to engage in the comments.
On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the infinite different possible universes where someone shouts a different natural number".
But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there would only be finitely many natural numbers you could finish shouting before the universe ends, so this doesn't show there are infinitely many possible universes. (Of course, there might be other arguments for infinite possible futures.)
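To make the bound concrete, here is a minimal sketch (both physical constants are hypothetical placeholders, not serious cosmology):

```python
# A sketch (both numbers hypothetical): with a finite remaining lifetime
# for the universe and a minimum time needed to utter a syllable, only
# finitely many natural numbers can ever be shouted.

seconds_remaining = 1e100  # hypothetical remaining lifetime of the universe
syllables_per_second = 10  # hypothetical maximum speaking rate

max_syllables = seconds_remaining * syllables_per_second  # = 1e101

# Shouting n in decimal takes at least one syllable per digit, so any
# shoutable n has at most max_syllables digits. The shoutable numbers are
# therefore contained in the finite set {n : n < 10 ** max_syllables} --
# enormous, but finite.
print(f"every shoutable n satisfies n < 10**{max_syllables:.0e}")
```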
More generally, I think I agree with Owen's point that if we make the (strong) assumption the universe is finite in duration and finite in possible states, and can quantise time, then it fol... (read more)
Hi Vaden, thanks again for posting this! Great to see this discussion. I wanted to get further along C&R before replying, but:
If we're assuming that time is finite and quantized, then wouldn't these assumptions (or, alternatively, finite time + the speed of light) imply a finite upper bound on how many syllables someone can shout before the end of the universe (and therefore a finite upper bound on the size of the set of shoutable numbers)? I thought Isaac was making this point; not that it's physically impossible to shout all natural numbers sequentially, but that it's physically impossible to shout any of the natural numbers (except for a finite subset).
(Although this may not be crucial, since I think you can still validly make the point that Bayesians don't have the option of, say, totally ruling out faster-than-light number-pronunciation as absurd.)
I think the crucial point of outstanding disagreement is that I agree with Greaves and MacAskill that by far the most important effects of our actions are likely to be temporally distant.
I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value. Of course, there are also important instrumental reasons to attend to the intrinsic value of various effects, so I don't think intrinsic value should be ignored either.
In their article vadmas writes:
Some of your comments, including this one, seem to me to be defending simple or weak longtermism ('by far the most important effects are likely to be temporally distant'), rather than strong longtermism as defined above. I can imagine a few reasons for this:
At the moment, I don't have a great sense of which one is the case, and think clarity on this point would be useful. I could also have missed another way to reconcile these.
I think it's a combination of a couple of things.
When I said "likely", that was covering the fact that I'm not fully bought in.
Both (i) and (ii) are arguably technicalities (and I guess that the authors would cede the points to me), but (ii) in particular feels very important.
If someone who I have trusted with working out the answer to a complicated question makes an error that I can see and verify, I should also downgrade my assessment of all their work which might be much harder for me to see and verify.
Related: Gell-Mann Amnesia
(Edit: Also related, Epistemic Learned Helplessness)
Thanks for writing this. I think it's very valuable to be having this discussion. Longtermism is a novel, strange, and highly demanding idea, so it merits a great deal of scrutiny. That said, I agree with the longtermist thesis and don't currently find your objections against longtermism persuasive (although in one case I think they suggest a specific set of approaches to longtermism).
I'll start with the expected value argument, specifically the note that the probabilities here are uncertain and therefore random variables, whereas in traditional expected utility theory they're constants. To me, a charitable version of Greaves and MacAskill's argument is that, taking the expectation over the probabilities times the outcomes, you have a large future in expectation. (What you need for the randomness of the probabilities to sink longtermism is for the probabilities to correlate inversely and strongly with the size of the future.) I don't think they'd claim the probabilities are certain.
Maybe the claim you want to make, then, is that we should treat random probabilities differently from certain probabilities, i.e. you should not "take expectations" over probabilities in the way I've described. The problem with this is that (a) a... (read more)
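To make the "taking expectations over the probabilities" move concrete, here is a minimal sketch (the Beta prior and the payoff figure are made up purely for illustration):

```python
# A minimal sketch of expectations over uncertain probabilities (prior and
# payoff are made up for illustration). If p is itself a random variable
# with prior Beta(a, b), and the payoff is V with probability p, the law
# of total expectation gives E[p * V] = E[p] * V = (a / (a + b)) * V.
# Uncertainty in p only changes this if p correlates with V.

import random

a, b = 1, 99  # sceptical prior over p, with E[p] = 0.01
V = 1e16      # hypothetical value of the good outcome

samples = [random.betavariate(a, b) * V for _ in range(100_000)]
monte_carlo = sum(samples) / len(samples)
analytic = (a / (a + b)) * V

print(monte_carlo, analytic)  # both close to 1e14
```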
Hi Vaden,
Cool post! I think you make a lot of good points. Nevertheless, I think longtermism is important and defensible, so I’ll offer some defence here.
First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1/6 to the hypothesis that the die lands on 1.
Suppose, for example, that I am offered a choice: either bet on a six-sided die landing on 1 or bet on a twenty-sided die landing on 1. If both probabilities are undefined, then it seems I can permissibly bet on either. But clearly I ought to bet on the six-sided die.
Now you may say that we have a measure over the set of outcomes when we’re rolling a die and we don’t have a measure over the set of futures. But it’s unclear to me what measure could apply to die rolls but not to futures.
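One way to see why infinitely many underlying "ways of rolling" are harmless (a sketch, not a proposal for which measure to put on futures): model the roll as a continuous latent state drawn from an uncountable sample space, and push it forward onto the six faces; each face still gets probability 1/6.

```python
# A sketch of a pushforward measure: an uncountable sample space (a latent
# state in [0, 1), standing in for shouted numbers, air currents, etc.)
# coarse-grained onto six outcomes, each of which keeps probability 1/6.

import random

def roll(latent: float) -> int:
    """Map a fine-grained state in [0, 1) to a die face in {1, ..., 6}."""
    return int(latent * 6) + 1

trials = 600_000
counts = [0] * 6
for _ in range(trials):
    counts[roll(random.random()) - 1] += 1

print([c / trials for c in counts])  # each entry close to 1/6
```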
And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities... (read more)
"The Case for Strong Longtermism" is subtitled "GPI Working Paper No. 7-2019," which leads me to believe that it was originally published in 2019. Many of the things you listed (two of the podcast episodes, the fund, and several of the blog and forum posts) are from before 2019. My impression is that the paper (which I haven't read) is more a formalization and extension of various existing ideas than a totally new direction for effective alturism.
The word "longtermism" is new, which may contribute to the impression that the ideas it describes are too. This is true in some cases, but many people involved with effective altruism have long been concerned about the very long run.
Hi all! Really great to see all the engagement with the post! I'm going to write a follow up piece responding to many of the objections raised in this thread. I'll post it in the forum in a few weeks once it's complete - please reply to this comment if you have any other questions and I'll do my best to address all of them in the next piece :)
Thanks for writing this, I'm reading a lot of critiques of longtermism at the moment and this is a very interesting one.
Apart from the problems that you raise with expected value reasoning about future events, you also question the lack of pure time preference in the Greaves-MacAskill paper. You make a few different points here, some of which could co-exist with longtermism and some couldn't. I was wondering how much of your disagreement might be meaningfully recast as a differing opinion on how large your impartial altruistic budget should be, as an indiv... (read more)
This [The ergodicity problem in economics] seems like it could be important, and might fit in somewhere with the discussions of expected utility. I haven't really got my head around it though.
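In case it helps others get their heads around it, here is a minimal sketch of the standard example from that literature, the multiplicative coin flip (parameters are the usual illustrative ones):

```python
# A sketch of the multiplicative coin flip from the ergodicity-economics
# literature: each round, wealth is multiplied by 1.5 (heads) or 0.6
# (tails). The ensemble average grows 5% per round, but the time-average
# growth factor is sqrt(1.5 * 0.6) ~= 0.95, so almost every individual
# trajectory shrinks toward zero in the long run.

import random

ensemble_factor = 0.5 * 1.5 + 0.5 * 0.6  # 1.05 per round, in expectation
time_avg_factor = (1.5 * 0.6) ** 0.5     # ~0.9487 per round, per trajectory
print(ensemble_factor, time_avg_factor)

wealth = 1.0
for _ in range(10_000):
    wealth *= 1.5 if random.random() < 0.5 else 0.6
print(wealth)  # almost surely astronomically small
```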
Greaves and MacAskill do discuss risk aversion, uncertainty/ambiguity aversion and the issue of seemingly arbitrary probabilities in sections 4.2 and 4.5. They admit that risk aversion with respect to the difference one makes does undermine strong longtermism (and I think ambiguity aversion with respect to the difference one makes would, too, although it might also lead you to do as little as possible to avoid backfiring), although they cited (Snowden, 2015) claiming that aversion with respect to the difference one makes is too agent-relative and therefo... (read more)
I wrote up my understanding of Popper's argument on the impossibility of predicting one's own knowledge (Chapter 22 of The Open Universe) that came up in one of the comment threads. I am still a bit confused about it and would appreciate people pointing out my misunderstandings.
Consider a predictor:
A1: Given a sufficiently explicit prediction task, the predictor predicts correctly
A2: Given any such prediction task, the predictor takes time to predict and issue its reply (the task is only completed once the reply is issued).
T1: A1, A2 => Given a self-predi... (read more)
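On my (possibly mistaken) reading, the timing structure of the argument can be sketched like this -- treat everything below as my reconstruction rather than Popper's own formulation:

```latex
% A sketch of the timing structure, on my possibly mistaken reading.
% Let the task, issued at time t_0, be: predict the predictor's own total
% state at a later time t_1 > t_0, where that state includes any reply
% the predictor has issued.
% By A2, issuing a reply takes time \tau > 0; by A1, the reply must
% correctly describe the state at t_1 -- including the reply itself:
\text{reply} = F\big(\text{state}(t_1)\big),
\qquad \text{reply} \subseteq \text{state}(t_1).
% So the completed reply is an input to the very computation that is
% supposed to produce it, and the regress cannot in general be closed
% within the available time t_1 - t_0; complete self-prediction fails.
```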
I am confused about the precise claim made regarding the Hilbert Hotel and measure theory. When you say "we have no measure over the set of all possible futures", do you mean that no such measures exist (which would be incorrect without further requirements: https://en.wikipedia.org/wiki/Dirac_measure , https://encyclopediaofmath.org/wiki/Wiener_measure ), or that we don't have a way of choosing the right measure? If it is the latter, I agree that this is an important challenge, but I'd like to highlight that the situati... (read more)
Note that in that text Popper says:
And that he rejects only
My guess is that everyone in this discussion (including MacAskill and Greaves) agrees with this, at least as a claim about what's currently possible in practice. On the other hand, it seems uncontroversial that some forms of long-run prediction are possible (e.g. above you've conceded they're possible for some astronomical systems).
Thus it seems to me that the key question is whether longterm... (read more)
Great and interesting theme!
(I've just written a bunch of thoughts on this post in a new EA Forum post.)
Just saw this, which sounds relevant to some of the comment discussion here:
https://twitter.com/OxfordPopper/status/1343989971552776192?s=20
Hi Vaden,
I'm a bit late to the party here, I know. But I really enjoyed this post. I thought I'd add my two cents' worth. Although I have a long-term perspective on risk and mitigation, and have long-term sympathies, I don't consider myself a strong longtermist. That said, I wouldn't like to see anyone (eg from policy circles) walk away from this debate with the view that it is not worth investing resources in existential risk mitigation. I'm not saying that's what necessarily comes through, but I think there is important middle ground (and this middl... (read more)