> I think the claim that pessimistic longtermism is evolutionarily selected for, because it would cause people to care more about their own families and kin than about far-off generations
Wait, sorry, what? No, it would cause people to work on making the future smaller or reducing s-risks or something. Pessimistic longtermists are still longtermists. They do care about far-off generations. They just think it's ideally better if those generations don't exist.[1]
Having clarified that, do you really not find optimistic longtermism more evolutionarily adaptive than pessimistic longtermism? (Let's forget about agnosticism here, for simplicity.) I mean, the former says "save humanity and increase population size" and the latter says the exact opposite. I find it hard not to think the former favors survival and reproduction more than the latter, all else equal, such that it is more likely to be selected for.
Is it just that we had different definitions of pessimistic longtermism in mind? (I should have been clearer, sorry.)
And btw, this is not necessarily because they make different moral assumptions than optimistic longtermists do. The disagreement might be purely empirical.
> I'm not sure why you think non-longtermist beliefs are irrelevant.
Nice. That's what makes us misunderstand each other, I think. (This is crucial to my point.)
Many people have no beliefs about what actions are good or bad for the long-term future (they are clueless or just don't care anyway). But some people do have such beliefs, and most of them believe x-risk reduction is good in the very long run. The most fundamental question I raise is: where do the beliefs of the latter type of people come from? Why do they hold them rather than holding that x-risk reduction is bad in the very long run, or being agnostic on this particular question?[1] Is it because x-risk reduction is in fact good in the long term (i.e., these people have the capacity to make judgment calls that track the truth on this question), or because of something else?
And then my post considers the potential evolutionary pressure towards optimism vis-a-vis the long-term future of humanity as a candidate for "something else".
So I'm not saying optimistic longtermism is more evolutionarily debunkable than, e.g., partial altruism towards your loved ones. I'm saying it is more evolutionarily debunkable than not-optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on how to feel about the long-term future of humanity). Actually, I'm not even really saying that, but I do think it, and this is why I chose to discuss an EDA against optimistic longtermism, specifically.
So if you want to disagree with me, you have to argue that:
A) Not-optimistic longtermism is at least as evolutionarily debunkable as optimistic longtermism, and/or
B) Optimistic longtermism is better explained by the possibility that our judgment calls vis-a-vis the long-term value of x-risk reduction track the truth than by something else.
Does that make sense?
So I'm interested in optimistic longtermism vs. not-optimistic longtermism (i.e., pessimistic longtermism OR agnosticism on the long-term value of x-risk reduction). Beliefs that the long-term future doesn't matter or something are irrelevant, here.
Oh interesting.
> I don't think there's any neutral way to establish whose starting points are more intrinsically credible.
So do I have any good reason to favor my starting points (/judgment calls) over yours, then? Whether to keep mine or adopt yours becomes an arbitrary choice, no?
Imagine you and I have laid out all the possible considerations for and against reducing x-risks and still disagree (because we make different opaque judgment calls when weighing these considerations against one another). Do you then agree that we have nothing left to discuss other than whether any of our judgment calls correlate with the truth?
(This, on its own, doesn't prove anything about whether EDAs can ever help us; I'm just trying to pin down which assumption I'm making that you aren't, or vice versa.)
Re (1): I mean, say we know that Alice's pro-natalism is 100% due to the mere fact that this belief was evolutionarily advantageous for her ancestors (and 0% due to good philosophical reasoning). This would discredit her belief, right? It wouldn't mean pro-natalism is incorrect. It would just mean that if it is correct, it is for reasons that have nothing to do with what led Alice to endorse it. She just happened to be "right for the wrong reasons". Do you at least agree with this in this particular contrived example, or do you think that evolutionary pressures can never be a reason to question our beliefs?
(Waiting for your answer on this before potentially responding to the rest as I think this will help us pin down the crux.)
Ah nice, thanks for these points, Cody.
> I'd be interested to see if you could defend the claim that pro-natalist beliefs have been selected for in human evolutionary history.
I mean... it's quite easy. There were people who, for some reason, were optimistic regarding the long-term future of humanity, and they had more children than others (and maybe a stronger survival drive), all else equal. The claim that there exists such a selection effect seems trivially true. The real question is how strong it is relative to, e.g., a potential indirect selection toward truth-tracking longtermist beliefs. I.e., the EDA against optimistic longtermism seems trivially valid. The question is how strong it is relative to other arguments. (And I'd really like for my potential paper to make progress on this, yeah!)
(Hopefully, the above also addresses your second bullet point.)
Now, you give potential reasons to believe the EDA is weak (thanks for that!):
> I've seen people reason themselves into and out of pro-natalist and anti-natalist stances, often using mathematical reasoning. I haven't seen any reason to believe that the pro-natalists' reasoning in particular is succumbing to evolutionary pressure.
You can't reason yourself into or out of something like optimistic longtermism just using math; you need to make too many subjective judgment calls. And the fact that you can reason yourself out of a belief does not mean there weren't evolutionary pressures toward that belief. It does show the pressure was at least not overwhelmingly strong, fair. But I don't think anyone was contesting that. You could say this about absolutely all evolutionary pressures on normative and empirical beliefs: I don't think any is so strong that we can't reason ourselves out of it. But that doesn't mean our beliefs can't have suspicious origins.
On person-affecting beliefs: the vast majority of people holding these are not longtermists to begin with. What we should be wondering is: to the extent that we have intuitions about what is best for the long term (and care about this), where do these intuitions come from? Non-longtermist beliefs are irrelevant, here. Hopefully, this also addresses your last bullet point.
No deadline? Can I register the day before? Or do you expect to potentially reach full capacity at some point before that?