A rough, unpolished, and mostly rhetorical post trying to find a connection between demandingness and fanaticism.
One objection to strong longtermism is how demanding it is.[1] There's a seeming "Pascal's mugging" that occurs when longtermist expected values are computed over numbers of future people orders of magnitude larger than the present population. But, I propose, people who accept the demandingness of a Singer-style utilitarianism shouldn't treat the demandingness of longtermism any differently.
There's an argument that attempts a reductio ad absurdum: if the longtermist can reduce existential risk by "one billionth of one billionth of one percentage point,"[2] she should do so, since, assuming an astronomically large number of future lives, she would thereby save billions of billions of people in expectation. Thus she must fanatically serve the future, abandoning her own present-day people for an unliving and unlikely alien race.
Recall that someone who buys Singer's arguments in the strongest sense,[3] and who is choosing between fulfilling their childhood dream of becoming a musician and saving three children by living a life of destitution, must choose the destitution. If with $10 you can roll a 500-sided die for a 1/500 chance of saving a child's life, or you can buy yourself a coffee, you must spend the $10 on rolling the die, for 1/500 of a life (assuming the standard 30 years) is about three weeks of life. Three weeks of life outweigh the ten minutes of satisfaction a coffee might bring you. The numbers add up, and an egalitarian, other-focused ethics drives down the value of the desires of the self as rapidly as the number of other human lives at stake increases.
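For concreteness, here is the expected-value arithmetic behind the die roll (assuming, as above, 30 years of life per child saved and 365-day years):

$$\frac{1}{500} \times 30 \text{ years} \times 365 \,\frac{\text{days}}{\text{year}} \approx 21.9 \text{ days}$$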
Whenever we expand our circle of moral concern, Singerian utilitarianism demands ever more of us. If we accept that future humans have moral standing similar to presently existing ones, then rejecting the demandingness of expanding moral concern from all present humans to all current and possible future humans seems as dispreferable as rejecting the demandingness we face when expanding moral concern from our immediate kin to all of current humanity.[4] When we expanded our circle of compassion beyond our kin, our rights to take care of our children, to favor our families, to love our parents, were called into question. And now that we expand once more, indiscriminately to persons across the future light cone, our old rights are again called into question: to do good for present people, to save lives that we can see and feel and touch, hold and embrace.
Those unwilling to accept this sort of demandingness from the longtermist perspective should probably get very comfortable with discounting, or else reject one of the fundamental premises: normativity, aggregative consequentialism, egalitarianism, or the moral standing of potential future people. Those unwilling to accept demandingness, but who still have some desire to help current or future people in a maximizing, aggregative, egalitarian way, might wish to take on neartermist or longtermist causes as a non-normative project,[5] which allows them to avoid demandingness and take their altruism to whatever length appeals most to them, flavoring it as they like: neartermist, longtermist, or otherwise.
- ^
In this post, I assume that strong longtermism is true. (It certainly might not be.) From "The case for strong longtermism" (Greaves & MacAskill 2021):
strong longtermism: the view that impact on the far future is the most important feature of our actions today
- ^
From the seemingly mistaken but nevertheless oft-quoted paper "Existential Risk Prevention as Global Priority" (Bostrom 2013).
Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilisation [referring to his estimate of 10^52 future lives] a mere 1 per cent chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.
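A back-of-the-envelope check (my arithmetic, not Bostrom's, assuming the quantities simply multiply) suggests where the apparent mistake lies:

$$\underbrace{10^{52} \times 10^{-2}}_{\text{1\% of the lower bound}} \times \underbrace{10^{-9} \times 10^{-9} \times 10^{-2}}_{\text{risk reduction}} = 10^{30} \text{ expected lives,}$$

whereas "a hundred billion times as much as a billion human lives" is only $10^{11} \times 10^{9} = 10^{20}$ lives.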
- ^
In his famous 1972 paper "Famine, Affluence, and Morality," Singer writes:
The strong version, which required us to prevent bad things from happening unless in doing so we would be sacrificing something of comparable moral significance, does seem to require reducing ourselves to the level of marginal utility.
- ^
And similarly as dispreferable, perhaps, as rejecting the demandingness when we expand our moral concern to the hundred billion animals killed by present-day humans every year.
- ^
Cf. "Effective Altruism," International Encyclopedia of Ethics (MacAskill & Pummer 2020):
Since effective altruism is a project rather than a normative claim, it is possible for one to both adopt this project as well as accept a nonwelfarist conception of the good (or indeed to adopt multiple projects, some of which involve promoting welfarist good and some of which involve promoting nonwelfarist good).