
In Defence of Fanaticism is a Global Priorities Institute Working Paper by Hayden Wilkinson. This post is part of my sequence of GPI Working Paper summaries.

Hilary Greaves and William MacAskill think objections to fanaticism are among the strongest counterarguments to strong longtermism. Such objections also underpin some of the strongest counterarguments to expected value theory. Thus, contemplating fanaticism is critical for comparing neartermist and longtermist causes. One of Greaves and MacAskill’s responses to this counterargument cites Hayden Wilkinson’s In Defence of Fanaticism, suggesting that, on balance, perhaps we should be fanatical.

Here I’ve done my best to summarize Wilkinson’s argument, making it more easily accessible while sacrificing as little argumentative strength as possible.

Introduction

Dyson’s Wager (modified for brevity)

Say you have $2,000 and must choose to donate it to either a charity that will certainly prevent one death from malaria or a charity that will research ‘positronium,’ which has a nonzero probability of bringing astronomically many blissful lives into existence in the far future. Expected value theory suggests you should give the $2,000 to the speculative positronium research. That conclusion is fanatical.
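
To see the expected value calculation behind this verdict, here is a minimal sketch; the probability and payoff figures are placeholders I’ve invented for illustration, not numbers from the paper:

```python
# Illustrative expected value (EV) comparison for Dyson's Wager.
# The probability and payoff below are invented placeholders.

p_success = 1e-30        # tiny but nonzero chance positronium research pays off
blissful_lives = 1e50    # astronomically many blissful lives if it does
lives_saved_malaria = 1  # the certain payoff of the malaria donation

ev_positronium = p_success * blissful_lives  # 1e20 lives in expectation
ev_malaria = 1.0 * lives_saved_malaria       # 1 life in expectation

print(ev_positronium > ev_malaria)  # True: EV theory favors the long shot
```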

Fanaticism Definition

  • Fanaticism: For any tiny (finite) probability ∊ > 0, and for any finite value v, there is some finite V large enough that Lrisky is better than Lsafe
  • Lrisky: Lottery with value V and probability ∊ > 0; value 0 otherwise
  • Lsafe: Lottery with value v and probability 1

Like positronium research, Lrisky offers a slim probability (∊) of astronomical value (V), while, like the malaria charity, Lsafe offers a modest value (v) with certainty.
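
Under expected value theory (which, as noted below, Wilkinson’s own argument will not rely on), it is easy to see why such a V always exists: EV(Lrisky) = ∊ · V exceeds EV(Lsafe) = v exactly when V > v/∊. A quick sketch (the function name is mine):

```python
def payoff_threshold(eps: float, v: float) -> float:
    """Under expected value theory, L_risky (value V with probability eps)
    beats L_safe (value v for certain) exactly when eps * V > v.
    Returns the threshold V = v / eps."""
    return v / eps

# However tiny eps is, any finite V above this threshold suffices:
print(payoff_threshold(eps=1e-12, v=1.0))  # 1e12
```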

Previous justifications for fanatical verdicts relied on expected value theory. However, Wilkinson thinks there is good reason to accept fanaticism even if we reject expected value theory: he argues that denying fanaticism, as defined above, has implausible consequences, and his argument nowhere relies on expected value theory. Wilkinson assumes totalism: we prefer outcomes with a higher total value (as opposed to, for example, a higher average value). The Egyptology and Indology Objections (discussed later) rely on totalism.

Intuitions against fanaticism

Fanaticism is so counterintuitive it must be false.

It seems highly implausible that one should give up a guaranteed good payoff for a tiny chance of something better, no matter how tiny the chance.

However, intuitions about probabilities are often misguided and fall prey to various fallacies[1]. Additionally, intuitive decision-making often ignores probabilities[2], radically over- or under-estimates them[3], and treats low-probability outcomes as if they had probability 0 (Wilkinson includes multiple examples of this in different contexts[4]). For instance, one study found jurors are just as likely to convict a defendant based on fingerprint evidence if that evidence has a 1 in 100 probability of being a false positive as if the probability were 1 in 1,000 or even 1 in 1 million[5].

Thus, intuitions about low probabilities lead us to foolishly ignore probabilities roughly 1% or lower. Our intuitive judgments against fanaticism may be similarly foolish, which warrants at least considering the case in favor of it.

A continuum argument

Consider the following two lotteries:

  • L0: value 1 (e.g., one life is saved) with probability 1 (certainty)
  • L1: value 10^10 (e.g., vastly more lives are saved) with probability 0.999999 (near-certainty); value 0 otherwise

Intuitively, L1 seems better. But now consider L2, which has a slightly lower probability of success but, if successful, saves many more lives.

  • L2: value 10^20 with probability 0.999999^2; value 0 otherwise

This seems better than L1. We could continue with L3, L4, and so on until some Ln such that 0.999999^n is less than ∊, for any arbitrarily small ∊ > 0.

Intuition suggests that vastly increasing the payoff can compensate for slightly decreasing the probability, so each lottery in this sequence is better than the last; and if each is better than the last, the final lottery must be better than the first. But the final lottery’s probability of any positive payoff is less than ∊. So we have Fanaticism.
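
A quick sketch of how long the sequence must run for a given ∊. I’ve assumed the pattern Ln: value 10^(10n) with probability 0.999999^n, which matches the lotteries above; see the paper for the exact construction:

```python
import math

# Each step in the sequence multiplies the payoff by 10^10 while
# multiplying the success probability by 0.999999. Find the smallest n
# at which the probability 0.999999**n drops below an arbitrary eps.

eps = 1e-10  # an arbitrarily small positive probability

n = math.ceil(math.log(eps) / math.log(0.999999))
print(n)                    # ~23 million steps
print(0.999999 ** n < eps)  # True: L_n now offers only a 'fanatical' chance

# Yet each L_{i+1} seemed better than L_i (Minimal Tradeoffs), so by
# Transitivity L_n is better than the guaranteed L_0.
```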

Transitivity and Minimal Tradeoffs Definitions

This continuum argument rests on two intuitive principles: 

  1. Transitivity: If La ≥ Lb and Lb ≥ Lc, then La ≥ Lc.
    1. That is, if a lottery (La) is at least as good as a second lottery (Lb), and the second lottery (Lb) is at least as good as a third lottery (Lc), then the first lottery (La) is at least as good as the third lottery (Lc).
  2. Minimal Tradeoffs: We can make tradeoffs between probability and value. For instance, we can always compensate for a slight decrease in the probability of success with a vastly greater payoff. (This is highly simplified and stated informally, so if you disagree with this principle or want more detail, please see the section on “Minimal Tradeoffs” on page 13 of the paper).

Given the continuum argument, to reject fanaticism, you must reject one of these two principles.

A dilemma for the unfanatical

To reject fanaticism, you must also reject Scale Independence (as defined below) or allow your lottery comparisons to be absurdly sensitive to tiny changes.

Scale Independence Definition

  • Scale Independence: For any lotteries La and Lb, if La ≥ Lb, then k · La ≥ k · Lb for any positive, real k.
    • That is, if La is at least as good as Lb, then after multiplying the value of both by the same factor (k), La is still at least as good as Lb.

Scale Independence seems highly plausible. After all, if you’re multiplying the values of both lotteries by k, why should their value relative to each other change?
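
Expected value theory, for example, satisfies Scale Independence automatically, because multiplying every payoff by k just multiplies both expectations by k. A toy check (my own illustration):

```python
# Toy check that scaling payoffs by k > 0 cannot reverse an EV ranking.

def ev(lottery):
    """Expected value of a lottery given as (probability, value) pairs."""
    return sum(p * v for p, v in lottery)

def scale(lottery, k):
    """Multiply every payoff in the lottery by k."""
    return [(p, k * v) for p, v in lottery]

la = [(0.5, 10.0), (0.5, 0.0)]  # La: 10 with probability 0.5
lb = [(1.0, 4.0)]               # Lb: 4 for certain

print(ev(la) >= ev(lb))                            # True
print(ev(scale(la, 1000)) >= ev(scale(lb, 1000)))  # still True, for any k > 0
```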

For fanaticism to be false, there must be some probability ∊ > 0 and value v that makes Lrisky no better than Lsafe, no matter how big V is.

  • Lrisky: value V with probability ∊; value 0 otherwise
  • Lsafe: value v with probability 1

Thus, to reject fanaticism, you must alter your comparisons of lotteries based on their scale—i.e., discount larger values (V) beyond the extent that their probability (∊) is lower[6]. This violates Scale Independence.

Absurd sensitivity to tiny changes

You can avoid this violation by asserting that a guaranteed value (v), no matter how small, is always better than a greater value (V) with probability ∊. However, that assertion means there must be a successive pair of probabilities (pi and pi+1) between 1 and ∊ such that no value at pi (no matter how large) is better than any value at pi+1 (no matter how small). If there were no such pair, we could create a sequence similar to that in the continuum argument.

Since pi and pi+1 can be arbitrarily close together, there could be astronomical value at pi and minuscule value at pi+1, and the latter would still be better. Thus, if we reject fanaticism, our evaluations of lotteries become absurdly sensitive to tiny changes in probability, which is both intuitively implausible and impractical for decision-making.
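
To make the sensitivity vivid, here is a toy illustration with an invented cutoff pair (the specific probabilities and values are mine):

```python
# Suppose the anti-fanatic view's 'jump' falls between the successive
# probabilities p_i = 0.01 and p_next = 0.010000001 (invented numbers).
# The view must then rank a minuscule payoff at p_next above an
# astronomical payoff at p_i.

p_i, p_next = 0.01, 0.010000001
astronomical, minuscule = 1e100, 1.0

# The probabilities differ by one part in ten million, yet the view's
# verdict ignores a ~1e100-fold difference in expected value:
print(p_i * astronomical)  # ~1e98
print(p_next * minuscule)  # ~0.01
```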

Egyptology and Indology

Wilkinson presents two objections in this section. As presented in his paper, these objections rely on totalism (that we prefer outcomes with a higher total value).

The Egyptology Objection

The Egyptology Objection is that denying fanaticism makes your moral decisions depend on events that aren't altered by your choice, including those in distant galaxies or ancient Egypt.

Background Independence Definition

  • Background Independence: For any lotteries La and Lb, and any outcome Ơ:
    if La ≥ Lb, then La + Ơ ≥ Lb + Ơ
    • That is, if La is at least as good as Lb, then after adding the constant value of an outcome (Ơ) to both lotteries, La is still at least as good as Lb.

Some rejections of fanaticism[7] violate Background Independence and thus fall prey to the Egyptology Objection. 

For instance, say Ơ is the value of an unaffected “background outcome” that occurred in ancient Egypt. If we violate Background Independence, our choice between La and Lb might change based on Ơ; that is precisely the Egyptology Objection.
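
To see how the bounded-utility proposal from footnote 7 runs into this, here is a toy sketch; the utility function and numbers are my own, not Wilkinson’s:

```python
# Toy demonstration that expected utility with a bounded utility
# function violates Background Independence. u(x) = x / (1 + x) is
# bounded above by 1; B is an unaffected 'background' value added to
# every outcome of both lotteries.

def u(x):
    return x / (1 + x)

def eu(lottery, background=0.0):
    """Expected utility of (probability, value) pairs plus a background."""
    return sum(p * u(v + background) for p, v in lottery)

l_safe = [(1.0, 1.0)]                # value 1 for certain
l_risky = [(0.2, 10.0), (0.8, 0.0)]  # value 10 with probability 0.2

print(eu(l_safe) > eu(l_risky))              # True:  safe wins when B = 0
print(eu(l_safe, 10.0) > eu(l_risky, 10.0))  # False: risky wins when B = 10
# Same two lotteries, opposite verdict, purely because of B.
```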

The Less Severe Egyptology Objection

But even rejections of fanaticism that don’t violate Background Independence give rise to a less severe Egyptology Objection. Wilkinson uses statistical distributions[8] to show how, if you are uncertain about the value of an event in ancient Egypt (B), rejecting fanaticism can lead to situations where Lrisky + B is considered better[9] than Lsafe + B, even though Lrisky is not better than Lsafe, again making our moral decisions depend on unaffected events, such as those in ancient Egypt.

The Indology Objection

Take the same Lrisky + B and Lsafe + B from above, where the former is considered better[9], yet Lrisky is not better than Lsafe. (For fanaticism to be false, such lotteries must exist.) But let’s change B to represent the very uncertain value of an event in the ancient Indus Valley.

You could research Indology for years and pin down B’s real value as b, with certainty. If so, instead of choosing between Lrisky + B and Lsafe + B, you’d choose between Lrisky + b and Lsafe + b.

If you accept Background Independence, you’d make the same choice between Lrisky + b and Lsafe + b as you’d make between Lrisky and Lsafe, so no matter what b you find, you’d make the same decision.

However, even though Lrisky is not better than Lsafe, you risk making the wrong decision when choosing between Lrisky + B and Lsafe + B, because, if you reject fanaticism, the uncertainty makes the former count as better[9]. Thus, you should perhaps spend years pinning down b—even though, if you accept Background Independence, you will make the same decision between Lrisky + b and Lsafe + b as you’d make between Lrisky and Lsafe, no matter the value of b! This seems more absurd than accepting fanaticism.

Conclusion

To recap, to deny Fanaticism…

  1. We must deny either Transitivity or Minimal Tradeoffs and accept the counterintuitive verdicts that follow. 
  2. We must either violate Scale Independence or become absurdly sensitive to tiny differences in probability and value.
  3. We must accept a less severe version of the Egyptology Objection: in some cases, morally correct judgments depend on our beliefs about far-off, unaffected events, such as those in ancient Egypt.
  4. We must either accept the severe version of the Egyptology Objection by denying Background Independence, or face the Indology Objection: we sometimes ought to make decisions that we know we would reject if we learned more, no matter what we might learn.

Hence, rejecting fanaticism, as intuitive as it may initially feel, has highly unintuitive ramifications. The cure is worse than the disease.

We should accept that it is better to produce some tiny probability of infinite moral gain (or arbitrarily high gain), no matter how tiny the probability, than it is to produce some modest finite gain with certainty.

Accepting fanaticism also removes some of the counterarguments to expected value theory and its implications (which, for example, arguably includes strong longtermism).

  1. ^

    Wilkinson lists "the Conjunction Fallacy (Tversky & Kahneman 1983), the Gambler’s Fallacy (Chen et al. 2016), the Hot Hand Fallacy (Gilovich et al. 1985), and the Base Rate Fallacy (Kahneman & Tversky 1982)."

  2. ^
  3. ^

    "We intuitively overestimate some probabilities due to availability bias (Tversky & Kahneman 1974), and underestimate others out of indefensible optimism (Hanoch et al. 2019)."

  4. ^

    "When presented with a medical operation that posed a 1% chance of permanent harm, many respondents considered it no worse than an operation with no risk at all (Gurmankin & Baron 2005). And in yet another context, subjects were unwilling to pay any money at all to insure against a 1% chance of catastrophic loss (McClelland et al. 1993)."

  5. ^
  6. ^

    See the bottom of page 16 and page 17 for a much more formal line of reasoning as to why this is true.

  7. ^

    "One such proposal is expected utility theory with a utility function that is concave and/or bounded (e.g., Arrow 1971). As Beckstead and Thomas (n.d.: 15-16) point out, this results in comparisons of lotteries being strangely dependent on events that are unaltered in every outcome and indeed some irrelevant to the comparison."

  8. ^

    The math and full line of reasoning is on pages 23 to 27.

  9. ^

    I mean 'considered better' by Stochastic Dominance, which "says that if two lotteries have exactly the same probabilities of exactly the same (or equally good) outcomes, then they are equally good; and if you improve an outcome in either lottery, keeping the probabilities the same, then you improve that lottery. And that’s hard to deny!" (See page 10 for the formal definition).

Comments

Thanks for the helpful summary. I feel it's worth pointing out that these arguments (which seem strong!) defend only fanaticism per se, not the stronger claim that is used or assumed when people argue for longtermism: that we ought to follow Expected Value Maximization. It's a stronger ask in the sense that we're asked to take bets not on arbitrarily high payoffs, which can be 'gamed' to be high enough to be worth taking, but 'only' on some specific astronomically high payoffs, derived from (as it were) empirically determined information, facts about the universe that ultimately give the payoffs an upper bound. That said, it's helpful to have these arguments to show that 'longtermism depends on being fanatical' is not a knock-down argument against longtermism. Here's one example of that link being made: "...the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism" (Tarsney, 2019).

Thanks for this interesting summary! These are clearly really powerful arguments for biting the bullet and accepting fanaticism. But does this mean that Hayden Wilkinson would literally hand over their wallet to a Pascal's mugger, if someone attempted to Pascal's-mug them? Because Pascal's mugging doesn't have to be a thought experiment: it's a script you could literally read to someone in real life, and I'm assuming that if I tried it on a philosopher advocating for fanaticism, I wouldn't actually get their wallet. Why is that? What's the argument that lets you not follow through on that in practice?

I'll admit this was a lot to take in, and intuitively I'm inclined to reject fanaticism simply because it seems more reasonable, intuitively, to believe that high-probability interventions are always better than low-probability ones. This position, for me at least, is rooted in normalcy bias, and if there's one thing Effective Altruism has taught me, it's that normalcy bias can be a formidable obstacle to doing good.
