
(This post arose out of confusion I had when considering "neutrality against making happy people". Consider this an initial foray into exploring suffering-happiness tradeoffs, which is not my background; I'd gladly welcome pointers to related work if it sounds like this direction has already been considered.)

There are two approaches to trading off suffering and happiness that don't sit right with me when considering astronomical scenarios[1].

  1. Linear Tolerance: Set some (possibly large) constant $c$. Then an amount of suffering $s$ is offset by an amount of happiness $h$ so long as $h \geq c \cdot s$.

My impression is that Linear Tolerance is pretty common among EAers (and please correct this if I'm wrong). For example, this is my understanding of most usages of "net benefit", "net positive", and so on: it's a linear tolerance of $c = 1$[2]. This seems ok for the quantities of suffering/happiness we encounter in the present and near future, but in my opinion becomes unpalatable in astronomical quantities.

  2. No Significant Tolerance: There exists some threshold of suffering $T$ such that no amount of happiness can offset suffering $s$ if $s \geq T$.

This is almost verbatim "Torture-level suffering cannot be counterbalanced", and perhaps the practical motivation behind "Neutrality against making happy people" (creating a person who has a 99% chance of being happy and otherwise experiences intense suffering isn't worth the risk; or, creating a person who experiences 1 unit of intense suffering for any number of units of happiness isn't worth it). However, this seems to either A. claim that infrequent-but-intense suffering is worse than frequent-but-low suffering, or B. accept frequent-but-low suffering as equally bad, and prefer to kill off even almost-entirely happy lifeforms as soon as the threshold is exceeded[3]. Since my life falls short of almost-entirely happy yet I find it worth living, I am unsatisfied with this approach.

Toward Logarithmic Tradeoffs

I think the primary intuitions Linear Tolerance and No Significant Tolerance are trying to tap into are:

  • it seems like small amounts of suffering can be offset by large amounts of happiness
  • but once suffering gets large enough, the amount of happiness needed to offset it seems unimaginable (to the point of being impossible)

I don't think these need to contradict each other:

  3. Log Tolerance[4][5]: Set coefficients $a, b > 0$. Then an amount of suffering $s$ is offset by an amount of happiness $h$ so long as $s \leq a \log(b \cdot h)$.

Log Tolerance is stricter than Linear Tolerance: the marginal tradeoff rate $\frac{d}{dh}\, a \log(b \cdot h) = a/h$ will eventually drop below any linear tradeoff rate $1/c$. Furthermore, in the limit the cumulative "effective" linear tradeoff rate $a \log(b \cdot h)/h$ goes to zero.

Meanwhile, Log Tolerance also requires nigh-impossible amounts of happiness to offset intense suffering: while technically $a \log(b \cdot h)$ goes to infinity, nobody has ever observed it to do so. Consequently any astronomically expanding sentience/civilization would need to get better and better at reducing suffering. On the other hand, because $a \log(b \cdot h)$ is monotonically increasing, the addition of almost-entirely happy life is always permissible, which I suspect fits better with the intuitions of most longtermists.
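
To make the contrast concrete, here is a minimal numeric sketch (my own illustration; the coefficient values $c$, $a$, $b$ are placeholders, not anything argued for above) of how much happiness each rule demands in order to offset a given amount of suffering:

```python
import math

def linear_offset(s, c=1.0):
    """Happiness needed to offset suffering s under Linear Tolerance: h >= c * s."""
    return c * s

def log_offset(s, a=1.0, b=1.0):
    """Happiness needed to offset suffering s under Log Tolerance:
    s <= a * log(b * h), i.e. h >= exp(s / a) / b."""
    return math.exp(s / a) / b

for s in [1, 10, 100]:
    print(f"s = {s:>3}: linear needs h >= {linear_offset(s):.0f}, "
          f"log needs h >= {log_offset(s):.3e}")
```

With $a = b = 1$, offsetting 100 units of suffering already requires on the order of $e^{100} \approx 2.7 \times 10^{43}$ units of happiness, while the marginal rate $a/h$ keeps shrinking: the "nigh-impossible" behavior described above.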

Practical Impact

The practical impact Log Tolerance would have on how longtermists analyze risks is to shift the question from "does this produce more happiness than suffering?" to "does this produce mechanisms by which happiness can grow exponentially relative to the growth of suffering?"

For example, one way we could stay below a log upper bound is if some fixed percentage of future resources are committed to reducing future s-risk as much as possible.
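
As a toy illustration of that idea (the dynamics here are entirely made up for the sake of the sketch: the growth rate g, the mitigation fraction f, and the effectiveness constant k are my assumptions, not claims about the future), suppose resources grow exponentially, a fixed fraction of them is spent on s-risk reduction, and the suffering rate scales like current resources divided by cumulative mitigation spending. After an initial transient, cumulative suffering then grows only linearly in time, while the log bound (driven by exponentially growing happiness) also grows linearly in time but with a larger slope, so the bound is respected with a widening margin:

```python
import math

# Toy dynamics (illustrative assumptions only):
#   resources R grow exponentially at rate g,
#   a fixed fraction f of resources is spent on s-risk reduction,
#   the suffering rate is R / (k * cumulative mitigation spending),
#   happiness accrues in proportion to the remaining resources.
g, f, k = 0.05, 0.10, 20.0       # growth rate, mitigation fraction, mitigation effectiveness
a, b = 1.0, 1.0                  # Log Tolerance coefficients (natural log)

R, H, S, M = 1.0, 0.0, 0.0, 0.0  # resources, cumulative happiness, suffering, mitigation
for t in range(1, 201):
    M += f * R                   # cumulative s-risk reduction spending
    H += (1 - f) * R             # cumulative happiness
    S += R / (k * M)             # suffering rate falls as mitigation accumulates
    R *= math.exp(g)             # exponential resource growth
    if t % 50 == 0:
        print(f"t = {t}: cumulative suffering = {S:.2f}, "
              f"log bound = {a * math.log(b * H):.2f}")
```

Whether the bound actually holds in this toy setup depends on the made-up constants (asymptotically it needs roughly $k \cdot f \geq 1/a$); the point is only that a fixed commitment of exponentially growing resources can, under favorable assumptions, keep cumulative suffering under a logarithmic cap.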

Open Questions

  • Are there any messy ethical implications of log tolerance?
  • I think any sublinear, monotonically nondecreasing function $f$ satisfying $\lim_{h \to \infty} f(h) = \infty$ would have the same nice properties (see the sketch after this list). Perhaps another function would allow for more/less suffering, or model the marginal tradeoff rate as decreasing at different rates, etc.
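
For instance, here is a minimal sketch of that generalization (the square-root alternative is just an arbitrary illustration of "another function", not a proposal):

```python
import math

# Any sublinear, monotonically nondecreasing tolerance function f can play the same role:
# suffering s is offset by happiness h iff s <= f(h).
log_tol  = lambda h: math.log(h)    # the Log Tolerance above, with a = b = 1
sqrt_tol = lambda h: math.sqrt(h)   # an alternative sublinear choice; far more permissive

for h in [1e3, 1e6, 1e12]:
    print(f"h = {h:.0e}: log tolerates s <= {log_tol(h):.1f}, "
          f"sqrt tolerates s <= {sqrt_tol(h):.1f}")
```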

  1. My analysis assumes the existence of measures on happiness and suffering. Perhaps this limits it to utilitarian views of value? ↩︎

  2. where the suffering measure may have been implicitly scaled to reflect how much worse suffering is than happiness ↩︎

  3. by almost-entirely happy, I mean only experiencing infinitesimal suffering ↩︎

  4. Without loss of generality we can assume $\log$ is just the natural logarithm ↩︎

  5. I came up with this while focused on asymptotic behavior, so I'm only considering the nonnegative support of the tolerance function. I don't know how to interpret a negative tolerance, and suspect it's not useful. ↩︎

Comments

Personally I still wouldn't consider it ethically acceptable to, say, create a being experiencing a -100-intensity torturous life provided that a life with exp(100)-intensity happiness is also created. Even after trying strongly to account for possible scope neglect. Going from linear to log here doesn't seem to address the fundamental asymmetry. But I appreciate this post, and I suspect quite a few longtermists who don't find stronger suffering-focused views compelling would be sympathetic to a view like this one - and the implications for prioritizing s-risks versus extinction risks seem significant.

I really liked this post and it made me think! Here are some stray thoughts which I'm not super confident in:

  • Views similar to Linear Tolerance and No Significant Tolerance are called negative-leaning utilitarianism (or weak negative utilitarianism) and lexical-threshold negative utilitarianism, respectively (see here or here)
  • It seems like logarithmic trade-offs are just linear tolerance where we've scaled (exponentially) all original suffering values $s$. I'm not sure if it's easier just to think the suffering values were already this scaled value and then use linear tolerance?
  • I'm confused by your use of $s$ and $h$ for amounts of suffering and happiness for an individual. I'm guessing you're also factoring in intensity?

Here I'm using $s$ and $h$ to denote amounts of suffering/happiness, whether constrained to one individual or spread among many (or even distributed among some non-individualistic sentience).

Using exponentially-scaled linear tolerance seems equivalent mathematically. If anything, it highlights to me that how you define the measures for happiness and suffering is quite impactful, and needs to be carefully considered.
