
Last year Holden Karnofsky wrote the post, “EA is about maximization, and maximization is perilous”. You could read the post, but I suggest you just jump on board because Holden is cool, and morality is hard.

Given that you now believe that maximising good is actually bad and scary, you should probably also make some adjustments to the classic thought experiment you use to get your friends on board with the new mission of “do the most good possible [a large but not too large amount of good] using evidence and reason”.

A slightly modified drowning child thought experiment goes as follows:


Imagine that you are walking by a small pond, and you see five children drowning. You can easily save the children without putting yourself in great danger, but doing so will ruin your expensive shoes. Should you save the children?

Obviously, your first instinct is to save all the children. But remember, maximisation is perilous.[1] It’s this kind of attitude that leads to atrocities like large financial crimes.[2] Instead, you should just save three or four of the children. That is still a large amount of good, and importantly, it is not maximally large.

But what should you do if you encounter just one drowning child? At first pass, the options seem bleak – you can either:

  • Ignore the child and let them drown (which many people believe is bad).
  • Save the child (but know that you have tried to maximise good in that situation).

I think there are a few neat solutions to get around these moral conundrums:

Save the child with some reasonable probability (say 80%)

Before wading into the shallow pond, whip out the D10 you were carrying in your backpack. If you roll an eight or lower, then go ahead and save the child. Otherwise, go about your day.


Midjourney v5 prompt: A man rolling a die next to a pond with a drowning child. I stopped the generation 80% of the way through to avoid having a maximally good image.
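For the programmatically inclined, here is a minimal Python sketch of the dice procedure. The function name and messages are made up for illustration; any fair D10, physical or digital, should do.

```python
import random

def maybe_save_child(p_save: float = 0.8) -> bool:
    """Decide whether to save the child with probability p_save (default 80%),
    carefully avoiding the peril of maximisation."""
    # Roll the D10 from your backpack: faces 1 through 10, equally likely.
    roll = random.randint(1, 10)
    # An eight or lower (8 of 10 faces) means wade in and save the child.
    return roll <= round(p_save * 10)

# Example: one morning walk past the pond.
if maybe_save_child():
    print("Child saved: a large, but not maximally large, amount of good.")
else:
    print("Went about my day. Peril averted.")
```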

Only partially save the child

You may have an opportunity to help the child to various degrees. Rather than picking up the child and then ensuring that they find their parents, or doing other things previously thought reasonable, you could:

  • Move the child to shallower waters so they are only drowning a little bit.
  • Help the child out of the water but then abandon them somewhere within a 300m radius of the pond.
  • Create a Manifold market on whether the child will be saved and bet against it to incentivise other people to help the child.


The QALY approach[3]

  • Save the child but replace them with an adult who is not able to swim (but is likely to have fewer years of healthy life left).
  • Commit now to a policy of only saving children who are sufficiently old or likely to have only moderately healthy/happy lives.

The King Solomon approach

  • Cut the child in half and save the left half of them from drowning.

Using these approaches, you should be able to do the optimal, most Holden-approved amount of good.

  1. ^ If you like, you can remember the heuristic “maximisation bad”.

  2. ^ As well as other things like eradicating diseases.

  3. ^ QALYs are quality-adjusted life years (essentially a metric for healthy years lived).


Comments

More options:

  1. Save the child but then let your clothes dry, thereby killing thousands of zooplankton, rotifers, and nematodes.
  2. Never go outside without your Uber Walking Buddy, so that arguably they would’ve saved the child if you hadn’t.
  3. Don’t worry about it and save the child because the optimal (infinite EV) action would’ve been to give your expensive shoes to a normie in return for being your counterparty in a St. Petersburg Game.
  4. Offset every saved child with a donation to a marine habitat conservation charity.
  5. Save the child but tell them that there is no God, thereby defecting in an acausal moral trade with Calvinists centuries ago.
  6. Save all children but on the shore cunningly arrange them in a spatiotemporal pattern that actually reduces overall value according to the value-density and hyperreal remedies to infinite ethics.
  7. Save children who eat meat.
  8. Corollary: Complex cluelessness means the EV of all actions is undefined/unknowable, so you’re free to save all the children you want. 
  9. Don’t worry about it because whatever you think is maximally good probably isn’t because of the Optimizer’s Curse.
  10. Counterfactuals are logically inconsistent in a deterministic world, so you might as well save the child and compare that action against a counterfactual where you prevented them from falling into the pond in the first place.
  11. Save the child but don’t tell them about cryonics.
  12. Buy the expensive shoes to ruin at an unnecessarily high price.
  13. Drench your clothes in a mild poison every morning so that by jumping into the water you contaminate it and expose the child to a low chance of death by poisoning.
  14. Save the child while keeping your eyes open or not plugging your ears – you thereby force the simulation to simulate the stressful drowning process in detail, which is not optimal.
  15. Use a quantum random number generator for all sorts of decisions every day to decorrelate yourself from your selves on other Everett branches: Now you’re free to save the child because on many branches you’re not even passing by the pond.
  16. Assume Omega predicted your actions: If it predicted that you would save the child, it put the child in the pond; if it predicted you wouldn’t save the child, it let the child play by the shore in peace. By being the sort of person who would save the child, you’ve already caused the child a lot of unnecessary stress, so saving them now doesn’t risk being optimal anymore. 

I think this was the maximally best April Fool's post.

Uh oh, better reduce the humor by 20% or we're courting peril.
