I'm very much aligned with the version of utilitarianism that Bostrom and Ord generally put forth, but a question came up in a conversation regarding this philosophy and its view of sustainability. As a thought experiment: what would be consistent with this philosophy if we discovered that a very clear way to minimize existential risk from some threat X required the genocide of half the population, or some other significant subset of it?
Hi Jose,
Bostrom and Ord do not put forth any version of utilitarianism. Bostrom isn't even a consequentialist, let alone a utilitarian. Both authors take moral uncertainty seriously. (Ord defends a version of global consequentialism, but not in the context of arguing for prioritizing existential risk reduction.) Nor does concern for existential risk reduction presuppose a particular moral theory. See the ethics of existential risk.
Separately, the dilemma you raise isn't specific to existential risk reduction. For example, one can also describe imaginary scenarios in which trillions and trillions of sentient beings exploited for human consumption could be spared lives filled with suffering only if we do something horrendous to innocent people. All reasonable moral theories, not just utilitarianism, must grapple with dilemmas of this kind.