When comparing different problems, I found myself facing moral questions about what we should care about, whose answers have a direct impact on the scale of global problems. 80k suggests a concept called moral uncertainty, which involves maximizing the expected value of our decisions. For that it is necessary to estimate the probability of moral claims (for example: that non-human animal lives matter as much as human ones), but I haven't found resources on how to do this. I would be very grateful if someone had a clue on how to proceed.

3 Answers

I think it's highly subjective and intuition-based for most people. For a very basic moral claim X, you would just ask: "How likely does X seem to me?" And then the probability that occurs directly to you is what you go with.

You might consider arguments for or against, but ultimately just pick a number directly for many claims. For other claims, you might derive them from others, e.g. multiplying (conditional) probabilities for each premise in an argument to get the probability of the conclusion (or a lower bound on it).
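To make the multiplication step concrete, here is a minimal sketch. The function name, the premise probabilities, and the example argument are all made up for illustration; the underlying fact is just that if a conclusion follows deductively from its premises, the product of the (conditional) premise probabilities is a lower bound on the conclusion's probability.

```python
def conclusion_lower_bound(premise_probs):
    """Lower-bound P(conclusion) by multiplying (conditional) premise
    probabilities: P(p1) * P(p2 | p1) * P(p3 | p1, p2) * ...

    Valid when the conclusion is entailed by the premises taken together.
    """
    bound = 1.0
    for p in premise_probs:
        bound *= p
    return bound

# Hypothetical three-premise argument, with invented credences:
# P(premise 1) = 0.9, P(premise 2 | premise 1) = 0.8,
# P(premise 3 | premises 1 and 2) = 0.7
print(conclusion_lower_bound([0.9, 0.8, 0.7]))  # roughly 0.504
```

Note this gives a lower bound, not the probability itself: the conclusion might also be true for reasons other than this particular argument.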

Here's one partial answer to your question. In Moral Uncertainty (pg 209), MacAskill et al. suggest that you can sometimes calibrate your confidence in a moral view using "induction from past experience." The more often that you (or other reasoners in your reference class) have changed your mind in the course of investigating a moral issue, the less confidence you should have in your current best guess answer. 

For example, perhaps you've spent a long time thinking about the ethics of letting a child drown in a shallow pond, and all along, you've never doubted that it's wrong. And perhaps you've also been thinking about whether it's categorically wrong to lie. Some days you're fully convinced by Kantian arguments for this view; other days you hear really convincing counterarguments, and you change your mind. Right now, you feel persuaded that lying is categorically wrong, but it nevertheless seems inappropriate for your credence on the wrongness of lying to vastly exceed your credence on the wrongness of letting a child drown. 
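One way to picture this "induction from past experience" is as a discount on your raw credence. The book gives no formula, so the following is purely a toy heuristic of my own: shrink your current credence toward 0.5 in proportion to how often you have reversed yourself on the issue. The function name, the discount rule, and the numbers are all invented for illustration.

```python
def calibrated_credence(current_credence, num_mind_changes, k=1.0):
    """Toy heuristic: shrink a raw credence toward 0.5, with more past
    reversals producing more shrinkage. The 1/(1 + k*n) discount is an
    arbitrary illustrative choice, not a principled formula.
    """
    weight = 1.0 / (1.0 + k * num_mind_changes)
    return 0.5 + weight * (current_credence - 0.5)

# Never doubted that letting the child drown is wrong:
# credence stays at its raw value.
print(calibrated_credence(0.99, 0))  # 0.99

# Changed your mind five times about categorical lying:
# credence is pulled strongly back toward 0.5.
print(calibrated_credence(0.95, 5))  # roughly 0.575
```

The exact shape of the discount doesn't matter; the point is the qualitative one from the quoted passage, that a history of flip-flopping should keep today's confident verdict from vastly exceeding your credence in views you have never doubted.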

Moral dilemmas should never be an obstacle to making moral decisions. Morality, above all, is a way of life: the practice of virtue. A moral dilemma must be weighed in the context of a moral attitude within a particular cultural conception, and errors or exceptions constantly appear in moral dilemmas. Should I save the lives of a million chickens even at the cost of the life of a human being? In my opinion, saving a million chickens at the cost of a human life would not be considered virtuous in the culture in which you find yourself. And you would not be a virtuous person if you were that indifferent to the attitudes of the majority.
All moral progress implies nonconformity, an overcoming of the resistance of the majority, but such virtuous action must involve a lifestyle in accordance with virtue itself, expressed in understandable terms.
For example, conscientious objection to military service may be considered a betrayal of a politically respectable ideal (as in Ukraine, now invaded by Russia) and an immoral act by most people... but if the conscientious objector expresses his commitment to pacifism, altruism, and benevolence in a convincing way, he will still be within the realm of comprehensible virtue and may bring about moral progress. And that does not imply that the moral dilemma has been resolved.
