When people ask what aspiring effective altruists work on, I often start by saying that we do research into how you can help others the most. For example, GiveWell has found that distributing some 600 bed nets, at a cost of $3,000, can prevent one infant dying of malaria. For the same price, they have also found you could deliver 6,000 deworming treatments that work for around a year.
A common question at this point is 'how can you compare the value of helping these different people in these different ways?' Even if the numbers are accurate, how could anyone determine which of these two possible donations helps others the most?
I can't offer a philosophically rigorous answer here, but I can tell you how I personally approach this puzzle. I ask myself the question:
- Which would I prefer, if, after making the decision, I were equally likely to become any one of the people affected, and experience their lives as they would? [1]
Let's work through this example. First, let's scale the numbers down to something manageable: since $3,000 buys 600 bed nets or 6,000 deworming treatments, for $5 I could offer 10 children deworming treatments, or alternatively offer 1 child a bed net, which carries a 1 in 600 chance of saving their life (600 nets prevent one death, so each net has roughly a 1 in 600 chance of doing so). To make this decision, I should compare three options:
1. I don't donate, and so none of the 11 children receive any help
2. Ten of the children receive deworming treatment, but the eleventh goes without a bed net
3. One child receives a bed net, but the other ten go without deworming
If I didn't know which of these 11 children I was about to become, which option would be most appealing?
Obviously options 2 and 3 are better than option 1 (no help), but deciding between 2 and 3 is not so simple. I am confident that a malaria net is more helpful than a deworming tablet, but is it ten times more useful? (A rough version of this calculation is sketched in code below.)
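One way to make the trade-off explicit is to write it out as an expected-value calculation. Here is a minimal sketch in Python; `worm_benefit` and `death_benefit` are made-up placeholder weights for illustration, not figures from GiveWell:

```python
# A minimal sketch of the comparison above. The wellbeing weights are
# invented placeholders, not real estimates.

worm_benefit = 1.0      # assumed value of one deworming treatment, arbitrary units
death_benefit = 4000.0  # assumed value of preventing one infant death, same units

# Option 2: ten children receive deworming; the eleventh gets no net.
value_option_2 = 10 * worm_benefit

# Option 3: one child receives a net carrying a 1-in-600 chance of
# preventing their death; the other ten go without deworming.
value_option_3 = (1 / 600) * death_benefit

# Behind the veil I am equally likely to become any of the 11 children,
# so dividing both totals by 11 changes nothing: I simply prefer the
# option with the higher total expected wellbeing.
print(f"Option 2 (deworming): {value_option_2:.2f}")
print(f"Option 3 (bed net):   {value_option_3:.2f}")
```

Under any choice of weights, the comparison reduces to a single ratio: the bed net wins exactly when averting a death is worth more than 6,000 deworming treatments (10 treatments multiplied by the 600-to-1 odds).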
This question has the virtue of:
- Being 'fair', because in theory everyone's interests are given 'equal consideration'
- Putting the focus on how much the recipients value the help, rather than how you feel about it as a donor
- Motivating you to actually try to figure out the answer, by putting you in the shoes of the people you are trying to help.
You'll notice that this approach looks a lot like the veil of ignorance, a popular method among moral philosophers for determining whether a process or outcome is 'just'. It should also be very appealing to any consequentialist who cares about 'wellbeing' and thinks everyone's interests ought to be weighed equally. [2] It also looks very much like the ancient instruction to "love your neighbor as yourself".
In my experience, this thought experiment pushes you towards asking good concrete questions like:
- How much would deworming improve my quality of life immediately, and then in the long term?
- How harmful is it for an infant to die? How painful is it to suffer from a case of malaria?
- What risk of death might I be willing to tolerate to get the long-term health and income gains offered by deworming?
- And so on.
I find the main weakness of applying this approach is that thousands of people might be affected in some way by a decision. For instance, we should not only consider the harm to young children who die of preventable diseases, but also the grief and hardship experienced by their families as a result. But that's just the start: health treatments delivered today will change the rate of economic development in a country and therefore the quality of life of all future generations. A big part of the case for deworming is that it improves nutrition, and thereby raises education levels and incomes for people when they are adults - benefits that are then passed on to their children and their children's children.
This doesn't make the question the wrong one to ask; rather, it means that tracking and weighing the impact on the thousands of people who might be affected by an action is beyond what most of us can do in a casual way. However, I find you can still make useful progress by thinking through and adding up the impacts on paper, or in a spreadsheet. [3] When you apply this approach, it is usually possible to narrow your choices down to just a few options, though in my experience you may then not have enough information to confidently decide among that remaining handful.
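As a concrete illustration of the 'adding up impacts in a spreadsheet' step, here is a minimal sketch. Every group, headcount, and per-person weight below is a hypothetical placeholder; the point is only the structure of the tally:

```python
# A minimal impact ledger, in the spirit of a back-of-the-envelope
# spreadsheet. All groups, counts, and weights are hypothetical.

impacts = [
    # (group affected, number of people, assumed wellbeing change per person)
    ("children dewormed",             10,  1.0),
    ("their future adult selves",     10,  0.5),  # income/education gains
    ("families of the recipients",    30,  0.1),  # reduced hardship
    ("child left without a bed net",   1, -6.7),  # 1/600 risk of death

]

total = sum(count * weight for _, count, weight in impacts)

for group, count, weight in impacts:
    print(f"{group:30s} {count:3d} x {weight:+5.1f} = {count * weight:+6.1f}")
print(f"Total wellbeing change: {total:+.1f}")
```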
--
[1] A very similar, probably equivalent, question is: Which would I prefer if, after making the decision, I then had to sequentially experience the remaining lives of everyone affected by both options?
[2] One weakness is that this question is ambiguous about how to deal with interventions that change who exists (for instance, policies that raise or lower birth rates). If you assume that you must become someone - non-existence is not an option - you would end up adopting the 'average view', which actually has no supporters in moral philosophy. If you simply ignored anyone whose existence was conditional on your decision, you would be adopting the 'person-affecting view', which itself has serious problems. If you do include those people in the population of people you could become, and add 'non-existence' as the alternative for the choices which cause those people not to exist, you would be adopting the 'total view'.
[3] Alternatively, if you were convinced that these long-term prosperity effects were the most important impact, and were similarly valuable across countries, you could try to estimate the increase in the rate of economic growth per $100 invested in different projects, and just seek to maximise that.
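A minimal sketch of that alternative follows, with invented costs and growth effects (`cost_usd` and `growth_boost` figures are placeholders, not real estimates):

```python
# Footnote 3 made concrete: rank projects by estimated growth impact
# per $100 donated. All costs and growth effects are invented.

projects = {
    "deworming programme":  {"cost_usd": 50_000, "growth_boost": 0.0020},
    "bed net distribution": {"cost_usd": 80_000, "growth_boost": 0.0025},
}

for name, p in projects.items():
    # percentage points of extra annual growth bought per $100
    per_100 = p["growth_boost"] / p["cost_usd"] * 100
    print(f"{name}: {per_100:.2e} percentage points of growth per $100")

best = max(projects, key=lambda n: projects[n]["growth_boost"] / projects[n]["cost_usd"])
print(f"Maximise growth per dollar by funding: {best}")
```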
This is a neat approach, Rob, and some form of it seems likely to be one of the best ways of thinking about this. I think the emphasis on putting yourself in the shoes of those you're trying to help, rather than acting for yourself, is particularly valuable. I think there is one extra difficulty that you haven't mentioned, though, which is to do with people having preferences other than yours.
Even if I'm able to work out that, given a random chance of being one of the participants, I would prefer 2 to 3, it doesn't necessarily follow that 2 is preferable to 3 in an objective sense. It is interesting to imagine what the participants themselves would choose behind your veil (if they were fully informed about the tradeoffs etc.).
In many cases, one finds that people with a condition tend to think it is less bad than people who don't have the condition do. (That is, if you ask sighted people how bad it would be to be blind, they say it would be much worse than blind people do when asked.) This suggests that, behind a veil of ignorance where self-interest is not at play, those at risk of malaria but not worms might regard treating worms as most important, while those at risk of worms but not malaria would prioritise treating malaria. It seems hard to know whom to prioritise then.
There's also the eternal problem with imagining what one would choose - people often choose poorly. I assume you're making some sort of assumption that we're choosing under the best possible conditions. It may be, though, that your values depend on your decision-making conditions.
Of course, you still have to choose, and as you say it's clear that 2 and 3 are both preferable to 1. I think this tool will get you answers most of the time, and can focus your mind on important questions, but there's an intrinsic uncertainty (or maybe indeterminacy) about the ordering.
I would go for:
1) use their preferences and experiences (pretend you don't know what you personally want)
2) imagine you knew everything you could about the impacts.
This, I think, is considered the standard approach when thinking behind a veil.
As you say, you might find it hard to do 1) properly, but I think that effect is small in the scheme of things. It's also better than not trying at all!
"This suggests that, behind a veil of ignorance where self-interest is not at play, those at risk of malaria but not worms might regard treating worms as most important and those at risk of worms but not malaria would treat malaria."
Wouldn't they then cancel out if you took the average of the two when deciding?
I know you qualify this process as your own heuristic rather than a philosophical justification, but I fail to see the value of empathetic projection in this case, which, in practice, is an invitation to all sorts of biases. To state just two points: (i) imagining the experiential world of someone else isn't the same as, or anywhere near, experientially being someone else; (ii) it is not obvious that the imagined person's emotional and value set has any normative force as to which distributions we should favour in the world, i.e. X preferring Y to Z is not a normative argument for privileging Y over Z.
In Rawls' original position, judgement is exercised by a representative invested with a book's worth of qualifications as to why its conclusions are normatively important, i.e. Rawls tries to exactly model the person as free and equal in circumstances of fairness (it has frequently been argued, quite correctly, that Rawls' OP is superfluous to Rawls' actual argument, for the terms of agreement are well-defined outside of it). In the case of your procedure, judgement is exercised by whoever happens to be using it.
IMO, the possibility of normative interpersonal comparisons requires at least: (i) that we can justify a delimited bundle of goods as normatively prior to other goods; (ii) that those goods, within and between themselves, are formally commensurable; (iii) that we can produce a cardinal measure of those goods in the real world; (iv) that we use that measure effectively to calculate correlations between the presence of those goods and the interventions in which we are interested; (v) that we complement this intervention efficacy with non-intervention variables, i.e. if intervention X yields 5 goods and intervention Y 10 goods, but we can deliver 2.5 X for the price of 1 Y in circumstance Z, then in circumstance Z we should prioritise intervention X, since 2.5 x 5 = 12.5 goods versus 10 (see the sketch below).
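To check the arithmetic in stage (v), here is a minimal sketch using only the hypothetical figures from the paragraph above:

```python
# Stage (v) with the hypothetical figures above: X yields 5 goods per
# delivery, Y yields 10, but in circumstance Z the price of one Y
# buys 2.5 deliveries of X.

goods_per_x, goods_per_y = 5, 10
x_affordable_per_y = 2.5

goods_from_x = x_affordable_per_y * goods_per_x  # 12.5 goods per unit of budget
goods_from_y = 1 * goods_per_y                   # 10.0 goods per unit of budget

winner = "X" if goods_from_x > goods_from_y else "Y"
print(f"X: {goods_from_x} goods, Y: {goods_from_y} goods -> prioritise {winner}")
```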
I'm sure that, firstly, you know this better and more comprehensively than I do, and secondly, that this process itself is a highly ineffective (i.e. resource-consuming) means of proceeding with interpersonal comparisons unless massively scaled. That said, I don't see why it shouldn't be a schematic ideal against which to exercise our non-ideal judgements. Your heuristic might roughly help with (iii), and in this respect might be very helpful at the stage of first evaluations, but there are more exacting means, and four other stages, besides.