The summary and introduction can be read below. The full paper is available here.
This working paper was produced as part of the Happier Lives Institute’s 2022 Summer Research Fellowship.
Summary
Given the current state of our moral knowledge, it is entirely reasonable to be uncertain about a wide range of moral issues. Hence, it is surprising how little attention contemporary philosophers have paid (until the past decade) to moral uncertainty. In this paper, I have considered the prima facie plausible suggestion that appropriateness under moral uncertainty is a matter of dividing one’s resources between the moral theories in which one has credence, allowing each theory to use its resources as it sees fit. I have gone on to develop this approach into a fully-fledged Property Rights Theory, sensitive to many of the complications that we face in making moral decisions over time. This Property Rights Theory deserves to take its place as a leading theory of appropriateness under conditions of moral uncertainty.
Introduction
Distribution: Imagine that some agent J is devoting her life to ‘earning to give’: J is pursuing a lucrative career in investment banking and plans to donate most of her lifetime earnings to charity. According to the moral theory T_health in which J has 60% credence, by far and away the best thing for her to do with her earnings is to donate them to global health charities, and the next best thing is to donate them to charities that benefit future generations by fighting climate change. On the other hand, according to the moral theory T_future in which J has 40% credence, by far and away the best thing for her to do with her earnings is to donate them to benefitting future generations, and the next best thing is to donate them to global health charities.2 On all other issues, T_health and T_future are in total agreement: for instance, they agree on where J should work, what she should eat, and what kind of friend she should be. They only disagree about which charity J should donate to. Finally, neither T_health nor T_future is risk loving: each theory implies that an $x donation to a charity that the theory approves of is at least no worse than a risky lottery over donations to that charity whose expected donation is $x. In light of her moral uncertainty, what is it appropriate for J to do with her earnings?3
According to one prima facie plausible proposal, it is appropriate for J to donate 60% of her earnings to global health charities and 40% of them to benefitting future generations – call this response Proportionality. Despite Proportionality’s considerable intuitive appeal, none of the theories of appropriateness under moral uncertainty thus far proposed in the literature support this simple response to Distribution.
In this paper, I propose and defend a Property Rights Theory (henceforth: PRT) of appropriateness under moral uncertainty, which supports Proportionality in Distribution.4 In §2.1, I introduce the notion of appropriateness. In §2.2, I introduce several of the theories of appropriateness that have been proposed thus far in the literature. In §2.3, I show that these theories fail to support Proportionality. In §§3.1-3.3, I introduce PRT and I demonstrate that it supports Proportionality in Distribution. In §§3.4-3.9, I discuss the details. In §3.10, I extend my characterisation of PRT to cover cases where an agent faces a choice between discrete options, as opposed to resource distribution cases like Distribution.5 In §4, I argue that PRT compares favourably to the alternatives introduced in §2.2. In §5, I conclude.
Acknowledgements: For helpful comments and conversations, I wish to thank Conor Downey, Paul Forrester, Hilary Greaves, Daniel Greco, Shelly Kagan, Marcus Pivato, Michael Plant, Stefan Riedener, John Roemer, Christian Tarsney, and Martin Vaeth. I also wish to thank the Forethought Foundation and the Happier Lives Institute for their financial support.
I really enjoyed reading this and it has strong implications for the allocation of resources in EA. With the MEC framework, totalist theories dominate all the decision-making. I can't say that I am convinced by PRT but I hope it gets discussed more widely.
In the paper, you write:
So, effectively, resource shares always track credences in theories. Wouldn't this be unfair to the theories/theory-agents who invest disproportionately in growing or preserving their resources?
The theories/theory-agents could coordinate to save and invest to avoid a tragedy of the commons, but some may still be more interested in saving/investing than others, and will effectively be forced to transfer wealth to those that spend more. Some may even be willing to put themselves in net debt, but this debt could be partially transferred to others.
Like in your example, I also don't think we'd want to leave all of the gains a theory-agent made with that theory-agent if their theory lost almost all of its support. Taking only from their initial endowment doesn't seem to go far enough.
There are probably different ways to redistribute resources after credences change, but I imagine they'll all be problematic or at least have weird consequences, even if not objectionable per se.
Some ideas, and problems with them, follow in this section of the comment.
(1) The approach that seemed at first most obvious to me is to rescale resources proportionally with the relative changes to credences at each step:
Suppose your current shares of resources are s_1, s_2, ..., s_n across theories T_1, T_2, ..., T_n. These should sum to 1. They need not be nonnegative, in case we want to allow net debts. Suppose your credences have been multiplied by f_1, f_2, ..., f_n since you last checked and redistributed. Then, you would want to distribute resources proportionally to the products f_1·s_1, f_2·s_2, ..., f_n·s_n. These products may not sum to 1, so you have to normalize by the dot product f_1·s_1 + f_2·s_2 + ... + f_n·s_n to get the actual shares. (A code sketch of this appears below, after the problems I describe next.)
I expect the renormalization to lead to problems. If your credence in T_1 increases, and your credence in T_2 decreases by the same amount in absolute terms, but your credence in T_3 is unaffected, it seems weird and potentially problematic for this to affect T_3's resources, especially if it means losing resources.
Also, if a theory with net negative resources (net debt) has the credence in it increased, it shouldn't be forced further into debt, and it should normally gain resources on net. In those cases, it's not clear what we should do and why. We could just replace f_j with 1/f_j or 1 before renormalizing.
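A minimal sketch of approach (1), my own illustration rather than anything from the paper, assuming the current shares and the multiplicative credence factors are supplied as plain lists (and ignoring the debt tweaks just mentioned):

```python
def rescale_proportional(shares, factors):
    """Approach (1): scale each theory's resource share by the relative change
    in its credence (f_j), then renormalize so the shares sum to 1 again."""
    weighted = [f * s for f, s in zip(factors, shares)]
    total = sum(weighted)  # the dot product f_1*s_1 + ... + f_n*s_n
    return [w / total for w in weighted]

# The problem above: T_3's credence is unchanged (f_3 = 1), but renormalization
# still moves its share from 0.30 to about 0.286.
print(rescale_proportional([0.4, 0.3, 0.3], [1.5, 0.5, 1.0]))
```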
(2) Instead then, we might adjust resource shares identically with our absolute credence adjustments, a_1, a_2, ..., a_n: i.e. if the credence in theory j has changed in absolute terms from c_j to c_j + a_j, theory j's resources should change from s_j to s_j + a_j. Because a_1 + a_2 + ... + a_n = 0 (including if we add or drop theories), we wouldn't need to renormalize, and theories whose credences don't change aren't affected. This seems more natural and less ad hoc than the first approach. (A code sketch follows after the next paragraph.)
But we may sometimes have s_j + a_j < 0, even with c_j + a_j > 0 and s_j > 0, because the theory-agent has been spending down its resources or growing them less than others, and a theory with net positive resources and still positive credence in it is forced into debt or to give up the rest of its resources, which doesn't seem fair. It's not clear how much it should lose.
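And a correspondingly minimal sketch of approach (2), again my own illustration, assuming the absolute credence changes a_j are supplied directly; it does not yet handle the s_j + a_j < 0 problem just described:

```python
def adjust_absolute(shares, deltas):
    """Approach (2): shift each theory's resource share by the same absolute
    amount a_j that its credence changed; since the deltas sum to zero, the
    shares still sum to 1 and no renormalization is needed."""
    assert abs(sum(deltas)) < 1e-9, "absolute credence changes should sum to zero"
    return [s + a for s, a in zip(shares, deltas)]

# Theories whose credences don't change keep exactly their current share.
print(adjust_absolute([0.4, 0.3, 0.3], [0.15, -0.15, 0.0]))  # ~[0.55, 0.15, 0.3]
```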
(3) Another approach would be to let the theory-agents decide ahead of time how resources should be redistributed when credences change, by some kind of vote (a moral parliament). We don't need to decide this for them. But this could disproportionately favour more popular views (or coalitions of views). You could have them vote as if they don't know the credences in, or the resources held by, each theory (or as if they assume they'd only get a small fraction of the resources), and impartially across theories, to prevent biased rules and tyranny of the majority.
(4) Or we could use ad hoc fixes to the first two and hack them together. In (2), don't let a theory that had net positive resources go into debt because of credence changes, but allow it to hit 0, and just redistribute less to the theories that should have gained, in proportion to how much they should have gained (from the theory that hit 0, ideally). Then we could average this with some version of (1).
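Here is one possible reading of the fix to (2) described in (4), sketched in code; this is my interpretation of the idea above, not anything from the paper. Theories that started with non-negative resources can be pushed down to 0 but not into debt, and the resulting shortfall is taken out of the gainers' increases in proportion to how much each would have gained. Averaging the result with some version of (1) would then be a further step.

```python
def adjust_absolute_capped(shares, deltas):
    """Approach (2) with the cap from (4): a theory that had non-negative
    resources can fall to 0 but not go into debt; the shortfall this creates
    is subtracted from the gaining theories, in proportion to their gains."""
    raw = [s + a for s, a in zip(shares, deltas)]
    # Shortfall created by capping at zero the theories that started at or above zero.
    shortfall = sum(-r for s, r in zip(shares, raw) if s >= 0 and r < 0)
    capped = [max(r, 0.0) if s >= 0 else r for s, r in zip(shares, raw)]
    total_gain = sum(a for a in deltas if a > 0)
    if shortfall > 0 and total_gain > 0:
        # Each gainer gives up part of its increase to cover the shortfall.
        capped = [c - (a / total_gain) * shortfall if a > 0 else c
                  for c, a in zip(capped, deltas)]
    return capped  # sums to the same total as the input shares
```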
At a minimum, I'd probably want:
This seems kind of inflexible to me, as a strict rule, rather than a guideline, if that's the intention. Why shouldn't they be able to donate? Maybe they expect to earn back more easily in the future and some donation opportunities are pressing now. Furthermore, "donation" is a somewhat artificial category: taking on some kinds of debt could be seen as donations if they will benefit future direct donations or future direct work. Taking on lower paying direct work instead of going for higher paying jobs to pay off the debt faster also seems like a donation.
I'm interested in promoting robustly positive portfolios of interventions (e.g. in my post here).
Could the property rights approach be adapted to internalize negative (and positive) externalities? Suppose view A pursues an intervention that is net negative according to view B. Should A compensate B at the minimum cost to make up for the harm or prevent it from actually materializing*? A and B could look for some compromise intervention together instead, but it may actually be better for A to pursue its own intervention that's harmful according to B and for B to make up for that harm (and also pursue its own ends separately).
*perhaps weighting this compensation by some function of the credences in each view, e.g. credence in B / credence in A? Otherwise fringe views could get disproportionate resources. On the other hand, if you do weight this way, this may not be enough to make up for or prevent the harm. I'm not sure there's any satisfactory way to do this.
Forgive me for failing to notice this comment until now Michael! Although this response might not be speaking directly to your idea of 'robustly positive portfolios', I do just want to point out that there is a fairly substantive sense in which in the Property Rights Theory as I have already described it, theory-agents 'internalize negative (and positive) externalities.' It's just an instance of the Coase Theorem. Suppose that agent A is endowed with some decision right; some ways of exercising that decision right are regarded by agent B as harmful, whereas others are not; and B can pay A not to use the decision right in the ways that B regards as harmful. In this case, the opportunity cost to A of choosing to use the decision right in a way that B regards as harmful will include the cost of losing the side payment that A could have collected from B in return for using that decision right in a way that B does not regard as harmful. So, the negative externality is internalised.
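To make the opportunity-cost point concrete, here is a toy numeric illustration; the figures are invented for this example and are not from the paper:

```python
# A hypothetical case: A mildly prefers the option X that B regards as harmful,
# and B is willing to pay A up to the harm avoided for A to choose Y instead.
value_to_A = {"X": 10.0, "Y": 9.0}
harm_to_B = {"X": 5.0, "Y": 0.0}
side_payment = harm_to_B["X"]  # B will pay up to 5 for A to choose Y

# Once the available side payment is counted, choosing X costs A the forgone
# payment, so B's harm is reflected in A's own opportunity cost.
payoff_to_A = {"X": value_to_A["X"], "Y": value_to_A["Y"] + side_payment}
best_for_A = max(payoff_to_A, key=payoff_to_A.get)
print(best_for_A)  # "Y": the negative externality has been internalised
```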
Another potential issue with the approach you outline is extortion. Maybe B finds X extremely harmful, and A is (nearly) indifferent between it and another option Y, or even also somewhat prefers Y to X. A could threaten B with X to get resources from B (say to do much more of Y with). I don't think we'd want to allow this, at least to a significant degree, e.g. A shouldn't do (much) better than both X and Y this way, and B shouldn't have to pay (much) more than the resource value of the difference between X and Y according to A.
Somehow I missed your reply, too, and I just came back here thinking to ask this question, forgetting that I already asked it. I was also thinking of B paying A, and I agree that works.
However, it seems like this can be unfair to B, because it imposes all the costs onto B. Basically, the polluters don't pay for their pollution and are instead paid to pollute less or just pollute freely, whichever the polluters prefer.
And suppose what A wants to do is very bad according to B, but there's a second choice that's nearly as good according to A, but neutral to B. B might not have enough to pay A for A's second choice, but if we impose all of the costs (enough to completely offset the harms) onto A, then the harms will be prevented (or offset).
Or, you can imagine a fringe set of views whose most important goals are all harmful to the vast majority of views by credence. They can't be paid off to not cause harm. And maybe it's much cheaper to cause harm than to do good, so mitigating those harms could be costly. In such a case, I think you'd want to impose the externalities on the fringe views, or allow the supermajority to vote or pay to prevent the fringe views' acts (perhaps mixing with a moral parliament, or something like a constitution).
On the other hand, maybe we should think of "not caring" as the default, and you don't get to impose burdens on others to accommodate your concerns. And it seems like imposing the costs and benefits on those creating the externalities won't work nicely, because it could give fringe views that find others' top choices very harmful too many resources.
TL;DR (I haven't read the full paper), so this might already be addressed there.
FWIW my first impulse when reading the summary is that proportionality does not seem particularly desirable.
In particular:
I'm not so sure what to say about 2., but I want to note in response to 1. that although the Property Rights Theory (PRT) that I propose does not require any intertheoretic comparisons of choiceworthiness, it nonetheless licenses a certain kind of stakes sensitivity. PRT gives moral theories greater influence over the particular choice situations that matter most to them, and lesser influence over the particular choice situations that matter least to them.
That seemed like the case to me.
I still think that this is too weak and that theories should be allowed to entirely give up resources without trading, though this is more an intuition than a thoroughly meditated point.
I'm not sure about 2 (at least the second sentence) being desirable. We can already make trades, cooperate and coordinate with proportionality, and sometimes this happens just through how the (EA) labour market responds to you making a job decision. If what you have in mind is that you should bring the world allocation closer to proportional according to your own credences or otherwise focus on neglected views, then there's not really any principled reason to rule out trying to take into account allocations you can't possibly affect (causally or even acausally), e.g. the past and inaccessible parts of the universe/multiverse, which seems odd. Also, 1 might already account for 2, if 2 is about neglected views having relatively more at stake.
Some other things that could happen with 2:
You might overweight views you actually think are pretty bad.
I think this would undermine risk-neutral total symmetric views, because a) those are probably overrepresented in the universe relative to other plausible views because they motivate space colonization and expansionism, and b) it conflicts with separability, the intuition that what you can't affect (causally or acausally) shouldn't matter to your decision-making, a defining feature of the total view and sometimes used to justify it.
Also, AFAIK, the other main approaches to moral uncertainty aren't really sensitive to how others are allocating resources in a way that the proportional view isn't (except possibly through 1?). But I might be wrong about what you have in mind.
I don't understand 1) why this is the case or 2) why this is undesirable.
If the rest of my community seems obsessed with IDK longtermism and overallocating resources to it I think it's entirely reasonable for me to have my inner longtermist shut up entirely and just focus on near term issues.
I imagine the internal dialogue here between the longtermist and neartermist being like "look I don't know why you care so much about things that are going to wash off in a decade, but clearly this is bringing you a lot of pain, so I'm just going to let you have it"
I don't understand what you mean.
Well then separability is wrong. It seems to me that it matters that no one is working on a problem you deem important, even if that does not affect the chances of you solving the problem.
I am not familiar with other approaches to moral uncertainty, so probably you are right!
(Generally, I would not take what I am saying too seriously - I find it hard to separate my intuitions about values from intuitions about how the real world operates, and my responses are more off-the-cuff than meditated.)
The arbitrariness ("not really any principled reason") comes from your choice to define the community you consider yourself to belong to for setting the reference allocation. In your first comment, you said the world, while in your reply, you said "the rest of my community", which I assume to be narrower (maybe just the EA community?). How do you choose between them? And then why not the whole universe/multiverse, the past and the future? Where do you draw the lines and why? I think some allocations in the world, like by poor people living in very remote regions, are extremely unlikely for you to affect, except through your impact on things with global scope, like global catastrophe (of course, they don't have many resources, so in practice, it probably doesn't matter whether or not you include them). Allocations in inaccessible parts of the universe are far far less likely for you to affect (except acausally), but not impossible to affect with certainty, if you allow the possibility that we're wrong about physical limits. I don't see how you could draw lines non-arbitrarily here.
By risk-neutral total symmetric views, I mean risk-neutral expected value maximizing total utilitarianism and other views with such an axiology (but possibly with other non-axiological considerations), where lives of neutral welfare are neutral to add, better lives are good to add and worse lives are bad to add. Risk neutrality just means you apply the expected value directly to the sum and maximize that, so it allows fanaticism, St. Petersburg problems and the like, in principle.
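In symbols, just to pin down the claim (my notation, not anything from the comments above): writing u_i for the welfare of each actual or added individual, a risk-neutral total view maximizes the expectation applied directly to the total, rather than to some concave transform of it:

```latex
\text{risk-neutral total view:}\quad \max\ \mathbb{E}\Big[\textstyle\sum_i u_i\Big]
\qquad\text{vs.}\qquad
\text{risk-averse variant:}\quad \max\ \mathbb{E}\Big[f\Big(\textstyle\sum_i u_i\Big)\Big],\ f\ \text{concave}
```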
Rejecting separability requires rejecting total utilitarianism.
FWIW, I don't think it's unreasonable to reject separability or total utilitarianism, and I'm pretty sympathetic to rejecting both. Why can't I just care about the global distribution, and not just what I can affect? But rejecting separability is kind of weird: one common objection (often aimed at average utilitarianism) is that what you should do depends non-instrumentally on how well off the ancient Egyptians were.
If you found this post helpful, please consider completing HLI's 2022 Impact Survey.
Most questions are multiple-choice and all questions are optional. It should take you around 15 minutes depending on how much you want to say.