
The summary and introduction can be read below. The full paper is available here.

This working paper was produced as part of the Happier Lives Institute’s 2022 Summer Research Fellowship

Summary

Given the current state of our moral knowledge, it is entirely reasonable to be uncertain about a wide range of moral issues. Hence, it is surprising how little attention contemporary philosophers have paid (until the past decade) to moral uncertainty. In this paper, I have considered the prima facie plausible suggestion that appropriateness under moral uncertainty is a matter of dividing one’s resources between the moral theories in which one has credence, allowing each theory to use its resources as it sees fit. I have gone on to develop this approach into a fully-fledged Property Rights Theory, sensitive to many of the complications that we face in making moral decisions over time. This Property Rights Theory deserves to take its place as a leading theory of appropriateness under conditions of moral uncertainty.

Introduction

Distribution: Imagine that some agent J is devoting her life to ‘earning to give’: J is pursuing a lucrative career in investment banking and plans to donate most of her lifetime earnings to charity. According to the moral theory T_health in which J has 60% credence, by far and away the best thing for her to do with her earnings is to donate them to global health charities, and the next best thing is to donate them to charities that benefit future generations by fighting climate change. On the other hand, according to the moral theory T_future in which J has 40% credence, by far and away the best thing for her to do with her earnings is to donate them to benefitting future generations, and the next best thing is to donate them to global health charities. On all other issues, T_health and T_future are in total agreement: for instance, they agree on where J should work, what she should eat, and what kind of friend she should be. They only disagree about which charity J should donate to. Finally, neither T_health nor T_future is risk loving: each theory implies that an $x donation to a charity that the theory approves of is at least no worse than a risky lottery over donations to that charity whose expected donation is $x. In light of her moral uncertainty, what is it appropriate for J to do with her earnings?

According to one prima facie plausible proposal, it is appropriate for J to donate 60% of her earnings to global health charities and 40% of them to benefitting future generations – call this response Proportionality. Despite Proportionality’s considerable intuitive appeal, none of the theories of appropriateness under moral uncertainty thus far proposed in the literature support this simple response to Distribution.
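For concreteness, here is a minimal sketch of the arithmetic behind Proportionality in Distribution; the dollar figure and variable names are mine, purely for illustration.

```python
# Proportionality: split one's donable resources across moral theories in
# proportion to one's credence in each theory (illustrative figures only).
credences = {"T_health": 0.6, "T_future": 0.4}  # J's credences in the two theories
earnings = 100_000  # hypothetical donable earnings, in dollars

allocation = {theory: share * earnings for theory, share in credences.items()}
print(allocation)  # {'T_health': 60000.0, 'T_future': 40000.0}
```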

In this paper, I propose and defend a Property Rights Theory (henceforth: PRT) of appropriateness under moral uncertainty, which supports Proportionality in Distribution. In §2.1, I introduce the notion of appropriateness. In §2.2, I introduce several of the theories of appropriateness that have been proposed thus far in the literature. In §2.3, I show that these theories fail to support Proportionality. In §§3.1-3.3, I introduce PRT and I demonstrate that it supports Proportionality in Distribution. In §§3.4-3.9, I discuss the details. In §3.10, I extend my characterisation of PRT to cover cases where an agent faces a choice between discrete options, as opposed to resource distribution cases like Distribution. In §4, I argue that PRT compares favourably to the alternatives introduced in §2.2. In §5, I conclude.

Read the full paper...

 

Acknowledgements: For helpful comments and conversations, I wish to thank Conor Downey, Paul Forrester, Hilary Greaves, Daniel Greco, Shelly Kagan, Marcus Pivato, Michael Plant, Stefan Riedener, John Roemer, Christian Tarsney, and Martin Vaeth. I also wish to thank the Forethought Foundation and the Happier Lives Institute for their financial support. 

Comments

I really enjoyed reading this, and it has strong implications for the allocation of resources in EA. With the MEC (maximize expected choiceworthiness) framework, totalist theories dominate all the decision-making. I can't say that I am convinced by PRT, but I hope it gets discussed more widely.

In the paper, you write:

Each theory-agent A_j is endowed with a share of the decision maker’s resources in χ_i proportional to the decision maker’s credence at the time of χ_n in the corresponding moral theory T_j.

So, effectively, resource shares always track credences in theories. Wouldn't this be unfair to the theories/theory-agents who invest disproportionately in growing or preserving their resources?

The theories/theory-agents could coordinate to save and invest to avoid a tragedy of the commons, but some may still be more interested in saving/investing than others, and will effectively be forced to transfer wealth to those that spend more. Some may even be willing to put themselves in net debt, but this debt could be partially transferred to others.

As in your example, I also don't think we'd want to leave a theory-agent with all of the gains it made if its theory lost almost all of its support. Taking only from its initial endowment doesn't seem to go far enough.

 

There are probably different ways to redistribute resources after credences change, but I imagine they'll all be problematic or at least have weird consequences, even if not objectionable per se.

 


Some ideas, and problems with each, follow in this section of the comment.

 

(1) The approach that at first seemed most obvious to me is to rescale resources proportionally with the relative changes to credences at each step:

Suppose your current shares of resources are $r_1, \dots, r_n$ across theories $T_1, \dots, T_n$. These should sum to 1. They need not necessarily be nonnegative, in case we want to allow net debts. Suppose your credences have been multiplied by $m_1, \dots, m_n$ (new credence divided by old credence) since you last checked and redistributed. Then, you would want to distribute resources proportionally to the products $r_i m_i$. This may not sum to 1, so you have to normalize by the dot product, $\sum_i r_i m_i$, to get the actual shares.

I expect the renormalization to lead to problems. If your credence in $T_1$ increases, and your credence in $T_2$ decreases by the same amount in absolute terms, but your credence in $T_3$ is unaffected, it seems weird and potentially problematic for this to affect $T_3$'s resources, especially if it means losing resources.

Also, if a theory with net negative resources (net debt) has credence in it increased, it shouldn't be forced further into debt, and it should normally gain resources on net. In those cases, it's not clear what we should do and why. We could just replace $m_i$ with $1/m_i$ or 1 in those cases, before renormalizing.
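A minimal sketch of this multiplicative rule, as I read it (variable names are mine; the debt tweak just above is not included):

```python
def rescale_multiplicative(shares, multipliers):
    """Approach (1): scale each theory's resource share r_i by the relative
    change in its credence m_i, then renormalize by the dot product so the
    shares sum to 1 again."""
    products = [r * m for r, m in zip(shares, multipliers)]
    total = sum(products)  # the dot product used for renormalization
    return [p / total for p in products]

# If shares have drifted away from credences, renormalization can move resources
# even for a theory whose credence is unchanged (the worry raised just above):
shares = [0.2, 0.5, 0.3]          # hypothetical current resource shares
multipliers = [1.25, 0.75, 1.0]   # credences went 0.4/0.4/0.2 -> 0.5/0.3/0.2
print(rescale_multiplicative(shares, multipliers))
# -> roughly [0.270, 0.405, 0.324]; the third theory gains despite no credence change
```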

 

(2) Instead then, we might adjust resource shares identically with our absolute credence adjustments, $r_i \mapsto r_i + (c_i' - c_i)$, i.e. if the credence in theory $T_i$ has changed in absolute terms from $c_i$ to $c_i'$, theory $T_i$'s resources should change from $r_i$ to $r_i + c_i' - c_i$. Because $\sum_i (c_i' - c_i) = 0$ (including if we add or drop theories), we wouldn't need to renormalize, and theories whose credences don't change aren't affected. This seems more natural and less ad hoc than the first approach.

But, we may sometimes have $r_i + c_i' - c_i \leq 0$, even with $c_i' > 0$, because the theory-agent has been spending down its resources or growing them less than others, and a theory with net positive resources and still positive credence in it is forced into debt or to give up the rest of its resources, which doesn't seem fair. It's not clear how much it should lose.
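And a sketch of the additive rule, again with my own variable names, including the spend-down worry just described:

```python
def adjust_additive(shares, old_credences, new_credences):
    """Approach (2): change each theory's resource share by the absolute change
    in its credence; no renormalization is needed, since credence changes sum to 0."""
    return [r + (c_new - c_old)
            for r, c_old, c_new in zip(shares, old_credences, new_credences)]

print(adjust_additive([0.2, 0.5, 0.3], [0.4, 0.4, 0.2], [0.5, 0.3, 0.2]))
# -> [0.3, 0.4, 0.3]; the theory with unchanged credence is unaffected

# A theory that has spent down its share can be pushed into debt despite
# keeping positive credence:
print(adjust_additive([0.05, 0.65, 0.3], [0.4, 0.4, 0.2], [0.3, 0.5, 0.2]))
# -> [-0.05, 0.75, 0.3]
```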

 

(3) Another approach would be to let the theory-agents decide ahead of time how resources should be redistributed with changes in credences, by some kind of vote (moral parliament). We don't need to decide this for them. But this could disproportionately favour more popular views (or coalitions of views). You could have them vote as if they don't know the credences in, or the resources held by, each theory (or as if they assume they'd only get a small fraction of the resources), and impartially across theories, to prevent biased rules and tyranny of the majority.

 

(4) Or we could use ad hoc fixes to the first two and hack them together. In (2), don't let a theory that had net positive resources go into debt because of credence changes, but allow it to hit 0, and just redistribute less to the theories that should have gained, in proportion to how much they should have gained (from the theory that hit 0, ideally). Then we could average this with some version of (1).
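A rough sketch of the hack described in (4), clamping at zero and taking the shortfall from the gainers (the averaging with (1) is left out, and the implementation details are my own guesses):

```python
def adjust_hybrid(shares, old_credences, new_credences):
    """Apply the additive rule from (2), but don't let a theory that had
    non-negative resources be pushed below zero; take the shortfall back from
    the theories that gained, in proportion to their gains."""
    deltas = [c_new - c_old for c_old, c_new in zip(old_credences, new_credences)]
    adjusted = [r + d for r, d in zip(shares, deltas)]

    shortfall = 0.0
    for i, (r, a) in enumerate(zip(shares, adjusted)):
        if r >= 0 and a < 0:      # would have been forced into debt
            shortfall += -a
            adjusted[i] = 0.0     # allow it to hit 0, but no lower

    total_gain = sum(d for d in deltas if d > 0)
    if shortfall > 0 and total_gain > 0:
        for i, d in enumerate(deltas):
            if d > 0:
                adjusted[i] -= shortfall * (d / total_gain)
    return adjusted

print(adjust_hybrid([0.05, 0.65, 0.3], [0.4, 0.4, 0.2], [0.3, 0.5, 0.2]))
# -> [0.0, 0.7, 0.3]: the first theory hits zero instead of going into debt
```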

 


 

At a minimum, I'd probably want the following (roughly encoded as checks in the sketch after this list):

  1. If the credence in a theory hasn't decreased, it shouldn't lose resources when redistributing.
    1. In particular, if a theory-agent puts itself into net debt, it shouldn't get to transfer that debt to others just because the credence in that theory decreased. Or, maybe this should be an exception to the rule, but it could incentivize debt-taking in a bad way.
  2. A theory shouldn't be put into debt or left with no resources after redistribution with credence changes, if it had positive net resources just before and still has positive overall credence in it after.
  3. Changes in resources should scale with the changes in credences and go in the same direction (or not change, in some cases). Substantial changes in credences should normally lead to substantial changes in resources, and small changes in credences should lead to small (or 0) changes in resources. If a theory had positive resources before redistribution and lost (almost) all credence in it, it should transfer (almost) all of its resources to others (maybe paying debts to other theory-agents first?).
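A sketch of the first two conditions as programmatic checks (condition 3 is left informal; all names and figures are mine):

```python
def satisfies_minimum_conditions(old_shares, new_shares,
                                 old_credences, new_credences, eps=1e-9):
    """Check a proposed redistribution against conditions 1 and 2 above.
    Inputs are parallel lists indexed by theory."""
    for r_old, r_new, c_old, c_new in zip(old_shares, new_shares,
                                          old_credences, new_credences):
        # 1. If a theory's credence hasn't decreased, it shouldn't lose resources.
        if c_new >= c_old and r_new < r_old - eps:
            return False
        # 2. A theory with positive resources and still-positive credence shouldn't
        #    be left with nothing or pushed into debt.
        if r_old > 0 and c_new > 0 and r_new <= 0:
            return False
    return True

# The plain additive rule from (2) violates condition 2 in the spend-down example:
print(satisfies_minimum_conditions([0.05, 0.65, 0.3], [-0.05, 0.75, 0.3],
                                   [0.4, 0.4, 0.2], [0.3, 0.5, 0.2]))  # False
# A redistribution that respects both encoded conditions:
print(satisfies_minimum_conditions([0.2, 0.5, 0.3], [0.3, 0.4, 0.3],
                                   [0.4, 0.4, 0.2], [0.5, 0.3, 0.2]))  # True
```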

Finally, suppose that W is negative. In that case, A_health and A_future’s initial wealth endowments c_health W and c_future W will likewise be negative. In other words: A_health and A_future will initially be endowed with debts. Both A_health and A_future will have to earn enough money – either through working, or through trading with each other – to pay off their initial debts before they can donate any money to charitable causes.

 

This seems kind of inflexible to me as a strict rule, rather than a guideline, if that's the intention. Why shouldn't they be able to donate? Maybe they expect it to be easier to earn the money back in the future, and some donation opportunities are pressing now. Furthermore, "donation" is a somewhat artificial category: taking on some kinds of debt could be seen as donations if they will benefit future direct donations or future direct work. Taking on lower-paying direct work instead of going for higher-paying jobs to pay off the debt faster also seems like a donation.

TL;DR (I've only read the summary), so this might be addressed in the paper.

FWIW my first impulse when reading the summary is that proportionality does not seem particularly desirable.

In particular:

  1. I think it's reasonable for one of the moral theories to give up part of its allotted resources if the other moral theory believes the stakes are sufficiently high. The distribution should be stakes-sensitive (though it is not clear how to make intertheoretic comparisons of stakes).
  2. The answer does not seem to guide individual action very well, at least in the example. Even accepting proportionality, it seems that how I split my portfolio should be influenced by the resource allocation of the world at large.

I'm not so sure what to say about 2., but I want to note in response to 1. that although the Property Rights Theory (PRT) that I propose does not require any intertheoretic comparisons of choiceworthiness, it nonetheless licenses a certain kind of stakes sensitivity. PRT gives moral theories greater influence over the particular choice situations that matter most to them, and lesser influence over the particular choice situations that matter least to them.

PRT gives moral theories greater influence over the particular choice situations that matter most to them, and lesser influence over the particular choice situations that matter least to them.

That seemed like the case to me.

I still think that this is too weak and that theories should be allowed to entirely give up resources without trading, though this is more an intuition than a thoroughly considered point.

I'm not sure about 2 (at least the second sentence) being desirable. We can already make trades, cooperate and coordinate with proportionality, and sometimes this happens just through how the (EA) labour market responds to you making a job decision. If what you have in mind is that you should bring the world allocation closer to proportional according to your own credences or otherwise focus on neglected views, then there's not really any principled reason to rule out trying to take into account allocations you can't possibly affect (causally or even acausally), e.g. the past and inaccessible parts of the universe/multiverse, which seems odd. Also, 1 might already account for 2, if 2 is about neglected views having relatively more at stake.

Some other things that could happen with 2:

  1. You might overweight views you actually think are pretty bad.

  2. I think this would undermine risk-neutral total symmetric views, because a) those are probably overrepresented in the universe relative to other plausible views because they motivate space colonization and expansionism, and b) it conflicts with separability, the intuition that what you can't affect (causally or acausally) shouldn't matter to your decision-making, a defining feature of the total view and sometimes used to justify it.

Also, AFAIK, the other main approaches to moral uncertainty aren't really sensitive to how others are allocating resources in a way that the proportional view isn't (except possibly through 1?). But I might be wrong about what you have in mind.

then there's not really any principled reason to rule out trying to take into account allocations you can't possibly affect (causally or even acausally), e.g. the past and inaccessible parts of the universe/multiverse, which seems odd

I don't understand 1) why this is the case or 2) why this is undesirable.

If the rest of my community seems obsessed with, IDK, longtermism and overallocating resources to it, I think it's entirely reasonable for me to have my inner longtermist shut up entirely and just focus on near-term issues.

I imagine the internal dialogue here between the longtermist and neartermist being like "look I don't know why you care so much about things that are going to wash off in a decade, but clearly this is bringing you a lot of pain, so I'm just going to let you have it"

I think this would undermine risk-neutral total symmetric views, because a) those are probably overrepresented in the universe

I don't understand what you mean.

it conflicts with separability, the intuition that what you can't affect (causally or acausally) shouldn't matter to your decision-making

Well then separability is wrong. It seems to me that it matters that no one is working on a problem you deem important, even if that does not affect the chances of you solving the problem.

other main approaches to moral uncertainty aren't really sensitive to how others are allocating resources in a way that the proportional view isn't

I am not familiar with other proposals to moral uncertainty, so probably you are right!

(Generally, I would not take what I am saying too seriously - I find it hard to separate my intuitions about values from intuitions about how the real world operates, and my responses are more off-the-cuff than considered.)

The arbitrariness ("not really any principled reason") comes from your choice of how to define the community you consider yourself to belong to when setting the reference allocation. In your first comment, you said the world, while in your reply, you said "the rest of my community", which I assume to be narrower (maybe just the EA community?). How do you choose between them? And then why not the whole universe/multiverse, the past and the future? Where do you draw the lines and why? I think some allocations in the world, like those by poor people living in very remote regions, are extremely unlikely for you to affect, except through your impact on things with global scope, like global catastrophe (of course, they don't have many resources, so in practice, it probably doesn't matter whether or not you include them). Allocations in inaccessible parts of the universe are far, far less likely for you to affect (except acausally), but not certainly impossible to affect, if you allow the possibility that we're wrong about physical limits. I don't see how you could draw lines non-arbitrarily here.

By risk-neutral total symmetric views, I mean risk-neutral expected value maximizing total utilitarianism and other views with such an axiology (but possibly with other non-axiological considerations), where lives of neutral welfare are neutral to add, better lives are good to add and worse lives are bad to add. Risk neutrality just means you apply the expected value directly to the sum and maximize that, so it allows fanaticism, St. Petersburg problems and the like, in principle.
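In symbols (my notation, not the commenter's), such a view ranks a lottery $L$ over outcomes by its expected total welfare,

$$V(L) = \mathbb{E}_L\left[\sum_i u_i\right],$$

where the sum ranges over everyone who ever exists in the realized outcome and $u_i$ is individual $i$'s lifetime welfare; maximizing this expectation directly, with no risk discounting, is what opens the door to fanaticism and St. Petersburg-style cases.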

Rejecting separability requires rejecting total utilitarianism.

FWIW, I don't think it's unreasonable to reject separability or total utilitarianism, and I'm pretty sympathetic to rejecting both. Why can't I just care about the global distribution, and not just what I can affect? But rejecting separability is kind of weird: one common objection (often aimed at average utilitarianism) is that what you should do depends non-instrumentally on how well off the ancient Egyptians were.

I'm interested in promoting robustly positive portfolios of interventions (e.g. in my post here).

Could the property rights approach be adapted to internalize negative (and positive) externalities? Suppose view A pursues an intervention that is net negative according to view B. Should A compensate B at the minimum cost to make up for the harm or prevent it from actually materializing*? A and B could look for some compromise intervention together instead, but it may actually be better for A to pursue its own intervention that's harmful according to B and for B to make up for that harm (and also pursue its own ends separately).

*perhaps weighting this compensation by some function of the credences in each view? Otherwise fringe views could get disproportionate resources. On the other hand, if you do weight this way, this may not be enough to make up for or prevent the harm. I'm not sure there's any satisfactory way to do this.

Forgive me for failing to notice this comment until now Michael! Although this response might not be speaking directly to your idea of 'robustly positive portfolios', I do just want to point out that there is a fairly substantive sense in which in the Property Rights Theory as I have already described it, theory-agents 'internalize negative (and positive) externalities.' It's just an instance of the Coase Theorem. Suppose that agent A is endowed with some decision right; some ways of exercising that decision right are regarded by agent B as harmful, whereas others are not; and B can pay A not to use the decision right in the ways that B regards as harmful. In this case, the opportunity cost to A of choosing to use the decision right in a way that B regards as harmful will include the cost of losing the side payment that A could have collected from B in return for using that decision right in a way that B does not regard as harmful. So, the negative externality is internalised. 
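A toy numerical version of that Coasean point, with made-up payoffs (not from the paper):

```python
# A holds the decision right and chooses between options X and Y.
value_to_A = {"X": 10, "Y": 9}   # A mildly prefers X
harm_to_B = {"X": 8, "Y": 0}     # B regards X as harmful and Y as harmless

# B offers to pay A up to the harm avoided (8) for choosing Y; any price
# strictly between 1 and 8 leaves both parties better off than unpaid X.
side_payment = 7

# Counting the forgone payment as an opportunity cost of X, A prefers Y,
# so the negative externality is internalized.
effective_A = {"X": value_to_A["X"],                  # 10, payment forfeited
               "Y": value_to_A["Y"] + side_payment}   # 9 + 7 = 16
choice = max(effective_A, key=effective_A.get)
net_to_B = -harm_to_B[choice] - (side_payment if choice == "Y" else 0)
print(choice, effective_A, net_to_B)  # Y {'X': 10, 'Y': 16} -7 (better than -8)
```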

Another potential issue with the approach you outline is extortion. Maybe B finds X extremely harmful, and A is (nearly) indifferent between it and another option Y, or even also somewhat prefers Y to X. A could threaten B with X to get resources from B (say to do much more of Y with). I don't think we'd want to allow this, at least to a significant degree, e.g. A shouldn't do (much) better than both X and Y this way, and B shouldn't have to pay (much) more than the resource value of the difference between X and Y according to A.

Somehow I missed your reply, too, and I just came back here thinking to ask this question, forgetting that I already asked it. I was also thinking of B paying A, and I agree that works.

However, it seems like this can be unfair to B, because it imposes all the costs onto B. Basically, the polluters don't pay for their pollution and are instead paid to pollute less or just pollute freely, whichever the polluters prefer.

And suppose what A wants to do is very bad according to B, but there's a second choice that's nearly as good according to A, but neutral to B. B might not have enough to pay A for A's second choice, but if we impose all of the costs (enough to completely offset the harms) onto A, then the harms will be prevented (or offset).

Or, you can imagine a fringe set of views whose most important goals are all harmful to the vast majority of views by credence. They can't be paid off to not cause harm. And maybe it's much cheaper to cause harm than to do good, so mitigating those harms could be costly. In such a case, I think you'd want to impose the externalities on the fringe views, or allow the supermajority to vote or pay to prevent the fringe views' acts (perhaps mixing with a moral parliament, or something like a constitution).

On the other hand, maybe we should think of "not caring" as the default, and you don't get to impose burdens on others to accommodate your concerns. And it seems like imposing the costs and benefits on those creating the externalities won't work nicely, because it could give fringe views that find others' top choices very harmful too many resources.
