Marcus_A_Davis

Bio

CEO of Rethink Priorities

Hey Saulius,

I’m very sorry that you felt that way – that wasn’t our intention. We aren’t going to get into the details of your resignation in public, but as you mention in your follow-up comment, neither this incident nor our disagreement over WAW views was the reason for your resignation.

As you recall, you did publish your views on wild animal welfare publicly. Because RP leadership was not convinced by the reasoning in your piece, we rejected your request to publish it under the RP byline as an RP article representative of an RP position. This decision was based on the work itself; OP was not at all a factor involved in this decision. Moreover, we made no attempt to censor your views or prevent them from being shared (indeed I personally encouraged you to publish the piece if you wanted).

To add some additional context without getting into the details of this specific scenario, we can share some general principles about how we approach donor engagement.

We have ~40 researchers working across a variety of areas. Many of them have views about what we should do and what research should be done. By no means do we expect our staff to publicly or privately agree with the views of leadership, let alone with our donors. Still, we have a donor engagement policy outlining how we like to handle communication with donors.

One relevant dimension is that if one of our researchers, especially while representing RP, is sending something to a funder with the plausible implication that one of the main funders of a department should seriously reduce or stop funding that department, we should know they are planning to do so, and roughly what is being said, before they do so, so that we can be prepared. While we don’t want to be seen as censoring our researchers, we do think it’s important to approach these sorts of things with clarity and tact.

There are also times when we think it is important for RP to speak with a unified voice to our most important donors and represent a broader, coordinated consensus on what we think. Or, if minority views of one of our researchers that RP leadership disagrees with are to be considered, this needs to be properly contextualized and coordinated so that we can interact with our donors with full knowledge of what is being shared with them (for example, we don’t want to accidentally convey that the view of a single member of staff represents RP’s overall position).

With regard to cause prioritization, funders don’t filter or factor into our views in any way. They haven’t been involved in any way with setting what we do or don’t say in our cause prioritization work. Further, as far as I’m aware, OP hasn’t adopted the kind of approach we’ve suggested in any of our major cause prioritization work, whether on moral weights or in the CURVE sequence.

We mean to say that the ideas for these projects and the vast majority of the funding were ours, including the moral weight work. To be clear, these projects were the result of our own initiative. They wouldn't have gone ahead when they did without us insisting on their value.

For example, after our initial work on invertebrate sentience and moral weight in 2018-2020, OP funded $315K in 2021 to support this work. In 2023 they also funded $15K for the open access rights to a forthcoming book on the topic. Over that 2021-2023 period, we spent another ~$603K on public-facing moral weight work, with that money coming from individual donors and RP's unrestricted funding.

Similarly, the CURVE sequence from WIT this year was our idea, and we are on track to spend ~$900K against ~$210K funded by Open Phil on WIT. Of that $210K, the first $152K went to projects related to Open Phil’s internal prioritization rather than the public work of the CURVE sequence. The other $58K went towards the development of the CCM. So overall, less than 10% of our costs for public WIT work this year were covered by OP (and no other institutional donors were covering it either).
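To make the arithmetic behind that under-10% figure explicit, here is a minimal sketch. The dollar amounts are the approximate figures stated above; treating only the ~$58K of CCM development as the OP funding that counts toward the public CURVE work is an interpretive assumption.

```python
# Rough reconstruction of the WIT funding split described above. Figures are the
# approximate ones stated in the comment; attributing only the ~$58K CCM grant
# to "public WIT work" is an interpretive assumption.
rp_public_wit_spend = 900_000          # ~RP's own spending on public WIT work this year
op_total_wit = 210_000                 # ~total Open Phil funding for WIT
op_internal_prioritization = 152_000   # OP funding for OP's internal prioritization work
op_ccm_development = op_total_wit - op_internal_prioritization  # ~$58K toward the CCM

total_public_wit_cost = rp_public_wit_spend + op_ccm_development
op_share = op_ccm_development / total_public_wit_cost

print(f"OP share of public WIT costs: {op_share:.1%}")  # ~6.1%, i.e. under 10%
```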

Hey Vasco, thanks for the thoughtful reply.

I do find fanaticism problematic at a theoretical level, since it suggests spending all your time and resources on quixotic quests. I would go one further and say that if you have a set of axioms that implies something like fanaticism, this should at least potentially count against that combination of axioms. That said, I definitely think, as Hayden Wilkinson pointed out in his In Defence of Fanaticism paper, there are many weaknesses with alternatives to EV.

Also, the idea that fanaticism doesn’t come up in practice doesn’t seem quite right to me. On one level, yes, I’ve not been approached by a wizard asking for my wallet and do not expect to be. But I'm also not actually likely to be approached by anyone threatening to money-pump me (and even if I were, I could reject the series of bets), and this is often held up as a weakness of EV alternatives or certain sets of beliefs. On another level, to the extent we can say fanatical claims don’t come up in practice, it is because we’ve already decided it’s not worth pursuing them and discount the possibility, including the possibility of going looking for actions that would be fanatical.* Within the logic of EV, even if you thought there weren’t any ways to get the fanatical result with ~99% certainty, it would seem you’d need to be ~100% certain to fully shut the door on at least expending resources to see whether you could get the fanatical option. To the extent we don’t go around doing that, I think it’s largely because we are practically rounding those fanatical possibilities down to 0 without consideration (to be clear, I think this is the right approach).
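As a toy illustration of why pure EV keeps that door open: the payoff and probability numbers below are invented for illustration, not drawn from any real estimate.

```python
# Toy illustration: under pure expected value, a "fanatical" gamble with an
# astronomically large payoff dominates an ordinary sure thing no matter how
# tiny its probability, as long as that probability is not rounded down to 0.
sure_thing_value = 1_000          # value of the safe, ordinary action
fanatical_payoff = 10**30         # hypothetical astronomical payoff
fanatical_probability = 1e-15     # vanishingly small, but not exactly zero

ev_sure_thing = sure_thing_value
ev_fanatical = fanatical_probability * fanatical_payoff   # = 1e15

# Pure EV says to take the fanatical gamble unless the probability is set to
# exactly 0 -- the "practically rounding down" move described above.
print(ev_fanatical > ev_sure_thing)   # True
```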

All the other problems attributed to expected utility maximisation only show up if one postulates the possibility of unbounded or infinite value, which I do not think makes sense.

I don’t think this is true. As I said in response to Michael St. Jules in the comments, EV maximization (and EV with rounding down, unless it’s modified here too) also argues for a kind of edge-case fanaticism where, provided a high enough EV if successful, you are obligated to take an action that has only, say, a 50.000001% chance of being positive, even if the downside is similarly massive.

It’s really not clear to me that the rational thing to do is to consistently bet on actions that would impact a lot of possible lives but have, say, a ~0.0001% chance of making a difference, are net positive in expectation, and yet have a ~49.999999% chance of causing lots of harm. This seems like a problem even within a finite and bounded utility function for pure EV.

I am confused about why RP is still planning to invest significant resources in global health and development… Maybe a significant fraction of RP's team believes non-hedonic benefits to be a major factor?

I’ve not polled internally, but I don’t think the non-hedonic benefits issue is a driving force inside RP. Speaking for myself, I do think hedonic goods make up more than half of what makes things valuable, at least in part for the reasons outlined in that post.

The reasons we work across areas in general are because of differences in the amount of money in the areas, the number of influenceable actors, the non-fungibility of the resources in the spaces (both money and talent), and moral and decision-theoretic uncertainty.

In this particular comparison of GHD and AW, there are hundreds of millions more plausibly influenceable dollars in the GHD space than in the AW space. For example, GiveWell obviously isn’t going to shift its resources to animal welfare, but it still moves a lot of money and could do so more effectively in certain cases. GiveWell alone likely moves more money than all of the farm animal welfare spending in the world by non-governmental actors combined, and that total includes a large number of animal actors I think it’s not plausible to affect with research. Further, I think most people who work in most spaces aren’t “cause neutral” and, for example, the counterfactual for our GHD researchers isn’t being paid by RP to do AW research that influences even a fraction of the money they could influence in GHD.

Additionally, you highlight that AW looks more cost-effective than GHD, but you did not note that AMF looked pretty robustly positive across different decision theories, and this was not true, say, of any of the x-risk interventions we considered in the series or of some of the animal interventions. So one additional reason to do GHD work is the robustness of the value proposition.

Ultimately, though, I’m still unsure about what the right overall approach is to these types of trade-offs and I hope further work from WIT can help clarify how best to make these tradeoffs between areas.

*A different approach to resisting this conclusion is to assert a kind of claim that you must drop your probability in claims of astronomical value, and that this always balances out increases in claims of value such that it's never rational within EV to act on them. I'm not certain this is wrong but, like with other approaches to this issue, within the logic of EV it seems you need to be ~100% certain this is correct to not pursue fanatical claims anyway. You could say in reply that the rules of EV reasoning don't apply to claims about how you should reason about EV itself, and maybe that's right and true. But these sure seem like patches on a theory with weaknesses, not clear truths anyone is compelled to accept on pain of being irrational. Kludges and patches on theories are fine enough. It's just not clear to me this possible move is superior to, say, just biting the bullet that you need to do rounding down to avoid this type of outcome.

Thanks for the engagement, Michael.

I largely agree with your notes and caveats.

However, on this:

Expected utility maximization can be guaranteed to avoid fanaticism while satisfying the standard EUT axioms (and countable extensions), with a bounded utility function and the bounds small enough or marginal returns decreasing fast enough, in relative terms… In my view, expected utility with a bounded utility function (not difference-making) is the most instrumentally rational of the options, and it and boundedness with respect to differences seem the most promising, but have barely been discussed in the sequence (if at all?). I would recommend exploring these options more.

I’m definitely up for exploring a variety of further options. We didn’t explore all possible options in this series, and I think we could, in theory, spend a lot more time investigating possibilities, including some combinations of theories and more edge-case versions of particular views, like the WLU variants you lay out.

However, while I think it is plausible EV could avoid some version of fanaticism that way, it still seems vulnerable to a very related issue, like the following.

It seems there are actually two places where rounding down or bound setting needs to happen for EV to avoid issues with particularly risky gambles: (1) for really low probabilities (e.g., 1 in 100 trillion) with really high outcomes, and (2) around the 50% line distinguishing actions that lean net positive from those that are neutral or negative in expectation. Conceptually, these are very similar, but practically there may be different implications for doing them.

While it seems a bounded EV function with really steeply declining marginal returns could avoid the fanaticism of (1) (though this itself creates counterintuitive results), it doesn’t seem like this type of solution alone would resolve (2), where the decision point is whether something leans net positive, possibly only barely. That is, there are many choices where the sign of the action is uncertain, and this applies, among other things, to x-risk interventions that have the possibility of a very large expected utility if the action succeeds. Practically, it seems these types of choices are likely very common for charitable actors.
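A minimal sketch of the distinction between (1) and (2), assuming an illustrative bounded utility function u(x) = B·tanh(x/B); the function, the bound, and all the numbers are invented for illustration and are not anything used in the CURVE sequence itself.

```python
import math

B = 1_000_000  # utility bound (illustrative choice)

def u(x: float) -> float:
    """Bounded utility: approaches +/-B as x grows large in magnitude."""
    return B * math.tanh(x / B)

# Case (1): tiny probability of an astronomical payoff. The bound caps the
# upside, so the fanatical gamble no longer beats a modest sure thing.
ev_fanatical = 1e-12 * u(10**30)     # ~1e-12 * 1e6 = 1e-6
ev_sure_thing = u(1_000)             # ~1,000
print(ev_fanatical > ev_sure_thing)  # False: the bound handles case (1)

# Case (2): a 50.0001% chance of a huge gain vs. a 49.9999% chance of an
# equally huge loss. Both outcomes sit at the bound, so expected utility is
# still (barely) positive and pure EU maximization says to take the gamble.
p = 0.500001
ev_edge_case = p * u(10**9) + (1 - p) * u(-10**9)
print(ev_edge_case > 0)              # True: the bound does not handle case (2)
```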

If, despite a really large expected utility in your bounded function, you don’t think we should always take an action that is only, say, 50.0001% likely to be positive, then you wind up in a very similar place with regard to being “mugged” by high-value outcomes that are not just unlikely to pay out but almost equally likely to cause harm: you think something has gone awry in EV. And it doesn’t seem that bounds designed for avoiding really low probability but high EV outcomes will help you avoid this.

To be clear, I haven’t reasoned this out entirely, and I will just preemptively grant it’s possible you could create a different “bound” that would act not just on small probabilities but also on these edge cases where EU suggests taking these types of gambles. But if you do that, it looks a lot like what you are doing is introducing a difference-making criterion into your decision theory. To the extent you may think this type of modified EU is viable, it is because it mimics the aversion of these other theories to certain types of uncertainty.

Basically, I’m actually not confident that this type of modification should matter much for us. The axiom choices matter here for which theory to put the most weight in but I’m unsure this type of distinction is buying you much practically if, say, after you make them you still end up with a set of theoretical options that look in practice like pure EV vs EV with rounding down vs something like WLU vs something like REU.

EDIT: grammar fix.

In trying to convince people to support global health charities I don't think I've ever gotten the objection "but people in other countries don't matter" or "they matter far less than Americans", while I expect vegan advocates often hear that about animals.

I have gotten the latter one explicitly and the former implicitly, so I'm afraid you should get out more often :).

More generally, the idea that foreigners and/or immigrants don't matter, or matter little compared to native-born locals, is fundamental to political parties around the world. It's a banal take in international politics. Sure, some opposition to global health charities is an implied or explicit empirical claim about the role of government. But fundamentally, not all of it is: a lot of people don't value the lives of the out-group, and for much of the world's population, people not in your country are in the out-group (or at least not in the in-group).

First, I think GiveWell's research, say, is mostly consumed by people who agree people matter equally regardless of which country they live in.

GiveWell donors are not representative of all humans. I think a large fraction of humanity would select the "we're all equal" option on a survey but clearly don't actually believe it or act on it (which brings us back to revealed preferences in trades like those humans make about animal lives).

But even if none of that is true, were someone to make this argument about the value of the global poor, the best moral response (I make no claims about what's empirically persuasive) is "make a coherent and defensible argument against the equal moral worth of humans, including the global poor", not something like "most humans actually agree that the global poor have equal value, so don't stray too far from equality in your assessment." If you do the latter, you are making a contingent claim based on a given population at a given time. To put it mildly, for most of human history I do not believe we would even have gotten people to half-heartedly select the "moral equality for all humans" option on a survey. For me at least, we aren't bound in our philosophical assessment of value by popular belief, whether here or for animal welfare.

David's post is here: Perceived Moral Value of Animals and Cortical Neuron Count

What do you think of this rephrasing of your original argument:

I suspect people rarely get deeply interested in the value of foreign aid unless they come in with an unusually high initial intuitive view that being human is what matters, not being in my country... If you somehow could convince a research group, not selected for caring about non-Americans, to pursue this question in isolation, I'd predict they'd end up with far less foreign-aid-friendly results.

I think this argument is very bad, and I suspect you do too. You can rightfully point out that, in this context, someone starting out at the 5th percentile before going into a foreign aid investigation and then determining foreign aid is much more valuable than the general population thinks would be, in some sense, stronger evidence than if they had instead started at the 95th percentile. However, that seems not super relevant. What's relevant is whether it is defensible at all to norm to a population based on their work on a topic given a question of values like this (that, or whether there is some disanalogy between this and animals).

Generally, I think the typical American, when faced with real tradeoffs (and they actually are faced with these tradeoffs implicitly as part of a package vote), doesn't value the lives of the global poor equally to the lives of their fellow Americans. More importantly, I think you shouldn't norm where your values on global poverty end up after investigation back to what the typical American thinks. I think you should weigh the empirical and philosophical evidence about how to value the lives of the global poor directly and not do much, if any, reference class checking against other people's views on the topic. The same argument holds for whether and how much we should value people 100 years from now, after accounting for empirical uncertainty.

Fundamentally, the question isn't what people substantively do think (except for practical purposes); the question is what beliefs are defensible after weighing the evidence. I think it's fine to be surprised by what RP's moral weight work says on capacity for welfare, and I think there is still high uncertainty in this domain. I just don't think either of our priors, or the general population's priors, about the topic should be taken very seriously.

Maybe. We're a little unsure about this right now. The code base for this is part of the bigger Cross-Cause Cost-Effectiveness Model, and we haven't made a final determination on whether we will release it.

Jeff, are you saying you think "an intuition that a human year was worth about 100-1000 times more than a chicken year" is a starting point of "unusually pro-animal views"?

In some sense, this seems true relative to the views most humans imply through their actions. But, as Wayne pointed out above, this same critique could apply to, say, the typical American's views about global health and development. Generally, it doesn't seem to buy much to frame things relative to people who've never thought about a given topic substantively, and I don't think you'd consider this a good critique of a foreign aid think tank looking into how much to value global health and development.

Maybe you are making a different point here?

Also, it would help if you were more explicit about what you think a neutral baseline is. What would you consider more typical or standard views about animals from which to update? That moment-to-moment human experience is worth 10,000x that of a chicken, conditional on chickens being sentient? 1,000,000x? And, whatever your position, why do you think it is a more reasonable starting point?

Thanks for the question, but unfortunately we cannot share more about those involved or the total.

I can say we're confident this unlocked millions for something that otherwise wouldn't have happened. We think maybe half of the money moved would not have been spent, and some lesser amount would have been spent on less promising opportunities from an EA perspective.

Thanks for the question and the kind words. However, I don’t think I can answer this without falling back somewhat on rather generic advice. We do a lot of things that I think have contributed to where we are now, but I don’t think any of them are particularly novel:

  • We try to identify really high quality hires, bring them on, train them up and trust them to execute their jobs.
  • We seek feedback from our staff, and proactively seek to improve any processes that aren’t working.
  • We try to follow research and management best practices, and gather ideas on these fronts from organizations and leaders that have previously been successful.
  • We try to make RP a genuinely pleasant place to work for everyone on our staff.

As to your idea that RP’s success might come down to high founder quality, I think Peter and I try very hard to do the best we can, but in part due to survivorship bias it’s difficult for me to say that we have any extraordinary skills others don’t possess. I’ve met many talented, intelligent, and driven people in my life, some of whom have started ventures that have been successful and others who have struggled. Ultimately, I think it’s some combination of these traits, luck, and good timing that has led us to where we are today.
