
Preamble

(this section can safely be skipped)

Last updated: 2020-04-25

Meta

I wasn't sure whether to post this as a post or a question. On one hand, I feel I've put in enough effort / it's good enough that it warrants a post, which I feel would give it more visibility. On the other hand, if other people want to write answers on this question, asking a question to centralize the discussion on the topic seems useful. I'm opting for something in between: asking as a question, but writing my answer in the post / question's description. I've seen other questions doing this. Do you think this is the best norm or should we replace it with another one?

Alternative title

Why do we need the Effective Altruism community? (How) can we make effective altruism obsolete?

Motivation

I'm surprised not to have seen much on this topic (and would like to be linked to relevant pieces I missed). It seems to be a (the?) fundamental question of the Effective Altruism community. Some potential benefits I've identified from having a better understanding of this question include: being better at determining which cause areas are likely to be neglected, better understanding how we can make the world less dependent on philanthropy, and better understanding the long-term consequences of different social norms on philanthropy.

Epistemic status

I think I'm pointing in a useful direction, but I've probably made mistakes on some details.

Introduction

Effective Altruism is inherently rooted in philanthropy. Whether you earn-to-give, volunteer, self-fund your work, or take a pay cut to have a higher-impact career, you're ultimately exchanging resources for social good (the resource often being money).

Overview

I will discuss the following points.

On the role of philanthropy

  • it corrects for coordination failures
    • ie. are individuals in the system well coordinated?
  • it corrects for excluding people from the political apparatus
    • ie. are there individuals excluded from the system?
  • it corrects for inequalities
    • ie. is wealth fairly distributed in the system?

On getting rid of philanthropy

  • Should we obsolete philanthropy?
  • To patch or to fix: Should we prioritize obsoleting philanthropy?
  • Are systemic failures a free pass on not helping the world?
  • RadicalxChange: Effective altruism for systemic changes
  • Going meta: a system to fix the system (and the prosocial basilisk!)

ETA

By discussing in the comments, I realized that the 'inequality' and 'politically unempowered moral beings' motivations assume some form of preference utilitarianism (or cooperation mechanism). This wouldn't obsolete philanthropy for (all) other moralities.

Coordination failures

Spatially-global goods

Problem: We don't have a global political entity

Description

Most levels of organization (family, city, country) have mechanisms to fund public goods. However, global goods are more likely to be underfunded because the UN lacks enough power / countries lack sufficient coordination.

Scott Barrett, in his book Why Cooperate? The Incentive to Supply Global Public Goods, identifies five types of global public goods based on what they require: single best effort, weakest link, aggregate effort, mutual restraint, and coordination. Depending on the type of cooperation needed, global goods can be more or less likely to be supplied. Aggregate effort, mutual restraint, and sometimes weakest link are the most difficult to enforce (a minimal formal sketch follows the examples below).

Examples

  • Single best effort: Asteroid defense, knowledge, suppressing an infectious disease outbreak at its source, geoengineering
  • Weakest link: Disease eradication, preventing emergence of resistance and new diseases
  • Aggregate effort: Climate change mitigation, ozone layer protection
  • Mutual restraint: Non-use of nuclear weapons, non-proliferation, bans on nuclear testing and biotechnology research
  • Coordination: Standards for the measurement of time, for oil tankers, and for automobiles

Source of the examples, and for more information: Friendly AI as a global public good
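For intuition, here is one common formalization of the first three types, following Hirshleifer-style "aggregation technologies" (my framing, not a direct quote from Barrett's book): the supply of the good as a function of individual contributions. The contribution numbers are made up for illustration.

```python
# Minimal sketch of how the first three types aggregate contributions,
# under the standard "aggregation technology" formalization (an assumed
# simplification, not a definitive model).
contributions = [0.9, 0.7, 0.1]  # hypothetical effort levels of 3 countries

single_best_effort = max(contributions)  # one capable actor suffices (0.9)
weakest_link = min(contributions)        # the laggard caps supply (0.1)
aggregate_effort = sum(contributions)    # everyone's effort adds up (1.7)

# This makes the difficulty ranking visible: a weakest-link good is held
# hostage by its least cooperative contributor, and aggregate-effort goods
# invite free-riding because each contribution is a small share of the sum.
print(single_best_effort, weakest_link, aggregate_effort)
```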

Possible systemic solutions: Excellent global governance / world government

Notable organization: Global Challenges Foundation

Temporally-global goods

Problem: Time-inconsistent preferences

Possible systemic solution: Ancestor Worship is Efficient

Note: This could also fall into the next section, with future people as the politically unempowered moral beings.

Other

Problem: Political decision-making is not a market, let alone an efficient one

Possible systemic solution: Futarchy, Social impact bond

Problem: Some important negative externalities are not priced by the market, or are not tradable.

Possible systemic solutions: Carbon market, Insurance for global catastrophes, Planetary Condominium


Politically unempowered moral beings

Problem: Not every moral being has a political voice

Some moral beings could represent themselves, but are not allowed to. Historically, women have been part of this group. Today, young people still can't vote, although their parents often have a personal interest in representing them.

Some moral beings cannot represent themselves even if they were allowed to. This includes beings that are cognitively incapable of it, such as very young humans, non-human animals, and severely mentally handicapped people. It also includes beings that cannot reach our spatiotemporal position, such as dead people, not-yet-born people, and people in parallel universes.

As a special case of future people, this includes the future versions of existing people. Some moral beings should arguably be weighted more, such as those with a higher expected number of remaining life years, given that they will live with the consequences of their votes for longer.

A large majority of voters with weak preferences on an issue can outvote a small minority with much stronger preferences. This can reduce aggregate welfare.
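One proposed mechanism for capturing preference intensity is quadratic voting (mentioned in the comments below). Here is a minimal sketch with made-up numbers, just to show how an intense minority can prevail without buying votes cheaply:

```python
import math

def votes_bought(credits_spent: float) -> float:
    """Under quadratic voting, n votes cost n**2 voice credits,
    so spending c credits buys sqrt(c) votes."""
    return math.sqrt(credits_spent)

# Hypothetical scenario: a mild majority (60 voters spending 1 credit each
# against a policy) vs. an intense minority (40 voters spending 4 each for it).
against = sum(votes_bought(1) for _ in range(60))   # 60 * 1.0 = 60 votes
in_favor = sum(votes_bought(4) for _ in range(40))  # 40 * 2.0 = 80 votes

print(f"against: {against:.0f}, in favor: {in_favor:.0f}")
# The intense minority wins, but only by paying quadratically more per voter.
```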


Inequalities

Problems: Some moral beings have less capacity to gain wealth. Some moral beings might have a higher marginal utility of wealth. The way wealth gets distributed might not be fair (according to most operationalizations, as described in the literature on the multiplayer bargaining problem).

Possible systemic solutions: Windfall Clause, Luxury tax*, Global basic income, Transforming nature

*Also Should effective altruists work on taxation of the very rich? (which I haven't read yet)

Related: Moral public goods


Should we obsolete philanthropy?

Alternative title: Should the economy capture all social good?

One of my sayings is that people should aim to make themselves obsolete, such as by automating their job or creating a superior good/service that makes the previous one obsolete.

Of course, I'm just pushing in a direction: I don't think most people should spend all their time making their job obsolete without actually doing the job.

I think it's similar with philanthropy: when we can make it obsolete at a reasonable cost, we often should. Here's why.

1. Economic incentives are more robust. Philanthropy, by its nature, is not a sure thing and relies on people's goodwill, and effective philanthropy also relies on people's rationality.

2. Philanthropy is not enough. The optimal amount of resources our civilisation should spend on common goods is more than what we currently spend. If we reduce the need for philanthropy in some areas, it will allow philanthropists to redirect their resources to other underfunded problems.

3. Having philanthropists give away their money might reduce the power of altruistic coalitions to steer the future (see: Are selection forces selecting for or against altruism? Will people in the future be more, as, or less altruistic?)

However, there are also some aspects we should be careful about when considering making philanthropy obsolete.

1. If economic incentives were enough to fund public goods at just the right amount on their own, public goods might get overfunded, as some people are naturally inclined to donate money to public goods and might continue doing so (mentioned in Radical institutional reforms that make capitalism & democracy work better, and how to get them)

2. Philanthropy might act as a signal of care, which could be an important quality for selecting the people steering the future of Earth-originating life

3. Some systemic failures might cancel each other out, and fixing only part of them might make things worse. For example, having billionaires fund public goods (inequality) might be good in a world where democracy hasn't decoupled values from expertise (coordination), even though ideally we might prefer other mechanisms to select who informs us of the most valuable public goods to fund.

4. I have the impression that a lot of people see making a profit from providing important social goods, such as accurate testing during a pandemic or valuable medication against age-induced pathologies, as a bad thing. Possibly even more so than profiting by exploiting addictions. For example, Robin Hanson mentions that "Clearly many see paying for results as risking too much greed, money, and markets in places where higher motives should reign supreme. Which is too bad, as those higher motives are often missing, and paying for results has a lot of untapped potential." It seems like shifting the cultural norm on this would be beneficial, especially if philanthropy is made less necessary. Otherwise, in a system where philanthropy is obsolete, organizations might charge more for social goods to compensate for the reputational hit of providing them for profit. Note also that endeavors that can be profitable should generally not use philanthropic money, as philanthropic money is currently under-supplied, and charging money also helps organizations be held accountable for their efficiency.

Overall, I have the impression we should move in the direction of making philanthropy less necessary / capturing more of it in the system, but be mindful of the order in which we improve the system, to avoid problems such as the one mentioned in con #3 just above.


To patch or to fix?

Alternative title: Should we prioritize obsoleting philanthropy?

Even though I think it would be good to move in the direction of encapsulating more philanthropy within the system ("fix"), is this a priority, or should we instead directly target the problems that systemic failures cause ("patch")? Alternative names for 'patching' interventions could be targeted or one-off interventions (I'm not sure which term to use for this concept).

I definitely think there should be funding available to tackle both approaches, but at the margin, which one has the highest expected value?

Instead of fixing coordination problems, should we address the problems that arise from them directly, such as research on existential risk reduction and brain preservation?

Some considerations:

  • If you think existential risks are imminent, then you might not have the time to change the systems
  • If you think only a few global goods have a high expected value and that they would need several systems to be fixed in order to start getting funded, then you might prefer focusing on them directly
    • For example, AI safety in 2010 was neglected not just because we didn't have good global governance, but also because we hadn't fully solved the expert problem

Instead of trying to legally give a political voice to oppressed groups, should we directly ask them (or, if that's not possible, try to guess) who they would want to vote for, and act as their representatives in elections?

Some considerations:

  • Whether the relevant systemic issues are in the Overton window
  • How many issues a given oppressed group would be interested in politically (ie. if there's only one, then maybe easier to just push for that one issue directly)
  • Is this group likely to stop being oppressed in the short-to-medium term? (ex.: cellular agriculture might put an end to animal farming)

Instead of advocating for systemic redistribution of wealth, should we aim to make as much money as possible and redistribute it directly through charities like GiveDirectly?

Some considerations:

  • If you expect poverty to be reduced a lot in the medium-term future, then you might prefer to give directly instead of trying to reform the system, and vice-versa
  • If you think not reforming the system creates selection pressures against philanthropy, you might prefer to focus on reforming the system, and vice-versa

Not a free pass

I've sometimes seen systemic issues used as a free pass not to help the world: "Don't blame me, blame the System". And I don't think this is entirely wrong.

But I think we do need philanthropy as a correction mechanism to fix those systemic issues: the system is otherwise not very (and plausibly not sufficiently) self-correcting.

We need philanthropy to make philanthropy obsolete.

And there are great opportunities for donations to help with those systemic issues; a few such organizations were mentioned above.

Side note: With general advocacy to people who are not (aspiring) effective altruists, it does seem like focusing on institutional changes is more fruitful than focusing on individual changes (see: Summary of Evidence for Foundational Questions in Effective Animal Advocacy).


RadicalxChange: Effective altruism for systemic changes

The Effective Altruism community has focused a lot, although not entirely, on individual changes. There are a lot of good reasons to do so, some of which have been pointed at in this post. But I think it's also important to pay attention to systemic problems.

I've always wondered what the unifying theme was behind RadicalxChange, but after writing this post, I had the sudden realization that it's about making philanthropy obsolete. I don't know if they know, but maybe they should use this in their branding. Their website describes RadicalxChange as:

a global movement dedicated to reimagining the building blocks of democracy and markets in order to uphold fairness, plurality, and meaningful participation in a rapidly changing world

In my experience, many Effective Altruists are interested in the ideas of the RxC community. I think they are excellent allies and complement the EA community very well.


Going meta: a system to fix the system

The Philanthropy Prizes

With prizes to reward (the most) successful interventions towards making philanthropy obsolete, we could create a systemic incentive to correct systemic issues.

We could have 3 prizes:

  • The coordination prize
  • The empowerment prize
  • The equality prize

The coordination prize could potentially also be fragmented into more prizes: the global good prize, the longtermist prize (to reduce civilization-wide time-inconsistent preferences), the expert prize (to solve the expertise problem), etc.

The prizes could be annual prizes, impact prizes, or inducement prize contests.

Prizes could also make those pursuits more prestigious, although we should also take into account the literature on the overjustification effect.

For more related propositions, see: Moral economics — Cause Prioritization Wiki.

Charter cities

Charter cities are a good way to experiment with reforms in all three areas.

Notable organizations: Charter Cities Institute, The Seasteading Institute

The Prosocial Basilisk

My favorite solution, though it seems unlikely to work, would be to have philanthropists buy certificates of impact from organizations fixing systemic issues, which the "system" (ie. governments) would then come to want to buy back once fixed, hence fully completing the loop and bootstrapping a good world into existence 'out of thin air'. A prosocial basilisk of some sort. Not unlike this story of "n sevraqyl fhcrevagryyvtrapr obbgfgenccvat vgfrys vagb rkvfgrapr" (rot13).
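To make the intended loop concrete, here is a toy sketch of the cash flow; all the prices and the buyback premium are hypothetical assumptions, not part of any existing certificate-of-impact scheme.

```python
# Toy model of the prosocial-basilisk loop. All numbers are made up.
philanthropist = 100.0
organization = 0.0
government = 1000.0

# 1. A philanthropist buys a certificate of impact, funding the
#    systemic fix upfront.
price = 100.0
philanthropist -= price
organization += price

# 2. The organization (partially) fixes a systemic issue, improving the
#    "system" enough that it now values the fix.
systemic_issue_fixed = True

# 3. The improved system (e.g. a government) buys the certificate back
#    at a premium, rewarding the early funder.
if systemic_issue_fixed:
    buyback = 150.0  # hypothetical premium
    government -= buyback
    philanthropist += buyback

# The philanthropist ends richer than they started (150 > 100), so the
# loop can repeat: philanthropy bootstraps the system that later repays it.
print(philanthropist)
```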

----

Also see my comments below.


2 Answers

I claim that "fixing" coordination failures is a bad and/or incoherent idea.

Coordination isn't fully fixable because people have different goals, and scaling has inevitable and unavoidable costs. Making a single global government would create waste on a scale that current governments don't even approach.

As people have gotten richer overall, the resources available for public benefit have grown. This seems likely to continue. But directing those resources fails. Democracy doesn't scale well, and any move away from democracy comes with a corresponding ability to abuse power.

In fact, I think the best solution for this is to allow individuals to direct their money how they want, instead of having a centralized system - in a word, philanthropy.

Coordination isn't fully fixable because people have different goals, and scaling has inevitable and unavoidable costs.

Good point! Note that some of the propositions I linked to account for people having different preferences (ex.: quadratic voting).

Making a single global government would create waste on a scale that current governments don't even approach.

To be clear, I wouldn't want a global government to meddle with anything but global concerns. But for global concerns, I think the cost is worth it.

Davidmanheim
I think we should be willing to embrace a system that has a better mix of voluntary philanthropy, non-traditional-government programs for wealth transfer, and government decisionmaking. It's the second category I'm most excited about, which looks a lot like decentralized proposals. I'm concerned that most extant decentralized proposals, however, have little if any tether to reality. On the other hand, I'm unsure that larger governments would help, instead of hurt, in addressing these challenges.

To make philanthropy obsolete, I think you'd have to either make advocacy obsolete or be able to capture it effectively without philanthropy. As long as sentient individuals have competing values and interests and tradeoffs to be made, which I'd still expect to be true even if nonhuman animals, future individuals and artificial sentiences gain rights and representation, I think there will be a need for advocacy. I don't expect ethical views to converge in the future, and as long as they don't, there should be room for advocacy.

Comments
I've always wondered what the unifying theme was behind RadicalxChange, but after writing this post, I had the sudden realization that it's about making philanthropy obsolete.

It seems like it wouldn't address many of the issues discussed in this article, especially politically unempowered moral beings, or many of the EA causes. Maybe it can make solving them easier, but it doesn't offer full solutions to them all, which seems to be necessary for making philanthropy obsolete.

Thanks for your comment, it helped me clarify my model to myself.

especially politically unempowered moral beings

It proposes a lot of different voting systems to avoid (human) minorities being oppressed.

I could definitely see them develop systems to include future / past people.

But I agree they don't seem to tackle beings not capable (at least in some ways) of representing themselves, like non-human animals and reinforcement learners. Good point. It might be a blind spot for that community(?)

or many of the EA causes

Such as? Can you see other altruistic uses of philanthropy besides coordination problems, politically empowering moral beings, and fixing inequality? Although maybe that assumes preference utilitarianism. With pure positive hedonistic utilitarianism, wanting to create more happy people is not really a coordination problem (to the extent most people are not positive hedonistic utilitarians), nor about empowering moral beings (ie. happiness is mandatory), nor about fixing inequalities (nor is it an egoistic preference).

Maybe it can make solving them easier, but it doesn't offer full solutions to them all, which seems to be necessary for making philanthropy obsolete.

Oh, I agree solving coordination failures to finance public goods doesn't solve the AI safety problem, but it solves the AI safety funding problem. In that world, the UN would arguably finance AI safety at just the right amount, so there would be no need for philanthropists to fund the cause. In that world, $1 at the margin of any public good would be just as effective. And egoistic motivations to work in any of those fields would be sufficient. Although maybe there are market failures that aren't coordination failures, like information asymmetries, in which case there might still be a use for personal sacrifices.

Such as? Can you see other altruistic use of philanthropy beside coordination problems, politically empowering moral beings, and fixing inequality?

Better democracy won't help much with EA causes if people generally don't care about them, and we choose EA causes in part based on their neglectedness, i.e. the fact that others don't care enough. Causes have to be made salient to people, and that's a role for advocacy to play, and when they remain neglected after that, that's where philanthropy should come in. I think people would care more about animal welfare if they had more access to information and were given opportunities to vote on it (based on ballot initiatives and surveys), but you need advocates to drive this, and I'm not sure you can or should try to capture this all without philanthropy. Most people don't care much about the x-risks EAs are most concerned with, and some of the x-risks are too difficult to understand for the average person to get them to care.

Also, I don't think inequality will ever be fixed, since there's no well-defined target. People will always argue about what's fair, because of differing values. Some issues may remain extremely expensive to address, including some medical conditions, and wild animal welfare generally, so people as a group may be unwilling to fund them, and that's where advocates and philanthropists should come in.

Oh, I agree solving coordination failures to finance public goods doesn't solve the AI safety problem, but it solves the AI safety funding problem. In that world, the UN would arguably finance AI safety at just the right amount, so there would be no need for philanthropists to fund the cause. In that world, $1 at the margin of any public good would be just as effective. And egoistic motivations to work in any of those fields would be sufficient. Although maybe there are market failures that aren't coordination failures, like information asymmetries, in which case there might still be a use for personal sacrifices.

What is "just the right amount"? And how do you see the UN coming to fund it if they haven't so far?

I don't think AI safety's current and past funding levels were significantly lower than otherwise due to coordination failures, but rather information asymmetries, like you say, as well as differences in values, and differences in how people form and combine beliefs (e.g. most people aren't Bayesian).

If you got rid of Open Phil and other private foundations, redistributed the money to individuals proportionally, even if earmarked for altruistic purposes, and solved all coordination problems, do you think (longtermist) AI safety would be more or less funded than it is now?

How else would you see (longtermist) AI safety make up for Open Phil's funding through political mechanisms, given how much people care about it?

Thanks for your comment. It makes me realize I failed to properly communicate some of my ideas. Hopefully this comment can elucidate them.

Better democracy won't help much with EA causes if people generally don't care about them

More democracy could even make things worse (see 10% Less Democracy). But a much better democracy wouldn't, because it would do things like:

  • Disentangling values from expertise (ex.: predicting which global catastrophes are most likely shouldn't be done democratically, but rather with expert systems such as prediction markets)
  • Representing the unrepresented (ex.: having a group representing the interest of non-human animals during elections)

we choose EA causes in part based on their neglectedness

I was claiming that with the best system, all causes would be equally (not) neglected. Although, as I conceded in the previous comment, this wouldn't be entirely true, because people have different fundamental values.

Causes have to be made salient to people, and that's a role for advocacy to play,

I think most causes wouldn't have to be made salient to people if we had a great System. You can have something like (with a lot of details still to be worked out): 1) have a prediction market to predict what values existing people would vote on in the future, and 2) have a prediction market to predict which interventions will fulfill those values the most. And psychological research and education helping people to introspect is a common good that would likely be financed by such a System. Also, if 'advocacy' is about a way of enforcing cooperative social norms, then this would be fixed by solving coordination problems.
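As a toy illustration of that two-market setup (all the names, weights, and scores below are invented; in practice a prediction market would price these as conditional contracts):

```python
# Toy sketch of the two-market System described above. Everything here is
# a made-up illustration, not an existing mechanism.

# Market 1: predicted weights people would put on each value on reflection.
predicted_values = {"welfare": 0.6, "fairness": 0.4}

# Market 2: predicted degree to which each intervention fulfills each value
# (in practice, prices of contracts conditional on funding the intervention).
predicted_fulfillment = {
    "intervention_a": {"welfare": 0.8, "fairness": 0.2},
    "intervention_b": {"welfare": 0.5, "fairness": 0.9},
}

def expected_score(intervention: str) -> float:
    """Weight each value's predicted fulfillment by its predicted endorsement."""
    return sum(predicted_values[v] * f
               for v, f in predicted_fulfillment[intervention].items())

# Fund whatever the two markets jointly score highest
# (futarchy-style: "vote on values, bet on beliefs").
best = max(predicted_fulfillment, key=expected_score)
print(best)  # intervention_b (0.66 vs 0.56)
```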

But maybe you want to declare ideological war, and aim to overwrite people's terminal values with yours, hence partly killing their identity in the process. If that's what you mean by 'advocacy', then you're right that this wouldn't be captured by the System, and 'philanthropy' would still be needed. But protecting ourselves against such ideological attacks is a social good: it's good for everyone individually to be protected. I also think it's likely better for everyone (or at least a supermajority) to have this protection for everyone rather than for no one. If we let ideological wars go on, there will likely be an evolutionary process that will select for ideologies adapted to their environment, which is likely to be worse from most currently existing people's moral standpoint than if there had been ideological peace. Robin Hanson has written a lot about such multipolar outcomes.

Maybe pushing for altruism right now is a good patch to fund social good in the current System. And maybe current ideological wars against weaker ideologies is rational. But I don't think it's the best solution in the long run.

Also relevant: Against moral advocacy.

I'm not sure you can or should try to capture this all without philanthropy

I proposed arguments for and against capturing philanthropy in the article. If you have more considerations to add, I'm interested.

Also, I don't think inequality will ever be fixed, since there's no well-defined target. People will always argue about what's fair, because of differing values.

I don't know. Maybe we settle on the Schelling point of splitting the Universe among all political actors (or in some other way), and this gets locked in through apparatuses like Windfall clauses (for example), and even if some people disagree with them, they can't change them. Although they could still decide to redistribute their own wealth in a way that's more fair according to their values, so in that sense you're right that there would still be a place for philanthropy.

Some issues may remain extremely expensive to address [...] so people as a group may be unwilling to fund them, and that's where advocates and philanthropists should come in.

I guess it comes down to inequality. Maybe someone thinks it's particularly unfair that someone has a rare disease, and so is willing to spend more resources on it than what the collective wants. And so they would inject more resources into a market for this value.

Another example: maybe the Universe is split equally among everyone alive at the point of the intelligence explosion, but some people will want to redistribute some of their wealth to fulfill the preferences of dead people, or will want to reward those that helped make this happen.

What is "just the right amount"?

I was thinking something like: the amount one would spend if everyone else spent the same amount as them, repeating this process for everyone and summing all those quantities. This would just be the resources spent on a value; how to actually use those resources for that value would be decided by some expert systems.
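Here is a minimal sketch of that rule, under an assumed (purely illustrative) log-utility model of how much each person benefits from total provision; the weights and functional form are my assumptions, not part of the proposal:

```python
import math

def preferred_common_contribution(benefit_weight: float, n: int) -> float:
    """Grid-search the x maximizing w * log(1 + n*x) - x: this person's
    benefit if all n people contribute x, minus their own cost x."""
    best_x, best_u = 0.0, float("-inf")
    for i in range(10_000):
        x = i / 1000.0
        u = benefit_weight * math.log(1 + n * x) - x
        if u > best_u:
            best_x, best_u = x, u
    return best_x

# Hypothetical population: how much each of 3 people values the good.
weights = [0.5, 1.0, 2.0]
n = len(weights)

# Each person reports the contribution they'd make if everyone matched it;
# "just the right amount" is the sum of these reports.
target = sum(preferred_common_contribution(w, n) for w in weights)
print(f"target budget: {target:.2f}")  # about 2.5 under these assumptions
```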

And how do you see the UN coming to fund it if they haven't so far?

The UN would need to have more power. But I don't know how to make this happen.

If you got rid of Open Phil and other private foundations, redistributed the money to individuals proportionally, even if earmarked for altruistic purposes, and solved all coordination problems, do you think (longtermist) AI safety would be more or less funded than it is now?

At this point we would have formed a political singleton. I think a significant part of our entire world economy would be structured around AI safety. So more.

How else would you see (longtermist) AI safety make up for Open Phil's funding through political mechanisms, given how much people care about it?

As mentioned above, using something like Futarchy.

-----

Creating a perfect system would be hard, but I'm proposing moving in that direction. I've updated towards thinking that even with a perfect system, there would still be some people wanting to redistribute their wealth, but less so than currently.

If a non-profit organization is:

  • not providing some public good (in the economic sense: https://en.wikipedia.org/wiki/Public_good_(economics))
  • not redistributing money directly
  • not helping agents that can't help themselves / use money
  • not helping the donor directly
  • relying on donations

Then:

  • it's probably mostly done for signaling purposes and/or misguided
  • it's likely performing worse than the average company
    • although there could be less efficient ways of redistributing money that would arguably be better than the average company