
TLDR: Sign up here to join a six-month experiment to democratize effective giving. The experiment establishes a community who agree to allocate charitable gifts proportionally to member votes. You’ll help make EA donations more representative of the community’s cause prioritization. Sign up and pledge by November 15th to participate in our second round.

Equal Hands is a 6-month trial in democratizing charitable giving among EA cause areas.

Here’s how it works:

  1. You pledge to give a certain amount each month.
  2. Each month that you pledge, you vote on how the pooled money should be distributed across causes (one vote per person, no matter how much you give).
  3. The total amount of money pledged is split proportionally to the totaled votes, so that no matter how much you gave, your voice influences the final allocation equally.
  4. To actually make the gifts, you will be assigned a particular cause area split with assigned dollar amounts, based on the preferences of the community and how much you pledged to give (more detail below). You can give to charities within that area (outlined below). After making donations on behalf of the community, you will submit your evidence of donations by the last day of the month.

As a result of this process, the community's donations will be distributed across cause areas according to the votes of the community, no matter how much each person donated. Larger donors will have an equal say in the allocation of charitable donations as smaller donors. The distribution of funds will represent the priorities of the community, as opposed to the priorities of a few people.

The minimum donation to participate in a given month is $25. As of November 2nd at 7:30pm Eastern time, one donor has pledged to give a total of $1,000 for November. This means that if 10 more people join at the minimum donation, each new donor will influence the allocation of roughly $113.64.
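
To make that arithmetic concrete, here is a minimal sketch, assuming the figures above (one existing $1,000 pledge plus ten new donors at the $25 minimum); the variable names are just illustrative:

```python
# Sketch: how much of the pooled allocation a single vote influences,
# assuming one existing $1,000 pledge and ten new $25 pledges.
existing_pledges = [1000.00]
new_pledges = [25.00] * 10

pool = sum(existing_pledges) + sum(new_pledges)    # total money to allocate
voters = len(existing_pledges) + len(new_pledges)  # one vote per donor

influence_per_vote = pool / voters
print(f"${influence_per_vote:.2f}")  # about $113.64 of the pool per vote
```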

So, if you give below the average donation amount, you’ll be increasing the degree of alignment the effective giving community has with your preferred allocations in expectation. If you give above the average donation amount, you’ll be allocating your funds more closely to the priorities of the effective giving community.

If you are able, giving above the average donation is better — you're committing to having your charitable donations be guided by community consensus, and hedging against your own beliefs based on the beliefs of other thoughtful, committed donors.

Below, I give more detail on how the program works and why I'm doing this.

Effective giving overly weighs the views of a few decision makers.

Here’s a world where community priorities are not reflected by donations (as seems to be the case in EA right now):

  • There are three donors:
    • Donor 1 gives $1,000,000 per year to animal welfare. They are a tech founder who cares a lot about animals, and less about other areas.
    • Donor 2 gives $1,000 per year to global catastrophic risks. They are an academic working on researching these topics. They care about other causes, but feel more skilled at giving within GCRs than any other area.
    • Donor 3 gives $100 per year to global health charities. They work for animal welfare charities, but think they don’t do much good, so give to global health instead.
  • Right now, their donations don’t align with their collective beliefs at all. They are giving:
    • $1,000,000 to animal welfare, $1,000 to GCRs, and $100 to GHD. This happens despite none of them having beliefs like:
      • The funding gap in animal welfare being way bigger than the other areas.
      • That animal welfare is vastly more important than the other areas.
      • Donor 1’s views should massively outweigh the views of Donors 2 and 3.
    • Yet, their donations seem to imply these beliefs.
  • Collectively, they have a lot of knowledge and beliefs about the world. If they sat down and chose how to distribute the $1,001,100 in a manner they all agreed was best for the world, they probably wouldn’t give $1,000,000 to animals, $1,000 to GCRs, and $100 to GHD.
    • Donors 1 and 3 both think animals matter a lot, but Donor 3 is skeptical of the existing charities. Donor 1 doesn’t have access to the information that makes Donor 3 skeptical. It’s unclear if Donor 3 is right, but aggregating their beliefs might better capture an accurate view of the animal welfare space.
    • Donor 2 knows a lot about their specific research area, but not other areas, so they just give within GCRs and not outside it. They’d be happy to get the expertise of Donors 1 and 3 to inform their giving.
    • All three are motivated by making the world better, and believe strongly that other people have good views about the world, access to different information, etc.

 

This default is a problem

Donating inherently has huge power differentials — the beliefs of donors who are wealthier inevitably exert greater force on charities than those of donors with fewer funds. But it seems unlikely that having more money would be correlated with having more accurate views about the world. Equal Hands is an attempt to build a lightweight charitable giving system that divorces the influence of donations from the wealth of individuals, and instead tries to align donations with the beliefs of its community members as a whole.

Equal Hands functions similarly to tax systems in democracies — we don’t expect people who pay more in taxes to have better views about who should be elected to spend that tax money. Similarly, we shouldn’t expect people who donate more to have better views about moral priorities.

This is a bet on the effective giving community as a whole having good beliefs collectively, instead of the current model for donating, which relies on a very small set of people having good beliefs and accurate models of the world. 

How will Equal Hands work exactly? An example funding round

(If you think this is an overly complicated way of doing this, see my FAQ below)

Let’s say there are three donors, who vote between August 1st and August 15th.

  • A, who gives $500, and votes for the funds to be distributed 50% to GCR mitigation, and 50% to animal welfare.
  • B, who gives $50, and votes 100% to GCR mitigation
  • C, who gives $100, and votes 100% to global health.

The committed funds are pooled ($650), and the votes are totaled (50% to GCR mitigation, 33.33% to global health, 16.67% to animal welfare). A total that should be donated to each is calculated from this vote ($325 to GCRs, $216.67 to global health, $108.33 to animal welfare).
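
A minimal sketch of that aggregation step, assuming the pledges and votes above (the names and data structures here are just illustrative, not part of the actual process):

```python
# Sketch of the aggregation step: each donor gets one equally weighted vote,
# expressed as a percentage split across causes, and the pooled funds are
# divided according to the average of those splits.
pledges = {"A": 500, "B": 50, "C": 100}
votes = {
    "A": {"GCR": 0.5, "Animal Welfare": 0.5},
    "B": {"GCR": 1.0},
    "C": {"Global Health": 1.0},
}

pool = sum(pledges.values())  # $650 committed in total

causes = {cause for vote in votes.values() for cause in vote}
avg_share = {
    cause: sum(vote.get(cause, 0.0) for vote in votes.values()) / len(votes)
    for cause in sorted(causes)
}
targets = {cause: round(pool * share, 2) for cause, share in avg_share.items()}

print(targets)
# {'Animal Welfare': 108.33, 'GCR': 325.0, 'Global Health': 216.67}
```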

Some attempt at preference matching and minimizing complexity for individual donors is made (to the extent possible), and on August 16th, donors are given donation instructions (a rough sketch of one way this matching could be done follows the list below):

  • A: $108.33 to animal welfare, $116.67 to global health, $275 to GCRs
  • B: $50 to GCRs
  • C: $100 to global health
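
As an illustration, here is one simple greedy approach that would reproduce the instructions above under this example's assumptions (smaller donors are kept within causes they voted for, and the largest donor absorbs the remainder); it is a sketch of one possible matching rule, not necessarily the rule Equal Hands actually uses:

```python
# A greedy sketch of one possible preference-matching rule: process donors
# from smallest to largest pledge, keep each donor's gift inside causes they
# voted for whenever those causes still need money, and let the largest
# donors cover whatever remains. Illustrative only.
pledges = {"A": 500, "B": 50, "C": 100}
votes = {
    "A": {"GCR": 0.5, "Animal Welfare": 0.5},
    "B": {"GCR": 1.0},
    "C": {"Global Health": 1.0},
}
targets = {"Animal Welfare": 108.33, "GCR": 325.0, "Global Health": 216.67}

def assign(pledges, votes, targets):
    remaining = dict(targets)
    instructions = {donor: {} for donor in pledges}
    for donor in sorted(pledges, key=pledges.get):  # smallest pledges first
        left = pledges[donor]
        preferred = sorted(votes[donor], key=votes[donor].get, reverse=True)
        fallback = [c for c in remaining if c not in preferred]
        for cause in preferred + fallback:
            if left <= 0:
                break
            amount = round(min(left, remaining[cause]), 2)
            if amount > 0:
                instructions[donor][cause] = amount
                remaining[cause] = round(remaining[cause] - amount, 2)
                left = round(left - amount, 2)
    return instructions

print(assign(pledges, votes, targets))
# {'A': {'GCR': 275.0, 'Animal Welfare': 108.33, 'Global Health': 116.67},
#  'B': {'GCR': 50}, 'C': {'Global Health': 100}}
```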

Donors have until August 31st to donate and submit receipts. On September 1st, let's say donors A and B have donated, but C has not. Backstop funds would be used (as available) to cover the $100 gap for global health. C is given a strike, and if they fail to donate again, will be banned from participation in future giving rounds.

On September 1st, participants are invited to update their allocation vote, or keep it the same, and the process repeats.

If the donors had given according to their pure preferences, the distribution of funds would have been:

  • $250 to animal welfare
  • $300 to GCRs
  • $100 to global health

But this doesn’t match the views of this imaginary three-person community, who think that animal welfare is significantly less important to fund, global health is a lot more important to fund, and GCRs are a bit more important to fund. The redistributed donations more closely match the beliefs of the community, and donor A gets to benefit from the wisdom of the crowd, and hedge against their own beliefs and uncertainties.

The Details

The process

  1. To join and receive notice of the charity rounds, sign up here.
  2. On the 1st of each month, you’ll receive a survey that will ask you:
    1. How much you intend to donate
    2. How you’d like to see funds distributed
  3. On the 16th of each month, you’ll receive instructions on where to donate.
    1. We’ll make every effort to make this simple for donors — trying to ensure that most donors only have to make 1 or 2 total transactions, no matter their total distribution
  4. You have until the last day of the month to make your donations, and submit your donation receipts via a final form.
  5. One of our founding donors will cover the gaps caused by donors whose votes affected the allocation but who failed to donate, up to $500 per month.
    1. We currently have one individual acting as a “founding donor” on this project and playing this role. If you’d be willing to support the project in that way, you can indicate your interest on the sign-up form.

Participation will be optional on a monthly basis, but pledging to give and then failing to do so will cause you to be removed from the project on your second failure during the trial phase.

Transparency

Besides individual donor identities, everything that happens during this experiment will be transparent to the donors. You’ll have access to how distributions were determined, how much money was distributed, and how much money your individual vote impacted in expectation.

Improvements

There are lots of improvements that can be made to this process! Participants will be invited to give feedback, share ideas, and work on improving the system over the course of its first 6 months.

FAQ

Why would individual people participate?

Individual donors could participate if they generally agree that it is bad that charitable allocations across cause areas are made by the views of relatively few people, instead of by a community as a whole.

If donors are giving less than the average contribution, they should expect that their participation will directly increase the amount of money going to their preferred cause areas.

For donors giving above the average donation, they are demonstrating a commitment to hedging against their own beliefs, investing in democratic funding processes, and experimenting with effective giving!

What causes can I vote on / charities can I donate to?

To launch, the causes that people can vote on will be animal welfare, global catastrophic risk reduction, EA Community Building, global health and development, and climate change. The charities that they can donate their allocations to within each area are listed below.

Animal Welfare

  • ACE Recommended Charities (Will be updated if ACE releases new recommendations during this period)
    • Çiftlik Hayvanlarını Koruma Derneği
    • Dansk Vegetarisk Forening
    • Faunalytics
    • Fish Welfare Initiative
    • Legal Impact for Chickens
    • New Roots Institute
    • Shrimp Welfare Project
    • Sinergia Animal
    • The Good Food Institute
    • The Humane League
    • Wild Animal Initiative
  • EA Animal Welfare Fund

Reducing Global Catastrophic Risks

  • EA Long-Term Future Fund
  • Giving What We Can Risks and Resilience Fund
  • Longview Philanthropy Emerging Challenges Fund
  • Any charity with a Founders Pledge “Active Recommendation” under the following categories
    • Artificial Intelligence
    • Biosecurity
    • Global Catastrophic Risks
    • Global Security

EA Community Building

  • EA Infrastructure Fund
  • Centre for Effective Altruism
  • Ambitious Impact
  • Effektiv Spenden

Global Health and Development

Climate Change

  • Any Active Recommendation from Founders Pledge in the Climate Change category.

Why not just establish some kind of fund people can donate to and then vote on the allocation of its grants?

This seems like a viable longterm option for this project. But, it also comes with costs (the overhead of running the fund, legal and logistical hurdles, etc). I'm only committing to trialing this for 6 months. If it goes well, I think formalizing it via a fund should be considered.

Why cause areas and not individual charities?

There are several reasons to prefer voting on cause areas over individual charities, including:

  • Listing individual charities would be more “gameable” and susceptible to manipulation. In some sense, that’s okay — we want donations to represent people’s preferences! But, given this dynamic, it seems risky to open this system up to this vulnerability early, when it is unclear how much demand for this kind of system there is within EA. Charity elections inevitably turn into popularity contests, instead of focusing on the impact of funded groups.
  • Cause areas have both impactful charities, and funds available for donors to give to. Cause area-specific funds are usually managed by people with an understanding of the cause, while individual donor preferences for specific charities seem less likely to be well-researched. So, providing options for both hopefully increases the impact of donations.
  • To me, it feels more important to democratize how funds are split among community priorities (e.g. cause areas) than among specific charities. Cause areas feel closer to “values” or things that would be downstream from my beliefs. So, democratizing the allocation of funds to cause areas makes more sense than the allocations of funds to specific charities.
  • Every time I see a “vote on your favorite charity for it to get funding” mechanism, it seems like charities that are popular, not necessarily due to their impact but due to their communications skills, do especially well. This seems bad!

Like everything else about this project, nothing is set in stone beyond six months. Join, and contribute to the discussion about how to improve this if you disagree with this approach!

Why these specific charities to represent these cause areas and not [my preferred charity]?

The listed charities are meant as a sort of “minimum viable product.” They likely aren’t perfect, but they are popular places to give for donors in EA interested in each of these cause areas. But this is a project in democratization! Join, and help shape the future of this project if you see ways it could be better.

Why do I have to donate a minimum amount to participate?

Currently, there is a minimum donation requirement to prevent gaming of the system/spam votes. Everyone who votes will have committed to participating in donating. Hopefully that means they’ll have thought about their priorities, and their ultimate vote will reflect the community well.

Can I give via another entity to one of the listed charities?

Yes! If for tax or logistical reasons, you need to route your donation through another vehicle, that's totally fine, as long as your donation ends up at one of the listed charities. For example, if you use the Giving What We Can platform to make donations, it would be fine to give within your assigned cause areas on that platform. Your donation evidence should just show that you allocated your gift according to your assignment.

Why not quadratic funding / some other hip mechanism?

Maybe that would be better! But for now, the project is just an effort to help line up donations in effective giving with the priorities of the community. If you want it to change, participate and make the case for it!

Will I have to donate to causes I don’t care about?

We’ll try to minimize the complexity of donations for donors, and do preference matching when possible. However, it is likely that some donors, especially larger donors who prefer only 1-2 areas, will be asked to give to causes they might not otherwise have given to. But, this is also why this system exists — charitable decision making and ability to give shouldn’t go hand-in-hand. All participants’ voices count equally.

What happens if this goes well?

We’ll keep it going, hopefully with improvements by incorporating lessons from the first iteration!

How is this governed/funded/run?

This is entirely run by Abraham Rowe (me), and governed according to the rules laid out above. I'd prefer that other people give input on how it is governed in the long-run, and people who participate will have a chance to join in on that.

All votes and calculations will be verifiable by the members when making their donations.

This project has no funding or costs. It’s entirely volunteer-based and takes just a few hours to run every month. 

Long-term governance is not yet decided, but will be decided by community members toward the end of the 6 month trial.

 

Sign up here to get reminders about each month's vote, and to participate in the November round.

Comments

The first month of Equal Hands is complete!

Here are the results: 20 donors pledged to give $4,430 according to the collective preference of the pool.

This resulted in the following donations:

October 2024          | AW        | GHD       | GCR       | EA Community | Climate Change | Total
Pledge Breakdown      | 39.25%    | 26.20%    | 27.40%    | 3.75%        | 3.40%          | 100%
Implied Donations     | $1,738.78 | $1,160.66 | $1,213.82 | $166.13      | $150.62        | $4,430.00
Pseudo Counterfactual | $2,711.00 | $1,172.25 | $354.75   | $81.50       | $110.50        | $4,430.00
Implied Change        | -$972.23  | -$11.59   | +$859.07  | +$84.63      | +$40.12        |

 

All but 2 donors met their pledge, and $4,355 was given following the system. Backstopping funders covered the $75 gap left by the two donors.

Interestingly, the net effect (compared to the pseudo counterfactual of the money being distributed by each donor according purely to their preferences) of Equal Hands in October was roughly to move ~$900 from animal welfare to GCR areas. From the data, it looks like the primary cause of this was that animal welfare-motivated donors were most likely to give the largest amounts, but GCR donors were more likely to sign up (especially at the minimum, $25).

We're running 5 more months of this trial, and you can sign up here.

Note that we now have raised $3,015 in pledges for November, and the marginal $25 donation will influence the allocation of around $234. In October, the average donation influenced $221.50, so if you're excited to directly influence the allocation of funding between EA causes, this is still a great way to give right now! You can sign up here.

(Writing personally, not organizationally)
I'm happy people are trying experiments like this!

Thinking about other ways that people incorporate each other's judgement about where to donate: often it involves knowing the specific people. 

I think some people who knew each other through early EA / GWWC did this — some had a comparative advantage in finance so went into earning to give, and others had a comparative advantage in research or founding organizations so went into nonprofits. But they made heavy use of each other's advice, because they knew each other's strengths.

It's also common to do this within a couple / family. My husband spent 10 years earning to give while I worked in social work and nonprofits, so he's earned the large majority of what we've donated. Early on, the two of us made separate decisions about where to donate our own earnings (though very informed by talking with each other). Later we moved to making a shared decision on where we'd donate our shared pot of money. This isn't necessarily the best system — people are biased toward trusting their family even in domains where the person isn't very competent, and you can see examples like the Buffett family where family members seem to make kind of random decisions.

I feel good about people pooling judgement when they know the strengths and weaknesses of the specific other people involved. I feel much less excited about pooling judgement with people whose judgement I know nothing about.

I think experimentation with new approaches is good, so for that reason I'm a fan of this.

When I evaluate your actual arguments for this particular mechanism design though, they seem quite weak. This makes me worry that, if this mechanism turns out to be good, it will only be by chance, rather than because it was well designed to address a real problem.

To motivate the idea you set up a scenario with three donors, varying dramatically in their level of generosity:

  • Donors 1 and 3 both think animals matter a lot, but Donor 3 is skeptical of the existing charities. Donor 1 doesn’t have access to the information that makes Donor 3 skeptical. It’s unclear if Donor 3 is right, but aggregating their beliefs might better capture an accurate view of the animal welfare space.
  • Donor 2 knows a lot about their specific research area, but not other areas, so they just give within GCRs and not outside it. They’d be happy to get the expertise of Donors 1 and 3 to inform their giving.
  • All three are motivated by making the world better, and believe strongly that other people have good views about the world, access to different information, etc.

I struggle to see how this setup really justifies the introduction of your complicated donation pooling and voting system. The sort of situation you described already occurs in many places in the global economy - and within the EA movement - and we have standard methods of addressing it, for example:

  • Donor 3 could write an article or an email about their doubts.
  • Donor 1 could hire Donor 3 as a consultant.
  • Donor 1 could delegate decisions to Donor 3.
  • Donor 2 can just give to GCR; this seems fine, as they are a small donor anyway.
  • They could all give to professionally managed donation funds like the EA funds.

What all of these have in common is they attempt to directly access the information people have, rather than just introducing it in a dilute form into a global average. The traditional approach can take a single expert with very unusual knowledge and give them major influence over large donors; your approach gives this expert no more influence than any other person.

This also comes up in your democracy point:

Equal Hands functions similarly to tax systems in democracies — we don’t expect people who pay more in taxes to have better views about who should be elected to spend that tax money.

The way modern democratic states work is decidedly not that everyone can determine where a fraction of the taxes go if they pay a minimum of tax. Rather, voters elect politicians, who then choose where the money is spent. Ideally voters choose good politicians, and these politicians consult good experts. 

One of the reasons for this is that it would be incredibly time consuming for individual voters to make all these determinations. And this seems to be an issue with your proposal also - it simply is not a good use of people's time to be making donation decisions and filling in donation forms every month for very small amounts of money. Aggregation, whether through large donors (e.g. the donation lottery) or professional delegation (e.g. the EA funds), is the key to efficiency.

The most bizarre thing to me however is this argument (emphasis added):

Donating inherently has huge power differentials — the beliefs of donors who are wealthier inevitably exert greater force on charities than those of donors with fewer funds. But it seems unlikely that having more money would be correlated with having more accurate views about the world.

Perhaps I am misunderstanding, or you intended to make some weaker argument. But as it stands your premise here, which seems important to the entire endeavor, seems overwhelmingly likely to be false. 

There are many factors which are correlated both with having more money and having accurate views about the world, because they help with both: intelligence, education, diligence, emotional control, strong social networks, low levels of chronic stress, low levels of lead poisoning, low levels of childhood disease... And there are direct causal connections between money and accurate views, in both directions, because having accurate views about the world directly helps you make money (recognizing good opportunities for income, avoiding unnecessary costs, etc.) and having money helps you gain more accurate views about the world (access to information, more well educated social circle, etc.).

Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world.

Nice! This is great pushback! I think that most of my would-be responses are covered by other people, so I will add one thing just on this:

Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world.

My experience isn't this. I think that I have probably engaged with something like ~15 >$1M donors in EA or adjacent fields. Doing a brief exercise in my head of thinking through everyone I could, I got to something like:

  • ~33% inherited wealth / family business
  • ~40% seems like they mostly "earned it" in the sense that it seems like they started a business or did a job well, climbed the ranks in a company due to their skills, etc. To be generous, I'm also including people here who were early investors in crypto, say, where they made a good but highly speculative bet at the right time.
  • ~20% seems like they did a lot of very difficult work, but also seem to have gotten really really lucky - e.g. grew a pre-existing major family business a lot, were roommates with Mark Zuckerberg, etc.
    • Obviously we don't have the counterfactuals on these people's lucky breaks, so it's hard for me to guess what the world looks like where they didn't have this lucky break, but I'd guess it's at least at a much lower giving potential.
  • 7% I'm not really sure.
     

So I'd guess that even trying to do this approach, only like 50% of major donors would pass this filter. Though it seems possible luck also played a major role for many of those 50% and I just don't know about it. I'm surprised you find the overall claim bizarre though, because to me it often feels somewhat self-evident from interacting with people from different wealth levels within EA, where it seems like the best calibrated people are often like, mid-level non-executives at organizations, who don't have information distortions from having power but do have deep networks / expertise and a sense of the entire space. I don't think ultra-wealthy people have worse views, to be clear — just that wealth and having well-calibrated, thoughtful views about the world seem unrelated (or to the extent they are correlated, those differences stop being meaningful below the wealth of the average EA donor), and certainly a default of "cause prioritization is directly downstream of the views of the wealthiest people" is worse than many alternatives.


I strongly agree about the clunkiness of this approach though, and many of the downsides you highlight. I think in my ideal EA, there would be lots and lots of various things like this tried, and good ones would survive and iterate, and just generally EAs experiment with different models for distributing funding, so this is my humble submission to that project.

I think it's important to separate out the critical design features from the specific instantiation -- this is a six-month prototype that can run on $0 with a reasonable amount of a single person's volunteer labor. Like most no-budget volunteer efforts, it is likely going to be a bit clunky (e.g., "filling in donation forms every month for very small amounts of money"). Having a 501(c)(3) efficiently distribute out the money in a centralized manner would be ideal; it would also take a good bit of time and money to set up. It makes sense to run the clunky prototype first, get some of the bugs out, and then seek commitments of time and money to set up a more efficient infrastructure if the trials are promising enough.

What all of these have in common is they attempt to directly access the information people have, rather than just introducing it in a dilute form into a global average. 

How effectively does it succeed in incorporating and weighing all that information, though? As an intuition pump, if the current system perfectly did so, it shouldn't matter which Donor was the million-dollar donor and which were small-dollar donors.

The traditional approach can take a single expert with very unusual knowledge and give them major influence over large donors; your approach gives this expert no more influence than any other person.

This, of course, requires the large donor(s) to recognize the expert's expertise. Likewise, all of your examples rely on Donor 1 picking the right person to be persuaded by, to hire as a consultant, etc.

Rather, voters elect politicians, who then choose where the money is spent. Ideally voters choose good politicians, and these politicians consult good experts. 

But they don't pick generically "good" politicians -- they pick ones who line up with their preferences on some big-picture questions (which can be seen as analogous to cause prio within cause areas here, or maybe even more specific than that). In this way, the preferences of the wealthy taxpayer (theoretically) don't get more weight than those of the pauper in the politician's decisions, and then the details get worked out by technocrats.

Of course, an outfit like EA Funds could do something like this if desired -- monies flowing into a Democratic Allocation Fund could be distributed amongst the existing cause-area funds based on some sort of democratic allocation algorithm.

There are many factors which are correlated both with having more money and having accurate views about the world, because they help with both [ . . .]

I don't think zero (or even particularly low) correlation is necessary for this project to make sense. 

If one were shown 5,000 people in a crowd and were required to make a personally important decision based on their judgment about the world while knowing nothing other than their income/wealth, I submit that the optimal decision rule would be neither (a) to weight all 5,000 views evenly, nor (b) to give predominant weight to the very richest people in the bunch and very little to the bottom 80% (or whatever). But: if I know that a determination had already been made to make the bulk of the decision using rule (b), it would often make sense to use rule (a) on the margin that I could control.

In addition to questioning how strong the (money:good cause prio) correlation is, I am pretty confident it is not remotely linear. Suppose we somehow knew that it made sense to give five times the weight to the views of someone who made $250K/year than someone who made $50K/year (which is already doubtful to me). I would expect a much more modest ideal weighting between $250K and $1.25MM, and an even more modest ideal weighting between $1.25MM and $6.25MM, etc. Yet the current system gives greater prominence to the higher intervals (in absolute terms).

Finally, donation decisions can be significantly driven by donors' somewhat idiosyncratic preferences -- cf. Good Ventures' recent decision to stop funding various subcauses for reasons unrelated to any determination it made about those subcauses' effectiveness. Those preferences may well be anti-correlated with effectiveness insofar as highly neglected causes may pose more PR headaches. Not having their own private foundations, smaller donors can donate as they honestly see best without having to face the risk of external PR backlash. Even if idiosyncratic preferences were no more prevalent among the wealthy, it is probably better to dilute them rather than have so much riding on those of the top few people.

Second point within this comment I'm interested in discussing: If I'm summarizing you correctly, you think standard methods of addressing the problem ("cause allocation in EA is controlled by a few rich people who might not make good decisions") make Equal Hands an unnecessary project.

First: I agree with you that the current donation pooling/voting process is not optimal. Hopefully in the six months of the trial a more streamlined option will be found. A fund seems good; knowing the annoying-ness of setting up an appropriate 501c3 and considering the international nature of EA I understand why Abraham didn't go that route before determining whether there was any interest in the project, but I think if it succeeds creating a fund would be good. 

If a fund is created, the main difference between the Equal Hands concept and EA funds is that typical EA funds don't address at all the issue of larger donors having more influence. Yes, experts decide where the amounts within the buckets go. But if one billionaire likes GCR and no billionaires like animal welfare, there will be no mechanism to democratize the distribution between pools. It may be that you don't care about that, but assuming you did, do you see EA funds as addressing that issue in some way that I am missing? 

Second: I agree that a certain amount of donor 1 hiring donor 3 as a consultant, or being convinced by a persuasive argument or similar, goes on in EA (at least, much more than outside of EA). But the examples you give are such small levels of decision-making sharing. If you endorse the general rule that larger groups of decision makers tend to make better decisions than small groups, even when the small groups are composed of experts (which I think there is quite a bit of evidence for?), then a much more robust democratization seems good.

There's a lot to discuss in this comment so it might be worth unpacking responses into sections. For myself, I'm most interested in your assertion that money is well-correlated with having more accurate views about the world. 

I think you're correct that there is some connection between "accurate views in a domain" and "success in that domain" on average. But I think the main driver of that connection is a correlation at the low end (e.g., people with really faulty pictures of reality are not successful), with little to no correlation outside of that range.

In the case of wealth, while we might expect that being well-attuned to reality is helpful, being "well-attuned to reality" is not a real trait (or if it is, it's extremely rare) -- most people are well-attuned to parts of reality and not others. Furthermore, wealth is in most societies highly driven by being lucky to be born into a particular family. So at the end of the day, we shouldn't expect donors with the most money to generally have the best views on what to do with it. 

In particular, I think that the dynamics in charity make this lack of correlation even more problematic, because the wealthiest folks have disproportionately more control over what happens in charity than the just-relatively-well-off folks, and we particularly shouldn't expect that being wildly wealthy is a good predictor of "being good at figuring out which charities are most impactful." Being insanely wealthy is probably even more luck driven than being successful in a normal way, and the more insanely wealthy you are, the more likely you are to have charities trying to sell themselves to you, and the worse your access to information about them will be. 

Just to reality-test my mental model here against my own experience: you suggest looking at the major donors in EA. By and large, my experience in EA is that there is not really a correlation between wealth and having good ideas about charity. I meet a lot of wealthy people in my job, and they are often shockingly out of touch. Maybe they were better calibrated before they got wealthy, but becoming insanely wealthy reduces how much people are honest to you and makes your life look so different from normal I expect you forget what normal is. Often, the people in EA I think make the best calls are sort of mid-tier employees of EA orgs, who are both thoughtful and have great insider info. 

Even beyond that, EA major donors are a small selection of rich people in general, who by and large I think make absolutely terrible decisions about charity (and I expect you think that also, since you're on the EA forum). So even if I wanted to grant you that these rich people might have accurate views within their domain, I wouldn't grant that that makes them better at choosing charities. 

Basically, my overall point is that (1) really wealthy people are probably mostly really wealthy by chance of circumstance; (2) if not chance, and it is domain expertise in the area of their success, that doesn't largely transfer to success in choosing charities, and (3) based on my experience of EA, wealthy EAs are no more likely to make good decisions than non-wealthy EAs. So I'm comfortable endorsing the idea that having more money is not generally a good predictor of having great ideas about charity. 

I don't really want to get into an argument here about whether extreme wealth is largely luck-driven, or how much success in one domain translates to success in others, since I believe people tend to be firmly entrenched in one view or another on those topics and it could distract from the main topic of the Equal Hands experiment. My intention is merely to illustrate why someone might endorse the original statement. 

rich people in general, who by and large I think make absolutely terrible decisions about charity

I think this follows from a more general fact about people. If anything, I would guess that there's a positive correlation between wealth and EA values: that a higher (though still depressingly low) proportion of wealthy people donate to effective causes than is true of the general population? Would be interesting to see actual data, though.

You probably should add AMF as an option. It doesn't seem to be on the GWWC list, but IIRC it is tax deductible in significantly more places than any other common EA charity. That would allow people from countries with few tax-advantaged options to participate without giving up their tax benefits to do so.

I was super surprised by this, but then discovered that indeed GW top recommended charities are all listed on the top of the page, in some kind of main set of recommendations. E.g. Humane League is absent in the same way from animal welfare. 

Maybe makes sense to list them in both groups (cause area, and top recs), @Sjir Hoeijmakers🔸 ?

If I remember correctly, we decided not to list them in both groups because people already need to scroll a lot (especially on mobile) to see all the 15 programs, if we added the 6 recommended ones it would become 21

 

I agree that not seeing the top programs in the various categories is also confusing though, especially if you want to link to them directly

Oh interesting. Great catch, thanks! Added.

Signed up. I am a little concerned about voters who don't think through their cause prioritization carefully enough, and about the causes not being granular enough for voters to indicate their priorities well.

You could have "wild animal welfare", "alternative proteins", "fish and invertebrate welfare", "improving wellbeing on farms" and "reducing animal consumption" instead of just "animal welfare". That makes everything more complex though.

Yeah, I agree that this seems tricky. I thought about sub-causes, but also worried they'd just make it really burdensome to participate every month.

I ended up making a Discord for participants, and added a channel where people can explain their allocation, so my hope is that this lets people who have strong sub-cause prioritization make the case for it to other donors. Definitely interested in thoughts on how to improve this though, and it seems worth exploring further.

I am looking forward to picking a charity once I receive an allocation, weighing the opinions of others alongside my own. It may not be for my preferred cause, but I still have the freedom to pick a charity within the cause.

That's a great way to learn.

I can see myself recommending EH to beginner donors, donors who haven't thought through their cause prioritization yet, and donors who are very thoughtful relative to their budget.

Props for running an experiment. I'll be interested to see what the results are.

Is there anything interesting about supporting different currencies here? E.g. if I pledge to give $25 per month, but I'm actually likely to donate in £, do you tell me a dollar amount and ask me to convert it at time of donation? If so, do you need me to submit evidence of the FX rate in addition to evidence of the donation? Or perhaps you could ask me for my preferred currency, and send me pre-converted amounts when it comes time to do the donations?

Thanks! That's a great question and something I should figure out how to handle. I'll think about the ideal implementation of this and include something for November, but I think if it comes up for October participants (a rough sketch of the idea follows this list):

  • Pledge in USD, stating in the comments the amount you plan to give in your own currency (spot converted on the day of the pledge).
  • Give them amounts to give in their preferred currency using that rate.
  • Once donated and receipts are submitted, I'll spot convert at the time they donated, and if the dollar weakened relative to their original pledge substantially, backstop it.
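
To make that concrete, here is a minimal sketch of the idea, using made-up example amounts and exchange rates (none of these numbers come from the actual process):

```python
# Illustrative sketch of the FX backstop idea above: a donor pledges in USD,
# gives in their own currency at the pledge-day rate, and any shortfall in
# USD terms at donation time is covered by backstop funds.
# All rates and amounts are made-up example values.
pledge_usd = 100.00
usd_to_gbp_at_pledge = 0.80    # assumed spot rate on the day of the pledge
usd_to_gbp_at_donation = 0.84  # assumed spot rate on the day of the donation

amount_gbp = pledge_usd * usd_to_gbp_at_pledge        # what the donor is asked to give
usd_value_at_donation = amount_gbp / usd_to_gbp_at_donation

shortfall = max(0.0, pledge_usd - usd_value_at_donation)
print(f"Donor gives £{amount_gbp:.2f}; backstop covers ${shortfall:.2f}")
# Donor gives £80.00; backstop covers $4.76
```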

Thanks for the initiative Abraham! This seems like an interesting and valuable experiment.

One crucial question I have: Is it somehow possible to make sure that I can make tax deductible donations? I live in Germany and Effektiv Spenden does not cover all of these cause area options, as far as I know / can see. For instance, I don't think I could donate tax-deductibly to any of the EA Community Building options from Germany.

Thanks! This is a great point. I'll work on getting some German-deductible options on the list for all categories for future months, but also can confirm that the pool has up to $1,500 (and potentially more) in donation swappable dollars to help navigate this right now.

@Moritz Stumpe, there is an extra donation form on Effektiv Spenden's website, which has an EA cause area section: https://effektiv-spenden.org/spenden-effektiver-altruismus/.

There you can donate to Effektiver Altruismus Deutschland and CEA tax-deductibly from Germany. 

Oh, nice, I didn't see that. Thanks so much Sebastian!

Hi Abraham, I have a suggestion on how to improve your democratic process if your membership continues to grow. 

I'm a huge fan of lottocratic processes (ie sortition) to make informed and smarter democratic decisions than mere voting. The rationale of lottocratic democracy is simple. Imagine how insane it would be if, instead of using juries to decide court cases, we decided innocence or guilt based on voting. What jury duty does is facilitate democratic specialization. It allows a representative sample to perform a complex task so that the larger whole does not have to. I write a full defense of sortition here (and full disclosure, I am a frequent advocate of the practice).

Although the process you have created is much more democratic than the typical nonprofit, you personally retain enormous powers in setting the agenda and setting the final choices that can be allocated. It is admirable that you are putting in significant work to administer the fund; however that choice is not democratic. 

I suggest that the fund be administered by a small council of members (perhaps about 5 councilors) selected by lottery. One of the primary tasks of the small council is to elect an executive of the fund and review the executive's performance. It is far more efficient to let a small council perform this task; 5 people doing a performance review is vastly more efficient than demanding 20 people (assuming 20 participants) perform a performance review. 

If your fund manages to grow, I would suggest adding more and more councilors to the small council, up to 25 councilors to, say, manage 200 members. Eventually, I would even do away with voting altogether, and instead rely on the small council to make donation choices. With the same justifications as above, a small council would be far more efficient at the task. Moreover, councils are capable of deliberation and assigning roles/tasks, so that the council can make better informed decisions than voters.

In contrast, voters need to make tradeoffs. A voter might devote more time towards working and generating more revenue for the fund, in exchange for less informed voting on what ought to be funded. Sortition mitigates these kinds of tradeoffs by increasing decision making efficiency by several factors.  

Thanks! Strongly agree with making it more democratic via some mechanism, and if it survives beyond the first 6 months, I plan on moving it to having some kind of elected oversight group or similar (mainly will figure out how to do that with input from the members). Interesting note on sortition - this seems plausibly like a good use for it. Thanks!

There are 7 days left to sign up for the first month of this experiment!

As of right now, the marginal $25 influences the allocation of about $253 in expectation.

Thanks for trying this, Abraham! I suspect having a big donor commit to it would be important to make this successful, because then small donors would have an incentive to join to influence more than the size of their individual donations.

I agree! I think that these donors are probably the least incentivized to do this, but also where the most value would come from. Though I'll note that as of my writing this comment, the average donation is well above 10x the minimum.
