Patrick Gruban 🔸

COO / Co-Director @ Successif / EA Germany
1630 karma · Joined · Working (15+ years) · Munich, Germany

Bio

Participation (6)

COO Successif, Co-Director EA Germany, Trustee Effective Ventures UK

Entrepreneur (currently textiles, previously software) for 25+ years, interested in EA since 2015, when I joined my local group and started donating. I joined the EA Munich organiser team and took the GWWC pledge in 2020, developed the software for the donation management system that Effektiv Spenden uses in Germany and Switzerland in 2021, and have been co-director of EA Germany since November 2022.

I run the donation drive Knitters Against Malaria, which has raised over $100,000 for the Against Malaria Foundation since 2018.

How others can help me

Let me know if you have ideas for EA Germany or Successif

How I can help others

I can offer to mentor and be a sounding board if you are an EA-aligned non-profit entrepreneur

Comments (94)

Topic contributions (5)

Congratulations on the new organisations and on finding the right people for the new boards. I'm happy to see this happen; all the best to the independent entities!

Re-reading Will MacAskill's Defining Effective Altruism from 2019, I saw that he used a similar approach that resulted in four claims:

The ideas that EA is about maximising and about being science-aligned (understood broadly) are uncontroversial. The two more controversial aspects of the definition are that it is non-normative, and that it is tentatively impartial and welfarist. 

He didn't include integrity or collaborative spirit. However, he posted in 2017 that these two are among the guiding principles of CEA, other EA organisations, and key people.

Secondly, I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does 'recognition of tradeoffs' involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn't 'scope sensitivity' basically a subset of the concerns implied by 'impartiality'? Is something like 'do a counterfactually large amount of good' supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does 'scout mindset' need to be on the list, when 'thinking through stuff carefully and scrupulously' is a prerequisite to effective counterfactual actions?

This poses some interesting questions, and I've thought about them a bit, although I'm still a bit confused.

Let's start with the definition on effectivealtruism.org, which seems broadly reasonable:

Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.

So what EA does is:

  1. find the best ways to help others
  2. put them into practice

So, basically, we are a company with a department that builds solar panels and another that runs photovoltaic power stations using these panels. Both are related but distinct. If the solar panels are faulty, this will affect the power station; but if the power station is built by cutting down primary forest, the solar panel division is not at fault. Still, it will affect the reputation of the whole organisation, which will in turn affect the solar engineers.

But going back to the points, we could add some questions:

  1. find the best ways to help others
    1. How do we find the best ways to help?
    2. Who are the others?
  2. put them into practice
    1. How do we put them into practice?

1.a seems pretty straightforward: if we have different groups working on this, then the less biased ones (using a scout mindset and being scope sensitive) and the ones using decision-making theories that recognise trade-offs and counterfactuals will fare better. Here, the principles logically follow from the requirements. If you want to make the best solar cells, you'll have to understand the science behind them.

1.b Here, we can see that EA is based on the value of impartiality, but impartiality is not a prerequisite for a group that wants to do good better. If I want to do the most good for my family, I'm not impartial, but I could still use some of the methods EAs are using.

2.a could be done in many different ways. We could, for example, commit massive fraud to generate money that we then donate based on the principles described in 1.

In conclusion, I would see EA as:

  1. A research field that aims to find the best ways to help others
  2. A practical community that aims to put the results of 1 into practice
  3. Both governed by the following values:
    1. Impartiality or radical empathy
    2. Good character or collaborative spirit

Those two values seem to me to reflect the boundaries that the movement's founders, the most engaged actors, and the biggest funders want to see. 

Some people are conducting local prioritisation research, which might sometimes be worthwhile from an impartial standpoint, but giving up on impartiality would radically change the premise of EA work.

Having worked in startups and finance, I can imagine that there might be cost-effective ways to implement EA ideas without honesty, integrity, and compassion. Aside from the risks of this approach, I would also see dropping this value as leading to a very different kind of movement. If we're willing to piss off the neighbours of the power plant, this will affect the reputation of the solar researchers.

In describing the history of EA, we could include the different tools and frameworks we have used, such as ITN. But these don't need to be the ones we'll use in the future, so I see everything else as being downstream from the definition above.

All of these activities sound like services provided to the EA community. [...] the same way Givedirectly is and should be judged by how effectively they serve their beneficiaries (e.g. Africans below the poverty line), CEA should be judged by how effectively it serves its effective beneficiaries by empowering them to do those things.

This doesn't sound right to me. If you want to focus on the customer analogy, the funders are paying CEA to provide impact according to their impact metrics. CEA engages with a subset of the EA community that it thinks will lead to effects that, in turn, lead to impact according to its own theory of change and/or the ToC of the funder(s). Target groups can differ based on the ToC of the project, which is why you see people engaging on the Forum but being rejected from EAGs.

I think there is much room for criticism when looking more closely at the ToCs, which speaks more to your next point:

  • The movement was founded on Givewell/GWWC doing reviews of and ultimately promoting charities - reviews for which transparency is an absolute prerequisite for recommendation
  • It seems importantly hypocritical as a movement to demand it of evaluees but not to practice it at a meta level

Both GiveWell and GWWC want to shift donation money to effective charities, which is why they have to make a compelling case to donors. Transparency seems to be a good tool for this. The analogy here would be CEA making the case to funders to get funded for its work. Zach has written a bit about how they engage with funders.

I personally think there is a good case for pursuing broader meta-funding diversification, which would necessitate more transparency around impact measurement. The EA Meta Funding Landscape Report asks some good questions. However, I can also see that the EV of this might be lower than that of engaging with a smaller set of funders. Transparency and engaging with a broad audience can be pretty time-consuming and thus lower the cost-effectiveness of your approach.

(All opinions are my own and don't reflect those of the organisations I'm affiliated with.)

Thank you for writing this up! I was happy to hear you were taking this approach in your EAG London opening talk, and I'm glad to now see it in writing.

One point that stands out is that the principles published on effectivealtruism.org also include a "collaborative spirit" that is missing from your list:

Collaborative spirit: It’s often possible to achieve more by working together, and doing this effectively requires high standards of honesty, integrity, and compassion. Effective altruism does not mean supporting ‘ends justify the means’ reasoning, but rather is about being a good citizen, while ambitiously working toward a better world.

In the footnote, you write:

This list of principles isn’t totally exhaustive. For example, CEA’s website lists a number of “other principles and tools” below these core four principles and “What is Effective Altruism?” lists principles like “collaborative spirit”, but many of them seem to be ancillary or downstream of the core principles. There are also other principles like integrity that seem both true and extremely important to me, but also seem to be less unique to EA compared to the four core principles (e.g. I think many other communities would also embrace integrity as a principle).

CEA created the website effectivealtruism.org, and my understanding is that it used a collaborative approach, getting input from different stakeholders, and that the site was published after the list of principles on CEA's website. Maybe I'm wrong here, but I would find it helpful to know more about the decision process behind the selection of principles.

I expect disagreement about the principles, but an approach focussed on principles (which I support) could be more powerful when there is broader stakeholder consensus on what they are. In your EAG London speech, you talked about CEA taking a stewardship role for the EA community, which I interpreted as hearing members' perspectives when making community-wide decisions. When you write, "I view the community as CEA’s team, not its customers.", this sounds similar.

While CEA can have its own principles that differ, for example, from national and regional EA groups, a more consensus-based approach could help promote the brand across different target groups.

Thank you for that assessment! I agree that the legal risk is low and wouldn't, for that reason alone, refrain from participating in the project.

On the reputation side, I might have updated too much from FTX. For an EA meta organisation, I want a higher bar for taking donations than for a charity working on the object level. This would especially be the case if I took part in a project that is EA-branded and was asked to promote it to get funding. Suppose Manifund collapses, or the anonymous donor is exposed as someone whose reputation would keep people from joining the community. In that case, I think it would reflect poorly on the community's overall ability to learn from FTX and install better mechanisms to identify unprofessional behaviour.

Perhaps the crux is whether I would actually lose people in our target groups in one of these scenarios, or whether the reputational damage would fall only outside the target groups. In the last Community Health Survey, 46% of participants at least somewhat agreed that they desired changes post-FTX. Leadership and scandals were two of the top areas mentioned, which I interpret as community members wanting fewer scandals and better management of organisations. Vetting donors is one way that leaders can learn from FTX and reduce risk. But there is also the risk of losing out on donations.

Yes, thank you for putting it this way; that was what I wanted to convey. For example, I would be more comfortable taking a grant funded by an anonymous donation to Open Phil, as they have a history of value judgements and due diligence concerning grants and seem to be a well-run organisation in general.

Thanks. I think linking to your internal notes might have helped; at least it gave me more insight and answered some questions.

I don't think most will do this, nor do I believe due diligence has to be very extensive. In most cases, it can be delegated to the grantmaker if there is enough prior information on their activities. For individuals, it can be a short online search or reaching out to people who know them. For organisations, it can also mean asking around whether others have already done this, which is what I was doing here: a distributed process can reduce the amount of work for everyone.

In this case, Manifund has a lot of information online, so, as others haven't chimed in, I'll use them as an example.

First, I had a cursory look at the Manifund website, reading through their board of directors meeting notes. What stood out:

  • "Technically underwater, after pending Manifold for Charity donations"
    • Seems like they had -$42,789
    • In the overview sheet, the deficit is gone now, but probably only because asset numbers were not updated
  • Seems like the only grant in the last year was from SFF, so I'm not sure how extensive grantmaker oversight was
  • More details about their structure in the SFF application
    • Seemingly no independent board oversight

Then I went into their notes to find out about their compliance processes, especially concerning donor and grantee due diligence.

  • I found this note with "getting a better sense of manifund’s due diligence for individual projects" linking to an ops document without any due diligence checks.
  • There's a doc about international payments and fiscal sponsorship that raises questions about whether they are aware of the steps needed to vet foreign entities, and whether they have the resources needed (probably not needed now, but relevant in case foreign orgs take part in the program). One remarkable quote:

Places I’m worries we (would) fall short (if we did this for LTFF):

  • money management/accounting
    • we’re just pretty loose about this right now: we have one big pot, and track things in our txns table in a way that basically works but we probably want to refine if we have more different pots that shouldn’t mix
    • I am personally bad about this in my own life as well, wary of having millions of dollars controlled by my 21 year old self + Austin who’s VERY “move fast and break things” 
  • There is a process for selecting grantees
  • Their main donor last year was anonymous: "We decided to do a regranting program after we were introduced to an anonymous donor, “D”, in May 2023. D liked Future Fund’s regranting setup, and wanted to fund their own regrantors, to the tune of $1.5m dollars across this year."

So just looking at these documents my quick takes:

  • Seems pretty transparent, including publishing mistakes and not-so-favourable notes
  • No red flags pointing to major criminal behaviour
  • Yellow flag concerning taking bigger anonymous donations combined with weak governance and compliance processes

Overall, I would see a moderate chance of the organisation failing to pay out user assets due to underfunding, facing serious risks to its charitable status, or failing IRS checks on grantmaking. However, I don't see the level of professionalism I would like to see in organisations in the EA ecosystem after FTX and the OpenAI board discussions, which is why I wouldn't want to partake in a Manifund project with EA branding and an anonymous donor.

But I'm probably missing information and context, and I'm happy to update with further facts.

I expect the amount of due diligence to vary based on funding, project size, and public engagement levels. Even a small amount might be a significant part of the yearly income of a small EA group, which might then have to deal with increased scrutiny or discussions based on the composition of its existing and potential group members.
