Patrick Gruban 🔸

COO / Co-Director @ Successif / EA Germany
1718 karma · Working (15+ years) · Munich, Germany

Bio


COO Successif, Co-Director EA Germany, Trustee Effective Ventures UK

Entrepreneur (currently textiles, previously software) for 25+ years and interested in EA since 2015, joining the local group and donating. I joined the EA Munich organiser team and took the GWWC pledge in 2020, developed the software for the new donation management system Effektiv Spenden is using in Germany and Switzerland in 2021 and have been co-director of EA Germany since November 2022.

I run the donation drive Knitters Against Malaria, which has raised over $100,000 for the Against Malaria Foundation since 2018.

How others can help me

Let me know if you have ideas for EA Germany or Successif

How I can help others

I can offer to mentor and be a sounding board if you are an EA-aligned non-profit entrepreneur

Comments
97

Topic contributions
5

I pushed back the time to 3-5 pm as the conference starts at 5.

Reading RP's work over the last few months, and the posts for debate week, has made me more inclined towards AW funding.

Congratulations on the new organisations and on finding the right people for the new boards. I'm happy to see this happen; all the best to the independent entities!

Re-reading Will MacAskill's Defining Effective Altruism from 2019, I saw that he used a similar approach, which resulted in four claims:

The ideas that EA is about maximising and about being science-aligned (understood broadly) are uncontroversial. The two more controversial aspects of the definition are that it is non-normative, and that it is tentatively impartial and welfarist. 

He didn't include integrity or collaborative spirit. However, he posted in 2017 that these two are among the guiding principles of CEA, other organisations, and key people.

Secondly, I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does 'recognition of tradeoffs' involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn't 'scope sensitivity' basically a subset of the concerns implied by 'impartiality'? Is something like 'do a counterfactually large amount of good' supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does 'scout mindset' need to be on the list, when 'thinking through stuff carefully and scrupulously' is a prerequisite to effective counterfactual actions?

This poses some interesting questions, and I've thought about them a bit, although I'm still a bit confused.

Let's start with the definition on effectivealtruism.org, which seems broadly reasonable:

Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.

So what EA does is:

  1. find the best ways to help others
  2. put them into practice

So, basically, we are a company with one department that builds solar panels and another that runs photovoltaic power stations using those panels. The two are related but distinct. If the solar panels are faulty, this will affect the power station; but if the power station is built by cutting down primary forest, the solar panel division is not at fault. Still, it will affect the reputation of the whole organisation, which will in turn affect the solar engineers.

But going back to the points, we could add some questions:

  1. find the best ways to help others
    1. How do we find the best ways to help?
    2. Who are the others?
  2. put them into practice
    1. How do we put them into practice?

1.a seems pretty straightforward: if we have different groups working on this, then the less biased ones (using a scout mindset and being scope-sensitive) and the ones using decision-making theories that recognise trade-offs and counterfactuals will fare better. Here, the principles follow logically from the requirements. If you want to make the best solar cells, you'll have to understand the science behind them.

1.b Here, we can see that EA is based on the value of impartiality, but impartiality is not a prerequisite for a group that wants to do good better. If I want to do the most good for my family, then I'm not impartial, but I could still use some of the methods EAs are using.

2.a Could be done in many different ways. We could commit massive fraud to generate money that we then donate based on the principles described in 1. 

In conclusion, I would see EA as:

  1. A research field that aims to find the best ways to help others
  2. A practical community that aims to put the results of 1 into practice
  3. Both governed by the following values:
    1. Impartiality or radical empathy
    2. Good character or collaborative spirit

Those two values seem to me to reflect the boundaries that the movement's founders, the most engaged actors, and the biggest funders want to see. 

Some people are conducting local prioritisation research, which might sometimes be worthwhile from an impartial standpoint, but giving up on impartiality would radically change the premise of EA work.

Having worked in startups and finance, I can imagine that there might be ways to implement EA ideas cost-effectively without honesty, integrity, and compassion. Aside from the risks of this approach, I would also see dropping this value as leading to a very different kind of movement. If we're willing to piss off the neighbours of the power plant, this will affect the reputation of the solar researchers.

In describing the history of EA, we could include the different tools and frameworks we have used, such as ITN. But these don't need to be the ones we'll use in the future, so I see everything else as being downstream from the definition above.

All of these activities sound like services provided to the EA community. [...] the same way Givedirectly is and should be judged by how effectively they serve their beneficiaries (e.g. Africans below the poverty line), CEA should be judged by how effectively it serves its effective beneficiaries by empowering them to do those things.

This doesn't sound right to me. If you want to focus on the customer analogy, the funders are paying CEA to provide impact according to their impact metrics. CEA engages with a subset of the EA community that it thinks will lead to effects that produce impact according to its own theory of change (ToC) and/or that of its funder(s). Target groups can differ based on the ToC of the project, which is why you see people engaging on the forum but being rejected from EAGs.

I think there is much room for criticism when looking more closely at the ToCs, which relates more to your next point:

  • The movement was founded on Givewell/GWWC doing reviews of and ultimately promoting charities - reviews for which transparency is an absolute prerequisite for recommendation
  • It seems importantly hypocritical as a movement to demand it of evaluees but not to practice it at a meta level

Both GiveWell and GWWC want to shift donation money to effective charities, which is why they have to make a compelling case to donors. Transparency seems to be a good tool for this. The analogy here would be CEA making the case for it to get funded for its work. Zach has written a bit about how they engage with funders.

I personally think there is a good case to be made to try for broader meta-funding diversification, which would necessitate more transparency around impact measurement. The EA Meta Funding Landscape Report asks some good questions. However, I can also see that the EV of this might be lower than that of engaging with a smaller set of funders. Transparency and engaging with a broad audience can be pretty time-consuming and thus lower the cost-effectiveness of your approach.

(All opinions are my own and don't reflect those of the organisations I'm affiliated with.)

Thank you for writing this up! I was happy to hear you're taking this approach at your EAG London opening talk and now see it in writing.

One point that stands out is that the principles published on effectivealtruism.org also include a "collaborative spirit" that is missing from your list:

Collaborative spirit: It’s often possible to achieve more by working together, and doing this effectively requires high standards of honesty, integrity, and compassion. Effective altruism does not mean supporting ‘ends justify the means’ reasoning, but rather is about being a good citizen, while ambitiously working toward a better world.

In the footnote, you write:

This list of principles isn’t totally exhaustive. For example, CEA’s website lists a number of “other principles and tools” below these core four principles and “What is Effective Altruism?” lists principles like “collaborative spirit”, but many of them seem to be ancillary or downstream of the core principles. There are also other principles like integrity that seem both true and extremely important to me, but also seem to be less unique to EA compared to the four core principles (e.g. I think many other communities would also embrace integrity as a principle).

CEA created the website effectivealtruism.org, and my understanding is that it used a collaborative approach, getting input from different stakeholders; the site was also published after the list of principles on CEA's website. Maybe I'm wrong here, but I would find it helpful to know more about the decision process behind the principle selection.

I expect disagreement about the principles, but an approach focussed on principles (which I support) could be more powerful when there is broader stakeholder consensus on what they are. In your EAG London speech, you talked about CEA taking a stewardship role for the EA community, which I interpreted as hearing members' perspectives when making community-wide decisions. When you write, "I view the community as CEA's team, not its customers," this sounds similar.

While CEA can have its own principles that differ, for example, from national and regional EA groups, a more consensus-based approach could help promote the brand across different target groups.

Thank you for that assessment! I agree that the legal risk is low, and for this reason, I wouldn't refrain from participating in the project.

On the reputation side, I might have updated too much from FTX. As an EA meta organisation, I would want a higher bar for taking donations than a charity working on the object level. This would especially be the case if I took part in a project that is EA-branded and was asked to promote the project to get funding. Suppose Manifund collapses, or the anonymous donor is exposed as someone whose reputation would keep people from joining the community. In that case, I think it would reflect poorly on the community's overall ability to learn from FTX and install better mechanisms to identify unprofessional behaviour.

Perhaps the crux is whether I would actually lose people in our target groups in one of these scenarios, or whether the reputational damage would fall only outside the target groups. In the last Community Health Survey, 46% of participants at least somewhat agreed with having a desire for changes post-FTX. Leadership and scandals were two of the top areas mentioned, which I interpret as community members wanting fewer scandals and better management of organisations. Vetting donors is one way leaders can learn from FTX and reduce risk. But there is also the risk of losing out on donations.

Yes, thank you for putting it this way; that was what I wanted to convey. For example, I would be more comfortable taking a grant funded by an anonymous donation to Open Phil, as they have a history of value judgments and due diligence concerning grants and seem to be a well-run organisation in general.
