
I gave the opening talk at EA Global: London 2024. Here's a video of the talk and a lightly edited transcript.

EA has been on a journey over the past 13 years. It's no longer the scrappy collection of people in an Oxford basement that it once was. Many of us feel that we're at an inflection point, and it's important for us to own it.

As the new CEO of CEA, one of the most frequent questions I get asked is: what is CEA's relationship to the EA movement, and in particular, are we planning on leading the movement? I want to be clear that our relationship to EA is one of stewardship. Nobody controls EA. Even if someone wanted to, it wouldn't be possible given this is one of the most lovably disagreeable communities on the face of the planet. I'm not aiming for CEA to be the leader of the EA movement, but I do want CEA to be a leader as we work to raise others up, and as we steward the movement to live up to its maximum potential.

I recognize that, for some, this will feel like a shift from how CEA has viewed its relationship with the EA community in the past. Given that I'm in my first few months on the job, I want to share thoughts on what this may look like while staying humble about my ability to say with certainty what the EA community needs and how CEA can help. A large part of what I want to do in the coming months is listen and hear how CEA can help the people in this community on their road to impact.

But even while I'm still in the process of listening and learning, in the spirit of stewardship I want to present a hypothesis: three journeys that I think it's important for EA to be on. These represent areas where I hope CEA and I can help steward the community to live up to its potential for impact.

I want to introduce you to the first journey by introducing you to Quincy: they're a grants associate at Effective Ventures, an umbrella organization for many EA-affiliated organizations like CEA, EA Funds, and Giving What We Can. Those of you who have received money from one of those organizations may have Quincy to thank for that money arriving in your bank account. Quincy is holding up something incredibly cute that I assume is supposed to be a kidney, because they're in bed recovering after donating their kidney to a total stranger.

Quincy

Quincy's story is also the story of Quincy's friend, Abie. Abie has worked on a number of EA-affiliated communications projects: he was communications director at 1Day Sooner as it pushed for human challenge trials for a COVID vaccine, and he led communications at Forethought, including publicity for Will MacAskill's book What We Owe the Future. Abie also donated his kidney to a stranger, and it was hearing Abie's story that inspired Quincy to donate theirs.

Abie

Abie's story is also the story of Dylan Matthews. Dylan writes for the EA-inspired Future Perfect at Vox. Dylan wrote about donating his kidney to a total stranger. And his writing inspired Abie to donate his kidney in turn.

Dylan

Dylan's story is also the story of Ben Strahs, who also donated his kidney to a stranger and was one of two friends who inspired Dylan to do the same. The other friend was Alexander Berger who now leads Open Philanthropy and — you guessed it — donated his kidney to a total stranger.

Even though kidney donation isn't a top EA cause, it's easy to see why EA and kidney donation have overlapping appeal. Both involve altruistically helping others, even if they're strangers. And there's an elegant tractability that stems from the fact that some people have two kidneys, some people have zero, and everyone needs one.

Now, I want to be extremely clear that I'm not pressuring anyone to part with their internal organs, and I promise admission to the next EAG won't cost you one of yours. But I think the story of Quincy, Abie, Dylan, Ben, and Alexander is a powerful reminder that telling human stories can inspire others to do good.

And these social effects aren't limited to kidneys. For example, in 2023, 43% of Giving What We Can Pledges came from people who were inspired to pledge either by people they knew or an EA group they participated in. And no matter how large EA or its institutions become, I think there's an extremely good reason for us to expect that human connections will continue to matter. And that reason is this: we are weird.

Many normal people read about the idea of kidney donation and it stays just that — an idea. Something to be read instead of something to be done. The level of moral seriousness that exists in this community is both admirable and unusual. To contextualize the level of weirdness: before Quincy was allowed to donate a kidney, medical staff asked them questions like "Why do you want to donate a kidney to someone you don't know?", "Has anyone offered you money to donate?", and "Is anyone pressuring or coercing you to donate?"

For some people, the idea of altruistically donating a kidney seems unfathomable. So too might the idea of donating 10% of your income, or pivoting to a significantly lower-paying career in pursuit of impact.

The prioritization of neglectedness means we are systematically chasing weirdness. And no matter how much EA evolves, we should never stop being at least a little bit weird. That said, we can help normalize these acts of doing good. We can make the weird feel more human, to more people. We can do it by putting a face on what may otherwise be perceived as pure philosophy.

Human connection has been essential in growing effective altruism, and I think it will continue to be essential. So I want to hold onto stories of individuals, and I think they should continue to be faces of EA. But I also think it's essential we combine the power of individuals with something else: the strengths of institutions.

Journey 1: The individual with the institutional

The first journey I want to highlight for EA is to build up its institutions to be worthy of the individuals and impact they support. I think there are at least two key reasons to care:

The first is simple — we should care because other people care. It's not a secret that the world is taking more notice of effective altruism. When people hear of us for the first time, I'd love for their entry point to be Quincy or Abie or Dylan or Alexander. Sometimes it will be, and I think we should actively invest in communications that help others connect with the human parts of our community.

But oftentimes, the world hears about the work of institutions, particularly as some of these institutions have raised tens of millions of dollars and established legitimate track records of changing the world. This will only become more true if these institutions grow by trying to fundraise and hire from increasingly broad audiences.

Moreover, we know that the world's other institutions care about ours. As many of you are aware, in January 2023 the UK's Charity Commission launched an investigation into Effective Ventures UK with regard to its handling of conflicts of interest and the management of its assets.

Notably, while the Charity Commission's approach was reactive to the FTX collapse, it wasn't in response to any specific concerns about EV. They announced their investigation by saying, “there is no indication of wrongdoing.” They also recently concluded their investigation by saying no wrongdoing had occurred and Effective Ventures had acted “diligently and quickly” after FTX's collapse.

While I know that it can feel frustrating to be investigated by a regulator when they don't have evidence of wrongdoing, I think there's a real lesson for us to learn from the Charity Commission's proactive approach in vetting the reliability of EV. 

This brings me to the second reason why I think we ought to care about investing in our institutions: institutions can play an important role in calibrating trust within a community.

As communities grow, it becomes increasingly difficult to know and trust every individual. Institutions end up being a bridge for more scalable trust. Think of the other institutions that play a role in modern life. It's not a matter of trusting every employee in the government or every member of the local congregation or every journalist at the New York Times; people talk about trusting the government, or the church, or the press. People still have opinions on how much or how little they can trust those institutions in aggregate and the individuals associated with them, even if not every interaction occurs on a personal level.

I think having limits to how easily we grant trust to individuals in a community of our size is a healthy thing. As we are unfortunately very well aware, wearing a lightbulb shirt doesn't mean someone can't lie or steal.

Nobody gets to decide who self-identifies with this community. As the community grows, it becomes increasingly impossible for everyone to know each other. I continue to trust that the average person in this room is significantly more likely to share my values than the average person outside of it, but that doesn't mean every single individual who affiliates with this community will always act with integrity, or that we should trust someone who shares our values to pursue them with excellent character or execution.

Looking back, it seems significant that FTX didn't have a CFO, active board oversight, or significant vetting of its financials by investors. It lacked many of the hallmarks of competent oversight that underpin institutional trust, and in retrospect those absences look like warning signs.

EA is now at a level where it needs to invest more in scalable and institutional trust. Part of that requires maturation in formal structures like well-functioning boards and conflict-of-interest policies. But it also requires that we continue to use and expand upon the human strengths that I discussed before. We need to strengthen our institutions by engaging the right individuals to build them.

Importantly, I think this growth requires not just a change in policies, but also a change in people. We're going to need a bigger coalition to get the job done. I still believe in the value of philosophers and social scientists and activists; EA's continued success will require us to build on their work and find more people like them. But I also deeply believe that success requires lawyers and accountants and HR specialists, and if we continue to believe that raw intelligence and value alignment can always substitute for expertise, we're going to set ourselves up for failure. Even as our ethics ought to remain a bit weird, I don't think our accounting should be.

I believe that we can build a community worthy of the trust of the people in this room and the world outside it, and doing so will require us to remember that people matter, individually and as part of institutions.

Journey 2: Communicating inwards and outwards

When I talk with people about what they're hoping to see from CEA's stewardship of EA, the number-one request I hear — far and away — is for stewardship of EA communications and the EA brand. The post-FTX era has been extremely difficult for EA. Many of us have felt the increased scrutiny, and the data backs that up. Some extremely rough analysis of EA-related stories in the three months immediately before and after the FTX collapse suggests stories went from being over 70% positive to over 70% negative.

Obviously, there has been a need to grapple with the perception of EA in a post-FTX world. But I think it's a mistake to believe that moving beyond FTX would somehow mean we put all public scrutiny behind us. Because while scrutiny comes from scandal, it also comes from success.

I think this is a lesson that some parts of our community have been aware of for a long time. For example, some animal welfare advocates are well aware of how their work can attract attention from corporations. But I also think this has become increasingly salient for other parts of the community recently, particularly via a string of articles in Politico around the impact of EA on AI safety.

While SBF is oftentimes mentioned, significantly more of Politico's interest seems to have been generated by questions about money in politics and the influence of coalitions on policy. Which makes sense. In many ways, that's a sign that journalism is doing its job: looking at powerful actors in an attempt to help audiences understand them and hold them accountable.

So regardless of whether you call it effective altruism, or some other version of do-gooding 2.0, or refer to individual causes like global health and development and AI safety and animal welfare, substance still matters. Under any name, people will tell conspiracy theories about billionaires and people with weird ideas infiltrating the government unless you do something to help them understand what's actually going on.

And if you start screaming at the world to pay attention to poor people dying in Nigeria and chickens suffering in cages and existential risks posed by AI, then, well, they might listen. And that can't be the point where we go into hiding and stop trying to explain ourselves. If we want to optimize for avoiding scrutiny, the easiest way to do so is to simply stop mattering.

So, I feel that for many of us, the question shouldn’t be whether or not we’re noticed, but rather how we meet the moment when we are. In those moments, I think we have agency in how the world understands us. Which, to be clear, I'm not particularly convinced they currently do.

To cite one particularly egregious example, one article in Politico includes this quote: “Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.” Less consequential, but more amusing, is their reference to “Harold Karnofsky”, the “Head of Open Philanthropy” (his name is Holden).

Most of the world still hasn't heard of effective altruism, much less do they have an understanding of the politics surrounding our various causes or the diversity of beliefs people come to when they engage with EA principles.

Reporting from YouGov indicates only around 20% of Americans have heard of effective altruism, and more conservative analysis from Rethink Priorities indicates that the number may actually be less than 3%. Outside of America, I'd expect the numbers to be even lower. While I expect the numbers to be higher among some of the demographics people in this community interact with most, I think the point still stands that much of the world doesn't understand what EA is. While there are costs to that misunderstanding, it also represents an opportunity, as public opinion has yet to be fully shaped.

So, the question arises: What should we do about this? I think at some point, we have to stop waiting for other people to tell our stories to the world. We have to start telling them ourselves.

In other words, we need to communicate not only inwardly in our community, but also outwardly to help the world understand us. That will require us to change both the quantity and the quality of our communications.

When it comes to communicating more, we need to build up the capacity to do so. CEA is expanding our communications team, and I know other organizations are investing in increased communications capacity as well. I think it's important for us to use that capacity to be proactive instead of just reactive, and to help the world understand our intent and actions.

When it comes to quality, I want to be clear — I think this community has many truly exceptional communicators. People reading about EA on the internet and in books has historically been a major driver of interest in this community. I think we should continue to lean on what works to attract the audiences we have in the past. But I also think we need qualitatively different communications to reach a broader audience: both to help others realize how effective altruism can help them achieve their goals to do good better, and for the sake of those in positions of power in government, corporations, and journalism — people who may never use our principles but want to understand those of us who do.

This likely means diversifying our mediums and adapting our style to get beyond the barrier to entry of philosophy books, GiveWell's analysis of RCTs, 50-page-long Forum posts, and Harry Potter fanfiction.

I think that when many people in EA hear this, they're worried we'll become too salesy or less truthful. But I don't think becoming more accessible means we have to lose our idiosyncratic and nerdy soul.

As an example, GiveWell recently sent out an email celebrating their change to a more accessible blogging style. In that email, they took particular pride in mentioning how they managed to include a joke footnote in a recent post. The joke they went out of their way to email their followers about, of course, includes a reference to cost-benefit analysis. GiveWell, I love you, and I hope you always find joy in your footnotes.

Journey 3: Engaging with EA principles

For the third journey, I thought I could start by talking about my own journey with EA. 

This is Zach circa 2019. As you can see, he really loved food. This is definitely one of the things that hasn't changed. Although they're not featured in the picture, he also loved consequentialism, vegetarianism for animal welfare, and using extremely detailed spreadsheets to optimize decision making.

Despite that, something 2019 Zach didn't love was effective altruism. He couldn't have, because he had never heard of it. But I had a friend from college who was obsessed with AI and the possibility that all of us might die, and we spent many, many, many hours on the phone as he tried to convince me to work on AI. I was very confused and very uninterested.

But my friend was stubborn, and during my time working at a research startup, he encouraged me to apply to some organization I had never heard of called Open Philanthropy. They were looking for researchers to start a new team dedicated to finding ways to expand their giving in global health and development.

Just like AI, global health and development wasn’t particularly interesting to me. But I was deeply interested in the idea that it could be my job to engage with the question: How can I do the most good?

My path isn't particularly unique. It turns out that “How do I do the most good with my career?” and “How do I do the most good with my donations?” are two questions that people actually ask, and I think EA is the most powerful tool in the world to help people engage with them.

I worry that outreach for specific causes alone would never catch the eye of people who are asking the big-picture questions that EA's frameworks and principles try to answer. In a recent Rethink Priorities survey, 73% of respondents said that ideas related to EA were more important to their becoming involved in EA than any specific cause.

At the core of this approach to EA is promoting EA principles. While I don't expect there to be complete consensus on which principles count as fundamental, CEA focuses on scope sensitivity, scout mindset, impartiality, and the recognition of tradeoffs. I think that most people in the room are committed to putting those principles into practice. But I also worry that once we get far down the road towards impact, it’s easy to forget where we started — and how influential those principles were in getting us where we are now.

This is why the third journey I want to talk about is one that I feel like EA has been on, and I hope it continues to be on forever: Engaging with EA principles as a means of doing good better.

To be clear, I don't want these principles to exist to the exclusion of cause-specific work, which is essential to impact. But the principles continue to matter:

  1. They've been a source of inspiration, with a hard-to-replicate track record of being a beacon for thousands of people as they've searched for pathways to help them do good better.
  2. The principles, and the community of talented people who take them seriously, are flexible: they can adapt as the world changes and as we learn more about where we can do the most good.
  3. Promoting principles that draw people from many causes allows for a productive cross-pollination of ideas and changing of minds and resources, whether that's learning how to better engage in advocacy or donating money to one cause even as someone pursues a career in another.

With that being said, I'm sympathetic to concerns about a principles-first approach and the case for spending more resources on building specific fields. I think cause-specific field-building can also be impactful, and I don't feel confident in sweeping claims about how either a principles-first or cause-specific approach is much better than the other.

When the search committee was looking to hire a new CEO for CEA, they were open to choosing a CEO who focused only on AI. Given that, I think it's important for me to state that I'm committed to stewarding a principles-first EA community, and so is CEA.

For EA, I think the benefits of a principles-first approach outweigh the costs. I feel good about honestly saying, “Yes, cause-specific efforts can be very valuable, but so can a principles-first approach. Both should exist, and I'm focusing on the latter.” That's because I believe these principles have power that goes beyond any one cause, and they're worth celebrating as such.

I also want to be upfront that a principles-first approach doesn't mean we're principles-only, and we sometimes do have to make decisions where potential causes could trade off against each other, like how much content from various causes gets featured at EAGs. Based on my own understanding of these principles, I think it's appropriate that AI safety has the most representation in the community, and while the amount of attention specific causes receive may or may not change, I don't expect AI to stop being featured the most.

But I also think it's right that other major causes in EA like global health and development, animal welfare, and biosecurity also continue to have a meaningful stake in the EA community. I'm glad that taking EA principles seriously has led me on this journey. I celebrate the fact that it led me to prioritize AI safety, as unlikely as that seemed at one time, but I was also proud of the work I did contributing to global health and development and animal welfare, and I'm still proud of that work.

I want this to be a community that can make others feel that pride in their work, and I want us to celebrate the work that so many of you are doing on these immensely important and highly neglected causes — both within the community but also out in the wider world.

I recognize there's a journey ahead of me, CEA, and this community. While I outlined three key aspects of it today, I know the path ahead has yet to be fully mapped out, and I expect there to be twists in the road. I think the value of individuals, institutions, communications, and principles will matter to all of us, even if we don't have the same destination of donating a kidney or prioritizing AI safety. But wherever our roads go, I'm proud to be here right now, and I'm grateful to be working to make the world a better place with all of you.


