Holden recently claimed that EA is about maximizing, but that EA doesn't suffer very much because we're often not actually maximizing. I think that both parts are incorrect[1]. I don't think EA requires maximizing, and it certainly isn't about maximizing in the naïve sense in which it often occurs in practice.
In my view, Effective Altruism as a community has in many or most places gone too far towards this type of maximizing view, and it is causing substantial damage. Holden thinks we've mostly avoided the issues, and while I think he's right to say that many possible extreme problems have been avoided, I think we have, in fact, done poorly because of a maximizing viewpoint.
Is EA about Maximizing?
First, I will appeal to Will MacAskill's definition.
Effective altruism is:
(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and
(ii) the use of the findings from (i) to try to improve the world.
Part (i) is obviously at least partially about maximizing, in Will's view. But it is also tentative and cautious, rather than binary - so even if there is a single maximum, actually doing part (i) well means being very cautious about thinking we've identified that single peak. I also think it's easy to incorrectly read this as appealing to utilitarian notions, rather than beneficentric ones. Utilitarianism is maximizing, but EA is about maximizing with the resources dedicated to that goal. It does not need to be totalizing, and interpreting it as "just utilitarianism" is wrong. Further, I think that many community members are unaware of this distinction, which I see as critical.
But more importantly, part (ii), the actual practice of effective altruism, is not defined as maximizing. Very clearly, it is instead pragmatic. And pragmatism isn't compatible with much of what I see in practice when EAs take a maximizing viewpoint. That is, even according to views where we should embrace fully utilitarian maximizing - again, views that are compatible with but not actually embraced by effective altruism as defined - optimizing before you know your goal works poorly.
Before you know your goal exactly, moderate optimization pressure towards even incompletely specified goals that are imperfectly understood usually improves things greatly. That is, you can absolutely do good better even without finishing part (i), and that is what effective altruism has been and should continue to do. But at some point continuing optimization pressure has rapidly declining returns. In fact, over-optimizing can make things worse, so when looking at EA practice, we should be very clear that it's not about maximizing, and should not be.
Does the Current Degree of Maximizing Work?
It is possible in theory for us to be benefiting from a degree of maximizing, but in practice I think the community has often gone too far. I want to point to some of the concrete downsides, and explore how maximizing has been and is currently damaging to EA. To show this, I will start with exclusivity and elitism, go on to lack of growth and narrow vision, and then turn to premature optimization around current interventions. Given that, I will conclude that the "effective" part of EA is pragmatic, and fundamentally should not lead to maximizing, even if you were to embrace a (non-EA) totalizing view.
Maximizing and Elitism
The maximizing paradigm of Effective Altruism often pushes individuals towards narrow goals, ranging from earning to give, to AI safety, to academia, to US or international policy. This is a way for individuals to maximize impact, but leads to elitism, because very few people are ideal for any given job, and most of the areas in question are heavily selected for elite credentials and specific skills. Problems with this have been pointed out before.
It's also the case that individual maximization is rarely optimal for groups. Capitalism harnesses maximization to provide benefits for everyone, but when it works, that leads to diversity in specializations, not crowding into the single best thing. To the extent that people ask "how can I be maximally impactful," I think they are asking the wrong question - they are part of a larger group, and part of the world as a whole, and they can't view their impact as independent from that reality.
I think this is even more unsustainable given the amount of attention EA receives, and it also validates an increasingly common, and I think correct, criticism: that EA sometimes pushes against actually important things like fighting climate change, in search of so-called optimal things to focus on. When 1% of college students had heard of EA, directing them towards AI safety instead of climate change might have made sense. When 25% of them have heard of it, we really need to diversify - because we're far less effective with 1% of the population supporting effective goals than we could be with 25%.
Self-Limiting EA, and Lack of Growth
Maximizing effective altruism is also therefore unfortunately self-limiting, since most people can't, and shouldn't, actually work on the single highest-leverage thing. To continue the above example, it is absolutely great for 25% of college students to aim to give 1% of their income to effective charities, and wonderful if some choose to give more, or to focus their careers on maximizing impact. But it is absolutely disastrous for the community if people think of EA as a binary: either you're in, working on AI safety or biorisk or at an EA org, or you're out, doing something the community seems to disapprove of, like fighting climate change or only donating. Because if that happens, we don't grow. And we need to.
And before people say that we don't need money, no. Effective altruism is incredibly funding constrained. For example, EA has a nearly-unlimited funding opportunity in the form of GiveDirectly, and right now, in 2022, GiveWell is still short on donations to fund things that are 8x more effective than that. So the idea that, while the movement is growing rapidly and our funding base is potentially expanding further, we need to save money for later just in case we might want another billion dollars for AI risk in five years, seems myopic.
A maximizing viewpoint can say that we need to be cautious lest we do something wonderful but not maximally so. But in practice, from a pragmatic viewpoint, saving money while searching for the maximum seems bad. And Dustin Moskovitz can't, in general, seem to manage to give his money away fast enough for his net worth to stop increasing. So instead of maximizing, it seems, we could do more things that are less than maximally good, but get more good done.
Narrow Visions
A narrow vision of what is effective is also bad for the community's ability to work on priorities - even the narrow ones. Even if all we wanted to focus on was AI risk, we have too few graphic designers, writers, mental health professionals, or historians to work on all of the things we think would improve our ability to work on AI safety. There are presumably people who would have pursued PhDs in computer science, and would have been EA-aligned tenure track professors now, but who instead decided to earn-to-give back in 2014. Whoops!
And today, there are definitely people who are deciding not to apply to PhD programs in economics or bioengineering to work on AI risk. Maybe that's the right call in an individual case, but I'm skeptical that it's right in general. Maximizing narrowly is already leading to bad outcomes in the narrow domains.
Premature Optimization
Finally, a focus on legible and predictable solutions is great when you're close to the finish line, but in most areas we are not. This means that any optimization is going to be premature, and will therefore perform suboptimally.
For example, we fundamentally don't know what approaches will work for AI safety, we aren't anywhere near eliminating diseases, we haven't figured out how to stop cruelty in factory farming, much less replace meat, and the world isn't yet rich enough that it's possible to meet everyone's basic needs without political debates.[2] We need to explore, not just exploit - and things like the Open Philanthropy cause exploration contest, which looks for more immediately exploitable opportunities, are great - but I'd be even happier if it were an ongoing prize for suggestions that lead to funding something. We can and should be working on picking all the low-hanging fruit we find, and looking for more. That's not maximizing, but it's improving[3].
What could the success of non-maximizing EA look like?
Success, in my view, would make EA look like exercise, or retirement savings. No one seriously questions these as important parts of life for people in rich Western countries - and there are clear efforts to encourage more people to do both. But very few people are looking to maximize either, and that's mostly good.
What would Effective Altruism as exercise look like? We would still need the equivalent of gyms and parks to encourage people to donate 10% of their income, or to think about their career impact. But people could choose whether to just work on their cardio and give 1%, or to be really into exercising and have debates with friends about the relative importance of animal suffering, mental health, and disease elimination. We would still have sports, for those who are really doing impressive things, and people could care, but not feel like they were wasting their lives because they only played pickup basketball twice a week instead of getting into the NBA. The NBA is for maximalists; sports are for everyone.
What would Effective Altruism as retirement saving look like? Serious people would expect everyone to be donating to effective causes as the obvious thing to do, there would be something like default donation schemes so people could give more easily, and they would direct their donations to any of several organizations, like GiveWell, that allocate charitable giving. Those organizations would pursue maximizing impact, just like investors pursue maximizing (risk-adjusted) returns - but the community would not.
"Doing Good Better" makes the argument that people often aren't doing good well at all, with examples like giving money for playpumps and to give a wish foundation. We should do better, it says, and pay attention to improving the world. That's not maximizing, it's simply moving away from an obvious-in-retrospect failure mode. But this has been transformed into "Doing Good Best," and as I've argued, that unjustified in theory, and worse, bad in practice.
- ^
I am being somewhat unfair to him here, especially given his disclaimer that "there’s probably some nuance I’m failing to capture," and I'm mostly arguing against some of the nuance. But given that I am, in fact, trying to argue against parts of his view, and lots of the implicit conclusion, I'm disagreeing with him, despite the fact that I agree with most of what he says.
- ^
And progress studies doesn't seem to have found interventions that are money-constrained rather than politically constrained, though hopefully the field identifies some, and/or makes progress on removing political constraints to growth in ways that benefit the world.
- ^
There is a valid theoretical argument that we'd end up in a local maximum by doing this, or would run out of resources to do even better things. I just doubt it's true in practice at the current moment.
In the sense of "maximizing" you're using here, I agree entirely with this post. Aiming for the very best option according to a particular model and pushing solely on that as hard as you can will expose you to Goodhart problems, diminishing returns, model violations, etc.
However, I think the sense of "maximizing" used in the post you're responding to, and more broadly in EA when people talk about "maximizing ethics", is quite different. I understand it to mean something more like "doing the most good possible" - not aiming to clear a certain threshold, or trading off with other ethical or non-ethical priorities. It's a philosophical commitment that says "even if you've already saved a hundred lives, it's just as ethically important to save one more. You're not done."
It's possible that a commitment to a maximizing philosophy can lead people to adopt a mindset like the one you describe in this post - to the extent that that's true I don't disagree at all that they're making a mistake. But I think there may be a terminological mismatch here that will lead to illusory disagreements.
I think you're right that there are two meanings, and I'm primarily pointing to the failures on the more obviously bad level. But your view - that no given level is good enough, and we need to do marginally more - is still not equivalent to the maximizing view that I see and worry about. The view I'm talking about is an imperative to only do the best thing, not to do lesser good things.
And I think that the conception of binary effectiveness usually leads to the failure modes I pointed out. Unless and until the first half of Will's Effective Altruism is complete - an impossible goal, in my view - we need to ensure that we're doing more good at each step, not trying to ensure we do the most good, and nothing less.
I disagree with a couple of specific points as well as the overall thrust of this post. Thank you for writing it!
I think I strongly disagree with this because opportunities for impact appear heavy-tailed. Funding 2 interventions that are in the 90th percentile is likely less good than funding 1 intervention in the 99th percentile. Given this state of the world, spending much of our resources trying to identify the maximum is worthwhile. I think the default of the world is that I donate to a charity in the 50th percentile. And if I adopt a weak mandate to do lots of good (a non-maximizing frame, or an early EA movement), I will probably identify and donate to a charity in the 90th percentile. It is only when I take a maximizing stance, and a strong mandate to do lots of good (or when many thousands of hours have been spent on global priorities research), that I will find and donate to the very best charities. The ratios matter, of course: if I were faced with donating $1,000 to 90th percentile charities or $1 to a 99th percentile charity, I would probably donate to the 90th percentile charities, but if the numbers are $2 and $1, I should donate to the 99th percentile charity. I am claiming:
- the distribution of altruistic opportunities is roughly heavy-tailed;
- the best (and maybe only) way to end up in the heavy tail is to take a maximizing approach;
- the “wonderful” thing that we would do without maximizing is, as measured ex post (looking at the results in retrospect), significantly worse than the best thing;
- a claim I am also making, though which I think is weakest, is that we can differentiate between the “wonderful” and the “maximal available” opportunities ex ante (beforehand) given research and reflection;
- the thing I care about is impact, and the EA movement is good insofar as it creates positive impact in the world (including for members of the EA community, but they are a small piece of the universe).
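To make the "ratios matter" point concrete, here is a minimal sketch under an assumption I'm adding purely for illustration: that intervention impact follows a Pareto distribution with shape parameter alpha. Whether one 99th-percentile grant beats two 90th-percentile grants then depends entirely on how heavy the tail actually is.

```python
# Toy model, purely for illustration: assume intervention impact is Pareto(alpha)-distributed.
# The p-th quantile of a Pareto is x_m * (1 - p) ** (-1 / alpha), so the ratio of the
# 99th percentile to the 90th percentile is 10 ** (1 / alpha), independent of the scale x_m.

def quantile_ratio(alpha: float, p_high: float = 0.99, p_low: float = 0.90) -> float:
    """Ratio of the p_high quantile to the p_low quantile for a Pareto(alpha) distribution."""
    return ((1 - p_low) / (1 - p_high)) ** (1 / alpha)

for alpha in (1.1, 2.0, 3.0, 4.0):
    ratio = quantile_ratio(alpha)
    verdict = ("one 99th-percentile grant wins" if ratio > 2
               else "two 90th-percentile grants win")
    print(f"alpha = {alpha}: 99th/90th impact ratio ~ {ratio:.2f} -> {verdict}")
```

Under this toy model the single top grant dominates when alpha is near 1, while two very good grants win once alpha rises above roughly 3.3 - which is why I think the heavy-tail claim is doing most of the work in my argument.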
To me this seems like it doesn't support the rest of your argument. I agree that the correct allocation of EA labor is not everyone doing AI Safety research, and we need to have outreach and career-related resources to support people with various skills, but to me this is more so a claim that we are not maximizing well enough - we are not properly seeking the optimal labor allocation because we're a relatively uncoordinated set of individuals. If we were better at maximizing at a high level, and doing a good job of it, the problem you are describing would not happen, and I think it's extremely likely that we can solve this problem.
With regard to the thrust of your post: I cannot honestly tell a story about how the non-maximizing strategy wins. That is, when I think about all the problems in the world - pandemics, climate change, existential threats from advanced AI, malaria, mass suffering of animals, unjust political imprisonment, etc. - I can't imagine that we solve these problems if we approach them like exercise or saving for retirement. If I actually cared about exercise or saving for retirement, I would treat them very differently than I currently do (and I have had periods in my life where I cared more about exercise and thus spent 12 hours a week in the gym). I actually care about the suffering and happiness in the world, and I actually care that everybody I know and love doesn't die from unaligned AI or a pandemic or a nuclear war. I actually care, so I should try really hard to make sure we win. I should maximize my chances of winning, and practically this means maximizing for some of the proxy goals I have along the way. And yes, it's really easy to mess up this maximizing thing and to neglect something important (like our own mental health), but that is an issue with the implementation, not with the method.
Perhaps my disagreement here is not a disagreement about what EA descriptively is, and is more a claim about what I think a good EA movement should be. I want a community that's not a binary in/out, that's inclusive and can bring joy and purpose to many people's lives, but what I want more than those things is for the problems in the world to be solved - for kids to never go hungry or die from horrible diseases, for the existence of humanity a hundred years from now to not be an open research question, for the billions of sentient beings around the world to not live lives of intense suffering. To the extent that many in the EA community share this common goal, perhaps we differ in how to get there, but the strategy of maximizing seems to me like it will do a lot better than treating EA like I do exercise or saving for retirement.
I agree that we mostly agree. That said, I think that I either disagree with what it seems you recommend operationally, or we're talking past one another.
"Funding 2 interventions that are in the 90th percentile is likely less good than funding 1 intervention in the 99th percentile. Given this state of the world, spending much of our resources trying to identify the maximum is worthwhile."
Yes, we should do that second thing. But how much of our resources do we spend on identifying what exists? I'd agree that 1% of total EA giving going to cause exploration is obviously good, 10% is justifiable, and 50% is not even reasonable. Fifty percent was probably the right allocation when GiveWell was started, but it isn't now: as a community we've looked at thousands of potential cause areas and interventions, and are collectively sitting on quite a bit of money, an amount that seems to be increasing over time. Now we need to do things. The question now is whether we care about funding the 99.9th percentile interventions we have, versus waiting for certainty that something is a 99.99th or 99.999th percentile intervention, and spending to find it, or saving to fund it.
" I think the default of the world is that I donate to a charity in the 50th percentile..."
Agreed, and we need to fix that.
"...And if I adopt a weak mandate to do lots of good (a non-maximizing frame, or an early EA movement), I will probably identify and donate to a charity in the 90th percentile."
And that's where this lost me. In the early EA movement, this was true, and I would have pushed for more research and less giving early on. (But they did that.) And for people who haven't previously been exposed to EA, yes, there's a danger of under-optimizing, though it is mostly mitigated about an hour after looking at GiveWell's website. The community is well past looking at 90th percentile charities now. Continuing to think the things we've found are merely 90th percentile, acting that way, insisting we need to evaluate another million interventions to be certain, and saving until near-certainty is found, seems like an obviously bad decision today.
"I cannot honestly tell a story about how the non-maximizing strategy wins."
I think there's a conceptual confusion about optimizing versus maximizing here. If we use a binary maximal/non-maximal approach to altruism, we aren't just optimizing. And I'm not advising non-optimizing, or caring less. I'm advising pragmatic and limited application of the maximization mindset, in favor of pragmatic optimization with a clear understanding that our instrumental goals are poorly operationalized. I listed a bunch of places where I think we've gone too far, and now that it's happened, we should at least stop pushing further in the places where we've seen it work poorly.
Thank you for this post.
As a newbie to the EA concept and community, but as someone who has practiced a pick-up game of ‘greatest good’ since 2004, I have to agree. The current EA entry points for newcomers do feel very narrow. I don’t come from an AI background or a particularly academic or financially endowed one. I’m one of those ‘graphic designer, writer, mental health…’ types you mention.
I joined EA to find smart people who could help me think about my local, practical, and socially based ‘greatest good’ questions and issues. Now that I understand a little bit more about the macro longtermism aspects of EA, I’m pretty sure my side game isn’t a good use of the community’s time or talent.
I'm sorry that you're feeling that way, and I suspect you could be more impactful in many ways - but I agree that much of EA seems designed to marginalize people who want to help without being able to upend their lives. I'm guessing that what you're doing now is beneficial, and I hope that can continue.
But if you'd be interested in doing volunteer brand analysis for EA firms, or finding jobs working in an EA org with those skills, please let me know.
I somewhat agree with this post, which is why I think EA needs more entry options, à la ecumenical religions.
In fact, I'd argue the first entry point for people interested in EA should be donating and GWWC.
I agree, but think EA has been building many of those entry points, from GWWC, to One for the World, to GiveWell, to 80k, to university and local groups, to EA for Christians, EA for Jews, and EA for Muslims - I'm just concerned that many of them implicitly or explicitly convey maximizing as an ideal.
From the GWWC perspective, I’m incredibly open to specific feedback as to where you think we can strike the right balance better.
I think I agree on most of this post but I'm unsure what concrete actions it suggests, especially for the median community member (as opposed to say Open Phil).
First, the median community member should appreciate that whatever they are working on can be good, and can be made better, without needing to be judged against some absolute yardstick. Relatedly, second, everyone needs to take a long pause before judging what anyone else is doing, and ask themselves whether it's a good thing, rather than asking if it's "EA-enough."
Similarly, I often see "EA groups" that are looking for members to sign on to a manifesto, instead of looking to inspire or encourage others to do more good, or to collaborate with others on beneficial projects.
I see far too much discussion of whether something "is EA" or "should be considered EA," without any appreciation that "EA" isn't a yardstick or a goal, it's a conceptual framework. Obviously, if that's true, we should stop pushing people to accept the framework, or join the community, and instead look for concrete steps towards helping them do the type of good they are interested in better - whether they are interested in global poverty, education, developmental health, mental health, biosecurity, animal welfare, or AI safety.
Thank you so much for this post. Your example of capitalism points the way. I plan to write a post soon suggesting that individuals following their personal passions for where to do good could lead to just this kind of effective distribution of altruistic energy that is globally optimized even though apparently less efficient at the local level.
One thing I kind of want to say is this: I agree that EA probably would benefit from being more capitalist, but I do want to point out why a central authority will still be necessary.
The reason is that in capitalism, the downside is essentially capped, or even made equivalent to zero as measured by profit metrics, while the upside is unlimited money. This is not the case for altruistic impact: the risks of harm are not capped any more than the benefits are, so efficiency is not always good. Differential progress is necessary, and thus there is a need for at least a central authority to weed out ideas that are dangerous. Capitalism and markets are, by default, too neutral about where progress goes.
Second, infohazards matter. In a free market, all information, including infohazards, is shared freely, because the people sharing it don't personally bear the cost. For example, insights into AI progress would likely be infohazardous.
So free markets will need to have a central authority that can stop net-negative ideas.
I agree that open competition could lead to bad dynamics, but you absolutely don't need a central authority for this; you just need a set of groups trusted to have high epistemic and infohazard standards. Within the "core EA" world, I'll note that we have Open Phil, GiveWell, the FTX Foundation, the Survival and Flourishing Fund, Founders Pledge, and the various EA Funds (LTFF, Infrastructure, etc.), and I'd be shocked if we couldn't easily add another dozen - though I'd be happier if they had somewhat more explicitly divergent strategies, rather than overlapping to a significant extent. So we do need authorities who are responsible, but no, they don't need to be centralized.
That seems wrong, at least in the naïve way you suggest. Yes, I often encourage people to consider what they are interested in and enjoy doing, as those are critical inputs into effectiveness, but I've seen too many people fail at the rationalist virtues of actually evaluating the evidence and changing their minds for me to think that following passions alone is going to be helpful.
This isn't the main point of this post, but I feel like it's a common criticism of EA, so I feel like it might be useful to voice my disagreement with this part:
I think viewing yourself as an individual is not in tension with viewing yourself as part of a whole. Your individual actions constitute a part of the whole's actions, and they can influence other parts of that whole. If everyone in the whole did maximize the impact of their actions, the whole's total impact would also be maximized.
100% agree. But again I don't think that's in tension with thinking in terms of where one as an individual can do the most good - it's just that for different people, that's different places.
I don't think we disagree - but there is a naïve approach I've seen people take, where they tell people that even though there are other critical tasks, everyone should be working on technical AI safety regardless of their fit, because it's the most important thing to work on.