This version of the essay has been lightly edited. You can find the original here.


When people come to an effective altruism event for the first time, the conversation often turns to projects they’re pursuing or charities they donate to. They often have a sense of nervousness around this, a feeling that the harsh light of cost-effectiveness is about to be turned on everything they do. To be fair, this is a reasonable thing to be apprehensive about, because many youngish people in EA do in fact have this idea that everything in life should be governed by cost-effectiveness. I've been there.

Cost-effectiveness analysis is a very useful tool. I wish more people and institutions applied it to more problems. But like any tool, it isn't suited to every part of your life. Not everything you do is in the “effectiveness” bucket. I don't even know what that would look like.

I have lots of goals. I have a goal of improving the world. I have a goal of enjoying time with my children. I have a goal of being a good spouse. I have a goal of feeling connected in my friendships and community. Those are all fine goals, but they’re not the same. I have a rough plan for allocating time and money between them: Sunday morning is for making pancakes for my kids. Monday morning is for work. It doesn’t make sense to mix these activities, to spend time with my kids in a way that contributes to my work or to do my job in a way that my kids enjoy.

If I donate to my friend’s fundraiser for her sick uncle, I’m pursuing a goal. But it’s the goal of “support my friend and our friendship,” not my goal of “make the world as good as possible.” When I make a decision, it’s better if I’m clear about which goal I’m pursuing. I don’t have to beat myself up about this money not being used for optimizing the world — that was never the point of that donation. That money is coming from my "personal satisfaction" budget, along with money I use for things like getting coffee with friends.

I have another pot of money set aside for donating as effectively as I can. When I'm deciding what to do with that money, I turn on that bright light of cost-effectiveness and try to make as much progress as I can on the world’s problems. That involves looking at the research on different interventions and choosing what I think will do the most to bring humanity forward in our struggle against pointless suffering, illness, and death. The best cause I can find usually ends up being one that I didn’t previously have any personal connection to, and that doesn’t nicely connect with my personal life. And that’s fine, because personal meaning-making is not my goal here. I can look for personal meaning in the decision afterward, but that's not what drives the decision.

When you make a decision, be clear with yourself about which goals you’re pursuing. You don’t have to argue that your choice is the best way of improving the world if that isn’t actually the goal. It’s fine to support your local arts organization because their work gives you joy, because you want to be active in your community, or because they helped you and you want to reciprocate. If you also have a goal of improving the world as much as you can, decide how much time and money you want to allocate to that goal, and try to use those resources as effectively as you can.

This work is licensed under a Creative Commons Attribution 4.0 International License.

Comments (28)

Stuff I'd change if I were rewriting this now:

  • not include the reference to "youngish" EAs wanting to govern everything by cost-effectiveness. I think it's more a result of being new to the idea than of being young.
  • make clearer that I do think significant resources should go toward improving the world. Without context, I don't think that's clear from this post.

This post is pushing against a kind of extremism, but it might push in the wrong direction for some people who aren't devoting many resources to altruism. It's not that I think people in general should be donating more to their friend's fundraiser or their community arts organization - I'd rather see them putting more resources toward things that are more important and cost-effective. But I would like people to examine whether they're doing things for more self-regarding personal reasons, or for optimizer-y improve-the-world reasons. I'd like them to enjoy the resources they put toward themselves and their friends, but also take seriously the project of improving the world and put significant resources toward that, rather than being confused about which project they're pursuing, which I think is suboptimal both for their own enjoyment and for improving the world.

"When you make a decision, be clear with yourself about which goals you’re pursuing. You don’t have to argue that your choice is the best way of improving the world if that isn’t actually the goal"...this quote drives it home for me....what a way to end this introductory course on EA as a first timer. Amazing.

"When you make a decision, be clear with yourself about which goals you’re pursuing. You don’t have to argue that your choice is the best way of improving the world if that isn’t actually the goal".

That actually sums up, and also clarifies, some of my uncertainties and curiosities about the application of cost-effectiveness.

Thanks for sharing.

kbog

There is a difference between cost-effectiveness as a methodology and utilitarianism or other impartial philosophies.

You could just as easily use cost-effectiveness for personal daily goals, and some people do with things such as health and fitness, but generally speaking our minds and society happen to be sufficiently well-adapted to let us achieve these goals without needing to think about cost-effectiveness. Even if we are only concerned with the global good, it's not worthwhile or effective to have explicit cost-effectiveness evaluation of everything in our daily lives, though that shouldn't stop us from being ready and willing to use it where appropriate.

Conversely, you could pursue the global good without explicitly thinking about cost-effectiveness even in domains like charity evaluation, but the prevailing view in EA is (rightfully) that this would be a bad idea.

What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial and a great deal of ink has already been spilled on the topic. Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism and moral uncertainty yields roughly similar results. Assuming anti-realism, I don't have any reason to intrinsically care more about your family, friends, etc (and certainly not about your local arts organization) than anyone else in the world, so I cannot endorse your attitude. I do intrinsically care more about you as you are part of the EA network, and more about some other people I know, but usually that's not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don't think in literal cost-effectiveness terms, but global benefits are still my general goal. It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

It's important to remember that having parochial attitudes towards some things in your own life doesn't necessarily justify attempts to spread analogous attitudes among other people.

What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial and a great deal of ink has already been spilled on the topic.
Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism and moral uncertainty yields roughly similar results.
Assuming anti-realism, I don't have any reason to intrinsically care more about your family, friends, etc (and certainly not about your local arts organization) than anyone else in the world, so I cannot endorse your attitude.
I do intrinsically care more about you as you are part of the EA network, and more about some other people I know, but usually that's not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don't think in literal cost-effectiveness terms, but global benefits are still my general goal. It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

(I broke the quoted text into more paragraphs so that I could parse it more easily. I'm thinking about a reply – the questions you're posing here do definitely deserve a serious response. I have some sense that people have already written the response somewhere – Minding Our Way by Nate Soares comes close, although I don't think he addresses the "what if there actually exist moral obligations?" question, instead assuming mostly non-moral-realism)

On a different note though:

It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

I actually agree with this claim, but it's a weirder claim.

People used to have real communities. And engaging with them was actually a part of being emotionally healthy.

Now, we live in an atomized society where community mostly doesn't exist, or is a pale shadow of its former self. So there are a lot of people who donate to the local arts club or whatever out of a vague sense of obligation rather than because it's actually helping them be healthy.

And yes, that should be challenged. But not because those people should instead be donating to the global good (although maybe they should consider that). Rather, those people should figure out how to actually be healthy, actually have a community, and make sure to support those things so they can continue to exist.

Sometimes this does mean a local arts program, or dance community, or whatever. If that's something you're actually getting value from.

The rationalist community (and to a lesser extent the EA community) has succeeded in being, well, more of a "real community" than most things manage to be. So there are times when I want to support projects within them, not from the greater-good standpoint, but from the "I want to live in a world with nice things, this is a nice thing" standpoint. (More thoughts here in my Thoughts on the REACH Patreon article.)

I feel that my folk dance community is a pretty solidly real one - people help each other move, etc. The duration is reassuring to me - the community has been in roughly its current form since the 1970s, so folk dancers my age are attending each other's weddings and baby showers but we eventually expect to attend each other's funerals. But I agree that a lot of community institutions aren't that solid.

I recently chatted with someone who said they've been part of ~5 communities over their life, and that all but one of them was more "real community" like than the rationalists. So maybe there's plenty of good stuff out there and I've just somehow filtered it out of my life.

The "real communities" I've been part of are mostly longer-established, intergenerational ones. I think starting a community with almost entirely 20-somethings is a hard place to start from. Of course most communities started like that, but not all of them make it to being intergenerational.

I saw what seemed like potential communities over the years (soccer club, improv comedy club, local Toastmasters), but I was afraid... to be myself, of being judged, of making a fool of myself, worried about being liked... so I passed. Here I am now in EA, giving it a shot. I may go to the improv comedy meetings soon. According to Hari's "Lost Connections," finding a community is very important; we're social animals and don't do well in loneliness.

The folk dance community sounds wonderful and fun :)

Meanwhile, my previously written thoughts on this topic, not quite addressing your claims but covering a lot of related issues, are here. Crossposting for ease of reference; warning that it includes some weird references that may not be relevant.

Context: Responding to Zvi Mowshowitz, who argues that you should be wary of organizations/movements/philosophies that encourage you to give them all your resources (even your favorite political cause, yes, yours, yes, even effective altruism).

Point A: The Sane Response to The World Being On Fire (While Human)
I, along with most EA folk I talk to extensively (including all the leaders I know of), seem to share the following mindset:
The set of ideas in EA (whether focused on poverty, X-Risk, or whatever) does naturally lead one down a path of "sacrifice everything because do you really need that $4 Mocha when people are dying the future is burning everything is screwed but maybe you can help?"
But, as soon as you've thought about this for any length of time, clearly, stressing yourself out about that all the time is bad. It is basically not possible to hold all the relevant ideas and values in your head at once without going crazy or otherwise getting twisted/consumed-in-a-bad-way.
There are a few people who are able to hold all of this in their head and have a principled approach to resolving everything in a healthy way. (Nate Soares is the only one who comes to mind; see his Replacing Guilt series.) But for most people, there doesn't seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.
You can resolve this by saying "well then, the obvious-implications-of-EA-thinking must be wrong", or "I guess maybe I don't need to live healthily".
But, like, the world is on fire and you can do something about it and you do obviously need to be healthy. And part of being healthy is not just saying things like "okay, I guess I can indulge things like not spending 100% of my resources on saving the world in order to remain healthy but it's a necessary evil that I feel guilty about."
AFAICT, the only viable, sane approach is to acknowledge all the truths at once, and then apply a crude patch that says "I'm just going to not think about this too hard, try generally to be healthy, and put whatever bit of resources towards having the world not-be-on-fire that I safely can."
Then, maybe check out Nate Soares's writing and see if you're able to integrate it in a more sane way, if you are the sort of person who is interested in doing that, and if so, carefully go from there.
Point B: What Should A Movement Trying To Have the World Not Be On Fire Do?
The approach in Point A seems sane and fine to me. I think it is in fact good to try to help the world not be on fire, and that the correct sane response is to proactively look for ways to do so that are sustainable and do not harm yourself.
I think this is generally the mindset held by EA leadership.
It is not out of the question that EA leadership in fact really wants everyone to Give Their All, that it's better to err on the side of pushing harder for that even if it means some people end up doing unhealthy things, and that the only reason they say things like Point A is as a ploy to get people to give their all.
But, since I believe Point A is quite sane, and most of the leadership I see is basically saying Point A, and I'm in a community that prioritizes saying true things even if they're inconvenient, I'm willing to assume the leadership is saying Point A because it is true, as opposed to for Secret Manipulative Reasons.
This still leaves us with some issues:
1) Getting to the point where you're on board with Point-A-the-way-I-meant-Point-A-to-be-interpreted requires going through some awkward and maybe unhealthy stages where you haven't fully integrated everything, which means you are believing some false things and perhaps doing harm to yourself.
Even if you read a series of lengthy posts before taking any actions, even if the Giving What We Can Pledge began with "we really think you should read some detailed blogposts about the psychology of this before you commit" (this may be a good idea), reading the blogposts wouldn't actually be enough to really understand everything.
So, people who are still in the process of grappling with everything end up on EA forum and EA Facebook and EA Tumblr saying things like "if you live off more than $20k a year that's basically murder". (And also, you have people on Dank EA Memes saying all of this ironically except maybe not except maybe it's fine who knows?)
And stopping all this from happening would be pretty time consuming.
2) The world is in fact on fire, and people disagree on what the priorities should be on what are acceptable things to do in order for that to be less the case. And while the Official Party Line is something like Point A, there's still a fair number of prominent people hanging around who do earnestly lean towards "it's okay to make costs hidden, it's okay to not be as dedicated to truth as Zvi or Ben Hoffman or Sarah Constantin would like, because it is Worth It."
And present_day_Raemon thinks those people are wrong, but not obviously so wrong that it's not worth talking about and taking seriously as a consideration.

The tldr I guess is:

Maybe it's the case that being emotionally healthy is only valuable insofar as it translates into the global good (if you assume moral realism, which I don't).

But, even in that case, it often seems that being emotionally healthy requires, among other things, that you not treat your emotional health as a necessary evil that you indulge.

kbog
But, even in that case, it often seems that being emotionally healthy requires, among other things, that you not treat your emotional health as a necessary evil that you indulge.

The claim that it typically requires this to the degree advocated by the OP or Zvi is (a) probably false, in my basic perception, but (b) would require proper psychological research before firm conclusions can be drawn.

But for most people, there doesn't seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.

This is a crux, because IMO the way that the people who frequently write and comment on this topic seem to talk about altruism represents a much more neurotic response to minor moral problems than what I consider to be typical or desirable for a human being. Of course the people who feel anxiety about morality will be the ones who talk about how to handle anxiety about morality, but that doesn't mean their points are valid recommendations for the more general population. Deciding not to have a mocha doesn't necessarily mean stressing out about it, and we shouldn't set norms and expectations that lead people to perceive it as such. It creates an availability cascade of other people parroting conventional wisdom about too-much-sacrifice when they haven't personally experienced confirmation of that point of view.

If I think I shouldn't have the mocha, I just... don't get the mocha. Sometimes I do get the mocha, but then I don't feel anxiety about it, I know I just acted compulsively or whatever and I then think "oh gee I screwed up" and get on with my life.

The problem can be alleviated by having shared standards and doctrine for budgeting and other decisions. GWWC with its 10% pledge, or Singer's "about a third" principle, is a first step in this direction.

Minding Our Way by Nate Soares comes close, although I don't think he addresses the "what if there actually exist moral obligations?" question, instead assuming mostly non-moral-realism)

Not sure what he says (haven't got the interest to search through a whole series of posts for the relevant ones, sorry), but my point assuming antirealism (or subjectivism) seems to have been generally neglected by philosophy both inside and outside academia: just because the impartial good isn't everything doesn't mean that it is rational to generically promote other people's pursuits of their own respective partial goods. The whole reason humans created impartial morality in the first place is that we realized it works better than for each of us to pursue partialist goals.

So, regardless of most moral points of view, the shared standards and norms around how-much-to-sacrifice must be justified on consequentialist grounds.

I should emphasize that antirealism != agent-relative morality, I just happen to think that there is a correlation in plausibility here.

This post and Julia's essay "Cheerfully" are the posts I most often recommend to other EAs.

Thanks for writing this.

I feel an ongoing sense of frustration that even though this has seemed like the common wisdom of most "longterm EA folk" for several years... new people arriving in the community often have to go through a learning process before they can really accept this.

This means that in any given EA space, where most people are new, there will be a substantial fraction of people who haven't internalized this, are still stressing themselves out about it, and are in turn stressing out even newer people, who are exposed more often to the "see everything through the utilitarian lens" framing than to posts like this.

Thanks for this post. I think it provides a useful perspective, and I've sent it to a non-EA friend of mine who's interested in EA, but concerned by the way that it (or utilitarianism, really) can seem like it'd be all-consuming.

I also found this post quite reminiscent of Purchase Fuzzies and Utilons Separately (which I also liked). And something that I think might be worth reading alongside this is Act utilitarianism: criterion of rightness vs. decision procedure.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

[I work with Julia.]

I think this piece is maybe the best short summary of a strand in Julia's writing that has helped EA to seem more attainable for people.

This is a great post, Julia. This helped me. I do a lot of volunteer work in my community and have been thinking about whether I should give that up to devote more time to EA causes (even though I don't want to), but I really should not do this. I don't think I would be that effective with my extra time anyway, because something would be missing from my life. Much love.

Could you say a little more about how you decide what size each pot of money should be?

My advice on how to decide the pots of money is basically in this post: http://www.givinggladly.com/2012/03/tradeoffs.html

TL;DR: spend some time noticing how much other people give, and let that inform your budget, but don't try to pay attention to them every day, because you probably can't go around powered by guilt forever.

That advice was written at a time when I thought of donation as basically the only path to impact, at least for myself. I do think it's worth seriously considering whether other paths are viable for you and not committing to a level of donation that will seriously reduce your ability to pursue other things. This probably won't be surprising coming from the person running Giving What We Can, but I think something like 10% is a level that's both significant and also compatible with, for example, working for a nonprofit.

I find the upside of deciding annually on my donation budget is that I can then make all the other decisions the way everyone else does. Vacation? Lunch with a friend? Donation to friend's fundraiser? They're all in the "stuff that will enrich my life" category, so I can trade them off against each other however I think will be best for me.

Thanks, Julia.

I think guilt is a powerful & fragile motivator that should basically be considered harmful, at least for people whose psychologies are shaped like mine.

This all reminds me of stuff that Raemon has been writing recently, as well as this part of the "EA jobs are really hard to get" thread.

This would be my practical question as well, for the following reasons.

I don’t see a way to ultimately resolve conflicts between an (infinite) optimizing (i.e., maximizing or minimizing) goal and other goals if they’re conceptualized as independent from the optimizing goal. Even if we treat the independent goals as something to merely “satisfice” (i.e., take care of “well enough”) rather than optimize, our optimizing goal, by its infinite nature, will want to negotiate as many resources as possible for itself, and its reasons for earning its living within me are independently convincing (that’s why it’s an infinite goal of mine in the first place).

So an infinite goal of preventing suffering wants to understand why my conflicting other goals require a certain amount of resources (time, attention, energy, money) for them to be satisficed, and in practice this feels to me like an irreconcilable conflict unless they can negotiate by speaking a common language, i.e., one which the infinite goal can understand.

In the case of my other goals wanting resources from an {infinite, universal, all-encompassing, impartial, uncompromising} compassion, my so-called other goals start to be conceptualized through the language of self-compassion, which the larger, universal compassion understands as a practical limitation worth spending resources on – not for the other goals’ independent sake, but because they play a necessary and valuable role in the context of self-compassion aligned with omnicompassion. In practice, it also feels most sustainable and long-term wise to usually if not always err on the side of self-compassion, and to only gradually attempt moving resources from self-compassionate sub-goals and mini-games towards the infinite goal. Eventually, omnicompassion may expect less and less attachment to the other goals as independent values, acknowledging only their relational value in terms of serving the infinite goal, but it is patient and understands human limitations and growing pains and the counterproductive nature of pushing its infinite agenda too much too quickly.

If others have found ways to reconcile infinite optimizing goals with satisficing goals without a common language to mediate negotiations between them, I’d be very interested in hearing about them, although this already works for me, and I’m working on becoming able to write more about this, because it has felt like an all-around unified “operating system”, replacing utilitarianism. :)

Hi Teo! I know your comment was from a few years ago, but I was so excited to see someone else in EA talk about self-compassion. Self-compassion is one of the main things that lets me be passionate about EA and have a maximalist moral mindset without spiraling into guilt, and I think it should be much more well-known in the community. I don't know if you ever ended up writing more about this, but if you did, I hope you'd consider publishing it -- I think that could help a lot of people!

Hi Ann, thanks for the reply! I agree that self-compassion can be an important piece of the puzzle for many people with an EA outlook.

I am definitely still working on reframing EA-related ideas and motivations so that the default language would not so easily lead to 'EA guilt' and some other problems. Lately I've been focusing on more general alternatives to 'compassion', because people often have different (and strong) preexisting notions of what compassion means, and so I'm not sure if compassion will serve as the kind of integrative 'bridge concept'  that I'm looking for to help solve many (e.g. terminological) problems simultaneously. 

So unfortunately I don't have much (quickly publishable) stuff on compassion specifically, having been rotating abstract alternatives like 'dissonance minimalism' or 'complex harmonization'. But who knows, maybe I'll end up relating things via compassion again, at some point!

I'm not up-to-date on what the existing EA-memesphere writings on (self-)compassion are, but I love the Replacing Guilt series by Nate Soares (http://mindingourway.com/guilt), often mentioned on LW/EA. It has also been narrated as a podcast by Gianluca Truda. I believe it is a good recommendation for anyone who is feeling overwhelmed by the ambitions of EA.

"I have lots of goals. I have a goal of improving the world. I have a goal of enjoying time with my children. I have a goal of being a good spouse. I have a goal of feeling connected in my friendships and community. Those are all fine goals, but they’re not the same." This post is one of my favorite articles in the EA program. Written in clear, easy to understand language and goes straight to how I feel. I have confusing emotions about doing the most good. 


Having clear and separate goals sounds helpful. Using some resources to go to a movie or meet a friend for coffee is ok. I am having a tougher time deciding whether to stop donating to areas that are close to me. Should I stop donating money in Haiti and instead donate more to high-impact areas? This is a tough choice and I am still struggling with the decision.

Thanks so much for writing this! I feel like I end up trying to express this idea quite frequently and I'm really glad for the resource on it. I’d also love to see talking about our non-altruistic goals and motivations become more normalised within EA, so yes, thanks 🙂

Personally I identify with the approach you're expressing very strongly – I find it hard to understand the thought that I might care for my friends only because it ultimately helps me help the world more; I think of them in different categories. But then I know others who find it very alien that I care a lot about helping the world as much as possible but am also happy making some decisions for completely non-altruistic reasons. Have others come up against this divide as an issue in EA discussions? I feel like at times it is a place where discussions have got stuck.

I’d be interested in knowing too, as others have asked: how do you (and others) tend to approach weighing things to spend your time on against each other when they are part of different goals? I have various strategies that I try, but they usually boil down to using the non-EA goals as constraints – if there is a choice between a morally effective thing and something else, I usually end up doing the EA thing when I get the answer “no” to questions like “will doing it make me sad” or “would I be failing in something I owe to someone else”. I don’t find that very satisfactory – how do others do it?
