No, longtermism is not redundant
I’m not keen on the recent trend of arguments that persuading people of longtermism is unnecessary, or even counterproductive, for encouraging them to work on certain cause areas (e.g., here, here). This is for a few reasons:
It’s not enough to believe that extinction risks within our lifetimes are high, and that extinction would constitute a significant moral problem purely on the grounds of harms to existing beings. You also need an argument that reducing those risks is tractable enough to outweigh the near-term good done by focusing on global human health or animal welfare, and that argument seems lacking in the cases I’ve seen for prioritizing extinction risk reduction on non-longtermist grounds.
Take the AI alignment problem as one example (among the possible extinction risks, I’m most familiar with this one). I think it’s plausible that the collective efforts of alignment researchers and people working on governance will prevent extinction, though I’m not prepared to put a number on this. But as far as I’ve seen, there haven’t been compelling cost-effectiveness estimates suggesting that the marginal dollar or work-hour invested in alignment is competitive with GiveWell charities or interventions against factory farming, from a purely neartermist perspective. (Shulman discusses this in this interview, but without specifics about tractability that I would find persuasive.)
More importantly, not all longtermist cause areas are risks that would befall currently existing beings. MacAskill discusses this a bit here, including the importance of shaping the values of the future rather than (I would say “complacently”) supposing things will converge towards a utopia by default. Near-term extinction risks do seem likely to be the most time-sensitive thing that non-downside-focused longtermists would want to prioritize. But again, tractability makes a difference, and for those who are downside-focused, there simply isn’t this convenient convergence between near- and long-term interventions. As far as I can tell, s-risks affecting beings in the near future fortunately seem highly unlikely.
Why you should consider trying SSRIs
I was initially hesitant to post this, out of some vague fear of stigma and stating the obvious, and not wanting people to pathologize my ethical views based on the fact that I take antidepressants. This is pretty silly for two reasons. First, I think that if my past self had read something like this, he could have been spared years of suffering, and there are probably several readers in his position. EAs are pretty open about mental illness anyway. Second, if anything the fact that I am SFE "despite" currently not being depressed at all (indeed quite consistently happy), thanks to SSRIs, should make readers less likely to attribute my views to a mental illness.[1]
I claim that even if you don't feel so bad as to qualify as capital-D depressed, you might feel noticeably less bad on a daily basis if you try SSRIs.[2] That has been my experience, and I can honestly say this has likely been the cheapest sustainable boost in my well-being I've ever found. Being happier has also probably made me more effective/productive, though this is harder to assess.
(Obviously, my experience is not universal, I'm probably just lucky to an extent, this is not expert medical advice, and you might either find that SSRIs are ineffective for you or that the side effects are less tolerable than I have found them. You should definitely do your own research!)
In the months (at least) prior to SSRIs, my level of depression was "mild" according to the Burns checklist. I felt rather lonely during downtime, and like a bit of a failure for not having an exciting social life. I didn't endorse the latter judgment, and felt pretty fulfilled by my altruistic work, but that dissatisfaction persisted even when I tried to reason myself out of it (or tried taking up new hobbies). This wasn't debilitating by any means—so much so that I didn't really feel like I "deserved" to receive treatment intended for depression, and yes I realize how dumb that sounds in hindsight—but it was something of a pall hanging over my life all the same.
SSRIs just dispelled those feelings.
Waiting so long to give these things a try was a mistake. I made that mistake out of a combination of the aforementioned suspicion that I wasn't depressed enough to need them, and overestimation of how difficult it would be to get a prescription.[3] Just because my suffering wasn't as deep as others', that didn't mean it needed to exist.
This medication isn't magic; my life isn't perfect, and I still have some other dissatisfactions I'm working on. But, for the amount of difference this has made for me, it seemed negligent not to share my experience and hopefully encourage others in an analogous position to show themselves a bit of compassion.
[1] Yes, I have seen people do this before—not to me personally, but to other SFEs.
[2] This probably holds for other antidepressants too. I'm just focusing on SSRIs here because I have experience with them, and they incidentally have a worse reputation than, e.g., Wellbutrin.
[3] At least in the U.S., there are online services where you can share your symptoms with a doctor and just get a prescription at a pretty low price. For some reason, I expected a lot more awkward bureaucracy and mandatory therapy than this. I won't get specific here because I don't want to be a shill, but if you're curious, feel free to PM me.
SlateStarCodex has a long post on SSRIs and their side-effects (from 2014), including sexual side-effects. (Here is a 2016 paper which also reports on sexual side-effects.) I don't have expertise in this topic, however.
In Defense of Aiming for the Minimum
I’m not really sympathetic to the following common sentiment: “EAs should not try to do as much good as feasible at the expense of their own well-being / the good of their close associates.”
It’s tautologically true that if trying to hyper-optimize comes at too much of a cost to the energy you can devote to your most important altruistic work, then trying to hyper-optimize is altruistically counterproductive. I acknowledge that this is the principle behind the sentiment above, and evidently some people’s effectiveness has benefited from advice like this.
But in practice, I see EAs apply this principle in ways that seem suspiciously favorable to their own well-being, or to the status quo. When you find yourself justifying, on impact grounds, the same amount of self-care that people who don’t care about effective altruism afford themselves, you should be extremely suspicious.
Some examples, which I cite not to pick on the authors in particular—since I think many others are making a similar mistake—but just because they actually wrote these claims down.
1. “Aiming for the minimum of self-care is dangerous”
I felt a bit suspicious, looking at how I spent my time. Surely that long road trip wasn’t necessary to avoid misery? Did I really need to spend several weekends in a row building a ridiculous LED laser maze, when my other side project was talking to young synthetic biologists about ethics?
I think this is just correct. If your argument is that EAs shouldn’t be totally self-effacing because some frivolities are psychologically necessary to keep rescuing people from the bottomless pit of suffering, then sure, do the things that are psychologically necessary. I’m skeptical that “psychologically necessary” actually looks similar to the amount of frivolities indulged by the average person who is as well-off as EAs generally are.
Do I live up to this standard? Hardly. That doesn’t mean I should pretend I’m doing the right thing.
Minimization is greedy. You don’t get to celebrate that you’ve gained an hour a day [from sleeping seven instead of eight hours], or done something impactful this week, because that minimizing urge is still looking at all your unclaimed time, and wondering why you aren’t using it better, too.
How important is my own celebration, though, when you really weigh it against what I could be doing with even more time? (This isn’t just abstract impact points; there are other beings whose struggles matter no less than mine do, and fewer frivolities for me could mean relief for them.)
I think where I fundamentally disagree with this post is that, for many people, aiming for the minimum doesn’t actually put you close to the minimum, let alone below it. Getting to the minimum, much less below it, can be very hard, such that people who aim at it just aren’t in much danger of undershooting. If you find this is not true for yourself, then please do back off from the minimum. But remember that in the counterfactual where you hadn’t tested your limits, you probably would not have gotten close to optimal.
This post includes some saddening anecdotes about people ending up miserable because they tried to optimize all their time for altruism. I don’t want to trivialize their suffering. Yet I can conjure anecdotes in the opposite direction (and the kind of altruism I care about reduces more suffering in expectation). Several of my colleagues seem to work more than the typical job entails, and I don’t have any evidence of the quality of their work being the worse for this. I’ve found that the amount of time I can realistically devote to altruistic efforts is pretty malleable. No, I’m not a machine; of course I have my limits. But when I gave myself permission to do altruistic things for parts of weekends, or into later hours of weekdays, well, I could. “My happiness is not the point,” as Julia said in this post, and while she evidently doesn’t endorse that statement, I do. That just seems to be the inevitable consequence of taking the sentience of other beings besides yourself (or your loved ones) seriously.
See also this comment:
Personally, I have been trying to think of my life only as a means to an end. While my life technically might have value, I am fairly sure it is rather minuscule compared to the potential impact I can make. I think it's possible, though probably difficult, to intuit this and still feel fine / not guilty about things. … I'm a bit wary on this topic that people might be a bit biased to select beliefs based on what is satisfying or which ones feel good.
I do think Tessa's point about slack has some force—though in a sense, this merely shifts the “minimum” up by some robustness margin, which is unlikely to be large enough to justify the average person’s indulgences.
2. “You have more than one goal, and that’s fine”
If I donate to my friend’s fundraiser for her sick uncle, I’m pursuing a goal. But it’s the goal of “support my friend and our friendship,” not my goal of “make the world as good as possible.” When I make a decision, it’s better if I’m clear about which goal I’m pursuing. I don’t have to beat myself up about this money not being used for optimizing the world — that was never the point of that donation. That money is coming from my “personal satisfaction” budget, along with money I use for things like getting coffee with friends.
It puzzles me that, as common as concerns about the utility monster—sacrificing the well-being of the many for the super-happiness of one—are, we seem to find it totally intuitive that one can (passively) sacrifice the well-being of the many for one’s own rather mild comforts. (This is confounded by the act vs. omission distinction, but do you really endorse that?)
The latter conclusion is basically the implication of accepting goals other than “make the world as good as possible.” What makes these other goals so special, that they can demand disproportionate attention (“disproportionate” relative to how much actual well-being is at stake)?
3. “Ineffective Altruism”
Due to the writing style, it’s honestly not clear to me what exactly this post was claiming. But the author does emphatically say that devoting all of their time to the activity that helps more people per hour would be “premature optimization.” And they celebrate an example of a less effective thing they do because it consistently makes a few people happy.
I don’t see how the post actually defends doing the less effective thing. To the extent that you impartially care about other sentient beings, and don’t think their experiences matter any less because you have fewer warm fuzzy feelings about them, what is the justification for willingly helping fewer people?
Some reasons not to primarily argue for veganism on health/climate change grounds
I've often heard animal advocates claim that since non-vegans are generally more receptive to arguments from health benefits and reducing climate impact, we should prioritize those arguments, in order to reduce farmed animal suffering most effectively.
On its face, this is pretty reasonable, and I personally don't care intrinsically about how virtuous people's motivations for going vegan are. Suffering is suffering, no matter its sociological cause.
But there are some reasons I'm nervous about this approach, at least if it comes at the opportunity cost of moral advocacy. None of these are original to me, but I want to summarize them here since I think this is a somewhat neglected point:
Plausibly many who are persuaded by the health/CC arguments won't want to make the full change to veganism, so they'll substitute chicken and fish for beef. Chicken and fish are evidently less bad for one's health and for the climate, but because these animals are so small and have fewer welfare protections, this switch causes a lot more suffering per calorie. More speculatively, there could be a switch to insect consumption.
Health/CC arguments don't apply to reducing wild animal suffering, and indeed emphasizing environmental motivations for going vegan might increase support for conservation for its own sake, independent of individual animals' welfare. (To be fair, moral arguments can also backfire if the emphasis is on general care for animals, rather than specifically preventing extreme suffering.)
Relatedly, health/CC arguments don't motivate one to oppose other potential sources of suffering in voiceless sentient beings, like reckless terraforming and panspermia, or unregulated advanced simulations. This isn't to say all anti-speciesists will make that connection, but caring about animals themselves rather than avoiding exploiting them for human-centric reasons seems likely to increase concern for other minds.
While the evidence re: CC seems quite robust, nutrition science is super uncertain and messy. Based on both this prior about the field and suspicious convergence concerns, I'd be surprised if a scientific consensus established veganism as systematically better for one's health than alternatives. That said, I'd also be very surprised about a consensus that it's worse, and clearly even primarily ethics-based arguments for veganism should also clarify that it's feasible to live (very) healthily on a vegan diet.
Quick comment. With respect to your first point, this has always struck me as one of the better points as to why non-ethical arguments should primarily be avoided when it comes to making the case for veganism. However, after reading Tobias Leenaert's 'How to Create a Vegan World: A Pragmatic Approach', I've become a bit more agnostic on this notion. He notes a few studies from The Humane League that show that red-meat reducers/avoiders tend to eat less chicken than your standard omnivore. He also referenced a few studies from Nick Cooney's book, Veganomics, which covers some of this on pp. 107-111. Combined with the overall impact non-ethical vegans could have on supply/demand for other vegan products (and their improvement in quality), I've been a bit less worried about this reason.
I think your other reasons are all extremely important and underrated, though, so I still lean, overall, toward relying on the ethical argument when possible :)
Wow, that's promising news! Thanks for sharing.
A Parfitian Veil of Ignorance
[Edit: I would be very surprised if I were the first person to have proposed this; it probably exists somewhere else, I just don't know of a source.]
Prompted by Holden’s discussion of the veil of ignorance as a utilitarian intuition pump (contra Rawls), I thought about an alternative to the standard veil. My intuitions about tradeoffs of massive harms for a large number of small benefits—at least for some conceptions of "benefit"—diverge from those in his post, when considering this version.
The standard veil of ignorance asks you to imagine being totally ignorant as to which person you will be in a population. (Assume we’re only considering fixed population sizes, so there’s no worry that this exercise sneaks in average utilitarianism, etc.)
But the many EA fans of Parfit (or Buddha) know that this idea of a discrete person is metaphysically problematic. So we can look at another approach, inspired by empty individualism.
Imagine that when evaluating two possible worlds, you don’t know which slice of experience in each world you would be. To make things easy enough to grasp, take a “slice” to be just long enough for a sentient being to register an experience, but not much longer. Let’s say one second.
These worlds might entail probabilities of experiences as well. So, since it’s hard to intuitively grasp probabilities as effectively as frequencies, suppose each world is “re-rolled” enough times that each outcome happens at least once, in proportion to its probability. For example, in Holden’s case of a 1 in 100 million chance of someone dying, the experiences of that person are repeated 100 million times, and one of those experience streams is cut short by death.
So now a purely aggregative and symmetric utilitarian offers me a choice, from behind this veil of ignorance, between two worlds. Option 1 consists of a person who lives for one day with constantly neutral experiences—no happiness, no suffering (including boredom). In option 2, that person instead spends that day relaxing on a nice beach, with a 1 in 100 million chance of ending that day by spiraling into a depression (instead of dying peacefully in their sleep).
I imagine, first, rescaling things so in #1 the person lives 100 million days of neutrality, and in #2, they live 99,999,999 peaceful beach-days—suspend your disbelief and imagine they never get bored—followed by a beach-day that ends in depression. Then I imagine I don’t know which moment of experience in either of these options I’ll be.
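To make that rescaling concrete, here is a minimal sketch in Python. The intensity numbers (BEACH, DEPRESSION) are purely illustrative placeholders of my own; the point is only to contrast straightforward aggregation over experience-moments with a worst-moment comparison, which is one crude way to formalize the suffering-focused reading discussed below.

```python
# Toy comparison of the two rescaled options from behind the "empty individualist" veil.
# BEACH and DEPRESSION are illustrative placeholder intensities, not claims about real magnitudes.

N = 100_000_000            # re-rolls, matching the 1-in-100-million probability
BEACH, DEPRESSION = 1, -1_000

# Option 1: every experience-moment is neutral.
total_1, worst_1 = 0, 0

# Option 2: N - 1 pleasant beach-days plus one day that ends in depression.
total_2 = (N - 1) * BEACH + DEPRESSION
worst_2 = min(BEACH, DEPRESSION)

print(total_1, total_2)    # plain aggregation: 0 vs 99,998,999 -> favors option 2
print(worst_1, worst_2)    # worst-moment (lexical-style) comparison: 0 vs -1,000 -> favors option 1
```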
Choosing #1 seems pretty defensible to me, from this perspective. Several of those experience-moments in #2 are going to consist purely of misery. They won’t be comforted by the fact that they’re rare, or that they’re in the context of a “person” who otherwise is quite happy. They’ll just suffer.
I’m not saying the probabilities don’t matter. Of course they do; I’d rather take #2 than a third option where there’s a 1 in 100 thousand chance of depression. I’m also pretty uncertain where I stand when we modify #1 so that the person’s life is a constant mild itch instead of neutrality. The intuition this thought experiment prompts in me is the lexical badness of at least sufficiently intense suffering, compared with happiness or other goods. And I think the reason it prompts such an intuition is that in this version of the veil of ignorance, discrete “persons” don’t get to dictate what package of experiences is worth it, i.e., what happens to the multitude of experience-moments in their life. Instead, one has to take the experience-moments themselves as sovereign, and decide how to handle conflicts among their preferences. (I discuss this more here.)
I find the framing of "experience slices" definitely pushes my intuitions in the same direction.
One question I like to think about is whether I'd choose to gain either (a) a neutral experience, or (b) a coin flip where I relive all the positive experience slices of my life if heads, and relive all the negative ones if tails.
My life feels highly net positive, but I'd almost certainly not take option (b). I'd guess there's likely a risk-aversion intuition being snuck in here too, though.
The Repugnant Conclusion is worse than I thought
At the risk of belaboring the obvious to anyone who has considered this point before: The Repugnant Conclusion (RC) glosses over the exact content of the happiness and suffering that are summed up into the quantities of “welfare” defining world A and world Z. In world A, each life with welfare 1,000,000 could, on one extreme, consist purely of (a) good experiences that sum in intensity to a level of 1,000,000, or on the other, (b) good experiences summing to 1,000,000,000 minus bad experiences summing (in absolute value) to 999,000,000. Similarly, each of the lives of welfare 1 in world Z could be (a) purely level 1 good experiences, or (b) level 1,000,001 good experiences minus level 1,000,000 bad experiences.
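As a minimal sketch of the decompositions above (the variable names and the population sizes at the end are mine, purely for illustration), note that total welfare is blind to how the same number splits into happiness and suffering:

```python
# Each life's welfare is happiness minus suffering; the RC only "sees" the difference.
# The (a)/(b) decompositions below match the examples in the text.

world_A_life_a = dict(happiness=1_000_000, suffering=0)                # pure happiness
world_A_life_b = dict(happiness=1_000_000_000, suffering=999_000_000)

world_Z_life_a = dict(happiness=1, suffering=0)                        # barely positive, no pain
world_Z_life_b = dict(happiness=1_000_001, suffering=1_000_000)        # bliss alongside agony

def welfare(life):
    return life["happiness"] - life["suffering"]

assert welfare(world_A_life_a) == welfare(world_A_life_b) == 1_000_000
assert welfare(world_Z_life_a) == welfare(world_Z_life_b) == 1

# Totalism then ranks world Z above world A whenever Z is populous enough,
# regardless of which decomposition realizes those welfare numbers.
n_A, n_Z = 1_000, 2_000_000_000    # hypothetical population sizes; any n_Z > 1,000,000 * n_A works
assert n_Z * welfare(world_Z_life_b) > n_A * welfare(world_A_life_a)
```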
To my intuitions, it’s pretty easy to accept the RC if our conception of worlds A and Z is the pair (a, a) from the (of course non-exhaustive) possibilities above, even more so for (b, a). However, the RC is extremely unpalatable if we consider the pair (a, b). This conclusion, which is entailed by any plausible non-negative[1] total utilitarian view, is that a world of tremendous happiness with absolutely no suffering is worse than a world of many beings each experiencing just slightly more happiness than those in the first, but along with tremendous agony.
To drive home how counterintuitive that is, we can apply the same reasoning often applied against negative utilitarian (NU) views: Suppose the level 1,000,001 happiness in each being in world Z is compressed into one millisecond of some super-bliss, contained within a life of otherwise unremitting misery. There doesn’t appear to be any temporal ordering of the experiences of each life in world Z such that this conclusion isn’t morally absurd to me. (Going out with a bang sounds nice, but not nice enough to make the preceding pure misery worth it; remember, this is a millisecond!) This is even accounting for the possible scope neglect involved in considering the massive number of lives in world Z. Indeed, multiplying these lives seems to make the picture more horrifying, not less.
Again, at the risk of sounding obvious: The repugnance of the RC here is that on total non-NU axiologies, we’d be forced to consider the kind of life I just sketched a “net-positive” life morally speaking.[2] Worse, we're forced to consider an astronomical number of such lives better than a (comparatively small) pure utopia.
[1] “Negative” here includes lexical and lexical threshold views.
[2] I’m setting aside possible defenses based on the axiological importance of duration. This is because (1) I’m quite uncertain about that point, though I share the intuition, and (2) it seems any such defense rescues NU just as well. I.e. one can, under this principle, maintain that 1 hour of torture-level suffering is impossible to morally outweigh, but 1 millisecond isn’t.
This conclusion, which is entailed by any plausible non-negative[1] total utilitarian view, is that a world of tremendous happiness with absolutely no suffering is worse than a world of many beings each experiencing just slightly more happiness than those in the first, but along with tremendous agony.
It seems to me that you're kind of rigging this thought experiment when you define an amount of happiness that's greater than an amount of suffering, but you describe the happiness as "slight" and the suffering as "tremendous", even though the former is larger than the latter.
I don't call the happiness itself "slight"; I call it "slightly more" than the suffering (edit: and also just slightly more than the happiness per person in world A). I acknowledge the happiness is tremendous. But it comes along with just barely less tremendous suffering. If that's not morally compelling to you, fine, but really the point is that there appears (to me at least) to be quite a strong moral distinction between 1,000,001 happiness minus 1,000,000 suffering, and 1 happiness.
Some things I liked about What We Owe the Future, despite my disagreements with the treatment of value asymmetries:
The thought experiment of imagining that you live one big super-life composed of all sentient beings’ experiences is cool, as a way of probing moral intuitions. (I'd say this kind of thought experiment is the core of ethics.)
It seems better than e.g. Rawls’ veil of ignorance because living all lives (1) makes it more salient that the possibly rare extreme experiences of some lives still exist even if you're (un)lucky enough not to go through them, and (2) avoids favoring average-utilitarian intuitions.
Although the devil is very much in the details of what measure of (dis)value the total view totals up, the critiques of average, critical level, and symmetric person-affecting views are spot-on.
There's some good discussion of avoiding lock-in of bad (/not-reflected-upon) values as a priority that most longtermists can get behind.
I was already inclined to think dominant values can be very contingent on factors that don't seem ethically relevant, like differences in reproduction rates (biological or otherwise) or flukes of power imbalances. So I didn't update much from reading about this. But I have the impression that many longtermists are a bit too complacent about future people converging to the values we'd endorse with proper reflection (strangely, even when they're less sympathetic to moral realism than I am). And the vignettes about e.g. Benjamin Lay were pretty inspiring.
Relatedly, it's great that premature space settlement is acknowledged as a source of lock-in / reduction of option value. Lots of discourse on longtermism seems to gloss over this.
Linkpost: "Tranquilism Respects Individual Desires"
I wrote a defense of an axiology on which an experience is perfectly good to the extent that it is free of craving for change. This defense follows in part from a reductionist view of personal identity, which is usually considered in EA circles to support total symmetric utilitarianism, but I argue that this view lends support to a form of negative utilitarianism.
Some vaguely clustered opinions on metaethics/metanormativity
I'm finding myself slightly more sympathetic to moral antirealism lately, but still afford most of my credence to a form of realism that would not be labeled "strong" or "robust." There are several complicated propositions I find plausible that are in tension:
1. I have a strong aversion to arbitrary or ad hoc elements in ethics. Practically this cashes out as things like: (1) rejecting any solutions to population ethics that violate transitivity, and (2) being fairly unpersuaded by solutions to fanaticism that round down small probabilities or cap the utility function (see the toy sketch after this list for what those two moves amount to).
2. Despite this, I do not intrinsically care about the simplicity of a moral theory, at least for some conceptions of "simplicity." It's quite common in EA and rationalist circles to dismiss simple or monistic moral theories as attempting to shoehorn the complexity of human values into one box. I grant that I might unintentionally be doing this when I respond to critiques of the moral theory that makes most sense to me, which is "simple." But from the inside I don't introspect that this is what's going on. I would be perfectly happy to add some complexity to my theory to avoid underfitting the moral data, provided this isn't so contrived as to constitute overfitting. The closest cases I can think of where I might need to do this are in population ethics and fanaticism. I simply don't see what could matter morally in the kinds of things whose intrinsic value I reject: rules, virtues, happiness, desert, ... When I think of these things, and the thought experiments meant to pump one's intuitions in their favor, I do feel their emotional force. It's simply that I am more inclined to think of them as just that: emotional, or game-theoretically useful constructs that break down when you eliminate bad consequences for conscious experience. The fact that I may "care" about them doesn't mean I endorse them as relevant to making the world a better place.
3. Changing my mind on moral matters doesn't feel like "figuring out my values." I roughly know what I value. Many things I value, like a disproportionate degree of comfort for myself, are things I very much wish I didn't value, things I don't think I should value. A common response I've received is something like: "The values you don't think you 'should' have are simply ones that contradict stronger values you hold. You have meta-preferences/meta-values." Sure, but I don't think this has always been the case. Before I learned about EA, I don't think it would have been accurate to say I really did "value" impartial maximization of good across sentient beings. This was a value I had to adopt, to bring my motivations in line with my reasons. Encountering EA materials did not feel at all like "Oh, you know what, deep down this was always what I would've wanted to optimize for, I just didn't know I would've wanted it."
4. The question "what would you do if you discovered the moral truth was to do [obviously bad thing]?" doesn't make sense to me, for certain inputs of [obviously bad thing], e.g. torturing all sentient beings as much as possible. For extreme inputs of that sort, the question is similar to "what would you do if you discovered 2+2=5?" For less extreme inputs, such that it's plausible to me I simply have not thought through ethics enough that I could imagine that hypothetical but merely find it unlikely right now, the question does make sense, and I see nothing wrong with saying "yes." I suspect many antirealists do this all the time, radically changing their minds on moral questions due to considerations other than empirical discoveries, and they would not be content saying "screw the moral truth" by retaining their previous stance.
5. I do not expect that artificial superintelligence would converge on The Moral Truth by default. Even if it did, the convergence might be too slow to prevent catastrophes. But I also doubt humans will converge on this either. Both humans and AIs are limited by our access only to our "own" qualia, and indeed our own present qualia. The kind of "moral realism" I find plausible with respect to this convergence question is that convergence to moral truth could occur for a perfectly rational and fully informed agent, with unlimited computation and - most importantly - subjective access to the hypothetical future experiences of all sentient beings. These conditions are so idealized that I am probably as pessimistic about AI as any antirealist, but I'm not sure yet if they're so idealized that I functionally am an antirealist in this sense.
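As promised in item 1, here is a toy sketch of what the two anti-fanaticism moves mentioned there amount to: capping the utility function, and rounding small probabilities down to zero. All payoffs and probabilities are illustrative placeholders of my own, not figures from any source.

```python
# Toy illustration of two "anti-fanaticism" moves from item 1.
# All payoffs and probabilities are illustrative placeholders.

def expected_value(prob, payoff, cap=None, min_prob=0.0):
    """Expected value, with an optional utility cap and small-probability rounding."""
    if prob < min_prob:      # round tiny probabilities down to zero
        prob = 0.0
    if cap is not None:      # bounded utility function
        payoff = min(payoff, cap)
    return prob * payoff

safe_bet = expected_value(1.0, 1_000)        # a certain, modest payoff
fanatical_bet = expected_value(1e-10, 1e15)  # a tiny chance of an astronomical payoff

print(fanatical_bet > safe_bet)                               # True: plain EV takes the gamble
print(expected_value(1e-10, 1e15, cap=1e6) > safe_bet)        # False: capping blocks it
print(expected_value(1e-10, 1e15, min_prob=1e-6) > safe_bet)  # False: rounding blocks it
```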