@Elizabeth and I recently recorded a conversation of ours that we're hoping becomes a whole podcast series. The original premise is that we were trying to convince each other about whether we should both be EAs or both not be EAs. (She quit the movement earlier this year when she felt that her cries of alarm kept falling on deaf ears; I never left.)

Audio recording (35 min)

Transcript

Some highlights:

If you like the podcast or want to continue the conversation, tell us about it in the comments (or on LW if you want to make sure Elizabeth sees it), and consider donating toward future episodes.

Comments

Thanks for the interesting conversation! Some scattered questions/observations:

  • Your conversation reminds me of the debate about whether EA should be cause-first or member-first.
    • My self-identification as EA is cause-first: So long as the EA community puts resources broadly into causes which maximize the impartial good, I'd call myself EA.
    • Elizabeth's self-identification seems to me to be member-first, given that it seems based more on community members acting with integrity towards each other than on whether EA is maximizing the impartial good.
    • This might explain the difference between my and Elizabeth's attitudes about the importance of some EAs claiming, without being corrected, that veganism doesn't entail tradeoffs. I think being honest about health tradeoffs is important, but I'm far more concerned with shutting up and multiplying by shipping resources towards the best interventions. However, putting on a member-first hat, I can understand why, from Elizabeth's perspective, this is so important. Do you think this is a fair characterization?
  • I'd love to understand more about the way Elizabeth reasons about the importance of raising awareness of veganism's health tradeoffs relative to vegan advocacy:
    • If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism's health tradeoffs. Of course everyone should be transparent about health tradeoffs. However, if Elizabeth is being scope-sensitive about the dominance of farmed animal effects, I struggle to understand why so much attention is being placed on veganism's health tradeoffs relative to vegan advocacy.
    • By analogy, this feels like sounding an alarm because EA's kidney donation advocates haven't sufficiently acknowledged its potential adverse health effects. Of course everyone should acknowledge that. But when also considering the person being helped, isn't kidney donation clearly the moral imperative?

If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism's health tradeoffs. 

I doubt that Elizabeth -- or a meaningful number of her potential readers -- is considering whether to be associated with anti-vegan advocates on Facebook or any movement related to them. I read the discussion as mainly about epistemics and integrity (these words collectively appear ~30 times in the transcript) rather than object-level harms.

  • I think it's generally appropriate to be more concerned about policing epistemics and integrity in your own social movement than in others. This is in part about tractability -- do we have any reason to think any anti-vegan activist movement on Facebook cares about its epistemics? If they do, do any of us have a solid reason to believe we would be effective in improving those epistemics?
  • It's OK to not want to affiliate with a movement whose epistemics and integrity you judge to be inadequate. The fact that there are other movements with worse epistemics and integrity out there isn't particularly relevant to that judgment call.
  • It's unclear whether anti-vegan activists on Facebook are even part of a broader epistemic community. EAs are, so an erosion of EA epistemic norms and integrity is reasonably likely to cause broader problems.
    • In particular, the stuff Elizabeth is concerned about gives off the aroma of ends-justify-the-means thinking to me at points. False or misleading presentations, especially ones that pose a risk of meaningful harm to the listener, are not an appropriate means of promoting dietary change. [1] Moreover, ends-justify-the-means rationalization is a particular risk for EAs, as we painfully found out ~2 years ago.
  1. ^

    I recognize there may be object-level disagreement here as to whether a given presentation is false, misleading, or poses a risk of meaningful harm.

Yes, I would even say that the original comment (which I intend to reply to next) seems to suffer from ends-justify-the-means logic as well (e.g. prioritizing "shutting up and multiplying" by "shipping resources to the best interventions" over "being honest about health effects").

I like the distinction of cause-first vs member-first; thanks for that concept. Thinking about that in this context, I'm inspired to suggest a different cleavage that works better for my worldview on EA: Alignment/Integrity-first vs. Power/Impact-first.

I believe that for basically all institutions in the 21st century, alignment should be the highest priority, and power should only become the top priority to the extent that the institution believes that alignment at that power level has been solved.

Under this split, it seems clear that Elizabeth's reported actions prioritize alignment over impact.

Would you sometimes advocate for prioritizing impact (e.g. SUM shipping resources towards interventions) over alignment within the EA community?

I believe that until we learn how to prioritize Alignment over Impact, we aren't ready for as much power as we had at SBF's height.

Thanks for this; I agree that "integrity vs impact" is a more precise cleavage point for this conversation than "cause-first vs member-first".

Would you sometimes advocate for prioritizing impact (e.g. SUM shipping resources towards interventions) over alignment within the EA community?

Unhelpfully, I'd say it depends on the tradeoff's details. I certainly wouldn't advocate going all-in on one to the exclusion of the other. But to give one example of the way I think, I'd currently prefer the marginal $1M be given to EA Funds' Animal Welfare Fund rather than used to establish a foundation to investigate and recommend improvements to EA's epistemics.

It seems to me that I think the EA community has a lot more "alignment/integrity" than you do. This could arise from empirical disagreements, different definitions of "alignment/integrity", and/or different expectations we place on the community.

For example, the evidence Elizabeth presented of a lack of alignment/integrity in EA is that some veganism advocates on Facebook incorrectly claimed that veganism doesn't have tradeoffs, and weren't corrected by other community members. While I'd prefer people say true things to false things, especially when they affect people's health, this just doesn't feel important enough to update upon. (I've also just personally never heard any vegan advocate say anything like this, so it feels like an isolated case.)

One thing that could change my mind is learning about many more cases to the point that it's clear that there are deep systemic issues with the community's epistemics. If there's a lot more evidence on this which I haven't seen, I'd love to hear about it!

I might say kidney donation is a moral imperative (or good) if we consider only the effects on your welfare and the effects on the welfare of the beneficiaries. But when you consider indirect effects, things are less clear. There are effects on other people, nonhuman animals (farmed and wild), your productivity and time (which affects your EA work or income and donations), your motivation and your values. For an EA, productivity and time, motivation and values seem most important.

EDIT: And the same goes for veganism.

What do you mean by moral imperative?

I notice that I "believe in" minimum moral standards (like a code of conduct or laws) but not what I call moral imperatives (in X situation, I have no choice if I want to remain in good moral standing).

I also don't believe in requiring organ donation as part of a minimum moral standard, which is probably related to my objection to the concept of "moral imperative".

Thank you for sharing this Timothy. I left a long comment on the LW version of the post. I'm happy to talk about this more with you or Elizabeth — if you're interested, you're welcome to reach out to me directly.

Comment cross-posted on LessWrong

I've begun listening to this podcast episode. Only a few minutes in, I feel a need to clarify a point of contention over some of what Elizabeth said:

Yeah. I do want to say part of that is because I was a naive idiot and there's things I should never have taken at face value. But also I think if people are making excuses for a movement that I shouldn't have been that naive... that's pretty bad for the movement.

She also mentioned that she considers herself to have caused harm by propagating EA. It seems like she might be being too hard on herself. While she might consider being that hard on herself appropriate, the problem could be what her conviction implies. There are clearly still some individual, long-time effective altruists she respects, like Tim, even if she's done engaging with the EA community as a whole. If that weren't true, I doubt this podcast would've been launched in the first place. Having been so heavily involved in the EA community for so long, and still being so involved in the rationality community, she may know hundreds of people, friends, who either still are effective altruists or used to be. Regarding the sort of harm caused by EA propagating itself as a movement, she provides the following as a main example.

The fact that EA recruits so heavily and dogmatically among college students really bothers me.

Hearing that made me think about a criticism of the organization of EA groups for university students made last year by Dave Banerjee, former president of the student EA club at Columbia University. His was one of the most upvoted criticisms of such groups, and how they're managed, ever posted to the EA Forum. While Dave apparently reached what are presumably some of the same conclusions as Elizabeth about the problems with evangelical university EA groups, he did so with a much quicker turnaround than she did. He made that major update while still a university student, while it took her several years. I don't mention that to imply that she was necessarily more naive and/or idiotic than he was. From another angle, given that he was propagating a much bigger EA club than Elizabeth ever did, at a time when EA was being driven to grow much faster than when Elizabeth might've been more involved with EA movement/community building, Dave could easily have been responsible for causing more harm. Therefore, perhaps he was an even more naive idiot than she ever was.

I've known other university students, formerly effective altruists helping build student EA clubs, who quit because they also felt betrayed by EA as a community. Given that EA won't be changing overnight, in spite of whoever considers it imperative that some of its movement-building activities stop, there will be teenagers in the coming months who come through EA with a similar experience. They're teenagers who may be chewed up and spit out, left feeling ashamed of their complicity in causing harm through propagating EA as well. They may not have even graduated high school yet, and within a year or two, they may also become those effective altruists, then former effective altruists, whom Elizabeth anticipates she would call naive idiots. Yet those are the very young people Elizabeth would seek to protect from harm by dissuading them from joining EA in the first place. It's not evident that there's any discrete point at which they cease being those who should heed her warning and instead become naive idiots to chastise.

Elizabeth also mentions how she became introduced to EA in the first place.

I'd read Scott Alexander's blog for a long time, so I vaguely knew the term effective altruist. Then I met one of the two co-founders of Seattle EA on OkCupid and he invited me to the in-person meetings that were just getting started, and I got very invested.

About a year ago, Scott Alexander wrote a post entitled In Continued Defense of Effective Altruism. While I'm aware he made some later posts responding to criticisms of it, I'm guessing he hasn't abandoned its thesis in its entirety. Meanwhile, as the author of one of the most popular blogs, if not the most popular blog, associated with either the rationality or EA communities, Scott Alexander may still be drawing more people into the EA community than almost any other writer. If that means he may be causing more harm by propagating EA than almost any other rationalist still supportive of EA, then, at least in that particular way Elizabeth has in mind, Scott may right now be one of the most naive idiots in the rationality community. The same may be true of so many effective altruists Elizabeth got to know in Seattle.

A popular refrain among rationalists is: speak truth, even if your voice trembles. Never mind on the internet; Elizabeth could literally go meet hundreds of effective altruists or rationalists she has known in the Bay Area and Seattle, and tell them that for years they, too, were naive idiots, or that they're still being naive idiots. Doing so could be how Elizabeth prevents them from causing harm. In not being willing to say so, she may counterfactually be causing so much more harm, by saying and doing so much less to stop EA from propagating than she knows she can.

Whether it be Scott Alexander, or so many of her friends who have been or still are in EA, or those who've helped propagate university student groups like Dave Banerjee, or those young adults who will come and go through EA university groups by 2026, there are hundreds of people Elizabeth should be willing to call, to their faces, naive idiots. It's not a matter of whether she, or anyone, expects that to work as a convincing argument. That's the sort of perhaps cynical and dishonest calculation she, and others, rightly criticize in EA. She should tell all of them that, if she believes it, even if her voice trembles. If she doesn't believe it, that merits an explanation of how she considers herself to have been a naive idiot, but so many of them not to have been. If she can't convincingly justify, not just to herself but to others, why she was exceptional in her naive idiocy, then perhaps she should reconsider her belief that even she was a naive idiot.

In my opinion, she and so many other former effective altruists were not just naive idiots. Whatever mistakes they made, epistemically or practically, I doubt the explanation is that simple. The operationalization of "naive idiocy" here doesn't seem like a decently measurable function of, say, how long it took someone to realize just how much harm they were causing by propagating EA, and how much harm they did cause in that period. "Naive idiocy" doesn't seem to be a coherent enough explanation for why so many effective altruists got so much so wrong, for so long.

I suspect there's a deeper crux of disagreement here, one that hasn't been pinpointed yet by Elizabeth or Tim. It's one I might be able to discern if I put in the effort, though I don't yet have a sense of what it might be. I could put in that effort, given that I still consider myself an effective altruist, though I too ceased to be an EA group organizer last year, on account of not being confident in helping grow the EA movement further, even if I've continued participating in it for what I consider its redeeming qualities.

If someone doesn't want to keep trying to change EA for the better, and instead opts to criticize it to steer others away from it, it may not be true that they were just naive idiots before. If they can't substantiate their former naive idiocy, then to refer to themselves as having only been naive idiots, and by extension imply that so many others they've known still are or were naive idiots too, is neither true nor useful. In that case, if Elizabeth still considers herself to have been a naive idiot, that isn't helpful, and maybe it is also a matter of her, truly, being too hard on herself. If you're someone who has felt similarly, but you couldn't bring yourself to call so many friends you made in EA a bunch of naive idiots to their faces because you'd consider that false or too hard on them, maybe you're being too hard on yourself too. Whatever you want to see happen with EA, being too hard on ourselves like that isn't helpful to anyone.

This comment, which I cross-posted to LessWrong, quickly accrued negative karma there. The comment as I originally wrote it is easy to misunderstand, so I understand the confusion. I'll explain here what I explained in an edit to my comment on LW, so as to avoid the confusion here on the EA Forum that I incurred there.

I wrote this comment off the cuff, so I didn't put as much effort into writing it as clearly or succinctly as I could, or maybe should, have. So, I understand how it might read: as a long, meandering nitpick of a few statements near the beginning of the podcast episode, without my having listened to the whole episode yet. Then, I call a bunch of ex-EAs naive idiots, just as Elizabeth referred to herself as having formerly been; I say even future effective altruists will prove to be naive idiots; and I suggest that those still propagating EA after so long, like Scott Alexander, might be the most naive and idiotic of all. To be clear, I also included myself, so this reading would also imply that I'm calling myself a naive idiot.

That's not what I meant to say. I would downvote that comment too. I'm saying that:

  1. If it's true what Elizabeth is saying about her being a naive idiot, then it would seem to follow that a lot of current, and former, effective altruists, including many rationalists, would also be naive idiots for similar reasons.
  2. If that were the case, then it'd be consistent with greater truth-seeking, and with criticizing others for not putting enough effort into truth-seeking with integrity regarding EA, to point out to those hundreds of other people that they either were, at one point, or maybe still are, naive idiots.
  3. If Elizabeth or whoever wouldn't do that, not only because they consider it mean, but moreover because they wouldn't think it true, then they should apply the same standards to themselves, and reconsider that they were not, in fact, just naive idiots.
  4. I'm disputing the "naive idiocy" hypothesis here as spurious, as it comes down to the question of whether someone like Tim--and, by extension, someone like me in the same position, who has also mulled over quitting EA--is still being a naive idiot, on account of not having updated yet to the conclusion Elizabeth has already reached.
  5. That's important because it'd seem to be one of the major cruxes of whether someone like Tim, or me, would update and choose to quit EA entirely, which is the point of this dialogue. If that's not a true crux of disagreement here, speculating about whether hundreds of current and former effective altruists have been naive idiots is a waste of time.