
lilly · 2752 karma


Yeah, to be clear, I think inappropriate interpersonal behavior can absolutely warrant banning people from attending events, and this whole situation has given me more respect for how CEA strikes this balance with respect to EAGs.

I was mainly responding to the point that "we might come up with ideas that let each side get more of what they want at a smaller cost to what the other side wants," by suggesting that, at a minimum, the organizers could've done things that would've involved ~no costs. 

> I apologize if I did not characterize the fears correctly

I think you didn't. My fear isn't, first and foremost, about some theoretical future backsliding, creating safe spaces, or protecting reputations (although given the TESCREAL discourse, I think these are issues). My fear is:

  1. Multiple people at Manifest witnessed and/or had racist encounters.
  2. Racism has been, and continues to be, very insidious and very harmful.
  3. EA is meant to be a force for good in the world; even more than that, EA aims to benefit others as much as possible.
  4. So the bar for EA needs to be a lot higher than "only some of our 'special guests' say racist stuff on a regular basis" and "not everyone experienced racism at our event."

I am bolstered by the fact that Manifest is not Rationalism and Rationalism is not EA. But I am frustrated that articulating the above position is seen as even remotely in the realm of "pushing society in a direction that leads to things like... the thought police from 1984." This strikes me as uncharitable pearl-clutching, given that organizers have an easy, non-speech-infringing way of reducing the likelihood that their events elicit and incite racism: not listing Hanania, who wasn't even a speaker, as a special guest on their website, while still allowing him to attend if he so chooses.

One feature I think it'd be nice for the Forum to have is a thing that shows you the correlation between your agree votes and karma votes. I don't think there is some objectively correct correlation between these two things, but it seems likely that it should be between, say, .2 and .6 (probably depending on the kind of comments you tend to read/vote on), and it might be nice for users to be able to know and track this. 

Making this visible to individual users (and, potentially, to anyone who clicks on their profile) would provide at least a weak incentive to avoid reflexively downvoting comments that one disagrees with, something that happens a lot, and that I also find myself doing more than I'd like.

The fact that “racists” is in quotes in the title of this post (“Why so many “racists” at Manifest?”) when there have been multiple, first-hand accounts of people experiencing/overhearing racist exchanges strikes me as wrongly dismissive, since I can only interpret the quotation marks as implying that there weren’t very many racists. (Perhaps relevantly, I have never overheard this kind of exchange at any conference I have ever attended, so the fact that multiple people are reporting these exchanges makes Manifest a big outlier in this regard, in my view.)

Nothing in the post seems to refute that the reported exchanges occurred among attendees, just that the organizers didn’t go out of their way to invite controversial/racist speakers or incite these exchanges. In other words, I think everything in the post is compatible with there having been “so many” racists at Manifest, but the quotation marks in the title seem to imply otherwise.

This isn’t so much a stylistic critique as it is a substantive one, since I think the title implies that not a lot of racist stuff went down, which feels importantly different from acknowledging that it did, but, say, disputing that the organizers caused this or suggesting that Hanania’s presence justified it.

I don't agree with @Barry Cotter's comment, nor do I think it's an accurate interpretation of my comment (but I didn't downvote it). 

I think EA is both a truth-seeking project and a good-doing project. These goals could theoretically be in tension, and I can envision hard cases where EAs would have to choose between them. Importantly, I don't think that's going on here, for much the same reasons as were articulated by @Ben Millwood in his thoughtful comment. In general, I don't think the rationalists have a monopoly on truth-seeking, nor do I think their recent practices are conducive to it.

More speculatively, my sense is that epistemic norms within EA may—at least in some ways—now be better than those within rationalism for the following reason: I worry that some rationalists have been so alienated by wokeness (which many see as anathema to the project of truth-seeking) that they have leaned pretty hard into being controversial/edgy, as evidenced by them, e.g., platforming speakers who endorse scientific racism. Doing this has major epistemic downsides—for instance, a much broader swath of the population isn't going to bother engaging with you if you do this—and I have seen limited evidence that rationalists take these downsides sufficiently seriously.


I think it would be phenomenally shortsighted for EA to prioritize its relationship with rationalists over its relationship with EA-sympathetic folks who are put off by scientific racists, given that the latter include many of the policymakers, academics, and professional people most capable of actualizing EA ideas. Most of these people aren't going to risk working/being associated with EA if EA is broadly seen as racist. Figuring out how to create a healthy (and publicly recognized) distance between EAs and rationalists seems much easier said than done, though.

> Think about how precious the life is of a young child—concretely picture a small child coughing up blood and lying in bed with a fever of 105. We—the effective altruists—are the ones doing something about that.

The vast majority of people trying to keep kids from dying of malaria are not effective altruists.


Somewhat unrelated, but since people are discussing whether this example is cherry-picked vs. reflective of a systemic problem with infrastructure-related grants, I'm curious about the outcome of another, much larger grant:

Has there been any word on what happened to the Harvard Square EA coworking space that OP committed $8.9 million to and that was projected to open in the first half of 2023?

I really enjoyed this series; thanks for writing it!

One piece of stylistic feedback on Anti-Philanthropic Misdirection: I think the piece's hostile tone—e.g., "Wenar is here promoting a general approach to practical reasoning that is very obviously biased, stupid, and harmful: a plain force for evil in the world"—will make your piece less persuasive to non-EA readers for two reasons. First, I suspect all the italics and adjectives will trigger readers' bias radars, making people who aren't already sympathetic to EA approach the piece more critically/less openmindedly than they would have otherwise (e.g., if you had written: "Wenar promotes a general approach to practical reasoning that is both incorrect and harmful"). Second, it reads as hypocritical, since in the piece you criticize "the hostile, dismissive tone of many critics." (And unless readers have read Wenar's piece pretty closely and are pretty familiar with EA, they're not going to be well-positioned to assess whose hostility and dismissiveness are justified.) So, while I understand the frustration, and think the tone is in some sense warranted, I suspect the piece would be more effective at morally redirecting people if it read as more neutral/measured. The arguments speak for themselves. 

I think it's a nice op-ed; I also appreciate the communication strategy here—anticipating that SBF's sentencing will reignite discourse around SBF's ties to EA, and trying to elevate the discourse around that (in particular by highlighting the reforms EA has undertaken over the past 1.5 years). 
