I work for CEA, but these are my personal views.
Relevant background: I previously co-founded two EA groups, at Yale University and the healthcare corporation Epic. In one case, I had to make a decision about how to handle a potential guest speaker who was also a controversial figure; this is part of why I sympathize with EA Munich’s position, though a small part.
Epistemic status: A lot of pent-up venting, which I hope adds up to something moderately reasonable. But I wouldn’t be too surprised if it doesn’t.
Many things can be true at the same time.
A planned EA Munich event with Robin Hanson was recently cancelled. This is EA Munich’s explanation. This is a Twitter thread with lots of reactions.
For context, I’ll start with a factual clarification, based on conversations with others at CEA (all of this is also detailed in the Munich group’s document):
- When the Munich organizers got in touch with CEA, they were already considering whether to cancel the event.
- CEA staff told the organizers that they didn’t see a clear-cut “right decision,” and that it could be reasonable to cancel or not cancel the event. Most of CEA’s engagement with the Munich group on this matter involved thinking through ways to handle conflict that could arise from the event, rather than ways to cancel it.
- The organizers then held a vote among themselves and decided to cancel.
Here are some things about the situation which seem true to me (though this doesn’t necessarily make them true):
On the decision and ensuing social media kerfuffle
- It is generally good for groups interested in finding good ideas to choose speakers on the basis of the quality of their best ideas, rather than their most controversial or misguided ideas.
- However, if most members of a small group don’t want a speaker to present to their group, this is a good reason for that speaker not to present. The smaller the group, the more true this seems. (If a speaker is disinvited from an event at a large university, thousands of supporters might be left disappointed; this isn’t the case for a tiny event run by a local EA group.)
- The Slate piece cited as criticism of Hanson was uncharitable; reading it would probably leave most people with a different view of Hanson than they’d get from reading a wider selection of Hanson’s work.
- And yet, many people are actually uncomfortable with Hanson for some of the same reasons brought up in the Slate piece; they find his remarks personally upsetting or unsettling.
- It’s unclear how many members/organizers of the Munich group were personally upset/unsettled by Hanson and how many were mostly concerned with the PR implications of his presence, but it seems likely that both groups were represented.
- Those who commented on the announcement were generally quite uncharitable to EA Munich — including people I’m certain would endorse the Principle of Charity in the abstract if I were to ask them about it independent of this context. Reading Hanson’s tweets likely left them with a very different view of EA Munich than they’d get from attending a few meetups.
- I wasn’t involved in CEA’s discussion with EA Munich, but CEA giving them the go-ahead to make their own decision seems correct.
- I don’t think Hanson’s supporters would actually have wanted CEA to say: “You should run the event even if it feels like the wrong decision.”
- Maybe they would have wanted CEA to say: “You should do what seems best, but keep in mind the negative consequences of deplatforming speakers.” But EA Munich was clearly aware of the negative consequences. What could CEA tell them that they didn’t know already, aside from “we trust you to make a decision”?
- There are ways in which EA Munich could have adjusted their announcement to better communicate their reasoning.
- There are many ways in which the EA Munich announcement is much, much better than other announcements of its type produced by institutions with far more power, prestige, and PR experience.
- Writing an announcement that has to be approved by eight people (all volunteers), involves a sensitive topic, and has to be published quickly… is something I wouldn’t wish on anyone. Be kind.
On Robin Hanson
- Based on my reading of some of Hanson’s work, I believe he cares a lot about the world being a better place and people living better lives, whoever they are. He is the respected colleague of several of my favorite bloggers. I’d probably find him an interesting person to eat lunch with.
- Much of Hanson’s writing (as EA Munich pointed out themselves!) is interesting and valuable. And some writing that doesn’t seem interesting or valuable to me is clearly interesting or valuable to other people, which probably means that I’m underestimating the total value of his output.
- Some of Hanson’s writing has probably been, on net, detrimental to his own influence. Had he chosen not to publish that writing (or altered it, gotten more feedback before publishing, etc.), his best and most important ideas would have a better chance of improving the world. Instead, much of the attention he gets involves ideas which I doubt he even cares about very much (though I don’t know Hanson, and this is just a guess).
- But as I said, many things can be true at the same time. There is something to the argument that an ideal scholarly career will involve some degree of offense, because filtering all of one’s output takes a lot of time and energy and will produce false positives. “If you never make people angry, you’re spending too much time editing your work.”
- Still, many other scholars have done a better job than Hanson at presenting controversial ideas in a productive way. (Several of them work in his academic department and have written thousands of blog posts on varied topics, many of them controversial.)
- To the extent that I support some of Hanson’s ideas and want to see them become better-known, I am annoyed that this may be less likely to happen because of Hanson’s decisions. (Though maybe the controversies lead more people to his good ideas in a way that is net positive? I really don't know.)
- And of course, Hanson's approach to his own work is none of my business, and he can write whatever he wants. I just have a lot of feelings.
On the EA movement’s approach to ideas, diversity, etc.
- EA Munich’s decision doesn’t say much, if anything, about EA in general. They are a small group and acted independently.
- That said, my impression is that, over time, the EA movement has become more attentive to various kinds of diversity, and more cautious about avoiding public discussion of ideas likely to cause offense. This involves trade-offs with other values.
- However, these trade-offs could easily be beneficial, on net, for the movement’s goals.
- Whether they actually are depends on many factors, including what a given person would define as “the movement’s goals.” Different people want EA to do different things! Competing access needs are real!
- Some of the people who have encouraged EA to be more attentive to diversity and more cautious about public discussion did so without thinking carefully about trade-offs.
- Some of the people who have encouraged EA not to become more cautious and attentive to diversity… also did so without thinking carefully about trade-offs.
- Given prevailing EA discussion norms, I would expect people who favor more attentiveness to diversity to be underrepresented in community discussions, relative to their actual numbers. My experience running anonymous surveys of people in EA (Forum users, org employees, etc.) tends to bear this out.
- However, underrepresentation isn’t exclusive to this group. I’ve heard from people with many different views who feel uncomfortable talking about their views in one or more places.
- The more time someone spends talking to a variety of community members (and potential future members), the more likely they are to have an accurate view of which norms will best encourage the community’s health and flourishing. Getting a sense of where the community lies on issues often involves having a lot of private conversations, because people often say more about their views in private than they will in a public forum.
- Some of the people who have spent the most time doing the above came to the conclusion that EA should be more cautious and attentive to diversity.
- I don’t know what the right trade-offs are myself, but I recognize that, compared to the aforementioned people, I have access to (a) the same knowledge about trade-offs and (b) less knowledge about actual people in the community.
- Hence, I’m inclined to weigh someone’s views more heavily if they’ve spent a lot of time talking to community members.
- That said (almost done), I spoke to some of the aforementioned people, who cautioned me not to defer too much to their views, and pointed out that “opinions about diversity” aren’t necessarily correlated with “time spent talking to community members,” presenting me with examples of other frequent conversation-havers who hold very different opinions.
- This drives home for me how open these kinds of questions are — and how wrongfooted it seems when people present EA or its biggest orgs as some kind of restrictive orthodoxy.
Do you have any thoughts on this earlier comment of mine? In short, are you worried about EA developing a full-scale cancel culture similar to other places where SJ values currently predominate, like academia or MSM / (formerly) liberal journalism? (By that I mean a culture where many important policy-relevant issues either cannot be discussed, or the discussions must follow the prevailing "party line" in order for the speakers not to face serious negative consequences like career termination.) If you are worried, are you aware of any efforts to prevent this from happening? Or at least discussions around this among EA leaders?
I realize that EA Munich and other EA organizations face difficult trade-offs and believe that they are making the best choices possible given their values and the information they have access to, but people in places like academia must have thought the same when they started what would later turn out to be their first steps towards cancel culture. Do you think EA can avoid sharing the same eventual fate?
[Tangent:] Based on developments since we last engaged on the topic, Wei, I am significantly more worried about this than I was at the time. (I.e., I have updated in your direction.)
Of the scenarios you outline, (2) seems like a much more likely pattern than (1), but based on my knowledge of various leaders in EA and what they care about, I think it's very unlikely that "full-scale cancel culture" (I'll use "CC" from here) evolves within EA.
Some elements of my doubt:
- Much of the EA population started out being involved in online rationalist culture, and those norms continue to hold strong influence within the community.
- EA has at least some history of not taking opportunities to adopt popular opinions for the sake of growth:
  - Rather than leaning into political advocacy or media-friendly global development work, the movement has gone deeper into longtermism over the years.
  - CEA actively shrank the size of EA Global because they thought it would improve the quality of the event.
  - 80,000 Hours has mostly passed on opportunities to create career advice that would be more applicable to larger numbers of people.
- Obviously, none of these are perfect analogies, but I think there's a noteworthy pattern here.
- The most prominent EA leaders whose opinions I have any personal knowledge of tend to be quite anti-CC.
- EA has a strong British influence (rather than being wholl…
(I'm occupied with some things so I'll just address this point and maybe come back to others later.)
That seems true, but on the other hand, the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public? Thinking about this, I note that:
I agree with this. This seems like an opportune time for me to say in a public, easy-to-google place that I think cancel culture is a real thing, and very harmful.
It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.
That does seem important. I mostly don't think about this issue because it's not my wheelhouse (and lots of people talk about it already). Overall my attitude towards it is pretty similar to other hypotheses about institutional decline. I think people at EA orgs have way more reasons to think about this issue…
To follow up on this: Paul and I had an offline conversation about it, but it kind of petered out before reaching a conclusion. I don't recall all that was said, but I think a large part of my argument was that "jumping ship" or being forced off for ideological reasons was not "fine" when it happened historically (for example, to communists in Hollywood and conservatives in academia), but represented disasters (i.e., very large losses of influence and resources) for those causes. I'm not sure if this changed Paul's mind.
I'm not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).
It doesn't currently seem like thinking or working on this issue should be a priority for me (even within EA other people seem to have clear comparative advantage over me). I would feel differently if this was an existential issue or had a high enough impact, and I mostly dropped the conversation when it no longer seemed like that was at issue / it seemed in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.
It does feel like your estimates for the expected harms are higher than mine, which I'm happy enough to discuss, but I'm no…
I think this is the crux of the issue. We have a recurring pattern: I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or as likely to have that effect in other people's minds and thereby make them less likely to work on the problem, so I push back on that. But maybe you were just trying to explain why you don't want to work on it personally, and you interpret my pushback as trying to get you to work on the problem personally, which is not my intention.
I think from my perspective the ideal solution would be if in a similar future situation, you could make it clearer from the start that you do think it's an important problem that more people should work on. So instead of "and lots of people talk about it already" which seems to suggest that enough people are working on it already, something like "I think this is a serious problem that I wish more people would work on or think about, even though my own comparative advantage probably lies elsewhere."
Curious how things look from your perspective, or a third party perspective.
I don't think it did.
On this thread and others, many people expressed similar concerns, before and after you left your own comments. It's not difficult to find Facebook discussions about similar concerns in a bunch of different EA groups. The first Forum post I remember seeing about this (having been hired by CEA in late 2018, and an infrequent Forum viewer before that) was "The Importance of Truth-Oriented Discussions in EA".
While you have no official EA affiliations, others who share and express similar views do (Oliver Habryka and Ben Pace come to mind; both are paid by CEA for work they do related to the Forum). Of course, they might worry about being cancelled, but I don't know either way.
I've also seen people freely air similar opinions in internal CEA discussions without (apparently) being worried about what their co-workers would think. If they were people who actually used the Forum in their spare time, I suspect they'd feel comfortable commenting about their views, though I can't be sure.
On the positive side, a recent attempt to bring cancel culture to EA was resoundingly rejected, with 111 downvotes and strongly upvoted rebuttals.
That cancellation attempt was clearly a bridge too far. The EA Forum is comparatively a bastion of free speech (relative to some EA Facebook groups I've observed and, as we've now seen, local EA events), and Scott Alexander clearly does not make a good initial target. I'm worried, however, that each "victory" by CC has a ratcheting effect on EA culture, whereas failed cancellations don't really matter in the long run, as CC can always find softer targets to attack instead, until the formerly hard targets have been isolated and weakened.
Honestly, I'm not sure what the solution is in the long run. I mean, academia is full of smart people, many of whom surely dislike CC as much as most of us do and would push back against it if they could, yet academia is now the top example of cancel culture. What is something that we can do that they couldn't, or didn't think of?
I agree that that was definitely a step too far. But there are legitimate middle grounds that don't have slippery slopes.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
I refuse to defend something as ridiculous as the idea of cancel culture writ large. But I sincerely worry about the lack of racial representativeness, equity, and inclusiveness in the EA movement, and there needs to be some sort of way that we can encourage more people to join the movement without them feeling like they are not in a safe space.
I think there is a lot of detail and complexity here and I don't think that this comment is going to do it justice, but I want to signal that I'm open to dialog about these things.
On the face of it, this seems like a bad idea to me. I don't want "introductory" EA spaces to have different norms than advanced EA spaces, because I only want people to join the EA movement to the extent that they have very high epistemic standards. If people wouldn't like the discourse norms in the central EA spaces, I don't want them to feel comfortable in the more peripheral EA spaces. I would prefer that they bounce off.
To say it another way, I think it is a mistake to have "advanced" and "introductory" EA spaces, at all.
I am intending to make a pretty strong claim here.
[One operationalization I generated, but want to think more about before I fully endorse it: "I would turn away bil…
Surely there exists a line at which we agree in principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steelmanned Holocaust denial, and we were approached by a group of Jewish people saying "We want to become effective altruists because we believe in the stated ideals, but we don't feel safe participating in a space where so many people commonly and openly argue that the Holocaust did not happen."
In this scenario, I hope that we'd both agree that it would be appropriate for us to tell our fellow EAs to cut it out. While it may be a useful thing to discuss (if only to show how absurd it is), we can (I argue) push future discussion of it into a smaller space so that the general EA space doesn't have to be peppered with such arguments. This is the case even if none of the EAs talking about it actually believe it. Even if they are just steelmanning devil's advocates, surely it is more effective for us to clean the space up so that our Jewish EA friends feel safe to come here and interact with us, at the cost of moving specific types of discussion to a smaller area.
I agree that one of the th…
I agree with your conclusion about this instance, but for very different reasons, and I don't think it supports your wider point of view. It would be bad if EAs spent all their time discussing the Holocaust, because the Holocaust happened in the past, and so there is nothing we can possibly do to prevent it. As such, the discussion is likely to be a purely academic exercise that does not help improve the world.
It would be very different to discuss a currently occurring genocide. If EAs were considering investing resources in fighting the Uighur genocide, for example, it would be ver…
Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent with both caring a lot about the truth and also with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth they aren't obviously wrong to do so. So signaling the former would be nice.
Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don't have. E.g. if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them because most people who say that haven't thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying "I think factory farming is terrible but XYZ" instead of just "XYZ".
"I think a model by which people gradually "warm up" to "more advanced" discourse norms is false."
I don't think that's the main benefit of disallowing certain forms of speech at certain events. I'd imagine it'd be to avoid making EA events attractive and easily accessible for, say, white supremacists. I'd like to make it pretty costly for a white supremacist to be able to share their ideas at an EA event.
We've already seen white nationalists congregate in some EA-adjacent spaces. My impression is that spaces (especially online ones) that don't moderate away or at least discourage such views will tend to attract them; it's not the pattern of activity you'd see if white nationalists randomly bounced around places, or if people organically arrived at those views. I think this is quite dangerous for epistemic norms, because white nationalist/supremacist views are very incorrect, because they deter large swaths of potential participants, and because people with those views routinely argue in bad faith, hiding how extreme their actual opinions are while surreptitiously promoting the extreme version. It's also, in my view, a fairly clear and present danger to EA, given that there are other communities with some white nationalist presence that are quite socially close to EA.
I don't know anything about Leverage but I can think of another situation where someone involved in the rationalist community was exposed as having misogynistic and white supremacist anonymous online accounts. (They only had loose ties to the rationalist community, it came up another way, but it concerned me.)
I didn't downvote it, though probably I should have. But it seems a stretch to say 'one guy who works for a weird organization that is supposedly EA' implies 'congregation'. I think that would have to imply a large number of people. I would be very disappointed if I had a congregation of less than ten people.
JoshYou also ignores important hedging in the linked comment:
So instead of saying
It would be more fair to say
Which is clearly much less worrying. There are lots of weird ideologies and a lot of weird people in California, who believe a lot of very incorrect things. I would be surprised if 'white nationalists' were really high up on the list of threats to EA, especially given how…
Describing members of Leverage as "white nationalists" strikes me as pretty extreme, to the level of dishonesty, and is not even backed up by the comment that was linked. I thought Buck's initial comment was also pretty bad, and he did indeed correct his comment, which is a correction that I appreciate, and I feel like any comment that links to it should obviously also take into account the correction.
I have interfaced a lot with people at Leverage, and while I have many issues with the organization, saying that many white nationalists congregate there, and have congregated in the past, just strikes me as really unlikely.
Buck's comment also says at the bottom:
I also want us to separate "really racist" from "white nationalist", which are just really not the same term, and which appear to me to…
My description was based on Buck's correction (I don't have any first-hand knowledge). I think a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don't believe. I don't mean to imply anything stronger than what Buck claimed about Leverage.
I invoked white nationalists not as a hypothetical representative of ideologies I don't like but quite deliberately, because they literally exist in substantial numbers in EA-adjacent online spaces and they could view EA as fertile ground if the EA community had different moderation and discursive norms. (Edited to avoid potential collateral reputational damage) I think the neo-reactionary community and their adjacency to rationalist networks are a clear example.
Just to be clear, I don't think even most neoreactionaries would classify as white nationalists? Though maybe now we are arguing over the definition of white nationalism, which is definitely a vague term and could be interpreted many ways. I was thinking about it from the perspective of racism, though I can imagine a much broader definition that includes something more like "advocating for nations based on values historically associated with whiteness", which would obviously include neoreaction, but would also presumably be a much more tenable position in discourse. So for now I am going to assume you mean something much more straightforwardly based on racial superiority, which also appears to be the Wikipedia definition.
I've debated with a number of neoreactionaries, and I've never seen them bring up much stuff about racial superiority. Usually just arguing against democracy and in favor of centralized control and various arguments derived from that, though I also don't have a ton of datapoints. There is definitely a focus on the superiority of western culture in their writing and rhetoric, much of which is flawed and I am deeply opposed to many of the things I've seen at le…
You know, this makes me think I know just how academia was taken over by cancel culture. They must have allowed "introductory spaces" like undergrad classes to become "safe spaces", thinking they could continue serious open discussion in seminar rooms and journals; then those undergrads became graduate students and professors and demanded "safe spaces" everywhere they went. And how is anyone supposed to argue against "safety", especially once its importance has been institutionalized (i.e., departments were built in part to enforce "safe spaces", which can then easily extend their power beyond "introductory spaces")?
ETA: Jonathan Haidt (with Greg Lukianoff) has an Atlantic article and a book, both titled "The Coddling of the American Mind," detailing problems caused by the introduction of "safe spaces" in universities.
Professors are already overwhelmingly leftists or left-leaning (almost all conservatives have been driven away or self-selected away), and now even left-leaning professors are being canceled or fearful of being canceled. See:
and this comment in the comments section of a NYT story about cancel culture among the students:
EDIT: I realized that discussing this will not help me do more good or live a happier life so I'd rather not, but I'll leave it up for the record. You are welcome to reply to it.
Something I don't see discussed here is that there's a difference between a) not inviting a live speaker who has a history of being unpredictable and insensitive compared to b) refusing to engage with any of their ideas.
At this point, for my own mental health, I would not engage with Robin Hanson. If I knew he were going to be at an event and I'd have to interact with him, I wouldn't go. But I still might read one of his books - they've been through an editing process so I trust them to be more sensitive and more useful.
I see a lot of people saying "no one involved with EA would really object to Robin Hanson at an event" but there are actually a lot of us. And you can insult me however you want to - you can say that this makes me small-minded or irrational - but that won't make it an "effective" use of my time to hang around someone who's consistently unkind.
I appreciate you writing this and leaving it up, I feel basically the same (including the edit, so I'm pretty unlikely to reply to further comments) but felt better having seen your post, and think that you writing it was, in fact, doing good in this case (at least in making me and probably others not feel further separated from the community).
I think there's another difference between:
a) Thinking that a speaker shouldn't be allowed to speak at an event
b) Deciding not to attend an event with a confirmed speaker because you don't like their ideas
For the first half of your comment I thought you fell into camp b) but not camp a). However your last paragraph seems to imply you fall into both camps.
Personally I would not want a person to speak at an EA event if I thought they were likely to cause reputational damage to EA. In this particular case I (tentatively) don't think Hanson would have. Sure he's said some questionable things, but he was being invited to talk about tort law and I fail to see how allowing that signals condoning his questionable ideas. Therefore I would probably have let him speak and anyone who didn't want to hear him would obviously have been free to not attend.
It seems to me that people often imply that personally finding a speaker beyond the pale means that the speaker shouldn't be allowed to speak to anyone. I've always found this slightly odd.
Personally, I feel the same. I can engage with Robin's ideas online. I think he produces some interesting content. Also, some dumb content. I can choose to learn from either. I can notice if he 'offends' me and then decide I'm still interested in whether what he has to say might be useful somehow. ...That doesn't mean I have to invite the guy over to my house to talk with me about his ideas, because I realize that I wouldn't enjoy being around him in person. I think this is more common than people realize among people who know Robin. If Munich wanted to read and discuss his stuff, but not invite him to 'hang out,' I get it.
Thanks for writing about this. This incident bothered me and I really appreciate your thoughts and find them clarifying. I also tend to feel really frustrated with people finding offense in arguments (and notice this frustration right now), just to flag this here.
I found it improper that you call them "missteps", as if he made mistakes. As you said, openly discussing sensitive topics will cause offense if you don't censor yourself a lot. You mention that his colleagues do a better job at making controversial ideas more palatable, but again, as you suggested, maybe they actually spend more time editing their work. This seems like a tradeoff to me, and I'm not convinced that Hanson is making missteps or that we should encourage him to change how he runs his blog to have a more positive impact. Not saying this is true for Hanson, but for some thinkers it might be draining to worry about people taking offense at their thoughts. I'm worried about putting pressure on an important thinker to direct mental resources to things other than having smart thoughts about important topics.
Yes, it's a tradeoff, but Hanson is so close to one extreme of the spectrum that it becomes implausible that anyone could be that bad at communicating carefully just by accident. I don't think he's even trying; maybe he's deliberately trying to walk as close to the line as possible. What's the point in that? If I'm right, I wouldn't want to gratify it. I think it lacks nuance to blanket-object to the "misstep" framing, especially since that's still a relatively weak negative judgment. We probably want to be able to commend some people on their careful communication of sensitive topics, so we also have to be willing to call it out when someone is doing an atrocious job of it.
For reference, I have listened to a bunch of politically controversial podcasts by Sam Harris, and even though I think there's a bit of room to communicate even better, there were no remarks I'd label as 'missteps.' By contrast, several of Hanson's tweets are borderline at best, and at least one now-deleted tweet I saw was utterly insane. I don't think it'…
I can think of at least three reasons for someone to be "edgy" like that:
One could think of "edgy" people as performing a valuable social service (2 and 3 above) while taking a large personal risk (if they accidentally cross the line), and receiving the personal benefits of intelligence signaling as compensation. On this view, it's regrettable that more people aren't willing to be "edgy" (perhaps because we as a culture have devalued intelligence signaling relative to virtue signaling), and as a result our society is suffering the negative consequences of an increasingly narrow Overton window…
Thanks, those are good points. I agree that this is not black and white, that there are some positives to being edgy.
That said, I don't think you make a good case for the alternative view. I wouldn't say that the problem with Hanson's tweets is that they cause "emotional damage." The problem is that they contribute to toxoplasma-of-rage dynamics (especially combined with some people's impulse to defend everything about them). My intuition is that this negative effect outweighs the positive effects you describe.
The "alternative view" ("emotional damage") I mentioned was in part trying to summarize the view apparently taken by EA Munich and being defended in the OP: "And yet, many people are actually uncomfortable with Hanson for some of the same reasons brought up in the Slate piece; they find his remarks personally upsetting or unsettling."
This would be a third view, which I hadn't seen anyone mention in connection with Robin Hanson until now. I guess it seems plausible although I personally haven't observed the "negative effect" you describe so I don't know how big the effect is.
Two other reasons to be "edgy" came to my mind:
Signalling frank discussion norms - when the host of a discussion now and then uses words and phrases that would be considered insensitive among a general audience, people in this discussion can feel permitted to talk frankly without having to worry about how the framing of their argument might offend anybody.
Relatedly, I've noticed feeling relieved when a higher-status person made a "politically incorrect" joke. I felt like I could relax the part of my brain that worries about saying something that could cause offense in some context and get me punished socially (e.g. being labeled "problematic", which seems to happen much more quickly than I'd like, also in EA circles).
Only half joking: if somebody leaked the chats I have had with my best friend over the years, there is probably something in there to deeply offend every person on Earth. So maybe another reason to be "edgy" is just that it's fun for some people to say things in a norm-violating way? I remember laughing out loud at two of Hanson's breaches of certain norms. Some part of me is worried about how this make…
Thanks for the pushback; I'm still confused, and it helped me think a bit better (I think). What do you think about the idea that the issue revolves around what Kelsey Piper called competing access needs? I explained how I think about it in this comment. I feel like I want to protect edgy think-aloud spaces like Hanson's. I benefit a lot from them, and I (not being an EA insider) am already excluded from many valuable but potentially offending EA think-aloud spaces because people are not willing to bear the costs like Hanson does.
That all makes sense. I'm a bit puzzled why it has to be edgy on top of just talking with fewer filters. It feels to me like the intention isn't just to discuss ideas with people of a certain access need, but also some element of deliberate provocation. (But maybe you could say that's just a side product of curiosity about where the lines are – I just feel like some of the tweet wordings were deliberately optimized to be jarring.) If it wasn't for that one tweet that Hanson now apologized for, I'd have less strong opinions on whether to use the term "misstep." (And the original post used it in plural, so you have a point.)
Presumably every filter is associated with an edge, right? Like, the 'trolley problem' is a classic of philosophy, and yet it is potentially traumatic for the victims of vehicular violence or accidents. If that's a group you don't want to upset or offend, you install a filter to catch yourself before you do, and when seeing other people say things you would've filtered out, you perceive them as 'edgy'. "Don't they know they shouldn't say that? Are they deliberately saying that because it's edgy?"
[A more real example is that a friend once collected a list of classic examples and thought experiments, and edited all of the food-based ones to be vegan, instead of the original food item. Presumably the people who originally generated those thought experiments didn't perceive them as being 'edgy' or 'over the line' in some way.]
I read a lot of old books; for example, it's interesting to contrast the 1934 and 1981 editions of How to Win Friends and Influence People…
If you're not saying that, then why did you make a comment? It feels like you're stating a fully general counterargument to the view that some statements are clearly worth improving, and that it matters how we say things. That seems like an unattractive view to me, and I'm saying that as someone who is really unhappy with social justice discourse.
Edit: It makes sense to give a reminder that we may sometimes jump to conclusions too quickly, and maybe you didn't want to voice unambiguous support for the view that the comment wordings were in fact not easy to improve on given the choice of topic. That would make sense – but then I have a different opinion.
I'm afraid this sentence has too many negations for me to clearly point one way or the other, but let me try to restate it and say why I made a comment:
The mechanistic approach to avoiding offense is to keep track of the ways things you say could be interpreted negatively, and search for ways to get your point across while not allowing for any of the negative interpretations. This is a tax on saying anything, and it especially taxes statements on touchy subjects, and the tax on saying things backpropagates into a tax on thinking them.
When we consider people who fail at the task of avoiding giving offense, it seems like there are three categories to consider:
1. The Blunt, who are ignoring the question of how the comment will land, and are just trying to state their point clearly (according to them).
2. The Blithe, who would put effort into rewording their point if they knew how to avoid giving offense, but whose models of the audience are inadequate to the task.
3. The Edgy, who are optimizing for being 'on the line' or…
I am always really confused when someone brings up this point as a point of critique. The substance of Hanson's post where he used that phrase just seemed totally solid to me.
I feel like this phrase is always invoked to make the point that Hanson doesn't understand how bad rape is, or that he somehow thinks lots of rape is "gentle" or "silent", but that has absolutely nothing to do with the post where the phrase is used. The phrase isn't even referring to rape itself!
When people say things like this, my feeling is that they must have not actually read the original post, where the idea of "gentle, silent rape" was used as a way to generate intuitions not about how bad rape is, but about how bad something else is (cuckoldry), and about how our legal system judges different actions in a somewhat inconsistent way. Again, nowhere in that series of posts did Hanson say that rape was in any way not bad, or not traumatic, or not something that we should obviously try to prevent with a substantial fraction of our resources. And given the relati…
I did read the post, and I mostly agree with you about the content (Edit: at least in the sense that I think large parts of the argument are valid; I think there are some important disanalogies that Hanson didn't mention, like "right to bodily integrity" being way clearer than "moral responsibility toward your marriage partner"). I find it weird that just because I think a point is poorly presented, people assume I disagree with the point. (Edit: It's particularly the juxtaposition of "gently raped," which also appears in the main part of the text. I would also prefer more remarks that put the reader at ease, e.g., repeating several times that it's all just a thought experiment, and so on.)
There's a spectrum of how much people care about a norm of presenting especially sensitive topics in a considerate way. You and a lot of other people here seem to be so far toward one end of the spectrum that you don't notice the difference between me and Ezra Klein (in the discussion between Sam Harris and Ezra Klein, I completely agreed with Sam Harris). Maybe that's just because there are few people in the middle of this spectrum, and yo…
Sorry! I never meant to imply that you disagree with the point.
My comment in this case is more: How would you have actually wanted Robin Hanson to phrase his point? I've thought about that issue a good amount, and I feel like it's just a really hard point to make. I am honestly curious what you would have preferred Hanson to say instead. The thing he said seemed overall pretty clear to me, and really not like an attempt to be intentionally edgy; it was more that the point he wanted to make just had a bunch of inconvenient consequences that were difficult to explore (similarly to how utilitarianism quickly gives rise to a number of consequences that are hard to discuss and explore).
My guess is you can probably come up with something better, but that it would take you substantial time (> 10 minutes) of thinking.
My argument here is mostly: In context, the thing that Robin said seemed fine, and I don't expect that many people who read that blogpost actually found his phrasing that problematic. The thing that I expect to hav…
I realise you did not say this applied to Robin, but just in case anyone reading was confused and mistakenly thought it was implicit, we should make clear that Robin does not think rape is 'not a big deal'. Firstly, opposition to rape is almost universal in the west, especially among the highly educated; as such our prior should be extremely strong that he does think rape is bad. In addition to this, and despite his opposition to unnecessary disclaimers, Robin has made clear his opposition to rape on many occasions. Here are some quotations that I found easily on the first page of google and by following the links in the article EA Munich linked:
https://www.overcomingbias.com/2014/11/hanson-loves-moose-caca…
Yes, I'm not saying that Robin Hanson is a criminal, and it's good to point out that he's not pro-rape. Thanks for that.
I was thinking about what it would look like for the whole EA community to generally try to avoid upsetting people who have been traumatized by rape, and comparing that to if the EA community tried to avoid upsetting people who have been traumatized by trolley accidents, which was a suggestion above.
My intuition about the base rate of people who have experienced sexual assault and how often sexual assault happens at EA events is probably different from yours which may explain our different approaches to this topic.
I think you're missing my point; I'm not describing the scale, but the type. For example, suppose we were discussing racial prejudice, and I made an analogy to prejudice against the left-handed; it would be highly innumerate of me to claim that prejudice against the left-handed is as damaging as racial prejudice, but it might be accurate of me to say both are examples of prejudice against inborn characteristics, are perceived as unfair by the victims, and so on.
And so if you're not trying to compare expected trauma, and just come up with rules of politeness that guard against any expected trauma above a threshold, setting the threshold low enough that both "prejudice against left-handers" and "prejudice against other races" are out doesn't imply that the damage done by both are similar.
That said, I don't think I agree with the points on your list, because I used the reference class of "vehicular violence or accidents," which is very broad. I agree there's an important disanalogy in that 'forced choices' like in the trolley problem are high... (read more)
If you think my arguments are incorrect, it would be useful to explain how rather than silently downvoting.
I am starting to wonder if I will be downvoted on the EA Forum any time I point out that rape is bad. That can't be why people downvote these comments, right?
I'm glad you came back to look at this discussion again because I found your comments here (and generally) really valuable. I refrained from upvoting your comment because you called the comparison "pretty ridiculous". I would feel attacked if you called my reasoning ridiculous and would be less able to constructively argue with you.
I think you are right to point out that some topics are much more sensitive to many more people, and EAs being more careful around those topics makes our community more welcoming to more people. That said, I understood Vaniver's point to be that even a topic most readers wouldn't consider sensitive (e.g. a discussion comparing the death of five people vs. one) might upset someone who stumbles on it. So the solution should not be to punish or deplatform somebody who discussed a topic in a way that was upsetting for someone, thereby stopping people from thinking publicly about potentially upsetting topics, but something else.
That's a very helpful overview, thank you.
I'm fairly sure the real story is much better than that, although still bad in objective terms: in culture-war threads, the typical karma norms roughly morph into 'barely restricted tribal warfare'. So people have much lower thresholds both to slavishly upvote their 'team', and to downvote the opposing one.
I downvoted the above comment by Khorton (not the one asking for explanations, but the one complaining about the comparison of trolley problems and rape), and I think Larks explained part of the reason pretty well. I read it in substantial part as an implicit accusation that Robin supports rape. It also seemed to misunderstand Vaniver's comment, which wasn't at all emphasizing a dimension of trolley problems that made a comparison with rape unfitting, and it did so in a pretty accusatory way (which meerpirat clarified below).
I agree that voting quality somewhat deteriorates in more heated debates, but I think this characterization of how voting happens is too uncharitable. I try pretty hard to vote carefully, and often change my votes multiple times on a thread if I later on realize I was too quick to judge something or misunderstood someone, and really spend a lot of time reconsidering and thinking about my voting behavior with the health of the broader discourse in mind, so I am quite confident about my own voting behavior being mischaracterized by the above.
I've also talked to many other people active on LessWrong and the EA Forum over the years, and a lot of people seem to put a lot of effort into how they vote, so I am also reasonably confident many others also spend substantial time thinking about their voting in a way that really isn't well-characterized by "roughly morphing barely restricted tribal warfare".
Thanks for the feedback. I think the word "missteps" is too presumptive for the reasons you outlined, and I've changed it to "decisions." I also added a caveat noting that the controversies he's provoked may lead to his ideas becoming better-known generally (though it's really hard to determine the overall effect).
I am skeptical of this. The EA survey shows that one of the most under-represented groups in EA is conservatives, and I have seen little sign that EAs in general, and CEA in particular, have become more cautious about public discussion that will offend conservatives.
Similarly, I don't think there is much evidence of people suppressing ideas offensive to older people, or religious people, even though these are also dramatically under-represented groups.
I think a more accurate summary would be that as EA has grown, it has become subject to Conquest's Second Law, and this has made it less tolerant of various views and people currently judged to be unacceptable by SJWs. Specifically, I would be surprised if there was much evidence of EAs/CEA being more cautious about publicly discussing 'woke' views out of fear of offending liberals or conservatives.
I hear frequently from people who express fear of discussing "woke" views on the Forum or in other EA discussion spaces. They (reasonably) point out that anti-woke views are much more popular, and that woke-adjacent comments are frequently heavily downvoted. All I have is a series of anecdotal statements from different people, but maybe that qualifies as "evidence"?
My model of this is that there is a large fraction of beliefs in the normal Overton window of both liberals and conservatives, that are not within the Overton window of this community. From a charitable perspective, that makes sense, lots of beliefs that are accepted as Gospel in the conservative community seem obviously wrong to me, and I am obviously going to argue against them. The same is true for many beliefs in the liberal community. Since many more members of the community are liberal, we are going to see many more "woke" views argued against, for two separate reasons:
- Many people assume that all spaces they inhabit are liberal spaces; the EA community is broadly liberal, so they are very surprised when they say something that is accepted as obvious everywhere else and suddenly get questioned here (concrete examples I've seen in the past, and am happy to see questioned, include: "there do not exist substantial cognitive differences between genders", "socialized healthcare is universally good", "we should drastically increase taxes on billionaires", "racism is obviously one of the most important problems to be working on").
- There are simply many more liberal people, so y…

Just logging in to say that, as someone who co-ran a large university EA group for three years (incidentally the one that Aaron founded many years prior!), I find it plausible that, in some scenarios, the decision that EA Munich made would be the all-things-considered best one.
Edited from earlier comment: I think I am mostly confused about what diversity has to do with this decision. It seems to me that there are many pro-diversity reasons not to deplatform Hanson. Indeed, the primary one cited, intellectual diversity and tolerance of weird ideas, is itself an argument in favor of diversity. So while diversity plays some role, I am actually confused about why you bring it up here.
I am saying this because I wanted to argue against things in the last section, but realized that you just use really high-level language like "diversity and inclusion" which is very hard to say anything about. Of course everyone is in favor of some types of diversity, but it feels to me like the last section is trying to say something like "people who talked to a lot of people in the community tend to be more concerned about the kind of diversity that having Robin as a speaker might harm", but I don't actually know whether that's what you mean. But if you do mean it, I think that's mostly backwards, based on the evidence I have seen.
I maybe should have said something like "concerns related to social justice" when I said "diversity." I wound up picking the shorter word, but at the price of ambiguity.
You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section worry that some people might feel alienated and unwelcome given Robin's presence as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.
A simple example of the kind of thing I'm thinking of (which I'm aware is too simplistic to represent reality in full, but does draw from the experiences of people I've met):
A German survivor of sexual abuse is interested in EA Munich's events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on "gentle silent rape" and find it viscerally unpleasant. They've seen other discussion spaces where ideas like Hanson's were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.
I find it interesting that you thought "diversity" is a good shorthand for "social justice", whereas other EAs naturally interpreted it as "intellectual diversity" or at least thought there's significant ambiguity in that direction. Seems to say a lot about the current moment in EA...
Well, maybe not, if some of the apparent options aren't real options. For example if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope. (Or to quickly run back to level ground while you still have some chance, as I'm starting to suspect that EA has taken quite a few steps down the slope already.)
It may be that in the end EA can't fight (i.e., can't win against) SJ-like dynamics, and therefore EA joining cancel culture is more "effective" than it getting canceled as a whole. If EA leaders have made an informed and well-considered decision about this, then fine, tell me and I'll d…
There were extensive discussions around this at https://www.greaterwrong.com/posts/PjfsbKrK5MnJDDoFr/have-epistemic-conditions-always-been-this-bad, including one about the 1950s. (Note that those discussions were from before the recent cluster of even more extreme cancellations like David Shor and the utility worker who supposedly made a white power sign.)
ETA: See also this Atlantic article that just came out today, and John McWhorter's tweet:
If you're not sure whether EA can avoid sharing this fate, shouldn't figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of "50 or 60 bullet points"? (Unless you know that others are already working on the problem, and it sure doesn't sound like it.)
I think the biggest reason I'm worried is that seemingly every non-conservative intellectual or cultural center has fallen prey to cancel culture, e.g., academia, journalism, publishing, museums/arts, tech companies, local governments in left-leaning areas, etc. There are stories about it happening in a crochet group, and I've personally seen it in action in my local parent groups. Doesn't that give you a high enough base rate that you should think "I better assume EA is in serious danger too, unless I can understand why it happened to those places, and why the same mechanisms/dynamics don't apply to EA"?
Your reasoning (from another comment) is "I've seen various incidents that seem worrying, but they don't seem to form a pattern." Well if you only get seriously worried once there's a clear pattern, that may well be too late to do anything about it! Remember that many of those intellectual/cultural centers were once filled with liberals who visibly supported free speech, free inquiry, etc., and many of them would have cared enough to try to do something about cancel culture once they saw a clear pattern of movement in that direction, but that must have been too late already.
But isn't it basically impossible to build an intellectually diverse community out of people who are unwilling to be associated with people they find offensive or substantially disagree with? It seems really clear that if Speakers B and C avoid talking to you only because you associated with Speaker A, then they are following a strategy of generally not engaging with parties that espouse ideas they find offensive, which makes it really hard to create a high level of diversity out of people who follow that strategy (since they will either conform or splinter).
That is why it's so important not to give in to those people'…
No. What I am saying is that unless you also want to enforce conformity, you cannot have a large community of people with different viewpoints who all believe that you shouldn't associate with people they think are wrong. So the real choice is not between "having all the people who think you shouldn't associate with people who think they are wrong" and "having all the weird, intellectually independent people." It is instead between "having an intellectually uniform, conformist slice of the people who don't want to be associated with others they disagree with" and "having a quite intellectually diverse crowd of people who tolerate dissenting opinions," with the second possibly being substantially larger, though generally I don't think size is the relevant constraint to look at here.
I think you're unintentionally dodging both Aaron's and Ben's points here, by focusing on the generic idea of intellectual diversity and ignoring the specifics of this case. It simply isn't the case that disagreeing about *anything* can get you no-platformed/cancelled/whatever. Nobody seeks 100% agreement with every speaker at an event; for one thing that sounds like a very dull event to attend! But there are specific areas people are particularly sensitive to, this is one of them, and Aaron gave a stylised example of the kind of person we can lose here immediately after the section you quoted. It really doesn't sound like what you're talking about.
> A German survivor of sexual abuse is interested in EA Munich's events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on "gentle silent rape" and find it viscerally unpleasant. They've seen other discussion spaces where ideas like Hanson's were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.
Like Ben, I understand you as either saying that this person is sufficiently uncommon that their loss is worth the more-valuable conversation, or that we don't care about someone who would distance themselves from EA for this reason anyway (it's not an actual 'loss'). And I'm not sure which it is or (if the first) what percentages you would give.
The thing that I am saying is that in order to make space for someone who tries to enforce such norms, we would have to kick many other people out of the community and stop many others from joining. It is totally fine for people not to attend events that happen to hit on a topic they are sensitive to. But for someone to completely disengage from a community and avoid talking to anyone in it because a speaker at some event had opinions they were sensitive to, opinions that weren't even the topic of the announced talk, is obviously going to exert substantial pressure on what kind of discourse is possible with them.
This doesn't seem to fit nicely into the dichotomy you and Ben are proposing here, which just has two options:
1. They are uncommon
2. They are not valuable
I am proposing a third option which is:
3. They are common and potentially valuable on their own, but they also impose costs on others that outweigh the benefits of their participation, and that make it hard to build an intellectually diverse community out of people like that. And it's really hard to integrate them into a discourse that might come to unintuitive conclusions if they …
[EDIT: As Oli's next response notes, I'm misinterpreting him here. His claim is that the movement would be overall larger in a world where we lose this group but correspondingly pick up others (like Robin, I assume), or at least that the direction of the effect on movement size is not obvious.]
***
Thanks for the response. Contrary to your claim that you are proposing a third option, I think your (3) cleanly falls under mine and Ben's first option, since it's just a non-numeric write-up of what Ben said:
I assume you would give different percentages, like 30% and 2x or something, but the structure of your (3) appears identical.
***
At that point my disagreement with you on this specific case becomes pretty factual; the number of sexual abuse survivors is large, my expected percentage of them that don't want to engage with Robin Hanson is high, the number of people in the community with on-the-record statements or behaviour that are comparably or more unpleasant to those people is small, and so I'm generally willing t…
No. How does my (3) match up to that option? The thing I am saying is not that we will lose 95% of the people, the thing I am saying is we are going to lose a large fraction of people either way, and the world where you have tons of people who follow the strategy of distancing themselves from anyone who says things they don't like is a world where you both won't have a lot of people, and you will have tons of polarization and internal conflict.
How is your summary at all compatible with what I said, given that I explicitly said:
That by necessity means that I expect the strategy you are proposing to not result in a larger community, at least in the long run. We can have a separate conversation about the exact balance of tradeoffs here, but please recognize that I am not saying the thing you are summarizing me as saying.
I am specifically challenging the assumption that this is a tradeoff of movement size, using some really straightforward logic of "if you have lots people who have a propensity to distance themselves from others, they will distance themselves and things will splinter apart". You might doubt that such a general tendency exists, you might doubt that the inference here is valid and that there are ways to keep such a community of people together either way, but in either case, please don't claim that I am saying something I am pretty clearly not saying.
Thank you for explicitly saying that you think your proposed approach would lead to a larger movement size in the long run, I had missed that. Your actual self-quote is an extremely weak version of this, since 'this might possibly actually happen' is not the same as explicitly saying 'I think this will happen'. The latter certainly does not follow from the former 'by necessity'.
Still, I could have reasonably inferred that you think the latter based on the rest of your commentary, and should at least have asked if that is in fact what you think, so I apologise for that and will edit my previous post to reflect the same.
That all said, I believe my previous post remains an adequate summary of why I disagree with you on the object level question.
Yeah, sorry, I do think the "by necessity" was too strong.
As an aside, if hosting Speaker A is a substantial personal risk to the people who need to decide whether to host Speaker A, I expect the decision process to be biased against hosting Speaker A (relative to an ideal EA-aligned decision process).
On a meta-level, the attitude we have towards "cancellation from a public event" is fairly weird. Had EA Munich simply chosen not to host Hanson's talk to begin with, we almost certainly wouldn't be having this discussion. Having instead chosen to host a talk and then changed their minds, they now face lots of handwringing and have prompted a larger EA internet conversation.
This feels structurally similar to what Jai Dhyani calls "The Copenhagen Interpretation of Ethics," though of course it is not exactly the same.
I don't quite understand this asymmetry (though I too feel a similar draw to think/opine in great detail about the "withdraw an event" case, but not the "didn't choose to hold an event" case). But in terms of first-order outcomes, they seem quite similar*!
*They're of course not identical; for example, asking someone to give a talk and then changing your mind is professionally discourteous, can waste the speaker's preparation time, etc. But to first order, the lack of professional courtesy and the (say) 2 hours of wasted time seem quite small compared to the emotional griping we've had.
It sends public signals that you'll submit to blackmail and that you think people shouldn't affiliate with the speaker. The former has strong negative effects on others in EA, because they'll face increased blackmail threats. The latter has negative effects on the speaker and their reputation, which in turn makes interesting speakers less likely to want to speak with EA, because they expect EA will submit to blackmail about them if any online mob decides to put them in its crosshairs.
Talk of 'blackmail' (here and elsethread) is substantially missing the mark. To my understanding, there were no 'threats' being acquiesced to here.
If some party external to the Munich group pressured them into cancelling the event with Hanson (and without this, they would want to hold the event), then the standard story of 'if you give in to the bullies you encourage them to bully you more' applies.
Yet unless I'm missing something, the Munich group changed their minds of their own accord, and not in response to pressure from third parties. Whether or not that was a good decision, it does not signal they're vulnerable to 'blackmail threats'. If anything, they've signalled the opposite by not reversing course after various folks castigated them on Twitter etc.
The distinction between 'changing our minds on the merits' and 'bowing to public pressure' can get murky (e.g. public outcry could genuinely prompt someone to change their mind that what they were doing was wrong after all, but people will often say this insincerely when what really happened is they were cowed by opprobrium). But again, the apparent abse…
Having participated in a debrief meeting for EA Munich, my assessment is indeed that one of the primary reasons the event was cancelled was fear of disruptors showing up at the event, as they have done at some of Peter Singer's events. Indeed, almost all concerns brought up during that meeting were concerns about external parties threatening EA Munich, or EA at large, in response to inviting Hanson. There were some minor concerns about Hanson's views qua his views alone, but basically all organizers who spoke at the debrief I was part of said that they were interested in hearing Robin's ideas and would have enjoyed participating in an event with him, and were primarily worried about how others would perceive and react to inviting him.
As such, blackmail feels like a totally fair characterization of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).
More importantly, I am really confused why you would claim so confidently that no threats were made. The prior for actions like this being taken in response to implicit threats is really high, and talking to any person who has tried organizing events like this will sho…
I found it valuable to hear information from the debrief meeting, and I agree with some of what you said - e.g. that it a priori seems plausible that implicit threats played at least some role in the decision. However, I'm not sure I agree with the extent to which you characterize the relevant incentives as threats or blackmail.
I think this is relevant because talk of blackmail suggests an appeal to clear-cut principles like "blackmail is (almost) always bad". Such principles could ground criticism that's independent from the content of beliefs, values, and norms: "I don't care what this is about, structurally your actions are blackmail, and so they're bad."
I do think there is some force to such criticism in cases of so-called deplatforming including the case discussed here. However, I think that most conflict about such cases (between people opposing "deplatforming" and those favoring it) is not explained by different evaluations of blackmail, or different views on whether certain actions constitute blackmail. Instead, I think they are mostly garden-variety cases of conflicting goals and beliefs that lead to a different take on ce…
One ought to invite a speaker who has seriously considered the possibility that blackmail might be good in certain circumstances, written blog posts about it etc.
https://www.overcomingbias.com/2019/02/checkmate-on-blackmail.html
If I were reading this and didn't know the facts, I would assume the organization you're referring to might be CEA. I want to make clear that CEA didn't threaten EA Munich in any way. I was the one who advised them when they said they were thinking of canceling the event, and I told them I could see either decision being reasonable. CEA absolutely would not have penalized them for continuing with the event if that's how they had decided.
Yes! This was definitely not CEA. I don't have any more info on what organization it is (the organizers just said "an organization").
Sorry, didn't mean to imply that you intended this - just wanted to be sure there wasn't a misunderstanding.
FYI, I read this, didn't know the facts, and it didn't occur to me that the organisation Habryka was referring to was CEA - I think my guess was that it was maybe some other random student group?
As your subsequent caveat implies, whether blackmail is a fair characterisation turns on exactly how substantial this part was. If in fact the decision was driven by non-blackmail considerations, the (great-)grandparent's remarks about it being bad to submit to blackmail are inapposite.
Crucially, (q.v. Daniel's comment), not all instances where someone says (or implies), "If you do X (which I say harms my interests), I'm going to do Y (and Y harms your interests)" are fairly characterised as (essentially equivalent to) blackmail. To give a much lower resolution of Daniel's treatment, if (conditional on you doing X) it would be in my interest to respond with Y independent of any harm it may do to you (and any coercive pull it would have on you doing X in the first place), informing you of my intentions is credibly not a blackmail attempt, but a better-faith "You do X then I do Y is our BATNA here, can we negotiate something better?" (In some treatments these are termed warnings versus thr…
I agree that the right strategy to deal with threats is substantially different from the right strategy to deal with warnings. I think it's a fair and important point. I am not claiming that it is obvious that absolutely clear-cut blackmail occurred, though overall, aggregating over all the evidence I have, it seems very likely (~85%-90%) to me that a situation game-theoretically similar enough to a classical blackmail scenario has played out. I do think your point about how important it is to assess whether we are dealing with a warning or a threat is one of the key pieces I would want people to model when thinking about situations like this, and so your relatively clear explanation of it is appreciated (as well as the reminder for me to keep the costs of premature retaliation in mind).
This just seems like straightforward misrepresentation? What fervid hyper…
https://xkcd.com/137
I know that this is probably about clearly illustrating the emotional impetus behind one viewpoint, but I can't get on board with people going "fuck that shit" at difficult tradeoffs.
I think this is a complex issue, and a confident stance would require a fair bit of time of investigation.
I don't like the emotional hatred going on on both sides. I'd like to see a rational and thoughtful debate here, not a moralistic one. I don't want to be part of a community where people are figuratively tarred and feathered for making difficult decisions. I could imagine that many of us may wind up in similar positions one day.
So I'd like discussion of Robin Hanson to be done thoughtfully, and also discussions of EA Munich to be done thoughtfully.
The [Twitter threads](https://twitter.com/pranomostro1/status/1293267131270864903) seem like a mess to me. There are a few thoughtful comments, but tons of misery (especially from anonymous accounts and the like). I guess this is one thing that Twitter is just naturally quite poor at.
There are a lot of hints that the EA Munich team is exhausted by the response:
"Note that this document tries to represent the views of 8 different people on a controversial topic, compiled within a couple of hours, and is therefore necessarily simplifying."
"Because we're kind of overwhelmed with the situation, we won't be able to r…
To be more clear, I think the snarky comments on Twitter on both sides are a pretty big anti-pattern and should be avoided. They sometimes get lots of likes, which is particularly bad.
I certainly agree that it would be great if the debate was thoughtful on all sides. But I am reluctant to punish emotional responses in these contexts.
When I look at this thread, I see a lack of women participating. Exceptions: Khorton, and Julia clarifying a CEA position. There were also a couple of people whose gender I could not quickly identify.
There are various explanations for this. I am not sure the gender imbalance on this thread is actually worse than on other threads. It could be noise. But I know why I said nothing: I found writing a thoughtful, non-emotional response too hard. I expected to fail because the subject is too upsetting.
This systematically biases the debate in favour of people who bear no emotional cost in participating.
In the 'Recent Discussion' feed of the front page of the EA forum, I found this page between Owen Cotton-Barratt's AMA and this question about insights in longtermist macrostrategy. The AMA had 9 usernames that appeared male to me, no usernames that appeared female to me, and 3 usernames whose gender I couldn't discern. The macrostrategy discussion had 12 names that appeared male to me, 1 that I gathered was female based on this comment, and 3 whose gender I couldn't discern. This should obviously be taken with a grain of salt, since determining gender from usernames is a tricky business.
Interesting, and thanks, Denise, for a different take. When I read Ozzie's comment, I thought he meant that the people leaping to Robin's defense should consider that they might be over-emotional, chill out a bit, and practice their rationality skills. Which, I would agree with. I don't think there's *no* concern that reasonable people could have here. I can think of several concerns, some of which have been pointed out in the comments on this post. But I think people who are freaked out by this one decision seem just as likely to be reacting with the kind of knee-jerk fear, tribalism, confirmation bias, and slippery slope thinking that they'd be quick to criticize in others. This is human, but honestly, it's disappointing. I'm appreciating the more measured responses on this post, though there's still some catastrophizing that seems kind of tiresome. There's so much of that going around in the world, I'd like to see EAs or rationalists handle it better.
Thanks for the points Denise, well taken.
I think the issue of "how rational vs. emotional should we aim to be in key debates" (assuming there is some kind of clean distinction) is quite tricky.
I would point out some quick thoughts, that might be wrong.
1. I'm also curious to better understand why there isn't more discussion by women here. I could imagine a lot of possible reasons for this. It could be that people don't feel comfortable providing emotional responses, but it could also be that people notice that responses on the other side are so emotional that there may be severe punishment.
2. Around the EA community and on Twitter, I see far more emotional-seeming arguments in support of Robin Hanson than against him. Twitter is really the worst at this.
3. Courts have established procedures for ensuring that both judges and juries are relatively unbiased, fair, and (somewhat) rational. There's probably some interesting theory here we could learn from.
4. I could imagine a bunch of scary situations where important communication gets much more emotional. If it instead gets less emotional, that's trickier to detect. I like to think that rationally minded people could help seek out biases like the one you mention and respond accordingly, instead of having to modify a large part of the culture to account for it.
Correctness is not a popularity contest; this feels like an intellectual laundering of groupthink. Also, if you promote a particular view, that *changes* who is going to be a member of the community in the future, as well as who is excluded.
For example, the EA community has decided to exclude Robin Hanson and be more inclusive towards Slate journalists and people who like the opinions of Slate; this defines a future direction for the movement, rather than causing a fixed movement to either flourish or not.
A stark conclusion of "you're going to lose" seems like it's updating too much on a small number of examples.
For every story we hear about someone being cancelled, how many times has such an attempt been unsuccessful (no story) or even led to mutual reconciliation and understanding between the parties (no story)? How many times have niceness, community, and civilization won out over opposing forces?
(I once talked to a professor of mine at Yale who was accused by a student of sharing racist material. It was a misunderstanding. She resolved it with a single brief email to the student, who was glad to have been heard and had no further concerns. No story.)
I'm also not sure what your recommendation is here. Is it "refuse to communicate with people who espouse beliefs of type X"? Is it "create a centralized set of rules for how EA groups invite speakers"?
Thanks for writing this post. I'm glad this incident is getting addressed on the EA forum. I agree with most of the points being made here.
However, I'm not sure that 'becoming more attentive to various kinds of diversity' and maintaining norms that allow for 'the public discussion of ideas likely to cause offense' have to be at odds. In mainstream political discourse it often sounds like this is the case; however, I would like to think that EA might be able to balance these two concerns without making any significant concessions.
The reason I think this might be possible is because discussions among EAs tend to be more nuanced than most mainstream discourse, and because I expect EAs to argue in good faith and to be well intentioned. I find that EA concerns often transcend politics, and so I would expect two EAs with very different political views to be able to have more productive discussions on controversial topics than two non-EAs.
I would really appreciate it if commentators were more careful to speak about this specific instance of uninviting a speaker rather than uninviting speakers in general, or at least clarified why they choose to speak about the general case.
I am not sure whether they choose to speak about the general case because they think uninviting in this particular case would in itself be an appropriate choice but sets up a slippery slope to uninviting more and more speakers, or because they think uninviting in this particular case is already net negative for the movement.
Thanks for writing up your thoughts on the incident and showing that much respect to both sides of the argument!
I'm a bit confused about the last parts (7. and 8.):
1. Would a rephrasing of 8. as "Some of the people who spent a lot of time having private conversations with community members think that EA should be more cautious and attentive to diversity. And some of them don't. So we can't actually draw conclusions from this." be fair?
2. By whom is EA is presented as some kind of restrictive orthodoxy? So far, I did not get the i…