This is a linkpost for https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried#xj4y7vzkg
Try non-paywalled link here.
More damning allegations:
A few quotes:
At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.
Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.
Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.
Even leading EAs have doubts about the shift toward AI. Larissa Hesketh-Rowe, chief operating officer at Leverage Research and the former CEO of the Centre for Effective Altruism, says she was never clear how someone could tell their work was making AI safer. When high-status people in the community said AI risk was a vital research area, others deferred, she says. “No one thinks it explicitly, but you’ll be drawn to agree with the people who, if you agree with them, you’ll be in the cool kids group,” she says. “If you didn’t get it, you weren’t smart enough, or you weren’t good enough.” Hesketh-Rowe, who left her job in 2019, has since become disillusioned with EA and believes the community is engaged in a kind of herd mentality.
In extreme pockets of the rationality community, AI researchers believed their apocalypse-related stress was contributing to psychotic breaks. MIRI employee Jessica Taylor had a job that sometimes involved “imagining extreme AI torture scenarios,” as she described it in a post on LessWrong—the worst possible suffering AI might be able to inflict on people. At work, she says, she and a small team of researchers believed “we might make God, but we might mess up and destroy everything.” In 2017 she was hospitalized for three weeks with delusions that she was “intrinsically evil” and “had destroyed significant parts of the world with my demonic powers,” she wrote in her post. Although she acknowledged taking psychedelics for therapeutic reasons, she also attributed the delusions to her job’s blurring of nightmare scenarios and real life. “In an ordinary patient, having fantasies about being the devil is considered megalomania,” she wrote. “Here the idea naturally followed from my day-to-day social environment and was central to my psychotic breakdown.”
Taylor’s experience wasn’t an isolated incident. It encapsulates the cultural motifs of some rationalists, who often gathered around MIRI or CFAR employees, lived together, and obsessively pushed the edges of social norms, truth and even conscious thought. They referred to outsiders as normies and NPCs, or non-player characters, as in the tertiary townsfolk in a video game who have only a couple things to say and don’t feature in the plot. At house parties, they spent time “debugging” each other, engaging in a confrontational style of interrogation that would supposedly yield more rational thoughts. Sometimes, to probe further, they experimented with psychedelics and tried “jailbreaking” their minds, to crack open their consciousness and make them more influential, or “agentic.” Several people in Taylor’s sphere had similar psychotic episodes. One died by suicide in 2018 and another in 2021.
Within the group, there was an unspoken sense of being the chosen people smart enough to see the truth and save the world, of being “cosmically significant,” says Qiaochu Yuan, a former rationalist.
Yuan started hanging out with the rationalists in 2013 as a math Ph.D. candidate at the University of California at Berkeley. Once he started sincerely entertaining the idea that AI could wipe out humanity in 20 years, he dropped out of school, abandoned the idea of retirement planning, and drifted away from old friends who weren’t dedicating their every waking moment to averting global annihilation. “You can really manipulate people into doing all sorts of crazy stuff if you can convince them that this is how you can help prevent the end of the world,” he says. “Once you get into that frame, it really distorts your ability to care about anything else.”
That inability to care was most apparent when it came to the alleged mistreatment of women in the community, as opportunists used the prospect of impending doom to excuse vile acts of abuse. Within the subculture of rationalists, EAs and AI safety researchers, sexual harassment and abuse are distressingly common, according to interviews with eight women at all levels of the community. Many young, ambitious women described a similar trajectory: They were initially drawn in by the ideas, then became immersed in the social scene. Often that meant attending parties at EA or rationalist group houses or getting added to jargon-filled Facebook Messenger chat groups with hundreds of like-minded people.
The eight women say casual misogyny threaded through the scene. On the low end, Bryk, the rationalist-adjacent writer, says a prominent rationalist once told her condescendingly that she was a “5-year-old in a hot 20-year-old’s body.” Relationships with much older men were common, as was polyamory. Neither is inherently harmful, but several women say those norms became tools to help influential older men get more partners. Keerthana Gopalakrishnan, an AI researcher at Google Brain in her late 20s, attended EA meetups where she was hit on by partnered men who lectured her on how monogamy was outdated and nonmonogamy more evolved. “If you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men” who are sometimes in positions of influence or are directly funding the movement, she wrote on an EA forum about her experiences. Her post was strongly downvoted, and she eventually removed it.
The community’s guiding precepts could be used to justify this kind of behavior. Many within it argued that rationality led to superior conclusions about the world and rendered the moral codes of NPCs obsolete. Sonia Joseph, the woman who moved to the Bay Area to pursue a career in AI, was encouraged when she was 22 to have dinner with a 40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. Joseph says he also argued that it was normal for a 12-year-old girl to have sexual relationships with adult men and that such relationships were a noble way of transferring knowledge to a younger generation. Then, she says, he followed her home and insisted on staying over. She says he slept on the floor of her living room and that she felt unsafe until he left in the morning.
On the extreme end, five women, some of whom spoke on condition of anonymity because they fear retribution, say men in the community committed sexual assault or misconduct against them. In the aftermath, they say, they often had to deal with professional repercussions along with the emotional and social ones. The social scene overlapped heavily with the AI industry in the Bay Area, including founders, executives, investors and researchers. Women who reported sexual abuse, either to the police or community mediators, say they were branded as trouble and ostracized while the men were protected.
In 2018 two people accused Brent Dill, a rationalist who volunteered and worked for CFAR, of abusing them while they were in relationships with him. They were both 19, and he was about twice their age. Both partners said he used drugs and emotional manipulation to pressure them into extreme BDSM scenarios that went far beyond their comfort level. In response to the allegations, a CFAR committee circulated a summary of an investigation it conducted into earlier claims against Dill, which largely exculpated him. “He is aligned with CFAR’s goals and strategy and should be seen as an ally,” the committee wrote, calling him “an important community hub and driver” who “embodies a rare kind of agency and a sense of heroic responsibility.” (After an outcry, CFAR apologized for its “terribly inadequate” response, disbanded the committee and banned Dill from its events. Dill didn’t respond to requests for comment.)
Rochelle Shen, a startup founder who used to run a rationalist-adjacent group house, heard the same justification from a woman in the community who mediated a sexual misconduct allegation. The mediator repeatedly told Shen to keep the possible repercussions for the man in mind. “You don’t want to ruin his career,” Shen recalls her saying. “You want to think about the consequences for the community.”
One woman in the community, who asked not to be identified for fear of reprisals, says she was sexually abused by a prominent AI researcher. After she confronted him, she says, she had job offers rescinded and conference speaking gigs canceled and was disinvited from AI events. She says others in the community told her allegations of misconduct harmed the advancement of AI safety, and one person suggested an agentic option would be to kill herself.
For some of the women who allege abuse within the community, the most devastating part is the disillusionment. Angela Pang, a 28-year-old who got to know rationalists through posts on Quora, remembers the joy she felt when she discovered a community that thought about the world the same way she did. She’d been experimenting with a vegan diet to reduce animal suffering, and she quickly connected with effective altruism’s ideas about optimization. She says she was assaulted by someone in the community who at first acknowledged having done wrong but later denied it. That backpedaling left her feeling doubly violated. “Everyone believed me, but them believing it wasn’t enough,” she says. “You need people who care a lot about abuse.” Pang grew up in a violent household; she says she once witnessed an incident of domestic violence involving her family in the grocery store. Onlookers stared but continued their shopping. This, she says, felt much the same.
The paper clip maximizer, as it’s called, is a potent meme about the pitfalls of maniacal fixation.
Every AI safety researcher knows about the paper clip maximizer. Few seem to grasp the ways this subculture is mimicking that tunnel vision. As AI becomes more powerful, the stakes will only feel higher to those obsessed with their self-assigned quest to keep it under rein. The collateral damage that’s already occurred won’t matter. They’ll be thinking only of their own kind of paper clip: saving the world.
Following CatGoddess, I'm going to share more detail on parts of the article that seemed misleading, or left out important context.
Caveat: I'm not an active member of the in-person EA community or the Bay scene. If there's hot gossip circulating, it probably didn't circulate to me. But I read a lot.
This is a long comment, and my last comment was a long comment, because I've been driving myself crazy trying to figure this stuff out. If the community I (digitally) hang out in is full of bad people and their enablers, I want to find a different community!
But the level of evidence presented in Bloomberg and TIME makes it hard to understand what's actually going on. I'm bothered enough by the weirdness of the epistemic environment that it drove me to stop lurking :-/
I name Michael Vassar here, even though his name wasn't mentioned in the article. Someone asked me to remove that name the last time I did this, and I complied. But now that I'm seeing the same things repeated in multiple places and used to make misleading points, I no longer think it makes sense to hide info about serial abusers who have been kicked out of the movement, especially when that info is easy to... (read more)
I hope a fair read of the subtext of your comment is that available evidence points towards community health concerns being dealt with properly, and that there's not much more the community could do. I want to try to steelman an argument in response to this:
I am not very well connected in "hubs" like London and the Bay Area, but despite a lack of on-the-ground information, I have found examples of poor conduct that go largely unpunished.
Take the example of Kat Woods and Emerson Spartz. Allegations of toxic and abusive behaviour towards employees were made 4 months ago (months after being reported to CEA). Despite Kat Woods denying these concerns and attempting to dismiss and discredit those who attest to their abusive behaviour, both Kat Woods and Emerson Spartz continue to: post on the EA Forum and get largely upvoted; employ EAs; be listed on the EA opportunity board; and control $100,000s in funding. As far as I can tell, Nonlinear-incubated projects (which they largely control) also continue to be largely supported by the community.
I've encountered further evidence of similar levels of misconduct by different actors, largely c... (read more)
I know of multiple people who are currently investigating this. I expect there to be appropriate consequences, though it's not super clear to me yet how to make that happen (like, there is no governing body that could currently force Nonlinear to do anything, but I think there will be a lot of pressure if the accusations turn out to be correct).
I also think many commenters are missing a likely iceberg effect here. The base rate of survivors reporting sexual assault to any kind of authority or external watchdog is low. Thus, an assumption that the journalists at Time and Bloomberg identified all, most, or even a sizable fraction of survivors is not warranted on available information (a rough numerical sketch follows after this comment's list of reasons).
We would expect the journalists to significantly underidentify potential cases because:
Some survivors choose to tell no one, only professional supporters like therapists, or only people they can trust to keep the info confidential. Journalists will almost never find these survivors even with tons of resources.
Some survivors could probably be identified by a more extensive journalistic investigation, but journalism isn't a cash cow anymore. The news org has to balance the value it internalizes from additional investigation against the cost of digging deeper. (This also explains why the stories in news articles are likely skewed toward ones that were already publicly known, relative to the true share of all stories that are public.)
There are also many reasons a survivor known to a journalist may decide not to agree to be a source, like:... (read more)
[Edit: If you want a visual analogy about discovery, but one that doesn't overweight any one perspective, might I suggest the parable of the blind men and the elephant? https://en.wikipedia.org/wiki/Blind_men_and_an_elephant ]
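To make the base-rate point above concrete, here is a minimal sketch with made-up illustrative rates (none of these numbers come from the Bloomberg or TIME articles): if only some survivors are findable at all, published cases read more like a floor than an estimate of the true total.

```python
# Minimal sketch of the iceberg point, using made-up illustrative rates
# (hypothetical assumptions, not data from the articles).
published_cases = 8     # e.g., the women a journalist managed to interview
reporting_rate = 0.25   # hypothetical: share of survivors who tell anyone findable
discovery_rate = 0.50   # hypothetical: share of those a journalist actually finds

implied_total = published_cases / (reporting_rate * discovery_rate)
print(f"{published_cases} published cases -> roughly {implied_total:.0f} implied total")
# Under these assumed rates, 8 published cases would imply ~64 cases in total.
```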
First of all, it's a bit patronizing that you imply that people who aren't updating and handwringing on the Bloomberg piece haven't considered the iceberg effect and uncounted victims. The iceberg effect has been mentioned in discussion many times before, and to any of us who care about sexual misconduct it was already an obvious possibility.
Second, the opinions of those of us who don't have problems with the EA community any worse than anywhere else (in fact some of us think it is better than other places!), also matter. Frankly I'm tired of current positive reports from women being downgraded and salacious reports (even if very old) being given all the publicity. So it's a bit of a tangent, but I'll say it here: I'm a woman and I enjoy the EA community and support how gender-related experiences are handled when they are reported. [I've been all the way down my side of the iceberg and I have not experienced anything in EA that implies that things are worse here than ... (read more)
I needed to walk away from this thread due to some unrelated stressful drama at work, which seems to have resolved earlier this week. So I took today off to recover from it. :) I wanted to return to this in part to point out what I think are some potential cruxes, since I expect some of the same cruxes will continue to come up in further discussions of these topics down the road.
1. I think we may have different assumptions or beliefs about the credibility of internal data-gathering versus independent data-gathering. Although the review into handling of the Owen situation is being handled by an outside firm, I don't believe the broader inquiry you linked is.
I generally don't update significantly on internal reporting by an organization which has an incentive to paint a rosy picture of things. That isn't anti-CEA animus; I feel the same way about religious groups, professional sports leagues, and any number of other organizations/movements.
In contrast, an outside professional firm would bring much more credibility to assessing the situation. If you want to get as close to ground truth as possible, you don't want someone with an incentive to sell more newspapers or someone hosti... (read more)
[Deleting the earlier part of my comment because it involved an anonymized allegation of misconduct I made, that upon reflection, I feel uncomfortable making public.]
I also want to state, in response to Ivy's comment, that I am a woman in EA who has been demoralized by my experience of casual sexism within it. I've not experienced sexual harassment. But the way the Bloomberg piece describes how EAs/rats talk about women feels very familiar to me (as someone who interacts only with EAs and not rats). E.g., "5-year-old in a hot 20-year-old's body" or introducing a woman as "ratbait."
Hi, to reply to your last paragraph: I am sorry you have been on the receiving end of such comments. You say they are not "sexual harassment," but I want to help provide clarity and a path to resolution by suggesting that, depending on context, the comments you have received may indeed be sexual harassment. Sorry I don't have US/CA law on hand to share, but I'd guess it would be similar to the UK law on harassment (it's very short and worth reading!). I recommend readers pay close attention to sections 1.b. and section 4. Also, intention to harass is usually not a relevant factor.[1]
While I recommend people try to keep in mind cultural differences as discussed here rather than always assuming bad intent (I've been on the receiving end of some ribbing from actual friends of mine for being "hot" in EA, which I dish back in different ways), it looks to me like you are already being very careful about what you report (as most women are). So I'd like to also encourage you and other women to look closely and consider whether comments you receive might actually be harassment, intended or even on a technicality. If the comment feels demeaning, including assuming too much familiarity, plea... (read more)
Jacy's org received funding from the SFF in 2022, if you consider that EA funding. More weakly, his organization is also advertised on the 80,000 hours job board. He also recently tried to seek funding and recruit on the forum (until he deleted his post, plausibly due to pushback), and thus still benefits from EA institutions and infrastructure even if that doesn't look like direct funding.
Forgive me for using an anonymous account, but I'm in the process of applying for SFF funding and I don't want to risk any uncomfortable consequences. However, I can't stay silent any longer – it's painfully obvious that the SFF has little regard for combating sexual harassment. The fact that Jacy was awarded funding is a blatant example, but what's more concerning is that Michael Vassar, a known EA antagonist, still appears to be involved with the SFF to this day.
It's alarming how Vassar uses his grant-making powers to build alliances in the EA community. He initiated a grant to Peter Eckersley's organization AOI after Peter's death. Peter was strongly against Vassar. Vassar seemed pleased that Peter's successor Deger Turan doesn't have the same moral compass.
But... this comment is false as far as I can tell? Like, I didn't express myself ideally in my comment below (which I think deserves the downvotes and the lower visibility), and it's an understandable misunderstanding to think that Michael still somehow has some kind of speculation-grant budget, but at least to me it really looks like he is no longer involved.
Just to excerpt the relevant sections from the thread below:
Hey, can I just check a thing? Do people really think that someone asking other people out [Edit: okay, thinking this is all he did has problems, because it requires taking his apology at face value despite how seriously CEA took the claims] means that they should never be allowed to return to impactful work and request (and receive) funding? So treat this comment as a poll.
Case details: (and agreevote directions below those)
[EDIT: Apologies, I wrote this hastily and it might be that he never did as much as I first implied, but then others feel he might have done worse. I recommend you make your own conclusions about Jacy by (1) reading pseudonym's comment below this one and (2) visiting Jacy's apology yourself: https://forum.effectivealtruism.org/posts/8XdAvioKZjAnzbogf/apology ]
[The rest has been edited:]
I don't know that much about the case, but IIRC Jacy was apologizing for asking some women out on dates, clumsily. He did this online on FB Messenger. I think before that apology he [was alleged to have done some inappropriate things] in parts of the animal advocacy community related to him, but had sworn not to do so again (a promise it looks like he probably kept). Anyway, he was apolo... (read more)
I may chime back in about the object-level question around the case soon, but I do want to flag in the interim that this comment's suggestion that "Jacy had asked some women out on dates" is likely to be a misleading interpretation of the actual events. See also this thread, and this comment.
My view is that whether someone receives funding depends on the kind of work they are doing, as well as the level of risk they present to the community. On replaceability - he is pivoting to AI safety work. Would you say his difficult-to-replace nature in the animal space, to whatever extent this is true, translates to his AI safety work? His latest post was about establishing "a network of DM research collaborators and advisors". Is he difficult to replace in this context also?
I think it's fine for him to independently do research, but whether he should be associated with the EA community, receive EA funding for his work, or be in a position where he can be exposed to more victims (recruiting researchers) is less clear to me and depends on the details of what happened.
There has been no additional communication from CEA or Jacy acknowledging what actually happened, or why we can trust that Jacy has ta... (read more)
Do you have details of his college expulsion and the accusations? I honestly couldn't find them. After going through the whole discussion of his apology, I could only find his own letter about it from 10 years prior saying it was a wrongful expulsion, and someone linked some other cases of Brown doing a poor job on sexual misconduct cases: IIRC, other courts deemed that the Brown committee mishandled cases of students accused of sexual misconduct. It appears in one case (not necessarily Jacy's, but I've seen this happen myself elsewhere, so I'd actually bet more likely than not that if it was allowed to happen one time it happened in Jacy's case too) that the students had banded together and written letters of unsubstantiated rumors to the Brown committee (e.g., assuming what they'd heard in the gossip mill to be true and then trying to make sure the committee "knew" the unsubstantiated rumors, perhaps stating them as fact rather than relaying how they had heard them), and the Brown committee actually did use the letters as evidence in the university tribunal. The actual US court said that Brown, in doing this, went against due process. To reiterate, that was another Brown... (read more)
First, I want to broadly agree that distant information is less valuable, and no one should be judged by their college behavior forever. I learned about the Brown accusation in 2016 (with some additional information that I lack permission to pass on, and that I don't know the source well enough to pass on anyway) and did nothing beyond talking to the person and passing it on to Julia*, specifically because I didn't want a few bad choices while young to haunt someone forever.
[*It's been a while, I can't remember whether I told Julia or encouraged the other person to do so, but she got told one way or another]
The reason I think the college accusations are relevant is that, while I tentatively agree he shouldn't face more consequences for the college accusations, they definitely speak to Ariel's claim that there's been no recidivism, and in general they shift my probability distribution over what he was apologizing for.
I don't necessarily think these concerns should have prevented the grant, or that SFF has an obligation to explain to me why they gave the grant. I wouldn't have made that grant, for lots of reasons, but that's fine, and I generally think the EA community acts too entitled ... (read more)
What's an acceptable amount of money, and what's an unacceptable amount of money?
I didn't make a claim personally that SFF was EA funding, which is why I said "if", though I think many people would consider SFF a funder that was EA-aligned. They have an EA forum page, their description is "Our goal is to bring financial support to organizations working to improve humanity’s long-term prospects for survival and flourishing." which sounds pretty EA, and they were included in a List of EA funding opportunities, with no pushback about their inclusion as a "funder that is part of the EA community" (OTOH, they are not included in the airtable by Effective Thesis)
I don't really understand what you mean by a process that gives an organization $ that isn't seen as endorsement of the organization. Can you clarify what you mean here?
He's currently listed on the website as co-founder, and he was the one who shared the post that included the call for funding and job application on the EA forum. His bio says "@SentienceInst researching AI safety".
What gives you the impression that he is no longer officially involved?
Michael Vassar is still active in the EA community as a grant giver at SFF. He recently initiated a grant to the new president of AOI after the death of the founder, Peter Eckersley, which reflects poorly on his successor.
Peter Eckersley had a strong moral compass and stayed far away from Vassar. The new president, Deger Turan, was either clueless or careless when he sold out Peter's legacy.
Hey! Angela Pang here. I am working on a unified statement with the person who I am referring to in that excerpt, Robert Cordwell: https://www.facebook.com/angela.pang.1337/posts/pfbid034KYHRVRkcqqMaUemV2BHAjEeti98FFqcBeRsPNfHKxdNjRASTt1dDqZehMp1mjxKl?comment_id=1604215313376156&notif_id=1678378897534407&notif_t=feed_comment&ref=notif
I actually wanted to say that I felt like Julia Wise handled my case extremely respectfully, though there still wasn't enough structural support to reach a solution I would've been satisfied with (Julia Wise wanted to ban him, whereas I was hoping for more options, but it seems like other reports came in a few weeks ago so he's banned now), but that can change.
I consider most of EA quite respectful, though I witnessed sexual harassment at least once at EAG (which I don't think was reported, since the woman in question called it out quickly and the man apologized). CEA handles reports well, though I've only reported Robert.
My complaint lies with the rationalist culture and certain parts of the rationalist community much, much more than CEA, since the lack of moderation leads to missing stairs feeling empowered. Overall, I think CEA did a decent... (read more)
I should clarify that "particularly bad" should be "unusually bad", and by "unusually" I mean "unusual by the standards of human behavior in other professional/intellectual communities".
If someone writes an article about the murder epidemic in New York City, and someone else points out that the NYC murder rate is not at all unusual by U.S. standards, and that murder tends to be common throughout human society, is that a trivializing thing to say?
You can believe a lot of things at once:
- Murder is terrible
- 433 murders is 433 too many
- Murderers should be removed from society for a long time
- NYC should strongly consider taking further action aimed at preventing murder
- The NYC murder rate doesn't point to NYC being more dangerous than other cities
- People in NYC shouldn't feel especially unsafe
- People who want to get involved in theater should still consider moving to NYC
- Some of the actions NYC could take to try preventing murder would likely be helpful
- Other actions would likely be unhelpful on net, either failing to prevent murder or causing other serious problems
- Focusing on the murder rate as a top-priority issue would have both good and bad impacts for NYC, and there may be other problems th
Furthermore, if a community wants to command billions of dollars and exert major influence on one of the world's most important industries, it is both totally predictable and appropriate that society will scrutinize that community more rigorously and hold it to a higher standard than a group of "NPCs".
edit: typo
I think this article paints a fairly misleading picture, in a way that's difficult for me to not construe as deliberate.
It doesn't provide dates for most of the incidents it describes, even though many of them happened years ago, and thereby seems to imply that all the bad stuff brought up is ongoing. To my knowledge, no MIRI researcher has had a psychotic break in ~a decade. Brent Dill is banned from entering the group house I live in. I was told by a friend that Michael Vassar (the person who followed Sonia Joseph home and slept on her floor despite it making her uncomfortable, also an alleged perpetrator of sexual assault) is barred from Slate Star Codex meetups.
The article strongly reads to me as if it's saying that these things aren't the case, that the various transgressors didn't face any repercussions and remained esteemed members of the community.
Obviously it's bad that people were assaulted, harassed, and abused at all, regardless of how long ago it happened. It's probably good for people to know that these things happened. But the article seems to assume that all these things are still happening, and it seems to be drawing conclusions on ... (read more)
It's unsurprising that the people who were willing to allow Bloomberg to print their names or identifying information about the wrongdoers were associated with situations where the community has rallied against the wrongdoer. It's also unsurprising that those who were met with criticism, retaliation, and other efforts to protect the wrongdoer were not willing to allow publication of potentially identifying information. Therefore, I don't think it's warranted to draw inferences about community response in the cases without identifying information based on the response in cases with that information.
It would be helpful if the article mentioned both the status of the wrongdoer at the time of the incident and their current status in the relevant community.
This comment is gold. I believe there is an iceberg effect here-- EA cannot measure the number of times an accuser attempted to say something but got shut down or retaliated against by the community.
Personally, I would like to see discussion shift toward how to create a safe environment for all genders, and how to respond to accusers appropriately.
One book that I recommend is Citadels of Pride, which goes into institutional abuse in Hollywood and the arts scene. The patterns are similar: lack of clear boundaries between work/life, men in positions of power commanding capital, high costs to say anything, lack of feedback mechanisms in reporting. I am thankful that CEA is upping its efforts here; however, I also see that the amorphous nature (especially in the Bay Area) of these subcultures makes things difficult. It seems that most of the toxicity comes from the rationalist community, where there are virtually no mechanisms of feedback.
I am in touch with some of the women in the article, and they tell me that they feel safe speaking up now that they're no longer associated with these circles and have built separate networks. However, I agree that EA is very heterogeneous and diffuse... (read more)
Okay so you have noted 2 possible types of victims:
I just want to (respectfully) flag that there are possible third and fourth groups of victims (and likely more possibilities too, tbh):
People who reported and were met with support, but who now want to continue to use a handled incident as proof of problems. These people would avoid using names to avoid claims of dishonestly controlling the narrative.
People who never reported their incidents to the community at all, and therefore could not be met with either support or criticism. If I were in this group of people who didn't report, I would also not use my name when talking to a journalist, to avoid upset about taking a complaint public before allowing the community to actually handle something they might absolutely have wanted to handle.
I'm not saying that's what's going on; it definitely looks like at least one person from group 2 is present. But I just want to note for readers that you also can't simplify the range of possibilities for name-redacted victims into only one group.
With the exception of Brent, who is fully ostracized afaik, I think you seriously understate how much support these abusers still have. My model is sadly that a decent number of important rationalists and EAs just don't care that much about the sort of behavior in the article. CFAR investigated Brent and stood by him until there was public outcry! I will repost what Anna Salamon wrote a year ago, long after his misdeeds were well known. Lots of people have been updating TOWARD Vassar:
This says very bad things about the leadership of CFAR, and probably other CFAR staff (to the extent that they either agreed with leadership or failed to push back hard enough, though the latter can be hard to do).
It seems to say good things about the public that did the outcry, which at the time felt to me like "almost everyone outside of CFAR". Everyone* yelled at a venerable and respected org until they stopped doing bad stuff. Is this a negative update against EA/rationality, or a positive one?
*It's entirely possible that there were private whisper networks supporting Brent/attacking his accusers, or even public posts defending him that I missed. But it felt to me like the overwhelming community sentiment was "get Brent the hell out of here".
I think it's a negative update, since lots of the people with bad judgment remained in positions of power. This remains true even if some people were forced out. AFAIK Mike Valentine was forced out of CFAR for his connections to Brent, in particular greenlighting Brent meeting with a very young person alone, though I don't have proof of this specific incident. Unsurprisingly, those Anna Salamon defended post-Brent included Mike Vassar.
It's worth noting that the article was explicit that ex-MIRI researcher Jessica Taylor's psychotic break was in 2017:
She also alleged in December 2021 that at least two other MIRI employees had experienced psychosis in the past few years:
Re: the MIRI employees, it seems relevant that they're "former" rather than current employees, given that you'd expect there to be more former than current employees, and former employees presumably don't have MIRI as a major figure in their lives.
He was banned, but still managed to slip through the cracks enough to be invited to an SSC online meetup in 2020. (To be very clear, this was not organised or endorsed by Scott Alexander, who did ban Vassar from his events.)
You can read the mea culpa from the organiser here. It really looks to me like Vassar has been treated with a missing-stair approach until very recently, where those in the know quietly disinvite him from things but others, even within the community, are unaware. Even in the comments here, where some very harsh allegations are made against him, people are still being urged not to "ostracise" him, which to me seems like an entirely appropriate action.
Neither Scott's banning of Vassar nor the REACH banning was quiet. It's just that there's no process by which those people who organize Slate Star Codex meetups are made aware.
It turns out that plenty of people who organize Slate Star Codex meetups are not in touch with Bay Area community drama. The person who organized that SSC online meetup was from Israel.
That's because some of the harsh allegations don't seem to hold up. Scott Alexander spent a significant amount of time investigating and came up with:
This definitely indicates a mishandling of the situation that leaves room for improvement. In a better world, somebody would have spotted the talk before it went ahead. As it is now, it made it (falsely) look like he was endorsed by SSC, which I hope we can agree is not something we want. We already know he's been using his connection with Yud (via HPMOR) to try and seduce people.
With regards to the latter, if someone was triggering psychotic breaks in my community, I would feel no shame in kicking them out, even if it was unintentional. There is no democratic right to participate in one particular subculture. Ostracism is an appropriate response for far less than this.
I'm particularly concerned with the Anna Salamon statement that sapphire posted above, where she apologises to him for the ostracisation, and says she recommends inviting him to future SSC meetups. This is going in the exact wrong direction, and seems like an indicator that the rationalists are poorly handling abuse.
I think these were relatively quiet. The only public thing I can find about REACH is this post where Ben objects to it, and Scott's listing was just as "Michael A" and then later "Michael V".
Someone on the LessWrong crosspost linked this relevant thing: https://slatestarcodex.com/2015/09/16/cardiologists-and-chinese-robbers/
The "chinese robber fallacy" is being overstretched, in my opinion. All it says is that having many examples of X behaviour within a group doesn't necessarily prove that X is worse than average within that group. But that doesn't mean it isn't worse than average. I could easily imagine the catholic church throwing this type of link out in response to the first bombshell articles about abuse.
Most importantly, we shouldn't be aiming for average, we should be aiming for excellence. And I think the poor response to a lot of the incidents described is pretty strong evidence that excellence is not being achieved on this matter.
In the absence of evidence that rationalism is uniquely good at dealing with sexual harassment (it isn't), the prior assumption about the level of misconduct should be "average", not "excellent". Which means that there is room for improvement.
Even if these stories do not update your beliefs about the level of misconduct in the communities, they do give you information about how misconduct is happening, and point to areas that can be improved. I must admit I am baffled as to why the immediate response seems to be mostly about attacking the media, instead of trying to use this new information to figure out how to protect your community.
None of this was news to the people who use LessWrong.
The time to have a conversation about what went wrong and what a community can do better is immediately after you learn that the thing happened. If you search for the names of the people involved, you'll see that LessWrong did that at length.
The worst possible time to bring the topic up again, is when someone writes a misleading article for the express purpose of hurting you, which was not written to be helpful and purposefully lacks the context that it would require in order to be helpful. Why would you give someone a button they can press to make your forum talk for weeks about nothing?
It was a low-quality article and was downvoted so fewer people saw it. I wish the same had happened here.
These stories are horrifying. I want to thank the victims for speaking up, I know it can't be easy.
It's worth noting that while some of these allegations overlap with the ones in the TIME article, a lot of them are new. This article also makes more of an effort to distinguish between the EA and rationalist communities, which are fairly close but separate. I think most, but not all, of the new allegations are more closely tied to rationalism than EA, but I could be wrong.
I find the comments there to be rather poorly reasoned. Kicking out two abusive people from your community does not mean there is no problem, especially when both cases were terribly handled. And just because a newspaper has a slant, it doesn't mean that allegations are not real.
To riff off a particularly disturbing line in the article:
Anyone who as a member of the AI safety community has committed, or is committing, sexual assault is harming the advancement of AI safety, and this Forum poster suggests that an agentic option for those people would be to remove themselves from the community as soon as possible. (I mean go find a non-AI job at Google or something.)
Whoever suggested to a survivor that they should consider death by suicide should also leave ASAP.
[Edit to add: My sentiment is not limited to sexual assault; many forms of sexual misconduct that do not involve assault warrant the same sentiment.]
I share this sentiment.
What you're referring to in the last sentence sounds like evil that doesn't even bother to hide.
But this other part maybe warrants a bit of engagement:
If the allegations are true and serious, then I think it makes sense even just on deterrence grounds for people to have their pursuits harmed, no matter their entanglement with EA/AI safety or their ability to contribute to important causes. In addition, even if we went with the act utilitarian logic of "how much good can this person do?," I don't buy that interpersonally callous, predatory individuals are a good thing for a research community (no matter how smart or accomplished they seem). Finding out that someone does stuff that warrants their exclusion from the community (and damages its reputation) is really strong evidence that they weren't serious enough about having positive impact. One would have to be scarily good at mental gymnastics to think otherwise, to think that this isn't a bad sign about someone's commitment and orientation to have impact. (It's already suspicious most researchers in EA ... (read more)
This claim seems misleading at best
12 Nov: +24 post score
14 Nov: +61 post score
Edit: That's not to say I disagree with the central thrust of the article -- I find it plausible, and I wish the community health team was able to handle this problem more effectively. I hope they are trying to figure out what went wrong in cases like Angela Pang's.
Hey! Angela Pang here. I am working on a unified statement with the person who I am referring to in that excerpt, Robert Cordwell: https://www.facebook.com/angela.pang.1337/posts/pfbid034KYHRVRkcqqMaUemV2BHAjEeti98FFqcBeRsPNfHKxdNjRASTt1dDqZehMp1mjxKl?comment_id=1604215313376156&notif_id=1678378897534407&notif_t=feed_comment&ref=notif
I actually wanted to say that I felt like Julia Wise handled my case extremely respectfully, though there still wasn't enough structural support to reach a solution I would've been satisfied with (Julia Wise wanted to ban him, whereas I was hoping for more options), but that can change.
My complaint lies with the culture and certain parts of the rationalist community much, much more than CEA, since the lack of moderation leads to abusers feeling empowered. Overall, I think CEA did a decent job with my case at least, and I appreciate Julia Wise's help.
Among various emotions, I'm really sad and disappointed at hearing about the multiple survivor reports that the relevant community's response to their stories was survivor-blaming and/or retaliation. In my view, that speaks to a more widespread pathology that can't be minimized by claiming there were a few bad apples in the past who have been dealt with. It seems to reflect a more widely accepted, yet profoundly disturbing, idea that those with power and skill can trample on others who are expected to not rock the boat.
Minor compared to much more important points other people can be making, but highlighting this line:
Wow, this is an interesting framing of Yudkowsky writing him in as literal Voldemort.
Maybe there's a lesson about trustworthiness and interpersonal dynamics here somewhere.
I think journalists are often imprecise and I wouldn't read too much into the particular synonym of "said" that was chosen.
Some of the behaviour described here happened solely in the rationalist community. This isn't a rationalist forum. While we can discuss it, we don't need to defend the rationalists (and I'd say I am one). They can look after themselves and there is a reason that EA and rationalism are separate. I think at times we want to take responsibility for everything, but some of this just didn't happen in EA community spaces.
Some of this behaviour is in EA spaces and that's different.
It seems to me worth discussing needs a bit here. There seem to be different parties in this discussion. So what are their needs?
Accusers - I guess these people want to feel safe and secure, to think they will be taken seriously, and to know their need for personal safety won't be ignored again. Perhaps they also want not to feel insulted, as some of this discussion might leave them feeling, even if they are no longer part of the community. They also seem to want to feel safe, and so to have the names in the article stay anonymous.
Those worried about these events - I guess these people desire to feel safe and secure in EA and confident that their friends will be safe and secure in EA. They want to be able to not think about this stuff very much. Perhaps it makes them anxious and disturbs their work.
Rationalists - Rationalists generally want events to be discussed accurately and precisely. In particular, a story that's generally accurate but gets key details wrong seems to upset them. They desire clarity on who has been kicked out of the community and when these events happened. In short, to be able to judge if the community performed well or badly here.
[Rough group] - I sense a gro... (read more)
I'm actually confused about why this got so many downvotes, as I didn't think I was saying anything controversial. Can someone explain?
This is one of those situations where I'd prefer to see the gender and background of the commenter before reading the comment so I can understand and adjust for their bias.
Because that's what allows you to really estimate the epistemic status of the commentary, and not what seems to be the logic/rationality behind it. I imagine that's not how you're used to thinking, but I think that's how confirmation bias works in cases like this.
So: I am a woman, and I have had experience of the abuse of power dynamics and the manipulation of save-the-world ideas, plus exploitation by high-status men, within the rationalist and EA community. Having watched it from the inside (not even in the Bay Area), I can confirm most of the general points about these dynamics.
Society and its set of dynamics are so varied that you simply cannot make enough adjustments (if you really want to maintain the status quo).
I see a different power dynamic than you (by "you" I mean some commenters who say the article is exaggerating), and it's not about a few individual black sheep; it's about rotten, toxic institutions in general that you are doomed to reproduce over and over again until you are completely transparent about your motives, respect the person and her happiness, and start your efforts to save the world more modestly, without the bullshit about heroic responsibility that turns people into functions. Only from an excess of internal resources, interest, and prosperity will we save the world so that it does not turn into another hell that is not worth saving.
One of the quotes is:
I think the implication here is that if you are working on global poverty or animal welfare, you must not be smart enough or quantitative enough. I'm not deeply involved so I don't know if this quote is accurate or not.
Edit: This statement is about my personal experience in the biggest EA AI safety hub. It’s not intended to be anything more than anecdotal evidence, and leaves plenty of room for other experiences. Thanks to others for pointing out this wasn’t clear.
I'm part of the AI-oriented community this part is referring to, and have felt a lot of pressure to abandon work on other cause areas to work on AI safety (which I have rejected). In my experience it is not condescending at all. They definitely do not consider people who work in other cause areas less smart or quantitative. They are passionate about their cause so these conversations come up, but the conversations are very respectful and display deep open-mindedness. A lot of the pressure is also not intentional but just comes from the fact that everyone around you is working on AI.
I think this is an empirical question, and likely varies between communities, so "definitely do not..." seems too strong. For example, here's Gregory Lewis, a fairly senior and well-respected EA, commenting on different cause areas (emphasis added):
I wouldn't be surprised if other people shared this view.
Hi Sonia,
You may not have the whole picture.
Source: https://www.vox.com/future-perfect/23569519/effective-altrusim-sam-bankman-fried-will-macaskill-ea-risk-decentralization-philanthropy
Thanks for sharing this important information!
I want to add a couple important points from the Vox article that weren't explicit in your comment.
- This proposal was discarded
- The professional field scores were not necessarily supposed to be measuring intelligence. PELTIV was intended to measure many different things. To me, professional field fits more into the "value aligned" category, although I respect that other interpretations based on other personal experiences with high-status EAs could be just as valid.
I agree that work on AI safety is a higher priority for much of EA leadership than other cause areas now.
I suggest a period of talking about feelings and things we agree on, and then we wait 3 days to actually discuss the object-level claims. Often I think we don't get consensus on the easy stuff before we move to the hard stuff.
I imagine we mostly agree that people being upset is sad and that, in a better world, none of these people would be upset.
Personally I'm sad and a bit frustrated and a bit defensive.
This is so much more damning than the Time article. It includes deeply disturbing details instead of references to people's feelings. We need to do so much more soul searching over this than we did over the Time article. [Edit: I've been very critical of the Time article, and don't have an opinion about whether we should be doing more soul-searching on sexual misconduct overall than we are already]. I found the contrast between the two descriptions of Joseph's dinner with the older man particularly troubling.
This is the description in the Time article... (read more)
It's definitely important! It's also important to note that this person has likely already been banned from CEA events for 5 years and some other EA spaces: https://forum.effectivealtruism.org/posts/JCyX29F77Jak5gbwq/ea-sexual-harassment-and-abuse?commentId=jKJ4kLq8e6RZtTe2P
I honestly can't comment on how rationalists feel about it and what they have to learn. But I don't think non-rat EAs necessarily have to do "so much more soul searching"[edit: than we are already doing] about this. After all this entire piece is basically about the rationality community.
Oh awesome! That's a huge relief that this specific person has likely already been dealt with. It's a shame they didn't mention that in this article either.
Mentioning that in the article would have defeated the purpose of writing it, for the person who wrote it.
It is a shame – and I would guess a very deliberate one.
I've been a user on LessWrong for a long time, and these events have resurfaced several times that I remember already, always instigated by something like this article, and many people discovering the evidence and allegations jump to the conclusion that 'the community' needs to do some "soul searching" about it all.
And this recurring dynamic is extra frustrating and heated because the 'community members', including people that are purely/mostly just users of the site, aren't even the same group of people each time. Older users try to point out the history and new users sort themselves into warring groups, e.g. 'this community/site is awful/terrible/toxic/"rape culture"' or 'WTF, I had nothing to do with this!?'.
Having observed several different kinds of 'communities' try to handle this stuff, rationality/LessWrong and EA groups are – far and away – much better at actually effectively addressing it than anyone else.
People should almost certainly remain vigilant against bad behavior – as I'm sure they are – but they should also be proud of doing as good of a job as they have, especially given how hard of a job it is.
Given the gender ratio in EA and rationality, it would be surprising if women in EA/rationality didn’t experience more harassment than women in other social settings with more even gender ratios.
Consider a simplified case: suppose 1% of guys harass women and EA/rationality events are 10% women. Then in a group of 1000 EAs/rationalists there would be 9 harassers targeting 100 women. But if the gender ratio was even, then there would be 5 harassers targeting 500 women. So the probability of each woman being targeted by a harasser is lower in a group with mor... (read more)
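A minimal sketch of the arithmetic above, reusing the comment's hypothetical 1% harasser rate (all numbers here are the comment's illustrative assumptions, not measurements):

```python
# Sketch of the comment's arithmetic: expected harassers per woman when a
# fixed fraction of men harass and all of them target women in the group.
def harassers_per_woman(group_size: int, frac_women: float, harasser_rate: float) -> float:
    women = group_size * frac_women
    men = group_size - women
    return (men * harasser_rate) / women

print(harassers_per_woman(1000, 0.10, 0.01))  # skewed 1:9 ratio -> 0.09 (9 harassers per 100 women)
print(harassers_per_woman(1000, 0.50, 0.01))  # even ratio       -> 0.01 (5 harassers per 500 women)
```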
I agree it wouldn't exactly be surprising by default but both communities are very high conscientiousness and EA specifically is meant to be a community of altruists? I know you sort of mentioned that, but honestly I think it should count for quite a lot if we are just doing conjecture here?
And again, the two communities (EA and rationality) are getting tied together here? On gender ratio: EA has a 1:2 gender ratio, which is honestly not that horrible for a community that grew out of tech, philosophy, and econ. Obviously I want to improve it very very much, but I kinda wish people would stop saying it is so incredibly uneven in a way that implies we can expect sexual misconduct to be egregious under the surface? 1:2 is about the gender ratio of the climbing gym I attend, and I don't expect sexual misconduct to be egregious under the surface there? (but I do expect there to be some instances women have reported, and I would expect that even if it were 1:1) Now compare that 1:2 ratio to the 1:9 gender ratio at best in rationality; well yeah, that's probably gonna feel bad as a woman within rationality: even if rationalist conscientiousness bore out so that rat men do 1/... (read more)
Personally, I think this article was kind of sloppily written, but I still think the situation it describes is worth spending a lot of time trying to understand.
My sense is that a lot of people really care about us handling this well, so I want to try and do so.
And in the ones where we know the accused, do people think the right thing happened?
In those where we don't, I'd like to know what outcomes people would have liked to have happened.
I guess personally I struggle since it feels like there is energy to "do something" but I don't understand ho... (read more)
[EDIT: Okay I guess the current top comment is enough. FWIW I never meant to imply that the discussion already happening was not of good quality, but just that I don't want to see people's time and energy wasted, nor do I want to see people's concern spiked for little reason. I still hope, if this is not a job for forum mods, that the community health team chimes in much like they did here on the crossposted Time piece, but yes, perhaps this is not the job for mods and maybe I should weakly hold that it is not necessary anyway]
Mods, can you please write a comme... (read more)