I have personally heard several CFAR employees and contractors use the word "debugging" to describe all psychological practices, including psychological practices done in large groups of community members. These group sessions were fairly common.
In that section of the transcript, the only part that looks false to me is the implication that there was widespread pressure to engage in these group psychology practices, rather than it just being an option that was around. I have heard from people in CFAR who were put under strong personal and professional pressure to engage in *one-on-one* psychological practices which they did not want to do, but these cases were all within the inner ring and AFAIK not widespread. I never heard any stories of people put under pressure to engage in *group* psychological practices they did not want to do.
This looks pretty much right, as a description of how EA has responded tactically to important events and vibe shifts. Nevertheless it doesn't answer OP's questions, which I'll repeat:
Your reply is not about new ideas, or the movement acknowledging it was wrong (except about Bankman-Fried personally, which doesn't seem like what OP is asking about), or new organizations.
It seems important to me that EA's history over the last two years is instead mainly the story of changes in funding, in popular discourse, and in the social strategy of preexisting institutions. For example, the FLI pause letter was the start of a significant PR campaign, but all the *ideas* in it would've been perfectly familiar to an EA in 2014 (except for "Should we let machines flood our information channels with propaganda and untruth?", which is a consequence of then-unexpected developments in AI technology rather than of intellectual work by EAs).
IIRC, while most of Alameda's early staff came from EA, the early investment came largely from Jaan Tallinn, a big Rationalist donor. This was a for-profit investment, not a donation, but I would guess that the overlapping EA/Rationalist social networks made the deal possible.
That said, once Bankman-Fried got big and successful he didn't lean on Rationalist branding or affiliations at all, and he made a point of directing his "existential risk" funding to biological/pandemic stuff but not AI stuff.
> the costs of a bad hire are somewhat bounded, as they can eventually be let go.
This depends a lot on what "eventually" means in practice. If a bad hire sticks around for years—or even decades, as happened at the organization of one of my close relatives—then the downside risk is huge.
OTOH my employer is able to fire underperforming people after two or three months, which means we can take chances on people who show potential even if there are some yellow flags. This has paid off enormously. For example, one of our best people had a history of getting into disruptive arguments in nonprofessional contexts, but we had reason to think this wouldn't be an issue at our place. We were right, as it turned out; but if we had lacked the ability to fire relatively quickly, I wouldn't have rolled those dice.
The best advice I've heard for threading this needle is "Hire fast, fire fast". But firing people is the most unpleasant thing a leader will ever have to do, so a lot of people do it less than they should.
I can readily believe the core claims in this post, and I'm sure it's a frustrating situation for non-native English speakers. That said, it's worth keeping in mind that for most professional EA roles, and especially for "thought leadership", English-language communication ability is one of the most critical skills for doing the job well. It is not a problem that people who grew up practicing this skill will be "overrepresented" in these positions.
There is certainly a cosmic unfairness in this. It's also unfair that short people will be underrepresented among basketball players, but this does not mean there's a problem with basketball.
The actions to address this ought to be personal, not structural. It's worth some effort on the margin for native speakers to understand the experience and situation of non-native speakers—indeed, this is one part of "English-language communication ability". I'm grateful to my foreign friends for explaining many aspects of this to me; it's helped me in a fair number of professional situations. Something like your talk at an international conference to educate people about this seems like a great idea. And of course most non-native speakers who seek positions in EA (or other international movements) correctly put a great deal of effort into improving their fluency in the lingua franca.
I mostly agree with your larger point here, especially about the relative importance of FTX, but early Leverage was far more rationalist than it was EA. As of 2013, Leverage's staff was >50% Sequences-quoting rationalists, including multiple ex-SIAI and one ex-MetaMed, compared with exactly one person (Mark, who cofounded THINK) who was arguably more of an EA than a rationalist. Leverage taught at CFAR workshops before it held the first EA Summit. Circa 2013, Leverage's donors overlapped strongly with SIAI/MIRI donors but not with CEA donors. And so on.
I think trying to figure out the common thread "explaining datapoints like FTX, Leverage Research, [and] the LaSota crew" won't yield much of worth because those three things aren't especially similar to each other, either in their internal workings or in their external effects. "World-scale financial crime," "cause a nervous breakdown in your employee," and "stab your landlord with a sword" aren't similar to each other and I don't get why you'd expect to find a common cause. "All happy families are alike; each unhappy family is unhappy in its own way."
There's a separate question of why EAs and rationalists tolerate weirdos, which is more fruitful. But an answer there is also gonna have to explain why they welcome controversial figures like Peter Singer or Eliezer Yudkowsky, and why extremely ideological group houses like early Toby Ord's [EDIT: Nope, false] or more recently the Karnofsky/Amodei household exercise such strong intellectual influence in ways that mainstream society wouldn't accept. And frankly if you took away the tolerance for weirdos there wouldn't be much left of either movement.
The history of big foundations shows clearly that, after the founder's death, they revert to the mean and give money mostly to whatever is popular and trendy among clerks and administrators, rather than to anything unusual the donor might've cared about. If you look at the money flowing out of e.g. the Ford Foundation, you'll be hard-pressed to find anything that's there because Henry or Edsel Ford thought it was important, rather than because it's popular among the NGO class that staffs the foundation. See Henry Ford II's resignation letter.
If you want to accomplish anything more specific than "fund generic charities"—as anyone who accepts the basic tenets of EA obviously should—then creating a perpetual foundation is unwise.