
ClaireZabel

3388 karma

Comments (137)

Drift isn't the issue I was pointing at in my comment.

I really appreciate this post! I have a few spots of disagreement, but many more of agreement, and appreciate the huge amount of effort that went into summarizing a very complicated situation with lots of stakeholders over an extended period of time in a way that feels sincere and has many points of resonance with my own experience. 

Seconding Ben: I did a similar exercise and got similarly mixed results (with stark examples in both directions), including in some instances you allude to in the post.

Thanks for sharing this, Tom! I think this is an important topic, and I agree with some of the downsides you mention and think they’re worth weighing highly; many of them are the kinds of things I was thinking of in this post of mine when I listed these anti-claims:

Anti-claims

(i.e., claims I am not trying to make and actively disagree with)

  • No one should be doing EA-qua-EA talent pipeline work
    • I think we should try to keep this onramp strong. Even if all the above is pretty correct, I think the EA-first onramp will continue to appeal to lots of great people. However, my guess is that a medium-sized reallocation away from it would be good to try for a few years. 
  • The terms EA and longtermism aren’t useful and we should stop using them
    • I think they are useful for the specific things they refer to and we should keep using them in situations where they are relevant and ~ the best terms to use (many such situations exist). I just think we are over-extending them to a moderate degree 
  • It’s implausible that existential risk reduction will come apart from EA/LT goals 
    • E.g. it might come to seem (I don’t know if it will, but it at least is imaginable) that attending to the wellbeing of digital minds is more important from an EA perspective than reducing misalignment risk, and that those things are indeed in tension with one another. 
    • This seems like a reason that, all else equal, people who aren’t EA and just prioritize existential risk reduction are less helpful from an EA perspective than people who also share EA values, and like something to watch out for; but I don’t think it outweighs the arguments in favor of more existential-risk-centric outreach work.

This isn’t mostly a PR thing for me. As I mentioned in the post, I actually drafted and shared an earlier version of that post in summer 2022 (though I didn’t decide to publish it for quite a while), which I think is evidence against it being mostly a PR thing. I think the post pretty accurately captures my reasoning at the time: often, the people doing this outreach work on the ground were actually focused on GCRs or AI risk and trying to get others to engage on that, and it felt like, for path-dependent reasons, they were ending up using terms that pointed less well at what they were interested in. Further updates towards shorter AI timelines have moved me substantially further in favor of the term “GCR” over “longtermism”, since they increase the degree to which a lot of people mostly want to engage others about GCRs or AI risk in particular.

Seriously. Someone should make a movie!

Very strongly agree, based on watching the career trajectories of lots of EAs over the past 10 years. I think that focusing on what broad kinds of activities you are good at and enjoy, and what skills you have or are well-positioned to obtain (within limits: e.g. "being a really clear and fast writer" is probably helpful in most cause areas, "being a great salsa dancer" maybe less so), and then thinking about how to apply them in the cause area you think is most important, is generally much more productive than trying to entangle that exploration with personal cause prio exercises.

Our impression when we started to explore different options was that one can’t place a trustee on a leave of absence; it would conflict with their duties and responsibilities to the org, and so wasn’t a viable route.

Chiming in from the EV UK side of things: First, +1 to Nicole’s thanks :) 

As you and Nicole noted, Nick and Will have been recused from all FTX-related decision-making. And, Nicole mentioned the independent investigation we commissioned into that. 

Like the EV US board, the EV UK board is also looking into adding more board members (though I think we are slightly behind the US board), and plans to do so soon.  The board has been somewhat underwater with all the things happening (speaking for myself, it’s particularly difficult because a lot of these things affect my main job at Open Phil too, so there’s more urgent action needed on multiple fronts simultaneously). 

(The board was actually planning and hoping to add additional board members even before the fall of FTX, but unfortunately those initial plans had to be somewhat delayed while we’ve been trying to address the most time-sensitive and important issues, even though having more board capacity would indeed help in responding to issues that crop up; it's a bit of a chicken-and-egg dynamic we need to push through.)  

Hope this is helpful!

My favorite is probably the movie Colossus: The Forbin Project. For this, I would also weakly recommend the first section of Life 3.0.

Hey Jack, this comment might help answer your question. 
