Bio

Operations Generalist at Anthropic & former President of UChicago Effective Altruism. Suffering-focused.

Currently testing fit for operations. Tentative long-term plan involves AI safety field-building or longtermist movement-building, particularly around s-risks and suffering-focused ethics.

Cause priorities: suffering-focused ethics, s-risks, meta-EA, AI safety, wild animal welfare, moral circle expansion.

How others can help me

  • Learn more about s-risk: talk to me about what you think!
  • Learn how to get the most out of my first job and living in the Bay Area
  • Seek advice on how to become excellent at operations (and/or ways to tell that I may be a better fit for community-building or research)

How I can help others

  • Talk about community-building for university groups
  • Be a sounding board for career planning (or any meta-EA topic)
  • Possibly connect you to other community-builders

Sequences
1

Building My Scout Mindset

Comments
203

Thanks a lot for your work on this neglected topic!

You mention,

Those counter-considerations seem potentially as strong as the motivations I list in favor of a focus on malevolent actors. 

Could you give more detail on which of the counter-considerations (and motivations) you consider strongest?

People being less scared to post! (FWIW I think this has increasingly become the case)

Thanks for this - I think this captures a quality (or set of qualities?) that previously lacked such an accurate handle! I think, in many ways, sincerity is the quality that leads people to really 'take seriously' (i.e., follow through on in a coherent way) the project of doing good.

I see!  Yes, I agree that more public "buying time" interventions (e.g. outreach) could be net negative. However, for the average person entering AI safety, I think there are less risky "buying time" interventions that are more useful than technical alignment.

To clarify, you think that "buying time" might have a negative impact [on timelines/safety]?

Even if you think that, I'm pretty uncertain of the impact of technical alignment, if we're talking about all work that is deemed 'technical alignment.' E.g., I'm not sure that on the margin I would prefer an additional alignment researcher (without knowing what they were researching or anything else about them), though I think it's very unlikely that they would have a net-negative impact.

So, I think I disagree that (a) "buying time" (excluding weird pivotal acts like trying to shut down labs) might have a net negative impact, and thus also that (b) "buying time" has more variance than technical alignment.

Edit: Thought about it more, and I disagree with my original formulation of the disagreement. I think "buying time" is more likely to be net negative than alignment research, but also that alignment research is usually not very helpful.

Haha aw, thanks! I would love to keep doing these some day.

To clarify, I agree that 80k is the main actor who could + should change people's perceptions of the job board!

I find myself slightly confused - does 80k ever promote jobs they consider harmful (but ultimately worth it if the person goes on to leverage that career capital)?

My impression was that all career-capital-building jobs were ~neutral or mildly positive. My stance on the 80k job board (that the setup is largely fine, though the perception of it needs shifting) would change significantly if 80k were listing jobs they thought would be net negative on their own, and only worth listing because they expect the person to later take an even higher-impact role as a result.

I always appreciate reading your thoughts on the EA community; you are genuinely one of my favorite writers on meta-EA!

Woah! I haven't tried it yet, but this is really exciting! The technical changes to the Forum have seemed impressive to me so far. I also just noticed that the hover drop-down on the username is more expanded, which is visually unappealing but probably more useful.
