Angelina Li

Data Analyst @ Centre for Effective Altruism
1475 karma · Joined · Working (0-5 years) · Berkeley, CA, USA
www.admonymous.co/angelinahli

Bio

Hiya! I work on data stuff at CEA. I used to be the content lead on the EA Global team at CEA, and before that I did economic consulting. Here's an old website I might update at some point.

Think I'm making a mistake? Want to give me feedback? Here's my admonymous. You can also give feedback about me directly to my manager, Oscar Howie.

Comments (125)

Heart! "It's freakin' awesome!!" really resonates with me here (my initial reaction was "OMG yes").

Also this in general makes me feel relieved and grateful that the ecosystem is robust enough to deal with sudden funding shortfalls like this on a short timeline (although I imagine this was no trivial lift to juggle for everyone involved). This feels like an existence proof / credible test of at least one part of our collective resilience.

🤝🏻 Hope you & others get some time to celebrate this win before you have to dive back into resolving the longer term sustainability issue!

  • Background: Good Ventures is requiring Open Philanthropy (OP) to exit the wild animal welfare space.
  • Recent development: The Navigation Fund (TNF) plans to fill the gap left by OP, at least through the end of 2026.

This is awesome :) Really glad to hear this news!

Buying the WAW sector & other allies time to deal with the funding gap left by GV seems naively to me like a really good spend. Very glad someone filled it, thank you to those who made it happen and especially TNF / Jed!

This is so so awesome and so inspiring. Thanks to all working on this! 💜

Are you willing to draft some rough bullet points (that you urge people to NOT copy and paste) on SB-1047 that might help people complete this exercise faster?

Also, do you have a sense for how much better a slightly longer letter (e.g. a 1 page note) is as compared to just a 1 paragraph email with something like: "As a [location] resident concerned about AI safety, I urge you to sign SB 1047 into law. AI safety is important to me because [1 sentence]."

FWIW, I did just send off an email, but it took me more like ~30 minutes to skim this post and then draft something. I also wasn't sure how important it was to create the PDF — otherwise I would have just sent an email on my phone, which would again have been a bit faster.

  • On WAW specifically, my view is something like:
    • Large scale interventions we can be confident in aren't that far away.
    • The intervention space is so large and impacting animals' lives generally is so easy that the likelihood of finding really cost-effective things seems high.
    • These interventions will often not involve nearly as much "changing hearts and minds" or public advocacy as other animal welfare work, so could easily be a lot more tractable.

I would love to hear you talk more about this :) What makes you hopeful that scalable interventions are coming, and can you say more about anything you're particularly excited about here? Also curious what "aren't that far away" cashes out into in terms of your beliefs — 1 year? 3?

I wonder if your opinions are related to the following, which I'd also be excited to hear more about!

  • I think that my research has generally caused the EA space to focus too much on farmed insects, and too little on insecticides. I am somewhat inclined toward thinking that insecticide-caused suffering is both more tractable and larger in scale. I’m now working on an insecticide project though, so trying to correct this.

(Thanks for sharing this post Abraham, I enjoyed reading it :) )

Thanks for this post Michelle! This seems like generally useful advice, and maybe EAG attendees should read it as well.

I'm curious:

  • What kind of specialists does the advising team feel unusually bottlenecked by access to right now?
  • Same for advisees?
  • Do you feel constrained most heavily by access to specialists vs advisees right now?

No worries about responding if you're busy :)

Wow, that is a beautiful salary source of truth doc. I'm impressed! Thanks for sharing.

I enjoyed reading your reflections, thanks for writing them up!

My advice: transferable skills are great because they are relevant to multiple actors and contexts. EA organizations are great, but do not hold a monopoly over impactful work. Plus, you are more likely to be impactful if you have a broader view of the world!

+1, I'm grateful in retrospect for not working at an EA organization right out of school :)

Nice, this was a helpful reframe for me. Thanks for writing this!

I wish more people posting during the debate week were more centered on addressing the specific debate question, instead of just generally writing interesting things — although it's easier to complain than contribute, and I'm glad for the content we got anyway :)

Thanks for publishing this + your code, I found this approach interesting :) and in general I am excited at people trying different approaches to impact measurement within field building.

I had some questions about the model (fair if you don't get around to these given that it's been a while since publication):

We define a QARY as:

  1. A year of research labor (40 hours * 50 weeks),
  2. Conducted by a research scientist (other researcher types will be inflated or deflated),
  3. Of average ability relative to the ML research community (other cohorts will be inflated or deflated),
  4. Working on a research avenue as relevant as adversarial robustness (alternative research avenues will be inflated or deflated),

[...]

I feel confused by the mechanics of especially adjustments 2-4:

  • On 2: I think you're estimating these adjustments based on researcher type — what is this based on?

Here, scientists, professors, engineers, and PhD students are assigned ‘scientist-equivalence’ of 1, 10, 0.1, and 0.1 respectively.

  • On 3: I feel a bit lost at how you're estimating average ability differences — how did you come up with these numbers?

Given the number of pre-PhD participants each program enrolls, Atlas participants have a mean ability of ~1.1x, Student Group and Undergraduate Stipends ~1x, and MLSS ~0.9x. Student Group PhD students have mean ability ~1.5x.

  • On 4:
    • Am I right that this is the place where you adjust for improvements in research agendas (i.e. some people may shift from less to more useful agendas, in CAIS's view, while CAIS still considers their former agenda somewhat useful)?
      • Is that why Atlas gets such a big boost here, because you think it's more likely that people who go on to do useful AI work via Atlas wouldn't have done any useful AI work but for Atlas?
    • I feel confused about exactly how to parse what you're saying here re: which programs are leading to the biggest improvements in research agendas, and why.

The shaded area indicates research avenue relevance for the average participant with (solid line) and without (dashed line) the program. Note that, after finishing their PhD, some pre-PhD students shift away from high-relevance research avenues, represented as vertical drops in the plot.
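To make my confusion about adjustments 2-4 more concrete, here's a minimal sketch (in Python) of how I'm currently reading them — assuming the adjustments simply multiply together, which is my guess rather than anything stated in the post. The 500 hours, 1.5x ability, and 0.8x relevance numbers in the example are made up purely for illustration:

```python
# Rough sketch of how I'm reading the QARY adjustments, assuming they
# compose multiplicatively (my guess, not CAIS's stated method).

HOURS_PER_RESEARCH_YEAR = 40 * 50  # from the QARY definition above

# Adjustment 2: 'scientist-equivalence' by researcher type (from the post)
SCIENTIST_EQUIVALENCE = {
    "scientist": 1.0,
    "professor": 10.0,
    "engineer": 0.1,
    "phd_student": 0.1,
}

def qarys(hours, researcher_type, ability_multiplier, relevance_multiplier):
    """Convert raw research hours into QARYs under the multiplicative reading."""
    research_years = hours / HOURS_PER_RESEARCH_YEAR
    return (
        research_years
        * SCIENTIST_EQUIVALENCE[researcher_type]  # adjustment 2
        * ability_multiplier    # adjustment 3, e.g. ~1.1x for Atlas participants
        * relevance_multiplier  # adjustment 4, relative to adversarial robustness
    )

# Hypothetical example: 500 hours from a PhD student of ~1.5x ability, working
# on an avenue rated 0.8x as relevant as adversarial robustness.
print(qarys(500, "phd_student", ability_multiplier=1.5, relevance_multiplier=0.8))
```

If that multiplicative reading is wrong, knowing that would itself answer part of my question!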

In general, I'd find it easier to work with this model if I understood better, for each of your core results, which critical inputs were based on CAIS's inside views vs. evidence gathered by the program (feedback surveys, etc.) vs. something else :)

I'd be interested to know whether CAIS has changed its field building portfolio based on these results / still relies on this approach!
