Angelina Li

Data Analyst @ Centre for Effective Altruism
1461 karma · Joined · Working (0-5 years) · Berkeley, CA, USA
www.admonymous.co/angelinahli

Bio

Hiya! I work on data stuff at CEA. I used to be the content lead on the EA Global team at CEA, and before that I did economic consulting. Here's an old website I might update at some point.

Think I'm making a mistake? Want to give me feedback? Here's my admonymous. You can also give feedback about me directly to my manager, Oscar Howie.

Comments (123)

This is so so awesome and so inspiring. Thanks to all working on this! 💜

Are you willing to draft some rough bullet points (that you urge people NOT to copy and paste) on SB 1047 that might help people complete this exercise faster?

Also, do you have a sense of how much better a slightly longer letter (e.g. a 1-page note) is compared to just a 1-paragraph email with something like: "As a [location] resident concerned about AI safety, I urge you to sign SB 1047 into law. AI safety is important to me because [1 sentence]."

FWIW, I did just send off an email, but it took me more like ~30 minutes to skim this post and then draft something. I also wasn't sure how important it was to create the PDF — otherwise I would have just sent an email on my phone, which would again have been a bit faster.

  • On WAW specifically, my view is something like:
    • Large scale interventions we can be confident in aren't that far away.
    • The intervention space is so large and impacting animals' lives generally is so easy that the likelihood of finding really cost-effective things seems high.
    • These interventions will often not involve nearly as much "changing hearts and minds" or public advocacy as other animal welfare work, so could easily be a lot more tractable.

I would love to hear you talk more about this :) What makes you hopeful that scalable interventions are coming, and can you say more about anything you're particularly excited about here? Also curious what "aren't that far away" cashes out into in terms of your beliefs -- in 1 year? 3?

I wonder if your opinions are related to the following, which I'd also be excited to hear more about!

  • I think that my research has generally caused the EA space to focus too much on farmed insects, and too little on insecticides. I am somewhat inclined toward thinking that insecticide-caused suffering is both more tractable and larger in scale. I’m now working on an insecticide project though, so trying to correct this.

(Thanks for sharing this post, Abraham; I enjoyed reading it :) )

Thanks for this post, Michelle! This seems like generally useful advice, and maybe EAG attendees should read it as well.

I'm curious:

  • What kinds of specialists does the advising team feel most bottlenecked on access to right now?
  • Same question for advisees?
  • Do you currently feel more constrained by access to specialists or by access to advisees?

No worries about responding if you're busy :)

Wow, that is a beautiful salary source of truth doc. I'm impressed! Thanks for sharing.

I enjoyed reading your reflections, thanks for writing them up!

My advice: transferable skills are valuable because they're relevant across multiple actors and contexts. EA organizations are great, but they don't hold a monopoly on impactful work. Plus, you're more likely to be impactful if you have a broader view of the world!

+1, I'm grateful in retrospect for not working at an EA organization right out of school :)

Nice, this was a helpful reframe for me. Thanks for writing this!

I wish more of the people posting during debate week had focused on addressing the specific debate question, instead of just generally writing interesting things — although it's easier to complain than to contribute, and I'm glad for the content we got anyway :)

Thanks for publishing this + your code; I found this approach interesting :) In general, I'm excited to see people trying different approaches to impact measurement within field building.

I had some model qs (fair if you don't get around to these given that it's been a while since publication):

We define a QARY as:

  1. A year of research labor (40 hours * 50 weeks),
  2. Conducted by a research scientist (other researcher types will be inflated or deflated),
  3. Of average ability relative to the ML research community (other cohorts will be inflated or deflated),
  4. Working on a research avenue as relevant as adversarial robustness (alternative research avenues will be inflated or deflated),

[...]
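If I'm reading this right, the adjustments compose multiplicatively on top of the labor time. Here's a minimal sketch of my current understanding (the function, names, and example participant below are my own illustration, and the multiplicative composition is my assumption rather than something I found stated explicitly):

```python
# Sketch of my reading of the QARY definition: one research-scientist-year
# (40 hours/week * 50 weeks) scaled by the three adjustments. The
# multiplicative composition is my assumption, not quoted from the report.
HOURS_PER_RESEARCH_YEAR = 40 * 50  # 2,000 hours

def qarys(hours: float, scientist_equivalence: float, ability: float, relevance: float) -> float:
    """Quality-adjusted research years contributed by one participant's labor."""
    return (hours / HOURS_PER_RESEARCH_YEAR) * scientist_equivalence * ability * relevance

# Hypothetical example: a professor (10x scientist-equivalence, per the quote
# further down) of average ability, spending half a research-year on an avenue
# as relevant as adversarial robustness.
print(qarys(hours=1000, scientist_equivalence=10, ability=1.0, relevance=1.0))  # -> 5.0
```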

I feel confused about the mechanics of adjustments 2-4 in particular:

  • On 2: It looks like you're estimating these adjustments by researcher type — what are these equivalence numbers based on?

Here, scientists, professors, engineers, and PhD students are assigned ‘scientist-equivalence’ of 1, 10, 0.1, and 0.1 respectively.

  • On 3: I'm a bit lost on how you're estimating average ability differences — how did you come up with these numbers?

Given the number of pre-PhD participants each program enrolls, Atlas participants have a mean ability of ~1.1x, Student Group and Undergraduate Stipends ~1x, and MLSS ~0.9x. Student Group PhD students have mean ability ~1.5x.

  • On 4:
    • Am I right that this is the place where you adjust for improvements in research agendas (i.e. maybe some people shift from less -> more useful agendas in CAIS's view, while CAIS still considers their former agenda somewhat useful)?
      • Is that why Atlas gets such a big boost here, i.e. because you think it's more likely that people who go on to do useful AI work via Atlas wouldn't have done any useful AI work but for Atlas?
    • I'm confused about exactly how to parse what you're saying here re: which programs lead to the biggest improvements in research agendas, and why.

The shaded area indicates research avenue relevance for the average participant with (solid line) and without (dashed line) the program. Note that, after finishing their PhD, some pre-PhD students shift away from high-relevance research avenues, represented as vertical drops in the plot.

In general, I'd find it easier to work with this model if I understood better, for each of your core results, which critical inputs were based on CAIS's inside views vs. evidence gathered by the program (feedback surveys, etc.) vs. something else :)

I'd be interested to know whether CAIS has changed its field building portfolio based on these results / still relies on this approach!

Congratulations on launching this and reaching your one-year mark!! Starting a new charity sounds like a tremendous amount of work, and I have so much respect for CE incubatees.

Based on these factors, we believe that Ansh’s program can reduce neonatal mortality by at least 50%[10].

I had a nitpicky impact evaluation question, sorry if I'm missing something.

Is this 50% number based on your actual observed reduction in neonatal mortality, given your baseline of ~[13% to 27%]? Or is it based on the studies linked in the prior paragraph? I was just a bit surprised to see you cite these papers instead of your own preliminary data :)

It naively seems to me that since you have ~5 months of operational data (given a Jan 2024 launch) + 900 enrollments, maybe you can estimate your actual, tentative observed effects? (Looks like the Cochrane review paper gives some data on the health outcomes of infants at the point of discharge + at the 1-3 month post-discharge mark, so maybe you have some observed effects already?)

Totally reasonable if you just haven't gotten around to this yet, or if there's another consideration I haven't thought of. Good luck with your work!!

Nice, thanks for keeping track of this and reporting on the data!! <3

No pressure to respond, but I'm curious how long it took you to find the relevant email addresses, send the messages, reply to everyone, etc.? I imagine that for me, the main costs would be the added overhead (time + psychological) of keeping track of so many conversations.
