Nadia Montazeri

Comments (9)

Thanks for writing this!

Have you considered cutting down on EAG attendance overall by reducing the proportion of AI safety participants, and instead hosting (or supporting others in hosting) large AI-safety-only conferences?

These could in turn be subsidized by industry. Yes, this can be a huge conflict of interest, but given the huge cost on the one hand and the revenue in AI on the other, it could be worth considering.

Do you think the PPE/PAPR example is part of that very small subset? It just happens to be the area I started working on by deference, and I might've gotten unlucky.

Or is the crux here response vs prevention?

Thanks for your comment!

On what is lacking: It was written for reading groups, which are already a softly gatekept space. It doesn't provide guidance on other communication channels: what people could write blogs or tweets about, what is safe to talk to LLMs about, what about Google Docs, etc. Indeed, I was concerned about infinitely abstract galaxy-brain infohazard potential from this very post.

On dissent:

  1. I wanted to double down on the message in the document itself that it is preliminary and not the be-all and end-all.
  2. I have reached out to one person I have in mind within EA biosecurity who pushed back on the infohazard guidance document, to give them the option to share their disagreement, potentially anonymously.

Open Philanthropy has biosecurity scholarships, which have also funded career transitions in the past. In previous years, they opened applications around September.

Thanks for writing this up! Just a few rough thoughts:

Regarding the absorbency of AI safety researcher roles: I have heard people in the movement toss around the idea that 1/6th of the AI landscape (funding, people) being devoted to safety would be worth aspiring to. That would be a lot of roles to fill (most of which, to be fair, don't exist yet), though I haven't crunched the numbers. The main difference from working in policy would be that the required profile/background is much narrower. On the other hand, a lot of those roles may not fit what you mean by "researcher", and realistically won't be filled by EAs.

I'm also wondering whether you're arguing against promoting the "hits-based approach" to careers to a general audience; I find it hard to disentangle that here. There's probably high absorbency for policy careers, but only a few of the people who succeed on that path will have an extraordinarily high impact. I'm trying to point at some sort of 2x2 matrix of absorbency and risk aversion, where we might eventually fall short of people taking risks in a low-absorbency career path, because we need a lot of people who try and fail in order to get the impact we'd like.

Hey Henry, thanks for sharing and your ambitious donation goal!

I haven't looked into locum work yet, as I won't be working as a doctor in the long term, but I figured it would make sense to write down my experience here anyway for others.

My intuition is that I would need a lot more experience for locum work, and that anesthesiology in particular is well suited to it, because you can adapt to new patients and environments more quickly. In the (urban) clinics where I work, I have never met anyone doing locum work. We do have one experienced resident who is employed at 0.5 FTE and does 7 night shifts in psychiatry every month.

What I have heard of is radiologists with a US license who live in Australia and are paid to work US night shifts remotely. I found that quite intriguing.

Hey Markus, thanks, and thanks for asking. I made a mistake there, and I'm glad you and another friend (who is coincidentally also named Markus) pointed it out.

Yes, it's supposed to be the amount one could (conservatively estimated) donate within the first year of residency in Switzerland.

I made the mistake of adding up the "difference in donations" and the "monthly remaining" with the Swiss income (2,364.57 * 12 + 15,217.25 = 43,592.09). That doesn't make sense.

I have now corrected it to 2,364.57 * 12 + 4,834.25 = 33,209.09 in annual donations.
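
For anyone double-checking, here is a minimal sketch of the corrected arithmetic; the variable names are mine and purely illustrative, not labels from the original post:

```python
# Corrected first-year donation estimate (Swiss residency), using the figures above.
# Variable names are illustrative; they are not the labels used in the original post.
monthly_difference_in_donations = 2364.57
corrected_remainder = 4834.25  # the figure used in the corrected calculation, replacing the mistaken 15,217.25

annual_donations = monthly_difference_in_donations * 12 + corrected_remainder
print(round(annual_donations, 2))  # 33209.09
```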