I currently lead EA Funds.
Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.
Unless explicitly stated otherwise, opinions are my own, not my employer's.
You can give me positive and negative feedback here.
I think this is a great initiative. SF is one of the most important (possibly the most important) places for EA/AIS work, but there aren't many high-effort community/field-building projects there. There are lots in Berkeley, but travelling from one place to the other happens less than you might naively expect.
Austen and his team are some of the best executors I have met in EA/AIS. I'm really excited to see where this goes!
I don't know if there is an official answer, but I would be very surprised if the 10% pledge required including your spouse's income as well.
I think the GWWC team generally (IMO correctly) cares more about people fulfilling the "spirit" of the pledge than splitting hairs over who has and hasn't fulfilled it in some technical sense. Including your spouse's income may make sense in some cases, but it probably depends on specifics that you should just make a call on.
I'm really excited about this change in direction. My impression is that 80k staff have increasingly wanted to double down on making AI go well for a while now, and I think it's important that the outward brand/image is aligned with what people in the organisation are most excited about.
My impression is that many commenters who haven't run or worked at cause-neutral organisations underestimate the challenges of having an org vision and mission that doesn't feel coherent and consistent to its employees. One way I expect this change to improve 80k as an organisation is that 80k may have an easier time hiring people who care a lot about AI and are deeply knowledgeable about the topic, even if (hypothetically) the case for working at 80k for people who mostly care about AI risk was about as strong as it was before the official switch.
I really appreciate that 80k leadership are bold enough to focus on what they think is most useful. Pivoting 80k seems much better to me than having most senior people leave 80k to work on a different project and then 80k being a very different org people-wise with the same brand.
(I haven't been through the many comments on this post - apologies if this is wrong in meaningful ways that have been clarified in other comments)
I agree that a lot of EAs seem to make this mistake, but I don't think the issue is with the neglectedness measure. Ime, people often incorrectly scope the area they are analysing and fail to notice that a specific sub-area can be highly neglected (while also being tractable and important) even if the wider area it's part of is not very neglected.
For example, working on information security in the USG is imo not very neglected, but working on standards for datacentres that train frontier LMs is.
Fwiw, I think the "deepfakes will be a huge deal" stuff has been pretty overhyped, and the main reason we haven't seen huge negative impacts is that society already has reasonable defences against fake images, which prevent many people from being misled by them.
I don't think this applies to many other misuse-style risks that the AI X-risk community cares about.
For example, in my view the main differences between AI-enabled deepfakes and AI-enabled biorisks are:
* marginal people getting access to bioweapons is just a much bigger deal than marginal people being able to make deepfakes
* there is much less room for the cost of making deepfakes to fall than for the cost of developing a bioweapon (Photoshop has existed for a long time, and the relevant expertise is relatively cheap).
People call these companies "labs" due to some combination of marketing and historical accident. To my knowledge, no one ever called Facebook, Amazon, Apple, or Netflix "labs", despite each of them employing many researchers and pushing a lot of genuine innovation in many fields of technology.
I agree overall, but fwiw I think that for the first few years of OpenAI's and DeepMind's existence, they were mostly pursuing blue-sky research with few obvious nearby commercial applications (e.g. training NNs to play video games). I think "lab" was a pretty reasonable term - or at least about as reasonable as calling, say, Bell Labs a lab.
Hi Markus,
For context I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).
We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:
You can also apply to one of Open Phil's programs; in particular, Open Philanthropy's program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people applying to EA Funds because of the FTX crash.