I currently lead EA Funds.
Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.
Unless explicitly stated otherwise, opinions are my own, not my employer's.
You can give me positive and negative feedback here.
Do you have a list of research questions that you think could easily be sped up with AI systems? I suspect I'm more pessimistic than you are, due to concerns around scheming AI agents doing intentional research sabotage, though I do think the affordances of AI agents might make some currently intractable agendas more tractable.
Thank you for replying - it's great that someone within the industry shared their perspective!
I don't really understand why that would make the US building data centres (DCs) in allied countries destabilising. The short answer for why it might be stabilising is:
* It gives non-US actors more leverage, making deals where benefits are shared more likely.
* It's harder for the US to defect on commitments to develop models safely and not misuse them if it's easy for its allies to spy on it (or if it has made commitments for DC use to be monitored).
* It keeps the Western democracies ahead of the CCP.
I think that allied countries themselves building DCs might be comparably stabilising - it gives more leverage to allied countries, at the cost of baking in less coordination and fewer affordances to make deals around how AI is used and developed.
Some quick takes in a personal capacity:
I'm a bit confused. Some donors should be very excited about this, and others should be much more on the fence or think it's somewhat net negative. Overall, I think it's probably pretty promising.
Thanks. We should probably try to display this on our website properly. We have been able to fund for-profits in the past, but it is pretty difficult. I don't think the only reason we passed on your application was that it's for-profit, but that did make our bar much higher (this is a consequence of US/UK charity law and isn't a reflection on the relative impact of non-profits and for-profits).
By the way, I personally think that your project should probably be a for-profit, as it will be easier to raise funding, users will hold you to higher standards, and your team seems quite value-aligned.
Some AI research projects that (afaik) haven't had much work done on them and would be pretty interesting:
Given that they've made a public Manifund application, it seems fine to share that there has been quite a lot of internal discussion about this project within the LTFF. I don't think we are in a great place to share our impressions right now, but if Connor would like me to, I'd be happy to share some of my takes in a personal capacity.
I think the main reasons that EAs are working on AI stuff over bio stuff are that there aren't many good routes into worst-case bio work (afaict largely due to infohazard concerns limiting field building), and that the x-risk case for biorisk isn't very compelling (maybe due to infohazard concerns around threat models).
Hi Markus,
For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).
We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:
You can also apply to one of Open Phil’s programs; Open Philanthropy’s program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people applying to EA Funds because of the FTX crash.