Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Comments

Answer by calebp

Hi Markus,

For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from previous funds email subject lines),
  • the reason the request is urgent,
  • the latest decision and payout dates that would work for you, such that if we can't make these dates there is little reason to make the grant.

You can also apply to one of Open Phil's programs; Open Philanthropy's program for grantees affected by the collapse of the FTX Future Fund may be especially relevant to people applying to EA Funds in the wake of the FTX crash.

To be clear, I'm open to building broad coalitions and think that a good documentary could (and likely would) feature content on lower-stakes risks; but I believe people should be transparent about their motivations and avoid conflating non-GCR stuff with GCR stuff.

Do you have a list of research questions that you think could easily be sped up with AI systems? I suspect that I'm more pessimistic than you are, due to concerns around scheming AI agents intentionally sabotaging research, but I do think that the affordances of AI agents might make some currently intractable agendas more tractable.

Thank you for replying - it's great that someone within the industry shared their perspective!

I don't really understand why that would make the US building data centres (DCs) in allied countries destabilising. The short answer for why it might be stabilising is:
* It gives non-US actors more leverage, making deals where benefits are shared more likely.
* It's harder for the US to defect on commitments to develop models safely and not misuse them if it's easy for their allies to spy on them (or if they have made commitments for DC use to be monitored).
* It keeps the Western democracies ahead of the CCP.

I think that allied countries building DCs themselves might be comparably stabilising - it gives allied countries more leverage, at the cost of less baked-in coordination and fewer affordances for making deals around how AI is used and developed.

Some quick takes in a personal capacity:

  • I agree that a good documentary about AI risk could be very valuable. I'm excited about broad AI risk outreach, and few others seem to be stepping up. The proposal seems ambitious and exciting.
  • I suspect that a misleading documentary would be mildly net-negative, and it's easy to be misleading. So far, a significant fraction of public communications from the AI safety community has been fairly misleading (definitely not all—there is some great work out there as well).
  • In particular, equivocating between harms like deepfakes and GCRs seems pretty bad. I think it's fine to mention non-catastrophic harms, but often, the benefits of AI systems seem likely to dwarf them. More cooperative (and, in my view, effective) discourse should try to mention the upsides and transparently point to the scale of different harms.
  • In the past, team members have worked on (or at least worked at the same organisation as) comms efforts that seemed low integrity and fairly net-negative to me (e.g., some of their work on deepfakes, and adversarial mobile billboards around the UK AI Safety Summit). I don't know whether these specific team members were involved in those efforts.
  • The team seems very agentic and more likely to succeed than most "field-building" AIS teams.
  • Their plan seems pretty good to me (though I am not an expert in the area). I'm pretty into people just trying things. Seems like there are too few similar efforts, and like we could regret not making more stuff like this happen, particularly if your timelines are short.


I'm a bit confused. Some donors should be very excited about this, and others should be much more on the fence or think it's somewhat net-negative. Overall, I think it's probably pretty promising.

Sorry, I agree this message is somewhat misleading - I'll ask our ops team to review this.

Thanks. We should probably try to display this on our website properly. We have been able to fund for-profits in the past, but it is pretty difficult. I don't think the only reason we passed on your application was that it's for-profit, but that did make our bar much higher (this is a consequence of US/UK charity law and isn't a reflection on the impact of non-profits/for-profits).

By the way, I personally think that your project should probably be a for-profit, as it will be easier to raise funding, users will hold you to higher standards, and your team seems quite value-aligned.

Some AI research projects that (afaik) haven't had much work done on them and would be pretty interesting:

  • If the US were to co-build secure data centres in allied countries, would that be geopolitically stabilising or destabilising?
  • What AI safety research agendas could be massively sped up by AI agents? What properties do they have (e.g. easily checkable, engineering > conceptual ...)?
  • What will the first non-AI R&D uses of powerful and general AI systems be?
  • Are there ways to leverage cheap (e.g. 100x lower than present-day cost) intelligence or manual labour to massively increase the US's electricity supply?
  • What kinds of regulation might make it easier to navigate an intelligence explosion (e.g. establishing quick pathways to implement policy informed by AI experts, or establishing zones where compute facilities can be quickly built without navigating a load of red tape)? 

Given that they've made a public Manifund application, it seems fine to share that there has been quite a lot of internal discussion about this project at the LTFF. I don't think we are in a great place to share our impressions right now, but if Connor would like me to, I'd be happy to share some of my takes in a personal capacity.

I think the main reasons that EAs work on AI stuff over bio stuff are that there aren't many good routes into worst-case bio work (afaict largely due to infohazard concerns limiting field building), and that the x-risk case for biorisk isn't very compelling (maybe due to infohazard concerns around threat models).
