Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Comments

Answer by calebp

Hi Markus,

For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees, though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from previous funds email subject lines),
  • the reason the request is urgent,
  • the latest decision and payout dates that would work for you, such that if we can't make these dates there is little reason to make the grant.

You can also apply to one of Open Phil's programs; in particular, Open Philanthropy's program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people who are applying to EA Funds because of the FTX crash.

This is great! Thank you very much for writing this up. I'd be extremely excited for more local groups to self-fund retreats like this. I have seen similar events have a large impact on people's goals, career choices, etc., and they seem pretty viable to run without a huge amount of planning or money.

Answer by calebp

A few things that come to mind that I appreciate in people’s applications:

  • apply to several funders where possible
  • try to point to a concrete plan (even if it’s basic)
  • talk about any tests you’ve done for your plan already (e.g. have you spent some time trying to upskill outside of a grant)
  • talk about why a grant is better than applying to a program/internship/job (or it could be that it’s worse, but you aren’t ready to do those alternatives yet)
  • try to talk about an end-to-end theory of change for your work - this is mostly about showing you’ve thought about how this project fits into a larger plan and you’re thinking strategically about your career

To be clear, you don’t need to do any of these things to get funding, but I often find that applications are improved after people consider some of these bullet points.

simply become one of the most successful and influential ML researchers 🤷‍♂️

Idk, many of the people they are directing would otherwise just do something kinda random, which an 80k rec easily beats. I'd guess the number of people for whom 80k makes their plans worse in an absolute sense is kind of low, and those people are likely to course-correct.

Otoh, I do think people/orgs in general should consider doing more strategy/cause prio research, and if 80k were like "we want to triple the size of our research team to work out the ideal marginal talent allocation across longtermist interventions", that would seem extremely exciting to me. But I don't think 80k are currently being irresponsible (not that you explicitly said that, but for some reason I got a bit of that vibe from your post).

I think it's worth noting that the two papers linked (which I agree are flawed and not that useful from an x-risk viewpoint)

I haven't read the papers, but I am surprised that you don't think they are useful from an x-risk perspective. The second paper, "A Model for Estimating the Economic Costs of Computer Vision Systems that use Deep Learning", seems highly relevant to forecasting AI progress, which imo is one of the most useful AIS interventions.

The OP's claim 

This paper has many limitations (as acknowledged by the author), and from an x-risks point of view, it seems irrelevant.

seems overstated, and I'd guess that many people working on AI safety would disagree with it.

Great post - I really enjoyed reading this.

I would have thought the standard way to resolve some of the questions above would be to use a large agent-based model, simulating disease transmission among millions of agents and then observing how successful some testing scheme is within the model (you might be able to backtest the model against well-documented outbreaks).

I'm not sure how much you'd trust these models over your intuitions, but I'd guess there's quite a lot of mileage in them.
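
As a rough illustration of the kind of model I mean, here is a toy sketch (with entirely made-up parameters and a crude random-testing scheme, not something taken from the post or the papers below) that compares final outbreak sizes under different daily testing rates:

```python
import random

def simulate_outbreak(n_agents=5_000, n_contacts=8, p_transmit=0.03,
                      p_recover=0.1, daily_test_rate=0.01,
                      test_sensitivity=0.8, n_seed_cases=10,
                      n_days=150, seed=0):
    """Toy agent-based SIR model with random testing and isolation.

    Returns the total number of agents ever infected.
    """
    rng = random.Random(seed)
    SUSCEPTIBLE, INFECTED, RECOVERED = 0, 1, 2
    state = [SUSCEPTIBLE] * n_agents
    isolated = [False] * n_agents
    for i in rng.sample(range(n_agents), n_seed_cases):
        state[i] = INFECTED

    for _ in range(n_days):
        # Transmission: each non-isolated infectious agent meets a few
        # random contacts and may infect susceptible ones.
        newly_infected = []
        for i in range(n_agents):
            if state[i] == INFECTED and not isolated[i]:
                for j in rng.sample(range(n_agents), n_contacts):
                    if state[j] == SUSCEPTIBLE and rng.random() < p_transmit:
                        newly_infected.append(j)
        for j in newly_infected:
            state[j] = INFECTED

        # Testing scheme: test a random fraction of the population each day
        # and isolate anyone who (truly) tests positive.
        for i in range(n_agents):
            if (not isolated[i] and rng.random() < daily_test_rate
                    and state[i] == INFECTED
                    and rng.random() < test_sensitivity):
                isolated[i] = True

        # Recovery.
        for i in range(n_agents):
            if state[i] == INFECTED and rng.random() < p_recover:
                state[i] = RECOVERED

    return sum(s != SUSCEPTIBLE for s in state)

# Compare how the final outbreak size responds to the testing rate.
for rate in (0.0, 0.01, 0.05):
    print(f"daily_test_rate={rate}: {simulate_outbreak(daily_test_rate=rate)} ever infected")
```

A real model would obviously need realistic contact networks, test turnaround times, and calibration against outbreak data, but even a toy version like this makes it easy to swap in different testing schemes and compare their effect on outbreak size.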

I've only skimmed these papers, but these seem promising and illustrative of the direction to me: 

The first two bullets don't seem like small UI changes to me; the second, in particular, seems too adversarial imo.

Fwiw, I don't think that being on the 80k podcast is much of an endorsement of the work that people are doing. I think the signal is much more like "we think this person is impressive and interesting", which is consistent with other "interview podcasts" (and I suspect that it's especially true of podcasts that are popular amongst 80k listeners).

I also think having OpenAI employees discuss their views publicly with smart and altruistic people like Rob is generally pretty great, and I would personally be excited for 80k to have more OpenAI employees (particularly if they are willing to talk about why they do/don't think AIS is important and talk about their AI worldview).

Having a line at the start of the podcast making it clear that they don't necessarily endorse the org the guest works for would mitigate most concerns - though I don't think it's particularly necessary.

I think this is a good policy and broadly agree with your position.

It's a bit awkward to mention, but given that you've said you've delisted other roles at OpenAI and that OpenAI has acted badly before, I think you should consider explicitly saying on the OpenAI job board cards that you don't necessarily endorse other roles at OpenAI and suspect that some of them may be harmful.

I'm a little worried about people seeing OpenAI listed on the board and inferring that the 80k recommendation somewhat transfers to other roles at OpenAI (which, imo, is a reasonable heuristic for most companies listed on the board, but fails in this specific case).
