
Epistemic status: Hot takes for discussion. These observations are a side product of another strategy project, rather than a systematic and rigorous analysis of the funding landscape, and we may be missing important considerations. Observations are also non-exhaustive and mostly come from anecdotal data and EA Forum posts. We haven’t vetted the resources that we are citing; instead, we took numerous data points at face value and asked for feedback from >5 people who have more of an inside view than we do (see acknowledgments, but note that these people do not necessarily endorse all claims). We aim to indicate our certainty in the specific claims we are making.

Context and summary

While researching for another project, we discovered that there have been some significant changes in the EA funding landscape this year. We found these changes interesting and surprising enough that we wanted to share them, to potentially help people update their model of the funding landscape. Note that this is not intended to be a comprehensive overview. Rather, we hope this post triggers a discussion about updates and considerations we might have missed.

We first list some observations about funding in the EA community in general. Then, we zoom in on AI safety, as this is a particularly dynamic area at present.

Some observations about the general EA funding landscape (more details below):

  1. There are more independent grantmaking bodies
    1. Five new independent grantmaking bodies have started up in 2023 (Meta Charity Funders, Lightspeed Grants, Manifund Regrants, the Nonlinear Network, and Foresight Institute's AI Safety Fund). All but Meta Charity Funders are focused on longtermism or AI.
    2. EA Funds and Open Philanthropy are aiming to become more independent of each other.
    3. Charity Entrepreneurship has set up a foundation program, with a sub-goal of setting up cause-specific funding circles.
  2. There is a lot of activity in the effective giving ecosystem
    1. More than 50 effective giving initiatives, e.g. local fundraising websites, are active, with several launched in recent years.
    2. GWWC is providing more coordination in the ecosystem and looking to help new initiatives get off the ground.
  3. There are changes in funding flows
    1. The FTX collapse caused a drastic decrease in (expected) longtermist funding (potentially hundreds of millions of dollars annually).
    2. EA Funds' Long-Term Future Fund and Infrastructure Fund report roughly estimated funding gaps of $450k/month and $550k/month, respectively, over the next six months.
    3. Open Philanthropy seems able to make productive use of more funding in some cause areas, but their teams working on AI safety are capacity-constrained rather than funding-constrained.
    4. The Survival and Flourishing Fund has increased their giving in 2023. It’s unclear whether this increase will continue into the future.
    5. Effective Giving plans to increase their giving in the years to come.
    6. Longview Philanthropy expects to increase their giving in the years to come. Their 2023 advising will be >$10 million, and they expect money moved in 2024 to be greater than 2023.
    7. GiveWell reports being funding-constrained and projects roughly constant funding flows through 2025.
    8. Charity Entrepreneurship’s research team expects that money dedicated to animal advocacy is unlikely to grow and could shrink.
  4. There might be more EA funding in the future
    1. Manifold prediction markets estimate a 45% chance of a new donor giving ≥$50 million to longtermist or existential risk work before the end of 2024; and an 86% chance of ≥1 new EA billionaire before the end of 2026.
    2. Smaller but still significant new donors seem likely, according to some fundraising actors.

Some observations about the AI safety funding landscape (more details and hot takes on what these observations might mean below):

  1. Analysing the AI safety funding landscape is hard
    1. Outside of the major EA-aligned AI safety funders, traditional philanthropy, governments, academia, and for-profit companies all fund things with some relevance to AI.
    2. Beyond the difficulty of estimating its scale, it's hard to assess the value of this kind of funding for reducing existential risks and risks from advanced AI.
  2. There may be more funding gaps in AI safety in 2023
    1. John Wentworth writes that in AI alignment, grantmaking is funding-constrained, and commenters seem to agree.
    2. Major EA funders are projected to spend less on AI safety in 2023 compared to 2022.
    3. Important AI funders like the Long-Term Future Fund indicate that they are funding-constrained.
    4. The Nonlinear Network and Lightspeed Grants received 500 and 600 applications, respectively, for relatively low amounts of available funding.
    5. There are some candidate reasons explaining why AI safety could be more funding-constrained now, e.g. the general growth of the field.
  3. There may be more funding for AI in the future
    1. Increased mainstream interest might lead to more funds for AI safety from outside EA.
    2. Longview Philanthropy (very roughly) estimates additional donations to AI safety that they are aware of to be ~$20-$100 million in 2024.
    3. Some of the forecasted increased money for EA might be allocated for AI safety.

Some observations on the general EA funding landscape

More funding sources

Certainty: high

Five new grantmaking bodies have been launched since the beginning of 2023 – the Meta Charity Funders (who are open to new funding members), Lightspeed Grants (from Lightcone Infrastructure, partly funded by Jaan Tallinn), Manifund Regrants (who are currently fundraising), Foresight Institute’s AI Safety Fund, and the Nonlinear Network. The first four are collectively committing $8.5-9.7 million in EA funding this year, but it is unclear how much is additional money.[1] Manifund and Lightspeed are focused on longtermist grants, Foresight and the Nonlinear Network are focused on AI, and the Meta Charity Funders are focused on meta-charity, across all causes. Importantly, all of these have a limited track record, and whether they will continue to operate remains to be seen.[2]

Meanwhile, two existing grantmakers, EA Funds and Open Philanthropy (OP), are aiming to become more independent of each other. Asya Bergal and Max Daniel, who work at OP, have resigned or plan to resign as chairs of the Long-Term Future Fund and the EA Infrastructure Fund, respectively. OP will also shift from funding the EA Infrastructure Fund and Long-Term Future Fund directly to matching donations made by others, until the beginning of 2024 and for up to $3.5 million per fund.

Further, Charity Entrepreneurship has initiated a foundation program with the goal of equipping funders and philanthropic professionals with the tools and skills they need to become more effective grantmakers. A sub-goal of the program is to set up cause-specific funding circles led by their alumni. So far, the Mental Health Funding Circle[3] and Meta Charity Funders[4] have launched, and the Antimicrobial Resistance Funding Circle has been initiated. In the future, the aim is to launch circles for Global Health, Animal Welfare, and Education. Also, Charity Entrepreneurship has opened up their previously non-public Seed Funding Network, which supports their incubated charities.

More activity in the effective giving ecosystem

Certainty: high

According to this database hosted by Giving What We Can (GWWC), there are now >50 initiatives active in the effective giving space, e.g. charity evaluators and local donation platforms, with several launched in recent years. GWWC has also increased their efforts to support the wider effective giving ecosystem, providing coordination and looking to help new effective giving initiatives get off the ground.

William MacAskill says that he has started spending more time on fundraising, which might lead to more activity, funding, and new actors entering the space.

Changes in funding flows

Certainty: medium - high on the numbers we cite and on general claims about things that have happened; low on whether these numbers actually represent what we care about. (In general, it's hard to tell from the outside what's going on inside grantmaking organisations and what different numbers represent. Taking publicly visible numbers at face value can lead to wrong conclusions if they are misinterpreted. Two examples of things that might lead to misinterpretations and faulty conclusions: some grants are paid out over several years, with organisations accounting differently for those grants, and future spending can't always be deduced from past spending.)

The FTX Future Fund collapsed, which led to the loss of several hundred million dollars in average annual funding over the coming years, compared to what we expected a year ago (among other consequences).[5]

EA Funds’ Long-Term Future Fund and Infrastructure Fund report that they have room for more funding; fund manager Linchuan Zhang wrote (in September 2023) that with currently committed funds, the LTFF and EAIF will have funding gaps of $450k/month and $550k/month respectively, over the next 6 months. This is partly due to the changed relationship between OP and EA Funds.[6] 

OP plan to increase their giving in 2023 from $650 million to $700 million, and are revisiting how they allocate funding between their Global Health and Wellbeing and Global Catastrophic Risks (previously Longtermist) portfolios. It's not yet clear how they will divide funding between these portfolios going forward. However, they report that historically their spending on Global Catastrophic Risks was primarily limited by funding opportunities, whereas they now believe these grantmaking areas have matured to the point that they can productively absorb more money, giving the impression that the proportion of spending on Global Catastrophic Risks will continue to grow while the proportion on Global Health and Wellbeing shrinks. This impression is strengthened by an update from GiveWell (from April this year), whose median prediction is to receive $251 million from OP in 2023, $102 million in 2024, and $71 million in 2025.[7]

Although OP raised their cost-effectiveness bar in 2022, they were able to find enough regrantors to give out all their budget for the Regranting Challenge in Global Health and Wellbeing. Further, CEO Alexander Berger reports that they were ‘surprised and impressed by the strength of the applications [OP] received’ for the challenge, and that they would have been excited to look deeper into some regrantors that were outside of the Challenge’s eligibility criteria. This suggests that there are more high-impact opportunities to fund in Global Health and Wellbeing. However, some grantmakers at OP are mostly bottlenecked by time. Ajeya Cotra comments (in September 2023) that all of OP’s teams working on things related to AI (the governance team, technical team, and field-building team) are quite understaffed at the moment. They have hired several people recently,[8] but they still don’t have capacity to evaluate all plausible AI-related grants. Technical AI safety is particularly understaffed, with Ajeya Cotra being the only grantmaker primarily focused on this.[9] The wider Global Catastrophic Risks team is, at the time of writing, hiring for 16(!) roles, suggesting that they are capacity-constrained and aiming to increase their efforts in these areas.

The Survival and Flourishing Fund (SFF), aimed at supporting longtermist causes, has increased their giving this year, from roughly $18 million in 2022 to $21 million in the first half of 2023, with another $9-21 million expected for the second half (an expected $30-42 million in total for 2023; we have not investigated these numbers in detail). Jaan Tallinn (SFF's largest funder) updated his 2023 priorities (as of July), and it seems as if AI governance work is to be prioritised. The Future of Life Institute, also focused on longtermism, is exploring partnering with SFF (see their funding priorities). Hence, it seems plausible that SFF might be able to continue to increase its giving.

Longview Philanthropy expects that they will increase giving in 2024, compared to 2023. Longview have moved >$55 million since their founding in 2018; their 2023 advising will be >$10 million; and they expect 2024 money moved to be greater than 2023.

Effective Giving expects to significantly increase their giving, according to this post from 2022. It is not clear by how much, or whether this is still accurate.

GiveWell reports that they are funding-constrained and, though uncertain, predict that their research will yield more grantmaking opportunities than they will be able to fund over the next few years. They project that funds raised will remain relatively constant until 2025, with roughly estimated median projections of $500-600 million each year, which is several hundred million dollars less than they predicted a couple of years ago, primarily due to a decrease in their expected funding from OP. For more detail, see this chart. They report that they may hold back some funding in order to maintain a stable cost-effectiveness bar from year to year.

Charity Entrepreneurship estimates that money dedicated to animal advocacy is unlikely to grow in the next few years, and could even shrink. This is based on “many conversations with experts and fund managers”.

Potential future EA funding

Certainty: high on the numbers we report; very low on the accuracy of the forecasts. (While we have no inside view on the forecasts, it is worth noting that all markets are relatively thinly traded, and we put limited weight on them.)

At the time of writing, a Manifold prediction market estimates a 45% chance that a new donor will give ≥$50 million to longtermist or existential risk causes or organisations before the end of 2024, with the expectation of continuing to give ≥$50 million per year.

In other Manifold prediction markets, the chance that there will be ≥1 new EA billionaire by 2027 is estimated at 86%, and the chance that there will be ≥10 new EA billionaires by 2027 is estimated at 41%. Smaller but still significant regular donors are even more likely, according to some actors working in this space, e.g. fundraising organisations.

See more observations related to future funding for AI safety below.

Some observations on the AI safety funding landscape

In this section, we zoom in and list some of our observations about AI safety funding specifically. This is because we think AI safety is a comparatively dynamic cause area, with significant changes underway in both the funding landscape and the field itself.

A brief overview of the broad AI safety funding landscape

Certainty: low - high

Estimating the total amount and relevance of funding for AI safety is hard. Outside of the major EA-aligned AI safety funders, traditional philanthropy, governments, academia, and for-profit companies all fund things that have some relevance to AI safety, but it’s difficult to assess how much is spent and how relevant that spending is. Below are some very non-exhaustive observations on this, as we haven’t explored these sources of funding. For a more comprehensive account, see this post from Stephen McAleese.

Traditional philanthropy, distinct from the EA movement and major EA AI safety funders such as OP and SFF, plays a role in AI safety funding. For example, the Patrick J. McGovern Foundation, which has disbursed $411 million since its inception, is now focused on exploring the potential of AI and data science for broader societal benefit and has given several million dollars to this area.

Governments have also entered the space. For example, the UK has established the Advanced Research and Invention Agency (ARIA) and chosen 8 scientists who will each be given up to £50 million to allocate to research. The aim of ARIA is to "create transformational research programmes with the potential to create new technological capabilities for the benefit of humanity", and it partly focuses on AI. Also, the UK Research and Innovation Programme has committed £1.5 million “to ensure that AI technologies are designed, deployed and used responsibly within societies”.

Academia funds some relevant AI safety work. The estimated scale of this funding depends a lot on what you measure, e.g. whether you include upskilling in subjects like math and computer science and physical infrastructure, or take a more conservative approach that only includes concrete AI safety research. Stephen McAleese gives a best-guess conservative estimate of $1 million per year, and in addition highlights a commitment from the National Science Foundation of $10 million in 2023 and $10 million in 2024, of which $5 million is from OP.

For-profit companies use profits or investments, rather than philanthropic funding, to fund their research; e.g. OpenAI, Anthropic, DeepMind, and Conjecture are all companies with AI safety teams. A notable recent event is that Amazon will invest up to $4 billion in Anthropic as part of a broader collaboration to develop reliable and high-performing foundation models. Other companies are also spending money in the space. For example, Google recently unveiled the Digital Futures Project, pledging $20 million for research initiatives with the aim to 'support responsible AI'.

Beyond the difficulty of estimating the scale, it's hard to assess the value of this kind of funding for the type of AI safety work we think is most neglected and important: reducing existential risks and risks from advanced AI. For example, the McGovern Foundation highlights funding for work that applies AI to beneficial causes and fosters social equality and diversity, and the Digital Futures Project plans to fund research that's not obviously safety-focused (though they mention 'global security'). However, the contributions of traditional philanthropy, governments, academia, and for-profits seem substantial, and it's plausible that they will increase over time.

The following sections focus mostly on the parts of the AI safety funding landscape that are more closely tied to the EA ecosystem.

There might be more funding gaps[10] in AI safety this year

Certainty: low - medium.

There are signals from both grantmakers and grantees that AI safety is more funding-constrained now than in previous years.

Wentworth writes (in July 2023) that in AI alignment, grantmaking is funding-constrained right now:

‘For the past few years, I've generally mostly heard from alignment grantmakers that they're bottlenecked by projects/people they want to fund, not by amount of money. … Within the past month or two, that situation has reversed. My understanding is that alignment grantmaking is now mostly funding-bottlenecked. This is mostly based on word-of-mouth, but for instance, I heard that the recent lightspeed grants round received far more applications than they could fund which passed the bar for basic promising-ness.’

According to comments on this post, Lightspeed Grants and the Nonlinear Network had a lot of applications, ~600 and ~500 respectively, and relatively low disbursal rates.[11] This does not imply a funding gap, since we don’t know if any unfunded applications were above their bars, but it is an indication that many projects are seeking limited funding. See further comments in the post for a relevant discussion of this question, and people working in AI claiming to be funding-constrained.

Stephen McAleese estimates (in July 2023) that major EA funders will spend about 30% less on AI safety in 2023 than in 2022. We are not convinced the 30% estimate is correct, as it seems to underestimate expected OP funding for AI safety in 2023.[12] But it still seems likely that major EA funders will spend less on AI safety in 2023 than in 2022.

As noted above, the Long-Term Future Fund (which also supports AI safety projects) indicates that it is funding-constrained, and Longview Philanthropy could make productive use of more money for AI safety. Also, we think that there is value in having a diversity of well-funded grantmakers in the ecosystem. Thus it seems the core AI funding system could make use of more money, even though OP is primarily grantmaker-constrained when it comes to AI safety.

We see a few candidate reasons why AI safety might be more funding-constrained now compared to previous years. Firstly, nascent fields (like AI safety) tend to grow each year partly due to field-building efforts, so naturally, there is more to fund each year. For example, estimates suggest that there were (very roughly) 20 technical AI safety FTEs at the end of 2016[13] and that there were (very roughly) 300 full-time technical AI safety researchers around September 2022.[14] Also, the mainstreaming of AI safety is likely to have made the field relevant for new people, including people from underrepresented groups (e.g. civil servants).

Secondly, there seems to be higher strategic clarity on what to fund. OP reports that their ‘longtermist grantmaking areas have matured to the point that [OP] believe they can productively absorb more spending’. An example might be evals; many now seem to think that this is a good idea, whereas there was previously more uncertainty on what things would be robustly good for AI safety governance.

Thirdly, and more generally, there is a lot happening in the AI space, with decision-making processes and events that might have far-reaching impacts taking place, e.g. legislation such as the EU AI Act, the Schumer AI forum, and the UK's global AI Safety Summit. Funding might be able to productively influence such decision-making processes and events.

Potential future AI safety funding

Certainty: low - medium. (Some speculative hot takes below are flagged as such, with further clarification.)

As mainstream interest in the field rises, new pools of money might become available for AI safety in the years to come. One indication of this is that we have recently seen institutional players enter the space, such as some of the actors mentioned above. Marius Hobbhahn writes that he believes individual donors, government grants, and VCs might become more relevant funding sources for AI safety. Further, philanthropic advisor Longview Philanthropy has noticed an increased interest in AI safety, and they expect this interest to result in increased donations in the years to come. A very rough estimate would be an additional ~$20-$100 million in 2024 donations that they are aware of but do not directly advise on (this includes individual donors and foundations entering the space). See also the previously mentioned forecasts of future funders, which could partly increase AI safety funding.

If one thinks it is correct that there will be increased interest in AI safety from funders outside EA, it seems important to build capacity to ensure that this money is used effectively. This would make it (even more) important to increase the number of capable AI safety grantmakers and capable matchmakers: actors who can connect new funders to those grantmakers.

Moreover, if one thinks it is correct both that there are currently bigger funding gaps in AI safety than in previous years and that more money is likely to be available for AI safety in the future, then now could be a particularly good time to donate to AI safety, as donors may cover a temporary funding gap that is about to close. Also, recent progress in AI has led many to shorten their timelines, which in turn pushes some to increase their p(doom). Those with shorter timelines and higher p(doom) should generally be biased towards donating sooner rather than later, as this means there is less time for their donations to bear fruit and/or fewer equally impactful opportunities later. (Note that there are several arguments against now being an especially good time to donate to AI safety. For example, you might think (1) that there will not be more funding for AI safety in the future,[15] (2) that we will gain higher strategic clarity and find better funding opportunities later,[16] or (3) that mission-correlated investing is more promising.[17] While we, the authors, are not convinced by these particular counterarguments, we are unsure and would investigate this speculative hot take further before making important decisions. Also, importantly, we are not arguing that people should donate to AI safety over other causes; we are only observing that, if you are considering donating to AI safety, now might be a good time to do so.)

Further questions

We think there is much room for a more systematic analysis of the funding landscape, as we heavily prioritised publishing something over doing an in-depth investigation. Some questions that readers might be interested in discussing/investigating:

On the general grantmaking landscape

  • What are grantmakers’ spending plans this year and the years to come?
  • Would it be worthwhile for someone to update Joey Savoie’s February 2022 post on funding gaps across cause areas?
  • Are there plausibly unusually large funding gaps in cause areas other than AI? Which ones? For example, take global development, which is still suffering strong negative effects from the pandemic and the war in Ukraine.
  • Although this seems like an unusual year for AI safety funding, it still might be better to invest the marginal dollar in financially neglected cause areas, such as farmed animal welfare. How should we think about that?
  • How can the new funding bodies coordinate effectively? How much coordination is beneficial?[18]
  • To which grantmaker should I give my money?

On AI safety funding

  • How funding-constrained are AI organisations really?
  • What do increased for-profit investments mean for donating to AI Safety?
  • How can we leverage non-EA funding for AI safety? For example, perhaps EAs applying for non-EA sources of funding would be a good way to mitigate both funding and grantmaking bottlenecks.
  • Would an AI charity evaluator (or a Global Catastrophic Risk evaluator) be a valuable addition to the space?
  • How can we most effectively steer potentially incoming money to productive ends? What are the main bottlenecks; grantmakers, advisors, talent for direct work, coordination, or something else?
  • How do we approach incoming non-EA funders responsibly to help them avoid downside risks? What are those risks?

Further resources (non-exhaustive)

Donation decisions

Funding opportunities

Other discussions of EA or AI safety funding

Data

Acknowledgments

Thanks to JueYan Zhang, Lowe Lundin, Eirik Mofoss, Henri Thunberg, Simran Dhaliwal, Habiba Islam, Caleb Parikh, Philip Mader and others for helpful and informative comments (commenters do not necessarily endorse the content of this post). Thanks to Amber Dawn Ace and Sanjana Kashyap for research, structuring and editing support.

 

  1. ^

     $500,000-$1.5 million from Meta Charity Funders, $5 million from Lightspeed, $2 million from Manifund, and $1-1.2 million from Foresight. We have not been able to find how much the Nonlinear Network has committed or funded, or if they are still active. Importantly, some of this money comes from actors who have supported similar work previously, e.g., Lightspeed Grants’ primary funder is Jaan Tallinn.

  2. ^

     We don’t have a particular inside view to doubt these actors, but we have a low prior on new funding bodies existing for many years.

  3. ^

     The Mental Health Funding circle was set up in the fall of 2022, and has so far disbursed ~$1.1 million. They expect to disburse $1-2 million per year in the coming years.

  4. ^

     Meta Charity Funders were set up in the summer of 2023 and are currently running their first grant round, which they expect to be $500,000-$1.5 million. They expect to disburse $2-4 million per year in the coming years.

  5. ^

     The FTX Future Fund leaders reported in July 2022 that they had disbursed $132 million in grants and investments, with $25 million in grants in the pipeline, in roughly 5 months(?). Some of the $132 million was probably not disbursed after all, or is being saved in case it’s reclaimed in legal clawbacks.

  6. ^

    As mentioned, OP are currently matching donations made to EA Funds’ Long-Term Future Fund and Infrastructure Fund. It remains to be seen whether they will continue to do so, and what the intended increased separation between OP and EA Funds means for funding constraints at EA Funds.

  7. ^

     Through 2022, roughly 70% of Open Philanthropy's total funding went toward global health and wellbeing, and 30% went toward longtermism.

  8. ^

     E.g. Trevor Levin joined in June, and Alex Lawsen and Julian Hazell joined in August.

  9. ^

     Ajeya Cotra expects that one person will join the technical team in October, and that a wider hiring round will be launched soon, but that it will take several months to build the team’s capacity substantially.

  10. ^

     We are aware that the term ‘funding gap’ is nebulous, and potentially problematic in this context. When GiveWell mentions a funding gap, it can be understood as everything they could fund beyond their set benchmark (currently 10x GiveDirectly's cash transfers), with the grading of each grant based on a specific process. For a typical nonprofit, a funding gap indicates the amount required to sustain operations. Here, we do not have a precise definition in mind; we simply mean that there might be more effective funding opportunities going unfunded, i.e. more opportunities that would be above various grantmakers’ funding bars.

  11. ^

     According to comments on this post, Lightspeed Grants received 600 applicants, who collectively requested ~$150 million in default funding, and ~$350 million in maximum funding. The original amount to be distributed was $5 million, so only ~3% of the default requested funds could be awarded.

  12. ^

     Stephen McAleese expects $50 million spent in 2023. Looking at OP's public grants for January-July, we count ~$36 million in grants made. A linear projection, combined with a bit of delayed reporting, suggests spending closer to $60 million in 2023. Note that this does not include field-building from their Global Catastrophic Risks Capacity Building program. Also, OP say in a job ad for building the AI Governance and Policy team that “[t]his program sits under our broader focus area of Potential Risks from Advanced Artificial Intelligence, and aims to distribute >$100 million in grants each year”, indicating an ambition to spend more money in the area.
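     (As a rough sanity check on the linear projection, and assuming roughly constant monthly spending: ~$36 million over seven months scales to ~$36 million × 12/7 ≈ $62 million over a full year.)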

  13. ^

     As we understand this account, though it is hard to read.

  14. ^

     There are likely big issues with both of these estimates; accounting for these things seems very hard and requires judgment calls about which work is relevant.

  15. ^

     If you think there won’t be much more money spent on AI safety in the future, it makes sense to wait for better opportunities. One thing that might cause constant or decreased funding for AI safety is that increased interest in AI might lead to more funding for the field in general, but not the kind of safety work EA prioritizes. And there is a risk of people stepping away because they assume increased interest in AI means that funding gaps will be covered. Another thing that might delay more money coming in by a year or so is that it takes time for potential donors who recently gained an interest in AI safety to build conviction and actually give substantial amounts.

  16. ^

     We’re still uncertain about which technical AI safety and governance interventions are the most useful. So it might make sense to wait until our picture is clearer. This seems especially relevant to consider if one has longer AI timelines.

  17. ^

     Mission-correlated investing is any investment strategy that produces more money in worlds where money is relatively more valuable. In this context that means that we might want to invest money now in assets that are likely to dramatically increase in value if AI becomes a big deal, so we have even more money then.

  18. ^

     It is encouraging to see the increased number of independent actors in the EA space. While we believe this development is likely to be positive, relevant coordination seems very important. As Linchuan Zhang points out, there are risks of adverse selection; sometimes a project goes unfunded because grantmakers have evaluated it and decided that it was low impact or even net negative, but it then receives funding from someone else due to them lacking the relevant information. That being said, if new grantmakers simply defer to the verdict of other grantmakers, there is an increased risk of impactful projects going unfunded, and some of the value in diversification and having independent grantmakers is lost.  

Comments (12)

Thanks for this! I would like someone to be funded to regularly report on the funding landscape. Ideally, I'd like periodic reports providing simple graphics like those in here. Data could be easily visualised and updated for free on Google Data Studio.

Meta point:
I think EA has a big blind spot about how information quality mediates the behavioural outcomes we want. As an example, we presumably want more people to set up EA organisations and apply to EA jobs etc. These people will care about the funding landscape when making decisions between career options. They will dislike uncertainty and be time poor. If we want to maximise the number who plan/execute career transitions we should therefore be thinking about maximising the quality of available information. However, we rarely seem to think about, or properly fund, these sorts of systems level improvements.

Thank you Peter!

I agree, some kind of regular report would be useful. And definitely think they should include more graphics (erred on the side of getting this out there).

On your meta point, I would be curious to hear if you know of communities or similar that have better information quality and more effectively mediate sought-after behavioural outcomes. My feeling is that this is indeed very important, but rarely invested in and/or done very well. It would be very interesting to have some kind of survey mapping which (easy) system-level improvements/public goods the community would be most excited about (e.g. regular funding updates).

@Vilhelm Skoglund Would you be able to share how much time it took to put together this report at this level of quality? Curious as to its "costs" should it be a regularly updated public good.

Unfortunately, I don't think anything I can say will be meaningful. Jona and I have spent a lot of time (above 40 hours in total) trying to understand funding flows and thinking about what might be particular needs at the moment. Also, we had great help from Amber. Obviously you could do it with less effort than this. My best guess is that if I tried to 80/20 doing something similar in the future, with some collection of feedback, it would take 10-15 hours from me and 2 hours from feedback givers. But this is very crude.

FWIW, I also think one key consideration is the likelihood of organizations providing updates and making sure the data means the same thing across organizations (see caveats in the report for more).

Thanks for the post! This seems like a useful summary, and I didn't spot anything that contradicted existing information I have (I didn't check very hard, so this isn't strong data).

I'm curating this post. I know you don't intend for it to be exhaustive, but it is nevertheless very thorough. I agree with @PeterSlattery that people considering founding/running orgs in these spaces would benefit from seeing this information, and I think you do a good job of presenting it.

Thank you for the encouraging words! Will consider doing this again in the future.

I love this post! And I think Ville and Jona might have done this at least partially unpaid, so no criticism here (and also the date preceded the announcement): I just want to put a pin here for future such "EA funding overviews" to strongly consider the Navigation Fund (and any other significant donors I might have missed). If anyone comes across other overviews of funding here on the EAF, I suggest leaving comments such as this one, both for people looking for funding and for future authors of such overviews to include additional sources of funding.

Nice summarization post!

On the point of non-EA funders coming into the space, it's important to consider messaging - we don't want to come off as alarmist, overly patronizing, or too certain of ourselves, but rather to engage in a constructive dialogue that builds some shared understanding of the stakes involved.

There also needs to be incentive alignment, which in the short-term might also mean collaborating with people on things that aren't directly X-risk related, like promoting ethical AI, enhancing transparency in AI development, etc.

+1 on not being alarmist, overly patronizing, or too certain of ourselves. And to be clear, I think this is about more than messaging! Also, I agree that we need to be able to collaborate with people who have different priorities, but I think it is important to maintain integrity, keep prioritising, and not give up too much.
