LTFF is running an Ask Us Anything! Most of the grantmakers at LTFF have agreed to set aside some time to answer questions on the Forum.
I (Linch) will make a soft commitment to answer one round of questions this coming Monday (September 4th) and another round the Friday after (September 8th).
We think that right now could be an unusually good time to donate. If you agree, you can donate to us here.
About the Fund
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas and to otherwise increase the likelihood that future generations will flourish.
In 2022, we disbursed ~250 grants worth ~$10 million. You can see our public grants database here.
Related posts
- LTFF and EAIF are unusually funding-constrained right now
- EA Funds organizational update: Open Philanthropy matching and distancing
- Long-Term Future Fund: April 2023 grant recommendations
- What Does a Marginal Grant at LTFF Look Like?
- Asya Bergal’s Reflections on my time on the Long-Term Future Fund
- Linch Zhang’s Select examples of adverse selection in longtermist grantmaking
About the Team
- Asya Bergal: Asya is the current chair of the Long-Term Future Fund. She also works as a Program Associate at Open Philanthropy. Previously, she worked as a researcher at AI Impacts and as a trader and software engineer for a crypto hedge fund. She has also written for the AI Alignment Newsletter and been a research fellow at the Centre for the Governance of AI at the Future of Humanity Institute (FHI). She has a BA in Computer Science and Engineering from MIT.
- Caleb Parikh: Caleb is the project lead of EA Funds. He has previously worked on global priorities research (as a research assistant at GPI), EA community building (as a contractor to the community health team at CEA), and global health policy.
- Linchuan Zhang: Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.
- Oliver Habryka: Oliver runs Lightcone Infrastructure, whose main product is LessWrong. LessWrong has significantly influenced conversations around rationality and AGI risk, and its community is often credited with having realized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk, and crypto much earlier than other comparable communities.
You can find a list of our fund managers in our request for funding here.
Ask Us Anything
We’re happy to answer any questions – marginal uses of money, how we approach grants, questions/critiques/concerns you have in general, what reservations you have as a potential donor or applicant, etc.
There’s no real deadline for questions, but let’s say we have a soft commitment to focus on questions asked on or before September 8th.
Because we’re unusually funding-constrained right now, I’m going to shill again for donating to us.
If you have projects relevant to mitigating global catastrophic risks, you can also apply for funding here.
What fraction of the best projects that you currently can't fund have applied for funding from Open Philanthropy directly? Reading this, it seems that many would qualify.
Why doesn't Open Philanthropy fund these hyper-promising projects if, as one grantmaker writes, they are "among the best historical grant opportunities in the time that I have been active as a grantmaker"? Open Philanthropy writes that LTFF "supported projects we often thought seemed valuable but didn't encounter ourselves." But since the chair of the LTFF is now a Senior Program Associate at Open Philanthropy, I assume that this does not apply to existing funding opportunities.
I have many disagreements with the funding decisions of Open Philanthropy, so some divergence here is to be expected.
Separately, my sense is that Open Phil really isn't set up to deal with the grant volume that the LTFF is dealing with, in addition to its existing grantmaking. My current guess is that the Open Phil longtermist community building team makes something like 350-450 grants a year, in total, with 7-8 full-time staff [edit: previously said 50-100 grants on 3-4 staff, because I forgot about half of the team, I am sorry. I also clarified that I was referring to the Open Phil longtermist community building team, not the whole longtermist part]. The LTFF makes ~250 grants per year on around 1.5 full-time equivalents, which, if Open Phil were to try to take them on additionally, would require more staff capacity than they have available.
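A rough back-of-envelope sketch, using only the approximate figures quoted above (which are themselves described as guesses):

```latex
% Grants-per-person throughput implied by the figures above (all numbers are rough guesses).
\[
\frac{350\text{--}450\ \text{grants/yr}}{7\text{--}8\ \text{staff}} \approx 45\text{--}65\ \text{grants per person per year}
\qquad\text{vs.}\qquad
\frac{{\sim}250\ \text{grants/yr}}{{\sim}1.5\ \text{FTE}} \approx 165\ \text{grants per FTE per year}
\]
```

On those numbers, absorbing the LTFF's volume at Open Phil's per-person throughput would require very roughly another 4-6 staff.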
Also, Open Phil has already been having a good amount of trouble getting back to their current grantees in a timely manner, at least based on conversations I've had with various OP grantees, so I don't think there is a way Open Phil could fill the relevant grant opportunities without just directly making a large grant to the LTFF (and also, hon...
How is the search going for the new LTFF chair? What kind of background and qualities would the ideal candidate have?
Here are my guesses for the most valuable qualities:
- Deep technical background and knowledge in longtermist topics, particularly in alignment.
- Though I haven't studied this area myself, my understanding of the history of good funding for new scientific fields (and other forms of research "leadership"/setting strategic direction in highly innovative domains) is that usually you want people who are quite good at the field you want to advance or fund, even if they aren't the very top scientists.
- Basically you might not want the best scientists at the top, but for roles that require complex/nuanced calls in a deeply technical field, you want second-raters who are capable of understanding what's going on quickly and broadly. You don't want your research agenda implicitly set by mediocre scientists, or worse, non-technical people.
- Because we give more grants in alignment than other technical fields, I think a deep understanding of alignment and other aspects of technical AI safety should be prioritized over (eg) technical biosecurity or nuclear security or forecasting or longtermist philosophy.
- The other skillsets are still valuable ofc, and would be a plus in a fund manager.
- Consi...
How does the team weigh the interests of non-humans (such as animals, extraterrestrials, and digital sentience) relative to humans? What do you folks think of the value of interventions to help non-humans in the long-term future specifically relative to that of interventions to reduce x-risk?
I've heard that you have a large delay between when someone applies to the fund and when they hear back from you. How large is this delay right now? Are you doing anything in particular to address it?
I think the last time we checked, it was ~a month at the median and ~2 months on average, with moderately high variance. This is obviously very bad. Unfortunately, our current funding constraints probably make things worse,[1] but I'm tentatively optimistic that with a) new guest fund managers, b) more time to come up with better processes (now that I'm on board ~full-time, at least temporarily), and c) hopefully incoming money (or at least greater certainty about funding levels), we can do somewhat better going forward.
(Will try to answer other parts of your question/other unanswered questions on Friday).
Because we are currently doing a mix of a) holding on to grants that are above our old bar but below our current bar while waiting for further funding, and b) trying to refer them to other grantmakers, both of which take up calendar time. Also, the lower level of funding means we are, or at least I am, prioritizing other aspects of the job (e.g. fundraising, public communications) over getting back to applicants quickly.
The level of malfunctioning that is going on here seems severe:
Given all of the above, I would hope you could aim to get more than "somewhat better", and have a more comprehensive plan for how to get there. I get that LTFF is pretty broke rn, that we need an OpenPhil alternative, and that there's a 3:1 match going on, so it probably makes sense for LTFF to receive some funding for the time being. I also get that you guys are trying hard to do good, and are probably currently shopping around unfunded grants, etc. But there's a part of me that thinks that if you can't even get it together on a basic level, then we should be looking elsewhere to find that OpenPhil alternative.
Oof. Apologies, I thought we'd fixed that everywhere already. Will try to fix asap.
Yeah, I think this is very fair. I do think the funding ecosystem is pretty broken in a bunch of ways, and of course we're a part of that; I'm reminded of Luke Muehlhauser's old comment about how MIRI's operations got a lot better after he read Nonprofit Kit for Dummies.
We are trying to hire a new LTFF chair, so if you or anybody you know is excited to try to right the ship, please encourage them to apply! There are a number of ways we suck, and a new chair could prioritize speed in getting back to grantees as the first thing to fix.
I can also appreciate wanting a new solution rather than fixing LTFF. For what it's worth, people have been consistently talking about shutting down LTFF in favor of a different org[1] approximately since I started volunteering here in early 2022; over the last 18 months I've gotten more pessimistic about replacements, which is one...
Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I'd guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn't the right comparison). Given that LTFF is funding a lot of research, 2 months is almost certainly better than most academic grants.
My impression, from what I think is a pretty large sample of EA funders and grants, is also that EA Funds has the fastest turnaround time on average compared to the list you mention (with exceptions in some cases in both directions, for EA Funds and other funders).
Just FWIW, this feels kind of unfair, given that, like, if our grant volume hadn't increased by ~5x over the past 1-2 years (and especially the last 8 months), we would probably be totally rocking it in terms of "the basics".
Like, yeah, the funding ecosystem is still recovering from a major shock, and it feels kind of unfair to judge the LTFF's performance on the basis of such an unprecedented context. My guess is that things will settle into a healthy rhythm again once there is a new fund chair and the funding ecosystem settles into more of an equilibrium, and the basics will be better covered.
If someone told me about a temporary 5x increase in volume that understandably messed things up, I would think they were talking about a couple month timeframe, not 8 months to 2 years. Surely there’s some point at which you step back and realise you need to adapt your systems to scale with demand? E.g. automating deadline notifications.
It’s also not clear to me that either supply or demand for funding will go back to previous levels, given the increased interest in AI safety from both potential donors and potential recipients.
Thank you for hosting this! I'll repost a question from Asya's retrospective post regarding response times for the fund.
I would love to hear more about the numbers and information here. For instance, how did the median and mean change over time? What does the global distribution look like? The disparity between the mean and median suggests there might be significant outliers; how are these outliers addressed? I assume many applications become desk rejects; do you have the median and mean for the acceptance response times?
Continuing my efforts to annoy everyone who will listen with this genre of question, what value of X would make this proposition seem true to you?
Feel free to answer based on concrete example researchers if desired. Earlier respondents have based their answer on people like Paul Christiano.
I'd also be interested in hearing answers for a distribution of different years or different levels of research impact.
(This is a pretty difficult and high variance forecast, so don't worry, I won't put irresponsible weight on the specifics of any particular answer! Noisy shrug-filled answers are better than none for my purposes.)
I'd love to see a database of waitlisted grant applications publicly posted and endorsed by LTFF, ideally with the score that LTFF evaluators have assigned. Would you consider doing it?
By waitlisted, I mean those that LTFF would have funded if it wasn't funding constrained.
What are some types of grant that you'd love to fund, but don't tend to get as applications?
Why did the LTFF/EAIF chairs step down before new chairs were recruited?
What kinds of grants tend to be most controversial among fund managers?
How should applicants think about grant proposals that are rejected? I find that newer members of the community especially can be heavily discouraged by rejections; is there anything you would want to communicate to them?
If a project is partially funded by e.g. Open Philanthropy, would you take that as a strong signal of the project's value (e.g. not worth funding at higher levels)?
What are your AI timelines and p(doom)? Specifically:
1. What year do you think there is a 10%[1] chance that we will have AGI by? (P(AGI by 20XX)=10%).
2. What chance of doom do we have on our current trajectory given your answer to 1? P(doom|AGI in year 20XX).
[I appreciate that your answers will be subject to the usual caveats about definitions of AGI and doom, spread of probability distributions, and model uncertainty, so no need to go into detail on these if pushed for time. Also feel free to give more descriptive, gut-feel answers.]
I put 50% originally, but think 10% is more salient (recalling last year's blog).
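Restated in symbols (purely a paraphrase of the two questions above; 20XX is a placeholder for whatever year you give):

```latex
% 1. The year 20XX such that the (subjective) probability of AGI arriving by then is 10%:
\[
P(\text{AGI by } 20\mathrm{XX}) = 0.10
\]
% 2. The chance of doom on our current trajectory, conditional on that answer:
\[
P(\text{doom} \mid \text{AGI in year } 20\mathrm{XX})
\]
```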
Presumably this will differ a fair bit for different members of the LTFF, but speaking personally, my p(doom) is around 30%,[1] and my median timelines are ~15 years (though with high uncertainty). I haven't thought as much about 10% timelines, but it would be some single-digit number of years.
Though a large chunk of the remainder includes outcomes that are much "better" than today but which are also very suboptimal – e.g., due to "good-enough" alignment + ~shard theory + etc, AI turns most of the reachable universe into paperclips but leaves humans + our descendants to do what we want with the Milky Way. This is arguably an existential catastrophe in terms of opportunity cost, but wouldn't represent human extinction or disempowerment of humanity in the same way as "doom."
What disagreements do the LTFF fund managers tend to have with each other about what's worth funding?
What projects to reduce existential risk would you be excited to see someone work on (provided they were capable enough) that don't already exist?
Can grantees return money if their plans change, eg they get hired during a period of upskilling? If so, how often does this happen?
How do you internally estimate you compare against OP/SFF/Habryka's new thing/etc.?
So I wish the EA funding ecosystem was a lot more competent than we currently are. Like if we were good consequentialists, we ought to have detailed internal estimates of the value of various grants and grantmakers, models for under which assumptions one group or another is better, detailed estimates for marginal utility, careful retroactive evaluations, etc.
But we aren't very competent. So here's some lower-rigor takes:
- My current guess is that of the reasonably large longtermist grantmakers, valued solely at expected longtermist impact/$, our marginal grants are at or above the quality of those of all other grantmakers, for any given time period.
- Compared to Open Phil longtermism, before ~2021 LTFF was just pretty clearly more funding constrained. I expect this means more triaging for good grants (though iiuc the pool of applications was also worse back then; however I expect OP longtermism to face similar constraints).
- In ~2021 and 2022 (when I joined) LTFF was to some degree trying to adopt something like a "shared longtermist bar" across funders, so in practice we were trying to peg our bar to be like Open Phil's.
- So during that time I'm not sure there's much difference, naively...
My sense is that many of the people working on this fund are doing this part time. Is this the case? Why do that rather than hiring a few people to work full time?
Any thoughts on Meta Questions about Metaphilosophy from a grantmaker's perspective? For example, have you seen any promising grant proposals related to metaphilosophy or ensuring the philosophical competence of AI / future civilization that you rejected due to funding constraints or other reasons?
How do you think about applications to start projects/initiatives that would compete with existing projects?
How many evaluators typically rate each grant application?
What are some past LTFF grants that you disagree with?
Is there a place to donate to the operations / running of LTFF or the funds in general?
Given the rapid changes to the world that we're expecting to happen in the next few decades, how important do you feel it is to spend money sooner rather than later?
Do you think there is a possibility of money becoming obsolete, which would make spending it now much more sensible than sitting on it and not being able to use it?
This could apply to money in general (given AI concerns), or to any particular currency or store of value.
On LessWrong, jacquesthibs asks:
In light of this (worries about contributing to AI capabilities and safetywashing) and/or general considerations around short timelines, have you considered funding work directly aimed at slowing down AI, as opposed to the traditional focus on AI Alignment work? E.g. advocacy work focused on getting a global moratorium on AGI development in place (examples). I think this is by far the highest-impact thing we could be funding as a community (as there just isn't enough time for Alignment research to bear fruit otherwise), and would be very grateful if a fun...
Do you know how/where people usually find out about the LTFF (to apply for funding and to donate)? Are some referral/discovery pathways particularly successful?
On LessWrong, jacquesthibs asks:
What does your infrastructure look like? In particular, how much are you relying on Salesforce?
What kind of criteria or plans do you look for in people who are junior in the AI governance field and looking for independent research grants? Is this a kind of application you would want to see more of?
Can applicants update their application after submitting?
This was an extremely useful feature of Lightspeed Grants, because the strength of my application significantly improved every couple of weeks.
If it’s not a built-in feature, can applicants link to a Google Doc?
Thank you for answering our questions!
I’ll phrase this as a question to not be off-vibe: Would you like to create accounts with AI Safety Impact Markets so that you’ll receive a regular digest of the latest AI safety projects that are fundraising on our platform?
That would save them time, since they wouldn't have to apply to you separately. If their project descriptions leave open any questions you have, you can ask them in the Q&A section. You can also post critiques there, which may be helpful for the project developers and other donors.
Conversely, you can also send any rejected projects our way, especially if you think they’re net-positive but just don’t meet your funding bar.
How small and short can a grant be? Is it possible for a grant to start out small, and then gradually get bigger and source more people if the research area turns out to be significantly more valuable than it initially appeared? If there are very few trustworthy math/quant/AI people in my city, could you help me source some hours from some reliable AI safety people in the Bay Area if the research area clearly ends up being worth their time?
In relation to EA-related content (photography, YouTube videos, documentaries, podcasts, TikTok accounts), what types of projects would you like to see more of?
Is there any way for me to print out and submit a grant application on paper, in non-digital form, and also without mailing it? E.g. I send an intermediary to meet one of your intermediaries at some Berkeley EA event or something, and they hand over an envelope containing several identical paper copies of the grant proposal. No need for any conversation, fuss, or awkwardness; the papers can be disposed of afterwards, and normal communication would take place if the grant is accepted. I know it sounds weird, but I'm pretty confident that this mitigates risks of a specific class.
Infohazard policy/commitment? I'd like to make sure that the person who reads the grant takes AI safety seriously, and much more seriously than other X-risks; to me that's the main and only limiting factor (I don't worry about taking credit for others' ideas, profiting off of knowledge, or sharing info with others as long as the sharing is done in a way that takes AI safety seriously – only that the reader is not aligned with AI safety). I'm worried that my AI-related grant proposal will distract large numbers of people from AI safety, and I think that someone who also prioritizes AI safety would, like me, act to prevent that (consistently enough for the benefits of the research to outweigh the risks).