1Day Sooner made our last proposal to OP public (with some minor redactions), but I do think for a lot of groups (particularly those doing advocacy) there could be a significant tradeoff between candor/clarity and transparency, so it's not a costless choice. I do tend to think OP making grant requests public as a default would probably be good (I think information often has a lot of positive externalities that can be hard to observe or predict). But doing it in some cases and not others might draw attention/criticism to the more controversial areas, and it would create more work for OP and for applicants.
Thanks for sharing this! I've also been working on this question of "what would better forecasting by AIs enable?" (or, stated differently, "what advances could instantaneous superforecasting 'too cheap to meter' unlock?"). I've come at this from a somewhat different angle: thinking about how forecasting systems could fit into a predictive process for science and government that imitates active inference in brains. Here are slides from a presentation I gave on this topic at Manifest, and here is a half-finished draft essay I'm working on, in case you're interested.
I haven't had time to read all the discourse about Manifest (which I attended), but it does highlight a broader issue about EA that I think is poorly understood, which is that different EAs will necessarily have ideological convictions that are inconsistent with one another.
That is, some people will feel their effective altruist convictions motivate them to work to build artificial intelligence at OpenAI or Anthropic; others will think those companies are destroying the world. Some will try to save lives by distributing medicines; others will think the people those medicines save eat enough tortured animals to generally make the world worse off. Some will think liberal communities should exclude people who champion the existence of racial differences in intelligence; others will think excluding people for their views is profoundly harmful and illiberal.
I'd argue that the early history of effective altruism (i.e. the last 10-15 years) has generally been one of centralization around purist goals -- that is, there are central institutions that effective altruism revolves around, and specific causes and ideas treated as the most correct form of effective altruism. I'm personally much more a proponent of liberal, member-first effective altruism than of purist, cause-first EA. I'm not sure which of those options the Manifest example supports, but I do think it's indicative of the broader reality that, on a number of issues, people on each side can believe the most effective altruist thing to do is to defeat the other.
Different charities will have different effects, but broadly speaking, if you save someone's life, that person continues to live and generate economic value (they do work, other people benefit by associating with them, etc.). Some causes (like animal welfare) may be more like consumption (though accomplishing animal welfare advocacy goals may change policy in ways that persist further in time).
But there may also be a category error here, because even if the nonprofit just pays people to do nothing, the money you gave doesn't cease to exist -- it becomes income for the employees and is spent on some combination of consumption and savings, by them and by the government that taxes them. So I think it may reduce to the question: in the broader economy, is savings or consumption preferred? And I guess that would probably vary over time based on the macroeconomic situation.
I think the transfer from the philanthropic actor to the charity preserves the "altruism" of the resource-utilizer, so there shouldn't be a net loss there unless you think gains due to charity don't accrue as quickly as capital gains in the private market. The question then mostly reduces to give now versus give later, unless there's some belief that concentrating resources is inherently better than diffusing them.
Personally, I think forecasting specifically for drug development could be very impactful: both in the general sense of aligning fields around the probability of success of different approaches (at a range of scales -- very relevant to scientists and funders alike) and in the more specific regulatory use case (public predictions of the safety/efficacy of medications as part of approvals by the FDA, EMA, etc.).
More broadly, predicting the future is hugely valuable. Insofar as effective altruism aims to achieve consequentialist goals, the greatest weakness of consequentialism is uncertainty about the effects of our actions. Forecasting targets that problem directly. The financial system creates a robust set of incentives to predict future financial outcomes -- trying to use forecasting to build a tool with broader purpose than finance seems like it could be extremely valuable.
I don't really do forecasting myself, so I can't speak to the field's practical ability to achieve its goals (though as an outsider I feel optimistic); perhaps there are practical reasons it might not be a good investment. But overall, to me it definitely feels like the right thing to be aiming at.
Dan Watendorf at the Gates Foundation has said they've funded a few different companies that produce broadly effective antiviral prophylactics (e.g. a nasal spray that would keep you from getting colds, flus, and COVID for three months). He seemed optimistic about the technical solvability of the problem but pessimistic about a financing model that would make it viable (i.e. transmission reduction is not properly incentivized by the market).
If anyone is based in the Toronto area and wants to support challenge studies, there's a chance you could be quite helpful to 1Day Sooner's hepatitis C work. Please email me (josh@1daysooner.org) if you want to learn more.