On 17 February 2024, the mean length of the main text of the write-ups of Open Philanthropy’s largest grants in each of its 30 focus areas was only 2.50 paragraphs, whereas the mean grant amount was 14.2 M 2022-$[1]. For 23 of the 30 largest grants, the main text was just 1 paragraph. The calculations and information about the grants are in this Sheet.
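For clarity about what the calculation involves, here is a minimal sketch of how the means could be reproduced from a CSV export of the Sheet. The file name and column names below are hypothetical, so they would need to be adjusted to match the actual Sheet.

```python
# Minimal sketch, assuming a CSV export of the Sheet with one row per focus
# area's largest grant, and hypothetical columns "paragraphs" (main-text
# paragraphs of the write-up) and "amount_2022_usd" (grant amount in 2022-$).
import csv

with open("largest_grants_by_area.csv", newline="") as f:  # hypothetical file name
    rows = list(csv.DictReader(f))

paragraphs = [float(r["paragraphs"]) for r in rows]
amounts = [float(r["amount_2022_usd"]) for r in rows]

mean_paragraphs = sum(paragraphs) / len(paragraphs)
mean_amount = sum(amounts) / len(amounts)
one_paragraph = sum(p == 1 for p in paragraphs)

print(f"Mean main text length: {mean_paragraphs:.2f} paragraphs")
print(f"Mean grant amount: {mean_amount / 1e6:.1f} M 2022-$")
print(f"Grants whose main text is 1 paragraph: {one_paragraph} of {len(rows)}")
```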
Should the main text of the write-ups of Open Philanthropy’s large grants (e.g. at least 1 M$) be longer than 1 paragraph? I think greater reasoning transparency would be good, so I would like it if Open Philanthropy had longer write-ups.
In terms of other grantmakers aligned with effective altruism[2]:
- Charity Entrepreneurship (CE) produces an in-depth report for each organisation it incubates (see CE’s research).
- Effective Altruism Funds has write-ups of 1 sentence for the vast majority of the grants of its 4 funds.
- Founders Pledge has write-ups of 1 sentence for the vast majority of the grants of its 4 funds.
- Future of Life Institute’s grants have write-ups roughly as long as Open Philanthropy’s.
- Longview Philanthropy’s grants have write-ups roughly as long as Open Philanthropy’s.
- Manifund's grants have write-ups (comments) of a few paragraphs.
- Survival and Flourishing Fund has write-ups of a few words for the vast majority of its grants.
I encourage all of the above except CE to have longer write-ups. I focussed on Open Philanthropy in this post given that it accounts for the vast majority of the grantmaking aligned with effective altruism.
Some context:
- Holden Karnofsky posted about how Open Philanthropy was thinking about openness and information sharing in 2016.
- There was a discussion in early 2023 about whether Open Philanthropy should share a ranking of grants it had produced at the time.
[1]
Open Philanthropy has 17 broad focus areas, 9 under global health and wellbeing, 4 under global catastrophic risks (GCRs), and 4 under other areas. However, its grants are associated with 30 areas.
I define the main text as everything besides headings, excluding paragraphs of the following types:
- “Grant investigator: [name]”.
- “This page was reviewed but not written by the grant investigator. [Organisation] staff also reviewed this page prior to publication”.
- “This follows our [dates with links to previous grants to the organisation] support, and falls within our focus area of [area]”.
- “The grant amount was updated in [date(s)]”.
- “See [organisation's] page on this grant for more details”.
- “This grant is part of our Regranting Challenge. See the Regranting Challenge website for more details on this grant”.
- “This is a discretionary grant”.
I count lists of bullets as 1 paragraph.
[2]
The grantmakers are ordered alphabetically.
I think you are placing far too little faith in the power of the truth. None of the events you list above are bad. It's implied that they are bad because they will cause someone to unfairly judge Open Phil poorly. But why presume that more information will lead to worse judgment? It may lead to better judgment.
As an example, GiveWell publishes detailed cost-effectiveness spreadsheets and analyses, which definitely make me take their judgment way more seriously than I would otherwise. They also provide fertile ground for criticism (a popular recent magazine article and essay did just that, nitpicking various elements of the analyses that it thought were insufficient). The idea that GiveWell's audience would then think worse of them in the end because of the existence of such criticism is not credible to me.
Agreed. GiveWell has revised its estimates numerous times based on public feedback, including dropping entire programmes after evidence emerged that its initial reasons for funding were excessively optimistic, and it is nevertheless generally well regarded, including outside EA. Most people understand its analysis will not be bug-free.
OpenPhil's decision to fund Wytham Abbey, on the other hand, was hotly debated before they'd published even the paragraph summary. I don't think declining to make any metrics available except the price tag increased people's confidence in the decision-making process, and participants in it appear to admit that, with hindsight, they would have been better off doing more research and/or giving more consideration to external opinion. If the intent is to shield leadership from criticism, it isn't working.
Obviously GiveWell exists to advise the public, so sharing detail is their raison d'être, whereas OpenPhil exists to advise Dustin Moskovitz and Cari Tuna, who will have access to all the detail they need to decide on a recommendation. But I think there are wider considerations in favour of publicising more on the project and the rationale behind decisions even if OpenPhil d...
As a critic of many institutions and organizations in EA, I agree with the above dynamic and would like people to be less nitpicky about this kind of thing (and I tried to live up to that virtue by publishing my own quite rough grant evaluations in my old Long Term Future Fund write-ups).
Thanks for the thoughtful reply, Mathias!
I think this applies to organisations with uncertain funding, but not to Open Philanthropy, which is essentially funded by a billionaire quite aligned with their strategy?
Even if the analyses do not contain errors per se, it would be nice to get clarity on the moral assumptions. I wonder whether Open Philanthropy's prioritisation among human and animal welfare interventions in their global health and wellbeing (GHW) portfolio considers 1 unit of welfare in humans as valuable as 1 unit of welfare in animals. It does not look like it, as I estimate that the cost-effectiveness of corporate campaigns for chicken welfare is 680 times Open Philanthropy's GHW bar.
There's a lot of room between publishing more than ~1 paragraph and "publishing their internal analyses." I didn't read Vasco as suggesting publication of the full analyses.
Assertion 4 -- "The costs for Open Phil to reduce the error rate of analyses would not be worth the benefits" -- seems to be doing a lot of work in your model here. But it seems to be based on assumptions about the nature and magnitude of the errors that would be detected. If a number of errors were material (in the sense that correcting them would have changed the grant/no-grant decision, or would have seriously changed the funding level), I don't think it would take many such errors for assertion 4 to be incorrect.
Moreover, if an error were found in, e.g., a five-paragraph summary of a grant rationale, the odds of the identified error being material/important would seem higher than for the average error found in (say) a 30-page writeup. Presumably the facts and conclusions that made it into the short writeup would be ~the more important ones.
What you say is true. One thing to keep in mind is that academic data, analyses, and papers are usually all made public these days. Yes, with OpenPhil, funding rather than just academic rigor is involved, but I feel like we should aim to have at least the same level of transparency as academia...
What if, instead of releasing very long reports about decisions that were already made, there were a steady stream of small analyses on specific proposals, or even parts of proposals, to enlist others to aid error detection before each decision?