
[This is my best attempt at summarizing a reasonable outsider's view of the current state of affairs. Before publication, I had this sanity checked (though not necessarily endorsed) by an EA researcher with more context. Apologies in advance if it misrepresents the actual state of affairs, but that's precisely the thing I'm trying to clarify for myself and others.]

At GiveWell, the standard of evidence is relatively well understood. We can all see the Cost Effectiveness Analysis spreadsheet (even if it isn't taken 100% literally), compare QALYs and see that some charities are likely much more effective than others.
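To make that concrete, here is a minimal sketch of the kind of comparison the spreadsheet supports. The figures below are invented placeholders, not GiveWell's actual estimates:

```python
# Toy GiveWell-style cost-effectiveness comparison.
# All figures are invented placeholders, not GiveWell's estimates.

charities = {
    "Charity A": {"cost_per_treatment": 5.0, "qalys_per_treatment": 0.02},
    "Charity B": {"cost_per_treatment": 40.0, "qalys_per_treatment": 0.05},
}

for name, c in charities.items():
    cost_per_qaly = c["cost_per_treatment"] / c["qalys_per_treatment"]
    print(f"{name}: ${cost_per_qaly:,.0f} per QALY")
# Charity A: $250 per QALY
# Charity B: $800 per QALY
```

The point is not the numbers but the transparency: every input and every division is visible, so disagreements can be localized to specific inputs.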

In contrast, Open Philanthropy is purposefully opaque. As Holden describes in "Anti-principles" for hits-based giving:

We don't: require a strong evidence base before funding something. Quality evidence is hard to come by, and usually requires a sustained and well-resourced effort. Requiring quality evidence would therefore be at odds with our interest in neglectedness.

And:

We don't: expect to be able to fully justify ourselves in writing... Process-wise, we've been trying to separate our decision-making process from our public writeup process. Typically, staffers recommend grants via internal writeups. Late in our process, after decision-makers have approved the basic ideas behind the grant, other staff take over and "translate" the internal writeups into writeups that are suitable to post publicly. One reason I've been eager to set up our process this way is that I believe it allows people to focus on making the best grants possible, without worrying at the same time about how the grants will be explained.

These are reasonable anti-principles. I'm not here to bemoan obfuscation or question the quality of evidence.

(Also note this recent post which clarifies a distinction within Open Phil between "causes focused on maximizing verifiable impact within our lifetimes" and "causes directly aimed at affecting the very long-run future". I'm primarily asking about the latter, which could be thought of as HoldenOpenPhil in contrast to the former AlexOpenPhil.)

My question is really: Given that so much of the decision making process for these causes is private, what are we actually debating when we talk about them on the EA Forum?

Of course there are specific points that could be made. Someone could, in relative isolation, estimate the cost of an intervention, or do some work towards estimating its impact.

But when it comes to actually arguing that X is a high priority cause, or even suggesting that it might be, it's totally unclear to me both:

  1. What level of evidence is required.
  2. What level of estimated impact is required.

To give some more specific examples, it's unclear to me how someone outside of Open Philanthropy could go about advocating for the importance of an organization like New Science or Qualia Research Institute.

Or in the more recent back-and-forth between Rethink Priorities and Mark Lutter on Charter Cities, Linch (of RP) wrote that:

I don't get why analyses from all sides [keep] skipping over detailed analysis of indirect effects.* To me by far the strongest argument for charter cities is the experimentation value/"laboratories of governance" angle, such that even if individual charter cities are in expectation negative, we'd still see outsized returns from studying and partially generalizing from the outsized successful charter cities that can be replicated elsewhere, host country or otherwise (I mean that's the whole selling point of the Shenzhen stylized example after all!).

At least, I think this is the best/strongest argument. Informally, I feel like this argument is practically received wisdom among EAs who think about growth. Yet it's pretty suspicious that nobody (to the best of my knowledge) has made this argument concrete and formal in a numeric way and thus exposed it to stress-testing.

I agree that this is a strong argument for charter cities. My (loose) impression is that it's been neglected precisely because it's harder to express in a formal and numeric way than the existing debate (from both sides) over economic growth rates and subsequent increases to time-discounted log consumption.
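To illustrate the difficulty, here is a deliberately crude Monte Carlo sketch of the experimentation-value argument. Every parameter and distribution is an assumption I invented for illustration; the problem Linch points to is precisely that nobody has defended real values for these inputs:

```python
import random

# Crude sketch of the "laboratories of governance" argument: even if a
# single charter city is negative in expectation, a portfolio can pay off
# when one outsized success can be partially replicated elsewhere.
# All parameters are invented assumptions, for illustration only.

random.seed(0)

N_CITIES = 10           # pilot charter cities funded
P_BIG_WIN = 0.01        # assumed chance a pilot is an outsized success
WIN_VALUE = 100.0       # assumed value of one big success (arbitrary units)
LOSS_VALUE = -2.0       # assumed value of a typical failed/mediocre pilot
REPLICATION_GAIN = 5.0  # assumed multiple of WIN_VALUE gained by copying
                        # a proven model (host country or otherwise)

def run_portfolio() -> float:
    values = [WIN_VALUE if random.random() < P_BIG_WIN else LOSS_VALUE
              for _ in range(N_CITIES)]
    total = sum(values)
    if max(values) == WIN_VALUE:  # at least one success to learn from
        total += REPLICATION_GAIN * WIN_VALUE
    return total

single_ev = P_BIG_WIN * WIN_VALUE + (1 - P_BIG_WIN) * LOSS_VALUE
trials = [run_portfolio() for _ in range(100_000)]
print(f"EV of one isolated pilot:         {single_ev:.2f}")            # -0.98
print(f"EV of portfolio with replication: {sum(trials)/len(trials):.2f}")
```

With these made-up inputs the isolated pilot has negative expected value while the portfolio is strongly positive, because the replication term dominates. Writing down even a toy model like this forces the load-bearing parameters into the open, which is exactly the stress-testing Linch is asking for.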

Again, I'm not here to complain about streetlight effects or express worry that EA tries too hard to quantify things. I understand the value of that approach. I'm specifically asking, as far as it concerns the Holden Open Phil world, which is expressly (as I understand it) more speculative, risk-neutral and non-transparent than some other EA grant makers, what is the role of public EA Forum discussion?

Some possibilities:

  • Public discussion is meant to settle specific questions, but not address broader questions of grant-worthiness.
  • Even in public, discussions can be productive as long as they have sufficient context on Holden Open Phil priorities, either through informal channels, or interfacing with HOP directly (perhaps as a consultancy).
  • Absent that context, EA Forum serves more like training grounds on the path to work within a formal EA organization.

2 Answers

I was really confused by your post because it seemed to ask for normative rules about not talking about philanthropy and grants to EA causes, which doesn't seem reasonable.

Now, after reading your comments, I think what you meant is closer to:

"It seems unworkably hard to talk about grants in the new cause areas. What do we do?"

I’m still not sure if this is what you want, but since no one has really answered, I want to try to give thoughts that might serve your purposes.


From your comment:

We consider AI Safety to be very important
A trusted advisor is excited
Everything checks out at the operational level
 

As far as I can tell, these are not the kinds of issue we are (or should be) discussing on EA Forum

I don't understand the statement that "these are not the kinds of issue we are (or should be) discussing".

To be specific:


We consider AI Safety to be very important


This is a cause area question and this seems totally up for discussion.

For example, someone could criticize a cause area by pointing to a substantial period of time, say 3 or 5 years, in which progress has been low or stagnant, or by citing experts who say as much, or by arguing that the area is plausibly already funded or solved.

(This seems possible but very difficult, both because of moral and epistemic uncertainty and because cause areas are not zero-sum games.)

On the positive side, people can post new cause areas and discuss why they are important.

This seems much more productive, and there may even be strong demand for this.

It seems unlikely that an EA Forum discussion alone will establish a new cause area, but such a discussion seems like an extremely valuable use of the forum.

A trusted advisor is excited


It seems reasonable to say that existing advisors are low in value or that new advisors can be added. This can be done diplomatically:

  • "EA has really benefited from increase in Longtermism community, I wonder if the pool in Open Phil’s advisors has been expanded to match?"
  • "Here are a list of experts who are consistently highly valued by the community. Has Open Phil considered adding them as advisors?"
  • "I see that person A was an advisor for this grant. I understand Person B who is also an expert has these beliefs that [plausible for these reasons] that seems to suggest different views for this intervention."


Everything checks out at the operational level (“application prototyping, testing, basic organizational set-up, and public talks at Stanford and USC”)


It seems easy to unduly pick holes in new orgs, but there are situations where things are very defective and the outlook is bad, and it’s very reasonable to point this out, again diplomatically:

  • “I think this org had several CEOs over a 2-year period. This is different from what I've seen in other EA orgs, and clarification about [issues with tangible output] would be useful.”
  • “I heard the founder talk at Stanford. During the talk, person A pointed out that X and Y were true. I think person A is an expert and their concerns weren't addressed. Here is a summary of them...”

(Note that I think I have examples of most of the above that actually occurred. I don't think it's that productive or becoming to link them all.)

In the above, I tried to focus on criticism, because that is harder.

I think your post might be asking for more positive ways to communicate meta issues—this seems sort of easy (?). 

To be clear, you say:
 

I could be wrong, but it's hard to imagine endorsing a norm where many top EA Forum posts are of the form "I talked to Alexey Guzey from New Science, it seems exciting" or worse "I talked to Adam Marblestone about New Science, and he seems excited about it".

But it's not totally clear to me which of these are both useful and appropriate. For example, I could write a post on whether or not the constructivist view of science is correct (FWIW I don't believe Alexey actually holds this view), but it's not clear that the discussion would have any bearing on the grant-worthiness of New Science.


I think a red herring is that in the “Case for the grant” section, the wording is very terse. But I don't think this terseness is a norm outside of grant descriptions, or necessarily the only way to talk about or signal the value of organizations.

For example, a post, a few pages long, with a perspective about New Science that points out things that are useful and interesting would certainly be well received (the org does seem extremely interesting!). For example, it could mention tangible projects and researchers, or otherwise present truthful narratives suggesting that New Science is attracting and influencing talent or improving the life sciences ecosystem.

 

I might have more to say but I am worried I still "don't get" your question.

a post, a few pages long, with a perspective about New Science that points out things that are useful and interesting would certainly be well received

Okay that's helpful to hear.

A lot of this question is inspired by the recent Charter Cities debate. For context:

  • Charter Cities Institute released a short paper a while back arguing that it could be as good as top GiveWell charities
  • Rethink Priorities more recently shared a longer report, concluding that it was likely not as good as GiveWell charities
  • Mark Lutter (who runs CCI) replied, arguing that
...
Charles He
But isn't the GiveWell-style philanthropy exactly not applicable for your example of charter cities? My sense is that the case for charter cities has some macro/systems process that is hard to measure (and that is why it is only now a new cause area and why the debate exists).

I specifically didn't want to pull out examples, but if it's helpful, here's another example of a debate over an intervention that relies on difficult-to-measure outcomes and involves hard-to-untangle, divergent worldviews between the respective proponents.

(This is somewhat of a tangent, but honestly, your important question is inherently complex and there seems to be a lot going on, so clarity from smoothing out some of the points seems valuable.)

I don't understand why my answer in the previous post above, or these debates, aren't object-level responses to how you could discuss the value of these interventions. I'm worried I'm talking past you and not being helpful. Now, trying more vigorously / speculatively here:

  1. Maybe one answer is that you are right, it is hard to influence direct granting—furthermore, this means that directly influencing granting is not what we should be focused on in the forum. At the risk of being prescriptive (which I dislike), I think this is a reasonable attitude on the forum, in the sense that "policing grants" or something should be a very low priority for organic reasons for most people, and instead learning/communicating and a "scout mindset" is ultimately more productive. But such discussion cannot be proscribed, and even a tacit norm against it would be bad.
  2. Maybe you mean that this level of difficulty is "wrong" in some sense. For example, we should respond by paying special, unique attention to the HOP grants or expect them to be communicated and discussed actively. This seems not implausible. I could see how HOP areas are harder, but as in my first comment, I think it's inherently hard for anyone to criticize any w

It seems to me that the problem isn't just with Open Phil-funded speculative orgs, but with all speculative orgs.

To give some more specific examples, it's unclear to me how someone outside of Open Philanthropy could go about advocating for the importance of an organization like New Science or Qualia Research Institute.

I think it's just as unclear how someone inside Open Phil could advocate for those. Open Phil might have access to some private information, but that won't help much with something like estimating the EV of a highly speculative nonprofit.

I also don't know for sure, but this example might be illustrative:

Ought General Support:

Paul Christiano is excited by Ought’s plan and work, and we trust his judgement.

And:

We have seen some minor indications that Ought is well-run and has a reasonable chance at success, such as: an affiliation with Stanford’s Noah Goodman, which we believe will help with attracting talent and funding; acceptance into the Stanford-Startx4 accelerator; and that Andreas has already done some research, application prototyping, testing, basic organizational set-up, and public talks at Stanford and USC.

So it's not really a big expected value calculation. It's more like:

  • We consider AI Safety to be very important
  • A trusted advisor is excited
  • Everything checks out at the operational level

It might not follow point-by-point, but I can imagine how a similar framework might apply to New Science / QRI / Charter Cities.

Returning to the original point: As far as I can tell, these are not the kinds of issue we are (or should be) discussing on EA Forum. I could be wrong, but it's hard to imagine endorsing a norm where many top EA Forum posts are of the form "I talked to Alexey Guzey from New Science, it seems exciting" or worse "I talked to Adam Marblestone about New Science, and he seems excited about it". ...

6 Comments

This is my best attempt at summarizing a reasonable outsider's view of the current state of affairs. Before publication, I had this sanity checked (though not necessarily endorsed) by an EA researcher with more context. Apologies in advance if it misrepresents the actual state of affairs, but that's precisely the thing I'm trying to clarify for myself and others.

I just want to note that I think this question is great and does not misrepresent the actual state of affairs.

I do think there's hope for some quantitative estimates even in the speculative cases; for example Open Phil has mentioned that they are investigating the "value of the last dollar" to serve as a benchmark against which to compare current spending (though to my knowledge they haven't said how they're investigating it).

I do think there's hope for some quantitative estimates even in the speculative cases; for example Open Phil has mentioned that they are investigating the "value of the last dollar" to serve as a benchmark against which to compare current spending (though to my knowledge they haven't said how they're investigating it).

Ajeya explains it in her 80k interview and the result is:

"this estimate is roughly $200 trillion per world saved, in expectation. So, it’s actually like billions of dollars for some small fraction of the world saved, and dividing that out gets you to $200 trillion per world saved. This is quite good in the scheme of things, because it’s like less than two years’ worth of gross world product. It’s like everyone in the world working together on this one problem for like 18 months, to save the world."

Open Phil only has so much energy and cognitive diversity to spend evaluating your project, and if your project is too weird for that, they just can't fund it. Instead, you can donate/work/volunteer for weird projects yourself, and convince others on the forum to do the same. There are even other billionaires out there, and ways to get your charter cities or whatever without enormous philanthropic funding. If you're the only one who believes, maybe you need to bet.

I think the EA forums have an important role in being a platform where EA leaders can make oaths of fealty to the appropriate Open Phil staff member in their cause area. 

(Important Reminder!—have you remembered to upvote this week's Cold Takes post?)

 

Given that so much of the decision making process for these causes is private, what are we actually debating when we talk about them on the EA Forum?
 

More seriously, can you elaborate on the nuances of what you think makes the new ("HOP") cause areas private compared to any other cause area?

Taking the opposing perspective to your post, and using examples from the "Global Health and Well Being" space:

  • I know of one cause area that is critical, well known, yet that no one really discusses here for compelling reasons, even though we would really want to.
     
  • Other cause areas have powerful, historic, non-EA aligned wings, which makes discussion difficult. I would argue AI and x-risk are freer from these spectres and enjoy more open discussion.
     
  • Also, I think it’s very likely that "GiveWell"-style interventions, whose epistemics might now be viewed as prosaic, could have been equally difficult to discuss or promote. Even if something seems well rated and has strong evidence now, it doesn’t mean the marginal decision to fund it wasn’t opaque and difficult at the time it was created (e.g. concerns with RCTs scaling up). Whole subdomains of development have risen and fallen with an epistemology that can seem impenetrable and often socially/politically motivated.
     
  • If you think that intangible qualities such as technical depth make "HOP" causes impenetrable, the same criticism applies to the large Open Phil spending on specific scientific bets.

It's unclear how we would expect a public forum discussion to substantially influence any of the scientific granting above.
 

To give some more specific examples, it's unclear to me how someone outside of Open Philanthropy could go about advocating for the importance of an organization like New Science or Qualia Research Institute.
 

(I just found these new orgs from your post. These are really interesting!)

I think it’s reasonable to pattern match Qualia Research Institute to orgs like OpenAI or MIRI. 

MIRI has active Lesswrong-style forums and their work is literally discussed on Lesswrong and here. 

As you know, AI interested folks are also usually associated with the applied rationality community, which is exceptionally open to debate and direct reasoning. 

New Science has a website (which is a little polemical) with perspectives and critiques that I think are fairly easy to understand:

With quotes like “And let's make science advance one young scientist at a time, not one funeral at a time”, it seems to have something like a constructivist view of science (?).

This worldview and explicitness of its mission seems promising for open discussion of itself and other meta issues.

Also, New Science explicitly models itself on Cold Spring Harbor. I have a small sample size, but the one scientist I know who went there is welcoming of skepticism about science and is open to discussion.

Using discussion or openness as a proxy for your question, the above suggests these orgs would be easy to talk about.

There are objections that aren’t addressed here, but these can be discussed in other comments.

I honestly think your post is great; this comment only touches on one facet, is a little devil's-advocate, and moderate in quality.



 

the same criticism applies to the large Open Phil spending on specific scientific bets.

Sorry, just to clarify again (and on the topic of swearing fealty), I don't mean any of this as a criticism of Open Phil. I agree enthusiastically with the hits-based giving point, and generally think it's good for at least some percentage of philanthropy to be carried out without the expectation of full transparency and GiveWell-level rigor.

It's unclear how we would expect a public forum discussion to substantially influence any of the scientific granting above.

I think that's what I'm saying. It's unclear to me if EA Forum, and public discussions more generally, play a role in this style of grant-making. If the answer is simply "no", that's okay too, but would be helpful to hear.

these orgs would be easy to talk about.

I agree that there are avenues for discussion. But it's not totally clear to me which of these are both useful and appropriate. For example, I could write a post on whether or not the constructivist view of science is correct (FWIW I don't believe Alexey actually holds this view), but it's not clear that the discussion would have any bearing on the grant-worthiness of New Science.

Again, maybe EA Forum is simply not a place to discuss the grant-worthiness of HOP-style causes, but the recent discussion of Charter Cities made me think otherwise.

I think your post is great honestly

Thanks!

Thanks for the thoughtful response!

Again, maybe EA Forum is simply not a place to discuss the grant-worthiness of HOP-style causes, but the recent discussion of Charter Cities made me think otherwise.

I don't think this is true or even can be true, as long as we value general discussion.

I think I have a better sense of your question and maybe I will write up a more direct answer from my perspective. 

I am honestly worried my writeup will be long-winded or wrong, and I'll wait in case someone else writes something better first.

Also, using low effort/time on your end, do you have any links to good writeup(s) on the "constructivist view of science"?

I'm worried I don't have a real education and will get owned in a discussion related to it, worst case while deep in some public conversation relying on it.
