In this 2017 talk, the Open Philanthropy Project's Nick Beckstead discusses their grants supporting the growth of the effective altruism community. He also analyzes the nature of the effective altruism community, and discusses what sorts of work it needs more and less of. At the very end you will find an added section on what you can do to help.
The below transcript is lightly edited for readability.
The Talk
I'm Nick. I'm from the Open Philanthropy Project, where I'm a Program Officer. Open Phil is a philanthropic foundation which uses effective altruist principles to guide our grant-making. I want to talk a little bit about my role as a Program Officer supporting the growth of the effective altruism community: the goals I have in guiding Open Phil's grant-making on that topic, some of the grants we have made and might make in the future, and some suggestions for the effective altruism community that have arisen from my experience thinking about this work.
I think of my main objective here as empowering the effective altruism community. That requires having good relationships with people in this community, and understanding what is going well and badly in the community, what the community wants from Open Phil, what needs it has, and how Open Phil can help it to thrive. My goal is then to recommend grants that are responsive to all of those needs.
When I think about how to characterize the effective altruism community, one way is by demonstration: looking at the people who show up at these conferences and talk about these things, and have been doing so since maybe 2007 or 2009, depending on where you start counting. It's worth asking what is special about this community and this set of people, and why this is an interesting group to think about supporting.
I guess what I would say is that it has a distinctive set of values and a distinctive set of norms around thinking. There are things you could call 'intellectual flavors,' which I've listed on this slide; they aren't deep commitments of the community or anything, but I feel like they tinge most of the discussion that happens here. You can also characterize the community, as is perhaps most commonly done, in terms of the set of issues that it prioritizes.
When I speak of helping the effective altruism community, this, in addition to the list of grantees that I'll discuss later, is what I mean. I think a lot of these things are pretty rare and pretty interesting and valuable. When I think about the grants that Open Phil is making in this area, I guess a question is: why make them? What would constitute success? The thing that's highest on my list right now is recruiting and identifying talent to work on the set of issues that's prioritized by the effective altruism community. I would go back to the standard list of EA causes: global development, animal welfare, transformative technologies, and global catastrophic risks, especially AI.
I think there are a number of other major benefits, but recruiting and identifying talent for those things is central. If I imagine looking back on the grants that Open Phil has made in this area over the coming years, the way I would most anticipate seeing a lot of success would be if we had brought people in to do valuable work in those areas. That could include strategic analysis to figure out which parts of these topics are most important to prioritize and in what ways, and also just doing valuable work on them directly, especially on AI and global catastrophic risks.
I should say I've lumped some of the organizations that work on global catastrophic risks and risks from artificial intelligence into this portfolio because there's just a lot of overlap in the networks and trust relationships, and those are areas that I prioritized pretty highly in thinking about grants to make in this field.
I think there are some longer-term objectives that could be hoped for as well, such as shifting intellectual culture. You could imagine a world in which some of these core EA concepts become more popular and shape intellectual discourse in universities and permeate things, instead of just being something that all the people who show up at conferences like this are inclined to think about.
I also think a little bit about providing funding to impactful organizations. That's certainly been a major thing that the effective altruism community does, but it's not really a major focus of the grant-making that Open Philanthropy is doing in this area at this time. I'll say a little bit about the reasons for that later.
These are the grants. You can think about a foundation, at the end of the day, in terms of the grants that it makes, and the grants that have been made so far through Open Phil to the effective altruism community are listed as follows. These were all made in the last year or so: 80,000 Hours, the Centre for Effective Altruism, the Center for Applied Rationality, SPARC, Founders Pledge, and ESPR (previously called EuroSPARC). We've written about all of these on our grants pages. I won't go into them much here because you'll be running into these organizations, but this is the core set of grants we've made over the last year in the EA community.
Then there are some grants that I would say are more focused on potential risks from advanced AI and global catastrophic risks. The grantees here include the Future of Humanity Institute, Future of Life Institute, the Machine Intelligence Research Institute, and Robin Hanson at George Mason University. I won't get too much into the rationale for these things. We've written about all these grants on our website.
So why is additional funding a secondary priority for the grants that Open Phil is making in this space? Well, the effective altruism community focused on this quite a lot in the past and has had a fair amount of success with it, and as a result lots more funding is available for the causes that the effective altruism community cares about the most. That's been a change in the situation, and it has caused me, as a Program Officer, to rethink what this field needs and what is most important for us to do.
I would love to see a world where, when people are talking about effective altruism, the focus is a little bit less on philanthropy as such, and is more broadly about what people can do with their lives in order to contribute to the world and most effectively make a difference. I see in many ways more potential for this community to make a big difference through actions in that kind of category. Anyway, that's a response to a change in the situation as I see it.
Then the other reason that I've changed my mind on this is just a change in view about how the world works, particularly about allocating diligent attention toward some vision versus allocating funding in order to solve problems in the world. I used to have a view that I would now describe as something like naïve microeconomics, where accomplishing good with money is like, say, buying a commodity: if you had a mission of having more cars of a certain type made, then there's a very efficient way to translate money into cars. You have some verifiable specification of what this kind of car is, you have people who manufacture these cars, and if you just communicate to them exactly what you want, you can really get quite a lot of cars.
I think that philanthropy, especially in some of the more opaque and difficult-to-communicate areas that are important to the effective altruism community, is really not that similar to buying a commodity. There are very large transaction costs, and it's very difficult to communicate exactly what the vision is for these different areas. As a result, it can be very difficult to scale just by bringing more money into the field and having more people than we already have working on earning-to-give type strategies. That's really an update to my view, and an input to a change in how important I think it is to bring additional funding into the effective altruism community.
If I were going to summarize the theme of the grants we've made over the last year or so, I would say it's something like this: a new funder enters the space. When the new funder enters the space, usually the easy thing to do is to look at who's doing work that resonates with their goals, who's doing the work that they are most excited about. We then work with those people to get them funding to do their shovel-ready projects, and to expand them in a way that makes sense to the funder. It's more community-led in that way, and that's been our grant-making over the last year or so.
Looking to the future, I'm thinking about what I see as the main constraint in the effective altruist community, which is something like this: engaging people who have a deep understanding of the important visions that inspire this community, and getting them working on innovative projects that carry those visions forward. It's really about talent identification and recruitment to these problems, as I see it. I'm thinking about things for Open Phil to fund in this area, and here are a few priority areas.
One of them is fellowships providing education and training for community members who want to enter relevant fields. I think Will mentioned earlier in his talk the importance of increasing specialization as this community grows, and there are a number of fields where I would love to see a bit more of that. I'll talk about the details of that later. I'm also thinking a bit about recruitment in undergraduate communities, and about funding research with an EA lens in academia. That's a little bit more of a nascent topic.
On that note, to summarize a bit in terms of suggestions for the EA community: I have said that I think we need less of an emphasis on earning to give in the community right now. People have been hearing that message for a while over the last couple of years, but I thought I'd repeat it and state my reasons for it. I would like to see more people in our community getting involved with areas where we don't have as much deep expertise and activity. High on my priority list are areas in AI strategy; 80,000 Hours has a very nice page explaining the need for that area and giving a bit of flavor for the questions involved. Also expertise in machine learning for technical AI safety work.
I would really love to see a lot more people in the effective altruist community getting involved in biosecurity. I think that is a smaller potential global catastrophic risk than AI, but it's an important one, and it's an area where a lot of people from this community could plausibly make substantial contributions. It currently doesn't receive very much attention. There are also a variety of other roles in the US government that I would love to see EAs pursuing. I think Jason Matheny, in his role as director of IARPA, is a great example of what can be accomplished in that domain. I'd also love to see more people in this community get expertise in biology and economics. That's part of the reason I mentioned the possibility of Open Phil funding fellowships to get people in this community into roles like those.
More generally, given the difficulty of communicating nuanced visions and deep context, I think our outreach plans should aim to find the people who deeply understand the core ideas of effective altruism, especially people who want to provide full-time attention and effort, since they are the ones needed to implement and refine the strategies that I think are most important for the success of the community. Those are the main things that I wanted to say. I'll wrap up there and then we'll go to the chat. Thanks.
Q&A
Question: Thanks for the talk. I wanted to start by asking if you could expand a bit on what you're talking about with getting EA and academia a bit more synced up. How do you think EA should relate to academia?
Nick Beckstead: Okay, well, that's a big question, but the thing I had in mind is something like this: I think there's something powerful that can happen when ideas are traced back to their foundations and deeply explained, and there are people in a field working within a paradigm. EA has a certain lens and set of questions that it asks when looking at the world. I can imagine a world in which people, perhaps most immediately in philosophy and economics departments, were thinking about questions that are of particular interest to the effective altruist community, and in which that is seen as a more central part of their disciplines.
For example, there are certain views that many effective altruists have about the long-run future that are not standard orthodoxy, and even clash with standard orthodoxy in economics, say about discount rates or how you do population ethics. You could imagine a world in which you had some economists thinking about these same questions using assumptions more similar to those of the effective altruist community. I could imagine good things happening from that shift.
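To make the discount-rate clash concrete, here is a minimal worked illustration; the 3% rate and the 500-year horizon are assumed figures chosen for this sketch, not numbers from the talk. Under standard exponential discounting, a benefit of value $V$ arriving $t$ years from now has present value

$$\mathrm{PV} = \frac{V}{(1+r)^{t}}, \qquad \frac{1}{(1.03)^{500}} \approx 3.8 \times 10^{-7},$$

so at a 3% annual rate, a benefit 500 years out is weighted down by a factor of roughly 2.6 million, i.e. treated as almost worthless. A view on which the long-run future matters enormously points instead toward a near-zero rate of pure time preference, which is the kind of departure from economic orthodoxy at issue here.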
I don’t know. It's not a great answer exactly, but I think what I mean is somebody who is in one of these fields taking questions of substantial interest to the effective altruist community, working on them and tracing out their intellectual foundations.
Question: I guess another model of how EA could relate to academia is setting up EA as an academic field in itself or something. It sounds like you might be more enthusiastic about interacting with existing fields.
Nick Beckstead: I wouldn't say that necessarily. I think those are both very interesting kinds of opportunities. Open Phil supports the Future of Humanity Institute, and in some ways they're technically part of the philosophy department at Oxford, but a lot of the papers they're writing are not addressing the central preoccupations of some other field. And yet I think they're important questions that someone should be thinking about rigorously. In some ways that's EA in academia, but it's not really trying to be prestigious according to the norms of some other discipline. I think both are interesting in their own way.
Question: Thanks. One thing someone asks is: what can we do to build an effective community outside of the Bay Area? I'd be interested in how you might go about doing that, but also, in your view, how important is online interaction versus meeting in person and creating hubs for the community?
Nick Beckstead: I'm not sure I have anything very systematic to say about that. I think online interaction is very important and in-person interaction is very important and they play different roles. How do you create effective hubs? I think that is a bit mysterious to me, so I won't attempt to answer that.
Question: Going back to our previous discussion about interacting with philosophy and economics, someone asks: what areas or questions of economics, more specifically, would you consider relevant for EA?
Nick Beckstead: When I think about academic research, there are different types of value that you could hope to derive from it. One kind would be: this is brand new information or knowledge that no one has, and it's super important. Another category is more like this: if you think about EA-type research, there's some knowledge that a lot of the most engaged members of the EA community have strong opinions on, and I would be willing to bet they're right, but it hasn't really been traced back to its foundations and explained very well, or it hasn't been explained in the language of a certain discipline. There would be value in explaining it better: more people might then think about the problem with that kind of framework and lens. That'd be another category.
The stuff I was saying about discount rates would be something I’d put in the second category, where the EA community has worked out views on this that I think are right, but they're not the orthodoxy in economics, and there's a lot of nuance to it. I don't think all the nuance has been worked out exactly. That would be a simple example of something.
My mind often goes to the far future-y 'fate of the world' type questions, because that's where I spent my time thinking as a researcher in academia more than anything else. In that domain I see less in terms of category number one, where an economist could use the standard tools and frameworks of economics to obviously help a lot in a way that I can anticipate. I don't feel like I could tell you what the really useful questions would be if you had a team of awesome economists working on those kinds of questions. But I can think of some in the second category.
Question: I've heard some discussion of perhaps EAs going and looking more into how, say, the market for meat works, which might be a bit more in the first category where we can use standard tools of economic analysis to assess an issue.
Nick Beckstead: Sure. It also depends on how broadly you want to cast the lens of economics. There's a bunch of related questions that I would lump under the suggestive title of 'improving the reasonableness of institutions and broad functionalness of society'; a lot of Phil Tetlock's work is in that category. There could be field trials of ideas in that genre that I think would be valuable for a number of causes, including for EAs who are interested in 'fate of the world' type stuff.
Question: I have a slightly more critical question, which is how suspicious is it that the people working for EA orgs think that funding EA orgs is a top cause area?
Nick Beckstead: I would say it's not incredibly suspicious. I think a lot of people who run organizations and ask for money say that about their organization or their field. Or maybe they wouldn't have an opinion about it, but they act as if it were true, which is similar. I think we should just assess it on its merits. But yeah, it would probably be a mistake to just believe that on a trust basis.
Question: Maybe part of the problem is that a lot of people who are assessing it on its merits also work for EA orgs or are closely related to EAs.
Nick Beckstead: Yeah. I don't know. I guess somebody who's not one of those people and hasn't dug in as far, maybe they should just see if it makes sense to them or if they don't have time for that, spot-check pieces of the argument that seem most likely to fail. Then if it seems fine, maybe you can believe it if you have limited time to spend on it. But yeah, I think it would be probably perverse to say “Well, they said it, so therefore it’s probably true.”
Question: You’ve spent a lot of time thinking about this sort of area and how to build the EA community, and I expect you have reached some unusual or counterintuitive conclusions. I'm interested in where your views on the question of how to build an EA community differ most from the views that most people in the community have.
Nick Beckstead: Depends on which segments of the community you're talking about, I guess. I feel like I'm not contrarian in my point that we should reallocate out of earning to give and into direct work and building deep expertise in the main EA cause areas. Although when I see a media article describing what effective altruism is, it's still mostly described as something about optimal charity donations, and that's not the meat of it for me. Where else am I very contrarian? I don't feel that contrarian actually, so I'm not sure.
Question: Fair enough. Yeah, in the questions we've got a little bit of pushback against the idea that earning to give is not as valuable as direct work. One person asks: a lot of central EA orgs don't value earning-to-give donations, but it seems like a lot of valuable projects don't get funded sufficiently, or their directors spend a lot of their time on fundraising. What best explains this discrepancy?
Nick Beckstead: I guess you could distinguish between different claims that could be made about earning to give. The most extreme claim would be, "Money is useless. No one should earn to give." That's not the claim. The claim is that the community is over-allocated, in terms of labor and attention, on earning to give relative to direct work: going and getting a PhD in a relevant field, or starting your own venture.
So how can that still be true, even though not all the organizations are fully funded? I guess it depends partly on your opinions about what the most pressing needs in this community are that are going unfunded. If you look at the list of grantees that I presented, most of them are pursuing their most important projects and aren't spending a huge portion of their time on fundraising. Perhaps that question comes from somebody who has a difference of opinion from me about what the best work to get funded is. The other way to be skeptical of me would be to say, "Well, of course, you're a funder. Your job is to recommend what should be funded. Maybe the gaps that remain are invisible to you, but they're plain to somebody else who has different judgments."
This is in some ways an interesting illustration of the points about earning to give. You can distinguish between the effective altruism community not having enough money and the community not having enough diligent attention to make sure that money goes to all the best things. I think we are somewhat bottlenecked on the latter, and that's different from earning to give per se. It's more like: if someone's earning to give and they're spending a lot of diligent attention trying to think, "Alright, where exactly should this go? What is being missed by the other important players in this community?", that's valuable. I'm not saying there's no low-hanging fruit left, and I think you can find it that way.
What I'm more skeptical of, and would be very skeptical of, would be if somebody said, "Well, I'm going to be earning to give and I'm going to just spend it all on the things that Nick Beckstead says are good. I'm going to donate it to the two EA funds that Nick runs, and that's the best idea." I feel pretty strongly that if this person could instead go and provide diligent attention to something, or were thinking very independently about how they were doing their funding, it'd be a lot less surprising to me if they did something great.
I guess I'm saying the bottleneck for me is not how much funding I can recommend to be used well. It's much more the ability to spend time and really evaluate new things. Somebody has to do that. I think that's the thing that's bottlenecking us on the philanthropy side of this community.
Other caveats would be that you've got to think about what your comparative advantage is. If you feel like “I don't know how I would do that at all, but I'm making great money in this job” then great, maybe you should just keep earning to give and that's the best thing for you to do. I think that would be the strongest case somebody could make, if they were saying, “I'm doing this earning to give thing. It's really going to be transformative.” I guess that’s my answer in rambling form.
Question: It seems like you're enthusiastic about having a greater diversity of views about funding decisions. Someone asks: how can we avoid EA being an echo chamber? That's a slightly more general version of that point.
Nick Beckstead: I am pretty enthusiastic about diversifying who decides how funding is used. How do we avoid EA becoming an echo chamber? Well, if we had a community where more people were getting deep expertise in other fields, I would expect that to limit the echo chamber effect to some extent. There are things you can get away with saying in the effective altruist community without anyone critiquing them, and they're probably different from the things you can get away with saying without pushback in other communities; bringing in people with that kind of expertise would introduce a bunch of different cognitive styles. People should also get their information from a variety of sources, and probably not tune in exclusively to an RSS feed or information diet that's all EA all the time.
Question: Yeah, going to some of the academic seminars on effective altruism at Oxford, we get people in from philosophy and economics who are less familiar with effective altruism, and often their views are really interesting; they can sometimes rip to shreds some of the arguments that EAs present, which is really cool. But I've also noticed, maybe particularly in the Bay Area, a bit more skepticism of academia as an outside view. I wonder if you have thoughts on that?
Nick Beckstead: I don't share the skepticism of academia to that degree. I think academia has its own set of strengths and weaknesses. I've been thinking a bit about the education and training aspect, and you have this question: how could you have more people in this community with really valuable expertise in fields like biology and machine learning? Academia has this machine where you go in as a smart person with a basic knowledge of a subject, you spend several years in there in a relatively low-supervision setting, and you come out as someone who at least has basic familiarity with most areas of the subject, can tell the difference between good and bad argumentation according to the norms of the discipline, and can incrementally advance things in the field. Sometimes it produces a person who can make real advances in the field. That's a pretty remarkable thing to have.
When I look at the effective altruism community and a lot of the people who've done good things in it, certainly not all of them spent a bunch of time in academia, but people who have PhDs in a relevant academic field are not horribly underrepresented. I would have to respond to a more specific way of being skeptical of academia, but it seems to me that it provides substantial value.
Question: This is probably the final area, because we're nearly out of time, but there are a few questions about diversity. Someone says that the EA community tends to recruit people with quite similar backgrounds. One of the questions talks about socio-economic, racial, and cultural backgrounds, and another talks more about intellectual diversity beyond rational, well-educated, privileged people. How worried should we be about diversity, and if we should be worried, what can we do to address it?
Nick Beckstead: Yeah. I guess I would have different thoughts on each of those. It's certainly true that in EA we have a lot of math, philosophy, and computer science type people, and it's pretty disproportionate. Those are good fields, and that has done some interesting and good things for this community, but it would be great to expand out and cover a number of other intellectual fields.
I think it's also very true that this is a community that skews toward the privileged end on pretty much any axis of privilege that you could name. That probably does bad things for us in terms of viewpoint diversity, who feels most welcome, who feels like they have good role models in this field, and probably a number of other difficulties that I can't easily name. It would be great to improve that, but I don't have a lot of great solutions for it at this time.