
Though I've no idea what the raw numbers are, my sense is that a non-trivial amount of EA funding supports academic philosophy research. It seems an interesting and underexplored question when we should expect such funding to be "worth it". In this post, I especially want to explore two dimensions of variation: (1) what entities (individuals or institutes) to support, and (2) what kind of work (broad or targeted) to support. My sense is that EA funders have a preference for institutes over individuals, and targeted/specialist projects over broad/generalist ones. So one question I'm interested in is whether generalist work from individual academics might be comparatively neglected under current approaches.

COI warning: I'm an individual academic who does (what I would call) "generalist" EA-aligned work. So I'm implicitly—and at some points explicitly—making the case for why I think the sort of work I do may be under-valued. Feel free to discount accordingly. :-)

How Academic Funding Can Be Used

Research Institutes

With many millions of dollars, one could support a full-blown Research Institute where many academics collaborate on a shared project or loose research theme. Examples I’m familiar with—and think highly of—include GPI and the (sadly ended) FHI at Oxford, and PWI at UT Austin. Rethink Priorities arguably belongs here too: it seems to be an independent organization that maybe employs some academics part-time, alongside independent researchers. (I’m not exactly sure how RP works, but they certainly seem to have some good people. Maybe someone involved can say more about how it's set up, esp. in relation to the affiliated academics?)

I see three main advantages to research institutes:

(1) Mission alignment: unlike in an academic department where individual faculty can work on whatever they like, I assume that in a research institute there’s a strong expectation (at least in hiring, and likely on a more ongoing basis, depending on the local culture) that the research it supports should be relevant to the institute’s mission.

(2) Improving the career pipeline for aligned researchers. Due to the abysmal job market in philosophy, even excellent young philosophers can very easily fall through the gaps and fail to get a tenure-track position. By funding postdocs and other “stepping stone” positions, research institutes can give junior researchers who do promising work on especially important topics an extra chance to remain in academia and (hopefully!) eventually break through to a permanent job. (Or, some funding might be used to create new permanent positions within the institute?)

(3) Agglomeration effects from having broadly aligned (yet diverse) academics closely interacting, sharing ideas, and maybe even co-authoring work, all of which could (one hopes) prove more fruitful than working in isolation.[1]

Fancy Endowed Chairs

This is something I discussed a bit in my reflections on Peter Singer’s career: it seems like good things could come from reserving space at the most prestigious universities for global priorities research or other impartially valuable philosophy. But I assume a lot depends on how easily such a position could be “captured” by the local powers to continue their preferred business-as-usual—in which case the funding[2] may merely serve to boost the university’s endowment (if they end up hiring whoever they would have hired anyway, just using the philanthropist’s funds rather than their own).

Support (e.g. course buyouts) for specific researchers

Research institutes will (I gather) typically use a bunch of their funds to buy out the teaching obligations of their senior researchers. But one could also do this for researchers in ordinary academic departments, if you think they do especially good work and would like them to have the time to do more.

I’ve personally had some luck with this—I couldn’t have written so much for utilitarianism.net without the time provided by grants from the Forethought Foundation and Longview Philanthropy in previous years. But my sense is that it's harder for an individual academic to get ongoing funding support than it is for someone in an institute-funded pure research position. (I'm still teaching three out of four courses this academic year, for example, and expect to have less time for public philosophy next semester as a result.)

A quick argument for supporting individual academics

Lots of great work comes out of EA research institutes. But ask which you would expect to do better: the marginal hire at an institute with several researchers, or the best of the (aligned & interested) researchers you can find in all the rest of academia. Seems like you should expect the best people outside of an institute to do more valuable work than the worst people inside of it! So that's a very quick argument for thinking that more marginal funding should go to support work by individual academics outside of EA research institutes.

(I don't mean to suggest that this quick argument is decisive or anything. Feel free to share considerations that point more in the opposite direction in the comments! One obvious one: university-employed individual academics are already employed, and so can pursue their research—albeit at a slower pace—even without any external support.[3])

What kinds of (EA-aligned) philosophy are worth funding?

My sense is that EA funders are most keen to support philosophical work that is either (i) public-facing "outreach" like utilitarianism.net or (ii) original research on a relatively specific, and preferably applied, cause area or problem of interest (e.g. AI safety, digital minds, animal welfare weights, maybe population ethics in some cases).

In my (possibly biased!) opinion, this risks undervaluing the potential significance of original research in general ethical theory (of a broadly beneficentric bent). I think it's not a coincidence that so many EAs are utilitarians or utilitarian-adjacent. The moral lens through which we view things, as represented by our ethical theories, can have a big influence on how salient we find the importance of helping others in a scope-sensitive way. So I expect that it could do a lot of good to find new and better (more intuitive and broadly appealing) ways of conceptualizing utilitarian motivations and moral theories.[4] I also expect there are other general research projects that could be similarly valuable, but would struggle to find support in the current funding environment.

How to support specific academics?

OllieBase recently wrote a post suggesting that you Consider donating to whoever helped you get more into EA or have more impact. This might not make so much sense if your target is an academic, however: we already have a salary through our university employer. In order to secure more time for research projects, the main available means is through grants for teaching buyouts, where the money goes to our university, not to us personally. I'm not even sure whether my university would allow grant funding to come from an individual rather than an institutional funder, and the lump sum required is non-trivial.[5] So I'm guessing that a better route for small donors to support academic work may be to donate to EAIF (or similar), and let them know the kind of work you'd be especially excited to see them use the funds to support. (They remain free to disregard your opinion, of course.)

In my own case, to (hopefully) improve my chances at future grant applications, I'm trying to track my philosophical impact by collecting anecdata from anyone who has found themselves significantly influenced by my work to date. If my writing has led you to feel more positively about EA, to think more clearly about relevant important issues, or to be more motivated to pursue beneficent projects, please leave a brief comment on that post to share the details. (Thanks in advance!)

Conclusion

I've raised two broad questions about the funding of academic work (esp. philosophy) within the EA landscape. One concerns the balance of support for research institutes vs individual academics; the second concerns the case for supporting foundational work (of a broadly beneficentric bent) in general ethical theory. My personal impression is that individual work of this general sort might be undervalued. But then, I would think that—it's precisely the sort of work that I do! So I welcome pushback from those who view the current balance here as well-calibrated, as well as inviting any other thoughts on the general topic.

  1. ^

    Having visited both GPI and PWI, I definitely felt that I learned a lot just from being in those intellectual environments.

  2. ^

Typically $2–5 million, I gather, depending on the institution. Likely on the higher end for private universities.

  3. ^

If you think this reason is weighty, it would presumably also apply within institutes—e.g. as a reason to prioritize additional postdoc funding over teaching buyouts for senior faculty. By contrast, I expect it will often make sense to prioritize the research time of the senior researchers in a major institute, when they do especially important work.

  4. ^

    See Beyond Right and Wrong for an overview of my current research project to this end. Fwiw, I'm very excited about the potential value of this project. But I haven't had much luck in eliciting interest from grant funders to support it.

    [I do currently have one course release this semester, partly funded by EAIF (who were, thankfully, willing to provide the "top-up" funding necessary to make the course release possible, even though they would not have funded the full cost of any course releases for me to pursue this project this year). The bulk of the funding comes from a university prize that would otherwise have provided me with summer salary. So I'm actually taking a bit of a financial hit by instead using the prize for this purpose.]

    It's obviously very hard to predict how much good a foundational research project of this sort will do, but I'd think the "hits-based" EV case would be pretty strong. Funders may simply judge the matter differently, which is of course fine. But I figured I'd share my thoughts here for a broader range of people to consider.

  5. ^

    But if anyone's keen to give it a shot, shoot me a DM and I'll happily look into the possibility. (Last I checked, my university would charge around $17k to buy me out from one course, but that will increase slightly with my base salary each year.)

Comments

I think you may greatly understate your case. I would argue that, especially in the US, the lack of credible "public intellectuals" is one of the greatest problems of our age, and that there is a huge opportunity for the right people to fill this role. 

EAs with the right communication skills could be perfect public intellectuals, and if they could move the debate, or even the Overton window, a bit more towards effective positions, that would be a massive contribution to the world. 

True, there are plenty of opinionated people out there, but it feels like mostly they are trotted out to support the party line rather than to provide genuine insight. They are more like lawyers arguing their "side"—and realistically, people don't trust lawyers to give honest insight.

If I look at France or Italy, for comparison, there have always been a few figures who tend to be asked for opinions about major topical questions, and their views carry weight. In other countries and in previous times, church leaders play or played a similar role—rarely with positive consequences...

Today there are so many questions where public "debate" consists of people shouting slogans at each other, and whoever shouts loudest wins. I don't think most people like this. There are a few journalists (e.g. David Brooks in the NY Times) who have the confidence and authority to express opinions that are not necessarily partisan, and which are presented with careful arguments, evidence, and reference to critical thinking by others, including those who do not support them.

This is the work of the public intellectual, and when it is done well, it can still help people to change their minds or at least to understand both sides of an argument. It feels like philosophy (and maybe history) are the most obvious fields in which this kind of skillset and credibility can be achieved and earned. 

I see this as a great opportunity for effective altruists because, unlike so many knee-jerk positions, EAs tend to have very carefully and analytically investigated every question, and to have done so with a very clear and tangible criterion. We need more EAs writing and being interviewed in places where the general public can hear them—and we need those people to be trained in the art of communicating to the general public (not just other EAs) without dumbing down (which would defeat the purpose of aiming to be seen as a public intellectual). The best speak in such a way that other people share their ideas, in part, as a sign that they are smart enough to understand them.

I see support for philosophers as very valuable if it can lead not just to new insights, but more importantly, to new voices ready to communicate in the public domain. 

I really appreciate your work, Richard, and over the last few years, I've loved the opportunity to work on some foundational problems myself. Increasingly, though, I'd like to see more philosophers ignore foundational issues and focus on what I think of as "translational philosophy." Is anyone going to give a new argument for utilitarianism that significantly changes the credences of key decision-makers (in whatever context)? No, probably not. But there are a million hard questions about how to make existing policies and decision-making tools more sensitive to the requirements of impartial beneficence. I think the model should be projects like Chimpanzee Rights vs., say, the kinds of things that are likely to be published in top philosophy journals.

I don't have the bandwidth to organize it myself right now, but I'd love there to be something like a "Society for Translational Philosophy" that brings like-minded philosophers together to work on more practical problems. There's a ton of volunteer labor in philosophy that could be marshaled toward good ends; instead, it's mostly frittered away on passion projects (which I say as someone who has frittered an enormous amount of time away on passion projects; my CV is chaos). A society like that could be a very high-leverage opportunity for a funder, as a small amount spent on infrastructure could produce a lot of value in terms of applicable research.

Executive summary: Effective Altruism (EA) funding for philosophy research should carefully consider supporting both research institutes and individual academics, with a potential undervaluation of generalist, foundational ethical theory work.

Key points:

  1. Research institutes offer advantages like mission alignment, improved career pipelines for researchers, and collaborative research environments.
  2. Individual academics might be overlooked, despite potentially producing more valuable work than marginal institute hires.
  3. Current EA funding tends to prioritize specific, applied philosophical research over broader ethical theory work.
  4. Generalist philosophical research on ethical frameworks could significantly influence how people perceive and approach helping others.
  5. Funding individual academics is challenging, with course buyouts being a primary mechanism for supporting research time.
  6. Small donors are recommended to contribute to funds like EAIF that can strategically allocate resources to philosophical research.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

I think a key crux here is whether you think AI timelines are short or long. If they're short, there's more pressure to focus on immediately applicable work. If they're long, then there's more benefit to having philosophers develop ideas which gradually trickle down.

In PIBBSS, we've had a mentor note that for alignment to go well, we need more philosophers working on foundational issues in AI rather than more prosaic researchers. I found that interesting, and I currently believe that this is true. Even in short-timeline worlds, we need to figure out some philosophy FAST.
