Hi all,
We're the staff at Rethink Priorities and we would like you to Ask Us Anything! We'll be answering all questions starting Friday, November 19.
About the Org
Rethink Priorities is an EA research organization focused on helping improve decisions among funders and key decision-makers within EA and EA-aligned organizations. You might know of our work on quantifying the number of farmed vertebrates and invertebrates, interspecies comparisons of moral weight, ballot initiatives as a tool for EAs, the risk of nuclear winter, or running the EA Survey, among other projects. You can see all of our work to date here.
Over the next few years, we’re expanding our farmed animal welfare and moral weight research programs, launching an AI governance and strategy research program, and continuing to grow our new global health and development wing (including evaluating climate change interventions).
Team
You can find bios of our team members here. Links on names below go to RP publications by the author (if any are publicly available at this point).
Leadership
- Marcus Davis — Co-CEO — Focus on animal welfare and operations
- Peter Wildeford — Co-CEO — Focus on longtermism, global health and development, surveys, and EA movement research
Animal Welfare
- Dr. Kim Cuddington — Senior Ecologist — Wild animal welfare
- Dr. William McAuliffe — Senior Research Manager — Wild animal welfare, farmed animal welfare
- Jacob Peacock — Senior Research Manager — Farmed animal welfare
- Dr. Jason Schukraft — Senior Research Manager — Moral weight, global health and development
- Daniela Waldhorn — Senior Research Manager — Invertebrate welfare, farmed animal welfare
- Dr. Neil Dullaghan — Senior Researcher — Farmed animal welfare
- Dr. Samara Mendez — Senior Researcher — Farmed animal welfare
- Saulius Šimčikas — Senior Researcher — Farmed animal welfare
- Meghan Barrett — Entomology Specialist — Invertebrate welfare
- Dr. Holly Elmore — Researcher — Wild animal welfare
- Michael St. Jules — Associate Researcher — Farmed animal welfare
Longtermism
- Michael Aird — Researcher — Nuclear war, AI governance and strategy
- Linch Zhang — Researcher — Forecasting, AI governance and strategy
Surveys and EA movement research
- David Moss — Principal Research Director — Surveys and EA movement research
- Dr. David Reinstein — Senior Economist — EA Survey, effective giving research
- Dr. Jamie Elsey — Senior Behavioral Scientist — Surveys
- Dr. Willem Sleegers — Senior Behavioral Scientist — Surveys
Global Health and Development
- Dr. Greer Gosnell — Senior Environmental Economist — Climate change, global health interventions
- Ruby Dickson — Researcher — Global health interventions
- Jenny Kudymowa — Researcher — Global health interventions
- Bruce Tsai — Researcher — Climate change, global health interventions
Operations
- Abraham Rowe — COO — Operations, finance, HR, development, communications
- Janique Behman — Director of Development — Development, communications
- Dr. Dominika Krupocin — Senior People and Culture Coordinator — HR
- Carolina Salazar — Project and Hiring Manager — HR, project management
- Romina Giel — Operations Associate — Operations, finance
Ask Us Anything
Please ask us anything — about the org and how we operate, about the staff, about our research… anything!
You can read more about us in our 2021 Impact and 2022 Strategy update or visit our website: rethinkpriorities.org.
If you're interested in hearing more, please subscribe to our newsletter.
Also, we’re currently raising funds to continue growing in 2022. We consider ourselves funding constrained — we continue to get far more qualified applicants to our roles than we are able to hire, and have scalable infrastructure to support far more research. We accept and track restricted funds by cause area if that is of interest.
If you'd like to support our work, visit https://www.rethinkpriorities.org/donate, give on Giving Tuesday via Facebook to potentially secure matching funds, or email Janique Behman at janique@rethinkpriorities.org.
We'll be answering all questions starting Friday, November 19.
In your yearly report you mention:
This surprised me, because I fairly often hear the advice of "donate to EA Funds" as the optimal thing to do, but it seems that if everybody did that, RP would not get funded. Do you have any thoughts on this?
I think donating to EA Funds is a very good thing to do, but I don't think every donor should do this. For donors who have the time and personal fit, it would be good to make some direct donations of your own and support organizations directly. That helps those organizations hedge against idiosyncratic risk from particular funders and gives them more individual support (which matters for showing proof of traction to other funders, and also matters for some IRS requirements).
I don't think any one funder likes to fund the entirety of an organization's budget, especially when that budget is large. But between the different institutional funders (EA Funds, Survival and Flourishing Fund, OpenPhil, etc.), I still think there is a strong (but not guaranteed) chance we will be funded (at least enough to meet somewhere between our "Low" and "High" budget amounts). Though if everyone assumed we were not funding constrained, then we definitely would be.
My other pitch is that I'd like RP, as an organization, to have some direct financial incentive and accountability to the EA community as a whole, above and beyond our specific institutional funders who have specific desires and fund us for specific reasons that ... (read more)
A couple of years ago it seemed like the conventional wisdom was that there were serious ops/management/something bottlenecks in converting money into direct work. But now you've hired a lot of people in a short time. How did you manage to bypass those bottlenecks, and have there been any downsides to hiring so quickly?
So there are a bunch of questions in this, but I can answer some of the ops-related ones:
- We haven't had ops talent bottlenecks. We've had incredibly competitive operations hiring rounds (e.g. in our most recent round, ~200 applications, of which ~150 were qualified at least on paper), and I'd guess that 80%+ of our finalists are at least familiar with EA (which I don't think is a necessary requirement, but it does suggest the explanation isn't that we're recruiting from a different pool).
- Maybe there was a bigger bottleneck in ~2018, and EA has grown a lot or reached people with more ops skills since then?
- We spend a lot of time and resources on recruiting, and advertise our jobs really widely, so maybe we are reaching a lot more potential candidates than some other organizations were?
- Management bottlenecks are probably our biggest current people-related constraint on growth (funding is a bigger constraint).
- We've worked a lot on addressing this over the summer, partially by having a huge internship program, giving a lot of current staff management experience (while also working with awesome interns on cool projects!), and sending anyone who wants it through basic managemen
... (read more)
Here are some parts of my personal take (which overlaps with what Abraham said):
I think we ourselves feel a bit unsure "why we're special", i.e. why it seems there aren't very many other EA-aligned orgs scaling this rapidly & gracefully.
But my guess is that some of the main factors are:
- We want to scale rapidly & gracefully
- Some orgs have a more niche purpose that doesn't really require scaling, or may be led by people who are more skilled and excited about their object-level work than about org strategy, scaling, management, etc.
- RP thinks strategically about how to scale rapidly & gracefully, including thinking ahead about what RP will need later and what might break by default
- Three of the examples I often give are ones Abraham mentioned:
- Realising RP will be management-capacity constrained, and that it would therefore be valuable to give our researchers management experience (so they can see how much they like it & get better at it), and that this pushes in favour of running a large internship with 1-1 management of the interns
- (This definitely wasn't the only motivation for running the internship, but I think it was one of the main ones, though that's partly guessin
... (read more)
I have private information (e.g. from senior people at Rethink Priorities and former colleagues) that suggests operations ability at RP is unusually high. They say that Abraham Rowe, COO, is unusually good.
The reason why this comment is useful is that:
I appreciate it, but I want to emphasize that I think a lot of this boils down to careful planning and prep in advance, a really solid ops team all around, and a structure that lets operations operate a bit separately from research, so Peter and Marcus can really focus on scaling the research side of the organization / think about research impact a lot. I do agree that overall RP has been largely operationally successful, and that's probably helped us maintain a high quality of output as we grow.
I also think a huge part of RP's success has been Peter, Marcus, and other folks on the team being highly skilled at identifying low-hanging fruit in the EA research space, and just going out and doing that research.
I definitely think that we are very lucky to have Abraham working with us. I think another factor is that there are at least three people here (Abraham, Marcus, and me, and probably others too if given the chance) each capable of founding and running an organization, who are instead all focused on making just one organization really great and big.
I definitely think having Abraham be able to fully handle operations allows Marcus and me to focus nearly entirely on driving our research quality, which is a good thing. Marcus and I also have clear subfocuses (Marcus does animals and global health / development, whereas I focus on longtermism, surveys, and EA movement building) which allow us to further focus our time specifically on making things great.
To what extent do you think a greater number of organisations conducting research similar to RP's would be useful for promoting healthy dialogue, compared to having one specialist organisation in a field that is the go-to for certain questions?
I'll let Peter/Marcus/others give the organizational answer, but speaking for myself I'm pretty bullish about having more RP-like organizations. I think there are a number of good reasons for having more orgs like RP (or somewhat different from us), and these reasons are stronger at first glance than reasons for consolidation (e.g. reduced communication overhead, PR).
- The EA movement has a strong appetite for research consultancy work, and RP is far from sufficient for meeting all the needs of the movement.
- RP clones situated slightly differently can be helpful in allowing the EA movement to unlock more talent than RP will be able to.
- For example, we are a remote-first/remote-only organization, which in theory means we can hire talent from anywhere. But in practice, many people may prefer working in an in-person org, so an RP clone with a physical location may unlock talent that RP is unable to productively use.
- We have a particular hiring bar. It's plausible to me that having a noticeably higher or lower hiring bar can result in a more cost-effective organization than us.
- For example, having a higher hiring bar may allow you to create a small tight-knit group of superge
... (read more)
I agree with that suspicion, especially if we include things like "Just collect a bunch of stuff in one place" or "Just summarise some stuff" as "research". I think a substantial portion of my impact to date has probably come from that sort of thing (examples in this sentence from a post I made earlier today: "I’m addicted to creating collections"). It basically always feels like (a) a lot of other people could've done what I'm doing and (b) it's kinda crazy no one had yet. I also sometimes don't have time to execute on some of my seemingly-very-executable and actually-not-that-time-consuming ideas, and the time I do spend on such things does slow down my progress on other work that does seem to require more specialised skills. I also think this would apply to at least some things that are more classically "research" outputs than collections or summaries are.
But I want to push back on "this frees up other EA researchers to do more important work". I think you probably mean "this frees up ot... (read more)
Strongly agree with this. While I was working on LEAN and the EA Hub I felt that there were a lot of very necessary and valuable things to do, that nobody wanted to do (or fund) because they seemed too easy. But a lot of value is lost, and important things are undermined if everyone turns their noses up at simple tasks. I'm really glad that since then CEA has significantly built up their local group support. But it's a perennial pitfall to watch out for.
Do you also feel funding constrained in the longtermist portion of your work? (Conventional wisdom is that neartermist causes are more funding constrained than longtermist ones.)
Mostly yes. It definitely is the case that, if we were given more cash than we already have, we could meaningfully accelerate our longtermism team in a way that we cannot with our current funds. Thus funding is still an important constraint on scaling our work, in addition to some other important constraints.
However, I am moderately confident that, between the existing institutional funders (OpenPhil, Survival and Flourishing Fund, Long-Term Future Fund, Longview, and others), we could meet a lot of our funding request - we just haven’t asked yet. But (1) it’s not guaranteed that this would go well, so we’d still appreciate money from other sources, (2) it would be good to add some diversity beyond these sources, (3) money from other sources could help us spend less time fundraising and more time accelerating our longtermism plans, (4) more funding sooner could help us expand sooner and with more certainty, and (5) it's likely we could still spend more money than these sources would give.
This comment matches my view (perhaps unsurprisingly!).
One thing I'd add: I think Peter is basically talking about our "Longtermism Department". We also have a "Surveys and EA Movement Research Department". And I feel confident they could do a bunch of additional high-value longtermist work if given more funding. And donors could provide funding restricted to just longtermist survey projects or even just specific longtermist survey projects (either commissioning a specific project or funding a specific idea we already have).
(I feel like I should add a conflict of interest statement that I work at RP, but I guess that should be obvious enough from context! And conversely I should mention that I don't work in the survey department, haven't met them in-person, and decided of my own volition to write this comment because I really do think this seems like probably a good donation target.)
Here are some claims that feed into my conclusion:
- Funding constraints: My impression is that that department is more funding constrained than the longtermism department
- (To be clear, I'm not saying the longtermism department isn't at all funding constrained, nor that that single factor guarantees t
... (read more)
Assume you had uncapped funding to hire staff at RP from now on. In such a scenario, how many more staff would you expect RP to have in 5 years from now? How much more funding would you expect to attract? Would you sustain your level of impact per dollar?
For instance, is it the case that you think that RP could be 2x as large in five years and do 3x as much funded work at a 1.5x current impact per dollar? Or a very different trajectory?
I ask as an attempt to gauge your perception of the potential growth of RP and this sector of EA more generally.
It’s been hard for me to make five year plans, given that we’re currently only a little less than four years old and the growth between 2018 when we started and now has already been very hard to anticipate in advance!
I do think that RP could be 2x as large in five years. I’m actually optimistic that we could double in 2-3 years!
I’m less sure about how much funded work we’d do. Actually, I’m not sure what you mean by funded work - do you mean work directly commissioned by stakeholders, as opposed to work we proactively identify ourselves?
I’m also less sure about impact per dollar. We’ve found this to be very difficult to track and quantify precisely. Perhaps as 80,000 Hours talks about “impact-adjusted career changes”, we might want to talk about “impact-adjusted decision changes” - and I’d be keen to generate more of those, even after adjusting for our growth in staff and funding. I think we’ve learned a lot more about how to unlock impact from our work and I think also there will have been more time for our past work to bear fruit.
This is a little hard to tell, because often we receive a grant to do research, and the outcomes of that research might be relevant to the funder, but also broadly relevant to the EA community when published, etc.
But in terms of just pure contracted work, in 2021 so far we've received around $1.06M of contracted work (compared to $4.667M in donations and grants, including multi-year grants), though much of the spending of that $1.06M will be in 2022.
In terms of expectations, I think that contracted work will likely grow as a percentage of our total revenue, but ideally we'd see growth in donations and grants too.
How valuable do you think your research to date has been? Which few pieces of your research to date have been highest-impact? What has surprised you or been noteworthy about the impact of your research?
By its reputation, output, and the quality and character of management and staff, Rethink Priorities seems like an extraordinarily good EA org.
Do you have any insights that explain your success and quality, especially that might inform other organizations or founders?
Alternatively, is your success due to intrinsically high founder quality, which is harder to explain?
Thanks Charles for your unprompted, sincere, honest, and level-headed assessment.
Your check will be in the mail in 3-7 business days.
Thanks for the question and the kind words. However, I don’t think I can answer this without falling back somewhat on some rather generic advice. We do a lot of things that I think have contributed to where we are now, but I don’t think any of them are particularly novel:
As to your ideas about the possibility of RP’s success being high founder quality, I think Peter and I try very hard to do the best we can but I think in part due to survivorship bias it’s difficult for me to say that we have any extraordinary skills others don’t possess. I’ve met many talented, intelligent, and driven people in my life, some of whom have started ventures that have been successful and others who have struggled. Ultimately, I think it’s some combination of these traits, luck, and good timing that has led us to be where we are today.
What are the top 2-3 issues Rethink Priorities is facing that prevent you from achieving your goals? What are you currently doing to work on these issues?
I think to better achieve our goals, we need Rethink Priorities to be bigger and more efficient.
I think the relevant constraints for "why aren't we bigger?" are:
(1): sufficient number of talented researchers that we can hire
(2): sufficient number of useful research questions we can tackle
(3): ability to ensure each employee has a positive and productive experience (basically, people management constraints and project management constraints)
(4): ops capacity - ensuring our ops team is large enough to support the team
(5): ops and culture throughput - giving the ops team enough time to onboard people (regardless of ops team size) and giving people enough time to adapt to the org's growth. That is, even if we were otherwise unconstrained, I still think we can't just 10x in one year because that would just feel too ludicrous
(6): proof/traction (to both ourselves and to our external stakeholders/funders) that we are on the right path and "deserve" to scale (this also just takes time)
(7): money to pay for all of the above
~
It doesn't look like (1) or (2) will constrain us anytime soon.
My guess is that (3) is our current most important constraint but that we are working on it by experimenting with... (read more)
What lessons would you pass onto other EA orgs from running an internship program?
Thanks so much for this question!
We have learned a lot during our Fellowship/Internship Program. Several main considerations come to mind when thinking about running a fellowship/internship program.
- Managers’ capacity and preparedness – hosting a fellow/intern may be a rewarding experience. However, working with fellows/interns is also time-consuming. It seems to be important to keep in mind that managers may need to have a dedicated portion of time to:
- Prepare for their fellows/interns’ arrival, which may include drafting a work plan, thinking about goals for their supervisees, and establishing a plan B, in case something unexpected comes up (for example, data is delayed, and the analysis cannot take place)
- Explain tasks/projects, help set goals, and brainstorm ideas on how to achieve these goals
- Regularly meet with their fellows/interns to check in, monitor progress, as well as provide feedback and overall support/guidance throughout the program
- Help fellows/interns socialize and interact with others to make them feel included, welcomed, and a part of the team/organization.
- Operations team capacity and preparedness – there are many different tasks associated with each stage of the fell
... (read more)
Two things I'd add to the above answer (which I agree with):
Why do you have the distribution of focus on health/development vs animals vs longtermism vs meta-stuff that you do? How do you feel about it? What might make you change this distribution, or add or remove priority areas?
What is your process for identifying and prioritizing new research questions? And what percentage of your work is going toward internal top priorities vs. commissioned projects?
[This is more commentary on your second question than a direct answer; I'll let someone else at RP provide that.]
Small point: I personally find it useful to make the following three-part distinction, rather than your two-part distinction:
I think RP, the EA community, and the world at large should very obviously have substantial amounts of each of those three types of projects / theor... (read more)
Is there any particular reason why biosecurity isn't a major focus? As far as I can see from the list, no staff work on it, which surprises me a little.
The short answer is that a) none of our past hires in longtermism (including management) had substantive biosecurity experience or biosecurity interest and b) no major stakeholder has asked us to look into biosecurity issues.
The extended answer is pretty complicated. I will first go into why generalist EA orgs or generalist independent researchers may find it hard to go into biosecurity, explain why I think those reasons aren't as applicable to RP, and then why we haven't gone into biosecurity anyway.
Why generalist EA orgs or generalist independent researchers may find it hard to go into biosecurity
My personal impression is that EA/existential biosecurity experts currently believe that it's very easy for newcomers in the field to do more harm than good, especially if they do not have senior supervision from someone in the field. This is because existential biosecurity in particular is rife with information hazards, and individual unilateral actions can invoke the unilateralist's curse.
Further, all the senior biosecurity people are very busy, and are not really willing to take the chance with someone new unless they a) have experience (usually academic) in adjacent field... (read more)
What is your comparative advantage?
What have you been intentional about prioritising in the workplace culture at Rethink Priorities? If you focus on making it a great place for people to work, how do you do that?
This is a great question! Thank you so much!
At Rethink Priorities we take an employee-focused approach. We do our best to ensure that our staff have relevant tools and resources to do their best work, while also having enough flexibility to maintain their work-life balance. Staff happiness is a high priority for us and one of our strategic goals.
Some aspects of our employee-centered approach include:
- Competitive benefits and perks – we offer unlimited time off, a flexible work schedule, professional development opportunities, stipends, etc., which are available to full- and part-time staff, as well as our fellows/interns.
- Opportunities to socialize, make decisions, and take on new projects – for example, we have monthly social meetings, we run random polls to solicit opinions/ideas from staff, and create opportunities for employees to participate in various initiatives, like leading a workshop.
- Biannual all-staff surveys – we collect feedback from our staff twice a year. The survey asks a series of questions about leadership, management, organizational culture, benefits and compensation, and psychological safety, among other topics. The results are thoroughly analyzed and guide our de
... (read more)
We’re working right now on a values- and culture-setting exercise where we are intentionally figuring out what we like about our culture and what we specifically want to keep. I appreciate Dominika's comment, but I want to add a bit more about what is coming out of this exercise (though it isn't finished yet).
Four things I think are important about our culture that I like and try to intentionally cultivate:
Work-life balance and sustainability in our work. The problems we work on are important and very pressing, and it is easy to burn yourself out working hard on them. We have deliberately tried to design our culture for sustainability. Sure, you might get some more hours of work this year if you work harder, but it isn’t worth burning out just a few years later. We want our researchers here for the long haul. We’re invested in their long-term productivity.
Rigor and calibration. It’s very easy to do research poorly and unfortunately easy to do bad research that misleads people because it is hard to see how the research is bad. Thus a lot of work must be done by our researchers to ensure that our work is accurate and useful.
Ownership. In a lot of organizations, managers want their employees to do exactly... (read more)
What kinds of research questions do you think are better answered in an organisation like RP vs. in academia, and vice versa?
One major factor that makes some research questions more suited to academia is requiring technical or logistical resources that would be hard to access or deploy in a generalist EA org like RP (some specialist expertise also sometimes falls into this category). Much WAW research is like this, in that I don't think it makes sense for RP to be trying to run large-scale ecological field studies.
Another major factor is if you want to promote wider field-building or you want the research to be persuasive as advocacy to certain audiences in the way that sometimes only academic research can. This also applies to much WAW research.
Personally, I think in most other cases academia is typically not the best venue for EA research, although the latter considerations about field-building and the prestige/persuasiveness of academic research come up often enough that the question of whether a given project is worth publishing academically recurs fairly commonly even within RP.
Are there any ways that the EA community can help RP that we might not be aware of? Or any that we do already that you would like more of?
Commenting on our public output, particularly if you have specialized technical expertise, can often be anywhere from mildly to really helpful. RP has a lot of knowledge, but so does the rest of the EA community and extended EA network, so if you can route our reports to the relevant connections, this can be really valuable in improving the quality of our reasoning and epistemics.
One thing the EA community can help us with is by encouraging suitable candidates to apply to our jobs. (New ones will be posted here and announced in our newsletter.) Some of our most recent hires have transitioned from fields which, at first sight, would seem unlikely to produce typical applicants. But we're open to anyone proving to us that they can do the job during the application process (we do blinded skills assessments). I think we're really not credentialist (i.e. we don't care much about formal degrees if people have gained the skills that we're looking for). So whenever you read a job ad and think "Oh, this friend could actually do that job!", do tell them to apply if they're interested.
More importantly, I think EA community builders in all geographies and fields can greatly help us by training people to become good at the type of reasoning that's important in EA jobs. I particularly think of reasoning transparency: expressing degrees of (un)certainty and clarifying the epistemic status of what you write. Also important are probabilistic thinking and Bayesian updating, as well as learning to build models and getting familiar with tools like Guesstimate and Causal. Forecast... (read more)
To any staff brave enough to answer :D
You're fired tomorrow and replaced by someone more effective than you. What do they do that you're not doing?
I recently spent ~2 hours reflecting on RP's longtermism department's wins, mistakes, and lessons learned from our first year[1] and possible visions for 2022. I'll lightly adapt the "lessons learned for Michael specifically" part of that into a comment here, since it seems relevant to what you're trying to get at here; I guess a more effective person in my role would match my current strengths but also already be nailing all the following things. (I guess hopefully within a year I'll ~match that description myself.)
(Bear in mind that this wasn't originally written for public consumption, skips over my "wins", etc.)
- "Focus more
- Concrete implications:
- Probably leave FHI (or effectively scale down to 0-0.1 FTE) and turn down EA Infrastructure Fund guest manager extension (if offered it)
- Say no to side things more often
- Start fewer posts, or abandon more posts faster so I can get other ones done
- Do 80/20 versions of stuff more often
- Work on getting more efficient at e.g. reviewing docs
- Reasons:
- To more consistently finish things and to higher standards (rather than having a higher number of unfinished or lower quality things)
- And to mitigate possible stress on my part, [personal thing], a
... (read more)
Some ways someone can be more effective than me:
- I'm not as aggressive at problem/question/cause prioritization as I could be. I can see improvements of 50-500% for someone who's (humanly) better at this than me.
- I'm not great at day-to-day time management either. I can see ~100% improvement in that regard if somebody is very good at this.
- I find it psychologically very hard to do real work for >30h/week, so somebody with my exact skillset but who could productively work for >40h/week without diminishing returns would be >33% more valuable.
- I pride myself on the speed and quantity of my writing, but I'm slower than e.g. MichaelA, and I think it's very plausible that a lot of my outputs are still bottlenecked by writing speed. A 10-50% effectiveness improvement seems about right.
- I don't have perfect mental health and I'm sometimes emotional. (I do think I'm above average at both). I can see improvements of 5-25% for people who don't have these issues.
- I'm good at math* but not stellar at it. I can imagine someone who's e.g. a Putnam Fellow being 3-25% more effective than me if they chose to work on the same problems I work on (though plausibly they'd be more effective because they'd gravitat
... (read more)
The person who replaces me has all my same skills but in addition has many connections to policymakers, more management experience, and stronger quantitative abilities than I do.
I've adjusted imperfectly to working from home, so anyone who has that strength in addition to my strengths would be better. I wish I knew more about forecasting and modeling, too.
Are there any skills and/or content expertise that you expect to particularly want from future hires? Put differently, is there anything that you think aspiring hires might want to start working on to be better suited to join/support RP over the next few years?
I agree, but would want to clarify that many people should still apply and very many people should at least consider applying. It's just that people shouldn't optimise very strongly for getting hired by one specific institution that's smaller than, say, "the US government" (which, for now, we are 😭).
What percentage of your work/funding comes from non-EA aligned sources?
I once told people in a programmer group chat what I was doing when I got my new job at RP. One of them looked into the website and gave like a $10 donation.
To the best of my limited knowledge, this might well be our largest non-EA aligned donation in longtermism.
It's a little hard to say because we don't necessarily know the background / interests of all donors, but my current guess is around 2%-5% in 2021 so far. It's varied by year (we've received big grants from non-EA sources in the past). So far, it is almost always to support animal welfare research (or unrestricted, but from a group motivated to support us due to our animal welfare research).
One tricky part of separating this out - there are a lot of people in the animal welfare community who are interested in impact (in an EA sense), but maybe not interested in non-animal EA things.
Minor nit:
should be
As discussed in this comment thread (by you :P), an increasingly high percentage of our work is targeted towards specific decision-makers, and whether we choose to publish is due to a combination of researcher interest, decision-maker priorities, and the object-level nature of what the research entails.
I'm particularly glad you note this since the survey team's research in particular is almost exclusively non-public research (basically the EA Survey and EA Groups Survey are the only projects we publish on the Forum), so people understandably get a very skewed impression of what we do.
Thanks for asking. We've run around 30 survey projects since we were founded. When I calculated this in June, we'd run a distinct survey project (each containing between 1 and 7 surveys), on average, every 6 weeks.
Most of the projects aren't exactly top secret, but I err on the side of not mentioning the details or who we've worked with unless I'm certain the orgs in question are OK with it. Some of the projects, though, have been mentioned publicly, but not published: for example, CEA mentioned in their Q1 update that we ran some surveys for them to estimate how many US college students have heard of EA.
An illustrative example of the kind of project a lot of these are would be an org approaching us saying they are considering doing some outreach (this could be for any cause area) and wanting us to run a study (or studies) to assess what kind of message would be most appropriate. Another common type of project is just polling support for different policies of interest and testing the robustness of these results with different approaches. These two kinds of projects are the most common, but they generally take up proportionately less time.
There are definitely a lot of other ... (read more)
In your past experiences, what are the biggest barriers to getting your research in front of governmental organisations? (ex: official development aid grantmakers or policy-makers)
Biggest barriers in getting them to act on it?
I would break this down into a) the methods for getting research in front of government orgs and b) the types of research that gets put in front of them.
In general I think we (me for sure) haven’t been optimising for this enough to even know the barriers (unknown unknowns). I think historically we’ve been mostly focused on foundations and direct work groups, and less on government and academia. This is changing so I expect us to learn a lot more going forward.
As for known unknowns in the methods, I still don’t know who to actually send my research to in various government agencies, what contact method they respond best to (email, personal contact, public consultations, cold calling, constituency office hours?), or what format they respond best to (a 1-page PDF with graphs, a video, bullet points, an in-person meeting? - though this public guide Emily Grundy made on UK submissions while at RP has helped me). Anecdotally it seems remarkably easy to get in front of some: I know of one small animal advocacy organization that managed to get a meeting with the Prime Minister of their country, and I myself have had 1-1 meetings with more than two dozen members of the UK and Irish parliam... (read more)
In your yearly review you mention that Rethink may significantly expand its Longtermism research group in the future, including potentially into new focus areas and topics. Do you have any ideas of what these might be (beyond the mentioned AI governance), and how you might choose (i.e. looking for a niche where Rethink can play a major role, following demand of stakeholders, etc.)?
If in 5 and/or 10 years' time you look back on RP and feel it's been a major success, what would that look like? What kind(s) of impact would you consider important, and by what bar would you measure your attainment/progress towards that?
How have you or would you like to experiment with your organisational structure or internal decision making to improve your outputs?
Any advice for researchers who want to conduct research similar to Rethink Priorities'? Or useful resources that you point your researchers towards when they join?
It has been said before elsewhere by Peter, but it's worth stating again: read and practice Reasoning Transparency. Michael Aird compiled some great resources recently here.
I'd also refer people to Michael and Saulius' replies to arushigupta's similar subquestion in last year's RP AMA.
For longtermist work, I often point people to Holden Karnofsky's impressions on career choice, particularly the section on building aptitudes for conceptual and empirical research on core longtermist topics.
I've also personally gained a lot from arguing with People Wrong on the Internet, but poor application of this principle may be generally bad for epistemic rigor. In particular, I think it probably helps to have a research blog and be able to do things like spot potential holes in arguments (on EA social media, the EA Forum, research blogs, papers, etc.). That said, I think most EA researchers (including my colleagues) are much less Online than I am, so you definitely don't need to develop an internet argument habit to be a good researcher.
Making lots of falsifiable forecasts about short-term conclusions of your beliefs may be helpful. Calibration training is probably less helpful, but lower cost.
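To make the scoring side of this a bit more concrete: one common way to score probability forecasts is the Brier score (the mean squared error between your stated probabilities and what actually happened). This is a minimal Python sketch with entirely made-up forecasts and outcomes, not anything from RP, and the Brier score is just one standard scoring rule rather than a recommendation from the answer above:

```python
# Illustrative only: score made-up probability forecasts with the Brier score
# and do a crude calibration check.

forecasts = [0.9, 0.7, 0.3, 0.8, 0.2, 0.6]   # predicted probabilities of "yes"
outcomes  = [1,   1,   0,   0,   0,   1]     # what actually happened (1 = yes)

# Brier score: mean squared error between forecast and outcome (lower is better).
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Crude calibration check: among forecasts in a given probability band,
# how often did the event actually occur?
band = [(p, o) for p, o in zip(forecasts, outcomes) if 0.6 <= p <= 0.9]
if band:
    hits = sum(o for _, o in band)
    print(f"Events forecast at 60-90%: occurred {hits}/{len(band)} times")
```

Comparing stated probabilities against observed frequencies like this is the basic idea behind calibration training.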
Trying to identify important and tractable (sub)questions is often even more important than the ability to answer them well. In particular, very early on in a research project, try to track "what if I answered this question perfectly? Does it even matter? Will this meaningfully impact anyone's decisions... (read more)
Let's say your research directly determined the allocation of $X of funding in 2021.
Let's say you have to grow that amount by 10 times in 2022, but with the same number of staff and the same funding and other resources.
What would you change first in your current campaigns, internal operations, etc.?
What are the bottlenecks to using forecasting better in your research?
Lazy semi-tangential reply: I recently gave a presentation that was partly about how I've used forecasting in my nuclear risk research and how I think forecasting could be better used in research. Here are the slides and here's the video. Slides 12-15 / minutes 20-30 are most relevant.
I also plan to, in ~1 or 2 months, write and publish a post with meta-level takeaways from the sprawling series of projects I ended up doing in collaboration with Metaculus, which will have further thoughts relevant to your question.
(Also keen to see answers from other people at RP.)
Will you have some kind of internship/fellowship opportunities next summer?
What are some key research directions/topics that are not currently being looked into enough by the EA movement (either at all or in sufficient depth)?
Interesting that you've got climate change in your global health and development work rather than with longtermism. What are the research plans for the climate change work at RP?
I'm interested in your current and future work on longtermism.
One of your plans for 2022 is to:
Have you decided on the possible additional research directions you are hoping to explore? When you're figuring this out, are you more interested in spotting gaps, or do you feel the field is young enough that investigating areas others are working on/have touched is still likely to be beneficial? Perhaps both!
What should one do now if one wants to be hired by Rethink Priorities in the next couple years? Especially in entry-level or more junior roles.
I realize this is a general question; you can answer in general terms, or specify per role.
From a talk at EAG in 2019, I remembered that your approach could be summarized as empirical research in neglected areas (please correct me if I'm wrong here). Is this still the case? Do you still have a focus on empirical research (over, say, philosophy)?
Answered here and here and here.
About funding overhang:
Peter wrote a comment on a recent post:
You also wrote in your plans for 2022:
... (read more)
We'd expect to find new funding opportunities in each cause area we work in. Our work is aspirational and inherently about exploring the unknown, though, so it's very difficult to know in advance how large the funding gaps we uncover will be. But hopefully our work will contribute to a body of work that, overall, shifts EA from having a funding overhang to instead having substantial room for more funding in all cause areas. This will be a multi-year journey.
Sorry if the answer to this is readily available elsewhere, but are there recommended times of the year to donate if you are based in the UK, e.g. to make use of matching opportunities? My understanding is that the Giving Tuesday Facebook matching is only for US donors.
Thanks!