Open Philanthropy strives to help others as much as we can with the resources available to us. To find the best opportunities to help others, we rely heavily on scientific and social scientific research.
In some cases, we would find it helpful to have more research in order to evaluate a particular grant or cause area. Below, we’ve listed a set of social scientific questions for which we are actively seeking more evidence.[1] We believe the answers to these questions have the potential to impact our grantmaking. (See also our list of research topics for animal welfare.)
If you know of any research that touches on these questions, we would welcome hearing from you. At this point, we are not actively making grants to further investigate these questions. It is possible we may do so in the future, though, so if you plan to research any of these, please email us.
Land Use Reform
Open Philanthropy has been making grants in land use reform since 2015. We believe that more permissive permitting and policy will encourage economic growth and allow people to access higher-paying jobs. However, we have a lot of uncertainty about which laws or policies would have the greatest impact on housing production (or which are most neglected or tractable relative to their impact).
- What are the legal changes that appear to spur the most housing? E.g. can we estimate the effects of removing parking mandates on housing production? How do those compare to the effects of a higher floor area ratio (FAR) or more allowable units?
- Why we care: We think that permitting speed might be an important category to target, but have high uncertainty about this.
- What we know: There are a number of different studies of the effects of changes in zoning/land use laws (e.g. see a summary here in Appendix A), but we’re not aware of studies that attempt to disentangle the effects of specific changes or rank their importance. We suspect that talking to advocates (e.g. CA YIMBY) would be useful as a starting point.
- Ideas for studying this: It seems unlikely that there have been “clean” changes that only affected a single part of the construction process, but from talking to advocates, it seems plausible that it would be possible to identify changes to zoning codes that primarily affect one parameter more than others. It also seems plausible that this is a topic where a systematic review, combining evidence from many other studies, would be unusually valuable.
- What is the elasticity of construction with respect to factors like “the likelihood of acquiring permission to build” or “the length of an average permitting delay”?
- Why we care: We are highly uncertain about how to best encourage more construction, and thus about where to target our grants.
- What we know: there have been many recent changes to permitting requirements, such as the California ADU law that requires cities to respond to permit requests within 60 days and a new law in Florida that requires cities to respond to permit requests quickly or return permitting fees. This blog post by Dan Bertolet at Sightline predates those changes, but is the best summary we’ve seen on the impacts of permitting requirements.
- Ideas for studying this: one might compare projects that fall just below or above thresholds for permitting review (e.g. SEPA thresholds in Washington state) and try to understand how much extra delay projects faced as a result of qualifying for review. It could also be valuable to analyze the effects of the Florida law (e.g. a difference-in-differences design comparing housing construction in places that had long delays vs. short delays prior to the law passing); a sketch of that design follows this item.
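A minimal sketch of that difference-in-differences comparison, assuming a hypothetical county-month panel of permit counts. The file name, column names, and treatment definition are all placeholders rather than references to any real dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per county-month, with columns
#   county, month, permits (housing units permitted),
#   long_delay (1 if the county had long permitting delays before the law),
#   post (1 for months after the law took effect)
df = pd.read_csv("florida_permits.csv")  # placeholder file name

# Two-way fixed effects difference-in-differences: the county and month fixed
# effects absorb the main effects, so the coefficient on long_delay:post is the
# estimated effect of the law on construction in previously slow counties.
did = smf.ols(
    "permits ~ long_delay:post + C(county) + C(month)", data=df
).fit(
    cov_type="cluster",
    cov_kwds={"groups": df["county"].astype("category").cat.codes},
)
print(did.summary())
```

Standard errors are clustered by county; in practice one would also check pre-trends with an event-study version of the same specification.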
- Does the value of new housing (in terms of individual earnings gains and/or innovation benefits) in a commuting zone or metro area vary depending on whether that housing is in the center or on the periphery?
- Why we care: Currently, estimates of this value are typically made at the level of the metro area, but it seems plausible that we should be differentiating more – e.g. putting higher values on units built in Manhattan relative to those built in Westchester.
- What we know: there’s a lot of work on the gradient of land/house prices with respect to transit costs across metro areas, but we aren’t aware of work that explicitly tries to estimate within-metro differences (in the vein of Card, Rothstein, and Yi (2023), for example).
- Ideas for studying this: it should be possible to use similar designs looking at moves at a more granular level (e.g. rather than defining effects at the metro level, use changes in distance-weighted job availability). There may also be ways to directly use the land price gradient to estimate this (though in general that will also reflect amenity values).
- What are the impacts of a land value tax?
- Why we care: Some people have proposed that a land value tax could encourage land redevelopment and reduce the economic inefficiency of taxation, but we do not know how well this reflects the real-world impact of land value taxes.
- What we know: Land value taxes have been used in some Pennsylvania cities, and in some countries outside the US. There has also been increasing interest in implementing a land value tax in other places (e.g. this FT editorial). See here for many more arguments and references related to land value taxation.
- Ideas for studying this: one could use a difference-in-differences design looking at when cities adopt a land value tax (or a split-rate tax) and examine changes in construction or other outcomes (e.g. the volume of land transactions). Alternatively, one could try a border regression discontinuity looking at differences in land transactions or other metrics at the border between a place that implements a land value tax and one that does not (see the sketch below).
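A minimal sketch of that border discontinuity, assuming a hypothetical parcel-level dataset of land transactions near the municipal border. The file name, column names, and bandwidth are placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical parcel-level data near the border, with columns
#   dist (signed distance to the border in km, positive on the LVT side),
#   lvt_side (1 if the parcel is in the land-value-tax jurisdiction),
#   transacted (1 if the parcel sold during the study window)
parcels = pd.read_csv("parcels_near_border.csv")  # placeholder file name

bandwidth = 2.0  # km; in practice, chosen with a data-driven bandwidth selector
local = parcels[parcels["dist"].abs() <= bandwidth]

# Local linear regression with separate slopes on each side of the border;
# the coefficient on lvt_side is the estimated jump at the boundary.
rd = smf.ols("transacted ~ lvt_side * dist", data=local).fit(cov_type="HC1")
print(rd.params["lvt_side"])
```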
Health
Treatments now potentially within reach may extend the human lifespan and improve quality of life. We aim to support tractable and cost-effective research on the world’s most burdensome diseases, including cardiovascular disease, infectious diseases, malaria, and others.
- How do different components of PM 2.5 pollution differ in their impact on mortality? Should Open Philanthropy be considering the type of pollutant when modeling the harms of poor air quality?
- Why we care: Open Philanthropy makes many grants focused on South Asian air quality. However, we still have a lot of uncertainty about the impacts of air pollution. One potentially important variable is the type of pollutant; it would be important for our grantmaking to know if some forms of pollution were much more impactful to reduce than others.
- What we know: We know that the components of PM 2.5 pollution can vary substantially by location. There has been some associational work done on this in the US context, but we are more interested in areas with high baseline PM 2.5 levels.
- Ideas for studying this: there is some existing data on how the components of PM 2.5 pollution vary across India. This could be linked with mortality data for associational studies. One could also use policy changes that changed the makeup of particulate emissions in a certain area as a natural experiment.
- What has been the impact of Bloomberg’s tobacco-focused grantmaking? Has tobacco usage decreased in Bloomberg focus countries, and can that be causally linked to Bloomberg’s involvement?
- Why we care: Open Philanthropy has made some grants attempting to influence public health regulation. We are interested in knowing how successful other philanthropists have been when making similar grants, and are particularly interested in knowing the effects of Bloomberg’s anti-tobacco advocacy, which we see as one of the most focused (and promising) programs of its type.
- What we know: there has been substantial research on the effects of tobacco policy, but we are not aware of any work that focuses specifically on the effect of Bloomberg’s investments.
- Ideas for studying this: some of Bloomberg’s grantmaking in tobacco is public; one could use a variety of approaches to assess the impact of those grants (e.g. a synthetic control, sketched below).
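A bare-bones synthetic control sketch, assuming hypothetical pre- and post-period arrays of smoking prevalence for a set of donor countries and a Bloomberg focus country (all names are placeholders):

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(pre_donors, pre_treated):
    """Find non-negative donor weights summing to 1 that best reproduce the
    focus country's pre-intervention smoking prevalence.

    pre_donors: (T0, J) array of pre-period outcomes for J donor countries
    pre_treated: (T0,) array of pre-period outcomes for the focus country
    """
    J = pre_donors.shape[1]
    loss = lambda w: np.sum((pre_treated - pre_donors @ w) ** 2)
    result = minimize(
        loss,
        x0=np.full(J, 1.0 / J),
        bounds=[(0.0, 1.0)] * J,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return result.x

# Estimated effect: gap between observed and synthetic outcomes after the
# program begins (post_donors and post_treated are the post-period analogues).
# w = synthetic_control_weights(pre_donors, pre_treated)
# gap = post_treated - post_donors @ w
```

A placebo analysis (re-running the same procedure treating each donor country as if it were the focus country) is the usual way to gauge whether the estimated gap is unusually large.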
- What are the causal impacts of lead exposure on cardiovascular health?
- Why we care: we have made grants on reducing lead exposure in low-income countries in the past and are likely to make more in the future. These grants are made assuming that lead affects both health and income, but we are quite uncertain about the magnitude of the effect of either, especially on health (where we think there is less data). Better estimates of the effect of lead on health would reduce the level of uncertainty around the cost-effectiveness of these grants.
- What we know: according to epidemiological (observational) studies, lead has negative impacts on cardiovascular health (see a helpful systematic review here). However, there is limited causal evidence on the impacts of lead on cardiovascular disease in humans; our primary evidence comes from a study that leverages exposure to NASCAR races to determine changes in ischemic heart disease in the elderly, but we don’t know much about chronic exposure and are reluctant to rely heavily on a single study.
- Ideas for studying this: one might use exogenous variation in exposure to lead, such as the recent removal of lead from spices in Bangladesh, or study workers in lead-exposed industries (while taking steps to handle the “healthy worker effect”[2]).
- Historically, how have properties of vaccines other than their price – in particular measured efficacy and duration/waning in pre-licensure trials – affected demand for those vaccines and the speed of their rollouts in South Asian and sub-Saharan African countries? What rules of thumb should we use for how an increase of, say, 10 percentage points in measured efficacy affects demand and eventual health impact?
- Why we care: Open Philanthropy invests in vaccines for a variety of illnesses, with the primary (though not exclusive) goal of reducing mortality. Having better estimates for how properties of vaccines translate to demand and eventual health impact will help us prioritize when to support “good” leads in clinical trials vs. hold out longer for “great” ones.
- What we know: the efficacy of vaccines for different diseases varies considerably, and improved technologies can lead to more promising candidates even for diseases where one or more products are already available.
- Ideas for studying this: one could interact the efficacy of a given year’s flu vaccine (see data here, for example) with the propensity to get the flu vaccine to determine how this changed flu dynamics. (Though data from South Asia or sub-Saharan Africa would be even better.)
- We’d be interested in a Mendelian randomization study that followed people heterozygous for the sickle cell variant to estimate the long run effect of malaria on income and/or consumption.
- Why we care: Much of Open Philanthropy’s grantmaking in global health R&D is focused on preventing malaria in high-risk populations (as are several charities recommended by GiveWell, with whom we work closely on global health). However, we have little causal evidence on the long-run effects of having had malaria, on either health or income. Thus, we do not have a good sense of the true (long-run) value of preventing malaria.
- What we know: a Mendelian randomization study found that the likelihood of stunting increases with each malaria infection.
- Ideas for studying this: Mendelian randomization uses genetic variation, which is effectively randomly assigned at conception, to estimate the causal effect of an exposure on later outcomes. Being heterozygous for the sickle cell variant is largely symptomless but protective against malaria, so people with sickle cell trait experience fewer infections and can be compared against those without the trait to estimate the long-run impacts of malaria (a simple version of this comparison is sketched below).
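A minimal Wald-style sketch of that comparison, assuming a hypothetical adult survey with genotype, recorded childhood malaria episodes, and log income. The file and column names are placeholders:

```python
import pandas as pd

# Hypothetical survey: one row per adult, with columns
#   hbas (1 if heterozygous for the sickle cell variant, 0 otherwise),
#   malaria_episodes (number of recorded childhood infections),
#   log_income (log of adult income or consumption)
cohort = pd.read_csv("sickle_cell_cohort.csv")  # placeholder file name

# Reduced form: difference in adult log income by genotype
reduced_form = cohort.groupby("hbas")["log_income"].mean()
# First stage: difference in childhood malaria exposure by genotype
first_stage = cohort.groupby("hbas")["malaria_episodes"].mean()

# Wald/IV estimate: effect of one additional malaria episode on log income,
# using the genotype as the instrument
wald = (reduced_form[1] - reduced_form[0]) / (first_stage[1] - first_stage[0])
print(wald)
```

In practice one would add controls and compute standard errors with a two-stage least squares package, but the Wald ratio captures the core identification idea.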
- Which vaccines, and what fraction of common vaccines, have been trialed for fractional dosing? How often does a lower vaccine dose lead to the same or better efficacy? Which vaccines might be promising for fractional dosing research trials?
- Why we care: Open Philanthropy is interested in cost-effectively improving health. Fractional dosing has the potential to lower cost and expand coverage of vaccines. If we had better evidence on this topic, OP could know in which cases (if any) to advocate for more fractional dosing.
- What we know: a fractional dose for yellow fever and flu appeared to be non-inferior, but fractional dosing for polio was less successful.
- Ideas for studying this: we are not aware of any systematic review of fractional-dose vaccine trials, but many such trials have been run. Studying this topic could involve simply pooling data from past trials, rather than running new ones; a simple pooling sketch follows this item.
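If past trials report comparable effect estimates, a fixed-effect inverse-variance pooling along these lines is one way to summarize them. The numbers below are placeholders, not real trial results:

```python
import numpy as np

# Placeholder per-trial estimates: log relative risks of failing to seroconvert
# for fractional vs. full dose, with their standard errors
log_rr = np.array([0.05, -0.10, 0.20])
se = np.array([0.08, 0.12, 0.15])

# Fixed-effect inverse-variance pooling
weights = 1 / se**2
pooled = np.sum(weights * log_rr) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Pooled relative risk and 95% confidence interval
print(np.exp(pooled), np.exp(low), np.exp(high))
```

If trials differ substantially in populations, vaccines, or endpoints, a random-effects model would be more appropriate than this fixed-effect version.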
- Where is the Global Burden of Disease (GBD) likely to be inaccurate? Which figures or estimates is it likely to significantly revise in the future after improving their modeling? Can we get some kind of ‘certainty or quality of evidence score’ for the individual point estimates we sometimes source from GBD? We are particularly interested in data on environmental pollutants like arsenic or lead; South Asian and sub-Saharan African estimates on mortality from rheumatic heart disease, syphilis, dengue, hepatitis B, hepatitis C, and hepatitis E; as well as understudied issues like fungal disease or endocrine-disrupting chemicals.
- Why we care: our estimates of the importance of a particular problem or disease are often based on GBD estimates. However, these estimates have changed substantially across editions of GBD[3] and can have wide error bars. We are interested in knowing the true burden so that we can appropriately estimate the importance of particular diseases/etiologies, as well as more generally understanding how reliable the GBD is for guiding our own investments in the measurement of disease burden.
- What we know: as GBD covers all deaths and DALYs in the world, the team behind it necessarily spends limited time researching any one cause of DALYs. While GBD revisions attempt to address issues with previous estimates, we believe that there may still be substantial errors.
- Ideas for studying this: one could look for sharp changes in burden figures between the current and previous GBD studies, or examine a particular cause of death in detail and compare one’s own estimates to those generated by the GBD at different levels of age or geographic aggregation.
- What drives the diffusion of medications or medical technologies across countries?
- Why we care: new medications and medical technologies can substantially reduce disease burdens and make treating or eliminating an illness more cost-effective. However, different countries adopt technologies at different rates; we are interested in knowing why. Open Philanthropy might then be able to make grants to encourage adoption of particularly promising technologies in underserved areas.
- What we know: it seems that patents, price regulation, and market structure affect drug adoption.
- Ideas for studying this: follow up on the approach in Kyle (2007). One could extend her estimates to track the diffusion of FDA-approved drugs globally via patent filings and then look at predictors of diffusion: disease burden, GDP per capita, price controls, language (English vs. not), and path dependency (whether the same companies sell to the same countries repeatedly). A simple version of that predictor regression is sketched below.
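A minimal sketch of regressing launch lags on those predictors, assuming a hypothetical drug-country dataset. The file and column names are placeholders, and a duration model would handle never-launched drugs more carefully:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per (drug, country) pair, with columns
#   drug, years_to_launch (years from FDA approval to first sale; NaN if never
#   launched), log_gdp_pc, disease_burden, price_controls, english,
#   same_firm_history
pairs = pd.read_csv("drug_country_launches.csv")  # placeholder file name
launched = pairs.dropna(subset=["years_to_launch"])

# Simple conditional-on-launch regression of launch lags on candidate
# predictors, with standard errors clustered by drug
model = smf.ols(
    "years_to_launch ~ log_gdp_pc + disease_burden + price_controls"
    " + english + same_firm_history",
    data=launched,
).fit(
    cov_type="cluster",
    cov_kwds={"groups": launched["drug"].astype("category").cat.codes},
)
print(model.summary())
```

A survival model (e.g. Cox proportional hazards) would use the information in drugs that never launch; this OLS version is just the simplest starting point.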
Migration
- Which migration routes have unused capacity? Where are the world’s open borders (particularly where one side of the open border is a low- or middle-income country)?
- Why we care: Open Philanthropy tries to cost-effectively improve health and income. Migration is often considered to be one of the best ways to improve income; for instance, a person moving from a low-income country to a high-income country might raise their income by a factor of 50. We have previously made grants in both international and internal migration, and are interested in knowing whether there are underutilized migration channels whereby migrants might substantially increase their income. Our understanding is that aging populations are causing some high-income countries (HICs) to offer more work visas than they previously offered, but that the uptake of these visas is poorly understood (and may be quite low).
- What we know: there are some international borders that do not require authorization for labor migration (e.g. within the EU, or between India and Nepal). At least one such border includes a low-income country (India/Nepal — India’s per capita GDP is over twice Nepal’s), but as far as we are aware, there is no database of such borders.
- Ideas for studying this: we think valuable descriptive papers could gather information on the relative usage levels of different work visas (in HICs or MICs) that could be accessible to people from LMICs, or on migration paths that don’t have caps on work visas (such as India-Nepal). Limiting to the largest HICs for ease of initial study (e.g. US, Japan, Germany, France, UK) would probably still be very valuable.
Education
- What are the general equilibrium returns to education in LMIC settings?
- Why we care: education may be one of the best ways to increase long-run income. However, most education studies focus only on a small number of treated students; it is less clear what the general equilibrium effects are (that is, effects across an entire city/region/nation). These are important in understanding how valuable education is in raising wages — and if Open Philanthropy should consider education interventions as a cost-effective way of improving income.
- What we know: this question has been examined in both Indonesia and India, but re-examination of these findings has made them seem less robust. In addition, we continue to be surprised that there are so few studies on how large schooling expansions affect wages.
- Ideas for studying this: one might use other large-scale expansions of schooling, such as Ghana’s free senior high school program or the Kenyan schooling expansion studied in Mbiti & Lucas (2012).
- What is the effect of elite policy education? How valuable are programs giving policymakers from LMICs economic policy training?
- Why we care: we think that economic growth is likely to be very important, but it isn’t clear how best to produce higher growth rates through philanthropic funding. One idea would be to increase the supply of highly trained policymakers, who might be able to influence policy that affects many people.
- What we know: we’re not aware of work trying to measure the impact of policy training programs, such as the master’s program at the Williams Center for Development Economics or the MPA/ID at Harvard.
- Ideas for studying this: if you could get access to the admissions data for a program like one of the above examples, you could compare people who were nearly admitted to those who were actually admitted to see whether the programs have an effect on career trajectories. This wouldn’t prove anything directly about growth, but would provide evidence that the programs have some counterfactual effect.
Science and Metascience
- Do efforts to improve the rigor of social science actually work?
- Why we care: many of Open Philanthropy’s decisions are based on social scientific work. As such, we have a vested interest in this work being reliable and replicable. Unreliable or non-replicable work might lead us to make weaker, less impactful funding decisions.
- What we know: the peer review process does not seem to weed out papers with signs of p-hacking, but pre-registration may reduce publication bias.
- Ideas for studying this: one might consider the effects of efforts like the AEA pre-analysis plan registry or the Institute for Replication.
- How do scientists make decisions about allocating marginal time to applications for further funding or research? How valuable is the process of writing applications if they are ultimately unsuccessful? Is it true that scientists are “wasting” a lot of time writing grants?
- Why we care: we think that scientific progress is hugely important to growth and health advances. One concern about current science is that scientists may spend a huge amount of time on high-stakes grant applications instead of doing science (and that the applications may be excessively long relative to what’s necessary for identifying the best science). If this is true, advocating for changes to the grantmaking process might be a high-leverage opportunity for Open Philanthropy.
- What we know: descriptive data suggests that scientists now spend a huge amount of their time applying for grants, and that spending more time on a grant application does not increase the chance of success.
- Ideas for studying this: there is valuable descriptive work to be done on this topic (e.g. surveying scientists on time use and decisions). One could also consider a quasi-experiment that looks at people who narrowly win or lose a grant and examines how that affects their subsequent funding applications and time use.[4]
- How would different decision-making criteria (picking favorites vs consensus) at science funders change the kind of work that’s done? How should we compare criteria, and which outcomes matter for such a comparison?
- Why we care: as above, we believe scientific progress is important to growth and health advances. Therefore, we are interested in making sure scientific funding processes work as well as possible to maximize the amount of impact per federal research dollar. If there are improvements that can be made to how science is funded, Open Philanthropy might fund advocacy for such improvements.
- What we know: Carson, Graff Zivin and Shrader (2023) find that reviewers would prefer to prioritize papers with more variance in review scores, and that if this preference were taken into account it would likely lead to different projects being funded. A review of the literature suggests that peer review of applications can identify some of the most promising ideas, but the level of signal is fairly weak.
- Ideas for studying this: one might look at data on past applications and see how the set of funded projects would have differed under different selection criteria, such as max score or random selection among projects above a certain level of quality (a simulation along these lines is sketched below). Alternatively, one could randomize within a specific RFP (so that some proposals are selected under different criteria) or randomize across RFPs (so that you can also see how various selection criteria affect the kinds of applications received). The Institute for Progress is currently studying this in collaboration with NSF.
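A toy comparison of selection rules, using simulated scores and impact as stand-ins for a real archive of past applications. Everything here is a placeholder; with real data, the DataFrame would come from the funder's records and "impact" from some later outcome such as citations or products:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated stand-in for an archive of past applications
apps = pd.DataFrame({
    "mean_score": rng.normal(5.0, 1.0, 1000),
    "max_score": rng.normal(6.0, 1.2, 1000),
    "impact": rng.lognormal(0.0, 1.0, 1000),  # later outcome, e.g. citations
})
budget = 100  # number of projects that can be funded

def fund_top(criterion):
    """Fund the top-ranked projects under a given criterion."""
    return apps.nlargest(budget, criterion)["impact"].sum()

def fund_lottery(criterion, quantile=0.7):
    """Fund a random draw among projects above a quality threshold."""
    eligible = apps[apps[criterion] >= apps[criterion].quantile(quantile)]
    drawn = eligible.sample(n=min(budget, len(eligible)), random_state=0)
    return drawn["impact"].sum()

print(fund_top("mean_score"), fund_top("max_score"), fund_lottery("mean_score"))
```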
- How impactful are systematic reviews (e.g., Cochrane reviews, the Campbell Collaboration), evidence clearinghouses (e.g., Pathways to Work, the Impact Genome Registry), and literature reviews? Does policy or decision-making within a specific domain measurably improve when reviews become available?
- Why we care: a large share of the value of academic research comes from its ultimate impact on human decisions, but ultimate decision-makers are usually not academics who are well equipped to read and understand individual academic studies. Open Philanthropy would like to know how decision-makers use academic research, and whether there might be improvements to systematic reviews such that decision-makers could be better informed.
- What we know: We know remarkably little. This study argues that academic citation networks are significantly impacted by literature reviews, and suggests that they help to organize and orient fields. This study finds that policymakers respond more to sets of studies finding the same thing across multiple settings than to individual studies – but the results are mixed.
- Ideas for studying this: we think the rollout of evidence clearinghouses is likely pseudorandom across topics, such that measuring their impact may be tractable with difference-in-differences methods. For example, one could study outcomes across different disease categories as the Cochrane Collaboration rolled out new systematic reviews, starting when it was founded in 1993.
- How well do prize competitions really work? What is the best way to design prize competitions to get the most useful information?
- Why we care: Open Philanthropy has occasionally run prize competitions to try to generate useful knowledge. See, for example, our Cause Exploration Prizes and AI Worldviews Contest. We may run more prizes in the future; as such, we would like to know how likely a prize competition is to gather useful information and how best to attract talented entrants.
- What we know: a 2010 paper argues that proportional prize contests produce more total achievement, but another paper is less prescriptive about ideal prize structure.
- Ideas for studying this: InnoCentive has run many prize-like competitions; they might be able to share some useful retrospective data.
- What is the impact on overall research generation of opening a hub or office focused on producing impact evaluations?
- Why we care: we believe that rigorous social scientific research is key to identifying the most impactful and cost-effective interventions and policies in developing countries, some of which we may go on to fund. We are interested in knowing cost-effective ways to produce more of said research. We have funded a new IPA office previously, and might fund more such work in the future if we had more evidence about its impact on research, both overall and specific to the target country.
- What we know: Matt Clancy, who leads our grantmaking in innovation policy, coauthored an article on the extent to which research done in one place can be usefully applied in other places. Obstacles to this include different places having different underlying conditions, as well as evidence that policymakers prefer research conducted in their own countries. The article’s bibliography includes many relevant sources.
- Ideas for studying this: Gechter and Meager (2021) collected data on the openings of developing-country offices for NGOs interested in conducting research within said countries. One could use a difference-in-differences design to look at how research production (and RCT production in particular) changes when a new office opens – does it cause an increase in total research in those countries? Is there evidence of substitution from non-RCTs to RCTs? Substitution from neighboring countries to the country with a new office? Do new offices tend to produce research on different topics from existing offices (e.g. focusing more on financial inclusion instead of agriculture)?
- What forms of research do policymakers find most persuasive or useful?
- Why we care: Open Philanthropy is often interested in influencing policy. Therefore, we want to learn about what is most likely to influence policymakers’ decision-making. We are quite uncertain what types of evidence are most likely to influence policymakers, or in what venues this evidence is likely to be presented.
- What we know: Policy documents cite a relatively small number of scientific publications. In one study, policymakers did not seem to respond to the strength of evidence when deciding what to implement; in another, they cared more about external validity than internal validity; in a third, they cared substantially about sample size.
- Ideas for studying this: what evidence do central banks (and other governmental institutions) cite most often, and how does this differ from academic citation practices? Is there additional evidence on what types of evidence best persuade policymakers or are most likely to get cited as part of regulatory decisions? E.g. how do citations from a government agency (e.g. the FTC) compare to citations in academic work on similar topics?
- What factors drive public sector spending levels for R&D, especially changes to the status quo?
- Why we care: Open Philanthropy wants to raise income levels across society. Our previous work has suggested that public spending on R&D is one of the most effective ways for governments to increase their countries’ income levels. We are thus interested in knowing how the level of public spending on R&D is set, and if there are tractable ways that Open Philanthropy might advocate for this to be increased.
- What we know: there is relatively little information available about the process of setting national-level priorities, but there is some data available about agenda-setting within NIH.
- Ideas for studying this: we aren’t sure of the best approach. Focusing on particular periods of growth in R&D spending and producing case studies might yield evidence that could be explored in a quantitative way later.
Global Development
- How much do USAID Administrators or other senior aid staff affect the division of funds? How much influence can a (senior) staff member actually have?
- Why we care: Open Philanthropy makes grants in global aid advocacy and is interested in increasing both the amount and efficacy of rich countries’ foreign aid. We are interested to know how much influence agency leadership has on the distribution of aid in order to benchmark how much change we should expect over different time frames.
- What we know: we’re not aware of any work addressing this.
- Ideas for studying this: when a new Administrator is appointed, how much does the distribution of aid change across different categories? Ideally, it would be interesting to compare USAID (which is known to have many Congressional earmarks) to other countries with more flexible aid budgets.
- How differentiated are growth diagnostics? Do they have the same core recommendations a large fraction of the time?
- Why we care: we believe that sustained economic growth is one of the best ways to improve health and income, and we are interested in knowing how to bring it about. Growth diagnostics are a common tool for trying to select growth-friendly policies, but we are uncertain how valuable this tool is. We are interested in what additional information is gained from using growth diagnostics – how useful they are, and the extent to which this suggests that countries face common vs. distinct growth challenges.
- What we know: while there are many papers on growth diagnostics, we are not aware of any evaluation of growth diagnostics across countries.
- Ideas for studying this: one could take a large body of growth diagnostics from a common source (e.g. the World Bank or the Harvard Growth Lab), use automated text-similarity methods to compare their recommendations, and determine whether that similarity varies by baseline GDP (e.g. do similarly rich/poor countries have similar diagnostics?) or region (e.g. do Central Asian countries have similar diagnostics?). One minimal way to automate the comparison is sketched below.
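A minimal sketch of the automated comparison, assuming a hypothetical folder of plain-text diagnostic reports (the directory path is a placeholder):

```python
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: one plain-text growth diagnostic per country
paths = sorted(Path("diagnostics").glob("*.txt"))  # placeholder directory
reports = [p.read_text(encoding="utf-8") for p in paths]

# TF-IDF representation of each report, then pairwise cosine similarity
tfidf = TfidfVectorizer(stop_words="english", max_features=20000)
X = tfidf.fit_transform(reports)
similarity = cosine_similarity(X)  # country-by-country similarity of recommendations

# These pairwise similarities can then be related to differences in GDP per
# capita or to a same-region indicator to test whether diagnoses cluster.
print(similarity.round(2))
```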
Other
- Does social issue content in TV shows or movies change opinions? What is the value of a widely-viewed documentary?
- Why we care: many of the social changes we care about — such as encouraging migration or expanding people’s moral circles to include farmed animals — are often addressed in widely-viewed media. We are interested in knowing whether such coverage changes minds.
- What we know: media seems to be able to influence decision-making (as with fertility in Brazil). Blackfish decreased attendance at SeaWorld and reduced the value of the company that owned the park.
- Ideas for studying this: we think there is more scope to study individual documentaries or shows (did Waiting for Superman affect views on education? Did Bowling for Columbine affect views on guns?). One could also conduct meta-analyses across a variety of documentaries or shows to look for common effects.
- How common are non-compete agreements outside the US? What are their effects on wages?
- Why we care: we believe that non-competes are likely to reduce labor mobility and decrease innovation.
- What we know: there are some surveys on the prevalence of non-competes outside the US, but few are recent or comprehensive. Outside of recent work in Italy, we have little information about how prevalent non-competes are, or how harmful they are in labor markets outside the US.
- Ideas for studying this: one could gather information on the prevalence of non-competes and their effects on wages in other large labor markets, like Germany, France, and Spain.
[1] Note that this list does not include purely scientific questions that would impact future OP grantmaking.
[2] That is, people who remain employed tend to be healthier than those who stop working.
[3] Including estimates for exactly the same timeframe — that is, the estimates aren’t changing because of some change in the world.
[4] Adda et al. (2023) looks at a very similar question.
As a social scientist, these lists are very helpful, thank you team. It's useful to be able to point students and colleagues to open questions that are immediately decision-relevant.
This statement cracked me up for some reason: "At this point, we are not actively making grants to further investigate these questions. It is possible we may do so in the future, though, so if you plan to research any of these, please email us."
I.e., this isn't an RFP (request for proposals). Instead, it's more like a RFINP, BMITFWDEK? Request for Information Not Proposals, But Maybe In The Future (We Don't Even Know)?
OK, mostly joking -- in all seriousness, I haven't seen wealthy philanthropies release lists of ideas that are hopefully funded elsewhere, but maybe that actually makes sense! No philanthropy can fund everything in the world that might be interesting/useful. So maybe all philanthropies should release lists of "things we're not funding but that we hope to see or learn from."
I'm curating this post. I appreciate OpenPhil taking the time to signal the research they would value!
If anyone has thoughts about a research question in this list, consider writing some preliminary notes for Draft Amnesty Week.
Nice list. I wish I had the econ expertise to do land use reform research, but since I don't, I'm happy to be a resource if anyone has law-related questions about it.
On media influencing decision-making, you might want to give Giulia Buccione and Marcela Mello's working paper "Religious Media, Conversion and its Socio-Economic Consequences: The Rise of Pentecostals in Brazil" a read.
On this topic, Desmond Ang's recent AER paper “The Birth of a Nation: Media and Racial Hate” is also worth a look.
On Health (point 8), this paper shows that patent pooling can be effective at improving access to drugs in LMICs:
https://www.sciencedirect.com/science/article/pii/S0167629622000868?via%3Dihub
Executive summary: Open Philanthropy is interested in funding additional social science research to help guide their grantmaking decisions in areas like health, migration, and economic growth. They list specific questions where more evidence would be useful.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.