By Peter Hurford and Tee Barnett
This is the seventh article in the EA Survey 2017 Series. You can find supporting documents at the bottom of this post, including previous EA surveys conducted by Rethink Charity, and an up-to-date list of articles in the series. Get notified of the latest posts in this series by signing up here.
Summary
- We use past survey data to shed light on community shifts in cause area preferences over time.
- Our evidence suggests that EAs are becoming more favorable toward AI and less favorable toward politics.
- EAs in both the 2015 and 2017 surveys shifted away from viewing poverty as a “top” or “near top” cause.
- Newcomers in the 2015 survey were less accepting of global poverty than veterans; in the 2017 survey the reverse was true, with newcomers more accepting of global poverty than veterans.
- There is no indication that EAs are getting less interested in animal welfare with time.
Cause Preference Shifts
Our previous posts in this series were largely descriptive, often reporting on 2015 and 2016 to provide an approximate snapshot of the current EA community. As the series progresses into late 2017, we’ll look to extract further insight from the data, which will include various longitudinal analyses, commentary on the Pledge, and potentially other angles upon request.
We turn first to a commonly held narrative within the community – that new EAs are typically attracted to poverty relief as a top cause, but subsequently branch out after exploring other EA cause areas. An extension of this line of thinking credits increased familiarity with EA for making AI more palatable as a cause area. In other words, the top of the EA outreach funnel is most relatable to newcomers (poverty), while cause areas toward the bottom of the funnel (AI) become more appealing with time and further exposure. (For example, see Michael Plant’s post “The marketing gap and a plea for moral inclusivity”.) While we previously reported strong support for global poverty as a top cause, we find some support for a version of the narrative that EAs shift away from global poverty over time.
There are two ways we’ve looked at changes in cause preferences over time. First, we used the year respondents reported joining the community to compare the cause preferences of earlier EAs to those of newcomers. Second, we took the population of EAs who answered the EA Survey in both 2015 and 2017 and looked at how the same people changed their opinions of their top cause over this two-year gap. The first method has a larger sample size, while the second captures intrapersonal attitude shifts over time. Both tell a similar tale.
Using the longitudinal method, we could match 184 people who took both the 2015 and 2017 EA Surveys (using a hashed email address to preserve anonymity). To get a quick overview of cause preference change over time, we took the number of people who shifted toward a cause (they did not consider the cause a “top priority” or “near the top priority” in 2015, but did as of 2017) and subtracted the number of people who shifted away from a cause (they considered the cause “top” or “near top” in 2015 and no longer do). This gave us a number we call the “net shift” for each cause.
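To make the method concrete, here is a minimal sketch in Python with pandas. The hashing scheme, column names, and toy data are our own illustrative assumptions, not the actual survey pipeline:

```python
import hashlib
import pandas as pd

def email_key(email: str) -> str:
    """Hash a normalized email address so respondents can be matched across
    survey years without retaining the address itself. (Illustrative; the
    survey's actual hashing scheme may differ.)"""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# In practice, the 2015 and 2017 responses would be joined on email_key(...).
# Hypothetical matched longitudinal data: one row per respondent who took
# both surveys, with boolean columns for whether they rated a cause
# "top" or "near top" that year. Column names are invented for this sketch.
matched = pd.DataFrame({
    "poverty_top_2015": [True, True, False, True],
    "poverty_top_2017": [True, False, True, False],
})

def net_shift(df: pd.DataFrame, cause: str) -> int:
    """People who shifted toward a cause (not "top"/"near top" in 2015 but
    "top"/"near top" in 2017) minus people who shifted away from it."""
    toward = (~df[f"{cause}_top_2015"] & df[f"{cause}_top_2017"]).sum()
    away = (df[f"{cause}_top_2015"] & ~df[f"{cause}_top_2017"]).sum()
    return int(toward - away)

print(net_shift(matched, "poverty"))  # -1 on this toy data
```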
Cause area preferences fluctuated slightly between the 2015 and 2017 EA surveys (Table 1). Poverty remains the clear community favorite, although the net shift broken down by cause area reveals that interest in poverty has been waning since the 2015 EA survey, with a net shift of -8. Interestingly, politics has hemorrhaged the most interest (-13) in the wake of Brexit, Trump’s victory, and other significant political developments in traditional EA hubs. The biggest winners in net gains are AI (+29) and non-AI far future (+14), which suggests at least some directional movement toward long-term concerns over time.
We were compelled to take a closer look at the dropping interest in poverty, particularly given its continued popularity in the aggregate and its traditional status as an EA mainstay. Between the 2015 and 2017 surveys, 14.13% of EAs in the longitudinal sample changed their minds about how much importance should be placed on the cause (Table 2): 9.24% of the sample no longer consider poverty a “top” or “near top” cause, while 4.89% upgraded their estimation of poverty’s importance.
However, there has been more movement between “top” and “near top”: 19.02% of EAs in the longitudinal sample demoted poverty from their top cause over the two years, while only 5.98% promoted poverty to their most important cause area (Table 3).
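Tables 2 and 3 are essentially transition tables over the matched sample. A hedged sketch of how such a table could be derived, with invented rating labels and toy data:

```python
import pandas as pd

# Hypothetical ratings of poverty by each matched respondent in each year.
ratings = pd.DataFrame({
    "poverty_2015": ["top", "top", "near top", "not top"],
    "poverty_2017": ["near top", "top", "top", "near top"],
})

# Transition table: rows are 2015 ratings, columns are 2017 ratings.
transitions = pd.crosstab(ratings["poverty_2015"], ratings["poverty_2017"])
print(transitions)

# Share demoting poverty from their top cause, and share promoting it.
demoted = ((ratings["poverty_2015"] == "top") &
           (ratings["poverty_2017"] != "top")).mean() * 100
promoted = ((ratings["poverty_2015"] != "top") &
            (ratings["poverty_2017"] == "top")).mean() * 100
print(f"demoted from top: {demoted:.2f}%, promoted to top: {promoted:.2f}%")
```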
To look at this from another perspective, we took the 2017 EA Survey population and distinguished between “veterans”, who learned about EA in 2013 or earlier, and relative newcomers, who learned about EA in 2014 or later[1]. The hypothesis is that veteran EAs have had more time to shift their cause preferences, so their current preferences may be predictive of how newcomers’ preferences will eventually shift.
Taking initial preferences into consideration, EAs who joined in 2013 or earlier were far less likely to rank poverty as the “top” or “near top” priority than EAs who joined in 2014 or later (Table 4), though a majority of these veteran EAs still ranked poverty as the “top” or “near top” cause.
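The endnote reports statistical significance for this cohort difference; one standard way to test it is a chi-square test of independence on the cohort-by-preference counts. A minimal sketch with made-up counts:

```python
from scipy.stats import chi2_contingency

# Made-up 2x2 contingency table: rows are cohorts (joined 2013 or earlier
# vs. 2014 or later); columns are counts rating poverty "top"/"near top"
# vs. not. These numbers are illustrative, not the survey's actual counts.
table = [
    [300, 200],  # veterans: top/near top, not top/near top
    [700, 250],  # newcomers: top/near top, not top/near top
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")
```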
One potential explanation for this difference is not a genuine change in opinion over time, but that veteran EAs were always less likely to favor poverty, whereas newer EAs are simply more likely to favor it. To check our base assumption about whether there has been a significant influx of poverty-focused EAs in recent years, we looked back at the 2015 EA Survey and compared it to the 2017 EA Survey (Table 5).
As of the 2015 Survey, newcomers were actually relatively less accepting of global poverty than veterans, but this effect reverses as of the 2017 EA Survey. This could point to a difference in attitudes between the 2015 and 2017 cohorts of newcomers skewing the data, rather than newcomers from 2015 changing their minds over time.
The data is not entirely clear on whether initially interested EAs change their views away from poverty with time. The perceived separation, with veteran EAs being less poverty-focused, may be down to initial dispositions rather than later conversions. The 2017 EA Survey data does suggest that most newcomers enter the movement interested in poverty, which movement-building organizations may want to bear in mind.
Attitudes Toward AI
Turning to AI, not only has resistance to devoting resources to AI safety declined substantially since the 2015 EA Survey, but this set of concerns is now actively competing with other cause areas for top priority billing.
More people changed their minds on AI than on global poverty (Table 6), with 19.57% of EAs in our longitudinal sample upgrading AI to a “top” or “near top” cause and only 3.8% downgrading it out of “top” and “near top”. Looking at just the top cause area preference, the trends were roughly similar, with 13.04% of EAs in the longitudinal sample promoting AI to their top cause and 7.07% demoting AI from top cause to something else.
Among veteran EAs who joined in 2013 or earlier, support for AI as a “top” or “near top” priority was closer to 50-50, whereas among EAs who joined in 2014 or later there is less support for AI as a “top” or “near top” cause (Table 7). The net shift of aggregate interest toward AI (Table 1) and the broad trend favoring AI (Table 6), combined with the fact that newer EAs favor AI relatively less (Table 7), suggest that more exposure to EA increases the likelihood of supporting AI safety over time.
Attitudes Toward Animal Welfare
We were also curious to run the same checks for animal rights, to see how EA interest in helping animals as a cause has changed over the years.
Here we see that among the 2017 EA Survey respondents, unlike with AI, there is no statistically significant difference between the rate at which newcomers and veterans support animal rights (Table 9). Furthermore, there has been a net shift toward animal welfare among those who took both the 2015 and 2017 EA Surveys (Table 8). Thus, suggestions that EAs are getting less interested in animal welfare over time do not seem to be confirmed by EA Survey data.
Attitudes Toward Politics
Among the 2017 EA Survey respondents, newcomers to EA are relatively more likely to support politics than veterans, though the majority of both groups do not support politics as a “top” or “near top” cause (Table 11). Similarly, among those who took both the 2015 and 2017 EA Surveys, people are shifting away from thinking of politics as a “top” or “near top” cause (Table 10). This may mean that not only is politics less popular as an EA cause overall, but EAs also tend to shift away from it over time. It is also interesting that recent contentious political developments do not appear to have had an energizing effect on EA interest in politics, as far as we can tell from this survey data.
Endnotes
[1]: This effect is statistically significant at p < 0.00001 for both. We chose 2013 because we felt it properly conveyed “veteran” status before a lot of popular growth in EA in 2014, but the effect remains the same in direction and statistical significance, with similar strength, regardless of the choice of cut-off year (tested with 2011, 2012, 2013, 2014, and 2015 as cut-off years).
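The robustness check across cut-off years could be run along these lines; the respondent-level data frame and its column names are invented for illustration:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Invented respondent-level data: the year each person learned about EA and
# whether they rate poverty "top" or "near top". Illustrative only.
df = pd.DataFrame({
    "join_year": [2011, 2013, 2014, 2015, 2016, 2017] * 50,
    "poverty_top": [True, False, True, True, True, False] * 50,
})

# Re-run the veteran/newcomer comparison for each candidate cut-off year.
for cutoff in (2011, 2012, 2013, 2014, 2015):
    veteran = df["join_year"] <= cutoff
    table = pd.crosstab(veteran, df["poverty_top"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"cutoff {cutoff}: chi2 = {chi2:.2f}, p = {p:.3g}")
```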
Credits
Post written by Peter Hurford and Tee Barnett
The annual EA Survey is a volunteer-led project of Rethink Charity that has become a benchmark for better understanding the EA community. A special thanks to Ellen McGeoch, Peter Hurford, and Tom Ash for leading and coordinating the 2017 EA Survey. Additional acknowledgements include: Michael Sadowsky and Gina Stuessy for their contribution to the construction and distribution of the survey, Peter Hurford and Michael Sadowsky for conducting the data analysis, and our volunteers who assisted with beta testing and reporting: Heather Adams, Mario Beraha, Jackie Burhans, and Nick Yeretsian.
Thanks once again to Ellen McGeoch for her presentation of the 2017 EA Survey results at EA Global San Francisco.
We would also like to express our appreciation to the Centre for Effective Altruism, Scott Alexander via SlateStarCodex, 80,000 Hours, EA London, and Animal Charity Evaluators for their assistance in distributing the survey. Thanks also to everyone who took and shared the survey.
Supporting Documents
EA Survey 2017 Series Articles
I - Distribution and Analysis Methodology
II - Community Demographics & Beliefs
III - Cause Area Preferences
IV - Donation Data
V - Demographics II
VI - Qualitative Comments Summary
VII - Have EA Priorities Changed Over Time?
VIII - How do People Get Into EA?
Please note: this section will be continually updated as new posts are published. All 2017 EA Survey posts will be compiled into a single report at the end of this publishing cycle.
Prior EA Surveys conducted by Rethink Charity (formerly .impact)
The 2015 Survey of Effective Altruists: Results and Analysis
The 2014 Survey of Effective Altruists: Results and Analysis