In May, Julia Wise posted about how EA will likely get more attention soon and the steps that EA organizations are taking to prepare. This brief follow-up describes some of the specific actions EA organizations are taking in light of this increased attention.
We wanted to provide this update in particular because of the upcoming launch of What We Owe The Future by Will MacAskill, which comes out on August 16th in the US and September 1st in the UK. Will is doing an extensive podcast tour to promote the book, along with several interviews and articles in high-profile newspapers, many of which will appear in the two weeks before the launch. We are hoping that the media coverage around the launch will help fill a gap in accurate, public-facing descriptions of longtermism. You shouldn't be surprised if there's a significant uptick in public discourse about both effective altruism and longtermism in August!
Below are some updates about EA communications activity in the last couple of months.
New Head of Communications at CEA
The Centre for Effective Altruism recently hired Shakeel Hashim as Head of Communications, to focus on communicating EA ideas accurately outside the EA community. Shakeel is currently a news editor at The Economist and will be starting at CEA in early September. As several recent Forum posts have noted, there has so far been something of a vacuum in proactive press strategy for EA and longtermism, and we hope that Shakeel will help to fill that gap. This will include seeking coverage of EA and EA-related issues in credible media sources, building strong relationships with spokespeople in EA and advising them on opportunities to talk to journalists, and generally helping to represent effective altruism accurately. Importantly, Shakeel will not be leading communications for CEA as an organization, but rather for the EA movement as a whole, in coordination with many partner organizations.
Greater coordination between organizations on EA communications
As Julia mentioned in her original post, staff at CEA, Forethought Foundation, Open Philanthropy, and TSD (a strategic communications firm that has worked with Open Philanthropy for several years) have been meeting regularly to discuss both immediate communications issues and our long-term strategy. Immediate focuses have included responding to incoming news inquiries, preparing spokespeople for interviews, and drafting possible responses to articles about EA and longtermism. We've also drafted a longer-term strategy for how to communicate about EA and related ideas.
New Intro to EA Essay
CEA has also just posted a new Introduction to Effective Altruism article. The article may go through further iterations in the coming weeks, but we think it is a far better and more up-to-date description of effective altruism than the previous essay. We believe this new essay can serve as an illustration of communications best practices in action: it uses examples of good EA-inspired projects to illustrate core values like prioritization, impartiality, truthseeking, and collaborative spirit; it recognizes the importance of both research to identify problems and the practical effort to ameliorate them; and it foregrounds both global health and wellbeing and longtermist causes. We hope this essay, beyond being a soft landing spot for people curious about EA, can serve as a source of talking points for EA communicators. We welcome comments and feedback on the essay, but please don't share it too widely (e.g. on Twitter) yet: we want to improve it further before advertising it.
Resources
If you’d like to flag any concerns or questions you have about media pieces, please email media@centreforeffectivealtruism.org. The CEA team will likely be aware of major pieces, but it’s possible that some smaller ones may slip through the cracks.
If someone requests that you do an interview, please also feel free to reach out to media@centreforeffectivealtruism.org — the team can provide guidance on whether or not to accept the interview, and can provide media training for the interview if you move forward with it. And if you would like advice on any aspect of communications (such as how to frame a blog post or whether to seek coverage for an upcoming announcement), please don’t hesitate to get in touch — the team is here to help.
Conclusion
We’re hoping that these new initiatives will help us not only mitigate the risks of the increased media attention, but also use it to share accurate versions of EA ideas with new audiences and accelerate the change we hope to see in the world. This is a great opportunity to share important ideas, and we’re very excited to make the most of it.
This is great!
In the intro article, I'm not sure I like the comparison between pandemic prevention and counterterrorism.
A couple of reasons:
First, counterterrorism might be construed to include counter-bioterrorism, in which case it's not obvious to me that pandemic prevention and counterterrorism are even mutually exclusive.
Second, both pandemics and terrorism are heavy-tailed phenomena dominated by tail events. Tail events don't happen...until they do. To give an example, here is the same graph but for 2009-2019:
Essentially no deaths from COVID-19! Looks like it's unimportant!
Knowing almost nothing about terrorism, I would expect that a terrorism tail event, such as the detonation of a dirty bomb, could be similar: we wouldn't see it in the statistics until it was too late.
When we broaden the scope, we can see that many more people died from global pandemics (other than COVID, which barely existed in that period) than from terrorism:
However, this is extremely influenced by another tail event: HIV/AIDS. In a world without HIV/AIDS, it would look like this:
This would imply that in some counterfactual world where nothing was different except that AIDS did not exist, I should have concluded in 2019 that global pandemics were about equal to terrorism in scale. That is not a conclusion the data supports: for tail-dominated phenomena, you can't just consider historical average data (certainly not from a ten-year period); you have to consider black swan events, events unlike any that have ever happened.
Comparing the most recent pandemic tail event to average statistics on terrorism doesn't make sense: it's comparing apples to oranges. Either compare average statistics over a long time period or compare the probability and severity of possible tail events. For newly emerging threats like engineered pandemics, average statistics don't make sense at all, since we've never had an engineered pandemic.
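To make this concrete, here's a minimal simulation sketch (my own illustration, not from the article; the Pareto shape and scale are arbitrary assumptions, not calibrated to real death tolls). It shows that for a heavy-tailed process, the typical ten-year window looks far quieter than the long-run average, because rare tail events carry most of the expected toll:

```python
# Illustrative only: annual death tolls drawn from a heavy-tailed
# Pareto distribution (shape alpha = 1.3, so the mean exists but is
# driven by rare, enormous years). All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
alpha, scale = 1.3, 1_000  # hypothetical shape and minimum annual toll
annual_tolls = (rng.pareto(alpha, size=1_000_000) + 1) * scale

# Long-run mean annual toll across the whole simulation.
long_run_mean = annual_tolls.mean()

# Mean toll within each disjoint ten-year window -- what a chart of
# "deaths from X, 2009-2019" effectively shows you.
window_means = annual_tolls.reshape(-1, 10).mean(axis=1)

print(f"long-run mean annual toll:       {long_run_mean:,.0f}")
print(f"median ten-year-window mean:     {np.median(window_means):,.0f}")
print(f"windows below the long-run mean: {(window_means < long_run_mean).mean():.0%}")
```

In most simulated decades the chart looks reassuring even though the long-run expected toll is several times higher, which is exactly the failure mode of reading a 2009-2019 chart at face value.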
Hey, just a quick comment to say that something like this line of objection is discussed in footnote 3.
I'm going to propose the following further edits:
I missed that part of footnote 3; it does seem to address a lot of what I said. I appreciate your response.
I do think the vast majority of people will not read footnote 3, so it's important for the main body of the text (and the visuals) to give the right impression. This means comparing averages to averages, or possible tail events to possible tail events. It sounds like this is your plan now, and if so that's great!
Good post, though I should point out that HIV entered the human population independently at least twice (HIV-1 and HIV-2), so your counterfactual world missing HIV might not be as likely as one might otherwise think.
(There are also counterfactual worlds where SARS-CoV-1 or MERS-CoV or similar took off as well with an even higher death count.)
Didn't actually know that about HIV, good to know!
Agree with this completely.
The fact that this same statistical manoeuvre could be used to downplay nuclear war, climate change, AI risk, or the value of vaccines for diseases like polio should also be particularly worrying.
Another angle is that the number of deaths is directly influenced by the amount of funding. The article says that "the scale of this issue differs greatly from pandemics", but it could plausibly be the case that terrorism isn't an inherently less significant or deadly issue; rather, counterterrorism funding works extremely well, and that's why deaths are so low.
Fantastic news about Shakeel. I am very happy to hear that someone is having a crack at the role you described (comms for EA in general, especially the accurate communication of EA ideas outside of EA).
LOVE the new intro article!!
Feedback
At least for me, it was hard to make out the hierarchy of the content. I wonder if a table of contents might be helpful?
I think the issue stems from the H3 and H4 headings being hard to tell apart, which made it a little confusing to subconsciously keep track of where I was in the document. Another problem could be that "What values unite effective altruism?" and "What are some examples of effective altruism in practice?" are H3 while "How can you take action?" and "FAQ" are H2, even though in my mind they should all be at the same level. Maybe just promoting the first two headers to H2 would be enough to solve most of my confusion.
Also, the preview image when sending the link to someone on LinkedIn strikes me as a little odd and might hinder virality when it's time to share it on social media.
Ideas for Iteration
If the intro article takes off and becomes the "top of the funnel" of effective altruism for a lot of people, optimizing the "conversion rate" of this article could have big downstream effects.
I would definitely encourage collecting one-on-one feedback by having people who are new to EA read the content in person and speak their thoughts out loud.
Qualitative feedback can also be gathered more quickly with a tool like Intercom, which lets you chat directly with people while they're reading and hear their thoughts or answer their questions.
It might also be a good idea to get some quantitative feedback with a tool like Hotjar to see how far people scroll.
If the goal of the article is to get people intrigued by EA and diving deeper, perhaps emphasizing "How can you take action?" or the other parts of the "What's next?" section with special graphics or banners, similar to the newsletter box, would be helpful. Then you can A/B test different iterations to see what gets people to tap more.
Speaking of A/B tests, you might be able to squeeze out a few more percentage points of engagement by experimenting with the order of the examples, the number of examples, which examples are shown, the actual wording of the content itself, the preview image, etc. (see the sketch below).
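To illustrate what evaluating one of those experiments could look like, here's a minimal sketch of a two-proportion z-test on click-through rates. This is my own illustration, not an official recommendation; the function, variant labels, and counts are all hypothetical, and a real analysis would also need to handle multiple comparisons.

```python
# Hypothetical A/B test evaluation: compare click-through on a
# call-to-action between two article variants. Illustrative only.
from math import erf, sqrt

def two_proportion_z_test(clicks_a: int, views_a: int,
                          clicks_b: int, views_b: int) -> float:
    """Return the two-sided p-value for a difference in click rates."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up numbers: variant A (current example order) vs variant B
# (reordered examples).
p = two_proportion_z_test(clicks_a=120, views_a=2400,
                          clicks_b=156, views_b=2380)
print(f"p-value: {p:.3f}")  # ~0.02 here; below 0.05 suggests a real effect
```

With numbers like these the reordering would look genuinely better, though if you run many experiments at once you'd want a stricter significance threshold.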
Thanks Eddie. We're planning to make some design tweaks and some edits in the coming weeks, including a table of contents. I'll post on the Forum when this is done. To be clear, I wouldn't recommend sharing widely until then.
We have done exactly that in the process of writing this essay!
Thanks for the feedback on the image preview - I hadn't spotted that.
One quick note about that new introduction article:
The article says "From 2011-2022, about 260,000 people were killed by terrorism globally, compared to over 20 million killed by COVID-19."
However, the footnote provided seemingly cuts off terrorism deaths data at 2020: "258,350 people were killed by terrorism globally between 2011 and 2020 inclusive."
In my view, this isn't substantive, but it seems worth trying to fix if this is "the" introduction article.
Good point, thanks.
Has anyone given significant thought to the possibility that hostility to EA and longtermism is stable and/or endemic in current society? For example, suppose that because AI capabilities development is a key national security priority in the US and China, there are already agreements in place to keep negative attitudes about AI from becoming mainstream or spreading among AI workers, regardless of whether the threat comes from an unexpected source like EA or an expected one, like Russian bots on social media (which may even have already used AI safety concepts to target Western AI industry workers in the past).
In that scenario, social media and news outlets will generally remain critical of EA, and staying out of the spotlight and focusing on in-person communications would have been a better option than triggering escalating media criticism of EA. We live in the post-truth era, after all, and reality is allowed to do this sort of thing to you.
I enjoyed the new intro article, especially the focus on solutions. Some nitpicks:
This is really exciting! I’m glad there are so many talented people on the case, and hope the good news will only grow from here :)
This is wonderful news!
A couple of comments on the new intro to EA article:
Superb.
The new intro article is excellent. I appreciate all the work that must have gone into it. Reading with a critical eye, there were a few things I might consider changing. I'll post each below as a reply to this so people can upvote/downvote agreement or disagreement.
This sentence doesn't quite make sense to me. When I get to "such as", I expect an example of a career to follow. Maybe "...such as those recommended by 80,000 Hours..."
"or by finding ways to use their existing skills..." doesn't seem to quite work either. Why isn't someone who chooses a career based on 80k advice also using their existing skills?
Lastly, I think you mean "to contribute to solutions to these problems", obviously not to contribute to the problems themselves :)
From the "What resources have inspired people to get involved with effective altruism in the past?" FAQ, I think the above is missing the word "altruism." It seems like it should be "...get involved in effective altruism (but don’t..."
The use of the word "give" in these two paragraphs makes me worry people will interpret it as exclusively giving money. In the first paragraph, you've also gotten a little far down the page from the four values by this point. Perhaps this could be simplified to "...no matter how much you contribute, you try to make your efforts as effective as possible."
And in the second paragraph,
"...using whatever resources (time, money, etc.) you are willing to give"
I think this is put very eloquently in the "What is the definition of effective altruism?" FAQ below: "Effective altruism, defined in this way, doesn’t say anything about how much someone should give. What matters is that they use the time and money they want to give as effectively as possible."
This doesn't seem like it should be a bullet point (maybe just a sentence that follows) since it is not a way people apply the ideas in their lives.
Would "using research from GiveWell or taking the Giving What We Can pledge." make more sense? Does GWWC do their own charity research?
It's not really clear what "different" refers to here. Different than what?
I love that Impossible Burgers exist and that they reduce meat consumption. I even think they taste fine, but they do not taste much like meat to me. I am sure they do to some people, but I would say this is not a fact and should not be stated like it's a fact. It might seem like a small point, but when I imagine being introduced to new ideas by an article like this, small details that seem wrong to me can really reduce how credible I find the rest of it. I think something as simple as "approaches the taste and texture of meat" would resolve my issue with it.
Repeating "human-compatible" feels a bit weird/redundant here.
Referring to an AI system as a "being" might be a bit alienating or confusing to people coming to EA for the first time, and the background discussion/explanation seems a bit out of scope for an intro article.
Minor point, but using the omicron variant as an example might seem dated once we get to the next variant or two in six months or a year. Perhaps measles would be a better choice?
My understanding is that EA's origins are a bit broader than just Oxford, and this sort of gives the impression that they aren't. It also might be off-putting to some, depending on their views of academia (though others might be impressed by it). The word "formalized" gives the impression that things are more set in stone than they are, and feels a bit contradictory to the truthseeking/updating-beliefs stuff.
I didn't scrutinize it closely, but at a high level, the new intro article is the best I've seen yet for EA. Very pleased to see it!