Great piece :) Nitpick:
Quote 1:
"When things feel particularly bleak, I sometimes tell myself that even if I had the time and energy to try to make the world better, I’d probably fail.
Effective altruists try anyway. They know it’s impossible to take the care you feel for one human and scale it up by a thousand, or a million, or a billion."
Quote 2:
"We could really make things very good in the future,” he tells me. “Imagine your very best days. You could have a life that is as good as that, 100 times over, 1,000 times over."
(highlights mine)
At face value, a question comes up: if it is impossible to scale the care you feel by a factor of 1,000 or more, why would it be possible to have a life that is 1,000 times as good as your "very best days"? Wouldn't that max out at some point too?
There is some nuance to both of these quotes, which removes the conflict somewhat:
1. The first quote is about your "care-o-meter" (as in the linked essay), while the second is about the "goodness" of a life in general. The word "imagine" suggests the latter quote is about feeling as good as you do on your best days, times 1,000; however, "imagine" can also mean other things (you can think your best day was the one when you donated to rescue 1,000 birds, which does not necessarily feel much different from saving one; here the "goodness" factor comes from something other than subjective wellbeing).
2. Perhaps it's about having 1,000 times more "very best days", or 500 times more days that are subjectively twice as good as your current best, or some other combination.
3. Perhaps there are limits to the "care-o-meter" but not to how we perceive subjective wellbeing; the two scales don't necessarily have the same limits or the same progression patterns. (Is that even the question one should be asking? Do these scales actually work that way in the first place?)
Obviously it's hard to fit all these caveats into a single quote in an introductory press article, so it's nobody's fault, but it's still an interesting conundrum.
It would probably be worthwhile to cross-reference your post with sources such as:
https://www.centreforeffectivealtruism.org/ceas-guiding-principles
https://resources.eagroups.org/running-a-group/communicating-about-ea/what-to-say-pitch-guide
These sources seem to encapsulate the key claims of EA nicely, so the points raised there could serve as additional inputs for your analysis, or perhaps clarify some things (I haven't thought about it much, just dropping the links).
A) Covid has tangibly demonstrated to many people how a disease can get out of hand, and biorisk is one of the most severe x-risks. Maybe playing up that angle would be beneficial? Something along the lines of "the pandemic is not over, yet we already need to think about how to safeguard ourselves, and future generations, against the next pandemic and other x-risks". Such a message could open a lot of doors to podcasts/newsletters/newspapers etc. Of course, it would have to be crafted carefully and sensitively to avoid accusations of profiting from the tragedy of Covid.
B) As for websites to pitch the book to:
technologyreview.com (having the book featured in their daily newsletter "The Download" would surely be something)
aeon.co
wired.com
thebulletin.org
quantamagazine.org
scientificamerican.com
futurity.org
vox.com
theverge.com
project-syndicate.org
vice.com
axios.com
clearerthinking.org
1. Will MacAskill mentions that "What We Owe The Future" is somewhat complementary to "The Precipice". What can we expect to learn from "WWOTF" having previously read "The Precipice"?
2. How would Will go about estimating the discount rate for the future? We shouldn't discriminate against the future "just because", but we still need some estimate of the discount rate, because:
a) there are reasons for applying a discount rate other than discrimination, e.g. the "possibility of extinction, expropriation, value drift, or changes in philanthropic opportunities" (see https://forum.effectivealtruism.org/posts/3QhcSxHTz2F7xxXdY/estimating-the-philanthropic-discount-rate#Significance_of_mis_estimating_the_discount_rate );
b) not applying any discount rate makes all current charity etc. negligibly effective compared to working towards a better future, e.g. by virtue of the future containing a much, much greater number of moral agents for whom we can safeguard it (people and animals, but perhaps also AIs/robots or some post-human or trans-human species). Having no discount rate at all would completely de-prioritize current charity, which many EAs would not agree with (see the toy sketch after this question).
In other words: how do we divide our resources (time, attention, money, career etc.) between short-term and long-term causes?
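To make (b) concrete, here is a minimal toy sketch in Python. The numbers (a 10,000-year horizon, a constant annual discount rate standing in for extinction/expropriation risk) are my own illustrative assumptions, not anything from the book or the linked post:

```python
# Toy model: total discounted value of a stream of 1 "util" per year.
# With r = 0 the sum grows without bound as the horizon grows, so the
# far future swamps any present-day work; even a tiny positive r caps
# the sum near 1/r, keeping short-term causes competitive.

def discounted_total(annual_value: float, r: float, horizon: int) -> float:
    """Sum annual_value over `horizon` years, discounting by rate r per year."""
    return sum(annual_value * (1.0 - r) ** t for t in range(horizon))

HORIZON = 10_000  # years; an arbitrary stand-in for the "deep future"
for r in (0.0, 0.001, 0.01):
    total = discounted_total(1.0, r, HORIZON)
    print(f"r = {r:5.3f}: total discounted value ~ {total:10,.0f}")

# r = 0.000 -> ~10,000 (scales with the horizon: the far future dominates)
# r = 0.001 -> ~1,000  (capped near 1/r, regardless of horizon)
# r = 0.010 -> ~100
```

The point is only that the choice of r does nearly all the work in the short-term vs. long-term split, which is why it seems worth asking Will how he would pin it down.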
3. What are the possible criticisms the book could receive, both from within and from outside the EA community?
4. To what extent will the book discuss value shift/drift? It seems an interesting topic, which also appears not to be discussed very extensively in other EA sources.
5. What comes next after "WWOTF"? If another book, what will it be about?
6. What is Will's stance on the war in Ukraine? How does it contribute to x-risks and s-risks, and how can it influence the future (incl. the deep future)? It appears to be one of the first major conflicts involving, to an extent unseen earlier, technologies such as social media (for shaping public opinion, organizing), cyberwarfare, AI (e.g. for analyzing open-source intelligence, face recognition), renewable energy sources (touted as an alternative to dependence on Russian fossil fuels), etc.
Potential issue: desertion is made deliberately hard in most militaries by creating conditions akin to the Prisoner's Dilemma or the Tragedy of the Commons: what's rational for the group to do (desert en masse) is very risky and irrational for an individual to attempt alone (any one soldier trying to desert by themselves risks getting caught and executed); see the toy sketch below.
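A minimal sketch of that coordination structure, with made-up payoff numbers of my own: deserting only becomes individually rational once enough others desert too.

```python
# Toy payoff model of the desertion coordination problem. All numbers
# are illustrative assumptions: a lone deserter is very likely caught
# (and severely punished), but the risk falls as more comrades desert.

STAY_PAYOFF = 0.0  # baseline: remain in the ranks

def desert_payoff(frac_others_deserting: float,
                  p_caught_alone: float = 0.9,
                  penalty: float = -100.0,
                  freedom: float = 10.0) -> float:
    """Expected payoff of deserting, given the fraction of comrades who also desert."""
    p_caught = p_caught_alone * (1.0 - frac_others_deserting)
    return p_caught * penalty + (1.0 - p_caught) * freedom

for frac in (0.0, 0.5, 1.0):
    ep = desert_payoff(frac)
    choice = "desert" if ep > STAY_PAYOFF else "stay"
    print(f"{frac:4.0%} of others desert -> expected payoff {ep:6.1f} -> {choice}")

#   0% of others desert -> -89.0 -> stay   (lone desertion is irrational)
#  50% of others desert -> -39.5 -> stay
# 100% of others desert ->  10.0 -> desert (safe together, but hard to coordinate)
```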
In the Russian military the case is even harder: most Russian soldiers probably have families back in Russia, and it's very likely that deserters' families would be harassed, given the many human rights violations already going on there.
Case in point: https://www.nytimes.com/2006/08/13/world/europe/13hazing.html describes a Russian soldier who lost his legs to brutal (peacetime) hazing; his family was pressured with bribes to drop the charges against the army (they didn't). It's not hard to imagine similar, if more brutal, pressure being put on the families of deserters.
Good idea on creating a database. One misleading article (with a rebuttal from community members) is here: https://forum.effectivealtruism.org/posts/Fm4vAtKoH4nBCzsoQ/linkpost-a-response-to-rebecca-ackermann-s-inside-effective