I was reading Lifeblood by Alex Perry (it tells the story of malaria bed nets). The book opens by criticizing a lot of aid organizations: Perry argues that the aim of aid should be "for the day it's no longer needed". E.g., the goal of the Canadian Cancer Society should be to aim for the day when cancer research is unnecessary because we've already figured out how to beat it. What aid organizations actually do, however, is expand to fill a whole range of other needs, which is somewhat suboptimal.
In this sense, EA is really no exception. Suppose that in the future we've tackled global poverty, animal welfare, and climate change/AI risk/etc. We would just move on to the next most important thing in EA. Of course, EA differs from classical aid organizations because it's closer to a movement/philosophy than to a single aid effort. Nevertheless, I still think it might be useful to define "winning" as "alleviating a need for something". This could be something like "to reach the day when we no longer need to support GiveDirectly [because we've already eliminated poverty/destitution, or because we've reached a level of wealth redistribution such that nobody is living below X dollars a year]."
I find a lot of value in academic papers, especially in STEM fields, so I'm going to spend some time outlining my defense of writing them. That said, I'm not necessarily looking at them as a "discussion forum for EAs". I think there are many reasons why academic papers can be useful to Effective Altruism even if they're not directly geared towards promoting EA-type ideas (though they certainly can be). Specifically, I think they're hugely important in research, and not just for helping secure a research-type job.
Academic papers are, in general, more mathematically rigorous than books. This is not to say that books can never be as rigorous as academic papers, but the overall trend is that books summarize or present information in layman's terms, whereas academic papers lay out detailed, data-heavy analyses. Thus, academic papers may or may not be good for discourse, depending on the audience's interests. Take Kahneman's books on cognitive psychology/behavioural economics: although his books are probably more popular with the general public than his papers, his books reference his papers. It would be folly to try to publish his experimental results as a book (where would you put the results and supplementary materials?).
Academic papers also take less time/energy to write than books, especially for incremental research. For instance, if you're trying to demonstrate the efficacy of a particular anti-malarial drug, it's much less time-consuming to publish the results in a paper than to write a book about them.
Most importantly, academic papers are peer-reviewed. You can trust that papers published in reputable journals have the support of experts in the same field. You really can't say the same for books. Take the large number of books that exist in support of intelligent design: none of them would hold up in any reputable genetics or biology journal. The reputability of a book often depends on the number of citations it makes to journal papers or other material. (Again, this is not to say that books can never be reputable--you just have less confidence about whether a given one is.)
I'm more curious about specific cases of where these book-vs-journal-paper questions arise.
This is really interesting. I have a lot of thoughts about this (most of which I might elaborate on in a separate post), but I'll post a quick summary here.
I think one of the biggest challenges to spreading EA right now is that we rely too much on word of mouth. It would be better to have some sort of centralized social media presence or infrastructure that we can use to share EA more broadly. EA.com is a very good one, but even it requires a lot of reading, and people might be put off by the amount of learning/background info needed to get involved.
In Kahneman's "Thinking, Fast and Slow", he repeatedly points out that the main way most people decide to donate to charities is by intensity matching: they look at a charity (e.g., saving dolphins), quickly assess how strongly they feel about the cause ("yeah, I care a bit about saving dolphins"), and then match a dollar amount to that feeling ("sure, I'll donate $50 to that"). Most charities draw on a person's System 1.
EA, of course, does not do this. EA's core tenet is basically to draw upon someone's System 2, and as a result it almost excludes people who donate to charities based on S1. Perhaps one could argue that since we want EA to teach people to actively use their S2 to evaluate charities, not catering to S1-centric charity evaluators might be desirable. However, I think the combination of:
a) widely promoting S1-type ad campaigns, while
b) still allowing people to access S2-type information
would generate a lot more money and involvement than an S2-only hard sell. This is not too different from EAs who say "Yes, I'll donate to GiveWell's top charities [even though I don't want to spend the time to actually learn why these charities are effective myself]".
tl;dr: To actively promote EA and reach wider audiences, we should be doing more to produce the heart-wrenching sob-story ad campaigns (YouTube videos, etc.) that other big-name charities do, and only really promote the analysis to people who have an interest in hearing it.
I'm starting a PhD in Bioengineering soon, so my question mainly relates to academia. Are there any specific benefits that academic collaborations could provide the EA movement that currently aren't available? How can we encourage researchers to join the EA movement without making it seem as though we might be condemning some of their research for being too low-impact?
I'm actively choosing not to go out with people whom I don't find particularly interesting or fun (i.e., people in the "they're nice" category, but who aren't really interested in the type of discussion I want to have, are really judgemental/cynical about trying new things, etc.). Before, I'd feel like I needed to be nice and make friends with everybody or I'd be a mean person, but as my social circle has expanded and the number of things I've wanted to do has increased, I've become more selective.
Oddly, this has actually made me enjoy meeting new people much more. I'm always willing to give someone I haven't met the benefit of the doubt that we could have a really good conversation or connection--but I'm not too disappointed, and don't feel "guilted" into spending time with them, if we don't.