I took a look at the Goodreads data. Unfortunately it’s pretty messy and I don’t think it’ll be much help in understanding the popularity of the new book. Goodreads does distinguish between the different editions, but it’s clear from reading the reviews that some people are reviewing the old book but talking about the new one. And the interface won’t let me see total reviews across all copies by year, so we can’t see if that number has spiked.
Looking past the Goodreads data, I think it’s safe to say that the launch of the new edition was a success. TLYCS is on a much improved growth trajectory (including moving a record $18.2 million in 2020), new donors are telling us that they found us through the book (more so than prior to the relaunch), other organizations like GWWC are also seeing improvements they can trace to the book, etc. Some of this is discussed in TLYCS’s 2020 annual report, which just came out.
Of course some of this impact comes simply from releasing a new book, but I’m very confident that making it free helped a lot. It’s hard to believe that at least some of the people driving that impact weren’t enticed by the free offer, or wouldn’t have been put off by having to pay. “Read this book” is an easier pitch than “buy this book.”
Owning the rights also gives us a lot of flexibility we wouldn’t have if we had gone the normal publishing route. We can cut the content into pieces to distribute via various channels, do translations on our own schedule, make future updates and additions to the ebook, etc.
My take is that owning the rights to foundational books or other IP has a lot of potential for EA. And for anyone considering this, I'd say the earlier you figure out that you'll be giving the book away for free, the better: you'll need a distribution plan and may want to shape some of the content accordingly.
I asked my contact at Fugue Foundation about this, and here's their response:
We list information about our corporate structure on our website at the link below. We are incorporated in Arizona and await word from the IRS regarding our 501c3 application. Indeed, as a private foundation, and one that specifically lists privacy protection as one of our core principles, we do try to maintain a certain distance with online identities. I will certainly speak with any of the organizations that are selected to receive the grant.
Thanks to everyone who voted and commented! It was helpful to learn more about how EAs think about multiplier orgs, and I hope it was helpful to hear my perspective from inside one of those orgs.
Here are my biggest takeaways from the discussion; apologies that it took me so long to post this:
Outcomes I’d like to see going forward:
I’d love to see someone write up an overview of the multiplier space, similar to Larks’ annual AI Alignment Literature Review and Charity Comparison. Consolidating information would make it much easier for donors to engage with the space. Something as simple as a list of organizations with a few sentences about their work, their multiplier data, and links to more info would go a long way. (Ideally this would be done by someone who doesn’t work at a multiplier org; I’ll post this as a volunteer project on EA Work Club.)
I’d hope that overview would encourage more EAs to dip their toes in the water by making a small donation to one or more multiplier orgs and/or subscribing to their mailing lists (I just did this to put some skin in the game). This is less about the actual money and more about making it more likely you’ll stay informed about their work going forward. The more you do that, the better positioned you’ll be to make your own informed decision about whether their model is working.
A final note… While I’d love to see more people donating to multiplier orgs, I’d hate to see donors naively giving to whichever organization has the highest multiplier, or otherwise incentivizing multiplier orgs to maximize their short-term multiplier. Ideally, both donors and organizations will prioritize strategies that maximize long-run impact, and will judge that impact by its magnitude (money moved – expenses) rather than its efficiency (money moved / expenses), as sketched below. For donors, I’d recommend asking 1) “do I believe in the strategy?” and 2) “do I believe the team can execute the strategy?”
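To make the magnitude-vs-efficiency distinction concrete, here’s a minimal sketch in Python. The two orgs and their numbers are entirely made up for illustration; the point is just how the two metrics can rank the same organizations differently.

```python
# Hypothetical organizations with made-up numbers, purely for illustration.
orgs = {
    "Org A (small, efficient)": {"money_moved": 1_000_000, "expenses": 100_000},
    "Org B (large, less efficient)": {"money_moved": 10_000_000, "expenses": 2_000_000},
}

for name, o in orgs.items():
    magnitude = o["money_moved"] - o["expenses"]     # money moved minus expenses
    multiplier = o["money_moved"] / o["expenses"]    # money moved divided by expenses
    print(f"{name}: magnitude ${magnitude:,}, multiplier {multiplier:.1f}x")

# Org A has the higher multiplier (10x vs 5x), but Org B accounts for far more
# net money moved ($8.0M vs $0.9M) -- the quantity to maximize in the long run.
```

On these (made-up) numbers, a donor chasing the highest multiplier would pick Org A even though Org B moves roughly nine times as much net money, which is the failure mode described above.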
There's also an EA classic available as an audiobook: The Life You Can Save (the fully updated 10th anniversary edition) is freely available in audiobook and ebook formats, making it an easy book to share with people you think might find it interesting.
Search engine optimization.
Based on my very limited understanding, links are critical for SEO (though not as important as they were a few years ago). So conventions like “EA blogs should generally have blogrolls (i.e. lists of links to related blogs)” or “references to organizations (e.g. AMF) on the EA Forum should generally link to them” would probably help the entire community.
These sophisticated donors’ support for such a wide range of multiplier orgs suggests there could be a lot of leverage to be had. If that’s true, it also has some interesting implications for the “it’s hard to get a job at an EA org” discussion that’s been going on for a while, most recently here.
Here's a simplified thought experiment. Let’s say you invested $1 million in the orgs listed above, allocated proportionally to their current size (not all that far off from what the infrastructure fund has actually done, but we’ll use stylized numbers to keep the math simple). Salaries are typically the biggest expense for multiplier orgs, so let’s say $800k flows through to hiring new people. Assume $100k/year per new person, and that’s 8 new hires. If 75% of those jobs go to people in the EA community, that’s 6 EAs getting the sort of jobs that are immensely desirable and immensely scarce.
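For anyone who wants the arithmetic spelled out, here’s the same thought experiment as a quick Python sketch. The 80% salary share, $100k per hire, and 75% EA fraction are the stylized assumptions from above, not real data.

```python
# Stylized assumptions from the thought experiment above -- not real data.
donation = 1_000_000       # hypothetical $1M spread across multiplier orgs
salary_share = 0.8         # assume ~80% of marginal funding goes to new hires
cost_per_hire = 100_000    # assume $100k/year fully loaded per new person
ea_fraction = 0.75         # assume 75% of new roles go to people in the EA community

new_hires = int(donation * salary_share / cost_per_hire)  # 8 new hires
ea_hires = int(new_hires * ea_fraction)                   # 6 of them from the EA community

print(f"{new_hires} new hires, roughly {ea_hires} going to EAs")
```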
If the multiplier model really works, $1 million will be a small fraction of what’s needed to build a flourishing ecosystem of multiplier orgs with models spanning research (e.g. GiveWell, ACE), fundraising for targeted causes (e.g. TLYCS, OFTW), fundraising for targeted donors (e.g. Founders Pledge, REG), and country-level organizations that provide tax-deductible giving (e.g. RC Forward, EA Netherlands). If you built that ecosystem, you’d quickly create dozens of new roles. So if the multiplier model works at scale, you’d move a ton of incremental money while also making real headway on the scarcity of EA jobs. (To be clear, I don’t think we should fund multiplier orgs so that EAs can get the jobs they want; I’m just saying that would be a nice added benefit if the multiplier model works, and another reason to investigate whether it does.)
Thanks for doing this; I hadn't thought to use the Wayback Machine. Really cool to see the quantitative perspective line up with our qualitative impressions!