Reviews 2020

This straightforwardly got the novel coronavirus (now "COVID-19") on the radar of many EAs who were otherwise only vaguely aware of it, or thought it was another media panic, like bird flu.

The post also illustrates some of the key strengths and interests of effective altruism, like quantification, forecasting, and the ability to separate minor global events from bigger ones.

Eight years later, I still think this post is basically correct. My argument is more plausible the more one expects a lot of parts of society to play a role in shaping how the future unfolds. If one believes that a small group of people (who can be identified in advance and who aren't already extremely well known) will have dramatically more influence over the future than most other parts of the world, then we might expect somewhat larger differences in cost-effectiveness.

One thing people sometimes forget about my point is that I'm not making any claims ab... (read more)

For me, and I have heard this from many other people in EA, this has been a deeply touching essay and is among the best short statements of the core of EA.

 

This was the single most valuable piece on the Forum to me personally. It provides the only end-to-end model of risks from nuclear winter that I've seen and gave me an understanding of key mechanisms of risks from nuclear weapons. I endorse it as the best starting point I know of for thinking seriously about such mechanisms. I wrote what impressed me most here and my main criticism of the original model here (taken into account in the current version).

This piece is part of a series. I found most articles in the series highly informative, but this particula... (read more)

I think this is one of the best pieces of EA creative writing of all time.

Since writing this post, I have benefited both from 4 years of hindsight and from significantly more grantmaking experience, with just over a year at the Long-Term Future Fund. My main updates:

  • Exploit networks: I think small individual donors are often best off donating to people in their network that larger donors don't have access to. In particular, I'm ~70% confident it would have been better for me to wait 1-3 years and donate the money to opportunities as and when they came up. For example, there have been a few cases where something would help CHAI but c
... (read more)

Stuff I'd change if I were rewriting this now:

  • not include the reference to "youngish" EAs wanting to govern everything by cost-effectiveness. I think it's more a result of being new to the idea than of being young.
  • make clearer that I do think significant resources should go toward improving the world. Without context, I don't think that's clear from this post.

This post is pushing against a kind of extremism, but it might push in the wrong direction for some people who aren't devoting many resources to altruism. It's not that I think people in general should be... (read more)

Excellent and underrated post. I actually told Greg a few years ago that this has become part of my cognitive toolkit and that I use it often (I think there are similarities to the Tinbergen Rule - a basic principle of effective policy, which states that to achieve n independent policy targets you need at least n independent policy instruments).
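
As a minimal illustration of the Tinbergen point (my own worked example, not from Greg's post or Tinbergen): with one instrument $x$ and two targets $T_1 = x$, $T_2 = 2x$, choosing $x$ to hit $T_1 = 1$ forces $T_2 = 2$, so the second target cannot be set independently. With two instruments, e.g.
\[
T_1 = x + y, \qquad T_2 = x - y,
\]
any pair $(T_1, T_2)$ is achievable via $x = (T_1 + T_2)/2$ and $y = (T_1 - T_2)/2$.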

This tool actually caused me to deprioritize crowdfunding with Let's Fund, which I realized was doing a multi-objective optimization problem (moving money to effective causes and doing re... (read more)

The most common critique of effective altruism that I encounter is the following: it’s not fair to choose. Many people see a fundamental unfairness in prioritizing the needs of some over the needs of others. Such critics ask: who are we to decide whose need is most urgent? I hear this critique from some on the left who prefer mutual aid or a giving-when-asked approach; from some who prefer to give locally; and from some who are simply uneasy about the idea of choosing. 

To this, I inevitably reply that we are always choosing. When we give money only to... (read more)

I would like to suggest that Logarithmic Scales of Pleasure and Pain (“Log Scales” from here on out) presents a novel, meaningful, and non-trivial contribution to the field of Effective Altruism. It is novel because even though the terribleness of extreme suffering has been discussed multiple times before, such discussions have not presented a method or conceptual scheme with which to compare extreme suffering relative to less extreme varieties. It is meaningful because it articulates the essence of an intuition of an aspect of life that deeply matters to ... (read more)

There are many reasons why I think this post is good:

  • This post has been personally helpful to me in exploring EA and becoming familiar with the arguments for different areas.
  • Having resources like this also contributes to the "neutrality" and "big-tent-ness" of the Effective Altruism movement (which I think are some of the most promising elements of EA), and helps fight against the natural forces of inertia that help entrench a few cause areas as dominant simply because they were identified early.
  • Honestly, having a "Big List" that just neutrally presents ot
... (read more)

Key points

  • We (the effective altruism community) probably want a community centred around values/principles/axioms rather than conclusions.
  • This post is a great starting point for value-centric discussions instead of conclusion-centric ones.
    • It communicates that asking the question honestly and then acting on its answer is more important than the specific current conclusions
    • It is very accessible so newcomers can easily understand it (having this be articulated for people new to effective altruism helps make this a central part of the effective altruism movement in
... (read more)
  • This post introduced the concept of Task Y, which I have found a very helpful framing for thinking about ways to engage community members at scale, which to me is a very important question for movement sustainability.
  • I think it's unlikely the Task Y concept would have been popularized in EA community building without this post. 
  • I think the motivation to find Task Y asks the question in the correct direction, even though I think the Task Y framing is not useful beyond a point. But it seems like an important first step.
  • My current thinking
... (read more)

I think this post contributes something novel, nontrivial, and important in how EA should relate to economic growth, "Progress Studies," and the like. Especially interesting/cool is how this post entirely predates the field of progress studies.

I think this post has stood the test of time well. 

For a long time, I've believed in the importance of not being alarmist. My immediate reaction to almost anybody who warns me of impending doom is: "I doubt it". And sometimes, "Do you want to bet?"

So, writing this post was a very difficult thing for me to do. On an object level, I realized that the evidence coming out of Wuhan looked very concerning. The more I looked into it, the more I thought, "This really seems like something someone should be ringing the alarm bells about." But for a while, very few people were predicting anything big on respectable f... (read more)

Longtermism and animal advocacy are often presented as mutually exclusive focus areas. This is strange, as they are defined along different dimensions: longtermism is defined by the temporal scope of effects, while animal advocacy is defined by whose interests we focus on. Of course, one could argue that animal interests are negligible once we consider the very long-term future, but my main issue is that this argument is rarely made explicit.

This post does a great job of emphasizing ways in which animal advocacy should inform our efforts to improve the ver... (read more)

As I write this nomination, Holden Karnofsky has recently written about "Minimal Trust Investigations" (124 upvotes), which are similar to Epistemic Spot Checks. This post is an example of such a minimal trust investigation.

The reasons I am nominating this post are:

  • It seems to me that Guzey was right on several object-level points
  • The EA community failed both Guzey and itself in a variety of ways, but chiefly by not rewarding good criticism that bites.

That said, as other commenters point out, the post could perhaps use a re-write. Perhaps this decade review would be a good t... (read more)

As an employer, I still think about this post three years after it was published, and I regularly hear it referenced in conversations about hiring in EA. The experiences in it clearly resonated with a lot of people, as evidenced by the number of comments and upvotes. I think it's meaningfully influenced the frame of many hiring rounds at EA organizations over the past three years.

Cool Earth was the EA community's default response to anyone who wanted to donate to climate change for years, without particularly good reason. Sanjay's work overturned that recommendation and shortly after more rigorous recommendations were published.

Disclaimer: this is an edited version of a much harsher review I wrote at first. I have no connection to the authors of the study or to their fields of expertise, but am someone who enjoyed the paper here critiqued and in fact think it very nice and very conservative in terms of its numbers (the current post claims the opposite). I disagree with this post and think it is wrong in an obvious and fundamental way, and therefore should not be in the decade review, in the interest of not posting wrong science. At the same time it is well-written and exhibits a good ... (read more)

Summary: I think the post mostly holds up. The post provided a number of significant, actionable findings, which have since been replicated in the most recent EA Survey and in OpenPhil’s report. We’ve also been able to extend the findings in a variety of ways since then. There was also one part of the post that I don’t think holds up, which I’ll discuss in more detail.

The post highlighted (among other things):

  • People first hear about EA from a fairly wide variety of sources, rather than a small number dominating. Even the largest source, personal contacts,
... (read more)

This was one of many posts I read as I was first getting into meta EA, and it was pretty influential on how I think about things. It was useful in a few different ways:


1. Contextualising a lot of the other posts that were published around the same time, written in response to the "It's hard to get an EA job" post

2. Providing a concrete model of action, with lots of examples of how to implement a hierarchical structure

3. I've seen the basic argument for more management made many times over the last few years in various specific contexts. W... (read more)

I come back to this post quite frequently when considering whether to prioritize MCE (moral circle expansion, via animal advocacy) or AI safety. It seems that these two cause areas often attract quite different people with quite different objectives, so this post is unique in its attempt to compare the two based on the same long-term considerations.

I especially like the discussion of bias. Although some might find the whole discussion a bit ad hominem, I think people in EA should take seriously the worry that certain features common in the EA community (e.g., an attraction towards abstract puzzles) might bias us towards particular cause areas.

I recommend this post for anyone interested in thinking more broadly about longtermism.

This post significantly adds to the conversation in Effective Altruism about how pain is distributed. As explained in the review of Log Scales, understanding that intense pain follows a long-tail distribution significantly changes the effectiveness landscape for possible altruistic interventions. In particular, this analysis shows that finding the top 5% of people who suffer the most from a given medical condition and treating them as the priority will allow us to target a very large fraction of the total pain such a condition generates. In the case of clus... (read more)
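
As a toy illustration of the long-tail point above (my own sketch, not from the post): sampling pain intensities from a hypothetical lognormal distribution shows how the worst-off 5% can account for most of the total. The lognormal form and its parameters are assumptions chosen purely for illustration.

```python
import numpy as np

# Toy illustration: if pain intensity follows a long-tailed (here lognormal)
# distribution, the worst-off 5% account for a large share of the total.
# The sigma=2.0 spread is an assumption chosen for illustration only.
rng = np.random.default_rng(seed=0)
pain = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

worst_5_percent = np.sort(pain)[-5_000:]  # top 5% of 100,000 samples
share = worst_5_percent.sum() / pain.sum()
print(f"Worst-off 5% account for {share:.0%} of total pain intensity")
# With sigma=2.0 this comes out to roughly 60-65% of the total.
```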

This was useful pushback on the details of a claim that is technically true, and was frequently cited at one point, but that isn't as representative of reality as it sounds.

EAs talk a lot about value alignment and try to identify people who are aligned with them. I do, too. But this is also funny at a global level, given that we don't understand our values and aren't very sure about how to understand them much better, reliably. Zoe's post highlights that it's too early to double down on our current best guesses and that more diversification is needed to cover more of the vast search space.

(My thanks to the post authors, velutvulpes and juliakarbing, for transcribing and adding a talk to the EA Forum; the comments below refer to the contents of the talk.)

I gave this a decade review downvote and wanted to set out why. 

 

Reinventing the wheel

I think this is on the whole a decent talk that sets out an individual's personal journey through EA and working out how they can do the most good.

However, I think the talk involves some amount of "reinventing the wheel" (ignoring existing research and attempting to duplicate it).

In the ... (read more)

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

Might be one of the best intros to EA?

I think the post made some important but underappreciated arguments at the time, especially for high-stakes countries with more cultural differences, such as China, Russia, and Arabic-speaking countries. I might have been too negative about expanding into smaller countries that are culturally closer. I think it had some influence too, since people still often ask me about it.

One aspect I wish I'd emphasised more is that it's very important to expand to new languages – my main point was that the way we should do it is by building a capable, native-language ... (read more)

I think of this post often - the pattern comes up in so many areas.

I see Gwern's/Aaron's post about The Narrowing Circle as part of an important thread in EA devoted to understanding the causes of moral change.  By probing the limits of the "expanding circle" idea for counterexamples, perhaps we can understand it better.

Effective altruism is popular among moral philosophers, and EAs are often seeking to expand people's "moral circle of concern" towards currently neglected classes of beings, like nonhuman animals or potential future generations of mankind.  This is a laudable goal (and one which I share), but it'... (read more)

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important, but which I don't have time to re-read or say very nuanced things about.]

I think this post makes good points on a really important topic. 

I also think it's part of maybe the best public debate on an important topic in EA (with Will's piece). I would like to see more such debate, so I feel good about including this in the review to signal-boost it.

Overall, I think this is worth including in the review.

This is the most foundational, influential, and classic article in effective altruism. It's fitting that the Review coincides with this article's 50th anniversary. This article defends and discusses the proposition

If it is in our power to prevent something very bad from happening, without thereby sacrificing anything else morally significant, we ought, morally, to do it.

This proposition sounds quite reasonable and simultaneously has broad implications. Singer continues:

The uncontroversial appearance of the principle just stated is deceptive. If it we

... (read more)

I'm surprised to see that this post hasn't yet been reviewed. In my opinion, it embodies many of the attributes I like to see in EA reports, including reasoning transparency, intellectual rigor, good scholarship, and focus on an important and neglected topic.

IIRC a lot of people liked this post at the time, but I don't think the critiques stood up well. Looking back 7 years later, I think the critique that Jacob Steinhardt wrote in response (which is not on the EA forum for some reason?) did a much better job of identifying more real and persistent problems:

  • Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities.
  • Over-confident claims coupled with insufficient background research.
  • Over-reliance on a small set o
... (read more)

This post is a really good example of an EA organisation being open and clear to the community about what it will and will not do.

I still have disagreements about the direction taken (see the top comment of the post), but I often think back to this post when I think about being transparent about the work I am doing. Overall, I think it is great for EA orgs to write such posts, and I wish more groups would do so.

When I wrote this post back in 2014, the effective altruism movement was very different than it is today, and I think this post was taken more seriously than I wanted it to be. Generally speaking, at the time the EA movement was not yet taken seriously, and I think it needed to appear a bit more "normal", appeal more to mainstream sensibilities, and get credibility / traction. But now, in 2022, I think the EA movement unequivocally has credibility / traction and the times now call for "keeping EA weird" - the movement has enough sustaining power no... (read more)

This post takes a well-known story about impact (smallpox eradication), and makes it feel more visceral. The style is maybe a little heavy-handed, but it brought me along emotionally in a way that can be useful in thinking about past successes. I'd like to see somewhat more work like this, possibly on lesser-known successes in a more informative (but still evocative) style.

I considered Evan Williams' paper one of the most important papers in cause prioritization at the time, and I think I still broadly buy this. As I mention in this answer, there are at least 4 points his paper brought up that are nontrivial, interesting, and hard to refute.

If I were to write this summary again, I think I'd be noticeably more opinionated. In particular, a key disagreement I have with him (which I remember having at the time I was making the summary, but which never made it into my notes) is on the importance of the speed of moral progress v... (read more)

This topic seems even more relevant today compared to 2019 when I wrote it. At EAG London I saw an explosion of initiatives and there is even more money that isn't being spent. I've also seen an increase in attention that EA is giving to this problem, both from the leadership and on the forum. 

Increase fidelity for better delegation

In 2021 I still like to frame this as a principal-agent problem.

First of all, there's the risk of Goodharting. One prominent grantmaker recounted to me that back when one prominent org was giving out grants, people would jus... (read more)

In How to generate research proposals I sought to help early-career researchers with the daunting task of writing their first research proposal.

Two years after the fact, I think the core of the advice stands very well. The most important points in the post are:

  1. Develop a pipeline to collect ideas as they come to you.
  2. Think about scope (is your question concrete?) and methodology (is your question tractable?).
  3. Devote some time to figuring out what good research looks like.

None of this is particularly original. The value I added is collecting all the advice in a ... (read more)

I thought this post was a very thoughtful reflection on SHIC and what went wrong in approaching high schoolers for EA outreach, which is made all the more interesting given that, as of 2021, high school outreach is now a pretty sizable effort in EA movement building. SHIC was in many ways too ahead of its time. I hope that the lessons learned from SHIC have made current high school outreach attempts more impactful in their execution.

Disclaimer: I am on the board of Rethink Charity which SHIC was a part of at the time of this post, but I am writing this review... (read more)

This post influenced my own career to a non-insignificant extent. I am grateful for its existence, and think it's a great and clear way to think about the problem. As an example, this model of patient spending was the result of me pushing the "get humble" button for a while. This post also stands out to me in that I've come back to it again and again.

If I value this post at 0.5% of my career, which I ¿do? ¿there aren't really 200 posts which have influenced me that much?, it was worth 400 hours of my time, or $4,000 to $40,000 of my money. I probably... (read more)
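
For concreteness, here is the arithmetic the above implies (my own sketch; the 80,000-hour career length and the $10 to $100 per hour valuation are assumptions that reproduce the figures quoted):
\[
0.5\% \times 80{,}000\ \text{hours} = 400\ \text{hours}, \qquad 400 \times \$10 = \$4{,}000, \quad 400 \times \$100 = \$40{,}000.
\]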

  • I think this post makes a very good point in a very important conversation, namely that we can do better than our currently identified best interventions for development.
  • The argument is convincing, and I would like to see both more people working on growth-oriented interventions, and counter-arguments to this. 
  • As a PhD student in economics, I may let this post influence the topic I work on during the dissertation phase. I think most EA economists at the start of their PhD would benefit from reading this.

This post is really approximate and lightly sketched, but at least it says so. Overall I think the numbers are wrong but the argument is sound.

Synthesising responses:

  • Industry is going to be a bigger player in safety, just as it's a bigger player in capabilities.

  • My model could be extremely useful if anyone could replace the headcount with any proxy of productivity on the real problem. Any proxy at all.

  • Doing the bottom up model was one of the most informative parts for me. You can cradle the whole field in the palm of your mind. It is a small and pr

... (read more)

This is my favorite introduction to existential risk. It's loosely written from the perspective of global policy, but it's quite valuable for other approaches to existential risk as well. Topics discussed (with remarkable lucidity) include:

  • Natural vs anthropogenic existential risk
  • Meta-uncertainty
  • Qualitative risk categories
  • Magnitude of our long-term potential
  • Maxipok heuristic
  • Classification of existential risk
  • Option value
  • Sustainable state vs sustainable trajectory
  • Neglectedness of existential risk

I think this post mostly stands up and seems to have been used a fair amount. 

Understanding roughly how large the EA community is seems moderately useful, so I think this analysis falls into the category of 'relatively simple things that are useful to the EA community but which were nevertheless neglected for a long while'.

One thing that I would do differently if I were writing this post again, is that I think I was under-confident about the plausible sampling rates, based on the benchmarks that we took from the community. I think I was understandably un... (read more)

This made me more likely to give non-tax-deductibly, and it gives a useful resource to link to for other people.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

This is a concept that's relatively frequently referred back to, I think? Which seems like a reason to include it.

I think it's pointing to a generally important dynamic in moral debates, though I have some worry that it's a bit in "soldier" mindset, and might be stronger if it also tried to think through the possible strengths of this sort of interpretation. I'm also not sure q... (read more)

The EA forum is one of the key public hubs for EA discourse (alongside, in my opinion, Facebook, Twitter, Reddit, and a couple of blogs). I respect the forum team's work in trying to build better infrastructure for its users.

The EA forum is active in attempting to improve the experience of its users. This makes it easier for me to contribute with things like questions, shortforms, sequences, etc.

I wouldn't say this post provides deep truth, but it seeks to build infrastructure which matches the way EAs are. To me, that's analogous to articles which... (read more)

I think this talk, as well as Ben's subsequent comments on the 80k podcast, serves as a good illustration of the importance of being clear, precise, and explicit when evaluating causes, especially those often supported by relatively vague analogies or arguments with unstated premises. I don't recall how my views about the seriousness of AI safety as a cause area changed in response to watching this, but I do remember feeling that I had a better understanding of the relevant considerations and that I was in a better position to make an informed assessment.

I reviewed this post four months ago, and I continue to stand by that review.

This post, alongside Julia's essay "Cheerfully," are the posts I most often recommend to other EAs.

I think this research into x-risk & economic growth is a good contribution to patient longtermism. I also think that integrating thoughts on economic growth more deeply into EA holds a lot of promise -- maybe models like this one could someday form a kind of "medium-termist" bridge between different cause areas, creating a common prioritization framework. For both of these reasons I think this post is worthy of inclusion in the decadal review.

The question of whether to be for or against economic growth in general is perhaps not the number-on... (read more)

This continues to be one of the most clearly written explanations of a speculative or longtermist intervention that I have ever read.

I think this report is still one of the best and most rigorous investigations into which beings are moral patients. However, in the five years since it was published, it has influenced my thinking less than I had expected in 2017 – basically, few of my practical decisions have hinged on whether or not some being merits moral concern. This is somewhat idiosyncratic, and I wouldn't be surprised if it's had more of an impact on e.g. those who work on invertebrate welfare.

I first listened to Wildness in February 2021. This podcast was my introduction to wild animal welfare, and it made me reconsider my relationship to environmentalism and animal welfare. I've always thought of myself as an environmentalist, but I never really considered what I valued in the environment. But after listening to this, my concern for "nature" became more concrete: I realized that the well-being of individual wild animals was important, and because there could be trillions of sentient wild animals, extremely so. I especially liked the third episode, which asks tough questions about who and what nature is "for."

This post made talking about diversity in EA less of a communication minefield and I am very grateful for that.

This was a very practical post. I return to it from time to time to guide my thinking on what to research next. I suggest it to people to consider. I think about ways to build on the work and develop a database. I think that it may have helped to catalyse a lot of good outcomes.

This post was helpful to me in understanding what I should aim to accomplish with my own personal donations.  I expect that many other EAs feel similarly -- donating is an important part of being an EA for many people, but the question of how to maximize impact as a small-scale individual donor is a complex puzzle when you consider the actions of other donors and the community as a whole.  This post is a clear, early articulation of key themes that show up in the continual debate and discussion that surround real-world individual donation decisio... (read more)

I thought this post was interesting, thoroughly researched, and novel. I don't really recall if I agree with the conclusion, but I remember thinking "here's a great example of what the forum does well - a place for arguments about cause prioritisation that belong nowhere else".

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think a lot of EAs (including me) believe roughly this, and this is one of the best summaries out there.

At the same time, I'm not sure it's a core EA principle, or critical for arguing for any particular cause area, and I hesitate about making it canonical.

But it seems plausible that we should include this.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this is one of the best introductions to the overall EA mindset.

I think this line of thinking has influenced a lot of community building efforts, mostly for better. But it feels a bit inside-baseball: I'm not sure how much I want meta pieces like this highlighted in "best of EA's first decade".

This essay had a large effect on me when I read it early on in my EA journey. It's hard to assign credit, but around the time I read it, I significantly raised my "altruistic ambition", and went from "I should give 10%" to "doing good should be the central organizing principle of my life."

I know many smart people who disagree with me, but I think this argument is basically sound. And it has, for me anyway, formed a healthy voice in my head pushing me towards strong conviction.

Congratulations on a very interesting piece of work, and on the courage to set out ideas on a topic that by its speculative nature will draw significant critique.

It is very positive that you decided on a definition for "civilizational collapse", as this topic is often discussed broadly and loosely, without common terminology and meaning.

A suggested further/side topic for work on civilizational collapse and its consequences is more detailed work on the hothouse earth scenario (runaway climate change leading to 6C+ warming + ocean chemistry change... (read more)

This post highlighted an important problem that would have taken much longer to address otherwise. I would point to this post as an example of how to hold powerful people accountable in a way that is fair and reasonable.

(Disclosure: I worked for CEA when this post was published)

This had a large influence on how I view the strategy of community building for EA.

This was popular, but I'm not sure how useful people found it, and it took a lot of time. I hoped it might become an ongoing feature, but I couldn't find someone able and willing to run it on an ongoing basis.

Most people who know about drugs tend to have an intuitive model of drug tolerance where "what goes up must come down". In this piece, the author shows that this intuitive model is wrong, for drug tolerance can be reversed pharmacologically. This seems extremely important in the context of pain relief: for people who simply have no option but to take opioids to treat their chronic pain, anti-tolerance would be a game-changer. I sincerely believe this will be a paradigm shift in the world of pain management, with a clear before-and-after cultural shift arou... (read more)

I personally know of thousands of extra pounds going to AMF because of this post.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this still might be the best analysis of an important question for longtermism? But it is also a bit in-the-weeds.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

If we include Helen's piece, I think it might be worth including both this and the first comment on this post, to show some other aspects of the picture here. (I think all three pieces are getting at something important, but none of them is a great overall summary.)

  • I really like this post. 
    • It's short
    • It presents two models of community building very clearly
    • The diagrams are super helpful and intuitive  
    • The call to action is very clear & concrete
  • Other posts had previously discussed network building, but this one really stuck with me in a useful way for developing strategy 
  • I think I've linked to/sent this post to at least 5-10 people over the past few years. 
  • When I first read this I found it useful because it presented an alternative model to the dominant one (EA as community or EA as talent pipel
... (read more)

There can never be too many essays recommending that EAs not push themselves past their breaking point. This essay may not be the most potent take on this concept, but since there are bound to be some essays on optimization-at-all-costs among the most-upvoted EA essays, there should be some essays like this one to counterbalance them. For instance, this essay is an expanded take on the concept, but is too new to be eligible for this year's EA Review.

I think that it's generally useful to share a clear paradigm which is useful for non-experts, based on deep knowledge of a subject, and that's what I tried to do here. In this case, I think that the concept and approach are very generally useful, and I would be very excited for more people to think about Value of Information more often, though, as the post notes, mostly informally.

Apparently this post has been nominated for the review! And here I thought almost no one had read it and liked it.

Reading through it again 5 years later, I feel pretty happy with this post. It's clear about what it is and isn't saying (in particular, it explicitly disclaims the argument that meta should get less money), and is careful in its use of arguments (e.g. trap #8 specifically mentions that counterfactuals being hard isn't a trap until you combine it with a bias towards worse counterfactuals). I still agree that all of the traps mentioned here are ... (read more)

I don't have time to write a detailed self-review, but I can say that:

  • I still think this is one of my most useful posts; I think it's pretty obvious that something like this should exist, and I recommend it often, and I see it cited/recommended often.
  • I think it's kind of crazy that nothing very much like this database existed before I made it, especially given how simple making it was.
    • Note that I made this in not a huge amount of time and when still very "junior"; I was within 4 months of posting my first EA Forum post and within 2 years of having first learn
... (read more)

Focusing on tax-deductibility too much can be a trap for everyday donors, including myself. I keep referring to this article to remind my peers or myself of that.

One piece of information is not mentioned: at least in some countries, donating to a non-tax-deductible charity may be subject to gift tax. I recommend that you check whether this applies to you before you donate. But even then, the gift tax can be well worth paying.

This post helped clarify to me which causes ought to be prioritized from a longtermist standpoint. Although we don't know the long-term consequences of our actions (and hence are clueless), we can take steps to reduce our uncertainties and reliably do good over the long term. These include:

... (read more)

This approach seems to have been neglected by GiveWell, and not taken up by others in this space. (I don't have time to write a full review.)

This post seems to have started a conversation on diversity in EA:

... (read more)

I think the approach taken in this post is still good: make the case that extinction risks are too small to ignore and neglected, so that everyone should agree we should invest more in them (whether or not you're into longtermism).

It's similar to the approach taken in The Precipice, though less philosophical and longtermist.

I think it was an impactful post, in that it was 80k's main piece arguing in favour of focusing more on existential risk during a period when the community seems to have significantly shifted towards focusing on those risks, and during ... (read more)

As explained in the review of Log Scales, cluster headaches are some of the most painful experiences people can have in life. If a $5 DMT Vape Pen produced at scale is all it takes to fully take care of the problem for sufferers, this stands to be an Effective Altruist bargain.

In the future, I would love to see more analyses of this sort. Namely, analyses that look at particular highly painful conditions (the "pain points of humanity", as it were), and identify tractable, cost-effective solutions to them. Given the work in this area so far, I expect... (read more)

I thought this post was great for several reasons:
- It generated ideas and interesting discussion about the definition of one of the most important ideas that the community has developed.
- I regularly referred back to it as "the key discussion about what Longtermism means". I expect if Will published this as an academic paper, it would've taken years to come out and there wouldn't be as much public discussion. 
- I'm grateful Will used the forum to share his fairly early thoughts. This can be risky for a thinker like him, because it exposes him to publ... (read more)

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I like this piece.

This post offered concrete suggestions for increasing representation of women at EA events and in the movement as a whole. Before reading this, I thought of diversity-type issues as largely intractable, and that I had limited influence over them, even at the local level.

Immediately after reading this, I stopped doing pub socials (which was the main low-effort event I ran at the time). Over time, I pivoted towards more ideas-rich and discussion-based events.

There has been too little focus on nuclear risks in EA compared to their importance, and I think that this post helps ameliorate that. In addition to the book itself, which was worth highlighting, the review also functions as a good examination of how systemic incentives create issues up to and including existential risks, and allows us to think about how to address them.

As the author of this post, I found it interesting to re-read it more than a year later, because even though I remember the experience and feelings I describe in it, I do feel quite differently now. This is not because I came to some rational conclusion about how to think of self-worth vs instrumental value, but rather the issue has just kind of faded away for me.

It's difficult to say exactly why, but I think it might be related to the fact that I have developed more close friendships with people who are also highly engaged EAs, where I feel that they genuine... (read more)

I still think this was a useful post. It's one of many posts of mine that seem like they were somewhat obvious low-hanging fruit that other people could've plucked too; more thoughts on that in other self-reviews here and here

That said, I also think that, at least as far as I'm aware, this post has been less impactful and less used than I'd have guessed. I'm not actually aware of any instances where I know for sure that someone used this post to pick a research project and then followed it through to completion. I am aware of two other well-received... (read more)

(I'm the author)

Yep, I still endorse the post. It does what it says on the tin, and it does it well. Highest compliment I've received about it (courtesy of Katja): Good Judgment project guy got back to us [...] and also said, “And I just realized that your shop wrote a very smart, subtle review of Tetlock’s book Superforecasting a couple years ago. I’ve referred to it many times.”

I recently had an opportunity to reflect on how it influenced me and what if anything I now disagree with:

Two years ago I wrote a deep-dive summary of Superforecasting a

... (read more)

This post did a really good thing for how I see the world's problems. So much of what's wrong with the world is the fault of no one. Encapsulating the dynamics at play into "Moloch" helped me change the way I viewed/view the world, at a pretty fundamental level.

These are still the best data on community dropout I'm aware of.

I still think this post was making an important point: that the difference in cause views in the community was between the several thousand most highly engaged people and the more peripheral people, rather than between the 'leaders' and everyone else.

There is still little writing about what the fundamental claims of EA actually are, or research to investigate how well they hold, or work to communicate such claims. This post is one of the few attempts, so I think it's still an important piece. I would still really like people to do further investigation into the questions it raises.

I thought this post was particularly cool because it seems to be applicable to lots of things, at least in theory (I have some concerns in practice). I'm curious about further reviews of this post.

I find myself using the reasoning described in the post in a bunch of places related to the prioritization of longtermist interventions. At the same time,  I'm not sure I ever get any useful conclusions out of it. This might be because the area of application (predicting the impact of new technologies in the medium-term future) is particularly challenging. (... (read more)

This post represents the culmination of research into the severity of the risks of nuclear war. I think the series as a whole was very helpful in figuring out how much the EA movement should prioritize nuclear risk and whether nuclear risk represented a true existential risk. Moreover, I think this post in particular was a great example of how there can be initial errors in analysis and how these errors can be thoughtfully corrected.

Disclaimer: I am co-CEO at Rethink Priorities and supervised some of this work, but I am writing this review in a personal ca... (read more)

This post represents the various team opinions on invertebrate sentience (including my own), and I think it was a great showcase of how people's opinions can differ when looking at the same information, and of how to present this complexity and nuance in an approachable way. I also think it continued to help make the case that invertebrate welfare is worth taking seriously, and that this case was made in a way that was credible and taken seriously by the EA movement.

Disclaimer: I am co-CEO at Rethink Priorities and supervised some of this work, but I am writing this rev... (read more)

This post establishes a framework for comparing the moral status of different animals and has started a research program within Rethink Priorities that I think will eventually lead to a large and meaningful reprioritization of resources within the EA movement that accounts for much more accurate and well thought out views of how to prioritize within animal welfare work and how to prioritize between human-focused work and nonhuman-focused work.

Disclaimer: I am co-CEO at Rethink Priorities and supervised some of this work, but I am writing this review in a p... (read more)

I appreciated this post as culmination of over a year of research into invertebrate sentience done by a team at Rethink Priorities that I was a part of. Prior to doing this research, I was pretty skeptical that invertebrates were of moral concern and moreover I was skeptical that there even was a tractable way to figure it out. Now, we somehow managed to make a large amount of forward progress on figuring out this thorny issue and as a result I believe invertebrate welfare issues have a lot more forward momentum both within Rethink Priorities and elsewhere... (read more)

This post is concise and clear, and was great for helping me understand the topics covered when I was confused about them. Plus, there are diagrams! I'd be excited to see more posts like this.
[Disclaimer: this is just a quick review.]

As the Creative Writing Contest noted, Singer's drowning-child thought experiment "probably did more to launch the EA movement than any other piece of writing". The key elements of the story have spread far and wide -- when I was in high school in 2009, an English teacher of mine related the story to my class as part of a group discussion, years before I had ever heard of Effective Altruism or anything related to it.

Should this post be included in the decadal review?  Certainly, its importance is undisputed.  If anything, Singer's es... (read more)

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this piece is a nice short summary of one of the very most core principles of EA thinking.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

Seems like a nice short summary of an important point (despite its current karma of 2!)

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I really like the direct, personal, thoughtful style of this talk, and would like to see more posts like it. Seems like maybe one of the best intros-of-this-length to the reasons for working on AI alignment.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this is a nice summary of some important community norms

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

[COI: I helped fund this work and gave feedback on it.]

I think this is one of the best public analyses of an important question.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think that this is a nice visual, metaphorical, introduction to some economics/cause prioritization theory.

Looking back, I think it's trying to do a lot and so doesn't have a single clear point in the way that the most compelling pieces do. I could imagine it being pretty good cut into separate sections and referenced when the appropriate concept comes up.

This post introduced the "hinge of history hypothesis" to the broader EA community, and that has been a very valuable contribution. (Although note that the author states that they are mostly summarizing existing work, rather than creating novel insights.)

The definitions are clear, and time has proven that the terms "strong longtermism" and "hinge of history" are valuable when considering a wide variety of questions.

Will has since published an updated article, which he links to in this post, and the topic has received input from others, e.g. this critique f... (read more)

Since I originally wrote this post I've only become more certain of the central message, which is that EAs and rationalist-like people in general are at extreme risk of Goodharting ourselves. See for example a more recent LW post on that theme.

In this post I use the idea of "legibility" to talk about impact that can be easily measured. I'm now less sure that was the right move, since legibility is a bit of jargon that, while it's taken off in some circles, hasn't caught on more broadly. Although the post deals with this, a better version of this post might... (read more)

In the past year I have seen a lot of disagreement on when cultivated meat will be commercially available, with some companies and advocates saying it will be a matter of years and some skeptics claiming it is technologically impossible. This post is the single best thing I have read on this topic. It analyses the evidence from both sides, considers the rate of technological progress that will be needed to lead to cultivated meat, and makes realistic predictions. There is a high degree of reasoning transparency throughout. Highly recommended.

Some quick self-review thoughts:

  • I still stand by these points and by the implicit claims that they're worth stating and that they're often not adhered to.
  • I think these points are pretty obvious and probably were already in many people's heads. I think probably many people could've easily written approximately this. If I recall correctly, I wrote it in ~2 hours, just
... (read more)

Very enlightening and useful post for understanding funding not only in the life sciences, but in other areas of science as well.

These investigations on suffering intensity challenge widely accepted, seemingly very convenient beliefs about how other species experience suffering.

This line of study seems more native to and dependent on EA than others. It may be a major achievement of the Effective Altruism movement.

 

This particular paper brings attention to the idea that different creatures can experience time differently.

This idea is both really obvious and also weird and hard to come up with. I think that is common to many great ideas and contributions.

This article affected me a lot when I first read it (in 2015 or so), and is/was a nontrivial part of what I considered "effective altruism" to mean. Skimming it again, I think it might be a little oversimplified, and it has a bit of a rhetorical move that I don't love: conflating "what the world is like" vs "what I want the world to be like."

Still, I think this article was strong at the time, and I think it is still strong now. 

Like NunoSempere, I appreciate the brutal honesty. It's good and refreshing to see someone recognize the lies in something that a) their society views as high-status and good, and b) they personally have a vested interest in believing is really good.

I think this is an important virtue in EA, and we should applaud it in most situations where we see it.  

This post (and also chapter 2 of Doing Good Better, but especially this post) added "we're in triage" to my mental toolbox of ways to frame aspects of situations. Internalizing this tool is an excellent psychological way to overcome forces like status quo bias (when triage is correct), and sometimes an excellent way to get people to understand why we sometimes really ought to prioritize doing good over making our hands feel clean.

I would guess that this post would be even better if it was more independent of the podcast episode.

This is an emotionally neutral introduction to thinking about solving global issues (compared to, for example, this somewhat emotional introduction). The writing uses an example of one EA-related area but does not consider other areas. Thus, this piece should be read in conjunction with materials overviewing popular EA cause areas and ways of reasoning to constitute an in-depth introduction to EA thinking.

Corporate campaigns have been a large part of the "effective animal advocacy" movement since they exploded in funding and effort in 2015. Given the prominence, it was strongly worth investigating whether corporate campaigns were worth the prior investment and - more importantly - would be worth the continued marginal investment into the future. This review established that corporate campaigns actually seem to have been a strong success.

I also think this post demonstrates a strong and thoughtful approach to cost-effectiveness estimation that served as a tem... (read more)

An introductory reading list on wild animal welfare that covers all the important debates:

  • Should we intervene in wild animal welfare?
  • Will interventions work? Are they tractable?
  • What impact will wild animal welfare have on the long-term future?

The post was part of a 2018 series, the Wild Animal Welfare Literature Library Project.

Wild animal welfare has increased in prominence since then, e.g. Animal Charity Evaluators has regularly identified wild animal welfare as a key cause area.

I regularly refer back to this piece when thinking about movement-building or grants in that space. It provides a lot of really thoroughly-researched historical evidence as well as clear insight. It's a shame that it only has a few karma on the forum - I wouldn't want that to cloud its significance for the decade review.

Writing something brief to ensure this gets into the final stage - I recall reading this post, thinking it captured a very helpful insight and regularly recalling the title when I see claims based on weak data. Thanks Greg!

This post (and the series it summarizes) draws on the scientific literature to assess different ways of considering and classifying animal sentience. It persuasively takes the conversation beyond an all-or-nothing view and is a significant advancement for thinking about wild animal suffering, as well as farm animal welfare beyond just cows, pigs, and chickens.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

[I work with Julia.]

I think this piece is maybe the best short summary of a strand in Julia's writing that has helped EA to seem more attainable for people.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this is another piece that can probably help people relate to EA ideas more healthily.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this is a great summary. I have some hesitations about quite how canonical this has become, but I do think this is a really important piece in terms of EA's history.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I commented on this back in the day, and still like it. I think it's thoughtfully reflecting on some common ways in which EA can be offputting or overwhelming for people, in a way that I think will help them to cope better.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important, but which I don't have time to re-read or say very nuanced things about.]

I think this piece either helped to correct some early EA biases towards legible/high-certainty work (e.g. cash transfers), or more publicly signalled that this correction was taking place (I'm not quite sure of the intellectual history, or what caused what). 

Skimming it again, I think it makes some compelling points, and does a good job adding nuance and responding to possible concerns.

I think this is probably one of the most important pieces of the last decade.

This led to personal lifestyle changes: I bought an air purifier, and gave purifiers as gifts to friends and family.

I'm quite neutral on this post. On the one hand, it is short and simple, with a clear concept that has some impact/relevance. On the other hand, I think the concept is somewhat inaccurate (and very imprecise): the reasoning mainly supposes a pattern/relationship that the size of your audience is inversely proportional to the size of your message. This makes some vague sense, but the numbers used seem to fall apart when you think about it and expand it a bit more: if you want to coordinate thousands of people, you have five words... but what if I want to co... (read more)

This is my favorite post for making the opportunity costs of our spending feel real and urgent.

The most compelling empirical observations have some kind of normative implications. For example, Peter Singer observed that almost everyone strongly believes that we are obligated to save a child drowning nearby, even if it requires some personal sacrifice. Combined with the basic normative principle that distance doesn't matter morally, that observation tells us that something is wrong with our moral instincts. This post presents another such empirical fact: ou... (read more)

This is the post I most often refer to when talking about donating now versus investing to donate later. It provides a good summary of the main considerations and is accessible to non-expert donors. Having a back-of-the-envelope model with real numbers is also really great.

Individual non-expert donors can defer to experts on where to donate by using charity evaluators and the EA Funds. But the question of when to donate is one they mostly have to answer themselves (except perhaps in the case of the Patient Philanthropy Fund).

Suggestions for follow-up posts:

  • updatin
... (read more)

Low-cost lives are not something to celebrate. They are a reminder that we live on an injured planet, where people suffer for no reason save poor luck.

This is a motivational quote that I keep reminding myself of. This is one way I see the dark world.

This piece is not part of the Replacing Guilt series, but it has the same vibe. It deserves the same credit as Replacing Guilt.

It's really good that someone took the time to write this.

This piece examines the accuracy of Peter Singer's expanding-moral-circles theory through reasoning and examples. Since the validity of this arguably fundamental thesis can have wide implications, I recommend this post.

This is still our most current summary of our key advice on career planning, and I think it's useful as a short summary.

If I were writing it again today, there are a few points where it could be better synced with our updated key ideas series, and further simplified (e.g. talking about 3 career stages rather than 4).

This post helped establish ballot initiatives as an underappreciated way to do good within effective altruism across a variety of cause areas. However, since the post was published, Rethink Priorities (the author organization of this post) did not end up doing much more work on ballot initiatives, and as far as I know, no one advancing ballot initiatives within effective altruism was specifically motivated to do so by this post (though it's possible some efforts were still sped up on the margin). So I think this post did not end up having the impact that I was... (read more)

This is going to be a quick review, since there has been plenty of discussion of this post and people understand it well. But this post was very influential for me personally, and helped communicate yet another aspect of the key problem with AI risk: the fact that it's so unprecedented, which makes it hard to test and iterate solutions, hard to raise awareness and get agreement about the nature of the problem, and hard to know how much time we have left to prepare.

AI is simply one of the biggest worries among longtermist EAs, and this essay does a ... (read more)

This is a great post in both content and writing quality. I'm a little sad that, despite winning a Forum Prize, there was relatively little follow-up.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

Maybe worth including, for similar reasons to its sister post.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this is a pretty good intro to EA-ish thinking, particularly the nuance around how to relate to commitment. But maybe less good than some of the other intros.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

This seems to do a good job addressing a somewhat-common failure mode for EAs.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

In some sense, it would be pretty weird to focus on such a particular event/charity. But I think this post is maybe a really good pointer to EA's commitment to make hard decisions to maximise impact. 

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think it's pretty important that we don't get too anchored on a specific set of 3-4 cause areas, and I think it's also important that people feel there are other things to do even if they're not well placed to work on those 3-4 cause areas. I think this piece makes these sorts of general points well and is the best collection of other plausible-seeming things to explore.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this piece is making an important intellectual point (clarifying a pretty reasonable-seeming objection to EA thinking). And it's also pretty emotionally motivating to me.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this sort of objection is common to some EA cause areas, and this article is a pretty decent response to that objection.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think that the core idea here (about a few different worldviews you could bet on) is frequently referenced and important. I'm not 100% sure I agree with this approach theoretically, but it seems to have happened practically and be working fine. The overall framing of the post is around OP's work, which maybe would make it seem a bit out-of-place in some sort of collection of articles. 

I think I'd be pro including this if you could do an excerpt that cut out some of the OP-specific context.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this is an accessible intro to why we should care about AI safety. I'm not sure if it's the best intro, but it seems like a contender.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

This feels like an important piece almost just as a reference to the core case/thinking around GWWC and GiveWell.

Skimming it again, it's clearly academic, but it's more readable than I remembered. I think it makes an important point well.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think this is a nice poetic summary of a core strand of EA thinking, that is then backed up by longer, more critical work. I think this would be a pretty good thing to include as a summary of some of the motivation for the X-risk/longtermist side of EA.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

There's been a variety of work over the last few years focused on examining the arguments for focusing on AI alignment. I think this is one of the better and more readable ones. It's also quite long and not-really-on-the-Forum. Not sure what to do with that. The last post has a bunch of comment threads, which might be a good way of demonstrating EA reasoning.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think that this is one of the classic bits of EA-related creative writing.

I think I was already intellectually aligned with this piece before reading it, but I felt the emotional pull of the cause area more after reading it. 

It's advocating for life extension etc research, which doesn't seem like the most pressing problem on the margin, but does seem potentially really g... (read more)

As the author, I think that the post stands up relatively well on its own, and addresses a still-critical issue for thinking about many potentially effective interventions which relate in various ways to geopolitics. 

Unfortunately, despite my thinking that the introduction is good, the sequence wasn't really finished, or as polished as I had hoped - and by February 2020 I had started focusing on COVID-19 and related issues, and haven't gotten a chance to return to this. (And if this were included in the Decade Review, I would want to edit it to stand alone more.)

This had a large influence on how I view the strategy of community building for EA.

I definitely still stand by the overall thrust of this post, which I'd summarize as:

"The default Recommended EA Action should include saving up runway. It's more important to be able to easily switch jobs, or pivot into a new career, or absorb shocks while you try risky endeavors, than to donate 10%, especially early in your career. This seems true to me regardless of whether you're primarily earning to give, or hoping to do direct work, or aren't sure."

I'm not particularly attached to my numbers here. I think people need more runway than they think, and I... (read more)

I wrote a fairly detailed self-review of this post on the LessWrong 2019 Review last year. Here are some highlights:

  • I've since changed the title to "You have about Five Words" on LessWrong. I just changed it here to keep it consistent. 
  • I didn't really argue for why "about 5". My actual guess for the number of words you have is "between 2 and 7." Concepts will, in the limit, end up getting compressed into a form that one person can easily/clumsily pass on to another person who's only kinda paying attention or only reads the headline. It'll hit some ev
... (read more)

This analysis accurately frames my starting point in working on Dozy. It convinced me to commit full-time to the project on the merits of its potential impact. Some of the awareness the post generated led to me raising a small pre-seed round from EAs, which has been invaluable. The problem size and treatment claims have been largely substantiated by studies that came out since, but I underestimated the challenges of getting users through the complete treatment program. Also, there are a few direct-to-consumer CBT-i options coming out as of now, so the coun... (read more)

While the central thesis of expanding one's moral circles can be well-received by the community, this post does not sell it well. This is exemplified by the "One might be biased towards AIA if…" section, which makes assumptions about individuals who focus on AI alignment. Further, while the post includes a section on cooperation, it discourages it. [Edit: Prima facie,] the post does not invite critical discussion. Thus, I would recommend this post only to readers interested in moral circles expansion, AI alignment, and cooperation who are also interested in a vibrant discourse.

Prima facie, this looks like a thoroughly researched, innovative piece recommending an important focus area. However, it does not discuss the notions of preventing malevolent actors from causing harm if they rise to power by developing robust institutions, of advancing benevolent frameworks that enable actors to join alternatives as they gain decision-making leverage, or of creating narratives that could inspire malevolent actors to contribute positively while staying true to their beliefs. Thus, using this framework for addressing malevolence can constitute an existential risk.

This post introduces the idea of structuring the EA community by cause-area/career proximity rather than geographical closeness. People in each interest- or expertise-based network are involved with EA to different extents (the traditional CEA funnel model), and the few most involved amend EA thinking, which is subsequently shared with the network.

While this post offers an important notion of organizing the EA community by networks, it does not specify the possible interactions among networks and their impact on global good or mention sharing messages with k... (read more)

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I like this overall (re-)framing of EA: I think that both words are things that are important to EA (and that we maybe want even more of on the margin).

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

Seems like a core post/idea to me.

This is extremely useful information for deciding what AI charities to donate to.

One of the most straightforward and useful introductions to MIRI's work that I've read.

This is my favorite introduction to making tradeoffs between causes, with Efficient Charity a close second.

This framing of the "drowning child" experiment can best appeal to philosophy professors (as if hearing from a friend), so it can be shared with this niche audience. Some popular versions include this video (more neutral, appropriate for audiences of diverse ages) and this video (using younger-audience marketing). The experiment should be used together with some more rational writing on high-impact opportunities and engagement specifics in order to motivate people to enjoy high-impact involvement.

This post summarizes the 2019 perspectives of the authors (some of whom continue to lead Rethink Priorities' research on animal sentience) on whether different taxa are sentient. The writing makes relatively few references to sentience studies, especially for taxa whose sentience has not been extensively researched. The piece does not include a definition of sentience, although the ability to experience pleasure and pain and to have emotions is suggested in different parts of the writing. The post does not introduce the notion of the intensity of c... (read more)

This writing may be addressing the risk of rejection of EA due to its consideration of individuals. Prima facie, the post claims that "we don't extend empathy to everyone and everything," but it implies that the opposite should hold as one's thinking develops. The post seeks to gain authority by critiquing others. It does not invite collaborative critical thinking.

Thus, readers can aspire to critique others based on their ‘extent of empathy,’ which can mean the consideration of broad moral circles, without developing the skill of inviting criti... (read more)

This post advocates for greater prioritization of global priorities research, including questions related to longtermism, because such research can increase the impact of the EA community.

The Positive recent progress section implies that research is conceived of as traditional philosophical academic journal papers and similar writing, and further suggests that innovative discourse happens predominantly within academia. This thinking could hinder progress within the GPR field.

The Implications of longtermism and Patient longtermism sections can be interpreted as... (read more)