Bio

"EA-Adjacent" now I guess.

🔸 10% Pledger.

Likes pluralist conceptions of the good.

Dislikes Bay Culture being in control of the future.

Sequences
3

Against the overwhelming importance of AI Safety
EA EDA
Criticism of EA Criticism

Comments
357

I have some initial data on the popularity and public/elite perception of EA that I wanted to write into a full post, something along the lines of What is EA's reputation, 2.5 years after FTX? I might combine my old idea of a Forum data analytics update into this one to save time.

My initial data/investigation into this question ended up being a lot more negative than other surveys of EA. The main takeaways are:

  • Declining use of the Forum, both in total and amongst influential EAs
  • EA has a very poor reputation in the public intellectual sphere, especially on Twitter
  • Many previously highly engaged/talented users quietly leaving the movement
  • An increasing philosophical pushback against the tenets of EA, especially from the New/Alt/Tech Right, rather than the more common 'the ideas are right, but the movement is wrong in practice'[1]
  • An increasing rift between Rationalism/LW and EA
  • Lack of a compelling 'fightback' from EA leaders or institutions

Doing this research did contribute to me feeling a lot more gloomy about the state of EA, but I think I do want to write this one up to make the findings more public, and to allow people to poke holes in them if possible.

  1. ^

    To me this signals more values-based conflict, which makes it harder to find Pareto-improving ways to co-operate with other groups.

I do want to write something along the lines of "Alignment is a Political Philosophy Problem"

My takes on AI, and the problem of x-risk, have been in flux over the last 1.5 years, but they do seem to be more and more focused on the idea of power and politics, as opposed to finding a mythical 'correct' utility function for a hypothesised superintelligence. Making TAI/AGI/ASI go well therefore falls into the reference class of 'principal-agent problem'/'public choice theory'/'social contract theory' rather than 'timeless decision theory'/'coherent extrapolated volition'. The latter two are poor answers to an incorrect framing of the question.

Writing that influenced me on this journey:

I also think this view helps explain the huge range of backlash that AI Safety received over SB1047 and after the awfully botched OpenAI board coup. They were both attempted exercises in political power, and the pushback often criticised this rather than engaging with the 'object level' risk arguments. I increasingly think that this is not an 'irrational' response but a perfectly rational one, and "AI Safety" needs to pursue more co-operative strategies that credibly signal legitimacy.

  1. ^

    I think the downvotes these got are, in retrospect, a poor sign for epistemic health

I don't think anyone wants or needs another "Why I'm leaving EA" post but I suppose if people really wanted to hear it I could write it up. I'm not sure I have anything new or super insightful to share on the topic.

My previous attempt at predicting what I was going to write got 1/4, which ain't great.

This is partly planning fallacy, partly real life being a lot busier than expected and Forum writing being one of the first things to drop, and partly an increasing feeling of gloom and disillusionment with EA, which means I don't have the same motivation to write or contribute to the Forum as I did previously.

For the things that I am still thinking of writing, I'll add comments to this post separately, so that votes and comments can be attributed to each idea individually.

Not to self-promote too much but I see a lot of similarities here with my earlier post, Gradient Descent as an analogy for Doing Good :)

I think they complement each other,[1] with yours emphasising the guidance of the 'moral peak', and mine warning against heading straight for it while ignoring the ground giving way underneath you.

I think there is an underlying point that cluelessness wins over global consequentialism, which is practically unworkable, and that solid moral heuristics are a more effective way of doing good in a world with complex cluelessness.

  1. ^

    Though you flipped the geometry for the more intuitive 'reaching a peak' rather than the ML-traditional 'descending a valley'
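(For anyone who wants that footnote's geometry spelled out, here is a tiny toy sketch. It is entirely my own illustration rather than anything from either post, and the names `descend`, `ascend`, and the quadratic landscapes are made up for the example. It just shows that the ML-traditional 'descend a loss valley' and the flipped 'climb towards a peak' are the same update rule with the sign of the step reversed.)

```python
# Toy sketch of the two framings: descending a loss valley vs. climbing a peak.
# Everything here is illustrative, not taken from either post.

def grad(f, x, eps=1e-6):
    """Crude central-difference gradient of a 1-D function."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def descend(loss, x, step_size=0.1, steps=100):
    """Gradient descent: step downhill to minimise the loss (the ML convention)."""
    for _ in range(steps):
        x -= step_size * grad(loss, x)
    return x

def ascend(goodness, x, step_size=0.1, steps=100):
    """Gradient ascent: step uphill to maximise 'goodness' (the flipped framing)."""
    for _ in range(steps):
        x += step_size * grad(goodness, x)
    return x

# Toy landscapes: a valley with its bottom at x = 2, and the mirrored peak at x = 2.
valley = lambda x: (x - 2) ** 2
peak = lambda x: -((x - 2) ** 2)

print(descend(valley, x=0.0))  # ~2.0, found by walking down the valley
print(ascend(peak, x=0.0))     # ~2.0, the same point reached by walking up the peak
```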

I also think it's likely that SMA believes that for their target audience it would be more valuable to interact with AIM than with 80k or CEA, not necessarily for the 3 reasons you mention.

I mean the reasoning behind this seems very close to #2 no? The target audience they're looking at is probably more interested in neartermism than AI/longtermism and they don't think they can get much tractability working with the current EA ecosystem?

The underlying idea here is the Housing Theory of Everything.

A lossy compression of the idea is that if you fix the housing crisis in Western economies, you'll unlock positive outcomes across economic, social, and political metrics, which you can then leverage for high positive impact.

A sketch, for example, might be that you want the UK government to do lots of great stuff in AI Safety. But UK state capacity in general might be completely borked until it sorts out its housing crisis.

Reminds me of when an article about Rutger popped up on the Forum a while back (my comments here)

I expect SMA people probably think something along the lines of:

  1. EA funding and hard power is fairly centralised. SMA want more control over what they do/fund/associate with and so want to start their own movement.
  2. EA has become AI-pilled and longtermist. Those who disagree need a new movement, and SMA can be that movement.
  3. EA's brand is terminally tarnished after the FTX collapse. Even though SMA agrees a lot with EA, it needs to market itself as 'not EA' as much as possible to avoid negative social contagion.

Not making a claim myself about whether and to what extent those claims are true.

Like Ian Turner I ended up disagreeing and not downvoting (I appreciate the work Vasco puts into his posts).

The shortest answer is that I find the "Meat Eater Problem" repugnant and indicative of defective moral reasoning that, if applied at scale, would lead to great moral harm.[1]

I don't want to write a super long comment, but my overall feelings on the matter have not changed since this topic last came up on the Forum. In fact, I'd say that one of the leading reasons I've come to consider myself drastically less 'EA' over the last ~6 months is the seeming embrace of the "Meat-Eater Problem" built into both the EA community and its core ideas, or at least their more 'naïve utilitarian' end. To me, Vasco's bottom-line result isn't an argument that we should stop preventing children dying of malnutrition or suffering from malaria because of these second-order effects.

Instead, naïve hedonistic utilitarians should be asking themselves: If the rule you followed brought you to this, of what use was the rule?

  1. ^

    I also agree factory farming is terrible. I just want to find Pareto solutions that reduce needless animal suffering and increase human flourishing.

Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!

Best Forum Post I read this year:

Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal 

It was a tough choice this year, but I think this deep, deep dive into the different cost effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full Google Doc they worked through is here.

This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work. 👏

Honourable Mentions:

  • Towards more cooperative AI safety strategies by @richard_ngo: This was a post that I read at exactly the right time for me, as it came at a point when I was also highly concerned that the AI Safety field was having a "legitimacy problem".[1] As such, I think Richard's call to action to focus on legitimacy and competence is well made, and I would urge those working explicitly in the field to read it (as well as the comments and discussion on the LessWrong version), and perhaps consider my quick take on the 'vibe shift' in Silicon Valley as a chaser.
  • On Owning Our EA Affiliation by @Alix Pham: One of the most wholesome EA posts this year on the Forum? The post is a bit bittersweet to me now, as I was moved by it at the time but now affiliate and identify with EA less than I have for a long time. The vibes around EA have not been great this year, and while many people are explicitly or implicitly abandoning the movement, Alix actually went for the radical idea of "do the opposite". She's careful to try to draw a distinction between affiliation and identity, and really engages in the comments, leading to very good discussion.
  • Policy advocacy for eradicating screwworm looks remarkably cost-effective by @MathiasKB🔸: EA Megaprojects are BACK baby! More seriously, this post probably had the most 'blow my mind' effect on me this year. Who knew that the US Gov already engages in a campaign of strategic sterile-fly bombing, dropping millions of them on Central America every week? I feel like Mathias did great work finding a signal here, and I'm sure other organisations (maybe an AIM-incubated one) are well placed to pick up the baton.

Forum Posters of the Year:

  • @Vasco Grilo🔸 - I presume that the Forum has a bat-signal of sorts that goes off whenever a long discussion happens without anyone trying to do an EV calculation. And in such dire times, Vasco appears, always with amazing sincerity and thoroughness. Probably the Forum's current poster child of 'calculate all the things' EA. I think this year he's been an awesome presence on the Forum, and long may it continue.
  • @Matthew_Barnett - Matthew is somewhat of an enigma to me ideologically; there have been many cases where I've read a position of his and gone "no, that can't be right". Nevertheless, I think the consistently high-quality nature of his contributions on the Forum, often presenting an unorthodox view compared to the rest of EA, is worth celebrating regardless of whether I personally agree. Furthermore, one of my major updates this year has been towards viewing the Alignment Problem as one of political participation and incentives, and this can probably be traced back significantly to his posts this year.

Non-Forum Poasters of the Year:

  • Matt Reardon (mjreard on X) - X is not a nice place to be an Effective Altruist at the moment. EA seems to be attacked from all directions, which means it's not fun at all to push back on people and defend the EA point of view. Yet Matt has consistently pushed back on some of the most egregious cases of this,[2] and has also had good discussions within EA Twitter too.
  • Jacques Thibodeau (JacquesThibs on X) - I think Jacques is great. He does interesting, cool work on Alignment, and you should consider working with him if you're also in that space. I think one of the most positive things Jacques does on X is build bridges across the wider 'AGI Twitter', including with many who are sceptical of or even hostile to AI Safety work, like teortaxesTex or jd_pressman. I think this is to his great credit, and I've never (or rarely) seen him get angry on the platform, which might even deserve another award!

Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.

My final ask is, once again, for you all to share your appreciation for others on the Forum this year and tell me which posts/comments/contributors were your favourites this year!

  1. ^

    I think that the fractured and mixed response to the latest Apollo reports (both for OpenAI and Anthropic) is partially downstream of this loss of trust and legitimacy

  2. ^

    e.g. here and here
