
ConcernedEAs

401 karma · Joined

Bio

Group account for the authors of Doing EA Better.

Get in touch at concernedEAs@proton.me!

Sequences (1)

Doing EA Better

Comments (14)

Hi JWS,

The term is explored in an upcoming section, here.

Speaking only for ConcernedEAs, we are likely to remain anonymous until costly signals are sent that making deep critiques in public will not damage one's career, funding, or social prospects within EA.

We go into more detail in Doing EA Better, most notably here:

Prominent funders have said that they value moderation and pluralism, and thus people (like the writers of this post) should feel comfortable sharing their real views when they apply for funding, no matter how critical they are of orthodoxy.

This is admirable, and we are sure that they are being truthful about their beliefs. Regardless, it is difficult to trust that the promise will be kept when one, for instance:

  • Observes the types of projects (and people) that succeed (or fail) at acquiring funding
    • i.e. few, if any, deep critiques or otherwise heterodox/“heretical” works
  • Looks into grantmakers and sees that they appear to have very similar backgrounds and opinions (i.e. they are highly orthodox)
  • Experiences the generally claustrophobic epistemic atmosphere of EA
  • Hears of people facing (soft) censorship from their superiors because they wrote deep critiques of the ideas of prominent EAs
    • Zoe Cremer and Luke Kemp lost “sleep, time, friends, collaborators, and mentors” as a result of writing Democratising Risk, a paper which was critical of some EA approaches to existential risk.[23] Multiple senior figures in the field attempted to prevent the paper from being published, largely out of fear that it would offend powerful funders. This saga caused significant conflict within CSER throughout much of 2021.
  • Sees the revolving door and close social connections between key donors and main scholars in the field
  • Witnesses grantmakers dismiss scientific work on the grounds that the people doing it are insufficiently value-aligned
    • If this is what is said in public (which we have witnessed multiple times), what is said in private?
  • Etc.

We go into more detail in the post, but the most important step is radically diversifying the viewpoints represented on grantmaking and hiring decision-making bodies.

As long as the vast majority of resource-allocation decisions are made by a tiny and homogeneous group of highly orthodox people, the anonymity motive will remain.

This is especially true when one of the (sometimes implicit) selection criteria for so many opportunities is perceived "value-alignment" with a very specific package of often questionable views, i.e. EA Orthodoxy.

We appreciate that influential members of the community (e.g. Buck) are concerned about the increasing amounts of anonymity, but unfortunately expressing concern and promising that there is nothing to worry about is not enough.

If we want the problem to be solved, we need to remove the factors that cause it.

Hi Dustin,

We’re very happy to hear that you have seriously considered these issues.

If the who-gets-to-vote problem were solved, would your opinion change?

We concur that corrupt intent/vote-brigading is a potential drawback, but not an unsolvable one.

We discuss some of these issues in our response to Halstead on Doing EA Better:

There are several possible factors that could be used to draw a hypothetical boundary, e.g.:

  • Committing to and fulfilling the Giving Pledge for a certain length of time
  • Working at an EA org
  • Doing community-building work
  • Donating a certain amount/fraction of your income
  • Active participation at an EAG
  • Etc.

These and others could be combined to define some sort of boundary, though of course it would need to be kept under constant monitoring & evaluation.

Given a somewhat costly signal of alignment, it seems very unlikely that someone would dedicate a significant portion of their life to going “deep cover” in EA for a very small chance of being randomly selected as one of multiple people in a sortition assembly deliberating on broad strategic questions about the allocation of a certain proportion of one EA-related fund or another.

In any case, it seems like something at least worth investigating seriously, and it may eventually become suitable for exploration through a consensus-building tool, e.g. pol.is.

What would your reaction be to an investigation of the boundary-drawing question, as well as to small-scale experimentation like that which we suggest in Doing EA Better?

What would your criteria for “success” be, and would you be likely to change your mind if those were met?
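
For concreteness, here is a minimal sketch of how a combined eligibility boundary plus sortition could look mechanically. Every criterion, threshold, and field name below is a hypothetical placeholder for illustration only, not a concrete proposal:

```python
import random
from dataclasses import dataclass

@dataclass
class CommunityMember:
    """Hypothetical record of costly signals of engagement with EA."""
    name: str
    years_fulfilling_giving_pledge: int = 0
    works_at_ea_org: bool = False
    does_community_building: bool = False
    fraction_of_income_donated: float = 0.0
    attended_eag: bool = False

def is_eligible(m: CommunityMember) -> bool:
    # Toy boundary: at least two of the costly signals listed above.
    signals = [
        m.years_fulfilling_giving_pledge >= 2,
        m.works_at_ea_org,
        m.does_community_building,
        m.fraction_of_income_donated >= 0.1,
        m.attended_eag,
    ]
    return sum(signals) >= 2

def draw_sortition_assembly(members, size, seed=None):
    """Randomly select an assembly from the eligible pool."""
    eligible = [m for m in members if is_eligible(m)]
    rng = random.Random(seed)
    return rng.sample(eligible, k=min(size, len(eligible)))
```

The point of the sketch is simply that once a boundary is defined, eligibility can be made auditable and the selection step is trivially random; the hard part is the boundary itself, which is why it would need constant monitoring & evaluation.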

Very fair: DMing for the sake of the anonymity of both parties.

Hi John,


Thank you for your response, and more generally thank you for having been consistently willing to engage with criticism on the forum.


We’re going to respond to your points in the same format that you made them in for ease of comparison.
 

Should EA be distinctive for its own sake or should it seek to be as good as possible? If EA became more structurally similar to e.g. some environmentalist movements in some ways, e.g. democratic decision-making, would that actually be a bad thing in itself? What about standard-practice transparency measures? To what extent would you prefer EA to be suboptimal in exchange for retaining aspects that would otherwise make it distinctive?
 

In any case, we’re honestly a little unsure how you reached the conclusion that our reforms would lead EA to become “basically the same as standard forms of left-wing environmentalism”, and would be interested in you spelling this out a bit. We assume there are aspects of EA you value beyond what we have criticised, such as its obsessive focus on impact, its commitment to cause-prioritisation, and its willingness to quantify (which is often a good thing, as we say in the post), all of which are frequently lacking in left-wing environmentalism.
 

  1. But why, as you say, was so little attention paid to the risk FTX posed? One of the points we make in the post is that the artificial separation of individual “risks” like this is frequently counterproductive. A simple back-casting or systems-mapping exercise (foresight/systems-theoretical techniques) would easily have revealed EA’s significant exposure and vulnerability (disaster risk concepts) to a potential FTX crash. The overall level of x-risk is presumably tied to how much research it gets, and the FTX crash clearly reduced the amount of research that will get done on x-risk any time soon. 

    These things are related, and must be treated as such. 

    Complex patterns of causation like this are just the kind of thing we are advocating for exploring, and something you have confidently dismissed in the recent past, e.g. in the comments on your recent climate post.
     
  2. We agree that the literature does not all point in one direction; we cited those two sources because they act as recent summaries of the state of the literature as a whole, which includes findings in favour of the positive impacts of e.g. gender and age diversity.

    We concede that “essentially all dimensions” was an overstatement: sloppy writing on our part, of which we are sure there is more in the manifesto, and for which we apologise. Thank you for highlighting this.

    On another note, equating “criticising diversity” in any form with “career suicide” seems like something of an overstatement.
     
  3. We agree that there is a balance to be struck, and state this in the post. The issue is that EA uses seemingly neutral terms to hide orthodoxy, sits far too close to one end of the value-alignment spectrum, and actively excludes many valuable people and projects because they do not conform to said orthodoxy.

    This is particularly visible in existential risk, where EA almost exclusively funds TUA-aligned projects despite the TUA’s surprisingly poor academic foundations (inappropriate usage of forecasting techniques, implicit commitment to outdated or poorly-supported theoretical frameworks, phil-of-sci considerations about methodological pluralism, etc.) as well as the generally perplexed and unenthusiastic reception it gets in non-EA Existential Risk Studies.
     
  4. Unfortunately, you are not in the best position to judge whether EA is hostile to criticism. You are a highly orthodoxy-friendly researcher (this is not a criticism of you or your work, by the way!) at a core EA organisation with significant name-recognition and personal influence, and your critiques are naturally going to be more acceptable.

    We concede that we may have neglected the role of the seniority of the author in the definition of “deep” critique: it surely plays a significant role, if only due to the hierarchy/deference factors we describe. On examples of chilled works, the very point we are making is that a chilling effect exists: critiques are not published *because* of it, so of course there are few examples to point to.

    If you want one example in addition to Democratising Risk, consider our post. The comments also contain several examples of people who did not speak up on particular issues because they feared losing access to EA funding and spaces.
     
  5. We are not arguing that general intelligence is completely nonexistent, but that the conception commonplace within EA is highly oversimplified: the fact that factors in intelligence are correlated does not mean that everything can be boiled down to a single number. There are robust critiques of the g concept that are growing over time (e.g. here), as well as factors that are typically neglected (see the Emotional Intelligence paper we cited). Hence, calling monodimensional intelligence a “central finding of psychological science”, implying it to be some kind of consensus position, is somewhat courageous.

    In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise.

    Our post also mentions other issues with intelligence-based deference: how being smart doesn’t mean that someone should be deferred to on all topics, etc.
     
  6. We are not arguing that every aspect of EA thought is determined by the preferences of EA donors, so the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal.

    We concede that we may have neglected cultural factors: in addition to the “hard” money/power factors, there is also the “softer” fact that much of EA culture comes from upper-middle-class Bay Area tech culture, which indirectly causes EA to support things that are popular within that community, which naturally align with the interests of tech companies.*

    We are glad that you agree on the spokesperson point: we were very concerned to see e.g. 80kH giving uncritical positive coverage to the crypto industry given the many harms it was already known to be doing prior to the FTX crash, and it is encouraging to hear signals that this sort of thing may be less common going forward.
     
  7. We agree that getting climate people to think in EA terms can be difficult sometimes, but that is not necessarily a flaw on their part: they may just have different axioms from ours. In other cases, we agree that there are serious problems (which we have also struggled with at times), but it is worth reminding ourselves that, as we note in the post, we too can be rather resistant to the inputs of domain experts. Some of us, in particular, considered leaving EA at one point because it was so (at times, frustratingly) difficult to get other EAs to listen to us when we talked about our own areas of expertise. We’re not perfect either, is all we’re saying.

    Whilst we agree with you that we shouldn’t only take Rockstrom etc. as “the experts”, and do applaud your analysis that existential catastrophe from climate change is unlikely, we don’t believe your analysis is particularly well-suited to the extremes we would expect for GCR/x-risk scenarios. It is precisely when such models fall down, when civilisational resilience is less than anticipated, when cascades like those in Richards et al. 2021 occur, etc., that the catastrophes we are worried about are most likely to happen. X-risk research studies relatively low-probability, unprecedented scenarios that are captured badly by economic models etc. (as with TAI being captured badly by the markets), and we feel your analysis demands levels of likelihood and confidence from climate x-risk that are (rightfully, we think) not demanded of e.g. AI or biorisk.

    We should expect the IPCC consensus not to capture x-risk concerns, because (hopefully) the probabilities are low enough for it not to be something they majorly consider, and, as Climate Endgame points out, there has thus far not been much x-risk research on climate change.

    Otherwise, there have been notable criticisms of much of the climate economics field, especially its more optimistic end (e.g. this paper), but we concur that it is not something that needs to be debated here.
     
  8. We did not say that differential technological development had not been subjected to peer review; we said that it has not been subjected to “significant amounts of rigorous peer review and academic discussion”, which is true. Apologies if it implied something else. This may not be true forever: we are very excited about the discussion of the current Sandbrink et al. 2022 pre-print, for instance. All we were noting here is that important concepts in EA are often in their academic infancy (as you might expect from a movement with new-ish concepts) and thus often haven’t received the level of academic scrutiny that is often claimed internally.
     
  9. You assume incorrectly, and apologies if this is also an issue with our communication. We never advocated for opening up the vote to anyone who asked, so fears in this vein are fortunately unsupported. We agree that defining “who gets a vote” is a major crux here, but we suggest that it is a question that we should try to answer rather than using it as justification for dismissing the entire concept of democratisation. In fact, it seems like something that might be suitable for consensus-building tools, e.g. pol.is.

    Committing to and fulfilling the Giving Pledge for a certain length of time, working at an EA org, doing community-building work, donating a certain amount/fraction of your income, active participation at an EAG, as well as many others that EAs could think of if we put some serious thought into the problem as a community, are all factors that could be combined to define some sort of boundary.

    Given a somewhat costly signal of alignment, it becomes unlikely that someone would go “deep cover” in EA in order to have a very small chance of being randomly selected as one of multiple people in a sortition assembly deliberating on broad strategic questions about the allocation of a certain proportion of one EA-related fund or another.
     
  10. We are puzzled as to how you took “collaborative, mission-oriented work” to refer exclusively to for-profit corporations. Naturally, e.g. Walmart could never function as a cooperative, because Walmart’s business model relies on its ability to exploit and underpay its workers, which would not be possible if those workers ran the organisation. There are indeed corporations (most famously Mondragon) that function on co-operative lines, as well as the Free Open-Source Software movement, Wikipedia, and many other examples.

    Of most obvious relevance, however, are social movements like EA. If one wants a movement to reliably and collaboratively push for certain types of socially beneficial changes in certain ways, and to avoid becoming a self-perpetuating bureaucracy, it should be run collaboratively by those pushing for those changes in those ways, and it should avoid cultivating a managerial elite (cf. the Iron Law of Institutions we mentioned, and more substantively the history of social movements; essentially every Leninist political party springs to mind).
     

As we say in the post, this was overwhelmingly written before the FTX crash, and the problems we describe existed long before it. The FTX case merely provides an excellent example of some of the things we were concerned about, and for many people shattered the perhaps idealistic view of EA that stopped so many of the problems we describe from being highlighted earlier.
 

Finally, we are not sure why you are so keen to repeatedly apply the term “left-wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.
 

* We actually touch on it a little: see the mention of the Californian Ideology, which we recommend everyone in EA read.

Thank you so much for your response, DM'd!

The word before the Yang & Sandberg link

Yup, we're going to split it into a sequence (I think it should be mentioned in the preamble?)

Hi AllAmericanBreakfast,

The other points (age, cultural background, etc.) are in the Critchlow book, linked just after the paper you mention.
