Brennan W.

Founder's Associate @ Merantix Momentum
145 karma · Joined · Working (0-5 years) · Seeking work · Berlin, Germany

Bio


Originally from the US, but I've been living, studying, and working around the world since 2018 and look forward to keeping up the adventure! EA-adjacent - very interested in trying to have a positive impact, but not quite resonating with upper-case "Effective Altruism". Happy to share more
--
Strongly values-driven operator with a diverse analytical and managerial skillset, always excited to drive solutions for positive impact.

Dynamism | Empiricism | Collaboration | Solutions-Orientation | Mission Centricity

How others can help me

I'm always on the lookout for opportunities where I can help out and do some good - so if you have any good leads I'd love to get an intro :)

How I can help others

Business operations, business development, people operations, hiring, organizational design, career coaching, and an outsider's perspective that might be a bit different from many in EA

-> My strategy has been to develop a strong business skillset and professional network (career capital) so that I can better help impact-driven organizations and people. The hope is to provide a complementary professional skill set as well as a foot in the door with private sector funders, collaborators, and stakeholders. If anything in that ballpark sounds helpful for you, please don't hesitate to reach out!


Comments

*Edit: I accidentally hit Save before I was finished and went back to finish it

*I started writing this the week after your reply but went too deep down a rabbit hole and didn't get around to finishing it. Apologies for the delay! Note: the first portion was written 3 months ago (Novemberish 2023) and the latter portion was written today (12 Feb 2024)

Preamble

OK - I've had a bit more time to read through some of your writing and some of the comments to give myself a little context, so hopefully I can contribute a bit more meaningfully now.

Before getting into details though, probably best to frame things:

  1. My initial comment was aimed solely at responding to your original comment in this thread, in a relative vacuum, without having read the paper or summary. Now that I've read the summary you shared[1], I imagine we could have a much longer discussion on quite a few points where we may productively disagree -> however, to keep things concise I'll focus the discussion here on this specific line from your comment above: "My suspicion is that [no article of a similar style arguing against EA principles] can exist because there's no reasonable way to make such an argument; insinuation and "political" critique is all that the critics have got"
  2. As I had not, at the time of my original comment, read further, I was not aware of your definitions of "insinuation" and "political critique" -> now, having read more, it would probably be helpful to clearly share those definitions, as I understand them from your writing, here. (If I've misunderstood, please let me know!)
    1. Insinuation: any critical, disparaging, or otherwise negative commentary that is made without significant explanation, evidence, reasoning, good-faith argumentation, or further context.
    2. Political Critique: criticism that focuses not upon principles, but rather on practical, real-world matters. [2]
  3. While I have personally engaged with people who have presented many critiques of Effective Altruism, I've never tried to assess criticism systematically, and most of my familiarity with critiques of EA comes from undocumented, anecdotal encounters. I also don't regularly read or subscribe to many of the media wherein formalized criticism of EA might be most common, so I'm not very familiar with whatever existing body of external criticism there is[3]. It is probably worthwhile to distinguish which kinds of criticism we want to address:
    1. Formal critiques: Pieces of criticism that are documented and were made with at least a reasonable degree of intentionality, thought, and a clear purpose of arguing against some aspect of (or associated with) EA. Examples may include academic and non-academic articles, in-depth blog posts, podcasts, pieces of journalism, formal debates, books, etc. It's probably better not to consider idle social media commentary, one-sided ranting in informal settings, or casual anecdotal conversation
    2. External critiques: Pieces of criticism that come from sources that don't identify as part of the EA movement. While there is plenty of criticism shared by and among people under the EA umbrella, I posit that external criticism provides some unique value as it seems more likely to represent 'public opinion', to consider factors that may be neglected within EA, to propose different ways of thinking than those commonly used within EA, and to be less biased by various 'in-group' effects

^I hope this sounds reasonable - if you'd like to modify any points please let me know :)

On another note, at some point (time permitting) I would love to flesh out a more comprehensive post synthesizing and summarizing criticism of EA in a more rigorous, systematic and thoughtful way. However, a project like that seems like it would take quite a bit of work and collaboration, so I'm not too optimistic I'll be able to take it on personally (at least not in the near future) :( 

Examples of (semi-)Formal Criticism

Here I've collected an incomplete list of several critiques of EA, sorted by my best guess of where they fall along several relevant criteria

Concerns about Narrow Goalposts and Dismissing 'Political' Criticism

"As an academic, I think we should assess claims primarily on their epistemic merits, not their practical consequences." from page 33 of your paper -> from a purely academic philosophical perspective I could understand this claim if the word 'epistemic' were replaced with a term like 'ethical', 'logical', or 'philosophical', as the basic tenets of EA are pretty defensible on paper. However, the word 'epistemic' relates to knowledge, and generally considers evidence alongside logic. To ignore 'practical consequences' would be to ignore a large body of evidence that may help inform our perspective on EA's merits. Of course, there are many confounding variables that obscure the relationship between the core philosophical tenets of EA and the 'practical consequences' of EA, and these should lead us to think carefully before updating our perspective of EA's merits based on any one given piece of real-world evidence. However, deprioritizing practical consequences entirely seems like it would cause us to miss some key considerations.

Let's imagine that EA's core ideas are applied in many different scenarios and that, separately, a randomized sample of mainstream ethical frameworks is applied in those same scenarios. If, after a statistically robust number of trials, we observed that the EA-applied scenarios led to worse outcomes on average than the other group, it would certainly lead me to question the epistemic merits of EA's core claims. While this level of experimental rigor would be impractical, I believe a naturalistic observation comparing the successes and failings of EA vs. equivalent non-EA frameworks would be a reasonable proxy for modestly bolstering or weakening (updating) my perception of the merits of EA's core tenets.

Additionally, given the focus within Effective Altruism on applied ethics, highlighted by the name's use of the word "Effective", it seems to me that one of the core claims is that it is important to examine practical consequences when evaluating how good or bad an idea is. Assessing the merit of EA's core ideas purely on non-'political' critique seems to run counter to those very core ideas. In fact, I would imagine that a good-faith interpretation of EA's core principles would lead one to rigorously assess all kinds of critiques, philosophical as well as political, and to constantly update our beliefs and actions.

Circling back to your paper, on pages 33 & 34, you continue:

"But insofar as the political critique disavows this academic norm, it must also expose itself to practical evaluation. And in this case, the harm it risks is clear and grave. Political opponents of effective altruism have very likely caused the deaths of a great many children*

*In the counterfactual sense that, had they not acted thus, those deaths would not have occurred. Which is not, of course, to claim that they are the direct cause of death."

Personally, I don't find this argument particularly compelling because 1) it lumps all political opponents of EA into one group, 2) it makes a very large claim with no supporting evidence, and 3) the hypothetical 'political' wrongness of the critics doesn't affect the hypothetical 'political' wrongness of EA (it seems like a form of whataboutism[4]). Of course, I'm sure you have many more perfectly legitimate arguments for why we shouldn't place undue credence in political critiques, but that's a debate I would like to see fleshed out more fully than it has been in this discussion thus far before I am convinced.

Side note, JerL's comment on your Substack Post raises some points I find compelling :)

Concerns about How We Approach Engaging with Criticism of EA

I posit that people in the EA space should be more receptive to criticism from outside of EA, even if it is flawed by EA standards, for several reasons:

  • People in EA, even those who have trained in 'good epistemics', are still susceptible to any number of biases that could lead us to under-value external critique and over-value things that confirm our views
  • Engaging in good faith with diverse critiques of EA aligns with several of the core values of EA
  • The way people in the EA space behave in response to criticism can have an impact -> responding to criticism with openness and empathy is likely to lead to better outcomes for EA

Regardless of how 'correct' or not EA's principles are, the way that people in the EA orbit absorb, assess, and respond to criticism is important and can have real consequences. I have noticed a trend, both on the EA Forum and in discussions with people from EA-aligned organizations at EAGs and other EA events, that the most popular responses to external criticism of EA tend to be highly dismissive and focus more on tearing down the critic's arguments than on making a good-faith effort to engage with the critic's underlying sentiment and intention.

EA, as you have cited, places a very high value on self-critique and has invested in a significant number of diverse initiatives to promote such critique, such as the red-teaming contest. However, such criticism suffers from a huge blind spot: people who are already associated with EA enough to participate in that type of critique are a severely biased sample.

It can often seem like critiques of EA from people outside the EA space are only taken seriously by EAs if those critiques mold themselves to meet the specific criteria, argumentative formulations, and style preferred by people within the EA space. If that is the case (it could just be my personal perception!), then we risk missing out on the diverse perspectives of the vast majority of people who are not inclined to communicate their perspectives in an 'EA way'.

A portion of EA thought emphasizes the value of worldview diversification[5], in large part because there has been a significant amount of research on the practical value-add of diversity (though the evidence is much more nuanced than is often portrayed in common discussion)[6]. Part of worldview diversification includes engaging with styles of argument that do not align with our own, as well as engaging with arguments from people whose beliefs and backgrounds are very different from our own. A very well-intentioned person who isn't comfortable speaking in academic jargon or assembling logical arguments to a forensic standard may still have great points, and we would benefit from engaging with those points.

Beyond the potential epistemic benefits of engaging with external critique, the way in which we engage with critique has an impact in and of itself. If EAs' most popular reactions to external criticism of EA are negative, dismissive, patronizing, or just generally don't attempt to meet the critic where they are, then we may only serve to perpetuate negative impressions of EA and create a chilling effect on dissent within the EA space.

I'm not sure whether pro-EA responses to critiques of EA get more upvotes, agrees, and karma than critical-of-EA responses on the forum, but it seems plausible that might be the case. I'm also not present enough on X or other social media platforms to see what the average EA response to criticism looks like; it could be very respectful and well received! But it isn't hard to imagine that some responses by some EAs to criticism might be dismissive, come across as 'elitist', or at least be somewhat alienating to the non-EAs who see them. Regardless, such responses are bound to have at least a modest effect on the EA 'brand', and I would hope that we err on the side of engaging in good-faith, empathetic, personable responses when reasonable. (If the majority of EA responses to external criticism are already like that, great, let's keep it up! If they aren't, that's unfortunate)

To try to get some sense of how this dynamic plays out (at least on the EA Forum), I spent some time looking through the forum for external and internal critiques of EA; luckily, @JWS shared this list collecting some criticism of EA criticism. As a little exercise, reading through the pieces JWS linked and the comments below, a couple of things popped out to me:

  • There are drastically more entries under the topic tag "Criticism of Effective Altruism" that are written by, and for, EAs than there are entries that engage with external criticism of EA
  • Of the entries that do engage with external criticism of EA, several simply share the original critiques to open discussion and several counter-criticize the criticism, but I haven't found any posts that agree with, or claim to have updated their thoughts based upon, external critiques -> my assumption would be that people on the EA Forum are, on average, more motivated to refute external criticism than to engage or empathize with it.
  • There are quite a few critiques of EA that aren't mentioned anywhere on the EA Forum - I can't be sure why, but it's plausible this supports the point above
  • It's hard to find external critiques of EA on the forum...

One last note

I really appreciate you engaging on this so openly! Really respect your ideas and everything you bring to the table :) 

Apologies if any of my counter-arguments misunderstood your original points or don't seem fair; I'm sure I'm off base in a few places and am happy to update

  1. ^

    Unfortunately I don't have the time to make it through the full paper right now :( I'm sure you share a lot of very valuable arguments therein 

  2. ^

    In my limited understanding, the distinction between "political" vs. "principle"-based critique is similar to the distinction between a consequentialist vs. deontological approach, whereby "political" criticism refers to how things have actually played out in the real world and "principle"-based criticism refers to how good the underlying ideas are

  3. ^

    I'm much more familiar with internal criticism shared on the EA Forum, during EA events, etc.

  4. ^

    https://en.wikipedia.org/wiki/Whataboutism

  5. ^

    Example from Open Philanthropy: https://www.openphilanthropy.org/research/worldview-diversification/

  6. ^

    A couple relevant studies: 
    https://pubmed.ncbi.nlm.nih.gov/30765101/
    https://journals.sagepub.com/doi/10.1177/0149206307308587

Thank you so much for taking on this project and communicating the results! I find this kind of work highly valuable and would love to see similar initiatives conducted more regularly across the spectrum of topics where there are gaps between the relevant EA communities and non-EA communities.

It’s very encouraging to see a good faith attempt at “worldview diversification” in practice :)

2- makes sense!

1, 3, 4 - Thanks for sharing (the NYT summary isn't working for me, unfortunately), but I see your reasoning here that the intention and/or direction of the attempted ouster may have been “good”.

However, I believe the actions themselves represent a very poor approach to governance and demonstrate a very narrow focus that clearly didn't appropriately consider many of the key stakeholders involved. Even assuming the best intentions, from my perspective, when a person has been placed on the board of such a consequential organization and is explicitly tasked with helping to ensure effective governance, the degree to which this situation was handled poorly is enough for me to come away believing that the “bad” of their approach outweighs the potential “good” of their intentions.

Unfortunately, it seems likely that this entire situation will wind up backfiring relative to what was (we assume) intended, by creating a significant amount of negative publicity for, and negative sentiment towards, the AI safety community (and EA). At the very least, there is now a new (all-male 🤔 but that's a whole other thread to expand upon) board with members who seem much less likely to be concerned about safety. And now Sam and the less cautious cohort within the company seem to have a significant amount of momentum and goodwill behind them internally, which could embolden them along less cautious paths.

To bring it back to the “good guy bad guy” framing. Maybe I could buy that the board members were “good guys” as concerned humans, but “bad guys” as board members.

I’m sure there are many people on this forum who could define my attempted points much more clearly in specific philosophical terms 😅 but I hope the general ideas came through coherently enough to add some value to the thread.

Would love to hear your thoughts and any counter points or alternative perspectives!

Hey Ryan :)

I definitely agree that this situation is disappointing, that there is a wedge between the AI safety community and Silicon Valley mainstream, and that we have much to learn.

However, I would push back on the phrasing “we are at least the good guys” for several reasons. Apologies if this seems nitpicky or uncharitable 😅 it just caught my attention and I hoped to start a dialogue

  1. The statement suggests we have a much clearer picture of the situation and factors at play than I believe anyone currently has (as of 22 Nov 2023)
  2. The “we” phrasing seems to suggest that the members of the board in question are representative of EA as a group
    a. I don't believe their views or actions are well enough known to assess how in line they are with general EA sentiment
    b. I don't think the EA community has a strong consensus on the issues of this particular case
  3. Many people, in good faith and with substantive arguments, come to the opposite conclusion and see the actions of the board as having been “bad”, and are highly critical of the potential influence EA may have had in the situation
  4. Flattening the situation to “good guys” and “bad guys” seems to be a bit of a rhetorical trap that is risky from an epistemological perspective. (I’m sure you have much more nuanced underlying reasons and pieces of evidence, and I completely respect using a rhetorical shortcut to simplify a point - apologies for the pedantry!)

Maybe on a more interesting note, I actually interpret this case quite differently and think that the board made a serious mistake and come out of this as the “less favorable” party. I’d love to discuss in more depth about your reasons for seeing their actions positively and would be happy to share more about why I see them negatively if you’re interested 😊

I would strongly push back against the idea that “insinuation and ‘political’ critique” are all that critics have. Currently posting from my phone before bed, but happy to follow up at a later date, once I have some free time, with a more in-depth and substantive discussion on the matter if you'd be interested :)

For this quick message, though, I hope it is at least fair to suggest that dismissing critiques offhand is potentially risky, as we are naturally inclined to steelman our own favored conclusions and strawman the arguments against them, which doesn't do us any favors epistemologically speaking

I believe the TIME article has been updated since its original publication to reflect your response. If you have the chance, would you be able to comment on the updated version?

Excerpt taken as of 18:30 PST 3 Feb 2023:

"In an email following the publication of this article, Wise elaborated. “We’re horrified by the allegations made in this article. A core part of our work is addressing harmful behavior, because we think it’s essential that this community has a good culture where people can do their best work without harassment or other mistreatment,” Wise wrote to TIME. “The incidents described in this article include cases where we already took action, like banning the accused from our spaces. For cases we were not aware of, we will investigate and take appropriate action to address the problem.”"

Thank you for the very in-depth post! I've had a lot of conversations about the subject myself over the past several months and considered writing a similarly themed post, but it's always nice to find that some very talented people have already done a fantastic job carefully considering the topic and organizing the ideas into a coherent piece :)

On that note, I'm currently working on a thesis on effective hiring/selection methods in social-mission startups, with the hope of creating a free toolkit to help facilitate recruitment in EA (and other impact-driven) orgs. If you have any bandwidth, I'd love to learn more about your experience with the talent ecosystem in EA and see whether I could better tailor my project to help address some of the gaps/opportunities you've identified

The tent and campground analogy and vocabulary are very helpful, thank you! I wish I'd had them in my toolkit a few weeks ago when trying to discuss the nuances of community building at an EA retreat - they probably would've saved a lot of time and made for better mutual understanding. Glad to have them going forward though!
