
Evan_Gaensbauer

2363 karma · Joined · Working (6-15 years) · Pursuing other degree/diploma

Participation
3

  • Attended an EA Global conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Posts
74


Sequences
3

Setting the Record Straight on Effective Altruism as a Paradigm
Effective Altruism, Religion and Spirituality
Wild Animal Welfare Literature Library

Comments
862

I'm tentatively interested in participating in some of these debates. That would depend on the details of how the debates would be structured.

This is a section of an EAF post I've begun drafting about the community and culture of EA in the Bay Area, and its impact on the rest of EA worldwide. That post isn't intended to be only about longtermism, an overlapping philosophy/movement often attributed originally to the Bay Area, as it relates to EA. Even in this rough form, I've felt my viewpoint here is worth sharing as a quick take post.

@JWS 🔸 self-describes as "anti-Bay Area EA." I get where anyone is coming from with that, though the issue is that, pro- or anti-, this particular subculture in EA isn't limited to the Bay Area. It's bigger than that, and pointing to the Bay Area as the source of either greatness or setbacks in EA strikes me as a wrongheaded sort of provincialism. To clarify, "Bay Area EA" culture specifically entails the stereotypes, both accurate and misguided, of the rationality community and longtermism, as well as the trappings of startup culture and other overlapping subcultures in Silicon Valley.

Prior even to the advent of EA, a sort of ‘proto-longtermism’ was collaboratively conceived on online forums like LessWrong in the 2000s. Back then, as now, a plurality of the userbase of those forums may have lived in California. Yet it wasn't only rationalists in the Bay Area who took up the mantle of consecrating those futurist memeplexes into what longtermism is today. It was academic research institutes and think tanks in England. It wasn't @EliezerYudkowsky, nor anyone else at the Machine Intelligence Research Institute or the Center for Applied Rationality, who coined the phrase ‘longtermism’ and wrote entire books about it. That was @Toby_Ord and @William_MacAskill. It wasn't anyone in the Bay Area who spent a decade trying to politically and academically legitimize longtermism as a prestigious intellectual movement in Europe. That was the Future of Humanity Institute (FHI), spearheaded by the likes of Nick Bostrom and @Anders Sandberg, and the Global Priorities Institute (GPI).

In short, if it's going to be made about culture like that, EA is an Anglo-American movement and philosophy (notwithstanding other features introduced via Germany through Schopenhauer). It takes two to tango. This is why I think calling oneself "pro-" or "anti-" Bay Area EA is pointless.

I'm working on some such resources myself. Here's a link to the first one: a complete list, as of now, of posts in the still-ongoing series on the blog Reflective Altruism.

https://docs.google.com/document/d/1JoZAD2wCymIAYY1BV0Xy75DDR2fklXVqc5bP5glbtPg/edit?usp=drivesdk

To everyone on the team making this happen:

This seems like it could one day become the greatest thing to which Open Philanthropy, Good Ventures and, by extension, EA ever contribute. Thank you!

To others in EA who may understandably be inquisitive about such a bold claim:

Before anyone asks, "What if EA is one day responsible for ending factory farming or unambiguously reducing existential risk to some historic degree? Wouldn't that be even greater?"

Yes, those or some of the other highest ambitions among effective altruists might be greater. Yet there's so much less reason to be confident EA can be that fulcrum for ending those worst of problems. Ending so much lead exposure in every country on Earth could be the most straightforward grand slam ever.

When I mention it could be the greatest, though, that's not just a comparison between focus areas in EA. That question is so meta and complicated that which focus area has the greatest potential to do good has generally never been resolved. It's sufficient to clarify that this endeavour could be the greatest outcome ever accomplished within the single EA focus area of global health and development. It could exceed the value of all the money that has ever flowed through EA to any charity GiveWell has ever recommended.

I'll also clarify that I don't mean "could" in that more specific claim in some euphemistic sense, of making a confident but vague claim to avoid accountability for a forecast. I just mean "could" in the sense that it's a premise worth considering. The fact that there's even a remote chance this could exceed everything achieved within EA to treat neglected tropical diseases is remarkable enough.

Indeed, something is lost even when AI makes dank memes.

I agree that it's not reasonable for someone to work out e.g. exactly where the funding comes from, but I do think it's reasonable for them to think in enough detail about what they are proposing to realise that a) it will need funding, b) possibly quite a lot of funding, c) this trades off against other uses of the money, so d) what does that mean for whether this is a good idea. Whereas if "EA" is going to do it, then we don't need to worry about any of those things. I'm sure someone can just do it, right?

I am at least one "someone" who not only can, but has already decided to at least begin doing it. To that end, whether for myself or perhaps even others, there are already some individuals I have in mind to begin contacting who may be willing to provide at least a modicum of funding, or who would know others who might. In fact, I have already begun that process.

There wouldn't be a tradeoff with other uses of at least some of that money, given that I'm confident at least some of those individuals would not otherwise donate or use that money to support, e.g., an organization affiliated with, or a charity largely supported by, the EA community. (That's because some of the individual funders in question are not effective altruists.) While I agree it may not be a good idea for EA as a whole to go about this in some quasi-official way, I've concluded there aren't yet any particularly strong arguments against the sort of "someone" you had in mind doing so.

While recognizing the benefits of the anti-"EA should" taboo, I also think it has some substantial downsides and should only be invoked after consideration of the specific circumstances at hand.

One downside is that the taboo can impose significant additional burdens on a would-be poster, discouraging them from posting in the first place. If writing "X should be done" takes significant time, and it is far from certain others will agree, then the taboo requires the poster to invest additional significant time figuring out and writing "and it should be done by Y" before knowing whether the former will get any traction. Being okay with the would-be poster deferring certain subquestions (like "who") means that effort can be saved if there's not enough traction on the basic merits.

As I've already mentioned in other comments, I have myself decided to begin pursuing a greater degree of inquiry, with haste. I've also publicly notified others that pushback offered solely on the basis of reinforcing or enforcing such a taboo is likely only to motivate me to do so with more gusto.

knowledge, or resources relevant to part of a complex question

I have some knowledge and access to resources that would be relevant to solving at least a minor but still significant part of that complex question. I refer to the details in question in my comment that I linked to above.

This isn't a do-ocracy project. Doing it properly is not going to be cheap (e.g., hiring an investigative firm), and so ability to get funded for this is a prerequisite. Expecting a Forum commenter to know who could plausibly get funding is a bit much. To the extent that that is a reasonable expectation, we would also expect the reader to know it; so it is a minor defect. To the extent that who could get funded is a null set, then bemoaning a perceived lack of willingness to invest in a perceived important issue in ecosystem health is a valid post.

To the extent I can begin laying the groundwork for a more thorough investigation, one going beyond the capacity of myself and prospective collaborators, such an investigation will now at least start snowballing as a do-ocracy project. I know multiple people who could plausibly begin funding this, and who in turn may know several other people who'd be willing to do so. Some of the funders in question may be willing to fund me uniquely, or a team I could (co-)lead, to begin doing the investigation in at least a semi-formal manner.

Those would be some quieter critics in the background of EA, or others who are no longer effective altruists but have long wanted an investigation like the one that has now begun to proceed. They might trust me in particular because of my reputation in the EA community over the years as one effective altruist who is more irreverent towards the pecking orders and hierarchies, both formal and informal, of any organized network or section of the EA movement. At any rate, at least to some extent, a lack of willingness from within EA to fund the first steps of an inquiry is no longer a relevant concern. I don't recall if we've interacted much before, though as you may soon learn, I am someone in the orbit of effective altruism who sometimes has an uncanny knack for meeting unusual or unreasonable expectations.

Many good investigations do not have a specific list of people/entities who are the target of investigatory concern at the outset. They have a list of questions, and a good sense of the starting points for inquiry (and figuring out where other useful information lies).

Having begun thinking several months ago about what I can contribute to such a nascent investigation, I already have in mind a list of several people, as well as some questions, starting points for inquiry, and an approach for identifying further potentially useful information. I intend to begin drafting a document to organize the process I have in mind, and I may be willing to share it privately, in confidence, with some individuals. You would be included, if you're interested.
