
Evan_Gaensbauer

2370 karma · Joined · Working (6-15 years) · Pursuing other degree/diploma

Participation (3)

  • Attended an EA Global conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Posts (74)


Sequences (3)

Setting the Record Straight on Effective Altruism as a Paradigm
Effective Altruism, Religion and Spirituality
Wild Animal Welfare Literature Library

Comments (863)

I'm tentatively interested in participating in some of these debates. That would depend on the details of how the debates would work or be structured.

This is a section of an EAF post I've begun drafting about the community and culture of EA in the Bay Area and its impact on the rest of EA worldwide. That post isn't intended to be only about longtermism as it relates to EA, as an overlapping philosophy/movement whose origins are often attributed to the Bay Area. I still feel my viewpoint, even in this rough form, is worth sharing as a quick take post.

@JWS 🔸 self-describes as "anti-Bay Area EA." I get where anyone is coming from with that, though the issue is that, pro- or anti-, this particular subculture in EA isn't limited to the Bay Area. It's bigger than that, and pointing to the Bay Area as the source of either greatness or setbacks in EA strikes me as a wrongheaded sort of provincialism. To clarify, "Bay Area EA" culture specifically entails the stereotypes, both accurate and misguided, of the rationality community and longtermism, as well as the trappings of startup culture and other overlapping subcultures in Silicon Valley.

Prior even to the advent of EA, a sort of ‘proto-longtermism’ was collaboratively conceived on online forums like LessWrong in the 2000s. Then, as now, a plurality of the userbase of those forums may have lived in California. Yet it wasn't only rationalists in the Bay Area who took up the mantle of consecrating those futurist memeplexes into what longtermism is today; it was academic research institutes and think tanks in England. It wasn't @EliezerYudkowsky, nor anyone else at the Machine Intelligence Research Institute or the Center for Applied Rationality, who coined the phrase ‘longtermism’ and wrote entire books about it. That was @Toby_Ord and @William_MacAskill. It wasn't anyone in the Bay Area who spent a decade politically and academically legitimizing longtermism as a prestigious intellectual movement in Europe. That was the Future of Humanity Institute (FHI), spearheaded by the likes of Nick Bostrom and @Anders Sandberg, and the Global Priorities Institute (GPI).

In short, if it's going to be made about culture like that, EA is an Anglo-American movement and philosophy (notwithstanding other features introduced from Germany via Schopenhauer). It takes two to tango. This is why I think calling oneself "pro-" or "anti-" Bay Area EA is pointless.

I'm working on some such resources myself. Here's a link to the first one: a complete-to-date list of posts in the still-ongoing series on the blog Reflective Altruism.

https://docs.google.com/document/d/1JoZAD2wCymIAYY1BV0Xy75DDR2fklXVqc5bP5glbtPg/edit?usp=drivesdk

To everyone on the team making this happen:

This seems like it could one day become the greatest thing to which Open Philanthropy, Good Ventures and, by extension, EA ever contribute. Thank you!

To others in EA who may understandably be inquisitive about such a bold claim:

Before anyone asks, "What if EA is one day responsible for ending factory farming or unambiguously reducing existential risk to some historic degree? Wouldn't that be even greater?"

Yes, those, or some of the other highest ambitions among effective altruists, might be greater. Yet there's far less reason to be confident EA can be the fulcrum for ending the worst of those problems. Ending so much lead exposure in every country on Earth could be the most straightforward grand slam ever.

When I mention it could be the greatest, though, that's not just a comparison between focus areas in EA. That question is so meta and complicated that which focus area has the greatest potential to do good has never generally been resolved. It's sufficient to clarify that this endeavour could be the greatest outcome ever accomplished within the single EA focus area of global health and development. It could exceed the value of all the money that has ever flowed through EA to any charity GiveWell has ever recommended.

I'll also clarify that I don't mean "could" in that more specific claim in some euphemistic sense, i.e., making a confident but vague claim to avoid accountability for a forecast. I just mean "could" in the sense that it's a premise worth considering. That there's even a remote chance this could exceed everything EA has achieved in treating neglected tropical diseases is remarkable enough.

Indeed, something is lost even when AI makes dank memes.

I agree that it's not reasonable for someone to work out e.g. exactly where the funding comes from, but I do think it's reasonable for them to think in enough detail about what they are proposing to realise that a) it will need funding, b) possibly quite a lot of funding, c) this trades off against other uses of the money, so d) what does that mean for whether this is a good idea. Whereas if "EA" is going to do it, then we don't need to worry about any of those things. I'm sure someone can just do it, right?

I am at least one such someone who not only can, but has already decided to, at least begin doing it. To that end, for myself or perhaps others, there are some individuals I have in mind to contact who may be willing to provide at least a modicum of funding, or who would know others willing to do so. In fact, I have already begun that process.

There wouldn't be a tradeoff with other uses of at least some of that money, since I'm confident at least some of those individuals would not otherwise donate or use that money to support, e.g., an organization affiliated with, or a charity largely supported by, the EA community. (That's because some of the individual funders in question are not effective altruists.) While I agree it may not be a good idea for EA as a whole to go about this in some quasi-official way, I've concluded there aren't yet any particularly strong arguments against the sort of "someone" you had in mind doing so.
