Robi Rahman

Data Scientist @ Epoch
1452 karma · Joined · Working (6-15 years) · New York, NY, USA
www.robirahman.com

Bio

Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.

Comments (217)

@Ben Kuhn has a great presentation on this topic. Relatedly, nonprofits have worse names: see org name bingo.

Hey! You might be interested in applying to the CTO opening at my org:

https://careers.epoch.ai/en/postings/f5f583f5-3b93-4de2-bf59-c471a6869a81

(For what it's worth, I don't think you're irrational; you're just mistaken about Scott being racist and about what happened with the Cade Metz article. If someone in EA really is racist, and you complain to EA leadership and they don't do anything about it, you could reasonably be angry with them. If the person in question is not in fact racist, and you complain about them to CEA and they don't do anything about it, then they made the right call and you'd be upset because of your mistaken beliefs, but conditional on those beliefs, it wasn't irrational to be upset.)

Thanks, that's a great reason to downvote my comment, and I appreciate you explaining why you did it (though it has gotten some upvotes, so I wouldn't have noticed the downvote if you hadn't mentioned it). And yes, I misread whom your paragraph was referring to; thanks for the clarification.

However, you're incorrect that those factual errors aren't relevant. Your feelings toward EA leadership are based on a false factual premise, and we shouldn't be making decisions about branding with the goal of appealing to people who are offended based on their own misunderstanding.

Leadership betrayal: My reasoning is anecdotal, because I went through EA adjacency before it was cool. Personally, I became "EA Adjacent" when Scott Alexander's followers attacked a journalist for daring to scare him a little -- that prompted me to look into him a bit, at which point I found a lot of weird race IQ, Nazis-on-reddit, and neo-reactionary BS that went against my values.

  1. Scott Alexander isn't in EA leadership
  2. This is also extremely factually inaccurate - every clause in the part of your comment I've italicized is at least half false.

This is actually disputed. While so-called "bird watchers" and other pro-bird factions may tell you there are many birds, the rival scientific theory contends that birds aren't real.

  • Birds are the only living animals with feathers.

That's not true, you forgot about the platypus.

When a reward or penalty is sufficiently small, it can be less effective than no incentive at all, sometimes because it replaces an implicit incentive.

In the study, a daycare had a problem with parents showing up late to pick up their kids, forcing the staff to stay late to watch them. The daycare tried to fix this by introducing a small fine for late pickups, but it had the opposite of the intended effect: the fine replaced the implicit social obligation to arrive on time with a price, and parents decided they were okay with paying it, so late pickups increased.

In this case, if you believe recruiting people to EA does a huge amount of good, you might think that it's very valuable to refer people to EAG and that there should be a big referral bounty.

From an altruistic cause prioritization perspective, existential risk seems to require longtermism

No it doesn't! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.

When I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it's non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we're all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)

working on AI x-risk is mostly about increasing the value of the future, because, in his view, it isn't likely to lead to extinction

Ah yes I get it now. Thanks!
