Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
7141 karma · Joined · Working (6-15 years)

Participation
4

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences
2

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments
838

Topic contributions
1

I'm cheating a bit, because both of these are well on their way, but two big current goals:

  1. Get Israel to iodize its salt!
  2. Run an expert elicitation on Biorisk with RAND and publish it.
     

Not predictions as such, but lots of current work on AI safety and steering is based pretty directly on paradigms from Yudkowsky and Christiano - from Anthropic's constitutional AI to ARIA's Safeguarded AI program. There is also OpenAI's Superalignment research, which was attempting to build AI that could solve agent foundations - that is, explicitly do the work that theoretical AI safety research identified. (I'm unclear whether that last effort is ongoing, given that they managed to alienate most of the people involved.)

I strongly agree that you need to put your own needs first, and think that your level of comfort with your savings and ability to withstand foreseeable challenges is a key input. My go-to in general is that the standard advice of keeping 3-6 months of expenses is a reasonable goal - so you can and should give, but until you have saved that much, you should at least be splitting your excess funds between savings and charity. (And the reason most people don't manage this has a lot to do with lifestyle choices and failure to manage their spending - not just not having enough income. Normal people never have enough money to do everything they'd like to; set your expectations clearly and work to avoid the hedonic treadmill!)

To follow on to your point, as it relates to my personal views (in case anyone is interested), it's worth quoting the code of Jewish law. It introduces its discussion of Tzedakah by asking how much one is required to give. "The amount, if one has sufficient ability, is giving enough to fulfill the needs of the poor. But if you do not have enough, the most praiseworthy version is to give one fifth, the normal amount is to give a tenth, and less than that is a poor sign." And I note that this was written in the 1500s, when local charity was the majority of what was practical; today's situation is one where the needs are clearly beyond any one person's ability - so the latter clauses are the relevant ones.

So I think that, in a religion that prides itself on exacting standards and exhaustive rules for the performance of mitzvot, this is endorsing exactly your point: while giving might be a standard, and norms and community behavior are helpful in guiding behavior, the amount to give is always a personal and pragmatic decision, not a general rule.

You seem to be framing this as if deontology is just side constraints with a base of utilitarianism. That's not how deontology works - it's an entire class of ethical frameworks on its own.

Deontology doesn't require you not to have any utilitarian calculations, just that the rules to follow are not justified solely on the basis of outcomes. A deontologist can believe they have a moral obligation to give 10% of their income to the most effective charity as judged by their expected outcomes, for example, making them in some real sense a strictly EA deontologist.

You seem to be generally conflating EA and utilitarianism. If nothing else, there are plenty of deontologist EAs. (Especially if we're being accurate with terminology!)

Agreed, this shouldn't be an update for anyone paying attention. Of course, lots of people skeptical of AI risks aren't paying attention, so the actual level of capabilities is still being dismissed as impossible sci-fi; it's probably good for them to notice.

I don't think that people making mild bounded commitments is bad - I'm more concerned about the community dynamics of selecting people who make these commitments and stick with them, and the impact it has on the rest of the community.
