Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
7360 karma · Working (6-15 years)

Participation (4)

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences (2)

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments (882)

Topic contributions (1)

I've said much the same, explicitly focused on this.

See: https://forum.effectivealtruism.org/posts/jGYoDrtf8JGw85k8T/my-personal-priorities-charity-judaism-and-effective:

To quote the most relevant part: "Lastly, local organizations or those where I have personal affiliations or feel responsibilities towards are also important to me - but... this is conceptually separate from giving charity effectively, and as I mentioned, I donate separately from the 10% dedicated to charity. I give to other organizations, including my synagogue and other local community organizations, especially charities that support the local poor around Jewish holidays, and other personally meaningful projects. But in the spirit of purchasing fuzzies separately, this is done with a smaller total amount, separate from my effective giving."

To respond to your points in order:

  1. Sure, but I think of, say, a 5% probability of success and a 6% probability of success as similarly dire: dire enough that I wouldn't want to pick either.
  2. What we call AGI today, human-level at everything as a minimum but running on a GPU, is what Bostrom called speed and/or collective superintelligence, if chip prices and speeds continue to improve.
  3. and 4. Sure, alignment isn't enough, but it's necessary, and it seems we're not on track to clear even that low bar.

I think we basically agree, but I wanted to add the note of caution. Also, I'm evidently more skeptical of the value of evals, as I don't see a particularly viable theory of change.

"Don't cause harm"

It is not obvious to me that a number of the suggested actions here meet this bar. Developing evals, funding work that accidentally encourages race dynamics, or engaging in fear-mongering about current, largely harmless or even net-positive AI applications all seem likely to fall short of it.

In my personal view, there was a tremendous failure by global health security organizations to capitalize on the crisis: they were focused on stopping spread, and waited until around mid-2021 to start looking past COVID. This was largely a capacity issue, but it was also a strategic failure, and by the time anyone was seriously looking at things like the pandemic treaty, the window had closed.

This seems great - I'd love to see it completed, polished a bit, and possibly published somewhere. (If you're interested in more feedback on that process, feel free to ping me.)

I certainly agree it's some marginal evidence of propensity, and that the outcome, not the intent, is what matters - but don't you think that mistakes become less frequent with greater understanding and capacity?

Agreed on impacts - but I think intention matters when considering what the past implies about the future, and as I said in another reply, on that basis I will claim the Great Leap Forward isn't a reasonable basis to predict future abuse or tragedy.

Thanks for writing and posting this!

I think it's important to say this because people often over-update on the pushback they hear about, since the second-order effects are visible, but they don't notice that the counterfactual is the thing in question not happening at all, which far outweighs the real but typically comparatively minor problems created.

Not to answer the question, but to add a couple of links that I know you're aware of but didn't explicitly mention: there are two reasons EA does better than most groups. First, EA is adjacent to and overlaps with the lesswrong-style rationality community, and its years of writing on better probabilistic reasoning, and on why and how to reason more explicitly, had a huge impact. Second, there is the similarly adjacent forecasting community, which was kickstarted in a real sense by people affiliated with FHI (Matheny and IARPA, Robin Hanson, and Tetlock's later involvement).

Both of these communities have spent time thinking about better probabilistic reasoning, and have lots of things to say about thinking probabilistically in general, rather than implicitly asserting certainty based on which side of 50% something falls. And many in EA, including myself, have long advocated for these ideas being even more centrally embraced in EA discussions. (Especially because I will claim that the concerns of the rationality community keep proving relevant to EA's failures, or prescient of later-embraced EA concerns and ideas.)
