Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
7509 karmaJoined Working (6-15 years)

Participation
4

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences
2

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments
923

Topic contributions
1

This would benefit greatly from more in-depth discussion with people familiar with the technical, regulatory, and economic issues involved. It talks about a number of things that aren't actually viable as described, and makes a number of assertions that are implausible or false.

That said, I think it's directionally correct about a lot of things.

You seem to have ignored a central part of what Daniela Amodei said: "I'm not the expert on effective altruism," a claim which seems hard to defend.

As always, and as I've said in other cases, I don't think it makes sense to ask a disparate movement to make pronouncements like this.

You should add an edit to clarify the claim, not just reply.

In addition to the fundamental problem that we don't know how to tell whether models are safe after release, much less in advance, blacklists for software, websites, etc. have historically been easy to circumvent, for a variety of reasons, effectively all of which seem likely to apply here.

Strong +1 to the extra layer of scrutiny, but at the same time, there are reasons privileged people are at the top in most places, having to do with the actual advantages they have and bring to the table. This is unfair and a bad thing for society, but it is also a fact to deal with.

If we wanted to try to address the unfairness and disparity, that seems wonderful, but simply recruiting people from less privileged groups doesn't accomplish what is needed. Some obvious additional parts of the puzzle include needing to provide actual financial security to the less privileged people, helping them build networks outside of EA with influential people, and coaching and feedback.

Those all seem great, but I'm uncertain it's a reasonable use of the community's limited financial resources - and we should nonetheless acknowledge this as a serious problem.

This seems great, but it does something I keep seeing that is kind of indefensible: assuming longtermism requires consequentialism.

Given the resolution criteria, the question is in some ways more about Wikipedia policies than the US government...

What about the threat of strongly superhuman artificial superintelligence?

79% agree

If we had any way of tractably doing anything with future AI systems, I might think there was something meaningful to talk about for "futures where we survive."

See my post here arguing against that tractability.
