I'm also practicing how to give good presentations and introductions to AI Safety. You can see my YouTube channel here:
You might also be interested in one of my older presentations, number 293, which is closer to what you are working on.
Feel free to book a half-hour chat with me about this topic via this link:
>PauseAI suffers from the same shortcomings most lobbying outfits do...
I'm confused about this section: Yes, this kind of lobbying is hard, and the impact of a marginal dollar is very unclear. The acc-side also has far more resources (probably; we should be wary of this becoming a Bravery Debate).
This doesn't feel like a criticism of PauseAI. Limited tractability is easily outweighed by a very high potential impact.
I strongly agree. Almost all of the criticisms in this thread seem to start from assumptions about AI that are very far from those held by PauseAI. This thread really needs to be split up to factor that out.
As an example: If you don't think shrimp can suffer, then that's a strong argument against the Shrimp Welfare Project. However, that criticism doesn't belong in the same thread as a discussion about whether the organization is effective, because the two subjects are so different.
Your link is broken - it looks like it's been pasted twice.
This seems to be of questionable effectiveness. Brief answers/challenges:
Evaluations are a key input to ineffective governance. The safety frameworks presented by the frontier labs are "safety-washing", more appropriately considered roadmaps towards an unsurvivable future.
Disagreement on AI capabilities underpins performative disagreements on AI Risk. As far as I know, there is no recent, substantial published disagreement of this kind - I'd like sources for your claim, please.
We don't need more situational awareness of what current frontier models can and cannot do in order to respond appropriately. No decision-relevant conclusions can be drawn from evaluations in the style of Cybench and Re-Bench.