The discussion of AI risk went mainstream only recently, so amateurs were able to make real contributions within the past decade. I think this experience exacerbates the self-assuredness of nuke risk amateurs and leads them not to bother researching the expertise of the nuke community.
For decades, many experts have worked on nuke strategy, and have come up with at least a few risk-reducing paradigms:
- Arms control can work, and nations can achieve their achievable nuke goals (e.g., deterrence and maybe compellence) despite lower nuke counts, and can save money doing so.
- Counterforce targeting is (arguably) better than countervalue targeting.
- Escalation is arguably a ladder, not a binary on/off switch.
Based on its history of at least partial risk-reducing success, its academically rigorous arguments, and the sheer number of thoughtful hours spent, the establishment nuke community has probably done a decent job, and improvements are probably hard to find. One place to start is Fred Kaplan's book The Wizards of Armageddon. It isn't the best book on nuke strategy generally, but it focuses on the history of the nuke community, so it will hopefully engender at least some respect for that community and inspire further reading.
I think that in a field as well-established as nuke risk, improvements are more likely to be made on top of the existing field rather than by trying to re-invent the field.
Post-script: In a recent podcast (https://www.armscontrolwonk.com/archive/1216048/the-wizards-of-armageddon/), a nuke professional criticizes the EA community as unhelpfully amateurish, but he does mention some positive work by Peter Scoblic, which I believe is https://forum.effectivealtruism.org/posts/W8dpCJGkwrwn7BfLk/nuclear-expert-comment-on-samotsvety-nuclear-risk-forecast-2
Well, this isn’t how I wanted to start my engagement with the EA community.
I wouldn’t call the efforts of the EA community amateurish; if I said or implied that, I was wrong. I am actually really happy you exist.
Other things I actually think:
We need to do better, both by providing better data and by providing the data that you need, but I am slightly freaked out about the size of the gap we need to close. I want to close that gap, and I am kind of bummed if the way I said that in the podcast makes that less likely.
TL;DR: I don’t think you suck; I think you are poorly served by those of us who make your data.
Thanks for the engagement. To be clear, are you the Jeffrey Lewis quoted in the post above? :)
Yes, I am me.
Thanks Jeffrey! I hope we're a community where it doesn't matter so much whether you think we suck. If you think the EA community should engage more with nuclear security issues and should do so in different ways, I'm sure people would love to hear it. I would! Especially if you'd help answer questions like: How much can work on nuclear security reduce existential risk? What kind of nuclear security work is most important from an x-risk perspective?
I'd love to hear more about what your concerns and criticisms are. For example, I'd love to know: Is the Scoblic post the main thing that's informing your impression? Do you have views on this set of posts about the severity of a US-Russia nuclear exchange from Luisa Rodriguez (https://forum.effectivealtruism.org/s/KJNrGbt3JWcYeifLk)? Is there effective altruist funding or activity in the nuclear security space that you think has been misguided?
It seems extremely clear that working with the existing field is necessary to have any idea what to do about nuclear risk. That said, being a field specialist seems like a surprisingly small factor in forecasting accuracy, so I’m surprised by that being the focus of criticism.
I was interested in the criticism (32:02), so I transcribed it here:
It’s a shame that this doesn’t identify any specific errors, although that is consistent with Lewis’ view that the errors can’t be explained in minutes, perhaps even in years.
Speaking for myself, I agree with Lewis that popular ideas about nuclear weapons can be wildly, bizarrely wrong. That said, I’m surprised he highlights effective altruism as a community he’s pessimistic about being able to teach. The normal ‘cocktail party’ level of discourse includes alluring claims like ‘XYZ policy is totally obvious; we just have to implement it’, and the effective altruism people I’ve spoken to on nuclear issues are generally way less credulous than this, and hence more interested in understanding how things actually work.
I am skeptical of attempts to gatekeep here. For example, I found Scoblic's response to Samotsvety's forecast less persuasive than their post, and I am concerned that "amateurish" might just be used as a scold because the numbers someone came up with are too low for someone else's liking, or because they don't like putting numbers on things at all and feel doing so gives a false sense of precision.
That isn't to say this is the only criticism that has been made, but just to highlight one I found unpersuasive.
I am not an expert, but personally I see the current crop of nuke experts as primarily "evangelizers of the wisdom of the past". The nuke experts of the past, such as Tom Schelling, are more impressive (and more mathematical). If a better approach to nuke risk were easy to find, it would probably already have been found by one of the many geniuses of the 20th century who looked at nuke risk. If so, the best place to make a marginal contribution to nuke risk is by evangelizing the wisdom of the past: this can help avoid backsliding on things like arms control treaties. (This also raises the question of the tractability of a geopolitical approach to reducing risk, versus preparation for and adaptation to nuclear war's environmental damage, and versus other non-nuke cause areas.)
Speaking as someone who:
1) has never been prompted to make any career or philanthropic decisions regarding nuclear risk reduction (and therefore has not been motivated to think very rigorously about the subject);
2) may not have had a good sample of, or exposure to, nuclear risk reduction advocacy (although I have had very little interaction with, e.g., ICAN, which is a plus);
3) does not have formal academic or career experience in the nuclear realm; but
4) has been exposed to nuclear risk and strategy more than the average person, through personal research/curiosity, listening to podcasts on the subject, discussing the topic briefly with friends, and a summer position at the Center for Global Security Research:
I have long been skeptical of making serious net-positive progress in the nuclear security realm, and every time I’ve tried to be open-minded about the idea of devoting lots of attention and resources to the subject, I’ve come away with equally if not more pessimistic views on the field. It often gives me the surface-level feeling of watching people trying to kick down a brick wall, insisting that “it’s going to work, we just need more funding and time.” People like NTI’s director get asked a straight question (“What are we going to do to reduce nuclear risk?”) and can’t seem to give a straight or compelling answer: just vague goal-wishing (vs. policy proposals, let alone compelling advocacy strategies), or policy proposals which seem like they may even introduce some risks (even if not adding risk on balance, and possibly even reducing it), assuming those proposals were even politically tractable. Of course, much of this might not be so problematic, but then you hit a foundational issue: it seems very, very unlikely that we will face extinction due to nuclear war, whereas the probability of extinction from alternative sources (e.g., engineered pandemics, unaligned AI) is much greater (at least in magnitude terms).
So, perhaps I don’t understand their perspective, as Dr. Lewis suggests in the podcast. However, when I have tried to understand it, including by listening to hours of talks and podcasts by people in the nuclear risk field and reading various Bulletin/UCS articles, I haven’t seen a compelling case made by the traditional figures in the field. That’s not to say the field is hopeless, but I am fairly skeptical that many of the existing approaches would have much positive expected value if scaled. Perhaps it would have helped if Dr. Lewis had made clear what EA doesn’t get, but I either missed it or he didn’t specify… (reinforcing my skepticism)
That was a lot of bottled-up negativity and skepticism, but I’m happy that people are working on risk reduction as opposed to ~95% of other policy fields; I just want to see the work be more efficacy-oriented and less principle-driven (among other desires).
I really appreciate many of the points mentioned herein, and understand/share some of the skepticism and concern. These comments by Jeffrey:
“My community finds this uncomfortable for the same reason that dinosaurs don’t like asteroids.”
and Harrison
“It often gives me the surface level feeling of watching people trying to kick down a brick wall”
are interrelated to me and relevant to whether and how these communities get more involved with each other. There is much promising work to do, yet our field also needs to evolve. Perhaps we can create more nuclear expert/EA engagement opportunities; from the policy-wonk side, we need to approach them in open-minded and genuine ways. (FWIW, I think Jeffrey is a top-notch expert for such dialogue.) I'm at the start of a 2-week EA coworking experience, and the mutual benefits and learning were clear within the first hours of my time here.
Definitely agree! We should engage more with the field. I would note there's good stuff, e.g., here, here, here, here.
Who critiques EA, and at what timestamp in the podcast?
It's Dr. Jeffrey Lewis at 32:08
Thanks!