Director of Research at CEARCH: https://exploratory-altruism.org/
I construct cost-effectiveness analyses of various cause areas, identifying the most promising opportunities for impactful work.
Previously a teacher in London, UK.
Hi Mike, thanks for taking the time to respond to another of my posts.
I think we might broadly agree on the main takeaway here, which is something like: people should not assume that nuclear winter is proven, as there are important uncertainties.
The rest is wrangling over details, which is important work but not essential reading for most people.
Comparing the estimates, the main driver of the differences in soot injection is whether firestorms will form. Conditional on firestorms forming, my read of the literature is that at least significant lofting is likely to occur - this isn't just from Rutgers.
Yes, I agree that the crux is whether firestorms will form. The difficulty is that we can only rely on very limited observations from Hiroshima and Nagasaki, plus modeling by various teams that may have political agendas.
I considered not modeling the detonation-soot relationship as a distribution, because the most important distinction is binary - would a modern-day countervalue nuclear exchange trigger firestorms? Unfortunately I could not figure out a way of converting the evidence base into a fair weighting of 'yes' vs. 'no', and the distributional approach I take is inevitably highly subjective.
Another approach I could have taken is modeling as a distribution the answer to the question "how specific do conditions have to be for firestorms to form?". We know that a firestorm did form in a dense, wooden city hit by a small fission weapon in summer, with low winds. Firestorms are possible, but it is unclear how likely they are.
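To make the binary framing concrete, here is a minimal sketch (not my actual model - all parameters are made up for illustration) of how a "yes/no firestorm" weighting would translate into a soot-injection distribution:

```python
# Illustrative sketch: the binary approach to soot injection.
# With probability p_firestorm, firestorms form and soot injection is
# drawn from a high distribution; otherwise from a low one.
# All parameters (p_firestorm, lognormal mu/sigma) are invented.
import random

def sample_soot_binary(p_firestorm=0.5, rng=random):
    """Sample soot injection (Tg) under a binary firestorm model."""
    if rng.random() < p_firestorm:
        return rng.lognormvariate(3.0, 0.5)   # firestorm case: high soot
    return rng.lognormvariate(0.0, 0.5)       # no-firestorm case: low soot

random.seed(0)
samples = [sample_soot_binary() for _ in range(10_000)]
print(sum(samples) / len(samples))  # mean soot injection (Tg)
```

The resulting distribution is bimodal, whereas the distributional approach I actually took smooths over this divide - which is part of why the choice between the two framings matters.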
These charts are purely illustrative, not derived from data. The lower chart is an approximation of what my approach implies about firestorm conditions: most likely, firestorms are possible but relatively rare.
Los Alamos and Rutgers are not very helpful in forming this distribution: Los Alamos claim that firestorms are not possible anywhere. Rutgers claims that they are possible in dense cities under specific atmospheric conditions (and perhaps elsewhere). This gives us little to go on.
Fusion (thermonuclear) weaponry is often at least an order of magnitude larger than the atomic bomb dropped on Hiroshima. This may well raise the probability of firestorms, although this is not easy to determine definitively.
Agreed. My understanding is that fusion weapons are not qualitatively different in any important way other than power.
Yet there is a lot of uncertainty - it has been proposed that large blast waves could smother much of the flammable materials with concrete rubble in modern cities. The height at which weapons are detonated also alters the effects of radiative heat vs blast, etc.
You only need maybe 100 or so firestorms to cause a serious nuclear winter. This may not be a high bar to reach with so many weapons in play.
Semi-agree. Rutgers model the effects of a 100+ detonation conflict between India and Pakistan:
Conclusion
In the post I suggest that nuclear winter proponents may be guilty of inflating cooling effects by compounding a series of small exaggerations. I may be guilty of the same thing in the opposite direction!
I don't see my model as a major step forward for the field of nuclear winter. It borrows results from proper climate models. But it is bolder than many other models, extending to annual risk and expected damage. And, unlike the papers which explore only the worst-case, it accounts for important factors like countervalue/force targeting and the number of detonations. I find that nuclear autumn is at least as great a threat as nuclear winter, with important implications for resilience-building.
The main thing I would like people to take away is that we remain uncertain what would be more damaging about a nuclear conflict: the direct destruction, or its climate-cooling effects.
I wonder whether it is worth it for me to invest significant time in writing comments like mine above. It seems that they are often downvoted, and I can sometimes tell beforehand when this is going to be the case. So, to the extent karma is a good proxy for what people value, I wonder whether I am just spending significant time on something which has little value.
I am sad to see your comment getting downvotes as I do think it contributes a lot of value to the discussion.
I can guess why you might be getting them. You often respond to cause-prio posts with "what about corporate campaigns for chicken welfare?", and many people now probably switch off and downvote when they see this. Maybe just keep the chicken comparison to one line and link to your original post/comment?
Also, your comment is 3,200 words long - about 3x longer than the actual post. I think a 200-word summary of the comment with bullet points would be really useful for readers who have only read this post and are unable to pick up the finer points of your modeling critique.
On animal welfare
Hi Mike,
Firstly, thanks to you and all of ALLFED for your willingness to let me prod and poke at your work in the past year.
You make some excellent points and I think they will help readers to decide where they stand on the important cruxes here.
We assign a higher probability that a nuclear conflict occurs compared to your estimates, and also assume that, conditional on a nuclear conflict occurring, higher detonation totals are likely. This raises the likelihood and severity of nuclear winters versus your estimates.
For anyone wanting to get up to speed on my nuclear winter model, plus a quick intro to why nuclear cooling is so uncertain, see my just-released nuclear winter post.
We estimate that the expected mortality from supervolcanic eruptions (VEI 8+) would be comparable to VEI 7 eruptions, so their inclusion could increase cost effectiveness significantly.
I don't exclude supereruptions; I estimate that the right tail of my volcanic cooling model already accounts for them.
We feel that you are selling short the importance of research in building resilience to nuclear winters in particular and ASRSs in general [...] Overall, we see research as the foundation on which you then build the policy work and other actions. Broadening and strengthening this foundation is therefore vital in allowing the work that finally effects change to occur - it isn’t an either/or.
I want to be clear that I recommend that funders prioritize policy advocacy over R&D on the margin at this point in time. I totally agree that advocacy on such an uncertain topic can only be effective if it is grounded in research, and that ALLFED's research will very likely form the foundations of policy work in this area for years to come.
One key takeaway from my analysis is that mild and moderate scenarios form a larger proportion of the threat than the lore of nuclear winter might suggest. Resilient foods would likely have a role to play in these scenarios, but I think the calories at stake in distribution and adaptation are likely to be more pivotal.
One reason for my focus on resilient food pilot studies is that they are a possible next step for ALLFED if it were to receive a funding boost. ALLFED has been ticking along with modest but reliable core funding for some time now, and perhaps I am guilty of taking its theoretical research for granted.
Feel free to set the record straight and give some indication of the kinds of work ALLFED might be interested in accepting funding for.
Good point.
I looked at WTO agreements early in the research process and eventually decided that WTO advocacy was probably not, on the margin, the best way forward.
Food stockholding
I focused on the consequences of the AoA for food stockpiling (known as "stockholding"), the most urgent concern being that it may dissuade countries from stockholding as much as they otherwise would (as suggested by Adin Richards here). Although food reserves would never be big enough to get us through a full catastrophe, they would buy time for countries to adapt to the cooling shock.
The feedback I got from someone with experience at the WTO since the 1990s was:
On the other hand, basic amendment to the AoA seems obviously needed (imho). The original agreement does not properly allow for inflation. It does not make adequate exceptions for very low-income countries (whose market share is so low that allowing them to subsidize farmers would not be very distortionary).
Overall the WTO seems deadlocked at the moment and suffering a crisis of legitimacy. Tit-for-tat between US and China has led to a breakdown of trust in the organization.
If the US were on-side for AoA amendment, it is possible that the other dissenting countries would fall in line. But it is not clear that the US can be influenced on this. The US is doing fine with the system as it currently is, and has other ways of subsidizing domestic agriculture.
Other WTO theories of change
I don't know much about the implications of the AoA beyond stockholding.
The most important thing is that we ensure trade continues in a catastrophe, which seems congruent with the AoA. The second most important thing is that countries are able to quickly adapt their food systems in a crisis. In a major catastrophe I think all WTO rules would go out of the window. But could the AoA prevent countries from preparing?
I may well be missing something. Are there other ways the AoA could frustrate resilience work?
Hi Vasco. Thanks for all of your help giving feedback on the report and the modeling underpinning the CEA.
I am going to focus on the main points that you make. I hope to explain why I chose not to adopt the changes you mention in your comment and also to highlight some key weaknesses and limitations of my model.
Points I address (paraphrasing what you said):
By asking domain experts you probably got an overestimate for "probability that advocacy succeeds". You should have also asked people in other fields.
Although you mention fungibility, you don't account for it in cost-effectiveness estimates. You should be more explicit that fungibility undermines cost-effectiveness of grants that perform other, less effective interventions.
I think that in ideal circumstances, fungibility should be accounted for in cost-effectiveness analysis. But since it depends on the organization receiving the funding, I decided not to do quantitative estimates of fungibility effects in this report. Maybe we will do so when we evaluate specific grants in this area.
I agree that funding to orgs who only do one highly cost-effective thing is generally less fungible.
You overestimate the mortality effects of mild cooling events: if we apply your model to the 1815 eruption, we get higher mortality rates than actually occurred.
I love this analysis, thanks for doing it.
First, let me say that yes, my model is very sensitive to mortality estimates in mild cooling scenarios. My estimate may be too high, but I believe there are compelling reasons not to be confident of this.
To illustrate my model for mortality in a mild cooling event (1-2.65 degrees cooling):
My counterarguments are as follows:
Poverty is a strong predictor of famine mortality. If your persistence estimate relied only on poverty & malnutrition burden indicators, the long-term benefits of policy advocacy would be significantly lower.
To push back:
Thanks again for the detailed feedback!!
Admittedly, this would have made some people more vulnerable as it was difficult to relieve famine-stricken areas.
A counterargument could be that Western Europe appears to have had particularly bad summer cooling in 1816 - as well as better record-keeping than much of the world - and their famines were not so bad. On the other hand, spring cooling may be more important, as late frosts can ruin harvests of wheat, potatoes etc.
I'll rewrite completely because I didn't explain myself very clearly.
I would like to push back slightly on your second point: "Secondly, isn't it a massive problem that you only look at the 27% that completed the program when presenting results?"
By restricting to the people who completed the program, we get to understand the effect that the program itself has. This is important for understanding its therapeutic value.
Retention is also important - it is usually the biggest challenge for online or self-help mental health interventions, and it is practically a given that many people will not complete the course of treatment. 27% tells us a lot about how "sticky" the program was. It lies between the typical retention rates of pure self-help interventions and face-to-face therapy, as we would expect for an in-between intervention like this.
More important than effect size and retention - I would argue - is the topline cost-effectiveness in depression averted per $1,000 or something like that. This we can easily estimate from retention rate, effect size and cost-per-treatment.
[Update: the tool does capture diminishing marginal cost-effectiveness - see reply]
Cool tool!
I'd be interested to see how it performs if each project has diminishing marginal cost-effectiveness - presumably there would be a lot more diversification. As it stands, it seems better suited to individual decision-making.
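To illustrate why I'd expect more diversification, here is a toy allocator (my own sketch, nothing to do with the tool's actual implementation; all return functions are made up):

```python
# Hypothetical sketch: why diminishing marginal cost-effectiveness leads
# to diversification. With constant (linear) returns, all funding goes
# to the single best project; with concave (square-root) returns, a
# greedy allocator spreads funding across projects.
import math

def greedy_allocate(marginal_value_fns, budget, step=1.0):
    """Allocate `budget` in increments of `step`, each time funding the
    project with the highest current marginal value."""
    spend = [0.0] * len(marginal_value_fns)
    for _ in range(int(budget / step)):
        best = max(range(len(spend)),
                   key=lambda i: marginal_value_fns[i](spend[i]))
        spend[best] += step
    return spend

# Concave returns: value_i(x) = a_i * sqrt(x), marginal = a_i / (2*sqrt(x))
concave = [lambda x, a=a: a / (2 * math.sqrt(x)) if x > 0 else float("inf")
           for a in (3.0, 2.0)]
print(greedy_allocate(concave, 100))  # spreads across both projects

# Linear returns: constant marginal value, best project takes everything
linear = [lambda x: 3.0, lambda x: 2.0]
print(greedy_allocate(linear, 100))   # all funding to project 0
```

With concave returns the allocator equalizes marginal cost-effectiveness across projects, which is exactly the diversification behavior I'd want to see in a portfolio tool.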