Of course it will be smaller; however, that does not mean that tackling climate change cannot make a sizeable contribution towards reducing the risk of nuclear winter. The question for me is whether nuclear winters that relate to climate change are more or less tractable than nuclear winter as a whole. My view would be that trying to reduce the risk of nuclear winter by tackling climate change and its consequences may be a more tractable problem than doing so by trying to get nuclear weapons states to disarm or otherwise making nuclear war less likely in general, but that efforts to make nuclear winter more survivable are probably more efficient than either of these policies from a purely x-risk reduction perspective.
However, I also do not think that nuclear winter is the only way in which climate change may lead to an existential threat (at least reading existential threat to include the prospect of an unrecoverable civilisational collapse), as there are some interesting feedback loops between environmental and social collapse that have the potential to cause non-linear and self-perpetuating shifts in the structure of global civilisation. Admittedly these are hard to study, but from a value maximisation perspective I would say that, in the face of uncertainty, we will do better if we assume that global civilisation is relatively fragile to such changes than if we assume that it is robust to them.
I think we might disagree about what constitutes a near miss or precipitating event. I certainly think that we should worry about such events even if their probability of leading to a nuclear exchange is pretty low (0.001, let's say), and that it would not be merely a matter of luck to have had 60 such events and no nuclear conflict; it is just that, given the damage such a conflict would do, they still represent an unacceptable threat.
The precise role played by climate change in increasing our vulnerability to such threats depends on the nature of the event. I certainly think that limiting yourself to a single narrative like migration → instability → conflict is far too restrictive.
One of the big issues here is that climate change is perceived as posing an existential threat both to humanity generally (we can argue about the rights and wrongs of that, but the perception is real) and to specific groups and communities (I think that is a less controversial claim). As such, I think it is quite a dangerous element in international relations, especially when it is combined with narratives about individual and national responsibility, free riding and so on. Of course you are right to point out that climate change is probably not an existential threat to either the USA or Russia, but it will be a much bigger problem for India and Pakistan and for client states of global superpowers.
We are indeed writing something on this (sorry it is taking so long!). I would dispute your characterization of the principal contribution of climate change to nuclear war though. Working from Barrett and Baum's recent model of how nuclear wars might occur, I would argue that the greatest threat from climate change is that it creates conditions under which a precipitating event, such as a regional war, is more likely to escalate into a nuclear conflict - i.e. it increases our vulnerability to such threats. This is probably more significant than its direct impact on the number of precipitating events. Since such events are not actually that uncommon (Barrett and Baum find over 60, I seem to remember, whilst a Chatham House survey found around 20), I think that any increase in our vulnerability to these events would not be insignificant.
What is certainly correct is that the nature of the threat posed by climate change is, in many ways, very different from that posed by AI. Indeed the pathways from threat to catastrophe for anything other than AI (including pandemics, nuclear weapons, asteroids and so on) are generally complex and circuitous. On the one hand that makes these threats less of a concern, because it offers multiple opportunities for mitigation and prevention. On the other hand, it makes them harder to study and assess, especially by the generally small research teams of generalists and philosophers who undertake the majority of x-risk research (I am not patronising anyone here; that is my background as well).
Below I paste a really brief summary of some papers you might find interesting. It is taken from a draft literature review of methods for quantifying existential risk that I have been working on with others, hence its particular format. I hope you find it useful, and I would be grateful to hear if anyone knows of any good papers we have missed.
My personal takeaway from this exercise with regard to the risk from pandemics is that many GCR scholars may be overstating the risks from a major pandemic at the 'Spanish Flu' level. (One thing I should have mentioned in the preceding comment is that when one takes account of the systemic impacts of such a pandemic these effects may actually decrease; for instance, taking account of the social and economic impacts of the Black Death, the overall impact of that pandemic may have been net positive, although that is controversial.) On the other hand, many non-GCR pandemic scholars may be understating the likelihood of a pandemic that would be considerably worse than the Spanish Flu (from both naturally occurring and engineered pathogens). These two things do not necessarily cancel each other out!
1. Source: Troy Day, Jean-Baptiste Andre & Andrew Park, 2006, “The Evolutionary Emergence of Pandemic Influenza”, Proceedings of the Royal Society – Biological Sciences, 273, pp. 2945-2953.
Probability: The probability of a pandemic occurring in any given year is 4%. A conservative estimate of the 95% support interval for the yearly pandemic probability is 0.7-7.6%.
Methodology: This probability is derived from combining 'anecdotal' evidence about the number of influenza pandemics over the past 250 years with more recent data about the expected interval between pandemics emerging.[1] Evidence was combined using a well-defined Bayesian formula set out in an appendix to the paper.
2. Source: Madhav, N. (2013). Modelling a modern-day Spanish flu pandemic. AIR Worldwide, February 21, 2013.
Probability: There is a 0.5-1% annual probability of a 'modern day Spanish Flu' event, with similar characteristics to the 1918 pandemic, including considerable excess deaths amongst young adults. Such a pandemic would likely cause between 21 and 33 million deaths worldwide.
Methodology: The AIR Pandemic Flu Model, which combines demographic, epidemiological and technological modelling to produce a complete model for pandemic influenza. This model has been extensively peer reviewed.
3. Source: Fan, V. Y., Jamison, D. T., & Summers, L. H. (2016). The inclusive cost of pandemic influenza risk. National Bureau of Economic Research. [This has now been partially published as Fan, V. Y., Jamison, D. T., & Summers, L. H. (2018). Pandemic risk: how large are the expected losses? Bulletin of the World Health Organization, 96(2), 129.]
Prediction: The annual probability of a severe influenza pandemic (one that increases global mortality by at least 0.1%) is 1.6%, and the average impact of such pandemics is a global mortality increase of 0.58% (roughly 40 million fatalities). Severe flu pandemics represent 95% of the costs associated with all pandemic influenza.
Methodology: The historical record was used to estimate the overall frequency and severity of influenza pandemics and to generate likely age-specific death rates from a global pandemic. The United States' historical age distributions, being the most complete, were used as the template for global age distributions. The authors then model the "expected deaths from pandemic influenza risks" with a highly fat-tailed distribution of mortality, meaning that the vast majority of deaths occur in the most severe pandemics (a rough sketch of what the headline figures imply follows below).
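To give a sense of scale, here is a minimal sketch of what the quoted headline figures imply for expected annual deaths. The global population value is my own assumption, so the result is indicative only and not the paper's own estimate.

```python
# Rough illustration of the Fan, Jamison & Summers headline figures quoted above.
# The world population value is an assumption; the paper's own base may differ.

world_population = 7.0e9
p_severe_per_year = 0.016        # annual probability of a severe pandemic
mortality_increase = 0.0058      # average impact: +0.58% global mortality

deaths_per_severe_pandemic = world_population * mortality_increase   # ~40 million
expected_annual_deaths = p_severe_per_year * deaths_per_severe_pandemic

print(f"Deaths per severe pandemic: {deaths_per_severe_pandemic/1e6:.0f} million")
print(f"Expected annual deaths:     {expected_annual_deaths/1e3:.0f} thousand")
```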
4. Source: Bagus, G. (2008). Pandemic Risk Modelling. Chicago Actuarial Association.
Estimate: A pandemic on the scale of the Spanish Flu, causing a roughly 27% increase in mortality, occurs around once every 420 years. More severe pandemics causing a roughly 42% increase in global mortality may have a return rate of 2,700 years.
Methodology: An 'actuarial model' is constructed in the form of a severity curve based on historical data for the past 420 years of influenza outbreaks, which was found to approximate an exponential curve. This was then extrapolated to estimate the probability and severity of more extreme pandemics (a sketch of this kind of extrapolation follows below). The model takes account of shifting demographic features over time but assumes that pandemics have equal severity across all countries.
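For illustration only, here is a minimal sketch of the kind of extrapolation described: fit an exponential exceedance curve through the two quoted (severity, return period) points and read off return periods for other severities. This is my own reconstruction of the general approach under those assumptions, not Bagus's actual model.

```python
import math

# Fit an exponential severity curve through the two quoted points and extrapolate.
# Purely illustrative; not the author's actual model.
points = [(0.27, 420.0), (0.42, 2700.0)]  # (excess mortality fraction, return period in years)

(s1, t1), (s2, t2) = points
slope = (math.log(t2) - math.log(t1)) / (s2 - s1)   # log return period is linear in severity
intercept = math.log(t1) - slope * s1

def return_period(severity):
    """Extrapolated return period in years for a given excess-mortality fraction."""
    return math.exp(intercept + slope * severity)

print(f"Return period for +35% mortality: {return_period(0.35):.0f} years")
print(f"Return period for +60% mortality: {return_period(0.60):.0f} years")
```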
5. Source: Klotz, L. C., & Sylvester, E. J. (2014). The Consequences of a Lab Escape of a Potential Pandemic Pathogen. Frontiers in Public Health, 2.
Prediction: The likelihood of a pandemic, through an undetected lab-acquired infection, "could be as high as 27%" over a 10-year research period.
Methodology: The authors take the annual probability per lab of an escape of a virus through an undetected lab-acquired infection (LAI) to be 2.4%. This statistic is taken from the Department of Homeland Security's risk assessment for a planned National Bio and Agro-Defense Facility in Manhattan, Kansas. They then assume that a research enterprise will comprise 10 labs working for 10 years to make a virus. Across this period, the probability of no escape through an LAI will be 0.088, and therefore the probability of at least one escape from the enterprise through an LAI will be roughly 91%. This is multiplied by an assumed worst-case likelihood of 30% that a single LAI leads to a pandemic, giving the overall prediction (the arithmetic is reproduced in the sketch below).
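For clarity, here is a minimal sketch reproducing that arithmetic from the figures quoted above (the variable names are my own):

```python
# Reproduces the Klotz & Sylvester (2014) calculation summarised above.

p_escape_per_lab_year = 0.024    # annual P(undetected LAI escape) per lab (DHS figure)
labs, years = 10, 10             # assumed research enterprise
p_pandemic_given_escape = 0.30   # assumed worst-case chance an escape seeds a pandemic

lab_years = labs * years
p_no_escape = (1 - p_escape_per_lab_year) ** lab_years        # ~0.088
p_at_least_one = 1 - p_no_escape                              # ~0.91
p_pandemic = p_at_least_one * p_pandemic_given_escape         # ~0.27

print(f"P(at least one escape) = {p_at_least_one:.2f}")
print(f"P(pandemic over the research period) = {p_pandemic:.2f}")
```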
6. Source: Marc Lipsitch & Thomas V. Inglesby, “Moratorium on research intended to create novel potential pandemic pathogens”, MBio, 5, 2014, pp. 1-6.
Probability: Each laboratory-year of Gain of Function research into virulent, transmissible influenza virus might have a 0.01% to 0.1% chance of triggering a global infection via an accidental laboratory escape. Such a pandemic could be expected to kill between 2 million and 1.4 billion people.
Methodology: The risk of a global pandemic resulting from a laboratory escape of influenza is determined by multiplying two probabilities. The first is the risk of laboratory incidents and accidental infections in the biosafety level 3 laboratories in which such research may be conducted (estimated to be between 0.2%, on the basis that 4 infections have been observed over <2,044 laboratory-years of observation, and 1%, using data from the National Institute of Allergy and Infectious Diseases). The second is the probability that an accidental infection of a lab worker could lead to an escape that spreads widely around the world (estimated to be between 5% and 60% according to a range of simulation models, with the authors' own model indicating a 10-20% risk).
Noting that "readily transmissible influenza, once widespread, has never before been controlled before it spreads globally," the expected severity of such a pandemic is determined by multiplying the historical infection rate of influenza pandemics (24-38%) by possible values for the case fatality rate of a novel, virulent influenza strain (1-60%). However, these two figures are unlikely to vary independently, so simple multiplication is probably inappropriate (see the sketch below for how the pieces combine).
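To show how these factors combine, here is a rough sketch under my own simplifying assumptions (a fixed world population, and naive pairing of the range endpoints). Because it ignores the dependencies the authors are careful about, it brackets rather than reproduces the headline figures quoted above.

```python
# Rough sketch of the Lipsitch & Inglesby two-factor structure described above.
# Pairing range endpoints naively is my simplification; the paper's headline
# figures (0.01%-0.1% per lab-year, 2 million-1.4 billion deaths) reflect the
# authors' own preferred combinations, which this sketch does not reproduce.

world_population = 7.3e9                 # assumed

p_lab_infection = (0.002, 0.01)          # accidental infection per BSL-3 lab-year
p_global_spread = (0.05, 0.60)           # chance one infection spreads worldwide
risk_per_lab_year = (p_lab_infection[0] * p_global_spread[0],   # 0.01%
                     p_lab_infection[1] * p_global_spread[1])   # 0.6%

attack_rate = (0.24, 0.38)               # historical pandemic infection rates
case_fatality_rate = (0.01, 0.60)        # assumed range for a novel virulent strain
deaths = (world_population * attack_rate[0] * case_fatality_rate[0],   # ~18 million
          world_population * attack_rate[1] * case_fatality_rate[1])   # ~1.7 billion

print(f"Risk per lab-year: {risk_per_lab_year[0]:.2%} to {risk_per_lab_year[1]:.2%}")
print(f"Deaths: {deaths[0]/1e6:.0f} million to {deaths[1]/1e9:.1f} billion")
```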
7. Source: Ron Fouchier, 2015, “Studies on Influenza Virus Transmission between Ferrets: the Public Health Risks Revisited”, MBio, Vol. 6, No. 1, pp. 1-4.
Probability: Each laboratory-year of Gain of Function research into virulent, transmissible influenza virus might have a 2.5 x 10^-13 to 3 x 10^-12 chance of triggering a global infection via an accidental laboratory escape.
Methodology: This paper is a direct response to Lipsitch and Inglesby (2014), arguing that their estimates “were based on historical data and did not take into account the numerous risk reduction measures that are in place in the laboratories where the research is conducted.”
8. Source: Piers Millett and Andrew Snyder-Beattie, 2017, "Existential Risk and Cost-Effective Biosecurity", Health Security, Vol. 15, No. 4, pp. 1-11.
Probability: The annual probability of an existential catastrophe arising from a global pandemic is between 1.6 x 10^-8 and 8 x 10^-7.
Methodology: The authors construct a toy model to assess this risk, citing a Gryphon Scientific report (2015) as suggesting that the annual probability of a global pandemic arising from an accident with research into Potentially Pandemic Pathogens (PPP) in the US is 0.002% to 0.1%.[2] Next, they note that: "The Gryphon report also concluded that risks of deliberate misuse were about as serious as the risks of an accidental outbreak, suggesting a twofold increase in risk. Assuming that 25% of relevant research is done in the US as opposed to elsewhere in the world, gives us a further fourfold increase in risk. In total, this eightfold increase in risk gives us a 0.016% to 0.8% chance of a pandemic in the future each year."
Next, the authors directly estimate the probability that a pandemic will cause an existential catastrophe and combine this with the previous probability: "For the purposes of this model, we assume that for any global pandemic arising from this kind of research, each has only a one in ten thousand chance of causing an existential risk."[3] (The combined calculation is sketched below.)
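Here is a minimal sketch of that toy model as summarised above; the structure follows the quoted text and the variable names are my own.

```python
# Sketch of the Millett & Snyder-Beattie (2017) accidental/deliberate pathway.

p_us_accident = (2e-5, 1e-3)      # annual P(global pandemic from US PPP accident): 0.002%-0.1%
deliberate_multiplier = 2         # deliberate misuse judged about as risky as accidents
non_us_multiplier = 4             # only ~25% of relevant research assumed to be in the US
p_xrisk_given_pandemic = 1e-4     # authors' "conservative guess"

p_pandemic = [p * deliberate_multiplier * non_us_multiplier for p in p_us_accident]
p_existential = [p * p_xrisk_given_pandemic for p in p_pandemic]

print(f"Annual P(pandemic): {p_pandemic[0]:.3%} to {p_pandemic[1]:.1%}")        # 0.016% to 0.8%
print(f"Annual P(existential catastrophe): {p_existential[0]:.1e} to {p_existential[1]:.1e}")
```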
9. Source: Piers Millett and Andrew Snyder-Beattie, 2017, "Existential Risk and Cost-Effective Biosecurity", Health Security, Vol. 15, No. 4, pp. 1-11.
Probability: The annual probability of an existential catastrophe resulting from biowarfare or bioterrorism is 0.0000019 (or 1.9 x 10^-6).
Methodology: The authors assume that casualty numbers from terrorism and warfare follow a power law distribution. Previous studies have determined the power law exponent for terrorism using chemical or biological weapons to be -0.5. This means that for every order of magnitude increase in casualties from a terrorist attack, the probability of that attack occurring is multiplied by a factor of 10^-0.5, which is approximately 1/3. Assuming one attack per year, the annual probability that an attack kills more than 5 billion people will be (5 billion)^-0.5, which is 0.000014 or 1.4 x 10^-5. Historical data gives the power law exponent for warfare as -0.41, and the authors assume one new war every other year and that bioweapons are used in 10% of wars. Therefore, the annual probability that a war involving biological weapons kills more than 5 billion people is 0.5 x 0.1 x (5 billion)^-0.41, which is 0.000005 or 5 x 10^-6. The authors assume that, of all wars or terrorist attacks that kill more than 5 billion people, 10% would lead to extinction. They therefore reach an annual probability of existential catastrophe from biowarfare or bioterrorism of 1.9 x 10^-6 (the calculation is reproduced in the sketch below).
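For completeness, here is a minimal sketch reproducing that calculation from the figures above:

```python
# Reproduces the Millett & Snyder-Beattie power-law calculation summarised above.

threshold = 5e9   # casualty level treated as a potential existential catastrophe

# Bio/chem terrorism: exponent -0.5, one attack assumed per year.
p_terror = threshold ** -0.5                 # ~1.4e-5

# Warfare: exponent -0.41, one new war every other year, bioweapons in 10% of wars.
p_war = 0.5 * 0.1 * threshold ** -0.41       # ~5e-6

# Assume 10% of such mass-casualty events lead to extinction.
p_existential = 0.1 * (p_terror + p_war)     # ~1.9e-6

print(f"Annual P(existential catastrophe from biowarfare/bioterrorism) = {p_existential:.1e}")
```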
10. Source: Sandberg, A. & Bostrom, N. (2008): "Global Catastrophic Risks Survey", Technical Report #2008-1, Future of Humanity Institute, Oxford University: pp. 1-5.
Prediction: A 2% chance of human extinction being caused by an engineered pandemic and a 0.05% chance of it being caused by a natural pandemic.
Method: Median response of an informal survey of 13 participants at the 2008 Oxford Conference on Global Catastrophic Risk. Participants were asked to estimate the probability of human extinction, of the death of more than 1 billion people, and of the death of more than 1 million people from a list of 8 specific threats. However, this list was not taken to be exhaustive.
11. Source: Dennis Pamlin & Stuart Armstrong, 2015, Global Challenges: 12 Risks that Threaten Human Civilisation, Global Challenges Foundation.
Probability: "Based on available assessments the best current estimate of a global pandemic in the next 100 years is: 5% for infinite threshold [and] 0.0001% for infinite impact" (p. 150).[4]
'Infinite impact' refers to a state in which civilisation collapses and does not recover, or in which all human life ends. 'Infinite threshold' refers to a scenario that has the potential to lead to such a collapse, depending on other factors (Pamlin & Armstrong, 2015: 11).
Methodology: This is one of a series of risk-specific predictions resulting from a large, informal, structured expert elicitation exercise conducted by the Global Challenges Foundation. This constituted an "expert review" of the relevant literature for each risk, following which "[two] workshops were arranged where the selection of challenges was discussed, one with risk experts in Oxford at the Future of Humanity Institute and the other in London with experts from the financial sector." Based on all the evidence gathered, probability estimates were produced for each risk (p. 12).
[1] The authors support this claim with evidence from the following sources: Robert G. Webster, 1998, "Influenza: An Emerging Disease", Emerging Infectious Diseases, Vol. 4, No. 3, pp. 436-441, p. 437; and Ann H. Reid, Jeffery K. Taubenberger & Thomas G. Fanning, 2004, "Evidence of an Absence: the Genetic Origins of the 1918 Pandemic Influenza Virus", Nature Reviews Microbiology, 2, pp. 909-914.
[2] There is no explicit reference to these particular probabilities in the original report.
[3] The authors state that this figure is a "conservative guess". It is not entirely clear whether the authors mean that one in ten thousand pandemics is predicted to cause extinction, or that only one in ten thousand pandemics carries any risk of extinction. The latter reading is implausible, because surely there is at least some risk, however small, that any global pandemic would cause extinction.
[4] Stated sources include: Bagus, Ghalid (2008): Pandemic Risk Modeling. http://www.chicagoactuarialassociation.org/CAA_PandemicRiskModelingBagus_Jun08.pdf; Broekhoven, Henk van & Hellman, Anni (2006): Actuarial Reflections on Pandemic Risk and its Consequences. http://actuary.eu/documents/pandemics_web.pdf; Brockmann, Dirk & Helbing, Dirk (2013): The Hidden Geometry of Complex, Network-Driven Contagion Phenomena. Science, Vol. 342. http://rocs.hu-berlin.de/resources/HiddenGeometryPaper.pdf; Bruine de Bruin, W., Fischhoff, B., Brilliant, L. & Caruso, D. (2006): Expert Judgments of Pandemic Influenza Risks. Global Public Health, 1(2): 178-193. http://www.cmu.edu/dietrich/sds/docs/fischhoff/AF-GPH.pdf; Khan, K., Sears, J., Hu, V. W., Brownstein, J. S., Hay, S., Kossowsky, D., Eckhardt, R., Chim, T., Berry, I., Bogoch, I. & Cetron, M.: Potential for the International Spread of Middle East Respiratory Syndrome in Association with Mass Gatherings in Saudi Arabia. PLOS Currents Outbreaks, 2013 Jul 17. http://currents.plos.org/outbreaks/article/assessing-riskfor-the-international-spread-of-middle-east-respiratorysyndrome-in-association-with-mass-gatherings-insaudi-arabia/; Murray, Christopher J. L., et al. (2007): Estimation of Potential Global Pandemic Influenza Mortality on the Basis of Vital Registry Data from the 1918-20 Pandemic: A Quantitative Analysis. The Lancet, 368(9554): 2211-2218. http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(06)69895-4/fulltext; and Sandman, Peter M. (2007): Talking About a Flu Pandemic Worst-Case Scenario. http://www.cidrap.umn.edu/newsperspective/2007/03/talking-about-flu-pandemic-worstcase-scenario
I am a researcher at the Centre for the Study of Existential Risk. Given the short time frame here, I thought it worth saying that if anyone is interested in applying for this and would like to work on a project that may be assisted by partnering with a more established x-risk org, then I would be happy to hear from you and will make sure to turn around any e-mails in as little time as possible. You can reach me at sjb316@cam.ac.uk.
This paper very much builds upon a more detailed working paper published by the authors in 2016 (https://www.nber.org/papers/w22137.pdf). That seemed to receive a reasonable amount of discussion at the time, whilst I was working at FHI, and it has certainly been one of my go-to resources for pandemic influenza since, but there are definitely some problems with it.
The first thing you need to know is that when the authors talk about 'extreme events' this may not quite mean what you think. They divide all possible influenza pandemics into two classes, moderate and extreme, and their conclusion rests on the finding that 95% of all costs are due to the extreme events. However, in the working paper (and hence, it seems, in this paper) they define an 'extreme' pandemic as one that increases global mortality by more than 0.01%. That is definitely not to be laughed at in terms of impact relative to other pandemics: it means such a pandemic would kill at least 750,000 people, which is way higher than something like Ebola (though these deaths are likely to fall disproportionately on the elderly and the already sick).

However, the key point is that when you look at their analysis, it turns out that the reason why 95% of costs are attributable to pandemics in the extreme category is that less severe pandemics are actually surprisingly rare. The figures they use give a return rate of 50 years for a pandemic of moderate severity but 63 years for a pandemic of extreme severity! (The rough sketch below illustrates why this makes the 95% figure almost inevitable.) To be honest, I have not gone through the new paper in enough detail to be sure that they are doing the same thing here, but when you see the figures they actually present in the working paper it is hard not to conclude that they carved up their dataset in the wrong way and should have set the threshold for extreme pandemics higher (or else included pandemics caused by less dangerous pathogens as well).
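To illustrate the point with deliberately crude numbers (my own assumptions, not the paper's): cap the 'moderate' class at its defining threshold and give the 'extreme' class the roughly 40 million average death toll quoted elsewhere in this thread, and the extreme class dominates expected deaths almost automatically.

```python
# Crude illustration of why 'extreme' pandemics dominate expected costs when
# moderate and extreme pandemics have similar return periods. The per-event
# death figures here are my own illustrative assumptions, not the paper's.

moderate_return_period = 50          # years (figure quoted above)
extreme_return_period = 63           # years (figure quoted above)

moderate_deaths_per_event = 750_000  # assumption: at most the 0.01% mortality threshold
extreme_deaths_per_event = 40e6      # assumption: average severe-pandemic toll quoted earlier

moderate_annual = moderate_deaths_per_event / moderate_return_period   # ~15,000 per year
extreme_annual = extreme_deaths_per_event / extreme_return_period      # ~635,000 per year

share_extreme = extreme_annual / (moderate_annual + extreme_annual)
print(f"Share of expected deaths from 'extreme' pandemics: {share_extreme:.0%}")  # ~98%
```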
The other key thing is that although they note that a single pandemic (the Spanish Flu of 1918) plays an extremely disproportionate role in their analysis, since it may actually have contributed the majority (or even the vast majority) of all influenza deaths over their 250-year study period, they show no real curiosity about even worse pandemics that might have played a similarly disproportionate role had they extended the period to, say, 2,000 years (the Black Death and the Plague of Justinian come to mind). So whilst they conclude that the costs of pandemic influenza have a fat-tailed distribution, they don't really offer any insight into either the real fatness or the real length of this tail. I wouldn't fault them on this in particular, since investigating such historical events would really be pushing the bounds of epidemiology. However, it would be a clear issue if a GCR scholar just took these figures and ran with them, because we probably should be concerned about 1-in-2,000-year or 1-in-200,000-year pandemics, since that is where the greatest risk is likely to come from.
One final quick point: you are right that it is surprising that not much attention was previously paid to the costs of influenza in terms of lost lives. However, there is an argument that this study is no less shocking in that it only assesses the direct costs of pandemics and does not consider their potential indirect costs or systemic effects. Again, I am not going to fault them for this, as these are very hard to assess. However, it is clearly a missing element in the analysis, and I hope that future studies will be able to address it as well.
Can I ask why you actually want to categorise ethics like this at all? I know it is traditional, and when teaching ethics it can be very helpful to set things out like this, since otherwise students often miss the profound differences between ethical theories. However, a lot of exciting work has never fallen into these categories, which in any case are basically post hoc classifications of the work of Aristotle, Plato and Mill. Hume's work, for instance, is pretty clearly 'none of the above', and a lot of good creative work has been done over the past century or more in trying to break down the barriers between these schools (Sidgwick and Parfit being the two biggest names here, but by no means the only ones). Personally, I think there are a lot of good tools and powerful arguments to be found across the ethical spectrum, and that so long as you appreciate the true breadth of diversity among ethical theories, breaking them down like this is no longer much help for anything really.
From an EA perspective, I think the one distinction that may be worth paying attention to, and that fits (imperfectly) into your 'consequentialism' vs 'deontology and virtue ethics' distinction, is between moral theories that can be incorporated into an 'expected moral value' framework and those that cannot. This is an important distinction because it places a limit on how far one can go in making precise judgements about what one ought to do in the face of uncertainty, which is something that may be of concern to all EAs. However, this is a distinction that emerges from practice rather than being baked into moral theories at the level of first principles, and there are other approaches, such as the 'parliamentary model' for handling moral uncertainty, that seek to overcome it.
I think that referring to the successes you mention as 'peak Quakerism' rather misses the point. At the time of its establishment, Quakerism was a very outward-looking sect seeking radical renewal of the world in the aftermath of the English Civil War. At one point perhaps 25% of the population of England were convinced of the truth of the tenets of Quakerism. However, this period was short-lived, and the movement soon faltered in size and importance and became highly insular.
This insularity is not at all unrelated to the successes you talk about. For instance, one of the key aspects of Quaker business success was that the society (Quakers refer to ourselves as the Religious Society of Friends) provided a very strong social network with a high degree of interpersonal trust. Quakers knew they could seek one another out wherever they were and arrange business transactions on favourable terms. Anyone found to have been acting in bad faith would be 'cast out' and shunned by other Quakers, which, due to the highly insular nature of the society, would generally mean losing contact with all their family and friends and quite likely a good deal of their property as well. There was also a tradition that if a Quaker business failed, other Quakers would pay off its debts, which again was generally very good for business. While some Quakers still acted highly dishonourably (e.g. the famous mid-Victorian case of Overend, Gurney and Company), these advantages were of significant value, especially to bankers and merchants, and explain a lot of Quakers' commercial success.
One can make a similar argument about Quaker opposition to the slave trade. One of the things that led Quakers to produce some of the earliest corporate statements in opposition to slavery was that slave owning was a huge problem for individual Quakers and their communities. Quakers, especially in the US, were among the most prolific slave owners. However, other Quakers were convinced that slavery was abhorrent and needed to be abolished. The insular nature of the Society meant that in many meetings in the US these two groups were in close connection with one another, and it was very difficult for either group to escape that connection. This meant that individual meetings had to work to resolve such disagreements if they were going to survive. There were many instances in which meetings resolved them by ejecting those who were opposed to slavery (again, have a look at the history of Benjamin Lay, perhaps the most 'cast out' Quaker in history). In time, however, the dispute was eventually settled largely in favour of the anti-slavers, and this is often what led to those early statements. When the Society was finally led to go further and say not only that slavery was not fit for Quakers but that it should be abolished in society as a whole (this certainly did not happen overnight), its dense social networks also proved highly valuable as a means of organising the anti-slavery movement, and local meetings became valuable resources.
I could go further; it is a fascinating history. I am a longstanding Quaker and I love the Society and our history. However, I do think a lot of people have a very limited view of how the Society operates and why it has had some notable successes. For what it is worth, my personal view is that Quakers have a lot to teach those who are willing to patiently come to understand these things, but that there are also many, many examples of people drawing the wrong conclusions. The Quakers are, and always were, a peculiar people, and in many ways the Society is a failure. At only 300,000 members worldwide, divided into many factions, most of which do not see eye to eye on a great many things, the Society is a long way from peak anything. However, we still have our successes (for instance, I was directly involved in successful efforts to use our special marital exemptions to force the UK government's hand on introducing equal marriage, since we believed that we could go ahead and do it anyway if they didn't change the law), and I am glad of that.