- Thanks to helpful comments from Michael Wulfsohn, Aaron Gertler, my mother Abby Ohl, and my wife, Elana Ohl. Of course, all mistakes are my own.
Epistemic Status
- I have been reading about global public goods (GPGs) provision and international cooperation recently. One belief I’ve formed is that international cooperation on existential risks[1] should be more highly prioritized in the EA community, especially among long-termists. I have no formal training in economics/international relations/philosophy. My knowledge of international cooperation and GPGs derives primarily from reading papers and books on this topic.
Main Points:
- Several EA or EA-adjacent sources recommend research and direct work on international cooperation. However, the 80,000 Hours “China Specialist” priority path, and other paths that work to improve international cooperation, could merit even more support, since these may be especially effective ways to reduce existential risks, especially those posed by unaligned AI and engineered pandemics (AI/EP).
- In addition to arguing my above claim, I aggregate various resources on international cooperation, from inside and outside the EA community.
- My recommendation relies heavily on my claim that improved international cooperation on AI/EP risks alone could subtract a significant portion of existential risk over the next century.
- AI/EP risks would especially benefit from international cooperation if they are best modeled as “weakest-link” global public goods.
- Several possibilities would make international cooperation less of a priority if they were true. I think the most likely are: (1) governments will cooperate on AI/EP risks regardless of increased research and advocacy of international cooperation, and (2) monitoring of AI/EP activities to the degree necessary is not feasible, or is only possible with serious compromise of civil liberties.
- Finally, I list specific goals for international cooperation on existential risk.
What is a global public good and why are global public goods usually underprovided?
- Feel free to skip this part if you’re familiar with the definition of a global public good and basic concepts from game theory, like the prisoner’s dilemma.
- What are public goods? Public goods are those goods that any individual can access (non-excludable), and an individual’s usage of them doesn’t reduce the supply available to other individuals (non-rivalrous). Examples include world peace, a clean atmosphere, and a pandemic-free world.
- Global public goods are those public goods which benefit, or at least do not harm, everyone in the world[2]. Note that foreign aid is not a GPG, because aid violates the non-rivalrous condition: the recipient country’s consumption of aid reduces the resources available to the donor and other countries.
- GPG provision commonly requires international cooperation. However, it is possible for a country to provide a GPG unilaterally, in which case cooperation is not required. For example, if a country had the resources to build a defense system against a catastrophic asteroid impact, that country would have an incentive to do so even if others would not help build it.[3] Despite this possibility, the GPGs I will discuss would benefit from cooperation, so I may use the phrase “GPG provision” interchangeably with “international cooperation” in this post.
Why are GPGs underprovided?
- The model of the prisoner's dilemma (Kuhn, 2019) helps explain why GPGs are systematically underprovided. Although the assumptions made by this model (especially lack of communication between participants, no repeated interaction, certainty of outcomes) do not apply to the real world, it provides insight into why individual countries may act selfishly, even though doing so makes every country worse off from a selfish point of view.
- For example, a country may be $10 billion better off in aggregate by polluting, which makes every other country in the world $5 billion worse off. Polluting is a good idea from a selfish perspective, but only if your home country is the only one that realizes it: it is not universalizable. If 192 other countries think this way, then each country’s payoff will be as follows: ($10B benefit from polluting) minus ($5B cost imposed by each polluter) x (192 polluting countries) = -$950B, so each country is $950B worse off than if no one had polluted. Then again, no one wants to be the “sucker” and not get their $10B if everyone else will be polluting anyway. The payoff for a country that doesn’t pollute is even worse: -$960B. If the pollute/non-pollute choice were a one-round game, everyone would pollute in the Nash equilibrium.
- Game theory thus tells us that countries acting in their own self-interest will tend to under-provide global public goods (in the above example, a reduction in emissions is the public good). The sketch below works through the payoffs.
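For readers who prefer code to prose, here is a minimal sketch of the game above in Python, using the same illustrative $10B-benefit and $5B-cost figures from my example (these numbers are hypothetical, not real estimates):

```python
# Toy model of the pollution game described above (illustrative figures only).
BENEFIT = 10   # $B gained by a country that chooses to pollute
COST = 5       # $B imposed on every *other* country by each polluter
N = 193        # number of countries

def payoff(pollutes: bool, other_polluters: int) -> int:
    """Payoff ($B) to one country, given its own choice and others' choices."""
    return (BENEFIT if pollutes else 0) - COST * other_polluters

# Polluting strictly dominates, whatever the other 192 countries do...
for others in (0, 96, 192):
    assert payoff(True, others) > payoff(False, others)

# ...yet if everyone pollutes, each country nets -$950B, far worse than
# the $0 each would receive if no one polluted.
print(payoff(True, 192))   # -950
print(payoff(False, 192))  # -960 (the "sucker" payoff)
```

Because polluting strictly dominates regardless of what others do, universal pollution is the unique Nash equilibrium of the one-round game, even though universal restraint would leave every country $950B better off.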
What disciplines are useful for international cooperation research?
- I’ve found learning about the following areas to be helpful: economics (especially game theory), international law, political science, psychology (especially cognitive biases), and history (historical relations between countries, and the history of international bodies, such as the UN).
- If someone is focused on international cooperation in a specific area (e.g. emissions reductions, AI safety, or pandemic response), rather than international cooperation generally, they should be fluent in the technical areas relevant to that GPG. For example, to increase cooperation on AI safety, it would be helpful to have a basic understanding of machine learning and the AI control problem.
Aggregator Functions
What is an aggregator function?
- One can classify GPGs in terms of aggregator functions[4],[5]. The inputs to these aggregator functions are the contribution levels of the public good from each country, and the outputs are each country’s benefit, given those contribution levels. These aggregator functions are necessarily simplifying, but I think the intuition they provide is nuanced enough to tell us when international cooperation may be especially impactful. Below, I discuss two aggregator functions: “best shot” and “weakest-link”.[6]
- What matters most in determining outcomes for a given country? The best effort that any single country makes (asteroid deflection)? Or the efforts of the least well-resourced country (disease eradication)?
- Asteroid defense is an example of a best-shot aggregator because if a single country (likely wealthy) can knock the asteroid off course, every country receives the benefit of asteroid defense, even those that do not contribute. The outcomes of all countries depend on the country which invested the most in asteroid deflection. Generally, total provision of a best-shot GPG will depend only on the amount that the largest contributor to the GPG provides.
- For smallpox to be eradicated, if even one country still had incidence of the virus, all other countries would’ve had to bear the costs of annual vaccination[7]. Hence, the collective interest depended on the “weakest-link”, or the country which invested the least in eradication[8]. Generally, total provision of a weakest-link GPG will depend primarily on the amount that the smallest contributor to the GPG provides.
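A small sketch may make the contrast concrete. The two functions below are my own stylized illustration of the aggregators, with hypothetical contribution levels in arbitrary units; they are not formulas taken from the cited papers:

```python
# Stylized aggregator functions: total provision of the GPG as a function of
# each country's contribution (hypothetical effort levels, arbitrary units).
contributions = {"A": 9.0, "B": 4.0, "C": 0.5}

def best_shot(c: dict) -> float:
    # e.g. asteroid deflection: only the single largest effort matters
    return max(c.values())

def weakest_link(c: dict) -> float:
    # e.g. disease eradication: the smallest effort caps everyone's benefit
    return min(c.values())

print(best_shot(contributions))     # 9.0 -> country C's low effort is irrelevant
print(weakest_link(contributions))  # 0.5 -> country C's low effort caps provision
```

In the weakest-link case, raising country C’s contribution is the only way to raise total provision, which is why international cooperation (helping or pressuring the weakest link) matters so much for this class of GPGs.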
Aggregator functions applied to two existential risks
- Let’s consider the aggregator functions of two of the leading existential risks, as determined by Ord[9]: unaligned artificial intelligence and engineered pandemics. I claim that the reduction of both AI and EP risks is best approximated by a “weakest-link” aggregator function, which makes improving international cooperation on these issues an especially good bet.
Unaligned Artificial intelligence
- For artificial general intelligence (AGI) to pose an existential threat to humanity, unaligned superintelligence may need to be developed only once, resulting in a “fast takeoff”[10]. The unaligned AI could then halt progress on any AGI projects which might be able to oppose it.[11] Thus, the country, or non-state actor, with the weakest concern for AI safety may be the most likely one to have a negative impact on every other country. If the first AGI developed is not aligned with human values, it is likely that no other projects will matter: a single bad apple would spoil the bunch. This is why I classify reduction in unaligned AI risk as a weakest-link GPG.
Engineered Pandemics
- As technology improves, it will likely become easier for less sophisticated, less well-resourced actors to synthesize lethal, fast-spreading diseases. It is already possible to develop smallpox-like viruses from scratch (Koblentz, 2020). Unless response capabilities become much stronger (e.g. on-demand in-home tests for any possible virus), an engineered pandemic released in any country could easily spread globally, so reduction of EP risk also seems to be a weakest-link GPG.
Both of these risks also have commonalities with the Type-1 vulnerabilities defined in Bostrom’s Vulnerable World Hypothesis paper[12], since a single actor could cause catastrophic outcomes for large populations. Fortunately, both AGI and engineered pandemics are currently much more difficult to develop than the “easy nukes” in Bostrom’s example.
How could international cooperation reduce undiscovered existential risks?
I’ll continue to focus primarily on AI/EP risks in this post, in order to highlight the more tangible and actionable benefits from international cooperation. However, I believe that the benefits from international cooperation extend far beyond reducing these risks[13], or for that matter, any existential risks we are currently aware of. In (Bostrom, 2019), Bostrom points out that we don’t know how dangerous and accessible future technologies might be[14]. For example, he points out that we were simply lucky that nuclear weapons required a large amount of resources to build. If they could’ve been built with “a piece of glass, a metal object, and a battery”, then their discovery would’ve spelled a global catastrophe. By improving global institutions now, before such technologies are discovered, responses to future risky technologies could be more robust than the uncoordinated actions of ~200 sovereign countries. Since improving international cooperation could reduce future risks larger than even the most dire ones we’re currently facing, the benefits to reducing known existential risks may be just a small fraction of the total value generated by improving international cooperation. For more on the benefits from global governance on yet-unknown existential risks, I refer the reader to (Bostrom, 2019).
Importance / Neglectedness / Tractability
- Now that I have given background on international cooperation, I will use the 80,000 Hours methodology to assign scores to the importance, neglectedness, and tractability of improving international cooperation to reduce AI/EP risks.
Importance: High
- If enhancing global cooperation could significantly reduce the risk of just two types of existential catastrophe, this cause would probably merit increased efforts from the EA community. Ord estimates a roughly 1-in-6 chance of existential catastrophe over the next 100 years[15]. Two primary contributors[16] to this risk are unaligned AI and engineered pandemics. International cooperation could significantly reduce the likelihood of these two risks materializing, via, for example, (1) a UN Security Council resolution establishing AI development norms, with trade sanctions for non-cooperators, and (2) an international treaty to improve monitoring of actors capable of releasing an engineered pandemic.
- While I will not assign an exact number to the risk reduction claimed above, I do feel capable of estimating a rough lower bound on this cause’s potential impact, given my intuition of how these risks might unfold. I believe improved cooperation alone could probably prevent at least 1 in 20 AI/EP-related existential catastrophes.
- A risk reduction of this order of magnitude seems to be supported by the role of international cooperation in past catastrophic risks. For example, I believe a significant amount of the nuclear war threat during the Cold War was generated by poor international cooperation between the US and the USSR, rather than, say, faulty detection systems, human error, or lack of scientific understanding. I think roughly 1/10 of the scenarios which almost led to nuclear war probably could have been prevented if only both countries had committed earlier to mutual arms control. Ord lists several of these close calls in The Precipice[17].
- Overall, it seems likely that for every 20 scenarios where existential catastrophes occur from engineered pandemics or unaligned AI, at least one could have been avoided if there had only been stronger international cooperation.
- The reasoning above is the most important factor in my recommendation. If international cooperation were demonstrated to have little impact on existential risk, then this cause should not be a priority. I feel approximately 60% confident that improved international cooperation alone could eliminate a significant proportion of AI/EP existential catastrophes.
Comparison to other methods of reducing existential risk:
- Even if international cooperation could significantly reduce existential risk, it’s important to consider the cost of reducing risk via advocating for improved cooperation, especially if other causes can reduce existential risk more effectively. I’ll attempt to provide intuition on this intervention’s potential for impact relative to other methods of reducing AI risk, specifically.
- If poor international cooperation triggers a race to develop AGI, which in turn results in a certain country / non-state actor cutting corners and accidentally developing unaligned AI, earlier AI safety technical research could be for naught[18]. Since safety research may have few positive effects if a risky race dynamic ensues, I believe international governance should be at least as highly prioritized by the EA community as AI safety research, at least until a strong level of global cooperation[19] is reached. Similarly, I believe technical research into reducing other existential risks could be rendered moot if the actions of potentially reckless actors aren't monitored by global agreements pertaining to those risks.
- I am not sure what the cost would be to reach a strong level of international cooperation on these existential risks. Depending on the risk, it seems likely that international cooperation efforts could hit diminishing marginal returns before technical research does, so there may be less total room for high-impact direct work or funding in the former than the latter. However, if an amount equal to just a fraction of the funding for AI safety technical research were invested in advocating for international cooperation, I believe existential risk could probably be somewhat reduced. As a thought experiment, consider what could be done with $1B[20] (arithmetic sketched below): one could launch a public awareness campaign reaching nearly 100M people for ~50 minutes each[21]. This campaign could communicate the risks of a race to develop AGI[22] between the US and China, and encourage voters to write their representatives to prioritize international cooperation on AI safety, which could increase the probability of an international treaty on AI safety.
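Here is a quick check of the thought experiment’s arithmetic, using the assumptions from the footnotes ($5M per 30-second Super Bowl ad, ~100M viewers, and half the budget reserved for production costs):

```python
# Back-of-the-envelope for the $1B awareness-campaign thought experiment.
budget = 1_000_000_000          # $1B, the thought-experiment budget
production_costs = 500_000_000  # half reserved for producing the ads (footnote 22)
cost_per_30s_ad = 5_000_000     # assumed Super Bowl ad price (footnote 22)
viewers = 100_000_000           # approximate Super Bowl audience (footnote 21)

ad_slots = (budget - production_costs) / cost_per_30s_ad  # 100 slots
ad_minutes = ad_slots * 30 / 60                           # 50.0 minutes of airtime
print(ad_minutes)            # 50.0
print(viewers * ad_minutes)  # ~5 billion person-minutes of exposure
```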
Finally, despite my focus on existential risk in this post, international coordination could be impactful for non-existential, non-longtermist causes. Improved cooperation on medical research, climate change, and trade could improve our longevity, environment, and wealth in the current generation. Using the 80,000 Hours framework, I assign a 14 to the importance factor, as I believe improvements in global cooperation on AI/EP could reduce existential risk by approximately 0.5-1% (a rough check follows below). This corresponds to a very high importance relative to other priority causes.
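The 0.5-1% figure can be sanity-checked against Ord’s estimates and my 1-in-20 claim above; this is a back-of-the-envelope calculation, not a rigorous model:

```python
# Ord's century-scale risk estimates (The Precipice, p. 167, cited in the endnotes).
risk_ai = 0.10    # existential catastrophe from unaligned AI
risk_ep = 0.033   # existential catastrophe from engineered pandemics

# My claim above: improved cooperation alone could prevent at least
# 1 in 20 AI/EP-related existential catastrophes.
fraction_prevented = 1 / 20

absolute_risk_reduction = (risk_ai + risk_ep) * fraction_prevented
print(f"{absolute_risk_reduction:.1%}")  # 0.7%, within the 0.5-1% range I assign
```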
Neglectedness: Medium
- Research into international coordination on AI/EP existential risks is occurring, though I believe increased political advocacy could offer a high expected value.
How much funding does the area have?
- I found few organizations which focus exclusively on international cooperation on existential risks. But I believe many institutions devote a minority of their resources to improving cooperation on these issues[23].
- While I will list estimates of funding below, I caution that neglectedness in this area is probably not well captured by a funding number. Rather, prioritization of international cooperation on existential risks by major governments is likely a better measure. So, progress depends more on influencing the opinions of the public and high-ranking government officials than on financial resources alone. That said, more money may be used to indirectly influence opinion, e.g. by launching public awareness campaigns and advertisements on the importance of improved international cooperation.
Organizations that research or advocate for international cooperation on AI Safety and Engineered Pandemics:
- Nuclear Threat Initiative: upper bound of $19M on EP expenditures, 2019[24]
- Johns Hopkins Center for Health Security: estimated budget of $3M-$15M[25]
- Partnership on AI: revenue of $8M in 2019[26]
- Future of Life Institute: $2.4M in 2019 revenue[27]
- Future of Humanity Institute, which houses the Centre for the Governance of AI: roughly $1.4M annual budget in 2017[28]
- Centre for the Study of Existential Risk: estimated budget of $1M-$10M[29]
- Leverhulme Centre for the Future of Intelligence: roughly $1.4M in annual funding[30]
- Global Catastrophic Risk Institute: $0.3M annual budget[31]
The total annual budgets of the above institutions are approximately $57M[32]. This number is likely an overestimate, since these organizations are not entirely devoted to international cooperation on AI/EP risks.
- Other institutions study or advocate for international cooperation on areas besides existential risk (e.g. trade, climate change, monetary policy). It’s possible that efforts to improve international cooperation in one area may spill over into AI/EP. These institutions include academic departments at universities, think tanks, lobbying firms, private companies, national governments, the WHO, and the United Nations.
- Doubling the above amount to allow for these spillover effects implies approximately $114M in annual funding.
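For transparency, the totals above are just the sum of the listed budget estimates, taking the upper bounds for the Johns Hopkins Center for Health Security and CSER as noted in the endnotes:

```python
# Annual budget estimates from the list above, in $M.
budgets = {
    "Nuclear Threat Initiative (EP programs)": 19.0,
    "Johns Hopkins Center for Health Security": 15.0,   # upper bound
    "Partnership on AI": 8.0,
    "Future of Life Institute": 2.4,
    "Future of Humanity Institute / GovAI": 1.4,
    "Centre for the Study of Existential Risk": 10.0,   # upper bound
    "Leverhulme Centre for the Future of Intelligence": 1.4,
    "Global Catastrophic Risk Institute": 0.3,
}

total = sum(budgets.values())
print(total)      # 57.5 -> "approximately $57M" above
print(total * 2)  # 115.0 -> doubled for spillover, roughly the $114M figure
```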
Research occurring in this area:
- The topic of international cooperation is well studied in academia, but has only occasionally been applied to existential risk.
- In 2005, Thomas Schelling won the Nobel Prize in Economics for applying game theory to global conflict (Nobel, 2005), and he has influenced many researchers who study cooperation on reducing existential risk.
- Todd Sandler has published extensively on global public goods, including the environment.
- Scott Barrett studies international cooperation, especially in the context of climate change.
- Nick Bostrom has published on the importance of global coordination in reducing existential risk (Bostrom, 2019) and coined the term singleton.
- Allan Dafoe studies the global politics of AI, specifically.
- Toby Ord recommends “exploration of new international institutions aimed at reducing existential risk” in The Precipice.[33]
- GPG provision and international institutions are mentioned several times on the University of Oxford’s Global Priorities Institute research agenda[34].
- Research in this area is referenced on an 80,000 Hours list of important research questions (#7 in the Economics section).
- There have also been several EA forum posts on international cooperation, listed in Appendix 2.
How much attention are governments paying to this area?
- One news story that shows this issue is becoming less neglected is the “track-II” diplomatic talks between Fu Ying, former Chinese ambassador to the U.K., and Brookings President John Allen, which centered on AI safety[35]. These talks indicate that influential individuals outside government place high importance on this issue.
- The Asilomar Conference on Beneficial AI also points to increased attention to this cause area. An outcome of the conference was the Asilomar AI principles.
- Events that would make me think international cooperation on existential risk is becoming even less neglected include: diplomatic talks on existential risk moving to “track-I” (especially official conversations between the Chinese and U.S. governments), a large number of countries endorsing the Asilomar principles, and incorporation of the Asilomar principles into an enforceable international agreement.
Using the 80,000 Hours framework, I assign a 6 to the neglectedness factor. This corresponds to a moderate level of neglectedness relative to other priority causes.
Tractability: Medium
- International cooperation has usually been successful when (1) benefits to cooperating clearly outweigh costs by a large factor, and (2) all countries benefit from cooperation. I’ve included a few case studies of successes and failures in international cooperation in Appendix 4.
- The benefits of reducing engineered pandemic and unaligned AI risks are fairly uncertain: unaligned AI has not been developed yet, and bio-warfare has not killed large numbers since World War II[36]. I do not say this to discourage work on these areas or diminish the size of these risks, but to show that governments cannot point to a clear benefit to be gained from cooperation.
- The impact of an engineered pandemic would likely be bad for everyone, which means international cooperation is probably more likely on this issue than on AI safety. AI might be well-aligned to only one country’s values, so cooperation on AI safety may be less likely if a government believes that not cooperating could give that country a strategic advantage.
- This is all to say: past successes in international cooperation have little in common with reducing AI/EP risk. For example, the costs and benefits of eradicating smallpox were fairly certain, and there was no strategic advantage to be gained by any country withholding its contribution to eradication[37].
- However, small successes have already occurred in international cooperation on both AI and EP. International membership in the Partnership on AI indicates the potential success of international agreement in this area, although Baidu’s withdrawal is worrying for future US-China cooperation. The near-global ratification of the Biological Weapons Convention (BWC) of 1972 shows that collaboration on reducing the risk of engineered pandemics may be tractable, although the convention is not monitored, so compliance levels are uncertain. Certain countries, such as North Korea, are in blatant violation of the BWC[38].
- Due to international cooperation’s moderate track record on issues with uncertain benefits, I assign a 4 to the tractability score. This corresponds to a moderate level of solvability relative to other priority causes.
Ways I might be wrong
In this section, I discuss why you might not prioritize international cooperation for AI safety and engineered pandemics.
My argument for international cooperation has at least four areas of potential weakness. I believe objection 1 is the most plausible, as there are at least 3 ways in which it could be true. I am especially uncertain about the propensity of governments to cooperate in the absence of further research and advocacy into international cooperation. I am also uncertain if relevant governments are able to monitor either of these risks with enough scope to reduce existential risk.
(1) You believe international cooperation will not be a bottleneck to reducing existential risk.
This could be true if:
(a) There are good outcomes in the counterfactual: You believe governments will cooperate regardless of increased research/advocacy into cooperation on AI/EP
- You may reject the need for increased support for international cooperation if you think the necessary cooperation would happen anyway. Cooperation has happened “on its own” in past times of crisis, for example, when countries ally against a common enemy in war. I believe this “spontaneous cooperation” is more likely for EP than AI, since all countries would stand to lose from an engineered pandemic.
- While it is possible that countries will cooperate before an existential catastrophe occurs, i.e. “just in time”, I believe that without deliberate efforts to increase AI/EP international cooperation, the probability of such cooperation is low, as these issues have been consistently neglected by governments.
(b) You believe regional enforcement capabilities and individual values preclude effective international agreements
- You may believe cooperation is important, but be skeptical that the international scale is the best level to focus on.
- While central institutions are generally strong in developed countries, I am not sure if even the richest and most technologically advanced nations could monitor AI/EP activities to the degree necessary to reduce existential risk. Since small, poorly funded actors might be able to contribute significant risk to both unaligned AI and EPs in the future, monitoring may need to be extensive. So, even if technically feasible, surveillance may come at the cost of seriously invading individual privacy[39]. If effective surveillance is practically impossible with today’s social norms and technology, investing in the technical capabilities needed to make monitoring more palatable[40] and increasing public awareness of the gravity of existential risks (which may increase support for the necessary surveillance) could be greater priorities than international cooperation.
- If you believe change comes from the “bottom-up”, and a more altruistic or engaged citizenry is necessary before leaders will engage in global cooperation, you may instead want to focus on building the EA community.
(c) The aggregator function is wrong: reductions of these two existential risks are not weakest-link GPGs:
- I framed the reduction of AI/EP risks as weakest-link GPGs. If you believe these risks are better modeled by another aggregator function, especially a “best-shot” GPG, you should want to invest more into technical research instead of international coordination.
- What would it mean for these risks to be best modeled by a best-shot aggregator?
- For AI, if you are confident that actors in a certain country or project will develop safe AGI first, i.e. there is little risk of a race to develop AGI (in which speed is prioritized ahead of alignment with human values), you may prefer to focus solely on AI technical research in the most promising country/project, to ensure that the first AGI is as safe as possible. This human-aligned AGI could then halt the development of any non-aligned AGI. However, if you believe that projects in multiple countries will compete to reach AGI, as currently appears to be the case[41], then some amount of international cooperation is likely wise. On the other hand, Eliezer Yudkowsky seems to believe that AI risk is best reduced by “best-shot”-like efforts[42].
- For engineered pandemic response and prevention, a best-shot aggregator would apply if you believe that some sort of silver-bullet technology could neutralize most viruses. For example, imagine a medical device which could synthesize vaccines within seconds of coming into contact with the saliva of an infected person, no matter the virus. You would probably choose to invest in developing this technology, or any other similarly protective technology, since its invention would render any pandemic nearly harmless. If you think that such silver bullets are unlikely or prohibitively expensive, international cooperation on biosecurity would likely still be worthy of attention. I am unsure of the probability of invention of such “silver bullets” over the next decade.
(2) You prefer more easily quantifiable and less risky causes to work on and donate to:
- While the expected benefits to cooperation are likely high, the probability and timing of success are difficult to quantify. You cannot conduct a randomized controlled trial on a UN resolution. Successful international cooperation on AI/EP risks may be a high-expected-value, low-probability event, resembling a Pascal’s mugging (Yudkowsky, 2007), a class of causes in which GiveWell co-founder Holden Karnofsky has pointed out flaws (Karnofsky, 2011).
- You may view international cooperation as especially risky if you do not believe large countries can be persuaded to relinquish the sovereignty required to cooperate, regardless of the risks of non-cooperation.
(3) You believe that most moral worth lies in living (rather than future, unborn) people and/or animals
- Given the ability of not-for-profits to greatly reduce poverty and improve animal welfare without government cooperation, the most certain way to improve the lives of currently living humans and animals is likely not through improving international cooperation.
(4) You believe international cooperation is a slippery slope to totalitarianism
- Improved international cooperation could increase the probability of a single world government arising. The fewer governments there are, the higher the risk of “locking in” a totalitarian government.[43] I believe it’s possible to improve global governance while being mindful of the risks of a single government arising. Efforts like the Montreal Protocol show that countries can voluntarily cooperate for their mutual benefit while remaining sovereign.
Recommendations
- If international cooperation is as beneficial as I claim, then I think increased political advocacy is the best way to increase international cooperation. Below, I outline goals for international cooperation and specific ways individuals can contribute.
In the EA community, I recommend:
- Increased documentation and promotion of career paths which improve international cooperation (e.g. “China Specialist” and similar roles)
- Further discussion of the interaction between technical research and international cooperation, e.g. as I discuss above for AI safety in “Comparison to other methods of reducing existential risk”
Individuals that want to contribute to this cause could do so in the following ways:
- Political advocacy to increase the prioritization of cooperation on existential risk. Methods of political advocacy include launching public awareness campaigns to spread knowledge of the benefits from international cooperation, contacting your political representative to explain the importance of this cause, and voting and fundraising for political candidates who prioritize this cause (although I am not aware of any candidates who have made any existential risk a major campaign issue).
- Policy careers oriented toward improving international cooperation on existential risk, especially between the US and China. I specified these two countries because they have a low level of general cooperation and especially high technological expertise. 80,000 Hours already recommends “Improving Sino-Western coordination on global coordination risks” as a priority path, and also points out that global cooperation failures may increase AI risk in the US AI Policy cause writeup.
- Research into enforcement mechanisms for international agreements. I believe a major reason for the failure of past international cooperation efforts was lack of positive and negative incentives for compliance and non-compliance, respectively. Research into this area might uncover more effective ways to structure international agreements.
Below I include what I view as key components of any future international agreements on AI / EP safety, specifically. Countries that cooperate on these points may recommend trade sanctions on non-cooperators to incentivize participation.
Goals for an international agreement on AI safety:
- International collaboration on large-scale AI projects, potentially reinforced by public commitments to stop competing and assist with any project near to building AGI, as OpenAI has committed to[44]; commitments like these would reduce incentives to pursue capabilities at the cost of safety
- Norms surrounding government regulation of projects working towards AGI, including potential conditions for nationalization
- Agreements on permissibility of publishing academic research which may be useful in developing AGI
- Bostrom cites further potential monitoring mechanisms in Superintelligence[45].
Goals for an international agreement to reduce engineered pandemic risk:
- Establishing norms for government oversight of projects with capability of developing engineered pandemics
- Requiring automated screening software to be loaded on de novo DNA synthesis technologies, which would notify authorities if dangerous DNA were being synthesized (a toy illustration follows this list)
- An international licensing system for DNA providers[46]
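To illustrate what the screening requirement above might mean in practice, here is a deliberately simplified toy sketch. The sequence, the database entry, and the matching rule are all invented for illustration; real screening systems rely on curated hazard databases, homology search, and human review, none of which is modeled here:

```python
# Purely illustrative: flag a synthesis order if it contains any window that
# matches a (hypothetical) controlled-sequence database. Real screening tools
# are far more sophisticated than exact k-mer matching.
K = 12  # window length; real systems screen longer, curated signatures

CONTROLLED_KMERS = {"ATGCGTACGTTA"}  # placeholder entry, not a real sequence

def screen_order(sequence: str) -> bool:
    """Return True if the order should be escalated for human review."""
    seq = sequence.upper()
    return any(seq[i:i + K] in CONTROLLED_KMERS for i in range(len(seq) - K + 1))

if screen_order("ccgatgcgtacgttacc"):
    print("Order flagged: notify the provider's biosecurity officer")
```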
What would success in international cooperation look like?
I believe international cooperation on existential risk would no longer be neglected, and would soon reach diminishing marginal returns, if events on a scale similar to the following occurred:
- A binding international treaty facilitating cooperation on at least 1 existential risk (e.g. by including some of the points listed above), ratified by the US and China, with support from a major international body, such as the UN.[47]
- An EU-wide regulation to monitor at least 1 existential risk
- International cooperation on at least 1 existential risk being a campaign issue for one of the major-party US Presidential nominees (as climate change was in 2020 for Biden, for example)
Appendix 1: Further Questions
- How can we calculate a quantitative measure (e.g. in QALYs) of the expected benefit from international cooperation, to compare to other causes?
- From a longtermist perspective, how much funding should be allocated to international coordination on existential risks?
- How many researchers are studying international coordination on existential risks?
- Which existential risks benefit most from international cooperation?
- What is the best aggregator function to model reduction of each of the highest probability existential risks?
- What is the optimal ratio between technical research and international cooperation research, for various existential risks?
- What is the optimal ratio between political advocacy for international cooperation and international cooperation research?
- What are the specific enforcement mechanisms which countries seeking to cooperate on various existential risks should implement?
- How effective and socially acceptable are current existential risk monitoring capabilities?
Appendix 2: Related EA forum posts and videos
- AI Governance: Opportunity and Theory of Impact, Allan Dafoe
- Sino-Western cooperation in AI safety, Brian Tse
- International Cooperation Against Existential Risks: Insights from International Relations Theory, Jenny Xiao
- Dual moral obligations and international cooperation against global catastrophic risks, Jenny Xiao
- International Affairs reading lists, evelynciara
- Problem areas beyond 80,000 Hours' current priorities by Ardenlk
- Global governance and global public goods are both mentioned in this post
Appendix 3: Papers / books I have found helpful in learning about international cooperation on existential risk
- This overlaps with the citations section a bit, and also includes many texts that are not cited below
Books
- Why Cooperate?, Scott Barrett, 2007 - An overview of different types of global public goods and several international cooperation case studies, from time zones to oil tanker standards.
- Global Catastrophic Risks (2008), edited by Nick Bostrom and Milan Ćirković - An introduction to existential risk followed by 22 chapters written by experts in their respective fields. Many possibilities for international cooperation are discussed.
- Superintelligence, Nick Bostrom, 2014 - Detailed explanation of the risks of unaligned AI and how superintelligent AI could lead to an existential catastrophe.
- The Precipice, Toby Ord, 2020 - A survey of existential risk, including rough probability estimates for the largest existential risks.
- Governing the Commons, Elinor Ostrom, 1990 - Ostrom explores case studies where the models of the prisoner’s dilemma and the tragedy of the commons do not apply. She outlines the conditions that seem to be necessary for cooperation to occur voluntarily.
- The Strategy of Conflict, Thomas Schelling, 1960 - A series of essays applying game theory to cases where there is common interest. Dives into the theory of threats, promises, and credible commitments. Famous for defining focal points.
Papers
- Stop! The Polio Vaccination Cessation Game, Barrett, 2010
- Collective Action to Avoid Catastrophe, Barrett, 2016
- Vulnerable World Hypothesis, Bostrom, 2019
- Global Public Goods: A Survey, Buchholz and Sandler, 2021
- Strategies for the International Protection of the Environment, Carraro and Siniscalco, 1993
- The Intellectual Property Regime: Are There Lessons for Climate Change Negotiations?, Drahos, 2011
- Minimizing Global Catastrophic and Existential Risks from Emerging Technologies through International Law, Wilson, 2013
Appendix 4: What is the track record of international cooperation?
A look into the history of GPGs is helpful in estimating the tractability of improving GPG provision. Roughly, GPG provision efforts have succeeded when (1) the benefits from cooperating clearly outweigh the costs, and (2) every country benefits from cooperation.
Successes:
Montreal Protocol
- Evidence arose in the 1970s that chlorofluorocarbons (CFCs) may have been damaging the ozone layer. The goal of the Montreal Protocol, signed in 1987, was phasing out usage of CFCs. The primary enforcement mechanism was restricting trade between treaty parties and non-treaty parties in CFCs and goods containing CFCs. The treaty also included payments from countries that benefited more from the agreement to countries with less incentive to reduce CFCs (poorer countries and countries nearer the equator), so that no country was made worse off by the deal. Because of these side payments, all countries were expected to comply. There was no exemption for developing countries (as has been common in other climate agreements, such as the Kyoto Protocol). The estimated global cost/benefit ratio from the Montreal Protocol and subsequent ozone agreements was 1:11.[48] The efforts of the Montreal Protocol seem to have succeeded: the ozone layer has already begun recovering and should be completely repaired by the end of this century (United Nations, 2019).
Eradication of smallpox
- Eradicating smallpox is perhaps one of the best investments in world history; Scott Barrett estimates the cost/benefit ratio in 1978, the year smallpox was eradicated, was 1:159[49]. Smallpox eradication probably saved more lives than a hypothetical world peace beginning in 1978 would have saved[50]. Eradication requires an extremely high level of cooperation, since a single undetected case could spread and invalidate all previous efforts; however, every country also has a strong incentive to cooperate, since each one will have to bear the costs of vaccination if the disease continues to circulate.
Failures:
- There is no shortage of international cooperation failures. Cooperation failures have been the rule throughout history, not the exception. In addition to the Kyoto Protocol detailed below, other cooperation failures include the mutually destructive US-China trade war, the Cold War, and both World Wars.
- The Kyoto Protocol, signed in 1997, committed signatories to reduce greenhouse gas (GHG) emissions through 2020. There were no penalties for withdrawing from the agreement. In fact, countries like Canada withdrew from the agreement once they calculated they would be required to pay penalties by remaining a member (Poplawski-Stephens, 2014).
- Developed countries, known as Annex-1 countries, were held to emission reductions while developing countries, including India and China, were not[51].
- The US Senate never ratified the Kyoto Protocol. Although some countries did reach their targets, it is hard to say what causal effect the Kyoto Protocol had on these reductions.
- I believe there are two factors which contributed to the ineffectiveness of the Kyoto Protocol.
- Lack of perceived fairness. I think international agreements will be more likely to succeed with commitments from all countries, even if those commitments are small. Research has shown individuals will punish non-cooperators, even at cost to themselves, when they perceive lack of fairness in an agreement (Rabin, 1993). President George W. Bush also cited unfairness as a reason for his opposition to Kyoto: “it exempts 80% of the world, including major population centers such as China and India, from compliance, and would cause serious harm to the US economy."[52]
- No enforcement mechanisms. Any international agreement needs “teeth”, and the Kyoto Protocol was basically toothless. For example, the U.S. is free-riding off the costly emissions reductions of Kyoto-compliant countries, due to the benefits to Americans from reduced climate change risk. Despite this free-riding, the U.S. continues to benefit from trade with these countries. If the Kyoto Protocol had encouraged trade sanctions for non-compliant countries, perhaps compliance would have been higher (a sketch of this intuition follows).
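Returning to the toy pollution game from earlier in this post, a large enough trade sanction flips the equilibrium, which is the intuition behind the claim above. The sanction size here is hypothetical, chosen only because it exceeds the $10B benefit from polluting:

```python
# Extend the earlier toy pollution game with a trade sanction on polluters.
BENEFIT, COST, SANCTION = 10, 5, 12  # $B; the sanction size is hypothetical

def payoff(pollutes: bool, other_polluters: int) -> int:
    """Payoff ($B) to one country, now net of any sanction for polluting."""
    base = (BENEFIT if pollutes else 0) - COST * other_polluters
    return base - (SANCTION if pollutes else 0)

# With SANCTION > BENEFIT, *not* polluting strictly dominates, so full
# compliance becomes the Nash equilibrium of the one-round game.
for others in (0, 96, 192):
    assert payoff(False, others) > payoff(True, others)
print(payoff(False, 0))  # 0: everyone complies and no one is worse off
```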
Bibliography
- Allwood, et al, 2014: Glossary. In: Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change.
- Barrett, 2007: Why Cooperate? The Incentive To Supply Public Goods. Oxford University Press
- Barrett, 2010: Stop! The Polio Vaccination Cessation Game, 2010
- Bostrom, 2001: Existential Risks: Analyzing Human Extinction Scenarios
- Bostrom, 2014: Superintelligence, 2014, Oxford University Press
- Bostrom, 2019: The Vulnerable World Hypothesis, p. 458
- Buchholz and Sandler, 2021: Global Public Goods: A Survey, final draft, forthcoming in the Journal of Economic Literature, 2021
- Dessai, 2001: Tyndall Centre Working Paper 12: The climate regime from The Hague to Marrakech: Saving or sinking the Kyoto Protocol?
- Frischknecht, 2003: The history of biological warfare, Science and Society
- Future of Life Institute, 2019: Tax Forms, 2019
- Global Catastrophic Risk Institute, 2020: Summary of 2020-2021 GCRI Accomplishments, Plans, and Fundraising
- Global Priorities Institute, 2020: A Research Agenda for the Global Priorities Institute, 2020
- Johns Hopkins, 2021: School At A Glance, retrieved April 4, 2021
- Karnofsky, 2011: Why we can’t take expected value estimates literally, GiveWell Blog
- Koblentz, 2020: A biotech firm made a smallpox-like virus on purpose. Nobody Seems to Care, 2020, Bulletin of the Atomic Scientists
- Kuhn, 2019: Prisoner’s Dilemma, Stanford Encyclopedia of Philosophy
- Leverhulme, 2021: Leverhulme Centre
- MacAskill, 2018: What are the most important moral problems of our time?, TED2018
- Nobel, 2005: Thomas C. Schelling, Facts
- Nouri and Chyba, 2008: Biotechnology and Biosecurity, book chapter in Global Catastrophic Risks, edited by Nick Bostrom and Milan Ćirković
- Nuclear Threat Initiative, 2019: 2019 Annual Report
- O’Meara, 2020: Will China lead the world in AI by 2030?, Nature
- Open Philanthropy, 2017: Future of Humanity Institute - General Support
- Ord, 2020: The Precipice: Existential Risk and the Future of Humanity
- Partnership on AI, 2019: 2019 Annual Report
- Poplawski-Stephens, 2014: What would be the consequences of not meeting Kyoto carbon targets?, The Institution of Environmental Sciences
- Rabin, 1993: Incorporating Fairness into Game Theory and Economics
- Reisen, 2004: Financing Global and Regional Public Goods through ODA, OECD Development Centre
- United Nations, 2019: Ozone on track to heal completely in our lifetime, United Nations News
- US State Department, 2019: Adherence to and compliance with arms control, nonproliferation, and disarmament agreements and commitments
- Wilson, 2013: Minimizing Global Catastrophic and Existential Risks from Emerging Technologies through International Law
- Yudkowsky, 2007: Pascal’s Mugging, LessWrong
- Yudkowsky, 2008: Artificial Intelligence as a Positive and Negative Factor in Global Risk, book chapter in Global Catastrophic Risks, edited by Nick Bostrom and Milan Ćirković
- Zheng, 2020: China-US relations: work together to prevent an AI arms race, experts say
Endnotes
As defined on p. 3 in (Bostrom, 2001) ↩︎
p. 16 (Reisen, 2004) ↩︎
This example was borrowed from (Barrett, 2007) p.2 ↩︎
(Buchholz and Sandler, 2021) p.10 ↩︎
(Barrett, 2007) p.20 ↩︎
Other aggregator functions are discussed at length in (Buchholz and Sandler, 2021), p. 10-14. ↩︎
See Appendix 4 for further details on the eradication of smallpox. ↩︎
Barrett discusses the details of disease eradication in the context of polio in (Barrett, 2010). ↩︎
(Ord, 2020) p. 167 ↩︎
(Bostrom, 2014) p. 77 ↩︎
(Bostrom, 2014) p.100. This argument cuts both ways: if the first superintelligence is aligned to human values, it could also stop all progress on unaligned AI. ↩︎
(Bostrom, 2019) p. 458 ↩︎
Thanks to Michael Wulfsohn for raising this point. ↩︎
(Bostrom, 2019) p.455 ↩︎
(Ord 2020) p. 167 ↩︎
Ord estimates the risks of existential catastrophe from AI and engineered pandemics as approximately 10% and 3.3%, respectively, over the next century. (Ord, 2020) p. 167 ↩︎
(Ord, 2020) pp. 96-97 ↩︎
See the prior section: “Aggregator functions applied to two existential risks”, above, for why this path dependence exists. ↩︎
See Recommendations section for example criteria of a “strong level of cooperation” ↩︎
The approximate amount of funding for OpenAI, one research lab working on developing safe AI ↩︎
The approximate amount of viewers for the Super Bowl in 2021, typically the most-watched American television event each year. See footnote 22 for further details. ↩︎
$1B could have bought all of the Super Bowl commercials in 2021 (assuming $5M for a 30 second advertisement, and 50 minutes of ads), with $500M left over for production costs. Even more cost-effective advertising is probably achievable via Facebook or other targeted advertising. ↩︎
I have not listed organizations that advocate for international coordination on other risks, such as nuclear war or climate change, since the focus of this post is AI/EP risk. ↩︎
Despite its name, NTI also funds programs to reduce biosecurity risk. I excluded the Global Nuclear Policy Program and International Fuel Cycle Strategies line items, to arrive at an upper bound on pandemic response and prevention funding. Funding amounts can be found on p. 31 of (Nuclear Threat Initiative, 2019). ↩︎
I assumed that Johns Hopkins’ Bloomberg School of Public Health’s $598M budget (Johns Hopkins, 2021) was allocated to research centers on a pro rata basis, based on the number of faculty in each research center. There are 12 faculty members ranking Senior Scholar or above at the Center for Health Security, out of 837 total faculty at the School of Public Health. (12 / 837) * $598M yields a midpoint estimate of $9M. ↩︎
(Partnership on AI, 2019) p. 9 ↩︎
(Future of Life Institute, 2019) ↩︎
(Open Philanthropy, 2017). Used exchange rate of $1.39 per £1. ↩︎
I could not find an exact budget. I estimated these numbers based on FHI’s budget, a similar research center at Oxford. ↩︎
(Leverhulme, 2021) Calculated as 1/10 of the 10 million pound grant from the Leverhulme Trust. See footnote 28 for exchange rates. ↩︎
(Global Catastrophic Risk Institute, 2020), retrieved April 14, 2021 ↩︎
I used the upper bound of my estimates for the Johns Hopkins Center for Health Security and Centre for the Study of Existential Risk budgets. ↩︎
(Ord, 2020) p. 280 ↩︎
(Global Priorities Institute, 2020) p. 43 ↩︎
(Zheng, 2020) ↩︎
(Frischknecht, 2003), p. 2 ↩︎
See Appendix 4 for further details on the eradication of smallpox. ↩︎
(US State Department, 2019), p.47 ↩︎
In the Preventive Policing header in (Bostrom, 2019), Bostrom explores the tradeoff between enforcement efficacy and privacy. While a “high-tech panopticon” probably would not be necessary to reduce AI/EP risks to acceptable levels today, if the means to unleash existential catastrophes come into the hands of many, citizens may choose to trade civil liberties for increased safety. ↩︎
“Such as automated blurring of intimate body parts, and...the option to redact identity-revealing data such as faces and name tags”, (Bostrom, 2019). ↩︎
The Communist Party of China has set a goal to be the global leader in AI by 2030 (O’Meara, 2020). ↩︎
(Yudkowsky, 2008) p.333-338, Yudkowsky lists several reasons to invest in “local” efforts, which he defines as actions that require a “concentration of will, talent and funding to overcome a threshold”. Yudkowsky argues that “majoritarian” action (like international cooperation) may be possible but local action (like technical research) is probably faster and easier. My view is that both majoritarian and local actions should be undertaken to reduce AI risk, especially when there is not common knowledge of all actors’ potentially risky activities. ↩︎
(Ord, 2020) p. 201-202 ↩︎
“if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.” ↩︎
(Bostrom, 2014) pp. 102-106 ↩︎
These final two measures are cited in (Nouri and Chyba, 2008) p. 463-464. ↩︎
(Wilson, 2013), pp. 351-364 discusses what such a treaty might include. ↩︎
(Barrett, 2007) pp. 79-82 ↩︎
(Barrett, 2007) p. 52 ↩︎
(MacAskill, 2018) This statement is made at 3 minutes, 3 seconds. ↩︎
See p. 1252 of (Allwood, et al 2014), which defines Annex-1 countries ↩︎
(Dessai, 2001) p. 5 ↩︎