Context
On behalf of my firm, I recently wrote a short paper (around 2,000 words) on fundamental challenges in AI governance, emerging trends and themes in different jurisdictions, and what issues the recently formed UN High-Level Advisory Board on AI ought to consider, in response to a call for submissions.
I want to share an edited version of the paper here, to get a sense of what the community thinks of how I've condensed the issues. My goal in this piece is to take some key EA insights - like that policymakers should focus on what these systems may be capable of in the near future rather than on what they can do at present - and to make them legible to a broad audience, using as little jargon as possible.
I think I've done a decent job of condensing what I take to be the fundamental challenges from a governance perspective. The other sections of the paper - on emerging trends and issues to consider - are perhaps less well thought out. And of course, this is really just a sketch of the problem - more impressionism than photorealism.
In any event, my sense is that while there is currently immense interest from policymakers and civil society groups (both globally and in Africa, where I'm currently based) in understanding issues of AI governance, there is a lack of pithy, accessible material that speaks to this audience from a perspective that takes existential risk seriously. This paper is one attempt to close that gap, and is part of my broader efforts to build bridges between the EA and human rights communities - as I think these groups often talk past each other, and are worse off for it. I may write more about this latter point in a separate post.
Below is an adapted version of the paper, with some references omitted. For the full version, please see the link above.
The global challenge
There are three reasons that governing AI poses a distinct challenge – the nature of the technology, the sociopolitical context in which it is being created, and the nature of its attendant risks.
The nature of the technology
Modern AI systems have three key components: the computing power necessary to train and run the system, the data used to train the system, and the model architecture and machine-learning algorithms that produce the system’s behaviour. In the present paradigm, exponential increases in the computing power and data used to train systems have led to dramatic improvements in their performance. While these improvements are not guaranteed to continue indefinitely, at present they show no sign of slowing down.
The rapid advances in AI models in recent years – and particularly their “emergent capabilities”, where models demonstrate abilities their creators neither foresaw nor explicitly encoded, such as the ability to reason or to do algebra – make the governance of AI a distinct challenge. It is difficult for technologists, let alone policymakers, to predict what cutting-edge AI systems will be capable of in the next two, five, or fifteen years. Even so, it seems likely that AI technologies will become increasingly capable across an increasingly wide range of domains.
A related challenge here is that at present, cutting-edge AI systems are “black boxes” – their creators are typically unable to explain why a given input produces an associated output. This makes demands that these systems be transparent and explainable not just a political challenge, but a technical one.
Sociopolitical context
AI governance is made more complex by the sociopolitical context in which AI is being developed. Two factors are particularly salient.
- Asymmetric AI development: The immense costs of training advanced AI models limit who can create them, with private companies in the US leading much of the development and Chinese AI labs in second place. The countries in which major AI labs reside have disproportionate influence, as these labs are bound first by national regulation. And the AI labs themselves, thanks to the strength of their technology, have significant political power, allowing them to shape norms and regulatory approaches in the space, and thus to influence the rest of the world.
- Global power dynamics: AI has clear military applications, from use in autonomous weaponry to advanced surveillance and intelligence operations. It also has the potential to enhance a state’s control over its people, and its economic productivity more generally. Thus, some states may aim to advance these technologies as quickly as possible, and to prioritise their own national security objectives over collaborative, globally harmonised regulatory efforts.
AI risks
AI technologies can be used to produce both immense benefits (for example, by advancing scientific research or increasing access to quality healthcare and education) and grave harms. With their current capabilities, AI systems can be used to generate disinformation, perpetuate prejudices, and violate copyright law. As capabilities scale, future systems could be used by malicious actors to create bioweapons, execute advanced cyber-attacks, and otherwise cause catastrophic harm.
Some have argued this puts the risks posed by advanced AI systems in the same category as those posed by pandemics and nuclear war. And this is before one accounts for the possibility of an “artificial general intelligence” (AGI): an AI system that is as competent as humans at most economically valuable tasks. Such a system may not be aligned with human values, may be capable of deception, and may be able to act agentically (operating beyond human control) unless appropriately constrained.
These risks are compounded by the fact that many AI models are being released in an open-source fashion, allowing anyone in any region to access them. Even if 99.9% of people who use these models have no malicious intentions, the 0.1% who do could cause enormous harm.
Emerging international trends
[This section is particularly sparse. Its analysis draws on a report I prepared for a client which I can't yet make public, but I'm excited to produce more on this front in the near future.]
AI governance is a nascent field. As of September 2023, no country has enacted comprehensive AI legislation; existing governance efforts largely take the form of international statements and declarations, national policy and strategy documents, and draft laws.
There is emerging consensus across the world on key principles in AI governance. For example, the need for systems to be transparent and explainable, and the need to actively counter algorithmic discrimination, are both referenced in policies, strategies, and draft laws emerging from the European Union, the United States, the United Kingdom, and China. Other cross-cutting themes include provisions on safety and security (particularly the need to conduct both internal and external audits of advanced AI systems); data privacy; and the need for human oversight and accountability.
However, there is substantial divergence in how these principles should be implemented – as if each region were using virtually the same ingredients to create different dishes. National and regional governance regimes are being built atop pre-existing regulatory cultures. Thus, the United States’ approach to date has been piecemeal, sector-specific, and distributed across federal agencies; while the European Union – through its draft AI Act – seeks to create comprehensive, risk-based regulation that each member state would need to establish new national authorities to administer.
Given this divergence of approaches, and given that the most appropriate national governance regime will likely vary with local context, UN bodies can play an important role in fostering international collaboration and in clarifying how emerging principles should be interpreted.
[The original paper then includes a brief discussion of the state of AI governance in Africa, which I've omitted from this post.]
Recommendations for the High-Level Advisory Board on AI
Global guidance from international bodies such as the UN is necessary to ensure that all states can benefit from the boons offered by increasingly advanced AI systems, while constraining these technologies’ risks. Acknowledging the complexity of this topic, we invite the High-Level Advisory Board to consider the following points in its work:
- Focus on foundation models: The Advisory Board ought to focus not on what today’s AI systems can do, but on what these systems – particularly foundation models – are likely to be able to do within the next five to ten years.
- Focus on human rights: UN bodies can play an invaluable role in clarifying the human rights implications of advanced AI systems, connecting novel risks to well-established philosophical and legal frameworks.
- Mechanisms for participation in the development of advanced systems: Considerable thought needs to be given to involving governments and civil society the world over in the development of advanced AI systems. This is particularly challenging as these systems are primarily being developed by private companies. But given that many of these companies have expressed willingness both to be regulated and to foster democratic participation in the design of their systems, creative solutions surely exist. This issue is central to ensuring that the African continent, and the Global South more broadly, are not left to be passive recipients of technological advancements.
- Creation of an international institution to govern AI: Another central question is whether an international institution to govern AI ought to be created. A recent paper, “International AI Institutions: A Literature Review of Models, Examples, and Proposals”, offers guidance on this issue.
- Clarity on “transparency and explainability” principles: Transparency in the context of AI is as much a technical challenge as a policy one. Guidance should be given on the appropriate threshold of transparency for a given system, and on the consequences for a system whose developers do not meet that threshold. Best practices should also be considered on how public institutions and regulatory bodies can gain insight into the operation of advanced AI systems, so that they can conduct proper oversight.
- Clarity on liability: Guidance on who ought to bear liability for harms arising from the use of advanced AI systems would be invaluable in fleshing out the requirements for oversight and accountability in AI governance.
- Open sourcing: Open sourcing poses unique challenges, as open-source technology can circumvent restrictions that may be imposed when a technology is centrally controlled. Guidance on how companies, states, and individuals should approach this challenge would therefore be welcome.
- Auditing models: It is widely agreed that advanced AI models should be subject to independent audits both in development and after they are deployed. Guidance ought to be provided on best practices in auditing, detailing for example what constitutes independence, who is qualified to conduct such audits, and so on.
- Compute governance: Given that significant computing power is necessary to train and run advanced AI systems, the Advisory Board could consider tracking the distribution of computing power globally, much as nuclear weapons are tracked.
- Licensing regimes: The Advisory Board should consider whether private companies ought to obtain a licence – certifying that certain ethical and safety standards have been met – before deploying their systems; and, if so, who ought to have the authority to grant such licences.
Postscript
In laying out the above case, I have tried to rely on as few premises as possible. For example, there are clearly compelling reasons for policymakers the world over to take risks from misuse seriously, without needing to accept the concept of AGI (particularly if one thinks of AGI as emerging gradually rather than at a discrete moment in time). I've done this because I think it makes the work of persuasion easier by keeping the focus at a level of abstraction more easily understandable to a general audience (so that it stays within the Overton window); and because I think many of the intermediate steps necessary to establish effective global governance regimes are the same regardless of precisely how the problem is framed. But I might be mistaken on this.
As I've had the opportunity to lay out arguments like this to a range of audiences this year (particularly those in the media and in civil society), I continue to workshop the material. So I'd welcome constructive feedback of any form.