
tl;dr: A large number of citizen assemblies about AI will soon be run under a coalition called “Global coalition for inclusive AI”. A short window is about to open for informing politicians and the public about what citizens think of AI in general. I’m suggesting that funders reevaluate the priority of communication, that AI Safety workers consider speaking in public, and that specialists in epistemics/disagreements/debate coordinate and join these efforts.

Epistemic status: This is written quickly and may be heavily modified in the coming days.

On Tuesday, February 11th, Missions Publiques, a French think tank, announced an international coalition for Citizen Assemblies on AI, in partnership with Yale and Stanford. Why is this important?

1-Substantially discussing AI Safety in the assembly is not guaranteed

If you look up the program and speakers of the pilot, which took place at ENS Paris, you find some mention of Safety, but only in a vague and allusive form.

However, on some occasions the organizers may turn out to be open to voluntary contributions to the program. If you are interested and have the right credentials, I’d encourage you to contact the initiative in your country.

It may be that participation in the Citizen Assembly as a deliberating member is voluntary (rather than purely randomized), or randomized but conditional on volunteering first. If so, I highly suggest comparing your counterfactuals; there are surely some well-read people for whom the commitment would be valuable enough.

However, please note that I’m not sure what your final impact would consist of. My belief is that the outcome of citizen assemblies has the power to legitimize or downgrade the credibility and importance of AI Safety for key decision makers. I’m worried about the default outcome, because I expect most assemblies, except perhaps those in the US/UK, to strongly dilute the topic, contributing to making it invisible. I don’t believe assemblies will lead to strong commitments (as was seen in France with the Climate Assembly).

From personal experience, once in the assembly, talking about Safety and even X-risk is fine! The setup allows for genuine discussion and minimizes strong disagreements.

Factors that could increase your counterfactual impact in the case of voluntary participation include:

  • You live outside the US/UK, or in a country where AI Safety is not yet mainstream or seriously considered, or is viewed with skepticism/hostility. In this case, participating as someone who speaks about Safety prevents misrepresentation, or Safety simply being left out of the picture.
    • The exception is if you expect the outcome of the US/UK Citizen Assemblies to be more influential than the aggregation of all the other assemblies, since those countries are home to world-leading efforts in AI research. If you have dual citizenship and one of your countries is the US, this is an important variable (I’m skeptical of this argument, however).
  • [Unsure]: You’re professionally unaffiliated with existing AI Safety efforts, which reinforces your legitimacy as a deliberating member of a citizen assembly.

If you have strong credentials, you can serve as an expert during the Assembly. I don’t know whether you can help organize it.

Note: if you live in a country like the US or the UK, where there are significantly more Safety-aware and technical people, deciding whether to participate creates a coordination problem, where you may want to prioritize the most knowledgeable people.

Not very plausibly, but plausibly enough to warrant action, the whole event might turn out to be a very short action window, with some amount of value lock-in as a result. To give you a sense of the situation, imagine we ran a World Climate Citizen Assembly, but in the early 70s. If people had cached the thought “but we did a citizen assembly in the 70s and concluded that air pollution was important, and climate change a problem for the long term”, this would undermine our current ability to react accordingly. This also might turn out to be the only democratic deliberation we will ever have on AI at a global level. The ‘long reflection’ might be happening right now.

2-Low awareness of AI Safety calls for careful action

By default, I don’t think AI Safety will come up as a concern in most assemblies outside a few countries. The safeguard is therefore to educate the population at large on AI Safety, rather than hoping to become a participant or appointed expert. The plan is for enough people to be exposed to high-quality content that one of them ends up in the Citizen Assembly.

Although the vague talking points of AI Safety are clearly becoming mainstream, the public’s grasp of the actual, technical models of AI Safety, compared to the state of the art, has yet to reach the level it has for, e.g., climate change. Few people around me could name an empirical study on AI safety, even though such studies have helped make the concerns much more legitimate. After reflection, I strongly encourage:

1- Funders / donors to re-assess the priority of producing explainers, such as multilingual Kurzgesagt partnerships, original videos, adverts or collabs with big YouTubers or podcasters from different linguistic areas, documentaries, or high-fidelity media outreach, all including key distillations of up-to-date AIS observations and arguments that could eventually reach future participants. This of course has to be done in full transparency: if communicators, upon learning that the effort is motivated by a Citizen Assembly, want to mention other concerns, their concerns should be taken into account.

2- Scientists to consider speaking in public. Real Citizen Assemblies rely on guest speakers, who can be requested by the public. You should expect some governments (such as France’s) to be hostile to inviting Safety-aware speakers, yet not hostile enough to oppose participants inviting a speaker they themselves thought of. If you have credentials, I’d strongly encourage you to dedicate some time to this, so as to become someone the average Joe spontaneously thinks of when wondering “who could we request as a speaker?”. This means at least appearing on TV or a highly viewed YouTube channel.

3- [Low confidence] Specialists in epistemics and disagreement handling to consider joining one of the national CA projects and actively coordinating together. If you have a background in deliberation, at any scale, AI-assisted or not, your help might be needed. This is a rare opportunity to point out potential improvements in the epistemics of the CA process, or simply to point CA organizers to things that already exist and could improve Citizen Assemblies (such as Double Crux, forecasting, etc.).

What I’m not suggesting:

1-Biasing the conversation

A citizen assembly on AI will go through many topics. I’m not asking participants to hijack the theme and make it all about Safety. It is a crucial theme, but the aim of the assembly is to discuss broader concerns. The process is followed diligently, and attempts to bias the conversation will credibly lead to your participation being revoked. Conflicts of interest should of course be a no-go for a participant, or at the very least be transparently flagged.

2-Being ideological, tampering with the deliberative process

A citizen assembly is a place for dialogue. If you plan to go there with a Soldier Mindset and a fleet of pro-Safety hooligans, please don’t. It’s important to treat these questions with an open mind, given all the uncertainties at stake, and to interact courteously. I think the greatest benefit we can have is making existing concerns and science visible, so being adversarial with people who disagree seems a particularly useless endeavor. Also, please do not flood any Citizen Assembly with excessive EA applications.

3-Being careless

I’m not suggesting selling a solution or a particular threat model as the “definitive” one, whether acting as an expert or a participant, nor using the first arguments you can think of. Talking about Safety requires care: when learning about the risks of AGI, some people see it as a motivation to join the race. If you end up participating in the assembly, coordinate with experts to flag common errors and misconceptions on the pro-safety side.

A last caveat: I’m not an AI Safety expert, whether technical or in governance. If you’re interested, I highly suggest you read the comments below, as I’m expecting important nuances to be brought up.

If you are interested, please join this Discord server to coordinate and stay up to date. You can also send me a DM on the forum.


Comments

Where and when are these supposed to occur, and how can we track that for our respective countries?

Good question. The international coalition is still being built right now, which means that no official dates have been decided. I’ve heard a credible source say the assemblies are planned to start in June. I’ll update the post and Discord server as soon as I get more information.
