
Note: This is a draft of a page eventually intended to be shared broadly, updated frequently, and referenced often. For the moment, it reflects only my personal views, though it was inspired by conversations at work. Suggestions welcome. 

Introduction

On first encountering the prospect of human extinction by AI, a common response is to feel helpless and frustrated. What could I, a lone person with no important connections or power, possibly do about a threat so large and complicated? 

This is my personal answer to that question. I say: We are not alone. We are not powerless. We will not go quietly. 

NORMALIZE

AI is on track to kill us all. 

This is not a fringe concern. While exact estimates differ, scientists have been warning about this outcome since the dawn of computing. In recent years, the advent of modern machine learning has catapulted a reckless and unsafe-by-default research paradigm to global prominence, and some of the field's own champions are sounding the alarm. The public is not deaf to these concerns, either: polls suggest that Americans who favor slowing down AI development outnumber those who favor speeding it up by more than 10 to 1. 

And yet, many concerned policymakers still refrain from publicly acknowledging their own fears or those of their constituents, perhaps motivated by a competing fear of sounding too “alarmist” or “sci-fi”. 

This has to change.

My first and foremost call, therefore, is to share your concerns with others.

At this very moment, researchers are attempting to build machines smarter and more capable than any human. Before the decade is out, they may very well succeed. 

Many who internalize these facts for the first time experience a mixture of shock and fear. That is, it turns out, a healthy and reasonable response. This state of affairs is terrifying. Yet those expressing their concerns frequently encounter a strange sort of social resistance or deflection, such as when Senator Blumenthal said to Sam Altman in a 2023 hearing[1]:

I think you have said, in fact, and I'm gonna quote, 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term…

…and rather than correct this misunderstanding, Altman proceeded to talk about the “far greater jobs” in the future. 

There is an insidious resistance to confronting the bare facts of the situation with frank and unashamed candor. We need to lower that resistance. 

You can help with this effort. Air your concerns about AI, especially but not limited to concerns that it might actually, literally kill everyone. Help others feel more comfortable sharing theirs. 

Lead by example; brave the mild social discomfort of stating your sincere beliefs so that others may know them as such. Be real. Be vulnerable. Then take the further step of asking others what they think. Opine, acknowledge, DISCUSS.

The rest of this post outlines many possible ways to contribute, but if you carry away but one message from this reading, let it be this: When in doubt, talk to people.

PREPARE

Ask yourself: what’s my next best move? If you’re new to the field, you may want to LEARN about the current situation or ENGAGE with the ideas of others. If you feel like you understand the problem, but are at a loss for what to do, it may be time to PLAN your next steps. Perhaps you have a lever that can INFLUENCE the gears of power, a desire to PARTICIPATE in actions by allies, or the means to DONATE to a cause that has earned your support. 

Because I believe in Putting My Money Where My Mouth Is, I’ve shared some examples of times I took my own advice. Look for the “What I'm doing” tag. 

LEARN

These are ways to expand your knowledge about modern artificial intelligence, how it works, the threats it poses, and how you can help.  

WATCH

Watch a video explaining a concept you care about. I recommend this YouTube playlist. Pick your favorite, or start at the beginning. 

READ

Find a website, blog, or article that explores the problem. Aisafety.info has a good introduction for newcomers. See also their “build your knowledge” page. 

What I'm doing: Nowadays I do my best to follow the latest AI news from many sources. Among others, I try to keep up with Zvi’s AI posts, which are fairly comprehensive at the cost of being quite long. I also skim articles and papers shared by friends and colleagues, and save the ones that seem important. 

LISTEN

If you prefer audio, a podcast or audiobook might be your thing. You may enjoy Doom Debates.

STUDY

Dive deeper. BlueDot Impact runs a number of courses on the basics of AI safety. You can apply to a specific course, or explore the curriculum on your own. 

Dan Hendrycks and the Center for AI Safety also have an Intro to ML Safety course. I am less familiar with that curriculum, but you can read a review here and decide if it’s for you. 

Before you commit many hours to a course of study, though, ask yourself: Is learning this my top priority? Is there something I could be doing instead with the information I have right now? 

TEST

Test your knowledge of the field and what AI can do. Try out a modern large language model. What can it do? What are its limits? Can you guess what recent models can and cannot do? 

You might try: 

What I'm doing: Nowadays, I routinely use LLMs for tasks both mundane and critical. I’m careful to check their outputs, as they’re not always reliable[2], but I’ve found them to be extremely useful and at times frighteningly competent. Remember: This is the worst the technology will ever be. 
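If you’re comfortable with a little code, you can also probe a model through its API rather than a chat window. Below is a minimal sketch in Python using the `openai` package; the model name and test question are placeholders of my own choosing, and any provider’s API (or a plain chat interface) works just as well.

```python
# A minimal sketch of probing a model programmatically, assuming the
# `openai` Python package is installed and OPENAI_API_KEY is set in
# your environment. The model name and question are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pick a question where you already know the answer, ideally from
# your own area of expertise, so you can judge the response yourself.
probe = "A farmer has 17 sheep. All but 9 run away. How many are left?"

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whichever model you want to test
    messages=[{"role": "user", "content": probe}],
)
print(response.choices[0].message.content)
```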

SUBSCRIBE

BlueDot Impact also has an excellent list of suggestions on their resources page; look under “podcasts and newsletters”. Overwhelmed? Pick one at random. I hear they’re all pretty good. 

What I'm doing: I’m subscribed to a couple of these, though finding time to read them is challenging. I like the CAIS newsletter.

ENGAGE

Spread the word. Make it just a bit more normal to take AI seriously. 

DISCUSS

Talk to people! In person is best; online works too. Talk to a tech-savvy friend whose opinion you trust. Talk to your family about your concerns. A good lead-in might be: Have you tried the latest AI chatbots? What do you think? Invite them to TEST the AI on a difficult challenge in their field. 

This is a good time to warn about deepfake impersonation scams and talk about precautions, or talk about how AI can manipulate the messages you see online. These are likely not the capabilities that will kill us, but demonstrating the very real dangers we’re seeing right now can be a good primer for the question of what happens as AI labs continue to recklessly scale. 

If you find yourself doing this a lot, you might find it worthwhile to RECORD your conversations so that others might LISTEN.

What I'm doing: I recently had a lengthy email conversation with a machine learning engineer about the wisdom of working for Anthropic or another large AI lab, and the strategic considerations involved. I found it a valuable discussion, and much of the contents may one day become an article. I also keep space on my calendar every week for Navigation Calls, advising newcomers to AI Safety about the field and helping them get their bearings. I sometimes follow up with previous callers to see how they’ve progressed. 

RESPOND

Reply to the thoughts of others. Comment on this post, or another. Offer your feedback. Ask questions; resolve confusion. If something seems off or confusing to you, say so. Chances are, many others share your curiosity. 

What I'm doing: I recently responded to a post by Richard Hanania about AI taking human jobs. I shared my article and a much shorter response on Twitter, and wrote a brief comment on the post as well. 

SHARE

Share an artifact you found helpful. Send a thought-provoking article to a friend. If you agree with something - even mostly agree - share it with your network. 

What I'm doing: I recently reposted this excellent story by Joshua Clymer about AI takeover. (I also RESPONDED briefly with my thoughts. Two for one!) 

PREDICT

Make a concrete prediction about the future of AI. What will it be capable of next month? Next year? What would it mean for you if your prediction is right? If it’s wrong? Set a reminder; check your prediction later. 

If you made a prediction some time ago, revisit it. What’s changed since you made it? What was the outcome? 

What I'm doing: I use Fatebook to track my concrete predictions, though I admit I want to be making more of them. When I have a specific question about the future, whether AI or otherwise, I check whether prediction market sites like Metaculus or Manifold are trading on that question or a related one. As it happens, they currently have some relevant markets on extinction by AI and on safety-related legislation. (Prediction markets are known to struggle with some long-term questions, and can be distorted by the fact that it’s hard to collect if you’re dead; but they can still be a good starting point from which to update. Do you think an estimate is too high or too low?)
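If you like numbers, one simple way to score yourself later is the Brier score: the mean squared difference between the probability you assigned and what actually happened. Here’s a small self-contained sketch; the example predictions are hypothetical, and tools like Fatebook will compute this for you automatically.

```python
# A minimal sketch for scoring your own forecasts with the Brier score:
# the mean squared gap between your stated probability and the actual
# outcome (1 if it happened, 0 if not). Lower is better; always
# answering 50% scores 0.25. These example predictions are made up.
predictions = [
    # (description, probability you assigned, outcome)
    ("Model X passes benchmark Y by June", 0.70, 1),
    ("Major AI safety bill passes this year", 0.30, 0),
    ("Frontier lab announces a new scaling run", 0.90, 1),
]

brier = sum((p - outcome) ** 2 for _, p, outcome in predictions) / len(predictions)
print(f"Brier score over {len(predictions)} predictions: {brier:.3f}")
```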

PLAN

Sometimes the next best action is to sit down and figure out the next best action. Here are some ways to approach that step. 

If you’re looking for a more in-depth approach: I have used, and frequently recommend to others, the 80,000 Hours career planning template as an anchor for ideas and planning. It is not aimed exclusively at AI extinction risk, but it asks many excellent questions that you may want to consider in a systematic way. 

THINK

Consider the nature of the problem and the skills you have available. Set a 5-minute timer. Reflect on what you’ve learned so far. What else do you want to know? How could you come to know it? 

Perhaps you have a decision to make; perhaps you’ve already made it. What comes next? 

What I'm doing: I’ve been through many iterations of this. I ultimately decided to switch careers after about eight years doing Reliability Engineering in oil & gas. My main regret is dragging my feet for so long: I didn’t feel that my strengths mapped onto any opportunities in the field, but it turned out I’d underestimated them. Are you underestimating yours? 

CONFRONT

Face an ugly possibility head-on. What does it mean for you, personally, if AI is on track to cause human extinction? In the world where AI really can kill everyone on Earth in your lifetime, what’s the first thing on your mind? What does the version of you who lives in that world decide to do? 

Perhaps you’re not fully convinced that AI could transform the world in your lifetime. Perhaps you don’t think it will ever be better than humans at the things that really matter. Perhaps you have some other objection. For a few minutes, set aside your skepticism and take the possibility seriously. 

After this exercise, reflect on the possibility with fresh eyes. How likely does it seem that you live in the world you imagined? How might you tell? 

What I’m doing: I’ve already made my peace with the prospect of humanity’s extinction — or perhaps it would be more accurate to say that I resolved to make war upon it. I may fail, of course. The odds are stacked against me. Yet for now, life goes on. For most of human history, our ancestors dealt with the possibility that a nearby volcano might erase everyone they knew, a deadly plague might empty their town, or the likes of Genghis Khan or Tamerlane might massacre an entire village and build a tower of their skulls. In the looming shadow of such disasters, they persevered. They lived, wed, raised their children, and planted their trees, even while doing what they could to protect their loved ones from cataclysm. So can I, and so can you. 

BRAINSTORM

Make a list of next steps you might take. Write down all your ideas, even the ones that seem silly. You can even ask an AI to brainstorm with you. You may be surprised by how useful that simple step can be. 

What I'm doing: This article draws from a MIRI brainstorming session on “calls to action” that was collected and organized by a colleague (Thanks, Mitch!). Here’s me asking ChatGPT for ideas on expanding an early draft of this article.

PRUNE

Look at a list of ideas - perhaps one you just BRAINSTORMED, perhaps this one, or perhaps a list made by others. Which seem especially promising to you? Which are easy to begin, right now? If any meet both criteria - there’s your next step! 

ESTIMATE

If your next steps hinge on something you don’t know, make your best guess and move forward. Make an assumption and write it down. Then proceed. Given the assumption you just made, what’s your next step? Revisit that assumption later; does it seem true? To what extent do your actions depend on it? 

INFLUENCE

Grab a lever of power. Yank it as hard as you can. 

More ideas can be found here.

VOTE

Not many politicians have taken a firm stance on AI development yet. But many have made statements about their intent. What does your local representative have to say on the subject? What commitments have they made? Make it embarrassing for the answer to be “none”. 

UK voters can view which of their leaders support the Control AI campaign statement here. I’m not aware of a similar effort in the US yet; perhaps you’ll be the one to make it. 

CALL

Contact a representative and ask what they’re doing to prevent extinction by AI. It’s a rule of thumb in politics that for every person who calls about an issue, many more are silently concerned. Tell your representative your concerns; ask them their position and what they intend to do. If you like this and want to keep doing it, consider writing down the answers in a format like Control AI’s campaign statement above. 

SPEAK

Share your thoughts publicly on your platform of choice, even if you’re unsure. Help others navigate this issue. Become the person others look to when they have questions. 

What I'm doing: Last year, I ran a workshop at the Virtual AI Safety Unconference (VAISU) discussing the gaps in current alignment plans. (You can view the rest of the recorded talks here.) 

WRITE

Whether you’re an amateur, professional, or intermittent sharer of memes, you may have a way to get the word out online or in print. 

What I'm doing: I recently wrote up my views on AI to share with family, friends, and the broader public, and shared them on Facebook, Twitter/X, and my personal blog.

PARTICIPATE

Direct action for those looking to up the ante. You’ll have to decide for yourself how to best use your time, but you’ll find some ideas below. 

ATTEND

Many universities and large cities will have an AI Safety group. There might be one in your area; check them out. If there isn’t a nearby group you want to join or support…then perhaps you could LEAD one yourself? 

For more communities, online or in person, see here.

What I'm doing: Here’s the one at my alma mater, Georgia Tech

VOLUNTEER

Plenty of organizations could use your support. You can find lots of projects here. Whether your contribution involves a few minutes of website feedback, a few hours a week of support, or a long and fruitful collaboration, every little bit helps. Learn more about volunteering here.

What I'm doing: One of my first direct contributions to the AI Safety ecosystem was providing early feedback on websites like moratorium.ai and aisafety.com. I later facilitated study groups through AI Safety Quest, and this experience proved useful when I applied to facilitate BlueDot’s curriculum. I still run the occasional navigation call for newcomers to the field. As it happens, they’re looking for more volunteers…

MENTOR

Guide others in the field. There’s always a need for experienced mentors who can help others learn, transition, or grow their careers in alignment, policy, or a related field. If you’re experienced in machine learning and might want to mentor, check out SPAR and MATS.

SIGN

Put your name to something you agree with. PauseAI shares recommended petitions on their action page. If you command substantial influence in AI or another field, consider signing the CAIS Statement on AI Risk.

PROTEST

Join others in person who object to the reckless pursuit of AIs that could destroy humanity. PauseAI organizes demonstrations and protests worldwide. If you protest, remember that your actions reflect on the movement as a whole. Keep it civil. 

APPLY

Consider applying for a job in AI Safety, or applying your own skills to improve our chances. It’s not all technical work; whatever your skillset, there’s a good chance the field could use your help. 

  • If you’re in tech or engineering, you may be suited to technical governance or information security.
  • If you’re in law or policy, consider leveraging those skills in AI policy. One possible route is liability law for present and future harms caused by AI.
  • If you’ve an entrepreneurial mindset, consider filling a key gap with a startup of your own.
  • If you have connections or are a skilled communicator, consider how you might use those to NORMALIZE extinction concerns. 

Direct alignment research is important, too, but I’m not optimistic that a new career in alignment will have time to bear fruit before it’s too late. Consequently, I tend to direct the marginal technically-inclined researcher towards solving the technical problems needed for good governance.

You can learn more about career options here.

What I'm doing: Before my current job with MIRI, I was a teaching fellow at BlueDot Impact. Teaching others helped me learn about the field. 

DONATE

Money pretty much always helps. You can support MIRI or find other causes in the field.

ACT

If you’re still on the fence, here’s a final challenge: Pick one action that you can do right now. Set a reminder to review your options again tomorrow, or at another time that suits you. Then close this list and do the thing. 

[1] Timestamp is around 53:30 if, like me, you want to actually hear the exchange.

[2] While I was attempting to source the Blumenthal quote, and before someone kindly linked me to Daniel Eth's quotes from the hearing, DeepSeek correctly identified the date of the hearing, but repeatedly hallucinated Sam Altman saying something much more reasonable, e.g.: "No, Senator. The existential risk I focus on is the potential for AI systems, particularly as they become more powerful than humans, to act in ways that could lead to human extinction or catastrophic harm. Job displacement is a serious concern, but it’s a separate category." I wish, DeepSeek. I wish.
