Hi,

As a disclaimer, this will not be as eloquent or well-informed as most of the other posts on this forum. I’m something of an EA lurker who has a casual interest in philosophy but is wildly out of her intellectual depth on this forum 90% of the time. I’m also somewhat prone to existential anxiety and have a tendency to become hyper-fixated on certain topics - and recently had the misfortune of falling down the AI safety internet rabbit hole.

It all started when I used ChatGPT for the first time and started to become concerned that I might lose my (content writing) job to a chatbot. My company then convened a meeting where they reassured us all that, despite recent advances in AI, they would continue taking a human-led approach to content creation ‘for now’ (which wasn’t as comforting as they probably intended).

In a move I now somewhat regret, I decided my best bet would be to find out as much about the topic as I could. This was around the time that Geoffrey Hinton stepped down from Google, so the first thing I encountered was one of his media appearances. This quickly updated me from ‘what if AI takes my job’ to ‘what if AI kills me’. I was vaguely familiar with the existential-risk-from-AI scenarios already, but had considered them far enough off in the future not to really worry about.

In looking for less bleak perspectives than Hinton’s, I managed to find the exact opposite (i.e. that Bankless episode with Eliezer Yudkowsky). From there I was introduced to a whole cast of similarly pessimistic AI researchers predicting the imminent extinction of humanity with all the confidence of fundamentalist Christians awaiting the rapture (I’m sure I don’t have to name them here - also, I apologise if any of you reading this are the aforementioned researchers; I don’t mean this to be disparaging in any way - this was just my first impression as one of the uninitiated).

I’ll be honest and say that I initially thought I’d stumbled across some kind of doomsday cult. I assumed there must be some more moderate expert consensus that the more extreme doomers were diverging from. I spent a good month hunting for the well-established body of evidence projecting a more mundane, steady improvement of technology, where everything in 10 years would be kinda like now but with more sophisticated LLMs and an untold amount of AI-generated spam clogging up the internet. Hours spent scanning think-pieces and news reports for the magic words ‘while a minority of researchers expect worst-case scenarios, most experts believe…’. But ‘most experts’ were nowhere to be found.

The closest I could find to a reasonably large sample size was that 2022 (?) survey that gave rise to the much-repeated statistic about half of ML researchers placing a >10% chance on extinction from AI. If anything, that survey seemed reassuring, because the median probability was something around 5%, as opposed to the >50% estimated by the most prominent safety experts. There was also the recent XPT forecasting contest, which, again, produced generally low p(doom) estimates and seemed to leave most people quibbling over the fact that domain experts were assigning single-digit probabilities to AI extinction, while superforecasters thought the odds were below 1%. I couldn’t help but think that these seemed like strange differences of opinion to be focused on, when you don’t need to look far to find seasoned experts who are convinced that AI doom is all but inevitable within the next few years.

I now find myself in a place where I spend every free second scouring the internet for the AGI timelines and p(doom) estimates of anyone who sounds vaguely credible. I’m not ashamed to admit that this involves a lot of skim reading, since I, a humble English lit grad, am simply not smart enough to comprehend most of the technical or philosophical details. I’ve filled my brain with countless long-form podcasts, forum posts and twitter threads explaining that, for reasons I don’t understand, I and everyone I care about will die in the next 3 years. Or the next 10. Or sometime in the late 2030s. Or that there actually isn’t anything to worry about at all. It’s like having received diagnoses from about 30 different doctors.

At this point, I have no idea what to believe. I don’t know if this is a case of the doomiest voices being the loudest, while the world is actually populated with academics, programmers and researchers who form the silent, unconcerned majority - or whether we genuinely are all screwed. And I don’t know how to cope psychologically with not knowing which world we’re in. Nor can I speak to any of my friends or family about it, because they think the whole thing is ridiculous, and I’ve put myself in something of a boy who cried wolf situation by getting myself worked up over a whole host of worst-case scenarios over the years.

Even if we are all in acute danger, I’m paralysed by the thought that I really can’t do anything about it. I’m pretty sure I’m not going to solve the alignment problem using my GCSE maths and the basic HTML I taught myself so I could customise my tumblr blog when I was 15. Nor do I have the social capital or media skills to become some kind of everywoman tech Cassandra warning people about the coming apocalypse. Believing that we’re (maybe) all on death’s door is also making it extremely hard to motivate myself to make any longer term changes in my own life, like saving money, sorting out my less-than-optimal mental health or finding a job I actually like.

So I’m making this appeal to the more intelligent and well-informed - how do you cope with life through the AI looking glass? Just how worried are you? And if you place a significant probability on the death of literally everyone in the near future, how does that impact your everyday life?

Thanks for reading!

Comments

I have a master's degree in machine learning and I've been thinking a lot about this for like 6 years, and here's how it looks to me:

  • AI is playing out in a totally different way to the doomy scenarios Bostrom and Yudkowsky warned about
  • AI doomers tend to hang out together and reinforce each other's extreme views
  • I think rationalists and EAs can easily have their whole lives nerd-sniped by plausible but ultimately specious ideas
  • I don't expect any radical discontinuities in the near-term future. The world will broadly continue as normal, only faster.
  • Some problems will get worse as they get faster. Some good things will get better as they get faster. Some things will get weirder in a way where it's not clear if they're better or worse.
  • Some bad stuff will probably happen. Bad stuff has always happened. So it goes.
  • It's plausible humans will go extinct from AI. It's also plausible humans will go extinct from supervolcanoes. So it goes.

> I’m paralysed by the thought that I really can’t do anything about it.

IMO, a lot of people in the AI safety world are making a lot of preventable mistakes, and there's a lot of value in making the scene more legible. If you're a content writer, then honestly trying to understand what's going on and communicating your evolving understanding is actually pretty valuable. Just write more posts like this.

>It's plausible humans will go extinct from AI. It's also plausible humans will go extinct from supervolcanoes. 

Our primitive and nontechnological ancestors survived tens of millions of years of supervolcano eruptions (not to mention mass extinctions from asteroid/comet impacts), and our civilization's ability to withstand them is unprecedentedly high and rapidly increasing. That's not plausible; it's enormously remote, well under 1/10,000 this century.

I agree with what I think you intend to say, but in my mind plausible = any chance at all.

This is what I meant, yeah.

There's also an issue of "low probability" meaning fundamentally different things in the case of AI doom vs supervolcanoes.

P(supervolcano doom) > 0 is a frequentist statement. "We know from past observations that supervolcano doom happens with some (low) frequency." This is a fact about the territory.

P(AI doom) > 0 is a Bayesian statement. "Given our current state of knowledge, it's possible we live in a world where AI doom happens." This is a fact about our map. Maybe some proportion of technological civilisations do in fact get exterminated by AI. But maybe we're just confused and there's no way this could ever actually happen.

What.

Supervolcano doom probabilities are more resilient because the chain of causation is shorter and we have a natural-history track record to back up some of the key points in the chain. But the differences are a matter of degree, not kind. It is very much not the case that we've had a long-term track record of human civilizations that died to supervolcanoes to draw from; almost every claim about the probability of human extinction is ultimately a claim about (hopefully improving) models, not a sample of long-run means.

Hi, thanks for your piece. You write beautifully and content writing is actually needed in AI alignment. Maybe we can help you to find a meaningful career that you're happy with? It would be great to have a chat, if you're up for that. Kind regards, Rochelle (rochelle@ashgro.org)

Hi Rochelle, thanks for this, I appreciate it! I’ve dropped you an email :)

You might appreciate Julia Wise's finding equilibrium in a difficult time. The post was about how to relate to the world at the start of covid (notably, a global pandemic that >99% of people survived, despite large human costs to both lives and livelihoods). But it can apply just as easily to AI doom fears.

This is a historic event. I find it kind of comforting to know that other people have been through similar historic events. Other people throughout the centuries have experienced epidemics and have worried, argued about what to do, and done their best to take care of each other.

C.S. Lewis wrote in 1948:

"In one way we think a great deal too much of the atomic bomb. “How are we to live in an atomic age?” I am tempted to reply: “Why, as you would have lived in the sixteenth century when the plague visited London almost every year, or as you would have lived in a Viking age when raiders from Scandinavia might land and cut your throat any night; or indeed, as you are already living in an age of cancer, an age of syphilis, an age of paralysis, an age of air raids, an age of railway accidents, an age of motor accidents.”

"In other words, do not let us begin by exaggerating the novelty of our situation. Believe me, dear sir or madam, you and all whom you love were already sentenced to death before the atomic bomb was invented: and quite a high percentage of us were going to die in unpleasant ways. We had, indeed, one very great advantage over our ancestors—anesthetics; but we have that still. It is perfectly ridiculous to go about whimpering and drawing long faces because the scientists have added one more chance of painful and premature death to a world which already bristled with such chances and in which death itself was not a chance at all, but a certainty.

"This is the first point to be made: and the first action to be taken is to pull ourselves together. If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs. They may break our bodies (a microbe can do that) but they need not dominate our minds."

(emphasis mine)

I think it's great that you're asking for support rather than facing existential anxiety alone, and I'm sorry that you don't seem to have people in your life who will take your worries seriously and talk through them with you. And I'm sure everyone responding here means well and wants the best for you, but joining the Forum has filtered us—whether for our worldviews, our interests, or our susceptibility to certain arguments. If we're here for reasons other than AI, then we probably don't mind talk of doom or are at least too conflict-averse to continually barge into others' AI discussions.

So I would caution you that asking this question here is at least a bit like walking into a Bible study and asking for help from people more righteous than you in clarifying your thinking about God because you're in doubt and perseverating on thoughts of Hell. You don't have to listen to any of us, and you wouldn't have to even if we really were all smarter than you.

You point out the XPT forecasts. I think that's a great place to start. It's hard to argue that a non-expert ought to defer more to AI-safety researchers than to either the superforecasters or the expert group. Having heard from XPT participants, I don't think the difference between them and people more pessimistic about AI risk has to do with facility with technical or philosophical details. This matches my experience reading deep into existential risk debates over the years—they don't know anything I don't. They mostly find different lines of argument more or less persuasive.

I don't have first-hand advice to give on living with existential anxiety. I think the most important thing is to take care of yourself, even if you do end up settling on AI safety as your top priority. A good therapist might have helpful ideas regarding rumination and feelings of helplessness, which aren't required responses to any beliefs about existential risk. 

I'm glad to respond to comments here, but please feel free to reach out privately as well. (That goes for anyone with similar thoughts who wants to talk to someone familiar with AI discussions but unpersuaded about risk.)

"would caution you that asking this question here is at least a bit like walking into a Bible study and asking for help from people more righteous than you in clarifying your thinking about God because you're in doubt and perseverating on thoughts of Hell. You don't have to listen to any of us, and you wouldn't have to even if we really were all smarter than you."

This is #wisdom, love it.

I'm really sorry you're experiencing this. I think it's something more and more people are contending with, so you aren't alone, and I'm glad you wrote this. As somebody who's had bouts of existential dread myself, there are a few things I'd like to suggest:

  1. With AI, we fundamentally do not know what is to come. We're all making our best guesses -- as you can tell by finding 30 different diagnoses! This is probably a hint that we are deeply confused, and that we should not be too confident that we are doomed (or, to be fair, too confident that we are safe).
  2. For this reason, it can be useful to practice thinking through the models on your own. Start making your own guesses! I also often find the technical and philosophical details beyond me -- but that doesn't mean we can't think through the broad strokes. "How confident am I that instrumental convergence is real?" "Do I think evals for new models will become legally mandated?" "Do I think they will be effective at detecting deception?" At the least, this might help focus your content consumption instead of leaving it an amorphous blob of dread -- I say this because the invasion of Ukraine similarly sent me reading as much as I could. Developing a model by focusing on specific, concrete questions (e.g. What events would presage a nuclear strike?) helped me transform my anxiety from "Everything about this worries me" into something closer to "Events X and Y are probably bad, but event Z is probably good".
  3. I find it very empowering to work on the problems that worry me, even though my work is quite indirect. AI safety labs have content writing positions on occasion. I work on the 80,000 Hours job board and we list roles in AI safety. Though these are often research and engineering jobs, it's worth keeping an eye out. It's possible that proximity to the problem would accentuate your stress, to be fair, but I do think it trades against the feeling of helplessness!
  4. C. S. Lewis has a take on dealing with the dread of nuclear extinction that I'm very fond of and think is applicable: ‘How are we to live in an atomic age?’ I am tempted to reply: ‘Why, as you would have lived in the sixteenth century when the plague visited London almost every year...’ 


I hope this helps!

Rereading your post, I'd also strongly recommend prioritizing finding ways not to spend all your free time on it. Not only do I think that level of fixating is one of the worst things people can do to make themselves suffer, it also makes it very hard to think straight and figure things out!

One thing I've seen suggested is setting aside dedicated research time each day for your questions. This is a compromise that frees up the rest of your time for things that don't hurt your head. And hang out with friends who are good at distracting you!

Perhaps it would be useful to talk to someone who was alive in 1960 about how they went about their lives under the constant threat of nuclear war?

> Nor can I speak to any of my friends or family about it, because they think the whole thing is ridiculous, and I’ve put myself in something of a boy who cried wolf situation by getting myself worked up over a whole host of worst-case scenarios over the years.

This seems important to me, having people to talk to.

How about sharing that you have uncertainty and aren't sure how to think about it, or something like that? Seems different from "hey everyone, we're definitely going to die this time" and also seems true to your current state (as I understand it from this post)

Hi bethhw!

Thanks for taking the time to write up this post. Many of us have gone through similar things and can relate to the struggle you are experiencing.

Regarding the "I can't do anything about it" part:

  1. I see this meme a lot in the AI safety community. I think it's a function of a) the underlying complexity of the problem and b) the reverence we have for some of the people working on this, with a dash of c) "my academic background is completely irrelevant" and d) "it's too late for me to build the skills I need to contribute" thrown in there.
  2. I won't argue against a) and b) - the problem IS hard and the people working on it ARE often very impressive with regards to their intellectual chops.
  3. But c) and d) are a completely different story, and I want to push back hard against them. People routinely underestimate how many different skills a given field can benefit from. AI safety needs cognitive scientists, STEM people, historians, activists, political scientists, artists, journalists, content editors, office managers, educators, finance specialists, PAs - if you truly think your background is irrelevant, send me a DM and I'm happy to take bets on whether I can find a position that would benefit from your skillset. ;-) 
    (Anecdotally, I used to be a teacher, and I'm now working on case studies for AI Standards, field-building and Research Management. It turned out people really appreciate it when you can explain something in clear terms, organize processes well and help others to engage with important but thorny ideas.)


On building skills: The field is so young and nascent that literally nobody is "on the ball", and while this is deeply concerning from an x-risk perspective, it is good news for you - there is a limited number of key concepts and models to understand. For many people, it is not too late to learn about these things and to build skills, and there are many resources and programs to support this.

Last but not least: Reach out to me if you'd like to discuss your options or just want to talk to a kind voice. I'd be happy to. :)

I just wanted to thank everyone who commented for your thoughtful responses. They really helped me think through this issue a bit more. I really appreciate this community and will definitely be sticking around!

> At this point, I have no idea what to believe. I don’t know if this is a case of the doomiest voices being the loudest, while the world is actually populated with academics, programmers and researchers who form the silent, unconcerned majority - or whether we genuinely are all screwed.

My sense is that this is broadly true, at least in the sense of 'unconcerned' meaning 'have a credence in AI doom of max 10%, and often lower'. All the programmers and academics working on AI presumably don't think they're accelerating or enabling the apocalypse; otherwise they could stop - these are highly skilled software engineers who would have no trouble finding a new job.

Also, every coder in the world knows that programs often don't do what you think they're going to do, and that as much as possible you have to break them into components and check rigorously in lab conditions how they behave before putting them in the wild. Of course there are bugs that get through in every program, and of course there are ways that this whole picture could be wrong. Nonetheless, it gives me a much more optimistic sense of 'number of people (implicitly) working on AI safety' than many people in the EA movement seem to have.

Unfortunately, I do not have time for a long answer, but I can understand very well how you feel. Stuff that I find helpful is practising mindfulness and/or stoicism and taking breaks from the internet. You said that you find it difficult to make future plans. In my experience, it can calm you down to focus on your career / family / retirement even if it is possible that AI timelines are short. If it turns out that fear of AI is the same as fear of grey goo in the 90s, making future plans is better anyway.

You may find this list of mental health suggestions helpful:

https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of

Don't be afraid to seek help if you develop serious mental health issues.

My answer might be useless, but I went through something similar. In my case, it was gone after a week; I was just temporarily overwhelmed.

I also hold the opinion that Eliezer Yudkowsky might be doing important work (I am not educated enough to know what that valuable work is), but I think he should stop being anywhere near a spotlight. He might seem like one of the key people within EA - he certainly seemed that way to me when I discovered EA - but he is awful at PR. There are loads of skeptics around, and he is not the only one who can represent EA.

On the other hand, maybe EY is a blessing in disguise: by creating so much fear, he actually helps the AI safety area. I don't think there is any harm done in exploring that area, after all!

However, also bear in mind that even the "senior" EAs are still people, and their opinions might mean nothing. As an example, Will MacAskill expressed this: "I think existential risk this century is much lower than I used to think — I used to put total risk this century at something like 20%; now I’d put it at less than 1%" https://forum.effectivealtruism.org/posts/oPGJrqohDqT8GZieA/ask-me-anything?commentId=HcRNG4yhB4RsDtYit. Well, thanks a lot, Will, because that fear of a 20% chance of death almost forced me into changing careers, which would have been useless now, and I don't think Will would have done anything to help me fix such a mistake once I had switched.

I think this is an extremely good post laying out why the public discussion on this topic might seem confusing:

https://www.lesswrong.com/posts/BTcEzXYoDrWzkLLrQ/the-public-debate-about-ai-is-confusing-for-the-general

It might be somewhat hard to follow, but this little prediction market is interesting (wouldn't take the numbers too seriously):

In December of last year it seemed plausible to many people online that by now, August 2023, the world would be a very strange, near-apocalyptic place full of inscrutable alien intelligences. Obviously, this is totally wrong. So it could be worth comparing others' "vibes" here to your own thought process to see if you're overestimating the rate of progress.

Paying for GPT-4, if you have the budget, may also help you calibrate. It's magical, but you run into embarrassing failures pretty quickly, which most commentators rarely talk about.
