
Overview

In this ~28-minute video, Toby Ord discusses humanity's journey from the past to the present, emphasizing the current era as critical due to existential risks. He introduces the concept of 'The Precipice', highlighting the potential for human extinction from technological advancements. He also covers moral foundations for safeguarding humanity, our future potential, the probabilities of different existential risks, and strategies for mitigating them. Overall, he emphasizes the need for intergenerational cooperation, preserving cultural heritage, and donating to long-term-oriented causes.

Notes

Humanity's Grand Journey (00:05 - 00:20)

  • Toby Ord discusses humanity's journey from the savannas of Africa to the present day.
  • Highlights key transitions: Agricultural Revolution, Scientific Revolution, Industrial Revolution.
  • Emphasizes the current era as the most critical due to existential risks.

Existential Risks (00:20 - 00:35)

  • Introduction of the concept of 'The Precipice' - a period of high existential risk.
  • Discussion on the potential for human extinction due to technological advancements.
  • Comparison of natural risks vs. anthropogenic risks.

Moral Foundations and Future Potential (00:35 - 00:50)

  • Exploration of different moral foundations for safeguarding humanity.
  • Discussion on the potential future of humanity if existential risks are mitigated.
  • Importance of intergenerational cooperation and preserving cultural heritage.

Probabilities and Strategies (00:50 - 01:05)

  • Presentation of probabilities for different existential risks.
  • Discussion on the need for a grand strategy for humanity.
  • Introduction of the concept of 'Existential Security' and 'The Long Reflection'.

Book Overview and Q&A (01:05 - 01:20)

  • Overview of Toby Ord's book 'The Precipice'.
  • Discussion on the importance of donating to long-term oriented causes.
  • Q&A session addressing various questions from the audience.

Transcript

Speaker 1: 
Please join me in welcoming to the stage Toby Ord. 

Toby Ord: 
In the grand course of human history, where do we stand? Could we be living at one of the most influential times there will ever be? Our species, Homo sapiens, arose on the savannas of Africa 200,000 years ago. What set us apart from the other animals was both our intelligence and also our ability to work together, to build something greater than ourselves. From an ecological perspective, it was not a human that was so distinctive, but humanity. And crucially, we were able to cooperate across time as well as across space. If each generation had to learn everything anew, then even a crude iron shovel would be forever beyond our reach. But we learned from our ancestors, added innovations of our own, and passed them all down to our children. 

And instead of dozens of humans in cooperation, we had tens of thousands of humans cooperating and improving ideas across deep time. Little by little, our knowledge and our culture grew. At several points in humanity's long history, there's been a great transition, a change in human affairs that accelerated our progress and shaped everything that would follow. 10,000 years ago was the agricultural revolution. Farming could support 100 times as many people as foraging on the same piece of land, making much wider cooperation possible. Instead of dozens of people in cooperation, we had millions. This allowed people to specialize into thousands of different trades. There were rapid developments in our institutions, our culture, and technology. We developed writing, mathematics, engineering, law, and we established civilization. Then, 400 years ago, was the scientific revolution. 

The scientific method replaced our reliance on received authorities with careful observation of the natural world and testable explanations of what we saw. The ability to test and discard those bad explanations helped us to break free of dogma and allowed us, for the first time, to have the systematic creation of knowledge about the natural world. And some of this newfound knowledge could be used to improve the world around us. So the accelerated accumulation of knowledge brought with it an acceleration of technological innovation, giving humanity increasing power over the natural world. Then, 200 years ago, was the industrial revolution. This was made possible by the discovery of immense reserves of energy in the form of fossil fuels, allowing us to capture a portion of the sunlight that had shone down upon the earth over the millions of years before. 

Productivity and prosperity began to accelerate, and a rapid sequence of innovations ramped up the efficiency, scale, and variety of automation, giving rise to the modern era of growth. But there's recently been another transition that I believe is more important than any of these that have come before. With the detonation of the first atomic bomb, a new age for humanity began. At that moment, our rapidly accelerating technological power finally reached the threshold where we might be able to destroy ourselves. The first point where the threat to humanity from within exceeded that from the natural world. A point where the entire future of humanity hangs in the balance, where every advance that our ancestors achieved could be squandered, and every advance our descendants could achieve would be denied. These threats to humanity, and how we address them, define our time. 

The advent of nuclear weapons posed a real risk of human extinction in the 20th century. With the continued acceleration of technology, and without serious effort to protect humanity, there's strong reason to believe that the risk will be higher this century and every century that technological progress continues. And because these anthropogenic risks outstrip all of the natural risks combined, they set the clock on how long humanity has to pull back from the brink. If I'm even roughly right about their scale, then we cannot survive many centuries with risk like this. It's an unsustainable level of risk. Thus, one way or another, this new period is unlikely to last more than a few centuries. Either humanity takes control of its destiny and reduces the risk to a sustainable level, or we destroy ourselves. I sometimes think of human history as a grand journey through the wilderness. 

There are wrong turns and times of hardship, but also times of sudden progress and heady views. In the middle of the 20th century, we came through a high mountain pass and found that the route onwards was a narrow path along the cliffside, a crumbling ledge on the brink of a precipice. Looking down brings a deep sense of vertigo. If we fall, everything is lost. We do not know just how likely we are to fall, but it's the greatest risk to which we have ever been exposed. This comparatively brief period is a unique challenge in the history of our species. Our response to it will define our story. Historians of the future will name this time, and school children will study it. But I think we need a name now, and I call it the precipice. 

The precipice gives our time immense meaning in the grand course of history. If we make it that far, this is what our time will be remembered for: the highest levels of risk, and for humanity opening its eyes, coming into its maturity, and guaranteeing a long and flourishing future. I'm not glorifying our generation, nor am I vilifying us. The point is that our actions have uniquely high stakes. Whether we are great or terrible will depend upon what we do with this opportunity. And I hope that we live to tell our children and grandchildren that we did not stand by, but that we used our chance to play the part that history gave us. Humanity's future is ripe with possibility. We've achieved a rich understanding of the world we inhabit and a level of health and prosperity of which our ancestors could only dream. 

We've begun to explore the other worlds and the heavens above us, and to create virtual worlds completely beyond our ancestors' comprehension. We know of almost no limits to what we might ultimately achieve. But human extinction would foreclose our future. It would destroy our potential and would eliminate all possibilities but one: a world bereft of human flourishing. Extinction would bring about this failed world, and it would lock it in forever. There would be no coming back. But it's not the only way our potential could be destroyed. Consider a world in ruins, where catastrophe has done such damage to the environment that civilization has completely collapsed and is unable to ever be recovered. Even if such a catastrophe did not cause our extinction, it would have a similar effect upon our future. 

The vast realm of futures currently open to us would have collapsed into a narrow range of meager options. We would have a failed world with no way back. Or consider a world in chains, where the entire world has become locked under the rule of an oppressive totalitarian regime determined to perpetuate itself. If such a regime could be maintained indefinitely, then descent into this totalitarian future would also have much in common with extinction, just a narrow range of terrible futures with no way out. What all these possibilities have in common is that humanity's once soaring potential has been permanently destroyed, not just the loss of everything we have, but the loss of everything we could have ever achieved. This is what I call an existential catastrophe, and the risk of it occurring is an existential risk. 

There are different ways of understanding what makes an existential catastrophe so bad. In the book, I explore five different moral foundations for the importance of safeguarding humanity from existential risk. Our concern could be rooted in the present, the immediate toll such a catastrophe would have on everyone alive at the time that it struck. Or our concern could be rooted in the future, stretching so much further than our own moment, everything that would be lost or in the past, on how we would fail every generation that came before us. We could also make a case based on virtue, on how, by risking our entire future, humanity itself displays a staggering deficiency of patience, of prudence, and of wisdom. 

Or we could make a case based on our cosmic significance, on how this might be the only place in the universe with intelligent life, the only chance for the universe to understand itself, or the only beings who can deliberately shape the future towards what is good or just. The importance of protecting humanity's potential thus draws support from a wide range of moral traditions. It isn't some kind of odd or contrarian view, and it doesn't rely on a narrow or disputed ethical theory, but instead enjoys broad support, making the case much more robust than is commonly realized, and much more able to reach people who come from different moral traditions. I don't have time to do justice to all of these today, but I want to say a little bit more about the future and about the past. 

The case based on the future is the one that inspires me most towards action. If all goes well, human history is just beginning. Humanity is about 200,000 years old, but the earth will remain habitable for hundreds of millions of years. More than enough time for millions of future generations. Enough to end disease, poverty, and injustice forever. Enough to create heights of flourishing unimaginable today. And we could learn to reach out further into the cosmos. We could have more time yet: trillions of years, to explore billions of worlds. And such a lifespan puts present-day humanity in its earliest infancy. A vast and extraordinary adulthood awaits. This is the longtermist argument for safeguarding humanity's potential. Our future could be so much longer and better than our fleeting present. 

And there are actions that only our generation could take that would affect that entire span of time. This could be understood in terms of all the value in all the lives, in every future generation, or in many other terms, because in expectation, almost all of humanity's life lies in the future. Therefore, almost everything of value lies in the future as well. Almost all the flourishing, almost all the beauty, our greatest achievements, our most just societies, our most profound discoveries. This is our potential, what we could achieve if we passed the precipice and continue striving for a better world. But this isn't the only way to make a case for the pivotal importance of existential risk. Consider our relationship to the past. We are not the first generation. 

Our culture, institutions and norms, our technology, our knowledge, our prosperity, were gradually built up by our ancestors over the course of 10,000 generations before us. Humanity's remarkable success has been entirely reliant on our capacity for intergenerational cooperation. Without it, we'd have no houses or farms. We would have no traditions of dance or song, no writing, no nations. Indeed, when I think of the unbroken chain of generations leading to our time, and of everything they've built for us, I'm humbled. I'm overwhelmed with gratitude, and shocked by the enormity of the inheritance and at the impossibility of returning even the smallest fraction of the favor. Because 100 billion of the people to whom I owe everything are gone forever, and because what they created is so much larger than my life, than my entire generation. 

If we were to drop the baton, succumbing to an existential catastrophe, we would fail our ancestors in many different ways. We would fail to achieve the dreams they hoped for, as they worked towards building a just world. We would betray the trust they placed in us, their heirs, to preserve and pass on their legacy. And we would fail in any duty we had to pay forward the work they did for us and help the next generation as they helped ours. Moreover, we'd lose everything of value from the past that we might have reason to preserve. Extinction would bring with it the ruin of every cathedral and temple, the erasure of every poem in every tongue, and the final and permanent destruction of every cultural tradition the earth has known. 

In the face of serious threats of extinction or of a permanent collapse of civilization, a tradition rooted in preserving and cherishing the richness of humanity would also cry out for action. We don't often think of things at this scale. Ethics is most commonly addressed from the individual perspective of what should I do? Occasionally, it's considered from the perspective of a group or nation, or even more recently, from the global perspective of everyone who's alive today. But we can take this a step further, exploring ethics from the perspective of humanity, not just our present generation, but humanity over deep time, reflecting on what we've achieved in the last 10,000 generations, and what we may be able to achieve in the aeons to come. This perspective is a major theme of my book. 

It allows us to see how our own time fits into the greater story and how much is at stake. It changes the way we see the world and our role in it, shifting our attention from things that affect the present moment to those that make a fundamental alteration in the shape of the long term future. What matters most for humanity, and what part in that plan should our generation play? And what part should I play? My book has three chapters on the risks themselves, where I delve deeply into the science behind them. It's not the focus of this talk, but there are the natural risks, there are the anthropogenic risks, and there are the emerging risks. One of the most important conclusions is that these risks aren't equal. The stakes are similar, but some risks are much more likely than others. 

I show how we can use the fossil record to bound the entire natural risk to about a one in 10,000 chance per century. But I judge the existing anthropogenic risks to be about 30 times larger, and the emerging risks to be about 50 times larger again: roughly one in six over the coming century. Russian roulette. This makes a huge difference when it comes to our priorities, though it doesn't quite mean that everyone should work on the most likely risks. We also care about their tractability and neglectedness, the quality of the opportunity at hand, and, for direct work, about your personal fit. What we do with the future is up to us. Our choices will determine whether we live or die, fulfill our potential, or squander our chance at greatness. We are not hostages to fortune. 
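As a sanity check, the arithmetic behind these figures can be chained together in a short sketch. The multipliers are the ones stated in the talk; treating them as simple products is an assumption made here for illustration:

```python
# Rough arithmetic behind the per-century risk estimates in the talk.
# Natural risk is bounded via the fossil record at about 1 in 10,000;
# existing anthropogenic risks are judged ~30x larger, and emerging
# risks ~50x larger again (roughly one in six: Russian roulette).

natural = 1 / 10_000           # ~0.0001 per century
anthropogenic = 30 * natural   # 0.003, roughly "one in 300"
emerging = 50 * anthropogenic  # 0.15, roughly one in six (1 in ~6.7)

for name, p in [("natural", natural),
                ("anthropogenic", anthropogenic),
                ("emerging", emerging)]:
    print(f"{name}: probability {p:.4f} per century")
```

This reproduces the numbers on the slide mentioned in the Q&A: one in 10,000, roughly one in 300, and roughly one in six.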

While each of our lives may be tossed about by external forces, sudden illness, or the outbreak of war, humanity's future is almost entirely within humanity's control. In The Precipice, I examine what I call a grand strategy for humanity. I ask what kind of plan would give humanity the greatest chance of achieving our full potential, and I divide things into three phases. The first great task for humanity is to reach a place of safety, a place where existential risk is low and stays low. I call this existential security. This requires us to do the work commonly associated with reducing existential risk, lowering the risk from each individual threat. And it also requires putting in place the norms and the institutions that are required to keep this risk low forever. And this really is within our power. 

There appear to be no major obstacles to humanity lasting many millions of years, many millions of generations. If only that were a key global priority. There will be great challenges in getting people to look far enough ahead and to see beyond the parochial conflicts of the day. But the logic is clear and the moral arguments powerful. It can be done, but that is not the end of the journey. After achieving existential security, we would have room to breathe. With humanity's long term potential secured, we would be past the precipice, free to contemplate the range of futures that lie open before us. And we will be able to take our time to reflect upon what we truly desire. Upon which of these visions for humanity would be the best realization of our potential. We should call this the long reflection. We rarely think this way. 

We focus on the here and now. Even those of us who care deeply about the long term future are focused on making sure that we have such a future. But once we achieve existential security, we will have as much time as we need to compare the kinds of futures available to us and to judge which is best. Most work in moral philosophy so far has focused on the negatives on avoiding wrong action and bad outcomes. The study of the positive is at a much earlier stage of development. During the long reflection, we would need to develop mature theories to allow us to compare the grand accomplishments that our descendants might achieve with aeons and galaxies as their canvas. While moral philosophy would play a central role, the long reflection would require insights from many different disciplines. 

For it isn't just about determining which futures are best, but which are feasible and what strategies are most likely to get us there. This would require analysis from science, engineering, economics, political theory, and beyond. Our ultimate aim, of course, is the final step, fully achieving humanity's potential. But this can wait upon step two, the serious reflection about which futures would be best and how to achieve them, while avoiding any fatal missteps. And while it wouldn't hurt to begin such reflection now, it's not the most urgent task to maximize our chance of success. We need first to get ourselves to safety, to achieve existential security. Only we can make sure we get through this period of danger and give our children the very pages on which they will author the future. Now a few words about the book itself. 

It won't be released until March next year, though one can order it now. I wrote the book because I wanted to give existential risk the treatment that it deserves, putting together the state of the art knowledge, making a compelling case, and bringing it to the world stage. The book is for everyone, for people new to the idea of existential risk so that it can reach a much wider audience. This is the book that you can give to your colleagues, friends, or family, and it's also aimed at people who are already familiar or even experts in the area. There's a wealth of new information, copious endnotes, novel analyses, and new framings of the key issues. Thank you. 

Speaker 1:
Thank you very much, Toby. All right, we've got a few minutes for Q and A. And again, reminder, that is in the comments section of this session in the Hoover app. So, excuse me, listening to you, kind of a striking amount of emotional language for a philosopher. Maybe that's not so uncommon for you, but it seems a little surprising for me how much of this project is about making an appeal that you really think everyone can buy into? And are there any worldviews that you weren't able to kind of find the right appeal for as you make this kind of overarching case? 

Toby Ord:
That's a good question. I was definitely focused, in the book, on trying to make a case for existential risk that is compelling and, in retrospect, obvious. The type of thing that people will just assume after reading the book that they've always thought, rather than trying to make a contrarian case. You might have thought these other things were important, but actually this thing's important, and really trying to get that across. And then in this talk, I also focused on those aspects of the book, on the different kinds of framings of it that just make it seem very obvious and natural, rather than the framings that make it seem very geeky or maths-y or counterintuitive. Whereas the book itself is more of a middle ground. 

It includes quite a lot of technical and quantitative information, as well as these powerful framing arguments. I've tried to find a lot of ways in which people coming from different backgrounds and different traditions could all find this centrally compelling. But I didn't just look through the list of everything that everyone believes and try to work out how they could buy into it. I restricted myself to the ones that I actually myself find compelling, where I see where these people are coming from and endorse it. 

Speaker 1:
So to give maybe a challenging example, you sometimes hear from people who say, oh, humanity, we've got it so bad we should just go extinct, and then the plants and animals can have the earth. Is there an argument in your book that speaks to those people and tries to bring them into the fold? 

Toby Ord:
There is a little bit on that, yeah, on these ideas. Depending on what brought someone to think that, then, yes. 

Speaker 1:
Not too much time here. One very practical question that somebody asked in the comments is, given this analysis, do you think that people should be donating to all long term oriented causes, and is that what you personally have come to do? 

Toby Ord:
I think so. No. I do think that this is very important, and that there's a strong case to be made that this is the central issue of our time, and potentially the most cost-effective as well. I think that effective altruism would be much the worse, though, if it just specialized completely into one area. I think that having a breadth of causes that people are interested in, united by their interest in effectiveness, is central to the community's success. And in my personal case, all of the proceeds of the book, including the advance and all of the royalties, are going to organizations focused on making sure we have a long-term future. 

But my own income I'm donating to global health organizations, as back when I started Giving What We Can and made this pledge, that's how I worded it, and I'm happy to stick with that. I think there are really important things going on there as well. And we want to be careful not to get too much into criticizing each other for supporting the second-best thing. The same is true for the different risks themselves. I want to stress the huge variability in probabilities. It's even more so if you break it down into individual things in each of those categories. Say, the risk of stellar explosions like supernovae is maybe nine orders of magnitude less than the risk of other things, such as engineered pandemics or artificial intelligence, in my view. So there's a huge variation. I really want to stress that. But it's still the case that your own personal fit, or the quality of the opportunity that's on the table at the moment, are both multipliers. Either could easily be a multiplier of, say, ten or more, which could actually change it so that you should be working on something else. And I do think that we should ultimately be doing a portfolio of things, where that portfolio should be balanced towards the things that we think are most important, but should also include some of these other things. 

Speaker 1:
Going just a little bit deeper into this question of probability. So you had the slide that showed the one in 10,000, the one in 300, the one in six. So maybe a two-part question. Briefly, how do you think about the one in six? Obviously the book will have a more complete treatment, but if it's possible to tell us how you get there briefly, we'd love to hear that. And then for the concept of existential security: one in how many would count in your mind? How low does that risk have to go in order to be sustainable over the long term? 

Toby Ord:
Well, good question. So, in terms of how I think about the probabilities, these are not probabilities that I expect all readers will come to based on the evidence I present. There are only three chapters on the risks themselves, and a whole lot of chapters illuminating them from different angles: the ethics and the economics and the history of it, and so on. But the one in six is largely due to my beliefs about the risk from engineered pandemics, artificial intelligence, and also unknown things that could happen. That's where a lot of that is coming from, and I say a bit more about it in the book. It's hard to explain briefly. But on the question of existential security, what you ultimately need to do is get to a position where the risk per century is going down. So it gets low. 

And when I said gets low and stays low: suppose it got to, say, one in a million per century and stayed there. Well, you'd get about a million centuries before something goes wrong, on average. I think we could do even better than that. If we have a declining schedule of risk, if we make it so that each century we lower the risk by some very small percentage compared to the century before, then we could potentially last all the way until we reach the limits of what's possible, rather than just waiting until we screw things up. And I really do think that we can actually achieve that. 
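The back-of-the-envelope reasoning here, that a constant risk of one in a million per century gives about a million centuries on average, follows from the geometric distribution. A minimal sketch (this illustration is mine, not from the talk):

```python
# Survival under a constant per-century existential risk p.
# The number of centuries until catastrophe is geometrically
# distributed with mean 1/p, so at p = one in a million we
# expect roughly a million centuries.

def expected_centuries(p: float) -> float:
    """Mean number of centuries survived at constant per-century risk p."""
    return 1.0 / p

def survival_probability(p: float, centuries: int) -> float:
    """Probability of surviving the given number of centuries at risk p."""
    return (1.0 - p) ** centuries

# At one in a million per century, the expected span is ~a million centuries.
print(round(expected_centuries(1e-6)))

# By contrast, at the talk's one-in-six level, the chance of surviving
# even three centuries is (5/6)^3, i.e. about 58%.
print(survival_probability(1 / 6, 3))
```

A declining risk schedule, where each century's risk is a fixed fraction smaller than the last, does even better: the total probability of ever falling can then be bounded below one.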

Speaker 1:
Well, there are a ton of great questions coming in through the app. Unfortunately, we're already a few minutes over time, so we won't be able to get to them all here. You will have office hours. 

Toby Ord:
That's right. 

Speaker 1:
Is that at noon today? Do I have that right? 

Toby Ord:
You do. 

Speaker 1:
Okay, perfect. So come see Toby and ask all these great questions in person at noon during his office hours. And for now, another warm round of applause. Thank you very much, Toby Ord. 

Toby Ord:
Thanks. 

Speaker 1:
Awesome. 

Toby Ord:
Great job. 

Comments

People new to EA might not know they can get the book for free by signing up to the 80,000 Hours mailing list, and it's also available on Audible.
