
In this Rational Animations video, we look at dangerous knowledge: information hazards (infohazards) and external information hazards (exfohazards).  We talk about one way they can be classified, what kinds of dangers they pose, and the risks that come from too much secrecy.  The primary scriptwriter was Allen Liu (the first author of this post), with feedback from the second author (Writer) and other members of the Rational Animations team.  Outside reviewers, including some authors of the cited sources, provided input as well.  Production credits are at the end of the video.  You can find the script of the video below.


“What you don’t know can’t hurt you”, or so the saying goes.  In reality, what you don’t know absolutely can hurt or even kill you.  Hidden dangers, whether it’s a lion or an incoming asteroid, can catch you by surprise.  Knowledge of a hazard allows you to prepare for or even prevent catastrophe.  So you’re better off knowing more rather than less.  But this isn’t always the case.  Dangerous knowledge has been a recurring theme in stories from Homer to Monty Python, but it’s not just fiction.  Philosopher Nick Bostrom, in a 2011 paper, coined the term information hazards - or “infohazards” for short - to refer to these cases.[1]

 

Since Bostrom’s 2011 paper, many researchers have come up with ways of classifying infohazards.  For instance, Anders Sandberg at the Future of Humanity Institute distinguished several types of infohazards in a 2020 lecture.[2]  One type is what Sandberg calls “direct information hazards”, where the knowledge in question poses a risk to the knower by its very nature.  These are relatively rare, and we’d be ill-advised to share a serious example in a public video, but they show up frequently in fiction.  Think of the unfathomable terrors of H.P. Lovecraft that drive people insane, or even Monty Python’s sketch about a joke that makes the listener laugh themselves to death.  For a real-world example, think of anything you’ve read on the internet that you wish you hadn’t.

 

Another category of infohazard consists of those that pose a hazard by affecting an important state of mind in the knower.  For example, spoilers for a book you’re reading could destroy your state of ignorance about the book’s ending.  Another example could be temptations that might bring you harm later, like the knowledge that you can order candy in bulk online[3] or that a game you’ve been interested in is going on sale right before you have a major deadline.  One more type of hazard in this category is ideas that act as self-fulfilling prophecies: if a team thinks they’re doomed to lose a sports game, they might not try as hard to win.  These “state of mind” hazards differ from direct infohazards in that they require some other condition in order to cause harm: while a disturbing internet post affects you directly, spoilers are only harmful if you actually want to read the end of the book.

 

These two categories of infohazard are harmful specifically to the person who knows them.  By contrast, Sandberg’s category of “external information hazards” consists of information that would allow someone to cause harm more generally, whether through malice or incompetence.  Eliezer Yudkowsky used the term “exfohazard” for this same category.[4]  This includes, for instance, classified military plans or corporate trade secrets, which, if discovered, could be exploited by an adversary or competitor.  It could also include the hypothetical scenario in our video about SETI risk, where aliens send humanity the knowledge of how to make antimatter bombs using common household objects.  The risk comes not from the knowledge by itself, but from the high likelihood that it will be used or misused.

 

This kind of hazard isn’t just hypothetical: for a real-world example, we can look at deadly drugs like fentanyl.  These drugs have caused hundreds of thousands of deaths by overdose.  After a simpler method for synthesizing fentanyl was published anonymously online in the 1990s, illegal labs were able to use it to produce much more of the drug.  This contributed to the current fentanyl crisis,[5] which has seen overdose deaths rise dramatically.[6]  Another example might come from AI development.  In some of our other videos, we’ve argued that powerful, uncontrolled artificial intelligence is a grave risk to humanity.  So, knowledge that would allow a rogue AI to be built more easily is also an exfohazard, since it would increase the risk that such an AI comes into being and causes human extinction before we figure out how to align it.  We can think of this kind of exfohazard as knowledge that’s dangerous to a society, just as a direct infohazard is dangerous to an individual.

 

So, those are three types of infohazards: direct infohazards, “state of mind” hazards, and exfohazards. But how can we avoid their dangers?

 

For small-scale direct infohazards and state of mind hazards, communities have already come up with techniques like creating spoiler-free zones on online platforms or allowing users to opt out of seeing ads for addictive substances like tobacco and alcohol.  You might also be familiar with content warnings, which allow people to make an informed decision about what kinds of information they want to see before they see it.

 

When it comes to exfohazards, especially the most serious ones, systems already exist to keep sensitive information secret, like classified government documents, though such systems are far from perfect.  Even the absence of a particular piece of information can reveal its importance to a well-prepared adversary.  In 1945, shortly after the atomic bombings of Hiroshima and Nagasaki, the scientists of the US Manhattan Project, which created those bombs, published the so-called “Smyth report”,[7] telling the world in general terms how the atomic bombs had been created, and telling the nation’s scientists what could be discussed about atomic weaponry in public.  Between the first and second public editions of the report, a few sentences were deleted about an effect in which certain fission products made further fission reactions more difficult, nearly preventing the operation of the Hanford nuclear reactors.  This drew the attention of researchers in the Soviet Union, since they now knew the effect was important enough for the Americans to want to keep it secret.[8]  This is similar to the “Streisand effect”, named for singer Barbra Streisand.  In 2003, Streisand filed a lawsuit in an attempt to get an image of her house removed from a relatively obscure public database.  This backfired spectacularly, as hundreds of thousands of people suddenly took notice and downloaded that particular photo.[9]

 

So, if we want to avoid falling into these traps, we have to be more careful about how we protect people, and ourselves, from infohazards of all types.  It might be useful to first share information with a trusted individual or organization rather than with the world as a whole.  An illustrative case is the way we currently treat vulnerabilities in computer systems: people who discover exploits often act responsibly and give advance notice to affected companies before publishing the exploit publicly, giving the companies some time to try to find a solution.  Of course, there’s still a time limit: a bad actor might soon discover the same exploit.  This system isn’t perfect, and is sometimes ignored, but it represents at least a step towards creating norms against leaking dangerous information.

 

On the other hand, secrecy comes with its own dangers.  Going back to the example of the US military’s nuclear weapons, one key material for manufacturing certain fusion weapons, given the code name FOGBANK, was kept so secret that the US military essentially forgot how to make it.  When new FOGBANK was needed to refurbish those weapons, the surviving instructions were too vague for the new engineers to realize that they were accidentally removing an impurity that was required for FOGBANK to work.  It took years to discover the problem and produce FOGBANK that met quality requirements.[10]  Additionally, many tragic disasters have had at their root a failure to share critical knowledge with decision makers.  Sandberg brings up the example of the Chernobyl nuclear power plant disaster: if the plant’s operators had been informed of the flaws in the reactor’s design, which were already known to the Soviet government,[11] they might not have put the reactor through the risky safety test that caused the plant’s destruction and the poisoning of the surrounding region.  Along with these practical concerns, there are also moral questions, like “Who gets to decide what information should be kept secret?” and “Does the public have the right to know information even if it could be dangerous?”  Hiding information from others, even potential infohazards, also risks eroding trust if people come to feel that they are being misled or kept in the dark.

 

So it’s important to take infohazards seriously, but that doesn’t mean risky information always needs to be kept secret from everyone.  We should keep the conversation about infohazards going - but maybe we should just keep it limited to general ideas.  And of course, you should definitely never tell anyone about [BEEEEEP]

  1. ^ Bostrom, Nick (2011). “Information Hazards: A Typology of Potential Harms from Knowledge”. Review of Contemporary Philosophy 10: 44–79.
  2. ^
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^ Smyth, Henry DeWolf (1945). Atomic Energy for Military Purposes; the Official Report on the Development of the Atomic Bomb under the Auspices of the United States Government, 1940–1945. Princeton: Princeton University Press. ISBN 978-0-8047-1722-9.
  8. ^
  9. ^
  10. ^
  11. ^
