Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”

If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.

Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?

Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.

It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?

To answer precisely, you must use beliefs like Earth’s gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock’s second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
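As a rough sanity check (my own back-of-the-envelope arithmetic, neglecting air resistance and the time the sound takes to travel back up), those two verbal beliefs do yield the five-second figure:

$$t = \sqrt{\frac{2h}{g}} = \sqrt{\frac{2 \times 120\ \text{m}}{9.8\ \text{m/s}^2}} \approx 4.9\ \text{s},$$

which is why you anticipate the second hand sitting about five ticks past wherever it was when you let go.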

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

The same brain that builds a network of inferred causes behind sensory experience can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could simplistically model their minds by drawing a little node labeled “Phlogiston,” and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configured after the experience, rather than constraining the experience in advance.

Or suppose your English professor teaches you that the famous writer Wulky Wilkinsen is actually a “retropositional author,” which you can tell because his books exhibit “alienated resublimation.” And perhaps your professor knows all this because their professor told them; but all they're able to say about resublimation is that it's characteristic of retropositional thought, and of retropositionality that it's marked by alienated resublimation. What does this mean you should expect from Wulky Wilkinsen’s books?

Nothing. The belief, if you can call it that, doesn’t connect to sensory experience at all. But you had better remember the propositional assertions that “Wulky Wilkinsen” has the “retropositionality” attribute and also the “alienated resublimation” attribute, so you can regurgitate them on the upcoming quiz. The two beliefs are connected to each other, though still not connected to any anticipated experience.

We can build up whole networks of beliefs that are connected only to each other—call these “floating” beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens’s ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit. Do you believe that phlogiston is the cause of fire? Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a retropositional author? Then what do you expect to see because of that? No, not “alienated resublimation”; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?

It is even better to ask: what experience must not happen to you? Do you believe that Élan vital explains the mysterious aliveness of living beings? Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you. It floats.
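One crude way to picture this test is to treat a belief as a rule that sorts possible observations into "allowed" and "forbidden"; if it forbids nothing, it floats. The sketch below is only a toy illustration of that idea, with made-up observations and belief names, not anything from the original essay:

```python
# Toy illustration: a belief "pays rent" only if it forbids at least one
# possible observation; a belief that permits everything constrains nothing.

def is_floating(belief_allows, possible_observations):
    """True if the belief permits every listed observation, i.e. forbids nothing."""
    return all(belief_allows(obs) for obs in possible_observations)

# Hypothetical observations after dropping the ball from the roof (invented for this sketch).
observations = [
    "crash heard about 5 ticks after release",
    "crash heard about 20 ticks after release",
]

def gravity_belief(obs):
    # "The building is ~120 m tall and g is 9.8 m/s^2" rules out the 20-tick outcome.
    return "5 ticks" in obs

def elan_vital_belief(obs):
    # Invoked only after the fact, this belief rules out nothing.
    return True

print(is_floating(gravity_belief, observations))     # False: it constrains experience
print(is_floating(elan_vital_belief, observations))  # True: it floats
```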

When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don’t know what experiences are implied by Wulky Wilkinsen’s writing being retropositional, you can go on arguing forever.

Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.

 

This work is licensed under a Creative Commons Attribution 4.0 International License.

Comments

I didn't understand this article on the first read. I had trouble with it and spent two days coming back to it. I wasn't sure what was wrong: did I not like the writing, was it too complicated, or was it a topic I'm not interested in?

After using ChatGPT to clarify parts of the article, I arrived at this conclusion: I believe the writing is too complicated for me to extract the author's message in a single reading. I expected the articles in Intro to EA to be easier to understand.

I see the connection between beliefs and anticipation now, but I am not sure I fully understand it yet.

I agree that the idea could be restated in a clearer way. Here is an alternative way of saying essentially the same thing:

The project of doing good is a project of making better decisions. One important way of evaluating decisions is to compare the consequences they have to the consequences of alternative choices. Of course we don't know the consequences of our decisions before we make them, so we must predict the consequences that a decision will have.

Those predictions are influenced by some of our beliefs. For example, do I believe animals are sentient? If so, perhaps I should donate more to animal charities, and less to charities aiming to help people. These beliefs pay rent in the sense that they help us make better decisions (they get to occupy some space in our heads since they provide us with benefits). Other beliefs do not influence our predictions about the consequences of important decisions. For example, whether or not I believe that Kanye West is a moral person does not seem important for any choice I care about. It is not decision-relevant, and does not "pay rent".

In order to better predict the consequences of our decisions, it is better to have beliefs that more accurately reflect the world as it is. There are a number of things we can do to get more accurate beliefs -- for example, we can seek out evidence, and reason about said evidence. But we have only so much time and energy to do so. So we should focus that time and energy on the beliefs that actually matter, in that they help us make important decisions.

I find your summary and examples helpful. Thanks for sharing your thoughts. Here's another example from ChatGPT that I found useful:

"Consider the belief that "regular exercise contributes to better health." This belief is connected to sensory experiences and anticipations. If someone adheres to a regular exercise routine, they may anticipate improved well-being, increased energy levels, and positive changes in their physical condition. The belief is grounded in empirical evidence and aligns with sensory experiences related to the benefits of exercise.

On the other hand, imagine a belief like "wearing a specific color underwear guarantees good health." This belief lacks a connection to sensory experience and does not provide meaningful anticipations. It is a "floating" belief because it lacks empirical support and doesn't contribute to a realistic understanding of the factors influencing health. In the context of the article, the focus is on promoting beliefs that are firmly grounded in sensory experiences and contribute to accurate predictions and understanding."

Not helpful in Intro to EA: too meta, too academic, and not contextualized with an explicit purpose or message for this topic of AI ethics.

We come at this from very different directions, but I very much like the paragraph about the human ability to model the unseen as both a strength and a weakness. It truly is an amazing ability from a cognitive perspective; from my perspective it is both the fount of the arts and of harmful religious behavior. A blessing and a curse in one, as the word "cleave" can mean either to stick together or to split apart.

When you think deeply like this as a human, can you imagine, given your experience working in AI, that an AI could also write this post someday? Will AI criticize its own cognitive issues, or anticipate potential cognitive issues?

My separate question to you, while I have you here, is that of desire. Humans often originate thoughts because their flesh has a desire: this bodily desire for comfort, love, companionship, hunger, etc. inspires thoughts leading to actions. If AI has no body, what itch will it have to scratch? Why would it develop agriculture, and to feed what? Why would it study nature if it had no need to live beside nature? And without those initiating desires, could it ever be as robust as human intelligence?

I take the article to be endorsing the following principle (P): "all beliefs should make an empirical prediction". However, I'm not convinced. Here's why:

  1. The conclusion seems self-defeating. What empirical prediction does the belief "all beliefs should make an empirical prediction" make?
  2. Ethical counterexamples. An ethical belief like "we should save more lives rather than fewer" does not make any empirical prediction, yet (and I think the EA community would agree) it is a legitimate belief.
  3. Lack of motivation. P seems to be motivated by the rejection of the professor's "retropositional author" beliefs and the alchemists' "phlogiston" beliefs. However, I do not think we need to endorse P to claim that these beliefs are poor. Instead, we can claim that the professor's belief is poor because it does not do one of the things a good definition should: explain unknown concepts in terms of known ones. Additionally, the alchemists' belief is poor because we now have better explanations of how fire works.

If I've interpreted the principles of the article uncharitably, or any of my critiques are mistaken, please let me know. I'm interested to see what others think.
