
Key Takeaways

  • The Conscious Subsystems Hypothesis (“Conscious Subsystems,” for short) says that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains—namely, the ones who report their experiences to us.
  • Given that humans’ brains are likely to support more such subsystems than animals’ brains, EAs who have explored Conscious Subsystems have suggested that it provides a reason for risk-neutral expected utility maximizers to assign more weight to humans relative to animals.
  • However, even if Conscious Subsystems is true, it probably doesn’t imply that risk-neutral expected utility maximizers ought to allocate neartermist dollars to humans instead of animals. There are three reasons for this:
    • If humans have conscious subsystems, then animals probably have them too, so taking them seriously doesn’t increase the expected value of, say, humans over chickens as much as we might initially suppose.
    • Risk-neutral expected utility maximizers are committed to assumptions—including the assumption that all welfare counts equally, whoever’s welfare it is—that support the conclusion that the best animal-focused neartermist interventions (e.g., cage-free campaigns) are many times better than the best human-focused neartermist interventions (e.g., bednets).
    • Independently, note that the higher our credences in the theories of consciousness that are most friendly to Conscious Subsystems, the higher our credences ought to be in the hypothesis that many small invertebrates are sentient. So, insofar as we’re risk-neutral expected utility maximizers with relatively high credences in Conscious Subsystems-friendly theories of consciousness, it’s likely that we should be putting far more resources into investigating the welfare of the world’s small invertebrates.
  • We assign very low credences to claims that ostensibly support Conscious Subsystems.
    • The appeal of the idea that standard theories of consciousness support Conscious Subsystems may be based on not distinguishing (a) theories that are just designed to make predictions about when people will self-report having conscious experiences of a certain type (which may all be wrong, but have whatever direct empirical support they have) and (b) theories that are attempts to answer the so-called “hard problem” of consciousness (which only have indirect empirical support and are far more controversial).
    • Standard versions of functionalism say that states are conscious when they have the right relationships to sensory stimulations, other mental states, and behavior. But it’s highly unlikely that many groups of neurons stand in the correct relationships, even if they perform functions that, in the abstract, seem as complex and sophisticated as those performed by whole brains.
  • Ultimately, we do not recommend acting on Conscious Subsystems at this time.

 

Introduction

This is the fifth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to assess a hypothesis that’s been advanced by several members of the EA community: namely, that brains have subsystems that realize phenomenally conscious states that aren’t accessible to the subjects we typically associate with those brains (i.e., the ones who report their experiences to us; see, e.g., Tomasik, 2013-2019; Shiller, 2016; Muehlhauser, 2017; Shulman, 2020; Crummett, 2022).[1]

If there are such states, then we might think that there is more than one conscious subject per brain, each supported by some neural subsystem or other.[2] Let’s call this the Conscious Subsystems Hypothesis (or Conscious Subsystems, for short). 

Conscious Subsystems could affect how we ought to make tradeoffs between members of different species. Suppose, for instance, that the number of these subsystems scales proportionally with neuron count. A human has something like 86 billion neurons in her brain; a chicken, 220 million. So, if there are conscious subsystems in various brains, there could be roughly 400 times as many in humans as in chickens. If we were to assume that every subsystem is in pain when the main system reports pain, then it could work out that in a case where a human and a chicken appear to have comparable pain levels, it’s nevertheless true that there is roughly 400 times as much pain in the human as in the chicken. Given the aim of maximizing expected welfare and all else equal, it could follow that it’s roughly 400 times more important to alleviate the human’s pain than the chicken’s pain. It matters, then, whether Conscious Subsystems is true.

Accordingly, this post:

  1. Develops one argument for Conscious Subsystems;
  2. Explains why Conscious Subsystems, even if true, may not be practically significant in some key decision contexts; 
  3. Provides some reasons to assign low probabilities to the premises of the argument for Conscious Subsystems; and
  4. Offers some general reasons to be wary of allocating resources based on hypotheses like Conscious Subsystems. 

Motivating Conscious Subsystems

We begin by considering a way of motivating Conscious Subsystems: namely, the classic “China brain” thought experiment in Ned Block’s famous 1978 paper, “Troubles with Functionalism.” Very roughly, functionalism is the view that mental states are the kinds of states they are due to their functions, or their roles within larger systems. Block argued that functionalism implies that systems that clearly aren’t conscious, are conscious. For instance:

Imagine a body externally like a human body, say yours, but internally quite different. The neurons from sensory organs are connected to a bank of lights in a hollow cavity in the head. A set of buttons connects to the motor-output neurons. Inside the cavity resides a group of little men. Each has a very simple task: to implement a “square” of a reasonably adequate machine table that describes you. On one wall is a bulletin board on which is posted a state card, i.e., a card that bears a symbol designating one of the states specified on the machine table. Here is what the little men do: Suppose the posted card has a ‘G’ on it. This alerts the little men who implement G squares—‘G-men’ they call themselves. Suppose the light representing I17 goes on. One of the G-men has the following as his sole task: when the card reads ‘G’ and the I17 light goes on, he presses output button O191 and changes the state card to ‘M’…. In spite of the low level of intelligence required of each little man, the system as a whole manages to simulate you because the functional organization they have been trained to realize is yours.

The “China brain” thought experiment essentially makes the same point:

Suppose we convert the government of China to functionalism, and we convince its officials that it would enormously enhance their international prestige to realize a human mind for an hour. We provide each of the billion people in China… with a specially designed two-way radio that connects them in the appropriate way to other persons and to the artificial body mentioned in the previous example.[3] We replace the little men with a radio transmitter and receiver connected to the input and output neurons. Instead of a bulletin board, we arrange to have letters displayed on a series of satellites placed so that they can be seen from anywhere in China. Surely such a system is not physically impossible. It could be functionally equivalent to you for a short time, say an hour. 

If functionalism is true, Block argues, then this system wouldn’t just be conscious; it would have exactly the same mental states that you have. If that’s right, then functionalism implies that a conscious mind just like yours can be composed of other conscious minds. After all, it seems clear that the people of China don’t cease to be conscious simply because they’ve taken up this odd work of replicating the functions that give rise to your mind.

Of course, Block offered these thought experiments as reasons to reject functionalism. Many consciousness researchers now endorse “anti-nesting” principles to prevent their theories from having this implication (e.g., Kammerer, 2015). At the same time, some just bite the bullet, agreeing that while it might be counterintuitive that this system is conscious, you and the “China brain” would indeed have the same mental states for as long as it operates (e.g., Schwitzgebel, 2015). Suppose that’s true. Then, we’re on our way to an argument for Conscious Subsystems, one version of which goes as follows:

  1. Some neural subsystems would be conscious if they were operating in isolation. 
  2. If a neural subsystem would be conscious if it were operating in isolation, then it's conscious even if part of a larger conscious system.
  3. So, some neural subsystems are conscious.

We can read Premise 2 as a way of biting the bullet on the China brain thought experiment. So, we’re now left wondering about the case for Premise 1.

But before we turn to that premise, there are two points to note. First, while this conclusion sounds radical, it might not be practically significant as stated. After all, it isn’t clear that all conscious states are valenced states—that is, states that feel good or bad. So, if we’re hedonists—that is, we assume that all and only positively valenced conscious states contribute positively to welfare and all and only negatively valenced conscious states contribute negatively to welfare—then it could work out that all these conscious subsystems are morally irrelevant. If the states are conscious but not valenced, then they don’t realize any welfare at all.

Moreover, even if these subsystems do have valenced conscious states—and so realize some welfare—it doesn’t follow that we can assess the net impact of our actions on their welfare. Suppose we can’t. Then, if we’re risk-neutral expected utility maximizers—that is, we want to maximize utility and we’re equally concerned to avoid realizing negative utility and promote the realization of positive utility—the welfare of the subsystems “cancels out” in expectation.

But let’s grant that if these subsystems are conscious, then they would have valenced states. Moreover, let’s grant that the welfare of the subsystems is correlated with reports or other typical measures of welfare.[4]

This brings us to the second point: namely, that it may not matter whether we have strong reasons to believe any of the premises of the argument for Conscious Subsystems—or the assumptions we just granted—if we’re risk-neutral expected utility maximizing total utilitarians. If we assign some credence to each of the relevant claims, then as long as there are enough subsystems, the argument will still be practically significant.

Recall, for instance, that a human has something like 86 billion neurons in her brain; a chicken, 220 million. So, if we thought that there’s around one conscious system for every 220 million neurons, we would conclude that a human brain supports around 400 times as many conscious subjects as a chicken brain. Given that, if we assign low credences to each premise in the argument for Conscious Subsystems—e.g., 0.2—it follows that, in expectation, we ought to conclude that a human brain supports roughly 17 times as many conscious subjects as a chicken brain.[5]
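
As a sanity check, here is a minimal sketch of the expected-value arithmetic from footnote 5. The function name and its simplifying assumptions (independent premises, non-overlapping subsystems, and exactly one conscious subject if either premise is false) are ours, introduced purely for illustration:

```python
def expected_subjects(p1, p2, n_if_true):
    """Expected number of conscious subjects per brain (footnote 5's formula).
    Assumes the two premises are independent, subsystems don't overlap, and
    there is exactly one conscious subject if either premise is false."""
    p = p1 * p2
    return p * n_if_true + (1 - p) * 1

# One conscious subsystem per ~220 million neurons gives roughly 400 per human brain.
print(expected_subjects(0.2, 0.2, 400))  # 0.04 * 400 + 0.96 = 16.96, i.e., roughly 17
```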

We chose this example because most people agree that chickens are conscious. But as Shulman points out, while it isn’t clear whether insects are conscious, there is some suggestive evidence in favor of that hypothesis. So, we should assign some credence to the hypothesis that they’re conscious. If we grant this much, though, then he thinks we should assign some credence to (something like) Premise 1, since, in his view, human brains contain many subsystems that are at least as complex as, and have “capabilities greater than,” insect brains. And since there are mature insects with fewer than 10,000 neurons, our credences can be lower without threatening the practical significance of the argument—again, assuming we’re expected utility maximizers. 

Suppose, for instance, that we assign even lower credences to each premise—e.g., 0.05. And suppose that, conditional on these premises, we assign the same credence to the hypothesis that we can separate all the neurons of a human brain into conscious subsystems with around 10,000 neurons each. Then, we ought to conclude that a human brain supports roughly 1076 times as many conscious subjects as a 10,000-neuron insect brain in expectation.[6] (Shulman (2015) and Tomasik (2016-2017) make similar calculations comparing chickens, cattle, and insects or springtails, normalized to insects or springtails, though without modeling uncertainty and, for some calculations, with diminishing marginal returns to additional neurons.)

Moreover, these credences may be too low. St. Jules (2020), for example, argues that several of the most prominent theories of consciousness, such as Global Workspace Theory, Integrated Information Theory, and Recurrent Processing Theory, imply that many neural subsystems (or, more generally, many very simple systems) would be conscious if they occurred in isolation—or, at least, would have that implication if certain ostensibly arbitrary assumptions were dropped, namely, assumptions designed solely to block the implication that conscious systems can be built out of other conscious systems. To give just one example, Global Workspace Theory says, in essence, that a mental state is conscious just when its content is broadcast to an array of neural subsystems. St. Jules points out that a mental state’s content can be broadcast to all of a subsystem’s subsystems even if it isn’t globally broadcast—which we might call “local” rather than global broadcasting. So, unless there’s something special about broadcasting to all subsystems rather than some subset of them, Global Workspace Theory implies that if subsystems locally broadcast a state’s content, then those subsystems are conscious.[7]

Suppose that, on this basis, we revised all our credences upward to 0.2 for each premise and 0.2 for 10,000-neuron subsystems being conscious (based on a comparable credence for 10,000-neuron insects being conscious), both for humans and chickens. Then, in expectation, we ought to attribute around 68,801 conscious subjects to each human brain—and around 177 to each chicken (~389x fewer, which is basically the ratio of the number of neurons in a human brain over the number in a chicken brain). At that point, the practical significance of the argument may be quite radical.
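
To make the scaling explicit, here is a small sketch that generalizes footnote 5’s formula to the three credences used above and in footnote 6. The independence and non-overlap assumptions, and the fallback to a single conscious subject, are again simplifications adopted only for illustration:

```python
def expected_subjects(credences, n_if_true):
    """Expected conscious subjects per brain: the product of the (assumed
    independent) credences times the subsystem count, plus a single
    fallback conscious subject otherwise."""
    p = 1.0
    for c in credences:
        p *= c
    return p * n_if_true + (1 - p) * 1

HUMAN_NEURONS, CHICKEN_NEURONS, SUBSYSTEM_NEURONS = 86_000_000_000, 220_000_000, 10_000

# Footnote 6: 0.05 credence in each premise and in 10,000-neuron systems being conscious.
print(expected_subjects([0.05, 0.05, 0.05], HUMAN_NEURONS / SUBSYSTEM_NEURONS))    # ~1,076

# Revised 0.2 credences, applied to humans and chickens alike.
humans = expected_subjects([0.2, 0.2, 0.2], HUMAN_NEURONS / SUBSYSTEM_NEURONS)     # ~68,801
chickens = expected_subjects([0.2, 0.2, 0.2], CHICKEN_NEURONS / SUBSYSTEM_NEURONS) # ~177
print(humans / chickens)                                                           # ~389
```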

Assessing Conscious Subsystems

Given a commitment to risk-neutral expected utility maximization, there are two basic ways to assess the practical significance of Conscious Subsystems:

  1. Given some range of reasonable credences, we can consider whether Conscious Subsystems would alter what we would otherwise think we should do in some particular decision context.
  2. We can consider arguments for adjusting our credences in the claims that support Conscious Subsystems.

Ultimately, the first point is the most important one. Conscious Subsystems matters insofar as it makes a practical difference. So, we’ll begin there. Then, we’ll spend some time on the second.

Either Conscious Subsystems probably doesn’t affect what we ought to do or it should have a minimal impact on what we ought to think we ought to do

There are many contexts in which Conscious Subsystems might be practically significant. Here, though, we’re especially concerned to assess whether Conscious Subsystems should alter the way EAs think they ought to allocate resources. We think that it probably doesn’t. Or, if it does, it probably favors very strange courses of action, such as allocating much more toward invertebrate welfare. 

Even if Conscious Subsystems is true, neartermists should keep spending on animals

Let’s begin with why Conscious Subsystems probably shouldn’t alter the way EAs think they ought to allocate resources. Open Philanthropy once estimated that, “if you value chicken life-years equally to human life-years… [then] corporate campaigns do about 10,000x as much good per dollar as top [global health] charities.” Two more recent estimates—which we haven’t investigated and aren’t necessarily endorsing—agree that corporate campaigns are much better. If we assign equal weights to human and chicken welfare in the model that Grilo, 2022 uses, corporate campaigns are roughly 5,000x better than the best global health charities. If we do the same thing in the model that Clare and Goth, 2020 employ, corporate campaigns are 30,000 to 45,000x better.[8] So, if even the most conservative of these estimates is ten times too high, Conscious Subsystems wouldn’t imply that risk-neutral expected utility maximizers ought to allocate neartermist dollars to humans instead of animals, at least if, as we did earlier, we estimate the ratio of human to nonhuman conscious subsystems at less than 400.
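
As a rough back-of-the-envelope check (our construction, not part of any of the cited models), here is what happens if we take the cited multipliers, suppose each is ten times too optimistic, and then divide by a hypothetical 400x human-to-chicken conscious-subsystem ratio. Corporate campaigns still come out ahead whenever the adjusted figure exceeds 1:

```python
# Cited chicken-campaign cost-effectiveness multipliers (corporate campaigns vs. the
# best global health charities), with equal weights for human and chicken welfare.
estimates = {
    "Open Philanthropy": 10_000,
    "Grilo (2022)": 5_000,
    "Clare and Goth (2020), low end": 30_000,
}
overestimate_factor = 10   # suppose each estimate is ten times too high
subsystem_ratio = 400      # hypothetical expected human:chicken conscious-subject ratio

for name, multiplier in estimates.items():
    adjusted = multiplier / overestimate_factor / subsystem_ratio
    print(f"{name}: campaigns remain ~{adjusted:.1f}x better per dollar")
# Prints roughly 2.5x, 1.2x, and 7.5x respectively: the ranking doesn't flip.
```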

In fact, there’s a sense in which Conscious Subsystems may bolster the cost-effectiveness argument for spending more on animals, if similarly sized conscious subsystems generate valenced states with similar intensities regardless of the brain to which they belong. To see why, consider first that if many brains house many 10,000-neuron conscious subsystems, then it’s likely these subsystems are less sophisticated than the conscious subject who can report their conscious experiences. Still, if conscious subsystems really do produce valenced states, we see no reason to suppose that they produce valenced states that are, say, 8,600,000x less intense than the valenced states of the main subject (a number derived by dividing the number of neurons in a human brain by 10,000), which is what it would take for most of the total valence in a human brain to come from the main subject. We aren’t aware of any empirical theory about the function of valenced states that would support such huge differences; likewise, as discussed in our previous post in this sequence, we aren’t aware of any compelling philosophical reason to think that the intensity of valenced states scales with neuron counts. So, while the valenced states of conscious subsystems might be less intense, our default would be to assume much smaller differences between the intensities of the states of the subsystems and the intensities of the states of the main subject.

Since conscious subsystems will vastly outnumber the main subject in many cases, many organisms’ moral value will be based largely on their subsystems, not on the main subjects who can report their experiences (or reveal them via behavior, etc.). So, instead of having to assess complicated questions about the relative intensity of valenced experiences, we can use “subsystem counts” to approximate the relative moral importance of humans and animals. And as we suggested earlier, when we do that with an expected number of subsystems roughly proportional to neuron counts, relative subsystem counts don’t support allocating resources to the best neartermist human interventions over the best neartermist animal interventions. Instead of undermining pro-animal cost-effectiveness analyses with a human-favoring philosophical theory (namely, Conscious Subsystems itself), Conscious Subsystems appears to support the conclusion that there are scalable nonhuman animal-targeting interventions that are far more cost-effective than GiveWell’s recommended charities.

That being said, we grant that Conscious Subsystems could make longtermist interventions seem better, though the issue is complicated by questions about the role of digital minds in the value of the long-term future. If most of the value of the long-term future is in digital minds, then given the possibility that digital minds might themselves have conscious subsystems—potentially even more conscious subsystems than humans (on a per-individual basis)—Conscious Subsystems could provide a reason for those already inclined toward longtermism to be even more inclined toward it.

However, it’s hard to believe that this would matter to anyone who has reservations about longtermism. Suppose, for instance, that we’re undercounting the number of conscious beings in the future relative to the number in the present by several orders of magnitude. If your primary reservation about longtermism is, say, complex cluelessness or very low probabilities of making any difference, that undercounting hardly seems relevant.

Conscious Subsystems probably supports spending far more on small invertebrates

Let’s now turn to the possibility that Conscious Subsystems should alter the way EAs ought to allocate resources, albeit in a surprising way. It’s estimated that there are 1.5 million to 2.5 million mites on the average human’s body, and together they probably have about 1% as many neurons as the average human brain. However, the views about consciousness that support Conscious Subsystems will tend to assign greater probabilities to the hypothesis that those mites are conscious. The lower the “bar” for consciousness, the more likely it is that mites clear it. So, the expected number of conscious systems on the human body might not be more than an order of magnitude smaller than the expected number in the human brain. Moving on from organisms living on humans, Schultheiss et al. (2022) estimate that 20 quadrillion (20 × 10^15) ants are alive at any moment, and based on some of our past research, we tend to think we ought to assign a non-negligible credence to the hypothesis that they’re sentient (Rethink Priorities, 2019; Schukraft et al., 2019).[9]

The upshot here is simple. There are relatively restrictive views of consciousness, like certain higher-order theories, and relatively permissive views of consciousness, like panpsychism. The higher our credences in restrictive views, the fewer conscious subsystems we ought to posit in expectation—and the lower the odds that many small invertebrates are sentient. The higher our credences in permissive views, the more conscious subsystems we ought to posit in expectation—and the higher the odds that many small invertebrates are sentient. So, insofar as we’re risk-neutral expected utility maximizers with relatively high credences in permissive views of consciousness, it’s likely that we should be putting far more resources into investigating the welfare of the world’s small invertebrates. And depending on how many invertebrates we can help and how much we can help them, it could work out that we ought to prioritize them over humans.[10]

We should probably assign low credences to the claims that support Conscious Subsystems 

We now turn to issues that are relevant to the credences we ought to assign to the claims that support Conscious Subsystems. The first is that they’re based on a failure to distinguish neural correlate theories of consciousness from explanatory theories of consciousness. The second is that functionalism probably doesn’t support attributing consciousness to neural subsystems per se, whatever it might imply about other entities.

Neural correlate theories of consciousness ≠ explanatory theories of consciousness

Again, the basic argument for Conscious Subsystems goes as follows:

  1. Some neural subsystems would be conscious if they were operating in isolation. 
  2. If a neural subsystem would be conscious if it were operating in isolation, then it's conscious even if part of a larger conscious system.
  3. So, some neural subsystems are conscious.

In support of the first premise, we have Shulman and St. Jules appealing to the implications of various theories of consciousness, and Shulman comparing the subsystems in human brains to insect brains. The China brain thought experiment supports the second.

Let’s step back and highlight the difference between two types of theories about consciousness. One kind of “theory of consciousness” identifies a “neural correlate of consciousness” (NCC): a set of conditions in brains that are, as the name suggests, reliably correlated with conscious experiences. In general, most attempts to scientifically study consciousness are attempts to find the neural correlates of consciousness, primarily relying on finding conditions that reliably correlate with self-reports of consciousness, since consciousness itself cannot be observed in others. Importantly, a neural correlate of consciousness need not provide an explanation of how subjective experiences exist in the physical world. NCCs need only identify patterns between observable features of the world.

A second type of theory of consciousness is an “explanatory theory of consciousness.” An explanatory theory doesn’t merely posit a correlation; it provides some story about how subjective experiences exist in the physical world. In short, an explanatory theory of consciousness is an attempt to solve the “hard problem of consciousness”: that is, it attempts to explain why and how we have phenomenally conscious states. Many scientists studying consciousness are explicitly uninterested in solving the hard problem of consciousness or believe it to be unsolvable.

This distinction is important because there’s a certain type of move that’s made in discussions about the distribution of consciousness that involves a category error. Consider the following argument schema:

  1. Brains with property X are conscious.
  2. Ys have property X.
  3. So, Ys are conscious.

This argument might be fine for some values of X and Y. However, it’s often confused if X is a neural correlate of consciousness and Y is something for which we don’t have any independent reason to posit consciousness. This is because Premise 1 here is really shorthand for a longer claim, which is something like: brains with property X that can self-report consciousness are conscious. Essentially, Premise 1 is really a claim about a very specific kind of system—the human system—that research has revealed to have the following feature: whenever X is present, systems of that type (people) self-report conscious experiences of a certain type; whenever X is absent, they do not. Premise 1 doesn’t say anything about systems that can’t self-report consciousness.

Again, this is because in an NCC, X doesn’t explain what consciousness is; it isn’t an account of consciousness. Instead, it serves as the basis for a research program. Because we know that brains with X often have conscious experiences, we should investigate X in more detail to learn more about consciousness in that kind of organism. If X were supposed to explain what consciousness is—if, in other words, the theory attempted to provide the list of necessary and sufficient conditions for consciousness—then there would be no conceptual issue here. But the move is a category error when used with an NCC, because such theories aren’t attempting to provide lists of necessary and sufficient conditions; they aren’t trying to provide accounts of what consciousness is. Rather, these theories are only trying to identify promising features that can be reliably correlated with self-reports of consciousness.

Consider, for example, someone arguing as follows.

  1. Brains that engage in recurrent processing are conscious.
  2. Electrons do something that, at an abstract level, could be described as recurrent processing (e.g., “an electron influences other particles, which in turn influence the electron,” etc. (Tomasik, 2020 and St. Jules, 2020)).
  3. So, panpsychism is true.

This argument is based on a misunderstanding of the first premise, at least as it’s intended by many proponents of recurrent processing theory. These proponents are not saying that anything that exhibits the property of recurrent processing is thereby conscious. Instead, they are making the empirical claim that recurrent processing of a certain sort is reliably correlated with self-reports of conscious experiences in humans. It doesn’t make sense to generalize this view to “electrons influencing one another” because electrons influencing one another is decidedly not correlated with self-reports of consciousness.

Of course, some proponents of recurrent processing theory probably do take themselves to be giving an account of what consciousness actually is. We can’t assess such claims here. However, it’s important to recognize that we probably ought to assign much lower credences to the explanatory interpretations of theories than their neural correlate interpretations. Neural correlate interpretations of theories of consciousness have whatever fairly direct empirical support they have (or don’t have, as the case may be). Explanatory interpretations of theories of consciousness borrow their support from their corresponding neural correlate interpretations and then go well beyond them, staking out much more controversial positions, and in any case ones that we can’t clearly disconfirm empirically.

The upshot here is that Premise 1 faces one of two problems. On the one hand, it could be unmotivated, as it’s a mistake to think that, because some neural subsystems have X—some neural correlate of consciousness—they would be conscious if they were operating in isolation. On the other hand, it could be that we ought to assign it a rather low credence.

Functionalism doesn’t support conscious subsystems

Let’s turn to the second problem for the argument for Conscious Subsystems. Roughly, functionalism about consciousness is the view that a physical state realizes a given mental state by playing the right functional role in a larger system. Crucially, standard versions of functionalism don’t entail that all functional roles are conscious: only the ones with the right relationships to sensory stimulations, other mental states, and behavior. 

Now consider the following argument:

  1. Fruit flies have roughly 200,000 neurons.
  2. A human brain has 420,000x as many neurons as a fruit fly.
  3. So, if fruit flies are conscious and we can’t rule out states being conscious merely because they’re embedded in a larger system, then human brains contain something like 420,000x as many conscious subsystems as fruit flies.

This argument isn’t valid as it stands, but that isn’t the point. Instead, the point is that you can’t make it valid by adding a standard version of functionalism. Standard functionalism, as noted above, doesn’t claim that any system with a certain amount of processing power is thereby conscious. It says that states are conscious if they realize particular functional relationships between inputs, outputs, and other states. But there is no reason to believe that any of the subsets of neurons in the human brain with the same number of neurons as a fruit fly are arranged with the correct functional relationships.

Moreover, there are positive reasons to deny that given subsets of neurons in the human brain are arranged in the right manner to be conscious. Fruit fly brains faced evolutionary pressures and, as such, are designed to realize a set of input-output relations that increase the likelihood of fruit flies surviving and passing on their genes. Human brains also evolved to promote the likelihood of survival and reproduction. However, the subsystems of human brains—including, for instance, parallel tracks of sensory processing—faced pressure to contribute to the overall system in ways that facilitate adaptive behavior by the whole organism. In other words, the evolutionary pressures on any subset of a human brain would push that subset to realize functions that differ from any function designed to maximize the fitness of a fruit fly.

Granted, someone could insist that the input-output relations that happen to maximize the fitness of fruit flies are also likely to be present in the human brain. However, it seems extremely unlikely that the pattern of neural activations that would lead to maximizing fruit fly fitness through a mental state would just so happen to be the same pattern of activations realized in human subsystems that make small contributions to the behavior of the organism as a whole. In fact, if we just look at the organization of the human brain, the distances that signals need to travel, and overall interconnections, it seems almost certain that there are no roughly fruit-fly-brain-sized subsystems of the human brain that realize identical functions to the fruit fly brain.

Abstracting away from fruit fly brains, it’s likely that some functions required for consciousness or valence—or realized along the way to generate conscious valence—are fairly high-order, top-down, highly integrative, bottlenecking, or approximately unitary, and some of these are very unlikely to be realized thousands of times in any given brain. Some candidate functions are selective attention,[11] a model of attention,[12] various executive functions, optimism and pessimism bias, and (non-reflexive) appetitive and avoidance behaviors. Some kinds of valenced experiences, like empathic pains and social pains from rejection, exclusion, or loss, depend on high-order representations of stimuli, and these representations seem likely to be accessible or relatively few in number at a time, so we expect the same to hold for the negative valence that depends on them. Physical pain and even negative valence generally may also turn out to depend on high-order representations, and there’s some evidence they depend on brain regions similar to those on which empathic pains and social pains depend (Singer et al., 2004; Eisenberger, 2015). On the other hand, if some kinds of valenced experiences occur simultaneously in huge numbers in the human brain, but social pains don’t, then, unless these many valenced experiences have tiny average value relative to social pains, they would morally dominate the individual’s social pains in aggregate, which would at least be morally counterintuitive, although possibly an inevitable conclusion of Conscious Subsystems.

Furthermore, the extra neurons in the human brain used to realize some of these functions have other, more plausible roles than realizing these functions thousands of times simultaneously, like greater acuity, greater categorization power, integrating more inputs (Birch et al., 2020), and broadcasting the resulting signals to more neurons (or processes, as Shulman, 2020 suggests) afterward. But even if each type of function that’s necessary for conscious valence were realized many times in the human brain, each subsystem would need to realize an instance of each type of function and have them all fit together in the right way to generate conscious valence.

In other words, standard functionalism does not support the claim that the mere presence of large numbers of neurons in human brains is evidence in favor of the argument that there are numerous conscious subsystems. That more neurons are devoted to the same functions in one brain than another isn’t enough to establish that those functions, especially those generating conscious valence, are realized more often in the first brain than in the second. Those arguing in favor of conscious subsystems need to present some positive evidence for believing that the functions that are realized in subsets of human brains are identical, or at least similar enough, to those in other organisms to also be considered conscious. We don’t have such evidence ourselves and we aren’t aware of claims by neuroscientists that would suggest that it’s out there. 

Again, the upshot here is that Premise 1 of the argument for Conscious Subsystems seems unmotivated: standard versions of functionalism don’t allow us to make analogical arguments from the complexity of subsystems or their contributions to generating consciousness or valence to their being separately conscious.

Conclusion

As we’ve argued, there are some key decision contexts in which Conscious Subsystems probably shouldn’t affect how we ought to act. In part, this is because animal-directed interventions look so good; on top of that, the theories of consciousness that support Conscious Subsystems also support consciousness being widespread in the animal kingdom, which is likely to cause small invertebrates to dominate our resource allocation decisions. However, Conscious Subsystems also shouldn’t affect our resource allocation decisions because we ought to assign it a very low probability of being true. The basic argument for it is probably based on a category error. In addition, it doesn’t get the support from functionalism that we might have supposed.

Nevertheless, some will insist that the probability of Conscious Subsystems is not so low as to make it practically irrelevant. While it might not affect any decisions that EAs face now, it may still affect decisions that EAs face in the future. In what remains, we explain why we disagree. On our view, while it might seem as though expected utility maximization supports giving substantial weight to Conscious Subsystems, other considerations, such as credal fragility, suggest that we should give limited weight to Conscious Subsystems if we're careful expected utility maximizers.[13]

From the armchair, we can—as we have!—come up with arguments for and against Conscious Subsystems. However, it’s hard to see how any of these arguments could settle the question in some decisive way. There will always be room to resist objections, to develop new replies, to marshal new counterarguments. In principle, empirical evidence could radically change our situation: Imagine a new technology that allowed subsystems to report their conscious states! But we don’t have that evidence and, unfortunately, may forever lack it. Moreover, we should acknowledge that it’s probably possible to come up with inverse theories that imply that smaller brains are extremely valuable—perhaps because they realize the most intense valenced states, having no cognitive resources to mitigate them. So, we find ourselves in a situation where our credences should be low and fairly fragile. Moreover, they may alternate between theories with radically different practical implications. 

This isn’t a situation where it makes sense to maximize expected utility at any given moment. Instead, we should acknowledge our uncertainty, explore related hypotheses and try to figure out whether there’s a way to make the questions empirically tractable. If not, then we should be very cautious, and the best move might just be assuming something like neutrality across types of brains while we await possible empirical updates. Or, at least, seriously limiting the percentage of our resources that’s allocated in accord with Conscious Subsystems. This seems like a good epistemic practice, but it also makes practical sense: actions involve opportunity costs, and being too willing to act on rapid updates can result in failing to build the infrastructure and momentum that’s often required for change.


 

Acknowledgments


This research is a project of Rethink Priorities. It was written by Bob Fischer, Adam Shriver, and Michael St. Jules. It is indebted to previous work on this topic by David Mathers. Thanks to Marcus Davis, Jim Davies, Gavin Taylor, Teo Ajantaival, Jacy Reese Anthis, Magnus Vinding, Brian Tomasik, Anthony DiGiovanni, Joe Gottlieb, David Mathers, Richard Bruns, David Moss, and Derek Shiller for helpful feedback on earlier versions of this report. If you’re interested in RP’s work, you can learn more by visiting our research database. For regular updates, please consider subscribing to our newsletter.


 

  1. ^

    These non-accessible conscious states go under many names: e.g., “hidden qualia” (Shiller, 2016), “paraconsciousnesses,” “underselves,” and “co-consciousnesses” (Blackmore, 2017), while “phenomenal overflow” may be a special case (Block, 2007 and Block, 2011, as well as discussion of the partial awareness response in Kouider et al., 2010 and Tsuchiya et al., 2015). We should also note that there are ways of interpreting some of these authors, such as Tomasik, 2013-2019 and Shulman, 2020, where they do not mean to be talking about inaccessible states. Instead, they may have a view where the components of consciousness are a bit like pixels with a fixed size, so that the more of those pixels you have in each experience, the “more consciousness”—or the more independently valuable components of consciousness—you’ve got. Shulman (2020), for instance, writes that “[if] each edge detection or color discrimination (or higher level processing) additively contributes some visual experience, then you have immense differences in the total contributions.” However, only the valenced components would be valuable given hedonism, which we assume in this report, and we’d expect these valenced components to occur in relatively small numbers and be tied to high-level features of a subject’s experiences. We don’t address that view in detail, though we suspect that some of the arguments below could be adapted to apply to it.

  2. ^

    For present purposes, we're simply granting the assumption behind this inference, which is that if phenomenal states aren’t integrated with or accessible to one another, then they’re possessed by different subjects. However, that assumption is controversial and we aren't endorsing it.

  3. ^

    China had a population of roughly one billion when Block wrote this, which is about 100x fewer people than the number of neurons in a human brain. It’s an open question whether that’s enough people for Block’s purposes, but the general philosophical point remains.

  4. ^

    It’s complicated to assess how much of a concession this is. For instance, it’s clear that whatever probability you assign to Conscious Subsystems, you should assign a lower probability to the hypothesis that Conscious Subsystems is true, that the states are valenced, and the valenced states are correlated with higher-level reports. Moreover, it’s plausible that even if the subsystems have valenced states, those states don’t affect the behavior of the whole organism; so, there aren’t any adaptive pressures on those subsystems that would result in correlations with higher-level reports. As a result, the expected value implications of Conscious Subsystems might be trivial. At the same time, someone might argue that some neurons may have specific functions that make such a correlation plausible. For instance, there may be neurons that play a role in generating reportable negative valence but no role in generating reportable positive valence. All else equal, these negative valence-selective neurons may seem more likely to help realize the same negative valence-specific functions they do for reportable negative valence—and so negative valence—in subsystems containing them than to help realize positive valence in those subsystems. That being said, whatever variation of this hypothesis we entertain, it isn’t clear whether the number of relevant conscious subsystems scales linearly with neuron counts—where the relevant ones are those with valenced states that are correlated with the reports of the whole system. So, the open and challenging question is about the discount to apply.

  5. ^

    0.2 * 0.2 * 400 + (1 - 0.2 * 0.2) * 1 = 16.96. We’re assuming that the subsystems aren’t overlapping, and, for simplicity, that if there aren’t 400 conscious subsystems conditional on the Conscious Subsystems premises, there’s just the one conscious system.

  6. ^

    0.05*0.05*0.05*(86,000,000,000/10,000) + (1-0.05*0.05*0.05)*1 ≈ 1076 in a human, in expectation, vs. 1 in the insect.

  7. ^

     Shulman (2020) suggests something similar, claiming that “trivially tiny computer programs we can make today could make for minimal instantiations of available theories of consciousness, with quantitative differences between the minimal examples and typical examples.” Granted, the local / global distinction might be epistemically significant, as we may not have ways to confirm that local broadcasting generates consciousness, whereas we can ask people to self-report about global broadcasting. However, it may not be theoretically significant, in the sense that there may not be any principled reason why global broadcasting would be required for consciousness.

  8. ^

    This range reflects just the default set of parameters in their Guesstimate model, after setting the node “moral weight (DALY/cDALY)” to 1. We get a range because their Guesstimate model is noisy and different samples give different results.

  9. ^

    For more discussion of the population numbers of different groups of animals and the total numbers of neurons across these groups, see Shulman, 2015; Tomasik, 2015-2019; Ray, 2019; Ray, 2019; Tomasik, 2009-2019; and McCallum, Martini and Shwartz-Lucas, 2022. Land use, especially agricultural land use, plausibly has very large impacts on them, given that half of the world’s habitable land is used for agriculture (Ritchie and Roser, 2019), and climate change also probably has very large impacts on them, good or bad. For a different version of this argument, see Sebo, 2022.

  10. ^

    Furthermore, given objections of fanaticism and decision-theoretic irrationality to expected utility maximization with unbounded utility functions (McGee, 1999; Russell and Isaacs, 2020; Russell, 2021; Christiano, 2022; Pruss, 2022), including the risk-neutral expected value maximizing total utilitarianism we’ve assumed in our section Motivating Conscious Subsystems, we should give some weight to alternative decision theories or to bounded social welfare (utility) functions, perhaps aggregating across these views through some method for handling normative uncertainty (MacAskill, Bykvist and Ord, 2020). (Though we should not commit solely—or perhaps at all—to a version of maximizing expected choice-worthiness, especially with intertheoretic comparisons, to handle normative uncertainty, since that takes for granted an assumption we’re calling into question: expected utility maximization, especially with an unbounded utility function.) Compared to risk-neutral expected utility maximizing total utilitarianism, we expect these alternatives to be less fanatical and to give similar or substantially less weight to Conscious Subsystems, and so we expect to give less weight overall to Conscious Subsystems as a result of their consideration. This would also probably mean further discounting animals the more unlikely they are to be conscious, more so than just by their probability of consciousness, and could therefore potentially block the total domination of small invertebrate welfare in the short term. Indeed, this seems to be one of the few ways a total hedonistic utilitarian would prevent small invertebrates from totally dominating in the short term.

  11. ^

     About Global Workspace Theory, Baars (2003) writes:

    The sensory "bright spot" of consciousness involves a selective attention system (the theater spotlight), under dual control of frontal executive cortex and automatic interrupt control from areas such as the brain stem, pain systems, and emotional centers like the amygdala. It is these attentional interrupt systems that allow significant stimuli to "break through" into consciousness in a selective listening task, when the name is spoken in the unconscious channel.

  12. ^

     About Attention Schema Theory, Graziano (2020) writes:

    AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.

  13. ^

    … and even more so if we consider alternative decision theories, bounded social welfare functions, or limiting aggregation.

Comments

It may be that certain mental subsystems wouldn't be adequate by themselves to produce consciousness.  But certainly some of them would.  Consider a neuron in my brain and name it Fred.  Absent Fred, I'd still be conscious.  So then why isn't my brain minus Fred conscious?  The other view makes consciousness weirdly extrinsic--whether some collection of neurons is conscious depends on how they're connected to other neurons.

(Not speaking for my co-authors or RP.)

I think your brain minus Fred is conscious, but overlaps so much with your whole brain that counting them both as separate moral patients would mean double counting.

We illustrated with systems that don't overlap much or at all. There are also of course more intermediate levels of overlap. See my comment here on some ideas for how to handle overlap:

https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we?commentId=pAZtCqpXuGk6H2FgF

But then wouldn't this mean my brain has a bunch of different minds?  How can the consciousness of one overlap with the consciousness of another?

Your brain has a bunch of overlapping subsystems that are each conscious, according to many plausible criteria for consciousness you could use. You could say they're all minds. I'm not sure I'd say they're different minds, because if two overlap enough, they should be treated like the same one.

See also the problem of the many on SEP:

As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and circumstances permitting rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans.

I broadly agree that the hypothesis is not firmly established and would have mild, no, or hard-to-gauge practical implications. Two quick comments though:

- The assumption that conscious subsystems don’t overlap seems unmotivated, and if it’s relaxed I think the sort of thinking that takes one in the direction of conscious subsystems (that something could be conscious while being a highly integrated component of something else) probably starts to make the individuation of subjects very difficult, producing indeterminate numbers of conscious subsystems.

- Cerebral hemispheres are an especially promising case of conscious subsystems, where no theoretical argument is needed for premise 1 (that the subsystem is capable of consciousness by itself), because people with one hemisphere removed can self-report consciousness. Worth noting that strictly, what can report consciousness is a system consisting of a hemisphere and the midbrain, etc., so drawing the inference to conscious subsystems requires accepting a degree of overlap: if the two hemisphere+midbrain systems are both conscious (as well as the whole brain) they overlap at the midbrain. 
 

Hi Luke, thanks for your comment!

I agree with you about overlap and individuation. We decided to stick with this presentation for simplicity and brevity.

Some thoughts, speaking only for myself and not my co-authors:

  1. I would treat the indeterminacy and issue of what kind of overlap you allow as partly a normative question, and therefore partly a matter of normative intuition and subject to normative uncertainty. If you assign weights to different ways of counting subsystems that give precise estimates (including precisifications of imprecise approaches), you can use a method for handling normative uncertainty to guide action (e.g. one of the methods discussed in https://www.moraluncertainty.com/ ). 
  2. While I actually expect some overlap to be allowed, I think reasonable constraints that prevent what looks like counterintuitive double counting to me will give you something that scales at most roughly proportionally with the number of neurons, if you pick the largest number of conscious subsystems of a brain you can get while following that set of constraints. This leaves no indeterminacy (other than more standard empirical or logical uncertainty), conditional on a set of precise constraints and this rule of picking the largest number. But you can have normative uncertainty about the constraints and/or the rule. As one potential constraint, if you have A1, A2 and A1+A2, you could count any two of them, but not all three together. Or, cluster them based on degree of overlap with some arbitrary sharp cutoff and pick one representative from each cluster. Or, you could pick non-overlapping subsets of neurons of the conscious subsystems to individuate them, so that each neuron can help individuate at most one conscious subsystem, but each neuron can still be part of and contribute to multiple conscious subsystems. You could also have function-specific constraints.
  3. Furthermore, without such constraints, you may end up with huge and predictable differences in expected welfare ranges between typically developed humans, and possibly whale and elephant interventions beating human-targeted interventions in the near term (because they have more neurons per animal), despite how few whales and elephants would be affected per $ on average. This seems very morally counterintuitive to me, but largely based on intuitions that depended on there not being such huge differences in the number of conscious subsystems in the first place.

 

On the two hemispheres case, we have a report on phenomenal unity coming out soon that will discuss it. In this context, I’ll just say that 1 or 2 extra conscious subsystems or even a doubling or tripling of the number (in case there would still be many otherwise) wouldn’t make much difference to prioritization between species just on the basis of the number of conscious subsystems, and we wanted to focus on cases where individuals have suggested very large gaps between species.

Hi people. The (preoperative diagnostic) Wada test, in which brain hemispheres are alternately anesthetized while the still-conscious parts of the patient's brain attempt to name and recall presented objects, provides strong medical evidence of conscious subsystems. Hemispherectomies, as Luke points out, and even strokes would also seem to be supportive.

But more to the point, the fact that there is a conscious experience of losing neural communication with an entire hemisphere (for example, the realization that one can no longer produce speech or lift one arm) provides, I've argued, good reason to think that the substratum of that experience was conscious prior to the loss. There are alternative interpretations that seem more intuitive at first, but I think they require some serious metaphysical commitments. I have a 2016 paper on hemispherectomies that makes this case, and I give a more elaborate defense in my 2021 paper on IIT, for anyone who's interested.


Hi Matt! I don’t think that follows. At best, those premises cut off one way that functionalism could support spending more on small invertebrates (namely, via Conscious Subsystems), leaving many others open. Functionalism is such a broad view that it probably doesn’t have any practical implications at all without lots of additional assumptions—which, of course, will vary wildly in terms of the support they offer for spending on the spineless members of the animal kingdom.

Thanks Bob -- appreciate it!

"Imagine a new technology that allowed subsystems to report their conscious states! But we don’t have that evidence and, unfortunately, may forever lack it. "

We already have this technology. It is called Internal Family Systems therapy. Mindfulness meditation also results in awareness of the brain processes that were formerly shielded from conscious awareness, and knowledge of the fact that they have valenced experiences of their own, separately and often in conflict with the valence that the conscious mind reports.

Denying the existence of conscious subsystems in the human brain is like denying the existence of jhana. The lived experience of thousands of people is that they exist. We have watched them, and talked to them, and watched them talk to each other. We have seen that the 'I' of our assumed personal identity is actually a process that results from 'passing a microphone' from one subsystem to another as they take turns reacting to various stimuli.

Humans are vast, we contain multitudes. If you hurt me, you are hurting a lot of things. Theories of a singular consciousness are based on a narrow and limited sense of identity that anyone with meditation attainment will tell you is a delusion.

Hi Richard, thanks for your comment!

I think those subsystems' states (whether conscious or not) have to pass through the larger system's access (e.g. attention) and report functions to be reported. They can't report directly without passing through the larger system. At least, I'm not aware of any compelling evidence that they can do so directly and I'm not sure what it would look like.

Another interpretation of what's happening with meditation is that it's just generating different kinds of accessible conscious states, not revealing hidden ones. (It at least is generating different kinds of accessible conscious states, since otherwise we wouldn't be discussing them.) You could have multiple subagents or personalities or similar, but they may only be conscious when accessed as part of the larger system.
