
In 2023, Nick Humphrey published his book Sentience: The invention of consciousness (S:TIOC). In this book he proposed a theory of consciousness that implies, he says, that only mammals and birds have any kind of internal awareness.

[EDIT: This post aims to summarize that book. Nick Humphrey has written a précis of his own book on Aeon here, and has done a much better job summarizing it than I have. Consider reading that instead of this post, then coming back to comment on this forum post.]

His theory of consciousness has a lot in common with the picture of consciousness described in recent books by two other authors, neuroscientist Antonio Damasio and consciousness researcher Anil Seth. All three agree on the importance of feelings, or proprioception, as the evolutionary and experiential base of sentience. Damasio and Seth, if I recall correctly, each put a lot of emphasis on homeostasis as a driving evolutionary force. All three agree sentience evolved as an extension of our senses–touch, sight, hearing, and so on. But S:TIOC is a bolder book: it not only describes what we know about the evolutionary base of consciousness but proposes a plausible theory that comes as close to describing what consciousness is as one can get short of actually solving Chalmers’ Hard Problem.

The purpose of this post is to describe Humphrey’s theory of sentience, as laid out in S:TIOC, and to explain why Humphrey is strongly convinced that only mammals and birds–not octopuses, fish, or shrimp–have any kind of internal experience. Right up front I want to acknowledge that cause areas focused on animals like fish and shrimp seem impactful on expectation even if there’s only a fairly small chance those animals have the capacity for suffering or other internal experiences. Those areas might be impactful because of the huge absolute numbers of fish and shrimp who are suffering if they have any internal experience at all. Nevertheless, a theory with reasonable odds of being true that can identify which animals have conscious experience should update our relative priorities. Furthermore, if there is substantial uncertainty, which I think there is, such a theory should motivate hypothesis testing to help us reduce that uncertainty.


Blindsight

To understand this story, you should hear about three fascinating personal encounters which led Humphrey to some intuitions about consciousness. Humphrey describes blindsight in a monkey and a couple of people. Blindsight is the ability of an organism to see without conscious awareness of seeing. Humphrey tells the story of a monkey named Helen whose visual cortex had been removed. After the removal of her visual cortex, Helen was miserable and unmotivated to move about in the indoor world she lived in. After a year of this misery, her handlers allowed her to get out into the outside world and explore it. Over time she learned to navigate with an unmistakable ability to see, avoiding obstacles and quickly locating food. But Humphrey, knowing Helen quite well, thought she lacked confidence in the visual abilities she clearly had, as though she were unaware of them. This was a clue that perhaps Helen was using her midbrain system, the superior colliculus, which processes visual information in parallel with the visual cortex, and that she was unaware of the visual information her brain could nevertheless use to navigate her body around obstacles and to locate food. Of course this is somewhat wild speculation, considering that Helen couldn’t report her own experience back to Humphrey.

The second observation was of a man known to the scientific community as D.B. In an attempt to relieve D.B. of terribly painful headaches, doctors had removed his right visual cortex. D.B. reported not being able to see anything presented in his left visual field (the left and right halves of the visual field each project to the visual cortex on the opposite side of the brain). But strangely, when doctors encouraged him to guess what was in his left visual field, he could correctly identify the shape, color, and position of objects presented to him, even though he had no conscious awareness of seeing them.

I’d like to add two caveats to this story about D.B. First, I am a little sceptical that this story is really evidence for unconscious sight. Split-brain patients–patients whose cortex has been cut in half to reduce seizures–can only describe objects presented to the visual field connected to the side of their brain that produces speech. Present an object to the other visual field and the information will go to the other side of the brain, and these patients will verbally report not being able to see the object. Nevertheless, they will correctly write the name of the object down on a piece of paper. The fascinating possibility here is that split-brain patients might have split consciousness: potentially parallel tracks of conscious experience that are to some degree uncoordinated and independent. In the context of D.B., the patient whose visual cortex was removed, we might infer that perhaps D.B. was still conscious of the objects he was seeing with his superior colliculus, but was merely unable to describe them because his remaining visual awareness was disconnected from his phonological systems.

Second, @mako yass suggested an interesting empirical test of Humphrey’s observations about D.B. If we were to help D.B. train his unconscious sight by giving continuous feedback on his guesses about what he was seeing, would he learn to recognize–and become conscious of–whatever intuitions he is drawing on? If he did, then perhaps he has had some kind of conscious experience of the visual stimuli after all–just not the sort of qualia you get with a visual cortex–and in that case, perhaps a theory that places conscious visual sensation entirely in the visual cortex is misplaced.

Having added those caveats, I’ll move on to discussing the final compelling case study that Humphrey uses to set up his theory of sentience in S:TIOC.

H.D. is a woman who tragically lost her eyesight at the age of 3 due to scarring of her corneas. Her corneas weren’t restored until the age of 27, and following the operation, she was convinced her vision hadn’t improved. Without any training from visual stimuli between the ages of 3 and 27, her visual cortex had perhaps atrophied and was unable to make sense of the signals coming in from her eyes. Like Helen the monkey, H.D. was able to identify obstacles and point to objects in the world. But she reported a lack of any subjective sensory quality of visual experience.

The common thread running through the experiences of Helen, D.B., and H.D. is that, although in each case the evidence was not entirely complete, it seems fairly likely each was able to see but unable to experience the qualia of seeing. Perhaps sight existed for them as a sort of sixth sense, imperceptible except as a kind of intuition. Somewhat like a Jedi learning to swing a lightsabre by intuition alone, without conscious awareness, these three seemed able to sense visual stimuli without consciously experiencing them.

The implication is that visual sensation and perception are separable in important ways. I’m sure I’m oversimplifying the story somewhat, but Humphrey's rough sketch is that visual sensations are conscious experiences generated in the visual cortex, while perceptions are unconscious signals existing in the midbrain’s superior colliculus. In the normal operation of a human brain, sensation and perception might become intermingled, but take away one and something of the other will remain; an animal with only a midbrain might have the perception without the sensation.

Sensation, sentition, and the ipsundrum

In early animals, reflexive neural circuitry generates direct responses to perceived stimuli. If an aversive stimulus is perceived on the left, the organism reflexively moves right. In S:TIOC, Humphrey calls that sort of reflex response “sentition”: a meaningful but automatic response to stimuli. Humphrey proposes a four-step evolutionary development from those automatic sensory responses to conscious sensation.

In the first step, an additional copy of that motor command–an efference copy–is generated and sent to additional neurons internal to the brain, which simply represent and store information about the motor response itself. The body monitors its own response (a remarkably similar story to Damasio’s “somatic marker hypothesis”) in order to do things with that response, such as learn new associations with it.

In the second step, the animal reaches a level of evolutionary sophistication where some reflexive responses are no longer appropriate. At that point, the reflexive responses are privatized, so that only the internal model of the motor response remains–there’s no longer an automatic command going back out to the body. In this sense, the brain now for the first time has a private record of the response. This forms a proprioceptive map of the body internal to the brain.

In Step 3, because motor signals formerly sent to the body now go from one place in the brain to another, a feedback loop can form. A sensory feedback loop can be triggered by an incoming sensory signal, but that signal can now reverberate in the brain as a continuous, lasting neural response. This ‘thickens up’ (Humphrey’s term) the response, giving the signal some persistence over time.

In Step 4 of our development towards consciousness, evolution shapes the brain to push that recursive activity into a stable attractor state which can repeat the same pattern at different times. Humphrey calls that complex system the “ipsundrum”, and he says it is those stable, recursive patterns that are phenomenal sensations. I’m still not entirely sure why he thinks these patterns in particular are phenomenal, but let’s say only they have the persistence and complexity to reach a threshold of conscious feeling. Because it’s a recursive feedback state shaped by evolution into a stable attractor system, the ipsundrum is “all-or-nothing”–you have a particular phenomenal consciousness, or you don’t. Animals without this complex recursive attractor system do not have conscious sensations at all.
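Humphrey doesn’t give a computational model, but the generic idea in Steps 3 and 4–recurrent activity that persists after the triggering input and settles into a stable, repeatable pattern–can be illustrated with a toy Hopfield-style network. This is only a sketch of attractor dynamics in general, not of the ipsundrum itself; the pattern, network size, and update rule are all arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stored +/-1 pattern plays the role of a stable, repeatable response.
pattern = rng.choice([-1, 1], size=32)

# Hebbian weights make the stored pattern an attractor of the dynamics.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# A noisy "incoming signal": the pattern with several units flipped.
state = pattern.astype(float).copy()
flipped = rng.choice(32, size=6, replace=False)
state[flipped] *= -1

# Recurrent feedback: iterate until activity settles.
for _ in range(10):
    state = np.sign(W @ state)

# The noisy input has fallen into the stored attractor state.
```

The point is only that recurrent feedback can turn a transient, corrupted input into a persistent, repeatable pattern–perturbed inputs either fall into the stored state or they don’t–which loosely mirrors Humphrey’s claim that the ipsundrum is all-or-nothing.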

An apology to the reader if you are at this point feeling a little lost. There are gaps for me at this point too, and I’m not sure I’ve entirely faithfully reproduced the argument. In particular, the distinction between Step 3 (thickened-up recursive sensory feedback loops) and Step 4 (attractor states for those loops) seems not clearly defined. But I hope I have communicated the gist!

Animals and sentience

Humphrey discusses two behavioral patterns which, to me, were compelling for his argument that birds and mammals–and no other animals–are sentient. First, Humphrey claims that all animals with sentience, i.e. that experience internal qualia, have a motivation to engage in sensory play in order to experiment with and learn about their internal qualia. When judging whether a species is sentient, then, a conclusive lack of sensory play is strong evidence against sentience: in an evidential sense, play is necessary but not sufficient for sentience, because sensory play is an inevitable consequence of sentience. Second, Humphrey says sensation seeking is strong evidence an animal is sentient; non-sentient animals have no reason to seek out sensations. So sensation seeking is sufficient to indicate sentience, albeit not strictly necessary.

I go back and forth on this. Brian Christian gives a strong argument for the utility of reinforcement learning agents having an intrinsic motivation to explore and learn about the world around them. That might imply a kind of intrinsic attraction to novelty not too different from sensation-seeking. You might also imagine it is evolutionarily adaptive for some specific kinds of perceptions, like the warmth of a close companion, to be intrinsically rewarding, irrespective of whether there are sensory qualia associated with them.

But if I understand Humphrey right, he’s clear that these behavioral patterns could in theory be replicated in non-conscious, non-sentient machines. It’s just that in humans, the solution evolution has hit upon is to create sensory feedback loop attractor states that Humphrey calls ipsundrums (ipsundra?) which happen to generate conscious experiences. Humans engage in play in order to learn about those conscious experiences, and engage in sensation-seeking because some of those experiences are inherently pleasurable. Other mammals and birds exhibiting the same behavior, having much the same neural circuitry, are probably engaging in that behavior for the same reason humans are–because they have internal conscious experiences. Other animals like fish, reptiles, and octopuses do not engage in sensation-seeking or play and so do not have those internal conscious experiences.

Implications

Humphrey’s theory of consciousness in S:TIOC implies that machines could, in principle, be conscious, if they have the same kind of reflective systems that sentient animals like humans do; but also that there’s probably no particular function or kind of intelligence that would require consciousness to operate. But it does seem possible that, if we were to try to emulate the human proprioceptive, learning, and decision-making system, we might (accidentally or otherwise) produce machine consciousness.

Humphrey’s ipsundrum theory of consciousness suggests that efforts to improve the living conditions of fish and shrimp may not actually decrease suffering of sentient creatures, because those animals are not sentient. This will be quite a controversial implication. 

My hope is that if we can all agree the ipsundrum theory is really just a hypothesis at this point, we can all agree that, on expectation, given the current state of knowledge, fish and shrimp welfare are morally relevant–but also that future evidence in favor of the ipsundrum hypothesis may change that expectation. It may also be that, due to the vast number of fish and shrimp, even a tiny probability that the ipsundrum hypothesis is right but incomplete would keep fish and shrimp morally relevant relative to mammals and birds, who are more clearly sentient but fewer in number.

Research should try to test the hypotheses Nick Humphrey describes in order to reduce uncertainty about his theory. Unfortunately, Humphrey doesn’t spend much time outlining hypothesis tests. There are several parts of the theory which could use additional testing:

  • At the neuroscientific level, how exactly should we identify the ipsundrum in humans? 
  • Can we identify feedback-loop attractor states that correlate with the presence of conscious experience, appearing (for instance) during wakeful experience and REM sleep but disappearing during deep sleep?
  • Might we simply look for bidirectional connectivity patterns between relevant brain areas; if so, which are the relevant brain areas?
  • How do we separate out the presence of a sensory loop “attractor state” from mere accidental feedback loops? Should we look to structural connectivity?
  • Within animal ethology, is it really true that fish, shrimp, octopuses, and other animals of particular concern do not engage in sensation seeking or play? 
  • Additional research into blindsight is also likely relevant. 

There’s probably a much longer and more precise list of hypothesis tests one could propose.

Probably most hypothesis-testing should concern biological organisms. But perhaps there is computational consciousness work to experiment with too. What sort of reinforcement-learning-based, embodied artificial intelligence endowed with sensory feedback loops to track its own embodiment might use play to learn about its own sensory processes? Is it possible that, without building explicit reward processes for sensation seeking, a sensation-seeking behavior might emerge simply because of the reward structure of the sensory system?
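As a toy illustration of the last question, here is a minimal sketch in which the only reward signal is an intrinsic, count-based novelty bonus, and systematic exploration of a small state space emerges from that reward structure alone. All details (chain world, bonus formula) are invented for illustration, not drawn from any particular RL system:

```python
# A greedy agent on a 1-D chain of states. It receives no extrinsic
# reward at all; its only signal is an intrinsic novelty bonus that
# decays as a state is visited more often (count-based exploration).
N = 10                      # number of states in the chain
counts = [0] * N            # visit counts per state
pos = 0
counts[pos] += 1

for _ in range(3 * N):
    neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < N]
    # Intrinsic reward: less-visited states are more "interesting".
    pos = max(neighbors, key=lambda p: 1.0 / (1.0 + counts[p]))
    counts[pos] += 1

# Driven purely by novelty, the agent has swept the entire chain.
visited = sum(1 for c in counts if c > 0)
print(visited, "of", N, "states visited")  # → 10 of 10 states visited
```

Note the caveat: the novelty bonus here is hand-built, whereas the question above is whether sensation-seeking-like behavior could emerge without an explicit reward for it; this sketch only shows that a simple intrinsic reward is sufficient to produce it.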

Such work, run with appropriate agents, would not necessarily be more unethical than research on animals, and might be much more ethical if agents were deliberately designed without, e.g., a desire for self-preservation–although if Humphrey is right, such a drive does seem to be intrinsic to sentience. That sort of experimentation is also not in itself dangerous from an existential-risk perspective, provided it is performed on systems with fairly limited intelligence, awareness of the wider world, and ability to modify their own basic reward systems. If Humphrey is right, sentience could arise in a machine of fairly limited intelligence.

Comments

Upvoted because I’m a fan of people summarizing and signal-boosting literature that bears on EA priorities. Disagree-voted because I’m not convinced that any of the observations or considerations put forward support the headline claim that only mammals and birds are sentient.

Also, I’m pretty sure that octopuses do play? A quick search appears to confirm this: “Octopuses like to play” (BBC).

I mention this in response to the part of the post that reads: “Other animals like fish, reptiles, and octopuses do not engage in sensation-seeking or play and so do not have those internal conscious experiences.”

(ETA: I see that in their closing section, OP acknowledges some uncertainty here, listing as a question for further investigation: “Is it really true that fish, shrimp, octopuses, and other animals of particular concern do not engage in sensation seeking or play?”)

EDIT: I don't think I had the right idea of what sensory play is. Sensory play seems to be a kind of exploratory play directed at things with novel or unusual sensory properties, like sand, bubbles, squishy things, different sounds, different smells, etc.

This and this, where fish are thrown into the water, come back, and are thrown again, also look like sensory play (unless I've misunderstood what Humphrey meant). But there might be other explanations, e.g. maybe the fish aren't coming back to be thrown again, but because they've been trained to, or because they want something else. It doesn't seem like something they'd have specifically evolved to be motivated by, given how unnatural it is.

 

There's also this study of ball-rolling in bumble bees that the authors conclude meets the criteria for play:

Here, we show that rolling of wooden balls by bumble bees, Bombus terrestris, fulfils behavioural criteria for animal play and is akin to play in other animals. We found that ball rolling (1) did not contribute to immediate survival strategies, (2) was intrinsically rewarding, (3) differed from functional behaviour in form, (4) was repeated but not stereotyped, and (5) was initiated under stress-free conditions.

 

Rethink Priorities collected some evidence of play behaviour across species. From their Welfare Range Table (EA Forum post):

No studies of carp could be found. An anecdotal observation of possible play was described in two other cyprinid species, the redeye (Scardinius erythrophthalmus) and the rudd (Leuciscus cephalus), showing that these fish returned over and over for the experience of being thrown out of the water by a human hand and the fish often competed vigorously to be the next one to be thrown (Burghardt, 2005). Burghardt (2005) provides a review of a large body of anecdotal evidence that suggests that play may exist in multiple species of teleost fish. However, further empirical studies involving controlled and systematic observation of fish play behaviors are needed and could follow up on the anecdotal observations outlined by Burghardt (2005). Importantly, the current lack of documented play behavior in fish may not indicate that fish do not play but rather that they are too uncomfortable in the typical housing we provide for them to engage in play (Fife-Cook & Franks, 2019). Thus, further research requires housing fish in environmental and social conditions conducive to a relaxed state (Fife-Cook & Franks, 2019).

 

It is common for juvenile and adult salmonids to jump into the air from the water, and this behaviour is highly relevant in salmonid net-pen culture and may be related to buoyancy regulation, parasitic infections, or stress (Fagen, 2017). However, Fagen (2017) has suggested that some jumping behavior seen in Atlantic salmon (Salmo salar) may represent a form of locomotor play but calls for additional research. Burghardt (2005) also reports on anecdotal observations of possible instances of locomotor play (not involving jumping) resembling adult redd-digging behavior in juvenile Coho salmon (Oncorhynchus kisutch). Burghardt (2005) provides a review of a large body of anecdotal evidence that suggests that play may exist in multiple species of teleost fish. However, further empirical studies involving controlled and systematic observation of fish play behaviors are needed and could follow up on the anecdotal observations outlined by Burghardt (2005). Importantly, the current lack of documented play behavior in fish may not indicate that fish do not play but rather that they are too uncomfortable in the typical housing we provide for them to engage in play (Fife-Cook & Franks, 2019). Thus, further research requires housing fish in environmental and social conditions conducive to a relaxed state (Fife-Cook & Franks, 2019).

 

Play behaviour has frequently been reported in octopuses. In captivity, octopuses are eager to explore inanimate objects (e.g., Lego, balls); they carry them around their aquarium tank and pass them from arm to arm (Kuba et al. 2003; Kuba & Byrne 2006). Giant Pacific octopuses, Enteroctopus dofleini, manipulate floating objects (e.g., plastic bottles) by squirting jets of water at the item, sending it to the far end of their aquarium and repeating the behaviour when the object floats back to them (Mather & Anderson, 1999). In the wild, different species have been observed collecting and manipulating different objects such as plastic and glass bottles (Mather 1994).

Humphrey spent a lot of time saying that authors like Peter Godfrey-Smith (whose book on octopus sociality and consciousness I have read, and also recommend) are wrong or not particularly serious when they argue that octopus behavior is play, because there are more mundane explanations for play-like behavior. I can't recall too much detail here because I no longer have Humphrey's book in my possession. In any case I think if you convinced him octopuses do play he would probably change his mind on octopuses without needing to modify any aspects of the overall theory. He'd just need to concede that the way consciousness developed in warm blooded creatures is not the only way it has developed in evolutionary history.

Actually, I have to correct my earlier reply. Iirc the argument is that all conscious animals engage in physical play, not necessarily that all playful animals are conscious. On the other hand, Humphrey does say that all animals engaging in pure sensation-seeking type play are conscious, so that's probably the sort of play he'd need to bring him around on octopuses.

I’m surprised how confident people are in their theories.

Other mammals and birds exhibiting the same behavior, having much the same neural circuitry, are probably engaging in that behavior for the same reason humans are–because they have internal conscious experiences

Since self-modelling, having feedback mechanisms for reinforcement learning, and having qualia all feel closely related to us, humans, it’s easy to think that “conscious experience” includes all three, and perceive evidence for some of the three as evidence for others. This seems invalid, unless the author does something like making an argument for how exactly having qualia has been evolutionary useful and what properties it’s connected to and then demonstrating related properties in some animals but not others. Does the author actually do that in the book?

No, the author is ultimately unclear on why qualia themselves are useful, but by reasoning about the case studies I listed, his argument that qualia are in fact related to recursive internal feedback loops is ultimately a bit stronger than just "these things all feel like the same things so they must be related".

Humphrey first argues through his case studies that activity in the neocortex seems to generate conscious experience, while activity in the midbrain does not. Further, midbrain activity is sophisticated and can do a lot of visual and other perceptual processing independent of the neocortex, so all of that seems to be dissociable from consciousness. What remains is the generation of sensations in the neocortex. From that we can understand sensations ([in certain parts of?] the neocortex) are separable from perceptions (in the midbrain). So while that doesn't tell us what consciousness is needed for, it does tell us what it is not used for, including at least some perception and midbrain processes like RL. The vulnerability in this argument IMO is mainly that self report about conscious experience might not be entirely reliable.

Then he advances his "ipsundrum" hypothesis about how recursive (or recurrent, perhaps) sensory feedback loops could have evolved and gives strong arguments about why this was evolutionarily useful. The argument includes the idea that recurrent sensory feedback allows complex integrative processes such as self reflection and theory of mind, and those have strong evolutionary advantages. The development of warm blood might have both facilitated those feedback loops by speeding up neural processing, and necessitated them by requiring a more sophisticated homeostatic apparatus. So an evolutionary feedback loop created the sensory feedback loop.

At this point, I guess we still can't be confident these processes are inseparable from consciousness, but they seem more closely related to consciousness than other clearly separable processes. That seems valuable for at least lowering our expectations of sentience in species that don't have the cognitive processes whose relationship to consciousness we haven't ruled out.

Thanks Ben for reviewing and sharing the post (and to Michael for your resources)–I did find it very interesting. But after reading it, giving it some thought, and reading the Aeon essay you recommended, I came away quite intellectually frustrated. I think whenever someone claims to have answered the hard problem, the more likely explanation is that they haven't understood it.

(Full disclosure of my confusion: I've all but become a strong dualist about consciousness after many, many years of being or identifying as a physicalist. This was a result of reading the philosophical literature, but also a lot of personal reflection on my own experiences through meditation practice, especially informed by Douglas Harding's Headless Way. I basically can't grok illusionist perspectives any more, try as I might[1], so readers might better read this comment as coming from a dualist loyal opposition, rather than a balanced assessment.)

Something I found particularly puzzling was that in the 3rd section of the essay (beginning 'Over the past 50 years...'), Humphrey seems to confuse mind-brain identity theory with panpsychism, which doesn't bode well for my expectation of his 'solving' of the hard problem. M-B-I to me is a strongly physicalist (if not eliminativist) position, while panpsychism is by necessity dualist. [2]

The D.B. case is also an interesting one–I don't see why it isn't plausible to imagine that the operation (and similarly the split-brain cases documented by Nagel) might lead to a second vestige of consciousness as cut off from you as that of your family, friends, and coworkers. Except this fragment doesn't have control of motor or speech functions–how horrible! It can only pass information on to the 'dominant' one.

Whether Humphrey is right about octopuses not playing is outside my domain of expertise, but initially I am sceptical. And again, just because we know that we (humans) are conscious in a certain way, why ought we to imagine that consciousness must only exist in this way? I'm not sure it follows, and there seem to be gaps like this in his arguments that actually cover up important parts of his case.

Two examples of the above to end off my comment:

  1. In the Aeon article Humphrey states that sensations (I think he means qualia) are ideas. I agree! And these ideas exist. It feels very hard to explain how they exist in a physicalist or reductionist story, and attempts to explain often fall to the 'Moorean' argument for phenomenal realism.[3]
  2. In Figure 2 he shows a simplified, 4-step diagram of how a brain might create an 'attractor state' of phenomenalisation. This is a clear story but seems to me to lead to eliminativist illusionism. The hard question is this, why is this attractor state experiencing phenomenality, and not just a p-Zombie? And it seems that, as all physicalist theories go, Humphrey has no answer to this.
  1. ^

    And so help me I have tried! I've directly read Dennett, Frankish, and Kammerer! But they all just seem so obviously wrong when they make clear claims; or annoyingly slippery and vague when they don't. Imo Chalmers blows them all away.

  2. ^

    Galen Strawson actually has an interesting argument that physicalism must collapse into pansychism.

  3. ^

    Kammerer reviews this argument here and states that it fails. But I just found it a convincing refutation of his own illusionism! I suppose one man's modus ponens really is another's modus tollens.

In the Aeon article Humphrey states that sensations (I think he means qualia) are ideas. I agree! And these ideas exist. It feels very hard to explain how they exist in a physicalist or reductionist story, and attempts to explain often fall to the 'Moorean' argument for phenomenal realism.

Couldn't they just be beliefs? There are various (physicalist+reductionst-compatible) accounts of belief here. (I'm not sure if that's what Humphrey intended, though.)

The hard question is this, why is this attractor state experiencing phenomenality, and not just a p-Zombie? And it seems that, as all physicalist theories go, Humphrey has no answer to this.

I think Humphrey is an illusionist and so would deny that it's experiencing phenomenality and that the hard problem needs to be solved (instead, illusionists dissolve it). However, a complete illusionist theory should explain how beliefs of phenomenality arise (e.g. how these attractor states cause these beliefs); that's the "hard problem" for illusionists. From what I'm reading in this post and comments, it seems he doesn't explain that. So, whether interpreted as realist or illusionist, it seems there's still an important gap.

Yeah, I'm not quite sure what Humphrey means by belief here (thanks for the link!). But then I don't really know what I mean by 'belief' to be honest! I'm not sure I can define it without evoking my own phenomenal perspective, whether directly (a flash of inspiration) or via thought and introspection (trying to update my current beliefs with new evidence)–and I think that'd already put me at odds with physicalists/illusionists.

I do think Humphrey is an illusionist (even if he doesn't like the term) and view him as somewhat adjacent to Frankish. I think that the 'meta-problem of consciousness' isn't quite what I'm hinting at (though it is a problem)–I'm taking my phenomenal experiences as true. Dualists (like myself) need to try to get them to accord with our understanding of the physical world, but I think illusionists need to explain why I'm experiencing anything at all rather than just reporting that I am. We probably have very different intuitions on this, but part of the reason I've become more 'dualist' over time is that I never had a good response to this criticism when I was a materialist/physicalist, so in the end I accepted it as a worthy criticism that disproved my original ideas.

Finally, I'll note this isn't the first time we've had a Forum discussion about consciousness[1] - maybe it's something we could explore in a dialogue, if you think that would be a valuable use of our time and potentially useful for those reading on the Forum? It definitely touches on a number of EA cause areas.

  1. ^

    And I've very much enjoyed learning from your perspective :)

I think illusionists need to explain why I'm experiencing anything at all rather than just reporting I am

We need to first decide what we mean by 'experience'. I think there are two broad approaches (interpretations) of illusionism, which I described here and which could give us two different broad characterizations of 'experience':

  1. In the first, beliefs (illusions) of phenomenality/mysteriousness/nonphysical essence/etc. themselves could be what distinguishes what's experienced from what's not experienced. These beliefs need not be verbalized (whether in inner speech or reported) and could be of a more intuitive kind, like Graziano's attention schema or Humphrey's ipsundrum are meant to capture.[1] They might just be representations or models "depicting" phenomenal properties. See also my footnote here on Graziano's Attention Schema Theory. So, these beliefs would explain why you're experiencing anything at all.
  2. In the second, the physical properties that dispose us to have such beliefs could be what distinguishes experiences. This could be a kind of placeholder, but I suspect Frankish and Dennett would say that any reactive patterns and discriminations count, at least to some minimal degree.[2] So, thermometers and bacteria could be minimally experiencing things, too, and that you're reacting and making any discriminations at all would explain why you're experiencing anything at all. Those with blindsight could still have visual experiences, but in a way that's not accessible for standard verbal report and possibly of a more simple/minimal kind.

I suspect there's no real fact of the matter which approach is "right", but I'm more inclined towards 1.

If you have something else in mind by 'experience', I could try to respond to that.

 

Finally, I'll note this isn't the first time we've had a Forum discussion about consciousness[1] - maybe it's something we could explore in a dialogue, if you think that would be a valuable use of our time and potentially useful for those reading on the Forum? It definitely touches on a number of EA cause areas.

I might be interested in having a (recorded) call, and then we can release it, the (edited) transcripts and/or notes. I spend way too long writing comments (including this one, and others on this post), so I think I shouldn't commit to a text-based discussion.

That being said, I'm not sure how useful this would be for other people, compared to them just reading writing by or listening to Graziano or Frankish. It was Graziano's papers (2021, 2022, some clarifications in 2020) that made illusionism click for me,[3] and I suspect I couldn't do a better job in explaining illusionism than to just linkpost or quote him, as well as Kammerer, 2022 (or just the short summary in Shabasson, 2021, section 9), which helps illustrate how the illusion could be so persistent.

I think the basic argument that convinced me roughly goes like this, based on Graziano (2021, 2022), and from a draft I wrote but never posted:

Our claims of conscious experience result from the depiction/representation of information processed in our brains as having properties we believe to be common to our conscious experiences, like phenomenality, subjectivity, qualitativeness or a nonphysical essence. There must be information in our brains depicting these properties, because without such information, we wouldn't consistently talk about these properties in the first place. Of course, maybe the information processing appears to have these properties precisely because it actually has these properties, and that's the realist position. However, the depiction itself and access to it by the systems necessary for belief formation would be enough, and that's the illusionist position. There's no need to posit the actual existence of these properties, and in my view, there's currently no plausible explanation for their actual existence.

However, some things may make me unusually likely to accept illusionism:

  1. I suspect my direct intuitions about physical phenomena and consciousness are relatively weak, and I'm unusually inclined towards abstraction, so I've found little to count against illusionism for me. That consciousness just seems phenomenal, and that red just seems qualitative, doesn't count very strongly against it for me.
  2. I have a very strong presumption in favour of physicalism,[4] and no non-illusionist physicalist theory seems to me to offer a serious attempt to solve the hard problem, so the best option seems to be to dissolve it; hence illusionism. It sounds like you went the other way, towards dualism, through your dissatisfaction with physicalist theories, and I'd guess Chalmers did, too.
  1. ^

    But might leave out too many details of how this actually works in humans and other animals to be very satisfying.

  2. ^

    E.g. Frankish on continuity here (54:00-57:37).

    Also Dennett (2019, p. 54):

    Dogs presumably do not think there is something it is like to be them, even if there is. It is not that a dog thinks there isn’t anything it is like to be a dog; the dog is not a theorist at all, and hence does not suffer from the theorists’ illusion. The hard problem and meta-problem are only problems for us humans, and mainly just for those of us humans who are particularly reflective. In other words, dogs aren’t bothered or botherable by problem intuitions. Dogs – and, for that matter, clams and ticks and bacteria – do enjoy (or at any rate benefit from) a sort of user illusion: they are equipped to discriminate and track only some of the properties in their environment.

    And Dennett thinks that chickens, octopuses and bees are definitely conscious.

  3. ^

    They were also the first explanations of illusionism I'd read. I haven't settled on Graziano's AST in particular, but it seems like a promising direction.

  4. ^

    Just generally.

    But also, other than panpsychism, where could we possibly draw a line for the presence and absence of the extra nonphysical property/properties? I can't imagine there being any plausible responses.

    Or, if panpsychist, how could these properties possibly combine in ways that correspond to what our brains are doing and our specific judgements? Maybe some kind of property dualism?

    I also can't imagine there being any plausible account of how the nonphysical affects the physical (or else we would already have identified it and adopted it into our physical ontology), so I'd be stuck with epiphenomenalism.

    So, if not an illusionist, I'd have to be a (property dualist) epiphenomenalist, and it seems there would be no way to empirically distinguish such accounts from their illusionist counterparts, which just drop the nonphysical stuff. And whether or not there are different ethical implications between them, I can't imagine them being that decisive in practice. The difference just doesn't seem that interesting anymore, but I favour the metaphysically more parsimonious illusionism.

    FWIW, I haven't read much Chalmers, and I learned about property dualism after illusionism already became intuitive to me.

I tend to think that questions about which organisms or systems are conscious mostly depend on identifying the physical correlates of consciousness and understanding how they work as a system, and that questions about panpsychism, illusionism, eliminativism, or even Chalmers' Hard Problem don't bear on this question very much. I think there's probably still a place for that philosophical debate because (1) there might be implications about where to look for the physical systems and (2) as I said to Michael earlier, illusionism might change our perspective on whether we assign special moral value to conscious experience at all. But I think (1) is marginal, and (2) is sort of a long shot.

In contrast, I think empirical and scientific investigation can help us understand a lot about which systems are conscious, and about what sort of conscious experiences they have, so I think most morally cruxy questions of consciousness are scientific and empirical.

Consequently, I wasn't too bothered by Humphrey side-stepping this issue, although I basically agree he did, because he offered solid theory and empirical investigation that suggests further empirical tests that might help us make progress on understanding consciousness in animals and other systems.

The D.B. case is also an interesting one - I don't see why it isn't plausible to imagine that the operation (and similarly split-brain cases, as documented by Nagel) might lead to a second vestige of consciousness as cut off from you as that of your family, friends, and coworkers. Except this fragment doesn't have control of motor or speech functions, how horrible! It can only pass information on to the 'dominant' one.

That was my reaction when I first read about split-brain patients. I now doubt it's all that horrible. First, there's been plenty of research on split-brain patients, and I don't think anyone has discovered signs of distress from split halves that are cut off from speech expression; those halves do have other ways of communicating, e.g., through signs. Second, in humans, much of distress is governed physiologically, so (1) we would be able to detect physiological signs of stress, but more importantly (2) even if there's a conscious half of a split brain which can't express itself, its mood-state might be normal because it shares a body with the other half, so the two jointly set mood, and the system overall might not be in distress. Finally, even if consciousness isn't illusory, conscious will often is; far more of our decisions are determined subconsciously than we think, and if the illusion still holds, the loss of conscious control might not even be perceived.

I do think blindsight is pretty compelling evidence for conscious visual sensation in species with regular sight and blindsight, as well as important evidence against conscious visual sensation in fish, frogs and reptiles, but I'm not sure what to make of it overall.

From Humphrey's Aeon article:

In mammals, there are two main pathways from the eye to the brain: an evolutionarily ancient one – the descendant of the visual system used by fish, frogs and reptiles – that goes to the optic tectum in the mid-brain, and a newer one that goes up to the cortex. In Helen, the older visual system was still intact. If a frog can see using the optic tectum, why not Helen?

So, fish, frogs and reptiles rely on a visual system that exists in humans (and other mammals and birds) that is not enough to generate conscious visual sensation in humans. The primary visual cortex seems necessary for conscious visual perception in mammals, and birds seem to have a functional analogue. (Are there studies of blindsight in birds?) On the other hand, fish, frogs and reptiles seem not to have any analogue. So, whatever functions or processes are necessary for conscious visual sensation in humans (and other mammals and birds) don't seem to be realized in fish, frogs and reptiles.

However, I can imagine some possibilities that could undermine this argument:

  1. The ancient system could have functions in fish, frogs and reptiles that the primary visual cortex in mammals has taken on, in a kind of evolutionary migration of some functions.
  2. Whatever happens in the ancient system is not available in the "global workspace" in humans and so doesn't result in conscious visual sensation, but it could be in fish, frogs and/or reptiles. It might be that the primary visual cortex and the cortex in general added extra layers before the global workspace and executive function, and there was no need for the ancient system to keep feeding directly into the global workspace in mammals and birds. So those connections were lost or replaced with connections that run through the primary visual cortex, which are lost or inactive in blindsight.
    1. On the other hand, maybe fish, frogs and reptiles don't have any global workspace at all, either. Nieder, 2022 writes "In contrast, reptiles and amphibians show no sign of either working memory or volitional attention. Surprisingly, some species of teleost fishes exhibit elementary working memory and voluntary attention effects suggestive of possibly rudimentary forms of subjective experience. With the potential exception of honeybees, evidence for conscious processing is lacking in invertebrates." That being said, I don't think he cites any negative results, so this is not evidence of absence, just absence of evidence. And the fact that some fish seem to have working memory and voluntary attention suggests those fish do have something like a global workspace, despite no cortex and these functions being realized in the human cortex.
  3. Blindsighted humans do have conscious visual sensation, just not accessible for report.

 

Also related is Mason, G., & J. Michelle Lavery. (2022). What Is It Like to Be a Bass? Red Herrings, Fish Pain and the Study of Animal Sentience. Frontiers in Veterinary Science, 9. https://doi.org/10.3389/fvets.2022.788289

From the abstract:

After reviewing key consciousness concepts, we identify “red herring” measures that should not be used to infer sentience because also present in non-sentient organisms, notably those lacking nervous systems, like plants and protozoa (P); spines disconnected from brains (S); decerebrate mammals and birds (D); and humans in unaware states (U). These “S.P.U.D. subjects” can show approach/withdrawal; react with apparent emotion; change their reactivity with food deprivation or analgesia; discriminate between stimuli; display Pavlovian learning, including some forms of trace conditioning; and even learn simple instrumental responses. Consequently, none of these responses are good indicators of sentience. Potentially more valid are aspects of working memory, operant conditioning, the self-report of state, and forms of higher order cognition. We suggest new experiments on humans to test these hypotheses, as well as modifications to tests for “mental time travel” and self-awareness (e.g., mirror self-recognition) that could allow these to now probe sentience (since currently they reflect perceptual rather than evaluative, affective aspects of consciousness). Because “bullet-proof” neurological and behavioral indicators of sentience are thus still lacking, agnosticism about fish sentience remains widespread.

Humphrey's argument that fish aren't conscious doesn't rest only on their not having the requisite brain structures, because, as you say, it is possible consciousness could have developed in their own structures in ways that are simply distinct from our own. But then, Humphrey would ask, if they have visual sensations, why are they uninterested in play? When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you've learned to leverage your visual sensations to accomplish objectives. It seems odd that an organism that can learn (as almost all can) would evolve visual sensations but not a propensity to play in a way that helps it learn about those sensations.

Perhaps fish just don't benefit from learning more about their visual sensations. The sensations are adaptive, but learning about them confers no additional adaptive advantage. That seems a stretch to me, because it's hard for me to imagine sensations being adaptive without learning and experimenting with them conferring additional advantage.

You could also respond by citing examples where fish can play and are motivated to sensation-seek, as you already have, and I think if Humphrey believes your examples he would find them persuasive evidence about those organisms' consciousness.

Does he spell out more why it's useful to learn more about your own sensations? Also, couldn't this apply to any perception that feeds into executive functions/cognitive control, conscious or not?

What if sensory play is just very species-specific? Do the juveniles of every mammal and bird species play? Would he think the species without play aren't conscious, even if they have basically the same sensory neural structures?

A motivation to engage in (sensory) play has resource costs. Playing uses energy and time, and it takes energy to build the structures responsible for the motivation to play. And the motivation could be risky without a safe environment, e.g. away from predators or protection by parents and with enough food. Fish larvae don't seem to get such safe environments.

I guess a thesis he's stated elsewhere is that it's the function of consciousness to matter. This is the adaptive belief it causes. So, conscious sensations should just be interesting to animals with them, and maybe without that interest, there's no benefit to conscious sensation. This doesn’t seem crazy to me, and it seems pretty plausible with my sympathies to illusionism. Consciousness illusions should be adaptive in some way.

But, this only tells me about conscious sensation. Animals without conscious sensation could still have conscious pleasure, unpleasantness and desires, which realize the mattering and interest. And animals don't engage in play to explore unpleasantness and aversive desire. So what are the benefits of unpleasantness and aversive desire being conscious as opposed to unconscious? And could there be similar benefits for conscious sensation? If there are, then sensory play may not be (evolutionarily) necessary for consciousness in general or conscious sensation in particular after all.

To me "conscious pleasure" without conscious sensation almost sounds like "the sound of one hand clapping". Can you have pure joy unconnected to a particular sensation? Maybe, but I'm sceptical. First, the closest I can imagine is calm joyful moments during meditation, or drug-induced euphoria, but in both cases I think it's at least plausible there are associated sensations. Second, to me, even the purest moments of simple joy seem to be sensations in themselves, and I don't know if there's any conscious experience without sensations.

Humphrey theorises that the evolutionary impulse for conscious sensations includes (1) the development of a sense of self, (2) which in turn allows for a sense of other, and theory of mind. He thinks that mere unconscious perception can't be reasoned about or used to model others because, being unconscious, it is inaccessible to the global workspace for that kind of use. In contrast, conscious sensations are accessible in the global workspace and can be used to imagine the past, the future, or what others are experiencing. The cognitive and sensory empathy this allows can enable an organism to behave socially, to engage in deceit or control, to more effectively care for another, to anticipate what a predator can and can't see, etc.

I would add that conscious sensation allows for more abstract processing of sensations, which enables tool use and other complex planning like long term planning in order to get the future self more pleasurable sensations. Humphrey doesn't talk about that much, perhaps because it's only a small subset of conscious species that have been observed doing those things, so perhaps mere consciousness isn't sufficient to engage in them (some would argue you need language to do good long term planning and complex abstraction).

Humphrey believes that mammals in general do engage in play, which he thinks all (but not only) conscious animals do, and that they also engage in sensation-seeking (e.g. sliding down slopes or moving fast through the air for no reason), which he thinks only (but not all) conscious animals do. And he'd say the same thing about birds, and the fact that those behaviors' distribution over species lines up nicely with the species with neural structures he thinks generates consciousness he treats as additional confirmation of his theory.

Animals do engage in play with unpleasant experiences, e.g., playfighting can include moderately unpleasant sensations. I suppose the benefit of those experiences being conscious might be to form more sophisticated strategies for avoiding them in future. It isn't that Humphrey thinks play is necessary for consciousness to emerge; it's that he thinks all conscious animals are motivated to engage in play.

I feel this last answer maybe hasn't answered all your questions but I was a bit confused by your last paragraph, which might have arisen out of an understandable misunderstanding of the claim about consciousness and play.

Thanks, this is helpful!

Can you have pure joy unconnected to a particular sensation? Maybe, but I'm sceptical. First, the closest I can imagine is calm joyful moments during meditation, or drug-induced euphoria, but in both cases I think it's at least plausible there are associated sensations. Second, to me, even the purest moments of simple joy seem to be sensations in themselves, and I don't know if there's any conscious experience without sensations.

I would say thinking of something funny is often pleasurable. Similarly, thinking of something sad can be unpleasant. And this thinking can just be inner speech (rather than visual imagination). Inner speech is of course sensory, but it's not the sensations of the inner speech that cause the pleasure; instead, it's your high-level interpretation of the meaning. (There might still be other subtle sensations associated with pleasure, e.g. from changes to your heart rate, body temperature, facial muscles, or even simulated smiling.)

Also, people can just be in good or bad moods, which could be pleasant and unpleasant, respectively, but not really consistently simultaneous with any particular sensations.

 

I would add that conscious sensation allows for more abstract processing of sensations, which enables tool use and other complex planning like long term planning in order to get the future self more pleasurable sensations. Humphrey doesn't talk about that much, perhaps because it's only a small subset of conscious species that have been observed doing those things, so perhaps mere consciousness isn't sufficient to engage in them (some would argue you need language to do good long term planning and complex abstraction).

Maybe some other potential capacities that seem widespread among mammals and birds (and not really investigated much in others?) that could make use of conscious sensation (and conscious pleasure and unpleasantness):

  1. episodic(-like) memory (although it's not clear this is consciously experienced in other animals)
  2. working memory
  3. voluntary attention control
  4. short-term planning (which benefits from the above)

FWIW, mammals seem able to discriminate anxiety-like states from other states.[1]

Animals do engage in play with unpleasant experiences, e.g., playfighting can include moderately unpleasant sensations.

I don't think they are motivated to explore things they find unpleasant or aversive, or unpleasantness or aversion themselves. Rather, it just happens sometimes when they're engaging in the things they are motivated to do for other reasons.

I suppose the benefits of those experiences being conscious might be to form more sophisticated strategies of avoiding them in future.

Ya, this seems plausible to me. But this also seems like the thing that's more morally important to look into directly. Maybe frogs' vision is blindsight, their touch and hearing are unconscious, etc., so they aren't motivated to engage in sensory play, but they might still benefit from conscious unpleasantness and aversion for more sophisticated strategies to avoid them. And they might still benefit from conscious pleasure for more sophisticated strategies to pursue pleasure. The conscious pleasure, unpleasantness and desire seem far more important than conscious sensations.

  1. ^

    Carey and Fry (1995) show that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Similar experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, in section 4.d., starting on p.81. Rats generalized from hangover to morphine withdrawal and jetlag, from high doses of cocaine to movement restriction, from an anxiety-inducing drug to aggressive defeat and predator cues. Of course, anxiety has physical symptoms, so maybe this is what they're discriminating, not the negative affect or aversive desire, although non-anxiolytic anticonvulsants didn’t block the effects, so convulsions in particular seem unlikely to explain the difference.

 

I would say thinking of something funny is often pleasurable. Similarly, thinking of something sad can be unpleasant. And this thinking can just be inner speech (rather than visual imagination)....Also, people can just be in good or bad moods, which could be pleasant and unpleasant, respectively, but not really consistently simultaneous with any particular sensations.

 

I think most of those things actually can be reduced to sensations; moods can't be, but then, are moods consciously experienced, or do they only predispose us to interpret conscious experiences more positively or negatively?

(Edit: another set of sensations you might overlook when you think about conscious experience of mood are your bodily sensations: heart rate, skin conductivity, etc.)

But this also seems like the thing that's more morally important to look into directly. Maybe frogs' vision is blindsight, their touch and hearing are unconscious, etc., so they aren't motivated to engage in sensory play, but they might still benefit from conscious unpleasantness and aversion for more sophisticated strategies to avoid them. And they might still benefit from conscious pleasure for more sophisticated strategies to pursue pleasure.

They "might" do, sure, but what's your expectation they in fact will experience conscious pleasantness devoid of sensations? High enough to not write it off entirely, to make it worthwhile to experiment on, and to be cautious about how we treat those organisms in the meantime--sure. I think we can agree on that. 

But perhaps we've reached a sort of crux here: is it possible, or probable, that organisms could experience conscious pleasure or pain without conscious sensation? It seems like a worthwhile question. After reading Humphrey I feel like it's certainly possible, but I'd give it maybe around 0.35 probability. As I said in OP, I would value more research in this area to try to give us more certainty. 

If your probability that conscious pleasure and pain can exist without conscious sensation is, say, over 0.8 or so, I'd be curious about what leads you to believe that with confidence.

I think most of those things actually can be reduced to sensations

What do you mean by "reduced to"? It's tricky to avoid confounding here, because we're constantly aware of sensations and our experiences of pleasure and unpleasantness seem typically associated with sensations. But I would guess that pleasure and unpleasantness aren't always because of the conscious sensations, but these can have the same unconscious perceptions as a common cause.

Apparently even conscious physical pain affect (unpleasantness) can occur without pain sensation, but this is not normal and recorded cases seem to be the result of brain damage (Ploner et al., 1999, Uhelski et al., 2012).

moods can't be, but then, are moods consciously experienced, or do they only predispose us to interpret conscious experiences more positively or negatively?

I'm not sure, and that's a great question! Seems pretty likely these are just dispositions. I was also thinking of separation anxiety as an unpleasant experience with no specific sensations in other animals (assuming they can't imagine their parents, when they are away), but this could just be more like a mood that disposes them to interpret their perceptions or sensations more negatively/threatening.

 

They "might" do, sure, but what's your expectation they in fact will experience conscious pleasantness devoid of sensations? (...) If your probability that conscious pleasure and pain can exist without conscious sensation is, say, over 0.8 or so, I'd be curious about what leads you to believe that with confidence.

Thanks for pushing on this. There are multiple standards at which I could answer this, and it would depend on what I (or we) want "conscious" to mean.

With relatively high standards for consciousness like Humphrey seems to be using, or something else at least as strict as having a robust global workspace (with some standard executive functions, like working memory or voluntary attention control), I'd assign maybe 70%-95% probability to the in principle possibility based on introspection, studies of pain affect without pain sensation, and imagining direct stimulation of pleasure systems, or with drugs or meditation. However, I'd be very surprised (<15%) if there's any species with conscious pleasure or unpleasantness without the species generally also having conscious sensations. It doesn't seem useful for an animal to be conscious of pleasure or unpleasantness without also being conscious of their causes, which seems to require conscious sensation. Plus, whatever mechanisms are necessary for consciousness per se could be used for both perceptions/sensations and pleasure.

With low standards, e.g. a sensation is a perception + a belief that the perception matters, and pleasure is a positive judgement (as a belief), and low standards for what counts as a belief, I'd be less confident either way for both the in principle and in practice questions. I'd mostly have in mind similar intuitions, arguments and other evidence as above, but the evidence just seems weaker and less reliable here. But I'd also be more confident that frogs, fish and invertebrates have conscious pleasure and unpleasantness and conscious sensations.

You could also mix low standards for one but high standards for the other, but I'd give these possibilities less weight.

But I would guess that pleasure and unpleasantness aren't always because of the conscious sensations, but these can have the same unconscious perceptions as a common cause.

This sounds right. My claim is that there are all sorts of unconscious perceptions and valenced processing going on in the brain, but all of that is only experienced consciously once there's a certain kind of recurrent cortical processing of the signal, which can loosely be described as "sensation". I mean that very loosely; it can even include memories of physical events or semantic thought (which you might understand as a sort of recall of auditory processing). Without that recurrent cortical processing modeling the reward and learning process, all that midbrain dopaminergic activity probably does not get consciously perceived. Perhaps it does, indirectly, when the dopaminergic activity (or lack thereof) influences the sorts of sensations you have.

But I'm getting really speculative here. I'm an empiricist and my main contention is that there's a live issue with unknowns and researchers should figure out what sort of empirical tests might resolve some of these questions, and then collect data to test all this out.

Thanks for your summary!

I'll admit I didn't really follow the section 'sensation, sentition, and the ipsundrum', but the rest of it seems like very weak evidence, if any, for the theory.

To pick one example: Why should I think sensory play is a necessary condition for sentience?

You could imagine a species which had all the neural architecture mammals & birds have, but had no limbs. I think we wouldn't observe it 'playing,' but I think Humphrey's theory still implies it's sentient.

I've tried to condense a book-length presentation into a 10 minute read and I probably have made some bad choices about which parts to leave out.

It's not that sensory play is necessary for producing sentience. The claim is that any animal that is sentient would be motivated to play. There might be other motivations for play that are not sentience, but all sentient creatures (so the argument goes) would want to play in order to explore and learn about the properties of their own sensory world.

For the limbless species you mentioned, if we imagine a radical scenario like a mammal that evolves into an entirely static plant-like existence, but for some reason doesn't lose its now-evolutionarily-useless capacity for sentience, I suppose I imagine that it would play if it could, but does not because it can't. So perhaps you could rescue Humphrey's assertion about play by modifying it to "any sentient animal will be motivated to play and will play to the extent they are physically able to do so".

The theory depends on humans' shared ancestry and neurophysiology with other animals. Conditional on shared ancestry and neurophysiology, we are able to make some tentative inferences about animal experience from our own experiences. Without that shared ancestry, I think he would be far too far out on a limb (heh).

That makes sense — I appreciate you doing that work & making calls about what to include; I bet there's a lot I'm missing!!

Ah, I wrote & meant 'a necessary condition for' — I hadn't misunderstood the argument in the way you're worried about in your second paragraph (but perhaps a useful clarification for anyone reading!)

My problem is I don't buy that 'any animal that is sentient would be motivated to play' — and ultimately I think the additional explanation you've provided here, about shared ancestry and neurophysiology, is interesting & relevant to think about re: which if any animals are sentient, but I think it just boils down to:

  1. Humans are sentient
  2. Humans have a shared ancestry and neurophysiology with other animals
  3. ?(Sentience depends on neurophysiology)? [fn]

C. Other animals are likely to be sentient

C2. Other animals are more likely to be sentient in accordance with the extent to which they share human ancestry and neurophysiology.

This argument, while IMO important/pretty compelling as a reason to start off with some moderate credence on animal sentience, doesn't do that much, and certainly couldn't, on its own, convince me of any necessary conditions for sentience — certainly not sensory play.

It also doesn't do anything to convince me that non-bird non-mammals are sufficiently different (in terms of shared ancestry and neurophysiology) from humans, such that we should think they're not sentient.

[fn] I'm unsure from your summary if Humphrey means to claim this or not, sorry!

To give a concrete example, my infant daughter can spend hours bashing her toy keyboard with 5 keys. It makes a sound every time. She knows she isn't getting any food, sleep, or any other primary reinforcer to do this. But she gets the sensations of seeing the keys light up and a cheerful voice sounding from the keyboard's speaker each time she hits it. I suppose the primary reinforcer just is the cheery voice and the keys lighting up (she seems to be drawn to light--light bulbs, screens, etc). 

During this activity, she's playing, but also learning about cause and effect--about the reliability of the keys reacting to her touch, about what kind of touch causes the reaction, and how she can fine-tune and hone her touch to get the desired effect. I think we can agree that many of these things are transferable skills that will help her in all sorts of things in life over the next few years and beyond?

I'm sort of conflating two things that Humphrey describes separately: sensory play and sensation seeking. In this example it's hard to separate the two. But Humphrey ties them both to consciousness, and perhaps there's still something we can learn from an activity that combines the two.

In this case, the benefits of play are clear, and I guess the further premise is that consciousness adds additional motivation for sensory play because, e.g., it makes things like seeing lights, hearing cheery voices much more vivid and hence reinforcing, and allows the incorporation of those things with other systems that enable action planning about how to get the reinforcers again, which makes play more useful.

I agree this argument is pretty weak, because we can all agree that even the most basic lifeforms can do things like approach or avoid light. Humphrey's argument is something like the particular neurophysiology that generates consciousness also provides the motivation and ability for play. I think I have said about as much as I can to repeat the argument and you'd have to go directly to Humphrey's own writing for a better understanding of it!

Yes I see that is a reasonable thing to not be convinced about and I am not sure I can do justice to the full argument here. I don't have the book with me, so anything else I tell you is pulling from memory and strongly prone to error. Elsewhere in this comments section I said

When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you've learned to leverage your visual sensations to accomplish objectives. It seems odd that an organism that can learn (as almost all can) would evolve visual sensations but not a propensity to play in a way that helps them learn about those sensations.

And

Humphrey theorises that the evolutionary impulse for conscious sensations includes (1) the development of a sense of self, (2) which in turn allows for a sense of other, and theory of mind. He thinks that mere unconscious perception can't be reasoned about or used to model others because, being unconscious, it is inaccessible by the global workspace for that kind of use. In contrast, conscious sensations are accessible in the global workspace and can be used to imagine the past, the future, or what others are experiencing. The cognitive and sensory empathy this allows can enable an organism to behave socially, to engage in deceit or control, to more effectively care for another, to anticipate what a predator can and can't see, etc.

I believe the idea is something like sentience enables a lot more opportunity to learn about the world, and learning opportunities can be obtained through play. Not taking those opportunities if you're able is sort of like leaving free adaptive money on the table.

(Somewhat tangential.)

Does Humphrey discuss his theory as an illusionist one in the book? My understanding is that he's an illusionist and the theory he's been working on is illusionist.[1] That seems like a pretty important part of his theory, but it might not be that important for his claim that only mammals and birds are conscious.

(FWIW, I think illusionism about consciousness is probably correct.)

It seems there are two broad (moral?) interpretations of illusionism (e.g. Frankish, 2021):

  1. To be conscious, a physical system has to actually believe in the mysteriousness (or importance/mattering?) of what it's processing. In other words, it would have to actually be subject to illusions of phenomenal consciousness.
  2. To be conscious, if the right kind of system[2] were connected to the original system in the right way, that system would have to believe in (and report) the mysteriousness (or importance/mattering?) of what the combined system is processing.

1 implies 2, and it seems fewer systems could meet 1 than 2.

It seemed like Humphrey endorses something like 1. Graziano's (illusionist) Attention Schema Theory seems between 1 and 2,[3] and he (2022) wrote "the components of what we call consciousness may be present in some form in a huge range of animals, including mammals, birds, and many nonavian reptiles"[4], although I'm not aware of him specifically denying consciousness to other animals. Related to this, and while not specifically illusionist, Key, Brown and Zalucki argue that molluscs (including octopuses), insects and fish don’t have internal state prediction networks for their own pain, i.e. they don’t model their own pain. Key (2014, 2016) argues that fish lack long-range feedback connections for pain processing (perhaps between certain structures specifically) and that the pain pathway is feedforward.

On the other hand, Frankish (2023, 2022, 2021) endorses 2. I'd guess Dennett endorses 2 (or neither?), because he's confident in octopus and bee consciousness, but I'm not sure.

Although Frankish endorses 2 anyway, I suspect he's too skeptical of other animals meeting something like 1, setting the bar too high for introspection and/or the kinds of beliefs that are required. He has a whole talk titled "Why We Can Know What It’s Like To Be a Bat and Bats Can’t". Dennett might also set the bar too high; see Graziano's response to him.[3]

I also lean towards 1, but possibly under a slightly different interpretation: I suspect the system just has to believe something matters. I might also have a low bar for what could count as a belief that something matters, but this seems vague. I think humans can believe things without stating their beliefs (in inner speech or externally, see Malcolm, 1973, and/or sections 1 and 4 of Schwitzgebel, 2019), and if that's the case, it seems hard to justify the claim that insects, say, very likely don't believe anything matters.

On the other hand, then we might end up having to recognize that humans often have (active) beliefs that something matters that we don't typically recognize ourselves as being conscious of. And we might end up with a basically panpsychist (but possibly gradualist) view.

  1. ^

    Humphrey (2017) wrote, after contrasting realism and illusionism:

    Still, which is right? No one yet knows for sure. But I’m not hiding which I hope is right. Although I myself have recently questioned the language of illusionism (Humphrey 2016b), I hope to see a resolution of the “hard problem” within the bounds of our standard world model.

    Also, see this interview.

    (FWIW, Graziano (2016), also an illusionist, wrote: "I confess that I baulk at the term ‘illusionism’ because I think it miscommunicates", and elaborates on this.)

  2. ^

    Presumably with some constraints on what the system can do.

  3. ^

    The attention schema could itself be the beliefs and include the illusions of consciousness.

    Graziano (2020a) wrote:

    Therefore, in AST, just as animals “know” about their own bodies in some deep intuitive sense via their body schemas, they also “know” about a subjective experience inside of them (a detail-poor depiction of their attentional state) via an attention schema. They may, however, lack higher cognitive levels of reflection on those deeper models.

    Dennett (2020) suggests that only humans need an attention schema and that dogs do not. I think perhaps the difference in opinion here relates to higher level and lower level models. Humans undoubtedly have layers of higher cognitive models, myths and beliefs and cultural baggage. Much of the ghost mythology that we discussed in our target article (Graziano et al., 2020) is presumably unique to humans, exactly as Dennett suggests. But in AST, many of these human beliefs stem from, or are cultural elaborations of, a deeper model that is built into us and many other animals – an intrinsic model of attention.

    He uses quotes around the word 'know', so he might not mean these count as beliefs. Graziano (2020b) also wrote the following, which contrasts the attention schema ("automatic self-model (...)") with our beliefs:

    Suppose the machine has no attention, and no attention schema either. But it does have a self-model, and the self-model richly depicts a subtle, powerful, nonphysical essence, with all the properties we humans attribute to consciousness. Now we plug in the speech engine. Does the machine claim to have consciousness? Yes. The machine knows only what it knows. It is constrained by its own internal information.

    AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.

  4. ^

    Before that, Graziano (2020a) wrote:

    Any creature that can endogenously direct attention must have some kind of attention schema, and good control of attention has been demonstrated in a range of animals including mammals and birds (e.g., Desimone & Duncan, 1995; Knudsen, 2018; Moore & Zirnsak, 2017). My guess is that most mammals and birds have some version of an attention schema that serves an essentially similar function, and contains some of the same information, as ours does.

Not absolutely sure I'm afraid. I lent my copy of the book out to a colleague so I can't check.

Humphrey mentioned illusionism (page 80, according to Google Books), but iirc he doesn't actually say his view is an illusionist one.

Personally I can't stand the label "illusionism" because to me the label suggests we falsely believe we have qualia, and actually have no such thing at all! But your definition is maybe much more mundane--there, the illusion is merely that consciousness is mysterious or important or matters. I wish the literature could use labels that are more specific.

And it seems like the version matters a great deal too. If consciousness really is an illusion, and none of us really have qualia--we're all p-zombies programmed to believe we aren't--then I have a hard time understanding the point of altruism or anything more than instrumental morality. But if we're just talking about an illusion that consciousness is a mysterious otherworldly thing, and somehow there really are qualia, then altruism feels like a meaningful life project to adopt.

On the whole, having read Humphrey's book, I don't think he explicitly said he was an illusionist. But perhaps his theory suggests it; I'm not sure. He didn't really explain why exactly he thought, a priori, we should expect sensorimotor feedback loops to generate consciousness, just that they seem to do so empirically. Perhaps he cleverly sidestepped the issue. I think his theory could make sense whether you are an illusionist or not.

Personally I can't stand the label "illusionism" because to me the label suggests we falsely believe we have qualia, and actually have no such thing at all!

I think this is technically accurate, but illusionists don't deny the existence of consciousness or claim that consciousness is an illusion; they deny the existence of phenomenal consciousness and qualia as typically characterized[1], and claim their appearances are illusions. Even Frankish, an illusionist, uses "what-it-is-likeness" in describing consciousness (e.g. "Why We Can Know What It’s Like To Be a Bat and Bats Can’t"), but thinks that should be formalized and understood in non-phenomenal (and instead physical-functional) terms, not as standard qualia.

The problem is that (classic) qualia and phenomenality have become understood as synonymous with consciousness, so denying them sounds like denying consciousness, which seems crazy.

 

Perhaps if consciousness really is an illusion, and none of us really have qualia--we're all p-zombies programmed to believe we aren't--I have a hard time understanding the point of altruism or anything more than instrumental morality.

Kammerer, 2019 might be of interest. On accounting for the badness of pain, he writes:

The best option here for the illusionist would probably be to draw inspiration from desire-satisfaction views of well-being (Brandt 1979; Heathwood 2006) or from attitudinal theories of valenced states (Feldman 2002), and to say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike. After all, when I am in pain, there is something awful which is that I want it to stop (and that my desire is frustrated); alternatively, one could insist on the fact that what is bad is that I dislike my pain. This frustration or this dislike are what makes pain a harm, which in turn grounds its negative value. This might be the most promising lead to an account of what makes pain bad.

This approach is also roughly what I'd go with. That being said, I'm a moral antirealist, and I think you can't actually ground value stance-independently.

 

He didn't really explain why exactly he thought, a priori, we should expect sensorimotor feedback loops would generate consciousness, just that they seem to do so empirically. Perhaps he cleverly sidestepped the issue. I think his theory could make sense whether you are an illusionist or not.

Makes sense.

  1. ^

    "Classic qualia: Introspectable qualitative properties of experience that are intrinsic, ineffable, and subjective." (Frankish (video))

    I think this is basically the standard definition of 'qualia', but Frankish adds 'classic' to distinguish it from Nagel's 'what-it-is-likeness'.

say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike

I'm curious how, excluding phenomenal definitions, you (or he) would define "frustration of a desire" or "negative attitude of dislike", because I wonder whether these would include extremely simple frustrations, like preventing a computer-generated character in a computer game from reaching its goal. We could program an algorithm to try to satisfy a desire ("navigate through a maze to get to the goal square") and then prevent it from doing so, or even add additional cruelty by giving it an expectation that it is about to reach its goal and then preventing it.

I share your moral antirealism, but I don't think I could be convinced to care about preventing frustration of that sort of simple desire. It's the qualia-laden desire that seems to matter to me, but that might be irrational if it turns out qualia is an illusion. I think within antirealism it still makes sense to avoid certain stances if they involve arbitrary inconsistencies. So if not qualia, I wonder what meaningful difference there is between a StarCraft AI's frustrated desires and a human's.
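To make the toy example in my earlier comment concrete, here's a minimal sketch. Every name and mechanism here is a hypothetical illustration of mine, not anyone's actual theory of welfare: the agent "desires" a goal square, forms an "expectation" when a path to it exists, and is "frustrated" when the path is subsequently walled off.

```python
# Toy illustration of a "simple frustrated desire": an agent that expects to
# reach a goal square, then has that path blocked. Hypothetical names only.
from collections import deque

def path_exists(grid, start, goal):
    """Breadth-first search over open cells ('.')."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

class MazeAgent:
    def __init__(self, grid, start, goal):
        self.grid, self.start, self.goal = grid, start, goal
        # "Expectation": at creation time, the agent can reach its goal.
        self.expects_success = path_exists(grid, start, goal)

    def frustrated(self):
        # "Frustration": it expected to succeed, but the desire is now blocked.
        return self.expects_success and not path_exists(self.grid, self.start, self.goal)

grid = [list("...."), list("...."), list("....")]
agent = MazeAgent(grid, (0, 0), (2, 3))
# The "cruelty" step: wall off the goal after the expectation has formed.
for r in range(3):
    grid[r][2] = "#"
print(agent.frustrated())  # True: a "frustrated desire" in the thinnest sense
```

The point of the sketch is how little it takes to satisfy a purely functional reading of "frustrated desire": a stored goal, a prediction, and a mismatch. Whether that thin structure deserves any moral weight is exactly the question at issue.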

I think illusionists haven't worked out the precise details, and that's more the domain of cognitive neuroscience. I think most illusionists take a gradualist approach,[1] and would say it can be more or less the case that a system experiences states worth describing like "frustration of a desire" or "negative attitude of a dislike". And we can assign more moral weight the more true it seems.[2]

We can ask about:

  1. how the states affect them in lowish-order ways, e.g. negative valence changes our motivations (motivational anhedonia), biases our interpretations of stimuli and attention, has various physiological effects that we experience (or at least the specific negative emotional states do; they may differ by emotional state),
  2. what kinds of beliefs they have about these states (or the objects of the states, e.g. the things they desire), to what extent they're worth describing as beliefs, and the effects of these beliefs,
  3. how else they're aware of these states and in what relation to other concepts (e.g. a self-narrative), to what extent that's worth describing as (that type of) awareness, and the effects of this awareness.
  1. ^

    Tomasik (2014-2017, various other writings here), Muehlhauser, 2017 (sections 2.3.2 and 6.7), Frankish (2023, 51:00-1:02:25), Dennett (Rothman, 2017; 2018, p. 168-169; 2019; 2021, 1:16:30-1:18:00), Dung (2022) and Wilterson and Graziano, 2021.

  2. ^

    This is separate from their intensity or strength.

Thanks Michael. For readers who are confused by my post but still want to know more, consider just reading (2), Nick Humphrey's own very good précis of the book I tried to summarize. It might serve readers better than my essay.

I did enjoy the discussion here in general. I hadn't heard of the "illusionist" stance before and it does sound quite interesting yet I do find it quite confusing as well.

I generally find there to be a big confusion about the relation of the self to what "consciousness" is. I was in this rabbit hole of thinking about it a lot, and I realised I had to probe the edges of my "self" to figure out how it truly manifested. A thousand hours into meditation, some of the existing barriers have fallen down.

The complex attractor state can actually be experienced in meditation and it is what you would generally call a case of dependent origination or a self-sustaining loop (literally, lol). You can see through this by the practice of realising that the self-property of mind is co-created by your mind and that it is "empty". This is a big part of the meditation project. (alongside loving-kindness practice, please don't skip the loving-kindness practice)

Experience itself isn't mediated by this "selfing" property, it is rather an artificial boundary we have created about our actions in the world for simplification reasons. (See Boundaries as a general way of this occurring.)

So, the self cannot be the ground of consciousness; it is rather a computationally optimal structure for behaving in the world. Yet realizing this fully is most easily done through your own experience, or through n=1 science. Meaning that to fully collect the evidence you will have to discover it through your own phenomenological experience. (Which makes it weird to take into western philosophical contexts.)

So, the self cannot be the ground and partly as a consequence of this and partly since consciousness is a very conflated term, I like thinking more about different levels of sentience instead. At a certain threshold of sentience the "selfing" loop is formed.

The claims and evidence he's talking about may be true but I don't believe that justifies the conclusions that he draws from them.

Good job. Posts like these are why I still check the EA forum despite the ridiculous nonsense (castles et al). 

I feel like I should be writing and reading posts about AI but honestly I am too intimidated to go near that topic.

I specialize in AI, and I respectfully disagree. I think there's much more low-hanging fruit available when studying consciousness. Interpretability research is a thriving subfield best left to PhD students and I don't really think that bloggers add much value here. 

I personally am also not concerned about AGI because I think consciousness is quantum. 

Ha, I see. Your advice might be right, but I don't think "consciousness is quantum". Could you say what you mean by that?

Of course, I've heard that before. In the past, when I've heard people say that, it's been advocates of free-will theories of consciousness trying to propose a physical basis for consciousness that preserves the indeterminacy of decision-making. Some objections I have to this view:

  1. Most importantly, as I pointed out here: consciousness is roughly orthogonal to intelligence. So your view shouldn't give you reassurance about AGI. We could have a formal definition of intelligence, and causal instantiations of it, without any qualia or what-it's-like-to-be subjective consciousness existing in the system. There is also conscious experience with minimal intelligence, like experiences of raw pleasure, pain, or observing the blueness of the sky. As I explain in the linked post, consciousness is also orthogonal to agency or goal-directed behavior.
  2. There's a great deal of research about consciousness. I described one account in my post, and Nick Humphrey does go out on a limb more than most researchers do. But my sense is most neuroscientists of consciousness endorse some account of consciousness roughly equivalent to Nick's. While probably some (not all or even a majority) would concede the hard problem remains, based on what we do know about the structure of the physical substrates underlying consciousness, it's hard to imagine what role "quantum" would do.
  3. It fails to add any sense of meaningful free will, because a brain that makes decisions based on random quantum fluctuations doesn't in any meaningful way have more agency than a brain that makes decisions based on pre-determined physical causal chains. While a [hypothetical] quantum-based brain does avoid being pre-determined by physical causal chains, now it is just pre-determined by random quantum fluctuations.
  4. Lastly, I have to confess a bit of prejudice against this view. In the past it seems to have been proposed so naively that people are just mashing together two phenomena that no one fully understands and proposing they're related. But the only thing they have in common, as far as I know, is that we don't understand them. That's not much of a reason to believe in a hypothesis that links them.
  5. Assuming your view was correct, if someone built a quantum computer, would you then be more worried about AGI? That doesn't seem so far off.