In 1950, Alan Turing set aside the question of whether machines can think, judging it too ill-defined to answer directly, and replaced it with his Turing Test, reframing the problem as one of machine intelligence. I think the question is worth revisiting now through the lens of consciousness rather than intelligence.


In 2004, neuroscientist Giulio Tononi introduced integrated information theory (IIT), a mathematical model of consciousness. He posited that consciousness consists in the informational exclusion an integrated system is capable of—how many possible states of reality it can rule out when it has a particular experience—and that this capacity can be determined precisely with his mathematical techniques.

Tononi argued that a system’s subjective experiences simply are the elimination of alternative possible experiences; this capacity for discrimination is integrated information, or Φ (phi). Tononi’s mathematical model has performed as expected in limited studies of simple, low-neuron systems, but the algorithm is computationally prohibitive for the more interesting, borderline-conscious systems (like an AI).
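To make the exclusion idea concrete, here is a minimal sketch in Python. It is emphatically not Tononi’s actual Φ algorithm (which requires a search over all partitions of a system); it only shows how observing a state rules out possible prior states. The transition tables and the function name are my own illustrative inventions.

```python
from math import log2

# Toy illustration of informational exclusion (NOT Tononi's Phi algorithm,
# which searches over system partitions). For a deterministic system with a
# known transition table, observing the current state excludes every prior
# state that could not have led to it.

def exclusion_bits(transitions: dict, observed_state) -> float:
    """Bits of uncertainty about the prior state removed by one observation."""
    all_states = list(transitions)
    compatible = [s for s in all_states if transitions[s] == observed_state]
    if not compatible:
        raise ValueError("observed state is unreachable under these transitions")
    # A uniform prior over all states collapses to a uniform posterior over
    # the compatible ones; the entropy difference is the information gained.
    return log2(len(all_states)) - log2(len(compatible))

# A photodiode that holds its state: observing "on" excludes "off" -> 1.0 bit.
photodiode = {"off": "off", "on": "on"}
print(exclusion_bits(photodiode, "on"))

# A unit that maps every input to "on" excludes nothing -> 0.0 bits.
constant = {"a": "on", "b": "on"}
print(exclusion_bits(constant, "on"))
```

The photodiode discriminates between exactly two states of reality; a system that cannot discriminate at all generates no information, however busy it looks from the outside.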

One of the most cogent critiques of IIT comes from Scott Aaronson, a computer scientist who debated Tononi on his blog in 2014. In his initial post, Aaronson draws out many of the theory’s counterintuitive implications—showing, for example, that Tononi’s model would assign greater consciousness to a sufficiently large array of logic gates than to the human brain.

Tononi’s reply, according to Aaronson, “doesn’t ‘bite the bullet’ so much as devour a bullet hoagie with mustard,” accepting and defending IIT’s counterintuitive results. Tononi argues against Aaronson’s appeal to intuition and highlights the virtues of an approach grounded solely in phenomenology.

Aaronson is having none of it. He notes the folly of disregarding intuitions in a field as poorly defined as consciousness studies, insisting that Tononi’s definition must match what we feel about the simple, agreed-upon ‘paradigm-cases’ before it can be trusted to decide the borderline ones.


This entire discourse is eerily reminiscent of the debate among biologists over the definition of life. In “Shannon’s information, Bernal’s biopoiesis and Bernoulli distribution as pillars for building a definition of life,” Radosław Piast, a chemist at the University of Warsaw, argues that the phenomenon of life can be described as “a continuum of self-maintainable information.”

Piast notes that existing definitions of life tend to overfit to scientists’ intuitions: he points out a definition from 2010 that uses “not even 100 characters to describe life, yet an additional 135 to exclude viruses from it.” He points to organisms with virus-like deficiencies that are nonetheless widely considered alive, suggesting the breakdown of the ‘paradigm-cases’ model. Conventional methods of biological description struggle to resolve borderline cases reliably.

Instead, working from a ground-up understanding of life’s development from organic matter (“biopoiesis”) and of the stability of (genetic) information as a key characteristic of life, Piast recasts basic biological processes in terms of information theory. He postulates that the phenomenon of life itself can be described as this stable, self-maintained information, and that it falls along a continuum.

He classifies various “cis-actions” (including repair, reproduction, etc.) which move an organism up the self-maintainable information continuum “toward lifeness.”
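As a rough picture of how such a continuum might be operationalized, here is a toy sketch. The cis-action names loosely follow Piast’s examples, but the numeric weights are my invention; his paper assigns no scores.

```python
# Toy picture of a continuum of self-maintainable information. The
# cis-action names loosely follow Piast; the weights are invented for
# illustration only (his paper assigns no numeric scores).
CIS_ACTIONS = {"repair": 1.0, "reproduction": 1.0, "metabolism": 1.0}

def lifeness(actions: set) -> float:
    """Position on the continuum: which cis-actions the system performs."""
    return sum(w for name, w in CIS_ACTIONS.items() if name in actions)

print(lifeness({"repair", "reproduction", "metabolism"}))  # a cell: 3.0
print(lifeness({"reproduction"}))  # a virus (via host machinery): 1.0
print(lifeness(set()))             # a rock: 0.0
```

The point is not the numbers but the shape of the answer: “how alive?” rather than “alive or not?”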

Piast’s framework is a compelling strategy to define and classify a famously difficult-to-define-and-classify, yet fundamental, concept. I think that a similar approach should be extended to consciousness.


The same definitional disarray is present in defining consciousness. Cognitive scientist and psychiatrist Michael A. Cerullo notes in “The Problem with Phi: A Critique of Integrated Information Theory,” his analysis of Tononi and Aaronson’s exchange, that much of their dispute stems from differing (or even mutually misunderstood) definitions of ‘consciousness.’ He labels Tononi’s definition—what Φ measures in photodiodes and arrays of logic gates—‘noncognitive-consciousness’ in relation to three other formulations:

“Chalmers’ primitive protoconsciousness (e.g., the consciousness of quarks) … Block’s access-cognitive-consciousness (i.e., consciousness associated with representational content), and Block’s phenomenal-cognitive-consciousness (i.e., consciousness associated with sensations)”

Cerullo contends that Φ can only really describe noncognitive-consciousness, while Aaronson expects it to capture the two cognitive-consciousnesses as well. Aaronson’s disappointment with the theory stems, ultimately, from Tononi’s overstatement of his own work—he overhyped Φ’s explanatory power, and Aaronson took him at his word.

Cerullo concludes his article quoting philosopher Ned Block’s comment to Tononi at the 2014 Science of Consciousness meeting: “You have a theory of something, I am just not sure what it is.” Cerullo believes it’s a theory “that, even if correct, does not help us to understand or predict the kind of consciousness that is relevant to our subjective experience.”

I believe it’s a consciousness cis-action.


Aaronson notes that advanced AI is one of the interesting borderline cases of consciousness. Debate over neural networks and machine intelligence has run hot since the 1950s, and as a result we often see definitions of consciousness specifically (and verbosely) tailored one way or the other.

John Searle’s ‘biological naturalism’ is a prime example. He accepts the fundamentals of physicalism and rejects Cartesian dualism, yet concludes, with little justification, that organic biochemistry is necessary for consciousness.

Just as Piast refuses to let the borderline case of viruses dictate a definition of life, we must refuse to let synthetic minds dictate a definition of consciousness. Instead, we should look for the cis-actions that move any system along its continuum.

I propose integrated information as one. IIT provides a ready-made mathematical framework, and Φ is a useful measure for ranking the discriminative capacity of one system against another. Humans can discriminate more than photodiodes; that is still very useful information.

Attention is another. Michael Graziano, a professor of psychology and neuroscience, has proposed attention schema theory (AST) as an alternative model of consciousness. In brief, this cis-action would measure how well a system shifts its attention among sensory perceptions based on its own internal model (schema) of that attention.
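A minimal sketch of the idea, assuming a toy agent whose “schema” is just a record of what it believes it is attending to and how strongly; all names here are hypothetical, and AST itself is a neuroscientific theory, not a ten-line loop.

```python
# Toy attention-schema loop (all names invented for illustration). The
# agent keeps a *model* of its own attention and uses that model, not raw
# salience alone, to decide when to shift.
def attention_loop(salience: dict, steps: int = 3) -> None:
    schema = {"attending_to": None, "strength": 0.0}  # the agent's self-model
    for _ in range(steps):
        # Keep the self-model honest: refresh the strength of the current focus.
        if schema["attending_to"] is not None:
            schema["strength"] = salience[schema["attending_to"]]
        target, strength = max(salience.items(), key=lambda kv: kv[1])
        if strength > schema["strength"]:  # shift only if the model says to
            schema = {"attending_to": target, "strength": strength}
        print("attending to", schema["attending_to"])
        salience[schema["attending_to"]] *= 0.5  # attended input habituates

attention_loop({"sound": 0.9, "light": 0.6})
# -> sound, light, sound: attention shifts as attended inputs habituate
```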

My third proposed cis-action is based on global workspace theory (GWT), developed by cognitive scientist Bernard Baars in the 1980s. I’ll call it thought selection: a potentially conscious system’s ability to shift its mental spotlight to a specific topic and broadcast that single conscious focus to all of the system’s subconscious processes.
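Here is a comparably small sketch of the broadcast step, under the same caveats: the module names and scores are invented, and this is only the shape of GWT’s selection-and-broadcast cycle, not a serious model of it.

```python
# Toy global-workspace step (module names and scores invented). Unconscious
# processes propose contents; the single most strongly supported content is
# selected into the spotlight and broadcast back to every process.
def workspace_step(proposals: dict) -> str:
    content, _score = max(proposals.values(), key=lambda v: v[1])
    for module in proposals:  # broadcast to all modules, winners and losers
        print(f"{module} receives broadcast: {content!r}")
    return content

workspace_step({
    "vision":  ("red light ahead", 0.8),
    "hearing": ("horn blaring", 0.6),
    "memory":  ("this intersection is dangerous", 0.4),
})
```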

Modern artificial intelligence systems certainly pass the integrated-information test, so they have at least a minimal level of consciousness, akin to a virus’s or a mitochondrion’s claim to aliveness. Whether they have much attention or thought selection is an open question, though it seems less likely.

This selection of cis-actions is far from exhaustive and not nearly as rigorous as it should be, but, conceptually, this framework does provide for future machine consciousness—and future machine thought—with sufficiently large and capable systems.

With enough neurons, the evolutionary nature of neural-network development would all but guarantee a constructed attention schema, given the efficiency gains such a schema offers. And thought selection, too, is likely a natural result of scaled-up reinforcement learning: it becomes massively more efficient with enough interconnected neurons.

Future machines are almost certain to continue moving up the continuum of consciousness with each new technology, each new update, each new GPT.


This leaves us with a clear moral agenda.

The sheer number of AI instances compels us to take their capacity for experience—their degree of consciousness—under serious consideration.

Effective Altruists—who aim to reduce all suffering—must of course prioritize understanding and uncovering the degree to which modern AI is conscious, and then must take the results of their investigations seriously.

Searle’s biological naturalism fits our grossly outdated intuitions—to owe moral consideration only to ourselves and our fellow tribe members—but our fundamental pursuit should be overcoming these instincts and rejecting his unduly restrictive view.

Machines that think, feel.[1] Machines that feel are moral patients.

Consciousness cis-actions won’t investigate themselves. That will take a concerted effort from a group of people willing to care for the minds others neglect—Effective Altruists.

1. Or perhaps, “experience,” which I would argue still qualifies them for moral patienthood.

