Why Machines Could Be – But Aren’t – Conscious

I recently had a brief back-and-forth with Bobby Azarian about his new article on Raw Story. Azarian, a neuroscientist at George Mason University, argued that artificial intelligence (A.I.) could never be conscious. I highly recommend reading Azarian’s article: it’s a great distillation of some key concepts in the philosophy of mind, and he makes an argument that is well worth considering. For the most part, I agree with Azarian’s reasoning regarding current A.I., but I don’t think his argument precludes the possibility of future A.I. being conscious.

First, a summary of Azarian’s key points:

  1. Computers are Turing machines, which means they can perform operations on symbols but can’t recognize what those symbols mean (which requires a mind).
  2. Consciousness is a biological phenomenon, produced by processes very different from what happens inside a computer. While brains are in some sense “digital,” since information is carried by a neuron either firing (a “1”) or not firing (a “0”), there are a host of important analogue biological processes in the brain, including, in Azarian’s words, “cellular and molecular processes, biochemical reactions, electrostatic forces, global synchronized neuron firing at specific frequencies, and unique structural and functional connections with countless feedback loops.”
  3. Simulation is not duplication: even if we simulate a brain, it wouldn’t be conscious, for the same reason that a computer simulation of water isn’t wet.

These are essentially the arguments that philosopher John Searle has made against the possibility of conscious A.I., and they are all good arguments against the possibility of current machines being conscious. But none of them implies that A.I. couldn’t be conscious.

Machines could be conscious for a very simple reason: machines are physical systems and so are brains. If machines could reproduce the same properties of brains that make brains conscious, then machines would also be conscious. 

With this simple point in mind, let’s go through Azarian’s argument again:

  1. The fact that computers are Turing machines does not necessarily preclude their ability to consciously understand what they’re doing. Computers happen to be built in such a way that they don’t need sentience to carry out their computations: you feed them some symbols, they manipulate those symbols according to some algorithm, and they spit out their answer. Brains do much the same thing – often without conscious understanding. For example, your conscious self might be awful at trigonometry, but you have neurons in your brain that are quite good at it. Whenever you hear a sound, the sound information gets passed on to those neurons, and, following some algorithm that implements a trigonometric calculation, those neurons compute where the sound is coming from (a toy version of this computation is sketched after this list). All you consciously perceive is the outcome of that computation, which isn’t different in any important sense from what Turing machines do. The only difference between you and the Turing machine is that the outputs of these unconscious computations get sent to some network of neurons doing something that produces you – i.e., your consciousness. Computers simply don’t carry out that extra step.
  2. Azarian is right to point out that consciousness is a biological phenomenon. But there’s no reason to think that this phenomenon couldn’t be reproduced in a machine: Vitamin D production is a biological process that happens in your skin, but we can reproduce that process with machines. That’s how Vitamin D supplements are made.
  3. Azarian is right to say that simulation is not duplication: a computer simulation of water is not wet. But that’s fine. If we can’t simulate, then let’s duplicate. If we build a physical system that duplicates the properties of water that make water wet, then that physical system would also be wet. Likewise, if we build a physical system that has all the properties of brains that make brains conscious, then that system would be conscious.
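
As an aside, here is the toy sketch of the sound-localization computation mentioned in point 1 above. The model is an assumption chosen purely for illustration – a simple free-field model in which the interaural time difference (ITD) equals d·sin(θ)/c – not a claim about the brain’s actual algorithm:

```python
# Toy model of sound localization from the interaural time difference.
# Assumes a simple free-field model, ITD = HEAD_WIDTH * sin(theta) / SPEED_OF_SOUND;
# the constants are rough illustrative values, not physiological data.
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
HEAD_WIDTH = 0.18        # m, rough distance between the two ears

def azimuth_from_itd(itd_seconds: float) -> float:
    """Estimate the horizontal angle of a sound source, in degrees,
    from the difference in arrival time between the two ears."""
    ratio = itd_seconds * SPEED_OF_SOUND / HEAD_WIDTH
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical overshoot
    return math.degrees(math.asin(ratio))

# A sound arriving 0.3 ms earlier at one ear lies roughly 35 degrees
# off the midline toward that ear:
print(azimuth_from_itd(0.0003))
```

The point is simply that this is a mechanical computation: trigonometry gets done, and no conscious understanding of trigonometry is required anywhere in the process.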

What could such properties be? What makes brains conscious? And are the properties that make brains conscious the sort of properties we might reasonably expect to be able to duplicate in intelligent machines?

While neuroscientists don’t yet know what makes brains conscious, a theory that is gaining increasing traction in the field is the Integrated Information Theory of Consciousness. I’ve written about the theory on this blog before, and my current research at Berkeley is on testing some of its predictions. The theory, in its most basic form, states that consciousness simply is integrated information, i.e., information from the intrinsic perspective of a system. Let’s assume that the theory is correct (which it might not be). What would the theory mean for A.I.?

Integrated Information Theory implies that A.I. could be conscious, but isn’t. All the biological facts about brains – the chemistry of their neurotransmitters, the frequencies at which populations of neurons oscillate, the molecular genetics of neurons, and so on – are simply details of implementation. If Integrated Information Theory is correct, then what really matters for consciousness is the presence of integrated information, regardless of the medium in which that information is implemented. It could very well be that all these analogue processes are necessary for generating sufficient amounts of integrated information, but these processes themselves aren’t what matter.
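
To make “integrated information” slightly more concrete, here is a deliberately simplified toy calculation. This is emphatically not the actual Φ formalism of the theory – the real measure is far more involved – but it captures the basic intuition: a system is integrated to the extent that the whole carries information about its own past that its parts, taken separately, do not.

```python
# A toy "integration" measure for a 2-node binary system: the
# past-to-future mutual information of the whole system minus the
# sum over its parts taken independently. NOT the real phi of
# Integrated Information Theory, just an illustration of the idea.
from collections import Counter
from itertools import product
import math

def mutual_information(pairs):
    """I(X; Y) in bits, for a uniform distribution over (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def toy_phi(update):
    """Whole-system predictive information minus the parts' contributions."""
    states = list(product([0, 1], repeat=2))
    transitions = [(s, update(s)) for s in states]
    whole = mutual_information(transitions)
    parts = sum(mutual_information([(s[i], update(s)[i]) for s in states])
                for i in range(2))
    return whole - parts

# Each node copies the OTHER node: neither part predicts its own future
# on its own, so all the predictive information lives in the whole.
print(toy_phi(lambda s: (s[1], s[0])))  # 2.0 bits: integrated

# Each node copies ITSELF: the system decomposes into independent parts.
print(toy_phi(lambda s: (s[0], s[1])))  # 0.0 bits: not integrated
```

Both toy systems are equally simple, yet only the first is integrated: cut it in half, and its predictive information disappears.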

Here’s the rub: current A.I. has little to no integrated information. The architecture of current A.I. systems is largely feedforward, with some feedback in some systems. Activity in one layer of an artificial neural network is sent forward to the next layer, that layer’s activity is sent forward to the layer after it, and so on, until the activity reaches a final output layer, whose activity determines the artificial neural network’s actions.

[Figure] The neural networks underlying many A.I. systems are feedforward.
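
As a minimal sketch of this feedforward flow (layer sizes and weights here are arbitrary placeholders, not any particular A.I. system):

```python
# Minimal sketch of feedforward information flow: each layer's activity
# depends only on the layer before it, and nothing ever feeds back.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 4]  # input -> hidden -> hidden -> output
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """One feedforward pass through the network."""
    activity = x
    for w in weights:
        activity = np.tanh(activity @ w)  # activity only ever moves forward
    return activity

output = forward(rng.standard_normal(8))
print(output)  # the output layer's activity determines the network's action
```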

Fundamentally, this is not how information flows in the brain. The brain, and in particular the thalamocortical system (where we think consciousness is produced), is a stupendously integrated system. Everything talks to everything else, everything talks to itself, and information is constantly getting distributed across the entire network. According to Integrated Information Theory, it is out of this integration of information across vast swathes of the thalamocortical system that consciousness emerges.

The remarkable success of A.I. in beating humans at a whole range of tasks only drives home the point, made long ago by John Searle and reiterated in Azarian’s article, that you don’t need consciousness for intelligence. Weak A.I., i.e., non-sentient A.I., can be extremely intelligent. But I doubt that even Google DeepMind’s systems have an iota of consciousness.

But there’s no reason that the architecture of an A.I. couldn’t reproduce the properties that make the thalamocortical system conscious. If A.I.s were restructured in such a way that they reproduced the properties of brains that make brains conscious – be that integrated information or some other property – then A.I. would be conscious. If, on the other hand, the development of A.I. continues along its current course, without adding integrated information (or some other property that makes brains conscious) into the equation, then we shouldn’t expect A.I. to be conscious any time soon.

4 thoughts on “Why Machines Could Be – But Aren’t – Conscious”

  1. I am NOT a neurobiologist and I am quite capable of being ignorant or simply wrong, but isn’t there another, simpler answer? We can (and have) removed entire hemispheres of human (and other species’) brains, and, if the subject is young enough, they are capable of developing fairly normally. One author describes a friend who was completely normal in most respects, but on autopsy was found to be in possession of only one hemisphere. This argues for the potential of two independent hemispheres. This was “proven” by the work of Sperry and Gazzaniga in the 1960s and ’70s with their split-brain studies. Since we are in possession of two separate “minds,” couldn’t consciousness simply be the observation by one hemisphere of the ideas or observations of the other?
    Or is this a very well known and basic concept taught at early levels of psychology and neurobiology?
    So… if all that is required for consciousness is the perception of one perceptual unit by another, then a computer built from two such units would theoretically be conscious A.I.
    And a machine with three perception units could program itself, thereby allowing rapid evolution of its programming. It could potentially evolve far past humans, but the outcome is unpredictable.

    The purpose of living organisms is to reproduce; an evolving machine would also reproduce, because a non-reproducing machine wouldn’t. Go watch the end of the Matrix series.

    Blaine Hebert

    1. Hi Blaine, thanks for your comment! I don’t think I followed your point about the perception units, but as for the split-brain patients, there is lively scientific and philosophical debate regarding how to characterize their consciousness. One increasingly popular view – and one that is directly implied by the Integrated Information Theory of Consciousness – is that if you sever the connection between the hemispheres, you do in fact get two different conscious minds residing in the same brain. A split-brain patient might seem normal because only one of those minds (typically the one in the left hemisphere) has access to verbal communication, and also tends not to notice that something is off.

      In the terms of Integrated Information Theory, we would say that the two hemispheres in a split-brain patient no longer form an integrated whole, but that each independently is integrated enough to form its own mind. Same goes for people with only one hemisphere: that hemisphere is capable of producing its own mind. But, when the connection between the hemispheres is intact, the hemispheres act together to produce a single, integrated, conscious mind.

  2. Dear Daniel,
    As far as I know, split-brain surgery only involves severing the corpus callosum, meaning that a considerable fraction of cross-hemisphere connections (e.g. the commissures) remain intact. I therefore find the claim that two independent minds operate in a single split-brain body rather strong.
    In addition, since you most likely read the Wikipedia article about artificial neural networks (judging from the striking similarity of your ANN figure to Wikipedia’s), I think you are aware that recurrent neural networks (RNNs) exist and are widely used for tasks such as machine-based visual recognition. This is why I cannot agree that a lack of recurrence explains the hypothetical non-existence of conscious A.I. so far.
    However, I do find the Integrated Information Theory quite intriguing, especially considering the role of the insular cortex in the integration and generation of conscious awareness.

    All the best for your research,
    Dominik

    1. Hi Dominik,

      Thanks for your comment. You’re right – split-brain patients usually only have their corpus callosum severed, which leaves intact other, more minor cross-hemisphere connections, including the anterior commissure, the posterior commissure, the hippocampal commissure, and the fornix. One could make a case that these connections are enough to bind the activity in the two hemispheres into one mind, but this is unlikely. The only one of these connections that seems important for the production of conscious experience is the anterior commissure, which binds together the temporal lobes and likely plays a part in pain perception. The other connections serve to bind non-perception-related activity across the two hemispheres (such as memory and certain reflexes), and so likely wouldn’t play a part in integrating the activity of the two hemispheres into a single, unified consciousness.

      There’s also behavioral evidence in favor of this view: the classic research done by Michael Gazzaniga on split-brain patients showed that one hemisphere can perceive and report on stimuli that the other hemisphere can’t, and that the two hemispheres often conflict in their goals and subsequent actions.

      That said, if we accept the Integrated Information Theory of Consciousness (which we shouldn’t until we have more solid empirical evidence in its favor), then the best way to settle the debate would simply be to measure the amount of integrated information in each hemisphere individually and compare it to the amount of integrated information in both hemispheres considered together: if the integrated information in the whole brain of a split-brain patient isn’t significantly higher than the sum of the integrated information in each hemisphere alone, then Integrated Information Theory would say that there are two separate conscious minds in the split brain.

      As for the possibility of recurrent neural networks having some amount of integrated information, I’ll admit that my understanding may not be up-to-date. My impression was based on current deep feedforward networks, such as Google’s Inception network, for which integrated information would unequivocally be zero. It would be interesting to see how much integrated information is produced in recurrent neural networks. If it’s a significant amount, and if A.I. development is moving in that direction, then maybe A.I. is in fact moving toward consciousness. Of course, the simulation vs. duplication argument would apply here, but that’s far trickier philosophically. Perhaps that’s a post for another time.
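
      To make the architectural contrast concrete, here is a minimal sketch (sizes and weights are arbitrary placeholders) of the recurrent update that a purely feedforward network lacks:

      ```python
      # In a recurrent network, the hidden state feeds back into itself,
      # so the network's activity depends on its own past -- the kind of
      # loop a feedforward network has nowhere in its architecture.
      import numpy as np

      rng = np.random.default_rng(1)
      W_in = rng.standard_normal((4, 8))   # input -> hidden
      W_rec = rng.standard_normal((8, 8))  # hidden -> hidden: the feedback loop

      def rnn_step(x, h):
          """One update: the new hidden state depends on the input AND
          on the previous hidden state."""
          return np.tanh(x @ W_in + h @ W_rec)

      h = np.zeros(8)
      for t in range(3):  # unroll a few time steps
          h = rnn_step(rng.standard_normal(4), h)
      print(h)
      ```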

      Daniel
