Scientists don’t yet know how life emerged on Earth. Darwin’s theory of evolution by natural selection explains how life evolved once it emerged, but it does not explain how life emerged in the first place.
But science presses on. One of the main challenges in figuring out how life emerged from chemistry is defining what life is. Where do we draw the line and say that something is alive, and no longer just chemistry?
The answer might lie in how matter manipulates information, or, perhaps more accurately, in how information manipulates matter.
This proposition is based on a series of recent papers by physicists Sara Imari Walker and Paul Davies of Arizona State University. Walker and Davies observe that in biological (i.e. living) systems, there’s top-down information control: information in these systems can actually influence how their component parts behave.
To understand what this means, imagine taking all the chemicals normally found in bacterial cells and pouring them into a petri dish. Now imagine taking actual bacteria and putting those in another petri dish. Then add sugar to both dishes. In the dish with just chemicals, there might be some basic chemical reactions between the sugar and the chemicals already there, but nothing very interesting will happen. In the dish with the living bacteria, however, the bacterial culture will metabolize the sugar and grow. This distinction has led some scientists to point to metabolism and reproduction as indicators of life.
But one could point to non-living systems that behave similarly to the bacteria in this example. Take crystals: in an abstract sense, crystals “metabolize” the material around them to grow. So what’s the difference between crystals and living systems?
Crystals belong to a class of physical systems that Walker and Davies call trivial replicators. They write: “Trivial replicators process information strictly in the passive sense. Typically, they are characterized by building blocks which are not much simpler than the assembled object.” In terms of information, they write that the “algorithm” that describes how these sorts of systems grow contains fewer bits than do the systems themselves. In other words, trivial replicators like crystals grow according to very few basic rules – and those rules are far simpler than the crystals themselves.

But the rules that describe how non-trivial replicators like bacteria grow are a lot more complicated. In fact, the complexity of these rules is comparable to the complexity of the bacteria themselves. Not only are these rules more complex, but they are also explicitly programmed. For bacteria and other known examples of life, that programming is found in DNA. This is in stark contrast to trivial replicator systems like crystals, which grow in a predictable way governed by their implicit physics.
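This bits-based distinction can be illustrated, very loosely, with compression. The length of the shortest program that generates an object (its Kolmogorov complexity) is uncomputable, but off-the-shelf compression is a standard crude proxy for it: a crystal-like repeating pattern compresses to a tiny fraction of its size, while data with no short generating rule barely compresses at all. Here is a minimal sketch — the “NaCl” string and the random bytes are stand-ins of my own, not anything from Walker and Davies’ papers:

```python
import random
import zlib

# A "trivial replicator": a long structure generated by a very short
# rule ("repeat the unit cell"), like a crystal lattice.
crystal = b"NaCl" * 250  # 1000 bytes built from a 4-byte rule

# A stand-in for a structure with no short generating rule: seeded
# pseudo-random bytes, which are effectively incompressible.
rng = random.Random(42)
aperiodic = bytes(rng.randrange(256) for _ in range(1000))

# Compressed size approximates how many bits the "algorithm"
# describing each structure would need.
ratio_crystal = len(zlib.compress(crystal)) / len(crystal)
ratio_aperiodic = len(zlib.compress(aperiodic)) / len(aperiodic)

print(f"crystal: compressed to {ratio_crystal:.0%} of original size")
print(f"aperiodic: compressed to {ratio_aperiodic:.0%} of original size")
```

The repeating pattern shrinks to a few percent of its length, while the incompressible data stays at essentially full size: the rule describing the first is far smaller than the object it builds, which is exactly the property that marks a trivial replicator.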
So one possible way to distinguish life from non-life is this: in living systems there is top-down information control that governs how those systems grow and replicate, whereas in non-living systems growth is governed by the bottom-up information flow implicit in the physics of those systems. It’s not clear how trivial replicators evolved into non-trivial replicators, but this new way of looking at life suggests a new direction of research for astrobiologists: rather than focusing on the “hardware” (chemistry) of living systems, researchers should start to think about the “software” (information flow) in living systems.
So now to a practical question: according to this characterization of life, how do we actually identify something as being “alive”? If we send a probe to Mars to analyze some soil samples, how will it determine whether those samples contain living things? If Davies and Walker are right, then you’d want a machine to be able to measure the degree of top-down information control in a physical system. While such a machine doesn’t yet exist, and while it’s hard to imagine how it could possibly work, we should at least begin to ask what such a machine would actually calculate. And here’s where things get really interesting: Davies and Walker suggest that the best way to calculate the degree of top-down information control would be a variant of Φ, a measure of information integration introduced by neuroscientist Giulio Tononi as a way to measure how conscious a system is.

Tononi’s Φ, which I’ve written about before, was devised as a way to measure how much information the human brain integrates. It was inspired by the observation that brain regions that generate consciousness form a network that integrates a massive amount of information, whereas brain regions that don’t contribute to consciousness don’t integrate nearly as much information. Davies and Walker make the interesting observation that Φ might also be able to differentiate between living and non-living systems.
To see why this is the case, let’s return to our earlier example of the two petri dishes, one with the soup of chemicals and the other with bacteria. The chemical soup in the first petri dish would have a Φ of about zero, because those chemicals don’t form an integrated information-processing whole. The bacteria in the second dish, on the other hand, would have a much higher Φ value because bacteria are complete, integrated systems with information-processing capabilities.
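Φ proper is notoriously hard to compute — it involves searching over every way of partitioning a system into parts — so a working “life meter” is a long way off. But the underlying intuition of integration can be shown with its simplest ingredient: the mutual information between two parts of a system. The two-component “soup” and “organism” distributions below are toy inventions for illustration, not anything from Tononi or from Walker and Davies:

```python
import math

def mutual_information(joint):
    """I(A;B) in bits, given a joint distribution {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# "Chemical soup": two binary components that vary independently,
# so knowing the state of one tells you nothing about the other.
soup = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

# "Organism": two components whose states are tightly coupled,
# behaving as parts of one integrated whole.
organism = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(soup))      # 0.0 bits: no integration
print(mutual_information(organism))  # 1.0 bit: fully integrated
```

An instrument in the Davies–Walker spirit would measure something like this, but over all partitions of a system’s causal structure rather than one fixed two-way split.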
If we accept both that Φ measures how conscious a system is and that a variant of Φ can differentiate between living and non-living systems, we’re led to a rather bizarre conclusion: complex chemistry becomes life when there’s a significant jump in that system’s Φ, and so complex chemistry becomes life when there’s a significant jump in that system’s degree of consciousness.
Now this is indeed a fantastic conclusion. But history warrants some caution here, because scientists have made a similar mistake before. As historian Steven Shapin argues in The Scientific Revolution, Enlightenment-era thinkers were quick to accept mechanical explanations of natural phenomena because they were convinced that the rules governing their mechanical technologies could explain the workings of the universe. My worry is that we might be doing the same thing: information theory governs the workings of our most prevalent technologies, so maybe, for cultural reasons, we’re not as critical as we should be when we use it to explain the workings of the universe.
But I say we press on with information-theoretic explanations anyway. Some of the Enlightenment era’s mechanical explanations of physical phenomena (like the motion of large bodies) turned out to be right, whereas others (like Descartes’ attempt to explain consciousness mechanically) did not. Information is the governing principle of many of our current scientific paradigms, so let’s run with it for as long as we can. The beauty of science is that it’s equipped to deal with what happens when paradigms inevitably fall apart.
And, in the meantime, the possibility that chemistry becomes life when chemistry becomes the subject of experience should give philosophers plenty to think about.
Sources
Shapin, S. (1996). The Scientific Revolution. University of Chicago Press.
Tononi, G. (2004). “An information integration theory of consciousness.” BMC Neuroscience, 5: 42.
Walker, S.I. (2014). “Top-down causation and the rise of information in the emergence of life.” Information, 5: 424–439.
Walker, S.I. and Davies, P.C.W. (2012). “The algorithmic origins of life.” Journal of the Royal Society Interface, 10: 20120869.