Post-humanists envision a future in which human minds can be “uploaded” – to virtual environments, to new biological bodies, or to fully robotic bodies. The rapid advance of neuroscience suggests that we should begin taking this idea seriously. Reflecting on the prospect of mind uploading, Princeton neuroscience professor Michael Graziano recently wrote:
It is tempting to ignore these ideas as just another science-fiction trope, a nerd fantasy. But something about it won’t leave me alone. I am a neuroscientist. I study the brain. For nearly 30 years, I’ve studied how sensory information gets taken in and processed, how movements are controlled and, lately, how networks of neurons might compute the spooky property of awareness. I find myself asking, given what we know about the brain, whether we really could upload someone’s mind to a computer. And my best guess is: yes, almost certainly.
While the precise details of how mind uploading might be achieved are worthy of a lengthy discussion, what I want to talk about is some of the profound philosophical problems that this possibility raises. Specifically, how will we think about personal identity once mind uploading is possible?
In the West, the problem of personal identity has usually amounted to the question of how identity can persist over time. Though it might seem like an easy question to answer, all the obvious responses to this question turn out to be problematic.
Generally, there have been two ways to think about persistence of identity over time: continuity of substance and continuity of consciousness. Continuity of substance can mean either physical continuity or the continuity of “mental substance.” Physical continuity is obviously problematic because you’re not composed of the same cells as you were five years ago, yet you like to consider yourself the same person you were five years ago. The continuity of mental substance is problematic because it presupposes the existence of an immaterial soul.
The other contender for an explanation of identity over time is the continuity of consciousness. The basic intuition here is that identity is determined by memories, psychological characteristics, behavioral dispositions, and how you self-identify. This idea was first proposed by John Locke and has fared pretty well since then.
Philosopher Derek Parfit, however, has developed some interesting arguments against the idea that continuity of consciousness means continuity of identity. In several of his works, most notably Reasons and Persons, Parfit imagines a number of futuristic technologies that challenge our intuitions about personal identity. In one of them, he describes a Star Trek-like “teletransporter” that destroys your body in one location and then reconstructs it perfectly in another. The person reconstructed in the second location has all the memories of the person destroyed in the first; most importantly, there is perfect psychological continuity between the two. This raises problems for the idea that identity just is psychological continuity. Imagine, for example, that the original person never gets destroyed. Or imagine that hundreds of replicas are made while the original survives. Every replica would be psychologically continuous with the original, yet most of us wouldn’t want to say that each replica is the same person as the original. Psychological continuity does not, therefore, guarantee continuity of identity.
We can take Parfit’s line of thought further by constructing thought experiments involving post-human technology. Take, for instance, Ray Kurzweil’s insistence that people a hundred years from now will find it amazing that people once went through the day without backing up their mind-file. If you do die, then luckily your backed-up mind can be uploaded to a new body! But think about this: once you are uploaded to a new body, your last memory will be of uploading your mind the morning before you died. You will have no recollection of your own death. If that’s the case, are you the same person as the one who died? Or are you just the same person as the one who uploaded her mind-file that morning? Are those two – the one who uploaded her mind and the one who died several hours later – even the same person?
What about identity in a virtual environment? Your mind in a fantasy world might retain some memories of your physical life, but your body (including gender) could be completely different. In other words, your embodied experience in a virtual world could be nothing like the embodied experience of your physical self, and surely embodiment has some important role to play in personal identity?
What about having multiple selves uploaded to multiple locations? One self could be uploaded to a biological body that’s having dinner with your spouse while another self is journeying through a virtual Narnia, and yet another self is relaxing on a virtual beach on another planet. Does continuity of personal identity mean anything at all once this becomes technologically feasible?
What these examples bring home is that personal identity is problematic no matter how you look at it. So instead of trying to find a theory of personal identity that is immune to all these criticisms, perhaps we should abandon the notion that there is any such thing as a permanent self. The thoughts, emotions, and beliefs that you think of as constituting the self are of course real – at least as real as anything psychological can be. But, as the Buddha insisted, the “self” composed of these parts is not itself real.
That’s more or less the conclusion that Derek Parfit comes to when contemplating the problem of personal identity. But psychological continuity still matters to Parfit, despite his belief that it has nothing to do with personal identity. He insists that if we had to choose between being destroyed by a teletransporter in one place and being reconstructed elsewhere, with all the same memories, psychological traits, and dispositions as before, or not being destroyed at all but having our psychological continuity erased, we’d prefer the former. Psychological continuity matters to us regardless of our beliefs about personal identity.
I agree with Parfit. And if mind uploading becomes technologically feasible in my lifetime, then perhaps it’s good that I’ve begun to think about this. One day it might be possible to live in a virtual environment, or to upload my mind to an engineered body, long after my original biological body has stopped working. I, at least, would like to preserve my psychological continuity even though there is no “self” to be preserved.
And if the futurists are right, then the time when I’ll be able to make that choice might be only a few decades away.