The Ethics of Cognitive Enhancement: Part 2

Go back to Part 1. 

In Part 1 of this series on the ethics of cognitive enhancement, I argued that it is inconsistent to deny people access to new methods of cognitive enhancement as they become available, because such methods are not categorically different from methods of cognitive enhancement that are already legal and in regular use. I came upon this argument in a paper by philosopher Nick Bostrom and computational neuroscientist Anders Sandberg titled “Cognitive Enhancement: Methods, Ethics, Regulatory Challenges.” In this paper, Bostrom and Sandberg point out that we accept a host of “conventional” means of cognitive enhancement like education, mental training techniques, and the use of tools such as pen and paper, calculators, or computers, while “unconventional” methods like drugs, implants, and direct brain-computer interfaces often evoke social or moral objections.

Bostrom and Sandberg take issue with the inconsistency in how we respond to different forms of cognitive enhancement. They motivate their objection by comparing various forms of unconventional methods to similar but more conventional methods of cognitive enhancement. With each comparison, their argument boils down to the following: Cognitive enhancement method A isn’t substantially different from cognitive enhancement method B; we don’t normally find method B to be ethically objectionable; therefore, we shouldn’t find method A to be ethically objectionable either.

Philosopher Nick Bostrom (left) and neuroscientist Anders Sandberg (right)

One example that Bostrom and Sandberg discuss is the worry that the availability of mind-enhancing drugs, coupled with economic competition, might make the use of such drugs necessary to qualify for some careers. Bostrom and Sandberg point out that the same can be said of literacy, which dramatically changes how the brain processes language, is forced on citizens too young to offer consent, and renders those who never develop the skill ineligible for almost all jobs. We don’t take issue with literacy education, so why should we take issue with a drug that might have similar social effects, so long as that drug is safe?

After a series of similar examples comparing conventional and unconventional methods of cognitive enhancement, Bostrom and Sandberg conclude the following:

The demarcation between these two categories [conventional versus unconventional cognitive enhancement] is problematic and may increasingly blur. It might be the newness of the unconventional means, and the fact that they are currently still mostly experimental, which is responsible for their problematic status rather than any essential problem with the technologies themselves. As society gains more experience with currently unconventional technologies, they may become absorbed into the ordinary category of human tools.

In short: people find unconventional methods of cognitive enhancement problematic for psychological rather than purely philosophical reasons. An interesting bit of history that supports this claim is how the introduction of writing was initially received in the West. Socrates, who lived at the dawn of the wider use of writing among Greek intellectuals, was famously critical of the effect that writing would have on people’s minds. In the Phaedrus, Plato has Socrates say that writing would “create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.” There was a time, in other words, when writing was an “unconventional” method of cognitive enhancement. And who today would call writing unethical?

While Bostrom and Sandberg don’t take this line of argument, it’s worth pointing out that you can compare methods of cognitive enhancement at the level of neural architecture. If a drug or neural prosthetic causes a specific behavioral change, there’s no reason to think that the underlying change in neural connectivity would differ from the change produced by a “conventional” method that leads to the same behavior. Take, for example, the scene from The Matrix in which Neo learns kung fu instantaneously by having the knowledge transmitted directly to his brain: by stipulation, the brain changes that lead him to know kung fu are the same as those that would result from decades of practice. Why should we object to a technologically assisted change in neural architecture but not to a slower, unassisted one, when the resulting changes to the brain are the same? We might value some other lesson learned from the sheer amount of time and effort it takes to perfect a skill, and that’s fine. But that doesn’t mean we should ban faster ways of acquiring the same skill.

While I find Bostrom and Sandberg’s argument generally more cogent than Fukuyama’s, it’s not without its problems. What’s missing from their discussion of cognitive enhancers is the fact that what we even mean by “enhancement” is value-laden. Few, if any, have expressed interest in improving, for example, our sense of smell, even though doing so would certainly be interesting. Rather, the domains of improvement that are generally thought of as “enhancements” are those that improve cognitive capabilities deemed valuable: memory, intelligence, focus, and perhaps contentedness (if only to make you content with what your society demands of you). If this proposition seems suspect to you, consider what we deem to be mental “disease.” Nobody is considered mentally ill because of a lack of creativity, a poor sense of smell, low libido, or no appreciation for music. Rather, people are medicated for anything that inhibits their capacity to work. The most frequently medicated psychiatric conditions are anxiety, depression, bipolar disorder, schizophrenia, and attention deficit disorder, all of which hinder our productivity. Nobody with a low libido, an inability to appreciate music, a poor sense of smell, or a lack of creativity has trouble filling out a spreadsheet or writing an email, as long as they are calm, happy, emotionally stable, behave appropriately in public, and can sustain focus.

The cultural contingency of mental illness is one of the core ideas of Michel Foucault’s 1961 book Madness and Civilization: A History of Insanity in the Age of Reason. Foucault maintains that behaviors that deviate from what a particular culture deems “normal” are stigmatized and increasingly medicalized, and this is particularly true of behaviors that impede one’s ability to work. In the Europe of the Classical Age (roughly the seventeenth and eighteenth centuries), a culture whose sensibilities we have partially inherited, so-called correctional facilities like the Hôpital Général in Paris were opened to imprison the poor and prevent “mendicancy and idleness as sources of all disorder” (Foucault, 57). The resulting mass imprisonment of the poor was, Foucault suggests, the origin of the culture of confinement that later led to the mass imprisonment of those considered insane:

From the creation of the Hôpital Général, from the opening, in Germany and in England, of the first houses of correction, and until the end of the eighteenth century, the age of reason confined. It confined the debauched, spendthrift fathers, prodigal sons, blasphemers, men who “seek to undo themselves,” libertines. And through these parallels, these strange complicities, the age sketched the profile of its own experience of unreason. (Foucault, 65)

If we agree with Foucault, then we accept that there is a direct link between an inability to work and what we consider “insanity.” Nowadays, we medicate rather than imprison those who have trouble working because of cognitive “deficits,” so long as they don’t steal or inflict violence on anyone (though I’m sure that if we could “cure” the instinct to steal or be violent, we would, and then eventually look back at the crowded prisons of today and call them “barbaric” just as we call the insane asylums of a century ago “barbaric”).

French philosopher Michel Foucault

The point of drawing attention to the connection between mental illness and the inability to work is this: deeming some forms of cognitive alteration “enhancements” extends this way of thinking to those who aren’t otherwise considered to have a deficit in their ability to work. In other words, if you’re not below some acceptable level of ability to focus on your work and you do something to improve your focus or make yourself more emotionally stable, you’re not fixing a problem; you’re “enhancing” yourself. If you do something to increase your libido, improve your sense of smell, or deepen your appreciation of music, then you’re not “enhancing” yourself. You’re experimenting with yourself, at best.

I question what we normally mean by “enhancement” in order to push against the broadly accepted division between technologies that boost these “desirable” traits and technologies that boost traits we value less. But I still think that Bostrom and Sandberg’s argument holds weight: if we allow education, then we should allow Ritalin; if we allow computers, then we should allow neural prostheses. And I think we need to look beyond what we currently consider “enhancement” and allow people to undergo other forms of cognitive alteration, so long as those alterations don’t harm anyone else.

***

What can we conclude from Fukuyama’s argument and from Bostrom and Sandberg’s? What sort of regulation, if any, should be in place to monitor the development of cognitive enhancers?

Continue on to Part 3.
