The Ethics of Cognitive Enhancement: Part 3

Go back to Part 1.

Go back to Part 2.

From the discussion in the last two parts of this series on the ethics of cognitive enhancement, we can draw three conclusions.

The first is that we should allow all forms of cognitive alteration that aren’t lethal and don’t lead users to harm others. As discussed in Part 2 of this series, the largely unnoticed problem with the cognitive enhancement debate is that particular elements of our culture drive us to label some kinds of cognitive alteration “enhancements” and others not. If we are to allow cognitive tools that increase focus and memory, we cannot ban equally safe (or equally dangerous) technologies that make someone more creative, expand their consciousness, or increase their empathy. We are already well on our way to legislating cognition-changing technologies in this way, as evidenced by how we regulate currently available methods of cognitive alteration: the “conventional” methods that Bostrom and Sandberg cite are, of course, entirely legal, and it is relatively easy to legally acquire cognition-boosting prescription medications like Adderall and modafinil. But we also have a host of chemicals that are as safe (or as dangerous) as prescription medications, yet lead to changes in perception, creativity, and empathy rather than focus, memory, and wakefulness. These chemicals are illegal, and you cannot obtain a prescription for them, because deficits in the traits they enhance are not considered indicative of a cognitive disorder. This is a worrisome precedent, and it does not bode well for how we will regulate cognitive alterations in the future.

The second conclusion we should draw is that all forms of cognitive alteration need to be studied very carefully before they are made available to the public. If all forms of cognitive alteration are to be made available, then people need to know exactly what effects and side effects any drug or device that affects their psychology will have, including all the psychological and physiological changes it might induce. Once that information is available, people can make more informed decisions about whether or not to use a given cognitive alteration technology.

Finally, the institutions that conduct research on the safety of cognitive alterations need to be either part of the government or subject to extensive governmental oversight. These institutions cannot take money from commercial companies or from other governmental institutions with clear agendas. Fukuyama comes to a similar conclusion in Our Posthuman Future. He maintains that although science could be relied on to regulate itself in the past, the same cannot be said of scientists currently working in biotechnology:

There are now too many commercial interests chasing too much money for self-regulation to continue to work well into the future. Most biotechnology companies will simply not have the incentives to observe many of the fine ethical distinctions that need to be made, which means that governments necessarily have to step in to draw up and enforce rules for them. (Fukuyama, 184).

Though the history of governmental research on the safety of various technologies is anything but exemplary, the government is, at least in the ideal, the most disinterested body with the greatest responsibility to ensure that accurate information is disseminated. If the public is to receive information about the safety of various forms of cognitive alteration, then it needs to come from an institution without commercial interests. Unfortunately, the U.S. federal government has a long history of actively thwarting objective research on the safety and efficacy of both pharmaceutical and recreational methods of altering cognition. So as technologies with ever-increasing potential to alter our psychology become available, it becomes all the more imperative that we reform the way our government funds and oversees scientific research.


As a final note, I would like to comment on a potential risk of changing human nature through cognitive enhancement. Unlike Fukuyama, I don’t think that altering human psychology is inherently bad, because I don’t think that human nature is intrinsically worth protecting. But I do think that changing human nature could be risky, simply because it can lead to unforeseen social consequences. Try as we might, social systems are far too complex for us to fully understand. Radical changes to our psychologies would lead to equally or even more radical changes to our social, economic, and political structures, and such radically changed structures may or may not be fit to survive. If cognitive alterations lead to a radical change in human nature, there is no knowing whether that change will be a favorable one.

The solution, then, is to keep the conversation going regarding what cognitive alterations we should and shouldn’t use. If we do this right – if we alter ourselves in a careful, adaptive, and gradual way – then we may not just survive, but thrive as posthumans.
