Downloading Consciousness: a BrainHub panel discussion

Alison Barth, a professor of biological sciences at Carnegie Mellon, participated in a panel held by BrainHub. (credit: Alison Barth)

Wayne Wu, an assistant professor and Associate Director of the Center for the Neural Basis of Cognition, participated in a panel held by BrainHub. (credit: Wayne Wu)

On Thursday afternoon, members of the Center for the Neural Basis of Cognition (CNBC) assembled for a panel discussion on the relationship between computer science and neuroscience, and on the connection between the brain’s circuitry and consciousness.

The panel, moderated by Alison Barth, professor of biological sciences and interim director of BrainHub, was composed of Carnegie Mellon faculty from the Departments of Computer Science, Neuroscience and Biological Sciences, and Philosophy. The panelists were Sandra Kuhlman, assistant professor in the Department of Biological Sciences; Wayne Wu, assistant professor and Associate Director of the CNBC; David Touretzky, research professor in the Computer Science Department and the CNBC; and Anind Dey, a faculty member in the Department of Human-Computer Interaction.

In Barth’s words, the central question of the discussion was, “How close are we to being able to represent the human consciousness in computers?”

Kuhlman opened the panel by speaking about the brain’s complexity. After listing some of the ways this complexity manifests, she expressed doubt that neuroscientists will fully understand the brain within her lifetime, stating that she is more interested in “developing strategies for understanding the brain” than in pursuing a complete understanding.

Next, Wu discussed a paper that used auditory perception to probe consciousness: some brain activity was observed when a subject was presented with a sound but did not consciously “hear” it, whereas brain activity was widespread when the subject did hear the sound, that is, was conscious of it. From this, Wu concluded that not all brain activity “gives rise to mentality,” which adds another difficulty to an already daunting list.

Wu went on to discuss the idea of downloading consciousness: mirroring the brain’s circuitry and attempting to replicate consciousness in a computer. He admitted that “there are already problems connecting the brain to the mind, so there would be many more issues in connecting the brain to computational systems.” As it stands, there is no clear evidence of a biological basis for consciousness, and no explanatory connection between intelligence and the circuitry of the brain.

Wu brought up the Turing Test, in which a human (A) and a computer (B) are asked questions by an impartial party (C); based on the questions and their answers, C tries to determine which one is human. If C cannot reliably tell them apart, the computer is said to have passed the test, which some take as evidence of an artificial consciousness’s legitimacy. This idea was applied to systems such as Watson, IBM’s artificial intelligence that beat human players at Jeopardy!; SimSensei, a virtual therapist; and chatbot programs that simulate conversation in text. The argument is that artificial intelligence must be more than the ability to sustain a dialogue; some programs manage that simply by reiterating the question they were asked, or, in Watson’s case, by answering correctly. These sorts of virtual consciousnesses are the closest we can currently come to a truly artificial intelligence, and they appear to be on the right track.
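The trick of “reiterating the question originally asked” mentioned above is the core of classic ELIZA-style chatbots. As a rough illustration (this sketch is my own, not anything presented at the panel), a program can appear conversational by swapping pronouns in the user’s question and echoing it back, with no understanding at all:

```python
# ELIZA-style chatbot sketch: it "converses" by reflecting the user's own
# words back, the trick the article notes some programs use to simulate dialogue.

# Words to swap so the echoed question addresses the user.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(text: str) -> str:
    """Swap first- and second-person words so the input can be echoed back."""
    words = text.lower().rstrip("?.!").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(question: str) -> str:
    """Reply by reiterating the question rather than understanding it."""
    return f"Why do you ask whether {reflect(question)}?"

if __name__ == "__main__":
    print(respond("Do you think?"))
```

A program like this can fool a careless interrogator for a few exchanges, which is exactly why passing a conversational test is a weaker standard than possessing consciousness.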

The conversation gradually shifted from creating artificial intelligence in computers to the possibility of transferring a human consciousness into one. The panelists focused on a case in which a neuroscientist who developed a brain tumor elected to have her brain preserved, in the hope that her consciousness could one day be replicated in a computer and she and her boyfriend could talk after she passed away. The panelists agreed that cryopreservation, and this situation in general, raises many problems: there is natural cell death and the tissue damage that comes from freezing, and since we do not yet have a concrete connection between the brain and the mind, it would be a long while before anything like this is even plausible. There is also the fact that any reconstruction of her brain would be a reconstruction of a haywire, tumor-ridden brain.

Dey talked about work being done at the University of Southern California, where researchers are attempting to preserve the consciousness of a Holocaust survivor. They recorded him with multiple cameras in order to produce a 3D holographic reconstruction of his body. When the hologram speaks, it mimics the normal physical patterns of human speech very well. The hologram is a repository of information, much like a human intelligence: one can ask it a question, and it will reply by combining many facts about the man and responding in a very human way. Of all the examples, this seemed to be the most interesting push toward a real consciousness within a computer.

The thread connecting these examples is that there is still no clear link between the mind and the brain, between consciousness and circuitry. The most difficult task, the panel agreed, would be to recreate a specific consciousness and have it be recognizable to a loved one.

As of now, it is far easier to replicate a simple emotion, or a person you have never met, than a particular individual. Recreating a specific consciousness raises the question: just because a program can think like a person, or has the same wiring and circuit activity, is that program then that person? Would it need access to that person’s memories to truly replicate their consciousness? Is a simple replication of the circuitry enough, or is there something more, something intangible about the individual, that makes this concept of downloading consciousness unproductive? Right now, there are more questions than answers.