Technology and Consciousness

Note from Susan

What does consciousness do, what is it for, and what does it have to do with artificial intelligence? In the summer of 2017, I was honored to present at one of a series of eight workshops organized by SRI International’s Computer Science Laboratory that looked at these and other questions related to technology and consciousness. Daniel Sanchez and John Rushby’s smart, accessible Technology and Consciousness Workshops Report is now available. It provides an excellent overview of theories of consciousness, the possibility of machine or technological consciousness, and their potential implications. Take a look at the report’s final paragraph:

If we accept that technological consciousness is a possibility, or that machines without consciousness may come to possess capabilities associated with consciousness, then issues of safety and control and of moral obligation need to be addressed. These issues also cut across philosophy, computer science, and other disciplines such as law and sociology. In fact, long before we reach questions of consciousness, philosophical questions abound in modern computer science: assurance asks what we know about our system (i.e., epistemology), and self-driving cars, chatbots, and assistive robots all pose problems in ethics.

So, looking forward, we urge continuing cross-disciplinary study of these topics.

Cross-disciplinary work has challenges, but the rewards would be considerable.


 

Technology and Consciousness Workshops Report

John Rushby and Daniel Sanchez

Computer Science Laboratory

SRI International, Menlo Park CA USA

Abstract

We report on a series of eight workshops held in the summer of 2017 on the topic “technology and consciousness.” The workshops covered many subjects, but the overall goal was to assess the possibility of machine consciousness and its potential implications.

In the body of the report, we summarize most of the basic themes that were discussed: the structure and function of the brain, theories of consciousness, explicit attempts to construct conscious machines, detection and measurement of consciousness, possible emergence of a conscious technology, methods for control of such a technology, and ethical considerations that might be owed to it. An appendix outlines the topics of each workshop and provides abstracts of the talks delivered.

Conclusion and Next Steps

The possible emergence of technological consciousness is an important topic: were it to happen, it could have significant impact on our future, our safety, our institutions, and the nature of our relationship with machines.

It is clear that our technology will have computational power approximately equivalent to that of a human brain within a decade or so, but apart from that observation, assessment of the feasibility of technological consciousness seems to depend more on our knowledge and beliefs about consciousness than on technological questions.

Currently, we have no good theory of (human) consciousness: we do not know how it works, what it does, nor how or why it evolved. This leaves space for much speculation and opinion. Many believe that phenomenal consciousness (the personal sense of experience) is the important topic—it is the essence of what it means to be human—and the possibility that it could arise in a machine is of profound concern. Yet we know so little about phenomenal consciousness that we cannot tell whether animals have it. Some believe it is part of the basic mechanism of advanced perception, arose 500 million years ago (in the Cambrian explosion), and is possessed by all vertebrates [Feinberg and Mallatt, 2016]. Others say it arose in anatomically modern humans only a few tens of thousands of years ago, when signs of creativity and art first appear in the archaeological record [Mithen, 1996] (some say even later, just 3,000 years ago [Jaynes, 1976]), and is therefore absent in animals (though there are likely some evolutionary antecedents). Others say it is an epiphenomenal side effect of other developments (e.g., intentional consciousness—the ability to think about something) and does not matter much at all.

Those of the latter opinion might argue that concern for phenomenal consciousness is an anthropomorphic indulgence and that intentional consciousness is what matters, since it seems to be the enabler of advanced cognitive capabilities such as counterfactual reasoning and shared intentionality (the ability to create shared goals and engage in teamwork). Yet others might observe that consciousness may be needed to enable these capabilities in humans, but technology can achieve them by other means. 

One of the greatest benefits of contemplating technological consciousness is that, by elaborating the points above, it helps identify and isolate many aspects of consciousness that are entwined in humans. Further, explicit effort (i.e., simulations and other experiments) to create machine consciousness not only helps evaluate theories of consciousness, it forces those theories to be articulated with sufficient precision that their evaluation becomes possible. Thus, study of technological consciousness illuminates human consciousness, and that, in turn, can better inform consideration of technological consciousness. Additional study of these topics is highly recommended, and it must be a cross-disciplinary effort, with contributions needed from philosophy, psychology, neuroscience, computer science, and several other fields.

If we accept that technological consciousness is a possibility, or that machines without consciousness may come to possess capabilities associated with consciousness, then issues of safety and control and of moral obligation need to be addressed. These issues also cut across philosophy, computer science, and other disciplines such as law and sociology. In fact, long before we reach questions of consciousness, philosophical questions abound in modern computer science: assurance asks what we know about our system (i.e., epistemology), and self-driving cars, chatbots, and assistive robots all pose problems in ethics.

So, looking forward, we urge continuing cross-disciplinary study of these topics. Cross-disciplinary work has challenges, but the rewards would be considerable.

 