In the sixth grade, I was staring at a classmate who was babbling about some inane topic, when I briefly wondered: “Is he alive?”
By “alive,” I did not mean that he satisfied the functional criteria of a life form. That seemed to follow straightforwardly from his behavior. I meant: did he have experiences, like I did? Was there a technicolor film running behind all of his expressions and noises and gesticulations? An inner illuminated world of sights, sounds, smells, pains, pleasures, and other forms of feeling and conscious perception? Or was he rather devoid of consciousness—a philosophical zombie? Mere clockwork in the darkness?
Philosophers call this the “problem of other minds.” Normally it does not perturb us with regard to other humans, given the similarity of their behavior and physiology to our own; one must be pathologically philosophical, I guess, to fall down that rabbit hole. But the problem grows fangs when raised about various other organisms and actual or hypothetical machines. It is not quite the problem I will tackle in this post, but it is certainly related.
Fast-forward to my undergraduate years. After a psychology lecture, I remember contemplating how one might construct a machine that could feel pleasure. I supposed that one would have to imitate, to a sufficiently fine grain, the electrochemical processes responsible for pleasure in animal brains. But then the strangeness of that correlation struck me. Why should the shuffling of electrons and molecules give rise to a feeling of bliss, to that seemingly ineffable state of glorious “yes-ness”? Sure, the physical process reinforces certain behaviors, but that seemed possible without any feeling at all—a matter of adjusting synaptic weights in the neural network. And if I started with a simple, evidently mindless machine, and kept adding components and wiring, increasing its complexity and its information-processing capabilities, at no point would I expect the structure to acquire the capacity to have feelings and conscious perceptions. What hocus pocus would transform electricity and wiring into experiences like the color and taste of turmeric? (See Figure 1 below.)
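The point that reinforcement seems possible without any feeling can be made concrete with a toy sketch. Everything below (the weights, the update rule, the numbers) is invented purely for illustration; it models no actual neural process. It simply shows that "reward strengthens behavior" can be nothing more than arithmetic:

```python
# A toy sketch: "reinforcement" reduced to arithmetic on synaptic
# weights. Reward nudges the weights that produced the rewarded
# behavior; nothing here feels pleasure at any point.

def act(weights, stimulus):
    """Weighted sum decides the behavior: 1 (approach) or 0 (avoid)."""
    activation = sum(w * s for w, s in zip(weights, stimulus))
    return 1 if activation > 0 else 0

def reinforce(weights, stimulus, reward, lr=0.1):
    """Nudge each weight in proportion to its input and the reward."""
    return [w + lr * reward * s for w, s in zip(weights, stimulus)]

weights = [0.2, -0.1]   # arbitrary initial "synaptic" weights
stimulus = [1.0, 0.5]   # a fixed input pattern

for _ in range(10):
    behavior = act(weights, stimulus)
    reward = 1.0 if behavior == 1 else -1.0  # "pleasure" is just a number
    weights = reinforce(weights, stimulus, reward)

print(weights)  # the weights have grown; the behavior is entrenched
```

The loop does exactly what the paragraph describes: it strengthens whatever connections produced the rewarded output. Whether such bookkeeping could ever amount to an experience of bliss is precisely what the hard problem asks.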
David Chalmers (2003) has famously called this the “hard problem of consciousness.” Physical events, conceptually or logically speaking, do not seem to imply conscious ones, even though we do find them paired in experience. We certainly have the instinct or habit of attributing consciousness on the basis of certain appearances and behaviors in other people and animals, but as I realized in that sixth-grade classroom, we have no assurance here. Nor do we have assurance in our assumptions that simple animals, plants, and various nonliving things lack consciousness. The link between physical and conscious events is unclear, unexpected, and difficult to establish, especially as we look beyond ourselves and consider beings more and more disanalogous to us. Hence, the hard problem of consciousness and the problem of other minds are tightly interwoven. Both are worthy of our reflection, given the great value and moral significance we place on consciousness. In fact, some have argued that certain states of consciousness are the only intrinsically valuable things. Along with the problem of other minds, the problem of intrinsic value is a question for another post.
To address the hard problem, I want to begin by making an epistemological distinction between mysteries and marvels.
A mystery is something unknown, something in need of explanation. Some philosophers have distinguished between “mysteries” and “puzzles,” regarding the former as less tractable than the latter, but for my purposes here, we can regard the two terms as synonymous. I have mentioned a mystery already: the problem of other minds. We don’t know to what extent many other organisms or even certain machines are conscious. We also don’t know many details about 1) which kinds of physical events correspond to specific kinds of conscious events (like the color or taste of turmeric), and 2) which kinds of physical events correspond to consciousness in general. The former has been called the neural correlates of consciousness (NCCs), and the latter has been called the global neural correlate of consciousness (GNCC) (see, e.g., Metzinger, 2009, p. 11). While plenty of mysteries remain, cognitive science has made great strides on these fronts. See Stanislas Dehaene (2014) for an excellent overview and analysis of research related to the GNCC.
Some philosophers and scientists have claimed that, besides the NCCs and the GNCC, the hard problem of consciousness is also a mystery, with an “explanatory gap” that needs to be filled. Some have even claimed that this is a mystery that humans do not have the cognitive capacity to solve. The philosopher Colin McGinn (1989) is one such mysterian; the cognitive scientist Steven Pinker (1997) has also made remarks along these lines (pp. 561–565).
Let us stipulate that a marvel is something strange, counterintuitive, surprising, or awe-inspiring that nonetheless is known and does not stand in need of further explanation. I might marvel at the collective ingenuity and organization of an ant colony while knowing all the relevant eusocial details about how those traits emerge. I might consider a certain conclusion of relativity, quantum mechanics, or particle physics to be marvelous, without needing to believe that some more intuitive mechanism must explain its weirdness. It may just be a bizarre, brute fact about physical reality. Human cognition may be limited in its ability to find such facts intuitive, yet still competent to discover whatever facts there are.
In my next post (Part 2), I will discuss some reasons for thinking that the hard problem is a marvel, and not, in fact, a mystery. Let me know your thoughts about this post and topic in the Reply section below!
Chalmers, David (2003). Consciousness and its place in nature. In Stephen P. Stich & Ted A. Warfield (eds.), Blackwell Guide to the Philosophy of Mind. Blackwell. pp. 102–142. Retrieved online at: http://consc.net/papers/nature.pdf.
Dehaene, Stanislas (2014). Consciousness and the brain: deciphering how the brain codes our thoughts. New York: Penguin Books.
McGinn, Colin (1989). Can we solve the mind-body problem? Mind 98 (July): 349–366.
Metzinger, Thomas (2009). The ego tunnel: the science of the mind and the myth of the self. New York: Basic Books.
Pinker, Steven (1997). How the mind works. New York: W. W. Norton & Company.