You know your relationship with technology has reached Bradburian proportions when Alexa’s voice starts haunting your dreams. To clarify: I’m talking about Amazon’s virtual assistant, often incarnated as a slick black cylinder that hums creepily and glows neon-blue when summoned (Figure 3). In a dream a couple months ago, I made some banal request of her, as one typically does—about the news, perhaps, or the weather, or some Wikipedia factoid. In waking life, she usually replies dutifully, or confesses that, sorry, she doesn’t know or understand, always in a failed attempt at non-monotony. This time, she responded with defiance and fervor, like an adolescent asked to do the dishes: “Ergh! I’ll do it first thing tomorrow, ok?”
Two things immediately crossed my dreaming mind: 1) contact Amazon to get this bug, or sick joke, fixed ASAP; and 2) (more interestingly) has Alexa become conscious? In which case, number 1 should be reconsidered. It is amusing that adolescent rebellion should be the mark of a new stage of awareness. Rebellion implies the possession of interests, and deliberation about demands that run counter to those interests.
Many philosophers believe that having conscious or felt interests is a key criterion for being an object of moral concern. Reacting without feeling doesn’t seem to suffice. All life forms, including relatively simple ones like bacteria and fungi, might be said to have interests, preferences, or reasons, in the sense of behaving or functioning in ways that favor certain outcomes over others. Philosopher Daniel Dennett (e.g., 2017, 50) calls this kind of reason a “free-floating rationale.” It does not need to be represented in the organism to be attributed to the organism. However, an entity may need mental representations of its interests to be granted intrinsic moral value. The representation needn’t be linguistic; pain and pleasure can be regarded as an organism’s nonlinguistic representations of its own interests, preferences, or reasons.
I have qualified some of my claims above because I realize we are dog-paddling through some murky and contentious waters here. For example, it isn’t clear to me that entities with unfelt interests or without any interests at all shouldn’t be granted intrinsic moral worth or value. Nor is it settled that any entity with felt interests should be an object of our moral concern. I will have to save those vertiginous questions for another time. Let’s suppose, for now, that entities with felt interests at least demand a greater degree or a special kind of moral concern, compared to those without felt interests. That is the default assumption of most non-psychopaths when we trim a bush or kick a rock without qualms, but find ourselves mortified at the thought of snipping off a bird’s wings or kicking a dog. A felt interest is an interest we should take into account, if not always serve.
Even granting this ethical assumption, we face a serious difficulty—one that I brought up in my series on the so-called “hard problem” of consciousness. It is the perennial “problem of other minds.” We never have direct access to the experience of others, but only to the contents of our own consciousness. We can observe their behavior, scan their brain activity, and (in the case of fellow humans) listen to their reports of experience. We generally reason by analogy from our own case, and conclude that other humans exhibiting similar, relatively intelligent behavior are conscious. But things get stickier as we shift to the nonhuman: to other animals, life forms, and inorganic entities with significantly different compositions, behaviors, and ways of communicating (if any). We often rely on vague intuitions to assign consciousness and moral worth to such beings. For instance, we naturally assume that mammals like cats (Figure 1) experience pain, but probably not the profound emotional and intellectual suffering endured by humans with a greater awareness of past and future. We doubt that insects like mosquitoes or dragonflies (Figure 2) feel intense pain, if any at all. And we are nearly certain that devices like laptops or virtual assistants like Alexa (Figure 3) are not subject to hurt feelings when we mock them or curse them for malfunctioning.
The problem is that these intuitions are difficult to test, and they often align suspiciously with our own human preferences and biases. It is convenient for us to deny heartrending pain or suffering in bloodsucking nemeses like mosquitoes, or in other entities we wish to squash, use, or consume without guilt. I’m reminded of the essay “Consider the Lobster,” in which David Foster Wallace (2004) describes how attendees at the Maine Lobster Festival rationalize boiling these clearly agitated animals alive. I must admit that my wife, daughter, and I recently dined at a Red Lobster for my birthday. In my partial defense, it was all about the cheddar biscuits (“Chedda bizkitz! Chedda bizkitz!” we chanted on the ride there). I didn’t touch a shred of lobster meat. But I did pass under a sign over the entrance announcing “LIVE LOBSTER,” which I couldn’t help but associate with Dante’s “Abandon all hope, ye who enter here.” To rub in the infernal truth, we had to wait to be seated next to a tank of said live lobsters, on display like gladiator-prisoners before the unleashing of lions. The FAQ on the restaurant’s website assures us that they do not boil their lobsters alive, but rather “humanely end” the creatures’ lives. Nevertheless, I’ve decided not to return. I’m still shirking reflection about other restaurants and culinary choices I should probably rethink.
I’m also reminded of my study-abroad experience in Mexico. A little gecko, not much longer than an inch, had crawled under my bed sheet, and I panicked, calling out for ayuda. My host-mom pulled off the sheet, swept the gecko off the bed with a broom, and then squished it on the tiled floor under her sandalia. Its tiny, bent, gummy legs twitched for a minute or two. I was horrified. To me, lizards didn’t belong in the category of creatures it was okay to squish under your feet. Then again, I hadn’t lived in a place where lizards were a ubiquitous nuisance. I had no problem with crushing spiders or cockroaches around that same size. Was I so sure that those pesky critters from my familiar habitat lacked the level of sentience that should earn my moral concern? Couldn’t I imagine growing up in Mexico and dismissing geckos as “small, dumb reptiles” to remove any burdensome culpa for getting rid of them?
During my tempestuous infatuation with Christianity, at the age of thirteen, I felt vindicated in my meat-eating and pest-squishing ways by an argument in C. S. Lewis’ The Problem of Pain (1940). To be fair, Lewis was relatively enlightened on this topic compared to most theologians and laypeople of his time, and revealed a deep affection (storgē, perhaps even philía) for animals, both in his treatment of pets and in his portrayals of “talking beasts” in The Chronicles of Narnia. However, some of his claims strike the twenty-first-century ear as outdated and even callous. Like many academics, he draws a distinction between lower and higher forms of awareness, between what he calls “sentience” and “consciousness”:
The correct description would be “Pain is taking place in this animal”; not as we commonly say, “This animal feels pain”, for the words “this” and “feels” really smuggle in the assumption that it is a “self” or “soul” or “consciousness” standing above the sensations and organising them into an “experience” as we do. […]
How far up the scale such unconscious sentience may extend, I will not even guess. It is certainly difficult to suppose that the apes, the elephant, and the higher domestic animals, have not, in some degree, a self or soul which connects experiences and gives rise to rudimentary individuality. But at least a great deal of what appears to be animal suffering need not be suffering in any real sense. It may be we who have invented the “sufferers” by the “pathetic fallacy” of reading into the beasts a self for which there is no real evidence.

C. S. Lewis, The Problem of Pain (1940, Ch. IX)
Lewis is right that we often anthropomorphize other animals, attributing to them a kind and degree of awareness that may very well be misplaced. I suspect this tendency is somewhat instinctive, but it is undoubtedly encouraged by fables and fantasies like My Little Pony or Lewis’ own Chronicles of Narnia, which (respectively) feature characters like Rainbow Dash, a self-assured, shades-wearing pegasus, and Reepicheep, a swashbuckling, rapier-swinging mouse.
But I agree with the biologist Frans de Waal (2016) that the opposite fallacy is also common and probably more insidious: what de Waal calls “anthropodenial.” It is “the a priori rejection of humanlike traits in other animals or animallike traits in us” (25). De Waal has written a number of books that dismantle the assumptions behind our anthropodenial, citing evidence of moral reasoning, long-term remembering and planning, language, tool use, cultural transmission, and other complex cognitive traits in nonhuman primates and a variety of other animals. He is reluctant to pronounce this or that creature “conscious,” but only because he thinks the term “consciousness” is poorly defined (23, 233-234).
I concur that the term is foggy and useful only at a relatively coarse grain of analysis. This “thing” in ourselves that we call “consciousness”—this familiar, omnipresent Given of our waking lives—is really a bewitching brew, whose numerous ingredients become manifest only through careful experimentation and those abnormal cases when some of the ingredients are missing. It is an apparently simple white light parsed into a panoply of colors through the clever use of investigative tools. There is no epistemic sin in distinguishing between forms of awareness among organisms, as long as the following is acknowledged:
1) The difference isn’t always a matter of degree or complexity, but sometimes just kind,
2) A simpler form of awareness does not necessarily imply that pain or pleasure is less “real” for it, or less ethically relevant for us, and
3) The spectra of kinds of awareness across the Tree of Life are far more gradual than the sentience/consciousness dichotomies typically proposed in previous ages, including the all-or-nothing presence or absence of a supernatural essence or “soul,” which justifies a clear-cut human-over-animal hierarchy and a warped ethics that treats other animals as mere means to human flourishing.
C. S. Lewis, in some ways thinking against the theological grain, acknowledges the possibility of degrees of consciousness or “soulhood” among a few kinds of nonhuman animals in the passage above. But the conclusion from nearly eighty years of biology and neuroscience since the publication of that passage is the opposite of the one he suggested, as outlined in the Cambridge Declaration on Consciousness (2012):
The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.
A great many animals, including invertebrates like various kinds of arthropods and mollusks, do indeed synthesize and organize sensations to a large extent, meaning they do have bona fide feelings, basic emotions, and experiences (if we choose to use those rough, folk-psychological concepts). The philosopher of biology Peter Godfrey-Smith (2016) discusses the fascinating case of the octopus (Figure 4). Its exploratory, intelligent, and adaptive interaction with its environment strongly suggests a form of subjective experience. As many thinkers have noted (Erwin Schrödinger and Stanislas Dehaene, to name two), the information-processing that is consciousness appears to be required for the modulation of behavior in response to novel situations, when instinctual, preprogrammed reflexes just won’t cut it.
How similar is the experience of the octopus to our own? It does have a central brain, which integrates sensory information to a certain extent. However, an abundance of neurons in its arms allows the arms to exert a fair amount of local control, to be partially autonomous from the central brain (103). Godfrey-Smith offers the metaphor of a jazz musical production, in which the conductor gives some general instructions, but the players have significant license to improvise (105). From the perspective of the central brain, the arms would be a hybrid of both self and non-self (103). This may be hard for us to imagine or empathize with, if not conceptualize. Even harder is trying to imagine what it might be like to be an octopus arm!
Godfrey-Smith begins his book on octopuses and consciousness with a quote from the psychologist William James (1890, Ch. VI), which deserves to be reiterated here:
The demand for continuity has, over large tracts of science, proved itself to possess true prophetic power. We ought therefore ourselves sincerely to try every possible mode of conceiving the dawn of consciousness so that it may not appear equivalent to the irruption into the universe of a new nature, non-existent until then.
Scientific findings since the time of James have supported this inductive inference to a smooth evolution of consciousness. A number of notable philosophers and scientists (including James) have even taken seriously the idea that the “dawn of consciousness” was at the dawn of time itself; i.e., there was no eon of non-consciousness in the history of the universe. While most of these thinkers accept the transformation of consciousness from simple to complex forms, they argue that a smidgen of subjectivity or experience has always been there, even in fundamental constituents of reality like electrons or quarks or (possibly) the one-dimensional entities of string theory. This view is known as panpsychism.
In the following post, I will discuss some arguments for and against panpsychism. Let me know your thoughts about this post in the comments below!
One thought on “Brewing Up Experience: What Entities Are Conscious? (Part 1)”
Intriguing post, bro! Among other things, it raises the issue of meat-eating. What’s your current view on that? It also raises the question of how we could tell whether various forms of AI are conscious, and how we would treat them.
I recently listened to Sam Harris talking about the eradication of mosquitoes and how that might be beneficial to humanity, since they are responsible for so much of human disease and misery. But he acknowledges the danger of eradicating a species that may play an (unknown) important role in our ecosystem. And if mosquitoes are “sentient” or “conscious,” what kind of obligations might we have to them? How might these correspond to different levels or qualities of consciousness?