The Ethics of Unplugging Your Computer

Because CONvergence is largely a fan convention, many of the panels involve panelists whose qualifications amount to “I was really excited about this topic.” Sometimes this means you end up with a very interesting variety of perspectives, but unfortunately sometimes it makes for panels with unprepared and uninformed panelists. One that I attended this weekend was “When Is Turning Off a Computer Murder?” The concept of this panel (when and how might a computer reach a state of consciousness on par with personhood) was fascinating. The execution, less so.

So for that reason, I’m going to explore what makes something eligible for ethical consideration, connect those concepts with sentience and consciousness, and see if we know anything about whether or not machines have reached these stages yet.

Let’s start with the concept of murder, since the title of the panel asks when we think a being deserves moral consideration. There is a great deal of argument within ethical spheres over what kinds of beings deserve moral consideration. Religious ethics often affords human beings special consideration simply by dint of being part of the species. However, most ethical systems have a slightly more objective criterion for ethical consideration. Some ethical systems hold that being alive constitutes enough of a reason to let something keep living; others suggest sentience, others consciousness, others self-consciousness. I personally tend towards Peter Singer’s preference utilitarianism, which suggests that we shouldn’t cause any unnecessary pain, and that if something has a preference or interest in remaining alive, we shouldn’t kill it.

Even more complex is the fact that we have different standards about which types of beings we shouldn’t harm and which types of beings we should hold responsible for their actions. For example, many people feel that we shouldn’t kill animals, but they do not feel that animals should be held morally responsible for killing each other.

I doubt we’re going to reach any conclusions about what types of beings deserve moral consideration and what exactly constitutes murder, especially considering the fact that the animal rights debates are still raging with a fiery intensity, but we can at least potentially place computers and machines somewhere in the schema that we already have for living beings. For more conversation on what personhood might be and who deserves moral consideration, check out the Stanford Encyclopedia of Philosophy or Center of Ethics at U of Missouri.

But let’s start at the very beginning of moral consideration: living things.

The characteristics that we currently use to classify something as “living” include internal organization, using energy, interacting with the environment, and reproduction. Machines have been able to do all of these things, so it doesn’t seem off base to consider some machines “alive”, at least in some way. Some people may assert that because computers can’t reproduce or replicate in an organic way they are not alive, but this seems at odds with the way we treat human beings who cannot reproduce (hint: they don’t suddenly become non-human). One important element of being alive that we usually take into consideration when thinking of ethics is pain: hurting things is bad. The question of whether or not computers can feel pain is wrapped up in the questions of consciousness that will be discussed later.
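To make the checklist nature of “living” concrete, here’s a toy sketch of scoring an entity against those four criteria. All of the names here are my own invention, purely for illustration; no one actually classifies life this way in code.

```python
# Toy checklist for the "living" criteria named above.
# Illustrative only: the criteria names are mine.

LIFE_CRITERIA = [
    "internal_organization",
    "uses_energy",
    "interacts_with_environment",
    "reproduces",
]

def life_score(entity):
    """Count how many of the classic criteria an entity meets."""
    return sum(1 for c in LIFE_CRITERIA if entity.get(c, False))

robot = {
    "internal_organization": True,
    "uses_energy": True,
    "interacts_with_environment": True,
    "reproduces": False,  # like an infertile human -- who is still alive
}
print(life_score(robot))  # prints 3 (of 4)
```

The point of the partial score is the same as the argument above: failing one criterion (reproduction) doesn’t obviously disqualify something from being alive.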

The next level of moral consideration is usually sentience. Most people use the words “sentient” and “conscious” fairly interchangeably, and one of the difficulties with the panel was that neither of these terms was defined. Typically, “sentient” simply means capable of sensing and responding to the world. Under this definition, computers have definitely reached a level of sentience, although their senses differ from human senses (which is not a problem as far as sentience goes: there are certainly sentient animals, such as dolphins or bats, with senses like echolocation that humans do not have).
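Just to underline how low this bar is, here’s a toy sense-and-respond loop, with all names my own and purely illustrative. By the minimal definition above, even a thermostat-style program “senses” and “responds”:

```python
# A toy agent that meets the minimal "sentience" bar:
# it senses something about the world and responds to it.
# Purely illustrative; no claim that anything here is conscious.

def sense(environment):
    """Read one value from the world (here, a temperature)."""
    return environment["temperature"]

def respond(reading, threshold=20):
    """React to the sensed value."""
    return "heater on" if reading < threshold else "heater off"

world = {"temperature": 15}
print(respond(sense(world)))  # prints "heater on"
```

That a few lines of code clear the bar is exactly why sentience alone feels like too weak a criterion for moral consideration, and why the harder questions land on consciousness.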

Here’s where it starts to get complicated: consciousness. Trying to define consciousness is a little bit like making out with a sea slug: it’s slippery and uncomfortable and you’re not entirely sure why you’re doing it. But unlike sea slugs, consciousness is an integral part of our experience of the world and is highly relevant to our moral choices, so we probably should spend some time grappling with it and hoping it doesn’t wriggle out of our fingers (side note: my brother did once kiss a sea slug).

There are lots of things that make consciousness tough to pin down. The first of these is that depending upon what we’re trying to talk about or the context in which we’re speaking, the way we define consciousness changes. The entry on consciousness in the Stanford Encyclopedia of Philosophy lists six different ways to define consciousness*:

1. Sentience

2. Wakefulness

3. Self-consciousness

4. What it is like

5. Subject of conscious states

6. Transitive consciousness (being conscious of something)

Some of these are clearly more relevant for moral considerations than others (we don’t generally consider wakefulness relevant in our moral decisions). We’ve already touched on sentience, but let’s take some time to examine the other possible definitions and how we could determine whether or not computers have them.

Self-consciousness is often a test for whether or not something should have moral standing. It’s often used as an argument for why we should afford more consideration to animals like dolphins and chimps. Currently, we use the mirror test to determine whether or not an animal is self-conscious. This test is not perfect, though, as self-consciousness is an inner awareness of one’s own thoughts: it relies on metacognition, inner narrative, and a sense of identity. This points to one of the serious challenges of understanding consciousness: we cannot understand it simply by using “objective” data. Because it is a subjective state, it requires both first- and third-person data.

With those caveats, there is a robot that has passed the mirror test. This is a good indication that it has some sense of self-awareness. What it doesn’t give us information about is “what it is like”, which is the next possible definition of consciousness. This suggestion is championed by Thomas Nagel (who is really one of the more fantastic philosophers writing today). The best example of this is Nagel’s classic essay “What Is It Like to Be a Bat?” (that title alone is one of the reasons I loved majoring in philosophy), in which Nagel explores the idea of experiencing the world as a bat and posits that the consciousness of a bat is the point of view of being a bat. This may seem tautological, but it gets at the idea that consciousness is a subjective experience that cannot be witnessed or entirely understood from an external perspective. We can have some cross-subjective understanding of consciousness and experience between beings that are quite similar, but (as an example) we as humans are simply not equipped to know what it is like to be a bat.
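The logic behind that kind of robotic self-recognition is often a contingency check: move, watch the mirror, and ask whether the observed motion tracks your own motor commands. The sketch below is my own illustration of that idea, not the actual robot’s code, and every name in it is made up:

```python
import random

def mirror_test(issue_command, observe_motion, trials=20):
    """Toy contingency check: does what I see move when (and how) I move?"""
    matches = 0
    for _ in range(trials):
        command = random.choice(["left", "right", "still"])
        issue_command(command)           # act
        if observe_motion() == command:  # did the image follow suit?
            matches += 1
    # Near-perfect contingency -> "that moving thing is me"
    return matches / trials > 0.9

state = {}
def move(cmd): state["last"] = cmd  # our own body
def mirror(): return state["last"]  # a reflection: moves exactly as we do
def television():                   # unrelated motion: no contingency
    return random.choice(["left", "right", "still"])

print(mirror_test(move, mirror))      # prints True: the image tracks us
print(mirror_test(move, television))  # almost certainly False
```

Note what this does and doesn’t show: a contingency check like this can ground self-recognition in behavior, but, as above, it tells us nothing about inner awareness.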

Nagel says of consciousness: “It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing. It is not analyzable in terms of the causal role of experiences in relation to typical human behavior—for similar reasons”. We can see the “objective” facts about an experience, but not the point of view of that experience (Nagel goes into much greater detail on this subject in “The View From Nowhere”, an exploration of the fact that an objective point of view will always be missing some information because it will never know what it’s like to be situated subjectively).

While “what it is like” seems to make intuitive sense in terms of consciousness, it doesn’t have a whole lot of explanatory power about what it is that we’re actually experiencing when we’re conscious, nor does it point us towards a way to find out whether or not other things have a way that it is like to be.

The next potentially useful definition is “subject of conscious states”. This doesn’t really give us a whole lot of information without definitions of potential conscious states, but luckily we can glean some of these from the elements that many of the definitions of consciousness have in common. These point towards the qualities of conscious states, although they are not those states themselves. They include but are not limited to qualitative character, phenomenal structure, subjectivity, self-perspectival organization, intentionality and transparency, and dynamic flow. Briefly, these are as follows:

Qualitative character: This is the point at which the Stanford Encyclopedia of Philosophy started using the word “feels”, which really just made my day in researching this blog post. Another, more pretentious word for “feels” is of course qualia. This is deeply related to Nagel’s “what it is like” and points at that experience, the quality of senses, thoughts, and feelings.

Phenomenal structure: Phenomenal structure is one of the few elements of consciousness that appears to be uniquely human. It is the ability not only to have experiences and recognize experiences, but to situate those experiences in a larger network of understanding. It refers to the frameworks we use to understand things (e.g. not simply using our senses but having associations and intentions and representations that come from our sensory input).

Subjectivity: Closely related to the previous two concepts, subjectivity is the access to the experience of consciousness.

Self-perspectival organization: This is a ridiculously long way of saying a unified self-identity that is situated in the world, perceives, and experiences. This again exists on a spectrum (not all of us can be fully self-actualized, ya know?).

Intentionality and transparency: We aren’t immediately aware of our experience of perceiving/thinking/feeling; rather, we experience/think/perceive a THING.

Dynamic flow: This is a great deal like learning or growing; however, it is something more: it means that we don’t experience the world as discrete, disconnected moments in time but rather that our consciousness is an ongoing process.

It seems quite possible for a computer to have some of these elements but not all of them. I would not be surprised if at some point computers developed a qualitative character, but having a phenomenal structure seems less likely (at least until they begin to develop some sort of robot culture bent on human destruction). This would give us a spectrum of consciousness, which maps quite well onto our understanding of the moral standing of non-human animals. Again, different people would have different moral feelings at different places along the spectrum: some humans have no qualms about killing dolphins, which almost certainly have consciousness, while others are disturbed even by killing insects.

The most relevant definitions of consciousness appear to be “self-awareness”, “what it is like”, and “subject of conscious states”. It seems to me that the latter two are really just different ways of expressing the same idea, since most of the conscious states that we can identify are quite similar to “what it is like”. In that case, it seems that in order to be the most relevant for ethical consideration, a being would have to be self-aware and also have an experience of living or conscious states.

Unfortunately there is no real way to determine this because that is an experience, a subjective state that we can never access. At some point we may simply have to trust the robots when they tell us they’re feeling things. This may seem unscientific, but we actually do it every day with other human beings: we have no solid proof that other human beings are experiencing emotions and consciousness and feelings in the same way that we do. They behave as if they do, but that behavior does not necessarily require an inner life. It is much easier to make the leap to accepting human consciousness than robot consciousness because the mental lives of other humans seem far more parallel to our own: if my brain can create these experiences, it makes perfect sense that another, similar brain can do the same.

The point at which computers start expressing desires is the point at which I will start to have qualms about turning off my computer; as a preference utilitarian, that is the consideration I try to give all beings.

*These definitions are what SEP calls “creature consciousness”, or ways that an animal or person might be considered conscious. It also looks at “state consciousness”, which are mental states that could be called “conscious”. These are clearly related, but in this case creature consciousness is more relevant to determining whether we can call a computer “conscious” or worthy of moral consideration.
