I Know This One: Identity and Object Permanence

Last week I did something exciting. Hold your breath, this one’s gonna blow you away.

My mom told me something about how I was as a child and I disagreed with her.

Yeah, let me give you a minute to pick your jaws up off the floor. Sarcasm aside, this was a big deal for me, although for many people it might just seem like a normal experience. Because I do not trust my memory, my perception, or my interpretations of events. I have struggled with this for a long time. I relied on grades to tell me that I was smart, because I didn’t believe it otherwise. I relied on friends and my parents to inform me that I was kind and interesting and caring.

And if someone said something negative about me, I was suddenly sure that they were right. I’m not sure where this comes from in my brain, but it’s almost as if I don’t have object permanence about myself or my traits. Facts slip away quickly, and I find myself uncertain about whether I’m rational and reasonable, thoughtful, needy, demanding, or something else I would never expect. When I get into an argument with someone, I feel as if I’m losing my mind, because all the things that had been bothering me, that I had thought through so clearly a day or two ago, are gone the moment they say I’m wrong.

I think this is at the root of some of my inability to create a strong and stable identity. I have a hard time feeling secure and certain of myself and my abilities, of my worth in the world, because all those facts are like water in a sieve to me. Of course thanks to depression brain, my negative thoughts stick like burrs, but that means that any identity I have is based entirely on bad thoughts about myself. Then I’m told by family, friends, therapists, everyone to argue with those thoughts. It leaves me in a horrible situation of not knowing who to trust and not being able to trust myself, particularly because I start to question my own memories when someone tells me I should have interpreted things differently.

I don’t talk about this element of my mental health as often as I do some of the others. It’s one that I’ve only started to notice as a pattern recently. I don’t know how it fits into diagnoses or labels. And more than any of that, it’s the one that puts me in the most vulnerable position, and makes me susceptible to manipulation or abuse, even unintentional abuse. And this is the trait of my brain that leaves me feeling the most “crazy.” It does seem as if I’ve lost my mind and can’t find any footing in reality. It seems to turn me into a stereotype that can’t fend for herself, and it makes me feel as if I can’t self-advocate anymore, because who will trust what I say when I can’t even trust my own experiences?

But if I’m committed to transparency about my mental health, and particularly as I start talking about positive milestones in my life, I have to talk about this. Because this moment of knowing with certainty that what someone said about me isn’t true is something I have never felt before. Not only that, but I managed to retain positive or neutral information about myself and then stick up for that piece of information, hold on to it as true even when someone disagreed.

Yes, later I did feel like I had to check in with other people who knew me when I was younger to make sure I hadn’t lost my mind. But it was still a step. It’s one piece that I can rely on and build from. I’m not entirely sure how I got myself to this point, but I do know that I have been working endlessly on simple reminders. I have started to collect moments and facts. I write down everything that I get done and look back at it periodically. I mark out events in my brain, like the picture I took last year of me stuffing my face with a burrito when I finally felt “recovered” for the first time, or the feeling on Monday when I finally had enough work at work to last me a full day and then some (it felt amazing), or the generosity I show when I randomly get people gifts because I can and I want to.

I have made enough lists of my values that now I don’t have to look at them anymore, I just know them. And I know that I act on them whenever possible. I have thought so carefully about what I want my life to look like in order for me to feel comfortable and stable, that I can imagine it in full detail. And I know what I look like, because I simply fill in around the edges of myself until I see my shape.

I certainly don’t understand the world fully, but I think I’m beginning to understand MY world, which means that I know where I fit. As the incredibly cheesy DBT lingo would go, I’m starting to see what a life worth living would look like. That tells me more about myself than it does about anything else. I don’t have to please everyone else. I can do what fits my values instead. I don’t have to trust everyone else more than I trust myself.

Which all means holding on to reality a little bit better. Which means I’m starting to remember better. Which I guess means that there’s hope, even when I do feel completely lost. Hope that I will be me someday. I don’t know how to end this because this is certainly not any sort of neat ending or conclusion. It’s the flicker of a beginning, something barely of note except that I didn’t realize what I’d been lacking for so long.

I’ve spent a lot of time in grandiose ideas, morality, black and white ideals, instead of boring practicalities. These things seem appealing to me. Unfortunately what I need to hold on to are the basic, simple facts about who I am and what I do. Here’s to looking for boredom.

 

My Body Is My Self

I have a fiery hatred for Cartesian dualism. There are well-documented problems with dualism, and modern neuroscience indicates a close relationship between the physical aspects of the brain and the subjective experiences of the mind. Being embodied can really suck sometimes (trust me, I have an eating disorder), but one of the important elements of being mentally healthy, for me, is accepting not only that I have a body but that in many ways I am my body.

I recently posed a question to a friend: “if you were removed from your body and put into a robot, would you still be you?” I suggest not, as nearly all the ways I can think of to define self rely on bodily experiences: our actions, our thoughts, our feelings, our values. These are all highly dependent on what we sense and how we sense it, and are affected by the ways that our bodies work. A well-fed body acts, thinks, and feels differently than a hungry body. These experiences of being dependent on something changeable and fallible seem to be an essential part of being human.

Even when we think of the memories and narratives that we have, our bodies are essential to a sense of self. Memories are often sensory experiences, dependent on what we perceived and the emotions elicited in the moment. There’s evidence that smell is more connected to memory than the other senses, which points towards the idea that our memories are colored both by our fallible and finite brains and by the ways our bodies are capable of processing an experience. Even the stories that we tell about ourselves are highly influenced by our bodies, if only because our social position is affected by our weight and height and strength and gender presentation. It’s easy to imagine that our concept of selfhood is entirely abstract or mental, but most of our emotions are experienced physically, and things like stress or relaxation are very physical, embodied experiences.

All of this is to say that I’m firmly convinced that I, Olivia, am not simply my conscious experience, but my conscious experience as situated in this body, and that if I were to be transplanted there would be a pivotal change in my essential identity. I’m not entirely sure what this means for continuity of identity, or whether we can really assert that we have an underlying self that persists through all our experiences except insofar as we have memories and stories, but that’s not the focus for today.

Instead, I want to talk about sex.

Some people are totally down with casual sex, and this post is not for them. This post is about why (at least for me and probably some other people too) sex can seem so intimate and personal, why it seems so vulnerable, and why for some people it feels violating. One of the reasons that I am starting to consider labeling myself “sex-averse” is because of the highly intertwined nature of self and body. I trust very few people with the more intimate parts of myself. Sure, I’m open about the fact that I have an eating disorder, and I write about my experiences here, but in person there are many, many things I don’t talk about often. Many of these things are embodied experiences: sexual assault, self harm, purging. My experience of my body is one of pain, and more often than not it is a solitary experience because these things are shameful.

It is deeply embarrassing and terrifying to me to let that side of me be real, to actually be quiet and vulnerable in my body. My body is puke and blood and tears and snot. That is not the intimacy I want. I can grudgingly accept that those things are a part of me, but I don’t want to dwell on them or revel in them. It’s possible that at some point in the future my body will become something else to me: strength or grace. But those elements, those animal elements, the things that we cannot control will always be an essential part of having a body and of sharing that body.

For many other people, discomfort with sex is about judgment. It’s easy to write this off as the same kind of fear of judgment we have when we’re going to the beach and showing more skin than usual, or when we’re spending some serious one-on-one time with someone. I tend to think it’s more than that, though, which is where questions of dualism come in. I’m sure some people are fairly capable of bifurcating self from body (although I’m also fairly sure that this is somewhat illusory, for the reasons presented above). But I think that some of us feel the “me”-ness of our bodies more: we feel intimately that the body is not simply something that belongs to us or a bit of meat that carries us around, but is in fact an integral part of how we experience the world and what makes up our worldview.

I feel this quite thoroughly when I am in sexual situations, and that’s a major part of why they are so intimate to me. I am not simply sharing pleasure with someone or sharing my body with someone: I am sharing one of the most essential elements of self with another person, the part of me that is my only way of connecting to the world. This is perhaps why all physical contact is intimate to me in a way that speaking or writing is not: it demands that I be present.

And because allowing another person to experience your body is so close to letting them experience you (just as having a serious, deep conversation is, or showing them something you care deeply about is), it becomes so much more rife with potential judgment than other situations, and when judgment occurs it is much more painful. It feels far more like a rejection of self than many other circumstances.

Perhaps all of this is overthinking things, but I think it’s too easy to write off our bodies as simple mechanisms that allow us to feel pleasure and pain, or get from point A to point B. There is so much more to them: so much that is terrifying and disgusting, but also so much that is intimate, vulnerable, and exciting. For the moment, the selfhood of my body makes me want to shy away from physical contact, but perhaps in the future it will make it more fulfilling. However it ends up interacting with my sexuality, I want to be aware of my body and its role in my self-identity before I gallivant off into the land of sex.

The Ethics of Unplugging Your Computer

Because CONvergence is largely a fan convention, many of the panels offered involve panelists whose main qualification is “I was really excited about this topic.” Sometimes this means that you end up with a very interesting variety of perspectives, but unfortunately sometimes it makes for panels with unprepared and uninformed panelists. One of these that I attended this weekend was “When Is Turning Off a Computer Murder?” The concept of this panel (when and how might a computer reach a state of consciousness on par with personhood) was fascinating. The execution was less so.

So for that reason, I’m going to explore what makes something eligible for ethical consideration, connect those concepts with sentience and consciousness, and see if we know anything about whether or not machines have reached these stages yet.

Let’s start with the concept of murder, since the title of the panel asks when we think a being deserves moral consideration. There is a great deal of argument within ethics over what kinds of beings deserve moral consideration. Religious ethics often affords human beings special consideration simply by dint of their membership in the species. However, most ethical systems have slightly more objective criteria: some hold that being alive is enough of a reason to let something keep living, others point to sentience, others to consciousness, others to self-consciousness. I personally tend towards Peter Singer’s preference utilitarianism, which suggests that we shouldn’t cause any unnecessary pain, and that if something has a preference or interest in remaining alive, we shouldn’t kill it.

Even more complex is the fact that we have different standards about what types of beings we shouldn’t harm and which types of beings we should hold responsible for their actions. So for example, many people feel as if we shouldn’t kill animals but they do not feel that animals should be held morally responsible for killing each other.

I doubt we’re going to reach any conclusions about what types of beings deserve moral consideration and what exactly constitutes murder, especially considering that the animal rights debates are still raging with a fiery intensity, but we can at least try to place computers and machines somewhere in the schema that we already have for living beings. For more conversation on what personhood might be and who deserves moral consideration, check out the Stanford Encyclopedia of Philosophy or the Center of Ethics at U of Missouri.

But let’s start at the very beginning of moral consideration: living things.

The characteristics that we currently use to classify something as “living” include internal organization, using energy, interacting with the environment, and reproduction. Machines have been able to do all of these things, so it doesn’t seem off base to consider some machines “alive,” at least in some way. Some people may assert that because computers can’t reproduce or replicate in an organic way they are not alive, but this seems at odds with the way we treat human beings who cannot reproduce (hint: they don’t suddenly become non-human). One important element of being alive that we usually take into consideration when thinking of ethics is pain: hurting things is bad. The question of whether or not computers can feel pain is wrapped up in the questions of consciousness that will be discussed later.

The next level of moral consideration is usually sentience. Most people use the words “sentient” and “conscious” fairly interchangeably, and one of the difficulties with the panel was that neither of these terms was defined. Typically, sentient simply means capable of sensing and responding to the world. Under this definition, computers have definitely reached a level of sentience, although their senses differ from human senses (this is not a problem as far as sentience goes: there are certainly sentient animals, such as dolphins or bats, that have senses like echolocation that humans do not).

Here’s where it starts to get complicated: consciousness. Trying to define consciousness is a little bit like making out with a sea slug: it’s slippery and uncomfortable and you’re not entirely sure why you’re doing it. But unlike sea slugs, consciousness is an integral part of our experience of the world and is highly relevant to our moral choices, so we probably should spend some time grappling with it and hoping it doesn’t wriggle out of our fingers (side note: my brother did once kiss a sea slug).

There are lots of things that make consciousness tough to pin down. The first of these is that depending upon what we’re trying to talk about or the context in which we’re speaking, the way we define consciousness changes. The entry on consciousness in the Stanford Encyclopedia of Philosophy lists six different ways to define consciousness*:

1. Sentience

2. Wakefulness

3. Self-consciousness

4. What it is like

5. Subject of conscious states

6. Transitive consciousness (being conscious of something)

Some of these are clearly more relevant for moral considerations than others (we don’t generally consider wakefulness relevant in our moral decisions). We’ve already touched on sentience, but let’s take some time to examine the other possible definitions and how we could determine whether or not computers have them.

Self-consciousness is often a test for whether or not something should have moral standing. It’s often used as an argument for why we should afford more consideration to animals like dolphins and chimps. Currently, we use the mirror test to determine whether or not an animal is self-conscious. This test is not perfect, though, since self-consciousness is an inner awareness of one’s own thoughts: it relies on metacognition, inner narrative, and a sense of identity. This points to one of the serious challenges of understanding consciousness, which is that we cannot understand it simply by using “objective” data: it requires both first- and third-person data because it is a subjective state.

With those caveats, there is a robot that has passed the mirror test. This is a good indication that it has some sense of self-awareness. What it doesn’t give us information about is “what it is like”, which is the next possible definition of consciousness. This suggestion is championed by Thomas Nagel (who is really one of the more fantastic philosophers writing today). The best example of this is Nagel’s classic essay “What Is It Like to Be a Bat?” (that title alone is one of the reasons I loved majoring in philosophy), in which Nagel explores the idea of experiencing the world as a bat and posits that the consciousness of a bat is the point of view of being a bat. This may seem tautological, but it gets at the idea that consciousness is a subjective experience that cannot be witnessed or entirely understood from an external perspective. We can have some cross-subjective understanding of consciousness and experience between beings that are quite similar, but (as an example) we as humans are simply not equipped to know what it is like to be a bat.

Nagel says of consciousness: “It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing. It is not analyzable in terms of the causal role of experiences in relation to typical human behavior—for similar reasons”. We can see the “objective” facts about an experience, but not the point of view of that experience (Nagel goes into much greater detail on this subject in “The View From Nowhere”, an exploration of the fact that an objective point of view will always be missing some information, because it will never know what it is like to be situated subjectively).

While “what it is like” makes intuitive sense as a definition of consciousness, it doesn’t have a whole lot of explanatory power about what it is that we’re actually experiencing when we’re conscious, nor does it point us towards a way to find out whether there is something it is like to be some other thing.

The next potentially useful definition is “subject of conscious states”. This doesn’t really give us a whole lot of information without definitions of potential conscious states, but luckily we can glean some of these from the elements that many definitions of consciousness have in common. These point towards the quality of conscious states, although they are not those states themselves. They include but are not limited to qualitative character, phenomenal structure, subjectivity, self-perspectival organization, intentionality and transparency, and dynamic flow. Briefly, these are as follows:

Qualitative character: This is the point at which the Stanford Encyclopedia of Philosophy started using the word “feels”, which really just made my day in researching this blog post. Another, more pretentious word for “feels” is of course qualia. This is deeply related to Nagel’s “what it is like” and points at that experience, the quality of senses, thoughts, and feelings.

Phenomenal structure: Phenomenal structure is one of the few elements of consciousness that appears to be uniquely human. It is the ability not only to have experiences and recognize experiences, but to situate those experiences in a larger network of understanding. It refers to the frameworks we use to understand things (e.g. not simply using our senses but having associations and intentions and representations that come from our sensory input).

Subjectivity: Closely related to the previous two concepts, subjectivity is the access to the experience of consciousness.

Self-perspectival organization: This is a ridiculously long way of saying a unified self-identity that is situated in the world, perceives, and experiences. This again exists on a spectrum (not all of us can be fully self-actualized, ya know?).

Intentionality and transparency: We aren’t immediately aware of our experience of perceiving/thinking/feeling; rather, we experience/think/perceive a THING.

Dynamic flow: This is a great deal like learning or growing, but it is something more: it means that we don’t experience the world as discrete, disconnected moments in time, but rather that our consciousness is an ongoing process.

It seems quite possible for a computer to have some of these elements but not all of them. I would not be surprised if at some point computers developed a qualitative character, but having a phenomenal structure seems less likely (at least until they begin to develop some sort of robot culture bent on human destruction). This would give us a spectrum of consciousness, which maps quite well onto our understanding of the moral standing of non-human animals. Again, different people would have different moral feelings at different places along the spectrum: some humans have no qualms about killing dolphins, which almost certainly have consciousness, while others are disturbed by killing even insects.

The most relevant definitions of consciousness appear to be “self-awareness”, “what it is like”, and “subject of conscious states”. It seems to me that the latter two are really just different ways of expressing the same idea, since most of the conscious states that we can identify are quite similar to “what it is like”. In that case, it seems that in order to be the most relevant candidate for ethical consideration, a being would have to be self-aware and also have an experience of living or of conscious states.

Unfortunately there is no real way to determine this because that is an experience, a subjective state that we can never access. At some point we may simply have to trust the robots when they tell us they’re feeling things. This may seem unscientific, but we actually do it every day with other human beings: we have no solid proof that other human beings are experiencing emotions and consciousness and feelings in the same way that we do. They behave as if they do, but that behavior does not necessarily require an inner life. It is much easier to make the leap to accepting human consciousness than robot consciousness because the mental lives of other humans seem far more parallel to our own: if my brain can create these experiences, it makes perfect sense that another, similar brain can do the same.

The point at which computers start expressing desires is the point at which I will start to have qualms about turning off my computer; as a preference utilitarian, that is the consideration I try to give to all beings.

 

 

*These definitions are what the SEP calls “creature consciousness”: ways that an animal or person might be considered conscious. It also looks at “state consciousness”, which refers to mental states that could be called “conscious”. These are clearly related, but in this case creature consciousness is more relevant to determining whether we can call a computer “conscious” or worthy of moral consideration.