What Is Art?

A few weeks ago a friend of mine asked me what made something art. At the time I didn’t have a good definition, but when he gave his proposed answer I was unsatisfied.

“A work of art is something we have an aesthetic experience with,” he suggested.

“So a sunset could be a work of art?”

“Yes.”

I was unconvinced. Let’s start at what might be the most basic level of definition. Art must be an artifact, some sort of physical object or experience. This seems like something we can all agree on, but modern art has taken even that firm footing out from under us:

Works like John Cage’s 4′33″ “have seemed to many philosophers to lack or even, somehow, repudiate, the traditional properties of art: intended aesthetic interest, artifactuality, even perceivability” (Stanford Encyclopedia of Philosophy). A work of art composed of silence seems to call into question whether art needs to be an artifact or an object at all. Yet even with 4′33″, there does seem to be a physical context (either a CD or a performer seated in front of the audience) that is the art itself, rather than simply the silence (silence without this context is not viewed as art). So perhaps we can accept that a physical presence of some sort is required for something to be art.

But not just any physical experience or object is art (obviously). What distinguishes the computer I’m typing on from a work of art? Or is my lovely little MacBook Air a work of art (the answer is of course yes)? A few possible answers include beauty, communication or embodiment of emotions, mimetic properties, or aesthetic experience. Beauty is fairly easy to take off the table, as many works of art are intentionally disturbing, grotesque, or outright ugly. Perhaps there is a kind of beauty in the emotions we experience in relation to this art, but at least conventionally, beauty is not the mark of a great number of amazing works of art (this definition also has the problem that beauty is nearly impossible to define).

Art as communication appears to fare better until you hit something like Duchamp’s Fountain or John Cage’s 4′33″, both of which appear to be anti-communication, designed simply to make one think. A great deal of modern art is less focused on emotional and experiential communication and more on criticism and engagement, and there is little doubt for most people that modern art is in fact art. This definition may also be too broad in other ways, in that it could include any expression of emotion (such as declaring one’s happiness). Moreover, not all art communicates: some art is simply representational (or may only be experienced as representational by the untrained eye). Could the Mona Lisa become not-art if viewed by someone who simply saw it as a representation of a woman? It seems unlikely. Still, there is something important in the communication view that should be included in any definition of art: art is intentional on the part of the artist. The viewer may not take away exactly what the artist intended (as is true of any communication), but there is a give and take in art: it is put forth by someone and received by someone.

“A storm may prompt us to question the best way to avoid a shipwreck, but it is we (and not the storm) who are raising the question.” -Charles Taliaferro, Aesthetics, A Beginner’s Guide. This suggests that the object or artifact in question doesn’t have any properties that are “art”, but the viewer is imbuing the object with art qualities.

Some people go so far as to suggest that a work of art can actually embody emotions. They suggest that even if no one involved in the work of art (the creator or the viewer) were feeling any particular emotion, it would still hold that emotion (e.g. Joy for Ode to Joy). From the perspective of modern neuroscience, emotions as we know them are a uniquely human kind of thing: they are experienced thanks to the reactions in our brains and the physical reactions of our bodies. To suggest that an inanimate object might embody a human experience makes little to no sense. This suggests another piece of the definition of art: it is not inherent in the object but comes about through the interactions of the artist and the audience.

A great deal of art clearly has mimetic properties: it is meant to represent or reflect something in the world. Unfortunately this definition can’t handle abstract art, or even art like Fountain which is not so much a representation as it actually is the object it’s meant to represent. But there are some ways in which all art seeks to represent something. “Works of art function more like different linguistic statements that reference objects, rather than mirrors that offer us a reflection of what we might otherwise see directly without the aid of a mirror.” -Charles Taliaferro, Aesthetics, A Beginner’s Guide

It seems there might be a Wittgensteinian route to take here in the realm of language games: “A common family of arguments, inspired by Wittgenstein’s famous remarks about games (Wittgenstein, 1953), has it that the phenomena of art are, by their nature, too diverse to admit of the unification that a satisfactory definition strives for, or that a definition of art, were there to be such a thing, would exert a stifling influence on artistic creativity.” (Stanford Encyclopedia of Philosophy). 

In a Wittgensteinian conception of language, words do not have singular definitions but a series of ways that we use them in context that are considered successful if someone else can respond (deduce the rules of the game as it were). Perhaps in art we use images or symbols or context to put together a kind of artistic utterance that the people around us can interpret based on the other ways that those things have been used in the past, learned from a family of common definitions.

So perhaps there is no one clear definition of art, and we learn what art is by experiencing art and extending that definition to other things with similar, though not always overlapping, characteristics. This also seems unsatisfactory, so let’s instead move to the definition that started all this: aesthetic experience.

The first and most difficult question to answer is: what is an aesthetic experience? Taliaferro suggests “To have an aesthetic experience, one needs to step back or detach oneself from the urgency and practical preoccupations of life.” The Stanford Encyclopedia further states “As noted above, some philosophers lean heavily on a distinction between aesthetic properties and artistic properties, taking the former to be perceptually striking qualities that can be directly perceived in works, without knowledge of their origin and purpose, and the latter to be relational properties that works possess in virtue of their relations to art history, art genres, etc.”

There is some tension between these two definitions: one suggests something that takes us out of ourselves and the other something that inspires a reaction due to perception. There is a problem with both of these suggestions though, in that either of them could happen in reaction to something in nature with no reference to an artist, communication, or context.

“But since the concept of the aesthetic necessarily involves the equally bankrupt concept of disinterestedness, its deployment advances the illusion that what is most real about things can and should be grasped or contemplated without attending to the social and economic conditions of their production.” -Stanford Encyclopedia of Philosophy

An additional problem here is that there are actually many practical objects that also could be considered art (Shaker furniture, African masks, religious icons), and because they can also be used practically we would be hard pressed to suggest they pull us out of our immediate practical preoccupations. Perhaps there is a way to combine the two definitions: an aesthetic experience is one that through striking qualities moves us outside of our own perspective. This gives us the benefits of not simply asking us to be disinterested but of asking us to expand our view, and of being slightly more specific than either of the previous definitions.

So thus far, art must be an artifact that is imbued with some sort of communicative properties through an artist and a viewer/recipient, which inspires us to move outside of our own perspective through perceptually striking qualities. Oof. That’s a mouthful, but it seems to be both specific and broad enough to capture most of the things we typically consider art.

A final few considerations to take into account: there are probably contexts in which a curator can become an artist by moving an object or a picture to a different context. They add in the communicative elements that wouldn’t exist simply by seeing something stunning in nature and being aware of your size or place in the world. The problem with this is that context can often be intensely political. When we view art as defined by the “artworld” (which is a definition some philosophers have proposed), we give a lot of power to the establishment of old, white men who already have power in art. We lose a variety of voices and tell those who come from different places that they cannot make art because they don’t have access to the proper curators or contexts. Hopefully, the previous definition is open enough that it allows a variety of contexts to serve as the vehicle for communication, opening art up for anyone who has something to communicate or anyone who wants to expand their perception.

What are your thoughts? Do you have any pieces of art that wouldn’t fit in this definition, or things that you definitively don’t think are art which would fit? Let me know!

The Ethics of Unplugging Your Computer

Because CONvergence is largely a fan convention, many of the panels offered involve panelists whose qualifications amount to “I was really excited about this topic”. Sometimes this means that you end up with a very interesting variety of perspectives, but unfortunately it sometimes makes for panels with unprepared and uninformed panelists. One of these that I attended this weekend was “When Is Turning Off a Computer Murder?” The concept of this panel (when and how might a computer reach a state of consciousness on par with personhood) was fascinating. The execution, less so.

So for that reason, I’m going to explore what makes something eligible for ethical consideration, connect those concepts with sentience and consciousness, and see if we know anything about whether or not machines have reached these stages yet.

Let’s start with the concept of murder, since the title of the panel asks when we think a being deserves moral consideration. There is a great deal of argument within ethical spheres over what kinds of beings deserve moral consideration. Religious ethics tends to afford human beings special consideration simply by dint of their being part of the species. However, most ethical systems have a slightly more objective criterion for ethical consideration. Some ethical systems hold that being alive constitutes enough of a reason to let something keep living; others point to sentience, consciousness, or self-consciousness. I personally tend towards Peter Singer’s preference utilitarianism, which suggests that we shouldn’t cause any unnecessary pain, and that if something has a preference or interest in remaining alive, we shouldn’t kill it.

Even more complex is the fact that we have different standards about what types of beings we shouldn’t harm and which types of beings we should hold responsible for their actions. So for example, many people feel as if we shouldn’t kill animals but they do not feel that animals should be held morally responsible for killing each other.

I doubt we’re going to reach any conclusions about what types of beings deserve moral consideration and what exactly constitutes murder, especially considering the fact that the animal rights debates are still raging with a fiery intensity, but we can at least potentially place computers and machines somewhere in the schema that we already have for living beings. For more conversation on what personhood might be and who deserves moral consideration, check out the Stanford Encyclopedia of Philosophy or Center of Ethics at U of Missouri.

But let’s start at the very beginning of moral consideration: living things.

The characteristics we currently use to classify something as “living” include internal organization, use of energy, interaction with the environment, and reproduction. Machines have been able to do all of these things, so it doesn’t seem off base to consider some machines “alive”, at least in some way. Some people may assert that because computers can’t reproduce or replicate in an organic way they are not alive, but this seems at odds with the way we treat human beings who cannot reproduce (hint: they don’t suddenly become nonhuman). One important element of being alive that we usually take into consideration when thinking of ethics is pain: hurting things is bad. The question of whether computers can feel pain is wrapped up in the questions of consciousness discussed later.

The next level of moral consideration is usually sentience. Most people use the words “sentient” and “conscious” fairly interchangeably, and one of the difficulties with the panel was that neither term was defined. Typically, sentient simply means capable of sensing and responding to the world. Under this definition, computers have definitely reached a level of sentience, although their senses differ from human senses (this is not a problem as far as sentience goes; there are certainly sentient animals, such as dolphins or bats, with senses like echolocation that humans do not have).

Here’s where it starts to get complicated: consciousness. Trying to define consciousness is a little bit like making out with a sea slug: it’s slippery and uncomfortable and you’re not entirely sure why you’re doing it. But unlike sea slugs, consciousness is an integral part of our experience of the world and is highly relevant to our moral choices, so we probably should spend some time grappling with it and hoping it doesn’t wriggle out of our fingers (side note: my brother did once kiss a sea slug).

There are lots of things that make consciousness tough to pin down. The first of these is that depending upon what we’re trying to talk about or the context in which we’re speaking, the way we define consciousness changes. The entry on consciousness in the Stanford Encyclopedia of Philosophy lists six different ways to define consciousness*:

1. Sentience

2. Wakefulness

3. Self-consciousness

4. What it is like

5. Subject of conscious states

6. Transitive consciousness (being conscious of something)

Some of these are clearly more relevant for moral considerations than others (we don’t generally consider wakefulness relevant in our moral decisions). We’ve already touched on sentience, but let’s take some time to examine the other possible definitions and how we could determine whether or not computers have them.

Self-consciousness is often a test for whether or not something should have moral standing. It’s often used as an argument for why we should afford more consideration to animals like dolphins and chimps. Currently, we use the mirror test to determine whether an animal is self-conscious. This test is not perfect, though, as self-consciousness is an inner awareness of one’s own thoughts: it relies on metacognition, inner narrative, and a sense of identity. This points to one of the serious challenges of understanding consciousness: we cannot understand it simply by using “objective” data. Because it is a subjective state, it requires both first- and third-person data.

With those caveats, there is a robot that has passed the mirror test. This is a good indication that it has some sense of self-awareness. What it doesn’t give us information about is “what it is like”, the next possible definition of consciousness. This suggestion is championed by Thomas Nagel (who is really one of the more fantastic philosophers writing today). The best example is Nagel’s classic essay “What Is It Like to Be a Bat?” (that title alone is one of the reasons I loved majoring in philosophy), in which Nagel explores the idea of experiencing the world as a bat and posits that the consciousness of a bat is the point of view of being a bat. This may seem tautological, but it gets at the idea that consciousness is a subjective experience that cannot be witnessed or entirely understood from an external perspective. We can have some cross-subjective understandings of consciousness and experience between beings that are quite similar, but (as an example) we as humans are simply not equipped to know what it is like to be a bat.

Nagel says of consciousness: “It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing. It is not analyzable in terms of the causal role of experiences in relation to typical human behavior—for similar reasons”. We can see the “objective” facts about an experience, but not the point of view of that experience (Nagel goes into much greater detail on this subject in The View From Nowhere, an exploration of the fact that an objective point of view will always be missing some information because it will never know what it is like to be situated subjectively).

While “what it is like” makes intuitive sense as a definition of consciousness, it doesn’t have a whole lot of explanatory power about what it is that we’re actually experiencing when we’re conscious, nor does it point us towards a way to find out whether other things have a way that it is like to be.

The next potentially useful definition is “subject of conscious states”. This doesn’t really give us a whole lot of information without definitions of potential conscious states, but luckily we can glean some of these from the elements that many definitions of consciousness have in common. These point towards the qualities of conscious states, although they are not those states themselves. They include but are not limited to qualitative character, phenomenal structure, subjectivity, self-perspectival organization, intentionality and transparency, and dynamic flow. Briefly, these are as follows:

Qualitative character: This is the point at which the Stanford Encyclopedia of Philosophy started using the word “feels”, which really just made my day in researching this blog post. Another, more pretentious word for “feels” is of course qualia. This is deeply related to Nagel’s “what it is like” and points at that experience, the quality of senses, thoughts, and feelings.

Phenomenal structure: Phenomenal structure is one of the few elements of consciousness that appears to be uniquely human. It is the ability not only to have experiences and recognize experiences, but to situate those experiences in a larger network of understanding. It refers to the frameworks we use to understand things (e.g. not simply using our senses but having associations and intentions and representations that come from our sensory input).

Subjectivity: Closely related to the previous two concepts, subjectivity is the first-person access we have to the experience of consciousness.

Self-perspectival organization: This is a ridiculously long way of saying a unified self-identity that is situated in the world, perceives, and experiences. This again exists on a spectrum (not all of us can be fully self-actualized, ya know?).

Intentionality and transparency: We aren’t immediately aware of our experience of perceiving/thinking/feeling; rather, we experience/think/perceive a THING.

Dynamic flow: This is a great deal like learning or growing, but it is something more: it means that we don’t experience the world as discrete, disconnected moments in time, but rather that our consciousness is an ongoing process.

It seems quite possible for a computer to have some of these elements but not all of them. I would not be surprised if at some point computers developed a qualitative character, but having a phenomenal structure seems less likely (at least until they begin to develop some sort of robot culture bent on human destruction). This would give us a spectrum of consciousness, which maps quite well onto our understanding of the moral standing of non human animals. Again, different people would have different moral feelings at different places along the spectrum: some humans have no qualms about killing dolphins which almost certainly have consciousness, while others are disturbed by even killing insects.

The most relevant definitions of consciousness appear to be “self-awareness”, “what it is like”, and “subject of conscious states”. It seems to me that the latter two are really just different ways of expressing the same idea, since most of the conscious states we can identify are quite similar to “what it is like”. In that case, it seems that in order to be most relevant for ethical consideration, a being would have to be self-aware and also have an experience of living or of conscious states.

Unfortunately there is no real way to determine this because that is an experience, a subjective state that we can never access. At some point we may simply have to trust the robots when they tell us they’re feeling things. This may seem unscientific, but we actually do it every day with other human beings: we have no solid proof that other human beings are experiencing emotions and consciousness and feelings in the same way that we do. They behave as if they do, but that behavior does not necessarily require an inner life. It is much easier to make the leap to accepting human consciousness than robot consciousness because the mental lives of other humans seem far more parallel to our own: if my brain can create these experiences, it makes perfect sense that another, similar brain can do the same.

The point at which computers start expressing desires is when I will start to have qualms about turning off my computer; as a preference utilitarian, this is the consideration that I try to give all beings.

*These definitions are what SEP calls “creature consciousness”, or ways that an animal or person might be considered conscious. It also looks at “state consciousness”, which refers to mental states that could be called “conscious”. These are clearly related, but in this case creature consciousness is more relevant to determining whether we can call a computer “conscious” or worthy of moral consideration.