Born deaf, Christine Sun Kim uses technology, performance and drawing to investigate her relationship with sound and spoken languages.
You have been deaf from birth, yet you were told as a child not to make noise. How could you have known how not to make noise if you couldn’t hear it?
It’s based on intuition. I could sense people’s reactions. For example, in school, if I dragged my feet on the ground, people would say, “Shhhh.” My family’s Korean, so they’re somewhat somber and still. I tend to be loud with my expressions, and my family would tell me to tone it down. I knew I was very animated, but that was my language. People always say, “It’s like you’re performing,” and I respond, “That’s my language.” It’s funny. But yeah, I just had to follow social cues.
All the customs and social norms, all the rules were in my face every day. I’d go into a theater, and I knew that I’d have to sit, be quiet, and walk slowly. It was learned behavior from people’s reactions around me: it depended on how and if people looked at me. If everyone’s eyes were on me, I knew I was being loud or doing something “wrong.”
As a society, the majority of people hear. And I mirror them. I have to follow what they’re doing. It was not as if society gave me a clear, safe place to do whatever I wanted. I had to learn how to integrate into their ways. And the more aware I became of the noises and the norms, the more I played around with them in my artwork, using that experience as material. And oddly, that made my voice clearer.
You translate sound into other forms as an investigation and performance. Is this investigation primarily for yourself, or is it for others? To what degree do you keep your audience in mind when you’re playing?
It’s mostly about myself and my journey as an artist. It’s about my relationship to and my perspective of sound as it keeps changing. It’s everlasting; it’s nonstop.
In one past work, I did a one-to-one translation from sound to vibration, working with sound to create painterly imprints. I don’t know if that really translates. It’s very limited and deals with low frequencies only, and that’s just one aspect of sound.
That’s why I let go of the idea of translating it. Now I’m trying to develop my own information system and new theories of what sound should or could be, using new forms.
Most people who write music have this idea of silence, but they can hear and they use that to define or shape silence, or vice versa. So how can I learn the idea of sound and silence from their perspective? I can’t relate to that. So I’m starting over from scratch with everything. I’m redefining things. It’s not scientific evidence. People always ask me if I use sound waves in my art, but I’m not really interested in that.
Can you tell me about the various ways that you experience sound without hearing it? I’m curious how this ties into your artwork and the various ways you explore.
For a piece called “Feedback Aftermath,” I played with feedback for hours one night and then went home. At home I didn’t feel good — [I] felt anxious. I couldn’t sleep well that night, and I didn’t want to go back to the studio for one week. That was disconcerting. And then when I watched the video of myself — because I videotape myself sometimes — I felt sort of stressed out and uneasy. Later I realized that it had an impact on me, an extreme impact, like post-traumatic stress. Most hearing people don’t experience that. You have warning signals. If your ears hurt, you leave the room, you stop, you step away. I don’t have those signals, so I went past all warnings and experienced feedback to the full degree.
How does the feedback enter one’s body, if not through sound?
There are different ways sound has an impact on the body. Sound doesn’t enter only through the ears. It can go through the full body and also your psyche. More and more, people are starting to develop sonic warfare to use as a tool, as a weapon.
I have a story about this: to get into my apartment you have to go through one building, then walk through a courtyard and enter a second building. Once a friend of mine, who is a real estate agent, came over and, once inside my apartment, said, “Oh, it’s so quiet in here. It shouldn’t be wasted on you” — because New York is so noisy, so loud. But I realized I need that too. I used to live in a really crowded area, and I never felt fully rested. But in my home now, I can pass out and sleep for hours; I feel really rested. Noise truly does have an impact on my body.
Even now, I always like to stay in control of my sound. I have my phone off. I often don’t have it on vibrate. My TV has the sound off. This allows me to have control, so I know it’s not making noise. I was dating a hearing guy. He would come stay at my house a lot and would turn everything on. I kept telling him I wanted it off. He would reply, “Well, I’m hearing.” But that was strange because it was my relationship with sound. I wanted to be in control, so I wanted everything off. I didn’t like the extra noise floating around me because I wouldn’t know what it was.
You talk a lot in your work about the idea of sound as a currency. What do you mean by this?
For hearing people, information is captured via the ear, through sound. But you can look elsewhere and you are still getting information. With sign language, you have to be focused on what you’re seeing. Many things are dependent on sound, like Siri on the phone, voice commands. Sometimes I struggle with that, getting people to look at me or write back and forth, but they’re constantly looking away. Eye contact is lost, as is communication.
And the music world is huge. Music and sound are culturally dominant. Everyone lives in the music world, and I’m constantly amazed by the way people remember lyrics. For example: if they hear a few words, they instantly know the song — that’s a very strong cultural aspect of the hearing world. And even artists depend on that. Online videos are cultural connections, but most of them aren’t captioned. Visual sentences and visual language occupy a limited space in comparison to sound. So that’s why I’m trying to play around with this idea of voice. In fact, I just did my first vinyl record with a collaborator.
What’s on the record?
It incorporates a lot of different concepts I play around with. My voice is on the record, experimenting with sound (I don’t use my voice often). There are two records, one for the left side and one for the right side, and it comes with a list of instructions on how to listen to both of them. You are to follow these rules. You put the records on two turntables — the left on your left, the right on your right — and play them simultaneously. The right record has been designed to play loops at normal volume, the left plays continuously at low volume.
This is a reflection of growing up with hearing aids. I’m completely deaf, but I can hear a tiny bit on the right, with the help of aids (I can’t actually recognize or identify what the sound is; it’s just noise). The right record reflects this imbalance: it is a little bit louder, a little bit clearer. The left side plays seamlessly, while on the right side the different loops actually stop — it gets stuck. To continue playing the record, you have to go over and physically move the needle. It’ll play for a little longer and then you’ll have to move it again. So it becomes laborious — it becomes more work for the right side. This tangible interaction echoes my experience of hearing aids.
What is deaf culture? Is there such a thing?
Oh, yeah. Disability has its own culture too. But deaf culture revolves around language (technically, we’re a linguistic minority), and it’s a collective culture. People are very supportive of each other. It has its ways like any other culture. For example, one behavior that’s culturally deaf is that, if you grew up with a strong deaf identity, then when you’re sitting at a table and you’re signing, if somebody joins the conversation, people don’t look up. They know you’re there, they continue talking, but they automatically move over to allow somebody else in. There’s no interruption in the conversation. They have very simple rules and ways like that, and it adds up to cultural norms.
So it’s kind of got an etiquette of its own.
For sure. It’s very physical and visual. Deaf people are also extremely straightforward. I love that. When I went to Germany, talking to deaf Germans was very easy. It was a different sign language, but the second you meet each other you are instantly friends. Different languages have different sign languages, but the expressions, ideas, and concepts are similar. I think it’s easier for deaf people to communicate across their different languages than it is for hearing people.
You’ve been talking about the difference between American Sign Language and English as though they’re different — for example, with the translation of this interview [which was conducted live, with translator Lisa Reynolds]. How are they different, and how do you navigate the difference when you’re writing versus signing? Do you think differently?
It’s sort of like translating from Chinese to Spanish, or from Spanish to French.
Yeah. Really. Very different. That’s why I think ASL is a unique language. ASL is derived from French Sign Language mixed with home sign language. It’s influenced by those but has its own formalized grammar. The tone is conveyed through body movement and facial expressions.
I like using the piano as a metaphor. Playing the piano is similar to ASL. When you put your pinky finger down that’s one note. Each finger has its separate notes, and all together you have 10 notes. So if you put them down at the same time, they become a chord. That’s like ASL. It’s not the same as English. It’s spatial, not linear. If you think of a facial expression as one note, then body movement as another note, then speed as another note, hand shape, placement, and so on — all these parts add up to convey the message. When you do it all simultaneously, it becomes a chord.
What about bypassing language altogether? What do you think of Mary Lou Jepsen’s TED talk about the brain-to-digital interface?
The idea is really creepy, but amazing. It’s a way of communicating without needing language. I do, however, question the politics of it. The people who are developing the program — are they the ones deciding what it would look like? I’m a little fuzzy on the details of it, on what it would look like if executed.
Have you seen Neil Harbisson’s talk about synesthesia? I find it amazing, but it also became political because he picked the colors. There’s a line that is crossed. What if I wanted to decide for myself? The same parallel exists with the cochlear implant. It’s limited to only a few channels of sound. The human ear has tons of channels, whereas the cochlear implant has a very limited number. So the doctors or manufacturers are the ones deciding what hearing-impaired people will benefit from the most. I have a problem with the politics. That’s my question about this technology. I think it’s a great idea to remove language and to have a different way of communicating, but I’m curious how much control I would have.
Below, watch “Face Opera ii,” in which performers take turns conducting and co-conducting four separate scores on an iPad, developed from the different parameters of the language. Roughly 30–40 percent of American Sign Language is the manual production of the language; the rest is expressed on the face and through body movement. The piece is a commentary on how society places value on vocal and spoken languages, leaving little room for visual languages.