Where Access Meets Multimodality: The Case of ASL Music Videos

Janine Butler

A young man is signing the word love. On screen are the lyrics "you love."

Example #3: Enrique Iglesias’ “Hero”

In Sean Berdy’s (2012) ASL version of Enrique Iglesias’ romantic Latin pop song, “Hero,” the role of Enrique is played by a young man (Berdy, a Deaf actor) who conveys the emotional content of the heartfelt song on his face and in his soulful blue eyes. His body gestures are equally vibrant; he gestures toward his heart and embraces himself, his facial expressions full of passion.

Berdy (2012) used clever camerawork that celebrates multimodal communication. At one point, the young man rides his bike with the camera oriented toward him from the handlebars. Pedaling with his legs and riding hands-free, he embodies the emotional significance of love in the song through his body. Instead of treating sign language as a limitation, something that would make it difficult for the young man to sign to his loved one while riding his bike, the composition uses signed communication to show the potential of love. (See video clip below.)

The camerawork and the visual lyrics reflect each other in tone and structure: the scenes slide in and out of the frame, and the lyrics dissolve in and out. This particular song does not contain extensive lyrics, so the focus is drawn toward the acoustics and the visuals. But the words that are expressed matter, and they are given focus through signs and visual text.

When visual text appears, only one or two words are on screen at any given moment, reflecting the tone of Iglesias’ song. Occasionally, four or five words may share the screen, but they never all fade in at once. The tempo is clear, and the rhythm of fading in and out matches that of the spoken lyrics.

Berdy (2012) provided a poignant example of multimodal compositions that allow for sound, embodied visual text, emotions, and gestures to be merged into a single composition in which the theme of love is accessible. Composition instructors can point to these elements in order to draw focus on what Bump Halbritter (2012) called the “multidimensional rhetorical layers of twenty-first century writing” (p. 97). The layers of aural and visual media synchronize and merge in ASL music videos as the visual text embodies the tempo and rhythm of the music.

The multidimensional rhetoric in this video “integrates a variety of modes, media, and genres—sound, images, language, music, etc.” to embody a multisensory musical meaning that could not be expressed through a single mode (Halbritter, 2012, p. 26). Halbritter’s multidimensional rhetoric reflects Sonja Foss’ (2004) call for an expansive definition of visual rhetoric that recognizes human experiences as spatially oriented, non-linear, multidimensional, and dynamic. ASL music videos use three-dimensional signs and animated visual text to capture the embodied experience of music and life.

We can draw from multidimensional compositions such as “Hero” to develop strategies for making meaning accessible through different modes. As Halbritter asserted, to become creators of multimodal compositions, composers need to understand and identify the rhetorical possibilities open to them. We can analyze how the visual text in “Hero” fades in and out with the music to sense how the aural and visual layers can complement and enhance, rather than compete with, each other, increasing the different ways that audiences can access a piece.

The complementary fading in and out of the text in “Hero” is a visual recreation of Kristie Fleckenstein’s (2003) imageword, in which she envisioned meaning as created through the “fluid, recursive movement” of image and word (p. 2). As Fleckenstein wrote, “images tend to nest a range of senses, resulting in meanings that are collaborative products of sound, sight, and touch, providing full and resonant…significance to meaning. ‘Seeing’ doesn’t occur alone or in isolation but is accompanied by feeling” (p. 20). The soft visual text in “Hero” is accompanied by tender feelings of love and loss, the sensation of the lover moving beyond our touch, the embodied sound of the song, the taste of the rain. The collaborative composition can be accessed by different senses.

The multisensory experience created by the layers of Sean Berdy’s (2012) video embodies the emotional content of the song. Students who recognize the possibilities for the interplay of aural, visual, and other modal layers of videos such as “Enrique Iglesias’s Hero in American Sign Language [Sean Berdy]” can then strategize different ways to express meaning in different modes and reach different senses. How might they create multilayered compositions that better reach the senses for those who cannot see the visual layer, for instance? Let’s begin by introducing these videos in the classroom, as I discuss in Pedagogy.

NEXT: Pedagogy: Introducing ASL Music Videos