Where Access Meets Multimodality: The Case of ASL Music Videos

Janine Butler

[Image: A young man signing the word LOVE while the on-screen lyrics read "you love."]

Example #1: Owl City’s “Fireflies”

The ASL version of Owl City’s “Fireflies” (D-PAN, 2013) demonstrates how synchronizing meaning across multiple modes can improve the accessibility of a composition.

“Fireflies” and other ASL music videos have been produced by the Deaf Professional Arts Network (D-PAN) (n.d.), an organization whose mission is “to make music and music culture accessible to the deaf and hard of hearing community, and to give recognition to deaf and hard of hearing artists everywhere” (Mission Statement section, para. 1). Since 2006, the organization has been creating and distributing ASL music videos featuring a range of Deaf performers, from professional Deaf musician Sean Forbes to young adults performing in their first ASL music videos. Many of their videos have become highly successful productions online. In addition to posting music videos online and creating music video collections on DVDs, D-PAN has hosted workshops and camps in which Deaf professionals taught students how to create their own ASL music videos.

The video below features preteens and teenagers from the 2013 D-PAN ASL Music Video Camp performing “Fireflies” by Owl City. Several youths take turns facing the camera and signing lyrics in time with the sung vocals. As one youth signs, the others sit in the background of the cabin, one playing a guitar and the others rocking to the beat of the music.

D-PAN’s (2013) “D-PAN ASL music video ‘Fireflies’ by Owl City” redesigns captioning conventions by reducing the number of words that appear on screen at once. Each visual word fades softly into view in sync with the singing voice and the signing body. As each new word appears, several of the preceding words remain on screen, conveying the spirit of the lyrical moment.

This video uses dynamic visual text to reinforce the rhetorical performance of sign language in embodying the meaning of the song. Sign language accompanies the lyrics from beginning to end, and the camera focuses on individuals gracefully signing the tender message. Facial and physical gestures, core components of communication in Deaf culture, intensify the emotional atmosphere for both deaf and non-deaf audiences. The emotional content is made accessible to viewers who can see, but may or may not hear, the song.

In a tender moment in which the words “that planet Earth turns slowly” appear, the camera focuses on the signer’s hands. Visual words appear one by one as they are signed by the youth. Since the grammatical structures of ASL and English differ, SLOWLY is signed as “turns” appears in the textual and verbal lyrics, while TURNS is signed in sync with the textual and verbal word, “slowly.” The auditory and visual synchronicity emphasizes the beauty of the Earth turning slowly.

D-PAN’s (2013) “Fireflies” and other ASL music videos have made manifest ASL linguist William Stokoe’s (2006) statement that ASL poetry “shows that there is a nonparadoxical meaning in the term silent music and reminds us that rhythm stems from movement, not from sound” (p. xiii). In ASL music videos, body gestures, facial expressions, and dynamic visual text move together to embody the rhythm of the song. The lyrical content and the rhetorical meaning of the song become manifest through visual-spatial-kinesthetic movement.

In his review of different genres of ASL literature, Ben Bahan (2006) noted the ways that ASL songs bridge the cultural languages of English and ASL: “Some elements of vocal songs are transposed into the signed modality, such as fluidity of words/signs and the rhythm. The cadence of songs usually springs from the structural way signs are formed... and is visually pleasing” (p. 34). We can draw from Bahan’s discussion to recognize that ASL music videos transpose elements of vocal songs not only into the signed modality, but also into the dynamic visual text that appears on screen. The rhythm is visually recreated through the signs and through the qualities of the visual text.

My analysis of the embodied nature of dynamic visual text responds to deaf studies scholars who value the rhetorical and embodied qualities of ASL performances, notably Bauman, Nelson, and Rose (2006), who juxtapose English text in print and ASL on DVD. Brenda Jo Brueggemann (2009) in particular celebrated “the unique nature of ASL—its performance and passage as a nonprint, nonwritten, visual, and embodied language” (p. 34). I argue that the visual values of Deaf culture become apparent through videos that embody music in sign language and dynamic visual text.

This multimodal composition reinforces the value of meaning being expressed across multiple modes. When designers combine multiple modes to create meaning, they can tap into what Hull and Nelson (2006) called the “deeper aesthetic power of multimodal texts” (p. 229). Hull and Nelson’s analysis of multimodal compositions demonstrated that the combination of modes “transcend[s] what is possible via each mode separately” (p. 251). In the ASL version of “Fireflies” (D-PAN, 2013), combining the dynamic subtitles with ASL, the music, and other modes enhances the rhetorical and aesthetic composition. Making meaning accessible across a range of differences in modes, bodies, and senses is a core component of accessible multimodal pedagogies, as discussed in Multimodality.

NEXT: Multimodality: Multimodal, Multisensory Communication