Commensurability
Many multimodal texts exclude disabled audiences because they are not commensurable across multiple modes, thus rendering the text inaccessible. Consider, for example, the kairotic space of a presentation at an academic conference. Conference presentations are highly inaccessible to a variety of participants. For many deaf people, like me, it is difficult to follow an oral presentation without another channel for accessing the information that is embedded in the sound of the presenter’s voice reading their paper; consequently, opportunities for engaging in the circulation of ideas within the presentation (or afterwards) are lost. Sometimes additional channels for accessing the aural information in the presentation are provided by presenters when they share a script of their presentation, or by third parties, such as sign language interpreters or real-time captioners, who convert sound messages into visual ones. More typically, however, if an audience member does not have access to the mode of sound, none of the other multiple modes that are part of a standard conference presentation—handouts, images, Prezi, PowerPoint slides, audio and video clips, or the speaker’s bodily movements—provide access to the thoughts and ideas in the spoken talk. These modes complement one another, but they are not commensurable: They do not repeat or reinforce the same information via multiple channels.
This lack of commensurability across modes can be problematic in classrooms, too. While the kairotic spaces of classroom discussions tend to privilege one channel, sound, as students and teachers talk about class materials, some recent innovations have encouraged the simultaneous use of multiple discussion channels to invite participation. A recent New York Times article highlighted several teachers who incorporated social networking tools, such as Twitter, in their classrooms to enhance discussion (Gabriel, 2011). Such moves can help classrooms become more inclusive by inviting students who might be uncomfortable participating in a spoken discussion to share their thoughts in an online discussion instead. The photograph provided alongside the article represents one such discussion as students sit in a classroom with their desks arranged in a circle. Most of the students have laptops in front of them displaying a Twitter feed, which they are presumably monitoring while participating in a face-to-face classroom discussion.
Despite the multimodal richness of this classroom discussion, a closer look at this kairotic space points once again to the inaccessibility of multimodal space for bodies that don’t have access to all the modes. If a student cannot see the Twitter feed, or if a student has trouble simultaneously processing two sets of information being presented in discrete modes during class, then they will not have access to the full discussion. If one mode indexes another, as when a student comments upon something written in the backchannel without orally repeating it, or when a backchannel comment responds to an oral utterance, a participant needs to have processed both of these modes in order to fully access the conversation taking place.
These examples of multimodal inhospitality point to an argument that, in some ways, opposes a core tenet of multimodality: that multimodality is valuable because of the way it engages multiple senses at once, thus immersing users more fully in an environment or amplifying the communicative resources of a text. In this way, some multimodal texts may become easier to understand because they juxtapose text, color, image, movement, and sound. An example of multimodality as a communicative enhancement is offered in Kress’s (2010) Multimodality: A Social Semiotic Approach to Contemporary Communication. Kress analyzes a map pointing drivers to a supermarket parking lot, emphasizing the synergy between the multiple modes displayed in the sign:
Using three modes in the one sign—writing and image and colour as well—has real benefits. Each mode does a specific thing: image shows what takes too long to read, and writing names what would be difficult to show. Colour is used to highlight specific aspects of the overall message. Without that division of semiotic labour, the sign, quite simply, would not work. (p. 1, emphasis in original)
The value of this sign, and its success at redirecting drivers into the parking lot, comes from its good use of available resources for creating meaning. The emphasis is not on providing similar information through multiple channels; it is on the richness of representation that multimodality entails. However, in many venues—during teaching, or when texts are distributed to broad audiences with a wide range of needs, preferences, technological resources, and so on—multimodal texts can miss their mark if they are not flexible enough for users to modify them, or when they don’t offer primary information through more than one mode.
Many multimodal texts are not commensurable across their various modes. Yet, the literature on multimodality pays scant attention to what happens when one or more of the modes of representation in a given text or digital environment are inaccessible to someone consuming that text or participating in that environment. This lack of commensurability across modes means that for many multimodal texts, if someone cannot access one or more of the modes, the entire text is inaccessible. I am not here trying to challenge the utility of modes complementing one another to facilitate the transmission of meaning, such as the way a musical score can enhance an audience’s feeling or mood alongside visual cues during a well-realized film. Rather, I am pointing to the way that multimodality almost universally celebrates using multiple modes without considering what happens if a user cannot access one or more of them.