How do you assess digital rhetoric in the classroom?
Assessing digital rhetoric is a complex pedagogical practice, one interviewees describe as “sticky” and “problematic” (Hodgson), “perilous” (Losh), and even “a beast” (Lee). One of the reasons assessing digital rhetoric is so challenging, as Collin Brooke indicates, is that these texts are often marked by their “novelty,” which “mitigate[s] against reliable ways to assess [the] work.” Moreover, and as Rory Lee notes, the concern regarding novelty becomes further complicated by the breadth and diversity of digital texts students tend to create and submit. “How is a podcast comparable to a video,” for instance, or how is a Prezi “comparable to a blog post?” In other words, as Lee puts it, “How can [teachers] actually assess these [projects] in ways that are commensurate?”
The challenge of assessing digital rhetoric presented here, then, seems to be one of rupture: these texts are not like print texts, and thus assessment of them should be different as well. However, the general approaches that emerge below take up this challenge by orienting themselves to continuation—that is, by drawing on best practices in assessment from the fields of rhetoric and writing studies: scaffolding and sequencing, composing process, clear evaluative criteria, and evidence of learning.1
The Para-Texts of Assessment: Pointing to Learning
Despite the difficulty of evaluating digital rhetoric, the interviewees did identify and share specific assessments. One common approach was to have students create reflections on and about their digital texts. Here, students work to answer, per Steve Holmes, questions such as “why did you try to do what you did?” In taking this route, teachers turn the focus toward process rather than product; as Holmes says, “I’m much more interested in students being able to talk about the process and develop some self consciousness about the choices that they’ve made than I am about evaluating a finished form, per se.” In having students reflect on their process, teachers are asking them to identify indicators of learning by clarifying what they did, explaining how it was rhetorical and intentional, and describing what they learned: “It’s not really so much about the grade; it’s about what they learned through that process,” says Estee Beck.
As if to counter objections to grading students’ composing processes, Crystal VanKooten states, “It’s not that their product doesn’t matter;” rather, the product “is not where [learning] resides.” Stephen McElroy reiterates this point: “What you’re assessing is not just the product […] but assessing the student’s understanding of the process of creating that [product] and the fittingness of the product that they created.” While this set of responses still values the digital texts students create, other responses, as McElroy foreshadows, showed minimal evaluative concern for the product and focused almost entirely on students’ ability to reflect on the process and demonstrate what they learned as a result. As Jennifer Warfel Juszkiewicz states, “You don’t actually grade the digital project itself; you grade the reflection.” This can be an effective means of evaluation, especially toward the beginning of the semester, because it obviates the immediate need for students to have the technical skills necessary to create digital projects and it helps “students to take a risk and try.” Nathaniel Rivers offers a similar approach:

The way that I’ve resolved the issue of how do you grade digital projects is removing the grading from the product itself and essentially on to the work that they do. So they’re basically, at the end, making a case for the work that they did. And that’s what gets graded.
While assessing reflections seems to level the technological playing field and provide a way out of the “sticky” situation of grading novel projects that are diverse and use different modes and media, it also signals a central irony in attempts to assess digital rhetorical work: that is, teachers fall back to relying on words. Devoting all of one’s evaluative attention toward the process vis-à-vis reflection does raise other concerns, too. As VanKooten admits, “Sometimes, reflections can be not always the most authentic genre.” To address this, VanKooten suggests “having students doing lots of different kinds of reflection.” For example, they can talk to classmates, they can write in class, and they can compose “a more formal reflection essay.” Another way in which students can participate in reflection, as Kristin Arola says, is in “the shape of a presentation.” In having students reflect in multiple ways, we can, as Arola continues, discern “a discursive consciousness, which indicates learning.” Moreover, if students are aware from the outset that they need to practice reflection in one or more forms in order to explain both what they did (process) and what they gleaned from doing so (indicators of learning), they are more likely to think consciously and critically about their text throughout the process of production. Said otherwise, this particular form of assessment also attempts to instill and encourage a rhetorical mindset in creating digital rhetoric. A different form of reflection, one Doug Eyman offers, is to ask students to determine how they should be assessed:

I think it’s incumbent upon the student to be able to explain the success or non-success, where the issues were with the production. So I think the assessment lies primarily in the rhetorician, in the rhetor, in the maker of the rhetorical object. I want them to tell me how I should assess this because it means they have an understanding of what they’ve done in a way that’s really more critically engaged than if I give them a rubric.
Although Eyman considers the rubric to be less appropriate for evaluating digital rhetoric, others argued for the effectiveness of rubrics. For instance, Jon Wargo finds value in using rubrics “as a pedagogical tool to think about either mode or resource.” In this model of assessment, students can benefit from thinking of a project in terms of its component parts, which rubrics, in isolating important characteristics in a given text and rhetorical task, ask students to do. In discussing rubrics, interviewees provided not only potential criteria but also additional strategies for developing criteria. Liz Losh shares the four criteria she tends to use when assessing digital rhetoric: “conceptual, rhetorical, stylistic, and technical.” Losh does clarify, however, that she nonetheless evaluates students holistically, so that a student who conceptualizes well but lacks the technical ability to execute that concept can still succeed. The logic here, as Losh explains, is “to not get too enamored with a particular kind of polished artifact.” As such, and similar to those who rely on reflections for assessment, those who use rubrics might consider more than just the finished product in evaluating digital rhetoric.
Angela Aguayo’s criteria overlap with Losh’s in that she, too, uses “conceptual and technical” as two criteria. However, Aguayo also includes a third criterion: storytelling. In talking about this criterion, Aguayo frames it through a question: “Did they accomplish not just what the assignment was but articulate a story?” The difference between these two rubrics reflects a difference in the types of digital rhetoric each teacher is asking students to create. Losh’s assignments ask students to create digital arguments, hence the attention allotted to rhetorical and stylistic dimensions; Aguayo’s assignments ask students to create digital stories through video.
Another way to develop assessment criteria is to do so collaboratively with the students. As Lee says:

As a class, we come together, and we say, “here’s the project prompt.” […] So given this, and given the kind of content we’ve been discussing in class and you’ve been reading outside of class, what seem like appropriate criteria that are also loose enough, that are capacious enough, that they will be applicable to the range and breadth of texts that they are going to produce?

This process, similar to Eyman’s proposed method of assessment, makes students responsible for their own learning by asking them to think about appropriate assessment criteria in the context of a given project. Such a process also works to make transparent, and therein help demystify, the assessment process. Moreover, this approach is inclusive of many voices, and it requires that students work collaboratively and dialectically to determine a set of criteria suitable for all participating rhetors. That said, getting students to compromise and arrive at a consensus can be difficult and time-consuming, and in guiding this collaborative process, teachers need to be cognizant of how the “louder” voices can dominate the conversation and exert control over which criteria are included and how they’re understood. In addition, some students don’t feel adequately prepared and qualified to make decisions on assessment, preferring instead that the expert—the teacher—make such decisions for them.
A third form of assessment that emerged through interviewees’ responses was to evaluate digital rhetoric by conducting a rhetorical analysis. For Eyman, “the assessment has to come through the rhetorical analysis of that particular thing, object, performance, whatever that has been created.” James Brown agrees, saying, “I go about assessing digital rhetoric essentially the same way I go about assessing any other work,” that is, rhetorically, which is implicitly an argument for the continuation of rhetorical theory in digital rhetoric pedagogy. In taking this approach, interviewees also wanted to ensure that students understand what it means to rhetorically analyze digital rhetoric: such an understanding makes them critically aware of how they’re being assessed, and it helps them think rhetorically, which they can leverage in not only the analysis of digital texts but also, and more importantly in this context, the creation of their own digital texts.
To help students foster an operable understanding of a rhetorical analysis, interviewees said they devoted in-class time to modeling rhetorical analyses for their students, who then practice such analyses on their own. Sarah Arroyo outlines the process: “I bring up rhetorical strategies, go over them, and analyze, and then I hold them accountable for those things.” Justin Hodgson similarly speaks to how students practice rhetorical analyses in ways that inform the production of their own digital texts and the subsequent assessment of them:

Students spend the time on their own to identify, “well, what makes a good kind of this thing I want to make?” So they do the research to identify the various genres they’re participating in, the communities they’re looking at, and then thinking about what qualifies as a good work.
Orienting Ourselves to Learning
As is evidenced in the above discussion of the genres of digital rhetoric assessment, a particular method of assessment—reflection, rubrics, rhetorical analysis—results in a particular pedagogical sequence: work with students in class toward creation of the genre; have students practice the genre, ideally with texts similar to ones they will create; and then task students with producing the genre that will be assessed. This scaffolded approach not only makes students privy to the assessment process but also asks students to employ the assessment themselves, a move rooted in the longstanding commonplace that practice with and in a meta-genre begets effective composing.
Regardless of how we assess digital rhetoric, we should attempt as much as possible to learn from our evaluative practices. In addition, we need to foreground assessment in ways that assist student learning and that foster an awareness of that learning. As Kathleen Yancey says, “digital rhetoric is sufficiently new that it behooves us all to use every single one of these opportunities as an opportunity to learn.” Phrased as a set of questions, for both teachers and students, Yancey offers these: “What did you learn, what did we learn, what have we all learned that can contribute to this larger enterprise?” In using assessment as a gateway to learning, teachers should also remember that the forms of assessment detailed above do not need to be practiced in isolation. For instance, teachers can use rubrics and have students create reflections. Furthermore, within their reflections, students can rhetorically analyze their own digital texts. Rubrics can also contain criteria that teachers can use to rhetorically analyze student work, and students can use those rubrics in thinking about and evaluating their own work in their reflections. Thus, in looking across the responses to assessment, we see an attempt to make the evaluative process transparent and reflective, to involve students in the process, and to make the process a rhetorical exercise, one explicitly keyed to learning.
1 In this way, the approaches here also embody the broad points of consensus and dissensus about assessing writing more generally, though those particulars are outside the scope of this piece.