| Subject | Poor (1) | Below Average (2) | Average (3) | Above Average (4) | Excellent (5) |
| --- | --- | --- | --- | --- | --- |
| For the Professor | | | | | |
| For the Graduate Student | | | | | |
| For the K-12 Teacher | | | | | |
Heidi A. McKee and Dànielle Nicole DeVoss curated authors who apply and demonstrate their multimodal work in action. Rather than simply discussing what a digital assessment might look like, the authors offer real assignment parameters, illustrate them with student examples and assignment submissions, and outline how these assignments might be evaluated. McKee and DeVoss aptly demonstrate ways readers may employ similar tactics in their own work to improve pedagogical practices and student outcomes. Because of this lens, Digital Writing Assessment and Evaluation proves most useful for collegiate-level teachers of digital rhetoric, although value can be found for a variety of audiences. Above is a rubric ranking the book's usefulness for each audience. Again, in creating this review, I employed tenets I learned from the text, such as using evaluative rubrics to present information.
As Andrea Lunsford demonstrates in the text's "Foreword," the work accomplished by this book is deeply collaborative, highlighting professors of digital rhetoric across the field and drawing into conversation the National Writing Project Digital's perspective and leaders in the field like Cheryl Ball. In the aforementioned essay by Charles Moran and Anne Herrington, the authors note that their approach to assessment begins with existing policy; thus, they are in conversation with the National Council of Teachers of English (NCTE) Curriculum and Assessment Framework. Grounding the work within standardized measures and policy situates this book expertly within the conversation for educators. Understanding the assessment of digital work allows that work to be better executed and taught, and admittedly, this book aims most of its material at the professor in higher education. Because of the explicit moments of application in the text, it best serves the professor whose role includes teaching, not only research.
The concept of the ePortfolio is likewise best applied to higher education in the text, whether at the individual class level or the program level. The book would serve readers better with more careful attention to overlap among individual essay topics: ePortfolios, for example, are belabored, as most of the featured essayists draw on the same points. While many chapters reveal new nuances of similar topics, such as the different uses of ePortfolios as both multimodal objects and assessment tools, many chapters lay the same foundation in setting up the ideas.
Additionally, in terms of ePortfolios, less consideration is given to how these tools intersect with students' unique backgrounds. Mya Poe does address diverse writers in the text, but this concept might be explored further in relation to ePortfolios. In the article "Intercultural Competence in Technical Communication: A Working Definition and Review of Assessment Methods," Han Yu (2011) noted, "as teachers, we cannot determine that one type of learning evidence is more valuable than another, especially when the strengths of portfolios lie partly in their student-centered interpretation. This dilemma is compounded when we work with students from diverse cultures because what students value in (intercultural) learning is likely to be culturally engrained" (p. 175). This nuance merits further exploration as professors incorporate ePortfolios into their classrooms.
Another topic that may justify more exploration is programmatic change resulting from the use of ePortfolios. As a reader, I understand the importance of ePortfolios: they are a key topic in the text, used to encourage and endorse student learning, to help faculty assess that learning, and to empower programs to use portfolios as benchmarks or as a means of understanding a department's or class's work as a whole. Steven Acker (2005) amplified this concept in "Overcoming Obstacles to Authentic ePortfolio Assessment," writing, "In turn, individual faculty can create a teaching ePortfolio to demonstrate how they help students learn and revise their pedagogy based on the same representation, reflection, and revision cycle. At the institutional level, ePortfolio offers an ideal tool for providing evidence of improved student learning, which is meaningful to accreditation agencies and funding sources" (Acker, 2005). However, although the text proves ripe with examples across most topics, I found the programmatic evaluation of ePortfolios somewhat lacking. The theoretical value is demonstrated, and the book touches on how faculty can reshape their philosophy and approach to teaching with ePortfolios, yet the number of examples illustrating this is not as robust as that demonstrating the individual importance of ePortfolios for students.
The graduate student can find value in this book as well: to understand the state of the field, to consider the dimensions of digital projects (which they are likely engaged in) in order to produce better work, and to store assessment material for later use, should their degree take them down a teaching route. I found I learned more than the perspectives of McKee and DeVoss; I also gained the overarching history of digital assessment without taking a history course, which would serve a graduate student population well. The arrangement of the essays, moving from concerns of inclusion to security to the actual assessment methodology, helps the reader come "up to speed," if you will, on all facets of teaching digital rhetoric in the modern age. The book is, indeed, modern: not only in its topic but in its timely and relevant examples, such as recent genre-bending curricula, the use of YouTube in the classroom, and the importance of the emerging ePortfolio.
Finally, many of the tenets of evaluating student work apply beyond the collegiate landscape. As digital work continues to permeate the many levels of the K-12 sphere, select concepts from the text are applicable to this audience as well. However, in her article "Reframing Reliability for Writing Assessment," Peggy O'Neill (2011) warned that the concept of assessment for teachers must be clearly defined. She wrote, "One comes from determining what we mean by writing assessment because as a field it encompasses teachers and researchers in K-12 education as well as higher education. Some of these professionals are trained in educational measurement, but many others are trained primarily as literacy educators" (O'Neill, 2011). To O'Neill's point, those who approach teaching through a literacy lens may not find this text as valuable as those tracking their own assessment of student work. Despite that limitation, evaluating digital texts for their multimodal nature, rather than through older, more traditional grading methodologies, can still help K-12 educators assess their students' work.