By the end of this anthology, I found myself a bit tired of hearing about what we will lose; I found more productive the comments about what we might gain by renewing our own exploration of assessment tools. We also need clear strategies for communicating the shortcomings of AI evaluators to others. Perhaps we do a poor job of defending writing. This is not just self-interest, by the way: by defending writing we are defending a conception of education that we embrace because we think it helps people learn.
Indeed, in the end, it is not about technology at all. It is about writing. Anson cautions that until we know more about the effects of writing to and for machines, “we must proceed cautiously with their use in something as important and presumably humanistic as deciding the worth and value of people’s writing” (55). Diagnosing student writing, Haswell says, can never “be construed as easy, for the simple reason that it is never easy” (77). Perhaps the structure of the field of composition—low salaries, low prestige, bad self-image—has helped dupe us (and if not us, then others) into seeking the easy way out when it comes to writing. As many of these authors point out, others outside our field are happy to ignore us—or even to plow over us in pursuit of their own objectives, which may in some cases involve teaching people to write more effectively but more often feature efficiency, cost-cutting, and removing the human element from writing instruction. Solving the problem of AI graders may be inextricably intertwined with addressing the problematic structure, and image, of writing instruction in American schools.