This summer (2024), we started a podcast about writing studies and generative AI. We bought microphones because, frankly, we were frustrated with how much of the conversation about generative AI in writing studies centers on early adoption and teaching students prompt engineering skills. We called the podcast Everyone's Writing with AI (Except Me!) not because we feel alone, but because we know we're not the only people frustrated by AI hype. And because—to put all of our cards on the table here—we have felt (and sometimes continue to feel) defensive about our place in current disciplinary conversations about generative AI in writing classrooms. No, we hear ourselves drone to colleagues and students, we're not interested in policing the use of generative AI. And no, we're not Luddites who fear change and progress. And yes, we believe that digital literacies and pedagogies must be taught.
While it's difficult to know how generative AI is being taken up in writing classrooms, the perspective from the top of the discipline was settled very soon after ChatGPT was released: Keep calm and adapt. You see this viewpoint promoted by the MLA–CCCC Joint Task Force on Writing and AI (2023), in the pages of a special issue of Computers and Composition (Ranade & Eyman, 2024), and in the assignments shared in the edited collection TextGenEd: Teaching with Text Generation Technologies (Vee et al., 2023). Many, including William Hart-Davidson (2018) and Jim Porter (2018), forecast our current moment and called on us to have the conversation about AI in writing classrooms and to prepare for critical engagement with technologies that threaten to make us replaceable and obsolete.
To be sure, we need to have a critical conversation about generative AI and its implications for writing instruction. Perhaps the path forward is teaching students critical AI literacy so that they may use generative AI in full awareness of its benefits and limitations. But that has been nearly the only option under discussion, and it feels premature to settle on it given the technology's serious environmental costs (Danelski, 2023; Kerr, 2024) and its racist (Liang et al., 2023; O'Donnell, 2024) and sexist (Kotek et al., 2023) outputs and outcomes. Generative AI, especially as a pedagogical tool, strikes a devastating blow to linguistic justice (Baker-Bell, 2020; Conference on College Composition and Communication, 2020; Inoue, 2019) and to the goals espoused by Students' Right to Their Own Language (CCCC, 1974). It does concrete harm to linguistic variation and justice (Kynard, 2023; Owusu-Ansah, 2023), and it furthers settler colonialism and environmental devastation (Danelski, 2023; Kerr, 2024). These aren't potential harms. The impacts of generative AI on language variation and the environment are real, material, and evident, and critical AI literacies as they have thus far been articulated neither address these harms nor name our culpability as writers, teachers, and researchers.