This piece is offered as both a demonstration of the role generative AI coding tools might play in multimodal composition, web development, and game design going forward, and as a commentary on the discourse surrounding generative AI in higher education today. Two iterative versions of a web-based game are included within: both were primarily generated through a method John Murray and I teach as "distant coding," the intentional use of generative AI tools to manipulate code through a combination of prompts and hand-coding, placing the focus on content and design rather than syntax and structure. This method is a reflective, critical-making-informed approach to what has been popularly termed "vibe coding" (attributed to Andrej Karpathy), a mode of working he described as "not really coding -- I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works."
My approach to distant coding throughout this project is slightly different: some elements, especially the interface design, required substantial editing for version one (completed in October 2024, before the term "vibe coding" was even coined), while the second iteration, built in June 2025, required almost no editing beyond the accompanying textual commentary. Both games are included here and are intended to be played in sequence: notice the changes in specificity, illustration quality, some interface elements, and even the humor of the text, which is entirely AI-generated (including the descriptions below):
Navigate the challenges of writing program administration in an AI-saturated academic landscape. From plagiarism detection to student support, every decision is fraught in this exploration of composition pedagogy meeting machine learning.
PLAY WRITING

Step into the role of a DH center director balancing computational innovation with humanistic values. Grapple with GPU environmental impacts, data ethics, and the challenge of explaining APIs to administrators who just want results.

PLAY DH CENTER

In combination, these games were created through approximately 100 iterative prompts across multiple AI systems (ChatGPT 4.0, Claude Opus 4.0, Claude Sonnet 4.0, DALL-E 3), with human curation, editing, and hand-coding throughout the process. The resulting artifacts raise questions about agency, creativity, and labor in an age of computational collaboration.
The code is far from optimized, but both iterations make reasonable use of the P5.js library, which I commonly teach as an accessible entry point into playful animation, electronic literature, multimodal composition, and basic web game design. The visual assets, generated through AI image tools and then extensively modified, are in many ways an extension of the concept of "programmer's art," but they are sufficient to the rhetorical task of this particular project, and they reflect two stages of image generation, produced about eight months apart.
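For readers unfamiliar with P5.js, a sketch in this vein might look something like the following: a minimal, hypothetical example of the kind of playful animation the library makes approachable, not an excerpt from either game, and it assumes the p5.js library is loaded on the page.

```javascript
// Minimal, illustrative p5.js sketch (not code from either game):
// a circle drifts across the canvas and resets when clicked.
let x = 0;

function setup() {
  createCanvas(400, 200);     // 400x200 drawing surface
}

function draw() {
  background(240);            // clear the frame on every loop
  fill(60, 120, 200);
  circle(x, height / 2, 40);  // draw the circle at its current position
  x = (x + 2) % width;        // wrap around when it leaves the canvas
}

function mousePressed() {
  x = 0;                      // restart the drift on click
}
```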
Similarly, this landing page was initially generated using a single prompt in GitHub Copilot with Claude Sonnet 4.0:
These iterative pieces and the process above demonstrate a progression from earlier experiments in using generative AI for multimodal composition, such as Douglas Eyman's 2023 Disputatio piece "Making a Webtext with ChatGPT." I offer them here as a provocation about how rapidly the process of multimodal composition can change. As someone who spends a significant amount of time as an educator trying to find new ways to bring people into code, public scholarship, and game design, I am excited by the possibilities of this workflow and by the ways that working with agents can eliminate many of the frustration points of making. As an administrator and educator who also relies on student engagement through reading and writing, I am exhausted by much of the current conversation and by the challenges that echo through both of these games.