Digital Ethics and the Wider World
While many of the chapters in Digital Ethics focus on academia—whether classroom practices specifically or academia as a whole, including its crossover with online public spaces—many other chapters consider the role that ethics plays in the online world at large. These chapters fall into two themes: one group of writers considers how bad actors use online technologies for their own ends, while the other examines the role of platforms and technology in shaping or allowing un/ethical behavior. Both approaches offer useful, nuanced insights into how the internet became, in Lisa Nakamura's (2019) words, "a trash fire," and both provide productive starting points for activism, research, and teaching.
The chapters "Theorycraft and Online Harassment: Mobilizing Status Quo Warriors" and "Volatile Visibility: How Online Harassment Makes Women Disappear" both offered valuable insight into the patterns and effects of online hostility. Alisha Karabinus used the lens of theorycrafting, "a deeply analytical approach to maximizing play efficacy" that emerged from video games, to explore how people have exploited online platforms and created guidelines and instructions for engaging in harassing and abusive behaviors (p. 167). Karabinus's analysis demonstrated how unethical behavior online often rests on a deep understanding of how online spaces function and why events like GamerGate cannot be written off.
Bridget Gelms's chapter on the role of visibility in harassment is the other side of the coin: she interviewed women about their experiences of online harassment and the ways they have attempted to minimize their exposure to abuse by carefully curating and shrinking their online presence. Gelms observed that "online harassment significantly alters women's relationship to social media and technology" (p. 180). One of the chapter's major contributions is its examination of the effects of hypervisibility and invisibility on women's online and offline lives. Together, Karabinus's and Gelms's chapters offer a picture of how harassers make online spaces unsafe and toxic. Other chapters in the collection turn their focus to online and digital spaces themselves.
Platforms and Technology
Michael Trice, Liza Potts, and Rebekah Small, in their chapter "Values versus Rules in Social Media Communities: How Platforms Generate Amorality on reddit and Facebook," examined the role of ethos and values in what they call "modern rules-based communities of harm" (p. 33). Communities of harm are online spaces that "place the individual's freedom to act aggressively above any community value or long-term goal" and that often use internal rules to preserve their presence on sites like reddit and Facebook even as they skirt platform guidelines or reject those platforms' values outright (p. 33). Through a close examination of the KotakuInAction subreddit and a cluster of Leftbook groups, Trice, Potts, and Small unpacked how the prioritization of rules over values leads to hostile and even abusive behaviors.
In "Finding Effective Moderation Practices on Twitch," Tabitha M. London, Joey Crundwell, Marcy Bock Eastley, Natalie Santiago, and Jennifer Jenkins took a different approach to how platforms can shape un/ethical user behavior, examining Twitch's moderation capabilities and the ways several streamers do (or do not) employ them to address gender-based harassment. Taken together, these two chapters provide insight into how platforms and users interact both to create the conditions for amoral behavior and to mitigate it. Scholarship on moderation, rules, and values on other platforms will continue to add to the discussion.
"The Banality of Digital Aggression: Algorithmic Data Surveillance in Medical Wearables" moved the conversation away from social media spaces—a dominant focus of many chapters—to digital ethics as it relates to medical devices. Through a collaborative introduction, Krista Kennedy and Noah Wilson contemplated the impact of data surveillance on users of essential medical devices (like Kennedy's hearing aids) versus users of voluntary devices (like Wilson's heart rate monitor): "One of us decides daily how much data they will generate and how much they will let their body be tracked. The other of us simply generates data, which is simply tracked. Only one of us consents or resists because only one of us can opt out… without significant repercussions in their life" (p. 216). Kennedy and Wilson's shift from thinking about digital aggression in terms of human interaction to the interactions humans have with devices, data, and surveillance is an essential addition to the collection. The chapter also added new layers to the question asked in Dieterle, Edwards, and Martin's chapter: What happens when the circulation and amplification of data is outside an individual's control?
Taken together, the chapters in Digital Ethics that deal with directed harassment and with ethical failures at the level of platforms and larger social systems paint a bleak picture of digital existence. However, the collection as a whole, and these chapters in particular, offer calls to action that other scholars can and should take up. How do the platforms and technologies we use on a regular basis (like Microsoft Word, Canvas, Blackboard, and Outlook) shape our behavior and our ethics? What platforms and technologies do we use for research, for teaching, and for socializing, and what information about us circulates as a result? What interventions can and should scholars in the field make in online spaces, and what might those look like? While the authors acknowledged that the current state of digital ethics leaves much to be desired, they also argued that the status quo does not have to remain as is, and they challenged readers to carry forward the projects they started.