It has become almost rote to say that now, more than ever, students need to understand how to navigate web content to find verifiable and reliable information sources. While the data-collection portion of this study took place in the aftermath of the 2016 American presidential election, with students and the public increasingly confused about #fakenews and which media sources to trust (Rainie & Anderson, 2017), structural revisions of this article took place during the COVID-19 outbreak and the ensuing racial protests in the United States. As university campuses closed around the country in spring 2020, students, professors, and librarians alike headed home to watch the stories unfold online. And many of them also took to the streets in support of the Black Lives Matter movement. As the outbreak and protests evolved quickly, so did misinformation, partial information, and well-meaning but bad information. Will ibuprofen make me more likely to be hospitalized if I get coronavirus? Did federal enforcement groups take protesters off the streets in unmarked vehicles? Can I make my own hand sanitizer? Are young people dying from the illness? Which white nationalist groups were involved in rioting? These questions, and varied answers to them, spread like wildfire on the open web, with people grasping for any semblance of an answer during a time when answers were as scarce as toilet paper.
As trained information professionals watching the pandemic break, we observed, with interest and also unease, as information spread faster than the illness and societal unrest did. We wondered how students were receiving information about something so exigent, so timely. We continue to wonder whether educators are any surer of their information-evaluation footing than their students in such times of uncertainty and fear. Our research into the source evaluation behaviors of first-year students, we realize, can offer some answers about how individuals interact with information online. The findings may be a first step toward considering how composition courses can incorporate better media and information literacy pedagogy into their curricula.
Those of us who work with first-year writing (FYW) students realize that many of them receive their first formalized media and information literacy instruction in conjunction with first-year composition courses (Artman, Frisicaro-Pawlowski, & Monge, 2010, p. 94). It remains difficult to ascertain, however, how exactly students are interacting with online content, and what, precisely, composition teachers and librarians should be teaching first-year students in their writing classes. How can curriculum respond to the very real habits we see new students enact in their information consumption behaviors?
How to read
This webtext attempts to answer these questions by using the results from a study of first-year composition students' information evaluation behaviors as a basis for analysis and implications. In the summer of 2017, we, the librarians associated with the composition program at Brigham Young University, tested 20% of the students enrolled in Writing 150, the university's FYW course. We wanted to know how students go about determining how credible online information is. We were especially interested in information that is freely available and easily found through Google searches or social media—in other words, not academic articles hidden behind publisher paywalls.
As librarians, we wondered what made websites most and least reliable for students. We wondered whether students would notice differences between mainstream and fringe web publications, or differences in article genres. Using a proctored survey, talk-aloud protocols, and screen recordings, we observed the source evaluation behaviors of first-year students and then coded them using grounded theory. As we assessed our findings, we began to separate major trends into novice- or expert-level evaluation behaviors based on previous research and our own expertise interacting with thousands of first-year writers each year through library-run information literacy sessions.
Ultimately, we identify three pitfalls that arose from students' source evaluation behaviors. These, we believe, are salient points for further investigation by librarians, educators, and compositionists alike. We complicate these pitfalls in the discussion section of the webtext, but here we offer a brief overview of the issues you will see us returning to repeatedly as you read. As you move through this webtext, we invite you to consider how these pitfalls interact with one another and how educators might respond. We offer our own pedagogical implications in the final section of the webtext.
- First, there is confusion over what constitutes authority online. Students had difficulty applying rules of thumb for information evaluation in the ever-changing landscape of the web. Who or what should they trust? As we transcribed each session, we could hear the confusion in their voices, and it was equally obvious in their justifications for what made the articles more or less reliable. In short, students were confused. We observed them struggle over which features are most important to assess when it comes to authority, especially when some features seem to conflict with one another. Students had difficulty pinpointing which markers of authority were credible and why.
- Second, given the difficulty of assessing authority, students relied on their own assessments of credibility and reliability, often resulting in confirmation bias and other problematic, short-sighted responses to the information presented. Such responses did not situate the information in its wider context but instead reflected a very narrow, personally inflected reading by the student. When confronted with conflicting emotions or markers of credibility, students fell back on pre-existing notions, emotional reactions, and instinctive responses to the material.
- Third, also connected to the difficulty of ascertaining how authority is derived online, students were easily seduced by visual rhetoric such as graphs, videos, and web design features. They treated these as stand-ins for believability and trustworthiness and, as such, ignored more important markers of information's value, such as whether the information could be corroborated, was well researched, or was disseminated by a trusted source. Such shallow source-evaluation behaviors corroborate findings from other research in the domain of source evaluation (Faix, 2014; Currie et al., 2010; McClure & Clink, 2009). We argue, further, that this behavior stems from the overall confusion over authority specifically.
The ability to distinguish between reliable and less reliable information objects will only continue to grow in importance in an era of deepfakes, social media news, and distrust of mainstream media outlets. Correctly interpreting evidence-based claims about how to protect yourself against COVID-19 can have long-term health consequences, and discerning between biased and unbiased information about the protests surrounding racial inequities can significantly shape what structural changes are made in American institutions and society moving forward.
This webtext provides an interactive view of the landscape of our study. We selected five online sources, which you can explore from the menu in conjunction with your reading of this text. Reading the text will be integral to your understanding of the interactive features. You will notice in your exploration that these are not the actual articles with which students completed the study. Given copyright constraints, we created mockup pages that serve as stand-ins for the actual pages. Lorem ipsum text stands in for the non-relevant parts of each article, and our design emphasizes any parts of the page that students engaged with in our coding of behaviors. Each mockup links to the original page, so you can compare and explore the original material as well. Our discussion, findings, and analysis are all underpinned by the interactive features, and we will draw your attention to them as you read. After you have read the text and have the full context of the study, we invite you to explore the interactive features in full.
Throughout the rest of this webtext, we will discuss the ways in which students interact with popular sources and how they determine whether information is credible. We will address how students responded when confronted with unknown websites and ideas that challenged their points of view, as well as how educators might best teach them to critically assess the cascade of information they are exposed to each day. By better understanding how students evaluate information, we can better facilitate the types of learning necessary for students to become responsible digital citizens in this time of uncertainty and change.