Given the quickly changing ecosystem of online information, studies like ours will continually need to be revisited in light of students' shifting evaluative impulses. While significant literature exists on source evaluation behaviors, which we review in the context portion of this webtext, the timeliness of our study, the demographics studied, and our data collection methods together provide a rich overview of first-year writing students' evaluation behaviors, one that can help us decide how better to teach students to interact with web-based information in composition classes. Here, we give an overview of some of the major findings of the study. First, we begin with the basic demographic breakdown of our participants.
Students rated the articles from 1 to 10, with 1 being very unreliable and 10 being very reliable. They rated each article twice: before and after they had time to examine it in depth. Students were first instructed to take two minutes to look at a screenshot of the article and provide a quick gut-reaction rating of its reliability. They were then instructed to open a Google tab and investigate the article in any way they chose. Here are the ratings for the five articles, both before and after students researched them.
Note both the minimum and maximum scores (out of 10) given by the research participants and the number of participants ("count") who finished the research task. The count decreases as the table progresses because some students did not finish rating every article within the 60-minute time allotment for each participant. The findings are presented in the order in which students rated the articles.
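The summary statistics described above (minimum, maximum, count, and mean per article, before and after research) can be sketched in a few lines of code. The rating values below are invented placeholders for illustration only, not the study's actual data.

```python
# Hypothetical sketch: summarizing before/after reliability ratings
# for one article. All rating values here are invented placeholders.

def summarize(ratings):
    """Return min, max, count, and mean for a list of 1-10 ratings."""
    return {
        "min": min(ratings),
        "max": max(ratings),
        "count": len(ratings),
        "mean": round(sum(ratings) / len(ratings), 2),
    }

# One article's ratings before and after the open-ended research step.
# A shorter "after" list reflects participants who ran out of time.
before = [3, 5, 6, 4, 7, 5]
after = [6, 7, 8, 7, 6]

print("before:", summarize(before))
print("after:", summarize(after))
```

A table row per article could then be built from these dictionaries, with the shrinking "count" value making the time-limit attrition visible.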
In our approach to grounded theory coding, we took note of what comments appeared the most often in terms of what made each source more or less reliable. These features are summarized below, and are further explicated in the interactive portion of this webtext.
- More reliable: NPR as a publishing body is familiar and well-known.
- Less reliable: NPR as a publishing body is biased/liberal.
Researcher comments: The way students saw the publishing body as both a boon and a detractor showcases student confusion over authority, especially where publishing venues are at play.
- More reliable: The author had expertise on the subject of the article.
- Less reliable: Huffington Post is a dubious publishing venue.
Researcher comments: Two forms of authority are at odds here: the author versus the publication venue. Students had a hard time balancing these competing impressions. Furthermore, many were unsure whether the Huffington Post, a fairly mainstream news source, could be trusted.
- More reliable: The embedded graph from NASA.
- Less reliable: The article's/website's bias.
Researcher comments: Even though students were quick to point out the website's obvious bias, they were still easily seduced by the graph's visual rhetoric, which to them lent authority to the piece.
- More reliable: Washington Post is a well-known news source.
- Less reliable: "Nothing makes it less reliable" was by far the top comment recorded under "less reliable" for the Washington Post article.
Researcher comments: While students generally trusted this source, and it was the highest rated of all the articles, the fact that its mean score after research was still 8/10 showcases a general distrust of media sources even when students can find ostensibly "nothing" wrong with them.
- More reliable: The sources cited/quoted.
- Less reliable: The article's bias, specifically evidenced in the language.
Researcher comments: Students struggled to balance the source's bias against the evidence it provided. When, they wondered, does bias outweigh evidence?