What do first-year students find reliable in online sources?

Silva, Green, Mendoza

Methods

Brigham Young University (BYU), located in Provo, UT, is a large, private, highly selective religious university. Depending on the year, around 65% of students enrolled at BYU take a first-year writing (FYW) course during their first year of study (Writing 150: Writing and Rhetoric). Tasked with teaching these students information literacy skills connected to their research projects in FYW, we wondered how students were assessing information they found online. We devised a test in which students evaluated five different information sources while we captured their behaviors through a proctored survey, screen recordings, and voice recordings. Our study was reviewed and approved by our institutional review board (IRB).

To recruit participants, we visited Writing 150 classes on the first or second day of the semester and distributed recruitment flyers. We wanted to test students before they had received any formal information literacy training from librarians and while they were still very new to the university. To encourage participation, we promised each participant a $10 gift card. In total, 89 students participated, which was 20% of the total Writing 150 enrollment over the summer of 2017. During the test, we gathered demographic information on the participants such as age, sex, and how many semesters of college the student had completed. Though this demographic information helped us understand the student population we were testing, overall we did not see a statistically significant correlation between specific demographic markers and how students evaluated the five different sources.
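For readers who want a concrete picture of how such a check could be run, below is a minimal sketch of a chi-square test of independence. The test shown and the counts in the snippet are assumptions for illustration, not our actual analysis or data.

```python
# Minimal sketch of checking for an association between a demographic marker
# and evaluation outcomes with a chi-square test of independence.
# NOTE: an illustrative assumption, not the study's actual analysis; all
# counts and category labels below are invented.
from scipy.stats import chi2_contingency

# Rows: semesters of college completed (0 vs. 1+)
# Columns: rated a given source reliable vs. not reliable
contingency = [
    [30, 25],  # hypothetical counts, 0 completed semesters
    [18, 16],  # hypothetical counts, 1+ completed semesters
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A p-value above the conventional 0.05 threshold would be consistent with
# finding no statistically significant relationship.
```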

On the day of the test, the students were ushered into a private office space in the academic library where they completed a survey asking about the five articles that are represented in the interactive portion of this webtext. Below is a table that explains why we chose each article. Remember, the linked mockups are representative of the original articles we used to test students.

Table 1: Articles used in survey. Links included to original articles and mockups.
| Publication | Article Title | Mockup | Selection Criteria |
| --- | --- | --- | --- |
| NPR | "Over-the-counter birth control pills would be safe for teens, researchers say" | APR | Chosen as a reliable, mainstream news article that cited academic research and other reliable sources. |
| Huffington Post | "A third way for universities" | PuffyHost | Chosen as an opinion editorial from a well-known website that appeals to younger readers. |
| The Blaze | "Global warming fail: Study finds melting sea ice is actually helping arctic animals" | The Flame | Chosen for a far-right bias, inaccurate use of data and information, a hot-button political issue, and use of visuals. (The online version of this article no longer includes a graph from NASA delineating melting sea ice levels that was in the original version.) |
| The Washington Post | "Elon Musk's SpaceX makes history by launching a 'flight-proven' rocket" | The Jefferson Post | Chosen to represent a well-known newspaper reporting on an event, for its apolitical subject, and inclusion of a video. |
| Daily Kos | "There's a growing crisis in care for disabled and elderly people. Oh, and it's a jobs crisis, too" | The Daily Post | Chosen for a fringe, far-left bias and use of a combative, biased tone and casual language. |

First, students took two minutes to evaluate a screenshot of the article; then they took a few minutes to open a new browser tab and do any research they deemed necessary to evaluate the article. We did not define what we meant by research so that we could observe, as closely as possible, how students might naturally go about assessing the reliability of information sources. Students took anywhere from 25 to 60 minutes to complete the entire test with all five articles, and we estimate that most completed it in about 40 minutes.

For each article they examined, we asked students to respond to the following questions in written form at the end of the talk-aloud portion of the evaluation:

  • Overall, what qualities/attributes make this source more reliable to you?
  • Overall, what qualities/attributes make this source less reliable to you?

These written answers are the specific source of the quotes and statistics we report in the interactive portion of our webtext. More analysis of the talk-aloud protocols and screen recordings can be found in a published article in the Journal of Information Literacy (Silva, Green, & Walker, 2018). For this webtext we chose to focus on what students wrote because students tended to record the features of the articles that were most salient to them (as opposed to the features they were less aware of, which we noticed in the talk-aloud portion of the study). In other words, if students were able to write it down, it was an important feature to them, and it therefore pointed to significant trends that we wanted to highlight for those engaging with first-year writing students. As the research team examined the written responses reported on in this webtext, trends in evaluative criteria became clear.

Using grounded theory, we devised coding protocols after we collected student responses. Because most students showed both strengths and weaknesses in their analysis of the sources, we decided to classify behaviors, rather than specific individuals, as expert or novice. For example, after we noticed many students commenting on the graph in The Blaze article, members of the research team searched for references to the graph and divided them into students who had remarked upon the graph's convincing and reliable nature (novice) and those who had realized that the graph did not support the claims made in the argument (expert). Some behaviors, such as looking at the author's credentials, were categorized only as expert behaviors because we recognize, as librarian-experts, that this is generally a good authority-assessing practice. Other behaviors, such as focusing on the domain, were coded only as novice behaviors, since the .com or .org distinction now tells us very little about the reliability of a website. Certain behaviors, like remarking on previous experience with the website, had both novice and expert facets: novices remarked on having no previous experience with the publication, while experts revealed that the source was one they had encountered before and were able to articulate relevant context about it. In other words, a novice approach would treat a lack of information as a reason to reject information, while an expert approach would treat a lack of information as a reason to withhold judgment until more information could be found.

The behaviors that we coded for owe much to the Association of College & Research Libraries' (ACRL, 2015) Framework for Information Literacy for Higher Education. This framework breaks threshold concepts related to information literacy into six interconnected frames. We use these frames to help explain the difference between novice and expert behaviors in different domains. For example, expert students were able to articulate how the author had leveraged their experience and credentials into credibility ("Authority Is Constructed and Contextual" frame), understood the importance of validating the argument being made with other research or opinions ("Scholarship as Conversation" frame), and recognized the importance of looking into the venue in which the information was published ("Information Creation as a Process" frame). Novice students, on the other hand, struggled with many of these distinctions, focusing on shallower indicators (web design or word choice) and lacking the depth of understanding that the ACRL framework outlines.

After determining expert and novice behaviors for each article, three members of the research team coded the responses and determined how many students had exhibited each behavior, resulting in the novice and expert comments seen in the interactive mockups in this webtext. We review our broader findings from the written portion of the student responses in the next section.
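To make the tallying step concrete, the sketch below shows one way coded written responses could be counted per article and behavior. It is a hypothetical illustration of the bookkeeping, not our actual coding instrument; the behavior labels and responses in it are invented.

```python
# Minimal sketch of tallying coded written responses by article, behavior,
# and expert/novice label. Hypothetical data for illustration only.
from collections import Counter

# Each coded response: (article, behavior, "expert" or "novice")
coded_responses = [
    ("The Blaze", "noted graph did not support the claims", "expert"),
    ("The Blaze", "found graph convincing and reliable", "novice"),
    ("NPR", "checked the author's credentials", "expert"),
    ("Daily Kos", "judged reliability by .com/.org domain", "novice"),
]

tallies = Counter(coded_responses)
for (article, behavior, level), count in sorted(tallies.items()):
    print(f"{article}: {behavior} [{level}] = {count}")
```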

Findings