Constructing the Assessment Tool

The process of constructing the assessment tool for this study was iterative and inductive. It began with a general survey of over 100 randomly selected webtexts—presumably successful examples of scholarly online arguments published in the CoverWeb and Features sections of Kairos—in order to generate a list of common features. These features were organized into two main groups of observations regarding (1) aspects of the webtexts that appear scholarly based on adherence to traditional conventions and (2) aspects of the webtexts that do not coincide with traditional scholarly conventions, and that may instead follow emerging web-based conventions. This organization was influenced by the goal of the analysis, which was to determine in what ways webtexts extend the boundaries of traditional scholarship and require revised assessment criteria for determining their scholarly legitimacy. Knowledge of the traditional standards of print scholarship as well as emerging conventions of effective online writing provided the lens through which observations were made and the grounding for what would eventually become the main assessment tool for closely analyzing the select group of webtexts.

Although some diversity in formal design exists among the webtexts, particularly when they are viewed chronologically from earliest to most recent, several common features recur across many of the texts.

These initial observations from the general survey guided the development and organization of a set of descriptive and evaluative questions that formed a test draft of the assessment tool. The descriptive questions measure the presence of commonly observed features within the select group of webtexts and typically require yes/no responses. For example, one question asks whether the webtext includes a graphic overview; another asks whether the webtext includes an explicitly labeled references node. The evaluative questions measure the effectiveness of a particular strategy against traditional print-based standards and emerging web-based standards; they also require yes/no responses. For example, one question asks whether the webtext follows a rhetoric of arrivals and departures, while another asks whether the nodes within the text are self-contained and contextualized. These evaluative questions reflect judgments based on presumably objective standards that have been discussed and agreed upon by many scholars.
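The structure of the draft tool can be sketched in code: a flat list of yes/no questions, each tagged as descriptive or evaluative, with one set of responses recorded per webtext. This is a hypothetical illustration only; the question wordings below are taken from the examples above, and the actual tool contains many more items.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    kind: str  # "descriptive" (presence of a feature) or "evaluative" (effectiveness)

# Illustrative subset of the draft tool; the real instrument is larger.
TOOL_DRAFT = [
    Question("Does the webtext include a graphic overview?", "descriptive"),
    Question("Does the webtext include an explicitly labeled references node?", "descriptive"),
    Question("Does the webtext follow a rhetoric of arrivals and departures?", "evaluative"),
    Question("Are the nodes self-contained and contextualized?", "evaluative"),
]

def assess(responses):
    """Pair each question with one webtext's yes/no responses, in order."""
    return {q.text: bool(answer) for q, answer in zip(TOOL_DRAFT, responses)}
```

Recording every response as a simple yes/no, as here, is what later allows answers to be compared and tallied across the whole set of surveyed webtexts.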

The draft questions were tested for relevance against a variety of webtexts, refined to account for any omissions, and re-tested in up to ten additional surveys of randomly selected Kairos webtexts. The purpose of this process was to incorporate observations unaccounted for in the original draft of the tool and to ensure both wide and detailed coverage of characteristics that, if recurrent across a number of webtexts, may help to identify and define characteristics of online scholarship. For example, the original test draft failed to include a question regarding how the webtext makes meaning. Such meaning-making, it was determined, could emerge through text alone; text and graphics; or text and multiple media such as audio, video, and animation. The answer to a question of this nature provides a better sense of the kinds of technological affordances being used, and therefore being accepted or encouraged in this medium, to present research arguments. Additionally, the test draft included a very general query regarding the presence of documentation, such as a references node, within a webtext. Further investigation of additional webtexts revealed, however, that documentation was presented in various conventional and unconventional ways; the final draft of the assessment tool therefore required a revised question that considers the specific presentation of documentation in webtexts. This iterative process for developing and testing questions allowed for the construction of a more detailed tool that begins to highlight the nuanced distinctions and similarities among these webtexts.

Finally, the completed assessment tool was applied in a close reading and analysis of both the longitudinal and current sets of webtexts. The goal was to explore and record commonalities among the webtexts deemed the “best” (the first set) and trends in current webtexts (the second set), which together form a description of Kairos’s implicit standards for online scholarship. The questions that comprise the assessment tool are organized into two main categories.

Category A queries the extent to which traditional print-based scholarly conventions are recognized and function within webtexts. This category reveals key similarities in scholarly communication between print and online media. Category B considers the extent to which webtexts incorporate emerging conventions of web-based writing. This category reveals key differences in formal design brought about by the use of hypertextual and hypermedia capabilities of the online medium; it shifts the focus of traditional scholarly criteria toward the inclusion of non-conventional, web-based criteria, thereby distinguishing webtexts as new forms of scholarship. The methodology includes a brief rationale for each of the questions within the assessment tool.
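The application of the two-category tool across a set of webtexts can be sketched as a simple tally: each webtext's yes/no responses are keyed by category (A for traditional print-based conventions, B for emerging web-based conventions), and features answered "yes" across many texts surface as candidate implicit standards. The category labels follow the text above; the feature names and data are hypothetical.

```python
from collections import Counter

def tally(assessments):
    """Count 'yes' responses per (category, feature) key across webtexts.

    assessments: a list, one dict per webtext, mapping
    (category, feature) -> bool.
    """
    counts = Counter()
    for responses in assessments:
        for key, answered_yes in responses.items():
            if answered_yes:
                counts[key] += 1
    return counts

# Illustrative data for two webtexts.
webtexts = [
    {("A", "references node"): True, ("B", "graphic overview"): True},
    {("A", "references node"): True, ("B", "graphic overview"): False},
]
counts = tally(webtexts)
# ("A", "references node") is present in both webtexts;
# ("B", "graphic overview") in only one.
```

Features with high counts in Category A indicate continuity with print scholarship, while high counts in Category B point to the emerging web-based conventions that distinguish webtexts as new forms of scholarship.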