Methodology: This Survey

This Survey

To begin the survey process, I created the questionnaire (see Appendix A), crafting questions that I thought would provide insight into my first two research questions. After drafting the items, I asked a small number of colleagues to take the survey. This small pilot study helped me check the clarity of the questions and adjust the online process of collecting responses: I found that the "submit" button was very small, and I revised questions that pilot respondents considered "leading." In creating the survey, I divided the questions into groups. The survey consisted of 21 multiple-choice questions, each asking respondents to make a single choice, with text boxes provided for optional explanations; text boxes were available for questions 3 and 7-21. The content was divided into three major sections: demographics (questions 1-6), teacher training and technology (questions 7-17), and assessment and resources (questions 18-21).
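To make that layout concrete, here is a minimal sketch in Python of the question groupings described above; the variable names are mine, chosen purely for illustration.

```python
# The survey's three sections mapped to their question numbers,
# as described above. Variable names are illustrative only.
sections = {
    "demographics": list(range(1, 7)),                      # questions 1-6
    "teacher training and technology": list(range(7, 18)),  # questions 7-17
    "assessment and resources": list(range(18, 22)),        # questions 18-21
}

# Questions that offered an optional text box for explanation.
text_box_questions = [3] + list(range(7, 22))  # question 3 and questions 7-21
```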

These sections were not arbitrarily selected; each serves a distinct purpose. First, in order to cross-tabulate responses (faculty versus graduate student, for example), it was imperative to create a demographics section. This section asked basic questions about gender, length of time teaching, location of institution, type of institution, academic status, and general proficiency with technology. Using results from the demographics section, I was able to compare responses across groups. Second, the teacher-training section afforded the opportunity to address directly the state of technology training in workshops and courses. For example, I asked whether programs offered or required workshops or courses that prepare graduate students and new teachers to teach with technology, especially in relation to new literacies. Third, the last section, assessment and resources, addressed explicitly the perceived effectiveness of that training, with questions such as, "What is your assessment of the effectiveness of technology training for graduate students?" Each section thus targets issues that pertain directly to my research questions.
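For readers unfamiliar with cross-tabulation, the short sketch below shows the basic operation using Python's pandas library. The column names and sample values are hypothetical illustrations, not the actual survey data.

```python
# A minimal sketch of cross-tabulating survey responses with pandas.
# Column names and values here are hypothetical, for illustration only.
import pandas as pd

responses = pd.DataFrame({
    "status": ["faculty", "graduate student", "faculty", "graduate student"],
    "effectiveness": ["effective", "ineffective", "effective", "effective"],
})

# Count how each academic-status group answered the effectiveness question.
table = pd.crosstab(responses["status"], responses["effectiveness"])
print(table)
```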

The population I was striving to reach included faculty and graduate students in current rhetoric and composition doctoral programs. To identify an accurate population, I used the 2000 issue of Rhetoric Review (18.2). This professional journal periodically reports on the state of rhetoric and composition doctoral programs and is widely accepted as an authority on such graduate programs. The numbers provided by Rhetoric Review yielded an approximate population of 1,770 for the survey. The population, or N, was thus defined as faculty and graduate students in rhetoric and composition programs or configurations of such programs.

The survey was distributed via the TechRhet and WPA-L listservs during March 2004. Distribution took the form of a web address that I circulated through various means (email, listservs, web blogs, and news sites). The survey was also advertised at the 2004 Conference on College Composition and Communication in San Antonio, Texas, via flyers printed with the web address. In addition, I contacted faculty and students at universities within the target population to help distribute the survey to graduate students and other faculty. I used this method of distribution for several reasons. Given the pervasiveness of technology within the population I selected, it made sense to present the survey online: many teachers and scholars in our field have easy access to the Internet, so this approach seemed both logical and efficient. Placing the survey online was also cost- and time-effective, relieving me of postage expenses and of having to distribute the survey a year in advance. Additionally, the listservs are populated with potential respondents relevant to my study, and the Conference on College Composition and Communication is likewise a site where many potential respondents can be found. This kind of sampling is called "selected," "accidental," or "convenience" sampling: a method whereby the investigator chooses respondents from within the population. Other techniques, such as random sampling, could have been used, but convenience sampling allowed me to solicit help from others in the distribution process and to reach the maximum number of respondents; what is lost in sampling accuracy is saved in time and money (Bailey 81). Discounting the 17 responses used for my pilot study, my total sample consisted of the 344 participants reached (derived from the number of individuals responding to some aspect of the survey). Out of that sample, I received 113 usable responses, a response rate of roughly 33 percent.
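As a quick check on the response-rate figure, here is the arithmetic as a short Python sketch; the counts are the ones reported above.

```python
# Response-rate arithmetic using the counts reported above.
sample_size = 344        # participants reached (pilot responses excluded)
usable_responses = 113   # usable responses received

response_rate = usable_responses / sample_size
print(f"Response rate: {response_rate:.1%}")  # prints "Response rate: 32.8%"
```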

The tool used to collect survey data was INQSIT, hosted at Ball State University. This software was designed to do two things: first, to give faculty and students at Ball State University the ability to create online tests and exams, and second, to provide faculty and the university a way to conduct survey research. The program supports HTML markup and allows the user to post questions and retrieve responses. INQSIT records responses to surveys and automatically tabulates basic results, such as the number of responses to a single question and the percentage of respondents giving a specific answer to a specific question. Results also download easily into a Microsoft Excel document. To use the INQSIT system, I was given a username and password, which provided the access needed to create and specify survey commands and parameters. See Appendix A for the questionnaire as respondents encountered it online.
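For readers who want to replicate INQSIT's basic tabulation on the Excel export, a minimal sketch in Python follows. The file name and column layout are assumptions for illustration, not INQSIT's actual export format.

```python
# A sketch of the basic tabulation INQSIT performs automatically:
# counts and percentages of responses per question. The file name and
# column layout are hypothetical; adjust them to match the actual export.
import pandas as pd

# Load the survey export (INQSIT results downloaded as an Excel file).
data = pd.read_excel("survey_results.xlsx")

# For each question column, count each response and its percentage.
for question in data.columns:
    counts = data[question].value_counts()
    percentages = data[question].value_counts(normalize=True) * 100
    print(f"\n{question}")
    print(pd.DataFrame({"count": counts, "percent": percentages.round(1)}))
```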
