Conclusion

The data gathered from the survey suggest that while programs in rhetoric and composition do not necessarily require courses or workshops for technology training and teaching new literacies, many programs are indeed attempting to offer them. The training occurring in such courses and workshops centers on the use of software programs (like Netscape Composer, Microsoft Office products, Macromedia products, and/or Adobe products) and on the use of hardware located in the computer classroom. For example, a workshop might train graduate students to use an LCD projector or a classroom printer.

Absent from the text-box responses associated with Tables 1 and 2 is any evidence of training in the theory behind teaching writing with technology or any mention of new literacies. Though the survey did not specifically ask a question pertaining to this issue, the absence of such responses indicates that neither graduate students nor faculty see "technology training" as tied to theoretical training or preparation. When faculty members or graduate students think of technology training, they think of learning a software package or of technology as a simple tool. The discipline has not systematically linked teacher training and teacher preparation with the theory behind teaching with technology or the new literacies evolving from it.

The respondents were also asked for their opinions on the effectiveness of technology training for graduate students. Both graduate students and faculty overwhelmingly claimed that such training is somewhat effective. However, "somewhat effective" appears to carry different meanings for different audiences, a difference that can be traced through the text-box responses. Text-box responses from faculty seem more positive than those of graduate students, suggesting that faculty rate the effectiveness of their program's technology training more highly than do the graduate students who receive that training and implement it in undergraduate writing courses (FYC). This discrepancy suggests that the phrase "somewhat effective" may be variously defined by the two groups of respondents. Graduate students perceive that they are expected to already know how to use technology, while faculty cited the numerous places around campus where graduate students can participate in such training.

These findings suggest that programs should be explicit about what they expect from graduate students in terms of technology experience and requirements. For example, if it is already known that neither the department nor the program will offer workshops to train graduate students to use technology, then both should be candid about where such training can be obtained. The findings also suggest that graduate students are expected to take some initiative to learn technology on their own, an idea faculty members repeated in text-box responses: graduate students, they believe, should be motivated to learn technology and should seek out training themselves.

Program assessment of technology training is essentially non-existent, according to the survey and subsequent responses, though a few responses cite exit interviews and written evaluations as methods their programs use to assess such training. Table 4 illustrates that both graduate students and faculty reported that their programs had no assessment of technology training. Some faculty named specific procedures they used to assess technology training, but it remained clear that most programs have no assessment procedures in place. It was also evident that both graduate students and faculty were unsure whether assessment procedures existed or, if they did, what they were. These findings are important because they show that faculty and graduate students have only a dim view of the effectiveness of their programs' technology training. Moreover, there are no formal means of tracking student satisfaction or dissatisfaction, and thus no way to determine what, if anything, should change about these workshops. While it can be inferred that technology training should change, there is no confident way to argue this when graduate students have no formal and anonymous way to report their dissatisfaction. Before change, there must be assessment. These perceptions of program assessment strongly suggest that more formal assessment of technology training is needed.
