REPORT OUTLINE FOR MINI-GRANT ON MOO ASSESSMENT RESEARCH

Project Directors:
Gloria McMillan, Adjunct Writing Instructor
Mika Sita, Writing Instructor
Pima Community College

Date submitted: December 17, 1996

INTRODUCTION:

THE PROBLEM:

Common knowledge holds that students find grammar boring and painful. Our MOO-based (online) opinion surveys polled four sections of Writing 101 students, and 90% of these students responded that grammar is both a mystery to them and extremely unpleasant to master. This backs up the commonly-held wisdom that grammar is not perceived as an interesting area of study.

One approach to the problem is simply to forge ahead with traditional methods and make the students take grammar as if it were medicine. Some of us who use computer-assisted instruction have noticed the heightened awareness that students show when "playing" on the keyboard. We began to build an element of lively interaction, if not play, into our online curriculum, hoping by this method to make the acquisition of grammar skills a more tolerable, if not pleasurable, pursuit.

The purpose of the current study is to determine the effects of computer-assisted instruction (CAI) and computer-assisted testing (CAT). We seek to examine whether students taught and tested on computers perform differently than students taught and tested traditionally. With the rapid expansion of technology in higher education today, the implications for Pima Community College will grow in importance over the coming years.

Our design is a two-group research project. One group used a combination of computer-assisted instruction and computer-assisted testing. The second group used traditional classroom methods and paper tests. Our question was, "Are there differences in student performance when taught/tested by traditional vs. CAI/CAT delivery systems?"
One alternate hypothesis, that students would score better on the computer assessment, was tested by pairing the computer tests with paper tests identical in content, which were given to a control group. In the experimental group, testing was done entirely on the computer, and teaching largely so. If this hypothesis held, then the experimental group's scores should have risen more from the pretest to the second test. An inconclusive result would not invalidate the research process, but would merely point out that one semester is a slim sampling for a trend to show decisively.

The null hypothesis was that the delivery system of the curricula and tests would have no effect at all on the performance of students, and that both groups would show similar incremental gains in knowledge from the pretests to the second tests. A second alternate hypothesis was that the traditional control group would do better on the tests than the experimental group.

Additionally, the two groups and repeated tests were our chance to begin to document and account for hitherto untraced correlates to learning on computers (past positive and negative experiences, access to computers at home or work, etc.). Additional data on prior computer experiences and attitudes toward CAI/CAT was collected from the experimental group.

LITERATURE ON ONLINE VS. HARD COPY TESTING:

One study, done at Indiana University, compared the PLATO system of computer-assisted test delivery with hard copy tests. The experimenters found that students who drilled using the PLATO CAI system scored approximately 20% better than students in comparable classes who used only the textbook to drill and review (Oates 193). Students were generally praiseful of the PLATO system of exercises and testing.
In a second study comparing CAT with hard copy testing, experimenters at Educational Testing Service (ETS) found that trials comparing the CAT and paper versions of the Graduate Record Exam (GRE) yielded little difference in scoring among the student test-takers. Experimenters found that mode of delivery made no substantive difference in scores (Schaeffer 29).

A third study traced the attitudes of students toward taking tests via computer. Although student attitudes were positive, the sample may have added a positive bias to the whole proceedings, since it consisted of students already predisposed towards computers: educational technology students enrolled in a course on computers (Glowacki 9). The experimenters also noted the reduction in class time required by the computerized version of the tests (Glowacki 8). Measures included in the study were actual student performance on the tests, as well as students' perceptions of their performance on the tests (Glowacki 10).

A study done on community college students and their levels of anxiety concluded that "computer-administered testing can potentially increase test anxiety and depress test performance for examinees who are relatively unfamiliar with computers" (Lamazeres 703).

With such a variety of studies and results, we decided that nothing, in fact, had been proven about CAT vs. paper testing, although several of the studies did agree that students with little prior exposure to computers might actually experience depressed scores on computerized tests.

METHODS -- DATA COLLECTION:

SAMPLE AND SAMPLING METHOD:

Our sample consisted of students from four sections of WRT 101. Two sections of Writing 101 students used computerized tests and two sections used paper versions: the control group used a paper test and the experimental group used an online interactive test. All assessments were given in WRT 101 classes.
Using all Writing 101 students diminishes bias due to prior knowledge. Additionally, these students should be less varied in skill, because most are placed into this level via an assessment test or by having taken WRT 100. Students self-selected into these sections of Writing 101 (computer sections were noted by an asterisk in the schedule of classes). While not entirely a random sample, these groups did not differ dramatically.

HOW HYPOTHESES WERE TESTED:

Apparatus:

The instruments used in this project are called "surveys." (See Appendix A.) They are interactive tests written in MOO code (an object-oriented scripting language with a C-like syntax). Students were taught online at Diversity University MOO and became accustomed to responding to questions in discussions by typing their answers online. We used many teaching tools all semester to help students grasp the concepts of grammar. Students could see charts and diagrams of grammatical concepts at the MOO. Taking the tests in this format was thus no anomaly for the experimental group, but a continuation of other educational activities in the MOO setting.

We structured the MOO tests so that students had to progress straight through the test. In creating a hard copy version for the control group, we achieved a similar format by printing only one question per page and instructing students at the beginning not to turn back, but to proceed from beginning to end of the test.

Experimental Procedures:

The design of our two-group study was to allow comparison/assessment of different curricula/tests. In order to compare the performances, we needed one experimental group using both computer-assisted curriculum (CAI) and computer-assisted tests (CAT), and one control group using more traditional curriculum and tests (on paper). The experimenters took steps to standardize procedures between the control and experimental groups. The tests were given as close to the same times in the semester as possible for each group.
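The actual instrument was written in MOO code, which is not reproduced in this report; as a rough illustration of the forced forward-only progression described above, the sketch below models the same design in Python. All names here (QUESTIONS, run_survey, score) are hypothetical stand-ins, not part of the original MOO program.

```python
# Minimal sketch (not the actual MOO code) of a forward-only quiz:
# each question is shown exactly once, the student cannot go back,
# and each graded response is scored 1 (correct) or 0 (incorrect).

QUESTIONS = [
    # (prompt, answer key); key None would mark an ungraded free-response item
    ('Which pronoun is correct? "Everyone must bring ____ own lunch." '
     "a. His  b. Her  c. Their  d. a and b  e. b and c", "d"),
    ('Where should this sentence have a comma? "John was tired of being '
     'paged and he really felt he could use a vacation." '
     "a. felt/he  b. paged/and  c. tired/of", "b"),
]

def score(responses, questions=QUESTIONS):
    """Return the number of graded items answered correctly (1/0 scoring)."""
    total = 0
    for (_, key), given in zip(questions, responses):
        if key is not None and given.strip().lower() == key:
            total += 1
    return total

def run_survey(questions=QUESTIONS):
    """Present questions strictly in order, with no way to return to one."""
    responses = []
    for prompt, _ in questions:
        responses.append(input(prompt + "\n> "))
    return responses
```

The one-question-per-page paper version given to the control group enforces the same constraint by instruction rather than by software.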
Additionally, the tests were engineered to be similar in format, as described above. We gave two grammar skill assessments to each group. The first assessment was a test of grammar skills. The second assessment was a test of sentence structure knowledge, including some concepts overlapping with the first. In addition, the first grammar skill test was repeated by ONLY the experimental group, using the computer version.

We gave the first grammar test as a pretest measure to the control and experimental groups near the beginning of the semester. This assessment was designed to provide information on previous knowledge, in that it covered topics relevant at the Writing 100 level. We gave the second grammar test in late November, close to the end of the semester. This assessment was designed to provide information on incremental improvement of knowledge during Writing 101.

MEASUREMENT OF VARIABLES:

Each test item was scored as 1 = correct or 0 = incorrect. The sum of the item scores on a test constituted the total score.

METHODS OF ANALYSIS, STATISTICS:

We calculated descriptive statistics, including the mean, standard deviation, minimum, and maximum for each item and for the total score. We tested for differences between the control and experimental groups using analysis of covariance (ANCOVA). We selected ANCOVA to address the question of outcome differences (i.e., post test), given possible initial knowledge differences (i.e., pretest).

TEST DEVELOPMENT:

Very often in Writing 101, test items consist of sentence diagrams, identifying parts of speech, and identifying parts of the sentence. Examples of such questions are: labelling the subject, verb, direct object, indirect object, and prepositional phrases in a sentence. This complex question became a series of separate questions in our second-generation test. The students' responses to each item were labelled as correct or incorrect. We did pilot testing during the spring semester of 1996.
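The scoring and analysis steps just described can be sketched in ordinary Python. This is an illustration only, with made-up data, and is not the software actually used in the study. ANCOVA appears here in its regression form: the post test score is regressed on the pretest score plus a 0/1 group indicator, so the indicator's coefficient estimates the group effect adjusted for initial knowledge.

```python
import statistics

def total_score(item_scores):
    """Sum of 1/0 item scores gives the total score for a test."""
    return sum(item_scores)

def describe(scores):
    """Mean, standard deviation, minimum, and maximum, as in the report."""
    return {
        "mean": statistics.mean(scores),
        "sd": statistics.stdev(scores),
        "min": min(scores),
        "max": max(scores),
    }

def ancova(pre, post, group):
    """Fit post = b0 + b1*pre + b2*group by ordinary least squares.

    group is 0 (control) or 1 (experimental); b2 estimates the group
    effect on the post test after adjusting for pretest differences.
    Solves the 3x3 normal equations by Gaussian elimination.
    """
    X = [[1.0, p, g] for p, g in zip(pre, group)]
    # Normal equations: (X'X) b = X'y
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(X, post)) for i in range(3)]
    # Forward elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            for j in range(col, 3):
                A[row][j] -= f * A[col][j]
            b[row] -= f * b[col]
    # Back substitution
    coef = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):
        coef[row] = (b[row] - sum(A[row][j] * coef[j]
                                  for j in range(row + 1, 3))) / A[row][row]
    return coef  # [intercept, pretest slope, adjusted group effect]
```

In practice a statistics package would also report the F-test and p-value for the group effect; the sketch only recovers the point estimates.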
After the results were gathered, we analyzed them in weekly consultation sessions with John Fulginiti. The assessments were then expanded in the following fashion: We broke items down from complex forms to unit-scorable responses. The reason for this, in terms of scoring, was that complex items left some ambiguity. Not every item was being scored separately, and a judgement had to be made in a rather subjective fashion as to whether an item was correct or not. That is, an item with several points might be nearly correct, but not entirely, and in the first version had to be counted as correct. John Fulginiti had suggested that these be broken into discrete scoring units. Simplifying the complex items resulted in longer tests and increased detailed coverage of the domain to be tested. In response to this observation, the skills tests were lengthened by about one third of their total length. Additionally, longer tests are more reliable than shorter ones.

RESULTS:

TABLE 1: TOTAL NUMBER OF STUDENTS IN EACH GROUP:
--------------------------------------------------------------
                            Pretest      Post test
                            (GD06)       (GD08)
All students combined:
  Experimental group:          32           22
  Control group:               25           15
Experimental group only:
  Female:                      18           18
  Male:                         7            4
Control group only:
  Female:                      17           11
  Male:                        13            4

TABLE 2: DESCRIPTIVE RESULTS:
--------------------------------------------------------------
GROUP:            PRETEST:     POST TEST:    IMPROVEMENT:
Experimental:       68.93        93.13          24.2%
(Computer)         (11.54)       (4.62)
Control:            66.25        90.75          24.5%
(Paper Test)        (9.90)       (6.17)

(Standard deviations in parentheses.)

DISCUSSION:

The mean of the computer group on the pretest was 68.93. The mean of the paper group on the pretest was 66.25. This difference is not large enough to be statistically significant, so we knew our two groups were starting at the same or quite similar skill levels. The mean of the computer group on the post test was 93.13.
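The improvement column in Table 2 is simply the post test mean minus the pretest mean for each group, reported in percentage points; a quick check of the arithmetic:

```python
# Improvement in Table 2 = post test mean minus pretest mean,
# in percentage points.
experimental_gain = 93.13 - 68.93   # 24.2
control_gain = 90.75 - 66.25        # 24.5
print(round(experimental_gain, 1), round(control_gain, 1))
```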
The mean of the paper group on the post test was 90.75. This difference is not large enough to be significant, indicating no difference between the computer and traditional tests.

The results told us other things. Method of delivery had no detrimental effect, even though several students began with profound computer anxiety. The learning of both groups rose dramatically, so either of Pima's methods of delivery is an effective curriculum. However, we do feel that as more students gain access to computers at home and experience less computer anxiety, the computer method of delivery will show additional gains.

We had several potential learning correlates: attitude toward subject, previous affective experiences with computers, gender, age, and ethnicity. Time and resources allowed us only to reduce the data on gender for this study, but we feel that enlightening use could have been made of any of the above-named correlates. Future research on correlates to learning in the writing classroom is planned as a follow-up to this study.

In our study, we had responses indicating ethnic differences in attitude toward computer curriculum. We had only two self-described Native American students, but they provided several insights about the use of the computer to make such students feel more ready to participate in class discussions. Recalling that these issues were prominent ones in the recently-published institutional climate research, we feel that further attention could well be directed towards gaining detailed responses from minority students about computer-mediated testing and instruction. Both Native American students remained with the class to completion, and each expressed relief at being able to participate without being the visual center of attention of the class.

In general, the study went well, and the attitude of the overwhelming majority of students to being involved in this research was very positive.
More students than in any other semester told me that they intended to follow me (Gloria McMillan) to the next level course, WRT 102, next semester. They even wished us well on their final exams. The key to such studies is to see the progression over time of performances and attitudes. As mentioned, these technologies are now becoming more available to students at home, and this may well shift the balance in their performance online as anxiety levels lessen.

APPENDIX A

List of questions for the survey: GloData6

Question 1
THIS IS A SELF-HELP ON GRAMMAR. NOTE WHERE YOU ARE GUESSING. WRITE IT DOWN. LATER, DO AN EXERCISE ON THE PLACE(S) YOU HAD TO GUESS. YOU CAN BE YOUR OWN GRAMMAR COACH!
[NO QUESTION--PRESS ENTER TO PROCEED.]

Question 2
Which pronoun is correct in this sentence?
"Everyone must bring ________ own lunch."
a. His  b. Her  c. Their  d. a and b  e. b and c
[Use correct LETTER of your choice.]

Question 3
The correct answer was d, "a and b" (his or her).
Which pronoun is correct in this sentence?
"Almost every great author sometimes doubts ________ skill."
a. His  b. Her  c. Their  d. a and b  e. b and c
[Write the LETTER of your choice.]

Question 4
Which pronoun is correct in this sentence?
"A celebrity writing a memoir should use other sources beside ______________ memory."
a. His  b. Her  c. Their  d. a and b  e. b and c

Question 5
There are some major reasons for commas: lists in series, two clauses that are relative, two clauses where one is dependent, setting off an introductory phrase, and setting off parenthetical remarks.
Where should this sentence have a comma?
"John was tired of being paged and he really felt he could use a vacation."
a. Comma between 'felt' and 'he'
b. Comma between 'paged' and 'and'
c. Comma between 'tired' and 'of'
[Use correct LETTER of your choice.]

Question 6
"John was tired of being paged, and he felt he could use a vacation."
The correct answer was b, the comma between 'paged' and 'and.' Why?

Question 7
Where should this sentence have a comma?
"Jane wanted the job in Chicago but she didn't want to move so far from her mother."
a. Comma between 'but' and 'she.'
b. Comma between 'move' and 'so.'
c. Comma between 'Chicago' and 'but.'

Question 8
In the last sentence, the comma went between 'Chicago' and 'but.'
Jane wanted the job in Chicago = MAIN CLAUSE 1
but she didn't want to move so far...etc. = MAIN CLAUSE 2
'But' is a coordinating conjunction: it joins two main clauses, and the comma goes before it. But, and, or, nor, for, so, and yet are all COORDINATING CONJUNCTIONS. Words such as although, because, unless, and until are SUBORDINATING CONJUNCTIONS, which introduce DEPENDENT CLAUSES.
[Press ENTER to proceed.]

Question 9
Where does the comma go dividing the two clauses in this sentence?
"Although Jim was taller than Carlos he was shorter than Bob."
a. Comma between "Carlos" and "he."
b. Comma between "shorter" and "than."
c. Comma between "taller" and "than."

Question 10
How can you tell a complete sentence? Usually, it does not begin with one of the subordinating conjunctions, such as although, because, unless, until...etc. A complete sentence usually has a verb. There are those short answer sentences, such as, "Who's there?" "Jim." These are an exception.
In the group of sentences below, which is incomplete?
a. Jim wanted to go home early.
b. Sally also was late.
c. Until it rains.
d. By Spring, we'll have new styles in stock.
[Use correct LETTER of your choice.]

Question 11
The correct answer to the last question was: c. Until it rains
This is not an independent clause or sentence because it leads off with a subordinating conjunction. Of course, there are times when writers, especially in fiction, will make use of these sentence fragments. This still is non-standard usage and should be avoided in formal writing.
Give an example of another sentence fragment:

Question 12
A SIMPLE sentence must have at least a subject and a verb. A COMPOUND sentence is formed when two or more simple sentences are combined. Which of the sentences below is a compound sentence?
a.
John went home early from the factory.
b. I went home early, and I washed my hair.
c. I met Jeanne in the store after school.

Question 13
Which of these sentences is COMPOUND?
a. Susan fell off her horse, and she had to have her leg set.
b. My mother always wins in Las Vegas.
c. How far you go depends upon how fast you run.

Question 14
THE COMPLEX SENTENCE consists of one simple sentence and at least one dependent clause introduced by a subordinating conjunction or relative pronoun.
"When he finished work at noon, the bathroom was finished."
Which of the sentences below is a complex sentence?
a. The snow stopped, and the sun shone again.
b. Maria finished medical school a year early.
c. However I try, I can't fix my car.

Question 15
Which of the sentences below has at least TWO SIMPLE SENTENCES (main clauses) and at least one DEP. CLAUSE?
a. Although he was indicted, the official never went to jail.
b. Because he went to jail and he carried a record, Jack had difficulty getting a job.
c. Being only four-foot-ten was no handicap for Ann, who could list all the benefits of her size.

Question 16
THE COMPOUND-COMPLEX SENTENCE consists of two or more simple sentences and AT LEAST one dependent clause.
"After Susan wrote a book, she got it published, but no one read it."
After Susan wrote a book = DEP. CLAUSE
she got it published = MAIN CLAUSE 1
but no one read it = MAIN CLAUSE 2
[Press ENTER to go on.]

Question 17
PRONOUNS are used in place of nouns in a sentence. I, we, me, us, our, ours, you, your, yours, he, she, it, its, him, his, her, hers, they, them, their, theirs.
How many pronouns are in the following sentence?
"He gave her their address, rather than ours."
a. 3  b. 4  c. 2  d. 5

Question 18
RELATIVE PRONOUN: introduces an adjective or noun clause in a sentence.
"We will be happy, WHOEVER wins the race."
Which of the words below is another relative pronoun?
a. never  b. absent  c. whatever  d. done

Question 19
VERBS: Show or express the action in a sentence.
Show or express the state of being in a sentence.
Which of the words below is a verb?
a. grow  b. building  c. man  d. century

Question 20
SUBJECT AND VERB must agree in NUMBER: they must both be singular or both be plural. There are times when it is tricky to see which is the subject, so it is best to find the verb and see what matches it by number.
In the sentences below, find the one that has a mismatch between its subject and its verb.
a. Jim and Jane are my two best friends.
b. Each of you are needed on this job.
c. Between us two, he isn't the best cook.
d. Falling behind on assignments is never a wise move.
[Use correct LETTER of your choice.]

Question 21
The correct answer is b. "Each of you ARE needed for this job." 'Are' does not match the subject 'each,' which is singular.
Which of these pronouns would also take a SINGULAR verb?
a. Everyone  b. Somebody  c. Nobody  d. Everybody
[Use correct LETTER(S) of your choice.]

Question 22
PREPOSITIONAL PHRASE TRAP: Here is a sentence with a PPT in it:
"One of the trees had Dutch Elm disease."
You may wonder whether the subject here is ONE or TREES. Look at the word or two preceding each of those words. Is there a preposition? Good! If you noticed that 'of' is a preposition, that makes TREES the object of the preposition and not the subject. Many times, subjects will have a little prepositional phrase near them functioning as a description. THIS confusion may cause you to pick the wrong verb! Note this sentence:
ONE of the TREES (was, were) wild.
Answer: One of the trees was wild. This is because, as in the first example, 'of the trees' is your prepositional phrase trap!
Write a sentence with a Prepositional Phrase Trap near the subject:

Question 23
The idea here is to underline the title of a BIG work and set off smaller things (a story in a collection, a magazine article, a pop tune) with double quotes. In hard copy one would underline Tom Sawyer, because it is the title of a novel.
But electronic mail can't use underlines, so the email or etext replacement for the underline is: _Tom Sawyer_. A short article about _Tom Sawyer_ in a journal would only take quotation marks. Example: "Mark Twain's Use of Nature Imagery."
Which of these titles takes quotation marks?
a. Hamlet (play by Wm. Shakespeare)
b. Crime and Punishment (novel by Feodor Dostoievskii)
c. Rabbit Run (novel by John Updike)
d. With God on Our Side (Bob Dylan song)

Question 24
In which of the following sentences is the pronoun incorrect? Remember that he, she, we, etc. become him, her, us, etc., when they go into the objective case.
Bob and I went home.
The movie seemed silly to Bob and me. (objective)
a. The baby and I got flu shots.
b. Jane and I went to school early.
c. Jim paid Bob and I a hundred dollars for that rug.
d. Dinner started before Sally and I arrived.
[Use correct LETTER of your choice.]

Question 25
What area of grammar is your weakest? You may specify such things as parts of speech (nouns, verbs--), punctuation, etc. Tell your weakest area and what kind of exercises or teacher input might help.

Question 26
LINKING VERBS: take a subject complement.
"Boiling onions smell bad."
'Bad' is not an object. 'Bad' tells us something about the subject.
Which of the sentences below has a LINKING VERB?
a. Shut the door!
b. That chicken soup tastes good.
c. The boy threw the ball.
d. Oscar threw a party last week.

Question 27
AUXILIARY VERBS: combine with main verbs to make a verb phrase. These auxiliary verbs include BE and HAVE.
"The train HAS STARTED." "Don't GET OVERCOME by the heat."
Which sentence below has an AUXILIARY VERB?
a. Be on time next week!
b. I forgot she had a cold.
c. I am tired of waiting for him.
d. Whose student is she?

Question 28
VERBALS ==> PARTICIPLES
VERBALS is the general name for verbs used as adjectives, nouns, or adverbs. PARTICIPLES end in -ing in the present tense and -ed in the past. Remember!
Participles are verb forms used as adjectives.
Which of the sentences below has a verbal in it?
a. She was singing too loudly.
b. He just made a fishing trip.
c. He was running last in the race.

Question 29
VERBALS ==> GERUNDS
GERUNDS are special forms of verbs always used as NOUNS.
"Holly loves SKIING."
Which sentence has a GERUND?
a. Seeing is believing.
b. He is coming home for lunch.
c. John was playing ball with the Cubs.

Question 30
VERBALS ==> INFINITIVES
Infinitives can also be used as nouns, adverbs, and adjectives.
"Carrie didn't know what to think."
Which of the sentences below has an infinitive?
a. He is going to Phoenix.
b. You may stay as long as you want to.
c. He didn't know how to cry.

WORKS CITED

Books:

Anastasi, Anne. _Psychological Testing_. 6th ed. New York: Macmillan, 1988.

Angelo, Thomas A., and K. Patricia Cross. _Classroom Assessment Techniques_. 2nd ed. San Francisco: Jossey-Bass, 1993.

Ford, James E., ed. _Teaching the Research Paper: From Theory to Practice, from Research to Writing_. Metuchen, NJ: Scarecrow, 1995.

Gable, Robert K. _Instrument Development in the Affective Domain_. Boston: Kluwer-Nijhoff, 1986.

Hashimoto, Irvin Y. _Thirteen Weeks: A Guide to Teaching College Writing_. Portsmouth, NH: Boynton/Cook, 1991.

Jacobs, Lucy C., and Clinton Chase. _Developing and Using Tests Effectively_. San Francisco: Jossey-Bass, 1992.

Kerlinger, Fred N. _Foundations of Behavioral Research_. 3rd ed. New York: Holt, 1986.

American Educational Research Association. _Standards for Educational and Psychological Testing_. Washington, DC: American Psychological Association, 1985.

Nitko, Anthony. _Educational Tests and Measurement: An Introduction_. New York: Harcourt, 1983.

Seyler, Dorothy L. _Doing Research: The Complete Research Paper Guide_. New York: McGraw, 1993.

Articles:

Glowacki, Margaret L. "Developing Computerized Tests for Classroom Teachers: A Pilot Study."
Paper presented at the Annual Meeting of the Mid-South Educational Research Association, Biloxi, MS, November 8-20, 1995.

Lamazeres, Yvonne M. "The Effects of Computer-Assisted Instruction on the Writing Performance and Writing Anxiety of Community College Students." _Dissertation Abstracts International_ 53.3-A (Sep. 1992): 703.

Oates, William. "An Evaluation of Computer-Assisted Instruction for English Grammar Review." _Studies in Language Learning_ (Sep. 1981): 193-200.

Schaeffer, Gary A. "Field Test of a Computer-Based GRE General Test." Apr. 1993. 79 pp. ERIC file ED385588.