We’ve all had him: that professor who drones in a monotone
voice, presents material too quickly and expects his students to
be over-enthusiastic about geological rock formations. We sit through
his lectures, crossword puzzle in hand, and silently wait for the
end of the quarter, so we can fill out our teacher evaluations and
hope that next time he leaves a few books off the syllabus and finds
his sense of humor. We bubble in our opinions and think that we
are fulfilling our democratic duty. But what exactly are we doing
when we fill out those evaluations besides offering feedback and
constructive criticism? Beyond letting us evaluate our professors,
the administration uses our responses to make one of the hardest
decisions it faces: which professors should continue to teach, who deserves
tenure and who is expendable. In fact, student evaluations of teaching
highlight the interaction between the administration,
the faculty and the students. This
paper explores the relationship between these three groups by focusing
on student evaluations and argues that these evaluations are not
a valid measure of effective teaching and therefore should not be
the sole criterion by which a professor is rated.
To begin with, it is necessary to briefly describe
the relationship between administrators, faculty and students in
the postmodern university. First, the administration runs the postmodern
university as a business selling a product: a degree. In turn,
the student is the consumer who pays tuition in return for that degree.
Because the student’s role has shifted from that of an intellectual
to that of a consumer, students’ attitudes
towards learning have become apathetic. For example, Paul Trout, a professor
at Montana State University, argues that students see the postmodern
university not as a place of enlightenment but as a career boost.
In his article “Student
Anti-Intellectualism and the Dumbing Down of the University”
he states: “Although students have many reasons for going
to college, a very large number—71.3 percent of the entering
class of 1995—do so not to enrich their minds but their pocketbooks.”
Basically Trout argues that instead of learning for learning’s
sake, we are most interested in using our degree to ensure monetary
success. Thus, the student’s goal is to earn a degree with
as little effort as possible. Second, the relationship between the
professor and the student is important. Students evaluate their
professors, and these evaluations determine their professors’
careers. Importantly, students often have ulterior motives and biases
when filling out evaluations. Professors try to counteract this bias
(and in the process boost their evaluations) by appealing to
students through inflated grades and lowered standards. Thus, tenuous
relationships and reciprocal benefits characterize the postmodern
university.
Complicating the postmodern university further is
the fact that not all professors are equally affected by
student evaluations. Tenured professors
are evaluated by the quality of their research and not the quality
of their teaching. By contrast, untenured professors are evaluated
solely by their teaching. Positive student evaluations
are vital for this group because they are often the
sole measure of an untenured professor’s performance. This limits a professor’s
academic freedom, because professors
shy away from controversial topics that could reflect poorly on
them in their evaluations. In fact, this is an enormous power to bestow
upon unsuspecting students. To put the problem into perspective,
I offer an anecdote. One of my favorite professors was almost fired
as a Communication Studies lecturer because one student out of two
hundred gave him a bad student evaluation. If one evaluation has
this much power in a large class, consider the grim implications
for a small seminar. While students are basing evaluations on superficial
criteria such as a hideous tweed jacket, a foreign accent and an
unfair midterm grade, the administration assumes not only that we
are responding rationally but also that we are good judges of effective
teaching. While it would be simplistic to say that students can’t
recognize effective teaching when they see it, many critics note
that students often confuse effective speaking skills with class
content. Undoubtedly the two are linked, but many professors feel
they should be evaluated based on the knowledge they are conveying
rather than the way it is conveyed. Since our evaluations determine
the careers of our untenured professors, shouldn’t we take
the time to investigate student bias?
First, what do student evaluations actually
measure? The administration would argue that evaluations measure
effective teaching. For example, in her article “What
do Student Ratings Mean?” Kathleen McKinney, a coordinator
at the Center for the Advancement of Teaching, argues: “Instructors
may believe that student evaluations are unreliable. In general,
the research does not support this belief.” In fact, she believes
that student evaluations and a quantified ranking system are good
measures of effective teaching. However, her opinion is less credible
because she is an administrator. Her findings are biased by her
investment in the issue. By contrast, professors, who are equally
biased by their personal investment in the issue, argue that student
evaluations are invalid in that they do not measure teacher effectiveness.
For example, Edward Nuhfer’s article “Of
What Value are Student Evaluations?” argues that there
are too many variables intertwined in student evaluations. He notes:
“Student evaluations are not clean assessments of any specific
work being done. Instead, they are ratings derived from students
[sic] overall feelings that arise from an inseparable mix of learning,
pedagogical approaches, communication
skills and affective factors that may or may not be important to
student learning.” In essence, Nuhfer
recognizes that just because an animated teacher communicates well
does not mean that the lecture has educational content. Personality
variables affect student evaluations, but these variables do not
necessarily denote an effective teacher.
To illustrate the relationship between tenure
and rating variables such as perceived effectiveness and course difficulty,
I explored UCLA’s
bruinwalk website. First, I analyzed the site qualitatively:
while the sample is non-random, I compared the two
top-ranked professors in the Communication Studies department. One
professor has tenure; the other does not. Looking at the student
comments, I noticed that both professors were often referred to
as “great,” “funny” and “interesting.”
But there was one aspect in which the two professors differed: class
difficulty. Quantitatively, the tenured professor scored a 7 in
difficulty while the lecturer scored a 5.28. The qualitative
comments reinforced this finding. For example, students commented that
the tenured professor was “challenging” and to do well
in the class you had to “read in detail.” On the other
hand, the lecturer’s class was characterized as “not
difficult” and one student advised: “The tests are pretty
much based on discussion sections and the readings, so if you don't
want to go to class this would be a good class for you.” Given
that both professors were rated nearly equally effective (8.12 and 8.03
respectively), why was one class significantly less difficult?
When I looked at bruinwalk quantitatively,
I found that departments with more tenured professors tended to
have lower ratings than departments with fewer tenured
professors. By and large, the pure science and social science departments
ranked lower than the humanities departments. However, this alone does
not necessarily mean that the humanities departments have better professors.
For example,
David Kaufman, an educator and the director of the Learning
and Instructional Development Centre, states that the academic field,
such as humanities or social sciences, is related to student ratings.
Specifically he argues that ratings in the humanities are higher
than the social sciences, which in turn are higher than science
and math departments. Thus, department-level ratings alone do not adequately
capture the differences between tenured and untenured professors’ scores.
To better address the differences between tenured
and non-tenured professors, I analyzed intra-department ratings.
Using the communication studies department as a case study, I compared
the averages of professors and associate professors (a proxy for
tenured faculty) with those of senior lecturers and lecturers (a proxy for
untenured faculty). Of the twenty-one professors listed on the bruinwalk
website, only twelve were currently listed as members of the faculty.
Thus, only these twelve are compared. Only four of the twelve were
tenured professors while eight were untenured. While the ratings
were fairly consistent, tenured professors were rated as slightly
more difficult. In fact, tenured professors had an average difficulty of 6.93
as compared to the untenured professors’ average of 6.69. However,
this difference is probably not significant. Nonetheless, it hints
that untenured faculty, who rely on student evaluations, dumb down
their classes so as to appeal to the masses.
The fact that untenured professors are consistently
rated as less difficult raises the question: What is the correlation
between easier classes and positive evaluations? The simplest answer
is grades. Research consistently shows that anticipated grades
correlate with positive evaluations. For example, an analysis
by Edward Nuhfer showed that anticipated grades correlated positively
with global questions (such as “Overall, how effective was
this professor?”) on student evaluations. This correlation
was small (0.12) but significant. Thus, without altering course
content or teaching styles, professors can inflate student grades
to boost their evaluation scores. Moreover, longitudinal studies
show grade inflation occurring. For example, in “The
Current Status of Academic Standards in Engineering Education at
Ohio University” Professor Brian Manhire states: “In
1969, 7 percent of students received grades of A- or higher while
25 percent received grades of C or lower…by 1993 these figures
had essentially reversed—becoming 26 percent and 9 percent
respectively.” While this study doesn’t directly prove
that grade inflation results from untenured professors seeking better
evaluations, it does show that a higher proportion of students is
receiving higher grades.
Because grades consistently influence student
evaluations, student opinions should not be the sole criterion by
which a professor is assessed. While student evaluations provide
course feedback and criticism, they are not a valid measure of effective
teaching. Moreover, most researchers agree that student evaluations
should be used in conjunction with other measures. For example,
Paul Trout suggests that we use interviews, committee reviews, and
document reviews (such as a professor’s syllabus, handouts
and grade sheet) as well as student evaluations to determine a professor’s
effectiveness. Furthermore, Trout argues that activism
and unionization are the first steps
towards changing our postmodern educational values. If we implement
new standards, which include multiple measures of professor effectiveness,
the quality of our education (if not our grades) will rise.
Links
Marcus, Ben. (2001). “Graded by my Students.”
Time. http://www.time.com/time/magazine/article/0,9171,93296,00.html.
Mitchell, Lee Clark. (1998). “Inflation
Isn’t the Only Thing Wrong With Grading.” The Chronicle
of Higher Education. http://chronicle.com/che-data/articles.dir/art-44,dir/issue-35.dir/35a07201.htm.
Schneider, Alison. (2001). “What Grade
Would You Give Him?” The Chronicle of Higher Education. http://chronicle.com/weekly/v47/i23/23a01001.htm.
Trout, Paul. (2000). “Flunking the Test:
The Dismal Record of Student Evaluations.” Academe. vol. 86
no. 4. http://www.aaup.org/JA00Trou.htm.
Wilson, Robin. (1998). “New Research Casts
Doubt on Value of Student Evaluations of Professors.” The
Chronicle of Higher Education. http://chronicle.com/che-data/articles,dir/art-44.dir/issue-19.dir/19a01201.htm.