Interview with Les Perelman

Interview with “The Man Who Killed the SAT”

Photo of Les Perelman

It’s rare that an academic gets international recognition for his or her research, rarer still in the Humanities, and rarest of all in Composition/Rhetoric, but Les Perelman has been interviewed by the Australian Broadcasting Corporation, the Canadian Broadcasting Corporation, MSNBC, NPR, The New York Times, The Boston Globe, Mother Jones, and countless other media outlets, online and off, regarding his criticism of high-stakes, automated writing tests and their computerized evaluation. His work was even referred to on The Colbert Report!

Regarding Automated Essay Scoring (AES), Perelman enumerated its serious problems to The New York Times (Winerip, 2012): computerized grading can easily be conned and compromised; it is vulnerable to test-preparation classes affordable only to an affluent minority; it sets a very limited and rigid standard for what good writing is; and it leads teachers to teach to the test, thereby degrading writing instruction. Since many of these criticisms also apply to the way the SAT writing exam is administered and evaluated, several pundits and scholars credit the now-retired MIT professor with undermining the reliability and validity of that exam. Largely thanks to Perelman, the writing exam will be optional when the new SAT rolls out in 2016.

How did being the Director of Writing Across the Curriculum at MIT influence your research in terms of topic and method?

MIT is a place where data are the main currency. Assertions need to be backed up by data. If you have the data, you are listened to. If you don’t have the data, you will be ignored.

Engineering practice, especially the concepts around design, has influenced both my teaching and research. All writing is essentially an engineering design problem: creating an artifact using a relatively recent technology (only 4,000-5,000 years old) for a specific purpose and targeting it to a specific population. A key concept in engineering design is the trade-off. You can design a very passenger-safe car, like an armored personnel carrier, but it will get 10 gallons to the mile at best. In designing automobiles, there is always a conscious decision about making such trade-offs, such as between safety and fuel economy, and engineers need to be able to defend their choices. In this example, neither an armored personnel carrier nor a death trap that gets 100 miles per gallon is likely to be the optimal solution. Writers are always making trade-offs. How detailed an explanation should I give? Should I use technical vocabulary that a small portion of my audience may not understand? How do I balance clarity and precision? As in any design problem, there is no right answer, only optimal answers given specific design constraints and audiences.

The concept of trade-offs is a key issue in writing assessment. How do we balance such design criteria as validity, reliability, cost, the prevention of cheating, and fairness? It is precisely these questions that led to my interest in online assessment. In essence, assessment is always a design problem. Online assessments can have much greater face validity because they can more closely replicate most of the characteristics of college-level writing, including essays based on one or more texts and time to plan, revise, and edit. They can be graded as reliably as any timed impromptu that is not scored largely on length. They cost somewhat more to score, but not much more, because they consist of typescript rather than handwriting; consequently, even though they are usually longer, they are easier to read. They are fairer than many timed impromptus because, being text-referenced, they make a student’s prior knowledge of the topic less important. In preventing cheating, however, an online assessment is vastly inferior to a timed impromptu with all of its safeguards: we don’t know who was actually behind the keyboard. We decided that the advantages for the vast majority of students outweighed the possibility of a few students cheating.

Finally, at MIT I have had the opportunity to hang out with world-class linguists and computer scientists, who have taught me much and have given me a perspective that confirms my core intuition that Automated Essay Scoring is impossible given our current state of knowledge. Indeed, in terms of developing a theoretical basis for such a device, we are much closer to building a Star Trek-type transporter.
