Chapter 4: Developing a Theory for Portfolio-Based Writing Assessment

This chapter seems to provide the specifics we’ve found lacking in previous chapters. It opens with reflections on its purpose: to address the what, how, and why of portfolio assessment. First, the authors look for the elements that make a portfolio, borrowing a metaphor from chemistry, that "life as we know it is carbon-based." What, then, is the carbon of portfolios? Second, they examine how portfolios work; and third, why they work, that is, why various features of portfolio assessment are important to their working well.

WHAT?: The carbon of portfolios involves three principles: collection, reflection, and selection.

  1. Collection: The portfolio is a collection of materials (the plural is important) which the student has produced. Though contents vary, the key, the authors say, is that a portfolio include "a multiplicity of texts and information about the writing context, not merely the writing itself." They stress the importance of judging students’ writing with knowledge about the instructional context, and the writer’s development, strategies and processes.

  2. Reflection: This is recognized as the element that keeps a portfolio from being merely "a pile" of writing. It is "the deliberative process" that makes connections between the pieces, considers the writing context and process, makes the writing accessible by interpreting it for a reader, in fact, makes the writer accountable to the reader. Reflection helps the student writer begin to self-assess his or her work.
  3. Selection: Reflection has its final purpose in the student’s selection of materials to include in the portfolio. Without selection, the writer would have no need to reflect on his/her strengths and how to show them to best effect; nor would the reader see the coherence of the chosen pieces. Selection is the final step in making the portfolio a public document.

A word on assessment: In keeping with their belief in local contexts governing the development and implementation of portfolio programs, the authors hope their book will be useful to a wide audience. Therefore, they do not wish to outline a single "exemplary" approach to portfolio assessment, nor do they feel that it is possible to show the variety of approaches in existence. They offer instead important questions that should be considered by those creating a portfolio-based assessment program. The questions fall into four focus areas: the learner, the teacher, the assessor, and the program.

Focus on the Learner: There are ten questions here. I’ll try to convey the gist. The first set of questions asks how the assessment acknowledges the writing context: assignments, teacher’s influence, teacher’s instructions with regard to preparation of the portfolio or revision of the pieces of writing, the richness and variety of materials presented in the class or "thinness or sameness" of "the schools through which the learner has progressed." (Obviously there are some assumptions to that last one!)

The second set of questions has to do with the learner’s context: how and in what ways does the portfolio represent the learner, the learner’s "identity," "values," "sense of self," "ability to address an audience, academic acculturation," "discourse communities," previous schooling, home or family worldview, race, class, ethnicity, etc.?

At this point the authors do not discuss whether or how all this information might be evaluated; rather, they point out that readers often get far more information than they actually need to evaluate writing. However, they feel that considering types of information likely to be revealed in a portfolio is important for program planners as it raises important questions about criteria for assessment. They also advocate ungraded writing courses (pass/fail), delaying grading of, or not grading, individual papers, and/or allowing the whole grade to be based on the portfolio.

Focus on the Teacher: Questions about the teacher’s role as assessor are based on the Seven C's. To ensure fair assessment, teachers must "work together to agree on the structure of the portfolio, criteria for assessment, the value to be given to the readings of all concerned."

Focus on the Program: These questions focus on considering the needs of the program in determining what information ought to be gathered, and what kind of portfolio can provide this information. At the center of this consideration are questions about students’ learning: what tools or skills the program wants to encourage students to acquire, how to provide opportunities for this acquisition, and how the assessment will measure whether it has taken place. For instance, how much does the assessor want to know about the student’s writing ability, writing processes, thinking processes, the instructional setting, and previous schooling? Programs can then ask: how many texts are needed to provide this information, and what kinds should they be, and how long? The authors contend that answering questions about what to assess and how portfolios can provide the necessary information will clarify thinking about how to design the program. They now turn to designing a method of assessment.

Assessment Framework: There are two ways to achieve reliability: focusing on the product, or on the process.

Scoring: The authors mention the few research studies that have been conducted on portfolio programs, and discuss how the assessments fared with regard to reliability and validity. One is the Vermont K-12 portfolio project, in which researchers found portfolio scoring unreliable. C&H-L point out that the researchers used "traditional psychometric [methods] accepted for essay tests," which "do not work very well in the portfolio context." (The researchers worked for the Rand Corporation.) Another project, at the University of Michigan, achieved an impressive level of reliability, defined as "readers’ agreement on scores," of .85. They also cite CU-Denver’s program as an example of achievement in reliable portfolio assessment.

C&H-L then discuss an alternative to the traditional, formal methods of assessing portfolio assessment, a method drawn from "naturalistic inquiry" which they find more in tune with the goals of portfolio assessment. This approach asks teachers "to develop ‘an interpretive framework – a coding scheme’" against which portfolios can be analyzed. Forms of this method are used at CUNY-Hunter and at CU-Denver. The authors then assess the assessment of the assessment, advocating that we "turn away from ‘objectivity’…and toward interpretation, toward understanding teachers and teaching…" Portfolio assessment, they feel, should find new ways of satisfying those clamoring for accountability and standards, rather than relying on "traditional crude measures of interrater reliability and criterion validity." Once again, we hear that there is no body of empirical research based on the type of portfolio scoring they advocate, and that this need must be a high priority in order to satisfy bureaucRATS. (Oops, sorry, I don’t know how that happened…)

The authors have a list of questions here too, to help programs develop scoring that reaches for validity and reliability. These cover four areas: 1) who the scorers should be, what role they play in the teaching/learning process, and what training they should receive; 2) the setting in which and conditions under which the scoring takes place; 3) how disagreements are resolved; and 4) what sort of data each scoring session will produce, and how one session might affect subsequent ones. They advocate for teachers "familiar with the students and the program" to score portfolios, as they are part of the context in which the portfolios were produced, yet also support experimentation with "other significant…stakeholders" as readers, such as faculty from across the disciplines. While they do not advocate giving students a direct role in evaluating portfolios, they support giving students’ self-assessments weight in their final scores, having found that students well taught in the processes of peer critique are able evaluators of their own and each other’s work. The program must also consider what data to collect from scoring sessions in order to assess itself, and to further discussion of its scoring practices.

Actual Criteria for Scoring Portfolios: The authors first list guiding questions for those developing portfolio assessment programs to consider, such as, "to what results will the evaluation lead?" Will the evaluation lead to a grade on a scale, passing or failing a course, placement into a course, etc.? "Where do the criteria and standards come from?" "Who makes the judgments?" "What part do students’ reflections and self-assessments that appear in the portfolio play…in decisions about the ‘worth’ of the portfolio?" They then offer two examples of "heuristics" used at CU-Denver, for exiting first-year composition, and at the University of Michigan, for entry-level assessment.

CU-Denver: The five categories considered are 1) "Range of writing" ("genres, audiences, purposes"); 2) "Development of the writer’s abilities" over the term (including quality of revision, whether substantive or surface); 3) "Engagement with ideas and issues" (whether the writer is "tackling ‘important’ material," "writing as discovery"); 4) "Textual excellence" (looking at the mechanics and presentation of the finished product); and 5) "Self-reflection" (the writer’s insights into his/her own work, ability to interpret it for the reader, and to self-evaluate).

Michigan: This heuristic appears as a chart with each of four categories judged on a scale between "consistently present or high" and "consistently absent or low." These categories include the

Interestingly, these heuristics were developed differently: at CU-Denver a group of composition teachers was asked to create a collaborative list of criteria for grading, whereas at UM, teachers read a group of portfolios and noted factors as they read, without evaluating them. They used the elements noted as a framework in which to consider each writer’s achievement and to begin the discussion about scoring criteria. This more "bottom-up" approach achieved as high a level of reliability as traditional holistic scoring.

The authors envision portfolio programs as a building up of information about the writer. They advocate for "multiple criteria" and "multiple judgments" for evaluating portfolios, which can "play a part in bureaucratic decisions…but need never make a direct pass/fail or ‘gatekeeping’ decision." (And how exactly does that work??)

The last section of this chapter has to do with the specific conditions in a program or school that affect the general heuristics laid out. Questions such as who has access to portfolio evaluation (all entering students? students in one course? freshmen?), who assesses the portfolios, who controls the portfolio’s contents, and in what context the portfolios are produced are represented in various charts showing different actual and hypothetical plans. The charts marking different specific contexts are used to highlight positive and problematic outcomes of portfolio assessment. Basically, the authors advocate visions which allow more student control of portfolio contents (e.g., allowing writing from outside the class for which the portfolio is being prepared) and which consider student self-assessments. If the teacher or program controls all the dimensions of portfolio production and assessment, they feel, most of the benefits of portfolios (variety of student work, student investment in learning and writing, student ownership of the work) are lost.

The summary to this chapter provides the authors' recommendations for portfolio assessment programs. These include teacher involvement in the assessment, practices which enable student ownership of the portfolio, responding to local contexts, taking into account the needs of "stakeholders" outside the classroom and even outside the school, the importance of training and discussion in the process of articulating and developing consensus on criteria for evaluation, continual assessment and discussion of the assessment, and documentation of each instance of portfolio assessment. From their recommendations I get the feeling that "all is good."
