What’s New in the Classroom: Holistic Assessment

The current issue of the Journal of the Association of Legal Writing Directors (JALWD) has a number of interesting articles. In this post I want to discuss one piece in particular that really made me think about how I assess my students’ legal writing: Roger Klurfeld and Steven Placek’s “Rhetorical Judgments: Using Holistic Assessment to Improve the Quality of Administrative Decisions.”

In this piece, Klurfeld and Placek describe their work to help improve the quality of written decisions issued by the National Appeals Division of the United States Department of Agriculture. Their observations and experience make me wonder whether a holistic, reliability-tested approach to assessing student writing would improve the students’ learning experience and the overall quality of their writing.

Klurfeld and Placek write that

[m]any of these agencies [like the National Appeals Division of the USDA] have identified writing quality — however they define it — as a priority in their strategic plans, but the overwhelming number of hearings and decisions, coupled with regulatory guidelines for timeliness, may subordinate this goal to other management priorities.

Traditional approaches to improving agency decision-writing included “send[ing] writers to a slew of training programs” at local colleges or other training centers, which can be “cumulatively expensive, fragmented, and often ineffective when the agency does not integrate the course writing perspective into an overall program.”

Moreover, these programs rarely integrate the training with the day-to-day activities of the organization’s writers. To control the quality of written products, many government legal agencies dictate that hearing officers write decisions in boilerplate format, often contorting factual patterns and legal analysis into templates at the paragraph and sentence syntax level. Remedial instructional efforts may bludgeon writers by pointing out how previously written administrative decisions contain faulty grammatical traits, syntax errors, improper use of legal terminology, and deviation from the boilerplate.

In other words, unfortunately, all these efforts to identify and control writer “errors” did not work well overall. Indeed, for strong writers, Klurfeld and Placek “discovered a disturbing trend: the fragmented training approach and boilerplating parts of decisions actually caused stronger, more experienced and educated writers to write poorer decisions. These writers needed to be freed to perform under a different set of writing measures.” The authors note that the task of helping agency employees write better decisions is not so different from the task facing legal writing teachers:

[It] presents a practical legal business challenge as well as the theoretical challenges embedded in the nuances of legal writing pedagogy. Amidst the pressure of issuing a large volume of decisions, agencies must contend with improving the varying skills of writers; delivering well-reasoned, clear, and reader-friendly decisions to the public; and measuring organizational performance based upon the quality of its written products.

The major appeal the method has for me is that it would make clearer a basic truth: a reader responds to all of the qualities of a text together, in a holistic judgment, rather than analyzing each quality separately (the way writing teachers tend to do). As the authors write (footnote material omitted),

Legal writing professors and program managers may have adopted some elements also found in holistic evaluation, such as a rubric-based evaluation, employing multiple readers for high-stakes writing tasks, or portfolio grading. To distinguish these elements from the holistic assessment method, as it is used as a systematic theoretical and practical evaluation strategy for a writing program, it is helpful to begin our discussion by describing holistic assessment.

Holistic assessment comprises a scoring method based upon a rubric of identified writing criteria applicable to the subject area. Raters, or readers, are encouraged to view the writing sample as more than the mere sum of its elementary parts; readers do not judge separately singular factors — such as treatment of topic, selection of rhetorical method, word choice, grammar and mechanics — that constitute a piece of writing. Rather, evaluators are asked to consider these and other factors as elements that work together; they score the writing sample on the “total impression” it makes upon the reader.

The scoring system in a holistic method also seems to reflect reader response more clearly and expressly than my current “exceeds expectations / meets expectations / falls below expectations” system.

The scoring scale is usually a six-point scale, divided into two halves (a four-point scale is also common). Decisions that fall into the upper half — those scored four, five, or six — are satisfactory or labeled “mastery.” Lower-half decisions are unsatisfactory, or labeled “non-mastery.” Each score is described in terms important to readers. For example, a “two” might be described as “flawed writing,” while a submission that earns a “five” might demonstrate “clearly proficient writing.” After an informed reading, a rater first decides whether the writing sample is above or below the line. Based upon the pre-established holistic rubric of agreed-upon conventions, the rater then scores the essay.
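
To make the mechanics concrete for myself, here is a minimal sketch in Python of how the six scores map onto the mastery/non-mastery halves. Only the “flawed writing” and “clearly proficient writing” labels come from the passage above; the other descriptions, and the code itself, are my own illustration, not anything Klurfeld and Placek propose.

```python
# Minimal sketch of the six-point holistic scale described above.
# Only the labels for scores 2 and 5 come from the article;
# the rest are placeholders I made up for illustration.

MASTERY_CUTOFF = 4  # scores of 4, 5, or 6 fall in the upper half

SCORE_DESCRIPTIONS = {
    1: "seriously flawed writing",    # assumed label
    2: "flawed writing",              # from the article
    3: "developing writing",          # assumed label
    4: "competent writing",           # assumed label
    5: "clearly proficient writing",  # from the article
    6: "superior writing",            # assumed label
}

def holistic_judgment(score: int) -> str:
    """Return the mastery band and description for a holistic score."""
    if score not in SCORE_DESCRIPTIONS:
        raise ValueError("holistic scores run from 1 to 6")
    band = "mastery" if score >= MASTERY_CUTOFF else "non-mastery"
    return f"{band}: {SCORE_DESCRIPTIONS[score]}"

print(holistic_judgment(5))  # mastery: clearly proficient writing
print(holistic_judgment(2))  # non-mastery: flawed writing
```

Note that the rater’s actual workflow runs the other way: the above-the-line or below-the-line call comes first, and the exact score second.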

Of course, there are a number of difficulties in transferring Klurfeld and Placek’s method directly to the legal writing classroom, including, as they point out, the fact that our writers come and go over time rather than stay on as employees. Also, establishing the system’s statistical reliability the way Klurfeld and Placek did would be really difficult; I would need to recruit additional raters for a large set of writing samples, and then do all that rating and process the results.
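
To give a sense of what that reliability testing involves: one standard measure of agreement between two raters is Cohen’s kappa, which corrects raw agreement for chance. I don’t know whether this is the statistic Klurfeld and Placek used, so the sketch below, with made-up scores, is illustrative only.

```python
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa: observed agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired scores"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: the probability that two raters with these score
    # distributions would agree if they scored at random.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[s] / n) * (counts_b[s] / n)
        for s in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical holistic scores from two raters on ten writing samples:
rater_1 = [5, 4, 2, 6, 3, 4, 5, 2, 4, 5]
rater_2 = [5, 4, 3, 6, 3, 4, 4, 2, 4, 5]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.73
```

A kappa near 1 means the raters’ holistic judgments track each other closely; a kappa near 0 means they agree no better than chance, which is exactly what the pre-established rubric of agreed-upon conventions is supposed to prevent.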

Some parts of the approach described in the article, though, will be relatively easy to implement, with modifications to my rubric. Indeed, the authors’ explanation of why holistic assessment seems well suited to assessment of legal writing focuses more on the reader-oriented nature of the assessment than on statistical reliability.

It privileges the reader’s role in determining writing quality, adopts an inherently judgmental disposition, and — through a rubric — weighs writing standards as they affect rhetorical and content-driven aspects of the written product. After all, legal rhetoric is intensely judgmental and self-reflective, often calling explicitly upon the audience to evaluate and weigh both the content and form of an argument. Indeed, the main purpose of much legal rhetoric is to engender a particular response in a reader or group of readers.
