I suspect many lawyers have had the experience of briefing and arguing a case before an appellate court, and then receiving an opinion back from the court that seems like it was written for another case, with the court simply not engaging with the parties’ major arguments. Although anecdotes along these lines abound, no rigorous studies are available to show us how common such judicial nonresponsiveness is.
Part of the problem is that researchers would have to read a large volume of briefs and opinions, and then painstakingly sort out exactly which arguments were addressed and how thoroughly. Not only would the work be tedious and time-consuming, but it would also raise reliability concerns, given the subjectivity involved in deciding whether, and how satisfactorily, a court has responded to an argument.
Chad Oldfather, Joseph Bockhorst, and Brian Dimmer ’09 think they have a solution to these difficulties: automated research that uses computers to compare a large number of briefs and opinions quickly and objectively. They describe their project in a new paper on SSRN entitled “Judicial Inaction in Action? Toward a Measure of Judicial Responsiveness.”
In the paper, they describe two different ways of measuring judicial responsiveness in an automated fashion. The first involves comparing the overall language in an opinion with the language in the briefs, while the second determines the extent to which the opinion cites the same cases as the briefs. Oldfather, Bockhorst, and Dimmer apply these methodologies to a set of 30 First Circuit cases, and then compare the results with a manual analysis of the same cases. They find a statistically significant correlation between the manual and automated approaches, which provides hope that the automated methods may prove useful when analyzing much larger sets of cases that could not be coded manually without an enormous investment of time and resources.
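The paper does not reproduce its code, so the sketch below is only a rough illustration of how two such measures might be computed, assuming a cosine similarity over simple word counts for the language comparison and a set-overlap percentage for the citation comparison. The function names and the crude citation pattern are my own illustrations, not the authors' actual methodology.

```python
import math
import re
from collections import Counter

def word_count_similarity(brief_text: str, opinion_text: str) -> float:
    """Cosine similarity between simple word-count vectors of two documents.

    One plausible way to compare the 'overall language' of a brief and an
    opinion; the paper's actual computation may differ.
    """
    def counts(text: str) -> Counter:
        return Counter(re.findall(r"[a-z']+", text.lower()))

    a, b = counts(brief_text), counts(opinion_text)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Very rough pattern for federal reporter citations (e.g., "123 F.3d 456");
# real citation extraction would require a far more careful parser.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d)?|F\. Supp\.(?: 2d)?)\s+\d{1,4}\b"
)

def citation_overlap(brief_text: str, opinion_text: str) -> float:
    """Share of the brief's cited cases that also appear in the opinion."""
    brief_cites = set(CITATION_RE.findall(brief_text))
    opinion_cites = set(CITATION_RE.findall(opinion_text))
    if not brief_cites:
        return 0.0
    return len(brief_cites & opinion_cites) / len(brief_cites)
```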
Here is the paper’s abstract:
This article attempts to develop a measure of what we call “judicial responsiveness,” which, roughly stated, concerns the extent to which judicial opinions reflect the arguments made by the parties in their briefs. We applied two methods of automated content analysis to the briefs and opinion in each of a set of 30 cases decided by the First Circuit, measuring for similarity based on computations of word counts and citation percentages. We then compared the results of those methods to the results of manual coding of the same documents. The existence of statistically significant correlations among the measures supports the conclusion that our automated methodologies serve as a valid means of assessing responsiveness. We argue that these investigations can inform a range of scholarly debates, including efforts to assess judicial quality and the influence of ideology on judging, as well as debates over specific components of the judicial process, such as the use of unpublished opinions.
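The validation step described in the abstract, comparing the automated measures against manual coding of the same 30 cases, amounts to computing a correlation between two lists of per-case scores. The snippet below is a generic illustration of that step using made-up, purely hypothetical numbers; the paper may well use a different correlation statistic, and its actual data are not reported here.

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two score lists (e.g., manual vs.
    automated responsiveness scores for the same set of cases)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical per-case scores, for illustration only (not the paper's data).
manual_scores = [0.8, 0.4, 0.6, 0.9, 0.3]
automated_scores = [0.7, 0.5, 0.6, 0.8, 0.2]
print(f"r = {pearson_r(manual_scores, automated_scores):.2f}")
```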
Great article. Not only does the content of briefs matter, but how they appear matters as well. A new book called Typography for Lawyers is available on this topic. (I am not the author, but I did appreciate what the book taught me.)