Article Compares Research Results Using Westlaw and Lexis

This morning the Law Librarian Blog reports on a study that makes concrete the differences in research results produced by the Westlaw and Lexis research systems. The author of the paper, Susan Nevelow Mart, a reference librarian at UC Hastings, provides this abstract on SSRN:

Since the advent of LexisNexis headnotes and the LexisNexis classification system, the author has wondered about the different ways results are generated in West’s Custom Digest and in LexisNexis’s ‘Search by Topic or Headnote’ and by KeyCite and Shepard’s. There has been some anecdotal discussion about the differences, but no empirical investigation. This paper starts the investigation process: the author took ten pairs of matching headnotes from legally important federal and California cases and reviewed the cases in the results sets generated by each classification and citator system for relevance. The relevance standards for each case are included. The paper first reviews previous full-text database testing, and the benefits and detriments of both human indexing and algorithmic indexing. Then the two very different systems are tested. Ten pairs of headnotes is too small a sample to say absolutely that results generated by system A are and always will be a certain percentage more or less relevant than system B. However, the differences in the results sets for classification systems and for citator systems do raise some interesting issues about the efficiency and comprehensiveness of any one system, and the need to adjust research strategies accordingly.

I have not yet read the article, but Joe Hodnicki’s post on the Law Librarian Blog says that in Mart’s small sample, a search using Westlaw’s human-generated key numbers returned a larger percentage of relevant results than searches using Lexis’s algorithm-generated topic or “more like this” headnote features. More important, in my view, the results in the two systems differed significantly, with the Lexis searches finding a number of relevant cases that the Westlaw searches did not.

Though the results are obviously preliminary, the study’s approach and findings are interesting. I plan to discuss the study with my students as concrete support for my advice that very thorough electronic research requires trying a number of different search strategies (e.g., terms-and-connectors, citator, and headnote searches) in both systems.
