Assume We Have a Can Opener

There’s an interesting article by Jim Manzi in the most recent issue of the City Journal. In it, he addresses the weaknesses of empirical research in the social sciences, a problem he attributes to the greater “causal density” of questions concerning human behavior. Because of the complexity and number of potential causes for any given outcome, it is extremely difficult to conduct randomized field trials that isolate the cause to be tested.

Manzi begins his article by referring back to the debate about the stimulus package. Noting that Nobel laureates lined up on both sides of the question, he writes that “[f]ierce debates can be found in frontier areas of all the sciences, of course, but this was as if, on the night before the Apollo moon launch, half of the world’s Nobel laureates in physics were asserting that rockets couldn’t reach the moon and the other half were saying that they could.” The only thing that could be said for sure about the stimulus is that, however it turned out, “several Nobelists would be wrong about it.”

But the problem is worse than that. We can never really know who was right. If the stimulus does not achieve the results that were claimed for it, it may be because it failed or it may be because things were worse than we thought. Even if we can say that it did not work as intended, was the problem in Keynesian theory, or was the stimulus just too small? As Manzi puts it, we have “no reliable way to measure counterfactuals—that is, to know what would have happened had we not executed some policy—because so many other factors influence the outcome.”

Manzi points to a summary by scholars at the University of Cambridge of “all 122 known criminology RFTs [randomized field trials] with at least 100 test subjects executed between 1957 and 2004.” Manzi claims that “no program within this universe of tests has ever demonstrated, in replicated or multisite randomized experiments, that it creates benefits in excess of costs.”

He argues that business has had greater success with randomized field trials by “executing many fast, cheap tests in rapid succession, rather than big, onetime ‘moon shots.’” He suggests that the social sciences might benefit from this approach but offers the following cautionary principles for the evaluation of social science results: 1) “few programs can be shown to work in properly randomized and replicated trials;” 2) “within this universe of programs that are far more likely to fail than succeed, programs that try to change people are even more likely to fail than those that try to change incentives;” and 3) “those rare programs that do work usually lead to improvements that are quite modest, compared with the size of the problems they are meant to address or the dreams of advocates.”

I would add the problem of confirmation bias. Those in the physical sciences may have a bias toward seeing a hypothesis confirmed, but their preexisting values and assumptions about the way in which the world ought to work are less likely to be confounding.

This has implications for the law. Social science evidence is often introduced to support allegations of discrimination, to prove or undercut the rationale for a particular challenged public policy, or to establish a claim that act A produced effect B. Even courts that are willing to police this testimony through the use of Daubert or similar standards tend to defer to the prevailing assumptions of specialists within the field at issue. Manzi’s article raises, and not for the first time, the question of whether the field itself can produce robust and useful results.

Some will find certain ideological implications here. Increased skepticism about whether the social sciences can improve our world certainly undercuts the classic Progressive project for the improvement of society by what Franklin Roosevelt called “enlightened administration.”

But we shouldn’t be too quick to conclude that this is a message for the right. While conservatives are less likely to argue for the transformation of human behavior through public programs, they do make a series of empirical claims about what does and does not enhance human well-being. The proof of these claims is presumably equally elusive.

This Post Has One Comment

  1. Peter R Heyne

    Thanks for the enlightening post that confirmed my own bias against various forms of social (pseudo)science. Exhibit A from your alma mater: Project Implicit, which purports to be unbiased social science research, but which in reality has a fairly clear politico-social agenda revealed once one takes the tests, which, to my lay estimation, gauge more a person’s typing skill/hand-eye coordination than widely and deeply ingrained racism, sexism, etc.

    Perhaps, in Queen Gertrude’s words, my Anglo-centric white guilt doth protest too much. But cloaking any sort of bias (cf. Climategate) in the sacred mantle of Science suffocates genuine empirical inquiry.
