A recent article in the Atlantic Monthly discussed the continuing progress of the Turing Test, a thought experiment proposed by the famed computer pioneer Alan Turing. Turing proposed that a test of a computer’s intelligence would be whether it could fool a human evaluator into thinking the computer was a human participant during a five-minute conversation at least 30% of the time. (Turing’s original paper, including an extended discussion of why success at this game would qualify as intelligence, can be found here.) An ongoing contest in England awards the Loebner Prize every year to the computer program that best approximates a human. Although no program has yet cracked the 30% mark, several have come surprisingly close. Another article, in the New York Times, chronicled the ongoing development of an IBM program, known as Watson, designed to play Jeopardy. A showdown with modern Jeopardy legend Ken Jennings is expected next week. Both of these articles show the continuing progress of artificial intelligence in both understanding and actively participating in human language activities. These advances obviously raise a series of profound questions, but my thoughts turned to their potential applicability in the legal realm. Could an artificial judging program, at some indeterminate point in the future, be created to resolve cases and other legal disputes? Would we want such a program? Would we feel comfortable with the decisions it reached? To me, the central question is this: do we believe that an approximation of a human mind, even a perfect approximation that calls on resources an actual mind cannot access, still lacks something fundamentally needed to sit in judgment? A brief discussion of this somewhat silly thought experiment follows below.
In his memorable confirmation hearings, Chief Justice Roberts referred to the act of judging as ‘calling balls and strikes.’ There is a good deal of scholarship arguing that Roberts’ analogy was fatally flawed, motivated more by the innate allure of what we want judging to be than by what judging actually is. (See Jerome Frank’s classic, Law and the Modern Mind, and Erwin Chemerinsky’s response to Justice Roberts, Seeing the Emperor’s Clothes: Recognizing the Reality of Constitutional Decision Making.) However, if we give credence to Roberts’ analogy, the obvious implication is that there is a relatively clear line of legality, built by all the various instruments of legal process, and a judge simply decides whether fact patterns fit within that line. Under that conception, it is hard to imagine a more perfect judge than an artificial intelligence program. Such a program would have absolute access to every statute, case, and piece of legislative history that could apply to a given situation. The program could also be given perfect neutrality. (Or one can imagine robo judge 2.0, with policy sliders that could be set by the president who appoints him.) Such a program would be a perfect blue-booker. (And if it wasn’t, it would provide a welcome point of strength, alongside Richard Posner, for any beleaguered 1L protesting against the baroque intricacies of the Bluebook.) Somewhat more seriously, one could think that an artificial intelligence program, if it could understand human language, would be able to call on massive reservoirs of formal logical reasoning: C++-style rigor applied to the task of deciphering the intricacies of the law. Furthermore, its vast database would allow Judge Robot, at least technically, to use the tools of case analogy and precedent that characterize the casuistic form of reasoning.
Judge Robot, at least in certain types of cases such as antitrust, may be capable of doing some of the actual quantitative economic analysis that is generally only vaguely gestured at in judicial opinions.
And yet, despite my hypothetical Judge Robot’s many good qualities, I am left deeply uncomfortable (and I do not think I am entirely alone) at the thought of such a judge deciding any case in which I had a personal stake. Perhaps some of this is simply an intellectualized form of the uncanny valley problem: the closer artificial intelligence comes to approximating the processes of human thought, the more uncomfortable it makes us feel. More precisely, though, my own inherent distaste for the idea seems a form of postmodern dualism as applied to legal jurisprudence – a belief that there is a mind of judging, which I believe my robo judge could approximate, and a heart of judging, constituted by lived experience and the virtue of empathy it engenders, which it seems impossible to believe could be instilled in my robo judge.
My distinction may be an instinctive restatement of Roberto Unger’s work. Unger has discussed two antinomic principles, formality and equity, that underlie jurisprudence. First is the concept of formality, which Unger describes as “the striving for a law that is general, autonomous, public and positive.” “A system of rules is formal so far as it allows its interpreters to justify themselves by reference to the rules themselves, without regard to any other arguments of fairness or utility.” Our Judge Robot seems capable of satisfying this demand of formality. But Unger also outlines the concept of equity, which he describes as “the intuitive sense of justice in a particular case.” A judge operating in the spirit of equity extends the same love he possesses for his friends and family outward toward the individual members of the community at large. Formality corresponds to a traditional Western conception of rational thinking, often prized for its supposedly deductive rigor and the truthiness of the results it reaches. A computer, or at least my conception of a computer, seems capable of producing this form of thinking. Equity has a somewhat murkier tradition, undoubtedly prized but somewhat more difficult to explain.
President Obama’s controversial remarks on what he looks for in a judge seemed to provide one characterization: “[W]e need somebody who’s got the heart, the empathy, to recognize what it’s like to be a young teenage mom. The empathy to understand what it’s like to be poor, or African-American, or gay, or disabled, or old. And that’s the criteria by which I’m going to be selecting my judges.”
The primary source of empathy, I believe, must be lived experience, the only thing capable of producing at least some level of understanding of another human being’s situation. This empathy cannot, it seems to me, be programmed; it must be formed over the course of a life. Perhaps when we react against the concept of Judge Robot, we are reacting against his fundamental inability to generate that empathy. Such a reaction may also show that those who prize a ‘by the book’ judge are implicitly accepting of, and comfortable with, that notion only so long as the book is wielded by a human judge, who innately possesses qualities that a robotic judge cannot.