Now presiding, Judge Robot. Do you object?

13 Feb

Sorry Sly, there's a new game in town.

A recent article in The Atlantic Monthly discussed the continuing progress of the Turing Test, a thought experiment proposed by the famed computer pioneer Alan Turing. Turing proposed that a test of a computer’s intelligence would be whether it could fool a human evaluator into thinking the computer was a human participant, during a five-minute conversation, at least 30% of the time. (Turing’s original paper, including an extended discussion of why success at this game would qualify as intelligence, can be found here.) An ongoing contest in England awards the Loebner Prize each year to the computer that best approximates a human. Although no computer has yet cracked the 30% mark, several have come surprisingly close. Another article, in the New York Times, chronicled the ongoing development of an IBM program, known as Watson, designed to play Jeopardy. A showdown with modern Jeopardy legend Ken Jennings is expected next week.

Both of these articles show the continuing progress of artificial intelligence in understanding and actively participating in human language activities. The advances obviously raise a series of profound questions, but my thoughts turned to their potential applicability in the legal realm. Could an artificial judging program, at some indeterminate point in the future, be created to resolve cases and other legal disputes? Would we want such a program? Would we feel comfortable with the decisions it reached? To me, the central question is this: do we believe an approximation of a human mind, even a perfect approximation that calls on resources an actual mind cannot access, still lacks something fundamentally needed to sit in judgment? A brief discussion of this somewhat silly thought experiment follows below.
In his memorable confirmation hearings, Chief Justice Roberts referred to the act of judging as ‘calling balls and strikes.’ There is a good deal of scholarship arguing that Roberts’ analogy was fatally flawed, motivated more by the innate allure of what we want judging to be than by what it actually is. (See Jerome Frank’s classic, Law and the Modern Mind, and Erwin Chemerinsky’s response to Chief Justice Roberts, Seeing the Emperor’s Clothes: Recognizing the Reality of Constitutional Decision Making.) However, to give credence to Roberts’ analogy, the obvious implication is that there is a relatively clear line of legality, built by all the various instruments of legal process, and that a judge simply decides whether fact patterns fit within that conception.

Under that conception, it is hard to think of a potentially more perfect judge than an artificial intelligence program. Such a program would have absolute access to every statute, case, and piece of legislative history that could apply to a given situation. The program could also be given perfect neutrality. (Or one can imagine Robo Judge 2.0, with policy sliders that could be set by the president who appoints him.) Such a program would be a perfect blue-booker. (And if it wasn’t, it would provide a welcome point of strength, alongside Richard Posner, for any beleaguered 1L protesting the baroque intricacies of the Bluebook.) Somewhat more seriously, one could think that an artificial intelligence program, if it could understand human language, would be able to call on massive reservoirs of formal logical reasoning: C++-style logic applied to the task of deciphering the intricacies of the law. Furthermore, its vast database would allow Judge Robot, at least technically, to use the tools of case analogy and precedent that characterize the casuistic form of reasoning.
Judge Robot, at least in certain types of cases such as antitrust, may be capable of doing some of the actual quantitative economic analysis that is generally only vaguely gestured at in judicial opinions.

And yet, despite my hypothetical Judge Robot’s many good qualities, I am left deeply uncomfortable (and I do not think I am entirely alone) at the thought of such a judge deciding any case in which I had a personal stake. Perhaps some of this is simply an intellectualized form of the uncanny valley problem: the closer artificial intelligence approximates the processes of human thought, the more uncomfortable it makes us feel. More precisely, though, my own inherent distaste for the idea seems a form of postmodern dualism as applied to legal jurisprudence: a belief that there is a mind of judging, which I believe my robo judge could approximate, and a heart of judging, constituted by lived experience and the virtue of empathy it engenders, which it seems impossible to believe could be imputed into my robo judge.

My distinction may be a somewhat instinctive version of a distinction in Roberto Unger’s work. Unger has discussed two antinomic principles, formality and equity, that underlie jurisprudence. First is the concept of formality, which Unger describes as “the striving for a law that is general, autonomous, public and positive”: “A system of rules is formal so far as it allows its interpreters to justify themselves by reference to the rules themselves, without regard to any other arguments of fairness or utility.” Our Judge Robot seems capable of satisfying this demand of formality. But Unger also outlines the concept of equity, which he describes as “the intuitive sense of justice in a particular case.” A judge operating in the spirit of equity extends the same love he possesses for his friends and family outward toward the individual members of the community at large. Formality corresponds with a traditional Western conception of rational thinking, often prized for its supposedly deductive rigor and the truthiness of the results it reaches. A computer, or at least my conception of a computer, seems capable of producing this form of thinking. Equity has a somewhat murkier tradition, undoubtedly prized but somewhat more difficult to explain.

It's not you, it's us.

President Obama’s controversial remarks on what he looks for in a judge seemed to provide one characterization: “[W]e need somebody who’s got the heart, the empathy, to recognize what it’s like to be a young teenage mom. The empathy to understand what it’s like to be poor, or African-American, or gay, or disabled, or old. And that’s the criteria by which I’m going to be selecting my judges.”

The primary source of empathy, I believe, must be lived experience, the only tool capable of producing at least some level of understanding of another human being’s situation. This empathy cannot, it seems to me, be programmed; it must be formed over the course of a life. Perhaps when we react against the concept of Judge Robot, we are reacting against his fundamental inability to generate that empathy. Such a reaction may also show that those who prize a ‘by the book’ judge are implicitly accepting of and comfortable with that notion only so long as the book is wielded by a human judge, who innately possesses qualities that a robotic judge cannot.


4 Responses to “Now presiding, Judge Robot. Do you object?”

  1. Rachel Funk February 12, 2011 at 8:09 am #

    With regard to the idea of equity, something I’m curious about is how we reconcile our intuitive hesitation about mechanical processes making legal decisions with our knowledge that different (human) judges reach different results with the same fact-pattern. It’s not just a question of every case being different — it’s a case of every judge being different. That is, we know certain judges are harsher sentencers, or that others are more defendant-friendly, to the point where we felt the need to create the Sentencing Guidelines, yet somehow we’re still okay with human judges and not okay with robot judges. How do we explain that? My intuition is that it’s some brand of an us-vs.-them mentality, but I really don’t know…

  2. Jon Hanson February 14, 2011 at 11:37 am #

    Great post. It seems you want a judge to be “socially situated” as well as “legally focused.”

  3. pat February 18, 2011 at 9:53 am #

    If robotics cannot be used in courtrooms as judiciary, mankind should give up on the creation of artificially intelligent robots altogether. Taking in data and categorizing, factoring, and logically organizing it is what robots do best, and it is what man seeks to simulate in robots that approximate the intellectual skills of humans. If it cannot be done in law, where all laws are written and laid out in relation to one another, it is unlikely that man will ever be able to produce a human-like robot. The same would apply to mathematics and engineering.


