
Do we still need human judges in the age of Artificial Intelligence?

Technology and the law are converging, but what does that mean for justice?

Credit: Pixabay/Geralt.

The Fourth Industrial Revolution is fusing disciplines across the digital and physical worlds, and legal technology is the latest example of automation reaching further and further into service-oriented professions. Casetext, for example—a legal tech startup providing Artificial Intelligence (AI)-based research for lawyers—recently secured $12 million in one of the industry’s largest funding rounds. But research is just one area where AI is being used to assist the legal profession.

Others include contract review and due diligence, analytics, prediction, the discovery of evidence, and legal advice. Technology and the law are converging, and where they meet new questions arise about the relative roles of artificial and human agents—and the ethical issues involved in the shift from one to the other. While legal technology has largely focused on the activities of the bar, it challenges us to think about its application to the bench as well. In particular, could AI replace human judges?

Before going any further, we should distinguish algorithms from Artificial Intelligence. In simple terms, algorithms are self-contained instructions, and are already being applied in judicial decision-making. In New Jersey, for example, the Public Safety Assessment algorithm supplements the decisions made by judges over bail by using data to determine the risk of granting bail to a defendant. The idea is to assist judges in being more objective, and increase access to justice by reducing the costs associated with complicated manual bail assessments.
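Risk-assessment tools of this kind are typically transparent scoring rules rather than learned models: a handful of weighted factors summed into a score that a judge can review. The sketch below is purely illustrative—the factors, weights, and bands are invented, not the actual Public Safety Assessment formula.

```python
# Illustrative sketch of a rule-based pretrial risk score.
# The factors and weights here are hypothetical, NOT the real
# Public Safety Assessment formula.

def risk_score(prior_convictions: int, failed_appearances: int,
               pending_charge: bool) -> int:
    """Sum weighted factors into a simple integer risk score."""
    score = 0
    score += min(prior_convictions, 3)       # cap the contribution of priors
    score += 2 * min(failed_appearances, 2)  # missed court dates weigh more
    score += 1 if pending_charge else 0
    return score

def risk_band(score: int) -> str:
    """Map a raw score onto a band a judge might review."""
    return "low" if score <= 2 else "moderate" if score <= 4 else "high"

print(risk_band(risk_score(prior_convictions=0,
                           failed_appearances=0,
                           pending_charge=False)))  # low
```

The point of such self-contained instructions is exactly that a human can inspect every weight—which is what distinguishes them from the learned systems discussed next.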

AI is more difficult to define. People often conflate it with machine learning: the ability of a machine to analyze patterns in data and processes so that it can handle new data without being explicitly programmed. Deeper machine learning techniques can take in enormous amounts of data, tapping into neural networks to simulate human decision-making. AI subsumes machine learning, but the term is also sometimes used to describe a futuristic machine super-intelligence far beyond our own.
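A minimal illustration of "learning from data without being explicitly programmed" is a nearest-neighbour classifier: it labels a new point by the closest example it has seen, with no hand-written rules for the classes themselves. The examples and labels below are invented for illustration.

```python
# Minimal illustration of 'learning from data': a 1-nearest-neighbour
# classifier. The training examples and labels are invented.

import math

def nearest_neighbour(train, point):
    """train: list of ((x, y), label); return label of the closest example."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    _, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

examples = [((0.0, 0.0), "granted"), ((0.1, 0.2), "granted"),
            ((1.0, 1.0), "denied"),  ((0.9, 1.1), "denied")]

print(nearest_neighbour(examples, (0.2, 0.1)))  # granted
print(nearest_neighbour(examples, (0.8, 0.9)))  # denied
```

Nothing in the code says what "granted" or "denied" mean—the pattern comes entirely from the data, which is also why the biases of that data matter so much.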

The idea of AI judges raises important ethical issues around bias and autonomy. AI programs may incorporate the biases of their programmers and of the humans they interact with. For example, Tay, a Microsoft AI chatbot on Twitter, became racist, sexist, and anti-Semitic within 24 hours of interactive learning with its human audience. But while such programs may replicate existing human biases, the distinguishing feature of AI over a simple algorithm is that it can behave in surprising and unintended ways as it ‘learns.’ Eradicating bias therefore becomes even more difficult, though not impossible. Any AI judging program would need to account for, and be tested for, these biases.

Giving AI decision-making powers over human cases also raises a fundamental issue of autonomy. In 1976, the German-American computer scientist Joseph Weizenbaum argued against replacing humans in positions of respect and care, and specifically mentioned judges. He argued that doing so would threaten human dignity and lead to alienation and devaluation.

Appealing to rationality, the counter-argument is that human judges are already biased, and that AI can be used to expose and reduce those biases. Yet suspicions about AI judges remain, and are already enough of a concern to lead the European Union to promulgate the General Data Protection Regulation, which becomes effective in 2018. The Regulation contains “the right not to be subject to a decision based solely on automated processing”.

In any case, could an AI judge actually do what human judges claim to do? If AI can correctly identify patterns in judicial decision-making, it might be better at using precedent to decide or predict cases. For example, an AI judge recently developed by computer scientists at University College London drew on extensive data from 584 cases before the European Court of Human Rights (ECHR).

The AI judge was able to analyze existing case law and deliver the same verdict as the ECHR 79 per cent of the time, and its analysis suggested that the ECHR’s judgments actually depended more on non-legal facts around issues of torture, privacy, fair trials and degrading treatment than on legal arguments. This is an interesting case for legal realists, who focus on what judges actually do rather than what they say they do. If AI can examine the case record and accurately decide cases based on the facts, human judges could be reserved for higher courts where more complex legal questions need to be examined.
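In broad strokes, the UCL system was a text classifier trained on the language of past judgments. The toy sketch below conveys the idea—scoring a new case summary by how often its words appeared in past "violation" versus "no violation" judgments. The training texts are invented examples, not real ECHR data, and the actual system was far more sophisticated.

```python
# Toy outcome predictor: score a new case summary by how often its
# words appeared in past 'violation' vs 'no violation' judgments.
# The training texts are invented, not real ECHR data.

from collections import Counter

past_cases = [
    ("detained without charge and denied access to a lawyer", "violation"),
    ("treatment in custody was degrading and prolonged", "violation"),
    ("trial was public and the applicant was legally represented", "no violation"),
    ("complaint was examined promptly by an independent court", "no violation"),
]

word_counts = {"violation": Counter(), "no violation": Counter()}
for text, outcome in past_cases:
    word_counts[outcome].update(text.split())

def predict(summary: str) -> str:
    """Pick the outcome whose past judgments share more words with the summary."""
    scores = {outcome: sum(counts[w] for w in summary.split())
              for outcome, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(predict("applicant detained and denied a lawyer"))  # violation
```

Notice that such a predictor keys off the factual vocabulary of past cases rather than legal doctrine—which is precisely the legal realists’ point.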

But AI may even be able to examine such questions itself. In the case of positivist judges who separate morality from the law, legal interpretation could be transformed into an algorithmic task according to any given formal method. For example, if we believe that the law is socially constructed and follow the thinking of British legal theorist H. L. A. Hart, then the possibility exists to program both the ‘primary’ and ‘secondary’ rules of any legal system in this way.

Primary rules confer legal rights and duties—telling people, for example, that they cannot murder or steal. Secondary rules recognize, change or adjudicate these primary rules. For example, deep machine learning AI may be able to process how to recognize the sources of the law—like precedent and the constitution—that are relevant in a case.
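To see why Hart’s picture lends itself to programming, the distinction can be sketched directly in code: primary rules as duties, and a secondary ‘rule of recognition’ that checks whether a candidate rule comes from an accepted source. All the names and sources below are invented for illustration.

```python
# Hypothetical sketch of Hart's distinction: primary rules as duties,
# and a secondary 'rule of recognition' that validates their sources.
# All rule names and sources are invented.

RECOGNIZED_SOURCES = {"statute", "precedent", "constitution"}

primary_rules = [
    {"duty": "do not steal", "source": "statute"},
    {"duty": "honour contracts", "source": "precedent"},
    {"duty": "pay the oracle", "source": "custom"},  # not a recognized source
]

def is_valid(rule: dict) -> bool:
    """Secondary rule of recognition: valid iff from a recognized source."""
    return rule["source"] in RECOGNIZED_SOURCES

valid = [r["duty"] for r in primary_rules if is_valid(r)]
print(valid)  # ['do not steal', 'honour contracts']
```

The secondary rule here is just a membership test; the hard part for any real system would be recognizing sources in unstructured legal text, which is where deep machine learning comes in.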

Alternatively, if we think originalists like the late Justice Antonin Scalia are right to say that the correct interpretation of the law is what reasonable people, living at the time of a legal source’s adoption, would have understood as its ordinary meaning, then AI natural language processing could be used to program this method. Natural language processing allows AI to understand and analyze the language that we use to communicate. In the era of voice-recognition software like Siri, Alexa, and Watson, natural language processing is only going to get better.
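In its crudest form, an originalist method could be imagined as a lookup against a historical record of word meanings, keyed by the era of a legal source’s adoption. The dictionary, senses, and dates below are entirely hypothetical—real natural language processing would work from corpora of period texts, not a hand-built table.

```python
# Toy sketch of an 'originalist lookup': resolve a word's meaning from
# a hypothetical historical dictionary keyed by era. All entries invented.

historical_meanings = {
    "commerce": {1787: "trade and exchange of goods",
                 2017: "all economic activity, including services"},
}

def original_meaning(word: str, year_adopted: int) -> str:
    """Return the recorded sense closest in time at or before adoption."""
    senses = historical_meanings[word]
    era = max(y for y in senses if y <= year_adopted)
    return senses[era]

print(original_meaning("commerce", 1800))  # trade and exchange of goods
```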

AI might be able to replicate these formalist jurists’ interpretive methods. More importantly, it might help them to be and remain consistent in their judgments. As the English utilitarian legal theorist Jeremy Bentham once wrote in An Introduction To The Principles of Morals and Legislation, “in principle and in practice, in a right track and in a wrong one, the rarest of all human qualities is consistency.” With the ability to process far more data and variables in the case record than humans could ever do, an AI judge might be able to outstrip a human one in many cases.

Things get trickier in the case of judges who introduce morality into the law—a complicated task because ethical values and the origins of morality are contested. For example, some natural lawyers believe that morality emanates from God, nature, or some other transcendent source. Programming AI with a practical, adjudicative understanding of these divine or divine-like sources in a changing human society is a hugely complex undertaking. Moreover, the surprising and unintended nature of AI ‘learning’ could lead to a distinct line of interpretation, a lex artificialis of sorts.

Even so, AI judges may not solve classical questions of legal validity so much as raise new questions about the role of humans, since—if  we believe that ethics and morality in the law are important—then they necessarily lie, or ought to lie, in the domain of human judgment. In that case, AI may assist or replace humans in lower courts but human judges should retain their place as the final arbiters at the apex of any legal system. In practical terms, if we apply this conclusion to the perspective of American legal theorist Ronald Dworkin, for example, AI could assist with examining the entire breadth and depth of the law, but humans would ultimately choose what they consider a morally-superior interpretation.

Any use of AI over algorithms in legal decision-making will likely progress upwards through the judicial hierarchy. Research bodies like the International Association for Artificial Intelligence and Law (IAAIL) are already exploring AI in legal evidence-gathering and decision-making. The American Judge Richard Posner believes that the immediate use of AI and automation should be restricted to assisting judges in uncovering their own biases and maintaining consistency. However, the increasing use of automation and AI decision-making in the courts will inevitably shape human judicial decision-making along the way. An increased reliance on AI may therefore blur the line between human and AI judging over time.

The sheer pace at which these technologies are developing has led some to call for a complete moratorium in the field so that policy and the courts can catch up. This is perhaps extreme, but it is certainly clear that the issue of AI and the law needs much more concerted attention from policymakers, ethicists and scholars. Organizations like the IAAIL as well as AI regulatory bodies are needed to provide an interface between jurists, ethicists, technologists, government and the public in order to develop rules and guidelines for the appropriate use and ownership of AI in the legal system.

In the 21st century, legal scholars have their work cut out for them in addressing a host of new issues. At the heart of these issues is a hugely challenging question: what does it mean to be human in the age of Artificial Intelligence?

About the author

Ziyaad Bhorat is a South African media and entertainment junkie, technology futurist, and social justice advocate residing in Santa Monica, USA. He was one of the recipients of the Rhodes Scholarship for 2012. 

