Responsible AI in Justice Administration

The launch of SUPACE (Supreme Court Portal for Assistance in Court Efficiency), an artificial intelligence (“AI”) driven portal by the Supreme Court of India, is in line with the confidence reposed by the Chief Justice of India (“CJI”) in these disruptive technologies.1 Last year, on National Constitution Day (November 26), he launched SUVAS (Supreme Court Vidhik Anuvaad Software), an AI-trained software that can translate English into nine vernacular languages.2 As AI is increasingly deployed by judicial systems to supplement the role of judges, these initiatives are a welcome effort to capitalise on the solutions it offers. Further, NITI Aayog’s National Strategy for Artificial Intelligence discusses the various applications of AI and calls for #AIforAll.3

The paper-oriented courts of our country lend a conducive environment for AI and machine learning (“ML”), especially for document handling, reviewing evidentiary documents, and automated technical support that helps litigants fill forms and better understand court proceedings.4 More sophisticated AI/ML technologies allow crime forecasting (probability of recidivism, identification of high-risk individuals allowing for preventive measures, etc.) and prediction of legal outcomes with better accuracy than legal experts. This can bring efficiency and help tackle the huge backlog of cases in our courts, which stands at 3.65 crore as of February 1, 2020.5 However, despite the numerous benefits of AI, there are obstacles that need to be overcome.
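
To make the "crime forecasting" idea concrete, the sketch below trains a simple recidivism-risk classifier on entirely synthetic data. It is a minimal illustration only; the feature names, dataset and threshold are assumptions made for the example, not a description of SUPACE or of any deployed system.

```python
# Minimal sketch: a recidivism-risk classifier on synthetic data.
# All features, labels and thresholds here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: age, number of prior offences, months since last offence
X = np.column_stack([
    rng.integers(18, 70, n),     # age
    rng.poisson(2, n),           # prior offences
    rng.integers(1, 120, n),     # months since last offence
])
# Synthetic label: re-offended within two years (1) or not (0)
y = (rng.random(n) < 0.2 + 0.05 * X[:, 1] - 0.001 * X[:, 2]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model outputs a probability, not a verdict; any use in court
# could only ever be advisory to the judge.
risk = model.predict_proba(X_test)[:, 1]
print("mean predicted risk:", round(float(risk.mean()), 3))
```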

  1. Usurping employment
    McKinsey & Company estimates that around 23% of a lawyer’s job will be automated.6 AI enjoys appeal because of its promise to automate mundane legal processes with better precision. For instance, CaseCrunch, a UK-based startup, held a challenge between expert lawyers and its AI product; the AI outperformed the lawyers with an accuracy of 86.6%, compared to 62.3% for the lawyers.7 ‘LawGeex’, an AI-based system, was able to spot issues in a set of five Non-Disclosure Agreements (NDAs) with an accuracy of 94%, compared to 85% for the lawyers.8 The use of AI has therefore raised concerns of obsolescence, and thereby job loss, among the legal fraternity. However, the CJI has stated that AI is not meant to replace judges but only to assist them, and that the autonomy and discretion of judges shall not be compromised.9
  2. Transparency and accountability
    Comprehending how AI technologies function requires technical expertise; the lack of such expertise causes opacity, what is called the ‘black box’ problem.10 A poor understanding of how an AI has been used in assisting judicial decision-making could raise questions about the accountability of the judiciary and of the algorithm itself. Additionally, if AI works alongside human oversight, fixing accountability on the judge when he/she has deviated from the outcome of the AI is difficult. For instance, a judge may decide to acquit an accused while the AI indicates a high risk of recidivism.11 Such a mismatch would necessitate judges giving cogent reasons for their deviation. Another issue with opacity is that it prevents relevant arguments being built to challenge the outcome of an AI, thereby preventing meaningful judicial review. Measures to rectify algorithmic bias also become difficult to employ in such a scenario.
  3. Algorithmic bias
    The perception that accords objectivity to data ignores that data is the product of a particular context, and therefore carries the underlying biases that mark that context. Algorithms work on the basis of the datasets fed into them; if the data used to train machines is biased, the outcome will be too.12 For instance, in the USA, an AI system used for sentencing has displayed bias against black people.13 Constant updating and checking of algorithms is therefore absolutely necessary.
  4. Automation bias
    Over-reliance on and deference to AI systems is called ‘automation bias’. It robs judicial decision-making of the dynamism that is necessary for the organic development of law. As AI works on data of the past, it is not well equipped to handle a novel situation, and blind reliance on it in such situations is detrimental to the growth of law. Use of AI also creates incentives to use more big data, thereby creating a loop: more data-based AI decisions fuel more use of AI, and so on.14 A common safeguard against the aforementioned concerns is human oversight in using AI,15 which has been emphasised by the Hon’ble CJI as well.16 The General Data Protection Regulation (“GDPR”), under Article 22, prohibits relying on a “..decision based solely on automated processing, which produces legal effects..”. Something similar may be envisaged in the Indian context. Further, human intervention is necessary because of how the common law tradition works, where a body of knowledge develops bottom-up, case by case. Taking into account the peculiarity of each case is, it is argued, something that machines are not capable of doing.17
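
As a rough illustration of what "human oversight" of the Article 22 kind could look like in software, the sketch below keeps the AI output strictly advisory and requires the judge's own decision, with recorded reasons whenever it departs from the recommendation. The class and field names are hypothetical; this is a design sketch, not any court system's actual interface.

```python
# Sketch of a human-in-the-loop wrapper: the model only advises,
# the judge decides, and deviations must carry recorded reasons.
# Names (AdvisoryOutput, risk_score, etc.) are hypothetical.
from dataclasses import dataclass

@dataclass
class AdvisoryOutput:
    risk_score: float    # model's advisory score, 0..1
    explanation: str     # short human-readable rationale

@dataclass
class JudicialDecision:
    outcome: str         # e.g. "acquit", "convict"
    reasons: str         # judge's recorded reasoning

def record_decision(advice: AdvisoryOutput,
                    decision: JudicialDecision,
                    deviation_threshold: float = 0.7) -> dict:
    """Log both the advisory output and the human decision.

    If the judge acquits despite a high advisory risk score, the log
    flags the deviation so that cogent reasons are visibly attached,
    supporting later review.
    """
    deviates = (decision.outcome == "acquit"
                and advice.risk_score >= deviation_threshold)
    if deviates and not decision.reasons.strip():
        raise ValueError("Deviation from advisory output requires recorded reasons")
    return {
        "advisory_risk": advice.risk_score,
        "advisory_explanation": advice.explanation,
        "outcome": decision.outcome,
        "reasons": decision.reasons,
        "deviation_flagged": deviates,
    }

# Example: the judge acquits despite a high advisory score,
# so the entry is flagged and reasons must be recorded.
log = record_decision(
    AdvisoryOutput(0.82, "two prior offences, recent incident"),
    JudicialDecision("acquit", "prosecution evidence inadmissible"),
)
print(log["deviation_flagged"])  # True
```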

    Explainable AI or XAI can also tackle issues of algorithmic bias and transparency. Judicial reasoning can build rules for XAI that are based on fairness and accountability. Less deference must be accorded to an AI when it is based on a black box algorithm.18
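
One simple way to operationalise this kind of explainability, at least for transparent model families, is to inspect which inputs actually drive a prediction; if a sensitive attribute carries most of the weight, that is a red flag for both bias and the degree of deference the output deserves. The sketch below applies permutation importance to a toy logistic regression over synthetic data; the feature names and data are assumptions for illustration only.

```python
# Sketch: checking which features drive a toy risk model, so a reviewer
# can see whether a sensitive attribute (here, a synthetic
# "community_flag") dominates the prediction. Data is entirely made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 1000
feature_names = ["prior_offences", "months_since_last_offence", "community_flag"]

X = np.column_stack([
    rng.poisson(2, n),
    rng.integers(1, 120, n),
    rng.integers(0, 2, n),       # sensitive attribute (synthetic)
])
# Deliberately biased synthetic label: partly driven by the flag.
y = (rng.random(n) < 0.15 + 0.05 * X[:, 0] + 0.2 * X[:, 2]).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
# A large importance for "community_flag" signals that the model leans
# on the sensitive attribute, warranting less deference to its output.
```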

    Both technical and legal solutions are needed for the smooth functioning of AI once the technology is put to actual use. Initiating the process now is important so that vast datasets can be built, allowing better legal contextualisation and categorisation; only then can the full potential of artificial intelligence be unlocked.

    1. Artificial Intelligence Is Useful For Judicial Process But Can’t Replace Humans: Chief Justice of India, BLOOMBERGQUINT (Jan. 24, 2020), https://www.bloombergquint.com/law-and-policy/ai-can-be-used-in-judicial-process-but-cannot-replace-human-discretion-cji.
    2. ‘AI can improve judicial system’s efficiency’ – full text of CJI Bobde’s Constitution Day speech, THEPRINT (Nov. 29, 2019), https://theprint.in/judiciary/ai-can-improve-judicial-systems-efficiency-full-text-of-cji-bobdes-constitution-day-speech/326893/.
    3. National Strategy for Artificial Intelligence, NITI AAYOG (Jun. 2018), https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf.
    4. Artificial Intelligence in Justice and Public Safety, IJIS INSTITUTE (Nov. 21, 2019), https://cdn.ymaws.com/www.ijis.org/resource/collection/93F7DF36-8973-4B78-A190-0E786D87F74F/IJIS_White_Paper_Artificial_Intelligence_FINAL.pdf.
    5. India’s Pending Court Cases on the Rise: In Charts, BLOOMBERGQUINT (Sep. 29, 2020), https://www.bloombergquint.com/law-and-policy/indias-pending-court-cases-on-the-rise-in-charts.
    6. Nick Whitehouse, INSIGHT: The Future of Junior Lawyers Through the AI Looking Glass, BLOOMBERGLAW (Aug. 3, 2020), https://news.bloomberglaw.com/us-law-week/insight-the-future-of-junior-lawyers-through-the-ai-looking-glass.
    7. Rory Cellan-Jones, The robot lawyers are here – and they’re winning, BBC (Nov. 1, 2017), https://www.bbc.com/news/technology-41829534.
    8. Artificial Intelligence in judiciary: Does it really make sense?, ECONOMICTIMES (Jan. 12, 2020), https://government.economictimes.indiatimes.com/news/digital-india/artificial-intelligence-in-judiciary-does-it-really-make-sense/73211365.
    9. Supra n. 1.
    10. Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, STANFORD LAW SCHOOL (2019), https://www-cdn.law.stanford.edu/wp-content/uploads/2019/08/Re-Solow-Niederman_20190808.pdf.
    11. Francesco Contini, Artificial Intelligence: A New Trojan Horse for Undue Influence on Judiciaries, UNODC, https://www.unodc.org/dohadeclaration/en/news/2019/06/artificial-intelligence_-a-new-trojan-horse-for-undue-influence-on-judiciaries.html.
    12. Charles Ciumei QC, Digital Justice and The Use of Algorithms to Predict Litigation Outcomes, https://files.essexcourt.com/wp-content/uploads/2018/10/08152757/Digital-Justice-and-the-Use-of-Algorithms-to-Predict-Outcomes-Tools-and-Challenges-v3.pdf.
    13. Julia Angwin et al., Machine Bias, PROPUBLICA (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
    14. Supra n. 10.
    15. Reuben Binns, Human Judgment in Algorithmic Loops: Individual Justice and Automated Decision Making, SSRN (Sept. 11, 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3452030.
    16. Id.
    17. Id.
    18. Ashley Deeks, The Judicial Demand for Explainable Artificial Intelligence (Aug. 1, 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3440723.
