Judicial work is, and always has been, synonymous with the application of mind. A Judge, as his core function, marshals the facts and appreciates the evidence presented before him. After applying the law to such facts and evidence, a judgment is written with reasons supporting the conclusion. Reasons are the heart and soul of a judicial order; the absence of reasons makes a decision arbitrary and betrays the most essential feature of judicial determination. The advent of artificial intelligence (AI), despite its several positives, threatens the application of the human mind in passing such reasoned judgments.1 Judiciaries across the world are arriving at an inflection point where judges resort to AI chatbots for their research and drafting.2 It is only a short step from there to using AI to generate judgments.
The Chief Justice of India recently assured that AI will not be used in judicial decision-making, affirming that “independence and human discretion of judges remain central to the Indian justice system”.3 However, the prospect of AI being used by judges looms. Judges may resort to it to reduce their excessive burden and give in to the convenience it provides. When presented with facts and evidence, AI systems can generate a reasoned judgment discussing all such facts and evidence.4 They create efficiency and help oil the judicial machinery, enabling it to dispense faster justice. But this ostensible efficiency creates several problems.
In this article, we discuss the threats that emanate from the use of AI and the slippery slope that may lead judges to use AI systems for decision-making. While it is true that AI brings convenience and efficiency to judicial functions, its use must be guided by human control and accountability. This may appear to be an argument against convenience, but using AI cautiously will ensure that such convenience does not undermine the reasoning and propriety of a judicial decision. When such use is unguided, there is a threat of orders and judgments being passed without sufficient supervision to check the veracity of the legal positions contained in them.
The architecture of AI and its expanding judicial footprint
AI systems work by analysing large amounts of data to identify patterns and relationships.5 Based on these patterns, they produce responses or predictions to new inputs, rather than reasoning in a human, value-based manner.6 As a result, their outputs are shaped by the data on which they are trained and the limits built into their design. Their responses are not parallel to human reasoning;7 they merely mimic it, drawing on whatever information they have about how earlier decisions were taken.
While there are general-purpose AI systems such as ChatGPT, Perplexity, Gemini, and Grok, specific systems have been developed for the legal field. Lucio AI is used by lawyers across the world for legal research and drafting assistance. Harvey AI is specifically targeted at law firms with a transactional practice. Spellbook helps draft contracts, and Kira Systems performs contract review and due diligence for mergers and acquisitions.
Judiciaries across the world, too, have started developing AI systems intended to be used by their judges. Germany has developed OLGA to categorise cases and extract relevant case data.8 Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a system used in some parts of the United States for sentencing guidance.9 In British Columbia, Canada, a system called the “Solution Explorer” guides litigants in the pre-trial phase by estimating their chances of success if they approach the courts.10 China has built a similar system called “Xiao Fa”.11 The most ambitious of them all is Victor, a system being developed by Brazil. Victor is built on a convolutional neural network (CNN), a type of artificial neural network commonly used to analyse images and, in some cases, to process and understand language.12 According to its developers, Victor does not make decisions on its own but merely assists court clerks in sorting and screening appeals.13 However, there is scepticism around this claim, as it is difficult to draw a clear line between human decision-making and the subtle influence that software tools can exert on human reasoning.14
In India, Phase III of the E-Courts Project has included an AI component to enhance case management and legal research. It provides for future technological advancement, including AI and blockchain, across the High Courts in India up to 2027, with a separate budget outlay.15 It includes intelligent scheduling of cases, case prediction to enable parties to predict the probable outcomes of their cases, natural language processing for legal research and translation of documents, and data security measures.16 This complements existing tools such as Adalat AI, which transcribes court proceedings such as witness depositions and oral arguments.17 Further, an AI-based software tool called Legal Research Analysis Assistant (LegRAA) has been developed to aid judges in legal research and document analysis, but it is yet to be adopted by judges in their routine work.18 Another tool, the Supreme Court Portal for Assistance in Court’s Efficiency (SUPACE), has been created to aid judges by retrieving facts from documents submitted by litigants and providing relevant case laws.19 However, its adoption has been limited: it is used only for criminal matters before the Delhi and Bombay High Courts.
While these initiatives are undoubtedly welcome, they require careful and cautious engagement. There is little dispute that AI can enhance efficiency and accelerate the disposal of cases in trial courts. Yet, this very promise compels a deeper inquiry into its limits. To what extent should AI be permitted to operate within the judicial process? What happens when systems designed for assistance begin to be used beyond their intended scope? And how does the legitimacy of adjudication fare if judges increasingly rely on AI for decision-making rather than independent judicial reasoning? Most importantly, why is human reasoning better than reasoning supplied by AI in drafting orders and judgments? These questions must be answered before AI is routinely used by trial court judges.
Limitations and risks of AI in judicial decision-making
Adjudication in trial courts is fact-heavy and fact-specific. Merely because two cases have similar facts does not mean the result must be the same. A large language model learns from the vast data available on the internet and responds by identifying similarities.20 This can create a problem of generalisation even though the specific facts at hand deserve independent deliberation.21 For example, when it encounters an interlocutory application for temporary injunction that appears similar to another application found on the internet, it may treat the two as identical and suggest the same outcome. Hence, if the application found on the internet was allowed, the model is likely to recommend allowing the subsequent one as well. However, pleadings may be superficially similar while the underlying facts and surrounding circumstances differ materially. Judicial determination requires each factual situation to be assessed independently, without mechanical reliance on precedent or pattern. The use of AI in such contexts therefore risks undermining the application of an independent judicial mind, which is a core foundation of adjudication.
To ensure that each case is decided on its own merits, the documentary evidence produced during trial has to be perused. A significant concern arises when AI systems are expected to extract and evaluate facts from such documents, a task that lies at the very core of a trial Judge’s function. Judges are required to peruse a wide and often unstructured range of documents, ranging from panchanamas to land records maintained in regional languages, such as Pahanis or Adangals in Telugu-speaking States. In many instances, these documents are handwritten, poorly preserved, or contain overwriting, corrections, and interpolations. An AI system may struggle not only to accurately extract data from such records, but also to comprehend their legal significance. Optical character recognition (OCR) using AI is not foolproof and is susceptible to errors arising from document quality, formatting, and language complexity.22 Even where extraction is attempted, a deeper question arises as to whether the system can correctly contextualise those facts in relation to the issue under consideration. Corrections in documents, or explanations offered during oral evidence, such as an investigating officer clarifying the absence of a date in an inquest panchanama, require careful human evaluation. Further, a substantial portion of judicial records in India is not machine-readable.23 The assessment of their probative value necessarily demands independent application of the human mind, which cannot be delegated to AI systems.24
When an order or judgment is generated using an AI system, the Judge is not articulating his reasoning behind the decision; he is merely confirming the reasoning stated by the AI system. This converts the job of a judge from the “application of mind” to produce a well-reasoned result into the “confirmation of output” of the AI. Instead of constructing the reasons, the judge is simply auditing them. If the reasoning is well-articulated, a judge may find it easier to approve it than to make the effort to find and correct its flaws before pronouncing the judgment.25 When this becomes routine, the rigour with which the reasoning is reviewed gradually diminishes. It betrays the essential feature of the judicial function, which is to decide by the application of the human mind.26 This leads to the creation of a “black box”, where one is unable to ascertain why an AI system has produced an order in a particular manner.27 Some have argued that judges should be provided with a dedicated “explainable AI” tool, which also explains the reasons behind its determinations, and that until such time judges should be restrained from delegating their reasoning functions to an AI system.28
As the output of AI takes barely any time, a judge may not dedicate the several hours needed to peruse the evidence with an open mind and discover whether the losing side has any strength in its case. It is often seen that a doubt arising during the dictation of a judgment leads a judge to ponder whether the judgment is on the right track. AI provides no such opportunity, as it cuts short the judge’s effort to take a deep dive into the evidence. It is for this reason that the House of Lords stated in its Select Committee Report that “it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take”.29
When it becomes public knowledge that a judge uses AI systems to produce his orders or judgments, it deals a blow to public trust in the judiciary. It erodes trust in a system in which human reasoning is expected to be applied, and undermines the prestige of the judiciary, for it will seem that the core judicial function has been outsourced to computer software.30 It also opens a new ground of appeal: a wave of appeals may be preferred on the ground that the judgment of the trial court was written not by the Judge himself but by an AI system.
A fundamental ethical concern arises as the process risks diluting accountability and the moral responsibility that lies at the heart of adjudication. The European Union Artificial Intelligence Act, 2024, adopts a risk-based regulatory framework that classifies AI activities into four categories: banned, high-risk, limited risk, and minimal risk.31 Notably, the deployment of AI for legal research, interpretative assistance, and the application of law to facts is designated as “high-risk”. Reflecting the sensitivity of adjudicatory functions, the Act mandates meaningful human oversight wherever AI is used within judicial processes, underscoring that technology may assist but cannot supplant human judgment. There is no such requirement in India, and even if such a requirement is enacted as law or mandated by rules of practice, ensuring human oversight adequate to preserve the application of mind by judges will be difficult.
The Delhi High Court sounded this alarm when it held that “there is no doubt in the mind of the court that, at the present stage of technological development, AI cannot substitute either human intelligence or the humane element in the adjudicatory process. At best, the tool could be utilised for preliminary understanding or for preliminary research, and nothing more.”32
Another serious ethical concern arises when trial courts rely on AI-generated legal research, as such systems may fabricate authorities and reinforce the user’s pre-existing belief rather than accurately stating the law. At times, AI software “hallucinates” case law by creating fictitious judgments that appear plausible and authoritative and presents them confidently to affirm what the user seeks.33 Upon verification on authentic legal databases such as SCC, these cases are found not to exist at all. Such “ghost” case law grossly misrepresents legal principles, and if relied upon by trial courts, may result in orders based on non-existent decisions of the Supreme Court or High Courts. To illustrate this, consider the following prompt entered by the authors into ChatGPT:
Prompt: Give me a case law which guides appointment of receivers and for what purpose a receiver can be appointed?

ChatGPT Reply: Certainly! One of the leading judgments guiding the appointment of receivers under Order XL Rule 1 CPC is: Krishna Kumar Birla v. Rajendra Singh Lodha.

Background: This case involved a dispute over the vast estate of late industrialist Priyamvada Birla. There was a will, and parties were fighting over its validity. One side sought appointment of a receiver to protect the estate from being misused during litigation.

Key Principles Laid Down: The Supreme Court held that: “Appointment of a receiver is a drastic and harsh remedy and should be exercised with extreme caution and only when there is a strong prima facie case and a clear danger of loss or damage to the property in dispute.” The Court summarized that:
In the entire reply above, the only thing ChatGPT got right is the case name and its citation. When we traced the case on SCC OnLine, we found that it does not deal with receivers at all; it is a decision on several legal principles of probate. When ChatGPT is questioned further, it conveniently acknowledges this and backtracks from its earlier response, as seen here:
Prompt: In which paragraph of the Krishna Kumar case was that stated?

ChatGPT Reply: The exact quote you’re referring to — that “appointment of a receiver is a drastic and harsh remedy and should be exercised with extreme caution…” — does not appear verbatim in Krishna Kumar Birla v. Rajendra Singh Lodha (2008) 4 SCC 300. I couldn’t locate that precise language in the available judgment text. In fact, the paragraphs we found from the judgment—115 & 116—deal with the court’s jurisdiction to frame preliminary issues for probate, not directly with the principles around receivership. It seems that your earlier citation is more of a paraphrase or legal commentary summarizing the spirit of the decision rather than a literal excerpt. This kind of phraseology—“drastic remedy,” “strong prima facie case,” and “real danger”—is commonly attributed to judicial reasoning around receivership, but it isn’t present word-for-word in the Supreme Court’s written decision in that case. If you want a literal judicial statement from an opinion, we’d have to turn to another case where those exact words were used. Want me to look up an alternative judgment with that precise phrasing?
When we asked it to look for such an alternative judgment of the Supreme Court, it gave us this response:
ChatGPT Reply: One of the clearest statements on the gravity of appointing a receiver comes from the Supreme Court of India in R.D. Alloys & Anr. v. Ram Narain Garg & Ors. (1998).
However, this judgment was not delivered by the Supreme Court. It is a decision of the Punjab and Haryana High Court.
There have been several instances across the world where courts and lawyers have relied on such non-existent case laws.34 In the year 2024, a trial Judge in Brazil unwittingly endorsed a draft order prepared by a court clerk who had used ChatGPT to generate portions of the reasoning without disclosing this fact.35 Relying on the authorities cited in the draft, the Judge incorporated case law that later turned out to be fictitious and factually incorrect. This led to a wrongful conviction of the accused.36
This not only betrays the trust of litigants but also places a significant burden on appellate and revisional courts to correct errors that should never have arisen. A cautionary example is the widely reported incident in New York, where two lawyers were sanctioned after filing pleadings supported by AI-generated false citations.37 Further, even where AI systems cite real case law, such decisions may be outdated, overruled, or rendered inapplicable by subsequent legal developments.38 The reliability of AI output is directly linked to the scope, depth, and currency of the legal material it is permitted to access. Where an AI system operates under informational or database limitations, it is far more likely to produce defective responses grounded in incorrect or obsolete positions of law, thereby raising profound ethical and institutional concerns for judicial decision-making. The Delhi High Court dismissed a petition as withdrawn after finding that it contained several fake case laws and quoted text that did not exist in the cited decisions.39
A dilemma arises when trial courts rely on AI systems trained on skewed legal datasets, as such dependence may distort judicial discretion and outcomes. Most of the legal material accessible to AI systems consists of judgments of the Supreme Court and the High Courts in India, since these decisions are readily available online. However, the matters that reach higher courts are themselves not representative of all judicial outcomes.40 For instance, an order granting bail at the trial court level is seldom challenged, whereas an order refusing bail is far more likely to be carried in appeal or revision. As a result, higher court decisions disproportionately contain discussions on when and why bail should be granted and the parameters governing such grant. When a trial court Judge relies on AI assistance to draft or decide bail applications, the system, which learns from Supreme Court and High Court orders, may incline towards allowing bail rather than refusing it. While this tendency may appear to favour the constitutional value of personal liberty, the same bias can become problematic in other contexts, such as where special legislations govern the grant of bail in offences relating to narcotic drugs, or where police seek custody of an accused; there, mechanically allowing petitions without human application of mind may itself amount to an affront to liberty and public interest.
A grave concern arises when AI systems internalise and reproduce deep-seated societal prejudices, thereby threatening the constitutional promise of equality before law. A decision by the Supreme Court of Wisconsin in the United States has often been cited to show how algorithmic decision-making, when trained on socially skewed data, can amplify existing inequalities under the guise of objectivity.41 In State of Wisconsin v. Loomis (2016), the Supreme Court of Wisconsin upheld the use of a risk-assessment algorithm, called COMPAS, at the sentencing stage to evaluate the likelihood of recidivism. While approving its use, the Court expressly cautioned trial courts that such tools are susceptible to bias against marginalised communities, particularly the Black population in the United States.42 Empirical studies revealed that Black individuals were classified as likely to reoffend at nearly twice the rate of white individuals by software such as COMPAS.43 A similar danger exists in the Indian context, where AI systems, if trained on data reflecting entrenched social hierarchies, may unfairly disadvantage lower castes and religious minorities, thereby institutionalising discrimination through automated judicial assistance.44
Such pattern recognition may also be used by an AI system to detect whether a judge has any particular inclination, helping litigants and lawyers with forum shopping.45 An AI system may read the judgments delivered by a Judge of the Supreme Court or a High Court and label the Judge “pro-landlord” or “pro-tenant”. This leads to predictive analysis, where lawyers aim to get their matter listed before a judge likely to be favourable to their case. Software such as Lex Machina46 in the United States or Solomonic47 in the United Kingdom has already started typecasting judges on such lines. When such a label is attached to a judge, it erodes public confidence in the objectivity of the justice system.
When AI systems are used regularly, they may diminish the judicial skills of judges, honed through training at Judicial Academies and through years of experience. This leads to “cognitive offloading”, where judges surrender their judicial skills to AI, which may diminish their capacity for critical thinking.48 AI-drafted judgments are made to look good: they seem more structured and nuanced, which improves their readability and may make them easier to understand even for non-lawyers. However, this polish may mask the absence of actual application of the judicial mind. Judges may start paying less attention to the oral arguments of counsel, as the pleadings and evidence would suffice for AI to generate a judgment. Ultimately, the judgments of all judges will look alike and become standardised, undermining the diversity of style that each Judge brings. This homogenised language erodes judicial individuality, something that makes us appreciate the writings of stalwart Judges such as Justice Krishna Iyer or Justice K.T. Thomas, who are well known for their linguistic flair and analytical articulation. While it is not desirable for a trial court Judge to draft judgments in complicated language, it is also not prudent to allow all judgments to be standardised, as this may suppress the organic reasoning a judge articulates when he writes the judgment himself.49
At a moral level, judges would be delegating the job they are entrusted with to computer software, and this may have significant repercussions in the long run.50 A litigant may question whether his case was adjudged by a human judge or simply processed by software. The elements of equity, beyond the concept of strict justice, will become blurred. The danger of AI in judging is not that machines will replace judges, but that judges may slowly begin to think like machines as they optimise, standardise, and minimise risk at the cost of conscience.51 AI may increase efficiency as a managerial value, but justice is a moral enterprise which needs to be preserved.
Judicial reasoning is not merely the mechanical application of rules to facts. It is a deliberative process shaped by constitutional values, moral intuition, and institutional responsibility.52 A human judge does not simply match fact patterns to precedents but evaluates them through empathy, proportionality, and an awareness of consequences. This distinction becomes evident in concrete situations. Consider a bail application in a criminal case. Two accused persons may face identical charges under the same statutory provision. An AI system trained on past data may detect that bail is frequently denied in such offences and suggest refusal.53 A human judge, however, may notice that one accused is a first-time offender, the sole caregiver of an ailing parent, and has cooperated with investigation. The Judge weighs liberty under Article 21, the absence of flight risk, and the proportionality of continued incarceration. What changes the outcome is not statistical similarity but contextual humanity.
Even in sentencing, where structured guidelines exist, human reasoning plays a transformative role. Two offenders convicted of the same offence may differ radically in remorse, background, and prospects of reform. A judge may perceive genuine contrition in an apology, or insincerity masked by rehearsed words. AI can analyse language patterns, but it cannot experience moral resonance or detect authenticity in the human sense.54
In a civil injunction matter, a machine might treat delay in filing as a decisive factor against relief. A judge may instead examine why the delay occurred: illiteracy, rural disadvantage, misinformation, or economic vulnerability. Equity often demands sensitivity to structural inequality, something that cannot be statistically inferred with moral certainty. Similarly, in a family dispute involving custody, AI may identify common judicial language about the “best interest of the child”. Yet the lived assessment of a judge which involves observing the child’s interaction with each parent, perceiving emotional cues in court, sensing manipulation or genuine affection cannot be reduced to textual inputs. Judicial reasoning here is experiential, not algorithmic.
AI operates through probabilistic pattern recognition. It can organise, summarise, and correlate data, but it does not understand suffering, intention, remorse, fear, or dignity. It cannot experience doubt. Yet doubt is central to judging: the hesitation before conviction, the second reading of evidence before dismissal, the conscious discomfort that leads to re-examination of a harsh conclusion.55 Human reasoning permits reconsideration, while algorithmic output tends toward premature finality.56
The courts derive authority not merely from efficiency or consistency, but from the assurance that decisions are made by accountable human minds exercising independent judgment. In a constitutional democracy, judging is not optimisation, it is responsibility. Responsibility presupposes conscience. Conscience cannot be coded.
Prudence in practice: The responsible use of AI in trial courts
AI, if carefully confined to a purely assistive role, can be legitimately used by trial court Judges to support the process of adjudication without intruding into decision-making.57 AI may be employed for tasks such as transcribing evidence, summarising pleadings and depositions, organising case records, translating documents, and rectifying spelling and grammatical mistakes in the orders drafted by the judges. Used in this manner, AI functions as an advanced clerical and research aid which reduces mechanical workload, saves judicial time, and improves administrative efficiency, while the core judicial functions of appreciating evidence, weighing competing arguments, applying discretion, and delivering reasoned conclusions remain firmly and exclusively with the judge.58 In short, AI may assist the judge’s hands, but must never substitute the judge’s mind.
Institutions across the world have reacted to the advent of AI by framing guidelines to define their approach. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has issued a document containing fifteen principles regarding the use of AI in courts.59 They emphasise that AI in courts must remain human-centred, rights-respecting, transparent, and accountable, with strong safeguards against bias, discrimination, and opacity.60 In the context of trial courts, these principles require that AI be used only as an assistive tool, with judges retaining full control, responsibility, and discretion over every judicial decision, while ensuring fairness, explainability, and protection of human dignity.61 Similarly, the Council of Europe, through its European Commission for the Efficiency of Justice (CEPEJ), has adopted an “Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment”, which stipulates that AI must remain strictly subordinate to human judges and must never undermine judicial independence or discretion.62 It mandates transparency, non-discrimination, and effective human control, affirming that the act of judging is a human function that cannot be delegated to algorithms. France, for its part, has enacted a law specifically prohibiting the use of AI to predict the decisions of an individual judge based on his past judgments.63
In India, the Kerala High Court has formulated a “Policy Regarding Use of Artificial Intelligence Tools in District Judiciary”.64 At the outset, it distinguishes between “AI tools” and “approved AI tools”, the latter being tools which have “undergone formal evaluation and received approval from the Kerala High Court or the Supreme Court of India for use at the District Judiciary”.65 It lays down unequivocally that “AI tools shall not be used to arrive at any findings, reliefs, order or judgment under any circumstances as the responsibility for the content and integrity of the judicial order, judgment or any part thereof lies fully with the Judges”.66 It warns judicial officers not to compromise transparency, fairness, accountability, and confidentiality, which are integral to the judiciary.67 Hence, specific information regarding litigants and their disputes should not be uploaded to any AI tool, as the tool may retain and reuse this information, resulting in a breach of privacy.68 Further, it recognises that AI produces “erroneous, incomplete, or biased results”, and extreme caution must be exercised in relying on the citations and legal positions suggested by such tools.69 It encourages judicial officers to participate in training programmes organised by the High Court or the Judicial Academy to understand the ethical, legal, technical, and practical aspects of AI.70
Although some have objected that the Kerala High Court’s policy applies only to the district judiciary, exempting the High Court itself,71 a policy of this nature is indeed a welcome move. It is only fitting that other High Courts across the country frame such policies and give guidance to their respective judicial officers.
Drawing the red line: Preserving judicial integrity in the age of AI
The judiciaries across the States in India, headed by their High Courts and guided by the Supreme Court, must deal with the use of AI proactively. Akin to the policy of the Kerala High Court, other High Courts must frame policies and guidelines for their judges. A clear distinction must be drawn between permissible and impermissible uses of AI, and it must be made abundantly clear that AI is not to be used as a replacement for the human application of mind. This “red line” must be drawn to protect judicial integrity and accountability in the orders passed and judgments pronounced. AI tools are merely assistive, and a “human-in-the-loop” principle must be followed by all judicial officers so that they apply their minds independently. Predictive analytics must be regulated to avoid any likelihood of forum shopping. It may, however, be used to predict an outcome which nudges the parties towards a mediated settlement.
A National Judicial AI Oversight Committee, or a sub-committee under the umbrella of the e-Committee, may be created by the Supreme Court to oversee and guide the use of AI tools in the country. This Committee can formulate policies, conduct seminars, and prescribe sensitisation and skill-enhancement training for all judicial officers in the country. Such a unified national approach will avoid fragmented measures at the State level.72 The High Courts and the State Judicial Academies need to design training programs that improve judges’ ability to use AI tools and demonstrate both good and bad uses of them. A hands-on approach can be adopted in such programs to show the pitfalls of AI tools, especially in legal research. It is also important that ethical values be inculcated in judges as a part of AI literacy.
The judiciaries across the country must aim to prevent judges from becoming dependent on AI tools. Judgments written by trial court Judges must be regularly reviewed to ensure that they contain no fictitious case law and that AI tools have not been used to generate their reasoning. Ultimately, the judicial responsibility for every order passed and judgment written must rest solely with the judge; it cannot be shifted to any AI tool, however efficient that tool may be.
AI can modernise court administration, but adjudication is not an administrative function; it is a constitutional trust. The Supreme Court and High Courts must therefore act not merely as adopters of technology, but as guardians of the human character of justice, ensuring that AI remains a tool of assistance, not an instrument of substitution.
*Chief Judge, City Civil Courts, Hyderabad. He graduated with a BA LLB (Hons) from NALSAR University of Law in 2003.
**Civil Judge and Judicial Magistrate, Andole at Jogipet, Sangareddy District. He graduated with a BA LLB (Hons) from NALSAR University of Law in 2015.
1. William Howard Newman, “JudgeGPT: The Benefits and Challenges of an AI Judiciary”, available at <https://www.americanbar.org/groups/litigation/resources/litigation-news/2025/fall/judgegpt-benefits-challenges-ai-judiciary/>.
2. Jumpei Komoda, “Designing AI for Courts” (2023) 29 Richmond Journal of Law & Technology.
3. “Supreme Court Assures Judges Exercising ‘Utmost Caution’ on use of AI in Judicial Process”, available at <https://www.newsonair.gov.in/supreme-court-assures-judges-exercisingutmost-caution-on-use-of-ai-in-judicial-process/>.
4. Kevin D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge University Press, 2017).
5. Organisation for Economic Co-operation and Development, Artificial Intelligence in Society (OECD Publishing, 2019) available at <https://www.oecd.org/content/dam/oecd/en/publications/reports/2019/06/artificial-intelligence-in-society_c0054fa1/eedfee77-en.pdf>.
6. “AI in the Courts: Judicial Decision-Making: Transparency, Accountability, and the Judicial Role”, available at <https://justicespeakersinstitute.com/ai-in-judicial-decision-making-transparency-ethics/>.
7. European Union, “Ethics Guidelines for Trustworthy AI” (2019) available at <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai>.
8. Federal Ministry of Justice and Consumer Protection (Germany), Use of Artificial Intelligence in the Judiciary — OLGA Project, as discussed in Council of Europe materials on judicial AI tools.
9. Julia Dressel and Hany Farid, “The Accuracy, Fairness, and Limits of Predicting Recidivism”, Science Advances (2018) available at <https://www.science.org/doi/10.1126/sciadv.aao5580>.
10. Shannon Salter, “What is the Solution Explorer?”, Canadian Bar Ass’n: Bartalk (cbabc.org).
11. Cao Yin, “Courts Embrace AI to Improve Efficiency”, China Daily, available at <http://www.chinadaily.com.cn/china/2017-11/16/content_34595221.htm>.
12. Stanford University’s Convolutional Neural Networks cheatsheet, available at <https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks/>.
13. Council of Europe — CEPEJ, Artificial Intelligence and Judicial Systems: Survey of National Practices (2020), discussing the VICTOR system used by the Administrative Court of Appeal of Paris, available at: <https://rm.coe.int/cepej-artificial-intelligence-survey/16809e7d2f>.
14. Daniel Becker and Isabela Ferrari, “VICTOR, the Brazilian Supreme Court’s Artificial Intelligence: A Beauty or a Beast?”, available at <https://cloud-platform-e218f50a4812967ba1215eaecede923f.s3.amazonaws.com/uploads/sites/25/2024/03/Victor-Beauty-or-the-Beast.pdf>.
15. Press Release, Ministry of Law and Justice, Use of AI in Judicial Processes, Press Information Bureau, available at <https://www.pib.gov.in/PressReleasePage.aspx?PRID=2100323&reg=3&lang=2>.
16. “AI in Judicial Processes: Transforming India’s Legal System”, available at <https://indiaai.gov.in/article/ai-in-judicial-processes-transforming-india-s-legal-system>.
17. “E-Research Library with AI Tools to Assist Lawyers”, The Times of India (10-7-2025) available at <https://timesofindia.indiatimes.com/city/delhi/e-research-library-with-ai-tools-to-assist-lawyers/articleshow/122348095.cms>.
18. Press Release, Ministry of Law and Justice, Digitalization of Publicly Available Records, Press Information Bureau, available at <https://www.pib.gov.in/PressReleasePage.aspx?PRID=2198646&reg=3&lang=2>.
19. Advaith Sri Krishna Datta Mamidanna, “Artificial Intelligence in the Indian Judiciary SUPACE, SUVAS, and the Limits of Assistive Automation”, available at <https://spmrf.org/artificial-intelligence-in-the-indian-judiciary-supace-suvas-and-the-limits-of-assistive-automation/>.
20. IBM Research, “What are Large Language Models (LLMs)?”, available at <https://www.ibm.com/think/topics/large-language-models>.
21. “AI in the Courts: Judicial Decision-Making: Transparency, Accountability, and the Judicial Role”, available at <https://justicespeakersinstitute.com/ai-in-judicial-decision-making-transparency-ethics/>.
22. Adobe, “What is Optical Character Recognition (OCR)?”, available at <https://www.adobe.com/acrobat/resources/what-is-ocr.html>.
23. Jyotsana Bhatnagar, “Judiciary’s Largest Project Faces Implementation Hurdles”, available at <https://www.financialexpress.com/india-news/judiciarys-largest-project-faces-implementation-hurdles/3983280/>.
24. Bhavya Singh, “Artificial Intelligence May Not Be Fruitful In Writing Judgments, Human Element Critical: Patna High Court CJ Vinod Chandran”, available at <https://www.livelaw.in/high-court/patna-high-court/patna-high-court-cj-vinod-chandran-ai-to-write-judgments-268789>.
25. Amin Ebrahimi Afrouzi, “John Robots, Thurgood Martian, and the Syntax Monster: A New Argument Against AI Judges” (2024) 37(2) Canadian Journal of Law & Jurisprudence 369-396, available at <https://doi.org/10.1017/cjlj.2024.17>.
26. Dr A. Sreelatha and Dr Gyandeep Choudhary, “Exploring The Use of AI In Legal Decision Making: Benefits and Ethical Implications”, available at <https://woxsen.edu.in/research/white-papers/exploring-the-use-of-ai-in-legal-decision-making-benefits-and-ethical-implications/>.
27. Jumpei Komoda, “Designing AI for Courts” (2023) 29 Richmond Journal of Law & Technology.
28. Ashley Deeks, “The Judicial Demand for Explainable Artificial Intelligence” (2019) 119 Columbia Law Review 1829, 1833-34.
29. House of Lords Artificial Intelligence Select Committee, AI in the UK: Ready, Willing and Able?, Report of Session 2017-2019 (2019) 128.
30. Benjamin Minhao Chen, Alexander Stremitzer & Kevin Tobia, “Having Your Day in Robot Court” (2022) 36(1) Harvard Journal of Law & Technology.
31. European Parliament, “EU AI Act: First Regulation on Artificial Intelligence”, available at <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/artificial-intelligence-act-eu-rules-for-trustworthy-ai>.
32. Christian Louboutin Sas v. Shoe Boutique, 2023 SCC OnLine Del 5295 : “AI Can’t Replace Human Element: HC”, The Times of India, 28-08-2023, available at <https://timesofindia.indiatimes.com/city/delhi/ai-cant-replace-human-element-hc/articleshow/103113559.cms>.
33. Kathryn Armstrong, “ChatGPT: US Lawyer Admits Using AI for Case Research”, BBC News, available at <https://www.bbc.com/news/world-us-canada-65735769>.
34. “AI is Creating Fake Legal Cases and Making its Way into Real Courtrooms, with Disastrous Results”, The Conversation, available at <https://theconversation.com/ai-is-creating-fake-legal-cases-and-making-its-way-into-real-courtrooms-with-disastrous-results-225080>.
35. “Uncovering the Controversial Case of AI Sentencing in Brazil”, available at <https://www.toolify.ai/ai-news/uncovering-the-controversial-case-of-ai-sentencing-in-brazil-884517>.
36. Brazil in AI Atlas, Oxford Institute of Technology and Justice, available at <https://www.techandjustice.bsg.ox.ac.uk/research/brazil>.
37. “Judge Sanctions Lawyers for Brief Written by A.I. with Fake Citations”, available at <https://www.cnbc.com/2023/06/22/judge-sanctions-lawyers-whose-ai-written-filing-contained-fake-citations.html>.
38. “AI on Trial: Legal Models Hallucinate in 1 Out of 6 (or More) Benchmarking Queries”, Stanford, available at <https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries>.
39. “Delhi HC Junks Plea Crafted by ChatGPT with Fake Quotes & Cases. What it Said, Pulling Up Erring Lawyer”, The Print, available at <https://theprint.in/judiciary/delhi-hc-junks-plea-crafted-by-chatgpt-with-fake-quotes-cases-what-it-said-pulling-up-erring-lawyer/2751518/>.
40. The Punjab & Haryana High Court used ChatGPT to seek a response in a bail matter, which created controversy. However, the High Court noted that AI was used only to present a broader picture of bail jurisprudence. “Punjab and Haryana High Court Seeks ChatGPT’s Response in a Bail Plea Case”, The Hindu, available at <https://www.thehindu.com/news/national/other-states/punjab-and-haryana-high-court-seeks-chatgpts-response-in-a-bail-plea-case/article66667227.ece>.
41. Robert Buckland, “AI, Judges and Judgement: Setting the Scene”, Harvard Kennedy School.
42. 881 NW 2d 749 (Wis. 2016).
43. “State v. Loomis” (2017) 130 Harvard Law Review 1530, available at <https://harvardlawreview.org/print/vol-130/state-v-loomis/>.
44. “AI in the Courtroom: Opportunities and Risks”, available at <https://justicespeakersinstitute.com/ai-in-the-courtroom-benefits-and-risks/>.
45. “AI Tells Lawyers How Judges are Likely to Rule”, available at <https://www.axios.com/2023/09/12/ai-judges-trials-predictions>.
46. “Actionable Intelligence”, available at <https://www.lexisnexis.com/en-us/products/lex-machina.page>.
47. “Data Powered Litigation Intelligence”, available at <https://www.solomonic.co.uk/>.
48. Michael Gerlich, “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” (2025) 15 Societies 6, available at <https://doi.org/10.3390/soc15010006>.
49. Richard M. Re, “Artificial Authorship and Judicial Opinions” (2024) 92 The George Washington Law Review 1558.
50. Shilun Zhou, “Analyzing the Justification for Using Generative AI Technology to Generate Judgments Based on the Virtue Jurisprudence Theory”, available at <https://www.tandfonline.com/doi/full/10.1080/12460125.2024.2428999#d1e476>.
51. Zichun Xu, “Human Judges in the Era of Artificial Intelligence: Challenges and Opportunities” (2022) 36(1) Applied Artificial Intelligence An International Journal 2013652, available at <https://doi.org/10.1080/08839514.2021.2013652>.
52. Minul Muhamdiramge, “Can (& Should) AI Make Decisions?”, 3-10-2024, available at <https://www.trialview.com/resources/can-and-should-ai-make-decisions>.
53. Philip Demetriou, “Judgment Day: The Rise of Artificial Intelligence in Dispute Resolution”, available at <https://csdisco.com/blog/judgment-day-the-rise-of-artificial-intelligence-in-dispute-resolution>.
54. Ryan Zahrai, “The Role of Empathy in Legal Practise: Why it Matters in Courtrooms and Client Relationships”, available at <https://zed.law/blog/vugutqtvfahbhwdpqwf4blibjneycv>.
55. Justin Snyder, “RoboCourt: How Artificial Intelligence Can Help Pro Se Litigants and Create a ‘Fairer’ Judiciary”, (2022) 10 Indiana Journal of Law and Social Equality 200.
56. Andrew Likierman, “Good News for Humans: AI Doesn’t Do Judgement”, available at <https://www.london.edu/think/good-news-for-human-beings-ai-doesnt-do-judgement>.
57. Eugene Volokh, “Chief Justice Robots” (2019) 68 Duke Law Journal 1135.
58. Gary E. Marchant, “AI in Robes: Courts, Judges, and Artificial Intelligence”, Article 2 (2024) 50(3) Ohio Northern University Law Review.
59. UNESCO, Guidelines for the Use of AI Systems in Courts and Tribunals, available at <https://www.unesco.org/en/articles/guidelines-use-ai-systems-courts-and-tribunals>.
60. UNESCO, “Safeguarding Human Rights and Judicial Independence in the Age of Algorithmic Justice”, available at <https://www.unesco.org/en/articles/safeguarding-human-rights-and-judicial-independence-age-algorithmic-justice>.
61. UNESCO, “Guidelines for the Use of AI systems in Courts and Tribunals”, available at <https://www.unesco.org/en/articles/guidelines-use-ai-systems-courts-and-tribunals>.
62. Council of Europe, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment (CEPEJ, 2018) available at <https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c>.
63. LOI n° 2019-222 du 23 mars 2019 de Programmation 2018-2022 et de Réforme Pour la Justice (1), available at <https://www.legifrance.gouv.fr/loda/id/JORFTEXT000038261631/>.
64. Implementation of Policy Regarding Use of Artificial Intelligence Tools in the District Judiciary, HCKL/7490/2025-DI-3-HC KERALA (19-7-2025) available at <https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf>.
65. Implementation of Policy Regarding Use of Artificial Intelligence Tools in the District Judiciary, HCKL/7490/2025-DI-3-HC KERALA, Pt. 3.4 (19-7-2025) available at <https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf>.
66. Implementation of Policy Regarding Use of Artificial Intelligence Tools in the District Judiciary, HCKL/7490/2025-DI-3-HC KERALA, Pt. 4.6 (19-7-2025) available at <https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf>.
67. Implementation of Policy Regarding Use of Artificial Intelligence Tools in the District Judiciary, HCKL/7490/2025-DI-3-HC KERALA, Pt. 4.1 (19-7-2025) available at <https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf>.
68. Implementation of Policy Regarding Use of Artificial Intelligence Tools in the District Judiciary, HCKL/7490/2025-DI-3-HC KERALA, Pt. 4.2 (19-7-2025) available at <https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf>.
69. Implementation of Policy Regarding Use of Artificial Intelligence Tools in the District Judiciary, HCKL/7490/2025-DI-3-HC KERALA, Pt. 4.3 (19-7-2025) available at <https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf>.
70. Implementation of Policy Regarding Use of Artificial Intelligence Tools in the District Judiciary, HCKL/7490/2025-DI-3-HC KERALA, Pt. 4.9 (19-7-2025) available at <https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf>.
71. Shailraj Jhalnia, “Bench and Bot — The Kerala HC’s AI Guidelines and the Bigger Judicial Puzzle”, available at <https://spicyip.com/2025/08/bench-and-bot-the-kerala-hcs-ai-guidelines-and-the-bigger-judicial-puzzle.html>.
72. Harsh Gour, “What is the Kerala HC’s New AI Policy all About? And What More is Needed?”, available at <https://theleaflet.in/digital-rights/law-and-technology/what-is-the-kerala-hcs-new-ai-policy-all-about-and-what-more-is-needed>.

