Introduction
In the current landscape of arbitration law in India, the integration of artificial intelligence (AI) can revolutionise dispute resolution. Chief Justice Bobde said that AI “is a perfect blend of human intelligence and machine learning”.1 Formal integration of AI into Indian arbitration practice is only a matter of time, and such a leading step would go a long way in making the country the arbitration hub it has envisioned.2 Technology can act as a game changer, reducing costs and making arbitration more accessible and affordable. This is in line with the Government’s Vision 2030, which aims to see the arbitration space become a dynamic and amenable dispute resolution platform.3
Despite AI’s growing role in dispute resolution, human involvement is and will remain essential, as affirmed by the current Law Minister, Mr A.R. Meghwal.4 Arbitration involves three key players: the Arbitral Tribunal, counsel, and experts (if any). To cope with the demands of the modern world, it is only natural that all of these players will resort to AI in producing their work. In a recent survey conducted by White & Case5, 77% of respondents were in favour of an arbitrator using AI to calculate damages and costs, 60% were in favour of using AI to draft the procedural parts of the award, while only 23% were in favour of using AI to draft the reasoning of awards. The survey results reaffirm the public belief that justice requires equal parts brain and heart. Keeping in mind the key role of arbitrators in arbitration, the authors of this article aim to discuss the extent to which arbitrators should rely on and use AI without raising ethical concerns. The objective is to create checks and balances within the current scheme of legislation (without any major amendments) for such use, the absence of which could compromise the fairness and accuracy of the process.
Party autonomy: Party autonomy is the backbone of arbitration. The Tribunal’s use of AI should be governed by the limits set by the parties at the initial stage of the arbitration, i.e. under the agreement or at the terms of reference stage, in order to provide transparency and create accountability in the dispute resolution process. The parties should also be free to bar the use of AI tools altogether. Once the limits have been decided, the Tribunal must, while passing the award, issue a declaration confirming the use of AI, disclosing the software or tools used and the extent of such use (e.g. summarising the facts of the case, translating documents, document review, grammar checks, etc.). For example, courts in the USA, in particular the 5th and 6th US Circuit Courts of Appeals6 and the District Courts for the Eastern District of Texas7 and the Northern District of Ohio8, have either proposed that lawyers certify the extent of AI use in a filing, adopted standing orders barring the use of AI, or introduced rules requiring lawyers using AI programs to review and verify computer-generated content.
In the Indian context, a failure to provide such a declaration could be considered a ground for setting aside the award under Section 34(2)(a)(v) of the Arbitration and Conciliation Act, 19969, on the basis that the “arbitral procedure was not in accordance with the agreement of the parties”; alternatively, where the use of AI leads to errors apparent on the face of the award, the award may be set aside on the ground of “patent illegality” under Section 34(2-A).10
The recent “CIArb Guideline on the Use of AI in Arbitration” is one of the leading instruments setting out general recommendations on the use of AI by arbitrators.11 The guideline provides much-needed direction for handling this modern tool and can serve as a yardstick for amendments to the Arbitration and Conciliation Act, 1996.
No delegation of judicial capacity: As Chief Justice B.R. Gavai has said, “technology must complement, not replace, the human mind in judicial decision-making”.12 The human conscience lies at the heart of delivering justice; it would therefore be unfair to delegate such a critical responsibility to technology. However, a tribunal is welcome to take the assistance of technology to perform non-judicial tasks which do not require application of mind. The draftsmen of the Arbitration and Conciliation Act, 1996 in all certainty referred to a natural person acting as an arbitrator when they used the words “person”, “his”, etc. in Chapter III of the Act (dealing with the composition of the Arbitral Tribunal).13 Going a step further, the Act could be amended to be as explicit as French law (Article 1450 of the French Civil Procedure Code), which provides that “only a natural person having full capacity to exercise his or her rights may act as an arbitrator”.14
Currently, a US District Court in LaPaglia v. Valve Corpn. is hearing a challenge against an award on the ground that the arbitrator allegedly “outsourced his adjudicative role to artificial intelligence (AI)”.15 The plaintiff has alleged that the award bears “telltale signs of AI generation” and purportedly cites facts which are “both untrue and not presented at trial or present in the record”. On a cursory check, the plaintiff’s counsel asked ChatGPT whether certain parts of the award were authored by AI, and it stated that “the paragraph’s awkward phrasing, redundancy, incoherence, and overgeneralisation ‘suggest that the passage was generated by AI rather than written by a human’”. The matter is currently sub judice, and it will be interesting to see the court’s final verdict.
Recently, the Silicon Valley Arbitration & Mediation Center16 and the Canadian Judicial Council17 released separate guidelines to raise awareness of the risks of using AI in court administration, both proposing that judicial decision-making should not be delegated to AI.
Apple, one of the world’s largest tech players, published a paper in June 2025 titled “The Illusion of Thinking” on Large Reasoning Models (LRMs), wherein it concluded that such “models were not actually reasoning at all — they were following learned patterns that broke down when confronted with novel challenges”.18 The study exposes the limitations of such models and confirms that there is a serious lack of comprehension and reasoning in this technology.
Authenticity of information: An AI tool relies solely on its input data to reach conclusions. To ensure that the end product is fair, the input information should be free from bias; if the information used to train the AI is biased, the result will inevitably reflect that bias. Since AI has no concept of truth or factual accuracy, a byproduct called “hallucinations” is unavoidable. Hallucinations are incorrect or misleading statements presented as fact. To avoid the end product being plagued by hallucinations, an arbitrator relying on any information delivered or conclusion drawn by AI is responsible for verifying the authenticity and correctness of those results. Recently, an Indian Tax Tribunal withdrew an order citing “inadvertent errors” after it was found that the precedents it relied on were either fake or inaccurate.19
In a leading case before the England and Wales High Court (Pyrrho Investments Ltd. v. MWB Property Ltd.20), the Court was faced with the issue of whether predictive coding could be used by the parties during electronic disclosure, since the original number of documents/files was more than 17.6 million. Taking into account the large number of electronic files, the Court approved the use of predictive coding technology to filter out irrelevant documents and remove duplicates. One of the reasons given by the Court was that there was no evidence to show that the use of such software would lead to a less accurate disclosure. The decision is a landmark in balancing the use of AI in the adjudicative process while ensuring that the information it produces remains authentic. Therefore, with appropriate training and by exercising caution, AI can be leveraged to aid arbitrators in arbitration.
Confidentiality: It is imperative that a tribunal abides by its declaration of confidentiality while using any AI tools. Any and all information uploaded to a tool for processing should be kept confidential in order to protect and safeguard the parties’ privacy, and arbitrators must avoid uploading sensitive documents to unsecured or free AI services. For instance, the American Arbitration Association (AAA), in its “Guidance on Arbitrators’ Use of AI Tools”, 202521, lists several considerations for the use of AI by arbitrators: arbitrators must ensure that their use of AI complies with confidentiality obligations and ethical guidelines. The Tribunal should refrain from using any tools or software that do not assure data protection (of either case details or party names); arbitrators should instead be encouraged to use software that provides robust data security.
Concluding remarks
The T.K. Viswanathan Committee, in its recent “Report of the Expert Committee to Examine the Working of the Arbitration Law and Recommend Reforms”, has recognised the increase in the use of AI in arbitration and concluded that “AI will create highly supportive systems which will remove bottlenecks in the dispute resolution frameworks.… While AI is a tool for ensuring effectiveness and efficiency in arbitration, it cannot be allowed to replace human arbitrators and counsel.”22
In a significant milestone, the Kerala High Court has framed guidelines for the use of AI by members of the judiciary.23 This underscores the pressing need for such guidelines. However, India should address the current regulatory gap and promptly draft and table guidelines on the use of AI in arbitration, especially for arbitrators, at the national level, so that arbitrators are treated uniformly country-wide. Arbitrators are the key decision-makers in a dispute, and a misstep on their part could lead to a miscarriage of justice; overreliance on AI can also adversely affect access to justice. Such guidelines would help prevent the misuse of AI, raise awareness of its usage, define the limits of such usage and map out the repercussions of breaching those limits. An independent review mechanism could also be set up, comprising members with expertise in AI, law and ethics, to ensure compliance with ethical guidelines and promote transparency in the proceedings. These measures can be adopted without a radical shift in the present law and would go a long way in delivering justice without compromise.
*Senior Associate. Author can be reached at: bhavanachandak28@gmail.com.
**Associate. Author can be reached at: upadhyayshambhavi2002@gmail.com.
1. “CJI S.A. Bobde Welcomes AI System to Assist Judges in Legal Research”, ET LegalWorld (legal.economictimes.indiatimes.com, 7-4-2021).
2. Department of Legal Affairs, Ministry of Law and Justice, National Conference on “Institutional Arbitration: An Effective Framework for Dispute Resolution”, Press Release, Press Information Bureau (pib.gov.in, 15-6-2025).
3. “Law Minister Rijiju Pitches for Institutional Arbitration; Says AI can Help Arbitrators”, The Hindu (thehindu.com, 7-3-2023).
4. “AI can’t Replace Humans in Judiciary: Law Minister Arjun Meghwal”, News on Air (newsonair.gov.in, 19-4-2025).
5. 2025 International Arbitration Survey: “The Path Forward: Realities and Opportunities in Arbitration”, White & Case (whitecase.com).
6. Nate Raymond, “US Appellate Judge Calls Bans on AI Use by Lawyers ‘Misplaced’”, Reuters (reuters.com, 5-4-2024).
7. Sara Marken, “Wary Courts Confront AI Pitfalls as 2024 Promises More Disruption”, Reuters (reuters.com, 28-12-2023).
8. Standing Order, United States District Court, Northern District of Ohio Eastern Division (ohnd.uscourts.gov).
9. Arbitration and Conciliation Act, 1996.
10. Arbitration and Conciliation Act, 1996, S. 34 (2-A).
11. Guideline on the Use of AI in Arbitration (2025), CIArb (ciarb.org, 2025).
12. “Technology must Complement, not Replace, Human Mind in Judicial Decision-Making: CJI B.R. Gavai”, Deccan Herald (deccanherald.com, 7-6-2025).
13. Arbitration and Conciliation Act, 1996, Ch. III.
14. Code de Procédure Civile, Art. 1450.
15. LaPaglia v. Valve Corpn., 2025 SCC OnLine Dis Crt US 1.
16. Guidelines on the Use of Artificial Intelligence in Arbitration, Silicon Valley Arbitration & Mediation Center (1st Edn., 2024) (svamc.org, 2024).
17. Guidelines for the Use of Artificial Intelligence in Canadian Courts, Canadian Judicial Council (1st Edn., 2024) (cjc-ccm.ca).
18. Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio and Mehrdad Farajtabar, “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity”, Apple Machine Learning Research (machinelearning.apple.com, June 2025).
19. “Artificial Intelligence in Judiciary ‘must be Approached with Caution’: Justice B.R. Gavai”, The Wire (m.thewire.in, 11-3-2025).
20. Pyrrho Investments Ltd. v. MWB Property Ltd., 2016 SCC OnLine EWHC 24.
21. Guidance on Arbitrators’ Use of AI Tools, American Arbitration Association (go.adr.org, 2025).
22. Dr T.K. Viswanathan Committee Report, Report of the Expert Committee to Examine the Working of the Arbitration Law and Recommend Reforms in the Arbitration and Conciliation Act, 1996 (February 2024).
23. “In a first, Kerala High Court Issues Policy for Use of AI by Judges; Says No AI for Judgments”, Bar & Bench (barandbench.com, 19-7-2025).