The Supreme Court on Friday said judges are very conscious about the risks emanating from the indiscriminate use of Generative Artificial Intelligence (GenAI) in judicial work and would not let robotic systems take over the judicial administration process in the country.
“We are very conscious, in fact, over-conscious. We do not want Artificial Intelligence to overpower the judicial administration process,” Chief Justice of India Surya Kant observed.
The court was hearing a petition filed by Kartikeya Rawal, represented by senior advocate Anupam Lal Das and advocate Abhinav Shrivastava, about the dangers of GenAI, which could even create “hallucinations”, resulting in fictitious judgments, research material and, worse still, work to perpetuate bias.
Allowing the petitioner to withdraw the plea, the court said Mr. Rawal was free to approach the court on the administrative side with suggestions.
When Mr. Das submitted that there were lower court decisions citing non-existent Supreme Court judgments, the CJI said “let that be a lesson to the Bar to verify everything they research on. The judicial officers also have an equal responsibility to verify”. The CJI said judicial officers’ training camps were addressing the problems caused by the advent of AI in the legal field.
“AI will give results that make the searcher happy,” Mr. Das submitted.
Mr. Rawal’s petition sought a strict policy or, at least, guidelines to regulate the transparent, secure and uniform use of GenAI in courts, tribunals and other quasi-judicial bodies until a law was put in place.
The petition warned that the opaque use of AI and Machine Learning technologies in the judicial system and governance would trigger constitutional and human rights concerns. The judiciary, it said, must use only data free from bias, and the ownership of that data must be transparent enough to ensure stakeholders’ liability.
“The skill of Gen AI to leverage advanced neural networks and unsupervised learning to generate new data, uncover hidden patterns and automate complex processes can lead to ‘hallucinations’, resulting in fake case laws, AI bias and lengthy observations… This process of hallucinations would mean that GenAI would not be based on precedents but on a law that might not even exist,” the petition submitted.
The petition said GenAI was capable of producing original content based on prompts or queries. It could create realistic images, generate content such as graphics and text, answer questions, explain complex concepts and convert language into code.
Published – December 05, 2025 06:15 pm IST