
INTRODUCTION
ARTIFICIAL INTELLIGENCE (AI) has brought about a significant shift in the field of law, permanently altering several aspects of the legal industry. AI technologies are changing established processes and enhancing the capabilities of legal professionals in areas ranging from legal research to document analysis.
AI is no longer just a futuristic concept but an integral force already reshaping the landscape of the legal profession, offering efficiency, accuracy, and innovative solutions.
Two decades ago, pioneers like Jay Leib recognized the potential of AI in addressing inefficiencies within the legal profession. Leib's venture into electronic discovery with Discovery Cracker in the early 2000s marked the beginning of a revolution.
"We saw a gap in the market," claims Leib. "Why print on so much paper? For lawyers to stay up to date, they need tools.
Meanwhile, a massive volume of data is being produced. According to Forbes, 2.5 quintillion (2,500,000,000,000,000,000) bytes of data are generated daily, with 90 percent of all existing data created in the last two years. Instead of sifting through mountains of paper, lawyers now deal with terabytes of data and hundreds of thousands of documents. They need a method for sorting through this material so they can present a compelling story. Because of this wealth of data, e-discovery, legal research, and document review have become increasingly sophisticated.
Platforms like LexisNexis, Westlaw, and Bloomberg Law leverage machine learning algorithms to navigate vast repositories of legal documents swiftly and efficiently. This has not only accelerated the research process but has also enhanced the accuracy of information extraction. These tools use natural language processing (NLP) to comprehend legal texts, enabling lawyers to identify relevant cases, statutes, and regulations with unprecedented speed and precision. By providing valuable insights into legal precedent, AI-powered research tools empower lawyers to make informed decisions, significantly reducing the time and resources traditionally invested in exhaustive manual research.
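At its core, such a research tool ranks documents by how well they match a query. The following is a minimal sketch of one possible ranking step, using TF-IDF vectors and cosine similarity; the corpus and query are invented for illustration, and commercial platforms rely on far larger corpora and far more sophisticated models.

```python
# Minimal sketch of NLP-based legal document retrieval using TF-IDF
# and cosine similarity (scikit-learn). The corpus and query below
# are hypothetical examples, not data from any real platform.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Appeal concerning breach of a commercial lease agreement.",
    "Judgment on anticipatory bail in a criminal assault case.",
    "Ruling on trademark infringement and passing off.",
]
query = "bail application in an assault matter"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)   # vectorise the corpus
query_vector = vectorizer.transform([query])     # vectorise the query

# Rank documents by similarity to the query, highest first.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {corpus[idx]}")
```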
Contract analysis, a historically laborious task, has undergone a paradigm shift with the advent of AI. Tools such as Kira Systems and LawGeex utilize NLP algorithms to dissect legal documents, extracting key terms and clauses with remarkable efficiency. This not only expedites the contract review process but also facilitates the identification of differences and similarities between documents, simplifying the creation of new contracts or amendments to existing ones.
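To make the idea of clause extraction concrete, here is a toy sketch that tags clauses using simple keyword patterns. The patterns, function name, and sample contract text are invented for illustration; the actual models inside tools like Kira Systems and LawGeex are proprietary and far more capable.

```python
# Toy sketch of clause identification in a contract, using keyword
# patterns as a crude stand-in for commercial NLP models. All
# patterns and sample text here are hypothetical.
import re

CLAUSE_PATTERNS = {
    "termination":     r"\bterminat(e|ion)\b",
    "indemnity":       r"\bindemnif(y|ication)\b",
    "governing_law":   r"\bgoverned by the laws of\b",
    "confidentiality": r"\bconfidential(ity)?\b",
}

def tag_clauses(contract_text: str) -> dict[str, list[str]]:
    """Return the sentences that match each clause pattern."""
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    found = {label: [] for label in CLAUSE_PATTERNS}
    for sentence in sentences:
        for label, pattern in CLAUSE_PATTERNS.items():
            if re.search(pattern, sentence, flags=re.IGNORECASE):
                found[label].append(sentence.strip())
    return found

sample = ("This Agreement shall be governed by the laws of India. "
          "Either party may terminate this Agreement on 30 days' notice. "
          "Each party shall keep the other's information confidential.")
for label, hits in tag_clauses(sample).items():
    print(label, "->", hits)
```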
AI is projected to be used in international arbitration for a wide range of tasks, including the selection of arbitrators, legal research, drafting and editing of written submissions, document translation, case management, document organization, cost estimation, hearing arrangements (including simultaneous foreign-language interpretation and transcripts), and drafting of standard sections of awards.
AI USES AND ADVANTAGES FOR THE LEGAL SECTOR
1. Legal Research: AI-powered applications like LexisNexis, Westlaw, and Bloomberg Law make legal research easier. Machine learning algorithms analyze legal documents and extract pertinent information, making it possible for lawyers to locate cases, laws, and rules more rapidly and precisely.
2. Contract Analysis: AI makes it easier to analyze legal documents using tools like Kira Systems and LawGeex. NLP algorithms speed up and improve the process by identifying important terms, clauses, and discrepancies.
3. Document Review: AI-driven document review systems swiftly examine large volumes of documents, identifying crucial details such as names, dates, keywords, and possible problems or contradictions, which greatly cuts down on the time and expense of document evaluation.
4. Predictive Analytics: AI-based predictive analytics programmes, such as Blue J Legal and Premonition, apply machine learning algorithms to case law in order to forecast case results, pinpoint risks, and recommend strategy, helping lawyers make well-informed decisions (a minimal sketch of such a classifier follows this list).
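As promised above, here is a minimal sketch of how outcome prediction from case features might look, using logistic regression. The features, training data, and labels are entirely invented for illustration; tools such as Blue J Legal train on large annotated bodies of real case law with much richer feature sets.

```python
# Minimal sketch of case-outcome prediction with logistic regression
# (scikit-learn). All features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per past case:
# [log of claim amount, prior rulings for claimant, jurisdiction code]
X_train = np.array([
    [4.2, 3, 0],
    [5.1, 0, 1],
    [3.8, 2, 0],
    [6.0, 1, 1],
])
y_train = np.array([1, 0, 1, 0])   # 1 = claimant won, 0 = claimant lost

model = LogisticRegression().fit(X_train, y_train)

# Predict the probability that the claimant prevails in a new case.
new_case = np.array([[4.9, 2, 1]])
prob = model.predict_proba(new_case)[0, 1]
print(f"Predicted probability claimant prevails: {prob:.2f}")
```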

CONSTRAINTS ON AI
The four V's of big data (volume, variety, velocity, and veracity) are used to identify these constraints.
VOLUME: ACCESS TO SUFFICIENT DATA
Challenge: For AI models to produce reliable forecasts, the legal industry must have access to enough data. However, in several legal areas, non-parties cannot obtain confidential decisions.
Solution: One potential approach is to collect confidential awards for model-building purposes and disseminate them in redacted form.
VARIETY: THE NEED FOR REPEATABLE PATTERNS WITH BINARY OUTCOMES
Challenge: The diversity of legal rulings may stem not from disparate sources or formats but rather from the topics covered in those rulings. AI models may not be able to solve complex, non-repetitive problems.
Solution: Clearly defined output questions make model-building easier. Nevertheless, handling a variety of non-binary tasks that call for close attention to legal intricacies still presents a barrier.
VELOCITY: THE ISSUE WITH POLICY SHIFTS OVER TIME
Challenge: Legal decisions may be made infrequently, and policy changes over time can render prior data obsolete. AI models trained on historical data may find it difficult to adjust to sudden shifts in policy.
Solution: While machine learning inherently involves continuous algorithmic improvement, policy changes that diverge from historical data may pose difficulties and necessitate conservative modelling approaches.
VERACITY: POTENTIAL FOR PREJUDICE AND DATA DIET DEFICIENCIES
Veracity relates to the precision and reliability of the data used. AI models may inherit biases found in their training data, potentially producing unfair results and systematic errors.
ETHICAL CONSIDERATIONS IN AI
The use of AI in the legal system raises ethical and legal concerns. How, for instance, can we guarantee the accountability and transparency of AI systems? How can bias be eliminated from AI decision-making? In a world where AI is capable of handling legal work, what would be the role of lawyers?
BIAS AND FAIRNESS IN AI DECISIONS
Initially, one might believe that AI models are superior to humans because of their algorithmic objectivity and infallibility, whereas humans are vulnerable to subjectivity and irrationality and will inevitably make mistakes.
For instance, a team of Israeli and American scholars has shed light on the significance of extraneous factors in judicial decision-making by applying their research in the legal field.
Examining over 1,100 rulings made over a 10-month period by Israeli judges, covering 40 percent of the nation's parole requests, the research revealed that most requests are denied on average, but that the likelihood of a decision in favour of the applicant is much higher following the judge's daily meal breaks.
Such findings illustrate how events unrelated to a case's merits, including lunch breaks, can influence human decision-making. A few scholars have therefore concluded that, since computers are impervious to cognitive biases and the undue influence of outside circumstances, AI-based decision-making would be superior to human decision-making.
But it is misguided to treat algorithmic impartiality and infallibility with unquestioning deference. Recent advances in AI research have brought attention to the dangers of biased or misbehaving systems. Any computer model that uses data is only as good as the data it uses. The derived model suffers from vulnerabilities in the data diet. Specifically, it is possible that the underlying data used to train the algorithm had human prejudices. As a result, the algorithm may have been "infected" with these biases and may have even exaggerated them by accepting them as "true" when making decisions or predicting outcomes in the future.
Research has indicated that the application of algorithms to the evaluation of criminal risk in the United States has produced racially biased results.
The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system is widely used in the United States to evaluate the risk of recidivism for defendants. Research on this system revealed that "white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black defendants," while "black defendants were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism".
This may have happened because black offenders are overrepresented in certain crime statistics, leading the model to classify them, incorrectly, as having a greater likelihood of recidivist behaviour on the basis of that pattern.
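The kind of disparity described above can be made concrete by computing misclassification rates per group. The sketch below, with entirely invented data (it is not the COMPAS dataset), shows how false positive rates (non-recidivists labelled high risk) and false negative rates (recidivists labelled low risk) can differ across groups.

```python
# Sketch of measuring group-wise misclassification, in the spirit of
# the COMPAS analysis. All data here are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "reoffended": [0,   0,   1,   1,   0,   0,   1,   1],  # ground truth
    "high_risk":  [1,   0,   1,   1,   0,   0,   1,   0],  # model label
})

for group, sub in df.groupby("group"):
    non_recidivists = sub[sub["reoffended"] == 0]
    recidivists = sub[sub["reoffended"] == 1]
    # False positive rate: non-recidivists wrongly labelled high risk.
    fpr = non_recidivists["high_risk"].mean()
    # False negative rate: recidivists wrongly labelled low risk.
    fnr = 1 - recidivists["high_risk"].mean()
    print(f"group {group}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```

On this toy data, group A has a higher false positive rate than group B even though both groups are the same size, which is precisely the kind of asymmetry the COMPAS research reported.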

CHALLENGES AND OPPORTUNITIES IN THE INDIAN CONTEXT
OPPORTUNITIES:
The incorporation of Artificial Intelligence (AI) technology into the operations of prominent Indian law firms presents a major opportunity for legal innovation. Cyril Amarchand Mangaldas, one of the top full-service law firms in India, has made a groundbreaking move by partnering with the well-known Canadian machine learning software vendor Kira Systems. Through this strategic partnership, Cyril Amarchand Mangaldas becomes the first law firm in India to use this cutting-edge technology, marking a significant development in the legal field.
Kira Systems' software uses artificial intelligence to recognise, evaluate, and extract relevant clauses and other data from a variety of legal documents, including contracts. Its adoption allows the firm to deliver specialised legal services to its clients with greater efficiency, speed, and accuracy, representing a paradigm shift in the way legal services are delivered.
The Jaswinder Singh v. State of Punjab and Anr. (2022) case offered a significant opportunity to integrate AI technologies into the legal system. In that case, the Punjab and Haryana High Court requested assistance from ChatGPT, an AI-powered application, while considering a bail plea related to grave accusations of a violent and fatal assault. The use of AI in legal procedures, particularly in providing data and perspectives on case-related issues, represents noteworthy progress in utilising technology to support the judiciary.
As an AI tool, ChatGPT maintains objectivity and abstains from voicing opinions or rendering judgements. Rather, it serves as a valuable resource, offering details on the particular subjects or inquiries that are directed at it. The use of AI in this context highlights how technology can improve legal research, make it easier to get pertinent information, and help the court get new perspectives.
CHALLENGES: In their subjects, principles, and methodology, traditional conceptualizations of algorithmic fairness appear intrinsically Western-centric. In the Indian setting, socio-economic considerations give rise to data reliability difficulties.
Important data points are frequently missing in India owing to social infrastructures and structural inequities. Digital divides compound this shortfall by causing errors and sustaining residual injustice. Notably, entire populations are either misrepresented or absent from datasets, a critical issue evident in many studies. The startling fact that half of India's population lacks access to the Internet, with women, rural communities, and Adivasis being the main excluded groups, contributes significantly to this data gap. Datasets obtained from internet-connected sources may therefore unintentionally leave out a significant section of the population.
Furthermore, India came relatively late to 4G mobile data, so its data footprint is still extremely limited and biased towards issues facing the upper middle class.
Given the significant disconnect between the deployed models and the underprivileged communities they are intended to assist, merely localising model fairness to India may be a shallow solution.
The recent debate involving Gemini AI's response to a question regarding the political standing of Prime Minister Narendra Modi is an excellent illustration of the complicated relationship that exists in the Indian setting between artificial intelligence, ethical issues, and legal implications. The event, in which Gemini AI said that Mr. Modi was carrying out policies that have been characterised as fascist, has sparked a discussion about the ethical bounds of AI and misinformation associated with it. Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, has adamantly stated that these kinds of comments are against various sections of the criminal code and Rule 3(1)(b) of the IT Rules, 2021. This highlights the difficulties presented by quickly implemented AI models in a politically delicate setting, leading the government to stress the significance of responsible AI practices.
LEGAL FRAMEWORKS FOR AI ETHICS
India currently lacks a comprehensive legal framework for Artificial Intelligence (AI). However, recognising the need for rules and regulations in the rapidly changing field of AI, the Indian government charged NITI Aayog, its premier public policy think tank, with developing guiding principles. The National Strategy for Artificial Intelligence (#AIForAll), published by NITI Aayog in 2018, lays out criteria for AI research and development specific to sectors including smart cities, infrastructure, healthcare, education, and agriculture.
NITI Aayog expanded on this by releasing the "Principles for Responsible AI" in February 2021, which divided ethical considerations into societal and systemic issues. While societal concerns examine the effects of automation on employment and job creation, systemic considerations address principles of decision-making, the proper involvement of beneficiaries, and accountability. Then, in August 2021, NITI Aayog published "Operationalizing Principles for Responsible AI," which highlighted practical steps for the public and private sectors, including capacity building, regulatory and policy interventions, encouraging ethics by design, and developing frameworks that comply with AI standards.
The Digital Personal Data Protection Act of 2023, which also addresses privacy issues with AI platforms, is another example of India's proactive approach to AI regulation.
India is also part of the Global Partnership on Artificial Intelligence (GPAI) on the international front. The recent 2023 GPAI Summit in New Delhi showcased the work of AI experts in the fields of data governance, responsible AI, and the future of work. In line with the OECD AI Principles, this cooperative project seeks to incorporate these deliverables into national strategies. In addition, Indian departments like the Bureau of Indian Standards and the Ministry of Electronics and Information Technology are actively developing draft standards and presenting studies to address ethical, safety, and development issues related to AI.
Moreover, the first draft of India's Artificial Intelligence (AI) rules framework is expected to be unveiled in June or July of this year, according to Rajeev Chandrasekhar, Union Minister of State for Jal Shakti, Electronics and Information Technology, and Skill Development and Entrepreneurship. Chandrasekhar has stated that the government is keen on using AI to boost the economy, with a special emphasis on healthcare, agriculture, and other sectors.
After the recent fiasco, the Ministry of Electronics and Information Technology (MeitY) released an AI advisory on March 1, 2024, marking a significant change in the government's stance on AI policy. The advisory focuses on generative AI technologies, including Google's Gemini and large language models like ChatGPT. It requires that if such models are deemed "under-testing" or "unreliable," they must receive approval from the Indian government. The advisory appears to be the government's reaction to the recent controversy, namely the widely shared answer from Google's Gemini chatbot to a question concerning Prime Minister Narendra Modi. It highlights Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in an attempt to ensure that AI models abide by existing legal obligations.