The promise and peril of AI in the legal industry

31 March 2025

Can AI become the ultimate legal assistant, or will its limitations keep it confined to the back office? Experts explain to Cathy Li how the answer depends on how firms adapt to AI’s strengths and address its shortcomings.


It’s late at night, and there are still 1,000 contracts left to review. But in a world where artificial intelligence tools can analyze those contracts in minutes, highlighting critical clauses and flagging potential risks, the late-night grind now comes with a digital assistant.

In an industry where time is billed by the hour, AI promises to revolutionize the legal profession, delivering efficiency and productivity gains. Yet legal professionals are proceeding cautiously, wary of risks such as “hallucinations” and concerns over client confidentiality and data security.

“An even more disruptive possibility is AI taking over negotiation tasks from lawyers,” said Kai Xue, counsel at DeHeng Law Offices in Beijing, explaining that if contract negotiations are done by AI, it would upend the business model of large law firms by cutting down “heaps of partner time” spent at the negotiation table. He referenced ContractMatrix, a contract negotiation tool publicized by A&O Shearman (formerly Allen & Overy).  

According to a statement released by Harvey, an AI platform designed to assist legal professionals, A&O Shearman was the first firm to implement Harvey at an enterprise level, in December 2022. The firm reported significant time savings, with employees saving two to three hours per week on routine tasks such as summarization, analysis and translation.

One of the primary drivers for AI adoption in legal tech is the promise of increased efficiency and productivity. By automating routine and time-consuming tasks, AI allows legal professionals to focus on higher-value work that requires human expertise and judgement. This translates to reduced costs, improved turnaround times and the ability to handle a larger volume of cases, ultimately contributing to a healthier bottom line.  

“At the current stage, the use of AI tools is mainly for simple question and answer – assisting in drafting correspondence, providing a starting point for drafting, amassing sample clauses, summarizing complex information and translation,” said Ian Liu, a partner at Deacons in Hong Kong. “Also, AI tools are used to assist in back-office administration, business development, marketing and organizational tasks.”  

As society stands on the brink of transformative advancements, the integration of AI in the legal sector presents both significant opportunities and challenges. The arrival of generative AI tools like ChatGPT in late 2022 marked a turning point, demonstrating that technology could perform writing and research tasks with a level of proficiency approaching that of trained lawyers.  

Today, AI adoption in law firms is reshaping workflows, from answering legal queries and drafting correspondence to summarizing complex information and translating texts. But as firms race to adopt these tools, the question arises: Can AI become the ultimate legal assistant, or will its limitations keep it confined to the back office? 

“However, there are firms that expressed their concerns on the use of AI tools, stating that the technology is fairly new, and all the more so in terms of how it should be applied in legal practice. Amongst the overriding concerns are client confidentiality, data privacy, IP infringement, lack of accuracy or its inability to tailor its output to suit the nuanced needs of clients,” said Liu.  

While AI’s efficiency gains are undeniable, its adoption is not without risks. 

Hallucinations and data security  

While AI tools promise a more convenient future, they have shown warning signs such as “hallucinations” – when AI generates incorrect or fabricated information – as well as distorted training data that produces biased outcomes.

AI systems in the legal field can “hallucinate” in two ways: first, they might simply get the law wrong, providing incorrect information or making factual errors; second, they might describe the law accurately but cite sources that don’t back up their claims. 

According to Stanford University, this second type of error is especially dangerous in legal research, where authoritative sources are everything. A citation might look legitimate (it exists, after all), but if it doesn’t support the AI’s argument, it’s more detrimental than helpful. The whole point of legal AI is to save time by finding relevant sources. But if the tool spits out citations that seem credible but are irrelevant or misleading, users could be led astray. This misplaced trust could result in flawed legal decisions, undermining the very purpose of using AI in the first place. 

(Credit: Stanford University Human-Centered Artificial Intelligence. Image: https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries)

Accuracy and confidentiality are the primary concerns keeping lawyers from fully integrating AI into their daily work. There have been a few notable cases in which lawyers relied on AI-generated precedents that turned out not to exist. One prominent example involved a New York lawyer who faced sanctions for citing fictional cases invented by ChatGPT in a legal brief; many similar cases have since been reported.

The challenge is further intensified by the inherent unpredictability of large language models (LLMs), which often operate as “black boxes,” offering little transparency into how they generate responses. Although legal AI tools have started integrating features like explainability and source citations to mitigate these concerns, doubts about their reliability persist. Recent benchmarking studies have fueled this debate, with some reports highlighting alarmingly high error rates in tasks such as legal research. These findings emphasize the critical need for a thorough and cautious evaluation of these tools. 

“As lawyers, reaching a conclusion is only part of our job; what is more important is understanding the reasoning behind that conclusion,” Liu explained, noting that legal professionals need to justify their arguments, defend their positions in court and explain the basis of their legal opinions to clients and regulators. 

A major drawback of existing AI tools is their lack of transparency in explaining the reasoning behind their conclusions. Specifically, they often do not provide clarity on the assumptions that guide their reasoning, the decisions made during document analysis, or how they evaluate, prioritize or exclude certain data points. 

Some lawyers have found ways to work around problems with AI tools by not asking AI to conduct “open-ended research” on legal questions. 

“Don’t ask AI to conduct open-ended research on legal questions. Instead, start by identifying the specific documents and then prompt the AI to explore the content,” Xue said, explaining that if an associate is researching the latest U.S. Treasury Department regulations on sanctions, find the regulation itself, copy and paste it into the AI and prompt it to break it down. By feeding the entire document directly into the AI rather than letting it conduct a broad search, the risk of hallucination can be minimized. 
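Xue’s workflow can be sketched as a simple prompt-construction step. The helper below is illustrative only – the function name, prompt wording and sample text are our assumptions, not any firm’s or vendor’s actual tooling – but it shows the core idea: the full source document is embedded in the prompt so the model answers from supplied text rather than open-ended recall.

```python
# Sketch of a document-grounded prompt, per Xue's approach (names are
# illustrative assumptions): paste the whole regulation into the prompt
# instead of asking the model to search for it.

def build_grounded_prompt(document_text: str, question: str) -> str:
    """Embed the entire source document in the prompt to curb hallucination."""
    return (
        "Answer strictly from the document below. If the document does not "
        "address the question, say so rather than guessing.\n\n"
        f"--- DOCUMENT START ---\n{document_text}\n--- DOCUMENT END ---\n\n"
        f"Question: {question}"
    )

# Placeholder text stands in for the regulation the associate copied in.
regulation = "Full text of the Treasury sanctions regulation, pasted here."
prompt = build_grounded_prompt(regulation, "Which transactions are restricted?")
```

The resulting string would then be sent to whichever AI tool the firm uses; because the model never has to retrieve the regulation itself, it cannot invent one.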

Beyond technical challenges, the effectiveness of AI tools also varies significantly across jurisdictions due to differences in data availability and regulatory frameworks. 

In the U.S., contracts are made publicly available through the EDGAR database of the Securities and Exchange Commission as part of mandatory filings. “However, in China, there is no parallel system of publishing material definitive agreements of public companies as part of disclosures. So, AI in China doesn’t have the huge repository of Chinese contracts it needs for training,” explained Xue. 

He further illustrated this point by referencing aircraft lease agreements. In the U.S., AI can learn to draft such agreements by analyzing numerous precedents available on EDGAR. In contrast, for companies regulated by the China Securities Regulatory Commission, aircraft lease agreements are not published in full; instead, only a one-page summary is provided. This lack of detailed information limits the AI’s ability to learn and draft effectively. 

Beyond inaccuracy, lawyers and their clients alike worry about confidentiality and data security. For example, as of November 2024, ChatGPT’s privacy policy states that it collects user data, including IP addresses, browser information and details about user interactions with the platform. Notably, the policy also states that ChatGPT may use personal data to enhance its services and conduct research, and reserves the right to share this data with third parties.

“There are certain clients who will say no to AI, and the reason is security issues,” said Michelle G.W. Yee, a partner at Johnson Stokes & Master in Hong Kong. She explained that the firm uses AI but always seeks client consent before deploying any AI tools. 

To address these concerns, Yee emphasized the firm’s rigorous data protection measures: “We purge every week, so everything is purged before the documents are even uploaded onto the system. They’re redacted, carefully redacted – it’s many layers of security and protection of client data.” 

These precautions reflect a broader industry effort to balance AI adoption with client confidentiality. As Liu noted: “It is true that data protection of client is a great concern when law firms use AI tools to review documents, but such concerns may to some extent be mitigated by law firms’ internal risk management policy and possibly by a private LLM.” 

Liu explained that while data protection remains paramount, firms are taking proactive steps to address privacy risks. For example, many have conducted in-house seminars to educate attorneys on data protection best practices and established strict protocols. In cases like contract translation, attorneys are required to remove sensitive content before processing and then repopulate it once the translation is complete. These measures demonstrate how the legal industry is working to harness AI’s potential while safeguarding client trust. 
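The redact-then-repopulate protocol Liu describes can be sketched in a few lines. This is a minimal illustration under our own assumptions (the helper names, placeholder format and sample clause are invented for the example, and the translation step itself is omitted): sensitive terms are swapped for neutral tokens before text leaves the firm, then restored afterwards.

```python
# Illustrative sketch of redact-then-repopulate for contract translation
# (helper names and placeholder format are assumptions, not a real product).

def redact(text: str, sensitive_terms: list[str]):
    """Replace each sensitive term with a numbered placeholder token."""
    mapping = {}
    for i, term in enumerate(sensitive_terms):
        token = f"[PARTY_{i}]"
        mapping[token] = term
        text = text.replace(term, token)
    return text, mapping

def repopulate(text: str, mapping: dict) -> str:
    """Restore the original terms once the translated text comes back."""
    for token, term in mapping.items():
        text = text.replace(token, term)
    return text

clause = "Acme Ltd shall indemnify Beta Corp for losses under this lease."
redacted, mapping = redact(clause, ["Acme Ltd", "Beta Corp"])
# redacted: "[PARTY_0] shall indemnify [PARTY_1] for losses under this lease."
restored = repopulate(redacted, mapping)  # equals the original clause
```

In practice the redacted text, not the original, is what would be uploaded to the external AI tool, and the mapping never leaves the firm.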

AI’s growing footprint in legal markets 

The legal industry is undergoing a shift, as highlighted in the International Bar Association (IBA) 2024 Report. Over 60 percent of law firms globally, particularly in the EU and U.S., have integrated AI into their operations to streamline tasks and enhance efficiency. Leading the charge are firms like A&O Shearman, which partnered with Harvey in early 2023 to tackle tasks ranging from contract analysis to regulatory compliance. 

“If you’re dealing with things that are specific, the use of AI at the moment is somewhat limited from a legal professional’s perspective, for example, in virtual assets and digital assets,” said Kristi Swartz, a partner at DLA Piper in Hong Kong. Swartz, an experienced fintech lawyer, noted that information about virtual and digital assets is not always true or accurate. This is particularly due to the fast-changing and fast-paced nature of fintech, which remains a relatively new and evolving area of law. 

Swartz said that much of what AI currently accomplishes is reducing the time lawyers spend on processing tasks. This efficiency allows lawyers to focus more on value-added work, enhancing their overall effectiveness. 

DLA Piper has also gained recognition for its in-house development of a generative AI-powered legal assistant named ButterflAI, which is designed to ensure the security and confidentiality of client work. 

The legal AI market is booming, with a wave of tools emerging to streamline routine tasks, though many offer similar capabilities. Among them, Harvey AI has carved out a niche as a generative AI platform designed exclusively for legal professionals. Built on OpenAI’s cutting-edge GPT-4 model, Harvey AI specializes in legal research, document analysis, summarization, draft writing and support for both transactional law and litigation. 

The AI landscape, however, is crowded and fiercely competitive. In February 2025, Legaltech Hub listed over 80 AI-powered legal assistants, most of which provide overlapping features like document summarization, Q&A, redlining and legal research. For buyers, the market can feel overwhelming and opaque. Identifying the right tool for a firm’s specific needs often requires resource-intensive side-by-side pilot testing – a process that is costly, time-consuming and typically accessible only to the largest firms with the budget and bandwidth to invest. 

Despite these challenges, the potential of AI to transform the legal profession remains undeniable. Established legal technology providers such as LexisNexis and Thomson Reuters have launched their AI-powered solutions, often developed in collaboration with leading law firms. This trend is not limited to Western markets; adoption is also gaining momentum in Asia. For example, firms like Taiwan’s LCS & Partners, Thailand’s Kudun & Partners and Japan’s Atsumi & Sakai are actively integrating AI into their workflows. In mainland China, tools such as MetaLaw and Tongyi-Farui (developed by Alibaba’s Damo Academy) are being used for legal analysis, reflecting the region’s growing emphasis on legal tech innovation. 

As AI adoption grows, regulators worldwide are stepping in to establish frameworks that balance innovation with accountability. On the governance front, the regulatory landscape is evolving rapidly. The European Union has enacted the AI Act, which came into force in August 2024. In the United States, a sweeping Executive Order was issued in January 2025 to regulate AI systems. The Council of Europe has also adopted the first international treaty on AI, ensuring that AI systems respect human rights and the rule of law. Meanwhile, China has ambitious plans for AI regulation, with several measures already in place and a comprehensive AI law in the works. 

As the legal industry navigates this transformative era, one thing is clear: AI is no longer a distant possibility but a present reality. Whether AI becomes the ultimate legal assistant or remains a powerful but limited tool depends on how firms adapt to its strengths and address its shortcomings. For now, the lawyer reviewing contracts with Harvey AI is a glimpse into a future where technology and tradition work hand in hand – a future that is already here. 

“A pen is my brain, and I can’t think if I don’t have a pen in my hand. The new generation does not think like that at all,” said Swartz. She anticipates that the next generation of legal professionals will learn, process information, and approach problem-solving in fundamentally different ways, shaped by the tools and technologies of the digital age. 

