Artificial Intelligence and Legal Practice: Due Diligence, Ethics, and Uncharted Territory

A non-human partner has emerged, presenting as an early stage of artificial intelligence (“AI”). An attorney willing to partner with such an AI system can combine strengths with it, producing results greater than the sum of their parts. In this article, intellectual property counsel Alex Salmu and business attorney Victor Wandzel explore the integration of Large Language Models (“LLMs”) into legal practice. AI systems represent a monumental shift in the legal industry, potentially enhancing each attorney’s capabilities while introducing new territory for attorneys to navigate.

LLMs, like ChatGPT, are models trained on extensive data to understand and generate natural language, and they are being used to automate legal tasks such as legal judgment prediction, legal document analysis, and legal document drafting.[1] The integration of LLMs into legal practice invokes various Michigan Rules of Professional Conduct (“MRPC”), such as 1.1 Competence, 1.6 Confidentiality of Information, and 5.3 Responsibilities Regarding Nonlawyer Assistants. It is also important that a lawyer or law student understand how these models function in order to better appreciate their shortcomings, such as hallucinations, though that topic is beyond the scope of this article.

Whether you are a seasoned professional or a newly barred attorney, there are particular points to keep in mind as you consider practical applications of LLMs, oversight responsibilities, and ethical implications.

First, while LLMs are tools that can streamline tasks and enhance a firm’s efficiency, proper oversight remains essential. An LLM can produce accurate research results, but an attorney should verify those results before relying on them in a pleading. After all, once an attorney signs a document filed in court, Michigan Court Rule 1.109(E)(5) applies: the signature certifies that the signer has read the document and that, “to the best of his or her knowledge, information, and belief formed after reasonable inquiry, the document is well grounded in fact and is warranted by existing law or a good-faith argument . . . .” Although horror stories about ChatGPT-hallucinated cases have surfaced, it would be remiss to write off LLMs at this point. As long as we remain diligent about verifying cases and sources, LLMs can and will help those willing to put in the legwork early in the adoption process.

Second, MRPC 1.1 Competence makes it important to stay informed about the latest developments in legal technology. The comment “Maintaining Competence” to MRPC 1.1 states, “To maintain the requisite knowledge and skill, a lawyer should engage in continuing study and education, including the knowledge and skills regarding existing and developing technology that are reasonably necessary to provide competent representation for the client in a particular matter.” Learning to use LLMs is arguably as “reasonably necessary” today as learning to use a computer and a word processor once was. Once adopted, an LLM can improve or enrich the work product or services provided to the client, leaving more time for value-added tasks that require human expertise, creativity, and emotional intelligence. In other words, let LLMs handle the routine tasks, but be sure to verify that those tasks are handled properly.

And third, at the forefront of LLMs is an active battle challenging the status quo of intellectual property and privacy law. LLMs may be trained on copyrighted material and could produce outputs similar to existing copyrighted works. Their datasets may also include personal or sensitive details, posing a risk when input prompts contain such information. This is particularly concerning with anonymized data, which may not remain anonymous if the LLM can re-identify individuals from the information provided. These are just a few of the areas calling for collaboration between researchers and policymakers.[2] In the meantime, organizations may draft, maintain, and enforce a governance policy for the use of LLMs as a precaution.

Attorneys would be wise to recognize that the LLM partner is here to stay and to ask whether the AI system can ultimately add value to the client’s bottom line and sustainability to the law practice. But its advantages also present ethical and practical challenges that require oversight and an understanding of the LLM’s limitations. New and seasoned lawyers should embrace this technological change in the legal profession, and law firms should explore this technology with agility, curiosity, and a commitment to advancing each client’s position in the AI age.

[1] Zhongxiang Sun, A Short Survey of Viewing Large Language Models in Legal Aspect, at 3 (Renmin Univ. of China, Mar. 17, 2023).

[2] See Sun, supra note 1, at 5.


Victor Wandzel, Esq. is a transactional business lawyer specializing in corporate law and the principal of Wandzel Law PLLC. For further insights or to connect, find Victor on LinkedIn or reach out directly via wandzellaw.com.

Alex Salmu is an Intellectual Property attorney in Michigan.