The original version of this article was published in the July 28, 2023 edition of the New York Law Journal.
Since its public debut in late 2022, ChatGPT has garnered widespread acclaim for its fast response generation, contextual understanding, and conversational tone that simulates human speech. The legal industry, like many others, has recognized the potential of artificial intelligence technologies such as ChatGPT to revolutionize legal research, contract review, and client communication. But incorporating AI into the practice of law brings with it a set of ethical challenges that attorneys must carefully weigh. While AI tools like ChatGPT offer remarkable capabilities, they are no substitute for human judgment, opinions, and emotional intelligence, all of which remain vital to achieving the best results for clients. The efficiency ChatGPT offers comes with potential risks and ethical concerns that must not be ignored.
First and foremost, attorneys have an absolute duty to provide competent representation to their clients.1 That “competent representation” requires not just legal knowledge and skill, but also thoroughness and reasonable preparation necessary for the representation.2 Anyone, whether lawyer or no, can plug a few directives into an AI tool like ChatGPT and use the results in place of their own work. But lawyers owe a greater duty to their clients than that: we must evaluate the ChatGPT response for accuracy, check to ensure any cited authority is still good law, and tailor the response to fit not just the facts of the case at issue, but also the audience that will read it. Anything less would fall short of the thoroughness and reasonable preparation requirements of the New York Rules.
Nor is ChatGPT as reliable as it seems at first blush. Attorneys using ChatGPT must always make certain to double-check the AI response’s accuracy, lest they find themselves in hot water with the court—and shortly thereafter, their clients. Indeed, overreliance on ChatGPT’s accuracy has already led to sanctions for two New York attorneys, where the AI tool simply made up cases to support a legal argument the attorneys then attempted to advance before the Honorable Kevin Castel of the United States District Court for the Southern District of New York.3 Needless to say, things did not go well for the attorneys in question.4 In his June 22, 2023 decision imposing monetary sanctions against the attorneys and their firm under Rule 11 of the Federal Rules of Civil Procedure, Judge Castel wrote, “The filing of papers without taking the necessary care in their preparation is an abuse of the judicial system that is subject to Rule 11 sanction. . . . An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.” Mata v. Avianca, — F. Supp. 3d —, 2023 WL 4114965, at *11-12 (S.D.N.Y. June 22, 2023) (citations and internal quotation marks omitted). The Court ultimately held that the attorneys, in using and relying upon ChatGPT in lieu of their own research, acted with “subjective bad faith in violating Rule 11[.]” Id. at *15. And Judge Castel made clear that the $5,000 fine meant the attorneys in question were getting off lightly for their actions:
In considering the need for specific deterrence, the Court has weighed the significant publicity generated by Respondents’ actions. The Court credits the sincerity of Respondents when they described their embarrassment and remorse. The fake cases were not submitted for any respondent’s financial gain and were not done out of personal animus. Respondents do not have a history of disciplinary violations and there is a low likelihood that they will repeat the actions described herein. . . . [But the] Court will require Respondents to inform their client and the judges whose names were wrongfully invoked of the sanctions imposed. The Court will not require an apology from Respondents because a compelled apology is not a sincere apology. Any decision to apologize is left to Respondents.
Id. at *17. Mata may well have been the first case of its kind on the subject of thoughtless reliance on AI tools in the legal field, but it surely will not be the last. In the wake of such a highly publicized case as Mata, it is safe to assume that future courts will not be so lenient.
Even setting aside such blind faith in artificial intelligence as the attorneys in Mata displayed, perhaps the most significant ethical pitfall inherent in the use of AI tools such as ChatGPT is the risk to client confidentiality. A primary ethical obligation of an attorney to their clients is to maintain client confidentiality, including protecting client information from unauthorized disclosure.5 And using ChatGPT can pose a critical risk that such information may be exposed. ChatGPT chat history is accessible to and reviewable by OpenAI employees,6 and that disclosure to a third party may effectively waive the attorney-client privilege. Similarly, OpenAI—the company behind ChatGPT—may provide personal information (including client-identifying information) to third-party vendors and affiliates, heightening already-serious ethical concerns over data security and privacy.7 Even if ChatGPT kept all chat history entirely private, that would not guarantee its security: a March 2023 bug exposed portions of other users’ chat histories, along with payment information belonging to roughly 1.2% of ChatGPT Plus subscribers.8
Again, there is no question that ChatGPT—like other, similar AI tools—has the potential to revolutionize the legal field. But it is not a one-for-one substitute for human attorney work product and should not be treated as such. While ChatGPT has enormous potential to increase attorney efficiency, there remain serious ethical quandaries with the use of artificial intelligence in legal work. Attorneys must be mindful of these potential pitfalls while using AI software and should only use it as the helpful tool it is meant to be, not as a replacement for thoughtful work product.
*Reprinted with permission from the July 28, 2023 edition of the New York Law Journal © 2023 ALM Media Properties, LLC. All rights reserved. Further duplication without permission is prohibited. For information, contact 877-257-3382 or reprints@alm.com.
1. New York State Unified Court System Part 1200, Rules of Professional Conduct (the “New York Rules”), Rule 1.1, https://www.nycourts.gov/legacypdfs/rules/jointappellate/NY-Rules-Prof-Conduct-1200.pdf at 4.
2. See id.
3. Rohan Goswami, ChatGPT cited ‘bogus’ cases for a New York federal court filing. The attorneys involved may face sanctions., CNBC, May 30, 2023, https://www.cnbc.com/2023/05/30/chatgpt-cited-bogus-cases-for-a-new-york-federal-court-filing.html.
4. Robert Roth, Mata v. Avianca: The Blame is not Solely with ChatGPT, LinkedIn, June 8, 2023, https://www.linkedin.com/pulse/mata-v-avianca-blame-solely-chatgpt-robert-roth/.
5. New York Rule 1.6, https://www.nycourts.gov/legacypdfs/rules/jointappellate/NY-Rules-Prof-Conduct-1200.pdf at 9-10.
6. Natalie, What is ChatGPT?, OpenAI Help Center, https://help.openai.com/en/articles/6783457-what-is-chatgpt, at ¶ 5 (“As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements.”) (emphasis added).
7. Privacy Policy, OpenAI, June 23, 2023, https://openai.com/policies/privacy-policy (“In certain circumstances we may provide your Personal Information to third parties without further notice to you….”).
8. Andrew Tarantola, OpenAI Says a Bug Leaked Sensitive ChatGPT User Data, Engadget, March 24, 2023, https://www.engadget.com/openai-says-a-bug-leaked-sensitive-chatgpt-user-data-165439848.html.