Lawyer Says He ‘Greatly Regrets’ Using ChatGPT in Lawsuit After It Cited Multiple Nonexistent Cases

The lawyer says he used ChatGPT to "supplement" his research, resulting in at least six made-up cases being cited in a brief.

Image of an empty courtroom via Getty/gorodenkoff

A lawyer says he “greatly regrets” using ChatGPT in his work for a client suing an airline, after the artificial intelligence language model was found to have cited multiple nonexistent cases in its purported research.

As first reported by the New York Times over the weekend, the suit in question was brought by a man who alleges he was injured when a serving cart struck his knee during a flight to New York. The man, identified as Roberto Mata, sued Avianca over the alleged injuries.

But when the airline pushed for the case to be tossed, the man’s legal team, including a lawyer named Steven A. Schwartz, pointed to a number of prior court rulings they argued supported their stance. The problem, as it turns out, was that none of those cases existed.

Schwartz, who works at the law firm Levidow, Levidow & Oberman, said in a subsequent affidavit, made available in a separate report from The Verge, that he had “consulted” ChatGPT “in order to supplement the legal research” process.

Schwartz’s admitted use of ChatGPT resulted in the citation of at least six cases, all of which were later found to be “nonexistent.” According to Schwartz, he had not used ChatGPT prior to this incident, which he said left him “unaware of the possibility that its content could be false.” In the same court document, Schwartz also said he had “no intent to deceive” either the court or the defendants in the case.

Schwartz, who now has a sanctions hearing scheduled for next month, said he will not use ChatGPT again unless he is able to secure “absolute verification” of any claims it produces.

Of course, this is far from the first story to make the perils of relying on such technology strikingly clear. While certain CEOs have remained capitalistically (and predictably) bullish on going all in on AI, several leaders within the field have cautioned against doing so before regulations are in place.

For example, Sam Altman, CEO of ChatGPT developer OpenAI, warned during a recent Senate Judiciary Committee hearing that this technology “can go quite wrong.” In the same hearing, Altman conceded that he was “nervous” about certain aspects of this stage of AI’s development, including his “worst fears” that those behind the tech could “cause significant harm to the world.”