The growing popularity of Artificial Intelligence has made life significantly easier in many ways, helping people distil and convey copious amounts of information within seconds. It has also proved something of a curse for academia and various creative pursuits, as students outsource their homework and artists render elaborate landscapes with a single prompt. The latest instance of AI running amok went viral this week, featuring a lawyer who found himself exposed in court for using ChatGPT to write a brief.

He later told the court that he does not usually use ChatGPT, and that he made an exception because he was caring for dying family members. He said none of his co-counsel were aware of his use of generative AI.

ChatGPT hallucinates facts into existence

The details were shared in a recent filing before the US District Court in Kansas, which noted that lawyer Sandeep Seth had used ChatGPT “as a shortcut” to find case law consistent with the facts of the case. The AI platform had been instructed to “write an order that denies the motion to strike with case law support”, a tedious job that was likely completed within a split second. In response, Seth received several erroneous quotations and citations from the chatbot but failed to catch the errors.

“He admits that he did not check these citations, quotations, or statements of authority for accuracy. He then circulated the draft to the ‘litigation team.’ Seth then wrote a second draft that ‘substantially expanded the factual basis of our argument.’ Again, he queried ChatGPT to find additional case law,” reads an excerpt from the patent infringement lawsuit filing.

All lawyers fined by Court

The situation took a significant turn for the worse when the incorrect material collated by Seth was included in subsequent legal documents. All five lawyers involved in the case signed the documents containing these errors, with none of them bothering to independently verify whether the cases they were citing actually existed.