Zachariah Crabill is the Colorado lawyer in question. In April of this year, a client hired Mr. Crabill to prepare a motion to set aside a judgment in the client's civil suit. Mr. Crabill had never handled such a matter, so he used the AI platform ChatGPT to prepare the appropriate papers. ChatGPT's work product included citations to cases that do not exist, and Mr. Crabill filed it with the court without reading the cases or otherwise verifying they were real.
Putting aside for a moment the fact that Mr. Crabill failed to check that his citations were real, it is worth considering how ChatGPT works to better understand why it spits out phony citations. As I understand it, ChatGPT takes a prompt and generates words one at a time (albeit at an incomprehensible speed). The process is referred to as "autoregressive modeling": the model predicts the next word in a sequence based on the words that came before it. What is statistically likely is not necessarily true. So, having modeled the briefs that came before it, ChatGPT understands that a long sentence or paragraph typically ends with words like "Smith v. Jones." But those are only words, not necessarily real case citations. Once we understand that, it becomes clear how the phony citations get generated. It does not, however, explain Mr. Crabill's failure to check.
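For the technically curious, the point can be illustrated with a toy next-word predictor. This is a bare-bones sketch, not how ChatGPT is actually built: real models use neural networks over enormous corpora, while this example merely counts which word follows which in a few invented, brief-like sentences (the sentences and case names are made up for illustration). Even so, it shows the core failure mode: a model that picks the statistically likeliest next word can assemble a citation-shaped phrase that appears nowhere in its training data.

```python
from collections import defaultdict, Counter

# A tiny, invented corpus of brief-like sentences (hypothetical examples).
corpus = [
    "the court held that the motion fails see smith v. doe",
    "the court found that the claim fails see smith v. roe",
    "the court held that the motion fails see brown v. jones",
    "the court found that the claim fails see green v. jones",
]

# Count how often each word follows each preceding word (a bigram model).
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no notion of truth."""
    return bigrams[word].most_common(1)[0][0]

# Generate three words after "see", always taking the likeliest continuation.
out = ["see"]
for _ in range(3):
    out.append(predict_next(out[-1]))

# "smith" is the likeliest word after "see", and "jones" the likeliest after
# "v." -- so the model fabricates a citation that is in none of its sources.
print(" ".join(out))  # → see smith v. jones
```

"Smith v. Jones" never appears in the corpus above; the model stitched it together from fragments of other citations because each word, taken one at a time, was the most probable continuation. That is, in miniature, how a phony citation can look entirely plausible.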
To make matters worse, Mr. Crabill discovered the problem before a hearing on the motion but did not inform the court or otherwise rectify the situation. When the judge inquired about the citations, Mr. Crabill blamed an intern. He finally came clean six days after the hearing, when he filed an affidavit admitting he had used ChatGPT to draft the motion. Colorado suspended Mr. Crabill for one year and one day, with ninety days to be served and the remainder stayed upon his successful completion of a two-year period of probation, with conditions. All in all, Mr. Crabill should have heeded the lesson the nuns at St. Martins drilled into my head – always check your work. Or in this case, I guess, ChatGPT's.
Closer to home, Judge Michael Newman, a judge in the United States District Court for the Southern District of Ohio, has instituted a standing order that provides:
"No attorney for a party, or a pro se party, may use Artificial Intelligence ("AI") in the preparation of any filing submitted to the Court. Parties and their counsel who violate this AI ban may face sanctions including, inter alia, striking the pleading from the record, the imposition of economic sanctions or contempt, and dismissal of the lawsuit."
An overreaction? Perhaps, but it's not an unreasonable position. And lawyers who don't like it should thank Zachariah Crabill.