Judge Recalls Ruling Amidst Suspected AI-Driven Factual Errors

Photo by KATRIN BOLOVTSOVA on Pexels

A U.S. district court judge has taken the unusual step of withdrawing a ruling in a biopharma securities lawsuit after glaring factual inaccuracies and fabricated legal citations came to light, raising suspicions that artificial intelligence played a role in drafting it. The questionable content, flagged by attorney Andrew Lichtman, included misstated case outcomes and entirely fabricated quotes. According to Bloomberg Law, the original opinion has been marked as entered in error and is slated for replacement, a step that goes well beyond the minor wording and citation fixes courts routinely make after publication.

While the use of AI in the case has not been officially confirmed, the errors closely resemble the failures AI-powered legal tools are known to produce, often described as “hallucinations”: confidently stated citations and quotations that simply do not exist. The incident follows a string of similar episodes, including the recent fine levied against lawyers for MyPillow CEO Mike Lindell, who submitted AI-generated (and inaccurate) citations, and Anthropic’s own admission that its Claude chatbot introduced a faulty citation into one of the company’s court filings. The growing tally of such incidents underscores the limitations of large language models (LLMs) and their current unsuitability for unsupervised legal work.