RENT Magazine Q1'26

When AI Goes Wrong

AI Hallucinations Reach Housing Court in Los Angeles Eviction Case (2023)

A cautionary example comes from landlord-tenant practice in California, where a well-known Los Angeles eviction attorney and his firm were sanctioned after submitting a court filing that contained “an entire body of law that was fabricated.” The firm was ordered to pay nearly $1,000 in sanctions, narrowly avoiding a mandatory bar reporting requirement, and critics warned that the incident illustrated how unverified AI output can undermine both legal standards and the integrity of eviction proceedings, where the stakes are often people’s homes. (Source)

Mata v. Avianca (2023): A Federal Case That Became a National Warning

In Mata v. Avianca, attorneys filed a legal brief filled with case citations and quotations that appeared legitimate but turned out to be entirely fabricated. The citations had been generated with ChatGPT, and when the court asked the attorneys to produce copies of the cited decisions, they could not, because the cases did not exist. The judge dismissed the filing, called the content “gibberish,” and imposed a $5,000 sanction, turning the case into a national example of what happens when AI is treated as a legal research tool rather than a writing tool.

A Record-Setting $10,000 AI Penalty (2025)

In 2025, a California Court of Appeal issued a record-setting sanction after an attorney filed an appeal containing numerous fabricated quotations and citations that the court determined were generated by AI. The brief included 21 fake quotations attributed to real cases, along with citations that did not support the propositions the brief claimed. The court imposed a $10,000 penalty, warning that relying on unvalidated AI output can cross into professional negligence.
