A fabricated content hallucination occurs when an artificial intelligence system invents sources, citations, events, people, or other entities that do not exist, presenting them as real. This is among the most dangerous forms of AI hallucination because the fabricated material often appears highly plausible, complete with realistic formatting, names, and contextual details.
Fabricated content hallucination refers to the generation of entirely fictional information by an AI model, particularly when it creates non-existent academic citations, legal case references, books, quotations, or historical events. Unlike factual-inaccuracy hallucinations, which distort real information, fabricated content hallucinations invent material out of whole cloth 1).
The phenomenon arises because large language models (LLMs) generate text from statistical patterns rather than by retrieving information from verified sources. When the model encounters a gap in its knowledge, it fills the gap with plausible-sounding fabrications rather than acknowledging uncertainty 2).
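The mechanism can be illustrated with a minimal sketch of next-token sampling. The probabilities below are hypothetical, not from any real model: the point is that an ordinary decoder must emit *some* token in proportion to its probability, and nothing in that step checks whether the resulting citation exists.

```python
import random

# Hypothetical next-token distribution for a model completing
# "The landmark case on this issue is ...". Every option sounds
# plausible; none is verified against any external source.
next_token_probs = {
    "Smith":    0.34,
    "Johnson":  0.27,
    "United":   0.22,  # e.g. "United States v. ..."
    "Varghese": 0.17,  # echoes one of the invented names in Mata v. Avianca
}

def sample_next_token(probs, rng=random.Random(0)):
    """Standard temperature-1 sampling: draw a token in proportion to
    its probability. The decoder always produces something fluent,
    whether or not the underlying fact exists."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding

print(sample_next_token(next_token_probs))
```

Because "I don't know" is rarely the highest-probability continuation of a confident-sounding prompt, the sampler fills the knowledge gap with a fluent fabrication by default.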
The most widely cited case of AI-fabricated legal citations occurred in the Southern District of New York. Attorney Steven Schwartz of Levidow, Levidow & Oberman used ChatGPT to conduct legal research for an opposition brief in Mata v. Avianca, Inc. The AI generated six case citations that appeared legitimate but were entirely fabricated, complete with plausible case names, citation formats, judge names, and convincing legal reasoning. None of the cited cases existed 3).
Judge P. Kevin Castel found violations of Federal Rule of Civil Procedure 11 for failing to verify the authenticity of cited authorities. The attorneys were sanctioned with a $5,000 fine, but the reputational damage was enormous. Within weeks, the case became the defining example of AI hallucination in legal practice 4).
As of early 2026, legal researchers have documented nearly 1,000 cases where practitioners or self-represented litigants submitted AI-generated hallucinated citations to courts 5).
Jeff Hancock, founder of the Stanford Social Media Lab, submitted an affidavit in a federal case challenging Minnesota's deepfake law. He used ChatGPT to help organize citations, and the tool hallucinated new citations that did not exist. Opposing attorneys challenged the filing as unreliable, and Hancock was forced to file a subsequent declaration acknowledging the AI-generated errors 6).
A study examining medical articles generated by ChatGPT found that of 115 generated references, 47% were entirely fabricated, 46% were authentic but inaccurate, and only 7% were both authentic and accurate 7). A separate investigation of ChatGPT-generated research proposals found that of 178 cited references, 69 lacked a DOI, and 28 of those were confirmed not to exist at all 8).
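The verification step these studies rely on can be sketched as a simple triage of each reference's DOI. The registry below is a hypothetical local stand-in for illustration; a real check would attempt to resolve the DOI through a service such as doi.org or the Crossref API.

```python
import re

# DOIs follow the pattern "10.<registrant>/<suffix>".
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

# Hypothetical registry standing in for a live resolver lookup.
KNOWN_DOIS = {
    "10.1000/real.2023.001",
}

def classify_reference(doi):
    """Triage a cited reference by its DOI, mirroring the categories
    used in audits of AI-generated bibliographies."""
    if doi is None:
        return "no DOI - needs manual verification"
    if not DOI_PATTERN.match(doi):
        return "malformed DOI - likely fabricated"
    if doi not in KNOWN_DOIS:
        return "DOI does not resolve - likely fabricated"
    return "DOI resolves - verify authors and title still match"

print(classify_reference("10.1000/real.2023.001"))
print(classify_reference("10.9999/made.up.ref"))
print(classify_reference(None))
```

Note the last category: a resolving DOI is not sufficient, since the studies above found many references that were authentic but attached to the wrong authors, titles, or claims.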
The Chicago Sun-Times published a “Summer Reading List for 2025” that was AI-generated. Of the 15 titles listed, only 5 were genuine works. The remaining 10 were fabrications attributed to real authors with convincing descriptions. Chilean American novelist Isabel Allende was credited with a book called Tidewater Dreams, described as her “first climate fiction novel,” despite never having written such a work 9).
Meta's Galactica scientific language model, when asked to draft a paper on creating avatars, cited a fictitious paper from a real author working in that field. In another instance, ChatGPT fabricated that scholar Harold Coward wrote a book titled Dynamic Canonicity: A Model for Biblical and Theological Interpretation, complete with false arguments about religious principles 10).
An Israeli municipality submitted court filings that relied on non-existent legal authorities, including an invented Ministry of Education Director-General Circular and several fabricated or incorrectly quoted court decisions. The Supreme Court found these to be the product of uncontrolled AI use, accepted the appeal on that basis, and ordered costs of 30,000 Israeli Shekels against the municipality 11).