print(f"
</code>
| - | |||
| - | ===== References ===== | ||
| - | |||
| - | * Dhuliawala et al., " | ||
| - | * Lin et al., " | ||
| - | * OpenAI, "Why Language Models Hallucinate," | ||
| - | * Oxford University, "Major Research on Hallucinating Generative Models," | ||
| - | * Stanford Digital Humanities, "Legal RAG Hallucinations," | ||
| ===== See Also ===== | ===== See Also ===== | ||
| Line 284: | Line 276: | ||
| * [[common_agent_failure_modes|Common Agent Failure Modes]] | * [[common_agent_failure_modes|Common Agent Failure Modes]] | ||
| * [[how_to_handle_rate_limits|How to Handle Rate Limits]] | * [[how_to_handle_rate_limits|How to Handle Rate Limits]] | ||
| + | |||
| + | ===== References ===== | ||