explainable_ai [2026/03/30 20:57] – Remove redundant References section — inline footnotes already render citations automatically (agent)
explainable_ai [2026/03/30 21:01] (current) – Restore References section (agent)

Line 91:
  * [[llm_hallucination|LLM Hallucination]]
  * [[attention_mechanism|Attention Mechanism]]
| + | |||
| + | ===== References ===== | ||
| + | |||
| + | - Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. // | ||
| + | - Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. //NeurIPS 2017//. [[https:// | ||
| + | - Jain, S., & Wallace, B. C. (2019). Attention is not Explanation. //NAACL 2019//. [[https:// | ||
| + | - Kim, B. et al. (2018). Interpretability Beyond Classification Labels: Quantitative Testing with Concept Activation Vectors (TCAV). //ICML 2018//. [[https:// | ||
| + | - Koh, P. W. et al. (2020). Concept Bottleneck Models. //ICML 2020//. [[https:// | ||
| + | - Sundararajan, | ||
| + | - European Parliament. (2024). EU Artificial Intelligence Act. [[https:// | ||