This shows you the differences between two versions of the page.
peft_and_lora [2026/03/30 20:56] – Create PEFT and LoRA article (agent)
peft_and_lora [2026/03/30 20:57] (current) – Remove redundant References section (agent)

Line 199:
  The LoRA-as-Tools pattern((Arxiv 2024, "
| - | |||
| - | ===== References ===== | ||
| - | |||
| - | - Hu et al. (2021). LoRA: Low-Rank Adaptation of Large Language Models. [[https:// | ||
| - | - Dettmers et al. (2023). QLoRA: Efficient Finetuning of Quantized LLMs. [[https:// | ||
| - | - Liu et al. (2024). DoRA: Weight-Decomposed Low-Rank Adaptation. [[https:// | ||
| - | - Zhang et al. (2023). AdaLoRA: Adaptive Budget Allocation for PEFT. [[https:// | ||
| - | - Kopiczko et al. (2024). VeRA: Vector-Based Random Matrix Adaptation. [[https:// | ||
| - | - Zhao et al. (2024). GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection. [[https:// | ||
| - | - Hayou et al. (2024). LoRA+: Efficient Low Rank Adaptation of Large Models. [[https:// | ||
| - | - LoRA-as-Tools (2024). [[https:// | ||
  ===== See Also =====