====== AgentTuning: Enabling Generalized Agent Abilities for LLMs ======
AgentTuning is an instruction-tuning method developed at Tsinghua University that **enhances LLMs with agent capabilities while preserving their general language abilities**. Introduced by Zeng et al. (2023), it produces AgentLM models where the 70B variant achieves performance comparable to GPT-3.5-turbo on unseen agent tasks.((Zeng et al. (2023). "AgentTuning: Enabling Generalized Agent Abilities for LLMs."))
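The key to preserving general abilities is training on a mixture of agent trajectories and general-domain instruction data rather than agent data alone. The sketch below illustrates that idea only; ''mix_datasets'' and the ratio ''eta'' are hypothetical names chosen for illustration, not the paper's implementation.

<code python>
import random

# Illustrative sketch: sample each training example from the agent
# trajectory pool with probability eta, otherwise from the general
# instruction pool. The value eta=0.2 is an assumption for the demo.
def mix_datasets(agent_data, general_data, eta=0.2, n=1000, seed=0):
    """Build a mixed training set of n examples."""
    rng = random.Random(seed)
    mixed = []
    for _ in range(n):
        pool = agent_data if rng.random() < eta else general_data
        mixed.append(rng.choice(pool))
    return mixed

# Toy stand-ins for the two data sources.
agent = [{"source": "agent_trajectories", "id": i} for i in range(50)]
general = [{"source": "general_instructions", "id": i} for i in range(50)]

train = mix_datasets(agent, general, eta=0.2)
frac = sum(x["source"] == "agent_trajectories" for x in train) / len(train)
</code>

With ''eta'' near 0.2, roughly one example in five comes from the agent pool, so the fine-tuned model keeps seeing general instructions throughout training instead of overfitting to agent formats.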
===== Overview =====
)
trainer.train()
</code>
===== See Also =====
  * [[retroformer|Retroformer]]
  * [[agent_benchmarks|Agent Benchmarks and Evaluation]]
===== References =====