LLaMA-Factory

Latest version: v0.9.0

0.5.2

New features

- Support **block expansion** in [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro), see `tests/llama_pro.py` for usage
- Add the `use_rslora` option for the LoRA method (see the sketch after this list)
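As referenced above, a minimal sketch of enabling rank-stabilized LoRA in a fine-tuning run. Only `--use_rslora` comes from these notes; the `src/train_bash.py` entry point, model, dataset, and remaining arguments are assumed placeholders based on the project's usual CLI.

```bash
# Hypothetical LoRA SFT run with rank-stabilized LoRA enabled.
# Only --use_rslora is taken from these release notes; everything else
# (entry point, model, dataset, hyperparameters) is a placeholder.
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path Qwen/Qwen1.5-7B \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --use_rslora \
    --output_dir saves/qwen1.5-7b-rslora \
    --per_device_train_batch_size 2 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --fp16
```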

New models

- Base models
  - Qwen1.5 (0.5B/1.8B/4B/7B/14B/72B)
  - DeepSeekMath-7B-Base
  - DeepSeekCoder-7B-Base-v1.5
  - Orion-14B-Base
- Instruct/Chat models
  - Qwen1.5-Chat (0.5B/1.8B/4B/7B/14B/72B)
  - MiniCPM-2B-SFT/DPO
  - DeepSeekMath-7B-Instruct
  - DeepSeekCoder-7B-Instruct-v1.5
  - Orion-14B-Chat
  - Orion-14B-Long-Chat
  - Orion-14B-RAG-Chat
  - Orion-14B-Plugin-Chat

New datasets

- Supervised fine-tuning datasets
  - SlimOrca (en)
  - Dolly (de)
  - Dolphin (de)
  - Airoboros (de)
- Preference datasets
  - Orca DPO (de)

Bug fixes

- Fix `torch_dtype` check in export model by fenglui in 2262
- Add Russian locale to LLaMA Board by seoeaa in 2264
- Remove manually set `use_cache` in export model by yhyu13 in 2266
- Fix DeepSpeed Zero3 training with MoE models by A-Cepheus in 2283
- Add a patch for full training of the Mixtral model using DeepSpeed Zero3 by ftgreat in 2319
- Fix bug in data pre-processing by lxsyz in 2411
- Add German sft and dpo datasets by johannhartmann in 2423
- Add version checking in `test_toolcall.py` by mini-tiger in 2435
- Enable parsing of SlimOrca dataset by mnmueller in 2462
- Add tags for models when pushing to hf hub by younesbelkada in 2474
- Fix 2189 2268 2282 2320 2338 2376 2388 2394 2397 2404 2412 2420 2421 2436 2438 2471 2481

0.5.0

Congratulations on 10k stars 🎉 Make LLM fine-tuning easier and faster together with LLaMA-Factory ✨

New features

- Support **agent tuning** for most models: fine-tune any LLM with `--dataset glaive_toolcall` for tool use (see the sketch after this list) 2226
- Support function calling in both **API** and **Web** modes with fine-tuned models, following [OpenAI's format](https://platform.openai.com/docs/api-reference/chat/create)
- LLaMA Factory 🤝 [Unsloth](https://github.com/unslothai/unsloth), enjoy **170%** LoRA training speed with `--use_unsloth`, see the benchmark [here](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)
- Support fine-tuning models on the MPS device 2090
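As referenced above, a sketch of an agent-tuning run that also enables the Unsloth speedup. Only `--dataset glaive_toolcall` and `--use_unsloth` come from these notes; the entry point and all other arguments are assumed placeholders.

```bash
# Hypothetical agent-tuning run on the Glaive tool-calling data with the
# Unsloth kernels enabled; only the two highlighted flags come from the notes.
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset glaive_toolcall \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --use_unsloth \
    --output_dir saves/llama2-7b-toolcall \
    --fp16
```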

New models

- Base models
  - Phi-2 (2.7B)
  - InternLM2 (7B/20B)
  - SOLAR-10.7B
  - DeepseekMoE-16B-Base
  - XVERSE-65B-2
- Instruct/Chat models
  - InternLM2-Chat (7B/20B)
  - SOLAR-10.7B-Instruct
  - DeepseekMoE-16B-Chat
  - Yuan (2B/51B/102B)

New datasets

- Supervised fine-tuning datasets
  - deepctrl dataset
  - Glaive function calling dataset v2

Core updates

- Refactor data engine: clearer dataset alignment, easier templating and tool formatting
- Refactor the saving logic for models with a value head 1789
- Adopt the ruff code formatter for a consistent code style

Bug fixes

- Bump transformers version to 4.36.2 by ShaneTian in 1932
- Fix requirements by dasdristanta13 in 2117
- Add Machine-Mindset project by JessyTsu1 in 2163
- Fix typo in readme file by junuMoon in 2194
- Support resize token embeddings with ZeRO3 by liu-zichen in 2201
- Fix 1073 1462 1617 1735 1742 1789 1821 1875 1895 1900 1908 1907 1909 1923 2014 2067 2081 2090 2098 2125 2127 2147 2161 2164 2183 2195 2249 2260

0.4.0

🚨🚨 Core refactor

- Deprecate `checkpoint_dir` and use `adapter_name_or_path` instead (see the migration sketch below)
- Replace `resume_lora_training` with `create_new_adapter`
- Move the patches in model loading to `llmtuner.model.patcher`
- Bump to Transformers 4.36.1 to support the Mixtral models
- Broader FlashAttention-2 support (LLaMA, Falcon, Mistral)
- Temporarily disable LongLoRA due to breaking changes; support will be restored later

The above changes were made by hiyouga in 1864
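A minimal migration sketch for the renamed arguments, assuming the `src/train_bash.py` entry point; the paths, model, and dataset are placeholders, and only the argument names themselves come from the notes above.

```bash
# Before (<= 0.3.x): continue training from an existing LoRA checkpoint while
# starting a fresh adapter instead of resuming the old one:
#   --checkpoint_dir saves/llama2-7b/lora/sft --resume_lora_training False
#
# After (0.4.0): the same intent expressed with the renamed arguments.
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --adapter_name_or_path saves/llama2-7b/lora/sft \
    --create_new_adapter \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --output_dir saves/llama2-7b/lora/sft-round2
```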

New features

- Add **DPO-ftx**: mixing fine-tuning gradients into DPO via the `dpo_ftx` argument, suggested by lylcst in https://github.com/hiyouga/LLaMA-Factory/issues/1347#issuecomment-1846943606
- Integrate **AutoGPTQ** into model export via the `export_quantization_bit` and `export_quantization_dataset` arguments (see the export sketch after this list)
- Support loading datasets from the ModelScope Hub by tastelikefeet and wangxingjun778 in 1802
- Support resizing token embeddings with noisy mean initialization by hiyouga in a66186b8724ffd0351a32593ab52d8a2312f339b
- Support the system column in both the alpaca and sharegpt dataset formats
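To illustrate the new export-time quantization, a hedged sketch of a post-training GPTQ export. Only `export_quantization_bit` and `export_quantization_dataset` come from these notes; the `src/export_model.py` entry point, paths, and the calibration file are assumed placeholders.

```bash
# Hypothetical export of a LoRA-tuned model to a 4-bit GPTQ checkpoint.
# The calibration dataset path and all other arguments are placeholders.
python src/export_model.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --adapter_name_or_path saves/llama2-7b/lora/sft \
    --template default \
    --export_dir models/llama2-7b-sft-gptq \
    --export_quantization_bit 4 \
    --export_quantization_dataset data/c4_demo.json
```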

New models

- Base models
  - Mixtral-8x7B-v0.1
- Instruct/Chat models
  - Mixtral-8x7B-Instruct-v0.1
  - Mistral-7B-Instruct-v0.2
  - XVERSE-65B-Chat
  - Yi-6B-Chat

Bug fixes

- Improve logging for unknown arguments by yhyu13 in 1868
- Fix an overflow issue in LLaMA2 PPO training 1742
- Fix 246 1561 1715 1764 1765 1770 1771 1784 1786 1795 1815 1819 1831

0.3.3

New features

- Support loading pre-trained models from ModelScope Hub by tastelikefeet in 1700
- Support launching a reward model server in the demo API by specifying `--stage=rm` in `api_demo.py`
- Support using a reward model server in PPO training by specifying `--reward_model_type api` (see the sketch after this list)
- Support adjusting the shard size of exported models via the `export_size` argument
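A two-step sketch of the reward-model-as-a-service flow described above. Only `--stage=rm` for `api_demo.py` and `--reward_model_type api` are taken from these notes; the argument carrying the server URL (written here as `--reward_model`), the entry points, paths, and datasets are assumptions.

```bash
# Step 1: serve a trained reward model through the demo API (placeholder paths).
python src/api_demo.py \
    --stage rm \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --checkpoint_dir saves/llama2-7b/lora/rm \
    --template default

# Step 2: run PPO against that server instead of a local reward adapter.
# Passing the URL via --reward_model is an assumption, not confirmed by the notes.
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage ppo \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --checkpoint_dir saves/llama2-7b/lora/sft \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --reward_model_type api \
    --reward_model http://localhost:8000/v1 \
    --output_dir saves/llama2-7b/lora/ppo
```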

New models

- Base models
  - DeepseekLLM-Base (7B/67B)
  - Qwen (1.8B/72B)
- Instruct/Chat models
  - DeepseekLLM-Chat (7B/67B)
  - Qwen-Chat (1.8B/72B)
  - Yi-34B-Chat

New datasets

- Supervised fine-tuning datasets
  - Nectar dataset by mlinmg in 1689
- Preference datasets
  - Nectar dataset by mlinmg in 1689

Bug fixes

- Improve `get_current_device` by billvsme in 1690
- Improve web UI preview by Samge0 in 1695
- Fix 1543 1597 1657 1658 1659 1668 1682 1696 1699 1703 1707 1710

0.3.2

New features

- Support training GPTQ-quantized models 729 1481 1545
- Support resuming reward model training 1567

Bug fixes

- Change default PPO parameters by hannlp in 1553
- Fix ChatGLM2&3 templates 1453 1480
- Fix 1548 by Outsider565 in 1544
- Fix 1263 1550 1558

0.3.0

New features

- Support full-parameter RLHF training (RM & PPO)
- Refactor the llmtuner core by hiyouga in 1525
- Better LLaMA Board: full-parameter RLHF and demo mode

New models

- Base models
  - ChineseLLaMA-1.3B
  - LingoWhale-8B
- Instruct/Chat models
  - ChineseAlpaca-1.3B
  - Zephyr-7B-Alpha/Beta

Bug fixes

- Fix bugs in partial-parameter (freeze) tuning
- Fix 224 336 931 936 1011 1489 1494 1507 1514
