LlamaFactory

Latest version: v0.9.0

0.9.0

Congratulations on 30,000 stars 🎉 Follow us at *[X](https://twitter.com/llamafactory_ai)*

New features

- 🔥Support fine-tuning the **[Qwen2-VL](https://github.com/QwenLM/Qwen2-VL)** model on multi-image datasets by simonJJJ in #5290
- 🔥Support the time- and memory-efficient **[Liger-Kernel](https://github.com/linkedin/Liger-Kernel)** via the `enable_liger_kernel` argument by hiyouga (see the sketch after this list)
- 🔥Support the memory-efficient **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer via the `use_adam_mini` argument by relic-yuexi in #5095
- Support fine-tuning the Qwen2-VL model on video datasets by hiyouga in 5365 and BUAADreamer in 4136 (requires the patch in https://github.com/huggingface/transformers/pull/33307)
- Support fine-tuning vision language models (VLMs) with the RLHF/DPO/ORPO/SimPO approaches by hiyouga
- Support [Unsloth](https://unsloth.ai/blog/long-context)'s asynchronous activation offloading method via the `use_unsloth_gc` argument
- Support [vLLM](https://github.com/vllm-project/vllm) 0.6.0
- Support MFU (model FLOPs utilization) calculation by yzoaim in 5388
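
A minimal sketch of how the new efficiency flags might be combined in a training config. Only the three flags are taken from this release; the surrounding keys and values are typical LLaMA-Factory settings used here as placeholders:

```yaml
### model (placeholder)
model_name_or_path: meta-llama/Meta-Llama-3.1-8B-Instruct

### method (placeholder stage/type; the three flags below are from this release)
stage: sft
finetuning_type: lora
enable_liger_kernel: true  # time- and memory-efficient Liger-Kernel
use_adam_mini: true        # memory-efficient Adam-mini optimizer
use_unsloth_gc: true       # Unsloth's asynchronous activation offloading

### dataset/output (placeholders)
dataset: identity
output_dir: saves/llama3.1-8b/lora/sft
```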

New models

- Base models
  - Qwen2-Math (1.5B/7B/72B) 📄🔒
  - Yi-Coder (1.5B/9B) 📄
  - InternLM2.5 (1.8B/7B/20B) 📄
  - Gemma-2-2B 📄
  - Meta-Llama-3.1 (8B/70B) 📄
- Instruct/Chat models
  - MiniCPM/MiniCPM3 (1B/2B/4B) by LDLINGLINGLING in 4996 and 5372 📄🤖
  - Qwen2-Math-Instruct (1.5B/7B/72B) 📄🤖🔒
  - Yi-Coder-Chat (1.5B/9B) 📄🤖
  - InternLM2.5-Chat (1.8B/7B/20B) 📄🤖
  - Qwen2-VL-Instruct (2B/7B) 📄🤖🖼️
  - Gemma-2-2B-it by codemayq in 5037 📄🤖
  - Meta-Llama-3.1-Instruct (8B/70B) 📄🤖

New datasets

- Supervised fine-tuning datasets
  - Magpie-ultra-v0.1 (en) 📄
  - Pokemon-gpt4o-captions (en&zh) 📄🖼️
- Preference datasets
  - RLHF-V (en) 📄🖼️
  - VLFeedback (en) 📄🖼️

Changes

- For compatibility reasons, fine-tuning vision language models (VLMs) requires `transformers>=4.45.0.dev0`; run `pip install git+https://github.com/huggingface/transformers.git` to install it.
- The `visual_inputs` argument has been deprecated; you no longer need to specify it.
- LlamaFactory now adopts lazy loading for multimodal inputs; see 5346 for details. Use `preprocessing_batch_size` to restrict the batch size during dataset pre-processing (supported by naem1023 in 5323).
- LlamaFactory now supports `lmf` (equivalent to `llamafactory-cli`) as a shortcut command, as shown below.
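
A quick sketch of the last two changes; the config filename and the batch-size value are illustrative:

```yaml
# Launch with the new shortcut (the two commands are equivalent):
#   lmf train sft_config.yaml
#   llamafactory-cli train sft_config.yaml
preprocessing_batch_size: 1000  # cap the batch size used during dataset pre-processing
```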

Bug fixes

- Fix LlamaBoard export by liuwwang in 4950
- Add ROCm dockerfiles by HardAndHeavy in 4970
- Fix deepseek template by piamo in 4892
- Fix the PiSSA save callback by codemayq in 4995
- Add Korean display language in LlamaBoard by Eruly in 5010
- Fix deepseekcoder template by relic-yuexi in 5072
- Fix examples by codemayq in 5109
- Fix `mask_history` truncation from the last turn by YeQiuO in 5115
- Fix jinja template by YeQiuO in 5156
- Fix PPO optimizer and lr scheduler by liu-zichen in 5163
- Add SailorLLM template by chenhuiyu in 5185
- Fix XPU device count by Zxilly in 5188
- Fix bf16 check in NPU by Ricardo-L-C in 5193
- Update NPU docker image by MengqingCao in 5230
- Fix image input api by marko1616 in 5237
- Add liger-kernel link by ByronHsu in 5317
- Fix 4684 4696 4917 4925 4928 4944 4959 4992 5035 5048 5060 5092 5228 5252 5292 5295 5305 5307 5308 5324 5331 5334 5338 5344 5366 5384

0.8.3

New features

- 🔥Support [contamination-free packing](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing) via the `neat_packing` argument by chuan298 in #4224 (see the example after this list)
- 🔥Support split evaluation via the `eval_dataset` argument by codemayq in 4691
- 🔥Support HQQ/EETQ quantization via the `quantization_method` argument by hiyouga
- 🔥Support ZeRO-3 when using BAdam by Ledzy in 4352
- Support training on the last turn only via the `mask_history` argument by aofengdaxia in 4878
- Add an NPU Dockerfile by MengqingCao in 4355
- Support building FlashAttention2 in the Dockerfile by hzhaoy in 4461
- Support `batch_eval_metrics` at evaluation by hiyouga
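
A sketch combining the new 0.8.3 arguments in one config. The argument names come from the notes above; the values, and whether `neat_packing` must be paired with `packing`, are assumptions:

```yaml
neat_packing: true            # contamination-free sequence packing
packing: true                 # assumption: packing itself is enabled alongside neat_packing
eval_dataset: alpaca_en_demo  # placeholder: a separate dataset used for evaluation
quantization_method: hqq      # assumed value spelling; EETQ is the other new option
mask_history: true            # train on the last conversation turn only
```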

New models

- Base models
  - InternLM2.5-7B 📄
  - Gemma2 (9B/27B) 📄
- Instruct/Chat models
  - TeleChat-1B-Chat by hzhaoy in 4651 📄🤖
  - InternLM2.5-7B-Chat 📄🤖
  - CodeGeeX4-9B-Chat 📄🤖
  - Gemma2-it (9B/27B) 📄🤖

Changes

- Fix the DPO cutoff length and deprecate the `reserved_label_len` argument
- Improve the loss function for reward modeling

Bug fixes

- Fix numpy version by MengqingCao in 4382
- Improve the CLI by kno10 in 4409
- Add `tool_format` parameter to control prompt by mMrBun in 4417
- Automatically label NPU issues by MengqingCao in 4445
- Fix flash_attn args by stceum in 4446
- Fix docker-compose path by MengqingCao in 4544
- Fix torch-npu dependency by hashstone in 4561
- Fix deepspeed + pissa by hzhaoy in 4580
- Improve the CLI by injet-zhou in 4590
- Add project by wzh1994 in 4662
- Fix docstring by hzhaoy in 4673
- Fix Windows command preview in WebUI by marko1616 in 4700
- Fix vllm 0.5.1 by T-Atlas in 4706
- Fix save value head model callback by yzoaim in 4746
- Fix CUDA Dockerfile by hzhaoy in 4781
- Fix examples by codemayq in 4804
- Fix evaluation data split by codemayq in 4821
- Fix CI by codemayq in 4822
- Fix 2290 3974 4113 4379 4398 4402 4410 4419 4432 4456 4458 4549 4556 4579 4592 4609 4617 4674 4677 4683 4684 4699 4705 4731 4742 4779 4780 4786 4792 4820 4826

0.8.2

New features

- Support GLM-4 tools and parallel function calling by mMrBun in 4173
- Support **PiSSA** fine-tuning by hiyouga in 4307 (see the sketch after this list)
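
The notes do not show the flags that enable PiSSA, so the keys below are assumptions about the interface; treat this as an illustrative sketch rather than the confirmed API:

```yaml
finetuning_type: lora
pissa_init: true     # assumed flag: initialize the LoRA adapter from PiSSA's principal components
pissa_convert: true  # assumed flag: convert the PiSSA adapter back to LoRA format when saving
```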

New models

- Base models
  - DeepSeek-Coder-V2 (16B MoE/236B MoE) 📄
- Instruct/Chat models
  - MiniCPM-2B 📄🤖
  - DeepSeek-Coder-V2-Instruct (16B MoE/236B MoE) 📄🤖

New datasets

- Supervised fine-tuning datasets
  - Neo-sft (zh)
  - Magpie-Pro-300K-Filtered (en) by EliMCosta in 4309
  - WebInstruct (en) by EliMCosta in 4309

Bug fixes

- Fix DPO+ZeRO3 problem by hiyouga
- Add MANIFEST.in by iamthebot in 4191
- Fix `eos_token` in Llama3 pre-training by dignfei in 4204
- Fix vllm version by kimdwkimdw and hzhaoy in 4234 and 4246
- Fix Dockerfile by EliMCosta in 4314
- Fix pandas version by zzxzz12345 in 4334
- Fix 3162 3196 3778 4198 4209 4221 4227 4238 4242 4271 4292 4295 4326 4346 4357 4362

0.8.1

- Fix 2666: Unsloth+DoRA
- Fix 4145: the PyTorch version in the Docker image does not match the vLLM requirement
- Fix 4160: a problem in the LongLoRA implementation, resolved with the help of f-q23
- Fix 4167: an installation problem on Windows, by yzoaim

0.8.0

Stronger [LlamaBoard](https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#fine-tuning-with-llama-board-gui-powered-by-gradio) 💪😀

- Support single-node distributed training in Web UI
- Add dropdown menu for easily resuming from checkpoints and picking saved configurations by hiyouga and hzhaoy in 4053
- Support selecting checkpoints of full/freeze tuning
- Add throughput metrics to LlamaBoard by injet-zhou in 4066
- Faster UI loading

New features

- Add KTO algorithm by enji-zhou in 3785
- Add SimPO algorithm by hiyouga
- Support passing `max_lora_rank` to the vLLM backend by jue-jue-zi in 3794
- Support preference datasets in sharegpt format and remove big files from the git repo by hiyouga in 3799
- Support setting system messages in CLI inference by ycjcl868 in 3812
- Add `num_samples` option in `dataset_info.json` by seanzhang-zhichen in 3829
- Add NPU docker image by dongdongqiang2018 in 3876
- Improve NPU document by MengqingCao in 3930
- Support SFT packing with greedy knapsack algorithm by AlongWY in 4009
- Add `llamafactory-cli env` for bug reports
- Support image input in the API mode
- Support random weight initialization via the `train_from_scratch` argument (see the sketch after this list)
- Initialize CI
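
A minimal sketch of training from scratch. Only `train_from_scratch` comes from this release; the model path and stage are placeholders (the pretrained checkpoint would supply only the architecture and tokenizer here):

```yaml
model_name_or_path: meta-llama/Meta-Llama-3-8B  # placeholder: provides config and tokenizer
stage: pt                                       # placeholder: pre-training stage
train_from_scratch: true                        # randomly initialize weights instead of loading pretrained ones
```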

New models

- Base models
  - Qwen2 (0.5B/1.5B/7B/72B/MoE) 📄
  - PaliGemma-3B (pt/mix) 📄🖼️
  - GLM-4-9B 📄
  - Falcon-11B 📄
  - DeepSeek-V2-Lite (16B) 📄
- Instruct/Chat models
  - Qwen2-Instruct (0.5B/1.5B/7B/72B/MoE) 📄🤖
  - Mistral-7B-Instruct-v0.3 📄🤖
  - Phi-3-small-8k-instruct (7B) 📄🤖
  - Aya-23 (8B/35B) 📄🤖
  - OpenChat-3.6-8B 📄🤖
  - GLM-4-9B-Chat 📄🤖
  - TeleChat-12B-Chat by hzhaoy in 3958 📄🤖
  - Phi-3-medium-8k-instruct (14B) 📄🤖
  - DeepSeek-V2-Lite-Chat (16B) 📄🤖
  - Codestral-22B-v0.1 📄🤖

New datasets

- Pre-training datasets
  - FineWeb (en)
  - FineWeb-Edu (en)
- Supervised fine-tuning datasets
  - Ruozhiba-GPT4 (zh)
  - STEM-Instruction (zh)
- Preference datasets
  - Argilla-KTO-mix-15K (en)
  - UltraFeedback (en)

Bug fixes

- Fix RLHF for multimodal fine-tuning
- Fix the LoRA target in multimodal fine-tuning by BUAADreamer in 3835
- Fix `yi` template by Yimi81 in 3925
- Fix abort issue in LlamaBoard by injet-zhou in 3987
- Pass `scheduler_specific_kwargs` to `get_scheduler` by Uminosachi in 4006
- Fix hyperparameter help texts by xu-song in 4007
- Update issue template by statelesshz in 4011
- Fix vllm dtype parameter
- Fix exporting hyperparameters by MengqingCao in 4080
- Fix DeepSpeed ZeRO3 in PPO trainer
- Fix 3108 3387 3646 3717 3764 3769 3803 3807 3818 3837 3847 3853 3873 3900 3931 3965 3971 3978 3992 4005 4012 4013 4022 4033 4043 4061 4075 4077 4079 4085 4090 4120 4132 4137 4139

0.7.1

🚨🚨 Core refactor 🚨🚨

- Add **CLI** usage; we now recommend using `llamafactory-cli` to launch training and inference. The entry point is located at [cli.py](https://github.com/hiyouga/LLaMA-Factory/blob/main/src/llamafactory/cli.py)
- Rename files: `train_bash.py` -> `train.py`, `train_web.py` -> `webui.py`, `api_demo.py` -> `api.py`
- Remove files: `cli_demo.py`, `evaluate.py`, `export_model.py`, `web_demo.py`; use `llamafactory-cli chat/eval/export/webchat` instead (see the mapping sketched after this section)
- Use **YAML configs** in the examples instead of shell scripts for better readability
- Remove the sha1 hash check when loading datasets
- Rename arguments: `num_layer_trainable` -> `freeze_trainable_layers`, `name_module_trainable` -> `freeze_trainable_modules`

The above changes were made by hiyouga in 3596

REMINDER: [Installation](https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#installation) is now **mandatory** to use LLaMA Factory
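
A rough mapping from the old entry points to the new CLI, based on the renames and removals listed above; the `python src/...` invocation style and the `config.yaml` placeholders are assumptions for illustration:

```yaml
# Old script                    ->  New command (illustrative)
#   python src/train_bash.py    ->  llamafactory-cli train config.yaml
#   python src/train_web.py     ->  llamafactory-cli webui
#   python src/api_demo.py      ->  llamafactory-cli api config.yaml
#   python src/cli_demo.py      ->  llamafactory-cli chat config.yaml
#   python src/evaluate.py      ->  llamafactory-cli eval config.yaml
#   python src/export_model.py  ->  llamafactory-cli export config.yaml
#   python src/web_demo.py      ->  llamafactory-cli webchat config.yaml
```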

New features

- Support training and inference on Ascend NPU 910 devices by zhou-wjjw and statelesshz (Docker images are also provided)
- Support `stop` parameter in vLLM engine by zhaonx in 3527
- Support fine-tuning token embeddings in freeze tuning via the `freeze_extra_modules` argument (see the sketch after this list)
- Add Llama3 [quickstart](https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#quickstart) to readme
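
A sketch of freeze tuning with the renamed arguments from the core refactor plus the new `freeze_extra_modules`; the values are illustrative assumptions:

```yaml
finetuning_type: freeze
freeze_trainable_layers: 2                  # renamed from num_layer_trainable
freeze_trainable_modules: all               # renamed from name_module_trainable
freeze_extra_modules: embed_tokens,lm_head  # assumption: also train token embeddings and LM head
```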

New models

- Base models
  - Yi-1.5 (6B/9B/34B) 📄
  - DeepSeek-V2 (236B) 📄
- Instruct/Chat models
  - Yi-1.5-Chat (6B/9B/34B) 📄🤖
  - Yi-VL-Chat (6B/34B) by BUAADreamer in 3748 📄🖼️🤖
  - Llama3-Chinese-Chat (8B/70B) 📄🤖
  - DeepSeek-V2-Chat (236B) 📄🤖

Bug fixes

- Add badam arguments to LlamaBoard by codemayq in 3487
- Add openai data format to readme by khazic in 3490
- Fix a slow operation in the DPO/ORPO trainer by hiyouga
- Fix badam examples by pha123661 in 3578
- Fix download link of the nectar_rm dataset by ZeyuTeng96 in 3588
- Add project by Katehuuh in 3601
- Fix dockerfile by gaussian8 in 3604
- Fix full tuning of MLLMs by BUAADreamer in 3651
- Fix gradio environment variables by cocktailpeanut in 3654
- Fix a typo and add logging in the API by Tendo33 in 3655
- Fix download link of the phi-3 model by YUUUCC in 3683
- Fix 3559 3560 3602 3603 3606 3625 3650 3658 3674 3694 3702 3724 3728
