LLaMA Factory

Latest version: v0.9.2

Page 5 of 6

0.1.7

New features

- Preview the training script in the Web UI by codemayq in #479 #511
- Support resuming training from checkpoints by niuba in #434 (`transformers>=4.31.0` required)
- Support two RoPE scaling methods: linear and NTK-aware scaling for LLaMA models (`transformers>=4.31.0` required)
- Support training the ChatGLM2-6B model
- Support PPO training in the bfloat16 data type #551
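The two RoPE scaling methods can be sketched in plain Python. This is an illustrative reimplementation, not LLaMA Factory's code; the helper names `rope_inv_freq` and `rope_angle` are hypothetical. Linear scaling divides position indices by the factor so long positions interpolate into the trained range, while NTK-aware scaling instead enlarges the rotary frequency base:

```python
def rope_inv_freq(dim, base=10000.0, scaling=None, factor=1.0):
    # Rotary inverse frequencies; scaling="ntk" enlarges the base so the
    # low-frequency dimensions stretch to cover a longer context.
    if scaling == "ntk":
        base = base * factor ** (dim / (dim - 2))
    return [base ** (-2.0 * i / dim) for i in range(dim // 2)]

def rope_angle(position, inv_freq, scaling=None, factor=1.0):
    # Linear scaling interpolates positions back into the trained range
    # by dividing the position index by the scaling factor.
    if scaling == "linear":
        position = position / factor
    return [position * f for f in inv_freq]
```

With `factor=2.0`, linear scaling makes position 4 produce the same rotation angles the unscaled model used for position 2, which is why it needs no retraining for modest extensions.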

Bug fixes

- Unusual output of quantized models #278 #391
- Runtime error in distributed DPO training #480
- Unexpected truncation in generation #532
- Dataset streaming error in pre-training #548 #549
- Tensor shape mismatch in PPO training using ChatGLM2 #527 #528
- #475 #476 #478 #481 #494 #551

0.1.6

- Adapt **[DPO training](https://arxiv.org/abs/2305.18290)** from the [TRL](https://github.com/huggingface/trl) library
- Support fine-tuning the Qwen-7B, Qwen-7B-Chat, XVERSE-13B, and ChatGLM2-6B models
- Implement the "safe" [ChatML template](https://github.com/openai/openai-python/blob/main/chatml.md) for Qwen-7B-Chat
- Better Web UI
- Prettier README by codemayq #382
- New features: #395 #451
- Fix InternLM-7B inference #312
- Fix bugs: #351 #354 #361 #376 #408 #417 #420 #423 #426
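The per-example DPO objective adapted from TRL can be written down directly. The sketch below is an assumption-laden simplification (real training averages per-token log-probabilities over batches, and the function name `dpo_loss` is hypothetical), but the formula matches Rafailov et al. (2023): reward the policy for widening its chosen-vs-rejected log-probability margin beyond the frozen reference model's margin.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Margin of the policy over the reference model on this preference pair.
    margin = ((policy_chosen_logp - policy_rejected_logp)
              - (ref_chosen_logp - ref_rejected_logp))
    # -log sigmoid(beta * margin): small when the policy already prefers
    # the chosen response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

Because only log-probabilities of complete responses are needed, DPO skips the separate reward model and PPO loop entirely, which is what makes it attractive as a drop-in preference-tuning stage.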

0.1.5

- Fix LLaMA-2 template #307
- Fix bug in preprocessing 968ce0dcce6bfef582ce37aea6566a65f5aac811
- Fix #294 #296

0.1.4

- Support [dataset streaming](https://huggingface.co/docs/datasets/stream)
- Fix LLaMA-2 #268
- Fix DeepSpeed ZeRO-3 model saving #274
- Fix #242 #284

0.1.3

0.1.2

- Support **LLaMA-2** (#202)
- Advanced configurations in Web UI
- Fix API (downgrade to `pydantic<2.0.0`)
- Fix Baichuan LoRA hyperparameters #194 #212
- Fix padding #196
- Fix ZeRO-3 #199
- Allow passing arguments to the app #213
- Code simplification
- Add ShareGPT dataset
