XTuner

Latest version: v0.1.23

0.1.5

What's Changed
* [Fix] Rename internlm-chat-20b by LZHgrla in https://github.com/InternLM/xtuner/pull/131
* [Fix] Fix CPU OOM during the merge step by LZHgrla in https://github.com/InternLM/xtuner/pull/133
* [Fix] Add `--offload-folder` for merge and chat by LZHgrla in https://github.com/InternLM/xtuner/pull/140
* [Feature] Support to remove history for chat script by LZHgrla in https://github.com/InternLM/xtuner/pull/144
* [Docs] add conda env create by KevinNuNu in https://github.com/InternLM/xtuner/pull/147
* [Fix] Fix activation checkpointing bug by LZHgrla in https://github.com/InternLM/xtuner/pull/159
* [Refactor] Refactor the preprocess of dataset by LZHgrla in https://github.com/InternLM/xtuner/pull/163
* [Feature] Support deepspeed for HF trainer by LZHgrla in https://github.com/InternLM/xtuner/pull/164
* [Feature] Support the fine-tuning of MSAgent dataset by LZHgrla in https://github.com/InternLM/xtuner/pull/156
* [Fix] Fix bugs on `traverse_dict` by LZHgrla in https://github.com/InternLM/xtuner/pull/141
* [Doc] Update `chat.md` by LZHgrla in https://github.com/InternLM/xtuner/pull/168
* bump version to 0.1.5 by LZHgrla in https://github.com/InternLM/xtuner/pull/171
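
The `--offload-folder` option from #140, together with the CPU-OOM fix for the merge step in #133, deals with memory pressure when merging adapters into large base models. As a rough illustration only (not XTuner's actual code), Hugging Face's `from_pretrained` accepts an `offload_folder` argument that spills weights to disk once GPU and CPU memory are exhausted; the CLI flag presumably forwards a directory in this way:

```python
# Illustrative sketch, not XTuner's implementation: load a large model with
# accelerate's big-model inference and spill overflow weights to a disk folder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "internlm/internlm-chat-20b"  # example model; any HF repo works

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",            # let accelerate place layers on GPU/CPU/disk
    offload_folder="./offload",   # directory used when RAM is also exhausted
    trust_remote_code=True,
)
```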

New Contributors
* KevinNuNu made their first contribution in https://github.com/InternLM/xtuner/pull/147

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.4...v0.1.5

0.1.4

What's Changed
* [Fix] fix merge command in README.md by liuyanyi in https://github.com/InternLM/xtuner/pull/111
* [Doc] Add doc about using alpaca format custom dataset by HIT-cwh in https://github.com/InternLM/xtuner/pull/114
* [Feature] Support InternLM-20B by LZHgrla in https://github.com/InternLM/xtuner/pull/128
* bump version to 0.1.4 by LZHgrla in https://github.com/InternLM/xtuner/pull/129

New Contributors
* liuyanyi made their first contribution in https://github.com/InternLM/xtuner/pull/111

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.3...v0.1.4

0.1.3

What's Changed
* [Feature] Add Baichuan2 7B-chat, 13B-base, 13B-chat by LZHgrla in https://github.com/InternLM/xtuner/pull/103
* [Fix] Use `token_id` instead of `token` for `encode_fn` & Set eval mode before generate by LZHgrla in https://github.com/InternLM/xtuner/pull/107
* [Feature] Support log processed dataset & Fix doc by HIT-cwh in https://github.com/InternLM/xtuner/pull/101
* [Fix] move toy data by HIT-cwh in https://github.com/InternLM/xtuner/pull/108
* bump version to 0.1.3 by HIT-cwh in https://github.com/InternLM/xtuner/pull/109


**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.2...v0.1.3

0.1.2

What's Changed
* [Doc] Fix dataset docs by HIT-cwh in https://github.com/InternLM/xtuner/pull/87
* [Doc] Fix readme by HIT-cwh in https://github.com/InternLM/xtuner/pull/92
* [Improve] Add ZeRO2-offload configs by LZHgrla in https://github.com/InternLM/xtuner/pull/94
* [Improve] Redesign convert tools by LZHgrla in https://github.com/InternLM/xtuner/pull/96
* [Fix] fix generation config by HIT-cwh in https://github.com/InternLM/xtuner/pull/98
* [Feature] Support Baichuan2 models by LZHgrla in https://github.com/InternLM/xtuner/pull/102
* bump version to 0.1.2 by LZHgrla in https://github.com/InternLM/xtuner/pull/100
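
The ZeRO2-offload configs from #94 trade GPU memory for CPU memory by partitioning gradients and optimizer states across ranks and keeping the optimizer states on the CPU. A minimal sketch of such DeepSpeed settings (illustrative only, not the config files shipped with XTuner):

```python
# Illustrative DeepSpeed ZeRO stage-2 settings with CPU optimizer offload
# (not XTuner's shipped configs): gradients and optimizer states are
# partitioned across ranks, and optimizer states live in CPU memory.
ds_zero2_offload = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}
```

With the Hugging Face Trainer (see #164 in v0.1.5), a dict like this can be passed through `TrainingArguments(deepspeed=...)`.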


**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.1...v0.1.2

0.1.1

What's Changed
* [Doc] Update WeChat image by LZHgrla in https://github.com/InternLM/xtuner/pull/74
* [Doc] Modify install commands for DeepSpeed integration by LZHgrla in https://github.com/InternLM/xtuner/pull/75
* Add bot: Create .owners.yml by del-zhenwu in https://github.com/InternLM/xtuner/pull/81
* [Improve] Add several InternLM-7B full parameters fine-tuning configs by LZHgrla in https://github.com/InternLM/xtuner/pull/84
* [Feature] Add starcoder example by HIT-cwh in https://github.com/InternLM/xtuner/pull/83
* [Doc] Add data_prepare.md docs by LZHgrla in https://github.com/InternLM/xtuner/pull/82
* bump version to 0.1.1 by HIT-cwh in https://github.com/InternLM/xtuner/pull/85

New Contributors
* del-zhenwu made their first contribution in https://github.com/InternLM/xtuner/pull/81

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.0...v0.1.1

0.1.0

XTuner is released! 🔥🔥🔥

Highlights

- XTuner supports LLM fine-tuning on consumer-grade GPUs. The minimum GPU memory required for 7B LLM fine-tuning is only **8GB** (see the sketch after these highlights).
- XTuner supports various LLMs, datasets, algorithms and training pipelines.
- Several fine-tuned adapters are released alongside this version, covering various use cases such as the colorist LLM, the plugin-based (tool-calling) LLM, and more. For further details, please visit [XTuner on HuggingFace](https://huggingface.co/xtuner)!
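
The 8 GB figure reflects 4-bit QLoRA-style fine-tuning, where the frozen base model is quantized and only small LoRA adapters are trained. Purely as an illustration of that technique (not XTuner's internals), a setup with `bitsandbytes` and `peft` looks roughly like this:

```python
# Sketch of QLoRA-style low-memory fine-tuning (illustrative, not XTuner code):
# the base model is loaded in 4-bit and only the LoRA adapter weights train.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm-7b",          # example 7B base model
    quantization_config=bnb_config,
    trust_remote_code=True,
)
lora_config = LoraConfig(r=64, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only the adapter parameters are trainable
```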
