Textgen

Latest version: v1.1.1


1.1.0

1. Released a Chinese-English Vicuna-13B model fine-tuned on the ShareGPT4 dataset, [shibing624/vicuna-baichuan-13b-chat](https://huggingface.co/shibing624/vicuna-baichuan-13b-chat), together with the corresponding LoRA model [shibing624/vicuna-baichuan-13b-chat-lora](https://huggingface.co/shibing624/vicuna-baichuan-13b-chat-lora).
2. Added support for multi-turn dialogue fine-tuning; multi-turn samples follow the format of [examples/data/sharegpt_zh_100_format.jsonl](https://github.com/shibing624/textgen/blob/main/examples/data/sharegpt_zh_100_format.jsonl) (a sample record is sketched right below).
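For reference, a single multi-turn record in this JSONL layout might look like the following. The field names follow the common ShareGPT schema and are an assumption here, so check the linked example file for the exact format the trainer expects.

```python
import json

# Illustrative only: one multi-turn record in a ShareGPT-style JSONL layout.
# The "conversations"/"from"/"value" keys are assumed from the common ShareGPT
# schema, not copied from examples/data/sharegpt_zh_100_format.jsonl.
record = {
    "conversations": [
        {"from": "human", "value": "What is LoRA fine-tuning?"},
        {"from": "gpt", "value": "LoRA trains small low-rank adapter matrices on top of a frozen base model."},
        {"from": "human", "value": "How does it differ from full fine-tuning?"},
        {"from": "gpt", "value": "Only the adapter weights are updated, so far fewer parameters are trained."},
    ]
}

# Each training sample is one JSON object per line.
with open("my_sharegpt_train.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```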



**Full Changelog**: https://github.com/shibing624/textgen/compare/1.0.2...1.1.0

1.0.2

Added SFT fine-tuning support for the ChatGLM2 and LLaMA2 models; a minimal training sketch follows.
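A minimal SFT sketch, assuming the `ChatGlmModel`/`train_model` interface used in the repository's example scripts; the class name, constructor arguments, and data path are illustrative, not a verified API reference.

```python
# Minimal SFT fine-tuning sketch (assumed API; see examples/chatglm/ in the
# repo for the authoritative training scripts).
from textgen import ChatGlmModel

model = ChatGlmModel("chatglm", "THUDM/chatglm2-6b", args={"output_dir": "outputs-chatglm2-sft/"})
model.train_model("examples/data/sharegpt_zh_100_format.jsonl")  # illustrative training file
print(model.predict(["Introduce yourself briefly."]))
```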

**Full Changelog**: https://github.com/shibing624/textgen/compare/1.0.1...1.0.2

1.0.0

- Added multi-turn dialogue fine-tuning for ChatGLM/LLaMA/Bloom models, and released a medical-consultation LoRA model, [shibing624/ziya-llama-13b-medical-lora](https://huggingface.co/shibing624/ziya-llama-13b-medical-lora).

Fine-tuning ChatGLM/LLaMA/Bloom models

1. Supports custom training datasets and training parameters; for the dataset format, see [examples/data/zh_csc_test.tsv](https://github.com/shibing624/textgen/blob/main/examples/data/zh_csc_test.tsv) or [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh).
2. Supports parameter-efficient fine-tuning methods such as AdaLoRA, LoRA, P-Tuning, and Prefix-Tuning, as well as full-parameter fine-tuning.
3. Supports multi-GPU training and mixed-precision training (a configuration sketch for items 2 and 3 follows this list).
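A configuration sketch for the points above, assuming the argument names used in the repository's example training scripts; `use_peft`, `peft_type`, `fp16`, and the other keys are assumptions here, not a verified API reference.

```python
# Sketch of selecting a parameter-efficient method plus mixed-precision flags
# (argument names assumed from the repo's example training scripts).
from textgen import LlamaModel

train_args = {
    "use_peft": True,
    "peft_type": "LORA",   # the release notes also mention AdaLoRA, P-Tuning, Prefix-Tuning
    "fp16": True,          # mixed-precision training
    "num_train_epochs": 1,
    "output_dir": "outputs-llama-lora/",
}
model = LlamaModel("llama", "shibing624/chinese-alpaca-plus-7b-hf", args=train_args)
model.train_model("examples/data/zh_csc_test.tsv")  # or an HF dataset such as shibing624/alpaca-zh
```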

**Full Changelog**: https://github.com/shibing624/textgen/compare/0.2.7...1.0.0

0.2.7

- Added SFT fine-tuning for ChatGLM/LLaMA/Bloom models, and released LoRA models for general-purpose dialogue and Chinese spelling correction (see the table below; an inference sketch follows it).

| Model | Arch | Introduction | Train Script | Predict Script |
|:----------------------------------------------------------------------------------------------------------|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------|
| [shibing624/chatglm-6b-csc-zh-lora](https://huggingface.co/shibing624/chatglm-6b-csc-zh-lora) | ChatGLM-6B | Fine-tuned ChatGLM-6B on the 270k-sample Chinese spelling correction dataset [shibing624/CSC](https://huggingface.co/datasets/shibing624/CSC); correction quality improved, and the fine-tuned LoRA weights are released | [training script](https://github.com/shibing624/textgen/blob/main/examples/chatglm/training_chatglm_csc_demo.py) | [predict script](https://github.com/shibing624/textgen/blob/main/examples/chatglm/csc_demo.py) |
| [shibing624/chatglm-6b-belle-zh-lora](https://huggingface.co/shibing624/chatglm-6b-belle-zh-lora) | ChatGLM-6B | Fine-tuned ChatGLM-6B on the 1M-sample Chinese ChatGPT-style instruction dataset [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN); Q&A quality improved, and the fine-tuned LoRA weights are released | [training script](https://github.com/shibing624/textgen/blob/main/examples/chatglm/training_chatglm_hfdataset_demo.py) | [predict script](https://github.com/shibing624/textgen/blob/main/examples/chatglm/training_chatglm_hfdataset_demo.py) |
| [shibing624/llama-13b-belle-zh-lora](https://huggingface.co/shibing624/llama-13b-belle-zh-lora) | LLaMA-13B | Fine-tuned Llama-13B on the 1M-sample Chinese ChatGPT-style instruction dataset [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN); Q&A quality improved, and the fine-tuned LoRA weights are released | [training script](https://github.com/shibing624/textgen/blob/main/examples/llama/training_llama_hfdataset_demo.py) | [predict script](https://github.com/shibing624/textgen/blob/main/examples/llama/training_llama_hfdataset_demo.py) |
| [shibing624/chinese-alpaca-plus-7b-hf](https://huggingface.co/shibing624/chinese-alpaca-plus-7b-hf) | LLaMA-7B | [Chinese LLaMA-Plus / Alpaca-Plus 7B release](https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v3.0): extends the LLaMA-7B vocabulary with Chinese tokens, continues pre-training on 120 GB of general-domain text, then instruction-tunes on a 4M-sample dataset to obtain the Chinese Alpaca-Plus model | [training script](https://github.com/shibing624/textgen/blob/main/examples/llama/training_llama_demo.py) | [predict script](https://github.com/shibing624/textgen/blob/main/examples/llama/training_llama_demo.py) |
| [shibing624/chinese-alpaca-plus-13b-hf](https://huggingface.co/shibing624/chinese-alpaca-plus-13b-hf) | LLaMA-13B | [Chinese LLaMA-Plus / Alpaca-Plus 13B release](https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v3.1): extends the LLaMA-13B vocabulary with Chinese tokens, continues pre-training on 120 GB of general-domain text, then instruction-tunes on a 4.3M-sample dataset to obtain the Chinese Alpaca-Plus model | [training script](https://github.com/shibing624/textgen/blob/main/examples/llama/training_llama_demo.py) | [predict script](https://github.com/shibing624/textgen/blob/main/examples/llama/training_llama_demo.py) |
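A minimal inference sketch for one of the released adapters, assuming the `peft_name` loading style shown in the repository's predict examples; see examples/chatglm/csc_demo.py for the authoritative usage.

```python
# Sketch of running the released Chinese-spelling-correction LoRA adapter.
# The peft_name keyword is an assumption based on the repo's predict examples.
from textgen import ChatGlmModel

model = ChatGlmModel(
    "chatglm",
    "THUDM/chatglm-6b",
    peft_name="shibing624/chatglm-6b-csc-zh-lora",
)
# The input sentence contains two common Chinese typos (因该 -> 应该, 坐 -> 座).
print(model.predict(["对下面中文拼写纠错：少先队员因该为老人让坐。"]))
```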





**Full Changelog**: https://github.com/shibing624/textgen/compare/0.2.5...0.2.7

0.2.5

What's Changed
* pad labels to max length by xingener in https://github.com/shibing624/textgen/pull/25

New Contributors
* xingener made their first contribution in https://github.com/shibing624/textgen/pull/25

**Full Changelog**: https://github.com/shibing624/textgen/compare/0.2.0...0.2.5
