XTuner

Latest version: v0.1.23

0.1.23

What's Changed
* Support InternVL 1.5/2.0 finetune by hhaAndroid in https://github.com/InternLM/xtuner/pull/737 (config sketch below)
* [Bug] fix preference_collate_fn attn_mask by HIT-cwh in https://github.com/InternLM/xtuner/pull/859
* bump version to 0.1.23 by HIT-cwh in https://github.com/InternLM/xtuner/pull/862


**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.22...v0.1.23
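
Since the InternVL support in #737 ships as new config files, one way to adapt them is through mmengine's `Config` API, which XTuner configs are built on. A minimal sketch; the config filename below is a placeholder (list the real ones with `xtuner list-cfg -p internvl`):

```python
# Sketch: load, tweak, and re-dump an InternVL finetune config before
# training. The filename is a placeholder, not a shipped config name;
# discover the actual configs with `xtuner list-cfg -p internvl`.
from mmengine.config import Config

cfg = Config.fromfile('internvl_finetune_example.py')  # hypothetical name
cfg.max_epochs = 1  # assumption: the config exposes max_epochs at top level
cfg.dump('my_internvl_finetune.py')
# then: xtuner train my_internvl_finetune.py
```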

0.1.22

What's Changed
* [Refactor] fix internlm2 dispatch by HIT-cwh in https://github.com/InternLM/xtuner/pull/779
* Fix zero3 compatibility issue for DPO by Johnson-Wang in https://github.com/InternLM/xtuner/pull/781
* [Fix] Fix map_fn in custom_dataset/sft by fanqiNO1 in https://github.com/InternLM/xtuner/pull/785
* [Fix] fix configs by HIT-cwh in https://github.com/InternLM/xtuner/pull/783
* [Docs] DPO and Reward Model documents by RangiLyu in https://github.com/InternLM/xtuner/pull/751
* Support internlm2.5 by HIT-cwh in https://github.com/InternLM/xtuner/pull/803
* [Bugs] fix dispatch bugs when model not in LOWEST_TRANSFORMERS_VERSION by HIT-cwh in https://github.com/InternLM/xtuner/pull/802
* [Docs] fix benchmark table by HIT-cwh in https://github.com/InternLM/xtuner/pull/801
* [Feature] support output without loss in openai_map_fn by HIT-cwh in https://github.com/InternLM/xtuner/pull/816 (record sketch below)
* [Docs] fix typos in sp docs by HIT-cwh in https://github.com/InternLM/xtuner/pull/821
* [Feature] Support the DatasetInfoHook of DPO training by xu-song in https://github.com/InternLM/xtuner/pull/787
* [Enhance]: Fix sequence parallel memory bottleneck in DPO & ORPO by RangiLyu in https://github.com/InternLM/xtuner/pull/830
* [Fix] Fix typo by bychen7 in https://github.com/InternLM/xtuner/pull/795
* [Fix] fix initialization of ref_llm for full param dpo training with zero-3 by xu-song in https://github.com/InternLM/xtuner/pull/778
* [Bugs] Fix attn mask by HIT-cwh in https://github.com/InternLM/xtuner/pull/852
* fix lint by HIT-cwh in https://github.com/InternLM/xtuner/pull/854
* [Bugs] Fix dispatch attn bug by HIT-cwh in https://github.com/InternLM/xtuner/pull/829
* [Docs]: update readme and DPO en docs by RangiLyu in https://github.com/InternLM/xtuner/pull/853
* Added MiniCPM config files to support SFT, QLoRA, LoRA and DPO by LDLINGLINGLING in https://github.com/InternLM/xtuner/pull/847
* fix lint by HIT-cwh in https://github.com/InternLM/xtuner/pull/856
* bump version to 0.1.22 by HIT-cwh in https://github.com/InternLM/xtuner/pull/855
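
Of note above, #816 lets `openai_map_fn`, which ingests OpenAI-style chat records, mark an assistant turn as excluded from the loss. A sketch of the record shape; the `loss` field name is inferred from the PR title, not confirmed:

```python
# OpenAI-style chat record as consumed by openai_map_fn. Marking an
# assistant turn with "loss": False (field name is an assumption based on
# the #816 title) would keep that output out of the training loss.
sample = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Name one XTuner feature."},
        {"role": "assistant", "content": "Sequence parallel training.",
         "loss": False},  # supervised context only, no loss on this turn
    ]
}
```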

New Contributors
* Johnson-Wang made their first contribution in https://github.com/InternLM/xtuner/pull/781
* xu-song made their first contribution in https://github.com/InternLM/xtuner/pull/787
* bychen7 made their first contribution in https://github.com/InternLM/xtuner/pull/795
* LDLINGLINGLING made their first contribution in https://github.com/InternLM/xtuner/pull/847

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.21...v0.1.22

0.1.21

What's Changed
* [Feature] Support DPO, ORPO and Reward Model by RangiLyu in https://github.com/InternLM/xtuner/pull/743 (data sketch below)
* [Bugs] fix dispatch bugs by HIT-cwh in https://github.com/InternLM/xtuner/pull/775
* [Bugs] Fix HFCheckpointHook bugs when training deepseekv2 and mixtral withou… by HIT-cwh in https://github.com/InternLM/xtuner/pull/774
* [Feature] Support the scenario where sp size is not divisible by attn head num by HIT-cwh in https://github.com/InternLM/xtuner/pull/769
* bump version to 0.1.21 by HIT-cwh in https://github.com/InternLM/xtuner/pull/776


**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.20...v0.1.21
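
The headline feature here, DPO/ORPO (#743), trains on preference pairs rather than single targets. One plausible sample layout, assuming the common `prompt`/`chosen`/`rejected` convention; the formats XTuner actually accepts are spelled out in the docs added in #751:

```python
# Sketch of a preference-pair record for DPO/ORPO. The key names follow
# common preference-dataset conventions and are an assumption here; see
# the DPO documentation from #751 for the authoritative formats.
preference_sample = {
    "prompt": [{"role": "user", "content": "Summarize XTuner in one line."}],
    "chosen": [{"role": "assistant",
                "content": "A toolkit for efficient LLM fine-tuning."}],
    "rejected": [{"role": "assistant", "content": "It is software."}],
}
```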

0.1.20

What's Changed
* [Enhancement] Optimizing Memory Usage during ZeRO Checkpoint Convert by pppppM in https://github.com/InternLM/xtuner/pull/582
* [Fix] ZeRO2 Checkpoint Convert Bug by pppppM in https://github.com/InternLM/xtuner/pull/684
* [Feature] support auto saving tokenizer by HIT-cwh in https://github.com/InternLM/xtuner/pull/696 (sketch below)
* [Bug] fix internlm2 flash attn by HIT-cwh in https://github.com/InternLM/xtuner/pull/693
* [Bug] The LoRA model will have `meta-tensor` during the `pth_to_hf` phase. by pppppM in https://github.com/InternLM/xtuner/pull/697
* [Bug] fix cfg check by HIT-cwh in https://github.com/InternLM/xtuner/pull/729
* [Bugs] Fix bugs caused by sequence parallel when deepspeed is not used. by HIT-cwh in https://github.com/InternLM/xtuner/pull/752
* [Fix] Avoid incorrect `torchrun` invocation with `--launcher slurm` by LZHgrla in https://github.com/InternLM/xtuner/pull/728
* [Fix] fix failure to save eval results with multi-node pretrain by HoBeedzc in https://github.com/InternLM/xtuner/pull/678
* [Improve] Support the export of various LLaVA formats with `pth_to_hf` by LZHgrla in https://github.com/InternLM/xtuner/pull/708
* [Refactor] refactor dispatch_modules by HIT-cwh in https://github.com/InternLM/xtuner/pull/731
* [Docs] Readthedocs ZH by pppppM in https://github.com/InternLM/xtuner/pull/553
* [Feature] Support finetune Deepseek v2 by HIT-cwh in https://github.com/InternLM/xtuner/pull/663
* bump version to 0.1.20 by HIT-cwh in https://github.com/InternLM/xtuner/pull/766

New Contributors
* HoBeedzc made their first contribution in https://github.com/InternLM/xtuner/pull/678

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.19...v0.1.20
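
The tokenizer auto-save from #696 boils down to persisting the tokenizer next to the exported weights so the output folder is self-contained. In plain Hugging Face terms (model name and paths are placeholders):

```python
# What "auto saving tokenizer" amounts to: the tokenizer lands next to the
# model so the exported directory loads standalone. Paths are placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-7b")
tokenizer.save_pretrained("./work_dirs/my_run/hf_model")
# AutoTokenizer.from_pretrained("./work_dirs/my_run/hf_model") now works
# without reaching back to the base model repo.
```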

0.1.19

What's Changed
* [Fix] LLaVA-v1.5 official settings by LZHgrla in https://github.com/InternLM/xtuner/pull/594
* [Feature] Release LLaVA-Llama-3-8B by LZHgrla in https://github.com/InternLM/xtuner/pull/595
* [Improve] Add single-gpu configs for LLaVA-Llama-3-8B by LZHgrla in https://github.com/InternLM/xtuner/pull/596
* [Docs] Add wisemodel badge by LZHgrla in https://github.com/InternLM/xtuner/pull/597
* [Feature] Support load_json_file with json.load by HIT-cwh in https://github.com/InternLM/xtuner/pull/610
* [Feature] Support Microsoft Phi-3 4K & 128K Instruct Models by pppppM in https://github.com/InternLM/xtuner/pull/603
* [Fix] set `dataloader_num_workers=4` for llava training by LZHgrla in https://github.com/InternLM/xtuner/pull/611
* [Fix] Do not set attn_implementation to flash_attention_2 or sdpa if users already set it in XTuner configs. by HIT-cwh in https://github.com/InternLM/xtuner/pull/609
* [Release] LLaVA-Phi-3-mini by LZHgrla in https://github.com/InternLM/xtuner/pull/615
* Update README.md by eltociear in https://github.com/InternLM/xtuner/pull/608
* [Feature] Refine sp api by HIT-cwh in https://github.com/InternLM/xtuner/pull/619
* [Feature] Add conversion scripts for LLaVA-Llama-3-8B by LZHgrla in https://github.com/InternLM/xtuner/pull/618
* [Fix] Convert nan to 0 just for logging by HIT-cwh in https://github.com/InternLM/xtuner/pull/625
* [Docs] Delete colab and add speed benchmark by HIT-cwh in https://github.com/InternLM/xtuner/pull/617
* [Feature] Support dsz3+qlora by HIT-cwh in https://github.com/InternLM/xtuner/pull/600
* [Feature] Add qwen1.5 110b cfgs by HIT-cwh in https://github.com/InternLM/xtuner/pull/632
* check transformers version before dispatch by HIT-cwh in https://github.com/InternLM/xtuner/pull/672
* [Fix] `convert_xtuner_weights_to_hf` with frozen ViT by LZHgrla in https://github.com/InternLM/xtuner/pull/661
* [Fix] Fix batch-size setting of single-card LLaVA-Llama-3-8B configs by LZHgrla in https://github.com/InternLM/xtuner/pull/598
* [Feature] add HFCheckpointHook to auto save hf model after the whole training phase by HIT-cwh in https://github.com/InternLM/xtuner/pull/621 (config sketch below)
* Remove test info in DatasetInfoHook by hhaAndroid in https://github.com/InternLM/xtuner/pull/622
* [Improve] Support `safe_serialization` saving by LZHgrla in https://github.com/InternLM/xtuner/pull/648
* bump version to 0.1.19 by HIT-cwh in https://github.com/InternLM/xtuner/pull/675

New Contributors
* eltociear made their first contribution in https://github.com/InternLM/xtuner/pull/608

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.18...v0.1.19
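
The HFCheckpointHook from #621 writes a Hugging Face-format model automatically once training finishes, replacing a manual `xtuner convert pth_to_hf` run. A config-file sketch; the import path matches XTuner's hook layout, but the argument-free construction is an assumption:

```python
# Config-file sketch: register HFCheckpointHook so HF-format weights are
# exported at the end of training. Import path and argument-free usage are
# assumptions; check the hook's signature in this release.
from xtuner.engine.hooks import HFCheckpointHook

custom_hooks = [
    # ... existing hooks such as DatasetInfoHook / EvaluateChatHook ...
    dict(type=HFCheckpointHook),
]
```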

0.1.18

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/537
* [Fix] Fix typo by KooSung in https://github.com/InternLM/xtuner/pull/547
* [Feature] support mixtral varlen attn by HIT-cwh in https://github.com/InternLM/xtuner/pull/564
* [Feature] Support qwen sp and varlen attn by HIT-cwh in https://github.com/InternLM/xtuner/pull/565
* [Fix] Fix attention mask in `default_collate_fn` by pppppM in https://github.com/InternLM/xtuner/pull/567
* Accept pytorch==2.2 as the bugs in triton 2.2 are fixed by HIT-cwh in https://github.com/InternLM/xtuner/pull/548
* [Feature] Refine Sequence Parallel API by HIT-cwh in https://github.com/InternLM/xtuner/pull/555 (config sketch below)
* [Fix] Enhance `split_list` to support `value` at the beginning by LZHgrla in https://github.com/InternLM/xtuner/pull/568
* [Feature] Support cohere by HIT-cwh in https://github.com/InternLM/xtuner/pull/569
* [Fix] Fix rotary_seq_len in varlen attn in qwen by HIT-cwh in https://github.com/InternLM/xtuner/pull/574
* [Docs] Add sequence parallel related to readme by HIT-cwh in https://github.com/InternLM/xtuner/pull/578
* [Bug] SUPPORT_FLASH1 = digit_version(torch.__version__) >= digit_version('2… by HIT-cwh in https://github.com/InternLM/xtuner/pull/587
* [Feature] Support Llama 3 by LZHgrla in https://github.com/InternLM/xtuner/pull/585
* [Docs] Add llama3 8B readme by HIT-cwh in https://github.com/InternLM/xtuner/pull/588
* [Bugs] Check whether CUDA is available when choosing torch_dtype in sft.py by HIT-cwh in https://github.com/InternLM/xtuner/pull/577
* [Bugs] fix bugs in tokenize_ftdp_datasets by HIT-cwh in https://github.com/InternLM/xtuner/pull/581
* [Feature] Support qwen moe by HIT-cwh in https://github.com/InternLM/xtuner/pull/579
* [Docs] Add tokenizer to sft in Case 2 by HIT-cwh in https://github.com/InternLM/xtuner/pull/583
* bump version to 0.1.18 by HIT-cwh in https://github.com/InternLM/xtuner/pull/590


**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.17...v0.1.18
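
Both the refined sequence-parallel API (#555) and variable-length attention (#564, #565) are switched on by top-level config variables. A sketch; the names mirror XTuner's published example configs but should be treated as assumptions for this specific release:

```python
# Config-file sketch for the two 0.1.18 features. Variable names mirror
# XTuner's example configs (an assumption here).
sequence_parallel_size = 2   # shard each sequence across 2 GPUs; the total
                             # GPU count must be divisible by this value
use_varlen_attn = True       # variable-length (packed) attention
pack_to_max_length = True    # varlen attention requires packed samples
```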
