XTuner

Latest version: v0.1.19


0.1.19

What's Changed
* [Fix] LLaVA-v1.5 official settings by LZHgrla in https://github.com/InternLM/xtuner/pull/594
* [Feature] Release LLaVA-Llama-3-8B by LZHgrla in https://github.com/InternLM/xtuner/pull/595
* [Improve] Add single-gpu configs for LLaVA-Llama-3-8B by LZHgrla in https://github.com/InternLM/xtuner/pull/596
* [Docs] Add wisemodel badge by LZHgrla in https://github.com/InternLM/xtuner/pull/597
* [Feature] Support load_json_file with json.load by HIT-cwh in https://github.com/InternLM/xtuner/pull/610
* [Feature] Support Microsoft Phi3 4K&128K Instruct Models by pppppM in https://github.com/InternLM/xtuner/pull/603
* [Fix] set `dataloader_num_workers=4` for llava training by LZHgrla in https://github.com/InternLM/xtuner/pull/611
* [Fix] Do not set attn_implementation to flash_attention_2 or sdpa if users already set it in XTuner configs. by HIT-cwh in https://github.com/InternLM/xtuner/pull/609
* [Release] LLaVA-Phi-3-mini by LZHgrla in https://github.com/InternLM/xtuner/pull/615
* Update README.md by eltociear in https://github.com/InternLM/xtuner/pull/608
* [Feature] Refine sp api by HIT-cwh in https://github.com/InternLM/xtuner/pull/619
* [Feature] Add conversion scripts for LLaVA-Llama-3-8B by LZHgrla in https://github.com/InternLM/xtuner/pull/618
* [Fix] Convert nan to 0 just for logging by HIT-cwh in https://github.com/InternLM/xtuner/pull/625
* [Docs] Delete colab and add speed benchmark by HIT-cwh in https://github.com/InternLM/xtuner/pull/617
* [Feature] Support dsz3+qlora by HIT-cwh in https://github.com/InternLM/xtuner/pull/600
* [Feature] Add qwen1.5 110b cfgs by HIT-cwh in https://github.com/InternLM/xtuner/pull/632
* check transformers version before dispatch by HIT-cwh in https://github.com/InternLM/xtuner/pull/672
* [Fix] `convert_xtuner_weights_to_hf` with frozen ViT by LZHgrla in https://github.com/InternLM/xtuner/pull/661
* [Fix] Fix batch-size setting of single-card LLaVA-Llama-3-8B configs by LZHgrla in https://github.com/InternLM/xtuner/pull/598
* [Feature] Add HFCheckpointHook to auto-save an HF model after the whole training phase by HIT-cwh in https://github.com/InternLM/xtuner/pull/621 (see the sketch below)
* Remove test info in DatasetInfoHook by hhaAndroid in https://github.com/InternLM/xtuner/pull/622
* [Improve] Support `safe_serialization` saving by LZHgrla in https://github.com/InternLM/xtuner/pull/648
* bump version to 0.1.19 by HIT-cwh in https://github.com/InternLM/xtuner/pull/675

New Contributors
* eltociear made their first contribution in https://github.com/InternLM/xtuner/pull/608

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.18...v0.1.19
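
Two of the changes above lend themselves to a config sketch: the new `HFCheckpointHook` (PR #621) and DeepSpeed ZeRO-3 combined with QLoRA (PR #600). Below is a minimal sketch assuming XTuner's mmengine-style config conventions; the import path and the hook's (lack of) arguments are assumptions, not a verbatim excerpt from the repo.

```python
# Minimal sketch, assuming XTuner's mmengine-style configs.
# HFCheckpointHook (PR #621) is described as auto-saving a Hugging Face
# model once the whole training phase finishes; any extra arguments it
# may accept are omitted here.
from xtuner.engine.hooks import HFCheckpointHook

custom_hooks = [
    dict(type=HFCheckpointHook),  # export an HF-format model after training
]
```

With a QLoRA config in hand, the ZeRO-3 combination from PR #600 would then be chosen at launch time, e.g. `xtuner train <config> --deepspeed deepspeed_zero3`; verify the flag value against your installed version.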

0.1.18

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/537
* [Fix] Fix typo by KooSung in https://github.com/InternLM/xtuner/pull/547
* [Feature] Support Mixtral varlen attn by HIT-cwh in https://github.com/InternLM/xtuner/pull/564 (see the sketch below)
* [Feature] Support qwen sp and varlen attn by HIT-cwh in https://github.com/InternLM/xtuner/pull/565
* [Fix] Fix attention mask in `default_collate_fn` by pppppM in https://github.com/InternLM/xtuner/pull/567
* Accept pytorch==2.2 as the bugs in triton 2.2 are fixed by HIT-cwh in https://github.com/InternLM/xtuner/pull/548
* [Feature] Refine Sequence Parallel API by HIT-cwh in https://github.com/InternLM/xtuner/pull/555
* [Fix] Enhance `split_list` to support `value` at the beginning by LZHgrla in https://github.com/InternLM/xtuner/pull/568
* [Feature] Support cohere by HIT-cwh in https://github.com/InternLM/xtuner/pull/569
* [Fix] Fix rotary_seq_len in varlen attn in qwen by HIT-cwh in https://github.com/InternLM/xtuner/pull/574
* [Docs] Add sequence parallel related to readme by HIT-cwh in https://github.com/InternLM/xtuner/pull/578
* [Bug] SUPPORT_FLASH1 = digit_version(torch.__version__) >= digit_version('2… by HIT-cwh in https://github.com/InternLM/xtuner/pull/587
* [Feature] Support Llama 3 by LZHgrla in https://github.com/InternLM/xtuner/pull/585
* [Docs] Add llama3 8B readme by HIT-cwh in https://github.com/InternLM/xtuner/pull/588
* [Bugs] Check whether cuda is available when choosing torch_dtype in sft.py by HIT-cwh in https://github.com/InternLM/xtuner/pull/577
* [Bugs] fix bugs in tokenize_ftdp_datasets by HIT-cwh in https://github.com/InternLM/xtuner/pull/581
* [Feature] Support qwen moe by HIT-cwh in https://github.com/InternLM/xtuner/pull/579
* [Docs] Add tokenizer to sft in Case 2 by HIT-cwh in https://github.com/InternLM/xtuner/pull/583
* bump version to 0.1.18 by HIT-cwh in https://github.com/InternLM/xtuner/pull/590


**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.17...v0.1.18
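
Several of the PRs above (#564, #565, #555) extend variable-length attention and the sequence parallel API to more model families. The sketch below shows how the varlen flag is typically threaded through a config; the names follow XTuner's shipped configs but should be treated as assumptions rather than a verbatim excerpt.

```python
# Minimal sketch, assuming the conventions of XTuner's shipped configs:
# variable-length attention operates on packed samples, so the same flag
# must reach both the model wrapper and the dataset pipeline.
from xtuner.dataset import process_hf_dataset
from xtuner.model import SupervisedFinetune

use_varlen_attn = True

model = dict(
    type=SupervisedFinetune,
    use_varlen_attn=use_varlen_attn,
    # llm=... elided
)

train_dataset = dict(
    type=process_hf_dataset,
    pack_to_max_length=True,  # varlen attention assumes packed sequences
    use_varlen_attn=use_varlen_attn,
    # dataset=..., tokenizer=..., max_length=... elided
)
```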

0.1.17

What's Changed
* [Fix] Fix PyPI package by LZHgrla in https://github.com/InternLM/xtuner/pull/540
* [Improve] Add LoRA fine-tuning configs for LLaVA-v1.5 by LZHgrla in https://github.com/InternLM/xtuner/pull/536
* [Configs] Add sequence_parallel_size and SequenceParallelSampler to configs by HIT-cwh in https://github.com/InternLM/xtuner/pull/538 (see the sketch below)
* Check shape of attn_mask during attn forward by HIT-cwh in https://github.com/InternLM/xtuner/pull/543
* bump version to v0.1.17 by LZHgrla in https://github.com/InternLM/xtuner/pull/542


**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.16...v0.1.17
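
PR #538 above adds `sequence_parallel_size` and `SequenceParallelSampler` to the shipped configs. Here is a minimal sketch of how the two fit together, assuming the import path and field names used in those configs:

```python
# Minimal sketch, assuming the names introduced by PR #538.
from xtuner.parallel.sequence import SequenceParallelSampler

sequence_parallel_size = 2  # shard each sequence across 2 ranks

train_dataloader = dict(
    batch_size=1,
    sampler=dict(type=SequenceParallelSampler, seed=1024),
    # dataset=..., collate_fn=... elided
)
```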

0.1.16

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/487
* Fix type error when the visual encoder is not CLIP by hhaAndroid in https://github.com/InternLM/xtuner/pull/496
* [Feature] Support Sequence parallel by HIT-cwh in https://github.com/InternLM/xtuner/pull/456
* [Bug] Fix bugs in flash_attn1_pytorch by HIT-cwh in https://github.com/InternLM/xtuner/pull/513
* [Fix] delete cat in varlen attn by HIT-cwh in https://github.com/InternLM/xtuner/pull/508
* bump version to 0.1.16 by HIT-cwh in https://github.com/InternLM/xtuner/pull/520
* [Improve] Add `generation_kwargs` for `EvaluateChatHook` by LZHgrla in https://github.com/InternLM/xtuner/pull/501 (see the sketch below)
* [Bugs] Fix bugs when training in non-distributed env by HIT-cwh in https://github.com/InternLM/xtuner/pull/522
* [Fix] Support transformers>=4.38 and require transformers>=4.36.0 by HIT-cwh in https://github.com/InternLM/xtuner/pull/494
* [Fix] Fix throughput hook by HIT-cwh in https://github.com/InternLM/xtuner/pull/527
* Update README.md by JianxinDong in https://github.com/InternLM/xtuner/pull/528
* [Fix] Dispatch InternLM RoPE by HIT-cwh in https://github.com/InternLM/xtuner/pull/530
* Limit transformers != 4.38 by HIT-cwh in https://github.com/InternLM/xtuner/pull/531

New Contributors
* hhaAndroid made their first contribution in https://github.com/InternLM/xtuner/pull/496
* JianxinDong made their first contribution in https://github.com/InternLM/xtuner/pull/528

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.15...v0.1.16
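
Of the changes above, PR #501 is the easiest to picture in a config: it adds `generation_kwargs` to `EvaluateChatHook`. A minimal sketch, assuming the argument names used in XTuner's public configs (the model name is purely illustrative):

```python
# Minimal sketch: forwarding sampling parameters to the evaluation hook.
from transformers import AutoTokenizer
from xtuner.engine.hooks import EvaluateChatHook

tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path='internlm/internlm2-chat-7b',  # illustrative
    trust_remote_code=True,
)

custom_hooks = [
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        every_n_iters=500,
        evaluation_inputs=['Give me three tips for staying healthy.'],
        generation_kwargs=dict(  # assumed to be passed through to generate()
            temperature=0.7,
            top_p=0.9,
        ),
    ),
]
```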

0.1.15

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/437
* [Bugs] Fix bugs when using EpochBasedRunner by HIT-cwh in https://github.com/InternLM/xtuner/pull/439
* [Feature] Support processing ftdp dataset and custom dataset offline by HIT-cwh in https://github.com/InternLM/xtuner/pull/410
* Update prompt_template.md by aJupyter in https://github.com/InternLM/xtuner/pull/441
* [Doc] Split finetune_custom_dataset.md to 6 parts by HIT-cwh in https://github.com/InternLM/xtuner/pull/445
* [Improve] Add notes for demo_data examples by LZHgrla in https://github.com/InternLM/xtuner/pull/458
* [Fix] Gemma prompt_template by LZHgrla in https://github.com/InternLM/xtuner/pull/454 (see the sketch below)
* [Feature] Add LLaVA-InternLM2-1.8B by LZHgrla in https://github.com/InternLM/xtuner/pull/449
* show more info about datasets by amulil in https://github.com/InternLM/xtuner/pull/464
* [Fix] write text with `encoding='utf-8'` by LZHgrla in https://github.com/InternLM/xtuner/pull/477
* Support offline processing of llava data by HIT-cwh in https://github.com/InternLM/xtuner/pull/448
* [Fix] `msagent_react_map_fn` error by LZHgrla in https://github.com/InternLM/xtuner/pull/470
* [Improve] Reorg `xtuner/configs/llava/` configs by LZHgrla in https://github.com/InternLM/xtuner/pull/483
* limit pytorch version <= 2.1.2 as there may be some bugs in triton2… by HIT-cwh in https://github.com/InternLM/xtuner/pull/452
* [Fix] fix batch sampler bs by HIT-cwh in https://github.com/InternLM/xtuner/pull/468
* bump version to v0.1.15 by LZHgrla in https://github.com/InternLM/xtuner/pull/486

New Contributors
* aJupyter made their first contribution in https://github.com/InternLM/xtuner/pull/441

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.14...v0.1.15
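
The Gemma prompt-template fix (PR #454) is a reminder that each base model needs its matching chat template in the config. A minimal sketch, assuming XTuner's `PROMPT_TEMPLATE` registry (the exact attribute name for Gemma is an assumption):

```python
# Minimal sketch: pick the chat template that matches the base model.
from xtuner.utils import PROMPT_TEMPLATE

prompt_template = PROMPT_TEMPLATE.gemma  # attribute name is an assumption
```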

0.1.14

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/341
* [Feature] More flexible `TrainLoop` by LZHgrla in https://github.com/InternLM/xtuner/pull/348 (see the sketch below)
* [Feature] Support CEPH by pppppM in https://github.com/InternLM/xtuner/pull/266
* [Improve] Add `--repetition-penalty` for `xtuner chat` by LZHgrla in https://github.com/InternLM/xtuner/pull/351
* [Feature] Support MMBench DDP Evaluate by pppppM in https://github.com/InternLM/xtuner/pull/300
* [Fix] `KeyError` of `encode_fn` by LZHgrla in https://github.com/InternLM/xtuner/pull/361
* [Fix] Fix `batch_size` of full fine-tuning LLaVA-InternLM2 by LZHgrla in https://github.com/InternLM/xtuner/pull/360
* [Fix] Remove `system` for `alpaca_map_fn` by LZHgrla in https://github.com/InternLM/xtuner/pull/363
* [Fix] Use `DEFAULT_IMAGE_TOKEN` instead of `'<image>'` by LZHgrla in https://github.com/InternLM/xtuner/pull/353
* [Feature] Support internlm sft by HIT-cwh in https://github.com/InternLM/xtuner/pull/302
* [Fix] Add `attention_mask` for `default_collate_fn` by LZHgrla in https://github.com/InternLM/xtuner/pull/371
* [Fix] Update requirements by LZHgrla in https://github.com/InternLM/xtuner/pull/369
* [Fix] Fix rotary_base, add `colors_map_fn` to `DATASET_FORMAT_MAPPING` and rename 'internlm_repo' to 'intern_repo' by HIT-cwh in https://github.com/InternLM/xtuner/pull/372
* update by HIT-cwh in https://github.com/InternLM/xtuner/pull/377
* Delete useless codes and refactor process_untokenized_datasets by HIT-cwh in https://github.com/InternLM/xtuner/pull/379
* [Feature] support flash attn 2 in internlm1, internlm2 and llama by HIT-cwh in https://github.com/InternLM/xtuner/pull/381
* [Fix] Fix installation docs of mmengine in `intern_repo_dataset.md` by LZHgrla in https://github.com/InternLM/xtuner/pull/384
* [Fix] Update InternLM2 `apply_rotary_pos_emb` by LZHgrla in https://github.com/InternLM/xtuner/pull/383
* [Feature] support saving eval output before save checkpoint by HIT-cwh in https://github.com/InternLM/xtuner/pull/385
* fix lr scheduler setting by gzlong96 in https://github.com/InternLM/xtuner/pull/394
* [Fix] Remove pre-defined `system` of `alpaca_zh_map_fn` by LZHgrla in https://github.com/InternLM/xtuner/pull/395
* [Feature] Support `Qwen1.5` by LZHgrla in https://github.com/InternLM/xtuner/pull/407
* [Fix] Fix no space in chat output using InternLM2 (#357) by KooSung in https://github.com/InternLM/xtuner/pull/404
* [Fix] typo: `--system-prompt` to `--system-template` by LZHgrla in https://github.com/InternLM/xtuner/pull/406
* [Improve] Add `output_with_loss` for dataset process by LZHgrla in https://github.com/InternLM/xtuner/pull/408
* [Fix] Fix dispatch to support transformers>=4.36 & Add USE_TRITON_KERNEL environment variable by HIT-cwh in https://github.com/InternLM/xtuner/pull/411
* [Feature] Add InternLM2-Chat-1_8b full config by KMnO4-zx in https://github.com/InternLM/xtuner/pull/396
* [Fix] Fix extract_json_objects by fanqiNO1 in https://github.com/InternLM/xtuner/pull/419
* [Fix] Fix pth_to_hf error by LZHgrla in https://github.com/InternLM/xtuner/pull/426
* [Feature] Support `Gemma` by PommesPeter in https://github.com/InternLM/xtuner/pull/429
* add refcoco to llava by LKJacky in https://github.com/InternLM/xtuner/pull/425
* [Fix] Inconsistent BatchSize of `LengthGroupedSampler` by LZHgrla in https://github.com/InternLM/xtuner/pull/436
* bump version to v0.1.14 by LZHgrla in https://github.com/InternLM/xtuner/pull/431

New Contributors
* gzlong96 made their first contribution in https://github.com/InternLM/xtuner/pull/394
* KooSung made their first contribution in https://github.com/InternLM/xtuner/pull/404
* KMnO4-zx made their first contribution in https://github.com/InternLM/xtuner/pull/396
* fanqiNO1 made their first contribution in https://github.com/InternLM/xtuner/pull/419
* PommesPeter made their first contribution in https://github.com/InternLM/xtuner/pull/429
* LKJacky made their first contribution in https://github.com/InternLM/xtuner/pull/425

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.13...v0.1.14
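
Of the changes above, the more flexible `TrainLoop` (PR #348) is the most config-visible. A minimal sketch, with the import path and argument names taken from XTuner's public configs and to be treated as assumptions:

```python
# Minimal sketch: TrainLoop reportedly accepts either budget style.
from xtuner.engine.runner import TrainLoop

train_cfg = dict(type=TrainLoop, max_epochs=3)       # epoch-based budget
# train_cfg = dict(type=TrainLoop, max_iters=10000)  # or iteration-based
```

On the CLI side, the same release adds `--repetition-penalty` to `xtuner chat` (PR #351) and renames `--system-prompt` to `--system-template` (PR #406).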
