XTuner

Latest version: v0.1.23


0.1.17

What's Changed
* [Fix] Fix PyPI package by LZHgrla in https://github.com/InternLM/xtuner/pull/540
* [Improve] Add LoRA fine-tuning configs for LLaVA-v1.5 by LZHgrla in https://github.com/InternLM/xtuner/pull/536
* [Configs] Add sequence_parallel_size and SequenceParallelSampler to configs by HIT-cwh in https://github.com/InternLM/xtuner/pull/538
* Check shape of attn_mask during attn forward by HIT-cwh in https://github.com/InternLM/xtuner/pull/543
* bump version to v0.1.17 by LZHgrla in https://github.com/InternLM/xtuner/pull/542


**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.16...v0.1.17
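PR #543 above adds a shape check on `attn_mask` during the attention forward. As an illustrative sketch only (not xtuner's actual code), such a guard might accept the two mask layouts commonly produced by collate functions:

```python
def check_attn_mask(attn_mask_shape, batch_size, seq_len):
    """Hypothetical validator: return True if a mask shape is compatible
    with a (batch_size, seq_len) batch of hidden states."""
    # 2-D padding mask: one entry per token
    if attn_mask_shape == (batch_size, seq_len):
        return True
    # 4-D broadcastable mask: (batch, 1, query_len, key_len)
    if attn_mask_shape == (batch_size, 1, seq_len, seq_len):
        return True
    return False
```

A caller would raise a descriptive error (rather than letting the attention kernel fail later) when this returns False.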

0.1.16

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/487
* Fix type error when the visual encoder is not CLIP by hhaAndroid in https://github.com/InternLM/xtuner/pull/496
* [Feature] Support Sequence parallel by HIT-cwh in https://github.com/InternLM/xtuner/pull/456
* [Bug] Fix bugs in flash_attn1_pytorch by HIT-cwh in https://github.com/InternLM/xtuner/pull/513
* [Fix] delete cat in varlen attn by HIT-cwh in https://github.com/InternLM/xtuner/pull/508
* bump version to 0.1.16 by HIT-cwh in https://github.com/InternLM/xtuner/pull/520
* [Improve] Add `generation_kwargs` for `EvaluateChatHook` by LZHgrla in https://github.com/InternLM/xtuner/pull/501
* [Bugs] Fix bugs when training in non-distributed env by HIT-cwh in https://github.com/InternLM/xtuner/pull/522
* [Fix] Support transformers>=4.38 and require transformers>=4.36.0 by HIT-cwh in https://github.com/InternLM/xtuner/pull/494
* [Fix] Fix throughput hook by HIT-cwh in https://github.com/InternLM/xtuner/pull/527
* Update README.md by JianxinDong in https://github.com/InternLM/xtuner/pull/528
* [Fix] dispatch internlm rope by HIT-cwh in https://github.com/InternLM/xtuner/pull/530
* Limit transformers != 4.38 by HIT-cwh in https://github.com/InternLM/xtuner/pull/531

New Contributors
* hhaAndroid made their first contribution in https://github.com/InternLM/xtuner/pull/496
* JianxinDong made their first contribution in https://github.com/InternLM/xtuner/pull/528

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.15...v0.1.16
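PR #456 above introduces sequence parallelism, which splits each long sequence across several workers so that activation memory scales down with the parallel size. A minimal, hypothetical sketch of the sharding idea (the `pad_id` and the contiguous-slice layout are assumptions for illustration, not xtuner's implementation):

```python
def shard_sequence(token_ids, sp_size, rank):
    """Give each of sp_size ranks a contiguous slice of the sequence,
    padding so the length divides evenly."""
    pad_id = 0  # hypothetical pad token id
    remainder = len(token_ids) % sp_size
    if remainder:
        token_ids = token_ids + [pad_id] * (sp_size - remainder)
    chunk = len(token_ids) // sp_size
    return token_ids[rank * chunk:(rank + 1) * chunk]
```

In a real setup the attention layers must also exchange key/value shards across ranks; this sketch covers only the input split.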

0.1.15

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/437
* [Bugs] Fix bugs when using EpochBasedRunner by HIT-cwh in https://github.com/InternLM/xtuner/pull/439
* [Feature] Support processing ftdp dataset and custom dataset offline by HIT-cwh in https://github.com/InternLM/xtuner/pull/410
* Update prompt_template.md by aJupyter in https://github.com/InternLM/xtuner/pull/441
* [Doc] Split finetune_custom_dataset.md to 6 parts by HIT-cwh in https://github.com/InternLM/xtuner/pull/445
* [Improve] Add notes for demo_data examples by LZHgrla in https://github.com/InternLM/xtuner/pull/458
* [Fix] Gemma prompt_template by LZHgrla in https://github.com/InternLM/xtuner/pull/454
* [Feature] Add LLaVA-InternLM2-1.8B by LZHgrla in https://github.com/InternLM/xtuner/pull/449
* show more info about datasets by amulil in https://github.com/InternLM/xtuner/pull/464
* [Fix] write text with `encoding='utf-8'` by LZHgrla in https://github.com/InternLM/xtuner/pull/477
* support offline process llava data by HIT-cwh in https://github.com/InternLM/xtuner/pull/448
* [Fix] `msagent_react_map_fn` error by LZHgrla in https://github.com/InternLM/xtuner/pull/470
* [Improve] Reorg `xtuner/configs/llava/` configs by LZHgrla in https://github.com/InternLM/xtuner/pull/483
* limit pytorch version <= 2.1.2 as there may be some bugs in triton2… by HIT-cwh in https://github.com/InternLM/xtuner/pull/452
* [Fix] fix batch sampler bs by HIT-cwh in https://github.com/InternLM/xtuner/pull/468
* bump version to v0.1.15 by LZHgrla in https://github.com/InternLM/xtuner/pull/486

New Contributors
* aJupyter made their first contribution in https://github.com/InternLM/xtuner/pull/441

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.14...v0.1.15
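PR #477 above switches file writes to an explicit `encoding='utf-8'`. The reason is that `open()` otherwise uses the platform locale encoding (e.g. cp1252 on Windows), which raises `UnicodeEncodeError` on non-ASCII text such as Chinese dataset content. A minimal example of the pattern; `save_text` is a hypothetical helper, not an xtuner API:

```python
def save_text(path, text):
    """Write text with an explicit encoding instead of the locale default."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
```

Reads should pass the same `encoding="utf-8"` argument so round-trips are platform-independent.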

0.1.14

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/341
* [Feature] More flexible `TrainLoop` by LZHgrla in https://github.com/InternLM/xtuner/pull/348
* [Feature]Support CEPH by pppppM in https://github.com/InternLM/xtuner/pull/266
* [Improve] Add `--repetition-penalty` for `xtuner chat` by LZHgrla in https://github.com/InternLM/xtuner/pull/351
* [Feature] Support MMBench DDP Evaluate by pppppM in https://github.com/InternLM/xtuner/pull/300
* [Fix] `KeyError` of `encode_fn` by LZHgrla in https://github.com/InternLM/xtuner/pull/361
* [Fix] Fix `batch_size` of full fine-tuning LLaVA-InternLM2 by LZHgrla in https://github.com/InternLM/xtuner/pull/360
* [Fix] Remove `system` for `alpaca_map_fn` by LZHgrla in https://github.com/InternLM/xtuner/pull/363
* [Fix] Use `DEFAULT_IMAGE_TOKEN` instead of `'<image>'` by LZHgrla in https://github.com/InternLM/xtuner/pull/353
* [Feature] Support internlm sft by HIT-cwh in https://github.com/InternLM/xtuner/pull/302
* [Fix] Add `attention_mask` for `default_collate_fn` by LZHgrla in https://github.com/InternLM/xtuner/pull/371
* [Fix] Update requirements by LZHgrla in https://github.com/InternLM/xtuner/pull/369
* [Fix] Fix rotary_base, add `colors_map_fn` to `DATASET_FORMAT_MAPPING` and rename 'internlm_repo' to 'intern_repo' by HIT-cwh in https://github.com/InternLM/xtuner/pull/372
* update by HIT-cwh in https://github.com/InternLM/xtuner/pull/377
* Delete useless codes and refactor process_untokenized_datasets by HIT-cwh in https://github.com/InternLM/xtuner/pull/379
* [Feature] support flash attn 2 in internlm1, internlm2 and llama by HIT-cwh in https://github.com/InternLM/xtuner/pull/381
* [Fix] Fix installation docs of mmengine in `intern_repo_dataset.md` by LZHgrla in https://github.com/InternLM/xtuner/pull/384
* [Fix] Update InternLM2 `apply_rotary_pos_emb` by LZHgrla in https://github.com/InternLM/xtuner/pull/383
* [Feature] support saving eval output before save checkpoint by HIT-cwh in https://github.com/InternLM/xtuner/pull/385
* fix lr scheduler setting by gzlong96 in https://github.com/InternLM/xtuner/pull/394
* [Fix] Remove pre-defined `system` of `alpaca_zh_map_fn` by LZHgrla in https://github.com/InternLM/xtuner/pull/395
* [Feature] Support `Qwen1.5` by LZHgrla in https://github.com/InternLM/xtuner/pull/407
* [Fix] Fix no space in chat output using InternLM2. (357) by KooSung in https://github.com/InternLM/xtuner/pull/404
* [Fix] typo: `--system-prompt` to `--system-template` by LZHgrla in https://github.com/InternLM/xtuner/pull/406
* [Improve] Add `output_with_loss` for dataset process by LZHgrla in https://github.com/InternLM/xtuner/pull/408
* [Fix] Fix dispatch to support transformers>=4.36 & Add USE_TRITON_KERNEL environment variable by HIT-cwh in https://github.com/InternLM/xtuner/pull/411
* [Feature]Add InternLM2-Chat-1_8b full config by KMnO4-zx in https://github.com/InternLM/xtuner/pull/396
* [Fix] Fix extract_json_objects by fanqiNO1 in https://github.com/InternLM/xtuner/pull/419
* [Fix] Fix pth_to_hf error by LZHgrla in https://github.com/InternLM/xtuner/pull/426
* [Feature] Support `Gemma` by PommesPeter in https://github.com/InternLM/xtuner/pull/429
* add refcoco to llava by LKJacky in https://github.com/InternLM/xtuner/pull/425
* [Fix] Inconsistent BatchSize of `LengthGroupedSampler` by LZHgrla in https://github.com/InternLM/xtuner/pull/436
* bump version to v0.1.14 by LZHgrla in https://github.com/InternLM/xtuner/pull/431

New Contributors
* gzlong96 made their first contribution in https://github.com/InternLM/xtuner/pull/394
* KooSung made their first contribution in https://github.com/InternLM/xtuner/pull/404
* KMnO4-zx made their first contribution in https://github.com/InternLM/xtuner/pull/396
* fanqiNO1 made their first contribution in https://github.com/InternLM/xtuner/pull/419
* PommesPeter made their first contribution in https://github.com/InternLM/xtuner/pull/429
* LKJacky made their first contribution in https://github.com/InternLM/xtuner/pull/425

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.13...v0.1.14
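PR #351 above adds a `--repetition-penalty` option to `xtuner chat`. The standard formulation of this penalty (as used by common generation libraries) rescales the logits of tokens that have already been generated, dividing positive logits and multiplying negative ones by the penalty factor. This sketch is illustrative, not xtuner's code:

```python
def apply_repetition_penalty(logits, generated_ids, penalty):
    """Discourage re-sampling of already-generated token ids.
    penalty > 1.0 reduces repetition; 1.0 is a no-op."""
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty   # shrink positive scores
        else:
            out[tok] *= penalty   # push negative scores further down
    return out
```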

0.1.13

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/329
* [Docs] Add LLaVA-InternLM2 results by LZHgrla in https://github.com/InternLM/xtuner/pull/332
* Update internlm2_chat template by RangiLyu in https://github.com/InternLM/xtuner/pull/339
* [Fix] Fix examples demo_data configs by LZHgrla in https://github.com/InternLM/xtuner/pull/334
* bump version to v0.1.13 by LZHgrla in https://github.com/InternLM/xtuner/pull/340

New Contributors
* RangiLyu made their first contribution in https://github.com/InternLM/xtuner/pull/339

**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.12...v0.1.13
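PR #339 above updates the `internlm2_chat` prompt template. For illustration only, a ChatML-style prompt builder shows roughly what such a template does; the token strings and function below are assumptions, and the authoritative definition lives in xtuner's prompt template module:

```python
def build_prompt(turns, system=None):
    """Hypothetical ChatML-style assembly of a chat prompt.
    turns: list of (role, message) pairs, e.g. [("user", "hi")]."""
    parts = []
    if system:
        parts.append(f"<|im_start|>system\n{system}<|im_end|>\n")
    for role, msg in turns:
        parts.append(f"<|im_start|>{role}\n{msg}<|im_end|>\n")
    # leave the assistant turn open so the model completes it
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)
```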

0.1.12

What's Changed
* set dev version by LZHgrla in https://github.com/InternLM/xtuner/pull/281
* [Fix] Update LLaVA results by LZHgrla in https://github.com/InternLM/xtuner/pull/283
* [Fix] Update LLaVA results (based on VLMEvalKit) by LZHgrla in https://github.com/InternLM/xtuner/pull/285
* [Fix] Fix filter bug for test data by LZHgrla in https://github.com/InternLM/xtuner/pull/293
* [Fix] Fix `ConcatDataset` by LZHgrla in https://github.com/InternLM/xtuner/pull/298
* [Improve] Redesign the `prompt_template` by LZHgrla in https://github.com/InternLM/xtuner/pull/294
* [Fix] Fix errors about `stop_words` by LZHgrla in https://github.com/InternLM/xtuner/pull/313
* [Fix] Fix Mixtral LoRA setting by LZHgrla in https://github.com/InternLM/xtuner/pull/312
* [Feature] Support DeepSeek-MoE by LZHgrla in https://github.com/InternLM/xtuner/pull/311
* [Fix] Set `torch.optim.AdamW` as the default optimizer by LZHgrla in https://github.com/InternLM/xtuner/pull/318
* [Fix] Fix `pth_to_hf` for LLaVA model by LZHgrla in https://github.com/InternLM/xtuner/pull/316
* [Improve] Add `demo_data` examples by LZHgrla in https://github.com/InternLM/xtuner/pull/278
* [Feature] Support InternLM2 by LZHgrla in https://github.com/InternLM/xtuner/pull/321
* [Fix] Fix the resume of seed by LZHgrla in https://github.com/InternLM/xtuner/pull/309
* [Feature] Accelerate `xtuner xxx` by pppppM in https://github.com/InternLM/xtuner/pull/307
* [Fix] Fix InternLM2 url by LZHgrla in https://github.com/InternLM/xtuner/pull/325
* [Fix] Limit the version of python, `>=3.8, <3.11` by LZHgrla in https://github.com/InternLM/xtuner/pull/327
* [Fix] Add `trust_remote_code=True` for AutoModel by LZHgrla in https://github.com/InternLM/xtuner/pull/328
* [Docs] Improve README by LZHgrla in https://github.com/InternLM/xtuner/pull/326
* bump version to v0.1.12 by pppppM in https://github.com/InternLM/xtuner/pull/323


**Full Changelog**: https://github.com/InternLM/xtuner/compare/v0.1.11...v0.1.12
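PR #327 above limits supported Python versions to `>=3.8, <3.11`. A small helper showing how such a constraint maps onto `sys.version_info`; `python_supported` is a hypothetical name for illustration:

```python
import sys

def python_supported(version_info=sys.version_info):
    """Check the >=3.8,<3.11 range declared by xtuner v0.1.12 (PR #327)."""
    return (3, 8) <= (version_info[0], version_info[1]) < (3, 11)
```

In packaging metadata the same constraint is expressed as `python_requires=">=3.8,<3.11"`.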
