InternLM

Latest version: v0.2.0


Page 4 of 6

0.1.0

First release, with support for the following datasets:

* ARC
* BBH
* C-Eval
* CLUE
* FewCLUE
* GAOKAO-BENCH
* LCSTS
* MATH
* MBPP
* MMLU
* NQ
* SummEdits
* SuperGLUE

0.1.0a2

<!-- Release notes generated using configuration in .github/release.yml at main -->

What's Changed
💥 Improvements
* Unify prefill & decode passes by lzhangzz in https://github.com/InternLM/lmdeploy/pull/775
* add cuda12.1 build check ci by irexyc in https://github.com/InternLM/lmdeploy/pull/782
* auto upload cuda12.1 python pkg to release when create new tag by irexyc in https://github.com/InternLM/lmdeploy/pull/784
* Report the inference benchmark of models with different size by lvhan028 in https://github.com/InternLM/lmdeploy/pull/794
* Add chat template for Yi by AllentDan in https://github.com/InternLM/lmdeploy/pull/779
🐞 Bug fixes
* Fix early-exit condition in attention kernel by lzhangzz in https://github.com/InternLM/lmdeploy/pull/788
* Fix missing arguments when benchmarking static inference performance by lvhan028 in https://github.com/InternLM/lmdeploy/pull/787
* fix extra colon in InternLMChat7B template by C1rN09 in https://github.com/InternLM/lmdeploy/pull/796
* Fix local kv head num by lvhan028 in https://github.com/InternLM/lmdeploy/pull/806
📚 Documentations
* Update benchmark user guide by lvhan028 in https://github.com/InternLM/lmdeploy/pull/763
🌐 Other
* bump version to v0.1.0a2 by lvhan028 in https://github.com/InternLM/lmdeploy/pull/807

New Contributors
* C1rN09 made their first contribution in https://github.com/InternLM/lmdeploy/pull/796

**Full Changelog**: https://github.com/InternLM/lmdeploy/compare/v0.1.0a1...v0.1.0a2

0.1.0a1

<!-- Release notes generated using configuration in .github/release.yml at main -->

What's Changed
💥 Improvements
* Set the default value of `max_context_token_num` to 1 by lvhan028 in https://github.com/InternLM/lmdeploy/pull/761
* add triton server test and workflow yml by RunningLeon in https://github.com/InternLM/lmdeploy/pull/760
* improvement(build): enable ninja and gold linker by tpoisonooo in https://github.com/InternLM/lmdeploy/pull/767
* Report first-token-latency and token-latency percentiles by lvhan028 in https://github.com/InternLM/lmdeploy/pull/736
* convert model with hf repo_id by irexyc in https://github.com/InternLM/lmdeploy/pull/774
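Converting a model straight from its Hugging Face repo_id (rather than a local checkout) can be sketched as below; the model name and repo are illustrative:

```shell
# Download and convert a model directly from a Hugging Face repo_id
# (model name and repo are illustrative).
lmdeploy convert internlm-chat-7b internlm/internlm-chat-7b
```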
🐞 Bug fixes
* [Fix] build docker image failed since `packaging` is missing by lvhan028 in https://github.com/InternLM/lmdeploy/pull/753
* [Fix] Rollback the data type of `input_ids` to `TYPE_UINT32` in preprocessor's proto by lvhan028 in https://github.com/InternLM/lmdeploy/pull/758
* fix turbomind build on sm<80 by grimoire in https://github.com/InternLM/lmdeploy/pull/754
* fix typo by grimoire in https://github.com/InternLM/lmdeploy/pull/769
🌐 Other
* bump version to 0.1.0a1 by lvhan028 in https://github.com/InternLM/lmdeploy/pull/776


**Full Changelog**: https://github.com/InternLM/lmdeploy/compare/v0.1.0a0...v0.1.0a1

0.1.0a0

<!-- Release notes generated using configuration in .github/release.yml at v0.1.0a0 -->

What's Changed
🚀 Features
* Add extra_requires to reduce dependencies by RunningLeon in https://github.com/InternLM/lmdeploy/pull/580
* TurboMind 2 by lzhangzz in https://github.com/InternLM/lmdeploy/pull/590
* Support loading hf model directly by irexyc in https://github.com/InternLM/lmdeploy/pull/685
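With direct HF model loading, a chat session can be started without an explicit conversion step; a minimal sketch, assuming the model path shown:

```shell
# Chat with a Hugging Face model without converting it first
# (model path is illustrative).
lmdeploy chat turbomind internlm/internlm-chat-7b
```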
💥 Improvements
* Fix Tokenizer encode by AllentDan in https://github.com/InternLM/lmdeploy/pull/645
* Optimize for throughput by lzhangzz in https://github.com/InternLM/lmdeploy/pull/701
* Replace mmengine with mmengine-lite by zhouzaida in https://github.com/InternLM/lmdeploy/pull/715
🐞 Bug fixes
* Fix init of batch state by lzhangzz in https://github.com/InternLM/lmdeploy/pull/682
* fix turbomind stream canceling by grimoire in https://github.com/InternLM/lmdeploy/pull/686
* [Fix] Fix load_checkpoint_in_model bug by HIT-cwh in https://github.com/InternLM/lmdeploy/pull/690
* Fix wrong eos_id and bos_id obtained through grpc api by lvhan028 in https://github.com/InternLM/lmdeploy/pull/644
* Fix cache/output length calculation by lzhangzz in https://github.com/InternLM/lmdeploy/pull/738
* [Fix] Skip empty batch by lzhangzz in https://github.com/InternLM/lmdeploy/pull/747
📚 Documentations
* [Docs] Update Supported Matrix by pppppM in https://github.com/InternLM/lmdeploy/pull/679
* [Docs] Update KV8 Docs by pppppM in https://github.com/InternLM/lmdeploy/pull/681
* [Doc] Update restful api doc by AllentDan in https://github.com/InternLM/lmdeploy/pull/662
* Check-in user guide about turbomind config by lvhan028 in https://github.com/InternLM/lmdeploy/pull/680
🌐 Other
* bump version to v0.1.0a0 by lvhan028 in https://github.com/InternLM/lmdeploy/pull/709

New Contributors
* zhouzaida made their first contribution in https://github.com/InternLM/lmdeploy/pull/715

**Full Changelog**: https://github.com/InternLM/lmdeploy/compare/v0.0.14...v0.1.0a0

0.0.14

<!-- Release notes generated using configuration in .github/release.yml at main -->

What's Changed

💥 Improvements
* Improve api_server and webui usage by AllentDan in https://github.com/InternLM/lmdeploy/pull/544
* fix: gradio gr.Button.update deprecated after 4.0.0 by hscspring in https://github.com/InternLM/lmdeploy/pull/637
* add cli to list the supported model names by RunningLeon in https://github.com/InternLM/lmdeploy/pull/639
* Refactor model conversion by irexyc in https://github.com/InternLM/lmdeploy/pull/296
* [Enhance] internlm message to prompt by Harold-lkk in https://github.com/InternLM/lmdeploy/pull/499
* update turbomind session_len with model.session_len by AllentDan in https://github.com/InternLM/lmdeploy/pull/634
* Manage session id using random int for gradio local mode by aisensiy in https://github.com/InternLM/lmdeploy/pull/553
* Add UltraCM and WizardLM chat templates by AllentDan in https://github.com/InternLM/lmdeploy/pull/599
* Add check env sub command by RunningLeon in https://github.com/InternLM/lmdeploy/pull/654
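The new CLI subcommands for listing supported model names and checking the environment can be used as sketched below:

```shell
# List the model names the CLI recognizes
lmdeploy list

# Print environment information, e.g. for bug reports
lmdeploy check_env
```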
🐞 Bug fixes
* [Fix] Qwen's quantization results are abnormal & Baichuan cannot be quantized by pppppM in https://github.com/InternLM/lmdeploy/pull/605
* FIX: fix stop_session func bug by yunzhongyan0 in https://github.com/InternLM/lmdeploy/pull/578
* fix benchmark serving computation mistake by AllentDan in https://github.com/InternLM/lmdeploy/pull/630
* fix Tokenizer load error when the path of the model being converted is not writable by irexyc in https://github.com/InternLM/lmdeploy/pull/669
* fix tokenizer_info when converting the model by irexyc in https://github.com/InternLM/lmdeploy/pull/661
🌐 Other
* bump version to v0.0.14 by lvhan028 in https://github.com/InternLM/lmdeploy/pull/663

New Contributors
* hscspring made their first contribution in https://github.com/InternLM/lmdeploy/pull/637
* yunzhongyan0 made their first contribution in https://github.com/InternLM/lmdeploy/pull/578

**Full Changelog**: https://github.com/InternLM/lmdeploy/compare/v0.0.13...v0.0.14

0.0.13

<!-- Release notes generated using configuration in .github/release.yml at main -->

What's Changed
🚀 Features
* Add more user-friendly CLI by RunningLeon in https://github.com/InternLM/lmdeploy/pull/541
💥 Improvements
* support inference on a batch of prompts by AllentDan in https://github.com/InternLM/lmdeploy/pull/467
📚 Documentations
* Add "build from docker" section by lvhan028 in https://github.com/InternLM/lmdeploy/pull/602
🌐 Other
* bump version to v0.0.13 by lvhan028 in https://github.com/InternLM/lmdeploy/pull/620


**Full Changelog**: https://github.com/InternLM/lmdeploy/compare/v0.0.12...v0.0.13

