LitGPT

Latest version: v0.5.3

Page 2 of 4

0.4.11

What's Changed
* Add distribute=None to python-api.md by rasbt in https://github.com/Lightning-AI/litgpt/pull/1676
* Make LitGPT LLM API compatible with PyTorch Lightning Trainer 1/2 by rasbt in https://github.com/Lightning-AI/litgpt/pull/1667
* Auto device handling in LLM API by rasbt in https://github.com/Lightning-AI/litgpt/pull/1677
* Fix KV cache issue in LLM API by rasbt in https://github.com/Lightning-AI/litgpt/pull/1678
* Improved benchmark utils by rasbt in https://github.com/Lightning-AI/litgpt/pull/1679
* Add PR benchmark util for internal use by rasbt in https://github.com/Lightning-AI/litgpt/pull/1680
* Added git hash to benchmark utility. by apaz-cli in https://github.com/Lightning-AI/litgpt/pull/1681
* Spelling fix by rasbt in https://github.com/Lightning-AI/litgpt/pull/1685
* Add Microsoft Phi 3.5 checkpoint by rasbt in https://github.com/Lightning-AI/litgpt/pull/1687
* Update check_nvlink_connectivity by sanderland in https://github.com/Lightning-AI/litgpt/pull/1684
* Make number of generated tokens consistent with CLI by rasbt in https://github.com/Lightning-AI/litgpt/pull/1690
* Avoid error when executing benchmark util outside a git folder by rasbt in https://github.com/Lightning-AI/litgpt/pull/1691
* Combine `generate()` functions by apaz-cli in https://github.com/Lightning-AI/litgpt/pull/1675
* Bump version to 0.4.11 by rasbt in https://github.com/Lightning-AI/litgpt/pull/1695
* Fix falcon prompt template by rasbt in https://github.com/Lightning-AI/litgpt/pull/1696

New Contributors
* sanderland made their first contribution in https://github.com/Lightning-AI/litgpt/pull/1684

**Full Changelog**: https://github.com/Lightning-AI/litgpt/compare/v0.4.10...v0.4.11
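Several PRs above touch the benchmark utility: #1681 adds the git hash to its output, and #1691 avoids an error when it runs outside a git folder. A minimal sketch of that pattern; `get_git_hash` is a hypothetical helper, not LitGPT's actual implementation:

```python
import subprocess

def get_git_hash():
    """Return the current commit hash, or None when git is unavailable
    or the working directory is not inside a git repository."""
    try:
        result = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        # not a git repository, or git is not installed
        return None
```

Returning `None` instead of raising lets the benchmark utility record the hash when available and degrade gracefully otherwise.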

0.4.10

What's Changed
* Support Tensor Parallel in Python API by rasbt in https://github.com/Lightning-AI/litgpt/pull/1661
* Swap old Llama model with Phi-3 by rasbt in https://github.com/Lightning-AI/litgpt/pull/1666
* Update azure-gpu-test.yml by rasbt in https://github.com/Lightning-AI/litgpt/pull/1669
* Support the refactored API in litgpt serve by rasbt in https://github.com/Lightning-AI/litgpt/pull/1668
* Multi-gpu serving by rasbt in https://github.com/Lightning-AI/litgpt/pull/1670
* Add Mistral Large 123B by rasbt in https://github.com/Lightning-AI/litgpt/pull/1673
* Bump version to 0.4.10 for next release by rasbt in https://github.com/Lightning-AI/litgpt/pull/1674


**Full Changelog**: https://github.com/Lightning-AI/litgpt/compare/v0.4.9...v0.4.10
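Tensor parallelism (PR #1661) splits a model's weight matrices across devices so each computes a slice of the output independently. A toy single-process illustration of row sharding; this is purely illustrative, as LitGPT's implementation operates on GPU tensors:

```python
def matvec(W, x):
    # dense matrix-vector product
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def tensor_parallel_matvec(W, x, num_shards):
    # Row-shard W: each "device" computes its slice of the output
    # independently; concatenating the slices recovers the full result.
    shard_size = (len(W) + num_shards - 1) // num_shards
    out = []
    for i in range(num_shards):
        shard = W[i * shard_size:(i + 1) * shard_size]
        out.extend(matvec(shard, x))
    return out
```

Because the shards share no state, each one can live on a different GPU, which is what makes multi-GPU serving (PR #1670) possible.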

0.4.9

What's Changed

* Update LitServe version and tests by rasbt in https://github.com/Lightning-AI/litgpt/pull/1654
* Support for using large models in the Python API via sequential generation by rasbt in https://github.com/Lightning-AI/litgpt/pull/1637
* Add a PyTorch Lightning example by rasbt in https://github.com/Lightning-AI/litgpt/pull/1656
* Refactor Python API to introduce new distribute method (part of a larger refactor for PTL support) by rasbt in https://github.com/Lightning-AI/litgpt/pull/1657
* Fix some issues with circular and relative imports by rasbt in https://github.com/Lightning-AI/litgpt/pull/1658
* Optionally return benchmark info in Python API by rasbt in https://github.com/Lightning-AI/litgpt/pull/1660
* Bump version for 0.4.9 release by rasbt in https://github.com/Lightning-AI/litgpt/pull/1664


**Full Changelog**: https://github.com/Lightning-AI/litgpt/compare/v0.4.8...v0.4.9
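Sequential generation (PR #1637) lets a model larger than one device's memory run by keeping only part of it resident at a time. A toy sketch of the control flow, using hypothetical `load`/`unload` callbacks rather than LitGPT's real device-management code:

```python
def sequential_forward(x, layers, load, unload):
    # Run each layer in turn, keeping only one layer "resident" at a
    # time, so peak memory is bounded by the largest single layer.
    for layer in layers:
        load(layer)     # e.g. move this layer's weights onto the GPU
        x = layer(x)
        unload(layer)   # e.g. move them back off to free memory
    return x
```

The trade-off is extra data movement per forward pass in exchange for a much smaller memory footprint.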

0.4.8

What's Changed
* Adds unit test to test for parity between streaming and non-streaming API by rasbt in https://github.com/Lightning-AI/litgpt/pull/1650
* Add Gemma 2 2B by rasbt in https://github.com/Lightning-AI/litgpt/pull/1651
* Pin litserve version by rasbt in https://github.com/Lightning-AI/litgpt/pull/1652
* Version bump for Gemma 2 2B release by rasbt in https://github.com/Lightning-AI/litgpt/pull/1653


**Full Changelog**: https://github.com/Lightning-AI/litgpt/compare/v0.4.7...v0.4.8
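PR #1650 adds a unit test checking that streaming and non-streaming generation produce identical output. The idea can be illustrated with a toy generator; the `generate*` functions below are stand-ins, not LitGPT's API:

```python
def generate(tokens):
    # non-streaming: return all output tokens at once
    return [t * 2 for t in tokens]

def generate_stream(tokens):
    # streaming: yield output tokens one at a time
    for t in tokens:
        yield t * 2

def check_parity(tokens):
    # collecting the streamed output must reproduce the
    # non-streaming result exactly
    return list(generate_stream(tokens)) == generate(tokens)
```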

0.4.7

What's Changed
* Apply prompt style for tp.py and sequentially.py by Andrei-Aksionov in https://github.com/Lightning-AI/litgpt/pull/1629
* Fix prompt docstring in Python API by rasbt in https://github.com/Lightning-AI/litgpt/pull/1635
* Update windows cpu-tests.yml by rasbt in https://github.com/Lightning-AI/litgpt/pull/1630
* Remove NumPy < 2.0 pin by rasbt in https://github.com/Lightning-AI/litgpt/pull/1631
* Fix kv-cache issue in Python API streaming mode by rasbt in https://github.com/Lightning-AI/litgpt/pull/1633
* Updates installation requirements to install minimal required packages for basic use by rasbt in https://github.com/Lightning-AI/litgpt/pull/1634
* Faster safetensors conversion when downloading model by awaelchli in https://github.com/Lightning-AI/litgpt/pull/1624
* Add Sebastian as code owner by awaelchli in https://github.com/Lightning-AI/litgpt/pull/1641
* Add missing super() call in data modules by awaelchli in https://github.com/Lightning-AI/litgpt/pull/1639
* Update Lightning version to 2.4.0 pre by awaelchli in https://github.com/Lightning-AI/litgpt/pull/1640
* Add tunable kvcache with error handling for nonsense inputs. by apaz-cli in https://github.com/Lightning-AI/litgpt/pull/1636
* Use Python API in serve code by rasbt in https://github.com/Lightning-AI/litgpt/pull/1644
* Fix autodownload + conversion issue by rasbt in https://github.com/Lightning-AI/litgpt/pull/1645
* Properly clear kv-cache by rasbt in https://github.com/Lightning-AI/litgpt/pull/1647
* Fix error raising where max_returned_tokens > max_seq_length_setting by rasbt in https://github.com/Lightning-AI/litgpt/pull/1648
* Add quantization support to litgpt serve by rasbt in https://github.com/Lightning-AI/litgpt/pull/1646
* Bump for 0.4.7 release by rasbt in https://github.com/Lightning-AI/litgpt/pull/1649


**Full Changelog**: https://github.com/Lightning-AI/litgpt/compare/v0.4.6...v0.4.7
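PR #1648 fixes the error raised when the total requested token count exceeds the model's context length. A hedged sketch of such a validation check; the names are illustrative, not LitGPT's actual signatures:

```python
def check_token_budget(prompt_length, max_new_tokens, max_seq_length):
    # The total number of returned tokens (prompt plus newly
    # generated) must fit within the model's context window.
    max_returned_tokens = prompt_length + max_new_tokens
    if max_returned_tokens > max_seq_length:
        raise ValueError(
            f"Requested {max_returned_tokens} tokens, but the model's "
            f"max_seq_length is only {max_seq_length}."
        )
    return max_returned_tokens
```

Raising early with a clear message is preferable to the model failing deep inside generation with an opaque indexing error.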

0.4.6

What's Changed
* Change default top_k to 50 everywhere for consistency by rasbt in https://github.com/Lightning-AI/litgpt/pull/1592
* Fix kv-cache clearing in Python API and Serve by rasbt in https://github.com/Lightning-AI/litgpt/pull/1596
* dynamic KV Cache batching by aniketmaurya in https://github.com/Lightning-AI/litgpt/pull/1600
* Remove non-used eos_id in Python API by rasbt in https://github.com/Lightning-AI/litgpt/pull/1594
* Add quantization test and revert lightning version by rasbt in https://github.com/Lightning-AI/litgpt/pull/1605
* Dynamically set kv-cache size in serve by rasbt in https://github.com/Lightning-AI/litgpt/pull/1602
* Update LitData version and restore previous LitData assertions in tests by awaelchli in https://github.com/Lightning-AI/litgpt/pull/1609
* Gemma 2: `9b` and `27b` versions by Andrei-Aksionov in https://github.com/Lightning-AI/litgpt/pull/1545
* Update config hub table qlora sections by rasbt in https://github.com/Lightning-AI/litgpt/pull/1611
* max_returned_tokens -> max_new_tokens by rasbt in https://github.com/Lightning-AI/litgpt/pull/1612
* Add warning about pretrain preprocessing by rasbt in https://github.com/Lightning-AI/litgpt/pull/1618
* Print warning about unsupported repo_ids by rasbt in https://github.com/Lightning-AI/litgpt/pull/1617
* Restore capability to load alternative weights by rasbt in https://github.com/Lightning-AI/litgpt/pull/1620
* Enable unbalanced number of layers in sequential generation by awaelchli in https://github.com/Lightning-AI/litgpt/pull/1623
* Llama 3.1 8B and 70B checkpoints by rasbt in https://github.com/Lightning-AI/litgpt/pull/1619
* Add Llama 3.1 405B config by awaelchli in https://github.com/Lightning-AI/litgpt/pull/1622
* Bump version to 0.4.6 for next release (Gemma 2 and Llama 3.1) by rasbt in https://github.com/Lightning-AI/litgpt/pull/1626


**Full Changelog**: https://github.com/Lightning-AI/litgpt/compare/v0.4.5...v0.4.6
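PR #1592 standardizes the default `top_k` at 50. Top-k sampling keeps only the k highest-scoring tokens and masks out the rest before sampling; a minimal pure-Python sketch of the filtering step (not LitGPT's implementation, which operates on tensors):

```python
def top_k_logits(logits, k=50):
    # Keep only the k largest logits; mask the rest with -inf so that
    # softmax assigns them zero probability. Ties at the threshold
    # are all kept, so slightly more than k values may survive.
    if k >= len(logits):
        return list(logits)
    threshold = sorted(logits, reverse=True)[k - 1]
    return [x if x >= threshold else float("-inf") for x in logits]
```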
