OpenLLM

0.2.25

Usage

All available models: `openllm models`

To start an LLM: `python -m openllm start opt`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it -P ghcr.io/bentoml/openllm:0.2.25 start opt`

To run the OpenLLM Clojure UI (community-maintained): `docker run -p 8420:80 ghcr.io/bentoml/openllm-ui-clojure:0.2.25`
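
Once the server is up, it can be queried from Python. A minimal sketch, assuming the default bind address (`http://localhost:3000`) and the `openllm.client.HTTPClient` interface shown in the README of the 0.2.x series; treat the exact client API as an assumption:

```python
# Query a running OpenLLM server from Python.
# Assumes the server was started with `openllm start opt` and is
# listening on the default address, http://localhost:3000.
import openllm

client = openllm.client.HTTPClient("http://localhost:3000")
result = client.query("Explain the difference between a llama and an alpaca.")
print(result)
```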

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)

What's Changed
* chore: upload nightly wheels to test.pypi.org by aarnphm in https://github.com/bentoml/OpenLLM/pull/215
* feat(contrib): ClojureScript UI by GutZuFusss in https://github.com/bentoml/OpenLLM/pull/89
* fix(ci): remove broken build hooks by aarnphm in https://github.com/bentoml/OpenLLM/pull/216
* chore(ci): add dependabot and fix vllm release container by aarnphm in https://github.com/bentoml/OpenLLM/pull/217
* feat(models): add vLLM support for Falcon by aarnphm in https://github.com/bentoml/OpenLLM/pull/223
* chore(readme): update nightly badge [skip ci] by aarnphm in https://github.com/bentoml/OpenLLM/pull/224

New Contributors
* GutZuFusss made their first contribution in https://github.com/bentoml/OpenLLM/pull/89

**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.2.24...v0.2.25

0.2.22

Usage

All available models: `openllm models`

To start an LLM: `python -m openllm start opt`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it --entrypoint=/bin/bash -P ghcr.io/bentoml/openllm:0.2.22 openllm --help`
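
With a server running (in a container or locally), its HTTP API can also be called directly. A minimal sketch using `requests`, assuming the default address and the `/v1/generate` endpoint documented for the 0.2.x series; the payload schema is an assumption and may vary by model:

```python
# Call a running OpenLLM server over its HTTP API.
# Assumes the default bind address (http://localhost:3000) and the
# /v1/generate endpoint of the 0.2.x series; adjust if your setup differs.
import requests

payload = {
    "prompt": "What is the capital of France?",
    "llm_config": {"max_new_tokens": 64},  # generation parameters (assumed schema)
}
resp = requests.post("http://localhost:3000/v1/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
```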

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.2.21...v0.2.22

0.2.21

Usage

All available models: `openllm models`

To start an LLM: `python -m openllm start opt`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it --entrypoint=/bin/bash -P ghcr.io/bentoml/openllm:0.2.21 openllm --help`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



What's Changed
* fix(release): fix exclude options within compiled wheels by aarnphm in https://github.com/bentoml/OpenLLM/pull/197
* infra: migrate to initial `openllm-node` library by aarnphm in https://github.com/bentoml/OpenLLM/pull/199
* perf: compiled modules and enable lazyeval by aarnphm in https://github.com/bentoml/OpenLLM/pull/200


**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.2.20...v0.2.21

0.2.20

Usage

All available models: `openllm models`

To start an LLM: `python -m openllm start opt`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it --entrypoint=/bin/bash -P ghcr.io/bentoml/openllm:0.2.20 openllm --help`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.2.18...v0.2.20

0.2.18

Usage

All available models: `openllm models`

To start an LLM: `python -m openllm start opt`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it --entrypoint=/bin/bash -P ghcr.io/bentoml/openllm:0.2.18 openllm --help`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



What's Changed
* feat(strategy): only spawn up one runner by aarnphm in https://github.com/bentoml/OpenLLM/pull/189
* feat: homebrew tap by aarnphm in https://github.com/bentoml/OpenLLM/pull/190
* refactor(cli): compiled wheels and extension modules by aarnphm in https://github.com/bentoml/OpenLLM/pull/191


**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.2.17...v0.2.18

0.2.17

Usage

All available models: `openllm models`

To start an LLM: `python -m openllm start opt`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it --entrypoint=/bin/bash -P ghcr.io/bentoml/openllm:0.2.17 openllm --help`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



What's Changed
* feat: optimize model saving and loading on single GPU by aarnphm in https://github.com/bentoml/OpenLLM/pull/183
* fix(ci): update version correctly [skip ci] by aarnphm in https://github.com/bentoml/OpenLLM/pull/184
* fix(models): setup xformers in base container and loading PyTorch meta weights by aarnphm in https://github.com/bentoml/OpenLLM/pull/185
* infra(generation): initial work for generating tokens by aarnphm in https://github.com/bentoml/OpenLLM/pull/186
* ci: pre-commit autoupdate [pre-commit.ci] by pre-commit-ci in https://github.com/bentoml/OpenLLM/pull/187
* feat: --force-push to allow force push to bentocloud by aarnphm in https://github.com/bentoml/OpenLLM/pull/188


**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.2.16...v0.2.17
