OpenLLM

Latest version: v0.6.23



Page 8 of 24

0.4.35

Usage

List all available models: `openllm models`

To start an LLM: `python -m openllm start HuggingFaceH4/zephyr-7b-beta`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it -P -v $PWD/data:$HOME/.cache/huggingface/ ghcr.io/bentoml/openllm:0.4.35 start HuggingFaceH4/zephyr-7b-beta`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)
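Once `openllm start` has the server running, it can be queried over plain HTTP. The sketch below is a minimal client, assuming the `/v1/generate` endpoint, the default port 3000, and the `prompt`/`llm_config` request fields of the 0.4.x HTTP API; verify all three against your deployment before relying on them.

```python
import json
import urllib.request

# Default OpenLLM server address; port 3000 is an assumption to adjust
# if your server was started with a different --port.
OPENLLM_URL = "http://localhost:3000/v1/generate"

def build_payload(prompt: str, max_new_tokens: int = 64) -> bytes:
    # Field names (prompt, llm_config, max_new_tokens) are assumptions
    # based on the 0.4.x generate API; check your server's /docs page.
    body = {"prompt": prompt, "llm_config": {"max_new_tokens": max_new_tokens}}
    return json.dumps(body).encode("utf-8")

def generate(prompt: str) -> dict:
    # POST the JSON payload and return the parsed response as-is,
    # rather than guessing at the response schema.
    req = urllib.request.Request(
        OPENLLM_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Usage would look like `generate("What is OpenLLM?")` against a running server; `build_payload` is kept separate so the request body can be inspected without a live endpoint.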



What's Changed
* chore(deps): bump pypa/gh-action-pypi-publish from 1.8.10 to 1.8.11 by dependabot in https://github.com/bentoml/OpenLLM/pull/749
* chore(deps): bump docker/metadata-action from 5.0.0 to 5.2.0 by dependabot in https://github.com/bentoml/OpenLLM/pull/751
* chore(deps): bump taiki-e/install-action from 2.21.19 to 2.21.26 by dependabot in https://github.com/bentoml/OpenLLM/pull/750
* ci: pre-commit autoupdate [pre-commit.ci] by pre-commit-ci in https://github.com/bentoml/OpenLLM/pull/753
* fix(logprobs): explicitly set logprobs=None by aarnphm in https://github.com/bentoml/OpenLLM/pull/757


**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.4.34...v0.4.35

0.4.34

Usage

List all available models: `openllm models`

To start an LLM: `python -m openllm start HuggingFaceH4/zephyr-7b-beta`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it -P -v $PWD/data:$HOME/.cache/huggingface/ ghcr.io/bentoml/openllm:0.4.34 start HuggingFaceH4/zephyr-7b-beta`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



What's Changed
* feat(models): Support qwen by yansheng105 in https://github.com/bentoml/OpenLLM/pull/742

New Contributors
* yansheng105 made their first contribution in https://github.com/bentoml/OpenLLM/pull/742

**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.4.33...v0.4.34

0.4.33

Usage

List all available models: `openllm models`

To start an LLM: `python -m openllm start HuggingFaceH4/zephyr-7b-beta`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it -P -v $PWD/data:$HOME/.cache/huggingface/ ghcr.io/bentoml/openllm:0.4.33 start HuggingFaceH4/zephyr-7b-beta`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.4.32...v0.4.33

0.4.32

Usage

List all available models: `openllm models`

To start an LLM: `python -m openllm start HuggingFaceH4/zephyr-7b-beta`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it -P -v $PWD/data:$HOME/.cache/huggingface/ ghcr.io/bentoml/openllm:0.4.32 start HuggingFaceH4/zephyr-7b-beta`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



What's Changed
* chore(deps): bump taiki-e/install-action from 2.21.17 to 2.21.19 by dependabot in https://github.com/bentoml/OpenLLM/pull/735
* chore(deps): bump github/codeql-action from 2.22.7 to 2.22.8 by dependabot in https://github.com/bentoml/OpenLLM/pull/734
* chore: revert back previous backend support PyTorch by aarnphm in https://github.com/bentoml/OpenLLM/pull/739


**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.4.31...v0.4.32

0.4.31

Usage

List all available models: `openllm models`

To start an LLM: `python -m openllm start HuggingFaceH4/zephyr-7b-beta`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it -P -v $PWD/data:$HOME/.cache/huggingface/ ghcr.io/bentoml/openllm:0.4.31 start HuggingFaceH4/zephyr-7b-beta`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



What's Changed
* fix(docs): remove invalid options by aarnphm in https://github.com/bentoml/OpenLLM/pull/733


**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.4.30...v0.4.31

0.4.30

Usage

List all available models: `openllm models`

To start an LLM: `python -m openllm start HuggingFaceH4/zephyr-7b-beta`

To run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it -P -v $PWD/data:$HOME/.cache/huggingface/ ghcr.io/bentoml/openllm:0.4.30 start HuggingFaceH4/zephyr-7b-beta`

Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)



**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.4.29...v0.4.30


© 2025 Safety CLI Cybersecurity Inc. All Rights Reserved.