## Usage
List all available models: `openllm models`

Start an LLM: `python -m openllm start HuggingFaceH4/zephyr-7b-beta`

Run OpenLLM in a container environment (requires GPUs): `docker run --gpus all -it -P -v $PWD/data:$HOME/.cache/huggingface/ ghcr.io/bentoml/openllm:0.4.35 start HuggingFaceH4/zephyr-7b-beta`
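Once a server is started, it can be queried over HTTP. A minimal sketch, assuming the server listens on the default BentoML port 3000 and exposes an OpenAI-compatible `/v1/chat/completions` endpoint (the port and path are assumptions, not confirmed by these notes):

```python
# Hypothetical client sketch for a locally running OpenLLM server.
# Assumes: server at http://localhost:3000 with an OpenAI-compatible
# chat completions endpoint. Adjust host/port/path to your deployment.
import json
import urllib.request


def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> bytes:
    """Serialize an OpenAI-style chat completion payload to JSON bytes."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload).encode("utf-8")


if __name__ == "__main__":
    req = urllib.request.Request(
        "http://localhost:3000/v1/chat/completions",
        data=build_chat_request("HuggingFaceH4/zephyr-7b-beta", "Hello!"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```

The payload builder is separated out so the request shape can be inspected or reused independently of the transport.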
Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md).
## What's Changed
* chore(deps): bump pypa/gh-action-pypi-publish from 1.8.10 to 1.8.11 by dependabot in https://github.com/bentoml/OpenLLM/pull/749
* chore(deps): bump docker/metadata-action from 5.0.0 to 5.2.0 by dependabot in https://github.com/bentoml/OpenLLM/pull/751
* chore(deps): bump taiki-e/install-action from 2.21.19 to 2.21.26 by dependabot in https://github.com/bentoml/OpenLLM/pull/750
* ci: pre-commit autoupdate [pre-commit.ci] by pre-commit-ci in https://github.com/bentoml/OpenLLM/pull/753
* fix(logprobs): explicitly set logprobs=None by aarnphm in https://github.com/bentoml/OpenLLM/pull/757
**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.4.34...v0.4.35