## Usage
- List all available models: `openllm models`
- Start an LLM: `python -m openllm start HuggingFaceH4/zephyr-7b-beta`
- Run OpenLLM within a container environment (requires GPUs): `docker run --gpus all -it -P -v $PWD/data:$HOME/.cache/huggingface/ ghcr.io/bentoml/openllm:0.4.41 start HuggingFaceH4/zephyr-7b-beta`
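Once a server is started, it can be queried over HTTP. As a minimal sketch (assuming the server listens on the default local port 3000 and exposes an OpenAI-compatible `/v1/chat/completions` endpoint; adjust the host, port, and path for your deployment), a request can be built like this:

```python
import json
import urllib.request

# Hypothetical chat-completion request body for a locally running
# OpenLLM server started with HuggingFaceH4/zephyr-7b-beta.
payload = {
    "model": "HuggingFaceH4/zephyr-7b-beta",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}

req = urllib.request.Request(
    "http://localhost:3000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With a server running, urllib.request.urlopen(req) would send the
# request and return a JSON response containing the generated text.
print(req.get_full_url())
```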
Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md).
## What's Changed
* docs: add notes about dtypes usage. by aarnphm in https://github.com/bentoml/OpenLLM/pull/786
* chore(deps): bump taiki-e/install-action from 2.22.0 to 2.22.5 by dependabot in https://github.com/bentoml/OpenLLM/pull/790
* chore(deps): bump github/codeql-action from 2.22.9 to 3.22.11 by dependabot in https://github.com/bentoml/OpenLLM/pull/794
* chore(deps): bump sigstore/cosign-installer from 3.2.0 to 3.3.0 by dependabot in https://github.com/bentoml/OpenLLM/pull/793
* chore(deps): bump actions/download-artifact from 3.0.2 to 4.0.0 by dependabot in https://github.com/bentoml/OpenLLM/pull/791
* chore(deps): bump actions/upload-artifact from 3.1.3 to 4.0.0 by dependabot in https://github.com/bentoml/OpenLLM/pull/792
* ci: pre-commit autoupdate [pre-commit.ci] by pre-commit-ci in https://github.com/bentoml/OpenLLM/pull/796
* fix(cli): avoid runtime `__origin__` check for older Python by aarnphm in https://github.com/bentoml/OpenLLM/pull/798
* feat(vllm): support GPTQ with 0.2.6 by aarnphm in https://github.com/bentoml/OpenLLM/pull/797
* fix(ci): lock to v3 iteration of `actions/artifacts` workflow by aarnphm in https://github.com/bentoml/OpenLLM/pull/799
**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.4.40...v0.4.41