## Usage
All available models: `python -m openllm.models`

To start an LLM: `python -m openllm start dolly-v2`
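Once a server is running, you can send it a prompt from Python. This is a minimal sketch assuming the server started above is listening on the default address `http://localhost:3000` and using the `openllm.client.HTTPClient` interface referenced in #93; the exact client API may differ between versions.

```python
# Minimal sketch: query a running OpenLLM server with the Python client.
# Assumes `python -m openllm start dolly-v2` is serving on the default
# address (http://localhost:3000); adjust the URL if you changed the port.
import openllm

client = openllm.client.HTTPClient("http://localhost:3000")
response = client.query("Explain what a large language model is in one sentence.")
print(response)
```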
Find more information about this release in the [CHANGELOG.md](https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md)
## What's Changed
* fix: running MPT on CPU by aarnphm in https://github.com/bentoml/OpenLLM/pull/92
* tests: add sanity check for openllm.client by aarnphm in https://github.com/bentoml/OpenLLM/pull/93
* feat: custom dockerfile templates by aarnphm in https://github.com/bentoml/OpenLLM/pull/95
* feat(llm): fine-tuning Falcon by aarnphm in https://github.com/bentoml/OpenLLM/pull/98
* feat: add citation by aarnphm in https://github.com/bentoml/OpenLLM/pull/103
* peft: improve speed and quality by aarnphm in https://github.com/bentoml/OpenLLM/pull/102
* chore: fix mpt loading on single GPU by aarnphm in https://github.com/bentoml/OpenLLM/pull/105
**Full Changelog**: https://github.com/bentoml/OpenLLM/compare/v0.1.19...v0.1.20