## New models
* [Dolphin 2.1 Mistral](https://ollama.ai/library/dolphin2.1-mistral): an instruct-tuned model based on Mistral and trained on a dataset filtered to remove alignment and bias.
* [Zephyr Beta](https://ollama.ai/library/zephyr): the second model in the Zephyr series, based on Mistral, with strong performance that matches and even exceeds Llama 2 70B in several categories. It's trained on a distilled dataset, improving grammar and yielding even better chat results.
## What's Changed
* Multi-line strings can now be pasted into `ollama run`
* Fixed various issues when writing prompts in `ollama run`
* The library models have been refreshed and revamped, including `llama2`, `codellama`, and more:
  * All `chat` and `instruct` models now support setting the `system` parameter, or the `SYSTEM` command in the `Modelfile`
  * Parameters (`num_ctx`, etc.) have been updated for library models
  * Slight performance improvements for all models
* Model storage can now be configured with `OLLAMA_MODELS`. See the [FAQ](https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-can-i-change-where-ollama-stores-models) for configuration details.
* `OLLAMA_HOST` will now default to port `443` when `https://` is specified, and port `80` when `http://` is specified
* Fixed trailing slashes causing an error when using `OLLAMA_HOST`
* Fixed issue where `ollama pull` would retry multiple times when out of space
* Fixed various `out of memory` issues when using Nvidia GPUs
* Fixed a performance regression on AMD CPUs introduced in a previous release
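
As a quick illustration of the `SYSTEM` command mentioned above, a minimal `Modelfile` might look like this (the model name and system prompt are placeholder examples, not part of this release):

```
# Example Modelfile: sets a default system prompt for a chat model
FROM llama2
SYSTEM """You are a concise assistant that answers in one sentence."""
```

You would then build and run it with `ollama create my-assistant -f Modelfile` followed by `ollama run my-assistant`.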
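A sketch of how the `OLLAMA_MODELS` and `OLLAMA_HOST` settings above might be used; the paths and hostname here are illustrative assumptions, not defaults:

```shell
# Store pulled models on a secondary drive (example path)
export OLLAMA_MODELS=/mnt/data/ollama/models
ollama serve

# Point the client at a remote server; the https:// scheme
# now implies port 443 (http:// would imply port 80)
OLLAMA_HOST=https://ollama.example.com ollama run llama2
```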
New Contributors
* ajayk made their first contribution in https://github.com/jmorganca/ollama/pull/855
**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.5...v0.1.6