New improvements
* Using the Ollama CLI while Ollama isn't running will now start Ollama automatically
* Changed the buffer limit so that conversations continue until they are complete
* Models now stay loaded in memory automatically between messages, so a series of prompts is extra fast!
* The white fluffy Ollama icon is back when using dark mode
<img width="463" alt="Screenshot 2023-08-03 at 4 30 02 PM" src="https://github.com/jmorganca/ollama/assets/251292/7b673cd0-9dd8-4de3-bcea-2b1097a0f6d2">
* Ollama will now run on Intel Macs. Compatibility & performance improvements to come
* When running `ollama run`, the `/show` command can be used to inspect the current model (see the example after this list)
* `ollama run` can now take in multi-line strings:
  ```
  % ollama run llama2
  >>> """
  Is this a
  multi-line
  string?
  """
  Thank you for asking! Yes, the input you provided is a multi-line string. It contains multiple lines of text separated by line breaks.
  ```
* More seamless updates: Ollama will now show a subtle hint that an update is ready in the tray menu, instead of a dialog window
* `ollama run --verbose` will now show load duration times (see the sketch after this list)
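For example, `/show` can be run from inside an interactive session to inspect the loaded model. The session below is a sketch: the `/show modelfile` subcommand used here is an assumption, and the subcommands available may vary by version.

```
% ollama run llama2
>>> /show modelfile
(prints the Modelfile used to create the current model)
```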
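And a sketch of the `--verbose` timing output; the values below are illustrative, not measured, and the exact set of fields printed may differ between versions.

```
% ollama run llama2 --verbose
>>> Why is the sky blue?
The sky appears blue because of Rayleigh scattering ...

# illustrative summary: field names and values will differ in practice
total duration: 8.2s
load duration:  1.1s
```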
Bug fixes
* Fixed crashes on Macs with 8GB of shared memory
* Fixed issues in scanning multi-line strings in a `Modelfile`
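For reference, a multi-line string in a `Modelfile` is delimited by triple quotes, as in this minimal sketch (the base model and prompt text are placeholders):

```
# Modelfile sketch: the FROM model and prompt content are placeholder values
FROM llama2

SYSTEM """
You are a helpful assistant.
Keep your answers short.
"""
```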