- The Mistral models now support options: `-o temperature 0.7`, `-o top_p 0.1`, `-o max_tokens 20`, `-o safe_mode 1`, `-o random_seed 12`. [2](https://github.com/simonw/llm-mistral/issues/2)
- Support for the Mistral embeddings model, available via `llm embed -m mistral-embed -c 'text goes here'`. [3](https://github.com/simonw/llm-mistral/issues/3)
- The `--no-stream` option now uses the non-streaming Mistral API.
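The new features above can be combined on the command line; a sketch of possible invocations, assuming a Mistral API key has already been configured (e.g. via `llm keys set mistral`) and that a model such as `mistral-tiny` is installed by the plugin:

```shell
# Pass model options with -o (temperature, max_tokens, etc.):
llm -m mistral-tiny -o temperature 0.7 -o max_tokens 20 'A haiku about otters'

# Embed a string with the Mistral embeddings model:
llm embed -m mistral-embed -c 'text goes here'

# Force a single non-streaming API call instead of streaming:
llm -m mistral-tiny --no-stream 'Say hello'
```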