llm-mistral

Latest version: v0.3.1

0.3.1

- No longer raises an error if you run `llm models` without first setting a Mistral API key. [#6](https://github.com/simonw/llm-mistral/issues/6)
- Mixtral 8x22b is now available as `llm -m mistral/open-mixtral-8x22b 'say hello'`. New installations will get this model automatically - if you do not see the model in the `llm models` list you should run `llm mistral refresh` to update your local cache of available models. [#7](https://github.com/simonw/llm-mistral/issues/7)

0.3

- Support for the new [Mistral Large](https://mistral.ai/news/mistral-large/) model - `llm -m mistral-large "prompt goes here"`. [#5](https://github.com/simonw/llm-mistral/issues/5)
- All Mistral API models are now supported automatically - LLM fetches a list of models from their API the first time the plugin is installed, and that list can be refreshed at any time using the new `llm mistral refresh` command.
- When using the Python API a model key can now be set using `model.key = '...'` - thanks, [Alexandre Bulté](https://github.com/abulte). [#4](https://github.com/simonw/llm-mistral/pull/4)

0.2

- Mistral LLM models now support options: `-o temperature 0.7`, `-o top_p 0.1`, `-o max_tokens 20`, `-o safe_mode 1`, `-o random_seed 12`. [#2](https://github.com/simonw/llm-mistral/issues/2)
- Support for the Mistral embeddings model, available via `llm embed -m mistral-embed -c 'text goes here'`. [#3](https://github.com/simonw/llm-mistral/issues/3)
- The `--no-stream` option now uses the non-streaming Mistral API.

0.1

- Initial release. Provides models `mistral-tiny`, `mistral-small` and `mistral-medium` via the [Mistral API](https://docs.mistral.ai/). [#1](https://github.com/simonw/llm-mistral/issues/1)
