languagemodels

Latest version: v0.20.0


0.14.0

Changed

- Simplified dialogstudio system message

Fixed

- Add missing instruction to the openchat prompt

0.13.0

Changed

- Improve search speed when searching many documents
- Reduce memory usage for large document embeddings
- Update to TinyLlama Chat v1.0
- Remove automatic model scaling on Colab
- Correct phi-1.5 prompt format
- Correct model license metadata

Added

- Add Mistral-7B-Instruct-v0.2 model
- Add openchat-3.5-1210 model
- Add phi-2 model
- Support static batching by passing lists to `do`
- Support choices list on `do` to restrict possible outputs
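The two `do` additions above can be sketched roughly as follows. This is a toy stand-in illustrating the API shape (batched input and restricted output), not the library's real implementation; the echo behavior and choice-selection logic are assumptions:

```python
def do(prompt, choices=None):
    """Toy stand-in for `do`, illustrating batching and choice restriction."""
    def answer(p):
        if choices:
            # Restrict possible outputs to the supplied choices
            # (a real model would pick the most likely choice).
            return choices[0]
        return f"echo: {p}"

    if isinstance(prompt, list):
        # Static batching: a list of prompts yields a list of results.
        return [answer(p) for p in prompt]
    return answer(prompt)
```

With this shape, `do(["a", "b"])` returns one result per prompt, and `do("Is water wet?", choices=["Yes", "No"])` is guaranteed to return one of the listed choices.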

0.12.0

Changed

- Remove explicit setuptools dependency (see [CTranslate2#1526](https://github.com/OpenNMT/CTranslate2/pull/1526))

Fixed

- Reduce model size when not using a GPU in Colab

0.11.0

Changed

- Default to 8GB model size on Colab
- Allow 2048 token response by default on Colab
- Use Colab GPU by default if available
- Skip returning prompt for decoder-only models
- Ensure whitespace is removed from decoder-only outputs
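The decoder-only output handling above might look something like this sketch; the exact trimming logic is an assumption:

```python
def postprocess(prompt, raw_output):
    """Clean raw decoder-only output: drop the echoed prompt, strip whitespace."""
    if raw_output.startswith(prompt):
        # Decoder-only models echo the prompt before their completion;
        # skip it so only the generated text is returned.
        raw_output = raw_output[len(prompt):]
    return raw_output.strip()
```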

Added

- Add neural-chat-7b-v3-1 as default 8GB model
- Add max_tokens config option
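A minimal sketch of how a `max_tokens` config option could be exposed; the dict-like interface, validation rule, and values shown here are assumptions, not the library's actual config implementation:

```python
class Config(dict):
    """Dict-like config that validates max_tokens on assignment."""
    def __setitem__(self, key, value):
        if key == "max_tokens" and (not isinstance(value, int) or value <= 0):
            raise ValueError("max_tokens must be a positive integer")
        super().__setitem__(key, value)

config = Config()
config["max_tokens"] = 200    # cap on response length
config["max_tokens"] = 2048   # e.g. allow longer responses on Colab
```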

0.10.0

Added

- Add gte-tiny embedding model
- Properly support Python 3.12

Fixed

- Remove extra classification prompt when classifying with generative models
- Prevent doubling of special tokens during classification
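The special-token fix above can be illustrated with a small sketch: wrap a prompt in special tokens only if they are not already present, so they are never doubled. The token strings and helper name are illustrative, not the library's internals:

```python
def add_special_tokens(prompt, bos="<s>", eos="</s>"):
    """Wrap a prompt in special tokens, skipping any already present."""
    if not prompt.startswith(bos):
        prompt = bos + prompt
    if not prompt.endswith(eos):
        prompt = prompt + eos
    return prompt
```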

0.9.0

Changed

- Use per-model instruction formats
- Batch chunk embeddings for faster embedding of large documents

Added

- Automatically use query prefixes as needed for embeddings
- Add phi-1.5 model
- Add dialogstudio base model
- Add support for gte-small embeddings
- Add support for bge-small-en embeddings
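Automatic query prefixing might be sketched as below. Some embedding models (e.g. bge-small-en) expect an instruction prefix on queries but not on documents, while others (e.g. the gte family) need none; the exact prefix strings and lookup are illustrative assumptions:

```python
# Illustrative per-model query prefixes; prefix text is an assumption.
QUERY_PREFIXES = {
    "bge-small-en": "Represent this sentence for searching relevant passages: ",
    "gte-small": "",
}

def prepare_query(model_name, query):
    """Prepend the model-specific query prefix, if the model requires one."""
    return QUERY_PREFIXES.get(model_name, "") + query
```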

Fixed

- Allow token suppression on decoder-only models
- Remove HTML comments appearing in some wiki pages
