onprem

Latest version: v0.11.1


0.11.1

new:
- N/A

changed:
- support folder creation in `utils.download` (149)
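
The folder-creation change can be sketched as follows; `ensure_parent` and `download` here are illustrative helpers, not onprem's actual `utils.download` internals:

```python
# Hedged sketch of folder creation before download (names are
# illustrative, not onprem's actual code).
import os
import urllib.request

def ensure_parent(destination):
    """Create the destination's parent folder(s) if they do not exist."""
    folder = os.path.dirname(destination)
    if folder:
        os.makedirs(folder, exist_ok=True)  # idempotent: ok if it already exists
    return destination

def download(url, destination):
    """Fetch `url` to `destination`, creating parent folders as needed."""
    urllib.request.urlretrieve(url, ensure_parent(destination))
```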

fixed:
- pin to `sentence_transformers<4` due to SetFit bug

0.11.0

new:
- add `LLM.set_store_type` method (147)

changed:
- Default model changed to Zephyr-7b-beta (148)
- remove `LLM.ask_with_memory` (146)

fixed:
- source is empty during OCR (144)

0.10.1

new:
- N/A

changed:
- N/A

fixed:
- ensure chat returns string response (142)
- revert to `dense` as default `store_type` (143)

0.10.0

new:
- support for custom metadata in vectorstore (126)
- basic full-text indexing (132)
- support for using sparse vector stores with `LLM.ask` (136)
- support complex RAG filtering (137)
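
The new metadata and filtering features amount to selecting documents by metadata constraints before retrieval. A minimal, self-contained illustration (this is not onprem's API; the field names and documents are made up):

```python
# Illustrative sketch of metadata-based filtering for RAG.
def matches(metadata, filters):
    """True if every filter key/value pair matches the metadata dict."""
    return all(metadata.get(key) == value for key, value in filters.items())

docs = [
    {"text": "Q3 report", "metadata": {"year": 2023, "kind": "report"}},
    {"text": "Q4 memo", "metadata": {"year": 2024, "kind": "memo"}},
]
# Keep only documents whose metadata satisfies every constraint.
hits = [d for d in docs if matches(d["metadata"], {"year": 2024, "kind": "memo"})]
```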

changed:
- **Breaking Changes**: Use sparse vector stores as default (141)
- **Breaking Changes**: `LLM.chat` renamed to `LLM.ask_with_memory`. `LLM.chat` is now a simple conversational chatbot (no RAG) (138)
- **Breaking Changes**: refactor vectorstore (133, 1e84f46)
- **Breaking Changes**: Vector stores are stored within a subfolder of `LLM.vectordb_path` (either `dense` or `sparse`) (140)
- use `os.walk` instead of `glob` for `extract_files` and remove dot from extensions (127)
- Add `batch_size` parameter to `LLM.ingest` (128)
- use generators in `load_documents` (129)
- Changed `split_list` to `batch_list`
- explicitly define available metadata types (131)
- use GPU for embeddings by default, if available (135)
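
The `batch_list` rename and the move to generators in `load_documents` both point at lazy chunking during ingestion. A minimal sketch of a generator-based batching helper (the actual onprem implementation may differ):

```python
# Yield fixed-size chunks lazily instead of materializing every batch
# up front (illustrative sketch of a `batch_list`-style helper).
def batch_list(items, batch_size):
    """Yield successive `batch_size`-sized chunks from `items`."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```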

fixed:
- Use `load_vectordb` to load vector database in `LLM.query` (130)
- disable progress bar for `pdf_markdown` (due to notebook issue) (134)
- fix bugs and add tests for vector store updates (139)

0.9.0

new:
- Support for using self-ask prompt strategy with RAG (120)
- Improved table understanding when invoking `LLM.ask` (124)
- helpers for document metadata (121)

changed:
- Added `k` and `score_threshold` arguments to `LLM.ask` (122)
- Added `n_proc` parameter to control the number of CPUs used by `LLM.ingest` (ee09807)
- Upgrade version of `chromadb` (125)
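
The `k` and `score_threshold` arguments correspond to a common retrieval pattern: drop low-scoring matches, then keep the top k. An illustrative, self-contained sketch (not onprem's internals):

```python
# Keep at most `k` results whose similarity score clears a threshold
# (hedged sketch of k/score_threshold-style retrieval filtering).
def filter_results(scored_docs, k=4, score_threshold=0.0):
    """scored_docs: list of (doc, score) pairs, higher score = better."""
    kept = [(doc, score) for doc, score in scored_docs if score >= score_threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)  # best matches first
    return kept[:k]
```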

fixed:
- Ensure table-processing is sequential and not parallelized (123)
- Fixes to support newer version of `langchain_community` (125)

0.8.0

new:
- Added `HFClassifier` to `pipelines.classifier` module (119)
- Added `SKClassifier` to `pipelines.classifier` module (118)
- `sk` "helper" module to fit simple scikit-learn text models (117)
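
`SKClassifier` and the `sk` helper module wrap simple scikit-learn text models. A hedged sketch of the kind of pipeline involved (the pipeline choices and data here are assumptions, not onprem's code):

```python
# Illustrative scikit-learn text-classification pipeline: TF-IDF
# features feeding a logistic-regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product", "terrible service", "love this product", "awful experience"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
pred = model.predict(["really great product"])[0]
```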

changed:
- Added `process_documents` function (117)

fixed:
- Pass `autodetect_encoding` argument to `TextLoader` (116)
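
`autodetect_encoding` tells LangChain's `TextLoader` to fall back to encoding detection when the default decoding fails. A hypothetical helper illustrating the fallback idea (not onprem's or LangChain's code):

```python
# Try each candidate encoding in turn; return the first successful decode.
def read_text(path, encodings=("utf-8", "latin-1")):
    for encoding in encodings:
        try:
            with open(path, encoding=encoding) as f:
                return f.read()
        except UnicodeDecodeError:
            continue  # decoding failed; try the next candidate
    raise ValueError(f"could not decode {path} with any of {encodings}")
```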

