local-rag-llm

Latest version: v0.0.21


0.0.15

Fixed
* Fixed a bug in `pgdump`

0.0.14

Added
* Added Dockerfiles and usage instructions
* Added functions to dump and restore vector databases (see the sketch below)
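
The changelog doesn't name the dump/restore functions, but since 0.0.15 fixes a bug in `pgdump`, the vector store appears to be Postgres-backed (e.g., pgvector). A minimal sketch of the general approach, with hypothetical wrapper names (`dump_vector_db`, `restore_vector_db`) around the standard `pg_dump`/`pg_restore` CLI tools, not the package's actual API:

```python
# Illustrative sketch only: function names and parameters are hypothetical.
# Assumes a Postgres-backed vector store and the pg_dump / pg_restore CLI
# tools on the PATH.
import subprocess

def dump_vector_db(db_name: str, out_path: str, user: str = "postgres",
                   host: str = "localhost", port: int = 5432) -> None:
    """Dump the database to a custom-format archive file."""
    subprocess.run(
        ["pg_dump", "-U", user, "-h", host, "-p", str(port),
         "-F", "c", "-f", out_path, db_name],
        check=True,
    )

def restore_vector_db(db_name: str, dump_path: str, user: str = "postgres",
                      host: str = "localhost", port: int = 5432) -> None:
    """Restore the archive into an existing database, replacing objects."""
    subprocess.run(
        ["pg_restore", "-U", user, "-h", host, "-p", str(port),
         "-d", db_name, "--clean", "--if-exists", dump_path],
        check=True,
    )
```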

0.0.13

Fixed
* Fixed a bug where `memory_limit` in chat mode would overload the context window and cause generation to fail.
* `context_window` is now set only at LLM instantiation; it was removed from `gen_response`, where it had no effect (see the sketch below).
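
A brief sketch of what the corrected usage implies. The class names (`LocalLLM`, `RAGModel`) are hypothetical; only `gen_response` and the `context_window`/`memory_limit` parameters come from the changelog:

```python
# Hypothetical API sketch: class names are illustrative, not the package's
# documented classes.
llm = LocalLLM(
    model_path="model.gguf",
    context_window=4096,  # as of 0.0.13, set only here, at instantiation
)

model = RAGModel(llm=llm, memory_limit=2048)  # chat memory bounded safely

# `gen_response` no longer accepts `context_window`; passing it there
# previously had no effect.
response = model.gen_response("What does the corpus say about X?")
```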

0.0.12

Added
* Split the LLM from the vector DB/chat engine model, so multiple separate model objects can now share the same LLM. Temperature, context window, max new tokens, system prompt, etc. can also be changed at inference time via `model.gen_response()` (see the sketch below).
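
A sketch of what the decoupling enables: one LLM object reused by two model objects, with generation parameters overridden per call. Only `gen_response` is named in the changelog; the other names (`LocalLLM`, `RAGModel`, `corpus_dir`) are assumptions:

```python
# Hypothetical names throughout, except `gen_response`.
llm = LocalLLM(model_path="mistral-7b.gguf", context_window=4096)

# Two independent vector DB / chat engine models sharing the same LLM weights.
papers = RAGModel(llm=llm, corpus_dir="papers/")
notes = RAGModel(llm=llm, corpus_dir="notes/")

# Generation parameters can be overridden per call rather than per model.
answer = papers.gen_response(
    "Summarize the main findings.",
    temperature=0.2,
    max_new_tokens=512,
    system_prompt="Answer concisely from the retrieved context.",
)
```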

0.0.11

Added
* Added automatic handling of CSV data files by converting them to chunked Markdown tables (a sketch of the general approach follows).
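
A minimal sketch of the general technique, not the package's actual implementation: read a CSV, then emit Markdown tables of at most `rows_per_chunk` rows each, repeating the header in every chunk so each chunk is self-describing for retrieval:

```python
# Sketch of CSV -> chunked Markdown tables; names and chunk size are
# illustrative, not taken from the package.
import csv

def csv_to_markdown_chunks(path: str, rows_per_chunk: int = 50) -> list[str]:
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    # Markdown header row plus separator row, repeated in every chunk.
    head_md = "| " + " | ".join(header) + " |\n"
    head_md += "| " + " | ".join("---" for _ in header) + " |\n"
    chunks = []
    for i in range(0, len(body), rows_per_chunk):
        table = head_md + "".join(
            "| " + " | ".join(r) + " |\n" for r in body[i:i + rows_per_chunk]
        )
        chunks.append(table)
    return chunks
```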

0.0.10

Added
* Added a `streaming` option to `.gen_response` (usage sketch below).
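
A hedged usage sketch: whether `gen_response(streaming=True)` yields tokens directly, as assumed here, or returns a response object wrapping a token stream is not specified by the changelog:

```python
# Assumption: streaming mode yields text fragments as they are generated.
for token in model.gen_response("Explain the dataset.", streaming=True):
    print(token, end="", flush=True)
```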
