# SemantiCache

Latest version: v0.1.1

## 0.1.1

Super excited to announce the release of **SemantiCache** v0.1.1, a semantic caching library designed to optimize query-response handling in LLM applications.

### Key Features
- **Vector-based Caching:** Leverage FAISS and HuggingFace embeddings for efficient similarity search.
- **Automatic Cache Management:** Supports TTL and size-based trimming to maintain optimal cache performance.
- **Leaderboard Tracking:** Easily monitor the most frequently accessed queries.
- **Persistent Storage:** Maintain cache state across sessions with robust file-based persistence.
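
To show where these pieces fit in an LLM application, here is a minimal cache-aside sketch: look the query up first, and only call the model on a miss. `ask_llm` is a hypothetical stand-in for your model client, the constructor arguments mirror the Quick Start below, and it assumes `get` returns a falsy value (e.g. `None`) on a miss.

```python
from semanticache import SemantiCache

# Same constructor arguments as the Quick Start below.
cache = SemantiCache(
    trim_by_size=True,
    cache_path="./sem_cache",
    config_path="./sem_config",
    cache_size=100,
    ttl=3600,
    threshold=0.1,
    leaderboard_top_n=5,
    log_level="INFO"
)

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder for your actual model call (API client, local model, etc.).
    return "Paris"

def answer(query: str) -> str:
    # Cache-aside: return a semantically similar cached response if one exists...
    cached = cache.get(query)
    if cached:  # assumes a miss returns a falsy value such as None
        return cached
    # ...otherwise call the model and cache the fresh answer for next time.
    fresh = ask_llm(query)
    cache.set(query, fresh)
    return fresh

print(answer("What is the capital of France?"))
```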

### Installation
Install via pip:

```sh
pip install semanticache
```

View on [PyPI](https://pypi.org/project/SemantiCache/0.1.1/)

### Quick Start

```python
from semanticache import SemantiCache

# Initialize the cache
cache = SemantiCache(
    trim_by_size=True,
    cache_path="./sem_cache",
    config_path="./sem_config",
    cache_size=100,
    ttl=3600,
    threshold=0.1,
    leaderboard_top_n=5,
    log_level="INFO"
)

# Store a query-response pair
cache.set("What is the capital of France?", "Paris")

# Retrieve a cached response
response = cache.get("What is the capital of France?")
print(response)  # Output: Paris
```
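
Because lookups are similarity-based rather than exact-match, a close paraphrase of a cached query can also return the stored answer. Whether the sketch below actually hits depends on the embedding model and the configured `threshold`, and it assumes `get` returns a falsy value (e.g. `None`) on a miss.

```python
# A semantically similar, but not identical, query. Whether this returns the
# cached "Paris" depends on the embeddings and the configured threshold.
print(cache.get("Which city is the capital of France?"))

# An unrelated query should miss; check the result before using it
# (assumes a miss returns a falsy value such as None).
result = cache.get("How tall is Mount Everest?")
if not result:
    print("Cache miss: call the LLM and cache.set() the fresh answer.")
```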


### Documentation & Contributions
For detailed documentation, refer to the [SemantiCache Docs](https://github.com/theabrahamaudu/SemantiCache/blob/main/docs/SemantiCacheDocs.md).

Contributions and suggestions for improvements (like additional tests and support for alternate vector engines) are welcome.

### Acknowledgments & License
Built on top of **FAISS**, **HuggingFace**, and **LangChain Community**. SemantiCache is licensed under the GNU General Public License v3.

Enjoy a smarter caching experience for your LLM apps with SemantiCache!

## 0.1.0

Super excited to announce the initial release of **SemantiCache**, a semantic caching library designed to optimize query-response handling in LLM applications.

### Key Features
- **Vector-based Caching:** Leverage FAISS and HuggingFace embeddings for efficient similarity search.
- **Automatic Cache Management:** Supports TTL and size-based trimming to maintain optimal cache performance.
- **Leaderboard Tracking:** Easily monitor the most frequently accessed queries.
- **Persistent Storage:** Maintain cache state across sessions with robust file-based persistence.
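
As a rough sketch of the Automatic Cache Management feature, the two configurations below contrast size-based trimming with TTL-driven expiry. It assumes `trim_by_size` toggles between the two strategies; the remaining arguments are the ones shown in the Quick Start below.

```python
from semanticache import Cache

# Size-based trimming: the cache is trimmed back down to cache_size entries.
size_trimmed = Cache(
    trim_by_size=True,
    cache_path="./sem_cache_size",
    config_path="./sem_config_size",
    cache_size=50,   # keep at most ~50 entries
    ttl=3600,
    threshold=0.1,
    leaderboard_top_n=5,
    log_level="INFO"
)

# TTL-driven expiry: entries older than ttl seconds are dropped
# (assuming trim_by_size=False switches trimming to the TTL strategy).
ttl_trimmed = Cache(
    trim_by_size=False,
    cache_path="./sem_cache_ttl",
    config_path="./sem_config_ttl",
    cache_size=100,
    ttl=600,         # expire entries after 10 minutes
    threshold=0.1,
    leaderboard_top_n=5,
    log_level="INFO"
)
```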

### Installation
Install via pip:

```sh
pip install semanticache
```

View on [PyPI](https://pypi.org/project/SemantiCache/0.1.0/)

### Quick Start

```python
from semanticache import Cache

# Initialize the cache
cache = Cache(
    trim_by_size=True,
    cache_path="./sem_cache",
    config_path="./sem_config",
    cache_size=100,
    ttl=3600,
    threshold=0.1,
    leaderboard_top_n=5,
    log_level="INFO"
)

# Store a query-response pair
cache.set("What is the capital of France?", "Paris")

# Retrieve a cached response
response = cache.get("What is the capital of France?")
print(response)  # Output: Paris
```
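
To illustrate the Persistent Storage feature above, here is a hedged sketch assuming that state written under `cache_path` and `config_path` survives re-instantiation, so a fresh `Cache` object pointed at the same directories can serve entries stored in an earlier session.

```python
from semanticache import Cache

# Session 1: create the cache and store an answer.
cache = Cache(
    trim_by_size=True,
    cache_path="./sem_cache",
    config_path="./sem_config",
    cache_size=100,
    ttl=3600,
    threshold=0.1,
    leaderboard_top_n=5,
    log_level="INFO"
)
cache.set("What is the capital of France?", "Paris")

# Session 2 (e.g. a new process): a fresh instance pointed at the same
# cache_path/config_path should still serve the stored entry, provided
# the TTL has not expired.
cache = Cache(
    trim_by_size=True,
    cache_path="./sem_cache",
    config_path="./sem_config",
    cache_size=100,
    ttl=3600,
    threshold=0.1,
    leaderboard_top_n=5,
    log_level="INFO"
)
print(cache.get("What is the capital of France?"))  # Expected output: Paris
```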


### Documentation & Contributions
For detailed documentation, refer to the [SemantiCache Docs](https://github.com/theabrahamaudu/SemantiCache/blob/main/docs/SemantiCacheDocs.md).

Contributions and suggestions for improvements (like additional tests and support for alternate vector engines) are welcome.

### Acknowledgments & License
Built on top of **FAISS**, **HuggingFace**, and **LangChain Community**. SemantiCache is licensed under the GNU General Public License v3.

Enjoy a smarter caching experience for your LLM apps with SemantiCache!
