openai-ratelimiter

Latest version: v0.7


0.7

- Added in-memory caching (async only).
**Full Changelog**: https://github.com/Youssefbenhammouda/openai-ratelimiter/compare/v0.6.1...v0.7

0.6.1

What's Changed
* docs(readme): Add legal disclaimer section explaining affiliation with OpenAI by Elijas in https://github.com/Youssefbenhammouda/openai-ratelimiter/pull/4
* Should "asyncio.sleep" be here instead of "time.sleep"? by Elijas in https://github.com/Youssefbenhammouda/openai-ratelimiter/pull/6

New Contributors
* Elijas made their first contribution in https://github.com/Youssefbenhammouda/openai-ratelimiter/pull/4

**Full Changelog**: https://github.com/Youssefbenhammouda/openai-ratelimiter/compare/v0.5...v0.6.1

0.5

Removed versioning for dependencies.

0.4

openai-ratelimiter is a Python library offering a simple and efficient way to prevent hitting OpenAI API rate limits. Supporting both synchronous and asynchronous programming paradigms, the package provides classes like ChatCompletionLimiter, TextCompletionLimiter, and their asynchronous equivalents. The current version supports only Redis as the caching service and has been tested with Python 3.11.4.

Key methods available include clear_locks() to remove all current model locks and is_locked() to check if a request would be locked based on given parameters.

Planned future work includes in-memory caching, rate limiting for other model types such as embeddings and DALL·E image models, exposing more information about the limiter's current state, and organization-level rate limiting.

Contributions to enhance this library are highly welcomed. The library has been developed and maintained by Youssef Benhammouda.

To install the library, please refer to the instructions on the main page of the repository.
