async-openai

Latest version: v0.0.52

0.0.52

- Added support for the following parameters in `model_configurations` in `OpenAIManager` (see the sketch after this list):

  - `ping_timeout` - allows a custom healthcheck timeout for each client.

  - `included_models` - allows more flexible selection of models in Azure.

  - `weight` - allows weighted selection of clients.
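
A minimal sketch of how these options might be wired up, assuming a `configure`-style entrypoint; the `ping_timeout`, `included_models`, and `weight` keys come from this release, while the client names and surrounding shape are illustrative assumptions, not a confirmed API:

```python
from async_openai import OpenAIManager

# Illustrative sketch: only the three per-client keys below are named in
# this release; the client names and `configure` signature are assumptions.
OpenAIManager.configure(
    model_configurations = {
        "az-primary": {
            "ping_timeout": 2.0,  # per-client healthcheck timeout, in seconds
            "included_models": ["gpt-4", "gpt-35-turbo"],  # models this Azure client serves
            "weight": 0.7,  # selected ~70% of the time in weighted rotation
        },
        "az-fallback": {
            "ping_timeout": 5.0,
            "included_models": ["gpt-35-turbo"],
            "weight": 0.3,
        },
    },
)
```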

- Improved healthcheck behavior: successful healthchecks are now cached for a period of time, while failures are always rechecked.

- Added `dimension` parameter for `embedding` models.
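
A sketch of requesting reduced-dimension embeddings with the new parameter; the `async_create` call shape follows the library's usual resource pattern, but the exact signature is an assumption:

```python
import asyncio
from async_openai import OpenAIManager

async def main():
    # `dimension` is the parameter added in this release; the rest of the
    # call is an assumed sketch.
    result = await OpenAIManager.embeddings.async_create(
        model = "text-embedding-3-small",
        input = "hello world",
        dimension = 256,  # request smaller embedding vectors
    )
    print(len(result.data[0].embedding))

asyncio.run(main())
```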

0.0.51rc

- Modified `async_openai.types.context.ModelContextHandler` to be a proxied singleton object.

- Began adding support for external providers, such as `together`, to allow usage in conjunction with `OpenAI` models (WIP).

- Reworked `api_resource` and `root_name` in `Route` objects to be settable during initialization, allowing more flexibility for external providers.

- Added multi-API-key support for external providers, allowing automatic rotation between API keys (see the sketch below).
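
Since external-provider support is still WIP, the following is a purely hypothetical sketch of what multi-key rotation might look like; none of the keyword names below are confirmed:

```python
from async_openai import OpenAIManager

# Hypothetical sketch only: external-provider support is WIP, so the
# `provider_configurations` key and its shape are illustrative assumptions.
OpenAIManager.configure(
    provider_configurations = {
        "together": {
            "api_keys": [  # multiple keys; the client rotates between them automatically
                "tok-key-1",
                "tok-key-2",
            ],
        },
    },
)
```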

0.0.50

**Breaking Changes**

- The `OpenAI` client has been refactored from a `Type` object into a singleton `ProxyObject`.

Currently, this API is accessible with `async_openai.OpenAIManager`, which provides all the existing functionality of the `OpenAI` client, with a few additional features.

- `OpenAIManager` supports automatic proxy rotation and client selection based on available models.

- `OpenAIManager` supports automatic retrying of failed requests. With `auto_healthcheck_enabled`, it also runs a healthcheck prior to each request and rotates to another endpoint if the current one is unavailable, which is useful for ensuring high availability and reliability of the API. A configuration sketch follows below.
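
A sketch of enabling these behaviors; `auto_healthcheck_enabled` is named above, while the other keyword names are assumptions:

```python
from async_openai import OpenAIManager

OpenAIManager.configure(
    api_key = "sk-...",
    auto_healthcheck_enabled = True,  # healthcheck before each request; rotate endpoints on failure
    max_retries = 3,  # assumed name for the automatic-retry setting
)
```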

Future versions will deprecate the `OpenAI` client in favor of the `OpenAIManager` object.

- Added a new `OpenAIFunctions` class, which provides a robust interface for creating and running functions. This class is also a singleton `ProxyObject`.

This can be accessed through the `OpenAIManager.functions` object, as sketched below.
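
A hypothetical sketch of invoking a function through it; only the `OpenAIManager.functions` accessor is named above, so the method name and arguments below are assumptions:

```python
import asyncio
from pydantic import BaseModel
from async_openai import OpenAIManager

class Weather(BaseModel):
    """Get the current weather for a city."""
    city: str
    unit: str = "celsius"

async def main():
    # Assumed interface: pass a pydantic model as the function schema and
    # let the manager handle the request/parse loop.
    result = await OpenAIManager.functions.execute(
        function = Weather,
        messages = [{"role": "user", "content": "Weather in Paris?"}],
    )
    print(result)

asyncio.run(main())
```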

0.0.41

**Update to Latest OpenAI API**

This version updates the client to the latest version of OpenAI's API, which includes the following changes:

- Addition of `gpt-4-turbo` models.

- Added additional supported parameters to the `chat` endpoint. We maintain v1 parameters for `azure` endpoints but pass through the new parameters for `openai` endpoints.

- Added gradual support for `tools` (see the sketch below).
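
A sketch of passing `tools` through the `chat` endpoint; the tool schema follows OpenAI's tool-calling format, while the client call shape is assumed:

```python
import asyncio
from async_openai import OpenAI

async def main():
    # `tools` is passed through for `openai` endpoints; the schema below
    # follows OpenAI's tool-calling format.
    response = await OpenAI.chat.async_create(
        model = "gpt-4-turbo",
        messages = [{"role": "user", "content": "What is 2 + 2?"}],
        tools = [{
            "type": "function",
            "function": {
                "name": "add",
                "description": "Add two numbers",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "a": {"type": "number"},
                        "b": {"type": "number"},
                    },
                    "required": ["a", "b"],
                },
            },
        }],
    )
    print(response)

asyncio.run(main())
```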

**Updates**

- Reworked validation of `models`: validation is no longer performed, and the user is expected to pass the correct model name.

- No longer supporting `validate_max_tokens`, as there are now many different schemas for `max_tokens` depending on the model.

0.0.40

**Potentially Breaking Changes**

This version introduces full compatibility with `pydantic v1/v2`, where previous versions would only work with `pydantic v1`. Auto-detection and handling of deprecated `pydantic` model methods are handled by `lazyops`, and require `lazyops >= 0.2.60`.

With `pydantic v2` support, there should be a slight performance increase in parsing `pydantic` objects, although the majority of the time is spent waiting for the API to respond.

Additionally, support has been added for handling the response like a `dict` object, so you can access fields as `response['choices']` in addition to `response.choices`, as shown below.
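
For example, both styles below resolve to the same field (the `create` call shape follows the library's usual sync pattern and is an assumption):

```python
from async_openai import OpenAI

response = OpenAI.chat.create(
    model = "gpt-3.5-turbo",
    messages = [{"role": "user", "content": "Hello"}],
)
print(response.choices[0])     # attribute access
print(response["choices"][0])  # equivalent dict-style access
```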

0.0.36

**Additions**

- Added auto-parsing of `pydantic` objects in `function_call` parameters, returning the same object schema in `chat_response.function_result_objects` (sketched below).
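
A sketch of the round trip, assuming a pydantic model can be supplied as the function schema; `chat_response.function_result_objects` is named above, while the exact `functions`/`function_call` argument shapes are assumptions:

```python
from pydantic import BaseModel
from async_openai import OpenAI

class Sentiment(BaseModel):
    label: str
    score: float

# Assumed call shape: supply the pydantic model as the function schema and
# force the model to call it.
chat_response = OpenAI.chat.create(
    model = "gpt-3.5-turbo",
    messages = [{"role": "user", "content": "I love this library!"}],
    functions = [Sentiment],
    function_call = {"name": "Sentiment"},
)

# Results are auto-parsed back into `Sentiment` instances.
for obj in chat_response.function_result_objects:
    print(obj.label, obj.score)
```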
