ibm-generative-ai

Latest version: v3.0.0

Page 7 of 9

0.3.2

Not secure
What's Changed

:bug: Bug fixes
- Correctly handle async errors and process aborts

:wrench: Configuration Changes
- Increase async generate/tokenize retry limits from 3 to 5
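A retry limit like this can be sketched with a plain retry loop. This is a conceptual illustration only; the SDK's actual retry logic (backoff strategy, which errors are retried) may differ, and the `with_retries` helper below is hypothetical:

```python
import time

def with_retries(fn, max_retries=5, base_delay=0.0):
    """Call fn, retrying on failure up to max_retries times.

    Illustrates a configurable retry limit with exponential backoff;
    not the SDK's actual implementation.
    """
    last_exc = None
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise last_exc
```

Raising the limit from 3 to 5 simply gives transient failures two more chances to resolve before the error is surfaced to the caller.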

**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.3.1...v0.3.2

0.3.1

Not secure
What's Changed

:rocket: Features / Enhancements
- Handle concurrency limits for `generate` and `generate_as_completed` methods.
- Add automatic rate-limit handling for the tokenize endpoint (`tokenize_async` method).
- Added a `stop_sequence` field to the generated output (the non-empty token that caused generation to stop) and an `include_stop_sequence` parameter to `GenerateParams`, which indicates whether the stop sequence that ended generation is included in the generated text (the default depends on the model in use).
- Removed the hidden stripping of `stop_sequences` inside the `LangChainInterface`; this behavior can now be controlled via the `include_stop_sequence` parameter.
- Improved general error handling and method signatures (better Python typings).
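The documented semantics of `include_stop_sequence` can be illustrated with a small sketch. This is conceptual only — the actual truncation happens on the API side, and the `apply_stop_sequence` helper below is hypothetical:

```python
def apply_stop_sequence(text, stop_sequences, include_stop_sequence):
    """Truncate generated text at the first stop sequence found.

    Sketch of the documented semantics; the real behavior is
    implemented server-side.
    """
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            # Keep or drop the stop sequence itself depending on the flag.
            end = idx + len(stop) if include_stop_sequence else idx
            return text[:end]
    return text
```

For example, with `stop_sequences=["\n\n"]`, `include_stop_sequence=False` yields the text up to (but not including) the first blank line, while `True` keeps the blank line in the output.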

:bug: Bug fixes
- Fix stacked progress bar (`generate_async` method)
- Handle cases when the package is used inside the `asyncio` environment
- Hide warning when an unknown field is retrieved in the generated response


**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.3.0...v0.3.1

0.3.0

Not secure
What's Changed

:rocket: Features / Enhancements
- Added Hugging Face Agent support; see an [example](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/huggingface_agent.py).
- Drastically improved the speed of the `generate_async` method: the concurrency limit is now inferred automatically from the API (a custom `ConnectionManager.MAX_CONCURRENT_GENERATE` setting is ignored). To slow generation down, pass `max_concurrency_limit=1` (or any other value) to the method.
- Increased the default tokenize processing limit from 5 to 10 requests per second (to be raised further in the future).
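A client-side concurrency cap of this kind can be sketched with an `asyncio.Semaphore`. This is a minimal illustration under stated assumptions — the SDK infers the real limit from the API, and the `generate_all` function below is hypothetical:

```python
import asyncio

async def generate_all(prompts, max_concurrency_limit=1):
    """Run one fake 'generate' per prompt, never exceeding the cap.

    Returns the results (in order) and the peak number of concurrently
    running calls, to show the cap is respected.
    """
    sem = asyncio.Semaphore(max_concurrency_limit)
    active = 0
    peak = 0

    async def generate_one(prompt):
        nonlocal active, peak
        async with sem:  # at most max_concurrency_limit holders at once
            active += 1
            peak = max(peak, active)
            await asyncio.sleep(0.01)  # stand-in for the HTTP call
            active -= 1
            return prompt.upper()

    results = await asyncio.gather(*(generate_one(p) for p in prompts))
    return results, peak
```

Passing `max_concurrency_limit=1` serializes the calls entirely, which is the "slow down" escape hatch the release notes describe.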

:bug: Bug fixes
- Throw on unhandled exceptions during `generate_async` calls.
- Correctly clean up the async HTTP clients when the task is cancelled (for instance, when you call `generate_async` in a Jupyter Notebook and then click the stop button). This should prevent the `Can't have two active async_generate_clients` error.
- Fix async support for newer LangChain versions (`>=0.0.300`)
- Fix LangChain PromptTemplate import warning in newer versions of LangChain
- Correctly handle server errors when streaming
- Fix `tune_methods` method
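The cleanup-on-cancellation fix above follows a standard `try`/`finally` pattern. A minimal stand-in sketch — `FakeClient` is hypothetical; the SDK manages real `httpx` clients internally:

```python
import asyncio

class FakeClient:
    """Stand-in for an async HTTP client (hypothetical)."""
    def __init__(self):
        self.closed = False
    async def aclose(self):
        self.closed = True

async def generate_async(client):
    try:
        await asyncio.sleep(10)      # stand-in for a long-running generate call
    finally:
        await client.aclose()        # always release the client, even on cancel

async def main():
    client = FakeClient()
    task = asyncio.create_task(generate_async(client))
    await asyncio.sleep(0)           # let the task reach its first await
    task.cancel()                    # e.g. the Jupyter stop button
    try:
        await task
    except asyncio.CancelledError:
        pass
    return client.closed

print(asyncio.run(main()))  # True: the client was closed despite cancellation
```

Because the `finally` block runs even when `CancelledError` is raised at the `await`, the client is always closed, so a subsequent call does not find a stale active client.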

0.2.8

Not secure
What's Changed

:rocket: Features / Enhancements

- Added moderation support; you can now retrieve HAP (hate, abuse, profanity) output for generate requests ([example](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/generate_with_moderation.py))
- Internally improved streaming processing under poor or unstable internet connections
- Internally improve server response parsing and error handling
- Add a user-agent header to distinguish Python SDK on the API

:bug: Bug fixes
- LangChain - correct handling of stop_sequences
- Correctly set versions of used dependencies (httpx / pyyaml)
- Prevents unexpected modifications to user's GenerateParams passed to the Model class
- Prevents unexpected errors when GenerateParams contains stream=True and generate (non-stream) version is called

:wrench: Configuration changes
- Remove API version from the API endpoint string

**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.2.7...v0.2.8

0.2.7

Not secure
What's Changed
* feat(langchain) - generate method by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/157
* fix(params): do not strip special characters by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/153
* fix: correct httpx dependency version by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/158

**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.2.6...v0.2.7

0.2.6

Not secure
What's Changed
* feat(langchain): add streaming support by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/144
* feat(http): allow override httpx options by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/149
* feat: add typical_p parameter by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/135
* chore: update examples by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/136
* docs: mention CLI in README by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/143
* chore: adding escaping of backslashes for re.sub value by assaftibm in https://github.com/IBM/ibm-generative-ai/pull/84
* chore: correct README.md typo by ind1go in https://github.com/IBM/ibm-generative-ai/pull/148
* update schema for stop_sequences generate param by mirianfsilva in https://github.com/IBM/ibm-generative-ai/pull/142

New Contributors
* assaftibm made their first contribution in https://github.com/IBM/ibm-generative-ai/pull/84
* ind1go made their first contribution in https://github.com/IBM/ibm-generative-ai/pull/148

**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.2.5...v0.2.6
