ibm-generative-ai

Latest version: v3.0.0


0.6.1

Not secure
What's Changed
* fix: correct llama-index import for new version by David-Kristek in https://github.com/IBM/ibm-generative-ai/pull/243
* fix(examples): correct Hugging Face example prompt by David-Kristek in https://github.com/IBM/ibm-generative-ai/pull/244
* fix: prevent duplicating template with same name by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/245

**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.6.0...v0.6.1

0.6.0

Not secure
What's Changed
* feat(extensions): add support for llamaindex by David-Kristek in https://github.com/IBM/ibm-generative-ai/pull/238
* fix: update aiohttp to support python 3.12 by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/239
* fix: add missing `__init__.py` in package to fix broken import by jezekra1 in https://github.com/IBM/ibm-generative-ai/pull/241
* fix: update maximal local concurrency limit based on API response by Tomas2D in https://github.com/IBM/ibm-generative-ai/pull/242
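
The last fix above caps the client's local concurrency at whatever maximum the API reports. A minimal stdlib sketch of that idea (the function names and limit values are illustrative, not the SDK's actual internals):

```python
import asyncio

def effective_concurrency(local_limit: int, api_reported_limit: int) -> int:
    """Clamp the locally configured concurrency to the server-side maximum."""
    return min(local_limit, api_reported_limit)

async def run_with_limit(coros, limit: int):
    # A semaphore enforces the clamped concurrency across all coroutines.
    sem = asyncio.Semaphore(limit)

    async def guarded(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(guarded(c) for c in coros))

async def main():
    # Locally configured limit of 10, but the API only allows 4 concurrent requests.
    limit = effective_concurrency(local_limit=10, api_reported_limit=4)
    results = await run_with_limit((asyncio.sleep(0, i) for i in range(8)), limit)
    return limit, results

limit, results = asyncio.run(main())
```

Clamping with `min` rather than replacing the local value outright preserves a user-chosen limit that is already stricter than the server's.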

New Contributors
* jezekra1 made their first contribution in https://github.com/IBM/ibm-generative-ai/pull/241

**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.5.2...v0.6.0

0.5.1

Not secure
🐛 Bug fixes
- Add missing rate-limit check for tokenize methods
- Unify error messages between sync and async methods
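
The first fix applies the client's rate-limit check to the tokenize methods as well. Client-side rate limiting of this kind is commonly pictured as a token bucket consulted before each call; the sketch below is an illustration of the general technique, not the SDK's implementation:

```python
import time

class TokenBucket:
    """Minimal token bucket: allow at most `rate` calls per `per` seconds."""

    def __init__(self, rate: int, per: float):
        self.capacity = rate
        self.tokens = float(rate)
        self.per = per
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.updated
        self.tokens = min(self.capacity, self.tokens + elapsed * self.capacity / self.per)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, per=60.0)
decisions = [bucket.allow() for _ in range(3)]
# With two tokens per minute, the first two calls pass and the third is throttled.
```

A real client would sleep or queue instead of refusing outright, but the check itself is the same boolean gate.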


**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.5.0...v0.5.1

0.5.0

Not secure
🚀 Features / Enhancements

- Added integration for LangChain Chat Models; see an example of [generation](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/langchain_chat_generate.py) and [streaming](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/langchain_chat_stream.py).
- Added support for LangChain Model Serialization (saving and loading models); [see an example](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/langchain_serialization.py).
- Added support for the Chat endpoint in `Model` class; see an [example](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/chat.py).
- Added support for new moderation models (HAP, STIGMA, Implicit Hate); not yet released on the API, but available soon.
- Added type validation for the `input_tokens` property in the generate response.
- Extended LangChain generation information / LLM output (`token_usage` structure, generated tokens, `stop_reason`, `conversation_id`, `created_at`, ...).
- Added an optional `raw_response=True/False` parameter to the `generate_stream`, `generate_as_complete`, and `generate` methods to receive a raw response instead of unwrapped results.
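
The `raw_response` flag follows a common wrapper pattern: return the parsed results by default, or the untouched response envelope when asked. A hedged stdlib sketch (the function name and payload shape are invented for illustration, not the library's actual API):

```python
def generate(prompt: str, raw_response: bool = False):
    # Stand-in for the response body a generation endpoint might return.
    raw = {
        "model_id": "example-model",
        "created_at": "2023-01-01T00:00:00Z",
        "results": [
            {"generated_text": f"echo: {prompt}", "stop_reason": "max_tokens"},
        ],
    }
    if raw_response:
        return raw              # full envelope, including metadata
    return raw["results"]       # unwrapped results only (default behavior)

unwrapped = generate("hi")
full = generate("hi", raw_response=True)
```

Keeping the unwrapped form as the default preserves backward compatibility, while the flag exposes metadata such as `created_at` without a second request.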

🐛 Bug fixes
- The LangChain extension now correctly tokenizes inputs (previously, the GPT-2 tokenizer was used).
- Improved general error handling.


**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.4.1...v0.5.0

0.4.1

Not secure
What's Changed

🐛 Bug fixes
- Correctly handle file responses
- Use `tqdm.auto` instead of `tqdm.tqdm` to improve display in Jupyter Notebooks

**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.4.0...v0.4.1

0.4.0

Not secure
What's Changed

⚠️ Switch to Pydantic V2
- If your application depends on Pydantic V1, refer to the [migration guide](https://docs.pydantic.dev/2.0/migration/).
- If you cannot upgrade, stay on the previous version, 0.3.2.
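
For the upgrade path, the most common breaking changes are renames of model methods and decorators. A small sketch, assuming Pydantic V2 is installed (the model and field names are illustrative):

```python
from pydantic import BaseModel, field_validator

class Prompt(BaseModel):
    text: str

    # Pydantic V1's @validator becomes @field_validator in V2.
    @field_validator("text")
    @classmethod
    def not_empty(cls, v: str) -> str:
        if not v:
            raise ValueError("text must not be empty")
        return v

p = Prompt(text="hello")
# V1's .dict() / .json() become .model_dump() / .model_dump_json() in V2.
data = p.model_dump()
```

Pinning `ibm-generative-ai==0.3.2` alongside `pydantic<2` avoids these changes entirely if migrating is not yet an option.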


**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.3.2...v0.4.0
