**:rocket: Features / Enhancements**
- Added integration for LangChain Chat Models; see the examples of [generation](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/langchain_chat_generate.py) and [streaming](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/langchain_chat_stream.py), and the first sketch after this list.
- Added support for LangChain model serialization (saving and loading models); [see an example](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/langchain_serialization.py) and the serialization sketch below.
- Added support for the Chat endpoint in the `Model` class; see an [example](https://github.com/IBM/ibm-generative-ai/blob/main/examples/user/chat.py) and the `Model.chat` sketch below.
- Added support for new moderation models (HAP, STIGMA, Implicit Hate); these are not yet released on the API but will be available soon.
- Added type validation for the `input_tokens` property in the generate response.
- Extended the LangChain generation information / LLM output (`token_usage` structure, generated tokens, `stop_reason`, `conversation_id`, `created_at`, ...); a sketch of reading this metadata follows below.
- Added an optional `raw_response=True/False` parameter to the `generate_stream` / `generate_as_completed` / `generate` methods to receive a raw response instead of unwrapped results (sketched below).
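
A minimal sketch of the new chat integration follows. The `LangChainChatInterface` class name, the model id, and the `GENAI_KEY` / `GENAI_API` environment variables are assumptions based on the linked examples, which remain authoritative.

```python
import os

from langchain.schema import HumanMessage, SystemMessage

from genai.credentials import Credentials
from genai.extensions.langchain import LangChainChatInterface  # assumed class name
from genai.schemas import GenerateParams

# GENAI_KEY / GENAI_API are the environment variables used by the repository's examples.
credentials = Credentials(os.environ["GENAI_KEY"], api_endpoint=os.environ["GENAI_API"])

chat_model = LangChainChatInterface(
    model="meta-llama/llama-2-70b-chat",  # illustrative model id
    credentials=credentials,
    params=GenerateParams(max_new_tokens=100),
)

# Batch generation over a single conversation; streaming goes through
# LangChain's standard streaming/callback mechanisms (see the linked example).
result = chat_model.generate(messages=[[
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
]])
print(result.generations[0][0].text)
```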
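
Serialization goes through LangChain's own `dumps` / `loads` helpers. A sketch, reusing `credentials` from above; the `secrets_map` key and the `valid_namespaces` value are hypothetical placeholders, so check the linked example for the exact calls.

```python
from langchain.load.dump import dumps
from langchain.load.load import loads

from genai.extensions.langchain import LangChainInterface

llm = LangChainInterface(
    model="google/flan-ul2",
    credentials=credentials,
    params=GenerateParams(max_new_tokens=50),
)

# Serialize the model configuration to a JSON string; secrets are not stored.
serialized = dumps(llm)

# Reconstruct the model later, re-injecting the secret. Both the secret name
# and the namespace below are hypothetical.
llm_restored = loads(
    serialized,
    secrets_map={"GENAI_KEY": os.environ["GENAI_KEY"]},
    valid_namespaces=["genai"],
)
```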
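
The Chat endpoint on the `Model` class might look like this; the `genai.schemas.chat` import path and the response shape are assumptions based on the linked example.

```python
from genai.model import Model
from genai.schemas.chat import HumanMessage  # assumed import path (shadows the LangChain class above)

model = Model(
    "meta-llama/llama-2-70b-chat",  # illustrative chat-capable model
    params=GenerateParams(max_new_tokens=100),
    credentials=credentials,
)

response = model.chat([HumanMessage(content="What is 1 + 1?")])
print(response.results[0].generated_text)  # assumed response shape
```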
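
The extended generation information surfaces through LangChain's standard `LLMResult.llm_output` field. Continuing with the `llm` from the serialization sketch:

```python
result = llm.generate(prompts=["What is a generative model?"])

# llm_output now carries the extended metadata listed above:
# token_usage structure, generated tokens, stop_reason, conversation_id, created_at, ...
print(result.llm_output)
```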
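
Finally, the `raw_response` flag, continuing with the `model` from the `Model.chat` sketch; the exact shape of the raw response objects is an assumption.

```python
# Default behaviour: unwrapped results.
for result in model.generate(prompts=["What is 1 + 1?"]):
    print(result.generated_text)

# With raw_response=True, the full response envelope (results plus metadata)
# is returned instead of the unwrapped results.
for response in model.generate(prompts=["What is 1 + 1?"], raw_response=True):
    print(response)
```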
**:bug: Bug fixes**
- The LangChain extension now tokenizes inputs correctly (previously, the GPT-2 tokenizer was used).
- Improved general error handling.
**Full Changelog**: https://github.com/IBM/ibm-generative-ai/compare/v0.4.1...v0.5.0