### Added
- Multi-provider LLM integration support for:
  - Google Gemini models
  - Anthropic Claude models
  - Hugging Face Inference API
  - AWS Bedrock managed services
- New `generate_text_with_llm_multi` function with unified interface for all providers
- Support for both SDK-based and direct `requests`-based implementations for all providers
- Comprehensive caching system for all providers to minimize API costs
- Detailed documentation and examples for multi-provider integration
- New optional dependency group `llm_multi` for easy installation
### Changed
- Enhanced error handling for all LLM implementations
- Improved type hints for better IDE support
- Updated documentation for clarity and consistency
### Fixed
- Caching issue in the `requests`-based implementation
- Error handling for Azure OpenAI deployments
- Compatibility with latest OpenAI SDK versions