ai-microcore

Latest version: v3.16.3


3.0.1

**Full Changelog**: https://github.com/Nayjest/ai-microcore/compare/v3.0.0...v3.0.1

3.0.0

Changelog
- Support for local language models via a custom inference function
- Support for local Transformers models
- INIT_PARAMS: dict config parameter introduced to customize local models / API clients
- CHAT_MODE: bool config parameter introduced to force switching between text-completion and chat modes
- Config instances can now be cast to dict (convenient for reconfiguring / exporting)
- More tests added
- API tests fixed
- get_vram_usage(), show_vram_usage() & is_google_colab() utility functions implemented
- .as_message added to LLM response strings, plus .as_user, .as_system, .as_assistant and .as_model properties on prompt / LLM response strings (utils.ConvertableToMessage class)
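
The message-conversion properties can be pictured with a minimal sketch along these lines; this is an illustrative reimplementation of the ConvertableToMessage idea from the changelog, not the library's actual code:

```python
# Minimal sketch: a str subclass whose role properties wrap the text
# into a chat-message dict, mirroring .as_user / .as_system / .as_assistant
# from the changelog. Internals of the real class may differ.

class ConvertableToMessage(str):
    def _as(self, role: str) -> dict:
        return {"role": role, "content": str(self)}

    @property
    def as_user(self) -> dict:
        return self._as("user")

    @property
    def as_system(self) -> dict:
        return self._as("system")

    @property
    def as_assistant(self) -> dict:
        return self._as("assistant")


response = ConvertableToMessage("Hello!")
print(response.as_user)  # {'role': 'user', 'content': 'Hello!'}
```

Because the class subclasses str, a response can be used both as plain text and as a ready-made chat message.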

2.1.1

Changelog
Fix #14: unwanted behaviour where the previous configuration remained active after calling configure(...) with invalid settings

2.1.0

- Utility functions added: is_notebook(), is_kaggle(); improved text-coloring functions
- Bugfix: colorama output in Jupyter notebooks
- More informative exceptions
- Code style fixes
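
Environment-detection helpers like these usually rely on a couple of well-known signals; the sketch below uses common heuristics (Kaggle's KAGGLE_KERNEL_RUN_TYPE environment variable, IPython's shell class) and is not the library's exact logic:

```python
import os
import sys

# Rough sketch of environment-detection helpers in the spirit of the
# 2.1.0 additions. Heuristics are common-knowledge checks, not the
# library's actual implementation.


def is_kaggle() -> bool:
    # Kaggle kernels set this environment variable.
    return "KAGGLE_KERNEL_RUN_TYPE" in os.environ


def is_notebook() -> bool:
    # Only possible if IPython has already been imported into the process.
    if "IPython" not in sys.modules:
        return False
    from IPython import get_ipython
    shell = get_ipython()
    # Jupyter notebooks run a ZMQ-based interactive shell.
    return shell is not None and shell.__class__.__name__ == "ZMQInteractiveShell"
```

Checking sys.modules first keeps is_notebook() cheap and safe in plain scripts where IPython is not installed.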

2.0.1

Fix: drop unsupported arguments (seed) when an Anthropic Claude 3 model is used
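
A fix like this typically filters request keyword arguments by provider before sending them; the model-name check and the set of stripped parameters below are illustrative assumptions, not the library's code:

```python
# Sketch of the 2.0.1 fix: strip arguments that Anthropic Claude 3
# models do not accept (e.g. `seed`) before building the request.

UNSUPPORTED_BY_ANTHROPIC = {"seed"}


def prepare_request_args(model: str, **kwargs) -> dict:
    """Return kwargs with provider-unsupported parameters removed."""
    if model.startswith("claude-3"):
        kwargs = {
            k: v for k, v in kwargs.items()
            if k not in UNSUPPORTED_BY_ANTHROPIC
        }
    return kwargs
```

Filtering at this layer lets callers pass the same parameters regardless of provider, with incompatible ones silently dropped instead of causing API errors.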

2.0.0

- Added support for Gemini models via Google AI Studio
- Improved documentation
- Renamed GOOGLE_GEMINI_RESPONSE_VALIDATION to GOOGLE_VERTEX_RESPONSE_VALIDATION (applies only to Vertex AI)
- Minor refactoring and code style improvements

**Full Changelog**: https://github.com/Nayjest/ai-microcore/compare/v1.1.0...v2.0.0
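
A setting rename like GOOGLE_GEMINI_RESPONSE_VALIDATION → GOOGLE_VERTEX_RESPONSE_VALIDATION is commonly handled with a migration map plus a deprecation warning; the pattern below is an illustration of that approach, not the library's actual migration code:

```python
import warnings

# Sketch: normalize deprecated setting names to their replacements,
# warning the caller so old configs keep working during a transition.

RENAMED_SETTINGS = {
    "GOOGLE_GEMINI_RESPONSE_VALIDATION": "GOOGLE_VERTEX_RESPONSE_VALIDATION",
}


def normalize_settings(settings: dict) -> dict:
    out = {}
    for key, value in settings.items():
        if key in RENAMED_SETTINGS:
            new_key = RENAMED_SETTINGS[key]
            warnings.warn(
                f"{key} is deprecated; use {new_key}",
                DeprecationWarning,
            )
            key = new_key
        out[key] = value
    return out
```

This keeps the rename backward-compatible: old keys still take effect, but users are nudged toward the new name.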
