ai-microcore

Latest version: v3.12.0


3.10.0

What's Changed
* **New feature**: Tiktoken-based estimation of the number of tokens in a prompt / response, and fitting semantic search results to a target token count.
Examples:
```python
# limit semantic search results to a token budget
mc.texts.search("test_collection", "query", n_results=20).fit_to_token_size(max_tokens=5, min_documents=3)
# extended strings now have .to_tokens() and .num_tokens() methods:
prompt = mc.tpl('my_template.j2')
tokens = prompt.to_tokens(for_model="gpt-4")
prompt.num_tokens()
mc.llm(prompt).num_tokens()
```

For usage with any regular strings, see [microcore.tokenizing](https://github.com/Nayjest/ai-microcore/blob/main/microcore/tokenizing.py)
* UI improvements:
  * ui.ask_yn() now has a "default" argument and also handles KeyboardInterrupt
  * new function: ui.warning()

* Other minor fixes
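The semantics of `fit_to_token_size` can be approximated with a greedy budget loop. Below is a rough, stdlib-only sketch of the idea (whitespace splitting stands in for real tiktoken counting, and the names are illustrative, not microcore's actual internals):

```python
def fit_to_token_size(docs, max_tokens, min_documents=0, count=lambda s: len(s.split())):
    """Greedily keep documents in order until the token budget is exhausted.

    At least `min_documents` documents are kept even if they exceed the budget.
    `count` is a stand-in token estimator (real code would use tiktoken).
    """
    kept, used = [], 0
    for doc in docs:
        n = count(doc)
        if len(kept) < min_documents or used + n <= max_tokens:
            kept.append(doc)
            used += n
        else:
            break
    return kept

docs = ["a b", "c d", "e f"]
fit_to_token_size(docs, max_tokens=3)                   # budget cuts off after the first doc
fit_to_token_size(docs, max_tokens=1, min_documents=2)  # minimum overrides the budget
```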

**Full Changelog**: https://github.com/Nayjest/ai-microcore/compare/v3.9.1...v3.10.0

3.9.1

Bugfix: avoid recursion when json.JSONEncoder falls back to default() and it raises an exception; the original exception is now propagated.

3.9.0

Changelog
- Config.display(): Improved masking of private keys.
- Config.display(): Resolved issue #18, where the google_vertex_response_validation field appeared in config.describe for non-Google APIs.
- Config.display(): Fixed an issue where fields loaded into ENV that differ from defaults were not displayed.
- get_bool_from_env(): Corrected to ensure default values are not converted to boolean.
- Added a metrics context manager.
- Python code execution: Added max_execution_time to the Timeout error text in python.execute() to display maximum execution time.
- Updated and extended requirements, including chroma_db and anthropic.
- OpenAI API: Enhanced error handling.
- Transformers: Added support for OpenAI-style parameters (n, seed, stop).
- Fix #22: Corrected an issue where responses from the Anthropic API had a "content" property that overrode the default one provided by microcore in the LLMResponse class.
- configure(): Updated to accept a configuration file name as its sole argument.
- Enhanced capability to configure properties with list or dict types from JSON-serialized values in environment variables or configuration files.
- Enabled the configuration of callables as strings using the format `<module_name>.<func_name>`.
- The following configuration fields can now be loaded from environment variables or configuration files: CHAT_MODE, GOOGLE_VERTEX_GCLOUD_AUTH, GOOGLE_GEMINI_SAFETY_SETTINGS, LLM_DEFAULT_ARGS, INFERENCE_FUNC, STORAGE_DEFAULT_FILE_EXT, EMBEDDING_DB_FOLDER, MAX_CONCURRENT_TASKS.
- Additional documentation comments have been added.
- Additional tests have been added.
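The two configuration behaviors above, JSON-serialized list/dict values and `<module_name>.<func_name>` callable strings, can be illustrated with a rough, stdlib-only sketch. The function and environment-variable handling here are illustrative assumptions, not microcore's actual internals:

```python
import importlib
import json

def resolve_value(raw: str):
    """Interpret a raw config/env string the way the changelog describes:
    JSON-looking values become lists/dicts, '<module>.<func>' strings
    resolve to the callable they name, and plain strings pass through."""
    raw = raw.strip()
    if raw[:1] in ("[", "{"):          # JSON-serialized list or dict
        return json.loads(raw)
    module_name, _, attr = raw.rpartition(".")
    if module_name:                    # try '<module_name>.<func_name>'
        try:
            return getattr(importlib.import_module(module_name), attr)
        except (ImportError, AttributeError):
            pass
    return raw                         # fall back to the plain string

args = resolve_value('{"temperature": 0.7}')  # a dict, e.g. for LLM_DEFAULT_ARGS
func = resolve_value("json.dumps")            # a callable, e.g. for INFERENCE_FUNC
```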

**Full Changelog**: https://github.com/Nayjest/ai-microcore/compare/v3.8.0...v3.9.0

3.8.0

- Python code execution utils added

**Full Changelog**: https://github.com/Nayjest/ai-microcore/compare/v3.7.0...v3.8.0

3.7.0

**Full Changelog**: https://github.com/Nayjest/ai-microcore/compare/v3.6.1...v3.7.0

3.6.1

**Full Changelog**: https://github.com/Nayjest/ai-microcore/compare/v3.6.0...v3.6.1

