Sibila

0.4.5

- fix: Remove a NoneType reference that was causing an error in Python 3.9.
- docs: Add simple tools example.
- docs: Update readme and other docs.

0.4.4

- feat: Support vision models from OpenAI, Anthropic and Llava-based local models.
- feat: Add Msg class for better handling of data types other than text, currently only images.
- feat: Update Thread class to support images; also add trimming functionality and, as a result, remove the now-empty Context class (see the sketch after this list).
- feat: Add close() method to Model* classes, to release resources.
- fix: Remove the no-longer-used _workaround1 in LlamaCppTokenizer.
- fix: When GenConf.max_tokens=0, avoid setting "max_tokens" in remote models that support it.
- fix: Update configs to new OpenAI models.
- docs: Add vision model and Thread use documentation.
- docs: Add receipt image extraction example.
- test: Add tests for Thread, Msg and vision models.
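A minimal sketch of the 0.4.4 vision additions, assuming a vision-capable model entry is configured. The Msg.make_IN() helper, the images= parameter and the Thread.add() call are illustrative assumptions rather than confirmed API; close() is the resource-release method added in this release.

```python
# Hedged sketch: sending an image to a vision model via Thread/Msg.
# The Msg.make_IN() helper and images= parameter are assumptions for
# illustration -- check the Sibila docs for the exact Msg/Thread API.
from sibila import Models, Msg, Thread

model = Models.create("openai:gpt-4o")  # placeholder: any vision-capable entry

th = Thread()
th.add(Msg.make_IN("List the items in this receipt.",
                   images=["receipt.jpg"]))  # hypothetical signature

print(model(th))  # models are callable with a Thread or a plain string

model.close()  # new in 0.4.4: releases the model's resources
```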

0.4.3

- feat: Add support for dataclass Optional and Union annotations (see the sketch after this list).
- feat: Add Groq remote model provider.
- fix: Add deepcopy to json_schema_massage() to fix a schema-massaging error.
- fix: Make Thread.__repr__ output the same as __str__, for development convenience.
- docs: Improve Pydantic and dataclass documentation with examples of Optional, Union and default fields.
- test: Add tests for complex extraction into Pydantic and dataclass objects.
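A minimal sketch of the new dataclass Optional/Union support, assuming Sibila's extract() structured-extraction call; the model entry and prompt are placeholders.

```python
# Hedged sketch: extracting into a dataclass with Optional/Union fields.
from dataclasses import dataclass
from typing import Optional, Union

from sibila import Models

@dataclass
class Person:
    name: str
    age: Optional[int] = None        # may be missing from the source text
    id_code: Union[int, str] = ""    # either representation is accepted

model = Models.create("openai:gpt-4")  # placeholder model entry

person = model.extract(Person, "Extract the person data: John Doe, aged 38.")
print(person)
```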

0.4.2

- feat: Add Model.create() argument to retrieve the actual initialization params used to create the model (see the sketch after this list).
- fix: Correct OpenAI's "max_tokens_limit" setting to 4096 in base_models.json, a more sensible default value for future models.
- fix: Update Model.version() formats to be simpler and simplify comparison between versions.
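A minimal sketch of retrieving the initialization params actually used. The changelog doesn't name the new argument, so the resolved_create keyword below is purely hypothetical; consult the create() reference for the real name.

```python
# Hedged sketch: capturing the effective init params of a new model.
# The resolved_create keyword is HYPOTHETICAL, used here only to
# illustrate the feature described in the 0.4.2 notes above.
from sibila import Models

init_params = {}
model = Models.create("openai:gpt-4",
                      resolved_create=init_params)  # hypothetical kwarg
print(init_params)  # effective params after defaults/config were merged
```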

0.4.1

- feat: Add Anthropic provider (see the sketch after this list).
- feat: Add chat template formats for Llama3 and Phi-3 instruct models, StableLM-2, Command-R/Plus.
- feat: Add output_fn_name property to Model, for changing the output function name in models that use a Tools/Functions API.
- feat: Better JSON/Schema decoding errors.
- fix: During JSON Schema creation, don't use a string representation of the dataclass when its docstring is unset, to keep equivalence with Pydantic-based generation.
- fix: Work around the Mistral API missing the api_key argument/environment variable in MistralModel when run from pytest.
- fix: Consolidate all Model class info as methods, to avoid confusion between property and method() calls.
- docs: Update installation instructions and include info on new Anthropic provider.
- test: Better parametrized tests for remote and local models.
- test: Add tests for new provider.
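A minimal sketch of the new Anthropic provider, assuming the provider-prefixed naming that Models.create() accepts; the model name is only an example, and ANTHROPIC_API_KEY must be set in the environment.

```python
# Hedged sketch: creating a model through the new Anthropic provider.
from sibila import Models

model = Models.create("anthropic:claude-3-haiku-20240307")
print(model("In one sentence, what is a chat template format?"))
```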

0.4.0

- feat: New providers: Mistral AI, Together.ai and Fireworks AI, allowing access to all their chat-based models.
- feat: Model classes now support async calls with the '_async' suffix, for example extract_async() (see the sketch after this list). This requires model API support: only remote models will benefit. Local models (via llama.cpp) can still be called with _async methods but do not have async IO that can run concurrently.
- feat: Add 'special' field to GenConf, allowing provider or model specific generation arguments.
- feat: All models now also accept a model path/name starting with the provider name, as in Models.create().
- feat: Change Model.json() to stop requiring a JSON Schema as its first argument.
- fix: More robust JSON extraction for misbehaved remote models.
- fix: LlamaCppModel no longer outputs debug info when created in a Jupyter notebook environment with verbose=False.
- fix: Default "gpt-4" model in 'sibila/res/base_models.json' now points to gpt-4-1106-preview, the first GPT-4 model that accepts JSON-object output.
- docs: Add API references for new classes and _async() methods.
- docs: Add new async example.
- test: Add new tests for new providers/model classes.
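A minimal sketch of the new _async calls, using extract_async() on two remote models; the model names are placeholders and the corresponding API keys are assumed to be set.

```python
# Hedged sketch: concurrent generation with the new _async methods.
# Only remote models actually gain concurrent IO, as noted above.
import asyncio

from sibila import Models

async def main():
    gpt = Models.create("openai:gpt-4")
    mistral = Models.create("mistral:mistral-large-latest")
    ages = await asyncio.gather(
        gpt.extract_async(int, "Extract the age: John is 38 years old."),
        mistral.extract_async(int, "Extract the age: Mary is 25 years old."),
    )
    print(ages)

asyncio.run(main())
```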
