Alexandra-ai-eval

Latest version: v0.1.0

0.1.0

Added

- Support for evaluating local Hugging Face models.
- Tests for the `question_answering` task.
- The `automatic_speech_recognition` task.
- Utility functions in `leaderboard_utils` for interacting with the REST API that backs the leaderboard holding the evaluation results.
- A new `_send_results_to_leaderboard` function in the `evaluator` module, which sends evaluation results to the leaderboard using the utilities from `leaderboard_utils`, along with tests for this function and for `leaderboard_utils`.
- The `discourse-coherence` task.
- Support for integer labels.

0.0.1

Added

- First release, which includes evaluation of sentiment models from the Hugging Face
Hub. Evaluation can be run from the CLI with the `evaluate` command, or from a script
using the `Evaluator` class.
