grounded-ai

Latest version: v1.0.5



Version: 0.0.9a0

The grounded-ai evaluator library is a Python package that provides evaluation metrics and tools for assessing the performance and reliability of generative AI application outputs. Its focus is grounding: ensuring that a model's outputs are substantiated by, and faithful to, the provided context or reference data. Other metrics are in development.
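To make the grounding idea concrete, here is a minimal sketch of a grounding-style metric: the fraction of output sentences whose content words are supported by the reference context. This is a hypothetical illustration of the concept only, not the grounded-ai library's actual API; the function name `grounding_score` and the overlap threshold are assumptions made for this example.

```python
import re


def _tokens(text):
    """Lowercase word tokens from a piece of text."""
    return set(re.findall(r"[a-z']+", text.lower()))


def grounding_score(output, context, threshold=0.5):
    """Fraction of output sentences whose token overlap with the
    context meets the threshold. A toy stand-in for a real
    grounding/faithfulness evaluator, not the grounded-ai API."""
    context_tokens = _tokens(context)
    sentences = [s for s in re.split(r"[.!?]+", output) if s.strip()]
    if not sentences:
        return 0.0
    supported = 0
    for sentence in sentences:
        toks = _tokens(sentence)
        if not toks:
            continue
        overlap = len(toks & context_tokens) / len(toks)
        if overlap >= threshold:
            supported += 1
    return supported / len(sentences)


if __name__ == "__main__":
    context = "The Eiffel Tower is in Paris. It was completed in 1889."
    print(grounding_score("The Eiffel Tower is in Paris.", context))      # 1.0
    print(grounding_score("Berlin has many wooden towers of cheese.", context))  # 0.0
```

Real grounding evaluators typically use an LLM or NLI model as the judge rather than lexical overlap; the sketch only shows the shape of the metric (claims checked sentence by sentence against reference data).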



The listed release has known vulnerabilities.
