The grounded-ai evaluator library is a Python package providing evaluation metrics and tools for assessing the performance and reliability of generative AI application outputs. Its primary focus is grounding: verifying that a model's outputs are substantiated by, and faithful to, the provided context or reference data. Additional metrics are in development.
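To illustrate the idea of a grounding metric in general terms (this is a conceptual sketch, not the grounded-ai library's actual API; all names below are hypothetical), a naive check might measure how much of each sentence in a model's answer is supported by token overlap with the reference context:

```python
# Conceptual sketch of a grounding metric -- NOT the grounded-ai API.
# All function names here are illustrative assumptions.
import re


def token_set(text: str) -> set:
    """Lowercase word tokens for a rough overlap comparison."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def grounding_score(answer: str, context: str, threshold: float = 0.5) -> float:
    """Fraction of answer sentences whose tokens mostly appear in the context.

    A sentence counts as 'grounded' when at least `threshold` of its tokens
    occur in the context -- a crude lexical proxy for faithfulness.
    """
    context_tokens = token_set(context)
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    if not sentences:
        return 0.0
    grounded = 0
    for sentence in sentences:
        tokens = token_set(sentence)
        if not tokens:
            continue
        overlap = len(tokens & context_tokens) / len(tokens)
        if overlap >= threshold:
            grounded += 1
    return grounded / len(sentences)


context = "The Eiffel Tower is in Paris. It was completed in 1889."
print(grounding_score("The Eiffel Tower is in Paris.", context))  # → 1.0
print(grounding_score("It was built entirely from recycled plastic.", context))  # → 0.0
```

Real grounding evaluators typically go beyond lexical overlap (e.g., using an LLM or NLI model to judge entailment), but the scoring shape — answer plus context in, a faithfulness score out — is the same.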