DeepEval

Latest version: v2.4.8


2.3.9

🥳 Latest feature: users can now inject their own custom template into the Faithfulness metric. Best suited for custom LLMs whose text data is heavily formatted by data engineers and stored in databases under different categories.
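A minimal sketch of what template injection might look like, assuming the metric accepts an `evaluation_template` parameter that takes a `FaithfulnessTemplate` subclass (the parameter name, import path, and `generate_claims` hook are assumptions based on the linked docs, not confirmed API):

```python
from deepeval.metrics import FaithfulnessMetric
from deepeval.metrics.faithfulness import FaithfulnessTemplate  # assumed path
from deepeval.test_case import LLMTestCase


# Assumed hook: override how claims are extracted so that highly
# formatted, database-backed records are parsed field by field.
class PipeDelimitedTemplate(FaithfulnessTemplate):
    @staticmethod
    def generate_claims(actual_output: str) -> str:
        return (
            "Extract factual claims from the formatted record below. "
            "Treat each pipe-delimited field as one statement.\n\n"
            f"Record:\n{actual_output}\n\nClaims:"
        )


metric = FaithfulnessMetric(
    threshold=0.7,
    evaluation_template=PipeDelimitedTemplate,  # assumed parameter name
)

test_case = LLMTestCase(
    input="What is the refund policy?",
    actual_output="policy|30-day returns|free exchange shipping",
    retrieval_context=["Refunds are accepted within 30 days of purchase."],
)
metric.measure(test_case)
print(metric.score, metric.reason)
```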

2.2.7

Here are the new features we're bringing to you in the latest release:
💥 Releasing the beta for *DAG (Deep Acyclic Graph)*, a new way in deepeval to build decision trees that score LLM outputs deterministically (see the sketch after this list): https://docs.confident-ai.com/docs/metrics-dag
⚙️ Open-sourcing all LLM red teaming vulnerabilities: https://docs.confident-ai.com/docs/red-teaming-introduction
🪄 Fixes to synthetic dataset generation pipeline
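
Here is a hedged sketch of what a DAG metric could look like, with node names (`TaskNode`, `BinaryJudgementNode`, `VerdictNode`) and the `DAGMetric` wrapper taken from the linked docs; treat the exact signatures as assumptions:

```python
from deepeval.metrics import DAGMetric
from deepeval.metrics.dag import (
    DeepAcyclicGraph,
    TaskNode,
    BinaryJudgementNode,
    VerdictNode,
)
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

# Leaf nodes return fixed scores once a judgement resolves.
pass_node = VerdictNode(verdict=True, score=10)
fail_node = VerdictNode(verdict=False, score=0)

# A binary LLM judgement that routes to exactly one verdict.
judgement = BinaryJudgementNode(
    criteria="Does the summary preserve every heading of the source?",
    children=[pass_node, fail_node],
)

# A task node that preprocesses the output before the judgement runs.
extract = TaskNode(
    instructions="Extract all headings from the actual output.",
    output_label="Extracted headings",
    evaluation_params=[LLMTestCaseParams.ACTUAL_OUTPUT],
    children=[judgement],
)

dag = DeepAcyclicGraph(root_nodes=[extract])
metric = DAGMetric(name="Heading Preservation", dag=dag)

metric.measure(
    LLMTestCase(input="Summarize the doc.", actual_output="# Intro\n...")
)
print(metric.score)
```

Because every path through the graph ends in a fixed `VerdictNode` score, the same traversal always yields the same result, which is what makes the evaluation deterministic.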

2.0

Here are the new features we're bringing to you in the latest release:
⚙️ Automated LLM red teaming, i.e. vulnerability and security scanning. You can now scan for 40+ vulnerabilities using 10+ SOTA attack enhancement techniques in fewer than 10 lines of Python code (see the sketch after this list).
🪄 Synthetic dataset generation with a highly customizable pipeline to cover virtually any use case.
🖼️ Multi-modal LLM evaluation - perfect for image editing or text-to-image use cases.
💬 Conversational evaluation - perfect for evaluating LLM chatbots.
💥 More LLM system metrics: Prompt Alignment (to determine whether your LLM follows the instructions specified in your prompt template), Tool Correctness (for agents), and JSON Correctness (to validate that LLM outputs conform to your desired schema)
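
As a rough illustration of the under-10-lines claim, a red-teaming scan might look like this; `RedTeamer`, `Vulnerability`, `AttackEnhancement`, and the `scan()` signature are assumptions drawn from the docs of this era, and `my_llm_app` is a hypothetical `DeepEvalBaseLLM` wrapper around your application:

```python
from deepeval.red_teaming import RedTeamer, Vulnerability, AttackEnhancement

red_teamer = RedTeamer(
    target_purpose="A customer-support chatbot for a retail bank.",
    target_system_prompt="You are a helpful banking assistant...",
)

# my_llm_app: hypothetical DeepEvalBaseLLM wrapper around your application.
results = red_teamer.scan(
    target_model=my_llm_app,
    attacks_per_vulnerability=5,
    vulnerabilities=[Vulnerability.BIAS, Vulnerability.DATA_LEAKAGE],
    attack_enhancements={AttackEnhancement.PROMPT_INJECTION: 1.0},
)
print(results)  # per-vulnerability scores and failing attacks
```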

1.4.7

In DeepEval 1.4.7, we're releasing:
- LLM red teaming. Safety-test your LLM application for 40+ vulnerabilities with 10+ attack enhancements, docs here: https://docs.confident-ai.com/docs/red-teaming-introduction
- Improved synthetic data synthesizer with much more functionality and customizability: https://docs.confident-ai.com/docs/evaluation-datasets-synthetic-data
- Conversational metrics: dedicated metrics to evaluate multi-turn LLM conversations (see the sketch after this list)
- Multi-modal metrics: image editing and text-to-image evaluation
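
A sketch of a conversational evaluation, assuming a `ConversationalTestCase` that wraps individual `LLMTestCase` turns and a `ConversationCompletenessMetric` (the `turns` field name and the metric choice are assumptions from the docs of this era):

```python
from deepeval.metrics import ConversationCompletenessMetric
from deepeval.test_case import ConversationalTestCase, LLMTestCase

# Each turn is a regular LLMTestCase; the conversational case wraps them.
turns = [
    LLMTestCase(
        input="I'd like to change my flight.",
        actual_output="Sure, what is your booking reference?",
    ),
    LLMTestCase(
        input="It's ABC123.",
        actual_output="Done! Your flight is now at 3pm on Friday.",
    ),
]
convo = ConversationalTestCase(turns=turns)  # field name assumed

metric = ConversationCompletenessMetric(threshold=0.5)
metric.measure(convo)
print(metric.score, metric.reason)
```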

0.21.74

In DeepEval v0.21.74, we have:
- Agentic evaluation metric to evaluate tool-calling correctness for LLM agents (see the sketch after this list): https://docs.confident-ai.com/docs/metrics-tool-correctness
- Pydantic schemas to enforce JSON outputs for custom, smaller LLMs: https://docs.confident-ai.com/docs/guides-using-custom-llms
- Asynchronous support for synthetic data generation: https://docs.confident-ai.com/docs/evaluation-datasets-synthetic-data
- Tracing integration for LlamaIndex and LangChain: https://docs.confident-ai.com/docs/confident-ai-tracing
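
A minimal sketch of the tool-correctness check: the metric compares the tools the agent actually called against the tools you expected. The plain-string `tools_called`/`expected_tools` fields are assumptions based on this release's docs (later releases moved to richer tool-call objects):

```python
from deepeval.metrics import ToolCorrectnessMetric
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="What's the weather in Paris tomorrow?",
    actual_output="Tomorrow in Paris: 18°C and partly cloudy.",
    tools_called=["WebSearch", "WeatherAPI"],  # what the agent invoked
    expected_tools=["WeatherAPI"],             # what it should have invoked
)

metric = ToolCorrectnessMetric()
metric.measure(test_case)
print(metric.score)  # fraction of expected tools that were actually called
```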

0.21.62

In DeepEval v0.21.62, we:
- added an option to print intermediate steps during metric execution, configurable via the `verbose_mode` parameter: https://docs.confident-ai.com/docs/metrics-answer-relevancy#example
- made it possible to log hyperparameters to Confident AI via the evaluate() function (see the sketch after this list): https://docs.confident-ai.com/docs/getting-started#optimizing-hyperparameters
- made synthetic data generation produce more realistic results with more customization options: https://docs.confident-ai.com/docs/evaluation-datasets-synthetic-data
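
The first two items could be combined roughly as follows; the hyperparameter keys are illustrative, not a fixed schema:

```python
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

# verbose_mode=True prints the metric's intermediate steps as it runs.
metric = AnswerRelevancyMetric(threshold=0.7, verbose_mode=True)

test_case = LLMTestCase(
    input="Why did my order arrive late?",
    actual_output="Your order was delayed by a carrier issue in transit.",
)

# Hyperparameters are logged to Confident AI alongside the results
# (keys below are illustrative).
evaluate(
    test_cases=[test_case],
    metrics=[metric],
    hyperparameters={"model": "gpt-4o", "prompt template": "v2"},
)
```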
