Langsmith-evaluation-helper

Latest version: v0.1.5


0.1.5

What's Changed
* enable adding metadata keys by kazuyaseki in https://github.com/gaudiy/langsmith-evaluation-helper/pull/23


**Full Changelog**: https://github.com/gaudiy/langsmith-evaluation-helper/compare/v0.1.4...v0.1.5

0.1.4

What's Changed
* Chore/enable run without direnv by kazuyaseki in https://github.com/gaudiy/langsmith-evaluation-helper/pull/17
* [Test] testing integration_test by sandy1618 in https://github.com/gaudiy/langsmith-evaluation-helper/pull/20
* fix not to format before invoke by kazuyaseki in https://github.com/gaudiy/langsmith-evaluation-helper/pull/22


**Full Changelog**: https://github.com/gaudiy/langsmith-evaluation-helper/compare/v0.1.3...v0.1.4

0.1.3

What's Changed
* Bump langchain-community from 0.2.5 to 0.2.9 by dependabot in https://github.com/gaudiy/langsmith-evaluation-helper/pull/18
* enable running with input template by kazuyaseki in https://github.com/gaudiy/langsmith-evaluation-helper/pull/19


**Full Changelog**: https://github.com/gaudiy/langsmith-evaluation-helper/compare/v0.1.2...v0.1.3

0.1.2

Initial release of LangSmith Evaluation Helper, an open-source library that simplifies the process of running evaluations using LangSmith.

Key Features

- **YAML-based Configuration**: Easily set up and customize your evaluations using a simple YAML configuration file.
- **Flexible Prompt Handling**: Support for both standard prompts and custom run scripts to accommodate various evaluation scenarios.
- **Multiple Model Support**: Evaluate across different language models, including GPT-3.5 Turbo, GPT-4, Claude 3 Sonnet, and more.
- **Concurrent Evaluation**: Run multiple evaluations in parallel to improve efficiency.
- **Built-in Assertions**: Validate results using length checks, LLM-based judgments, and similarity comparisons.
- **Integration with LangSmith**: Seamlessly view and analyze your evaluation results in the LangSmith platform.
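
The assertion styles in the feature list can be sketched in plain Python. The helper names below (`length_check`, `similarity_check`) are illustrative assumptions, not this library's actual evaluator API:

```python
# Hypothetical sketches of two of the assertion styles listed above
# (length checks and similarity comparisons); the library's real
# evaluators may differ -- see the README for the supported set.

def length_check(output: str, max_chars: int = 500) -> bool:
    """Pass if the model output stays within a character budget."""
    return len(output) <= max_chars

def similarity_check(output: str, reference: str, threshold: float = 0.5) -> bool:
    """Naive token-overlap (Jaccard) similarity against a reference answer."""
    a, b = set(output.lower().split()), set(reference.lower().split())
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= threshold

print(length_check("short answer"))                       # True (12 chars <= 500)
print(similarity_check("the cat sat", "a cat sat down"))  # False (overlap 2/5 = 0.4 < 0.5)
```

LLM-based judgments follow the same shape, except the pass/fail decision is delegated to a model call instead of a string metric.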

Getting Started

1. Install the package:

```bash
pip install langsmith-evaluation-helper
```


2. Create a `config.yml` file to define your evaluation parameters.

3. Run your evaluation:

```bash
langsmith-evaluation-helper evaluate path/to/your/config.yml
```

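A minimal `config.yml` might look like the sketch below. The key names (`prompt`, `providers`, `tests`, `assert`, and so on) are assumptions inferred from the feature list above, not the library's confirmed schema, so check the [README](https://github.com/gaudiy/langsmith-evaluation-helper/blob/main/README.md) for the real format:

```yaml
# Hypothetical config sketch -- key names are assumptions;
# consult the project README for the actual schema.
description: Summarization quality check
prompt:
  name: summarize_prompt   # a prompt template or custom run script
providers:
  - id: gpt-4
  - id: claude-3-sonnet
tests:
  dataset_name: my-langsmith-dataset
  max_concurrency: 4       # run evaluations in parallel
  assert:
    - type: length
      value: "<= 500"
    - type: llm-judge
      value: "Is the summary faithful to the source?"
```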

For more details on usage and configuration options, please refer to our [README](https://github.com/gaudiy/langsmith-evaluation-helper/blob/main/README.md).

We look forward to seeing how you use LangSmith Evaluation Helper in your projects!
