Promptfoo

0.5.0

What's Changed
* Add support for grading outputs by semantic similarity (see the config sketch after this list)
* Add support for local LLMs such as Llama, Alpaca, Vicuna, and GPT4All via LocalAI
* Improved error handling
* Improved word wrapping in CLI output
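
A minimal sketch of a config exercising both new features, assuming the YAML assertion syntax and LocalAI provider ID format from later promptfoo documentation; the exact syntax accepted by 0.5.0 may differ:

```yaml
# promptfooconfig.yaml (sketch) - semantic-similarity grading plus a LocalAI-hosted model.
# Provider ID and assertion syntax are taken from later promptfoo docs, not verified against 0.5.0.
prompts:
  - "Summarize in one sentence: {{text}}"

providers:
  - openai:gpt-3.5-turbo
  - localai:chat:ggml-gpt4all-j   # local model served through LocalAI (assumed ID format)

tests:
  - vars:
      text: "promptfoo helps you evaluate LLM prompt quality."
    assert:
      - type: similar             # grade by semantic similarity to the expected answer
        value: "promptfoo measures how good LLM prompts are."
        threshold: 0.8
```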


**Full Changelog**: https://github.com/typpo/promptfoo/compare/0.4.0...0.5.0

0.4.0

What's Changed
- Web viewer for eval results
- Support for the `OPENAI_STOP` environment variable to set OpenAI stop sequences (see the example after this list)
- Increase default request timeout
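
For example, stop sequences could be passed through the environment for a single run. The JSON-array value format and the `eval` flags shown here are assumptions based on later promptfoo documentation:

```sh
# Sketch: set OpenAI stop sequences for one eval run (value format is an assumption).
OPENAI_STOP='["###", "\n\n"]' promptfoo eval --prompts prompts.txt --providers openai:gpt-3.5-turbo
```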

0.3.0

* Feature: automatic grading of outputs by an LLM https://github.com/typpo/promptfoo/pull/4
* Improve how test results are shown (PASS/FAIL appears in the matrix view rather than in its own column)
* Ability to override the OpenAI API host via the `OPENAI_API_HOST` environment variable
* Ability to set an API call timeout via the `REQUEST_TIMEOUT_MS` environment variable (see the example after this list)
* Improve readability of HTML table output
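
A sketch of combining the two environment variables on a single run; whether `OPENAI_API_HOST` takes a bare hostname or a full URL, and the exact `eval` flags, are assumptions from later promptfoo documentation:

```sh
# Point promptfoo at an alternate OpenAI-compatible host and allow up to 60s per request.
OPENAI_API_HOST=my-openai-proxy.example.com \
REQUEST_TIMEOUT_MS=60000 \
promptfoo eval --prompts prompts.txt --providers openai:gpt-3.5-turbo
```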

0.2.2

- Fix error in `promptfoo init` output
- Fix ordering issue when building table concurrently
- Output more useful errors when API calls fail
- Add `promptfoo --version` output

**Full Changelog**: https://github.com/typpo/promptfoo/compare/0.2.0...0.2.2

0.2.0

- Add `promptfoo init` command
- Improve custom provider loading and add an example (see the sketch below)
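
A sketch of what a custom provider module might look like, assuming the `id()`/`callApi()` class interface described in later promptfoo documentation; the exact shape expected by 0.2.0 may differ:

```js
// customProvider.js (sketch): a provider that wraps your own model or HTTP API.
// The id() + async callApi() shape follows later promptfoo docs and is an assumption here.
class EchoProvider {
  id() {
    // Identifier shown next to results in eval output.
    return 'echo-provider';
  }

  async callApi(prompt) {
    // Replace this stub with a real call to your model or API.
    return { output: `Echo: ${prompt}` };
  }
}

module.exports = EchoProvider;
```

The module would then be referenced by file path, e.g. `promptfoo eval --providers ./customProvider.js` (path-based loading is likewise taken from later docs).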
