Latest version: v0.1.17
TruthTorchLM is an open-source library for assessing the truthfulness of language model outputs. It integrates state-of-the-art truthfulness-evaluation methods, offers benchmarking tools across a range of tasks, and integrates with popular frameworks such as Hugging Face and LiteLLM.
No known vulnerabilities found