Latest version: v0.3.15
The information on this page was curated by experts in our Cybersecurity Intelligence Team.
LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.
No known vulnerabilities found
Has known vulnerabilities