Safety vulnerability ID: 73283
A critical vulnerability in the Guardrails library allows arbitrary code execution through eval injection. In affected versions, the parse_rail_arguments function in validator_utils.py passes user-supplied arguments to Python's eval(), so an attacker who controls those arguments can execute arbitrary code. This vulnerability affects all users of the Guardrails library who process untrusted input.
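As an illustration of the flaw, the sketch below contrasts the eval()-based parsing pattern the advisory describes with a safer parser based on ast.literal_eval. Only the name parse_rail_arguments and the file validator_utils.py come from the advisory; the function bodies here are assumptions for demonstration, not the library's actual code.

```python
import ast

def parse_rail_arguments_vulnerable(arg_tokens: list[str]) -> list:
    # Illustrative sketch of the vulnerable pattern: eval() executes
    # arbitrary expressions, so a token such as
    # "__import__('os').system('id')" runs as code.
    return [eval(token) for token in arg_tokens]

def parse_rail_arguments_safe(arg_tokens: list[str]) -> list:
    # ast.literal_eval accepts only Python literals (strings, numbers,
    # tuples, lists, dicts, booleans, None) and raises on anything else.
    parsed = []
    for token in arg_tokens:
        try:
            parsed.append(ast.literal_eval(token))
        except (ValueError, SyntaxError):
            # Not a literal: keep the token as a plain string instead
            # of evaluating it.
            parsed.append(token)
    return parsed

# A malicious token would execute under eval() but stays inert here:
print(parse_rail_arguments_safe(["42", "'min'", "__import__('os').system('id')"]))
# -> [42, 'min', "__import__('os').system('id')"]
```

The design point is that untrusted text should never reach eval(); restricting parsing to literals (or an explicit whitelist of allowed values) removes the code-execution path entirely.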
Latest version: 0.6.1
Package description: Adding guardrails to large language models.