PyPI: guardrails-ai

CVE-2024-45858

Safety vulnerability ID: 73283

This vulnerability was reviewed by experts

The information on this page was manually curated by our Cybersecurity Intelligence Team.

Created: Sep 18, 2024. Updated: Dec 11, 2024.

Advisory

A critical vulnerability in the Guardrails library allows arbitrary code execution through eval injection. In affected versions, the parse_rail_arguments function in validator_utils.py passes user-supplied arguments to Python's eval(), so an attacker who controls those arguments can execute arbitrary code. This affects all users of the Guardrails library who process untrusted input.
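As a concrete illustration of the pattern described above, the sketch below contrasts an eval()-based argument parser with a hardened variant built on ast.literal_eval. The function names are hypothetical and the bodies are not the actual Guardrails source; they only demonstrate why evaluating untrusted argument strings is dangerous and how a literal-only parse avoids it.

    import ast

    # Illustrative only: NOT the actual Guardrails source.
    # Vulnerable pattern: eval() executes arbitrary Python expressions,
    # so an attacker-controlled argument string becomes code execution,
    # e.g. "__import__('os').system('id')" runs a shell command.
    def parse_arg_unsafe(arg: str):
        return eval(arg)  # VULNERABLE: arbitrary code execution

    # Hardened pattern: ast.literal_eval accepts only Python literals
    # (strings, numbers, tuples, lists, dicts, sets, booleans, None)
    # and raises ValueError or SyntaxError for anything else, so no
    # code is ever executed.
    def parse_arg_safe(arg: str):
        try:
            return ast.literal_eval(arg)
        except (ValueError, SyntaxError):
            return arg  # treat unparseable input as a plain string

Upgrading to a release whose parser no longer calls eval() remains the proper fix; the safe variant above only shows the general remediation technique.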

Affected package

guardrails-ai

Latest version: 0.6.1

Adding guardrails to large language models.
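To determine whether an environment uses an affected release, first check the installed version so it can be compared against the ranges below; a minimal sketch using only the standard library (the package name is taken from this advisory):

    from importlib.metadata import PackageNotFoundError, version

    # Print the installed guardrails-ai version, if any, for comparison
    # against the affected/fixed ranges listed in this advisory.
    try:
        print(version("guardrails-ai"))
    except PackageNotFoundError:
        print("guardrails-ai is not installed in this environment")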

Affected versions

Fixed versions

Vulnerability changelog

No changelog entries have been published for this vulnerability.
