Impact
The vulnerability lies in the Mamba language model framework up to and including version 2.2.6. The library loads pre‑trained model weight files via torch.load without the restrictive weights_only=True flag, so the underlying pickle module will deserialize, and thereby execute, arbitrary attacker-controlled Python objects. An attacker can publish a malicious model repository on HuggingFace Hub, and when a victim loads this model, arbitrary code runs in the context of the mamba process.
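The danger comes from pickle itself rather than from any Mamba-specific logic: an object's __reduce__ method names a callable that the unpickler invokes during deserialization. The minimal, self-contained sketch below (stdlib only, no torch; the MaliciousPayload class is illustrative, not the actual exploit) shows that loading such a blob is enough to run attacker-chosen code:

```python
import pickle

class MaliciousPayload:
    """Illustrative stand-in for a booby-trapped checkpoint object."""

    def __reduce__(self):
        # pickle records this callable and its arguments; the unpickler
        # calls it during load. A real exploit would return something
        # like (os.system, ("malicious command",)) instead of eval here.
        return (eval, ("21 * 2",))

blob = pickle.dumps(MaliciousPayload())

# Deserialization invokes the recorded callable: the "loaded object"
# is actually the return value of eval("21 * 2").
obj = pickle.loads(blob)
print(obj)  # 42 — code executed merely by loading the blob
```

The same mechanism fires inside torch.load when weights_only=True is absent, which is why loading an untrusted checkpoint is equivalent to running untrusted code.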
Affected Systems
Any installation of the Mamba language model framework 2.2.6 or earlier that uses the MambaLMHeadModel.from_pretrained() method to load models from HuggingFace Hub is vulnerable. The issue affects projects that rely on the default loading behavior for pre‑trained models and have not applied any mitigation such as disabling unsafe deserialization or verifying model integrity.
Risk and Exploitability
Because the flaw allows deserialization of attacker-controlled payloads, successful exploitation yields remote code execution on the victim machine. The attacker must host a malicious model on HuggingFace Hub, which the victim then pulls during model loading; exploitation therefore requires only network access to the Hub and the victim's use of the vulnerable model-loading function. No CVSS score is provided, and the absence of an EPSS score and KEV listing suggests the vulnerability is not yet known to be actively exploited, but the potential impact remains high and poses a significant risk for exposed deployments.
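The principal mitigation, passing weights_only=True to torch.load, works by restricting which globals the unpickler may resolve. The sketch below approximates that idea with a stdlib allow-list Unpickler (the specific ALLOWED set and class names are illustrative assumptions, not torch's actual internal list): any attempt to resolve a callable outside the allow-list is rejected before it can execute.

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Allow-list unpickler: refuses to resolve unexpected globals,
    so attacker-chosen callables are never invoked during load."""

    # Illustrative allow-list; a real weight loader would permit only
    # the container/tensor types a checkpoint legitimately needs.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

class Malicious:
    def __reduce__(self):
        # Callable smuggled into the pickle stream by an attacker.
        return (eval, ("__import__('os')",))

blob = pickle.dumps(Malicious())
try:
    RestrictedUnpickler(io.BytesIO(blob)).load()
    blocked = False
except pickle.UnpicklingError:
    blocked = True
print(blocked)  # True — the malicious callable was never invoked
```

In practice, affected users should upgrade past the vulnerable version where available, pass weights_only=True when calling torch.load on untrusted checkpoints, or prefer weight formats such as safetensors that carry no executable payloads, and load models only from repositories they trust.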
OpenCVE Enrichment