Impact
A vulnerability exists in the Hugging Face Transformers library's Trainer class: the _load_rng_state() method calls torch.load() without the weights_only=True parameter, enabling arbitrary code execution when a malicious checkpoint file is loaded. The flaw is classified as CWE-502 (Deserialization of Untrusted Data). An attacker who can supply a crafted checkpoint can execute code on the system running the trainer, potentially compromising confidentiality, integrity, and availability.
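The root cause is that torch.load() without weights_only=True uses Python's pickle protocol, which lets a serialized object dictate code to run during deserialization. The following minimal stdlib-only sketch illustrates the mechanism (the names attacker_callable and MaliciousCheckpoint are illustrative, not part of any real exploit):

```python
import pickle

log = []

def attacker_callable(msg):
    # Stand-in for an arbitrary attacker-chosen function (e.g. os.system);
    # illustrative only
    log.append(msg)

class MaliciousCheckpoint:
    # __reduce__ lets a class dictate what pickle records: here,
    # "call attacker_callable('payload ran')", executed at load time
    def __reduce__(self):
        return (attacker_callable, ("payload ran",))

blob = pickle.dumps(MaliciousCheckpoint())   # what a crafted file contains
pickle.loads(blob)                           # merely loading runs the callable
print(log)  # → ['payload ran']
```

Because an RNG-state checkpoint is loaded automatically when training resumes, no further user action is required beyond pointing the trainer at the malicious file.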
Affected Systems
All versions of transformers that support torch ≥ 2.2 are affected when run with PyTorch below 2.6, where torch.load() still defaults to weights_only=False. The issue is fixed in version v5.0.0rc3. The affected product is Hugging Face Transformers, a popular open-source library for natural language processing.
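Since PyTorch 2.6 flipped the torch.load() default to weights_only=True, exposure hinges on the installed PyTorch version. A minimal sketch of that version gate (the helper names parse_version and load_defaults_safe are hypothetical, and the parser only handles plain dotted versions):

```python
def parse_version(v: str) -> tuple:
    # Minimal parser: keeps the "major.minor" prefix of strings like
    # "2.5.1" or "2.6.0+cu121"; pre-release suffixes are not handled
    core = v.split("+")[0]
    return tuple(int(p) for p in core.split(".")[:2])

def load_defaults_safe(torch_version: str) -> bool:
    # PyTorch 2.6 changed torch.load()'s default to weights_only=True
    return parse_version(torch_version) >= (2, 6)

print(load_defaults_safe("2.5.1"))  # → False
print(load_defaults_safe("2.6.0"))  # → True
```

In practice, check torch.__version__ at startup and refuse to resume from untrusted checkpoints on older runtimes.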
Risk and Exploitability
The CVSS base score of 7.8 reflects high severity, while the EPSS score of less than 1% indicates a low likelihood of exploitation globally; the vulnerability is not listed in CISA's KEV catalog. Exploitation requires the attacker to supply a malicious checkpoint file, usually through an untrusted model or data source, so the primary vectors are compromised model artifacts or data pipelines. Because the flaw manifests during checkpoint loading, it can be mitigated by loading only trusted checkpoint files and by applying the vendor patch. The overall risk is moderate to high for organizations that routinely load external checkpoints; such organizations should act promptly.
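Until the patch is applied, deserialization can be hardened by refusing all global lookups in the pickle stream, which is analogous in spirit to what torch.load(weights_only=True) does for tensor data. A stdlib-only sketch (the class name NoGlobalsUnpickler is illustrative):

```python
import io
import pickle

class NoGlobalsUnpickler(pickle.Unpickler):
    """Refuse every global lookup, so only plain data (dicts, lists,
    strings, numbers) can be deserialized."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

# Plain RNG-style state loads fine (no GLOBAL opcodes involved)
state = {"python_rng": [1, 2, 3], "step": 42}
loaded = NoGlobalsUnpickler(io.BytesIO(pickle.dumps(state))).load()
print(loaded == state)  # → True

# Anything referencing a callable is rejected at load time
evil = pickle.dumps(print)  # stands in for an attacker's payload
try:
    NoGlobalsUnpickler(io.BytesIO(evil)).load()
except pickle.UnpicklingError as e:
    print("rejected:", e)
```

The trade-off is that legitimate checkpoints containing tensors also need an allowlist, which is exactly what weights_only=True provides in PyTorch itself.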
OpenCVE Enrichment
GitHub GHSA