Impact
The load_model function in optimate's neural_magic_training.py calls torch.load without the weights_only=True parameter, so the supplied file is deserialized with Python's pickle module, which can reconstruct arbitrary objects and invoke arbitrary callables during loading. An attacker who provides a malicious .pt or .pth file via the --model command-line argument can therefore execute arbitrary code on the victim's machine. This is a classic CWE-502 insecure deserialization flaw and results in complete compromise of the host running optimate.
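The danger comes from pickle itself, not from anything torch-specific: an object's __reduce__ method tells pickle which callable to invoke at load time, and an attacker controls both the callable and its arguments. The following is a minimal, self-contained sketch of that mechanism (it does not reproduce optimate's actual code, and uses a harmless eval call to stand in for attacker-controlled execution):

```python
import pickle

class EvilPayload:
    # __reduce__ tells pickle how to "reconstruct" this object.
    # An attacker can return any importable callable plus arguments;
    # pickle invokes it during deserialization. Here a benign eval
    # stands in for os.system or similar.
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(EvilPayload())  # what a malicious .pt file contains

# Simply loading the blob runs the attacker's callable:
result = pickle.loads(blob)
print(result)  # 42 — code ran during load, no method call needed
```

Because torch.load falls back to plain pickle for such objects when weights_only is not set, loading an untrusted checkpoint is equivalent to running the attacker's code.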
Affected Systems
The flaw is present in the optimate project maintained by nebuly-ai, specifically at commit a6d302f912b481c94370811af6b11402f51d377f, dated 2024-07-21. Any installation that runs the neural_magic_training.py script and uses the --model option to load user-supplied model files is affected, since the call never specifies weights_only. Builds predating this commit, and any that enforce the parameter, are not vulnerable.
Risk and Exploitability
The risk is extremely high: arbitrary code execution gives an attacker full control of the system running optimate. No EPSS or CVSS metrics have been published, but the absence of any defensive measure suggests a base score in the 9-10 (Critical) range. The vulnerability can be exploited locally by anyone able to run optimate against a crafted model file, so users must treat model-file provenance with care. The lack of a KEV listing indicates no publicly known exploitation to date, yet the attack surface remains open to any adversary who can supply a malicious model.
OpenCVE Enrichment