Impact
The load_model() routine in the neural_magic_training.py script of the Optimate project executes a module.py file from a user-supplied directory via Python's exec() without any validation. An attacker who can supply or control the contents of the directory passed to the --model option can therefore run arbitrary Python code in the context of the process that invokes the script. In practice this grants local code execution with the privileges of that process, potentially enabling system compromise, data exfiltration, or further lateral movement.
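The dangerous pattern can be sketched as follows. This is a hypothetical reconstruction, not the exact Optimate code: the function name load_model() and the module.py filename come from the advisory, while the surrounding details (argument names, return handling) are assumptions for illustration.

```python
import os

def load_model(model_dir):
    """Hypothetical sketch of the vulnerable pattern: module.py from the
    directory given via --model is read and passed straight to exec()."""
    module_path = os.path.join(model_dir, "module.py")
    with open(module_path) as f:
        source = f.read()
    namespace = {}
    # Any Python code an attacker placed in module.py runs here,
    # with the full privileges of the invoking process.
    exec(source, namespace)
    return namespace.get("model")
```

Because exec() imposes no restrictions, a module.py containing, say, an os.system() call would execute it the moment the script loads the model directory.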
Affected Systems
The vulnerability affects the Optimate repository maintained by Nebuly AI, specifically the state captured by commit a6d302f912b481c94370811af6b11402f51d377f dated 2024-07-21. No vendor product list beyond this open‑source project is specified in the CNA data.
Risk and Exploitability
No EPSS score is available for this issue and it is not listed in CISA's KEV catalog, but the weakness is high in severity because exec() is invoked directly on untrusted input. The most likely attack vector is local: an attacker who can place files in, or specify, the directory passed as the --model argument can trigger arbitrary code execution. Exploitation requires no network exposure and has no dependency on external services. Because the injected code runs with whatever privileges the script owner has, the impact can be full system compromise if the process is privileged.
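A typical remediation for this class of weakness is to stop executing arbitrary Python from the model directory and instead load only declarative configuration, resolving the model against an allowlist of known-safe types. The sketch below is a generic mitigation pattern under assumed names (load_model_safe, model.json, ALLOWED_MODELS are all hypothetical), not a fix taken from the Optimate codebase.

```python
import json
import os

# Hypothetical registry mapping declared model types to trusted constructors.
ALLOWED_MODELS = {
    "linear": lambda cfg: ("LinearModel", cfg),
}

def load_model_safe(model_dir):
    """Mitigation sketch: read only declarative JSON config from the
    user-supplied directory and reject any model type not allowlisted."""
    config_path = os.path.join(model_dir, "model.json")
    with open(config_path) as f:
        cfg = json.load(f)
    name = cfg.get("model_type")
    if name not in ALLOWED_MODELS:
        raise ValueError(f"unknown model type: {name!r}")
    # Only code shipped with the application runs; nothing from the
    # attacker-controlled directory is ever executed.
    return ALLOWED_MODELS[name](cfg)
```

If dynamic code loading is genuinely required, it should at minimum be restricted to paths owned by the invoking user and combined with integrity checks (e.g. a signature or hash pinning) rather than blind exec().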
OpenCVE Enrichment