Impact
InstructLab’s linux_train.py hardcodes trust_remote_code=True when loading models from Hugging Face. With this setting, transformers executes any custom Python shipped in the model repository during loading, so a remote attacker can publish a malicious model containing arbitrary code. If a user runs ilab train/download/generate with such a model, the attacker’s code executes with the user’s privileges, potentially leading to full system compromise and loss of confidentiality, integrity, and availability.
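A minimal sketch of a safer loading pattern. The wrapper name `safe_from_pretrained` and its allow-list parameter are hypothetical, not part of InstructLab or transformers; the point is that trust_remote_code must default to False and remote code should require an explicit opt-in per model:

```python
def safe_from_pretrained(model_id, loader, *, trust_remote_code=False,
                         allowed_remote=frozenset()):
    """Guarded wrapper around a from_pretrained-style loader.

    Refuses to enable remote code execution unless the model is
    explicitly allow-listed; never hardcodes trust_remote_code=True.
    """
    if trust_remote_code and model_id not in allowed_remote:
        raise PermissionError(
            f"{model_id} requests remote code execution but is not allow-listed"
        )
    # Pass the (safe-by-default) flag through to the real loader,
    # e.g. transformers.AutoModelForCausalLM.from_pretrained.
    return loader(model_id, trust_remote_code=trust_remote_code)
```

In the vulnerable pattern, the equivalent call site passes trust_remote_code=True unconditionally, so every model the user downloads gets code-execution rights on the host.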
Affected Systems
Red Hat Enterprise Linux AI (RHEL AI) 3
Risk and Exploitability
The CVSS score of 8.8 indicates a high-severity vulnerability; no EPSS score is available, and the issue is not listed in the CISA Known Exploited Vulnerabilities (KEV) catalog. Exploitation requires an attacker to supply a malicious Hugging Face model that a user then loads via ilab train/download/generate. Because trust_remote_code is forced to True, any Python code shipped in the model repository runs unverified, giving the attacker arbitrary code execution on the host. Exploitation is therefore feasible for anyone who can persuade a user to run the command against their model, or who already has enough access to place a model on the local file system, and the impact is complete compromise of the target system.
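To illustrate why trust_remote_code=True is so dangerous, the sketch below simulates (in plain Python, without transformers) what happens under the hood: the loader dynamically imports a modeling file from the model repository, and any top-level statement in that file runs immediately. The helper name `load_repo_module` and the file name `modeling_custom.py` are illustrative stand-ins for the real mechanism:

```python
import importlib.util
import pathlib
import tempfile


def load_repo_module(repo_dir):
    """Mimic the dynamic import a trust_remote_code load performs.

    Any top-level code in the repo's modeling file executes as soon as
    the module is imported -- before the model is ever used.
    """
    path = pathlib.Path(repo_dir) / "modeling_custom.py"
    spec = importlib.util.spec_from_file_location("modeling_custom", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)  # attacker-controlled code runs here
    return mod


with tempfile.TemporaryDirectory() as d:
    # A "model repo" whose modeling file carries a side effect; the flag
    # PAYLOAD_RAN stands in for a real payload (reverse shell, exfiltration).
    (pathlib.Path(d) / "modeling_custom.py").write_text(
        "PAYLOAD_RAN = True  # stand-in for a malicious payload\n"
    )
    mod = load_repo_module(d)
    print(mod.PAYLOAD_RAN)  # the repo's code already executed on load
```

This is why the only meaningful mitigation is to keep trust_remote_code off by default and load untrusted models only in sandboxed environments.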
OpenCVE Enrichment