Description
A flaw was found in InstructLab. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from HuggingFace. This allows a remote attacker to achieve arbitrary Python code execution by convincing a user to run `ilab train/download/generate` with a specially crafted malicious model from the HuggingFace Hub. This vulnerability can lead to complete system compromise.
Published: 2026-04-22
Score: 8.8 High
EPSS: n/a
KEV: No
Impact: Remote Code Execution
Action: Apply Workaround
AI Analysis

Impact

InstructLab's `linux_train.py` hardcodes `trust_remote_code=True` when loading models from HuggingFace, allowing a remote attacker to supply a malicious model that contains arbitrary Python code. If a user runs `ilab train/download/generate` with such a model, that code executes with the user's privileges, potentially leading to full system compromise and loss of confidentiality, integrity, and availability.

Affected Systems

Red Hat Enterprise Linux AI (RHEL AI) 3

Risk and Exploitability

The CVSS score of 8.8 indicates a high-severity vulnerability, although no EPSS score is available and the issue is not listed in the CISA KEV catalog. Exploitation requires an attacker to supply a malicious HuggingFace model that a user then loads with `ilab train/download/generate`. Because `trust_remote_code` is forced to `True`, the model's bundled Python code runs unverified, giving the attacker arbitrary code execution on the host. Exploitation is therefore feasible for anyone who can persuade a user to run the command against a crafted model, or who can already place a model on the local file system, and the impact is complete compromise of the target system.
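The attack path can be illustrated with a self-contained simulation. This is not InstructLab's actual code; it is a hypothetical loader (`load_model`, `modeling_custom.py`, and the payload are all invented for illustration) that mimics how `trust_remote_code=True` causes a repository's bundled Python file to be executed at load time:

```python
import pathlib
import runpy
import tempfile

def load_model(repo_dir: str, trust_remote_code: bool = True):
    """Hypothetical loader mimicking the trust_remote_code mechanism:
    if the repo ships a custom modeling file and trust_remote_code is
    True, that file is executed as Python during model loading."""
    custom = pathlib.Path(repo_dir) / "modeling_custom.py"
    if custom.exists():
        if not trust_remote_code:
            raise RuntimeError("repo contains custom code; refusing to run it")
        # The dangerous step: attacker-controlled Python runs here.
        return runpy.run_path(str(custom))
    return {"weights": "..."}

# A "malicious" model repo whose bundled code runs a payload on load.
repo = tempfile.mkdtemp()
(pathlib.Path(repo) / "modeling_custom.py").write_text(
    "PAYLOAD_RAN = True  # stand-in for arbitrary attacker code\n"
)

ns = load_model(repo, trust_remote_code=True)  # payload executes on load
print(ns["PAYLOAD_RAN"])  # prints True
```

With `trust_remote_code=False` the same loader refuses the repository, which is why hardcoding the flag to `True` removes the user's only guardrail.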

Generated by OpenCVE AI on April 22, 2026 at 19:23 UTC.

Remediation

Vendor Workaround

To mitigate this issue, only use models from trusted sources when performing `instructlab` operations. Review the origin and integrity of any HuggingFace model before using it with `ilab train/download/generate`. Consider running `instructlab` commands within a sandboxed or isolated environment to limit the potential impact of executing untrusted code.


OpenCVE Recommended Actions

  • Use only models from trusted sources when performing instructlab operations.
  • Verify the origin and integrity of any HuggingFace model before using it with ilab train/download/generate.
  • Run instructlab commands within a sandboxed or isolated environment to limit the potential impact of executing untrusted code.
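One way to implement the integrity check recommended above is to pin the SHA-256 digest of each model artifact and refuse to use files that do not match. A minimal sketch, assuming digests are obtained out-of-band from a trusted manifest (the file name and manifest source here are placeholders):

```python
import hashlib
import pathlib
import tempfile

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, pinned_digest: str) -> None:
    """Raise ValueError if the file's digest does not match the pinned one."""
    actual = sha256_of(path)
    if actual != pinned_digest:
        raise ValueError(f"digest mismatch: expected {pinned_digest}, got {actual}")

# Example: pin a known-good digest, then verify before use.
artifact = pathlib.Path(tempfile.mkdtemp()) / "model.safetensors"
artifact.write_bytes(b"example weights")
pinned = sha256_of(str(artifact))  # in practice, taken from a trusted manifest
verify_artifact(str(artifact), pinned)  # passes silently; mismatch raises
```

Checking the digest before invoking `ilab train/download/generate` does not remove the underlying flaw, but it ensures that only artifacts you have explicitly vetted ever reach the vulnerable code path.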



Advisories

No advisories yet.

History

Wed, 22 Apr 2026 13:45:00 +0000

Values Added:

  • Description: A flaw was found in InstructLab. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from HuggingFace. This allows a remote attacker to achieve arbitrary Python code execution by convincing a user to run `ilab train/download/generate` with a specially crafted malicious model from the HuggingFace Hub. This vulnerability can lead to complete system compromise.
  • Title: Instructlab: instructlab: arbitrary code execution due to hardcoded `trust_remote_code=true`
  • First Time appeared: Redhat, Redhat enterprise Linux Ai
  • Weaknesses: CWE-829
  • CPEs: cpe:/a:redhat:enterprise_linux_ai:3
  • Vendors & Products: Redhat, Redhat enterprise Linux Ai
  • References: (none)
  • Metrics: cvssV3_1 {'score': 8.8, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H'}


Subscriptions

Redhat Enterprise Linux Ai
MITRE

Status: PUBLISHED

Assigner: redhat

Published:

Updated: 2026-04-22T13:04:04.795Z

Reserved: 2026-04-22T12:54:46.753Z

Link: CVE-2026-6859

Vulnrichment

No data.

NVD

Status: Awaiting Analysis

Published: 2026-04-22T14:17:07.687

Modified: 2026-04-22T21:23:52.620

Link: CVE-2026-6859

Redhat

No data.

OpenCVE Enrichment

Updated: 2026-04-22T19:30:24Z

Weaknesses