Description
Open WebUI is a self-hosted artificial intelligence platform designed to operate entirely offline. Prior to 0.9.0, the /responses endpoint in the OpenAI router accepts any authenticated user and forwards requests directly to upstream LLM providers without enforcing per-model access control. While the primary chat completion endpoint (generate_chat_completion) checks model ownership, group membership, and AccessGrants before allowing a request, the /responses proxy only validates that the user has a valid session via get_verified_user. This allows any authenticated user to interact with any model configured on the instance by sending a POST request to /api/openai/responses with an arbitrary model ID. This vulnerability is fixed in 0.9.0.
Published: 2026-05-15
Score: 7.1 High
EPSS: < 1% Very Low
KEV: No
Impact: n/a
Action: n/a
AI Analysis

Impact

This vulnerability arises from the /responses endpoint in Open WebUI's OpenAI router, which accepts any authenticated user and forwards requests straight to upstream LLM providers without per-model authorization checks. Consequently, an attacker with a legitimate account can send a POST request to /api/openai/responses with any model ID, gaining unauthorized use of every model configured on the instance. The flaw reflects improper access control (CWE-284) and missing authorization (CWE-862).
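The gap can be illustrated with a minimal sketch. All function names and data shapes below are hypothetical illustrations of the pattern described above, not Open WebUI's actual implementation: the chat-completion path consults ownership, groups, and grants, while a passthrough path that only verifies the session lets any valid user reach any model.

```python
# Minimal sketch of the authorization gap (hypothetical names and data
# shapes, not Open WebUI's actual code).

def get_verified_user(session_token, sessions):
    """Session check only: returns the user for a valid token."""
    user = sessions.get(session_token)
    if user is None:
        raise PermissionError("invalid session")
    return user

def check_model_access(user, model):
    """Per-model check: owner, shared group, or explicit access grant."""
    return (
        model["owner"] == user["id"]
        or bool(set(user["groups"]) & set(model["groups"]))
        or user["id"] in model["grants"]
    )

def responses_passthrough(session_token, model_id, sessions, models):
    """Mirrors the flawed /responses proxy: only the session is verified,
    so any authenticated user reaches any configured model."""
    get_verified_user(session_token, sessions)
    return f"forwarded to upstream for {model_id}"  # no per-model check

def responses_checked(session_token, model_id, sessions, models):
    """Mirrors the fixed behaviour: per-model authorization is enforced
    before the request is forwarded."""
    user = get_verified_user(session_token, sessions)
    if not check_model_access(user, models[model_id]):
        raise PermissionError("user is not authorized for this model")
    return f"forwarded to upstream for {model_id}"
```

In this sketch, the only difference between the two handlers is the single `check_model_access` call, which matches the advisory's point that the session check alone (`get_verified_user`) is insufficient.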

Affected Systems

The affected product is Open WebUI, a self-hosted AI platform. All versions of the open-webui/open-webui package prior to 0.9.0 are vulnerable. The issue is fixed in 0.9.0, so any deployment running an earlier release should be considered at risk.

Risk and Exploitability

The CVSS score of 7.1 indicates high severity. Per the vector (CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:H), the attack is carried out over the network with low complexity and no user interaction, but it does require low-privileged valid credentials. The EPSS score is below 1% (Very Low), so the estimated probability of exploitation in the wild is currently small; however, because the flaw requires nothing beyond a valid session, any authenticated user can abuse it. The vulnerability is not currently listed in CISA's KEV catalog, but the potential to misappropriate all configured models makes it a high-impact risk, especially in environments where model usage is tightly controlled.

Generated by OpenCVE AI on May 15, 2026 at 21:24 UTC.

Remediation

A vendor fix is available: the vulnerability is patched in Open WebUI 0.9.0. No official workaround is provided for earlier releases.

OpenCVE Recommended Actions

  • Upgrade Open WebUI to version 0.9.0 or later, which implements per‑model access checks on the /responses endpoint.
  • Ensure that all authenticated users are governed by role‑based access controls and that model usage is restricted by group membership and AccessGrants.
  • If immediate upgrade is not possible, temporarily disable the /responses endpoint or insert a middleware layer that verifies model ownership before forwarding requests.
  • Validate session tokens and enforce strict authentication to prevent misuse by compromised credentials.
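The middleware stopgap suggested above can be sketched as a thin wrapper placed in front of the passthrough handler. The `handler` and `authorize` signatures here are hypothetical; a real deployment would hook an equivalent check into the reverse proxy or ASGI middleware stack rather than use these exact names:

```python
import json

def guard_responses(handler, authorize):
    """Wrap a passthrough handler so POSTs to /api/openai/responses are
    rejected unless authorize(user, model_id) approves the model.
    Sketch only: handler/authorize are hypothetical interfaces."""
    def wrapped(user, path, body):
        if path == "/api/openai/responses":
            model_id = json.loads(body).get("model")
            if not authorize(user, model_id):
                return 403, "model access denied"
        return handler(user, path, body)  # all other routes unchanged
    return wrapped
```

Because the wrapper leaves every other route untouched, it can sit in front of the router as a temporary control until the upgrade to 0.9.0 lands.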


Tracking


Advisories
Source: GitHub (GHSA)
ID: GHSA-hp5m-24vp-vq2q
Title: Open WebUI's responses passthrough endpoint lacks access control authorization
History

Fri, 15 May 2026 23:15:00 +0000

Type: Metrics
Values Added: ssvc {'options': {'Automatable': 'no', 'Exploitation': 'none', 'Technical Impact': 'partial'}, 'version': '2.0.3'}


Fri, 15 May 2026 21:45:00 +0000

Type: First Time appeared
Values Added: Open-webui, Open-webui open-webui

Type: Vendors & Products
Values Added: Open-webui, Open-webui open-webui

Fri, 15 May 2026 20:15:00 +0000

Type: Description
Values Added: (the full CVE description, identical to the Description section above)

Type: Title
Values Added: Open WebUI: responses passthrough endpoint lacks access control authorization

Type: Weaknesses
Values Added: CWE-284, CWE-862

Type: References

Type: Metrics
Values Added: cvssV3_1 {'score': 7.1, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:H'}


Subscriptions

Open-webui open-webui
MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-05-15T22:21:46.573Z

Reserved: 2026-05-06T20:59:00.594Z

Link: CVE-2026-44556

Vulnrichment

Updated: 2026-05-15T22:15:08.094Z

NVD

Status : Received

Published: 2026-05-15T20:16:47.097

Modified: 2026-05-15T20:16:47.097

Link: CVE-2026-44556

Redhat

No data.

OpenCVE Enrichment

Updated: 2026-05-15T21:30:08Z

Weaknesses