| When saving HSTS data to an excessively long file name, curl could end up
removing all contents, making subsequent requests using that file unaware of
the HSTS status they should otherwise use. |
| This flaw allows an attacker to insert cookies at will into a running program
using libcurl, if a specific series of conditions is met.
libcurl performs transfers. In its API, an application creates "easy handles"
that are the individual handles for single transfers.
libcurl provides a function call that duplicates an easy handle, called
[curl_easy_duphandle](https://curl.se/libcurl/c/curl_easy_duphandle.html).
If a transfer has cookies enabled when the handle is duplicated, the
cookie-enabled state is also cloned - but without cloning the actual
cookies. If the source handle did not read any cookies from a specific file on
disk, the cloned version of the handle would instead store the file name as
`none` (using the four ASCII letters, no quotes).
Subsequent use of the cloned handle that does not explicitly set a source to
load cookies from would then inadvertently load cookies from a file named
`none` - if such a file exists, is readable in the current directory of the
program using libcurl, and uses the correct cookie file format. |
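For illustration, a minimal sketch of the API pattern described above, using a placeholder `https://example.com/` URL; passing an empty string to `CURLOPT_COOKIEFILE` enables the cookie engine without reading any file, which is the state the duplicated handle mishandled:

```c
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);

  CURL *orig = curl_easy_init();
  /* enable the cookie engine without loading cookies from any file */
  curl_easy_setopt(orig, CURLOPT_COOKIEFILE, "");
  curl_easy_setopt(orig, CURLOPT_URL, "https://example.com/");

  /* duplicate the handle; in affected versions the clone could end up
     with the literal file name "none" as its cookie source */
  CURL *clone = curl_easy_duphandle(orig);
  curl_easy_perform(clone);

  curl_easy_cleanup(clone);
  curl_easy_cleanup(orig);
  curl_global_cleanup();
  return 0;
}
```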
| libcurl's ASN1 parser code has the `GTime2str()` function, used for parsing an
ASN.1 Generalized Time field. If given a syntactically incorrect field, the
parser might end up using -1 for the length of the *time fraction*, leading to
a `strlen()` getting performed on a pointer to a heap buffer area that is not
(purposely) null terminated.
This flaw most likely leads to a crash, but can also lead to heap contents
getting returned to the application when
[CURLINFO_CERTINFO](https://curl.se/libcurl/c/CURLINFO_CERTINFO.html) is used. |
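As a rough sketch of the API mentioned above (the URL is a placeholder), an application opts in with `CURLOPT_CERTINFO` and then reads the parsed certificate fields back via `CURLINFO_CERTINFO`:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* ask libcurl to gather certificate details for the connection */
    curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L);

    if(curl_easy_perform(curl) == CURLE_OK) {
      struct curl_certinfo *ci = NULL;
      if(curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
        printf("%d certificates in chain\n", ci->num_of_certs);
        for(int i = 0; i < ci->num_of_certs; i++)
          for(struct curl_slist *s = ci->certinfo[i]; s; s = s->next)
            printf("  %s\n", s->data);
      }
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}
```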
| When curl is asked to use HSTS, the expiry time for a subdomain might
overwrite a parent domain's cache entry, making it end sooner or later than
otherwise intended.
This affects curl-using applications that enable HSTS and use URLs with the
insecure `HTTP://` scheme and perform transfers with hosts like
`x.example.com` as well as `example.com` where the first host is a subdomain
of the second host.
(The HSTS cache either needs to have been populated manually or there needs to
have been previous HTTPS accesses done as the cache needs to have entries for
the domains involved to trigger this problem.)
When `x.example.com` responds with `Strict-Transport-Security:` headers, this
bug can make the subdomain's expiry timeout *bleed over* and get set for the
parent domain `example.com` in curl's HSTS cache.
The result of a triggered bug is that HTTP accesses to `example.com` get
converted to HTTPS for a different period of time than what was asked for by
the origin server. If `example.com` for example stops supporting HTTPS at its
expiry time, curl might then fail to access `http://example.com` until the
(wrongly set) timeout expires. This bug can also expire the parent's entry
*earlier*, thus making curl inadvertently switch back to insecure HTTP earlier
than otherwise intended. |
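A minimal sketch of an application that enables the HSTS cache described above (the cache file path and host are placeholders):

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* enable the HSTS cache and back it with a file */
    curl_easy_setopt(curl, CURLOPT_HSTS, "/tmp/hsts-cache.txt");
    curl_easy_setopt(curl, CURLOPT_HSTS_CTRL, (long)CURLHSTS_ENABLE);

    /* an insecure URL: with a matching HSTS cache entry for the host,
       libcurl upgrades the transfer to HTTPS */
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```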
| When asked to both use a `.netrc` file for credentials and to follow HTTP
redirects, curl could leak the password used for the first host to the
followed-to host under certain circumstances.
This flaw only manifests itself if the netrc file has an entry that matches
the redirect target hostname but the entry either omits just the password or
omits both login and password. |
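A sketch of the affected combination of options, with a placeholder URL; the `.netrc` file itself is whatever the user already has on disk:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* take credentials from the .netrc file ... */
    curl_easy_setopt(curl, CURLOPT_NETRC, (long)CURL_NETRC_REQUIRED);
    /* ... and follow HTTP redirects, the combination the flaw required */
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```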
| This flaw makes curl overflow a heap-based buffer in the SOCKS5 proxy
handshake.
When curl is asked to pass along the host name to the SOCKS5 proxy to allow
that to resolve the address instead of it getting done by curl itself, the
maximum length that host name can be is 255 bytes.
If the host name is detected to be longer, curl switches to local name
resolving and instead passes on the resolved address only. Due to this bug,
the local variable that means "let the host resolve the name" could get the
wrong value during a slow SOCKS5 handshake, and contrary to the intention,
copy the too long host name to the target buffer instead of copying just the
resolved address there.
The target buffer is a heap-based buffer, and the host name comes from the
URL that curl has been told to operate with. |
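A sketch of a transfer going through the affected code path; the proxy address and URL are placeholders, and the `socks5h://` scheme is what asks the proxy (rather than curl) to resolve the host name:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* socks5h:// hands name resolution to the proxy, the mode in which
       the too-long-host-name fallback (and the overflow) applied */
    curl_easy_setopt(curl, CURLOPT_PROXY, "socks5h://127.0.0.1:1080");
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```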
| When an application tells libcurl it wants to allow HTTP/2 server push, and the amount of received headers for the push surpasses the maximum allowed limit (1000), libcurl aborts the server push. When aborting, libcurl inadvertently does not free all the previously allocated headers and instead leaks the memory. Further, this error condition fails silently and is therefore not easily detected by an application. |
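For context, an application only receives server push if it opts in with a push callback on the multi interface. A rough sketch, with a placeholder URL and a callback that accepts every push:

```c
#include <curl/curl.h>

/* accept every pushed stream; in affected versions a push whose header
   count exceeded the limit was aborted without freeing the stored headers */
static int push_cb(CURL *parent, CURL *easy, size_t num_headers,
                   struct curl_pushheaders *headers, void *userp)
{
  (void)parent; (void)easy; (void)num_headers; (void)headers; (void)userp;
  return CURL_PUSH_OK;
}

int main(void)
{
  CURLM *multi = curl_multi_init();
  CURL *easy = curl_easy_init();

  curl_multi_setopt(multi, CURLMOPT_PUSHFUNCTION, push_cb);
  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(easy, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2TLS);
  curl_multi_add_handle(multi, easy);

  int running = 1;
  while(running) {
    curl_multi_perform(multi, &running);
    curl_multi_poll(multi, NULL, 0, 1000, NULL);
  }

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  return 0;
}
```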
| libcurl did not check the server certificate of TLS connections done to a host specified as an IP address, when built to use mbedTLS. libcurl would wrongly avoid using the set hostname function when the specified hostname was given as an IP address, therefore completely skipping the certificate check. This affects all uses of TLS protocols (HTTPS, FTPS, IMAPS, POP3S, SMTPS, etc.). |
| When curl is told to use the Certificate Status Request TLS extension, often referred to as OCSP stapling, to verify that the server certificate is valid, it might fail to detect some OCSP problems and instead wrongly consider the response as fine. If the returned status reports another error than 'revoked' (like for example 'unauthorized'), it is not treated as a bad certificate. |
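A sketch of a transfer that requests the check in question (the URL is a placeholder):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* ask for the Certificate Status Request (OCSP stapling) check */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYSTATUS, 1L);

    CURLcode res = curl_easy_perform(curl);
    /* a stapled response with any error status should make this fail */
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }
  return 0;
}
```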
| When a protocol selection parameter disables all protocols without adding any, the default set of protocols would remain in the allowed set due to an error in the logic for removing protocols. The command below would perform a request to curl.se with a plaintext protocol that has been explicitly disabled: `curl --proto -all,-http http://curl.se`. The flaw is only present if the set of selected protocols disables the entire set of available protocols, in itself a command with no practical use and therefore unlikely to be encountered in real situations. The curl security team has thus assessed this to be a low-severity bug. |
| libcurl skips the certificate verification for a QUIC connection under certain conditions, when built to use wolfSSL. If told to use an unknown/bad cipher or curve, the error path accidentally skips the verification and returns OK, thus ignoring any certificate problems. |
| When asked to use a `.netrc` file for credentials **and** to follow HTTP
redirects, curl could leak the password used for the first host to the
followed-to host under certain circumstances.
This flaw only manifests itself if the netrc file has a `default` entry that
omits both login and password. A rare circumstance. |
| libcurl supports *pinning* of the server certificate public key for HTTPS transfers. Due to an omission, this check is not performed when connecting with QUIC for HTTP/3, when the TLS backend is wolfSSL. Documentation says the option works with wolfSSL, failing to specify that it does not for QUIC and HTTP/3. Since pinning makes the transfer succeed if the pin is fine, users could unwittingly connect to an impostor server without noticing. |
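A sketch of the affected setup: HTTP/3 over QUIC with a pinned public key. The URL and the `sha256//` pin value are placeholders:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* force HTTP/3, which runs over QUIC */
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);
    /* pin the expected server public key; with wolfSSL this check was
       not performed on the QUIC code path */
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY,
                     "sha256//placeholder-base64-hash-of-the-expected-key=");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```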
| Due to a mistake in libcurl's WebSocket code, a malicious server can send a
particularly crafted packet which makes libcurl get trapped in an endless
busy-loop.
There is no other way for the application to escape or exit this loop other
than killing the thread/process.
This might be used to DoS libcurl-using applications. |
| libcurl would wrongly close the same eventfd file descriptor twice when taking
down a connection channel after having completed a threaded name resolve. |
| This flaw allows a malicious HTTP server to set "super cookies" in curl that
are then passed back to more origins than what is otherwise allowed or
possible. This allows a site to set cookies that then would get sent to
different and unrelated sites and domains.
It could do this by exploiting a mixed case flaw in curl's function that
verifies a given cookie domain against the Public Suffix List (PSL). For
example a cookie could be set with `domain=co.UK` when the URL used a lower
case hostname `curl.co.uk`, even though `co.uk` is listed as a PSL domain. |
| When libcurl is asked to perform automatic gzip decompression of
content-encoded HTTP responses with the `CURLOPT_ACCEPT_ENCODING` option,
**using zlib 1.2.0.3 or older**, an attacker-controlled integer overflow would
make libcurl perform a buffer overflow. |
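A sketch of the option in question (placeholder URL); an empty string asks libcurl for all built-in encodings and enables automatic decompression:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* "" requests all supported content encodings (gzip, deflate, ...)
       and turns on automatic decompression of the response body */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```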
| libcurl accidentally skips the certificate verification for QUIC connections when connecting to a host specified as an IP address in the URL. Therefore, it does not detect impostors or man-in-the-middle attacks. |
| curl inadvertently kept the SSL session ID for connections in its cache even when the verify status (*OCSP stapling*) test failed. A subsequent transfer to
the same hostname could then succeed if the session ID cache was still fresh, which then skipped the verify status check. |
| An authentication bypass vulnerability exists in libcurl prior to v8.0.0 where it reuses a previously established SSH connection despite the fact that an SSH option was modified, which should have prevented reuse. libcurl maintains a pool of previously used connections to reuse them for subsequent transfers if the configurations match. However, two SSH settings were omitted from the configuration check, allowing them to match easily, potentially leading to the reuse of an inappropriate connection. |
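A rough sketch of the reuse pattern described above, running two SFTP transfers on one easy handle. The host and paths are placeholders, and `CURLOPT_SSH_KNOWNHOSTS` is used purely to illustrate an SSH-related setting changing between transfers; the description above does not name the two omitted settings:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file");
    curl_easy_setopt(curl, CURLOPT_SSH_KNOWNHOSTS,
                     "/home/user/.ssh/known_hosts");
    curl_easy_perform(curl);

    /* change an SSH-related setting; a second transfer on the same handle
       may reuse the pooled connection if the changed setting is not part
       of the configuration comparison */
    curl_easy_setopt(curl, CURLOPT_SSH_KNOWNHOSTS, "/tmp/other_known_hosts");
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);
  }
  return 0;
}
```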