| In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When CVE-2022-1292 was fixed, it was not discovered that there are other places in the script where the file names of certificates being hashed could be passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0,3.0.1,3.0.2,3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). |
| Issue summary: A timing side-channel in the ECDSA signature computation
could potentially allow recovering the private key.
Impact summary: A timing side-channel in ECDSA signature computations
could allow an attacker to recover the private key. However, measuring
the timing would require either local access to the signing application or
a very fast network connection with low latency.
There is a timing signal of around 300 nanoseconds when the top word of
the inverted ECDSA nonce value is zero. This can happen with significant
probability only for some of the supported elliptic curves. In particular
the NIST P-521 curve is affected. To be able to measure this leak, the attacker
process must either be located on the same physical computer or must
have a very fast network connection with low latency. For that reason
the severity of this vulnerability is Low.
The FIPS modules in 3.4, 3.3, 3.2, 3.1 and 3.0 are affected by this issue. |
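A minimal sketch of the kind of local timing measurement this entry describes, assuming OpenSSL 3.x with the EVP_EC_gen() convenience macro and a POSIX monotonic clock; the message, buffer sizes and iteration count are illustrative and error handling is abbreviated:

```c
/*
 * Illustrative sketch only: time repeated ECDSA P-521 signatures with
 * EVP_DigestSign to show the kind of local measurement the attack requires.
 */
#include <stdio.h>
#include <time.h>
#include <openssl/evp.h>

int main(void)
{
    /* Generate a throwaway P-521 key (EVP_EC_gen is an OpenSSL 3.0 helper). */
    EVP_PKEY *key = EVP_EC_gen("P-521");
    unsigned char msg[32] = {0}, sig[256];
    size_t siglen;
    struct timespec t0, t1;

    if (key == NULL)
        return 1;

    for (int i = 0; i < 1000; i++) {
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();

        siglen = sizeof(sig);
        EVP_DigestSignInit(ctx, NULL, EVP_sha512(), NULL, key);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        EVP_DigestSign(ctx, sig, &siglen, msg, sizeof(msg));
        clock_gettime(CLOCK_MONOTONIC, &t1);
        /* Print per-signature wall-clock time in nanoseconds. */
        printf("%lld\n", (long long)((t1.tv_sec - t0.tv_sec) * 1000000000LL
                                     + (t1.tv_nsec - t0.tv_nsec)));
        EVP_MD_CTX_free(ctx);
    }
    EVP_PKEY_free(key);
    return 0;
}
```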
| Issue summary: Use of -addreject option with the openssl x509 application adds
a trusted use instead of a rejected use for a certificate.
Impact summary: If a user intends to make a trusted certificate rejected for
a particular use, it will instead be marked as trusted for that use.
A copy & paste error during minor refactoring of the code introduced this
issue in the OpenSSL 3.5 version. If, for example, a trusted CA certificate
should be trusted only for the purpose of authenticating TLS servers and not
for CMS signature verification, and CMS signature verification is therefore
marked as rejected with the -addreject option, the resulting CA certificate
will instead be trusted for the CMS signature verification purpose.
Only users of the trusted certificate format who use the openssl x509
command line application to add rejected uses are affected by this issue.
The issues affecting only the command line application are considered to
be Low severity.
The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue.
OpenSSL 3.4, 3.3, 3.2, 3.1, 3.0, 1.1.1 and 1.0.2 are also not affected by this
issue. |
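For reference, a rejected use can also be added through the C API rather than the affected openssl x509 code path. The sketch below is illustrative only; the file names and the emailProtection OID are placeholders for whatever trust purpose a deployment actually uses:

```c
/*
 * Hedged sketch: mark a use as rejected in a certificate's trust (aux) data
 * with X509_add1_reject_object() and write it out in the trusted certificate
 * format. Example file names and OID only.
 */
#include <openssl/pem.h>
#include <openssl/x509.h>
#include <openssl/objects.h>

int main(void)
{
    BIO *in = BIO_new_file("ca.pem", "r");          /* example input file  */
    BIO *out = BIO_new_file("ca-trusted.pem", "w"); /* example output file */
    X509 *cert = PEM_read_bio_X509_AUX(in, NULL, NULL, NULL);
    ASN1_OBJECT *use = OBJ_txt2obj("emailProtection", 0);

    if (cert == NULL || use == NULL)
        return 1;

    /* Record the use as rejected rather than trusted. */
    if (!X509_add1_reject_object(cert, use))
        return 1;

    /* Write the certificate back out in the trusted certificate format. */
    if (!PEM_write_bio_X509_AUX(out, cert))
        return 1;

    ASN1_OBJECT_free(use);
    X509_free(cert);
    BIO_free(in);
    BIO_free(out);
    return 0;
}
```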
| The (1) TLS and (2) DTLS implementations in OpenSSL 1.0.1 before 1.0.1g do not properly handle Heartbeat Extension packets, which allows remote attackers to obtain sensitive information from process memory via crafted packets that trigger a buffer over-read, as demonstrated by reading private keys, related to d1_both.c and t1_lib.c, aka the Heartbleed bug. |
| Issue summary: The POLY1305 MAC (message authentication code) implementation
contains a bug that might corrupt the internal state of applications on the
Windows 64 platform when running on newer X86_64 processors supporting the
AVX512-IFMA instructions.
Impact summary: If an attacker can influence whether the POLY1305 MAC
algorithm is used by an application that uses the OpenSSL library, the
application state might be corrupted, with various application-dependent
consequences.
The POLY1305 MAC (message authentication code) implementation in OpenSSL does
not save the contents of non-volatile XMM registers on the Windows 64 platform
when calculating the MAC of data larger than 64 bytes. Before returning to
the caller, all the XMM registers are set to zero rather than restored to
their previous contents. The vulnerable code is used only on newer x86_64
processors supporting the AVX512-IFMA instructions.
The consequences of this kind of internal application state corruption can
vary - from none, if the calling application does not depend on the contents
of the non-volatile XMM registers at all, to the worst case, where the
attacker could get complete control of the application process. However,
given that the registers are simply zeroized and the attacker cannot put
arbitrary values into them, the most likely consequence, if any, would be an
incorrect result of some application-dependent calculation or a crash leading
to a denial of service.
The POLY1305 MAC algorithm is most frequently used as part of the
CHACHA20-POLY1305 AEAD (authenticated encryption with associated data)
algorithm. The most common usage of this AEAD cipher is with TLS protocol
versions 1.2 and 1.3 and a malicious client can influence whether this AEAD
cipher is used by the server. This implies that server applications using
OpenSSL can potentially be impacted. However, we are currently not aware of
any concrete application that would be affected by this issue, therefore we
consider this a Low severity security issue.
As a workaround, AVX512-IFMA instruction support can be disabled at
runtime by setting the environment variable OPENSSL_ia32cap:
OPENSSL_ia32cap=:~0x200000
The FIPS provider is not affected by this issue. |
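The same workaround can also be applied from inside an application before OpenSSL is initialised. The sketch below is an assumption-laden illustration, not advisory text: it must run before any other OpenSSL call in the process, and it uses _putenv_s() on Windows and setenv() elsewhere:

```c
/*
 * Hedged sketch: set the documented OPENSSL_ia32cap workaround value in the
 * process environment before the OpenSSL capability probe runs, so the
 * AVX512-IFMA code path is never selected.
 */
#include <stdlib.h>
#include <openssl/crypto.h>

int disable_avx512_ifma_before_openssl_init(void)
{
    /* Must execute before any other OpenSSL function is called. */
#if defined(_WIN32)
    if (_putenv_s("OPENSSL_ia32cap", ":~0x200000") != 0)
        return 0;
#else
    if (setenv("OPENSSL_ia32cap", ":~0x200000", 1) != 0)
        return 0;
#endif
    /* Explicitly initialise libcrypto now that the variable is in place. */
    return OPENSSL_init_crypto(0, NULL);
}
```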
| The c_rehash script does not properly sanitise shell metacharacters to prevent command injection. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.3 (Affected 3.0.0,3.0.1,3.0.2). Fixed in OpenSSL 1.1.1o (Affected 1.1.1-1.1.1n). Fixed in OpenSSL 1.0.2ze (Affected 1.0.2-1.0.2zd). |
| OpenSSL 3.0.0 through 3.3.2 on the PowerPC architecture is vulnerable to a Minerva attack, exploitable by measuring the time of signing of random messages using the EVP_DigestSign API, and then using the known private key to extract the K value (nonce) from the signatures. Next, based on the bit size of the extracted nonce, one can compare the signing time of full-sized nonces to signatures that used smaller nonces, via statistical tests. There is a side-channel in the P-364 curve that allows private key extraction (there is also a dependency between the bit size of K and the size of the side channel). NOTE: This CVE is disputed because the OpenSSL security policy explicitly notes that any side channels that require the attacker to be on the same physical system in order to be detected are outside of the threat model for the software. The timing signal is so small that it is infeasible to detect without the attacking process running on the same physical system. |
| Issue summary: The POLY1305 MAC (message authentication code) implementation
contains a bug that might corrupt the internal state of applications running
on PowerPC CPU based platforms if the CPU provides vector instructions.
Impact summary: If an attacker can influence whether the POLY1305 MAC
algorithm is used, the application state might be corrupted, with various
application-dependent consequences.
The POLY1305 MAC (message authentication code) implementation in OpenSSL for
PowerPC CPUs restores the contents of vector registers in a different order
than they are saved. Thus the contents of some of these vector registers
are corrupted when returning to the caller. The vulnerable code is used only
on newer PowerPC processors supporting the PowerISA 2.07 instructions.
The consequences of this kind of internal application state corruption can
vary - from none, if the calling application does not depend on the contents
of the non-volatile vector registers at all, to the worst case, where the
attacker could get complete control of the application process. However,
unless the compiler uses the vector registers for storing pointers, the most
likely consequence, if any, would be an incorrect result of some
application-dependent calculations or a crash leading to a denial of service.
The POLY1305 MAC algorithm is most frequently used as part of the
CHACHA20-POLY1305 AEAD (authenticated encryption with associated data)
algorithm. The most common usage of this AEAD cipher is with TLS protocol
versions 1.2 and 1.3. If this cipher is enabled on the server, a malicious
client can influence whether this AEAD cipher is used. This implies that
TLS server applications using OpenSSL can potentially be impacted. However,
we are currently not aware of any concrete application that would be affected
by this issue, therefore we consider this a Low severity security issue. |
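One possible precaution (not a workaround stated in the advisory) is to keep an affected TLS server from negotiating CHACHA20-POLY1305 until it can be updated; the cipher strings below are examples only:

```c
/*
 * Hedged mitigation sketch: configure a server SSL_CTX so that
 * ChaCha20-Poly1305 suites are not offered, for TLS 1.3 and for TLS 1.2 and
 * below respectively.
 */
#include <openssl/ssl.h>

int configure_ctx_without_chacha(SSL_CTX *ctx)
{
    /* TLS 1.3: list only the AES-GCM suites, omitting
     * TLS_CHACHA20_POLY1305_SHA256. */
    if (!SSL_CTX_set_ciphersuites(ctx,
            "TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256"))
        return 0;

    /* TLS 1.2 and below: exclude the ChaCha20-Poly1305 cipher suites. */
    if (!SSL_CTX_set_cipher_list(ctx, "DEFAULT:!CHACHA20"))
        return 0;

    return 1;
}
```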
| Issue summary: Applications performing certificate name checks (e.g., TLS
clients checking server certificates) may attempt to read an invalid memory
address resulting in abnormal termination of the application process.
Impact summary: Abnormal termination of an application can cause a denial of
service.
Applications performing certificate name checks (e.g., TLS clients checking
server certificates) may attempt to read an invalid memory address when
comparing the expected name with an `otherName` subject alternative name of an
X.509 certificate. This may result in an exception that terminates the
application program.
Note that basic certificate chain validation (signatures, dates, ...) is not
affected; the denial of service can occur only when the application also
specifies an expected DNS name, email address or IP address.
TLS servers rarely solicit client certificates, and even when they do, they
generally don't perform a name check against a reference identifier (expected
identity), but rather extract the presented identity after checking the
certificate chain. So TLS servers are generally not affected and the severity
of the issue is Moderate.
The FIPS modules in 3.3, 3.2, 3.1 and 3.0 are not affected by this issue. |
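For context, the sketch below shows the kind of client-side configuration this entry refers to, where an expected DNS name is registered so that certificate name checking runs during verification; the hostname and flags are example values:

```c
/*
 * Illustrative sketch: a TLS client registering an expected DNS name, which
 * is the code path (peer name checking) affected by this issue.
 */
#include <openssl/ssl.h>
#include <openssl/x509v3.h>

int enable_hostname_check(SSL *ssl)
{
    /* Optional: tighten wildcard matching. */
    SSL_set_hostflags(ssl, X509_CHECK_FLAG_NO_PARTIAL_WILDCARDS);

    /* Register the expected DNS name; name checking then happens as part of
     * certificate verification. */
    if (!SSL_set1_host(ssl, "www.example.com"))
        return 0;

    /* Make a verification failure abort the handshake. */
    SSL_set_verify(ssl, SSL_VERIFY_PEER, NULL);
    return 1;
}
```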
| The OPENSSL_LH_flush() function, which empties a hash table, contains a bug that breaks reuse of the memory occupied by the removed hash table entries. This function is used when decoding certificates or keys. If a long lived process periodically decodes certificates or keys, its memory usage will expand without bounds and the process might be terminated by the operating system, causing a denial of service. Traversing the empty hash table entries will also take increasingly more time. Typically such long lived processes might be TLS clients or TLS servers configured to accept client certificate authentication. The function was added in the OpenSSL 3.0 version thus older releases are not affected by the issue. Fixed in OpenSSL 3.0.3 (Affected 3.0.0,3.0.1,3.0.2). |
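A sketch of the long-lived decode pattern described above, assuming a buffer holding a valid DER-encoded certificate; on the affected 3.0.0-3.0.2 releases the loop's memory usage grows across iterations, while on fixed releases it stays flat:

```c
/*
 * Illustrative sketch: repeatedly decode and free a certificate, the pattern
 * under which the unreused hash-table memory accumulates on affected builds.
 */
#include <openssl/x509.h>

void decode_loop(const unsigned char *der, long der_len, int iterations)
{
    for (int i = 0; i < iterations; i++) {
        const unsigned char *p = der;      /* d2i_X509 advances the pointer */
        X509 *cert = d2i_X509(NULL, &p, der_len);

        if (cert != NULL)
            X509_free(cert);
        /* On affected versions, internal hash-table memory released here is
         * not reused, so resident memory keeps climbing. */
    }
}
```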
| The function `OCSP_basic_verify` verifies the signer certificate on an OCSP response. In the case where the (non-default) flag OCSP_NOCHECKS is used, the response will be positive (meaning a successful verification) even in the case where the response signing certificate fails to verify. It is anticipated that most users of `OCSP_basic_verify` will not use the OCSP_NOCHECKS flag. In this case the `OCSP_basic_verify` function will return a negative value (indicating a fatal error) in the case of a certificate verification failure. The normal expected return value in this case would be 0. This issue also impacts the command line OpenSSL "ocsp" application. When verifying an OCSP response with the "-no_cert_checks" option, the command line application will report that the verification is successful even though it has in fact failed. In this case the incorrect successful response will also be accompanied by error messages showing the failure and contradicting the apparently successful result. Fixed in OpenSSL 3.0.3 (Affected 3.0.0,3.0.1,3.0.2). |
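A sketch of the unaffected usage described above: calling OCSP_basic_verify() without OCSP_NOCHECKS and treating any non-positive return value as failure. The surrounding response parsing and trust store setup are assumed to exist elsewhere:

```c
/*
 * Hedged sketch: verify an already-parsed OCSP basic response without the
 * OCSP_NOCHECKS flag and handle the return value strictly.
 */
#include <openssl/ocsp.h>

int verify_ocsp_response(OCSP_BASICRESP *bs, STACK_OF(X509) *untrusted,
                         X509_STORE *trust_store)
{
    /* flags == 0: OCSP_NOCHECKS is not passed, so this call is not on the
     * code path with the incorrect positive result described above. */
    int ret = OCSP_basic_verify(bs, untrusted, trust_store, 0);

    /* <= 0 covers both "verification failed" (0) and "fatal error" (< 0). */
    return ret > 0;
}
```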
| Issue summary: Checking excessively long DH keys or parameters may be very slow.
Impact summary: Applications that use the functions DH_check(), DH_check_ex()
or EVP_PKEY_param_check() to check a DH key or DH parameters may experience long
delays. Where the key or parameters that are being checked have been obtained
from an untrusted source this may lead to a Denial of Service.
The function DH_check() performs various checks on DH parameters. After fixing
CVE-2023-3446 it was discovered that a large q parameter value can also trigger
an overly long computation during some of these checks. A correct q value,
if present, cannot be larger than the modulus p parameter, thus it is
unnecessary to perform these checks if q is larger than p.
An application that calls DH_check() and supplies a key or parameters obtained
from an untrusted source could be vulnerable to a Denial of Service attack.
The function DH_check() is itself called by a number of other OpenSSL functions.
An application calling any of those other functions may similarly be affected.
The other functions affected by this are DH_check_ex() and
EVP_PKEY_param_check().
Also vulnerable are the OpenSSL dhparam and pkeyparam command line applications
when using the "-check" option.
The OpenSSL SSL/TLS implementation is not affected by this issue.
The OpenSSL 3.0 and 3.1 FIPS providers are not affected by this issue. |
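A defensive sketch an application might use when the key or parameters come from an untrusted source; this is an application-side precaution, not the upstream fix, and the 10,000-bit bound mirrors the modulus limit mentioned in the related DH_check entry further below:

```c
/*
 * Hedged sketch: bound the sizes of p and q before handing untrusted DH
 * parameters to DH_check(). The low-level DH API is used here because that
 * is what the advisory discusses.
 */
#include <openssl/dh.h>
#include <openssl/bn.h>

int check_untrusted_dh(DH *dh)
{
    const BIGNUM *p = NULL, *q = NULL, *g = NULL;
    int codes = 0;

    DH_get0_pqg(dh, &p, &q, &g);
    if (p == NULL || BN_num_bits(p) > 10000)
        return 0;                  /* refuse an oversized modulus            */
    if (q != NULL && BN_num_bits(q) > BN_num_bits(p))
        return 0;                  /* a valid q can never be larger than p   */

    if (!DH_check(dh, &codes))
        return 0;                  /* the check itself failed                */
    return codes == 0;             /* non-zero codes flag bad parameters     */
}
```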
| A security vulnerability has been identified in all supported versions
of OpenSSL related to the verification of X.509 certificate chains
that include policy constraints. Attackers may be able to exploit this
vulnerability by creating a malicious certificate chain that triggers
exponential use of computational resources, leading to a denial-of-service
(DoS) attack on affected systems.
Policy processing is disabled by default but can be enabled by passing
the `-policy' argument to the command line utilities or by calling the
`X509_VERIFY_PARAM_set1_policies()' function. |
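A sketch of how an application ends up in the non-default, policy-enabled configuration this entry describes; the policy OID string is a placeholder:

```c
/*
 * Illustrative sketch: enable certificate policy checking on a verification
 * context. Policy processing stays off unless a call like this (or the CLI
 * -policy option) is used.
 */
#include <openssl/x509_vfy.h>
#include <openssl/objects.h>

int require_policy(X509_STORE_CTX *ctx, const char *policy_oid)
{
    X509_VERIFY_PARAM *param = X509_STORE_CTX_get0_param(ctx);
    ASN1_OBJECT *oid = OBJ_txt2obj(policy_oid, 1);  /* numeric OID expected */
    STACK_OF(ASN1_OBJECT) *policies = sk_ASN1_OBJECT_new_null();
    int ok = 0;

    if (oid != NULL && policies != NULL && sk_ASN1_OBJECT_push(policies, oid)) {
        oid = NULL;  /* the stack now owns the object */
        /* This is the call that turns policy checking on for verification;
         * the policies stack is copied by the set1 function. */
        ok = X509_VERIFY_PARAM_set1_policies(param, policies);
    }

    sk_ASN1_OBJECT_pop_free(policies, ASN1_OBJECT_free);
    ASN1_OBJECT_free(oid);
    return ok;
}
```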
| Issue summary: Checking excessively long DH keys or parameters may be very slow.
Impact summary: Applications that use the functions DH_check(), DH_check_ex()
or EVP_PKEY_param_check() to check a DH key or DH parameters may experience long
delays. Where the key or parameters that are being checked have been obtained
from an untrusted source this may lead to a Denial of Service.
The function DH_check() performs various checks on DH parameters. One of those
checks confirms that the modulus ('p' parameter) is not too large. Trying to use
a very large modulus is slow and OpenSSL will not normally use a modulus which
is over 10,000 bits in length.
However the DH_check() function checks numerous aspects of the key or parameters
that have been supplied. Some of those checks use the supplied modulus value
even if it has already been found to be too large.
An application that calls DH_check() and supplies a key or parameters obtained
from an untrusted source could be vulnerable to a Denial of Service attack.
The function DH_check() is itself called by a number of other OpenSSL functions.
An application calling any of those other functions may similarly be affected.
The other functions affected by this are DH_check_ex() and
EVP_PKEY_param_check().
Also vulnerable are the OpenSSL dhparam and pkeyparam command line applications
when using the '-check' option.
The OpenSSL SSL/TLS implementation is not affected by this issue.
The OpenSSL 3.0 and 3.1 FIPS providers are not affected by this issue. |
| Issue summary: The AES-SIV cipher implementation contains a bug that causes
it to ignore empty associated data entries, which are consequently left
unauthenticated.
Impact summary: Applications that use the AES-SIV algorithm and want to
authenticate empty data entries as associated data can be misled by the
removal, addition or reordering of such empty entries, as these are ignored
by the OpenSSL implementation. We are currently unaware of any such
applications.
The AES-SIV algorithm allows for authentication of multiple associated
data entries along with the encryption. To authenticate empty data the
application has to call EVP_EncryptUpdate() (or EVP_CipherUpdate()) with a
NULL pointer as the output buffer and 0 as the input buffer length.
The AES-SIV implementation in OpenSSL just returns success for such a call
instead of performing the associated data authentication operation.
The empty data will thus not be authenticated.
As this issue does not affect non-empty associated data authentication and
we expect it to be rare for an application to use empty associated data
entries, this is qualified as a Low severity issue. |
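A sketch of the associated data pattern described above, using the fetchable "AES-128-SIV" cipher from OpenSSL 3.0 (which takes a 32-byte key); the key and data values are placeholders, and the comment marks the zero-length entry that affected releases skip:

```c
/*
 * Hedged sketch: AES-SIV encryption with two associated data entries, the
 * second of which is empty. Values are placeholders; error paths are minimal.
 */
#include <openssl/evp.h>

int siv_encrypt_with_empty_aad(void)
{
    unsigned char key[32] = {0};           /* placeholder 256-bit SIV key  */
    unsigned char pt[16] = {0};            /* placeholder plaintext        */
    unsigned char ct[16], tag[16];
    int outl = 0, tmplen = 0, ok = 0;
    EVP_CIPHER *siv = EVP_CIPHER_fetch(NULL, "AES-128-SIV", NULL);
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

    if (siv == NULL || ctx == NULL)
        goto done;
    if (!EVP_EncryptInit_ex2(ctx, siv, key, NULL, NULL))
        goto done;

    /* First associated data entry: non-empty, always authenticated. */
    if (!EVP_EncryptUpdate(ctx, NULL, &outl, (const unsigned char *)"header", 6))
        goto done;

    /* Second associated data entry: empty. The call succeeds, but on the
     * affected releases this entry is not mixed into the authentication. */
    if (!EVP_EncryptUpdate(ctx, NULL, &outl, (const unsigned char *)"", 0))
        goto done;

    /* Single plaintext pass, then finalize and fetch the SIV tag. */
    if (!EVP_EncryptUpdate(ctx, ct, &outl, pt, sizeof(pt)))
        goto done;
    if (!EVP_EncryptFinal_ex(ctx, ct + outl, &tmplen))
        goto done;
    if (!EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_AEAD_GET_TAG, sizeof(tag), tag))
        goto done;
    ok = 1;

 done:
    EVP_CIPHER_CTX_free(ctx);
    EVP_CIPHER_free(siv);
    return ok;
}
```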
| While parsing an IPAddressFamily extension in an X.509 certificate, it is possible to do a one-byte overread. This would result in an incorrect text display of the certificate. This bug has been present since 2006 and is present in all versions of OpenSSL before 1.0.2m and 1.1.0g. |
| In OpenSSL 1.1.0 before 1.1.0d, if a malicious server supplies bad parameters for a DHE or ECDHE key exchange then this can result in the client attempting to dereference a NULL pointer leading to a client crash. This could be exploited in a Denial of Service attack. |
| A denial of service flaw was found in OpenSSL 0.9.8, 1.0.1, 1.0.2 through 1.0.2h, and 1.1.0 in the way the TLS/SSL protocol defined processing of ALERT packets during a connection handshake. A remote attacker could use this flaw to make a TLS/SSL server consume an excessive amount of CPU and fail to accept connections from other clients. |
| There is an overflow bug in the AVX2 Montgomery multiplication procedure used in exponentiation with 1024-bit moduli. No EC algorithms are affected. Analysis suggests that attacks against RSA and DSA as a result of this defect would be very difficult to perform and are not believed likely. Attacks against DH1024 are considered just feasible, because most of the work necessary to deduce information about a private key may be performed offline. The amount of resources required for such an attack would be significant. However, for an attack on TLS to be meaningful, the server would have to share the DH1024 private key among multiple clients, which is no longer an option since CVE-2016-0701. This only affects processors that support AVX2 but not the ADX extensions, such as Intel Haswell (4th generation). Note: The impact from this issue is similar to CVE-2017-3736, CVE-2017-3732 and CVE-2015-3193. OpenSSL version 1.0.2-1.0.2m and 1.1.0-1.1.0g are affected. Fixed in OpenSSL 1.0.2n. Due to the low severity of this issue we are not issuing a new release of OpenSSL 1.1.0 at this time. The fix will be included in OpenSSL 1.1.0h when it becomes available. The fix is also available in commit e502cc86d in the OpenSSL git repository. |
| OpenSSL 1.0.2 (starting from version 1.0.2b) introduced an "error state" mechanism. The intent was that if a fatal error occurred during a handshake then OpenSSL would move into the error state and would immediately fail if you attempted to continue the handshake. This works as designed for the explicit handshake functions (SSL_do_handshake(), SSL_accept() and SSL_connect()), however due to a bug it does not work correctly if SSL_read() or SSL_write() is called directly. In that scenario, if the handshake fails then a fatal error will be returned in the initial function call. If SSL_read()/SSL_write() is subsequently called by the application for the same SSL object then it will succeed and the data is passed directly to or from the SSL/TLS record layer without being decrypted/encrypted. In order to exploit this issue an application bug would have to be present that resulted in a call to SSL_read()/SSL_write() being issued after having already received a fatal error. OpenSSL versions 1.0.2b-1.0.2m are affected. Fixed in OpenSSL 1.0.2n. OpenSSL 1.1.0 is not affected. |
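A sketch of the application-side pattern that avoids the problem: drive the handshake explicitly, classify the result with SSL_get_error(), and never fall through to SSL_read()/SSL_write() after a fatal error. The function name and control flow are illustrative:

```c
/*
 * Illustrative sketch: complete the handshake explicitly and stop using the
 * SSL object after a fatal error instead of calling SSL_read()/SSL_write().
 */
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int connect_then_read(SSL *ssl, unsigned char *buf, int buflen)
{
    int ret = SSL_connect(ssl);

    if (ret <= 0) {
        int err = SSL_get_error(ssl, ret);

        /* Non-fatal: the handshake just needs more I/O; retry later. */
        if (err == SSL_ERROR_WANT_READ || err == SSL_ERROR_WANT_WRITE)
            return 0;

        /* Fatal handshake error: do not call SSL_read()/SSL_write() on this
         * object afterwards (the buggy 1.0.2b-1.0.2m releases would let such
         * calls "succeed" with unprotected record-layer data). */
        ERR_print_errors_fp(stderr);
        return -1;
    }

    /* Handshake completed; application data is protected as expected. */
    return SSL_read(ssl, buf, buflen);
}
```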