| A flaw was found in libssh when using the ChaCha20 cipher with the OpenSSL library. If an attacker manages to exhaust the heap space, the resulting allocation failure is not detected, which may lead to libssh using a partially initialized cipher context. This occurs because the error code returned by OpenSSL aliases with the SSH_OK code, so libssh does not properly detect the error reported by the OpenSSL library. This issue can lead to undefined behavior, including compromised data confidentiality and integrity, or crashes. |
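The aliasing is easiest to see in code. The following is a minimal, hypothetical sketch of the bug pattern described above, not libssh's actual code: OpenSSL's EVP init functions return 0 on failure while SSH_OK is conventionally 0, so returning the OpenSSL status directly makes an allocation failure look like success.

```c
/*
 * Hypothetical sketch of the error-code aliasing pattern described
 * above; not libssh's actual code. Requires OpenSSL >= 1.1.0.
 */
#include <openssl/evp.h>

#define SSH_OK     0   /* libssh convention: 0 means success  */
#define SSH_ERROR -1   /* libssh convention: -1 means failure */

static int chacha20_set_key(EVP_CIPHER_CTX *ctx,
                            const unsigned char *key,
                            const unsigned char *iv)
{
    /*
     * BUG: EVP_EncryptInit_ex() returns 1 on success and 0 on failure
     * (e.g. when the heap is exhausted). Returning that status
     * directly means a failure (0) aliases with SSH_OK (0), so the
     * caller proceeds with a partially initialized cipher context.
     */
    return EVP_EncryptInit_ex(ctx, EVP_chacha20(), NULL, key, iv);
}

static int chacha20_set_key_fixed(EVP_CIPHER_CTX *ctx,
                                  const unsigned char *key,
                                  const unsigned char *iv)
{
    /* Fixed: translate the OpenSSL status into the libssh convention. */
    if (EVP_EncryptInit_ex(ctx, EVP_chacha20(), NULL, key, iv) != 1)
        return SSH_ERROR;
    return SSH_OK;
}
```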
| In the Linux kernel, the following vulnerability has been resolved:
nfsd: fix RELEASE_LOCKOWNER
The test on so_count in nfsd4_release_lockowner() is nonsense and
harmful. Revert to using check_for_locks(), changing that to not sleep.
First: harmful.
As is documented in the kdoc comment for nfsd4_release_lockowner(), the
test on so_count can transiently return a false positive resulting in a
return of NFS4ERR_LOCKS_HELD when in fact no locks are held. This is
clearly a protocol violation and with the Linux NFS client it can cause
incorrect behaviour.
If RELEASE_LOCKOWNER is sent while some other thread is still
processing a LOCK request which failed because, at the time that request
was received, the given owner held a conflicting lock, then the nfsd
thread processing that LOCK request can hold a reference (conflock) to
the lock owner that causes nfsd4_release_lockowner() to return an
incorrect error.
The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because it
never sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, so
it knows that the error is impossible. It assumes the lock owner was in
fact released so it feels free to use the same lock owner identifier in
some later locking request.
When it does reuse a lock owner identifier for which a previous RELEASE
failed, it will naturally use a lock_seqid of zero. However the server,
which didn't release the lock owner, will expect a larger lock_seqid and
so will respond with NFS4ERR_BAD_SEQID.
So clearly it is harmful to allow a false positive, which testing
so_count allows.
The test is nonsense because ... well... it doesn't mean anything.
so_count is the sum of three different counts.
1/ the set of states listed on so_stateids
2/ the set of active vfs locks owned by any of those states
3/ various transient counts such as for conflicting locks.
When it is tested against '2' it is clear that one of these is the
transient reference obtained by find_lockowner_str_locked(). It is not
clear what the other one is expected to be.
In practice, the count is often 2 because there is precisely one state
on so_stateids. If there were more, this would fail.
In my testing I see two circumstances when RELEASE_LOCKOWNER is called.
In one case, CLOSE is called before RELEASE_LOCKOWNER. That results in
all the lock states being removed, and so the lockowner being discarded
(it is removed when there are no more references which usually happens
when the lock state is discarded). When nfsd4_release_lockowner() finds
that the lock owner doesn't exist, it returns success.
The other case shows an so_count of '2' and precisely one state listed
in so_stateid. It appears that the Linux client uses a separate lock
owner for each file resulting in one lock state per lock owner, so this
test on '2' is safe. For another client it might not be safe.
So this patch changes check_for_locks() to use the (newish)
find_any_file_locked() so that it doesn't take a reference on the
nfs4_file and so never calls nfsd_file_put(), and so never sleeps. With
this change it is safe to restore the use of check_for_locks() rather
than testing so_count against the mysterious '2'. |
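Based on the commit text above, the reworked check_for_locks() looks roughly like the sketch below. This is a hedged reconstruction, not the verbatim kernel patch: find_any_file_locked() and locks_inode_context() are named in or implied by the description, while the lock-list field names and locking details are assumptions drawn from kernel APIs of that era.

```c
/*
 * Simplified sketch of the reworked check_for_locks(), reconstructed
 * from the commit text above; not the verbatim kernel patch. It walks
 * the inode's POSIX lock list under fi_lock, using
 * find_any_file_locked() so no nfs4_file reference is taken and
 * nfsd_file_put() (which can sleep) is never called.
 */
static bool check_for_locks(struct nfs4_file *fp,
			    struct nfs4_lockowner *lowner)
{
	struct file_lock_context *flctx;
	struct file_lock *fl;
	struct nfsd_file *nf;
	bool status = false;

	spin_lock(&fp->fi_lock);
	nf = find_any_file_locked(fp);	/* no reference taken */
	if (!nf)
		goto out;

	flctx = locks_inode_context(file_inode(nf->nf_file));
	if (flctx && !list_empty_careful(&flctx->flc_posix)) {
		spin_lock(&flctx->flc_lock);
		list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
			if (fl->fl_owner == (fl_owner_t)lowner) {
				status = true;
				break;
			}
		}
		spin_unlock(&flctx->flc_lock);
	}
out:
	spin_unlock(&fp->fi_lock);
	return status;	/* true => NFS4ERR_LOCKS_HELD is justified */
}
```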
| In the Linux kernel, the following vulnerability has been resolved:
nilfs2: prevent kernel bug at submit_bh_wbc()
Fix a bug where nilfs_get_block() returns a successful status even
though searching for and inserting the specified block both fail,
which is mutually inconsistent. If this inconsistency is not due to a
previously fixed bug, an unexpected race is occurring, so return the
temporary error -EAGAIN instead.
This prevents callers such as __block_write_begin_int() from requesting a
read into a buffer that is not mapped, which would cause the BUG_ON check
for the BH_Mapped flag in submit_bh_wbc() to fail. |
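A hedged sketch of the error-path shape described above; this is not the verbatim nilfs2 patch, the function name nilfs_get_block_sketch is invented, and the bmap helper signatures are assumptions.

```c
/*
 * Hedged sketch of the fix described above; not the verbatim nilfs2
 * patch, and the bmap helper signatures are assumptions. The point is
 * the error-path shape: when lookup and insert contradict each other,
 * report a temporary error instead of success.
 */
static int nilfs_get_block_sketch(struct nilfs_bmap *bmap, __u64 blkoff,
				  __u64 *blknump, int create)
{
	int err = nilfs_bmap_lookup(bmap, blkoff, blknump);

	if (err != -ENOENT || !create)
		return err;		/* found, or a non-create miss */

	err = nilfs_bmap_insert(bmap, blkoff, 0);
	if (unlikely(err == -EEXIST)) {
		/*
		 * Lookup said the block was absent; insert says it already
		 * exists. Assume an unexpected race and return -EAGAIN
		 * rather than success, so callers such as
		 * __block_write_begin_int() never request a read into an
		 * unmapped buffer and trip the BH_Mapped BUG_ON in
		 * submit_bh_wbc().
		 */
		return -EAGAIN;
	}
	return err;
}
```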
| Memory corruption during memory assignment to headless peripheral VM due to incorrect error code handling. |
| In the Linux kernel, the following vulnerability has been resolved:
dm rq: don't queue request to blk-mq during DM suspend
DM uses blk-mq's quiesce/unquiesce to stop/start the device mapper queue.
But blk-mq's unquiesce may be triggered by outside events, such as an
elevator switch or an nr_requests update, so a request may arrive during
suspend; in that case, simply ask blk-mq to requeue it.
Fixes a kernel panic seen when updating nr_requests while running a
dm-mpath suspend/resume stress test. |
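The fix described above reduces to an early bail-out in dm's blk-mq dispatch hook. The sketch below is a hedged reconstruction, not the verbatim patch; DMF_BLOCK_IO_FOR_SUSPEND and the tio/md layout follow the dm-rq code of that era.

```c
/*
 * Hedged sketch of the fix described above; not the verbatim patch.
 * A request that slips into dm's ->queue_rq while the device is
 * suspended (because unquiesce can come from outside events such as an
 * elevator switch or an nr_requests update) is bounced back to blk-mq
 * with BLK_STS_RESOURCE so blk-mq requeues it, instead of dm
 * dispatching it into a suspended device.
 */
static blk_status_t dm_mq_queue_rq_sketch(struct blk_mq_hw_ctx *hctx,
					  const struct blk_mq_queue_data *bd)
{
	struct dm_rq_target_io *tio = blk_mq_rq_to_pdu(bd->rq);
	struct mapped_device *md = tio->md;

	/* Request arrived during DM suspend: ask blk-mq to requeue it. */
	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)))
		return BLK_STS_RESOURCE;

	/* ... normal clone-and-dispatch path continues here ... */
	return BLK_STS_OK;
}
```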
| The recv_and_process_client_pkt function in networking/ntpd.c in busybox allows remote attackers to cause a denial of service (CPU and bandwidth consumption) via a forged NTP packet, which triggers a communication loop. |
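The loop works because ntpd's reply to a request is itself a valid-looking NTP packet, so a forged packet with a spoofed source address can make two responders answer each other forever. The usual guard, sketched below, is to answer only packets whose mode field marks them as requests; the names loosely follow busybox's networking/ntpd.c but this is not its verbatim code.

```c
/*
 * Hedged sketch of the guard that breaks the reflection loop; names
 * loosely follow busybox's networking/ntpd.c but this is not the
 * verbatim code. The low 3 bits of the first NTP byte are the mode.
 */
#define MODE_MASK    0x07
#define MODE_SYM_ACT 1   /* symmetric active association  */
#define MODE_CLIENT  3   /* request from a client         */
#define MODE_SERVER  4   /* a reply; must NOT be answered */

static int should_answer(unsigned char first_byte)
{
    unsigned mode = first_byte & MODE_MASK;

    /*
     * Without this check, a server reply (mode 4) is treated like a
     * request, so two ntpd instances answer each other's replies in an
     * endless loop, burning CPU and bandwidth.
     */
    return mode == MODE_CLIENT || mode == MODE_SYM_ACT;
}
```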
| Dell Alienware Command Center 6.x (AWCC), versions prior to 6.10.15.0, contain a Detection of Error Condition Without Action vulnerability. A low privileged attacker with local access could potentially exploit this vulnerability, leading to Arbitrary Code Execution. |
| Improper return value within AMD uProf can allow a local attacker to bypass KASLR, potentially resulting in loss of confidentiality or availability. |
| The Inter-process Communication (IPC) implementation in Google Chrome before 18.0.1025.168, as used in Mozilla Firefox before 38.0 and other products, does not properly validate messages, which has unspecified impact and attack vectors. |
| Systemic Internal Server Errors - HTTP 500 Response. This issue affects BLU-IC2: through 1.19.5; BLU-IC4: through 1.19.5. |
| Lack of Graceful Error Handling - HTTP 5xx Error. This issue affects BLU-IC2: through 1.19.5; BLU-IC4: through 1.19.5. |
| Unchecked Error Condition vulnerability in Apache Tomcat. If Tomcat is configured to use a custom Jakarta Authentication (formerly JASPIC) ServerAuthContext component which may throw an exception during the authentication process without explicitly setting an HTTP status to indicate failure, the authentication may not fail, allowing the user to bypass the authentication process. There are no known Jakarta Authentication components that behave in this way.
This issue affects Apache Tomcat: from 11.0.0-M1 through 11.0.0-M26, from 10.1.0-M1 through 10.1.30, from 9.0.0-M1 through 9.0.95.
The following versions were EOL at the time the CVE was created but are
known to be affected: 8.5.0 through 8.5.100. Other EOL versions may also be affected.
Users are recommended to upgrade to version 11.0.0, 10.1.31 or 9.0.96, which fix the issue. |
| A vulnerability was found in OpenSSH when the VerifyHostKeyDNS option is enabled. A machine-in-the-middle attack can be performed by a malicious machine impersonating a legitimate server. This issue occurs because OpenSSH mishandles error codes in specific conditions when verifying the host key. For an attack to succeed, the attacker must first exhaust the client's memory, which makes the attack complexity high. |
| A flaw was found in rsync. It could allow a server to enumerate the contents of an arbitrary file on the client's machine. This issue occurs when files are being copied from a client to a server: during this process, the rsync server sends checksums of its local data to the client to compare against, in order to determine what data needs to be sent to the server. By sending specially constructed checksum values for arbitrary files, an attacker may be able to reconstruct the data of those files byte by byte based on the responses from the client. |
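A self-contained, heavily simplified illustration of the byte-by-byte oracle described above. It uses a weak rolling checksum in the style of the rsync technical report purely for illustration; the real protocol exchange, block layout, and the strong-checksum layer are omitted, and the function names are invented.

```c
/*
 * Simplified illustration of the checksum-oracle idea described above;
 * not rsync's actual protocol code. The "client" owns a secret buffer;
 * the attacker may only ask "does this weak checksum match your
 * block?" and learns one byte per confirmed guess.
 */
#include <stdint.h>
#include <stdio.h>

/* Weak rolling checksum in the style of the rsync technical report. */
static uint32_t weak_sum(const uint8_t *buf, size_t len)
{
    uint32_t a = 0, b = 0;
    for (size_t i = 0; i < len; i++) {
        a += buf[i];
        b += (uint32_t)(len - i) * buf[i];
    }
    return (a & 0xffff) | (b << 16);
}

/* The oracle: the client reports whether the offered checksum matches. */
static int client_block_matches(const uint8_t *secret, size_t len,
                                uint32_t offered)
{
    return weak_sum(secret, len) == offered;
}

int main(void)
{
    const uint8_t secret[] = "hunter2";     /* client-side file content */
    uint8_t guess[sizeof(secret)] = {0};

    /* Recover the secret one byte at a time from match/no-match answers:
     * extend the already-confirmed prefix with each candidate byte and
     * offer the resulting checksum until the client reports a match. */
    for (size_t pos = 0; pos < sizeof(secret) - 1; pos++) {
        for (int c = 0; c < 256; c++) {
            guess[pos] = (uint8_t)c;
            if (client_block_matches(secret, pos + 1,
                                     weak_sum(guess, pos + 1)))
                break;                      /* byte confirmed */
        }
    }
    printf("recovered: %s\n", guess);
    return 0;
}
```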
| GE Multilink ML800, ML1200, ML1600, and ML2400 switches with firmware 4.2.1 and earlier and Multilink ML810, ML3000, and ML3100 switches with firmware 5.2.0 and earlier allow remote attackers to cause a denial of service (resource consumption or reboot) via crafted packets. |
| The DNP3 feature on Rockwell Automation Allen-Bradley MicroLogix 1400 1766-Lxxxxx series A controllers with FRN 7 and earlier and 1400 1766-Lxxxxx series B controllers with FRN before 15.001 allows remote attackers to cause a denial of service (process disruption) via malformed packets over (1) an Ethernet network or (2) a serial line. |
| [This CNA information record relates to multiple CVEs; the
text explains which aspects/vulnerabilities correspond to which CVE.]
There are multiple issues related to the handling and accessing of guest
memory pages in the viridian code:
1. A NULL pointer dereference in the updating of the reference TSC area.
This is CVE-2025-27466.
2. A NULL pointer dereference by assuming the SIM page is mapped when
a synthetic timer message has to be delivered. This is
CVE-2025-58142.
3. A race in the mapping of the reference TSC page, where a guest can
get Xen to free a page while still present in the guest physical to
machine (p2m) page tables. This is CVE-2025-58143. |
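A generic, hypothetical illustration of the pattern behind issues 1 and 2 above; this is not Xen's actual viridian code, and the struct and function names are invented. The guest controls whether a shared page (reference TSC area, SIM page) is mapped, so the hypervisor must re-check the mapping on every access instead of assuming it is still there.

```c
/*
 * Hypothetical illustration of the unchecked-mapping bug pattern; not
 * Xen's actual viridian code. All names are invented for the sketch.
 */
#include <stddef.h>
#include <stdint.h>

struct viridian_page {
    uint8_t *mapping;   /* NULL when the guest has not (or no longer) mapped it */
};

/* BUGGY: assumes the page is mapped when a timer message is written. */
static void deliver_message_buggy(struct viridian_page *pg,
                                  const uint8_t *msg, size_t len)
{
    for (size_t i = 0; i < len; i++)
        pg->mapping[i] = msg[i];        /* NULL dereference if unmapped */
}

/* FIXED: treat an unmapped page as "nothing to deliver". */
static int deliver_message_fixed(struct viridian_page *pg,
                                 const uint8_t *msg, size_t len)
{
    if (pg->mapping == NULL)
        return -1;                      /* guest unmapped the page */
    for (size_t i = 0; i < len; i++)
        pg->mapping[i] = msg[i];
    return 0;
}
```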
| Envoy is a cloud-native, open source edge and service proxy. The HTTP/2 protocol stack in Envoy versions prior to 1.29.3, 1.28.2, 1.27.4, and 1.26.8 is vulnerable to CPU exhaustion due to a flood of CONTINUATION frames. Envoy's HTTP/2 codec allows the client to send an unlimited number of CONTINUATION frames even after exceeding Envoy's header map limits. An attacker can therefore send a sequence of CONTINUATION frames without the END_HEADERS bit set, driving CPU utilization of approximately one core per 300 Mbit/s of traffic and culminating in denial of service through CPU exhaustion. Users should upgrade to version 1.29.3, 1.28.2, 1.27.4, or 1.26.8 to mitigate the effects of the CONTINUATION flood. As a workaround, disable the HTTP/2 protocol for downstream connections. |
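The attack hinges on a single flag in the 9-byte HTTP/2 frame header (RFC 7540 section 4.1). The standalone sketch below, which is neither Envoy nor attacker code and whose macro names are invented, encodes a CONTINUATION frame (type 0x9) and shows that leaving the END_HEADERS flag (0x4) clear is what obliges the peer to keep reading further CONTINUATION frames.

```c
/*
 * Standalone illustration of the HTTP/2 framing detail the flood
 * relies on (RFC 7540 section 4.1); not Envoy or attacker code.
 */
#include <stdint.h>

#define H2_FRAME_CONTINUATION 0x9
#define H2_FLAG_END_HEADERS   0x4

/* Encode the 9-byte frame header: 24-bit length, type, flags, 31-bit stream id. */
static void h2_frame_header(uint8_t out[9], uint32_t len, uint8_t type,
                            uint8_t flags, uint32_t stream_id)
{
    out[0] = (len >> 16) & 0xff;
    out[1] = (len >> 8) & 0xff;
    out[2] = len & 0xff;
    out[3] = type;
    out[4] = flags;
    out[5] = (stream_id >> 24) & 0x7f;  /* high bit is reserved */
    out[6] = (stream_id >> 16) & 0xff;
    out[7] = (stream_id >> 8) & 0xff;
    out[8] = stream_id & 0xff;
}

int main(void)
{
    uint8_t hdr[9];

    /*
     * flags = 0: END_HEADERS is NOT set, so the receiver must expect
     * yet another CONTINUATION frame on this stream. A peer that never
     * sets END_HEADERS can keep the receiver decoding indefinitely
     * unless the receiver enforces its own limit.
     */
    h2_frame_header(hdr, 1024, H2_FRAME_CONTINUATION, 0, 1);
    return 0;
}
```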
| nghttp2 is an implementation of the Hypertext Transfer Protocol version 2 in C. The nghttp2 library prior to version 1.61.0 keeps reading an unbounded number of HTTP/2 CONTINUATION frames even after a stream is reset, in order to keep the HPACK context in sync; decoding that HPACK stream causes excessive CPU usage. nghttp2 v1.61.0 mitigates this vulnerability by limiting the number of CONTINUATION frames it accepts per stream. There is no workaround for this vulnerability. |
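The shape of the mitigation, as a hedged sketch rather than nghttp2's actual code: keep a per-stream count of CONTINUATION frames and treat the connection as abusive once the count exceeds a fixed budget. The budget value and names below are assumptions for illustration.

```c
/*
 * Hedged sketch of the mitigation described above; not nghttp2's
 * actual code. A per-stream counter caps how many CONTINUATION frames
 * are accepted before the connection is dropped.
 */
#include <stdbool.h>
#include <stdint.h>

/* Assumed budget for illustration; the real limit is nghttp2's choice. */
#define MAX_CONTINUATIONS_PER_STREAM 8

struct stream_state {
    uint32_t continuation_count;
};

/* Returns false when the peer has exceeded its CONTINUATION budget, in
 * which case the caller should drop the connection instead of burning
 * CPU keeping HPACK state in sync. */
static bool on_continuation_frame(struct stream_state *s)
{
    return ++s->continuation_count <= MAX_CONTINUATIONS_PER_STREAM;
}
```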