| In the Linux kernel, the following vulnerability has been resolved:
btrfs: avoid potential out-of-bounds in btrfs_encode_fh()
The function btrfs_encode_fh() does not properly account for the three
cases it handles.
Before writing to the file handle (fh), the function only returns to the
user the size BTRFS_FID_SIZE_NON_CONNECTABLE (5 dwords, 20 bytes) or
BTRFS_FID_SIZE_CONNECTABLE (8 dwords, 32 bytes).
However, when a parent exists and the root ID of the parent and the
inode are different, the function writes BTRFS_FID_SIZE_CONNECTABLE_ROOT
(10 dwords, 40 bytes).
If *max_len is not large enough, this write goes out of bounds, because
BTRFS_FID_SIZE_CONNECTABLE_ROOT is greater than the
BTRFS_FID_SIZE_CONNECTABLE size that was originally returned.
This results in an 8-byte out-of-bounds write at
fid->parent_root_objectid = parent_root_id.
A previous attempt to fix this issue was made but was lost.
https://lore.kernel.org/all/4CADAEEC020000780001B32C@vpn.id2.novell.com/
Although this issue does not seem to be easily triggerable, it is a
potential memory corruption bug that should be fixed. This patch
resolves the issue by ensuring the function returns the appropriate size
for all three cases and validates that *max_len is large enough before
writing any data. |
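A minimal C sketch of the size-validation pattern described above (illustrative only, not the upstream btrfs patch; the `demo_`-prefixed names are hypothetical stand-ins): the encoder works out the size of the variant it is about to emit and, if the caller-supplied *max_len (in 4-byte words) is too small, reports the required size and returns FILEID_INVALID instead of writing past the buffer.

```c
#include <stdint.h>
#include <string.h>

#define FID_SIZE_NON_CONNECTABLE   5   /* 20 bytes */
#define FID_SIZE_CONNECTABLE       8   /* 32 bytes */
#define FID_SIZE_CONNECTABLE_ROOT 10   /* 40 bytes */
#define FILEID_INVALID           255

/* Hypothetical encoder: *max_len is in 4-byte words, as for encode_fh. */
static int demo_encode_fh(uint32_t *fh, int *max_len,
			  int have_parent, int roots_differ)
{
	int need = have_parent
		? (roots_differ ? FID_SIZE_CONNECTABLE_ROOT
				: FID_SIZE_CONNECTABLE)
		: FID_SIZE_NON_CONNECTABLE;

	/* Validate before writing: the missing check for the
	 * CONNECTABLE_ROOT case is what allows the 8-byte overflow. */
	if (*max_len < need) {
		*max_len = need;
		return FILEID_INVALID;
	}
	*max_len = need;

	memset(fh, 0, (size_t)need * sizeof(uint32_t));
	/* ... fill objectid/root/gen, and parent fields (including
	 * parent_root_objectid) only in the variants that contain them ... */
	return have_parent ? (roots_differ ? 3 : 2) : 1;
}

int main(void)
{
	uint32_t fh[FID_SIZE_CONNECTABLE];          /* caller offers 8 dwords */
	int max_len = FID_SIZE_CONNECTABLE;

	/* Parent in a different root needs 10 dwords: must be refused. */
	return demo_encode_fh(fh, &max_len, 1, 1) == FILEID_INVALID ? 0 : 1;
}
```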
| In the Linux kernel, the following vulnerability has been resolved:
netfilter: nft_objref: validate objref and objrefmap expressions
Referencing a synproxy stateful object from OUTPUT hook causes kernel
crash due to infinite recursive calls:
BUG: TASK stack guard page was hit at 000000008bda5b8c (stack is 000000003ab1c4a5..00000000494d8b12)
[...]
Call Trace:
__find_rr_leaf+0x99/0x230
fib6_table_lookup+0x13b/0x2d0
ip6_pol_route+0xa4/0x400
fib6_rule_lookup+0x156/0x240
ip6_route_output_flags+0xc6/0x150
__nf_ip6_route+0x23/0x50
synproxy_send_tcp_ipv6+0x106/0x200
synproxy_send_client_synack_ipv6+0x1aa/0x1f0
nft_synproxy_do_eval+0x263/0x310
nft_do_chain+0x5a8/0x5f0 [nf_tables]
nft_do_chain_inet+0x98/0x110
nf_hook_slow+0x43/0xc0
__ip6_local_out+0xf0/0x170
ip6_local_out+0x17/0x70
synproxy_send_tcp_ipv6+0x1a2/0x200
synproxy_send_client_synack_ipv6+0x1aa/0x1f0
[...]
Implement objref and objrefmap expression validate functions.
Currently, only the NFT_OBJECT_SYNPROXY object type requires validation.
This will also handle a jump to a chain using a synproxy object from the
OUTPUT hook.
Now when trying to reference a synproxy object in the OUTPUT hook, nft
will produce the following error:
synproxy_crash.nft: Error: Could not process rule: Operation not supported
synproxy name mysynproxy
^^^^^^^^^^^^^^^^^^^^^^^^ |
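A self-contained sketch of the validation idea (hypothetical `demo_` names, not the nf_tables patch): a validate callback for the objref/objrefmap expressions rejects references to a synproxy object from hooks where evaluating synproxy would re-enter the output path, which is what surfaces as the "Operation not supported" error quoted above.

```c
#include <errno.h>

enum demo_obj_type { DEMO_OBJ_COUNTER, DEMO_OBJ_SYNPROXY };
enum demo_hook { DEMO_HOOK_PREROUTING, DEMO_HOOK_INPUT, DEMO_HOOK_FORWARD,
		 DEMO_HOOK_OUTPUT, DEMO_HOOK_POSTROUTING };

struct demo_ctx    { enum demo_hook hook; };
struct demo_objref { enum demo_obj_type obj_type; };

static int demo_objref_validate(const struct demo_ctx *ctx,
				const struct demo_objref *expr)
{
	if (expr->obj_type != DEMO_OBJ_SYNPROXY)
		return 0;

	/* Synproxy emits packets itself; referencing it from OUTPUT makes
	 * that transmit re-enter the same hook and recurse until the stack
	 * guard page is hit, so only input/forward-style hooks are allowed
	 * in this sketch. */
	switch (ctx->hook) {
	case DEMO_HOOK_INPUT:
	case DEMO_HOOK_FORWARD:
		return 0;
	default:
		return -EOPNOTSUPP;     /* -> "Operation not supported" */
	}
}

int main(void)
{
	struct demo_ctx output = { DEMO_HOOK_OUTPUT };
	struct demo_objref ref = { DEMO_OBJ_SYNPROXY };

	return demo_objref_validate(&output, &ref) == -EOPNOTSUPP ? 0 : 1;
}
```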
| In the Linux kernel, the following vulnerability has been resolved:
media: v4l2-subdev: Fix alloc failure check in v4l2_subdev_call_state_try()
v4l2_subdev_call_state_try() macro allocates a subdev state with
__v4l2_subdev_state_alloc(), but does not check the returned value. If
__v4l2_subdev_state_alloc() fails, it returns an ERR_PTR, and that would
cause v4l2_subdev_call_state_try() to crash.
Add proper error handling to v4l2_subdev_call_state_try(). |
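A userspace illustration of the ERR_PTR/IS_ERR check this kind of fix relies on (the `demo_` helpers below are stand-ins, not the actual V4L2 macro): an allocator that encodes an errno value in the returned pointer must have its result tested before the pointer is used or freed.

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_ERRNO 4095
static inline void *ERR_PTR(long err)      { return (void *)err; }
static inline long  PTR_ERR(const void *p) { return (long)p; }
static inline int   IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

struct demo_state { int dummy; };

static struct demo_state *demo_state_alloc(int fail)
{
	if (fail)
		return ERR_PTR(-ENOMEM);        /* allocation failure path */
	return calloc(1, sizeof(struct demo_state));
}

static int demo_call_state_try(int fail)
{
	struct demo_state *state = demo_state_alloc(fail);

	/* The missing check: without it an ERR_PTR would be dereferenced
	 * (or passed to free()) and the caller would crash. */
	if (IS_ERR(state))
		return (int)PTR_ERR(state);

	/* ... invoke the subdev operation with the state ... */
	free(state);
	return 0;
}

int main(void)
{
	printf("ok path: %d, failure path: %d\n",
	       demo_call_state_try(0), demo_call_state_try(1));
	return 0;
}
```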
| In the Linux kernel, the following vulnerability has been resolved:
media: iris: fix module removal if firmware download failed
Fix remove if firmware failed to load:
qcom-iris aa00000.video-codec: Direct firmware load for qcom/vpu/vpu33_p4.mbn failed with error -2
qcom-iris aa00000.video-codec: firmware download failed
qcom-iris aa00000.video-codec: core init failed
then:
$ echo aa00000.video-codec > /sys/bus/platform/drivers/qcom-iris/unbind
Triggers:
genpd genpd:1:aa00000.video-codec: Runtime PM usage count underflow!
------------[ cut here ]------------
video_cc_mvs0_clk already disabled
WARNING: drivers/clk/clk.c:1206 at clk_core_disable+0xa4/0xac, CPU#1: sh/542
<snip>
pc : clk_core_disable+0xa4/0xac
lr : clk_core_disable+0xa4/0xac
<snip>
Call trace:
clk_core_disable+0xa4/0xac (P)
clk_disable+0x30/0x4c
iris_disable_unprepare_clock+0x20/0x48 [qcom_iris]
iris_vpu_power_off_hw+0x48/0x58 [qcom_iris]
iris_vpu33_power_off_hardware+0x44/0x230 [qcom_iris]
iris_vpu_power_off+0x34/0x84 [qcom_iris]
iris_core_deinit+0x44/0xc8 [qcom_iris]
iris_remove+0x20/0x48 [qcom_iris]
platform_remove+0x20/0x30
device_remove+0x4c/0x80
<snip>
---[ end trace 0000000000000000 ]---
------------[ cut here ]------------
video_cc_mvs0_clk already unprepared
WARNING: drivers/clk/clk.c:1065 at clk_core_unprepare+0xf0/0x110, CPU#2: sh/542
<snip>
pc : clk_core_unprepare+0xf0/0x110
lr : clk_core_unprepare+0xf0/0x110
<snip>
Call trace:
clk_core_unprepare+0xf0/0x110 (P)
clk_unprepare+0x2c/0x44
iris_disable_unprepare_clock+0x28/0x48 [qcom_iris]
iris_vpu_power_off_hw+0x48/0x58 [qcom_iris]
iris_vpu33_power_off_hardware+0x44/0x230 [qcom_iris]
iris_vpu_power_off+0x34/0x84 [qcom_iris]
iris_core_deinit+0x44/0xc8 [qcom_iris]
iris_remove+0x20/0x48 [qcom_iris]
platform_remove+0x20/0x30
device_remove+0x4c/0x80
<snip>
---[ end trace 0000000000000000 ]---
genpd genpd:0:aa00000.video-codec: Runtime PM usage count underflow!
------------[ cut here ]------------
gcc_video_axi0_clk already disabled
WARNING: drivers/clk/clk.c:1206 at clk_core_disable+0xa4/0xac, CPU#4: sh/542
<snip>
pc : clk_core_disable+0xa4/0xac
lr : clk_core_disable+0xa4/0xac
<snip>
Call trace:
clk_core_disable+0xa4/0xac (P)
clk_disable+0x30/0x4c
iris_disable_unprepare_clock+0x20/0x48 [qcom_iris]
iris_vpu33_power_off_controller+0x17c/0x428 [qcom_iris]
iris_vpu_power_off+0x48/0x84 [qcom_iris]
iris_core_deinit+0x44/0xc8 [qcom_iris]
iris_remove+0x20/0x48 [qcom_iris]
platform_remove+0x20/0x30
device_remove+0x4c/0x80
<snip>
------------[ cut here ]------------
gcc_video_axi0_clk already unprepared
WARNING: drivers/clk/clk.c:1065 at clk_core_unprepare+0xf0/0x110, CPU#4: sh/542
<snip>
pc : clk_core_unprepare+0xf0/0x110
lr : clk_core_unprepare+0xf0/0x110
<snip>
Call trace:
clk_core_unprepare+0xf0/0x110 (P)
clk_unprepare+0x2c/0x44
iris_disable_unprepare_clock+0x28/0x48 [qcom_iris]
iris_vpu33_power_off_controller+0x17c/0x428 [qcom_iris]
iris_vpu_power_off+0x48/0x84 [qcom_iris]
iris_core_deinit+0x44/0xc8 [qcom_iris]
iris_remove+0x20/0x48 [qcom_iris]
platform_remove+0x20/0x30
device_remove+0x4c/0x80
<snip>
---[ end trace 0000000000000000 ]---
Skip deinit if initialization never succeeded. |
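A small illustrative sketch of the "skip deinit" guard (hypothetical `demo_` names, not the qcom-iris driver code): record whether core init actually completed, and have the remove path unwind power and clocks only when it did, so nothing is disabled or unprepared twice.

```c
#include <stdbool.h>
#include <stdio.h>

struct demo_core {
	bool init_done;   /* set only after firmware load and core init succeed */
};

static int demo_core_init(struct demo_core *core, bool fw_ok)
{
	if (!fw_ok) {
		fprintf(stderr, "firmware download failed\n");
		return -2;                     /* init failed, nothing powered up */
	}
	/* ... power on hardware, prepare/enable clocks ... */
	core->init_done = true;
	return 0;
}

static void demo_core_deinit(struct demo_core *core)
{
	if (!core->init_done)                  /* never initialized: skip */
		return;
	/* ... power off hardware, disable/unprepare clocks ... */
	core->init_done = false;
}

int main(void)
{
	struct demo_core core = { 0 };

	demo_core_init(&core, false);          /* simulate firmware load failure */
	demo_core_deinit(&core);               /* unbind: no clock/PM underflow */
	return 0;
}
```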
| Cross-site scripting (XSS) vulnerability in the generate report functionality in Rarlab WinRAR 7.11 allows attackers to disclose user information such as the computer username, the generated report directory, and the IP address. The generate report command includes archived file names in the HTML report without validation, which allows potentially malicious HTML tags to be injected into the report. User interaction is required: the user must use the "generate report" functionality and open the report. |
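A generic mitigation sketch in C (not WinRAR code): HTML-escape untrusted archived file names before writing them into a generated HTML report, so an injected tag is rendered as text instead of being interpreted by the browser.

```c
#include <stdio.h>

static void html_escape(FILE *out, const char *s)
{
	for (; *s; s++) {
		switch (*s) {
		case '&':  fputs("&amp;", out);  break;
		case '<':  fputs("&lt;", out);   break;
		case '>':  fputs("&gt;", out);   break;
		case '"':  fputs("&quot;", out); break;
		case '\'': fputs("&#39;", out);  break;
		default:   fputc(*s, out);       break;
		}
	}
}

int main(void)
{
	/* A hostile archived file name that would otherwise become markup. */
	const char *archived_name = "<img src=x onerror=alert(1)>.txt";

	fputs("<td>", stdout);
	html_escape(stdout, archived_name);    /* emitted as inert text */
	fputs("</td>\n", stdout);
	return 0;
}
```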
| A SQL injection vulnerability exists in the login functionality of WellSky Harmony version 4.1.0.2.83 within the 'xmHarmony.asp' endpoint. User-supplied input to the 'TXTUSERID' parameter is not properly sanitized before being incorporated into a SQL query. Successful exploitation may lead to authentication bypass, data leakage, or full compromise of backend database contents. |
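A generic illustration of the corresponding fix pattern, shown with SQLite in C purely for demonstration (the affected endpoint is classic ASP, and the table/column names below are made up): bind the user-supplied login id as a parameter instead of concatenating it into the SQL string, so an injection attempt is treated as literal data.

```c
#include <stdio.h>
#include <sqlite3.h>

/* Returns 1 if a matching user row exists, 0 if not, -1 on error. */
static int user_exists(sqlite3 *db, const char *userid)
{
	sqlite3_stmt *stmt = NULL;
	int found = 0;

	/* The '?' placeholder keeps userid as data, never as SQL text. */
	if (sqlite3_prepare_v2(db, "SELECT 1 FROM users WHERE userid = ?",
			       -1, &stmt, NULL) != SQLITE_OK)
		return -1;

	sqlite3_bind_text(stmt, 1, userid, -1, SQLITE_TRANSIENT);
	if (sqlite3_step(stmt) == SQLITE_ROW)
		found = 1;
	sqlite3_finalize(stmt);
	return found;
}

int main(void)
{
	sqlite3 *db;

	if (sqlite3_open(":memory:", &db) != SQLITE_OK)
		return 1;
	sqlite3_exec(db, "CREATE TABLE users(userid TEXT)", NULL, NULL, NULL);

	/* A classic injection attempt matches nothing: it is just a string. */
	printf("found: %d\n", user_exists(db, "' OR '1'='1"));

	sqlite3_close(db);
	return 0;
}
```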
| CUPS is a standards-based, open-source printing system, and `libcupsfilters` contains the code of the filters of the former `cups-filters` package as library functions to be used for the data format conversion tasks needed in Printer Applications. In CUPS-Filters versions up to and including 1.28.17 and libcupsfilters versions 2.0.0 through 2.1.1, CUPS-Filters' `imagetoraster` filter has an out-of-bounds read/write vulnerability in the processing of TIFF image files. While the pixel buffer is allocated with the number of pixels times a pre-calculated bytes-per-pixel value, the function which processes these pixels is called with a size of the number of pixels times 3. When suitable inputs are passed, the bytes-per-pixel value can be set to 1 and bytes outside of the buffer bounds get processed. In order to trigger the bug, an attacker must issue a print job with a crafted TIFF file and pass appropriate print job options to control the bytes-per-pixel value of the output format. They must choose a printer configuration under which the `imagetoraster` filter or its C-function equivalent `cfFilterImageToRaster()` gets invoked. The vulnerability exists in both CUPS-Filters 1.x and the successor library libcupsfilters (CUPS-Filters 2.x). In CUPS-Filters 2.x, the vulnerable function is `_cfImageReadTIFF()` in libcupsfilters. When this function is invoked as part of `cfFilterImageToRaster()`, the caller passes a look-up table during whose processing the out-of-bounds memory access happens. In CUPS-Filters 1.x, the equivalent functions are all found in the cups-filters repository, which is not yet split into subprojects, and the vulnerable code is in `_cupsImageReadTIFF()`, which is called through `cupsImageOpen()` from the `imagetoraster` tool. A patch is available in commit b69dfacec7f176281782e2f7ac44f04bf9633cfa. |
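A conceptual C sketch of the size mismatch (not the libcupsfilters code): the pixel buffer is allocated from a bytes-per-pixel value, while the processing pass is sized with a hard-coded three components; when bytes-per-pixel is 1 the consumer runs past the allocation, and the fix is to derive both sizes from the same value.

```c
#include <stdlib.h>

/* Stand-in for the per-pixel look-up-table pass. */
static void process_pixels(unsigned char *buf, size_t nbytes)
{
	for (size_t i = 0; i < nbytes; i++)
		buf[i] ^= 0xff;
}

int main(void)
{
	size_t width = 64, height = 64;
	size_t bpp = 1;                        /* attacker-influenced value */
	size_t npixels = width * height;

	unsigned char *buf = calloc(npixels, bpp);
	if (!buf)
		return 1;

	/* Buggy pattern: the size is computed with a hard-coded 3, so with
	 * bpp == 1 the pass would read and write 2*npixels bytes past the
	 * end of the allocation:
	 *
	 *     process_pixels(buf, npixels * 3);
	 *
	 * Fixed pattern: the allocation and the pass use the same bpp. */
	process_pixels(buf, npixels * bpp);

	free(buf);
	return 0;
}
```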
| If kdcproxy receives a request for a realm which does not have server addresses defined in its configuration, by default, it will query SRV records in the DNS zone matching the requested realm name. This creates a server-side request forgery vulnerability, since an attacker could send a request for a realm matching a DNS zone where they created SRV records pointing to arbitrary ports and hostnames (which may resolve to loopback or internal IP addresses). This vulnerability can be exploited to probe internal network topology and firewall rules, perform port scanning, and exfiltrate data. Deployments where the "use_dns" setting is explicitly set to false are not affected. |
| If an attacker causes kdcproxy to connect to an attacker-controlled KDC server (e.g. through server-side request forgery), they can exploit the fact that kdcproxy does not enforce bounds on TCP response length to conduct a denial-of-service attack. While receiving the KDC's response, kdcproxy copies the entire buffered stream into a new buffer on each recv() call, even when the transfer is incomplete, causing excessive memory allocation and CPU usage. Additionally, kdcproxy accepts incoming response chunks as long as the received data length is not exactly equal to the length indicated in the response header, even when individual chunks or the total buffer exceed the maximum length of a Kerberos message. This allows an attacker to send unbounded data until the connection timeout is reached (approximately 12 seconds), exhausting server memory or CPU resources. Multiple concurrent requests can cause accept queue overflow, denying service to legitimate clients. |
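An illustrative C sketch of the mitigation idea (kdcproxy itself is Python; MAX_KRB_MSG and the helper names are assumptions): read the 4-byte length prefix of the KDC's TCP reply, reject lengths above a sane maximum before allocating anything, and then read exactly that many bytes into one pre-sized buffer rather than re-copying the whole stream on every recv().

```c
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define MAX_KRB_MSG (128 * 1024)   /* assumed cap on a Kerberos message */

/* Read exactly len bytes; returns len, or a negative errno-style value. */
static ssize_t read_full(int fd, void *buf, size_t len)
{
	size_t off = 0;

	while (off < len) {
		ssize_t n = read(fd, (char *)buf + off, len - off);
		if (n == 0)
			return -EPIPE;         /* peer closed mid-message */
		if (n < 0)
			return -errno;
		off += (size_t)n;
	}
	return (ssize_t)off;
}

/* Returns a malloc'd reply body of *out_len bytes, or NULL on error. */
unsigned char *read_kdc_reply(int fd, size_t *out_len)
{
	uint32_t be_len;
	size_t len;
	unsigned char *body;

	if (read_full(fd, &be_len, sizeof(be_len)) < 0)
		return NULL;

	len = ntohl(be_len);
	if (len == 0 || len > MAX_KRB_MSG)     /* enforce the bound up front */
		return NULL;

	body = malloc(len);
	if (!body)
		return NULL;
	if (read_full(fd, body, len) < 0) {
		free(body);
		return NULL;
	}
	*out_len = len;
	return body;
}
```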
| A stored cross-site scripting (XSS) vulnerability in the Business Line Management module of Xxl-api v1.3.0 allows attackers to execute arbitrary web scripts or HTML by injecting a crafted payload into the Name parameter. |
| The Datadog Agent collects events and metrics from hosts and sends them to Datadog. A vulnerability within the Datadog Linux Host Agent versions 7.65.0 through 7.70.2 exists due to insufficient permissions being set on the `/opt/datadog-agent/python-scripts/__pycache__` directory during installation. Code in this directory is only run by the Agent during Agent installs/upgrades. This could allow an attacker with local access to modify files in this directory, which would then be run when the Agent is upgraded, resulting in local privilege escalation. Exploitation requires local access to the host and a valid low-privilege account. Note that this vulnerability only impacts the Linux Host Agent; other variations of the Agent, including the container, Kubernetes, Windows host, and other agents, are not impacted. Version 7.71.0 contains a patch for the issue. |
| Cross-site scripting (XSS) vulnerability in CrushFTP 11.3.6_48. The web-based server has a file-sharing feature that reflects the shared file's name into an email body field without sanitization, leading to HTML injection. |
| A null pointer dereference vulnerability exists in airpig2011 IEC104 through commit be6d841 (2019-07-08). When multiple threads enqueue elements concurrently via IEC10X_PrioEnQueue, the function may dereference a null or freed queue pointer, resulting in a segmentation fault and potential denial of service. |
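A hedged sketch of the enqueue pattern (hypothetical names, not the airpig2011 IEC104 code): check for a missing queue up front and perform the list update under a mutex so concurrent enqueuers are serialized; a complete fix would also ensure the queue's lifetime outlives all producers.

```c
#include <errno.h>
#include <pthread.h>
#include <stdlib.h>

struct demo_node {
	struct demo_node *next;
	void *payload;
};

struct demo_prio_queue {
	pthread_mutex_t lock;      /* initialized when the queue is created */
	struct demo_node *head;
	struct demo_node *tail;
};

int demo_prio_enqueue(struct demo_prio_queue *q, void *payload)
{
	struct demo_node *n;

	if (!q)                            /* queue not created or torn down */
		return -EINVAL;

	n = calloc(1, sizeof(*n));
	if (!n)
		return -ENOMEM;
	n->payload = payload;

	pthread_mutex_lock(&q->lock);      /* serialize concurrent enqueuers */
	if (q->tail)
		q->tail->next = n;
	else
		q->head = n;
	q->tail = n;
	pthread_mutex_unlock(&q->lock);
	return 0;
}
```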
| Open Access Management (OpenAM) is an access management solution. In versions prior to 16.0.0, if the "claims_parameter_supported" parameter is activated, it is possible, due to the "oidc-claims-extension.groovy" script, to inject a value of one's choice into a claim contained in the id_token or in the user_info response. In a request to the authorize endpoint, a claims parameter containing a JSON document can be injected. This JSON document allows attackers to customize the claims returned in the id_token and user_info responses. This allows for a very wide range of vulnerabilities depending on how clients use claims. For example, if some clients rely on an email field to identify a user, an attacker can choose the email address they want, and therefore assume any identity they choose. Version 16.0.0 fixes the issue. |
| Tuleap is an Open Source Suite to improve management of software developments and collaboration. Tuleap Community Edition prior to version 16.13.99.1761813675 and Tuleap Enterprise Edition prior to versions 16.13-5 and 16.12-8 don't have cross-site request forgery protection in the management of SVN commit rules and immutable tags. An attacker could use this vulnerability to trick victims into changing the commit rules or immutable tags of an SVN repo. Tuleap Community Edition 16.13.99.1761813675, Tuleap Enterprise Edition 16.13-5, and Tuleap Enterprise Edition 16.12-8 contain a fix for the issue. |
| Wasmtime is a runtime for WebAssembly. Prior to versions 38.0.4, 37.0.3, 36.0.3, and 24.0.5, Wasmtime's Rust embedder API contains an unsound interaction where a WebAssembly shared linear memory could be viewed as a type which provides safe access from the host (Rust) to the contents of the linear memory. This is not sound for shared linear memories, which could be modified in parallel, and this could lead to a data race in the host. Patch releases have been issued for all supported versions of Wasmtime, notably: 24.0.5, 36.0.3, 37.0.3, and 38.0.4. These releases reject creation of shared memories via `Memory::new`, and shared memories are now excluded from core dumps. As a workaround, embeddings affected by this issue should use `SharedMemory::new` instead of `Memory::new` to create shared memories. Affected embeddings should also disable core dumps if they are unable to upgrade. Note that core dumps are disabled by default but the wasm threads proposal (and shared memory) is enabled by default. |
| DuckDB is a SQL database management system. DuckDB implemented block-based encryption of databases on the filesystem starting with DuckDB 1.4.0. There are a few issues related to this implementation. DuckDB can fall back to an insecure random number generator (pcg32) to generate cryptographic keys or IVs. When clearing keys from memory, the compiler may remove the memset() and leave sensitive data on the heap. By modifying the database header, an attacker could downgrade the encryption mode from GCM to CTR to bypass integrity checks. The return value of the call to OpenSSL `RAND_bytes()` may not be checked. An attacker could use public IVs to compromise the internal state of the RNG and determine the randomly generated key used to encrypt temporary files, gain access to cryptographic keys if they have access to process memory (e.g. through a memory leak), circumvent GCM integrity checks, and/or influence the OpenSSL random number generator, and DuckDB would not be able to detect a failure of the generator. Version 1.4.2 no longer falls back to the insecure random number generator when writing to or creating databases. Instead, DuckDB will now attempt to install and load the OpenSSL implementation in the `httpfs` extension. DuckDB now uses a secure MbedTLS primitive to clear memory as recommended and requires explicit specification of ciphers without integrity checks, such as CTR, on `ATTACH`. Additionally, DuckDB now checks the return code. |
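A small C sketch of two of the hygiene points above (not DuckDB code): check the return value of OpenSSL RAND_bytes() rather than assuming the generator succeeded, and wipe key material with a cleanse routine the compiler cannot optimize away (OPENSSL_cleanse here; an MbedTLS zeroize primitive serves the same purpose).

```c
#include <stdio.h>
#include <openssl/crypto.h>
#include <openssl/rand.h>

int main(void)
{
	unsigned char key[32];

	/* RAND_bytes() returns 1 only when the CSPRNG produced the bytes;
	 * any other value means the key must not be used. */
	if (RAND_bytes(key, sizeof(key)) != 1) {
		fprintf(stderr, "CSPRNG failure, refusing to derive a key\n");
		return 1;
	}

	/* ... derive/use the encryption key ... */

	/* A plain memset() here can be elided as a dead store; a dedicated
	 * cleanse routine guarantees the secret actually leaves memory. */
	OPENSSL_cleanse(key, sizeof(key));
	return 0;
}
```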
| Tuleap is an Open Source Suite to improve management of software developments and collaboration. Tuleap Community Edition prior to version 16.13.99.1762267347 and Tuleap Enterprise Edition prior to versions 17.0-1, 16.13-6, and 16.12-9 don't have cross-site request forgery protections in the file release system. An attacker could use this vulnerability to trick victims into modifying the file releases of a project. Tuleap Community Edition 16.13.99.1762267347, Tuleap Enterprise Edition 17.0-1, Tuleap Enterprise Edition 16.13-6, and Tuleap Enterprise Edition 16.12-9 fix the issue. |
| An improper default permission vulnerability was reported in Lenovo Dock Manager that, under certain conditions during installation, could allow an authenticated local user to redirect log files with elevated privileges. |
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: fix space cache corruption and potential double allocations
When testing space_cache v2 on a large set of machines, we encountered a
few symptoms:
1. "unable to add free space :-17" (EEXIST) errors.
2. Missing free space info items, sometimes caught with a "missing free
space info for X" error.
3. Double-accounted space: ranges that were allocated in the extent tree
and also marked as free in the free space tree, ranges that were
marked as allocated twice in the extent tree, or ranges that were
marked as free twice in the free space tree. If the latter made it
onto disk, the next reboot would hit the BUG_ON() in
add_new_free_space().
4. On some hosts with no on-disk corruption or error messages, the
in-memory space cache (dumped with drgn) disagreed with the free
space tree.
All of these symptoms have the same underlying cause: a race between
caching the free space for a block group and returning free space to the
in-memory space cache for pinned extents causes us to double-add a free
range to the space cache. This race exists when free space is cached
from the free space tree (space_cache=v2) or the extent tree
(nospace_cache, or space_cache=v1 if the cache needs to be regenerated).
struct btrfs_block_group::last_byte_to_unpin and struct
btrfs_block_group::progress are supposed to protect against this race,
but commit d0c2f4fa555e ("btrfs: make concurrent fsyncs wait less when
waiting for a transaction commit") subtly broke this by allowing
multiple transactions to be unpinning extents at the same time.
Specifically, the race is as follows:
1. An extent is deleted from an uncached block group in transaction A.
2. btrfs_commit_transaction() is called for transaction A.
3. btrfs_run_delayed_refs() -> __btrfs_free_extent() runs the delayed
ref for the deleted extent.
4. __btrfs_free_extent() -> do_free_extent_accounting() ->
add_to_free_space_tree() adds the deleted extent back to the free
space tree.
5. do_free_extent_accounting() -> btrfs_update_block_group() ->
btrfs_cache_block_group() queues up the block group to get cached.
block_group->progress is set to block_group->start.
6. btrfs_commit_transaction() for transaction A calls
switch_commit_roots(). It sets block_group->last_byte_to_unpin to
block_group->progress, which is block_group->start because the block
group hasn't been cached yet.
7. The caching thread gets to our block group. Since the commit roots
were already switched, load_free_space_tree() sees the deleted extent
as free and adds it to the space cache. It finishes caching and sets
block_group->progress to U64_MAX.
8. btrfs_commit_transaction() advances transaction A to
TRANS_STATE_SUPER_COMMITTED.
9. fsync calls btrfs_commit_transaction() for transaction B. Since
transaction A is already in TRANS_STATE_SUPER_COMMITTED and the
commit is for fsync, it advances.
10. btrfs_commit_transaction() for transaction B calls
switch_commit_roots(). This time, the block group has already been
cached, so it sets block_group->last_byte_to_unpin to U64_MAX.
11. btrfs_commit_transaction() for transaction A calls
btrfs_finish_extent_commit(), which calls unpin_extent_range() for
the deleted extent. It sees last_byte_to_unpin set to U64_MAX (by
transaction B!), so it adds the deleted extent to the space cache
again!
This explains all of our symptoms above:
* If the sequence of events is exactly as described above, when the free
space is re-added in step 11, it will fail with EEXIST.
* If another thread reallocates the deleted extent in between steps 7
and 11, then step 11 will silently re-add that space to the space
cache as free even though it is actually allocated. Then, if that
space is allocated *again*, the free space tree will be corrupted
(namely, the wrong item will be deleted).
* If we don't catch this free space tree corr
---truncated--- |