A vulnerability was found in D-Link DIR-825 1.08.01. This impacts the function get_ping6_app_stat of the file ping6_response.cgi of the component httpd. Manipulation of the argument ping6_ipaddr results in a buffer overflow. The attack can be initiated remotely. The exploit has been made public and could be used. This vulnerability only affects products that are no longer supported by the maintainer. |
A vulnerability has been found in itsourcecode Online Discussion Forum 1.0. This affects an unknown function of the file /admin. Manipulation of the argument Username leads to SQL injection. The attack can be performed remotely. The exploit has been disclosed to the public and may be used. |
A vulnerability was detected in Campcodes Grocery Sales and Inventory System 1.0. The affected element is an unknown function of the file /index.php. Manipulation of the argument page results in cross-site scripting. The attack can be executed remotely. The exploit is now public and may be used. |
A security vulnerability has been detected in Campcodes Grocery Sales and Inventory System 1.0. Impacted is an unknown function of the file /ajax.php?action=delete_sales. Manipulation of the argument ID leads to SQL injection. The attack can be carried out remotely. The exploit has been disclosed publicly and may be used. |
A weakness has been identified in Campcodes Grocery Sales and Inventory System 1.0. This issue affects some unknown processing of the file /ajax.php?action=save_receiving. Manipulation of the argument ID can lead to SQL injection. The attack may be launched remotely. The exploit has been made available to the public and could be exploited. |
A security flaw has been discovered in itsourcecode POS Point of Sale System 1.0. This vulnerability affects unknown code of the file /inventory/main/vendors/datatables/unit_testing/templates/complex_header_2.php. Manipulation of the argument scripts results in cross-site scripting. The attack may be initiated remotely. The exploit has been released to the public and may be exploited. |
In the Linux kernel, the following vulnerability has been resolved:
ipv6: sr: Fix MAC comparison to be constant-time
To prevent timing attacks, MACs need to be compared in constant time.
Use the appropriate helper function for this. |
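The kernel's helper for this is crypto_memneq(). As a minimal, self-contained illustration of the principle (the function below is ours, not the SRv6 code), the sketch accumulates the XOR of every byte instead of returning at the first mismatch, so the comparison time does not reveal how many leading bytes matched:

#include <stddef.h>
#include <stdint.h>

/* Illustrative constant-time comparison: every byte is examined regardless
 * of earlier mismatches, which is the property crypto_memneq() provides in
 * the kernel. */
static int ct_memneq(const void *a, const void *b, size_t len)
{
	const uint8_t *pa = a, *pb = b;
	uint8_t diff = 0;
	size_t i;

	for (i = 0; i < len; i++)
		diff |= pa[i] ^ pb[i];

	return diff != 0;	/* non-zero when the MACs differ */
}

An early-exit memcmp(), by contrast, returns sooner the earlier the first differing byte occurs, which is exactly what a timing attack measures.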
In the Linux kernel, the following vulnerability has been resolved:
comedi: pcl726: Prevent invalid irq number
The reproducer passed in an irq number (0x80008000) that was too large,
which triggered the out-of-bounds access.
Added an interrupt number check to prevent users from passing in an irq
number that was too large.
If `it->options[1]` is 31, then `1 << it->options[1]` is still invalid
because it shifts a 1-bit into the sign bit (which is UB in C).
Possible solutions include reducing the upper bound on the
`it->options[1]` value to 30 or lower, or using `1U << it->options[1]`.
The old code would just not attempt to request the IRQ if the
`options[1]` value were invalid. And it would still configure the
device without interrupts even if the call to `request_irq` returned an
error. So it would be better to combine this test with the test below. |
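A hedged, self-contained illustration of the two points above, namely rejecting out-of-range IRQ numbers before shifting and shifting an unsigned constant (the function name and bound are ours, not the driver's):

#include <stdio.h>

/* Build an IRQ bitmask safely: values above 30 are rejected up front, and
 * the shift uses an unsigned constant, so 1U << 31 would not be undefined
 * behaviour the way 1 << 31 is for a signed int. */
static unsigned int irq_to_mask(unsigned int irq)
{
	if (irq > 30)
		return 0;	/* out of range: run without interrupts */
	return 1U << irq;
}

int main(void)
{
	printf("%u\n", irq_to_mask(5));			/* 32 */
	printf("%u\n", irq_to_mask(0x80008000));	/* 0: rejected */
	return 0;
}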
In the Linux kernel, the following vulnerability has been resolved:
net: usb: asix_devices: Fix PHY address mask in MDIO bus initialization
Syzbot reported shift-out-of-bounds exception on MDIO bus initialization.
The PHY address should be masked to 5 bits (0-31). Without this
mask, invalid PHY addresses could be used, potentially causing issues
with MDIO bus operations.
Fix this by masking the PHY address with 0x1f (31 decimal) to ensure
it stays within the valid range. |
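A minimal sketch of the mask in isolation (the helper name is ours; only the 0x1f mask comes from the fix):

#include <stdio.h>

#define PHY_ADDR_MASK 0x1f	/* MDIO PHY addresses are 5 bits: 0-31 */

/* Clamp a possibly out-of-range PHY address into the valid MDIO range. */
static unsigned int phy_addr_masked(unsigned int raw)
{
	return raw & PHY_ADDR_MASK;
}

int main(void)
{
	printf("0x%02x\n", phy_addr_masked(0x90));	/* prints 0x10 */
	return 0;
}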
In the Linux kernel, the following vulnerability has been resolved:
NFS: Fix a race when updating an existing write
After nfs_lock_and_join_requests() tests for whether the request is
still attached to the mapping, nothing prevents a call to
nfs_inode_remove_request() from succeeding until we actually lock the
page group.
The reason is that whoever called nfs_inode_remove_request() doesn't
necessarily have a lock on the page group head.
So in order to avoid races, let's take the page group lock earlier in
nfs_lock_and_join_requests(), and hold it across the removal of the
request in nfs_inode_remove_request(). |
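A userspace analog of that pattern, assuming nothing about the NFS data structures (all names below are illustrative): the "still attached" condition is only trusted while the lock that the remover also takes is held, and the lock stays held across the work that must not race with removal.

#include <pthread.h>
#include <stdbool.h>

struct request {
	pthread_mutex_t group_lock;	/* stands in for the page group lock */
	bool attached;			/* stands in for "still on the mapping" */
};

/* Take the lock *before* testing the condition and keep it held across the
 * critical work, so a concurrent remover cannot win the race between the
 * check and the use. */
static bool lock_and_join(struct request *req)
{
	pthread_mutex_lock(&req->group_lock);
	if (!req->attached) {
		pthread_mutex_unlock(&req->group_lock);
		return false;		/* already removed, nothing to join */
	}
	/* ... join/merge work, excluded from the removal path ... */
	pthread_mutex_unlock(&req->group_lock);
	return true;
}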
In the Linux kernel, the following vulnerability has been resolved:
drm/amd/display: Avoid a NULL pointer dereference
[WHY]
Although unlikely, drm_atomic_get_new_connector_state() or
drm_atomic_get_old_connector_state() can return NULL.
[HOW]
Check the return values before dereferencing them.
(cherry picked from commit 1e5e8d672fec9f2ab352be121be971877bff2af9) |
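A hedged sketch of what such a guard typically looks like (the two lookup helpers are the real DRM functions named above; the surrounding context is illustrative, not the patched function):

struct drm_connector_state *new_state, *old_state;

new_state = drm_atomic_get_new_connector_state(state, connector);
old_state = drm_atomic_get_old_connector_state(state, connector);
if (!new_state || !old_state)
	return;		/* bail out instead of dereferencing a NULL pointer */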
In the Linux kernel, the following vulnerability has been resolved:
comedi: Make insn_rw_emulate_bits() do insn->n samples
The `insn_rw_emulate_bits()` function is used as a default handler for
`INSN_READ` instructions for subdevices that have a handler for
`INSN_BITS` but not for `INSN_READ`. Similarly, it is used as a default
handler for `INSN_WRITE` instructions for subdevices that have a handler
for `INSN_BITS` but not for `INSN_WRITE`. It works by emulating the
`INSN_READ` or `INSN_WRITE` instruction handling with a constructed
`INSN_BITS` instruction. However, `INSN_READ` and `INSN_WRITE`
instructions are supposed to be able to read or write multiple samples,
indicated by the `insn->n` value, but `insn_rw_emulate_bits()` currently
only handles a single sample. For `INSN_READ`, the comedi core will
copy `insn->n` samples back to user-space. (That triggered KASAN
kernel-infoleak errors when `insn->n` was greater than 1, but that is
being fixed more generally elsewhere in the comedi core.)
Make `insn_rw_emulate_bits()` either handle `insn->n` samples, or return
an error, to conform to the general expectation for `INSN_READ` and
`INSN_WRITE` handlers. |
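A hedged sketch of the per-sample loop described above (only insn->n and data[] are from the comedi instruction API; the emulation helper is a stand-in):

unsigned int i;

/* Handle every requested sample, not just the first one, and report how
 * many samples were processed, as comedi instruction handlers are expected
 * to do on success. */
for (i = 0; i < insn->n; i++) {
	int ret = emulate_one_sample(s, insn, &data[i]);	/* stand-in */

	if (ret < 0)
		return ret;
}
return insn->n;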
In the Linux kernel, the following vulnerability has been resolved:
net, hsr: reject HSR frame if skb can't hold tag
Receiving HSR frame with insufficient space to hold HSR tag in the skb
can result in a crash (kernel BUG):
[ 45.390915] skbuff: skb_under_panic: text:ffffffff86f32cac len:26 put:14 head:ffff888042418000 data:ffff888042417ff4 tail:0xe end:0x180 dev:bridge_slave_1
[ 45.392559] ------------[ cut here ]------------
[ 45.392912] kernel BUG at net/core/skbuff.c:211!
[ 45.393276] Oops: invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN NOPTI
[ 45.393809] CPU: 1 UID: 0 PID: 2496 Comm: reproducer Not tainted 6.15.0 #12 PREEMPT(undef)
[ 45.394433] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 45.395273] RIP: 0010:skb_panic+0x15b/0x1d0
<snip registers, remove unreliable trace>
[ 45.402911] Call Trace:
[ 45.403105] <IRQ>
[ 45.404470] skb_push+0xcd/0xf0
[ 45.404726] br_dev_queue_push_xmit+0x7c/0x6c0
[ 45.406513] br_forward_finish+0x128/0x260
[ 45.408483] __br_forward+0x42d/0x590
[ 45.409464] maybe_deliver+0x2eb/0x420
[ 45.409763] br_flood+0x174/0x4a0
[ 45.410030] br_handle_frame_finish+0xc7c/0x1bc0
[ 45.411618] br_handle_frame+0xac3/0x1230
[ 45.413674] __netif_receive_skb_core.constprop.0+0x808/0x3df0
[ 45.422966] __netif_receive_skb_one_core+0xb4/0x1f0
[ 45.424478] __netif_receive_skb+0x22/0x170
[ 45.424806] process_backlog+0x242/0x6d0
[ 45.425116] __napi_poll+0xbb/0x630
[ 45.425394] net_rx_action+0x4d1/0xcc0
[ 45.427613] handle_softirqs+0x1a4/0x580
[ 45.427926] do_softirq+0x74/0x90
[ 45.428196] </IRQ>
This issue was found by syzkaller.
The panic happens in br_dev_queue_push_xmit() once it receives a
corrupted skb with ETH header already pushed in linear data. When it
attempts the skb_push() call, there's not enough headroom and
skb_push() panics.
The corrupted skb is put on the queue by the HSR layer, which performs a
sequence of unintended transformations when it receives a specific
corrupted HSR frame (with an incomplete TAG).
Fix it by dropping and consuming frames that are not long enough to
contain both the Ethernet and HSR headers.
Alternative fix would be to check for enough headroom before skb_push()
in br_dev_queue_push_xmit().
In the reproducer, this is injected via AF_PACKET, but I don't easily
see why it couldn't be sent over the wire from an adjacent network.
Further Details:
In the reproducer, the following network interface chain is set up:
┌────────────────┐ ┌────────────────┐
│ veth0_to_hsr ├───┤ hsr_slave0 ┼───┐
└────────────────┘ └────────────────┘ │
│ ┌──────┐
├─┤ hsr0 ├───┐
│ └──────┘ │
┌────────────────┐ ┌────────────────┐ │ │┌────────┐
│ veth1_to_hsr ┼───┤ hsr_slave1 ├───┘ └┤ │
└────────────────┘ └────────────────┘ ┌┼ bridge │
││ │
│└────────┘
│
┌───────┐ │
│ ... ├──────┘
└───────┘
To trigger the events leading up to crash, reproducer sends a corrupted
HSR fr
---truncated--- |
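A hedged sketch of the length check described in the fix (the helper choice and placement are assumptions; ETH_HLEN and HSR_HLEN are the existing kernel constants for the 14-byte Ethernet header and the 6-byte HSR tag):

/* Before accepting the frame as HSR, make sure the skb really contains an
 * Ethernet header plus a full HSR tag; otherwise drop it here instead of
 * letting a short frame reach the bridge and panic in skb_push(). */
if (!pskb_may_pull(skb, ETH_HLEN + HSR_HLEN)) {
	kfree_skb(skb);
	return RX_HANDLER_CONSUMED;
}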
In the Linux kernel, the following vulnerability has been resolved:
ACPI: pfr_update: Fix the driver update version check
The security-version-number check should be used rather
than the runtime version check for driver updates.
Otherwise, the firmware update would fail when the update binary had
a lower runtime version number than the current one.
[ rjw: Changelog edits ] |
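A hedged sketch of the intended comparison (the field and variable names are illustrative, not the pfr_update driver's):

/* Gate the update on the security version number (SVN) rather than the
 * runtime version, so an image whose runtime version is lower than the
 * currently running one is still accepted as long as its SVN is not a
 * downgrade. */
if (new_image_svn < current_svn)
	return -EINVAL;		/* SVN downgrade: reject the update */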
In the Linux kernel, the following vulnerability has been resolved:
crypto: caam - Prevent crash on suspend with iMX8QM / iMX8ULP
Since the CAAM on these SoCs is managed by another ARM core, called the
SECO (Security Controller) on iMX8QM and the Secure Enclave on iMX8ULP,
which also reserves access to register page 0, suspend operations cannot
touch this page.
This is similar to when running OPTEE, where OPTEE will reserve page 0.
Track this situation using a new state variable, no_page0, which reflects
whether page 0 is reserved elsewhere, either by other management cores in
the SoC or by OPTEE.
Replace the optee_en check in suspend/resume with the new check.
optee_en cannot go away, as it is still needed elsewhere to gate
OPTEE-specific situations. |
Fixes the following splat at suspend:
Internal error: synchronous external abort: 0000000096000010 [#1] SMP
Hardware name: Freescale i.MX8QXP ACU6C (DT)
pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : readl+0x0/0x18
lr : rd_reg32+0x18/0x3c
sp : ffffffc08192ba20
x29: ffffffc08192ba20 x28: ffffff8025190000 x27: 0000000000000000
x26: ffffffc0808ae808 x25: ffffffc080922338 x24: ffffff8020e89090
x23: 0000000000000000 x22: ffffffc080922000 x21: ffffff8020e89010
x20: ffffffc080387ef8 x19: ffffff8020e89010 x18: 000000005d8000d5
x17: 0000000030f35963 x16: 000000008f785f3f x15: 000000003b8ef57c
x14: 00000000c418aef8 x13: 00000000f5fea526 x12: 0000000000000001
x11: 0000000000000002 x10: 0000000000000001 x9 : 0000000000000000
x8 : ffffff8025190870 x7 : ffffff8021726880 x6 : 0000000000000002
x5 : ffffff80217268f0 x4 : ffffff8021726880 x3 : ffffffc081200000
x2 : 0000000000000001 x1 : ffffff8020e89010 x0 : ffffffc081200004
Call trace:
readl+0x0/0x18
caam_ctrl_suspend+0x30/0xdc
dpm_run_callback.constprop.0+0x24/0x5c
device_suspend+0x170/0x2e8
dpm_suspend+0xa0/0x104
dpm_suspend_start+0x48/0x50
suspend_devices_and_enter+0x7c/0x45c
pm_suspend+0x148/0x160
state_store+0xb4/0xf8
kobj_attr_store+0x14/0x24
sysfs_kf_write+0x38/0x48
kernfs_fop_write_iter+0xb4/0x178
vfs_write+0x118/0x178
ksys_write+0x6c/0xd0
__arm64_sys_write+0x14/0x1c
invoke_syscall.constprop.0+0x64/0xb0
do_el0_svc+0x90/0xb0
el0_svc+0x18/0x44
el0t_64_sync_handler+0x88/0x124
el0t_64_sync+0x150/0x154
Code: 88dffc21 88dffc21 5ac00800 d65f03c0 (b9400000) |
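A hedged sketch of the suspend-path change (no_page0 is the state variable named in the fix; the surrounding code is an approximation of the caam_ctrl_suspend() flow, not a verbatim copy):

static int caam_ctrl_suspend(struct device *dev)
{
	const struct caam_drv_private *ctrlpriv = dev_get_drvdata(dev);

	/* Only touch register page 0 when nobody else owns it; this now
	 * covers both the OPTEE case and the SECO/Secure Enclave case. */
	if (!ctrlpriv->no_page0)	/* was: if (!ctrlpriv->optee_en) */
		caam_state_save(dev);

	return 0;
}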
In the Linux kernel, the following vulnerability has been resolved:
media: mt9m114: Fix deadlock in get_frame_interval/set_frame_interval
Getting / Setting the frame interval using the V4L2 subdev pad ops
get_frame_interval/set_frame_interval causes a deadlock, as the
subdev state is locked in [1] but also in the driver itself.
In [2] it's described that the caller is responsible to acquire and
release the lock in this case. Therefore, acquiring the lock in the
driver is wrong.
Remove the lock acquisitions/releases from mt9m114_ifp_get_frame_interval()
and mt9m114_ifp_set_frame_interval().
[1] drivers/media/v4l2-core/v4l2-subdev.c - line 1129
[2] Documentation/driver-api/media/v4l2-subdev.rst |
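A hedged sketch of the resulting pad op (identifiers are abbreviated and illustrative; the point is simply that the driver no longer takes the lock the core already holds):

static int mt9m114_ifp_get_frame_interval(struct v4l2_subdev *sd,
					   struct v4l2_subdev_state *state,
					   struct v4l2_subdev_frame_interval *fi)
{
	struct mt9m114 *sensor = ifp_to_mt9m114(sd);	/* illustrative helper */

	/* No mutex_lock()/mutex_unlock() here any more: the v4l2-subdev core
	 * ([1] above) already holds the subdev state lock for this call. */
	fi->interval = sensor->ifp.frame_interval;	/* illustrative field */

	return 0;
}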
In the Linux kernel, the following vulnerability has been resolved:
tls: fix handling of zero-length records on the rx_list
Each recvmsg() call must process either
- only contiguous DATA records (any number of them)
- one non-DATA record
If the next record has a different type than what has already been
processed, we break out of the main processing loop. If the record
has already been decrypted (which may be the case for TLS 1.3 where
we don't know type until decryption) we queue the pending record
to the rx_list. Next recvmsg() will pick it up from there.
Queuing the skb to rx_list after zero-copy decrypt is not possible,
since in that case we decrypted directly to the user space buffer,
and we don't have an skb to queue (darg.skb points to the ciphertext
skb for access to metadata like length).
Only data records are allowed zero-copy, and we break the processing
loop after each non-data record. So we should never zero-copy and
then find out that the record type has changed. The corner case
we missed is when the initial record comes from rx_list, and it's
zero length. |
In the Linux kernel, the following vulnerability has been resolved:
net/sched: Fix backlog accounting in qdisc_dequeue_internal
This issue applies for the following qdiscs: hhf, fq, fq_codel, and
fq_pie, and occurs in their change handlers when adjusting to the new
limit. The problem lies in the values passed to the
subsequent qdisc_tree_reduce_backlog call, given a tbf parent:
When the tbf parent runs out of tokens, skbs of these qdiscs will
be placed in gso_skb. Their peek handlers are qdisc_peek_dequeued,
which accounts for both qlen and backlog. However, in the case of
qdisc_dequeue_internal, ONLY qlen is accounted for when pulling
from gso_skb. This means that these qdiscs are missing a
qdisc_qstats_backlog_dec when dropping packets to satisfy the
new limit in their change handlers.
One can observe this issue with the following (with tc patched to
support a limit of 0):
export TARGET=fq
tc qdisc del dev lo root
tc qdisc add dev lo root handle 1: tbf rate 8bit burst 100b latency 1ms
tc qdisc replace dev lo handle 3: parent 1:1 $TARGET limit 1000
echo ''; echo 'add child'; tc -s -d qdisc show dev lo
ping -I lo -f -c2 -s32 -W0.001 127.0.0.1 2>&1 >/dev/null
echo ''; echo 'after ping'; tc -s -d qdisc show dev lo
tc qdisc change dev lo handle 3: parent 1:1 $TARGET limit 0
echo ''; echo 'after limit drop'; tc -s -d qdisc show dev lo
tc qdisc replace dev lo handle 2: parent 1:1 sfq
echo ''; echo 'post graft'; tc -s -d qdisc show dev lo
The second to last show command shows 0 packets but a positive
number (74) of backlog bytes. The problem becomes clearer in the
last show command, where qdisc_purge_queue triggers
qdisc_tree_reduce_backlog with the positive backlog and causes an
underflow in the tbf parent's backlog (4096 Mb instead of 0).
To fix this issue, the codepath for all clients of qdisc_dequeue_internal
has been simplified: codel, pie, hhf, fq, fq_pie, and fq_codel.
qdisc_dequeue_internal handles the backlog adjustments for all cases that
do not directly use the dequeue handler.
The old fq_codel_change limit adjustment loop accumulated the arguments to
the subsequent qdisc_tree_reduce_backlog call through the cstats field.
However, this is confusing and error prone as fq_codel_dequeue could also
potentially mutate this field (which qdisc_dequeue_internal calls in the
non gso_skb case), so we have unified the code here with other qdiscs. |
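For reference, a sketch of the accounting change in qdisc_dequeue_internal() itself, following the existing helpers in include/net/sch_generic.h (treat the exact body as an approximation):

static inline struct sk_buff *qdisc_dequeue_internal(struct Qdisc *sch, bool direct)
{
	struct sk_buff *skb;

	skb = __skb_dequeue(&sch->gso_skb);
	if (skb) {
		sch->q.qlen--;
		qdisc_qstats_backlog_dec(sch, skb);	/* the missing piece:
							 * keep backlog in sync
							 * with qlen */
		return skb;
	}
	if (direct)
		return __qdisc_dequeue_head(&sch->q);
	else
		return sch->dequeue(sch);
}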
In the Linux kernel, the following vulnerability has been resolved:
mm/vmscan: fix hwpoisoned large folio handling in shrink_folio_list
In shrink_folio_list(), the hwpoisoned folio may be a large folio, which
can't be handled by unmap_poisoned_folio(). For THP, try_to_unmap_one()
must be passed TTU_SPLIT_HUGE_PMD to split the huge PMD first and then
retry. Without TTU_SPLIT_HUGE_PMD, we will trigger a null-ptr deref of
pvmw.pte. Even if we pass TTU_SPLIT_HUGE_PMD, we will trigger a
WARN_ON_ONCE because the page isn't in the swapcache.
Since a UCE is rare in the real world, and a race with reclamation is even
rarer, just skipping the hwpoisoned large folio is enough. memory_failure()
will handle it if the UCE is triggered again.
This happens when memory reclaim for a large folio races with
memory_failure(), and leads to a kernel panic. The race is as
follows:
cpu0                                  cpu1
shrink_folio_list                     memory_failure
                                        TestSetPageHWPoison
  unmap_poisoned_folio
  --> trigger BUG_ON because unmap_poisoned_folio
      couldn't handle the large folio
[tujinjiang@huawei.com: add comment to unmap_poisoned_folio()] |
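A hedged sketch of the skip (the exact placement inside shrink_folio_list() and the label are assumptions; the folio test helpers are the existing kernel ones):

/* Leave hwpoisoned large folios alone during reclaim; unmap_poisoned_folio()
 * cannot handle them, and memory_failure() will take care of the folio if
 * the uncorrected error is hit again. */
if (unlikely(folio_test_hwpoison(folio)) && folio_test_large(folio))
	goto keep_locked;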
In the Linux kernel, the following vulnerability has been resolved:
media: venus: protect against spurious interrupts during probe
Make sure the interrupt handler is initialized before the interrupt is
registered.
If the IRQ is registered before hfi_create(), it's possible that an
interrupt fires before the handler setup is complete, leading to a NULL
dereference.
This error condition has been observed during system boot on Rb3Gen2. |
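A hedged sketch of the probe ordering (the calls follow the venus probe path as described in the commit; flags and error handling are abbreviated and approximate):

/* Initialize the HFI state first ... */
ret = hfi_create(core, &venus_core_ops);
if (ret)
	return ret;

/* ... and only then register the interrupt handler, so a spurious IRQ
 * cannot run before the data it dereferences exists. */
ret = devm_request_threaded_irq(dev, core->irq, hfi_isr, hfi_isr_thread,
				IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
				"venus", core);
if (ret)
	return ret;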