In the Linux kernel, the following vulnerability has been resolved:
USB: Gadget: core: Help prevent panic during UVC unconfigure
Avichal Rakesh reported a kernel panic that occurred when the UVC
gadget driver was removed from a gadget's configuration. The panic
involves a somewhat complicated interaction between the kernel driver
and a userspace component (as described in the Link tag below), but
the analysis did make one thing clear: The Gadget core should
accommodate gadget drivers calling usb_gadget_deactivate() as part of
their unbind procedure.
Currently this doesn't work. gadget_unbind_driver() calls
driver->unbind() while holding the udc->connect_lock mutex, and
usb_gadget_deactivate() attempts to acquire that mutex, which will
result in a deadlock.
The simple fix is for gadget_unbind_driver() to release the mutex when
invoking the ->unbind() callback. There is no particular reason for
it to be holding the mutex at that time, and the mutex isn't held
while the ->bind() callback is invoked. So we'll drop the mutex
before performing the unbind callback and reacquire it afterward.
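A minimal sketch of that locking change, not the exact upstream diff; the structure and field names follow the description above:

    static void gadget_unbind_driver(struct device *dev)
    {
            struct usb_udc *udc = container_of(dev, struct usb_udc, dev);
            struct usb_gadget_driver *driver = udc->driver;

            mutex_lock(&udc->connect_lock);
            /* ... teardown that still needs the lock ... */
            mutex_unlock(&udc->connect_lock);

            /* ->unbind() may call usb_gadget_deactivate(), which takes
             * connect_lock itself, so it must run without the mutex held.
             */
            driver->unbind(udc->gadget);

            mutex_lock(&udc->connect_lock);
            /* ... remaining cleanup under the lock ... */
            mutex_unlock(&udc->connect_lock);
    }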
We'll also add a couple of comments to usb_gadget_activate() and
usb_gadget_deactivate(). Because they run in process context they
must not be called from a gadget driver's ->disconnect() callback,
which (according to the kerneldoc for struct usb_gadget_driver in
include/linux/usb/gadget.h) may run in interrupt context. This may
help prevent similar bugs from arising in the future. |
In the Linux kernel, the following vulnerability has been resolved:
bpf: reject unhashed sockets in bpf_sk_assign
The semantics for bpf_sk_assign are as follows:
sk = some_lookup_func()
bpf_sk_assign(skb, sk)
bpf_sk_release(sk)
That is, the sk is not consumed by bpf_sk_assign. The function
therefore needs to make sure that sk lives long enough to be
consumed from __inet_lookup_skb. The path through the stack for a
TCPv4 packet is roughly:
netif_receive_skb_core: takes RCU read lock
__netif_receive_skb_core:
sch_handle_ingress:
tcf_classify:
bpf_sk_assign()
deliver_ptype_list_skb:
deliver_skb:
ip_packet_type->func == ip_rcv:
ip_rcv_core:
ip_rcv_finish_core:
dst_input:
ip_local_deliver:
ip_local_deliver_finish:
ip_protocol_deliver_rcu:
tcp_v4_rcv:
__inet_lookup_skb:
skb_steal_sock
The existing helper takes advantage of the fact that everything
happens in the same RCU critical section: for sockets with
SOCK_RCU_FREE set bpf_sk_assign never takes a reference.
skb_steal_sock then checks SOCK_RCU_FREE again and does sock_put
if necessary.
This approach assumes that SOCK_RCU_FREE is never set on a sk
between bpf_sk_assign and skb_steal_sock, but this invariant is
violated by unhashed UDP sockets. A new UDP socket is created
in TCP_CLOSE state but without SOCK_RCU_FREE set. That flag is only
added in udp_lib_get_port() which happens when a socket is bound.
When bpf_sk_assign was added it wasn't possible to access unhashed
UDP sockets from BPF, so this wasn't a problem. This changed
in commit 0c48eefae712 ("sock_map: Lift socket state restriction
for datagram sockets"), but the helper wasn't adjusted accordingly.
The following sequence of events will therefore lead to a refcount
leak:
1. Add socket(AF_INET, SOCK_DGRAM) to a sockmap.
2. Pull socket out of sockmap and bpf_sk_assign it. Since
SOCK_RCU_FREE is not set we increment the refcount.
3. bind() or connect() the socket, setting SOCK_RCU_FREE.
4. skb_steal_sock will now set refcounted = false due to
SOCK_RCU_FREE.
5. tcp_v4_rcv() skips sock_put().
Fix the problem by rejecting unhashed sockets in bpf_sk_assign().
This matches the behaviour of __inet_lookup_skb which is ultimately
the goal of bpf_sk_assign(). |
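A hedged sketch of the check described above; the helper name and error code are illustrative, while sk_unhashed() is the standard test from include/net/sock.h:

    /* Reject sockets that were never hashed (e.g. a fresh, unbound UDP
     * socket): they may gain SOCK_RCU_FREE later, breaking the refcount
     * assumption shared with skb_steal_sock().
     */
    static int sk_assign_check(struct sock *sk)
    {
            if (sk_unhashed(sk))
                    return -EOPNOTSUPP;
            return 0;
    }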
In the Linux kernel, the following vulnerability has been resolved:
ring-buffer: Sync IRQ works before buffer destruction
If something was written to the buffer just before destruction,
it may be possible (maybe not in a real system, but it did
happen in ARCH=um with time-travel) to destroy the ringbuffer
before the IRQ work ran, leading to this KASAN report (or a crash
without KASAN):
BUG: KASAN: slab-use-after-free in irq_work_run_list+0x11a/0x13a
Read of size 8 at addr 000000006d640a48 by task swapper/0
CPU: 0 PID: 0 Comm: swapper Tainted: G W O 6.3.0-rc1 #7
Stack:
60c4f20f 0c203d48 41b58ab3 60f224fc
600477fa 60f35687 60c4f20f 601273dd
00000008 6101eb00 6101eab0 615be548
Call Trace:
[<60047a58>] show_stack+0x25e/0x282
[<60c609e0>] dump_stack_lvl+0x96/0xfd
[<60c50d4c>] print_report+0x1a7/0x5a8
[<603078d3>] kasan_report+0xc1/0xe9
[<60308950>] __asan_report_load8_noabort+0x1b/0x1d
[<60232844>] irq_work_run_list+0x11a/0x13a
[<602328b4>] irq_work_tick+0x24/0x34
[<6017f9dc>] update_process_times+0x162/0x196
[<6019f335>] tick_sched_handle+0x1a4/0x1c3
[<6019fd9e>] tick_sched_timer+0x79/0x10c
[<601812b9>] __hrtimer_run_queues.constprop.0+0x425/0x695
[<60182913>] hrtimer_interrupt+0x16c/0x2c4
[<600486a3>] um_timer+0x164/0x183
[...]
Allocated by task 411:
save_stack_trace+0x99/0xb5
stack_trace_save+0x81/0x9b
kasan_save_stack+0x2d/0x54
kasan_set_track+0x34/0x3e
kasan_save_alloc_info+0x25/0x28
____kasan_kmalloc+0x8b/0x97
__kasan_kmalloc+0x10/0x12
__kmalloc+0xb2/0xe8
load_elf_phdrs+0xee/0x182
[...]
The buggy address belongs to the object at 000000006d640800
which belongs to the cache kmalloc-1k of size 1024
The buggy address is located 584 bytes inside of
freed 1024-byte region [000000006d640800, 000000006d640c00)
Add the appropriate irq_work_sync() so the work finishes before
the buffers are destroyed.
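A hedged sketch of that ordering; the per-CPU structure and field names are assumptions based on the ring-buffer code described above:

    static void rb_free_cpu_buffer(struct ring_buffer_per_cpu *cpu_buffer)
    {
            /* Wait for any pending wakeup work queued from the write path
             * before the structure it points into is freed.
             */
            irq_work_sync(&cpu_buffer->irq_work.work);

            /* ... free the buffer pages ... */
            kfree(cpu_buffer);
    }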
Prior to the commit in the Fixes tag below, there was only a
single global IRQ work, so this issue didn't exist. |
In the Linux kernel, the following vulnerability has been resolved:
wifi: mac80211: check for station first in client probe
When probing a client, first check if we have it, and then
check for the channel context, otherwise you can trigger
the warning there easily by probing when the AP isn't even
started yet. Since a client existing means the AP is also
operating, we can then keep the warning.
Also simplify the moved code a bit. |
In the Linux kernel, the following vulnerability has been resolved:
sctp: add a refcnt in sctp_stream_priorities to avoid a nested loop
With this refcnt added in sctp_stream_priorities, we don't need to
traverse all streams to check if the prio is used by other streams
when freeing one stream's prio in sctp_sched_prio_free_sid(). This
can avoid a nested loop (up to 65535 * 65535 iterations), which may cause
the CPU to get stuck, as Ying reported:
watchdog: BUG: soft lockup - CPU#23 stuck for 26s! [ksoftirqd/23:136]
Call Trace:
<TASK>
sctp_sched_prio_free_sid+0xab/0x100 [sctp]
sctp_stream_free_ext+0x64/0xa0 [sctp]
sctp_stream_free+0x31/0x50 [sctp]
sctp_association_free+0xa5/0x200 [sctp]
Note that this counter doesn't need to use the refcount_t type, as
accesses to it are always protected by the sock lock.
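A hedged sketch of the idea, assuming a plain "users" counter field on struct sctp_stream_priorities and an illustrative helper name; as noted above, the sock lock makes refcount_t unnecessary:

    static void sctp_sched_prio_put(struct sctp_stream_priorities *prio)
    {
            /* Runs under the sock lock, so a plain integer is sufficient. */
            if (--prio->users)
                    return;         /* still in use by another stream */

            list_del_init(&prio->prio_sched);
            kfree(prio);
    }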
v1->v2:
- add a check in sctp_sched_prio_set to avoid the possible prio_head
refcnt overflow. |
In the Linux kernel, the following vulnerability has been resolved:
cifs: Release folio lock on fscache read hit.
Under the current code, when cifs_readpage_worker is called, the call
contract is that the callee should unlock the page. This is documented
in the read_folio section of Documentation/filesystems/vfs.rst as:
> The filesystem should unlock the folio once the read has completed,
> whether it was successful or not.
Without this change, when fscache is in use and cache hit occurs during
a read, the page lock is leaked, producing the following stack on
subsequent reads (via mmap) to the page:
$ cat /proc/3890/task/12864/stack
[<0>] folio_wait_bit_common+0x124/0x350
[<0>] filemap_read_folio+0xad/0xf0
[<0>] filemap_fault+0x8b1/0xab0
[<0>] __do_fault+0x39/0x150
[<0>] do_fault+0x25c/0x3e0
[<0>] __handle_mm_fault+0x6ca/0xc70
[<0>] handle_mm_fault+0xe9/0x350
[<0>] do_user_addr_fault+0x225/0x6c0
[<0>] exc_page_fault+0x84/0x1b0
[<0>] asm_exc_page_fault+0x27/0x30
This requires a reboot to resolve; it is a deadlock.
Note however that the call to cifs_readpage_from_fscache does mark the
page clean, but does not release the folio lock. This happens in
__cifs_readpage_from_fscache on success. Releasing the lock at that
point however is not appropriate as cifs_readahead also calls
cifs_readpage_from_fscache and *does* unconditionally release the lock
after its return. This change therefore effectively makes
cifs_readpage_worker work like cifs_readahead. |
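A hedged sketch of the cache-hit path described above (simplified; the fallback read from the server is elided):

    static int cifs_readpage_worker(struct file *file, struct page *page,
                                    loff_t *poffset)
    {
            int rc;

            rc = cifs_readpage_from_fscache(file_inode(file), page);
            if (rc == 0) {
                    /* Cache hit: honor the read_folio contract and unlock,
                     * just as cifs_readahead does after this call.
                     */
                    unlock_page(page);
                    return rc;
            }

            /* ... otherwise read from the server and unlock there ... */
            return rc;
    }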
In the Linux kernel, the following vulnerability has been resolved:
drivers: base: Free devm resources when unregistering a device
In the current code, devres_release_all() only gets called if the device
has a bus and has been probed.
This leads to issues when using bus-less or driver-less devices where
the device might never get freed if a managed resource holds a reference
to the device. This is happening in the DRM framework for example.
We should thus call devres_release_all() in the device_del() function to
make sure that the device-managed actions are properly executed when the
device is unregistered, even if it has neither a bus nor a driver.
This is effectively the same change as commit 2f8d16a996da ("devres:
release resources on device_del()") that got reverted by commit
a525a3ddeaca ("driver core: free devres in device_release") over
memory leaks concerns.
This patch effectively combines the two commits mentioned above to
release the resources both on device_del() and device_release() and get
the best of both worlds. |
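A simplified sketch of the two call sites described above; the bodies of device_del() and device_release() are elided:

    void device_del(struct device *dev)
    {
            /* ... detach from driver and bus, remove sysfs entries ... */

            /* Run devm release actions even for bus-less or driver-less
             * devices, so nothing keeps the device pinned forever.
             */
            devres_release_all(dev);
    }

    static void device_release(struct kobject *kobj)
    {
            struct device *dev = kobj_to_dev(kobj);

            /* Catch resources added after device_del(), avoiding the leaks
             * that motivated the earlier revert.
             */
            devres_release_all(dev);
            /* ... free the device ... */
    }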
In the Linux kernel, the following vulnerability has been resolved:
cifs: fix mid leak during reconnection after timeout threshold
When the number of responses with status of STATUS_IO_TIMEOUT
exceeds a specified threshold (NUM_STATUS_IO_TIMEOUT), we reconnect
the connection. But we do not return the mid, or the credits
returned for the mid, or reduce the number of in-flight requests.
This bug could result in the server->in_flight count going bad,
and could also cause a leak of mids.
This change moves the check a few lines below, to after the
response is decrypted, even if the response is read from the
transform header. This way, the code for returning the mids
can be reused.
Also, the cifs_reconnect was reconnecting just the transport
connection before. In case of multi-channel, this may not be
what we want to do after several timeouts. Changed that to
reconnect the session and the tree too.
Also renamed NUM_STATUS_IO_TIMEOUT to a more appropriate name
MAX_STATUS_IO_TIMEOUT. |
In the Linux kernel, the following vulnerability has been resolved:
bus: mhi: host: Range check CHDBOFF and ERDBOFF
If the value read from the CHDBOFF and ERDBOFF registers is outside the
range of the MHI register space then an invalid address might be computed
which later causes a kernel panic. Range check the read value to prevent
a crash due to bad data from the device. |
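A hedged sketch of the range check; mhi_read_reg() and the controller's reg_len field are assumed to be available as in the MHI host stack, and the helper name is illustrative:

    static int mhi_validate_db_offsets(struct mhi_controller *mhi_cntrl,
                                       void __iomem *base)
    {
            u32 chdb_off, erdb_off;

            if (mhi_read_reg(mhi_cntrl, base, CHDBOFF, &chdb_off))
                    return -EIO;
            if (chdb_off >= mhi_cntrl->reg_len)     /* outside MHI register space */
                    return -ERANGE;

            if (mhi_read_reg(mhi_cntrl, base, ERDBOFF, &erdb_off))
                    return -EIO;
            if (erdb_off >= mhi_cntrl->reg_len)
                    return -ERANGE;

            return 0;
    }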
In the Linux kernel, the following vulnerability has been resolved:
tunnels: fix kasan splat when generating ipv4 pmtu error
If we try to emit an ICMP error in response to a nonlinear skb, we get
BUG: KASAN: slab-out-of-bounds in ip_compute_csum+0x134/0x220
Read of size 4 at addr ffff88811c50db00 by task iperf3/1691
CPU: 2 PID: 1691 Comm: iperf3 Not tainted 6.5.0-rc3+ #309
[..]
kasan_report+0x105/0x140
ip_compute_csum+0x134/0x220
iptunnel_pmtud_build_icmp+0x554/0x1020
skb_tunnel_check_pmtu+0x513/0xb80
vxlan_xmit_one+0x139e/0x2ef0
vxlan_xmit+0x1867/0x2760
dev_hard_start_xmit+0x1ee/0x4f0
br_dev_queue_push_xmit+0x4d1/0x660
[..]
ip_compute_csum() cannot deal with nonlinear skbs, so avoid it.
After this change, splat is gone and iperf3 is no longer stuck. |
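A hedged sketch of the general idea rather than the exact upstream diff; the helper name is illustrative, while skb_checksum() and csum_fold() are the standard kernel primitives for checksumming data that may live in fragments:

    /* skb_checksum() walks paged/nonlinear data safely, unlike
     * ip_compute_csum() over skb->data.
     */
    static __sum16 pmtud_icmp_csum(struct sk_buff *skb, int offset, int len)
    {
            return csum_fold(skb_checksum(skb, offset, len, 0));
    }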
In the Linux kernel, the following vulnerability has been resolved:
dm integrity: call kmem_cache_destroy() in dm_integrity_init() error path
Otherwise the journal_io_cache will leak if dm_register_target() fails. |
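A hedged sketch of the init error path, assuming the journal_io_cache and integrity_target names from dm-integrity; the kmem_cache_create() parameters are illustrative:

    static int __init dm_integrity_init(void)
    {
            int r;

            journal_io_cache = kmem_cache_create("integrity_journal_io",
                                                 sizeof(struct journal_io),
                                                 0, 0, NULL);
            if (!journal_io_cache)
                    return -ENOMEM;

            r = dm_register_target(&integrity_target);
            if (r < 0)
                    kmem_cache_destroy(journal_io_cache); /* don't leak on failure */

            return r;
    }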
In the Linux kernel, the following vulnerability has been resolved:
nfsd: clean up potential nfsd_file refcount leaks in COPY codepath
There are two different flavors of the nfsd4_copy struct. One is
embedded in the compound and is used directly in synchronous copies. The
other is dynamically allocated, refcounted and tracked in the client
structure. For the embedded one, the cleanup just involves releasing any
nfsd_files held on its behalf. For the async one, the cleanup is a bit
more involved, and we need to dequeue it from lists, unhash it, etc.
There is at least one potential refcount leak in this code now. If the
kthread_create call fails, then both the src and dst nfsd_files in the
original nfsd4_copy object are leaked.
The cleanup in this codepath is also sort of weird. In the async copy
case, we'll have up to four nfsd_file references (src and dst for both
flavors of copy structure). They are both put at the end of
nfsd4_do_async_copy, even though the ones held on behalf of the embedded
one outlive that structure.
Change it so that we always clean up the nfsd_file refs held by the
embedded copy structure before nfsd4_copy returns. Rework
cleanup_async_copy to handle both inter and intra copies. Eliminate
nfsd4_cleanup_intra_ssc since it now becomes a no-op. |
In the Linux kernel, the following vulnerability has been resolved:
irqchip: Fix refcount leak in platform_irqchip_probe
of_irq_find_parent() returns a node pointer with its refcount incremented,
so we should call of_node_put() on it when it is no longer needed.
Add the missing of_node_put() to avoid a refcount leak.
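A hedged sketch of the of_node_put() pattern described above; the probe body is elided:

    static int platform_irqchip_probe(struct platform_device *pdev)
    {
            struct device_node *np = pdev->dev.of_node;
            struct device_node *par_np = of_irq_find_parent(np);
            int ret = 0;

            /* ... look up and run the matching irqchip init function ... */

            of_node_put(par_np);    /* balance the refcount taken by
                                     * of_irq_find_parent() */
            return ret;
    }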
In the Linux kernel, the following vulnerability has been resolved:
nilfs2: fix potential UAF of struct nilfs_sc_info in nilfs_segctor_thread()
The finalization of nilfs_segctor_thread() can race with
nilfs_segctor_kill_thread() which terminates that thread, potentially
causing a use-after-free BUG as KASAN detected.
At the end of nilfs_segctor_thread(), it assigns NULL to "sc_task" member
of "struct nilfs_sc_info" to indicate the thread has finished, and then
notifies nilfs_segctor_kill_thread() of this using waitqueue
"sc_wait_task" on the struct nilfs_sc_info.
However, here, immediately after the NULL assignment to "sc_task", it is
possible that nilfs_segctor_kill_thread() will detect it and return to
continue the deallocation, freeing the nilfs_sc_info structure before the
thread does the notification.
This fixes the issue by protecting the NULL assignment to "sc_task" and
its notification, with spinlock "sc_state_lock" of the struct
nilfs_sc_info. Since nilfs_segctor_kill_thread() does a final check to
see if "sc_task" is NULL with "sc_state_lock" locked, this can eliminate
the race. |
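A hedged sketch of the synchronized exit path described above, using the field names from the text; the surrounding thread function is elided:

    /* End of nilfs_segctor_thread(): publish the exit and wake the waiter
     * atomically with respect to nilfs_segctor_kill_thread(), so the
     * nilfs_sc_info cannot be freed between the two steps.
     */
    spin_lock(&sci->sc_state_lock);
    sci->sc_task = NULL;
    wake_up(&sci->sc_wait_task);
    spin_unlock(&sci->sc_state_lock);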
In the Linux kernel, the following vulnerability has been resolved:
scsi: Revert "scsi: core: Do not increase scsi_device's iorequest_cnt if dispatch failed"
The "atomic_inc(&cmd->device->iorequest_cnt)" in scsi_queue_rq() would
cause a kernel panic because cmd->device may be freed after returning from
scsi_dispatch_cmd().
This reverts commit cfee29ffb45b1c9798011b19d454637d1b0fe87d. |
In the Linux kernel, the following vulnerability has been resolved:
dax: Fix dax_mapping_release() use after free
A CONFIG_DEBUG_KOBJECT_RELEASE test of removing a device-dax region
provider (like modprobe -r dax_hmem) yields:
kobject: 'mapping0' (ffff93eb460e8800): kobject_release, parent 0000000000000000 (delayed 2000)
[..]
DEBUG_LOCKS_WARN_ON(1)
WARNING: CPU: 23 PID: 282 at kernel/locking/lockdep.c:232 __lock_acquire+0x9fc/0x2260
[..]
RIP: 0010:__lock_acquire+0x9fc/0x2260
[..]
Call Trace:
<TASK>
[..]
lock_acquire+0xd4/0x2c0
? ida_free+0x62/0x130
_raw_spin_lock_irqsave+0x47/0x70
? ida_free+0x62/0x130
ida_free+0x62/0x130
dax_mapping_release+0x1f/0x30
device_release+0x36/0x90
kobject_delayed_cleanup+0x46/0x150
This is due to attempting ida_free() on an ida object that has already been
freed. Devices typically only hold a reference on their parent while
registered. If a child needs a parent object to complete its release it
needs to hold a reference that it drops from its release callback.
Arrange for a dax_mapping to pin its parent dev_dax instance until
dax_mapping_release(). |
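A hedged sketch of the release path described above; the to_dax_mapping()/to_dev_dax() helpers and the parent's ida are assumptions based on the description, and the matching get_device() on the parent at mapping creation time is implied:

    static void dax_mapping_release(struct device *dev)
    {
            struct dax_mapping *mapping = to_dax_mapping(dev);
            struct dev_dax *dev_dax = to_dev_dax(dev->parent);

            ida_free(&dev_dax->ida, mapping->id);
            kfree(mapping);
            put_device(&dev_dax->dev);      /* drop the ref taken at creation */
    }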
In the Linux kernel, the following vulnerability has been resolved:
scsi: qla2xxx: Fix deletion race condition
A system crash occurs when using a debug kernel due to linked-list
corruption. The cause of the corruption is that session deletion was
allowed to be queued twice. Here's the internal trace showing the same
port being queued twice for deletion on different CPUs:
20808683956 015 qla2xxx [0000:13:00.1]-e801:4: Scheduling sess ffff93ebf9306800 for deletion 50:06:0e:80:12:48:ff:50 fc4_type 1
20808683957 027 qla2xxx [0000:13:00.1]-e801:4: Scheduling sess ffff93ebf9306800 for deletion 50:06:0e:80:12:48:ff:50 fc4_type 1
Move the clearing/setting of the deleted flag under the lock. |
In the Linux kernel, the following vulnerability has been resolved:
fs/ntfs3: Validate data run offset
This adds sanity checks for the data run offset. We should make sure the
data run offset is valid before trying to unpack the runs; otherwise we may
encounter a use-after-free or other unexpected memory access behaviors.
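A hedged sketch of the kind of sanity check described above; the helper name is illustrative, and attr->nres.run_off is assumed to be the declared run offset within an attribute record of size asize:

    /* The run offset must lie inside the attribute record before
     * run_unpack() dereferences data at that offset.
     */
    static bool run_off_valid(const struct ATTRIB *attr, u32 asize)
    {
            u16 run_off = le16_to_cpu(attr->nres.run_off);

            return run_off <= asize;
    }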
[ 82.940342] BUG: KASAN: use-after-free in run_unpack+0x2e3/0x570
[ 82.941180] Read of size 1 at addr ffff888008a8487f by task mount/240
[ 82.941670]
[ 82.942069] CPU: 0 PID: 240 Comm: mount Not tainted 5.19.0+ #15
[ 82.942482] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 82.943720] Call Trace:
[ 82.944204] <TASK>
[ 82.944471] dump_stack_lvl+0x49/0x63
[ 82.944908] print_report.cold+0xf5/0x67b
[ 82.945141] ? __wait_on_bit+0x106/0x120
[ 82.945750] ? run_unpack+0x2e3/0x570
[ 82.946626] kasan_report+0xa7/0x120
[ 82.947046] ? run_unpack+0x2e3/0x570
[ 82.947280] __asan_load1+0x51/0x60
[ 82.947483] run_unpack+0x2e3/0x570
[ 82.947709] ? memcpy+0x4e/0x70
[ 82.947927] ? run_pack+0x7a0/0x7a0
[ 82.948158] run_unpack_ex+0xad/0x3f0
[ 82.948399] ? mi_enum_attr+0x14a/0x200
[ 82.948717] ? run_unpack+0x570/0x570
[ 82.949072] ? ni_enum_attr_ex+0x1b2/0x1c0
[ 82.949332] ? ni_fname_type.part.0+0xd0/0xd0
[ 82.949611] ? mi_read+0x262/0x2c0
[ 82.949970] ? ntfs_cmp_names_cpu+0x125/0x180
[ 82.950249] ntfs_iget5+0x632/0x1870
[ 82.950621] ? ntfs_get_block_bmap+0x70/0x70
[ 82.951192] ? evict+0x223/0x280
[ 82.951525] ? iput.part.0+0x286/0x320
[ 82.951969] ntfs_fill_super+0x1321/0x1e20
[ 82.952436] ? put_ntfs+0x1d0/0x1d0
[ 82.952822] ? vsprintf+0x20/0x20
[ 82.953188] ? mutex_unlock+0x81/0xd0
[ 82.953379] ? set_blocksize+0x95/0x150
[ 82.954001] get_tree_bdev+0x232/0x370
[ 82.954438] ? put_ntfs+0x1d0/0x1d0
[ 82.954700] ntfs_fs_get_tree+0x15/0x20
[ 82.955049] vfs_get_tree+0x4c/0x130
[ 82.955292] path_mount+0x645/0xfd0
[ 82.955615] ? putname+0x80/0xa0
[ 82.955955] ? finish_automount+0x2e0/0x2e0
[ 82.956310] ? kmem_cache_free+0x110/0x390
[ 82.956723] ? putname+0x80/0xa0
[ 82.957023] do_mount+0xd6/0xf0
[ 82.957411] ? path_mount+0xfd0/0xfd0
[ 82.957638] ? __kasan_check_write+0x14/0x20
[ 82.957948] __x64_sys_mount+0xca/0x110
[ 82.958310] do_syscall_64+0x3b/0x90
[ 82.958719] entry_SYSCALL_64_after_hwframe+0x63/0xcd
[ 82.959341] RIP: 0033:0x7fd0d1ce948a
[ 82.960193] Code: 48 8b 0d 11 fa 2a 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 a5 00 00 008
[ 82.961532] RSP: 002b:00007ffe59ff69a8 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5
[ 82.962527] RAX: ffffffffffffffda RBX: 0000564dcc107060 RCX: 00007fd0d1ce948a
[ 82.963266] RDX: 0000564dcc107260 RSI: 0000564dcc1072e0 RDI: 0000564dcc10fce0
[ 82.963686] RBP: 0000000000000000 R08: 0000564dcc107280 R09: 0000000000000020
[ 82.964272] R10: 00000000c0ed0000 R11: 0000000000000202 R12: 0000564dcc10fce0
[ 82.964785] R13: 0000564dcc107260 R14: 0000000000000000 R15: 00000000ffffffff |
In the Linux kernel, the following vulnerability has been resolved:
perf: RISC-V: Remove PERF_HES_STOPPED flag checking in riscv_pmu_start()
Since commit 096b52fd2bb4 ("perf: RISC-V: throttle perf events"), the
perf_sample_event_took() function has been used to report time spent in
overflow interrupts. If the interrupt takes too long, the perf framework
will lower the sysctl_perf_event_sample_rate and max_samples_per_tick.
When hwc->interrupts is larger than max_samples_per_tick, the
hwc->interrupts will be set to MAX_INTERRUPTS, and events will be
throttled within the __perf_event_account_interrupt() function.
However, the RISC-V PMU driver doesn't call riscv_pmu_stop() to set the
PERF_HES_STOPPED flag after perf_event_overflow() in the
pmu_sbi_ovf_handler() function when an event is throttled. When the perf
framework later unthrottles the event in the timer interrupt handler, it
calls riscv_pmu_start() and triggers the WARN_ON_ONCE() warning, as shown below:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 240 at drivers/perf/riscv_pmu.c:184 riscv_pmu_start+0x7c/0x8e
Modules linked in:
CPU: 0 PID: 240 Comm: ls Not tainted 6.4-rc4-g19d0788e9ef2 #1
Hardware name: SiFive (DT)
epc : riscv_pmu_start+0x7c/0x8e
ra : riscv_pmu_start+0x28/0x8e
epc : ffffffff80aef864 ra : ffffffff80aef810 sp : ffff8f80004db6f0
gp : ffffffff81c83750 tp : ffffaf80069f9bc0 t0 : ffff8f80004db6c0
t1 : 0000000000000000 t2 : 000000000000001f s0 : ffff8f80004db720
s1 : ffffaf8008ca1068 a0 : 0000ffffffffffff a1 : 0000000000000000
a2 : 0000000000000001 a3 : 0000000000000870 a4 : 0000000000000000
a5 : 0000000000000000 a6 : 0000000000000840 a7 : 0000000000000030
s2 : 0000000000000000 s3 : ffffaf8005165800 s4 : ffffaf800424da00
s5 : ffffffffffffffff s6 : ffffffff81cc7590 s7 : 0000000000000000
s8 : 0000000000000006 s9 : 0000000000000001 s10: ffffaf807efbc340
s11: ffffaf807efbbf00 t3 : ffffaf8006a16028 t4 : 00000000dbfbb796
t5 : 0000000700000000 t6 : ffffaf8005269870
status: 0000000200000100 badaddr: 0000000000000000 cause: 0000000000000003
[<ffffffff80aef864>] riscv_pmu_start+0x7c/0x8e
[<ffffffff80185b56>] perf_adjust_freq_unthr_context+0x15e/0x174
[<ffffffff80188642>] perf_event_task_tick+0x88/0x9c
[<ffffffff800626a8>] scheduler_tick+0xfe/0x27c
[<ffffffff800b5640>] update_process_times+0x9a/0xba
[<ffffffff800c5bd4>] tick_sched_handle+0x32/0x66
[<ffffffff800c5e0c>] tick_sched_timer+0x64/0xb0
[<ffffffff800b5e50>] __hrtimer_run_queues+0x156/0x2f4
[<ffffffff800b6bdc>] hrtimer_interrupt+0xe2/0x1fe
[<ffffffff80acc9e8>] riscv_timer_interrupt+0x38/0x42
[<ffffffff80090a16>] handle_percpu_devid_irq+0x90/0x1d2
[<ffffffff8008a9f4>] generic_handle_domain_irq+0x28/0x36
Other PMU drivers, such as Arm, LoongArch, Csky, and MIPS, neither call
*_pmu_stop() to set the PERF_HES_STOPPED flag after perf_event_overflow()
nor check the PERF_HES_STOPPED flag in *_pmu_start(), and they do not
trigger this warning.
Thus, remove this unnecessary check in riscv_pmu_start() to prevent the
warning. |
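A hedged sketch of riscv_pmu_start() after the change described above; the counter-programming details are elided:

    static void riscv_pmu_start(struct perf_event *event, int flags)
    {
            struct hw_perf_event *hwc = &event->hw;

            /* The PERF_HES_STOPPED WARN_ON_ONCE() check that fired above is
             * gone, matching other architectures' PMU drivers.
             */
            if (flags & PERF_EF_RELOAD)
                    WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));

            hwc->state = 0;
            /* ... program and start the hardware counter ... */
    }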
In the Linux kernel, the following vulnerability has been resolved:
drm/mediatek: mtk_drm_crtc: Add checks for devm_kcalloc
As the devm_kcalloc may return NULL, the return value needs to be checked
to avoid a NULL pointer dereference. |
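A hedged sketch of the missing check; the helper name and surrounding context are illustrative:

    static int mtk_drm_crtc_alloc_planes(struct device *dev,
                                         struct mtk_drm_crtc *mtk_crtc,
                                         unsigned int num_planes)
    {
            mtk_crtc->planes = devm_kcalloc(dev, num_planes,
                                            sizeof(*mtk_crtc->planes),
                                            GFP_KERNEL);
            if (!mtk_crtc->planes)  /* devm_kcalloc() can return NULL */
                    return -ENOMEM;

            return 0;
    }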