In the Linux kernel, the following vulnerability has been resolved:
block: don't call rq_qos_ops->done_bio if the bio isn't tracked
The rq_qos framework is only applied to request-based drivers, so:
1) rq_qos_done_bio() need not be called for bio-based drivers
2) rq_qos_done_bio() need not be called for bios which aren't tracked,
such as bios ended from error handling code.
Especially in bio_endio():
1) the request queue is referenced via bio->bi_bdev->bd_disk->queue, which
may already be gone since the request queue refcount may not be held in
the above two cases
2) q->rq_qos may be freed in blk_cleanup_queue() when calling into
__rq_qos_done_bio()
Fix the potential kernel panic by not calling rq_qos_ops->done_bio if
the bio isn't tracked. This is safe because both ioc_rqos_done_bio()
and blkcg_iolatency_done_bio() are no-ops if the bio isn't tracked. |
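A minimal user-space model of the guard described above, with hypothetical types and names rather than the kernel's (in the kernel, the tracked state is expected to be the bio flag set when rq_qos throttles the bio): the completion path consults the qos hook only when the bio was actually tracked at submission, so untracked bios never dereference queue state that may already be gone.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for struct request_queue and struct bio. */
    struct queue {
        void (*done_bio)(struct queue *q);   /* models rq_qos_ops->done_bio */
    };

    struct bio {
        struct queue *q;   /* may point at state that is being torn down */
        bool tracked;      /* set only when the qos layer tracked this bio */
    };

    static void qos_done(struct queue *q)
    {
        printf("qos done_bio called for queue %p\n", (void *)q);
    }

    /* Models bio_endio(): call the qos hook only for tracked bios. */
    static void bio_endio_model(struct bio *bio)
    {
        if (bio->q && bio->tracked)        /* the guard the fix adds */
            bio->q->done_bio(bio->q);
        else
            printf("untracked bio: qos hook skipped\n");
    }

    int main(void)
    {
        struct queue q = { .done_bio = qos_done };
        struct bio tracked_bio   = { .q = &q, .tracked = true };
        struct bio untracked_bio = { .q = &q, .tracked = false };

        bio_endio_model(&tracked_bio);     /* request-based path: safe to call */
        bio_endio_model(&untracked_bio);   /* bio-based or error path: skipped */
        return 0;
    }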
In the Linux kernel, the following vulnerability has been resolved:
scsi: pm80xx: Fix memory leak during rmmod
The driver failed to release all of the memory it allocated, which would
lead to a memory leak during driver removal.
Properly free the memory when the module is removed. |
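A short, self-contained illustration of the pattern the fix follows, using hypothetical names rather than the pm80xx structures: everything allocated when the driver comes up must have a matching free on the removal path, or unloading the module leaks it.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical driver-private state; stands in for the HBA structure. */
    struct drv_state {
        void *cmd_table;   /* allocated at probe time */
        void *event_log;   /* allocated at probe time */
    };

    static struct drv_state *drv_probe(void)
    {
        struct drv_state *st = calloc(1, sizeof(*st));
        if (!st)
            return NULL;
        st->cmd_table = malloc(4096);
        st->event_log = malloc(1024);
        return st;
    }

    /* Removal path: release everything probe allocated. */
    static void drv_remove(struct drv_state *st)
    {
        if (!st)
            return;
        free(st->event_log);   /* the kind of free the leak was missing */
        free(st->cmd_table);
        free(st);
    }

    int main(void)
    {
        struct drv_state *st = drv_probe();
        if (!st)
            return 1;
        printf("driver loaded\n");
        drv_remove(st);        /* "rmmod": nothing allocated may outlive this */
        printf("driver removed\n");
        return 0;
    }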
In the Linux kernel, the following vulnerability has been resolved:
scsi: lpfc: Fix link down processing to address NULL pointer dereference
If an FC link down transition occurs while PLOGIs are outstanding to fabric
well-known addresses, outstanding ABTS requests may result in a NULL pointer
dereference. Driver unload requests may hang with repeated "2878" log
messages.
The link down processing results in ABTS requests for the outstanding ELS
requests. The Abort WQEs are sent for the ELSs before the driver has set
the link state to down, so the driver sends the Abort expecting that an
ABTS will be sent on the wire. The Abort request then stalls waiting for
the link to come up. In some conditions the driver may auto-complete the
ELSs, so if the link does come up, the Abort completions may reference an
invalid structure.
Fix by ensuring that the Abort sets the flag to avoid link traffic if it is
issued due to conditions where the link failed. |
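A simplified model of the behavior the fix introduces, with hypothetical structure and flag names rather than the lpfc ones: when the abort is issued because the link has failed, it is flagged so no ABTS is expected on the wire and the request is completed locally instead of stalling against a stale ELS.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical abort request; stands in for the driver's Abort WQE context. */
    struct abort_req {
        int  els_tag;            /* identifies the ELS being aborted */
        bool no_wire_traffic;    /* set when no ABTS should be sent on the link */
    };

    /* Build an abort for an outstanding ELS, taking the link state into account. */
    static void issue_abort(struct abort_req *abt, int els_tag, bool link_up)
    {
        abt->els_tag = els_tag;
        /* The idea of the fix: if the abort is issued because the link failed,
         * flag it so it is completed locally instead of waiting for wire traffic. */
        abt->no_wire_traffic = !link_up;

        if (abt->no_wire_traffic)
            printf("ELS %d: abort completed locally, link is down\n", els_tag);
        else
            printf("ELS %d: ABTS sent on the wire\n", els_tag);
    }

    int main(void)
    {
        struct abort_req a, b;
        issue_abort(&a, 1, true);    /* normal case: ABTS goes out */
        issue_abort(&b, 2, false);   /* link down: no wire traffic, no stall */
        return 0;
    }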
In the Linux kernel, the following vulnerability has been resolved:
RDMA/rxe: Return CQE error if invalid lkey was supplied
RXE is missing an update of the WQE status in LOCAL_WRITE failures. This
caused the following kernel panic if someone sent an atomic operation with
an explicitly wrong lkey.
[leonro@vm ~]$ mkt test
test_atomic_invalid_lkey (tests.test_atomic.AtomicTest) ...
WARNING: CPU: 5 PID: 263 at drivers/infiniband/sw/rxe/rxe_comp.c:740 rxe_completer+0x1a6d/0x2e30 [rdma_rxe]
Modules linked in: crc32_generic rdma_rxe ip6_udp_tunnel udp_tunnel rdma_ucm rdma_cm ib_umad ib_ipoib iw_cm ib_cm mlx5_ib ib_uverbs ib_core mlx5_core ptp pps_core
CPU: 5 PID: 263 Comm: python3 Not tainted 5.13.0-rc1+ #2936
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:rxe_completer+0x1a6d/0x2e30 [rdma_rxe]
Code: 03 0f 8e 65 0e 00 00 3b 93 10 06 00 00 0f 84 82 0a 00 00 4c 89 ff 4c 89 44 24 38 e8 2d 74 a9 e1 4c 8b 44 24 38 e9 1c f5 ff ff <0f> 0b e9 0c e8 ff ff b8 05 00 00 00 41 bf 05 00 00 00 e9 ab e7 ff
RSP: 0018:ffff8880158af090 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff888016a78000 RCX: ffffffffa0cf1652
RDX: 1ffff9200004b442 RSI: 0000000000000004 RDI: ffffc9000025a210
RBP: dffffc0000000000 R08: 00000000ffffffea R09: ffff88801617740b
R10: ffffed1002c2ee81 R11: 0000000000000007 R12: ffff88800f3b63e8
R13: ffff888016a78008 R14: ffffc9000025a180 R15: 000000000000000c
FS: 00007f88b622a740(0000) GS:ffff88806d540000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f88b5a1fa10 CR3: 000000000d848004 CR4: 0000000000370ea0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
rxe_do_task+0x130/0x230 [rdma_rxe]
rxe_rcv+0xb11/0x1df0 [rdma_rxe]
rxe_loopback+0x157/0x1e0 [rdma_rxe]
rxe_responder+0x5532/0x7620 [rdma_rxe]
rxe_do_task+0x130/0x230 [rdma_rxe]
rxe_rcv+0x9c8/0x1df0 [rdma_rxe]
rxe_loopback+0x157/0x1e0 [rdma_rxe]
rxe_requester+0x1efd/0x58c0 [rdma_rxe]
rxe_do_task+0x130/0x230 [rdma_rxe]
rxe_post_send+0x998/0x1860 [rdma_rxe]
ib_uverbs_post_send+0xd5f/0x1220 [ib_uverbs]
ib_uverbs_write+0x847/0xc80 [ib_uverbs]
vfs_write+0x1c5/0x840
ksys_write+0x176/0x1d0
do_syscall_64+0x3f/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae |
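A compact sketch of what the fix adds, using hypothetical names and status codes rather than the rxe/verbs ones: when the supplied lkey does not match a registered region, the work request's status is set to a protection error so the completer returns an error CQE instead of hitting an unexpected state.

    #include <stdio.h>

    /* Hypothetical status codes; stand-ins for IB_WC_* values. */
    enum wc_status { WC_SUCCESS = 0, WC_LOC_PROT_ERR = 1 };

    /* Hypothetical send WQE; stands in for the rxe work queue element. */
    struct wqe {
        unsigned int lkey;
        enum wc_status status;
    };

    /* Key of the one registered memory region in this toy model. */
    static const unsigned int valid_lkey = 0x1234;

    /* Validate the lkey; on failure record the error on the WQE so that the
     * completer raises an error CQE instead of tripping over a bogus state. */
    static int check_lkey(struct wqe *w)
    {
        if (w->lkey != valid_lkey) {
            w->status = WC_LOC_PROT_ERR;   /* the update that was missing */
            return -1;
        }
        w->status = WC_SUCCESS;
        return 0;
    }

    static void complete_wqe(const struct wqe *w)
    {
        printf("CQE: %s\n", w->status == WC_SUCCESS ? "success"
                                                    : "local protection error");
    }

    int main(void)
    {
        struct wqe good = { .lkey = 0x1234 };
        struct wqe bad  = { .lkey = 0xdead };

        check_lkey(&good);
        check_lkey(&bad);
        complete_wqe(&good);   /* normal completion */
        complete_wqe(&bad);    /* error CQE returned to the user */
        return 0;
    }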
In the Linux kernel, the following vulnerability has been resolved:
uio_hv_generic: Fix another memory leak in error handling paths
Memory allocated by 'vmbus_alloc_ring()' at the beginning of the probe
function is never freed in the error handling path.
Add the missing 'vmbus_free_ring()' call.
Note that it is already freed in the .remove function. |
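A minimal model of the probe error handling pattern the fix restores, with hypothetical helper names standing in for vmbus_alloc_ring()/vmbus_free_ring(): any failure after the ring allocation unwinds through a label that frees the ring.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-ins for the ring buffer helpers and a later step. */
    static void *alloc_ring(void)        { return malloc(4096); }
    static void  free_ring(void *ring)   { free(ring); }
    static int   register_device(int ok) { return ok ? 0 : -1; }

    /* Probe: the ring is allocated first, so every later failure must jump
     * to a label that frees it -- the call the error path was missing. */
    static int probe(int later_step_ok, void **ring_out)
    {
        void *ring = alloc_ring();
        int ret;

        if (!ring)
            return -1;

        ret = register_device(later_step_ok);
        if (ret)
            goto err_free_ring;

        *ring_out = ring;
        return 0;

    err_free_ring:
        free_ring(ring);
        return ret;
    }

    int main(void)
    {
        void *ring = NULL;
        printf("probe failure path: %d\n", probe(0, &ring));  /* ring freed */
        printf("probe success path: %d\n", probe(1, &ring));
        free_ring(ring);                                      /* mirrors .remove */
        return 0;
    }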
In the Linux kernel, the following vulnerability has been resolved:
btrfs: fix deadlock when cloning inline extents and using qgroups
There are a few exceptional cases where cloning an inline extent needs to
copy the inline extent data into a page of the destination inode.
When this happens, we end up starting a transaction while having a dirty
page for the destination inode and while having the range locked in the
destination inode's io tree too. Because when reserving metadata space
for a transaction we may need to flush existing delalloc in case there is
not enough free space, we have a mechanism in place to prevent a deadlock,
which was introduced in commit 3d45f221ce627d ("btrfs: fix deadlock when
cloning inline extent and low on free metadata space").
However when using qgroups, a transaction also reserves metadata qgroup
space, which can also result in flushing delalloc in case there is not
enough available space at the moment. When this happens we deadlock, since
flushing delalloc requires locking the file range in the inode's iotree
and the range was already locked at the very beginning of the clone
operation, before attempting to start the transaction.
When this issue happens, stack traces like the following are reported:
[72747.556262] task:kworker/u81:9 state:D stack: 0 pid: 225 ppid: 2 flags:0x00004000
[72747.556268] Workqueue: writeback wb_workfn (flush-btrfs-1142)
[72747.556271] Call Trace:
[72747.556273] __schedule+0x296/0x760
[72747.556277] schedule+0x3c/0xa0
[72747.556279] io_schedule+0x12/0x40
[72747.556284] __lock_page+0x13c/0x280
[72747.556287] ? generic_file_readonly_mmap+0x70/0x70
[72747.556325] extent_write_cache_pages+0x22a/0x440 [btrfs]
[72747.556331] ? __set_page_dirty_nobuffers+0xe7/0x160
[72747.556358] ? set_extent_buffer_dirty+0x5e/0x80 [btrfs]
[72747.556362] ? update_group_capacity+0x25/0x210
[72747.556366] ? cpumask_next_and+0x1a/0x20
[72747.556391] extent_writepages+0x44/0xa0 [btrfs]
[72747.556394] do_writepages+0x41/0xd0
[72747.556398] __writeback_single_inode+0x39/0x2a0
[72747.556403] writeback_sb_inodes+0x1ea/0x440
[72747.556407] __writeback_inodes_wb+0x5f/0xc0
[72747.556410] wb_writeback+0x235/0x2b0
[72747.556414] ? get_nr_inodes+0x35/0x50
[72747.556417] wb_workfn+0x354/0x490
[72747.556420] ? newidle_balance+0x2c5/0x3e0
[72747.556424] process_one_work+0x1aa/0x340
[72747.556426] worker_thread+0x30/0x390
[72747.556429] ? create_worker+0x1a0/0x1a0
[72747.556432] kthread+0x116/0x130
[72747.556435] ? kthread_park+0x80/0x80
[72747.556438] ret_from_fork+0x1f/0x30
[72747.566958] Workqueue: btrfs-flush_delalloc btrfs_work_helper [btrfs]
[72747.566961] Call Trace:
[72747.566964] __schedule+0x296/0x760
[72747.566968] ? finish_wait+0x80/0x80
[72747.566970] schedule+0x3c/0xa0
[72747.566995] wait_extent_bit.constprop.68+0x13b/0x1c0 [btrfs]
[72747.566999] ? finish_wait+0x80/0x80
[72747.567024] lock_extent_bits+0x37/0x90 [btrfs]
[72747.567047] btrfs_invalidatepage+0x299/0x2c0 [btrfs]
[72747.567051] ? find_get_pages_range_tag+0x2cd/0x380
[72747.567076] __extent_writepage+0x203/0x320 [btrfs]
[72747.567102] extent_write_cache_pages+0x2bb/0x440 [btrfs]
[72747.567106] ? update_load_avg+0x7e/0x5f0
[72747.567109] ? enqueue_entity+0xf4/0x6f0
[72747.567134] extent_writepages+0x44/0xa0 [btrfs]
[72747.567137] ? enqueue_task_fair+0x93/0x6f0
[72747.567140] do_writepages+0x41/0xd0
[72747.567144] __filemap_fdatawrite_range+0xc7/0x100
[72747.567167] btrfs_run_delalloc_work+0x17/0x40 [btrfs]
[72747.567195] btrfs_work_helper+0xc2/0x300 [btrfs]
[72747.567200] process_one_work+0x1aa/0x340
[72747.567202] worker_thread+0x30/0x390
[72747.567205] ? create_worker+0x1a0/0x1a0
[72747.567208] kthread+0x116/0x130
[72747.567211] ? kthread_park+0x80/0x80
[72747.567214] ret_from_fork+0x1f/0x30
[72747.569686] task:fsstress state:D stack:
---truncated--- |
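A self-contained sketch of the circular wait described above, using pthreads and hypothetical names: the clone task holds the range lock and waits for a flush it triggered, while the flush worker needs that same range lock. The model detects the cycle with a trylock rather than actually hanging (build with -pthread).

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-in for the extent io tree lock on the destination file range. */
    static pthread_mutex_t range_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Delalloc flush worker: writing back the dirty page needs the range lock. */
    static void *flush_worker(void *arg)
    {
        (void)arg;
        if (pthread_mutex_trylock(&range_lock) != 0) {
            /* In the kernel this blocks forever: the clone task holds the
             * lock and is itself waiting for this flush to complete. */
            printf("flush: range already locked by the clone task -> deadlock cycle\n");
            return NULL;
        }
        printf("flush: writeback completed\n");
        pthread_mutex_unlock(&range_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;

        /* Clone path: lock the destination range, dirty the page, then start
         * a transaction whose qgroup metadata reservation triggers a flush. */
        pthread_mutex_lock(&range_lock);
        printf("clone: range locked, reserving qgroup metadata space...\n");

        pthread_create(&worker, NULL, flush_worker, NULL);
        pthread_join(worker, NULL);   /* the clone waits on the flush it kicked off */

        pthread_mutex_unlock(&range_lock);
        return 0;
    }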
IBM InfoSphere Information Server 11.7 could allow an authenticated user to obtain sensitive information when a detailed technical error message is returned in a request. This information could be used in further attacks against the system. |
IBM InfoSphere Information Server 11.7 DataStage Flow Designer transmits sensitive information via URL or query parameters that could be exposed to an unauthorized actor using man-in-the-middle techniques. |
IBM Maximo Asset Management 7.6.1.3 is vulnerable to stored cross-site scripting. This vulnerability allows a privileged user to embed arbitrary JavaScript code in the Web UI thus altering the intended functionality potentially leading to credentials disclosure within a trusted session. |
IBM Operational Decision Manager 8.11.0.1, 8.11.1.0, 8.12.0.1, and 9.0.0.1 is vulnerable to cross-site scripting. This vulnerability allows an unauthenticated attacker to embed arbitrary JavaScript code in the Web UI thus altering the intended functionality potentially leading to credentials disclosure within a trusted session. |
IBM MQ Container, when used with the IBM MQ Operator LTS 2.0.0 through 2.0.29, MQ Operator CD 3.0.0, 3.0.1, 3.1.0 through 3.1.3, 3.3.0, 3.4.0, 3.4.1, 3.5.0, 3.5.1, and MQ Operator SC2 3.2.0 through 3.2.10, and configured with Cloud Pak for Integration Keycloak, could disclose sensitive information to a privileged user. |
IBM MQ Operator LTS 2.0.0 through 2.0.29, MQ Operator CD 3.0.0, 3.0.1, 3.1.0 through 3.1.3, 3.3.0, 3.4.0, 3.4.1, 3.5.0, 3.5.1, and MQ Operator SC2 3.2.0 through 3.2.10: a client connecting to an MQ Queue Manager can cause a SIGSEGV in the AMQRMPPA channel process, terminating it. |
IBM Db2 for Linux, UNIX and Windows 12.1.0 and 12.1.1 is vulnerable to a denial of service as the server may crash under certain conditions with a specially crafted query. |
IBM Concert Software 1.0.0 through 1.0.5 is vulnerable to server-side request forgery (SSRF). This may allow an authenticated attacker to send unauthorized requests from the system, potentially leading to network enumeration or facilitating other attacks. |
IBM Concert Software 1.0.0 through 1.0.5 could allow a remote attacker to traverse directories on the system. An attacker could send a specially crafted URL request containing "dot dot" sequences (/../) to view arbitrary files on the system. |
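A generic illustration of the server-side containment check that defeats this kind of request (not IBM Concert code; the base directory and file names are placeholders): the requested path is canonicalized and must still resolve inside the permitted base directory before it is served.

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return 1 only if base + "/" + requested resolves to a path inside base. */
    static int path_is_contained(const char *base, const char *requested)
    {
        char joined[PATH_MAX], resolved[PATH_MAX], resolved_base[PATH_MAX];

        if (!realpath(base, resolved_base))
            return 0;
        snprintf(joined, sizeof(joined), "%s/%s", base, requested);
        if (!realpath(joined, resolved))   /* also rejects nonexistent targets */
            return 0;

        size_t n = strlen(resolved_base);
        /* Contained only if the canonical path starts with the base directory
         * followed by a separator (or is the base directory itself). */
        return strncmp(resolved, resolved_base, n) == 0 &&
               (resolved[n] == '/' || resolved[n] == '\0');
    }

    int main(void)
    {
        const char *base = "/etc";   /* placeholder document root */
        printf("passwd          -> %s\n",
               path_is_contained(base, "passwd") ? "allowed" : "rejected");
        printf("../proc/version -> %s\n",
               path_is_contained(base, "../proc/version") ? "allowed" : "rejected");
        return 0;
    }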
Cross-Site Request Forgery (CSRF) vulnerability in Drupal Configuration Split allows Cross Site Request Forgery. This issue affects Configuration Split: from 0.0.0 before 1.10.0, from 2.0.0 before 2.0.2. |
Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability in Drupal SpamSpan filter allows Cross-Site Scripting (XSS). This issue affects SpamSpan filter: from 0.0.0 before 3.2.1. |
Cross-Site Request Forgery (CSRF) vulnerability in Drupal OAuth2 Client allows Cross Site Request Forgery. This issue affects OAuth2 Client: from 0.0.0 before 4.1.3. |
Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability in Drupal View Password allows Cross-Site Scripting (XSS). This issue affects View Password: from 0.0.0 before 6.0.4. |
A vulnerability was found in Project Worlds Free Download Online Shopping System up to 192.168.1.88. It has been rated as critical. This issue affects some unknown processing of the file /online-shopping-webvsite-in-php-master/success.php. The manipulation of the argument id leads to sql injection. The attack may be initiated remotely. The exploit has been disclosed to the public and may be used. |
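A generic sketch of input handling that blocks this class of attack (not the application's code; the robust fix is binding the id as a parameter in the site's database layer): the id argument is accepted only if it parses completely as an integer before it is ever placed into SQL text.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Parse "id" strictly as a non-negative integer; reject anything else. */
    static int parse_id(const char *s, long *out)
    {
        char *end = NULL;
        errno = 0;
        long v = strtol(s, &end, 10);
        if (errno != 0 || end == s || *end != '\0' || v < 0)
            return -1;
        *out = v;
        return 0;
    }

    int main(void)
    {
        const char *inputs[] = { "42", "1 OR 1=1", "7; DROP TABLE orders" };

        for (size_t i = 0; i < sizeof(inputs) / sizeof(inputs[0]); i++) {
            long id;
            if (parse_id(inputs[i], &id) == 0) {
                /* Only a validated integer ever reaches the query string; with
                 * a real driver, prefer binding it as a query parameter. */
                char query[128];
                snprintf(query, sizeof(query),
                         "SELECT * FROM orders WHERE id = %ld", id);
                printf("ok:       %s\n", query);
            } else {
                printf("rejected: %s\n", inputs[i]);
            }
        }
        return 0;
    }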