In the Linux kernel, the following vulnerability has been resolved:
powerpc/kasan: Fix addr error caused by page alignment
In kasan_init_region(), when k_start is not page aligned, the first iteration of
the for loop computes k_cur = k_start & PAGE_MASK, which is less than k_start.
`va = block + k_cur - k_start` is then less than block, so the address va is
invalid: the memory from va up to block was not allocated by memblock_alloc(),
will not be reserved by memblock_reserve() later, and can be handed out to
other users.
As a result, memory overwriting occurs.
For example:
int __init __weak kasan_init_region(void *start, size_t size)
{
	[...]
	/* say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	[...]
	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
		/* At the beginning of the for loop:
		 * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
		 * va(dcd96c00) is less than block(dcd97000), so va is invalid.
		 */
		void *va = block + k_cur - k_start;
		[...]
	}
	[...]
}
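The arithmetic can be reproduced outside the kernel. The following standalone
user-space sketch (hypothetical, not part of the patch) plugs in the addresses
quoted in the comment above and shows that va ends up 0x400 bytes below block:

#include <stdio.h>

#define PAGE_SIZE 0x1000UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
	/* Addresses taken from the example comment above (illustrative only). */
	unsigned long block   = 0xdcd97000UL;
	unsigned long k_start = 0xfeef7400UL;              /* not page aligned */
	unsigned long k_cur   = k_start & PAGE_MASK;       /* 0xfeef7000 */
	unsigned long va      = block + k_cur - k_start;   /* 0xdcd96c00 */

	/* va lies 0x400 bytes below block, i.e. outside the allocated region. */
	printf("block=0x%lx k_cur=0x%lx va=0x%lx (va < block: %d)\n",
	       block, k_cur, va, va < block);
	return 0;
}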
Therefore, k_start is page-aligned before the memblock_alloc() call to ensure
that the computed VA is always valid.
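A minimal sketch of the corrected flow, following the description above (the
exact upstream patch may differ in detail): aligning k_start down before the
allocation makes the block cover the partial first page as well, so every va
computed in the loop stays inside it.

int __init __weak kasan_init_region(void *start, size_t size)
{
	[...]
	/* Align k_start down so the allocation starts on a page boundary
	 * and also covers the partial first page.
	 */
	k_start = k_start & PAGE_MASK;
	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	[...]
	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
		/* k_cur now equals k_start on the first iteration,
		 * so va never drops below block.
		 */
		void *va = block + k_cur - k_start;
		[...]
	}
	[...]
}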
History
Fri, 22 Nov 2024 12:00:00 +0000
Type | Values Removed | Values Added |
---|---|---|
References | | |

Wed, 13 Nov 2024 02:45:00 +0000
Type | Values Removed | Values Added |
---|---|---|
First Time appeared | | Redhat, Redhat enterprise Linux |
CPEs | | cpe:/a:redhat:enterprise_linux:9, cpe:/o:redhat:enterprise_linux:9 |
Vendors & Products | | Redhat, Redhat enterprise Linux |

Tue, 05 Nov 2024 10:45:00 +0000
Type | Values Removed | Values Added |
---|---|---|
References | | |
MITRE
Status: PUBLISHED
Assigner: Linux
Published: 2024-04-03T14:55:14.149Z
Updated: 2024-11-05T15:47:32.026Z
Reserved: 2024-02-19T14:20:24.159Z
Link: CVE-2024-26712
Vulnrichment
Updated: 2024-08-02T00:14:12.618Z
NVD
Status: Awaiting Analysis
Published: 2024-04-03T15:15:53.590
Modified: 2024-11-21T09:02:53.947
Link: CVE-2024-26712
Redhat