In kasan_init_region(), when k_start is not page aligned, the first
iteration of the for loop computes k_cur = k_start & PAGE_MASK, which
is less than k_start. Then va = block + k_cur - k_start is less than
block, so va is invalid: the address range from va up to block was not
allocated by memblock_alloc, is therefore not reserved by
memblock_reserve later, and can be handed out to other users.

As a result, memory overwriting occurs.

For example:
int __init __weak kasan_init_region(void *start, size_t size)
{
[...]
        /* if say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
        block = memblock_alloc(k_end - k_start, PAGE_SIZE);
        [...]
        for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
                /* At the start of the first loop iteration:
                 * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
                 * va(dcd96c00) is less than block(dcd97000), so va is invalid
                 */
                void *va = block + k_cur - k_start;
                [...]
        }
[...]
}

Therefore, page-align k_start before calling memblock_alloc to ensure
that va is always a valid address within the allocated block.

Fixes: 663c0c9496a6 ("powerpc/kasan: Fix shadow area set up for modules.")

Signed-off-by: Jiangfeng Xiao <xiaojiangf...@huawei.com>
---
 arch/powerpc/mm/kasan/init_32.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
index a70828a..aa9aa11 100644
--- a/arch/powerpc/mm/kasan/init_32.c
+++ b/arch/powerpc/mm/kasan/init_32.c
@@ -64,6 +64,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
        if (ret)
                return ret;
 
+       k_start = k_start & PAGE_MASK;
        block = memblock_alloc(k_end - k_start, PAGE_SIZE);
        if (!block)
                return -ENOMEM;
-- 
1.8.5.6
