Message ID | 1705974359-43790-1-git-send-email-xiaojiangfeng@huawei.com (mailing list archive) |
---|---|
State | Accepted |
Commit | 4a7aee96200ad281a5cc4cf5c7a2e2a49d2b97b0 |
Series | [PING] powerpc/kasan: Fix addr error caused by page alignment |
Context | Check | Description |
---|---|---|
snowpatch_ozlabs/github-powerpc_ppctests | success | Successfully ran 8 jobs. |
snowpatch_ozlabs/github-powerpc_selftests | success | Successfully ran 8 jobs. |
snowpatch_ozlabs/github-powerpc_sparse | success | Successfully ran 4 jobs. |
snowpatch_ozlabs/github-powerpc_clang | success | Successfully ran 6 jobs. |
snowpatch_ozlabs/github-powerpc_kernel_qemu | success | Successfully ran 23 jobs. |
On 23/01/2024 at 02:45, Jiangfeng Xiao wrote:
> In kasan_init_region, when k_start is not page aligned, the first
> iteration of the for loop computes k_cur = k_start & PAGE_MASK, which
> is less than k_start. Then va = block + k_cur - k_start is less than
> block, so va is an invalid address: the memory from va up to block was
> not allocated by memblock_alloc, will not be reserved by
> memblock_reserve later, and can be handed out to other users.
>
> As a result, memory overwriting occurs.
>
> For example:
>
> int __init __weak kasan_init_region(void *start, size_t size)
> {
> 	[...]
> 	/* say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
> 	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
> 	[...]
> 	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
> 		/* at the first loop iteration:
> 		 * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
> 		 * va(dcd96c00) is less than block(dcd97000), so va is invalid
> 		 */
> 		void *va = block + k_cur - k_start;
> 		[...]
> 	}
> 	[...]
> }
>
> Therefore, page-align k_start before memblock_alloc to ensure the
> validity of the va address.
>
> Fixes: 663c0c9496a6 ("powerpc/kasan: Fix shadow area set up for modules.")
> Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>

Be patient, your patch is not lost.
Now we have it twice, see:
https://patchwork.ozlabs.org/project/linuxppc-dev/list/?submitter=76392

> ---
>  arch/powerpc/mm/kasan/init_32.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
> index a70828a..aa9aa11 100644
> --- a/arch/powerpc/mm/kasan/init_32.c
> +++ b/arch/powerpc/mm/kasan/init_32.c
> @@ -64,6 +64,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
>  	if (ret)
>  		return ret;
>
> +	k_start = k_start & PAGE_MASK;
>  	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
>  	if (!block)
>  		return -ENOMEM;
> --
> 1.8.5.6
On Tue, 23 Jan 2024 09:45:59 +0800, Jiangfeng Xiao wrote:
> In kasan_init_region, when k_start is not page aligned, the first
> iteration of the for loop computes k_cur = k_start & PAGE_MASK, which
> is less than k_start. Then va = block + k_cur - k_start is less than
> block, so va is an invalid address: the memory from va up to block was
> not allocated by memblock_alloc, will not be reserved by
> memblock_reserve later, and can be handed out to other users.
>
> [...]

Applied to powerpc/fixes.

[1/1] powerpc/kasan: Fix addr error caused by page alignment
      https://git.kernel.org/powerpc/c/4a7aee96200ad281a5cc4cf5c7a2e2a49d2b97b0

cheers
diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
index a70828a..aa9aa11 100644
--- a/arch/powerpc/mm/kasan/init_32.c
+++ b/arch/powerpc/mm/kasan/init_32.c
@@ -64,6 +64,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
 	if (ret)
 		return ret;
 
+	k_start = k_start & PAGE_MASK;
 	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
 	if (!block)
 		return -ENOMEM;
In kasan_init_region, when k_start is not page aligned, the first
iteration of the for loop computes k_cur = k_start & PAGE_MASK, which is
less than k_start. Then va = block + k_cur - k_start is less than block,
so va is an invalid address: the memory from va up to block was not
allocated by memblock_alloc, will not be reserved by memblock_reserve
later, and can be handed out to other users.

As a result, memory overwriting occurs.

For example:

int __init __weak kasan_init_region(void *start, size_t size)
{
	[...]
	/* say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	[...]
	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
		/* at the first loop iteration:
		 * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
		 * va(dcd96c00) is less than block(dcd97000), so va is invalid
		 */
		void *va = block + k_cur - k_start;
		[...]
	}
	[...]
}

Therefore, page-align k_start before memblock_alloc to ensure the
validity of the va address.

Fixes: 663c0c9496a6 ("powerpc/kasan: Fix shadow area set up for modules.")
Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
---
 arch/powerpc/mm/kasan/init_32.c | 1 +
 1 file changed, 1 insertion(+)