From patchwork Tue Apr 23 20:47:16 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bethany Jamison
X-Patchwork-Id: 1926775
From: Bethany Jamison
To: kernel-team@lists.ubuntu.com
Subject: [SRU][F][PATCH v2 1/1] powerpc/kasan: Fix addr error caused by page alignment
Date: Tue, 23 Apr 2024 15:47:16 -0500
Message-Id: <20240423204716.27033-3-bethany.jamison@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240423204716.27033-1-bethany.jamison@canonical.com>
References: <20240423204716.27033-1-bethany.jamison@canonical.com>
List-Id: Kernel team discussions

From: Jiangfeng Xiao

In kasan_init_region(), when k_start is not page aligned, the first loop
iteration sets k_cur = k_start & PAGE_MASK, which is less than k_start,
so va = block + k_cur - k_start is less than block. That va is invalid:
the memory between va and block was not allocated by memblock_alloc(),
so it is not reserved by the later memblock_reserve() and can be handed
out to other users. As a result, memory overwriting occurs.

For example:

int __init __weak kasan_init_region(void *start, size_t size)
{
	[...]
	/* say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	[...]
	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
		/* at the start of the first iteration:
		 * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
		 * va(dcd96c00) is less than block(dcd97000), so va is invalid
		 */
		void *va = block + k_cur - k_start;
		[...]
	}
	[...]
}

Therefore, page-align k_start before memblock_alloc() to ensure the
validity of the va address.
Fixes: 663c0c9496a6 ("powerpc/kasan: Fix shadow area set up for modules.")
Signed-off-by: Jiangfeng Xiao
Signed-off-by: Michael Ellerman
Link: https://msgid.link/1705974359-43790-1-git-send-email-xiaojiangfeng@huawei.com
(backported from commit 4a7aee96200ad281a5cc4cf5c7a2e2a49d2b97b0)
[bjamison: context conflict - added k_start realignment to appropriate spot in code]
CVE-2024-26712
Signed-off-by: Bethany Jamison
---
 arch/powerpc/mm/kasan/kasan_init_32.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 3f78007a72822..84b0bd1b8ff3b 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -90,8 +90,10 @@ static int __ref kasan_init_region(void *start, size_t size)
 	if (ret)
 		return ret;
 
-	if (!slab_is_available())
+	if (!slab_is_available()) {
+		k_start = k_start & PAGE_MASK;
 		block = memblock_alloc(k_end - k_start, PAGE_SIZE);
+	}
 
 	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
 		pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);