From patchwork Mon Apr 22 17:07:42 2024
X-Patchwork-Submitter: Bethany Jamison
X-Patchwork-Id: 1926258
From: Bethany Jamison
To: kernel-team@lists.ubuntu.com
Subject: [SRU][F][PATCH 1/1] powerpc/kasan: Fix addr error caused by page alignment
Date: Mon, 22 Apr 2024 12:07:42 -0500
Message-Id: <20240422170742.19770-3-bethany.jamison@canonical.com>
In-Reply-To: <20240422170742.19770-1-bethany.jamison@canonical.com>
References: <20240422170742.19770-1-bethany.jamison@canonical.com>
List-Id: Kernel team discussions

From: Jiangfeng Xiao

In kasan_init_region(), when k_start is not page aligned, the first
iteration of the for loop sets k_cur = k_start & PAGE_MASK, which is
less than k_start, so `va = block + k_cur - k_start` is less than
block. That va is invalid: the address range from va up to block was
not allocated by memblock_alloc() and therefore will not be reserved
by memblock_reserve() later, so it can be handed out to other users.
As a result, memory overwriting occurs.

For example:

    int __init __weak kasan_init_region(void *start, size_t size)
    {
        [...]
        /* say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
        block = memblock_alloc(k_end - k_start, PAGE_SIZE);
        [...]
        for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
            /*
             * At the beginning of the for loop:
             * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
             * va(dcd96c00) is less than block(dcd97000), so va is invalid.
             */
            void *va = block + k_cur - k_start;
            [...]
        }
        [...]
    }

Therefore, page-align k_start before memblock_alloc() to ensure the
validity of the va address.
Fixes: 663c0c9496a6 ("powerpc/kasan: Fix shadow area set up for modules.")
Signed-off-by: Jiangfeng Xiao
Signed-off-by: Michael Ellerman
Link: https://msgid.link/1705974359-43790-1-git-send-email-xiaojiangfeng@huawei.com
(backported from commit 4a7aee96200ad281a5cc4cf5c7a2e2a49d2b97b0)
[bjamison: context conflict - added k_start realignment to appropriate spot in code]
CVE-2024-26712
Signed-off-by: Bethany Jamison
---
 arch/powerpc/mm/kasan/kasan_init_32.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 3f78007a72822..8a294f94a7ca3 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -91,6 +91,7 @@ static int __ref kasan_init_region(void *start, size_t size)
 		return ret;
 
 	if (!slab_is_available())
+		k_start = k_start & PAGE_MASK;
 		block = memblock_alloc(k_end - k_start, PAGE_SIZE);
 
 	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {