From patchwork Sun Sep 3 23:45:59 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1829252
From: Cengiz Can <cengiz.can@canonical.com>
To: kernel-team@lists.ubuntu.com
Subject: [SRU Focal 2/6] x86/kasan: Map shadow for percpu pages on demand
Date: Mon, 4 Sep 2023 02:45:59 +0300
Message-Id: <20230903234603.859937-3-cengiz.can@canonical.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230903234603.859937-1-cengiz.can@canonical.com>
References: <20230903234603.859937-1-cengiz.can@canonical.com>
MIME-Version: 1.0

From: Andrey Ryabinin

KASAN maps shadow for the entire CPU-entry-area:
  [CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE]

This will explode once the per-cpu entry areas are randomized, since that
will increase CPU_ENTRY_AREA_MAP_SIZE to 512 GB and KASAN fails to
allocate shadow for such a big area.

Fix this by allocating KASAN shadow only for the cpu entry area addresses
that are actually used, i.e. those mapped by cea_map_percpu_pages().

Thanks to the 0day folks for finding and reporting this to be an issue.
[ dhansen: tweak changelog since this will get committed before peterz's
  actual cpu-entry-area randomization ]

Signed-off-by: Andrey Ryabinin
Signed-off-by: Dave Hansen
Tested-by: Yujie Liu
Cc: kernel test robot
Link: https://lore.kernel.org/r/202210241508.2e203c3d-yujie.liu@intel.com
(cherry picked from commit 3f148f3318140035e87decc1214795ff0755757b)
CVE-2023-0597
[cengizcan: prerequisite commit]
Signed-off-by: Cengiz Can
---
 arch/x86/include/asm/kasan.h | 3 +++
 arch/x86/mm/cpu_entry_area.c | 8 +++++++-
 arch/x86/mm/kasan_init_64.c  | 15 ++++++++++++---
 3 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 13e70da38bed..de75306b932e 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -28,9 +28,12 @@
 #ifdef CONFIG_KASAN
 void __init kasan_early_init(void);
 void __init kasan_init(void);
+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
 #else
 static inline void kasan_early_init(void) { }
 static inline void kasan_init(void) { }
+static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
+						   int nid) { }
 #endif
 
 #endif
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index d9643647a9ce..102e1d63f99a 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -9,6 +9,7 @@
 #include <asm/cpu_entry_area.h>
 #include <asm/fixmap.h>
 #include <asm/desc.h>
+#include <asm/kasan.h>
 
 static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage);
 
@@ -48,8 +49,13 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
 static void __init
 cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 {
+	phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
+
+	kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
+					early_pfn_to_nid(PFN_DOWN(pa)));
+
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
+		cea_set_pte(cea_vaddr, pa, prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 296da58f3013..76e913611136 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -293,6 +293,18 @@ void __init kasan_early_init(void)
 	kasan_map_early_shadow(init_top_pgt);
 }
 
+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
+{
+	unsigned long shadow_start, shadow_end;
+
+	shadow_start = (unsigned long)kasan_mem_to_shadow(va);
+	shadow_start = round_down(shadow_start, PAGE_SIZE);
+	shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
+	shadow_end = round_up(shadow_end, PAGE_SIZE);
+
+	kasan_populate_shadow(shadow_start, shadow_end, nid);
+}
+
 void __init kasan_init(void)
 {
 	int i;
@@ -356,9 +368,6 @@ void __init kasan_init(void)
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
 		shadow_cpu_entry_begin);
 
-	kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
-			(unsigned long)shadow_cpu_entry_end, 0);
-
 	kasan_populate_early_shadow(shadow_cpu_entry_end,
 			kasan_mem_to_shadow((void *)__START_KERNEL_map));
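
Note for reviewers: the sketch below is NOT part of the patch; it is a
minimal user-space illustration of the shadow arithmetic the new
kasan_populate_shadow_for_vaddr() helper relies on. In generic KASAN one
shadow byte covers 8 bytes of address space, so a range's shadow is
(addr >> 3) + KASAN_SHADOW_OFFSET, rounded out to page boundaries before
kasan_populate_shadow() maps it. The constants are the x86-64 defaults,
the CEA address is a made-up example, and round_down_p2()/round_up_p2()
stand in for the kernel's round_down()/round_up() macros.

/*
 * Illustration only, not kernel code. Assumes an LP64 target
 * (unsigned long is 64 bits).
 */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE                4096UL
#define KASAN_SHADOW_SCALE_SHIFT 3                    /* 1 shadow byte per 8 bytes */
#define KASAN_SHADOW_OFFSET      0xdffffc0000000000UL /* x86-64 default */

/* Mirrors the kernel's kasan_mem_to_shadow() for generic KASAN. */
static unsigned long mem_to_shadow(unsigned long addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

static unsigned long round_down_p2(unsigned long x, unsigned long a)
{
	return x & ~(a - 1);		/* 'a' must be a power of two */
}

static unsigned long round_up_p2(unsigned long x, unsigned long a)
{
	return (x + a - 1) & ~(a - 1);
}

int main(void)
{
	/* Hypothetical 4-page per-cpu mapping inside the CPU entry area. */
	unsigned long va = 0xfffffe0000001000UL;
	size_t size = 4 * PAGE_SIZE;

	unsigned long shadow_start =
		round_down_p2(mem_to_shadow(va), PAGE_SIZE);
	unsigned long shadow_end =
		round_up_p2(mem_to_shadow(va + size), PAGE_SIZE);

	/*
	 * 16 KiB of mapped memory compresses to 2 KiB of shadow, so the
	 * populated range collapses to a single 4 KiB shadow page.
	 */
	printf("shadow range: 0x%lx-0x%lx (%lu page(s))\n",
	       shadow_start, shadow_end,
	       (shadow_end - shadow_start) / PAGE_SIZE);
	return 0;
}

Because of the 8:1 compression, the four mapped pages in the example need
only a single page of shadow, which is why populating shadow per
cea_map_percpu_pages() call stays cheap even with the entry areas spread
across a 512 GB window.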