From patchwork Thu Jun 8 02:10:56 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1791978
From: Cengiz Can
To: kernel-team@lists.ubuntu.com
Subject: [SRU Jammy, OEM-5.17, Kinetic, OEM-6.0 PATCH 5/5] x86/mm: Do not shuffle CPU entry areas without KASLR
Date: Thu, 8 Jun 2023 05:10:56 +0300
Message-Id: <20230608021055.203634-7-cengiz.can@canonical.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230608021055.203634-1-cengiz.can@canonical.com>
References: <20230608021055.203634-1-cengiz.can@canonical.com>
List-Id: Kernel team discussions

From: Michal Koutný

Commit 97e3d26b5e5f ("x86/mm: Randomize per-cpu entry area") fixed an
omission of KASLR on CPU entry areas. However, it does not take the KASLR
switches into account, which may result in unintended non-determinism when
a user wants to avoid it (e.g. for debugging or benchmarking).

Generate only a single combination of CPU entry area offsets -- the linear
array that existed prior to randomization -- when KASLR is turned off.

Since we have commit 3f148f331814 ("x86/kasan: Map shadow for percpu pages
on demand") and its follow-ups, we can use the more relaxed guard
kaslr_enabled() (in contrast to kaslr_memory_enabled()).
Fixes: 97e3d26b5e5f ("x86/mm: Randomize per-cpu entry area")
Signed-off-by: Michal Koutný
Signed-off-by: Dave Hansen
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/20230306193144.24605-1-mkoutny%40suse.com
CVE-2023-0597
(cherry picked from commit a3f547addcaa10df5a226526bc9e2d9a94542344)
Signed-off-by: Cengiz Can
---
 arch/x86/mm/cpu_entry_area.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 7c855dffcdc2..0d804624239f 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -10,6 +10,7 @@
 #include <asm/fixmap.h>
 #include <asm/desc.h>
 #include <asm/kasan.h>
+#include <asm/setup.h>
 
 static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page,
 		entry_stack_storage);
 
@@ -29,6 +30,12 @@ static __init void init_cea_offsets(void)
 	unsigned int max_cea;
 	unsigned int i, j;
 
+	if (!kaslr_enabled()) {
+		for_each_possible_cpu(i)
+			per_cpu(_cea_offset, i) = i;
+		return;
+	}
+
 	max_cea = (CPU_ENTRY_AREA_MAP_SIZE - PAGE_SIZE) / CPU_ENTRY_AREA_SIZE;
 
 	/* O(sodding terrible) */