From patchwork Fri Jun 14 14:49:55 2024
X-Patchwork-Submitter: Philip Cox
X-Patchwork-Id: 1947942
From: Philip Cox
To: kernel-team@lists.ubuntu.com
Subject: [j/n][linux-aws][PATCH 1/2] arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
Date: Fri, 14 Jun 2024 10:49:55 -0400
Message-Id: <20240614144957.572216-2-philip.cox@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240614144957.572216-1-philip.cox@canonical.com>
References: <20240614144957.572216-1-philip.cox@canonical.com>

From: Ryan Roberts

BugLink: https://bugs.launchpad.net/bugs/2069352

A large part of the kernel boot time is spent creating the kernel linear map
page tables. When rodata=full, all memory is mapped by pte. And when there is
lots of physical ram, there are lots of pte tables to populate. The primary
cost associated with this is mapping and unmapping the pte table memory in
the fixmap; at unmap time, the TLB entry must be invalidated and this is
expensive.

Previously, each pmd and pte table was fixmapped/fixunmapped for each
cont(pte|pmd) block of mappings (16 entries with 4K granule). This means we
ended up issuing 32 TLBIs per (pmd|pte) table during the population phase.

Let's fix that, and fixmap/fixunmap each page once per population, for a
saving of 31 TLBIs per (pmd|pte) table. This gives a significant boot speedup.

Execution time of map_mem(), which creates the kernel linear map page tables,
was measured on different machines with different RAM configs:

               | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
               | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
---------------|-------------|-------------|-------------|-------------
               |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
---------------|-------------|-------------|-------------|-------------
before         |   168   (0%)|  2198   (0%)|  8644   (0%)| 17447   (0%)
after          |    78 (-53%)|   435 (-80%)|  1723 (-80%)|  3779 (-78%)

Signed-off-by: Ryan Roberts
Tested-by: Itaru Kitayama
Tested-by: Eric Chanudet
Reviewed-by: Mark Rutland
Reviewed-by: Ard Biesheuvel
Link: https://lore.kernel.org/r/20240412131908.433043-2-ryan.roberts@arm.com
Signed-off-by: Will Deacon
(cherry picked from commit 5c63db59c5f89925add57642be4f789d0d671ccd)
Signed-off-by: Philip Cox
---
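Illustrative aside (not part of the commit): the change hoists
pte_set_fixmap_offset()/pte_clear_fixmap() (and the pmd equivalents) out of
init_pte()/init_pmd() and into their alloc_init_cont_* callers, so each table
is fixmapped and fixunmapped once per population instead of once per 16-entry
cont block. The stand-alone C sketch below only models that cost difference;
map_table(), unmap_table() and the counter are invented stand-ins for
illustration, not kernel API.

/*
 * User-space model only: compare remapping a 512-entry pte table through
 * the fixmap once per 16-entry cont block versus once per table.  Each
 * unmap stands in for the TLB invalidation that fixunmapping implies.
 */
#include <stdio.h>

#define PTRS_PER_TABLE	512	/* pte entries per table, 4K granule */
#define CONT_ENTRIES	16	/* entries covered by one cont(pte) block */

static int tlbis;

static void map_table(void)   { /* fixmap the table: no TLBI needed here */ }
static void unmap_table(void) { tlbis++; /* fixunmap: one TLBI */ }

static void populate(int entries_per_window)
{
	tlbis = 0;
	for (int i = 0; i < PTRS_PER_TABLE; i += entries_per_window) {
		map_table();
		/* ... write entries_per_window ptes ... */
		unmap_table();
	}
	printf("%3d entries per fixmap window -> %2d TLBIs per table\n",
	       entries_per_window, tlbis);
}

int main(void)
{
	populate(CONT_ENTRIES);		/* before: 32 TLBIs per table */
	populate(PTRS_PER_TABLE);	/* after:   1 TLBI per table  */
	return 0;
}

Compiled and run as-is, the model prints 32 TLBIs per table for the per-block
scheme and 1 for the per-table scheme, matching the 31-TLBI saving quoted in
the commit message above.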
 arch/arm64/mm/mmu.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e070012c41c0..82ecfae0c9fe 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -153,12 +153,9 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
 	return ((old ^ new) & ~mask) == 0;
 }
 
-static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
+static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
 		     phys_addr_t phys, pgprot_t prot)
 {
-	pte_t *ptep;
-
-	ptep = pte_set_fixmap_offset(pmdp, addr);
 	do {
 		pte_t old_pte = READ_ONCE(*ptep);
 
@@ -173,8 +170,6 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
 
 		phys += PAGE_SIZE;
 	} while (ptep++, addr += PAGE_SIZE, addr != end);
-
-	pte_clear_fixmap();
 }
 
 static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
@@ -185,6 +180,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 {
 	unsigned long next;
 	pmd_t pmd = READ_ONCE(*pmdp);
+	pte_t *ptep;
 
 	BUG_ON(pmd_sect(pmd));
 	if (pmd_none(pmd)) {
@@ -200,6 +196,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 	}
 	BUG_ON(pmd_bad(pmd));
 
+	ptep = pte_set_fixmap_offset(pmdp, addr);
 	do {
 		pgprot_t __prot = prot;
 
@@ -210,20 +207,21 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 		    (flags & NO_CONT_MAPPINGS) == 0)
 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
 
-		init_pte(pmdp, addr, next, phys, __prot);
+		init_pte(ptep, addr, next, phys, __prot);
+		ptep += pte_index(next) - pte_index(addr);
 
 		phys += next - addr;
 	} while (addr = next, addr != end);
+
+	pte_clear_fixmap();
 }
 
-static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
+static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
 		     phys_addr_t phys, pgprot_t prot,
 		     phys_addr_t (*pgtable_alloc)(int), int flags)
 {
 	unsigned long next;
-	pmd_t *pmdp;
 
-	pmdp = pmd_set_fixmap_offset(pudp, addr);
 	do {
 		pmd_t old_pmd = READ_ONCE(*pmdp);
 
@@ -249,8 +247,6 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 		}
 		phys += next - addr;
 	} while (pmdp++, addr = next, addr != end);
-
-	pmd_clear_fixmap();
 }
 
 static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
@@ -260,6 +256,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 {
 	unsigned long next;
 	pud_t pud = READ_ONCE(*pudp);
+	pmd_t *pmdp;
 
 	/*
 	 * Check for initial section mappings in the pgd/pud.
@@ -278,6 +275,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 	}
 	BUG_ON(pud_bad(pud));
 
+	pmdp = pmd_set_fixmap_offset(pudp, addr);
 	do {
 		pgprot_t __prot = prot;
 
@@ -288,10 +286,13 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 		    (flags & NO_CONT_MAPPINGS) == 0)
 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
 
-		init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
+		init_pmd(pmdp, addr, next, phys, __prot, pgtable_alloc, flags);
+		pmdp += pmd_index(next) - pmd_index(addr);
 
 		phys += next - addr;
 	} while (addr = next, addr != end);
+
+	pmd_clear_fixmap();
 }
 
 static inline bool use_1G_block(unsigned long addr, unsigned long next,

From patchwork Fri Jun 14 14:49:56 2024
X-Patchwork-Submitter: Philip Cox
X-Patchwork-Id: 1947943
From: Philip Cox
To: kernel-team@lists.ubuntu.com
Subject: [j][linux-aws][PATCH 2/2] arm64: mm: Batch dsb and isb when populating pgtables
Date: Fri, 14 Jun 2024 10:49:56 -0400
Message-Id: <20240614144957.572216-3-philip.cox@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240614144957.572216-1-philip.cox@canonical.com>
References: <20240614144957.572216-1-philip.cox@canonical.com>

From: Ryan Roberts

BugLink: https://bugs.launchpad.net/bugs/2069352

After removing unnecessary TLBIs, the next bottleneck when creating the page
tables for the linear map is DSB and ISB, which were previously issued
per-pte in __set_pte(). Since we are writing multiple ptes in a given pte
table, we can elide these barriers and insert them once we have finished
writing to the table.

Execution time of map_mem(), which creates the kernel linear map page tables,
was measured on different machines with different RAM configs:

               | Apple M2 VM | Ampere Altra| Ampere Altra| Ampere Altra
               | VM, 16G     | VM, 64G     | VM, 256G    | Metal, 512G
---------------|-------------|-------------|-------------|-------------
               |   ms    (%) |   ms    (%) |   ms    (%) |    ms    (%)
---------------|-------------|-------------|-------------|-------------
before         |    78   (0%)|   435   (0%)|  1723   (0%)|  3779   (0%)
after          |    11 (-86%)|   161 (-63%)|   656 (-62%)|  1654 (-56%)

Signed-off-by: Ryan Roberts
Tested-by: Itaru Kitayama
Tested-by: Eric Chanudet
Reviewed-by: Mark Rutland
Reviewed-by: Ard Biesheuvel
Link: https://lore.kernel.org/r/20240412131908.433043-3-ryan.roberts@arm.com
Signed-off-by: Will Deacon
(backported from commit 1fcb7cea8a5f7747e02230f816c2c80b060d9517
 [context changes in init_pte(), replaced __set_pte() with set_pte()])
Signed-off-by: Philip Cox
---
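Illustrative aside (not part of the commit): set_pte() on arm64 follows the
pte write with a dsb(ishst)/isb pair for valid kernel mappings, so filling a
full 512-entry pte table previously meant one barrier pair per pte. The patch
stores the ptes with the new __set_pte_nosync() helper and relies on the
barriers that tearing down the table's fixmap slot already requires. The
stand-alone C sketch below only models that batching; write_pte_synced(),
write_pte_nosync(), sync_walker() and the counter are invented stand-ins for
illustration, not kernel API.

/*
 * User-space model only: count barrier pairs issued while filling one
 * 512-entry pte table, before (sync per pte write) and after (one sync,
 * deferred until the table's fixmap slot is torn down).
 */
#include <stdio.h>

#define PTRS_PER_TABLE	512	/* pte entries per table, 4K granule */

static int barrier_pairs;

static void sync_walker(void)      { barrier_pairs++; /* dsb+isb stand-in */ }
static void write_pte_synced(void) { /* store pte */ sync_walker(); }
static void write_pte_nosync(void) { /* store pte, no barriers */ }

int main(void)
{
	barrier_pairs = 0;
	for (int i = 0; i < PTRS_PER_TABLE; i++)
		write_pte_synced();
	printf("before: %d barrier pairs per table\n", barrier_pairs);

	barrier_pairs = 0;
	for (int i = 0; i < PTRS_PER_TABLE; i++)
		write_pte_nosync();
	sync_walker();	/* batched once, at fixmap-teardown time */
	printf("after:  %d barrier pair per table\n", barrier_pairs);

	return 0;
}

The point of the model is just the count: roughly one barrier pair per table
populated (supplied by the fixmap teardown) instead of one per pte written.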
 arch/arm64/include/asm/pgtable.h |  7 ++++++-
 arch/arm64/mm/mmu.c              | 11 ++++++++++-
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b5e969bc074d..8be993e49356 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -252,9 +252,14 @@ static inline pte_t pte_mkdevmap(pte_t pte)
 	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
 }
 
-static inline void set_pte(pte_t *ptep, pte_t pte)
+static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
 {
 	WRITE_ONCE(*ptep, pte);
+}
+
+static inline void set_pte(pte_t *ptep, pte_t pte)
+{
+	__set_pte_nosync(ptep, pte);
 
 	/*
 	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 82ecfae0c9fe..c480447cbb98 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -159,7 +159,11 @@ static void init_pte(pte_t *ptep, unsigned long addr, unsigned long end,
 	do {
 		pte_t old_pte = READ_ONCE(*ptep);
 
-		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
+		/*
+		 * Required barriers to make this visible to the table walker
+		 * are deferred to the end of alloc_init_cont_pte().
+		 */
+		__set_pte_nosync(ptep, pfn_pte(__phys_to_pfn(phys), prot));
 
 		/*
 		 * After the PTE entry has been populated once, we
@@ -213,6 +217,11 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 		phys += next - addr;
 	} while (addr = next, addr != end);
 
+	/*
+	 * Note: barriers and maintenance necessary to clear the fixmap slot
+	 * ensure that all previous pgtable writes are visible to the table
+	 * walker.
+	 */
 	pte_clear_fixmap();
 }