[v2] powerpc/book3s64/hugetlb: Fix disabling hugetlb when fadump is active

Message ID 20241217074640.1064510-1-sourabhjain@linux.ibm.com (mailing list archive)
State Accepted
Commit d629d7a8efc33d05d62f4805c0ffb44727e3d99f
Headers show
Series [v2] powerpc/book3s64/hugetlb: Fix disabling hugetlb when fadump is active | expand

Checks

Context Check Description
snowpatch_ozlabs/github-powerpc_ppctests success Successfully ran 8 jobs.
snowpatch_ozlabs/github-powerpc_selftests success Successfully ran 8 jobs.
snowpatch_ozlabs/github-powerpc_sparse success Successfully ran 4 jobs.
snowpatch_ozlabs/github-powerpc_clang success Successfully ran 5 jobs.
snowpatch_ozlabs/github-powerpc_kernel_qemu success Successfully ran 21 jobs.

Commit Message

Sourabh Jain Dec. 17, 2024, 7:46 a.m. UTC
Commit 8597538712eb ("powerpc/fadump: Do not use hugepages when fadump
is active") disabled hugetlb support when fadump is active by returning
early from hugetlbpage_init():arch/powerpc/mm/hugetlbpage.c and not
populating hpage_shift/HPAGE_SHIFT.

Later, commit 2354ad252b66 ("powerpc/mm: Update default hugetlb size
early") moved the allocation of hpage_shift/HPAGE_SHIFT to early boot,
which inadvertently re-enabled hugetlb support when fadump is active.

Fix this by implementing hugepages_supported() on powerpc. This ensures
that disabling hugetlb for the fadump kernel is independent of
hpage_shift/HPAGE_SHIFT.

Fixes: 2354ad252b66 ("powerpc/mm: Update default hugetlb size early")
Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Mahesh Salgaonkar <mahesh@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com>
---

Changelog:

Since v1: https://lore.kernel.org/all/20241202054310.928610-1-sourabhjain@linux.ibm.com/
 - Change return type of hugepages_supported() to bool
 - Add a Reviewed-by

Note: Even with this fix included, it is possible to enable gigantic
pages in the fadump kernel. IIUC, gigantic pages were never disabled
for the fadump kernel.

Currently, gigantic pages are allocated during early boot as long as
the respective hstate is supported by the architecture.

I will introduce some changes in the generic hugetlb code to allow the
architecture to decide on supporting gigantic pages at runtime. Bringing
gigantic page allocation under hugepages_supported() does work for
powerpc, but I need to verify the impact on other architectures.

Regarding the Fixes tag: This patch fixes a bug inadvertently introduced
by the commit mentioned under the Fixes tag in the commit message. Feel
free to remove the tag if it is unnecessary.

---
 arch/powerpc/include/asm/hugetlb.h | 9 +++++++++
 1 file changed, 9 insertions(+)

Comments

Madhavan Srinivasan Jan. 1, 2025, 9:08 a.m. UTC | #1
On Tue, 17 Dec 2024 13:16:40 +0530, Sourabh Jain wrote:
> Commit 8597538712eb ("powerpc/fadump: Do not use hugepages when fadump
> is active") disabled hugetlb support when fadump is active by returning
> early from hugetlbpage_init():arch/powerpc/mm/hugetlbpage.c and not
> populating hpage_shift/HPAGE_SHIFT.
> 
> Later, commit 2354ad252b66 ("powerpc/mm: Update default hugetlb size
> early") moved the allocation of hpage_shift/HPAGE_SHIFT to early boot,
> which inadvertently re-enabled hugetlb support when fadump is active.
> 
> [...]

Applied to powerpc/next.

[1/1] powerpc/book3s64/hugetlb: Fix disabling hugetlb when fadump is active
      https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=next&id=d629d7a8efc33d05d62f4805c0ffb44727e3d99f 

Thanks
Patch

diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index 18a3028ac3b6..dad2e7980f24 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -15,6 +15,15 @@ 
 
 extern bool hugetlb_disabled;
 
+static inline bool hugepages_supported(void)
+{
+	if (hugetlb_disabled)
+		return false;
+
+	return HPAGE_SHIFT != 0;
+}
+#define hugepages_supported hugepages_supported
+
 void __init hugetlbpage_init_defaultsize(void);
 
 int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,