
[v4,5/7] mm/x86: arch_check_zapped_pud()

Message ID 20240807194812.819412-6-peterx@redhat.com (mailing list archive)
State Handled Elsewhere, archived
Series mm/mprotect: Fix dax puds

Commit Message

Peter Xu Aug. 7, 2024, 7:48 p.m. UTC
Introduce arch_check_zapped_pud() to sanity check shadow stack on PUD zaps.
It has the same logic of the PMD helper.

One thing to mention: it might be a good idea to use page_table_check in
the future for trapping wrong setups of shadow stack pgtable entries [1].
That is left as a separate future effort.

[1] https://lore.kernel.org/all/59d518698f664e07c036a5098833d7b56b953305.camel@intel.com
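
For quick reference, here is a condensed sketch of what the patch adds (the
full diff at the bottom of this page is authoritative): a generic no-op
fallback in include/linux/pgtable.h, plus the x86 override that warns when a
shadow-stack PUD is zapped in a VMA that is not marked VM_SHADOW_STACK.

/* include/linux/pgtable.h: generic fallback, a no-op by default */
#ifndef arch_check_zapped_pud
static inline void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
{
}
#endif

/* arch/x86/mm/pgtable.c: x86 override, same logic as the PMD helper */
void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
{
	/* See note in arch_check_zapped_pte() */
	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) && pud_shstk(pud));
}

zap_huge_pud() then passes the PUD returned by pudp_huge_get_and_clear_full()
to this hook before the TLB entry is removed.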

Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: x86@kernel.org
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/x86/include/asm/pgtable.h | 10 ++++++++++
 arch/x86/mm/pgtable.c          |  7 +++++++
 include/linux/pgtable.h        |  7 +++++++
 mm/huge_memory.c               |  4 +++-
 4 files changed, 27 insertions(+), 1 deletion(-)

Comments

Thomas Gleixner Aug. 7, 2024, 10:28 p.m. UTC | #1
On Wed, Aug 07 2024 at 15:48, Peter Xu wrote:

> Subject: mm/x86: arch_check_zapped_pud()

Is not a proper subject line. It clearly lacks a verb.

  Subject: mm/x86: Implement arch_check_zapped_pud()


> Introduce arch_check_zapped_pud() to sanity check shadow stack on PUD zaps.
> It has the same logic of the PMD helper.

s/of/as/

> +
> +void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
> +{
> +	/* See note in arch_check_zapped_pte() */
> +	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
> +			pud_shstk(pud));

Please get rid of the line break. You have 100 characters.

> +}
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 2a6a3cccfc36..2289e9f7aa1b 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -447,6 +447,13 @@ static inline void arch_check_zapped_pmd(struct vm_area_struct *vma,
>  }
>  #endif
>  
> +#ifndef arch_check_zapped_pud
> +static inline void arch_check_zapped_pud(struct vm_area_struct *vma,
> +					 pud_t pud)
> +{

Ditto..

> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 0024266dea0a..81c5da0708ed 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c

Why is a mm change buried in a patch which is named mm/x86?

It's clearly documented that core changes with the generic fallback come
in one patch and the architecture override in a separate one afterwards.

Do we write documentation just for the sake of writing it?

Thanks,

        tglx
Peter Xu Aug. 8, 2024, 3:49 p.m. UTC | #2
On Thu, Aug 08, 2024 at 12:28:47AM +0200, Thomas Gleixner wrote:
> On Wed, Aug 07 2024 at 15:48, Peter Xu wrote:
> 
> > Subject: mm/x86: arch_check_zapped_pud()
> 
> Is not a proper subject line. It clearly lacks a verb.
> 
>   Subject: mm/x86: Implement arch_check_zapped_pud()
> 
> 
> > Introduce arch_check_zapped_pud() to sanity check shadow stack on PUD zaps.
> > It has the same logic of the PMD helper.
> 
> s/of/as/

Will fix the above two.

> 
> > +
> > +void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
> > +{
> > +	/* See note in arch_check_zapped_pte() */
> > +	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
> > +			pud_shstk(pud));
> 
> Please get rid of the line break. You have 100 characters.

Coding-style.rst still tells me 80 here:

        The preferred limit on the length of a single line is 80 columns.

        Statements longer than 80 columns should be broken into sensible chunks,
        unless exceeding 80 columns significantly increases readability and does
        not hide information.

Maybe this just changed very recently, so it's not even in mm-unstable yet?

I'll fix the two line wraps in this patch anyway, as I figured these two
cases didn't even hit 80..  probably because I used fill-column=75 locally..

But I'll still probably need to figure it out for other spots.  Please help
me decide which limit to use there.

> 
> > +}
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 2a6a3cccfc36..2289e9f7aa1b 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -447,6 +447,13 @@ static inline void arch_check_zapped_pmd(struct vm_area_struct *vma,
> >  }
> >  #endif
> >  
> > +#ifndef arch_check_zapped_pud
> > +static inline void arch_check_zapped_pud(struct vm_area_struct *vma,
> > +					 pud_t pud)
> > +{
> 
> Ditto..
> 
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 0024266dea0a..81c5da0708ed 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> 
> Why is a mm change buried in a patch which is named mm/x86?
> 
> It's clearly documented that core changes with the generic fallback come
> in one patch and the architecture override in a separate one afterwards.
> 
> Do we write documentation just for the sake of writing it?

Hmm, if that's the rule, then in practice I bet many patches we merged (and
many of the patches I've read) violate it.. but I see the point now, so I'll
break this patch into two.

Personally I'd still prefer it to be one patch, especially since it's only
implemented on x86: that way I copy the x86+mm maintainers all in one place,
and the patch explains why the mm change is there.  So please let me know if
you think it may still make sense to keep this patch as-is, or I'll split by
default.

Thanks,
David Hildenbrand Aug. 8, 2024, 8:45 p.m. UTC | #3
>>> +void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
>>> +{
>>> +	/* See note in arch_check_zapped_pte() */
>>> +	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
>>> +			pud_shstk(pud));
>>
>> Please get rid of the line break. You have 100 characters.
> 
> Coding-style.rst still tells me 80 here:
> 
>          The preferred limit on the length of a single line is 80 columns.
> 
>          Statements longer than 80 columns should be broken into sensible chunks,
>          unless exceeding 80 columns significantly increases readability and does
>          not hide information.
> 
> Maybe this just changed very recently, so it's not even in mm-unstable yet?
> 
> I'll fix the two line wraps in this patch anyway, as I figured these two
> cases didn't even hit 80..  probably because I used fill-column=75 locally..
> 
> But I'll still probably need to figure it out for other spots.  Please help
> me decide which limit to use there.

My interpretation (the doc is not completely clear to me either, but
checkpatch.pl hardcodes max_line_length=100) is that we can happily use up
to 100 chars.

I also tend to stay within 80 chars unless there is a really good reason
not to. Years of Linux kernel hacking really taught my inner self not to
exceed it.

Here I would agree that having the VM_WARN_ON_ONCE on a single line would
aid readability.

An example where 100 chars are likely a bad idea would be code nested so
deeply that most lines start exceeding 80 chars. We should rather fix the
code then -- like the coding style spells out :)
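
For illustration, the suggested single-line form would look like this (it
actually fits within 80 columns, matching Peter's observation above that
these cases didn't even hit 80):

	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) && pud_shstk(pud));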

Patch

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index a2a3bd4c1bda..fdb8ac9e7030 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -174,6 +174,13 @@  static inline int pud_young(pud_t pud)
 	return pud_flags(pud) & _PAGE_ACCESSED;
 }
 
+static inline bool pud_shstk(pud_t pud)
+{
+	return cpu_feature_enabled(X86_FEATURE_SHSTK) &&
+	       (pud_flags(pud) & (_PAGE_RW | _PAGE_DIRTY | _PAGE_PSE)) ==
+	       (_PAGE_DIRTY | _PAGE_PSE);
+}
+
 static inline int pte_write(pte_t pte)
 {
 	/*
@@ -1667,6 +1674,9 @@  void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte);
 #define arch_check_zapped_pmd arch_check_zapped_pmd
 void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd);
 
+#define arch_check_zapped_pud arch_check_zapped_pud
+void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud);
+
 #ifdef CONFIG_XEN_PV
 #define arch_has_hw_nonleaf_pmd_young arch_has_hw_nonleaf_pmd_young
 static inline bool arch_has_hw_nonleaf_pmd_young(void)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index f5931499c2d6..d4b3ccf90236 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -926,3 +926,10 @@  void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd)
 	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
 			pmd_shstk(pmd));
 }
+
+void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
+{
+	/* See note in arch_check_zapped_pte() */
+	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
+			pud_shstk(pud));
+}
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 2a6a3cccfc36..2289e9f7aa1b 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -447,6 +447,13 @@  static inline void arch_check_zapped_pmd(struct vm_area_struct *vma,
 }
 #endif
 
+#ifndef arch_check_zapped_pud
+static inline void arch_check_zapped_pud(struct vm_area_struct *vma,
+					 pud_t pud)
+{
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0024266dea0a..81c5da0708ed 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2293,12 +2293,14 @@  int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pud_t *pud, unsigned long addr)
 {
 	spinlock_t *ptl;
+	pud_t orig_pud;
 
 	ptl = __pud_trans_huge_lock(pud, vma);
 	if (!ptl)
 		return 0;
 
-	pudp_huge_get_and_clear_full(vma, addr, pud, tlb->fullmm);
+	orig_pud = pudp_huge_get_and_clear_full(vma, addr, pud, tlb->fullmm);
+	arch_check_zapped_pud(vma, orig_pud);
 	tlb_remove_pud_tlb_entry(tlb, pud, addr);
 	if (vma_is_special_huge(vma)) {
 		spin_unlock(ptl);