Message ID: 20240902-mm-generic-shadow-stack-guard-v1-3-9acda38b3dd3@kernel.org
State: New
Series: mm: Care about shadow stack guard gap when getting an unmapped area
On Mon, Sep 02, 2024 at 08:08:15PM GMT, Mark Brown wrote:
> As covered in the commit log for c44357c2e76b ("x86/mm: care about shadow
> stack guard gap during placement") our current mmap() implementation does
> not take care to ensure that a new mapping isn't placed with existing
> mappings inside it's own guard gaps. This is particularly important for
> shadow stacks since if two shadow stacks end up getting placed adjacent to
> each other then they can overflow into each other which weakens the
> protection offered by the feature.
>
> On x86 there is a custom arch_get_unmapped_area() which was updated by the
> above commit to cover this case by specifying a start_gap for allocations
> with VM_SHADOW_STACK. Both arm64 and RISC-V have equivalent features and
> use the generic implementation of arch_get_unmapped_area() so let's make
> the equivalent change there so they also don't get shadow stack pages
> placed without guard pages.

Don't you need to unwind that change in x86 now you're doing it in generic code?

>
> Architectures which do not have this feature will define VM_SHADOW_STACK
> to VM_NONE and hence be unaffected.

Nice.

>
> Suggested-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
> mm/mmap.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b06ba847c96e..902c482b6084 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1753,6 +1753,14 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
> return gap;
> }

Would be nice to take some of the context in commit message as a short comment describing the function. I mean it's kinda trivially self-documenting obviously, but it's useful context for somebody wanting to understand _why_ we are doing this at a glance.
> +static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
> +{
> +	if (vm_flags & VM_SHADOW_STACK)
> +		return PAGE_SIZE;
> +
> +	return 0;
> +}
> +
> /*
>  * Search for an unmapped address range.
>  *
> @@ -1814,6 +1822,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
> 	info.length = len;
> 	info.low_limit = mm->mmap_base;
> 	info.high_limit = mmap_end;
> +	info.start_gap = stack_guard_placement(vm_flags);
> 	return vm_unmapped_area(&info);
> }
>
> @@ -1863,6 +1872,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
> 	info.length = len;
> 	info.low_limit = PAGE_SIZE;
> 	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
> +	info.start_gap = stack_guard_placement(vm_flags);
> 	addr = vm_unmapped_area(&info);
>
> 	/*
>
> --
> 2.39.2
>
On Tue, Sep 03, 2024 at 06:49:46PM +0100, Lorenzo Stoakes wrote:
> On Mon, Sep 02, 2024 at 08:08:15PM GMT, Mark Brown wrote:
> > On x86 there is a custom arch_get_unmapped_area() which was updated by the
> > above commit to cover this case by specifying a start_gap for allocations
> > with VM_SHADOW_STACK. Both arm64 and RISC-V have equivalent features and
> > use the generic implementation of arch_get_unmapped_area() so let's make
> > the equivalent change there so they also don't get shadow stack pages
> > placed without guard pages.
>
> Don't you need to unwind that change in x86 now you're doing it in generic code?

No, x86 had a preexisting custom implementation for some other reason
(hence the "updated by the above commit" part above) - the shadow stack
support would most likely have been added in the core in the first place
were it not for that.
On Tue, Sep 03, 2024 at 07:20:02PM GMT, Mark Brown wrote:
> On Tue, Sep 03, 2024 at 06:49:46PM +0100, Lorenzo Stoakes wrote:
> > On Mon, Sep 02, 2024 at 08:08:15PM GMT, Mark Brown wrote:
> >
> > > On x86 there is a custom arch_get_unmapped_area() which was updated by the
> > > above commit to cover this case by specifying a start_gap for allocations
> > > with VM_SHADOW_STACK. Both arm64 and RISC-V have equivalent features and
> > > use the generic implementation of arch_get_unmapped_area() so let's make
> > > the equivalent change there so they also don't get shadow stack pages
> > > placed without guard pages.
>
> > Don't you need to unwind that change in x86 now you're doing it in generic code?
>
> No, x86 had a preexisting custom implementation for some other reason
> (hence the "updated by the above commit" part above) - the shadow stack
> support would most likely have been added in the core in the first place
> were it not for that.

Oh yeah missed that! Other than the comment nice-to-have this seems fine:

Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
* Mark Brown <broonie@kernel.org> [240902 15:09]:
> As covered in the commit log for c44357c2e76b ("x86/mm: care about shadow
> stack guard gap during placement") our current mmap() implementation does
> not take care to ensure that a new mapping isn't placed with existing
> mappings inside it's own guard gaps. This is particularly important for
> shadow stacks since if two shadow stacks end up getting placed adjacent to
> each other then they can overflow into each other which weakens the
> protection offered by the feature.
>
> On x86 there is a custom arch_get_unmapped_area() which was updated by the
> above commit to cover this case by specifying a start_gap for allocations
> with VM_SHADOW_STACK. Both arm64 and RISC-V have equivalent features and
> use the generic implementation of arch_get_unmapped_area() so let's make
> the equivalent change there so they also don't get shadow stack pages
> placed without guard pages.
>
> Architectures which do not have this feature will define VM_SHADOW_STACK
> to VM_NONE and hence be unaffected.
>
> Suggested-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
> mm/mmap.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b06ba847c96e..902c482b6084 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1753,6 +1753,14 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
> 	return gap;
> }
>
> +static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
> +{
> +	if (vm_flags & VM_SHADOW_STACK)
> +		return PAGE_SIZE;

Is PAGE_SIZE enough?

> +
> +	return 0;
> +}
> +
> /*
>  * Search for an unmapped address range.
>  *
> @@ -1814,6 +1822,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
> 	info.length = len;
> 	info.low_limit = mm->mmap_base;
> 	info.high_limit = mmap_end;
> +	info.start_gap = stack_guard_placement(vm_flags);
> 	return vm_unmapped_area(&info);
> }
>
> @@ -1863,6 +1872,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
> 	info.length = len;
> 	info.low_limit = PAGE_SIZE;
> 	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
> +	info.start_gap = stack_guard_placement(vm_flags);
> 	addr = vm_unmapped_area(&info);
>
> 	/*
>
> --
> 2.39.2
>
On Tue, Sep 03, 2024 at 03:41:49PM -0400, Liam R. Howlett wrote:
> * Mark Brown <broonie@kernel.org> [240902 15:09]:
>
> > +static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
> > +{
> > +	if (vm_flags & VM_SHADOW_STACK)
> > +		return PAGE_SIZE;
>
> Is PAGE_SIZE is enough?

It's what x86 currently uses so it'll be no worse off if it gets moved
to the generic code (there's a comment in the arch code explaining what's
needed there) and it's enough for arm64, we only do single record
pushes/pops or (optionally) writes to unconstrained addresses.
On Mon, Sep 02, 2024 at 08:08:15PM +0100, Mark Brown wrote:
>As covered in the commit log for c44357c2e76b ("x86/mm: care about shadow
>stack guard gap during placement") our current mmap() implementation does
>not take care to ensure that a new mapping isn't placed with existing
>mappings inside it's own guard gaps. This is particularly important for
>shadow stacks since if two shadow stacks end up getting placed adjacent to
>each other then they can overflow into each other which weakens the
>protection offered by the feature.
>
>On x86 there is a custom arch_get_unmapped_area() which was updated by the
>above commit to cover this case by specifying a start_gap for allocations
>with VM_SHADOW_STACK. Both arm64 and RISC-V have equivalent features and
>use the generic implementation of arch_get_unmapped_area() so let's make
>the equivalent change there so they also don't get shadow stack pages
>placed without guard pages.
>
>Architectures which do not have this feature will define VM_SHADOW_STACK
>to VM_NONE and hence be unaffected.
>
>Suggested-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
>Signed-off-by: Mark Brown <broonie@kernel.org>
>---
> mm/mmap.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
>diff --git a/mm/mmap.c b/mm/mmap.c
>index b06ba847c96e..902c482b6084 100644
>--- a/mm/mmap.c
>+++ b/mm/mmap.c
>@@ -1753,6 +1753,14 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
> 	return gap;
> }
>
>+static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
>+{
>+	if (vm_flags & VM_SHADOW_STACK)
>+		return PAGE_SIZE;
>+
>+	return 0;
>+}
>+
> /*
>  * Search for an unmapped address range.
>  *
>@@ -1814,6 +1822,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
> 	info.length = len;
> 	info.low_limit = mm->mmap_base;
> 	info.high_limit = mmap_end;
>+	info.start_gap = stack_guard_placement(vm_flags);
> 	return vm_unmapped_area(&info);
> }
>
>@@ -1863,6 +1872,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
> 	info.length = len;
> 	info.low_limit = PAGE_SIZE;
> 	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
>+	info.start_gap = stack_guard_placement(vm_flags);
> 	addr = vm_unmapped_area(&info);
>
> 	/*

lgtm

Reviewed-by: Deepak Gupta <debug@rivosinc.com>

>--
>2.39.2
>
On Tue, Sep 03, 2024 at 08:57:20PM +0100, Mark Brown wrote:
>On Tue, Sep 03, 2024 at 03:41:49PM -0400, Liam R. Howlett wrote:
>> * Mark Brown <broonie@kernel.org> [240902 15:09]:
>
>> > +static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
>> > +{
>> > +	if (vm_flags & VM_SHADOW_STACK)
>> > +		return PAGE_SIZE;
>
>> Is PAGE_SIZE is enough?
>
>It's what x86 currently uses so it'll be no worse off if it gets moved
>to the generic code (there's a comment in the arch code explaing what's
>needed there) and it's enough for arm64, we only do single record
>pushes/pops or (optionally) writes to unconstrained addresses.

It's enough for RISC-V too.
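[Editor's note: the "single record pushes" argument above can be sketched as a toy userspace model, not kernel code. The addresses, the 8-byte record size, and the helper name page_of are illustrative assumptions: an overflowing push past the base of a full, downward-growing shadow stack writes into the page immediately below the mapping, so a one-page guard gap is enough to make it fault.]

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define RECORD    8UL	/* one shadow stack record (a saved return address) */

/* Page containing an address, to check which mapping a write would hit. */
static unsigned long page_of(unsigned long addr)
{
	return addr & ~(PAGE_SIZE - 1);
}

int main(void)
{
	unsigned long stack_base = 0x200000UL;	/* lowest mapped byte of the stack */
	unsigned long ssp = stack_base;		/* stack is already full */

	/* An overflowing push writes one record below the base... */
	unsigned long fault_addr = ssp - RECORD;

	/*
	 * ...which lands in the single page directly below the mapping,
	 * i.e. inside a PAGE_SIZE guard gap, so it faults instead of
	 * silently corrupting an adjacently placed shadow stack.
	 */
	assert(page_of(fault_addr) == stack_base - PAGE_SIZE);

	printf("guard page catches the overflow\n");
	return 0;
}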
diff --git a/mm/mmap.c b/mm/mmap.c
index b06ba847c96e..902c482b6084 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1753,6 +1753,14 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
 	return gap;
 }
 
+static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
+{
+	if (vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 /*
  * Search for an unmapped address range.
  *
@@ -1814,6 +1822,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = mmap_end;
+	info.start_gap = stack_guard_placement(vm_flags);
 	return vm_unmapped_area(&info);
 }
 
@@ -1863,6 +1872,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 	info.length = len;
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
+	info.start_gap = stack_guard_placement(vm_flags);
 	addr = vm_unmapped_area(&info);
 
 	/*
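[Editor's note: the effect of the start_gap field can be illustrated with a hedged toy model of top-down placement, ordinary userspace C rather than the kernel's vm_unmapped_area(). The function name place_topdown and all addresses are invented for illustration: a request only fits if the free space also covers the guard gap, and the pages below the returned address stay unmapped.]

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/*
 * Toy model: pick the highest address for a 'len'-byte mapping below
 * 'high', above an existing mapping ending at 'low', while reserving
 * 'start_gap' bytes of unmapped guard area below the new mapping.
 * This mirrors the role vm_unmapped_area_info.start_gap plays in the
 * patch; it is not the kernel's algorithm. Returns 0 on failure.
 */
static unsigned long place_topdown(unsigned long low, unsigned long high,
				   unsigned long len, unsigned long start_gap)
{
	unsigned long gap = high - low;	/* free space between mappings */

	if (gap < len + start_gap)
		return 0;	/* no room once the guard is counted */

	/* Highest fit; the start_gap bytes below it stay unmapped. */
	return high - len;
}

int main(void)
{
	unsigned long prev_end = 0x100000UL;	/* end of an existing shadow stack */
	unsigned long limit    = 0x103000UL;	/* three free pages above it */

	/* A two-page stack fits with or without the guard... */
	assert(place_topdown(prev_end, limit, 2 * PAGE_SIZE, 0) == 0x101000UL);
	/* ...and with start_gap = PAGE_SIZE one guard page separates the stacks. */
	assert(place_topdown(prev_end, limit, 2 * PAGE_SIZE, PAGE_SIZE) == 0x101000UL);

	/* A three-page stack no longer fits once the guard page is required. */
	assert(place_topdown(prev_end, limit, 3 * PAGE_SIZE, PAGE_SIZE) == 0);

	printf("placement respects the guard gap\n");
	return 0;
}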
As covered in the commit log for c44357c2e76b ("x86/mm: care about shadow
stack guard gap during placement") our current mmap() implementation does
not take care to ensure that a new mapping isn't placed with existing
mappings inside its own guard gaps. This is particularly important for
shadow stacks since if two shadow stacks end up getting placed adjacent to
each other then they can overflow into each other, which weakens the
protection offered by the feature.

On x86 there is a custom arch_get_unmapped_area() which was updated by the
above commit to cover this case by specifying a start_gap for allocations
with VM_SHADOW_STACK. Both arm64 and RISC-V have equivalent features and
use the generic implementation of arch_get_unmapped_area(), so let's make
the equivalent change there so they also don't get shadow stack pages
placed without guard pages.

Architectures which do not have this feature will define VM_SHADOW_STACK
to VM_NONE and hence be unaffected.

Suggested-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 mm/mmap.c | 10 ++++++++++
 1 file changed, 10 insertions(+)