[v2,0/6] x86/uaccess: Avoid barrier_nospec()

Message ID cover.1729201904.git.jpoimboe@kernel.org (mailing list archive)

Message

Josh Poimboeuf Oct. 17, 2024, 9:55 p.m. UTC
At least for now, continue to assume mask_user_address() is safe on AMD
when combined with STAC/CLAC -- as get_user(), put_user() and
masked_user_access_begin() already do today.
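
For readers unfamiliar with the masking trick, here is a minimal userspace model of the branchless clamp that mask_user_address()-style helpers perform. The limit and names are illustrative stand-ins, not the kernel's actual constants, and the real kernel version is written in inline asm so the compiler cannot reintroduce a branch:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the top of the user address range; the
 * kernel's real limit differs and is chosen so that the clamp target
 * is guaranteed to fault. */
#define MODEL_USER_MAX 0x00007fffffffffffULL

/* Branchless clamp: an out-of-range pointer is forced to
 * MODEL_USER_MAX via an all-ones/all-zeroes mask, with no
 * conditional branch the CPU could speculate around. */
static uint64_t model_mask_user_address(uint64_t ptr)
{
	uint64_t oob = 0ULL - (uint64_t)(ptr > MODEL_USER_MAX);

	return (ptr & ~oob) | (MODEL_USER_MAX & oob);
}
```

In-range pointers pass through unchanged; anything above the limit is clamped to a value that can never address attacker-chosen kernel memory.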

v2:
- Separate copy_to_user() and clear_user() changes out into separate patches
- Add masking to __get_user() and __put_user()

v1:
https://lore.kernel.org/b626840e55d4aa86b4b9b377a4cc2cda7038d33d.1728706156.git.jpoimboe@kernel.org

Josh Poimboeuf (6):
  x86/uaccess: Avoid barrier_nospec() in copy_from_user()
  x86/uaccess: Avoid barrier_nospec() in __get_user()
  x86/uaccess: Rearrange putuser.S
  x86/uaccess: Add user pointer masking to __put_user()
  x86/uaccess: Add user pointer masking to copy_to_user()
  x86/uaccess: Add user pointer masking to clear_user()

 arch/powerpc/include/asm/uaccess.h |  2 +
 arch/x86/include/asm/uaccess_64.h  | 10 ++--
 arch/x86/lib/getuser.S             | 27 +++++++--
 arch/x86/lib/putuser.S             | 92 ++++++++++++++++++------------
 include/linux/uaccess.h            |  6 --
 5 files changed, 86 insertions(+), 51 deletions(-)

Comments

Andrew Cooper Oct. 17, 2024, 10:31 p.m. UTC | #1
On 17/10/2024 10:55 pm, Josh Poimboeuf wrote:
> At least for now, continue to assume mask_user_address() is safe on AMD
> when combined with STAC/CLAC -- as get_user(), put_user() and
> masked_user_access_begin() already do today.

Honestly, I find this a very worrying position to take.

It's one thing not to know there's a speculative security vulnerability
with how mask_user_address() is used.

It's totally another to say "let's pretend that it doesn't exist so we
can continue to make things faster".


Even if you can get Intel and AMD to agree that STAC/CLAC are really
LFENCEs (and I think you'll struggle), they'd only confer the safety you
want between a real conditional that excludes the non-canonical range,
and the pointer dereference.

Any path that genuinely dereferences a non-canonical pointer is not safe,
whatever serialisation you put in the way.  The attacker wins the moment
the load uop executes.

The final hunk of patch 1 is safe (iff STAC is given extra guarantees)
because it is between the conditional and the dereference.  Patch 4 is not
safe (if the comment is correct) because it removes the conditional.


Or state that you intend to disregard this non-canonical speculation
problem;  that's fine(ish) too, as long as it's done transparently.
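
[The "safe" pattern described above -- a real range check, then the assumed fence, then the load -- can be sketched as a userspace model. All names and the limit are illustrative stand-ins, not kernel symbols, and the fence is only a comment since the whole point under discussion is whether STAC actually provides it:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative limit excluding the non-canonical range; not the
 * kernel's actual TASK_SIZE_MAX. */
#define MODEL_TASK_SIZE_MAX 0x00007ffffffff000ULL

/* Models whether the load could ever execute for a given pointer
 * under the safe pattern: the conditional excludes non-canonical
 * addresses, and STAC (assumed here to carry LFENCE semantics)
 * sits between the check and the dereference. */
static bool model_load_reachable(uint64_t ptr)
{
	if (ptr >= MODEL_TASK_SIZE_MAX)
		return false;	/* conditional excludes the bad range */
	/* stac() would go here, assumed to fence speculation */
	return true;		/* load runs only past check + fence */
}
```

The unsafe case is the absence of that conditional: if a non-canonical pointer can reach the load at all, nothing placed earlier on the path helps once the load uop executes.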

~Andrew
Josh Poimboeuf Oct. 17, 2024, 10:42 p.m. UTC | #2
On Thu, Oct 17, 2024 at 11:31:30PM +0100, Andrew Cooper wrote:
> Even if you can get Intel and AMD to agree that STAC/CLAC are really
> LFENCEs (and I think you'll struggle), they'd only confer the safety you
> want between a real conditional that excludes the non-canonical range,
> and the pointer dereference.
> 
> Any path that genuinely dereferences a non-canonical pointer is not safe,
> whatever serialisation you put in the way.  The attacker wins the moment
> the load uop executes.
> 
> The final hunk of patch 1 is safe (iff STAC is given extra guarantees)
> because it is between the conditional and the dereference.  Patch 4 is not
> safe (if the comment is correct) because it removes the conditional.

So the naming is confusing:

  - put_user()   implementation is __put_user_*()
  - __put_user() implementation is __put_user_nocheck_*()

Patch 4 only affects __put_user(), for which the caller is expected to
have called access_ok() beforehand.

The current implementations of get_user(), put_user() and
masked_user_access_begin() avoid the conditional.  Those are the ones it
sounds like you're worried about?

None of my patches remove conditional checks.
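
The checked/unchecked split described here can be modeled in plain C; the names, limit, and error value below are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MODEL_USER_MAX	0x00007fffffffffffULL	/* illustrative limit */
#define MODEL_EFAULT	14

/* access_ok() analogue: overflow-safe range check for the whole access */
static bool model_access_ok(uint64_t ptr, size_t size)
{
	return ptr <= MODEL_USER_MAX && size - 1 <= MODEL_USER_MAX - ptr;
}

/* put_user() analogue: performs the range check itself */
static int model_put_user(uint64_t uptr)
{
	if (!model_access_ok(uptr, sizeof(long)))
		return -MODEL_EFAULT;
	return 0;	/* checked store would happen here */
}

/* __put_user() analogue: no check -- the caller must already have
 * called access_ok(); masking only clamps the pointer */
static int model___put_user(uint64_t uptr)
{
	(void)uptr;	/* uptr assumed pre-validated by the caller */
	return 0;	/* unchecked store would happen here */
}
```

The point of contention maps onto the model directly: model_put_user() retains its conditional, while model___put_user() never had one, so adding masking to the latter removes nothing.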