| Message ID | 20190218150846.5822-2-aaron.ma@canonical.com |
| --- | --- |
| State | New |
| Series | [1/1] x86/entry/64: Remove %ebx handling from error_entry/exit |
On 2019-02-18 16:08:46, Aaron Ma wrote:
> From: Andy Lutomirski <luto@kernel.org>
>
> error_entry and error_exit communicate the user vs. kernel status of
> the frame using %ebx. This is unnecessary -- the information is in
> regs->cs. Just use regs->cs.
>
> This makes error_entry simpler and makes error_exit more robust.
>
> It also fixes a nasty bug. Before all the Spectre nonsense, the
> xen_failsafe_callback entry point returned like this:
>
>         ALLOC_PT_GPREGS_ON_STACK
>         SAVE_C_REGS
>         SAVE_EXTRA_REGS
>         ENCODE_FRAME_POINTER
>         jmp error_exit
>
> And it did not go through error_entry. This was bogus: RBX
> contained garbage, and error_exit expected a flag in RBX.
>
> Fortunately, it generally contained *nonzero* garbage, so the
> correct code path was used. As part of the Spectre fixes, code was
> added to clear RBX to mitigate certain speculation attacks. Now,
> depending on kernel configuration, RBX got zeroed and, when running
> some Wine workloads, the kernel crashes. This was introduced by:
>
>         commit 3ac6d8c787b8 ("x86/entry/64: Clear registers for exceptions/interrupts, to reduce speculation attack surface")
>
> With this patch applied, RBX is no longer needed as a flag, and the
> problem goes away.
>
> I suspect that malicious userspace could use this bug to crash the
> kernel even without the offending patch applied, though.
>
> [ Historical note: I wrote this patch as a cleanup before I was aware
>   of the bug it fixed. ]
>
> [ Note to stable maintainers: this should probably get applied to all
>   kernels. If you're nervous about that, a more conservative fix to
>   add xorl %ebx,%ebx; incl %ebx before the jump to error_exit should
>   also fix the problem. ]
>
> Reported-and-tested-by: M. Vefa Bicakci <m.v.b@runbox.com>
> Signed-off-by: Andy Lutomirski <luto@kernel.org>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Brian Gerst <brgerst@gmail.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Denys Vlasenko <dvlasenk@redhat.com>
> Cc: Dominik Brodowski <linux@dominikbrodowski.net>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: stable@vger.kernel.org
> Cc: xen-devel@lists.xenproject.org
> Fixes: 3ac6d8c787b8 ("x86/entry/64: Clear registers for exceptions/interrupts, to reduce speculation attack surface")
> Link: http://lkml.kernel.org/r/b5010a090d3586b2d6e06c7ad3ec5542d1241c45.1532282627.git.luto@kernel.org
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
>
> CVE-2018-14678
>
> (cherry picked from commit b3681dd548d06deb2e1573890829dff4b15abf46)
> Signed-off-by: Aaron Ma <aaron.ma@canonical.com>

Clean cherry pick, we've had this patch applied in Xenial's kernel, and
it looks safe to me.

Acked-by: Tyler Hicks <tyhicks@canonical.com>

Thanks!

Tyler

> ---
>  arch/x86/entry/entry_64.S | 18 ++++--------------
>  1 file changed, 4 insertions(+), 14 deletions(-)
>
> [...]
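The bug Andy describes above is easiest to see if the old error_exit dispatch is written out in C. The sketch below is illustrative only -- the helper names are made up, and the real code is x86-64 assembly (see the full diff at the bottom of this page):

    /*
     * Sketch of the OLD convention (before this patch), with hypothetical
     * helper names.  error_exit trusted a flag that error_entry left in
     * %ebx: nonzero = came from kernel mode, zero = came from user mode.
     */
    #include <stdint.h>

    extern void return_to_kernel(void);   /* stand-in for retint_kernel */
    extern void return_to_user(void);     /* stand-in for retint_user   */

    static void old_error_exit(uint32_t ebx_flag)
    {
            if (ebx_flag != 0)
                    return_to_kernel();
            else
                    return_to_user();
    }

xen_failsafe_callback jumped straight to error_exit without going through error_entry, so the flag was whatever garbage happened to be in RBX; once the hardening in commit 3ac6d8c787b8 started zeroing RBX, the user-return path could be taken for a frame that should have gone through retint_kernel, and the kernel crashes.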
Acked-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
On 18.02.19 16:08, Aaron Ma wrote:
> From: Andy Lutomirski <luto@kernel.org>
>
> error_entry and error_exit communicate the user vs. kernel status of
> the frame using %ebx. This is unnecessary -- the information is in
> regs->cs. Just use regs->cs.
>
> [...]
>
> (cherry picked from commit b3681dd548d06deb2e1573890829dff4b15abf46)
> Signed-off-by: Aaron Ma <aaron.ma@canonical.com>
> ---

Applied to bionic/master-next.

Thanks.
-Stefan

> [...]
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 3a7d58384479..c0636b7ccc86 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -937,7 +937,7 @@ ENTRY(\sym)
 
 	call	\do_sym
 
-	jmp	error_exit			/* %ebx: no swapgs flag */
+	jmp	error_exit
 	.endif
 END(\sym)
 .endm
@@ -1172,7 +1172,6 @@ END(paranoid_exit)
 
 /*
  * Save all registers in pt_regs, and switch GS if needed.
- * Return: EBX=0: came from user mode; EBX=1: otherwise
  */
 ENTRY(error_entry)
 	UNWIND_HINT_FUNC
@@ -1219,7 +1218,6 @@ ENTRY(error_entry)
 	 * for these here too.
 	 */
 .Lerror_kernelspace:
-	incl	%ebx
 	leaq	native_irq_return_iret(%rip), %rcx
 	cmpq	%rcx, RIP+8(%rsp)
 	je	.Lerror_bad_iret
@@ -1253,28 +1251,20 @@ ENTRY(error_entry)
 
 	/*
 	 * Pretend that the exception came from user mode: set up pt_regs
-	 * as if we faulted immediately after IRET and clear EBX so that
-	 * error_exit knows that we will be returning to user mode.
+	 * as if we faulted immediately after IRET.
 	 */
 	mov	%rsp, %rdi
 	call	fixup_bad_iret
 	mov	%rax, %rsp
-	decl	%ebx
 	jmp	.Lerror_entry_from_usermode_after_swapgs
 END(error_entry)
 
-
-/*
- * On entry, EBX is a "return to kernel mode" flag:
- *	1: already in kernel mode, don't need SWAPGS
- *	0: user gsbase is loaded, we need SWAPGS and standard preparation for return to usermode
- */
 ENTRY(error_exit)
 	UNWIND_HINT_REGS
 	DISABLE_INTERRUPTS(CLBR_ANY)
 	TRACE_IRQS_OFF
-	testl	%ebx, %ebx
-	jnz	retint_kernel
+	testb	$3, CS(%rsp)
+	jz	retint_kernel
 	jmp	retint_user
 END(error_exit)
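The last hunk carries the actual fix: instead of a flag maintained in %ebx, error_exit derives the user-vs-kernel decision from the CS value saved on the stack, since the low two bits of CS are the privilege level of the interrupted context. A minimal C sketch of that test follows; the struct and helper names are made up for illustration, though the kernel's own user_mode(regs) helper performs essentially this check on x86-64:

    /*
     * Sketch of the NEW convention: read the CPL out of the saved CS
     * selector instead of trusting a flag register.  This is the C
     * equivalent of "testb $3, CS(%rsp); jz retint_kernel".
     */
    #include <stdbool.h>
    #include <stdint.h>

    struct fake_pt_regs {           /* hypothetical stand-in for struct pt_regs */
            uint64_t cs;            /* saved code segment selector */
            /* ... other saved registers elided ... */
    };

    static bool came_from_user_mode(const struct fake_pt_regs *regs)
    {
            return (regs->cs & 3) != 0;     /* CPL 3 = user, CPL 0 = kernel */
    }

Because the decision is recomputed from the frame itself, a path like xen_failsafe_callback that jumps to error_exit without going through error_entry can no longer be misled by whatever happens to be in RBX.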