From patchwork Sat Apr 26 19:36:27 2014
X-Patchwork-Submitter: Kevin O'Connor
X-Patchwork-Id: 343093
Date: Sat, 26 Apr 2014 15:36:27 -0400
From: Kevin O'Connor
To: Paolo Bonzini
Cc: Richard Henderson, qemu-devel@nongnu.org, Gerd Hoffmann
Message-ID: <20140426193627.GA13486@morn.localdomain>
In-Reply-To: <535B772D.1020602@redhat.com>
References: <20140425171718.GA1591@morn.localdomain>
 <535B772D.1020602@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] SMI handler should set the CPL to zero and
 save and restore it on rsm.
On Sat, Apr 26, 2014 at 11:06:53AM +0200, Paolo Bonzini wrote:
> On 25/04/2014 19:17, Kevin O'Connor wrote:
> > The current SMI interrupt handler is being run with the same CPL as
> > the code it interrupts.  If the existing code is running with
> > CPL=3, then the SMI handler can cause spurious exceptions.  The
> > System Management Mode (SMM) should always run at the highest
> > protection level.
>
> KVM computes the CPL as follows:
>
>     if (CR0.PE == 0)
>         return 0;
>
>     if (!EFER.LMA && EFLAGS.VM)
>         return 3;
>
>     return CS.selector & 3;
>
> Perhaps this can be used instead of save/restore of the CPL field.
[...]
> > I'm not very familiar with the QEMU TCG code, so it is possible
> > there is a better way to accomplish this.  I can confirm that
> > without this patch an extended SeaBIOS SMI handler can cause
> > spurious faults, but it works correctly with this patch.

Yes, I was thinking of something like that as well.  If QEMU
internally observes the formula above, then something like the patch
below should work instead of my original patch.  However, I'm not an
expert on QEMU TCG, and the patch below would require much more
testing.

(As a side note, the patch below also obviates the need to
save/restore the CPL in the SVM code, but I left the saving of the CPL
in the "hsave page" in case that is part of some standard.)
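To spell the quoted formula out, the same derivation can be written as
a self-contained C sketch.  The struct and macro names here are
invented for illustration and are not QEMU's real identifiers; the
patch below folds the equivalent logic into cpu_x86_load_seg_cache():

    #include <stdint.h>
    #include <stdio.h>

    #define CR0_PE_MASK     (1u << 0)   /* CR0.PE: protection enable */
    #define EFLAGS_VM_MASK  (1u << 17)  /* EFLAGS.VM: virtual-8086 mode */
    #define EFER_LMA_MASK   (1u << 10)  /* EFER.LMA: long mode active */

    /* Hypothetical container for just the state the formula reads. */
    struct cpl_state {
        uint32_t cr0;
        uint32_t eflags;
        uint64_t efer;
        uint16_t cs_selector;
    };

    /* Derive the CPL from architectural state, per the rule above. */
    static int derive_cpl(const struct cpl_state *s)
    {
        if (!(s->cr0 & CR0_PE_MASK)) {
            return 0;               /* real mode: always CPL 0 */
        }
        if (!(s->efer & EFER_LMA_MASK) && (s->eflags & EFLAGS_VM_MASK)) {
            return 3;               /* vm86 mode: always CPL 3 */
        }
        return s->cs_selector & 3;  /* otherwise: the RPL bits of CS */
    }

    int main(void)
    {
        /* SMM entry clears CR0.PE, so the handler derives CPL 0 no
         * matter what CPL the interrupted code was running at. */
        struct cpl_state smm = { .cr0 = 0, .cs_selector = 0x3000 };
        printf("CPL on SMM entry: %d\n", derive_cpl(&smm));   /* 0 */

        struct cpl_state user = { .cr0 = CR0_PE_MASK, .cs_selector = 0x33 };
        printf("CPL of user code: %d\n", derive_cpl(&user));  /* 3 */
        return 0;
    }

Since rsm reloads cr0, eflags, and the segment registers from the SMM
state save area, recomputing the CPL on every CS load also restores
the interrupted CPL automatically, with no separate save/restore.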
-Kevin

From e1f9ebc7154f82a9d7507cc0bd09aa02f209b7ed Mon Sep 17 00:00:00 2001
From: Kevin O'Connor
Date: Sat, 26 Apr 2014 15:20:38 -0400
Subject: [PATCH] The x86 CPL is stored in CS.selector - auto update hflags
 accordingly.

Instead of manually calling cpu_x86_set_cpl() when the CPL changes,
check for CPL changes on calls to cpu_x86_load_seg_cache(R_CS).  Every
location that called cpu_x86_set_cpl() also called
cpu_x86_load_seg_cache(R_CS), so cpu_x86_set_cpl() is no longer
required.

This fixes the SMM handler code, which was not manually
setting/restoring the CPL.

A few places in the code have been reordered so that eflags/cr0
changes occur before the call to cpu_x86_load_seg_cache(R_CS), because
the CPL now depends on eflags/cr0.  This reordering is desirable
anyway, since cpu_x86_load_seg_cache() already uses eflags/cr0 to
determine other hflags.

Signed-off-by: Kevin O'Connor
---
 bsd-user/main.c          |  2 --
 linux-user/main.c        |  2 --
 target-i386/cpu.h        | 32 ++++++++++++++---------------
 target-i386/seg_helper.c | 53 +++++++++++++++++-------------------------------
 target-i386/smm_helper.c | 20 +++++++++---------
 target-i386/svm_helper.c |  7 ++-----
 6 files changed, 46 insertions(+), 70 deletions(-)

diff --git a/bsd-user/main.c b/bsd-user/main.c
index f81ba55..9f895b4 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -1003,8 +1003,6 @@ int main(int argc, char **argv)
     cpu->opaque = ts;
 
 #if defined(TARGET_I386)
-    cpu_x86_set_cpl(env, 3);
-
     env->cr[0] = CR0_PG_MASK | CR0_WP_MASK | CR0_PE_MASK;
     env->hflags |= HF_PE_MASK;
     if (env->features[FEAT_1_EDX] & CPUID_SSE) {
diff --git a/linux-user/main.c b/linux-user/main.c
index 947358a..9f7cfe4 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -4051,8 +4051,6 @@ int main(int argc, char **argv, char **envp)
 #endif
 
 #if defined(TARGET_I386)
-    cpu_x86_set_cpl(env, 3);
-
     env->cr[0] = CR0_PG_MASK | CR0_WP_MASK | CR0_PE_MASK;
     env->hflags |= HF_PE_MASK;
     if (env->features[FEAT_1_EDX] & CPUID_SSE) {
diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index 2a22a7d..6360794 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -124,9 +124,9 @@
 #define ID_MASK                 0x00200000
 
 /* hidden flags - used internally by qemu to represent additional cpu
-   states. Only the CPL, INHIBIT_IRQ, SMM and SVMI are not
-   redundant. We avoid using the IOPL_MASK, TF_MASK, VM_MASK and AC_MASK
-   bit positions to ease oring with eflags. */
+   states. Only the INHIBIT_IRQ, SMM and SVMI are not redundant. We
+   avoid using the IOPL_MASK, TF_MASK, VM_MASK and AC_MASK bit
+   positions to ease oring with eflags. */
 /* current cpl */
 #define HF_CPL_SHIFT         0
 /* true if soft mmu is being used */
@@ -974,19 +974,27 @@ static inline void cpu_x86_load_seg_cache(CPUX86State *env,
     /* update the hidden flags */
     {
         if (seg_reg == R_CS) {
+#if HF_CPL_MASK != 3
+#error HF_CPL_MASK is hardcoded
+#endif
+            int cpl = selector & 3;
 #ifdef TARGET_X86_64
             if ((env->hflags & HF_LMA_MASK) && (flags & DESC_L_MASK)) {
                 /* long mode */
-                env->hflags |= HF_CS32_MASK | HF_SS32_MASK | HF_CS64_MASK;
-                env->hflags &= ~(HF_ADDSEG_MASK);
+                env->hflags &= ~(HF_ADDSEG_MASK | HF_CPL_MASK);
+                env->hflags |= HF_CS32_MASK | HF_SS32_MASK | HF_CS64_MASK | cpl;
             } else
 #endif
             {
                 /* legacy / compatibility case */
+                if (!(env->cr[0] & CR0_PE_MASK))
+                    cpl = 0;
+                else if (env->eflags & VM_MASK)
+                    cpl = 3;
                 new_hflags = (env->segs[R_CS].flags & DESC_B_MASK)
                     >> (DESC_B_SHIFT - HF_CS32_SHIFT);
-                env->hflags = (env->hflags & ~(HF_CS32_MASK | HF_CS64_MASK)) |
-                    new_hflags;
+                env->hflags &= ~(HF_CS32_MASK | HF_CS64_MASK | HF_CPL_MASK);
+                env->hflags |= new_hflags | cpl;
             }
         }
         new_hflags = (env->segs[R_SS].flags & DESC_B_MASK)
@@ -1031,16 +1039,6 @@ int cpu_x86_get_descr_debug(CPUX86State *env, unsigned int selector,
                             target_ulong *base, unsigned int *limit,
                             unsigned int *flags);
 
-/* wrapper, just in case memory mappings must be changed */
-static inline void cpu_x86_set_cpl(CPUX86State *s, int cpl)
-{
-#if HF_CPL_MASK == 3
-    s->hflags = (s->hflags & ~HF_CPL_MASK) | cpl;
-#else
-#error HF_CPL_MASK is hardcoded
-#endif
-}
-
 /* op_helper.c */
 /* used for debug or cpu save/restore */
 void cpu_get_fp80(uint64_t *pmant, uint16_t *pexp, floatx80 f);
diff --git a/target-i386/seg_helper.c b/target-i386/seg_helper.c
index 8c3f92c..4f11dc4 100644
--- a/target-i386/seg_helper.c
+++ b/target-i386/seg_helper.c
@@ -409,11 +409,7 @@ static void switch_tss(CPUX86State *env, int tss_selector,
         for (i = 0; i < 6; i++) {
             load_seg_vm(env, i, new_segs[i]);
         }
-        /* in vm86, CPL is always 3 */
-        cpu_x86_set_cpl(env, 3);
     } else {
-        /* CPL is set the RPL of CS */
-        cpu_x86_set_cpl(env, new_segs[R_CS] & 3);
         /* first just selectors as the rest may trigger exceptions */
         for (i = 0; i < 6; i++) {
             cpu_x86_load_seg_cache(env, i, new_segs[i], 0, 0, 0);
@@ -752,19 +748,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
     }
     SET_ESP(esp, sp_mask);
 
+    /* interrupt gate clear IF mask */
+    if ((type & 1) == 0) {
+        env->eflags &= ~IF_MASK;
+    }
+    env->eflags &= ~(TF_MASK | VM_MASK | RF_MASK | NT_MASK);
+
     selector = (selector & ~3) | dpl;
     cpu_x86_load_seg_cache(env, R_CS, selector,
                    get_seg_base(e1, e2),
                    get_seg_limit(e1, e2),
                    e2);
-    cpu_x86_set_cpl(env, dpl);
     env->eip = offset;
-
-    /* interrupt gate clear IF mask */
-    if ((type & 1) == 0) {
-        env->eflags &= ~IF_MASK;
-    }
-    env->eflags &= ~(TF_MASK | VM_MASK | RF_MASK | NT_MASK);
 }
 
 #ifdef TARGET_X86_64
@@ -917,19 +912,18 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int,
     }
     env->regs[R_ESP] = esp;
 
+    /* interrupt gate clear IF mask */
+    if ((type & 1) == 0) {
+        env->eflags &= ~IF_MASK;
+    }
+    env->eflags &= ~(TF_MASK | VM_MASK | RF_MASK | NT_MASK);
+
     selector = (selector & ~3) | dpl;
     cpu_x86_load_seg_cache(env, R_CS, selector,
                    get_seg_base(e1, e2),
                    get_seg_limit(e1, e2),
                    e2);
-    cpu_x86_set_cpl(env, dpl);
     env->eip = offset;
-
-    /* interrupt gate clear IF mask */
-    if ((type & 1) == 0) {
-        env->eflags &= ~IF_MASK;
-    }
-    env->eflags &= ~(TF_MASK | VM_MASK | RF_MASK | NT_MASK);
 }
 #endif
@@ -960,7 +954,8 @@ void helper_syscall(CPUX86State *env, int next_eip_addend)
 
         code64 = env->hflags & HF_CS64_MASK;
 
-        cpu_x86_set_cpl(env, 0);
+        env->eflags &= ~env->fmask;
+        cpu_load_eflags(env, env->eflags, 0);
         cpu_x86_load_seg_cache(env, R_CS, selector & 0xfffc,
                            0, 0xffffffff,
                                DESC_G_MASK | DESC_P_MASK |
@@ -972,8 +967,6 @@ void helper_syscall(CPUX86State *env, int next_eip_addend)
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
                                DESC_S_MASK |
                                DESC_W_MASK | DESC_A_MASK);
-        env->eflags &= ~env->fmask;
-        cpu_load_eflags(env, env->eflags, 0);
         if (code64) {
             env->eip = env->lstar;
         } else {
@@ -982,7 +975,7 @@
     } else {
         env->regs[R_ECX] = (uint32_t)(env->eip + next_eip_addend);
 
-        cpu_x86_set_cpl(env, 0);
+        env->eflags &= ~(IF_MASK | RF_MASK | VM_MASK);
         cpu_x86_load_seg_cache(env, R_CS, selector & 0xfffc,
                            0, 0xffffffff,
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
@@ -993,7 +986,6 @@
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
                                DESC_S_MASK |
                                DESC_W_MASK | DESC_A_MASK);
-        env->eflags &= ~(IF_MASK | RF_MASK | VM_MASK);
         env->eip = (uint32_t)env->star;
     }
 }
@@ -1014,6 +1006,9 @@ void helper_sysret(CPUX86State *env, int dflag)
     }
     selector = (env->star >> 48) & 0xffff;
     if (env->hflags & HF_LMA_MASK) {
+        cpu_load_eflags(env, (uint32_t)(env->regs[11]), TF_MASK | AC_MASK
+                        | ID_MASK | IF_MASK | IOPL_MASK | VM_MASK | RF_MASK |
+                        NT_MASK);
         if (dflag == 2) {
             cpu_x86_load_seg_cache(env, R_CS, (selector + 16) | 3,
                                    0, 0xffffffff,
@@ -1035,11 +1030,8 @@
                                    DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
                                    DESC_S_MASK | (3 << DESC_DPL_SHIFT) |
                                    DESC_W_MASK | DESC_A_MASK);
-        cpu_load_eflags(env, (uint32_t)(env->regs[11]), TF_MASK | AC_MASK
-                        | ID_MASK | IF_MASK | IOPL_MASK | VM_MASK | RF_MASK |
-                        NT_MASK);
-        cpu_x86_set_cpl(env, 3);
     } else {
+        env->eflags |= IF_MASK;
         cpu_x86_load_seg_cache(env, R_CS, selector | 3,
                                0, 0xffffffff,
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
@@ -1051,8 +1043,6 @@ void helper_sysret(CPUX86State *env, int dflag)
                                DESC_G_MASK | DESC_B_MASK | DESC_P_MASK |
                                DESC_S_MASK | (3 << DESC_DPL_SHIFT) |
                                DESC_W_MASK | DESC_A_MASK);
-        env->eflags |= IF_MASK;
-        cpu_x86_set_cpl(env, 3);
     }
 }
 #endif
@@ -1905,7 +1895,6 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
                        get_seg_base(e1, e2),
                        get_seg_limit(e1, e2),
                        e2);
-        cpu_x86_set_cpl(env, dpl);
         SET_ESP(sp, sp_mask);
         env->eip = offset;
     }
@@ -2134,7 +2123,6 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
                        get_seg_base(e1, e2),
                        get_seg_limit(e1, e2),
                        e2);
-        cpu_x86_set_cpl(env, rpl);
         sp = new_esp;
 #ifdef TARGET_X86_64
         if (env->hflags & HF_CS64_MASK) {
@@ -2185,7 +2173,6 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
                 IF_MASK | IOPL_MASK | VM_MASK | NT_MASK | VIF_MASK |
                 VIP_MASK);
     load_seg_vm(env, R_CS, new_cs & 0xffff);
-    cpu_x86_set_cpl(env, 3);
     load_seg_vm(env, R_SS, new_ss & 0xffff);
     load_seg_vm(env, R_ES, new_es & 0xffff);
     load_seg_vm(env, R_DS, new_ds & 0xffff);
@@ -2238,7 +2225,6 @@ void helper_sysenter(CPUX86State *env)
         raise_exception_err(env, EXCP0D_GPF, 0);
     }
     env->eflags &= ~(VM_MASK | IF_MASK | RF_MASK);
-    cpu_x86_set_cpl(env, 0);
 
 #ifdef TARGET_X86_64
     if (env->hflags & HF_LMA_MASK) {
@@ -2274,7 +2260,6 @@ void helper_sysexit(CPUX86State *env, int dflag)
     if (env->sysenter_cs == 0 || cpl != 0) {
         raise_exception_err(env, EXCP0D_GPF, 0);
     }
-    cpu_x86_set_cpl(env, 3);
 
 #ifdef TARGET_X86_64
     if (dflag == 2) {
         cpu_x86_load_seg_cache(env, R_CS, ((env->sysenter_cs + 32) & 0xfffc) |
diff --git a/target-i386/smm_helper.c b/target-i386/smm_helper.c
index 35901c9..3e94391 100644
--- a/target-i386/smm_helper.c
+++ b/target-i386/smm_helper.c
@@ -191,16 +191,6 @@ void helper_rsm(CPUX86State *env)
 #ifdef TARGET_X86_64
     cpu_load_efer(env, ldq_phys(cs->as, sm_state + 0x7ed0));
 
-    for (i = 0; i < 6; i++) {
-        offset = 0x7e00 + i * 16;
-        cpu_x86_load_seg_cache(env, i,
-                               lduw_phys(cs->as, sm_state + offset),
-                               ldq_phys(cs->as, sm_state + offset + 8),
-                               ldl_phys(cs->as, sm_state + offset + 4),
-                               (lduw_phys(cs->as, sm_state + offset + 2) &
-                                0xf0ff) << 8);
-    }
-
     env->gdt.base = ldq_phys(cs->as, sm_state + 0x7e68);
     env->gdt.limit = ldl_phys(cs->as, sm_state + 0x7e64);
 
@@ -238,6 +228,16 @@ void helper_rsm(CPUX86State *env)
     cpu_x86_update_cr3(env, ldl_phys(cs->as, sm_state + 0x7f50));
     cpu_x86_update_cr0(env, ldl_phys(cs->as, sm_state + 0x7f58));
 
+    for (i = 0; i < 6; i++) {
+        offset = 0x7e00 + i * 16;
+        cpu_x86_load_seg_cache(env, i,
+                               lduw_phys(cs->as, sm_state + offset),
+                               ldq_phys(cs->as, sm_state + offset + 8),
+                               ldl_phys(cs->as, sm_state + offset + 4),
+                               (lduw_phys(cs->as, sm_state + offset + 2) &
+                                0xf0ff) << 8);
+    }
+
     val = ldl_phys(cs->as, sm_state + 0x7efc); /* revision ID */
     if (val & 0x20000) {
         env->smbase = ldl_phys(cs->as, sm_state + 0x7f00) & ~0x7fff;
diff --git a/target-i386/svm_helper.c b/target-i386/svm_helper.c
index aa17ecd..02f6b34 100644
--- a/target-i386/svm_helper.c
+++ b/target-i386/svm_helper.c
@@ -282,9 +282,6 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
                                 env->vm_vmcb + offsetof(struct vmcb, save.dr7));
     env->dr[6] = ldq_phys(cs->as,
                                 env->vm_vmcb + offsetof(struct vmcb, save.dr6));
-    cpu_x86_set_cpl(env, ldub_phys(cs->as,
-                                   env->vm_vmcb + offsetof(struct vmcb,
-                                                           save.cpl)));
 
     /* FIXME: guest state consistency checks */
 
@@ -703,7 +700,8 @@ void helper_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
     cpu_load_eflags(env, ldq_phys(cs->as,
                                   env->vm_hsave + offsetof(struct vmcb,
                                                            save.rflags)),
-                    ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK));
+                    ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK |
+                      VM_MASK));
     CC_OP = CC_OP_EFLAGS;
     svm_load_seg_cache(env, env->vm_hsave + offsetof(struct vmcb, save.es),
@@ -728,7 +726,6 @@ void helper_vmexit(CPUX86State *env, uint32_t exit_code, uint64_t exit_info_1)
                env->vm_hsave + offsetof(struct vmcb, save.dr7));
 
     /* other setups */
-    cpu_x86_set_cpl(env, 0);
     stq_phys(cs->as, env->vm_vmcb + offsetof(struct vmcb, control.exit_code),
              exit_code);
     stq_phys(cs->as, env->vm_vmcb + offsetof(struct vmcb, control.exit_info_1),