From patchwork Wed Apr 25 11:54:44 2018
X-Patchwork-Submitter: Simon Guo
X-Patchwork-Id: 904197
From: wei.guo.simon@gmail.com
To: kvm-ppc@vger.kernel.org
Cc: Simon Guo, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org
Subject: [PATCH 11/11] KVM: PPC: reconstruct LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_instr() input
Date: Wed, 25 Apr 2018 19:54:44 +0800
Message-Id: <1524657284-16706-12-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1524657284-16706-1-git-send-email-wei.guo.simon@gmail.com>
References: <1524657284-16706-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

This patch reconstructs the LOAD_VSX/STORE_VSX instruction MMIO emulation
with analyse_instr() input. It uses the VSX_FPCONV/VSX_SPLAT/SIGNEXT flags
exported by analyse_instr() and handles them accordingly.

When emulating a VSX store, the VSX register needs to be flushed first so
that the correct register value can be retrieved before it is written to
I/O memory.

Suggested-by: Paul Mackerras
Signed-off-by: Simon Guo
---
 arch/powerpc/kvm/emulate_loadstore.c | 256 ++++++++++++++---------------------
 1 file changed, 101 insertions(+), 155 deletions(-)

diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 0bfee2f..bbd2f58 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -181,6 +181,54 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		break;
 #endif
+#ifdef CONFIG_VSX
+	case LOAD_VSX: {
+		int io_size_each;
+
+		if (op.vsx_flags & VSX_CHECK_VEC) {
+			if (kvmppc_check_altivec_disabled(vcpu))
+				return EMULATE_DONE;
+		} else {
+			if (kvmppc_check_vsx_disabled(vcpu))
+				return EMULATE_DONE;
+		}
+
+		if (op.vsx_flags & VSX_FPCONV)
+			vcpu->arch.mmio_sp64_extend = 1;
+
+		if (op.element_size == 8) {
+			if (op.vsx_flags & VSX_SPLAT)
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
+			else
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD;
+		} else if (op.element_size == 4) {
+			if (op.vsx_flags & VSX_SPLAT)
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD_LOAD_DUMP;
+			else
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD;
+		} else
+			break;
+
+		if (size < op.element_size) {
+			/* precision convert case: lxsspx, etc */
+			vcpu->arch.mmio_vsx_copy_nums = 1;
+			io_size_each = size;
+		} else { /* lxvw4x, lxvd2x, etc */
+			vcpu->arch.mmio_vsx_copy_nums =
+					size/op.element_size;
+			io_size_each = op.element_size;
+		}
+
+		emulated = kvmppc_handle_vsx_load(run, vcpu,
+				KVM_MMIO_REG_VSX|op.reg, io_size_each,
+				1, op.type & SIGNEXT);
+		break;
+	}
+#endif
 	case STORE:
 		if (op.type & UPDATE) {
 			vcpu->arch.mmio_ra = op.update_reg;
@@ -248,6 +296,59 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		}
 		break;
 #endif
+#ifdef CONFIG_VSX
+	case STORE_VSX: {
+		/* io length for each mmio emulation */
+		int io_size_each;
+
+		if (op.vsx_flags & VSX_CHECK_VEC) {
+			if (kvmppc_check_altivec_disabled(vcpu))
+				return EMULATE_DONE;
+		} else {
+			if (kvmppc_check_vsx_disabled(vcpu))
+				return EMULATE_DONE;
+		}
+
+		/* if it is PR KVM, the FP/VEC/VSX registers need to
+		 * be flushed so that kvmppc_handle_store() can read
+		 * actual VMX vals from vcpu->arch.
+		 */
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
+					MSR_VSX);
+
+		if (op.vsx_flags & VSX_FPCONV)
+			vcpu->arch.mmio_sp64_extend = 1;
+
+		/* stxsiwx has a special vsx_offset */
+		if ((get_op(inst) == 31) &&
+				(get_xop(inst) == OP_31_XOP_STXSIWX))
+			vcpu->arch.mmio_vsx_offset = 1;
+
+		if (op.element_size == 8)
+			vcpu->arch.mmio_vsx_copy_type =
+					KVMPPC_VSX_COPY_DWORD;
+		else if (op.element_size == 4)
+			vcpu->arch.mmio_vsx_copy_type =
+					KVMPPC_VSX_COPY_WORD;
+		else
+			break;
+
+		if (size < op.element_size) {
+			/* precise conversion case, like stxsspx */
+			vcpu->arch.mmio_vsx_copy_nums = 1;
+			io_size_each = size;
+		} else { /* stxvw4x, stxvd2x, etc */
+			vcpu->arch.mmio_vsx_copy_nums =
+					size/op.element_size;
+			io_size_each = op.element_size;
+		}
+
+		emulated = kvmppc_handle_vsx_store(run, vcpu,
+				op.reg, io_size_each, 1);
+		break;
+	}
+#endif
 	case CACHEOP:
 		/* Do nothing. The guest is performing dcbi because
 		 * hardware DMA is not snooped by the dcache, but
@@ -262,161 +363,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		}
 	}
 
-
-	if (emulated == EMULATE_DONE)
-		goto out;
-
-	switch (get_op(inst)) {
-	case 31:
-		switch (get_xop(inst)) {
-#ifdef CONFIG_VSX
-		case OP_31_XOP_LXSDX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_LXSSPX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXSIWAX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 1);
-			break;
-
-		case OP_31_XOP_LXSIWZX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVD2X:
-		/*
-		 * In this case, the official load/store process is like this:
-		 * Step1, exit from vm by page fault isr, then kvm save vsr.
-		 * Please see guest_exit_cont->store_fp_state->SAVE_32VSRS
-		 * as reference.
-		 *
-		 * Step2, copy data between memory and VCPU
-		 * Notice: for LXVD2X/STXVD2X/LXVW4X/STXVW4X, we use
-		 * 2copies*8bytes or 4copies*4bytes
-		 * to simulate one copy of 16bytes.
-		 * Also there is an endian issue here, we should notice the
-		 * layout of memory.
-		 * Please see MARCO of LXVD2X_ROT/STXVD2X_ROT as more reference.
-		 * If host is little-endian, kvm will call XXSWAPD for
-		 * LXVD2X_ROT/STXVD2X_ROT.
-		 * So, if host is little-endian,
-		 * the postion of memeory should be swapped.
-		 *
-		 * Step3, return to guest, kvm reset register.
-		 * Please see kvmppc_hv_entry->load_fp_state->REST_32VSRS
-		 * as reference.
-		 */
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 2;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVW4X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 4;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVDSX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type =
-					KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_STXSDX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-					rs, 8, 1);
-			break;
-
-		case OP_31_XOP_STXSSPX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-					rs, 4, 1);
-			break;
-
-		case OP_31_XOP_STXSIWX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_offset = 1;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-					rs, 4, 1);
-			break;
-
-		case OP_31_XOP_STXVD2X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 2;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-					rs, 8, 1);
-			break;
-
-		case OP_31_XOP_STXVW4X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 4;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-					rs, 4, 1);
-			break;
-#endif /* CONFIG_VSX */
-
-		default:
-			emulated = EMULATE_FAIL;
-			break;
-		}
-		break;
-
-	default:
-		emulated = EMULATE_FAIL;
-		break;
-	}
-
-out:
 	if (emulated == EMULATE_FAIL) {
 		advance = 0;
 		kvmppc_core_queue_program(vcpu, 0);
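
As a quick reference, the copy-splitting rule shared by the new LOAD_VSX and
STORE_VSX cases can be exercised standalone. The sketch below is not part of
the patch: vsx_mmio_split() and its sample values are hypothetical scaffolding,
but the size/element_size arithmetic mirrors the hunks above (a precision-convert
access such as lxsspx does a single copy of the memory access size, while a
full-width access such as lxvd2x or lxvw4x does size/element_size copies of
element_size bytes each).

#include <stdio.h>

/*
 * Standalone illustration (not kernel code) of the MMIO copy split used by
 * the LOAD_VSX/STORE_VSX cases above.  "size" stands in for the instruction's
 * memory access size and "element_size" for op.element_size reported by
 * analyse_instr(); the function name and sample values are made up for the
 * example.
 */
static void vsx_mmio_split(const char *insn, int size, int element_size)
{
	int copy_nums, io_size_each;

	if (size < element_size) {
		/* precision convert case: lxsspx, stxsspx, etc. */
		copy_nums = 1;
		io_size_each = size;
	} else {
		/* full-width case: lxvw4x, lxvd2x, stxvd2x, etc. */
		copy_nums = size / element_size;
		io_size_each = element_size;
	}

	printf("%-7s -> %d copy/copies of %d byte(s) per MMIO emulation\n",
	       insn, copy_nums, io_size_each);
}

int main(void)
{
	vsx_mmio_split("lxsspx", 4, 8);		/* 1 copy of 4 bytes (with mmio_sp64_extend set) */
	vsx_mmio_split("lxvd2x", 16, 8);	/* 2 copies of 8 bytes */
	vsx_mmio_split("lxvw4x", 16, 4);	/* 4 copies of 4 bytes */
	return 0;
}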