From patchwork Tue Dec 13 06:21:47 2011
X-Patchwork-Submitter: Matt Evans
X-Patchwork-Id: 130995
From: Matt Evans <matt@ozlabs.org>
To: kvm@vger.kernel.org, kvm-ppc@vger.kernel.org
Cc: penberg@kernel.org, asias.hejun@gmail.com, levinsasha928@gmail.com,
	gorcunov@gmail.com
Subject: [PATCH V3 2/2] kvm tools: Create arch-specific kvm_cpu__emulate_{mm}io()
Date: Tue, 13 Dec 2011 17:21:47 +1100
Message-Id: <1323757307-10411-3-git-send-email-matt@ozlabs.org>
X-Mailer: git-send-email 1.7.0.4
In-Reply-To: <1323757307-10411-1-git-send-email-matt@ozlabs.org>
References: <1323757307-10411-1-git-send-email-matt@ozlabs.org>
X-Mailing-List: kvm-ppc@vger.kernel.org

Different architectures will deal with MMIO exits differently. For example,
KVM_EXIT_IO is x86-specific, and I/O cycles are often synthesised by steering
into windows in PCI bridges on other architectures.

This patch calls arch-specific kvm_cpu__emulate_io() and kvm_cpu__emulate_mmio()
from the main runloop's IO and MMIO exit handlers.
For x86, these directly call kvm__emulate_io() and kvm__emulate_mmio(), but
other architectures will perform some address munging before passing on the
call.

Signed-off-by: Matt Evans <matt@ozlabs.org>
---
 tools/kvm/kvm-cpu.c                      |   34 +++++++++++++++---------------
 tools/kvm/x86/include/kvm/kvm-cpu-arch.h |   17 ++++++++++++++-
 2 files changed, 33 insertions(+), 18 deletions(-)

diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
index 8ec4efa..b7ae3d3 100644
--- a/tools/kvm/kvm-cpu.c
+++ b/tools/kvm/kvm-cpu.c
@@ -52,11 +52,11 @@ static void kvm_cpu__handle_coalesced_mmio(struct kvm_cpu *cpu)
 		while (cpu->ring->first != cpu->ring->last) {
 			struct kvm_coalesced_mmio *m;
 			m = &cpu->ring->coalesced_mmio[cpu->ring->first];
-			kvm__emulate_mmio(cpu->kvm,
-					  m->phys_addr,
-					  m->data,
-					  m->len,
-					  1);
+			kvm_cpu__emulate_mmio(cpu->kvm,
+					      m->phys_addr,
+					      m->data,
+					      m->len,
+					      1);
 			cpu->ring->first = (cpu->ring->first + 1) % KVM_COALESCED_MMIO_MAX;
 		}
 	}
@@ -111,13 +111,13 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
 		case KVM_EXIT_IO: {
 			bool ret;
 
-			ret = kvm__emulate_io(cpu->kvm,
-					      cpu->kvm_run->io.port,
-					      (u8 *)cpu->kvm_run +
-					      cpu->kvm_run->io.data_offset,
-					      cpu->kvm_run->io.direction,
-					      cpu->kvm_run->io.size,
-					      cpu->kvm_run->io.count);
+			ret = kvm_cpu__emulate_io(cpu->kvm,
+						  cpu->kvm_run->io.port,
+						  (u8 *)cpu->kvm_run +
+						  cpu->kvm_run->io.data_offset,
+						  cpu->kvm_run->io.direction,
+						  cpu->kvm_run->io.size,
+						  cpu->kvm_run->io.count);
 
 			if (!ret)
 				goto panic_kvm;
@@ -126,11 +126,11 @@ int kvm_cpu__start(struct kvm_cpu *cpu)
 		case KVM_EXIT_MMIO: {
 			bool ret;
 
-			ret = kvm__emulate_mmio(cpu->kvm,
-					  cpu->kvm_run->mmio.phys_addr,
-					  cpu->kvm_run->mmio.data,
-					  cpu->kvm_run->mmio.len,
-					  cpu->kvm_run->mmio.is_write);
+			ret = kvm_cpu__emulate_mmio(cpu->kvm,
+						    cpu->kvm_run->mmio.phys_addr,
+						    cpu->kvm_run->mmio.data,
+						    cpu->kvm_run->mmio.len,
+						    cpu->kvm_run->mmio.is_write);
 
 			if (!ret)
 				goto panic_kvm;
diff --git a/tools/kvm/x86/include/kvm/kvm-cpu-arch.h b/tools/kvm/x86/include/kvm/kvm-cpu-arch.h
index 822d966..198efe6 100644
--- a/tools/kvm/x86/include/kvm/kvm-cpu-arch.h
+++ b/tools/kvm/x86/include/kvm/kvm-cpu-arch.h
@@ -4,7 +4,8 @@
 /* Architecture-specific kvm_cpu definitions. */
 
 #include <linux/kvm.h>	/* for struct kvm_regs */
-
+#include "kvm/kvm.h"	/* for kvm__emulate_{mm}io() */
+#include <stdbool.h>
 #include <pthread.h>
 
 struct kvm;
@@ -31,4 +32,18 @@ struct kvm_cpu {
 	struct kvm_coalesced_mmio_ring	*ring;
 };
 
+/*
+ * As these are such simple wrappers, let's have them in the header so they'll
+ * be cheaper to call:
+ */
+static inline bool kvm_cpu__emulate_io(struct kvm *kvm, u16 port, void *data, int direction, int size, u32 count)
+{
+	return kvm__emulate_io(kvm, port, data, direction, size, count);
+}
+
+static inline bool kvm_cpu__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write)
+{
+	return kvm__emulate_mmio(kvm, phys_addr, data, len, is_write);
+}
+
 #endif /* KVM__KVM_CPU_ARCH_H */