From patchwork Mon Jun 2 22:01:01 2014
X-Patchwork-Submitter: Alex Williamson
X-Patchwork-Id: 355093
From: Alex Williamson
To: qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com
Date: Mon, 02 Jun 2014 16:01:01 -0600
Message-ID: <20140602220101.26111.94563.stgit@bling.home>
In-Reply-To: <20140602215946.26111.16417.stgit@bling.home>
References: <20140602215946.26111.16417.stgit@bling.home>
User-Agent: StGit/0.17-dirty
Subject: [Qemu-devel] [PULL 8/8] vfio: Add guest side IOMMU support

From: David Gibson

This patch uses the new IOMMU notifiers to allow VFIO passthrough
devices to work with guest-side IOMMUs, as long as the host-side VFIO
IOMMU has sufficient capability and granularity to match the guest
side.  It works by tracking all map and unmap operations on the guest
IOMMU using the notifiers, and mirroring them into VFIO.

There are a number of FIXMEs, and the scheme involves rather more
notifier structures than I'd like, but it should make for a reasonable
proof of concept.  (Two short sketches after the patch illustrate the
producer side of these notifications and the ioctl underneath the DMA
map helper.)
Signed-off-by: David Gibson
Signed-off-by: Alexey Kardashevskiy
Signed-off-by: Alex Williamson
---
 hw/misc/vfio.c | 139 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 134 insertions(+), 5 deletions(-)

diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
index b170fd2..7437c2e 100644
--- a/hw/misc/vfio.c
+++ b/hw/misc/vfio.c
@@ -160,10 +160,18 @@ typedef struct VFIOContainer {
         };
         void (*release)(struct VFIOContainer *);
     } iommu_data;
+    QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
     QLIST_HEAD(, VFIOGroup) group_list;
     QLIST_ENTRY(VFIOContainer) next;
 } VFIOContainer;
 
+typedef struct VFIOGuestIOMMU {
+    VFIOContainer *container;
+    MemoryRegion *iommu;
+    Notifier n;
+    QLIST_ENTRY(VFIOGuestIOMMU) giommu_next;
+} VFIOGuestIOMMU;
+
 /* Cache of MSI-X setup plus extra mmap and memory region for split BAR map */
 typedef struct VFIOMSIXInfo {
     uint8_t table_bar;
@@ -2383,7 +2391,8 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
 
 static bool vfio_listener_skipped_section(MemoryRegionSection *section)
 {
-    return !memory_region_is_ram(section->mr) ||
+    return (!memory_region_is_ram(section->mr) &&
+            !memory_region_is_iommu(section->mr)) ||
            /*
             * Sizing an enabled 64-bit BAR can cause spurious mappings to
             * addresses in the upper part of the 64-bit address space.  These
@@ -2393,6 +2402,65 @@ static bool vfio_listener_skipped_section(MemoryRegionSection *section)
            section->offset_within_address_space & (1ULL << 63);
 }
 
+static void vfio_iommu_map_notify(Notifier *n, void *data)
+{
+    VFIOGuestIOMMU *giommu = container_of(n, VFIOGuestIOMMU, n);
+    VFIOContainer *container = giommu->container;
+    IOMMUTLBEntry *iotlb = data;
+    MemoryRegion *mr;
+    hwaddr xlat;
+    hwaddr len = iotlb->addr_mask + 1;
+    void *vaddr;
+    int ret;
+
+    DPRINTF("iommu map @ %"HWADDR_PRIx" - %"HWADDR_PRIx"\n",
+            iotlb->iova, iotlb->iova + iotlb->addr_mask);
+
+    /*
+     * The IOMMU TLB entry we have just covers translation through
+     * this IOMMU to its immediate target.  We need to translate
+     * it the rest of the way through to memory.
+     */
+    mr = address_space_translate(&address_space_memory,
+                                 iotlb->translated_addr,
+                                 &xlat, &len, iotlb->perm & IOMMU_WO);
+    if (!memory_region_is_ram(mr)) {
+        DPRINTF("iommu map to non memory area %"HWADDR_PRIx"\n",
+                xlat);
+        return;
+    }
+    /*
+     * Translation truncates length to the IOMMU page size,
+     * check that it did not truncate too much.
+     */
+    if (len & iotlb->addr_mask) {
+        DPRINTF("iommu has granularity incompatible with target AS\n");
+        return;
+    }
+
+    if (iotlb->perm != IOMMU_NONE) {
+        vaddr = memory_region_get_ram_ptr(mr) + xlat;
+
+        ret = vfio_dma_map(container, iotlb->iova,
+                           iotlb->addr_mask + 1, vaddr,
+                           !(iotlb->perm & IOMMU_WO) || mr->readonly);
+        if (ret) {
+            error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
+                         "0x%"HWADDR_PRIx", %p) = %d (%m)",
+                         container, iotlb->iova,
+                         iotlb->addr_mask + 1, vaddr, ret);
+        }
+    } else {
+        ret = vfio_dma_unmap(container, iotlb->iova, iotlb->addr_mask + 1);
+        if (ret) {
+            error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
+                         "0x%"HWADDR_PRIx") = %d (%m)",
+                         container, iotlb->iova,
+                         iotlb->addr_mask + 1, ret);
+        }
+    }
+}
+
 static void vfio_listener_region_add(MemoryListener *listener,
                                      MemoryRegionSection *section)
 {
@@ -2403,8 +2471,6 @@ static void vfio_listener_region_add(MemoryListener *listener,
     void *vaddr;
     int ret;
 
-    assert(!memory_region_is_iommu(section->mr));
-
     if (vfio_listener_skipped_section(section)) {
         DPRINTF("SKIPPING region_add %"HWADDR_PRIx" - %"PRIx64"\n",
                 section->offset_within_address_space,
@@ -2428,15 +2494,57 @@ static void vfio_listener_region_add(MemoryListener *listener,
         return;
     }
 
+    memory_region_ref(section->mr);
+
+    if (memory_region_is_iommu(section->mr)) {
+        VFIOGuestIOMMU *giommu;
+
+        DPRINTF("region_add [iommu] %"HWADDR_PRIx" - %"HWADDR_PRIx"\n",
+                iova, int128_get64(int128_sub(llend, int128_one())));
+        /*
+         * FIXME: We should do some checking to see if the
+         * capabilities of the host VFIO IOMMU are adequate to model
+         * the guest IOMMU
+         *
+         * FIXME: For VFIO iommu types which have KVM acceleration to
+         * avoid bouncing all map/unmaps through qemu this way, this
+         * would be the right place to wire that up (tell the KVM
+         * device emulation the VFIO iommu handles to use).
+         */
+        /*
+         * This assumes that the guest IOMMU is empty of
+         * mappings at this point.
+         *
+         * One way of doing this is:
+         * 1. Avoid sharing IOMMUs between emulated devices or different
+         * IOMMU groups.
+         * 2. Implement VFIO_IOMMU_ENABLE in the host kernel to fail if
+         * there are some mappings in IOMMU.
+         *
+         * VFIO on SPAPR does that. Other IOMMU models may do that different,
+         * they must make sure there are no existing mappings or
+         * loop through existing mappings to map them into VFIO.
+         */
+        giommu = g_malloc0(sizeof(*giommu));
+        giommu->iommu = section->mr;
+        giommu->container = container;
+        giommu->n.notify = vfio_iommu_map_notify;
+        QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
+        memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
+
+        return;
+    }
+
+    /* Here we assume that memory_region_is_ram(section->mr)==true */
+
     end = int128_get64(llend);
     vaddr = memory_region_get_ram_ptr(section->mr) +
             section->offset_within_region +
             (iova - section->offset_within_address_space);
 
-    DPRINTF("region_add %"HWADDR_PRIx" - %"HWADDR_PRIx" [%p]\n",
+    DPRINTF("region_add [ram] %"HWADDR_PRIx" - %"HWADDR_PRIx" [%p]\n",
             iova, end - 1, vaddr);
 
-    memory_region_ref(section->mr);
     ret = vfio_dma_map(container, iova, end - iova, vaddr, section->readonly);
     if (ret) {
         error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
@@ -2480,6 +2588,27 @@ static void vfio_listener_region_del(MemoryListener *listener,
         return;
     }
 
+    if (memory_region_is_iommu(section->mr)) {
+        VFIOGuestIOMMU *giommu;
+
+        QLIST_FOREACH(giommu, &container->giommu_list, giommu_next) {
+            if (giommu->iommu == section->mr) {
+                memory_region_unregister_iommu_notifier(&giommu->n);
+                QLIST_REMOVE(giommu, giommu_next);
+                g_free(giommu);
+                break;
+            }
+        }
+
+        /*
+         * FIXME: We assume the one big unmap below is adequate to
+         * remove any individual page mappings in the IOMMU which
+         * might have been copied into VFIO. This works for a page table
+         * based IOMMU where a big unmap flattens a large range of IO-PTEs.
+         * That may not be true for all IOMMU types.
+         */
+    }
+
     iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
     end = (section->offset_within_address_space + int128_get64(section->size)) &
           TARGET_PAGE_MASK;
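
---

For readers tracing the flow end to end: the guest IOMMU model is the
producer side of the notifications this patch consumes.  Below is a
minimal sketch of that side, assuming the memory API of this QEMU
generation (memory_region_notify_iommu() taking an IOMMUTLBEntry by
value); the function my_iommu_update_mapping() and its parameters are
invented for illustration and are not part of this patch or of QEMU.

#include "exec/memory.h"

/*
 * Hypothetical guest IOMMU model (illustration only).  Whenever the
 * guest rewrites a translation entry, the model builds an
 * IOMMUTLBEntry and pushes it through the region's notifier list,
 * which, after this patch, includes vfio_iommu_map_notify().
 */
static void my_iommu_update_mapping(MemoryRegion *iommu_mr, hwaddr iova,
                                    hwaddr translated, hwaddr page_mask,
                                    IOMMUAccessFlags perm)
{
    IOMMUTLBEntry entry = {
        .iova = iova & ~page_mask,          /* page-aligned IO virtual addr */
        .translated_addr = translated & ~page_mask,
        .addr_mask = page_mask,             /* mapping size minus one */
        .perm = perm,                       /* IOMMU_NONE signals an unmap */
    };

    /* Fans out to every notifier registered on this region. */
    memory_region_notify_iommu(iommu_mr, entry);
}

An IOMMU_NONE entry takes the unmap path in vfio_iommu_map_notify()
above; anything else takes the map path, with write permission derived
from IOMMU_WO.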
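On the other end, the vfio_dma_map()/vfio_dma_unmap() helpers that the
notifier calls boil down to VFIO ioctls on the container fd.  A
simplified sketch using the type1 definitions from the kernel's
<linux/vfio.h> UAPI (a standalone illustration, not the helper QEMU
actually ships):

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Roughly what vfio_dma_map() does underneath: ask the host VFIO
 * IOMMU to pin the host-virtual range backing vaddr and install the
 * iova -> host-physical translation.
 */
static int dma_map_sketch(int container_fd, uint64_t iova, uint64_t size,
                          void *vaddr, bool readonly)
{
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ |
                 (readonly ? 0 : VFIO_DMA_MAP_FLAG_WRITE),
        .vaddr = (uintptr_t)vaddr,
        .iova  = iova,
        .size  = size,
    };

    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map) ? -errno : 0;
}

The mirroring scheme works because each guest map/unmap arrives at the
notifier with an iova, a size (addr_mask + 1) and a permission, which
is exactly the granularity this ioctl needs.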