| Message ID | 1484817932-14452-4-git-send-email-peterx@redhat.com |
|---|---|
| State | New |
```diff
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index ce55dff..4d90844 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -354,11 +354,10 @@ static void vfio_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
         return;
     }
 
-    if (!vfio_get_vaddr(iotlb, &vaddr, &read_only)) {
-        return;
-    }
-
     if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
+        if (!vfio_get_vaddr(iotlb, &vaddr, &read_only)) {
+            return;
+        }
         ret = vfio_dma_map(container, iova,
                            iotlb->addr_mask + 1, vaddr,
                            read_only);
```
The Linux vfio driver supports VFIO_IOMMU_UNMAP_DMA on a very big region. This can be leveraged by the QEMU IOMMU implementation to clean up existing page mappings for an entire iova address space (by notifying with an IOTLB with an extremely huge addr_mask). However, the current vfio_iommu_map_notify() does not allow that: it makes sure that the translated address in the IOTLB falls into a RAM range. The check makes sense, but it is only meaningful for map operations and means little for unmaps.

This patch moves the check into the map logic only, so that we get faster unmap handling (no need to translate again), and we can better support unmapping a very big region even when it covers non-RAM ranges or ranges that do not exist at all.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 hw/vfio/common.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
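To make the reordered control flow easier to follow outside the full QEMU tree, here is a minimal, self-contained C sketch. The types and helpers below (IOMMUTLBEntry, vfio_get_vaddr(), vfio_dma_map(), vfio_dma_unmap()) are simplified stand-ins, not QEMU's real definitions; only the shape of vfio_iommu_map_notify() mirrors the patch: the vaddr/RAM check runs on the map path only, while the unmap path never translates and can therefore cover an arbitrarily large (or partly non-existent) iova range.

```c
/*
 * Standalone sketch of the reordered notifier. All types and helper
 * bodies are simplified stand-ins for illustration; only the control
 * flow of vfio_iommu_map_notify() reflects the patch.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t hwaddr;

typedef enum {
    IOMMU_NONE = 0,
    IOMMU_RO   = 1,
    IOMMU_WO   = 2,
    IOMMU_RW   = 3,
} IOMMUAccessFlags;

typedef struct {
    hwaddr iova;
    hwaddr translated_addr;
    hwaddr addr_mask;              /* mapped region size - 1 */
    IOMMUAccessFlags perm;
} IOMMUTLBEntry;

/* Stub: in QEMU this resolves a host vaddr and checks it is RAM-backed. */
static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr, bool *read_only)
{
    *vaddr = (void *)(uintptr_t)iotlb->translated_addr;
    *read_only = !(iotlb->perm & IOMMU_WO);
    return true;
}

/* Stubs: stand-ins for the VFIO_IOMMU_MAP_DMA/UNMAP_DMA ioctl wrappers. */
static int vfio_dma_map(hwaddr iova, uint64_t size, void *vaddr, bool ro)
{
    printf("map   iova=0x%llx size=0x%llx ro=%d\n",
           (unsigned long long)iova, (unsigned long long)size, ro);
    return 0;
}

static int vfio_dma_unmap(hwaddr iova, uint64_t size)
{
    printf("unmap iova=0x%llx size=0x%llx\n",
           (unsigned long long)iova, (unsigned long long)size);
    return 0;
}

/* The point of the patch: only the map path needs a RAM-backed vaddr. */
static void vfio_iommu_map_notify(IOMMUTLBEntry *iotlb)
{
    hwaddr iova = iotlb->iova;
    void *vaddr;
    bool read_only;

    if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
        /* Map: the translation must land in RAM, so check it here. */
        if (!vfio_get_vaddr(iotlb, &vaddr, &read_only)) {
            return;
        }
        vfio_dma_map(iova, iotlb->addr_mask + 1, vaddr, read_only);
    } else {
        /* Unmap: no translation needed; the range may even cover holes. */
        vfio_dma_unmap(iova, iotlb->addr_mask + 1);
    }
}

int main(void)
{
    /* Map 4KiB at iova 0x1000, then unmap a whole 512GiB iova space. */
    IOMMUTLBEntry map   = { 0x1000, 0x7f0000000000ull, 0xfff, IOMMU_RW };
    IOMMUTLBEntry unmap = { 0x0, 0x0, (1ULL << 39) - 1, IOMMU_NONE };

    vfio_iommu_map_notify(&map);
    vfio_iommu_map_notify(&unmap);
    return 0;
}
```

Built with a plain C compiler, the sketch prints one map and one unmap line; the only behaviour it demonstrates is that the unmap path goes straight to the unmap call without ever needing a translated host address.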