[v2,0/5] Add virtio-iommu driver

Message ID 20180621190655.56391-1-jean-philippe.brucker@arm.com

Message

Jean-Philippe Brucker June 21, 2018, 7:06 p.m. UTC
Implement the base virtio-iommu driver, following version 0.7 of the
specification [1].

Changes since last version [2]:
* Address comments, thanks again for the review.
* As suggested, add a DT binding description in patch 1.
* Depend on VIRTIO_MMIO=y to fix a build failure¹
* Switch to v0.7 of the spec, which changes resv_mem parameters and adds
  an MMIO flag. These are trivial but not backward compatible. Once
  device or driver is upstream, updates to the spec will rely on feature
  bits to stay compatible with this code.
* Implement the new tlb_sync interface, by splitting add_req() and
  sync_req() (sketched below). I noticed a small improvement on netperf
  stream because the synchronous iommu_unmap() also benefits from this.
  Other experiments, such as using kmem_cache for requests instead of
  kmalloc, didn't show any improvement.
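
A rough sketch of the resulting flow (paraphrasing patch 2; the
function bodies below are elided/condensed):

	/* unmap: only queue the UNMAP request, don't kick the device */
	static size_t viommu_unmap(struct iommu_domain *domain,
				   unsigned long iova, size_t size)
	{
		...
		ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
		return ret ? 0 : unmapped;
	}

	/* sync: kick once and wait for all in-flight requests */
	static void viommu_iotlb_sync(struct iommu_domain *domain)
	{
		viommu_sync_req(to_viommu_domain(domain)->viommu);
	}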

The driver is available on branch virtio-iommu/v0.7 [3], and the
x86+ACPI prototype is on branch virtio-iommu/devel. That x86 code hasn't
changed; it still requires the DMA ifdeffery, and I lack the expertise
to tidy it up. The kvmtool example device has been cleaned up and is
available on branch virtio-iommu/v0.7 [4].

Feedback welcome!

Thanks,
Jean

[1] virtio-iommu specification v0.7
    https://www.spinics.net/lists/linux-virtualization/msg34127.html
[2] virtio-iommu driver v1
    https://www.spinics.net/lists/kvm/msg164322.html
[3] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.7
    http://www.linux-arm.org/git?p=linux-jpb.git;a=shortlog;h=refs/heads/virtio-iommu/v0.7
[4] git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.7 

                                  ---
¹ A word on the module story. Because of complex dependencies, IOMMU
drivers cannot yet be .ko modules. Enabling that is outside the scope
of this series, but I have a small prototype on branch virtio-iommu/
module-devel. It seems desirable, since some distros currently ship the
transport code as a module and are unlikely to change this on our
account. Note that this series works fine with arm64 defconfig, which
selects VIRTIO_MMIO=y.

I could use some help to clean this up. Currently my solution is to
split virtio-iommu into a module and a three-line built-in stub
(sketched below), which isn't graceful but could have been worse.
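
For illustration, the stub could be as small as this (the file name is
hypothetical; only the IOMMU_OF_DECLARE line is taken from the series):

	/* virtio-iommu-stub.c -- built-in even when the driver is a
	 * module; its only purpose is getting the OF table entry into
	 * the built-in image */
	#include <linux/of_iommu.h>

	IOMMU_OF_DECLARE(viommu, "virtio,mmio");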

Keeping the whole virtio-iommu driver as builtin would require accessing
any virtio utility through symbol_get(). So far we only have seven of
those and could keep a list of pointer ops, but I find this solution
terrible. If virtio or virtio_mmio is a module, virtio-iommu also needs
to be one.

The minimal set of changes to make a module out of an OF-based IOMMU
driver seems to be:
* Export IOMMU symbols used by drivers
* Don't give up deferring probe in of_iommu
* Move IOMMU OF tables to .rodata
* Create a static virtio-iommu stub that declares the virtio-iommu OF
  table entry. The build system doesn't pick up IOMMU_OF_DECLARE when
  done from an object destined to be built as module :(
* Create a device_link between endpoint and IOMMU, to ensure that
  removing the IOMMU driver also unbinds the endpoint driver. Putting
  this in IOMMU core seems like a better approach since even when not
  built as module, unbinding an IOMMU device via sysfs will cause
  crashes.

With this, as long as virtio-mmio isn't loaded, the OF code defers probe
of endpoints that depend on virtio-iommu. Once virtio-mmio is loaded,
the probe is still deferred until virtio-iommu registers itself to the
virtio bus. Once virtio-iommu is loaded, probe of endpoints managed by
the IOMMU follows.

I'll investigate ACPI IORT when I find some time, though I don't expect
much complication and suggest we tackle one problem at a time. Since it
is a luxury that requires changes to the IOMMU core, module support is
left as a future improvement.
                                  ---

Jean-Philippe Brucker (5):
  dt-bindings: virtio: Specify #iommu-cells value for a virtio-iommu
  iommu: Add virtio-iommu driver
  iommu/virtio: Add probe request
  iommu/virtio: Add event queue
  vfio: Allow type-1 IOMMU instantiation for ARM

 .../devicetree/bindings/virtio/mmio.txt       |    8 +
 MAINTAINERS                                   |    6 +
 drivers/iommu/Kconfig                         |   11 +
 drivers/iommu/Makefile                        |    1 +
 drivers/iommu/virtio-iommu.c                  | 1164 +++++++++++++++++
 drivers/vfio/Kconfig                          |    2 +-
 include/uapi/linux/virtio_ids.h               |    1 +
 include/uapi/linux/virtio_iommu.h             |  172 +++
 8 files changed, 1364 insertions(+), 1 deletion(-)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

Comments

Michael S. Tsirkin June 22, 2018, 12:51 a.m. UTC | #1
On Thu, Jun 21, 2018 at 08:06:52PM +0100, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device, allowing the guest to
> send IOMMU requests such as map/unmap over the virtio-mmio transport
> without emulating page tables. This implementation handles ATTACH,
> DETACH, MAP and UNMAP requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  MAINTAINERS                       |   6 +
>  drivers/iommu/Kconfig             |  11 +
>  drivers/iommu/Makefile            |   1 +
>  drivers/iommu/virtio-iommu.c      | 929 ++++++++++++++++++++++++++++++
>  include/uapi/linux/virtio_ids.h   |   1 +
>  include/uapi/linux/virtio_iommu.h | 117 ++++
>  6 files changed, 1065 insertions(+)
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 55c08968aaab..bfb09dfa8e02 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -15248,6 +15248,12 @@ S:	Maintained
>  F:	drivers/virtio/virtio_input.c
>  F:	include/uapi/linux/virtio_input.h
>  
> +VIRTIO IOMMU DRIVER
> +M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> +S:	Maintained
> +F:	drivers/iommu/virtio-iommu.c
> +F:	include/uapi/linux/virtio_iommu.h
> +
>  VIRTUAL BOX GUEST DEVICE DRIVER
>  M:	Hans de Goede <hdegoede@redhat.com>
>  M:	Arnd Bergmann <arnd@arndb.de>

Please add the virtualization mailing list.

> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 958417f22020..820709383899 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -412,4 +412,15 @@ config QCOM_IOMMU
>  	help
>  	  Support for IOMMU on certain Qualcomm SoCs.
>  
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO_MMIO=y
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>  endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 244ad7913a81..b559c6ae81ea 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -33,3 +33,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..ea0242d8624b
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,929 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 Arm Limited
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +#define VIOMMU_REQUEST_VQ		0
> +#define VIOMMU_NR_VQS			1
> +
> +#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */

Where does this come from? Do you absolutely have to have
an arbitrary timeout?

> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vqs[VIOMMU_NR_VQS];
> +	spinlock_t			request_lock;
> +	struct list_head		requests;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	u32				flags;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	unsigned long			nr_endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct list_head		list;
> +	void				*writeback;
> +	unsigned int			write_offset;
> +	unsigned int			len;
> +	char				buf[];
> +};
> +
> +#define to_viommu_domain(domain)	\
> +	container_of(domain, struct viommu_domain, domain)
> +
> +static int viommu_get_req_errno(void *buf, size_t len)
> +{
> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
> +
> +	switch (tail->status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +static void viommu_set_req_status(void *buf, size_t len, int status)
> +{
> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
> +
> +	tail->status = status;
> +}
> +
> +static off_t viommu_get_req_offset(struct viommu_dev *viommu,
> +				   struct virtio_iommu_req_head *req,
> +				   size_t len)
> +{
> +	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
> +
> +	return len - tail_size;
> +}
> +
> +/*
> + * __viommu_sync_req - Complete all in-flight requests
> + *
> + * Wait for all added requests to complete. When this function returns, all
> + * requests that were in-flight at the time of the call have completed.
> + */
> +static int __viommu_sync_req(struct viommu_dev *viommu)
> +{
> +	int ret = 0;
> +	unsigned int len;
> +	size_t write_len;
> +	struct viommu_request *req;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
> +	ktime_t timeout = ktime_add_ms(ktime_get(), VIOMMU_REQUEST_TIMEOUT);
> +
> +	assert_spin_locked(&viommu->request_lock);
> +
> +	virtqueue_kick(vq);
> +
> +	while (!list_empty(&viommu->requests)) {
> +		len = 0;
> +		req = virtqueue_get_buf(vq, &len);
> +		if (req == NULL) {
> +			if (ktime_before(ktime_get(), timeout))
> +				continue;
> +
> +			/* After timeout, remove all requests */
> +			req = list_first_entry(&viommu->requests,
> +					       struct viommu_request, list);
> +			ret = -ETIMEDOUT;
> +		}
> +
> +		if (!len)
> +			viommu_set_req_status(req->buf, req->len,
> +					      VIRTIO_IOMMU_S_IOERR);
> +
> +		write_len = req->len - req->write_offset;
> +		if (req->writeback && len >= write_len)
> +			memcpy(req->writeback, req->buf + req->write_offset,
> +			       write_len);
> +
> +		list_del(&req->list);
> +		kfree(req);
> +	}
> +
> +	return ret;
> +}
> +
> +static int viommu_sync_req(struct viommu_dev *viommu)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +	ret = __viommu_sync_req(viommu);
> +	if (ret)
> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +	return ret;
> +}
> +
> +/*
> + * __viommu_add_request - Add one request to the queue
> + * @buf: pointer to the request buffer
> + * @len: length of the request buffer
> + * @writeback: copy data back to the buffer when the request completes.
> + *
> + * Add a request to the queue. Only synchronize the queue if it's already full.
> + * Otherwise don't kick the queue nor wait for requests to complete.
> + *
> + * When @writeback is true, data written by the device, including the request
> + * status, is copied into @buf after the request completes. This is unsafe if
> + * the caller allocates @buf on stack and drops the lock between add_req() and
> + * sync_req().
> + *
> + * Return 0 if the request was successfully added to the queue.
> + */
> +static int __viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len,
> +			    bool writeback)
> +{
> +	int ret;
> +	off_t write_offset;
> +	struct viommu_request *req;
> +	struct scatterlist top_sg, bottom_sg;
> +	struct scatterlist *sg[2] = { &top_sg, &bottom_sg };
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
> +
> +	assert_spin_locked(&viommu->request_lock);
> +
> +	write_offset = viommu_get_req_offset(viommu, buf, len);
> +	if (!write_offset)
> +		return -EINVAL;
> +
> +	req = kzalloc(sizeof(*req) + len, GFP_ATOMIC);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	req->len = len;
> +	if (writeback) {
> +		req->writeback = buf + write_offset;
> +		req->write_offset = write_offset;
> +	}
> +	memcpy(&req->buf, buf, write_offset);
> +
> +	sg_init_one(&top_sg, req->buf, write_offset);
> +	sg_init_one(&bottom_sg, req->buf + write_offset, len - write_offset);
> +
> +	ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
> +	if (ret == -ENOSPC) {
> +		/* If the queue is full, sync and retry */
> +		if (!__viommu_sync_req(viommu))
> +			ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
> +	}
> +	if (ret)
> +		goto err_free;
> +
> +	list_add_tail(&req->list, &viommu->requests);
> +	return 0;
> +
> +err_free:
> +	kfree(req);
> +	return ret;
> +}
> +
> +static int viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +	ret = __viommu_add_req(viommu, buf, len, false);
> +	if (ret)
> +		dev_dbg(viommu->dev, "could not add request: %d\n", ret);
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +	return ret;
> +}
> +
> +/*
> + * Send a request and wait for it to complete. Return the request status (as an
> + * errno)
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
> +				size_t len)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +
> +	ret = __viommu_add_req(viommu, buf, len, true);
> +	if (ret) {
> +		dev_dbg(viommu->dev, "could not add request (%d)\n", ret);
> +		goto out_unlock;
> +	}
> +
> +	ret = __viommu_sync_req(viommu);
> +	if (ret) {
> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
> +		/* Fall-through (get the actual request status) */
> +	}
> +
> +	ret = viommu_get_req_errno(buf, len);
> +out_unlock:
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +	return ret;
> +}
> +
> +/*
> + * viommu_add_mapping - add a mapping to the internal tree
> + *
> + * On success, return the new mapping. Otherwise return NULL.
> + */
> +static struct viommu_mapping *
> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
> +		   phys_addr_t paddr, size_t size, u32 flags)
> +{
> +	unsigned long irqflags;
> +	struct viommu_mapping *mapping;
> +
> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
> +	if (!mapping)
> +		return NULL;
> +
> +	mapping->paddr		= paddr;
> +	mapping->iova.start	= iova;
> +	mapping->iova.last	= iova + size - 1;
> +	mapping->flags		= flags;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, irqflags);
> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, irqflags);
> +
> +	return mapping;
> +}
> +
> +/*
> + * viommu_del_mappings - remove mappings from the internal tree
> + *
> + * @vdomain: the domain
> + * @iova: start of the range
> + * @size: size of the range. A size of 0 corresponds to the entire address
> + *	space.
> + *
> + * On success, returns the number of unmapped bytes (>= size)
> + */
> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
> +				  unsigned long iova, size_t size)
> +{
> +	size_t unmapped = 0;
> +	unsigned long flags;
> +	unsigned long last = iova + size - 1;
> +	struct viommu_mapping *mapping = NULL;
> +	struct interval_tree_node *node, *next;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
> +	while (next) {
> +		node = next;
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		next = interval_tree_iter_next(node, iova, last);
> +
> +		/* Trying to split a mapping? */
> +		if (mapping->iova.start < iova)
> +			break;
> +
> +		/*
> +		 * Note that for a partial range, this will return the full
> +		 * mapping so we avoid sending split requests to the device.
> +		 */
> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
> +
> +		interval_tree_remove(node, &vdomain->mappings);
> +		kfree(mapping);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return unmapped;
> +}
> +
> +/*
> + * viommu_replay_mappings - re-send MAP requests
> + *
> + * When reattaching a domain that was previously detached from all endpoints,
> + * mappings were deleted from the device. Re-create the mappings available in
> + * the internal tree.
> + */
> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
> +{
> +	int ret = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct virtio_iommu_req_map map;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	while (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		map = (struct virtio_iommu_req_map) {
> +			.head.type	= VIRTIO_IOMMU_T_MAP,
> +			.domain		= cpu_to_le32(vdomain->id),
> +			.virt_start	= cpu_to_le64(mapping->iova.start),
> +			.virt_end	= cpu_to_le64(mapping->iova.last),
> +			.phys_start	= cpu_to_le64(mapping->paddr),
> +			.flags		= cpu_to_le32(mapping->flags),
> +		};
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
> +		if (ret)
> +			break;
> +
> +		node = interval_tree_iter_next(node, 0, -1UL);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return ret;
> +}
> +
> +/* IOMMU API */
> +
> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +{
> +	struct viommu_domain *vdomain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
> +		return NULL;
> +
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return NULL;
> +
> +	mutex_init(&vdomain->mutex);
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&vdomain->domain)) {
> +		kfree(vdomain);
> +		return NULL;
> +	}
> +
> +	return &vdomain->domain;
> +}
> +
> +static int viommu_domain_finalise(struct viommu_dev *viommu,
> +				  struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	/* ida limits size to 31 bits. A value of 0 means "max" */
> +	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
> +				  1U << viommu->domain_bits;
> +
> +	vdomain->viommu		= viommu;
> +
> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> +	domain->geometry	= viommu->geometry;
> +
> +	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
> +	if (ret >= 0)
> +		vdomain->id = (unsigned int)ret;
> +
> +	return ret > 0 ? 0 : ret;
> +}
> +
> +static void viommu_domain_free(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	iommu_put_dma_cookie(domain);
> +
> +	/* Free all remaining mappings (size 2^64) */
> +	viommu_del_mappings(vdomain, 0, 0);
> +
> +	if (vdomain->viommu)
> +		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
> +
> +	kfree(vdomain);
> +}
> +
> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int i;
> +	int ret = 0;
> +	struct virtio_iommu_req_attach req;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mutex_lock(&vdomain->mutex);
> +	if (!vdomain->viommu) {
> +		/*
> +		 * Initialize the domain proper now that we know which viommu
> +		 * owns it.
> +		 */
> +		ret = viommu_domain_finalise(vdev->viommu, domain);
> +	} else if (vdomain->viommu != vdev->viommu) {
> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> +		ret = -EXDEV;
> +	}
> +	mutex_unlock(&vdomain->mutex);
> +
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * In the virtio-iommu device, when attaching the endpoint to a new
> +	 * domain, it is detached from the old one and, if as a result the
> +	 * old domain isn't attached to any endpoint, all mappings are removed
> +	 * from the old domain and it is freed.
> +	 *
> +	 * In the driver the old domain still exists, and its mappings will be
> +	 * recreated if it gets reattached to an endpoint. Otherwise it will be
> +	 * freed explicitly.
> +	 *
> +	 * vdev->vdomain is protected by group->mutex
> +	 */
> +	if (vdev->vdomain)
> +		vdev->vdomain->nr_endpoints--;
> +
> +	req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req.endpoint = cpu_to_le32(fwspec->ids[i]);
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
> +		if (ret)
> +			return ret;
> +	}
> +
> +	if (!vdomain->nr_endpoints) {
> +		/*
> +		 * This endpoint is the first to be attached to the domain.
> +		 * Replay existing mappings (e.g. SW MSI).
> +		 */
> +		ret = viommu_replay_mappings(vdomain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	vdomain->nr_endpoints++;
> +	vdev->vdomain = vdomain;
> +
> +	return 0;
> +}
> +
> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
> +		      phys_addr_t paddr, size_t size, int prot)
> +{
> +	int ret;
> +	int flags;
> +	struct viommu_mapping *mapping;
> +	struct virtio_iommu_req_map map;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0) |
> +		(prot & IOMMU_MMIO ? VIRTIO_IOMMU_MAP_F_MMIO : 0);
> +
> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size, flags);
> +	if (!mapping)
> +		return -ENOMEM;
> +
> +	map = (struct virtio_iommu_req_map) {
> +		.head.type	= VIRTIO_IOMMU_T_MAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.phys_start	= cpu_to_le64(paddr),
> +		.virt_end	= cpu_to_le64(iova + size - 1),
> +		.flags		= cpu_to_le32(flags),
> +	};
> +
> +	if (!vdomain->nr_endpoints)
> +		return 0;
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
> +	if (ret)
> +		viommu_del_mappings(vdomain, iova, size);
> +
> +	return ret;
> +}
> +
> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			   size_t size)
> +{
> +	int ret = 0;
> +	size_t unmapped;
> +	struct virtio_iommu_req_unmap unmap;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	unmapped = viommu_del_mappings(vdomain, iova, size);
> +	if (unmapped < size)
> +		return 0;
> +
> +	/* Device already removed all mappings after detach. */
> +	if (!vdomain->nr_endpoints)
> +		return unmapped;
> +
> +	unmap = (struct virtio_iommu_req_unmap) {
> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
> +	};
> +
> +	ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
> +	return ret ? 0 : unmapped;
> +}
> +
> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
> +				       dma_addr_t iova)
> +{
> +	u64 paddr = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
> +	if (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		paddr = mapping->paddr + (iova - mapping->iova.start);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return paddr;
> +}
> +
> +static void viommu_iotlb_sync(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	viommu_sync_req(vdomain->viommu);
> +}
> +
> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> +					 IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +	iommu_dma_get_resv_regions(dev, head);
> +}
> +
> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *entry, *next;
> +
> +	list_for_each_entry_safe(entry, next, head, list)
> +		kfree(entry);
> +}
> +
> +static struct iommu_ops viommu_ops;
> +static struct virtio_driver virtio_iommu_drv;
> +
> +static int viommu_match_node(struct device *dev, void *data)
> +{
> +	return dev->parent->fwnode == data;
> +}
> +
> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
> +						fwnode, viommu_match_node);
> +	put_device(dev);
> +
> +	return dev ? dev_to_virtio(dev)->priv : NULL;
> +}
> +
> +static int viommu_add_device(struct device *dev)
> +{
> +	int ret;
> +	struct iommu_group *group;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_dev *viommu = NULL;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return -ENODEV;
> +
> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!viommu)
> +		return -ENODEV;
> +
> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> +	if (!vdev)
> +		return -ENOMEM;
> +
> +	vdev->viommu = viommu;
> +	fwspec->iommu_priv = vdev;
> +
> +	ret = iommu_device_link(&viommu->iommu, dev);
> +	if (ret)
> +		goto err_free_dev;
> +
> +	/*
> +	 * Last step creates a default domain and attaches to it. Everything
> +	 * must be ready.
> +	 */
> +	group = iommu_group_get_for_dev(dev);
> +	if (IS_ERR(group)) {
> +		ret = PTR_ERR(group);
> +		goto err_unlink_dev;
> +	}
> +
> +	iommu_group_put(group);
> +
> +	return PTR_ERR_OR_ZERO(group);
> +
> +err_unlink_dev:
> +	iommu_device_unlink(&viommu->iommu, dev);
> +
> +err_free_dev:
> +	kfree(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove_device(struct device *dev)
> +{
> +	struct viommu_endpoint *vdev;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	iommu_group_remove_device(dev);
> +	iommu_device_unlink(&vdev->viommu->iommu, dev);
> +	kfree(vdev);
> +}
> +
> +static struct iommu_group *viommu_device_group(struct device *dev)
> +{
> +	if (dev_is_pci(dev))
> +		return pci_device_group(dev);
> +	else
> +		return generic_device_group(dev);
> +}
> +
> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> +}
> +
> +static struct iommu_ops viommu_ops = {
> +	.domain_alloc		= viommu_domain_alloc,
> +	.domain_free		= viommu_domain_free,
> +	.attach_dev		= viommu_attach_dev,
> +	.map			= viommu_map,
> +	.unmap			= viommu_unmap,
> +	.map_sg			= default_iommu_map_sg,
> +	.iova_to_phys		= viommu_iova_to_phys,
> +	.iotlb_sync		= viommu_iotlb_sync,
> +	.add_device		= viommu_add_device,
> +	.remove_device		= viommu_remove_device,
> +	.device_group		= viommu_device_group,
> +	.get_resv_regions	= viommu_get_resv_regions,
> +	.put_resv_regions	= viommu_put_resv_regions,
> +	.of_xlate		= viommu_of_xlate,
> +};
> +
> +static int viommu_init_vqs(struct viommu_dev *viommu)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> +	const char *name = "request";
> +	void *ret;
> +
> +	ret = virtio_find_single_vq(vdev, NULL, name);
> +	if (IS_ERR(ret)) {
> +		dev_err(viommu->dev, "cannot find VQ\n");
> +		return PTR_ERR(ret);
> +	}
> +
> +	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
> +
> +	return 0;
> +}
> +
> +static int viommu_probe(struct virtio_device *vdev)
> +{
> +	struct device *parent_dev = vdev->dev.parent;
> +	struct viommu_dev *viommu = NULL;
> +	struct device *dev = &vdev->dev;
> +	u64 input_start = 0;
> +	u64 input_end = -1UL;
> +	int ret;

Please validate that the device has VIRTIO_F_VERSION_1 -
we don't ever want new devices to grow legacy
interfaces.


> +
> +	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
> +	if (!viommu)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&viommu->request_lock);
> +	ida_init(&viommu->domain_ids);
> +	viommu->dev = dev;
> +	viommu->vdev = vdev;
> +	INIT_LIST_HEAD(&viommu->requests);
> +
> +	ret = viommu_init_vqs(viommu);
> +	if (ret)
> +		return ret;
> +
> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
> +		     &viommu->pgsize_bitmap);
> +
> +	if (!viommu->pgsize_bitmap) {
> +		ret = -EINVAL;
> +		goto err_free_vqs;
> +	}
> +
> +	viommu->domain_bits = 32;
> +
> +	/* Optional features */
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.start,
> +			     &input_start);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.end,
> +			     &input_end);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
> +			     struct virtio_iommu_config, domain_bits,
> +			     &viommu->domain_bits);
> +
> +	viommu->geometry = (struct iommu_domain_geometry) {
> +		.aperture_start	= input_start,
> +		.aperture_end	= input_end,
> +		.force_aperture	= true,
> +	};
> +
> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
> +
> +	virtio_device_ready(vdev);
> +
> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
> +				     virtio_bus_name(vdev));
> +	if (ret)
> +		goto err_free_vqs;
> +
> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
> +
> +	iommu_device_register(&viommu->iommu);
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
> +		pci_request_acs();
> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +
> +	vdev->priv = viommu;
> +
> +	dev_info(dev, "input address: %u bits\n",
> +		 order_base_2(viommu->geometry.aperture_end));
> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
> +
> +	return 0;
> +
> +err_unregister:
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +err_free_vqs:
> +	vdev->config->del_vqs(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove(struct virtio_device *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->priv;
> +
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +
> +	/* Stop all virtqueues */
> +	vdev->config->reset(vdev);
> +	vdev->config->del_vqs(vdev);
> +
> +	dev_info(&vdev->dev, "device removed\n");
> +}
> +
> +static void viommu_config_changed(struct virtio_device *vdev)
> +{
> +	dev_warn(&vdev->dev, "config changed\n");
> +}
> +
> +static unsigned int features[] = {
> +	VIRTIO_IOMMU_F_MAP_UNMAP,
> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
> +	VIRTIO_IOMMU_F_INPUT_RANGE,
> +};
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> +static struct virtio_driver virtio_iommu_drv = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.feature_table		= features,
> +	.feature_table_size	= ARRAY_SIZE(features),
> +	.probe			= viommu_probe,
> +	.remove			= viommu_remove,
> +	.config_changed		= viommu_config_changed,
> +};
> +
> +module_virtio_driver(virtio_iommu_drv);
> +
> +IOMMU_OF_DECLARE(viommu, "virtio,mmio");
> +
> +MODULE_DESCRIPTION("Virtio IOMMU driver");
> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..cfe47c5d9a56 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
>  
>  #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> new file mode 100644
> index 000000000000..f4fafa0b41a7
> --- /dev/null
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -0,0 +1,117 @@
> +/* SPDX-License-Identifier: BSD-3-Clause */
> +/*
> + * Virtio-iommu definition v0.7
> + *
> + * Copyright (C) 2018 Arm Ltd.
> + */
> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
> +
> +#include <linux/types.h>
> +
> +/* Feature bits */
> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
> +#define VIRTIO_IOMMU_F_BYPASS			3
> +
> +struct virtio_iommu_config {
> +	/* Supported page sizes */
> +	__u64					page_size_mask;
> +	/* Supported IOVA range */
> +	struct virtio_iommu_range {
> +		__u64				start;
> +		__u64				end;
> +	} input_range;
> +	/* Max domain ID size */
> +	__u8					domain_bits;
> +} __packed;

Please pad structs so that each field, and the struct as a whole, are
size-aligned, and avoid __packed.

Applies below too.

> +
> +/* Request types */
> +#define VIRTIO_IOMMU_T_ATTACH			0x01
> +#define VIRTIO_IOMMU_T_DETACH			0x02
> +#define VIRTIO_IOMMU_T_MAP			0x03
> +#define VIRTIO_IOMMU_T_UNMAP			0x04
> +
> +/* Status types */
> +#define VIRTIO_IOMMU_S_OK			0x00
> +#define VIRTIO_IOMMU_S_IOERR			0x01
> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
> +#define VIRTIO_IOMMU_S_DEVERR			0x03
> +#define VIRTIO_IOMMU_S_INVAL			0x04
> +#define VIRTIO_IOMMU_S_RANGE			0x05
> +#define VIRTIO_IOMMU_S_NOENT			0x06
> +#define VIRTIO_IOMMU_S_FAULT			0x07
> +
> +struct virtio_iommu_req_head {
> +	__u8					type;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_tail {
> +	__u8					status;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_attach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +struct virtio_iommu_req_detach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
> +#define VIRTIO_IOMMU_MAP_F_MMIO			(1 << 3)
> +
> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
> +						 VIRTIO_IOMMU_MAP_F_EXEC |	\
> +						 VIRTIO_IOMMU_MAP_F_MMIO)
> +
> +struct virtio_iommu_req_map {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__le64					phys_start;
> +	__le32					flags;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +struct virtio_iommu_req_unmap {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +union virtio_iommu_req {
> +	struct virtio_iommu_req_head		head;
> +
> +	struct virtio_iommu_req_attach		attach;
> +	struct virtio_iommu_req_detach		detach;
> +	struct virtio_iommu_req_map		map;
> +	struct virtio_iommu_req_unmap		unmap;
> +};

Such unions are problematic: if you add an option of a larger structure
down the road, do all requests grow automatically?

> +
> +#endif
> -- 
> 2.17.0
Michael S. Tsirkin June 22, 2018, 12:55 a.m. UTC | #2
On Thu, Jun 21, 2018 at 08:06:53PM +0100, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 149 ++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  37 ++++++++
>  2 files changed, 180 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index ea0242d8624b..29ce9f4398ef 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -48,6 +48,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -69,8 +70,10 @@ struct viommu_domain {
>  };
>  
>  struct viommu_endpoint {
> +	struct device			*dev;
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -121,6 +124,9 @@ static off_t viommu_get_req_offset(struct viommu_dev *viommu,
>  {
>  	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
>  
> +	if (req->type == VIRTIO_IOMMU_T_PROBE)
> +		return len - viommu->probe_size - tail_size;
> +
>  	return len - tail_size;
>  }
>  
> @@ -404,6 +410,103 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 start = le64_to_cpu(mem->start);
> +	u64 end = le64_to_cpu(mem->end);
> +	size_t size = end - start + 1;
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	default:
> +		if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +		    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)
> +			dev_warn(vdev->dev, "unknown resv mem subtype 0x%x\n",
> +				 mem->subtype);
> +		/* Fall-through */
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +		region = iommu_alloc_resv_region(start, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(start, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	}
> +
> +	list_add(&region->list, &vdev->resv_regions);
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	size_t probe_len;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		return -EINVAL;
> +
> +	probe_len = sizeof(*probe) + viommu->probe_size +
> +		    sizeof(struct virtio_iommu_req_tail);
> +	probe = kzalloc(probe_len, GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe, probe_len);
> +	if (ret)
> +		goto out_free;
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
> +			break;
> +		default:
> +			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += sizeof(*prop) + len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +out_free:
> +	kfree(probe);
> +	return ret;
> +}
> +
>  /* IOMMU API */
>  
>  static struct iommu_domain *viommu_domain_alloc(unsigned type)
> @@ -627,15 +730,33 @@ static void viommu_iotlb_sync(struct iommu_domain *domain)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +	 * If the device registered a bypass MSI window, use it.
> +	 * Otherwise add a software-mapped region.
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
> +
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
>  	iommu_dma_get_resv_regions(dev, head);
>  }
>  
> @@ -683,9 +804,18 @@ static int viommu_add_device(struct device *dev)
>  	if (!vdev)
>  		return -ENOMEM;
>  
> +	vdev->dev = dev;
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	ret = iommu_device_link(&viommu->iommu, dev);
>  	if (ret)
>  		goto err_free_dev;
> @@ -708,6 +838,7 @@ static int viommu_add_device(struct device *dev)
>  	iommu_device_unlink(&viommu->iommu, dev);
>  
>  err_free_dev:
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  
>  	return ret;
> @@ -725,6 +856,7 @@ static void viommu_remove_device(struct device *dev)
>  
>  	iommu_group_remove_device(dev);
>  	iommu_device_unlink(&vdev->viommu->iommu, dev);
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  }
>  
> @@ -821,6 +953,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -902,6 +1038,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index f4fafa0b41a7..2fd802a860cd 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -14,6 +14,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -25,6 +26,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>  } __packed;
>  
>  /* Request types */
> @@ -32,6 +36,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -105,6 +110,37 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  } __packed;
>  
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					start;
> +	__le64					end;
> +} __packed;


start/end are not aligned; you need to pad more.

> +
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +	__u8					value[];
> +} __packed;
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/* Tail follows the variable-length properties array (no padding) */
> +} __packed;
> +
>  union virtio_iommu_req {
>  	struct virtio_iommu_req_head		head;
>  
> @@ -112,6 +148,7 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_detach		detach;
>  	struct virtio_iommu_req_map		map;
>  	struct virtio_iommu_req_unmap		unmap;
> +	struct virtio_iommu_req_probe		probe;
>  };
>  
>  #endif
> -- 
> 2.17.0
Jean-Philippe Brucker June 25, 2018, 5:52 p.m. UTC | #3
On 22/06/18 01:51, Michael S. Tsirkin wrote:
>> +VIRTIO IOMMU DRIVER
>> +M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> +S:	Maintained
>> +F:	drivers/iommu/virtio-iommu.c
>> +F:	include/uapi/linux/virtio_iommu.h
>> +
>>  VIRTUAL BOX GUEST DEVICE DRIVER
>>  M:	Hans de Goede <hdegoede@redhat.com>
>>  M:	Arnd Bergmann <arnd@arndb.de>
> 
> Please add the virtualization mailing list.

Ok. For the driver, get_maintainer.pl now outputs the iommu and
virtualization lists, as well as Joerg Roedel. For the header it
outputs the virtualization list, Jason and you.

>> +#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */
> 
> Where does this come from? Do you absolutely have to have
> an arbitrary timeout?

It isn't necessary, but it is very useful for development, and I wonder
if we could keep it behind an #ifdef DEBUG. Some requests are sent from
hardirq-safe context (because device drivers are allowed to call
iommu_map/unmap from an IRQ handler), where they would lock up the CPU
if the IOMMU device misbehaves. The timeout is here to provide a little
more info to developers, instead of a generic RCU lockup message.

On the other hand, allowing unmap requests to time out might pose a
security risk (freeing pages that are still used by endpoints). So I
think moving this behind a DEBUG option would be better.
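
A minimal sketch of what that could look like in __viommu_sync_req(),
with the timeout compiled out of production builds (hypothetical, and
simplified compared to the v2 code above):

	#ifdef DEBUG
		ktime_t timeout = ktime_add_ms(ktime_get(),
					       VIOMMU_REQUEST_TIMEOUT);
	#endif
		...
		req = virtqueue_get_buf(vq, &len);
		if (!req) {
	#ifdef DEBUG
			/* Developer aid only: see the lockup concern above */
			if (!ktime_before(ktime_get(), timeout)) {
				ret = -ETIMEDOUT;
				break;
			}
	#endif
			continue;
		}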

>> +static int viommu_probe(struct virtio_device *vdev)
>> +{
>> +	struct device *parent_dev = vdev->dev.parent;
>> +	struct viommu_dev *viommu = NULL;
>> +	struct device *dev = &vdev->dev;
>> +	u64 input_start = 0;
>> +	u64 input_end = -1UL;
>> +	int ret;
> 
> Please validate that device has VIRTIO_F_VERSION_1 -
> we don't ever want new devices to grow legacy
> interfaces.

Agreed. In this case I think I can also remove from the device
specification all the legacy paragraphs that cater for a missing
FEATURES_OK status and native endianness.
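
Concretely, a one-line check at the top of viommu_probe() should do
(virtio_has_feature() is the standard helper from virtio_config.h,
which the driver already includes):

	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
		return -ENODEV;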

For the record, all kvmtool devices on my branch now default to virtio
version 1, since they need the VIRTIO_F_IOMMU_PLATFORM bit.

>> +struct virtio_iommu_config {
>> +	/* Supported page sizes */
>> +	__u64					page_size_mask;
>> +	/* Supported IOVA range */
>> +	struct virtio_iommu_range {
>> +		__u64				start;
>> +		__u64				end;
>> +	} input_range;
>> +	/* Max domain ID size */
>> +	__u8					domain_bits;
>> +} __packed;
> 
> Please pad structs so each field and all of it are size aligned and avoid __packed.
> 
> Applies below too.

Ok, I'll clean this up (as well as the structs in the device spec). I
think the only struct that needs more padding is virtio_iommu_fault. The
total size of the attach request is 20 bytes, not aligned on 64 bits,
but it probably doesn't need to change. virtio_blk_req for example has
an odd size and I don't think that caused any problem (?).
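
For example, with the probe_size field from patch 3 included, the
config space could be laid out like this without __packed, since the
natural layout has no holes (a sketch; the final padding belongs in
the spec):

	struct virtio_iommu_config {
		/* Supported page sizes */
		__u64				page_size_mask;
		/* Supported IOVA range */
		struct virtio_iommu_range {
			__u64			start;
			__u64			end;
		} input_range;
		/* Max domain ID size */
		__u8				domain_bits;
		__u8				padding[3];
		/* Probe buffer size */
		__u32				probe_size;
	};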

>> +union virtio_iommu_req {
>> +	struct virtio_iommu_req_head		head;
>> +
>> +	struct virtio_iommu_req_attach		attach;
>> +	struct virtio_iommu_req_detach		detach;
>> +	struct virtio_iommu_req_map		map;
>> +	struct virtio_iommu_req_unmap		unmap;
>> +};
> 
> Such unions are problematic: if you add an option of a larger structure
> down the road, do all requests grow automatically?

Yes, requests don't have a fixed size, and the next extension adds two
slightly larger requests. I'm also in favor of removing the union, and
letting implementations redefine it if needed. At the moment this union
is only used by kvmtool for parsing requests, not by QEMU or the Linux
driver.

Thanks a lot for the comments

Jean
Jean-Philippe Brucker June 25, 2018, 5:52 p.m. UTC | #4
On 22/06/18 01:55, Michael S. Tsirkin wrote:
>> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
>> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
>> +
>> +struct virtio_iommu_probe_resv_mem {
>> +	__u8					subtype;
>> +	__u8					reserved[3];
>> +	__le64					start;
>> +	__le64					end;
>> +} __packed;
> 
> 
> start/end are not aligned you need to pad more.

The complete structure is actually the 32-bit header
virtio_iommu_probe_property, followed by the content
virtio_iommu_probe_resv_mem:

struct {
	le16	type
	le16	length
	u8	subtype
	u8	reserved[3]
	le64	start
	le64	end
};

I'll redefine virtio_iommu_probe_resv_mem to include this header.
Otherwise, without __packed, a compiler introduces padding when
concatenating header and content.
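
That is, something like this (a sketch of the redefinition):

	struct virtio_iommu_probe_resv_mem {
		__le16				type;
		__le16				length;
		__u8				subtype;
		__u8				reserved[3];
		__le64				start;
		__le64				end;
	};

All fields are then naturally aligned, so __packed becomes unnecessary.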

Thanks,
Jean

Jean-Philippe Brucker June 26, 2018, 5:59 p.m. UTC | #5
On 21/06/18 20:06, Jean-Philippe Brucker wrote:
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 start = le64_to_cpu(mem->start);
> +	u64 end = le64_to_cpu(mem->end);
> +	size_t size = end - start + 1;
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	default:
> +		if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +		    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)

Hm, I messed it up while rebasing. I'll remove this condition.

Thanks,
Jean

> +			dev_warn(vdev->dev, "unknown resv mem subtype 0x%x\n",
> +				 mem->subtype);
> +		/* Fall-through */
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +		region = iommu_alloc_resv_region(start, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(start, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	}
> +
> +	list_add(&region->list, &vdev->resv_regions);
> +	return 0;
> +}
Michael S. Tsirkin June 26, 2018, 6:07 p.m. UTC | #6
On Thu, Jun 21, 2018 at 08:06:50PM +0100, Jean-Philippe Brucker wrote:
> Implement the base virtio-iommu driver, following version 0.7 of the
> specification [1].
> 
> Changes since last version [2]:
> * Address comments, thanks again for the review.
> * As suggested, add a DT binding description in patch 1.
> * Depend on VIRTIO_MMIO=y to fix a build failure¹
> * Switch to v0.7 of the spec, which changes resv_mem parameters and adds
>   an MMIO flag. These are trivial but not backward compatible. Once
>   device or driver is upstream, updates to the spec will rely on feature
>   bits to stay compatible with this code.
> * Implement the new tlb_sync interface, by splitting add_req() and
>   sync_req(). I noticed a small improvement on netperf stream because
>   the synchronous iommu_unmap() also benefits from this. Other
>   experiments, such as using kmem_cache for requests instead of kmalloc,
>   didn't show any improvement.
> 
> Driver is available on branch virtio-iommu/v0.7 [3], and the x86+ACPI
> prototype is on branch virtio-iommu/devel. That x86 code hasn't changed,
> it still requires the DMA ifdeffery and I lack the expertise to tidy it
> up. The kvmtool example device has been cleaned up and is available on
> branch virtio-iommu/v0.7 [4].
> 
> Feedback welcome!
> 
> Thanks,
> Jean

So as I pointed out, a new virtio 0 device isn't really welcome ;)
No one bothered implementing virtio 1 in MMIO for all the work
that was put in defining it. The fact that the MMIO part of the
spec doesn't seem to allow for transitional devices might
or might not have something to do with this.

So why make it an MMIO device at all? A system with an IOMMU
will have a PCI bus, won't it? And it's pretty common to
have the IOMMU be a PCI device on the root bus.
That would also remove the need to bother with DT bindings etc.


Jean-Philippe Brucker June 27, 2018, 6:04 p.m. UTC | #7
On 26/06/18 19:07, Michael S. Tsirkin wrote:
> So as I pointed out, a new virtio 0 device isn't really welcome ;)

Agreed, virtio-iommu is expected to be implemented on virtio 1 and
later. I'll remove the two legacy-related paragraphs from the spec and
add a check in the driver as you suggested, to avoid giving the wrong idea.
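
For v3, the driver-side check could be as simple as the sketch below
(name and exact shape hypothetical; it assumes VIRTIO_F_VERSION_1 also
sits in the driver's feature table so that negotiation can succeed):

#include <linux/virtio.h>
#include <linux/virtio_config.h>

/*
 * Refuse to drive a legacy device: virtio-iommu is only specified on
 * top of virtio 1 and later. Called at the top of the probe function,
 * after the virtio core has negotiated features.
 */
static int viommu_check_version(struct virtio_device *vdev)
{
	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1)) {
		dev_err(&vdev->dev, "device does not support virtio 1\n");
		return -ENODEV;
	}
	return 0;
}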

> No one bothered implementing virtio 1 in MMIO for all the work
> that was put in defining it.

That is curious, given that the virtio_mmio driver supports virtio 1.
From my own experience, porting the MMIO transport to virtio 1 only
requires updating a few registers once ANY_LAYOUT, modern virtqueues
and status are already implemented.
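
For reference, the two layouts are trivial to tell apart at probe
time. A minimal sketch, using only register offsets that
include/uapi/linux/virtio_mmio.h already defines (the helper name is
made up):

#include <linux/io.h>
#include <linux/virtio_mmio.h>

/* Accept only the virtio 1 ("modern", version 2) MMIO layout. */
static int viommu_mmio_version_ok(void __iomem *base)
{
	/* 0x74726976 is "virt" in little endian */
	if (readl(base + VIRTIO_MMIO_MAGIC_VALUE) != 0x74726976)
		return -ENODEV;

	/* version 1 is the legacy layout, version 2 is virtio 1 */
	if (readl(base + VIRTIO_MMIO_VERSION) < 2)
		return -ENODEV;

	return 0;
}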

> The fact that the MMIO part of the
> spec doesn't seem to allow for transitional devices might
> or might not have something to do with this.

Sorry, which part do you have in mind? The spec does provide both a
legacy and modern register layout, with version numbers to differentiate
them.

> So why make it an MMIO device at all? A system with an IOMMU
> will have a PCI bus, won't it? And it's pretty common to
> have the IOMMU be a PCI device on the root bus.

Having the IOMMU outside PCI seems more common though. On Arm and Intel
the IOMMU doesn't have a PCI config space, BARs or capabilities, just a
plain MMIO region and a static number of interrupts. However the AMD
IOMMU appears as a PCI device with additional MMIO registers. I would be
interested in implementing virtio-iommu as a PCI dev at some point, at
least so that we can use MSI-X.

The problem is that it requires invasive changes in the firmware
interfaces and drivers. They need to describe relationship between IOMMU
and endpoint, and DT or ACPI IORT don't expect the programming interface
to be inside the PCI bus that the IOMMU manages. Describing it as a
peripheral is more natural. For AMD it is implemented in their driver
using IVHD tables that can't be reused. Right now I don't expect any
success in proposing changes to firmware interfaces or drivers, because
the device is new and paravirtualized, and works out of the box with
MMIO. Hopefully that will change in the future, perhaps when someone
supports DT for AMD IOMMU (they do describe bindings at the end of the
spec, but I don't think it can work in Linux with the existing
infrastructure).
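
To illustrate the point about peripherals, here is roughly what the
description looks like with the binding from patch 1: a plain
virtio-mmio node with #iommu-cells, plus a phandle from each endpoint
(addresses, interrupt and endpoint ID below are made up):

viommu: iommu@10000 {
	compatible = "virtio,mmio";
	reg = <0x10000 0x200>;
	interrupts = <8>;
	#iommu-cells = <1>;
};

ethernet@20000 {
	/* ... */
	/* this endpoint has ID 3 on the virtio-iommu */
	iommus = <&viommu 3>;
};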

Another reason to keep the MMIO transport option is that one
virtio-iommu can manage DMA from endpoints on multiple PCI domains at
the same time, as well as platform devices. Some VMMs might want that,
in which case the IOMMU would be a separate platform device.

> That will remove the need to bother with DT bindings etc.

That's handled by the firmware drivers and IOMMU layer; there shouldn't
be any special treatment at the virtio layer. In general removal of an
IOMMU needs to be done after removal of all endpoints connected to it,
which should be described using device_link from the driver core. DT or
ACPI is only used to tell drivers where to find resources, and to
describe the DMA topology.

Thanks,
Jean
Michael S. Tsirkin June 27, 2018, 7:59 p.m. UTC | #8
On Wed, Jun 27, 2018 at 07:04:46PM +0100, Jean-Philippe Brucker wrote:
> On 26/06/18 19:07, Michael S. Tsirkin wrote:
> > So as I pointed out, a new virtio 0 device isn't really welcome ;)
> 
> Agreed, virtio-iommu is expected to be implemented on virtio 1 and
> later. I'll remove the two legacy-related paragraphs from the spec and
> add a check in the driver as you suggested, to avoid giving the wrong idea.
> 
> > No one bothered implementing virtio 1 in MMIO for all the work
> > that was put in defining it.
> 
> That is curious, given that the virtio_mmio driver supports virtio 1.
> From my own experience, porting the MMIO transport to virtio 1 only
> requires updating a few registers once ANY_LAYOUT, modern virtqueues
> and status are already implemented.
> 
> > The fact that the MMIO part of the
> > spec doesn't seem to allow for transitional devices might
> > or might not have something to do with this.
> 
> Sorry, which part do you have in mind? The spec does provide both a
> legacy and modern register layout, with version numbers to differentiate
> them.

Yes, but there's no way to implement a transitional virtio-mmio
device. The version is either "legacy" or "modern".

So if you implement a modern device, old guests won't work with it.

> > So why make it an MMIO device at all? A system with an IOMMU
> > will have a PCI bus, won't it? And it's pretty common to
> > have the IOMMU be a PCI device on the root bus.
> Having the IOMMU outside PCI seems more common though. On Arm and Intel
> the IOMMU doesn't have a PCI config space, BARs or capabilities, just a
> plain MMIO region and a static number of interrupts. However the AMD
> IOMMU appears as a PCI device with additional MMIO registers. I would be
> interested in implementing virtio-iommu as a PCI dev at some point, at
> least so that we can use MSI-X.
> 
> The problem is that it requires invasive changes in the firmware
> interfaces and drivers. They need to describe relationship between IOMMU
> and endpoint, and DT or ACPI IORT don't expect the programming interface
> to be inside the PCI bus that the IOMMU manages.

They don't particularly care IMHO.

> Describing it as a
> peripheral is more natural. For AMD it is implemented in their driver
> using IVHD tables that can't be reused. Right now I don't expect any
> success in proposing changes to firmware interfaces or drivers, because
> the device is new and paravirtualized, and works out of the box with
> MMIO. Hopefully that will change in the future, perhaps when someone
> supports DT for AMD IOMMU (they do describe bindings at the end of the
> spec, but I don't think it can work in Linux with the existing
> infrastructure).

That's a bit abstract; I don't really understand the issues involved.
ACPI is formatted by QEMU, so firmware does not need to care.
And is there even a DT for Intel?

> Another reason to keep the MMIO transport option is that one
> virtio-iommu can manage DMA from endpoints on multiple PCI domains at
> the same time, as well as platform devices. Some VMMs might want that,
> in which case the IOMMU would be a separate platform device.

Which buses are managed by the IOMMU is separate from the bus
on which its programming interface resides.

> > That will remove the need to bother with DT bindings etc.
> That's handled by the firmware drivers and IOMMU layer; there shouldn't
> be any special treatment at the virtio layer. In general removal of an
> IOMMU needs to be done after removal of all endpoints connected to it,
> which should be described using device_link from the driver core. DT or
> ACPI is only used to tell drivers where to find resources, and to
> describe the DMA topology.
> 
> Thanks,
> Jean

My point was virtio mmio isn't widely used outside of ARM.
Rather than try to make everyone use it, IMHO it's better
to start with PCI.
Peter Maydell June 28, 2018, 9:40 a.m. UTC | #9
On 27 June 2018 at 20:59, Michael S. Tsirkin <mst@redhat.com> wrote:
> My point was virtio mmio isn't widely used outside of ARM.
> Rather than try to make everyone use it, IMHO it's better
> to start with PCI.

Yes. I would much rather we let virtio-mmio disappear into
history as a random bit of legacy stuff. One of the things
I didn't like about the virtio-iommu design was that it
resurrected virtio-mmio as something we need to actively care
about again, so if there is a path forward that will work with
PCI virtio, I would much prefer that.

thanks
-- PMM
Jean-Philippe Brucker July 4, 2018, 11:25 a.m. UTC | #10
On 27/06/18 20:59, Michael S. Tsirkin wrote:
>> Another reason to keep the MMIO transport option is that one
>> virtio-iommu can manage DMA from endpoints on multiple PCI domains at
>> the same time, as well as platform devices. Some VMMs might want that,
>> in which case the IOMMU would be a separate platform device.
> 
> Which buses are managed by the IOMMU is separate from the bus
> on which its programming interface resides.

Sorry, I didn't get this. We probably don't want to instantiate a PCI
root complex just for the IOMMU, so it needs to be in the same PCI
segment as the managed endpoints. For example, in my VM the AMD IOMMU
is presented as 00:02.0, among the other devices on PCI bus 00.

In any case, I have a solution for virtio-pci that works with DT and
ACPI, and isn't excessively awful. I'll probably send it as part of the
next version.

Thanks,
Jean
