From patchwork Thu Feb 25 18:28:36 2010
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 46268
Date: Thu, 25 Feb 2010 20:28:36 +0200
From: "Michael S. Tsirkin"
To: Anthony Liguori, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, kraxel@redhat.com, quintela@redhat.com
Message-ID: <70c29d6d75bc3ee2efc89f5c3f8be833d660a1be.1267122331.git.mst@redhat.com>
Subject: [Qemu-devel] [PATCHv2 08/12] virtio-pci: fill in notifier support

Support host/guest notifiers in virtio-pci. The latter only works with
kvm; that's okay because vhost relies on kvm anyway.

Note on kvm usage: the kvm ioeventfd API is implemented on non-kvm
systems as well, which is why we don't need if (kvm_enabled()) around
it.

Signed-off-by: Michael S.
Tsirkin
---
 hw/virtio-pci.c |   62 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 62 insertions(+), 0 deletions(-)

diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index 006ff38..3f1214c 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -24,6 +24,7 @@
 #include "net.h"
 #include "block_int.h"
 #include "loader.h"
+#include "kvm.h"
 
 /* from Linux's linux/virtio_pci.h */
 
@@ -398,6 +399,65 @@ static unsigned virtio_pci_get_features(void *opaque)
     return proxy->host_features;
 }
 
+static void virtio_pci_guest_notifier_read(void *opaque)
+{
+    VirtQueue *vq = opaque;
+    EventNotifier *n = virtio_queue_guest_notifier(vq);
+    if (event_notifier_test_and_clear(n)) {
+        virtio_irq(vq);
+    }
+}
+
+static int virtio_pci_guest_notifier(void *opaque, int n, bool assign)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    VirtQueue *vq = virtio_queue(proxy->vdev, n);
+    EventNotifier *notifier = virtio_queue_guest_notifier(vq);
+
+    if (assign) {
+        int r = event_notifier_init(notifier, 0);
+        if (r < 0) {
+            return r;
+        }
+        qemu_set_fd_handler(event_notifier_get_fd(notifier),
+                            virtio_pci_guest_notifier_read, NULL, vq);
+    } else {
+        qemu_set_fd_handler(event_notifier_get_fd(notifier),
+                            NULL, NULL, NULL);
+        event_notifier_cleanup(notifier);
+    }
+
+    return 0;
+}
+
+static int virtio_pci_host_notifier(void *opaque, int n, bool assign)
+{
+    VirtIOPCIProxy *proxy = opaque;
+    VirtQueue *vq = virtio_queue(proxy->vdev, n);
+    EventNotifier *notifier = virtio_queue_host_notifier(vq);
+    int r;
+    if (assign) {
+        r = event_notifier_init(notifier, 1);
+        if (r < 0) {
+            return r;
+        }
+        r = kvm_set_ioeventfd(proxy->addr + VIRTIO_PCI_QUEUE_NOTIFY,
+                              n, event_notifier_get_fd(notifier),
+                              assign);
+        if (r < 0) {
+            event_notifier_cleanup(notifier);
+        }
+    } else {
+        r = kvm_set_ioeventfd(proxy->addr + VIRTIO_PCI_QUEUE_NOTIFY,
+                              n, event_notifier_get_fd(notifier),
+                              assign);
+        if (r < 0) {
+            return r;
+        }
+        event_notifier_cleanup(notifier);
+    }
+    return r;
+}
+
 static const VirtIOBindings virtio_pci_bindings = {
     .notify = virtio_pci_notify,
     .save_config = virtio_pci_save_config,
@@ -405,6 +465,8 @@ static const VirtIOBindings virtio_pci_bindings = {
     .save_queue = virtio_pci_save_queue,
     .load_queue = virtio_pci_load_queue,
     .get_features = virtio_pci_get_features,
+    .host_notifier = virtio_pci_host_notifier,
+    .guest_notifier = virtio_pci_guest_notifier,
 };
 
 static void virtio_init_pci(VirtIOPCIProxy *proxy, VirtIODevice *vdev,