From patchwork Fri Aug 14 15:43:13 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gregory Haskins
X-Patchwork-Id: 31421
X-Patchwork-Delegate: davem@davemloft.net
From: Gregory Haskins
Subject: [PATCH v3 4/6] vbus-proxy: add a pci-to-vbus bridge
To: alacrityvm-devel@lists.sourceforge.net
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Date: Fri, 14 Aug 2009 11:43:13 -0400
Message-ID: <20090814154313.26116.20825.stgit@dev.haskins.net>
In-Reply-To: <20090814154125.26116.70709.stgit@dev.haskins.net>
References: <20090814154125.26116.70709.stgit@dev.haskins.net>
User-Agent: StGIT/0.14.3
X-Mailing-List: netdev@vger.kernel.org

This patch adds a PCI-based bridge driver to interface between the host
vbus and the guest's vbus-proxy bus model.  It completes the guest-side
notion of a "vbus-connector", and requires a corresponding host-side
connector (in this case, the pci-bridge model) to complete the
connection.

Signed-off-by: Gregory Haskins
---
 drivers/vbus/Kconfig      |   10 +
 drivers/vbus/Makefile     |    3 
 drivers/vbus/pci-bridge.c |  877 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/Kbuild      |    1 
 include/linux/vbus_pci.h  |  145 +++++++
 5 files changed, 1036 insertions(+), 0 deletions(-)
 create mode 100644 drivers/vbus/pci-bridge.c
 create mode 100644 include/linux/vbus_pci.h

diff --git a/drivers/vbus/Kconfig b/drivers/vbus/Kconfig
index e1939f5..87c545d 100644
--- a/drivers/vbus/Kconfig
+++ b/drivers/vbus/Kconfig
@@ -12,3 +12,13 @@ config VBUS_PROXY
	 in a virtualization solution which implements virtual-bus devices
	 on the backend, say Y.  If unsure, say N.
 
+config VBUS_PCIBRIDGE + tristate "PCI to Virtual-Bus bridge" + depends on PCI + depends on VBUS_PROXY + select IOQ + default n + help + Provides a way to bridge host side vbus devices via a PCI-BRIDGE + object. If you are running virtualization with vbus devices on the + host, and the vbus is exposed via PCI, say Y. Otherwise, say N. diff --git a/drivers/vbus/Makefile b/drivers/vbus/Makefile index a29a1e0..944b7f1 100644 --- a/drivers/vbus/Makefile +++ b/drivers/vbus/Makefile @@ -1,3 +1,6 @@ vbus-proxy-objs += bus-proxy.o obj-$(CONFIG_VBUS_PROXY) += vbus-proxy.o + +vbus-pcibridge-objs += pci-bridge.o +obj-$(CONFIG_VBUS_PCIBRIDGE) += vbus-pcibridge.o diff --git a/drivers/vbus/pci-bridge.c b/drivers/vbus/pci-bridge.c new file mode 100644 index 0000000..f0ed51a --- /dev/null +++ b/drivers/vbus/pci-bridge.c @@ -0,0 +1,877 @@ +/* + * Copyright (C) 2009 Novell. All Rights Reserved. + * + * Author: + * Gregory Haskins + * + * This file is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +MODULE_AUTHOR("Gregory Haskins"); +MODULE_LICENSE("GPL"); +MODULE_VERSION("1"); + +#define VBUS_PCI_NAME "pci-to-vbus-bridge" + +struct vbus_pci { + spinlock_t lock; + struct pci_dev *dev; + struct ioq eventq; + struct vbus_pci_event *ring; + struct vbus_pci_regs *regs; + struct vbus_pci_signals *signals; + int irq; + int enabled:1; +}; + +static struct vbus_pci vbus_pci; + +struct vbus_pci_device { + char type[VBUS_MAX_DEVTYPE_LEN]; + u64 handle; + struct list_head shms; + struct vbus_device_proxy vdev; + struct work_struct add; + struct work_struct drop; +}; + +DEFINE_PER_CPU(struct vbus_pci_fastcall_desc, vbus_pci_percpu_fastcall) +____cacheline_aligned; + +/* + * ------------------- + * common routines + * ------------------- + */ + +static int +vbus_pci_bridgecall(unsigned long nr, void *data, unsigned long len) +{ + struct vbus_pci_call_desc params = { + .vector = nr, + .len = len, + .datap = __pa(data), + }; + unsigned long flags; + int ret; + + spin_lock_irqsave(&vbus_pci.lock, flags); + + memcpy_toio(&vbus_pci.regs->bridgecall, ¶ms, sizeof(params)); + ret = ioread32(&vbus_pci.regs->bridgecall); + + spin_unlock_irqrestore(&vbus_pci.lock, flags); + + return ret; +} + +static int +vbus_pci_buscall(unsigned long nr, void *data, unsigned long len) +{ + struct vbus_pci_fastcall_desc *params; + int ret; + + preempt_disable(); + + params = &get_cpu_var(vbus_pci_percpu_fastcall); + + params->call.vector = nr; + params->call.len = len; + params->call.datap = __pa(data); + + iowrite32(smp_processor_id(), &vbus_pci.signals->fastcall); + + ret = params->result; + + preempt_enable(); + + return ret; +} + +struct vbus_pci_device * +to_dev(struct vbus_device_proxy *vdev) +{ + return container_of(vdev, struct vbus_pci_device, vdev); +} + +static void +_signal_init(struct shm_signal *signal, struct shm_signal_desc *desc, + struct shm_signal_ops *ops) 
+{ + desc->magic = SHM_SIGNAL_MAGIC; + desc->ver = SHM_SIGNAL_VER; + + shm_signal_init(signal, shm_locality_north, ops, desc); +} + +/* + * ------------------- + * _signal + * ------------------- + */ + +struct _signal { + struct vbus_pci *pcivbus; + struct shm_signal signal; + u32 handle; + struct rb_node node; + struct list_head list; +}; + +static struct _signal * +to_signal(struct shm_signal *signal) +{ + return container_of(signal, struct _signal, signal); +} + +static int +_signal_inject(struct shm_signal *signal) +{ + struct _signal *_signal = to_signal(signal); + + iowrite32(_signal->handle, &vbus_pci.signals->shmsignal); + + return 0; +} + +static void +_signal_release(struct shm_signal *signal) +{ + struct _signal *_signal = to_signal(signal); + + kfree(_signal); +} + +static struct shm_signal_ops _signal_ops = { + .inject = _signal_inject, + .release = _signal_release, +}; + +/* + * ------------------- + * vbus_device_proxy routines + * ------------------- + */ + +static int +vbus_pci_device_open(struct vbus_device_proxy *vdev, int version, int flags) +{ + struct vbus_pci_device *dev = to_dev(vdev); + struct vbus_pci_deviceopen params; + int ret; + + if (dev->handle) + return -EINVAL; + + params.devid = vdev->id; + params.version = version; + + ret = vbus_pci_buscall(VBUS_PCI_HC_DEVOPEN, + ¶ms, sizeof(params)); + if (ret < 0) + return ret; + + dev->handle = params.handle; + + return 0; +} + +static int +vbus_pci_device_close(struct vbus_device_proxy *vdev, int flags) +{ + struct vbus_pci_device *dev = to_dev(vdev); + unsigned long iflags; + int ret; + + if (!dev->handle) + return -EINVAL; + + spin_lock_irqsave(&vbus_pci.lock, iflags); + + while (!list_empty(&dev->shms)) { + struct _signal *_signal; + + _signal = list_first_entry(&dev->shms, struct _signal, list); + + list_del(&_signal->list); + + spin_unlock_irqrestore(&vbus_pci.lock, iflags); + shm_signal_put(&_signal->signal); + spin_lock_irqsave(&vbus_pci.lock, iflags); + } + + spin_unlock_irqrestore(&vbus_pci.lock, iflags); + + /* + * The DEVICECLOSE will implicitly close all of the shm on the + * host-side, so there is no need to do an explicit per-shm + * hypercall + */ + ret = vbus_pci_buscall(VBUS_PCI_HC_DEVCLOSE, + &dev->handle, sizeof(dev->handle)); + + if (ret < 0) + printk(KERN_ERR "VBUS-PCI: Error closing device %s/%lld: %d\n", + vdev->type, vdev->id, ret); + + dev->handle = 0; + + return 0; +} + +static int +vbus_pci_device_shm(struct vbus_device_proxy *vdev, int id, int prio, + void *ptr, size_t len, + struct shm_signal_desc *sdesc, struct shm_signal **signal, + int flags) +{ + struct vbus_pci_device *dev = to_dev(vdev); + struct _signal *_signal = NULL; + struct vbus_pci_deviceshm params; + unsigned long iflags; + int ret; + + if (!dev->handle) + return -EINVAL; + + params.devh = dev->handle; + params.id = id; + params.flags = flags; + params.datap = (u64)__pa(ptr); + params.len = len; + + if (signal) { + /* + * The signal descriptor must be embedded within the + * provided ptr + */ + if (!sdesc + || (len < sizeof(*sdesc)) + || ((void *)sdesc < ptr) + || ((void *)sdesc > (ptr + len - sizeof(*sdesc)))) + return -EINVAL; + + _signal = kzalloc(sizeof(*_signal), GFP_KERNEL); + if (!_signal) + return -ENOMEM; + + _signal_init(&_signal->signal, sdesc, &_signal_ops); + + /* + * take another reference for the host. 
This is dropped + * by a SHMCLOSE event + */ + shm_signal_get(&_signal->signal); + + params.signal.offset = (u64)sdesc - (u64)ptr; + params.signal.prio = prio; + params.signal.cookie = (u64)_signal; + + } else + params.signal.offset = -1; /* yes, this is a u32, but its ok */ + + ret = vbus_pci_buscall(VBUS_PCI_HC_DEVSHM, + ¶ms, sizeof(params)); + if (ret < 0) { + if (_signal) { + /* + * We held two references above, so we need to drop + * both of them + */ + shm_signal_put(&_signal->signal); + shm_signal_put(&_signal->signal); + } + + return ret; + } + + if (signal) { + BUG_ON(ret < 0); + + _signal->handle = ret; + + spin_lock_irqsave(&vbus_pci.lock, iflags); + + list_add_tail(&_signal->list, &dev->shms); + + spin_unlock_irqrestore(&vbus_pci.lock, iflags); + + shm_signal_get(&_signal->signal); + *signal = &_signal->signal; + } + + return 0; +} + +static int +vbus_pci_device_call(struct vbus_device_proxy *vdev, u32 func, void *data, + size_t len, int flags) +{ + struct vbus_pci_device *dev = to_dev(vdev); + struct vbus_pci_devicecall params = { + .devh = dev->handle, + .func = func, + .datap = (u64)__pa(data), + .len = len, + .flags = flags, + }; + + if (!dev->handle) + return -EINVAL; + + return vbus_pci_buscall(VBUS_PCI_HC_DEVCALL, ¶ms, sizeof(params)); +} + +static void +vbus_pci_device_release(struct vbus_device_proxy *vdev) +{ + struct vbus_pci_device *_dev = to_dev(vdev); + + vbus_pci_device_close(vdev, 0); + + kfree(_dev); +} + +struct vbus_device_proxy_ops vbus_pci_device_ops = { + .open = vbus_pci_device_open, + .close = vbus_pci_device_close, + .shm = vbus_pci_device_shm, + .call = vbus_pci_device_call, + .release = vbus_pci_device_release, +}; + +/* + * ------------------- + * vbus events + * ------------------- + */ + +static void +deferred_devadd(struct work_struct *work) +{ + struct vbus_pci_device *new; + int ret; + + new = container_of(work, struct vbus_pci_device, add); + + ret = vbus_device_proxy_register(&new->vdev); + if (ret < 0) + panic("failed to register device %lld(%s): %d\n", + new->vdev.id, new->type, ret); +} + +static void +deferred_devdrop(struct work_struct *work) +{ + struct vbus_pci_device *dev; + + dev = container_of(work, struct vbus_pci_device, drop); + vbus_device_proxy_unregister(&dev->vdev); +} + +static void +event_devadd(struct vbus_pci_add_event *event) +{ + struct vbus_pci_device *new = kzalloc(sizeof(*new), GFP_KERNEL); + if (!new) { + printk(KERN_ERR "VBUS_PCI: Out of memory on add_event\n"); + return; + } + + INIT_LIST_HEAD(&new->shms); + + memcpy(new->type, event->type, VBUS_MAX_DEVTYPE_LEN); + new->vdev.type = new->type; + new->vdev.id = event->id; + new->vdev.ops = &vbus_pci_device_ops; + + dev_set_name(&new->vdev.dev, "%lld", event->id); + + INIT_WORK(&new->add, deferred_devadd); + INIT_WORK(&new->drop, deferred_devdrop); + + schedule_work(&new->add); +} + +static void +event_devdrop(struct vbus_pci_handle_event *event) +{ + struct vbus_device_proxy *dev = vbus_device_proxy_find(event->handle); + + if (!dev) { + printk(KERN_WARNING "VBUS-PCI: devdrop failed: %lld\n", + event->handle); + return; + } + + schedule_work(&to_dev(dev)->drop); +} + +static void +event_shmsignal(struct vbus_pci_handle_event *event) +{ + struct _signal *_signal = (struct _signal *)event->handle; + + _shm_signal_wakeup(&_signal->signal); +} + +static void +event_shmclose(struct vbus_pci_handle_event *event) +{ + struct _signal *_signal = (struct _signal *)event->handle; + + /* + * This reference was taken during the DEVICESHM call + */ + 
shm_signal_put(&_signal->signal); +} + +/* + * ------------------- + * eventq routines + * ------------------- + */ + +static struct ioq_notifier eventq_notifier; + +static int __init +eventq_init(int qlen) +{ + struct ioq_iterator iter; + int ret; + int i; + + vbus_pci.ring = kzalloc(sizeof(struct vbus_pci_event) * qlen, + GFP_KERNEL); + if (!vbus_pci.ring) + return -ENOMEM; + + /* + * We want to iterate on the "valid" index. By default the iterator + * will not "autoupdate" which means it will not hypercall the host + * with our changes. This is good, because we are really just + * initializing stuff here anyway. Note that you can always manually + * signal the host with ioq_signal() if the autoupdate feature is not + * used. + */ + ret = ioq_iter_init(&vbus_pci.eventq, &iter, ioq_idxtype_valid, 0); + BUG_ON(ret < 0); + + /* + * Seek to the tail of the valid index (which should be our first + * item since the queue is brand-new) + */ + ret = ioq_iter_seek(&iter, ioq_seek_tail, 0, 0); + BUG_ON(ret < 0); + + /* + * Now populate each descriptor with an empty vbus_event and mark it + * valid + */ + for (i = 0; i < qlen; i++) { + struct vbus_pci_event *event = &vbus_pci.ring[i]; + size_t len = sizeof(*event); + struct ioq_ring_desc *desc = iter.desc; + + BUG_ON(iter.desc->valid); + + desc->cookie = (u64)event; + desc->ptr = (u64)__pa(event); + desc->len = len; /* total length */ + desc->valid = 1; + + /* + * This push operation will simultaneously advance the + * valid-tail index and increment our position in the queue + * by one. + */ + ret = ioq_iter_push(&iter, 0); + BUG_ON(ret < 0); + } + + vbus_pci.eventq.notifier = &eventq_notifier; + + /* + * And finally, ensure that we can receive notification + */ + ioq_notify_enable(&vbus_pci.eventq, 0); + + return 0; +} + +/* Invoked whenever the hypervisor ioq_signal()s our eventq */ +static void +eventq_wakeup(struct ioq_notifier *notifier) +{ + struct ioq_iterator iter; + int ret; + + /* We want to iterate on the head of the in-use index */ + ret = ioq_iter_init(&vbus_pci.eventq, &iter, ioq_idxtype_inuse, 0); + BUG_ON(ret < 0); + + ret = ioq_iter_seek(&iter, ioq_seek_head, 0, 0); + BUG_ON(ret < 0); + + /* + * The EOM is indicated by finding a packet that is still owned by + * the south side. + * + * FIXME: This in theory could run indefinitely if the host keeps + * feeding us events since there is nothing like a NAPI budget. 
We + * might need to address that + */ + while (!iter.desc->sown) { + struct ioq_ring_desc *desc = iter.desc; + struct vbus_pci_event *event; + + event = (struct vbus_pci_event *)desc->cookie; + + switch (event->eventid) { + case VBUS_PCI_EVENT_DEVADD: + event_devadd(&event->data.add); + break; + case VBUS_PCI_EVENT_DEVDROP: + event_devdrop(&event->data.handle); + break; + case VBUS_PCI_EVENT_SHMSIGNAL: + event_shmsignal(&event->data.handle); + break; + case VBUS_PCI_EVENT_SHMCLOSE: + event_shmclose(&event->data.handle); + break; + default: + printk(KERN_WARNING "VBUS_PCI: Unexpected event %d\n", + event->eventid); + break; + }; + + memset(event, 0, sizeof(*event)); + + /* Advance the in-use head */ + ret = ioq_iter_pop(&iter, 0); + BUG_ON(ret < 0); + } + + /* And let the south side know that we changed the queue */ + ioq_signal(&vbus_pci.eventq, 0); +} + +static struct ioq_notifier eventq_notifier = { + .signal = &eventq_wakeup, +}; + +/* Injected whenever the host issues an ioq_signal() on the eventq */ +irqreturn_t +eventq_intr(int irq, void *dev) +{ + _shm_signal_wakeup(vbus_pci.eventq.signal); + + return IRQ_HANDLED; +} + +/* + * ------------------- + */ + +static int +eventq_signal_inject(struct shm_signal *signal) +{ + /* The eventq uses the special-case handle=0 */ + iowrite32(0, &vbus_pci.signals->eventq); + + return 0; +} + +static void +eventq_signal_release(struct shm_signal *signal) +{ + kfree(signal); +} + +static struct shm_signal_ops eventq_signal_ops = { + .inject = eventq_signal_inject, + .release = eventq_signal_release, +}; + +/* + * ------------------- + */ + +static void +eventq_ioq_release(struct ioq *ioq) +{ + /* released as part of the vbus_pci object */ +} + +static struct ioq_ops eventq_ioq_ops = { + .release = eventq_ioq_release, +}; + +/* + * ------------------- + */ + +static void +vbus_pci_release(void) +{ + if (vbus_pci.irq > 0) + free_irq(vbus_pci.irq, NULL); + + if (vbus_pci.signals) + pci_iounmap(vbus_pci.dev, (void *)vbus_pci.signals); + + if (vbus_pci.regs) + pci_iounmap(vbus_pci.dev, (void *)vbus_pci.regs); + + pci_release_regions(vbus_pci.dev); + pci_disable_device(vbus_pci.dev); + + kfree(vbus_pci.eventq.head_desc); + kfree(vbus_pci.ring); + + vbus_pci.enabled = false; +} + +static int __init +vbus_pci_open(void) +{ + struct vbus_pci_bridge_negotiate params = { + .magic = VBUS_PCI_ABI_MAGIC, + .version = VBUS_PCI_HC_VERSION, + .capabilities = 0, + }; + + return vbus_pci_bridgecall(VBUS_PCI_BRIDGE_NEGOTIATE, + ¶ms, sizeof(params)); +} + +#define QLEN 1024 + +static int __init +vbus_pci_eventq_register(void) +{ + struct vbus_pci_busreg params = { + .count = 1, + .eventq = { + { + .count = QLEN, + .ring = (u64)__pa(vbus_pci.eventq.head_desc), + .data = (u64)__pa(vbus_pci.ring), + }, + }, + }; + + return vbus_pci_bridgecall(VBUS_PCI_BRIDGE_QREG, + ¶ms, sizeof(params)); +} + +static int __init +_ioq_init(size_t ringsize, struct ioq *ioq, struct ioq_ops *ops) +{ + struct shm_signal *signal = NULL; + struct ioq_ring_head *head = NULL; + size_t len = IOQ_HEAD_DESC_SIZE(ringsize); + + head = kzalloc(len, GFP_KERNEL | GFP_DMA); + if (!head) + return -ENOMEM; + + signal = kzalloc(sizeof(*signal), GFP_KERNEL); + if (!signal) { + kfree(head); + return -ENOMEM; + } + + head->magic = IOQ_RING_MAGIC; + head->ver = IOQ_RING_VER; + head->count = ringsize; + + _signal_init(signal, &head->signal, &eventq_signal_ops); + + ioq_init(ioq, ops, ioq_locality_north, head, signal, ringsize); + + return 0; +} + +static int __devinit +vbus_pci_probe(struct pci_dev *pdev, const 
struct pci_device_id *ent) +{ + int ret; + int cpu; + + if (vbus_pci.enabled) + return -EEXIST; /* we only support one bridge per kernel */ + + if (pdev->revision != VBUS_PCI_ABI_VERSION) { + printk(KERN_DEBUG "VBUS_PCI: expected ABI version %d, got %d\n", + VBUS_PCI_ABI_VERSION, + pdev->revision); + return -ENODEV; + } + + vbus_pci.dev = pdev; + + ret = pci_enable_device(pdev); + if (ret < 0) + return ret; + + ret = pci_request_regions(pdev, VBUS_PCI_NAME); + if (ret < 0) { + printk(KERN_ERR "VBUS_PCI: Could not init BARs: %d\n", ret); + goto out_fail; + } + + vbus_pci.regs = pci_iomap(pdev, 0, sizeof(struct vbus_pci_regs)); + if (!vbus_pci.regs) { + printk(KERN_ERR "VBUS_PCI: Could not map BARs\n"); + goto out_fail; + } + + vbus_pci.signals = pci_iomap(pdev, 1, sizeof(struct vbus_pci_signals)); + if (!vbus_pci.signals) { + printk(KERN_ERR "VBUS_PCI: Could not map BARs\n"); + goto out_fail; + } + + ret = vbus_pci_open(); + if (ret < 0) { + printk(KERN_DEBUG "VBUS_PCI: Could not register with host: %d\n", + ret); + goto out_fail; + } + + /* + * Allocate an IOQ to use for host-2-guest event notification + */ + ret = _ioq_init(QLEN, &vbus_pci.eventq, &eventq_ioq_ops); + if (ret < 0) { + printk(KERN_ERR "VBUS_PCI: Cound not init eventq: %d\n", ret); + goto out_fail; + } + + ret = eventq_init(QLEN); + if (ret < 0) { + printk(KERN_ERR "VBUS_PCI: Cound not setup ring: %d\n", ret); + goto out_fail; + } + + ret = pci_enable_msi(pdev); + if (ret < 0) { + printk(KERN_ERR "VBUS_PCI: Cound not enable MSI: %d\n", ret); + goto out_fail; + } + + vbus_pci.irq = pdev->irq; + + ret = request_irq(pdev->irq, eventq_intr, 0, "vbus", NULL); + if (ret < 0) { + printk(KERN_ERR "VBUS_PCI: Failed to register IRQ %d\n: %d", + pdev->irq, ret); + goto out_fail; + } + + /* + * Add one fastcall vector per cpu so that we can do lockless + * hypercalls + */ + for_each_possible_cpu(cpu) { + struct vbus_pci_fastcall_desc *desc = + &per_cpu(vbus_pci_percpu_fastcall, cpu); + struct vbus_pci_call_desc params = { + .vector = cpu, + .len = sizeof(*desc), + .datap = __pa(desc), + }; + + ret = vbus_pci_bridgecall(VBUS_PCI_BRIDGE_FASTCALL_ADD, + ¶ms, sizeof(params)); + if (ret < 0) { + printk(KERN_ERR \ + "VBUS_PCI: Failed to register cpu:%d\n: %d", + cpu, ret); + goto out_fail; + } + } + + /* + * Finally register our queue on the host to start receiving events + */ + ret = vbus_pci_eventq_register(); + if (ret < 0) { + printk(KERN_ERR "VBUS_PCI: Could not register with host: %d\n", + ret); + goto out_fail; + } + + vbus_pci.enabled = true; + + printk(KERN_INFO "Virtual-Bus: Copyright (c) 2009, " \ + "Gregory Haskins \n"); + + return 0; + + out_fail: + vbus_pci_release(); + + return ret; +} + +static void __devexit +vbus_pci_remove(struct pci_dev *pdev) +{ + vbus_pci_release(); +} + +static DEFINE_PCI_DEVICE_TABLE(vbus_pci_tbl) = { + { PCI_DEVICE(0x11da, 0x2000) }, + { 0 }, +}; + +MODULE_DEVICE_TABLE(pci, vbus_pci_tbl); + +static struct pci_driver vbus_pci_driver = { + .name = VBUS_PCI_NAME, + .id_table = vbus_pci_tbl, + .probe = vbus_pci_probe, + .remove = vbus_pci_remove, +}; + +int __init +vbus_pci_init(void) +{ + memset(&vbus_pci, 0, sizeof(vbus_pci)); + spin_lock_init(&vbus_pci.lock); + + return pci_register_driver(&vbus_pci_driver); +} + +static void __exit +vbus_pci_exit(void) +{ + pci_unregister_driver(&vbus_pci_driver); +} + +module_init(vbus_pci_init); +module_exit(vbus_pci_exit); + diff --git a/include/linux/Kbuild b/include/linux/Kbuild index 32b3eb8..fa15bbf 100644 --- a/include/linux/Kbuild +++ b/include/linux/Kbuild 
@@ -358,6 +358,7 @@ unifdef-y += uio.h unifdef-y += unistd.h unifdef-y += usbdevice_fs.h unifdef-y += utsname.h +unifdef-y += vbus_pci.h unifdef-y += videodev2.h unifdef-y += videodev.h unifdef-y += virtio_config.h diff --git a/include/linux/vbus_pci.h b/include/linux/vbus_pci.h new file mode 100644 index 0000000..fe33759 --- /dev/null +++ b/include/linux/vbus_pci.h @@ -0,0 +1,145 @@ +/* + * Copyright 2009 Novell. All Rights Reserved. + * + * PCI to Virtual-Bus Bridge + * + * Author: + * Gregory Haskins + * + * This file is free software; you can redistribute it and/or modify + * it under the terms of version 2 of the GNU General Public License + * as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA. + */ + +#ifndef _LINUX_VBUS_PCI_H +#define _LINUX_VBUS_PCI_H + +#include +#include + +#define VBUS_PCI_ABI_MAGIC 0xbf53eef5 +#define VBUS_PCI_ABI_VERSION 2 +#define VBUS_PCI_HC_VERSION 1 + +enum { + VBUS_PCI_BRIDGE_NEGOTIATE, + VBUS_PCI_BRIDGE_QREG, + VBUS_PCI_BRIDGE_SLOWCALL, + VBUS_PCI_BRIDGE_FASTCALL_ADD, + VBUS_PCI_BRIDGE_FASTCALL_DROP, + + VBUS_PCI_BRIDGE_MAX, /* must be last */ +}; + +enum { + VBUS_PCI_HC_DEVOPEN, + VBUS_PCI_HC_DEVCLOSE, + VBUS_PCI_HC_DEVCALL, + VBUS_PCI_HC_DEVSHM, + + VBUS_PCI_HC_MAX, /* must be last */ +}; + +struct vbus_pci_bridge_negotiate { + __u32 magic; + __u32 version; + __u64 capabilities; +}; + +struct vbus_pci_deviceopen { + __u32 devid; + __u32 version; /* device ABI version */ + __u64 handle; /* return value for devh */ +}; + +struct vbus_pci_devicecall { + __u64 devh; /* device-handle (returned from DEVICEOPEN */ + __u32 func; + __u32 len; + __u32 flags; + __u64 datap; +}; + +struct vbus_pci_deviceshm { + __u64 devh; /* device-handle (returned from DEVICEOPEN */ + __u32 id; + __u32 len; + __u32 flags; + struct { + __u32 offset; + __u32 prio; + __u64 cookie; /* token to pass back when signaling client */ + } signal; + __u64 datap; +}; + +struct vbus_pci_call_desc { + __u32 vector; + __u32 len; + __u64 datap; +}; + +struct vbus_pci_fastcall_desc { + struct vbus_pci_call_desc call; + __u32 result; +}; + +struct vbus_pci_regs { + struct vbus_pci_call_desc bridgecall; + __u8 pad[48]; +}; + +struct vbus_pci_signals { + __u32 eventq; + __u32 fastcall; + __u32 shmsignal; + __u8 pad[20]; +}; + +struct vbus_pci_eventqreg { + __u32 count; + __u64 ring; + __u64 data; +}; + +struct vbus_pci_busreg { + __u32 count; /* supporting multiple queues allows for prio, etc */ + struct vbus_pci_eventqreg eventq[1]; +}; + +enum vbus_pci_eventid { + VBUS_PCI_EVENT_DEVADD, + VBUS_PCI_EVENT_DEVDROP, + VBUS_PCI_EVENT_SHMSIGNAL, + VBUS_PCI_EVENT_SHMCLOSE, +}; + +#define VBUS_MAX_DEVTYPE_LEN 128 + +struct vbus_pci_add_event { + __u64 id; + char type[VBUS_MAX_DEVTYPE_LEN]; +}; + +struct vbus_pci_handle_event { + __u64 handle; +}; + +struct vbus_pci_event { + __u32 eventid; + union { + struct vbus_pci_add_event add; + struct vbus_pci_handle_event handle; + } data; +}; + +#endif /* _LINUX_VBUS_PCI_H */
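---

For reviewers who want to see the bridge from the consumer side, here is a
minimal sketch of how a guest driver might use a device surfaced through the
vbus_device_proxy ops implemented above.  It is illustrative only: the probe
hook, the "my_ring" layout, and the shm id / function numbers are invented
for this example, error handling is simplified, and the vdev is assumed to be
handed in by the vbus-proxy core from the companion patch in this series.
Only the open/shm/call/close signatures mirror pci-bridge.c.

/*
 * Hypothetical consumer of a vbus device proxied by this bridge.
 * The ring structure and the id/func constants are made up for
 * illustration purposes.
 */
struct my_ring {
	u32                     head;
	u32                     tail;
	u8                      buf[4096];
	struct shm_signal_desc  sdesc;  /* must live inside the region */
};

static int my_device_probe(struct vbus_device_proxy *vdev)
{
	struct my_ring *ring;
	struct shm_signal *signal = NULL;
	int ret;

	/* Connect to the backend, negotiating the device ABI version */
	ret = vdev->ops->open(vdev, 1, 0);
	if (ret < 0)
		return ret;

	ring = kzalloc(sizeof(*ring), GFP_KERNEL);
	if (!ring) {
		vdev->ops->close(vdev, 0);
		return -ENOMEM;
	}

	/*
	 * Register shared memory with the host.  Note that the bridge
	 * requires the shm_signal_desc to be embedded within the
	 * registered region [ptr, ptr + len).
	 */
	ret = vdev->ops->shm(vdev, 0 /* shm id */, 0 /* prio */,
			     ring, sizeof(*ring),
			     &ring->sdesc, &signal, 0);
	if (ret < 0)
		goto err;

	/* Synchronous device call; function numbers are device-defined */
	ret = vdev->ops->call(vdev, 0 /* e.g. MY_FUNC_ENABLE */, NULL, 0, 0);
	if (ret < 0)
		goto err;

	return 0;

err:
	if (signal)
		shm_signal_put(signal);
	vdev->ops->close(vdev, 0);	/* implicitly drops host-side shms */
	kfree(ring);
	return ret;
}

The region registered by ->shm() above reaches the host-side device model
through the same DEVSHM hypercall that vbus_pci_device_shm() issues in this
patch, and the returned shm_signal is what the host later wakes up via
SHMSIGNAL events on the bridge's eventq.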
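Build-wise, the bridge is a self-contained module.  A config fragment along
the following lines should be all that is needed, assuming the IOQ and
VBUS_PROXY options introduced earlier in this series (IOQ is pulled in
automatically via the "select" in the Kconfig hunk above); whether VBUS_PROXY
is itself tristate is not shown in this patch, so it is set to y in this
sketch:

	CONFIG_VBUS_PROXY=y
	CONFIG_VBUS_PCIBRIDGE=m

With VBUS_PCIBRIDGE=m the bridge is built as vbus-pcibridge.ko per the
Makefile hunk above, and it binds to the host's bridge device at PCI ID
11da:2000.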