From patchwork Fri Jun 5 12:02:40 2015
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 481179
Message-Id: <5571AC000200007800081567@mail.emea.novell.com>
In-Reply-To: <5571AA3B020000780008152E@mail.emea.novell.com>
References: <5571AA3B020000780008152E@mail.emea.novell.com>
Date: Fri, 05 Jun 2015 13:02:40 +0100
From: "Jan Beulich"
To:
Cc: xen-devel, Stefano Stabellini
Subject: [Qemu-devel] [PATCH 3/6] xen/MSI-X: really enforce alignment

The way the generic infrastructure works, the intention of not allowing
unaligned accesses can't be achieved simply by setting .unaligned to
false. Add an .accepts callback that rejects unaligned accesses instead.
The benefit is that we can now replace the conditionals in
{get,set}_entry_value() with assert()s.

Signed-off-by: Jan Beulich
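For reference, pci_msix_accepts() below relies on the access size being a
power of two (guaranteed here by min_access_size = max_access_size = 4):
addr & (size - 1) isolates the low-order address bits, which are nonzero
exactly when the access is misaligned. A minimal stand-alone sketch (not
part of the patch, names are illustrative) showing the predicate for the
4-byte accesses the MSI-X table uses:

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Same predicate as the patch's pci_msix_accepts(): an access is valid
 * iff addr is a multiple of size (size must be a power of two). */
static bool accepts(uint64_t addr, unsigned size)
{
    return !(addr & (size - 1));
}

int main(void)
{
    for (uint64_t addr = 0; addr < 8; addr++) {
        printf("addr %" PRIu64 ": %s\n",
               addr, accepts(addr, 4) ? "accepted" : "rejected");
    }
    return 0;
}

Only offsets 0 and 4 are accepted in the range above, which is why
get_entry_value() and set_entry_value() can assert() 4-byte alignment
rather than silently ignoring bad offsets.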
--- a/qemu/upstream/hw/xen/xen_pt_msi.c
+++ b/qemu/upstream/hw/xen/xen_pt_msi.c
@@ -421,16 +421,14 @@ int xen_pt_msix_update_remap(XenPCIPasst
 
 static uint32_t get_entry_value(XenPTMSIXEntry *e, int offset)
 {
-    return !(offset % sizeof(*e->latch))
-           ? e->latch[offset / sizeof(*e->latch)] : 0;
+    assert(!(offset % sizeof(*e->latch)));
+    return e->latch[offset / sizeof(*e->latch)];
 }
 
 static void set_entry_value(XenPTMSIXEntry *e, int offset, uint32_t val)
 {
-    if (!(offset % sizeof(*e->latch)))
-    {
-        e->latch[offset / sizeof(*e->latch)] = val;
-    }
+    assert(!(offset % sizeof(*e->latch)));
+    e->latch[offset / sizeof(*e->latch)] = val;
 }
 
 static void pci_msix_write(void *opaque, hwaddr addr,
@@ -496,6 +494,12 @@ static uint64_t pci_msix_read(void *opaq
     }
 }
 
+static bool pci_msix_accepts(void *opaque, hwaddr addr,
+                             unsigned size, bool is_write)
+{
+    return !(addr & (size - 1));
+}
+
 static const MemoryRegionOps pci_msix_ops = {
     .read = pci_msix_read,
     .write = pci_msix_write,
@@ -504,7 +508,13 @@ static const MemoryRegionOps pci_msix_op
         .min_access_size = 4,
         .max_access_size = 4,
         .unaligned = false,
+        .accepts = pci_msix_accepts
     },
+    .impl = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+        .unaligned = false
+    }
 };
 
 int xen_pt_msix_init(XenPCIPassthroughState *s, uint32_t base)

Reviewed-by: Stefano Stabellini
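For context (not part of this patch): pci_msix_ops is the MemoryRegionOps
table for the MSI-X table MMIO region, which the device's init code (e.g.
xen_pt_msix_init(), visible in the trailing context above) would typically
attach with memory_region_init_io(). A rough sketch of that wiring,
assuming QEMU's memory API of this era; the region variable, owner, opaque
pointer, region name and size are illustrative placeholders, not taken
from the source:

/* Sketch only: 'mmio', 'owner', 'opaque', the region name and
 * 'table_size' are assumed for illustration. */
static MemoryRegion mmio;

static void msix_mmio_setup_sketch(Object *owner, void *opaque,
                                   uint64_t table_size)
{
    /* With .accepts = pci_msix_accepts in place, the memory core rejects
     * misaligned guest accesses to this region before pci_msix_read()/
     * pci_msix_write() are ever invoked. */
    memory_region_init_io(&mmio, owner, &pci_msix_ops, opaque,
                          "xen-pt-msix-table", table_size);
}

Rejecting misaligned accesses at the region boundary, rather than inside
the read/write handlers, is what makes the assert()s in the entry
accessors safe.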