From patchwork Wed Jul 22 06:57:00 2020
X-Patchwork-Submitter: Oliver O'Halloran
X-Patchwork-Id: 1333594
SMTP id g15mr26590564pfh.203.1595401047263; Tue, 21 Jul 2020 23:57:27 -0700 (PDT) Received: from localhost.ibm.com (203-219-159-24.tpgi.com.au. [203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:26 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 01/16] powernv/pci: Add pci_bus_to_pnvhb() helper Date: Wed, 22 Jul 2020 16:57:00 +1000 Message-Id: <20200722065715.1432738-1-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Add a helper to go from a pci_bus structure to the pnv_phb that hosts that bus. There's a lot of instances of the following pattern: struct pci_controller *hose = pci_bus_to_host(pdev->bus); struct pnv_phb *phb = hose->private_data; Without any other uses of the pci_controller inside the function. This is hard to read since it requires you to memorise the contents of the private data fields and kind of error prone since it involves blindly assigning a void pointer. Add a helper to make it more concise and explicit. Signed-off-by: Oliver O'Halloran --- v2: no change --- arch/powerpc/platforms/powernv/pci-ioda.c | 88 +++++++---------------- arch/powerpc/platforms/powernv/pci.c | 14 ++-- arch/powerpc/platforms/powernv/pci.h | 10 +++ 3 files changed, 38 insertions(+), 74 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c index 31c3e6d58c41..687919db0347 100644 --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -252,8 +252,7 @@ static int pnv_ioda2_init_m64(struct pnv_phb *phb) static void pnv_ioda_reserve_dev_m64_pe(struct pci_dev *pdev, unsigned long *pe_bitmap) { - struct pci_controller *hose = pci_bus_to_host(pdev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); struct resource *r; resource_size_t base, sgsz, start, end; int segno, i; @@ -351,8 +350,7 @@ static void pnv_ioda_reserve_m64_pe(struct pci_bus *bus, static struct pnv_ioda_pe *pnv_ioda_pick_m64_pe(struct pci_bus *bus, bool all) { - struct pci_controller *hose = pci_bus_to_host(bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(bus); struct pnv_ioda_pe *master_pe, *pe; unsigned long size, *pe_alloc; int i; @@ -673,8 +671,7 @@ struct pnv_ioda_pe *pnv_pci_bdfn_to_pe(struct pnv_phb *phb, u16 bdfn) struct pnv_ioda_pe *pnv_ioda_get_pe(struct pci_dev *dev) { - struct pci_controller *hose = pci_bus_to_host(dev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(dev->bus); struct pci_dn *pdn = pci_get_pdn(dev); if (!pdn) @@ -1069,8 +1066,7 @@ static int pnv_pci_vf_resource_shift(struct pci_dev *dev, int offset) static struct pnv_ioda_pe *pnv_ioda_setup_dev_PE(struct pci_dev *dev) { - struct pci_controller *hose = pci_bus_to_host(dev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(dev->bus); struct pci_dn *pdn = pci_get_pdn(dev); struct pnv_ioda_pe *pe; @@ -1129,8 +1125,7 @@ static struct pnv_ioda_pe *pnv_ioda_setup_dev_PE(struct pci_dev 
*dev) */ static struct pnv_ioda_pe *pnv_ioda_setup_bus_PE(struct pci_bus *bus, bool all) { - struct pci_controller *hose = pci_bus_to_host(bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(bus); struct pnv_ioda_pe *pe = NULL; unsigned int pe_num; @@ -1196,8 +1191,7 @@ static struct pnv_ioda_pe *pnv_ioda_setup_npu_PE(struct pci_dev *npu_pdev) struct pnv_ioda_pe *pe; struct pci_dev *gpu_pdev; struct pci_dn *npu_pdn; - struct pci_controller *hose = pci_bus_to_host(npu_pdev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(npu_pdev->bus); /* * Intentionally leak a reference on the npu device (for @@ -1300,16 +1294,12 @@ static void pnv_pci_ioda_setup_nvlink(void) #ifdef CONFIG_PCI_IOV static int pnv_pci_vf_release_m64(struct pci_dev *pdev, u16 num_vfs) { - struct pci_bus *bus; - struct pci_controller *hose; struct pnv_phb *phb; struct pci_dn *pdn; int i, j; int m64_bars; - bus = pdev->bus; - hose = pci_bus_to_host(bus); - phb = hose->private_data; + phb = pci_bus_to_pnvhb(pdev->bus); pdn = pci_get_pdn(pdev); if (pdn->m64_single_mode) @@ -1333,8 +1323,6 @@ static int pnv_pci_vf_release_m64(struct pci_dev *pdev, u16 num_vfs) static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) { - struct pci_bus *bus; - struct pci_controller *hose; struct pnv_phb *phb; struct pci_dn *pdn; unsigned int win; @@ -1346,9 +1334,7 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) int pe_num; int m64_bars; - bus = pdev->bus; - hose = pci_bus_to_host(bus); - phb = hose->private_data; + phb = pci_bus_to_pnvhb(pdev->bus); pdn = pci_get_pdn(pdev); total_vfs = pci_sriov_get_totalvfs(pdev); @@ -1459,15 +1445,11 @@ static void pnv_pci_ioda2_release_dma_pe(struct pci_dev *dev, struct pnv_ioda_pe static void pnv_ioda_release_vf_PE(struct pci_dev *pdev) { - struct pci_bus *bus; - struct pci_controller *hose; struct pnv_phb *phb; struct pnv_ioda_pe *pe, *pe_n; struct pci_dn *pdn; - bus = pdev->bus; - hose = pci_bus_to_host(bus); - phb = hose->private_data; + phb = pci_bus_to_pnvhb(pdev->bus); pdn = pci_get_pdn(pdev); if (!pdev->is_physfn) @@ -1492,16 +1474,12 @@ static void pnv_ioda_release_vf_PE(struct pci_dev *pdev) static void pnv_pci_sriov_disable(struct pci_dev *pdev) { - struct pci_bus *bus; - struct pci_controller *hose; struct pnv_phb *phb; struct pnv_ioda_pe *pe; struct pci_dn *pdn; u16 num_vfs, i; - bus = pdev->bus; - hose = pci_bus_to_host(bus); - phb = hose->private_data; + phb = pci_bus_to_pnvhb(pdev->bus); pdn = pci_get_pdn(pdev); num_vfs = pdn->num_vfs; @@ -1535,17 +1513,13 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe); static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs) { - struct pci_bus *bus; - struct pci_controller *hose; struct pnv_phb *phb; struct pnv_ioda_pe *pe; int pe_num; u16 vf_index; struct pci_dn *pdn; - bus = pdev->bus; - hose = pci_bus_to_host(bus); - phb = hose->private_data; + phb = pci_bus_to_pnvhb(pdev->bus); pdn = pci_get_pdn(pdev); if (!pdev->is_physfn) @@ -1572,7 +1546,7 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs) pe->rid = (vf_bus << 8) | vf_devfn; pe_info(pe, "VF %04d:%02d:%02d.%d associated with PE#%x\n", - hose->global_number, pdev->bus->number, + pci_domain_nr(pdev->bus), pdev->bus->number, PCI_SLOT(vf_devfn), PCI_FUNC(vf_devfn), pe_num); if (pnv_ioda_configure_pe(phb, pe)) { @@ -1602,17 +1576,13 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs) static int 
pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) { - struct pci_bus *bus; - struct pci_controller *hose; struct pnv_phb *phb; struct pnv_ioda_pe *pe; struct pci_dn *pdn; int ret; u16 i; - bus = pdev->bus; - hose = pci_bus_to_host(bus); - phb = hose->private_data; + phb = pci_bus_to_pnvhb(pdev->bus); pdn = pci_get_pdn(pdev); if (phb->type == PNV_PHB_IODA2) { @@ -1735,8 +1705,7 @@ static int pnv_pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs) static void pnv_pci_ioda_dma_dev_setup(struct pci_dev *pdev) { - struct pci_controller *hose = pci_bus_to_host(pdev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); struct pci_dn *pdn = pci_get_pdn(pdev); struct pnv_ioda_pe *pe; @@ -1847,8 +1816,7 @@ static int pnv_pci_ioda_dma_64bit_bypass(struct pnv_ioda_pe *pe) static bool pnv_pci_ioda_iommu_bypass_supported(struct pci_dev *pdev, u64 dma_mask) { - struct pci_controller *hose = pci_bus_to_host(pdev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); struct pci_dn *pdn = pci_get_pdn(pdev); struct pnv_ioda_pe *pe; @@ -2766,8 +2734,7 @@ static void pnv_pci_init_ioda_msis(struct pnv_phb *phb) #ifdef CONFIG_PCI_IOV static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) { - struct pci_controller *hose = pci_bus_to_host(pdev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); const resource_size_t gate = phb->ioda.m64_segsize >> 2; struct resource *res; int i; @@ -3101,10 +3068,9 @@ static void pnv_pci_ioda_fixup(void) static resource_size_t pnv_pci_window_alignment(struct pci_bus *bus, unsigned long type) { - struct pci_dev *bridge; - struct pci_controller *hose = pci_bus_to_host(bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(bus); int num_pci_bridges = 0; + struct pci_dev *bridge; bridge = bus->self; while (bridge) { @@ -3190,8 +3156,7 @@ static void pnv_pci_fixup_bridge_resources(struct pci_bus *bus, static void pnv_pci_configure_bus(struct pci_bus *bus) { - struct pci_controller *hose = pci_bus_to_host(bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(bus); struct pci_dev *bridge = bus->self; struct pnv_ioda_pe *pe; bool all = (bridge && pci_pcie_type(bridge) == PCI_EXP_TYPE_PCI_BRIDGE); @@ -3237,8 +3202,7 @@ static resource_size_t pnv_pci_default_alignment(void) static resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev, int resno) { - struct pci_controller *hose = pci_bus_to_host(pdev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); struct pci_dn *pdn = pci_get_pdn(pdev); resource_size_t align; @@ -3274,8 +3238,7 @@ static resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev, */ static bool pnv_pci_enable_device_hook(struct pci_dev *dev) { - struct pci_controller *hose = pci_bus_to_host(dev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(dev->bus); struct pci_dn *pdn; /* The function is probably called while the PEs have @@ -3488,8 +3451,7 @@ static void pnv_ioda_release_pe(struct pnv_ioda_pe *pe) static void pnv_pci_release_device(struct pci_dev *pdev) { - struct pci_controller *hose = pci_bus_to_host(pdev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); struct pci_dn *pdn = pci_get_pdn(pdev); struct pnv_ioda_pe *pe; @@ -3534,8 +3496,7 @@ static void 
pnv_pci_ioda_shutdown(struct pci_controller *hose) static void pnv_pci_ioda_dma_bus_setup(struct pci_bus *bus) { - struct pci_controller *hose = bus->sysdata; - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(bus); struct pnv_ioda_pe *pe; list_for_each_entry(pe, &phb->ioda.pe_list, list) { @@ -3873,8 +3834,7 @@ void __init pnv_pci_init_npu2_opencapi_phb(struct device_node *np) static void pnv_npu2_opencapi_cfg_size_fixup(struct pci_dev *dev) { - struct pci_controller *hose = pci_bus_to_host(dev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(dev->bus); if (!machine_is(powernv)) return; diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c index 091fe1cf386b..9b9bca169275 100644 --- a/arch/powerpc/platforms/powernv/pci.c +++ b/arch/powerpc/platforms/powernv/pci.c @@ -162,8 +162,7 @@ EXPORT_SYMBOL_GPL(pnv_pci_set_power_state); int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type) { - struct pci_controller *hose = pci_bus_to_host(pdev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); struct msi_desc *entry; struct msi_msg msg; int hwirq; @@ -211,8 +210,7 @@ int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type) void pnv_teardown_msi_irqs(struct pci_dev *pdev) { - struct pci_controller *hose = pci_bus_to_host(pdev->bus); - struct pnv_phb *phb = hose->private_data; + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); struct msi_desc *entry; irq_hw_number_t hwirq; @@ -824,10 +822,9 @@ EXPORT_SYMBOL(pnv_pci_get_phb_node); int pnv_pci_set_tunnel_bar(struct pci_dev *dev, u64 addr, int enable) { - __be64 val; - struct pci_controller *hose; - struct pnv_phb *phb; + struct pnv_phb *phb = pci_bus_to_pnvhb(dev->bus); u64 tunnel_bar; + __be64 val; int rc; if (!opal_check_token(OPAL_PCI_GET_PBCQ_TUNNEL_BAR)) @@ -835,9 +832,6 @@ int pnv_pci_set_tunnel_bar(struct pci_dev *dev, u64 addr, int enable) if (!opal_check_token(OPAL_PCI_SET_PBCQ_TUNNEL_BAR)) return -ENXIO; - hose = pci_bus_to_host(dev->bus); - phb = hose->private_data; - mutex_lock(&tunnel_mutex); rc = opal_pci_get_pbcq_tunnel_bar(phb->opal_id, &val); if (rc != OPAL_SUCCESS) { diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h index 51c254f2f3cb..0727dec9a0d1 100644 --- a/arch/powerpc/platforms/powernv/pci.h +++ b/arch/powerpc/platforms/powernv/pci.h @@ -260,4 +260,14 @@ extern void pnv_pci_setup_iommu_table(struct iommu_table *tbl, extern unsigned long pnv_ioda_parse_tce_sizes(struct pnv_phb *phb); +static inline struct pnv_phb *pci_bus_to_pnvhb(struct pci_bus *bus) +{ + struct pci_controller *hose = bus->sysdata; + + if (hose) + return hose->private_data; + + return NULL; +} + #endif /* __POWERNV_PCI_H */ From patchwork Wed Jul 22 06:57:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333596 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRHF4Qnzz9sQt for ; Wed, 22 Jul 2020 17:02:33 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: 
[203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:29 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 02/16] powerpc/powernv/pci: Always tear down DMA windows on PE release Date: Wed, 22 Jul 2020 16:57:01 +1000 Message-Id: <20200722065715.1432738-2-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Alexey Kardashevskiy , Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Currently we have these two functions: pnv_pci_ioda2_release_dma_pe(), and pnv_pci_ioda2_release_pe_dma() The first is used when tearing down VF PEs and the other is used for normal devices. There's very little difference between the two though. The latter (non-VF) will skip a call to pnv_pci_ioda2_unset_window() unless CONFIG_IOMMU_API=y is set. There's no real point in doing this so fold the two together. Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: no change --- arch/powerpc/platforms/powernv/pci-ioda.c | 30 +++-------------------- 1 file changed, 3 insertions(+), 27 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c index 687919db0347..bfb40607aa0e 100644 --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -1422,26 +1422,7 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) return -EBUSY; } -static long pnv_pci_ioda2_unset_window(struct iommu_table_group *table_group, - int num); - -static void pnv_pci_ioda2_release_dma_pe(struct pci_dev *dev, struct pnv_ioda_pe *pe) -{ - struct iommu_table *tbl; - int64_t rc; - - tbl = pe->table_group.tables[0]; - rc = pnv_pci_ioda2_unset_window(&pe->table_group, 0); - if (rc) - pe_warn(pe, "OPAL error %lld release DMA window\n", rc); - - pnv_pci_ioda2_set_bypass(pe, false); - if (pe->table_group.group) { - iommu_group_put(pe->table_group.group); - BUG_ON(pe->table_group.group); - } - iommu_tce_table_put(tbl); -} +static void pnv_pci_ioda2_release_pe_dma(struct pnv_ioda_pe *pe); static void pnv_ioda_release_vf_PE(struct pci_dev *pdev) { @@ -1455,11 +1436,12 @@ static void pnv_ioda_release_vf_PE(struct pci_dev *pdev) if (!pdev->is_physfn) return; + /* FIXME: Use pnv_ioda_release_pe()? 
*/ list_for_each_entry_safe(pe, pe_n, &phb->ioda.pe_list, list) { if (pe->parent_dev != pdev) continue; - pnv_pci_ioda2_release_dma_pe(pdev, pe); + pnv_pci_ioda2_release_pe_dma(pe); /* Remove from list */ mutex_lock(&phb->ioda.pe_list_mutex); @@ -2429,7 +2411,6 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe) return 0; } -#if defined(CONFIG_IOMMU_API) || defined(CONFIG_PCI_IOV) static long pnv_pci_ioda2_unset_window(struct iommu_table_group *table_group, int num) { @@ -2453,7 +2434,6 @@ static long pnv_pci_ioda2_unset_window(struct iommu_table_group *table_group, return ret; } -#endif #ifdef CONFIG_IOMMU_API unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift, @@ -3334,18 +3314,14 @@ static void pnv_pci_ioda2_release_pe_dma(struct pnv_ioda_pe *pe) { struct iommu_table *tbl = pe->table_group.tables[0]; unsigned int weight = pnv_pci_ioda_pe_dma_weight(pe); -#ifdef CONFIG_IOMMU_API int64_t rc; -#endif if (!weight) return; -#ifdef CONFIG_IOMMU_API rc = pnv_pci_ioda2_unset_window(&pe->table_group, 0); if (rc) pe_warn(pe, "OPAL error %lld release DMA window\n", rc); -#endif pnv_pci_ioda2_set_bypass(pe, false); if (pe->table_group.group) { From patchwork Wed Jul 22 06:57:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333597 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRKN3b6Qz9sR4 for ; Wed, 22 Jul 2020 17:04:24 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=EdlsF31X; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BBRKN0LH7zDr0p for ; Wed, 22 Jul 2020 17:04:24 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::642; helo=mail-pl1-x642.google.com; envelope-from=oohall@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=EdlsF31X; dkim-atps=neutral Received: from mail-pl1-x642.google.com (mail-pl1-x642.google.com [IPv6:2607:f8b0:4864:20::642]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BBR9X3FYNzDqvn for ; Wed, 22 Jul 2020 16:57:36 +1000 (AEST) Received: by mail-pl1-x642.google.com with SMTP id m16so484069pls.5 for ; Tue, 21 Jul 2020 23:57:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references 
From: Oliver O'Halloran
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 03/16] powerpc/powernv/pci: Add explicit tracking of the DMA setup state
Date: Wed, 22 Jul 2020 16:57:02 +1000
Message-Id: <20200722065715.1432738-3-oohall@gmail.com>
In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com>
References: <20200722065715.1432738-1-oohall@gmail.com>
Cc: Alexey Kardashevskiy, Oliver O'Halloran

There's an optimisation in the PE setup which skips performing DMA setup
for a PE if the PE only contains bridges. The assumption is that only
"real" devices will DMA to system memory, which is probably fair.

However, if we start off with only bridge devices in a PE and then add a
non-bridge device, the new device won't be able to use DMA because we
never configured it. Fix this (admittedly pretty weird) edge case by
tracking whether we've done the DMA setup for the PE or not. If a
non-bridge device is added to the PE (via rescan or hotplug, or whatever)
we can set up DMA on demand.

This also means the only remaining user of the old "DMA Weight" code is
the IODA1 DMA setup code that it was originally added for, which is good.
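The shape of the change is easiest to see in isolation. Below is a condensed sketch of the on-demand path this patch adds to pnv_pci_ioda_dma_dev_setup() (the real hunk appears in the diff further down; the PE lookup is simplified here and the surrounding PE allocation and pdn bookkeeping are omitted):

static void pnv_pci_ioda_dma_dev_setup(struct pci_dev *pdev)
{
	struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus);
	struct pnv_ioda_pe *pe = pnv_ioda_get_pe(pdev);	/* simplified lookup */

	/*
	 * Bridges are assumed not to DMA, so allocating the TCE table (and
	 * doing the rest of the DMA setup) is deferred until a non-bridge
	 * device shows up in the PE. pe->dma_setup_done records that the
	 * setup has been performed.
	 */
	if (!pe->dma_setup_done && !pci_is_bridge(pdev)) {
		switch (phb->type) {
		case PNV_PHB_IODA1:
			pnv_pci_ioda1_setup_dma_pe(phb, pe);
			break;
		case PNV_PHB_IODA2:
			pnv_pci_ioda2_setup_dma_pe(phb, pe);
			break;
		default:
			pr_warn("%s: No DMA for PHB#%x (type %d)\n",
				__func__, phb->hose->global_number, phb->type);
		}
	}
}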
Cc: Alexey Kardashevskiy Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: no changes --- arch/powerpc/platforms/powernv/pci-ioda.c | 48 ++++++++++++++--------- arch/powerpc/platforms/powernv/pci.h | 7 ++++ 2 files changed, 36 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c index bfb40607aa0e..bb9c1cc60c33 100644 --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -141,6 +141,7 @@ static struct pnv_ioda_pe *pnv_ioda_init_pe(struct pnv_phb *phb, int pe_no) phb->ioda.pe_array[pe_no].phb = phb; phb->ioda.pe_array[pe_no].pe_number = pe_no; + phb->ioda.pe_array[pe_no].dma_setup_done = false; /* * Clear the PE frozen state as it might be put into frozen state @@ -1685,6 +1686,12 @@ static int pnv_pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs) } #endif /* CONFIG_PCI_IOV */ +static void pnv_pci_ioda1_setup_dma_pe(struct pnv_phb *phb, + struct pnv_ioda_pe *pe); + +static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, + struct pnv_ioda_pe *pe); + static void pnv_pci_ioda_dma_dev_setup(struct pci_dev *pdev) { struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); @@ -1713,6 +1720,24 @@ static void pnv_pci_ioda_dma_dev_setup(struct pci_dev *pdev) pci_info(pdev, "Added to existing PE#%x\n", pe->pe_number); } + /* + * We assume that bridges *probably* don't need to do any DMA so we can + * skip allocating a TCE table, etc unless we get a non-bridge device. + */ + if (!pe->dma_setup_done && !pci_is_bridge(pdev)) { + switch (phb->type) { + case PNV_PHB_IODA1: + pnv_pci_ioda1_setup_dma_pe(phb, pe); + break; + case PNV_PHB_IODA2: + pnv_pci_ioda2_setup_dma_pe(phb, pe); + break; + default: + pr_warn("%s: No DMA for PHB#%x (type %d)\n", + __func__, phb->hose->global_number, phb->type); + } + } + if (pdn) pdn->pe_number = pe->pe_number; pe->device_count++; @@ -2222,6 +2247,7 @@ static void pnv_pci_ioda1_setup_dma_pe(struct pnv_phb *phb, pe->table_group.tce32_size = tbl->it_size << tbl->it_page_shift; iommu_init_table(tbl, phb->hose->node, 0, 0); + pe->dma_setup_done = true; return; fail: /* XXX Failure: Try to fallback to 64-bit only ? 
*/ @@ -2536,9 +2562,6 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, { int64_t rc; - if (!pnv_pci_ioda_pe_dma_weight(pe)) - return; - /* TVE #1 is selected by PCI address bit 59 */ pe->tce_bypass_base = 1ull << 59; @@ -2563,6 +2586,7 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, iommu_register_group(&pe->table_group, phb->hose->global_number, pe->pe_number); #endif + pe->dma_setup_done = true; } int64_t pnv_opal_pci_msi_eoi(struct irq_chip *chip, unsigned int hw_irq) @@ -3136,7 +3160,6 @@ static void pnv_pci_fixup_bridge_resources(struct pci_bus *bus, static void pnv_pci_configure_bus(struct pci_bus *bus) { - struct pnv_phb *phb = pci_bus_to_pnvhb(bus); struct pci_dev *bridge = bus->self; struct pnv_ioda_pe *pe; bool all = (bridge && pci_pcie_type(bridge) == PCI_EXP_TYPE_PCI_BRIDGE); @@ -3160,17 +3183,6 @@ static void pnv_pci_configure_bus(struct pci_bus *bus) return; pnv_ioda_setup_pe_seg(pe); - switch (phb->type) { - case PNV_PHB_IODA1: - pnv_pci_ioda1_setup_dma_pe(phb, pe); - break; - case PNV_PHB_IODA2: - pnv_pci_ioda2_setup_dma_pe(phb, pe); - break; - default: - pr_warn("%s: No DMA for PHB#%x (type %d)\n", - __func__, phb->hose->global_number, phb->type); - } } static resource_size_t pnv_pci_default_alignment(void) @@ -3289,11 +3301,10 @@ static long pnv_pci_ioda1_unset_window(struct iommu_table_group *table_group, static void pnv_pci_ioda1_release_pe_dma(struct pnv_ioda_pe *pe) { - unsigned int weight = pnv_pci_ioda_pe_dma_weight(pe); struct iommu_table *tbl = pe->table_group.tables[0]; int64_t rc; - if (!weight) + if (!pe->dma_setup_done) return; rc = pnv_pci_ioda1_unset_window(&pe->table_group, 0); @@ -3313,10 +3324,9 @@ static void pnv_pci_ioda1_release_pe_dma(struct pnv_ioda_pe *pe) static void pnv_pci_ioda2_release_pe_dma(struct pnv_ioda_pe *pe) { struct iommu_table *tbl = pe->table_group.tables[0]; - unsigned int weight = pnv_pci_ioda_pe_dma_weight(pe); int64_t rc; - if (!weight) + if (pe->dma_setup_done) return; rc = pnv_pci_ioda2_unset_window(&pe->table_group, 0); diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h index 0727dec9a0d1..6aa6aefb637d 100644 --- a/arch/powerpc/platforms/powernv/pci.h +++ b/arch/powerpc/platforms/powernv/pci.h @@ -87,6 +87,13 @@ struct pnv_ioda_pe { bool tce_bypass_enabled; uint64_t tce_bypass_base; + /* + * Used to track whether we've done DMA setup for this PE or not. We + * want to defer allocating TCE tables, etc until we've added a + * non-bridge device to the PE. + */ + bool dma_setup_done; + /* MSIs. MVE index is identical for for 32 and 64 bit MSI * and -1 if not supported. 
(It's actually identical to the * PE number)
From patchwork Wed Jul 22 06:57:03 2020
X-Patchwork-Submitter: Oliver O'Halloran
X-Patchwork-Id: 1333598
X-Google-Smtp-Source: ABdhPJwG+X9pHnzBZwbgKnzfDYD2l9MiDF+KwA7ZMpbLVECNj2WMQJRDbNe9FQ7pbE+yHbVtUHh+Tw== X-Received: by 2002:a17:902:7241:: with SMTP id c1mr26291905pll.79.1595401054666; Tue, 21 Jul 2020 23:57:34 -0700 (PDT) Received: from localhost.ibm.com (203-219-159-24.tpgi.com.au. [203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:34 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 04/16] powerpc/powernv/pci: Initialise M64 for IODA1 as a 1-1 window Date: Wed, 22 Jul 2020 16:57:03 +1000 Message-Id: <20200722065715.1432738-4-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Alexey Kardashevskiy , Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" We pre-configure the m64 window for IODA1 as a 1-1 segment-PE mapping, similar to PHB3. Currently the actual mapping of segments occurs in pnv_ioda_pick_m64_pe(), but we can move it into pnv_ioda1_init_m64() and drop the IODA1 specific code paths in the PE setup / teardown. Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: no changes --- arch/powerpc/platforms/powernv/pci-ioda.c | 55 +++++++++++------------ 1 file changed, 25 insertions(+), 30 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c index bb9c1cc60c33..8fb17676d914 100644 --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -311,6 +311,28 @@ static int pnv_ioda1_init_m64(struct pnv_phb *phb) } } + for (index = 0; index < phb->ioda.total_pe_num; index++) { + int64_t rc; + + /* + * P7IOC supports M64DT, which helps mapping M64 segment + * to one particular PE#. However, PHB3 has fixed mapping + * between M64 segment and PE#. In order to have same logic + * for P7IOC and PHB3, we enforce fixed mapping between M64 + * segment and PE# on P7IOC. + */ + rc = opal_pci_map_pe_mmio_window(phb->opal_id, + index, OPAL_M64_WINDOW_TYPE, + index / PNV_IODA1_M64_SEGS, + index % PNV_IODA1_M64_SEGS); + if (rc != OPAL_SUCCESS) { + pr_warn("%s: Error %lld mapping M64 for PHB#%x-PE#%x\n", + __func__, rc, phb->hose->global_number, + index); + goto fail; + } + } + /* * Exclude the segments for reserved and root bus PE, which * are first or last two PEs. @@ -402,26 +424,6 @@ static struct pnv_ioda_pe *pnv_ioda_pick_m64_pe(struct pci_bus *bus, bool all) pe->master = master_pe; list_add_tail(&pe->list, &master_pe->slaves); } - - /* - * P7IOC supports M64DT, which helps mapping M64 segment - * to one particular PE#. However, PHB3 has fixed mapping - * between M64 segment and PE#. In order to have same logic - * for P7IOC and PHB3, we enforce fixed mapping between M64 - * segment and PE# on P7IOC. 
- */ - if (phb->type == PNV_PHB_IODA1) { - int64_t rc; - - rc = opal_pci_map_pe_mmio_window(phb->opal_id, - pe->pe_number, OPAL_M64_WINDOW_TYPE, - pe->pe_number / PNV_IODA1_M64_SEGS, - pe->pe_number % PNV_IODA1_M64_SEGS); - if (rc != OPAL_SUCCESS) - pr_warn("%s: Error %lld mapping M64 for PHB#%x-PE#%x\n", - __func__, rc, phb->hose->global_number, - pe->pe_number); - } } kfree(pe_alloc); @@ -3354,14 +3356,8 @@ static void pnv_ioda_free_pe_seg(struct pnv_ioda_pe *pe, if (map[idx] != pe->pe_number) continue; - if (win == OPAL_M64_WINDOW_TYPE) - rc = opal_pci_map_pe_mmio_window(phb->opal_id, - phb->ioda.reserved_pe_idx, win, - idx / PNV_IODA1_M64_SEGS, - idx % PNV_IODA1_M64_SEGS); - else - rc = opal_pci_map_pe_mmio_window(phb->opal_id, - phb->ioda.reserved_pe_idx, win, 0, idx); + rc = opal_pci_map_pe_mmio_window(phb->opal_id, + phb->ioda.reserved_pe_idx, win, 0, idx); if (rc != OPAL_SUCCESS) pe_warn(pe, "Error %lld unmapping (%d) segment#%d\n", @@ -3380,8 +3376,7 @@ static void pnv_ioda_release_pe_seg(struct pnv_ioda_pe *pe) phb->ioda.io_segmap); pnv_ioda_free_pe_seg(pe, OPAL_M32_WINDOW_TYPE, phb->ioda.m32_segmap); - pnv_ioda_free_pe_seg(pe, OPAL_M64_WINDOW_TYPE, - phb->ioda.m64_segmap); + /* M64 is pre-configured by pnv_ioda1_init_m64() */ } else if (phb->type == PNV_PHB_IODA2) { pnv_ioda_free_pe_seg(pe, OPAL_M32_WINDOW_TYPE, phb->ioda.m32_segmap); From patchwork Wed Jul 22 06:57:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333600 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRQ05064z9sSy for ; Wed, 22 Jul 2020 17:08:24 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=gWOoEZ9Y; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BBRQ02cQ2zDqy9 for ; Wed, 22 Jul 2020 17:08:24 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::536; helo=mail-pg1-x536.google.com; envelope-from=oohall@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=gWOoEZ9Y; dkim-atps=neutral Received: from mail-pg1-x536.google.com (mail-pg1-x536.google.com [IPv6:2607:f8b0:4864:20::536]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BBR9c6NDYzDqvH for ; Wed, 22 Jul 2020 16:57:40 +1000 (AEST) Received: by mail-pg1-x536.google.com with SMTP id t6so705233pgq.1 for ; Tue, 21 Jul 2020 23:57:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
From: Oliver O'Halloran
To: linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v2 05/16] powerpc/powernv/sriov: Move SR-IOV into a separate file
Date: Wed, 22 Jul 2020 16:57:04 +1000
Message-Id: <20200722065715.1432738-5-oohall@gmail.com>
In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com>
References: <20200722065715.1432738-1-oohall@gmail.com>
Cc: Oliver O'Halloran

pci-ioda.c is getting a bit unwieldy due to the amount of stuff jammed in
there. The SR-IOV support can be extracted easily enough and is mostly
standalone, so move it into a separate file.

This patch also moves the PowerNV SR-IOV specific fields out of pci_dn and
into a platform-specific structure. I'm not sure how they ended up in
there in the first place, but leaking platform specifics into common code
has proven to be a terrible idea so far, so let's stop doing that.
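The hook this relies on is the bare void *iov_data pointer added to struct dev_archdata in the diff below, so the platform code owns the container structure. A sketch of the idea follows; the struct layout and the pnv_iov_data / pnv_iov_get() names are illustrative assumptions, since the excerpt shown here doesn't include the definition the patch actually adds to pci.h:

/*
 * Illustrative only: per-PF SR-IOV state moves out of pci_dn and into a
 * platform-private structure reachable via dev_archdata::iov_data. The
 * struct name and fields are assumptions for the sake of the example.
 */
struct pnv_iov_data {
	u16	num_vfs;		/* VFs currently enabled */
	bool	m64_single_mode;	/* one M64 BAR per VF? */
	/* ... the rest of the per-PF SR-IOV state ... */
};

static inline struct pnv_iov_data *pnv_iov_get(struct pci_dev *pdev)
{
	return pdev->dev.archdata.iov_data;
}

The SR-IOV code moved into the new pci-sriov.c would presumably fetch its state through a helper of this sort rather than through pci_get_pdn(), keeping the platform details out of the common pci_dn.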
Signed-off-by: Oliver O'Halloran --- v2: no changes --- arch/powerpc/include/asm/device.h | 3 + arch/powerpc/platforms/powernv/Makefile | 1 + arch/powerpc/platforms/powernv/pci-ioda.c | 673 +-------------------- arch/powerpc/platforms/powernv/pci-sriov.c | 642 ++++++++++++++++++++ arch/powerpc/platforms/powernv/pci.h | 74 +++ 5 files changed, 738 insertions(+), 655 deletions(-) create mode 100644 arch/powerpc/platforms/powernv/pci-sriov.c diff --git a/arch/powerpc/include/asm/device.h b/arch/powerpc/include/asm/device.h index 266542769e4b..4d8934db7ef5 100644 --- a/arch/powerpc/include/asm/device.h +++ b/arch/powerpc/include/asm/device.h @@ -49,6 +49,9 @@ struct dev_archdata { #ifdef CONFIG_CXL_BASE struct cxl_context *cxl_ctx; #endif +#ifdef CONFIG_PCI_IOV + void *iov_data; +#endif }; struct pdev_archdata { diff --git a/arch/powerpc/platforms/powernv/Makefile b/arch/powerpc/platforms/powernv/Makefile index fe3f0fb5aeca..2eb6ae150d1f 100644 --- a/arch/powerpc/platforms/powernv/Makefile +++ b/arch/powerpc/platforms/powernv/Makefile @@ -11,6 +11,7 @@ obj-$(CONFIG_FA_DUMP) += opal-fadump.o obj-$(CONFIG_PRESERVE_FA_DUMP) += opal-fadump.o obj-$(CONFIG_OPAL_CORE) += opal-core.o obj-$(CONFIG_PCI) += pci.o pci-ioda.o npu-dma.o pci-ioda-tce.o +obj-$(CONFIG_PCI_IOV) += pci-sriov.o obj-$(CONFIG_CXL_BASE) += pci-cxl.o obj-$(CONFIG_EEH) += eeh-powernv.o obj-$(CONFIG_MEMORY_FAILURE) += opal-memory-errors.o diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c index 8fb17676d914..2d36a9ebf0e9 100644 --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -115,26 +115,6 @@ static int __init pci_reset_phbs_setup(char *str) early_param("ppc_pci_reset_phbs", pci_reset_phbs_setup); -static inline bool pnv_pci_is_m64(struct pnv_phb *phb, struct resource *r) -{ - /* - * WARNING: We cannot rely on the resource flags. The Linux PCI - * allocation code sometimes decides to put a 64-bit prefetchable - * BAR in the 32-bit window, so we have to compare the addresses. - * - * For simplicity we only test resource start. 
- */ - return (r->start >= phb->ioda.m64_base && - r->start < (phb->ioda.m64_base + phb->ioda.m64_size)); -} - -static inline bool pnv_pci_is_m64_flags(unsigned long resource_flags) -{ - unsigned long flags = (IORESOURCE_MEM_64 | IORESOURCE_PREFETCH); - - return (resource_flags & flags) == flags; -} - static struct pnv_ioda_pe *pnv_ioda_init_pe(struct pnv_phb *phb, int pe_no) { s64 rc; @@ -172,7 +152,7 @@ static void pnv_ioda_reserve_pe(struct pnv_phb *phb, int pe_no) pnv_ioda_init_pe(phb, pe_no); } -static struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb) +struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb) { long pe; @@ -184,7 +164,7 @@ static struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb) return NULL; } -static void pnv_ioda_free_pe(struct pnv_ioda_pe *pe) +void pnv_ioda_free_pe(struct pnv_ioda_pe *pe) { struct pnv_phb *phb = pe->phb; unsigned int pe_num = pe->pe_number; @@ -816,7 +796,7 @@ static void pnv_ioda_unset_peltv(struct pnv_phb *phb, pe_warn(pe, "OPAL error %lld remove self from PELTV\n", rc); } -static int pnv_ioda_deconfigure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe) +int pnv_ioda_deconfigure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe) { struct pci_dev *parent; uint8_t bcomp, dcomp, fcomp; @@ -887,7 +867,7 @@ static int pnv_ioda_deconfigure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe) return 0; } -static int pnv_ioda_configure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe) +int pnv_ioda_configure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe) { struct pci_dev *parent; uint8_t bcomp, dcomp, fcomp; @@ -982,91 +962,6 @@ static int pnv_ioda_configure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe) return 0; } -#ifdef CONFIG_PCI_IOV -static int pnv_pci_vf_resource_shift(struct pci_dev *dev, int offset) -{ - struct pci_dn *pdn = pci_get_pdn(dev); - int i; - struct resource *res, res2; - resource_size_t size; - u16 num_vfs; - - if (!dev->is_physfn) - return -EINVAL; - - /* - * "offset" is in VFs. The M64 windows are sized so that when they - * are segmented, each segment is the same size as the IOV BAR. - * Each segment is in a separate PE, and the high order bits of the - * address are the PE number. Therefore, each VF's BAR is in a - * separate PE, and changing the IOV BAR start address changes the - * range of PEs the VFs are in. - */ - num_vfs = pdn->num_vfs; - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { - res = &dev->resource[i + PCI_IOV_RESOURCES]; - if (!res->flags || !res->parent) - continue; - - /* - * The actual IOV BAR range is determined by the start address - * and the actual size for num_vfs VFs BAR. This check is to - * make sure that after shifting, the range will not overlap - * with another device. - */ - size = pci_iov_resource_size(dev, i + PCI_IOV_RESOURCES); - res2.flags = res->flags; - res2.start = res->start + (size * offset); - res2.end = res2.start + (size * num_vfs) - 1; - - if (res2.end > res->end) { - dev_err(&dev->dev, "VF BAR%d: %pR would extend past %pR (trying to enable %d VFs shifted by %d)\n", - i, &res2, res, num_vfs, offset); - return -EBUSY; - } - } - - /* - * Since M64 BAR shares segments among all possible 256 PEs, - * we have to shift the beginning of PF IOV BAR to make it start from - * the segment which belongs to the PE number assigned to the first VF. - * This creates a "hole" in the /proc/iomem which could be used for - * allocating other resources so we reserve this area below and - * release when IOV is released. 
- */ - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { - res = &dev->resource[i + PCI_IOV_RESOURCES]; - if (!res->flags || !res->parent) - continue; - - size = pci_iov_resource_size(dev, i + PCI_IOV_RESOURCES); - res2 = *res; - res->start += size * offset; - - dev_info(&dev->dev, "VF BAR%d: %pR shifted to %pR (%sabling %d VFs shifted by %d)\n", - i, &res2, res, (offset > 0) ? "En" : "Dis", - num_vfs, offset); - - if (offset < 0) { - devm_release_resource(&dev->dev, &pdn->holes[i]); - memset(&pdn->holes[i], 0, sizeof(pdn->holes[i])); - } - - pci_update_resource(dev, i + PCI_IOV_RESOURCES); - - if (offset > 0) { - pdn->holes[i].start = res2.start; - pdn->holes[i].end = res2.start + size * offset - 1; - pdn->holes[i].flags = IORESOURCE_BUS; - pdn->holes[i].name = "pnv_iov_reserved"; - devm_request_resource(&dev->dev, res->parent, - &pdn->holes[i]); - } - } - return 0; -} -#endif /* CONFIG_PCI_IOV */ - static struct pnv_ioda_pe *pnv_ioda_setup_dev_PE(struct pci_dev *dev) { struct pnv_phb *phb = pci_bus_to_pnvhb(dev->bus); @@ -1294,406 +1189,9 @@ static void pnv_pci_ioda_setup_nvlink(void) #endif } -#ifdef CONFIG_PCI_IOV -static int pnv_pci_vf_release_m64(struct pci_dev *pdev, u16 num_vfs) -{ - struct pnv_phb *phb; - struct pci_dn *pdn; - int i, j; - int m64_bars; - - phb = pci_bus_to_pnvhb(pdev->bus); - pdn = pci_get_pdn(pdev); - - if (pdn->m64_single_mode) - m64_bars = num_vfs; - else - m64_bars = 1; - - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) - for (j = 0; j < m64_bars; j++) { - if (pdn->m64_map[j][i] == IODA_INVALID_M64) - continue; - opal_pci_phb_mmio_enable(phb->opal_id, - OPAL_M64_WINDOW_TYPE, pdn->m64_map[j][i], 0); - clear_bit(pdn->m64_map[j][i], &phb->ioda.m64_bar_alloc); - pdn->m64_map[j][i] = IODA_INVALID_M64; - } - - kfree(pdn->m64_map); - return 0; -} - -static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) -{ - struct pnv_phb *phb; - struct pci_dn *pdn; - unsigned int win; - struct resource *res; - int i, j; - int64_t rc; - int total_vfs; - resource_size_t size, start; - int pe_num; - int m64_bars; - - phb = pci_bus_to_pnvhb(pdev->bus); - pdn = pci_get_pdn(pdev); - total_vfs = pci_sriov_get_totalvfs(pdev); - - if (pdn->m64_single_mode) - m64_bars = num_vfs; - else - m64_bars = 1; - - pdn->m64_map = kmalloc_array(m64_bars, - sizeof(*pdn->m64_map), - GFP_KERNEL); - if (!pdn->m64_map) - return -ENOMEM; - /* Initialize the m64_map to IODA_INVALID_M64 */ - for (i = 0; i < m64_bars ; i++) - for (j = 0; j < PCI_SRIOV_NUM_BARS; j++) - pdn->m64_map[i][j] = IODA_INVALID_M64; - - - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { - res = &pdev->resource[i + PCI_IOV_RESOURCES]; - if (!res->flags || !res->parent) - continue; - - for (j = 0; j < m64_bars; j++) { - do { - win = find_next_zero_bit(&phb->ioda.m64_bar_alloc, - phb->ioda.m64_bar_idx + 1, 0); - - if (win >= phb->ioda.m64_bar_idx + 1) - goto m64_failed; - } while (test_and_set_bit(win, &phb->ioda.m64_bar_alloc)); - - pdn->m64_map[j][i] = win; - - if (pdn->m64_single_mode) { - size = pci_iov_resource_size(pdev, - PCI_IOV_RESOURCES + i); - start = res->start + size * j; - } else { - size = resource_size(res); - start = res->start; - } - - /* Map the M64 here */ - if (pdn->m64_single_mode) { - pe_num = pdn->pe_num_map[j]; - rc = opal_pci_map_pe_mmio_window(phb->opal_id, - pe_num, OPAL_M64_WINDOW_TYPE, - pdn->m64_map[j][i], 0); - } - - rc = opal_pci_set_phb_mem_window(phb->opal_id, - OPAL_M64_WINDOW_TYPE, - pdn->m64_map[j][i], - start, - 0, /* unused */ - size); - - - if (rc != OPAL_SUCCESS) { - dev_err(&pdev->dev, "Failed to map M64 
window #%d: %lld\n", - win, rc); - goto m64_failed; - } - - if (pdn->m64_single_mode) - rc = opal_pci_phb_mmio_enable(phb->opal_id, - OPAL_M64_WINDOW_TYPE, pdn->m64_map[j][i], 2); - else - rc = opal_pci_phb_mmio_enable(phb->opal_id, - OPAL_M64_WINDOW_TYPE, pdn->m64_map[j][i], 1); - - if (rc != OPAL_SUCCESS) { - dev_err(&pdev->dev, "Failed to enable M64 window #%d: %llx\n", - win, rc); - goto m64_failed; - } - } - } - return 0; - -m64_failed: - pnv_pci_vf_release_m64(pdev, num_vfs); - return -EBUSY; -} - -static void pnv_pci_ioda2_release_pe_dma(struct pnv_ioda_pe *pe); - -static void pnv_ioda_release_vf_PE(struct pci_dev *pdev) -{ - struct pnv_phb *phb; - struct pnv_ioda_pe *pe, *pe_n; - struct pci_dn *pdn; - - phb = pci_bus_to_pnvhb(pdev->bus); - pdn = pci_get_pdn(pdev); - - if (!pdev->is_physfn) - return; - - /* FIXME: Use pnv_ioda_release_pe()? */ - list_for_each_entry_safe(pe, pe_n, &phb->ioda.pe_list, list) { - if (pe->parent_dev != pdev) - continue; - - pnv_pci_ioda2_release_pe_dma(pe); - - /* Remove from list */ - mutex_lock(&phb->ioda.pe_list_mutex); - list_del(&pe->list); - mutex_unlock(&phb->ioda.pe_list_mutex); - - pnv_ioda_deconfigure_pe(phb, pe); - - pnv_ioda_free_pe(pe); - } -} - -static void pnv_pci_sriov_disable(struct pci_dev *pdev) -{ - struct pnv_phb *phb; - struct pnv_ioda_pe *pe; - struct pci_dn *pdn; - u16 num_vfs, i; - - phb = pci_bus_to_pnvhb(pdev->bus); - pdn = pci_get_pdn(pdev); - num_vfs = pdn->num_vfs; - - /* Release VF PEs */ - pnv_ioda_release_vf_PE(pdev); - - if (phb->type == PNV_PHB_IODA2) { - if (!pdn->m64_single_mode) - pnv_pci_vf_resource_shift(pdev, -*pdn->pe_num_map); - - /* Release M64 windows */ - pnv_pci_vf_release_m64(pdev, num_vfs); - - /* Release PE numbers */ - if (pdn->m64_single_mode) { - for (i = 0; i < num_vfs; i++) { - if (pdn->pe_num_map[i] == IODA_INVALID_PE) - continue; - - pe = &phb->ioda.pe_array[pdn->pe_num_map[i]]; - pnv_ioda_free_pe(pe); - } - } else - bitmap_clear(phb->ioda.pe_alloc, *pdn->pe_num_map, num_vfs); - /* Releasing pe_num_map */ - kfree(pdn->pe_num_map); - } -} - -static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, - struct pnv_ioda_pe *pe); -static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs) -{ - struct pnv_phb *phb; - struct pnv_ioda_pe *pe; - int pe_num; - u16 vf_index; - struct pci_dn *pdn; - - phb = pci_bus_to_pnvhb(pdev->bus); - pdn = pci_get_pdn(pdev); - - if (!pdev->is_physfn) - return; - - /* Reserve PE for each VF */ - for (vf_index = 0; vf_index < num_vfs; vf_index++) { - int vf_devfn = pci_iov_virtfn_devfn(pdev, vf_index); - int vf_bus = pci_iov_virtfn_bus(pdev, vf_index); - struct pci_dn *vf_pdn; - - if (pdn->m64_single_mode) - pe_num = pdn->pe_num_map[vf_index]; - else - pe_num = *pdn->pe_num_map + vf_index; - - pe = &phb->ioda.pe_array[pe_num]; - pe->pe_number = pe_num; - pe->phb = phb; - pe->flags = PNV_IODA_PE_VF; - pe->pbus = NULL; - pe->parent_dev = pdev; - pe->mve_number = -1; - pe->rid = (vf_bus << 8) | vf_devfn; - - pe_info(pe, "VF %04d:%02d:%02d.%d associated with PE#%x\n", - pci_domain_nr(pdev->bus), pdev->bus->number, - PCI_SLOT(vf_devfn), PCI_FUNC(vf_devfn), pe_num); - - if (pnv_ioda_configure_pe(phb, pe)) { - /* XXX What do we do here ? 
*/ - pnv_ioda_free_pe(pe); - pe->pdev = NULL; - continue; - } - - /* Put PE to the list */ - mutex_lock(&phb->ioda.pe_list_mutex); - list_add_tail(&pe->list, &phb->ioda.pe_list); - mutex_unlock(&phb->ioda.pe_list_mutex); - - /* associate this pe to it's pdn */ - list_for_each_entry(vf_pdn, &pdn->parent->child_list, list) { - if (vf_pdn->busno == vf_bus && - vf_pdn->devfn == vf_devfn) { - vf_pdn->pe_number = pe_num; - break; - } - } - - pnv_pci_ioda2_setup_dma_pe(phb, pe); - } -} - -static int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) -{ - struct pnv_phb *phb; - struct pnv_ioda_pe *pe; - struct pci_dn *pdn; - int ret; - u16 i; - - phb = pci_bus_to_pnvhb(pdev->bus); - pdn = pci_get_pdn(pdev); - - if (phb->type == PNV_PHB_IODA2) { - if (!pdn->vfs_expanded) { - dev_info(&pdev->dev, "don't support this SRIOV device" - " with non 64bit-prefetchable IOV BAR\n"); - return -ENOSPC; - } - - /* - * When M64 BARs functions in Single PE mode, the number of VFs - * could be enabled must be less than the number of M64 BARs. - */ - if (pdn->m64_single_mode && num_vfs > phb->ioda.m64_bar_idx) { - dev_info(&pdev->dev, "Not enough M64 BAR for VFs\n"); - return -EBUSY; - } - - /* Allocating pe_num_map */ - if (pdn->m64_single_mode) - pdn->pe_num_map = kmalloc_array(num_vfs, - sizeof(*pdn->pe_num_map), - GFP_KERNEL); - else - pdn->pe_num_map = kmalloc(sizeof(*pdn->pe_num_map), GFP_KERNEL); - - if (!pdn->pe_num_map) - return -ENOMEM; - - if (pdn->m64_single_mode) - for (i = 0; i < num_vfs; i++) - pdn->pe_num_map[i] = IODA_INVALID_PE; - - /* Calculate available PE for required VFs */ - if (pdn->m64_single_mode) { - for (i = 0; i < num_vfs; i++) { - pe = pnv_ioda_alloc_pe(phb); - if (!pe) { - ret = -EBUSY; - goto m64_failed; - } - - pdn->pe_num_map[i] = pe->pe_number; - } - } else { - mutex_lock(&phb->ioda.pe_alloc_mutex); - *pdn->pe_num_map = bitmap_find_next_zero_area( - phb->ioda.pe_alloc, phb->ioda.total_pe_num, - 0, num_vfs, 0); - if (*pdn->pe_num_map >= phb->ioda.total_pe_num) { - mutex_unlock(&phb->ioda.pe_alloc_mutex); - dev_info(&pdev->dev, "Failed to enable VF%d\n", num_vfs); - kfree(pdn->pe_num_map); - return -EBUSY; - } - bitmap_set(phb->ioda.pe_alloc, *pdn->pe_num_map, num_vfs); - mutex_unlock(&phb->ioda.pe_alloc_mutex); - } - pdn->num_vfs = num_vfs; - - /* Assign M64 window accordingly */ - ret = pnv_pci_vf_assign_m64(pdev, num_vfs); - if (ret) { - dev_info(&pdev->dev, "Not enough M64 window resources\n"); - goto m64_failed; - } - - /* - * When using one M64 BAR to map one IOV BAR, we need to shift - * the IOV BAR according to the PE# allocated to the VFs. - * Otherwise, the PE# for the VF will conflict with others. 
- */ - if (!pdn->m64_single_mode) { - ret = pnv_pci_vf_resource_shift(pdev, *pdn->pe_num_map); - if (ret) - goto m64_failed; - } - } - - /* Setup VF PEs */ - pnv_ioda_setup_vf_PE(pdev, num_vfs); - - return 0; - -m64_failed: - if (pdn->m64_single_mode) { - for (i = 0; i < num_vfs; i++) { - if (pdn->pe_num_map[i] == IODA_INVALID_PE) - continue; - - pe = &phb->ioda.pe_array[pdn->pe_num_map[i]]; - pnv_ioda_free_pe(pe); - } - } else - bitmap_clear(phb->ioda.pe_alloc, *pdn->pe_num_map, num_vfs); - - /* Releasing pe_num_map */ - kfree(pdn->pe_num_map); - - return ret; -} - -static int pnv_pcibios_sriov_disable(struct pci_dev *pdev) -{ - pnv_pci_sriov_disable(pdev); - - /* Release PCI data */ - remove_sriov_vf_pdns(pdev); - return 0; -} - -static int pnv_pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs) -{ - /* Allocate PCI data */ - add_sriov_vf_pdns(pdev); - - return pnv_pci_sriov_enable(pdev, num_vfs); -} -#endif /* CONFIG_PCI_IOV */ - static void pnv_pci_ioda1_setup_dma_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe); -static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, - struct pnv_ioda_pe *pe); - static void pnv_pci_ioda_dma_dev_setup(struct pci_dev *pdev) { struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); @@ -2559,8 +2057,8 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops = { }; #endif -static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, - struct pnv_ioda_pe *pe) +void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, + struct pnv_ioda_pe *pe) { int64_t rc; @@ -2737,117 +2235,6 @@ static void pnv_pci_init_ioda_msis(struct pnv_phb *phb) count, phb->msi_base); } -#ifdef CONFIG_PCI_IOV -static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) -{ - struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); - const resource_size_t gate = phb->ioda.m64_segsize >> 2; - struct resource *res; - int i; - resource_size_t size, total_vf_bar_sz; - struct pci_dn *pdn; - int mul, total_vfs; - - pdn = pci_get_pdn(pdev); - pdn->vfs_expanded = 0; - pdn->m64_single_mode = false; - - total_vfs = pci_sriov_get_totalvfs(pdev); - mul = phb->ioda.total_pe_num; - total_vf_bar_sz = 0; - - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { - res = &pdev->resource[i + PCI_IOV_RESOURCES]; - if (!res->flags || res->parent) - continue; - if (!pnv_pci_is_m64_flags(res->flags)) { - dev_warn(&pdev->dev, "Don't support SR-IOV with" - " non M64 VF BAR%d: %pR. \n", - i, res); - goto truncate_iov; - } - - total_vf_bar_sz += pci_iov_resource_size(pdev, - i + PCI_IOV_RESOURCES); - - /* - * If bigger than quarter of M64 segment size, just round up - * power of two. - * - * Generally, one M64 BAR maps one IOV BAR. To avoid conflict - * with other devices, IOV BAR size is expanded to be - * (total_pe * VF_BAR_size). When VF_BAR_size is half of M64 - * segment size , the expanded size would equal to half of the - * whole M64 space size, which will exhaust the M64 Space and - * limit the system flexibility. This is a design decision to - * set the boundary to quarter of the M64 segment size. - */ - if (total_vf_bar_sz > gate) { - mul = roundup_pow_of_two(total_vfs); - dev_info(&pdev->dev, - "VF BAR Total IOV size %llx > %llx, roundup to %d VFs\n", - total_vf_bar_sz, gate, mul); - pdn->m64_single_mode = true; - break; - } - } - - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { - res = &pdev->resource[i + PCI_IOV_RESOURCES]; - if (!res->flags || res->parent) - continue; - - size = pci_iov_resource_size(pdev, i + PCI_IOV_RESOURCES); - /* - * On PHB3, the minimum size alignment of M64 BAR in single - * mode is 32MB. 
- */ - if (pdn->m64_single_mode && (size < SZ_32M)) - goto truncate_iov; - dev_dbg(&pdev->dev, " Fixing VF BAR%d: %pR to\n", i, res); - res->end = res->start + size * mul - 1; - dev_dbg(&pdev->dev, " %pR\n", res); - dev_info(&pdev->dev, "VF BAR%d: %pR (expanded to %d VFs for PE alignment)", - i, res, mul); - } - pdn->vfs_expanded = mul; - - return; - -truncate_iov: - /* To save MMIO space, IOV BAR is truncated. */ - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { - res = &pdev->resource[i + PCI_IOV_RESOURCES]; - res->flags = 0; - res->end = res->start - 1; - } -} - -static void pnv_pci_ioda_fixup_iov(struct pci_dev *pdev) -{ - if (WARN_ON(pci_dev_is_added(pdev))) - return; - - if (pdev->is_virtfn) { - struct pnv_ioda_pe *pe = pnv_ioda_get_pe(pdev); - - /* - * VF PEs are single-device PEs so their pdev pointer needs to - * be set. The pdev doesn't exist when the PE is allocated (in - * (pcibios_sriov_enable()) so we fix it up here. - */ - pe->pdev = pdev; - WARN_ON(!(pe->flags & PNV_IODA_PE_VF)); - } else if (pdev->is_physfn) { - /* - * For PFs adjust their allocated IOV resources to match what - * the PHB can support using it's M64 BAR table. - */ - pnv_pci_ioda_fixup_iov_resources(pdev); - } -} -#endif /* CONFIG_PCI_IOV */ - static void pnv_ioda_setup_pe_res(struct pnv_ioda_pe *pe, struct resource *res) { @@ -3192,41 +2579,6 @@ static resource_size_t pnv_pci_default_alignment(void) return PAGE_SIZE; } -#ifdef CONFIG_PCI_IOV -static resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev, - int resno) -{ - struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); - struct pci_dn *pdn = pci_get_pdn(pdev); - resource_size_t align; - - /* - * On PowerNV platform, IOV BAR is mapped by M64 BAR to enable the - * SR-IOV. While from hardware perspective, the range mapped by M64 - * BAR should be size aligned. - * - * When IOV BAR is mapped with M64 BAR in Single PE mode, the extra - * powernv-specific hardware restriction is gone. But if just use the - * VF BAR size as the alignment, PF BAR / VF BAR may be allocated with - * in one segment of M64 #15, which introduces the PE conflict between - * PF and VF. Based on this, the minimum alignment of an IOV BAR is - * m64_segsize. - * - * This function returns the total IOV BAR size if M64 BAR is in - * Shared PE mode or just VF BAR size if not. - * If the M64 BAR is in Single PE mode, return the VF BAR size or - * M64 segment size if IOV BAR size is less. - */ - align = pci_iov_resource_size(pdev, resno); - if (!pdn->vfs_expanded) - return align; - if (pdn->m64_single_mode) - return max(align, (resource_size_t)phb->ioda.m64_segsize); - - return pdn->vfs_expanded * align; -} -#endif /* CONFIG_PCI_IOV */ - /* Prevent enabling devices for which we couldn't properly * assign a PE */ @@ -3323,7 +2675,7 @@ static void pnv_pci_ioda1_release_pe_dma(struct pnv_ioda_pe *pe) iommu_tce_table_put(tbl); } -static void pnv_pci_ioda2_release_pe_dma(struct pnv_ioda_pe *pe) +void pnv_pci_ioda2_release_pe_dma(struct pnv_ioda_pe *pe) { struct iommu_table *tbl = pe->table_group.tables[0]; int64_t rc; @@ -3436,12 +2788,23 @@ static void pnv_pci_release_device(struct pci_dev *pdev) struct pci_dn *pdn = pci_get_pdn(pdev); struct pnv_ioda_pe *pe; + /* The VF PE state is torn down when sriov_disable() is called */ if (pdev->is_virtfn) return; if (!pdn || pdn->pe_number == IODA_INVALID_PE) return; +#ifdef CONFIG_PCI_IOV + /* + * FIXME: Try move this to sriov_disable(). 
It's here since we allocate + * the iov state at probe time since we need to fiddle with the IOV + * resources. + */ + if (pdev->is_physfn) + kfree(pdev->dev.archdata.iov_data); +#endif + /* * PCI hotplug can happen as part of EEH error recovery. The @pdn * isn't removed and added afterwards in this scenario. We should diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c new file mode 100644 index 000000000000..080ea39f5a83 --- /dev/null +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -0,0 +1,642 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include + +#include + +#include "pci.h" + +/* for pci_dev_is_added() */ +#include "../../../../drivers/pci/pci.h" + + +static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) +{ + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); + const resource_size_t gate = phb->ioda.m64_segsize >> 2; + struct resource *res; + int i; + resource_size_t size, total_vf_bar_sz; + struct pnv_iov_data *iov; + int mul, total_vfs; + + iov = kzalloc(sizeof(*iov), GFP_KERNEL); + if (!iov) + goto truncate_iov; + pdev->dev.archdata.iov_data = iov; + + total_vfs = pci_sriov_get_totalvfs(pdev); + mul = phb->ioda.total_pe_num; + total_vf_bar_sz = 0; + + for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { + res = &pdev->resource[i + PCI_IOV_RESOURCES]; + if (!res->flags || res->parent) + continue; + if (!pnv_pci_is_m64_flags(res->flags)) { + dev_warn(&pdev->dev, "Don't support SR-IOV with" + " non M64 VF BAR%d: %pR. \n", + i, res); + goto truncate_iov; + } + + total_vf_bar_sz += pci_iov_resource_size(pdev, + i + PCI_IOV_RESOURCES); + + /* + * If bigger than quarter of M64 segment size, just round up + * power of two. + * + * Generally, one M64 BAR maps one IOV BAR. To avoid conflict + * with other devices, IOV BAR size is expanded to be + * (total_pe * VF_BAR_size). When VF_BAR_size is half of M64 + * segment size , the expanded size would equal to half of the + * whole M64 space size, which will exhaust the M64 Space and + * limit the system flexibility. This is a design decision to + * set the boundary to quarter of the M64 segment size. + */ + if (total_vf_bar_sz > gate) { + mul = roundup_pow_of_two(total_vfs); + dev_info(&pdev->dev, + "VF BAR Total IOV size %llx > %llx, roundup to %d VFs\n", + total_vf_bar_sz, gate, mul); + iov->m64_single_mode = true; + break; + } + } + + for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { + res = &pdev->resource[i + PCI_IOV_RESOURCES]; + if (!res->flags || res->parent) + continue; + + size = pci_iov_resource_size(pdev, i + PCI_IOV_RESOURCES); + /* + * On PHB3, the minimum size alignment of M64 BAR in single + * mode is 32MB. + */ + if (iov->m64_single_mode && (size < SZ_32M)) + goto truncate_iov; + dev_dbg(&pdev->dev, " Fixing VF BAR%d: %pR to\n", i, res); + res->end = res->start + size * mul - 1; + dev_dbg(&pdev->dev, " %pR\n", res); + dev_info(&pdev->dev, "VF BAR%d: %pR (expanded to %d VFs for PE alignment)", + i, res, mul); + } + iov->vfs_expanded = mul; + + return; + +truncate_iov: + /* To save MMIO space, IOV BAR is truncated. 
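As a rough illustration of the sizing decision made in pnv_pci_ioda_fixup_iov_resources() above, the stand-alone sketch below mirrors the "gate" check and the resulting expansion factor. It is not the kernel code: the names (pick_iov_expansion, IOV_MODE_*) and the plain user-space types are invented for illustration.

#include <stdint.h>

enum iov_mode { IOV_MODE_SEGMENTED, IOV_MODE_SINGLE_PE };

static unsigned int roundup_pow_of_two_uint(unsigned int x)
{
	unsigned int p = 1;

	while (p < x)
		p <<= 1;
	return p;
}

/*
 * Decide how much each IOV BAR is expanded. Once the total per-VF BAR
 * space crosses a quarter of an M64 segment, the code above falls back
 * to single-PE windows so the expansion doesn't exhaust the 64-bit MMIO
 * window. Returns the mode; *mul is the factor each IOV BAR is grown by.
 */
static enum iov_mode pick_iov_expansion(uint64_t total_vf_bar_sz,
					uint64_t m64_segsize,
					unsigned int total_vfs,
					unsigned int total_pe_num,
					unsigned int *mul)
{
	uint64_t gate = m64_segsize >> 2;	/* quarter of an M64 segment */

	if (total_vf_bar_sz > gate) {
		*mul = roundup_pow_of_two_uint(total_vfs);
		return IOV_MODE_SINGLE_PE;
	}

	*mul = total_pe_num;			/* one segment per PE */
	return IOV_MODE_SEGMENTED;
}

With *mul in hand each IOV BAR is grown to per-VF size * mul, which is what the res->end assignment in the function above implements.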
*/ + for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { + res = &pdev->resource[i + PCI_IOV_RESOURCES]; + res->flags = 0; + res->end = res->start - 1; + } + + pdev->dev.archdata.iov_data = NULL; + kfree(iov); +} + +void pnv_pci_ioda_fixup_iov(struct pci_dev *pdev) +{ + if (WARN_ON(pci_dev_is_added(pdev))) + return; + + if (pdev->is_virtfn) { + struct pnv_ioda_pe *pe = pnv_ioda_get_pe(pdev); + + /* + * VF PEs are single-device PEs so their pdev pointer needs to + * be set. The pdev doesn't exist when the PE is allocated (in + * (pcibios_sriov_enable()) so we fix it up here. + */ + pe->pdev = pdev; + WARN_ON(!(pe->flags & PNV_IODA_PE_VF)); + } else if (pdev->is_physfn) { + /* + * For PFs adjust their allocated IOV resources to match what + * the PHB can support using it's M64 BAR table. + */ + pnv_pci_ioda_fixup_iov_resources(pdev); + } +} + +resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev, + int resno) +{ + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); + struct pnv_iov_data *iov = pnv_iov_get(pdev); + resource_size_t align; + + /* + * On PowerNV platform, IOV BAR is mapped by M64 BAR to enable the + * SR-IOV. While from hardware perspective, the range mapped by M64 + * BAR should be size aligned. + * + * When IOV BAR is mapped with M64 BAR in Single PE mode, the extra + * powernv-specific hardware restriction is gone. But if just use the + * VF BAR size as the alignment, PF BAR / VF BAR may be allocated with + * in one segment of M64 #15, which introduces the PE conflict between + * PF and VF. Based on this, the minimum alignment of an IOV BAR is + * m64_segsize. + * + * This function returns the total IOV BAR size if M64 BAR is in + * Shared PE mode or just VF BAR size if not. + * If the M64 BAR is in Single PE mode, return the VF BAR size or + * M64 segment size if IOV BAR size is less. + */ + align = pci_iov_resource_size(pdev, resno); + + /* + * iov can be null if we have an SR-IOV device with IOV BAR that can't + * be placed in the m64 space (i.e. The BAR is 32bit or non-prefetch). + * In that case we don't allow VFs to be enabled so just return the + * default alignment. 
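To put numbers on the alignment rules just described (the figures are invented for illustration): with an 8MB per-VF BAR, a 256MB M64 segment size and a 512-PE PHB, shared mode reports 512 * 8MB = 4GB as the alignment, single-PE mode reports max(8MB, 256MB) = 256MB, and a device whose IOV BARs could not be placed keeps the default 8MB. A minimal sketch of that decision, again with invented names and plain C types rather than the kernel's:

#include <stdint.h>

/* Alignment reported for one IOV BAR (illustrative only). */
static uint64_t iov_bar_alignment(uint64_t per_vf_size, uint64_t m64_segsize,
				  unsigned int vfs_expanded, int single_mode)
{
	if (!vfs_expanded)	/* VFs can't be enabled: default alignment */
		return per_vf_size;
	if (single_mode)	/* at least one whole M64 segment */
		return per_vf_size > m64_segsize ? per_vf_size : m64_segsize;
	return (uint64_t)vfs_expanded * per_vf_size;	/* shared mode */
}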
+ */ + if (!iov) + return align; + if (!iov->vfs_expanded) + return align; + if (iov->m64_single_mode) + return max(align, (resource_size_t)phb->ioda.m64_segsize); + + return iov->vfs_expanded * align; +} + +static int pnv_pci_vf_release_m64(struct pci_dev *pdev, u16 num_vfs) +{ + struct pnv_iov_data *iov; + struct pnv_phb *phb; + int i, j; + int m64_bars; + + phb = pci_bus_to_pnvhb(pdev->bus); + iov = pnv_iov_get(pdev); + + if (iov->m64_single_mode) + m64_bars = num_vfs; + else + m64_bars = 1; + + for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) + for (j = 0; j < m64_bars; j++) { + if (iov->m64_map[j][i] == IODA_INVALID_M64) + continue; + opal_pci_phb_mmio_enable(phb->opal_id, + OPAL_M64_WINDOW_TYPE, iov->m64_map[j][i], 0); + clear_bit(iov->m64_map[j][i], &phb->ioda.m64_bar_alloc); + iov->m64_map[j][i] = IODA_INVALID_M64; + } + + kfree(iov->m64_map); + return 0; +} + +static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) +{ + struct pnv_iov_data *iov; + struct pnv_phb *phb; + unsigned int win; + struct resource *res; + int i, j; + int64_t rc; + int total_vfs; + resource_size_t size, start; + int pe_num; + int m64_bars; + + phb = pci_bus_to_pnvhb(pdev->bus); + iov = pnv_iov_get(pdev); + total_vfs = pci_sriov_get_totalvfs(pdev); + + if (iov->m64_single_mode) + m64_bars = num_vfs; + else + m64_bars = 1; + + iov->m64_map = kmalloc_array(m64_bars, + sizeof(*iov->m64_map), + GFP_KERNEL); + if (!iov->m64_map) + return -ENOMEM; + /* Initialize the m64_map to IODA_INVALID_M64 */ + for (i = 0; i < m64_bars ; i++) + for (j = 0; j < PCI_SRIOV_NUM_BARS; j++) + iov->m64_map[i][j] = IODA_INVALID_M64; + + + for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { + res = &pdev->resource[i + PCI_IOV_RESOURCES]; + if (!res->flags || !res->parent) + continue; + + for (j = 0; j < m64_bars; j++) { + do { + win = find_next_zero_bit(&phb->ioda.m64_bar_alloc, + phb->ioda.m64_bar_idx + 1, 0); + + if (win >= phb->ioda.m64_bar_idx + 1) + goto m64_failed; + } while (test_and_set_bit(win, &phb->ioda.m64_bar_alloc)); + + iov->m64_map[j][i] = win; + + if (iov->m64_single_mode) { + size = pci_iov_resource_size(pdev, + PCI_IOV_RESOURCES + i); + start = res->start + size * j; + } else { + size = resource_size(res); + start = res->start; + } + + /* Map the M64 here */ + if (iov->m64_single_mode) { + pe_num = iov->pe_num_map[j]; + rc = opal_pci_map_pe_mmio_window(phb->opal_id, + pe_num, OPAL_M64_WINDOW_TYPE, + iov->m64_map[j][i], 0); + } + + rc = opal_pci_set_phb_mem_window(phb->opal_id, + OPAL_M64_WINDOW_TYPE, + iov->m64_map[j][i], + start, + 0, /* unused */ + size); + + + if (rc != OPAL_SUCCESS) { + dev_err(&pdev->dev, "Failed to map M64 window #%d: %lld\n", + win, rc); + goto m64_failed; + } + + if (iov->m64_single_mode) + rc = opal_pci_phb_mmio_enable(phb->opal_id, + OPAL_M64_WINDOW_TYPE, iov->m64_map[j][i], 2); + else + rc = opal_pci_phb_mmio_enable(phb->opal_id, + OPAL_M64_WINDOW_TYPE, iov->m64_map[j][i], 1); + + if (rc != OPAL_SUCCESS) { + dev_err(&pdev->dev, "Failed to enable M64 window #%d: %llx\n", + win, rc); + goto m64_failed; + } + } + } + return 0; + +m64_failed: + pnv_pci_vf_release_m64(pdev, num_vfs); + return -EBUSY; +} + +static void pnv_ioda_release_vf_PE(struct pci_dev *pdev) +{ + struct pnv_phb *phb; + struct pnv_ioda_pe *pe, *pe_n; + + phb = pci_bus_to_pnvhb(pdev->bus); + + if (!pdev->is_physfn) + return; + + /* FIXME: Use pnv_ioda_release_pe()? 
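The window allocation loop in pnv_pci_vf_assign_m64() above uses a common claim-with-retry pattern: scan for a clear bit in the allocation mask, try to set it atomically, and retry if another caller won the race. A rough user-space analogue is sketched below; it uses C11 atomics instead of the kernel's find_next_zero_bit()/test_and_set_bit(), and the window count is arbitrary (the PHB's real limit is m64_bar_idx + 1).

#include <stdatomic.h>
#include <stdint.h>

#define NUM_WINDOWS 16	/* arbitrary for this sketch */

/* Returns a claimed window index, or -1 if every window is in use. */
static int claim_window(_Atomic uint64_t *alloc_mask)
{
	for (;;) {
		uint64_t snap = atomic_load(alloc_mask);
		int win;

		/* look for a window that appeared free in the snapshot */
		for (win = 0; win < NUM_WINDOWS; win++)
			if (!(snap & (UINT64_C(1) << win)))
				break;
		if (win == NUM_WINDOWS)
			return -1;

		/* claim it; if the bit was already set we lost a race, retry */
		if (!(atomic_fetch_or(alloc_mask, UINT64_C(1) << win) &
		      (UINT64_C(1) << win)))
			return win;
	}
}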
*/ + list_for_each_entry_safe(pe, pe_n, &phb->ioda.pe_list, list) { + if (pe->parent_dev != pdev) + continue; + + pnv_pci_ioda2_release_pe_dma(pe); + + /* Remove from list */ + mutex_lock(&phb->ioda.pe_list_mutex); + list_del(&pe->list); + mutex_unlock(&phb->ioda.pe_list_mutex); + + pnv_ioda_deconfigure_pe(phb, pe); + + pnv_ioda_free_pe(pe); + } +} + +static int pnv_pci_vf_resource_shift(struct pci_dev *dev, int offset) +{ + struct resource *res, res2; + struct pnv_iov_data *iov; + resource_size_t size; + u16 num_vfs; + int i; + + if (!dev->is_physfn) + return -EINVAL; + iov = pnv_iov_get(dev); + + /* + * "offset" is in VFs. The M64 windows are sized so that when they + * are segmented, each segment is the same size as the IOV BAR. + * Each segment is in a separate PE, and the high order bits of the + * address are the PE number. Therefore, each VF's BAR is in a + * separate PE, and changing the IOV BAR start address changes the + * range of PEs the VFs are in. + */ + num_vfs = iov->num_vfs; + for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { + res = &dev->resource[i + PCI_IOV_RESOURCES]; + if (!res->flags || !res->parent) + continue; + + /* + * The actual IOV BAR range is determined by the start address + * and the actual size for num_vfs VFs BAR. This check is to + * make sure that after shifting, the range will not overlap + * with another device. + */ + size = pci_iov_resource_size(dev, i + PCI_IOV_RESOURCES); + res2.flags = res->flags; + res2.start = res->start + (size * offset); + res2.end = res2.start + (size * num_vfs) - 1; + + if (res2.end > res->end) { + dev_err(&dev->dev, "VF BAR%d: %pR would extend past %pR (trying to enable %d VFs shifted by %d)\n", + i, &res2, res, num_vfs, offset); + return -EBUSY; + } + } + + /* + * Since M64 BAR shares segments among all possible 256 PEs, + * we have to shift the beginning of PF IOV BAR to make it start from + * the segment which belongs to the PE number assigned to the first VF. + * This creates a "hole" in the /proc/iomem which could be used for + * allocating other resources so we reserve this area below and + * release when IOV is released. + */ + for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { + res = &dev->resource[i + PCI_IOV_RESOURCES]; + if (!res->flags || !res->parent) + continue; + + size = pci_iov_resource_size(dev, i + PCI_IOV_RESOURCES); + res2 = *res; + res->start += size * offset; + + dev_info(&dev->dev, "VF BAR%d: %pR shifted to %pR (%sabling %d VFs shifted by %d)\n", + i, &res2, res, (offset > 0) ? 
"En" : "Dis", + num_vfs, offset); + + if (offset < 0) { + devm_release_resource(&dev->dev, &iov->holes[i]); + memset(&iov->holes[i], 0, sizeof(iov->holes[i])); + } + + pci_update_resource(dev, i + PCI_IOV_RESOURCES); + + if (offset > 0) { + iov->holes[i].start = res2.start; + iov->holes[i].end = res2.start + size * offset - 1; + iov->holes[i].flags = IORESOURCE_BUS; + iov->holes[i].name = "pnv_iov_reserved"; + devm_request_resource(&dev->dev, res->parent, + &iov->holes[i]); + } + } + return 0; +} + +static void pnv_pci_sriov_disable(struct pci_dev *pdev) +{ + struct pnv_phb *phb; + struct pnv_ioda_pe *pe; + struct pnv_iov_data *iov; + u16 num_vfs, i; + + phb = pci_bus_to_pnvhb(pdev->bus); + iov = pnv_iov_get(pdev); + num_vfs = iov->num_vfs; + + /* Release VF PEs */ + pnv_ioda_release_vf_PE(pdev); + + if (phb->type == PNV_PHB_IODA2) { + if (!iov->m64_single_mode) + pnv_pci_vf_resource_shift(pdev, -*iov->pe_num_map); + + /* Release M64 windows */ + pnv_pci_vf_release_m64(pdev, num_vfs); + + /* Release PE numbers */ + if (iov->m64_single_mode) { + for (i = 0; i < num_vfs; i++) { + if (iov->pe_num_map[i] == IODA_INVALID_PE) + continue; + + pe = &phb->ioda.pe_array[iov->pe_num_map[i]]; + pnv_ioda_free_pe(pe); + } + } else + bitmap_clear(phb->ioda.pe_alloc, *iov->pe_num_map, num_vfs); + /* Releasing pe_num_map */ + kfree(iov->pe_num_map); + } +} + +static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs) +{ + struct pnv_phb *phb; + struct pnv_ioda_pe *pe; + int pe_num; + u16 vf_index; + struct pnv_iov_data *iov; + struct pci_dn *pdn; + + if (!pdev->is_physfn) + return; + + phb = pci_bus_to_pnvhb(pdev->bus); + pdn = pci_get_pdn(pdev); + iov = pnv_iov_get(pdev); + + /* Reserve PE for each VF */ + for (vf_index = 0; vf_index < num_vfs; vf_index++) { + int vf_devfn = pci_iov_virtfn_devfn(pdev, vf_index); + int vf_bus = pci_iov_virtfn_bus(pdev, vf_index); + struct pci_dn *vf_pdn; + + if (iov->m64_single_mode) + pe_num = iov->pe_num_map[vf_index]; + else + pe_num = *iov->pe_num_map + vf_index; + + pe = &phb->ioda.pe_array[pe_num]; + pe->pe_number = pe_num; + pe->phb = phb; + pe->flags = PNV_IODA_PE_VF; + pe->pbus = NULL; + pe->parent_dev = pdev; + pe->mve_number = -1; + pe->rid = (vf_bus << 8) | vf_devfn; + + pe_info(pe, "VF %04d:%02d:%02d.%d associated with PE#%x\n", + pci_domain_nr(pdev->bus), pdev->bus->number, + PCI_SLOT(vf_devfn), PCI_FUNC(vf_devfn), pe_num); + + if (pnv_ioda_configure_pe(phb, pe)) { + /* XXX What do we do here ? */ + pnv_ioda_free_pe(pe); + pe->pdev = NULL; + continue; + } + + /* Put PE to the list */ + mutex_lock(&phb->ioda.pe_list_mutex); + list_add_tail(&pe->list, &phb->ioda.pe_list); + mutex_unlock(&phb->ioda.pe_list_mutex); + + /* associate this pe to it's pdn */ + list_for_each_entry(vf_pdn, &pdn->parent->child_list, list) { + if (vf_pdn->busno == vf_bus && + vf_pdn->devfn == vf_devfn) { + vf_pdn->pe_number = pe_num; + break; + } + } + + pnv_pci_ioda2_setup_dma_pe(phb, pe); + } +} + +static int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) +{ + struct pnv_iov_data *iov; + struct pnv_phb *phb; + struct pnv_ioda_pe *pe; + int ret; + u16 i; + + phb = pci_bus_to_pnvhb(pdev->bus); + iov = pnv_iov_get(pdev); + + if (phb->type == PNV_PHB_IODA2) { + if (!iov->vfs_expanded) { + dev_info(&pdev->dev, "don't support this SRIOV device" + " with non 64bit-prefetchable IOV BAR\n"); + return -ENOSPC; + } + + /* + * When M64 BARs functions in Single PE mode, the number of VFs + * could be enabled must be less than the number of M64 BARs. 
+ */ + if (iov->m64_single_mode && num_vfs > phb->ioda.m64_bar_idx) { + dev_info(&pdev->dev, "Not enough M64 BAR for VFs\n"); + return -EBUSY; + } + + /* Allocating pe_num_map */ + if (iov->m64_single_mode) + iov->pe_num_map = kmalloc_array(num_vfs, + sizeof(*iov->pe_num_map), + GFP_KERNEL); + else + iov->pe_num_map = kmalloc(sizeof(*iov->pe_num_map), GFP_KERNEL); + + if (!iov->pe_num_map) + return -ENOMEM; + + if (iov->m64_single_mode) + for (i = 0; i < num_vfs; i++) + iov->pe_num_map[i] = IODA_INVALID_PE; + + /* Calculate available PE for required VFs */ + if (iov->m64_single_mode) { + for (i = 0; i < num_vfs; i++) { + pe = pnv_ioda_alloc_pe(phb); + if (!pe) { + ret = -EBUSY; + goto m64_failed; + } + + iov->pe_num_map[i] = pe->pe_number; + } + } else { + mutex_lock(&phb->ioda.pe_alloc_mutex); + *iov->pe_num_map = bitmap_find_next_zero_area( + phb->ioda.pe_alloc, phb->ioda.total_pe_num, + 0, num_vfs, 0); + if (*iov->pe_num_map >= phb->ioda.total_pe_num) { + mutex_unlock(&phb->ioda.pe_alloc_mutex); + dev_info(&pdev->dev, "Failed to enable VF%d\n", num_vfs); + kfree(iov->pe_num_map); + return -EBUSY; + } + bitmap_set(phb->ioda.pe_alloc, *iov->pe_num_map, num_vfs); + mutex_unlock(&phb->ioda.pe_alloc_mutex); + } + iov->num_vfs = num_vfs; + + /* Assign M64 window accordingly */ + ret = pnv_pci_vf_assign_m64(pdev, num_vfs); + if (ret) { + dev_info(&pdev->dev, "Not enough M64 window resources\n"); + goto m64_failed; + } + + /* + * When using one M64 BAR to map one IOV BAR, we need to shift + * the IOV BAR according to the PE# allocated to the VFs. + * Otherwise, the PE# for the VF will conflict with others. + */ + if (!iov->m64_single_mode) { + ret = pnv_pci_vf_resource_shift(pdev, *iov->pe_num_map); + if (ret) + goto m64_failed; + } + } + + /* Setup VF PEs */ + pnv_ioda_setup_vf_PE(pdev, num_vfs); + + return 0; + +m64_failed: + if (iov->m64_single_mode) { + for (i = 0; i < num_vfs; i++) { + if (iov->pe_num_map[i] == IODA_INVALID_PE) + continue; + + pe = &phb->ioda.pe_array[iov->pe_num_map[i]]; + pnv_ioda_free_pe(pe); + } + } else + bitmap_clear(phb->ioda.pe_alloc, *iov->pe_num_map, num_vfs); + + /* Releasing pe_num_map */ + kfree(iov->pe_num_map); + + return ret; +} + +int pnv_pcibios_sriov_disable(struct pci_dev *pdev) +{ + pnv_pci_sriov_disable(pdev); + + /* Release PCI data */ + remove_sriov_vf_pdns(pdev); + return 0; +} + +int pnv_pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs) +{ + /* Allocate PCI data */ + add_sriov_vf_pdns(pdev); + + return pnv_pci_sriov_enable(pdev, num_vfs); +} + diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h index 6aa6aefb637d..0156d7d17f7d 100644 --- a/arch/powerpc/platforms/powernv/pci.h +++ b/arch/powerpc/platforms/powernv/pci.h @@ -194,6 +194,80 @@ struct pnv_phb { u8 *diag_data; }; + +/* IODA PE management */ + +static inline bool pnv_pci_is_m64(struct pnv_phb *phb, struct resource *r) +{ + /* + * WARNING: We cannot rely on the resource flags. The Linux PCI + * allocation code sometimes decides to put a 64-bit prefetchable + * BAR in the 32-bit window, so we have to compare the addresses. + * + * For simplicity we only test resource start. 
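To make the BAR shift performed by pnv_pci_vf_resource_shift() above more concrete (the figures are invented for illustration): with a 16MB per-VF BAR and a contiguous PE range starting at PE#5, the PF's IOV BAR start is moved up by 5 * 16MB = 80MB, so VF0's BAR lands in the M64 segment owned by PE#5, VF1's in the one owned by PE#6, and so on. The 80MB skipped at the front of the original window is what gets reserved as the "pnv_iov_reserved" hole so nothing else is allocated there. The arithmetic is simply:

	new_start = old_start + (per-VF size * offset)

where offset is the first allocated PE number, as passed in from pnv_pci_sriov_enable().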
+ */ + return (r->start >= phb->ioda.m64_base && + r->start < (phb->ioda.m64_base + phb->ioda.m64_size)); +} + +static inline bool pnv_pci_is_m64_flags(unsigned long resource_flags) +{ + unsigned long flags = (IORESOURCE_MEM_64 | IORESOURCE_PREFETCH); + + return (resource_flags & flags) == flags; +} + +int pnv_ioda_configure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe); +int pnv_ioda_deconfigure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe); + +void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe); +void pnv_pci_ioda2_release_pe_dma(struct pnv_ioda_pe *pe); + +struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb); +void pnv_ioda_free_pe(struct pnv_ioda_pe *pe); + +#ifdef CONFIG_PCI_IOV +/* + * For SR-IOV we want to put each VF's MMIO resource in to a separate PE. + * This requires a bit of acrobatics with the MMIO -> PE configuration + * and this structure is used to keep track of it all. + */ +struct pnv_iov_data { + /* number of VFs IOV BAR expanded. FIXME: rename this to something less bad */ + u16 vfs_expanded; + + /* number of VFs enabled */ + u16 num_vfs; + unsigned int *pe_num_map; /* PE# for the first VF PE or array */ + + /* Did we map the VF BARs with single-PE IODA BARs? */ + bool m64_single_mode; + + int (*m64_map)[PCI_SRIOV_NUM_BARS]; +#define IODA_INVALID_M64 (-1) + + /* + * If we map the SR-IOV BARs with a segmented window then + * parts of that window will be "claimed" by other PEs. + * + * "holes" here is used to reserve the leading portion + * of the window that is used by other (non VF) PEs. + */ + struct resource holes[PCI_SRIOV_NUM_BARS]; +}; + +static inline struct pnv_iov_data *pnv_iov_get(struct pci_dev *pdev) +{ + return pdev->dev.archdata.iov_data; +} + +void pnv_pci_ioda_fixup_iov(struct pci_dev *pdev); +resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev, int resno); + +int pnv_pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs); +int pnv_pcibios_sriov_disable(struct pci_dev *pdev); +#endif /* CONFIG_PCI_IOV */ + extern struct pci_ops pnv_pci_ops; void pnv_pci_dump_phb_diag_data(struct pci_controller *hose, From patchwork Wed Jul 22 06:57:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333603 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRS76dGTz9sSy for ; Wed, 22 Jul 2020 17:10:15 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=dhHwLv4c; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BBRS63mJgzDqq5 for ; Wed, 22 Jul 2020 17:10:14 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::1041; helo=mail-pj1-x1041.google.com; envelope-from=oohall@gmail.com; receiver=) 
Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=dhHwLv4c; dkim-atps=neutral Received: from mail-pj1-x1041.google.com (mail-pj1-x1041.google.com [IPv6:2607:f8b0:4864:20::1041]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BBR9f4LfHzDqvR for ; Wed, 22 Jul 2020 16:57:42 +1000 (AEST) Received: by mail-pj1-x1041.google.com with SMTP id 8so669170pjj.1 for ; Tue, 21 Jul 2020 23:57:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=6RKYFxjNHum5BSaKU7MvqOLOxO9CB56bY6rOrYSQUc0=; b=dhHwLv4cIpePwMxw7A9xSSx1Fdn32SrLeS/NsMfNWKBaKk3C/hj31rrj8SRULM9iek vc+rkroU0gCcgHVINsIgwbtaWfoyIDBbrtia970xjfEpilkDJrfXO5dlu0vf6mKqm1Oz K1pxf2Ivq4MwXpqJtVr6XFbxAU8skhUaHhNYRy2TRKJUZnYpuLW5ZitqOK2m8IppXFs/ CGju/JhhHCal3CwN0SPCMkVsPznKD/Lw3Cf6om5dUq4zedhYVBMFA1vbNOgNJdg5zfFs /sUbA752Z5pPrk7kBGof8Kk/XDzQXnjOldr06NLRPKOV9OfoOARQbSLI9QBpeXAp8rfJ 58/g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=6RKYFxjNHum5BSaKU7MvqOLOxO9CB56bY6rOrYSQUc0=; b=fXrqQENY+ISbbQsmvPr5lP6N0UtsfLRTNBb0PTri77Z7nDkR3fS31Tym9k/AZyhbW8 GqYmmDASTIuP9346U5Zjdcx+PE2i2biNjT3xUOIn0Lzpo+ZQNizh81YI+SQwfjCqIT/B v9wkmyv1L/me60LnEd5HbsJ2kGQE9pE5SmVMSG9hKWcYa8JeoghSYAnS+8Y4Dtw9o4YD YmGMrap67cZuc3bcj8dbcrs2c2vqlK7JhciOwIy88mh10COgz2ONUHIIH9+YURr7jkFl dSyF3RV0kQw/1LPwOULyWukUZkbyDORAI154nwgHD1UgmQbyScVgCbGAOXWdpfIP/j9M kzjA== X-Gm-Message-State: AOAM531gxrZCFm/84v1TAK0DLJzvCrDkfsvE7u4mGs1DXmsbNek2BLIG YswIdk1rFyfUeHziRFhGxildIxt0Il8= X-Google-Smtp-Source: ABdhPJzMgeCVvAIF6vOwN8m/ZSjX49TjHVClnvkug3tVSXuUs74YesddQT/hTssUkqsUm5hJ41axnA== X-Received: by 2002:a17:90a:2683:: with SMTP id m3mr8963212pje.8.1595401059321; Tue, 21 Jul 2020 23:57:39 -0700 (PDT) Received: from localhost.ibm.com (203-219-159-24.tpgi.com.au. [203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:38 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 06/16] powerpc/powernv/sriov: Explain how SR-IOV works on PowerNV Date: Wed, 22 Jul 2020 16:57:05 +1000 Message-Id: <20200722065715.1432738-6-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Alexey Kardashevskiy , Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" SR-IOV support on PowerNV is a byzantine maze of hooks. I have no idea how anyone is supposed to know how it works except through a lot of stuffering. 
Write up some docs about the overall story to help out the next
sucker^Wperson who needs to tinker with it.

Signed-off-by: Oliver O'Halloran
Reviewed-by: Alexey Kardashevskiy
---
v2: no changes
---
 arch/powerpc/platforms/powernv/pci-sriov.c | 130 +++++++++++++++++++++
 1 file changed, 130 insertions(+)

diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c
index 080ea39f5a83..f4c74ab1284d 100644
--- a/arch/powerpc/platforms/powernv/pci-sriov.c
+++ b/arch/powerpc/platforms/powernv/pci-sriov.c
@@ -12,6 +12,136 @@
 /* for pci_dev_is_added() */
 #include "../../../../drivers/pci/pci.h"

+/*
+ * The majority of the complexity in supporting SR-IOV on PowerNV comes from
+ * the need to put the MMIO space for each VF into a separate PE. Internally
+ * the PHB maps MMIO addresses to a specific PE using the "Memory BAR Table".
+ * The MBT historically only applied to the 64bit MMIO window of the PHB
+ * so it's common to see it referred to as the "M64BT".
+ *
+ * An MBT entry stores the mapped range as a <base>,<mask> pair. This forces
+ * the address range that we want to map to be power-of-two sized and aligned.
+ * For conventional PCI devices this isn't really an issue since PCI device BARs
+ * have the same requirement.
+ *
+ * For an SR-IOV BAR things are a little more awkward since size and alignment
+ * are not coupled. The alignment is set based on the per-VF BAR size, but
+ * the total BAR area is: number-of-vfs * per-vf-size. The number of VFs
+ * isn't necessarily a power of two, so neither is the total size. To fix that
+ * we need to finesse (read: hack) the Linux BAR allocator so that it will
+ * allocate the SR-IOV BARs in a way that lets us map them using the MBT.
+ *
+ * The changes to size and alignment that we need to make depend on the "mode"
+ * of MBT entry that we use. We only support SR-IOV on PHB3 (IODA2) and above,
+ * so as a baseline we can assume that we have the following BAR modes
+ * available:
+ *
+ * NB: $PE_COUNT is the number of PEs that the PHB supports.
+ *
+ * a) A segmented BAR that splits the mapped range into $PE_COUNT equally sized
+ *    segments. The n'th segment is mapped to the n'th PE.
+ * b) An un-segmented BAR that maps the whole address range to a specific PE.
+ *
+ * We prefer to use mode a) since it only requires one MBT entry per SR-IOV BAR.
+ * For comparison, b) requires one entry per-VF per-BAR, or
+ * (num-vfs * num-sriov-bars) in total. To use a) we need the size of each
+ * segment to equal the size of the per-VF BAR area. So:
+ *
+ *	new_size = per-vf-size * number-of-PEs
+ *
+ * The alignment for the SR-IOV BAR also needs to be changed from per-vf-size
+ * to "new_size", calculated above. Implementing this is a convoluted process
+ * which requires several hooks in the PCI core:
+ *
+ * 1. In pcibios_add_device() we call pnv_pci_ioda_fixup_iov().
+ *
+ *    At this point the device has been probed and the device's BARs are sized,
+ *    but no resource allocations have been done. The SR-IOV BARs are sized
+ *    based on the maximum number of VFs supported by the device and we need
+ *    to increase that to new_size.
+ *
+ * 2. Later, when Linux actually assigns resources it tries to make the resource
+ *    allocations for each PCI bus as compact as possible. As a part of that it
+ *    sorts the BARs on a bus by their required alignment, which is calculated
+ *    using pci_resource_alignment().
+ *
+ *    For IOV resources this goes:
+ *	pci_resource_alignment()
+ *	    pci_sriov_resource_alignment()
+ *		pcibios_sriov_resource_alignment()
+ *		    pnv_pci_iov_resource_alignment()
+ *
+ *    Our hook overrides the default alignment, equal to the per-vf-size, with
+ *    new_size computed above.
+ *
+ * 3. When userspace enables VFs for a device:
+ *
+ *	sriov_enable()
+ *	    pcibios_sriov_enable()
+ *		pnv_pcibios_sriov_enable()
+ *
+ *    This is where we actually allocate PE numbers for each VF and set up the
+ *    MBT mapping for each SR-IOV BAR. In steps 1) and 2) we set up an "arena"
+ *    where each MBT segment is equal in size to the VF BAR so we can shift
+ *    around the actual SR-IOV BAR location within this arena. We need this
+ *    ability because the PE space is shared by all devices on the same PHB.
+ *    When using mode a) described above, segment 0 maps to PE#0, which might
+ *    already be in use by another device on the PHB.
+ *
+ *    As a result we need to allocate a contiguous range of PE numbers, then
+ *    shift the address programmed into the SR-IOV BAR of the PF so that the
+ *    address of VF0 matches up with the segment corresponding to the first
+ *    allocated PE number. This is handled in pnv_pci_vf_resource_shift().
+ *
+ *    Once all that is done we return to the PCI core which then enables VFs,
+ *    scans them and creates pci_devs for each. The init process for a VF is
+ *    largely the same as a normal device, but the VF is inserted into the IODA
+ *    PE that we allocated for it rather than the PE associated with the bus.
+ *
+ * 4. When userspace disables VFs we unwind the above in
+ *    pnv_pcibios_sriov_disable(). Fortunately this is relatively simple since
+ *    we don't need to validate anything, just tear down the mappings and
+ *    move the SR-IOV resource back to its "proper" location.
+ *
+ * That's how mode a) works. In theory mode b) (single PE mapping) is less work
+ * since we can map each individual VF with a separate BAR. However, there are
+ * a few limitations:
+ *
+ * 1) For IODA2 mode b) has a minimum alignment requirement of 32MB. This makes
+ *    it only usable for devices with very large per-VF BARs. Such devices are
+ *    similar to Big Foot. They definitely exist, but I've never seen one.
+ *
+ * 2) The number of MBT entries that we have is limited. PHB3 and PHB4 only
+ *    have 16 in total and some are needed for other uses. Most SR-IOV capable
+ *    network cards can support more than 16 VFs on each port.
+ *
+ * We use b) when using a) would use more than 1/4 of the entire 64 bit MMIO
+ * window of the PHB.
+ *
+ * PHB4 (IODA3) added a few new features that would be useful for SR-IOV. It
+ * allowed the MBT to map 32bit MMIO space in addition to 64bit which allows
+ * us to support SR-IOV BARs in the 32bit MMIO window. This is useful since
+ * the Linux BAR allocation will place any BAR marked as non-prefetchable into
+ * the non-prefetchable bridge window, which is 32bit only. It also added two
+ * new modes:
+ *
+ * c) A segmented BAR similar to a), but each segment can be individually
+ *    mapped to any PE. This matches how the 32bit MMIO window worked on
+ *    IODA1&2.
+ *
+ * d) A segmented BAR with 8, 64, or 128 segments. This works similarly to a),
+ *    but with fewer segments and a configurable base PE, i.e. the n'th
+ *    segment maps to the (n + base)'th PE. The base PE is also required to
+ *    be a multiple of the window size.
+ *
+ * Unfortunately, the OPAL API doesn't currently (as of skiboot v6.6) allow us
+ * to exploit any of the IODA3 features.
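+ *
+ * As a rough worked example (the numbers here are purely illustrative):
+ * a device exposing 13 VFs with a 1MB per-VF BAR on a PHB with 512 PEs
+ * needs new_size = 1MB * 512 = 512MB and a single mode a) MBT entry,
+ * while mapping the same device with mode b) would consume 13 MBT
+ * entries for that one SR-IOV BAR alone.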
+ */ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) { From patchwork Wed Jul 22 06:57:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333610 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRVV042Sz9sSn for ; Wed, 22 Jul 2020 17:12:18 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=KgpWj2xr; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BBRVT4yDzzDqs1 for ; Wed, 22 Jul 2020 17:12:17 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::643; helo=mail-pl1-x643.google.com; envelope-from=oohall@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=KgpWj2xr; dkim-atps=neutral Received: from mail-pl1-x643.google.com (mail-pl1-x643.google.com [IPv6:2607:f8b0:4864:20::643]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BBR9h3dH7zDqvr for ; Wed, 22 Jul 2020 16:57:44 +1000 (AEST) Received: by mail-pl1-x643.google.com with SMTP id d7so470654plq.13 for ; Tue, 21 Jul 2020 23:57:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=bJw0iqXmNmr9DtxXpaTA8LhlkH6t9S6k9Omt08nMv0g=; b=KgpWj2xrE3gjkYPYhwTtTh7bxk5HT1nZd30g9BfX7BJOLXq+0VOEPQCmTFhWvHnAr4 1U4jx/1DeXPNd2Y8FqqT/Me/3DhhMP21RBtZvw9mntNG4S4ewyODqlyo4eZXyjvM9CAq nFAlXFHVfXwsWMLTzwvgH2AF5CGb9w1dpcKXezhEFR+NVQ/VtZS4mUIhPSH/Vh3C5s0n k4+d9TisAWToksUPYh6ERtzURDItdT4a2GdyQAU6RlX75qNJFuTKC/skR72U8zRtDfYW j4K+L15pC68poQUsqb6RnJuDp8DDnQWojNYVQf9tzR0QQnFxqI90Hk98yfpzu2WPaWvG +4Zw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=bJw0iqXmNmr9DtxXpaTA8LhlkH6t9S6k9Omt08nMv0g=; b=nz0pRr7lCdFhXFwaSXe9wzkTdsK6UBUI9EMizdBCemOuRbLXbpC4AhM9X5SD/uw1go j05dPbhM22OA2LmzbpGiooJBn5GcFU5jdXnm1HzTkI6uV1XVkF/WJnuSXw3U+rN9VueF DCiLN3CloghpOVfPq98/tVDD31ALddEZnvIgSonlJtv1Nhf8ap7zXgQLzYC6Q9xVlqIV QkhZx6Ar0pycDYkQqdKqJ74WRtLKU5DexZh6Xr4mlHfV+ExpZXJTCv+zgCa9nMMM939U KB8oeQB/oVS4PrLJybMkSkR8AwFlKLEdXIamnIVd0rHfKZtt4AGA/K63uEoy+MkiiQay 6y8Q== X-Gm-Message-State: AOAM532xwMuSgyETc4KX5YyE5QX8p0oh8923HAxT0WkuV0ur2HXkLSYK 
jL6MF1o+kapWkS+xzvlAAoww1Ou3k+E= X-Google-Smtp-Source: ABdhPJyPQTNk9jsCGbt3+21sy4ApDUkGaCXCXRwsyo3nBT/kDnqGfjwTEnSUDYQI2fIbXJzILBwSuw== X-Received: by 2002:a17:90a:62c1:: with SMTP id k1mr7939400pjs.168.1595401061864; Tue, 21 Jul 2020 23:57:41 -0700 (PDT) Received: from localhost.ibm.com (203-219-159-24.tpgi.com.au. [203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:41 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 07/16] powerpc/powernv/sriov: Rename truncate_iov Date: Wed, 22 Jul 2020 16:57:06 +1000 Message-Id: <20200722065715.1432738-7-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Alexey Kardashevskiy , Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" This prevents SR-IOV being used by making the SR-IOV BAR resources unallocatable. Rename it to reflect what it actually does. Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: no changes --- arch/powerpc/platforms/powernv/pci-sriov.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index f4c74ab1284d..216ceeff69b0 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -155,7 +155,7 @@ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) iov = kzalloc(sizeof(*iov), GFP_KERNEL); if (!iov) - goto truncate_iov; + goto disable_iov; pdev->dev.archdata.iov_data = iov; total_vfs = pci_sriov_get_totalvfs(pdev); @@ -170,7 +170,7 @@ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) dev_warn(&pdev->dev, "Don't support SR-IOV with" " non M64 VF BAR%d: %pR. \n", i, res); - goto truncate_iov; + goto disable_iov; } total_vf_bar_sz += pci_iov_resource_size(pdev, @@ -209,7 +209,8 @@ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) * mode is 32MB. */ if (iov->m64_single_mode && (size < SZ_32M)) - goto truncate_iov; + goto disable_iov; + dev_dbg(&pdev->dev, " Fixing VF BAR%d: %pR to\n", i, res); res->end = res->start + size * mul - 1; dev_dbg(&pdev->dev, " %pR\n", res); @@ -220,8 +221,8 @@ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) return; -truncate_iov: - /* To save MMIO space, IOV BAR is truncated. 
*/ +disable_iov: + /* Save ourselves some MMIO space by disabling the unusable BARs */ for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { res = &pdev->resource[i + PCI_IOV_RESOURCES]; res->flags = 0; From patchwork Wed Jul 22 06:57:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333611 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRXm5Gj7z9sPf for ; Wed, 22 Jul 2020 17:14:16 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=GnT3Q7QQ; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BBRXm3b4QzDqpy for ; Wed, 22 Jul 2020 17:14:16 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::1043; helo=mail-pj1-x1043.google.com; envelope-from=oohall@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=GnT3Q7QQ; dkim-atps=neutral Received: from mail-pj1-x1043.google.com (mail-pj1-x1043.google.com [IPv6:2607:f8b0:4864:20::1043]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BBR9l5BRCzDqwb for ; Wed, 22 Jul 2020 16:57:47 +1000 (AEST) Received: by mail-pj1-x1043.google.com with SMTP id k1so661562pjt.5 for ; Tue, 21 Jul 2020 23:57:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1jux0qkPu0GauZI9+jV32KW0ChRLqu0PMO62D9YL1Rc=; b=GnT3Q7QQ4P7QDN1XSIAby5yCNf1yDgFecFzyEaaaTezeJx5x13zorNGbMsq9QlKf6o uYQhlpXNUr1bQuvaf9+ux8T1OMUffTJBBSiGIwqQppXOit3qgAAZACZgdPXhUncQksaJ mVnYhYp5BFD9I2NvCGUrHoNgaj8gTEcVX79uAA2kT9UaXggURf/3C6MIEhhNmiAytYS0 TvvKeUvssQwnj0MyEA+Jf1gX5yvhKboL7BMGoSBPJ/I5ImuNfo5S8YoqkjxdXpiBATST DslA5OwvIxnXDxxFCxarWQYgUP0gicEJr/Xf3E2ubESa57MZ3VOQE8XIA94R4bnR5x1S gpNw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1jux0qkPu0GauZI9+jV32KW0ChRLqu0PMO62D9YL1Rc=; b=rn/W0aCBaYuuBBnWfLeFv7xR1piDHj4D1cDHhf77tL5OCOzPcgJMNIX6W5BS1nuI+J YHIVfj4RFqWC5FtrN6TTSg7+otyxYhJB7uK0XZhYcPjHqxMrK2ftTkERS1PcwEM2eAS6 8ilUkrdYAuC/h/SjmgfFtooXC0pwQNsEyOd2G5MP9rKryDcdYuX5BLOXZ/M0wCYSh7UZ 412EYE1po6jw4qV5W3r8rA7Gnfgq2J5ADRWYUXdxi02+CPv8WWH3zbvTwJi1j5+4SOXf 
2fAfoIyoM5FKj0PZHGzuqbeKm8tRROzVreEBQHHOwi9yx8qpQxvF49cZPzopfH3cdKop aQXA== X-Gm-Message-State: AOAM532BFV2LaVMjCQhEaUfpiAv/EhI8OsAaVYS7BQ2IOlSvDGrwfQ6U jlO6fbnCCzFyjC0ywuOTFtgXHvlB0og= X-Google-Smtp-Source: ABdhPJy2yV3SerU5VO9KDXC8G7SY2+livcaRihaOyG/4afYQ/Eb4/0dIh2ven/pj+1F63057IBjHPA== X-Received: by 2002:a17:90a:3567:: with SMTP id q94mr8817028pjb.226.1595401064050; Tue, 21 Jul 2020 23:57:44 -0700 (PDT) Received: from localhost.ibm.com (203-219-159-24.tpgi.com.au. [203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:43 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 08/16] powerpc/powernv/sriov: Simplify used window tracking Date: Wed, 22 Jul 2020 16:57:07 +1000 Message-Id: <20200722065715.1432738-8-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Alexey Kardashevskiy , Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" No need for the multi-dimensional arrays, just use a bitmap. Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: Fixed license to GPL-2.0-or-later Added MAX_M64_BARS for the size of the M64 allocation bitmap rather than open coding 64. --- arch/powerpc/platforms/powernv/pci-sriov.c | 50 +++++++--------------- arch/powerpc/platforms/powernv/pci.h | 8 +++- 2 files changed, 22 insertions(+), 36 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index 216ceeff69b0..b48952e59ce0 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +// SPDX-License-Identifier: GPL-2.0-or-later #include #include @@ -303,28 +303,20 @@ static int pnv_pci_vf_release_m64(struct pci_dev *pdev, u16 num_vfs) { struct pnv_iov_data *iov; struct pnv_phb *phb; - int i, j; - int m64_bars; + int window_id; phb = pci_bus_to_pnvhb(pdev->bus); iov = pnv_iov_get(pdev); - if (iov->m64_single_mode) - m64_bars = num_vfs; - else - m64_bars = 1; + for_each_set_bit(window_id, iov->used_m64_bar_mask, MAX_M64_BARS) { + opal_pci_phb_mmio_enable(phb->opal_id, + OPAL_M64_WINDOW_TYPE, + window_id, + 0); - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) - for (j = 0; j < m64_bars; j++) { - if (iov->m64_map[j][i] == IODA_INVALID_M64) - continue; - opal_pci_phb_mmio_enable(phb->opal_id, - OPAL_M64_WINDOW_TYPE, iov->m64_map[j][i], 0); - clear_bit(iov->m64_map[j][i], &phb->ioda.m64_bar_alloc); - iov->m64_map[j][i] = IODA_INVALID_M64; - } + clear_bit(window_id, &phb->ioda.m64_bar_alloc); + } - kfree(iov->m64_map); return 0; } @@ -350,23 +342,14 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) else m64_bars = 1; - iov->m64_map = kmalloc_array(m64_bars, - sizeof(*iov->m64_map), - GFP_KERNEL); - if (!iov->m64_map) - return -ENOMEM; - /* Initialize the m64_map to IODA_INVALID_M64 */ - for (i = 0; i < m64_bars ; i++) - for (j = 0; j < PCI_SRIOV_NUM_BARS; j++) - iov->m64_map[i][j] = IODA_INVALID_M64; - - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { res = 
&pdev->resource[i + PCI_IOV_RESOURCES]; if (!res->flags || !res->parent) continue; for (j = 0; j < m64_bars; j++) { + + /* allocate a window ID for this BAR */ do { win = find_next_zero_bit(&phb->ioda.m64_bar_alloc, phb->ioda.m64_bar_idx + 1, 0); @@ -374,8 +357,7 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) if (win >= phb->ioda.m64_bar_idx + 1) goto m64_failed; } while (test_and_set_bit(win, &phb->ioda.m64_bar_alloc)); - - iov->m64_map[j][i] = win; + set_bit(win, iov->used_m64_bar_mask); if (iov->m64_single_mode) { size = pci_iov_resource_size(pdev, @@ -391,12 +373,12 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) pe_num = iov->pe_num_map[j]; rc = opal_pci_map_pe_mmio_window(phb->opal_id, pe_num, OPAL_M64_WINDOW_TYPE, - iov->m64_map[j][i], 0); + win, 0); } rc = opal_pci_set_phb_mem_window(phb->opal_id, OPAL_M64_WINDOW_TYPE, - iov->m64_map[j][i], + win, start, 0, /* unused */ size); @@ -410,10 +392,10 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) if (iov->m64_single_mode) rc = opal_pci_phb_mmio_enable(phb->opal_id, - OPAL_M64_WINDOW_TYPE, iov->m64_map[j][i], 2); + OPAL_M64_WINDOW_TYPE, win, 2); else rc = opal_pci_phb_mmio_enable(phb->opal_id, - OPAL_M64_WINDOW_TYPE, iov->m64_map[j][i], 1); + OPAL_M64_WINDOW_TYPE, win, 1); if (rc != OPAL_SUCCESS) { dev_err(&pdev->dev, "Failed to enable M64 window #%d: %llx\n", diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h index 0156d7d17f7d..23fc5e391c7f 100644 --- a/arch/powerpc/platforms/powernv/pci.h +++ b/arch/powerpc/platforms/powernv/pci.h @@ -154,6 +154,7 @@ struct pnv_phb { unsigned long m64_size; unsigned long m64_segsize; unsigned long m64_base; +#define MAX_M64_BARS 64 unsigned long m64_bar_alloc; /* IO ports */ @@ -243,8 +244,11 @@ struct pnv_iov_data { /* Did we map the VF BARs with single-PE IODA BARs? */ bool m64_single_mode; - int (*m64_map)[PCI_SRIOV_NUM_BARS]; -#define IODA_INVALID_M64 (-1) + /* + * Bit mask used to track which m64 windows are used to map the + * SR-IOV BARs for this device. 
+ */ + DECLARE_BITMAP(used_m64_bar_mask, MAX_M64_BARS); /* * If we map the SR-IOV BARs with a segmented window then From patchwork Wed Jul 22 06:57:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333612 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRbB34qVz9sPf for ; Wed, 22 Jul 2020 17:16:22 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Efn4exzK; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BBRbB2DJKzDqlr for ; Wed, 22 Jul 2020 17:16:22 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::544; helo=mail-pg1-x544.google.com; envelope-from=oohall@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=Efn4exzK; dkim-atps=neutral Received: from mail-pg1-x544.google.com (mail-pg1-x544.google.com [IPv6:2607:f8b0:4864:20::544]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BBR9p3kDkzDqvr for ; Wed, 22 Jul 2020 16:57:50 +1000 (AEST) Received: by mail-pg1-x544.google.com with SMTP id d4so698677pgk.4 for ; Tue, 21 Jul 2020 23:57:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=SYTuuE0NHLp4onXPzByawlZIwLDnubpu2XqPvZZCzfw=; b=Efn4exzK5yoSIhQVUw4AOwrXlNGJh3jqHSlorRqM8A+wMfarrzJQaf63gvtzYCcrRt nRRtpw21DXHgE/GTqaQDiSMUL3p1Ir2/wDiCou9giZhZKwoYp5q8CN1ccAW9aJZ4N1mh wizITlLxf2wCaVlGHIQROSDp8ORMUmG4jcVTYNnPpcEVNXXLxu4ouAPwQRw2d7/VTZE8 cTul2zOWSDg8VQDxYtJ1egGHKA1urEs8O4FRmKbJrf3dK7i5dXcmoakUmR9+KygYTQEG hhOyt3L+k25Ll1itihXL8KtFR3ybCsuXEAXx2xUpu1fzb2SLXyRNyhheI84oqaKk0HaU 8wDg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=SYTuuE0NHLp4onXPzByawlZIwLDnubpu2XqPvZZCzfw=; b=qTXbeL8es5Mq+YjBqC5OgkWs1o4Bzu31e7KLpXdFStkR6/qTxnzE92aEb31BcVb26R Ys49iwTxrob+c65A/HWWIy/rU5YPlwhfqZ/bc5HQfPCtDyOCHIjMO8f2gPDQTBRK2Cyv G8qJ+y3b+W6Q8RHYitiScPTw2cJRVKI9sX8n/ySmHBEHIlzd7aVkXwR5jodv19NxluA8 mAm92yyO9mI4JoiZML2C6YV3592AyKh5WKtnKGOzAEeDdsU0GYH3RoTR5XoNFGyTTUHZ HEK0byYiQ7dAitkaT4tPrPs9+eZItoecS+Z4SLa9W0yFoShL1WK+84/1XM3ULqAyuoSv 0r9w== X-Gm-Message-State: 
AOAM530hj0HcPy3Ez8kQ7s6ULR0XDCWOuzqCyiAT323yhBewkFEV+062 YFJgY1mDRJbdTwnkL3hFsUnPCyM70Vc= X-Google-Smtp-Source: ABdhPJzsbRHMvsSZXJU/TatZV/DDtsiT3IyQznUloyYMf+l33lX5DJuOC5DsWdompyWsQLF8m7v52A== X-Received: by 2002:aa7:970a:: with SMTP id a10mr28826790pfg.319.1595401066302; Tue, 21 Jul 2020 23:57:46 -0700 (PDT) Received: from localhost.ibm.com (203-219-159-24.tpgi.com.au. [203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:45 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 09/16] powerpc/powernv/sriov: Factor out M64 BAR setup Date: Wed, 22 Jul 2020 16:57:08 +1000 Message-Id: <20200722065715.1432738-9-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Alexey Kardashevskiy , Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" The sequence required to use the single PE BAR mode is kinda janky and requires a little explanation. The API was designed with P7-IOC style windows where the setup process is something like: 1. Configure the window start / end address 2. Enable the window 3. Map the segments of each window to the PE For Single PE BARs the process is: 1. Set the PE for segment zero on a disabled window 2. Set the range 3. Enable the window Move the OPAL calls into their own helper functions where the quirks can be contained. Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: renamed "accordion" window to "segmented" --- arch/powerpc/platforms/powernv/pci-sriov.c | 129 ++++++++++++++++----- 1 file changed, 100 insertions(+), 29 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index b48952e59ce0..d90e11218add 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -320,6 +320,99 @@ static int pnv_pci_vf_release_m64(struct pci_dev *pdev, u16 num_vfs) return 0; } + +/* + * PHB3 and beyond support segmented windows. The window's address range + * is subdivided into phb->ioda.total_pe_num segments and there's a 1-1 + * mapping between PEs and segments. + */ +static int64_t pnv_ioda_map_m64_segmented(struct pnv_phb *phb, + int window_id, + resource_size_t start, + resource_size_t size) +{ + int64_t rc; + + rc = opal_pci_set_phb_mem_window(phb->opal_id, + OPAL_M64_WINDOW_TYPE, + window_id, + start, + 0, /* unused */ + size); + if (rc) + goto out; + + rc = opal_pci_phb_mmio_enable(phb->opal_id, + OPAL_M64_WINDOW_TYPE, + window_id, + OPAL_ENABLE_M64_SPLIT); +out: + if (rc) + pr_err("Failed to map M64 window #%d: %lld\n", window_id, rc); + + return rc; +} + +static int64_t pnv_ioda_map_m64_single(struct pnv_phb *phb, + int pe_num, + int window_id, + resource_size_t start, + resource_size_t size) +{ + int64_t rc; + + /* + * The API for setting up m64 mmio windows seems to have been designed + * with P7-IOC in mind. For that chip each M64 BAR (window) had a fixed + * split of 8 equally sized segments each of which could individually + * assigned to a PE. 
+ * + * The problem with this is that the API doesn't have any way to + * communicate the number of segments we want on a BAR. This wasn't + * a problem for p7-ioc since you didn't have a choice, but the + * single PE windows added in PHB3 don't map cleanly to this API. + * + * As a result we've got this slightly awkward process where we + * call opal_pci_map_pe_mmio_window() to put the single in single + * PE mode, and set the PE for the window before setting the address + * bounds. We need to do it this way because the single PE windows + * for PHB3 have different alignment requirements on PHB3. + */ + rc = opal_pci_map_pe_mmio_window(phb->opal_id, + pe_num, + OPAL_M64_WINDOW_TYPE, + window_id, + 0); + if (rc) + goto out; + + /* + * NB: In single PE mode the window needs to be aligned to 32MB + */ + rc = opal_pci_set_phb_mem_window(phb->opal_id, + OPAL_M64_WINDOW_TYPE, + window_id, + start, + 0, /* ignored by FW, m64 is 1-1 */ + size); + if (rc) + goto out; + + /* + * Now actually enable it. We specified the BAR should be in "non-split" + * mode so FW will validate that the BAR is in single PE mode. + */ + rc = opal_pci_phb_mmio_enable(phb->opal_id, + OPAL_M64_WINDOW_TYPE, + window_id, + OPAL_ENABLE_M64_NON_SPLIT); +out: + if (rc) + pr_err("Error mapping single PE BAR\n"); + + return rc; +} + static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) { struct pnv_iov_data *iov; @@ -330,7 +423,6 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) int64_t rc; int total_vfs; resource_size_t size, start; - int pe_num; int m64_bars; phb = pci_bus_to_pnvhb(pdev->bus); @@ -359,49 +451,28 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) } while (test_and_set_bit(win, &phb->ioda.m64_bar_alloc)); set_bit(win, iov->used_m64_bar_mask); + if (iov->m64_single_mode) { size = pci_iov_resource_size(pdev, PCI_IOV_RESOURCES + i); start = res->start + size * j; + rc = pnv_ioda_map_m64_single(phb, win, + iov->pe_num_map[j], + start, + size); } else { size = resource_size(res); start = res->start; - } - /* Map the M64 here */ - if (iov->m64_single_mode) { - pe_num = iov->pe_num_map[j]; - rc = opal_pci_map_pe_mmio_window(phb->opal_id, - pe_num, OPAL_M64_WINDOW_TYPE, - win, 0); + rc = pnv_ioda_map_m64_segmented(phb, win, start, + size); } - rc = opal_pci_set_phb_mem_window(phb->opal_id, - OPAL_M64_WINDOW_TYPE, - win, - start, - 0, /* unused */ - size); - - if (rc != OPAL_SUCCESS) { dev_err(&pdev->dev, "Failed to map M64 window #%d: %lld\n", win, rc); goto m64_failed; } - - if (iov->m64_single_mode) - rc = opal_pci_phb_mmio_enable(phb->opal_id, - OPAL_M64_WINDOW_TYPE, win, 2); - else - rc = opal_pci_phb_mmio_enable(phb->opal_id, - OPAL_M64_WINDOW_TYPE, win, 1); - - if (rc != OPAL_SUCCESS) { - dev_err(&pdev->dev, "Failed to enable M64 window #%d: %llx\n", - win, rc); - goto m64_failed; - } } } return 0; From patchwork Wed Jul 22 06:57:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333613 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRd958NPz9sPf for ; Wed, 22 Jul 2020 17:18:05 +1000 (AEST) Authentication-Results: ozlabs.org; 
From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 10/16] powerpc/powernv/pci: Refactor pnv_ioda_alloc_pe() Date: Wed, 22 Jul 2020 16:57:09 +1000 Message-Id: <20200722065715.1432738-10-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> Cc: Oliver O'Halloran
Rework the PE allocation logic to allow allocating blocks of PEs rather than individually. We'll use this to allocate contiguous blocks of PEs for the SR-IOV VFs. This patch also adds code to pnv_ioda_alloc_pe() and pnv_ioda_reserve_pe() to use the existing, but unused, phb->pe_alloc_mutex. Currently these functions use atomic bit ops to release a currently allocated PE number. However, pnv_ioda_alloc_pe() wants to have exclusive access to the bitmap while scanning for a hole large enough to accommodate the allocation size.
Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: Add some details about the pe_alloc mutex and why we're using it. --- arch/powerpc/platforms/powernv/pci-ioda.c | 41 ++++++++++++++++++----- arch/powerpc/platforms/powernv/pci.h | 2 +- 2 files changed, 34 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c index 2d36a9ebf0e9..c9c25fb0783c 100644 --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -145,23 +145,45 @@ static void pnv_ioda_reserve_pe(struct pnv_phb *phb, int pe_no) return; } + mutex_lock(&phb->ioda.pe_alloc_mutex); if (test_and_set_bit(pe_no, phb->ioda.pe_alloc)) pr_debug("%s: PE %x was reserved on PHB#%x\n", __func__, pe_no, phb->hose->global_number); + mutex_unlock(&phb->ioda.pe_alloc_mutex); pnv_ioda_init_pe(phb, pe_no); } -struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb) +struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb, int count) { - long pe; + struct pnv_ioda_pe *ret = NULL; + int run = 0, pe, i; + mutex_lock(&phb->ioda.pe_alloc_mutex); + + /* scan backwards for a run of @count cleared bits */ for (pe = phb->ioda.total_pe_num - 1; pe >= 0; pe--) { - if (!test_and_set_bit(pe, phb->ioda.pe_alloc)) - return pnv_ioda_init_pe(phb, pe); + if (test_bit(pe, phb->ioda.pe_alloc)) { + run = 0; + continue; + } + + run++; + if (run == count) + break; } + if (run != count) + goto out; - return NULL; + for (i = pe; i < pe + count; i++) { + set_bit(i, phb->ioda.pe_alloc); + pnv_ioda_init_pe(phb, i); + } + ret = &phb->ioda.pe_array[pe]; + +out: + mutex_unlock(&phb->ioda.pe_alloc_mutex); + return ret; } void pnv_ioda_free_pe(struct pnv_ioda_pe *pe) @@ -173,7 +195,10 @@ void pnv_ioda_free_pe(struct pnv_ioda_pe *pe) WARN_ON(pe->npucomp); /* NPUs for nvlink are not supposed to be freed */ kfree(pe->npucomp); memset(pe, 0, sizeof(struct pnv_ioda_pe)); + + mutex_lock(&phb->ioda.pe_alloc_mutex); clear_bit(pe_num, phb->ioda.pe_alloc); + mutex_unlock(&phb->ioda.pe_alloc_mutex); } /* The default M64 BAR is shared by
all PEs */ @@ -976,7 +1001,7 @@ static struct pnv_ioda_pe *pnv_ioda_setup_dev_PE(struct pci_dev *dev) if (pdn->pe_number != IODA_INVALID_PE) return NULL; - pe = pnv_ioda_alloc_pe(phb); + pe = pnv_ioda_alloc_pe(phb, 1); if (!pe) { pr_warn("%s: Not enough PE# available, disabling device\n", pci_name(dev)); @@ -1047,7 +1072,7 @@ static struct pnv_ioda_pe *pnv_ioda_setup_bus_PE(struct pci_bus *bus, bool all) /* The PE number isn't pinned by M64 */ if (!pe) - pe = pnv_ioda_alloc_pe(phb); + pe = pnv_ioda_alloc_pe(phb, 1); if (!pe) { pr_warn("%s: Not enough PE# available for PCI bus %04x:%02x\n", @@ -3065,7 +3090,7 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np, pnv_ioda_reserve_pe(phb, phb->ioda.root_pe_idx); } else { /* otherwise just allocate one */ - root_pe = pnv_ioda_alloc_pe(phb); + root_pe = pnv_ioda_alloc_pe(phb, 1); phb->ioda.root_pe_idx = root_pe->pe_number; } diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h index 23fc5e391c7f..06431a452130 100644 --- a/arch/powerpc/platforms/powernv/pci.h +++ b/arch/powerpc/platforms/powernv/pci.h @@ -224,7 +224,7 @@ int pnv_ioda_deconfigure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe); void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe); void pnv_pci_ioda2_release_pe_dma(struct pnv_ioda_pe *pe); -struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb); +struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb, int count); void pnv_ioda_free_pe(struct pnv_ioda_pe *pe); #ifdef CONFIG_PCI_IOV From patchwork Wed Jul 22 06:57:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333614 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRgH5nw2z9sPf for ; Wed, 22 Jul 2020 17:19:55 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=crqusWbR; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BBRgH36wjzDqs3 for ; Wed, 22 Jul 2020 17:19:55 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::1044; helo=mail-pj1-x1044.google.com; envelope-from=oohall@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=crqusWbR; dkim-atps=neutral Received: from mail-pj1-x1044.google.com (mail-pj1-x1044.google.com [IPv6:2607:f8b0:4864:20::1044]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) 
From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 11/16] powerpc/powernv/sriov: Drop iov->pe_num_map[] Date: Wed, 22 Jul 2020 16:57:10 +1000 Message-Id: <20200722065715.1432738-11-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> Cc: Alexey Kardashevskiy , Oliver O'Halloran
Currently iov->pe_num_map[] does one of two things depending on whether single PE mode is being used or not. When it is, it contains an array which maps a vf_index to the corresponding PE number. When single PE mode is not being used it contains a scalar which is the base PE for the set of enabled VFs (so for VFn the PE is base + n). The array was necessary because when calling pnv_ioda_alloc_pe() there was no guarantee that the allocated PEs would be contiguous. We can now allocate contiguous blocks of PEs so this is no longer an issue. This allows us to drop the if (single_mode) {} .. else {} blocks scattered through the SR-IOV code, which is a nice cleanup. This also fixes a bug in pnv_pci_sriov_disable(): it used the non-atomic bitmap_clear() to manipulate the PE allocation map.
Other users of the map assume it will be accessed with atomic ops. Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: Added a note to the commit message about bitmap_clear() --- arch/powerpc/platforms/powernv/pci-sriov.c | 109 +++++---------------- arch/powerpc/platforms/powernv/pci.h | 7 +- 2 files changed, 28 insertions(+), 88 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index d90e11218add..5981323cd9a6 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -453,11 +453,13 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) if (iov->m64_single_mode) { + int pe_num = iov->vf_pe_arr[j].pe_number; + size = pci_iov_resource_size(pdev, PCI_IOV_RESOURCES + i); start = res->start + size * j; rc = pnv_ioda_map_m64_single(phb, win, - iov->pe_num_map[j], + pe_num, start, size); } else { @@ -596,38 +598,24 @@ static int pnv_pci_vf_resource_shift(struct pci_dev *dev, int offset) static void pnv_pci_sriov_disable(struct pci_dev *pdev) { + u16 num_vfs, base_pe; struct pnv_phb *phb; - struct pnv_ioda_pe *pe; struct pnv_iov_data *iov; - u16 num_vfs, i; phb = pci_bus_to_pnvhb(pdev->bus); iov = pnv_iov_get(pdev); num_vfs = iov->num_vfs; + base_pe = iov->vf_pe_arr[0].pe_number; /* Release VF PEs */ pnv_ioda_release_vf_PE(pdev); if (phb->type == PNV_PHB_IODA2) { if (!iov->m64_single_mode) - pnv_pci_vf_resource_shift(pdev, -*iov->pe_num_map); + pnv_pci_vf_resource_shift(pdev, -base_pe); /* Release M64 windows */ pnv_pci_vf_release_m64(pdev, num_vfs); - - /* Release PE numbers */ - if (iov->m64_single_mode) { - for (i = 0; i < num_vfs; i++) { - if (iov->pe_num_map[i] == IODA_INVALID_PE) - continue; - - pe = &phb->ioda.pe_array[iov->pe_num_map[i]]; - pnv_ioda_free_pe(pe); - } - } else - bitmap_clear(phb->ioda.pe_alloc, *iov->pe_num_map, num_vfs); - /* Releasing pe_num_map */ - kfree(iov->pe_num_map); } } @@ -653,13 +641,7 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs) int vf_bus = pci_iov_virtfn_bus(pdev, vf_index); struct pci_dn *vf_pdn; - if (iov->m64_single_mode) - pe_num = iov->pe_num_map[vf_index]; - else - pe_num = *iov->pe_num_map + vf_index; - - pe = &phb->ioda.pe_array[pe_num]; - pe->pe_number = pe_num; + pe = &iov->vf_pe_arr[vf_index]; pe->phb = phb; pe->flags = PNV_IODA_PE_VF; pe->pbus = NULL; @@ -667,6 +649,7 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs) pe->mve_number = -1; pe->rid = (vf_bus << 8) | vf_devfn; + pe_num = pe->pe_number; pe_info(pe, "VF %04d:%02d:%02d.%d associated with PE#%x\n", pci_domain_nr(pdev->bus), pdev->bus->number, PCI_SLOT(vf_devfn), PCI_FUNC(vf_devfn), pe_num); @@ -698,9 +681,9 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs) static int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) { + struct pnv_ioda_pe *base_pe; struct pnv_iov_data *iov; struct pnv_phb *phb; - struct pnv_ioda_pe *pe; int ret; u16 i; @@ -714,55 +697,14 @@ static int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) return -ENOSPC; } - /* - * When M64 BARs functions in Single PE mode, the number of VFs - * could be enabled must be less than the number of M64 BARs. 
- */ - if (iov->m64_single_mode && num_vfs > phb->ioda.m64_bar_idx) { - dev_info(&pdev->dev, "Not enough M64 BAR for VFs\n"); + /* allocate a contigious block of PEs for our VFs */ + base_pe = pnv_ioda_alloc_pe(phb, num_vfs); + if (!base_pe) { + pci_err(pdev, "Unable to allocate PEs for %d VFs\n", num_vfs); return -EBUSY; } - /* Allocating pe_num_map */ - if (iov->m64_single_mode) - iov->pe_num_map = kmalloc_array(num_vfs, - sizeof(*iov->pe_num_map), - GFP_KERNEL); - else - iov->pe_num_map = kmalloc(sizeof(*iov->pe_num_map), GFP_KERNEL); - - if (!iov->pe_num_map) - return -ENOMEM; - - if (iov->m64_single_mode) - for (i = 0; i < num_vfs; i++) - iov->pe_num_map[i] = IODA_INVALID_PE; - - /* Calculate available PE for required VFs */ - if (iov->m64_single_mode) { - for (i = 0; i < num_vfs; i++) { - pe = pnv_ioda_alloc_pe(phb); - if (!pe) { - ret = -EBUSY; - goto m64_failed; - } - - iov->pe_num_map[i] = pe->pe_number; - } - } else { - mutex_lock(&phb->ioda.pe_alloc_mutex); - *iov->pe_num_map = bitmap_find_next_zero_area( - phb->ioda.pe_alloc, phb->ioda.total_pe_num, - 0, num_vfs, 0); - if (*iov->pe_num_map >= phb->ioda.total_pe_num) { - mutex_unlock(&phb->ioda.pe_alloc_mutex); - dev_info(&pdev->dev, "Failed to enable VF%d\n", num_vfs); - kfree(iov->pe_num_map); - return -EBUSY; - } - bitmap_set(phb->ioda.pe_alloc, *iov->pe_num_map, num_vfs); - mutex_unlock(&phb->ioda.pe_alloc_mutex); - } + iov->vf_pe_arr = base_pe; iov->num_vfs = num_vfs; /* Assign M64 window accordingly */ @@ -778,9 +720,10 @@ static int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) * Otherwise, the PE# for the VF will conflict with others. */ if (!iov->m64_single_mode) { - ret = pnv_pci_vf_resource_shift(pdev, *iov->pe_num_map); + ret = pnv_pci_vf_resource_shift(pdev, + base_pe->pe_number); if (ret) - goto m64_failed; + goto shift_failed; } } @@ -789,20 +732,12 @@ static int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) return 0; -m64_failed: - if (iov->m64_single_mode) { - for (i = 0; i < num_vfs; i++) { - if (iov->pe_num_map[i] == IODA_INVALID_PE) - continue; - - pe = &phb->ioda.pe_array[iov->pe_num_map[i]]; - pnv_ioda_free_pe(pe); - } - } else - bitmap_clear(phb->ioda.pe_alloc, *iov->pe_num_map, num_vfs); +shift_failed: + pnv_pci_vf_release_m64(pdev, num_vfs); - /* Releasing pe_num_map */ - kfree(iov->pe_num_map); +m64_failed: + for (i = 0; i < num_vfs; i++) + pnv_ioda_free_pe(&iov->vf_pe_arr[i]); return ret; } diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h index 06431a452130..f76923f44f66 100644 --- a/arch/powerpc/platforms/powernv/pci.h +++ b/arch/powerpc/platforms/powernv/pci.h @@ -239,7 +239,12 @@ struct pnv_iov_data { /* number of VFs enabled */ u16 num_vfs; - unsigned int *pe_num_map; /* PE# for the first VF PE or array */ + + /* + * Pointer to the IODA PE state of each VF. Note that this is a pointer + * into the PHB's PE array (phb->ioda.pe_array). + */ + struct pnv_ioda_pe *vf_pe_arr; /* Did we map the VF BARs with single-PE IODA BARs? 
*/ bool m64_single_mode;
From patchwork Wed Jul 22 06:57:11 2020 X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333615
ABdhPJy8QJnSaZCOJCWnoCB3n77b1KhVGnGJsHDTXrELy7iR4v1vNBuBpFlge1qePg3+/Sj8pEqk9g== X-Received: by 2002:a17:90a:ba05:: with SMTP id s5mr7878780pjr.132.1595401072523; Tue, 21 Jul 2020 23:57:52 -0700 (PDT) Received: from localhost.ibm.com (203-219-159-24.tpgi.com.au. [203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:52 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 12/16] powerpc/powernv/sriov: De-indent setup and teardown Date: Wed, 22 Jul 2020 16:57:11 +1000 Message-Id: <20200722065715.1432738-12-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Remove the IODA2 PHB checks. We already assume IODA2 in several places so there's not much point in wrapping most of the setup and teardown process in an if block. Signed-off-by: Oliver O'Halloran --- v2: Added a note that iov->vf_pe_arr is a pointer into the PHB's PE array rather than something we allocate. --- arch/powerpc/platforms/powernv/pci-sriov.c | 86 ++++++++++++---------- arch/powerpc/platforms/powernv/pci.h | 5 +- 2 files changed, 50 insertions(+), 41 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index 5981323cd9a6..b60d8a054a61 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -607,16 +607,18 @@ static void pnv_pci_sriov_disable(struct pci_dev *pdev) num_vfs = iov->num_vfs; base_pe = iov->vf_pe_arr[0].pe_number; + if (WARN_ON(!iov)) + return; + /* Release VF PEs */ pnv_ioda_release_vf_PE(pdev); - if (phb->type == PNV_PHB_IODA2) { - if (!iov->m64_single_mode) - pnv_pci_vf_resource_shift(pdev, -base_pe); + /* Un-shift the IOV BAR resources */ + if (!iov->m64_single_mode) + pnv_pci_vf_resource_shift(pdev, -base_pe); - /* Release M64 windows */ - pnv_pci_vf_release_m64(pdev, num_vfs); - } + /* Release M64 windows */ + pnv_pci_vf_release_m64(pdev, num_vfs); } static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs) @@ -690,41 +692,51 @@ static int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) phb = pci_bus_to_pnvhb(pdev->bus); iov = pnv_iov_get(pdev); - if (phb->type == PNV_PHB_IODA2) { - if (!iov->vfs_expanded) { - dev_info(&pdev->dev, "don't support this SRIOV device" - " with non 64bit-prefetchable IOV BAR\n"); - return -ENOSPC; - } + /* + * There's a calls to IODA2 PE setup code littered throughout. We could + * probably fix that, but we'd still have problems due to the + * restriction inherent on IODA1 PHBs. + * + * NB: We class IODA3 as IODA2 since they're very similar. 
+ */ + if (phb->type != PNV_PHB_IODA2) { + pci_err(pdev, "SR-IOV is not supported on this PHB\n"); + return -ENXIO; + } - /* allocate a contigious block of PEs for our VFs */ - base_pe = pnv_ioda_alloc_pe(phb, num_vfs); - if (!base_pe) { - pci_err(pdev, "Unable to allocate PEs for %d VFs\n", num_vfs); - return -EBUSY; - } + if (!iov->vfs_expanded) { + dev_info(&pdev->dev, "don't support this SRIOV device" + " with non 64bit-prefetchable IOV BAR\n"); + return -ENOSPC; + } - iov->vf_pe_arr = base_pe; - iov->num_vfs = num_vfs; + /* allocate a contigious block of PEs for our VFs */ + base_pe = pnv_ioda_alloc_pe(phb, num_vfs); + if (!base_pe) { + pci_err(pdev, "Unable to allocate PEs for %d VFs\n", num_vfs); + return -EBUSY; + } - /* Assign M64 window accordingly */ - ret = pnv_pci_vf_assign_m64(pdev, num_vfs); - if (ret) { - dev_info(&pdev->dev, "Not enough M64 window resources\n"); - goto m64_failed; - } + iov->vf_pe_arr = base_pe; + iov->num_vfs = num_vfs; - /* - * When using one M64 BAR to map one IOV BAR, we need to shift - * the IOV BAR according to the PE# allocated to the VFs. - * Otherwise, the PE# for the VF will conflict with others. - */ - if (!iov->m64_single_mode) { - ret = pnv_pci_vf_resource_shift(pdev, - base_pe->pe_number); - if (ret) - goto shift_failed; - } + /* Assign M64 window accordingly */ + ret = pnv_pci_vf_assign_m64(pdev, num_vfs); + if (ret) { + dev_info(&pdev->dev, "Not enough M64 window resources\n"); + goto m64_failed; + } + + /* + * When using one M64 BAR to map one IOV BAR, we need to shift + * the IOV BAR according to the PE# allocated to the VFs. + * Otherwise, the PE# for the VF will conflict with others. + */ + if (!iov->m64_single_mode) { + ret = pnv_pci_vf_resource_shift(pdev, + base_pe->pe_number); + if (ret) + goto shift_failed; } /* Setup VF PEs */ diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h index f76923f44f66..41a6f4e938e4 100644 --- a/arch/powerpc/platforms/powernv/pci.h +++ b/arch/powerpc/platforms/powernv/pci.h @@ -240,10 +240,7 @@ struct pnv_iov_data { /* number of VFs enabled */ u16 num_vfs; - /* - * Pointer to the IODA PE state of each VF. Note that this is a pointer - * into the PHB's PE array (phb->ioda.pe_array). - */ + /* pointer to the array of VF PEs. num_vfs long*/ struct pnv_ioda_pe *vf_pe_arr; /* Did we map the VF BARs with single-PE IODA BARs? 
*/
From patchwork Wed Jul 22 06:57:12 2020 X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333616
From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 13/16] powerpc/powernv/sriov: Move M64 BAR allocation into a helper Date: Wed, 22 Jul 2020 16:57:12 +1000 Message-Id: <20200722065715.1432738-13-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> Cc: Alexey Kardashevskiy , Oliver O'Halloran
I want to refactor the loop this code is currently inside of. Hoist it on out.
Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: no change --- arch/powerpc/platforms/powernv/pci-sriov.c | 31 ++++++++++++++-------- 1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index b60d8a054a61..f9aa5773dc6d 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -413,6 +413,23 @@ static int64_t pnv_ioda_map_m64_single(struct pnv_phb *phb, return rc; } +static int pnv_pci_alloc_m64_bar(struct pnv_phb *phb, struct pnv_iov_data *iov) +{ + int win; + + do { + win = find_next_zero_bit(&phb->ioda.m64_bar_alloc, + phb->ioda.m64_bar_idx + 1, 0); + + if (win >= phb->ioda.m64_bar_idx + 1) + return -1; + } while (test_and_set_bit(win, &phb->ioda.m64_bar_alloc)); + + set_bit(win, iov->used_m64_bar_mask); + + return win; +} + static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) { struct pnv_iov_data *iov; @@ -440,17 +457,9 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) continue; for (j = 0; j < m64_bars; j++) { - - /* allocate a window ID for this BAR */ - do { - win = find_next_zero_bit(&phb->ioda.m64_bar_alloc, - phb->ioda.m64_bar_idx + 1, 0); - - if (win >= phb->ioda.m64_bar_idx + 1) - goto m64_failed; - } while (test_and_set_bit(win, &phb->ioda.m64_bar_alloc)); - set_bit(win, iov->used_m64_bar_mask); - + win = pnv_pci_alloc_m64_bar(phb, iov); + if (win < 0) + goto m64_failed; if (iov->m64_single_mode) { int pe_num = iov->vf_pe_arr[j].pe_number;
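The helper above is the usual scan-then-atomically-claim idiom over a small allocation bitmap: find a clear bit, try to claim it with an atomic test-and-set, and rescan if another caller won the race. A minimal userspace sketch of the same pattern, using C11 atomics and invented names (alloc_window(), NUM_WINDOWS) rather than the kernel's find_next_zero_bit()/test_and_set_bit():

#include <stdatomic.h>
#include <stdio.h>

#define NUM_WINDOWS 16

/* One bit per window; a set bit means the window is already claimed. */
static _Atomic unsigned long window_alloc;

/* Returns a free window ID, or -1 if every window is taken. */
static int alloc_window(void)
{
	for (;;) {
		unsigned long mask = atomic_load(&window_alloc);
		int win;

		/* scan for the first clear bit */
		for (win = 0; win < NUM_WINDOWS; win++)
			if (!(mask & (1UL << win)))
				break;
		if (win == NUM_WINDOWS)
			return -1;	/* nothing free */

		/* claim it atomically; retry if someone else got there first */
		if (!(atomic_fetch_or(&window_alloc, 1UL << win) & (1UL << win)))
			return win;
	}
}

int main(void)
{
	for (int i = 0; i < NUM_WINDOWS + 2; i++)
		printf("alloc_window() = %d\n", alloc_window());
	return 0;
}

The last two calls fail with -1 once the bitmap is exhausted, which mirrors the "goto m64_failed" path in the caller.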
From patchwork Wed Jul 22 06:57:13 2020 X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333617
[203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:56 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 14/16] powerpc/powernv/sriov: Refactor M64 BAR setup Date: Wed, 22 Jul 2020 16:57:13 +1000 Message-Id: <20200722065715.1432738-14-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Alexey Kardashevskiy , Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Split up the logic so that we have one branch that handles setting up a segmented window and another that handles setting up single PE windows for each VF. Signed-off-by: Oliver O'Halloran Reviewed-by: Alexey Kardashevskiy --- v2: no changes --- arch/powerpc/platforms/powernv/pci-sriov.c | 57 ++++++++++------------ 1 file changed, 27 insertions(+), 30 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index f9aa5773dc6d..ce8ad6851d73 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -438,52 +438,49 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) struct resource *res; int i, j; int64_t rc; - int total_vfs; resource_size_t size, start; - int m64_bars; + int base_pe_num; phb = pci_bus_to_pnvhb(pdev->bus); iov = pnv_iov_get(pdev); - total_vfs = pci_sriov_get_totalvfs(pdev); - - if (iov->m64_single_mode) - m64_bars = num_vfs; - else - m64_bars = 1; for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { res = &pdev->resource[i + PCI_IOV_RESOURCES]; if (!res->flags || !res->parent) continue; - for (j = 0; j < m64_bars; j++) { + /* don't need single mode? map everything in one go! 
*/ + if (!iov->m64_single_mode) { win = pnv_pci_alloc_m64_bar(phb, iov); if (win < 0) goto m64_failed; - if (iov->m64_single_mode) { - int pe_num = iov->vf_pe_arr[j].pe_number; - - size = pci_iov_resource_size(pdev, - PCI_IOV_RESOURCES + i); - start = res->start + size * j; - rc = pnv_ioda_map_m64_single(phb, win, - pe_num, - start, - size); - } else { - size = resource_size(res); - start = res->start; - - rc = pnv_ioda_map_m64_segmented(phb, win, start, - size); - } + size = resource_size(res); + start = res->start; - if (rc != OPAL_SUCCESS) { - dev_err(&pdev->dev, "Failed to map M64 window #%d: %lld\n", - win, rc); + rc = pnv_ioda_map_m64_segmented(phb, win, start, size); + if (rc) + goto m64_failed; + + continue; + } + + /* otherwise map each VF with single PE BARs */ + size = pci_iov_resource_size(pdev, PCI_IOV_RESOURCES + i); + base_pe_num = iov->vf_pe_arr[0].pe_number; + + for (j = 0; j < num_vfs; j++) { + win = pnv_pci_alloc_m64_bar(phb, iov); + if (win < 0) + goto m64_failed; + + start = res->start + size * j; + rc = pnv_ioda_map_m64_single(phb, win, + base_pe_num + j, + start, + size); + if (rc) goto m64_failed; - } } } return 0; From patchwork Wed Jul 22 06:57:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333619 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRqw4gYPz9sPf for ; Wed, 22 Jul 2020 17:27:24 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=baF9Gsbh; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BBRqw3xLJzDr2d for ; Wed, 22 Jul 2020 17:27:24 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::541; helo=mail-pg1-x541.google.com; envelope-from=oohall@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=baF9Gsbh; dkim-atps=neutral Received: from mail-pg1-x541.google.com (mail-pg1-x541.google.com [IPv6:2607:f8b0:4864:20::541]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BBRB236fNzDqw9 for ; Wed, 22 Jul 2020 16:58:02 +1000 (AEST) Received: by mail-pg1-x541.google.com with SMTP id g67so690784pgc.8 for ; Tue, 21 Jul 2020 23:58:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=JOM6RKmhGC8xksQhcfeat+pOB/Cl07/L2Xe4srt3kGI=; 
b=baF9GsbhbMYNtpKFxNbugAuGdLzP5EZ5GfQOGntUbpGEUJMil84gmd0IjdjMW+syy8 0CnWHUGJnP9mwrhMGey0YFGh4d42gjpc10zfnMODW11tqxM2nusx/DG7pfZBiXJ/pGPm U33b7pHen0JSEJX4uCNP7IehgEwhezGW7Xx5xioLGDGIPmh6euDSR7ymc6foJZoW9lh7 ol3XKTt4q94Yxq/+NGpPV8l2cG7WsncqBU6kRMtTTg8TLeI2LkU4j3UdV/2gOPlywcFk AH8J/oaaY9hpWJ1hLoSu5TCXBNCsHSN2WDrmKt+czTO6FgfjV9kBs0+ZZQhmspPKfcCx /YHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=JOM6RKmhGC8xksQhcfeat+pOB/Cl07/L2Xe4srt3kGI=; b=bhQjbKqYZigr7r2xFbeSmEBaMF/FcHjFsCh9Oh31rkFZcIUaRhZYQjOxrwbKlVOCBO h4dZ158X9XAgFsLTYyYKkHPLwNWKvSUFSi4jNSAu6o1Z2D2h2Q3Dy1fT8+Yz/J9NzUL5 Ex9UawfGiS9CdZgQPiCvtu95obiC8bLli9iLwsEhyIOhDL4bWoMiV1FQKweSiOBhxVXm RoOjFSF5z1ljHNt4DZGG4BMK5DmwP5jXW3YXF+y2GSC+Eg/fUvDwmYw8DgHw231GfMcn BTjh8mLSdR7zKPdxeeoHzIwKr9+XwwIUo4jVHcDgeGCaRfDSQ9uF6aYbDWtGhhxSEPY4 q/rw== X-Gm-Message-State: AOAM530RI8y4gEUUf1CV/+cKQkpcBfZXJZNTOo1BuKdcXqV6hJRDf06D znmKnTK7PwPbT0K7BRD34/b4VMfuP8Y= X-Google-Smtp-Source: ABdhPJw0BAA0NkfNqI8MgoesPVaPl0pdWS3xKLRf8r+9fLRvWmLtUpR8GRYduPiMLMd2XZJKBQ918w== X-Received: by 2002:aa7:955a:: with SMTP id w26mr28145936pfq.137.1595401079170; Tue, 21 Jul 2020 23:57:59 -0700 (PDT) Received: from localhost.ibm.com (203-219-159-24.tpgi.com.au. [203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:57:58 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 15/16] powerpc/powernv/sriov: Make single PE mode a per-BAR setting Date: Wed, 22 Jul 2020 16:57:14 +1000 Message-Id: <20200722065715.1432738-15-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Using single PE BARs to map an SR-IOV BAR is really a choice about what strategy to use when mapping a BAR. It doesn't make much sense for this to be a global setting since a device might have one large BAR which needs to be mapped with single PE windows and another smaller BAR that can be mapped with a regular segmented window. Make the segmented vs single decision a per-BAR setting and clean up the logic that decides which mode to use. Signed-off-by: Oliver O'Halloran --- v2: Dropped unused total_vfs variables in pnv_pci_ioda_fixup_iov_resources() Dropped bar_no from pnv_pci_iov_resource_alignment() Minor re-wording of comments. 
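For reference, the per-BAR decision this patch introduces boils down to the sketch below. It is a condensed illustration only, not the code in the diff: the helper name fixup_one_iov_bar() is invented, the disable_iov error path is elided, and the fields it touches (m64_single_mode[], need_shift, m64_segsize, total_pe_num) are the ones used in the hunks that follow.

static void fixup_one_iov_bar(struct pci_dev *pdev, struct pnv_phb *phb,
			      struct pnv_iov_data *iov, int i)
{
	struct resource *res = &pdev->resource[i + PCI_IOV_RESOURCES];
	resource_size_t vf_bar_sz =
		pci_iov_resource_size(pdev, i + PCI_IOV_RESOURCES);

	if (vf_bar_sz > (phb->ioda.m64_segsize >> 2)) {
		/*
		 * Segmenting this BAR would waste too much M64 space, so
		 * map it with one single-PE window per VF instead. PHB3
		 * needs single-mode M64 BARs to be at least 32MB; smaller
		 * BARs that still trip the 1/4 gate can't be mapped at all
		 * (the real code disables SR-IOV in that case).
		 */
		if (vf_bar_sz >= SZ_32M)
			iov->m64_single_mode[i] = true;
		return;
	}

	/*
	 * Small enough for a segmented window: expand the resource to one
	 * segment per PE and remember to shift it once PEs are allocated.
	 */
	res->end = res->start + vf_bar_sz * phb->ioda.total_pe_num - 1;
	iov->need_shift = true;
}

The need_shift flag is what now gates pnv_pci_vf_resource_shift(): only segmented BARs are expanded, so only they need to be moved to line up with the VF PE numbers.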
--- arch/powerpc/platforms/powernv/pci-sriov.c | 131 ++++++++++----------- arch/powerpc/platforms/powernv/pci.h | 11 +- 2 files changed, 73 insertions(+), 69 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index ce8ad6851d73..76215d01405b 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -146,21 +146,17 @@ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) { struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); - const resource_size_t gate = phb->ioda.m64_segsize >> 2; struct resource *res; int i; - resource_size_t size, total_vf_bar_sz; + resource_size_t vf_bar_sz; struct pnv_iov_data *iov; - int mul, total_vfs; + int mul; iov = kzalloc(sizeof(*iov), GFP_KERNEL); if (!iov) goto disable_iov; pdev->dev.archdata.iov_data = iov; - - total_vfs = pci_sriov_get_totalvfs(pdev); mul = phb->ioda.total_pe_num; - total_vf_bar_sz = 0; for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { res = &pdev->resource[i + PCI_IOV_RESOURCES]; @@ -173,50 +169,50 @@ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) goto disable_iov; } - total_vf_bar_sz += pci_iov_resource_size(pdev, - i + PCI_IOV_RESOURCES); + vf_bar_sz = pci_iov_resource_size(pdev, i + PCI_IOV_RESOURCES); /* - * If bigger than quarter of M64 segment size, just round up - * power of two. + * Generally, one segmented M64 BAR maps one IOV BAR. However, + * if a VF BAR is too large we end up wasting a lot of space. + * If each VF needs more than 1/4 of the default m64 segment + * then each VF BAR should be mapped in single-PE mode to reduce + * the amount of space required. This does however limit the + * number of VFs we can support. * - * Generally, one M64 BAR maps one IOV BAR. To avoid conflict - * with other devices, IOV BAR size is expanded to be - * (total_pe * VF_BAR_size). When VF_BAR_size is half of M64 - * segment size , the expanded size would equal to half of the - * whole M64 space size, which will exhaust the M64 Space and - * limit the system flexibility. This is a design decision to - * set the boundary to quarter of the M64 segment size. + * The 1/4 limit is arbitrary and can be tweaked. */ - if (total_vf_bar_sz > gate) { - mul = roundup_pow_of_two(total_vfs); - dev_info(&pdev->dev, - "VF BAR Total IOV size %llx > %llx, roundup to %d VFs\n", - total_vf_bar_sz, gate, mul); - iov->m64_single_mode = true; - break; - } - } + if (vf_bar_sz > (phb->ioda.m64_segsize >> 2)) { + /* + * On PHB3, the minimum size alignment of M64 BAR in + * single mode is 32MB. If this VF BAR is smaller than + * 32MB, but still too large for a segmented window + * then we can't map it and need to disable SR-IOV for + * this device. + */ + if (vf_bar_sz < SZ_32M) { + pci_err(pdev, "VF BAR%d: %pR can't be mapped in single PE mode\n", + i, res); + goto disable_iov; + } - for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { - res = &pdev->resource[i + PCI_IOV_RESOURCES]; - if (!res->flags || res->parent) + iov->m64_single_mode[i] = true; continue; + } - size = pci_iov_resource_size(pdev, i + PCI_IOV_RESOURCES); /* - * On PHB3, the minimum size alignment of M64 BAR in single - * mode is 32MB. + * This BAR can be mapped with one segmented window, so adjust + * the resource size to accommodate. 
*/ - if (iov->m64_single_mode && (size < SZ_32M)) - goto disable_iov; + pci_dbg(pdev, " Fixing VF BAR%d: %pR to\n", i, res); + res->end = res->start + vf_bar_sz * mul - 1; + pci_dbg(pdev, " %pR\n", res); - dev_dbg(&pdev->dev, " Fixing VF BAR%d: %pR to\n", i, res); - res->end = res->start + size * mul - 1; - dev_dbg(&pdev->dev, " %pR\n", res); - dev_info(&pdev->dev, "VF BAR%d: %pR (expanded to %d VFs for PE alignment)", + pci_info(pdev, "VF BAR%d: %pR (expanded to %d VFs for PE alignment)", i, res, mul); + + iov->need_shift = true; } + iov->vfs_expanded = mul; return; @@ -260,42 +256,40 @@ void pnv_pci_ioda_fixup_iov(struct pci_dev *pdev) resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev, int resno) { - struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); struct pnv_iov_data *iov = pnv_iov_get(pdev); resource_size_t align; + /* + * iov can be null if we have an SR-IOV device with IOV BAR that can't + * be placed in the m64 space (i.e. The BAR is 32bit or non-prefetch). + * In that case we don't allow VFs to be enabled since one of their + * BARs would not be placed in the correct PE. + */ + if (!iov) + return align; + if (!iov->vfs_expanded) + return align; + + align = pci_iov_resource_size(pdev, resno); + + /* + * If we're using single mode then we can just use the native VF BAR + * alignment. We validated that it's possible to use a single PE + * window above when we did the fixup. + */ + if (iov->m64_single_mode[resno - PCI_IOV_RESOURCES]) + return align; + /* * On PowerNV platform, IOV BAR is mapped by M64 BAR to enable the * SR-IOV. While from hardware perspective, the range mapped by M64 * BAR should be size aligned. * - * When IOV BAR is mapped with M64 BAR in Single PE mode, the extra - * powernv-specific hardware restriction is gone. But if just use the - * VF BAR size as the alignment, PF BAR / VF BAR may be allocated with - * in one segment of M64 #15, which introduces the PE conflict between - * PF and VF. Based on this, the minimum alignment of an IOV BAR is - * m64_segsize. - * * This function returns the total IOV BAR size if M64 BAR is in * Shared PE mode or just VF BAR size if not. * If the M64 BAR is in Single PE mode, return the VF BAR size or * M64 segment size if IOV BAR size is less. */ - align = pci_iov_resource_size(pdev, resno); - - /* - * iov can be null if we have an SR-IOV device with IOV BAR that can't - * be placed in the m64 space (i.e. The BAR is 32bit or non-prefetch). - * In that case we don't allow VFs to be enabled so just return the - * default alignment. - */ - if (!iov) - return align; - if (!iov->vfs_expanded) - return align; - if (iov->m64_single_mode) - return max(align, (resource_size_t)phb->ioda.m64_segsize); - return iov->vfs_expanded * align; } @@ -450,7 +444,7 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) continue; /* don't need single mode? map everything in one go! 
*/ - if (!iov->m64_single_mode) { + if (!iov->m64_single_mode[i]) { win = pnv_pci_alloc_m64_bar(phb, iov); if (win < 0) goto m64_failed; @@ -543,6 +537,8 @@ static int pnv_pci_vf_resource_shift(struct pci_dev *dev, int offset) res = &dev->resource[i + PCI_IOV_RESOURCES]; if (!res->flags || !res->parent) continue; + if (iov->m64_single_mode[i]) + continue; /* * The actual IOV BAR range is determined by the start address @@ -574,6 +570,8 @@ static int pnv_pci_vf_resource_shift(struct pci_dev *dev, int offset) res = &dev->resource[i + PCI_IOV_RESOURCES]; if (!res->flags || !res->parent) continue; + if (iov->m64_single_mode[i]) + continue; size = pci_iov_resource_size(dev, i + PCI_IOV_RESOURCES); res2 = *res; @@ -619,8 +617,8 @@ static void pnv_pci_sriov_disable(struct pci_dev *pdev) /* Release VF PEs */ pnv_ioda_release_vf_PE(pdev); - /* Un-shift the IOV BAR resources */ - if (!iov->m64_single_mode) + /* Un-shift the IOV BARs if we need to */ + if (iov->need_shift) pnv_pci_vf_resource_shift(pdev, -base_pe); /* Release M64 windows */ @@ -738,9 +736,8 @@ static int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) * the IOV BAR according to the PE# allocated to the VFs. * Otherwise, the PE# for the VF will conflict with others. */ - if (!iov->m64_single_mode) { - ret = pnv_pci_vf_resource_shift(pdev, - base_pe->pe_number); + if (iov->need_shift) { + ret = pnv_pci_vf_resource_shift(pdev, base_pe->pe_number); if (ret) goto shift_failed; } diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h index 41a6f4e938e4..902e928c7c22 100644 --- a/arch/powerpc/platforms/powernv/pci.h +++ b/arch/powerpc/platforms/powernv/pci.h @@ -243,8 +243,15 @@ struct pnv_iov_data { /* pointer to the array of VF PEs. num_vfs long*/ struct pnv_ioda_pe *vf_pe_arr; - /* Did we map the VF BARs with single-PE IODA BARs? */ - bool m64_single_mode; + /* Did we map the VF BAR with single-PE IODA BARs? */ + bool m64_single_mode[PCI_SRIOV_NUM_BARS]; + + /* + * True if we're using any segmented windows. In that case we need to + * shift the start of the IOV resource by the segment corresponding to + * the allocated PE. 
+ */ + bool need_shift; /* * Bit mask used to track which m64 windows are used to map the From patchwork Wed Jul 22 06:57:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Oliver O'Halloran X-Patchwork-Id: 1333620 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BBRsw0Jymz9sPf for ; Wed, 22 Jul 2020 17:29:08 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=rAtaviGr; dkim-atps=neutral Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BBRsv2FTCzDqpM for ; Wed, 22 Jul 2020 17:29:07 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com (client-ip=2607:f8b0:4864:20::543; helo=mail-pg1-x543.google.com; envelope-from=oohall@gmail.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: lists.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256 header.s=20161025 header.b=rAtaviGr; dkim-atps=neutral Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BBRB40jZbzDqv3 for ; Wed, 22 Jul 2020 16:58:03 +1000 (AEST) Received: by mail-pg1-x543.google.com with SMTP id g67so690824pgc.8 for ; Tue, 21 Jul 2020 23:58:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=gYLUIQDudARZS2ppTMcFMprGe8T6RymlErRvYcS+NhM=; b=rAtaviGrOwMolVVqU1D9XlB3v5933VsKhh6lA01jOHALnx9XbAufHJq3ivE3y0T9ry diqefzccaVT01S822Casoo984eYUzpgIlza+sEGxhmoJyzCdeE/C6dMShWkkeTTAtZOd Xf/sSo3EKMPwoubZImsRmWLbHZ9ZzpbRSKgvS4S+WPM1HYjhKKDbFizh/LXUdY8fL7Yl hGT4Pf5sU3VEJK297YAx4CwedTuBKeMBmoP40agmO+EHwL4TPBe4+lvktEpctuwofiic zBL2/n3eHKcjdedIQxXqZDCMhaoJxFKi8QEngip2Qea4La7G7E1gi6XQL0CJoro9zSaw hEng== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=gYLUIQDudARZS2ppTMcFMprGe8T6RymlErRvYcS+NhM=; b=AUb6dFxSCOQvi4InREZqaHZ6wNa8hDYEBiCd9eiycLw3GjdFkh3LYSiDwd2mEUTJNg cKUCqAOdIpvYNKDtEy0f2aWAAzcA+3TXlbbITuHyPnt2rILE0qejy95wdLoPPilKNpV3 HALREOxRU1wqUraqiFa+58mUvTtyo5Q1SUKi166Q+bm94VWlRdem0l77V8AR1DdnoiYX 8VsDiJ9lde4JSNzsQDJj9dBa1QuspBZEUeeJdcVliyUcK9mkzzZf5RKzmxYyJhmVfXXr zitDDr/bRfBUrjC0iC8VpbJXRH0RqinabNXIFB3rVRf+8++4V99383+enH1XX2dlBlja y+tw== X-Gm-Message-State: AOAM533q+P+OA2kUte5Da71PePiuaACWFAq3eu6SpwOL6PNncXJxFm8O yA66AzfvFAoMCxh7KGE9jzidAk9AbPk= 
X-Google-Smtp-Source: ABdhPJz4EZ9W24c91caTpH3hTXw0j8Y/W7l7VIpKj13z656hPDgZSwpl3lGHWOkN/B23y2E8XXYa3w== X-Received: by 2002:a62:1d90:: with SMTP id d138mr27972281pfd.159.1595401081093; Tue, 21 Jul 2020 23:58:01 -0700 (PDT) Received: from localhost.ibm.com (203-219-159-24.tpgi.com.au. [203.219.159.24]) by smtp.gmail.com with ESMTPSA id c14sm22645104pfj.82.2020.07.21.23.57.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 Jul 2020 23:58:00 -0700 (PDT) From: Oliver O'Halloran To: linuxppc-dev@lists.ozlabs.org Subject: [PATCH v2 16/16] powerpc/powernv/sriov: Remove vfs_expanded Date: Wed, 22 Jul 2020 16:57:15 +1000 Message-Id: <20200722065715.1432738-16-oohall@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200722065715.1432738-1-oohall@gmail.com> References: <20200722065715.1432738-1-oohall@gmail.com> MIME-Version: 1.0 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Oliver O'Halloran Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" Previously iov->vfs_expanded was used for two purposes. 1) To work out how much we need to multiply the per-VF BAR size by to figure out the total space required for the IOV BAR. 2) To indicate that IOV is not usable with this device (vfs_expanded == 0). We don't really need the field for either since the multiplier in 1) is always the number of PEs supported by the PHB. Similarly, we don't really need it in 2) either since the IOV data field will be NULL if we can't use IOV with the device. Signed-off-by: Oliver O'Halloran --- v2: New --- arch/powerpc/platforms/powernv/pci-sriov.c | 9 +++------ arch/powerpc/platforms/powernv/pci.h | 3 --- 2 files changed, 3 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index 76215d01405b..5742215b4093 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -213,8 +213,6 @@ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev) iov->need_shift = true; } - iov->vfs_expanded = mul; - return; disable_iov: @@ -256,6 +254,7 @@ void pnv_pci_ioda_fixup_iov(struct pci_dev *pdev) resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev, int resno) { + struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus); struct pnv_iov_data *iov = pnv_iov_get(pdev); resource_size_t align; @@ -267,8 +266,6 @@ resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev, */ if (!iov) return align; - if (!iov->vfs_expanded) - return align; align = pci_iov_resource_size(pdev, resno); @@ -290,7 +287,7 @@ resource_size_t pnv_pci_iov_resource_alignment(struct pci_dev *pdev, * If the M64 BAR is in Single PE mode, return the VF BAR size or * M64 segment size if IOV BAR size is less. 
*/ - return iov->vfs_expanded * align; + return phb->ioda.total_pe_num * align; } static int pnv_pci_vf_release_m64(struct pci_dev *pdev, u16 num_vfs) @@ -708,7 +705,7 @@ static int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs) return -ENXIO; } - if (!iov->vfs_expanded) { + if (!iov) { dev_info(&pdev->dev, "don't support this SRIOV device" " with non 64bit-prefetchable IOV BAR\n"); return -ENOSPC; diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h index 902e928c7c22..c8cc152bdf52 100644 --- a/arch/powerpc/platforms/powernv/pci.h +++ b/arch/powerpc/platforms/powernv/pci.h @@ -234,9 +234,6 @@ void pnv_ioda_free_pe(struct pnv_ioda_pe *pe); * and this structure is used to keep track of it all. */ struct pnv_iov_data { - /* number of VFs IOV BAR expanded. FIXME: rename this to something less bad */ - u16 vfs_expanded; - /* number of VFs enabled */ u16 num_vfs;
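With vfs_expanded gone, the alignment rule for an IOV BAR reduces to the sketch below. This is a simplified illustration of the end state after the series rather than the exact kernel function (the name iov_bar_alignment_sketch() is invented), and it computes the size before the early returns; note that in the hunks above the !iov path returns align before it has been assigned.

static resource_size_t iov_bar_alignment_sketch(struct pci_dev *pdev, int resno)
{
	struct pnv_phb *phb = pci_bus_to_pnvhb(pdev->bus);
	struct pnv_iov_data *iov = pnv_iov_get(pdev);
	resource_size_t align = pci_iov_resource_size(pdev, resno);

	/* No iov data means VFs can't be enabled; the native size will do. */
	if (!iov)
		return align;

	/* Single PE mode: each VF BAR gets its own window, no expansion. */
	if (iov->m64_single_mode[resno - PCI_IOV_RESOURCES])
		return align;

	/* Segmented mode: one segment per PE, so scale by the PHB's PE count. */
	return phb->ioda.total_pe_num * align;
}

In other words, the multiplier really is just phb->ioda.total_pe_num, which is why the cached vfs_expanded value (and the FIXME asking for a better name) can go.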