Message ID: 1573040408-3831-2-git-send-email-jonathan.derrick@intel.com
State: Changes Requested
Delegated to: Lorenzo Pieralisi
Series: PCI: vmd: Reducing tail latency by affining to the storage stack
On Wed, Nov 06, 2019 at 04:40:06AM -0700, Jon Derrick wrote:
> +	max_vectors = min_t(int, vmd->msix_count, num_possible_cpus() + 1);
> +	if (nvec > max_vectors)
> +		return max_vectors;

If vmd's msix vectors beyond num_possible_cpus() are inaccessible, why
not just limit vmd's msix_count the same way?
On Thu, 2019-11-07 at 03:02 +0900, Keith Busch wrote:
> On Wed, Nov 06, 2019 at 04:40:06AM -0700, Jon Derrick wrote:
> > +	max_vectors = min_t(int, vmd->msix_count, num_possible_cpus() + 1);
> > +	if (nvec > max_vectors)
> > +		return max_vectors;
>
> If vmd's msix vectors beyond num_possible_cpus() are inaccessible, why
> not just limit vmd's msix_count the same way?

Yes I could probably do that instead
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 8bce647..ebe7ff6 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -260,9 +260,20 @@ static int vmd_msi_prepare(struct irq_domain *domain, struct device *dev,
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct vmd_dev *vmd = vmd_from_bus(pdev->bus);
+	int max_vectors;
 
-	if (nvec > vmd->msix_count)
-		return vmd->msix_count;
+	/*
+	 * VMD exists primarily as an NVMe storage domain. It thus makes sense
+	 * to reduce the number of VMD vectors exposed to child devices using
+	 * the same calculation as the NVMe driver. This allows better affinity
+	 * matching along the entire stack when multiple device vectors share
+	 * VMD IRQ lists. One additional VMD vector is reserved for pciehp and
+	 * non-NVMe interrupts, and NVMe Admin Queue interrupts can also be
+	 * placed on this slow interrupt.
+	 */
+	max_vectors = min_t(int, vmd->msix_count, num_possible_cpus() + 1);
+	if (nvec > max_vectors)
+		return max_vectors;
 
 	memset(arg, 0, sizeof(*arg));
 	return 0;
In order to better affine VMD IRQs, VMD IRQ lists, and child NVMe
devices, reduce the number of VMD vectors exposed to the MSI domain
using the same calculation as NVMe. VMD will still retain one vector
for pciehp and non-NVMe vectors. The remainder will match the maximum
number of NVMe child device IO vectors.

Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 drivers/pci/controller/vmd.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)