Message ID: 20241028100341.16631-1-michal.swiatkowski@linux.intel.com
Series: ice: managing MSI-X in driver
On 10/28/2024 3:03 AM, Michal Swiatkowski wrote:
> Hi,
>
> This is another attempt to allow the user to manage the number of MSI-X
> vectors used for each feature in ice. The first attempt went through the
> devlink resources API and wasn't accepted upstream; static MSI-X
> allocation via devlink resources also isn't very user friendly.
>
> This attempt takes a more dynamic approach: "dynamic" across the whole
> kernel when the platform supports it, and "dynamic" within the driver
> when it doesn't.
>
> To achieve that, reuse the global devlink parameters pf_msix_max and
> pf_msix_min. This fits how ice hardware counts MSI-X: the MSI-X count
> reported on PCI is the total for the whole card (including MSI-X for
> VFs). pf_msix_max lets the user statically set how many MSI-X vectors
> the PF gets and how many are reserved for VFs.
>
> pf_msix_min sets the minimum number of MSI-X vectors with which the ice
> driver can still probe correctly.
>
> Meaning of this field for dynamic vs static allocation:
> - on systems with dynamic MSI-X allocation support
>   * allocate pf_msix_min statically, the rest dynamically
> - on systems without dynamic MSI-X allocation support
>   * try to allocate pf_msix_max statically; the minimum acceptable
>     result is pf_msix_min
>
> As Jesse and Piotr suggested, pf_msix_max and pf_msix_min can (and
> probably should) be stored in NVM. This patchset doesn't implement
> that.
>
> The dynamic (kernel or driver) approach means that splitting MSI-X
> between RDMA and eth on MSI-X shortage is no longer correct. It can
> work when the dynamic part is only on the driver side, but not when it
> is on the kernel side.
>
> Remove this code and move to allocating MSI-X feature by feature. If
> there are no more MSI-X vectors left for a feature, the feature works
> with fewer MSI-X vectors or is turned off.
>
> There is a regression here. With MSI-X splitting, the user could run
> RDMA and eth even on a system without enough MSI-X. Now only eth will
> work. RDMA can be turned on by lowering the number of PF queues and
> reprobing the RDMA driver.
>
> Example:
> 72 CPUs; eth, RDMA and flow director (1 MSI-X); 1 MSI-X for OICR on the
> PF, and 1 more for RDMA. The card uses 1 + 72 + 1 + 72 + 1 = 147.
>
> We set pf_msix_min = 2, pf_msix_max = 128:
>
> OICR: 1
> eth: 72
> flow director: 1
> RDMA: 128 - 74 = 54
>
> We can change the number of queues on the PF to 36 and do a devlink
> reinit:
>
> OICR: 1
> eth: 36
> RDMA: 73
> flow director: 1
>
> We can also turn RDMA off (implemented in "ice: enable_rdma devlink
> param"):
>
> OICR: 1
> eth: 72
> RDMA: 0 (turned off)
> flow director: 1
>
> After these changes we have a static base vector for SR-IOV (probably
> SIOV in the future). The last patch of this series simplifies the VF
> MSI-X management code based on the static vector.
>
> Changing the number of queues via ethtool now also changes MSI-X. If
> there are enough MSI-X vectors, the mapping is always one to one. When
> there are not enough, there will be more queues than MSI-X vectors.
> There is currently no way to set how many queues should be used per
> MSI-X vector. Maybe we should introduce another ethtool param for it,
> something like queues_per_vector?
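The probe-time policy described above maps naturally onto the in-tree
PCI/MSI helpers. A minimal sketch, assuming pci_msix_can_alloc_dyn() and
pci_alloc_irq_vectors() as the building blocks (illustrative only, not
the actual ice code):

    #include <linux/pci.h>

    /* Reserve the static MSI-X base at probe time. With dynamic MSI-X
     * support, only pf_msix_min is taken up front and each feature adds
     * vectors later (e.g. via pci_msix_alloc_irq_at()); without it,
     * take as much as possible in one shot, failing the probe only
     * when fewer than pf_msix_min vectors are available.
     */
    static int msix_base_alloc(struct pci_dev *pdev, u16 msix_min, u16 msix_max)
    {
            if (pci_msix_can_alloc_dyn(pdev))
                    return pci_alloc_irq_vectors(pdev, msix_min, msix_min,
                                                 PCI_IRQ_MSIX);

            return pci_alloc_irq_vectors(pdev, msix_min, msix_max,
                                         PCI_IRQ_MSIX);
    }

Since both parameters use the driverinit cmode, the example values above
would be applied roughly like this (PCI address is a placeholder):

    devlink dev param set pci/0000:af:00.0 name pf_msix_max value 128 cmode driverinit
    devlink dev param set pci/0000:af:00.0 name pf_msix_min value 2 cmode driverinit
    devlink dev reload pci/0000:af:00.0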
> v5 --> v6: [5]
> * set the default MSI-X max value based on needs instead of a const
>   define (patch 3)
>
> v4 --> v5: [4]
> * count combined queues in ethtool for the case where vectors aren't
>   mapped 1:1 to queues (patch 1)
> * change min_t to min where the cast isn't needed (and can hide
>   problems) (patch 4)
> * load the msix_max and msix_min values after devlink reload; this was
>   accidentally dropped when the loading in the probe path was removed
>   to mitigate an error from devl_param_driverinit...() (patch 2)
> * add documentation for the new parameters in devlink/ice (patch 2)
>
> v3 --> v4: [3]
> * drop unnecessary text in devlink validation comments
> * assume that devl_param_driverinit...() shouldn't return an error in
>   the normal execution path
>
> v2 --> v3: [2]
> * move flow director init before RDMA init
> * fix unrolling of the RDMA MSI-X allocation
> * add a comment in the commit message about lowering the RDMA control
>   MSI-X amount
>
> v1 --> v2: [1]
> * change the permanent MSI-X cmode parameters to driverinit
> * remove locking during devlink parameter registration (it is now
>   locked for the whole init/deinit part)
>
> [5] https://lore.kernel.org/netdev/20241024121230.5861-1-michal.swiatkowski@linux.intel.com/T/#t
> [4] https://lore.kernel.org/netdev/20240930120402.3468-1-michal.swiatkowski@linux.intel.com/
> [3] https://lore.kernel.org/netdev/20240808072016.10321-1-michal.swiatkowski@linux.intel.com/
> [2] https://lore.kernel.org/netdev/20240801093115.8553-1-michal.swiatkowski@linux.intel.com/
> [1] https://lore.kernel.org/netdev/20240213073509.77622-1-michal.swiatkowski@linux.intel.com/

This version looks good to me! A lot of great simplification here too.

Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>

Thanks,
Jake

> Michal Swiatkowski (9):
>   ice: count combined queues using Rx/Tx count
>   ice: devlink PF MSI-X max and min parameter
>   ice: remove splitting MSI-X between features
>   ice: get rid of num_lan_msix field
>   ice, irdma: move interrupts code to irdma
>   ice: treat dyn_allowed only as suggestion
>   ice: enable_rdma devlink param
>   ice: simplify VF MSI-X managing
>   ice: init flow director before RDMA
>
>  Documentation/networking/devlink/ice.rst      |  11 +
>  drivers/infiniband/hw/irdma/hw.c              |   2 -
>  drivers/infiniband/hw/irdma/main.c            |  46 ++-
>  drivers/infiniband/hw/irdma/main.h            |   3 +
>  .../net/ethernet/intel/ice/devlink/devlink.c  | 102 ++++++-
>  drivers/net/ethernet/intel/ice/ice.h          |  21 +-
>  drivers/net/ethernet/intel/ice/ice_base.c     |  10 +-
>  drivers/net/ethernet/intel/ice/ice_ethtool.c  |   9 +-
>  drivers/net/ethernet/intel/ice/ice_idc.c      |  64 +---
>  drivers/net/ethernet/intel/ice/ice_irq.c      | 275 ++++++------------
>  drivers/net/ethernet/intel/ice/ice_irq.h      |  13 +-
>  drivers/net/ethernet/intel/ice/ice_lib.c      |  35 ++-
>  drivers/net/ethernet/intel/ice/ice_main.c     |   6 +-
>  drivers/net/ethernet/intel/ice/ice_sriov.c    | 154 +---------
>  include/linux/net/intel/iidc.h                |   2 +
>  15 files changed, 328 insertions(+), 425 deletions(-)