Message ID | 20240510152620.2227312-8-aleksander.lobakin@intel.com |
---|---|
State | RFC |
Series | idpf: XDP chapter I: convert Rx to libeth |
On 5/10/2024 8:26 AM, Alexander Lobakin wrote:
> Currently, there's no HW supporting idpf in the singleq model. Still,
> this dead code is supported by the driver and often times add hotpath

The driver supports the HW in single queue mode (not the other way
around), so I would like to point out that all current HW on which idpf
can load supports this model.

> branches and redundant cacheline accesses.
> While it can't currently be removed, add CONFIG_IDPF_SINGLEQ and build
> the singleq code only when it's enabled manually. This corresponds to
> -10 Kb of object code size and a good bunch of hotpath checks.
> idpf_is_queue_model_split() works as a gate and compiles out to `true`
> when the config option is disabled.

Compiling singleq out does introduce an issue for the users that would
end up without a netdev if the driver is loaded on a setup where the
Control Plane is configured for singleq mode. In that scenario the user
will have to recompile the driver/kernel in order to load the driver
successfully.

>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
>  drivers/net/ethernet/intel/Kconfig          | 13 +---------
>  drivers/net/ethernet/intel/idpf/Kconfig     | 26 +++++++++++++++++++
>  drivers/net/ethernet/intel/idpf/Makefile    |  3 ++-
>  drivers/net/ethernet/intel/idpf/idpf.h      |  3 ++-
>  drivers/net/ethernet/intel/idpf/idpf_txrx.c |  2 +-
>  .../net/ethernet/intel/idpf/idpf_virtchnl.c | 15 ++++++++---
>  6 files changed, 43 insertions(+), 19 deletions(-)
>  create mode 100644 drivers/net/ethernet/intel/idpf/Kconfig
>
> diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
> index e0287fbd501d..0375c7448a57 100644
> --- a/drivers/net/ethernet/intel/Kconfig
> +++ b/drivers/net/ethernet/intel/Kconfig
> @@ -384,17 +384,6 @@ config IGC_LEDS
>            Optional support for controlling the NIC LED's with the netdev
>            LED trigger.
>
> -config IDPF
> -       tristate "Intel(R) Infrastructure Data Path Function Support"
> -       depends on PCI_MSI
> -       select DIMLIB
> -       select PAGE_POOL
> -       select PAGE_POOL_STATS
> -       help
> -         This driver supports Intel(R) Infrastructure Data Path Function
> -         devices.
> -
> -         To compile this driver as a module, choose M here. The module
> -         will be called idpf.
> +source "drivers/net/ethernet/intel/idpf/Kconfig"
>
>  endif # NET_VENDOR_INTEL
> diff --git a/drivers/net/ethernet/intel/idpf/Kconfig b/drivers/net/ethernet/intel/idpf/Kconfig
> new file mode 100644
> index 000000000000..bee83a40f218
> --- /dev/null
> +++ b/drivers/net/ethernet/intel/idpf/Kconfig
> @@ -0,0 +1,26 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +# Copyright (C) 2024 Intel Corporation
> +
> +config IDPF
> +       tristate "Intel(R) Infrastructure Data Path Function Support"
> +       depends on PCI_MSI
> +       select DIMLIB
> +       select PAGE_POOL
> +       select PAGE_POOL_STATS
> +       help
> +         This driver supports Intel(R) Infrastructure Data Path Function
> +         devices.
> +
> +         To compile this driver as a module, choose M here. The module
> +         will be called idpf.
> +
> +if IDPF
> +
> +config IDPF_SINGLEQ
> +       bool "idpf singleq support"
> +       help
> +         This option enables support for legacy single Rx/Tx queues w/no
> +         completion and fill queues. Only enable if you have such hardware

This description is not accurate - all HW supports single queue. The
configuration is done by the Control Plane. Furthermore, without access
to the Control Plane config, there is no way for the user to know what
mode is enabled.

Thanks,
Emil

> +         as it increases the driver size and adds runtme checks on hotpath.
> +
> +endif # IDPF
> diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile
> index 6844ead2f3ac..2ce01a0b5898 100644
> --- a/drivers/net/ethernet/intel/idpf/Makefile
> +++ b/drivers/net/ethernet/intel/idpf/Makefile
> @@ -12,7 +12,8 @@ idpf-y := \
>         idpf_ethtool.o \
>         idpf_lib.o \
>         idpf_main.o \
> -       idpf_singleq_txrx.o \
>         idpf_txrx.o \
>         idpf_virtchnl.o \
>         idpf_vf_dev.o
> +
> +idpf-$(CONFIG_IDPF_SINGLEQ) += idpf_singleq_txrx.o
> diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
> index f9e43d171f17..5d9529f5b41b 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf.h
> +++ b/drivers/net/ethernet/intel/idpf/idpf.h
> @@ -599,7 +599,8 @@ struct idpf_adapter {
>   */
>  static inline int idpf_is_queue_model_split(u16 q_model)
>  {
> -       return q_model == VIRTCHNL2_QUEUE_MODEL_SPLIT;
> +       return !IS_ENABLED(CONFIG_IDPF_SINGLEQ) ||
> +              q_model == VIRTCHNL2_QUEUE_MODEL_SPLIT;
>  }
>
>  #define idpf_is_cap_ena(adapter, field, flag) \
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 4aa5ee781bd7..2bc1a5a0b50f 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -1306,7 +1306,7 @@ static void idpf_vport_calc_numq_per_grp(struct idpf_vport *vport,
>  static void idpf_rxq_set_descids(const struct idpf_vport *vport,
>                                   struct idpf_rx_queue *q)
>  {
> -       if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> +       if (idpf_is_queue_model_split(vport->rxq_model)) {
>                 q->rxdids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
>         } else {
>                 if (vport->base_rxd)
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
> index 44602b87cd41..d1705fcb701a 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
> @@ -1256,12 +1256,12 @@ int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
>         vport_msg->vport_type = cpu_to_le16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
>         vport_msg->vport_index = cpu_to_le16(idx);
>
> -       if (adapter->req_tx_splitq)
> +       if (adapter->req_tx_splitq || !IS_ENABLED(CONFIG_IDPF_SINGLEQ))
>                 vport_msg->txq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
>         else
>                 vport_msg->txq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
>
> -       if (adapter->req_rx_splitq)
> +       if (adapter->req_rx_splitq || !IS_ENABLED(CONFIG_IDPF_SINGLEQ))
>                 vport_msg->rxq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
>         else
>                 vport_msg->rxq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
> @@ -1323,10 +1323,17 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport)
>
>         vport_msg = adapter->vport_params_recvd[vport->idx];
>
> +       if (!IS_ENABLED(CONFIG_IDPF_SINGLEQ) &&
> +           (vport_msg->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE ||
> +            vport_msg->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)) {
> +               dev_err(&adapter->pdev->dev, "singleq mode requested, but not compiled-in\n");
> +               return -EOPNOTSUPP;
> +       }
> +
>         rx_desc_ids = le64_to_cpu(vport_msg->rx_desc_ids);
>         tx_desc_ids = le64_to_cpu(vport_msg->tx_desc_ids);
>
> -       if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> +       if (idpf_is_queue_model_split(vport->rxq_model)) {
>                 if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M)) {
>                         dev_info(&adapter->pdev->dev, "Minimum RX descriptor support not provided, using the default\n");
>                         vport_msg->rx_desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M);
> @@ -1336,7 +1343,7 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport)
>                         vport->base_rxd = true;
>         }
>
> -       if (vport->txq_model != VIRTCHNL2_QUEUE_MODEL_SPLIT)
> +       if (!idpf_is_queue_model_split(vport->txq_model))
>                 return 0;
>
>         if ((tx_desc_ids & MIN_SUPPORT_TXDID) != MIN_SUPPORT_TXDID) {
From: Tantilov, Emil S <emil.s.tantilov@intel.com>
Date: Mon, 13 May 2024 19:01:30 -0700

> On 5/10/2024 8:26 AM, Alexander Lobakin wrote:
>> Currently, there's no HW supporting idpf in the singleq model. Still,
>> this dead code is supported by the driver and often times add hotpath

> The driver supports the HW in single queue mode (not the other way
> around), so I would like to point out that all current HW on which idpf
> can load supports this model.

But no hardware currently wants to work in this mode.

>
>> branches and redundant cacheline accesses.
>> While it can't currently be removed, add CONFIG_IDPF_SINGLEQ and build
>> the singleq code only when it's enabled manually. This corresponds to
>> -10 Kb of object code size and a good bunch of hotpath checks.
>> idpf_is_queue_model_split() works as a gate and compiles out to `true`
>> when the config option is disabled.

> Compiling singleq out does introduce an issue for the users that would
> end up without a netdev if the driver is loaded on a setup where the

There will be a message in dmesg saying that the support is not
compiled in.

> Control Plane is configured for singleq mode. In that scenario the user

Any real world examples where the CP requests singleq mode?

> will have to recompile the driver/kernel in order to load the driver
> successfully.

All distros will have it enabled as usual.

>
>>
>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
>> ---
>>  drivers/net/ethernet/intel/Kconfig          | 13 +---------
>>  drivers/net/ethernet/intel/idpf/Kconfig     | 26 +++++++++++++++++++
>>  drivers/net/ethernet/intel/idpf/Makefile    |  3 ++-
>>  drivers/net/ethernet/intel/idpf/idpf.h      |  3 ++-
>>  drivers/net/ethernet/intel/idpf/idpf_txrx.c |  2 +-
>>  .../net/ethernet/intel/idpf/idpf_virtchnl.c | 15 ++++++++---
>>  6 files changed, 43 insertions(+), 19 deletions(-)
>>  create mode 100644 drivers/net/ethernet/intel/idpf/Kconfig
>>
>> diff --git a/drivers/net/ethernet/intel/Kconfig
>> b/drivers/net/ethernet/intel/Kconfig
>> index e0287fbd501d..0375c7448a57 100644
>> --- a/drivers/net/ethernet/intel/Kconfig
>> +++ b/drivers/net/ethernet/intel/Kconfig
>> @@ -384,17 +384,6 @@ config IGC_LEDS
>>            Optional support for controlling the NIC LED's with the netdev
>>            LED trigger.
>> -config IDPF
>> -       tristate "Intel(R) Infrastructure Data Path Function Support"
>> -       depends on PCI_MSI
>> -       select DIMLIB
>> -       select PAGE_POOL
>> -       select PAGE_POOL_STATS
>> -       help
>> -         This driver supports Intel(R) Infrastructure Data Path Function
>> -         devices.
>> -
>> -         To compile this driver as a module, choose M here. The module
>> -         will be called idpf.
>> +source "drivers/net/ethernet/intel/idpf/Kconfig"
>>  endif # NET_VENDOR_INTEL
>> diff --git a/drivers/net/ethernet/intel/idpf/Kconfig
>> b/drivers/net/ethernet/intel/idpf/Kconfig
>> new file mode 100644
>> index 000000000000..bee83a40f218
>> --- /dev/null
>> +++ b/drivers/net/ethernet/intel/idpf/Kconfig
>> @@ -0,0 +1,26 @@
>> +# SPDX-License-Identifier: GPL-2.0-only
>> +# Copyright (C) 2024 Intel Corporation
>> +
>> +config IDPF
>> +       tristate "Intel(R) Infrastructure Data Path Function Support"
>> +       depends on PCI_MSI
>> +       select DIMLIB
>> +       select PAGE_POOL
>> +       select PAGE_POOL_STATS
>> +       help
>> +         This driver supports Intel(R) Infrastructure Data Path Function
>> +         devices.
>> +
>> +         To compile this driver as a module, choose M here. The module
>> +         will be called idpf.
>> +
>> +if IDPF
>> +
>> +config IDPF_SINGLEQ
>> +       bool "idpf singleq support"
>> +       help
>> +         This option enables support for legacy single Rx/Tx queues w/no
>> +         completion and fill queues. Only enable if you have such hardware

> This description is not accurate - all HW supports single queue. The
> configuration is done by the Control Plane. Furthermore, without access
> to the Control Plane config, there is no way for the user to know what
> mode is enabled.

I can rephrase to "if your hardware is configured to work in singleq
mode". Anyway, during both the internal review and this one, still no
one gave me a real-world example where the HW wants singleq mode.

When the idpf made it into the kernel, singleq didn't work, which was
proven by several patches from me which fixed critical issues. If it's
not tested and no HW wants this when splitq is available, why decrease
perf by dead code?

>
> Thanks,
> Emil

Thanks,
Olek
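The dmesg rejection discussed in the reply above comes from the check the patch adds at the top of idpf_check_supported_desc_ids(): refuse the negotiated queue model early instead of bringing up a vport the build cannot drive. A rough, self-contained sketch of that pattern follows; the DEMO_* names and the standalone main() are illustrative stand-ins for this example only, not the driver's code.

```c
/*
 * Standalone sketch of the rejection pattern added by the patch: if the
 * negotiated queue model is single-queue but single-queue support was not
 * built in, report it once and bail out with -EOPNOTSUPP before any queues
 * are set up. All DEMO_* names are placeholders, not driver identifiers.
 */
#include <errno.h>
#include <stdio.h>

#define DEMO_QUEUE_MODEL_SINGLE	0
#define DEMO_QUEUE_MODEL_SPLIT	1

/* Stand-in for IS_ENABLED(CONFIG_IDPF_SINGLEQ); flip to 1 to "compile in". */
#define DEMO_SINGLEQ_ENABLED	0

static int demo_check_queue_models(int rxq_model, int txq_model)
{
	if (!DEMO_SINGLEQ_ENABLED &&
	    (rxq_model == DEMO_QUEUE_MODEL_SINGLE ||
	     txq_model == DEMO_QUEUE_MODEL_SINGLE)) {
		/* The driver uses dev_err() here; stderr stands in for dmesg. */
		fprintf(stderr, "singleq mode requested, but not compiled-in\n");
		return -EOPNOTSUPP;
	}

	return 0;
}

int main(void)
{
	/* A control plane asking for single-queue Rx is refused up front. */
	return demo_check_queue_models(DEMO_QUEUE_MODEL_SINGLE,
				       DEMO_QUEUE_MODEL_SPLIT) ? 1 : 0;
}
```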
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index e0287fbd501d..0375c7448a57 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -384,17 +384,6 @@ config IGC_LEDS
           Optional support for controlling the NIC LED's with the netdev
           LED trigger.

-config IDPF
-       tristate "Intel(R) Infrastructure Data Path Function Support"
-       depends on PCI_MSI
-       select DIMLIB
-       select PAGE_POOL
-       select PAGE_POOL_STATS
-       help
-         This driver supports Intel(R) Infrastructure Data Path Function
-         devices.
-
-         To compile this driver as a module, choose M here. The module
-         will be called idpf.
+source "drivers/net/ethernet/intel/idpf/Kconfig"

 endif # NET_VENDOR_INTEL
diff --git a/drivers/net/ethernet/intel/idpf/Kconfig b/drivers/net/ethernet/intel/idpf/Kconfig
new file mode 100644
index 000000000000..bee83a40f218
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/Kconfig
@@ -0,0 +1,26 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2024 Intel Corporation
+
+config IDPF
+       tristate "Intel(R) Infrastructure Data Path Function Support"
+       depends on PCI_MSI
+       select DIMLIB
+       select PAGE_POOL
+       select PAGE_POOL_STATS
+       help
+         This driver supports Intel(R) Infrastructure Data Path Function
+         devices.
+
+         To compile this driver as a module, choose M here. The module
+         will be called idpf.
+
+if IDPF
+
+config IDPF_SINGLEQ
+       bool "idpf singleq support"
+       help
+         This option enables support for legacy single Rx/Tx queues w/no
+         completion and fill queues. Only enable if you have such hardware
+         as it increases the driver size and adds runtme checks on hotpath.
+
+endif # IDPF
diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile
index 6844ead2f3ac..2ce01a0b5898 100644
--- a/drivers/net/ethernet/intel/idpf/Makefile
+++ b/drivers/net/ethernet/intel/idpf/Makefile
@@ -12,7 +12,8 @@ idpf-y := \
        idpf_ethtool.o \
        idpf_lib.o \
        idpf_main.o \
-       idpf_singleq_txrx.o \
        idpf_txrx.o \
        idpf_virtchnl.o \
        idpf_vf_dev.o
+
+idpf-$(CONFIG_IDPF_SINGLEQ) += idpf_singleq_txrx.o
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index f9e43d171f17..5d9529f5b41b 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -599,7 +599,8 @@ struct idpf_adapter {
  */
 static inline int idpf_is_queue_model_split(u16 q_model)
 {
-       return q_model == VIRTCHNL2_QUEUE_MODEL_SPLIT;
+       return !IS_ENABLED(CONFIG_IDPF_SINGLEQ) ||
+              q_model == VIRTCHNL2_QUEUE_MODEL_SPLIT;
 }

 #define idpf_is_cap_ena(adapter, field, flag) \
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 4aa5ee781bd7..2bc1a5a0b50f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -1306,7 +1306,7 @@ static void idpf_vport_calc_numq_per_grp(struct idpf_vport *vport,
 static void idpf_rxq_set_descids(const struct idpf_vport *vport,
                                  struct idpf_rx_queue *q)
 {
-       if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+       if (idpf_is_queue_model_split(vport->rxq_model)) {
                q->rxdids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
        } else {
                if (vport->base_rxd)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index 44602b87cd41..d1705fcb701a 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -1256,12 +1256,12 @@ int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
        vport_msg->vport_type = cpu_to_le16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
        vport_msg->vport_index = cpu_to_le16(idx);

-       if (adapter->req_tx_splitq)
+       if (adapter->req_tx_splitq || !IS_ENABLED(CONFIG_IDPF_SINGLEQ))
                vport_msg->txq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
        else
                vport_msg->txq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SINGLE);

-       if (adapter->req_rx_splitq)
+       if (adapter->req_rx_splitq || !IS_ENABLED(CONFIG_IDPF_SINGLEQ))
                vport_msg->rxq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
        else
                vport_msg->rxq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
@@ -1323,10 +1323,17 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport)

        vport_msg = adapter->vport_params_recvd[vport->idx];

+       if (!IS_ENABLED(CONFIG_IDPF_SINGLEQ) &&
+           (vport_msg->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE ||
+            vport_msg->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)) {
+               dev_err(&adapter->pdev->dev, "singleq mode requested, but not compiled-in\n");
+               return -EOPNOTSUPP;
+       }
+
        rx_desc_ids = le64_to_cpu(vport_msg->rx_desc_ids);
        tx_desc_ids = le64_to_cpu(vport_msg->tx_desc_ids);

-       if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+       if (idpf_is_queue_model_split(vport->rxq_model)) {
                if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M)) {
                        dev_info(&adapter->pdev->dev, "Minimum RX descriptor support not provided, using the default\n");
                        vport_msg->rx_desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M);
@@ -1336,7 +1343,7 @@ int idpf_check_supported_desc_ids(struct idpf_vport *vport)
                        vport->base_rxd = true;
        }

-       if (vport->txq_model != VIRTCHNL2_QUEUE_MODEL_SPLIT)
+       if (!idpf_is_queue_model_split(vport->txq_model))
                return 0;

        if ((tx_desc_ids & MIN_SUPPORT_TXDID) != MIN_SUPPORT_TXDID) {
Currently, there's no HW supporting idpf in the singleq model. Still,
this dead code is supported by the driver and often times add hotpath
branches and redundant cacheline accesses.
While it can't currently be removed, add CONFIG_IDPF_SINGLEQ and build
the singleq code only when it's enabled manually. This corresponds to
-10 Kb of object code size and a good bunch of hotpath checks.
idpf_is_queue_model_split() works as a gate and compiles out to `true`
when the config option is disabled.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 drivers/net/ethernet/intel/Kconfig          | 13 +---------
 drivers/net/ethernet/intel/idpf/Kconfig     | 26 +++++++++++++++++++
 drivers/net/ethernet/intel/idpf/Makefile    |  3 ++-
 drivers/net/ethernet/intel/idpf/idpf.h      |  3 ++-
 drivers/net/ethernet/intel/idpf/idpf_txrx.c |  2 +-
 .../net/ethernet/intel/idpf/idpf_virtchnl.c | 15 ++++++++---
 6 files changed, 43 insertions(+), 19 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/idpf/Kconfig
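The commit message's claim that idpf_is_queue_model_split() "compiles out to `true`" rests on IS_ENABLED() resolving to a compile-time constant. Below is a minimal standalone sketch of the same gating shape; the DEMO_* names are placeholders for this example, not driver identifiers.

```c
/*
 * Minimal sketch of the gating idea, outside the kernel build system.
 * DEMO_SINGLEQ_ENABLED plays the role of IS_ENABLED(CONFIG_IDPF_SINGLEQ);
 * with it set to 0, the helper below folds to a constant 'true', so the
 * else-branch becomes dead code the optimizer removes.
 */
#include <stdbool.h>
#include <stdio.h>

#define DEMO_SINGLEQ_ENABLED	0	/* 0: singleq support not built in */

#define DEMO_QUEUE_MODEL_SINGLE	0
#define DEMO_QUEUE_MODEL_SPLIT	1

static inline bool demo_is_queue_model_split(unsigned short q_model)
{
	/* Same shape as the patched idpf_is_queue_model_split(). */
	return !DEMO_SINGLEQ_ENABLED || q_model == DEMO_QUEUE_MODEL_SPLIT;
}

int main(void)
{
	unsigned short q_model = DEMO_QUEUE_MODEL_SINGLE;

	if (demo_is_queue_model_split(q_model))
		puts("split-queue path");
	else
		puts("single-queue path");	/* eliminated when the gate is 0 */

	return 0;
}
```

With the gate fixed at 0, the condition is constant, which mirrors why the hotpath checks and idpf_singleq_txrx.o drop out of the idpf build when CONFIG_IDPF_SINGLEQ is not set.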