Message ID: 20191014080033.12407-1-alobakin@dlink.ru
State: Accepted
Delegated to: David Miller
Series: [v2,net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()
From: Alexander Lobakin <alobakin@dlink.ru> Date: Mon, 14 Oct 2019 11:00:33 +0300 > Commit 323ebb61e32b4 ("net: use listified RX for handling GRO_NORMAL > skbs") made use of listified skb processing for the users of > napi_gro_frags(). > The same technique can be used in a way more common napi_gro_receive() > to speed up non-merged (GRO_NORMAL) skbs for a wide range of drivers > including gro_cells and mac80211 users. > This slightly changes the return value in cases where skb is being > dropped by the core stack, but it seems to have no impact on related > drivers' functionality. > gro_normal_batch is left untouched as it's very individual for every > single system configuration and might be tuned in manual order to > achieve an optimal performance. > > Signed-off-by: Alexander Lobakin <alobakin@dlink.ru> > Acked-by: Edward Cree <ecree@solarflare.com> Applied, thank you.
David Miller wrote 16.10.2019 04:16: > From: Alexander Lobakin <alobakin@dlink.ru> > Date: Mon, 14 Oct 2019 11:00:33 +0300 > >> Commit 323ebb61e32b4 ("net: use listified RX for handling GRO_NORMAL >> skbs") made use of listified skb processing for the users of >> napi_gro_frags(). >> The same technique can be used in a way more common napi_gro_receive() >> to speed up non-merged (GRO_NORMAL) skbs for a wide range of drivers >> including gro_cells and mac80211 users. >> This slightly changes the return value in cases where skb is being >> dropped by the core stack, but it seems to have no impact on related >> drivers' functionality. >> gro_normal_batch is left untouched as it's very individual for every >> single system configuration and might be tuned in manual order to >> achieve an optimal performance. >> >> Signed-off-by: Alexander Lobakin <alobakin@dlink.ru> >> Acked-by: Edward Cree <ecree@solarflare.com> > > Applied, thank you. David, Edward, Eric, Ilias, thank you for your time. Regards, ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
Hi, On Wed, Oct 16, 2019 at 10:31:31AM +0300, Alexander Lobakin wrote: > David Miller wrote 16.10.2019 04:16: > > From: Alexander Lobakin <alobakin@dlink.ru> > > Date: Mon, 14 Oct 2019 11:00:33 +0300 > > > > > Commit 323ebb61e32b4 ("net: use listified RX for handling GRO_NORMAL > > > skbs") made use of listified skb processing for the users of > > > napi_gro_frags(). > > > The same technique can be used in a way more common napi_gro_receive() > > > to speed up non-merged (GRO_NORMAL) skbs for a wide range of drivers > > > including gro_cells and mac80211 users. > > > This slightly changes the return value in cases where skb is being > > > dropped by the core stack, but it seems to have no impact on related > > > drivers' functionality. > > > gro_normal_batch is left untouched as it's very individual for every > > > single system configuration and might be tuned in manual order to > > > achieve an optimal performance. > > > > > > Signed-off-by: Alexander Lobakin <alobakin@dlink.ru> > > > Acked-by: Edward Cree <ecree@solarflare.com> > > > > Applied, thank you. > > David, Edward, Eric, Ilias, > thank you for your time. > > Regards, > ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ I am very sorry to be the bearer of bad news. It appears that this commit is causing a regression in Linux 5.4.0-rc8-next-20191122, preventing me from connecting to Wi-Fi networks. I have a Dell XPS 9370 (Intel Core i7-8650U) with Intel Wireless 8265 [8086:24fd]. I did a bisect, and this commit was named the culprit. I then applied the reverse patch on another clone of Linux next-20191122, and it started working. 6570bc79c0dfff0f228b7afd2de720fb4e84d61d net: core: use listified Rx for GRO_NORMAL in napi_gro_receive() You can see more at the bug report I filed at [0]. [0] https://bugzilla.kernel.org/show_bug.cgi?id=205647 I called on others at [0] to try to reproduce this - you should not pull a patch because of a single reporter - as I could be wrong. 
Please let me know if you want me to give more debugging information or test any potential fixes. I am happy to help to fix this. :) Kind regards, Nicholas Johnson
Nicholas Johnson wrote 25.11.2019 10:29: > Hi, > > On Wed, Oct 16, 2019 at 10:31:31AM +0300, Alexander Lobakin wrote: >> David Miller wrote 16.10.2019 04:16: >> > From: Alexander Lobakin <alobakin@dlink.ru> >> > Date: Mon, 14 Oct 2019 11:00:33 +0300 >> > >> > > Commit 323ebb61e32b4 ("net: use listified RX for handling GRO_NORMAL >> > > skbs") made use of listified skb processing for the users of >> > > napi_gro_frags(). >> > > The same technique can be used in a way more common napi_gro_receive() >> > > to speed up non-merged (GRO_NORMAL) skbs for a wide range of drivers >> > > including gro_cells and mac80211 users. >> > > This slightly changes the return value in cases where skb is being >> > > dropped by the core stack, but it seems to have no impact on related >> > > drivers' functionality. >> > > gro_normal_batch is left untouched as it's very individual for every >> > > single system configuration and might be tuned in manual order to >> > > achieve an optimal performance. >> > > >> > > Signed-off-by: Alexander Lobakin <alobakin@dlink.ru> >> > > Acked-by: Edward Cree <ecree@solarflare.com> >> > >> > Applied, thank you. >> >> David, Edward, Eric, Ilias, >> thank you for your time. >> >> Regards, >> ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ > > I am very sorry to be the bearer of bad news. It appears that this > commit is causing a regression in Linux 5.4.0-rc8-next-20191122, > preventing me from connecting to Wi-Fi networks. I have a Dell XPS 9370 > (Intel Core i7-8650U) with Intel Wireless 8265 [8086:24fd]. Hi! It's a bit strange, as this commit doesn't directly affect the packet flow. I don't have any iwlwifi hardware at the moment, so let's see if anyone else will be able to reproduce this (for now, it is the first report in the ~6 weeks since the patch was applied to net-next). Anyway, I'll investigate iwlwifi's Rx processing -- maybe I could find something driver-specific that might produce this. Thank you for the report. > I did a bisect, and this commit was named the culprit.
I then applied > the reverse patch on another clone of Linux next-20191122, and it > started working. > > 6570bc79c0dfff0f228b7afd2de720fb4e84d61d > net: core: use listified Rx for GRO_NORMAL in napi_gro_receive() > > You can see more at the bug report I filed at [0]. > > [0] > https://bugzilla.kernel.org/show_bug.cgi?id=205647 > > I called on others at [0] to try to reproduce this - you should not > pull > a patch because of a single reporter - as I could be wrong. > > Please let me know if you want me to give more debugging information or > test any potential fixes. I am happy to help to fix this. :) > > Kind regards, > Nicholas Johnson Regards, ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
Alexander Lobakin wrote 25.11.2019 10:54: > Nicholas Johnson wrote 25.11.2019 10:29: >> Hi, >> >> On Wed, Oct 16, 2019 at 10:31:31AM +0300, Alexander Lobakin wrote: >>> David Miller wrote 16.10.2019 04:16: >>> > From: Alexander Lobakin <alobakin@dlink.ru> >>> > Date: Mon, 14 Oct 2019 11:00:33 +0300 >>> > >>> > > Commit 323ebb61e32b4 ("net: use listified RX for handling GRO_NORMAL >>> > > skbs") made use of listified skb processing for the users of >>> > > napi_gro_frags(). >>> > > The same technique can be used in a way more common napi_gro_receive() >>> > > to speed up non-merged (GRO_NORMAL) skbs for a wide range of drivers >>> > > including gro_cells and mac80211 users. >>> > > This slightly changes the return value in cases where skb is being >>> > > dropped by the core stack, but it seems to have no impact on related >>> > > drivers' functionality. >>> > > gro_normal_batch is left untouched as it's very individual for every >>> > > single system configuration and might be tuned in manual order to >>> > > achieve an optimal performance. >>> > > >>> > > Signed-off-by: Alexander Lobakin <alobakin@dlink.ru> >>> > > Acked-by: Edward Cree <ecree@solarflare.com> >>> > >>> > Applied, thank you. >>> >>> David, Edward, Eric, Ilias, >>> thank you for your time. >>> >>> Regards, >>> ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ >> >> I am very sorry to be the bearer of bad news. It appears that this >> commit is causing a regression in Linux 5.4.0-rc8-next-20191122, >> preventing me from connecting to Wi-Fi networks. I have a Dell XPS >> 9370 >> (Intel Core i7-8650U) with Intel Wireless 8265 [8086:24fd]. > > Hi! > > It's a bit strange as this commit doesn't directly affect the packet > flow. I don't have any iwlwifi hardware at the moment, so let's see if > anyone else will be able to reproduce this (for now, it is the first > report in a ~6 weeks after applying to net-next). > Anyway, I'll investigate iwlwifi's Rx processing -- maybe I could find > something driver-specific that might produce this. 
> > Thank you for the report. > >> I did a bisect, and this commit was named the culprit. I then applied >> the reverse patch on another clone of Linux next-20191122, and it >> started working. >> >> 6570bc79c0dfff0f228b7afd2de720fb4e84d61d >> net: core: use listified Rx for GRO_NORMAL in napi_gro_receive() >> >> You can see more at the bug report I filed at [0]. >> >> [0] >> https://bugzilla.kernel.org/show_bug.cgi?id=205647 >> >> I called on others at [0] to try to reproduce this - you should not >> pull >> a patch because of a single reporter - as I could be wrong. >> >> Please let me know if you want me to give more debugging information >> or >> test any potential fixes. I am happy to help to fix this. :) And you can also set /proc/sys/net/core/gro_normal_batch to the value of 1 and see if there are any changes. This value makes the GRO stack behave just as it did without the patch. >> Kind regards, >> Nicholas Johnson > > Regards, > ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ Regards, ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
On Mon, Nov 25, 2019 at 11:25:50AM +0300, Alexander Lobakin wrote: > Alexander Lobakin wrote 25.11.2019 10:54: > > Nicholas Johnson wrote 25.11.2019 10:29: > > > Hi, > > > > > > On Wed, Oct 16, 2019 at 10:31:31AM +0300, Alexander Lobakin wrote: > > > > David Miller wrote 16.10.2019 04:16: > > > > > From: Alexander Lobakin <alobakin@dlink.ru> > > > > > Date: Mon, 14 Oct 2019 11:00:33 +0300 > > > > > > > > > > > Commit 323ebb61e32b4 ("net: use listified RX for handling GRO_NORMAL > > > > > > skbs") made use of listified skb processing for the users of > > > > > > napi_gro_frags(). > > > > > > The same technique can be used in a way more common napi_gro_receive() > > > > > > to speed up non-merged (GRO_NORMAL) skbs for a wide range of drivers > > > > > > including gro_cells and mac80211 users. > > > > > > This slightly changes the return value in cases where skb is being > > > > > > dropped by the core stack, but it seems to have no impact on related > > > > > > drivers' functionality. > > > > > > gro_normal_batch is left untouched as it's very individual for every > > > > > > single system configuration and might be tuned in manual order to > > > > > > achieve an optimal performance. > > > > > > > > > > > > Signed-off-by: Alexander Lobakin <alobakin@dlink.ru> > > > > > > Acked-by: Edward Cree <ecree@solarflare.com> > > > > > > > > > > Applied, thank you. > > > > > > > > David, Edward, Eric, Ilias, > > > > thank you for your time. > > > > > > > > Regards, > > > > ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ > > > > > > I am very sorry to be the bearer of bad news. It appears that this > > > commit is causing a regression in Linux 5.4.0-rc8-next-20191122, > > > preventing me from connecting to Wi-Fi networks. I have a Dell XPS > > > 9370 > > > (Intel Core i7-8650U) with Intel Wireless 8265 [8086:24fd]. > > > > Hi! > > > > It's a bit strange as this commit doesn't directly affect the packet > > flow. 
I don't have any iwlwifi hardware at the moment, so let's see if > > anyone else will be able to reproduce this (for now, it is the first > > report in a ~6 weeks after applying to net-next). > > Anyway, I'll investigate iwlwifi's Rx processing -- maybe I could find > > something driver-specific that might produce this. Just in case, I double checked by reapplying the patch to check it is the problem. The problem reappeared. So I am sure. Here's what I will do. I know somebody with the same Dell XPS 9370, except theirs has the Intel Core i7 8550U and Killer Wi-Fi. Mine is the "business" model, which was harder to obtain. I have been doing bisects on a USB-C SSD because I do not have enough space on the internal NVMe drive. I will ask to borrow their laptop, and boot off the drive as I have been doing with my laptop. If the problem does not appear on their laptop, then there is a good chance that the problem is specific to iwlwifi. > > > > Thank you for the report. > > > > > I did a bisect, and this commit was named the culprit. I then applied > > > the reverse patch on another clone of Linux next-20191122, and it > > > started working. > > > > > > 6570bc79c0dfff0f228b7afd2de720fb4e84d61d > > > net: core: use listified Rx for GRO_NORMAL in napi_gro_receive() > > > > > > You can see more at the bug report I filed at [0]. > > > > > > [0] > > > https://bugzilla.kernel.org/show_bug.cgi?id=205647 > > > > > > I called on others at [0] to try to reproduce this - you should not > > > pull > > > a patch because of a single reporter - as I could be wrong. > > > > > > Please let me know if you want me to give more debugging information > > > or > > > test any potential fixes. I am happy to help to fix this. :) > > And you can also set /proc/sys/net/core/gro_normal_batch to the value > of 1 and see if there are any changes. This value makes GRO stack to > behave just like without the patch. The default value of /proc/sys/net/core/gro_normal_batch was 8. 
Setting it to 1 allowed it to connect to the Wi-Fi network. Setting it back to 8 did not kill the connection. But when I disconnected and tried to reconnect, it did not re-connect. Hence, it appears that the problem only affects the initial handshake when associating with a network, and not normal packet flow. > > > > Kind regards, > > > Nicholas Johnson > > > > Regards, > > ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ > > Regards, > ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ Regards, Nicholas
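[For anyone wanting to repeat this experiment, the knob discussed above can be read and set as sketched below. Only the sysctl path comes from the thread; the rest is a generic shell sketch, and the sysctl only exists on kernels carrying the listified-Rx patch.]

```shell
# Read the current GRO_NORMAL batch size; prints "unavailable" on
# kernels without the listified-Rx patch (the sysctl is absent there).
batch=$(cat /proc/sys/net/core/gro_normal_batch 2>/dev/null || echo unavailable)
echo "gro_normal_batch=$batch"

# Alexander's suggested experiment (requires root): a batch size of 1
# delivers every GRO_NORMAL skb immediately, i.e. pre-patch behaviour.
#   sysctl -w net.core.gro_normal_batch=1
```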
On 25/11/2019 09:09, Nicholas Johnson wrote: > The default value of /proc/sys/net/core/gro_normal_batch was 8. > Setting it to 1 allowed it to connect to Wi-Fi network. > > Setting it back to 8 did not kill the connection. > > But when I disconnected and tried to reconnect, it did not re-connect. > > Hence, it appears that the problem only affects the initial handshake > when associating with a network, and not normal packet flow. That sounds like the GRO batch isn't getting flushed at the end of the NAPI — maybe the driver isn't calling napi_complete_done() at the appropriate time? Indeed, from digging through the layers of iwlwifi I eventually get to iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1); return 0; }) and instead calls napi_gro_flush() at the end of its RX handling. Unfortunately, napi_gro_flush() is no longer enough, because it doesn't call gro_normal_list() so the packets on the GRO_NORMAL list just sit there indefinitely. It was seeing drivers calling napi_gro_flush() directly that had me worried in the first place about whether listifying napi_gro_receive() was safe and where the gro_normal_list() should go. I wondered if other drivers that show up in [1] needed fixing with a gro_normal_list() next to their napi_gro_flush() call. From a cursory check: brocade/bna: has a real poller, calls napi_complete_done() so is OK. cortina/gemini: calls napi_complete_done() straight after napi_gro_flush(), so is OK. hisilicon/hns3: calls napi_complete(), so is _probably_ OK. But it's far from clear to me why *any* of those drivers are calling napi_gro_flush() themselves... -Ed [1]: https://elixir.bootlin.com/linux/latest/ident/napi_gro_flush
Edward Cree wrote 25.11.2019 13:31: > On 25/11/2019 09:09, Nicholas Johnson wrote: >> The default value of /proc/sys/net/core/gro_normal_batch was 8. >> Setting it to 1 allowed it to connect to Wi-Fi network. >> >> Setting it back to 8 did not kill the connection. >> >> But when I disconnected and tried to reconnect, it did not re-connect. >> >> Hence, it appears that the problem only affects the initial handshake >> when associating with a network, and not normal packet flow. > That sounds like the GRO batch isn't getting flushed at the endof the > NAPI — maybe the driver isn't calling napi_complete_done() at the > appropriate time? Yes, this was the first reason I thought about, but didn't look at iwlwifi yet. I already knew this driver has some tricky parts, but this 'fake NAPI' solution seems rather strange to me. > Indeed, from digging through the layers of iwlwifi I eventually get to > iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the > napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1); > return 0; }) and instead calls napi_gro_flush() at the end of its RX > handling. Unfortunately, napi_gro_flush() is no longer enough, > because it doesn't call gro_normal_list() so the packets on the > GRO_NORMAL list just sit there indefinitely. > > It was seeing drivers calling napi_gro_flush() directly that had me > worried in the first place about whether listifying napi_gro_receive() > was safe and where the gro_normal_list() should go. > I wondered if other drivers that show up in [1] needed fixing with a > gro_normal_list() next to their napi_gro_flush() call. From a cursory > check: > brocade/bna: has a real poller, calls napi_complete_done() so is OK. > cortina/gemini: calls napi_complete_done() straight after > napi_gro_flush(), so is OK. > hisilicon/hns3: calls napi_complete(), so is _probably_ OK. > But it's far from clear to me why *any* of those drivers are calling > napi_gro_flush() themselves... Agree. 
I mean, we _can_ handle this particular problem from the networking core side, but from my point of view only rethinking the driver's logic is the correct way to solve this and other issues that may potentially appear in the future. > -Ed > > [1]: https://elixir.bootlin.com/linux/latest/ident/napi_gro_flush Regards, ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
On Mon, 2019-11-25 at 13:58 +0300, Alexander Lobakin wrote: > Edward Cree wrote 25.11.2019 13:31: > > On 25/11/2019 09:09, Nicholas Johnson wrote: > > > The default value of /proc/sys/net/core/gro_normal_batch was 8. > > > Setting it to 1 allowed it to connect to Wi-Fi network. > > > > > > Setting it back to 8 did not kill the connection. > > > > > > But when I disconnected and tried to reconnect, it did not re-connect. > > > > > > Hence, it appears that the problem only affects the initial handshake > > > when associating with a network, and not normal packet flow. > > That sounds like the GRO batch isn't getting flushed at the endof the > > NAPI — maybe the driver isn't calling napi_complete_done() at the > > appropriate time? > > Yes, this was the first reason I thought about, but didn't look at > iwlwifi yet. I already knew this driver has some tricky parts, but > this 'fake NAPI' solution seems rather strange to me. Truth be told, we kinda just fudged it until we got GRO, since that's what we really want on wifi (to reduce the costly TCP ACKs if possible). Maybe we should call napi_complete_done() instead? But as Edward noted (below), we don't actually really do NAPI polling, we just fake it for each interrupt since we will often get a lot of frames in one interrupt if there's high throughput (A-MPDUs are basically coming in all at the same time). I've never really looked too much at what exactly happens here, beyond seeing the difference from GRO. > > Indeed, from digging through the layers of iwlwifi I eventually get to > > iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the > > napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1); > > return 0; }) and instead calls napi_gro_flush() at the end of its RX > > handling. Unfortunately, napi_gro_flush() is no longer enough, > > because it doesn't call gro_normal_list() so the packets on the > > GRO_NORMAL list just sit there indefinitely. 
> > > > It was seeing drivers calling napi_gro_flush() directly that had me > > worried in the first place about whether listifying napi_gro_receive() > > was safe and where the gro_normal_list() should go. > > I wondered if other drivers that show up in [1] needed fixing with a > > gro_normal_list() next to their napi_gro_flush() call. From a cursory > > check: > > brocade/bna: has a real poller, calls napi_complete_done() so is OK. > > cortina/gemini: calls napi_complete_done() straight after > > napi_gro_flush(), so is OK. > > hisilicon/hns3: calls napi_complete(), so is _probably_ OK. > > But it's far from clear to me why *any* of those drivers are calling > > napi_gro_flush() themselves... > > Agree. I mean, we _can_ handle this particular problem from networking > core side, but from my point of view only rethinking driver's logic is > the correct way to solve this and other issues that may potentionally > appear in future. Do tell what you think it should be doing :) One additional wrinkle is that we have firmware notifications, command completions and actual RX interleaved, so I think we do want to have interrupts for the notifications and command completions? johannes
On Mon, 2019-11-25 at 12:05 +0100, Johannes Berg wrote: > On Mon, 2019-11-25 at 13:58 +0300, Alexander Lobakin wrote: > > Edward Cree wrote 25.11.2019 13:31: > > > On 25/11/2019 09:09, Nicholas Johnson wrote: > > > > The default value of /proc/sys/net/core/gro_normal_batch was 8. > > > > Setting it to 1 allowed it to connect to Wi-Fi network. > > > > > > > > Setting it back to 8 did not kill the connection. > > > > > > > > But when I disconnected and tried to reconnect, it did not re-connect. > > > > > > > > Hence, it appears that the problem only affects the initial handshake > > > > when associating with a network, and not normal packet flow. > > > That sounds like the GRO batch isn't getting flushed at the endof the > > > NAPI — maybe the driver isn't calling napi_complete_done() at the > > > appropriate time? > > > > Yes, this was the first reason I thought about, but didn't look at > > iwlwifi yet. I already knew this driver has some tricky parts, but > > this 'fake NAPI' solution seems rather strange to me. > > Truth be told, we kinda just fudged it until we got GRO, since that's > what we really want on wifi (to reduce the costly TCP ACKs if possible). > > Maybe we should call napi_complete_done() instead? But as Edward noted > (below), we don't actually really do NAPI polling, we just fake it for > each interrupt since we will often get a lot of frames in one interrupt > if there's high throughput (A-MPDUs are basically coming in all at the > same time). I've never really looked too much at what exactly happens > here, beyond seeing the difference from GRO. > > > > > Indeed, from digging through the layers of iwlwifi I eventually get to > > > iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the > > > napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1); > > > return 0; }) and instead calls napi_gro_flush() at the end of its RX > > > handling. 
Unfortunately, napi_gro_flush() is no longer enough, > > > because it doesn't call gro_normal_list() so the packets on the > > > GRO_NORMAL list just sit there indefinitely. > > > > > > It was seeing drivers calling napi_gro_flush() directly that had me > > > worried in the first place about whether listifying napi_gro_receive() > > > was safe and where the gro_normal_list() should go. > > > I wondered if other drivers that show up in [1] needed fixing with a > > > gro_normal_list() next to their napi_gro_flush() call. From a cursory > > > check: > > > brocade/bna: has a real poller, calls napi_complete_done() so is OK. > > > cortina/gemini: calls napi_complete_done() straight after > > > napi_gro_flush(), so is OK. > > > hisilicon/hns3: calls napi_complete(), so is _probably_ OK. > > > But it's far from clear to me why *any* of those drivers are calling > > > napi_gro_flush() themselves... > > > > Agree. I mean, we _can_ handle this particular problem from networking > > core side, but from my point of view only rethinking driver's logic is > > the correct way to solve this and other issues that may potentionally > > appear in future. > > Do tell what you think it should be doing :) > > One additional wrinkle is that we have firmware notifications, command > completions and actual RX interleaved, so I think we do want to have > interrupts for the notifications and command completions? I think it would be nice moving the iwlwifi driver to full/plain NAPI mode. The interrupt handler could keep processing extra work as it does now and queue real pkts on some internal queue, and then schedule the relevant napi, which in turn could process that queue in the napi poll method. Likely I missed tons of details and/or oversimplified it...
For -net, I *think* something as dumb and hacky as the following could possibly work: ---- diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c index 4bba6b8a863c..df82fad96cbb 100644 --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c @@ -1527,7 +1527,7 @@ static void iwl_pcie_rx_handle(struct iwl_trans *trans, int queue) iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq); if (rxq->napi.poll) - napi_gro_flush(&rxq->napi, false); + napi_complete_done(&rxq->napi, 0); iwl_pcie_rxq_restock(trans, rxq); } --- Cheers, Paolo
Paolo Abeni wrote 25.11.2019 14:42: > On Mon, 2019-11-25 at 12:05 +0100, Johannes Berg wrote: >> On Mon, 2019-11-25 at 13:58 +0300, Alexander Lobakin wrote: >> > Edward Cree wrote 25.11.2019 13:31: >> > > On 25/11/2019 09:09, Nicholas Johnson wrote: >> > > > The default value of /proc/sys/net/core/gro_normal_batch was 8. >> > > > Setting it to 1 allowed it to connect to Wi-Fi network. >> > > > >> > > > Setting it back to 8 did not kill the connection. >> > > > >> > > > But when I disconnected and tried to reconnect, it did not re-connect. >> > > > >> > > > Hence, it appears that the problem only affects the initial handshake >> > > > when associating with a network, and not normal packet flow. >> > > That sounds like the GRO batch isn't getting flushed at the endof the >> > > NAPI — maybe the driver isn't calling napi_complete_done() at the >> > > appropriate time? >> > >> > Yes, this was the first reason I thought about, but didn't look at >> > iwlwifi yet. I already knew this driver has some tricky parts, but >> > this 'fake NAPI' solution seems rather strange to me. >> >> Truth be told, we kinda just fudged it until we got GRO, since that's >> what we really want on wifi (to reduce the costly TCP ACKs if >> possible). >> >> Maybe we should call napi_complete_done() instead? But as Edward noted >> (below), we don't actually really do NAPI polling, we just fake it for >> each interrupt since we will often get a lot of frames in one >> interrupt >> if there's high throughput (A-MPDUs are basically coming in all at the >> same time). I've never really looked too much at what exactly happens >> here, beyond seeing the difference from GRO. >> >> >> > > Indeed, from digging through the layers of iwlwifi I eventually get to >> > > iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the >> > > napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1); >> > > return 0; }) and instead calls napi_gro_flush() at the end of its RX >> > > handling. 
Unfortunately, napi_gro_flush() is no longer enough, >> > > because it doesn't call gro_normal_list() so the packets on the >> > > GRO_NORMAL list just sit there indefinitely. >> > > >> > > It was seeing drivers calling napi_gro_flush() directly that had me >> > > worried in the first place about whether listifying napi_gro_receive() >> > > was safe and where the gro_normal_list() should go. >> > > I wondered if other drivers that show up in [1] needed fixing with a >> > > gro_normal_list() next to their napi_gro_flush() call. From a cursory >> > > check: >> > > brocade/bna: has a real poller, calls napi_complete_done() so is OK. >> > > cortina/gemini: calls napi_complete_done() straight after >> > > napi_gro_flush(), so is OK. >> > > hisilicon/hns3: calls napi_complete(), so is _probably_ OK. >> > > But it's far from clear to me why *any* of those drivers are calling >> > > napi_gro_flush() themselves... >> > >> > Agree. I mean, we _can_ handle this particular problem from networking >> > core side, but from my point of view only rethinking driver's logic is >> > the correct way to solve this and other issues that may potentionally >> > appear in future. >> >> Do tell what you think it should be doing :) >> >> One additional wrinkle is that we have firmware notifications, command >> completions and actual RX interleaved, so I think we do want to have >> interrupts for the notifications and command completions? > > I think it would be nice moving the iwlwifi driver to full/plain NAPI > mode. The interrupt handler could keep processing extra work as it does > now and queue real pkts on some internal queue, and than schedule the > relevant napi, which in turn could process such queue in the napi poll > method. Likely I missed tons of details and/or oversimplified it... Yep, full NAPI is the best variant, but I may miss a lot too. 
> For -net, I *think* something as dumb and hacky as the following could > possibly work: > ---- > diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c > b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c > index 4bba6b8a863c..df82fad96cbb 100644 > --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c > +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c > @@ -1527,7 +1527,7 @@ static void iwl_pcie_rx_handle(struct iwl_trans > *trans, int queue) > iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq); > > if (rxq->napi.poll) > - napi_gro_flush(&rxq->napi, false); > + napi_complete_done(&rxq->napi, 0); > > iwl_pcie_rxq_restock(trans, rxq); > } > --- napi_complete(napi) is a static-inline equivalent of napi_complete_done(napi, 0). I'm not sure it will work without any issues, as iwlwifi doesn't _really_ put NAPI into the scheduled state. I'm not very familiar with iwlwifi, but as a workaround, next to the manual napi_gro_flush() you can also flush napi->rx_list manually to prevent packets from stalling: diff -Naur a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 14:55:03.610355230 +0300 +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 14:57:29.399556868 +0300 @@ -1526,8 +1526,16 @@ if (unlikely(emergency && count)) iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq); - if (rxq->napi.poll) + if (rxq->napi.poll) { + if (rxq->napi.rx_count) { + netif_receive_skb_list(&rxq->napi.rx_list); + + INIT_LIST_HEAD(&rxq->napi.rx_list); + rxq->napi.rx_count = 0; + } + napi_gro_flush(&rxq->napi, false); + } iwl_pcie_rxq_restock(trans, rxq); } > Cheers, > > Paolo Regards, ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
Paolo Abeni <pabeni@redhat.com> writes: > On Mon, 2019-11-25 at 12:05 +0100, Johannes Berg wrote: >> On Mon, 2019-11-25 at 13:58 +0300, Alexander Lobakin wrote: >> >> > Agree. I mean, we _can_ handle this particular problem from networking >> > core side, but from my point of view only rethinking driver's logic is >> > the correct way to solve this and other issues that may potentionally >> > appear in future. >> >> Do tell what you think it should be doing :) >> >> One additional wrinkle is that we have firmware notifications, command >> completions and actual RX interleaved, so I think we do want to have >> interrupts for the notifications and command completions? > > I think it would be nice moving the iwlwifi driver to full/plain NAPI > mode. The interrupt handler could keep processing extra work as it does > now and queue real pkts on some internal queue, and than schedule the > relevant napi, which in turn could process such queue in the napi poll > method. Likely I missed tons of details and/or oversimplified it... Sorry for hijacking the thread, but I have a patch pending for ath10k (another wireless driver) which adds NAPI support to SDIO devices: https://patchwork.kernel.org/patch/11188393/ I think it does just what you suggested, but I'm no NAPI expert and would appreciate if someone more knowledgeable could take a look :)
On Mon, Nov 25, 2019 at 12:42:44PM +0100, Paolo Abeni wrote: > I think it would be nice moving the iwlwifi driver to full/plain NAPI > mode. The interrupt handler could keep processing extra work as it does > now and queue real pkts on some internal queue, and then schedule the > relevant napi, which in turn could process such a queue in the napi poll > method. Likely I missed tons of details and/or oversimplified it... It must have something to do with iwlwifi (as if we needed more evidence). I just booted a different variant of the Dell XPS 9370 which has Qualcomm Wi-Fi [168c:003e] (ath10k_pci.ko), instead of Intel Wi-Fi, from the same USB SSD as before, and it has no issues whatsoever. My regression report quickly blew up to be way over my head in terms of understanding, but I will keep monitoring the discussions and try to learn from them. Everybody, please keep me CC'd on any further communications with driver teams, as I am genuinely interested in the journey and the outcome. > Cheers, > > Paolo > > Cheers, Nicholas
On Mon, Nov 25, 2019 at 10:31:12AM +0000, Edward Cree wrote: > On 25/11/2019 09:09, Nicholas Johnson wrote: > > The default value of /proc/sys/net/core/gro_normal_batch was 8. > > Setting it to 1 allowed it to connect to Wi-Fi network. > > > > Setting it back to 8 did not kill the connection. > > > > But when I disconnected and tried to reconnect, it did not re-connect. > > > > Hence, it appears that the problem only affects the initial handshake > > when associating with a network, and not normal packet flow. > That sounds like the GRO batch isn't getting flushed at the end of the > NAPI — maybe the driver isn't calling napi_complete_done() at the > appropriate time? > Indeed, from digging through the layers of iwlwifi I eventually get to > iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the > napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1); > return 0; }) and instead calls napi_gro_flush() at the end of its RX > handling. Unfortunately, napi_gro_flush() is no longer enough, > because it doesn't call gro_normal_list() so the packets on the > GRO_NORMAL list just sit there indefinitely. > > It was seeing drivers calling napi_gro_flush() directly that had me > worried in the first place about whether listifying napi_gro_receive() > was safe and where the gro_normal_list() should go. > I wondered if other drivers that show up in [1] needed fixing with a > gro_normal_list() next to their napi_gro_flush() call. From a cursory > check: > brocade/bna: has a real poller, calls napi_complete_done() so is OK. > cortina/gemini: calls napi_complete_done() straight after > napi_gro_flush(), so is OK. > hisilicon/hns3: calls napi_complete(), so is _probably_ OK. > But it's far from clear to me why *any* of those drivers are calling > napi_gro_flush() themselves... Pardon my lack of understanding, but is it unusual for something that the drivers should not be calling to be exposed to the drivers?
Could it be hidden from the drivers so that it is out of scope, once the current drivers are modified to not use it? > > -Ed > > [1]: https://elixir.bootlin.com/linux/latest/ident/napi_gro_flush Kind regards, Nicholas
On 25/11/2019 12:02, Alexander Lobakin wrote: > I'm not very familiar with iwlwifi, but as a work around manual > napi_gro_flush() you can also manually flush napi->rx_list to > prevent packets from stalling: > > diff -Naur a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c > --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 14:55:03.610355230 +0300 > +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 14:57:29.399556868 +0300 > @@ -1526,8 +1526,16 @@ > if (unlikely(emergency && count)) > iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq); > > - if (rxq->napi.poll) > + if (rxq->napi.poll) { > + if (rxq->napi.rx_count) { > + netif_receive_skb_list(&rxq->napi.rx_list); > + > + INIT_LIST_HEAD(&rxq->napi.rx_list); > + rxq->napi.rx_count = 0; > + } > + > napi_gro_flush(&rxq->napi, false); > + } > > iwl_pcie_rxq_restock(trans, rxq); > } ... or we could export gro_normal_list(), instead of open-coding it in the driver? -Ed
Edward Cree wrote 25.11.2019 16:21: > On 25/11/2019 12:02, Alexander Lobakin wrote: >> I'm not very familiar with iwlwifi, but as a work around manual >> napi_gro_flush() you can also manually flush napi->rx_list to >> prevent packets from stalling: >> >> diff -Naur a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 >> 14:55:03.610355230 +0300 >> +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 >> 14:57:29.399556868 +0300 >> @@ -1526,8 +1526,16 @@ >> if (unlikely(emergency && count)) >> iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq); >> >> - if (rxq->napi.poll) >> + if (rxq->napi.poll) { >> + if (rxq->napi.rx_count) { >> + netif_receive_skb_list(&rxq->napi.rx_list); >> + >> + INIT_LIST_HEAD(&rxq->napi.rx_list); >> + rxq->napi.rx_count = 0; >> + } >> + >> napi_gro_flush(&rxq->napi, false); >> + } >> >> iwl_pcie_rxq_restock(trans, rxq); >> } > ... or we could export gro_normal_list(), instead of open-coding it > in the driver? I thought about this too, but I don't like it. This patch is proposed as a *very* temporary solution until iwlwifi gets more straightforward logic. I wish we could make napi_gro_flush() static in the future and keep gro_normal_list() private to: - prevent them from being used in any new drivers; - give the compiler more opportunity to optimize the core code. > > -Ed Regards, ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
From: Alexander Lobakin <alobakin@dlink.ru> Date: Mon, 25 Nov 2019 15:02:24 +0300 > Paolo Abeni wrote 25.11.2019 14:42: >> For -net, I *think* something as dumb and hacky as the following could >> possibly work: >> ---- >> diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> index 4bba6b8a863c..df82fad96cbb 100644 >> --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> @@ -1527,7 +1527,7 @@ static void iwl_pcie_rx_handle(struct iwl_trans >> *trans, int queue) >> iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq); >> if (rxq->napi.poll) >> - napi_gro_flush(&rxq->napi, false); >> + napi_complete_done(&rxq->napi, 0); >> iwl_pcie_rxq_restock(trans, rxq); >> } >> --- > > napi_complete_done(napi, 0) has an equivalent static inline > napi_complete(napi). I'm not sure it will work without any issues > as iwlwifi doesn't _really_ turn NAPI into scheduling state. > > I'm not very familiar with iwlwifi, but as a work around manual > napi_gro_flush() you can also manually flush napi->rx_list to > prevent packets from stalling: > > diff -Naur a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c > b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c > --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 > 14:55:03.610355230 +0300 > +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 > 14:57:29.399556868 +0300 ... Thanks to everyone for looking into this. Can I get some kind of fix in the next 24 hours? I want to send a quick follow-on pull request to Linus to deal with all of the fallout, and in particular fix this regression. Thanks!
David Miller wrote 27.11.2019 02:57: > From: Alexander Lobakin <alobakin@dlink.ru> > Date: Mon, 25 Nov 2019 15:02:24 +0300 > >> Paolo Abeni wrote 25.11.2019 14:42: >>> For -net, I *think* something as dumb and hacky as the following >>> could >>> possibly work: >>> ---- >>> diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >>> b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >>> index 4bba6b8a863c..df82fad96cbb 100644 >>> --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >>> +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >>> @@ -1527,7 +1527,7 @@ static void iwl_pcie_rx_handle(struct iwl_trans >>> *trans, int queue) >>> iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq); >>> if (rxq->napi.poll) >>> - napi_gro_flush(&rxq->napi, false); >>> + napi_complete_done(&rxq->napi, 0); >>> iwl_pcie_rxq_restock(trans, rxq); >>> } >>> --- >> >> napi_complete_done(napi, 0) has an equivalent static inline >> napi_complete(napi). I'm not sure it will work without any issues >> as iwlwifi doesn't _really_ turn NAPI into scheduling state. >> >> I'm not very familiar with iwlwifi, but as a work around manual >> napi_gro_flush() you can also manually flush napi->rx_list to >> prevent packets from stalling: >> >> diff -Naur a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 >> 14:55:03.610355230 +0300 >> +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 >> 14:57:29.399556868 +0300 > ... > > Thanks to everyone for looking into this. > > Can I get some kind of fix in the next 24 hours? I want to send a > quick > follow-on pull request to Linus to deal with all of the fallout, and in > particular fix this regression. If the Intel guys and others agree, I'll send a patch adding manual napi->rx_list flushing to the iwlwifi driver in about 2-3 hours. Anyway, this driver should get proper NAPI support in future releases to prevent problems like this one.
> Thanks! Regards, ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
On Wed, 2019-11-27 at 10:47 +0300, Alexander Lobakin wrote: > > Can I get some kind of fix in the next 24 hours? I want to send a > > quick > > follow-on pull request to Linus to deal with all of the fallout, and in > > particular fix this regression. > > If Intel guys and others will agree, I'll send a patch which will add > manual napi->rx_list flushing in iwlwifi driver in about ~2-3 hours. Sounds fine to me. > Anyway, this driver should get a proper NAPI in future releases to > prevent problems like this one. Yeah, we'll work on that, but that might take a bit longer :) johannes
David Miller wrote 27.11.2019 02:57: > From: Alexander Lobakin <alobakin@dlink.ru> > Date: Mon, 25 Nov 2019 15:02:24 +0300 > >> Paolo Abeni wrote 25.11.2019 14:42: >>> For -net, I *think* something as dumb and hacky as the following >>> could >>> possibly work: >>> ---- >>> diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >>> b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >>> index 4bba6b8a863c..df82fad96cbb 100644 >>> --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >>> +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >>> @@ -1527,7 +1527,7 @@ static void iwl_pcie_rx_handle(struct iwl_trans >>> *trans, int queue) >>> iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq); >>> if (rxq->napi.poll) >>> - napi_gro_flush(&rxq->napi, false); >>> + napi_complete_done(&rxq->napi, 0); >>> iwl_pcie_rxq_restock(trans, rxq); >>> } >>> --- >> >> napi_complete_done(napi, 0) has an equivalent static inline >> napi_complete(napi). I'm not sure it will work without any issues >> as iwlwifi doesn't _really_ turn NAPI into scheduling state. >> >> I'm not very familiar with iwlwifi, but as a work around manual >> napi_gro_flush() you can also manually flush napi->rx_list to >> prevent packets from stalling: >> >> diff -Naur a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c >> --- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 >> 14:55:03.610355230 +0300 >> +++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c 2019-11-25 >> 14:57:29.399556868 +0300 > ... > > Thanks to everyone for looking into this. > > Can I get some kind of fix in the next 24 hours? I want to send a > quick > follow-on pull request to Linus to deal with all of the fallout, and in > particular fix this regression. The fix is here: [1] It's pretty straightforward, but needs minimal testing anyway. If any changes are needed, please let me know. > Thanks! Regards, ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ [1] https://lore.kernel.org/netdev/20191127094123.18161-1-alobakin@dlink.ru
Hi Alexander, This patch introduced a behavior change around GRO_DROP: napi_skb_finish used to sometimes return GRO_DROP: > -static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb) > +static gro_result_t napi_skb_finish(struct napi_struct *napi, > + struct sk_buff *skb, > + gro_result_t ret) > { > switch (ret) { > case GRO_NORMAL: > - if (netif_receive_skb_internal(skb)) > - ret = GRO_DROP; > + gro_normal_one(napi, skb); > But under your change, gro_normal_one and the various calls it makes never propagate their return values, and so GRO_DROP is never returned to the caller, even if something drops the packet. Was this intentional? Or should I start looking into how to restore it? Thanks, Jason
On Wed, Jun 24, 2020 at 03:06:10PM -0600, Jason A. Donenfeld wrote: > Hi Alexander, > > This patch introduced a behavior change around GRO_DROP: > > napi_skb_finish used to sometimes return GRO_DROP: > > > -static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb) > > +static gro_result_t napi_skb_finish(struct napi_struct *napi, > > + struct sk_buff *skb, > > + gro_result_t ret) > > { > > switch (ret) { > > case GRO_NORMAL: > > - if (netif_receive_skb_internal(skb)) > > - ret = GRO_DROP; > > + gro_normal_one(napi, skb); > > > > But under your change, gro_normal_one and the various calls that makes > never propagates its return value, and so GRO_DROP is never returned to > the caller, even if something drops it. > > Was this intentional? Or should I start looking into how to restore it? > > Thanks, > Jason For some context, I'm consequently mulling over this change in my code, since checking for GRO_DROP now constitutes dead code: diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c index 91438144e4f7..9b2ab6fc91cd 100644 --- a/drivers/net/wireguard/receive.c +++ b/drivers/net/wireguard/receive.c @@ -414,14 +414,8 @@ static void wg_packet_consume_data_done(struct wg_peer *peer, if (unlikely(routed_peer != peer)) goto dishonest_packet_peer; - if (unlikely(napi_gro_receive(&peer->napi, skb) == GRO_DROP)) { - ++dev->stats.rx_dropped; - net_dbg_ratelimited("%s: Failed to give packet to userspace from peer %llu (%pISpfsc)\n", - dev->name, peer->internal_id, - &peer->endpoint.addr); - } else { - update_rx_stats(peer, message_data_len(len_before_trim)); - } + napi_gro_receive(&peer->napi, skb); + update_rx_stats(peer, message_data_len(len_before_trim)); return; dishonest_packet_peer:
On 24/06/2020 22:06, Jason A. Donenfeld wrote: > Hi Alexander, > > This patch introduced a behavior change around GRO_DROP: > > napi_skb_finish used to sometimes return GRO_DROP: > >> -static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb) >> +static gro_result_t napi_skb_finish(struct napi_struct *napi, >> + struct sk_buff *skb, >> + gro_result_t ret) >> { >> switch (ret) { >> case GRO_NORMAL: >> - if (netif_receive_skb_internal(skb)) >> - ret = GRO_DROP; >> + gro_normal_one(napi, skb); >> > But under your change, gro_normal_one and the various calls that makes > never propagates its return value, and so GRO_DROP is never returned to > the caller, even if something drops it. This followed the pattern set by napi_frags_finish(), and is intentional: gro_normal_one() usually defers processing of the skb to the end of the napi poll, so by the time we know that the network stack has dropped it, the caller has long since returned. In fact the RX will be handled by netif_receive_skb_list_internal(), which can't return NET_RX_SUCCESS vs. NET_RX_DROP, because it's handling many skbs which might not all have the same verdict. When originally doing this work I felt this was OK because almost no-one was sensitive to the return value — the only callers that were sensitive were in our own sfc driver, and then only for making bogus decisions about interrupt moderation. Alexander just followed my lead, so don't blame him ;-) > For some context, I'm consequently mulling over this change in my code, > since checking for GRO_DROP now constitutes dead code: Incidentally, it's only dead because dev_gro_receive() can't return GRO_DROP either. If it could, napi_skb_finish() would pass that on. And napi_gro_frags() (which AIUI is the better API for some performance reasons that I can't remember) can still return GRO_DROP too.
However, I think that incrementing your rx_dropped stat when the network stack chose to drop the packet is the wrong thing to do anyway (IMHO rx_dropped is for "there was a packet on the wire but either the hardware or the driver was unable to receive it"), so I'd say go ahead and remove the check. HTH -ed
diff --git a/net/core/dev.c b/net/core/dev.c index 8bc3dce71fc0..74f593986524 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -5884,6 +5884,26 @@ struct packet_offload *gro_find_complete_by_type(__be16 type) } EXPORT_SYMBOL(gro_find_complete_by_type); +/* Pass the currently batched GRO_NORMAL SKBs up to the stack. */ +static void gro_normal_list(struct napi_struct *napi) +{ + if (!napi->rx_count) + return; + netif_receive_skb_list_internal(&napi->rx_list); + INIT_LIST_HEAD(&napi->rx_list); + napi->rx_count = 0; +} + +/* Queue one GRO_NORMAL SKB up for list processing. If batch size exceeded, + * pass the whole batch up to the stack. + */ +static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb) +{ + list_add_tail(&skb->list, &napi->rx_list); + if (++napi->rx_count >= gro_normal_batch) + gro_normal_list(napi); +} + static void napi_skb_free_stolen_head(struct sk_buff *skb) { skb_dst_drop(skb); @@ -5891,12 +5911,13 @@ static void napi_skb_free_stolen_head(struct sk_buff *skb) kmem_cache_free(skbuff_head_cache, skb); } -static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb) +static gro_result_t napi_skb_finish(struct napi_struct *napi, + struct sk_buff *skb, + gro_result_t ret) { switch (ret) { case GRO_NORMAL: - if (netif_receive_skb_internal(skb)) - ret = GRO_DROP; + gro_normal_one(napi, skb); break; case GRO_DROP: @@ -5928,7 +5949,7 @@ gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb) skb_gro_reset_offset(skb); - ret = napi_skb_finish(dev_gro_receive(napi, skb), skb); + ret = napi_skb_finish(napi, skb, dev_gro_receive(napi, skb)); trace_napi_gro_receive_exit(ret); return ret; @@ -5974,26 +5995,6 @@ struct sk_buff *napi_get_frags(struct napi_struct *napi) } EXPORT_SYMBOL(napi_get_frags); -/* Pass the currently batched GRO_NORMAL SKBs up to the stack. 
*/ -static void gro_normal_list(struct napi_struct *napi) -{ - if (!napi->rx_count) - return; - netif_receive_skb_list_internal(&napi->rx_list); - INIT_LIST_HEAD(&napi->rx_list); - napi->rx_count = 0; -} - -/* Queue one GRO_NORMAL SKB up for list processing. If batch size exceeded, - * pass the whole batch up to the stack. - */ -static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb) -{ - list_add_tail(&skb->list, &napi->rx_list); - if (++napi->rx_count >= gro_normal_batch) - gro_normal_list(napi); -} - static gro_result_t napi_frags_finish(struct napi_struct *napi, struct sk_buff *skb, gro_result_t ret)