Message ID | 20241216162735.2047544-3-brianvv@google.com
---|---
State | Accepted
Delegated to | Anthony Nguyen
Series | IDPF Virtchnl: Enhance error reporting & fix locking/workqueue issues
From: Brian Vazquez <brianvv@google.com>
Date: Mon, 16 Dec 2024 16:27:34 +0000

> From: Marco Leogrande <leogrande@google.com>
>
> When a workqueue is created with `WQ_UNBOUND`, its work items are
> served by special worker-pools, whose host workers are not bound to
> any specific CPU. In the default configuration (i.e. when
> `queue_delayed_work` and friends do not specify which CPU to run the
> work item on), `WQ_UNBOUND` allows the work item to be executed on any
> CPU in the same node as the CPU it was enqueued on. While this
> solution potentially sacrifices locality, it avoids contention with
> other processes that might dominate the CPU time of the processor the
> work item was scheduled on.
>
> This is not just a theoretical problem: in a particular scenario, a
> misconfigured process was hogging most of CPU0's time, leaving
> less than 0.5% of it to the kworker. The IDPF workqueues
> that were using the kworker on CPU0 suffered large completion delays
> as a result, causing performance degradation, timeouts and an eventual
> system crash.

Wasn't this inspired by [0]?

[0] https://lore.kernel.org/netdev/20241126035849.6441-11-milena.olech@intel.com

Thanks,
Olek
On Mon, Dec 16, 2024 at 1:11 PM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Brian Vazquez <brianvv@google.com>
> Date: Mon, 16 Dec 2024 16:27:34 +0000
>
> > From: Marco Leogrande <leogrande@google.com>
> >
> > [...]
>
> Wasn't this inspired by [0]?
>
> [0]
> https://lore.kernel.org/netdev/20241126035849.6441-11-milena.olech@intel.com

The root cause is exactly the same, so I do see the similarity, and I'm
not surprised that both were addressed with a similar patch. We hit
this problem some time ago, and the first attempt to fix it was in
August [0].

[0] https://lore.kernel.org/netdev/20240813182747.1770032-4-manojvishy@google.com/

> Thanks,
> Olek
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Brian Vazquez
> Sent: Monday, December 16, 2024 12:13 PM
> To: Lobakin, Aleksander <aleksander.lobakin@intel.com>
> Cc: [...]
> Subject: Re: [Intel-wired-lan] [iwl-next PATCH v4 2/3] idpf: convert
> workqueues to unbound
>
> On Mon, Dec 16, 2024 at 1:11 PM Alexander Lobakin
> <aleksander.lobakin@intel.com> wrote:
> >
> > From: Brian Vazquez <brianvv@google.com>
> > Date: Mon, 16 Dec 2024 16:27:34 +0000
> >
> > > [...]
> >
> > Wasn't this inspired by [0]?
> >
> > [0]
> > https://lore.kernel.org/netdev/20241126035849.6441-11-milena.olech@intel.com
>
> The root cause is exactly the same, so I do see the similarity, and I'm
> not surprised that both were addressed with a similar patch. We hit
> this problem some time ago, and the first attempt to fix it was in
> August [0].
>
> [0] https://lore.kernel.org/netdev/20240813182747.1770032-4-manojvishy@google.com/
>
> > Thanks,
> > Olek

Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
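For context, here is a minimal sketch of the allocation pattern the commit message describes. This is hypothetical module code (the names stats_wq, stats_task and "example-stats" are invented for illustration), not the idpf driver itself: because queue_delayed_work() specifies no CPU, a WQ_UNBOUND workqueue lets any CPU in the node run the item instead of funneling it through the submitting CPU's worker pool.

#include <linux/jiffies.h>
#include <linux/workqueue.h>

static struct workqueue_struct *stats_wq;	/* hypothetical example */
static struct delayed_work stats_task;

static void stats_task_fn(struct work_struct *work)
{
	/* Do the periodic work, then re-arm. No CPU is specified, so on
	 * a WQ_UNBOUND workqueue any CPU in the node may pick this up.
	 */
	queue_delayed_work(stats_wq, &stats_task, msecs_to_jiffies(10000));
}

static int stats_init(void)
{
	/* WQ_UNBOUND: do not tie items to the submitting CPU's pool.
	 * WQ_MEM_RECLAIM: keep a rescuer thread so the queue can make
	 * forward progress under memory pressure.
	 */
	stats_wq = alloc_workqueue("example-stats",
				   WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	if (!stats_wq)
		return -ENOMEM;

	INIT_DELAYED_WORK(&stats_task, stats_task_fn);
	queue_delayed_work(stats_wq, &stats_task, 0);
	return 0;
}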
diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
index 305958c4c230..da1e3525719f 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_main.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
@@ -198,7 +198,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	pci_set_master(pdev);
 	pci_set_drvdata(pdev, adapter);
 
-	adapter->init_wq = alloc_workqueue("%s-%s-init", 0, 0,
+	adapter->init_wq = alloc_workqueue("%s-%s-init",
+					   WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					   dev_driver_string(dev),
 					   dev_name(dev));
 	if (!adapter->init_wq) {
@@ -207,7 +208,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_free;
 	}
 
-	adapter->serv_wq = alloc_workqueue("%s-%s-service", 0, 0,
+	adapter->serv_wq = alloc_workqueue("%s-%s-service",
+					   WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					   dev_driver_string(dev),
 					   dev_name(dev));
 	if (!adapter->serv_wq) {
@@ -216,7 +218,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_serv_wq_alloc;
 	}
 
-	adapter->mbx_wq = alloc_workqueue("%s-%s-mbx", 0, 0,
+	adapter->mbx_wq = alloc_workqueue("%s-%s-mbx",
+					  WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					  dev_driver_string(dev),
 					  dev_name(dev));
 	if (!adapter->mbx_wq) {
@@ -225,7 +228,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_mbx_wq_alloc;
 	}
 
-	adapter->stats_wq = alloc_workqueue("%s-%s-stats", 0, 0,
+	adapter->stats_wq = alloc_workqueue("%s-%s-stats",
+					    WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					    dev_driver_string(dev),
 					    dev_name(dev));
 	if (!adapter->stats_wq) {
@@ -234,7 +238,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_stats_wq_alloc;
 	}
 
-	adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event", 0, 0,
+	adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event",
+					       WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
 					       dev_driver_string(dev),
 					       dev_name(dev));
 	if (!adapter->vc_event_wq) {
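Note that the patch changes only the flags argument of each alloc_workqueue() call (0 becomes WQ_UNBOUND | WQ_MEM_RECLAIM); the allocate-and-unwind structure of idpf_probe() is untouched. A condensed, illustrative sketch of that pattern, simplified to two of the five workqueues (the err variable and label placement are abbreviated here, not copied from the driver):

	adapter->init_wq = alloc_workqueue("%s-%s-init",
					   WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
					   dev_driver_string(dev), dev_name(dev));
	if (!adapter->init_wq)
		return -ENOMEM;

	adapter->serv_wq = alloc_workqueue("%s-%s-service",
					   WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
					   dev_driver_string(dev), dev_name(dev));
	if (!adapter->serv_wq) {
		err = -ENOMEM;
		goto err_serv_wq_alloc;	/* unwind everything allocated so far */
	}

	/* ... the remaining workqueues follow the same pattern ... */

err_serv_wq_alloc:
	destroy_workqueue(adapter->init_wq);
	return err;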