From patchwork Wed Dec 16 18:10:28 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Van Haaren, Harry"
X-Patchwork-Id: 1417339
From: Harry van Haaren
To: ovs-dev@openvswitch.org
Date: Wed, 16 Dec 2020 18:10:28 +0000
Message-Id: <20201216181033.572425-10-harry.van.haaren@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201216181033.572425-1-harry.van.haaren@intel.com>
References: <20201208172753.349273-1-harry.van.haaren@intel.com>
 <20201216181033.572425-1-harry.van.haaren@intel.com>
Cc: i.maximets@ovn.org
Subject: [ovs-dev] [PATCH v7 09/14] dpif-netdev: Move pmd_try_optimize function in file.

This commit moves the dp_netdev_pmd_try_optimize() function to a more
appropriate location in the file. It currently sits in the DPCLS
section, which is not its logical home. This is a pure code move; there
is no functional change.

Signed-off-by: Harry van Haaren
---
 lib/dpif-netdev.c | 146 +++++++++++++++++++++++-----------------------
 1 file changed, 73 insertions(+), 73 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 4c074995c..eea6c11f0 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -5638,6 +5638,79 @@ reload:
     return NULL;
 }
 
+static inline void
+dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
+                           struct polled_queue *poll_list, int poll_cnt)
+{
+    struct dpcls *cls;
+    uint64_t tot_idle = 0, tot_proc = 0;
+    unsigned int pmd_load = 0;
+
+    if (pmd->ctx.now > pmd->rxq_next_cycle_store) {
+        uint64_t curr_tsc;
+        struct pmd_auto_lb *pmd_alb = &pmd->dp->pmd_alb;
+        if (pmd_alb->is_enabled && !pmd->isolated
+            && (pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE] >=
+                                       pmd->prev_stats[PMD_CYCLES_ITER_IDLE])
+            && (pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY] >=
+                                        pmd->prev_stats[PMD_CYCLES_ITER_BUSY]))
+            {
+            tot_idle = pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE] -
+                       pmd->prev_stats[PMD_CYCLES_ITER_IDLE];
+            tot_proc = pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY] -
+                       pmd->prev_stats[PMD_CYCLES_ITER_BUSY];
+
+            if (tot_proc) {
+                pmd_load = ((tot_proc * 100) / (tot_idle + tot_proc));
+            }
+
+            if (pmd_load >= ALB_PMD_LOAD_THRESHOLD) {
+                atomic_count_inc(&pmd->pmd_overloaded);
+            } else {
+                atomic_count_set(&pmd->pmd_overloaded, 0);
+            }
+        }
+
+        pmd->prev_stats[PMD_CYCLES_ITER_IDLE] =
+                        pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE];
+        pmd->prev_stats[PMD_CYCLES_ITER_BUSY] =
+                        pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY];
+
+        /* Get the cycles that were used to process each queue and store. */
+        for (unsigned i = 0; i < poll_cnt; i++) {
+            uint64_t rxq_cyc_curr = dp_netdev_rxq_get_cycles(poll_list[i].rxq,
+                                                        RXQ_CYCLES_PROC_CURR);
+            dp_netdev_rxq_set_intrvl_cycles(poll_list[i].rxq, rxq_cyc_curr);
+            dp_netdev_rxq_set_cycles(poll_list[i].rxq, RXQ_CYCLES_PROC_CURR,
+                                     0);
+        }
+        curr_tsc = cycles_counter_update(&pmd->perf_stats);
+        if (pmd->intrvl_tsc_prev) {
+            /* There is a prev timestamp, store a new intrvl cycle count. */
+            atomic_store_relaxed(&pmd->intrvl_cycles,
+                                 curr_tsc - pmd->intrvl_tsc_prev);
+        }
+        pmd->intrvl_tsc_prev = curr_tsc;
+        /* Start new measuring interval */
+        pmd->rxq_next_cycle_store = pmd->ctx.now + PMD_RXQ_INTERVAL_LEN;
+    }
+
+    if (pmd->ctx.now > pmd->next_optimization) {
+        /* Try to obtain the flow lock to block out revalidator threads.
+         * If not possible, just try next time. */
+        if (!ovs_mutex_trylock(&pmd->flow_mutex)) {
+            /* Optimize each classifier */
+            CMAP_FOR_EACH (cls, node, &pmd->classifiers) {
+                dpcls_sort_subtable_vector(cls);
+            }
+            ovs_mutex_unlock(&pmd->flow_mutex);
+            /* Start new measuring interval */
+            pmd->next_optimization = pmd->ctx.now
+                                     + DPCLS_OPTIMIZATION_INTERVAL;
+        }
+    }
+}
+
 static void
 dp_netdev_disable_upcall(struct dp_netdev *dp)
     OVS_ACQUIRES(dp->upcall_rwlock)
@@ -8304,79 +8377,6 @@ dpcls_sort_subtable_vector(struct dpcls *cls)
     pvector_publish(pvec);
 }
 
-static inline void
-dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
-                           struct polled_queue *poll_list, int poll_cnt)
-{
-    struct dpcls *cls;
-    uint64_t tot_idle = 0, tot_proc = 0;
-    unsigned int pmd_load = 0;
-
-    if (pmd->ctx.now > pmd->rxq_next_cycle_store) {
-        uint64_t curr_tsc;
-        struct pmd_auto_lb *pmd_alb = &pmd->dp->pmd_alb;
-        if (pmd_alb->is_enabled && !pmd->isolated
-            && (pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE] >=
-                                       pmd->prev_stats[PMD_CYCLES_ITER_IDLE])
-            && (pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY] >=
-                                        pmd->prev_stats[PMD_CYCLES_ITER_BUSY]))
-            {
-            tot_idle = pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE] -
-                       pmd->prev_stats[PMD_CYCLES_ITER_IDLE];
-            tot_proc = pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY] -
-                       pmd->prev_stats[PMD_CYCLES_ITER_BUSY];
-
-            if (tot_proc) {
-                pmd_load = ((tot_proc * 100) / (tot_idle + tot_proc));
-            }
-
-            if (pmd_load >= ALB_PMD_LOAD_THRESHOLD) {
-                atomic_count_inc(&pmd->pmd_overloaded);
-            } else {
-                atomic_count_set(&pmd->pmd_overloaded, 0);
-            }
-        }
-
-        pmd->prev_stats[PMD_CYCLES_ITER_IDLE] =
-                        pmd->perf_stats.counters.n[PMD_CYCLES_ITER_IDLE];
-        pmd->prev_stats[PMD_CYCLES_ITER_BUSY] =
-                        pmd->perf_stats.counters.n[PMD_CYCLES_ITER_BUSY];
-
-        /* Get the cycles that were used to process each queue and store. */
-        for (unsigned i = 0; i < poll_cnt; i++) {
-            uint64_t rxq_cyc_curr = dp_netdev_rxq_get_cycles(poll_list[i].rxq,
-                                                        RXQ_CYCLES_PROC_CURR);
-            dp_netdev_rxq_set_intrvl_cycles(poll_list[i].rxq, rxq_cyc_curr);
-            dp_netdev_rxq_set_cycles(poll_list[i].rxq, RXQ_CYCLES_PROC_CURR,
-                                     0);
-        }
-        curr_tsc = cycles_counter_update(&pmd->perf_stats);
-        if (pmd->intrvl_tsc_prev) {
-            /* There is a prev timestamp, store a new intrvl cycle count. */
-            atomic_store_relaxed(&pmd->intrvl_cycles,
-                                 curr_tsc - pmd->intrvl_tsc_prev);
-        }
-        pmd->intrvl_tsc_prev = curr_tsc;
-        /* Start new measuring interval */
-        pmd->rxq_next_cycle_store = pmd->ctx.now + PMD_RXQ_INTERVAL_LEN;
-    }
-
-    if (pmd->ctx.now > pmd->next_optimization) {
-        /* Try to obtain the flow lock to block out revalidator threads.
-         * If not possible, just try next time. */
-        if (!ovs_mutex_trylock(&pmd->flow_mutex)) {
-            /* Optimize each classifier */
-            CMAP_FOR_EACH (cls, node, &pmd->classifiers) {
-                dpcls_sort_subtable_vector(cls);
-            }
-            ovs_mutex_unlock(&pmd->flow_mutex);
-            /* Start new measuring interval */
-            pmd->next_optimization = pmd->ctx.now
-                                     + DPCLS_OPTIMIZATION_INTERVAL;
-        }
-    }
-}
-
 /* Insert 'rule' into 'cls'. */
 static void
 dpcls_insert(struct dpcls *cls, struct dpcls_rule *rule,
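
As an aside for reviewers reading the moved function for the first
time: the auto-load-balance half of dp_netdev_pmd_try_optimize()
reduces to one interval calculation, busy cycles as a percentage of
(idle + busy) cycles. Below is a minimal standalone C sketch of just
that arithmetic; the helper name and example threshold value are
illustrative stand-ins, not OVS API (the real constant is
ALB_PMD_LOAD_THRESHOLD in lib/dpif-netdev.c).

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in for ALB_PMD_LOAD_THRESHOLD. */
    #define EXAMPLE_LOAD_THRESHOLD 95

    /* Busy share of the interval, computed the same way as in
     * dp_netdev_pmd_try_optimize():
     *   pmd_load = (tot_proc * 100) / (tot_idle + tot_proc). */
    static unsigned int
    pmd_load_percent(uint64_t tot_idle, uint64_t tot_proc)
    {
        if (!tot_proc) {
            return 0;
        }
        return (unsigned int) ((tot_proc * 100) / (tot_idle + tot_proc));
    }

    int
    main(void)
    {
        /* 30k idle cycles vs 70k busy cycles -> 70% load. */
        unsigned int load = pmd_load_percent(30000, 70000);
        printf("pmd load: %u%%, overloaded: %s\n", load,
               load >= EXAMPLE_LOAD_THRESHOLD ? "yes" : "no");
        return 0;
    }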
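
The other pattern worth noting is the trylock guard around the DPCLS
resort: ovs_mutex_trylock() returns 0 on success, so the PMD only
sorts the subtable vectors when no revalidator holds the flow mutex,
and otherwise skips and retries on a later iteration rather than
blocking. A pthread-based sketch of the same pattern, with
hypothetical names standing in for the OVS pieces:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t flow_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-in for the CMAP_FOR_EACH / dpcls_sort_subtable_vector()
     * walk over the PMD's classifiers. */
    static void
    optimize_classifiers(void)
    {
    }

    /* Take the lock only if it is free; pthread_mutex_trylock()
     * returns 0 on success, mirroring the !ovs_mutex_trylock() test
     * in the patch. On contention we return false and leave the
     * optimization for a later interval, so the polling thread never
     * blocks behind a revalidator. */
    static bool
    try_optimize(void)
    {
        if (pthread_mutex_trylock(&flow_mutex)) {
            return false;
        }
        optimize_classifiers();
        pthread_mutex_unlock(&flow_mutex);
        return true;
    }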