From patchwork Thu Sep 19 16:54:47 2019
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 1167784
X-Patchwork-Delegate: davem@davemloft.net
From: xiangxia.m.yue@gmail.com
To: pshelar@ovn.org, gvrose8192@gmail.com
Cc: netdev@vger.kernel.org, Tonghao Zhang
Subject: [PATCH net-next 1/7] net: openvswitch: add flow-mask cache for performance
Date: Fri, 20 Sep 2019 00:54:47 +0800
Message-Id: <1568912093-68535-2-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Tonghao Zhang

The idea for this optimization comes from a patch committed to the
Open vSwitch (userspace) repository in 2014, authored by Pravin B
Shelar. To get high performance, I have implemented it again here;
later patches in this series will use it.
Pravin B Shelar, says:

| On every packet OVS needs to lookup flow-table with every
| mask until it finds a match. The packet flow-key is first
| masked with mask in the list and then the masked key is
| looked up in flow-table. Therefore number of masks can
| affect packet processing performance.

Link: https://github.com/openvswitch/ovs/commit/5604935e4e1cbc16611d2d97f50b717aa31e8ec5
Signed-off-by: Tonghao Zhang
---
 net/openvswitch/datapath.c   |   3 +-
 net/openvswitch/flow_table.c | 109 +++++++++++++++++++++++++++++++++++++------
 net/openvswitch/flow_table.h |  11 ++++-
 3 files changed, 107 insertions(+), 16 deletions(-)

diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index dde9d76..3d7b1c4 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -227,7 +227,8 @@ void ovs_dp_process_packet(struct sk_buff *skb, struct sw_flow_key *key)
     stats = this_cpu_ptr(dp->stats_percpu);
 
     /* Look up flow. */
-    flow = ovs_flow_tbl_lookup_stats(&dp->table, key, &n_mask_hit);
+    flow = ovs_flow_tbl_lookup_stats(&dp->table, key, skb_get_hash(skb),
+                     &n_mask_hit);
     if (unlikely(!flow)) {
         struct dp_upcall_info upcall;
 
diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index cf3582c..3d515c0 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -36,6 +36,10 @@
 #define TBL_MIN_BUCKETS 1024
 #define REHASH_INTERVAL (10 * 60 * HZ)
 
+#define MC_HASH_SHIFT 8
+#define MC_HASH_ENTRIES (1u << MC_HASH_SHIFT)
+#define MC_HASH_SEGS ((sizeof(uint32_t) * 8) / MC_HASH_SHIFT)
+
 static struct kmem_cache *flow_cache;
 struct kmem_cache *flow_stats_cache __read_mostly;
 
@@ -168,10 +172,15 @@ int ovs_flow_tbl_init(struct flow_table *table)
 {
     struct table_instance *ti, *ufid_ti;
 
-    ti = table_instance_alloc(TBL_MIN_BUCKETS);
+    table->mask_cache = __alloc_percpu(sizeof(struct mask_cache_entry) *
+                       MC_HASH_ENTRIES,
+                       __alignof__(struct mask_cache_entry));
+    if (!table->mask_cache)
+        return -ENOMEM;
+
+    ti = table_instance_alloc(TBL_MIN_BUCKETS);
     if (!ti)
-        return -ENOMEM;
+        goto free_mask_cache;
 
     ufid_ti = table_instance_alloc(TBL_MIN_BUCKETS);
     if (!ufid_ti)
@@ -187,6 +196,8 @@ int ovs_flow_tbl_init(struct flow_table *table)
 
 free_ti:
     __table_instance_destroy(ti);
+free_mask_cache:
+    free_percpu(table->mask_cache);
     return -ENOMEM;
 }
 
@@ -243,6 +254,7 @@ void ovs_flow_tbl_destroy(struct flow_table *table)
     struct table_instance *ti = rcu_dereference_raw(table->ti);
     struct table_instance *ufid_ti = rcu_dereference_raw(table->ufid_ti);
 
+    free_percpu(table->mask_cache);
     table_instance_destroy(ti, ufid_ti, false);
 }
 
@@ -425,7 +437,8 @@ static bool ovs_flow_cmp_unmasked_key(const struct sw_flow *flow,
 
 static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
                       const struct sw_flow_key *unmasked,
-                      const struct sw_flow_mask *mask)
+                      const struct sw_flow_mask *mask,
+                      u32 *n_mask_hit)
 {
     struct sw_flow *flow;
     struct hlist_head *head;
@@ -435,6 +448,8 @@ static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
     ovs_flow_mask_key(&masked_key, unmasked, false, mask);
     hash = flow_hash(&masked_key, &mask->range);
     head = find_bucket(ti, hash);
+    (*n_mask_hit)++;
+
     hlist_for_each_entry_rcu(flow, head, flow_table.node[ti->node_ver]) {
         if (flow->mask == mask && flow->flow_table.hash == hash &&
             flow_cmp_masked_key(flow, &masked_key, &mask->range))
@@ -443,30 +458,97 @@ static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
     return NULL;
 }
 
-struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
-                    const struct sw_flow_key *key,
-                    u32 *n_mask_hit)
+static struct sw_flow *flow_lookup(struct flow_table *tbl,
+                   struct table_instance *ti,
+                   const struct sw_flow_key *key,
+                   u32 *n_mask_hit)
 {
-    struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
     struct sw_flow_mask *mask;
     struct sw_flow *flow;
 
-    *n_mask_hit = 0;
     list_for_each_entry_rcu(mask, &tbl->mask_list, list) {
-        (*n_mask_hit)++;
-        flow = masked_flow_lookup(ti, key, mask);
+        flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
         if (flow)  /* Found */
             return flow;
     }
 
     return NULL;
 }
 
+/*
+ * mask_cache maps flow to probable mask. This cache is not tightly
+ * coupled cache, It means updates to mask list can result in inconsistent
+ * cache entry in mask cache.
+ * This is per cpu cache and is divided in MC_HASH_SEGS segments.
+ * In case of a hash collision the entry is hashed in next segment.
+ * */
+struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
+                      const struct sw_flow_key *key,
+                      u32 skb_hash,
+                      u32 *n_mask_hit)
+{
+    struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
+    struct mask_cache_entry *entries, *ce, *del;
+    struct sw_flow *flow;
+    u32 hash = skb_hash;
+    int seg;
+
+    *n_mask_hit = 0;
+    if (unlikely(!skb_hash))
+        return flow_lookup(tbl, ti, key, n_mask_hit);
+
+    del = NULL;
+    entries = this_cpu_ptr(tbl->mask_cache);
+
+    for (seg = 0; seg < MC_HASH_SEGS; seg++) {
+        int index;
+
+        index = hash & (MC_HASH_ENTRIES - 1);
+        ce = &entries[index];
+
+        if (ce->skb_hash == skb_hash) {
+            struct sw_flow_mask *mask;
+            int i;
+
+            i = 0;
+            list_for_each_entry_rcu(mask, &tbl->mask_list, list) {
+                if (ce->mask_index == i++) {
+                    flow = masked_flow_lookup(ti, key, mask,
+                                  n_mask_hit);
+                    if (flow)  /* Found */
+                        return flow;
+
+                    break;
+                }
+            }
+
+            del = ce;
+            break;
+        }
+
+        if (!del || (del->skb_hash && !ce->skb_hash)) {
+            del = ce;
+        }
+
+        hash >>= MC_HASH_SHIFT;
+    }
+
+    flow = flow_lookup(tbl, ti, key, n_mask_hit);
+
+    if (flow) {
+        del->skb_hash = skb_hash;
+        del->mask_index = (*n_mask_hit - 1);
+    }
+
+    return flow;
+}
+
 struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *tbl,
                     const struct sw_flow_key *key)
 {
+    struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
     u32 __always_unused n_mask_hit;
 
-    return ovs_flow_tbl_lookup_stats(tbl, key, &n_mask_hit);
+    return flow_lookup(tbl, ti, key, &n_mask_hit);
 }
 
 struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl,
@@ -475,10 +557,11 @@ struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl,
     struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
     struct sw_flow_mask *mask;
     struct sw_flow *flow;
+    u32 __always_unused n_mask_hit;
 
     /* Always called under ovs-mutex. */
     list_for_each_entry(mask, &tbl->mask_list, list) {
-        flow = masked_flow_lookup(ti, match->key, mask);
+        flow = masked_flow_lookup(ti, match->key, mask, &n_mask_hit);
         if (flow && ovs_identifier_is_key(&flow->id) &&
             ovs_flow_cmp_unmasked_key(flow, match))
             return flow;
@@ -631,7 +714,7 @@ static int flow_mask_insert(struct flow_table *tbl, struct sw_flow *flow,
             return -ENOMEM;
         mask->key = new->key;
         mask->range = new->range;
-        list_add_rcu(&mask->list, &tbl->mask_list);
+        list_add_tail_rcu(&mask->list, &tbl->mask_list);
     } else {
         BUG_ON(!mask->ref_count);
         mask->ref_count++;
diff --git a/net/openvswitch/flow_table.h b/net/openvswitch/flow_table.h
index bc52045..04b6b1c 100644
--- a/net/openvswitch/flow_table.h
+++ b/net/openvswitch/flow_table.h
@@ -22,6 +22,11 @@
 
 #include "flow.h"
 
+struct mask_cache_entry {
+    u32 skb_hash;
+    u32 mask_index;
+};
+
 struct table_instance {
     struct hlist_head *buckets;
     unsigned int n_buckets;
@@ -34,6 +39,7 @@ struct table_instance {
 struct flow_table {
     struct table_instance __rcu *ti;
     struct table_instance __rcu *ufid_ti;
+    struct mask_cache_entry __percpu *mask_cache;
     struct list_head mask_list;
     unsigned long last_rehash;
     unsigned int count;
@@ -60,8 +66,9 @@ int ovs_flow_tbl_insert(struct flow_table *table, struct sw_flow *flow,
 struct sw_flow *ovs_flow_tbl_dump_next(struct table_instance *table,
                        u32 *bucket, u32 *idx);
 struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *,
-                    const struct sw_flow_key *,
-                    u32 *n_mask_hit);
+                      const struct sw_flow_key *,
+                      u32 skb_hash,
+                      u32 *n_mask_hit);
 struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *,
                     const struct sw_flow_key *);
 struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl,

From patchwork Thu Sep 19 16:54:48 2019
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 1167785
X-Patchwork-Delegate: davem@davemloft.net
From: xiangxia.m.yue@gmail.com
To: pshelar@ovn.org, gvrose8192@gmail.com
Cc: netdev@vger.kernel.org, Tonghao Zhang
Subject: [PATCH net-next 2/7] net: openvswitch: convert mask list in mask array
Date: Fri, 20 Sep 2019 00:54:48 +0800
Message-Id: <1568912093-68535-3-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Tonghao Zhang

Port the code to upstream Linux, with minor changes.

Pravin B Shelar, says:

| mask caches index of mask in mask_list. On packet recv OVS
| need to traverse mask-list to get cached mask. Therefore array
| is better for retrieving cached mask. This also allows better
| cache replacement algorithm by directly checking mask's existence.
Link: https://github.com/openvswitch/ovs/commit/d49fc3ff53c65e4eca9cabd52ac63396746a7ef5
Signed-off-by: Tonghao Zhang
---
 net/openvswitch/flow.h       |   1 -
 net/openvswitch/flow_table.c | 218 +++++++++++++++++++++++++++++++++----------
 net/openvswitch/flow_table.h |   8 +-
 3 files changed, 175 insertions(+), 52 deletions(-)

diff --git a/net/openvswitch/flow.h b/net/openvswitch/flow.h
index b830d5f..8080518 100644
--- a/net/openvswitch/flow.h
+++ b/net/openvswitch/flow.h
@@ -166,7 +166,6 @@ struct sw_flow_key_range {
 struct sw_flow_mask {
     int ref_count;
     struct rcu_head rcu;
-    struct list_head list;
     struct sw_flow_key_range range;
     struct sw_flow_key key;
 };
diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 3d515c0..99954fa 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -34,6 +34,7 @@
 #include
 
 #define TBL_MIN_BUCKETS 1024
+#define MASK_ARRAY_SIZE_MIN 16
 #define REHASH_INTERVAL (10 * 60 * HZ)
 
 #define MC_HASH_SHIFT 8
@@ -168,9 +169,59 @@ static struct table_instance *table_instance_alloc(int new_size)
     return ti;
 }
 
+static void mask_array_rcu_cb(struct rcu_head *rcu)
+{
+    struct mask_array *ma = container_of(rcu, struct mask_array, rcu);
+
+    kfree(ma);
+}
+
+static struct mask_array *tbl_mask_array_alloc(int size)
+{
+    struct mask_array *new;
+
+    size = max(MASK_ARRAY_SIZE_MIN, size);
+    new = kzalloc(sizeof(struct mask_array) +
+              sizeof(struct sw_flow_mask *) * size, GFP_KERNEL);
+    if (!new)
+        return NULL;
+
+    new->count = 0;
+    new->max = size;
+
+    return new;
+}
+
+static int tbl_mask_array_realloc(struct flow_table *tbl, int size)
+{
+    struct mask_array *old;
+    struct mask_array *new;
+
+    new = tbl_mask_array_alloc(size);
+    if (!new)
+        return -ENOMEM;
+
+    old = ovsl_dereference(tbl->mask_array);
+    if (old) {
+        int i;
+
+        for (i = 0; i < old->max; i++) {
+            if (ovsl_dereference(old->masks[i]))
+                new->masks[new->count++] = old->masks[i];
+        }
+    }
+    rcu_assign_pointer(tbl->mask_array, new);
+
+    if (old)
+        call_rcu(&old->rcu, mask_array_rcu_cb);
+
+    return 0;
+}
+
 int ovs_flow_tbl_init(struct flow_table *table)
 {
     struct table_instance *ti, *ufid_ti;
+    struct mask_array *ma;
 
     table->mask_cache = __alloc_percpu(sizeof(struct mask_cache_entry) *
                        MC_HASH_ENTRIES,
@@ -178,9 +229,13 @@ int ovs_flow_tbl_init(struct flow_table *table)
     if (!table->mask_cache)
         return -ENOMEM;
 
+    ma = tbl_mask_array_alloc(MASK_ARRAY_SIZE_MIN);
+    if (!ma)
+        goto free_mask_cache;
+
     ti = table_instance_alloc(TBL_MIN_BUCKETS);
     if (!ti)
-        goto free_mask_cache;
+        goto free_mask_array;
 
     ufid_ti = table_instance_alloc(TBL_MIN_BUCKETS);
     if (!ufid_ti)
@@ -188,7 +243,7 @@ int ovs_flow_tbl_init(struct flow_table *table)
 
     rcu_assign_pointer(table->ti, ti);
     rcu_assign_pointer(table->ufid_ti, ufid_ti);
-    INIT_LIST_HEAD(&table->mask_list);
+    rcu_assign_pointer(table->mask_array, ma);
     table->last_rehash = jiffies;
     table->count = 0;
     table->ufid_count = 0;
@@ -196,6 +251,8 @@ int ovs_flow_tbl_init(struct flow_table *table)
 
 free_ti:
     __table_instance_destroy(ti);
+free_mask_array:
+    kfree(ma);
 free_mask_cache:
     free_percpu(table->mask_cache);
     return -ENOMEM;
@@ -255,6 +312,7 @@ void ovs_flow_tbl_destroy(struct flow_table *table)
     struct table_instance *ufid_ti = rcu_dereference_raw(table->ufid_ti);
 
     free_percpu(table->mask_cache);
+    kfree(rcu_dereference_raw(table->mask_array));
     table_instance_destroy(ti, ufid_ti, false);
 }
 
@@ -460,17 +518,27 @@ static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
 
 static struct sw_flow *flow_lookup(struct flow_table *tbl,
                    struct table_instance *ti,
+                   struct mask_array *ma,
                    const struct sw_flow_key *key,
-                   u32 *n_mask_hit)
+                   u32 *n_mask_hit,
+                   u32 *index)
 {
-    struct sw_flow_mask *mask;
     struct sw_flow *flow;
+    int i;
 
-    list_for_each_entry_rcu(mask, &tbl->mask_list, list) {
-        flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
-        if (flow)  /* Found */
-            return flow;
+    for (i = 0; i < ma->max; i++) {
+        struct sw_flow_mask *mask;
+
+        mask = rcu_dereference_ovsl(ma->masks[i]);
+        if (mask) {
+            flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
+            if (flow) { /* Found */
+                *index = i;
+                return flow;
+            }
+        }
     }
 
     return NULL;
 }
@@ -486,6 +554,7 @@ struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
                       u32 skb_hash,
                       u32 *n_mask_hit)
 {
+    struct mask_array *ma = rcu_dereference_ovsl(tbl->mask_array);
     struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
     struct mask_cache_entry *entries, *ce, *del;
     struct sw_flow *flow;
@@ -493,8 +562,11 @@ struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
     int seg;
 
     *n_mask_hit = 0;
-    if (unlikely(!skb_hash))
-        return flow_lookup(tbl, ti, key, n_mask_hit);
+    if (unlikely(!skb_hash)) {
+        u32 __always_unused mask_index;
+
+        return flow_lookup(tbl, ti, ma, key, n_mask_hit, &mask_index);
+    }
 
     del = NULL;
     entries = this_cpu_ptr(tbl->mask_cache);
@@ -507,37 +579,33 @@ struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
 
         if (ce->skb_hash == skb_hash) {
             struct sw_flow_mask *mask;
-            int i;
-
-            i = 0;
-            list_for_each_entry_rcu(mask, &tbl->mask_list, list) {
-                if (ce->mask_index == i++) {
-                    flow = masked_flow_lookup(ti, key, mask,
-                                  n_mask_hit);
-                    if (flow)  /* Found */
-                        return flow;
-
-                    break;
-                }
+            struct sw_flow *flow;
+
+            mask = rcu_dereference_ovsl(ma->masks[ce->mask_index]);
+            if (mask) {
+                flow = masked_flow_lookup(ti, key, mask,
+                              n_mask_hit);
+                if (flow)  /* Found */
+                    return flow;
             }
 
             del = ce;
             break;
         }
 
-        if (!del || (del->skb_hash && !ce->skb_hash)) {
+        if (!del || (del->skb_hash && !ce->skb_hash) ||
+            (rcu_dereference_ovsl(ma->masks[del->mask_index]) &&
+             !rcu_dereference_ovsl(ma->masks[ce->mask_index]))) {
             del = ce;
         }
 
         hash >>= MC_HASH_SHIFT;
     }
 
-    flow = flow_lookup(tbl, ti, key, n_mask_hit);
+    flow = flow_lookup(tbl, ti, ma, key, n_mask_hit, &del->mask_index);
 
-    if (flow) {
+    if (flow)
         del->skb_hash = skb_hash;
-        del->mask_index = (*n_mask_hit - 1);
-    }
 
     return flow;
 }
@@ -546,26 +614,38 @@ struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *tbl,
                     const struct sw_flow_key *key)
 {
     struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
+    struct mask_array *ma = rcu_dereference_ovsl(tbl->mask_array);
+    u32 __always_unused n_mask_hit;
+    u32 __always_unused index;
 
-    return flow_lookup(tbl, ti, key, &n_mask_hit);
+    return flow_lookup(tbl, ti, ma, key, &n_mask_hit, &index);
 }
 
 struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl,
                       const struct sw_flow_match *match)
 {
-    struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
-    struct sw_flow_mask *mask;
-    struct sw_flow *flow;
-    u32 __always_unused n_mask_hit;
+    struct mask_array *ma = ovsl_dereference(tbl->mask_array);
+    int i;
 
     /* Always called under ovs-mutex. */
-    list_for_each_entry(mask, &tbl->mask_list, list) {
+    for (i = 0; i < ma->max; i++) {
+        struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
+        u32 __always_unused n_mask_hit;
+        struct sw_flow_mask *mask;
+        struct sw_flow *flow;
+
+        mask = ovsl_dereference(ma->masks[i]);
+        if (!mask)
+            continue;
+
         flow = masked_flow_lookup(ti, match->key, mask, &n_mask_hit);
         if (flow && ovs_identifier_is_key(&flow->id) &&
-            ovs_flow_cmp_unmasked_key(flow, match))
+            ovs_flow_cmp_unmasked_key(flow, match)) {
             return flow;
+        }
     }
+
     return NULL;
 }
 
@@ -611,13 +691,8 @@ struct sw_flow *ovs_flow_tbl_lookup_ufid(struct flow_table *tbl,
 
 int ovs_flow_tbl_num_masks(const struct flow_table *table)
 {
-    struct sw_flow_mask *mask;
-    int num = 0;
-
-    list_for_each_entry(mask, &table->mask_list, list)
-        num++;
-
-    return num;
+    struct mask_array *ma = rcu_dereference_ovsl(table->mask_array);
+    return ma->count;
 }
 
 static struct table_instance *table_instance_expand(struct table_instance *ti,
@@ -638,8 +713,19 @@ static void flow_mask_remove(struct flow_table *tbl, struct sw_flow_mask *mask)
         mask->ref_count--;
 
         if (!mask->ref_count) {
-            list_del_rcu(&mask->list);
-            kfree_rcu(mask, rcu);
+            struct mask_array *ma;
+            int i;
+
+            ma = ovsl_dereference(tbl->mask_array);
+            for (i = 0; i < ma->max; i++) {
+                if (mask == ovsl_dereference(ma->masks[i])) {
+                    RCU_INIT_POINTER(ma->masks[i], NULL);
+                    ma->count--;
+                    kfree_rcu(mask, rcu);
+                    return;
+                }
+            }
+            BUG();
         }
     }
 }
@@ -689,13 +775,16 @@ static bool mask_equal(const struct sw_flow_mask *a,
 static struct sw_flow_mask *flow_mask_find(const struct flow_table *tbl,
                        const struct sw_flow_mask *mask)
 {
-    struct list_head *ml;
+    struct mask_array *ma;
+    int i;
 
-    list_for_each(ml, &tbl->mask_list) {
-        struct sw_flow_mask *m;
-        m = container_of(ml, struct sw_flow_mask, list);
-        if (mask_equal(mask, m))
-            return m;
+    ma = ovsl_dereference(tbl->mask_array);
+    for (i = 0; i < ma->max; i++) {
+        struct sw_flow_mask *t;
+        t = ovsl_dereference(ma->masks[i]);
+
+        if (t && mask_equal(mask, t))
+            return t;
     }
 
     return NULL;
@@ -706,15 +795,44 @@ static int flow_mask_insert(struct flow_table *tbl, struct sw_flow *flow,
                 const struct sw_flow_mask *new)
 {
     struct sw_flow_mask *mask;
+
     mask = flow_mask_find(tbl, new);
     if (!mask) {
+        struct mask_array *ma;
+        int i;
+
         /* Allocate a new mask if none exsits. */
         mask = mask_alloc();
         if (!mask)
             return -ENOMEM;
         mask->key = new->key;
         mask->range = new->range;
-        list_add_tail_rcu(&mask->list, &tbl->mask_list);
+
+        /* Add mask to mask-list. */
+        ma = ovsl_dereference(tbl->mask_array);
+        if (ma->count >= ma->max) {
+            int err;
+
+            err = tbl_mask_array_realloc(tbl, ma->max +
+                             MASK_ARRAY_SIZE_MIN);
+            if (err) {
+                kfree(mask);
+                return err;
+            }
+
+            ma = ovsl_dereference(tbl->mask_array);
+        }
+
+        for (i = 0; i < ma->max; i++) {
+            const struct sw_flow_mask *t;
+
+            t = ovsl_dereference(ma->masks[i]);
+            if (!t) {
+                rcu_assign_pointer(ma->masks[i], mask);
+                ma->count++;
+                break;
+            }
+        }
     } else {
         BUG_ON(!mask->ref_count);
         mask->ref_count++;
diff --git a/net/openvswitch/flow_table.h b/net/openvswitch/flow_table.h
index 04b6b1c..8a5cea6 100644
--- a/net/openvswitch/flow_table.h
+++ b/net/openvswitch/flow_table.h
@@ -27,6 +27,12 @@ struct mask_cache_entry {
     u32 mask_index;
 };
 
+struct mask_array {
+    struct rcu_head rcu;
+    int count, max;
+    struct sw_flow_mask __rcu *masks[];
+};
+
 struct table_instance {
     struct hlist_head *buckets;
     unsigned int n_buckets;
@@ -40,7 +46,7 @@ struct flow_table {
     struct table_instance __rcu *ti;
     struct table_instance __rcu *ufid_ti;
     struct mask_cache_entry __percpu *mask_cache;
-    struct list_head mask_list;
+    struct mask_array __rcu *mask_array;
     unsigned long last_rehash;
     unsigned int count;
     unsigned int ufid_count;

From patchwork Thu Sep 19 16:54:49 2019
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 1167786
X-Patchwork-Delegate: davem@davemloft.net
From: xiangxia.m.yue@gmail.com
To: pshelar@ovn.org, gvrose8192@gmail.com
Cc: netdev@vger.kernel.org, Tonghao Zhang
Subject: [PATCH net-next 3/7] net: openvswitch: shrink the mask array if necessary
Date: Fri, 20 Sep 2019 00:54:49 +0800
Message-Id: <1568912093-68535-4-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Tonghao Zhang

When creating and inserting a flow mask, if there is no available
slot, we realloc the mask array. When removing a flow mask, we shrink
the mask array if necessary.

Signed-off-by: Tonghao Zhang
---
 net/openvswitch/flow_table.c | 33 +++++++++++++++++++++++----------
 1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 99954fa..9c72aab 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -701,6 +701,23 @@ static struct table_instance *table_instance_expand(struct table_instance *ti,
     return table_instance_rehash(ti, ti->n_buckets * 2, ufid);
 }
 
+static void tbl_mask_array_delete_mask(struct mask_array *ma,
+                       struct sw_flow_mask *mask)
+{
+    int i;
+
+    /* Remove the deleted mask pointers from the array */
+    for (i = 0; i < ma->max; i++) {
+        if (mask == ovsl_dereference(ma->masks[i])) {
+            RCU_INIT_POINTER(ma->masks[i], NULL);
+            ma->count--;
+            kfree_rcu(mask, rcu);
+            return;
+        }
+    }
+    BUG();
+}
+
 /* Remove 'mask' from the mask list, if it is not needed any more.
*/ static void flow_mask_remove(struct flow_table *tbl, struct sw_flow_mask *mask) { @@ -714,18 +731,14 @@ static void flow_mask_remove(struct flow_table *tbl, struct sw_flow_mask *mask) if (!mask->ref_count) { struct mask_array *ma; - int i; ma = ovsl_dereference(tbl->mask_array); - for (i = 0; i < ma->max; i++) { - if (mask == ovsl_dereference(ma->masks[i])) { - RCU_INIT_POINTER(ma->masks[i], NULL); - ma->count--; - kfree_rcu(mask, rcu); - return; - } - } - BUG(); + tbl_mask_array_delete_mask(ma, mask); + + /* Shrink the mask array if necessary. */ + if (ma->max >= (MASK_ARRAY_SIZE_MIN * 2) && + ma->count <= (ma->max / 3)) + tbl_mask_array_realloc(tbl, ma->max / 2); } } } From patchwork Thu Sep 19 16:54:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tonghao Zhang X-Patchwork-Id: 1167787 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.b="B+tje5Gp"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 46f7pq0QXDz9sNf for ; Thu, 26 Sep 2019 18:47:31 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729787AbfIZIra (ORCPT ); Thu, 26 Sep 2019 04:47:30 -0400 Received: from mail-pf1-f196.google.com ([209.85.210.196]:46772 "EHLO mail-pf1-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726215AbfIZIr3 (ORCPT ); Thu, 26 Sep 2019 04:47:29 -0400 Received: by 
From: xiangxia.m.yue@gmail.com
To: pshelar@ovn.org, gvrose8192@gmail.com
Cc: netdev@vger.kernel.org, Tonghao Zhang
Subject: [PATCH net-next 4/7] net: openvswitch: optimize flow mask cache hash collision
Date: Fri, 20 Sep 2019 00:54:50 +0800
Message-Id: <1568912093-68535-5-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>

From: Tonghao Zhang

Port the code to upstream Linux with minor changes.

Pravin B Shelar says:
| In case hash collision on mask cache, OVS does extra flow
| lookup. Following patch avoid it.

Link: https://github.com/openvswitch/ovs/commit/0e6efbe2712da03522532dc5e84806a96f6a0dd1
Signed-off-by: Tonghao Zhang
---
 net/openvswitch/flow_table.c | 95 ++++++++++++++++++++++++--------------------
 1 file changed, 53 insertions(+), 42 deletions(-)

diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 9c72aab..e59fac5 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -516,6 +516,9 @@ static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
 	return NULL;
 }
 
+/* Flow lookup does full lookup on flow table. It starts with
+ * mask from index passed in *index.
+ */
 static struct sw_flow *flow_lookup(struct flow_table *tbl,
 				   struct table_instance *ti,
 				   struct mask_array *ma,
@@ -524,18 +527,31 @@ static struct sw_flow *flow_lookup(struct flow_table *tbl,
 				   u32 *index)
 {
 	struct sw_flow *flow;
+	struct sw_flow_mask *mask;
 	int i;
 
-	for (i = 0; i < ma->max; i++) {
-		struct sw_flow_mask *mask;
-
-		mask = rcu_dereference_ovsl(ma->masks[i]);
+	if (*index < ma->max) {
+		mask = rcu_dereference_ovsl(ma->masks[*index]);
 		if (mask) {
 			flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
-			if (flow) { /* Found */
-				*index = i;
+			if (flow)
 				return flow;
-			}
+		}
+	}
+
+	for (i = 0; i < ma->max; i++) {
+
+		if (i == *index)
+			continue;
+
+		mask = rcu_dereference_ovsl(ma->masks[i]);
+		if (!mask)
+			continue;
+
+		flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
+		if (flow) { /* Found */
+			*index = i;
+			return flow;
 		}
 	}
 
@@ -554,58 +570,54 @@ struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
 					  u32 skb_hash,
 					  u32 *n_mask_hit)
 {
-	struct mask_array *ma = rcu_dereference_ovsl(tbl->mask_array);
-	struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
-	struct mask_cache_entry *entries, *ce, *del;
+	struct mask_array *ma = rcu_dereference(tbl->mask_array);
+	struct table_instance *ti = rcu_dereference(tbl->ti);
+	struct mask_cache_entry *entries, *ce;
 	struct sw_flow *flow;
-	u32 hash = skb_hash;
+	u32 hash;
 	int seg;
 
 	*n_mask_hit = 0;
 
 	if (unlikely(!skb_hash)) {
-		u32 __always_unused mask_index;
+		u32 mask_index = 0;
 
 		return flow_lookup(tbl, ti, ma, key, n_mask_hit, &mask_index);
 	}
 
-	del = NULL;
+	/* Pre and post recirculation flows usually have the same skb_hash
+	 * value. To avoid hash collisions, rehash the 'skb_hash' with
+	 * 'recirc_id'. */
+	if (key->recirc_id)
+		skb_hash = jhash_1word(skb_hash, key->recirc_id);
+
+	ce = NULL;
+	hash = skb_hash;
 	entries = this_cpu_ptr(tbl->mask_cache);
 
+	/* Find the cache entry 'ce' to operate on. */
 	for (seg = 0; seg < MC_HASH_SEGS; seg++) {
-		int index;
-
-		index = hash & (MC_HASH_ENTRIES - 1);
-		ce = &entries[index];
-
-		if (ce->skb_hash == skb_hash) {
-			struct sw_flow_mask *mask;
-			struct sw_flow *flow;
-
-			mask = rcu_dereference_ovsl(ma->masks[ce->mask_index]);
-			if (mask) {
-				flow = masked_flow_lookup(ti, key, mask,
-							  n_mask_hit);
-				if (flow) /* Found */
-					return flow;
-			}
-
-			del = ce;
-			break;
+		int index = hash & (MC_HASH_ENTRIES - 1);
+		struct mask_cache_entry *e;
+
+		e = &entries[index];
+		if (e->skb_hash == skb_hash) {
+			flow = flow_lookup(tbl, ti, ma, key, n_mask_hit,
+					   &e->mask_index);
+			if (!flow)
+				e->skb_hash = 0;
+			return flow;
 		}
 
-		if (!del || (del->skb_hash && !ce->skb_hash) ||
-		    (rcu_dereference_ovsl(ma->masks[del->mask_index]) &&
-		     !rcu_dereference_ovsl(ma->masks[ce->mask_index]))) {
-			del = ce;
-		}
+		if (!ce || e->skb_hash < ce->skb_hash)
+			ce = e;  /* A better replacement cache candidate. */
 
 		hash >>= MC_HASH_SHIFT;
 	}
 
-	flow = flow_lookup(tbl, ti, ma, key, n_mask_hit, &del->mask_index);
-
+	/* Cache miss, do full lookup. */
+	flow = flow_lookup(tbl, ti, ma, key, n_mask_hit, &ce->mask_index);
 	if (flow)
-		del->skb_hash = skb_hash;
+		ce->skb_hash = skb_hash;
 
 	return flow;
 }
@@ -615,9 +627,8 @@ struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *tbl,
 {
 	struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
 	struct mask_array *ma = rcu_dereference_ovsl(tbl->mask_array);
-	u32 __always_unused n_mask_hit;
-	u32 __always_unused index;
+	u32 index = 0;
 
 	return flow_lookup(tbl, ti, ma, key, &n_mask_hit, &index);
 }

From patchwork Thu Sep 19 16:54:51 2019
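The rehash-with-recirc_id idea in the patch above can be shown in a few lines. The kernel uses jhash_1word(); the mixer below is a hypothetical stand-in so the sketch stays self-contained and is not the kernel's implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's jhash_1word(): any reasonable 32-bit mixer
 * works for illustration purposes. */
static uint32_t mix32(uint32_t hash, uint32_t word)
{
	hash ^= word + 0x9e3779b9u + (hash << 6) + (hash >> 2);
	return hash;
}

/* Pre- and post-recirculation packets carry the same skb_hash, so they
 * would probe the same mask-cache entry and collide. Folding recirc_id
 * into the hash separates the two lookups; packets that were never
 * recirculated (recirc_id == 0) keep their original hash. */
static uint32_t cache_hash(uint32_t skb_hash, uint32_t recirc_id)
{
	return recirc_id ? mix32(skb_hash, recirc_id) : skb_hash;
}
```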
From: xiangxia.m.yue@gmail.com
To: pshelar@ovn.org, gvrose8192@gmail.com
Cc: netdev@vger.kernel.org, Tonghao Zhang
Subject: [PATCH net-next 5/7] net: openvswitch: optimize flow-mask looking up
Date: Fri, 20 Sep 2019 00:54:51 +0800
Message-Id: <1568912093-68535-6-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>

From: Tonghao Zhang

A full flow-table lookup traverses the whole mask array. If the mask
array is too large, the number of invalid (NULL) flow-mask slots grows
and performance drops. This patch keeps the mask array dense and
optimizes its operations:

* Inserting: place the new mask directly at the tail of the array
  (index [ma->count - 1] after insertion).
* Removing: move the last mask pointer into the removed slot and free
  the removed mask.
* Looking up: the full lookup can stop at the first NULL mask.

Signed-off-by: Tonghao Zhang
---
 net/openvswitch/flow_table.c | 93 ++++++++++++++++++++++----------------------
 1 file changed, 46 insertions(+), 47 deletions(-)

diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index e59fac5..2d74d74 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -546,7 +546,7 @@ static struct sw_flow *flow_lookup(struct flow_table *tbl,
 		mask = rcu_dereference_ovsl(ma->masks[i]);
 		if (!mask)
-			continue;
+			break;
 
 		flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
 		if (flow) { /* Found */
@@ -712,21 +712,31 @@ static struct table_instance *table_instance_expand(struct table_instance *ti,
 	return table_instance_rehash(ti, ti->n_buckets * 2, ufid);
 }
 
-static void tbl_mask_array_delete_mask(struct mask_array *ma,
-				       struct sw_flow_mask *mask)
+static void tbl_mask_array_del_mask(struct flow_table *tbl,
+				    struct sw_flow_mask *mask)
 {
+	struct mask_array *ma = ovsl_dereference(tbl->mask_array);
 	int i;
 
 	/* Remove the deleted mask pointers from the array */
-	for (i = 0; i < ma->max; i++) {
-		if (mask == ovsl_dereference(ma->masks[i])) {
-			RCU_INIT_POINTER(ma->masks[i], NULL);
-			ma->count--;
-			kfree_rcu(mask, rcu);
-			return;
-		}
+	for (i = 0; i < ma->count; i++) {
+		if (mask == ovsl_dereference(ma->masks[i]))
+			goto found;
 	}
+
 	BUG();
 	return;
+
+found:
+	rcu_assign_pointer(ma->masks[i], ma->masks[ma->count -1]);
+	RCU_INIT_POINTER(ma->masks[ma->count -1], NULL);
+	ma->count--;
+	kfree_rcu(mask, rcu);
+
+	/* Shrink the mask array if necessary. */
+	if (ma->max >= (MASK_ARRAY_SIZE_MIN * 2) &&
+	    ma->count <= (ma->max / 3))
+		tbl_mask_array_realloc(tbl, ma->max / 2);
 }
 
 /* Remove 'mask' from the mask list, if it is not needed any more. */
@@ -740,17 +750,8 @@ static void flow_mask_remove(struct flow_table *tbl, struct sw_flow_mask *mask)
 	BUG_ON(!mask->ref_count);
 	mask->ref_count--;
 
-	if (!mask->ref_count) {
-		struct mask_array *ma;
-
-		ma = ovsl_dereference(tbl->mask_array);
-		tbl_mask_array_delete_mask(ma, mask);
-
-		/* Shrink the mask array if necessary. */
-		if (ma->max >= (MASK_ARRAY_SIZE_MIN * 2) &&
-		    ma->count <= (ma->max / 3))
-			tbl_mask_array_realloc(tbl, ma->max / 2);
-	}
+	if (!mask->ref_count)
+		tbl_mask_array_del_mask(tbl, mask);
 }
 
@@ -814,6 +815,27 @@ static struct sw_flow_mask *flow_mask_find(const struct flow_table *tbl,
 	return NULL;
 }
 
+static int tbl_mask_array_add_mask(struct flow_table *tbl,
+				   struct sw_flow_mask *new)
+{
+	struct mask_array *ma = ovsl_dereference(tbl->mask_array);
+	int err;
+
+	if (ma->count >= ma->max) {
+		err = tbl_mask_array_realloc(tbl, ma->max +
+					     MASK_ARRAY_SIZE_MIN);
+		if (err)
+			return err;
+	}
+
+	BUG_ON(ovsl_dereference(ma->masks[ma->count]));
+
+	rcu_assign_pointer(ma->masks[ma->count], new);
+	ma->count++;
+
+	return 0;
+}
+
 /* Add 'mask' into the mask list, if it is not already there. */
 static int flow_mask_insert(struct flow_table *tbl, struct sw_flow *flow,
 			    const struct sw_flow_mask *new)
@@ -822,9 +844,6 @@ static int flow_mask_insert(struct flow_table *tbl, struct sw_flow *flow,
 	mask = flow_mask_find(tbl, new);
 	if (!mask) {
-		struct mask_array *ma;
-		int i;
-
 		/* Allocate a new mask if none exsits. */
 		mask = mask_alloc();
 		if (!mask)
@@ -833,29 +852,9 @@ static int flow_mask_insert(struct flow_table *tbl, struct sw_flow *flow,
 		mask->range = new->range;
 
 		/* Add mask to mask-list. */
-		ma = ovsl_dereference(tbl->mask_array);
-		if (ma->count >= ma->max) {
-			int err;
-
-			err = tbl_mask_array_realloc(tbl, ma->max +
-						     MASK_ARRAY_SIZE_MIN);
-			if (err) {
-				kfree(mask);
-				return err;
-			}
-
-			ma = ovsl_dereference(tbl->mask_array);
-		}
-
-		for (i = 0; i < ma->max; i++) {
-			const struct sw_flow_mask *t;
-
-			t = ovsl_dereference(ma->masks[i]);
-			if (!t) {
-				rcu_assign_pointer(ma->masks[i], mask);
-				ma->count++;
-				break;
-			}
+		if (tbl_mask_array_add_mask(tbl, mask)) {
+			kfree(mask);
+			return -ENOMEM;
 		}
 	} else {
 		BUG_ON(!mask->ref_count);

From patchwork Thu Sep 19 16:54:52 2019
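The dense-array removal used by tbl_mask_array_del_mask() above is a generic swap-with-last technique. A userspace model under simplifying assumptions (no RCU publication, hypothetical names):

```c
#include <assert.h>
#include <stddef.h>

/* Model of swap-with-last deletion: instead of leaving a NULL hole
 * (which forces lookups to scan the whole array), move the last entry
 * into the freed slot so entries [0, count) stay dense and a scan can
 * stop at the first NULL or at count. Returns 0 on success, -1 if the
 * victim is not present. */
static int dense_array_delete(void **slots, int *count, void *victim)
{
	for (int i = 0; i < *count; i++) {
		if (slots[i] == victim) {
			slots[i] = slots[*count - 1]; /* swap-with-last */
			slots[*count - 1] = NULL;
			(*count)--;
			return 0;
		}
	}
	return -1; /* not found */
}
```

Because entries stay packed in [0, count), the lookup loop can treat the first NULL slot as the end of the array, which is exactly why flow_lookup() in this patch may `break` instead of `continue` on a NULL mask.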
From: xiangxia.m.yue@gmail.com
To: pshelar@ovn.org, gvrose8192@gmail.com
Cc: netdev@vger.kernel.org, Tonghao Zhang
Subject: [PATCH net-next 6/7] net: openvswitch: simplify the flow_hash
Date: Fri, 20 Sep 2019 00:54:52 +0800
Message-Id: <1568912093-68535-7-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>

From: Tonghao Zhang

Simplify the code and remove the unnecessary BUILD_BUG_ON().

Signed-off-by: Tonghao Zhang
---
 net/openvswitch/flow_table.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 2d74d74..223470a 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -440,13 +440,9 @@ int ovs_flow_tbl_flush(struct flow_table *flow_table)
 static u32 flow_hash(const struct sw_flow_key *key,
 		     const struct sw_flow_key_range *range)
 {
-	int key_start = range->start;
-	int key_end = range->end;
-	const u32 *hash_key = (const u32 *)((const u8 *)key + key_start);
-	int hash_u32s = (key_end - key_start) >> 2;
-
+	const u32 *hash_key = (const u32 *)((const u8 *)key + range->start);
 	/* Make sure number of hash bytes are multiple of u32. */
-	BUILD_BUG_ON(sizeof(long) % sizeof(u32));
+	int hash_u32s = range_n_bytes(range) >> 2;
 
 	return jhash2(hash_key, hash_u32s, 0);
 }

From patchwork Thu Sep 19 16:54:53 2019
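The simplified flow_hash() above hashes only the masked byte range of the key, treated as u32 words. A self-contained model of that indexing, with a trivial FNV-style mixer standing in for the kernel's jhash2() (the mixer is illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in word mixer; the kernel uses jhash2() here. */
static uint32_t mix_words(const uint32_t *words, size_t n)
{
	uint32_t h = 0;
	for (size_t i = 0; i < n; i++)
		h = (h ^ words[i]) * 0x01000193u; /* FNV-style step */
	return h;
}

/* Model of flow_hash(): hash only the bytes in [start, end) of the key,
 * viewed as u32 words. (end - start) >> 2 is the word count, which is
 * why the masked key range must be a multiple of sizeof(u32). */
static uint32_t range_hash(const void *key, size_t start, size_t end)
{
	const uint32_t *words =
		(const uint32_t *)((const uint8_t *)key + start);
	return mix_words(words, (end - start) >> 2);
}
```

Hashing only the masked range means two keys that differ outside the mask still hash identically, which is what lets masked flows share hash-table buckets.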
From: xiangxia.m.yue@gmail.com
To: pshelar@ovn.org, gvrose8192@gmail.com
Cc: netdev@vger.kernel.org, Tonghao Zhang
Subject: [PATCH net-next 7/7] net: openvswitch: add likely in flow_lookup
Date: Fri, 20 Sep 2019 00:54:53 +0800
Message-Id: <1568912093-68535-8-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1568912093-68535-1-git-send-email-xiangxia.m.yue@gmail.com>

From: Tonghao Zhang

In most cases *index < ma->max, so annotate that branch with likely()
for performance.
Signed-off-by: Tonghao Zhang
---
 net/openvswitch/flow_table.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 223470a..bd7e976 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -526,7 +526,7 @@ static struct sw_flow *flow_lookup(struct flow_table *tbl,
 	struct sw_flow_mask *mask;
 	int i;
 
-	if (*index < ma->max) {
+	if (likely(*index < ma->max)) {
 		mask = rcu_dereference_ovsl(ma->masks[*index]);
 		if (mask) {
 			flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
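likely() is a branch-prediction hint, not a semantic change: it tells the compiler to lay out the annotated path as the fall-through case. A sketch of how the kernel macro is typically defined on GCC/Clang and how it applies to the cached-index check in this patch:

```c
#include <assert.h>

/* Typical kernel definitions built on __builtin_expect (GCC/Clang). */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Mirrors the fast path of flow_lookup(): the cached mask index is
 * usually valid, so the bounds check almost always succeeds. */
static int first_lookup_hits(int index, int max)
{
	if (likely(index < max))  /* hot path: probe the cached mask */
		return 1;
	return 0;                 /* cold path: fall back to full scan */
}
```

The predicate evaluates exactly as before; only the generated code placement of the cold path differs, which matters on a per-packet fast path.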