From patchwork Sat Oct 19 08:08:38 2019
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 1186471
X-Patchwork-Delegate: davem@davemloft.net
From: xiangxia.m.yue@gmail.com
To: gvrose8192@gmail.com, pshelar@ovn.org
Cc: netdev@vger.kernel.org, dev@openvswitch.org, Tonghao Zhang
Subject: [PATCH net-next v5 04/10] net: openvswitch: optimize flow mask cache hash collision
Date: Sat, 19 Oct 2019 16:08:38 +0800
Message-Id: <1571472524-73832-5-git-send-email-xiangxia.m.yue@gmail.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1571472524-73832-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1571472524-73832-1-git-send-email-xiangxia.m.yue@gmail.com>

From: Tonghao Zhang

Port the code to Linux upstream, with minor changes.

Pravin B Shelar says:
| In case hash collision on mask cache, OVS does extra flow
| lookup. Following patch avoid it.

Link: https://github.com/openvswitch/ovs/commit/0e6efbe2712da03522532dc5e84806a96f6a0dd1
Signed-off-by: Tonghao Zhang
Tested-by: Greg Rose
---
 net/openvswitch/flow_table.c | 95 ++++++++++++++++++++++++--------------------
 1 file changed, 53 insertions(+), 42 deletions(-)

diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 237cf85..8d4f50d 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -508,6 +508,9 @@ static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
 	return NULL;
 }
 
+/* Flow lookup does full lookup on flow table. It starts with
+ * mask from index passed in *index.
+ */
 static struct sw_flow *flow_lookup(struct flow_table *tbl,
 				   struct table_instance *ti,
 				   struct mask_array *ma,
@@ -516,18 +519,31 @@ static struct sw_flow *flow_lookup(struct flow_table *tbl,
 				   u32 *index)
 {
 	struct sw_flow *flow;
+	struct sw_flow_mask *mask;
 	int i;
 
-	for (i = 0; i < ma->max; i++) {
-		struct sw_flow_mask *mask;
-
-		mask = rcu_dereference_ovsl(ma->masks[i]);
+	if (*index < ma->max) {
+		mask = rcu_dereference_ovsl(ma->masks[*index]);
 		if (mask) {
 			flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
-			if (flow) { /* Found */
-				*index = i;
+			if (flow)
 				return flow;
-			}
+		}
+	}
+
+	for (i = 0; i < ma->max; i++) {
+
+		if (i == *index)
+			continue;
+
+		mask = rcu_dereference_ovsl(ma->masks[i]);
+		if (!mask)
+			continue;
+
+		flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
+		if (flow) { /* Found */
+			*index = i;
+			return flow;
 		}
 	}
 
@@ -546,58 +562,54 @@ struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
 					  u32 skb_hash,
 					  u32 *n_mask_hit)
 {
-	struct mask_array *ma = rcu_dereference_ovsl(tbl->mask_array);
-	struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
-	struct mask_cache_entry *entries, *ce, *del;
+	struct mask_array *ma = rcu_dereference(tbl->mask_array);
+	struct table_instance *ti = rcu_dereference(tbl->ti);
+	struct mask_cache_entry *entries, *ce;
 	struct sw_flow *flow;
-	u32 hash = skb_hash;
+	u32 hash;
 	int seg;
 
 	*n_mask_hit = 0;
 	if (unlikely(!skb_hash)) {
-		u32 __always_unused mask_index;
+		u32 mask_index = 0;
 
 		return flow_lookup(tbl, ti, ma, key, n_mask_hit, &mask_index);
 	}
 
-	del = NULL;
+	/* Pre and post recirculation flows usually have the same skb_hash
+	 * value. To avoid hash collisions, rehash the 'skb_hash' with
+	 * 'recirc_id'. */
+	if (key->recirc_id)
+		skb_hash = jhash_1word(skb_hash, key->recirc_id);
+
+	ce = NULL;
+	hash = skb_hash;
 	entries = this_cpu_ptr(tbl->mask_cache);
 
+	/* Find the cache entry 'ce' to operate on. */
 	for (seg = 0; seg < MC_HASH_SEGS; seg++) {
-		int index;
-
-		index = hash & (MC_HASH_ENTRIES - 1);
-		ce = &entries[index];
-
-		if (ce->skb_hash == skb_hash) {
-			struct sw_flow_mask *mask;
-			struct sw_flow *flow;
-
-			mask = rcu_dereference_ovsl(ma->masks[ce->mask_index]);
-			if (mask) {
-				flow = masked_flow_lookup(ti, key, mask,
-							  n_mask_hit);
-				if (flow)  /* Found */
-					return flow;
-			}
-
-			del = ce;
-			break;
+		int index = hash & (MC_HASH_ENTRIES - 1);
+		struct mask_cache_entry *e;
+
+		e = &entries[index];
+		if (e->skb_hash == skb_hash) {
+			flow = flow_lookup(tbl, ti, ma, key, n_mask_hit,
+					   &e->mask_index);
+			if (!flow)
+				e->skb_hash = 0;
+			return flow;
 		}
 
-		if (!del || (del->skb_hash && !ce->skb_hash) ||
-		    (rcu_dereference_ovsl(ma->masks[del->mask_index]) &&
-		     !rcu_dereference_ovsl(ma->masks[ce->mask_index]))) {
-			del = ce;
-		}
+		if (!ce || e->skb_hash < ce->skb_hash)
+			ce = e;  /* A better replacement cache candidate. */
 
 		hash >>= MC_HASH_SHIFT;
 	}
 
-	flow = flow_lookup(tbl, ti, ma, key, n_mask_hit, &del->mask_index);
-
+	/* Cache miss, do full lookup. */
+	flow = flow_lookup(tbl, ti, ma, key, n_mask_hit, &ce->mask_index);
 	if (flow)
-		del->skb_hash = skb_hash;
+		ce->skb_hash = skb_hash;
 
 	return flow;
 }
@@ -607,9 +619,8 @@ struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *tbl,
 				    const struct sw_flow_key *key)
 {
 	struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
 	struct mask_array *ma = rcu_dereference_ovsl(tbl->mask_array);
-	u32 __always_unused n_mask_hit;
-	u32 __always_unused index;
+	u32 index = 0;
 
 	return flow_lookup(tbl, ti, ma, key, &n_mask_hit, &index);
 }
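
The sketch below is not part of the patch; it is a standalone userspace illustration, under assumed names (hint_lookup, entry_matches and N_MASKS are invented for the example), of the lookup pattern that flow_lookup() now implements: try the cached mask index first, fall back to scanning the remaining masks while skipping the already-tried slot, and write the winning index back so the next lookup with the same skb_hash goes straight to the right mask. The recirc_id rehash and the per-CPU cache bookkeeping are left out.

#include <stdio.h>
#include <stdbool.h>

#define N_MASKS 8

struct mask {
	bool in_use;
	int id;
};

/* Hypothetical stand-in for masked_flow_lookup(): does the key match
 * under mask 'm'? */
static bool entry_matches(const struct mask *m, int key)
{
	return m->in_use && (key % N_MASKS) == m->id;
}

/* Same shape as the patched flow_lookup(): '*index' is both the starting
 * hint (the cached mask index) and, on a hit elsewhere, the updated hint
 * written back for the next lookup. */
static const struct mask *hint_lookup(const struct mask masks[N_MASKS],
				      int key, unsigned int *index)
{
	unsigned int i;

	/* Fast path: try the hinted slot first. */
	if (*index < N_MASKS && entry_matches(&masks[*index], key))
		return &masks[*index];

	/* Slow path: scan the rest, skipping the slot already tried. */
	for (i = 0; i < N_MASKS; i++) {
		if (i == *index)
			continue;
		if (entry_matches(&masks[i], key)) {
			*index = i;	/* remember where we found it */
			return &masks[i];
		}
	}
	return NULL;
}

int main(void)
{
	struct mask masks[N_MASKS] = {
		[2] = { .in_use = true, .id = 2 },
		[5] = { .in_use = true, .id = 5 },
	};
	unsigned int hint = 0;
	const struct mask *m;

	/* First lookup: the hint (0) misses, the scan finds slot 5 and
	 * updates the hint. */
	m = hint_lookup(masks, 13, &hint);
	printf("key 13 -> mask %d, hint now %u\n", m ? m->id : -1, hint);

	/* Second lookup with the same key hits the hinted slot directly,
	 * without rescanning - the extra lookup the patch avoids. */
	m = hint_lookup(masks, 13, &hint);
	printf("key 13 -> mask %d, hint still %u\n", m ? m->id : -1, hint);

	return 0;
}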