From patchwork Fri Jul 26 09:17:15 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shaokun Zhang
X-Patchwork-Id: 1137332
X-Patchwork-Delegate: davem@davemloft.net
From: Shaokun Zhang
To: ,
CC: Yang Guo, "David S. Miller", Alexey Kuznetsov, Hideaki YOSHIFUJI, Eric Dumazet, Jiri Pirko
Subject: [PATCH] Revert "net: get rid of an signed integer overflow in ip_idents_reserve()"
Date: Fri, 26 Jul 2019 17:17:15 +0800
Message-ID: <1564132635-57634-1-git-send-email-zhangshaokun@hisilicon.com>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: netdev@vger.kernel.org

From: Yang Guo

There is a significant performance regression caused by the commit
("net: get rid of an signed integer overflow in ip_idents_reserve()").
On both an x86 (Skylake) server and an ARM64 server, the CPU usage of
ip_idents_reserve() grows very high as the number of CPU cores
increases, and throughput degrades. Reverting the patch avoids this
problem as the core count scales up.

With the patch applied on x86, ip_idents_reserve() consumes 63.05% of
CPU time when iperf3 runs on 32 CPU cores:

Samples: 18K of event 'cycles:ppp', Event count (approx.)
  Children   Self    Command  Shared Object     Symbol
  63.18%     63.05%  iperf3   [kernel.vmlinux]  [k] ip_idents_reserve

and the packet rate is 4483830 pps:

10:46:13 AM  IFACE  rxpck/s     txpck/s     rxkB/s     txkB/s
10:46:14 AM  lo     4483830.00  4483830.00  192664.57  192664.57

With the patch reverted, ip_idents_reserve() consumes only 17.05% of
CPU time:

Samples: 37K of event 'cycles:ppp', 4000 Hz, Event count (approx.)
  Children   Self    Shared Object  Symbol
  17.07%     17.05%  [kernel]       [k] ip_idents_reserve

and the packet rate rises to 11600213 pps:

05:03:15 PM  IFACE  rxpck/s      txpck/s      rxkB/s     txkB/s
05:03:16 PM  lo     11600213.00  11600213.00  498446.65  498446.65

The performance regression was also found on an ARM64 server and
discussed a few days ago:
https://lore.kernel.org/netdev/98b95fbe-adcc-c95f-7f3d-6c57122f4586@pengutronix.de/T/#t

Cc: "David S. Miller"
Cc: Alexey Kuznetsov
Cc: Hideaki YOSHIFUJI
Cc: Eric Dumazet
Cc: Jiri Pirko
Signed-off-by: Yang Guo
---
 net/ipv4/route.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 517300d..dff457b 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -489,18 +489,12 @@ u32 ip_idents_reserve(u32 hash, int segs)
 	atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ;
 	u32 old = READ_ONCE(*p_tstamp);
 	u32 now = (u32)jiffies;
-	u32 new, delta = 0;
+	u32 delta = 0;
 
 	if (old != now && cmpxchg(p_tstamp, old, now) == old)
 		delta = prandom_u32_max(now - old);
 
-	/* Do not use atomic_add_return() as it makes UBSAN unhappy */
-	do {
-		old = (u32)atomic_read(p_id);
-		new = old + delta + segs;
-	} while (atomic_cmpxchg(p_id, old, new) != old);
-
-	return new - segs;
+	return atomic_add_return(segs + delta, p_id) - segs;
 }
 EXPORT_SYMBOL(ip_idents_reserve);