From patchwork Fri Aug 26 20:38:08 2016
X-Patchwork-Submitter: Brenden Blanco
X-Patchwork-Id: 663240
X-Patchwork-Delegate: davem@davemloft.net
From: Brenden Blanco
To: davem@davemloft.net, netdev@vger.kernel.org
Cc: Brenden Blanco, Daniel Borkmann, Alexei Starovoitov, Tariq Toukan,
 Or Gerlitz, Tom Herbert
Subject: [PATCH] net/mlx4_en: protect ring->xdp_prog with rcu_read_lock
Date: Fri, 26 Aug 2016 13:38:08 -0700
Message-Id: <20160826203808.23664-1-bblanco@plumgrid.com>
X-Mailer: git-send-email 2.9.3
X-Mailing-List: netdev@vger.kernel.org

Depending on the preempt mode, the bpf_prog stored in xdp_prog may be
freed despite the use of call_rcu inside bpf_prog_put. The situation is
possible when running in PREEMPT_RCU=y mode, for instance, since the rcu
callback for destroying the bpf prog can run even during the bh handling
in the mlx4 rx path.

Several options were considered before this patch was settled on:

Add a napi_synchronize loop in mlx4_xdp_set, which would occur after all
of the rings are updated with the new program. This approach has the
disadvantage that as the number of rings increases, the speed of update
will slow down significantly due to napi_synchronize's msleep(1).

Add a new rcu_head in bpf_prog_aux, to be used by a new bpf_prog_put_bh.
The action of the bpf_prog_put_bh would be to then call bpf_prog_put
later.
Those drivers that consume a bpf prog in a bh context (like mlx4) would
then use the bpf_prog_put_bh instead when the ring is up. This has the
problem of complexity, in maintaining proper refcnts and rcu lists, and
would likely be harder to review. In addition, this approach to freeing
must be exclusive with other frees of the bpf prog, for instance a _bh
prog must not be referenced from a prog array that is consumed by a
non-_bh prog.

The placement of rcu_read_lock in this patch is functionally the same as
putting an rcu_read_lock in napi_poll. Actually doing so could be a
potentially controversial change, but would bring the implementation in
line with sk_busy_loop (though of course the nature of those two paths
is substantially different), and would also avoid future copy/paste
problems with future supporters of XDP. Still, this patch does not take
that opinionated option.

Testing was done with kernels in either PREEMPT_RCU=y or
CONFIG_PREEMPT_VOLUNTARY=y+PREEMPT_RCU=n modes, with neither exhibiting
any drawback. With PREEMPT_RCU=n, the extra call to rcu_read_lock did
not show up in the perf report whatsoever, and with PREEMPT_RCU=y the
overhead of rcu_read_lock (according to perf) was the same before/after.
In the rx path, rcu_read_lock is eventually called for every packet from
netif_receive_skb_internal, so the napi poll call's rcu_read_lock is
easily amortized.
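The resulting pairing of reader and writer can be sketched as follows
(a simplified sketch, not the literal driver code: context around the
two sides is condensed, and `prog`/`old_prog` naming follows the usual
mlx4_xdp_set convention):

	/* Reader side, mlx4_en_process_rx_cq (napi poll, bh context):
	 * the rcu_read_lock added by this patch keeps the program
	 * fetched via READ_ONCE valid until rcu_read_unlock, even with
	 * PREEMPT_RCU=y.
	 */
	rcu_read_lock();
	xdp_prog = READ_ONCE(ring->xdp_prog);
	/* ... run xdp_prog against received frames ... */
	rcu_read_unlock();

	/* Writer side, mlx4_xdp_set: publish the new program, then drop
	 * the reference to the old one. bpf_prog_put frees via call_rcu,
	 * so the free is deferred past any reader still inside the
	 * section above.
	 */
	old_prog = xchg(&ring->xdp_prog, prog);
	if (old_prog)
		bpf_prog_put(old_prog);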
Fixes: d576acf0a22 ("net/mlx4_en: add page recycle to prepare rx ring for tx support")
Acked-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
Signed-off-by: Brenden Blanco
---
 drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 2040dad..efed546 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -800,6 +800,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
 	if (budget <= 0)
 		return polled;
 
+	rcu_read_lock();
 	xdp_prog = READ_ONCE(ring->xdp_prog);
 	doorbell_pending = 0;
 	tx_index = (priv->tx_ring_num - priv->xdp_ring_num) + cq->ring;
@@ -1077,6 +1078,7 @@ consumed:
 	}
 
 out:
+	rcu_read_unlock();
 	if (doorbell_pending)
 		mlx4_en_xmit_doorbell(priv->tx_ring[tx_index]);
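For comparison, the rejected napi_synchronize alternative discussed
above would have looked roughly like the following in mlx4_xdp_set (a
hypothetical sketch, not part of this patch; field names follow the
mlx4_en_priv layout):

	/* Hypothetical alternative (not taken): after updating every
	 * ring with the new program, wait for each napi instance to
	 * finish its in-flight poll. napi_synchronize spins with
	 * msleep(1) per ring, so update latency grows with ring count.
	 */
	for (i = 0; i < priv->rx_ring_num; i++)
		napi_synchronize(&priv->rx_cq[i]->napi);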