From patchwork Thu Jul 26 14:25:55 2018
X-Patchwork-Submitter: Toshiaki Makita
X-Patchwork-Id: 949738
X-Patchwork-Delegate: bpf@iogearbox.net
From: Toshiaki Makita
To: netdev@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann
Cc: Toshiaki Makita, Jesper Dangaard Brouer, Jakub Kicinski, Toshiaki Makita
Subject: [PATCH v4 bpf-next 7/9] xdp: Helpers for disabling napi_direct of xdp_return_frame
Date: Thu, 26 Jul 2018 23:25:55 +0900
Message-Id: <20180726142557.1765-8-toshiaki.makita1@gmail.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180726142557.1765-1-toshiaki.makita1@gmail.com>
References: <20180726142557.1765-1-toshiaki.makita1@gmail.com>

From: Toshiaki Makita

We need a mechanism to disable napi_direct when calling
xdp_return_frame_rx_napi() from certain contexts.

When veth gains XDP_REDIRECT support, it will redirect packets that were
themselves redirected from other devices. On redirection, veth reuses the
xdp_mem_info of the redirection source device so that return_frame works.
But in this case the .ndo_xdp_xmit() called from veth redirection uses an
xdp_mem_info that is not guarded by NAPI, because that .ndo_xdp_xmit() is
not called directly from the rxq which owns the xdp_mem_info.

This patch introduces a flag in bpf_redirect_info to indicate that
napi_direct should be disabled even when the _rx_napi variant is used, as
well as helper functions to set, clear and test it. A NAPI handler that
wants to use this flag needs to call xdp_set_return_frame_no_direct()
before processing packets, and xdp_clear_return_frame_no_direct() after
xdp_do_flush_map(), before exiting NAPI (see the usage sketch after the
diff).

v4:
- Use bpf_redirect_info for storing the flag instead of xdp_mem_info to
  avoid per-frame copy cost.

Signed-off-by: Toshiaki Makita
Signed-off-by: Toshiaki Makita
---
 include/linux/filter.h | 25 +++++++++++++++++++++++++
 net/core/xdp.c         |  6 ++++--
 2 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 4717af8b95e6..2b072dab32c0 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -543,10 +543,14 @@ struct bpf_redirect_info {
         struct bpf_map *map;
         struct bpf_map *map_to_flush;
         unsigned long map_owner;
+        u32 kern_flags;
 };
 
 DECLARE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info);
 
+/* flags for bpf_redirect_info kern_flags */
+#define BPF_RI_F_RF_NO_DIRECT   BIT(0)  /* no napi_direct on return_frame */
+
 /* Compute the linear packet data range [data, data_end) which
  * will be accessed by various program types (cls_bpf, act_bpf,
  * lwt, ...). Subsystems allowing direct data access must (!)
@@ -775,6 +779,27 @@ static inline bool bpf_dump_raw_ok(void)
 struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
                                        const struct bpf_insn *patch, u32 len);
 
+static inline bool xdp_return_frame_no_direct(void)
+{
+        struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+
+        return ri->kern_flags & BPF_RI_F_RF_NO_DIRECT;
+}
+
+static inline void xdp_set_return_frame_no_direct(void)
+{
+        struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+
+        ri->kern_flags |= BPF_RI_F_RF_NO_DIRECT;
+}
+
+static inline void xdp_clear_return_frame_no_direct(void)
+{
+        struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+
+        ri->kern_flags &= ~BPF_RI_F_RF_NO_DIRECT;
+}
+
 static inline int xdp_ok_fwd_dev(const struct net_device *fwd,
                                  unsigned int pktlen)
 {
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 57285383ed00..3dd99e1c04f5 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -330,10 +330,12 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
                 /* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */
                 xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
                 page = virt_to_head_page(data);
-                if (xa)
+                if (xa) {
+                        napi_direct &= !xdp_return_frame_no_direct();
                         page_pool_put_page(xa->page_pool, page, napi_direct);
-                else
+                } else {
                         put_page(page);
+                }
                 rcu_read_unlock();
                 break;
         case MEM_TYPE_PAGE_SHARED:
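
For reference, below is a minimal sketch (not part of this patch) of how a
driver's NAPI poll handler is expected to use the new helpers. The handler
name and the frame-processing loop are assumptions for illustration only;
the call ordering follows the description above.

/* Hypothetical NAPI poll handler showing the intended call pattern.
 * example_napi_poll() and the receive loop body are illustrative only.
 */
static int example_napi_poll(struct napi_struct *napi, int budget)
{
        int done = 0;

        /* Disable napi_direct for frames returned from this context,
         * e.g. frames whose xdp_mem_info belongs to another device's rxq.
         */
        xdp_set_return_frame_no_direct();

        while (done < budget) {
                /* ... receive one frame, run XDP, possibly call
                 * xdp_return_frame_rx_napi(); with the flag set,
                 * __xdp_return() takes the non-direct page_pool path ...
                 */
                done++;
        }

        xdp_do_flush_map();
        /* Clear the flag after flushing maps, before leaving NAPI. */
        xdp_clear_return_frame_no_direct();

        if (done < budget)
                napi_complete_done(napi, done);

        return done;
}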