From patchwork Mon Dec 17 19:39:43 2018
X-Patchwork-Submitter: William Tu <u9012063@gmail.com>
X-Patchwork-Id: 1014722
X-Patchwork-Delegate: bpf@iogearbox.net
From: William Tu <u9012063@gmail.com>
To: bjorn.topel@gmail.com, magnus.karlsson@gmail.com, ast@kernel.org,
    daniel@iogearbox.net, netdev@vger.kernel.org, makita.toshiaki@lab.ntt.co.jp
Subject: [bpf-next RFC 1/3] xsk: add xsk_umem_consume_tx_virtual.
Date: Mon, 17 Dec 2018 11:39:43 -0800
Message-Id: <1545075585-27744-2-git-send-email-u9012063@gmail.com>
In-Reply-To: <1545075585-27744-1-git-send-email-u9012063@gmail.com>
References: <1545075585-27744-1-git-send-email-u9012063@gmail.com>

Currently, xsk_umem_consume_tx expects only physical NICs, so the API
returns a DMA address. This patch introduces a new function that
returns the virtual address instead, for use when XSK is backed by a
virtual device.

Signed-off-by: William Tu <u9012063@gmail.com>
---
 include/net/xdp_sock.h |  7 +++++++
 net/xdp/xsk.c          | 24 ++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index 13acb9803a6d..8de6b8456945 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -81,6 +81,7 @@ u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr);
 void xsk_umem_discard_addr(struct xdp_umem *umem);
 void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries);
 bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma, u32 *len);
+bool xsk_umem_consume_tx_virtual(struct xdp_umem *umem, char **addr, u32 *len);
 void xsk_umem_consume_tx_done(struct xdp_umem *umem);
 struct xdp_umem_fq_reuse *xsk_reuseq_prepare(u32 nentries);
 struct xdp_umem_fq_reuse *xsk_reuseq_swap(struct xdp_umem *umem,
@@ -165,6 +166,12 @@ static inline bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma,
 	return false;
 }
 
+static inline bool xsk_umem_consume_tx_virtual(struct xdp_umem *umem,
+					       char **addr, u32 *len)
+{
+	return false;
+}
+
 static inline void xsk_umem_consume_tx_done(struct xdp_umem *umem)
 {
 }
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 07156f43d295..379f5e9d0c81 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -197,6 +197,30 @@ bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma, u32 *len)
 }
 EXPORT_SYMBOL(xsk_umem_consume_tx);
 
+bool xsk_umem_consume_tx_virtual(struct xdp_umem *umem, char **addr, u32 *len)
+{
+	struct xdp_desc desc;
+	struct xdp_sock *xs;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(xs, &umem->xsk_list, list) {
+		if (!xskq_peek_desc(xs->tx, &desc))
+			continue;
+		if (xskq_produce_addr_lazy(umem->cq, desc.addr))
+			goto out;
+
+		*addr = xdp_umem_get_data(umem, desc.addr);
+		*len = desc.len;
+		xskq_discard_desc(xs->tx);
+		rcu_read_unlock();
+		return true;
+	}
+out:
+	rcu_read_unlock();
+	return false;
+}
+EXPORT_SYMBOL(xsk_umem_consume_tx_virtual);
+
 static int xsk_zc_xmit(struct sock *sk)
 {
 	struct xdp_sock *xs = xdp_sk(sk);
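As a usage sketch (not part of this patch): a virtual driver's NAPI TX
path could drain descriptors with the new helper roughly as follows.
The names my_rq and my_deliver() are hypothetical stand-ins for the
driver's queue struct and its packet hand-off; only the three xsk
helpers shown are from the API above.

/* Hypothetical sketch of a virtual driver's TX drain loop built on
 * xsk_umem_consume_tx_virtual(). Assumes struct my_rq carries the
 * bound umem in rq->xsk_umem.
 */
static int my_poll_tx(struct my_rq *rq, int budget)
{
	int done = 0;
	char *vaddr;
	u32 len;

	while (done < budget &&
	       xsk_umem_consume_tx_virtual(rq->xsk_umem, &vaddr, &len)) {
		/* vaddr points into the umem, so the data must be
		 * consumed (e.g. copied into an skb) before the
		 * completion is posted back to userspace.
		 */
		my_deliver(rq, vaddr, len);

		xsk_umem_complete_tx(rq->xsk_umem, 1);
		done++;
	}

	/* Notify completion-queue waiters once per batch. */
	if (done)
		xsk_umem_consume_tx_done(rq->xsk_umem);

	return done;
}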
From patchwork Mon Dec 17 19:39:44 2018
X-Patchwork-Submitter: William Tu <u9012063@gmail.com>
X-Patchwork-Id: 1014724
X-Patchwork-Delegate: bpf@iogearbox.net
From: William Tu <u9012063@gmail.com>
To: bjorn.topel@gmail.com, magnus.karlsson@gmail.com, ast@kernel.org,
    daniel@iogearbox.net, netdev@vger.kernel.org, makita.toshiaki@lab.ntt.co.jp
Subject: [bpf-next RFC 2/3] veth: support AF_XDP.
Date: Mon, 17 Dec 2018 11:39:44 -0800
Message-Id: <1545075585-27744-3-git-send-email-u9012063@gmail.com>
In-Reply-To: <1545075585-27744-1-git-send-email-u9012063@gmail.com>
References: <1545075585-27744-1-git-send-email-u9012063@gmail.com>

The patch adds support for AF_XDP async xmit. Users can use AF_XDP on
both sides of the veth pair and get better performance, at the cost of
ksoftirqd doing the xmit. veth_xsk_async_xmit simply kicks the NAPI
function, veth_poll, and the transmit logic is implemented there.

Tested using two namespaces: one runs xdpsock and the other runs
xdp_rxq_info. A simple script comparing the performance with and
without AF_XDP shows an improvement from 724 Kpps to 1.1 Mpps.
  ip netns add at_ns0
  ip link add p0 type veth peer name p1
  ip link set p0 netns at_ns0
  ip link set dev p1 up
  ip netns exec at_ns0 ip link set dev p0 up

  # receiver
  ip netns exec at_ns0 xdp_rxq_info --dev p0 --action XDP_DROP

  # sender
  xdpsock -i p1 -t -N -z
  or
  xdpsock -i p1 -t -S

Signed-off-by: William Tu <u9012063@gmail.com>
---
 drivers/net/veth.c | 247 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 245 insertions(+), 2 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index f412ea1cef18..5171ddad5973 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -25,6 +25,10 @@
 #include
 #include
 #include
+#include
+#include
+#include
+#include
 
 #define DRV_NAME	"veth"
 #define DRV_VERSION	"1.0"
@@ -53,6 +57,7 @@ struct veth_rq {
 	bool			rx_notify_masked;
 	struct ptr_ring		xdp_ring;
 	struct xdp_rxq_info	xdp_rxq;
+	struct xdp_umem		*xsk_umem;
 };
 
 struct veth_priv {
@@ -61,6 +66,11 @@ struct veth_priv {
 	struct bpf_prog		*_xdp_prog;
 	struct veth_rq		*rq;
 	unsigned int		requested_headroom;
+
+	/* AF_XDP zero-copy */
+	struct xdp_umem **xsk_umems;
+	u16 num_xsk_umems_used;
+	u16 num_xsk_umems;
 };
 
 /*
@@ -742,10 +752,87 @@ static int veth_poll(struct napi_struct *napi, int budget)
 	struct veth_rq *rq =
 		container_of(napi, struct veth_rq, xdp_napi);
 	unsigned int xdp_xmit = 0;
-	int done;
+	int tx_budget = budget;
+	int done = 0;
 
+	/* tx: use netif_tx_napi_add or here? */
+	while (rq->xsk_umem && tx_budget--) {
+		struct veth_priv *priv, *peer_priv;
+		struct net_device *dev, *peer_dev;
+		unsigned int inner_xdp_xmit = 0;
+		unsigned int metasize = 0;
+		struct veth_rq *peer_rq;
+		struct xdp_frame *xdpf;
+		bool dropped = false;
+		struct sk_buff *skb;
+		struct page *page;
+		char *vaddr;
+		void *addr;
+		u32 len;
+
+		if (!xsk_umem_consume_tx_virtual(rq->xsk_umem, &vaddr, &len))
+			break;
+
+		page = dev_alloc_page();
+		if (!page)
+			break;	/* NAPI poll must return work done, not an errno */
+
+		addr = page_to_virt(page);
+		xdpf = addr;
+		memset(xdpf, 0, sizeof(*xdpf));
+
+		addr += sizeof(*xdpf);
+		memcpy(addr, vaddr, len);
+
+		xdpf->data = addr + metasize;
+		xdpf->len = len;
+		xdpf->headroom = 0;
+		xdpf->metasize = metasize;
+		xdpf->mem.type = MEM_TYPE_PAGE_SHARED;
+
+		/* Invoke peer rq to rcv */
+		dev = rq->dev;
+		priv = netdev_priv(dev);
+		peer_dev = priv->peer;
+		peer_priv = netdev_priv(peer_dev);
+		peer_rq = peer_priv->rq;
+
+		/* put into peer rq */
+		skb = veth_xdp_rcv_one(peer_rq, xdpf, &inner_xdp_xmit);
+		if (!skb) {
+			/* Peer side has XDP program attached */
+			if (inner_xdp_xmit & VETH_XDP_TX) {
+				/* Not supported */
+				xsk_umem_complete_tx(rq->xsk_umem, 1);
+				xsk_umem_consume_tx_done(rq->xsk_umem);
+				xdp_return_frame(xdpf);
+				goto skip_tx;
+			} else if (inner_xdp_xmit & VETH_XDP_REDIR) {
+				xdp_do_flush_map();
+			} else {
+				dropped = true;
+			}
+		} else {
+			/* Peer side has no XDP attached */
+			napi_gro_receive(&peer_rq->xdp_napi, skb);
+		}
+		xsk_umem_complete_tx(rq->xsk_umem, 1);
+		xsk_umem_consume_tx_done(rq->xsk_umem);
+
+		/* update peer stats */
+		u64_stats_update_begin(&peer_rq->stats.syncp);
+		peer_rq->stats.xdp_packets++;
+		peer_rq->stats.xdp_bytes += len;
+		if (dropped)
+			rq->stats.xdp_drops++;
+		u64_stats_update_end(&peer_rq->stats.syncp);
+		done++;
+	}
+skip_tx:
+	/* rx */
 	xdp_set_return_frame_no_direct();
-	done = veth_xdp_rcv(rq, budget, &xdp_xmit);
+	done += veth_xdp_rcv(rq, budget, &xdp_xmit);
 
 	if (done < budget && napi_complete_done(napi, done)) {
 		/* Write rx_notify_masked before reading ptr_ring */
@@ -1115,6 +1202,141 @@ static u32 veth_xdp_query(struct net_device *dev)
 	return 0;
 }
 
+static int veth_alloc_xsk_umems(struct net_device *dev)
+{
+	struct veth_priv *priv = netdev_priv(dev);
+
+	if (priv->xsk_umems)
+		return 0;
+
+	priv->num_xsk_umems_used = 0;
+	priv->num_xsk_umems = dev->real_num_rx_queues;
+	priv->xsk_umems = kcalloc(priv->num_xsk_umems,
+				  sizeof(*priv->xsk_umems),
+				  GFP_KERNEL);
+	if (!priv->xsk_umems)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int veth_add_xsk_umem(struct net_device *dev,
+			     struct xdp_umem *umem,
+			     u16 qid)
+{
+	struct veth_priv *priv = netdev_priv(dev);
+	int err;
+
+	err = veth_alloc_xsk_umems(dev);
+	if (err)
+		return err;
+
+	priv->xsk_umems[qid] = umem;
+	priv->num_xsk_umems_used++;
+	priv->rq[qid].xsk_umem = umem;
+
+	return 0;
+}
+
+static void veth_remove_xsk_umem(struct net_device *dev, u16 qid)
+{
+	struct veth_priv *priv = netdev_priv(dev);
+
+	priv->xsk_umems[qid] = NULL;
+	priv->num_xsk_umems_used--;
+
+	if (priv->num_xsk_umems_used == 0) {
+		kfree(priv->xsk_umems);
+		priv->xsk_umems = NULL;
+		priv->num_xsk_umems = 0;
+	}
+}
+
+int veth_xsk_umem_query(struct net_device *dev, struct xdp_umem **umem,
+			u16 qid)
+{
+	struct veth_priv *priv = netdev_priv(dev);
+
+	if (qid >= dev->real_num_rx_queues)
+		return -EINVAL;
+
+	if (priv->xsk_umems) {
+		if (qid >= priv->num_xsk_umems)
+			return -EINVAL;
+		*umem = priv->xsk_umems[qid];
+		return 0;
+	}
+
+	*umem = NULL;
+	return 0;
+}
+
+static int veth_xsk_umem_enable(struct net_device *dev,
+				struct xdp_umem *umem,
+				u16 qid)
+{
+	struct veth_priv *priv = netdev_priv(dev);
+	struct xdp_umem_fq_reuse *reuseq;
+	int err = 0;
+
+	if (qid >= dev->real_num_rx_queues)
+		return -EINVAL;
+
+	if (priv->xsk_umems) {
+		if (qid >= priv->num_xsk_umems)
+			return -EINVAL;
+		if (priv->xsk_umems[qid])
+			return -EBUSY;
+	}
+
+	reuseq = xsk_reuseq_prepare(priv->rq[0].xdp_ring.size);
+	if (!reuseq)
+		return -ENOMEM;
+
+	xsk_reuseq_free(xsk_reuseq_swap(umem, reuseq));
+
+	/* Check if_running and disable/enable? */
+	err = veth_add_xsk_umem(dev, umem, qid);
+
+	return err;
+}
+
+static int veth_xsk_umem_disable(struct net_device *dev,
+				 u16 qid)
+{
+	struct veth_priv *priv = netdev_priv(dev);
+
+	if (!priv->xsk_umems || qid >= priv->num_xsk_umems ||
+	    !priv->xsk_umems[qid])
+		return -EINVAL;
+
+	veth_remove_xsk_umem(dev, qid);
+	return 0;
+}
+
+int veth_xsk_umem_setup(struct net_device *dev, struct xdp_umem *umem,
+			u16 qid)
+{
+	return umem ? veth_xsk_umem_enable(dev, umem, qid) :
+		      veth_xsk_umem_disable(dev, qid);
+}
+
+int veth_xsk_async_xmit(struct net_device *dev, u32 qid)
+{
+	struct veth_priv *priv = netdev_priv(dev);
+	struct veth_rq *rq;
+
+	if (qid >= dev->real_num_rx_queues)
+		return -ENXIO;
+
+	if (!priv->xsk_umems)
+		return -ENXIO;
+
+	rq = &priv->rq[qid];
+
+	if (!napi_if_scheduled_mark_missed(&rq->xdp_napi))
+		napi_schedule(&rq->xdp_napi);
+
+	return 0;
+}
+
 static int veth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 {
 	switch (xdp->command) {
@@ -1123,6 +1345,26 @@ static int veth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	case XDP_QUERY_PROG:
 		xdp->prog_id = veth_xdp_query(dev);
 		return 0;
+	case XDP_QUERY_XSK_UMEM:
+		return veth_xsk_umem_query(dev, &xdp->xsk.umem,
+					   xdp->xsk.queue_id);
+	case XDP_SETUP_XSK_UMEM: {
+		struct veth_priv *priv;
+		int err;
+
+		/* Enable XDP on both sides */
+		err = veth_enable_xdp(dev);
+		if (err)
+			return err;
+
+		priv = netdev_priv(dev);
+		err = veth_enable_xdp(priv->peer);
+		if (err)
+			return err;
+
+		return veth_xsk_umem_setup(dev, xdp->xsk.umem,
+					   xdp->xsk.queue_id);
+	}
 	default:
 		return -EINVAL;
 	}
@@ -1145,6 +1387,7 @@ static const struct net_device_ops veth_netdev_ops = {
 	.ndo_set_rx_headroom	= veth_set_rx_headroom,
 	.ndo_bpf		= veth_xdp,
 	.ndo_xdp_xmit		= veth_xdp_xmit,
+	.ndo_xsk_async_xmit	= veth_xsk_async_xmit,
 };
 
 #define VETH_FEATURES	(NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_HW_CSUM | \
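For orientation, here is a minimal, hypothetical userspace sketch of
the path that reaches ndo_xsk_async_xmit. It uses the if_xdp.h UAPI to
register a umem and bind to veth device "p1", queue 0 (both assumed
from the test setup above); the TX-ring mmap and descriptor
production are deliberately omitted, so this only illustrates the
control flow: a sendto() kick goes through xsk_sendmsg() and
xsk_zc_xmit() to veth_xsk_async_xmit(), which schedules the NAPI poll.
See samples/bpf/xdpsock_user.c for a complete flow.

/* Hypothetical sketch: kick the TX path of an AF_XDP socket bound to
 * "p1", queue 0. Error handling and ring mmap are elided for brevity.
 */
#include <linux/if_xdp.h>
#include <net/if.h>
#include <sys/socket.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NUM_FRAMES 1024
#define FRAME_SIZE 2048

int main(void)
{
	int fd = socket(AF_XDP, SOCK_RAW, 0);
	int ring_size = NUM_FRAMES;
	void *bufs = NULL;
	struct xdp_umem_reg mr;
	struct sockaddr_xdp sxdp;

	posix_memalign(&bufs, getpagesize(), NUM_FRAMES * FRAME_SIZE);

	/* Register the umem backing the descriptors. */
	memset(&mr, 0, sizeof(mr));
	mr.addr = (unsigned long)bufs;
	mr.len = NUM_FRAMES * FRAME_SIZE;
	mr.chunk_size = FRAME_SIZE;
	setsockopt(fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr));

	/* Size the fill, completion, and TX rings. */
	setsockopt(fd, SOL_XDP, XDP_UMEM_FILL_RING, &ring_size, sizeof(ring_size));
	setsockopt(fd, SOL_XDP, XDP_UMEM_COMPLETION_RING, &ring_size, sizeof(ring_size));
	setsockopt(fd, SOL_XDP, XDP_TX_RING, &ring_size, sizeof(ring_size));

	memset(&sxdp, 0, sizeof(sxdp));
	sxdp.sxdp_family = AF_XDP;
	sxdp.sxdp_ifindex = if_nametoindex("p1");
	sxdp.sxdp_queue_id = 0;
	bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp));

	/* ... mmap the TX ring and produce descriptors here ... */

	/* The kick that ends up in veth_xsk_async_xmit(). */
	sendto(fd, NULL, 0, MSG_DONTWAIT, NULL, 0);

	close(fd);
	return 0;
}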
From patchwork Mon Dec 17 19:39:45 2018
X-Patchwork-Submitter: William Tu <u9012063@gmail.com>
X-Patchwork-Id: 1014725
X-Patchwork-Delegate: bpf@iogearbox.net
From: William Tu <u9012063@gmail.com>
To: bjorn.topel@gmail.com, magnus.karlsson@gmail.com, ast@kernel.org,
    daniel@iogearbox.net, netdev@vger.kernel.org, makita.toshiaki@lab.ntt.co.jp
Subject: [bpf-next RFC 3/3] samples: bpf: add veth AF_XDP example.
Date: Mon, 17 Dec 2018 11:39:45 -0800
Message-Id: <1545075585-27744-4-git-send-email-u9012063@gmail.com>
In-Reply-To: <1545075585-27744-1-git-send-email-u9012063@gmail.com>
References: <1545075585-27744-1-git-send-email-u9012063@gmail.com>

Add example use cases for an AF_XDP socket across two namespaces. The
script runs the sender in the root namespace and the receiver in the
at_ns0 namespace, with different XDP actions on the receive side.

Signed-off-by: William Tu <u9012063@gmail.com>
---
 samples/bpf/test_veth_afxdp.sh | 67 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)
 create mode 100755 samples/bpf/test_veth_afxdp.sh

diff --git a/samples/bpf/test_veth_afxdp.sh b/samples/bpf/test_veth_afxdp.sh
new file mode 100755
index 000000000000..b65311cf15f7
--- /dev/null
+++ b/samples/bpf/test_veth_afxdp.sh
@@ -0,0 +1,67 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# The script runs the sender in the root namespace and the
+# receiver in the namespace at_ns0, with different modes:
+# 1. XDP_DROP
+# 2. XDP_TX
+# 3. XDP_PASS
+# 4. XDP_REDIRECT
+# 5. Generic XDP
+
+XDPSOCK=./xdpsock
+XDP_RXQ_INFO=./xdp_rxq_info
+
+ip netns add at_ns0
+ip link add p0 type veth peer name p1
+ip link set p0 netns at_ns0
+ip link set dev p1 up
+ip netns exec at_ns0 ip link set dev p0 up
+
+# send at root namespace
+test_xdp_drop()
+{
+	echo "[Peer] XDP_DROP"
+	ip netns exec at_ns0 $XDP_RXQ_INFO --dev p0 --action XDP_DROP &
+	$XDPSOCK -i p1 -t -N -z &> /tmp/t &> /dev/null &
+}
+
+test_xdp_pass()
+{
+	echo "[Peer] XDP_PASS"
+	ip netns exec at_ns0 $XDP_RXQ_INFO --dev p0 --action XDP_PASS &
+	$XDPSOCK -i p1 -t -N -z &> /tmp/t &> /dev/null &
+}
+
+test_xdp_tx()
+{
+	echo "[Peer] XDP_TX"
+	ip netns exec at_ns0 $XDP_RXQ_INFO --dev p0 --action XDP_TX &
+	$XDPSOCK -i p1 -t -N -z &> /tmp/t &> /dev/null &
+}
+
+test_generic_xdp()
+{
+	echo "[Peer] Generic XDP"
+	ip netns exec at_ns0 $XDPSOCK -i p0 -r -S &
+	$XDPSOCK -i p1 -t -N -z &> /tmp/t &> /dev/null &
+}
+
+test_xdp_redirect()
+{
+	echo "[Peer] XDP_REDIRECT"
+	ip netns exec at_ns0 $XDPSOCK -i p0 -r -N &
+	$XDPSOCK -i p1 -t -N -z &> /tmp/t &> /dev/null &
+}
+
+cleanup() {
+	killall xdpsock
+	killall xdp_rxq_info
+	ip netns del at_ns0
+	ip link del p1
+}
+
+trap cleanup 0 3 6
+
+test_xdp_drop
+cleanup