From patchwork Wed Dec 26 20:27:49 2018
X-Patchwork-Submitter: William Tu
X-Patchwork-Id: 1018730
X-Patchwork-Delegate: bpf@iogearbox.net
From: William Tu
To: bjorn.topel@gmail.com, magnus.karlsson@gmail.com, ast@kernel.org,
    daniel@iogearbox.net, netdev@vger.kernel.org,
    makita.toshiaki@lab.ntt.co.jp, yihung.wei@gmail.com,
    magnus.karlsson@intel.com
Subject: [PATCH bpf-next RFCv3 2/6] veth: support AF_XDP TX copy-mode.
Date: Wed, 26 Dec 2018 12:27:49 -0800
Message-Id: <1545856073-8680-3-git-send-email-u9012063@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1545856073-8680-1-git-send-email-u9012063@gmail.com>
References: <1545856073-8680-1-git-send-email-u9012063@gmail.com>

This patch adds support for AF_XDP async xmit on veth. Users can run
AF_XDP on both sides of a veth pair and get better performance, at the
cost of ksoftirqd doing the xmit. veth_xsk_async_xmit() simply kicks
the NAPI function, veth_poll(), so that it picks up the packets sitting
on the umem transmit ring at the _peer_ side.

Tested using two namespaces: one runs xdpsock, the other runs
xdp_rxq_info. A simple script comparing performance with and without
AF_XDP shows an improvement from 724 Kpps to 1.1 Mpps.

  ip netns add at_ns0
  ip link add p0 type veth peer name p1
  ip link set p0 netns at_ns0
  ip link set dev p1 up
  ip netns exec at_ns0 ip link set dev p0 up

  # receiver
  ip netns exec at_ns0 xdp_rxq_info --dev p0 --action XDP_DROP

  # sender
  xdpsock -i p1 -t -N -z
  or
  xdpsock -i p1 -t -S

Signed-off-by: William Tu
---
 drivers/net/veth.c | 200 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 199 insertions(+), 1 deletion(-)
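For context on the kick path: the async xmit implemented below is
triggered from user space by a zero-byte sendto() on the AF_XDP socket
after descriptors have been placed on the TX ring. A minimal sketch of
that kick, modeled on the kick_tx() helper in the xdpsock sample (the
xsk_fd setup is assumed to happen elsewhere; this is illustrative, not
part of the patch):

  #include <sys/socket.h>
  #include <errno.h>

  /* A zero-byte sendto() on the AF_XDP socket asks the kernel to
   * drain the TX ring; for a zero-copy-bound socket this ends up in
   * ndo_xsk_async_xmit(), i.e. veth_xsk_async_xmit() in this patch. */
  static int kick_tx(int xsk_fd)
  {
          int ret;

          ret = sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0);
          if (ret >= 0 || errno == ENOBUFS || errno == EAGAIN ||
              errno == EBUSY)
                  return 0;
          return -errno;
  }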
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index f412ea1cef18..10cf9ded59f1 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -25,6 +25,10 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
+#include 
 
 #define DRV_NAME "veth"
 #define DRV_VERSION "1.0"
@@ -53,6 +57,8 @@ struct veth_rq {
 	bool rx_notify_masked;
 	struct ptr_ring xdp_ring;
 	struct xdp_rxq_info xdp_rxq;
+	struct xdp_umem *xsk_umem;
+	u16 qid;
 };
 
 struct veth_priv {
@@ -737,11 +743,95 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget, unsigned int *xdp_xmit)
 	return done;
 }
 
+static int veth_xsk_poll(struct napi_struct *napi, int budget)
+{
+	struct veth_priv *priv, *peer_priv;
+	struct net_device *dev, *peer_dev;
+	struct veth_rq *peer_rq;
+	struct veth_rq *rq =
+		container_of(napi, struct veth_rq, xdp_napi);
+	int done = 0;
+
+	dev = rq->dev;
+	priv = netdev_priv(dev);
+	peer_dev = priv->peer;
+	peer_priv = netdev_priv(peer_dev);
+	peer_rq = &peer_priv->rq[rq->qid];
+
+	while (peer_rq->xsk_umem && budget--) {
+		unsigned int inner_xdp_xmit = 0;
+		unsigned int metasize = 0;
+		struct xdp_frame *xdpf;
+		bool dropped = false;
+		struct sk_buff *skb;
+		struct page *page;
+		void *vaddr;
+		void *addr;
+		u32 len;
+
+		if (!xsk_umem_consume_tx_virtual(peer_rq->xsk_umem, &vaddr, &len))
+			break;
+
+		page = dev_alloc_page();
+		if (!page) {
+			xsk_umem_complete_tx(peer_rq->xsk_umem, 1);
+			xsk_umem_consume_tx_done(peer_rq->xsk_umem);
+			return -ENOMEM;
+		}
+
+		addr = page_to_virt(page);
+		xdpf = addr;
+		memset(xdpf, 0, sizeof(*xdpf));
+
+		addr += sizeof(*xdpf);
+		memcpy(addr, vaddr, len);
+
+		xdpf->data = addr + metasize;
+		xdpf->len = len;
+		xdpf->headroom = 0;
+		xdpf->metasize = metasize;
+		xdpf->mem.type = MEM_TYPE_PAGE_SHARED;
+
+		/* put into rq */
+		skb = veth_xdp_rcv_one(rq, xdpf, &inner_xdp_xmit);
+		if (!skb) {
+			/* Peer side has XDP program attached */
+			if (inner_xdp_xmit & VETH_XDP_TX) {
+				/* Not supported */
+				pr_warn("veth: peer XDP_TX not supported\n");
+				xdp_return_frame(xdpf);
+				dropped = true;
+				goto skip_tx;
+			} else if (inner_xdp_xmit & VETH_XDP_REDIR) {
+				xdp_do_flush_map();
+			} else {
+				dropped = true;
+			}
+		} else {
+			napi_gro_receive(&rq->xdp_napi, skb);
+		}
+skip_tx:
+		xsk_umem_complete_tx(peer_rq->xsk_umem, 1);
+		xsk_umem_consume_tx_done(peer_rq->xsk_umem);
+
+		/* update rq stats */
+		u64_stats_update_begin(&rq->stats.syncp);
+		rq->stats.xdp_packets++;
+		rq->stats.xdp_bytes += len;
+		if (dropped)
+			rq->stats.xdp_drops++;
+		u64_stats_update_end(&rq->stats.syncp);
+		done++;
+	}
+	return done;
+}
+
 static int veth_poll(struct napi_struct *napi, int budget)
 {
 	struct veth_rq *rq =
 		container_of(napi, struct veth_rq, xdp_napi);
 	unsigned int xdp_xmit = 0;
+	int tx_done;
 	int done;
 
 	xdp_set_return_frame_no_direct();
@@ -756,13 +846,17 @@ static int veth_poll(struct napi_struct *napi, int budget)
 		}
 	}
 
+	tx_done = veth_xsk_poll(napi, budget);
+	if (tx_done > 0)
+		done += tx_done;
+
 	if (xdp_xmit & VETH_XDP_TX)
 		veth_xdp_flush(rq->dev);
 	if (xdp_xmit & VETH_XDP_REDIR)
 		xdp_do_flush_map();
 	xdp_clear_return_frame_no_direct();
 
-	return done;
+	return done > budget ? budget : done;
 }
 
 static int veth_napi_add(struct net_device *dev)
@@ -776,6 +870,7 @@ static int veth_napi_add(struct net_device *dev)
 		err = ptr_ring_init(&rq->xdp_ring, VETH_RING_SIZE, GFP_KERNEL);
 		if (err)
 			goto err_xdp_ring;
+		rq->qid = i;
 	}
 
 	for (i = 0; i < dev->real_num_rx_queues; i++) {
@@ -812,6 +907,7 @@ static void veth_napi_del(struct net_device *dev)
 		netif_napi_del(&rq->xdp_napi);
 		rq->rx_notify_masked = false;
 		ptr_ring_cleanup(&rq->xdp_ring, veth_ptr_free);
+		rq->qid = -1;
 	}
 }
 
@@ -836,6 +932,7 @@ static int veth_enable_xdp(struct net_device *dev)
 
 			/* Save original mem info as it can be overwritten */
 			rq->xdp_mem = rq->xdp_rxq.mem;
+			rq->qid = i;
 		}
 
 		err = veth_napi_add(dev);
@@ -1115,6 +1212,84 @@ static u32 veth_xdp_query(struct net_device *dev)
 	return 0;
 }
 
+int veth_xsk_umem_query(struct net_device *dev, struct xdp_umem **umem,
+			u16 qid)
+{
+	struct xdp_umem *queried_umem;
+
+	queried_umem = xdp_get_umem_from_qid(dev, qid);
+
+	if (!queried_umem)
+		return -EINVAL;
+
+	*umem = queried_umem;
+	return 0;
+}
+
+static int veth_xsk_umem_enable(struct net_device *dev,
+				struct xdp_umem *umem,
+				u16 qid)
+{
+	struct veth_priv *priv = netdev_priv(dev);
+	struct xdp_umem_fq_reuse *reuseq;
+	int err = 0;
+
+	if (qid >= dev->real_num_rx_queues)
+		return -EINVAL;
+
+	reuseq = xsk_reuseq_prepare(priv->rq[0].xdp_ring.size);
+	if (!reuseq)
+		return -ENOMEM;
+
+	xsk_reuseq_free(xsk_reuseq_swap(umem, reuseq));
+
+	priv->rq[qid].xsk_umem = umem;
+
+	return err;
+}
+
+static int veth_xsk_umem_disable(struct net_device *dev,
+				 u16 qid)
+{
+	struct veth_priv *priv = netdev_priv(dev);
+	struct xdp_umem *umem;
+
+	umem = xdp_get_umem_from_qid(dev, qid);
+	if (!umem)
+		return -EINVAL;
+
+	priv->rq[qid].xsk_umem = NULL;
+	return 0;
+}
+
+int veth_xsk_umem_setup(struct net_device *dev, struct xdp_umem *umem,
+			u16 qid)
+{
+	return umem ?
+	       veth_xsk_umem_enable(dev, umem, qid) :
+	       veth_xsk_umem_disable(dev, qid);
+}
+
+int veth_xsk_async_xmit(struct net_device *dev, u32 qid)
+{
+	struct veth_priv *priv, *peer_priv;
+	struct net_device *peer_dev;
+	struct veth_rq *peer_rq;
+
+	priv = netdev_priv(dev);
+	peer_dev = priv->peer;
+	peer_priv = netdev_priv(peer_dev);
+	peer_rq = &peer_priv->rq[qid];
+
+	if (qid >= dev->real_num_rx_queues)
+		return -ENXIO;
+
+	/* Schedule the peer side NAPI to receive */
+	if (!napi_if_scheduled_mark_missed(&peer_rq->xdp_napi))
+		napi_schedule(&peer_rq->xdp_napi);
+
+	return 0;
+}
+
 static int veth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 {
 	switch (xdp->command) {
@@ -1123,6 +1298,28 @@ static int veth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	case XDP_QUERY_PROG:
 		xdp->prog_id = veth_xdp_query(dev);
 		return 0;
+	case XDP_QUERY_XSK_UMEM:
+		return veth_xsk_umem_query(dev, &xdp->xsk.umem,
+					   xdp->xsk.queue_id);
+	case XDP_SETUP_XSK_UMEM: {
+		struct veth_priv *priv;
+		int err;
+
+		/* Enable NAPI on both sides, by enabling
+		 * their XDP.
+		 */
+		err = veth_enable_xdp(dev);
+		if (err)
+			return err;
+
+		priv = netdev_priv(dev);
+		err = veth_enable_xdp(priv->peer);
+		if (err)
+			return err;
+
+		return veth_xsk_umem_setup(dev, xdp->xsk.umem,
+					   xdp->xsk.queue_id);
+	}
 	default:
 		return -EINVAL;
 	}
@@ -1145,6 +1342,7 @@ static const struct net_device_ops veth_netdev_ops = {
 	.ndo_set_rx_headroom	= veth_set_rx_headroom,
 	.ndo_bpf		= veth_xdp,
 	.ndo_xdp_xmit		= veth_xdp_xmit,
+	.ndo_xsk_async_xmit	= veth_xsk_async_xmit,
 };
 
 #define VETH_FEATURES (NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_HW_CSUM | \
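A note on the completion side of the copy path above: veth_xsk_poll()
hands each consumed descriptor back via xsk_umem_complete_tx() and
xsk_umem_consume_tx_done(), which makes the frame address appear on the
user-space completion ring so the umem frame can be reused. A sketch of
that reaping step, using the libbpf xsk ring helpers (these postdate
this RFC and are shown only to illustrate the contract; BATCH_SIZE is
an arbitrary choice here):

  #include <bpf/xsk.h>

  #define BATCH_SIZE 64

  /* Reap completed TX addresses from the umem completion ring; each
   * returned addr identifies a umem frame the application may reuse. */
  static unsigned int reap_completions(struct xsk_ring_cons *cq)
  {
          unsigned int i, done;
          __u32 idx;

          done = xsk_ring_cons__peek(cq, BATCH_SIZE, &idx);
          for (i = 0; i < done; i++) {
                  __u64 addr = *xsk_ring_cons__comp_addr(cq, idx + i);

                  (void)addr;     /* recycle frame into app free list */
          }
          if (done)
                  xsk_ring_cons__release(cq, done);
          return done;
  }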