From patchwork Sat Feb 25 17:33:20 2017
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 732423
From: John Fastabend
To: alexander.h.duyck@intel.com, daniel@iogearbox.net,
    alexander.duyck@gmail.com, john.r.fastabend@intel.com,
    bjorn.topel@intel.com, alexei.starovoitov@gmail.com,
    john.fastabend@intel.com
Cc: intel-wired-lan@lists.osuosl.org, magnus.karlsson@intel.com
Date: Sat, 25 Feb 2017 09:33:20 -0800
Message-ID: <20170225173320.32741.88705.stgit@john-Precision-Tower-5810>
In-Reply-To: <20170225172422.32741.67877.stgit@john-Precision-Tower-5810>
References: <20170225172422.32741.67877.stgit@john-Precision-Tower-5810>
User-Agent: StGit/0.17.1-dirty
Subject: [Intel-wired-lan] [net-next PATCH 3/3] ixgbe: xdp support for adjust head

Add adjust_head support for XDP. For now only IXGBE_SKB_PAD bytes of
headroom are reserved, to stay aligned with the existing driver paths.
The infrastructure is such that a follow-on patch can extend the
headroom up to 196B without changing the RX path.
Signed-off-by: John Fastabend
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   45 +++++++++++++++++--------
 1 file changed, 31 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 227caf8..040e469 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2031,6 +2031,7 @@ static bool ixgbe_can_reuse_rx_page(struct ixgbe_rx_buffer *rx_buffer)
 static void ixgbe_add_rx_frag(struct ixgbe_ring *rx_ring,
 			      struct ixgbe_rx_buffer *rx_buffer,
 			      struct sk_buff *skb,
+			      unsigned int headroom,
 			      unsigned int size)
 {
 #if (PAGE_SIZE < 8192)
@@ -2041,7 +2042,8 @@ static void ixgbe_add_rx_frag(struct ixgbe_ring *rx_ring,
 				SKB_DATA_ALIGN(size);
 #endif
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
-			rx_buffer->page_offset, size, truesize);
+			rx_buffer->page_offset - (IXGBE_SKB_PAD - headroom),
+			size, truesize);
 #if (PAGE_SIZE < 8192)
 	rx_buffer->page_offset ^= truesize;
 #else
@@ -2114,6 +2116,7 @@ static void ixgbe_put_rx_buffer(struct ixgbe_ring *rx_ring,
 static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
 					   struct ixgbe_rx_buffer *rx_buffer,
 					   union ixgbe_adv_rx_desc *rx_desc,
+					   unsigned int headroom,
 					   unsigned int size)
 {
 	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
@@ -2122,6 +2125,7 @@ static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
 #else
 	unsigned int truesize = SKB_DATA_ALIGN(size);
 #endif
+	unsigned int off_page;
 	struct sk_buff *skb;

 	/* prefetch first cache line of first page */
@@ -2135,12 +2139,14 @@ static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
 	if (unlikely(!skb))
 		return NULL;

+	off_page = IXGBE_SKB_PAD - headroom;
+
 	if (size > IXGBE_RX_HDR_SIZE) {
 		if (!ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_EOP))
 			IXGBE_CB(skb)->dma = rx_buffer->dma;

 		skb_add_rx_frag(skb, 0, rx_buffer->page,
-				rx_buffer->page_offset,
+				rx_buffer->page_offset - off_page,
 				size, truesize);
 #if (PAGE_SIZE < 8192)
 		rx_buffer->page_offset ^= truesize;
@@ -2148,7 +2154,8 @@ static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
 		rx_buffer->page_offset += truesize;
 #endif
 	} else {
-		memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
+		memcpy(__skb_put(skb, size), va - off_page,
+		       ALIGN(size, sizeof(long)));
 		rx_buffer->pagecnt_bias++;
 	}

@@ -2158,6 +2165,7 @@ static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
 static struct sk_buff *ixgbe_build_skb(struct ixgbe_ring *rx_ring,
 				       struct ixgbe_rx_buffer *rx_buffer,
 				       union ixgbe_adv_rx_desc *rx_desc,
+				       unsigned int headroom,
 				       unsigned int size)
 {
 	void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
@@ -2181,7 +2189,7 @@ static struct sk_buff *ixgbe_build_skb(struct ixgbe_ring *rx_ring,
 		return NULL;

 	/* update pointers within the skb to store the data */
-	skb_reserve(skb, IXGBE_SKB_PAD);
+	skb_reserve(skb, headroom);
 	__skb_put(skb, size);

 	/* record DMA address if this is the start of a chain of buffers */
@@ -2202,10 +2210,14 @@ static int ixgbe_xmit_xdp_ring(struct xdp_buff *xdp,
 			       struct ixgbe_adapter *adapter,
 			       struct ixgbe_ring *tx_ring);

+#define IXGBE_XDP_PASS 0
+#define IXGBE_XDP_CONSUMED 1
+
 static int ixgbe_run_xdp(struct ixgbe_adapter *adapter,
 			 struct ixgbe_ring *rx_ring,
 			 struct ixgbe_rx_buffer *rx_buffer,
-			 unsigned int size)
+			 unsigned int *headroom,
+			 unsigned int *size)
 {
 	struct ixgbe_ring *xdp_ring;
 	struct bpf_prog *xdp_prog;
@@ -2216,17 +2228,19 @@ static int ixgbe_run_xdp(struct ixgbe_adapter *adapter,
 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);

 	if (!xdp_prog)
-		return 0;
+		return IXGBE_XDP_PASS;

 	addr = page_address(rx_buffer->page) + rx_buffer->page_offset;
-	xdp.data_hard_start = addr;
+	xdp.data_hard_start = addr - *headroom;
 	xdp.data = addr;
-	xdp.data_end = addr + size;
+	xdp.data_end = addr + *size;

 	act = bpf_prog_run_xdp(xdp_prog, &xdp);
 	switch (act) {
 	case XDP_PASS:
-		return 0;
+		*headroom = xdp.data - xdp.data_hard_start;
+		*size = xdp.data_end - xdp.data;
+		return IXGBE_XDP_PASS;
 	case XDP_TX:
 		xdp_ring = adapter->xdp_ring[smp_processor_id()];
@@ -2246,7 +2260,7 @@ static int ixgbe_run_xdp(struct ixgbe_adapter *adapter,
 		rx_buffer->pagecnt_bias++; /* give page back */
 		break;
 	}
-	return size;
+	return IXGBE_XDP_CONSUMED;
 }

 /**
@@ -2275,6 +2289,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 	u16 cleaned_count = ixgbe_desc_unused(rx_ring);

 	while (likely(total_rx_packets < budget)) {
+		unsigned int headroom = ixgbe_rx_offset(rx_ring);
 		union ixgbe_adv_rx_desc *rx_desc;
 		struct ixgbe_rx_buffer *rx_buffer;
 		struct sk_buff *skb;
@@ -2301,7 +2316,8 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		rx_buffer = ixgbe_get_rx_buffer(rx_ring, rx_desc, &skb, size);

 		rcu_read_lock();
-		consumed = ixgbe_run_xdp(adapter, rx_ring, rx_buffer, size);
+		consumed = ixgbe_run_xdp(adapter, rx_ring, rx_buffer,
+					 &headroom, &size);
 		rcu_read_unlock();

 		if (consumed) {
@@ -2315,13 +2331,14 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,

 		/* retrieve a buffer from the ring */
 		if (skb)
-			ixgbe_add_rx_frag(rx_ring, rx_buffer, skb, size);
+			ixgbe_add_rx_frag(rx_ring, rx_buffer, skb,
+					  headroom, size);
 		else if (ring_uses_build_skb(rx_ring))
 			skb = ixgbe_build_skb(rx_ring, rx_buffer,
-					      rx_desc, size);
+					      rx_desc, headroom, size);
 		else
 			skb = ixgbe_construct_skb(rx_ring, rx_buffer,
-						  rx_desc, size);
+						  rx_desc, headroom, size);

 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {