From patchwork Fri Mar 3 17:57:46 2017
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 735197
From: John Fastabend
To: alexander.duyck@gmail.com
Cc: daniel@iogearbox.net, intel-wired-lan@lists.osuosl.org, bjorn.topel@intel.com, alexei.starovoitov@gmail.com, magnus.karlsson@intel.com
Date: Fri, 03 Mar 2017 09:57:46 -0800
Message-ID: <20170303175746.25015.22531.stgit@john-Precision-Tower-5810>
In-Reply-To: <20170303175526.25015.12183.stgit@john-Precision-Tower-5810>
References: <20170303175526.25015.12183.stgit@john-Precision-Tower-5810>
User-Agent: StGit/0.17.1-dirty
Subject: [Intel-wired-lan] [net-next PATCH v3 3/3] ixgbe: xdp support for adjust head

Add adjust_head support for XDP. At the moment we only add IXGBE_SKB_PAD
bytes of headroom, to align with the existing driver paths. The
infrastructure is such that a follow-on patch can extend the headroom up
to 196B without changing the RX path.

Signed-off-by: John Fastabend
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 31 +++++++++++++++++--------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index fa37d48..1892c42 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2117,6 +2117,7 @@ static void ixgbe_put_rx_buffer(struct ixgbe_ring *rx_ring,
 static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
                                            struct ixgbe_rx_buffer *rx_buffer,
                                            union ixgbe_adv_rx_desc *rx_desc,
+                                           unsigned int headroom,
                                            unsigned int size)
 {
        void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
@@ -2125,6 +2126,7 @@ static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
 #else
        unsigned int truesize = SKB_DATA_ALIGN(size);
 #endif
+       unsigned int off_page;
        struct sk_buff *skb;
 
        /* prefetch first cache line of first page */
@@ -2138,12 +2140,14 @@ static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
        if (unlikely(!skb))
                return NULL;
 
+       off_page = IXGBE_SKB_PAD - headroom;
+
        if (size > IXGBE_RX_HDR_SIZE) {
                if (!ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_EOP))
                        IXGBE_CB(skb)->dma = rx_buffer->dma;
 
                skb_add_rx_frag(skb, 0, rx_buffer->page,
-                               rx_buffer->page_offset,
+                               rx_buffer->page_offset - off_page,
                                size, truesize);
 #if (PAGE_SIZE < 8192)
                rx_buffer->page_offset ^= truesize;
@@ -2151,7 +2155,8 @@ static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
                rx_buffer->page_offset += truesize;
 #endif
        } else {
-               memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
+               memcpy(__skb_put(skb, size), va - off_page,
+                      ALIGN(size, sizeof(long)));
                rx_buffer->pagecnt_bias++;
        }
 
@@ -2161,6 +2166,7 @@ static struct sk_buff *ixgbe_construct_skb(struct ixgbe_ring *rx_ring,
 static struct sk_buff *ixgbe_build_skb(struct ixgbe_ring *rx_ring,
                                        struct ixgbe_rx_buffer *rx_buffer,
                                        union ixgbe_adv_rx_desc *rx_desc,
+                                       unsigned int headroom,
                                        unsigned int size)
 {
        void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
@@ -2184,7 +2190,7 @@ static struct sk_buff *ixgbe_build_skb(struct ixgbe_ring *rx_ring,
                return NULL;
 
        /* update pointers within the skb to store the data */
-       skb_reserve(skb, IXGBE_SKB_PAD);
+       skb_reserve(skb, headroom);
        __skb_put(skb, size);
 
        /* record DMA address if this is the start of a chain of buffers */
@@ -2211,7 +2217,8 @@ static int ixgbe_xmit_xdp_ring(struct ixgbe_adapter *adapter,
 static struct sk_buff *ixgbe_run_xdp(struct ixgbe_adapter *adapter,
                                      struct ixgbe_ring *rx_ring,
                                      struct ixgbe_rx_buffer *rx_buffer,
-                                     unsigned int size)
+                                     unsigned int *headroom,
+                                     unsigned int *size)
 {
        int result = IXGBE_XDP_PASS;
        struct bpf_prog *xdp_prog;
@@ -2226,14 +2233,16 @@ static struct sk_buff *ixgbe_run_xdp(struct ixgbe_adapter *adapter,
                goto xdp_out;
 
        addr = page_address(rx_buffer->page) + rx_buffer->page_offset;
-       xdp.data_hard_start = addr;
+       xdp.data_hard_start = addr - *headroom;
        xdp.data = addr;
-       xdp.data_end = addr + size;
+       xdp.data_end = addr + *size;
 
        act = bpf_prog_run_xdp(xdp_prog, &xdp);
        switch (act) {
        case XDP_PASS:
-               break;
+               *headroom = xdp.data - xdp.data_hard_start;
+               *size = xdp.data_end - xdp.data;
+               return IXGBE_XDP_PASS;
        case XDP_TX:
                result = ixgbe_xmit_xdp_ring(adapter, &xdp);
                if (result == IXGBE_XDP_TX) {
@@ -2289,6 +2298,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
        u16 cleaned_count = ixgbe_desc_unused(rx_ring);
 
        while (likely(total_rx_packets < budget)) {
+               unsigned int headroom = ixgbe_rx_offset(rx_ring);
                union ixgbe_adv_rx_desc *rx_desc;
                struct ixgbe_rx_buffer *rx_buffer;
                struct sk_buff *skb;
@@ -2313,7 +2323,8 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 
                rx_buffer = ixgbe_get_rx_buffer(rx_ring, rx_desc, &skb, size);
 
-               skb = ixgbe_run_xdp(adapter, rx_ring, rx_buffer, size);
+               skb = ixgbe_run_xdp(adapter, rx_ring, rx_buffer,
+                                   &headroom, &size);
 
                if (IS_ERR(skb)) { /* XDP consumed buffer */
                        total_rx_packets++;
                        total_rx_bytes += size;
@@ -2321,10 +2332,10 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
                        ixgbe_add_rx_frag(rx_ring, rx_buffer, skb, size);
                } else if (ring_uses_build_skb(rx_ring)) {
                        skb = ixgbe_build_skb(rx_ring, rx_buffer,
-                                             rx_desc, size);
+                                             rx_desc, headroom, size);
                } else {
                        skb = ixgbe_construct_skb(rx_ring, rx_buffer,
-                                                 rx_desc, size);
+                                                 rx_desc, headroom, size);
                }
 
                /* exit if we failed to retrieve a buffer */
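
For context, the headroom this patch reserves in front of xdp.data is what an
attached XDP program can consume with bpf_xdp_adjust_head(). Below is a minimal
sketch of such a program; it is not part of this patch, and the program name,
section name, and the 4-byte "tag" are made up for illustration. It assumes a
libbpf-style build: a negative delta grows the packet at the front out of the
driver-provided headroom (IXGBE_SKB_PAD here), a positive delta pops bytes off
the front.

/* Illustrative sketch only, not part of this patch: consumes 4 bytes of
 * the driver-provided headroom to prepend a hypothetical tag.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_push_tag(struct xdp_md *ctx)
{
	/* Grow the packet at the front by 4 bytes (taken from headroom). */
	if (bpf_xdp_adjust_head(ctx, -4))
		return XDP_DROP;	/* not enough headroom */

	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	/* The verifier requires this bounds check before any write. */
	if (data + 4 > data_end)
		return XDP_DROP;

	__builtin_memset(data, 0, 4);	/* fill the new bytes */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

With the driver change above, the XDP_PASS path recomputes headroom and size
from xdp.data/xdp.data_hard_start/xdp.data_end, so the skb built afterwards by
ixgbe_build_skb() or ixgbe_construct_skb() reflects the adjusted frame.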