From patchwork Tue Aug 25 12:13:22 2020
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 1351047
X-Patchwork-Delegate: davem@davemloft.net
From: Björn Töpel
To: jeffrey.t.kirsher@intel.com, intel-wired-lan@lists.osuosl.org
Cc: Björn Töpel, magnus.karlsson@intel.com, magnus.karlsson@gmail.com,
    netdev@vger.kernel.org, maciej.fijalkowski@intel.com,
    piotr.raczynski@intel.com, maciej.machnikowski@intel.com,
    lirongqing@baidu.com
Subject: [PATCH net v2 2/3] ixgbe: avoid premature Rx buffer reuse
Date: Tue, 25 Aug 2020 14:13:22 +0200
Message-Id: <20200825121323.20239-3-bjorn.topel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200825121323.20239-1-bjorn.topel@gmail.com>
References: <20200825121323.20239-1-bjorn.topel@gmail.com>

From: Björn Töpel

The page recycle code incorrectly relied on a page fragment not being
freed inside xdp_do_redirect(). This assumption means that page
fragments still in use by the stack or by an XDP redirect target can be
recycled and overwritten.

To avoid this, store the page count prior to invoking
xdp_do_redirect().

Fixes: 6453073987ba ("ixgbe: add initial support for xdp redirect")
Reported-and-analyzed-by: Li RongQing
Signed-off-by: Björn Töpel
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 24 +++++++++++++------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 2f8a4cfc5fa1..824c776a3abc 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1945,7 +1945,8 @@ static inline bool ixgbe_page_is_reserved(struct page *page)
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }
 
-static bool ixgbe_can_reuse_rx_page(struct ixgbe_rx_buffer *rx_buffer)
+static bool ixgbe_can_reuse_rx_page(struct ixgbe_rx_buffer *rx_buffer,
+				    int rx_buffer_pgcnt)
 {
 	unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
 	struct page *page = rx_buffer->page;
@@ -1956,7 +1957,7 @@ static bool ixgbe_can_reuse_rx_page(struct ixgbe_rx_buffer *rx_buffer)
 
 #if (PAGE_SIZE < 8192)
 	/* if we are only owner of page we can reuse it */
-	if (unlikely((page_ref_count(page) - pagecnt_bias) > 1))
+	if (unlikely((rx_buffer_pgcnt - pagecnt_bias) > 1))
 		return false;
 #else
 	/* The last offset is a bit aggressive in that we assume the
@@ -2021,11 +2022,18 @@ static void ixgbe_add_rx_frag(struct ixgbe_ring *rx_ring,
 static struct ixgbe_rx_buffer *ixgbe_get_rx_buffer(struct ixgbe_ring *rx_ring,
 						   union ixgbe_adv_rx_desc *rx_desc,
 						   struct sk_buff **skb,
-						   const unsigned int size)
+						   const unsigned int size,
+						   int *rx_buffer_pgcnt)
 {
 	struct ixgbe_rx_buffer *rx_buffer;
 
 	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+	*rx_buffer_pgcnt =
+#if (PAGE_SIZE < 8192)
+		page_count(rx_buffer->page);
+#else
+		0;
+#endif
 	prefetchw(rx_buffer->page);
 	*skb = rx_buffer->skb;
 
@@ -2055,9 +2063,10 @@ static struct ixgbe_rx_buffer *ixgbe_get_rx_buffer(struct ixgbe_ring *rx_ring,
 
 static void ixgbe_put_rx_buffer(struct ixgbe_ring *rx_ring,
 				struct ixgbe_rx_buffer *rx_buffer,
-				struct sk_buff *skb)
+				struct sk_buff *skb,
+				int rx_buffer_pgcnt)
 {
-	if (ixgbe_can_reuse_rx_page(rx_buffer)) {
+	if (ixgbe_can_reuse_rx_page(rx_buffer, rx_buffer_pgcnt)) {
 		/* hand second half of page back to the ring */
 		ixgbe_reuse_rx_page(rx_ring, rx_buffer);
 	} else {
@@ -2308,6 +2317,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		union ixgbe_adv_rx_desc *rx_desc;
 		struct ixgbe_rx_buffer *rx_buffer;
 		struct sk_buff *skb;
+		int rx_buffer_pgcnt;
 		unsigned int size;
 
 		/* return some buffers to hardware, one at a time is too slow */
@@ -2327,7 +2337,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		 */
 		dma_rmb();
 
-		rx_buffer = ixgbe_get_rx_buffer(rx_ring, rx_desc, &skb, size);
+		rx_buffer = ixgbe_get_rx_buffer(rx_ring, rx_desc, &skb, size, &rx_buffer_pgcnt);
 
 		/* retrieve a buffer from the ring */
 		if (!skb) {
@@ -2372,7 +2382,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 			break;
 		}
 
-		ixgbe_put_rx_buffer(rx_ring, rx_buffer, skb);
+		ixgbe_put_rx_buffer(rx_ring, rx_buffer, skb, rx_buffer_pgcnt);
 		cleaned_count++;
 
 		/* place incomplete frames back on ring for completion */
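
For reference, below is a minimal userspace model of the race the patch
closes; it is an illustrative sketch, not driver code. Only the names
rx_buffer_pgcnt and pagecnt_bias come from the patch; the scenario, the
helper can_reuse(), and the refcount values are assumptions made for the
example. It models a page split into two halves, with one half sitting
in an skb owned by the stack and the other just handed to
xdp_do_redirect(): sampling the page count before the redirect keeps the
in-flight fragment visible to the reuse check, while reading the live
count afterwards lets a free inside xdp_do_redirect() make the page look
recyclable although the stack still owns the other half.

#include <stdbool.h>
#include <stdio.h>

struct page_model {
	int refcount;	/* stands in for page_ref_count() */
};

/* Reuse rule as in ixgbe_can_reuse_rx_page() for PAGE_SIZE < 8192:
 * at most one reference beyond the driver's own bias may be
 * outstanding, and it is assumed to be the fragment just handed off. */
static bool can_reuse(int pgcnt, int pagecnt_bias)
{
	return (pgcnt - pagecnt_bias) <= 1;
}

int main(void)
{
	/* Half A is held by the stack, half B was just redirected:
	 * two references beyond the driver's bias. */
	struct page_model page = { .refcount = 2 };
	int pagecnt_bias = 0;

	/* Patched ordering: sample the count before the redirect. */
	int rx_buffer_pgcnt = page.refcount;

	/* The redirect target frees half B right away, which can
	 * happen inside xdp_do_redirect(). */
	page.refcount--;

	/* Pre-patch behavior: the live count shows one outstanding
	 * reference (the stack's ref on half A), which the check
	 * misreads as the just-redirected half B, so the page is
	 * recycled and half A gets overwritten by the NIC. */
	printf("live count:    reuse=%d (premature)\n",
	       can_reuse(page.refcount, pagecnt_bias));

	/* Post-patch behavior: the sampled count still includes the
	 * redirected fragment, so recycling is correctly refused. */
	printf("sampled count: reuse=%d\n",
	       can_reuse(rx_buffer_pgcnt, pagecnt_bias));
	return 0;
}

Compiled with e.g. "cc -o reuse reuse.c", this prints reuse=1 for the
live count and reuse=0 for the sampled one, matching the before/after
behavior described in the commit message.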