From patchwork Tue Oct 11 05:56:46 2022
X-Patchwork-Submitter: Khalid Elmously
X-Patchwork-Id: 1688465
From: Khalid Elmously
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 01/19] gve: Add RX context.
Date: Tue, 11 Oct 2022 01:56:46 -0400
Message-Id: <20221011055704.642271-2-khalid.elmously@canonical.com>
In-Reply-To: <20221011055704.642271-1-khalid.elmously@canonical.com>
References: <20221011055704.642271-1-khalid.elmously@canonical.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0

From: David Awogbemila

BugLink: https://bugs.launchpad.net/bugs/1953575

This refactor moves the skb_head and skb_tail fields into a new
gve_rx_ctx struct. This new struct will contain information about the
current packet being processed. This is in preparation for
multi-descriptor RX packets.

Signed-off-by: David Awogbemila
Signed-off-by: Jeroen de Borst
Reviewed-by: Catherine Sullivan
Signed-off-by: David S. Miller
(cherry picked from commit 1344e751e91092ac0cb63b194621e59d2f364197)
Signed-off-by: Khalid Elmously
---
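[ For reviewers skimming the series, a self-contained sketch of the
  pattern this patch applies: per-packet reassembly state moves out of
  struct gve_rx_ring into its own context struct so follow-up patches
  can extend it. The gve_rx_ctx_clear() helper below is hypothetical
  (the patch open-codes the two NULL stores); struct sk_buff is left
  opaque so the sketch stands alone outside the kernel tree. ]

#include <stddef.h>

struct sk_buff;	/* opaque stand-in; the real type is in <linux/skbuff.h> */

/* Per-packet reassembly state, formerly two loose fields in
 * struct gve_rx_ring. Grouping them lets later patches add
 * per-packet fields without widening the ring struct itself.
 */
struct gve_rx_ctx {
	struct sk_buff *skb_head;	/* skb handed to the stack at EOP */
	struct sk_buff *skb_tail;	/* skb receiving new fragments */
};

struct gve_rx_ring_sketch {		/* abbreviated stand-in for the real ring */
	/* ...queue state elided... */
	struct gve_rx_ctx ctx;		/* packet currently being processed */
};

/* Hypothetical helper: the patch open-codes these stores at ring
 * setup, on error paths, and after each completed packet.
 */
static inline void gve_rx_ctx_clear(struct gve_rx_ctx *ctx)
{
	ctx->skb_head = NULL;
	ctx->skb_tail = NULL;
}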
 drivers/net/ethernet/google/gve/gve.h        | 13 +++-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c | 68 ++++++++++----------
 2 files changed, 44 insertions(+), 37 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index b1273dce4795..2cb193fd104d 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -142,6 +142,15 @@ struct gve_index_list {
 	s16 tail;
 };
 
+/* A single received packet split across multiple buffers may be
+ * reconstructed using the information in this structure.
+ */
+struct gve_rx_ctx {
+	/* head and tail of skb chain for the current packet or NULL if none */
+	struct sk_buff *skb_head;
+	struct sk_buff *skb_tail;
+};
+
 /* Contains datapath state used to represent an RX queue. */
 struct gve_rx_ring {
 	struct gve_priv *gve;
@@ -206,9 +215,7 @@ struct gve_rx_ring {
 	dma_addr_t q_resources_bus; /* dma address for the queue resources */
 	struct u64_stats_sync statss; /* sync stats for 32bit archs */
 
-	/* head and tail of skb chain for the current packet or NULL if none */
-	struct sk_buff *skb_head;
-	struct sk_buff *skb_tail;
+	struct gve_rx_ctx ctx; /* Info for packet currently being processed in this ring. */
 };
 
 /* A TX desc ring entry */
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 7b18b4fd9e54..1efa32bc72b4 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -240,8 +240,8 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, int idx)
 	rx->dqo.bufq.mask = buffer_queue_slots - 1;
 	rx->dqo.complq.num_free_slots = completion_queue_slots;
 	rx->dqo.complq.mask = completion_queue_slots - 1;
-	rx->skb_head = NULL;
-	rx->skb_tail = NULL;
+	rx->ctx.skb_head = NULL;
+	rx->ctx.skb_tail = NULL;
 
 	rx->dqo.num_buf_states = min_t(s16, S16_MAX, buffer_queue_slots * 4);
 	rx->dqo.buf_states = kvcalloc(rx->dqo.num_buf_states,
@@ -467,12 +467,12 @@ static void gve_rx_skb_hash(struct sk_buff *skb,
 
 static void gve_rx_free_skb(struct gve_rx_ring *rx)
 {
-	if (!rx->skb_head)
+	if (!rx->ctx.skb_head)
 		return;
 
-	dev_kfree_skb_any(rx->skb_head);
-	rx->skb_head = NULL;
-	rx->skb_tail = NULL;
+	dev_kfree_skb_any(rx->ctx.skb_head);
+	rx->ctx.skb_head = NULL;
+	rx->ctx.skb_tail = NULL;
 }
 
 /* Chains multi skbs for single rx packet.
@@ -483,7 +483,7 @@ static int gve_rx_append_frags(struct napi_struct *napi,
 				  u16 buf_len, struct gve_rx_ring *rx,
 				  struct gve_priv *priv)
 {
-	int num_frags = skb_shinfo(rx->skb_tail)->nr_frags;
+	int num_frags = skb_shinfo(rx->ctx.skb_tail)->nr_frags;
 
 	if (unlikely(num_frags == MAX_SKB_FRAGS)) {
 		struct sk_buff *skb;
@@ -492,17 +492,17 @@ static int gve_rx_append_frags(struct napi_struct *napi,
 		if (!skb)
 			return -1;
 
-		skb_shinfo(rx->skb_tail)->frag_list = skb;
-		rx->skb_tail = skb;
+		skb_shinfo(rx->ctx.skb_tail)->frag_list = skb;
+		rx->ctx.skb_tail = skb;
 		num_frags = 0;
 	}
-	if (rx->skb_tail != rx->skb_head) {
-		rx->skb_head->len += buf_len;
-		rx->skb_head->data_len += buf_len;
-		rx->skb_head->truesize += priv->data_buffer_size_dqo;
+	if (rx->ctx.skb_tail != rx->ctx.skb_head) {
+		rx->ctx.skb_head->len += buf_len;
+		rx->ctx.skb_head->data_len += buf_len;
+		rx->ctx.skb_head->truesize += priv->data_buffer_size_dqo;
 	}
 
-	skb_add_rx_frag(rx->skb_tail, num_frags,
+	skb_add_rx_frag(rx->ctx.skb_tail, num_frags,
 			buf_state->page_info.page,
 			buf_state->page_info.page_offset,
 			buf_len, priv->data_buffer_size_dqo);
@@ -556,7 +556,7 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx,
 				buf_len, DMA_FROM_DEVICE);
 
 	/* Append to current skb if one exists. */
-	if (rx->skb_head) {
+	if (rx->ctx.skb_head) {
 		if (unlikely(gve_rx_append_frags(napi, buf_state, buf_len,
 						 rx, priv)) != 0) {
 			goto error;
@@ -567,11 +567,11 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx,
 	}
 
 	if (eop && buf_len <= priv->rx_copybreak) {
-		rx->skb_head = gve_rx_copy(priv->dev, napi,
-					   &buf_state->page_info, buf_len, 0);
-		if (unlikely(!rx->skb_head))
+		rx->ctx.skb_head = gve_rx_copy(priv->dev, napi,
+					       &buf_state->page_info, buf_len, 0);
+		if (unlikely(!rx->ctx.skb_head))
 			goto error;
-		rx->skb_tail = rx->skb_head;
+		rx->ctx.skb_tail = rx->ctx.skb_head;
 
 		u64_stats_update_begin(&rx->statss);
 		rx->rx_copied_pkt++;
@@ -583,12 +583,12 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx,
 		return 0;
 	}
 
-	rx->skb_head = napi_get_frags(napi);
-	if (unlikely(!rx->skb_head))
+	rx->ctx.skb_head = napi_get_frags(napi);
+	if (unlikely(!rx->ctx.skb_head))
 		goto error;
-	rx->skb_tail = rx->skb_head;
+	rx->ctx.skb_tail = rx->ctx.skb_head;
 
-	skb_add_rx_frag(rx->skb_head, 0, buf_state->page_info.page,
+	skb_add_rx_frag(rx->ctx.skb_head, 0, buf_state->page_info.page,
 			buf_state->page_info.page_offset,
 			buf_len, priv->data_buffer_size_dqo);
 	gve_dec_pagecnt_bias(&buf_state->page_info);
@@ -635,27 +635,27 @@ static int gve_rx_complete_skb(struct gve_rx_ring *rx, struct napi_struct *napi,
 		rx->gve->ptype_lut_dqo->ptypes[desc->packet_type];
 	int err;
 
-	skb_record_rx_queue(rx->skb_head, rx->q_num);
+	skb_record_rx_queue(rx->ctx.skb_head, rx->q_num);
 
 	if (feat & NETIF_F_RXHASH)
-		gve_rx_skb_hash(rx->skb_head, desc, ptype);
+		gve_rx_skb_hash(rx->ctx.skb_head, desc, ptype);
 
 	if (feat & NETIF_F_RXCSUM)
-		gve_rx_skb_csum(rx->skb_head, desc, ptype);
+		gve_rx_skb_csum(rx->ctx.skb_head, desc, ptype);
 
 	/* RSC packets must set gso_size otherwise the TCP stack will complain
 	 * that packets are larger than MTU.
 	 */
 	if (desc->rsc) {
-		err = gve_rx_complete_rsc(rx->skb_head, desc, ptype);
+		err = gve_rx_complete_rsc(rx->ctx.skb_head, desc, ptype);
 		if (err < 0)
 			return err;
 	}
 
-	if (skb_headlen(rx->skb_head) == 0)
+	if (skb_headlen(rx->ctx.skb_head) == 0)
 		napi_gro_frags(napi);
 	else
-		napi_gro_receive(napi, rx->skb_head);
+		napi_gro_receive(napi, rx->ctx.skb_head);
 
 	return 0;
 }
@@ -717,18 +717,18 @@ int gve_rx_poll_dqo(struct gve_notify_block *block, int budget)
 		/* Free running counter of completed descriptors */
 		rx->cnt++;
 
-		if (!rx->skb_head)
+		if (!rx->ctx.skb_head)
 			continue;
 
 		if (!compl_desc->end_of_packet)
 			continue;
 
 		work_done++;
-		pkt_bytes = rx->skb_head->len;
+		pkt_bytes = rx->ctx.skb_head->len;
 		/* The ethernet header (first ETH_HLEN bytes) is snipped off
 		 * by eth_type_trans.
 		 */
-		if (skb_headlen(rx->skb_head))
+		if (skb_headlen(rx->ctx.skb_head))
 			pkt_bytes += ETH_HLEN;
 
 		/* gve_rx_complete_skb() will consume skb if successful */
@@ -741,8 +741,8 @@ int gve_rx_poll_dqo(struct gve_notify_block *block, int budget)
 		}
 
 		bytes += pkt_bytes;
-		rx->skb_head = NULL;
-		rx->skb_tail = NULL;
+		rx->ctx.skb_head = NULL;
+		rx->ctx.skb_tail = NULL;
 	}
 
 	gve_rx_post_buffers_dqo(rx);
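
[ For reference, the invariant the relocated fields maintain, restated
  in one place since the patch touches it across several hunks:
  ctx.skb_head is the skb eventually handed to the stack and
  accumulates len/data_len/truesize for the whole packet, while
  ctx.skb_tail is where the next fragment lands, advancing via
  frag_list when it fills up. Below is a simplified, not drop-in,
  restatement of that chaining step; rx_ctx_append_frag() is a made-up
  name and error/bias handling from gve_rx_append_frags() is elided. ]

#include <linux/skbuff.h>

/* Matches struct gve_rx_ctx added to gve.h by this patch. */
struct gve_rx_ctx {
	struct sk_buff *skb_head;
	struct sk_buff *skb_tail;
};

/* When the tail skb is full (MAX_SKB_FRAGS), link a fresh skb via
 * frag_list and make it the new tail; byte accounting always lands on
 * skb_head so its length stays authoritative for the whole packet.
 */
static int rx_ctx_append_frag(struct gve_rx_ctx *ctx,
			      struct napi_struct *napi, struct page *page,
			      int off, unsigned int len, unsigned int truesize)
{
	int num_frags = skb_shinfo(ctx->skb_tail)->nr_frags;

	if (num_frags == MAX_SKB_FRAGS) {
		struct sk_buff *skb = napi_alloc_skb(napi, 0);

		if (!skb)
			return -1;
		skb_shinfo(ctx->skb_tail)->frag_list = skb;
		ctx->skb_tail = skb;
		num_frags = 0;
	}
	if (ctx->skb_tail != ctx->skb_head) {
		ctx->skb_head->len += len;
		ctx->skb_head->data_len += len;
		ctx->skb_head->truesize += truesize;
	}
	skb_add_rx_frag(ctx->skb_tail, num_frags, page, off, len, truesize);
	return 0;
}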