From patchwork Mon Jul 23 14:36:06 2018
X-Patchwork-Submitter: Toshiaki Makita
X-Patchwork-Id: 947808
X-Patchwork-Delegate: davem@davemloft.net
From: Toshiaki Makita
To: "Michael S. Tsirkin", Jason Wang, "David S. Miller"
Cc: Toshiaki Makita, netdev@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH net-next 3/6] virtio_net: Make drop counter per-queue
Date: Mon, 23 Jul 2018 23:36:06 +0900
Message-Id: <20180723143609.2242-4-toshiaki.makita1@gmail.com>
In-Reply-To: <20180723143609.2242-1-toshiaki.makita1@gmail.com>
References: <20180723143609.2242-1-toshiaki.makita1@gmail.com>
X-Mailer: git-send-email 2.14.3

From: Toshiaki Makita

Since XDP was introduced, the drop counter can be updated much more
frequently than before, because XDP_DROP increments it. A per-queue drop
counter is therefore useful for performance analysis.

Making the counter per-queue also avoids cache contention and a race on
updating it: the update is currently racy because NAPI handlers
read-modify-write the counter without any locks.

There are more racy counters in dev->stats, but I left them per-device
because they are rarely updated and IMHO not worth making per-queue.
Fixing them would require atomic ops or some kind of locking.

Signed-off-by: Toshiaki Makita
---
 drivers/net/virtio_net.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d03bfc4fce8e..7a47ce750a43 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -87,6 +87,7 @@ struct virtnet_sq_stats {
 struct virtnet_rq_stat_items {
 	u64 packets;
 	u64 bytes;
+	u64 drops;
 };
 
 struct virtnet_rq_stats {
@@ -109,6 +110,7 @@ static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
 static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
 	{ "packets",	VIRTNET_RQ_STAT(packets) },
 	{ "bytes",	VIRTNET_RQ_STAT(bytes) },
+	{ "drops",	VIRTNET_RQ_STAT(drops) },
 };
 
 #define VIRTNET_SQ_STATS_LEN	ARRAY_SIZE(virtnet_sq_stats_desc)
@@ -705,7 +707,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
 
 err_xdp:
 	rcu_read_unlock();
-	dev->stats.rx_dropped++;
+	stats->rx.drops++;
 	put_page(page);
 xdp_xmit:
 	return NULL;
@@ -728,7 +730,7 @@ static struct sk_buff *receive_big(struct net_device *dev,
 	return skb;
 
 err:
-	dev->stats.rx_dropped++;
+	stats->rx.drops++;
 	give_pages(rq, page);
 	return NULL;
 }
@@ -952,7 +954,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		put_page(page);
 	}
 err_buf:
-	dev->stats.rx_dropped++;
+	stats->rx.drops++;
 	dev_kfree_skb(head_skb);
 xdp_xmit:
 	return NULL;
@@ -1632,7 +1634,7 @@ static void virtnet_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		u64 tpackets, tbytes, rpackets, rbytes;
+		u64 tpackets, tbytes, rpackets, rbytes, rdrops;
 		struct receive_queue *rq = &vi->rq[i];
 		struct send_queue *sq = &vi->sq[i];
 
@@ -1646,17 +1648,18 @@ static void virtnet_stats(struct net_device *dev,
 			start = u64_stats_fetch_begin_irq(&rq->stats.syncp);
 			rpackets = rq->stats.items.packets;
 			rbytes = rq->stats.items.bytes;
+			rdrops = rq->stats.items.drops;
 		} while (u64_stats_fetch_retry_irq(&rq->stats.syncp, start));
 
 		tot->rx_packets += rpackets;
 		tot->tx_packets += tpackets;
 		tot->rx_bytes += rbytes;
 		tot->tx_bytes += tbytes;
+		tot->rx_dropped += rdrops;
 	}
 
 	tot->tx_dropped = dev->stats.tx_dropped;
 	tot->tx_fifo_errors = dev->stats.tx_fifo_errors;
-	tot->rx_dropped = dev->stats.rx_dropped;
 	tot->rx_length_errors = dev->stats.rx_length_errors;
 	tot->rx_frame_errors = dev->stats.rx_frame_errors;
 }
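
For anyone who wants to poke at the counter scheme outside the kernel,
below is a minimal, self-contained C sketch of the pattern the hunks
above rely on: each queue's counters are written locklessly by their
single writer (that queue's NAPI handler), and a reader aggregates all
queues under a seqcount-style retry loop, the way
u64_stats_fetch_begin_irq()/u64_stats_fetch_retry_irq() are used in
virtnet_stats(). All names in the sketch (rq_stats, rq_drop,
fetch_begin, fetch_retry) are invented for the illustration; only the
scheme mirrors the kernel's u64_stats_sync API.

/*
 * Illustrative sketch, not kernel code: per-queue counters with a
 * seqcount-style reader.  The demo runs single-threaded; a truly
 * concurrent userspace port would need explicit fences around the
 * non-atomic data writes.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_QUEUES 4

struct rq_stats {
	atomic_uint seq;	/* even: stable, odd: write in progress */
	uint64_t packets;
	uint64_t bytes;
	uint64_t drops;
};

static struct rq_stats rq[MAX_QUEUES];

/* Writer side: only this queue's handler runs it, so no lock needed. */
static void rq_drop(struct rq_stats *s)
{
	atomic_fetch_add(&s->seq, 1);	/* mark write in progress */
	s->drops++;			/* plain 64-bit RMW, single writer */
	atomic_fetch_add(&s->seq, 1);	/* mark stable again */
}

/* Reader side, shaped like u64_stats_fetch_begin_irq(). */
static unsigned int fetch_begin(struct rq_stats *s)
{
	unsigned int seq;

	while ((seq = atomic_load(&s->seq)) & 1)
		;			/* writer active, spin */
	return seq;
}

/* Reader side, shaped like u64_stats_fetch_retry_irq(). */
static int fetch_retry(struct rq_stats *s, unsigned int seq)
{
	return atomic_load(&s->seq) != seq;
}

int main(void)
{
	uint64_t tot_drops = 0;
	int i;

	rq_drop(&rq[0]);
	rq_drop(&rq[0]);
	rq_drop(&rq[2]);

	/* Aggregation loop, shaped like the one in virtnet_stats(). */
	for (i = 0; i < MAX_QUEUES; i++) {
		unsigned int seq;
		uint64_t drops;

		do {
			seq = fetch_begin(&rq[i]);
			drops = rq[i].drops;
		} while (fetch_retry(&rq[i], seq));

		tot_drops += drops;
	}

	printf("rx_dropped = %llu\n", (unsigned long long)tot_drops);
	return 0;
}

Note that on 64-bit kernels u64_stats_sync compiles to nothing, since
64-bit loads and stores are already atomic there; the sequence counter
only does real work on 32-bit SMP. That is also why the plain
stats->rx.drops++ in the receive paths above is safe: each receive
queue's NAPI handler is the sole writer of its own counters.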