From patchwork Thu Jun 6 08:50:27 2019
X-Patchwork-Submitter: Ioana Radulescu
X-Patchwork-Id: 1110960
X-Patchwork-Delegate: davem@davemloft.net
From: Ioana Radulescu
To: netdev@vger.kernel.org, davem@davemloft.net
Cc: ioana.ciornei@nxp.com
Subject: [PATCH net-next v2 1/3] dpaa2-eth: Refactor xps code
Date: Thu, 6 Jun 2019 11:50:27 +0300
Message-Id: <1559811029-28002-2-git-send-email-ruxandra.radulescu@nxp.com>
In-Reply-To: <1559811029-28002-1-git-send-email-ruxandra.radulescu@nxp.com>
References: <1559811029-28002-1-git-send-email-ruxandra.radulescu@nxp.com>

Move the code configuring xps on the netdev TX queues to a separate
function. A subsequent patch will need to call this in another context
as well.
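For context, configuring XPS here amounts to handing the stack a per-queue
CPU hint. A minimal sketch of that pattern follows (illustrative helper
name, kernel context assumed; the actual driver code is in the diff below):

        #include <linux/cpumask.h>
        #include <linux/netdevice.h>

        /* Pin netdev Tx queue 'txq_idx' to a single CPU so the stack
         * prefers that CPU when submitting packets on this queue.
         */
        static int pin_txq_to_cpu(struct net_device *net_dev, u16 txq_idx,
                                  int cpu)
        {
                struct cpumask mask;

                cpumask_clear(&mask);
                cpumask_set_cpu(cpu, &mask);

                /* netif_set_xps_queue() records the CPU->queue affinity */
                return netif_set_xps_queue(net_dev, &mask, txq_idx);
        }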
Signed-off-by: Ioana Radulescu
---
v2: no changes

 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c | 45 +++++++++++++++++-------
 1 file changed, 32 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index 753957e..a12fc45 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -1872,6 +1872,35 @@ static int dpaa2_eth_xdp_xmit(struct net_device *net_dev, int n,
 	return n - drops;
 }
 
+static int update_xps(struct dpaa2_eth_priv *priv)
+{
+	struct net_device *net_dev = priv->net_dev;
+	struct cpumask xps_mask;
+	struct dpaa2_eth_fq *fq;
+	int i, num_queues;
+	int err = 0;
+
+	num_queues = dpaa2_eth_queue_count(priv);
+
+	/* The first entries in priv->fq array are Tx/Tx conf
+	 * queues, so only process those
+	 */
+	for (i = 0; i < num_queues; i++) {
+		fq = &priv->fq[i];
+
+		cpumask_clear(&xps_mask);
+		cpumask_set_cpu(fq->target_cpu, &xps_mask);
+
+		err = netif_set_xps_queue(net_dev, &xps_mask, i);
+		if (err) {
+			netdev_warn_once(net_dev, "Error setting XPS queue\n");
+			break;
+		}
+	}
+
+	return err;
+}
+
 static const struct net_device_ops dpaa2_eth_ops = {
 	.ndo_open = dpaa2_eth_open,
 	.ndo_start_xmit = dpaa2_eth_tx,
@@ -2138,10 +2167,9 @@ static struct dpaa2_eth_channel *get_affine_channel(struct dpaa2_eth_priv *priv,
 static void set_fq_affinity(struct dpaa2_eth_priv *priv)
 {
 	struct device *dev = priv->net_dev->dev.parent;
-	struct cpumask xps_mask;
 	struct dpaa2_eth_fq *fq;
 	int rx_cpu, txc_cpu;
-	int i, err;
+	int i;
 
 	/* For each FQ, pick one channel/CPU to deliver frames to.
 	 * This may well change at runtime, either through irqbalance or
@@ -2160,17 +2188,6 @@ static void set_fq_affinity(struct dpaa2_eth_priv *priv)
 			break;
 		case DPAA2_TX_CONF_FQ:
 			fq->target_cpu = txc_cpu;
-
-			/* Tell the stack to affine to txc_cpu the Tx queue
-			 * associated with the confirmation one
-			 */
-			cpumask_clear(&xps_mask);
-			cpumask_set_cpu(txc_cpu, &xps_mask);
-			err = netif_set_xps_queue(priv->net_dev, &xps_mask,
-						  fq->flowid);
-			if (err)
-				dev_err(dev, "Error setting XPS queue\n");
-
 			txc_cpu = cpumask_next(txc_cpu, &priv->dpio_cpumask);
 			if (txc_cpu >= nr_cpu_ids)
 				txc_cpu = cpumask_first(&priv->dpio_cpumask);
@@ -2180,6 +2197,8 @@ static void set_fq_affinity(struct dpaa2_eth_priv *priv)
 		}
 		fq->channel = get_affine_channel(priv, fq->target_cpu);
 	}
+
+	update_xps(priv);
 }
 
 static void setup_fqs(struct dpaa2_eth_priv *priv)

From patchwork Thu Jun 6 08:50:28 2019
X-Patchwork-Submitter: Ioana Radulescu
X-Patchwork-Id: 1110959
X-Patchwork-Delegate: davem@davemloft.net
From: Ioana Radulescu
To: netdev@vger.kernel.org, davem@davemloft.net
Cc: ioana.ciornei@nxp.com
Subject: [PATCH net-next v2 2/3] dpaa2-eth: Support multiple traffic classes on Tx
Date: Thu, 6 Jun 2019 11:50:28 +0300
Message-Id: <1559811029-28002-3-git-send-email-ruxandra.radulescu@nxp.com>
In-Reply-To: <1559811029-28002-1-git-send-email-ruxandra.radulescu@nxp.com>
References: <1559811029-28002-1-git-send-email-ruxandra.radulescu@nxp.com>

DPNI objects can have multiple traffic classes, as reflected by the
num_tc attribute. Until now we ignored its value and only used traffic
class 0.

This patch adds support for multiple Tx traffic classes; the skb
priority information received from the stack is used to select the
hardware Tx queue on which to enqueue the frame.

Signed-off-by: Ioana Radulescu
---
v2: Extra processing on the fast path happens only when TC is used

 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c | 47 ++++++++++++++++--------
 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h |  9 ++++-
 2 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index a12fc45..98de092 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -757,6 +757,7 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
 	u16 queue_mapping;
 	unsigned int needed_headroom;
 	u32 fd_len;
+	u8 prio = 0;
 	int err, i;
 
 	percpu_stats = this_cpu_ptr(priv->percpu_stats);
@@ -814,6 +815,18 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
 	 * a queue affined to the same core that processed the Rx frame
 	 */
 	queue_mapping = skb_get_queue_mapping(skb);
+
+	if (net_dev->num_tc) {
+		prio = netdev_txq_to_tc(net_dev, queue_mapping);
+		/* Hardware interprets priority level 0 as being the highest,
+		 * so we need to do a reverse mapping to the netdev tc index
+		 */
+		prio = net_dev->num_tc - prio - 1;
+		/* We have only one FQ array entry for all Tx hardware queues
+		 * with the same flow id (but different priority levels)
+		 */
+		queue_mapping %= dpaa2_eth_queue_count(priv);
+	}
 	fq = &priv->fq[queue_mapping];
 
 	fd_len = dpaa2_fd_get_len(&fd);
@@ -824,7 +837,7 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
 	 * the Tx confirmation callback for this frame
 	 */
 	for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
-		err = priv->enqueue(priv, fq, &fd, 0);
+		err = priv->enqueue(priv, fq, &fd, prio);
 		if (err != -EBUSY)
 			break;
 	}
@@ -1877,16 +1890,17 @@ static int update_xps(struct dpaa2_eth_priv *priv)
 	struct net_device *net_dev = priv->net_dev;
 	struct cpumask xps_mask;
 	struct dpaa2_eth_fq *fq;
-	int i, num_queues;
+	int i, num_queues, netdev_queues;
 	int err = 0;
 
 	num_queues = dpaa2_eth_queue_count(priv);
+	netdev_queues = (net_dev->num_tc ? : 1) * num_queues;
 
 	/* The first entries in priv->fq array are Tx/Tx conf
 	 * queues, so only process those
 	 */
-	for (i = 0; i < num_queues; i++) {
-		fq = &priv->fq[i];
+	for (i = 0; i < netdev_queues; i++) {
+		fq = &priv->fq[i % num_queues];
 
 		cpumask_clear(&xps_mask);
 		cpumask_set_cpu(fq->target_cpu, &xps_mask);
@@ -2380,11 +2394,10 @@ static inline int dpaa2_eth_enqueue_qd(struct dpaa2_eth_priv *priv,
 
 static inline int dpaa2_eth_enqueue_fq(struct dpaa2_eth_priv *priv,
 				       struct dpaa2_eth_fq *fq,
-				       struct dpaa2_fd *fd,
-				       u8 prio __always_unused)
+				       struct dpaa2_fd *fd, u8 prio)
 {
 	return dpaa2_io_service_enqueue_fq(fq->channel->dpio,
-					   fq->tx_fqid, fd);
+					   fq->tx_fqid[prio], fd);
 }
 
 static void set_enqueue_mode(struct dpaa2_eth_priv *priv)
@@ -2540,17 +2553,21 @@ static int setup_tx_flow(struct dpaa2_eth_priv *priv,
 	struct device *dev = priv->net_dev->dev.parent;
 	struct dpni_queue queue;
 	struct dpni_queue_id qid;
-	int err;
+	int i, err;
 
-	err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
-			     DPNI_QUEUE_TX, 0, fq->flowid, &queue, &qid);
-	if (err) {
-		dev_err(dev, "dpni_get_queue(TX) failed\n");
-		return err;
+	for (i = 0; i < dpaa2_eth_tc_count(priv); i++) {
+		err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
+				     DPNI_QUEUE_TX, i, fq->flowid,
+				     &queue, &qid);
+		if (err) {
+			dev_err(dev, "dpni_get_queue(TX) failed\n");
+			return err;
+		}
+		fq->tx_fqid[i] = qid.fqid;
 	}
 
+	/* All Tx queues belonging to the same flowid have the same qdbin */
 	fq->tx_qdbin = qid.qdbin;
-	fq->tx_fqid = qid.fqid;
 
 	err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
 			     DPNI_QUEUE_TX_CONFIRM, 0, fq->flowid,
@@ -3250,7 +3267,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
 	dev = &dpni_dev->dev;
 
 	/* Net device */
-	net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_TX_QUEUES);
+	net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_NETDEV_QUEUES);
 	if (!net_dev) {
 		dev_err(dev, "alloc_etherdev_mq() failed\n");
 		return -ENOMEM;
 	}
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
index e180d5a..9af18c2 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
@@ -282,10 +282,13 @@ struct dpaa2_eth_ch_stats {
 };
 
 /* Maximum number of queues associated with a DPNI */
+#define DPAA2_ETH_MAX_TCS		8
 #define DPAA2_ETH_MAX_RX_QUEUES		16
 #define DPAA2_ETH_MAX_TX_QUEUES		16
 #define DPAA2_ETH_MAX_QUEUES		(DPAA2_ETH_MAX_RX_QUEUES + \
 					DPAA2_ETH_MAX_TX_QUEUES)
+#define DPAA2_ETH_MAX_NETDEV_QUEUES	\
+	(DPAA2_ETH_MAX_TX_QUEUES * DPAA2_ETH_MAX_TCS)
 
 #define DPAA2_ETH_MAX_DPCONS		16
 
@@ -299,8 +302,9 @@ struct dpaa2_eth_priv;
 struct dpaa2_eth_fq {
 	u32 fqid;
 	u32 tx_qdbin;
-	u32 tx_fqid;
+	u32 tx_fqid[DPAA2_ETH_MAX_TCS];
 	u16 flowid;
+	u8 tc;
 	int target_cpu;
 	u32 dq_frames;
 	u32 dq_bytes;
@@ -448,6 +452,9 @@ static inline int dpaa2_eth_cmp_dpni_ver(struct dpaa2_eth_priv *priv,
 #define dpaa2_eth_fs_count(priv)	\
 	((priv)->dpni_attrs.fs_entries)
 
+#define dpaa2_eth_tc_count(priv)	\
+	((priv)->dpni_attrs.num_tcs)
+
 /* We have exactly one {Rx, Tx conf} queue per channel */
 #define dpaa2_eth_queue_count(priv)	\
 	((priv)->num_channels)
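To make the Tx fast-path change above easier to follow, here is a worked
example of the queue/priority mapping with hypothetical sizes (8 hardware
Tx flows, 4 traffic classes). It is a standalone sketch that only mirrors
the arithmetic in dpaa2_eth_tx, not driver code:

        #include <stdio.h>

        int main(void)
        {
                /* Hypothetical sizes: dpaa2_eth_queue_count() == 8 and
                 * net_dev->num_tc == 4; the stack picked netdev Tx queue 19
                 * for this skb.
                 */
                int num_queues = 8, num_tc = 4;
                int queue_mapping = 19;

                /* netdev_txq_to_tc(): with the contiguous per-tc layout set
                 * up by patch 3/3, queue 19 belongs to tc 19 / 8 = 2
                 */
                int tc = queue_mapping / num_queues;

                /* Hardware treats priority 0 as highest, so the driver
                 * reverses the tc index: tc 2 -> hardware priority 1
                 */
                int hw_prio = num_tc - tc - 1;

                /* A single priv->fq entry covers all priority levels of a
                 * flow, hence the modulo: queue 19 -> flow 3
                 */
                int flow = queue_mapping % num_queues;

                printf("txq %d -> tc %d, hw prio %d, flow %d\n",
                       queue_mapping, tc, hw_prio, flow);
                return 0;
        }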
From patchwork Thu Jun 6 08:50:29 2019
X-Patchwork-Submitter: Ioana Radulescu
X-Patchwork-Id: 1110961
X-Patchwork-Delegate: davem@davemloft.net
From: Ioana Radulescu
To: netdev@vger.kernel.org, davem@davemloft.net
Cc: ioana.ciornei@nxp.com
Subject: [PATCH net-next v2 3/3] dpaa2-eth: Add mqprio support
Date: Thu, 6 Jun 2019 11:50:29 +0300
Message-Id: <1559811029-28002-4-git-send-email-ruxandra.radulescu@nxp.com>
In-Reply-To: <1559811029-28002-1-git-send-email-ruxandra.radulescu@nxp.com>
References: <1559811029-28002-1-git-send-email-ruxandra.radulescu@nxp.com>

Implement mqprio qdisc support by mapping traffic classes to different
hardware enqueue priorities. The maximum number of supported traffic
classes is an attribute of each DPNI object.

The traffic classes map to hardware priorities from highest (0) to
lowest (highest prio number). The driver assigns num_queues Tx queues
to each traffic class, for a total of num_queues x num_tcs hardware
frame queues.
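As a rough illustration of the queue layout the mqprio offload below
establishes, assuming hypothetical sizes of 8 Tx flows and 4 traffic
classes, this standalone sketch only mirrors the netdev_set_tc_queue()
arithmetic from the diff:

        #include <stdio.h>

        int main(void)
        {
                /* Hypothetical sizes: dpaa2_eth_queue_count() == 8,
                 * mqprio->num_tc == 4
                 */
                int num_queues = 8, num_tc = 4;
                int tc;

                /* Mirrors netdev_set_tc_queue(net_dev, tc, num_queues,
                 * tc * num_queues): each traffic class owns a contiguous
                 * block of netdev Tx queues; higher tc indices map to lower
                 * hardware priority numbers, which the hardware serves first.
                 */
                for (tc = 0; tc < num_tc; tc++)
                        printf("tc %d: netdev txq %2d..%2d -> hw prio %d\n",
                               tc, tc * num_queues,
                               (tc + 1) * num_queues - 1,
                               num_tc - tc - 1);

                printf("real_num_tx_queues = %d\n", num_tc * num_queues);
                return 0;
        }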
Signed-off-by: Ioana Radulescu
Signed-off-by: Bogdan Purcareata
---
v2: no changes

 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c | 43 ++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index 98de092..6f69422 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -1915,6 +1915,48 @@ static int update_xps(struct dpaa2_eth_priv *priv)
 	return err;
 }
 
+static int dpaa2_eth_setup_tc(struct net_device *net_dev,
+			      enum tc_setup_type type, void *type_data)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	struct tc_mqprio_qopt *mqprio = type_data;
+	u8 num_tc, num_queues;
+	int i;
+
+	if (type != TC_SETUP_QDISC_MQPRIO)
+		return -EINVAL;
+
+	mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
+	num_queues = dpaa2_eth_queue_count(priv);
+	num_tc = mqprio->num_tc;
+
+	if (num_tc == net_dev->num_tc)
+		return 0;
+
+	if (num_tc > dpaa2_eth_tc_count(priv)) {
+		netdev_err(net_dev, "Max %d traffic classes supported\n",
+			   dpaa2_eth_tc_count(priv));
+		return -EINVAL;
+	}
+
+	if (!num_tc) {
+		netdev_reset_tc(net_dev);
+		netif_set_real_num_tx_queues(net_dev, num_queues);
+		goto out;
+	}
+
+	netdev_set_num_tc(net_dev, num_tc);
+	netif_set_real_num_tx_queues(net_dev, num_tc * num_queues);
+
+	for (i = 0; i < num_tc; i++)
+		netdev_set_tc_queue(net_dev, i, num_queues, i * num_queues);
+
+out:
+	update_xps(priv);
+
+	return 0;
+}
+
 static const struct net_device_ops dpaa2_eth_ops = {
 	.ndo_open = dpaa2_eth_open,
 	.ndo_start_xmit = dpaa2_eth_tx,
@@ -1927,6 +1969,7 @@ static const struct net_device_ops dpaa2_eth_ops = {
 	.ndo_change_mtu = dpaa2_eth_change_mtu,
 	.ndo_bpf = dpaa2_eth_xdp,
 	.ndo_xdp_xmit = dpaa2_eth_xdp_xmit,
+	.ndo_setup_tc = dpaa2_eth_setup_tc,
 };
 
 static void cdan_cb(struct dpaa2_io_notification_ctx *ctx)