From patchwork Thu Sep 17 06:49:01 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 1365909
X-Patchwork-Delegate: davem@davemloft.net
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, petrm@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 1/3] mlxsw: spectrum_buffers: Support two headroom modes
Date: Thu, 17 Sep 2020 09:49:01 +0300
Message-Id: <20200917064903.260700-2-idosch@idosch.org>
In-Reply-To: <20200917064903.260700-1-idosch@idosch.org>
References: <20200917064903.260700-1-idosch@idosch.org>
From: Petr Machata

There are two interfaces to configure ETS: qdiscs and DCB. Historically,
DCB ETS configuration was projected to ingress as well, and configured
port buffers. Qdisc was not.

So as not to break clients that today use DCB ETS and PFC and rely on
getting a reasonable ingress buffer priomap, keep the ETS mirroring in
effect. Since qdiscs have not done this mirroring historically, it is
reasonable not to introduce it, but rather permit manual ingress
configuration through dcbnl_setbuffer only in the qdisc mode.

This will require a toggle to indicate whether buffer sizes should be
autocomputed or taken from dcbnl_setbuffer, and likewise for priomaps.
Introduce such and initialize it, and guard port buffer size
configuration as appropriate. The toggle is currently left in the DCB
position. In a following patch, qdisc code will switch it.

Signed-off-by: Petr Machata
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 .../net/ethernet/mellanox/mlxsw/spectrum.h    | 11 ++++++++++
 .../mellanox/mlxsw/spectrum_buffers.c         | 22 ++++++++++++++++---
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index 247a6aebd402..f0a4347acd25 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -434,18 +434,29 @@ struct mlxsw_sp_hdroom_prio {
 	u8 buf_idx;
 	/* Value of buf_idx deduced from the DCB ETS configuration. */
 	u8 ets_buf_idx;
+	/* Value of buf_idx taken from the dcbnl_setbuffer configuration. */
+	u8 set_buf_idx;
 	bool lossy;
 };
 
 struct mlxsw_sp_hdroom_buf {
 	u32 thres_cells;
 	u32 size_cells;
+	/* Size requirement form dcbnl_setbuffer. */
+	u32 set_size_cells;
 	bool lossy;
 };
 
+enum mlxsw_sp_hdroom_mode {
+	MLXSW_SP_HDROOM_MODE_DCB,
+	MLXSW_SP_HDROOM_MODE_TC,
+};
+
 #define MLXSW_SP_PB_COUNT 10
 
 struct mlxsw_sp_hdroom {
+	enum mlxsw_sp_hdroom_mode mode;
+
 	struct {
 		struct mlxsw_sp_hdroom_prio prio[IEEE_8021Q_MAX_PRIORITIES];
 	} prios;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index 68286cd70c33..37ff29a1686e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -304,8 +304,16 @@ void mlxsw_sp_hdroom_prios_reset_buf_idx(struct mlxsw_sp_hdroom *hdroom)
 {
 	int prio;
 
-	for (prio = 0; prio < IEEE_8021QAZ_MAX_TCS; prio++)
-		hdroom->prios.prio[prio].buf_idx = hdroom->prios.prio[prio].ets_buf_idx;
+	for (prio = 0; prio < IEEE_8021QAZ_MAX_TCS; prio++) {
+		switch (hdroom->mode) {
+		case MLXSW_SP_HDROOM_MODE_DCB:
+			hdroom->prios.prio[prio].buf_idx = hdroom->prios.prio[prio].ets_buf_idx;
+			break;
+		case MLXSW_SP_HDROOM_MODE_TC:
+			hdroom->prios.prio[prio].buf_idx = hdroom->prios.prio[prio].set_buf_idx;
+			break;
+		}
+	}
 }
 
 void mlxsw_sp_hdroom_bufs_reset_lossiness(struct mlxsw_sp_hdroom *hdroom)
@@ -411,7 +419,14 @@ void mlxsw_sp_hdroom_bufs_reset_sizes(struct mlxsw_sp_port *mlxsw_sp_port,
 		delay_cells = mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, delay_cells);
 
 		buf->thres_cells = thres_cells;
-		buf->size_cells = thres_cells + delay_cells;
+		if (hdroom->mode == MLXSW_SP_HDROOM_MODE_DCB) {
+			buf->size_cells = thres_cells + delay_cells;
+		} else {
+			/* Do not allow going below the minimum size, even if
+			 * the user requested it.
+			 */
+			buf->size_cells = max(buf->set_size_cells, buf->thres_cells);
+		}
 	}
 }
 
@@ -575,6 +590,7 @@ static int mlxsw_sp_port_headroom_init(struct mlxsw_sp_port *mlxsw_sp_port)
 	int prio;
 
 	hdroom.mtu = mlxsw_sp_port->dev->mtu;
+	hdroom.mode = MLXSW_SP_HDROOM_MODE_DCB;
 
 	for (prio = 0; prio < IEEE_8021QAZ_MAX_TCS; prio++)
 		hdroom.prios.prio[prio].lossy = true;
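For context, the mode toggle added here is consumed by the two recompute
helpers: in the DCB position the priomap and buffer sizes keep being
autocomputed from the ETS/PFC configuration, and in the TC position they are
taken from the dcbnl_setbuffer values stored in set_buf_idx and
set_size_cells. The following patches apply a headroom change by copying the
current hdroom, adjusting it, re-running the recompute helpers and committing
the result. Condensed into a hypothetical helper (illustration only, not part
of the patch; the helper name does not exist in the driver), the sequence
looks roughly like this:

/* Hypothetical helper, not part of the patch: copy the current headroom
 * configuration, flip the mode, recompute the derived fields and commit.
 * This is the calling sequence the following patches use inline.
 */
static int mlxsw_sp_hdroom_switch_mode(struct mlxsw_sp_port *mlxsw_sp_port,
				       enum mlxsw_sp_hdroom_mode mode)
{
	struct mlxsw_sp_hdroom hdroom = *mlxsw_sp_port->hdroom;

	hdroom.mode = mode;
	mlxsw_sp_hdroom_prios_reset_buf_idx(&hdroom);	/* priomap per mode */
	mlxsw_sp_hdroom_bufs_reset_lossiness(&hdroom);	/* buffer lossiness per priomap/PFC */
	mlxsw_sp_hdroom_bufs_reset_sizes(mlxsw_sp_port, &hdroom); /* sizes per mode */
	return mlxsw_sp_hdroom_configure(mlxsw_sp_port, &hdroom);
}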
From patchwork Thu Sep 17 06:49:02 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 1365908
X-Patchwork-Delegate: davem@davemloft.net
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, petrm@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 2/3] mlxsw: spectrum_dcb: Implement dcbnl_setbuffer / getbuffer
Date: Thu, 17 Sep 2020 09:49:02 +0300
Message-Id: <20200917064903.260700-3-idosch@idosch.org>
In-Reply-To: <20200917064903.260700-1-idosch@idosch.org>
References: <20200917064903.260700-1-idosch@idosch.org>

From: Petr Machata

Add dcbnl_setbuffer, which bounces requests if a headroom is in DCB mode.
Implement dcbnl_getbuffer such that it can always be used to determine
port-buffer configuration, regardless of headroom mode.

Signed-off-by: Petr Machata
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 .../ethernet/mellanox/mlxsw/spectrum_dcb.c    | 59 +++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
index d9a556dbe85e..5f92b1691360 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
@@ -592,6 +592,62 @@ static int mlxsw_sp_dcbnl_ieee_setpfc(struct net_device *dev,
 	return err;
 }
 
+static int mlxsw_sp_dcbnl_getbuffer(struct net_device *dev, struct dcbnl_buffer *buf)
+{
+	struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+	struct mlxsw_sp_hdroom *hdroom = mlxsw_sp_port->hdroom;
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+	int prio;
+	int i;
+
+	buf->total_size = 0;
+
+	BUILD_BUG_ON(DCBX_MAX_BUFFERS > MLXSW_SP_PB_COUNT);
+	for (i = 0; i < MLXSW_SP_PB_COUNT; i++) {
+		u32 bytes = mlxsw_sp_cells_bytes(mlxsw_sp, hdroom->bufs.buf[i].size_cells);
+
+		if (i < DCBX_MAX_BUFFERS)
+			buf->buffer_size[i] = bytes;
+		buf->total_size += bytes;
+	}
+
+	buf->total_size += mlxsw_sp_cells_bytes(mlxsw_sp, hdroom->int_buf.size_cells);
+
+	for (prio = 0; prio < IEEE_8021Q_MAX_PRIORITIES; prio++)
+		buf->prio2buffer[prio] = hdroom->prios.prio[prio].buf_idx;
+
+	return 0;
+}
+
+static int mlxsw_sp_dcbnl_setbuffer(struct net_device *dev, struct dcbnl_buffer *buf)
+{
+	struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+	struct mlxsw_sp_hdroom hdroom;
+	int prio;
+	int i;
+
+	hdroom = *mlxsw_sp_port->hdroom;
+
+	if (hdroom.mode != MLXSW_SP_HDROOM_MODE_TC) {
+		netdev_err(dev, "The use of dcbnl_setbuffer is only allowed if egress is configured using TC\n");
+		return -EINVAL;
+	}
+
+	for (prio = 0; prio < IEEE_8021Q_MAX_PRIORITIES; prio++)
+		hdroom.prios.prio[prio].set_buf_idx = buf->prio2buffer[prio];
+
+	BUILD_BUG_ON(DCBX_MAX_BUFFERS > MLXSW_SP_PB_COUNT);
+	for (i = 0; i < DCBX_MAX_BUFFERS; i++)
+		hdroom.bufs.buf[i].set_size_cells = mlxsw_sp_bytes_cells(mlxsw_sp,
+									 buf->buffer_size[i]);
+
+	mlxsw_sp_hdroom_prios_reset_buf_idx(&hdroom);
+	mlxsw_sp_hdroom_bufs_reset_lossiness(&hdroom);
+	mlxsw_sp_hdroom_bufs_reset_sizes(mlxsw_sp_port, &hdroom);
+	return mlxsw_sp_hdroom_configure(mlxsw_sp_port, &hdroom);
+}
+
 static const struct dcbnl_rtnl_ops mlxsw_sp_dcbnl_ops = {
 	.ieee_getets		= mlxsw_sp_dcbnl_ieee_getets,
 	.ieee_setets		= mlxsw_sp_dcbnl_ieee_setets,
@@ -604,6 +660,9 @@ static const struct dcbnl_rtnl_ops mlxsw_sp_dcbnl_ops = {
 
 	.getdcbx		= mlxsw_sp_dcbnl_getdcbx,
 	.setdcbx		= mlxsw_sp_dcbnl_setdcbx,
+
+	.dcbnl_getbuffer	= mlxsw_sp_dcbnl_getbuffer,
+	.dcbnl_setbuffer	= mlxsw_sp_dcbnl_setbuffer,
 };
 
 static int mlxsw_sp_port_ets_init(struct mlxsw_sp_port *mlxsw_sp_port)
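For context on how these callbacks are reached: user space programs the buffer
table over the DCB netlink family, by sending an RTM_SETDCB / DCB_CMD_IEEE_SET
message that carries a struct dcbnl_buffer in the DCB_ATTR_DCB_BUFFER
attribute, the same structure whose prio2buffer, buffer_size and total_size
fields the driver fills in above. Below is a minimal user-space sketch,
assuming libmnl and the uapi definitions from <linux/dcbnl.h>; the function
name is illustrative and error handling is trimmed:

/* User-space sketch (not kernel code): program a port's buffer table via the
 * DCB netlink interface, which lands in the driver's dcbnl_setbuffer op.
 * Assumes libmnl; attribute and structure names come from <linux/dcbnl.h>.
 * A real program would also read back and check the netlink ACK.
 */
#include <libmnl/libmnl.h>
#include <linux/dcbnl.h>
#include <linux/rtnetlink.h>
#include <sys/socket.h>
#include <time.h>

int set_port_buffers(const char *ifname, const struct dcbnl_buffer *dcb_buf)
{
	char msg[MNL_SOCKET_BUFFER_SIZE];
	struct mnl_socket *nl;
	struct nlmsghdr *nlh;
	struct nlattr *ieee;
	struct dcbmsg *dcb;
	int ret = -1;

	/* RTM_SETDCB request with a dcbmsg header selecting DCB_CMD_IEEE_SET. */
	nlh = mnl_nlmsg_put_header(msg);
	nlh->nlmsg_type = RTM_SETDCB;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	nlh->nlmsg_seq = (unsigned int)time(NULL);

	dcb = mnl_nlmsg_put_extra_header(nlh, sizeof(*dcb));
	dcb->dcb_family = AF_UNSPEC;
	dcb->cmd = DCB_CMD_IEEE_SET;

	/* Target port, then the buffer object nested under DCB_ATTR_IEEE. */
	mnl_attr_put_strz(nlh, DCB_ATTR_IFNAME, ifname);
	ieee = mnl_attr_nest_start(nlh, DCB_ATTR_IEEE);
	mnl_attr_put(nlh, DCB_ATTR_DCB_BUFFER, sizeof(*dcb_buf), dcb_buf);
	mnl_attr_nest_end(nlh, ieee);

	nl = mnl_socket_open(NETLINK_ROUTE);
	if (!nl)
		return -1;
	if (mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) >= 0 &&
	    mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) >= 0)
		ret = 0;
	mnl_socket_close(nl);
	return ret;
}

With this series applied, such a request is only accepted once the port's
headroom is in TC mode; otherwise mlxsw_sp_dcbnl_setbuffer() above bounces it
with -EINVAL.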
From patchwork Thu Sep 17 06:49:03 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 1365910
X-Patchwork-Delegate: davem@davemloft.net
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, petrm@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 3/3] mlxsw: spectrum_qdisc: Disable port buffer autoresize with qdiscs
Date: Thu, 17 Sep 2020 09:49:03 +0300
Message-Id: <20200917064903.260700-4-idosch@idosch.org>
In-Reply-To: <20200917064903.260700-1-idosch@idosch.org>
References: <20200917064903.260700-1-idosch@idosch.org>
From: Petr Machata

There are two interfaces to configure ETS: qdiscs and DCB. Historically,
DCB ETS configuration was projected to ingress as well, and configured
port buffers. Qdisc was not.

Keep qdiscs behaving this way, and if an offloaded qdisc is configured on
a port, move this port's headroom to a manual mode, thus allowing
configuration of port buffers through dcbnl_setbuffer.

Signed-off-by: Petr Machata
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 .../ethernet/mellanox/mlxsw/spectrum_qdisc.c  | 34 ++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c
index 964fd444bb10..fd672c6c9133 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c
@@ -140,18 +140,31 @@ static int
 mlxsw_sp_qdisc_destroy(struct mlxsw_sp_port *mlxsw_sp_port,
 		       struct mlxsw_sp_qdisc *mlxsw_sp_qdisc)
 {
+	struct mlxsw_sp_qdisc *root_qdisc = &mlxsw_sp_port->qdisc->root_qdisc;
+	int err_hdroom = 0;
 	int err = 0;
 
 	if (!mlxsw_sp_qdisc)
 		return 0;
 
+	if (root_qdisc == mlxsw_sp_qdisc) {
+		struct mlxsw_sp_hdroom hdroom = *mlxsw_sp_port->hdroom;
+
+		hdroom.mode = MLXSW_SP_HDROOM_MODE_DCB;
+		mlxsw_sp_hdroom_prios_reset_buf_idx(&hdroom);
+		mlxsw_sp_hdroom_bufs_reset_lossiness(&hdroom);
+		mlxsw_sp_hdroom_bufs_reset_sizes(mlxsw_sp_port, &hdroom);
+		err_hdroom = mlxsw_sp_hdroom_configure(mlxsw_sp_port, &hdroom);
+	}
+
 	if (mlxsw_sp_qdisc->ops && mlxsw_sp_qdisc->ops->destroy)
 		err = mlxsw_sp_qdisc->ops->destroy(mlxsw_sp_port,
 						   mlxsw_sp_qdisc);
 
 	mlxsw_sp_qdisc->handle = TC_H_UNSPEC;
 	mlxsw_sp_qdisc->ops = NULL;
-	return err;
+
+	return err_hdroom ?: err;
 }
 
 static int
@@ -159,6 +172,8 @@ mlxsw_sp_qdisc_replace(struct mlxsw_sp_port *mlxsw_sp_port, u32 handle,
 		       struct mlxsw_sp_qdisc *mlxsw_sp_qdisc,
 		       struct mlxsw_sp_qdisc_ops *ops, void *params)
 {
+	struct mlxsw_sp_qdisc *root_qdisc = &mlxsw_sp_port->qdisc->root_qdisc;
+	struct mlxsw_sp_hdroom orig_hdroom;
 	int err;
 
 	if (mlxsw_sp_qdisc->ops && mlxsw_sp_qdisc->ops->type != ops->type)
@@ -168,6 +183,21 @@ mlxsw_sp_qdisc_replace(struct mlxsw_sp_port *mlxsw_sp_port, u32 handle,
 		 * new one.
 		 */
 		mlxsw_sp_qdisc_destroy(mlxsw_sp_port, mlxsw_sp_qdisc);
+
+	orig_hdroom = *mlxsw_sp_port->hdroom;
+	if (root_qdisc == mlxsw_sp_qdisc) {
+		struct mlxsw_sp_hdroom hdroom = orig_hdroom;
+
+		hdroom.mode = MLXSW_SP_HDROOM_MODE_TC;
+		mlxsw_sp_hdroom_prios_reset_buf_idx(&hdroom);
+		mlxsw_sp_hdroom_bufs_reset_lossiness(&hdroom);
+		mlxsw_sp_hdroom_bufs_reset_sizes(mlxsw_sp_port, &hdroom);
+
+		err = mlxsw_sp_hdroom_configure(mlxsw_sp_port, &hdroom);
+		if (err)
+			goto err_hdroom_configure;
+	}
+
 	err = ops->check_params(mlxsw_sp_port, mlxsw_sp_qdisc, params);
 	if (err)
 		goto err_bad_param;
@@ -191,6 +221,8 @@ mlxsw_sp_qdisc_replace(struct mlxsw_sp_port *mlxsw_sp_port, u32 handle,
 
 err_bad_param:
 err_config:
+	mlxsw_sp_hdroom_configure(mlxsw_sp_port, &orig_hdroom);
+err_hdroom_configure:
 	if (mlxsw_sp_qdisc->handle == handle && ops->unoffload)
 		ops->unoffload(mlxsw_sp_port, mlxsw_sp_qdisc, params);
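Taken together, the series gives the following user-visible flow: replacing
the port's root qdisc with an offloaded qdisc (for example an ETS root qdisc
set up with tc, roughly "tc qdisc replace dev swp1 root handle 1: ets strict 8
priomap 7 6 5 4 3 2 1 0"; the exact command is only an example) switches the
headroom into TC mode, after which dcbnl_setbuffer requests are accepted;
deleting the root qdisc returns the port to DCB mode with autocomputed buffer
sizes. A hypothetical end-to-end call of the user-space set_port_buffers()
sketch shown after patch 2/3, with arbitrary example values and the interface
name swp1 assumed:

/* Hypothetical usage of the set_port_buffers() sketch shown after patch 2/3.
 * The priority mapping and buffer sizes below are arbitrary illustration
 * values, not recommendations.
 */
#include <linux/dcbnl.h>

int set_port_buffers(const char *ifname, const struct dcbnl_buffer *dcb_buf);

int main(void)
{
	struct dcbnl_buffer dcb_buf = {
		/* Priorities 0-2 -> buffer 0, 3-5 -> buffer 1, 6-7 -> buffer 2. */
		.prio2buffer = { 0, 0, 0, 1, 1, 1, 2, 2 },
		/* Requested sizes in bytes; the driver rounds them to cells and
		 * will not go below the threshold computed for lossless buffers.
		 */
		.buffer_size = { [0] = 200 * 1024, [1] = 200 * 1024, [2] = 96 * 1024 },
	};

	return set_port_buffers("swp1", &dcb_buf) ? 1 : 0;
}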