From patchwork Thu Mar 28 13:27:31 2019
X-Patchwork-Id: 1068173
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH mlx5-next 01/12] net/mlx5: E-Switch, don't use hardcoded
 values for FDB prios
Date: Thu, 28 Mar 2019 15:27:31 +0200
Message-Id: <20190328132742.12070-2-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

When creating the FDB prios, use the enum values already defined and not
the hardcoded values.

Signed-off-by: Mark Bloch
Reviewed-by: Maor Gottlieb
Signed-off-by: Leon Romanovsky
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 5 -----
 drivers/net/ethernet/mellanox/mlx5/core/fs_core.c          | 5 +++--
 include/linux/mlx5/fs.h                                    | 5 +++++
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index f2260391be5b..6c8a17ca236e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -43,11 +43,6 @@
 #include "ecpf.h"
 #include "lib/eq.h"
 
-enum {
-	FDB_FAST_PATH = 0,
-	FDB_SLOW_PATH
-};
-
 /* There are two match-all miss flows, one for unicast dst mac and
  * one for multicast.
  */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 0be3eb86dd84..f34515d03a3b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -2517,7 +2517,8 @@ static int init_fdb_root_ns(struct mlx5_flow_steering *steering)
 		return -ENOMEM;
 
 	levels = 2 * FDB_MAX_PRIO * (FDB_MAX_CHAIN + 1);
-	maj_prio = fs_create_prio_chained(&steering->fdb_root_ns->ns, 0,
+	maj_prio = fs_create_prio_chained(&steering->fdb_root_ns->ns,
+					  FDB_FAST_PATH,
 					  levels);
 	if (IS_ERR(maj_prio)) {
 		err = PTR_ERR(maj_prio);
@@ -2542,7 +2543,7 @@ static int init_fdb_root_ns(struct mlx5_flow_steering *steering)
 		steering->fdb_sub_ns[chain] = ns;
 	}
 
-	maj_prio = fs_create_prio(&steering->fdb_root_ns->ns, 1, 1);
+	maj_prio = fs_create_prio(&steering->fdb_root_ns->ns, FDB_SLOW_PATH, 1);
 	if (IS_ERR(maj_prio)) {
 		err = PTR_ERR(maj_prio);
 		goto out_err;
diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
index 9df51da04621..3eeb04154317 100644
--- a/include/linux/mlx5/fs.h
+++ b/include/linux/mlx5/fs.h
@@ -75,6 +75,11 @@ enum mlx5_flow_namespace_type {
 	MLX5_FLOW_NAMESPACE_EGRESS,
 };
 
+enum {
+	FDB_FAST_PATH,
+	FDB_SLOW_PATH,
+};
+
 struct mlx5_flow_table;
 struct mlx5_flow_group;
 struct mlx5_flow_namespace;
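To make the change above concrete, here is a minimal sketch (not taken from
the driver) of the pattern the patch adopts: the prio index passed to the
flow-steering core is a named enum constant, so the declaration order is the
single source of truth for the FDB table layout.

/* Minimal sketch, not driver code: fs_create_prio(ns, FDB_SLOW_PATH, 1)
 * can no longer disagree with the table layout the way a hardcoded "1"
 * could, and inserting a new prio (as the next patch does) renumbers
 * every user at once.
 */
enum {
	FDB_FAST_PATH,	/* previously the hardcoded 0 */
	FDB_SLOW_PATH,	/* previously the hardcoded 1 */
};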
From patchwork Thu Mar 28 13:27:32 2019
X-Patchwork-Id: 1068176
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH mlx5-next 02/12] net/mlx5: E-Switch, add a new prio to be
 used by the RDMA side
Date: Thu, 28 Mar 2019 15:27:32 +0200
Message-Id: <20190328132742.12070-3-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

Create a new prio in the FDB; it will be used when inserting steering rules
into the FDB from the RDMA side. We create a new PRIO so rules from the net
side and rules from the RDMA side won't be inserted into the same PRIO;
each side has its own sandbox to play in.

Signed-off-by: Mark Bloch
Reviewed-by: Maor Gottlieb
Signed-off-by: Leon Romanovsky
---
 drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 7 +++++++
 include/linux/mlx5/fs.h                           | 1 +
 2 files changed, 8 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index f34515d03a3b..f0b460879c91 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -2516,6 +2516,13 @@ static int init_fdb_root_ns(struct mlx5_flow_steering *steering)
 	if (!steering->fdb_sub_ns)
 		return -ENOMEM;
 
+	maj_prio = fs_create_prio(&steering->fdb_root_ns->ns, FDB_BYPASS_PATH,
+				  1);
+	if (IS_ERR(maj_prio)) {
+		err = PTR_ERR(maj_prio);
+		goto out_err;
+	}
+
 	levels = 2 * FDB_MAX_PRIO * (FDB_MAX_CHAIN + 1);
 	maj_prio = fs_create_prio_chained(&steering->fdb_root_ns->ns,
 					  FDB_FAST_PATH,
diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
index 3eeb04154317..fd91df3a4e09 100644
--- a/include/linux/mlx5/fs.h
+++ b/include/linux/mlx5/fs.h
@@ -76,6 +76,7 @@ enum mlx5_flow_namespace_type {
 };
 
 enum {
+	FDB_BYPASS_PATH,
 	FDB_FAST_PATH,
 	FDB_SLOW_PATH,
 };
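The ordering of the enum is what gives the RDMA side its sandbox: prios are
matched in ascending order, so placing FDB_BYPASS_PATH first means
RDMA-inserted rules are evaluated before the netdev offload tables. A
simplified sketch of the resulting creation order in init_fdb_root_ns()
(error handling elided, not a verbatim quote of the function):

/* Simplified sketch of init_fdb_root_ns() after this patch. */
maj_prio = fs_create_prio(&root->ns, FDB_BYPASS_PATH, 1);	/* RDMA rules */
maj_prio = fs_create_prio_chained(&root->ns, FDB_FAST_PATH,
				  2 * FDB_MAX_PRIO * (FDB_MAX_CHAIN + 1));
maj_prio = fs_create_prio(&root->ns, FDB_SLOW_PATH, 1);		/* miss rules */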
From patchwork Thu Mar 28 13:27:33 2019
X-Patchwork-Id: 1068174
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 03/12] RDMA/mlx5: Move netdev info into the port
 struct
Date: Thu, 28 Mar 2019 15:27:33 +0200
Message-Id: <20190328132742.12070-4-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

Netdev info is stored in a separate array that holds data relevant on a
per-port basis; move it to be part of the port struct.

Signed-off-by: Mark Bloch
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/main.c    | 30 ++++++++++++++--------------
 drivers/infiniband/hw/mlx5/mlx5_ib.h | 14 ++++++-------
 drivers/infiniband/hw/mlx5/qp.c      |  2 +-
 3 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index e58ba0851983..a8522a0dedae 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -257,11 +257,11 @@ static struct net_device *mlx5_ib_get_netdev(struct ib_device *device,
 	/* Ensure ndev does not disappear before we invoke dev_hold()
 	 */
-	read_lock(&ibdev->roce[port_num - 1].netdev_lock);
-	ndev = ibdev->roce[port_num - 1].netdev;
+	read_lock(&ibdev->port[port_num - 1].roce.netdev_lock);
+	ndev = ibdev->port[port_num - 1].roce.netdev;
 	if (ndev)
 		dev_hold(ndev);
-	read_unlock(&ibdev->roce[port_num - 1].netdev_lock);
+	read_unlock(&ibdev->port[port_num - 1].roce.netdev_lock);
 
 out:
 	mlx5_ib_put_native_port_mdev(ibdev, port_num);
@@ -1956,7 +1956,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
 		atomic_set(&context->tx_port_affinity,
 			   atomic_add_return(
-				   1, &dev->roce[port].tx_port_affinity));
+				   1, &dev->port[port].roce.tx_port_affinity));
 	}
 
 	return 0;
@@ -5020,10 +5020,10 @@ static int mlx5_add_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num)
 {
 	int err;
 
-	dev->roce[port_num].nb.notifier_call = mlx5_netdev_event;
-	err = register_netdevice_notifier(&dev->roce[port_num].nb);
+	dev->port[port_num].roce.nb.notifier_call = mlx5_netdev_event;
+	err = register_netdevice_notifier(&dev->port[port_num].roce.nb);
 	if (err) {
-		dev->roce[port_num].nb.notifier_call = NULL;
+		dev->port[port_num].roce.nb.notifier_call = NULL;
 		return err;
 	}
 
@@ -5032,9 +5032,9 @@ static int mlx5_add_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num)
 
 static void mlx5_remove_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num)
 {
-	if (dev->roce[port_num].nb.notifier_call) {
-		unregister_netdevice_notifier(&dev->roce[port_num].nb);
-		dev->roce[port_num].nb.notifier_call = NULL;
+	if (dev->port[port_num].roce.nb.notifier_call) {
+		unregister_netdevice_notifier(&dev->port[port_num].roce.nb);
+		dev->port[port_num].roce.nb.notifier_call = NULL;
 	}
 }
 
@@ -5657,7 +5657,7 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev,
 		mlx5_ib_err(ibdev, "Failed to unaffiliate port %u\n",
 			    port_num + 1);
 
-	ibdev->roce[port_num].last_port_state = IB_PORT_DOWN;
+	ibdev->port[port_num].roce.last_port_state = IB_PORT_DOWN;
 }
 
 /* The mlx5_ib_multiport_mutex should be held when calling this function */
@@ -5930,7 +5930,7 @@ int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
 	for (i = 0; i < dev->num_ports; i++) {
 		spin_lock_init(&dev->port[i].mp.mpi_lock);
-		rwlock_init(&dev->roce[i].netdev_lock);
+		rwlock_init(&dev->port[i].roce.netdev_lock);
 	}
 
 	err = mlx5_ib_init_multiport_master(dev);
@@ -6233,9 +6233,9 @@ static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev)
 	int i;
 
 	for (i = 0; i < dev->num_ports; i++) {
-		dev->roce[i].dev = dev;
-		dev->roce[i].native_port_num = i + 1;
-		dev->roce[i].last_port_state = IB_PORT_DOWN;
+		dev->port[i].roce.dev = dev;
+		dev->port[i].roce.native_port_num = i + 1;
+		dev->port[i].roce.last_port_state = IB_PORT_DOWN;
 	}
 
 	dev->ib_dev.uverbs_ex_cmd_mask |=
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 16311cb05cae..662cc73b7b6b 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -706,12 +706,6 @@ struct mlx5_ib_multiport {
 	spinlock_t mpi_lock;
 };
 
-struct mlx5_ib_port {
-	struct mlx5_ib_counters cnts;
-	struct mlx5_ib_multiport mp;
-	struct mlx5_ib_dbg_cc_params *dbg_cc_params;
-};
-
 struct mlx5_roce {
 	/* Protect mlx5_ib_get_netdev from invoking dev_hold() with a NULL
 	 * netdev pointer
@@ -725,6 +719,13 @@ struct mlx5_roce {
 	u8 native_port_num;
 };
 
+struct mlx5_ib_port {
+	struct mlx5_ib_counters cnts;
+	struct mlx5_ib_multiport mp;
+	struct mlx5_ib_dbg_cc_params *dbg_cc_params;
+	struct mlx5_roce roce;
+};
+
 struct mlx5_ib_dbg_param {
 	int offset;
 	struct mlx5_ib_dev *dev;
@@ -909,7 +910,6 @@ struct mlx5_ib_dev {
 	struct ib_device ib_dev;
 	struct mlx5_core_dev *mdev;
 	struct notifier_block mdev_events;
-	struct mlx5_roce roce[MLX5_MAX_PORTS];
 	int num_ports;
 	/* serialize update of capability mask
 	 */
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 9a4fca35875c..d84c7f478513 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3296,7 +3296,7 @@ static unsigned int get_tx_affinity(struct mlx5_ib_dev *dev,
 	} else {
 		tx_port_affinity =
 			(unsigned int)atomic_add_return(
-				1, &dev->roce[port_num].tx_port_affinity) %
+				1, &dev->port[port_num].roce.tx_port_affinity) %
 			MLX5_MAX_PORTS + 1;
 		mlx5_ib_dbg(dev, "Set tx affinity 0x%x to qpn 0x%x\n",
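The net effect on the data layout, as a sketch: instead of a fixed-size
parallel array on the device (struct mlx5_roce roce[MLX5_MAX_PORTS]), the
RoCE state is embedded in the per-port struct, so dev->roce[i] becomes
dev->port[i].roce and all per-port state shares one index space.

/* Sketch of the layout after this patch (member types as in the diff). */
struct mlx5_ib_port {
	struct mlx5_ib_counters cnts;
	struct mlx5_ib_multiport mp;
	struct mlx5_ib_dbg_cc_params *dbg_cc_params;
	struct mlx5_roce roce;	/* was: mlx5_ib_dev.roce[MLX5_MAX_PORTS] */
};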
From patchwork Thu Mar 28 13:27:34 2019
X-Patchwork-Id: 1068175
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 04/12] RDMA/mlx5: Free IB device on remove
Date: Thu, 28 Mar 2019 15:27:34 +0200
Message-Id: <20190328132742.12070-5-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

Simplify the code and move the deallocation of the IB device into the
remove function.

Signed-off-by: Mark Bloch
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/ib_rep.c | 5 +----
 drivers/infiniband/hw/mlx5/main.c   | 4 ++--
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c
index b8639ac71336..87d553396fb4 100644
--- a/drivers/infiniband/hw/mlx5/ib_rep.c
+++ b/drivers/infiniband/hw/mlx5/ib_rep.c
@@ -65,10 +65,8 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
 	ibdev->mdev = dev;
 	ibdev->num_ports = max(MLX5_CAP_GEN(dev, num_ports),
 			       MLX5_CAP_GEN(dev, num_vhca_ports));
-	if (!__mlx5_ib_add(ibdev, profile)) {
-		ib_dealloc_device(&ibdev->ib_dev);
+	if (!__mlx5_ib_add(ibdev, profile))
 		return -EINVAL;
-	}
 
 	rep->rep_if[REP_IB].priv = ibdev;
 
@@ -86,7 +84,6 @@ mlx5_ib_vport_rep_unload(struct mlx5_eswitch_rep *rep)
 	dev = mlx5_ib_rep_to_dev(rep);
 	__mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX);
 	rep->rep_if[REP_IB].priv = NULL;
-	ib_dealloc_device(&dev->ib_dev);
 }
 
 static void *mlx5_ib_vport_get_proto_dev(struct mlx5_eswitch_rep *rep)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index a8522a0dedae..df38e24119ff 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -6491,6 +6491,8 @@ void __mlx5_ib_remove(struct mlx5_ib_dev *dev,
 		if (profile->stage[stage].cleanup)
 			profile->stage[stage].cleanup(dev);
 	}
+
+	ib_dealloc_device(&dev->ib_dev);
 }
 
 void *__mlx5_ib_add(struct mlx5_ib_dev *dev,
@@ -6713,8 +6715,6 @@ static void mlx5_ib_remove(struct mlx5_core_dev *mdev, void *context)
 
 	dev = context;
 	__mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX);
-
-	ib_dealloc_device((struct ib_device *)dev);
 }
 
 static struct mlx5_interface mlx5_ib_interface = {
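The resulting ownership rule, sketched below. This assumes, as the error
path in the rep load code suggests, that __mlx5_ib_add() already unwinds
through __mlx5_ib_remove() when a stage fails, so no caller needs an
explicit dealloc; teardown() is a hypothetical wrapper, not driver code.

/* Sketch: __mlx5_ib_remove() is now the single place that frees the
 * ib_device, so all teardown paths (rep unload, PCI remove, failed add)
 * converge on it and cannot double free or leak.
 */
static void teardown(struct mlx5_ib_dev *dev)	/* hypothetical wrapper */
{
	__mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX);
	/* no ib_dealloc_device() here: __mlx5_ib_remove() already did it */
}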
From patchwork Thu Mar 28 13:27:35 2019
X-Patchwork-Id: 1068177
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 05/12] RDMA/mlx5: Move ports allocation to outside
 of INIT stage
Date: Thu, 28 Mar 2019 15:27:35 +0200
Message-Id: <20190328132742.12070-6-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

In downstream patches we will need access to the ports before running any
stages, in order to set the net device per representor.

Signed-off-by: Mark Bloch
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/ib_rep.c | 12 ++++++++++--
 drivers/infiniband/hw/mlx5/main.c   | 24 ++++++++++++------------
 2 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c
index 87d553396fb4..14ac728b460c 100644
--- a/drivers/infiniband/hw/mlx5/ib_rep.c
+++ b/drivers/infiniband/hw/mlx5/ib_rep.c
@@ -51,6 +51,7 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
 {
 	const struct mlx5_ib_profile *profile;
 	struct mlx5_ib_dev *ibdev;
+	int num_ports = 1;
 
 	if (rep->vport == MLX5_VPORT_UPLINK)
 		profile = &uplink_rep_profile;
@@ -61,10 +62,17 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
 	if (!ibdev)
 		return -ENOMEM;
 
+	ibdev->port = kcalloc(num_ports, sizeof(*ibdev->port),
+			      GFP_KERNEL);
+	if (!ibdev->port) {
+		ib_dealloc_device(&ibdev->ib_dev);
+		return -ENOMEM;
+	}
+
 	ibdev->rep = rep;
 	ibdev->mdev = dev;
-	ibdev->num_ports = max(MLX5_CAP_GEN(dev, num_ports),
-			       MLX5_CAP_GEN(dev, num_vhca_ports));
+	ibdev->num_ports = num_ports;
+
 	if (!__mlx5_ib_add(ibdev, profile))
 		return -EINVAL;
 
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index df38e24119ff..e3483ecc2f98 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -5914,7 +5914,6 @@ void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev)
 		srcu_barrier(&dev->mr_srcu);
 		cleanup_srcu_struct(&dev->mr_srcu);
 	}
-	kfree(dev->port);
 }
 
 int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
@@ -5923,11 +5922,6 @@ int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
 	int err;
 	int i;
 
-	dev->port = kcalloc(dev->num_ports, sizeof(*dev->port),
-			    GFP_KERNEL);
-	if (!dev->port)
-		return -ENOMEM;
-
 	for (i = 0; i < dev->num_ports; i++) {
 		spin_lock_init(&dev->port[i].mp.mpi_lock);
 		rwlock_init(&dev->port[i].roce.netdev_lock);
@@ -5935,7 +5929,7 @@ int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
 
 	err = mlx5_ib_init_multiport_master(dev);
 	if (err)
-		goto err_free_port;
+		return err;
 
 	if (!mlx5_core_mp_enabled(mdev)) {
 		for (i = 1; i <= dev->num_ports; i++) {
@@ -5976,9 +5970,6 @@ int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
 err_mp:
 	mlx5_ib_cleanup_multiport_master(dev);
 
-err_free_port:
-	kfree(dev->port);
-
 	return -ENOMEM;
 }
 
@@ -6492,6 +6483,7 @@ void __mlx5_ib_remove(struct mlx5_ib_dev *dev,
 		profile->stage[stage].cleanup(dev);
 	}
 
+	kfree(dev->port);
 	ib_dealloc_device(&dev->ib_dev);
 }
 
@@ -6667,6 +6659,7 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
 	enum rdma_link_layer ll;
 	struct mlx5_ib_dev *dev;
 	int port_type_cap;
+	int num_ports;
 
 	printk_once(KERN_INFO "%s", mlx5_version);
 
@@ -6682,13 +6675,20 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
 	if (mlx5_core_is_mp_slave(mdev) && ll == IB_LINK_LAYER_ETHERNET)
 		return mlx5_ib_add_slave_port(mdev);
 
+	num_ports = max(MLX5_CAP_GEN(mdev, num_ports),
+			MLX5_CAP_GEN(mdev, num_vhca_ports));
 	dev = ib_alloc_device(mlx5_ib_dev, ib_dev);
 	if (!dev)
 		return NULL;
+	dev->port = kcalloc(num_ports, sizeof(*dev->port),
+			    GFP_KERNEL);
+	if (!dev->port) {
+		ib_dealloc_device((struct ib_device *)dev);
+		return NULL;
+	}
 
 	dev->mdev = mdev;
-	dev->num_ports = max(MLX5_CAP_GEN(mdev, num_ports),
-			     MLX5_CAP_GEN(mdev, num_vhca_ports));
+	dev->num_ports = num_ports;
 
 	return __mlx5_ib_add(dev, &pf_profile);
 }
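After this patch the lifetime of the port array mirrors the ib_device
allocation itself rather than the INIT stage. Roughly (a simplified
composite of the hunks above, not a verbatim quote):

/* Sketch of the new lifetime. */
dev = ib_alloc_device(mlx5_ib_dev, ib_dev);
dev->port = kcalloc(num_ports, sizeof(*dev->port), GFP_KERNEL);
if (!dev->port) {
	ib_dealloc_device(&dev->ib_dev);
	return NULL;
}
/* ... profile stages run and are later cleaned up ... */
kfree(dev->port);		/* in __mlx5_ib_remove(), after the stages */
ib_dealloc_device(&dev->ib_dev);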
From patchwork Thu Mar 28 13:27:36 2019
X-Patchwork-Id: 1068180
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 06/12] RDMA/mlx5: Use correct size for device
 resources
Date: Thu, 28 Mar 2019 15:27:36 +0200
Message-Id: <20190328132742.12070-7-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

On allocation we use the array size, but on destruction we use num_ports;
use the array size on destruction as well. In this context the array
corresponds to the native/actual ports on the NIC, so there is no need to
adjust this logic for representors.

Signed-off-by: Mark Bloch
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/main.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index e3483ecc2f98..14cc4c56e2b1 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -4862,8 +4862,6 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
 
 static void destroy_dev_resources(struct mlx5_ib_resources *devr)
 {
-	struct mlx5_ib_dev *dev =
-		container_of(devr, struct mlx5_ib_dev, devr);
 	int port;
 
 	mlx5_ib_destroy_srq(devr->s1);
@@ -4877,7 +4875,7 @@ static void destroy_dev_resources(struct mlx5_ib_resources *devr)
 	kfree(devr->p0);
 
 	/* Make sure no change P_Key work items are still executing */
-	for (port = 0; port < dev->num_ports; ++port)
+	for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
 		cancel_work_sync(&devr->ports[port].pkey_change_work);
 }
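In other words, teardown should walk exactly the elements that init touched.
A standalone userspace illustration of the idiom (ARRAY_SIZE is defined here
the same way the kernel defines it; the struct is a stand-in, not driver
code):

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct resources {
	int ports[2];	/* stands in for devr->ports[] */
};

int main(void)
{
	struct resources devr = { { 0, 0 } };
	size_t i;

	/* Bounded by the array itself, not by a runtime port count that
	 * may differ (e.g. a representor device with num_ports == 1).
	 */
	for (i = 0; i < ARRAY_SIZE(devr.ports); i++)
		printf("cancel pkey work for slot %zu\n", i);
	return 0;
}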
From patchwork Thu Mar 28 13:27:37 2019
X-Patchwork-Id: 1068178
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 07/12] RDMA/mlx5: Move rep into port struct
Date: Thu, 28 Mar 2019 15:27:37 +0200
Message-Id: <20190328132742.12070-8-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

In preparation for moving to a model of a single IB device with multiple
ports, move rep to be part of the port structure. We mark a representor
device by setting is_rep; there is no functional change with this patch.

Signed-off-by: Mark Bloch
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/devx.c    |  2 +-
 drivers/infiniband/hw/mlx5/flow.c    |  2 +-
 drivers/infiniband/hw/mlx5/ib_rep.c  |  7 ++++---
 drivers/infiniband/hw/mlx5/main.c    | 22 +++++++++++++---------
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  3 ++-
 drivers/infiniband/hw/mlx5/mr.c      |  6 +++---
 drivers/infiniband/hw/mlx5/qp.c      |  4 ++--
 7 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
index 9e08df7914aa..687425ccc870 100644
--- a/drivers/infiniband/hw/mlx5/devx.c
+++ b/drivers/infiniband/hw/mlx5/devx.c
@@ -1900,7 +1900,7 @@ static bool devx_is_supported(struct ib_device *device)
 {
 	struct mlx5_ib_dev *dev = to_mdev(device);
 
-	return !dev->rep && MLX5_CAP_GEN(dev->mdev, log_max_uctx);
+	return !dev->is_rep && MLX5_CAP_GEN(dev->mdev, log_max_uctx);
 }
 
 const struct uapi_definition mlx5_ib_devx_defs[] = {
diff --git a/drivers/infiniband/hw/mlx5/flow.c b/drivers/infiniband/hw/mlx5/flow.c
index 798591a18484..969fafb6ed9b 100644
--- a/drivers/infiniband/hw/mlx5/flow.c
+++ b/drivers/infiniband/hw/mlx5/flow.c
@@ -621,7 +621,7 @@ DECLARE_UVERBS_NAMED_OBJECT(MLX5_IB_OBJECT_FLOW_MATCHER,
 
 static bool flow_is_supported(struct ib_device *device)
 {
-	return !to_mdev(device)->rep;
+	return !to_mdev(device)->is_rep;
 }
 
 const struct uapi_definition mlx5_ib_flow_defs[] = {
diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c
index 14ac728b460c..64256dc1d1de 100644
--- a/drivers/infiniband/hw/mlx5/ib_rep.c
+++ b/drivers/infiniband/hw/mlx5/ib_rep.c
@@ -69,7 +69,8 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
 		return -ENOMEM;
 	}
 
-	ibdev->rep = rep;
+	ibdev->is_rep = true;
+	ibdev->port[0].rep = rep;
 	ibdev->mdev = dev;
 	ibdev->num_ports = num_ports;
 
@@ -151,12 +152,12 @@ int create_flow_rule_vport_sq(struct mlx5_ib_dev *dev,
 	struct mlx5_flow_handle *flow_rule;
 	struct mlx5_eswitch *esw = dev->mdev->priv.eswitch;
 
-	if (!dev->rep)
+	if (!dev->is_rep)
 		return 0;
 
 	flow_rule =
 		mlx5_eswitch_add_send_to_vport_rule(esw,
-						    dev->rep->vport,
+						    dev->port[0].rep->vport,
 						    sq->base.mqp.qpn);
 	if (IS_ERR(flow_rule))
 		return PTR_ERR(flow_rule);
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 14cc4c56e2b1..d9dfc9975164 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -173,12 +173,12 @@ static int mlx5_netdev_event(struct notifier_block *this,
 	switch (event) {
 	case NETDEV_REGISTER:
 		write_lock(&roce->netdev_lock);
-		if (ibdev->rep) {
+		if (ibdev->is_rep) {
 			struct mlx5_eswitch *esw = ibdev->mdev->priv.eswitch;
+			struct mlx5_eswitch_rep *rep = ibdev->port[0].rep;
 			struct net_device *rep_ndev;
 
-			rep_ndev = mlx5_ib_get_rep_netdev(esw,
-							  ibdev->rep->vport);
+			rep_ndev = mlx5_ib_get_rep_netdev(esw, rep->vport);
 			if (rep_ndev == ndev)
 				roce->netdev = ndev;
 		} else if (ndev->dev.parent == &mdev->pdev->dev) {
@@ -3149,10 +3149,10 @@ static struct mlx5_ib_flow_prio *get_flow_table(struct mlx5_ib_dev *dev,
 	if (ft_type == MLX5_IB_FT_RX) {
 		fn_type = MLX5_FLOW_NAMESPACE_BYPASS;
 		prio = &dev->flow_db->prios[priority];
-		if (!dev->rep &&
+		if (!dev->is_rep &&
 		    MLX5_CAP_FLOWTABLE_NIC_RX(dev->mdev, decap))
 			flags |= MLX5_FLOW_TABLE_TUNNEL_EN_DECAP;
-		if (!dev->rep &&
+		if (!dev->is_rep &&
 		    MLX5_CAP_FLOWTABLE_NIC_RX(dev->mdev,
 					      reformat_l3_tunnel_to_l2))
 			flags |= MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
@@ -3162,7 +3162,7 @@ static struct mlx5_ib_flow_prio *get_flow_table(struct mlx5_ib_dev *dev,
 					       log_max_ft_size));
 		fn_type = MLX5_FLOW_NAMESPACE_EGRESS;
 		prio = &dev->flow_db->egress_prios[priority];
-		if (!dev->rep &&
+		if (!dev->is_rep &&
 		    MLX5_CAP_FLOWTABLE_NIC_TX(dev->mdev, reformat))
 			flags |= MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
 	}
@@ -3368,7 +3368,7 @@ static struct mlx5_ib_flow_handler *_create_flow_rule(struct mlx5_ib_dev *dev,
 	if (!is_valid_attr(dev->mdev, flow_attr))
 		return ERR_PTR(-EINVAL);
 
-	if (dev->rep && is_egress)
+	if (dev->is_rep && is_egress)
 		return ERR_PTR(-EINVAL);
 
 	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
@@ -3399,13 +3399,17 @@ static struct mlx5_ib_flow_handler *_create_flow_rule(struct mlx5_ib_dev *dev,
 	if (!flow_is_multicast_only(flow_attr))
 		set_underlay_qp(dev, spec, underlay_qpn);
 
-	if (dev->rep) {
+	if (dev->is_rep) {
 		void *misc;
 
+		if (!dev->port[flow_attr->port - 1].rep) {
+			err = -EINVAL;
+			goto free;
+		}
 		misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
 				    misc_parameters);
 		MLX5_SET(fte_match_set_misc, misc, source_port,
-			 dev->rep->vport);
+			 dev->port[flow_attr->port - 1].rep->vport);
 		misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
 				    misc_parameters);
 		MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 662cc73b7b6b..885f87d0a3f9 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -724,6 +724,7 @@ struct mlx5_ib_port {
 	struct mlx5_ib_multiport mp;
 	struct mlx5_ib_dbg_cc_params *dbg_cc_params;
 	struct mlx5_roce roce;
+	struct mlx5_eswitch_rep *rep;
 };
 
 struct mlx5_ib_dbg_param {
@@ -944,7 +945,7 @@ struct mlx5_ib_dev {
 	struct mlx5_sq_bfreg fp_bfreg;
 	struct mlx5_ib_delay_drop delay_drop;
 	const struct mlx5_ib_profile *profile;
-	struct mlx5_eswitch_rep *rep;
+	bool is_rep;
 	int lag_active;
 
 	struct mlx5_ib_lb_state lb;
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index c85f00255884..02119fe8338f 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -600,7 +600,7 @@ static void clean_keys(struct mlx5_ib_dev *dev, int c)
 
 static void mlx5_mr_cache_debugfs_cleanup(struct mlx5_ib_dev *dev)
 {
-	if (!mlx5_debugfs_root || dev->rep)
+	if (!mlx5_debugfs_root || dev->is_rep)
 		return;
 
 	debugfs_remove_recursive(dev->cache.root);
@@ -614,7 +614,7 @@ static void mlx5_mr_cache_debugfs_init(struct mlx5_ib_dev *dev)
 	struct dentry *dir;
 	int i;
 
-	if (!mlx5_debugfs_root || dev->rep)
+	if (!mlx5_debugfs_root || dev->is_rep)
 		return;
 
 	cache->root = debugfs_create_dir("mr_cache", dev->mdev->priv.dbg_root);
@@ -677,7 +677,7 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
 			   MLX5_IB_UMR_OCTOWORD;
 		ent->access_mode = MLX5_MKC_ACCESS_MODE_MTT;
 		if ((dev->mdev->profile->mask & MLX5_PROF_MASK_MR_CACHE) &&
-		    !dev->rep &&
+		    !dev->is_rep &&
 		    mlx5_core_is_pf(dev->mdev))
 			ent->limit = dev->mdev->profile->mr_cache[i].limit;
 		else
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index d84c7f478513..c597e0b7f904 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1430,7 +1430,7 @@ static int create_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
 	if (*qp_flags_en & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC)
 		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST;
 
-	if (dev->rep) {
+	if (dev->is_rep) {
 		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
 		*qp_flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC;
 	}
@@ -1642,7 +1642,7 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 		return -EOPNOTSUPP;
 	}
 
-	if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC || dev->rep) {
+	if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC || dev->is_rep) {
 		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
 		qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC;
 	}
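With rep now per port, the access pattern callers must follow looks roughly
like this (a sketch; use_vport() is a hypothetical consumer, not a driver
function):

/* Sketch: check the device-level flag, then the per-port rep pointer,
 * before dereferencing -- a port slot may not be bound to a vport.
 */
if (dev->is_rep) {
	struct mlx5_eswitch_rep *rep = dev->port[port_num - 1].rep;

	if (!rep)
		return -EINVAL;
	use_vport(rep->vport);	/* hypothetical consumer */
}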
From patchwork Thu Mar 28 13:27:38 2019
X-Patchwork-Id: 1068184
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 08/12] RDMA/mlx5: Move default representors SQ
 steering to rule to modify QP
Date: Thu, 28 Mar 2019 15:27:38 +0200
Message-Id: <20190328132742.12070-9-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

Currently the steering for SQs created on representors is done at creation
time. Once we move to representors as ports of an IB device, we need the
port argument, which is given only at the modify QP stage; adjust the code
appropriately.

Signed-off-by: Mark Bloch
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/ib_rep.c | 25 +++++++++---------
 drivers/infiniband/hw/mlx5/ib_rep.h | 13 ++++++----
 drivers/infiniband/hw/mlx5/qp.c     | 40 ++++++++++++++++++++---------
 3 files changed, 48 insertions(+), 30 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c
index 64256dc1d1de..d3988f6ae2ae 100644
--- a/drivers/infiniband/hw/mlx5/ib_rep.c
+++ b/drivers/infiniband/hw/mlx5/ib_rep.c
@@ -146,22 +146,21 @@ struct mlx5_eswitch_rep *mlx5_ib_vport_rep(struct mlx5_eswitch *esw, int vport)
 	return mlx5_eswitch_vport_rep(esw, vport);
 }
 
-int create_flow_rule_vport_sq(struct mlx5_ib_dev *dev,
-			      struct mlx5_ib_sq *sq)
+struct mlx5_flow_handle *create_flow_rule_vport_sq(struct mlx5_ib_dev *dev,
+						   struct mlx5_ib_sq *sq,
+						   u16 port)
 {
-	struct mlx5_flow_handle *flow_rule;
 	struct mlx5_eswitch *esw = dev->mdev->priv.eswitch;
+	struct mlx5_eswitch_rep *rep;
 
-	if (!dev->is_rep)
-		return 0;
+	if (!dev->is_rep || !port)
+		return NULL;
 
-	flow_rule =
-		mlx5_eswitch_add_send_to_vport_rule(esw,
-						    dev->port[0].rep->vport,
-						    sq->base.mqp.qpn);
-	if (IS_ERR(flow_rule))
-		return PTR_ERR(flow_rule);
-	sq->flow_rule = flow_rule;
+	if (!dev->port[port - 1].rep)
+		return ERR_PTR(-EINVAL);
 
-	return 0;
+	rep = dev->port[port - 1].rep;
+
+	return mlx5_eswitch_add_send_to_vport_rule(esw, rep->vport,
+						   sq->base.mqp.qpn);
 }
diff --git a/drivers/infiniband/hw/mlx5/ib_rep.h b/drivers/infiniband/hw/mlx5/ib_rep.h
index 798d41e61fb4..1d9778da8a50 100644
--- a/drivers/infiniband/hw/mlx5/ib_rep.h
+++ b/drivers/infiniband/hw/mlx5/ib_rep.h
@@ -20,8 +20,9 @@ struct mlx5_eswitch_rep *mlx5_ib_vport_rep(struct mlx5_eswitch *esw,
 					   int vport_index);
 void mlx5_ib_register_vport_reps(struct mlx5_core_dev *mdev);
 void mlx5_ib_unregister_vport_reps(struct mlx5_core_dev *mdev);
-int create_flow_rule_vport_sq(struct mlx5_ib_dev *dev,
-			      struct mlx5_ib_sq *sq);
+struct mlx5_flow_handle *create_flow_rule_vport_sq(struct mlx5_ib_dev *dev,
+						   struct mlx5_ib_sq *sq,
+						   u16 port);
 struct net_device *mlx5_ib_get_rep_netdev(struct mlx5_eswitch *esw,
 					  int vport_index);
 #else /* CONFIG_MLX5_ESWITCH */
@@ -52,10 +53,12 @@ struct mlx5_eswitch_rep *mlx5_ib_vport_rep(struct mlx5_eswitch *esw,
 static inline void mlx5_ib_register_vport_reps(struct mlx5_core_dev *mdev) {}
 static inline void mlx5_ib_unregister_vport_reps(struct mlx5_core_dev *mdev) {}
-static inline int create_flow_rule_vport_sq(struct mlx5_ib_dev *dev,
-					    struct mlx5_ib_sq *sq)
+static inline
+struct mlx5_flow_handle *create_flow_rule_vport_sq(struct mlx5_ib_dev *dev,
+						   struct mlx5_ib_sq *sq,
+						   u16 port)
 {
-	return 0;
+	return NULL;
 }
 
 static inline
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index c597e0b7f904..3ff7c32207b9 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -93,6 +93,7 @@ struct mlx5_modify_raw_qp_param {
 	struct mlx5_rate_limit rl;
 
 	u8 rq_q_ctr_id;
+	u16 port;
 };
 
 static void get_cqs(enum ib_qp_type qp_type,
@@ -1207,11 +1208,11 @@ static void destroy_raw_packet_qp_tis(struct mlx5_ib_dev *dev,
 	mlx5_cmd_destroy_tis(dev->mdev, sq->tisn, to_mpd(pd)->uid);
 }
 
-static void destroy_flow_rule_vport_sq(struct mlx5_ib_dev *dev,
-				       struct mlx5_ib_sq *sq)
+static void destroy_flow_rule_vport_sq(struct mlx5_ib_sq *sq)
 {
 	if (sq->flow_rule)
 		mlx5_del_flow_rules(sq->flow_rule);
+	sq->flow_rule = NULL;
 }
 
 static int create_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
@@ -1279,15 +1280,8 @@ static int create_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
 	if (err)
 		goto err_umem;
 
-	err = create_flow_rule_vport_sq(dev, sq);
-	if (err)
-		goto err_flow;
-
 	return 0;
 
-err_flow:
-	mlx5_core_destroy_sq_tracked(dev->mdev, &sq->base.mqp);
-
 err_umem:
 	ib_umem_release(sq->ubuffer.umem);
 	sq->ubuffer.umem = NULL;
@@ -1298,7 +1292,7 @@ static int create_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
 static void destroy_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
 				     struct mlx5_ib_sq *sq)
 {
-	destroy_flow_rule_vport_sq(dev, sq);
+	destroy_flow_rule_vport_sq(sq);
 	mlx5_core_destroy_sq_tracked(dev->mdev, &sq->base.mqp);
 	ib_umem_release(sq->ubuffer.umem);
 }
@@ -3262,6 +3256,8 @@ static int modify_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	}
 
 	if (modify_sq) {
+		struct mlx5_flow_handle *flow_rule;
+
 		if (tx_affinity) {
 			err = modify_raw_packet_tx_affinity(dev->mdev, sq,
 							    tx_affinity,
@@ -3270,8 +3266,25 @@ static int modify_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 				return err;
 		}
 
-		return modify_raw_packet_qp_sq(dev->mdev, sq, sq_state,
-					       raw_qp_param, qp->ibqp.pd);
+		flow_rule = create_flow_rule_vport_sq(dev, sq,
+						      raw_qp_param->port);
+		if (IS_ERR(flow_rule))
+			return PTR_ERR(flow_rule);
+
+		err = modify_raw_packet_qp_sq(dev->mdev, sq, sq_state,
+					      raw_qp_param, qp->ibqp.pd);
+		if (err) {
+			if (flow_rule)
+				mlx5_del_flow_rules(flow_rule);
+			return err;
+		}
+
+		if (flow_rule) {
+			destroy_flow_rule_vport_sq(sq);
+			sq->flow_rule = flow_rule;
+		}
+
+		return err;
 	}
 
 	return 0;
@@ -3588,6 +3601,9 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
 			raw_qp_param.set_mask |= MLX5_RAW_QP_MOD_SET_RQ_Q_CTR_ID;
 		}
 
+		if (attr_mask & IB_QP_PORT)
+			raw_qp_param.port = attr->port_num;
+
 		if (attr_mask & IB_QP_RATE_LIMIT) {
 			raw_qp_param.rl.rate = attr->rate_limit;
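The modify-QP path now uses a make-before-break sequence, so the SQ is never
left unsteered on failure. A simplified restatement of the
modify_raw_packet_qp() hunk above (pd stands for qp->ibqp.pd):

/* Simplified restatement of the new modify_raw_packet_qp() flow. */
flow_rule = create_flow_rule_vport_sq(dev, sq, raw_qp_param->port);
if (IS_ERR(flow_rule))				/* NULL means "no rule needed" */
	return PTR_ERR(flow_rule);

err = modify_raw_packet_qp_sq(dev->mdev, sq, sq_state, raw_qp_param, pd);
if (err) {
	if (flow_rule)
		mlx5_del_flow_rules(flow_rule);	/* undo the new rule */
	return err;
}

if (flow_rule) {
	destroy_flow_rule_vport_sq(sq);		/* retire the old rule */
	sq->flow_rule = flow_rule;
}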
From patchwork Thu Mar 28 13:27:39 2019
X-Patchwork-Id: 1068182
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 09/12] RDMA/mlx5: Refactor netdev affinity code
Date: Thu, 28 Mar 2019 15:27:39 +0200
Message-Id: <20190328132742.12070-10-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

The design of representors is such that once an IB representor is created,
the netdev of the representor already exists; we can use that fact to
simplify the netdev affinity code.

Signed-off-by: Mark Bloch
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/ib_rep.c |  2 ++
 drivers/infiniband/hw/mlx5/main.c   | 47 +++++++++++++++++++++++------
 2 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c
index d3988f6ae2ae..7946cf26421b 100644
--- a/drivers/infiniband/hw/mlx5/ib_rep.c
+++ b/drivers/infiniband/hw/mlx5/ib_rep.c
@@ -71,6 +71,8 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
 
 	ibdev->is_rep = true;
 	ibdev->port[0].rep = rep;
+	ibdev->port[0].roce.netdev =
+		mlx5_ib_get_rep_netdev(dev->priv.eswitch, rep->vport);
 	ibdev->mdev = dev;
 	ibdev->num_ports = num_ports;
 
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index d9dfc9975164..9ce8ae5565a3 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -156,6 +156,34 @@ static int get_port_state(struct ib_device *ibdev,
 	return ret;
 }
 
+static struct mlx5_roce *mlx5_get_rep_roce(struct mlx5_ib_dev *dev,
+					   struct net_device *ndev,
+					   u8 *port_num)
+{
+	struct mlx5_eswitch *esw = dev->mdev->priv.eswitch;
+	struct net_device *rep_ndev;
+	struct mlx5_ib_port *port;
+	int i;
+
+	for (i = 0; i < dev->num_ports; i++) {
+		port = &dev->port[i];
+		if (!port->rep)
+			continue;
+
+		read_lock(&port->roce.netdev_lock);
+		rep_ndev = mlx5_ib_get_rep_netdev(esw,
+						  port->rep->vport);
+		if (rep_ndev == ndev) {
+			read_unlock(&port->roce.netdev_lock);
+			*port_num = i + 1;
+			return &port->roce;
+		}
+		read_unlock(&port->roce.netdev_lock);
+	}
+
+	return NULL;
+}
+
 static int mlx5_netdev_event(struct notifier_block *this,
 			     unsigned long event, void *ptr)
 {
@@ -172,22 +200,17 @@ static int mlx5_netdev_event(struct notifier_block *this,
 
 	switch (event) {
 	case NETDEV_REGISTER:
+		/* Should already be registered during the load */
+		if (ibdev->is_rep)
+			break;
 		write_lock(&roce->netdev_lock);
-		if (ibdev->is_rep) {
-			struct mlx5_eswitch *esw = ibdev->mdev->priv.eswitch;
-			struct mlx5_eswitch_rep *rep = ibdev->port[0].rep;
-			struct net_device *rep_ndev;
-
-			rep_ndev = mlx5_ib_get_rep_netdev(esw, rep->vport);
-			if (rep_ndev == ndev)
-				roce->netdev = ndev;
-		} else if (ndev->dev.parent == &mdev->pdev->dev) {
+		if (ndev->dev.parent == &mdev->pdev->dev)
 			roce->netdev = ndev;
-		}
 		write_unlock(&roce->netdev_lock);
 		break;
 
 	case NETDEV_UNREGISTER:
+		/* In case of reps, ib device goes away before the netdevs */
 		write_lock(&roce->netdev_lock);
 		if (roce->netdev == ndev)
 			roce->netdev = NULL;
@@ -205,6 +228,10 @@ static int mlx5_netdev_event(struct notifier_block *this,
 			dev_put(lag_ndev);
 		}
 
+		if (ibdev->is_rep)
+			roce = mlx5_get_rep_roce(ibdev, ndev, &port_num);
+		if (!roce)
+			return NOTIFY_DONE;
 		if ((upper == ndev || (!upper && ndev == roce->netdev))
 		    && ibdev->ib_active) {
 			struct ib_event ibev = { };
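Conceptually, the notifier no longer builds the rep-to-netdev mapping; it
only consults the mapping established at load time. A sketch of the
consumer side (handle_port_event() is a hypothetical helper; the real
notifier handles more events):

/* Sketch: resolve an event's netdev to per-port RoCE state. */
static int handle_port_event(struct mlx5_ib_dev *ibdev,
			     struct net_device *ndev)
{
	struct mlx5_roce *roce;
	u8 port_num = 0;

	roce = mlx5_get_rep_roce(ibdev, ndev, &port_num);
	if (!roce)
		return NOTIFY_DONE;	/* netdev not bound to any rep port */
	/* ... raise IB_EVENT_PORT_ACTIVE/IB_EVENT_PORT_ERR for port_num ... */
	return NOTIFY_OK;
}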
From patchwork Thu Mar 28 13:27:40 2019
X-Patchwork-Id: 1068190
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Maor Gottlieb, Mark Bloch,
 Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 10/12] RDMA/mlx5: Move SMI caps logic
Date: Thu, 28 Mar 2019 15:27:40 +0200
Message-Id: <20190328132742.12070-11-leon@kernel.org>
In-Reply-To: <20190328132742.12070-1-leon@kernel.org>
References: <20190328132742.12070-1-leon@kernel.org>

From: Mark Bloch

We store the SMI information in the core device's struct. Make sure we set
that information only once (and not per port), and while here base the for
loop on the actual size of the array.

Signed-off-by: Mark Bloch
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/main.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 9ce8ae5565a3..7eca3978c50c 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -4534,7 +4534,7 @@ static int set_has_smi_cap(struct mlx5_ib_dev *dev)
 	int err;
 	int port;
 
-	for (port = 1; port <= dev->num_ports; port++) {
+	for (port = 1; port <= ARRAY_SIZE(dev->mdev->port_caps); port++) {
 		dev->mdev->port_caps[port - 1].has_smi = false;
 		if (MLX5_CAP_GEN(dev->mdev, port_type) ==
 		    MLX5_CAP_PORT_TYPE_IB) {
@@ -4580,10 +4580,6 @@ static int get_port_caps(struct mlx5_ib_dev *dev, u8 port)
 	if (!dprops)
 		goto out;
 
-	err = set_has_smi_cap(dev);
-	if (err)
-		goto out;
-
 	err = mlx5_ib_query_device(&dev->ib_dev, dprops, &uhw);
 	if (err) {
 		mlx5_ib_warn(dev, "query_device failed %d\n", err);
@@ -5960,6 +5956,10 @@ int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev)
 	if (err)
 		return err;
 
+	err = set_has_smi_cap(dev);
+	if (err)
+		return err;
+
 	if (!mlx5_core_mp_enabled(mdev)) {
 		for (i = 1; i <= dev->num_ports; i++) {
 			err = get_port_caps(dev, i);
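The distinction matters because mdev->port_caps[] belongs to the core
device and has a fixed length, while num_ports on a multiport IB device
counts IB ports. Roughly (probe_smi() is a hypothetical stand-in for the
per-port query in set_has_smi_cap()):

/* Sketch: the capability scan is bounded by the fixed core array and
 * runs once per core device, in the INIT stage, instead of once per
 * IB port via get_port_caps().
 */
for (port = 1; port <= ARRAY_SIZE(dev->mdev->port_caps); port++)
	dev->mdev->port_caps[port - 1].has_smi = probe_smi(dev, port);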
<20190328132742.12070-1-leon@kernel.org> References: <20190328132742.12070-1-leon@kernel.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mark Bloch Move from an IB device (representor) per virtual function to a single IB device with a port per virtual function (port 1 represents the uplink). As the number of ports is a static property of an IB device, declare the IB device with as many ports as possible according to the PCI bus. Signed-off-by: Mark Bloch Signed-off-by: Leon Romanovsky --- drivers/infiniband/hw/mlx5/ib_rep.c | 31 +++++++++++++++++++++++----- drivers/infiniband/hw/mlx5/main.c | 26 +++++++++++++++++++---- drivers/infiniband/hw/mlx5/mlx5_ib.h | 1 + 3 files changed, 49 insertions(+), 9 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c index 7946cf26421b..224ef6c88d17 100644 --- a/drivers/infiniband/hw/mlx5/ib_rep.c +++ b/drivers/infiniband/hw/mlx5/ib_rep.c @@ -46,17 +46,36 @@ static const struct mlx5_ib_profile vf_rep_profile = { NULL), }; +static int +mlx5_ib_set_vport_rep(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep) +{ + struct mlx5_ib_dev *ibdev; + int vport_index; + + ibdev = mlx5_ib_get_uplink_ibdev(dev->priv.eswitch); + vport_index = ibdev->free_port++; + + ibdev->port[vport_index].rep = rep; + write_lock(&ibdev->port[vport_index].roce.netdev_lock); + ibdev->port[vport_index].roce.netdev = + mlx5_ib_get_rep_netdev(dev->priv.eswitch, rep->vport); + write_unlock(&ibdev->port[vport_index].roce.netdev_lock); + + return 0; +} + static int mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep) { + int num_ports = MLX5_TOTAL_VPORTS(dev); const struct mlx5_ib_profile *profile; struct mlx5_ib_dev *ibdev; - int num_ports = 1; + int vport_index; if (rep->vport == MLX5_VPORT_UPLINK) profile = &uplink_rep_profile; else - profile = &vf_rep_profile; + return mlx5_ib_set_vport_rep(dev, rep); ibdev = ib_alloc_device(mlx5_ib_dev, ib_dev); if (!ibdev) @@ -70,8 +89,9 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep) } ibdev->is_rep = true; - ibdev->port[0].rep = rep; - ibdev->port[0].roce.netdev = + vport_index = ibdev->free_port++; + ibdev->port[vport_index].rep = rep; + ibdev->port[vport_index].roce.netdev = mlx5_ib_get_rep_netdev(dev->priv.eswitch, rep->vport); ibdev->mdev = dev; ibdev->num_ports = num_ports; @@ -89,7 +109,8 @@ mlx5_ib_vport_rep_unload(struct mlx5_eswitch_rep *rep) { struct mlx5_ib_dev *dev; - if (!rep->rep_if[REP_IB].priv) + if (!rep->rep_if[REP_IB].priv || + rep->vport != MLX5_VPORT_UPLINK) return; dev = mlx5_ib_rep_to_dev(rep); diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 7eca3978c50c..57d269c1b874 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -506,9 +506,14 @@ static int mlx5_query_port_roce(struct ib_device *device, u8 port_num, /* Possible bad flows are checked before filling out props so in case * of an error it will still be zeroed out.
+ * Use native port in case of reps */ - err = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN, - mdev_port_num); + if (dev->is_rep) + err = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN, + 1); + else + err = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN, + mdev_port_num); if (err) goto out; ext = MLX5_CAP_PCAM_FEATURE(dev->mdev, ptys_extended_ethernet); @@ -1432,7 +1437,9 @@ static int mlx5_ib_rep_query_port(struct ib_device *ibdev, u8 port, { int ret; - /* Only link layer == ethernet is valid for representors */ + /* Only link layer == ethernet is valid for representors + * and we always use port 1 */ ret = mlx5_query_port_roce(ibdev, port, props); if (ret || !props) return ret; @@ -4565,7 +4572,7 @@ static void get_ext_port_caps(struct mlx5_ib_dev *dev) mlx5_query_ext_port_caps(dev, port); } -static int get_port_caps(struct mlx5_ib_dev *dev, u8 port) +static int __get_port_caps(struct mlx5_ib_dev *dev, u8 port) { struct ib_device_attr *dprops = NULL; struct ib_port_attr *pprops = NULL; @@ -4608,6 +4615,16 @@ static int get_port_caps(struct mlx5_ib_dev *dev, u8 port) return err; } +static int get_port_caps(struct mlx5_ib_dev *dev, u8 port) +{ + /* For representors use port 1, as this is the only native + * port + */ + if (dev->is_rep) + return __get_port_caps(dev, 1); + return __get_port_caps(dev, port); +} + static void destroy_umrc_res(struct mlx5_ib_dev *dev) { int err; @@ -6268,6 +6285,7 @@ static int mlx5_ib_stage_common_roce_init(struct mlx5_ib_dev *dev) port_num = mlx5_core_native_port_num(dev->mdev) - 1; + /* Register only for native ports */ return mlx5_add_netdev_notifier(dev, port_num); } diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 885f87d0a3f9..7daa20102b1d 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -956,6 +956,7 @@ struct mlx5_ib_dev { u16 devx_whitelist_uid; struct mlx5_srq_table srq_table; struct mlx5_async_ctx async_ctx; + int free_port; }; static inline struct mlx5_ib_cq *to_mibcq(struct mlx5_core_cq *mcq) From patchwork Thu Mar 28 13:27:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 1068189 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.b="eHqFHtX2"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 44VQg455Vfz9sRD for ; Fri, 29 Mar 2019 00:28:32 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727555AbfC1N2b (ORCPT ); Thu, 28 Mar 2019 09:28:31 -0400 Received: from mail.kernel.org ([198.145.29.99]:34944 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727538AbfC1N2b (ORCPT ); Thu, 28 Mar 2019 09:28:31 -0400 Received: from localhost (unknown [77.138.135.184]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org
(Postfix) with ESMTPSA id A385C2183F; Thu, 28 Mar 2019 13:28:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1553779710; bh=SMnOKNbGfWOUfCcEOiys78NcLml66rt1BQLrMG/8O5w=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=eHqFHtX2hN/dJ03FMX+PS1nrdBWbINNSPJFEUPEZCKBnUN2SqmfTA3qV0Ulj6SjKl kRnOIXaW6mGX94u9SfTPMrQ6gZvgndVimitR1E59Zz05fkeIYrLCez//WAZLWcjWk1 3rex2ajrbu8M8MFz9n0SNCryzrj8d+x9mVpYx7tk= From: Leon Romanovsky To: Doug Ledford , Jason Gunthorpe Cc: Leon Romanovsky , RDMA mailing list , Maor Gottlieb , Mark Bloch , Saeed Mahameed , linux-netdev Subject: [PATCH rdma-next 12/12] RDMA/mlx5: Remove VF representor profile Date: Thu, 28 Mar 2019 15:27:42 +0200 Message-Id: <20190328132742.12070-13-leon@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328132742.12070-1-leon@kernel.org> References: <20190328132742.12070-1-leon@kernel.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mark Bloch Now that we have a single IB device with multiple ports, we can remove the VF representor profile. Signed-off-by: Mark Bloch Signed-off-by: Leon Romanovsky --- drivers/infiniband/hw/mlx5/ib_rep.c | 39 ----------------------- drivers/infiniband/hw/mlx5/main.c | 46 ++++++++++------------------ drivers/infiniband/hw/mlx5/mlx5_ib.h | 17 ---------- 3 files changed, 16 insertions(+), 86 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c index 224ef6c88d17..cbcc40d776b9 100644 --- a/drivers/infiniband/hw/mlx5/ib_rep.c +++ b/drivers/infiniband/hw/mlx5/ib_rep.c @@ -7,45 +7,6 @@ #include "ib_rep.h" #include "srq.h" -static const struct mlx5_ib_profile vf_rep_profile = { - STAGE_CREATE(MLX5_IB_STAGE_INIT, - mlx5_ib_stage_init_init, - mlx5_ib_stage_init_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_FLOW_DB, - mlx5_ib_stage_rep_flow_db_init, - NULL), - STAGE_CREATE(MLX5_IB_STAGE_CAPS, - mlx5_ib_stage_caps_init, - NULL), - STAGE_CREATE(MLX5_IB_STAGE_NON_DEFAULT_CB, - mlx5_ib_stage_rep_non_default_cb, - NULL), - STAGE_CREATE(MLX5_IB_STAGE_ROCE, - mlx5_ib_stage_rep_roce_init, - mlx5_ib_stage_rep_roce_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_SRQ, - mlx5_init_srq_table, - mlx5_cleanup_srq_table), - STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES, - mlx5_ib_stage_dev_res_init, - mlx5_ib_stage_dev_res_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_COUNTERS, - mlx5_ib_stage_counters_init, - mlx5_ib_stage_counters_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_BFREG, - mlx5_ib_stage_bfrag_init, - mlx5_ib_stage_bfrag_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_PRE_IB_REG_UMR, - NULL, - mlx5_ib_stage_pre_ib_reg_umr_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_IB_REG, - mlx5_ib_stage_ib_reg_init, - mlx5_ib_stage_ib_reg_cleanup), - STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR, - mlx5_ib_stage_post_ib_reg_umr_init, - NULL), -}; - static int mlx5_ib_set_vport_rep(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep) { diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 57d269c1b874..840240e0367e 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -5949,7 +5949,7 @@ static struct ib_counters *mlx5_ib_create_counters(struct ib_device *device, return &mcounters->ibcntrs; } -void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev) +static void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev) { mlx5_ib_cleanup_multiport_master(dev); if (IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING)) { @@ -5958,7 +5958,7 @@
void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev) } } -int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev) { struct mlx5_core_dev *mdev = dev->mdev; int err; @@ -6031,20 +6031,6 @@ static int mlx5_ib_stage_flow_db_init(struct mlx5_ib_dev *dev) return 0; } -int mlx5_ib_stage_rep_flow_db_init(struct mlx5_ib_dev *dev) -{ - struct mlx5_ib_dev *nic_dev; - - nic_dev = mlx5_ib_get_uplink_ibdev(dev->mdev->priv.eswitch); - - if (!nic_dev) - return -EINVAL; - - dev->flow_db = nic_dev->flow_db; - - return 0; -} - static void mlx5_ib_stage_flow_db_cleanup(struct mlx5_ib_dev *dev) { kfree(dev->flow_db); @@ -6143,7 +6129,7 @@ static const struct ib_device_ops mlx5_ib_dev_dm_ops = { .reg_dm_mr = mlx5_ib_reg_dm_mr, }; -int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) { struct mlx5_core_dev *mdev = dev->mdev; int err; @@ -6249,7 +6235,7 @@ static const struct ib_device_ops mlx5_ib_dev_port_rep_ops = { .query_port = mlx5_ib_rep_query_port, }; -int mlx5_ib_stage_rep_non_default_cb(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_rep_non_default_cb(struct mlx5_ib_dev *dev) { ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_port_rep_ops); return 0; @@ -6296,7 +6282,7 @@ static void mlx5_ib_stage_common_roce_cleanup(struct mlx5_ib_dev *dev) mlx5_remove_netdev_notifier(dev, port_num); } -int mlx5_ib_stage_rep_roce_init(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_rep_roce_init(struct mlx5_ib_dev *dev) { struct mlx5_core_dev *mdev = dev->mdev; enum rdma_link_layer ll; @@ -6312,7 +6298,7 @@ int mlx5_ib_stage_rep_roce_init(struct mlx5_ib_dev *dev) return err; } -void mlx5_ib_stage_rep_roce_cleanup(struct mlx5_ib_dev *dev) +static void mlx5_ib_stage_rep_roce_cleanup(struct mlx5_ib_dev *dev) { mlx5_ib_stage_common_roce_cleanup(dev); } @@ -6359,12 +6345,12 @@ static void mlx5_ib_stage_roce_cleanup(struct mlx5_ib_dev *dev) } } -int mlx5_ib_stage_dev_res_init(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_dev_res_init(struct mlx5_ib_dev *dev) { return create_dev_resources(&dev->devr); } -void mlx5_ib_stage_dev_res_cleanup(struct mlx5_ib_dev *dev) +static void mlx5_ib_stage_dev_res_cleanup(struct mlx5_ib_dev *dev) { destroy_dev_resources(&dev->devr); } @@ -6390,7 +6376,7 @@ static const struct ib_device_ops mlx5_ib_dev_hw_stats_ops = { .counter_update_stats = mlx5_ib_counter_update_stats, }; -int mlx5_ib_stage_counters_init(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_counters_init(struct mlx5_ib_dev *dev) { if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt)) { ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_hw_stats_ops); @@ -6401,7 +6387,7 @@ int mlx5_ib_stage_counters_init(struct mlx5_ib_dev *dev) return 0; } -void mlx5_ib_stage_counters_cleanup(struct mlx5_ib_dev *dev) +static void mlx5_ib_stage_counters_cleanup(struct mlx5_ib_dev *dev) { if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt)) mlx5_ib_dealloc_counters(dev); @@ -6431,7 +6417,7 @@ static void mlx5_ib_stage_uar_cleanup(struct mlx5_ib_dev *dev) mlx5_put_uars_page(dev->mdev, dev->mdev->priv.uar); } -int mlx5_ib_stage_bfrag_init(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_bfrag_init(struct mlx5_ib_dev *dev) { int err; @@ -6446,13 +6432,13 @@ int mlx5_ib_stage_bfrag_init(struct mlx5_ib_dev *dev) return err; } -void mlx5_ib_stage_bfrag_cleanup(struct mlx5_ib_dev *dev) +static void mlx5_ib_stage_bfrag_cleanup(struct mlx5_ib_dev *dev) { mlx5_free_bfreg(dev->mdev, &dev->fp_bfreg); mlx5_free_bfreg(dev->mdev, &dev->bfreg); } -int 
mlx5_ib_stage_ib_reg_init(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_ib_reg_init(struct mlx5_ib_dev *dev) { const char *name; @@ -6464,17 +6450,17 @@ int mlx5_ib_stage_ib_reg_init(struct mlx5_ib_dev *dev) return ib_register_device(&dev->ib_dev, name); } -void mlx5_ib_stage_pre_ib_reg_umr_cleanup(struct mlx5_ib_dev *dev) +static void mlx5_ib_stage_pre_ib_reg_umr_cleanup(struct mlx5_ib_dev *dev) { destroy_umrc_res(dev); } -void mlx5_ib_stage_ib_reg_cleanup(struct mlx5_ib_dev *dev) +static void mlx5_ib_stage_ib_reg_cleanup(struct mlx5_ib_dev *dev) { ib_unregister_device(&dev->ib_dev); } -int mlx5_ib_stage_post_ib_reg_umr_init(struct mlx5_ib_dev *dev) +static int mlx5_ib_stage_post_ib_reg_umr_init(struct mlx5_ib_dev *dev) { return create_umr_res(dev); } diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 7daa20102b1d..505332015ff2 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -1235,23 +1235,6 @@ static inline void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp, #endif /* CONFIG_INFINIBAND_ON_DEMAND_PAGING */ /* Needed for rep profile */ -int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev); -void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev); -int mlx5_ib_stage_rep_flow_db_init(struct mlx5_ib_dev *dev); -int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev); -int mlx5_ib_stage_rep_non_default_cb(struct mlx5_ib_dev *dev); -int mlx5_ib_stage_rep_roce_init(struct mlx5_ib_dev *dev); -void mlx5_ib_stage_rep_roce_cleanup(struct mlx5_ib_dev *dev); -int mlx5_ib_stage_dev_res_init(struct mlx5_ib_dev *dev); -void mlx5_ib_stage_dev_res_cleanup(struct mlx5_ib_dev *dev); -int mlx5_ib_stage_counters_init(struct mlx5_ib_dev *dev); -void mlx5_ib_stage_counters_cleanup(struct mlx5_ib_dev *dev); -int mlx5_ib_stage_bfrag_init(struct mlx5_ib_dev *dev); -void mlx5_ib_stage_bfrag_cleanup(struct mlx5_ib_dev *dev); -void mlx5_ib_stage_pre_ib_reg_umr_cleanup(struct mlx5_ib_dev *dev); -int mlx5_ib_stage_ib_reg_init(struct mlx5_ib_dev *dev); -void mlx5_ib_stage_ib_reg_cleanup(struct mlx5_ib_dev *dev); -int mlx5_ib_stage_post_ib_reg_umr_init(struct mlx5_ib_dev *dev); void __mlx5_ib_remove(struct mlx5_ib_dev *dev, const struct mlx5_ib_profile *profile, int stage);
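For context on the mechanism being trimmed here: a mlx5_ib_profile is a table of init/cleanup stage callbacks that the driver walks forward on device load and unwinds backward on failure or removal (see the __mlx5_ib_remove declaration above). A minimal stand-alone sketch of that staged-init pattern, with all names hypothetical rather than the driver's actual API:

	#include <stddef.h>

	struct demo_dev;				/* opaque device, hypothetical */

	struct demo_stage {
		int  (*init)(struct demo_dev *d);	/* may be NULL */
		void (*cleanup)(struct demo_dev *d);	/* may be NULL */
	};

	/* Run the stages in order; on failure, unwind the ones that
	 * already completed, in reverse order.
	 */
	static int demo_profile_init(struct demo_dev *d,
				     const struct demo_stage *s, size_t n)
	{
		size_t i;
		int err;

		for (i = 0; i < n; i++) {
			if (!s[i].init)
				continue;
			err = s[i].init(d);
			if (err)
				goto unwind;
		}
		return 0;

	unwind:
		while (i--)		/* stages [0, i) succeeded */
			if (s[i].cleanup)
				s[i].cleanup(d);
		return err;
	}

With the VF case now handled by attaching a port to the uplink IB device instead of instantiating its own profile, every remaining stage callback is referenced only from main.c, so each can become static and the exported declarations removed above are no longer needed.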