From patchwork Sun Apr 7 07:50:05 2019
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 1079902
X-Patchwork-Delegate: davem@davemloft.net
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list,
    Majd Dibbiny, Mark Zhang, Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next v1 09/17] IB/mlx5: Support statistic q counter configuration
Date: Sun, 7 Apr 2019 10:50:05 +0300
Message-Id: <20190407075013.12955-10-leon@kernel.org>
In-Reply-To: <20190407075013.12955-1-leon@kernel.org>
References: <20190407075013.12955-1-leon@kernel.org>

From: Mark Zhang

Add support for the ib callbacks counter_bind_qp() and
counter_unbind_qp().

Signed-off-by: Mark Zhang
Reviewed-by: Majd Dibbiny
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/main.c | 55 +++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index d0b3e916ca8e..8e9dfc1119c2 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -5398,6 +5398,59 @@ static int mlx5_ib_get_hw_stats(struct ib_device *ibdev,
 	return num_counters;
 }
 
+static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter,
+				   struct ib_qp *qp)
+{
+	struct mlx5_ib_dev *dev = to_mdev(qp->device);
+	u16 cnt_set_id = 0;
+	int err;
+
+	if (counter->id == 0) {
+		err = mlx5_cmd_alloc_q_counter(dev->mdev,
+					       &cnt_set_id,
+					       MLX5_SHARED_RESOURCE_UID);
+		if (err)
+			return err;
+		counter->id = cnt_set_id;
+	}
+
+	err = mlx5_ib_qp_set_counter(qp, counter);
+	if (err)
+		goto fail_set_counter;
+
+	return 0;
+
+fail_set_counter:
+	if (cnt_set_id != 0) {
+		mlx5_core_dealloc_q_counter(dev->mdev, cnt_set_id);
+		counter->id = 0;
+	}
+
+	return err;
+}
+
+static int mlx5_ib_counter_unbind_qp(struct ib_qp *qp, bool force)
+{
+	struct mlx5_ib_dev *dev = to_mdev(qp->device);
+	struct rdma_counter *counter = qp->counter;
+	int err;
+
+	err = mlx5_ib_qp_set_counter(qp, NULL);
+	if (err && !force)
+		return err;
+
+	/*
+	 * Deallocate the counter if this is the last QP bound to it;
+	 * if @force is set, deallocate the q counter even when the
+	 * unbind above failed. This is used in cases such as QP
+	 * destroy, where the counter must be released regardless.
+	 */
+	if (atomic_read(&counter->usecnt) == 1)
+		return mlx5_core_dealloc_q_counter(dev->mdev, counter->id);
+
+	return 0;
+}
+
 static int mlx5_ib_rn_get_params(struct ib_device *device, u8 port_num,
 				 enum rdma_netdev_t type,
 				 struct rdma_netdev_alloc_params *params)
@@ -6276,6 +6329,8 @@ static void mlx5_ib_stage_odp_cleanup(struct mlx5_ib_dev *dev)
 static const struct ib_device_ops mlx5_ib_dev_hw_stats_ops = {
 	.alloc_hw_stats = mlx5_ib_alloc_hw_stats,
 	.get_hw_stats = mlx5_ib_get_hw_stats,
+	.counter_bind_qp = mlx5_ib_counter_bind_qp,
+	.counter_unbind_qp = mlx5_ib_counter_unbind_qp,
 };
 
 int mlx5_ib_stage_counters_init(struct mlx5_ib_dev *dev)