From patchwork Thu Aug 22 02:48:46 2019
X-Patchwork-Submitter: Aaron Williams
X-Patchwork-Id: 1151242
From: Aaron Williams
Date: Wed, 21 Aug 2019 19:48:46 -0700
Message-ID: <20190822024846.20728-1-awilliams@marvell.com>
X-Mailer: git-send-email 2.16.4
Cc: Aaron Williams
Subject: [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
List-Id: U-Boot discussion
Sender: "U-Boot" <u-boot-bounces@lists.denx.de>

When large writes take place I saw a Samsung EVO 970+ return a status value of
0x13, PRP Offset Invalid. I tracked this down to the improper handling of PRP
entries.
The memory blocks that the PRP entries are placed in cannot cross a page
boundary and thus must be allocated on page boundaries. This is how the Linux
kernel driver works.

With this patch, the PRP pool is allocated on a page boundary and, other than
the very first allocation, the pool size is a multiple of the page size. Each
page can hold (4096 / 8) - 1 entries, since the last entry must point to the
next page in the pool.

Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams
---
 drivers/nvme/nvme.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..71ea226820 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -74,6 +74,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	u64 *prp_pool;
 	int length = total_len;
 	int i, nprps;
+	u32 prps_per_page = (page_size >> 3) - 1;
+	u32 num_pages;
+
 	length -= (page_size - offset);

 	if (length <= 0) {
@@ -90,15 +93,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	}

 	nprps = DIV_ROUND_UP(length, page_size);
+	num_pages = DIV_ROUND_UP(nprps, prps_per_page);

 	if (nprps > dev->prp_entry_num) {
 		free(dev->prp_pool);
-		dev->prp_pool = malloc(nprps << 3);
+		dev->prp_pool = memalign(page_size, num_pages * page_size);
 		if (!dev->prp_pool) {
 			printf("Error: malloc prp_pool fail\n");
 			return -ENOMEM;
 		}
-		dev->prp_entry_num = nprps;
+		dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
 	}

 	prp_pool = dev->prp_pool;
@@ -791,12 +795,6 @@ static int nvme_probe(struct udevice *udev)
 	}
 	memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));

-	ndev->prp_pool = malloc(MAX_PRP_POOL);
-	if (!ndev->prp_pool) {
-		ret = -ENOMEM;
-		printf("Error: %s: Out of memory!\n", udev->name);
-		goto free_nvme;
-	}
 	ndev->prp_entry_num = MAX_PRP_POOL >> 3;

 	ndev->cap = nvme_readq(&ndev->bar->cap);
@@ -808,6 +806,13 @@ static int nvme_probe(struct udevice *udev)
 	if (ret)
 		goto free_queue;

+	ndev->prp_pool = memalign(ndev->page_size, MAX_PRP_POOL);
+	if (!ndev->prp_pool) {
+		ret = -ENOMEM;
+		printf("Error: %s: Out of memory!\n", udev->name);
+		goto free_nvme;
+	}
+
 	ret = nvme_setup_io_queues(ndev);
 	if (ret)
 		goto free_queue;