From patchwork Tue Jun 18 15:34:24 2024
From: Marek Behún
To: Stefan Roese
Cc: u-boot@lists.denx.de, Marek Behún
Subject: [PATCH u-boot-marvell 01/16] arm: mvebu: turris_omnia: Disable ext4 write support in defconfig
Date: Tue, 18 Jun 2024 17:34:24 +0200
Message-ID: <20240618153439.9518-2-kabel@kernel.org>
In-Reply-To: <20240618153439.9518-1-kabel@kernel.org>
References: <20240618153439.9518-1-kabel@kernel.org>

The Turris Omnia defconfig is nearing the image size limit. Disable ext4
write support to reserve space for more important stuff. This reduces the
size of the KWB image by ~19 KiB.

If in the future U-Boot supports compressing itself and decompressing on
load, we may enable this again.

Signed-off-by: Marek Behún
---
 configs/turris_omnia_defconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/configs/turris_omnia_defconfig b/configs/turris_omnia_defconfig
index 225a76f993..d5699782d7 100644
--- a/configs/turris_omnia_defconfig
+++ b/configs/turris_omnia_defconfig
@@ -124,4 +124,3 @@ CONFIG_USB_XHCI_HCD=y
 CONFIG_USB_EHCI_HCD=y
 CONFIG_WDT=y
 CONFIG_WDT_ORION=y
-CONFIG_EXT4_WRITE=y

From patchwork Tue Jun 18 15:34:25 2024
From: Marek Behún
To: Stefan Roese
Cc: u-boot@lists.denx.de, Marek Behún
Subject: [PATCH u-boot-marvell 02/16] ddr: marvell: a38x: debug: return from ddr3_tip_print_log() early if we won't print anything
Date: Tue, 18 Jun 2024 17:34:25 +0200
Message-ID: <20240618153439.9518-3-kabel@kernel.org>
In-Reply-To: <20240618153439.9518-1-kabel@kernel.org>
References: <20240618153439.9518-1-kabel@kernel.org>

Return from ddr3_tip_print_log() early if we won't print anything anyway.
This way the compiler can optimize away the VALIDATE_IF_ACTIVE() calls in
the for-loop, so if the SILENT_LIB macro is defined, no code is generated
for the rest of the function, which saves some space.

Signed-off-by: Marek Behún
---
 drivers/ddr/marvell/a38x/ddr3_debug.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/ddr/marvell/a38x/ddr3_debug.c b/drivers/ddr/marvell/a38x/ddr3_debug.c
index 9e499cfb99..0374a84387 100644
--- a/drivers/ddr/marvell/a38x/ddr3_debug.c
+++ b/drivers/ddr/marvell/a38x/ddr3_debug.c
@@ -399,6 +399,15 @@ int ddr3_tip_print_log(u32 dev_num, u32 mem_addr)
 }
 #endif /* DDR_VIEWER_TOOL */
 
+	/* return early if we won't print anything anyway */
+	if (
+#if defined(SILENT_LIB)
+	    1 ||
+#endif
+	    debug_training < DEBUG_LEVEL_INFO) {
+		return MV_OK;
+	}
+
 	for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) {
 		VALIDATE_IF_ACTIVE(tm->if_act_mask, if_id);

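The saving comes purely from dead-code elimination: with SILENT_LIB defined the
guard condition is a compile-time constant, everything after it is unreachable,
and the compiler drops it. A minimal stand-alone sketch of the same pattern
(only SILENT_LIB and the debug-level comparison are taken from the patch; the
other identifiers are invented for illustration):

    #include <stdio.h>

    #define DEBUG_LEVEL_INFO 2

    static int debug_training;      /* runtime-settable in the real code */

    int print_log(int num_entries)
    {
            /* With SILENT_LIB defined this condition is constantly true,
             * so the loop below is discarded at compile time. */
            if (
    #if defined(SILENT_LIB)
                1 ||
    #endif
                debug_training < DEBUG_LEVEL_INFO)
                    return 0;

            for (int i = 0; i < num_entries; i++)
                    printf("log entry %d\n", i);

            return 0;
    }

Building this with and without -DSILENT_LIB and comparing the object sizes
shows the effect the commit message describes.
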
From patchwork Tue Jun 18 15:34:26 2024
From: Marek Behún
To: Stefan Roese
Cc: u-boot@lists.denx.de, Marek Behún
Subject: [PATCH u-boot-marvell 03/16] ddr: marvell: a38x: debug: Remove unused variables
Date: Tue, 18 Jun 2024 17:34:26 +0200
Message-ID: <20240618153439.9518-4-kabel@kernel.org>
In-Reply-To: <20240618153439.9518-1-kabel@kernel.org>
References: <20240618153439.9518-1-kabel@kernel.org>

The variables is_default_centralization, is_tune_result and
is_bist_reset_bit are never used.

Signed-off-by: Marek Behún
---
 drivers/ddr/marvell/a38x/ddr3_debug.c | 3 ---
 drivers/ddr/marvell/a38x/ddr3_init.h  | 1 -
 2 files changed, 4 deletions(-)

diff --git a/drivers/ddr/marvell/a38x/ddr3_debug.c b/drivers/ddr/marvell/a38x/ddr3_debug.c
index 0374a84387..c659ae92d8 100644
--- a/drivers/ddr/marvell/a38x/ddr3_debug.c
+++ b/drivers/ddr/marvell/a38x/ddr3_debug.c
@@ -117,12 +117,9 @@ u32 ctrl_level_phase[MAX_CS_NUM * MAX_INTERFACE_NUM * MAX_BUS_NUM];
 #endif /* DDR_VIEWER_TOOL */
 
 struct hws_tip_config_func_db config_func_info[MAX_DEVICE_NUM];
-u8 is_default_centralization = 0;
-u8 is_tune_result = 0;
 u8 is_validate_window_per_if = 0;
 u8 is_validate_window_per_pup = 0;
 u8 sweep_cnt = 1;
-u32 is_bist_reset_bit = 1;
 u8 is_run_leveling_sweep_tests;
 
 static struct hws_xsb_info xsb_info[MAX_DEVICE_NUM];
diff --git a/drivers/ddr/marvell/a38x/ddr3_init.h b/drivers/ddr/marvell/a38x/ddr3_init.h
index 6854bb49de..9288073a78 100644
--- a/drivers/ddr/marvell/a38x/ddr3_init.h
+++ b/drivers/ddr/marvell/a38x/ddr3_init.h
@@ -116,7 +116,6 @@ extern u32 clamp_tbl[];
 extern u32 freq_mask[MAX_DEVICE_NUM][MV_DDR_FREQ_LAST];
 extern u32 maxt_poll_tries;
 
-extern u32 is_bist_reset_bit;
 extern u8 vref_window_size[MAX_INTERFACE_NUM][MAX_BUS_NUM];
 
 extern u32 effective_cs;

From patchwork Tue Jun 18 15:34:27 2024
From: Marek Behún
To: Stefan Roese
Cc: u-boot@lists.denx.de, Marek Behún
Subject: [PATCH u-boot-marvell 04/16] ddr: marvell: a38x: debug: Define DDR_VIEWER_TOOL variables only if needed, and make them static
Date: Tue, 18 Jun 2024 17:34:27 +0200
Message-ID: <20240618153439.9518-5-kabel@kernel.org>
In-Reply-To: <20240618153439.9518-1-kabel@kernel.org>
References: <20240618153439.9518-1-kabel@kernel.org>

The variables is_validate_window_per_if, is_validate_window_per_pup,
sweep_cnt and is_run_leveling_sweep_tests are only used if the
DDR_VIEWER_TOOL macro is defined, so define them only in that case.

Make them static since they are only used in ddr3_debug.c.

Signed-off-by: Marek Behún
---
 drivers/ddr/marvell/a38x/ddr3_debug.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/ddr/marvell/a38x/ddr3_debug.c b/drivers/ddr/marvell/a38x/ddr3_debug.c
index c659ae92d8..d32d42c408 100644
--- a/drivers/ddr/marvell/a38x/ddr3_debug.c
+++ b/drivers/ddr/marvell/a38x/ddr3_debug.c
@@ -114,13 +114,14 @@ u32 ctrl_adll[MAX_CS_NUM * MAX_INTERFACE_NUM * MAX_BUS_NUM];
 u32 ctrl_adll1[MAX_CS_NUM * MAX_INTERFACE_NUM * MAX_BUS_NUM];
 u32 ctrl_level_phase[MAX_CS_NUM * MAX_INTERFACE_NUM * MAX_BUS_NUM];
 #endif /* EXCLUDE_SWITCH_DEBUG */
+
+static u8 is_validate_window_per_if = 0;
+static u8 is_validate_window_per_pup = 0;
+static u8 sweep_cnt = 1;
+static u8 is_run_leveling_sweep_tests;
 #endif /* DDR_VIEWER_TOOL */
 
 struct hws_tip_config_func_db config_func_info[MAX_DEVICE_NUM];
-u8 is_validate_window_per_if = 0;
-u8 is_validate_window_per_pup = 0;
-u8 sweep_cnt = 1;
-u8 is_run_leveling_sweep_tests;
 
 static struct hws_xsb_info xsb_info[MAX_DEVICE_NUM];

From patchwork Tue Jun 18 15:34:28 2024
From: Marek Behún
To: Stefan Roese
Cc: u-boot@lists.denx.de, Marek Behún
Subject: [PATCH u-boot-marvell 05/16] ddr: marvell: a38x: debug: Allow compiling with immutable debug settings to reduce binary size
Date: Tue, 18 Jun 2024 17:34:28 +0200
Message-ID: <20240618153439.9518-6-kabel@kernel.org>
In-Reply-To: <20240618153439.9518-1-kabel@kernel.org>
References: <20240618153439.9518-1-kabel@kernel.org>

Allow compiling with immutable debug settings:
- DEBUG_LEVEL is always set to DEBUG_LEVEL_ERROR
- register dumps are disabled

This can save around 10 KiB of space in the resulting binary, which is a
lot in U-Boot SPL.

Signed-off-by: Marek Behún
---
 arch/arm/mach-mvebu/Kconfig           | 10 +++++++
 drivers/ddr/marvell/a38x/ddr3_debug.c |  9 ++++--
 drivers/ddr/marvell/a38x/ddr3_init.c  |  3 +-
 drivers/ddr/marvell/a38x/ddr3_init.h  | 42 ++++++++++++++++++++++-----
 4 files changed, 53 insertions(+), 11 deletions(-)

diff --git a/arch/arm/mach-mvebu/Kconfig b/arch/arm/mach-mvebu/Kconfig
index f15d3cc5ed..a320793a30 100644
--- a/arch/arm/mach-mvebu/Kconfig
+++ b/arch/arm/mach-mvebu/Kconfig
@@ -250,6 +250,16 @@ config DDR_LOG_LEVEL
	  At level 3, rovides the windows margin of each DQ as a results of
	  DQS centeralization.
 
+config DDR_IMMUTABLE_DEBUG_SETTINGS
+	bool "Immutable DDR debug level (always DEBUG_LEVEL_ERROR)"
+	depends on ARMADA_38X
+	help
+	  Makes the DDR training code debug level settings immutable.
+	  The debug level setting from board topology definition is ignored.
+	  The debug level is always set to DEBUG_LEVEL_ERROR and register
+	  dumps are disabled.
+	  This can save around 10 KiB of space in SPL binary.
+
 config DDR_RESET_ON_TRAINING_FAILURE
 	bool "Reset the board on DDR training failure instead of hanging"
 	depends on ARMADA_38X || ARMADA_XP
diff --git a/drivers/ddr/marvell/a38x/ddr3_debug.c b/drivers/ddr/marvell/a38x/ddr3_debug.c
index d32d42c408..0b65168d82 100644
--- a/drivers/ddr/marvell/a38x/ddr3_debug.c
+++ b/drivers/ddr/marvell/a38x/ddr3_debug.c
@@ -7,18 +7,21 @@
 #include "mv_ddr_training_db.h"
 #include "mv_ddr_regs.h"
 
+#if !defined(CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS)
 u8 is_reg_dump = 0;
 u8 debug_pbs = DEBUG_LEVEL_ERROR;
+#endif
 
 /*
  * API to change flags outside of the lib
  */
-#if defined(SILENT_LIB)
+#if defined(SILENT_LIB) || defined(CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS)
 void ddr3_hws_set_log_level(enum ddr_lib_debug_block block, u8 level)
 {
 	/* do nothing */
 }
-#else /* SILENT_LIB */
+#else /* !SILENT_LIB && !CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS */
+
 /* Debug flags for other Training modules */
 u8 debug_training_static = DEBUG_LEVEL_ERROR;
 u8 debug_training = DEBUG_LEVEL_ERROR;
@@ -104,7 +107,7 @@ void ddr3_hws_set_log_level(enum ddr_lib_debug_block block, u8 level)
 #endif /* CONFIG_DDR4 */
 	}
 }
-#endif /* SILENT_LIB */
+#endif /* !SILENT_LIB && !CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS */
 
 #if defined(DDR_VIEWER_TOOL)
 static char *convert_freq(enum mv_ddr_freq freq);
diff --git a/drivers/ddr/marvell/a38x/ddr3_init.c b/drivers/ddr/marvell/a38x/ddr3_init.c
index 27eb3ac173..7c5147f474 100644
--- a/drivers/ddr/marvell/a38x/ddr3_init.c
+++ b/drivers/ddr/marvell/a38x/ddr3_init.c
@@ -41,7 +41,8 @@ int ddr3_init(void)
 	mv_ddr_pre_training_soc_config(ddr_type);
 
 	/* Set log level for training library */
-	mv_ddr_user_log_level_set(DEBUG_BLOCK_ALL);
+	if (!IS_ENABLED(CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS))
+		mv_ddr_user_log_level_set(DEBUG_BLOCK_ALL);
 
 	mv_ddr_early_init();
diff --git a/drivers/ddr/marvell/a38x/ddr3_init.h b/drivers/ddr/marvell/a38x/ddr3_init.h
index 9288073a78..b513a13c53 100644
--- a/drivers/ddr/marvell/a38x/ddr3_init.h
+++ b/drivers/ddr/marvell/a38x/ddr3_init.h
@@ -45,15 +45,46 @@ enum log_level {
 #define MISL_PHY_ODT_N_OFFS	0x0
 
 /* Globals */
-extern u8 debug_training, debug_calibration, debug_ddr4_centralization,
-	debug_tap_tuning, debug_dm_tuning;
+#if defined(CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS)
+static const u8 is_reg_dump = 0;
+static const u8 debug_training_static = DEBUG_LEVEL_ERROR;
+static const u8 debug_training = DEBUG_LEVEL_ERROR;
+static const u8 debug_leveling = DEBUG_LEVEL_ERROR;
+static const u8 debug_centralization = DEBUG_LEVEL_ERROR;
+static const u8 debug_training_ip = DEBUG_LEVEL_ERROR;
+static const u8 debug_training_bist = DEBUG_LEVEL_ERROR;
+static const u8 debug_training_hw_alg = DEBUG_LEVEL_ERROR;
+static const u8 debug_training_access = DEBUG_LEVEL_ERROR;
+static const u8 debug_training_device = DEBUG_LEVEL_ERROR;
+static const u8 debug_pbs = DEBUG_LEVEL_ERROR;
+
+static const u8 debug_tap_tuning = DEBUG_LEVEL_ERROR;
+static const u8 debug_calibration = DEBUG_LEVEL_ERROR;
+static const u8 debug_ddr4_centralization = DEBUG_LEVEL_ERROR;
+static const u8 debug_dm_tuning = DEBUG_LEVEL_ERROR;
+#else /* !CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS */
 extern u8 is_reg_dump;
+extern u8 debug_training_static;
+extern u8 debug_training;
+extern u8 debug_leveling;
+extern u8 debug_centralization;
+extern u8 debug_training_ip;
+extern u8 debug_training_bist;
+extern u8 debug_training_hw_alg;
+extern u8 debug_training_access;
+extern u8 debug_training_device;
+extern u8 debug_pbs;
+
+extern u8 debug_tap_tuning;
+extern u8 debug_calibration;
+extern u8 debug_ddr4_centralization;
+extern u8 debug_dm_tuning;
+#endif /* !CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS */
+
 extern u8 generic_init_controller;
 /* list of allowed frequency listed in order of enum mv_ddr_freq */
 extern u32 is_pll_old;
 extern struct pattern_info pattern_table[];
-extern u8 debug_centralization, debug_training_ip, debug_training_bist,
-	debug_pbs, debug_training_static, debug_leveling;
 extern struct hws_tip_config_func_db config_func_info[];
 extern u8 twr_mask_table[];
 extern u8 cl_mask_table[];
@@ -76,7 +107,6 @@ extern u32 g_rtt_nom;
 extern u32 g_rtt_wr;
 extern u32 g_rtt_park;
 
-extern u8 debug_training_access;
 extern u32 first_active_if;
 extern u32 delay_enable, ck_delay, ca_delay;
 extern u32 mask_tune_func;
@@ -122,8 +152,6 @@ extern u32 effective_cs;
 extern int ddr3_tip_centr_skip_min_win_check;
 extern u32 *dq_map_table;
 
-extern u8 debug_training_hw_alg;
-
 extern u32 start_xsb_offset;
 
 extern u32 odt_config;

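A note on why exposing the debug flags in the header as static const shrinks
every translation unit: the level checks scattered through the training code
now compare a compile-time constant against DEBUG_LEVEL_INFO or higher, the
comparison folds to false, and the logging calls disappear. A simplified,
hypothetical call site (names other than DEBUG_LEVEL_* and debug_training are
made up for this sketch):

    #include <stdio.h>

    #define DEBUG_LEVEL_ERROR 0
    #define DEBUG_LEVEL_INFO  2

    /* what the immutable configuration effectively provides in the header */
    static const unsigned char debug_training = DEBUG_LEVEL_ERROR;

    void report_stage(int stage)
    {
            /* 0 >= 2 is a constant false, so the branch is removed entirely */
            if (debug_training >= DEBUG_LEVEL_INFO)
                    printf("training stage %d done\n", stage);
    }

The same reasoning applies to is_reg_dump: a constant zero guard lets the
compiler drop the register-dump paths.
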
From patchwork Tue Jun 18 15:34:29 2024
From: Marek Behún
To: Stefan Roese
Cc: u-boot@lists.denx.de, Marek Behún
Subject: [PATCH u-boot-marvell 06/16] arm: mvebu: turris_omnia: Enable immutable debug settings in DDR3 training by default
Date: Tue, 18 Jun 2024 17:34:29 +0200
Message-ID: <20240618153439.9518-7-kabel@kernel.org>
In-Reply-To: <20240618153439.9518-1-kabel@kernel.org>
References: <20240618153439.9518-1-kabel@kernel.org>

Save 10 KiB in Turris Omnia's SPL binary by enabling immutable debug
settings for DDR3 training code.

Signed-off-by: Marek Behún
---
 configs/turris_omnia_defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/configs/turris_omnia_defconfig b/configs/turris_omnia_defconfig
index d5699782d7..938ad88ae5 100644
--- a/configs/turris_omnia_defconfig
+++ b/configs/turris_omnia_defconfig
@@ -10,6 +10,7 @@ CONFIG_NR_DRAM_BANKS=2
 CONFIG_HAS_CUSTOM_SYS_INIT_SP_ADDR=y
 CONFIG_CUSTOM_SYS_INIT_SP_ADDR=0xff0000
 CONFIG_TARGET_TURRIS_OMNIA=y
+CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS=y
 CONFIG_DDR_RESET_ON_TRAINING_FAILURE=y
 CONFIG_MVEBU_EFUSE_VHV_GPIO="mcu_56"
 CONFIG_MVEBU_EFUSE_VHV_GPIO_ACTIVE_LOW=y

From patchwork Tue Jun 18 15:34:30 2024
From: Marek Behún
To: Stefan Roese
Cc: u-boot@lists.denx.de, Marek Behún
Subject: [PATCH u-boot-marvell 07/16] arm: mvebu: turris_omnia: Fix ethernet PHY reset gpio FDT fixup
Date: Tue, 18 Jun 2024 17:34:30 +0200
Message-ID: <20240618153439.9518-8-kabel@kernel.org>
In-Reply-To: <20240618153439.9518-1-kabel@kernel.org>
References: <20240618153439.9518-1-kabel@kernel.org>

For board revisions where the WAN ethernet PHY reset GPIO is controllable
via the MCU we currently insert a phy-reset-gpios property into the
ethernet controller node. The mvneta driver parses this property and uses
the GPIO to reset the PHY.

But this phy-reset-gpios property is not a valid DT binding in the
upstream kernel. Instead, a reset-gpios property should be inserted into
the ethernet PHY node. This correct DT binding is supported by the
DM ETH PHY U-Boot driver.

Insert the reset-gpios property into the WAN PHY node instead of the
phy-reset-gpios property in the WAN ETH node so that Linux will correctly
use the reset GPIO. Enable the CONFIG_DM_ETH_PHY config option so that
U-Boot will also use the correct DT property.

Note: currently there are 4 ethernet controller drivers parsing the wrong
DT property: dwc_eth_qos, fec_mxc, mvneta and mvpp2. We should convert all
relevant device-trees to use reset-gpios so that we can get rid of these
drivers parsing this property. (See the illustrative sketch after this
patch for how the reset-gpios binding is consumed.)

Fixes: 1da53ae26afc ("arm: mvebu: turris_omnia: Add support for design with SW reset signals")
Signed-off-by: Marek Behún
---
 board/CZ.NIC/turris_omnia/turris_omnia.c | 44 ++++++++++++++----------
 configs/turris_omnia_defconfig           |  1 +
 2 files changed, 26 insertions(+), 19 deletions(-)

diff --git a/board/CZ.NIC/turris_omnia/turris_omnia.c b/board/CZ.NIC/turris_omnia/turris_omnia.c
index 4ee1a394b0..100a991406 100644
--- a/board/CZ.NIC/turris_omnia/turris_omnia.c
+++ b/board/CZ.NIC/turris_omnia/turris_omnia.c
@@ -978,11 +978,21 @@ static int fixup_mcu_gpio_in_pcie_nodes(void *blob)
 	return 0;
 }
 
-static int fixup_mcu_gpio_in_eth_wan_node(void *blob)
+static int get_phy_wan_node_offset(const void *blob)
+{
+	u32 phy_wan_phandle;
+
+	phy_wan_phandle = fdt_getprop_u32_default(blob, "ethernet2", "phy-handle", 0);
+	if (!phy_wan_phandle)
+		return -FDT_ERR_NOTFOUND;
+
+	return fdt_node_offset_by_phandle(blob, phy_wan_phandle);
+}
+
+static int fixup_mcu_gpio_in_phy_wan_node(void *blob)
 {
 	unsigned int mcu_phandle;
-	int eth_wan_node;
-	int ret;
+	int phy_wan_node, ret;
 
 	ret = fdt_increase_size(blob, 64);
 	if (ret < 0) {
@@ -990,21 +1000,17 @@ static int fixup_mcu_gpio_in_eth_wan_node(void *blob)
 		return ret;
 	}
 
-	eth_wan_node = fdt_path_offset(blob, "ethernet2");
-	if (eth_wan_node < 0)
-		return eth_wan_node;
+	phy_wan_node = get_phy_wan_node_offset(blob);
+	if (phy_wan_node < 0)
+		return phy_wan_node;
 
 	mcu_phandle = fdt_create_phandle_by_compatible(blob, "cznic,turris-omnia-mcu");
 	if (!mcu_phandle)
 		return -FDT_ERR_NOPHANDLES;
 
-	/* insert: phy-reset-gpios = <&mcu 2 gpio GPIO_ACTIVE_LOW>; */
-	ret = insert_mcu_gpio_prop(blob, eth_wan_node, "phy-reset-gpios",
-				   mcu_phandle, 2, ilog2(EXT_CTL_nRES_PHY), GPIO_ACTIVE_LOW);
-	if (ret < 0)
-		return ret;
-
-	return 0;
+	/* insert: reset-gpios = <&mcu 2 gpio GPIO_ACTIVE_LOW>; */
+	return insert_mcu_gpio_prop(blob, phy_wan_node, "reset-gpios",
+				    mcu_phandle, 2, ilog2(EXT_CTL_nRES_PHY), GPIO_ACTIVE_LOW);
 }
 
 static void fixup_atsha_node(void *blob)
@@ -1033,7 +1039,7 @@ int board_fix_fdt(void *blob)
 {
 	if (omnia_mcu_has_feature(FEAT_PERIPH_MCU)) {
 		fixup_mcu_gpio_in_pcie_nodes(blob);
-		fixup_mcu_gpio_in_eth_wan_node(blob);
+		fixup_mcu_gpio_in_phy_wan_node(blob);
 	}
 
 	fixup_msata_port_nodes(blob);
@@ -1218,14 +1224,14 @@ int ft_board_setup(void *blob, struct bd_info *bd)
 		int node;
 
 		/*
-		 * U-Boot's FDT blob contains phy-reset-gpios in ethernet2
-		 * node when MCU controls all peripherals resets.
+		 * U-Boot's FDT blob contains reset-gpios in ethernet2 PHY node when MCU
+		 * controls all peripherals resets.
 		 * Fixup MCU GPIO nodes in PCIe and eth wan nodes in this case.
 		 */
-		node = fdt_path_offset(gd->fdt_blob, "ethernet2");
-		if (node >= 0 && fdt_getprop(gd->fdt_blob, node, "phy-reset-gpios", NULL)) {
+		node = get_phy_wan_node_offset(gd->fdt_blob);
+		if (node >= 0 && fdt_getprop(gd->fdt_blob, node, "reset-gpios", NULL)) {
 			fixup_mcu_gpio_in_pcie_nodes(blob);
-			fixup_mcu_gpio_in_eth_wan_node(blob);
+			fixup_mcu_gpio_in_phy_wan_node(blob);
 		}
 
 	fixup_spi_nor_partitions(blob);
diff --git a/configs/turris_omnia_defconfig b/configs/turris_omnia_defconfig
index 938ad88ae5..08e3e21215 100644
--- a/configs/turris_omnia_defconfig
+++ b/configs/turris_omnia_defconfig
@@ -101,6 +101,7 @@ CONFIG_PHY_ANEG_TIMEOUT=8000
 CONFIG_PHY_MARVELL=y
 CONFIG_PHY_FIXED=y
 CONFIG_DM_DSA=y
+CONFIG_DM_ETH_PHY=y
 CONFIG_PHY_GIGE=y
 CONFIG_MV88E6XXX=y
 CONFIG_MVNETA=y

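For context, this is roughly how a reset-gpios property placed in the PHY node
is consumed with U-Boot's driver-model GPIO helpers. It is only an illustration
of the binding the patch switches to, not the actual DM_ETH_PHY uclass code;
'dev' is assumed to be the udevice bound to the PHY's DT node:

    #include <dm.h>
    #include <asm-generic/gpio.h>
    #include <linux/delay.h>

    static int example_phy_reset(struct udevice *dev)
    {
            struct gpio_desc reset_gpio;
            int ret;

            /* looks up "reset-gpios" in dev's own DT node */
            ret = gpio_request_by_name(dev, "reset-gpios", 0, &reset_gpio,
                                       GPIOD_IS_OUT);
            if (ret)
                    return ret;

            /* logical 1 = asserted; GPIO_ACTIVE_LOW polarity comes from DT */
            dm_gpio_set_value(&reset_gpio, 1);
            udelay(1000);
            dm_gpio_set_value(&reset_gpio, 0);

            return 0;
    }

The old phy-reset-gpios property, by contrast, had to be parsed by hand in each
MAC driver, which is exactly the duplication the note above complains about.
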
From patchwork Tue Jun 18 15:34:31 2024
From: Marek Behún
To: Stefan Roese
Cc: u-boot@lists.denx.de, Marek Behún
Subject: [PATCH u-boot-marvell 08/16] arm: mvebu: turris_omnia: Implement EEPROM layout for the 'eeprom' command
Date: Tue, 18 Jun 2024 17:34:31 +0200
Message-ID: <20240618153439.9518-9-kabel@kernel.org>
In-Reply-To: <20240618153439.9518-1-kabel@kernel.org>
References: <20240618153439.9518-1-kabel@kernel.org>

Implement Turris Omnia EEPROM layout for the 'eeprom' command. When the
'eeprom' command (with layout support) is enabled, we can now use the
'eeprom print' and 'eeprom update' commands, for example:

  => eeprom print
  Magic constant        34a04103
  RAM size in GB        2
  Wi-Fi Region
  CRC32 checksum        cecbc2a1

Signed-off-by: Marek Behún
---
 board/CZ.NIC/turris_omnia/Makefile |   1 +
 board/CZ.NIC/turris_omnia/eeprom.c | 109 +++++++++++++++++++++++++++++
 2 files changed, 110 insertions(+)
 create mode 100644 board/CZ.NIC/turris_omnia/eeprom.c

diff --git a/board/CZ.NIC/turris_omnia/Makefile b/board/CZ.NIC/turris_omnia/Makefile
index 341378b4e5..216e11958a 100644
--- a/board/CZ.NIC/turris_omnia/Makefile
+++ b/board/CZ.NIC/turris_omnia/Makefile
@@ -3,3 +3,4 @@
 # Copyright (C) 2017 Marek Behún
 
 obj-y := turris_omnia.o ../turris_atsha_otp.o ../turris_common.o
+obj-$(CONFIG_CMD_EEPROM_LAYOUT) += eeprom.o
diff --git a/board/CZ.NIC/turris_omnia/eeprom.c b/board/CZ.NIC/turris_omnia/eeprom.c
new file mode 100644
index 0000000000..a4f1dab469
--- /dev/null
+++ b/board/CZ.NIC/turris_omnia/eeprom.c
@@ -0,0 +1,109 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (C) 2024 Marek Behún
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define _DEF_FIELD(_n, _s, _t) \
+	{ _n, _s, NULL, eeprom_field_print_ ## _t, eeprom_field_update_ ## _t }
+
+static void eeprom_field_print_ramsz(const struct eeprom_field *field)
+{
+	printf(PRINT_FIELD_SEGMENT, field->name);
+	printf("%u\n", get_unaligned_le32(field->buf));
+}
+
+static int eeprom_field_update_ramsz(struct eeprom_field *field, char *value)
+{
+	u32 sz;
+
+	if (value[0] == '1' || value[0] == '2' || value[0] == '4')
+		sz = value[0] - '0';
+	else
+		return -1;
+
+	if (value[1] != '\0')
+		return -1;
+
+	put_unaligned_le32(sz, field->buf);
+
+	return 0;
+}
+
+static void eeprom_field_print_region(const struct eeprom_field *field)
+{
+	eeprom_field_print_ascii(field);
+}
+
+static int eeprom_field_update_region(struct eeprom_field *field, char *value)
+{
+	if (strlen(value) != 2) {
+		printf("%s: has to be 2 characters\n", field->name);
+		return -1;
+	}
+
+	memcpy(field->buf, value, 2);
+	memset(&field->buf[2], '\0', 2);
+
+	return 0;
+}
+
+static struct eeprom_field omnia_layout[] = {
+	_DEF_FIELD("Magic constant", 4, bin),
+	_DEF_FIELD("RAM size in GB", 4, ramsz),
+	_DEF_FIELD("Wi-Fi Region", 4, region),
+	_DEF_FIELD("CRC32 checksum", 4, bin),
+};
+
+static struct eeprom_field *crc_field = &omnia_layout[3];
+
+static int omnia_update_field(struct eeprom_layout *layout, char *field_name,
+			      char *new_data)
+{
+	struct eeprom_field *field;
+	int err;
+
+	if (!new_data)
+		return 0;
+
+	if (!field_name)
+		return -1;
+
+	field = eeprom_layout_find_field(layout, field_name, true);
+	if (!field)
+		return -1;
+
+	err = field->update(field, new_data);
+	if (err) {
+		printf("Invalid data for field %s\n", field_name);
+		return err;
+	}
+
+	if (field < crc_field) {
+		u32 crc = crc32(0, layout->data, 12);
+		put_unaligned_le32(crc, crc_field->buf);
+	}
+
+	return 0;
+}
+
+void eeprom_layout_assign(struct eeprom_layout *layout, int)
+{
+	layout->fields = omnia_layout;
+	layout->num_of_fields = ARRAY_SIZE(omnia_layout);
+	layout->update = omnia_update_field;
+	layout->data_size = 16;
+}
+
+int eeprom_layout_detect(unsigned char *)
+{
+	/* Turris Omnia has only one version of EEPROM layout */
+	return 0;
+}

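For reference, the checksum convention implemented above: the CRC32 field
covers the 12 bytes that precede it, stored little-endian. A small host-side
sketch of the corresponding check under that assumption (the struct name and
the use of zlib's crc32() instead of U-Boot's are ours):

    #include <stdint.h>
    #include <zlib.h>

    struct omnia_eeprom_v1 {        /* 16 bytes, mirrors the layout above */
            uint32_t magic;
            uint32_t ramsize;
            char region[4];
            uint32_t crc;           /* crc32 of the 12 bytes above */
    };

    static int omnia_eeprom_crc_ok(const struct omnia_eeprom_v1 *oep)
    {
            uint32_t crc = crc32(0, (const unsigned char *)oep, 12);

            /* assumes a little-endian host, matching the on-EEPROM format */
            return crc == oep->crc;
    }
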
From patchwork Tue Jun 18 15:34:32 2024
From: Marek Behún
To: Stefan Roese
Cc: u-boot@lists.denx.de, Marek Behún
Subject: [PATCH u-boot-marvell 09/16] arm: mvebu: turris_omnia: Enable the 'eeprom' command
Date: Tue, 18 Jun 2024 17:34:32 +0200
Message-ID: <20240618153439.9518-10-kabel@kernel.org>
In-Reply-To: <20240618153439.9518-1-kabel@kernel.org>
References: <20240618153439.9518-1-kabel@kernel.org>

Enable the 'eeprom' command with support for EEPROM layout for Turris
Omnia. Enable the i2c-eeprom driver so that the EEPROM is accessed via
driver model.

Signed-off-by: Marek Behún
---
 configs/turris_omnia_defconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/configs/turris_omnia_defconfig b/configs/turris_omnia_defconfig
index 08e3e21215..c8756a3a78 100644
--- a/configs/turris_omnia_defconfig
+++ b/configs/turris_omnia_defconfig
@@ -55,6 +55,8 @@ CONFIG_SPL_BOARD_INIT=y
 CONFIG_SPL_ENV_SUPPORT=y
 CONFIG_SPL_I2C=y
 CONFIG_SYS_MAXARGS=32
+CONFIG_CMD_EEPROM=y
+CONFIG_CMD_EEPROM_LAYOUT=y
 CONFIG_CMD_MEMTEST=y
 CONFIG_SYS_ALT_MEMTEST=y
 CONFIG_CMD_SHA1SUM=y
@@ -90,6 +92,7 @@ CONFIG_SPL_OF_TRANSLATE=y
 CONFIG_AHCI_PCI=y
 CONFIG_AHCI_MVEBU=y
 CONFIG_DM_PCA953X=y
+CONFIG_I2C_EEPROM=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_MV=y
 CONFIG_DM_MTD=y

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718724896; bh=Ep4flZGOVvlZlPR7Qu7CVKMrmFCnhW8WSDnXBckZ5Ss=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=I4VRTBrLbj/5Q3JiowItbRRAlB0VGRnK4RX3Nl3/+mNbJGpFWQ+5/MIL7uCjumD3b Dz02x1iIbM777GsM10TIyuaZIJnDnnYD4jeKxvSFu7YjT+nMKOgIcqM5uy2WTUZK7Y O/tRfYZ25B977IXPUWYgxO14WFFBbiH23BptMIwhbdG89V10rMZADR/9ijNf7UI+bq +vSRGYM/MMog0zWf7AF19DeU0ZpJG+AxCBiogsSWK5THi5H3fDaA4qtuobJr4ujjLP hYQcS7AoXHSBsw9ExhEmDDD9bsExM2e4crymTI6XYW6E8O/KPhpNtMr74057EkvP+j US3kV9o5fPWew== From: =?utf-8?q?Marek_Beh=C3=BAn?= To: Stefan Roese Cc: u-boot@lists.denx.de, =?utf-8?q?Marek_Beh=C3=BAn?= Subject: [PATCH u-boot-marvell 10/16] arm: mvebu: turris_omnia: Extend EEPROM info structure Date: Tue, 18 Jun 2024 17:34:33 +0200 Message-ID: <20240618153439.9518-11-kabel@kernel.org> X-Mailer: git-send-email 2.44.2 In-Reply-To: <20240618153439.9518-1-kabel@kernel.org> References: <20240618153439.9518-1-kabel@kernel.org> MIME-Version: 1.0 X-BeenThere: u-boot@lists.denx.de X-Mailman-Version: 2.1.39 Precedence: list List-Id: U-Boot discussion List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: u-boot-bounces@lists.denx.de Sender: "U-Boot" X-Virus-Scanned: clamav-milter 0.103.8 at phobos.denx.de X-Virus-Status: Clean Extend the Omnia EEPROM information structure in preparation for more variables to be stored there. Signed-off-by: Marek Behún --- board/CZ.NIC/turris_omnia/eeprom.c | 11 ++++++- board/CZ.NIC/turris_omnia/turris_omnia.c | 42 ++++++++++++++++++++---- 2 files changed, 46 insertions(+), 7 deletions(-) diff --git a/board/CZ.NIC/turris_omnia/eeprom.c b/board/CZ.NIC/turris_omnia/eeprom.c index a4f1dab469..ea13e95b37 100644 --- a/board/CZ.NIC/turris_omnia/eeprom.c +++ b/board/CZ.NIC/turris_omnia/eeprom.c @@ -60,9 +60,13 @@ static struct eeprom_field omnia_layout[] = { _DEF_FIELD("RAM size in GB", 4, ramsz), _DEF_FIELD("Wi-Fi Region", 4, region), _DEF_FIELD("CRC32 checksum", 4, bin), + _DEF_FIELD("Extended reserved fields", 44, reserved), + _DEF_FIELD("Extended CRC32 checksum", 4, bin), }; static struct eeprom_field *crc_field = &omnia_layout[3]; +static struct eeprom_field *ext_crc_field = + &omnia_layout[ARRAY_SIZE(omnia_layout) - 1]; static int omnia_update_field(struct eeprom_layout *layout, char *field_name, char *new_data) @@ -91,6 +95,11 @@ static int omnia_update_field(struct eeprom_layout *layout, char *field_name, put_unaligned_le32(crc, crc_field->buf); } + if (field < ext_crc_field) { + u32 crc = crc32(0, layout->data, 44); + put_unaligned_le32(crc, ext_crc_field->buf); + } + return 0; } @@ -99,7 +108,7 @@ void eeprom_layout_assign(struct eeprom_layout *layout, int) layout->fields = omnia_layout; layout->num_of_fields = ARRAY_SIZE(omnia_layout); layout->update = omnia_update_field; - layout->data_size = 16; + layout->data_size = 64; } int eeprom_layout_detect(unsigned char *) diff --git a/board/CZ.NIC/turris_omnia/turris_omnia.c b/board/CZ.NIC/turris_omnia/turris_omnia.c index 100a991406..c2f91b762f 100644 --- a/board/CZ.NIC/turris_omnia/turris_omnia.c +++ b/board/CZ.NIC/turris_omnia/turris_omnia.c @@ -429,12 +429,40 @@ struct omnia_eeprom { u32 ramsize; char region[4]; u32 crc; + + /* second part (only considered if crc2 is not all-ones) */ + u8 reserved[44]; + u32 crc2; }; +static bool is_omnia_eeprom_second_part_valid(const struct omnia_eeprom *oep) +{ + return oep->crc2 != 0xffffffff; +} + +static void make_omnia_eeprom_second_part_invalid(struct omnia_eeprom *oep) +{ + oep->crc2 = 0xffffffff; 
+} + +static bool check_eeprom_crc(const void *buf, size_t size, u32 expected, + const char *name) +{ + u32 crc; + + crc = crc32(0, buf, size); + if (crc != expected) { + printf("bad %s EEPROM CRC (stored %08x, computed %08x)\n", + name, expected, crc); + return false; + } + + return true; +} + static bool omnia_read_eeprom(struct omnia_eeprom *oep) { struct udevice *chip; - u32 crc; int ret; chip = omnia_get_i2c_chip("EEPROM", OMNIA_I2C_EEPROM_CHIP_ADDR, @@ -455,12 +483,14 @@ static bool omnia_read_eeprom(struct omnia_eeprom *oep) return false; } - crc = crc32(0, (void *)oep, sizeof(*oep) - 4); - if (crc != oep->crc) { - printf("bad EEPROM CRC (stored %08x, computed %08x)\n", - oep->crc, crc); + if (!check_eeprom_crc(oep, offsetof(struct omnia_eeprom, crc), oep->crc, + "first")) return false; - } + + if (is_omnia_eeprom_second_part_valid(oep) && + !check_eeprom_crc(oep, offsetof(struct omnia_eeprom, crc2), + oep->crc2, "second")) + make_omnia_eeprom_second_part_invalid(oep); return true; } From patchwork Tue Jun 18 15:34:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Marek_Beh=C3=BAn?= X-Patchwork-Id: 1949266 X-Patchwork-Delegate: sr@denx.de Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256 header.s=k20201202 header.b=aW8O2oKm; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=lists.denx.de (client-ip=2a01:238:438b:c500:173d:9f52:ddab:ee01; helo=phobos.denx.de; envelope-from=u-boot-bounces@lists.denx.de; receiver=patchwork.ozlabs.org) Received: from phobos.denx.de (phobos.denx.de [IPv6:2a01:238:438b:c500:173d:9f52:ddab:ee01]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4W3W874wkwz20KL for ; Wed, 19 Jun 2024 01:36:55 +1000 (AEST) Received: from h2850616.stratoserver.net (localhost [IPv6:::1]) by phobos.denx.de (Postfix) with ESMTP id 8FA6B88422; Tue, 18 Jun 2024 17:35:04 +0200 (CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=u-boot-bounces@lists.denx.de Authentication-Results: phobos.denx.de; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.b="aW8O2oKm"; dkim-atps=neutral Received: by phobos.denx.de (Postfix, from userid 109) id C8BBE88422; Tue, 18 Jun 2024 17:35:01 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on phobos.denx.de X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, RCVD_IN_VALIDITY_RPBL_BLOCKED,RCVD_IN_VALIDITY_SAFE_BLOCKED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.2 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by phobos.denx.de (Postfix) with ESMTPS id 98D0B8840E for ; Tue, 18 Jun 2024 17:34:59 +0200 (CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; 
spf=pass smtp.mailfrom=kabel@kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 82991618BC; Tue, 18 Jun 2024 15:34:58 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5ACBAC4AF1C; Tue, 18 Jun 2024 15:34:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718724898; bh=FQNCbXfLDMVM6k4shY+rW+zJ3n3uupp5ct3DLUSUdUY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=aW8O2oKm2FHaNM0Bn4TyJP2YaZIBKV+jzru1TnFARQ0iyMCbouXpSk+udRzSjqceb jDisu3KYx2S8GWBRh6MsPzpu5bDS24vCPzvsh7dyuxxqjQ0IR1hJbU7HuDc/vTjGEP AsmlNKy/1MMFss6eWEoLGStvmXxPWsMWpW0jWF6uNTkmlRyIgqsJHL5zgz1VbVllkt w2zsZxGGW7b5bvflIwViyXtLLaXq96LPAX1p2ff3e8N3yuqZYJCDy+LGkbxUH0DUyP yXh2OVTx7yeq1miOD87dfYdgtOdOjT/lSybiXuJJ9pQzvos6S4oKvU48cYIaH08g4n zlq+P7iE+d4WQ== From: =?utf-8?q?Marek_Beh=C3=BAn?= To: Stefan Roese Cc: u-boot@lists.denx.de, =?utf-8?q?Marek_Beh=C3=BAn?= Subject: [PATCH u-boot-marvell 11/16] arm: mvebu: turris_omnia: Read DDR speed from EEPROM Date: Tue, 18 Jun 2024 17:34:34 +0200 Message-ID: <20240618153439.9518-12-kabel@kernel.org> X-Mailer: git-send-email 2.44.2 In-Reply-To: <20240618153439.9518-1-kabel@kernel.org> References: <20240618153439.9518-1-kabel@kernel.org> MIME-Version: 1.0 X-BeenThere: u-boot@lists.denx.de X-Mailman-Version: 2.1.39 Precedence: list List-Id: U-Boot discussion List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: u-boot-bounces@lists.denx.de Sender: "U-Boot" X-Virus-Scanned: clamav-milter 0.103.8 at phobos.denx.de X-Virus-Status: Clean Some Turris Omnia boards experience memory issues, and by experimentation we found that some of these issues can be solved by slowing DDR speed. Add a new field in the extended EEPROM information structure, ddr_speed. Support several values in this field (for now 1066F, 1333H, and the default, 1600K) and use it to overwrite the DDR topology parameters used by the DDR training algorithm. 
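As a minimal standalone sketch of the mechanism this patch adds (the struct, enum and function names below are illustrative placeholders, not the SPEED_BIN_DDR_*/MV_DDR_FREQ_* identifiers or the omnia_* helpers the actual diff uses): the 5-character grade string read from the EEPROM is matched against a small table, a match overrides the topology's speed bin and frequency before DDR training runs, and an unset (0xff-filled) or unknown value leaves the default 1600K settings untouched.

#include <stdio.h>
#include <string.h>

enum speed_bin { BIN_1066F, BIN_1333H, BIN_1600K };
enum ddr_freq  { FREQ_533, FREQ_667, FREQ_800 };

struct ddr_speed_setting {
	char name[6];		/* NUL-terminated copy of the EEPROM field */
	enum speed_bin bin;
	enum ddr_freq freq;
};

static const struct ddr_speed_setting settings[] = {
	{ "1066F", BIN_1066F, FREQ_533 },
	{ "1333H", BIN_1333H, FREQ_667 },
	{ "1600K", BIN_1600K, FREQ_800 },	/* default when EEPROM is empty */
};

/* Return the matching table entry, or NULL for unset/unknown values. */
static const struct ddr_speed_setting *lookup(const char *eeprom_speed)
{
	if (!eeprom_speed || eeprom_speed[0] == '\0' ||
	    (unsigned char)eeprom_speed[0] == 0xff)
		return NULL;		/* field unset: keep board default */

	for (size_t i = 0; i < sizeof(settings) / sizeof(settings[0]); i++)
		if (!strncmp(eeprom_speed, settings[i].name, 5))
			return &settings[i];

	return NULL;			/* unknown grade: keep board default */
}

int main(void)
{
	const struct ddr_speed_setting *s = lookup("1333H");

	if (s)
		printf("override: bin=%d freq=%d\n", s->bin, s->freq);
	else
		printf("keeping default 1600K topology\n");

	return 0;
}

The 0xff check mirrors the content of an erased/unprogrammed EEPROM cell, so a board whose extended EEPROM area was never written keeps behaving exactly as before this change.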
Signed-off-by: Marek Behún --- board/CZ.NIC/turris_omnia/eeprom.c | 41 +++++++++- board/CZ.NIC/turris_omnia/turris_omnia.c | 99 +++++++++++++++++++++++- 2 files changed, 135 insertions(+), 5 deletions(-) diff --git a/board/CZ.NIC/turris_omnia/eeprom.c b/board/CZ.NIC/turris_omnia/eeprom.c index ea13e95b37..32572481ce 100644 --- a/board/CZ.NIC/turris_omnia/eeprom.c +++ b/board/CZ.NIC/turris_omnia/eeprom.c @@ -55,12 +55,49 @@ static int eeprom_field_update_region(struct eeprom_field *field, char *value) return 0; } +static void eeprom_field_print_ddr_speed(const struct eeprom_field *field) +{ + printf(PRINT_FIELD_SEGMENT, field->name); + + if (field->buf[0] == '\0' || field->buf[0] == 0xff) + puts("(empty, defaults to 1600K)\n"); + else + printf("%.5s\n", field->buf); +} + +bool omnia_valid_ddr_speed(const char *name); +void omnia_print_ddr_speeds(void); + +static int eeprom_field_update_ddr_speed(struct eeprom_field *field, + char *value) +{ + if (value[0] == '\0') { + /* setting default value */ + memset(field->buf, 0xff, field->size); + + return 0; + } + + if (!omnia_valid_ddr_speed(value)) { + printf("%s: invalid setting, supported values are:\n ", + field->name); + omnia_print_ddr_speeds(); + + return -1; + } + + strncpy(field->buf, value, field->size); + + return 0; +} + static struct eeprom_field omnia_layout[] = { _DEF_FIELD("Magic constant", 4, bin), _DEF_FIELD("RAM size in GB", 4, ramsz), _DEF_FIELD("Wi-Fi Region", 4, region), _DEF_FIELD("CRC32 checksum", 4, bin), - _DEF_FIELD("Extended reserved fields", 44, reserved), + _DEF_FIELD("DDR speed", 5, ddr_speed), + _DEF_FIELD("Extended reserved fields", 39, reserved), _DEF_FIELD("Extended CRC32 checksum", 4, bin), }; @@ -96,7 +133,7 @@ static int omnia_update_field(struct eeprom_layout *layout, char *field_name, } if (field < ext_crc_field) { - u32 crc = crc32(0, layout->data, 44); + u32 crc = crc32(0, layout->data, 60); put_unaligned_le32(crc, ext_crc_field->buf); } diff --git a/board/CZ.NIC/turris_omnia/turris_omnia.c b/board/CZ.NIC/turris_omnia/turris_omnia.c index c2f91b762f..544784e860 100644 --- a/board/CZ.NIC/turris_omnia/turris_omnia.c +++ b/board/CZ.NIC/turris_omnia/turris_omnia.c @@ -431,7 +431,8 @@ struct omnia_eeprom { u32 crc; /* second part (only considered if crc2 is not all-ones) */ - u8 reserved[44]; + char ddr_speed[5]; + u8 reserved[39]; u32 crc2; }; @@ -520,6 +521,26 @@ static int omnia_get_ram_size_gb(void) return ram_size; } +static const char *omnia_get_ddr_speed(void) +{ + struct omnia_eeprom oep; + static char speed[sizeof(oep.ddr_speed) + 1]; + + if (!omnia_read_eeprom(&oep)) + return NULL; + + if (!is_omnia_eeprom_second_part_valid(&oep)) + return NULL; + + if (!oep.ddr_speed[0] || oep.ddr_speed[0] == 0xff) + return NULL; + + memcpy(&speed, &oep.ddr_speed, sizeof(oep.ddr_speed)); + speed[sizeof(speed) - 1] = '\0'; + + return speed; +} + static const char * const omnia_get_mcu_type(void) { static char result[] = "xxxxxxx (with peripheral resets)"; @@ -634,12 +655,84 @@ static struct mv_ddr_topology_map board_topology_map_2g = { {0} /* timing parameters */ }; +static const struct omnia_ddr_speed { + char name[5]; + u8 speed_bin; + u8 freq; +} omnia_ddr_speeds[] = { + { "1066F", SPEED_BIN_DDR_1066F, MV_DDR_FREQ_533 }, + { "1333H", SPEED_BIN_DDR_1333H, MV_DDR_FREQ_667 }, + { "1600K", SPEED_BIN_DDR_1600K, MV_DDR_FREQ_800 }, +}; + +static const struct omnia_ddr_speed *find_ddr_speed_setting(const char *name) +{ + for (int i = 0; i < ARRAY_SIZE(omnia_ddr_speeds); ++i) + if (!strncmp(name, omnia_ddr_speeds[i].name, 
5)) + return &omnia_ddr_speeds[i]; + + return NULL; +} + +bool omnia_valid_ddr_speed(const char *name) +{ + return find_ddr_speed_setting(name) != NULL; +} + +void omnia_print_ddr_speeds(void) +{ + for (int i = 0; i < ARRAY_SIZE(omnia_ddr_speeds); ++i) + printf("%.5s%s", omnia_ddr_speeds[i].name, + i == ARRAY_SIZE(omnia_ddr_speeds) - 1 ? "\n" : ", "); +} + +static void fixup_speed_in_ddr_topology(struct mv_ddr_topology_map *topology) +{ + typeof(topology->interface_params[0]) *params; + const struct omnia_ddr_speed *setting; + const char *speed; + static bool done; + + if (done) + return; + + done = true; + + speed = omnia_get_ddr_speed(); + if (!speed) + return; + + setting = find_ddr_speed_setting(speed); + if (!setting) { + printf("Unsupported value %s for DDR3 speed in EEPROM!\n", + speed); + return; + } + + params = &topology->interface_params[0]; + + /* don't inform if we are not changing the speed from the default one */ + if (params->speed_bin_index == setting->speed_bin) + return; + + printf("Fixing up DDR3 speed (EEPROM defines %s)\n", speed); + + params->speed_bin_index = setting->speed_bin; + params->memory_freq = setting->freq; +} + struct mv_ddr_topology_map *mv_ddr_topology_map_get(void) { + struct mv_ddr_topology_map *topology; + if (omnia_get_ram_size_gb() == 2) - return &board_topology_map_2g; + topology = &board_topology_map_2g; else - return &board_topology_map_1g; + topology = &board_topology_map_1g; + + fixup_speed_in_ddr_topology(topology); + + return topology; } static int set_regdomain(void) From patchwork Tue Jun 18 15:34:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Marek_Beh=C3=BAn?= X-Patchwork-Id: 1949435 X-Patchwork-Delegate: sr@denx.de Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256 header.s=k20201202 header.b=llJP76rR; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=lists.denx.de (client-ip=85.214.62.61; helo=phobos.denx.de; envelope-from=u-boot-bounces@lists.denx.de; receiver=patchwork.ozlabs.org) Received: from phobos.denx.de (phobos.denx.de [85.214.62.61]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4W3bd45Qkmz20Ws for ; Wed, 19 Jun 2024 04:58:48 +1000 (AEST) Received: from h2850616.stratoserver.net (localhost [IPv6:::1]) by phobos.denx.de (Postfix) with ESMTP id 39488882F9; Tue, 18 Jun 2024 20:58:30 +0200 (CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=u-boot-bounces@lists.denx.de Authentication-Results: phobos.denx.de; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.b="llJP76rR"; dkim-atps=neutral Received: by phobos.denx.de (Postfix, from userid 109) id 80B9B88363; Tue, 18 Jun 2024 17:35:09 +0200 (CEST) Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by phobos.denx.de (Postfix) with ESMTPS id 20CCE884C3 for ; Tue, 18 Jun 2024 17:35:08 +0200 (CEST) Authentication-Results: 
phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=kabel@kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id AD0A2CE1B27; Tue, 18 Jun 2024 15:35:01 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D0873C3277B; Tue, 18 Jun 2024 15:34:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718724901; bh=31TPqg3Rj6bmRujegyIqmFzBS9vhkfMr4M5wmOQgnW4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=llJP76rRZ0ixtmBEfCHU3VkGwT16jiq5yFI9NV4Mj7EdtNf0/NQkEbGwFiZ811hgY OERia1L2r9/oRIQvcITvVZGSoxcSKZIBUSsrY7jwHcc+5URJDJIwfUCHrDovRPIH43 MmumeOJGH9X5rom7mer4E7TGrxR4gYSUy0kjx2D+PumKU8GlwLbTSl0PnqmM3Ov2PG ZJNDrTkpLLtljBpEUlGSDhCcBSBBpvaB3A9x1+171dX5d1xzPxPIYrjo+TSUgheWrK Q5+y4rApPLMyNany7p3x8hCXNdoDhN719ZN8wLgGCef7ttUTWY1sx2+LdfotjJMOWS kbj/kwaEZy0DA== From: =?utf-8?q?Marek_Beh=C3=BAn?= To: Stefan Roese Cc: u-boot@lists.denx.de, =?utf-8?q?Marek_Beh=C3=BAn?= Subject: [PATCH u-boot-marvell 12/16] ddr: marvell: a38x: Import old DDR training code from 2017 version of U-Boot Date: Tue, 18 Jun 2024 17:34:35 +0200 Message-ID: <20240618153439.9518-13-kabel@kernel.org> X-Mailer: git-send-email 2.44.2 In-Reply-To: <20240618153439.9518-1-kabel@kernel.org> References: <20240618153439.9518-1-kabel@kernel.org> MIME-Version: 1.0 X-Mailman-Approved-At: Tue, 18 Jun 2024 20:58:28 +0200 X-BeenThere: u-boot@lists.denx.de X-Mailman-Version: 2.1.39 Precedence: list List-Id: U-Boot discussion List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: u-boot-bounces@lists.denx.de Sender: "U-Boot" X-Virus-Scanned: clamav-milter 0.103.8 at phobos.denx.de X-Virus-Status: Clean Import DDR training code from commit 1b69ce2fc0ec ("arm: mvebu: ddr3_debug: remove self assignments") into drivers/ddr/marvell/a38x/old/. The code is not used yet. Explanation: Since 2019, on some Turris Omnia boards we have been having problems with newer versions of Marvell's DDR3 training code for Armada 38x, which is ported from mv-ddr-marvell [1] into U-Boot's drivers/ddr/marvell/a38x/ directory: - sometimes the DDR3 training fails on some older boards, sometimes it fails on some newer boards - other times it succeeds, but some boards experience crashes of the operating system after running for some time. Using the stock version of Turris Omnia's U-Boot solved these issues, but this solution was not satisfactory, since we wanted features from newer U-Boot. Back in 2020-2022 we spent several months trying to debug the issues, working with Marvell, on our own, and also with the U-Boot community, but these issues still persist. One solution we used back in 2019 was a "hybrid U-Boot": the SPL part (containing the DDR3 training code) was taken from the stock version, while the U-Boot proper part was the current U-Boot at the time. This solution also has its drawbacks, the main one being the need to glue together binaries from two separate builds. Since then there have been some more changes to the DDR3 training code in upstream mv-ddr-marvell that have been ported to U-Boot. We have provided our users with experimental builds of U-Boot in TurrisOS so that they could try upgrading the firmware and let us know whether those problems still exist. And they do. We do not have the time nor the manpower to debug this problem and fix it properly.
Marvell was also no able to provide a solution to this, probably because they do not have the manpower as well. I have therefore come up with this "not that pretty" solution: take the DDR3 training code from an older version of U-Boot that is known to work, put it into current U-Boot under old/ subdirectory within drivers/ddr/marvell/a38x/, build into the SPL binary both the old and new versions and make it possible to select the old version via an env variable. [1] https://github.com/MarvellEmbeddedProcessors/mv-ddr-marvell Signed-off-by: Marek Behún --- drivers/ddr/marvell/a38x/old/Makefile | 18 + drivers/ddr/marvell/a38x/old/ddr3_a38x.c | 736 +++++ drivers/ddr/marvell/a38x/old/ddr3_a38x.h | 93 + .../marvell/a38x/old/ddr3_a38x_mc_static.h | 226 ++ .../ddr/marvell/a38x/old/ddr3_a38x_topology.h | 22 + .../ddr/marvell/a38x/old/ddr3_a38x_training.c | 39 + drivers/ddr/marvell/a38x/old/ddr3_debug.c | 1526 ++++++++++ .../marvell/a38x/old/ddr3_hws_hw_training.c | 147 + .../marvell/a38x/old/ddr3_hws_hw_training.h | 49 + .../a38x/old/ddr3_hws_hw_training_def.h | 464 +++ .../marvell/a38x/old/ddr3_hws_sil_training.h | 17 + drivers/ddr/marvell/a38x/old/ddr3_init.c | 768 +++++ drivers/ddr/marvell/a38x/old/ddr3_init.h | 393 +++ .../ddr/marvell/a38x/old/ddr3_logging_def.h | 101 + .../marvell/a38x/old/ddr3_patterns_64bit.h | 924 ++++++ .../ddr/marvell/a38x/old/ddr3_topology_def.h | 76 + drivers/ddr/marvell/a38x/old/ddr3_training.c | 2649 +++++++++++++++++ .../ddr/marvell/a38x/old/ddr3_training_bist.c | 288 ++ .../a38x/old/ddr3_training_centralization.c | 711 +++++ .../ddr/marvell/a38x/old/ddr3_training_db.c | 651 ++++ .../marvell/a38x/old/ddr3_training_hw_algo.c | 685 +++++ .../marvell/a38x/old/ddr3_training_hw_algo.h | 14 + .../ddr/marvell/a38x/old/ddr3_training_ip.h | 178 ++ .../marvell/a38x/old/ddr3_training_ip_bist.h | 54 + .../old/ddr3_training_ip_centralization.h | 15 + .../marvell/a38x/old/ddr3_training_ip_db.h | 34 + .../marvell/a38x/old/ddr3_training_ip_def.h | 173 ++ .../a38x/old/ddr3_training_ip_engine.c | 1353 +++++++++ .../a38x/old/ddr3_training_ip_engine.h | 85 + .../marvell/a38x/old/ddr3_training_ip_flow.h | 349 +++ .../marvell/a38x/old/ddr3_training_ip_pbs.h | 41 + .../a38x/old/ddr3_training_ip_prv_if.h | 107 + .../a38x/old/ddr3_training_ip_static.h | 31 + .../marvell/a38x/old/ddr3_training_leveling.c | 1835 ++++++++++++ .../marvell/a38x/old/ddr3_training_leveling.h | 17 + .../ddr/marvell/a38x/old/ddr3_training_pbs.c | 994 +++++++ .../marvell/a38x/old/ddr3_training_static.c | 537 ++++ .../ddr/marvell/a38x/old/ddr_topology_def.h | 121 + .../ddr/marvell/a38x/old/ddr_training_ip_db.h | 16 + drivers/ddr/marvell/a38x/old/silicon_if.h | 17 + drivers/ddr/marvell/a38x/old/xor.h | 92 + 41 files changed, 16646 insertions(+) create mode 100644 drivers/ddr/marvell/a38x/old/Makefile create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_a38x.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_a38x.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_a38x_mc_static.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_a38x_topology.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_a38x_training.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_debug.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training_def.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_hws_sil_training.h create mode 100644 
drivers/ddr/marvell/a38x/old/ddr3_init.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_init.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_logging_def.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_patterns_64bit.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_topology_def.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_bist.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_centralization.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_db.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_hw_algo.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_hw_algo.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_bist.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_centralization.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_db.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_def.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_flow.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_pbs.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_prv_if.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_ip_static.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_leveling.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_leveling.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_pbs.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr3_training_static.c create mode 100644 drivers/ddr/marvell/a38x/old/ddr_topology_def.h create mode 100644 drivers/ddr/marvell/a38x/old/ddr_training_ip_db.h create mode 100644 drivers/ddr/marvell/a38x/old/silicon_if.h create mode 100644 drivers/ddr/marvell/a38x/old/xor.h diff --git a/drivers/ddr/marvell/a38x/old/Makefile b/drivers/ddr/marvell/a38x/old/Makefile new file mode 100644 index 0000000000..e7b723bb24 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/Makefile @@ -0,0 +1,18 @@ +# +# SPDX-License-Identifier: GPL-2.0+ +# + +obj-$(CONFIG_SPL_BUILD) += ddr3_a38x.o +obj-$(CONFIG_SPL_BUILD) += ddr3_a38x_training.o +obj-$(CONFIG_SPL_BUILD) += ddr3_debug.o +obj-$(CONFIG_SPL_BUILD) += ddr3_hws_hw_training.o +obj-$(CONFIG_SPL_BUILD) += ddr3_init.o +obj-$(CONFIG_SPL_BUILD) += ddr3_training.o +obj-$(CONFIG_SPL_BUILD) += ddr3_training_bist.o +obj-$(CONFIG_SPL_BUILD) += ddr3_training_centralization.o +obj-$(CONFIG_SPL_BUILD) += ddr3_training_db.o +obj-$(CONFIG_SPL_BUILD) += ddr3_training_hw_algo.o +obj-$(CONFIG_SPL_BUILD) += ddr3_training_ip_engine.o +obj-$(CONFIG_SPL_BUILD) += ddr3_training_leveling.o +obj-$(CONFIG_SPL_BUILD) += ddr3_training_pbs.o +obj-$(CONFIG_SPL_BUILD) += ddr3_training_static.o diff --git a/drivers/ddr/marvell/a38x/old/ddr3_a38x.c b/drivers/ddr/marvell/a38x/old/ddr3_a38x.c new file mode 100644 index 0000000000..cc2e498d50 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_a38x.c @@ -0,0 +1,736 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include +#include + +#include "ddr3_init.h" + +#define A38X_NUMBER_OF_INTERFACES 5 + +#define SAR_DEV_ID_OFFS 27 +#define SAR_DEV_ID_MASK 0x7 + +/* Termal Sensor Registers */ +#define TSEN_STATE_REG 0xe4070 +#define TSEN_STATE_OFFSET 31 +#define TSEN_STATE_MASK (0x1 << TSEN_STATE_OFFSET) +#define TSEN_CONF_REG 0xe4074 +#define TSEN_CONF_RST_OFFSET 8 +#define TSEN_CONF_RST_MASK (0x1 << TSEN_CONF_RST_OFFSET) +#define TSEN_STATUS_REG 0xe4078 +#define TSEN_STATUS_READOUT_VALID_OFFSET 10 +#define TSEN_STATUS_READOUT_VALID_MASK (0x1 << \ + TSEN_STATUS_READOUT_VALID_OFFSET) +#define TSEN_STATUS_TEMP_OUT_OFFSET 0 +#define TSEN_STATUS_TEMP_OUT_MASK (0x3ff << TSEN_STATUS_TEMP_OUT_OFFSET) + +static struct dfx_access interface_map[] = { + /* Pipe Client */ + { 0, 17 }, + { 1, 7 }, + { 1, 11 }, + { 0, 3 }, + { 1, 25 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 } +}; + +/* This array hold the board round trip delay (DQ and CK) per */ +struct trip_delay_element a38x_board_round_trip_delay_array[] = { + /* 1st board */ + /* Interface bus DQS-delay CK-delay */ + { 3952, 5060 }, + { 3192, 4493 }, + { 4785, 6677 }, + { 3413, 7267 }, + { 4282, 6086 }, /* ECC PUP */ + { 3952, 5134 }, + { 3192, 4567 }, + { 4785, 6751 }, + { 3413, 7341 }, + { 4282, 6160 }, /* ECC PUP */ + + /* 2nd board */ + /* Interface bus DQS-delay CK-delay */ + { 3952, 5060 }, + { 3192, 4493 }, + { 4785, 6677 }, + { 3413, 7267 }, + { 4282, 6086 }, /* ECC PUP */ + { 3952, 5134 }, + { 3192, 4567 }, + { 4785, 6751 }, + { 3413, 7341 }, + { 4282, 6160 } /* ECC PUP */ +}; + +#ifdef STATIC_ALGO_SUPPORT +/* package trace */ +static struct trip_delay_element a38x_package_round_trip_delay_array[] = { + /* IF BUS DQ_DELAY CK_DELAY */ + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 }, + { 0, 0 } +}; + +static int a38x_silicon_delay_offset[] = { + /* board 0 */ + 0, + /* board 1 */ + 0, + /* board 2 */ + 0 +}; +#endif + +static u8 a38x_bw_per_freq[DDR_FREQ_LIMIT] = { + 0x3, /* DDR_FREQ_100 */ + 0x4, /* DDR_FREQ_400 */ + 0x4, /* DDR_FREQ_533 */ + 0x5, /* DDR_FREQ_667 */ + 0x5, /* DDR_FREQ_800 */ + 0x5, /* DDR_FREQ_933 */ + 0x5, /* DDR_FREQ_1066 */ + 0x3, /* DDR_FREQ_311 */ + 0x3, /* DDR_FREQ_333 */ + 0x4, /* DDR_FREQ_467 */ + 0x5, /* DDR_FREQ_850 */ + 0x5, /* DDR_FREQ_600 */ + 0x3, /* DDR_FREQ_300 */ + 0x5, /* DDR_FREQ_900 */ + 0x3, /* DDR_FREQ_360 */ + 0x5 /* DDR_FREQ_1000 */ +}; + +static u8 a38x_rate_per_freq[DDR_FREQ_LIMIT] = { + /*TBD*/ 0x1, /* DDR_FREQ_100 */ + 0x2, /* DDR_FREQ_400 */ + 0x2, /* DDR_FREQ_533 */ + 0x2, /* DDR_FREQ_667 */ + 0x2, /* DDR_FREQ_800 */ + 0x3, /* DDR_FREQ_933 */ + 0x3, /* DDR_FREQ_1066 */ + 0x1, /* DDR_FREQ_311 */ + 0x1, /* DDR_FREQ_333 */ + 0x2, /* DDR_FREQ_467 */ + 0x2, /* DDR_FREQ_850 */ + 0x2, /* DDR_FREQ_600 */ + 0x1, /* DDR_FREQ_300 */ + 0x2, /* DDR_FREQ_900 */ + 0x1, /* DDR_FREQ_360 */ + 0x2 /* DDR_FREQ_1000 */ +}; + +static u16 a38x_vco_freq_per_sar[] = { + 666, /* 0 */ + 1332, + 800, + 1600, + 1066, + 2132, + 1200, + 2400, + 1332, + 1332, + 1500, + 1500, + 1600, /* 12 */ + 1600, + 1700, + 1700, + 1866, + 1866, + 1800, /* 18 */ + 2000, + 2000, + 4000, + 2132, + 2132, + 2300, + 2300, + 2400, + 2400, + 2500, + 2500, + 800 +}; + +u32 pipe_multicast_mask; + +u32 dq_bit_map_2_phy_pin[] = { + 1, 0, 2, 6, 9, 8, 3, 
7, /* 0 */ + 8, 9, 1, 7, 2, 6, 3, 0, /* 1 */ + 3, 9, 7, 8, 1, 0, 2, 6, /* 2 */ + 1, 0, 6, 2, 8, 3, 7, 9, /* 3 */ + 0, 1, 2, 9, 7, 8, 3, 6, /* 4 */ +}; + +static int ddr3_tip_a38x_set_divider(u8 dev_num, u32 if_id, + enum hws_ddr_freq freq); + +/* + * Read temperature TJ value + */ +u32 ddr3_ctrl_get_junc_temp(u8 dev_num) +{ + int reg = 0; + + /* Initiates TSEN hardware reset once */ + if ((reg_read(TSEN_CONF_REG) & TSEN_CONF_RST_MASK) == 0) + reg_bit_set(TSEN_CONF_REG, TSEN_CONF_RST_MASK); + mdelay(10); + + /* Check if the readout field is valid */ + if ((reg_read(TSEN_STATUS_REG) & TSEN_STATUS_READOUT_VALID_MASK) == 0) { + printf("%s: TSEN not ready\n", __func__); + return 0; + } + + reg = reg_read(TSEN_STATUS_REG); + reg = (reg & TSEN_STATUS_TEMP_OUT_MASK) >> TSEN_STATUS_TEMP_OUT_OFFSET; + + return ((((10000 * reg) / 21445) * 1000) - 272674) / 1000; +} + +/* + * Name: ddr3_tip_a38x_get_freq_config. + * Desc: + * Args: + * Notes: + * Returns: MV_OK if success, other error code if fail. + */ +int ddr3_tip_a38x_get_freq_config(u8 dev_num, enum hws_ddr_freq freq, + struct hws_tip_freq_config_info + *freq_config_info) +{ + if (a38x_bw_per_freq[freq] == 0xff) + return MV_NOT_SUPPORTED; + + if (freq_config_info == NULL) + return MV_BAD_PARAM; + + freq_config_info->bw_per_freq = a38x_bw_per_freq[freq]; + freq_config_info->rate_per_freq = a38x_rate_per_freq[freq]; + freq_config_info->is_supported = 1; + + return MV_OK; +} + +/* + * Name: ddr3_tip_a38x_pipe_enable. + * Desc: + * Args: + * Notes: + * Returns: MV_OK if success, other error code if fail. + */ +int ddr3_tip_a38x_pipe_enable(u8 dev_num, enum hws_access_type interface_access, + u32 if_id, int enable) +{ + u32 data_value, pipe_enable_mask = 0; + + if (enable == 0) { + pipe_enable_mask = 0; + } else { + if (interface_access == ACCESS_TYPE_MULTICAST) + pipe_enable_mask = pipe_multicast_mask; + else + pipe_enable_mask = (1 << interface_map[if_id].pipe); + } + + CHECK_STATUS(ddr3_tip_reg_read + (dev_num, PIPE_ENABLE_ADDR, &data_value, MASK_ALL_BITS)); + data_value = (data_value & (~0xff)) | pipe_enable_mask; + CHECK_STATUS(ddr3_tip_reg_write(dev_num, PIPE_ENABLE_ADDR, data_value)); + + return MV_OK; +} + +/* + * Name: ddr3_tip_a38x_if_write. + * Desc: + * Args: + * Notes: + * Returns: MV_OK if success, other error code if fail. + */ +int ddr3_tip_a38x_if_write(u8 dev_num, enum hws_access_type interface_access, + u32 if_id, u32 reg_addr, u32 data_value, + u32 mask) +{ + u32 ui_data_read; + + if (mask != MASK_ALL_BITS) { + CHECK_STATUS(ddr3_tip_a38x_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, reg_addr, + &ui_data_read, MASK_ALL_BITS)); + data_value = (ui_data_read & (~mask)) | (data_value & mask); + } + + reg_write(reg_addr, data_value); + + return MV_OK; +} + +/* + * Name: ddr3_tip_a38x_if_read. + * Desc: + * Args: + * Notes: + * Returns: MV_OK if success, other error code if fail. + */ +int ddr3_tip_a38x_if_read(u8 dev_num, enum hws_access_type interface_access, + u32 if_id, u32 reg_addr, u32 *data, u32 mask) +{ + *data = reg_read(reg_addr) & mask; + + return MV_OK; +} + +/* + * Name: ddr3_tip_a38x_select_ddr_controller. + * Desc: Enable/Disable access to Marvell's server. + * Args: dev_num - device number + * enable - whether to enable or disable the server + * Notes: + * Returns: MV_OK if success, other error code if fail. 
+ */ +int ddr3_tip_a38x_select_ddr_controller(u8 dev_num, int enable) +{ + u32 reg; + + reg = reg_read(CS_ENABLE_REG); + + if (enable) + reg |= (1 << 6); + else + reg &= ~(1 << 6); + + reg_write(CS_ENABLE_REG, reg); + + return MV_OK; +} + +/* + * Name: ddr3_tip_init_a38x_silicon. + * Desc: init Training SW DB. + * Args: + * Notes: + * Returns: MV_OK if success, other error code if fail. + */ +static int ddr3_tip_init_a38x_silicon(u32 dev_num, u32 board_id) +{ + struct hws_tip_config_func_db config_func; + enum hws_ddr_freq ddr_freq; + int status; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* new read leveling version */ + config_func.tip_dunit_read_func = ddr3_tip_a38x_if_read; + config_func.tip_dunit_write_func = ddr3_tip_a38x_if_write; + config_func.tip_dunit_mux_select_func = + ddr3_tip_a38x_select_ddr_controller; + config_func.tip_get_freq_config_info_func = + ddr3_tip_a38x_get_freq_config; + config_func.tip_set_freq_divider_func = ddr3_tip_a38x_set_divider; + config_func.tip_get_device_info_func = ddr3_tip_a38x_get_device_info; + config_func.tip_get_temperature = ddr3_ctrl_get_junc_temp; + + ddr3_tip_init_config_func(dev_num, &config_func); + + ddr3_tip_register_dq_table(dev_num, dq_bit_map_2_phy_pin); + +#ifdef STATIC_ALGO_SUPPORT + { + struct hws_tip_static_config_info static_config; + u32 board_offset = + board_id * A38X_NUMBER_OF_INTERFACES * + tm->num_of_bus_per_interface; + + static_config.silicon_delay = + a38x_silicon_delay_offset[board_id]; + static_config.package_trace_arr = + a38x_package_round_trip_delay_array; + static_config.board_trace_arr = + &a38x_board_round_trip_delay_array[board_offset]; + ddr3_tip_init_static_config_db(dev_num, &static_config); + } +#endif + status = ddr3_tip_a38x_get_init_freq(dev_num, &ddr_freq); + if (MV_OK != status) { + DEBUG_TRAINING_ACCESS(DEBUG_LEVEL_ERROR, + ("DDR3 silicon get target frequency - FAILED 0x%x\n", + status)); + return status; + } + + rl_version = 1; + mask_tune_func = (SET_LOW_FREQ_MASK_BIT | + LOAD_PATTERN_MASK_BIT | + SET_MEDIUM_FREQ_MASK_BIT | WRITE_LEVELING_MASK_BIT | + /* LOAD_PATTERN_2_MASK_BIT | */ + WRITE_LEVELING_SUPP_MASK_BIT | + READ_LEVELING_MASK_BIT | + PBS_RX_MASK_BIT | + PBS_TX_MASK_BIT | + SET_TARGET_FREQ_MASK_BIT | + WRITE_LEVELING_TF_MASK_BIT | + WRITE_LEVELING_SUPP_TF_MASK_BIT | + READ_LEVELING_TF_MASK_BIT | + CENTRALIZATION_RX_MASK_BIT | + CENTRALIZATION_TX_MASK_BIT); + rl_mid_freq_wa = 1; + + if ((ddr_freq == DDR_FREQ_333) || (ddr_freq == DDR_FREQ_400)) { + mask_tune_func = (WRITE_LEVELING_MASK_BIT | + LOAD_PATTERN_2_MASK_BIT | + WRITE_LEVELING_SUPP_MASK_BIT | + READ_LEVELING_MASK_BIT | + PBS_RX_MASK_BIT | + PBS_TX_MASK_BIT | + CENTRALIZATION_RX_MASK_BIT | + CENTRALIZATION_TX_MASK_BIT); + rl_mid_freq_wa = 0; /* WA not needed if 333/400 is TF */ + } + + /* Supplementary not supported for ECC modes */ + if (1 == ddr3_if_ecc_enabled()) { + mask_tune_func &= ~WRITE_LEVELING_SUPP_TF_MASK_BIT; + mask_tune_func &= ~WRITE_LEVELING_SUPP_MASK_BIT; + mask_tune_func &= ~PBS_TX_MASK_BIT; + mask_tune_func &= ~PBS_RX_MASK_BIT; + } + + if (ck_delay == -1) + ck_delay = 160; + if (ck_delay_16 == -1) + ck_delay_16 = 160; + ca_delay = 0; + delay_enable = 1; + + calibration_update_control = 1; + + init_freq = tm->interface_params[first_active_if].memory_freq; + + ddr3_tip_a38x_get_medium_freq(dev_num, &medium_freq); + + return MV_OK; +} + +int ddr3_a38x_update_topology_map(u32 dev_num, struct hws_topology_map *tm) +{ + u32 if_id = 0; + enum hws_ddr_freq freq; + + ddr3_tip_a38x_get_init_freq(dev_num, &freq); 
+ tm->interface_params[if_id].memory_freq = freq; + + /* + * re-calc topology parameters according to topology updates + * (if needed) + */ + CHECK_STATUS(hws_ddr3_tip_load_topology_map(dev_num, tm)); + + return MV_OK; +} + +int ddr3_tip_init_a38x(u32 dev_num, u32 board_id) +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (NULL == tm) + return MV_FAIL; + + ddr3_a38x_update_topology_map(dev_num, tm); + ddr3_tip_init_a38x_silicon(dev_num, board_id); + + return MV_OK; +} + +int ddr3_tip_a38x_get_init_freq(int dev_num, enum hws_ddr_freq *freq) +{ + u32 reg; + + /* Read sample at reset setting */ + reg = (reg_read(REG_DEVICE_SAR1_ADDR) >> + RST2_CPU_DDR_CLOCK_SELECT_IN_OFFSET) & + RST2_CPU_DDR_CLOCK_SELECT_IN_MASK; + switch (reg) { + case 0x0: + case 0x1: + *freq = DDR_FREQ_333; + break; + case 0x2: + case 0x3: + *freq = DDR_FREQ_400; + break; + case 0x4: + case 0xd: + *freq = DDR_FREQ_533; + break; + case 0x6: + *freq = DDR_FREQ_600; + break; + case 0x8: + case 0x11: + case 0x14: + *freq = DDR_FREQ_667; + break; + case 0xc: + case 0x15: + case 0x1b: + *freq = DDR_FREQ_800; + break; + case 0x10: + *freq = DDR_FREQ_933; + break; + case 0x12: + *freq = DDR_FREQ_900; + break; + case 0x13: + *freq = DDR_FREQ_900; + break; + default: + *freq = 0; + return MV_NOT_SUPPORTED; + } + + return MV_OK; +} + +int ddr3_tip_a38x_get_medium_freq(int dev_num, enum hws_ddr_freq *freq) +{ + u32 reg; + + /* Read sample at reset setting */ + reg = (reg_read(REG_DEVICE_SAR1_ADDR) >> + RST2_CPU_DDR_CLOCK_SELECT_IN_OFFSET) & + RST2_CPU_DDR_CLOCK_SELECT_IN_MASK; + switch (reg) { + case 0x0: + case 0x1: + /* Medium is same as TF to run PBS in this freq */ + *freq = DDR_FREQ_333; + break; + case 0x2: + case 0x3: + /* Medium is same as TF to run PBS in this freq */ + *freq = DDR_FREQ_400; + break; + case 0x4: + case 0xd: + *freq = DDR_FREQ_533; + break; + case 0x8: + case 0x11: + case 0x14: + *freq = DDR_FREQ_333; + break; + case 0xc: + case 0x15: + case 0x1b: + *freq = DDR_FREQ_400; + break; + case 0x6: + *freq = DDR_FREQ_300; + break; + case 0x12: + *freq = DDR_FREQ_360; + break; + case 0x13: + *freq = DDR_FREQ_400; + break; + default: + *freq = 0; + return MV_NOT_SUPPORTED; + } + + return MV_OK; +} + +u32 ddr3_tip_get_init_freq(void) +{ + enum hws_ddr_freq freq; + + ddr3_tip_a38x_get_init_freq(0, &freq); + + return freq; +} + +static int ddr3_tip_a38x_set_divider(u8 dev_num, u32 if_id, + enum hws_ddr_freq frequency) +{ + u32 divider = 0; + u32 sar_val; + + if (if_id != 0) { + DEBUG_TRAINING_ACCESS(DEBUG_LEVEL_ERROR, + ("A38x does not support interface 0x%x\n", + if_id)); + return MV_BAD_PARAM; + } + + /* get VCO freq index */ + sar_val = (reg_read(REG_DEVICE_SAR1_ADDR) >> + RST2_CPU_DDR_CLOCK_SELECT_IN_OFFSET) & + RST2_CPU_DDR_CLOCK_SELECT_IN_MASK; + divider = a38x_vco_freq_per_sar[sar_val] / freq_val[frequency]; + + /* Set Sync mode */ + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x20220, 0x0, + 0x1000)); + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe42f4, 0x0, + 0x200)); + + /* cpupll_clkdiv_reset_mask */ + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe4264, 0x1f, + 0xff)); + + /* cpupll_clkdiv_reload_smooth */ + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe4260, + (0x2 << 8), (0xff << 8))); + + /* cpupll_clkdiv_relax_en */ + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe4260, + (0x2 << 24), (0xff << 24))); + + /* write the divider */ 
+ CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe4268, + (divider << 8), (0x3f << 8))); + + /* set cpupll_clkdiv_reload_ratio */ + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe4264, + (1 << 8), (1 << 8))); + + /* undet cpupll_clkdiv_reload_ratio */ + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe4264, 0, + (1 << 8))); + + /* clear cpupll_clkdiv_reload_force */ + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe4260, 0, + (0xff << 8))); + + /* clear cpupll_clkdiv_relax_en */ + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe4260, 0, + (0xff << 24))); + + /* clear cpupll_clkdiv_reset_mask */ + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0xe4264, 0, + 0xff)); + + /* Dunit training clock + 1:1 mode */ + if ((frequency == DDR_FREQ_LOW_FREQ) || (freq_val[frequency] <= 400)) { + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x18488, + (1 << 16), (1 << 16))); + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x1524, + (0 << 15), (1 << 15))); + } else { + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x18488, + 0, (1 << 16))); + CHECK_STATUS(ddr3_tip_a38x_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x1524, + (1 << 15), (1 << 15))); + } + + return MV_OK; +} + +/* + * external read from memory + */ +int ddr3_tip_ext_read(u32 dev_num, u32 if_id, u32 reg_addr, + u32 num_of_bursts, u32 *data) +{ + u32 burst_num; + + for (burst_num = 0; burst_num < num_of_bursts * 8; burst_num++) + data[burst_num] = readl(reg_addr + 4 * burst_num); + + return MV_OK; +} + +/* + * external write to memory + */ +int ddr3_tip_ext_write(u32 dev_num, u32 if_id, u32 reg_addr, + u32 num_of_bursts, u32 *data) { + u32 burst_num; + + for (burst_num = 0; burst_num < num_of_bursts * 8; burst_num++) + writel(data[burst_num], reg_addr + 4 * burst_num); + + return MV_OK; +} + +int ddr3_silicon_pre_init(void) +{ + return ddr3_silicon_init(); +} + +int ddr3_post_run_alg(void) +{ + return MV_OK; +} + +int ddr3_silicon_post_init(void) +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* Set half bus width */ + if (DDR3_IS_16BIT_DRAM_MODE(tm->bus_act_mask)) { + CHECK_STATUS(ddr3_tip_if_write + (0, ACCESS_TYPE_UNICAST, PARAM_NOT_CARE, + REG_SDRAM_CONFIG_ADDR, 0x0, 0x8000)); + } + + return MV_OK; +} + +int ddr3_tip_a38x_get_device_info(u8 dev_num, struct ddr3_device_info *info_ptr) +{ + info_ptr->device_id = 0x6800; + info_ptr->ck_delay = ck_delay; + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_a38x.h b/drivers/ddr/marvell/a38x/old/ddr3_a38x.h new file mode 100644 index 0000000000..1ed517446f --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_a38x.h @@ -0,0 +1,93 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_A38X_H +#define _DDR3_A38X_H + +#define MAX_INTERFACE_NUM 1 +#define MAX_BUS_NUM 5 + +#include "ddr3_hws_hw_training_def.h" + +#define ECC_SUPPORT + +/* right now, we're not supporting this in mainline */ +#undef SUPPORT_STATIC_DUNIT_CONFIG + +/* Controler bus divider 1 for 32 bit, 2 for 64 bit */ +#define DDR_CONTROLLER_BUS_WIDTH_MULTIPLIER 1 + +/* Tune internal training params values */ +#define TUNE_TRAINING_PARAMS_CK_DELAY 160 +#define TUNE_TRAINING_PARAMS_CK_DELAY_16 160 +#define TUNE_TRAINING_PARAMS_PFINGER 41 +#define TUNE_TRAINING_PARAMS_NFINGER 43 +#define TUNE_TRAINING_PARAMS_PHYREG3VAL 0xa + +#define MARVELL_BOARD MARVELL_BOARD_ID_BASE + + +#define REG_DEVICE_SAR1_ADDR 0xe4204 +#define RST2_CPU_DDR_CLOCK_SELECT_IN_OFFSET 17 +#define RST2_CPU_DDR_CLOCK_SELECT_IN_MASK 0x1f + +/* DRAM Windows */ +#define REG_XBAR_WIN_5_CTRL_ADDR 0x20050 +#define REG_XBAR_WIN_5_BASE_ADDR 0x20054 + +/* DRAM Windows */ +#define REG_XBAR_WIN_4_CTRL_ADDR 0x20040 +#define REG_XBAR_WIN_4_BASE_ADDR 0x20044 +#define REG_XBAR_WIN_4_REMAP_ADDR 0x20048 +#define REG_XBAR_WIN_7_REMAP_ADDR 0x20078 +#define REG_XBAR_WIN_16_CTRL_ADDR 0x200d0 +#define REG_XBAR_WIN_16_BASE_ADDR 0x200d4 +#define REG_XBAR_WIN_16_REMAP_ADDR 0x200dc +#define REG_XBAR_WIN_19_CTRL_ADDR 0x200e8 + +#define REG_FASTPATH_WIN_BASE_ADDR(win) (0x20180 + (0x8 * win)) +#define REG_FASTPATH_WIN_CTRL_ADDR(win) (0x20184 + (0x8 * win)) + +/* SatR defined too change topology busWidth and ECC configuration */ +#define DDR_SATR_CONFIG_MASK_WIDTH 0x8 +#define DDR_SATR_CONFIG_MASK_ECC 0x10 +#define DDR_SATR_CONFIG_MASK_ECC_PUP 0x20 + +#define REG_SAMPLE_RESET_HIGH_ADDR 0x18600 + +#define MV_BOARD_REFCLK MV_BOARD_REFCLK_25MHZ + +/* Matrix enables DRAM modes (bus width/ECC) per boardId */ +#define TOPOLOGY_UPDATE_32BIT 0 +#define TOPOLOGY_UPDATE_32BIT_ECC 1 +#define TOPOLOGY_UPDATE_16BIT 2 +#define TOPOLOGY_UPDATE_16BIT_ECC 3 +#define TOPOLOGY_UPDATE_16BIT_ECC_PUP3 4 +#define TOPOLOGY_UPDATE { \ + /* 32Bit, 32bit ECC, 16bit, 16bit ECC PUP4, 16bit ECC PUP3 */ \ + {1, 1, 1, 1, 1}, /* RD_NAS_68XX_ID */ \ + {1, 1, 1, 1, 1}, /* DB_68XX_ID */ \ + {1, 0, 1, 0, 1}, /* RD_AP_68XX_ID */ \ + {1, 0, 1, 0, 1}, /* DB_AP_68XX_ID */ \ + {1, 0, 1, 0, 1}, /* DB_GP_68XX_ID */ \ + {0, 0, 1, 1, 0}, /* DB_BP_6821_ID */ \ + {1, 1, 1, 1, 1} /* DB_AMC_6820_ID */ \ + }; + +enum { + CPU_1066MHZ_DDR_400MHZ, + CPU_RESERVED_DDR_RESERVED0, + CPU_667MHZ_DDR_667MHZ, + CPU_800MHZ_DDR_800MHZ, + CPU_RESERVED_DDR_RESERVED1, + CPU_RESERVED_DDR_RESERVED2, + CPU_RESERVED_DDR_RESERVED3, + LAST_FREQ +}; + +#define ACTIVE_INTERFACE_MASK 0x1 + +#endif /* _DDR3_A38X_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_a38x_mc_static.h b/drivers/ddr/marvell/a38x/old/ddr3_a38x_mc_static.h new file mode 100644 index 0000000000..b879a01031 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_a38x_mc_static.h @@ -0,0 +1,226 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_A38X_MC_STATIC_H +#define _DDR3_A38X_MC_STATIC_H + +#include "ddr3_a38x.h" + +#ifdef SUPPORT_STATIC_DUNIT_CONFIG + +#ifdef CONFIG_CUSTOMER_BOARD_SUPPORT +static struct reg_data ddr3_customer_800[] = { + /* parameters for customer board (based on 800MHZ) */ + {0x1400, 0x7b00cc30, 0xffffffff}, + {0x1404, 0x36301820, 0xffffffff}, + {0x1408, 0x5415baab, 0xffffffff}, + {0x140c, 0x38411def, 0xffffffff}, + {0x1410, 0x18300000, 0xffffffff}, + {0x1414, 0x00000700, 0xffffffff}, + {0x1424, 0x0060f3ff, 0xffffffff}, + {0x1428, 0x0011a940, 0xffffffff}, + {0x142c, 0x28c5134, 0xffffffff}, + {0x1474, 0x00000000, 0xffffffff}, + {0x147c, 0x0000d771, 0xffffffff}, + {0x1494, 0x00030000, 0xffffffff}, + {0x149c, 0x00000300, 0xffffffff}, + {0x14a8, 0x00000000, 0xffffffff}, + {0x14cc, 0xbd09000d, 0xffffffff}, + {0x1504, 0xfffffff1, 0xffffffff}, + {0x150c, 0xffffffe5, 0xffffffff}, + {0x1514, 0x00000000, 0xffffffff}, + {0x151c, 0x00000000, 0xffffffff}, + {0x1538, 0x00000b0b, 0xffffffff}, + {0x153c, 0x00000c0c, 0xffffffff}, + {0x15d0, 0x00000670, 0xffffffff}, + {0x15d4, 0x00000046, 0xffffffff}, + {0x15d8, 0x00000010, 0xffffffff}, + {0x15dc, 0x00000000, 0xffffffff}, + {0x15e0, 0x00000023, 0xffffffff}, + {0x15e4, 0x00203c18, 0xffffffff}, + {0x15ec, 0xf8000019, 0xffffffff}, + {0x16a0, 0xcc000006, 0xffffffff}, /* Clock Delay */ + {0xe4124, 0x08008073, 0xffffffff}, /* AVS BG default */ + {0, 0, 0} +}; + +#else /* CONFIG_CUSTOMER_BOARD_SUPPORT */ + +struct reg_data ddr3_a38x_933[MV_MAX_DDR3_STATIC_SIZE] = { + /* parameters for 933MHZ */ + {0x1400, 0x7b00ce3a, 0xffffffff}, + {0x1404, 0x36301820, 0xffffffff}, + {0x1408, 0x7417eccf, 0xffffffff}, + {0x140c, 0x3e421f98, 0xffffffff}, + {0x1410, 0x1a300000, 0xffffffff}, + {0x1414, 0x00000700, 0xffffffff}, + {0x1424, 0x0060f3ff, 0xffffffff}, + {0x1428, 0x0013ca50, 0xffffffff}, + {0x142c, 0x028c5165, 0xffffffff}, + {0x1474, 0x00000000, 0xffffffff}, + {0x147c, 0x0000e871, 0xffffffff}, + {0x1494, 0x00010000, 0xffffffff}, + {0x149c, 0x00000001, 0xffffffff}, + {0x14a8, 0x00000000, 0xffffffff}, + {0x14cc, 0xbd09000d, 0xffffffff}, + {0x1504, 0xffffffe1, 0xffffffff}, + {0x150c, 0xffffffe5, 0xffffffff}, + {0x1514, 0x00000000, 0xffffffff}, + {0x151c, 0x00000000, 0xffffffff}, + {0x1538, 0x00000d0d, 0xffffffff}, + {0x153c, 0x00000d0d, 0xffffffff}, + {0x15d0, 0x00000608, 0xffffffff}, + {0x15d4, 0x00000044, 0xffffffff}, + {0x15d8, 0x00000020, 0xffffffff}, + {0x15dc, 0x00000000, 0xffffffff}, + {0x15e0, 0x00000021, 0xffffffff}, + {0x15e4, 0x00203c18, 0xffffffff}, + {0x15ec, 0xf8000019, 0xffffffff}, + {0x16a0, 0xcc000006, 0xffffffff}, /* Clock Delay */ + {0xe4124, 0x08008073, 0xffffffff}, /* AVS BG default */ + {0, 0, 0} +}; + +static struct reg_data ddr3_a38x_800[] = { + /* parameters for 800MHZ */ + {0x1400, 0x7b00cc30, 0xffffffff}, + {0x1404, 0x36301820, 0xffffffff}, + {0x1408, 0x5415baab, 0xffffffff}, + {0x140c, 0x38411def, 0xffffffff}, + {0x1410, 0x18300000, 0xffffffff}, + {0x1414, 0x00000700, 0xffffffff}, + {0x1424, 0x0060f3ff, 0xffffffff}, + {0x1428, 0x0011a940, 0xffffffff}, + {0x142c, 0x28c5134, 0xffffffff}, + {0x1474, 0x00000000, 0xffffffff}, + {0x147c, 0x0000d771, 0xffffffff}, + {0x1494, 0x00030000, 0xffffffff}, + {0x149c, 0x00000300, 0xffffffff}, + {0x14a8, 0x00000000, 0xffffffff}, + {0x14cc, 0xbd09000d, 0xffffffff}, + {0x1504, 0xfffffff1, 0xffffffff}, + {0x150c, 0xffffffe5, 0xffffffff}, + {0x1514, 0x00000000, 0xffffffff}, + {0x151c, 0x00000000, 0xffffffff}, + {0x1538, 0x00000b0b, 0xffffffff}, + {0x153c, 
0x00000c0c, 0xffffffff}, + {0x15d0, 0x00000670, 0xffffffff}, + {0x15d4, 0x00000046, 0xffffffff}, + {0x15d8, 0x00000010, 0xffffffff}, + {0x15dc, 0x00000000, 0xffffffff}, + {0x15e0, 0x00000023, 0xffffffff}, + {0x15e4, 0x00203c18, 0xffffffff}, + {0x15ec, 0xf8000019, 0xffffffff}, + {0x16a0, 0xcc000006, 0xffffffff}, /* Clock Delay */ + {0xe4124, 0x08008073, 0xffffffff}, /* AVS BG default */ + {0, 0, 0} +}; + +static struct reg_data ddr3_a38x_667[] = { + /* parameters for 667MHZ */ + /* DDR SDRAM Configuration Register */ + {0x1400, 0x7b00ca28, 0xffffffff}, + /* Dunit Control Low Register - kw28 bit12 low (disable CLK1) */ + {0x1404, 0x36301820, 0xffffffff}, + /* DDR SDRAM Timing (Low) Register */ + {0x1408, 0x43149997, 0xffffffff}, + /* DDR SDRAM Timing (High) Register */ + {0x140c, 0x38411bc7, 0xffffffff}, + /* DDR SDRAM Address Control Register */ + {0x1410, 0x14330000, 0xffffffff}, + /* DDR SDRAM Open Pages Control Register */ + {0x1414, 0x00000700, 0xffffffff}, + /* Dunit Control High Register (2 :1 - bits 15:12 = 0xd) */ + {0x1424, 0x0060f3ff, 0xffffffff}, + /* Dunit Control High Register */ + {0x1428, 0x000f8830, 0xffffffff}, + /* Dunit Control High Register (2:1 - bit 29 = '1') */ + {0x142c, 0x28c50f8, 0xffffffff}, + {0x147c, 0x0000c671, 0xffffffff}, + /* DDR SDRAM ODT Control (Low) Register */ + {0x1494, 0x00030000, 0xffffffff}, + /* DDR SDRAM ODT Control (High) Register, will be configured at WL */ + {0x1498, 0x00000000, 0xffffffff}, + /* DDR Dunit ODT Control Register */ + {0x149c, 0x00000300, 0xffffffff}, + {0x14a8, 0x00000000, 0xffffffff}, /* */ + {0x14cc, 0xbd09000d, 0xffffffff}, /* */ + {0x1474, 0x00000000, 0xffffffff}, + /* Read Data Sample Delays Register */ + {0x1538, 0x00000009, 0xffffffff}, + /* Read Data Ready Delay Register */ + {0x153c, 0x0000000c, 0xffffffff}, + {0x1504, 0xfffffff1, 0xffffffff}, /* */ + {0x150c, 0xffffffe5, 0xffffffff}, /* */ + {0x1514, 0x00000000, 0xffffffff}, /* */ + {0x151c, 0x0, 0xffffffff}, /* */ + {0x15d0, 0x00000650, 0xffffffff}, /* MR0 */ + {0x15d4, 0x00000046, 0xffffffff}, /* MR1 */ + {0x15d8, 0x00000010, 0xffffffff}, /* MR2 */ + {0x15dc, 0x00000000, 0xffffffff}, /* MR3 */ + {0x15e0, 0x23, 0xffffffff}, /* */ + {0x15e4, 0x00203c18, 0xffffffff}, /* ZQC Configuration Register */ + {0x15ec, 0xf8000019, 0xffffffff}, /* DDR PHY */ + {0x16a0, 0xcc000006, 0xffffffff}, /* Clock Delay */ + {0xe4124, 0x08008073, 0xffffffff}, /* AVS BG default */ + {0, 0, 0} +}; + +static struct reg_data ddr3_a38x_533[] = { + /* parameters for 533MHZ */ + /* DDR SDRAM Configuration Register */ + {0x1400, 0x7b00d040, 0xffffffff}, + /* Dunit Control Low Register - kw28 bit12 low (disable CLK1) */ + {0x1404, 0x36301820, 0xffffffff}, + /* DDR SDRAM Timing (Low) Register */ + {0x1408, 0x33137772, 0xffffffff}, + /* DDR SDRAM Timing (High) Register */ + {0x140c, 0x3841199f, 0xffffffff}, + /* DDR SDRAM Address Control Register */ + {0x1410, 0x10330000, 0xffffffff}, + /* DDR SDRAM Open Pages Control Register */ + {0x1414, 0x00000700, 0xffffffff}, + /* Dunit Control High Register (2 :1 - bits 15:12 = 0xd) */ + {0x1424, 0x0060f3ff, 0xffffffff}, + /* Dunit Control High Register */ + {0x1428, 0x000d6720, 0xffffffff}, + /* Dunit Control High Register (2:1 - bit 29 = '1') */ + {0x142c, 0x028c50c3, 0xffffffff}, + {0x147c, 0x0000b571, 0xffffffff}, + /* DDR SDRAM ODT Control (Low) Register */ + {0x1494, 0x00030000, 0xffffffff}, + /* DDR SDRAM ODT Control (High) Register, will be configured at WL */ + {0x1498, 0x00000000, 0xffffffff}, + /* DDR Dunit ODT Control Register */ + {0x149c, 
0x00000003, 0xffffffff}, + {0x14a8, 0x00000000, 0xffffffff}, /* */ + {0x14cc, 0xbd09000d, 0xffffffff}, /* */ + {0x1474, 0x00000000, 0xffffffff}, + /* Read Data Sample Delays Register */ + {0x1538, 0x00000707, 0xffffffff}, + /* Read Data Ready Delay Register */ + {0x153c, 0x00000707, 0xffffffff}, + {0x1504, 0xffffffe1, 0xffffffff}, /* */ + {0x150c, 0xffffffe5, 0xffffffff}, /* */ + {0x1514, 0x00000000, 0xffffffff}, /* */ + {0x151c, 0x00000000, 0xffffffff}, /* */ + {0x15d0, 0x00000630, 0xffffffff}, /* MR0 */ + {0x15d4, 0x00000046, 0xffffffff}, /* MR1 */ + {0x15d8, 0x00000008, 0xffffffff}, /* MR2 */ + {0x15dc, 0x00000000, 0xffffffff}, /* MR3 */ + {0x15e0, 0x00000023, 0xffffffff}, /* */ + {0x15e4, 0x00203c18, 0xffffffff}, /* ZQC Configuration Register */ + {0x15ec, 0xf8000019, 0xffffffff}, /* DDR PHY */ + {0x16a0, 0xcc000006, 0xffffffff}, /* Clock Delay */ + {0xe4124, 0x08008073, 0xffffffff}, /* AVS BG default */ + {0, 0, 0} +}; + +#endif /* CONFIG_CUSTOMER_BOARD_SUPPORT */ + +#endif /* SUPPORT_STATIC_DUNIT_CONFIG */ + +#endif /* _DDR3_A38X_MC_STATIC_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_a38x_topology.h b/drivers/ddr/marvell/a38x/old/ddr3_a38x_topology.h new file mode 100644 index 0000000000..f27bbff733 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_a38x_topology.h @@ -0,0 +1,22 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_A38X_TOPOLOGY_H +#define _DDR3_A38X_TOPOLOGY_H + +#include "ddr_topology_def.h" + +/* Bus mask variants */ +#define BUS_MASK_32BIT 0xf +#define BUS_MASK_32BIT_ECC 0x1f +#define BUS_MASK_16BIT 0x3 +#define BUS_MASK_16BIT_ECC 0x13 +#define BUS_MASK_16BIT_ECC_PUP3 0xb + +#define DYNAMIC_CS_SIZE_CONFIG +#define DISABLE_L2_FILTERING_DURING_DDR_TRAINING + +#endif /* _DDR3_A38X_TOPOLOGY_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_a38x_training.c b/drivers/ddr/marvell/a38x/old/ddr3_a38x_training.c new file mode 100644 index 0000000000..edb2e706bb --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_a38x_training.c @@ -0,0 +1,39 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include +#include + +#include "ddr3_init.h" + +/* + * Name: ddr3_tip_init_silicon + * Desc: initiate silicon parameters + * Args: + * Notes: + * Returns: required value + */ +int ddr3_silicon_init(void) +{ + int status; + static int init_done; + + if (init_done == 1) + return MV_OK; + + status = ddr3_tip_init_a38x(0, 0); + if (MV_OK != status) { + printf("DDR3 A38x silicon init - FAILED 0x%x\n", status); + return status; + } + + init_done = 1; + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_debug.c b/drivers/ddr/marvell/a38x/old/ddr3_debug.c new file mode 100644 index 0000000000..d8bd328dfd --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_debug.c @@ -0,0 +1,1526 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include +#include + +#include "ddr3_init.h" + +u8 is_reg_dump = 0; +u8 debug_pbs = DEBUG_LEVEL_ERROR; + +/* + * API to change flags outside of the lib + */ +#ifndef SILENT_LIB +/* Debug flags for other Training modules */ +u8 debug_training_static = DEBUG_LEVEL_ERROR; +u8 debug_training = DEBUG_LEVEL_ERROR; +u8 debug_leveling = DEBUG_LEVEL_ERROR; +u8 debug_centralization = DEBUG_LEVEL_ERROR; +u8 debug_training_ip = DEBUG_LEVEL_ERROR; +u8 debug_training_bist = DEBUG_LEVEL_ERROR; +u8 debug_training_hw_alg = DEBUG_LEVEL_ERROR; +u8 debug_training_access = DEBUG_LEVEL_ERROR; +u8 debug_training_a38x = DEBUG_LEVEL_ERROR; + +void ddr3_hws_set_log_level(enum ddr_lib_debug_block block, u8 level) +{ + switch (block) { + case DEBUG_BLOCK_STATIC: + debug_training_static = level; + break; + case DEBUG_BLOCK_TRAINING_MAIN: + debug_training = level; + break; + case DEBUG_BLOCK_LEVELING: + debug_leveling = level; + break; + case DEBUG_BLOCK_CENTRALIZATION: + debug_centralization = level; + break; + case DEBUG_BLOCK_PBS: + debug_pbs = level; + break; + case DEBUG_BLOCK_ALG: + debug_training_hw_alg = level; + break; + case DEBUG_BLOCK_DEVICE: + debug_training_a38x = level; + break; + case DEBUG_BLOCK_ACCESS: + debug_training_access = level; + break; + case DEBUG_STAGES_REG_DUMP: + if (level == DEBUG_LEVEL_TRACE) + is_reg_dump = 1; + else + is_reg_dump = 0; + break; + case DEBUG_BLOCK_ALL: + default: + debug_training_static = level; + debug_training = level; + debug_leveling = level; + debug_centralization = level; + debug_pbs = level; + debug_training_hw_alg = level; + debug_training_access = level; + debug_training_a38x = level; + } +} +#else +void ddr3_hws_set_log_level(enum ddr_lib_debug_block block, u8 level) +{ + return; +} +#endif + +struct hws_tip_config_func_db config_func_info[HWS_MAX_DEVICE_NUM]; +u8 is_default_centralization = 0; +u8 is_tune_result = 0; +u8 is_validate_window_per_if = 0; +u8 is_validate_window_per_pup = 0; +u8 sweep_cnt = 1; +u32 is_bist_reset_bit = 1; +static struct hws_xsb_info xsb_info[HWS_MAX_DEVICE_NUM]; + +/* + * Dump Dunit & Phy registers + */ +int ddr3_tip_reg_dump(u32 dev_num) +{ + u32 if_id, reg_addr, data_value, bus_id; + u32 read_data[MAX_INTERFACE_NUM]; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + printf("-- dunit registers --\n"); + for (reg_addr = 0x1400; reg_addr < 0x19f0; reg_addr += 4) { + printf("0x%x ", reg_addr); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, + if_id, reg_addr, read_data, + MASK_ALL_BITS)); + printf("0x%x ", read_data[if_id]); + } + printf("\n"); + } + + printf("-- Phy registers --\n"); + for (reg_addr = 0; reg_addr <= 0xff; reg_addr++) { + printf("0x%x ", reg_addr); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_id = 0; + bus_id < tm->num_of_bus_per_interface; + bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_id, + DDR_PHY_DATA, reg_addr, + &data_value)); + printf("0x%x ", data_value); + } + for (bus_id = 0; + bus_id < tm->num_of_bus_per_interface; + bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_id, + DDR_PHY_CONTROL, reg_addr, + &data_value)); + printf("0x%x ", 
data_value); + } + } + printf("\n"); + } + + return MV_OK; +} + +/* + * Register access func registration + */ +int ddr3_tip_init_config_func(u32 dev_num, + struct hws_tip_config_func_db *config_func) +{ + if (config_func == NULL) + return MV_BAD_PARAM; + + memcpy(&config_func_info[dev_num], config_func, + sizeof(struct hws_tip_config_func_db)); + + return MV_OK; +} + +/* + * Get training result info pointer + */ +enum hws_result *ddr3_tip_get_result_ptr(u32 stage) +{ + return training_result[stage]; +} + +/* + * Device info read + */ +int ddr3_tip_get_device_info(u32 dev_num, struct ddr3_device_info *info_ptr) +{ + if (config_func_info[dev_num].tip_get_device_info_func != NULL) { + return config_func_info[dev_num]. + tip_get_device_info_func((u8) dev_num, info_ptr); + } + + return MV_FAIL; +} + +#ifndef EXCLUDE_SWITCH_DEBUG +/* + * Convert freq to character string + */ +static char *convert_freq(enum hws_ddr_freq freq) +{ + switch (freq) { + case DDR_FREQ_LOW_FREQ: + return "DDR_FREQ_LOW_FREQ"; + case DDR_FREQ_400: + return "400"; + + case DDR_FREQ_533: + return "533"; + case DDR_FREQ_667: + return "667"; + + case DDR_FREQ_800: + return "800"; + + case DDR_FREQ_933: + return "933"; + + case DDR_FREQ_1066: + return "1066"; + case DDR_FREQ_311: + return "311"; + + case DDR_FREQ_333: + return "333"; + + case DDR_FREQ_467: + return "467"; + + case DDR_FREQ_850: + return "850"; + + case DDR_FREQ_900: + return "900"; + + case DDR_FREQ_360: + return "DDR_FREQ_360"; + + case DDR_FREQ_1000: + return "DDR_FREQ_1000"; + default: + return "Unknown Frequency"; + } +} + +/* + * Convert device ID to character string + */ +static char *convert_dev_id(u32 dev_id) +{ + switch (dev_id) { + case 0x6800: + return "A38xx"; + case 0x6900: + return "A39XX"; + case 0xf400: + return "AC3"; + case 0xfc00: + return "BC2"; + + default: + return "Unknown Device"; + } +} + +/* + * Convert device ID to character string + */ +static char *convert_mem_size(u32 dev_id) +{ + switch (dev_id) { + case 0: + return "512 MB"; + case 1: + return "1 GB"; + case 2: + return "2 GB"; + case 3: + return "4 GB"; + case 4: + return "8 GB"; + + default: + return "wrong mem size"; + } +} + +int print_device_info(u8 dev_num) +{ + struct ddr3_device_info info_ptr; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + CHECK_STATUS(ddr3_tip_get_device_info(dev_num, &info_ptr)); + printf("=== DDR setup START===\n"); + printf("\tDevice ID: %s\n", convert_dev_id(info_ptr.device_id)); + printf("\tDDR3 CK delay: %d\n", info_ptr.ck_delay); + print_topology(tm); + printf("=== DDR setup END===\n"); + + return MV_OK; +} + +void hws_ddr3_tip_sweep_test(int enable) +{ + if (enable) { + is_validate_window_per_if = 1; + is_validate_window_per_pup = 1; + debug_training = DEBUG_LEVEL_TRACE; + } else { + is_validate_window_per_if = 0; + is_validate_window_per_pup = 0; + } +} +#endif + +char *ddr3_tip_convert_tune_result(enum hws_result tune_result) +{ + switch (tune_result) { + case TEST_FAILED: + return "FAILED"; + case TEST_SUCCESS: + return "PASS"; + case NO_TEST_DONE: + return "NOT COMPLETED"; + default: + return "Un-KNOWN"; + } +} + +/* + * Print log info + */ +int ddr3_tip_print_log(u32 dev_num, u32 mem_addr) +{ + u32 if_id = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + +#ifndef EXCLUDE_SWITCH_DEBUG + if ((is_validate_window_per_if != 0) || + (is_validate_window_per_pup != 0)) { + u32 is_pup_log = 0; + enum hws_ddr_freq freq; + + freq = tm->interface_params[first_active_if].memory_freq; + + is_pup_log = 
(is_validate_window_per_pup != 0) ? 1 : 0; + printf("===VALIDATE WINDOW LOG START===\n"); + printf("DDR Frequency: %s ======\n", convert_freq(freq)); + /* print sweep windows */ + ddr3_tip_run_sweep_test(dev_num, sweep_cnt, 1, is_pup_log); + ddr3_tip_run_sweep_test(dev_num, sweep_cnt, 0, is_pup_log); + ddr3_tip_print_all_pbs_result(dev_num); + ddr3_tip_print_wl_supp_result(dev_num); + printf("===VALIDATE WINDOW LOG END ===\n"); + CHECK_STATUS(ddr3_tip_restore_dunit_regs(dev_num)); + ddr3_tip_reg_dump(dev_num); + } +#endif + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("IF %d Status:\n", if_id)); + + if (mask_tune_func & INIT_CONTROLLER_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tInit Controller: %s\n", + ddr3_tip_convert_tune_result + (training_result[INIT_CONTROLLER] + [if_id]))); + } + if (mask_tune_func & SET_LOW_FREQ_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tLow freq Config: %s\n", + ddr3_tip_convert_tune_result + (training_result[SET_LOW_FREQ] + [if_id]))); + } + if (mask_tune_func & LOAD_PATTERN_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tLoad Pattern: %s\n", + ddr3_tip_convert_tune_result + (training_result[LOAD_PATTERN] + [if_id]))); + } + if (mask_tune_func & SET_MEDIUM_FREQ_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tMedium freq Config: %s\n", + ddr3_tip_convert_tune_result + (training_result[SET_MEDIUM_FREQ] + [if_id]))); + } + if (mask_tune_func & WRITE_LEVELING_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tWL: %s\n", + ddr3_tip_convert_tune_result + (training_result[WRITE_LEVELING] + [if_id]))); + } + if (mask_tune_func & LOAD_PATTERN_2_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tLoad Pattern: %s\n", + ddr3_tip_convert_tune_result + (training_result[LOAD_PATTERN_2] + [if_id]))); + } + if (mask_tune_func & READ_LEVELING_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tRL: %s\n", + ddr3_tip_convert_tune_result + (training_result[READ_LEVELING] + [if_id]))); + } + if (mask_tune_func & WRITE_LEVELING_SUPP_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tWL Supp: %s\n", + ddr3_tip_convert_tune_result + (training_result[WRITE_LEVELING_SUPP] + [if_id]))); + } + if (mask_tune_func & PBS_RX_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tPBS RX: %s\n", + ddr3_tip_convert_tune_result + (training_result[PBS_RX] + [if_id]))); + } + if (mask_tune_func & PBS_TX_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tPBS TX: %s\n", + ddr3_tip_convert_tune_result + (training_result[PBS_TX] + [if_id]))); + } + if (mask_tune_func & SET_TARGET_FREQ_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tTarget freq Config: %s\n", + ddr3_tip_convert_tune_result + (training_result[SET_TARGET_FREQ] + [if_id]))); + } + if (mask_tune_func & WRITE_LEVELING_TF_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tWL TF: %s\n", + ddr3_tip_convert_tune_result + (training_result[WRITE_LEVELING_TF] + [if_id]))); + } + if (mask_tune_func & READ_LEVELING_TF_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tRL TF: %s\n", + ddr3_tip_convert_tune_result + (training_result[READ_LEVELING_TF] + [if_id]))); + } + if (mask_tune_func & WRITE_LEVELING_SUPP_TF_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tWL TF Supp: %s\n", + ddr3_tip_convert_tune_result + (training_result + [WRITE_LEVELING_SUPP_TF] + [if_id]))); + } + if (mask_tune_func & CENTRALIZATION_RX_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + 
("\tCentr RX: %s\n", + ddr3_tip_convert_tune_result + (training_result[CENTRALIZATION_RX] + [if_id]))); + } + if (mask_tune_func & VREF_CALIBRATION_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tVREF_CALIBRATION: %s\n", + ddr3_tip_convert_tune_result + (training_result[VREF_CALIBRATION] + [if_id]))); + } + if (mask_tune_func & CENTRALIZATION_TX_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("\tCentr TX: %s\n", + ddr3_tip_convert_tune_result + (training_result[CENTRALIZATION_TX] + [if_id]))); + } + } + + return MV_OK; +} + +/* + * Print stability log info + */ +int ddr3_tip_print_stability_log(u32 dev_num) +{ + u8 if_id = 0, csindex = 0, bus_id = 0, idx = 0; + u32 reg_data; + u32 read_data[MAX_INTERFACE_NUM]; + u32 max_cs = hws_ddr3_tip_max_cs_get(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* Title print */ + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + printf("Title: I/F# , Tj, Calibration_n0, Calibration_p0, Calibration_n1, Calibration_p1, Calibration_n2, Calibration_p2,"); + for (csindex = 0; csindex < max_cs; csindex++) { + printf("CS%d , ", csindex); + printf("\n"); + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + printf("VWTx, VWRx, WL_tot, WL_ADLL, WL_PH, RL_Tot, RL_ADLL, RL_PH, RL_Smp, Cen_tx, Cen_rx, Vref, DQVref,"); + printf("\t\t"); + for (idx = 0; idx < 11; idx++) + printf("PBSTx-Pad%d,", idx); + printf("\t\t"); + for (idx = 0; idx < 11; idx++) + printf("PBSRx-Pad%d,", idx); + } + } + printf("\n"); + + /* Data print */ + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + + printf("Data: %d,%d,", if_id, + (config_func_info[dev_num].tip_get_temperature != NULL) + ? (config_func_info[dev_num]. + tip_get_temperature(dev_num)) : (0)); + + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x14c8, + read_data, MASK_ALL_BITS)); + printf("%d,%d,", ((read_data[if_id] & 0x3f0) >> 4), + ((read_data[if_id] & 0xfc00) >> 10)); + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x17c8, + read_data, MASK_ALL_BITS)); + printf("%d,%d,", ((read_data[if_id] & 0x3f0) >> 4), + ((read_data[if_id] & 0xfc00) >> 10)); + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x1dc8, + read_data, MASK_ALL_BITS)); + printf("%d,%d,", ((read_data[if_id] & 0x3f0000) >> 16), + ((read_data[if_id] & 0xfc00000) >> 22)); + + for (csindex = 0; csindex < max_cs; csindex++) { + printf("CS%d , ", csindex); + for (bus_id = 0; bus_id < MAX_BUS_NUM; bus_id++) { + printf("\n"); + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + RESULT_DB_PHY_REG_ADDR + + csindex, ®_data); + printf("%d,%d,", (reg_data & 0x1f), + ((reg_data & 0x3e0) >> 5)); + /* WL */ + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + WL_PHY_REG + + csindex * 4, ®_data); + printf("%d,%d,%d,", + (reg_data & 0x1f) + + ((reg_data & 0x1c0) >> 6) * 32, + (reg_data & 0x1f), + (reg_data & 0x1c0) >> 6); + /* RL */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, + if_id, + READ_DATA_SAMPLE_DELAY, + read_data, MASK_ALL_BITS)); + read_data[if_id] = + (read_data[if_id] & + (0xf << (4 * csindex))) >> + (4 * csindex); + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_id, + DDR_PHY_DATA, + RL_PHY_REG + csindex * 4, + ®_data); + printf("%d,%d,%d,%d,", + (reg_data & 0x1f) + + ((reg_data & 0x1c0) >> 6) * 32 + + read_data[if_id] * 64, + (reg_data & 
0x1f), + ((reg_data & 0x1c0) >> 6), + read_data[if_id]); + /* Centralization */ + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_id, + DDR_PHY_DATA, + WRITE_CENTRALIZATION_PHY_REG + + csindex * 4, ®_data); + printf("%d,", (reg_data & 0x3f)); + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_id, + DDR_PHY_DATA, + READ_CENTRALIZATION_PHY_REG + + csindex * 4, ®_data); + printf("%d,", (reg_data & 0x1f)); + /* Vref */ + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_id, + DDR_PHY_DATA, + PAD_CONFIG_PHY_REG, + ®_data); + printf("%d,", (reg_data & 0x7)); + /* DQVref */ + /* Need to add the Read Function from device */ + printf("%d,", 0); + printf("\t\t"); + for (idx = 0; idx < 11; idx++) { + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + 0xd0 + + 12 * csindex + + idx, ®_data); + printf("%d,", (reg_data & 0x3f)); + } + printf("\t\t"); + for (idx = 0; idx < 11; idx++) { + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + 0x10 + + 16 * csindex + + idx, ®_data); + printf("%d,", (reg_data & 0x3f)); + } + printf("\t\t"); + for (idx = 0; idx < 11; idx++) { + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + 0x50 + + 16 * csindex + + idx, ®_data); + printf("%d,", (reg_data & 0x3f)); + } + } + } + } + printf("\n"); + + return MV_OK; +} + +/* + * Register XSB information + */ +int ddr3_tip_register_xsb_info(u32 dev_num, struct hws_xsb_info *xsb_info_table) +{ + memcpy(&xsb_info[dev_num], xsb_info_table, sizeof(struct hws_xsb_info)); + return MV_OK; +} + +/* + * Read ADLL Value + */ +int read_adll_value(u32 pup_values[MAX_INTERFACE_NUM * MAX_BUS_NUM], + int reg_addr, u32 mask) +{ + u32 data_value; + u32 if_id = 0, bus_id = 0; + u32 dev_num = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* + * multi CS support - reg_addr is calucalated in calling function + * with CS offset + */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; + bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + CHECK_STATUS(ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, + bus_id, + DDR_PHY_DATA, reg_addr, + &data_value)); + pup_values[if_id * + tm->num_of_bus_per_interface + bus_id] = + data_value & mask; + } + } + + return 0; +} + +/* + * Write ADLL Value + */ +int write_adll_value(u32 pup_values[MAX_INTERFACE_NUM * MAX_BUS_NUM], + int reg_addr) +{ + u32 if_id = 0, bus_id = 0; + u32 dev_num = 0, data; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* + * multi CS support - reg_addr is calucalated in calling function + * with CS offset + */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; + bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + data = pup_values[if_id * + tm->num_of_bus_per_interface + + bus_id]; + CHECK_STATUS(ddr3_tip_bus_write(dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + reg_addr, data)); + } + } + + return 0; +} + +#ifndef EXCLUDE_SWITCH_DEBUG +u32 rl_version = 1; /* 0 - old RL machine */ +struct hws_tip_config_func_db config_func_info[HWS_MAX_DEVICE_NUM]; +u32 start_xsb_offset = 0; +u8 is_rl_old = 0; +u8 is_freq_old = 0; +u8 is_dfs_disabled = 0; +u32 default_centrlization_value = 0x12; +u32 vref = 0x4; +u32 activate_select_before_run_alg = 1, 
activate_deselect_after_run_alg = 1, + rl_test = 0, reset_read_fifo = 0; +int debug_acc = 0; +u32 ctrl_sweepres[ADLL_LENGTH][MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u32 ctrl_adll[MAX_CS_NUM * MAX_INTERFACE_NUM * MAX_BUS_NUM]; +u8 cs_mask_reg[] = { + 0, 4, 8, 12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 +}; + +u32 xsb_test_table[][8] = { + {0x00000000, 0x11111111, 0x22222222, 0x33333333, 0x44444444, 0x55555555, + 0x66666666, 0x77777777}, + {0x88888888, 0x99999999, 0xaaaaaaaa, 0xbbbbbbbb, 0xcccccccc, 0xdddddddd, + 0xeeeeeeee, 0xffffffff}, + {0x00000000, 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff}, + {0x00000000, 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff}, + {0x00000000, 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff}, + {0x00000000, 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff}, + {0x00000000, 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff}, + {0x00000000, 0x00000000, 0x00000000, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000}, + {0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0xffffffff, 0xffffffff} +}; + +static int ddr3_tip_access_atr(u32 dev_num, u32 flag_id, u32 value, u32 **ptr); + +int ddr3_tip_print_adll(void) +{ + u32 bus_cnt = 0, if_id, data_p1, data_p2, ui_data3, dev_num = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_cnt = 0; bus_cnt < GET_TOPOLOGY_NUM_OF_BUSES(); + bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_cnt, + DDR_PHY_DATA, 0x1, &data_p1)); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, ACCESS_TYPE_UNICAST, + bus_cnt, DDR_PHY_DATA, 0x2, &data_p2)); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, ACCESS_TYPE_UNICAST, + bus_cnt, DDR_PHY_DATA, 0x3, &ui_data3)); + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + (" IF %d bus_cnt %d phy_reg_1_data 0x%x phy_reg_2_data 0x%x phy_reg_3_data 0x%x\n", + if_id, bus_cnt, data_p1, data_p2, + ui_data3)); + } + } + + return MV_OK; +} + +/* + * Set attribute value + */ +int ddr3_tip_set_atr(u32 dev_num, u32 flag_id, u32 value) +{ + int ret; + u32 *ptr_flag = NULL; + + ret = ddr3_tip_access_atr(dev_num, flag_id, value, &ptr_flag); + if (ptr_flag != NULL) { + printf("ddr3_tip_set_atr Flag ID 0x%x value is set to 0x%x (was 0x%x)\n", + flag_id, value, *ptr_flag); + *ptr_flag = value; + } else { + printf("ddr3_tip_set_atr Flag ID 0x%x value is set to 0x%x\n", + flag_id, value); + } + + return ret; +} + +/* + * Access attribute + */ +static int ddr3_tip_access_atr(u32 dev_num, u32 flag_id, u32 value, u32 **ptr) +{ + u32 tmp_val = 0, if_id = 0, pup_id = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + *ptr = NULL; + + switch (flag_id) { + case 0: + *ptr = (u32 *)&(tm->if_act_mask); + break; + + case 0x1: + *ptr = (u32 *)&mask_tune_func; + break; + + case 0x2: + *ptr = (u32 *)&low_freq; + break; + + case 0x3: + *ptr = (u32 *)&medium_freq; + break; + + case 0x4: + *ptr = (u32 *)&generic_init_controller; + break; + + case 0x5: + *ptr = (u32 *)&rl_version; + break; + + case 0x8: + *ptr = (u32 *)&start_xsb_offset; + break; + + case 0x20: + *ptr = (u32 *)&is_rl_old; + break; + + case 0x21: + *ptr = (u32 *)&is_freq_old; + break; + + case 0x23: + *ptr = (u32 *)&is_dfs_disabled; + break; + + case 
0x24: + *ptr = (u32 *)&is_pll_before_init; + break; + + case 0x25: + *ptr = (u32 *)&is_adll_calib_before_init; + break; +#ifdef STATIC_ALGO_SUPPORT + case 0x26: + *ptr = (u32 *)&(silicon_delay[0]); + break; + + case 0x27: + *ptr = (u32 *)&wl_debug_delay; + break; +#endif + case 0x28: + *ptr = (u32 *)&is_tune_result; + break; + + case 0x29: + *ptr = (u32 *)&is_validate_window_per_if; + break; + + case 0x2a: + *ptr = (u32 *)&is_validate_window_per_pup; + break; + + case 0x30: + *ptr = (u32 *)&sweep_cnt; + break; + + case 0x31: + *ptr = (u32 *)&is_bist_reset_bit; + break; + + case 0x32: + *ptr = (u32 *)&is_dfs_in_init; + break; + + case 0x33: + *ptr = (u32 *)&p_finger; + break; + + case 0x34: + *ptr = (u32 *)&n_finger; + break; + + case 0x35: + *ptr = (u32 *)&init_freq; + break; + + case 0x36: + *ptr = (u32 *)&(freq_val[DDR_FREQ_LOW_FREQ]); + break; + + case 0x37: + *ptr = (u32 *)&start_pattern; + break; + + case 0x38: + *ptr = (u32 *)&end_pattern; + break; + + case 0x39: + *ptr = (u32 *)&phy_reg0_val; + break; + + case 0x4a: + *ptr = (u32 *)&phy_reg1_val; + break; + + case 0x4b: + *ptr = (u32 *)&phy_reg2_val; + break; + + case 0x4c: + *ptr = (u32 *)&phy_reg3_val; + break; + + case 0x4e: + *ptr = (u32 *)&sweep_pattern; + break; + + case 0x50: + *ptr = (u32 *)&is_rzq6; + break; + + case 0x51: + *ptr = (u32 *)&znri_data_phy_val; + break; + + case 0x52: + *ptr = (u32 *)&zpri_data_phy_val; + break; + + case 0x53: + *ptr = (u32 *)&finger_test; + break; + + case 0x54: + *ptr = (u32 *)&n_finger_start; + break; + + case 0x55: + *ptr = (u32 *)&n_finger_end; + break; + + case 0x56: + *ptr = (u32 *)&p_finger_start; + break; + + case 0x57: + *ptr = (u32 *)&p_finger_end; + break; + + case 0x58: + *ptr = (u32 *)&p_finger_step; + break; + + case 0x59: + *ptr = (u32 *)&n_finger_step; + break; + + case 0x5a: + *ptr = (u32 *)&znri_ctrl_phy_val; + break; + + case 0x5b: + *ptr = (u32 *)&zpri_ctrl_phy_val; + break; + + case 0x5c: + *ptr = (u32 *)&is_reg_dump; + break; + + case 0x5d: + *ptr = (u32 *)&vref; + break; + + case 0x5e: + *ptr = (u32 *)&mode2_t; + break; + + case 0x5f: + *ptr = (u32 *)&xsb_validate_type; + break; + + case 0x60: + *ptr = (u32 *)&xsb_validation_base_address; + break; + + case 0x67: + *ptr = (u32 *)&activate_select_before_run_alg; + break; + + case 0x68: + *ptr = (u32 *)&activate_deselect_after_run_alg; + break; + + case 0x69: + *ptr = (u32 *)&odt_additional; + break; + + case 0x70: + *ptr = (u32 *)&debug_mode; + break; + + case 0x71: + *ptr = (u32 *)&pbs_pattern; + break; + + case 0x72: + *ptr = (u32 *)&delay_enable; + break; + + case 0x73: + *ptr = (u32 *)&ck_delay; + break; + + case 0x74: + *ptr = (u32 *)&ck_delay_16; + break; + + case 0x75: + *ptr = (u32 *)&ca_delay; + break; + + case 0x100: + *ptr = (u32 *)&debug_dunit; + break; + + case 0x101: + debug_acc = (int)value; + break; + + case 0x102: + debug_training = (u8)value; + break; + + case 0x103: + debug_training_bist = (u8)value; + break; + + case 0x104: + debug_centralization = (u8)value; + break; + + case 0x105: + debug_training_ip = (u8)value; + break; + + case 0x106: + debug_leveling = (u8)value; + break; + + case 0x107: + debug_pbs = (u8)value; + break; + + case 0x108: + debug_training_static = (u8)value; + break; + + case 0x109: + debug_training_access = (u8)value; + break; + + case 0x112: + *ptr = &start_pattern; + break; + + case 0x113: + *ptr = &end_pattern; + break; + + default: + if ((flag_id >= 0x200) && (flag_id < 0x210)) { + if_id = flag_id - 0x200; + *ptr = (u32 *)&(tm->interface_params + [if_id].memory_freq); + } 
else if ((flag_id >= 0x210) && (flag_id < 0x220)) { + if_id = flag_id - 0x210; + *ptr = (u32 *)&(tm->interface_params + [if_id].speed_bin_index); + } else if ((flag_id >= 0x220) && (flag_id < 0x230)) { + if_id = flag_id - 0x220; + *ptr = (u32 *)&(tm->interface_params + [if_id].bus_width); + } else if ((flag_id >= 0x230) && (flag_id < 0x240)) { + if_id = flag_id - 0x230; + *ptr = (u32 *)&(tm->interface_params + [if_id].memory_size); + } else if ((flag_id >= 0x240) && (flag_id < 0x250)) { + if_id = flag_id - 0x240; + *ptr = (u32 *)&(tm->interface_params + [if_id].cas_l); + } else if ((flag_id >= 0x250) && (flag_id < 0x260)) { + if_id = flag_id - 0x250; + *ptr = (u32 *)&(tm->interface_params + [if_id].cas_wl); + } else if ((flag_id >= 0x270) && (flag_id < 0x2cf)) { + if_id = (flag_id - 0x270) / MAX_BUS_NUM; + pup_id = (flag_id - 0x270) % MAX_BUS_NUM; + *ptr = (u32 *)&(tm->interface_params[if_id]. + as_bus_params[pup_id].is_ck_swap); + } else if ((flag_id >= 0x2d0) && (flag_id < 0x32f)) { + if_id = (flag_id - 0x2d0) / MAX_BUS_NUM; + pup_id = (flag_id - 0x2d0) % MAX_BUS_NUM; + *ptr = (u32 *)&(tm->interface_params[if_id]. + as_bus_params[pup_id].is_dqs_swap); + } else if ((flag_id >= 0x330) && (flag_id < 0x38f)) { + if_id = (flag_id - 0x330) / MAX_BUS_NUM; + pup_id = (flag_id - 0x330) % MAX_BUS_NUM; + *ptr = (u32 *)&(tm->interface_params[if_id]. + as_bus_params[pup_id].cs_bitmask); + } else if ((flag_id >= 0x390) && (flag_id < 0x3ef)) { + if_id = (flag_id - 0x390) / MAX_BUS_NUM; + pup_id = (flag_id - 0x390) % MAX_BUS_NUM; + *ptr = (u32 *)&(tm->interface_params + [if_id].as_bus_params + [pup_id].mirror_enable_bitmask); + } else if ((flag_id >= 0x500) && (flag_id <= 0x50f)) { + tmp_val = flag_id - 0x320; + *ptr = (u32 *)&(clamp_tbl[tmp_val]); + } else { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("flag_id out of boundary %d\n", + flag_id)); + return MV_BAD_PARAM; + } + } + + return MV_OK; +} + +#ifndef EXCLUDE_SWITCH_DEBUG +/* + * Print ADLL + */ +int print_adll(u32 dev_num, u32 adll[MAX_INTERFACE_NUM * MAX_BUS_NUM]) +{ + u32 i, j; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (j = 0; j < tm->num_of_bus_per_interface; j++) { + VALIDATE_ACTIVE(tm->bus_act_mask, j); + for (i = 0; i < MAX_INTERFACE_NUM; i++) { + printf("%d ,", + adll[i * tm->num_of_bus_per_interface + j]); + } + } + printf("\n"); + + return MV_OK; +} +#endif + +/* byte_index - only byte 0, 1, 2, or 3, oxff - test all bytes */ +static u32 ddr3_tip_compare(u32 if_id, u32 *p_src, u32 *p_dst, + u32 byte_index) +{ + u32 burst_cnt = 0, addr_offset, i_id; + int b_is_fail = 0; + + addr_offset = + (byte_index == + 0xff) ? 
(u32) 0xffffffff : (u32) (0xff << (byte_index * 8)); + for (burst_cnt = 0; burst_cnt < EXT_ACCESS_BURST_LENGTH; burst_cnt++) { + if ((p_src[burst_cnt] & addr_offset) != + (p_dst[burst_cnt] & addr_offset)) + b_is_fail = 1; + } + + if (b_is_fail == 1) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("IF %d exp: ", if_id)); + for (i_id = 0; i_id <= MAX_INTERFACE_NUM - 1; i_id++) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("0x%8x ", p_src[i_id])); + } + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("\n_i_f %d rcv: ", if_id)); + for (i_id = 0; i_id <= MAX_INTERFACE_NUM - 1; i_id++) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("(0x%8x ", p_dst[i_id])); + } + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, ("\n ")); + } + + return b_is_fail; +} + +/* test_type = 0-tx , 1-rx */ +int ddr3_tip_sweep_test(u32 dev_num, u32 test_type, + u32 mem_addr, u32 is_modify_adll, + u32 start_if, u32 end_if, u32 startpup, u32 endpup) +{ + u32 bus_cnt = 0, adll_val = 0, if_id, ui_prev_adll, ui_mask_bit, + end_adll, start_adll; + u32 reg_addr = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (test_type == 0) { + reg_addr = 1; + ui_mask_bit = 0x3f; + start_adll = 0; + end_adll = ui_mask_bit; + } else { + reg_addr = 3; + ui_mask_bit = 0x1f; + start_adll = 0; + end_adll = ui_mask_bit; + } + + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("==============================\n")); + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("Test type %d (0-tx, 1-rx)\n", test_type)); + + for (if_id = start_if; if_id <= end_if; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_cnt = startpup; bus_cnt < endpup; bus_cnt++) { + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, ACCESS_TYPE_UNICAST, + bus_cnt, DDR_PHY_DATA, reg_addr, + &ui_prev_adll)); + + for (adll_val = start_adll; adll_val <= end_adll; + adll_val++) { + if (is_modify_adll == 1) { + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, bus_cnt, + DDR_PHY_DATA, reg_addr, + adll_val, ui_mask_bit)); + } + } + if (is_modify_adll == 1) { + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + bus_cnt, DDR_PHY_DATA, reg_addr, + ui_prev_adll)); + } + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, ("\n")); + } + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, ("\n")); + } + + return MV_OK; +} + +#ifndef EXCLUDE_SWITCH_DEBUG +/* + * Sweep validation + */ +int ddr3_tip_run_sweep_test(int dev_num, u32 repeat_num, u32 direction, + u32 mode) +{ + u32 pup = 0, start_pup = 0, end_pup = 0; + u32 adll = 0; + u32 res[MAX_INTERFACE_NUM] = { 0 }; + int if_id = 0; + u32 adll_value = 0; + int reg = (direction == 0) ? 
WRITE_CENTRALIZATION_PHY_REG : + READ_CENTRALIZATION_PHY_REG; + enum hws_access_type pup_access; + u32 cs; + u32 max_cs = hws_ddr3_tip_max_cs_get(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (mode == 1) { + /* per pup */ + start_pup = 0; + end_pup = tm->num_of_bus_per_interface - 1; + pup_access = ACCESS_TYPE_UNICAST; + } else { + start_pup = 0; + end_pup = 0; + pup_access = ACCESS_TYPE_MULTICAST; + } + + for (cs = 0; cs < max_cs; cs++) { + for (adll = 0; adll < ADLL_LENGTH; adll++) { + for (if_id = 0; + if_id <= MAX_INTERFACE_NUM - 1; + if_id++) { + VALIDATE_ACTIVE + (tm->if_act_mask, + if_id); + for (pup = start_pup; pup <= end_pup; pup++) { + ctrl_sweepres[adll][if_id][pup] = + 0; + } + } + } + + for (adll = 0; adll < (MAX_INTERFACE_NUM * MAX_BUS_NUM); adll++) + ctrl_adll[adll] = 0; + /* Save DQS value(after algorithm run) */ + read_adll_value(ctrl_adll, + (reg + (cs * CS_REGISTER_ADDR_OFFSET)), + MASK_ALL_BITS); + + /* + * Sweep ADLL from 0:31 on all I/F on all Pup and perform + * BIST on each stage. + */ + for (pup = start_pup; pup <= end_pup; pup++) { + for (adll = 0; adll < ADLL_LENGTH; adll++) { + adll_value = + (direction == 0) ? (adll * 2) : adll; + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, 0, + pup_access, pup, DDR_PHY_DATA, + reg + CS_REG_VALUE(cs), + adll_value)); + hws_ddr3_run_bist(dev_num, sweep_pattern, res, + cs); + /* ddr3_tip_reset_fifo_ptr(dev_num); */ + for (if_id = 0; + if_id <= MAX_INTERFACE_NUM - 1; + if_id++) { + VALIDATE_ACTIVE + (tm->if_act_mask, + if_id); + ctrl_sweepres[adll][if_id][pup] + = res[if_id]; + if (mode == 1) { + CHECK_STATUS + (ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, + pup, + DDR_PHY_DATA, + reg + CS_REG_VALUE(cs), + ctrl_adll[if_id * + cs * + tm->num_of_bus_per_interface + + pup])); + } + } + } + } + printf("Final, CS %d,%s, Sweep, Result, Adll,", cs, + ((direction == 0) ? "TX" : "RX")); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (mode == 1) { + for (pup = start_pup; pup <= end_pup; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + printf("I/F%d-PHY%d , ", if_id, pup); + } + } else { + printf("I/F%d , ", if_id); + } + } + printf("\n"); + + for (adll = 0; adll < ADLL_LENGTH; adll++) { + adll_value = (direction == 0) ? (adll * 2) : adll; + printf("Final,%s, Sweep, Result, %d ,", + ((direction == 0) ? "TX" : "RX"), adll_value); + + for (if_id = 0; + if_id <= MAX_INTERFACE_NUM - 1; + if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (pup = start_pup; pup <= end_pup; pup++) { + printf("%d , ", + ctrl_sweepres[adll][if_id] + [pup]); + } + } + printf("\n"); + } + + /* + * Write back to the phy the Rx DQS value, we store in + * the beginning. + */ + write_adll_value(ctrl_adll, + (reg + cs * CS_REGISTER_ADDR_OFFSET)); + /* print adll results */ + read_adll_value(ctrl_adll, (reg + cs * CS_REGISTER_ADDR_OFFSET), + MASK_ALL_BITS); + printf("%s, DQS, ADLL,,,", (direction == 0) ? 
"Tx" : "Rx"); + print_adll(dev_num, ctrl_adll); + } + ddr3_tip_reset_fifo_ptr(dev_num); + + return 0; +} + +void print_topology(struct hws_topology_map *topology_db) +{ + u32 ui, uj; + + printf("\tinterface_mask: 0x%x\n", topology_db->if_act_mask); + printf("\tNum Bus: %d\n", topology_db->num_of_bus_per_interface); + printf("\tbus_act_mask: 0x%x\n", topology_db->bus_act_mask); + + for (ui = 0; ui < MAX_INTERFACE_NUM; ui++) { + VALIDATE_ACTIVE(topology_db->if_act_mask, ui); + printf("\n\tInterface ID: %d\n", ui); + printf("\t\tDDR Frequency: %s\n", + convert_freq(topology_db-> + interface_params[ui].memory_freq)); + printf("\t\tSpeed_bin: %d\n", + topology_db->interface_params[ui].speed_bin_index); + printf("\t\tBus_width: %d\n", + (4 << topology_db->interface_params[ui].bus_width)); + printf("\t\tMem_size: %s\n", + convert_mem_size(topology_db-> + interface_params[ui].memory_size)); + printf("\t\tCAS-WL: %d\n", + topology_db->interface_params[ui].cas_wl); + printf("\t\tCAS-L: %d\n", + topology_db->interface_params[ui].cas_l); + printf("\t\tTemperature: %d\n", + topology_db->interface_params[ui].interface_temp); + printf("\n"); + for (uj = 0; uj < 4; uj++) { + printf("\t\tBus %d parameters- CS Mask: 0x%x\t", uj, + topology_db->interface_params[ui]. + as_bus_params[uj].cs_bitmask); + printf("Mirror: 0x%x\t", + topology_db->interface_params[ui]. + as_bus_params[uj].mirror_enable_bitmask); + printf("DQS Swap is %s \t", + (topology_db-> + interface_params[ui].as_bus_params[uj]. + is_dqs_swap == 1) ? "enabled" : "disabled"); + printf("Ck Swap:%s\t", + (topology_db-> + interface_params[ui].as_bus_params[uj]. + is_ck_swap == 1) ? "enabled" : "disabled"); + printf("\n"); + } + } +} +#endif + +/* + * Execute XSB Test transaction (rd/wr/both) + */ +int run_xsb_test(u32 dev_num, u32 mem_addr, u32 write_type, + u32 read_type, u32 burst_length) +{ + u32 seq = 0, if_id = 0, addr, cnt; + int ret = MV_OK, ret_tmp; + u32 data_read[MAX_INTERFACE_NUM]; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + addr = mem_addr; + for (cnt = 0; cnt <= burst_length; cnt++) { + seq = (seq + 1) % 8; + if (write_type != 0) { + CHECK_STATUS(ddr3_tip_ext_write + (dev_num, if_id, addr, 1, + xsb_test_table[seq])); + } + if (read_type != 0) { + CHECK_STATUS(ddr3_tip_ext_read + (dev_num, if_id, addr, 1, + data_read)); + } + if ((read_type != 0) && (write_type != 0)) { + ret_tmp = + ddr3_tip_compare(if_id, + xsb_test_table[seq], + data_read, + 0xff); + addr += (EXT_ACCESS_BURST_LENGTH * 4); + ret = (ret != MV_OK) ? ret : ret_tmp; + } + } + } + + return ret; +} + +#else /*EXCLUDE_SWITCH_DEBUG */ + +u32 rl_version = 1; /* 0 - old RL machine */ +u32 vref = 0x4; +u32 start_xsb_offset = 0; +u8 cs_mask_reg[] = { + 0, 4, 8, 12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 +}; + +int run_xsb_test(u32 dev_num, u32 mem_addr, u32 write_type, + u32 read_type, u32 burst_length) +{ + return MV_OK; +} + +#endif diff --git a/drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training.c b/drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training.c new file mode 100644 index 0000000000..b9b0eb7a64 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training.c @@ -0,0 +1,147 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include +#include + +#include "ddr3_init.h" + +#define REG_READ_DATA_SAMPLE_DELAYS_ADDR 0x1538 +#define REG_READ_DATA_SAMPLE_DELAYS_MASK 0x1f +#define REG_READ_DATA_SAMPLE_DELAYS_OFFS 8 + +#define REG_READ_DATA_READY_DELAYS_ADDR 0x153c +#define REG_READ_DATA_READY_DELAYS_MASK 0x1f +#define REG_READ_DATA_READY_DELAYS_OFFS 8 + +int ddr3_if_ecc_enabled(void) +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (DDR3_IS_ECC_PUP4_MODE(tm->bus_act_mask) || + DDR3_IS_ECC_PUP3_MODE(tm->bus_act_mask)) + return 1; + else + return 0; +} + +int ddr3_pre_algo_config(void) +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* Set Bus3 ECC training mode */ + if (DDR3_IS_ECC_PUP3_MODE(tm->bus_act_mask)) { + /* Set Bus3 ECC MUX */ + CHECK_STATUS(ddr3_tip_if_write + (0, ACCESS_TYPE_UNICAST, PARAM_NOT_CARE, + REG_SDRAM_PINS_MUX, 0x100, 0x100)); + } + + /* Set regular ECC training mode (bus4 and bus 3) */ + if ((DDR3_IS_ECC_PUP4_MODE(tm->bus_act_mask)) || + (DDR3_IS_ECC_PUP3_MODE(tm->bus_act_mask))) { + /* Enable ECC Write MUX */ + CHECK_STATUS(ddr3_tip_if_write + (0, ACCESS_TYPE_UNICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, 0x100, 0x100)); + /* General ECC enable */ + CHECK_STATUS(ddr3_tip_if_write + (0, ACCESS_TYPE_UNICAST, PARAM_NOT_CARE, + REG_SDRAM_CONFIG_ADDR, 0x40000, 0x40000)); + /* Disable Read Data ECC MUX */ + CHECK_STATUS(ddr3_tip_if_write + (0, ACCESS_TYPE_UNICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, 0x0, 0x2)); + } + + return MV_OK; +} + +int ddr3_post_algo_config(void) +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + int status; + + status = ddr3_post_run_alg(); + if (MV_OK != status) { + printf("DDR3 Post Run Alg - FAILED 0x%x\n", status); + return status; + } + + /* Un_set ECC training mode */ + if ((DDR3_IS_ECC_PUP4_MODE(tm->bus_act_mask)) || + (DDR3_IS_ECC_PUP3_MODE(tm->bus_act_mask))) { + /* Disable ECC Write MUX */ + CHECK_STATUS(ddr3_tip_if_write + (0, ACCESS_TYPE_UNICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, 0x0, 0x100)); + /* General ECC and Bus3 ECC MUX remains enabled */ + } + + return MV_OK; +} + +int ddr3_hws_hw_training(void) +{ + enum hws_algo_type algo_mode = ALGO_TYPE_DYNAMIC; + int status; + struct init_cntr_param init_param; + + status = ddr3_silicon_pre_init(); + if (MV_OK != status) { + printf("DDR3 Pre silicon Config - FAILED 0x%x\n", status); + return status; + } + + init_param.do_mrs_phy = 1; +#if defined(CONFIG_ARMADA_38X) || defined(CONFIG_ARMADA_39X) + init_param.is_ctrl64_bit = 0; +#else + init_param.is_ctrl64_bit = 1; +#endif +#if defined(CONFIG_ALLEYCAT3) || defined(CONFIG_ARMADA_38X) || \ + defined(CONFIG_ARMADA_39X) + init_param.init_phy = 1; +#else + init_param.init_phy = 0; +#endif + init_param.msys_init = 1; + status = hws_ddr3_tip_init_controller(0, &init_param); + if (MV_OK != status) { + printf("DDR3 init controller - FAILED 0x%x\n", status); + return status; + } + + status = ddr3_silicon_post_init(); + if (MV_OK != status) { + printf("DDR3 Post Init - FAILED 0x%x\n", status); + return status; + } + + status = ddr3_pre_algo_config(); + if (MV_OK != status) { + printf("DDR3 Pre Algo Config - FAILED 0x%x\n", status); + return status; + } + + /* run algorithm in order to configure the PHY */ + status = hws_ddr3_tip_run_alg(0, algo_mode); + if (MV_OK != status) { + printf("DDR3 run algorithm - FAILED 0x%x\n", status); + return status; + } + + status = ddr3_post_algo_config(); + if (MV_OK != status) { + printf("DDR3 Post 
Algo Config - FAILED 0x%x\n", status); + return status; + } + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training.h b/drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training.h new file mode 100644 index 0000000000..17a09530d7 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training.h @@ -0,0 +1,49 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_HWS_HW_TRAINING_H +#define _DDR3_HWS_HW_TRAINING_H + +/* struct used for DLB configuration array */ +struct dlb_config { + u32 reg_addr; + u32 reg_data; +}; + +/* Topology update structure */ +struct topology_update_info { + int update_ecc; + u8 ecc; + int update_width; + u8 width; + int update_ecc_pup3_mode; + u8 ecc_pup_mode_offset; +}; + +/* Topology update defines */ +#define TOPOLOGY_UPDATE_WIDTH_16BIT 1 +#define TOPOLOGY_UPDATE_WIDTH_32BIT 0 +#define TOPOLOGY_UPDATE_WIDTH_32BIT_MASK 0xf +#define TOPOLOGY_UPDATE_WIDTH_16BIT_MASK 0x3 + +#define TOPOLOGY_UPDATE_ECC_ON 1 +#define TOPOLOGY_UPDATE_ECC_OFF 0 +#define TOPOLOGY_UPDATE_ECC_OFFSET_PUP4 4 +#define TOPOLOGY_UPDATE_ECC_OFFSET_PUP3 3 + +/* + * 1. L2 filter should be set at binary header to 0xd000000, + * to avoid conflict with internal register IO. + * 2. U-Boot modifies internal registers base to 0xf100000, + * and than should update L2 filter accordingly to 0xf000000 (3.75 GB) + */ +/* temporary limit l2 filter to 3GiB (LSP issue) */ +#define L2_FILTER_FOR_MAX_MEMORY_SIZE 0xc0000000 +#define ADDRESS_FILTERING_END_REGISTER 0x8c04 + +#define SUB_VERSION 0 + +#endif /* _DDR3_HWS_HW_TRAINING_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training_def.h b/drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training_def.h new file mode 100644 index 0000000000..06d0ab10aa --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_hws_hw_training_def.h @@ -0,0 +1,464 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_HWS_HW_TRAINING_DEF_H +#define _DDR3_HWS_HW_TRAINING_DEF_H + +#define SAR_DDR3_FREQ_MASK 0xfe00000 +#define SAR_CPU_FAB_GET(cpu, fab) (((cpu & 0x7) << 21) | \ + ((fab & 0xf) << 24)) + +#define MAX_CS 4 + +#define MIN_DIMM_ADDR 0x50 +#define FAR_END_DIMM_ADDR 0x50 +#define MAX_DIMM_ADDR 0x60 + +#define SDRAM_CS_SIZE 0xfffffff +#define SDRAM_CS_BASE 0x0 +#define SDRAM_DIMM_SIZE 0x80000000 + +#define CPU_CONFIGURATION_REG(id) (0x21800 + (id * 0x100)) +#define CPU_MRVL_ID_OFFSET 0x10 +#define SAR1_CPU_CORE_MASK 0x38000000 +#define SAR1_CPU_CORE_OFFSET 27 + +#define NEW_FABRIC_TWSI_ADDR 0x4e +#ifdef DB_784MP_GP +#define BUS_WIDTH_ECC_TWSI_ADDR 0x4e +#else +#define BUS_WIDTH_ECC_TWSI_ADDR 0x4f +#endif +#define MV_MAX_DDR3_STATIC_SIZE 50 +#define MV_DDR3_MODES_NUMBER 30 + +#define RESUME_RL_PATTERNS_ADDR 0xfe0000 +#define RESUME_RL_PATTERNS_SIZE 0x100 +#define RESUME_TRAINING_VALUES_ADDR (RESUME_RL_PATTERNS_ADDR + \ + RESUME_RL_PATTERNS_SIZE) +#define RESUME_TRAINING_VALUES_MAX 0xcd0 +#define BOOT_INFO_ADDR (RESUME_RL_PATTERNS_ADDR + 0x1000) +#define CHECKSUM_RESULT_ADDR (BOOT_INFO_ADDR + 0x1000) +#define NUM_OF_REGISTER_ADDR (CHECKSUM_RESULT_ADDR + 4) +#define SUSPEND_MAGIC_WORD 0xdeadb002 +#define REGISTER_LIST_END 0xffffffff + +/* MISC */ +#define INTER_REGS_BASE SOC_REGS_PHY_BASE + +/* DDR */ +#define REG_SDRAM_CONFIG_ADDR 0x1400 +#define REG_SDRAM_CONFIG_MASK 0x9fffffff +#define REG_SDRAM_CONFIG_RFRS_MASK 0x3fff +#define REG_SDRAM_CONFIG_WIDTH_OFFS 15 +#define REG_SDRAM_CONFIG_REGDIMM_OFFS 17 +#define REG_SDRAM_CONFIG_ECC_OFFS 18 +#define REG_SDRAM_CONFIG_IERR_OFFS 19 +#define REG_SDRAM_CONFIG_PUPRSTDIV_OFFS 28 +#define REG_SDRAM_CONFIG_RSTRD_OFFS 30 + +#define REG_SDRAM_PINS_MUX 0x19d4 + +#define REG_DUNIT_CTRL_LOW_ADDR 0x1404 +#define REG_DUNIT_CTRL_LOW_2T_OFFS 3 +#define REG_DUNIT_CTRL_LOW_2T_MASK 0x3 +#define REG_DUNIT_CTRL_LOW_DPDE_OFFS 14 + +#define REG_SDRAM_TIMING_LOW_ADDR 0x1408 +#define REG_SDRAM_TIMING_HIGH_ADDR 0x140c +#define REG_SDRAM_TIMING_H_R2R_OFFS 7 +#define REG_SDRAM_TIMING_H_R2R_MASK 0x3 +#define REG_SDRAM_TIMING_H_R2W_W2R_OFFS 9 +#define REG_SDRAM_TIMING_H_R2W_W2R_MASK 0x3 +#define REG_SDRAM_TIMING_H_W2W_OFFS 11 +#define REG_SDRAM_TIMING_H_W2W_MASK 0x1f +#define REG_SDRAM_TIMING_H_R2R_H_OFFS 19 +#define REG_SDRAM_TIMING_H_R2R_H_MASK 0x7 +#define REG_SDRAM_TIMING_H_R2W_W2R_H_OFFS 22 +#define REG_SDRAM_TIMING_H_R2W_W2R_H_MASK 0x7 + +#define REG_SDRAM_ADDRESS_CTRL_ADDR 0x1410 +#define REG_SDRAM_ADDRESS_SIZE_OFFS 2 +#define REG_SDRAM_ADDRESS_SIZE_HIGH_OFFS 18 +#define REG_SDRAM_ADDRESS_CTRL_STRUCT_OFFS 4 + +#define REG_SDRAM_OPEN_PAGES_ADDR 0x1414 +#define REG_SDRAM_OPERATION_CS_OFFS 8 + +#define REG_SDRAM_OPERATION_ADDR 0x1418 +#define REG_SDRAM_OPERATION_CWA_DELAY_SEL_OFFS 24 +#define REG_SDRAM_OPERATION_CWA_DATA_OFFS 20 +#define REG_SDRAM_OPERATION_CWA_DATA_MASK 0xf +#define REG_SDRAM_OPERATION_CWA_RC_OFFS 16 +#define REG_SDRAM_OPERATION_CWA_RC_MASK 0xf +#define REG_SDRAM_OPERATION_CMD_MR0 0xf03 +#define REG_SDRAM_OPERATION_CMD_MR1 0xf04 +#define REG_SDRAM_OPERATION_CMD_MR2 0xf08 +#define REG_SDRAM_OPERATION_CMD_MR3 0xf09 +#define REG_SDRAM_OPERATION_CMD_RFRS 0xf02 +#define REG_SDRAM_OPERATION_CMD_CWA 0xf0e +#define REG_SDRAM_OPERATION_CMD_RFRS_DONE 0xf +#define REG_SDRAM_OPERATION_CMD_MASK 0xf +#define REG_SDRAM_OPERATION_CS_OFFS 8 + +#define REG_OUDDR3_TIMING_ADDR 0x142c + +#define REG_SDRAM_MODE_ADDR 0x141c + +#define REG_SDRAM_EXT_MODE_ADDR 0x1420 + +#define REG_DDR_CONT_HIGH_ADDR 
0x1424 + +#define REG_ODT_TIME_LOW_ADDR 0x1428 +#define REG_ODT_ON_CTL_RD_OFFS 12 +#define REG_ODT_OFF_CTL_RD_OFFS 16 +#define REG_SDRAM_ERROR_ADDR 0x1454 +#define REG_SDRAM_AUTO_PWR_SAVE_ADDR 0x1474 +#define REG_ODT_TIME_HIGH_ADDR 0x147c + +#define REG_SDRAM_INIT_CTRL_ADDR 0x1480 +#define REG_SDRAM_INIT_CTRL_OFFS 0 +#define REG_SDRAM_INIT_CKE_ASSERT_OFFS 2 +#define REG_SDRAM_INIT_RESET_DEASSERT_OFFS 3 +#define REG_SDRAM_INIT_RESET_MASK_OFFS 1 + +#define REG_SDRAM_ODT_CTRL_LOW_ADDR 0x1494 + +#define REG_SDRAM_ODT_CTRL_HIGH_ADDR 0x1498 +#define REG_SDRAM_ODT_CTRL_HIGH_OVRD_MASK 0x0 +#define REG_SDRAM_ODT_CTRL_HIGH_OVRD_ENA 0x3 + +#define REG_DUNIT_ODT_CTRL_ADDR 0x149c +#define REG_DUNIT_ODT_CTRL_OVRD_OFFS 8 +#define REG_DUNIT_ODT_CTRL_OVRD_VAL_OFFS 9 + +#define REG_DRAM_FIFO_CTRL_ADDR 0x14a0 + +#define REG_DRAM_AXI_CTRL_ADDR 0x14a8 +#define REG_DRAM_AXI_CTRL_AXIDATABUSWIDTH_OFFS 0 + +#define REG_METAL_MASK_ADDR 0x14b0 +#define REG_METAL_MASK_MASK 0xdfffffff +#define REG_METAL_MASK_RETRY_OFFS 0 + +#define REG_DRAM_ADDR_CTRL_DRIVE_STRENGTH_ADDR 0x14c0 + +#define REG_DRAM_DATA_DQS_DRIVE_STRENGTH_ADDR 0x14c4 +#define REG_DRAM_VER_CAL_MACHINE_CTRL_ADDR 0x14c8 +#define REG_DRAM_MAIN_PADS_CAL_ADDR 0x14cc + +#define REG_DRAM_HOR_CAL_MACHINE_CTRL_ADDR 0x17c8 + +#define REG_CS_SIZE_SCRATCH_ADDR 0x1504 +#define REG_DYNAMIC_POWER_SAVE_ADDR 0x1520 +#define REG_DDR_IO_ADDR 0x1524 +#define REG_DDR_IO_CLK_RATIO_OFFS 15 + +#define REG_DFS_ADDR 0x1528 +#define REG_DFS_DLLNEXTSTATE_OFFS 0 +#define REG_DFS_BLOCK_OFFS 1 +#define REG_DFS_SR_OFFS 2 +#define REG_DFS_ATSR_OFFS 3 +#define REG_DFS_RECONF_OFFS 4 +#define REG_DFS_CL_NEXT_STATE_OFFS 8 +#define REG_DFS_CL_NEXT_STATE_MASK 0xf +#define REG_DFS_CWL_NEXT_STATE_OFFS 12 +#define REG_DFS_CWL_NEXT_STATE_MASK 0x7 + +#define REG_READ_DATA_SAMPLE_DELAYS_ADDR 0x1538 +#define REG_READ_DATA_SAMPLE_DELAYS_MASK 0x1f +#define REG_READ_DATA_SAMPLE_DELAYS_OFFS 8 + +#define REG_READ_DATA_READY_DELAYS_ADDR 0x153c +#define REG_READ_DATA_READY_DELAYS_MASK 0x1f +#define REG_READ_DATA_READY_DELAYS_OFFS 8 + +#define START_BURST_IN_ADDR 1 + +#define REG_DRAM_TRAINING_SHADOW_ADDR 0x18488 +#define REG_DRAM_TRAINING_ADDR 0x15b0 +#define REG_DRAM_TRAINING_LOW_FREQ_OFFS 0 +#define REG_DRAM_TRAINING_PATTERNS_OFFS 4 +#define REG_DRAM_TRAINING_MED_FREQ_OFFS 2 +#define REG_DRAM_TRAINING_WL_OFFS 3 +#define REG_DRAM_TRAINING_RL_OFFS 6 +#define REG_DRAM_TRAINING_DQS_RX_OFFS 15 +#define REG_DRAM_TRAINING_DQS_TX_OFFS 16 +#define REG_DRAM_TRAINING_CS_OFFS 20 +#define REG_DRAM_TRAINING_RETEST_OFFS 24 +#define REG_DRAM_TRAINING_DFS_FREQ_OFFS 27 +#define REG_DRAM_TRAINING_DFS_REQ_OFFS 29 +#define REG_DRAM_TRAINING_ERROR_OFFS 30 +#define REG_DRAM_TRAINING_AUTO_OFFS 31 +#define REG_DRAM_TRAINING_RETEST_PAR 0x3 +#define REG_DRAM_TRAINING_RETEST_MASK 0xf8ffffff +#define REG_DRAM_TRAINING_CS_MASK 0xff0fffff +#define REG_DRAM_TRAINING_PATTERNS_MASK 0xff0f0000 + +#define REG_DRAM_TRAINING_1_ADDR 0x15b4 +#define REG_DRAM_TRAINING_1_TRNBPOINT_OFFS 16 + +#define REG_DRAM_TRAINING_2_ADDR 0x15b8 +#define REG_DRAM_TRAINING_2_OVERRUN_OFFS 17 +#define REG_DRAM_TRAINING_2_FIFO_RST_OFFS 4 +#define REG_DRAM_TRAINING_2_RL_MODE_OFFS 3 +#define REG_DRAM_TRAINING_2_WL_MODE_OFFS 2 +#define REG_DRAM_TRAINING_2_ECC_MUX_OFFS 1 +#define REG_DRAM_TRAINING_2_SW_OVRD_OFFS 0 + +#define REG_DRAM_TRAINING_PATTERN_BASE_ADDR 0x15bc +#define REG_DRAM_TRAINING_PATTERN_BASE_OFFS 3 + +#define REG_TRAINING_DEBUG_2_ADDR 0x15c4 +#define REG_TRAINING_DEBUG_2_OFFS 16 +#define REG_TRAINING_DEBUG_2_MASK 0x3 + +#define REG_TRAINING_DEBUG_3_ADDR 
0x15c8 +#define REG_TRAINING_DEBUG_3_OFFS 3 +#define REG_TRAINING_DEBUG_3_MASK 0x7 + +#define MR_CS_ADDR_OFFS 4 + +#define REG_DDR3_MR0_ADDR 0x15d0 +#define REG_DDR3_MR0_CS_ADDR 0x1870 +#define REG_DDR3_MR0_CL_MASK 0x74 +#define REG_DDR3_MR0_CL_OFFS 2 +#define REG_DDR3_MR0_CL_HIGH_OFFS 3 +#define CL_MASK 0xf + +#define REG_DDR3_MR1_ADDR 0x15d4 +#define REG_DDR3_MR1_CS_ADDR 0x1874 +#define REG_DDR3_MR1_RTT_MASK 0xfffffdbb +#define REG_DDR3_MR1_DLL_ENA_OFFS 0 +#define REG_DDR3_MR1_RTT_DISABLED 0x0 +#define REG_DDR3_MR1_RTT_RZQ2 0x40 +#define REG_DDR3_MR1_RTT_RZQ4 0x2 +#define REG_DDR3_MR1_RTT_RZQ6 0x42 +#define REG_DDR3_MR1_RTT_RZQ8 0x202 +#define REG_DDR3_MR1_RTT_RZQ12 0x4 +/* WL-disabled, OB-enabled */ +#define REG_DDR3_MR1_OUTBUF_WL_MASK 0xffffef7f +/* Output Buffer Disabled */ +#define REG_DDR3_MR1_OUTBUF_DIS_OFFS 12 +#define REG_DDR3_MR1_WL_ENA_OFFS 7 +#define REG_DDR3_MR1_WL_ENA 0x80 /* WL Enabled */ +#define REG_DDR3_MR1_ODT_MASK 0xfffffdbb + +#define REG_DDR3_MR2_ADDR 0x15d8 +#define REG_DDR3_MR2_CS_ADDR 0x1878 +#define REG_DDR3_MR2_CWL_OFFS 3 +#define REG_DDR3_MR2_CWL_MASK 0x7 +#define REG_DDR3_MR2_ODT_MASK 0xfffff9ff +#define REG_DDR3_MR3_ADDR 0x15dc +#define REG_DDR3_MR3_CS_ADDR 0x187c + +#define REG_DDR3_RANK_CTRL_ADDR 0x15e0 +#define REG_DDR3_RANK_CTRL_CS_ENA_MASK 0xf +#define REG_DDR3_RANK_CTRL_MIRROR_OFFS 4 + +#define REG_ZQC_CONF_ADDR 0x15e4 + +#define REG_DRAM_PHY_CONFIG_ADDR 0x15ec +#define REG_DRAM_PHY_CONFIG_MASK 0x3fffffff + +#define REG_ODPG_CNTRL_ADDR 0x1600 +#define REG_ODPG_CNTRL_OFFS 21 + +#define REG_PHY_LOCK_MASK_ADDR 0x1670 +#define REG_PHY_LOCK_MASK_MASK 0xfffff000 + +#define REG_PHY_LOCK_STATUS_ADDR 0x1674 +#define REG_PHY_LOCK_STATUS_LOCK_OFFS 9 +#define REG_PHY_LOCK_STATUS_LOCK_MASK 0xfff +#define REG_PHY_LOCK_APLL_ADLL_STATUS_MASK 0x7ff + +#define REG_PHY_REGISTRY_FILE_ACCESS_ADDR 0x16a0 +#define REG_PHY_REGISTRY_FILE_ACCESS_OP_WR 0xc0000000 +#define REG_PHY_REGISTRY_FILE_ACCESS_OP_RD 0x80000000 +#define REG_PHY_REGISTRY_FILE_ACCESS_OP_DONE 0x80000000 +#define REG_PHY_BC_OFFS 27 +#define REG_PHY_CNTRL_OFFS 26 +#define REG_PHY_CS_OFFS 16 +#define REG_PHY_DQS_REF_DLY_OFFS 10 +#define REG_PHY_PHASE_OFFS 8 +#define REG_PHY_PUP_OFFS 22 + +#define REG_TRAINING_WL_ADDR 0x16ac +#define REG_TRAINING_WL_CS_MASK 0xfffffffc +#define REG_TRAINING_WL_UPD_OFFS 2 +#define REG_TRAINING_WL_CS_DONE_OFFS 3 +#define REG_TRAINING_WL_RATIO_MASK 0xffffff0f +#define REG_TRAINING_WL_1TO1 0x50 +#define REG_TRAINING_WL_2TO1 0x10 +#define REG_TRAINING_WL_DELAYEXP_MASK 0x20000000 +#define REG_TRAINING_WL_RESULTS_MASK 0x000001ff +#define REG_TRAINING_WL_RESULTS_OFFS 20 + +#define REG_REGISTERED_DRAM_CTRL_ADDR 0x16d0 +#define REG_REGISTERED_DRAM_CTRL_SR_FLOAT_OFFS 15 +#define REG_REGISTERED_DRAM_CTRL_PARITY_MASK 0x3f + +/* DLB */ +#define REG_STATIC_DRAM_DLB_CONTROL 0x1700 +#define DLB_BUS_OPTIMIZATION_WEIGHTS_REG 0x1704 +#define DLB_AGING_REGISTER 0x1708 +#define DLB_EVICTION_CONTROL_REG 0x170c +#define DLB_EVICTION_TIMERS_REGISTER_REG 0x1710 +#define DLB_USER_COMMAND_REG 0x1714 +#define DLB_BUS_WEIGHTS_DIFF_CS 0x1770 +#define DLB_BUS_WEIGHTS_DIFF_BG 0x1774 +#define DLB_BUS_WEIGHTS_SAME_BG 0x1778 +#define DLB_BUS_WEIGHTS_RD_WR 0x177c +#define DLB_BUS_WEIGHTS_ATTR_SYS_PRIO 0x1780 +#define DLB_MAIN_QUEUE_MAP 0x1784 +#define DLB_LINE_SPLIT 0x1788 + +#define DLB_ENABLE 0x1 +#define DLB_WRITE_COALESING (0x1 << 2) +#define DLB_AXI_PREFETCH_EN (0x1 << 3) +#define DLB_MBUS_PREFETCH_EN (0x1 << 4) +#define PREFETCH_N_LN_SZ_TR (0x1 << 6) +#define DLB_INTERJECTION_ENABLE (0x1 << 3) + +/* CPU */ 
+#define REG_BOOTROM_ROUTINE_ADDR 0x182d0 +#define REG_BOOTROM_ROUTINE_DRAM_INIT_OFFS 12 + +#define REG_DRAM_INIT_CTRL_STATUS_ADDR 0x18488 +#define REG_DRAM_INIT_CTRL_TRN_CLK_OFFS 16 +#define REG_CPU_DIV_CLK_CTRL_0_NEW_RATIO 0x000200ff +#define REG_DRAM_INIT_CTRL_STATUS_2_ADDR 0x1488 + +#define REG_CPU_DIV_CLK_CTRL_0_ADDR 0x18700 + +#define REG_CPU_DIV_CLK_CTRL_1_ADDR 0x18704 +#define REG_CPU_DIV_CLK_CTRL_2_ADDR 0x18708 + +#define REG_CPU_DIV_CLK_CTRL_3_ADDR 0x1870c +#define REG_CPU_DIV_CLK_CTRL_3_FREQ_MASK 0xffffc0ff +#define REG_CPU_DIV_CLK_CTRL_3_FREQ_OFFS 8 + +#define REG_CPU_DIV_CLK_CTRL_4_ADDR 0x18710 + +#define REG_CPU_DIV_CLK_STATUS_0_ADDR 0x18718 +#define REG_CPU_DIV_CLK_ALL_STABLE_OFFS 8 + +#define REG_CPU_PLL_CTRL_0_ADDR 0x1871c +#define REG_CPU_PLL_STATUS_0_ADDR 0x18724 +#define REG_CORE_DIV_CLK_CTRL_ADDR 0x18740 +#define REG_CORE_DIV_CLK_STATUS_ADDR 0x18744 +#define REG_DDRPHY_APLL_CTRL_ADDR 0x18780 + +#define REG_DDRPHY_APLL_CTRL_2_ADDR 0x18784 +#define REG_SFABRIC_CLK_CTRL_ADDR 0x20858 +#define REG_SFABRIC_CLK_CTRL_SMPL_OFFS 8 + +/* DRAM Windows */ +#define REG_XBAR_WIN_19_CTRL_ADDR 0x200e8 +#define REG_XBAR_WIN_4_CTRL_ADDR 0x20040 +#define REG_XBAR_WIN_4_BASE_ADDR 0x20044 +#define REG_XBAR_WIN_4_REMAP_ADDR 0x20048 +#define REG_FASTPATH_WIN_0_CTRL_ADDR 0x20184 +#define REG_XBAR_WIN_7_REMAP_ADDR 0x20078 + +/* SRAM */ +#define REG_CDI_CONFIG_ADDR 0x20220 +#define REG_SRAM_WINDOW_0_ADDR 0x20240 +#define REG_SRAM_WINDOW_0_ENA_OFFS 0 +#define REG_SRAM_WINDOW_1_ADDR 0x20244 +#define REG_SRAM_L2_ENA_ADDR 0x8500 +#define REG_SRAM_CLEAN_BY_WAY_ADDR 0x87bc + +/* Timers */ +#define REG_TIMERS_CTRL_ADDR 0x20300 +#define REG_TIMERS_EVENTS_ADDR 0x20304 +#define REG_TIMER0_VALUE_ADDR 0x20314 +#define REG_TIMER1_VALUE_ADDR 0x2031c +#define REG_TIMER0_ENABLE_MASK 0x1 + +#define MV_BOARD_REFCLK_25MHZ 25000000 +#define CNTMR_RELOAD_REG(tmr) (REG_TIMERS_CTRL_ADDR + 0x10 + (tmr * 8)) +#define CNTMR_VAL_REG(tmr) (REG_TIMERS_CTRL_ADDR + 0x14 + (tmr * 8)) +#define CNTMR_CTRL_REG(tmr) (REG_TIMERS_CTRL_ADDR) +#define CTCR_ARM_TIMER_EN_OFFS(timer) (timer * 2) +#define CTCR_ARM_TIMER_EN_MASK(timer) (1 << CTCR_ARM_TIMER_EN_OFFS(timer)) +#define CTCR_ARM_TIMER_EN(timer) (1 << CTCR_ARM_TIMER_EN_OFFS(timer)) + +#define CTCR_ARM_TIMER_AUTO_OFFS(timer) (1 + (timer * 2)) +#define CTCR_ARM_TIMER_AUTO_MASK(timer) (1 << CTCR_ARM_TIMER_EN_OFFS(timer)) +#define CTCR_ARM_TIMER_AUTO_EN(timer) (1 << CTCR_ARM_TIMER_AUTO_OFFS(timer)) + +/* PMU */ +#define REG_PMU_I_F_CTRL_ADDR 0x1c090 +#define REG_PMU_DUNIT_BLK_OFFS 16 +#define REG_PMU_DUNIT_RFRS_OFFS 20 +#define REG_PMU_DUNIT_ACK_OFFS 24 + +/* MBUS */ +#define MBUS_UNITS_PRIORITY_CONTROL_REG (MBUS_REGS_OFFSET + 0x420) +#define FABRIC_UNITS_PRIORITY_CONTROL_REG (MBUS_REGS_OFFSET + 0x424) +#define MBUS_UNITS_PREFETCH_CONTROL_REG (MBUS_REGS_OFFSET + 0x428) +#define FABRIC_UNITS_PREFETCH_CONTROL_REG (MBUS_REGS_OFFSET + 0x42c) + +#define REG_PM_STAT_MASK_ADDR 0x2210c +#define REG_PM_STAT_MASK_CPU0_IDLE_MASK_OFFS 16 + +#define REG_PM_EVENT_STAT_MASK_ADDR 0x22120 +#define REG_PM_EVENT_STAT_MASK_DFS_DONE_OFFS 17 + +#define REG_PM_CTRL_CONFIG_ADDR 0x22104 +#define REG_PM_CTRL_CONFIG_DFS_REQ_OFFS 18 + +#define REG_FABRIC_LOCAL_IRQ_MASK_ADDR 0x218c4 +#define REG_FABRIC_LOCAL_IRQ_PMU_MASK_OFFS 18 + +/* Controller revision info */ +#define PCI_CLASS_CODE_AND_REVISION_ID 0x008 +#define PCCRIR_REVID_OFFS 0 /* Revision ID */ +#define PCCRIR_REVID_MASK (0xff << PCCRIR_REVID_OFFS) + +/* Power Management Clock Gating Control Register */ +#define POWER_MNG_CTRL_REG 0x18220 +#define 
PEX_DEVICE_AND_VENDOR_ID 0x000 +#define PEX_CFG_DIRECT_ACCESS(if, reg) (PEX_IF_REGS_BASE(if) + (reg)) +#define PMC_PEXSTOPCLOCK_OFFS(p) ((p) < 8 ? (5 + (p)) : (18 + (p))) +#define PMC_PEXSTOPCLOCK_MASK(p) (1 << PMC_PEXSTOPCLOCK_OFFS(p)) +#define PMC_PEXSTOPCLOCK_EN(p) (1 << PMC_PEXSTOPCLOCK_OFFS(p)) +#define PMC_PEXSTOPCLOCK_STOP(p) (0 << PMC_PEXSTOPCLOCK_OFFS(p)) + +/* TWSI */ +#define TWSI_DATA_ADDR_MASK 0x7 +#define TWSI_DATA_ADDR_OFFS 1 + +/* General */ +#define MAX_CS 4 + +/* Frequencies */ +#define FAB_OPT 21 +#define CLK_CPU 12 +#define CLK_VCO (2 * CLK_CPU) +#define CLK_DDR 12 + +/* CPU Frequencies: */ +#define CLK_CPU_1000 0 +#define CLK_CPU_1066 1 +#define CLK_CPU_1200 2 +#define CLK_CPU_1333 3 +#define CLK_CPU_1500 4 +#define CLK_CPU_1666 5 +#define CLK_CPU_1800 6 +#define CLK_CPU_2000 7 +#define CLK_CPU_600 8 +#define CLK_CPU_667 9 +#define CLK_CPU_800 0xa + +/* Extra Cpu Frequencies: */ +#define CLK_CPU_1600 11 +#define CLK_CPU_2133 12 +#define CLK_CPU_2200 13 +#define CLK_CPU_2400 14 + +#endif /* _DDR3_HWS_HW_TRAINING_DEF_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_hws_sil_training.h b/drivers/ddr/marvell/a38x/old/ddr3_hws_sil_training.h new file mode 100644 index 0000000000..544237a276 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_hws_sil_training.h @@ -0,0 +1,17 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_HWS_SIL_TRAINING_H +#define _DDR3_HWS_SIL_TRAINING_H + +#include "ddr3_training_ip.h" +#include "ddr3_training_ip_prv_if.h" + +int ddr3_silicon_pre_config(void); +int ddr3_silicon_init(void); +int ddr3_silicon_get_ddr_target_freq(u32 *ddr_freq); + +#endif /* _DDR3_HWS_SIL_TRAINING_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_init.c b/drivers/ddr/marvell/a38x/old/ddr3_init.c new file mode 100644 index 0000000000..9818c3d494 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_init.c @@ -0,0 +1,768 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include +#include + +#include "ddr3_init.h" + +#include "../../../../arch/arm/mach-mvebu/serdes/a38x/sys_env_lib.h" + +static struct dlb_config ddr3_dlb_config_table[] = { + {REG_STATIC_DRAM_DLB_CONTROL, 0x2000005c}, + {DLB_BUS_OPTIMIZATION_WEIGHTS_REG, 0x00880000}, + {DLB_AGING_REGISTER, 0x0f7f007f}, + {DLB_EVICTION_CONTROL_REG, 0x0000129f}, + {DLB_EVICTION_TIMERS_REGISTER_REG, 0x00ff0000}, + {DLB_BUS_WEIGHTS_DIFF_CS, 0x04030802}, + {DLB_BUS_WEIGHTS_DIFF_BG, 0x00000a02}, + {DLB_BUS_WEIGHTS_SAME_BG, 0x09000a01}, + {DLB_BUS_WEIGHTS_RD_WR, 0x00020005}, + {DLB_BUS_WEIGHTS_ATTR_SYS_PRIO, 0x00060f10}, + {DLB_MAIN_QUEUE_MAP, 0x00000543}, + {DLB_LINE_SPLIT, 0x00000000}, + {DLB_USER_COMMAND_REG, 0x00000000}, + {0x0, 0x0} +}; + +static struct dlb_config ddr3_dlb_config_table_a0[] = { + {REG_STATIC_DRAM_DLB_CONTROL, 0x2000005c}, + {DLB_BUS_OPTIMIZATION_WEIGHTS_REG, 0x00880000}, + {DLB_AGING_REGISTER, 0x0f7f007f}, + {DLB_EVICTION_CONTROL_REG, 0x0000129f}, + {DLB_EVICTION_TIMERS_REGISTER_REG, 0x00ff0000}, + {DLB_BUS_WEIGHTS_DIFF_CS, 0x04030802}, + {DLB_BUS_WEIGHTS_DIFF_BG, 0x00000a02}, + {DLB_BUS_WEIGHTS_SAME_BG, 0x09000a01}, + {DLB_BUS_WEIGHTS_RD_WR, 0x00020005}, + {DLB_BUS_WEIGHTS_ATTR_SYS_PRIO, 0x00060f10}, + {DLB_MAIN_QUEUE_MAP, 0x00000543}, + {DLB_LINE_SPLIT, 0x00000000}, + {DLB_USER_COMMAND_REG, 0x00000000}, + {0x0, 0x0} +}; + +#if defined(CONFIG_ARMADA_38X) +struct dram_modes { + char *mode_name; + u8 cpu_freq; + u8 fab_freq; + u8 chip_id; + u8 chip_board_rev; + struct reg_data *regs; +}; + +struct dram_modes ddr_modes[] = { +#ifdef SUPPORT_STATIC_DUNIT_CONFIG + /* Conf name, CPUFreq, Fab_freq, Chip ID, Chip/Board, MC regs*/ +#ifdef CONFIG_CUSTOMER_BOARD_SUPPORT + {"a38x_customer_0_800", DDR_FREQ_800, 0, 0x0, A38X_CUSTOMER_BOARD_ID0, + ddr3_customer_800}, + {"a38x_customer_1_800", DDR_FREQ_800, 0, 0x0, A38X_CUSTOMER_BOARD_ID1, + ddr3_customer_800}, +#else + {"a38x_533", DDR_FREQ_533, 0, 0x0, MARVELL_BOARD, ddr3_a38x_533}, + {"a38x_667", DDR_FREQ_667, 0, 0x0, MARVELL_BOARD, ddr3_a38x_667}, + {"a38x_800", DDR_FREQ_800, 0, 0x0, MARVELL_BOARD, ddr3_a38x_800}, + {"a38x_933", DDR_FREQ_933, 0, 0x0, MARVELL_BOARD, ddr3_a38x_933}, +#endif +#endif +}; +#endif /* defined(CONFIG_ARMADA_38X) */ + +/* Translates topology map definitions to real memory size in bits */ +u32 mem_size[] = { + ADDR_SIZE_512MB, ADDR_SIZE_1GB, ADDR_SIZE_2GB, ADDR_SIZE_4GB, + ADDR_SIZE_8GB +}; + +static char *ddr_type = "DDR3"; + +/* + * Set 1 to use dynamic DUNIT configuration, + * set 0 (supported for A380 and AC3) to configure DUNIT in values set by + * ddr3_tip_init_specific_reg_config + */ +u8 generic_init_controller = 1; + +#ifdef SUPPORT_STATIC_DUNIT_CONFIG +static u32 ddr3_get_static_ddr_mode(void); +#endif +static int ddr3_hws_tune_training_params(u8 dev_num); + +/* device revision */ +#define DEV_VERSION_ID_REG 0x1823c +#define REVISON_ID_OFFS 8 +#define REVISON_ID_MASK 0xf00 + +/* A38x revisions */ +#define MV_88F68XX_Z1_ID 0x0 +#define MV_88F68XX_A0_ID 0x4 +/* A39x revisions */ +#define MV_88F69XX_Z1_ID 0x2 + +/* + * sys_env_device_rev_get - Get Marvell controller device revision number + * + * DESCRIPTION: + * This function returns 8bit describing the device revision as defined + * Revision ID Register. + * + * INPUT: + * None. + * + * OUTPUT: + * None. 
+ * + * RETURN: + * 8bit describing Marvell controller revision number + */ +u8 sys_env_device_rev_get(void) +{ + u32 value; + + value = reg_read(DEV_VERSION_ID_REG); + return (value & (REVISON_ID_MASK)) >> REVISON_ID_OFFS; +} + +/* + * sys_env_dlb_config_ptr_get + * + * DESCRIPTION: defines pointer to the DLB configuration table + * + * INPUT: none + * + * OUTPUT: pointer to DLB configuration table + * + * RETURN: + * returns pointer to DLB configuration table + */ +struct dlb_config *sys_env_dlb_config_ptr_get(void) +{ +#ifdef CONFIG_ARMADA_39X + return &ddr3_dlb_config_table_a0[0]; +#else + if (sys_env_device_rev_get() == MV_88F68XX_A0_ID) + return &ddr3_dlb_config_table_a0[0]; + else + return &ddr3_dlb_config_table[0]; +#endif +} + +/* + * sys_env_get_cs_ena_from_reg + * + * DESCRIPTION: Get bit mask of enabled CS + * + * INPUT: None + * + * OUTPUT: None + * + * RETURN: + * Bit mask of enabled CS, 1 if only CS0 enabled, + * 3 if both CS0 and CS1 enabled + */ +u32 sys_env_get_cs_ena_from_reg(void) +{ + return reg_read(REG_DDR3_RANK_CTRL_ADDR) & + REG_DDR3_RANK_CTRL_CS_ENA_MASK; +} + +static void ddr3_restore_and_set_final_windows(u32 *win) +{ + u32 win_ctrl_reg, num_of_win_regs; + u32 cs_ena = sys_env_get_cs_ena_from_reg(); + u32 ui; + + win_ctrl_reg = REG_XBAR_WIN_4_CTRL_ADDR; + num_of_win_regs = 16; + + /* Return XBAR windows 4-7 or 16-19 init configuration */ + for (ui = 0; ui < num_of_win_regs; ui++) + reg_write((win_ctrl_reg + 0x4 * ui), win[ui]); + + printf("%s Training Sequence - Switching XBAR Window to FastPath Window\n", + ddr_type); + +#if defined DYNAMIC_CS_SIZE_CONFIG + if (ddr3_fast_path_dynamic_cs_size_config(cs_ena) != MV_OK) + printf("ddr3_fast_path_dynamic_cs_size_config FAILED\n"); +#else + u32 reg, cs; + reg = 0x1fffffe1; + for (cs = 0; cs < MAX_CS; cs++) { + if (cs_ena & (1 << cs)) { + reg |= (cs << 2); + break; + } + } + /* Open fast path Window to - 0.5G */ + reg_write(REG_FASTPATH_WIN_0_CTRL_ADDR, reg); +#endif +} + +static int ddr3_save_and_set_training_windows(u32 *win) +{ + u32 cs_ena; + u32 reg, tmp_count, cs, ui; + u32 win_ctrl_reg, win_base_reg, win_remap_reg; + u32 num_of_win_regs, win_jump_index; + win_ctrl_reg = REG_XBAR_WIN_4_CTRL_ADDR; + win_base_reg = REG_XBAR_WIN_4_BASE_ADDR; + win_remap_reg = REG_XBAR_WIN_4_REMAP_ADDR; + win_jump_index = 0x10; + num_of_win_regs = 16; + struct hws_topology_map *tm = ddr3_get_topology_map(); + +#ifdef DISABLE_L2_FILTERING_DURING_DDR_TRAINING + /* + * Disable L2 filtering during DDR training + * (when Cross Bar window is open) + */ + reg_write(ADDRESS_FILTERING_END_REGISTER, 0); +#endif + + cs_ena = tm->interface_params[0].as_bus_params[0].cs_bitmask; + + /* Close XBAR Window 19 - Not needed */ + /* {0x000200e8} - Open Mbus Window - 2G */ + reg_write(REG_XBAR_WIN_19_CTRL_ADDR, 0); + + /* Save XBAR Windows 4-19 init configurations */ + for (ui = 0; ui < num_of_win_regs; ui++) + win[ui] = reg_read(win_ctrl_reg + 0x4 * ui); + + /* Open XBAR Windows 4-7 or 16-19 for other CS */ + reg = 0; + tmp_count = 0; + for (cs = 0; cs < MAX_CS; cs++) { + if (cs_ena & (1 << cs)) { + switch (cs) { + case 0: + reg = 0x0e00; + break; + case 1: + reg = 0x0d00; + break; + case 2: + reg = 0x0b00; + break; + case 3: + reg = 0x0700; + break; + } + reg |= (1 << 0); + reg |= (SDRAM_CS_SIZE & 0xffff0000); + + reg_write(win_ctrl_reg + win_jump_index * tmp_count, + reg); + reg = (((SDRAM_CS_SIZE + 1) * (tmp_count)) & + 0xffff0000); + reg_write(win_base_reg + win_jump_index * tmp_count, + reg); + + if (win_remap_reg <= REG_XBAR_WIN_7_REMAP_ADDR) +
reg_write(win_remap_reg + + win_jump_index * tmp_count, 0); + + tmp_count++; + } + } + + return MV_OK; +} + +/* + * Name: ddr3_init - Main DDR3 Init function + * Desc: This routine initialize the DDR3 MC and runs HW training. + * Args: None. + * Notes: + * Returns: None. + */ +int ddr3_init(void) +{ + u32 reg = 0; + u32 soc_num; + int status; + u32 win[16]; + + /* SoC/Board special Initializtions */ + /* Get version from internal library */ + ddr3_print_version(); + + /*Add sub_version string */ + DEBUG_INIT_C("", SUB_VERSION, 1); + + /* Switching CPU to MRVL ID */ + soc_num = (reg_read(REG_SAMPLE_RESET_HIGH_ADDR) & SAR1_CPU_CORE_MASK) >> + SAR1_CPU_CORE_OFFSET; + switch (soc_num) { + case 0x3: + case 0x1: + reg_bit_set(CPU_CONFIGURATION_REG(1), CPU_MRVL_ID_OFFSET); + case 0x0: + reg_bit_set(CPU_CONFIGURATION_REG(0), CPU_MRVL_ID_OFFSET); + default: + break; + } + + /* + * Set DRAM Reset Mask in case detected GPIO indication of wakeup from + * suspend i.e the DRAM values will not be overwritten / reset when + * waking from suspend + */ + if (sys_env_suspend_wakeup_check() == + SUSPEND_WAKEUP_ENABLED_GPIO_DETECTED) { + reg_bit_set(REG_SDRAM_INIT_CTRL_ADDR, + 1 << REG_SDRAM_INIT_RESET_MASK_OFFS); + } + + /* + * Stage 0 - Set board configuration + */ + + /* Check if DRAM is already initialized */ + if (reg_read(REG_BOOTROM_ROUTINE_ADDR) & + (1 << REG_BOOTROM_ROUTINE_DRAM_INIT_OFFS)) { + printf("%s Training Sequence - 2nd boot - Skip\n", ddr_type); + return MV_OK; + } + + /* + * Stage 1 - Dunit Setup + */ + + /* Fix read ready phases for all SOC in reg 0x15c8 */ + reg = reg_read(REG_TRAINING_DEBUG_3_ADDR); + reg &= ~(REG_TRAINING_DEBUG_3_MASK); + reg |= 0x4; /* Phase 0 */ + reg &= ~(REG_TRAINING_DEBUG_3_MASK << REG_TRAINING_DEBUG_3_OFFS); + reg |= (0x4 << (1 * REG_TRAINING_DEBUG_3_OFFS)); /* Phase 1 */ + reg &= ~(REG_TRAINING_DEBUG_3_MASK << (3 * REG_TRAINING_DEBUG_3_OFFS)); + reg |= (0x6 << (3 * REG_TRAINING_DEBUG_3_OFFS)); /* Phase 3 */ + reg &= ~(REG_TRAINING_DEBUG_3_MASK << (4 * REG_TRAINING_DEBUG_3_OFFS)); + reg |= (0x6 << (4 * REG_TRAINING_DEBUG_3_OFFS)); + reg &= ~(REG_TRAINING_DEBUG_3_MASK << (5 * REG_TRAINING_DEBUG_3_OFFS)); + reg |= (0x6 << (5 * REG_TRAINING_DEBUG_3_OFFS)); + reg_write(REG_TRAINING_DEBUG_3_ADDR, reg); + + /* + * Axi_bresp_mode[8] = Compliant, + * Axi_addr_decode_cntrl[11] = Internal, + * Axi_data_bus_width[0] = 128bit + * */ + /* 0x14a8 - AXI Control Register */ + reg_write(REG_DRAM_AXI_CTRL_ADDR, 0); + + /* + * Stage 2 - Training Values Setup + */ + /* Set X-BAR windows for the training sequence */ + ddr3_save_and_set_training_windows(win); + +#ifdef SUPPORT_STATIC_DUNIT_CONFIG + /* + * Load static controller configuration (in case dynamic/generic init + * is not enabled + */ + if (generic_init_controller == 0) { + ddr3_tip_init_specific_reg_config(0, + ddr_modes + [ddr3_get_static_ddr_mode + ()].regs); + } +#endif + + /* Tune training algo paramteres */ + status = ddr3_hws_tune_training_params(0); + if (MV_OK != status) + return status; + + /* Set log level for training lib */ + ddr3_hws_set_log_level(DEBUG_BLOCK_ALL, DEBUG_LEVEL_ERROR); + + /* Start New Training IP */ + status = ddr3_hws_hw_training(); + if (MV_OK != status) { + printf("%s Training Sequence - FAILED\n", ddr_type); + return status; + } + + /* + * Stage 3 - Finish + */ + /* Restore and set windows */ + ddr3_restore_and_set_final_windows(win); + + /* Update DRAM init indication in bootROM register */ + reg = reg_read(REG_BOOTROM_ROUTINE_ADDR); + reg_write(REG_BOOTROM_ROUTINE_ADDR, + reg | (1 << 
REG_BOOTROM_ROUTINE_DRAM_INIT_OFFS)); + + /* DLB config */ + ddr3_new_tip_dlb_config(); + +#if defined(ECC_SUPPORT) + if (ddr3_if_ecc_enabled()) + ddr3_new_tip_ecc_scrub(); +#endif + + printf("%s Training Sequence - Ended Successfully\n", ddr_type); + + return MV_OK; +} + +/* + * Name: ddr3_get_cpu_freq + * Desc: read S@R and return CPU frequency + * Args: + * Notes: + * Returns: required value + */ +u32 ddr3_get_cpu_freq(void) +{ + return ddr3_tip_get_init_freq(); +} + +/* + * Name: ddr3_get_fab_opt + * Desc: read S@R and return fabric option + * Args: + * Notes: + * Returns: required value + */ +u32 ddr3_get_fab_opt(void) +{ + return 0; /* No fabric */ +} + +/* + * Name: ddr3_get_static_mc_value - Read register field(s) for the + * static MC configuration + * Desc: Read the given register and extract one or two bit fields. + * Used when initializing the controller from static register tables. + * Args: reg_addr, offset1, mask1, offset2, mask2. + * Notes: + * Returns: Extracted field value. + */ +u32 ddr3_get_static_mc_value(u32 reg_addr, u32 offset1, u32 mask1, + u32 offset2, u32 mask2) +{ + u32 reg, temp; + + reg = reg_read(reg_addr); + + temp = (reg >> offset1) & mask1; + if (mask2) + temp |= (reg >> offset2) & mask2; + + return temp; +} + +/* + * Name: ddr3_get_static_ddr_mode - Select static DDR mode + * Desc: Find the ddr_modes[] entry matching the current CPU frequency, + * fabric option and board/chip revision, for initializing the + * controller without the HW training procedure. + * User must provide compatible header file with registers data. + * Args: None. + * Notes: + * Returns: Index of the matching ddr_modes[] entry. + */ +u32 ddr3_get_static_ddr_mode(void) +{ + u32 chip_board_rev, i; + u32 size; + + /* Valid for A380 only, MSYS uses dynamic controller config */ +#ifdef CONFIG_CUSTOMER_BOARD_SUPPORT + /* + * Customer boards select DDR mode according to + * board ID & Sample@Reset + */ + chip_board_rev = mv_board_id_get(); +#else + /* Marvell boards select DDR mode according to Sample@Reset only */ + chip_board_rev = MARVELL_BOARD; +#endif + + size = ARRAY_SIZE(ddr_modes); + for (i = 0; i < size; i++) { + if ((ddr3_get_cpu_freq() == ddr_modes[i].cpu_freq) && + (ddr3_get_fab_opt() == ddr_modes[i].fab_freq) && + (chip_board_rev == ddr_modes[i].chip_board_rev)) + return i; + } + + DEBUG_INIT_S("\n*** Error: ddr3_get_static_ddr_mode: No match for requested DDR mode.
***\n\n"); + + return 0; +} + +/****************************************************************************** + * Name: ddr3_get_cs_num_from_reg + * Desc: + * Args: + * Notes: + * Returns: + */ +u32 ddr3_get_cs_num_from_reg(void) +{ + u32 cs_ena = sys_env_get_cs_ena_from_reg(); + u32 cs_count = 0; + u32 cs; + + for (cs = 0; cs < MAX_CS; cs++) { + if (cs_ena & (1 << cs)) + cs_count++; + } + + return cs_count; +} + +void get_target_freq(u32 freq_mode, u32 *ddr_freq, u32 *hclk_ps) +{ + u32 tmp, hclk = 200; + + switch (freq_mode) { + case 4: + tmp = 1; /* DDR_400; */ + hclk = 200; + break; + case 0x8: + tmp = 1; /* DDR_666; */ + hclk = 333; + break; + case 0xc: + tmp = 1; /* DDR_800; */ + hclk = 400; + break; + default: + *ddr_freq = 0; + *hclk_ps = 0; + break; + } + + *ddr_freq = tmp; /* DDR freq define */ + *hclk_ps = 1000000 / hclk; /* values are 1/HCLK in ps */ + + return; +} + +void ddr3_new_tip_dlb_config(void) +{ + u32 reg, i = 0; + struct dlb_config *config_table_ptr = sys_env_dlb_config_ptr_get(); + + /* Write the configuration */ + while (config_table_ptr[i].reg_addr != 0) { + reg_write(config_table_ptr[i].reg_addr, + config_table_ptr[i].reg_data); + i++; + } + + /* Enable DLB */ + reg = reg_read(REG_STATIC_DRAM_DLB_CONTROL); + reg |= DLB_ENABLE | DLB_WRITE_COALESING | DLB_AXI_PREFETCH_EN | + DLB_MBUS_PREFETCH_EN | PREFETCH_N_LN_SZ_TR; + reg_write(REG_STATIC_DRAM_DLB_CONTROL, reg); +} + +int ddr3_fast_path_dynamic_cs_size_config(u32 cs_ena) +{ + u32 reg, cs; + u32 mem_total_size = 0; + u32 cs_mem_size = 0; + u32 mem_total_size_c, cs_mem_size_c; + +#ifdef DEVICE_MAX_DRAM_ADDRESS_SIZE + u32 physical_mem_size; + u32 max_mem_size = DEVICE_MAX_DRAM_ADDRESS_SIZE; + struct hws_topology_map *tm = ddr3_get_topology_map(); +#endif + + /* Open fast path windows */ + for (cs = 0; cs < MAX_CS; cs++) { + if (cs_ena & (1 << cs)) { + /* get CS size */ + if (ddr3_calc_mem_cs_size(cs, &cs_mem_size) != MV_OK) + return MV_FAIL; + +#ifdef DEVICE_MAX_DRAM_ADDRESS_SIZE + /* + * if number of address pins doesn't allow to use max + * mem size that is defined in topology + * mem size is defined by DEVICE_MAX_DRAM_ADDRESS_SIZE + */ + physical_mem_size = mem_size + [tm->interface_params[0].memory_size]; + + if (ddr3_get_device_width(cs) == 16) { + /* + * 16bit mem device can be twice more - no need + * in less significant pin + */ + max_mem_size = DEVICE_MAX_DRAM_ADDRESS_SIZE * 2; + } + + if (physical_mem_size > max_mem_size) { + cs_mem_size = max_mem_size * + (ddr3_get_bus_width() / + ddr3_get_device_width(cs)); + printf("Updated Physical Mem size is from 0x%x to %x\n", + physical_mem_size, + DEVICE_MAX_DRAM_ADDRESS_SIZE); + } +#endif + + /* set fast path window control for the cs */ + reg = 0xffffe1; + reg |= (cs << 2); + reg |= (cs_mem_size - 1) & 0xffff0000; + /*Open fast path Window */ + reg_write(REG_FASTPATH_WIN_CTRL_ADDR(cs), reg); + + /* Set fast path window base address for the cs */ + reg = ((cs_mem_size) * cs) & 0xffff0000; + /* Set base address */ + reg_write(REG_FASTPATH_WIN_BASE_ADDR(cs), reg); + + /* + * Since memory size may be bigger than 4G the summ may + * be more than 32 bit word, + * so to estimate the result divide mem_total_size and + * cs_mem_size by 0x10000 (it is equal to >> 16) + */ + mem_total_size_c = mem_total_size >> 16; + cs_mem_size_c = cs_mem_size >> 16; + /* if the sum less than 2 G - calculate the value */ + if (mem_total_size_c + cs_mem_size_c < 0x10000) + mem_total_size += cs_mem_size; + else /* put max possible size */ + mem_total_size = L2_FILTER_FOR_MAX_MEMORY_SIZE; + 
} + } + + /* Set L2 filtering to Max Memory size */ + reg_write(ADDRESS_FILTERING_END_REGISTER, mem_total_size); + + return MV_OK; +} + +u32 ddr3_get_bus_width(void) +{ + u32 bus_width; + + bus_width = (reg_read(REG_SDRAM_CONFIG_ADDR) & 0x8000) >> + REG_SDRAM_CONFIG_WIDTH_OFFS; + + return (bus_width == 0) ? 16 : 32; +} + +u32 ddr3_get_device_width(u32 cs) +{ + u32 device_width; + + device_width = (reg_read(REG_SDRAM_ADDRESS_CTRL_ADDR) & + (0x3 << (REG_SDRAM_ADDRESS_CTRL_STRUCT_OFFS * cs))) >> + (REG_SDRAM_ADDRESS_CTRL_STRUCT_OFFS * cs); + + return (device_width == 0) ? 8 : 16; +} + +static int ddr3_get_device_size(u32 cs) +{ + u32 device_size_low, device_size_high, device_size; + u32 data, cs_low_offset, cs_high_offset; + + cs_low_offset = REG_SDRAM_ADDRESS_SIZE_OFFS + cs * 4; + cs_high_offset = REG_SDRAM_ADDRESS_SIZE_OFFS + + REG_SDRAM_ADDRESS_SIZE_HIGH_OFFS + cs; + + data = reg_read(REG_SDRAM_ADDRESS_CTRL_ADDR); + device_size_low = (data >> cs_low_offset) & 0x3; + device_size_high = (data >> cs_high_offset) & 0x1; + + device_size = device_size_low | (device_size_high << 2); + + switch (device_size) { + case 0: + return 2048; + case 2: + return 512; + case 3: + return 1024; + case 4: + return 4096; + case 5: + return 8192; + case 1: + default: + DEBUG_INIT_C("Error: Wrong device size of Cs: ", cs, 1); + /* + * Small value will give wrong emem size in + * ddr3_calc_mem_cs_size + */ + return 0; + } +} + +int ddr3_calc_mem_cs_size(u32 cs, u32 *cs_size) +{ + int cs_mem_size; + + /* Calculate in GiB */ + cs_mem_size = ((ddr3_get_bus_width() / ddr3_get_device_width(cs)) * + ddr3_get_device_size(cs)) / 8; + + /* + * Multiple controller bus width, 2x for 64 bit + * (SoC controller may be 32 or 64 bit, + * so bit 15 in 0x1400, that means if whole bus used or only half, + * have a differnt meaning + */ + cs_mem_size *= DDR_CONTROLLER_BUS_WIDTH_MULTIPLIER; + + if (!cs_mem_size || (cs_mem_size == 64) || (cs_mem_size == 4096)) { + DEBUG_INIT_C("Error: Wrong Memory size of Cs: ", cs, 1); + return MV_BAD_VALUE; + } + + *cs_size = cs_mem_size << 20; + return MV_OK; +} + +/* + * Name: ddr3_hws_tune_training_params + * Desc: + * Args: + * Notes: Tune internal training params + * Returns: + */ +static int ddr3_hws_tune_training_params(u8 dev_num) +{ + struct tune_train_params params; + int status; + + /* NOTE: do not remove any field initilization */ + params.ck_delay = TUNE_TRAINING_PARAMS_CK_DELAY; + params.ck_delay_16 = TUNE_TRAINING_PARAMS_CK_DELAY_16; + params.p_finger = TUNE_TRAINING_PARAMS_PFINGER; + params.n_finger = TUNE_TRAINING_PARAMS_NFINGER; + params.phy_reg3_val = TUNE_TRAINING_PARAMS_PHYREG3VAL; + + status = ddr3_tip_tune_training_params(dev_num, ¶ms); + if (MV_OK != status) { + printf("%s Training Sequence - FAILED\n", ddr_type); + return status; + } + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_init.h b/drivers/ddr/marvell/a38x/old/ddr3_init.h new file mode 100644 index 0000000000..8cb08864c2 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_init.h @@ -0,0 +1,393 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_INIT_H +#define _DDR3_INIT_H + +#if defined(CONFIG_ARMADA_38X) +#include "ddr3_a38x.h" +#include "ddr3_a38x_mc_static.h" +#include "ddr3_a38x_topology.h" +#endif +#include "ddr3_hws_hw_training.h" +#include "ddr3_hws_sil_training.h" +#include "ddr3_logging_def.h" +#include "ddr3_training_hw_algo.h" +#include "ddr3_training_ip.h" +#include "ddr3_training_ip_centralization.h" +#include "ddr3_training_ip_engine.h" +#include "ddr3_training_ip_flow.h" +#include "ddr3_training_ip_pbs.h" +#include "ddr3_training_ip_prv_if.h" +#include "ddr3_training_ip_static.h" +#include "ddr3_training_leveling.h" +#include "xor.h" + +/* + * MV_DEBUG_INIT needs to be defined, otherwise the output of the + * DDR3 training code is incomplete and misleading + */ +#define MV_DEBUG_INIT + +#ifdef MV_DEBUG_INIT +#define DEBUG_INIT_S(s) puts(s) +#define DEBUG_INIT_D(d, l) printf("%x", d) +#define DEBUG_INIT_D_10(d, l) printf("%d", d) +#else +#define DEBUG_INIT_S(s) +#define DEBUG_INIT_D(d, l) +#define DEBUG_INIT_D_10(d, l) +#endif + +#ifdef MV_DEBUG_INIT_FULL +#define DEBUG_INIT_FULL_S(s) puts(s) +#define DEBUG_INIT_FULL_D(d, l) printf("%x", d) +#define DEBUG_INIT_FULL_D_10(d, l) printf("%d", d) +#define DEBUG_WR_REG(reg, val) \ + { DEBUG_INIT_S("Write Reg: 0x"); DEBUG_INIT_D((reg), 8); \ + DEBUG_INIT_S("= "); DEBUG_INIT_D((val), 8); DEBUG_INIT_S("\n"); } +#define DEBUG_RD_REG(reg, val) \ + { DEBUG_INIT_S("Read Reg: 0x"); DEBUG_INIT_D((reg), 8); \ + DEBUG_INIT_S("= "); DEBUG_INIT_D((val), 8); DEBUG_INIT_S("\n"); } +#else +#define DEBUG_INIT_FULL_S(s) +#define DEBUG_INIT_FULL_D(d, l) +#define DEBUG_INIT_FULL_D_10(d, l) +#define DEBUG_WR_REG(reg, val) +#define DEBUG_RD_REG(reg, val) +#endif + +#define DEBUG_INIT_FULL_C(s, d, l) \ + { DEBUG_INIT_FULL_S(s); \ + DEBUG_INIT_FULL_D(d, l); \ + DEBUG_INIT_FULL_S("\n"); } +#define DEBUG_INIT_C(s, d, l) \ + { DEBUG_INIT_S(s); DEBUG_INIT_D(d, l); DEBUG_INIT_S("\n"); } + +/* + * Debug (Enable/Disable modules) and Error report + */ + +#ifdef BASIC_DEBUG +#define MV_DEBUG_WL +#define MV_DEBUG_RL +#define MV_DEBUG_DQS_RESULTS +#endif + +#ifdef FULL_DEBUG +#define MV_DEBUG_WL +#define MV_DEBUG_RL +#define MV_DEBUG_DQS + +#define MV_DEBUG_PBS +#define MV_DEBUG_DFS +#define MV_DEBUG_MAIN_FULL +#define MV_DEBUG_DFS_FULL +#define MV_DEBUG_DQS_FULL +#define MV_DEBUG_RL_FULL +#define MV_DEBUG_WL_FULL +#endif + +#if defined(CONFIG_ARMADA_38X) +#include "ddr3_a38x.h" +#include "ddr3_a38x_topology.h" +#endif + +/* The following is a list of Marvell status codes */ +#define MV_ERROR (-1) +#define MV_OK (0x00) /* Operation succeeded */ +#define MV_FAIL (0x01) /* Operation failed */ +#define MV_BAD_VALUE (0x02) /* Illegal value (general) */ +#define MV_OUT_OF_RANGE (0x03) /* The value is out of range */ +#define MV_BAD_PARAM (0x04) /* Illegal parameter in function called */ +#define MV_BAD_PTR (0x05) /* Illegal pointer value */ +#define MV_BAD_SIZE (0x06) /* Illegal size */ +#define MV_BAD_STATE (0x07) /* Illegal state of state machine */ +#define MV_SET_ERROR (0x08) /* Set operation failed */ +#define MV_GET_ERROR (0x09) /* Get operation failed */ +#define MV_CREATE_ERROR (0x0a) /* Fail while creating an item */ +#define MV_NOT_FOUND (0x0b) /* Item not found */ +#define MV_NO_MORE (0x0c) /* No more items found */ +#define MV_NO_SUCH (0x0d) /* No such item */ +#define MV_TIMEOUT (0x0e) /* Time Out */ +#define MV_NO_CHANGE (0x0f) /* Parameter(s) already set to this value */ +#define MV_NOT_SUPPORTED (0x10) /* This request
is not support */ +#define MV_NOT_IMPLEMENTED (0x11) /* Request supported but not implemented*/ +#define MV_NOT_INITIALIZED (0x12) /* The item is not initialized */ +#define MV_NO_RESOURCE (0x13) /* Resource not available (memory ...) */ +#define MV_FULL (0x14) /* Item is full (Queue or table etc...) */ +#define MV_EMPTY (0x15) /* Item is empty (Queue or table etc...) */ +#define MV_INIT_ERROR (0x16) /* Error occurred while INIT process */ +#define MV_HW_ERROR (0x17) /* Hardware error */ +#define MV_TX_ERROR (0x18) /* Transmit operation not succeeded */ +#define MV_RX_ERROR (0x19) /* Recieve operation not succeeded */ +#define MV_NOT_READY (0x1a) /* The other side is not ready yet */ +#define MV_ALREADY_EXIST (0x1b) /* Tried to create existing item */ +#define MV_OUT_OF_CPU_MEM (0x1c) /* Cpu memory allocation failed. */ +#define MV_NOT_STARTED (0x1d) /* Not started yet */ +#define MV_BUSY (0x1e) /* Item is busy. */ +#define MV_TERMINATE (0x1f) /* Item terminates it's work. */ +#define MV_NOT_ALIGNED (0x20) /* Wrong alignment */ +#define MV_NOT_ALLOWED (0x21) /* Operation NOT allowed */ +#define MV_WRITE_PROTECT (0x22) /* Write protected */ +#define MV_INVALID (int)(-1) + +/* For checking function return values */ +#define CHECK_STATUS(orig_func) \ + { \ + int status; \ + status = orig_func; \ + if (MV_OK != status) \ + return status; \ + } + +enum log_level { + MV_LOG_LEVEL_0, + MV_LOG_LEVEL_1, + MV_LOG_LEVEL_2, + MV_LOG_LEVEL_3 +}; + +/* Globals */ +extern u8 debug_training; +extern u8 is_reg_dump; +extern u8 generic_init_controller; +extern u32 freq_val[]; +extern u32 is_pll_old; +extern struct cl_val_per_freq cas_latency_table[]; +extern struct pattern_info pattern_table[]; +extern struct cl_val_per_freq cas_write_latency_table[]; +extern u8 debug_training; +extern u8 debug_centralization, debug_training_ip, debug_training_bist, + debug_pbs, debug_training_static, debug_leveling; +extern u32 pipe_multicast_mask; +extern struct hws_tip_config_func_db config_func_info[]; +extern u8 cs_mask_reg[]; +extern u8 twr_mask_table[]; +extern u8 cl_mask_table[]; +extern u8 cwl_mask_table[]; +extern u16 rfc_table[]; +extern u32 speed_bin_table_t_rc[]; +extern u32 speed_bin_table_t_rcd_t_rp[]; +extern u32 ck_delay, ck_delay_16; + +extern u32 g_zpri_data; +extern u32 g_znri_data; +extern u32 g_zpri_ctrl; +extern u32 g_znri_ctrl; +extern u32 g_zpodt_data; +extern u32 g_znodt_data; +extern u32 g_zpodt_ctrl; +extern u32 g_znodt_ctrl; +extern u32 g_dic; +extern u32 g_odt_config; +extern u32 g_rtt_nom; + +extern u8 debug_training_access; +extern u8 debug_training_a38x; +extern u32 first_active_if; +extern enum hws_ddr_freq init_freq; +extern u32 delay_enable, ck_delay, ck_delay_16, ca_delay; +extern u32 mask_tune_func; +extern u32 rl_version; +extern int rl_mid_freq_wa; +extern u8 calibration_update_control; /* 2 external only, 1 is internal only */ +extern enum hws_ddr_freq medium_freq; + +extern u32 ck_delay, ck_delay_16; +extern enum hws_result training_result[MAX_STAGE_LIMIT][MAX_INTERFACE_NUM]; +extern u32 first_active_if; +extern u32 mask_tune_func; +extern u32 freq_val[]; +extern enum hws_ddr_freq init_freq; +extern enum hws_ddr_freq low_freq; +extern enum hws_ddr_freq medium_freq; +extern u8 generic_init_controller; +extern enum auto_tune_stage training_stage; +extern u32 is_pll_before_init; +extern u32 is_adll_calib_before_init; +extern u32 is_dfs_in_init; +extern int wl_debug_delay; +extern u32 silicon_delay[HWS_MAX_DEVICE_NUM]; +extern u32 p_finger; +extern u32 n_finger; +extern u32 
freq_val[DDR_FREQ_LIMIT]; +extern u32 start_pattern, end_pattern; +extern u32 phy_reg0_val; +extern u32 phy_reg1_val; +extern u32 phy_reg2_val; +extern u32 phy_reg3_val; +extern enum hws_pattern sweep_pattern; +extern enum hws_pattern pbs_pattern; +extern u8 is_rzq6; +extern u32 znri_data_phy_val; +extern u32 zpri_data_phy_val; +extern u32 znri_ctrl_phy_val; +extern u32 zpri_ctrl_phy_val; +extern u8 debug_training_access; +extern u32 finger_test, p_finger_start, p_finger_end, n_finger_start, + n_finger_end, p_finger_step, n_finger_step; +extern u32 mode2_t; +extern u32 xsb_validate_type; +extern u32 xsb_validation_base_address; +extern u32 odt_additional; +extern u32 debug_mode; +extern u32 delay_enable; +extern u32 ca_delay; +extern u32 debug_dunit; +extern u32 clamp_tbl[]; +extern u32 freq_mask[HWS_MAX_DEVICE_NUM][DDR_FREQ_LIMIT]; +extern u32 start_pattern, end_pattern; + +extern u32 maxt_poll_tries; +extern u32 is_bist_reset_bit; +extern u8 debug_training_bist; + +extern u8 vref_window_size[MAX_INTERFACE_NUM][MAX_BUS_NUM]; +extern u32 debug_mode; +extern u32 effective_cs; +extern int ddr3_tip_centr_skip_min_win_check; +extern u32 *dq_map_table; +extern enum auto_tune_stage training_stage; +extern u8 debug_centralization; + +extern u32 delay_enable; +extern u32 start_pattern, end_pattern; +extern u32 freq_val[DDR_FREQ_LIMIT]; +extern u8 debug_training_hw_alg; +extern enum auto_tune_stage training_stage; + +extern u8 debug_training_ip; +extern enum hws_result training_result[MAX_STAGE_LIMIT][MAX_INTERFACE_NUM]; +extern enum auto_tune_stage training_stage; +extern u32 effective_cs; + +extern u8 debug_leveling; +extern enum hws_result training_result[MAX_STAGE_LIMIT][MAX_INTERFACE_NUM]; +extern enum auto_tune_stage training_stage; +extern u32 rl_version; +extern struct cl_val_per_freq cas_latency_table[]; +extern u32 start_xsb_offset; +extern u32 debug_mode; +extern u32 odt_config; +extern u32 effective_cs; +extern u32 phy_reg1_val; + +extern u8 debug_pbs; +extern u32 effective_cs; +extern u16 mask_results_dq_reg_map[]; +extern enum hws_ddr_freq medium_freq; +extern u32 freq_val[]; +extern enum hws_result training_result[MAX_STAGE_LIMIT][MAX_INTERFACE_NUM]; +extern enum auto_tune_stage training_stage; +extern u32 debug_mode; +extern u32 *dq_map_table; + +extern u32 vref; +extern struct cl_val_per_freq cas_latency_table[]; +extern u32 target_freq; +extern struct hws_tip_config_func_db config_func_info[HWS_MAX_DEVICE_NUM]; +extern u32 clamp_tbl[]; +extern u32 init_freq; +/* list of allowed frequency listed in order of enum hws_ddr_freq */ +extern u32 freq_val[]; +extern u8 debug_training_static; +extern u32 first_active_if; + +/* Prototypes */ +int ddr3_tip_enable_init_sequence(u32 dev_num); + +int ddr3_tip_init_a38x(u32 dev_num, u32 board_id); + +int ddr3_hws_hw_training(void); +int ddr3_silicon_pre_init(void); +int ddr3_silicon_post_init(void); +int ddr3_post_run_alg(void); +int ddr3_if_ecc_enabled(void); +void ddr3_new_tip_ecc_scrub(void); + +void ddr3_print_version(void); +void ddr3_new_tip_dlb_config(void); +struct hws_topology_map *ddr3_get_topology_map(void); + +int ddr3_if_ecc_enabled(void); +int ddr3_tip_reg_write(u32 dev_num, u32 reg_addr, u32 data); +int ddr3_tip_reg_read(u32 dev_num, u32 reg_addr, u32 *data, u32 reg_mask); +int ddr3_silicon_get_ddr_target_freq(u32 *ddr_freq); +int ddr3_tip_a38x_get_freq_config(u8 dev_num, enum hws_ddr_freq freq, + struct hws_tip_freq_config_info + *freq_config_info); +int ddr3_a38x_update_topology_map(u32 dev_num, + struct hws_topology_map 
*topology_map); +int ddr3_tip_a38x_get_init_freq(int dev_num, enum hws_ddr_freq *freq); +int ddr3_tip_a38x_get_medium_freq(int dev_num, enum hws_ddr_freq *freq); +int ddr3_tip_a38x_if_read(u8 dev_num, enum hws_access_type interface_access, + u32 if_id, u32 reg_addr, u32 *data, u32 mask); +int ddr3_tip_a38x_if_write(u8 dev_num, enum hws_access_type interface_access, + u32 if_id, u32 reg_addr, u32 data, u32 mask); +int ddr3_tip_a38x_get_device_info(u8 dev_num, + struct ddr3_device_info *info_ptr); + +int ddr3_tip_init_a38x(u32 dev_num, u32 board_id); + +int print_adll(u32 dev_num, u32 adll[MAX_INTERFACE_NUM * MAX_BUS_NUM]); +int ddr3_tip_restore_dunit_regs(u32 dev_num); +void print_topology(struct hws_topology_map *topology_db); + +u32 mv_board_id_get(void); + +int ddr3_load_topology_map(void); +int ddr3_tip_init_specific_reg_config(u32 dev_num, + struct reg_data *reg_config_arr); +u32 ddr3_tip_get_init_freq(void); +void ddr3_hws_set_log_level(enum ddr_lib_debug_block block, u8 level); +int ddr3_tip_tune_training_params(u32 dev_num, + struct tune_train_params *params); +void get_target_freq(u32 freq_mode, u32 *ddr_freq, u32 *hclk_ps); +int ddr3_fast_path_dynamic_cs_size_config(u32 cs_ena); +void ddr3_fast_path_static_cs_size_config(u32 cs_ena); +u32 ddr3_get_device_width(u32 cs); +u32 mv_board_id_index_get(u32 board_id); +u32 mv_board_id_get(void); +u32 ddr3_get_bus_width(void); +void ddr3_set_log_level(u32 n_log_level); +int ddr3_calc_mem_cs_size(u32 cs, u32 *cs_size); + +int hws_ddr3_cs_base_adr_calc(u32 if_id, u32 cs, u32 *cs_base_addr); + +int ddr3_tip_print_pbs_result(u32 dev_num, u32 cs_num, enum pbs_dir pbs_mode); +int ddr3_tip_clean_pbs_result(u32 dev_num, enum pbs_dir pbs_mode); + +int ddr3_tip_static_round_trip_arr_build(u32 dev_num, + struct trip_delay_element *table_ptr, + int is_wl, u32 *round_trip_delay_arr); + +u32 hws_ddr3_tip_max_cs_get(void); + +/* + * Accessor functions for the registers + */ +static inline void reg_write(u32 addr, u32 val) +{ + writel(val, INTER_REGS_BASE + addr); +} + +static inline u32 reg_read(u32 addr) +{ + return readl(INTER_REGS_BASE + addr); +} + +static inline void reg_bit_set(u32 addr, u32 mask) +{ + setbits_le32(INTER_REGS_BASE + addr, mask); +} + +static inline void reg_bit_clr(u32 addr, u32 mask) +{ + clrbits_le32(INTER_REGS_BASE + addr, mask); +} + +#endif /* _DDR3_INIT_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_logging_def.h b/drivers/ddr/marvell/a38x/old/ddr3_logging_def.h new file mode 100644 index 0000000000..2de7c4fa31 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_logging_def.h @@ -0,0 +1,101 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_LOGGING_CONFIG_H +#define _DDR3_LOGGING_CONFIG_H + +#ifdef SILENT_LIB +#define DEBUG_TRAINING_BIST_ENGINE(level, s) +#define DEBUG_TRAINING_IP(level, s) +#define DEBUG_CENTRALIZATION_ENGINE(level, s) +#define DEBUG_TRAINING_HW_ALG(level, s) +#define DEBUG_TRAINING_IP_ENGINE(level, s) +#define DEBUG_LEVELING(level, s) +#define DEBUG_PBS_ENGINE(level, s) +#define DEBUG_TRAINING_STATIC_IP(level, s) +#define DEBUG_TRAINING_ACCESS(level, s) +#else +#ifdef LIB_FUNCTIONAL_DEBUG_ONLY +#define DEBUG_TRAINING_BIST_ENGINE(level, s) +#define DEBUG_TRAINING_IP_ENGINE(level, s) +#define DEBUG_TRAINING_IP(level, s) \ + if (level >= debug_training) \ + printf s +#define DEBUG_CENTRALIZATION_ENGINE(level, s) \ + if (level >= debug_centralization) \ + printf s +#define DEBUG_TRAINING_HW_ALG(level, s) \ + if (level >= debug_training_hw_alg) \ + printf s +#define DEBUG_LEVELING(level, s) \ + if (level >= debug_leveling) \ + printf s +#define DEBUG_PBS_ENGINE(level, s) \ + if (level >= debug_pbs) \ + printf s +#define DEBUG_TRAINING_STATIC_IP(level, s) \ + if (level >= debug_training_static) \ + printf s +#define DEBUG_TRAINING_ACCESS(level, s) \ + if (level >= debug_training_access) \ + printf s +#else +#define DEBUG_TRAINING_BIST_ENGINE(level, s) \ + if (level >= debug_training_bist) \ + printf s + +#define DEBUG_TRAINING_IP_ENGINE(level, s) \ + if (level >= debug_training_ip) \ + printf s +#define DEBUG_TRAINING_IP(level, s) \ + if (level >= debug_training) \ + printf s +#define DEBUG_CENTRALIZATION_ENGINE(level, s) \ + if (level >= debug_centralization) \ + printf s +#define DEBUG_TRAINING_HW_ALG(level, s) \ + if (level >= debug_training_hw_alg) \ + printf s +#define DEBUG_LEVELING(level, s) \ + if (level >= debug_leveling) \ + printf s +#define DEBUG_PBS_ENGINE(level, s) \ + if (level >= debug_pbs) \ + printf s +#define DEBUG_TRAINING_STATIC_IP(level, s) \ + if (level >= debug_training_static) \ + printf s +#define DEBUG_TRAINING_ACCESS(level, s) \ + if (level >= debug_training_access) \ + printf s +#endif +#endif + +/* Logging defines */ +#define DEBUG_LEVEL_TRACE 1 +#define DEBUG_LEVEL_INFO 2 +#define DEBUG_LEVEL_ERROR 3 + +enum ddr_lib_debug_block { + DEBUG_BLOCK_STATIC, + DEBUG_BLOCK_TRAINING_MAIN, + DEBUG_BLOCK_LEVELING, + DEBUG_BLOCK_CENTRALIZATION, + DEBUG_BLOCK_PBS, + DEBUG_BLOCK_IP, + DEBUG_BLOCK_BIST, + DEBUG_BLOCK_ALG, + DEBUG_BLOCK_DEVICE, + DEBUG_BLOCK_ACCESS, + DEBUG_STAGES_REG_DUMP, + /* All excluding IP and REG_DUMP, should be enabled separatelly */ + DEBUG_BLOCK_ALL +}; + +int ddr3_tip_print_log(u32 dev_num, u32 mem_addr); +int ddr3_tip_print_stability_log(u32 dev_num); + +#endif /* _DDR3_LOGGING_CONFIG_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_patterns_64bit.h b/drivers/ddr/marvell/a38x/old/ddr3_patterns_64bit.h new file mode 100644 index 0000000000..0ce0479a3a --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_patterns_64bit.h @@ -0,0 +1,924 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef __DDR3_PATTERNS_64_H +#define __DDR3_PATTERNS_64_H + +/* + * Patterns Declerations + */ + +u32 wl_sup_pattern[LEN_WL_SUP_PATTERN] __aligned(32) = { + 0x04030201, 0x08070605, 0x0c0b0a09, 0x100f0e0d, + 0x14131211, 0x18171615, 0x1c1b1a19, 0x201f1e1d, + 0x24232221, 0x28272625, 0x2c2b2a29, 0x302f2e2d, + 0x34333231, 0x38373635, 0x3c3b3a39, 0x403f3e3d, + 0x44434241, 0x48474645, 0x4c4b4a49, 0x504f4e4d, + 0x54535251, 0x58575655, 0x5c5b5a59, 0x605f5e5d, + 0x64636261, 0x68676665, 0x6c6b6a69, 0x706f6e6d, + 0x74737271, 0x78777675, 0x7c7b7a79, 0x807f7e7d +}; + +u32 pbs_pattern_32b[2][LEN_PBS_PATTERN] __aligned(32) = { + { + 0xaaaaaaaa, 0x55555555, 0xaaaaaaaa, 0x55555555, + 0xaaaaaaaa, 0x55555555, 0xaaaaaaaa, 0x55555555, + 0xaaaaaaaa, 0x55555555, 0xaaaaaaaa, 0x55555555, + 0xaaaaaaaa, 0x55555555, 0xaaaaaaaa, 0x55555555 + }, + { + 0x55555555, 0xaaaaaaaa, 0x55555555, 0xaaaaaaaa, + 0x55555555, 0xaaaaaaaa, 0x55555555, 0xaaaaaaaa, + 0x55555555, 0xaaaaaaaa, 0x55555555, 0xaaaaaaaa, + 0x55555555, 0xaaaaaaaa, 0x55555555, 0xaaaaaaaa + } +}; + +u32 pbs_pattern_64b[2][LEN_PBS_PATTERN] __aligned(32) = { + { + 0xaaaaaaaa, 0xaaaaaaaa, 0x55555555, 0x55555555, + 0xaaaaaaaa, 0xaaaaaaaa, 0x55555555, 0x55555555, + 0xaaaaaaaa, 0xaaaaaaaa, 0x55555555, 0x55555555, + 0xaaaaaaaa, 0xaaaaaaaa, 0x55555555, 0x55555555 + }, + { + 0x55555555, 0x55555555, 0xaaaaaaaa, 0xaaaaaaaa, + 0x55555555, 0x55555555, 0xaaaaaaaa, 0xaaaaaaaa, + 0x55555555, 0x55555555, 0xaaaaaaaa, 0xaaaaaaaa, + 0x55555555, 0x55555555, 0xaaaaaaaa, 0xaaaaaaaa + } +}; + +u32 rl_pattern[LEN_STD_PATTERN] __aligned(32) = { + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x01010101, 0x01010101, 0x01010101, 0x01010101 +}; + +u32 killer_pattern_32b[DQ_NUM][LEN_KILLER_PATTERN] __aligned(32) = { + { + 0x01010101, 0x00000000, 0x01010101, 0xffffffff, + 0x01010101, 0x00000000, 0x01010101, 0xffffffff, + 0xfefefefe, 0xfefefefe, 0x01010101, 0xfefefefe, + 0xfefefefe, 0xfefefefe, 0x01010101, 0xfefefefe, + 0x01010101, 0xfefefefe, 0x01010101, 0x01010101, + 0x01010101, 0xfefefefe, 0x01010101, 0x01010101, + 0xfefefefe, 0x01010101, 0xfefefefe, 0x00000000, + 0xfefefefe, 0x01010101, 0xfefefefe, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x01010101, + 0xffffffff, 0x00000000, 0xffffffff, 0x01010101, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0xfefefefe, + 0x00000000, 0x00000000, 0x00000000, 0xfefefefe, + 0xfefefefe, 0xffffffff, 0x00000000, 0x00000000, + 0xfefefefe, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0xfefefefe, 0x00000000, 0xfefefefe, 0x00000000, + 0xfefefefe, 0x00000000, 0xfefefefe, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x01010101, + 0x00000000, 0xffffffff, 0xffffffff, 0x01010101, + 0xffffffff, 0xffffffff, 0x01010101, 0x00000000, + 0xffffffff, 0xffffffff, 0x01010101, 0x00000000, + 0x01010101, 0xffffffff, 0xfefefefe, 0xfefefefe, + 0x01010101, 0xffffffff, 0xfefefefe, 0xfefefefe + }, + { + 0x02020202, 0x00000000, 
0x02020202, 0xffffffff, + 0x02020202, 0x00000000, 0x02020202, 0xffffffff, + 0xfdfdfdfd, 0xfdfdfdfd, 0x02020202, 0xfdfdfdfd, + 0xfdfdfdfd, 0xfdfdfdfd, 0x02020202, 0xfdfdfdfd, + 0x02020202, 0xfdfdfdfd, 0x02020202, 0x02020202, + 0x02020202, 0xfdfdfdfd, 0x02020202, 0x02020202, + 0xfdfdfdfd, 0x02020202, 0xfdfdfdfd, 0x00000000, + 0xfdfdfdfd, 0x02020202, 0xfdfdfdfd, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x02020202, + 0xffffffff, 0x00000000, 0xffffffff, 0x02020202, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0xfdfdfdfd, + 0x00000000, 0x00000000, 0x00000000, 0xfdfdfdfd, + 0xfdfdfdfd, 0xffffffff, 0x00000000, 0x00000000, + 0xfdfdfdfd, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0xfdfdfdfd, 0x00000000, 0xfdfdfdfd, 0x00000000, + 0xfdfdfdfd, 0x00000000, 0xfdfdfdfd, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x02020202, + 0x00000000, 0xffffffff, 0xffffffff, 0x02020202, + 0xffffffff, 0xffffffff, 0x02020202, 0x00000000, + 0xffffffff, 0xffffffff, 0x02020202, 0x00000000, + 0x02020202, 0xffffffff, 0xfdfdfdfd, 0xfdfdfdfd, + 0x02020202, 0xffffffff, 0xfdfdfdfd, 0xfdfdfdfd + }, + { + 0x04040404, 0x00000000, 0x04040404, 0xffffffff, + 0x04040404, 0x00000000, 0x04040404, 0xffffffff, + 0xfbfbfbfb, 0xfbfbfbfb, 0x04040404, 0xfbfbfbfb, + 0xfbfbfbfb, 0xfbfbfbfb, 0x04040404, 0xfbfbfbfb, + 0x04040404, 0xfbfbfbfb, 0x04040404, 0x04040404, + 0x04040404, 0xfbfbfbfb, 0x04040404, 0x04040404, + 0xfbfbfbfb, 0x04040404, 0xfbfbfbfb, 0x00000000, + 0xfbfbfbfb, 0x04040404, 0xfbfbfbfb, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x04040404, + 0xffffffff, 0x00000000, 0xffffffff, 0x04040404, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0xfbfbfbfb, + 0x00000000, 0x00000000, 0x00000000, 0xfbfbfbfb, + 0xfbfbfbfb, 0xffffffff, 0x00000000, 0x00000000, + 0xfbfbfbfb, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0xfbfbfbfb, 0x00000000, 0xfbfbfbfb, 0x00000000, + 0xfbfbfbfb, 0x00000000, 0xfbfbfbfb, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x04040404, + 0x00000000, 0xffffffff, 0xffffffff, 0x04040404, + 0xffffffff, 0xffffffff, 0x04040404, 0x00000000, + 0xffffffff, 0xffffffff, 0x04040404, 0x00000000, + 0x04040404, 0xffffffff, 0xfbfbfbfb, 0xfbfbfbfb, + 0x04040404, 0xffffffff, 0xfbfbfbfb, 0xfbfbfbfb + }, + { + 0x08080808, 0x00000000, 0x08080808, 0xffffffff, + 0x08080808, 0x00000000, 0x08080808, 0xffffffff, + 0xf7f7f7f7, 0xf7f7f7f7, 0x08080808, 0xf7f7f7f7, + 0xf7f7f7f7, 0xf7f7f7f7, 0x08080808, 0xf7f7f7f7, + 0x08080808, 0xf7f7f7f7, 0x08080808, 0x08080808, + 0x08080808, 0xf7f7f7f7, 0x08080808, 0x08080808, + 0xf7f7f7f7, 0x08080808, 0xf7f7f7f7, 0x00000000, + 0xf7f7f7f7, 
0x08080808, 0xf7f7f7f7, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x08080808, + 0xffffffff, 0x00000000, 0xffffffff, 0x08080808, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0xf7f7f7f7, + 0x00000000, 0x00000000, 0x00000000, 0xf7f7f7f7, + 0xf7f7f7f7, 0xffffffff, 0x00000000, 0x00000000, + 0xf7f7f7f7, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0xf7f7f7f7, 0x00000000, 0xf7f7f7f7, 0x00000000, + 0xf7f7f7f7, 0x00000000, 0xf7f7f7f7, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x08080808, + 0x00000000, 0xffffffff, 0xffffffff, 0x08080808, + 0xffffffff, 0xffffffff, 0x08080808, 0x00000000, + 0xffffffff, 0xffffffff, 0x08080808, 0x00000000, + 0x08080808, 0xffffffff, 0xf7f7f7f7, 0xf7f7f7f7, + 0x08080808, 0xffffffff, 0xf7f7f7f7, 0xf7f7f7f7 + }, + { + 0x10101010, 0x00000000, 0x10101010, 0xffffffff, + 0x10101010, 0x00000000, 0x10101010, 0xffffffff, + 0xefefefef, 0xefefefef, 0x10101010, 0xefefefef, + 0xefefefef, 0xefefefef, 0x10101010, 0xefefefef, + 0x10101010, 0xefefefef, 0x10101010, 0x10101010, + 0x10101010, 0xefefefef, 0x10101010, 0x10101010, + 0xefefefef, 0x10101010, 0xefefefef, 0x00000000, + 0xefefefef, 0x10101010, 0xefefefef, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x10101010, + 0xffffffff, 0x00000000, 0xffffffff, 0x10101010, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0xefefefef, + 0x00000000, 0x00000000, 0x00000000, 0xefefefef, + 0xefefefef, 0xffffffff, 0x00000000, 0x00000000, + 0xefefefef, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0xefefefef, 0x00000000, 0xefefefef, 0x00000000, + 0xefefefef, 0x00000000, 0xefefefef, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x10101010, + 0x00000000, 0xffffffff, 0xffffffff, 0x10101010, + 0xffffffff, 0xffffffff, 0x10101010, 0x00000000, + 0xffffffff, 0xffffffff, 0x10101010, 0x00000000, + 0x10101010, 0xffffffff, 0xefefefef, 0xefefefef, + 0x10101010, 0xffffffff, 0xefefefef, 0xefefefef + }, + { + 0x20202020, 0x00000000, 0x20202020, 0xffffffff, + 0x20202020, 0x00000000, 0x20202020, 0xffffffff, + 0xdfdfdfdf, 0xdfdfdfdf, 0x20202020, 0xdfdfdfdf, + 0xdfdfdfdf, 0xdfdfdfdf, 0x20202020, 0xdfdfdfdf, + 0x20202020, 0xdfdfdfdf, 0x20202020, 0x20202020, + 0x20202020, 0xdfdfdfdf, 0x20202020, 0x20202020, + 0xdfdfdfdf, 0x20202020, 0xdfdfdfdf, 0x00000000, + 0xdfdfdfdf, 0x20202020, 0xdfdfdfdf, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x20202020, + 0xffffffff, 0x00000000, 0xffffffff, 0x20202020, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 
0x00000000, 0x00000000, 0x00000000, 0xdfdfdfdf, + 0x00000000, 0x00000000, 0x00000000, 0xdfdfdfdf, + 0xdfdfdfdf, 0xffffffff, 0x00000000, 0x00000000, + 0xdfdfdfdf, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0xdfdfdfdf, 0x00000000, 0xdfdfdfdf, 0x00000000, + 0xdfdfdfdf, 0x00000000, 0xdfdfdfdf, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x20202020, + 0x00000000, 0xffffffff, 0xffffffff, 0x20202020, + 0xffffffff, 0xffffffff, 0x20202020, 0x00000000, + 0xffffffff, 0xffffffff, 0x20202020, 0x00000000, + 0x20202020, 0xffffffff, 0xdfdfdfdf, 0xdfdfdfdf, + 0x20202020, 0xffffffff, 0xdfdfdfdf, 0xdfdfdfdf + }, + { + 0x40404040, 0x00000000, 0x40404040, 0xffffffff, + 0x40404040, 0x00000000, 0x40404040, 0xffffffff, + 0xbfbfbfbf, 0xbfbfbfbf, 0x40404040, 0xbfbfbfbf, + 0xbfbfbfbf, 0xbfbfbfbf, 0x40404040, 0xbfbfbfbf, + 0x40404040, 0xbfbfbfbf, 0x40404040, 0x40404040, + 0x40404040, 0xbfbfbfbf, 0x40404040, 0x40404040, + 0xbfbfbfbf, 0x40404040, 0xbfbfbfbf, 0x00000000, + 0xbfbfbfbf, 0x40404040, 0xbfbfbfbf, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x40404040, + 0xffffffff, 0x00000000, 0xffffffff, 0x40404040, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0xbfbfbfbf, + 0x00000000, 0x00000000, 0x00000000, 0xbfbfbfbf, + 0xbfbfbfbf, 0xffffffff, 0x00000000, 0x00000000, + 0xbfbfbfbf, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0xbfbfbfbf, 0x00000000, 0xbfbfbfbf, 0x00000000, + 0xbfbfbfbf, 0x00000000, 0xbfbfbfbf, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x40404040, + 0x00000000, 0xffffffff, 0xffffffff, 0x40404040, + 0xffffffff, 0xffffffff, 0x40404040, 0x00000000, + 0xffffffff, 0xffffffff, 0x40404040, 0x00000000, + 0x40404040, 0xffffffff, 0xbfbfbfbf, 0xbfbfbfbf, + 0x40404040, 0xffffffff, 0xbfbfbfbf, 0xbfbfbfbf + }, + { + 0x80808080, 0x00000000, 0x80808080, 0xffffffff, + 0x80808080, 0x00000000, 0x80808080, 0xffffffff, + 0x7f7f7f7f, 0x7f7f7f7f, 0x80808080, 0x7f7f7f7f, + 0x7f7f7f7f, 0x7f7f7f7f, 0x80808080, 0x7f7f7f7f, + 0x80808080, 0x7f7f7f7f, 0x80808080, 0x80808080, + 0x80808080, 0x7f7f7f7f, 0x80808080, 0x80808080, + 0x7f7f7f7f, 0x80808080, 0x7f7f7f7f, 0x00000000, + 0x7f7f7f7f, 0x80808080, 0x7f7f7f7f, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x00000000, + 0xffffffff, 0x00000000, 0xffffffff, 0x80808080, + 0xffffffff, 0x00000000, 0xffffffff, 0x80808080, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x7f7f7f7f, + 0x00000000, 0x00000000, 0x00000000, 0x7f7f7f7f, + 0x7f7f7f7f, 0xffffffff, 0x00000000, 0x00000000, + 0x7f7f7f7f, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0xffffffff, 
0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x00000000, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x00000000, 0xffffffff, 0x00000000, 0xffffffff, + 0x7f7f7f7f, 0x00000000, 0x7f7f7f7f, 0x00000000, + 0x7f7f7f7f, 0x00000000, 0x7f7f7f7f, 0x00000000, + 0x00000000, 0xffffffff, 0xffffffff, 0x80808080, + 0x00000000, 0xffffffff, 0xffffffff, 0x80808080, + 0xffffffff, 0xffffffff, 0x80808080, 0x00000000, + 0xffffffff, 0xffffffff, 0x80808080, 0x00000000, + 0x80808080, 0xffffffff, 0x7f7f7f7f, 0x7f7f7f7f, + 0x80808080, 0xffffffff, 0x7f7f7f7f, 0x7f7f7f7f + } +}; + +u32 killer_pattern_64b[DQ_NUM][LEN_KILLER_PATTERN] __aligned(32) = { + { + 0x01010101, 0x01010101, 0x00000000, 0x00000000, + 0x01010101, 0x01010101, 0xffffffff, 0xffffffff, + 0xfefefefe, 0xfefefefe, 0xfefefefe, 0xfefefefe, + 0x01010101, 0x01010101, 0xfefefefe, 0xfefefefe, + 0x01010101, 0x01010101, 0xfefefefe, 0xfefefefe, + 0x01010101, 0x01010101, 0x01010101, 0x01010101, + 0xfefefefe, 0xfefefefe, 0x01010101, 0x01010101, + 0xfefefefe, 0xfefefefe, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x01010101, 0x01010101, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xfefefefe, 0xfefefefe, + 0xfefefefe, 0xfefefefe, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xfefefefe, 0xfefefefe, 0x00000000, 0x00000000, + 0xfefefefe, 0xfefefefe, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x01010101, 0x01010101, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x01010101, 0x01010101, 0x00000000, 0x00000000, + 0x01010101, 0x01010101, 0xffffffff, 0xffffffff, + 0xfefefefe, 0xfefefefe, 0xfefefefe, 0xfefefefe + }, + { + 0x02020202, 0x02020202, 0x00000000, 0x00000000, + 0x02020202, 0x02020202, 0xffffffff, 0xffffffff, + 0xfdfdfdfd, 0xfdfdfdfd, 0xfdfdfdfd, 0xfdfdfdfd, + 0x02020202, 0x02020202, 0xfdfdfdfd, 0xfdfdfdfd, + 0x02020202, 0x02020202, 0xfdfdfdfd, 0xfdfdfdfd, + 0x02020202, 0x02020202, 0x02020202, 0x02020202, + 0xfdfdfdfd, 0xfdfdfdfd, 0x02020202, 0x02020202, + 0xfdfdfdfd, 0xfdfdfdfd, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x02020202, 0x02020202, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xfdfdfdfd, 0xfdfdfdfd, + 0xfdfdfdfd, 0xfdfdfdfd, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xfdfdfdfd, 0xfdfdfdfd, 0x00000000, 0x00000000, + 0xfdfdfdfd, 0xfdfdfdfd, 0x00000000, 0x00000000, + 
0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x02020202, 0x02020202, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x02020202, 0x02020202, 0x00000000, 0x00000000, + 0x02020202, 0x02020202, 0xffffffff, 0xffffffff, + 0xfdfdfdfd, 0xfdfdfdfd, 0xfdfdfdfd, 0xfdfdfdfd + }, + { + 0x04040404, 0x04040404, 0x00000000, 0x00000000, + 0x04040404, 0x04040404, 0xffffffff, 0xffffffff, + 0xfbfbfbfb, 0xfbfbfbfb, 0xfbfbfbfb, 0xfbfbfbfb, + 0x04040404, 0x04040404, 0xfbfbfbfb, 0xfbfbfbfb, + 0x04040404, 0x04040404, 0xfbfbfbfb, 0xfbfbfbfb, + 0x04040404, 0x04040404, 0x04040404, 0x04040404, + 0xfbfbfbfb, 0xfbfbfbfb, 0x04040404, 0x04040404, + 0xfbfbfbfb, 0xfbfbfbfb, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x04040404, 0x04040404, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xfbfbfbfb, 0xfbfbfbfb, + 0xfbfbfbfb, 0xfbfbfbfb, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xfbfbfbfb, 0xfbfbfbfb, 0x00000000, 0x00000000, + 0xfbfbfbfb, 0xfbfbfbfb, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x04040404, 0x04040404, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x04040404, 0x04040404, 0x00000000, 0x00000000, + 0x04040404, 0x04040404, 0xffffffff, 0xffffffff, + 0xfbfbfbfb, 0xfbfbfbfb, 0xfbfbfbfb, 0xfbfbfbfb + }, + { + 0x08080808, 0x08080808, 0x00000000, 0x00000000, + 0x08080808, 0x08080808, 0xffffffff, 0xffffffff, + 0xf7f7f7f7, 0xf7f7f7f7, 0xf7f7f7f7, 0xf7f7f7f7, + 0x08080808, 0x08080808, 0xf7f7f7f7, 0xf7f7f7f7, + 0x08080808, 0x08080808, 0xf7f7f7f7, 0xf7f7f7f7, + 0x08080808, 0x08080808, 0x08080808, 0x08080808, + 0xf7f7f7f7, 0xf7f7f7f7, 0x08080808, 0x08080808, + 0xf7f7f7f7, 0xf7f7f7f7, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x08080808, 0x08080808, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xf7f7f7f7, 0xf7f7f7f7, + 0xf7f7f7f7, 0xf7f7f7f7, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xf7f7f7f7, 0xf7f7f7f7, 0x00000000, 0x00000000, + 0xf7f7f7f7, 0xf7f7f7f7, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x08080808, 0x08080808, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x08080808, 0x08080808, 0x00000000, 0x00000000, + 0x08080808, 0x08080808, 0xffffffff, 0xffffffff, + 0xf7f7f7f7, 0xf7f7f7f7, 0xf7f7f7f7, 0xf7f7f7f7 + }, + { + 0x10101010, 0x10101010, 
0x00000000, 0x00000000, + 0x10101010, 0x10101010, 0xffffffff, 0xffffffff, + 0xefefefef, 0xefefefef, 0xefefefef, 0xefefefef, + 0x10101010, 0x10101010, 0xefefefef, 0xefefefef, + 0x10101010, 0x10101010, 0xefefefef, 0xefefefef, + 0x10101010, 0x10101010, 0x10101010, 0x10101010, + 0xefefefef, 0xefefefef, 0x10101010, 0x10101010, + 0xefefefef, 0xefefefef, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x10101010, 0x10101010, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xefefefef, 0xefefefef, + 0xefefefef, 0xefefefef, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xefefefef, 0xefefefef, 0x00000000, 0x00000000, + 0xefefefef, 0xefefefef, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x10101010, 0x10101010, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x10101010, 0x10101010, 0x00000000, 0x00000000, + 0x10101010, 0x10101010, 0xffffffff, 0xffffffff, + 0xefefefef, 0xefefefef, 0xefefefef, 0xefefefef + }, + { + 0x20202020, 0x20202020, 0x00000000, 0x00000000, + 0x20202020, 0x20202020, 0xffffffff, 0xffffffff, + 0xdfdfdfdf, 0xdfdfdfdf, 0xdfdfdfdf, 0xdfdfdfdf, + 0x20202020, 0x20202020, 0xdfdfdfdf, 0xdfdfdfdf, + 0x20202020, 0x20202020, 0xdfdfdfdf, 0xdfdfdfdf, + 0x20202020, 0x20202020, 0x20202020, 0x20202020, + 0xdfdfdfdf, 0xdfdfdfdf, 0x20202020, 0x20202020, + 0xdfdfdfdf, 0xdfdfdfdf, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x20202020, 0x20202020, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xdfdfdfdf, 0xdfdfdfdf, + 0xdfdfdfdf, 0xdfdfdfdf, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xdfdfdfdf, 0xdfdfdfdf, 0x00000000, 0x00000000, + 0xdfdfdfdf, 0xdfdfdfdf, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x20202020, 0x20202020, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x20202020, 0x20202020, 0x00000000, 0x00000000, + 0x20202020, 0x20202020, 0xffffffff, 0xffffffff, + 0xdfdfdfdf, 0xdfdfdfdf, 0xdfdfdfdf, 0xdfdfdfdf + }, + { + 0x40404040, 0x40404040, 0x00000000, 0x00000000, + 0x40404040, 0x40404040, 0xffffffff, 0xffffffff, + 0xbfbfbfbf, 0xbfbfbfbf, 0xbfbfbfbf, 0xbfbfbfbf, + 0x40404040, 0x40404040, 0xbfbfbfbf, 0xbfbfbfbf, + 0x40404040, 0x40404040, 0xbfbfbfbf, 0xbfbfbfbf, + 0x40404040, 0x40404040, 0x40404040, 0x40404040, + 0xbfbfbfbf, 0xbfbfbfbf, 0x40404040, 0x40404040, + 0xbfbfbfbf, 
0xbfbfbfbf, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x40404040, 0x40404040, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xbfbfbfbf, 0xbfbfbfbf, + 0xbfbfbfbf, 0xbfbfbfbf, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xbfbfbfbf, 0xbfbfbfbf, 0x00000000, 0x00000000, + 0xbfbfbfbf, 0xbfbfbfbf, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x40404040, 0x40404040, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x40404040, 0x40404040, 0x00000000, 0x00000000, + 0x40404040, 0x40404040, 0xffffffff, 0xffffffff, + 0xbfbfbfbf, 0xbfbfbfbf, 0xbfbfbfbf, 0xbfbfbfbf + }, + { + 0x80808080, 0x80808080, 0x00000000, 0x00000000, + 0x80808080, 0x80808080, 0xffffffff, 0xffffffff, + 0x7f7f7f7f, 0x7f7f7f7f, 0x7f7f7f7f, 0x7f7f7f7f, + 0x80808080, 0x80808080, 0x7f7f7f7f, 0x7f7f7f7f, + 0x80808080, 0x80808080, 0x7f7f7f7f, 0x7f7f7f7f, + 0x80808080, 0x80808080, 0x80808080, 0x80808080, + 0x7f7f7f7f, 0x7f7f7f7f, 0x80808080, 0x80808080, + 0x7f7f7f7f, 0x7f7f7f7f, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x80808080, 0x80808080, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x7f7f7f7f, 0x7f7f7f7f, + 0x7f7f7f7f, 0x7f7f7f7f, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x7f7f7f7f, 0x7f7f7f7f, 0x00000000, 0x00000000, + 0x7f7f7f7f, 0x7f7f7f7f, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x80808080, 0x80808080, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x80808080, 0x80808080, 0x00000000, 0x00000000, + 0x80808080, 0x80808080, 0xffffffff, 0xffffffff, + 0x7f7f7f7f, 0x7f7f7f7f, 0x7f7f7f7f, 0x7f7f7f7f + } +}; + +u32 special_pattern[DQ_NUM][LEN_SPECIAL_PATTERN] __aligned(32) = { + { + 0x00000000, 0x00000000, 0x01010101, 0x01010101, + 0xffffffff, 0xffffffff, 0xfefefefe, 0xfefefefe, + 0xfefefefe, 0xfefefefe, 0x01010101, 0x01010101, + 0xfefefefe, 0xfefefefe, 0x01010101, 0x01010101, + 0xfefefefe, 0xfefefefe, 0x01010101, 0x01010101, + 0x01010101, 0x01010101, 0xfefefefe, 0xfefefefe, + 0x01010101, 0x01010101, 0xfefefefe, 0xfefefefe, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x01010101, 0x01010101, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 
0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xfefefefe, 0xfefefefe, 0xfefefefe, 0xfefefefe, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xfefefefe, 0xfefefefe, + 0x00000000, 0x00000000, 0xfefefefe, 0xfefefefe, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x01010101, 0x01010101, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x01010101, 0x01010101, + 0x00000000, 0x00000000, 0x01010101, 0x01010101, + 0xffffffff, 0xffffffff, 0xfefefefe, 0xfefefefe, + 0xfefefefe, 0xfefefefe, 0x00000000, 0x00000000 + }, + { + 0x00000000, 0x00000000, 0x02020202, 0x02020202, + 0xffffffff, 0xffffffff, 0xfdfdfdfd, 0xfdfdfdfd, + 0xfdfdfdfd, 0xfdfdfdfd, 0x02020202, 0x02020202, + 0xfdfdfdfd, 0xfdfdfdfd, 0x02020202, 0x02020202, + 0xfdfdfdfd, 0xfdfdfdfd, 0x02020202, 0x02020202, + 0x02020202, 0x02020202, 0xfdfdfdfd, 0xfdfdfdfd, + 0x02020202, 0x02020202, 0xfdfdfdfd, 0xfdfdfdfd, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x02020202, 0x02020202, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xfdfdfdfd, 0xfdfdfdfd, 0xfdfdfdfd, 0xfdfdfdfd, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xfdfdfdfd, 0xfdfdfdfd, + 0x00000000, 0x00000000, 0xfdfdfdfd, 0xfdfdfdfd, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x02020202, 0x02020202, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x02020202, 0x02020202, + 0x00000000, 0x00000000, 0x02020202, 0x02020202, + 0xffffffff, 0xffffffff, 0xfdfdfdfd, 0xfdfdfdfd, + 0xfdfdfdfd, 0xfdfdfdfd, 0x00000000, 0x00000000 + }, + { + 0x00000000, 0x00000000, 0x04040404, 0x04040404, + 0xffffffff, 0xffffffff, 0xfbfbfbfb, 0xfbfbfbfb, + 0xfbfbfbfb, 0xfbfbfbfb, 0x04040404, 0x04040404, + 0xfbfbfbfb, 0xfbfbfbfb, 0x04040404, 0x04040404, + 0xfbfbfbfb, 0xfbfbfbfb, 0x04040404, 0x04040404, + 0x04040404, 0x04040404, 0xfbfbfbfb, 0xfbfbfbfb, + 0x04040404, 0x04040404, 0xfbfbfbfb, 0xfbfbfbfb, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x04040404, 0x04040404, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xfbfbfbfb, 0xfbfbfbfb, 0xfbfbfbfb, 0xfbfbfbfb, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 
0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xfbfbfbfb, 0xfbfbfbfb, + 0x00000000, 0x00000000, 0xfbfbfbfb, 0xfbfbfbfb, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x04040404, 0x04040404, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x04040404, 0x04040404, + 0x00000000, 0x00000000, 0x04040404, 0x04040404, + 0xffffffff, 0xffffffff, 0xfbfbfbfb, 0xfbfbfbfb, + 0xfbfbfbfb, 0xfbfbfbfb, 0x00000000, 0x00000000 + }, + { + 0x00000000, 0x00000000, 0x08080808, 0x08080808, + 0xffffffff, 0xffffffff, 0xf7f7f7f7, 0xf7f7f7f7, + 0xf7f7f7f7, 0xf7f7f7f7, 0x08080808, 0x08080808, + 0xf7f7f7f7, 0xf7f7f7f7, 0x08080808, 0x08080808, + 0xf7f7f7f7, 0xf7f7f7f7, 0x08080808, 0x08080808, + 0x08080808, 0x08080808, 0xf7f7f7f7, 0xf7f7f7f7, + 0x08080808, 0x08080808, 0xf7f7f7f7, 0xf7f7f7f7, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x08080808, 0x08080808, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xf7f7f7f7, 0xf7f7f7f7, 0xf7f7f7f7, 0xf7f7f7f7, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xf7f7f7f7, 0xf7f7f7f7, + 0x00000000, 0x00000000, 0xf7f7f7f7, 0xf7f7f7f7, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x08080808, 0x08080808, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x08080808, 0x08080808, + 0x00000000, 0x00000000, 0x08080808, 0x08080808, + 0xffffffff, 0xffffffff, 0xf7f7f7f7, 0xf7f7f7f7, + 0xf7f7f7f7, 0xf7f7f7f7, 0x00000000, 0x00000000 + }, + { + 0x00000000, 0x00000000, 0x10101010, 0x10101010, + 0xffffffff, 0xffffffff, 0xefefefef, 0xefefefef, + 0xefefefef, 0xefefefef, 0x10101010, 0x10101010, + 0xefefefef, 0xefefefef, 0x10101010, 0x10101010, + 0xefefefef, 0xefefefef, 0x10101010, 0x10101010, + 0x10101010, 0x10101010, 0xefefefef, 0xefefefef, + 0x10101010, 0x10101010, 0xefefefef, 0xefefefef, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x10101010, 0x10101010, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xefefefef, 0xefefefef, 0xefefefef, 0xefefefef, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xefefefef, 0xefefefef, + 0x00000000, 0x00000000, 0xefefefef, 0xefefefef, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 
0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x10101010, 0x10101010, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x10101010, 0x10101010, + 0x00000000, 0x00000000, 0x10101010, 0x10101010, + 0xffffffff, 0xffffffff, 0xefefefef, 0xefefefef, + 0xefefefef, 0xefefefef, 0x00000000, 0x00000000 + }, + { + 0x00000000, 0x00000000, 0x20202020, 0x20202020, + 0xffffffff, 0xffffffff, 0xdfdfdfdf, 0xdfdfdfdf, + 0xdfdfdfdf, 0xdfdfdfdf, 0x20202020, 0x20202020, + 0xdfdfdfdf, 0xdfdfdfdf, 0x20202020, 0x20202020, + 0xdfdfdfdf, 0xdfdfdfdf, 0x20202020, 0x20202020, + 0x20202020, 0x20202020, 0xdfdfdfdf, 0xdfdfdfdf, + 0x20202020, 0x20202020, 0xdfdfdfdf, 0xdfdfdfdf, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x20202020, 0x20202020, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xdfdfdfdf, 0xdfdfdfdf, 0xdfdfdfdf, 0xdfdfdfdf, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xdfdfdfdf, 0xdfdfdfdf, + 0x00000000, 0x00000000, 0xdfdfdfdf, 0xdfdfdfdf, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x20202020, 0x20202020, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x20202020, 0x20202020, + 0x00000000, 0x00000000, 0x20202020, 0x20202020, + 0xffffffff, 0xffffffff, 0xdfdfdfdf, 0xdfdfdfdf, + 0xdfdfdfdf, 0xdfdfdfdf, 0x00000000, 0x00000000 + }, + { + 0x00000000, 0x00000000, 0x40404040, 0x40404040, + 0xffffffff, 0xffffffff, 0xbfbfbfbf, 0xbfbfbfbf, + 0xbfbfbfbf, 0xbfbfbfbf, 0x40404040, 0x40404040, + 0xbfbfbfbf, 0xbfbfbfbf, 0x40404040, 0x40404040, + 0xbfbfbfbf, 0xbfbfbfbf, 0x40404040, 0x40404040, + 0x40404040, 0x40404040, 0xbfbfbfbf, 0xbfbfbfbf, + 0x40404040, 0x40404040, 0xbfbfbfbf, 0xbfbfbfbf, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x40404040, 0x40404040, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xbfbfbfbf, 0xbfbfbfbf, 0xbfbfbfbf, 0xbfbfbfbf, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xbfbfbfbf, 0xbfbfbfbf, + 0x00000000, 0x00000000, 0xbfbfbfbf, 0xbfbfbfbf, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x40404040, 0x40404040, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x40404040, 0x40404040, + 0x00000000, 0x00000000, 0x40404040, 0x40404040, + 0xffffffff, 0xffffffff, 0xbfbfbfbf, 0xbfbfbfbf, + 0xbfbfbfbf, 0xbfbfbfbf, 0x00000000, 0x00000000 + }, + { + 0x00000000, 0x00000000, 
0x80808080, 0x80808080, + 0xffffffff, 0xffffffff, 0x7f7f7f7f, 0x7f7f7f7f, + 0x7f7f7f7f, 0x7f7f7f7f, 0x80808080, 0x80808080, + 0x7f7f7f7f, 0x7f7f7f7f, 0x80808080, 0x80808080, + 0x7f7f7f7f, 0x7f7f7f7f, 0x80808080, 0x80808080, + 0x80808080, 0x80808080, 0x7f7f7f7f, 0x7f7f7f7f, + 0x80808080, 0x80808080, 0x7f7f7f7f, 0x7f7f7f7f, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0x80808080, 0x80808080, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0x7f7f7f7f, 0x7f7f7f7f, 0x7f7f7f7f, 0x7f7f7f7f, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0x00000000, 0x00000000, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0x7f7f7f7f, 0x7f7f7f7f, + 0x00000000, 0x00000000, 0x7f7f7f7f, 0x7f7f7f7f, + 0x00000000, 0x00000000, 0x00000000, 0x00000000, + 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, + 0x80808080, 0x80808080, 0xffffffff, 0xffffffff, + 0xffffffff, 0xffffffff, 0x80808080, 0x80808080, + 0x00000000, 0x00000000, 0x80808080, 0x80808080, + 0xffffffff, 0xffffffff, 0x7f7f7f7f, 0x7f7f7f7f, + 0x7f7f7f7f, 0x7f7f7f7f, 0x00000000, 0x00000000 + } +}; + +/* Fabric ratios table */ +u32 fabric_ratio[FAB_OPT] = { + 0x04010204, + 0x04020202, + 0x08020306, + 0x08020303, + 0x04020303, + 0x04020204, + 0x04010202, + 0x08030606, + 0x08030505, + 0x04020306, + 0x0804050a, + 0x04030606, + 0x04020404, + 0x04030306, + 0x04020505, + 0x08020505, + 0x04010303, + 0x08050a0a, + 0x04030408, + 0x04010102, + 0x08030306 +}; + +u32 pbs_dq_mapping[PUP_NUM_64BIT + 1][DQ_NUM] = { + {3, 2, 5, 7, 1, 0, 6, 4}, + {2, 3, 6, 7, 1, 0, 4, 5}, + {1, 3, 5, 6, 0, 2, 4, 7}, + {0, 2, 4, 7, 1, 3, 5, 6}, + {3, 0, 4, 6, 1, 2, 5, 7}, + {0, 3, 5, 7, 1, 2, 4, 6}, + {2, 3, 5, 7, 1, 0, 4, 6}, + {0, 2, 5, 4, 1, 3, 6, 7}, + {2, 3, 4, 7, 0, 1, 5, 6} +}; + +#endif /* __DDR3_PATTERNS_64_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_topology_def.h b/drivers/ddr/marvell/a38x/old/ddr3_topology_def.h new file mode 100644 index 0000000000..64a0447dd1 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_topology_def.h @@ -0,0 +1,76 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#ifndef _DDR3_TOPOLOGY_DEF_H
+#define _DDR3_TOPOLOGY_DEF_H
+
+/* TOPOLOGY */
+
+enum hws_speed_bin {
+	SPEED_BIN_DDR_800D,
+	SPEED_BIN_DDR_800E,
+	SPEED_BIN_DDR_1066E,
+	SPEED_BIN_DDR_1066F,
+	SPEED_BIN_DDR_1066G,
+	SPEED_BIN_DDR_1333F,
+	SPEED_BIN_DDR_1333G,
+	SPEED_BIN_DDR_1333H,
+	SPEED_BIN_DDR_1333J,
+	SPEED_BIN_DDR_1600G,
+	SPEED_BIN_DDR_1600H,
+	SPEED_BIN_DDR_1600J,
+	SPEED_BIN_DDR_1600K,
+	SPEED_BIN_DDR_1866J,
+	SPEED_BIN_DDR_1866K,
+	SPEED_BIN_DDR_1866L,
+	SPEED_BIN_DDR_1866M,
+	SPEED_BIN_DDR_2133K,
+	SPEED_BIN_DDR_2133L,
+	SPEED_BIN_DDR_2133M,
+	SPEED_BIN_DDR_2133N,
+
+	SPEED_BIN_DDR_1333H_EXT,
+	SPEED_BIN_DDR_1600K_EXT,
+	SPEED_BIN_DDR_1866M_EXT
+};
+
+enum hws_ddr_freq {
+	DDR_FREQ_LOW_FREQ,
+	DDR_FREQ_400,
+	DDR_FREQ_533,
+	DDR_FREQ_667,
+	DDR_FREQ_800,
+	DDR_FREQ_933,
+	DDR_FREQ_1066,
+	DDR_FREQ_311,
+	DDR_FREQ_333,
+	DDR_FREQ_467,
+	DDR_FREQ_850,
+	DDR_FREQ_600,
+	DDR_FREQ_300,
+	DDR_FREQ_900,
+	DDR_FREQ_360,
+	DDR_FREQ_1000,
+	DDR_FREQ_LIMIT
+};
+
+enum speed_bin_table_elements {
+	SPEED_BIN_TRCD,
+	SPEED_BIN_TRP,
+	SPEED_BIN_TRAS,
+	SPEED_BIN_TRC,
+	SPEED_BIN_TRRD1K,
+	SPEED_BIN_TRRD2K,
+	SPEED_BIN_TPD,
+	SPEED_BIN_TFAW1K,
+	SPEED_BIN_TFAW2K,
+	SPEED_BIN_TWTR,
+	SPEED_BIN_TRTP,
+	SPEED_BIN_TWR,
+	SPEED_BIN_TMOD
+};
+
+#endif /* _DDR3_TOPOLOGY_DEF_H */
diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training.c b/drivers/ddr/marvell/a38x/old/ddr3_training.c
new file mode 100644
index 0000000000..c5ebd7c16d
--- /dev/null
+++ b/drivers/ddr/marvell/a38x/old/ddr3_training.c
@@ -0,0 +1,2649 @@
+/*
+ * Copyright (C) Marvell International Ltd. and its affiliates
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#include
+#include
+#include
+#include
+
+#include "ddr3_init.h"
+
+#define GET_MAX_VALUE(x, y) \
+	((x) > (y)) ? (x) : (y)
+#define CEIL_DIVIDE(x, y) \
+	((x - (x / y) * y) == 0) ? ((x / y) - 1) : (x / y)
+
+#define TIME_2_CLOCK_CYCLES	CEIL_DIVIDE
+
+#define GET_CS_FROM_MASK(mask)	(cs_mask2_num[mask])
+#define CS_CBE_VALUE(cs_num)	(cs_cbe_reg[cs_num])
+
+u32 window_mem_addr = 0;
+u32 phy_reg0_val = 0;
+u32 phy_reg1_val = 8;
+u32 phy_reg2_val = 0;
+u32 phy_reg3_val = 0xa;
+enum hws_ddr_freq init_freq = DDR_FREQ_667;
+enum hws_ddr_freq low_freq = DDR_FREQ_LOW_FREQ;
+enum hws_ddr_freq medium_freq;
+u32 debug_dunit = 0;
+u32 odt_additional = 1;
+u32 *dq_map_table = NULL;
+u32 odt_config = 1;
+
+#if defined(CONFIG_ARMADA_38X) || defined(CONFIG_ALLEYCAT3) || \
+	defined(CONFIG_ARMADA_39X)
+u32 is_pll_before_init = 0, is_adll_calib_before_init = 0, is_dfs_in_init = 0;
+u32 dfs_low_freq = 130;
+#else
+u32 is_pll_before_init = 0, is_adll_calib_before_init = 1, is_dfs_in_init = 0;
+u32 dfs_low_freq = 100;
+#endif
+u32 g_rtt_nom_c_s0, g_rtt_nom_c_s1;
+u8 calibration_update_control;	/* 2 external only, 1 is internal only */
+
+enum hws_result training_result[MAX_STAGE_LIMIT][MAX_INTERFACE_NUM];
+enum auto_tune_stage training_stage = INIT_CONTROLLER;
+u32 finger_test = 0, p_finger_start = 11, p_finger_end = 64,
+	n_finger_start = 11, n_finger_end = 64,
+	p_finger_step = 3, n_finger_step = 3;
+u32 clamp_tbl[] = { 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3 };
+
+/* Initiate to 0xff, this variable is define by user in debug mode */
+u32 mode2_t = 0xff;
+u32 xsb_validate_type = 0;
+u32 xsb_validation_base_address = 0xf000;
+u32 first_active_if = 0;
+u32 dfs_low_phy1 = 0x1f;
+u32 multicast_id = 0;
+int use_broadcast = 0;
+struct hws_tip_freq_config_info *freq_info_table = NULL;
+u8 is_cbe_required = 0;
+u32 debug_mode = 0;
+u32 delay_enable = 0;
+int rl_mid_freq_wa = 0;
+
+u32 effective_cs = 0;
+
+u32 mask_tune_func = (SET_MEDIUM_FREQ_MASK_BIT |
+		      WRITE_LEVELING_MASK_BIT |
+		      LOAD_PATTERN_2_MASK_BIT |
+		      READ_LEVELING_MASK_BIT |
+		      SET_TARGET_FREQ_MASK_BIT | WRITE_LEVELING_TF_MASK_BIT |
+		      READ_LEVELING_TF_MASK_BIT |
+		      CENTRALIZATION_RX_MASK_BIT | CENTRALIZATION_TX_MASK_BIT);
+
+void ddr3_print_version(void)
+{
+	printf(DDR3_TIP_VERSION_STRING);
+}
+
+static int ddr3_tip_ddr3_training_main_flow(u32 dev_num);
+static int ddr3_tip_write_odt(u32 dev_num, enum hws_access_type access_type,
+			      u32 if_id, u32 cl_value, u32 cwl_value);
+static int ddr3_tip_ddr3_auto_tune(u32 dev_num);
+static int is_bus_access_done(u32 dev_num, u32 if_id,
+			      u32 dunit_reg_adrr, u32 bit);
+#ifdef ODT_TEST_SUPPORT
+static int odt_test(u32 dev_num, enum hws_algo_type algo_type);
+#endif
+
+int adll_calibration(u32 dev_num, enum hws_access_type access_type,
+		     u32 if_id, enum hws_ddr_freq frequency);
+static int ddr3_tip_set_timing(u32 dev_num, enum hws_access_type access_type,
+			       u32 if_id, enum hws_ddr_freq frequency);
+
+static struct page_element page_param[] = {
+	/*
+	 * 8bits		16 bits
+	 * page-size(K)	page-size(K)	mask
+	 */
+	{ 1, 2, 2},
+	/* 512M */
+	{ 1, 2, 3},
+	/* 1G */
+	{ 1, 2, 0},
+	/* 2G */
+	{ 1, 2, 4},
+	/* 4G */
+	{ 2, 2, 5}
+	/* 8G */
+};
+
+static u8 mem_size_config[MEM_SIZE_LAST] = {
+	0x2,			/* 512Mbit */
+	0x3,			/* 1Gbit */
+	0x0,			/* 2Gbit */
+	0x4,			/* 4Gbit */
+	0x5			/* 8Gbit */
+};
+
+static u8 cs_mask2_num[] = { 0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3 };
+
+static struct reg_data odpg_default_value[] = {
+	{0x1034, 0x38000, MASK_ALL_BITS},
+	{0x1038, 0x0, MASK_ALL_BITS},
+	{0x10b0, 0x0, MASK_ALL_BITS},
+	{0x10b8, 0x0, MASK_ALL_BITS},
+	{0x10c0, 0x0, MASK_ALL_BITS},
+	{0x10f0, 0x0, MASK_ALL_BITS},
+	{0x10f4, 0x0, MASK_ALL_BITS},
+	{0x10f8, 0xff, MASK_ALL_BITS},
+	{0x10fc, 0xffff, MASK_ALL_BITS},
+
{0x1130, 0x0, MASK_ALL_BITS}, + {0x1830, 0x2000000, MASK_ALL_BITS}, + {0x14d0, 0x0, MASK_ALL_BITS}, + {0x14d4, 0x0, MASK_ALL_BITS}, + {0x14d8, 0x0, MASK_ALL_BITS}, + {0x14dc, 0x0, MASK_ALL_BITS}, + {0x1454, 0x0, MASK_ALL_BITS}, + {0x1594, 0x0, MASK_ALL_BITS}, + {0x1598, 0x0, MASK_ALL_BITS}, + {0x159c, 0x0, MASK_ALL_BITS}, + {0x15a0, 0x0, MASK_ALL_BITS}, + {0x15a4, 0x0, MASK_ALL_BITS}, + {0x15a8, 0x0, MASK_ALL_BITS}, + {0x15ac, 0x0, MASK_ALL_BITS}, + {0x1604, 0x0, MASK_ALL_BITS}, + {0x1608, 0x0, MASK_ALL_BITS}, + {0x160c, 0x0, MASK_ALL_BITS}, + {0x1610, 0x0, MASK_ALL_BITS}, + {0x1614, 0x0, MASK_ALL_BITS}, + {0x1618, 0x0, MASK_ALL_BITS}, + {0x1624, 0x0, MASK_ALL_BITS}, + {0x1690, 0x0, MASK_ALL_BITS}, + {0x1694, 0x0, MASK_ALL_BITS}, + {0x1698, 0x0, MASK_ALL_BITS}, + {0x169c, 0x0, MASK_ALL_BITS}, + {0x14b8, 0x6f67, MASK_ALL_BITS}, + {0x1630, 0x0, MASK_ALL_BITS}, + {0x1634, 0x0, MASK_ALL_BITS}, + {0x1638, 0x0, MASK_ALL_BITS}, + {0x163c, 0x0, MASK_ALL_BITS}, + {0x16b0, 0x0, MASK_ALL_BITS}, + {0x16b4, 0x0, MASK_ALL_BITS}, + {0x16b8, 0x0, MASK_ALL_BITS}, + {0x16bc, 0x0, MASK_ALL_BITS}, + {0x16c0, 0x0, MASK_ALL_BITS}, + {0x16c4, 0x0, MASK_ALL_BITS}, + {0x16c8, 0x0, MASK_ALL_BITS}, + {0x16cc, 0x1, MASK_ALL_BITS}, + {0x16f0, 0x1, MASK_ALL_BITS}, + {0x16f4, 0x0, MASK_ALL_BITS}, + {0x16f8, 0x0, MASK_ALL_BITS}, + {0x16fc, 0x0, MASK_ALL_BITS} +}; + +static int ddr3_tip_bus_access(u32 dev_num, enum hws_access_type interface_access, + u32 if_id, enum hws_access_type phy_access, + u32 phy_id, enum hws_ddr_phy phy_type, u32 reg_addr, + u32 data_value, enum hws_operation oper_type); +static int ddr3_tip_pad_inv(u32 dev_num, u32 if_id); +static int ddr3_tip_rank_control(u32 dev_num, u32 if_id); + +/* + * Update global training parameters by data from user + */ +int ddr3_tip_tune_training_params(u32 dev_num, + struct tune_train_params *params) +{ + if (params->ck_delay != -1) + ck_delay = params->ck_delay; + if (params->ck_delay_16 != -1) + ck_delay_16 = params->ck_delay_16; + if (params->phy_reg3_val != -1) + phy_reg3_val = params->phy_reg3_val; + + return MV_OK; +} + +/* + * Configure CS + */ +int ddr3_tip_configure_cs(u32 dev_num, u32 if_id, u32 cs_num, u32 enable) +{ + u32 data, addr_hi, data_high; + u32 mem_index; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (enable == 1) { + data = (tm->interface_params[if_id].bus_width == + BUS_WIDTH_8) ? 
0 : 1; + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + SDRAM_ACCESS_CONTROL_REG, (data << (cs_num * 4)), + 0x3 << (cs_num * 4))); + mem_index = tm->interface_params[if_id].memory_size; + + addr_hi = mem_size_config[mem_index] & 0x3; + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + SDRAM_ACCESS_CONTROL_REG, + (addr_hi << (2 + cs_num * 4)), + 0x3 << (2 + cs_num * 4))); + + data_high = (mem_size_config[mem_index] & 0x4) >> 2; + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + SDRAM_ACCESS_CONTROL_REG, + data_high << (20 + cs_num), 1 << (20 + cs_num))); + + /* Enable Address Select Mode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + SDRAM_ACCESS_CONTROL_REG, 1 << (16 + cs_num), + 1 << (16 + cs_num))); + } + switch (cs_num) { + case 0: + case 1: + case 2: + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + DDR_CONTROL_LOW_REG, (enable << (cs_num + 11)), + 1 << (cs_num + 11))); + break; + case 3: + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + DDR_CONTROL_LOW_REG, (enable << 15), 1 << 15)); + break; + } + + return MV_OK; +} + +/* + * Calculate number of CS + */ +static int calc_cs_num(u32 dev_num, u32 if_id, u32 *cs_num) +{ + u32 cs; + u32 bus_cnt; + u32 cs_count; + u32 cs_bitmask; + u32 curr_cs_num = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (bus_cnt = 0; bus_cnt < GET_TOPOLOGY_NUM_OF_BUSES(); bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + cs_count = 0; + cs_bitmask = tm->interface_params[if_id]. + as_bus_params[bus_cnt].cs_bitmask; + for (cs = 0; cs < MAX_CS_NUM; cs++) { + if ((cs_bitmask >> cs) & 1) + cs_count++; + } + + if (curr_cs_num == 0) { + curr_cs_num = cs_count; + } else if (cs_count != curr_cs_num) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("CS number is different per bus (IF %d BUS %d cs_num %d curr_cs_num %d)\n", + if_id, bus_cnt, cs_count, + curr_cs_num)); + return MV_NOT_SUPPORTED; + } + } + *cs_num = curr_cs_num; + + return MV_OK; +} + +/* + * Init Controller Flow + */ +int hws_ddr3_tip_init_controller(u32 dev_num, struct init_cntr_param *init_cntr_prm) +{ + u32 if_id; + u32 cs_num; + u32 t_refi = 0, t_hclk = 0, t_ckclk = 0, t_faw = 0, t_pd = 0, + t_wr = 0, t2t = 0, txpdll = 0; + u32 data_value = 0, bus_width = 0, page_size = 0, cs_cnt = 0, + mem_mask = 0, bus_index = 0; + enum hws_speed_bin speed_bin_index = SPEED_BIN_DDR_2133N; + enum hws_mem_size memory_size = MEM_2G; + enum hws_ddr_freq freq = init_freq; + enum hws_timing timing; + u32 cs_mask = 0; + u32 cl_value = 0, cwl_val = 0; + u32 refresh_interval_cnt = 0, bus_cnt = 0, adll_tap = 0; + enum hws_access_type access_type = ACCESS_TYPE_UNICAST; + u32 data_read[MAX_INTERFACE_NUM]; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("Init_controller, do_mrs_phy=%d, is_ctrl64_bit=%d\n", + init_cntr_prm->do_mrs_phy, + init_cntr_prm->is_ctrl64_bit)); + + if (init_cntr_prm->init_phy == 1) { + CHECK_STATUS(ddr3_tip_configure_phy(dev_num)); + } + + if (generic_init_controller == 1) { + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("active IF %d\n", if_id)); + mem_mask = 0; + for (bus_index = 0; + bus_index < GET_TOPOLOGY_NUM_OF_BUSES(); + bus_index++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_index); + mem_mask |= + tm->interface_params[if_id]. 
+ as_bus_params[bus_index].mirror_enable_bitmask; + } + + if (mem_mask != 0) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, + if_id, CS_ENABLE_REG, 0, + 0x8)); + } + + memory_size = + tm->interface_params[if_id]. + memory_size; + speed_bin_index = + tm->interface_params[if_id]. + speed_bin_index; + freq = init_freq; + t_refi = + (tm->interface_params[if_id]. + interface_temp == + HWS_TEMP_HIGH) ? TREFI_HIGH : TREFI_LOW; + t_refi *= 1000; /* psec */ + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("memy_size %d speed_bin_ind %d freq %d t_refi %d\n", + memory_size, speed_bin_index, freq, + t_refi)); + /* HCLK & CK CLK in 2:1[ps] */ + /* t_ckclk is external clock */ + t_ckclk = (MEGA / freq_val[freq]); + /* t_hclk is internal clock */ + t_hclk = 2 * t_ckclk; + refresh_interval_cnt = t_refi / t_hclk; /* no units */ + bus_width = + (DDR3_IS_16BIT_DRAM_MODE(tm->bus_act_mask) + == 1) ? (16) : (32); + + if (init_cntr_prm->is_ctrl64_bit) + bus_width = 64; + + data_value = + (refresh_interval_cnt | 0x4000 | + ((bus_width == + 32) ? 0x8000 : 0) | 0x1000000) & ~(1 << 26); + + /* Interface Bus Width */ + /* SRMode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + SDRAM_CONFIGURATION_REG, data_value, + 0x100ffff)); + + /* Interleave first command pre-charge enable (TBD) */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + SDRAM_OPEN_PAGE_CONTROL_REG, (1 << 10), + (1 << 10))); + + /* PHY configuration */ + /* + * Postamble Length = 1.5cc, Addresscntl to clk skew + * \BD, Preamble length normal, parralal ADLL enable + */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DRAM_PHY_CONFIGURATION, 0x28, 0x3e)); + if (init_cntr_prm->is_ctrl64_bit) { + /* positive edge */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DRAM_PHY_CONFIGURATION, 0x0, + 0xff80)); + } + + /* calibration block disable */ + /* Xbar Read buffer select (for Internal access) */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + CALIB_MACHINE_CTRL_REG, 0x1200c, + 0x7dffe01c)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + CALIB_MACHINE_CTRL_REG, + calibration_update_control << 3, 0x3 << 3)); + + /* Pad calibration control - enable */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + CALIB_MACHINE_CTRL_REG, 0x1, 0x1)); + + cs_mask = 0; + data_value = 0x7; + /* + * Address ctrl \96 Part of the Generic code + * The next configuration is done: + * 1) Memory Size + * 2) Bus_width + * 3) CS# + * 4) Page Number + * 5) t_faw + * Per Dunit get from the Map_topology the parameters: + * Bus_width + * t_faw is per Dunit not per CS + */ + page_size = + (tm->interface_params[if_id]. + bus_width == + BUS_WIDTH_8) ? page_param[memory_size]. + page_size_8bit : page_param[memory_size]. + page_size_16bit; + + t_faw = + (page_size == 1) ? speed_bin_table(speed_bin_index, + SPEED_BIN_TFAW1K) + : speed_bin_table(speed_bin_index, + SPEED_BIN_TFAW2K); + + data_value = TIME_2_CLOCK_CYCLES(t_faw, t_ckclk); + data_value = data_value << 24; + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + SDRAM_ACCESS_CONTROL_REG, data_value, + 0x7f000000)); + + data_value = + (tm->interface_params[if_id]. + bus_width == BUS_WIDTH_8) ? 0 : 1; + + /* create merge cs mask for all cs available in dunit */ + for (bus_cnt = 0; + bus_cnt < GET_TOPOLOGY_NUM_OF_BUSES(); + bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + cs_mask |= + tm->interface_params[if_id]. 
+ as_bus_params[bus_cnt].cs_bitmask; + } + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("Init_controller IF %d cs_mask %d\n", + if_id, cs_mask)); + /* + * Configure the next upon the Map Topology \96 If the + * Dunit is CS0 Configure CS0 if it is multi CS + * configure them both: The Bust_width it\92s the + * Memory Bus width \96 x8 or x16 + */ + for (cs_cnt = 0; cs_cnt < NUM_OF_CS; cs_cnt++) { + ddr3_tip_configure_cs(dev_num, if_id, cs_cnt, + ((cs_mask & (1 << cs_cnt)) ? 1 + : 0)); + } + + if (init_cntr_prm->do_mrs_phy) { + /* + * MR0 \96 Part of the Generic code + * The next configuration is done: + * 1) Burst Length + * 2) CAS Latency + * get for each dunit what is it Speed_bin & + * Target Frequency. From those both parameters + * get the appropriate Cas_l from the CL table + */ + cl_value = + tm->interface_params[if_id]. + cas_l; + cwl_val = + tm->interface_params[if_id]. + cas_wl; + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("cl_value 0x%x cwl_val 0x%x\n", + cl_value, cwl_val)); + + data_value = + ((cl_mask_table[cl_value] & 0x1) << 2) | + ((cl_mask_table[cl_value] & 0xe) << 3); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + MR0_REG, data_value, + (0x7 << 4) | (1 << 2))); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + MR0_REG, twr_mask_table[t_wr + 1], + 0xe00)); + + /* + * MR1: Set RTT and DIC Design GL values + * configured by user + */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, MR1_REG, + g_dic | g_rtt_nom, 0x266)); + + /* MR2 - Part of the Generic code */ + /* + * The next configuration is done: + * 1) SRT + * 2) CAS Write Latency + */ + data_value = (cwl_mask_table[cwl_val] << 3); + data_value |= + ((tm->interface_params[if_id]. + interface_temp == + HWS_TEMP_HIGH) ? (1 << 7) : 0); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + MR2_REG, data_value, + (0x7 << 3) | (0x1 << 7) | (0x3 << + 9))); + } + + ddr3_tip_write_odt(dev_num, access_type, if_id, + cl_value, cwl_val); + ddr3_tip_set_timing(dev_num, access_type, if_id, freq); + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DUNIT_CONTROL_HIGH_REG, 0x177, + 0x1000177)); + + if (init_cntr_prm->is_ctrl64_bit) { + /* disable 0.25 cc delay */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DUNIT_CONTROL_HIGH_REG, 0x0, + 0x800)); + } + + /* reset bit 7 */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DUNIT_CONTROL_HIGH_REG, + (init_cntr_prm->msys_init << 7), (1 << 7))); + + timing = tm->interface_params[if_id].timing; + + if (mode2_t != 0xff) { + t2t = mode2_t; + } else if (timing != HWS_TIM_DEFAULT) { + /* Board topology map is forcing timing */ + t2t = (timing == HWS_TIM_2T) ? 1 : 0; + } else { + /* calculate number of CS (per interface) */ + CHECK_STATUS(calc_cs_num + (dev_num, if_id, &cs_num)); + t2t = (cs_num == 1) ? 
0 : 1; + } + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DDR_CONTROL_LOW_REG, t2t << 3, + 0x3 << 3)); + /* move the block to ddr3_tip_set_timing - start */ + t_pd = GET_MAX_VALUE(t_ckclk * 3, + speed_bin_table(speed_bin_index, + SPEED_BIN_TPD)); + t_pd = TIME_2_CLOCK_CYCLES(t_pd, t_ckclk); + txpdll = GET_MAX_VALUE(t_ckclk * 10, 24); + txpdll = CEIL_DIVIDE((txpdll - 1), t_ckclk); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DDR_TIMING_REG, txpdll << 4, + 0x1f << 4)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DDR_TIMING_REG, 0x28 << 9, 0x3f << 9)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DDR_TIMING_REG, 0xa << 21, 0xff << 21)); + + /* move the block to ddr3_tip_set_timing - end */ + /* AUTO_ZQC_TIMING */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + TIMING_REG, (AUTO_ZQC_TIMING | (2 << 20)), + 0x3fffff)); + CHECK_STATUS(ddr3_tip_if_read + (dev_num, access_type, if_id, + DRAM_PHY_CONFIGURATION, data_read, 0x30)); + data_value = + (data_read[if_id] == 0) ? (1 << 11) : 0; + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DUNIT_CONTROL_HIGH_REG, data_value, + (1 << 11))); + + /* Set Active control for ODT write transactions */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, 0x1494, g_odt_config, + MASK_ALL_BITS)); + } + } else { +#ifdef STATIC_ALGO_SUPPORT + CHECK_STATUS(ddr3_tip_static_init_controller(dev_num)); +#if defined(CONFIG_ARMADA_38X) || defined(CONFIG_ARMADA_39X) + CHECK_STATUS(ddr3_tip_static_phy_init_controller(dev_num)); +#endif +#endif /* STATIC_ALGO_SUPPORT */ + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_rank_control(dev_num, if_id)); + + if (init_cntr_prm->do_mrs_phy) { + CHECK_STATUS(ddr3_tip_pad_inv(dev_num, if_id)); + } + + /* Pad calibration control - disable */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + CALIB_MACHINE_CTRL_REG, 0x0, 0x1)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + CALIB_MACHINE_CTRL_REG, + calibration_update_control << 3, 0x3 << 3)); + } + + CHECK_STATUS(ddr3_tip_enable_init_sequence(dev_num)); + + if (delay_enable != 0) { + adll_tap = MEGA / (freq_val[freq] * 64); + ddr3_tip_cmd_addr_init_delay(dev_num, adll_tap); + } + + return MV_OK; +} + +/* + * Load Topology map + */ +int hws_ddr3_tip_load_topology_map(u32 dev_num, struct hws_topology_map *tm) +{ + enum hws_speed_bin speed_bin_index; + enum hws_ddr_freq freq = DDR_FREQ_LIMIT; + u32 if_id; + + freq_val[DDR_FREQ_LOW_FREQ] = dfs_low_freq; + tm = ddr3_get_topology_map(); + CHECK_STATUS(ddr3_tip_get_first_active_if + ((u8)dev_num, tm->if_act_mask, + &first_active_if)); + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("board IF_Mask=0x%x num_of_bus_per_interface=0x%x\n", + tm->if_act_mask, + tm->num_of_bus_per_interface)); + + /* + * if CL, CWL values are missing in topology map, then fill them + * according to speedbin tables + */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + speed_bin_index = + tm->interface_params[if_id].speed_bin_index; + /* TBD memory frequency of interface 0 only is used ! */ + freq = tm->interface_params[first_active_if].memory_freq; + + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("speed_bin_index =%d freq=%d cl=%d cwl=%d\n", + speed_bin_index, freq_val[freq], + tm->interface_params[if_id]. + cas_l, + tm->interface_params[if_id]. 
+ cas_wl)); + + if (tm->interface_params[if_id].cas_l == 0) { + tm->interface_params[if_id].cas_l = + cas_latency_table[speed_bin_index].cl_val[freq]; + } + + if (tm->interface_params[if_id].cas_wl == 0) { + tm->interface_params[if_id].cas_wl = + cas_write_latency_table[speed_bin_index].cl_val[freq]; + } + } + + return MV_OK; +} + +/* + * RANK Control Flow + */ +static int ddr3_tip_rank_control(u32 dev_num, u32 if_id) +{ + u32 data_value = 0, bus_cnt; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (bus_cnt = 1; bus_cnt < GET_TOPOLOGY_NUM_OF_BUSES(); bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + if ((tm->interface_params[if_id]. + as_bus_params[0].cs_bitmask != + tm->interface_params[if_id]. + as_bus_params[bus_cnt].cs_bitmask) || + (tm->interface_params[if_id]. + as_bus_params[0].mirror_enable_bitmask != + tm->interface_params[if_id]. + as_bus_params[bus_cnt].mirror_enable_bitmask)) + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("WARNING:Wrong configuration for pup #%d CS mask and CS mirroring for all pups should be the same\n", + bus_cnt)); + } + + data_value |= tm->interface_params[if_id]. + as_bus_params[0].cs_bitmask; + data_value |= tm->interface_params[if_id]. + as_bus_params[0].mirror_enable_bitmask << 4; + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, RANK_CTRL_REG, + data_value, 0xff)); + + return MV_OK; +} + +/* + * PAD Inverse Flow + */ +static int ddr3_tip_pad_inv(u32 dev_num, u32 if_id) +{ + u32 bus_cnt, data_value, ck_swap_pup_ctrl; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (bus_cnt = 0; bus_cnt < GET_TOPOLOGY_NUM_OF_BUSES(); bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + if (tm->interface_params[if_id]. + as_bus_params[bus_cnt].is_dqs_swap == 1) { + /* dqs swap */ + ddr3_tip_bus_read_modify_write(dev_num, ACCESS_TYPE_UNICAST, + if_id, bus_cnt, + DDR_PHY_DATA, + PHY_CONTROL_PHY_REG, 0xc0, + 0xc0); + } + + if (tm->interface_params[if_id]. + as_bus_params[bus_cnt].is_ck_swap == 1) { + if (bus_cnt <= 1) + data_value = 0x5 << 2; + else + data_value = 0xa << 2; + + /* mask equals data */ + /* ck swap pup is only control pup #0 ! 
*/ + ck_swap_pup_ctrl = 0; + ddr3_tip_bus_read_modify_write(dev_num, ACCESS_TYPE_UNICAST, + if_id, ck_swap_pup_ctrl, + DDR_PHY_CONTROL, + PHY_CONTROL_PHY_REG, + data_value, data_value); + } + } + + return MV_OK; +} + +/* + * Run Training Flow + */ +int hws_ddr3_tip_run_alg(u32 dev_num, enum hws_algo_type algo_type) +{ + int ret = MV_OK, ret_tune = MV_OK; + +#ifdef ODT_TEST_SUPPORT + if (finger_test == 1) + return odt_test(dev_num, algo_type); +#endif + + if (algo_type == ALGO_TYPE_DYNAMIC) { + ret = ddr3_tip_ddr3_auto_tune(dev_num); + } else { +#ifdef STATIC_ALGO_SUPPORT + { + enum hws_ddr_freq freq; + freq = init_freq; + + /* add to mask */ + if (is_adll_calib_before_init != 0) { + printf("with adll calib before init\n"); + adll_calibration(dev_num, ACCESS_TYPE_MULTICAST, + 0, freq); + } + /* + * Frequency per interface is not relevant, + * only interface 0 + */ + ret = ddr3_tip_run_static_alg(dev_num, + freq); + } +#endif + } + + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Run_alg: tuning failed %d\n", ret_tune)); + } + + return ret; +} + +#ifdef ODT_TEST_SUPPORT +/* + * ODT Test + */ +static int odt_test(u32 dev_num, enum hws_algo_type algo_type) +{ + int ret = MV_OK, ret_tune = MV_OK; + int pfinger_val = 0, nfinger_val; + + for (pfinger_val = p_finger_start; pfinger_val <= p_finger_end; + pfinger_val += p_finger_step) { + for (nfinger_val = n_finger_start; nfinger_val <= n_finger_end; + nfinger_val += n_finger_step) { + if (finger_test != 0) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("pfinger_val %d nfinger_val %d\n", + pfinger_val, nfinger_val)); + p_finger = pfinger_val; + n_finger = nfinger_val; + } + + if (algo_type == ALGO_TYPE_DYNAMIC) { + ret = ddr3_tip_ddr3_auto_tune(dev_num); + } else { + /* + * Frequency per interface is not relevant, + * only interface 0 + */ + ret = ddr3_tip_run_static_alg(dev_num, + init_freq); + } + } + } + + if (ret_tune != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Run_alg: tuning failed %d\n", ret_tune)); + ret = (ret == MV_OK) ? ret_tune : ret; + } + + return ret; +} +#endif + +/* + * Select Controller + */ +int hws_ddr3_tip_select_ddr_controller(u32 dev_num, int enable) +{ + if (config_func_info[dev_num].tip_dunit_mux_select_func != NULL) { + return config_func_info[dev_num]. + tip_dunit_mux_select_func((u8)dev_num, enable); + } + + return MV_FAIL; +} + +/* + * Dunit Register Write + */ +int ddr3_tip_if_write(u32 dev_num, enum hws_access_type interface_access, + u32 if_id, u32 reg_addr, u32 data_value, u32 mask) +{ + if (config_func_info[dev_num].tip_dunit_write_func != NULL) { + return config_func_info[dev_num]. + tip_dunit_write_func((u8)dev_num, interface_access, + if_id, reg_addr, + data_value, mask); + } + + return MV_FAIL; +} + +/* + * Dunit Register Read + */ +int ddr3_tip_if_read(u32 dev_num, enum hws_access_type interface_access, + u32 if_id, u32 reg_addr, u32 *data, u32 mask) +{ + if (config_func_info[dev_num].tip_dunit_read_func != NULL) { + return config_func_info[dev_num]. 
+ tip_dunit_read_func((u8)dev_num, interface_access, + if_id, reg_addr, + data, mask); + } + + return MV_FAIL; +} + +/* + * Dunit Register Polling + */ +int ddr3_tip_if_polling(u32 dev_num, enum hws_access_type access_type, + u32 if_id, u32 exp_value, u32 mask, u32 offset, + u32 poll_tries) +{ + u32 poll_cnt = 0, interface_num = 0, start_if, end_if; + u32 read_data[MAX_INTERFACE_NUM]; + int ret; + int is_fail = 0, is_if_fail; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (access_type == ACCESS_TYPE_MULTICAST) { + start_if = 0; + end_if = MAX_INTERFACE_NUM - 1; + } else { + start_if = if_id; + end_if = if_id; + } + + for (interface_num = start_if; interface_num <= end_if; interface_num++) { + /* polling bit 3 for n times */ + VALIDATE_ACTIVE(tm->if_act_mask, interface_num); + + is_if_fail = 0; + for (poll_cnt = 0; poll_cnt < poll_tries; poll_cnt++) { + ret = + ddr3_tip_if_read(dev_num, ACCESS_TYPE_UNICAST, + interface_num, offset, read_data, + mask); + if (ret != MV_OK) + return ret; + + if (read_data[interface_num] == exp_value) + break; + } + + if (poll_cnt >= poll_tries) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("max poll IF #%d\n", interface_num)); + is_fail = 1; + is_if_fail = 1; + } + + training_result[training_stage][interface_num] = + (is_if_fail == 1) ? TEST_FAILED : TEST_SUCCESS; + } + + return (is_fail == 0) ? MV_OK : MV_FAIL; +} + +/* + * Bus read access + */ +int ddr3_tip_bus_read(u32 dev_num, u32 if_id, + enum hws_access_type phy_access, u32 phy_id, + enum hws_ddr_phy phy_type, u32 reg_addr, u32 *data) +{ + u32 bus_index = 0; + u32 data_read[MAX_INTERFACE_NUM]; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (phy_access == ACCESS_TYPE_MULTICAST) { + for (bus_index = 0; bus_index < GET_TOPOLOGY_NUM_OF_BUSES(); + bus_index++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_index); + CHECK_STATUS(ddr3_tip_bus_access + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + bus_index, phy_type, reg_addr, 0, + OPERATION_READ)); + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + PHY_REG_FILE_ACCESS, data_read, + MASK_ALL_BITS)); + data[bus_index] = (data_read[if_id] & 0xffff); + } + } else { + CHECK_STATUS(ddr3_tip_bus_access + (dev_num, ACCESS_TYPE_UNICAST, if_id, + phy_access, phy_id, phy_type, reg_addr, 0, + OPERATION_READ)); + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + PHY_REG_FILE_ACCESS, data_read, MASK_ALL_BITS)); + + /* + * only 16 lsb bit are valid in Phy (each register is different, + * some can actually be less than 16 bits) + */ + *data = (data_read[if_id] & 0xffff); + } + + return MV_OK; +} + +/* + * Bus write access + */ +int ddr3_tip_bus_write(u32 dev_num, enum hws_access_type interface_access, + u32 if_id, enum hws_access_type phy_access, + u32 phy_id, enum hws_ddr_phy phy_type, u32 reg_addr, + u32 data_value) +{ + CHECK_STATUS(ddr3_tip_bus_access + (dev_num, interface_access, if_id, phy_access, + phy_id, phy_type, reg_addr, data_value, OPERATION_WRITE)); + + return MV_OK; +} + +/* + * Bus access routine (relevant for both read & write) + */ +static int ddr3_tip_bus_access(u32 dev_num, enum hws_access_type interface_access, + u32 if_id, enum hws_access_type phy_access, + u32 phy_id, enum hws_ddr_phy phy_type, u32 reg_addr, + u32 data_value, enum hws_operation oper_type) +{ + u32 addr_low = 0x3f & reg_addr; + u32 addr_hi = ((0xc0 & reg_addr) >> 6); + u32 data_p1 = + (oper_type << 30) + (addr_hi << 28) + (phy_access << 27) + + (phy_type << 26) + (phy_id << 22) + (addr_low << 16) + 
+ (data_value & 0xffff); + u32 data_p2 = data_p1 + (1 << 31); + u32 start_if, end_if; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, interface_access, if_id, PHY_REG_FILE_ACCESS, + data_p1, MASK_ALL_BITS)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, interface_access, if_id, PHY_REG_FILE_ACCESS, + data_p2, MASK_ALL_BITS)); + + if (interface_access == ACCESS_TYPE_UNICAST) { + start_if = if_id; + end_if = if_id; + } else { + start_if = 0; + end_if = MAX_INTERFACE_NUM - 1; + } + + /* polling for read/write execution done */ + for (if_id = start_if; if_id <= end_if; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(is_bus_access_done + (dev_num, if_id, PHY_REG_FILE_ACCESS, 31)); + } + + return MV_OK; +} + +/* + * Check bus access done + */ +static int is_bus_access_done(u32 dev_num, u32 if_id, u32 dunit_reg_adrr, + u32 bit) +{ + u32 rd_data = 1; + u32 cnt = 0; + u32 data_read[MAX_INTERFACE_NUM]; + + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, dunit_reg_adrr, + data_read, MASK_ALL_BITS)); + rd_data = data_read[if_id]; + rd_data &= (1 << bit); + + while (rd_data != 0) { + if (cnt++ >= MAX_POLLING_ITERATIONS) + break; + + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + dunit_reg_adrr, data_read, MASK_ALL_BITS)); + rd_data = data_read[if_id]; + rd_data &= (1 << bit); + } + + if (cnt < MAX_POLLING_ITERATIONS) + return MV_OK; + else + return MV_FAIL; +} + +/* + * Phy read-modify-write + */ +int ddr3_tip_bus_read_modify_write(u32 dev_num, enum hws_access_type access_type, + u32 interface_id, u32 phy_id, + enum hws_ddr_phy phy_type, u32 reg_addr, + u32 data_value, u32 reg_mask) +{ + u32 data_val = 0, if_id, start_if, end_if; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (access_type == ACCESS_TYPE_MULTICAST) { + start_if = 0; + end_if = MAX_INTERFACE_NUM - 1; + } else { + start_if = interface_id; + end_if = interface_id; + } + + for (if_id = start_if; if_id <= end_if; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, ACCESS_TYPE_UNICAST, phy_id, + phy_type, reg_addr, &data_val)); + data_value = (data_val & (~reg_mask)) | (data_value & reg_mask); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, phy_id, phy_type, reg_addr, + data_value)); + } + + return MV_OK; +} + +/* + * ADLL Calibration + */ +int adll_calibration(u32 dev_num, enum hws_access_type access_type, + u32 if_id, enum hws_ddr_freq frequency) +{ + struct hws_tip_freq_config_info freq_config_info; + u32 bus_cnt = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* Reset Diver_b assert -> de-assert */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, SDRAM_CONFIGURATION_REG, + 0, 0x10000000)); + mdelay(10); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, SDRAM_CONFIGURATION_REG, + 0x10000000, 0x10000000)); + + if (config_func_info[dev_num].tip_get_freq_config_info_func != NULL) { + CHECK_STATUS(config_func_info[dev_num]. 
+ tip_get_freq_config_info_func((u8)dev_num, frequency, + &freq_config_info)); + } else { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("tip_get_freq_config_info_func is NULL")); + return MV_NOT_INITIALIZED; + } + + for (bus_cnt = 0; bus_cnt < GET_TOPOLOGY_NUM_OF_BUSES(); bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, access_type, if_id, bus_cnt, + DDR_PHY_DATA, BW_PHY_REG, + freq_config_info.bw_per_freq << 8, 0x700)); + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, access_type, if_id, bus_cnt, + DDR_PHY_DATA, RATE_PHY_REG, + freq_config_info.rate_per_freq, 0x7)); + } + + /* DUnit to Phy drive post edge, ADLL reset assert de-assert */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, DRAM_PHY_CONFIGURATION, + 0, (0x80000000 | 0x40000000))); + mdelay(100 / (freq_val[frequency] / freq_val[DDR_FREQ_LOW_FREQ])); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, DRAM_PHY_CONFIGURATION, + (0x80000000 | 0x40000000), (0x80000000 | 0x40000000))); + + /* polling for ADLL Done */ + if (ddr3_tip_if_polling(dev_num, access_type, if_id, + 0x3ff03ff, 0x3ff03ff, PHY_LOCK_STATUS_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Freq_set: DDR3 poll failed(1)")); + } + + /* pup data_pup reset assert-> deassert */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, SDRAM_CONFIGURATION_REG, + 0, 0x60000000)); + mdelay(10); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, SDRAM_CONFIGURATION_REG, + 0x60000000, 0x60000000)); + + return MV_OK; +} + +int ddr3_tip_freq_set(u32 dev_num, enum hws_access_type access_type, + u32 if_id, enum hws_ddr_freq frequency) +{ + u32 cl_value = 0, cwl_value = 0, mem_mask = 0, val = 0, + bus_cnt = 0, t_hclk = 0, t_wr = 0, + refresh_interval_cnt = 0, cnt_id; + u32 t_refi = 0, end_if, start_if; + u32 bus_index = 0; + int is_dll_off = 0; + enum hws_speed_bin speed_bin_index = 0; + struct hws_tip_freq_config_info freq_config_info; + enum hws_result *flow_result = training_result[training_stage]; + u32 adll_tap = 0; + u32 cs_mask[MAX_INTERFACE_NUM]; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("dev %d access %d IF %d freq %d\n", dev_num, + access_type, if_id, frequency)); + + if (frequency == DDR_FREQ_LOW_FREQ) + is_dll_off = 1; + if (access_type == ACCESS_TYPE_MULTICAST) { + start_if = 0; + end_if = MAX_INTERFACE_NUM - 1; + } else { + start_if = if_id; + end_if = if_id; + } + + /* calculate interface cs mask - Oferb 4/11 */ + /* speed bin can be different for each interface */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + /* cs enable is active low */ + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + cs_mask[if_id] = CS_BIT_MASK; + training_result[training_stage][if_id] = TEST_SUCCESS; + ddr3_tip_calc_cs_mask(dev_num, if_id, effective_cs, + &cs_mask[if_id]); + } + + /* speed bin can be different for each interface */ + /* + * moti b - need to remove the loop for multicas access functions + * and loop the unicast access functions + */ + for (if_id = start_if; if_id <= end_if; if_id++) { + if (IS_ACTIVE(tm->if_act_mask, if_id) == 0) + continue; + + flow_result[if_id] = TEST_SUCCESS; + speed_bin_index = + tm->interface_params[if_id].speed_bin_index; + if (tm->interface_params[if_id].memory_freq == + frequency) { + cl_value = + tm->interface_params[if_id].cas_l; + cwl_value = + tm->interface_params[if_id].cas_wl; + } else { + cl_value = + 
cas_latency_table[speed_bin_index].cl_val[frequency]; + cwl_value = + cas_write_latency_table[speed_bin_index]. + cl_val[frequency]; + } + + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("Freq_set dev 0x%x access 0x%x if 0x%x freq 0x%x speed %d:\n\t", + dev_num, access_type, if_id, + frequency, speed_bin_index)); + + for (cnt_id = 0; cnt_id < DDR_FREQ_LIMIT; cnt_id++) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, + ("%d ", + cas_latency_table[speed_bin_index]. + cl_val[cnt_id])); + } + + DEBUG_TRAINING_IP(DEBUG_LEVEL_TRACE, ("\n")); + mem_mask = 0; + for (bus_index = 0; bus_index < GET_TOPOLOGY_NUM_OF_BUSES(); + bus_index++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_index); + mem_mask |= + tm->interface_params[if_id]. + as_bus_params[bus_index].mirror_enable_bitmask; + } + + if (mem_mask != 0) { + /* motib redundant in KW28 */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, + if_id, + CS_ENABLE_REG, 0, 0x8)); + } + + /* dll state after exiting SR */ + if (is_dll_off == 1) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DFS_REG, 0x1, 0x1)); + } else { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DFS_REG, 0, 0x1)); + } + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DUNIT_MMASK_REG, 0, 0x1)); + /* DFS - block transactions */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DFS_REG, 0x2, 0x2)); + + /* disable ODT in case of dll off */ + if (is_dll_off == 1) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + 0x1874, 0, 0x244)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + 0x1884, 0, 0x244)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + 0x1894, 0, 0x244)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + 0x18a4, 0, 0x244)); + } + + /* DFS - Enter Self-Refresh */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, DFS_REG, 0x4, + 0x4)); + /* polling on self refresh entry */ + if (ddr3_tip_if_polling(dev_num, ACCESS_TYPE_UNICAST, + if_id, 0x8, 0x8, DFS_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Freq_set: DDR3 poll failed on SR entry\n")); + } + + /* PLL configuration */ + if (config_func_info[dev_num].tip_set_freq_divider_func != NULL) { + config_func_info[dev_num]. + tip_set_freq_divider_func(dev_num, if_id, + frequency); + } + + /* PLL configuration End */ + + /* adjust t_refi to new frequency */ + t_refi = (tm->interface_params[if_id].interface_temp == + HWS_TEMP_HIGH) ? 
TREFI_LOW : TREFI_HIGH; + t_refi *= 1000; /*psec */ + + /* HCLK in[ps] */ + t_hclk = MEGA / (freq_val[frequency] / 2); + refresh_interval_cnt = t_refi / t_hclk; /* no units */ + val = 0x4000 | refresh_interval_cnt; + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + SDRAM_CONFIGURATION_REG, val, 0x7fff)); + + /* DFS - CL/CWL/WR parameters after exiting SR */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, DFS_REG, + (cl_mask_table[cl_value] << 8), 0xf00)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, DFS_REG, + (cwl_mask_table[cwl_value] << 12), 0x7000)); + t_wr = speed_bin_table(speed_bin_index, SPEED_BIN_TWR); + t_wr = (t_wr / 1000); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, DFS_REG, + (twr_mask_table[t_wr + 1] << 16), 0x70000)); + + /* Restore original RTT values if returning from DLL OFF mode */ + if (is_dll_off == 1) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, 0x1874, + g_dic | g_rtt_nom, 0x266)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, 0x1884, + g_dic | g_rtt_nom, 0x266)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, 0x1894, + g_dic | g_rtt_nom, 0x266)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, 0x18a4, + g_dic | g_rtt_nom, 0x266)); + } + + /* Reset Diver_b assert -> de-assert */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + SDRAM_CONFIGURATION_REG, 0, 0x10000000)); + mdelay(10); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + SDRAM_CONFIGURATION_REG, 0x10000000, 0x10000000)); + + /* Adll configuration function of process and Frequency */ + if (config_func_info[dev_num].tip_get_freq_config_info_func != NULL) { + CHECK_STATUS(config_func_info[dev_num]. + tip_get_freq_config_info_func(dev_num, frequency, + &freq_config_info)); + } + /* TBD check milo5 using device ID ? */ + for (bus_cnt = 0; bus_cnt < GET_TOPOLOGY_NUM_OF_BUSES(); + bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, bus_cnt, DDR_PHY_DATA, + 0x92, + freq_config_info. 
+ bw_per_freq << 8 + /*freq_mask[dev_num][frequency] << 8 */ + , 0x700)); + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + bus_cnt, DDR_PHY_DATA, 0x94, + freq_config_info.rate_per_freq, 0x7)); + } + + /* DUnit to Phy drive post edge, ADLL reset assert de-assert */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DRAM_PHY_CONFIGURATION, 0, + (0x80000000 | 0x40000000))); + mdelay(100 / (freq_val[frequency] / freq_val[DDR_FREQ_LOW_FREQ])); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + DRAM_PHY_CONFIGURATION, (0x80000000 | 0x40000000), + (0x80000000 | 0x40000000))); + + /* polling for ADLL Done */ + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x3ff03ff, + 0x3ff03ff, PHY_LOCK_STATUS_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Freq_set: DDR3 poll failed(1)\n")); + } + + /* pup data_pup reset assert-> deassert */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + SDRAM_CONFIGURATION_REG, 0, 0x60000000)); + mdelay(10); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + SDRAM_CONFIGURATION_REG, 0x60000000, 0x60000000)); + + /* Set proper timing params before existing Self-Refresh */ + ddr3_tip_set_timing(dev_num, access_type, if_id, frequency); + if (delay_enable != 0) { + adll_tap = MEGA / (freq_val[frequency] * 64); + ddr3_tip_cmd_addr_init_delay(dev_num, adll_tap); + } + + /* Exit SR */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, DFS_REG, 0, + 0x4)); + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0, 0x8, DFS_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Freq_set: DDR3 poll failed(2)")); + } + + /* Refresh Command */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + SDRAM_OPERATION_REG, 0x2, 0xf1f)); + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0, 0x1f, + SDRAM_OPERATION_REG, MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Freq_set: DDR3 poll failed(3)")); + } + + /* Release DFS Block */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, DFS_REG, 0, + 0x2)); + /* Controller to MBUS Retry - normal */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, DUNIT_MMASK_REG, + 0x1, 0x1)); + + /* MRO: Burst Length 8, CL , Auto_precharge 0x16cc */ + val = + ((cl_mask_table[cl_value] & 0x1) << 2) | + ((cl_mask_table[cl_value] & 0xe) << 3); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, MR0_REG, + val, (0x7 << 4) | (1 << 2))); + /* MR2: CWL = 10 , Auto Self-Refresh - disable */ + val = (cwl_mask_table[cwl_value] << 3); + /* + * nklein 24.10.13 - should not be here - leave value as set in + * the init configuration val |= (1 << 9); + * val |= ((tm->interface_params[if_id]. + * interface_temp == HWS_TEMP_HIGH) ? 
(1 << 7) : 0); + */ + /* nklein 24.10.13 - see above comment */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, + if_id, MR2_REG, + val, (0x7 << 3))); + + /* ODT TIMING */ + val = ((cl_value - cwl_value + 1) << 4) | + ((cl_value - cwl_value + 6) << 8) | + ((cl_value - 1) << 12) | ((cl_value + 6) << 16); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, + if_id, ODT_TIMING_LOW, + val, 0xffff0)); + val = 0x71 | ((cwl_value - 1) << 8) | ((cwl_value + 5) << 12); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, + if_id, ODT_TIMING_HI_REG, + val, 0xffff)); + + /* ODT Active */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, + if_id, + DUNIT_ODT_CONTROL_REG, + 0xf, 0xf)); + + /* re-write CL */ + val = ((cl_mask_table[cl_value] & 0x1) << 2) | + ((cl_mask_table[cl_value] & 0xe) << 3); + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + 0, MR0_REG, val, + (0x7 << 4) | (1 << 2))); + + /* re-write CWL */ + val = (cwl_mask_table[cwl_value] << 3); + CHECK_STATUS(ddr3_tip_write_mrs_cmd(dev_num, cs_mask, MRS2_CMD, + val, (0x7 << 3))); + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + 0, MR2_REG, val, (0x7 << 3))); + + if (mem_mask != 0) { + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, + if_id, + CS_ENABLE_REG, + 1 << 3, 0x8)); + } + } + + return MV_OK; +} + +/* + * Set ODT values + */ +static int ddr3_tip_write_odt(u32 dev_num, enum hws_access_type access_type, + u32 if_id, u32 cl_value, u32 cwl_value) +{ + /* ODT TIMING */ + u32 val = (cl_value - cwl_value + 6); + + val = ((cl_value - cwl_value + 1) << 4) | ((val & 0xf) << 8) | + (((cl_value - 1) & 0xf) << 12) | + (((cl_value + 6) & 0xf) << 16) | (((val & 0x10) >> 4) << 21); + val |= (((cl_value - 1) >> 4) << 22) | (((cl_value + 6) >> 4) << 23); + + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + ODT_TIMING_LOW, val, 0xffff0)); + val = 0x71 | ((cwl_value - 1) << 8) | ((cwl_value + 5) << 12); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + ODT_TIMING_HI_REG, val, 0xffff)); + if (odt_additional == 1) { + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, + if_id, + SDRAM_ODT_CONTROL_HIGH_REG, + 0xf, 0xf)); + } + + /* ODT Active */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + DUNIT_ODT_CONTROL_REG, 0xf, 0xf)); + + return MV_OK; +} + +/* + * Set Timing values for training + */ +static int ddr3_tip_set_timing(u32 dev_num, enum hws_access_type access_type, + u32 if_id, enum hws_ddr_freq frequency) +{ + u32 t_ckclk = 0, t_ras = 0; + u32 t_rcd = 0, t_rp = 0, t_wr = 0, t_wtr = 0, t_rrd = 0, t_rtp = 0, + t_rfc = 0, t_mod = 0; + u32 val = 0, page_size = 0; + enum hws_speed_bin speed_bin_index; + enum hws_mem_size memory_size = MEM_2G; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + speed_bin_index = tm->interface_params[if_id].speed_bin_index; + memory_size = tm->interface_params[if_id].memory_size; + page_size = + (tm->interface_params[if_id].bus_width == + BUS_WIDTH_8) ? page_param[memory_size]. + page_size_8bit : page_param[memory_size].page_size_16bit; + t_ckclk = (MEGA / freq_val[frequency]); + t_rrd = (page_size == 1) ? 
speed_bin_table(speed_bin_index, + SPEED_BIN_TRRD1K) : + speed_bin_table(speed_bin_index, SPEED_BIN_TRRD2K); + t_rrd = GET_MAX_VALUE(t_ckclk * 4, t_rrd); + t_rtp = GET_MAX_VALUE(t_ckclk * 4, speed_bin_table(speed_bin_index, + SPEED_BIN_TRTP)); + t_wtr = GET_MAX_VALUE(t_ckclk * 4, speed_bin_table(speed_bin_index, + SPEED_BIN_TWTR)); + t_ras = TIME_2_CLOCK_CYCLES(speed_bin_table(speed_bin_index, + SPEED_BIN_TRAS), + t_ckclk); + t_rcd = TIME_2_CLOCK_CYCLES(speed_bin_table(speed_bin_index, + SPEED_BIN_TRCD), + t_ckclk); + t_rp = TIME_2_CLOCK_CYCLES(speed_bin_table(speed_bin_index, + SPEED_BIN_TRP), + t_ckclk); + t_wr = TIME_2_CLOCK_CYCLES(speed_bin_table(speed_bin_index, + SPEED_BIN_TWR), + t_ckclk); + t_wtr = TIME_2_CLOCK_CYCLES(t_wtr, t_ckclk); + t_rrd = TIME_2_CLOCK_CYCLES(t_rrd, t_ckclk); + t_rtp = TIME_2_CLOCK_CYCLES(t_rtp, t_ckclk); + t_rfc = TIME_2_CLOCK_CYCLES(rfc_table[memory_size] * 1000, t_ckclk); + t_mod = GET_MAX_VALUE(t_ckclk * 24, 15000); + t_mod = TIME_2_CLOCK_CYCLES(t_mod, t_ckclk); + + /* SDRAM Timing Low */ + val = (t_ras & 0xf) | (t_rcd << 4) | (t_rp << 8) | (t_wr << 12) | + (t_wtr << 16) | (((t_ras & 0x30) >> 4) << 20) | (t_rrd << 24) | + (t_rtp << 28); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_LOW_REG, val, 0xff3fffff)); + + /* SDRAM Timing High */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, + t_rfc & 0x7f, 0x7f)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, + 0x180, 0x180)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, + 0x600, 0x600)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, + 0x1800, 0xf800)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, + ((t_rfc & 0x380) >> 7) << 16, 0x70000)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, 0, + 0x380000)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, + (t_mod & 0xf) << 25, 0x1e00000)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, + (t_mod >> 4) << 30, 0xc0000000)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, + 0x16000000, 0x1e000000)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + SDRAM_TIMING_HIGH_REG, + 0x40000000, 0xc0000000)); + + return MV_OK; +} + +/* + * Mode Read + */ +int hws_ddr3_tip_mode_read(u32 dev_num, struct mode_info *mode_info) +{ + u32 ret; + + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + MR0_REG, mode_info->reg_mr0, MASK_ALL_BITS); + if (ret != MV_OK) + return ret; + + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + MR1_REG, mode_info->reg_mr1, MASK_ALL_BITS); + if (ret != MV_OK) + return ret; + + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + MR2_REG, mode_info->reg_mr2, MASK_ALL_BITS); + if (ret != MV_OK) + return ret; + + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + MR3_REG, mode_info->reg_mr2, MASK_ALL_BITS); + if (ret != MV_OK) + return ret; + + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + READ_DATA_SAMPLE_DELAY, mode_info->read_data_sample, + MASK_ALL_BITS); + if (ret != MV_OK) + return ret; + + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + READ_DATA_READY_DELAY, mode_info->read_data_ready, + MASK_ALL_BITS); + if (ret != 
MV_OK) + return ret; + + return MV_OK; +} + +/* + * Get first active IF + */ +int ddr3_tip_get_first_active_if(u8 dev_num, u32 interface_mask, + u32 *interface_id) +{ + u32 if_id; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (interface_mask & (1 << if_id)) { + *interface_id = if_id; + break; + } + } + + return MV_OK; +} + +/* + * Write CS Result + */ +int ddr3_tip_write_cs_result(u32 dev_num, u32 offset) +{ + u32 if_id, bus_num, cs_bitmask, data_val, cs_num; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_num = 0; bus_num < tm->num_of_bus_per_interface; + bus_num++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_num); + cs_bitmask = + tm->interface_params[if_id]. + as_bus_params[bus_num].cs_bitmask; + if (cs_bitmask != effective_cs) { + cs_num = GET_CS_FROM_MASK(cs_bitmask); + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_num, + DDR_PHY_DATA, + offset + + CS_REG_VALUE(effective_cs), + &data_val); + ddr3_tip_bus_write(dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, + bus_num, DDR_PHY_DATA, + offset + + CS_REG_VALUE(cs_num), + data_val); + } + } + } + + return MV_OK; +} + +/* + * Write MRS + */ +int ddr3_tip_write_mrs_cmd(u32 dev_num, u32 *cs_mask_arr, u32 cmd, + u32 data, u32 mask) +{ + u32 if_id, reg; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + reg = (cmd == MRS1_CMD) ? MR1_REG : MR2_REG; + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, reg, data, mask)); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + SDRAM_OPERATION_REG, + (cs_mask_arr[if_id] << 8) | cmd, 0xf1f)); + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (ddr3_tip_if_polling(dev_num, ACCESS_TYPE_UNICAST, if_id, 0, + 0x1f, SDRAM_OPERATION_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("write_mrs_cmd: Poll cmd fail")); + } + } + + return MV_OK; +} + +/* + * Reset XSB Read FIFO + */ +int ddr3_tip_reset_fifo_ptr(u32 dev_num) +{ + u32 if_id = 0; + + /* Configure PHY reset value to 0 in order to "clean" the FIFO */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + if_id, 0x15c8, 0, 0xff000000)); + /* + * Move PHY to RL mode (only in RL mode the PHY overrides FIFO values + * during FIFO reset) + */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + if_id, TRAINING_SW_2_REG, + 0x1, 0x9)); + /* In order that above configuration will influence the PHY */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + if_id, 0x15b0, + 0x80000000, 0x80000000)); + /* Reset read fifo assertion */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + if_id, 0x1400, 0, 0x40000000)); + /* Reset read fifo deassertion */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + if_id, 0x1400, + 0x40000000, 0x40000000)); + /* Move PHY back to functional mode */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + if_id, TRAINING_SW_2_REG, + 0x8, 0x9)); + /* Stop training machine */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + if_id, 0x15b4, 0x10000, 0x10000)); + + return MV_OK; +} + +/* + * Reset Phy 
registers + */ +int ddr3_tip_ddr3_reset_phy_regs(u32 dev_num) +{ + u32 if_id, phy_id, cs; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (phy_id = 0; phy_id < tm->num_of_bus_per_interface; + phy_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, phy_id); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + phy_id, DDR_PHY_DATA, + WL_PHY_REG + + CS_REG_VALUE(effective_cs), + phy_reg0_val)); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, phy_id, DDR_PHY_DATA, + RL_PHY_REG + CS_REG_VALUE(effective_cs), + phy_reg2_val)); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, phy_id, DDR_PHY_DATA, + READ_CENTRALIZATION_PHY_REG + + CS_REG_VALUE(effective_cs), phy_reg3_val)); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, phy_id, DDR_PHY_DATA, + WRITE_CENTRALIZATION_PHY_REG + + CS_REG_VALUE(effective_cs), phy_reg3_val)); + } + } + + /* Set Receiver Calibration value */ + for (cs = 0; cs < MAX_CS_NUM; cs++) { + /* PHY register 0xdb bits[5:0] - configure to 63 */ + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + DDR_PHY_DATA, CSN_IOB_VREF_REG(cs), 63)); + } + + return MV_OK; +} + +/* + * Restore Dunit registers + */ +int ddr3_tip_restore_dunit_regs(u32 dev_num) +{ + u32 index_cnt; + + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, CALIB_MACHINE_CTRL_REG, + 0x1, 0x1)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, CALIB_MACHINE_CTRL_REG, + calibration_update_control << 3, + 0x3 << 3)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + ODPG_WRITE_READ_MODE_ENABLE_REG, + 0xffff, MASK_ALL_BITS)); + + for (index_cnt = 0; index_cnt < ARRAY_SIZE(odpg_default_value); + index_cnt++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + odpg_default_value[index_cnt].reg_addr, + odpg_default_value[index_cnt].reg_data, + odpg_default_value[index_cnt].reg_mask)); + } + + return MV_OK; +} + +/* + * Auto tune main flow + */ +static int ddr3_tip_ddr3_training_main_flow(u32 dev_num) +{ + enum hws_ddr_freq freq = init_freq; + struct init_cntr_param init_cntr_prm; + int ret = MV_OK; + u32 if_id; + u32 max_cs = hws_ddr3_tip_max_cs_get(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + +#ifndef EXCLUDE_SWITCH_DEBUG + if (debug_training == DEBUG_LEVEL_TRACE) { + CHECK_STATUS(print_device_info((u8)dev_num)); + } +#endif + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + CHECK_STATUS(ddr3_tip_ddr3_reset_phy_regs(dev_num)); + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs = 0; + + freq = init_freq; + if (is_pll_before_init != 0) { + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + config_func_info[dev_num].tip_set_freq_divider_func( + (u8)dev_num, if_id, freq); + } + } + + if (is_adll_calib_before_init != 0) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("with adll calib before init\n")); + adll_calibration(dev_num, ACCESS_TYPE_MULTICAST, 0, freq); + } + + if (is_reg_dump != 0) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("Dump before init controller\n")); + ddr3_tip_reg_dump(dev_num); + } + + if 
(mask_tune_func & INIT_CONTROLLER_MASK_BIT) { + training_stage = INIT_CONTROLLER; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("INIT_CONTROLLER_MASK_BIT\n")); + init_cntr_prm.do_mrs_phy = 1; + init_cntr_prm.is_ctrl64_bit = 0; + init_cntr_prm.init_phy = 1; + init_cntr_prm.msys_init = 0; + ret = hws_ddr3_tip_init_controller(dev_num, &init_cntr_prm); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("hws_ddr3_tip_init_controller failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + +#ifdef STATIC_ALGO_SUPPORT + if (mask_tune_func & STATIC_LEVELING_MASK_BIT) { + training_stage = STATIC_LEVELING; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("STATIC_LEVELING_MASK_BIT\n")); + ret = ddr3_tip_run_static_alg(dev_num, freq); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_run_static_alg failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } +#endif + + if (mask_tune_func & SET_LOW_FREQ_MASK_BIT) { + training_stage = SET_LOW_FREQ; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("SET_LOW_FREQ_MASK_BIT %d\n", + freq_val[low_freq])); + ret = ddr3_tip_freq_set(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, low_freq); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_freq_set failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + if (mask_tune_func & LOAD_PATTERN_MASK_BIT) { + training_stage = LOAD_PATTERN; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("LOAD_PATTERN_MASK_BIT #%d\n", + effective_cs)); + ret = ddr3_tip_load_all_pattern_to_mem(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_load_all_pattern_to_mem failure CS #%d\n", + effective_cs)); + if (debug_mode == 0) + return MV_FAIL; + } + } + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs = 0; + + if (mask_tune_func & SET_MEDIUM_FREQ_MASK_BIT) { + training_stage = SET_MEDIUM_FREQ; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("SET_MEDIUM_FREQ_MASK_BIT %d\n", + freq_val[medium_freq])); + ret = + ddr3_tip_freq_set(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, medium_freq); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_freq_set failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + + if (mask_tune_func & WRITE_LEVELING_MASK_BIT) { + training_stage = WRITE_LEVELING; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("WRITE_LEVELING_MASK_BIT\n")); + if ((rl_mid_freq_wa == 0) || (freq_val[medium_freq] == 533)) { + ret = ddr3_tip_dynamic_write_leveling(dev_num); + } else { + /* Use old WL */ + ret = ddr3_tip_legacy_dynamic_write_leveling(dev_num); + } + + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_dynamic_write_leveling failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + if (mask_tune_func & LOAD_PATTERN_2_MASK_BIT) { + training_stage = LOAD_PATTERN_2; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("LOAD_PATTERN_2_MASK_BIT CS #%d\n", + effective_cs)); + ret = ddr3_tip_load_all_pattern_to_mem(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + 
DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_load_all_pattern_to_mem failure CS #%d\n", + effective_cs)); + if (debug_mode == 0) + return MV_FAIL; + } + } + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs = 0; + + if (mask_tune_func & READ_LEVELING_MASK_BIT) { + training_stage = READ_LEVELING; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("READ_LEVELING_MASK_BIT\n")); + if ((rl_mid_freq_wa == 0) || (freq_val[medium_freq] == 533)) { + ret = ddr3_tip_dynamic_read_leveling(dev_num, medium_freq); + } else { + /* Use old RL */ + ret = ddr3_tip_legacy_dynamic_read_leveling(dev_num); + } + + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_dynamic_read_leveling failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + + if (mask_tune_func & WRITE_LEVELING_SUPP_MASK_BIT) { + training_stage = WRITE_LEVELING_SUPP; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("WRITE_LEVELING_SUPP_MASK_BIT\n")); + ret = ddr3_tip_dynamic_write_leveling_supp(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_dynamic_write_leveling_supp failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + if (mask_tune_func & PBS_RX_MASK_BIT) { + training_stage = PBS_RX; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("PBS_RX_MASK_BIT CS #%d\n", + effective_cs)); + ret = ddr3_tip_pbs_rx(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_pbs_rx failure CS #%d\n", + effective_cs)); + if (debug_mode == 0) + return MV_FAIL; + } + } + } + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + if (mask_tune_func & PBS_TX_MASK_BIT) { + training_stage = PBS_TX; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("PBS_TX_MASK_BIT CS #%d\n", + effective_cs)); + ret = ddr3_tip_pbs_tx(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_pbs_tx failure CS #%d\n", + effective_cs)); + if (debug_mode == 0) + return MV_FAIL; + } + } + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs = 0; + + if (mask_tune_func & SET_TARGET_FREQ_MASK_BIT) { + training_stage = SET_TARGET_FREQ; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("SET_TARGET_FREQ_MASK_BIT %d\n", + freq_val[tm-> + interface_params[first_active_if]. + memory_freq])); + ret = ddr3_tip_freq_set(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + tm->interface_params[first_active_if]. 
+ memory_freq); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_freq_set failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + + if (mask_tune_func & WRITE_LEVELING_TF_MASK_BIT) { + training_stage = WRITE_LEVELING_TF; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("WRITE_LEVELING_TF_MASK_BIT\n")); + ret = ddr3_tip_dynamic_write_leveling(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_dynamic_write_leveling TF failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + + if (mask_tune_func & LOAD_PATTERN_HIGH_MASK_BIT) { + training_stage = LOAD_PATTERN_HIGH; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, ("LOAD_PATTERN_HIGH\n")); + ret = ddr3_tip_load_all_pattern_to_mem(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_load_all_pattern_to_mem failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + + if (mask_tune_func & READ_LEVELING_TF_MASK_BIT) { + training_stage = READ_LEVELING_TF; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("READ_LEVELING_TF_MASK_BIT\n")); + ret = ddr3_tip_dynamic_read_leveling(dev_num, tm-> + interface_params[first_active_if]. + memory_freq); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_dynamic_read_leveling TF failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + + if (mask_tune_func & DM_PBS_TX_MASK_BIT) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, ("DM_PBS_TX_MASK_BIT\n")); + } + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + if (mask_tune_func & VREF_CALIBRATION_MASK_BIT) { + training_stage = VREF_CALIBRATION; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, ("VREF\n")); + ret = ddr3_tip_vref(dev_num); + if (is_reg_dump != 0) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("VREF Dump\n")); + ddr3_tip_reg_dump(dev_num); + } + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_vref failure\n")); + if (debug_mode == 0) + return MV_FAIL; + } + } + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs = 0; + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + if (mask_tune_func & CENTRALIZATION_RX_MASK_BIT) { + training_stage = CENTRALIZATION_RX; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("CENTRALIZATION_RX_MASK_BIT CS #%d\n", + effective_cs)); + ret = ddr3_tip_centralization_rx(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_centralization_rx failure CS #%d\n", + effective_cs)); + if (debug_mode == 0) + return MV_FAIL; + } + } + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs = 0; + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + if (mask_tune_func & WRITE_LEVELING_SUPP_TF_MASK_BIT) { + training_stage = WRITE_LEVELING_SUPP_TF; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("WRITE_LEVELING_SUPP_TF_MASK_BIT CS #%d\n", + effective_cs)); + ret = ddr3_tip_dynamic_write_leveling_supp(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_dynamic_write_leveling_supp TF failure CS #%d\n", + effective_cs)); + if (debug_mode == 0) + return MV_FAIL; + } + } + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs 
= 0; + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + if (mask_tune_func & CENTRALIZATION_TX_MASK_BIT) { + training_stage = CENTRALIZATION_TX; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("CENTRALIZATION_TX_MASK_BIT CS #%d\n", + effective_cs)); + ret = ddr3_tip_centralization_tx(dev_num); + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + if (ret != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("ddr3_tip_centralization_tx failure CS #%d\n", + effective_cs)); + if (debug_mode == 0) + return MV_FAIL; + } + } + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs = 0; + + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, ("restore registers to default\n")); + /* restore register values */ + CHECK_STATUS(ddr3_tip_restore_dunit_regs(dev_num)); + + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + + return MV_OK; +} + +/* + * DDR3 Dynamic training flow + */ +static int ddr3_tip_ddr3_auto_tune(u32 dev_num) +{ + u32 if_id, stage, ret; + int is_if_fail = 0, is_auto_tune_fail = 0; + + training_stage = INIT_CONTROLLER; + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + for (stage = 0; stage < MAX_STAGE_LIMIT; stage++) + training_result[stage][if_id] = NO_TEST_DONE; + } + + ret = ddr3_tip_ddr3_training_main_flow(dev_num); + + /* activate XSB test */ + if (xsb_validate_type != 0) { + run_xsb_test(dev_num, xsb_validation_base_address, 1, 1, + 0x1024); + } + + if (is_reg_dump != 0) + ddr3_tip_reg_dump(dev_num); + + /* print log */ + CHECK_STATUS(ddr3_tip_print_log(dev_num, window_mem_addr)); + + if (ret != MV_OK) { + CHECK_STATUS(ddr3_tip_print_stability_log(dev_num)); + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + is_if_fail = 0; + for (stage = 0; stage < MAX_STAGE_LIMIT; stage++) { + if (training_result[stage][if_id] == TEST_FAILED) + is_if_fail = 1; + } + if (is_if_fail == 1) { + is_auto_tune_fail = 1; + DEBUG_TRAINING_IP(DEBUG_LEVEL_INFO, + ("Auto Tune failed for IF %d\n", + if_id)); + } + } + + if ((ret == MV_FAIL) || (is_auto_tune_fail == 1)) + return MV_FAIL; + else + return MV_OK; +} + +/* + * Enable init sequence + */ +int ddr3_tip_enable_init_sequence(u32 dev_num) +{ + int is_fail = 0; + u32 if_id = 0, mem_mask = 0, bus_index = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* Enable init sequence */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_MULTICAST, 0, + SDRAM_INIT_CONTROL_REG, 0x1, 0x1)); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0, 0x1, + SDRAM_INIT_CONTROL_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("polling failed IF %d\n", + if_id)); + is_fail = 1; + continue; + } + + mem_mask = 0; + for (bus_index = 0; bus_index < GET_TOPOLOGY_NUM_OF_BUSES(); + bus_index++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_index); + mem_mask |= + tm->interface_params[if_id]. + as_bus_params[bus_index].mirror_enable_bitmask; + } + + if (mem_mask != 0) { + /* Disable Multi CS */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, + if_id, CS_ENABLE_REG, 1 << 3, + 1 << 3)); + } + } + + return (is_fail == 0) ? 
MV_OK : MV_FAIL; +} + +int ddr3_tip_register_dq_table(u32 dev_num, u32 *table) +{ + dq_map_table = table; + + return MV_OK; +} + +/* + * Check if pup search is locked + */ +int ddr3_tip_is_pup_lock(u32 *pup_buf, enum hws_training_result read_mode) +{ + u32 bit_start = 0, bit_end = 0, bit_id; + + if (read_mode == RESULT_PER_BIT) { + bit_start = 0; + bit_end = BUS_WIDTH_IN_BITS - 1; + } else { + bit_start = 0; + bit_end = 0; + } + + for (bit_id = bit_start; bit_id <= bit_end; bit_id++) { + if (GET_LOCK_RESULT(pup_buf[bit_id]) == 0) + return 0; + } + + return 1; +} + +/* + * Get minimum buffer value + */ +u8 ddr3_tip_get_buf_min(u8 *buf_ptr) +{ + u8 min_val = 0xff; + u8 cnt = 0; + + for (cnt = 0; cnt < BUS_WIDTH_IN_BITS; cnt++) { + if (buf_ptr[cnt] < min_val) + min_val = buf_ptr[cnt]; + } + + return min_val; +} + +/* + * Get maximum buffer value + */ +u8 ddr3_tip_get_buf_max(u8 *buf_ptr) +{ + u8 max_val = 0; + u8 cnt = 0; + + for (cnt = 0; cnt < BUS_WIDTH_IN_BITS; cnt++) { + if (buf_ptr[cnt] > max_val) + max_val = buf_ptr[cnt]; + } + + return max_val; +} + +/* + * The following functions return memory parameters: + * bus and device width, device size + */ + +u32 hws_ddr3_get_bus_width(void) +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + return (DDR3_IS_16BIT_DRAM_MODE(tm->bus_act_mask) == + 1) ? 16 : 32; +} + +u32 hws_ddr3_get_device_width(u32 if_id) +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + return (tm->interface_params[if_id].bus_width == + BUS_WIDTH_8) ? 8 : 16; +} + +u32 hws_ddr3_get_device_size(u32 if_id) +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (tm->interface_params[if_id].memory_size >= + MEM_SIZE_LAST) { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Error: Wrong device size of Cs: %d", + tm->interface_params[if_id].memory_size)); + return 0; + } else { + return 1 << tm->interface_params[if_id].memory_size; + } +} + +int hws_ddr3_calc_mem_cs_size(u32 if_id, u32 cs, u32 *cs_size) +{ + u32 cs_mem_size, dev_size; + + dev_size = hws_ddr3_get_device_size(if_id); + if (dev_size != 0) { + cs_mem_size = ((hws_ddr3_get_bus_width() / + hws_ddr3_get_device_width(if_id)) * dev_size); + + /* the calculated result in Gbytex16 to avoid float using */ + + if (cs_mem_size == 2) { + *cs_size = _128M; + } else if (cs_mem_size == 4) { + *cs_size = _256M; + } else if (cs_mem_size == 8) { + *cs_size = _512M; + } else if (cs_mem_size == 16) { + *cs_size = _1G; + } else if (cs_mem_size == 32) { + *cs_size = _2G; + } else { + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Error: Wrong Memory size of Cs: %d", cs)); + return MV_FAIL; + } + return MV_OK; + } else { + return MV_FAIL; + } +} + +int hws_ddr3_cs_base_adr_calc(u32 if_id, u32 cs, u32 *cs_base_addr) +{ + u32 cs_mem_size = 0; +#ifdef DEVICE_MAX_DRAM_ADDRESS_SIZE + u32 physical_mem_size; + u32 max_mem_size = DEVICE_MAX_DRAM_ADDRESS_SIZE; +#endif + + if (hws_ddr3_calc_mem_cs_size(if_id, cs, &cs_mem_size) != MV_OK) + return MV_FAIL; + +#ifdef DEVICE_MAX_DRAM_ADDRESS_SIZE + struct hws_topology_map *tm = ddr3_get_topology_map(); + /* + * if number of address pins doesn't allow to use max mem size that + * is defined in topology mem size is defined by + * DEVICE_MAX_DRAM_ADDRESS_SIZE + */ + physical_mem_size = + mv_hwsmem_size[tm->interface_params[0].memory_size]; + + if (hws_ddr3_get_device_width(cs) == 16) { + /* + * 16bit mem device can be twice more - no need in less + * significant pin + */ + max_mem_size = DEVICE_MAX_DRAM_ADDRESS_SIZE * 2; + } + + if (physical_mem_size > max_mem_size) { + cs_mem_size 
= max_mem_size * + (hws_ddr3_get_bus_width() / + hws_ddr3_get_device_width(if_id)); + DEBUG_TRAINING_IP(DEBUG_LEVEL_ERROR, + ("Updated Physical Mem size is from 0x%x to %x\n", + physical_mem_size, + DEVICE_MAX_DRAM_ADDRESS_SIZE)); + } +#endif + + /* calculate CS base addr */ + *cs_base_addr = ((cs_mem_size) * cs) & 0xffff0000; + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_bist.c b/drivers/ddr/marvell/a38x/old/ddr3_training_bist.c new file mode 100644 index 0000000000..fadce2dda5 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_bist.c @@ -0,0 +1,288 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include + +#include "ddr3_init.h" + +static u32 bist_offset = 32; +enum hws_pattern sweep_pattern = PATTERN_KILLER_DQ0; + +static int ddr3_tip_bist_operation(u32 dev_num, + enum hws_access_type access_type, + u32 if_id, + enum hws_bist_operation oper_type); + +/* + * BIST activate + */ +int ddr3_tip_bist_activate(u32 dev_num, enum hws_pattern pattern, + enum hws_access_type access_type, u32 if_num, + enum hws_dir direction, + enum hws_stress_jump addr_stress_jump, + enum hws_pattern_duration duration, + enum hws_bist_operation oper_type, + u32 offset, u32 cs_num, u32 pattern_addr_length) +{ + u32 tx_burst_size; + u32 delay_between_burst; + u32 rd_mode, val; + u32 poll_cnt = 0, max_poll = 1000, i, start_if, end_if; + struct pattern_info *pattern_table = ddr3_tip_get_pattern_table(); + u32 read_data[MAX_INTERFACE_NUM]; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* ODPG Write enable from BIST */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_num, + ODPG_DATA_CONTROL_REG, 0x1, 0x1)); + /* ODPG Read enable/disable from BIST */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_num, + ODPG_DATA_CONTROL_REG, + (direction == OPER_READ) ? + 0x2 : 0, 0x2)); + CHECK_STATUS(ddr3_tip_load_pattern_to_odpg(dev_num, access_type, if_num, + pattern, offset)); + + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_num, + ODPG_DATA_BUF_SIZE_REG, + pattern_addr_length, MASK_ALL_BITS)); + tx_burst_size = (direction == OPER_WRITE) ? + pattern_table[pattern].tx_burst_size : 0; + delay_between_burst = (direction == OPER_WRITE) ? 2 : 0; + rd_mode = (direction == OPER_WRITE) ? 
1 : 0; + CHECK_STATUS(ddr3_tip_configure_odpg + (dev_num, access_type, if_num, direction, + pattern_table[pattern].num_of_phases_tx, tx_burst_size, + pattern_table[pattern].num_of_phases_rx, + delay_between_burst, + rd_mode, cs_num, addr_stress_jump, duration)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_num, + ODPG_PATTERN_ADDR_OFFSET_REG, + offset, MASK_ALL_BITS)); + if (oper_type == BIST_STOP) { + CHECK_STATUS(ddr3_tip_bist_operation(dev_num, access_type, + if_num, BIST_STOP)); + } else { + CHECK_STATUS(ddr3_tip_bist_operation(dev_num, access_type, + if_num, BIST_START)); + if (duration != DURATION_CONT) { + /* + * This pdelay is a WA, becuase polling fives "done" + * also the odpg did nmot finish its task + */ + if (access_type == ACCESS_TYPE_MULTICAST) { + start_if = 0; + end_if = MAX_INTERFACE_NUM - 1; + } else { + start_if = if_num; + end_if = if_num; + } + + for (i = start_if; i <= end_if; i++) { + VALIDATE_ACTIVE(tm-> + if_act_mask, i); + + for (poll_cnt = 0; poll_cnt < max_poll; + poll_cnt++) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, + ACCESS_TYPE_UNICAST, + if_num, ODPG_BIST_DONE, + read_data, + MASK_ALL_BITS)); + val = read_data[i]; + if ((val & 0x1) == 0x0) { + /* + * In SOC type devices this bit + * is self clear so, if it was + * cleared all good + */ + break; + } + } + + if (poll_cnt >= max_poll) { + DEBUG_TRAINING_BIST_ENGINE + (DEBUG_LEVEL_ERROR, + ("Bist poll failure 2\n")); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_num, + ODPG_DATA_CONTROL_REG, 0, + MASK_ALL_BITS)); + return MV_FAIL; + } + } + + CHECK_STATUS(ddr3_tip_bist_operation + (dev_num, access_type, if_num, BIST_STOP)); + } + } + + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_num, + ODPG_DATA_CONTROL_REG, 0, + MASK_ALL_BITS)); + + return MV_OK; +} + +/* + * BIST read result + */ +int ddr3_tip_bist_read_result(u32 dev_num, u32 if_id, + struct bist_result *pst_bist_result) +{ + int ret; + u32 read_data[MAX_INTERFACE_NUM]; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (IS_ACTIVE(tm->if_act_mask, if_id) == 0) + return MV_NOT_SUPPORTED; + DEBUG_TRAINING_BIST_ENGINE(DEBUG_LEVEL_TRACE, + ("ddr3_tip_bist_read_result if_id %d\n", + if_id)); + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_UNICAST, if_id, + ODPG_BIST_FAILED_DATA_HI_REG, read_data, + MASK_ALL_BITS); + if (ret != MV_OK) + return ret; + pst_bist_result->bist_fail_high = read_data[if_id]; + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_UNICAST, if_id, + ODPG_BIST_FAILED_DATA_LOW_REG, read_data, + MASK_ALL_BITS); + if (ret != MV_OK) + return ret; + pst_bist_result->bist_fail_low = read_data[if_id]; + + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_UNICAST, if_id, + ODPG_BIST_LAST_FAIL_ADDR_REG, read_data, + MASK_ALL_BITS); + if (ret != MV_OK) + return ret; + pst_bist_result->bist_last_fail_addr = read_data[if_id]; + ret = ddr3_tip_if_read(dev_num, ACCESS_TYPE_UNICAST, if_id, + ODPG_BIST_DATA_ERROR_COUNTER_REG, read_data, + MASK_ALL_BITS); + if (ret != MV_OK) + return ret; + pst_bist_result->bist_error_cnt = read_data[if_id]; + + return MV_OK; +} + +/* + * BIST flow - Activate & read result + */ +int hws_ddr3_run_bist(u32 dev_num, enum hws_pattern pattern, u32 *result, + u32 cs_num) +{ + int ret; + u32 i = 0; + u32 win_base; + struct bist_result st_bist_result; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (i = 0; i < MAX_INTERFACE_NUM; i++) { + VALIDATE_ACTIVE(tm->if_act_mask, i); + hws_ddr3_cs_base_adr_calc(i, cs_num, &win_base); + ret = ddr3_tip_bist_activate(dev_num, 
pattern, + ACCESS_TYPE_UNICAST, + i, OPER_WRITE, STRESS_NONE, + DURATION_SINGLE, BIST_START, + bist_offset + win_base, + cs_num, 15); + if (ret != MV_OK) { + printf("ddr3_tip_bist_activate failed (0x%x)\n", ret); + return ret; + } + + ret = ddr3_tip_bist_activate(dev_num, pattern, + ACCESS_TYPE_UNICAST, + i, OPER_READ, STRESS_NONE, + DURATION_SINGLE, BIST_START, + bist_offset + win_base, + cs_num, 15); + if (ret != MV_OK) { + printf("ddr3_tip_bist_activate failed (0x%x)\n", ret); + return ret; + } + + ret = ddr3_tip_bist_read_result(dev_num, i, &st_bist_result); + if (ret != MV_OK) { + printf("ddr3_tip_bist_read_result failed\n"); + return ret; + } + result[i] = st_bist_result.bist_error_cnt; + } + + return MV_OK; +} + +/* + * Set BIST Operation + */ + +static int ddr3_tip_bist_operation(u32 dev_num, + enum hws_access_type access_type, + u32 if_id, enum hws_bist_operation oper_type) +{ + if (oper_type == BIST_STOP) { + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + ODPG_BIST_DONE, 1 << 8, 1 << 8)); + } else { + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + ODPG_BIST_DONE, 1, 1)); + } + + return MV_OK; +} + +/* + * Print BIST result + */ +void ddr3_tip_print_bist_res(void) +{ + u32 dev_num = 0; + u32 i; + struct bist_result st_bist_result[MAX_INTERFACE_NUM]; + int res; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (i = 0; i < MAX_INTERFACE_NUM; i++) { + if (IS_ACTIVE(tm->if_act_mask, i) == 0) + continue; + + res = ddr3_tip_bist_read_result(dev_num, i, &st_bist_result[i]); + if (res != MV_OK) { + DEBUG_TRAINING_BIST_ENGINE( + DEBUG_LEVEL_ERROR, + ("ddr3_tip_bist_read_result failed\n")); + return; + } + } + + DEBUG_TRAINING_BIST_ENGINE( + DEBUG_LEVEL_INFO, + ("interface | error_cnt | fail_low | fail_high | fail_addr\n")); + + for (i = 0; i < MAX_INTERFACE_NUM; i++) { + if (IS_ACTIVE(tm->if_act_mask, i) == + 0) + continue; + + DEBUG_TRAINING_BIST_ENGINE( + DEBUG_LEVEL_INFO, + ("%d | 0x%08x | 0x%08x | 0x%08x | 0x%08x\n", + i, st_bist_result[i].bist_error_cnt, + st_bist_result[i].bist_fail_low, + st_bist_result[i].bist_fail_high, + st_bist_result[i].bist_last_fail_addr)); + } +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_centralization.c b/drivers/ddr/marvell/a38x/old/ddr3_training_centralization.c new file mode 100644 index 0000000000..248db49338 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_centralization.c @@ -0,0 +1,711 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include + +#include "ddr3_init.h" + +#define VALIDATE_WIN_LENGTH(e1, e2, maxsize) \ + (((e2) + 1 > (e1) + (u8)MIN_WINDOW_SIZE) && \ + ((e2) + 1 < (e1) + (u8)maxsize)) +#define IS_WINDOW_OUT_BOUNDARY(e1, e2, maxsize) \ + (((e1) == 0 && (e2) != 0) || \ + ((e1) != (maxsize - 1) && (e2) == (maxsize - 1))) +#define CENTRAL_TX 0 +#define CENTRAL_RX 1 +#define NUM_OF_CENTRAL_TYPES 2 + +u32 start_pattern = PATTERN_KILLER_DQ0, end_pattern = PATTERN_KILLER_DQ7; +u32 start_if = 0, end_if = (MAX_INTERFACE_NUM - 1); +u8 bus_end_window[NUM_OF_CENTRAL_TYPES][MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u8 bus_start_window[NUM_OF_CENTRAL_TYPES][MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u8 centralization_state[MAX_INTERFACE_NUM][MAX_BUS_NUM]; +static u8 ddr3_tip_special_rx_run_once_flag; + +static int ddr3_tip_centralization(u32 dev_num, u32 mode); + +/* + * Centralization RX Flow + */ +int ddr3_tip_centralization_rx(u32 dev_num) +{ + CHECK_STATUS(ddr3_tip_special_rx(dev_num)); + CHECK_STATUS(ddr3_tip_centralization(dev_num, CENTRAL_RX)); + + return MV_OK; +} + +/* + * Centralization TX Flow + */ +int ddr3_tip_centralization_tx(u32 dev_num) +{ + CHECK_STATUS(ddr3_tip_centralization(dev_num, CENTRAL_TX)); + + return MV_OK; +} + +/* + * Centralization Flow + */ +static int ddr3_tip_centralization(u32 dev_num, u32 mode) +{ + enum hws_training_ip_stat training_result[MAX_INTERFACE_NUM]; + u32 if_id, pattern_id, bit_id; + u8 bus_id; + u8 cur_start_win[BUS_WIDTH_IN_BITS]; + u8 centralization_result[MAX_INTERFACE_NUM][BUS_WIDTH_IN_BITS]; + u8 cur_end_win[BUS_WIDTH_IN_BITS]; + u8 current_window[BUS_WIDTH_IN_BITS]; + u8 opt_window, waste_window, start_window_skew, end_window_skew; + u8 final_pup_window[MAX_INTERFACE_NUM][BUS_WIDTH_IN_BITS]; + struct hws_topology_map *tm = ddr3_get_topology_map(); + enum hws_training_result result_type = RESULT_PER_BIT; + enum hws_dir direction; + u32 *result[HWS_SEARCH_DIR_LIMIT]; + u32 reg_phy_off, reg; + u8 max_win_size; + int lock_success = 1; + u8 cur_end_win_min, cur_start_win_max; + u32 cs_enable_reg_val[MAX_INTERFACE_NUM]; + int is_if_fail = 0; + enum hws_result *flow_result = ddr3_tip_get_result_ptr(training_stage); + u32 pup_win_length = 0; + enum hws_search_dir search_dir_id; + u8 cons_tap = (mode == CENTRAL_TX) ? 
(64) : (0); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* save current cs enable reg val */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, cs_enable_reg_val, MASK_ALL_BITS)); + /* enable single cs */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, (1 << 3), (1 << 3))); + } + + if (mode == CENTRAL_TX) { + max_win_size = MAX_WINDOW_SIZE_TX; + reg_phy_off = WRITE_CENTRALIZATION_PHY_REG + (effective_cs * 4); + direction = OPER_WRITE; + } else { + max_win_size = MAX_WINDOW_SIZE_RX; + reg_phy_off = READ_CENTRALIZATION_PHY_REG + (effective_cs * 4); + direction = OPER_READ; + } + + /* DB initialization */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_id = 0; + bus_id < tm->num_of_bus_per_interface; bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + centralization_state[if_id][bus_id] = 0; + bus_end_window[mode][if_id][bus_id] = + (max_win_size - 1) + cons_tap; + bus_start_window[mode][if_id][bus_id] = 0; + centralization_result[if_id][bus_id] = 0; + } + } + + /* start flow */ + for (pattern_id = start_pattern; pattern_id <= end_pattern; + pattern_id++) { + ddr3_tip_ip_training_wrapper(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, result_type, + HWS_CONTROL_ELEMENT_ADLL, + PARAM_NOT_CARE, direction, + tm-> + if_act_mask, 0x0, + max_win_size - 1, + max_win_size - 1, + pattern_id, EDGE_FPF, CS_SINGLE, + PARAM_NOT_CARE, training_result); + + for (if_id = start_if; if_id <= end_if; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_id = 0; + bus_id <= tm->num_of_bus_per_interface - 1; + bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + + for (search_dir_id = HWS_LOW2HIGH; + search_dir_id <= HWS_HIGH2LOW; + search_dir_id++) { + CHECK_STATUS + (ddr3_tip_read_training_result + (dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_id, + ALL_BITS_PER_PUP, + search_dir_id, + direction, result_type, + TRAINING_LOAD_OPERATION_UNLOAD, + CS_SINGLE, + &result[search_dir_id], + 1, 0, 0)); + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_INFO, + ("%s pat %d IF %d pup %d Regs: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n", + ((mode == + CENTRAL_TX) ? 
"TX" : "RX"), + pattern_id, if_id, bus_id, + result[search_dir_id][0], + result[search_dir_id][1], + result[search_dir_id][2], + result[search_dir_id][3], + result[search_dir_id][4], + result[search_dir_id][5], + result[search_dir_id][6], + result[search_dir_id][7])); + } + + for (bit_id = 0; bit_id < BUS_WIDTH_IN_BITS; + bit_id++) { + /* check if this code is valid for 2 edge, probably not :( */ + cur_start_win[bit_id] = + GET_TAP_RESULT(result + [HWS_LOW2HIGH] + [bit_id], + EDGE_1); + cur_end_win[bit_id] = + GET_TAP_RESULT(result + [HWS_HIGH2LOW] + [bit_id], + EDGE_1); + /* window length */ + current_window[bit_id] = + cur_end_win[bit_id] - + cur_start_win[bit_id] + 1; + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_TRACE, + ("cs %x patern %d IF %d pup %d cur_start_win %d cur_end_win %d current_window %d\n", + effective_cs, pattern_id, + if_id, bus_id, + cur_start_win[bit_id], + cur_end_win[bit_id], + current_window[bit_id])); + } + + if ((ddr3_tip_is_pup_lock + (result[HWS_LOW2HIGH], result_type)) && + (ddr3_tip_is_pup_lock + (result[HWS_HIGH2LOW], result_type))) { + /* read result success */ + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_INFO, + ("Pup locked, pat %d IF %d pup %d\n", + pattern_id, if_id, bus_id)); + } else { + /* read result failure */ + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_INFO, + ("fail Lock, pat %d IF %d pup %d\n", + pattern_id, if_id, bus_id)); + if (centralization_state[if_id][bus_id] + == 1) { + /* continue with next pup */ + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_TRACE, + ("continue to next pup %d %d\n", + if_id, bus_id)); + continue; + } + + for (bit_id = 0; + bit_id < BUS_WIDTH_IN_BITS; + bit_id++) { + /* + * the next check is relevant + * only when using search + * machine 2 edges + */ + if (cur_start_win[bit_id] > 0 && + cur_end_win[bit_id] == 0) { + cur_end_win + [bit_id] = + max_win_size - 1; + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_TRACE, + ("fail, IF %d pup %d bit %d fail #1\n", + if_id, bus_id, + bit_id)); + /* the next bit */ + continue; + } else { + centralization_state + [if_id][bus_id] = 1; + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_TRACE, + ("fail, IF %d pup %d bit %d fail #2\n", + if_id, bus_id, + bit_id)); + } + } + + if (centralization_state[if_id][bus_id] + == 1) { + /* going to next pup */ + continue; + } + } /*bit */ + + opt_window = + ddr3_tip_get_buf_min(current_window); + /* final pup window length */ + final_pup_window[if_id][bus_id] = + ddr3_tip_get_buf_min(cur_end_win) - + ddr3_tip_get_buf_max(cur_start_win) + + 1; + waste_window = + opt_window - + final_pup_window[if_id][bus_id]; + start_window_skew = + ddr3_tip_get_buf_max(cur_start_win) - + ddr3_tip_get_buf_min( + cur_start_win); + end_window_skew = + ddr3_tip_get_buf_max( + cur_end_win) - + ddr3_tip_get_buf_min( + cur_end_win); + /* min/max updated with pattern change */ + cur_end_win_min = + ddr3_tip_get_buf_min( + cur_end_win); + cur_start_win_max = + ddr3_tip_get_buf_max( + cur_start_win); + bus_end_window[mode][if_id][bus_id] = + GET_MIN(bus_end_window[mode][if_id] + [bus_id], + cur_end_win_min); + bus_start_window[mode][if_id][bus_id] = + GET_MAX(bus_start_window[mode][if_id] + [bus_id], + cur_start_win_max); + DEBUG_CENTRALIZATION_ENGINE( + DEBUG_LEVEL_INFO, + ("pat %d IF %d pup %d opt_win %d final_win %d waste_win %d st_win_skew %d end_win_skew %d cur_st_win_max %d cur_end_win_min %d bus_st_win %d bus_end_win %d\n", + pattern_id, if_id, bus_id, opt_window, + final_pup_window[if_id][bus_id], + waste_window, start_window_skew, + end_window_skew, + cur_start_win_max, + 
cur_end_win_min, + bus_start_window[mode][if_id][bus_id], + bus_end_window[mode][if_id][bus_id])); + + /* check if window is valid */ + if (ddr3_tip_centr_skip_min_win_check == 0) { + if ((VALIDATE_WIN_LENGTH + (bus_start_window[mode][if_id] + [bus_id], + bus_end_window[mode][if_id] + [bus_id], + max_win_size) == 1) || + (IS_WINDOW_OUT_BOUNDARY + (bus_start_window[mode][if_id] + [bus_id], + bus_end_window[mode][if_id] + [bus_id], + max_win_size) == 1)) { + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_INFO, + ("win valid, pat %d IF %d pup %d\n", + pattern_id, if_id, + bus_id)); + /* window is valid */ + } else { + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_INFO, + ("fail win, pat %d IF %d pup %d bus_st_win %d bus_end_win %d\n", + pattern_id, if_id, bus_id, + bus_start_window[mode] + [if_id][bus_id], + bus_end_window[mode] + [if_id][bus_id])); + centralization_state[if_id] + [bus_id] = 1; + if (debug_mode == 0) + return MV_FAIL; + } + } /* ddr3_tip_centr_skip_min_win_check */ + } /* pup */ + } /* interface */ + } /* pattern */ + + for (if_id = start_if; if_id <= end_if; if_id++) { + if (IS_ACTIVE(tm->if_act_mask, if_id) == 0) + continue; + + is_if_fail = 0; + flow_result[if_id] = TEST_SUCCESS; + + for (bus_id = 0; + bus_id <= (tm->num_of_bus_per_interface - 1); bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + + /* continue only if lock */ + if (centralization_state[if_id][bus_id] != 1) { + if (ddr3_tip_centr_skip_min_win_check == 0) { + if ((bus_end_window + [mode][if_id][bus_id] == + (max_win_size - 1)) && + ((bus_end_window + [mode][if_id][bus_id] - + bus_start_window[mode][if_id] + [bus_id]) < MIN_WINDOW_SIZE) && + ((bus_end_window[mode][if_id] + [bus_id] - bus_start_window + [mode][if_id][bus_id]) > 2)) { + /* prevent false lock */ + /* TBD change to enum */ + centralization_state + [if_id][bus_id] = 2; + } + + if ((bus_end_window[mode][if_id][bus_id] + == 0) && + ((bus_end_window[mode][if_id] + [bus_id] - + bus_start_window[mode][if_id] + [bus_id]) < MIN_WINDOW_SIZE) && + ((bus_end_window[mode][if_id] + [bus_id] - + bus_start_window[mode][if_id] + [bus_id]) > 2)) + /*prevent false lock */ + centralization_state[if_id] + [bus_id] = 3; + } + + if ((bus_end_window[mode][if_id][bus_id] > + (max_win_size - 1)) && direction == + OPER_WRITE) { + DEBUG_CENTRALIZATION_ENGINE + (DEBUG_LEVEL_INFO, + ("Tx special pattern\n")); + cons_tap = 64; + } + } + + /* check states */ + if (centralization_state[if_id][bus_id] == 3) { + DEBUG_CENTRALIZATION_ENGINE( + DEBUG_LEVEL_INFO, + ("SSW - TBD IF %d pup %d\n", + if_id, bus_id)); + lock_success = 1; + } else if (centralization_state[if_id][bus_id] == 2) { + DEBUG_CENTRALIZATION_ENGINE( + DEBUG_LEVEL_INFO, + ("SEW - TBD IF %d pup %d\n", + if_id, bus_id)); + lock_success = 1; + } else if (centralization_state[if_id][bus_id] == 0) { + lock_success = 1; + } else { + DEBUG_CENTRALIZATION_ENGINE( + DEBUG_LEVEL_ERROR, + ("fail, IF %d pup %d\n", + if_id, bus_id)); + lock_success = 0; + } + + if (lock_success == 1) { + centralization_result[if_id][bus_id] = + (bus_end_window[mode][if_id][bus_id] + + bus_start_window[mode][if_id][bus_id]) + / 2 - cons_tap; + DEBUG_CENTRALIZATION_ENGINE( + DEBUG_LEVEL_TRACE, + (" bus_id %d Res= %d\n", bus_id, + centralization_result[if_id][bus_id])); + /* copy results to registers */ + pup_win_length = + bus_end_window[mode][if_id][bus_id] - + bus_start_window[mode][if_id][bus_id] + + 1; + + ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_id, + DDR_PHY_DATA, + RESULT_DB_PHY_REG_ADDR + + effective_cs, ®); + reg = 
(reg & (~0x1f << + ((mode == CENTRAL_TX) ? + (RESULT_DB_PHY_REG_TX_OFFSET) : + (RESULT_DB_PHY_REG_RX_OFFSET)))) + | pup_win_length << + ((mode == CENTRAL_TX) ? + (RESULT_DB_PHY_REG_TX_OFFSET) : + (RESULT_DB_PHY_REG_RX_OFFSET)); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + RESULT_DB_PHY_REG_ADDR + + effective_cs, reg)); + + /* offset per CS is calculated earlier */ + CHECK_STATUS( + ddr3_tip_bus_write(dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, + bus_id, + DDR_PHY_DATA, + reg_phy_off, + centralization_result + [if_id] + [bus_id])); + } else { + is_if_fail = 1; + } + } + + if (is_if_fail == 1) + flow_result[if_id] = TEST_FAILED; + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + /* restore cs enable value */ + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_UNICAST, + if_id, CS_ENABLE_REG, + cs_enable_reg_val[if_id], + MASK_ALL_BITS)); + } + + return is_if_fail; +} + +/* + * Centralization Flow + */ +int ddr3_tip_special_rx(u32 dev_num) +{ + enum hws_training_ip_stat training_result[MAX_INTERFACE_NUM]; + u32 if_id, pup_id, pattern_id, bit_id; + u8 cur_start_win[BUS_WIDTH_IN_BITS]; + u8 cur_end_win[BUS_WIDTH_IN_BITS]; + enum hws_training_result result_type = RESULT_PER_BIT; + enum hws_dir direction; + enum hws_search_dir search_dir_id; + u32 *result[HWS_SEARCH_DIR_LIMIT]; + u32 max_win_size; + u8 cur_end_win_min, cur_start_win_max; + u32 cs_enable_reg_val[MAX_INTERFACE_NUM]; + u32 temp = 0; + int pad_num = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (ddr3_tip_special_rx_run_once_flag != 0) + return MV_OK; + + ddr3_tip_special_rx_run_once_flag = 1; + + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* save current cs enable reg val */ + CHECK_STATUS(ddr3_tip_if_read(dev_num, ACCESS_TYPE_UNICAST, + if_id, CS_ENABLE_REG, + cs_enable_reg_val, + MASK_ALL_BITS)); + /* enable single cs */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, ACCESS_TYPE_UNICAST, + if_id, CS_ENABLE_REG, + (1 << 3), (1 << 3))); + } + + max_win_size = MAX_WINDOW_SIZE_RX; + direction = OPER_READ; + pattern_id = PATTERN_VREF; + + /* start flow */ + ddr3_tip_ip_training_wrapper(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, result_type, + HWS_CONTROL_ELEMENT_ADLL, + PARAM_NOT_CARE, direction, + tm->if_act_mask, 0x0, + max_win_size - 1, max_win_size - 1, + pattern_id, EDGE_FPF, CS_SINGLE, + PARAM_NOT_CARE, training_result); + + for (if_id = start_if; if_id <= end_if; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (pup_id = 0; + pup_id <= tm->num_of_bus_per_interface; pup_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup_id); + + for (search_dir_id = HWS_LOW2HIGH; + search_dir_id <= HWS_HIGH2LOW; + search_dir_id++) { + CHECK_STATUS(ddr3_tip_read_training_result + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup_id, + ALL_BITS_PER_PUP, search_dir_id, + direction, result_type, + TRAINING_LOAD_OPERATION_UNLOAD, + CS_SINGLE, &result[search_dir_id], + 1, 0, 0)); + DEBUG_CENTRALIZATION_ENGINE(DEBUG_LEVEL_INFO, + ("Special: pat %d IF %d pup %d Regs: 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n", + pattern_id, if_id, + pup_id, + result + [search_dir_id][0], + result + [search_dir_id][1], + result + [search_dir_id][2], + result + [search_dir_id][3], + result + [search_dir_id][4], + result + [search_dir_id][5], + result + [search_dir_id][6], + result + 
[search_dir_id] + [7])); + } + + for (bit_id = 0; bit_id < BUS_WIDTH_IN_BITS; bit_id++) { + /* + * check if this code is valid for 2 edge, + * probably not :( + */ + cur_start_win[bit_id] = + GET_TAP_RESULT(result[HWS_LOW2HIGH] + [bit_id], EDGE_1); + cur_end_win[bit_id] = + GET_TAP_RESULT(result[HWS_HIGH2LOW] + [bit_id], EDGE_1); + } + if (!((ddr3_tip_is_pup_lock + (result[HWS_LOW2HIGH], result_type)) && + (ddr3_tip_is_pup_lock + (result[HWS_HIGH2LOW], result_type)))) { + DEBUG_CENTRALIZATION_ENGINE( + DEBUG_LEVEL_ERROR, + ("Special: Pup lock fail, pat %d IF %d pup %d\n", + pattern_id, if_id, pup_id)); + return MV_FAIL; + } + + cur_end_win_min = + ddr3_tip_get_buf_min(cur_end_win); + cur_start_win_max = + ddr3_tip_get_buf_max(cur_start_win); + + if (cur_start_win_max <= 1) { /* Align left */ + for (bit_id = 0; bit_id < BUS_WIDTH_IN_BITS; + bit_id++) { + pad_num = + dq_map_table[bit_id + + pup_id * + BUS_WIDTH_IN_BITS + + if_id * + BUS_WIDTH_IN_BITS * + tm-> + num_of_bus_per_interface]; + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, + pup_id, DDR_PHY_DATA, + PBS_RX_PHY_REG + pad_num, + &temp)); + temp = (temp + 0xa > 31) ? + (31) : (temp + 0xa); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, + pup_id, DDR_PHY_DATA, + PBS_RX_PHY_REG + pad_num, + temp)); + } + DEBUG_CENTRALIZATION_ENGINE( + DEBUG_LEVEL_INFO, + ("Special: PBS:: I/F# %d , Bus# %d fix align to the Left\n", + if_id, pup_id)); + } + + if (cur_end_win_min > 30) { /* Align right */ + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup_id, + DDR_PHY_DATA, PBS_RX_PHY_REG + 4, + &temp)); + temp += 0xa; + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + pup_id, DDR_PHY_DATA, + PBS_RX_PHY_REG + 4, temp)); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup_id, + DDR_PHY_DATA, PBS_RX_PHY_REG + 5, + &temp)); + temp += 0xa; + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + pup_id, DDR_PHY_DATA, + PBS_RX_PHY_REG + 5, temp)); + DEBUG_CENTRALIZATION_ENGINE( + DEBUG_LEVEL_INFO, + ("Special: PBS:: I/F# %d , Bus# %d fix align to the right\n", + if_id, pup_id)); + } + + vref_window_size[if_id][pup_id] = + cur_end_win_min - + cur_start_win_max + 1; + DEBUG_CENTRALIZATION_ENGINE( + DEBUG_LEVEL_INFO, + ("Special: Winsize I/F# %d , Bus# %d is %d\n", + if_id, pup_id, vref_window_size + [if_id][pup_id])); + } /* pup */ + } /* end of interface */ + + return MV_OK; +} + +/* + * Print Centralization Result + */ +int ddr3_tip_print_centralization_result(u32 dev_num) +{ + u32 if_id = 0, bus_id = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + printf("Centralization Results\n"); + printf("I/F0 Result[0 - success 1-fail 2 - state_2 3 - state_3] ...\n"); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; + bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + printf("%d ,\n", centralization_state[if_id][bus_id]); + } + } + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_db.c b/drivers/ddr/marvell/a38x/old/ddr3_training_db.c new file mode 100644 index 0000000000..bd5413e233 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_db.c @@ -0,0 +1,651 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include + +#include "ddr3_init.h" + +/* List of allowed frequency listed in order of enum hws_ddr_freq */ +u32 freq_val[DDR_FREQ_LIMIT] = { + 0, /*DDR_FREQ_LOW_FREQ */ + 400, /*DDR_FREQ_400, */ + 533, /*DDR_FREQ_533, */ + 666, /*DDR_FREQ_667, */ + 800, /*DDR_FREQ_800, */ + 933, /*DDR_FREQ_933, */ + 1066, /*DDR_FREQ_1066, */ + 311, /*DDR_FREQ_311, */ + 333, /*DDR_FREQ_333, */ + 467, /*DDR_FREQ_467, */ + 850, /*DDR_FREQ_850, */ + 600, /*DDR_FREQ_600 */ + 300, /*DDR_FREQ_300 */ + 900, /*DDR_FREQ_900 */ + 360, /*DDR_FREQ_360 */ + 1000 /*DDR_FREQ_1000 */ +}; + +/* Table for CL values per frequency for each speed bin index */ +struct cl_val_per_freq cas_latency_table[] = { + /* + * 400M 667M 933M 311M 467M 600M 360 + * 100M 533M 800M 1066M 333M 850M 900 + * 1000 (the order is 100, 400, 533 etc.) + */ + /* DDR3-800D */ + { {6, 5, 0, 0, 0, 0, 0, 5, 5, 0, 0, 0, 5, 0, 5, 0} }, + /* DDR3-800E */ + { {6, 6, 0, 0, 0, 0, 0, 6, 6, 0, 0, 0, 6, 0, 6, 0} }, + /* DDR3-1066E */ + { {6, 5, 6, 0, 0, 0, 0, 5, 5, 6, 0, 0, 5, 0, 5, 0} }, + /* DDR3-1066F */ + { {6, 6, 7, 0, 0, 0, 0, 6, 6, 7, 0, 0, 6, 0, 6, 0} }, + /* DDR3-1066G */ + { {6, 6, 8, 0, 0, 0, 0, 6, 6, 8, 0, 0, 6, 0, 6, 0} }, + /* DDR3-1333F* */ + { {6, 5, 6, 7, 0, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1333G */ + { {6, 5, 7, 8, 0, 0, 0, 5, 5, 7, 0, 8, 5, 0, 5, 0} }, + /* DDR3-1333H */ + { {6, 6, 8, 9, 0, 0, 0, 6, 6, 8, 0, 9, 6, 0, 6, 0} }, + /* DDR3-1333J* */ + { {6, 6, 8, 10, 0, 0, 0, 6, 6, 8, 0, 10, 6, 0, 6, 0} + /* DDR3-1600G* */}, + { {6, 5, 6, 7, 8, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1600H */ + { {6, 5, 6, 8, 9, 0, 0, 5, 5, 6, 0, 8, 5, 0, 5, 0} }, + /* DDR3-1600J */ + { {6, 5, 7, 9, 10, 0, 0, 5, 5, 7, 0, 9, 5, 0, 5, 0} }, + /* DDR3-1600K */ + { {6, 6, 8, 10, 11, 0, 0, 6, 6, 8, 0, 10, 6, 0, 6, 0 } }, + /* DDR3-1866J* */ + { {6, 5, 6, 8, 9, 11, 0, 5, 5, 6, 11, 8, 5, 0, 5, 0} }, + /* DDR3-1866K */ + { {6, 5, 7, 8, 10, 11, 0, 5, 5, 7, 11, 8, 5, 11, 5, 11} }, + /* DDR3-1866L */ + { {6, 6, 7, 9, 11, 12, 0, 6, 6, 7, 12, 9, 6, 12, 6, 12} }, + /* DDR3-1866M* */ + { {6, 6, 8, 10, 11, 13, 0, 6, 6, 8, 13, 10, 6, 13, 6, 13} }, + /* DDR3-2133K* */ + { {6, 5, 6, 7, 9, 10, 11, 5, 5, 6, 10, 7, 5, 11, 5, 11} }, + /* DDR3-2133L */ + { {6, 5, 6, 8, 9, 11, 12, 5, 5, 6, 11, 8, 5, 12, 5, 12} }, + /* DDR3-2133M */ + { {6, 5, 7, 9, 10, 12, 13, 5, 5, 7, 12, 9, 5, 13, 5, 13} }, + /* DDR3-2133N* */ + { {6, 6, 7, 9, 11, 13, 14, 6, 6, 7, 13, 9, 6, 14, 6, 14} }, + /* DDR3-1333H-ext */ + { {6, 6, 7, 9, 0, 0, 0, 6, 6, 7, 0, 9, 6, 0, 6, 0} }, + /* DDR3-1600K-ext */ + { {6, 6, 7, 9, 11, 0, 0, 6, 6, 7, 0, 9, 6, 0, 6, 0} }, + /* DDR3-1866M-ext */ + { {6, 6, 7, 9, 11, 13, 0, 6, 6, 7, 13, 9, 6, 13, 6, 13} }, +}; + +/* Table for CWL values per speedbin index */ +struct cl_val_per_freq cas_write_latency_table[] = { + /* + * 400M 667M 933M 311M 467M 600M 360 + * 100M 533M 800M 1066M 333M 850M 900 + * (the order is 100, 400, 533 etc.) 
+ */ + /* DDR3-800D */ + { {5, 5, 0, 0, 0, 0, 0, 5, 5, 0, 0, 0, 5, 0, 5, 0} }, + /* DDR3-800E */ + { {5, 5, 0, 0, 0, 0, 0, 5, 5, 0, 0, 0, 5, 0, 5, 0} }, + /* DDR3-1066E */ + { {5, 5, 6, 0, 0, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1066F */ + { {5, 5, 6, 0, 0, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1066G */ + { {5, 5, 6, 0, 0, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1333F* */ + { {5, 5, 6, 7, 0, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1333G */ + { {5, 5, 6, 7, 0, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1333H */ + { {5, 5, 6, 7, 0, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1333J* */ + { {5, 5, 6, 7, 0, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1600G* */ + { {5, 5, 6, 7, 8, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1600H */ + { {5, 5, 6, 7, 8, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1600J */ + { {5, 5, 6, 7, 8, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1600K */ + { {5, 5, 6, 7, 8, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1866J* */ + { {5, 5, 6, 7, 8, 9, 0, 5, 5, 6, 9, 7, 5, 0, 5, 0} }, + /* DDR3-1866K */ + { {5, 5, 6, 7, 8, 9, 0, 5, 5, 6, 9, 7, 5, 0, 5, 0} }, + /* DDR3-1866L */ + { {5, 5, 6, 7, 8, 9, 0, 5, 5, 6, 9, 7, 5, 9, 5, 9} }, + /* DDR3-1866M* */ + { {5, 5, 6, 7, 8, 9, 0, 5, 5, 6, 9, 7, 5, 9, 5, 9} }, + /* DDR3-2133K* */ + { {5, 5, 6, 7, 8, 9, 10, 5, 5, 6, 9, 7, 5, 9, 5, 10} }, + /* DDR3-2133L */ + { {5, 5, 6, 7, 8, 9, 10, 5, 5, 6, 9, 7, 5, 9, 5, 10} }, + /* DDR3-2133M */ + { {5, 5, 6, 7, 8, 9, 10, 5, 5, 6, 9, 7, 5, 9, 5, 10} }, + /* DDR3-2133N* */ + { {5, 5, 6, 7, 8, 9, 10, 5, 5, 6, 9, 7, 5, 9, 5, 10} }, + /* DDR3-1333H-ext */ + { {5, 5, 6, 7, 0, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1600K-ext */ + { {5, 5, 6, 7, 8, 0, 0, 5, 5, 6, 0, 7, 5, 0, 5, 0} }, + /* DDR3-1866M-ext */ + { {5, 5, 6, 7, 8, 9, 0, 5, 5, 6, 9, 7, 5, 9, 5, 9} }, +}; + +u8 twr_mask_table[] = { + 10, + 10, + 10, + 10, + 10, + 1, /*5 */ + 2, /*6 */ + 3, /*7 */ + 10, + 10, + 5, /*10 */ + 10, + 6, /*12 */ + 10, + 7, /*14 */ + 10, + 0 /*16 */ +}; + +u8 cl_mask_table[] = { + 0, + 0, + 0, + 0, + 0, + 0x2, + 0x4, + 0x6, + 0x8, + 0xa, + 0xc, + 0xe, + 0x1, + 0x3, + 0x5, + 0x5 +}; + +u8 cwl_mask_table[] = { + 0, + 0, + 0, + 0, + 0, + 0, + 0x1, + 0x2, + 0x3, + 0x4, + 0x5, + 0x6, + 0x7, + 0x8, + 0x9, + 0x9 +}; + +/* RFC values (in ns) */ +u16 rfc_table[] = { + 90, /* 512M */ + 110, /* 1G */ + 160, /* 2G */ + 260, /* 4G */ + 350 /* 8G */ +}; + +u32 speed_bin_table_t_rc[] = { + 50000, + 52500, + 48750, + 50625, + 52500, + 46500, + 48000, + 49500, + 51000, + 45000, + 46250, + 47500, + 48750, + 44700, + 45770, + 46840, + 47910, + 43285, + 44220, + 45155, + 46900 +}; + +u32 speed_bin_table_t_rcd_t_rp[] = { + 12500, + 15000, + 11250, + 13125, + 15000, + 10500, + 12000, + 13500, + 15000, + 10000, + 11250, + 12500, + 13750, + 10700, + 11770, + 12840, + 13910, + 10285, + 11022, + 12155, + 13090, +}; + +enum { + PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_AGGRESSOR = 0, + PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM +}; + +static u8 pattern_killer_pattern_table_map[KILLER_PATTERN_LENGTH * 2][2] = { + /*Aggressor / Victim */ + {1, 0}, + {0, 0}, + {1, 0}, + {1, 1}, + {0, 1}, + {0, 1}, + {1, 0}, + {0, 1}, + {1, 0}, + {0, 1}, + {1, 0}, + {1, 0}, + {0, 1}, + {1, 0}, + {0, 1}, + {0, 0}, + {1, 1}, + {0, 0}, + {1, 1}, + {0, 0}, + {1, 1}, + {0, 0}, + {1, 1}, + {1, 0}, + {0, 0}, + {1, 1}, + {0, 0}, + {1, 1}, + {0, 0}, + {0, 0}, + {0, 0}, + {0, 1}, + {0, 1}, + {1, 1}, + {0, 0}, + {0, 0}, + {1, 1}, + {1, 1}, + {0, 0}, + {1, 1}, + {0, 0}, + {1, 1}, + {1, 1}, + {0, 0}, + {0, 0}, + {1, 1}, + {0, 
0}, + {1, 1}, + {0, 1}, + {0, 0}, + {0, 1}, + {0, 1}, + {0, 0}, + {1, 1}, + {1, 1}, + {1, 0}, + {1, 0}, + {1, 1}, + {1, 1}, + {1, 1}, + {1, 1}, + {1, 1}, + {1, 1}, + {1, 1} +}; + +static u8 pattern_vref_pattern_table_map[] = { + /* 1 means 0xffffffff, 0 is 0x0 */ + 0xb8, + 0x52, + 0x55, + 0x8a, + 0x33, + 0xa6, + 0x6d, + 0xfe +}; + +/* Return speed Bin value for selected index and t* element */ +u32 speed_bin_table(u8 index, enum speed_bin_table_elements element) +{ + u32 result = 0; + + switch (element) { + case SPEED_BIN_TRCD: + case SPEED_BIN_TRP: + result = speed_bin_table_t_rcd_t_rp[index]; + break; + case SPEED_BIN_TRAS: + if (index < 6) + result = 37500; + else if (index < 10) + result = 36000; + else if (index < 14) + result = 35000; + else if (index < 18) + result = 34000; + else + result = 33000; + break; + case SPEED_BIN_TRC: + result = speed_bin_table_t_rc[index]; + break; + case SPEED_BIN_TRRD1K: + if (index < 3) + result = 10000; + else if (index < 6) + result = 7005; + else if (index < 14) + result = 6000; + else + result = 5000; + break; + case SPEED_BIN_TRRD2K: + if (index < 6) + result = 10000; + else if (index < 14) + result = 7005; + else + result = 6000; + break; + case SPEED_BIN_TPD: + if (index < 3) + result = 7500; + else if (index < 10) + result = 5625; + else + result = 5000; + break; + case SPEED_BIN_TFAW1K: + if (index < 3) + result = 40000; + else if (index < 6) + result = 37500; + else if (index < 14) + result = 30000; + else if (index < 18) + result = 27000; + else + result = 25000; + break; + case SPEED_BIN_TFAW2K: + if (index < 6) + result = 50000; + else if (index < 10) + result = 45000; + else if (index < 14) + result = 40000; + else + result = 35000; + break; + case SPEED_BIN_TWTR: + result = 7500; + break; + case SPEED_BIN_TRTP: + result = 7500; + break; + case SPEED_BIN_TWR: + result = 15000; + break; + case SPEED_BIN_TMOD: + result = 15000; + break; + default: + break; + } + + return result; +} + +static inline u32 pattern_table_get_killer_word(u8 dqs, u8 index) +{ + u8 i, byte = 0; + u8 role; + + for (i = 0; i < 8; i++) { + role = (i == dqs) ? + (PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_AGGRESSOR) : + (PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM); + byte |= pattern_killer_pattern_table_map[index][role] << i; + } + + return byte | (byte << 8) | (byte << 16) | (byte << 24); +} + +static inline u32 pattern_table_get_killer_word16(u8 dqs, u8 index) +{ + u8 i, byte0 = 0, byte1 = 0; + u8 role; + + for (i = 0; i < 8; i++) { + role = (i == dqs) ? + (PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_AGGRESSOR) : + (PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM); + byte0 |= pattern_killer_pattern_table_map[index * 2][role] << i; + } + + for (i = 0; i < 8; i++) { + role = (i == dqs) ? 
+ (PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_AGGRESSOR) : + (PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM); + byte1 |= pattern_killer_pattern_table_map + [index * 2 + 1][role] << i; + } + + return byte0 | (byte0 << 8) | (byte1 << 16) | (byte1 << 24); +} + +static inline u32 pattern_table_get_sso_word(u8 sso, u8 index) +{ + u8 step = sso + 1; + + if (0 == ((index / step) & 1)) + return 0x0; + else + return 0xffffffff; +} + +static inline u32 pattern_table_get_vref_word(u8 index) +{ + if (0 == ((pattern_vref_pattern_table_map[index / 8] >> + (index % 8)) & 1)) + return 0x0; + else + return 0xffffffff; +} + +static inline u32 pattern_table_get_vref_word16(u8 index) +{ + if (0 == pattern_killer_pattern_table_map + [PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM][index * 2] && + 0 == pattern_killer_pattern_table_map + [PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM][index * 2 + 1]) + return 0x00000000; + else if (1 == pattern_killer_pattern_table_map + [PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM][index * 2] && + 0 == pattern_killer_pattern_table_map + [PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM][index * 2 + 1]) + return 0xffff0000; + else if (0 == pattern_killer_pattern_table_map + [PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM][index * 2] && + 1 == pattern_killer_pattern_table_map + [PATTERN_KILLER_PATTERN_TABLE_MAP_ROLE_VICTIM][index * 2 + 1]) + return 0x0000ffff; + else + return 0xffffffff; +} + +static inline u32 pattern_table_get_static_pbs_word(u8 index) +{ + u16 temp; + + temp = ((0x00ff << (index / 3)) & 0xff00) >> 8; + + return temp | (temp << 8) | (temp << 16) | (temp << 24); +} + +inline u32 pattern_table_get_word(u32 dev_num, enum hws_pattern type, u8 index) +{ + u32 pattern; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (DDR3_IS_16BIT_DRAM_MODE(tm->bus_act_mask) == 0) { + /* 32bit patterns */ + switch (type) { + case PATTERN_PBS1: + case PATTERN_PBS2: + if (index == 0 || index == 2 || index == 5 || + index == 7) + pattern = PATTERN_55; + else + pattern = PATTERN_AA; + break; + case PATTERN_PBS3: + if (0 == (index & 1)) + pattern = PATTERN_55; + else + pattern = PATTERN_AA; + break; + case PATTERN_RL: + if (index < 6) + pattern = PATTERN_00; + else + pattern = PATTERN_80; + break; + case PATTERN_STATIC_PBS: + pattern = pattern_table_get_static_pbs_word(index); + break; + case PATTERN_KILLER_DQ0: + case PATTERN_KILLER_DQ1: + case PATTERN_KILLER_DQ2: + case PATTERN_KILLER_DQ3: + case PATTERN_KILLER_DQ4: + case PATTERN_KILLER_DQ5: + case PATTERN_KILLER_DQ6: + case PATTERN_KILLER_DQ7: + pattern = pattern_table_get_killer_word( + (u8)(type - PATTERN_KILLER_DQ0), index); + break; + case PATTERN_RL2: + if (index < 6) + pattern = PATTERN_00; + else + pattern = PATTERN_01; + break; + case PATTERN_TEST: + if (index > 1 && index < 6) + pattern = PATTERN_20; + else + pattern = PATTERN_00; + break; + case PATTERN_FULL_SSO0: + case PATTERN_FULL_SSO1: + case PATTERN_FULL_SSO2: + case PATTERN_FULL_SSO3: + pattern = pattern_table_get_sso_word( + (u8)(type - PATTERN_FULL_SSO0), index); + break; + case PATTERN_VREF: + pattern = pattern_table_get_vref_word(index); + break; + default: + pattern = 0; + break; + } + } else { + /* 16bit patterns */ + switch (type) { + case PATTERN_PBS1: + case PATTERN_PBS2: + case PATTERN_PBS3: + pattern = PATTERN_55AA; + break; + case PATTERN_RL: + if (index < 3) + pattern = PATTERN_00; + else + pattern = PATTERN_80; + break; + case PATTERN_STATIC_PBS: + pattern = PATTERN_00FF; + break; + case PATTERN_KILLER_DQ0: + case PATTERN_KILLER_DQ1: + case 
PATTERN_KILLER_DQ2: + case PATTERN_KILLER_DQ3: + case PATTERN_KILLER_DQ4: + case PATTERN_KILLER_DQ5: + case PATTERN_KILLER_DQ6: + case PATTERN_KILLER_DQ7: + pattern = pattern_table_get_killer_word16( + (u8)(type - PATTERN_KILLER_DQ0), index); + break; + case PATTERN_RL2: + if (index < 3) + pattern = PATTERN_00; + else + pattern = PATTERN_01; + break; + case PATTERN_TEST: + pattern = PATTERN_0080; + break; + case PATTERN_FULL_SSO0: + pattern = 0x0000ffff; + break; + case PATTERN_FULL_SSO1: + case PATTERN_FULL_SSO2: + case PATTERN_FULL_SSO3: + pattern = pattern_table_get_sso_word( + (u8)(type - PATTERN_FULL_SSO1), index); + break; + case PATTERN_VREF: + pattern = pattern_table_get_vref_word16(index); + break; + default: + pattern = 0; + break; + } + } + + return pattern; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_hw_algo.c b/drivers/ddr/marvell/a38x/old/ddr3_training_hw_algo.c new file mode 100644 index 0000000000..3a88527de3 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_hw_algo.c @@ -0,0 +1,685 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include + +#include "ddr3_init.h" + +#define VREF_INITIAL_STEP 3 +#define VREF_SECOND_STEP 1 +#define VREF_MAX_INDEX 7 +#define MAX_VALUE (1024 - 1) +#define MIN_VALUE (-MAX_VALUE) +#define GET_RD_SAMPLE_DELAY(data, cs) ((data >> rd_sample_mask[cs]) & 0xf) + +u32 ck_delay = (u32)-1, ck_delay_16 = (u32)-1; +u32 ca_delay; +int ddr3_tip_centr_skip_min_win_check = 0; +u8 current_vref[MAX_BUS_NUM][MAX_INTERFACE_NUM]; +u8 last_vref[MAX_BUS_NUM][MAX_INTERFACE_NUM]; +u16 current_valid_window[MAX_BUS_NUM][MAX_INTERFACE_NUM]; +u16 last_valid_window[MAX_BUS_NUM][MAX_INTERFACE_NUM]; +u8 lim_vref[MAX_BUS_NUM][MAX_INTERFACE_NUM]; +u8 interface_state[MAX_INTERFACE_NUM]; +u8 vref_window_size[MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u8 vref_window_size_th = 12; + +static u8 pup_st[MAX_BUS_NUM][MAX_INTERFACE_NUM]; + +static u32 rd_sample_mask[] = { + 0, + 8, + 16, + 24 +}; + +#define VREF_STEP_1 0 +#define VREF_STEP_2 1 +#define VREF_CONVERGE 2 + +/* + * ODT additional timing + */ +int ddr3_tip_write_additional_odt_setting(u32 dev_num, u32 if_id) +{ + u32 cs_num = 0, max_read_sample = 0, min_read_sample = 0; + u32 data_read[MAX_INTERFACE_NUM] = { 0 }; + u32 read_sample[MAX_CS_NUM]; + u32 val; + u32 pup_index; + int max_phase = MIN_VALUE, current_phase; + enum hws_access_type access_type = ACCESS_TYPE_UNICAST; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + DUNIT_ODT_CONTROL_REG, + 0 << 8, 0x3 << 8)); + CHECK_STATUS(ddr3_tip_if_read(dev_num, access_type, if_id, + READ_DATA_SAMPLE_DELAY, + data_read, MASK_ALL_BITS)); + val = data_read[if_id]; + + for (cs_num = 0; cs_num < MAX_CS_NUM; cs_num++) { + read_sample[cs_num] = GET_RD_SAMPLE_DELAY(val, cs_num); + + /* find maximum of read_samples */ + if (read_sample[cs_num] >= max_read_sample) { + if (read_sample[cs_num] == max_read_sample) + max_phase = MIN_VALUE; + else + max_read_sample = read_sample[cs_num]; + + for (pup_index = 0; + pup_index < tm->num_of_bus_per_interface; + pup_index++) { + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup_index, + DDR_PHY_DATA, + RL_PHY_REG + CS_REG_VALUE(cs_num), + &val)); + + current_phase = ((int)val & 0xe0) >> 6; + if (current_phase >= max_phase) + max_phase = current_phase; + } + } + + /* find minimum */ + if (read_sample[cs_num] < min_read_sample) + 
min_read_sample = read_sample[cs_num]; + } + + min_read_sample = min_read_sample - 1; + max_read_sample = max_read_sample + 4 + (max_phase + 1) / 2 + 1; + if (min_read_sample >= 0xf) + min_read_sample = 0xf; + if (max_read_sample >= 0x1f) + max_read_sample = 0x1f; + + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + ODT_TIMING_LOW, + ((min_read_sample - 1) << 12), + 0xf << 12)); + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, if_id, + ODT_TIMING_LOW, + (max_read_sample << 16), + 0x1f << 16)); + + return MV_OK; +} + +int get_valid_win_rx(u32 dev_num, u32 if_id, u8 res[4]) +{ + u32 reg_pup = RESULT_DB_PHY_REG_ADDR; + u32 reg_data; + u32 cs_num; + int i; + + cs_num = 0; + + /* TBD */ + reg_pup += cs_num; + + for (i = 0; i < 4; i++) { + CHECK_STATUS(ddr3_tip_bus_read(dev_num, if_id, + ACCESS_TYPE_UNICAST, i, + DDR_PHY_DATA, reg_pup, + ®_data)); + res[i] = (reg_data >> RESULT_DB_PHY_REG_RX_OFFSET) & 0x1f; + } + + return 0; +} + +/* + * This algorithm deals with the vertical optimum from Voltage point of view + * of the sample signal. + * Voltage sample point can improve the Eye / window size of the bit and the + * pup. + * The problem is that it is tune for all DQ the same so there isn't any + * PBS like code. + * It is more like centralization. + * But because we don't have The training SM support we do it a bit more + * smart search to save time. + */ +int ddr3_tip_vref(u32 dev_num) +{ + /* + * The Vref register have non linear order. Need to check what will be + * in future projects. + */ + u32 vref_map[8] = { + 1, 2, 3, 4, 5, 6, 7, 0 + }; + /* State and parameter definitions */ + u32 initial_step = VREF_INITIAL_STEP; + /* need to be assign with minus ????? */ + u32 second_step = VREF_SECOND_STEP; + u32 algo_run_flag = 0, currrent_vref = 0; + u32 while_count = 0; + u32 pup = 0, if_id = 0, num_pup = 0, rep = 0; + u32 val = 0; + u32 reg_addr = 0xa8; + u32 copy_start_pattern, copy_end_pattern; + enum hws_result *flow_result = ddr3_tip_get_result_ptr(training_stage); + u8 res[4]; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + CHECK_STATUS(ddr3_tip_special_rx(dev_num)); + + /* save start/end pattern */ + copy_start_pattern = start_pattern; + copy_end_pattern = end_pattern; + + /* set vref as centralization pattern */ + start_pattern = PATTERN_VREF; + end_pattern = PATTERN_VREF; + + /* init params */ + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (pup = 0; + pup < tm->num_of_bus_per_interface; pup++) { + current_vref[pup][if_id] = 0; + last_vref[pup][if_id] = 0; + lim_vref[pup][if_id] = 0; + current_valid_window[pup][if_id] = 0; + last_valid_window[pup][if_id] = 0; + if (vref_window_size[if_id][pup] > + vref_window_size_th) { + pup_st[pup][if_id] = VREF_CONVERGE; + DEBUG_TRAINING_HW_ALG( + DEBUG_LEVEL_INFO, + ("VREF config, IF[ %d ]pup[ %d ] - Vref tune not requered (%d)\n", + if_id, pup, __LINE__)); + } else { + pup_st[pup][if_id] = VREF_STEP_1; + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, &val)); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + pup, DDR_PHY_DATA, reg_addr, + (val & (~0xf)) | vref_map[0])); + DEBUG_TRAINING_HW_ALG( + DEBUG_LEVEL_INFO, + ("VREF config, IF[ %d ]pup[ %d ] - Vref = %X (%d)\n", + if_id, pup, + (val & (~0xf)) | vref_map[0], + __LINE__)); + } + } + interface_state[if_id] = 0; + } + + /* TODO: Set number of active interfaces */ + num_pup = tm->num_of_bus_per_interface * 
MAX_INTERFACE_NUM; + + while ((algo_run_flag <= num_pup) & (while_count < 10)) { + while_count++; + for (rep = 1; rep < 4; rep++) { + ddr3_tip_centr_skip_min_win_check = 1; + ddr3_tip_centralization_rx(dev_num); + ddr3_tip_centr_skip_min_win_check = 0; + + /* Read Valid window results only for non converge pups */ + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (interface_state[if_id] != 4) { + get_valid_win_rx(dev_num, if_id, res); + for (pup = 0; + pup < tm->num_of_bus_per_interface; + pup++) { + VALIDATE_ACTIVE + (tm->bus_act_mask, pup); + if (pup_st[pup] + [if_id] == + VREF_CONVERGE) + continue; + + current_valid_window[pup] + [if_id] = + (current_valid_window[pup] + [if_id] * (rep - 1) + + 1000 * res[pup]) / rep; + } + } + } + } + + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + DEBUG_TRAINING_HW_ALG( + DEBUG_LEVEL_TRACE, + ("current_valid_window: IF[ %d ] - ", if_id)); + + for (pup = 0; + pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + DEBUG_TRAINING_HW_ALG(DEBUG_LEVEL_TRACE, + ("%d ", + current_valid_window + [pup][if_id])); + } + DEBUG_TRAINING_HW_ALG(DEBUG_LEVEL_TRACE, ("\n")); + } + + /* Compare results and respond as function of state */ + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (pup = 0; + pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + DEBUG_TRAINING_HW_ALG(DEBUG_LEVEL_TRACE, + ("I/F[ %d ], pup[ %d ] STATE #%d (%d)\n", + if_id, pup, + pup_st[pup] + [if_id], __LINE__)); + + if (pup_st[pup][if_id] == VREF_CONVERGE) + continue; + + DEBUG_TRAINING_HW_ALG(DEBUG_LEVEL_TRACE, + ("I/F[ %d ], pup[ %d ] CHECK progress - Current %d Last %d, limit VREF %d (%d)\n", + if_id, pup, + current_valid_window[pup] + [if_id], + last_valid_window[pup] + [if_id], lim_vref[pup] + [if_id], __LINE__)); + + /* + * The -1 is for solution resolution +/- 1 tap + * of ADLL + */ + if (current_valid_window[pup][if_id] + 200 >= + (last_valid_window[pup][if_id])) { + if (pup_st[pup][if_id] == VREF_STEP_1) { + /* + * We stay in the same state and + * step just update the window + * size (take the max) and Vref + */ + if (current_vref[pup] + [if_id] == VREF_MAX_INDEX) { + /* + * If we step to the end + * and didn't converge + * to some particular + * better Vref value + * define the pup as + * converge and step + * back to nominal + * Vref. + */ + pup_st[pup] + [if_id] = + VREF_CONVERGE; + algo_run_flag++; + interface_state + [if_id]++; + DEBUG_TRAINING_HW_ALG + (DEBUG_LEVEL_TRACE, + ("I/F[ %d ], pup[ %d ] VREF_CONVERGE - Vref = %X (%d)\n", + if_id, pup, + current_vref[pup] + [if_id], + __LINE__)); + } else { + /* continue to update the Vref index */ + current_vref[pup] + [if_id] = + ((current_vref[pup] + [if_id] + + initial_step) > + VREF_MAX_INDEX) ? 
+ VREF_MAX_INDEX + : (current_vref[pup] + [if_id] + + initial_step); + if (current_vref[pup] + [if_id] == + VREF_MAX_INDEX) { + pup_st[pup] + [if_id] + = + VREF_STEP_2; + } + lim_vref[pup] + [if_id] = + last_vref[pup] + [if_id] = + current_vref[pup] + [if_id]; + } + + last_valid_window[pup] + [if_id] = + GET_MAX(current_valid_window + [pup][if_id], + last_valid_window + [pup] + [if_id]); + + /* update the Vref for next stage */ + currrent_vref = + current_vref[pup] + [if_id]; + CHECK_STATUS + (ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + &val)); + CHECK_STATUS + (ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + (val & (~0xf)) | + vref_map[currrent_vref])); + DEBUG_TRAINING_HW_ALG + (DEBUG_LEVEL_TRACE, + ("VREF config, IF[ %d ]pup[ %d ] - Vref = %X (%d)\n", + if_id, pup, + (val & (~0xf)) | + vref_map[currrent_vref], + __LINE__)); + } else if (pup_st[pup][if_id] + == VREF_STEP_2) { + /* + * We keep on search back with + * the same step size. + */ + last_valid_window[pup] + [if_id] = + GET_MAX(current_valid_window + [pup][if_id], + last_valid_window + [pup] + [if_id]); + last_vref[pup][if_id] = + current_vref[pup] + [if_id]; + + /* we finish all search space */ + if ((current_vref[pup] + [if_id] - second_step) == lim_vref[pup][if_id]) { + /* + * If we step to the end + * and didn't converge + * to some particular + * better Vref value + * define the pup as + * converge and step + * back to nominal + * Vref. + */ + pup_st[pup] + [if_id] = + VREF_CONVERGE; + algo_run_flag++; + + interface_state + [if_id]++; + + current_vref[pup] + [if_id] = + (current_vref[pup] + [if_id] - + second_step); + + DEBUG_TRAINING_HW_ALG + (DEBUG_LEVEL_TRACE, + ("I/F[ %d ], pup[ %d ] VREF_CONVERGE - Vref = %X (%d)\n", + if_id, pup, + current_vref[pup] + [if_id], + __LINE__)); + } else + /* we finish all search space */ + if (current_vref[pup] + [if_id] == + lim_vref[pup] + [if_id]) { + /* + * If we step to the end + * and didn't converge + * to some particular + * better Vref value + * define the pup as + * converge and step + * back to nominal + * Vref. 
+ */ + pup_st[pup] + [if_id] = + VREF_CONVERGE; + + algo_run_flag++; + interface_state + [if_id]++; + DEBUG_TRAINING_HW_ALG + (DEBUG_LEVEL_TRACE, + ("I/F[ %d ], pup[ %d ] VREF_CONVERGE - Vref = %X (%d)\n", + if_id, pup, + current_vref[pup] + [if_id], + __LINE__)); + } else { + current_vref[pup] + [if_id] = + current_vref[pup] + [if_id] - + second_step; + } + + /* Update the Vref for next stage */ + currrent_vref = + current_vref[pup] + [if_id]; + CHECK_STATUS + (ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + &val)); + CHECK_STATUS + (ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + (val & (~0xf)) | + vref_map[currrent_vref])); + DEBUG_TRAINING_HW_ALG + (DEBUG_LEVEL_TRACE, + ("VREF config, IF[ %d ]pup[ %d ] - Vref = %X (%d)\n", + if_id, pup, + (val & (~0xf)) | + vref_map[currrent_vref], + __LINE__)); + } + } else { + /* we change state and change step */ + if (pup_st[pup][if_id] == VREF_STEP_1) { + pup_st[pup][if_id] = + VREF_STEP_2; + lim_vref[pup][if_id] = + current_vref[pup] + [if_id] - initial_step; + last_valid_window[pup] + [if_id] = + current_valid_window[pup] + [if_id]; + last_vref[pup][if_id] = + current_vref[pup] + [if_id]; + current_vref[pup][if_id] = + last_vref[pup][if_id] - + second_step; + + /* Update the Vref for next stage */ + CHECK_STATUS + (ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + &val)); + CHECK_STATUS + (ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + (val & (~0xf)) | + vref_map[current_vref[pup] + [if_id]])); + DEBUG_TRAINING_HW_ALG + (DEBUG_LEVEL_TRACE, + ("VREF config, IF[ %d ]pup[ %d ] - Vref = %X (%d)\n", + if_id, pup, + (val & (~0xf)) | + vref_map[current_vref[pup] + [if_id]], + __LINE__)); + + } else if (pup_st[pup][if_id] == VREF_STEP_2) { + /* + * The last search was the max + * point set value and exit + */ + CHECK_STATUS + (ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + &val)); + CHECK_STATUS + (ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + (val & (~0xf)) | + vref_map[last_vref[pup] + [if_id]])); + DEBUG_TRAINING_HW_ALG + (DEBUG_LEVEL_TRACE, + ("VREF config, IF[ %d ]pup[ %d ] - Vref = %X (%d)\n", + if_id, pup, + (val & (~0xf)) | + vref_map[last_vref[pup] + [if_id]], + __LINE__)); + pup_st[pup][if_id] = + VREF_CONVERGE; + algo_run_flag++; + interface_state[if_id]++; + DEBUG_TRAINING_HW_ALG + (DEBUG_LEVEL_TRACE, + ("I/F[ %d ], pup[ %d ] VREF_CONVERGE - Vref = %X (%d)\n", + if_id, pup, + current_vref[pup] + [if_id], __LINE__)); + } + } + } + } + } + + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (pup = 0; + pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, &val)); + DEBUG_TRAINING_HW_ALG( + DEBUG_LEVEL_INFO, + ("FINAL values: I/F[ %d ], pup[ %d ] - Vref = %X (%d)\n", + if_id, pup, val, __LINE__)); + } + } + + flow_result[if_id] = TEST_SUCCESS; + + /* restore start/end pattern */ + start_pattern = copy_start_pattern; + end_pattern = copy_end_pattern; + + return 0; +} + +/* + * CK/CA Delay + */ +int ddr3_tip_cmd_addr_init_delay(u32 dev_num, u32 adll_tap) +{ + u32 if_id = 0; + u32 ck_num_adll_tap = 0, 
ca_num_adll_tap = 0, data = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* + * ck_delay_table is delaying the of the clock signal only. + * (to overcome timing issues between_c_k & command/address signals) + */ + /* + * ca_delay is delaying the of the entire command & Address signals + * (include Clock signal to overcome DGL error on the Clock versus + * the DQS). + */ + + /* Calc ADLL Tap */ + if ((ck_delay == -1) || (ck_delay_16 == -1)) { + DEBUG_TRAINING_HW_ALG( + DEBUG_LEVEL_ERROR, + ("ERROR: One of ck_delay values not initialized!!!\n")); + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* Calc delay ps in ADLL tap */ + if (tm->interface_params[if_id].bus_width == + BUS_WIDTH_16) + ck_num_adll_tap = ck_delay_16 / adll_tap; + else + ck_num_adll_tap = ck_delay / adll_tap; + + ca_num_adll_tap = ca_delay / adll_tap; + data = (ck_num_adll_tap & 0x3f) + + ((ca_num_adll_tap & 0x3f) << 10); + + /* + * Set the ADLL number to the CK ADLL for Interfaces for + * all Pup + */ + DEBUG_TRAINING_HW_ALG( + DEBUG_LEVEL_TRACE, + ("ck_num_adll_tap %d ca_num_adll_tap %d adll_tap %d\n", + ck_num_adll_tap, ca_num_adll_tap, adll_tap)); + + CHECK_STATUS(ddr3_tip_bus_write(dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, DDR_PHY_CONTROL, + 0x0, data)); + } + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_hw_algo.h b/drivers/ddr/marvell/a38x/old/ddr3_training_hw_algo.h new file mode 100644 index 0000000000..6e1bab260d --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_hw_algo.h @@ -0,0 +1,14 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_HW_ALGO_H_ +#define _DDR3_TRAINING_HW_ALGO_H_ + +int ddr3_tip_vref(u32 dev_num); +int ddr3_tip_write_additional_odt_setting(u32 dev_num, u32 if_id); +int ddr3_tip_cmd_addr_init_delay(u32 dev_num, u32 adll_tap); + +#endif /* _DDR3_TRAINING_HW_ALGO_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip.h new file mode 100644 index 0000000000..ed92873697 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip.h @@ -0,0 +1,178 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_H_ +#define _DDR3_TRAINING_IP_H_ + +#include "ddr3_training_ip_def.h" +#include "ddr_topology_def.h" +#include "ddr_training_ip_db.h" + +#define DDR3_TIP_VERSION_STRING "DDR3 Training Sequence - Ver TIP-1.29." + +#define MAX_CS_NUM 4 +#define MAX_TOTAL_BUS_NUM (MAX_INTERFACE_NUM * MAX_BUS_NUM) +#define MAX_DQ_NUM 40 + +#define GET_MIN(arg1, arg2) ((arg1) < (arg2)) ? (arg1) : (arg2) +#define GET_MAX(arg1, arg2) ((arg1) < (arg2)) ? 
(arg2) : (arg1) + +#define INIT_CONTROLLER_MASK_BIT 0x00000001 +#define STATIC_LEVELING_MASK_BIT 0x00000002 +#define SET_LOW_FREQ_MASK_BIT 0x00000004 +#define LOAD_PATTERN_MASK_BIT 0x00000008 +#define SET_MEDIUM_FREQ_MASK_BIT 0x00000010 +#define WRITE_LEVELING_MASK_BIT 0x00000020 +#define LOAD_PATTERN_2_MASK_BIT 0x00000040 +#define READ_LEVELING_MASK_BIT 0x00000080 +#define SW_READ_LEVELING_MASK_BIT 0x00000100 +#define WRITE_LEVELING_SUPP_MASK_BIT 0x00000200 +#define PBS_RX_MASK_BIT 0x00000400 +#define PBS_TX_MASK_BIT 0x00000800 +#define SET_TARGET_FREQ_MASK_BIT 0x00001000 +#define ADJUST_DQS_MASK_BIT 0x00002000 +#define WRITE_LEVELING_TF_MASK_BIT 0x00004000 +#define LOAD_PATTERN_HIGH_MASK_BIT 0x00008000 +#define READ_LEVELING_TF_MASK_BIT 0x00010000 +#define WRITE_LEVELING_SUPP_TF_MASK_BIT 0x00020000 +#define DM_PBS_TX_MASK_BIT 0x00040000 +#define CENTRALIZATION_RX_MASK_BIT 0x00100000 +#define CENTRALIZATION_TX_MASK_BIT 0x00200000 +#define TX_EMPHASIS_MASK_BIT 0x00400000 +#define PER_BIT_READ_LEVELING_TF_MASK_BIT 0x00800000 +#define VREF_CALIBRATION_MASK_BIT 0x01000000 + +enum hws_result { + TEST_FAILED = 0, + TEST_SUCCESS = 1, + NO_TEST_DONE = 2 +}; + +enum hws_training_result { + RESULT_PER_BIT, + RESULT_PER_BYTE +}; + +enum auto_tune_stage { + INIT_CONTROLLER, + STATIC_LEVELING, + SET_LOW_FREQ, + LOAD_PATTERN, + SET_MEDIUM_FREQ, + WRITE_LEVELING, + LOAD_PATTERN_2, + READ_LEVELING, + WRITE_LEVELING_SUPP, + PBS_RX, + PBS_TX, + SET_TARGET_FREQ, + ADJUST_DQS, + WRITE_LEVELING_TF, + READ_LEVELING_TF, + WRITE_LEVELING_SUPP_TF, + DM_PBS_TX, + VREF_CALIBRATION, + CENTRALIZATION_RX, + CENTRALIZATION_TX, + TX_EMPHASIS, + LOAD_PATTERN_HIGH, + PER_BIT_READ_LEVELING_TF, + MAX_STAGE_LIMIT +}; + +enum hws_access_type { + ACCESS_TYPE_UNICAST = 0, + ACCESS_TYPE_MULTICAST = 1 +}; + +enum hws_algo_type { + ALGO_TYPE_DYNAMIC, + ALGO_TYPE_STATIC +}; + +struct init_cntr_param { + int is_ctrl64_bit; + int do_mrs_phy; + int init_phy; + int msys_init; +}; + +struct pattern_info { + u8 num_of_phases_tx; + u8 tx_burst_size; + u8 delay_between_bursts; + u8 num_of_phases_rx; + u32 start_addr; + u8 pattern_len; +}; + +/* CL value for each frequency */ +struct cl_val_per_freq { + u8 cl_val[DDR_FREQ_LIMIT]; +}; + +struct cs_element { + u8 cs_num; + u8 num_of_cs; +}; + +struct mode_info { + /* 32 bits representing MRS bits */ + u32 reg_mr0[MAX_INTERFACE_NUM]; + u32 reg_mr1[MAX_INTERFACE_NUM]; + u32 reg_mr2[MAX_INTERFACE_NUM]; + u32 reg_m_r3[MAX_INTERFACE_NUM]; + /* + * Each element in array represent read_data_sample register delay for + * a specific interface. + * Each register, 4 bits[0+CS*8 to 4+CS*8] represent Number of DDR + * cycles from read command until data is ready to be fetched from + * the PHY, when accessing CS. + */ + u32 read_data_sample[MAX_INTERFACE_NUM]; + /* + * Each element in array represent read_data_sample register delay for + * a specific interface. + * Each register, 4 bits[0+CS*8 to 4+CS*8] represent the total delay + * from read command until opening the read mask, when accessing CS. + * This field defines the delay in DDR cycles granularity. 
+ */ + u32 read_data_ready[MAX_INTERFACE_NUM]; +}; + +struct hws_tip_freq_config_info { + u8 is_supported; + u8 bw_per_freq; + u8 rate_per_freq; +}; + +struct hws_cs_config_info { + u32 cs_reg_value; + u32 cs_cbe_value; +}; + +struct dfx_access { + u8 pipe; + u8 client; +}; + +struct hws_xsb_info { + struct dfx_access *dfx_table; +}; + +int ddr3_tip_register_dq_table(u32 dev_num, u32 *table); +int hws_ddr3_tip_select_ddr_controller(u32 dev_num, int enable); +int hws_ddr3_tip_init_controller(u32 dev_num, + struct init_cntr_param *init_cntr_prm); +int hws_ddr3_tip_load_topology_map(u32 dev_num, + struct hws_topology_map *topology); +int hws_ddr3_tip_run_alg(u32 dev_num, enum hws_algo_type algo_type); +int hws_ddr3_tip_mode_read(u32 dev_num, struct mode_info *mode_info); +int ddr3_tip_is_pup_lock(u32 *pup_buf, enum hws_training_result read_mode); +u8 ddr3_tip_get_buf_min(u8 *buf_ptr); +u8 ddr3_tip_get_buf_max(u8 *buf_ptr); + +#endif /* _DDR3_TRAINING_IP_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_bist.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_bist.h new file mode 100644 index 0000000000..5c9bfe98a0 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_bist.h @@ -0,0 +1,54 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_BIST_H_ +#define _DDR3_TRAINING_IP_BIST_H_ + +#include "ddr3_training_ip.h" + +enum hws_bist_operation { + BIST_STOP = 0, + BIST_START = 1 +}; + +enum hws_stress_jump { + STRESS_NONE = 0, + STRESS_ENABLE = 1 +}; + +enum hws_pattern_duration { + DURATION_SINGLE = 0, + DURATION_STOP_AT_FAIL = 1, + DURATION_ADDRESS = 2, + DURATION_CONT = 4 +}; + +struct bist_result { + u32 bist_error_cnt; + u32 bist_fail_low; + u32 bist_fail_high; + u32 bist_last_fail_addr; +}; + +int ddr3_tip_bist_read_result(u32 dev_num, u32 if_id, + struct bist_result *pst_bist_result); +int ddr3_tip_bist_activate(u32 dev_num, enum hws_pattern pattern, + enum hws_access_type access_type, + u32 if_num, enum hws_dir direction, + enum hws_stress_jump addr_stress_jump, + enum hws_pattern_duration duration, + enum hws_bist_operation oper_type, + u32 offset, u32 cs_num, u32 pattern_addr_length); +int hws_ddr3_run_bist(u32 dev_num, enum hws_pattern pattern, u32 *result, + u32 cs_num); +int ddr3_tip_run_sweep_test(int dev_num, u32 repeat_num, u32 direction, + u32 mode); +int ddr3_tip_print_regs(u32 dev_num); +int ddr3_tip_reg_dump(u32 dev_num); +int run_xsb_test(u32 dev_num, u32 mem_addr, u32 write_type, u32 read_type, + u32 burst_length); + +#endif /* _DDR3_TRAINING_IP_BIST_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_centralization.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_centralization.h new file mode 100644 index 0000000000..7c57603947 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_centralization.h @@ -0,0 +1,15 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_CENTRALIZATION_H +#define _DDR3_TRAINING_IP_CENTRALIZATION_H + +int ddr3_tip_centralization_tx(u32 dev_num); +int ddr3_tip_centralization_rx(u32 dev_num); +int ddr3_tip_print_centralization_result(u32 dev_num); +int ddr3_tip_special_rx(u32 dev_num); + +#endif /* _DDR3_TRAINING_IP_CENTRALIZATION_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_db.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_db.h new file mode 100644 index 0000000000..c0afa7742e --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_db.h @@ -0,0 +1,34 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_DB_H_ +#define _DDR3_TRAINING_IP_DB_H_ + +enum hws_pattern { + PATTERN_PBS1, + PATTERN_PBS2, + PATTERN_RL, + PATTERN_STATIC_PBS, + PATTERN_KILLER_DQ0, + PATTERN_KILLER_DQ1, + PATTERN_KILLER_DQ2, + PATTERN_KILLER_DQ3, + PATTERN_KILLER_DQ4, + PATTERN_KILLER_DQ5, + PATTERN_KILLER_DQ6, + PATTERN_KILLER_DQ7, + PATTERN_PBS3, + PATTERN_RL2, + PATTERN_TEST, + PATTERN_FULL_SSO0, + PATTERN_FULL_SSO1, + PATTERN_FULL_SSO2, + PATTERN_FULL_SSO3, + PATTERN_VREF, + PATTERN_LIMIT +}; + +#endif /* _DDR3_TRAINING_IP_DB_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_def.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_def.h new file mode 100644 index 0000000000..51a66d8491 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_def.h @@ -0,0 +1,173 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_DEF_H +#define _DDR3_TRAINING_IP_DEF_H + +#include "silicon_if.h" + +#define PATTERN_55 0x55555555 +#define PATTERN_AA 0xaaaaaaaa +#define PATTERN_80 0x80808080 +#define PATTERN_20 0x20202020 +#define PATTERN_01 0x01010101 +#define PATTERN_FF 0xffffffff +#define PATTERN_00 0x00000000 + +/* 16bit bus width patterns */ +#define PATTERN_55AA 0x5555aaaa +#define PATTERN_00FF 0x0000ffff +#define PATTERN_0080 0x00008080 + +#define INVALID_VALUE 0xffffffff +#define MAX_NUM_OF_DUNITS 32 +/* + * length *2 = length in words of pattern, first low address, + * second high address + */ +#define TEST_PATTERN_LENGTH 4 +#define KILLER_PATTERN_DQ_NUMBER 8 +#define SSO_DQ_NUMBER 4 +#define PATTERN_MAXIMUM_LENGTH 64 +#define ADLL_TX_LENGTH 64 +#define ADLL_RX_LENGTH 32 + +#define PARAM_NOT_CARE 0 + +#define READ_LEVELING_PHY_OFFSET 2 +#define WRITE_LEVELING_PHY_OFFSET 0 + +#define MASK_ALL_BITS 0xffffffff + +#define CS_BIT_MASK 0xf + +/* DFX access */ +#define BROADCAST_ID 28 +#define MULTICAST_ID 29 + +#define XSB_BASE_ADDR 0x00004000 +#define XSB_CTRL_0_REG 0x00000000 +#define XSB_CTRL_1_REG 0x00000004 +#define XSB_CMD_REG 0x00000008 +#define XSB_ADDRESS_REG 0x0000000c +#define XSB_DATA_REG 0x00000010 +#define PIPE_ENABLE_ADDR 0x000f8000 +#define ENABLE_DDR_TUNING_ADDR 0x000f829c + +#define CLIENT_BASE_ADDR 0x00002000 +#define CLIENT_CTRL_REG 0x00000000 + +#define TARGET_INT 0x1801 +#define TARGET_EXT 0x180e +#define BYTE_EN 0 +#define CMD_READ 0 +#define CMD_WRITE 1 + +#define INTERNAL_ACCESS_PORT 1 +#define EXECUTING 1 +#define ACCESS_EXT 1 +#define CS2_EXIST_BIT 2 +#define TRAINING_ID 0xf +#define EXT_TRAINING_ID 1 +#define EXT_MODE 0x4 + +#define GET_RESULT_STATE(res) (res) +#define SET_RESULT_STATE(res, state) (res = state) + +#define _1K 0x00000400 +#define _4K 0x00001000 +#define _8K 0x00002000 +#define _16K 0x00004000 
+#define _32K 0x00008000 +#define _64K 0x00010000 +#define _128K 0x00020000 +#define _256K 0x00040000 +#define _512K 0x00080000 + +#define _1M 0x00100000 +#define _2M 0x00200000 +#define _4M 0x00400000 +#define _8M 0x00800000 +#define _16M 0x01000000 +#define _32M 0x02000000 +#define _64M 0x04000000 +#define _128M 0x08000000 +#define _256M 0x10000000 +#define _512M 0x20000000 + +#define _1G 0x40000000 +#define _2G 0x80000000 + +#define ADDR_SIZE_512MB 0x04000000 +#define ADDR_SIZE_1GB 0x08000000 +#define ADDR_SIZE_2GB 0x10000000 +#define ADDR_SIZE_4GB 0x20000000 +#define ADDR_SIZE_8GB 0x40000000 + +enum hws_edge_compare { + EDGE_PF, + EDGE_FP, + EDGE_FPF, + EDGE_PFP +}; + +enum hws_control_element { + HWS_CONTROL_ELEMENT_ADLL, /* per bit 1 edge */ + HWS_CONTROL_ELEMENT_DQ_SKEW, + HWS_CONTROL_ELEMENT_DQS_SKEW +}; + +enum hws_search_dir { + HWS_LOW2HIGH, + HWS_HIGH2LOW, + HWS_SEARCH_DIR_LIMIT +}; + +enum hws_page_size { + PAGE_SIZE_1K, + PAGE_SIZE_2K +}; + +enum hws_operation { + OPERATION_READ = 0, + OPERATION_WRITE = 1 +}; + +enum hws_training_ip_stat { + HWS_TRAINING_IP_STATUS_FAIL, + HWS_TRAINING_IP_STATUS_SUCCESS, + HWS_TRAINING_IP_STATUS_TIMEOUT +}; + +enum hws_ddr_cs { + CS_SINGLE, + CS_NON_SINGLE +}; + +enum hws_ddr_phy { + DDR_PHY_DATA = 0, + DDR_PHY_CONTROL = 1 +}; + +enum hws_dir { + OPER_WRITE, + OPER_READ, + OPER_WRITE_AND_READ +}; + +enum hws_wl_supp { + PHASE_SHIFT, + CLOCK_SHIFT, + ALIGN_SHIFT +}; + +struct reg_data { + u32 reg_addr; + u32 reg_data; + u32 reg_mask; +}; + +#endif /* _DDR3_TRAINING_IP_DEF_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.c b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.c new file mode 100644 index 0000000000..7ac23aedc6 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.c @@ -0,0 +1,1353 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include + +#include "ddr3_init.h" + +#define PATTERN_1 0x55555555 +#define PATTERN_2 0xaaaaaaaa + +#define VALIDATE_TRAINING_LIMIT(e1, e2) \ + ((((e2) - (e1) + 1) > 33) && ((e1) < 67)) + +u32 phy_reg_bk[MAX_INTERFACE_NUM][MAX_BUS_NUM][BUS_WIDTH_IN_BITS]; + +u32 training_res[MAX_INTERFACE_NUM * MAX_BUS_NUM * BUS_WIDTH_IN_BITS * + HWS_SEARCH_DIR_LIMIT]; + +u16 mask_results_dq_reg_map[] = { + RESULT_CONTROL_PUP_0_BIT_0_REG, RESULT_CONTROL_PUP_0_BIT_1_REG, + RESULT_CONTROL_PUP_0_BIT_2_REG, RESULT_CONTROL_PUP_0_BIT_3_REG, + RESULT_CONTROL_PUP_0_BIT_4_REG, RESULT_CONTROL_PUP_0_BIT_5_REG, + RESULT_CONTROL_PUP_0_BIT_6_REG, RESULT_CONTROL_PUP_0_BIT_7_REG, + RESULT_CONTROL_PUP_1_BIT_0_REG, RESULT_CONTROL_PUP_1_BIT_1_REG, + RESULT_CONTROL_PUP_1_BIT_2_REG, RESULT_CONTROL_PUP_1_BIT_3_REG, + RESULT_CONTROL_PUP_1_BIT_4_REG, RESULT_CONTROL_PUP_1_BIT_5_REG, + RESULT_CONTROL_PUP_1_BIT_6_REG, RESULT_CONTROL_PUP_1_BIT_7_REG, + RESULT_CONTROL_PUP_2_BIT_0_REG, RESULT_CONTROL_PUP_2_BIT_1_REG, + RESULT_CONTROL_PUP_2_BIT_2_REG, RESULT_CONTROL_PUP_2_BIT_3_REG, + RESULT_CONTROL_PUP_2_BIT_4_REG, RESULT_CONTROL_PUP_2_BIT_5_REG, + RESULT_CONTROL_PUP_2_BIT_6_REG, RESULT_CONTROL_PUP_2_BIT_7_REG, + RESULT_CONTROL_PUP_3_BIT_0_REG, RESULT_CONTROL_PUP_3_BIT_1_REG, + RESULT_CONTROL_PUP_3_BIT_2_REG, RESULT_CONTROL_PUP_3_BIT_3_REG, + RESULT_CONTROL_PUP_3_BIT_4_REG, RESULT_CONTROL_PUP_3_BIT_5_REG, + RESULT_CONTROL_PUP_3_BIT_6_REG, RESULT_CONTROL_PUP_3_BIT_7_REG, + RESULT_CONTROL_PUP_4_BIT_0_REG, RESULT_CONTROL_PUP_4_BIT_1_REG, + RESULT_CONTROL_PUP_4_BIT_2_REG, RESULT_CONTROL_PUP_4_BIT_3_REG, + RESULT_CONTROL_PUP_4_BIT_4_REG, RESULT_CONTROL_PUP_4_BIT_5_REG, + RESULT_CONTROL_PUP_4_BIT_6_REG, RESULT_CONTROL_PUP_4_BIT_7_REG, +}; + +u16 mask_results_pup_reg_map[] = { + RESULT_CONTROL_BYTE_PUP_0_REG, RESULT_CONTROL_BYTE_PUP_1_REG, + RESULT_CONTROL_BYTE_PUP_2_REG, RESULT_CONTROL_BYTE_PUP_3_REG, + RESULT_CONTROL_BYTE_PUP_4_REG +}; + +u16 mask_results_dq_reg_map_pup3_ecc[] = { + RESULT_CONTROL_PUP_0_BIT_0_REG, RESULT_CONTROL_PUP_0_BIT_1_REG, + RESULT_CONTROL_PUP_0_BIT_2_REG, RESULT_CONTROL_PUP_0_BIT_3_REG, + RESULT_CONTROL_PUP_0_BIT_4_REG, RESULT_CONTROL_PUP_0_BIT_5_REG, + RESULT_CONTROL_PUP_0_BIT_6_REG, RESULT_CONTROL_PUP_0_BIT_7_REG, + RESULT_CONTROL_PUP_1_BIT_0_REG, RESULT_CONTROL_PUP_1_BIT_1_REG, + RESULT_CONTROL_PUP_1_BIT_2_REG, RESULT_CONTROL_PUP_1_BIT_3_REG, + RESULT_CONTROL_PUP_1_BIT_4_REG, RESULT_CONTROL_PUP_1_BIT_5_REG, + RESULT_CONTROL_PUP_1_BIT_6_REG, RESULT_CONTROL_PUP_1_BIT_7_REG, + RESULT_CONTROL_PUP_2_BIT_0_REG, RESULT_CONTROL_PUP_2_BIT_1_REG, + RESULT_CONTROL_PUP_2_BIT_2_REG, RESULT_CONTROL_PUP_2_BIT_3_REG, + RESULT_CONTROL_PUP_2_BIT_4_REG, RESULT_CONTROL_PUP_2_BIT_5_REG, + RESULT_CONTROL_PUP_2_BIT_6_REG, RESULT_CONTROL_PUP_2_BIT_7_REG, + RESULT_CONTROL_PUP_4_BIT_0_REG, RESULT_CONTROL_PUP_4_BIT_1_REG, + RESULT_CONTROL_PUP_4_BIT_2_REG, RESULT_CONTROL_PUP_4_BIT_3_REG, + RESULT_CONTROL_PUP_4_BIT_4_REG, RESULT_CONTROL_PUP_4_BIT_5_REG, + RESULT_CONTROL_PUP_4_BIT_6_REG, RESULT_CONTROL_PUP_4_BIT_7_REG, + RESULT_CONTROL_PUP_4_BIT_0_REG, RESULT_CONTROL_PUP_4_BIT_1_REG, + RESULT_CONTROL_PUP_4_BIT_2_REG, RESULT_CONTROL_PUP_4_BIT_3_REG, + RESULT_CONTROL_PUP_4_BIT_4_REG, RESULT_CONTROL_PUP_4_BIT_5_REG, + RESULT_CONTROL_PUP_4_BIT_6_REG, RESULT_CONTROL_PUP_4_BIT_7_REG, +}; + +u16 mask_results_pup_reg_map_pup3_ecc[] = { + RESULT_CONTROL_BYTE_PUP_0_REG, RESULT_CONTROL_BYTE_PUP_1_REG, + RESULT_CONTROL_BYTE_PUP_2_REG, 
RESULT_CONTROL_BYTE_PUP_4_REG, + RESULT_CONTROL_BYTE_PUP_4_REG +}; + +struct pattern_info pattern_table_16[] = { + /* + * num tx phases, tx burst, delay between, rx pattern, + * start_address, pattern_len + */ + {1, 1, 2, 1, 0x0080, 2}, /* PATTERN_PBS1 */ + {1, 1, 2, 1, 0x00c0, 2}, /* PATTERN_PBS2 */ + {1, 1, 2, 1, 0x0100, 2}, /* PATTERN_RL */ + {0xf, 0x7, 2, 0x7, 0x0140, 16}, /* PATTERN_STATIC_PBS */ + {0xf, 0x7, 2, 0x7, 0x0190, 16}, /* PATTERN_KILLER_DQ0 */ + {0xf, 0x7, 2, 0x7, 0x01d0, 16}, /* PATTERN_KILLER_DQ1 */ + {0xf, 0x7, 2, 0x7, 0x0210, 16}, /* PATTERN_KILLER_DQ2 */ + {0xf, 0x7, 2, 0x7, 0x0250, 16}, /* PATTERN_KILLER_DQ3 */ + {0xf, 0x7, 2, 0x7, 0x0290, 16}, /* PATTERN_KILLER_DQ4 */ + {0xf, 0x7, 2, 0x7, 0x02d0, 16}, /* PATTERN_KILLER_DQ5 */ + {0xf, 0x7, 2, 0x7, 0x0310, 16}, /* PATTERN_KILLER_DQ6 */ + {0xf, 0x7, 2, 0x7, 0x0350, 16}, /* PATTERN_KILLER_DQ7 */ + {1, 1, 2, 1, 0x0380, 2}, /* PATTERN_PBS3 */ + {1, 1, 2, 1, 0x0000, 2}, /* PATTERN_RL2 */ + {1, 1, 2, 1, 0x0040, 2}, /* PATTERN_TEST */ + {0xf, 0x7, 2, 0x7, 0x03c0, 16}, /* PATTERN_FULL_SSO_1T */ + {0xf, 0x7, 2, 0x7, 0x0400, 16}, /* PATTERN_FULL_SSO_2T */ + {0xf, 0x7, 2, 0x7, 0x0440, 16}, /* PATTERN_FULL_SSO_3T */ + {0xf, 0x7, 2, 0x7, 0x0480, 16}, /* PATTERN_FULL_SSO_4T */ + {0xf, 0x7, 2, 0x7, 0x04c0, 16} /* PATTERN_VREF */ + /*Note: actual start_address is <<3 of defined addess */ +}; + +struct pattern_info pattern_table_32[] = { + /* + * num tx phases, tx burst, delay between, rx pattern, + * start_address, pattern_len + */ + {3, 3, 2, 3, 0x0080, 4}, /* PATTERN_PBS1 */ + {3, 3, 2, 3, 0x00c0, 4}, /* PATTERN_PBS2 */ + {3, 3, 2, 3, 0x0100, 4}, /* PATTERN_RL */ + {0x1f, 0xf, 2, 0xf, 0x0140, 32}, /* PATTERN_STATIC_PBS */ + {0x1f, 0xf, 2, 0xf, 0x0190, 32}, /* PATTERN_KILLER_DQ0 */ + {0x1f, 0xf, 2, 0xf, 0x01d0, 32}, /* PATTERN_KILLER_DQ1 */ + {0x1f, 0xf, 2, 0xf, 0x0210, 32}, /* PATTERN_KILLER_DQ2 */ + {0x1f, 0xf, 2, 0xf, 0x0250, 32}, /* PATTERN_KILLER_DQ3 */ + {0x1f, 0xf, 2, 0xf, 0x0290, 32}, /* PATTERN_KILLER_DQ4 */ + {0x1f, 0xf, 2, 0xf, 0x02d0, 32}, /* PATTERN_KILLER_DQ5 */ + {0x1f, 0xf, 2, 0xf, 0x0310, 32}, /* PATTERN_KILLER_DQ6 */ + {0x1f, 0xf, 2, 0xf, 0x0350, 32}, /* PATTERN_KILLER_DQ7 */ + {3, 3, 2, 3, 0x0380, 4}, /* PATTERN_PBS3 */ + {3, 3, 2, 3, 0x0000, 4}, /* PATTERN_RL2 */ + {3, 3, 2, 3, 0x0040, 4}, /* PATTERN_TEST */ + {0x1f, 0xf, 2, 0xf, 0x03c0, 32}, /* PATTERN_FULL_SSO_1T */ + {0x1f, 0xf, 2, 0xf, 0x0400, 32}, /* PATTERN_FULL_SSO_2T */ + {0x1f, 0xf, 2, 0xf, 0x0440, 32}, /* PATTERN_FULL_SSO_3T */ + {0x1f, 0xf, 2, 0xf, 0x0480, 32}, /* PATTERN_FULL_SSO_4T */ + {0x1f, 0xf, 2, 0xf, 0x04c0, 32} /* PATTERN_VREF */ + /*Note: actual start_address is <<3 of defined addess */ +}; + +u32 train_dev_num; +enum hws_ddr_cs traintrain_cs_type; +u32 train_pup_num; +enum hws_training_result train_result_type; +enum hws_control_element train_control_element; +enum hws_search_dir traine_search_dir; +enum hws_dir train_direction; +u32 train_if_select; +u32 train_init_value; +u32 train_number_iterations; +enum hws_pattern train_pattern; +enum hws_edge_compare train_edge_compare; +u32 train_cs_num; +u32 train_if_acess, train_if_id, train_pup_access; +u32 max_polling_for_done = 1000000; + +u32 *ddr3_tip_get_buf_ptr(u32 dev_num, enum hws_search_dir search, + enum hws_training_result result_type, + u32 interface_num) +{ + u32 *buf_ptr = NULL; + + buf_ptr = &training_res + [MAX_INTERFACE_NUM * MAX_BUS_NUM * BUS_WIDTH_IN_BITS * search + + interface_num * MAX_BUS_NUM * BUS_WIDTH_IN_BITS]; + + return buf_ptr; +} + +/* + * IP Training search + * Note: 
for one edge search only from fail to pass, else jitter can + * be be entered into solution. + */ +int ddr3_tip_ip_training(u32 dev_num, enum hws_access_type access_type, + u32 interface_num, + enum hws_access_type pup_access_type, + u32 pup_num, enum hws_training_result result_type, + enum hws_control_element control_element, + enum hws_search_dir search_dir, enum hws_dir direction, + u32 interface_mask, u32 init_value, u32 num_iter, + enum hws_pattern pattern, + enum hws_edge_compare edge_comp, + enum hws_ddr_cs cs_type, u32 cs_num, + enum hws_training_ip_stat *train_status) +{ + u32 mask_dq_num_of_regs, mask_pup_num_of_regs, index_cnt, poll_cnt, + reg_data, pup_id; + u32 tx_burst_size; + u32 delay_between_burst; + u32 rd_mode; + u32 read_data[MAX_INTERFACE_NUM]; + struct pattern_info *pattern_table = ddr3_tip_get_pattern_table(); + u16 *mask_results_pup_reg_map = ddr3_tip_get_mask_results_pup_reg_map(); + u16 *mask_results_dq_reg_map = ddr3_tip_get_mask_results_dq_reg(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (pup_num >= tm->num_of_bus_per_interface) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("pup_num %d not valid\n", pup_num)); + } + if (interface_num >= MAX_INTERFACE_NUM) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("if_id %d not valid\n", + interface_num)); + } + if (train_status == NULL) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("error param 4\n")); + return MV_BAD_PARAM; + } + + /* load pattern */ + if (cs_type == CS_SINGLE) { + /* All CSs to CS0 */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + CS_ENABLE_REG, 1 << 3, 1 << 3)); + /* All CSs to CS0 */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + ODPG_DATA_CONTROL_REG, + (0x3 | (effective_cs << 26)), 0xc000003)); + } else { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + CS_ENABLE_REG, 0, 1 << 3)); + /* CS select */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + ODPG_DATA_CONTROL_REG, 0x3 | cs_num << 26, + 0x3 | 3 << 26)); + } + + /* load pattern to ODPG */ + ddr3_tip_load_pattern_to_odpg(dev_num, access_type, interface_num, + pattern, + pattern_table[pattern].start_addr); + tx_burst_size = (direction == OPER_WRITE) ? + pattern_table[pattern].tx_burst_size : 0; + delay_between_burst = (direction == OPER_WRITE) ? 2 : 0; + rd_mode = (direction == OPER_WRITE) ? 1 : 0; + CHECK_STATUS(ddr3_tip_configure_odpg + (dev_num, access_type, interface_num, direction, + pattern_table[pattern].num_of_phases_tx, tx_burst_size, + pattern_table[pattern].num_of_phases_rx, + delay_between_burst, rd_mode, effective_cs, STRESS_NONE, + DURATION_SINGLE)); + reg_data = (direction == OPER_READ) ? 0 : (0x3 << 30); + reg_data |= (direction == OPER_READ) ? 0x60 : 0xfa; + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + ODPG_WRITE_READ_MODE_ENABLE_REG, reg_data, + MASK_ALL_BITS)); + reg_data = (edge_comp == EDGE_PF || edge_comp == EDGE_FP) ? 0 : 1 << 6; + reg_data |= (edge_comp == EDGE_PF || edge_comp == EDGE_PFP) ? 
+ (1 << 7) : 0; + + /* change from Pass to Fail will lock the result */ + if (pup_access_type == ACCESS_TYPE_MULTICAST) + reg_data |= 0xe << 14; + else + reg_data |= pup_num << 14; + + if (edge_comp == EDGE_FP) { + /* don't search for readl edge change, only the state */ + reg_data |= (0 << 20); + } else if (edge_comp == EDGE_FPF) { + reg_data |= (0 << 20); + } else { + reg_data |= (3 << 20); + } + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + ODPG_TRAINING_CONTROL_REG, + reg_data | (0x7 << 8) | (0x7 << 11), + (0x3 | (0x3 << 2) | (0x3 << 6) | (1 << 5) | (0x7 << 8) | + (0x7 << 11) | (0xf << 14) | (0x3 << 18) | (3 << 20)))); + reg_data = (search_dir == HWS_LOW2HIGH) ? 0 : (1 << 8); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, ODPG_OBJ1_OPCODE_REG, + 1 | reg_data | init_value << 9 | (1 << 25) | (1 << 26), + 0xff | (1 << 8) | (0xffff << 9) | (1 << 25) | (1 << 26))); + + /* + * Write2_dunit(0x10b4, Number_iteration , [15:0]) + * Max number of iterations + */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, interface_num, + ODPG_OBJ1_ITER_CNT_REG, num_iter, + 0xffff)); + if (control_element == HWS_CONTROL_ELEMENT_DQ_SKEW && + direction == OPER_READ) { + /* + * Write2_dunit(0x10c0, 0x5f , [7:0]) + * MC PBS Reg Address at DDR PHY + */ + reg_data = 0x5f + + effective_cs * CALIBRATED_OBJECTS_REG_ADDR_OFFSET; + } else if (control_element == HWS_CONTROL_ELEMENT_DQ_SKEW && + direction == OPER_WRITE) { + reg_data = 0x1f + + effective_cs * CALIBRATED_OBJECTS_REG_ADDR_OFFSET; + } else if (control_element == HWS_CONTROL_ELEMENT_ADLL && + direction == OPER_WRITE) { + /* + * LOOP 0x00000001 + 4*n: + * where n (0-3) represents M_CS number + */ + /* + * Write2_dunit(0x10c0, 0x1 , [7:0]) + * ADLL WR Reg Address at DDR PHY + */ + reg_data = 1 + effective_cs * CS_REGISTER_ADDR_OFFSET; + } else if (control_element == HWS_CONTROL_ELEMENT_ADLL && + direction == OPER_READ) { + /* ADLL RD Reg Address at DDR PHY */ + reg_data = 3 + effective_cs * CS_REGISTER_ADDR_OFFSET; + } else if (control_element == HWS_CONTROL_ELEMENT_DQS_SKEW && + direction == OPER_WRITE) { + /* TBD not defined in 0.5.0 requirement */ + } else if (control_element == HWS_CONTROL_ELEMENT_DQS_SKEW && + direction == OPER_READ) { + /* TBD not defined in 0.5.0 requirement */ + } + + reg_data |= (0x6 << 28); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, CALIB_OBJ_PRFA_REG, + reg_data | (init_value << 8), + 0xff | (0xffff << 8) | (0xf << 24) | (u32) (0xf << 28))); + + mask_dq_num_of_regs = tm->num_of_bus_per_interface * BUS_WIDTH_IN_BITS; + mask_pup_num_of_regs = tm->num_of_bus_per_interface; + + if (result_type == RESULT_PER_BIT) { + for (index_cnt = 0; index_cnt < mask_dq_num_of_regs; + index_cnt++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + mask_results_dq_reg_map[index_cnt], 0, + 1 << 24)); + } + + /* Mask disabled buses */ + for (pup_id = 0; pup_id < tm->num_of_bus_per_interface; + pup_id++) { + if (IS_ACTIVE(tm->bus_act_mask, pup_id) == 1) + continue; + + for (index_cnt = (mask_dq_num_of_regs - pup_id * 8); + index_cnt < + (mask_dq_num_of_regs - (pup_id + 1) * 8); + index_cnt++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, + interface_num, + mask_results_dq_reg_map + [index_cnt], (1 << 24), 1 << 24)); + } + } + + for (index_cnt = 0; index_cnt < mask_pup_num_of_regs; + index_cnt++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + mask_results_pup_reg_map[index_cnt], + (1 << 24), 1 << 
24)); + } + } else if (result_type == RESULT_PER_BYTE) { + /* write to adll */ + for (index_cnt = 0; index_cnt < mask_pup_num_of_regs; + index_cnt++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + mask_results_pup_reg_map[index_cnt], 0, + 1 << 24)); + } + for (index_cnt = 0; index_cnt < mask_dq_num_of_regs; + index_cnt++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, interface_num, + mask_results_dq_reg_map[index_cnt], + (1 << 24), (1 << 24))); + } + } + + /* Start Training Trigger */ + CHECK_STATUS(ddr3_tip_if_write(dev_num, access_type, interface_num, + ODPG_TRAINING_TRIGGER_REG, 1, 1)); + /* wait for all RFU tests to finish (or timeout) */ + /* WA for 16 bit mode, more investigation needed */ + mdelay(1); + + /* Training "Done ?" */ + for (index_cnt = 0; index_cnt < MAX_INTERFACE_NUM; index_cnt++) { + if (IS_ACTIVE(tm->if_act_mask, index_cnt) == 0) + continue; + + if (interface_mask & (1 << index_cnt)) { + /* need to check results for this Dunit */ + for (poll_cnt = 0; poll_cnt < max_polling_for_done; + poll_cnt++) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, + index_cnt, + ODPG_TRAINING_STATUS_REG, + ®_data, MASK_ALL_BITS)); + if ((reg_data & 0x2) != 0) { + /*done */ + train_status[index_cnt] = + HWS_TRAINING_IP_STATUS_SUCCESS; + break; + } + } + + if (poll_cnt == max_polling_for_done) { + train_status[index_cnt] = + HWS_TRAINING_IP_STATUS_TIMEOUT; + } + } + /* Be sure that ODPG done */ + CHECK_STATUS(is_odpg_access_done(dev_num, index_cnt)); + } + + /* Write ODPG done in Dunit */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_STATUS_DONE_REG, 0, 0x1)); + + /* wait for all Dunit tests to finish (or timeout) */ + /* Training "Done ?" */ + /* Training "Pass ?" 
*/ + for (index_cnt = 0; index_cnt < MAX_INTERFACE_NUM; index_cnt++) { + if (IS_ACTIVE(tm->if_act_mask, index_cnt) == 0) + continue; + + if (interface_mask & (1 << index_cnt)) { + /* need to check results for this Dunit */ + for (poll_cnt = 0; poll_cnt < max_polling_for_done; + poll_cnt++) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, + index_cnt, + ODPG_TRAINING_TRIGGER_REG, + read_data, MASK_ALL_BITS)); + reg_data = read_data[index_cnt]; + if ((reg_data & 0x2) != 0) { + /* done */ + if ((reg_data & 0x4) == 0) { + train_status[index_cnt] = + HWS_TRAINING_IP_STATUS_SUCCESS; + } else { + train_status[index_cnt] = + HWS_TRAINING_IP_STATUS_FAIL; + } + break; + } + } + + if (poll_cnt == max_polling_for_done) { + train_status[index_cnt] = + HWS_TRAINING_IP_STATUS_TIMEOUT; + } + } + } + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0, MASK_ALL_BITS)); + + return MV_OK; +} + +/* + * Load expected Pattern to ODPG + */ +int ddr3_tip_load_pattern_to_odpg(u32 dev_num, enum hws_access_type access_type, + u32 if_id, enum hws_pattern pattern, + u32 load_addr) +{ + u32 pattern_length_cnt = 0; + struct pattern_info *pattern_table = ddr3_tip_get_pattern_table(); + + for (pattern_length_cnt = 0; + pattern_length_cnt < pattern_table[pattern].pattern_len; + pattern_length_cnt++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + ODPG_PATTERN_DATA_LOW_REG, + pattern_table_get_word(dev_num, pattern, + (u8) (pattern_length_cnt * + 2)), MASK_ALL_BITS)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + ODPG_PATTERN_DATA_HI_REG, + pattern_table_get_word(dev_num, pattern, + (u8) (pattern_length_cnt * + 2 + 1)), + MASK_ALL_BITS)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + ODPG_PATTERN_ADDR_REG, pattern_length_cnt, + MASK_ALL_BITS)); + } + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, access_type, if_id, + ODPG_PATTERN_ADDR_OFFSET_REG, load_addr, MASK_ALL_BITS)); + + return MV_OK; +} + +/* + * Configure ODPG + */ +int ddr3_tip_configure_odpg(u32 dev_num, enum hws_access_type access_type, + u32 if_id, enum hws_dir direction, u32 tx_phases, + u32 tx_burst_size, u32 rx_phases, + u32 delay_between_burst, u32 rd_mode, u32 cs_num, + u32 addr_stress_jump, u32 single_pattern) +{ + u32 data_value = 0; + int ret; + + data_value = ((single_pattern << 2) | (tx_phases << 5) | + (tx_burst_size << 11) | (delay_between_burst << 15) | + (rx_phases << 21) | (rd_mode << 25) | (cs_num << 26) | + (addr_stress_jump << 29)); + ret = ddr3_tip_if_write(dev_num, access_type, if_id, + ODPG_DATA_CONTROL_REG, data_value, 0xaffffffc); + if (ret != MV_OK) + return ret; + + return MV_OK; +} + +int ddr3_tip_process_result(u32 *ar_result, enum hws_edge e_edge, + enum hws_edge_search e_edge_search, + u32 *edge_result) +{ + u32 i, res; + int tap_val, max_val = -10000, min_val = 10000; + int lock_success = 1; + + for (i = 0; i < BUS_WIDTH_IN_BITS; i++) { + res = GET_LOCK_RESULT(ar_result[i]); + if (res == 0) { + lock_success = 0; + break; + } + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("lock failed for bit %d\n", i)); + } + + if (lock_success == 1) { + for (i = 0; i < BUS_WIDTH_IN_BITS; i++) { + tap_val = GET_TAP_RESULT(ar_result[i], e_edge); + if (tap_val > max_val) + max_val = tap_val; + if (tap_val < min_val) + min_val = tap_val; + if (e_edge_search == TRAINING_EDGE_MAX) + *edge_result = (u32) max_val; + else + *edge_result = (u32) min_val; + + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("i %d 
ar_result[i] 0x%x tap_val %d max_val %d min_val %d Edge_result %d\n", + i, ar_result[i], tap_val, + max_val, min_val, + *edge_result)); + } + } else { + return MV_FAIL; + } + + return MV_OK; +} + +/* + * Read training search result + */ +int ddr3_tip_read_training_result(u32 dev_num, u32 if_id, + enum hws_access_type pup_access_type, + u32 pup_num, u32 bit_num, + enum hws_search_dir search, + enum hws_dir direction, + enum hws_training_result result_type, + enum hws_training_load_op operation, + u32 cs_num_type, u32 **load_res, + int is_read_from_db, u8 cons_tap, + int is_check_result_validity) +{ + u32 reg_offset, pup_cnt, start_pup, end_pup, start_reg, end_reg; + u32 *interface_train_res = NULL; + u16 *reg_addr = NULL; + u32 read_data[MAX_INTERFACE_NUM]; + u16 *mask_results_pup_reg_map = ddr3_tip_get_mask_results_pup_reg_map(); + u16 *mask_results_dq_reg_map = ddr3_tip_get_mask_results_dq_reg(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* + * Agreed assumption: all CS mask contain same number of bits, + * i.e. in multi CS, the number of CS per memory is the same for + * all pups + */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, CS_ENABLE_REG, + (cs_num_type == 0) ? 1 << 3 : 0, (1 << 3))); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ODPG_DATA_CONTROL_REG, (cs_num_type << 26), (3 << 26))); + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_TRACE, + ("Read_from_d_b %d cs_type %d oper %d result_type %d direction %d search %d pup_num %d if_id %d pup_access_type %d\n", + is_read_from_db, cs_num_type, operation, + result_type, direction, search, pup_num, + if_id, pup_access_type)); + + if ((load_res == NULL) && (is_read_from_db == 1)) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("ddr3_tip_read_training_result load_res = NULL")); + return MV_FAIL; + } + if (pup_num >= tm->num_of_bus_per_interface) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("pup_num %d not valid\n", pup_num)); + } + if (if_id >= MAX_INTERFACE_NUM) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("if_id %d not valid\n", if_id)); + } + if (result_type == RESULT_PER_BIT) + reg_addr = mask_results_dq_reg_map; + else + reg_addr = mask_results_pup_reg_map; + if (pup_access_type == ACCESS_TYPE_UNICAST) { + start_pup = pup_num; + end_pup = pup_num; + } else { /*pup_access_type == ACCESS_TYPE_MULTICAST) */ + + start_pup = 0; + end_pup = tm->num_of_bus_per_interface - 1; + } + + for (pup_cnt = start_pup; pup_cnt <= end_pup; pup_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup_cnt); + DEBUG_TRAINING_IP_ENGINE( + DEBUG_LEVEL_TRACE, + ("if_id %d start_pup %d end_pup %d pup_cnt %d\n", + if_id, start_pup, end_pup, pup_cnt)); + if (result_type == RESULT_PER_BIT) { + if (bit_num == ALL_BITS_PER_PUP) { + start_reg = pup_cnt * BUS_WIDTH_IN_BITS; + end_reg = (pup_cnt + 1) * BUS_WIDTH_IN_BITS - 1; + } else { + start_reg = + pup_cnt * BUS_WIDTH_IN_BITS + bit_num; + end_reg = pup_cnt * BUS_WIDTH_IN_BITS + bit_num; + } + } else { + start_reg = pup_cnt; + end_reg = pup_cnt; + } + + interface_train_res = + ddr3_tip_get_buf_ptr(dev_num, search, result_type, + if_id); + DEBUG_TRAINING_IP_ENGINE( + DEBUG_LEVEL_TRACE, + ("start_reg %d end_reg %d interface %p\n", + start_reg, end_reg, interface_train_res)); + if (interface_train_res == NULL) { + DEBUG_TRAINING_IP_ENGINE( + DEBUG_LEVEL_ERROR, + ("interface_train_res is NULL\n")); + return MV_FAIL; + } + + for (reg_offset = start_reg; reg_offset <= end_reg; + reg_offset++) { + if (operation == TRAINING_LOAD_OPERATION_UNLOAD) { + 
if (is_read_from_db == 0) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + reg_addr[reg_offset], + read_data, + MASK_ALL_BITS)); + if (is_check_result_validity == 1) { + if ((read_data[if_id] & + 0x02000000) == 0) { + interface_train_res + [reg_offset] = + 0x02000000 + + 64 + cons_tap; + } else { + interface_train_res + [reg_offset] = + read_data + [if_id] + + cons_tap; + } + } else { + interface_train_res[reg_offset] + = read_data[if_id] + + cons_tap; + } + DEBUG_TRAINING_IP_ENGINE + (DEBUG_LEVEL_TRACE, + ("reg_offset %d value 0x%x addr %p\n", + reg_offset, + interface_train_res + [reg_offset], + &interface_train_res + [reg_offset])); + } else { + *load_res = + &interface_train_res[start_reg]; + DEBUG_TRAINING_IP_ENGINE + (DEBUG_LEVEL_TRACE, + ("*load_res %p\n", *load_res)); + } + } else { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_TRACE, + ("not supported\n")); + } + } + } + + return MV_OK; +} + +/* + * Load all pattern to memory using ODPG + */ +int ddr3_tip_load_all_pattern_to_mem(u32 dev_num) +{ + u32 pattern = 0, if_id; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + training_result[training_stage][if_id] = TEST_SUCCESS; + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* enable single cs */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, (1 << 3), (1 << 3))); + } + + for (pattern = 0; pattern < PATTERN_LIMIT; pattern++) + ddr3_tip_load_pattern_to_mem(dev_num, pattern); + + return MV_OK; +} + +/* + * Wait till ODPG access is ready + */ +int is_odpg_access_done(u32 dev_num, u32 if_id) +{ + u32 poll_cnt = 0, data_value; + u32 read_data[MAX_INTERFACE_NUM]; + + for (poll_cnt = 0; poll_cnt < MAX_POLLING_ITERATIONS; poll_cnt++) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ODPG_BIST_DONE, read_data, MASK_ALL_BITS)); + data_value = read_data[if_id]; + if (((data_value >> ODPG_BIST_DONE_BIT_OFFS) & 0x1) == + ODPG_BIST_DONE_BIT_VALUE) { + data_value = data_value & 0xfffffffe; + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ODPG_BIST_DONE, data_value, + MASK_ALL_BITS)); + break; + } + } + + if (poll_cnt >= MAX_POLLING_ITERATIONS) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("Bist Activate: poll failure 2\n")); + return MV_FAIL; + } + + return MV_OK; +} + +/* + * Load specific pattern to memory using ODPG + */ +int ddr3_tip_load_pattern_to_mem(u32 dev_num, enum hws_pattern pattern) +{ + u32 reg_data, if_id; + struct pattern_info *pattern_table = ddr3_tip_get_pattern_table(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* load pattern to memory */ + /* + * Write Tx mode, CS0, phases, Tx burst size, delay between burst, + * rx pattern phases + */ + reg_data = + 0x1 | (pattern_table[pattern].num_of_phases_tx << 5) | + (pattern_table[pattern].tx_burst_size << 11) | + (pattern_table[pattern].delay_between_bursts << 15) | + (pattern_table[pattern].num_of_phases_rx << 21) | (0x1 << 25) | + (effective_cs << 26); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, reg_data, MASK_ALL_BITS)); + /* ODPG Write enable from BIST */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, (0x1 | (effective_cs << 26)), + 0xc000003)); + /* disable error injection */ + 
CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_WRITE_DATA_ERROR_REG, 0, 0x1)); + /* load pattern to ODPG */ + ddr3_tip_load_pattern_to_odpg(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, pattern, + pattern_table[pattern].start_addr); + + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + if (IS_ACTIVE(tm->if_act_mask, if_id) == 0) + continue; + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0x1498, + 0x3, 0xf)); + } + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_ENABLE_REG, 0x1 << ODPG_ENABLE_OFFS, + (0x1 << ODPG_ENABLE_OFFS))); + + mdelay(1); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(is_odpg_access_done(dev_num, if_id)); + } + + /* Disable ODPG and stop write to memory */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, (0x1 << 30), (u32) (0x3 << 30))); + + /* return to default */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0, MASK_ALL_BITS)); + + /* Disable odt0 for CS0 training - need to adjust for multy CS */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, 0x1498, + 0x0, 0xf)); + + /* temporary added */ + mdelay(1); + + return MV_OK; +} + +/* + * Load specific pattern to memory using CPU + */ +int ddr3_tip_load_pattern_to_mem_by_cpu(u32 dev_num, enum hws_pattern pattern, + u32 offset) +{ + /* eranba - TBD */ + return MV_OK; +} + +/* + * Training search routine + */ +int ddr3_tip_ip_training_wrapper_int(u32 dev_num, + enum hws_access_type access_type, + u32 if_id, + enum hws_access_type pup_access_type, + u32 pup_num, u32 bit_num, + enum hws_training_result result_type, + enum hws_control_element control_element, + enum hws_search_dir search_dir, + enum hws_dir direction, + u32 interface_mask, u32 init_value_l2h, + u32 init_value_h2l, u32 num_iter, + enum hws_pattern pattern, + enum hws_edge_compare edge_comp, + enum hws_ddr_cs train_cs_type, u32 cs_num, + enum hws_training_ip_stat *train_status) +{ + u32 interface_num = 0, start_if, end_if, init_value_used; + enum hws_search_dir search_dir_id, start_search, end_search; + enum hws_edge_compare edge_comp_used; + u8 cons_tap = (direction == OPER_WRITE) ? 
(64) : (0); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (train_status == NULL) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("train_status is NULL\n")); + return MV_FAIL; + } + + if ((train_cs_type > CS_NON_SINGLE) || + (edge_comp >= EDGE_PFP) || + (pattern >= PATTERN_LIMIT) || + (direction > OPER_WRITE_AND_READ) || + (search_dir > HWS_HIGH2LOW) || + (control_element > HWS_CONTROL_ELEMENT_DQS_SKEW) || + (result_type > RESULT_PER_BYTE) || + (pup_num >= tm->num_of_bus_per_interface) || + (pup_access_type > ACCESS_TYPE_MULTICAST) || + (if_id > 11) || (access_type > ACCESS_TYPE_MULTICAST)) { + DEBUG_TRAINING_IP_ENGINE( + DEBUG_LEVEL_ERROR, + ("wrong parameter train_cs_type %d edge_comp %d pattern %d direction %d search_dir %d control_element %d result_type %d pup_num %d pup_access_type %d if_id %d access_type %d\n", + train_cs_type, edge_comp, pattern, direction, + search_dir, control_element, result_type, pup_num, + pup_access_type, if_id, access_type)); + return MV_FAIL; + } + + if (edge_comp == EDGE_FPF) { + start_search = HWS_LOW2HIGH; + end_search = HWS_HIGH2LOW; + edge_comp_used = EDGE_FP; + } else { + start_search = search_dir; + end_search = search_dir; + edge_comp_used = edge_comp; + } + + for (search_dir_id = start_search; search_dir_id <= end_search; + search_dir_id++) { + init_value_used = (search_dir_id == HWS_LOW2HIGH) ? + init_value_l2h : init_value_h2l; + DEBUG_TRAINING_IP_ENGINE( + DEBUG_LEVEL_TRACE, + ("dev_num %d, access_type %d, if_id %d, pup_access_type %d,pup_num %d, result_type %d, control_element %d search_dir_id %d, direction %d, interface_mask %d,init_value_used %d, num_iter %d, pattern %d, edge_comp_used %d, train_cs_type %d, cs_num %d\n", + dev_num, access_type, if_id, pup_access_type, pup_num, + result_type, control_element, search_dir_id, + direction, interface_mask, init_value_used, num_iter, + pattern, edge_comp_used, train_cs_type, cs_num)); + + ddr3_tip_ip_training(dev_num, access_type, if_id, + pup_access_type, pup_num, result_type, + control_element, search_dir_id, direction, + interface_mask, init_value_used, num_iter, + pattern, edge_comp_used, train_cs_type, + cs_num, train_status); + if (access_type == ACCESS_TYPE_MULTICAST) { + start_if = 0; + end_if = MAX_INTERFACE_NUM - 1; + } else { + start_if = if_id; + end_if = if_id; + } + + for (interface_num = start_if; interface_num <= end_if; + interface_num++) { + VALIDATE_ACTIVE(tm->if_act_mask, interface_num); + cs_num = 0; + CHECK_STATUS(ddr3_tip_read_training_result + (dev_num, interface_num, pup_access_type, + pup_num, bit_num, search_dir_id, + direction, result_type, + TRAINING_LOAD_OPERATION_UNLOAD, + train_cs_type, NULL, 0, cons_tap, + 0)); + } + } + + return MV_OK; +} + +/* + * Training search & read result routine + */ +int ddr3_tip_ip_training_wrapper(u32 dev_num, enum hws_access_type access_type, + u32 if_id, + enum hws_access_type pup_access_type, + u32 pup_num, + enum hws_training_result result_type, + enum hws_control_element control_element, + enum hws_search_dir search_dir, + enum hws_dir direction, u32 interface_mask, + u32 init_value_l2h, u32 init_value_h2l, + u32 num_iter, enum hws_pattern pattern, + enum hws_edge_compare edge_comp, + enum hws_ddr_cs train_cs_type, u32 cs_num, + enum hws_training_ip_stat *train_status) +{ + u8 e1, e2; + u32 interface_cnt, bit_id, start_if, end_if, bit_end = 0; + u32 *result[HWS_SEARCH_DIR_LIMIT] = { 0 }; + u8 cons_tap = (direction == OPER_WRITE) ? 
(64) : (0); + u8 bit_bit_mask[MAX_BUS_NUM] = { 0 }, bit_bit_mask_active = 0; + u8 pup_id; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (pup_num >= tm->num_of_bus_per_interface) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("pup_num %d not valid\n", pup_num)); + } + + if (if_id >= MAX_INTERFACE_NUM) { + DEBUG_TRAINING_IP_ENGINE(DEBUG_LEVEL_ERROR, + ("if_id %d not valid\n", if_id)); + } + + CHECK_STATUS(ddr3_tip_ip_training_wrapper_int + (dev_num, access_type, if_id, pup_access_type, pup_num, + ALL_BITS_PER_PUP, result_type, control_element, + search_dir, direction, interface_mask, init_value_l2h, + init_value_h2l, num_iter, pattern, edge_comp, + train_cs_type, cs_num, train_status)); + + if (access_type == ACCESS_TYPE_MULTICAST) { + start_if = 0; + end_if = MAX_INTERFACE_NUM - 1; + } else { + start_if = if_id; + end_if = if_id; + } + + for (interface_cnt = start_if; interface_cnt <= end_if; + interface_cnt++) { + VALIDATE_ACTIVE(tm->if_act_mask, interface_cnt); + for (pup_id = 0; + pup_id <= (tm->num_of_bus_per_interface - 1); pup_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup_id); + if (result_type == RESULT_PER_BIT) + bit_end = BUS_WIDTH_IN_BITS - 1; + else + bit_end = 0; + + bit_bit_mask[pup_id] = 0; + for (bit_id = 0; bit_id <= bit_end; bit_id++) { + enum hws_search_dir search_dir_id; + for (search_dir_id = HWS_LOW2HIGH; + search_dir_id <= HWS_HIGH2LOW; + search_dir_id++) { + CHECK_STATUS + (ddr3_tip_read_training_result + (dev_num, interface_cnt, + ACCESS_TYPE_UNICAST, pup_id, + bit_id, search_dir_id, + direction, result_type, + TRAINING_LOAD_OPERATION_UNLOAD, + CS_SINGLE, + &result[search_dir_id], + 1, 0, 0)); + } + e1 = GET_TAP_RESULT(result[HWS_LOW2HIGH][0], + EDGE_1); + e2 = GET_TAP_RESULT(result[HWS_HIGH2LOW][0], + EDGE_1); + DEBUG_TRAINING_IP_ENGINE( + DEBUG_LEVEL_INFO, + ("wrapper if_id %d pup_id %d bit %d l2h 0x%x (e1 0x%x) h2l 0x%x (e2 0x%x)\n", + interface_cnt, pup_id, bit_id, + result[HWS_LOW2HIGH][0], e1, + result[HWS_HIGH2LOW][0], e2)); + /* TBD validate is valid only for tx */ + if (VALIDATE_TRAINING_LIMIT(e1, e2) == 1 && + GET_LOCK_RESULT(result[HWS_LOW2HIGH][0]) && + GET_LOCK_RESULT(result[HWS_LOW2HIGH][0])) { + /* Mark problem bits */ + bit_bit_mask[pup_id] |= 1 << bit_id; + bit_bit_mask_active = 1; + } + } /* For all bits */ + } /* For all PUPs */ + + /* Fix problem bits */ + if (bit_bit_mask_active != 0) { + u32 *l2h_if_train_res = NULL; + u32 *h2l_if_train_res = NULL; + l2h_if_train_res = + ddr3_tip_get_buf_ptr(dev_num, HWS_LOW2HIGH, + result_type, + interface_cnt); + h2l_if_train_res = + ddr3_tip_get_buf_ptr(dev_num, HWS_HIGH2LOW, + result_type, + interface_cnt); + + ddr3_tip_ip_training(dev_num, ACCESS_TYPE_UNICAST, + interface_cnt, + ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, result_type, + control_element, HWS_LOW2HIGH, + direction, interface_mask, + num_iter / 2, num_iter / 2, + pattern, EDGE_FP, train_cs_type, + cs_num, train_status); + + for (pup_id = 0; + pup_id <= (tm->num_of_bus_per_interface - 1); + pup_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup_id); + + if (bit_bit_mask[pup_id] == 0) + continue; + + for (bit_id = 0; bit_id <= bit_end; bit_id++) { + if ((bit_bit_mask[pup_id] & + (1 << bit_id)) == 0) + continue; + CHECK_STATUS + (ddr3_tip_read_training_result + (dev_num, interface_cnt, + ACCESS_TYPE_UNICAST, pup_id, + bit_id, HWS_LOW2HIGH, + direction, + result_type, + TRAINING_LOAD_OPERATION_UNLOAD, + CS_SINGLE, &l2h_if_train_res, + 0, 0, 1)); + } + } + + ddr3_tip_ip_training(dev_num, ACCESS_TYPE_UNICAST, + interface_cnt, + 
ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, result_type, + control_element, HWS_HIGH2LOW, + direction, interface_mask, + num_iter / 2, num_iter / 2, + pattern, EDGE_FP, train_cs_type, + cs_num, train_status); + + for (pup_id = 0; + pup_id <= (tm->num_of_bus_per_interface - 1); + pup_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup_id); + + if (bit_bit_mask[pup_id] == 0) + continue; + + for (bit_id = 0; bit_id <= bit_end; bit_id++) { + if ((bit_bit_mask[pup_id] & + (1 << bit_id)) == 0) + continue; + CHECK_STATUS + (ddr3_tip_read_training_result + (dev_num, interface_cnt, + ACCESS_TYPE_UNICAST, pup_id, + bit_id, HWS_HIGH2LOW, direction, + result_type, + TRAINING_LOAD_OPERATION_UNLOAD, + CS_SINGLE, &h2l_if_train_res, + 0, cons_tap, 1)); + } + } + } /* if bit_bit_mask_active */ + } /* For all Interfacess */ + + return MV_OK; +} + +/* + * Load phy values + */ +int ddr3_tip_load_phy_values(int b_load) +{ + u32 bus_cnt = 0, if_id, dev_num = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_cnt = 0; bus_cnt < GET_TOPOLOGY_NUM_OF_BUSES(); + bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + if (b_load == 1) { + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_cnt, + DDR_PHY_DATA, + WRITE_CENTRALIZATION_PHY_REG + + (effective_cs * + CS_REGISTER_ADDR_OFFSET), + &phy_reg_bk[if_id][bus_cnt] + [0])); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_cnt, + DDR_PHY_DATA, + RL_PHY_REG + + (effective_cs * + CS_REGISTER_ADDR_OFFSET), + &phy_reg_bk[if_id][bus_cnt] + [1])); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, bus_cnt, + DDR_PHY_DATA, + READ_CENTRALIZATION_PHY_REG + + (effective_cs * + CS_REGISTER_ADDR_OFFSET), + &phy_reg_bk[if_id][bus_cnt] + [2])); + } else { + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + bus_cnt, DDR_PHY_DATA, + WRITE_CENTRALIZATION_PHY_REG + + (effective_cs * + CS_REGISTER_ADDR_OFFSET), + phy_reg_bk[if_id][bus_cnt] + [0])); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + bus_cnt, DDR_PHY_DATA, + RL_PHY_REG + + (effective_cs * + CS_REGISTER_ADDR_OFFSET), + phy_reg_bk[if_id][bus_cnt] + [1])); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, + bus_cnt, DDR_PHY_DATA, + READ_CENTRALIZATION_PHY_REG + + (effective_cs * + CS_REGISTER_ADDR_OFFSET), + phy_reg_bk[if_id][bus_cnt] + [2])); + } + } + } + + return MV_OK; +} + +int ddr3_tip_training_ip_test(u32 dev_num, enum hws_training_result result_type, + enum hws_search_dir search_dir, + enum hws_dir direction, + enum hws_edge_compare edge, + u32 init_val1, u32 init_val2, + u32 num_of_iterations, + u32 start_pattern, u32 end_pattern) +{ + u32 pattern, if_id, pup_id; + enum hws_training_ip_stat train_status[MAX_INTERFACE_NUM]; + u32 *res = NULL; + u32 search_state = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + ddr3_tip_load_phy_values(1); + + for (pattern = start_pattern; pattern <= end_pattern; pattern++) { + for (search_state = 0; search_state < HWS_SEARCH_DIR_LIMIT; + search_state++) { + ddr3_tip_ip_training_wrapper(dev_num, + ACCESS_TYPE_MULTICAST, 0, + ACCESS_TYPE_MULTICAST, 0, + result_type, + HWS_CONTROL_ELEMENT_ADLL, + search_dir, direction, + 0xfff, init_val1, + init_val2, + num_of_iterations, pattern, + edge, CS_SINGLE, + PARAM_NOT_CARE, + 
train_status); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; + if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (pup_id = 0; pup_id < + tm->num_of_bus_per_interface; + pup_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, + pup_id); + CHECK_STATUS + (ddr3_tip_read_training_result + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup_id, + ALL_BITS_PER_PUP, + search_state, + direction, result_type, + TRAINING_LOAD_OPERATION_UNLOAD, + CS_SINGLE, &res, 1, 0, + 0)); + if (result_type == RESULT_PER_BYTE) { + DEBUG_TRAINING_IP_ENGINE + (DEBUG_LEVEL_INFO, + ("search_state %d if_id %d pup_id %d 0x%x\n", + search_state, if_id, + pup_id, res[0])); + } else { + DEBUG_TRAINING_IP_ENGINE + (DEBUG_LEVEL_INFO, + ("search_state %d if_id %d pup_id %d 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n", + search_state, if_id, + pup_id, res[0], + res[1], res[2], + res[3], res[4], + res[5], res[6], + res[7])); + } + } + } /* interface */ + } /* search */ + } /* pattern */ + + ddr3_tip_load_phy_values(0); + + return MV_OK; +} + +struct pattern_info *ddr3_tip_get_pattern_table() +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (DDR3_IS_16BIT_DRAM_MODE(tm->bus_act_mask) == 0) + return pattern_table_32; + else + return pattern_table_16; +} + +u16 *ddr3_tip_get_mask_results_dq_reg() +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (DDR3_IS_ECC_PUP3_MODE(tm->bus_act_mask)) + return mask_results_dq_reg_map_pup3_ecc; + else + return mask_results_dq_reg_map; +} + +u16 *ddr3_tip_get_mask_results_pup_reg_map() +{ + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (DDR3_IS_ECC_PUP3_MODE(tm->bus_act_mask)) + return mask_results_pup_reg_map_pup3_ecc; + else + return mask_results_pup_reg_map; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.h new file mode 100644 index 0000000000..25b146216e --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.h @@ -0,0 +1,85 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_ENGINE_H_ +#define _DDR3_TRAINING_IP_ENGINE_H_ + +#include "ddr3_training_ip_def.h" +#include "ddr3_training_ip_flow.h" + +#define EDGE_1 0 +#define EDGE_2 1 +#define ALL_PUP_TRAINING 0xe +#define PUP_RESULT_EDGE_1_MASK 0xff +#define PUP_RESULT_EDGE_2_MASK (0xff << 8) +#define PUP_LOCK_RESULT_BIT 25 + +#define GET_TAP_RESULT(reg, edge) \ + (((edge) == EDGE_1) ? 
((reg) & PUP_RESULT_EDGE_1_MASK) : \ + (((reg) & PUP_RESULT_EDGE_2_MASK) >> 8)); +#define GET_LOCK_RESULT(reg) \ + (((reg) & (1 << PUP_LOCK_RESULT_BIT)) >> PUP_LOCK_RESULT_BIT) + +#define EDGE_FAILURE 128 +#define ALL_BITS_PER_PUP 128 + +#define MIN_WINDOW_SIZE 6 +#define MAX_WINDOW_SIZE_RX 32 +#define MAX_WINDOW_SIZE_TX 64 + +int ddr3_tip_training_ip_test(u32 dev_num, enum hws_training_result result_type, + enum hws_search_dir search_dir, + enum hws_dir direction, + enum hws_edge_compare edge, + u32 init_val1, u32 init_val2, + u32 num_of_iterations, u32 start_pattern, + u32 end_pattern); +int ddr3_tip_load_pattern_to_mem(u32 dev_num, enum hws_pattern pattern); +int ddr3_tip_load_pattern_to_mem_by_cpu(u32 dev_num, enum hws_pattern pattern, + u32 offset); +int ddr3_tip_load_all_pattern_to_mem(u32 dev_num); +int ddr3_tip_read_training_result(u32 dev_num, u32 if_id, + enum hws_access_type pup_access_type, + u32 pup_num, u32 bit_num, + enum hws_search_dir search, + enum hws_dir direction, + enum hws_training_result result_type, + enum hws_training_load_op operation, + u32 cs_num_type, u32 **load_res, + int is_read_from_db, u8 cons_tap, + int is_check_result_validity); +int ddr3_tip_ip_training(u32 dev_num, enum hws_access_type access_type, + u32 interface_num, + enum hws_access_type pup_access_type, + u32 pup_num, enum hws_training_result result_type, + enum hws_control_element control_element, + enum hws_search_dir search_dir, enum hws_dir direction, + u32 interface_mask, u32 init_value, u32 num_iter, + enum hws_pattern pattern, + enum hws_edge_compare edge_comp, + enum hws_ddr_cs cs_type, u32 cs_num, + enum hws_training_ip_stat *train_status); +int ddr3_tip_ip_training_wrapper(u32 dev_num, enum hws_access_type access_type, + u32 if_id, + enum hws_access_type pup_access_type, + u32 pup_num, + enum hws_training_result result_type, + enum hws_control_element control_element, + enum hws_search_dir search_dir, + enum hws_dir direction, + u32 interface_mask, u32 init_value1, + u32 init_value2, u32 num_iter, + enum hws_pattern pattern, + enum hws_edge_compare edge_comp, + enum hws_ddr_cs train_cs_type, u32 cs_num, + enum hws_training_ip_stat *train_status); +int is_odpg_access_done(u32 dev_num, u32 if_id); +void ddr3_tip_print_bist_res(void); +struct pattern_info *ddr3_tip_get_pattern_table(void); +u16 *ddr3_tip_get_mask_results_dq_reg(void); +u16 *ddr3_tip_get_mask_results_pup_reg_map(void); + +#endif /* _DDR3_TRAINING_IP_ENGINE_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_flow.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_flow.h new file mode 100644 index 0000000000..22d7ce23e6 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_flow.h @@ -0,0 +1,349 @@ +/* + * Copyright (C) Marvell International Ltd.
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_FLOW_H_ +#define _DDR3_TRAINING_IP_FLOW_H_ + +#include "ddr3_training_ip.h" +#include "ddr3_training_ip_pbs.h" + +#define MRS0_CMD 0x3 +#define MRS1_CMD 0x4 +#define MRS2_CMD 0x8 +#define MRS3_CMD 0x9 + +/* + * Definitions of INTERFACE registers + */ + +#define READ_BUFFER_SELECT 0x14a4 + +/* + * Definitions of PHY registers + */ + +#define KILLER_PATTERN_LENGTH 32 +#define EXT_ACCESS_BURST_LENGTH 8 + +#define IS_ACTIVE(if_mask , if_id) \ + ((if_mask) & (1 << (if_id))) +#define VALIDATE_ACTIVE(mask, id) \ + { \ + if (IS_ACTIVE(mask, id) == 0) \ + continue; \ + } + +#define GET_TOPOLOGY_NUM_OF_BUSES() \ + (ddr3_get_topology_map()->num_of_bus_per_interface) + +#define DDR3_IS_ECC_PUP3_MODE(if_mask) \ + (((if_mask) == 0xb) ? 1 : 0) +#define DDR3_IS_ECC_PUP4_MODE(if_mask) \ + (((((if_mask) & 0x10) == 0)) ? 0 : 1) +#define DDR3_IS_16BIT_DRAM_MODE(mask) \ + (((((mask) & 0x4) == 0)) ? 1 : 0) + +#define MEGA 1000000 +#define BUS_WIDTH_IN_BITS 8 + +/* + * DFX address Space + * Table 2: DFX address space + * Address Bits Value Description + * [31 : 20] 0x? DFX base address bases PCIe mapping + * [19 : 15] 0...Number_of_client-1 Client Index inside pipe. + * See also Table 1 Multi_cast = 29 Broadcast = 28 + * [14 : 13] 2'b01 Access to Client Internal Register + * [12 : 0] Client Internal Register offset See related Client Registers + * [14 : 13] 2'b00 Access to Ram Wrappers Internal Register + * [12 : 6] 0 Number_of_rams-1 Ram Index inside Client + * [5 : 0] Ram Wrapper Internal Register offset See related Ram Wrappers + * Registers + */ + +/* nsec */ +#define TREFI_LOW 7800 +#define TREFI_HIGH 3900 + +#define TR2R_VALUE_REG 0x180 +#define TR2R_MASK_REG 0x180 +#define TRFC_MASK_REG 0x7f +#define TR2W_MASK_REG 0x600 +#define TW2W_HIGH_VALUE_REG 0x1800 +#define TW2W_HIGH_MASK_REG 0xf800 +#define TRFC_HIGH_VALUE_REG 0x20000 +#define TRFC_HIGH_MASK_REG 0x70000 +#define TR2R_HIGH_VALUE_REG 0x0 +#define TR2R_HIGH_MASK_REG 0x380000 +#define TMOD_VALUE_REG 0x16000000 +#define TMOD_MASK_REG 0x1e000000 +#define T_VALUE_REG 0x40000000 +#define T_MASK_REG 0xc0000000 +#define AUTO_ZQC_TIMING 15384 +#define WRITE_XBAR_PORT1 0xc03f8077 +#define READ_XBAR_PORT1 0xc03f8073 +#define DISABLE_DDR_TUNING_DATA 0x02294285 +#define ENABLE_DDR_TUNING_DATA 0x12294285 + +#define ODPG_TRAINING_STATUS_REG 0x18488 +#define ODPG_TRAINING_TRIGGER_REG 0x1030 +#define ODPG_STATUS_DONE_REG 0x16fc +#define ODPG_ENABLE_REG 0x186d4 +#define ODPG_ENABLE_OFFS 0 +#define ODPG_DISABLE_OFFS 8 + +#define ODPG_TRAINING_CONTROL_REG 0x1034 +#define ODPG_OBJ1_OPCODE_REG 0x103c +#define ODPG_OBJ1_ITER_CNT_REG 0x10b4 +#define CALIB_OBJ_PRFA_REG 0x10c4 +#define ODPG_WRITE_LEVELING_DONE_CNTR_REG 0x10f8 +#define ODPG_WRITE_READ_MODE_ENABLE_REG 0x10fc +#define TRAINING_OPCODE_1_REG 0x10b4 +#define SDRAM_CONFIGURATION_REG 0x1400 +#define DDR_CONTROL_LOW_REG 0x1404 +#define SDRAM_TIMING_LOW_REG 0x1408 +#define SDRAM_TIMING_HIGH_REG 0x140c +#define SDRAM_ACCESS_CONTROL_REG 0x1410 +#define SDRAM_OPEN_PAGE_CONTROL_REG 0x1414 +#define SDRAM_OPERATION_REG 0x1418 +#define DUNIT_CONTROL_HIGH_REG 0x1424 +#define ODT_TIMING_LOW 0x1428 +#define DDR_TIMING_REG 0x142c +#define ODT_TIMING_HI_REG 0x147c +#define SDRAM_INIT_CONTROL_REG 0x1480 +#define SDRAM_ODT_CONTROL_HIGH_REG 0x1498 +#define DUNIT_ODT_CONTROL_REG 0x149c +#define READ_BUFFER_SELECT_REG 0x14a4 +#define DUNIT_MMASK_REG 0x14b0 +#define CALIB_MACHINE_CTRL_REG 0x14cc +#define DRAM_DLL_TIMING_REG 0x14e0 +#define 
DRAM_ZQ_INIT_TIMIMG_REG 0x14e4 +#define DRAM_ZQ_TIMING_REG 0x14e8 +#define DFS_REG 0x1528 +#define READ_DATA_SAMPLE_DELAY 0x1538 +#define READ_DATA_READY_DELAY 0x153c +#define TRAINING_REG 0x15b0 +#define TRAINING_SW_1_REG 0x15b4 +#define TRAINING_SW_2_REG 0x15b8 +#define TRAINING_PATTERN_BASE_ADDRESS_REG 0x15bc +#define TRAINING_DBG_1_REG 0x15c0 +#define TRAINING_DBG_2_REG 0x15c4 +#define TRAINING_DBG_3_REG 0x15c8 +#define RANK_CTRL_REG 0x15e0 +#define TIMING_REG 0x15e4 +#define DRAM_PHY_CONFIGURATION 0x15ec +#define MR0_REG 0x15d0 +#define MR1_REG 0x15d4 +#define MR2_REG 0x15d8 +#define MR3_REG 0x15dc +#define TIMING_REG 0x15e4 +#define ODPG_CTRL_CONTROL_REG 0x1600 +#define ODPG_DATA_CONTROL_REG 0x1630 +#define ODPG_PATTERN_ADDR_OFFSET_REG 0x1638 +#define ODPG_DATA_BUF_SIZE_REG 0x163c +#define PHY_LOCK_STATUS_REG 0x1674 +#define PHY_REG_FILE_ACCESS 0x16a0 +#define TRAINING_WRITE_LEVELING_REG 0x16ac +#define ODPG_PATTERN_ADDR_REG 0x16b0 +#define ODPG_PATTERN_DATA_HI_REG 0x16b4 +#define ODPG_PATTERN_DATA_LOW_REG 0x16b8 +#define ODPG_BIST_LAST_FAIL_ADDR_REG 0x16bc +#define ODPG_BIST_DATA_ERROR_COUNTER_REG 0x16c0 +#define ODPG_BIST_FAILED_DATA_HI_REG 0x16c4 +#define ODPG_BIST_FAILED_DATA_LOW_REG 0x16c8 +#define ODPG_WRITE_DATA_ERROR_REG 0x16cc +#define CS_ENABLE_REG 0x16d8 +#define WR_LEVELING_DQS_PATTERN_REG 0x16dc + +#define ODPG_BIST_DONE 0x186d4 +#define ODPG_BIST_DONE_BIT_OFFS 0 +#define ODPG_BIST_DONE_BIT_VALUE 0 + +#define RESULT_CONTROL_BYTE_PUP_0_REG 0x1830 +#define RESULT_CONTROL_BYTE_PUP_1_REG 0x1834 +#define RESULT_CONTROL_BYTE_PUP_2_REG 0x1838 +#define RESULT_CONTROL_BYTE_PUP_3_REG 0x183c +#define RESULT_CONTROL_BYTE_PUP_4_REG 0x18b0 + +#define RESULT_CONTROL_PUP_0_BIT_0_REG 0x18b4 +#define RESULT_CONTROL_PUP_0_BIT_1_REG 0x18b8 +#define RESULT_CONTROL_PUP_0_BIT_2_REG 0x18bc +#define RESULT_CONTROL_PUP_0_BIT_3_REG 0x18c0 +#define RESULT_CONTROL_PUP_0_BIT_4_REG 0x18c4 +#define RESULT_CONTROL_PUP_0_BIT_5_REG 0x18c8 +#define RESULT_CONTROL_PUP_0_BIT_6_REG 0x18cc +#define RESULT_CONTROL_PUP_0_BIT_7_REG 0x18f0 +#define RESULT_CONTROL_PUP_1_BIT_0_REG 0x18f4 +#define RESULT_CONTROL_PUP_1_BIT_1_REG 0x18f8 +#define RESULT_CONTROL_PUP_1_BIT_2_REG 0x18fc +#define RESULT_CONTROL_PUP_1_BIT_3_REG 0x1930 +#define RESULT_CONTROL_PUP_1_BIT_4_REG 0x1934 +#define RESULT_CONTROL_PUP_1_BIT_5_REG 0x1938 +#define RESULT_CONTROL_PUP_1_BIT_6_REG 0x193c +#define RESULT_CONTROL_PUP_1_BIT_7_REG 0x19b0 +#define RESULT_CONTROL_PUP_2_BIT_0_REG 0x19b4 +#define RESULT_CONTROL_PUP_2_BIT_1_REG 0x19b8 +#define RESULT_CONTROL_PUP_2_BIT_2_REG 0x19bc +#define RESULT_CONTROL_PUP_2_BIT_3_REG 0x19c0 +#define RESULT_CONTROL_PUP_2_BIT_4_REG 0x19c4 +#define RESULT_CONTROL_PUP_2_BIT_5_REG 0x19c8 +#define RESULT_CONTROL_PUP_2_BIT_6_REG 0x19cc +#define RESULT_CONTROL_PUP_2_BIT_7_REG 0x19f0 +#define RESULT_CONTROL_PUP_3_BIT_0_REG 0x19f4 +#define RESULT_CONTROL_PUP_3_BIT_1_REG 0x19f8 +#define RESULT_CONTROL_PUP_3_BIT_2_REG 0x19fc +#define RESULT_CONTROL_PUP_3_BIT_3_REG 0x1a30 +#define RESULT_CONTROL_PUP_3_BIT_4_REG 0x1a34 +#define RESULT_CONTROL_PUP_3_BIT_5_REG 0x1a38 +#define RESULT_CONTROL_PUP_3_BIT_6_REG 0x1a3c +#define RESULT_CONTROL_PUP_3_BIT_7_REG 0x1ab0 +#define RESULT_CONTROL_PUP_4_BIT_0_REG 0x1ab4 +#define RESULT_CONTROL_PUP_4_BIT_1_REG 0x1ab8 +#define RESULT_CONTROL_PUP_4_BIT_2_REG 0x1abc +#define RESULT_CONTROL_PUP_4_BIT_3_REG 0x1ac0 +#define RESULT_CONTROL_PUP_4_BIT_4_REG 0x1ac4 +#define RESULT_CONTROL_PUP_4_BIT_5_REG 0x1ac8 +#define RESULT_CONTROL_PUP_4_BIT_6_REG 0x1acc +#define RESULT_CONTROL_PUP_4_BIT_7_REG 0x1af0 
+ +#define WL_PHY_REG 0x0 +#define WRITE_CENTRALIZATION_PHY_REG 0x1 +#define RL_PHY_REG 0x2 +#define READ_CENTRALIZATION_PHY_REG 0x3 +#define PBS_RX_PHY_REG 0x50 +#define PBS_TX_PHY_REG 0x10 +#define PHY_CONTROL_PHY_REG 0x90 +#define BW_PHY_REG 0x92 +#define RATE_PHY_REG 0x94 +#define CMOS_CONFIG_PHY_REG 0xa2 +#define PAD_ZRI_CALIB_PHY_REG 0xa4 +#define PAD_ODT_CALIB_PHY_REG 0xa6 +#define PAD_CONFIG_PHY_REG 0xa8 +#define PAD_PRE_DISABLE_PHY_REG 0xa9 +#define TEST_ADLL_REG 0xbf +#define CSN_IOB_VREF_REG(cs) (0xdb + (cs * 12)) +#define CSN_IO_BASE_VREF_REG(cs) (0xd0 + (cs * 12)) + +#define RESULT_DB_PHY_REG_ADDR 0xc0 +#define RESULT_DB_PHY_REG_RX_OFFSET 5 +#define RESULT_DB_PHY_REG_TX_OFFSET 0 + +/* TBD - for NP5 use only CS 0 */ +#define PHY_WRITE_DELAY(cs) WL_PHY_REG +/*( ( _cs_ == 0 ) ? 0x0 : 0x4 )*/ +/* TBD - for NP5 use only CS 0 */ +#define PHY_READ_DELAY(cs) RL_PHY_REG + +#define DDR0_ADDR_1 0xf8258 +#define DDR0_ADDR_2 0xf8254 +#define DDR1_ADDR_1 0xf8270 +#define DDR1_ADDR_2 0xf8270 +#define DDR2_ADDR_1 0xf825c +#define DDR2_ADDR_2 0xf825c +#define DDR3_ADDR_1 0xf8264 +#define DDR3_ADDR_2 0xf8260 +#define DDR4_ADDR_1 0xf8274 +#define DDR4_ADDR_2 0xf8274 + +#define GENERAL_PURPOSE_RESERVED0_REG 0x182e0 + +#define GET_BLOCK_ID_MAX_FREQ(dev_num, block_id) 800000 +#define CS0_RD_LVL_REF_DLY_OFFS 0 +#define CS0_RD_LVL_REF_DLY_LEN 0 +#define CS0_RD_LVL_PH_SEL_OFFS 0 +#define CS0_RD_LVL_PH_SEL_LEN 0 + +#define CS_REGISTER_ADDR_OFFSET 4 +#define CALIBRATED_OBJECTS_REG_ADDR_OFFSET 0x10 + +#define MAX_POLLING_ITERATIONS 100000 + +#define PHASE_REG_OFFSET 32 +#define NUM_BYTES_IN_BURST 31 +#define NUM_OF_CS 4 +#define CS_REG_VALUE(cs_num) (cs_mask_reg[cs_num]) +#define ADLL_LENGTH 32 + +struct write_supp_result { + enum hws_wl_supp stage; + int is_pup_fail; +}; + +struct page_element { + enum hws_page_size page_size_8bit; + /* page size in 8 bits bus width */ + enum hws_page_size page_size_16bit; + /* page size in 16 bits bus width */ + u32 ui_page_mask; + /* Mask used in register */ +}; + +int ddr3_tip_write_leveling_static_config(u32 dev_num, u32 if_id, + enum hws_ddr_freq frequency, + u32 *round_trip_delay_arr); +int ddr3_tip_read_leveling_static_config(u32 dev_num, u32 if_id, + enum hws_ddr_freq frequency, + u32 *total_round_trip_delay_arr); +int ddr3_tip_if_write(u32 dev_num, enum hws_access_type interface_access, + u32 if_id, u32 reg_addr, u32 data_value, u32 mask); +int ddr3_tip_if_polling(u32 dev_num, enum hws_access_type access_type, + u32 if_id, u32 exp_value, u32 mask, u32 offset, + u32 poll_tries); +int ddr3_tip_if_read(u32 dev_num, enum hws_access_type interface_access, + u32 if_id, u32 reg_addr, u32 *data, u32 mask); +int ddr3_tip_bus_read_modify_write(u32 dev_num, + enum hws_access_type access_type, + u32 if_id, u32 phy_id, + enum hws_ddr_phy phy_type, + u32 reg_addr, u32 data_value, u32 reg_mask); +int ddr3_tip_bus_read(u32 dev_num, u32 if_id, enum hws_access_type phy_access, + u32 phy_id, enum hws_ddr_phy phy_type, u32 reg_addr, + u32 *data); +int ddr3_tip_bus_write(u32 dev_num, enum hws_access_type e_interface_access, + u32 if_id, enum hws_access_type e_phy_access, u32 phy_id, + enum hws_ddr_phy e_phy_type, u32 reg_addr, + u32 data_value); +int ddr3_tip_freq_set(u32 dev_num, enum hws_access_type e_access, u32 if_id, + enum hws_ddr_freq memory_freq); +int ddr3_tip_adjust_dqs(u32 dev_num); +int ddr3_tip_init_controller(u32 dev_num); +int ddr3_tip_ext_read(u32 dev_num, u32 if_id, u32 reg_addr, + u32 num_of_bursts, u32 *addr); +int ddr3_tip_ext_write(u32 dev_num, u32 if_id, u32 
reg_addr, + u32 num_of_bursts, u32 *addr); +int ddr3_tip_dynamic_read_leveling(u32 dev_num, u32 ui_freq); +int ddr3_tip_legacy_dynamic_read_leveling(u32 dev_num); +int ddr3_tip_dynamic_per_bit_read_leveling(u32 dev_num, u32 ui_freq); +int ddr3_tip_legacy_dynamic_write_leveling(u32 dev_num); +int ddr3_tip_dynamic_write_leveling(u32 dev_num); +int ddr3_tip_dynamic_write_leveling_supp(u32 dev_num); +int ddr3_tip_static_init_controller(u32 dev_num); +int ddr3_tip_configure_phy(u32 dev_num); +int ddr3_tip_load_pattern_to_odpg(u32 dev_num, enum hws_access_type access_type, + u32 if_id, enum hws_pattern pattern, + u32 load_addr); +int ddr3_tip_load_pattern_to_mem(u32 dev_num, enum hws_pattern e_pattern); +int ddr3_tip_configure_odpg(u32 dev_num, enum hws_access_type access_type, + u32 if_id, enum hws_dir direction, u32 tx_phases, + u32 tx_burst_size, u32 rx_phases, + u32 delay_between_burst, u32 rd_mode, u32 cs_num, + u32 addr_stress_jump, u32 single_pattern); +int ddr3_tip_set_atr(u32 dev_num, u32 flag_id, u32 value); +int ddr3_tip_write_mrs_cmd(u32 dev_num, u32 *cs_mask_arr, u32 cmd, u32 data, + u32 mask); +int ddr3_tip_write_cs_result(u32 dev_num, u32 offset); +int ddr3_tip_get_first_active_if(u8 dev_num, u32 interface_mask, u32 *if_id); +int ddr3_tip_reset_fifo_ptr(u32 dev_num); +int read_pup_value(int pup_values[MAX_INTERFACE_NUM * MAX_BUS_NUM], + int reg_addr, u32 mask); +int read_adll_value(u32 pup_values[MAX_INTERFACE_NUM * MAX_BUS_NUM], + int reg_addr, u32 mask); +int write_adll_value(u32 pup_values[MAX_INTERFACE_NUM * MAX_BUS_NUM], + int reg_addr); +int ddr3_tip_tune_training_params(u32 dev_num, + struct tune_train_params *params); + +#endif /* _DDR3_TRAINING_IP_FLOW_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_pbs.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_pbs.h new file mode 100644 index 0000000000..c6be67c40a --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_pbs.h @@ -0,0 +1,41 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_PBS_H_ +#define _DDR3_TRAINING_IP_PBS_H_ + +enum { + EBA_CONFIG, + EEBA_CONFIG, + SBA_CONFIG +}; + +enum hws_training_load_op { + TRAINING_LOAD_OPERATION_UNLOAD, + TRAINING_LOAD_OPERATION_LOAD +}; + +enum hws_edge { + TRAINING_EDGE_1, + TRAINING_EDGE_2 +}; + +enum hws_edge_search { + TRAINING_EDGE_MAX, + TRAINING_EDGE_MIN +}; + +enum pbs_dir { + PBS_TX_MODE = 0, + PBS_RX_MODE, + NUM_OF_PBS_MODES +}; + +int ddr3_tip_pbs_rx(u32 dev_num); +int ddr3_tip_print_all_pbs_result(u32 dev_num); +int ddr3_tip_pbs_tx(u32 dev_num); + +#endif /* _DDR3_TRAINING_IP_PBS_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_prv_if.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_prv_if.h new file mode 100644 index 0000000000..724b106275 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_prv_if.h @@ -0,0 +1,107 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_PRV_IF_H +#define _DDR3_TRAINING_IP_PRV_IF_H + +#include "ddr3_training_ip.h" +#include "ddr3_training_ip_flow.h" +#include "ddr3_training_ip_bist.h" + +enum hws_static_config_type { + WRITE_LEVELING_STATIC, + READ_LEVELING_STATIC +}; + +struct ddr3_device_info { + u32 device_id; + u32 ck_delay; +}; + +typedef int (*HWS_TIP_DUNIT_MUX_SELECT_FUNC_PTR)(u8 dev_num, int enable); +typedef int (*HWS_TIP_DUNIT_REG_READ_FUNC_PTR)( + u8 dev_num, enum hws_access_type interface_access, u32 if_id, + u32 offset, u32 *data, u32 mask); +typedef int (*HWS_TIP_DUNIT_REG_WRITE_FUNC_PTR)( + u8 dev_num, enum hws_access_type interface_access, u32 if_id, + u32 offset, u32 data, u32 mask); +typedef int (*HWS_TIP_GET_FREQ_CONFIG_INFO)( + u8 dev_num, enum hws_ddr_freq freq, + struct hws_tip_freq_config_info *freq_config_info); +typedef int (*HWS_TIP_GET_DEVICE_INFO)( + u8 dev_num, struct ddr3_device_info *info_ptr); +typedef int (*HWS_GET_CS_CONFIG_FUNC_PTR)( + u8 dev_num, u32 cs_mask, struct hws_cs_config_info *cs_info); +typedef int (*HWS_SET_FREQ_DIVIDER_FUNC_PTR)( + u8 dev_num, u32 if_id, enum hws_ddr_freq freq); +typedef int (*HWS_GET_INIT_FREQ)(u8 dev_num, enum hws_ddr_freq *freq); +typedef int (*HWS_TRAINING_IP_IF_WRITE_FUNC_PTR)( + u32 dev_num, enum hws_access_type access_type, u32 dunit_id, + u32 reg_addr, u32 data, u32 mask); +typedef int (*HWS_TRAINING_IP_IF_READ_FUNC_PTR)( + u32 dev_num, enum hws_access_type access_type, u32 dunit_id, + u32 reg_addr, u32 *data, u32 mask); +typedef int (*HWS_TRAINING_IP_BUS_WRITE_FUNC_PTR)( + u32 dev_num, enum hws_access_type dunit_access_type, u32 if_id, + enum hws_access_type phy_access_type, u32 phy_id, + enum hws_ddr_phy phy_type, u32 reg_addr, u32 data); +typedef int (*HWS_TRAINING_IP_BUS_READ_FUNC_PTR)( + u32 dev_num, u32 if_id, enum hws_access_type phy_access_type, + u32 phy_id, enum hws_ddr_phy phy_type, u32 reg_addr, u32 *data); +typedef int (*HWS_TRAINING_IP_ALGO_RUN_FUNC_PTR)( + u32 dev_num, enum hws_algo_type algo_type); +typedef int (*HWS_TRAINING_IP_SET_FREQ_FUNC_PTR)( + u32 dev_num, enum hws_access_type access_type, u32 if_id, + enum hws_ddr_freq frequency); +typedef int (*HWS_TRAINING_IP_INIT_CONTROLLER_FUNC_PTR)( + u32 dev_num, struct init_cntr_param *init_cntr_prm); +typedef int (*HWS_TRAINING_IP_PBS_RX_FUNC_PTR)(u32 dev_num); +typedef int (*HWS_TRAINING_IP_PBS_TX_FUNC_PTR)(u32 dev_num); +typedef int (*HWS_TRAINING_IP_SELECT_CONTROLLER_FUNC_PTR)( + u32 dev_num, int enable); +typedef int (*HWS_TRAINING_IP_TOPOLOGY_MAP_LOAD_FUNC_PTR)( + u32 dev_num, struct hws_topology_map *topology_map); +typedef int (*HWS_TRAINING_IP_STATIC_CONFIG_FUNC_PTR)( + u32 dev_num, enum hws_ddr_freq frequency, + enum hws_static_config_type static_config_type, u32 if_id); +typedef int (*HWS_TRAINING_IP_EXTERNAL_READ_PTR)( + u32 dev_num, u32 if_id, u32 ddr_addr, u32 num_bursts, u32 *data); +typedef int (*HWS_TRAINING_IP_EXTERNAL_WRITE_PTR)( + u32 dev_num, u32 if_id, u32 ddr_addr, u32 num_bursts, u32 *data); +typedef int (*HWS_TRAINING_IP_BIST_ACTIVATE)( + u32 dev_num, enum hws_pattern pattern, enum hws_access_type access_type, + u32 if_num, enum hws_dir direction, + enum hws_stress_jump addr_stress_jump, + enum hws_pattern_duration duration, + enum hws_bist_operation oper_type, u32 offset, u32 cs_num, + u32 pattern_addr_length); +typedef int (*HWS_TRAINING_IP_BIST_READ_RESULT)( + u32 dev_num, u32 if_id, struct bist_result *pst_bist_result); +typedef int 
(*HWS_TRAINING_IP_LOAD_TOPOLOGY)(u32 dev_num, u32 config_num); +typedef int (*HWS_TRAINING_IP_READ_LEVELING)(u32 dev_num, u32 config_num); +typedef int (*HWS_TRAINING_IP_WRITE_LEVELING)(u32 dev_num, u32 config_num); +typedef u32 (*HWS_TRAINING_IP_GET_TEMP)(u8 dev_num); + +struct hws_tip_config_func_db { + HWS_TIP_DUNIT_MUX_SELECT_FUNC_PTR tip_dunit_mux_select_func; + HWS_TIP_DUNIT_REG_READ_FUNC_PTR tip_dunit_read_func; + HWS_TIP_DUNIT_REG_WRITE_FUNC_PTR tip_dunit_write_func; + HWS_TIP_GET_FREQ_CONFIG_INFO tip_get_freq_config_info_func; + HWS_TIP_GET_DEVICE_INFO tip_get_device_info_func; + HWS_SET_FREQ_DIVIDER_FUNC_PTR tip_set_freq_divider_func; + HWS_GET_CS_CONFIG_FUNC_PTR tip_get_cs_config_info; + HWS_TRAINING_IP_GET_TEMP tip_get_temperature; +}; + +int ddr3_tip_init_config_func(u32 dev_num, + struct hws_tip_config_func_db *config_func); +int ddr3_tip_register_xsb_info(u32 dev_num, + struct hws_xsb_info *xsb_info_table); +enum hws_result *ddr3_tip_get_result_ptr(u32 stage); +int ddr3_set_freq_config_info(struct hws_tip_freq_config_info *table); +int print_device_info(u8 dev_num); + +#endif /* _DDR3_TRAINING_IP_PRV_IF_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_static.h b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_static.h new file mode 100644 index 0000000000..878068b24d --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_static.h @@ -0,0 +1,31 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_IP_STATIC_H_ +#define _DDR3_TRAINING_IP_STATIC_H_ + +#include "ddr3_training_ip_def.h" +#include "ddr3_training_ip.h" + +struct trip_delay_element { + u32 dqs_delay; /* DQS delay (m_sec) */ + u32 ck_delay; /* CK Delay (m_sec) */ +}; + +struct hws_tip_static_config_info { + u32 silicon_delay; + struct trip_delay_element *package_trace_arr; + struct trip_delay_element *board_trace_arr; +}; + +int ddr3_tip_run_static_alg(u32 dev_num, enum hws_ddr_freq freq); +int ddr3_tip_init_static_config_db( + u32 dev_num, struct hws_tip_static_config_info *static_config_info); +int ddr3_tip_init_specific_reg_config(u32 dev_num, + struct reg_data *reg_config_arr); +int ddr3_tip_static_phy_init_controller(u32 dev_num); + +#endif /* _DDR3_TRAINING_IP_STATIC_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.c b/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.c new file mode 100644 index 0000000000..8073c2baaf --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.c @@ -0,0 +1,1835 @@ +/* + * Copyright (C) Marvell International Ltd. 
and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include + +#include "ddr3_init.h" + +#define WL_ITERATION_NUM 10 +#define ONE_CLOCK_ERROR_SHIFT 2 +#define ALIGN_ERROR_SHIFT -2 + +static u32 pup_mask_table[] = { + 0x000000ff, + 0x0000ff00, + 0x00ff0000, + 0xff000000 +}; + +static struct write_supp_result wr_supp_res[MAX_INTERFACE_NUM][MAX_BUS_NUM]; + +static int ddr3_tip_dynamic_write_leveling_seq(u32 dev_num); +static int ddr3_tip_dynamic_read_leveling_seq(u32 dev_num); +static int ddr3_tip_dynamic_per_bit_read_leveling_seq(u32 dev_num); +static int ddr3_tip_wl_supp_align_err_shift(u32 dev_num, u32 if_id, u32 bus_id, + u32 bus_id_delta); +static int ddr3_tip_wl_supp_align_phase_shift(u32 dev_num, u32 if_id, + u32 bus_id, u32 offset, + u32 bus_id_delta); +static int ddr3_tip_xsb_compare_test(u32 dev_num, u32 if_id, u32 bus_id, + u32 edge_offset, u32 bus_id_delta); +static int ddr3_tip_wl_supp_one_clk_err_shift(u32 dev_num, u32 if_id, + u32 bus_id, u32 bus_id_delta); + +u32 hws_ddr3_tip_max_cs_get(void) +{ + u32 c_cs; + static u32 max_cs; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (!max_cs) { + for (c_cs = 0; c_cs < NUM_OF_CS; c_cs++) { + VALIDATE_ACTIVE(tm-> + interface_params[0].as_bus_params[0]. + cs_bitmask, c_cs); + max_cs++; + } + } + + return max_cs; +} + +/***************************************************************************** +Dynamic read leveling +******************************************************************************/ +int ddr3_tip_dynamic_read_leveling(u32 dev_num, u32 freq) +{ + u32 data, mask; + u32 max_cs = hws_ddr3_tip_max_cs_get(); + u32 bus_num, if_id, cl_val; + enum hws_speed_bin speed_bin_index; + /* save current CS value */ + u32 cs_enable_reg_val[MAX_INTERFACE_NUM] = { 0 }; + int is_any_pup_fail = 0; + u32 data_read[MAX_INTERFACE_NUM + 1] = { 0 }; + u8 rl_values[NUM_OF_CS][MAX_BUS_NUM][MAX_INTERFACE_NUM]; + struct pattern_info *pattern_table = ddr3_tip_get_pattern_table(); + u16 *mask_results_pup_reg_map = ddr3_tip_get_mask_results_pup_reg_map(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + if (rl_version == 0) { + /* OLD RL machine */ + data = 0x40; + data |= (1 << 20); + + /* TBD multi CS */ + CHECK_STATUS(ddr3_tip_if_write( + dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, TRAINING_REG, + data, 0x11ffff)); + CHECK_STATUS(ddr3_tip_if_write( + dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + TRAINING_PATTERN_BASE_ADDRESS_REG, + 0, 0xfffffff8)); + CHECK_STATUS(ddr3_tip_if_write( + dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, TRAINING_REG, + (u32)(1 << 31), (u32)(1 << 31))); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + training_result[training_stage][if_id] = TEST_SUCCESS; + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0, + (u32)(1 << 31), TRAINING_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING( + DEBUG_LEVEL_ERROR, + ("RL: DDR3 poll failed(1) IF %d\n", + if_id)); + training_result[training_stage][if_id] = + TEST_FAILED; + + if (debug_mode == 0) + return MV_FAIL; + } + } + + /* read read-leveling result */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_REG, data_read, 1 << 30)); + /* exit read leveling mode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, 0x8, 0x9)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + 
TRAINING_SW_1_REG, 1 << 16, 1 << 16)); + + /* disable RL machine all Trn_CS[3:0] , [16:0] */ + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_REG, 0, 0xf1ffff)); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if ((data_read[if_id] & (1 << 30)) == 0) { + DEBUG_LEVELING( + DEBUG_LEVEL_ERROR, + ("\n_read Leveling failed for IF %d\n", + if_id)); + training_result[training_stage][if_id] = + TEST_FAILED; + if (debug_mode == 0) + return MV_FAIL; + } + } + return MV_OK; + } + + /* NEW RL machine */ + for (effective_cs = 0; effective_cs < NUM_OF_CS; effective_cs++) + for (bus_num = 0; bus_num < MAX_BUS_NUM; bus_num++) + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) + rl_values[effective_cs][bus_num][if_id] = 0; + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + training_result[training_stage][if_id] = TEST_SUCCESS; + + /* save current cs enable reg val */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, cs_enable_reg_val, + MASK_ALL_BITS)); + /* enable single cs */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, (1 << 3), (1 << 3))); + } + + ddr3_tip_reset_fifo_ptr(dev_num); + + /* + * Phase 1: Load pattern (using ODPG) + * + * enter Read Leveling mode + * only 27 bits are masked + * assuming non multi-CS configuration + * write to CS = 0 for the non multi CS configuration, note + * that the results shall be read back to the required CS !!! + */ + + /* BUS count is 0 shifted 26 */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0x3, 0x3)); + CHECK_STATUS(ddr3_tip_configure_odpg + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, 0, + pattern_table[PATTERN_RL].num_of_phases_tx, 0, + pattern_table[PATTERN_RL].num_of_phases_rx, 0, 0, + effective_cs, STRESS_NONE, DURATION_SINGLE)); + + /* load pattern to ODPG */ + ddr3_tip_load_pattern_to_odpg(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, PATTERN_RL, + pattern_table[PATTERN_RL]. 
+ start_addr); + + /* + * Phase 2: ODPG to Read Leveling mode + */ + + /* General Training Opcode register */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_WRITE_READ_MODE_ENABLE_REG, 0, + MASK_ALL_BITS)); + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_TRAINING_CONTROL_REG, + (0x301b01 | effective_cs << 2), 0x3c3fef)); + + /* Object1 opcode register 0 & 1 */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + speed_bin_index = + tm->interface_params[if_id].speed_bin_index; + cl_val = + cas_latency_table[speed_bin_index].cl_val[freq]; + data = (cl_val << 17) | (0x3 << 25); + mask = (0xff << 9) | (0x1f << 17) | (0x3 << 25); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ODPG_OBJ1_OPCODE_REG, data, mask)); + } + + /* Set iteration count to max value */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_OPCODE_1_REG, 0xd00, 0xd00)); + + /* + * Phase 2: Mask config + */ + + ddr3_tip_dynamic_read_leveling_seq(dev_num); + + /* + * Phase 3: Read Leveling execution + */ + + /* temporary jira dunit=14751 */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_DBG_1_REG, 0, (u32)(1 << 31))); + /* configure phy reset value */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_DBG_3_REG, (0x7f << 24), + (u32)(0xff << 24))); + /* data pup rd reset enable */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + SDRAM_CONFIGURATION_REG, 0, (1 << 30))); + /* data pup rd reset disable */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + SDRAM_CONFIGURATION_REG, (1 << 30), (1 << 30))); + /* training SW override & training RL mode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, 0x1, 0x9)); + /* training enable */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_REG, (1 << 24) | (1 << 20), + (1 << 24) | (1 << 20))); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_REG, (u32)(1 << 31), (u32)(1 << 31))); + + /********* trigger training *******************/ + /* Trigger, poll on status and disable ODPG */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_TRAINING_TRIGGER_REG, 0x1, 0x1)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_TRAINING_STATUS_REG, 0x1, 0x1)); + + /* check for training done + results pass */ + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, 0x2, 0x2, + ODPG_TRAINING_STATUS_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("Training Done Failed\n")); + return MV_FAIL; + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, + if_id, + ODPG_TRAINING_TRIGGER_REG, data_read, + 0x4)); + data = data_read[if_id]; + if (data != 0x0) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("Training Result Failed\n")); + } + } + + /*disable ODPG - Back to functional mode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_ENABLE_REG, 0x1 << ODPG_DISABLE_OFFS, + (0x1 << ODPG_DISABLE_OFFS))); + if 
(ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, 0x0, 0x1, + ODPG_ENABLE_REG, MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("ODPG disable failed ")); + return MV_FAIL; + } + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0, MASK_ALL_BITS)); + + /* double loop on bus, pup */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* check training done */ + is_any_pup_fail = 0; + for (bus_num = 0; + bus_num < tm->num_of_bus_per_interface; + bus_num++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_num); + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, + if_id, (1 << 25), (1 << 25), + mask_results_pup_reg_map[bus_num], + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("\n_r_l: DDR3 poll failed(2) for bus %d", + bus_num)); + is_any_pup_fail = 1; + } else { + /* read result per pup */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + mask_results_pup_reg_map + [bus_num], data_read, + 0xff)); + rl_values[effective_cs][bus_num] + [if_id] = (u8)data_read[if_id]; + } + } + + if (is_any_pup_fail == 1) { + training_result[training_stage][if_id] = + TEST_FAILED; + if (debug_mode == 0) + return MV_FAIL; + } + } + + DEBUG_LEVELING(DEBUG_LEVEL_INFO, ("RL exit read leveling\n")); + + /* + * Phase 3: Exit Read Leveling + */ + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, (1 << 3), (1 << 3))); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_1_REG, (1 << 16), (1 << 16))); + /* set ODPG to functional */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0x0, MASK_ALL_BITS)); + + /* + * Copy the result from the effective CS search to the + * real Functional CS + */ + /*ddr3_tip_write_cs_result(dev_num, RL_PHY_REG); */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0x0, MASK_ALL_BITS)); + } + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + /* double loop on bus, pup */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_num = 0; + bus_num < tm->num_of_bus_per_interface; + bus_num++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_num); + /* read result per pup from arry */ + data = rl_values[effective_cs][bus_num][if_id]; + data = (data & 0x1f) | + (((data & 0xe0) >> 5) << 6); + ddr3_tip_bus_write(dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, + bus_num, DDR_PHY_DATA, + RL_PHY_REG + + ((effective_cs == + 0) ? 
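+ /*
+ * Worked example (illustrative only; the field names assume the same
+ * ADLL/phase split documented for the WL register later in this file):
+ * a raw RL result of 0xea is repacked above as
+ * (0xea & 0x1f) | (((0xea & 0xe0) >> 5) << 6) = 0x0a | 0x1c0 = 0x1ca,
+ * i.e. bits [4:0] and [8:6] of RL_PHY_REG, and the +0x4 address offset
+ * is used for every chip-select other than CS0.
+ */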
0x0 : 0x4), data); + } + } + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs = 0; + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* restore cs enable value */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, cs_enable_reg_val[if_id], + MASK_ALL_BITS)); + if (odt_config != 0) { + CHECK_STATUS(ddr3_tip_write_additional_odt_setting + (dev_num, if_id)); + } + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (training_result[training_stage][if_id] == TEST_FAILED) + return MV_FAIL; + } + + return MV_OK; +} + +/* + * Legacy Dynamic write leveling + */ +int ddr3_tip_legacy_dynamic_write_leveling(u32 dev_num) +{ + u32 c_cs, if_id, cs_mask = 0; + u32 max_cs = hws_ddr3_tip_max_cs_get(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* + * In TRAINIUNG reg (0x15b0) write 0x80000008 | cs_mask: + * Trn_start + * cs_mask = 0x1 <<20 Trn_CS0 - CS0 is included in the DDR3 training + * cs_mask = 0x1 <<21 Trn_CS1 - CS1 is included in the DDR3 training + * cs_mask = 0x1 <<22 Trn_CS2 - CS2 is included in the DDR3 training + * cs_mask = 0x1 <<23 Trn_CS3 - CS3 is included in the DDR3 training + * Trn_auto_seq = write leveling + */ + for (c_cs = 0; c_cs < max_cs; c_cs++) + cs_mask = cs_mask | 1 << (20 + c_cs); + + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, 0, + TRAINING_REG, (0x80000008 | cs_mask), + 0xffffffff)); + mdelay(20); + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0, + (u32)0x80000000, TRAINING_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("polling failed for Old WL result\n")); + return MV_FAIL; + } + } + + return MV_OK; +} + +/* + * Legacy Dynamic read leveling + */ +int ddr3_tip_legacy_dynamic_read_leveling(u32 dev_num) +{ + u32 c_cs, if_id, cs_mask = 0; + u32 max_cs = hws_ddr3_tip_max_cs_get(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* + * In TRAINIUNG reg (0x15b0) write 0x80000040 | cs_mask: + * Trn_start + * cs_mask = 0x1 <<20 Trn_CS0 - CS0 is included in the DDR3 training + * cs_mask = 0x1 <<21 Trn_CS1 - CS1 is included in the DDR3 training + * cs_mask = 0x1 <<22 Trn_CS2 - CS2 is included in the DDR3 training + * cs_mask = 0x1 <<23 Trn_CS3 - CS3 is included in the DDR3 training + * Trn_auto_seq = Read Leveling using training pattern + */ + for (c_cs = 0; c_cs < max_cs; c_cs++) + cs_mask = cs_mask | 1 << (20 + c_cs); + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, 0, TRAINING_REG, + (0x80000040 | cs_mask), 0xffffffff)); + mdelay(100); + + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0, + (u32)0x80000000, TRAINING_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("polling failed for Old RL result\n")); + return MV_FAIL; + } + } + + return MV_OK; +} + +/* + * Dynamic per bit read leveling + */ +int ddr3_tip_dynamic_per_bit_read_leveling(u32 dev_num, u32 freq) +{ + u32 data, mask; + u32 bus_num, if_id, cl_val, bit_num; + u32 curr_numb, curr_min_delay; + int adll_array[3] = { 0, -0xa, 0x14 }; + u32 phyreg3_arr[MAX_INTERFACE_NUM][MAX_BUS_NUM]; + enum hws_speed_bin speed_bin_index; + int is_any_pup_fail = 0; + int break_loop = 
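+ /*
+ * Hedged summary of this routine (from the code below, not the
+ * original comments): the search is repeated up to three times; any
+ * pup whose per-bit results have not yet converged gets its phy
+ * register 0x3 (READ_CENTRALIZATION_PHY_REG) value nudged by the next
+ * offset in adll_array[] and is retried. A pup is marked done once all
+ * of its bit results fall within MAX_DQ_READ_LEVELING_DELAY taps of
+ * the smallest one.
+ */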
0; + u32 cs_enable_reg_val[MAX_INTERFACE_NUM]; /* save current CS value */ + u32 data_read[MAX_INTERFACE_NUM]; + int per_bit_rl_pup_status[MAX_INTERFACE_NUM][MAX_BUS_NUM]; + u32 data2_write[MAX_INTERFACE_NUM][MAX_BUS_NUM]; + struct pattern_info *pattern_table = ddr3_tip_get_pattern_table(); + u16 *mask_results_dq_reg_map = ddr3_tip_get_mask_results_dq_reg(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_num = 0; + bus_num <= tm->num_of_bus_per_interface; bus_num++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_num); + per_bit_rl_pup_status[if_id][bus_num] = 0; + data2_write[if_id][bus_num] = 0; + /* read current value of phy register 0x3 */ + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, ACCESS_TYPE_UNICAST, + bus_num, DDR_PHY_DATA, + READ_CENTRALIZATION_PHY_REG, + &phyreg3_arr[if_id][bus_num])); + } + } + + /* NEW RL machine */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + training_result[training_stage][if_id] = TEST_SUCCESS; + + /* save current cs enable reg val */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, &cs_enable_reg_val[if_id], + MASK_ALL_BITS)); + /* enable single cs */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, (1 << 3), (1 << 3))); + } + + ddr3_tip_reset_fifo_ptr(dev_num); + for (curr_numb = 0; curr_numb < 3; curr_numb++) { + /* + * Phase 1: Load pattern (using ODPG) + * + * enter Read Leveling mode + * only 27 bits are masked + * assuming non multi-CS configuration + * write to CS = 0 for the non multi CS configuration, note that + * the results shall be read back to the required CS !!! + */ + + /* BUS count is 0 shifted 26 */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0x3, 0x3)); + CHECK_STATUS(ddr3_tip_configure_odpg + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, 0, + pattern_table[PATTERN_TEST].num_of_phases_tx, 0, + pattern_table[PATTERN_TEST].num_of_phases_rx, 0, + 0, 0, STRESS_NONE, DURATION_SINGLE)); + + /* load pattern to ODPG */ + ddr3_tip_load_pattern_to_odpg(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, PATTERN_TEST, + pattern_table[PATTERN_TEST]. 
+ start_addr); + + /* + * Phase 2: ODPG to Read Leveling mode + */ + + /* General Training Opcode register */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_WRITE_READ_MODE_ENABLE_REG, 0, + MASK_ALL_BITS)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_TRAINING_CONTROL_REG, 0x301b01, 0x3c3fef)); + + /* Object1 opcode register 0 & 1 */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + speed_bin_index = + tm->interface_params[if_id].speed_bin_index; + cl_val = + cas_latency_table[speed_bin_index].cl_val[freq]; + data = (cl_val << 17) | (0x3 << 25); + mask = (0xff << 9) | (0x1f << 17) | (0x3 << 25); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ODPG_OBJ1_OPCODE_REG, data, mask)); + } + + /* Set iteration count to max value */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_OPCODE_1_REG, 0xd00, 0xd00)); + + /* + * Phase 2: Mask config + */ + + ddr3_tip_dynamic_per_bit_read_leveling_seq(dev_num); + + /* + * Phase 3: Read Leveling execution + */ + + /* temporary jira dunit=14751 */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_DBG_1_REG, 0, (u32)(1 << 31))); + /* configure phy reset value */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_DBG_3_REG, (0x7f << 24), + (u32)(0xff << 24))); + /* data pup rd reset enable */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + SDRAM_CONFIGURATION_REG, 0, (1 << 30))); + /* data pup rd reset disable */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + SDRAM_CONFIGURATION_REG, (1 << 30), (1 << 30))); + /* training SW override & training RL mode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, 0x1, 0x9)); + /* training enable */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_REG, (1 << 24) | (1 << 20), + (1 << 24) | (1 << 20))); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_REG, (u32)(1 << 31), (u32)(1 << 31))); + + /********* trigger training *******************/ + /* Trigger, poll on status and disable ODPG */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_TRAINING_TRIGGER_REG, 0x1, 0x1)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_TRAINING_STATUS_REG, 0x1, 0x1)); + + /*check for training done + results pass */ + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, 0x2, 0x2, + ODPG_TRAINING_STATUS_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("Training Done Failed\n")); + return MV_FAIL; + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, + if_id, + ODPG_TRAINING_TRIGGER_REG, data_read, + 0x4)); + data = data_read[if_id]; + if (data != 0x0) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("Training Result Failed\n")); + } + } + + /*disable ODPG - Back to functional mode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_ENABLE_REG, 0x1 << ODPG_DISABLE_OFFS, + (0x1 << ODPG_DISABLE_OFFS))); + if (ddr3_tip_if_polling + 
(dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, 0x0, 0x1, + ODPG_ENABLE_REG, MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("ODPG disable failed ")); + return MV_FAIL; + } + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0, MASK_ALL_BITS)); + + /* double loop on bus, pup */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* check training done */ + for (bus_num = 0; + bus_num < tm->num_of_bus_per_interface; + bus_num++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_num); + + if (per_bit_rl_pup_status[if_id][bus_num] + == 0) { + curr_min_delay = 0; + for (bit_num = 0; bit_num < 8; + bit_num++) { + if (ddr3_tip_if_polling + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, (1 << 25), + (1 << 25), + mask_results_dq_reg_map + [bus_num * 8 + bit_num], + MAX_POLLING_ITERATIONS) != + MV_OK) { + DEBUG_LEVELING + (DEBUG_LEVEL_ERROR, + ("\n_r_l: DDR3 poll failed(2) for bus %d bit %d\n", + bus_num, + bit_num)); + } else { + /* read result per pup */ + CHECK_STATUS + (ddr3_tip_if_read + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + mask_results_dq_reg_map + [bus_num * 8 + + bit_num], + data_read, + MASK_ALL_BITS)); + data = + (data_read + [if_id] & + 0x1f) | + ((data_read + [if_id] & + 0xe0) << 1); + if (curr_min_delay == 0) + curr_min_delay = + data; + else if (data < + curr_min_delay) + curr_min_delay = + data; + if (data > data2_write[if_id][bus_num]) + data2_write + [if_id] + [bus_num] = + data; + } + } + + if (data2_write[if_id][bus_num] <= + (curr_min_delay + + MAX_DQ_READ_LEVELING_DELAY)) { + per_bit_rl_pup_status[if_id] + [bus_num] = 1; + } + } + } + } + + /* check if there is need to search new phyreg3 value */ + if (curr_numb < 2) { + /* if there is DLL that is not checked yet */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; + if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_num = 0; + bus_num < tm->num_of_bus_per_interface; + bus_num++) { + VALIDATE_ACTIVE(tm->bus_act_mask, + bus_num); + if (per_bit_rl_pup_status[if_id] + [bus_num] != 1) { + /* go to next ADLL value */ + CHECK_STATUS + (ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, + bus_num, DDR_PHY_DATA, + READ_CENTRALIZATION_PHY_REG, + (phyreg3_arr[if_id] + [bus_num] + + adll_array[curr_numb]))); + break_loop = 1; + break; + } + } + if (break_loop) + break; + } + } /* if (curr_numb < 2) */ + if (!break_loop) + break; + } /* for ( curr_numb = 0; curr_numb <3; curr_numb++) */ + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_num = 0; bus_num < tm->num_of_bus_per_interface; + bus_num++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_num); + if (per_bit_rl_pup_status[if_id][bus_num] == 1) + ddr3_tip_bus_write(dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, + bus_num, DDR_PHY_DATA, + RL_PHY_REG + + CS_REG_VALUE(effective_cs), + data2_write[if_id] + [bus_num]); + else + is_any_pup_fail = 1; + } + + /* TBD flow does not support multi CS */ + /* + * cs_bitmask = tm->interface_params[if_id]. + * as_bus_params[bus_num].cs_bitmask; + */ + /* divide by 4 is used for retrieving the CS number */ + /* + * TBD BC2 - what is the PHY address for other + * CS ddr3_tip_write_cs_result() ??? 
+ */ + /* + * find what should be written to PHY + * - max delay that is less than threshold + */ + if (is_any_pup_fail == 1) { + training_result[training_stage][if_id] = TEST_FAILED; + if (debug_mode == 0) + return MV_FAIL; + } + } + DEBUG_LEVELING(DEBUG_LEVEL_INFO, ("RL exit read leveling\n")); + + /* + * Phase 3: Exit Read Leveling + */ + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, (1 << 3), (1 << 3))); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_1_REG, (1 << 16), (1 << 16))); + /* set ODPG to functional */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0x0, MASK_ALL_BITS)); + /* + * Copy the result from the effective CS search to the real + * Functional CS + */ + ddr3_tip_write_cs_result(dev_num, RL_PHY_REG); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_DATA_CONTROL_REG, 0x0, MASK_ALL_BITS)); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* restore cs enable value */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, cs_enable_reg_val[if_id], + MASK_ALL_BITS)); + if (odt_config != 0) { + CHECK_STATUS(ddr3_tip_write_additional_odt_setting + (dev_num, if_id)); + } + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (training_result[training_stage][if_id] == TEST_FAILED) + return MV_FAIL; + } + + return MV_OK; +} + +int ddr3_tip_calc_cs_mask(u32 dev_num, u32 if_id, u32 effective_cs, + u32 *cs_mask) +{ + u32 all_bus_cs = 0, same_bus_cs; + u32 bus_cnt; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + *cs_mask = same_bus_cs = CS_BIT_MASK; + + /* + * In some of the devices (such as BC2), the CS is per pup and there + * for mixed mode is valid on like other devices where CS configuration + * is per interface. + * In order to know that, we do 'Or' and 'And' operation between all + * CS (of the pups). + * If they are they are not the same then it's mixed mode so all CS + * should be configured (when configuring the MRS) + */ + for (bus_cnt = 0; bus_cnt < tm->num_of_bus_per_interface; bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + + all_bus_cs |= tm->interface_params[if_id]. + as_bus_params[bus_cnt].cs_bitmask; + same_bus_cs &= tm->interface_params[if_id]. + as_bus_params[bus_cnt].cs_bitmask; + + /* cs enable is active low */ + *cs_mask &= ~tm->interface_params[if_id]. 
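+ /*
+ * Illustrative example (not in the original comments): with two pups
+ * reporting cs_bitmask 0x1 and 0x3, all_bus_cs ends up 0x3 and
+ * same_bus_cs 0x1, so the configuration is treated as mixed mode and
+ * every populated CS keeps its (active-low) enable bit cleared. When
+ * all pups report the same bitmask, the assignment after this loop
+ * re-sets every bit except 1 << effective_cs, so only the effective
+ * CS remains enabled.
+ */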
+ as_bus_params[bus_cnt].cs_bitmask; + } + + if (all_bus_cs == same_bus_cs) + *cs_mask = (*cs_mask | (~(1 << effective_cs))) & CS_BIT_MASK; + + return MV_OK; +} + +/* + * Dynamic write leveling + */ +int ddr3_tip_dynamic_write_leveling(u32 dev_num) +{ + u32 reg_data = 0, iter, if_id, bus_cnt; + u32 cs_enable_reg_val[MAX_INTERFACE_NUM] = { 0 }; + u32 cs_mask[MAX_INTERFACE_NUM]; + u32 read_data_sample_delay_vals[MAX_INTERFACE_NUM] = { 0 }; + u32 read_data_ready_delay_vals[MAX_INTERFACE_NUM] = { 0 }; + /* 0 for failure */ + u32 res_values[MAX_INTERFACE_NUM * MAX_BUS_NUM] = { 0 }; + u32 test_res = 0; /* 0 - success for all pup */ + u32 data_read[MAX_INTERFACE_NUM]; + u8 wl_values[NUM_OF_CS][MAX_BUS_NUM][MAX_INTERFACE_NUM]; + u16 *mask_results_pup_reg_map = ddr3_tip_get_mask_results_pup_reg_map(); + u32 cs_mask0[MAX_INTERFACE_NUM] = { 0 }; + u32 max_cs = hws_ddr3_tip_max_cs_get(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + + training_result[training_stage][if_id] = TEST_SUCCESS; + + /* save Read Data Sample Delay */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + READ_DATA_SAMPLE_DELAY, + read_data_sample_delay_vals, MASK_ALL_BITS)); + /* save Read Data Ready Delay */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + READ_DATA_READY_DELAY, read_data_ready_delay_vals, + MASK_ALL_BITS)); + /* save current cs reg val */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, cs_enable_reg_val, MASK_ALL_BITS)); + } + + /* + * Phase 1: DRAM 2 Write Leveling mode + */ + + /*Assert 10 refresh commands to DRAM to all CS */ + for (iter = 0; iter < WL_ITERATION_NUM; iter++) { + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, SDRAM_OPERATION_REG, + (u32)((~(0xf) << 8) | 0x2), 0xf1f)); + } + } + /* check controller back to normal */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, 0, 0x1f, + SDRAM_OPERATION_REG, MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("WL: DDR3 poll failed(3)")); + } + } + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + /*enable write leveling to all cs - Q off , WL n */ + /* calculate interface cs mask */ + CHECK_STATUS(ddr3_tip_write_mrs_cmd(dev_num, cs_mask0, MRS1_CMD, + 0x1000, 0x1080)); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* cs enable is active low */ + ddr3_tip_calc_cs_mask(dev_num, if_id, effective_cs, + &cs_mask[if_id]); + } + + /* Enable Output buffer to relevant CS - Q on , WL on */ + CHECK_STATUS(ddr3_tip_write_mrs_cmd + (dev_num, cs_mask, MRS1_CMD, 0x80, 0x1080)); + + /*enable odt for relevant CS */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + 0x1498, (0x3 << (effective_cs * 2)), 0xf)); + + /* + * Phase 2: Set training IP to write leveling mode + */ + + CHECK_STATUS(ddr3_tip_dynamic_write_leveling_seq(dev_num)); + + /* + * Phase 3: Trigger training + */ + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_TRAINING_TRIGGER_REG, 0x1, 0x1)); + + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + 
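+ /*
+ * Sketch of this loop (summary only): every active interface is polled
+ * for the "training done" bit (1 << 1) in ODPG_TRAINING_STATUS_REG,
+ * and - except on Armada 38x, where the check is compiled out - the
+ * error flag read back through ODPG_TRAINING_TRIGGER_REG is logged;
+ * a failure here is reported but does not abort the stage.
+ */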
VALIDATE_ACTIVE(tm->if_act_mask, if_id); + + /* training done */ + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, + (1 << 1), (1 << 1), ODPG_TRAINING_STATUS_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING( + DEBUG_LEVEL_ERROR, + ("WL: DDR3 poll (4) failed (Data: 0x%x)\n", + reg_data)); + } +#if !defined(CONFIG_ARMADA_38X) /*Disabled. JIRA #1498 */ + else { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, + if_id, + ODPG_TRAINING_TRIGGER_REG, + ®_data, (1 << 2))); + if (reg_data != 0) { + DEBUG_LEVELING( + DEBUG_LEVEL_ERROR, + ("WL: WL failed IF %d reg_data=0x%x\n", + if_id, reg_data)); + } + } +#endif + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* training done */ + if (ddr3_tip_if_polling + (dev_num, ACCESS_TYPE_UNICAST, if_id, + (1 << 1), (1 << 1), ODPG_TRAINING_STATUS_REG, + MAX_POLLING_ITERATIONS) != MV_OK) { + DEBUG_LEVELING( + DEBUG_LEVEL_ERROR, + ("WL: DDR3 poll (4) failed (Data: 0x%x)\n", + reg_data)); + } else { +#if !defined(CONFIG_ARMADA_38X) /*Disabled. JIRA #1498 */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, + if_id, + ODPG_TRAINING_STATUS_REG, + data_read, (1 << 2))); + reg_data = data_read[if_id]; + if (reg_data != 0) { + DEBUG_LEVELING( + DEBUG_LEVEL_ERROR, + ("WL: WL failed IF %d reg_data=0x%x\n", + if_id, reg_data)); + } +#endif + + /* check for training completion per bus */ + for (bus_cnt = 0; + bus_cnt < tm->num_of_bus_per_interface; + bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, + bus_cnt); + /* training status */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + mask_results_pup_reg_map + [bus_cnt], data_read, + (1 << 25))); + reg_data = data_read[if_id]; + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("WL: IF %d BUS %d reg 0x%x\n", + if_id, bus_cnt, reg_data)); + if (reg_data == 0) { + res_values[ + (if_id * + tm->num_of_bus_per_interface) + + bus_cnt] = 1; + } + CHECK_STATUS(ddr3_tip_if_read + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + mask_results_pup_reg_map + [bus_cnt], data_read, + 0xff)); + /* + * Save the read value that should be + * write to PHY register + */ + wl_values[effective_cs] + [bus_cnt][if_id] = + (u8)data_read[if_id]; + } + } + } + + /* + * Phase 4: Exit write leveling mode + */ + + /* disable DQs toggling */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + WR_LEVELING_DQS_PATTERN_REG, 0x0, 0x1)); + + /* Update MRS 1 (WL off) */ + CHECK_STATUS(ddr3_tip_write_mrs_cmd(dev_num, cs_mask0, MRS1_CMD, + 0x1000, 0x1080)); + + /* Update MRS 1 (return to functional mode - Q on , WL off) */ + CHECK_STATUS(ddr3_tip_write_mrs_cmd + (dev_num, cs_mask0, MRS1_CMD, 0x0, 0x1080)); + + /* set phy to normal mode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, 0x5, 0x7)); + + /* exit sw override mode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, 0x4, 0x7)); + } + + /* + * Phase 5: Load WL values to each PHY + */ + + for (effective_cs = 0; effective_cs < max_cs; effective_cs++) { + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + test_res = 0; + for (bus_cnt = 0; + bus_cnt < tm->num_of_bus_per_interface; + bus_cnt++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_cnt); + /* check if result == pass */ + if (res_values + [(if_id * + tm->num_of_bus_per_interface) + + bus_cnt] == 0) { + /* + * read result 
control register + * according to pup + */ + reg_data = + wl_values[effective_cs][bus_cnt] + [if_id]; + /* + * Write into write leveling register + * ([4:0] ADLL, [8:6] Phase, [15:10] + * (centralization) ADLL + 0x10) + */ + reg_data = + (reg_data & 0x1f) | + (((reg_data & 0xe0) >> 5) << 6) | + (((reg_data & 0x1f) + + phy_reg1_val) << 10); + ddr3_tip_bus_write( + dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, + bus_cnt, + DDR_PHY_DATA, + WL_PHY_REG + + effective_cs * + CS_REGISTER_ADDR_OFFSET, + reg_data); + } else { + test_res = 1; + /* + * read result control register + * according to pup + */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + mask_results_pup_reg_map + [bus_cnt], data_read, + 0xff)); + reg_data = data_read[if_id]; + DEBUG_LEVELING( + DEBUG_LEVEL_ERROR, + ("WL: IF %d BUS %d failed, reg 0x%x\n", + if_id, bus_cnt, reg_data)); + } + } + + if (test_res != 0) { + training_result[training_stage][if_id] = + TEST_FAILED; + } + } + } + /* Set to 0 after each loop to avoid illegal value may be used */ + effective_cs = 0; + + /* + * Copy the result from the effective CS search to the real + * Functional CS + */ + /* ddr3_tip_write_cs_result(dev_num, WL_PHY_REG); */ + /* restore saved values */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* restore Read Data Sample Delay */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + READ_DATA_SAMPLE_DELAY, + read_data_sample_delay_vals[if_id], + MASK_ALL_BITS)); + + /* restore Read Data Ready Delay */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + READ_DATA_READY_DELAY, + read_data_ready_delay_vals[if_id], + MASK_ALL_BITS)); + + /* enable multi cs */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, cs_enable_reg_val[if_id], + MASK_ALL_BITS)); + } + + /* Disable modt0 for CS0 training - need to adjust for multy CS */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, 0x1498, + 0x0, 0xf)); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (training_result[training_stage][if_id] == TEST_FAILED) + return MV_FAIL; + } + + return MV_OK; +} + +/* + * Dynamic write leveling supplementary + */ +int ddr3_tip_dynamic_write_leveling_supp(u32 dev_num) +{ + int adll_offset; + u32 if_id, bus_id, data, data_tmp; + int is_if_fail = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + is_if_fail = 0; + + for (bus_id = 0; bus_id < GET_TOPOLOGY_NUM_OF_BUSES(); + bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + wr_supp_res[if_id][bus_id].is_pup_fail = 1; + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + WRITE_CENTRALIZATION_PHY_REG + + effective_cs * CS_REGISTER_ADDR_OFFSET, + &data)); + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("WL Supp: adll_offset=0 data delay = %d\n", + data)); + if (ddr3_tip_wl_supp_align_phase_shift + (dev_num, if_id, bus_id, 0, 0) == MV_OK) { + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("WL Supp: IF %d bus_id %d adll_offset=0 Success !\n", + if_id, bus_id)); + continue; + } + + /* change adll */ + adll_offset = 5; + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, bus_id, DDR_PHY_DATA, + WRITE_CENTRALIZATION_PHY_REG + + effective_cs * 
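+ /*
+ * Worked example for the phase-5 write-back above (illustrative;
+ * assumes phy_reg1_val == 0x10, as the "+ 0x10" comment suggests):
+ * a raw WL result of 0x4a becomes (0x4a & 0x1f) |
+ * (((0x4a & 0xe0) >> 5) << 6) | ((0x0a + 0x10) << 10)
+ * = 0x0a | 0x80 | 0x6800 = 0x688a, i.e. ADLL in bits [4:0], phase in
+ * [8:6] and the centralization ADLL in [15:10] of WL_PHY_REG.
+ */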
CS_REGISTER_ADDR_OFFSET, + data + adll_offset)); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + WRITE_CENTRALIZATION_PHY_REG + + effective_cs * CS_REGISTER_ADDR_OFFSET, + &data_tmp)); + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("WL Supp: adll_offset= %d data delay = %d\n", + adll_offset, data_tmp)); + + if (ddr3_tip_wl_supp_align_phase_shift + (dev_num, if_id, bus_id, adll_offset, 0) == MV_OK) { + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("WL Supp: IF %d bus_id %d adll_offset= %d Success !\n", + if_id, bus_id, adll_offset)); + continue; + } + + /* change adll */ + adll_offset = -5; + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, bus_id, DDR_PHY_DATA, + WRITE_CENTRALIZATION_PHY_REG + + effective_cs * CS_REGISTER_ADDR_OFFSET, + data + adll_offset)); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, + WRITE_CENTRALIZATION_PHY_REG + + effective_cs * CS_REGISTER_ADDR_OFFSET, + &data_tmp)); + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("WL Supp: adll_offset= %d data delay = %d\n", + adll_offset, data_tmp)); + if (ddr3_tip_wl_supp_align_phase_shift + (dev_num, if_id, bus_id, adll_offset, 0) == MV_OK) { + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("WL Supp: IF %d bus_id %d adll_offset= %d Success !\n", + if_id, bus_id, adll_offset)); + continue; + } else { + DEBUG_LEVELING( + DEBUG_LEVEL_ERROR, + ("WL Supp: IF %d bus_id %d Failed !\n", + if_id, bus_id)); + is_if_fail = 1; + } + } + DEBUG_LEVELING(DEBUG_LEVEL_TRACE, + ("WL Supp: IF %d bus_id %d is_pup_fail %d\n", + if_id, bus_id, is_if_fail)); + + if (is_if_fail == 1) { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("WL Supp: IF %d failed\n", if_id)); + training_result[training_stage][if_id] = TEST_FAILED; + } else { + training_result[training_stage][if_id] = TEST_SUCCESS; + } + } + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (training_result[training_stage][if_id] == TEST_FAILED) + return MV_FAIL; + } + + return MV_OK; +} + +/* + * Phase Shift + */ +static int ddr3_tip_wl_supp_align_phase_shift(u32 dev_num, u32 if_id, + u32 bus_id, u32 offset, + u32 bus_id_delta) +{ + wr_supp_res[if_id][bus_id].stage = PHASE_SHIFT; + if (ddr3_tip_xsb_compare_test(dev_num, if_id, bus_id, + 0, bus_id_delta) == MV_OK) { + wr_supp_res[if_id][bus_id].is_pup_fail = 0; + return MV_OK; + } else if (ddr3_tip_xsb_compare_test(dev_num, if_id, bus_id, + ONE_CLOCK_ERROR_SHIFT, + bus_id_delta) == MV_OK) { + /* 1 clock error */ + wr_supp_res[if_id][bus_id].stage = CLOCK_SHIFT; + DEBUG_LEVELING(DEBUG_LEVEL_TRACE, + ("Supp: 1 error clock for if %d pup %d with ofsset %d success\n", + if_id, bus_id, offset)); + ddr3_tip_wl_supp_one_clk_err_shift(dev_num, if_id, bus_id, 0); + wr_supp_res[if_id][bus_id].is_pup_fail = 0; + return MV_OK; + } else if (ddr3_tip_xsb_compare_test(dev_num, if_id, bus_id, + ALIGN_ERROR_SHIFT, + bus_id_delta) == MV_OK) { + /* align error */ + DEBUG_LEVELING(DEBUG_LEVEL_TRACE, + ("Supp: align error for if %d pup %d with ofsset %d success\n", + if_id, bus_id, offset)); + wr_supp_res[if_id][bus_id].stage = ALIGN_SHIFT; + ddr3_tip_wl_supp_align_err_shift(dev_num, if_id, bus_id, 0); + wr_supp_res[if_id][bus_id].is_pup_fail = 0; + return MV_OK; + } else { + wr_supp_res[if_id][bus_id].is_pup_fail = 1; + return MV_FAIL; + } +} + +/* + * Compare Test + */ +static int ddr3_tip_xsb_compare_test(u32 dev_num, u32 if_id, u32 bus_id, + u32 edge_offset, u32 bus_id_delta) +{ + u32 
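+ /*
+ * Hedged summary of this compare test (from the code below): the
+ * 8-word PATTERN_TEST is written to DRAM at the pattern start address
+ * of the effective CS, read back, and each word is compared against
+ * the expected pattern shifted by edge_offset, masked to this pup's
+ * byte lane via pup_mask_table[]. The test passes only if every word
+ * that remains in range matches.
+ */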
num_of_succ_byte_compare, word_in_pattern, abs_offset; + u32 word_offset, i; + u32 read_pattern[TEST_PATTERN_LENGTH * 2]; + struct pattern_info *pattern_table = ddr3_tip_get_pattern_table(); + u32 pattern_test_pattern_table[8]; + + for (i = 0; i < 8; i++) { + pattern_test_pattern_table[i] = + pattern_table_get_word(dev_num, PATTERN_TEST, (u8)i); + } + + /* extern write, than read and compare */ + CHECK_STATUS(ddr3_tip_ext_write + (dev_num, if_id, + (pattern_table[PATTERN_TEST].start_addr + + ((SDRAM_CS_SIZE + 1) * effective_cs)), 1, + pattern_test_pattern_table)); + + CHECK_STATUS(ddr3_tip_reset_fifo_ptr(dev_num)); + + CHECK_STATUS(ddr3_tip_ext_read + (dev_num, if_id, + (pattern_table[PATTERN_TEST].start_addr + + ((SDRAM_CS_SIZE + 1) * effective_cs)), 1, read_pattern)); + + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("XSB-compt: IF %d bus_id %d 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n", + if_id, bus_id, read_pattern[0], read_pattern[1], + read_pattern[2], read_pattern[3], read_pattern[4], + read_pattern[5], read_pattern[6], read_pattern[7])); + + /* compare byte per pup */ + num_of_succ_byte_compare = 0; + for (word_in_pattern = start_xsb_offset; + word_in_pattern < (TEST_PATTERN_LENGTH * 2); word_in_pattern++) { + word_offset = word_in_pattern + edge_offset; + if ((word_offset > (TEST_PATTERN_LENGTH * 2 - 1)) || + (word_offset < 0)) + continue; + + if ((read_pattern[word_in_pattern] & pup_mask_table[bus_id]) == + (pattern_test_pattern_table[word_offset] & + pup_mask_table[bus_id])) + num_of_succ_byte_compare++; + } + + abs_offset = (edge_offset > 0) ? edge_offset : -edge_offset; + if (num_of_succ_byte_compare == ((TEST_PATTERN_LENGTH * 2) - + abs_offset - start_xsb_offset)) { + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("XSB-compt: IF %d bus_id %d num_of_succ_byte_compare %d - Success\n", + if_id, bus_id, num_of_succ_byte_compare)); + return MV_OK; + } else { + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("XSB-compt: IF %d bus_id %d num_of_succ_byte_compare %d - Fail !\n", + if_id, bus_id, num_of_succ_byte_compare)); + + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("XSB-compt: expected 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n", + pattern_test_pattern_table[0], + pattern_test_pattern_table[1], + pattern_test_pattern_table[2], + pattern_test_pattern_table[3], + pattern_test_pattern_table[4], + pattern_test_pattern_table[5], + pattern_test_pattern_table[6], + pattern_test_pattern_table[7])); + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("XSB-compt: recieved 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n", + read_pattern[0], read_pattern[1], + read_pattern[2], read_pattern[3], + read_pattern[4], read_pattern[5], + read_pattern[6], read_pattern[7])); + + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("XSB-compt: IF %d bus_id %d num_of_succ_byte_compare %d - Fail !\n", + if_id, bus_id, num_of_succ_byte_compare)); + + return MV_FAIL; + } +} + +/* + * Clock error shift - function moves the write leveling delay 1cc forward + */ +static int ddr3_tip_wl_supp_one_clk_err_shift(u32 dev_num, u32 if_id, + u32 bus_id, u32 bus_id_delta) +{ + int phase, adll; + u32 data; + DEBUG_LEVELING(DEBUG_LEVEL_TRACE, ("One_clk_err_shift\n")); + + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, ACCESS_TYPE_UNICAST, bus_id, + DDR_PHY_DATA, WL_PHY_REG, &data)); + phase = ((data >> 6) & 0x7); + adll = data & 0x1f; + DEBUG_LEVELING(DEBUG_LEVEL_TRACE, + ("One_clk_err_shift: IF %d bus_id %d phase %d adll %d\n", + if_id, bus_id, phase, adll)); + + if ((phase == 0) || (phase == 1)) { + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, 
ACCESS_TYPE_UNICAST, if_id, bus_id, + DDR_PHY_DATA, 0, (phase + 2), 0x1f)); + } else if (phase == 2) { + if (adll < 6) { + data = (3 << 6) + (0x1f); + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + bus_id, DDR_PHY_DATA, 0, data, + (0x7 << 6 | 0x1f))); + data = 0x2f; + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + bus_id, DDR_PHY_DATA, 1, data, 0x3f)); + } + } else { + /* phase 3 */ + return MV_FAIL; + } + + return MV_OK; +} + +/* + * Align error shift + */ +static int ddr3_tip_wl_supp_align_err_shift(u32 dev_num, u32 if_id, + u32 bus_id, u32 bus_id_delta) +{ + int phase, adll; + u32 data; + + /* Shift WL result 1 phase back */ + CHECK_STATUS(ddr3_tip_bus_read(dev_num, if_id, ACCESS_TYPE_UNICAST, + bus_id, DDR_PHY_DATA, WL_PHY_REG, + &data)); + phase = ((data >> 6) & 0x7); + adll = data & 0x1f; + DEBUG_LEVELING( + DEBUG_LEVEL_TRACE, + ("Wl_supp_align_err_shift: IF %d bus_id %d phase %d adll %d\n", + if_id, bus_id, phase, adll)); + + if (phase < 2) { + if (adll > 0x1a) { + if (phase == 0) + return MV_FAIL; + + if (phase == 1) { + data = 0; + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, bus_id, DDR_PHY_DATA, + 0, data, (0x7 << 6 | 0x1f))); + data = 0xf; + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, bus_id, DDR_PHY_DATA, + 1, data, 0x1f)); + return MV_OK; + } + } else { + return MV_FAIL; + } + } else if ((phase == 2) || (phase == 3)) { + phase = phase - 2; + data = (phase << 6) + (adll & 0x1f); + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, bus_id, + DDR_PHY_DATA, 0, data, (0x7 << 6 | 0x1f))); + return MV_OK; + } else { + DEBUG_LEVELING(DEBUG_LEVEL_ERROR, + ("Wl_supp_align_err_shift: unexpected phase\n")); + + return MV_FAIL; + } + + return MV_OK; +} + +/* + * Dynamic write leveling sequence + */ +static int ddr3_tip_dynamic_write_leveling_seq(u32 dev_num) +{ + u32 bus_id, dq_id; + u16 *mask_results_pup_reg_map = ddr3_tip_get_mask_results_pup_reg_map(); + u16 *mask_results_dq_reg_map = ddr3_tip_get_mask_results_dq_reg(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_SW_2_REG, 0x1, 0x5)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_WRITE_LEVELING_REG, 0x50, 0xff)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_WRITE_LEVELING_REG, 0x5c, 0xff)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_TRAINING_CONTROL_REG, 0x381b82, 0x3c3faf)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_OBJ1_OPCODE_REG, (0x3 << 25), (0x3ffff << 9))); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_OBJ1_ITER_CNT_REG, 0x80, 0xffff)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_WRITE_LEVELING_DONE_CNTR_REG, 0x14, 0xff)); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + TRAINING_WRITE_LEVELING_REG, 0xff5c, 0xffff)); + + /* mask PBS */ + for (dq_id = 0; dq_id < MAX_DQ_NUM; dq_id++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + mask_results_dq_reg_map[dq_id], 0x1 << 24, + 0x1 << 24)); + } + + /* Mask all results */ + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; 
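+ /*
+ * Masking scheme used by the training sequences below (summary only):
+ * writing bit 24 of a result register masks it, so each sequence first
+ * masks every DQ/PBS and per-pup result register and then clears the
+ * mask only for the buses that are active in the topology, leaving
+ * results visible only for lanes that actually exist.
+ */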
bus_id++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + mask_results_pup_reg_map[bus_id], 0x1 << 24, + 0x1 << 24)); + } + + /* Unmask only wanted */ + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + mask_results_pup_reg_map[bus_id], 0, 0x1 << 24)); + } + + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + WR_LEVELING_DQS_PATTERN_REG, 0x1, 0x1)); + + return MV_OK; +} + +/* + * Dynamic read leveling sequence + */ +static int ddr3_tip_dynamic_read_leveling_seq(u32 dev_num) +{ + u32 bus_id, dq_id; + u16 *mask_results_pup_reg_map = ddr3_tip_get_mask_results_pup_reg_map(); + u16 *mask_results_dq_reg_map = ddr3_tip_get_mask_results_dq_reg(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* mask PBS */ + for (dq_id = 0; dq_id < MAX_DQ_NUM; dq_id++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + mask_results_dq_reg_map[dq_id], 0x1 << 24, + 0x1 << 24)); + } + + /* Mask all results */ + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; bus_id++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + mask_results_pup_reg_map[bus_id], 0x1 << 24, + 0x1 << 24)); + } + + /* Unmask only wanted */ + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + mask_results_pup_reg_map[bus_id], 0, 0x1 << 24)); + } + + return MV_OK; +} + +/* + * Dynamic read leveling sequence + */ +static int ddr3_tip_dynamic_per_bit_read_leveling_seq(u32 dev_num) +{ + u32 bus_id, dq_id; + u16 *mask_results_pup_reg_map = ddr3_tip_get_mask_results_pup_reg_map(); + u16 *mask_results_dq_reg_map = ddr3_tip_get_mask_results_dq_reg(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* mask PBS */ + for (dq_id = 0; dq_id < MAX_DQ_NUM; dq_id++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + mask_results_dq_reg_map[dq_id], 0x1 << 24, + 0x1 << 24)); + } + + /* Mask all results */ + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; bus_id++) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + mask_results_pup_reg_map[bus_id], 0x1 << 24, + 0x1 << 24)); + } + + /* Unmask only wanted */ + for (dq_id = 0; dq_id < MAX_DQ_NUM; dq_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, dq_id / 8); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + mask_results_dq_reg_map[dq_id], 0x0 << 24, + 0x1 << 24)); + } + + return MV_OK; +} + +/* + * Print write leveling supplementary results + */ +int ddr3_tip_print_wl_supp_result(u32 dev_num) +{ + u32 bus_id = 0, if_id = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + DEBUG_LEVELING(DEBUG_LEVEL_INFO, + ("I/F0 PUP0 Result[0 - success, 1-fail] ...\n")); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; + bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + DEBUG_LEVELING(DEBUG_LEVEL_INFO, + ("%d ,", wr_supp_res[if_id] + [bus_id].is_pup_fail)); + } + } + DEBUG_LEVELING( + DEBUG_LEVEL_INFO, + ("I/F0 PUP0 Stage[0-phase_shift, 1-clock_shift, 2-align_shift] ...\n")); + + for (if_id = 0; if_id <= 
MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_id = 0; bus_id < tm->num_of_bus_per_interface; + bus_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_id); + DEBUG_LEVELING(DEBUG_LEVEL_INFO, + ("%d ,", wr_supp_res[if_id] + [bus_id].stage)); + } + } + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.h b/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.h new file mode 100644 index 0000000000..f2b4177082 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.h @@ -0,0 +1,17 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR3_TRAINING_LEVELING_H_ +#define _DDR3_TRAINING_LEVELING_H_ + +#define MAX_DQ_READ_LEVELING_DELAY 15 + +int ddr3_tip_print_wl_supp_result(u32 dev_num); +int ddr3_tip_calc_cs_mask(u32 dev_num, u32 if_id, u32 effective_cs, + u32 *cs_mask); +u32 hws_ddr3_tip_max_cs_get(void); + +#endif /* _DDR3_TRAINING_LEVELING_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_pbs.c b/drivers/ddr/marvell/a38x/old/ddr3_training_pbs.c new file mode 100644 index 0000000000..c6f58c9ea7 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_pbs.c @@ -0,0 +1,994 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include + +#include "ddr3_init.h" + +#define TYPICAL_PBS_VALUE 12 + +u32 nominal_adll[MAX_INTERFACE_NUM * MAX_BUS_NUM]; +enum hws_training_ip_stat train_status[MAX_INTERFACE_NUM]; +u8 result_mat[MAX_INTERFACE_NUM][MAX_BUS_NUM][BUS_WIDTH_IN_BITS]; +u8 result_mat_rx_dqs[MAX_INTERFACE_NUM][MAX_BUS_NUM][MAX_CS_NUM]; +/* 4-EEWA, 3-EWA, 2-SWA, 1-Fail, 0-Pass */ +u8 result_all_bit[MAX_BUS_NUM * BUS_WIDTH_IN_BITS * MAX_INTERFACE_NUM]; +u8 max_pbs_per_pup[MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u8 min_pbs_per_pup[MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u8 max_adll_per_pup[MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u8 min_adll_per_pup[MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u32 pbsdelay_per_pup[NUM_OF_PBS_MODES][MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u8 adll_shift_lock[MAX_INTERFACE_NUM][MAX_BUS_NUM]; +u8 adll_shift_val[MAX_INTERFACE_NUM][MAX_BUS_NUM]; +enum hws_pattern pbs_pattern = PATTERN_VREF; +static u8 pup_state[MAX_INTERFACE_NUM][MAX_BUS_NUM]; + +/* + * Name: ddr3_tip_pbs + * Desc: PBS + * Args: TBD + * Notes: + * Returns: OK if success, other error code if fail. + */ +int ddr3_tip_pbs(u32 dev_num, enum pbs_dir pbs_mode) +{ + u32 res0[MAX_INTERFACE_NUM]; + int adll_tap = MEGA / freq_val[medium_freq] / 64; + int pad_num = 0; + enum hws_search_dir search_dir = + (pbs_mode == PBS_RX_MODE) ? HWS_HIGH2LOW : HWS_LOW2HIGH; + enum hws_dir dir = (pbs_mode == PBS_RX_MODE) ? OPER_READ : OPER_WRITE; + int iterations = (pbs_mode == PBS_RX_MODE) ? 31 : 63; + u32 res_valid_mask = (pbs_mode == PBS_RX_MODE) ? 0x1f : 0x3f; + int init_val = (search_dir == HWS_LOW2HIGH) ? 
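+ /*
+ * Hedged overview of ddr3_tip_pbs (from the code that follows): in Rx
+ * mode the ADLL search runs high-to-low over 31 taps with a 5-bit
+ * result mask, in Tx mode low-to-high over 63 taps with a 6-bit mask;
+ * init_val starts at 0 for a low-to-high search and at the iteration
+ * count otherwise. The per-pup flow is an EBA ADLL shift, an EEBA
+ * retry for pups that saturated, and finally an SBA pass for anything
+ * still unlocked, before the per-bit skew values are normalized and
+ * written to the PHY.
+ */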
0 : iterations; + enum hws_edge_compare search_edge = EDGE_FP; + u32 pup = 0, bit = 0, if_id = 0, all_lock = 0, cs_num = 0; + int reg_addr = 0; + u32 validation_val = 0; + u32 cs_enable_reg_val[MAX_INTERFACE_NUM]; + u16 *mask_results_dq_reg_map = ddr3_tip_get_mask_results_dq_reg(); + u8 temp = 0; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* save current cs enable reg val */ + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + + /* save current cs enable reg val */ + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, cs_enable_reg_val, MASK_ALL_BITS)); + + /* enable single cs */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, (1 << 3), (1 << 3))); + } + + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (READ_CENTRALIZATION_PHY_REG + + (effective_cs * CS_REGISTER_ADDR_OFFSET)) : + (WRITE_CENTRALIZATION_PHY_REG + + (effective_cs * CS_REGISTER_ADDR_OFFSET)); + read_adll_value(nominal_adll, reg_addr, MASK_ALL_BITS); + + /* stage 1 shift ADLL */ + ddr3_tip_ip_training(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, RESULT_PER_BIT, + HWS_CONTROL_ELEMENT_ADLL, search_dir, dir, + tm->if_act_mask, init_val, iterations, + pbs_pattern, search_edge, CS_SINGLE, cs_num, + train_status); + validation_val = (pbs_mode == PBS_RX_MODE) ? 0x1f : 0; + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + min_adll_per_pup[if_id][pup] = + (pbs_mode == PBS_RX_MODE) ? 0x1f : 0x3f; + pup_state[if_id][pup] = 0x3; + adll_shift_lock[if_id][pup] = 1; + max_adll_per_pup[if_id][pup] = 0x0; + } + } + + /* EBA */ + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (bit = 0; bit < BUS_WIDTH_IN_BITS; bit++) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + mask_results_dq_reg_map[ + bit + pup * BUS_WIDTH_IN_BITS], + res0, MASK_ALL_BITS)); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; + if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + DEBUG_PBS_ENGINE(DEBUG_LEVEL_TRACE, + ("FP I/F %d, bit:%d, pup:%d res0 0x%x\n", + if_id, bit, pup, + res0[if_id])); + if (pup_state[if_id][pup] != 3) + continue; + /* if not EBA state than move to next pup */ + + if ((res0[if_id] & 0x2000000) == 0) { + DEBUG_PBS_ENGINE(DEBUG_LEVEL_TRACE, + ("-- Fail Training IP\n")); + /* training machine failed */ + pup_state[if_id][pup] = 1; + adll_shift_lock[if_id][pup] = 0; + continue; + } + + else if ((res0[if_id] & res_valid_mask) == + validation_val) { + DEBUG_PBS_ENGINE(DEBUG_LEVEL_TRACE, + ("-- FAIL EBA %d %d %d %d\n", + if_id, bit, pup, + res0[if_id])); + pup_state[if_id][pup] = 4; + /* this pup move to EEBA */ + adll_shift_lock[if_id][pup] = 0; + continue; + } else { + /* + * The search ended in Pass we need + * Fail + */ + res0[if_id] = + (pbs_mode == PBS_RX_MODE) ? + ((res0[if_id] & + res_valid_mask) + 1) : + ((res0[if_id] & + res_valid_mask) - 1); + max_adll_per_pup[if_id][pup] = + (max_adll_per_pup[if_id][pup] < + res0[if_id]) ? + (u8)res0[if_id] : + max_adll_per_pup[if_id][pup]; + min_adll_per_pup[if_id][pup] = + (res0[if_id] > + min_adll_per_pup[if_id][pup]) ? 
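+ /*
+ * Result decoding in this stage (hedged reading of the code): bit 25
+ * of the result word must be set for the training machine to have
+ * produced a value; a cleared bit 25 marks the pup as failed
+ * (state 1), a result equal to validation_val means the search
+ * saturated and the pup moves to the EEBA stage (state 4), and
+ * otherwise the min/max ADLL edges for the pup are updated here.
+ */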
+ min_adll_per_pup[if_id][pup] : + (u8) + res0[if_id]; + /* + * vs the Rx we are searching for the + * smallest value of DQ shift so all + * Bus would fail + */ + adll_shift_val[if_id][pup] = + (pbs_mode == PBS_RX_MODE) ? + max_adll_per_pup[if_id][pup] : + min_adll_per_pup[if_id][pup]; + } + } + } + } + + /* EEBA */ + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + + if (pup_state[if_id][pup] != 4) + continue; + /* + * if pup state different from EEBA than move to + * next pup + */ + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x54 + effective_cs * 0x10) : + (0x14 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, pup, DDR_PHY_DATA, + reg_addr, 0x1f)); + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x55 + effective_cs * 0x10) : + (0x15 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, pup, DDR_PHY_DATA, + reg_addr, 0x1f)); + /* initialize the Edge2 Max. */ + adll_shift_val[if_id][pup] = 0; + min_adll_per_pup[if_id][pup] = + (pbs_mode == PBS_RX_MODE) ? 0x1f : 0x3f; + max_adll_per_pup[if_id][pup] = 0x0; + + ddr3_tip_ip_training(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, RESULT_PER_BIT, + HWS_CONTROL_ELEMENT_ADLL, + search_dir, dir, + tm->if_act_mask, init_val, + iterations, pbs_pattern, + search_edge, CS_SINGLE, cs_num, + train_status); + DEBUG_PBS_ENGINE(DEBUG_LEVEL_INFO, + ("ADLL shift results:\n")); + + for (bit = 0; bit < BUS_WIDTH_IN_BITS; bit++) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + mask_results_dq_reg_map[ + bit + pup * + BUS_WIDTH_IN_BITS], + res0, MASK_ALL_BITS)); + DEBUG_PBS_ENGINE(DEBUG_LEVEL_TRACE, + ("FP I/F %d, bit:%d, pup:%d res0 0x%x\n", + if_id, bit, pup, + res0[if_id])); + + if ((res0[if_id] & 0x2000000) == 0) { + DEBUG_PBS_ENGINE(DEBUG_LEVEL_TRACE, + (" -- EEBA Fail\n")); + bit = BUS_WIDTH_IN_BITS; + /* exit bit loop */ + DEBUG_PBS_ENGINE(DEBUG_LEVEL_TRACE, + ("-- EEBA Fail Training IP\n")); + /* + * training machine failed but pass + * before in the EBA so maybe the DQS + * shift change env. + */ + pup_state[if_id][pup] = 2; + adll_shift_lock[if_id][pup] = 0; + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x54 + effective_cs * 0x10) : + (0x14 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + 0x0)); + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x55 + effective_cs * 0x10) : + (0x15 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + 0x0)); + continue; + } else if ((res0[if_id] & res_valid_mask) == + validation_val) { + /* exit bit loop */ + bit = BUS_WIDTH_IN_BITS; + DEBUG_PBS_ENGINE(DEBUG_LEVEL_TRACE, + ("-- FAIL EEBA\n")); + /* this pup move to SBA */ + pup_state[if_id][pup] = 2; + adll_shift_lock[if_id][pup] = 0; + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x54 + effective_cs * 0x10) : + (0x14 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + 0x0)); + reg_addr = (pbs_mode == PBS_RX_MODE) ? 
+ (0x55 + effective_cs * 0x10) : + (0x15 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, + ACCESS_TYPE_UNICAST, + if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + 0x0)); + continue; + } else { + adll_shift_lock[if_id][pup] = 1; + /* + * The search ended in Pass we need + * Fail + */ + res0[if_id] = + (pbs_mode == PBS_RX_MODE) ? + ((res0[if_id] & + res_valid_mask) + 1) : + ((res0[if_id] & + res_valid_mask) - 1); + max_adll_per_pup[if_id][pup] = + (max_adll_per_pup[if_id][pup] < + res0[if_id]) ? + (u8)res0[if_id] : + max_adll_per_pup[if_id][pup]; + min_adll_per_pup[if_id][pup] = + (res0[if_id] > + min_adll_per_pup[if_id][pup]) ? + min_adll_per_pup[if_id][pup] : + (u8)res0[if_id]; + /* + * vs the Rx we are searching for the + * smallest value of DQ shift so all Bus + * would fail + */ + adll_shift_val[if_id][pup] = + (pbs_mode == PBS_RX_MODE) ? + max_adll_per_pup[if_id][pup] : + min_adll_per_pup[if_id][pup]; + } + } + } + } + + /* Print Stage result */ + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + DEBUG_PBS_ENGINE(DEBUG_LEVEL_TRACE, + ("FP I/F %d, ADLL Shift for EBA: pup[%d] Lock status = %d Lock Val = %d,%d\n", + if_id, pup, + adll_shift_lock[if_id][pup], + max_adll_per_pup[if_id][pup], + min_adll_per_pup[if_id][pup])); + } + } + DEBUG_PBS_ENGINE(DEBUG_LEVEL_INFO, + ("Update ADLL Shift of all pups:\n")); + + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (adll_shift_lock[if_id][pup] != 1) + continue; + /* if pup not locked continue to next pup */ + + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x3 + effective_cs * 4) : + (0x1 + effective_cs * 4); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, pup, DDR_PHY_DATA, + reg_addr, adll_shift_val[if_id][pup])); + DEBUG_PBS_ENGINE(DEBUG_LEVEL_TRACE, + ("FP I/F %d, Pup[%d] = %d\n", if_id, + pup, adll_shift_val[if_id][pup])); + } + } + + /* PBS EEBA&EBA */ + /* Start the Per Bit Skew search */ + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + max_pbs_per_pup[if_id][pup] = 0x0; + min_pbs_per_pup[if_id][pup] = 0x1f; + for (bit = 0; bit < BUS_WIDTH_IN_BITS; bit++) { + /* reset result for PBS */ + result_all_bit[bit + pup * BUS_WIDTH_IN_BITS + + if_id * MAX_BUS_NUM * + BUS_WIDTH_IN_BITS] = 0; + } + } + } + + iterations = 31; + search_dir = HWS_LOW2HIGH; + /* !!!!! 
ran sh (search_dir == HWS_LOW2HIGH)?0:iterations; */ + init_val = 0; + + ddr3_tip_ip_training(dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + RESULT_PER_BIT, HWS_CONTROL_ELEMENT_DQ_SKEW, + search_dir, dir, tm->if_act_mask, init_val, + iterations, pbs_pattern, search_edge, + CS_SINGLE, cs_num, train_status); + + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (adll_shift_lock[if_id][pup] != 1) { + /* if pup not lock continue to next pup */ + continue; + } + + for (bit = 0; bit < BUS_WIDTH_IN_BITS; bit++) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + mask_results_dq_reg_map[ + bit + + pup * BUS_WIDTH_IN_BITS], + res0, MASK_ALL_BITS)); + DEBUG_PBS_ENGINE(DEBUG_LEVEL_INFO, + ("Per Bit Skew search, FP I/F %d, bit:%d, pup:%d res0 0x%x\n", + if_id, bit, pup, + res0[if_id])); + if ((res0[if_id] & 0x2000000) == 0) { + DEBUG_PBS_ENGINE(DEBUG_LEVEL_INFO, + ("--EBA PBS Fail - Training IP machine\n")); + /* exit the bit loop */ + bit = BUS_WIDTH_IN_BITS; + /* + * ADLL is no long in lock need new + * search + */ + adll_shift_lock[if_id][pup] = 0; + /* Move to SBA */ + pup_state[if_id][pup] = 2; + max_pbs_per_pup[if_id][pup] = 0x0; + min_pbs_per_pup[if_id][pup] = 0x1f; + continue; + } else { + temp = (u8)(res0[if_id] & + res_valid_mask); + max_pbs_per_pup[if_id][pup] = + (temp > + max_pbs_per_pup[if_id][pup]) ? + temp : + max_pbs_per_pup[if_id][pup]; + min_pbs_per_pup[if_id][pup] = + (temp < + min_pbs_per_pup[if_id][pup]) ? + temp : + min_pbs_per_pup[if_id][pup]; + result_all_bit[bit + + pup * BUS_WIDTH_IN_BITS + + if_id * MAX_BUS_NUM * + BUS_WIDTH_IN_BITS] = + temp; + } + } + } + } + + /* Check all Pup lock */ + all_lock = 1; + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + all_lock = all_lock * adll_shift_lock[if_id][pup]; + } + } + + /* Only if not all Pups Lock */ + if (all_lock == 0) { + DEBUG_PBS_ENGINE(DEBUG_LEVEL_INFO, + ("##########ADLL shift for SBA###########\n")); + + /* ADLL shift for SBA */ + search_dir = (pbs_mode == PBS_RX_MODE) ? HWS_LOW2HIGH : + HWS_HIGH2LOW; + init_val = (search_dir == HWS_LOW2HIGH) ? 0 : iterations; + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; + if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + if (adll_shift_lock[if_id][pup] == 1) { + /*if pup lock continue to next pup */ + continue; + } + /*init the var altogth init before */ + adll_shift_lock[if_id][pup] = 0; + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x54 + effective_cs * 0x10) : + (0x14 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, 0)); + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x55 + effective_cs * 0x10) : + (0x15 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, 0)); + reg_addr = (pbs_mode == PBS_RX_MODE) ? 
+ (0x5f + effective_cs * 0x10) : + (0x1f + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, 0)); + /* initilaze the Edge2 Max. */ + adll_shift_val[if_id][pup] = 0; + min_adll_per_pup[if_id][pup] = 0x1f; + max_adll_per_pup[if_id][pup] = 0x0; + + ddr3_tip_ip_training(dev_num, + ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + RESULT_PER_BIT, + HWS_CONTROL_ELEMENT_ADLL, + search_dir, dir, + tm->if_act_mask, + init_val, iterations, + pbs_pattern, + search_edge, CS_SINGLE, + cs_num, train_status); + + for (bit = 0; bit < BUS_WIDTH_IN_BITS; bit++) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, + ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + mask_results_dq_reg_map + [bit + + pup * + BUS_WIDTH_IN_BITS], + res0, MASK_ALL_BITS)); + DEBUG_PBS_ENGINE( + DEBUG_LEVEL_INFO, + ("FP I/F %d, bit:%d, pup:%d res0 0x%x\n", + if_id, bit, pup, res0[if_id])); + if ((res0[if_id] & 0x2000000) == 0) { + /* exit the bit loop */ + bit = BUS_WIDTH_IN_BITS; + /* Fail SBA --> Fail PBS */ + pup_state[if_id][pup] = 1; + DEBUG_PBS_ENGINE + (DEBUG_LEVEL_INFO, + (" SBA Fail\n")); + continue; + } else { + /* + * - increment to get all + * 8 bit lock. + */ + adll_shift_lock[if_id][pup]++; + /* + * The search ended in Pass + * we need Fail + */ + res0[if_id] = + (pbs_mode == PBS_RX_MODE) ? + ((res0[if_id] & res_valid_mask) + 1) : + ((res0[if_id] & res_valid_mask) - 1); + max_adll_per_pup[if_id][pup] = + (max_adll_per_pup[if_id] + [pup] < res0[if_id]) ? + (u8)res0[if_id] : + max_adll_per_pup[if_id][pup]; + min_adll_per_pup[if_id][pup] = + (res0[if_id] > + min_adll_per_pup[if_id] + [pup]) ? + min_adll_per_pup[if_id][pup] : + (u8)res0[if_id]; + /* + * vs the Rx we are searching for + * the smallest value of DQ shift + * so all Bus would fail + */ + adll_shift_val[if_id][pup] = + (pbs_mode == PBS_RX_MODE) ? + max_adll_per_pup[if_id][pup] : + min_adll_per_pup[if_id][pup]; + } + } + /* 1 is lock */ + adll_shift_lock[if_id][pup] = + (adll_shift_lock[if_id][pup] == 8) ? + 1 : 0; + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x3 + effective_cs * 4) : + (0x1 + effective_cs * 4); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + adll_shift_val[if_id][pup])); + DEBUG_PBS_ENGINE( + DEBUG_LEVEL_INFO, + ("adll_shift_lock[%x][%x] = %x\n", + if_id, pup, + adll_shift_lock[if_id][pup])); + } + } + + /* End ADLL Shift for SBA */ + /* Start the Per Bit Skew search */ + /* The ADLL shift finished with a Pass */ + search_edge = (pbs_mode == PBS_RX_MODE) ? EDGE_PF : EDGE_FP; + search_dir = (pbs_mode == PBS_RX_MODE) ? + HWS_LOW2HIGH : HWS_HIGH2LOW; + iterations = 0x1f; + /* - The initial value is different in Rx and Tx mode */ + init_val = (pbs_mode == PBS_RX_MODE) ? 
0 : iterations; + + ddr3_tip_ip_training(dev_num, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, RESULT_PER_BIT, + HWS_CONTROL_ELEMENT_DQ_SKEW, + search_dir, dir, tm->if_act_mask, + init_val, iterations, pbs_pattern, + search_edge, CS_SINGLE, cs_num, + train_status); + + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; + if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bit = 0; bit < BUS_WIDTH_IN_BITS; bit++) { + CHECK_STATUS(ddr3_tip_if_read + (dev_num, + ACCESS_TYPE_MULTICAST, + PARAM_NOT_CARE, + mask_results_dq_reg_map + [bit + + pup * + BUS_WIDTH_IN_BITS], + res0, MASK_ALL_BITS)); + if (pup_state[if_id][pup] != 2) { + /* + * if pup is not SBA continue + * to next pup + */ + bit = BUS_WIDTH_IN_BITS; + continue; + } + DEBUG_PBS_ENGINE( + DEBUG_LEVEL_INFO, + ("Per Bit Skew search, PF I/F %d, bit:%d, pup:%d res0 0x%x\n", + if_id, bit, pup, res0[if_id])); + if ((res0[if_id] & 0x2000000) == 0) { + DEBUG_PBS_ENGINE + (DEBUG_LEVEL_INFO, + ("SBA Fail\n")); + + max_pbs_per_pup[if_id][pup] = + 0x1f; + result_all_bit[ + bit + pup * + BUS_WIDTH_IN_BITS + + if_id * MAX_BUS_NUM * + BUS_WIDTH_IN_BITS] = + 0x1f; + } else { + temp = (u8)(res0[if_id] & + res_valid_mask); + max_pbs_per_pup[if_id][pup] = + (temp > + max_pbs_per_pup[if_id] + [pup]) ? temp : + max_pbs_per_pup + [if_id][pup]; + min_pbs_per_pup[if_id][pup] = + (temp < + min_pbs_per_pup[if_id] + [pup]) ? temp : + min_pbs_per_pup + [if_id][pup]; + result_all_bit[ + bit + pup * + BUS_WIDTH_IN_BITS + + if_id * MAX_BUS_NUM * + BUS_WIDTH_IN_BITS] = + temp; + adll_shift_lock[if_id][pup] = 1; + } + } + } + } + + /* Check all Pup state */ + all_lock = 1; + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + /* + * DEBUG_PBS_ENGINE(DEBUG_LEVEL_INFO, + * ("pup_state[%d][%d] = %d\n",if_id,pup,pup_state + * [if_id][pup])); + */ + } + } + + /* END OF SBA */ + /* Norm */ + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + for (bit = 0; bit < BUS_WIDTH_IN_BITS; bit++) { + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; + if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* if pup not lock continue to next pup */ + if (adll_shift_lock[if_id][pup] != 1) { + DEBUG_PBS_ENGINE( + DEBUG_LEVEL_ERROR, + ("PBS failed for IF #%d\n", + if_id)); + training_result[training_stage][if_id] + = TEST_FAILED; + + result_mat[if_id][pup][bit] = 0; + max_pbs_per_pup[if_id][pup] = 0; + min_pbs_per_pup[if_id][pup] = 0; + } else { + training_result[ + training_stage][if_id] = + (training_result[training_stage] + [if_id] == TEST_FAILED) ? 
+ TEST_FAILED : TEST_SUCCESS; + result_mat[if_id][pup][bit] = + result_all_bit[ + bit + pup * + BUS_WIDTH_IN_BITS + + if_id * MAX_BUS_NUM * + BUS_WIDTH_IN_BITS] - + min_pbs_per_pup[if_id][pup]; + } + DEBUG_PBS_ENGINE( + DEBUG_LEVEL_INFO, + ("The abs min_pbs[%d][%d] = %d\n", + if_id, pup, + min_pbs_per_pup[if_id][pup])); + } + } + } + + /* Clean all results */ + ddr3_tip_clean_pbs_result(dev_num, pbs_mode); + + /* DQ PBS register update with the final result */ + for (if_id = 0; if_id < MAX_INTERFACE_NUM; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (pup = 0; pup < tm->num_of_bus_per_interface; pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + + DEBUG_PBS_ENGINE( + DEBUG_LEVEL_INFO, + ("Final Results: if_id %d, pup %d, Pup State: %d\n", + if_id, pup, pup_state[if_id][pup])); + for (bit = 0; bit < BUS_WIDTH_IN_BITS; bit++) { + if (dq_map_table == NULL) { + DEBUG_PBS_ENGINE( + DEBUG_LEVEL_ERROR, + ("dq_map_table not initialized\n")); + return MV_FAIL; + } + pad_num = dq_map_table[ + bit + pup * BUS_WIDTH_IN_BITS + + if_id * BUS_WIDTH_IN_BITS * + tm->num_of_bus_per_interface]; + DEBUG_PBS_ENGINE(DEBUG_LEVEL_INFO, + ("result_mat: %d ", + result_mat[if_id][pup] + [bit])); + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (PBS_RX_PHY_REG + effective_cs * 0x10) : + (PBS_TX_PHY_REG + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr + pad_num, + result_mat[if_id][pup][bit])); + } + pbsdelay_per_pup[pbs_mode][if_id][pup] = + (max_pbs_per_pup[if_id][pup] == + min_pbs_per_pup[if_id][pup]) ? + TYPICAL_PBS_VALUE : + ((max_adll_per_pup[if_id][pup] - + min_adll_per_pup[if_id][pup]) * adll_tap / + (max_pbs_per_pup[if_id][pup] - + min_pbs_per_pup[if_id][pup])); + + /* RX results ready, write RX also */ + if (pbs_mode == PBS_TX_MODE) { + /* Write TX results */ + reg_addr = (0x14 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + (max_pbs_per_pup[if_id][pup] - + min_pbs_per_pup[if_id][pup]) / + 2)); + reg_addr = (0x15 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + (max_pbs_per_pup[if_id][pup] - + min_pbs_per_pup[if_id][pup]) / + 2)); + + /* Write previously stored RX results */ + reg_addr = (0x54 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + result_mat_rx_dqs[if_id][pup] + [effective_cs])); + reg_addr = (0x55 + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr, + result_mat_rx_dqs[if_id][pup] + [effective_cs])); + } else { + /* + * RX results may affect RL results correctess, + * so just store the results that will written + * in TX stage + */ + result_mat_rx_dqs[if_id][pup][effective_cs] = + (max_pbs_per_pup[if_id][pup] - + min_pbs_per_pup[if_id][pup]) / 2; + } + DEBUG_PBS_ENGINE( + DEBUG_LEVEL_INFO, + (", PBS tap=%d [psec] ==> skew observed = %d\n", + pbsdelay_per_pup[pbs_mode][if_id][pup], + ((max_pbs_per_pup[if_id][pup] - + min_pbs_per_pup[if_id][pup]) * + pbsdelay_per_pup[pbs_mode][if_id][pup]))); + } + } + + /* Write back to the phy the default values */ + reg_addr = (pbs_mode == PBS_RX_MODE) ? 
+ (READ_CENTRALIZATION_PHY_REG + effective_cs * 4) : + (WRITE_CENTRALIZATION_PHY_REG + effective_cs * 4); + write_adll_value(nominal_adll, reg_addr); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + reg_addr = (pbs_mode == PBS_RX_MODE) ? + (0x5a + effective_cs * 0x10) : + (0x1a + effective_cs * 0x10); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, pup, DDR_PHY_DATA, reg_addr, + 0)); + + /* restore cs enable value */ + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + CS_ENABLE_REG, cs_enable_reg_val[if_id], + MASK_ALL_BITS)); + } + + /* exit test mode */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ODPG_WRITE_READ_MODE_ENABLE_REG, 0xffff, MASK_ALL_BITS)); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + /* + * meaning that there is no VW exist at all (No lock at + * the EBA ADLL shift at EBS) + */ + if (pup_state[if_id][pup] == 1) + return MV_FAIL; + } + + return MV_OK; +} + +/* + * Name: ddr3_tip_pbs_rx. + * Desc: PBS TX + * Args: TBD + * Notes: + * Returns: OK if success, other error code if fail. + */ +int ddr3_tip_pbs_rx(u32 uidev_num) +{ + return ddr3_tip_pbs(uidev_num, PBS_RX_MODE); +} + +/* + * Name: ddr3_tip_pbs_tx. + * Desc: PBS TX + * Args: TBD + * Notes: + * Returns: OK if success, other error code if fail. + */ +int ddr3_tip_pbs_tx(u32 uidev_num) +{ + return ddr3_tip_pbs(uidev_num, PBS_TX_MODE); +} + +#ifndef EXCLUDE_SWITCH_DEBUG +/* + * Print PBS Result + */ +int ddr3_tip_print_all_pbs_result(u32 dev_num) +{ + u32 curr_cs; + u32 max_cs = hws_ddr3_tip_max_cs_get(); + + for (curr_cs = 0; curr_cs < max_cs; curr_cs++) { + ddr3_tip_print_pbs_result(dev_num, curr_cs, PBS_RX_MODE); + ddr3_tip_print_pbs_result(dev_num, curr_cs, PBS_TX_MODE); + } + + return MV_OK; +} + +/* + * Print PBS Result + */ +int ddr3_tip_print_pbs_result(u32 dev_num, u32 cs_num, enum pbs_dir pbs_mode) +{ + u32 data_value = 0, bit = 0, if_id = 0, pup = 0; + u32 reg_addr = (pbs_mode == PBS_RX_MODE) ? + (PBS_RX_PHY_REG + cs_num * 0x10) : + (PBS_TX_PHY_REG + cs_num * 0x10); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + printf("CS%d, %s ,PBS\n", cs_num, + (pbs_mode == PBS_RX_MODE) ? "Rx" : "Tx"); + + for (bit = 0; bit < BUS_WIDTH_IN_BITS; bit++) { + printf("%s, DQ", (pbs_mode == PBS_RX_MODE) ? "Rx" : "Tx"); + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + printf("%d ,PBS,,, ", bit); + for (pup = 0; pup <= tm->num_of_bus_per_interface; + pup++) { + VALIDATE_ACTIVE(tm->bus_act_mask, pup); + CHECK_STATUS(ddr3_tip_bus_read + (dev_num, if_id, + ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr + bit, + &data_value)); + printf("%d , ", data_value); + } + } + printf("\n"); + } + printf("\n"); + + return MV_OK; +} +#endif + +/* + * Fixup PBS Result + */ +int ddr3_tip_clean_pbs_result(u32 dev_num, enum pbs_dir pbs_mode) +{ + u32 if_id, pup, bit; + u32 reg_addr = (pbs_mode == PBS_RX_MODE) ? 
+ (PBS_RX_PHY_REG + effective_cs * 0x10) : + (PBS_TX_PHY_REG + effective_cs * 0x10); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (pup = 0; pup <= tm->num_of_bus_per_interface; pup++) { + for (bit = 0; bit <= BUS_WIDTH_IN_BITS + 3; bit++) { + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, ACCESS_TYPE_UNICAST, pup, + DDR_PHY_DATA, reg_addr + bit, 0)); + } + } + } + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_static.c b/drivers/ddr/marvell/a38x/old/ddr3_training_static.c new file mode 100644 index 0000000000..3129dfa04e --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_static.c @@ -0,0 +1,537 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#include +#include +#include +#include + +#include "ddr3_init.h" + +/* Design Guidelines parameters */ +u32 g_zpri_data = 123; /* controller data - P drive strength */ +u32 g_znri_data = 123; /* controller data - N drive strength */ +u32 g_zpri_ctrl = 74; /* controller C/A - P drive strength */ +u32 g_znri_ctrl = 74; /* controller C/A - N drive strength */ +u32 g_zpodt_data = 45; /* controller data - P ODT */ +u32 g_znodt_data = 45; /* controller data - N ODT */ +u32 g_zpodt_ctrl = 45; /* controller data - P ODT */ +u32 g_znodt_ctrl = 45; /* controller data - N ODT */ +u32 g_odt_config = 0x120012; +u32 g_rtt_nom = 0x44; +u32 g_dic = 0x2; + +#ifdef STATIC_ALGO_SUPPORT + +#define PARAM_NOT_CARE 0 +#define MAX_STATIC_SEQ 48 + +u32 silicon_delay[HWS_MAX_DEVICE_NUM]; +struct hws_tip_static_config_info static_config[HWS_MAX_DEVICE_NUM]; +static reg_data *static_init_controller_config[HWS_MAX_DEVICE_NUM]; + +/* debug delay in write leveling */ +int wl_debug_delay = 0; +/* pup register #3 for functional board */ +int function_reg_value = 8; +u32 silicon; + +u32 read_ready_delay_phase_offset[] = { 4, 4, 4, 4, 6, 6, 6, 6 }; + +static struct cs_element chip_select_map[] = { + /* CS Value (single only) Num_CS */ + {0, 0}, + {0, 1}, + {1, 1}, + {0, 2}, + {2, 1}, + {0, 2}, + {0, 2}, + {0, 3}, + {3, 1}, + {0, 2}, + {0, 2}, + {0, 3}, + {0, 2}, + {0, 3}, + {0, 3}, + {0, 4} +}; + +/* + * Register static init controller DB + */ +int ddr3_tip_init_specific_reg_config(u32 dev_num, reg_data *reg_config_arr) +{ + static_init_controller_config[dev_num] = reg_config_arr; + return MV_OK; +} + +/* + * Register static info DB + */ +int ddr3_tip_init_static_config_db( + u32 dev_num, struct hws_tip_static_config_info *static_config_info) +{ + static_config[dev_num].board_trace_arr = + static_config_info->board_trace_arr; + static_config[dev_num].package_trace_arr = + static_config_info->package_trace_arr; + silicon_delay[dev_num] = static_config_info->silicon_delay; + + return MV_OK; +} + +/* + * Static round trip flow - Calculates the total round trip delay. + */ +int ddr3_tip_static_round_trip_arr_build(u32 dev_num, + struct trip_delay_element *table_ptr, + int is_wl, u32 *round_trip_delay_arr) +{ + u32 bus_index, global_bus; + u32 if_id; + u32 bus_per_interface; + int sign; + u32 temp; + u32 board_trace; + struct trip_delay_element *pkg_delay_ptr; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + /* + * In WL we calc the diff between Clock to DQs in RL we sum the round + * trip of Clock and DQs + */ + sign = (is_wl) ? 
-1 : 1; + + bus_per_interface = GET_TOPOLOGY_NUM_OF_BUSES(); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + for (bus_index = 0; bus_index < bus_per_interface; + bus_index++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_index); + global_bus = (if_id * bus_per_interface) + bus_index; + + /* calculate total trip delay (package and board) */ + board_trace = (table_ptr[global_bus].dqs_delay * sign) + + table_ptr[global_bus].ck_delay; + temp = (board_trace * 163) / 1000; + + /* Convert the length to delay in psec units */ + pkg_delay_ptr = + static_config[dev_num].package_trace_arr; + round_trip_delay_arr[global_bus] = temp + + (int)(pkg_delay_ptr[global_bus].dqs_delay * + sign) + + (int)pkg_delay_ptr[global_bus].ck_delay + + (int)((is_wl == 1) ? wl_debug_delay : + (int)silicon_delay[dev_num]); + DEBUG_TRAINING_STATIC_IP( + DEBUG_LEVEL_TRACE, + ("Round Trip Build round_trip_delay_arr[0x%x]: 0x%x temp 0x%x\n", + global_bus, round_trip_delay_arr[global_bus], + temp)); + } + } + + return MV_OK; +} + +/* + * Write leveling for static flow - calculating the round trip delay of the + * DQS signal. + */ +int ddr3_tip_write_leveling_static_config(u32 dev_num, u32 if_id, + enum hws_ddr_freq frequency, + u32 *round_trip_delay_arr) +{ + u32 bus_index; /* index to the bus loop */ + u32 bus_start_index; + u32 bus_per_interface; + u32 phase = 0; + u32 adll = 0, adll_cen, adll_inv, adll_final; + u32 adll_period = MEGA / freq_val[frequency] / 64; + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("ddr3_tip_write_leveling_static_config\n")); + DEBUG_TRAINING_STATIC_IP( + DEBUG_LEVEL_TRACE, + ("dev_num 0x%x IF 0x%x freq %d (adll_period 0x%x)\n", + dev_num, if_id, frequency, adll_period)); + + bus_per_interface = GET_TOPOLOGY_NUM_OF_BUSES(); + bus_start_index = if_id * bus_per_interface; + for (bus_index = bus_start_index; + bus_index < (bus_start_index + bus_per_interface); bus_index++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_index); + phase = round_trip_delay_arr[bus_index] / (32 * adll_period); + adll = (round_trip_delay_arr[bus_index] - + (phase * 32 * adll_period)) / adll_period; + adll = (adll > 31) ? 31 : adll; + adll_cen = 16 + adll; + adll_inv = adll_cen / 32; + adll_final = adll_cen - (adll_inv * 32); + adll_final = (adll_final > 31) ? 
31 : adll_final; + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("\t%d - phase 0x%x adll 0x%x\n", + bus_index, phase, adll)); + /* + * Writing to all 4 phy of Interface number, + * bit 0 \96 4 \96 ADLL, bit 6-8 phase + */ + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + (bus_index % 4), DDR_PHY_DATA, + PHY_WRITE_DELAY(cs), + ((phase << 6) + (adll & 0x1f)), 0x1df)); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ACCESS_TYPE_UNICAST, (bus_index % 4), + DDR_PHY_DATA, WRITE_CENTRALIZATION_PHY_REG, + ((adll_inv & 0x1) << 5) + adll_final)); + } + + return MV_OK; +} + +/* + * Read leveling for static flow + */ +int ddr3_tip_read_leveling_static_config(u32 dev_num, + u32 if_id, + enum hws_ddr_freq frequency, + u32 *total_round_trip_delay_arr) +{ + u32 cs, data0, data1, data3 = 0; + u32 bus_index; /* index to the bus loop */ + u32 bus_start_index; + u32 phase0, phase1, max_phase; + u32 adll0, adll1; + u32 cl_value; + u32 min_delay; + u32 sdr_period = MEGA / freq_val[frequency]; + u32 ddr_period = MEGA / freq_val[frequency] / 2; + u32 adll_period = MEGA / freq_val[frequency] / 64; + enum hws_speed_bin speed_bin_index; + u32 rd_sample_dly[MAX_CS_NUM] = { 0 }; + u32 rd_ready_del[MAX_CS_NUM] = { 0 }; + u32 bus_per_interface = GET_TOPOLOGY_NUM_OF_BUSES(); + struct hws_topology_map *tm = ddr3_get_topology_map(); + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("ddr3_tip_read_leveling_static_config\n")); + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("dev_num 0x%x ifc 0x%x freq %d\n", dev_num, + if_id, frequency)); + DEBUG_TRAINING_STATIC_IP( + DEBUG_LEVEL_TRACE, + ("Sdr_period 0x%x Ddr_period 0x%x adll_period 0x%x\n", + sdr_period, ddr_period, adll_period)); + + if (tm->interface_params[first_active_if].memory_freq == + frequency) { + cl_value = tm->interface_params[first_active_if].cas_l; + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("cl_value 0x%x\n", cl_value)); + } else { + speed_bin_index = tm->interface_params[if_id].speed_bin_index; + cl_value = cas_latency_table[speed_bin_index].cl_val[frequency]; + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("cl_value 0x%x speed_bin_index %d\n", + cl_value, speed_bin_index)); + } + + bus_start_index = if_id * bus_per_interface; + + for (bus_index = bus_start_index; + bus_index < (bus_start_index + bus_per_interface); + bus_index += 2) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_index); + cs = chip_select_map[ + tm->interface_params[if_id].as_bus_params[ + (bus_index % 4)].cs_bitmask].cs_num; + + /* read sample delay calculation */ + min_delay = (total_round_trip_delay_arr[bus_index] < + total_round_trip_delay_arr[bus_index + 1]) ? + total_round_trip_delay_arr[bus_index] : + total_round_trip_delay_arr[bus_index + 1]; + /* round down */ + rd_sample_dly[cs] = 2 * (min_delay / (sdr_period * 2)); + DEBUG_TRAINING_STATIC_IP( + DEBUG_LEVEL_TRACE, + ("\t%d - min_delay 0x%x cs 0x%x rd_sample_dly[cs] 0x%x\n", + bus_index, min_delay, cs, rd_sample_dly[cs])); + + /* phase calculation */ + phase0 = (total_round_trip_delay_arr[bus_index] - + (sdr_period * rd_sample_dly[cs])) / (ddr_period); + phase1 = (total_round_trip_delay_arr[bus_index + 1] - + (sdr_period * rd_sample_dly[cs])) / (ddr_period); + max_phase = (phase0 > phase1) ? 
phase0 : phase1; + DEBUG_TRAINING_STATIC_IP( + DEBUG_LEVEL_TRACE, + ("\tphase0 0x%x phase1 0x%x max_phase 0x%x\n", + phase0, phase1, max_phase)); + + /* ADLL calculation */ + adll0 = (u32)((total_round_trip_delay_arr[bus_index] - + (sdr_period * rd_sample_dly[cs]) - + (ddr_period * phase0)) / adll_period); + adll0 = (adll0 > 31) ? 31 : adll0; + adll1 = (u32)((total_round_trip_delay_arr[bus_index + 1] - + (sdr_period * rd_sample_dly[cs]) - + (ddr_period * phase1)) / adll_period); + adll1 = (adll1 > 31) ? 31 : adll1; + + /* The Read delay close the Read FIFO */ + rd_ready_del[cs] = rd_sample_dly[cs] + + read_ready_delay_phase_offset[max_phase]; + DEBUG_TRAINING_STATIC_IP( + DEBUG_LEVEL_TRACE, + ("\tadll0 0x%x adll1 0x%x rd_ready_del[cs] 0x%x\n", + adll0, adll1, rd_ready_del[cs])); + + /* + * Write to the phy of Interface (bit 0 \96 4 \96 ADLL, + * bit 6-8 phase) + */ + data0 = ((phase0 << 6) + (adll0 & 0x1f)); + data1 = ((phase1 << 6) + (adll1 & 0x1f)); + + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + (bus_index % 4), DDR_PHY_DATA, PHY_READ_DELAY(cs), + data0, 0x1df)); + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + ((bus_index + 1) % 4), DDR_PHY_DATA, + PHY_READ_DELAY(cs), data1, 0x1df)); + } + + for (bus_index = 0; bus_index < bus_per_interface; bus_index++) { + VALIDATE_ACTIVE(tm->bus_act_mask, bus_index); + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + bus_index, DDR_PHY_DATA, 0x3, data3, 0x1f)); + } + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + READ_DATA_SAMPLE_DELAY, + (rd_sample_dly[0] + cl_value) + (rd_sample_dly[1] << 8), + MASK_ALL_BITS)); + + /* Read_ready_del0 bit 0-4 , CS bits 8-12 */ + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_UNICAST, if_id, + READ_DATA_READY_DELAY, + rd_ready_del[0] + (rd_ready_del[1] << 8) + cl_value, + MASK_ALL_BITS)); + + return MV_OK; +} + +/* + * DDR3 Static flow + */ +int ddr3_tip_run_static_alg(u32 dev_num, enum hws_ddr_freq freq) +{ + u32 if_id = 0; + struct trip_delay_element *table_ptr; + u32 wl_total_round_trip_delay_arr[MAX_TOTAL_BUS_NUM]; + u32 rl_total_round_trip_delay_arr[MAX_TOTAL_BUS_NUM]; + struct init_cntr_param init_cntr_prm; + int ret; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("ddr3_tip_run_static_alg")); + + init_cntr_prm.do_mrs_phy = 1; + init_cntr_prm.is_ctrl64_bit = 0; + init_cntr_prm.init_phy = 1; + ret = hws_ddr3_tip_init_controller(dev_num, &init_cntr_prm); + if (ret != MV_OK) { + DEBUG_TRAINING_STATIC_IP( + DEBUG_LEVEL_ERROR, + ("hws_ddr3_tip_init_controller failure\n")); + } + + /* calculate the round trip delay for Write Leveling */ + table_ptr = static_config[dev_num].board_trace_arr; + CHECK_STATUS(ddr3_tip_static_round_trip_arr_build + (dev_num, table_ptr, 1, + wl_total_round_trip_delay_arr)); + /* calculate the round trip delay for Read Leveling */ + CHECK_STATUS(ddr3_tip_static_round_trip_arr_build + (dev_num, table_ptr, 0, + rl_total_round_trip_delay_arr)); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + /* check if the interface is enabled */ + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + /* + * Static frequency is defined according to init-frequency + * (not target) + */ + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("Static IF %d freq %d\n", + if_id, freq)); + CHECK_STATUS(ddr3_tip_write_leveling_static_config + (dev_num, if_id, freq, + wl_total_round_trip_delay_arr)); + 
CHECK_STATUS(ddr3_tip_read_leveling_static_config + (dev_num, if_id, freq, + rl_total_round_trip_delay_arr)); + } + + return MV_OK; +} + +/* + * Init controller for static flow + */ +int ddr3_tip_static_init_controller(u32 dev_num) +{ + u32 index_cnt = 0; + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("ddr3_tip_static_init_controller\n")); + while (static_init_controller_config[dev_num][index_cnt].reg_addr != + 0) { + CHECK_STATUS(ddr3_tip_if_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + static_init_controller_config[dev_num][index_cnt]. + reg_addr, + static_init_controller_config[dev_num][index_cnt]. + reg_data, + static_init_controller_config[dev_num][index_cnt]. + reg_mask)); + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("Init_controller index_cnt %d\n", + index_cnt)); + index_cnt++; + } + + return MV_OK; +} + +int ddr3_tip_static_phy_init_controller(u32 dev_num) +{ + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("Phy Init Controller 2\n")); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, 0xa4, + 0x3dfe)); + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("Phy Init Controller 3\n")); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, 0xa6, + 0xcb2)); + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("Phy Init Controller 4\n")); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, 0xa9, + 0)); + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("Static Receiver Calibration\n")); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, 0xd0, + 0x1f)); + + DEBUG_TRAINING_STATIC_IP(DEBUG_LEVEL_TRACE, + ("Static V-REF Calibration\n")); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, 0xa8, + 0x434)); + + return MV_OK; +} +#endif + +/* + * Configure phy (called by static init controller) for static flow + */ +int ddr3_tip_configure_phy(u32 dev_num) +{ + u32 if_id, phy_id; + struct hws_topology_map *tm = ddr3_get_topology_map(); + + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, + PAD_ZRI_CALIB_PHY_REG, + ((0x7f & g_zpri_data) << 7 | (0x7f & g_znri_data)))); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_CONTROL, + PAD_ZRI_CALIB_PHY_REG, + ((0x7f & g_zpri_ctrl) << 7 | (0x7f & g_znri_ctrl)))); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, + PAD_ODT_CALIB_PHY_REG, + ((0x3f & g_zpodt_data) << 6 | (0x3f & g_znodt_data)))); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_CONTROL, + PAD_ODT_CALIB_PHY_REG, + ((0x3f & g_zpodt_ctrl) << 6 | (0x3f & g_znodt_ctrl)))); + + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, + PAD_PRE_DISABLE_PHY_REG, 0)); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, + 
CMOS_CONFIG_PHY_REG, 0)); + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_CONTROL, + CMOS_CONFIG_PHY_REG, 0)); + + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { + /* check if the interface is enabled */ + VALIDATE_ACTIVE(tm->if_act_mask, if_id); + + for (phy_id = 0; + phy_id < tm->num_of_bus_per_interface; + phy_id++) { + VALIDATE_ACTIVE(tm->bus_act_mask, phy_id); + /* Vref & clamp */ + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, phy_id, DDR_PHY_DATA, + PAD_CONFIG_PHY_REG, + ((clamp_tbl[if_id] << 4) | vref), + ((0x7 << 4) | 0x7))); + /* clamp not relevant for control */ + CHECK_STATUS(ddr3_tip_bus_read_modify_write + (dev_num, ACCESS_TYPE_UNICAST, + if_id, phy_id, DDR_PHY_CONTROL, + PAD_CONFIG_PHY_REG, 0x4, 0x7)); + } + } + + CHECK_STATUS(ddr3_tip_bus_write + (dev_num, ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, + ACCESS_TYPE_MULTICAST, PARAM_NOT_CARE, DDR_PHY_DATA, 0x90, + 0x6002)); + + return MV_OK; +} diff --git a/drivers/ddr/marvell/a38x/old/ddr_topology_def.h b/drivers/ddr/marvell/a38x/old/ddr_topology_def.h new file mode 100644 index 0000000000..229c3a127a --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr_topology_def.h @@ -0,0 +1,121 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR_TOPOLOGY_DEF_H +#define _DDR_TOPOLOGY_DEF_H + +#include "ddr3_training_ip_def.h" +#include "ddr3_topology_def.h" + +#if defined(CONFIG_ARMADA_38X) +#include "ddr3_a38x.h" +#endif + +/* bus width in bits */ +enum hws_bus_width { + BUS_WIDTH_4, + BUS_WIDTH_8, + BUS_WIDTH_16, + BUS_WIDTH_32 +}; + +enum hws_temperature { + HWS_TEMP_LOW, + HWS_TEMP_NORMAL, + HWS_TEMP_HIGH +}; + +enum hws_mem_size { + MEM_512M, + MEM_1G, + MEM_2G, + MEM_4G, + MEM_8G, + MEM_SIZE_LAST +}; + +enum hws_timing { + HWS_TIM_DEFAULT, + HWS_TIM_1T, + HWS_TIM_2T +}; + +struct bus_params { + /* Chip Select (CS) bitmask (bits 0-CS0, bit 1- CS1 ...) */ + u8 cs_bitmask; + + /* + * mirror enable/disable + * (bits 0-CS0 mirroring, bit 1- CS1 mirroring ...) 
+ */ + int mirror_enable_bitmask; + + /* DQS Swap (polarity) - true if enable */ + int is_dqs_swap; + + /* CK swap (polarity) - true if enable */ + int is_ck_swap; +}; + +struct if_params { + /* bus configuration */ + struct bus_params as_bus_params[MAX_BUS_NUM]; + + /* Speed Bin Table */ + enum hws_speed_bin speed_bin_index; + + /* bus width of memory */ + enum hws_bus_width bus_width; + + /* Bus memory size (MBit) */ + enum hws_mem_size memory_size; + + /* The DDR frequency for each interfaces */ + enum hws_ddr_freq memory_freq; + + /* + * delay CAS Write Latency + * - 0 for using default value (jedec suggested) + */ + u8 cas_wl; + + /* + * delay CAS Latency + * - 0 for using default value (jedec suggested) + */ + u8 cas_l; + + /* operation temperature */ + enum hws_temperature interface_temp; + + /* 2T vs 1T mode (by default computed from number of CSs) */ + enum hws_timing timing; +}; + +struct hws_topology_map { + /* Number of interfaces (default is 12) */ + u8 if_act_mask; + + /* Controller configuration per interface */ + struct if_params interface_params[MAX_INTERFACE_NUM]; + + /* BUS per interface (default is 4) */ + u8 num_of_bus_per_interface; + + /* Bit mask for active buses */ + u8 bus_act_mask; +}; + +/* DDR3 training global configuration parameters */ +struct tune_train_params { + u32 ck_delay; + u32 ck_delay_16; + u32 p_finger; + u32 n_finger; + u32 phy_reg3_val; +}; + +#endif /* _DDR_TOPOLOGY_DEF_H */ diff --git a/drivers/ddr/marvell/a38x/old/ddr_training_ip_db.h b/drivers/ddr/marvell/a38x/old/ddr_training_ip_db.h new file mode 100644 index 0000000000..ff5f817383 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/ddr_training_ip_db.h @@ -0,0 +1,16 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _DDR_TRAINING_IP_DB_H_ +#define _DDR_TRAINING_IP_DB_H_ + +#include "ddr_topology_def.h" +#include "ddr3_training_ip_db.h" + +u32 speed_bin_table(u8 index, enum speed_bin_table_elements element); +u32 pattern_table_get_word(u32 dev_num, enum hws_pattern type, u8 index); + +#endif /* _DDR3_TRAINING_IP_DB_H_ */ diff --git a/drivers/ddr/marvell/a38x/old/silicon_if.h b/drivers/ddr/marvell/a38x/old/silicon_if.h new file mode 100644 index 0000000000..7fce27da12 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/silicon_if.h @@ -0,0 +1,17 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef __silicon_if_H +#define __silicon_if_H + +/* max number of devices supported by driver */ +#ifdef CO_CPU_RUN +#define HWS_MAX_DEVICE_NUM (1) +#else +#define HWS_MAX_DEVICE_NUM (16) +#endif + +#endif /* __silicon_if_H */ diff --git a/drivers/ddr/marvell/a38x/old/xor.h b/drivers/ddr/marvell/a38x/old/xor.h new file mode 100644 index 0000000000..7b1e316177 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/xor.h @@ -0,0 +1,92 @@ +/* + * Copyright (C) Marvell International Ltd. and its affiliates + * + * SPDX-License-Identifier: GPL-2.0 + */ + +#ifndef _XOR_H +#define _XOR_H + +#define SRAM_BASE 0x40000000 + +#include "ddr3_hws_hw_training_def.h" + +#define MV_XOR_MAX_UNIT 2 /* XOR unit == XOR engine */ +#define MV_XOR_MAX_CHAN 4 /* total channels for all units */ +#define MV_XOR_MAX_CHAN_PER_UNIT 2 /* channels for units */ + +#define MV_IS_POWER_OF_2(num) (((num) != 0) && (((num) & ((num) - 1)) == 0)) + +/* + * This structure describes address space window. 
Window base can be + * 64 bit, window size up to 4GB + */ +struct addr_win { + u32 base_low; /* 32bit base low */ + u32 base_high; /* 32bit base high */ + u32 size; /* 32bit size */ +}; + +/* This structure describes SoC units address decode window */ +struct unit_win_info { + struct addr_win addr_win; /* An address window */ + int enable; /* Address decode window is enabled/disabled */ + u8 attrib; /* chip select attributes */ + u8 target_id; /* Target Id of this MV_TARGET */ +}; + +/* + * This enumerator describes the type of functionality the XOR channel + * can have while using the same data structures. + */ +enum xor_type { + MV_XOR, /* XOR channel functions as XOR accelerator */ + MV_DMA, /* XOR channel functions as IDMA channel */ + MV_CRC32 /* XOR channel functions as CRC 32 calculator */ +}; + +enum mv_state { + MV_IDLE, + MV_ACTIVE, + MV_PAUSED, + MV_UNDEFINED_STATE +}; + +/* + * This enumerator describes the set of commands that can be applied on + * an engine (e.g. IDMA, XOR). Appling a comman depends on the current + * status (see MV_STATE enumerator) + * + * Start can be applied only when status is IDLE + * Stop can be applied only when status is IDLE, ACTIVE or PAUSED + * Pause can be applied only when status is ACTIVE + * Restart can be applied only when status is PAUSED + */ +enum mv_command { + MV_START, /* Start */ + MV_STOP, /* Stop */ + MV_PAUSE, /* Pause */ + MV_RESTART /* Restart */ +}; + +enum xor_override_target { + SRC_ADDR0, /* Source Address #0 Control */ + SRC_ADDR1, /* Source Address #1 Control */ + SRC_ADDR2, /* Source Address #2 Control */ + SRC_ADDR3, /* Source Address #3 Control */ + SRC_ADDR4, /* Source Address #4 Control */ + SRC_ADDR5, /* Source Address #5 Control */ + SRC_ADDR6, /* Source Address #6 Control */ + SRC_ADDR7, /* Source Address #7 Control */ + XOR_DST_ADDR, /* Destination Address Control */ + XOR_NEXT_DESC /* Next Descriptor Address Control */ +}; + +enum mv_state mv_xor_state_get(u32 chan); +void mv_xor_hal_init(u32 xor_chan_num); +int mv_xor_ctrl_set(u32 chan, u32 xor_ctrl); +int mv_xor_command_set(u32 chan, enum mv_command command); +int mv_xor_override_set(u32 chan, enum xor_override_target target, u32 win_num, + int enable); + +#endif From patchwork Tue Jun 18 15:34:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Marek_Beh=C3=BAn?= X-Patchwork-Id: 1949269 X-Patchwork-Delegate: sr@denx.de Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256 header.s=k20201202 header.b=qzW/UOMa; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=lists.denx.de (client-ip=2a01:238:438b:c500:173d:9f52:ddab:ee01; helo=phobos.denx.de; envelope-from=u-boot-bounces@lists.denx.de; receiver=patchwork.ozlabs.org) Received: from phobos.denx.de (phobos.denx.de [IPv6:2a01:238:438b:c500:173d:9f52:ddab:ee01]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4W3W8f0tvhz20Wb for ; Wed, 19 Jun 2024 01:37:22 +1000 (AEST) Received: from h2850616.stratoserver.net (localhost [IPv6:::1]) by phobos.denx.de (Postfix) with ESMTP id 5F7EF883DA; Tue, 18 Jun 2024 17:35:10 +0200 (CEST) 
Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=u-boot-bounces@lists.denx.de Authentication-Results: phobos.denx.de; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.b="qzW/UOMa"; dkim-atps=neutral Received: by phobos.denx.de (Postfix, from userid 109) id 58F34884C5; Tue, 18 Jun 2024 17:35:08 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on phobos.denx.de X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, RCVD_IN_VALIDITY_RPBL_BLOCKED,RCVD_IN_VALIDITY_SAFE_BLOCKED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.2 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by phobos.denx.de (Postfix) with ESMTPS id EB9448840C for ; Tue, 18 Jun 2024 17:35:04 +0200 (CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=kabel@kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sin.source.kernel.org (Postfix) with ESMTP id 3E45ACE1B72; Tue, 18 Jun 2024 15:35:03 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6C8B1C4AF1D; Tue, 18 Jun 2024 15:35:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718724902; bh=dCLeM8VZkRnx+De+xhHx+iRoeroBWSeOxg38WKQpLsk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qzW/UOMaNLLIlA7LEMrki58ue1zR72Ocew90JVtmkKraBORTW42pXTgzLgr7HDjhw FjNaSnfVB+eR4YfQRnIrLTQ9Lfu/0ZS3gOUdcsrK94hEeSksAk3Vzhr/lQcPAowDWn ryNwRfVA0TNqqhanPxljSrQEjttEhoaFHUKOsU1jIabXajqOTq5lh0749scKwh/nby DMx7RdyVxNFN8v1rq9sKd+6til6hE8hMAXLezshWKQBHHFgkfUbvTplaCyojaF+z1F ZRsyEg960mTx5i6lXdO4pFbrPaZ9cng1+x2M6LZ2YgcFYk17DX1Wqr/KenfSGrVcI7 Yi8rDygilNPcQ== From: =?utf-8?q?Marek_Beh=C3=BAn?= To: Stefan Roese Cc: u-boot@lists.denx.de, =?utf-8?q?Marek_Beh=C3=BAn?= Subject: [PATCH u-boot-marvell 13/16] ddr: marvell: a38x: old: Fix some compiler warning of the old code Date: Tue, 18 Jun 2024 17:34:36 +0200 Message-ID: <20240618153439.9518-14-kabel@kernel.org> X-Mailer: git-send-email 2.44.2 In-Reply-To: <20240618153439.9518-1-kabel@kernel.org> References: <20240618153439.9518-1-kabel@kernel.org> MIME-Version: 1.0 X-BeenThere: u-boot@lists.denx.de X-Mailman-Version: 2.1.39 Precedence: list List-Id: U-Boot discussion List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: u-boot-bounces@lists.denx.de Sender: "U-Boot" X-Virus-Scanned: clamav-milter 0.103.8 at phobos.denx.de X-Virus-Status: Clean Fix some compilation warning in the old DDR training code. 
Signed-off-by: Marek Behún --- drivers/ddr/marvell/a38x/old/ddr3_a38x.c | 1 + drivers/ddr/marvell/a38x/old/ddr3_init.h | 2 ++ drivers/ddr/marvell/a38x/old/ddr3_training.c | 1 + drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.c | 1 + drivers/ddr/marvell/a38x/old/ddr3_training_leveling.c | 1 + 5 files changed, 6 insertions(+) diff --git a/drivers/ddr/marvell/a38x/old/ddr3_a38x.c b/drivers/ddr/marvell/a38x/old/ddr3_a38x.c index cc2e498d50..8504b9b40c 100644 --- a/drivers/ddr/marvell/a38x/old/ddr3_a38x.c +++ b/drivers/ddr/marvell/a38x/old/ddr3_a38x.c @@ -9,6 +9,7 @@ #include #include #include +#include #include "ddr3_init.h" diff --git a/drivers/ddr/marvell/a38x/old/ddr3_init.h b/drivers/ddr/marvell/a38x/old/ddr3_init.h index 8cb08864c2..ad95cc9ef8 100644 --- a/drivers/ddr/marvell/a38x/old/ddr3_init.h +++ b/drivers/ddr/marvell/a38x/old/ddr3_init.h @@ -291,7 +291,9 @@ extern struct cl_val_per_freq cas_latency_table[]; extern u32 target_freq; extern struct hws_tip_config_func_db config_func_info[HWS_MAX_DEVICE_NUM]; extern u32 clamp_tbl[]; +#if 0 extern u32 init_freq; +#endif /* list of allowed frequency listed in order of enum hws_ddr_freq */ extern u32 freq_val[]; extern u8 debug_training_static; diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training.c b/drivers/ddr/marvell/a38x/old/ddr3_training.c index c5ebd7c16d..29b31a059a 100644 --- a/drivers/ddr/marvell/a38x/old/ddr3_training.c +++ b/drivers/ddr/marvell/a38x/old/ddr3_training.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "ddr3_init.h" diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.c b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.c index 7ac23aedc6..869f397b7f 100644 --- a/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.c +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_ip_engine.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "ddr3_init.h" diff --git a/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.c b/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.c index 8073c2baaf..d41845a4f4 100644 --- a/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.c +++ b/drivers/ddr/marvell/a38x/old/ddr3_training_leveling.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "ddr3_init.h" From patchwork Tue Jun 18 15:34:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Marek_Beh=C3=BAn?= X-Patchwork-Id: 1949270 X-Patchwork-Delegate: sr@denx.de Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256 header.s=k20201202 header.b=DF22/r3d; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=lists.denx.de (client-ip=2a01:238:438b:c500:173d:9f52:ddab:ee01; helo=phobos.denx.de; envelope-from=u-boot-bounces@lists.denx.de; receiver=patchwork.ozlabs.org) Received: from phobos.denx.de (phobos.denx.de [IPv6:2a01:238:438b:c500:173d:9f52:ddab:ee01]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4W3W8v4PdVz20KL for ; Wed, 19 Jun 2024 01:37:35 +1000 (AEST) Received: from h2850616.stratoserver.net (localhost [IPv6:::1]) by phobos.denx.de (Postfix) with ESMTP id C7BAA8848F; Tue, 18 Jun 2024 17:35:10 
+0200 (CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=u-boot-bounces@lists.denx.de Authentication-Results: phobos.denx.de; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.b="DF22/r3d"; dkim-atps=neutral Received: by phobos.denx.de (Postfix, from userid 109) id D62DB88485; Tue, 18 Jun 2024 17:35:08 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on phobos.denx.de X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.2 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by phobos.denx.de (Postfix) with ESMTPS id F23038819D for ; Tue, 18 Jun 2024 17:35:04 +0200 (CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=kabel@kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id C3C9361A02; Tue, 18 Jun 2024 15:35:03 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B8E85C4AF4D; Tue, 18 Jun 2024 15:35:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718724903; bh=pkWH/78ofpQEH6ySHJy/dazVL4ax0gW1Kun5jKH5/cw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DF22/r3dim7kRSs3Fj+PGc/YOiPLt84AAVR6yEU+95td5otPFd3VXSpxJLOnmN1WE O3uQQl0UxOBgnnCUvpRCAae7c7NQNK3m5oR0cX5pmcCW88qrnaZ2R+vigcVfVVMe6z 3pohyRN5T9kOcDVBf+tXF4jVWN54CJwpmSY899h0j0Ic0wS93Drx49CxaPEgtNMig+ mYfmhAgAhYhAxSkLQcqFhOplcB9BVXNmn8i4EdBNdoE1Z4odc1OGc71WZqHtRwi1Ov CyMbxFMp8Kmjv9iPz+vnyKCl4pcldXlLaKBMCybG1vnrBQaLOugdhVTlNjjekexyFt v7MjndOm1281g== From: =?utf-8?q?Marek_Beh=C3=BAn?= To: Stefan Roese Cc: u-boot@lists.denx.de, =?utf-8?q?Marek_Beh=C3=BAn?= Subject: [PATCH u-boot-marvell 14/16] ddr: marvell: a38x: old: Backport immutable debug settings Date: Tue, 18 Jun 2024 17:34:37 +0200 Message-ID: <20240618153439.9518-15-kabel@kernel.org> X-Mailer: git-send-email 2.44.2 In-Reply-To: <20240618153439.9518-1-kabel@kernel.org> References: <20240618153439.9518-1-kabel@kernel.org> MIME-Version: 1.0 X-BeenThere: u-boot@lists.denx.de X-Mailman-Version: 2.1.39 Precedence: list List-Id: U-Boot discussion List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: u-boot-bounces@lists.denx.de Sender: "U-Boot" X-Virus-Scanned: clamav-milter 0.103.8 at phobos.denx.de X-Virus-Status: Clean Backport the option to compile with immutable debug settings also to the old implementation of the DDR3 training code. 
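In a minimal sketch of the idea (standalone and simplified; the real declarations are in the ddr3_init.h hunk below, and the u8 typedef and DEBUG_LEVEL_ERROR value here are only stand-ins for the driver's own definitions):

    typedef unsigned char u8;                   /* stand-in for U-Boot's u8 */
    #define DEBUG_LEVEL_ERROR 3                 /* placeholder value for the sketch */

    #ifdef CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS
    static const u8 debug_training = DEBUG_LEVEL_ERROR; /* fixed at build time */
    #else
    extern u8 debug_training;   /* runtime-settable via ddr3_hws_set_log_level() */
    #endif

The driver's debug prints are guarded by comparisons against these variables (see the debug_training < DEBUG_LEVEL_INFO check added in ddr3_debug.c below), so in the immutable build each such comparison is a compile-time constant and the compiler can discard print branches that can never be taken, together with their format strings.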
The original PR for mv-ddr-marvell can be seen at https://github.com/MarvellEmbeddedProcessors/mv-ddr-marvell/pull/45/ Signed-off-by: Marek Behún --- drivers/ddr/marvell/a38x/old/ddr3_debug.c | 34 +++++++++++++++---- drivers/ddr/marvell/a38x/old/ddr3_init.c | 3 +- drivers/ddr/marvell/a38x/old/ddr3_init.h | 40 ++++++++++++++--------- 3 files changed, 54 insertions(+), 23 deletions(-) diff --git a/drivers/ddr/marvell/a38x/old/ddr3_debug.c b/drivers/ddr/marvell/a38x/old/ddr3_debug.c index d8bd328dfd..d559a84a68 100644 --- a/drivers/ddr/marvell/a38x/old/ddr3_debug.c +++ b/drivers/ddr/marvell/a38x/old/ddr3_debug.c @@ -12,13 +12,15 @@ #include "ddr3_init.h" +#if !defined(CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS) u8 is_reg_dump = 0; u8 debug_pbs = DEBUG_LEVEL_ERROR; +#endif /* * API to change flags outside of the lib */ -#ifndef SILENT_LIB +#if !defined(SILENT_LIB) && !defined(CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS) /* Debug flags for other Training modules */ u8 debug_training_static = DEBUG_LEVEL_ERROR; u8 debug_training = DEBUG_LEVEL_ERROR; @@ -83,12 +85,13 @@ void ddr3_hws_set_log_level(enum ddr_lib_debug_block block, u8 level) #endif struct hws_tip_config_func_db config_func_info[HWS_MAX_DEVICE_NUM]; -u8 is_default_centralization = 0; -u8 is_tune_result = 0; -u8 is_validate_window_per_if = 0; -u8 is_validate_window_per_pup = 0; -u8 sweep_cnt = 1; -u32 is_bist_reset_bit = 1; + +#if 0 +static u8 is_validate_window_per_if = 0; +static u8 is_validate_window_per_pup = 0; +static u8 sweep_cnt = 1; +#endif + static struct hws_xsb_info xsb_info[HWS_MAX_DEVICE_NUM]; /* @@ -291,6 +294,7 @@ int print_device_info(u8 dev_num) return MV_OK; } +#if 0 void hws_ddr3_tip_sweep_test(int enable) { if (enable) { @@ -303,6 +307,7 @@ void hws_ddr3_tip_sweep_test(int enable) } } #endif +#endif char *ddr3_tip_convert_tune_result(enum hws_result tune_result) { @@ -326,6 +331,7 @@ int ddr3_tip_print_log(u32 dev_num, u32 mem_addr) u32 if_id = 0; struct hws_topology_map *tm = ddr3_get_topology_map(); +#if 0 #ifndef EXCLUDE_SWITCH_DEBUG if ((is_validate_window_per_if != 0) || (is_validate_window_per_pup != 0)) { @@ -346,8 +352,18 @@ int ddr3_tip_print_log(u32 dev_num, u32 mem_addr) CHECK_STATUS(ddr3_tip_restore_dunit_regs(dev_num)); ddr3_tip_reg_dump(dev_num); } +#endif #endif + /* return early if we won't print anything anyway */ + if ( +#if defined(SILENT_LIB) + 1 || +#endif + debug_training < DEBUG_LEVEL_INFO) { + return MV_OK; + } + for (if_id = 0; if_id <= MAX_INTERFACE_NUM - 1; if_id++) { VALIDATE_ACTIVE(tm->if_act_mask, if_id); @@ -756,7 +772,9 @@ u32 xsb_test_table[][8] = { 0xffffffff, 0xffffffff} }; +#if 0 static int ddr3_tip_access_atr(u32 dev_num, u32 flag_id, u32 value, u32 **ptr); +#endif int ddr3_tip_print_adll(void) { @@ -788,6 +806,7 @@ int ddr3_tip_print_adll(void) return MV_OK; } +#if 0 /* * Set attribute value */ @@ -1155,6 +1174,7 @@ static int ddr3_tip_access_atr(u32 dev_num, u32 flag_id, u32 value, u32 **ptr) return MV_OK; } +#endif #ifndef EXCLUDE_SWITCH_DEBUG /* diff --git a/drivers/ddr/marvell/a38x/old/ddr3_init.c b/drivers/ddr/marvell/a38x/old/ddr3_init.c index 9818c3d494..b3c04eb3ab 100644 --- a/drivers/ddr/marvell/a38x/old/ddr3_init.c +++ b/drivers/ddr/marvell/a38x/old/ddr3_init.c @@ -385,7 +385,8 @@ int ddr3_init(void) return status; /* Set log level for training lib */ - ddr3_hws_set_log_level(DEBUG_BLOCK_ALL, DEBUG_LEVEL_ERROR); + if (!IS_ENABLED(CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS)) + ddr3_hws_set_log_level(DEBUG_BLOCK_ALL, DEBUG_LEVEL_ERROR); /* Start New Training IP */ status = 
ddr3_hws_hw_training(); diff --git a/drivers/ddr/marvell/a38x/old/ddr3_init.h b/drivers/ddr/marvell/a38x/old/ddr3_init.h index ad95cc9ef8..5090cf97a7 100644 --- a/drivers/ddr/marvell/a38x/old/ddr3_init.h +++ b/drivers/ddr/marvell/a38x/old/ddr3_init.h @@ -152,17 +152,38 @@ enum log_level { }; /* Globals */ -extern u8 debug_training; +#if defined(CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS) +static const u8 is_reg_dump = 0; +static const u8 debug_training_static = DEBUG_LEVEL_ERROR; +static const u8 debug_training = DEBUG_LEVEL_ERROR; +static const u8 debug_leveling = DEBUG_LEVEL_ERROR; +static const u8 debug_centralization = DEBUG_LEVEL_ERROR; +static const u8 debug_training_ip = DEBUG_LEVEL_ERROR; +static const u8 debug_training_bist = DEBUG_LEVEL_ERROR; +static const u8 debug_training_hw_alg = DEBUG_LEVEL_ERROR; +static const u8 debug_training_access = DEBUG_LEVEL_ERROR; +static const u8 debug_training_a38x = DEBUG_LEVEL_ERROR; +static const u8 debug_pbs = DEBUG_LEVEL_ERROR; +#else /* !CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS */ extern u8 is_reg_dump; +extern u8 debug_training_static; +extern u8 debug_training; +extern u8 debug_leveling; +extern u8 debug_centralization; +extern u8 debug_training_ip; +extern u8 debug_training_bist; +extern u8 debug_training_hw_alg; +extern u8 debug_training_access; +extern u8 debug_training_a38x; +extern u8 debug_pbs; +#endif /* !CONFIG_DDR_IMMUTABLE_DEBUG_SETTINGS */ + extern u8 generic_init_controller; extern u32 freq_val[]; extern u32 is_pll_old; extern struct cl_val_per_freq cas_latency_table[]; extern struct pattern_info pattern_table[]; extern struct cl_val_per_freq cas_write_latency_table[]; -extern u8 debug_training; -extern u8 debug_centralization, debug_training_ip, debug_training_bist, - debug_pbs, debug_training_static, debug_leveling; extern u32 pipe_multicast_mask; extern struct hws_tip_config_func_db config_func_info[]; extern u8 cs_mask_reg[]; @@ -186,8 +207,6 @@ extern u32 g_dic; extern u32 g_odt_config; extern u32 g_rtt_nom; -extern u8 debug_training_access; -extern u8 debug_training_a38x; extern u32 first_active_if; extern enum hws_ddr_freq init_freq; extern u32 delay_enable, ck_delay, ck_delay_16, ca_delay; @@ -227,7 +246,6 @@ extern u32 znri_data_phy_val; extern u32 zpri_data_phy_val; extern u32 znri_ctrl_phy_val; extern u32 zpri_ctrl_phy_val; -extern u8 debug_training_access; extern u32 finger_test, p_finger_start, p_finger_end, n_finger_start, n_finger_end, p_finger_step, n_finger_step; extern u32 mode2_t; @@ -243,8 +261,6 @@ extern u32 freq_mask[HWS_MAX_DEVICE_NUM][DDR_FREQ_LIMIT]; extern u32 start_pattern, end_pattern; extern u32 maxt_poll_tries; -extern u32 is_bist_reset_bit; -extern u8 debug_training_bist; extern u8 vref_window_size[MAX_INTERFACE_NUM][MAX_BUS_NUM]; extern u32 debug_mode; @@ -252,20 +268,16 @@ extern u32 effective_cs; extern int ddr3_tip_centr_skip_min_win_check; extern u32 *dq_map_table; extern enum auto_tune_stage training_stage; -extern u8 debug_centralization; extern u32 delay_enable; extern u32 start_pattern, end_pattern; extern u32 freq_val[DDR_FREQ_LIMIT]; -extern u8 debug_training_hw_alg; extern enum auto_tune_stage training_stage; -extern u8 debug_training_ip; extern enum hws_result training_result[MAX_STAGE_LIMIT][MAX_INTERFACE_NUM]; extern enum auto_tune_stage training_stage; extern u32 effective_cs; -extern u8 debug_leveling; extern enum hws_result training_result[MAX_STAGE_LIMIT][MAX_INTERFACE_NUM]; extern enum auto_tune_stage training_stage; extern u32 rl_version; @@ -276,7 +288,6 @@ extern u32 odt_config; extern 
u32 effective_cs; extern u32 phy_reg1_val; -extern u8 debug_pbs; extern u32 effective_cs; extern u16 mask_results_dq_reg_map[]; extern enum hws_ddr_freq medium_freq; @@ -296,7 +307,6 @@ extern u32 init_freq; #endif /* list of allowed frequency listed in order of enum hws_ddr_freq */ extern u32 freq_val[]; -extern u8 debug_training_static; extern u32 first_active_if; /* Prototypes */ From patchwork Tue Jun 18 15:34:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Marek_Beh=C3=BAn?= X-Patchwork-Id: 1949271 X-Patchwork-Delegate: sr@denx.de Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256 header.s=k20201202 header.b=X+S5BoYl; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=lists.denx.de (client-ip=2a01:238:438b:c500:173d:9f52:ddab:ee01; helo=phobos.denx.de; envelope-from=u-boot-bounces@lists.denx.de; receiver=patchwork.ozlabs.org) Received: from phobos.denx.de (phobos.denx.de [IPv6:2a01:238:438b:c500:173d:9f52:ddab:ee01]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4W3W980kRKz20KL for ; Wed, 19 Jun 2024 01:37:48 +1000 (AEST) Received: from h2850616.stratoserver.net (localhost [IPv6:::1]) by phobos.denx.de (Postfix) with ESMTP id 3A9EF8841E; Tue, 18 Jun 2024 17:35:12 +0200 (CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=u-boot-bounces@lists.denx.de Authentication-Results: phobos.denx.de; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.b="X+S5BoYl"; dkim-atps=neutral Received: by phobos.denx.de (Postfix, from userid 109) id 14FC4884A3; Tue, 18 Jun 2024 17:35:11 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on phobos.denx.de X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, RCVD_IN_VALIDITY_RPBL_BLOCKED,RCVD_IN_VALIDITY_SAFE_BLOCKED, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.2 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by phobos.denx.de (Postfix) with ESMTPS id 5F7A48834B for ; Tue, 18 Jun 2024 17:35:06 +0200 (CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=kabel@kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 3A67F618BC; Tue, 18 Jun 2024 15:35:05 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 10CEBC4AF1D; Tue, 18 Jun 2024 15:35:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718724904; bh=8v+vF6Ylo0650W5M2DoxCNaq5CFBw6rX309KOVcE9VI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=X+S5BoYlB7z5i6DubZWr0V5vAR+C0YjjDn7hpsenBPoy6ZPpnvoNFAH8Y59shnt/0 
BcvzyqLEfNpqeZ9jpRBGV1f3T6TzsP0KvlvmhnkUzabBUxcYZPU53HV2x7nR7aMlbi Sx5wtBga1fAXOIz17muhN/pKDt0gXI2dHCYWbPcQoszlFjDa4fs6maIy+avJTDsT0T rCARVVbvz8/0cGjL+llHueFRQrFChdIZGL6b7TvcHAYdlPosKtH1E9+SbhSjYxKYae bnPttvafT0shOgbqREPEnKIulVXotEJ5KCclH8QkxmHJT6/yhXL7fM6dTFyguZhu6m ZNOl6eQNB88Lg== From: =?utf-8?q?Marek_Beh=C3=BAn?= To: Stefan Roese Cc: u-boot@lists.denx.de, =?utf-8?q?Marek_Beh=C3=BAn?= Subject: [PATCH u-boot-marvell 15/16] arm: mvebu: a38x: Add optional support for using old DDR3 training code Date: Tue, 18 Jun 2024 17:34:38 +0200 Message-ID: <20240618153439.9518-16-kabel@kernel.org> X-Mailer: git-send-email 2.44.2 In-Reply-To: <20240618153439.9518-1-kabel@kernel.org> References: <20240618153439.9518-1-kabel@kernel.org> MIME-Version: 1.0 X-BeenThere: u-boot@lists.denx.de X-Mailman-Version: 2.1.39 Precedence: list List-Id: U-Boot discussion List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: u-boot-bounces@lists.denx.de Sender: "U-Boot" X-Virus-Scanned: clamav-milter 0.103.8 at phobos.denx.de X-Virus-Status: Clean Add optional support for using the old DDR3 training code from 2017. The code lives in drivers/ddr/marvell/a38x/old/. To prevent symbol clashes with the new DDR3 training code, a dedicated header that renames all clashing symbols via macros is included, prefixing them with 'old_'. If old DDR3 training support is selected for a board, the SPL initialization code calls a new function, board_use_old_ddr3_training(), to decide which implementation to use. The default weak implementation returns false, so the new DDR3 training code is used by default. Boards that wish to support this need to select the ARMADA_38X_SUPPORT_OLD_DDR3_TRAINING config option and implement the old version of the DDR topology provider, ddr3_get_topology_map(). 
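For reviewers, the board-side contract described above amounts to two hooks. Below is a minimal sketch (not part of this patch) of what a board file providing them might look like; the helper board_strap_selects_old_training() and the zeroed topology contents are hypothetical placeholders, and a real, populated implementation for Turris Omnia follows in patch 16/16 of this series.

// SPDX-License-Identifier: GPL-2.0+
/*
 * Hypothetical sketch of the board-side hooks needed when a board selects
 * ARMADA_38X_SUPPORT_OLD_DDR3_TRAINING. Assumes the normal U-Boot SPL build
 * environment; the decision helper and the topology values are placeholders.
 */

#include <stdbool.h>	/* bool; in-tree code gets this via U-Boot's usual headers */

#include "../drivers/ddr/marvell/a38x/old/ddr3_init.h"

/* Placeholder for a board-specific decision source (strap, EEPROM, env, ...). */
static bool board_strap_selects_old_training(void)
{
	return false;
}

/*
 * Overrides the weak default in arch/arm/mach-mvebu/spl.c; returning true
 * makes the SPL call old_ddr3_init() instead of ddr3_init().
 */
bool board_use_old_ddr3_training(void)
{
	return board_strap_selects_old_training();
}

/*
 * The old training code queries the board for its DDR topology. The real
 * interface/bus parameters must be filled in here; see the populated
 * Turris Omnia version in patch 16/16.
 */
struct hws_topology_map *ddr3_get_topology_map(void)
{
	static struct hws_topology_map map = { 0 };

	return &map;
}

With hooks like these in place, selecting ARMADA_38X_SUPPORT_OLD_DDR3_TRAINING pulls the old/ training code into the SPL build via the Makefile change below, and ddr3_init_or_fail() picks the implementation at runtime.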
Signed-off-by: Marek Behún --- arch/arm/mach-mvebu/Kconfig | 4 + arch/arm/mach-mvebu/include/mach/cpu.h | 1 + arch/arm/mach-mvebu/spl.c | 37 ++- drivers/ddr/marvell/a38x/Makefile | 2 + drivers/ddr/marvell/a38x/old/Makefile | 11 + .../marvell/a38x/old/glue_symbol_renames.h | 247 ++++++++++++++++++ 6 files changed, 293 insertions(+), 9 deletions(-) create mode 100644 drivers/ddr/marvell/a38x/old/glue_symbol_renames.h diff --git a/arch/arm/mach-mvebu/Kconfig b/arch/arm/mach-mvebu/Kconfig index a320793a30..e377e8a48a 100644 --- a/arch/arm/mach-mvebu/Kconfig +++ b/arch/arm/mach-mvebu/Kconfig @@ -37,6 +37,10 @@ config ARMADA_38X_HS_IMPEDANCE_THRESH default 0x6 range 0x0 0x7 +config ARMADA_38X_SUPPORT_OLD_DDR3_TRAINING + bool + depends on ARMADA_38X + config ARMADA_XP bool select ARMADA_32BIT diff --git a/arch/arm/mach-mvebu/include/mach/cpu.h b/arch/arm/mach-mvebu/include/mach/cpu.h index 904e7157ba..af6ce2920e 100644 --- a/arch/arm/mach-mvebu/include/mach/cpu.h +++ b/arch/arm/mach-mvebu/include/mach/cpu.h @@ -174,6 +174,7 @@ int serdes_phy_config(void); * drivers/ddr/marvell */ int ddr3_init(void); +int old_ddr3_init(void); /* Auto Voltage Scaling */ #if defined(CONFIG_ARMADA_38X) diff --git a/arch/arm/mach-mvebu/spl.c b/arch/arm/mach-mvebu/spl.c index 4f4f7e00e3..cbef411376 100644 --- a/arch/arm/mach-mvebu/spl.c +++ b/arch/arm/mach-mvebu/spl.c @@ -313,6 +313,33 @@ int board_return_to_bootrom(struct spl_image_info *spl_image, hang(); } +#if !defined(CONFIG_ARMADA_375) +__weak bool board_use_old_ddr3_training(void) +{ + return false; +} + +static void ddr3_init_or_fail(void) +{ + int ret; + + if (IS_ENABLED(CONFIG_ARMADA_38X_SUPPORT_OLD_DDR3_TRAINING) && + board_use_old_ddr3_training()) + ret = old_ddr3_init(); + else + ret = ddr3_init(); + + if (ret) { + printf("ddr3 init failed: %d\n", ret); + if (IS_ENABLED(CONFIG_DDR_RESET_ON_TRAINING_FAILURE) && + get_boot_device() != BOOT_DEVICE_UART) + reset_cpu(); + else + hang(); + } +} +#endif + void board_init_f(ulong dummy) { int ret; @@ -347,15 +374,7 @@ void board_init_f(ulong dummy) serdes_phy_config(); /* Setup DDR */ - ret = ddr3_init(); - if (ret) { - printf("ddr3_init() failed: %d\n", ret); - if (IS_ENABLED(CONFIG_DDR_RESET_ON_TRAINING_FAILURE) && - get_boot_device() != BOOT_DEVICE_UART) - reset_cpu(); - else - hang(); - } + ddr3_init_or_fail(); #endif /* Initialize Auto Voltage Scaling */ diff --git a/drivers/ddr/marvell/a38x/Makefile b/drivers/ddr/marvell/a38x/Makefile index fcfb615686..4e8a9d190d 100644 --- a/drivers/ddr/marvell/a38x/Makefile +++ b/drivers/ddr/marvell/a38x/Makefile @@ -18,6 +18,8 @@ obj-$(CONFIG_SPL_BUILD) += mv_ddr_spd.o obj-$(CONFIG_SPL_BUILD) += mv_ddr_topology.o obj-$(CONFIG_SPL_BUILD) += xor.o +obj-$(CONFIG_ARMADA_38X_SUPPORT_OLD_DDR3_TRAINING) += old/ + ifdef CONFIG_DDR4 obj-$(CONFIG_SPL_BUILD) += mv_ddr4_mpr_pda_if.o obj-$(CONFIG_SPL_BUILD) += mv_ddr4_training.o diff --git a/drivers/ddr/marvell/a38x/old/Makefile b/drivers/ddr/marvell/a38x/old/Makefile index e7b723bb24..1645a79b40 100644 --- a/drivers/ddr/marvell/a38x/old/Makefile +++ b/drivers/ddr/marvell/a38x/old/Makefile @@ -16,3 +16,14 @@ obj-$(CONFIG_SPL_BUILD) += ddr3_training_ip_engine.o obj-$(CONFIG_SPL_BUILD) += ddr3_training_leveling.o obj-$(CONFIG_SPL_BUILD) += ddr3_training_pbs.o obj-$(CONFIG_SPL_BUILD) += ddr3_training_static.o + +define IncludeSymbolRename + CFLAGS_$(1) = -include $(srctree)/drivers/ddr/marvell/a38x/old/glue_symbol_renames.h +endef + +$(foreach obj,$(obj-y),$(eval $(call IncludeSymbolRename,$(obj)))) + +# The old version of DDR training 
fails weirdly on some boards if the whole +# driver is compiled with LTO. It seems to work if at least ddr3_init.c is +# compiled without LTO. +CFLAGS_REMOVE_ddr3_init.o := $(LTO_CFLAGS) diff --git a/drivers/ddr/marvell/a38x/old/glue_symbol_renames.h b/drivers/ddr/marvell/a38x/old/glue_symbol_renames.h new file mode 100644 index 0000000000..9bdfecd2d6 --- /dev/null +++ b/drivers/ddr/marvell/a38x/old/glue_symbol_renames.h @@ -0,0 +1,247 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Glue old A38x DDR training code with new U-Boot + * + * This header renames symbols so that they do not clash with new A38x DDR + * training code (the one living in the parent directory). + * + * Copyright (C) 2024 Marek Behún + */ + +#ifndef GLUE_SYMBOL_RENAMES_H +#define GLUE_SYMBOL_RENAMES_H + +#define activate_deselect_after_run_alg old_activate_deselect_after_run_alg +#define activate_select_before_run_alg old_activate_select_before_run_alg +#define adll_calibration old_adll_calibration +#define adll_shift_lock old_adll_shift_lock +#define adll_shift_val old_adll_shift_val +#define bus_end_window old_bus_end_window +#define bus_start_window old_bus_start_window +#define ca_delay old_ca_delay +#define calibration_update_control old_calibration_update_control +#define centralization_state old_centralization_state +#define ck_delay old_ck_delay +#define clamp_tbl old_clamp_tbl +#define cl_mask_table old_cl_mask_table +#define config_func_info old_config_func_info +#define ctrl_adll old_ctrl_adll +#define ctrl_sweepres old_ctrl_sweepres +#define current_valid_window old_current_valid_window +#define current_vref old_current_vref +#define cwl_mask_table old_cwl_mask_table +#define ddr3_calc_mem_cs_size old_ddr3_calc_mem_cs_size +#define ddr3_hws_set_log_level old_ddr3_hws_set_log_level +#define ddr3_init old_ddr3_init +#define ddr3_post_algo_config old_ddr3_post_algo_config +#define ddr3_post_run_alg old_ddr3_post_run_alg +#define ddr3_pre_algo_config old_ddr3_pre_algo_config +#define ddr3_silicon_post_init old_ddr3_silicon_post_init +#define ddr3_tip_bist_activate old_ddr3_tip_bist_activate +#define ddr3_tip_bist_read_result old_ddr3_tip_bist_read_result +#define ddr3_tip_bus_read old_ddr3_tip_bus_read +#define ddr3_tip_bus_read_modify_write old_ddr3_tip_bus_read_modify_write +#define ddr3_tip_bus_write old_ddr3_tip_bus_write +#define ddr3_tip_calc_cs_mask old_ddr3_tip_calc_cs_mask +#define ddr3_tip_centralization_rx old_ddr3_tip_centralization_rx +#define ddr3_tip_centralization_tx old_ddr3_tip_centralization_tx +#define ddr3_tip_centr_skip_min_win_check old_ddr3_tip_centr_skip_min_win_check +#define ddr3_tip_clean_pbs_result old_ddr3_tip_clean_pbs_result +#define ddr3_tip_cmd_addr_init_delay old_ddr3_tip_cmd_addr_init_delay +#define ddr3_tip_configure_cs old_ddr3_tip_configure_cs +#define ddr3_tip_configure_odpg old_ddr3_tip_configure_odpg +#define ddr3_tip_configure_phy old_ddr3_tip_configure_phy +#define ddr3_tip_convert_tune_result old_ddr3_tip_convert_tune_result +#define ddr3_tip_ddr3_reset_phy_regs old_ddr3_tip_ddr3_reset_phy_regs +#define ddr3_tip_dynamic_per_bit_read_leveling old_ddr3_tip_dynamic_per_bit_read_leveling +#define ddr3_tip_dynamic_read_leveling old_ddr3_tip_dynamic_read_leveling +#define ddr3_tip_dynamic_write_leveling old_ddr3_tip_dynamic_write_leveling +#define ddr3_tip_dynamic_write_leveling_supp old_ddr3_tip_dynamic_write_leveling_supp +#define ddr3_tip_enable_init_sequence old_ddr3_tip_enable_init_sequence +#define ddr3_tip_ext_read old_ddr3_tip_ext_read +#define 
ddr3_tip_ext_write old_ddr3_tip_ext_write +#define ddr3_tip_freq_set old_ddr3_tip_freq_set +#define ddr3_tip_get_buf_max old_ddr3_tip_get_buf_max +#define ddr3_tip_get_buf_min old_ddr3_tip_get_buf_min +#define ddr3_tip_get_buf_ptr old_ddr3_tip_get_buf_ptr +#define ddr3_tip_get_device_info old_ddr3_tip_get_device_info +#define ddr3_tip_get_mask_results_dq_reg old_ddr3_tip_get_mask_results_dq_reg +#define ddr3_tip_get_mask_results_pup_reg_map old_ddr3_tip_get_mask_results_pup_reg_map +#define ddr3_tip_get_pattern_table old_ddr3_tip_get_pattern_table +#define ddr3_tip_get_result_ptr old_ddr3_tip_get_result_ptr +#define ddr3_tip_if_polling old_ddr3_tip_if_polling +#define ddr3_tip_if_read old_ddr3_tip_if_read +#define ddr3_tip_if_write old_ddr3_tip_if_write +#define ddr3_tip_init_config_func old_ddr3_tip_init_config_func +#define ddr3_tip_ip_training old_ddr3_tip_ip_training +#define ddr3_tip_ip_training_wrapper old_ddr3_tip_ip_training_wrapper +#define ddr3_tip_ip_training_wrapper_int old_ddr3_tip_ip_training_wrapper_int +#define ddr3_tip_is_pup_lock old_ddr3_tip_is_pup_lock +#define ddr3_tip_legacy_dynamic_read_leveling old_ddr3_tip_legacy_dynamic_read_leveling +#define ddr3_tip_legacy_dynamic_write_leveling old_ddr3_tip_legacy_dynamic_write_leveling +#define ddr3_tip_load_all_pattern_to_mem old_ddr3_tip_load_all_pattern_to_mem +#define ddr3_tip_load_pattern_to_mem old_ddr3_tip_load_pattern_to_mem +#define ddr3_tip_load_pattern_to_odpg old_ddr3_tip_load_pattern_to_odpg +#define ddr3_tip_load_phy_values old_ddr3_tip_load_phy_values +#define ddr3_tip_pbs old_ddr3_tip_pbs +#define ddr3_tip_pbs_rx old_ddr3_tip_pbs_rx +#define ddr3_tip_pbs_tx old_ddr3_tip_pbs_tx +#define ddr3_tip_print_adll old_ddr3_tip_print_adll +#define ddr3_tip_print_bist_res old_ddr3_tip_print_bist_res +#define ddr3_tip_print_centralization_result old_ddr3_tip_print_centralization_result +#define ddr3_tip_print_log old_ddr3_tip_print_log +#define ddr3_tip_print_stability_log old_ddr3_tip_print_stability_log +#define ddr3_tip_print_wl_supp_result old_ddr3_tip_print_wl_supp_result +#define ddr3_tip_process_result old_ddr3_tip_process_result +#define ddr3_tip_read_training_result old_ddr3_tip_read_training_result +#define ddr3_tip_reg_dump old_ddr3_tip_reg_dump +#define ddr3_tip_register_dq_table old_ddr3_tip_register_dq_table +#define ddr3_tip_register_xsb_info old_ddr3_tip_register_xsb_info +#define ddr3_tip_reset_fifo_ptr old_ddr3_tip_reset_fifo_ptr +#define ddr3_tip_restore_dunit_regs old_ddr3_tip_restore_dunit_regs +#define ddr3_tip_special_rx old_ddr3_tip_special_rx +#define ddr3_tip_training_ip_test old_ddr3_tip_training_ip_test +#define ddr3_tip_tune_training_params old_ddr3_tip_tune_training_params +#define ddr3_tip_vref old_ddr3_tip_vref +#define ddr3_tip_write_additional_odt_setting old_ddr3_tip_write_additional_odt_setting +#define ddr3_tip_write_cs_result old_ddr3_tip_write_cs_result +#define ddr3_tip_write_mrs_cmd old_ddr3_tip_write_mrs_cmd +#define debug_acc old_debug_acc +#define debug_centralization old_debug_centralization +#define debug_dunit old_debug_dunit +#define debug_leveling old_debug_leveling +#define debug_mode old_debug_mode +#define debug_pbs old_debug_pbs +#define debug_training old_debug_training +#define debug_training_access old_debug_training_access +#define debug_training_bist old_debug_training_bist +#define debug_training_hw_alg old_debug_training_hw_alg +#define debug_training_ip old_debug_training_ip +#define debug_training_static old_debug_training_static +#define 
default_centrlization_value old_default_centrlization_value +#define delay_enable old_delay_enable +#define dfs_low_freq old_dfs_low_freq +#define dfs_low_phy1 old_dfs_low_phy1 +#define dq_map_table old_dq_map_table +#define effective_cs old_effective_cs +#define end_if old_end_if +#define end_pattern old_end_pattern +#define finger_test old_finger_test +#define first_active_if old_first_active_if +#define freq_info_table old_freq_info_table +#define g_dic old_g_dic +#define generic_init_controller old_generic_init_controller +#define get_valid_win_rx old_get_valid_win_rx +#define g_odt_config old_g_odt_config +#define g_rtt_nom old_g_rtt_nom +#define g_znodt_ctrl old_g_znodt_ctrl +#define g_znodt_data old_g_znodt_data +#define g_znri_ctrl old_g_znri_ctrl +#define g_znri_data old_g_znri_data +#define g_zpodt_ctrl old_g_zpodt_ctrl +#define g_zpodt_data old_g_zpodt_data +#define g_zpri_ctrl old_g_zpri_ctrl +#define g_zpri_data old_g_zpri_data +#define hws_ddr3_calc_mem_cs_size old_hws_ddr3_calc_mem_cs_size +#define hws_ddr3_cs_base_adr_calc old_hws_ddr3_cs_base_adr_calc +#define hws_ddr3_get_bus_width old_hws_ddr3_get_bus_width +#define hws_ddr3_get_device_size old_hws_ddr3_get_device_size +#define hws_ddr3_get_device_width old_hws_ddr3_get_device_width +#define hws_ddr3_run_bist old_hws_ddr3_run_bist +#define hws_ddr3_tip_init_controller old_hws_ddr3_tip_init_controller +#define hws_ddr3_tip_run_alg old_hws_ddr3_tip_run_alg +#define hws_ddr3_tip_select_ddr_controller old_hws_ddr3_tip_select_ddr_controller +#define interface_state old_interface_state +#define is_adll_calib_before_init old_is_adll_calib_before_init +#define is_bist_reset_bit old_is_bist_reset_bit +#define is_cbe_required old_is_cbe_required +#define is_default_centralization old_is_default_centralization +#define is_dfs_disabled old_is_dfs_disabled +#define is_dfs_in_init old_is_dfs_in_init +#define is_freq_old old_is_freq_old +#define is_pll_before_init old_is_pll_before_init +#define is_reg_dump old_is_reg_dump +#define is_rl_old old_is_rl_old +#define is_tune_result old_is_tune_result +#define is_validate_window_per_if old_is_validate_window_per_if +#define is_validate_window_per_pup old_is_validate_window_per_pup +#define last_valid_window old_last_valid_window +#define last_vref old_last_vref +#define lim_vref old_lim_vref +#define low_freq old_low_freq +#define mask_results_dq_reg_map old_mask_results_dq_reg_map +#define mask_results_dq_reg_map_pup3_ecc old_mask_results_dq_reg_map_pup3_ecc +#define mask_results_pup_reg_map old_mask_results_pup_reg_map +#define mask_results_pup_reg_map_pup3_ecc old_mask_results_pup_reg_map_pup3_ecc +#define mask_tune_func old_mask_tune_func +#define max_adll_per_pup old_max_adll_per_pup +#define max_pbs_per_pup old_max_pbs_per_pup +#define max_polling_for_done old_max_polling_for_done +#define medium_freq old_medium_freq +#define min_adll_per_pup old_min_adll_per_pup +#define min_pbs_per_pup old_min_pbs_per_pup +#define multicast_id old_multicast_id +#define n_finger_end old_n_finger_end +#define n_finger_start old_n_finger_start +#define n_finger_step old_n_finger_step +#define nominal_adll old_nominal_adll +#define odt_additional old_odt_additional +#define odt_config old_odt_config +#define pattern_table_16 old_pattern_table_16 +#define pattern_table_32 old_pattern_table_32 +#define pattern_table_get_word old_pattern_table_get_word +#define pbsdelay_per_pup old_pbsdelay_per_pup +#define pbs_pattern old_pbs_pattern +#define p_finger_end old_p_finger_end +#define p_finger_start 
old_p_finger_start +#define p_finger_step old_p_finger_step +#define phy_reg0_val old_phy_reg0_val +#define phy_reg1_val old_phy_reg1_val +#define phy_reg2_val old_phy_reg2_val +#define phy_reg3_val old_phy_reg3_val +#define phy_reg_bk old_phy_reg_bk +#define reset_read_fifo old_reset_read_fifo +#define result_all_bit old_result_all_bit +#define result_mat old_result_mat +#define result_mat_rx_dqs old_result_mat_rx_dqs +#define rl_mid_freq_wa old_rl_mid_freq_wa +#define rl_test old_rl_test +#define run_xsb_test old_run_xsb_test +#define speed_bin_table_t_rc old_speed_bin_table_t_rc +#define speed_bin_table_t_rcd_t_rp old_speed_bin_table_t_rcd_t_rp +#define start_if old_start_if +#define start_pattern old_start_pattern +#define start_xsb_offset old_start_xsb_offset +#define sweep_cnt old_sweep_cnt +#define sweep_pattern old_sweep_pattern +#define sys_env_device_rev_get old_sys_env_device_rev_get +#define train_control_element old_train_control_element +#define train_cs_num old_train_cs_num +#define train_dev_num old_train_dev_num +#define train_direction old_train_direction +#define train_edge_compare old_train_edge_compare +#define traine_search_dir old_traine_search_dir +#define train_if_acess old_train_if_acess +#define train_if_id old_train_if_id +#define train_if_select old_train_if_select +#define training_res old_training_res +#define training_result old_training_result +#define training_stage old_training_stage +#define train_init_value old_train_init_value +#define train_number_iterations old_train_number_iterations +#define train_pattern old_train_pattern +#define train_pup_access old_train_pup_access +#define train_pup_num old_train_pup_num +#define train_result_type old_train_result_type +#define train_status old_train_status +#define traintrain_cs_type old_traintrain_cs_type +#define twr_mask_table old_twr_mask_table +#define use_broadcast old_use_broadcast +#define vref_window_size old_vref_window_size +#define vref_window_size_th old_vref_window_size_th +#define window_mem_addr old_window_mem_addr +#define xsb_test_table old_xsb_test_table +#define xsb_validate_type old_xsb_validate_type +#define xsb_validation_base_address old_xsb_validation_base_address + +#endif /* !GLUE_SYMBOL_RENAMES_H */ From patchwork Tue Jun 18 15:34:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Marek_Beh=C3=BAn?= X-Patchwork-Id: 1949272 X-Patchwork-Delegate: sr@denx.de Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256 header.s=k20201202 header.b=VD7URzq0; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=lists.denx.de (client-ip=2a01:238:438b:c500:173d:9f52:ddab:ee01; helo=phobos.denx.de; envelope-from=u-boot-bounces@lists.denx.de; receiver=patchwork.ozlabs.org) Received: from phobos.denx.de (phobos.denx.de [IPv6:2a01:238:438b:c500:173d:9f52:ddab:ee01]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4W3W9S0hGFz20KL for ; Wed, 19 Jun 2024 01:38:04 +1000 (AEST) Received: from h2850616.stratoserver.net (localhost [IPv6:::1]) by phobos.denx.de (Postfix) with ESMTP id C3D8788445; Tue, 18 Jun 2024 17:35:14 +0200 
(CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=u-boot-bounces@lists.denx.de Authentication-Results: phobos.denx.de; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.b="VD7URzq0"; dkim-atps=neutral Received: by phobos.denx.de (Postfix, from userid 109) id 1AC1A8841E; Tue, 18 Jun 2024 17:35:12 +0200 (CEST) X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on phobos.denx.de X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.2 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)) (No client certificate requested) by phobos.denx.de (Postfix) with ESMTPS id 8FE17884BA for ; Tue, 18 Jun 2024 17:35:07 +0200 (CEST) Authentication-Results: phobos.denx.de; dmarc=pass (p=none dis=none) header.from=kernel.org Authentication-Results: phobos.denx.de; spf=pass smtp.mailfrom=kabel@kernel.org Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 688BA6199A; Tue, 18 Jun 2024 15:35:06 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5D04AC4AF49; Tue, 18 Jun 2024 15:35:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718724906; bh=AFCZh/WI0Wi3TrjxkAvDtz65+Zwz9W8qgyvuTIF0sRU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VD7URzq0BEhoVvG1eSdLHEmDBwlGspK4gRHiw5XZMvTM+9Zv1MoNLzT84bRWne+t7 3r/CTusiYJnJ4srJsFz/dUm15UL52h9Hj14jkidVMkpIxeb1UcuuVL8egZ8R/PMHr5 6FHsr+j14KgWaUcy3Ges9X3tOLhPDo2fGe4SZRKNLN/0qVyi5asYKMFfk04EPwGTJl yvGMQyzt/KrUovD8pDockStclDeeKblkrjQ/xAm1kQZQ2GMiJ1L0l3HwDgAS6EoC3x e7Vs+zihytYYSiosLAcH8RMWtRmqFR3AoC2avdpfcSxRh85A4oXQqorkgmysN/Wydv VxHsHqYUPPoow== From: =?utf-8?q?Marek_Beh=C3=BAn?= To: Stefan Roese Cc: u-boot@lists.denx.de, =?utf-8?q?Marek_Beh=C3=BAn?= Subject: [PATCH u-boot-marvell 16/16] arm: mvebu: turris_omnia: Support old DDR3 training Date: Tue, 18 Jun 2024 17:34:39 +0200 Message-ID: <20240618153439.9518-17-kabel@kernel.org> X-Mailer: git-send-email 2.44.2 In-Reply-To: <20240618153439.9518-1-kabel@kernel.org> References: <20240618153439.9518-1-kabel@kernel.org> MIME-Version: 1.0 X-BeenThere: u-boot@lists.denx.de X-Mailman-Version: 2.1.39 Precedence: list List-Id: U-Boot discussion List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: u-boot-bounces@lists.denx.de Sender: "U-Boot" X-Virus-Scanned: clamav-milter 0.103.8 at phobos.denx.de X-Virus-Status: Clean Support old DDR3 training code on Turris Omnia, selectable via EEPROM field. 
Users experiencing DDR3 initialization failures or random crashes of the operating system due to incorrect DDR3 configuration can select the old DDR3 training implementation to fix those issues by setting the EEPROM field "Use old DDR training": eeprom update "Use old DDR training" 1 Signed-off-by: Marek Behún --- arch/arm/mach-mvebu/Kconfig | 1 + board/CZ.NIC/turris_omnia/Makefile | 1 + board/CZ.NIC/turris_omnia/eeprom.c | 37 ++++++++++- board/CZ.NIC/turris_omnia/old_ddr3_training.c | 63 +++++++++++++++++++ board/CZ.NIC/turris_omnia/turris_omnia.c | 18 +++++- 5 files changed, 117 insertions(+), 3 deletions(-) create mode 100644 board/CZ.NIC/turris_omnia/old_ddr3_training.c diff --git a/arch/arm/mach-mvebu/Kconfig b/arch/arm/mach-mvebu/Kconfig index e377e8a48a..4a8328760e 100644 --- a/arch/arm/mach-mvebu/Kconfig +++ b/arch/arm/mach-mvebu/Kconfig @@ -149,6 +149,7 @@ config TARGET_TURRIS_OMNIA select SPL_SYS_MALLOC_SIMPLE select SYS_I2C_MVTWSI select ATSHA204A + select ARMADA_38X_SUPPORT_OLD_DDR3_TRAINING config TARGET_TURRIS_MOX bool "Support CZ.NIC's Turris Mox / RIPE Atlas Probe" diff --git a/board/CZ.NIC/turris_omnia/Makefile b/board/CZ.NIC/turris_omnia/Makefile index 216e11958a..d1ef5cb860 100644 --- a/board/CZ.NIC/turris_omnia/Makefile +++ b/board/CZ.NIC/turris_omnia/Makefile @@ -4,3 +4,4 @@ obj-y := turris_omnia.o ../turris_atsha_otp.o ../turris_common.o obj-$(CONFIG_CMD_EEPROM_LAYOUT) += eeprom.o +obj-$(CONFIG_SPL_BUILD) += old_ddr3_training.o diff --git a/board/CZ.NIC/turris_omnia/eeprom.c b/board/CZ.NIC/turris_omnia/eeprom.c index 32572481ce..6e2640ad2a 100644 --- a/board/CZ.NIC/turris_omnia/eeprom.c +++ b/board/CZ.NIC/turris_omnia/eeprom.c @@ -91,13 +91,48 @@ static int eeprom_field_update_ddr_speed(struct eeprom_field *field, return 0; } +static void eeprom_field_print_bool(const struct eeprom_field *field) +{ + unsigned char val = field->buf[0]; + + printf(PRINT_FIELD_SEGMENT, field->name); + + if (val == 0xff) + puts("(empty, defaults to 0)\n"); + else + printf("%u\n", val); +} + +static int eeprom_field_update_bool(struct eeprom_field *field, char *value) +{ + unsigned char *val = &field->buf[0]; + + if (value[0] == '\0') { + /* setting default value */ + *val = 0xff; + + return 0; + } + + if (value[1] != '\0') + return -1; + + if (value[0] == '1' || value[0] == '0') + *val = value[0] - '0'; + else + return -1; + + return 0; +} + static struct eeprom_field omnia_layout[] = { _DEF_FIELD("Magic constant", 4, bin), _DEF_FIELD("RAM size in GB", 4, ramsz), _DEF_FIELD("Wi-Fi Region", 4, region), _DEF_FIELD("CRC32 checksum", 4, bin), _DEF_FIELD("DDR speed", 5, ddr_speed), - _DEF_FIELD("Extended reserved fields", 39, reserved), + _DEF_FIELD("Use old DDR training", 1, bool), + _DEF_FIELD("Extended reserved fields", 38, reserved), _DEF_FIELD("Extended CRC32 checksum", 4, bin), }; diff --git a/board/CZ.NIC/turris_omnia/old_ddr3_training.c b/board/CZ.NIC/turris_omnia/old_ddr3_training.c new file mode 100644 index 0000000000..cdb3487ad9 --- /dev/null +++ b/board/CZ.NIC/turris_omnia/old_ddr3_training.c @@ -0,0 +1,63 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (C) 2024 Marek Behún + */ + +#include +#include + +#include "../drivers/ddr/marvell/a38x/old/ddr3_init.h" + +static struct hws_topology_map board_topology_map_1g = { + 0x1, /* active interfaces */ + /* cs_mask, mirror, dqs_swap, ck_swap X PUPs */ + { { { {0x1, 0, 0, 0}, + {0x1, 0, 0, 0}, + {0x1, 0, 0, 0}, + {0x1, 0, 0, 0}, + {0x1, 0, 0, 0} }, + SPEED_BIN_DDR_1600K, /* speed_bin */ + BUS_WIDTH_16, /* memory_width */ + 
MEM_4G, /* mem_size */ + DDR_FREQ_800, /* frequency */ + 0, 0, /* cas_l cas_wl */ + HWS_TEMP_NORMAL, /* temperature */ + HWS_TIM_2T} }, /* timing (force 2t) */ + 5, /* Num Of Bus Per Interface*/ + BUS_MASK_32BIT /* Busses mask */ +}; + +static struct hws_topology_map board_topology_map_2g = { + 0x1, /* active interfaces */ + /* cs_mask, mirror, dqs_swap, ck_swap X PUPs */ + { { { {0x1, 0, 0, 0}, + {0x1, 0, 0, 0}, + {0x1, 0, 0, 0}, + {0x1, 0, 0, 0}, + {0x1, 0, 0, 0} }, + SPEED_BIN_DDR_1600K, /* speed_bin */ + BUS_WIDTH_16, /* memory_width */ + MEM_8G, /* mem_size */ + DDR_FREQ_800, /* frequency */ + 0, 0, /* cas_l cas_wl */ + HWS_TEMP_NORMAL, /* temperature */ + HWS_TIM_2T} }, /* timing (force 2t) */ + 5, /* Num Of Bus Per Interface*/ + BUS_MASK_32BIT /* Busses mask */ +}; + +/* defined in turris_omnia.c */ +extern int omnia_get_ram_size_gb(void); + +struct hws_topology_map *ddr3_get_topology_map(void) +{ + if (omnia_get_ram_size_gb() == 2) + return &board_topology_map_2g; + else + return &board_topology_map_1g; +} + +__weak u32 sys_env_get_topology_update_info(struct topology_update_info *tui) +{ + return MV_OK; +} diff --git a/board/CZ.NIC/turris_omnia/turris_omnia.c b/board/CZ.NIC/turris_omnia/turris_omnia.c index 544784e860..2f29d26edf 100644 --- a/board/CZ.NIC/turris_omnia/turris_omnia.c +++ b/board/CZ.NIC/turris_omnia/turris_omnia.c @@ -432,7 +432,8 @@ struct omnia_eeprom { /* second part (only considered if crc2 is not all-ones) */ char ddr_speed[5]; - u8 reserved[39]; + u8 old_ddr_training; + u8 reserved[38]; u32 crc2; }; @@ -496,7 +497,7 @@ static bool omnia_read_eeprom(struct omnia_eeprom *oep) return true; } -static int omnia_get_ram_size_gb(void) +int omnia_get_ram_size_gb(void) { static int ram_size; struct omnia_eeprom oep; @@ -521,6 +522,19 @@ static int omnia_get_ram_size_gb(void) return ram_size; } +bool board_use_old_ddr3_training(void) +{ + struct omnia_eeprom oep; + + if (!omnia_read_eeprom(&oep)) + return false; + + if (!is_omnia_eeprom_second_part_valid(&oep)) + return false; + + return oep.old_ddr_training == 1; +} + static const char *omnia_get_ddr_speed(void) { struct omnia_eeprom oep;