From patchwork Fri Mar 27 10:07:27 2015
X-Patchwork-Submitter: Thierry Reding
X-Patchwork-Id: 455364
From: Thierry Reding <thierry.reding@gmail.com>
To: Joerg Roedel
Cc: Stephen Warren, Alexandre Courbot, iommu@lists.linux-foundation.org,
 linux-tegra@vger.kernel.org, Hiroshi Doyu
Subject: [PATCH 4/4] iommu/tegra: smmu: Compute PFN mask at runtime
Date: Fri, 27 Mar 2015 11:07:27 +0100
Message-Id: <1427450847-17719-4-git-send-email-thierry.reding@gmail.com>
X-Mailer: git-send-email 2.3.2
In-Reply-To: <1427450847-17719-1-git-send-email-thierry.reding@gmail.com>
References: <1427450847-17719-1-git-send-email-thierry.reding@gmail.com>
X-Mailing-List: linux-tegra@vger.kernel.org

From: Thierry Reding <thierry.reding@gmail.com>

The SMMU on Tegra30 and Tegra114 supports addressing up to 4 GiB of
physical memory. On Tegra124 the addressable physical memory was
extended to 16 GiB. The page frame number stored in PTEs therefore
requires 20 or 22 bits, depending on the SoC generation. In order to
cope with this, compute the proper value at runtime.

Reported-by: Joseph Lo
Cc: Hiroshi Doyu
Signed-off-by: Thierry Reding <thierry.reding@gmail.com>
---
 drivers/iommu/tegra-smmu.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 7b36760ae087..f0496e34ada3 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -6,6 +6,7 @@
  * published by the Free Software Foundation.
  */
 
+#include <linux/bitops.h>
 #include <linux/err.h>
 #include <linux/iommu.h>
 #include <linux/kernel.h>
@@ -25,6 +26,8 @@ struct tegra_smmu {
 	struct tegra_mc *mc;
 	const struct tegra_smmu_soc *soc;
 
+	unsigned long pfn_mask;
+
 	unsigned long *asids;
 	struct mutex lock;
 
@@ -108,8 +111,6 @@ static inline u32 smmu_readl(struct tegra_smmu *smmu, unsigned long offset)
 #define SMMU_PDE_SHIFT 22
 #define SMMU_PTE_SHIFT 12
 
-#define SMMU_PFN_MASK 0x000fffff
-
 #define SMMU_PD_READABLE (1 << 31)
 #define SMMU_PD_WRITABLE (1 << 30)
 #define SMMU_PD_NONSECURE (1 << 29)
@@ -489,7 +490,7 @@ static u32 *as_get_pte(struct tegra_smmu_as *as, dma_addr_t iova,
 		smmu_flush_tlb_section(smmu, as->id, iova);
 		smmu_flush(smmu);
 	} else {
-		page = pfn_to_page(pd[pde] & SMMU_PFN_MASK);
+		page = pfn_to_page(pd[pde] & smmu->pfn_mask);
 		pt = page_address(page);
 	}
 
@@ -511,7 +512,7 @@ static void as_put_pte(struct tegra_smmu_as *as, dma_addr_t iova)
 	u32 *pd = page_address(as->pd), *pt;
 	struct page *page;
 
-	page = pfn_to_page(pd[pde] & SMMU_PFN_MASK);
+	page = pfn_to_page(pd[pde] & as->smmu->pfn_mask);
 	pt = page_address(page);
 
 	/*
@@ -586,7 +587,7 @@ static phys_addr_t tegra_smmu_iova_to_phys(struct iommu_domain *domain,
 	u32 *pte;
 
 	pte = as_get_pte(as, iova, &page);
-	pfn = *pte & SMMU_PFN_MASK;
+	pfn = *pte & as->smmu->pfn_mask;
 
 	return PFN_PHYS(pfn);
 }
@@ -807,6 +808,10 @@ struct tegra_smmu *tegra_smmu_probe(struct device *dev,
 	smmu->dev = dev;
 	smmu->mc = mc;
 
+	smmu->pfn_mask = BIT_MASK(mc->soc->num_address_bits - PAGE_SHIFT) - 1;
+	dev_dbg(dev, "address bits: %u, PFN mask: %#lx\n",
+		mc->soc->num_address_bits, smmu->pfn_mask);
+
 	value = SMMU_PTC_CONFIG_ENABLE | SMMU_PTC_CONFIG_INDEX_MAP(0x3f);
 
 	if (soc->supports_request_limit)