From patchwork Mon May 15 12:49:29 2023
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 1781353
From: Robin Murphy
To: joro@8bytes.org
Cc: iommu@lists.linux.dev, will@kernel.org, jgg@nvidia.com, digetx@gmail.com,
    thierry.reding@gmail.com, linux-tegra@vger.kernel.org
Subject: [PATCH 1/4] iommu/tegra-gart: Add default identity domain support
Date: Mon, 15 May 2023 13:49:29 +0100
Message-Id: <3a243e6e796cea4566e28297da8270482457637c.1684154219.git.robin.murphy@arm.com>

The nature of a GART means that supporting identity domains is as easy
as doing nothing, so bring the Tegra driver into the modern world of
default domains with a trivial implementation. Identity domains are
allowed to exist alongside any explicit domain for the translation
aperture, since they both simply represent regions of the physical
address space with no isolation from each other. As such we'll continue
to do the "wrong" thing with groups to allow that to work, since the
notion of isolation that groups represent is counterproductive to the
GART's established usage model.

Signed-off-by: Robin Murphy
---
 drivers/iommu/tegra-gart.c | 39 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index a482ff838b53..deedb9f0f380 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -111,7 +111,13 @@ static int gart_iommu_attach_dev(struct iommu_domain *domain,
 
 	spin_lock(&gart->dom_lock);
 
-	if (gart->active_domain && gart->active_domain != domain) {
+	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
+		if (dev_iommu_priv_get(dev)) {
+			dev_iommu_priv_set(dev, NULL);
+			if (--gart->active_devices == 0)
+				gart->active_domain = NULL;
+		}
+	} else if (gart->active_domain && gart->active_domain != domain) {
 		ret = -EINVAL;
 	} else if (dev_iommu_priv_get(dev) != domain) {
 		dev_iommu_priv_set(dev, domain);
@@ -124,27 +130,14 @@ static int gart_iommu_attach_dev(struct iommu_domain *domain,
 	return ret;
 }
 
-static void gart_iommu_set_platform_dma(struct device *dev)
-{
-	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
-	struct gart_device *gart = gart_handle;
-
-	spin_lock(&gart->dom_lock);
-
-	if (dev_iommu_priv_get(dev) == domain) {
-		dev_iommu_priv_set(dev, NULL);
-
-		if (--gart->active_devices == 0)
-			gart->active_domain = NULL;
-	}
-
-	spin_unlock(&gart->dom_lock);
-}
-
 static struct iommu_domain *gart_iommu_domain_alloc(unsigned type)
 {
+	static struct iommu_domain identity;
 	struct iommu_domain *domain;
 
+	if (type == IOMMU_DOMAIN_IDENTITY)
+		return &identity;
+
 	if (type != IOMMU_DOMAIN_UNMANAGED)
 		return NULL;
 
@@ -161,7 +154,8 @@ static struct iommu_domain *gart_iommu_domain_alloc(unsigned type)
 static void gart_iommu_domain_free(struct iommu_domain *domain)
 {
 	WARN_ON(gart_handle->active_domain == domain);
-	kfree(domain);
+	if (domain->type != IOMMU_DOMAIN_IDENTITY)
+		kfree(domain);
 }
 
 static inline int __gart_iommu_map(struct gart_device *gart, unsigned long iova,
@@ -246,6 +240,11 @@ static struct iommu_device *gart_iommu_probe_device(struct device *dev)
 	return &gart_handle->iommu;
 }
 
+static int gart_iommu_def_domain_type(struct device *dev)
+{
+	return IOMMU_DOMAIN_IDENTITY;
+}
+
 static int gart_iommu_of_xlate(struct device *dev,
 			       struct of_phandle_args *args)
 {
@@ -270,9 +269,9 @@ static const struct iommu_ops gart_iommu_ops = {
 	.domain_alloc	= gart_iommu_domain_alloc,
 	.probe_device	= gart_iommu_probe_device,
 	.device_group	= generic_device_group,
-	.set_platform_dma_ops = gart_iommu_set_platform_dma,
 	.pgsize_bitmap	= GART_IOMMU_PGSIZES,
 	.of_xlate	= gart_iommu_of_xlate,
+	.def_domain_type = gart_iommu_def_domain_type,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev	= gart_iommu_attach_dev,
 		.map		= gart_iommu_map,
From patchwork Mon May 15 12:49:30 2023
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 1781354
From: Robin Murphy
To: joro@8bytes.org
Cc: iommu@lists.linux.dev, will@kernel.org, jgg@nvidia.com, digetx@gmail.com,
    thierry.reding@gmail.com, linux-tegra@vger.kernel.org
Subject: [PATCH 2/4] iommu/tegra-gart: Improve domain support
Date: Mon, 15 May 2023 13:49:30 +0100
Message-Id: <11f282c5d1db884e9f187cc4639add28faa4ee13.1684154219.git.robin.murphy@arm.com>

Rework the saving of mappings over suspend/resume by moving the data
from the GART instance to the domain itself, and keeping it actively up
to date. This saves having to read everything back out of the hardware
at suspend time, but also lets us make the attach path actually support
the notion of multiple domains that it was already checking for. With
this in place we'll then be ready to tackle the remaining assumptions...

Signed-off-by: Robin Murphy
---
 drivers/iommu/tegra-gart.c | 87 ++++++++++++++++++++++++--------------
 1 file changed, 56 insertions(+), 31 deletions(-)

diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index deedb9f0f380..a74648655dac 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -33,15 +33,19 @@
 /* bitmap of the page sizes currently supported */
 #define GART_IOMMU_PGSIZES	(GART_PAGE_SIZE)
 
+struct gart_domain {
+	struct iommu_domain domain;
+	u32 *savedata;
+};
+
 struct gart_device {
 	void __iomem		*regs;
-	u32			*savedata;
 	unsigned long		iovmm_base;	/* offset to vmm_area start */
 	unsigned long		iovmm_end;	/* offset to vmm_area end */
 	spinlock_t		pte_lock;	/* for pagetable */
 	spinlock_t		dom_lock;	/* for active domain */
 	unsigned int		active_devices;	/* number of active devices */
-	struct iommu_domain	*active_domain;	/* current active domain */
+	struct gart_domain	*active_domain;	/* current active domain */
 	struct iommu_device	iommu;		/* IOMMU Core handle */
 	struct device		*dev;
 };
@@ -62,6 +66,16 @@ static bool gart_debug;
 	     iova < gart->iovmm_end;					\
 	     iova += GART_PAGE_SIZE)
 
+static struct gart_domain *to_gart_domain(const struct iommu_domain *domain)
+{
+	return container_of(domain, struct gart_domain, domain);
+}
+
+static int gart_pte_index(struct gart_device *gart, unsigned long iova)
+{
+	return (iova - gart->iovmm_base) / GART_PAGE_SIZE;
+}
+
 static inline void gart_set_pte(struct gart_device *gart,
 				unsigned long iova, unsigned long pte)
 {
@@ -80,9 +94,10 @@ static inline unsigned long gart_read_pte(struct gart_device *gart,
 	return pte;
 }
 
-static void do_gart_setup(struct gart_device *gart, const u32 *data)
+static void do_gart_setup(struct gart_device *gart)
 {
 	unsigned long iova;
+	const u32 *data = gart->active_domain ? gart->active_domain->savedata : NULL;
 
 	for_each_gart_pte(gart, iova)
 		gart_set_pte(gart, iova, data ? *(data++) : 0);
@@ -107,33 +122,44 @@ static int gart_iommu_attach_dev(struct iommu_domain *domain,
 				 struct device *dev)
 {
 	struct gart_device *gart = gart_handle;
+	struct gart_domain *prev;
 	int ret = 0;
 
 	spin_lock(&gart->dom_lock);
 
+	prev = gart->active_domain;
 	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
 		if (dev_iommu_priv_get(dev)) {
 			dev_iommu_priv_set(dev, NULL);
 			if (--gart->active_devices == 0)
 				gart->active_domain = NULL;
 		}
-	} else if (gart->active_domain && gart->active_domain != domain) {
+	} else if (prev && &prev->domain != domain) {
 		ret = -EINVAL;
 	} else if (dev_iommu_priv_get(dev) != domain) {
 		dev_iommu_priv_set(dev, domain);
-		gart->active_domain = domain;
+		gart->active_domain = to_gart_domain(domain);
 		gart->active_devices++;
 	}
 
 	spin_unlock(&gart->dom_lock);
 
+	/* If the active domain has changed, sync our mappings */
+	if (!ret && prev != gart->active_domain) {
+		spin_lock(&gart->pte_lock);
+		do_gart_setup(gart);
+		spin_unlock(&gart->pte_lock);
+	}
+
 	return ret;
 }
 
 static struct iommu_domain *gart_iommu_domain_alloc(unsigned type)
 {
+	struct gart_device *gart = gart_handle;
 	static struct iommu_domain identity;
-	struct iommu_domain *domain;
+	struct gart_domain *domain;
+	int num_pages;
 
 	if (type == IOMMU_DOMAIN_IDENTITY)
 		return &identity;
@@ -142,18 +168,25 @@ static struct iommu_domain *gart_iommu_domain_alloc(unsigned type)
 		return NULL;
 
 	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
-	if (domain) {
-		domain->geometry.aperture_start = gart_handle->iovmm_base;
-		domain->geometry.aperture_end = gart_handle->iovmm_end - 1;
-		domain->geometry.force_aperture = true;
-	}
+	if (!domain)
+		return NULL;
 
-	return domain;
+	domain->domain.geometry.aperture_start = gart->iovmm_base;
+	domain->domain.geometry.aperture_end = gart->iovmm_end - 1;
+	domain->domain.geometry.force_aperture = true;
+
+	num_pages = (gart->iovmm_end - gart->iovmm_base) / GART_PAGE_SIZE;
+	domain->savedata = vcalloc(num_pages, sizeof(u32));
+	if (!domain->savedata) {
+		kfree(domain);
+		return NULL;
+	}
+	return &domain->domain;
 }
 
 static void gart_iommu_domain_free(struct iommu_domain *domain)
 {
-	WARN_ON(gart_handle->active_domain == domain);
+	WARN_ON(&gart_handle->active_domain->domain == domain);
 	if (domain->type != IOMMU_DOMAIN_IDENTITY)
 		kfree(domain);
 }
@@ -161,12 +194,16 @@ static void gart_iommu_domain_free(struct iommu_domain *domain)
 static inline int __gart_iommu_map(struct gart_device *gart, unsigned long iova,
 				   unsigned long pa)
 {
+	int idx = gart_pte_index(gart, iova);
+	u32 pte = GART_ENTRY_PHYS_ADDR_VALID | pa;
+
 	if (unlikely(gart_debug && gart_pte_valid(gart, iova))) {
 		dev_err(gart->dev, "Page entry is in-use\n");
 		return -EINVAL;
 	}
 
-	gart_set_pte(gart, iova, GART_ENTRY_PHYS_ADDR_VALID | pa);
+	gart->active_domain->savedata[idx] = pte;
+	gart_set_pte(gart, iova, pte);
 
 	return 0;
 }
@@ -190,11 +227,14 @@ static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
 static inline int __gart_iommu_unmap(struct gart_device *gart,
 				     unsigned long iova)
 {
+	int idx = gart_pte_index(gart, iova);
+
 	if (unlikely(gart_debug && !gart_pte_valid(gart, iova))) {
 		dev_err(gart->dev, "Page entry is invalid\n");
 		return -EINVAL;
 	}
 
+	gart->active_domain->savedata[idx] = 0;
 	gart_set_pte(gart, iova, 0);
 
 	return 0;
@@ -285,9 +325,6 @@ static const struct iommu_ops gart_iommu_ops = {
 
 int tegra_gart_suspend(struct gart_device *gart)
 {
-	u32 *data = gart->savedata;
-	unsigned long iova;
-
 	/*
	 * All GART users shall be suspended at this point. Disable
	 * address translation to trap all GART accesses as invalid
@@ -296,15 +333,12 @@ int tegra_gart_suspend(struct gart_device *gart)
 	writel_relaxed(0, gart->regs + GART_CONFIG);
 	FLUSH_GART_REGS(gart);
 
-	for_each_gart_pte(gart, iova)
-		*(data++) = gart_read_pte(gart, iova);
-
 	return 0;
 }
 
 int tegra_gart_resume(struct gart_device *gart)
 {
-	do_gart_setup(gart, gart->savedata);
+	do_gart_setup(gart);
 
 	return 0;
 }
@@ -337,7 +371,7 @@ struct gart_device *tegra_gart_probe(struct device *dev, struct tegra_mc *mc)
 	spin_lock_init(&gart->pte_lock);
 	spin_lock_init(&gart->dom_lock);
 
-	do_gart_setup(gart, NULL);
+	do_gart_setup(gart);
 
 	err = iommu_device_sysfs_add(&gart->iommu, dev, NULL, "gart");
 	if (err)
@@ -347,17 +381,8 @@ struct gart_device *tegra_gart_probe(struct device *dev, struct tegra_mc *mc)
 	if (err)
 		goto remove_sysfs;
 
-	gart->savedata = vmalloc(resource_size(res) / GART_PAGE_SIZE *
-				 sizeof(u32));
-	if (!gart->savedata) {
-		err = -ENOMEM;
-		goto unregister_iommu;
-	}
-
 	return gart;
 
-unregister_iommu:
-	iommu_device_unregister(&gart->iommu);
 remove_sysfs:
 	iommu_device_sysfs_remove(&gart->iommu);
 free_gart:
From patchwork Mon May 15 12:49:31 2023
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 1781355
From: Robin Murphy
To: joro@8bytes.org
Cc: iommu@lists.linux.dev, will@kernel.org, jgg@nvidia.com, digetx@gmail.com,
    thierry.reding@gmail.com, linux-tegra@vger.kernel.org
Subject: [PATCH 3/4] iommu/tegra-gart: Generalise domain support
Date: Mon, 15 May 2023 13:49:31 +0100
Message-Id: <38e4c004c6b3d0120c5dc6739301bb97bb28fcf6.1684154219.git.robin.murphy@arm.com>

Now that we have a clear notion of mappings being owned by a domain,
rather than the GART instance, it's easy enough to support operating on
them independently of whether the domain is currently active. This also
makes the PTE checks for debugging as cheap as testing the gart_debug
option itself, so we may as well just remove the option.

Signed-off-by: Robin Murphy
---
 drivers/iommu/tegra-gart.c | 46 ++++++++++++++------------------------
 1 file changed, 17 insertions(+), 29 deletions(-)

diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index a74648655dac..0a121cbc17b8 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -52,8 +52,6 @@ struct gart_device {
 
 static struct gart_device *gart_handle; /* unique for a system */
 
-static bool gart_debug;
-
 /*
  * Any interaction between any block on PPSB and a block on APB or AHB
  * must have these read-back to ensure the APB/AHB bus transaction is
@@ -83,17 +81,6 @@ static inline void gart_set_pte(struct gart_device *gart,
 	writel_relaxed(pte, gart->regs + GART_ENTRY_DATA);
 }
 
-static inline unsigned long gart_read_pte(struct gart_device *gart,
-					  unsigned long iova)
-{
-	unsigned long pte;
-
-	writel_relaxed(iova, gart->regs + GART_ENTRY_ADDR);
-	pte = readl_relaxed(gart->regs + GART_ENTRY_DATA);
-
-	return pte;
-}
-
 static void do_gart_setup(struct gart_device *gart)
 {
 	unsigned long iova;
@@ -113,9 +100,9 @@ static inline bool gart_iova_range_invalid(struct gart_device *gart,
 		       iova + bytes > gart->iovmm_end);
 }
 
-static inline bool gart_pte_valid(struct gart_device *gart, unsigned long iova)
+static inline bool gart_pte_valid(u32 pte)
 {
-	return !!(gart_read_pte(gart, iova) & GART_ENTRY_PHYS_ADDR_VALID);
+	return pte & GART_ENTRY_PHYS_ADDR_VALID;
 }
 
 static int gart_iommu_attach_dev(struct iommu_domain *domain,
@@ -192,18 +179,19 @@ static void gart_iommu_domain_free(struct iommu_domain *domain)
 }
 
 static inline int __gart_iommu_map(struct gart_device *gart, unsigned long iova,
-				   unsigned long pa)
+				   unsigned long pa, struct gart_domain *domain)
 {
 	int idx = gart_pte_index(gart, iova);
 	u32 pte = GART_ENTRY_PHYS_ADDR_VALID | pa;
 
-	if (unlikely(gart_debug && gart_pte_valid(gart, iova))) {
+	if (unlikely(gart_pte_valid(domain->savedata[idx]))) {
 		dev_err(gart->dev, "Page entry is in-use\n");
 		return -EINVAL;
 	}
 
-	gart->active_domain->savedata[idx] = pte;
-	gart_set_pte(gart, iova, pte);
+	domain->savedata[idx] = pte;
+	if (domain == gart->active_domain)
+		gart_set_pte(gart, iova, pte);
 
 	return 0;
 }
@@ -218,24 +206,26 @@ static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
 		return -EINVAL;
 
 	spin_lock(&gart->pte_lock);
-	ret = __gart_iommu_map(gart, iova, (unsigned long)pa);
+	ret = __gart_iommu_map(gart, iova, (unsigned long)pa, to_gart_domain(domain));
 	spin_unlock(&gart->pte_lock);
 
 	return ret;
 }
 
 static inline int __gart_iommu_unmap(struct gart_device *gart,
-				     unsigned long iova)
+				     unsigned long iova,
+				     struct gart_domain *domain)
 {
 	int idx = gart_pte_index(gart, iova);
 
-	if (unlikely(gart_debug && !gart_pte_valid(gart, iova))) {
+	if (unlikely(!gart_pte_valid(domain->savedata[idx]))) {
 		dev_err(gart->dev, "Page entry is invalid\n");
 		return -EINVAL;
 	}
 
-	gart->active_domain->savedata[idx] = 0;
-	gart_set_pte(gart, iova, 0);
+	domain->savedata[idx] = 0;
+	if (domain == gart->active_domain)
+		gart_set_pte(gart, iova, 0);
 
 	return 0;
 }
@@ -250,7 +240,7 @@ static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 		return 0;
 
 	spin_lock(&gart->pte_lock);
-	err = __gart_iommu_unmap(gart, iova);
+	err = __gart_iommu_unmap(gart, iova, to_gart_domain(domain));
 	spin_unlock(&gart->pte_lock);
 
 	return err ? 0 : bytes;
@@ -261,12 +251,13 @@ static phys_addr_t gart_iommu_iova_to_phys(struct iommu_domain *domain,
 {
 	struct gart_device *gart = gart_handle;
 	unsigned long pte;
+	int idx = gart_pte_index(gart, iova);
 
 	if (gart_iova_range_invalid(gart, iova, GART_PAGE_SIZE))
 		return -EINVAL;
 
 	spin_lock(&gart->pte_lock);
-	pte = gart_read_pte(gart, iova);
+	pte = to_gart_domain(domain)->savedata[idx];
 	spin_unlock(&gart->pte_lock);
 
 	return pte & GART_PAGE_MASK;
@@ -390,6 +381,3 @@ struct gart_device *tegra_gart_probe(struct device *dev, struct tegra_mc *mc)
 
 	return ERR_PTR(err);
 }
-
-module_param(gart_debug, bool, 0644);
-MODULE_PARM_DESC(gart_debug, "Enable GART debugging");
From patchwork Mon May 15 12:49:32 2023
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 1781356
From: Robin Murphy
To: joro@8bytes.org
Cc: iommu@lists.linux.dev, will@kernel.org, jgg@nvidia.com, digetx@gmail.com,
    thierry.reding@gmail.com, linux-tegra@vger.kernel.org
Subject: [PATCH 4/4] iommu: Clean up force_aperture confusion
Date: Mon, 15 May 2023 13:49:32 +0100

It was never entirely clear whether the force_aperture flag was supposed
to be a hardware capability or a software policy. So far things seem to
have leant towards the latter, given that the only drivers not setting
the flag are ones where the aperture is seemingly a whole virtual
address space such that accesses outside it wouldn't be possible, and
the one driver which definitely can't enforce it in hardware *does*
still set the flag.

On reflection, though, it makes little sense for drivers to dictate
usage policy to callers, and the interpretation that a driver setting
the flag might also translate addresses outside the given aperture but
has for some reason chosen not to is not actually a useful one. It seems
a lot more logical to treat the aperture as the absolute limit of what
can be translated, and the flag to indicate what would happen if a
device did try to access an address outside the aperture, i.e. whether
the access would fault or pass through untranslated.

As such, reframe the flag consistent with a hardware capability in the
hope of clearing up the misconception.

Signed-off-by: Robin Murphy
---
 drivers/iommu/dma-iommu.c    | 19 +++++++------------
 drivers/iommu/mtk_iommu_v1.c |  4 ++++
 drivers/iommu/sprd-iommu.c   |  1 +
 drivers/iommu/tegra-gart.c   |  2 +-
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7a9f0b0bddbd..4693b021d54f 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -548,24 +548,19 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
 		return -EINVAL;
 
+	/*
+	 * If the IOMMU only offers a non-isolated GART-style translation
+	 * aperture, just let the device use dma-direct.
+	 */
+	if (!domain->geometry.force_aperture)
+		return -EINVAL;
+
 	iovad = &cookie->iovad;
 
 	/* Use the smallest supported page size for IOVA granularity */
 	order = __ffs(domain->pgsize_bitmap);
 	base_pfn = max_t(unsigned long, 1, base >> order);
 
-	/* Check the domain allows at least some access to the device... */
-	if (domain->geometry.force_aperture) {
-		if (base > domain->geometry.aperture_end ||
-		    limit < domain->geometry.aperture_start) {
-			pr_warn("specified DMA range outside IOMMU capability\n");
-			return -EFAULT;
-		}
-		/* ...then finally give it a kicking to make sure it fits */
-		base_pfn = max_t(unsigned long, base_pfn,
-				 domain->geometry.aperture_start >> order);
-	}
-
 	/* start_pfn is always nonzero for an already-initialised domain */
 	mutex_lock(&cookie->mutex);
 	if (iovad->start_pfn) {
diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
index 8a0a5e5d049f..3ca813919974 100644
--- a/drivers/iommu/mtk_iommu_v1.c
+++ b/drivers/iommu/mtk_iommu_v1.c
@@ -281,6 +281,10 @@ static struct iommu_domain *mtk_iommu_v1_domain_alloc(unsigned type)
 	if (!dom)
 		return NULL;
 
+	dom->domain.geometry.aperture_start = 0;
+	dom->domain.geometry.aperture_end = SZ_4G - 1;
+	dom->domain.geometry.force_aperture = true;
+
 	return &dom->domain;
 }
 
diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
index 39e34fdeccda..eb684d8807ca 100644
--- a/drivers/iommu/sprd-iommu.c
+++ b/drivers/iommu/sprd-iommu.c
@@ -148,6 +148,7 @@ static struct iommu_domain *sprd_iommu_domain_alloc(unsigned int domain_type)
 
 	dom->domain.geometry.aperture_start = 0;
 	dom->domain.geometry.aperture_end = SZ_256M - 1;
+	dom->domain.geometry.force_aperture = true;
 
 	return &dom->domain;
 }
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index 0a121cbc17b8..c221af290798 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -160,7 +160,7 @@ static struct iommu_domain *gart_iommu_domain_alloc(unsigned type)
 
 	domain->domain.geometry.aperture_start = gart->iovmm_base;
 	domain->domain.geometry.aperture_end = gart->iovmm_end - 1;
-	domain->domain.geometry.force_aperture = true;
+	domain->domain.geometry.force_aperture = false;
 
 	num_pages = (gart->iovmm_end - gart->iovmm_base) / GART_PAGE_SIZE;
 	domain->savedata = vcalloc(num_pages, sizeof(u32));