From patchwork Wed Jul 8 11:27:47 2015
X-Patchwork-Submitter: Mikko Perttunen
X-Patchwork-Id: 492858
From: Mikko Perttunen
Subject: [PATCH 4/5] drm/tegra: Support kernel mappings with IOMMU
Date: Wed, 8 Jul 2015 14:27:47 +0300
Message-ID: <1436354868-24309-5-git-send-email-mperttunen@nvidia.com>
In-Reply-To: <1436354868-24309-1-git-send-email-mperttunen@nvidia.com>
References: <1436354868-24309-1-git-send-email-mperttunen@nvidia.com>
X-Mailer: git-send-email 2.1.4
X-Mailing-List: linux-tegra@vger.kernel.org

From: Arto Merilainen

Host1x command buffer patching requires that the buffer object can be
mapped into the kernel address space; however, the recent addition of
IOMMU support did not account for this requirement. As a result, the
host1x engines cannot be used when the IOMMU is enabled.

This patch implements the kmap, kunmap, mmap and munmap functions for
host1x bo objects.

Signed-off-by: Arto Merilainen
Signed-off-by: Mikko Perttunen
---
 drivers/gpu/drm/tegra/gem.c | 34 +++++++++++++++++++++++++++++++---
 1 file changed, 31 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 01e16e1..6aed866 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -2,7 +2,7 @@
  * NVIDIA Tegra DRM GEM helper functions
  *
  * Copyright (C) 2012 Sascha Hauer, Pengutronix
- * Copyright (C) 2013 NVIDIA CORPORATION, All rights reserved.
+ * Copyright (C) 2013-2015 NVIDIA CORPORATION, All rights reserved.
  *
  * Based on the GEM/CMA helpers
  *
@@ -50,23 +50,51 @@ static void *tegra_bo_mmap(struct host1x_bo *bo)
 {
         struct tegra_bo *obj = host1x_to_tegra_bo(bo);
 
-        return obj->vaddr;
+        if (obj->vaddr)
+                return obj->vaddr;
+        else if (obj->gem.import_attach)
+                return dma_buf_vmap(obj->gem.import_attach->dmabuf);
+        else
+                return vmap(obj->pages, obj->num_pages, VM_MAP,
+                            pgprot_writecombine(PAGE_KERNEL));
 }
 
 static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
 {
+        struct tegra_bo *obj = host1x_to_tegra_bo(bo);
+
+        if (obj->vaddr)
+                return;
+        else if (obj->gem.import_attach)
+                dma_buf_vunmap(obj->gem.import_attach->dmabuf, addr);
+        else
+                vunmap(addr);
 }
 
 static void *tegra_bo_kmap(struct host1x_bo *bo, unsigned int page)
 {
         struct tegra_bo *obj = host1x_to_tegra_bo(bo);
 
-        return obj->vaddr + page * PAGE_SIZE;
+        if (obj->vaddr)
+                return obj->vaddr + page * PAGE_SIZE;
+        else if (obj->gem.import_attach)
+                return dma_buf_kmap(obj->gem.import_attach->dmabuf, page);
+        else
+                return vmap(obj->pages + page, 1, VM_MAP,
+                            pgprot_writecombine(PAGE_KERNEL));
 }
 
 static void tegra_bo_kunmap(struct host1x_bo *bo, unsigned int page,
                             void *addr)
 {
+        struct tegra_bo *obj = host1x_to_tegra_bo(bo);
+
+        if (obj->vaddr)
+                return;
+        else if (obj->gem.import_attach)
+                dma_buf_kunmap(obj->gem.import_attach->dmabuf, page, addr);
+        else
+                vunmap(addr);
 }
 
 static struct host1x_bo *tegra_bo_get(struct host1x_bo *bo)
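
For reference, below is a minimal sketch (not part of the patch) of how a
host1x consumer would exercise the kmap/kunmap paths changed above. It
assumes the host1x_bo_kmap()/host1x_bo_kunmap() inline wrappers from
<linux/host1x.h>, which dispatch to the bo's kmap/kunmap ops; the helper
example_patch_word() and its parameters are purely illustrative.

#include <linux/host1x.h>
#include <linux/mm.h>
#include <linux/types.h>

/*
 * Illustrative only: write one 32-bit word into a command buffer page.
 * On the IOMMU allocation path the bo has no permanent kernel mapping,
 * so the kmap op now falls back to vmap()/dma_buf_kmap() instead of
 * relying on obj->vaddr.
 */
static void example_patch_word(struct host1x_bo *cmdbuf,
                               unsigned long offset, u32 value)
{
        unsigned int pagenum = offset >> PAGE_SHIFT;
        u32 *vaddr;

        vaddr = host1x_bo_kmap(cmdbuf, pagenum);
        if (!vaddr)
                return;

        vaddr[(offset & (PAGE_SIZE - 1)) / sizeof(*vaddr)] = value;

        host1x_bo_kunmap(cmdbuf, pagenum, vaddr);
}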