From patchwork Fri May  8 21:04:36 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kelsey Skunberg
X-Patchwork-Id: 1286454
From: Kelsey Skunberg
To: kernel-team@lists.ubuntu.com
Subject: [B][PATCH v2 1/2] UBUNTU: SAUCE: Revert "drm/msm: Use the correct
 dma_sync calls in msm_gem"
Date: Fri, 8 May 2020 15:04:36 -0600
Message-Id: <20200508210437.31193-2-kelsey.skunberg@canonical.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200508210437.31193-1-kelsey.skunberg@canonical.com>
References: <20200508210437.31193-1-kelsey.skunberg@canonical.com>
List-Id: Kernel team discussions
Errors-To: kernel-team-bounces@lists.ubuntu.com

BugLink: https://bugs.launchpad.net/bugs/1877657

This reverts commit 3de433c5b38af49a5fc7602721e2ab5d39f1e69c which is upstream
commit 9f614197c744 ("drm/msm: Use the correct dma_sync calls harder")

Commit contributes to Certification Test failures and should be reverted
until a fix or alternative solution for dma_sync calls in msm_gem can be
applied.

Signed-off-by: Kelsey Skunberg
---
 drivers/gpu/drm/msm/msm_gem.c | 47 ++++-------------------------------
 1 file changed, 5 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index ea59eb5eb556..21502afbcddc 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -43,46 +43,6 @@ static bool use_pages(struct drm_gem_object *obj)
 	return !msm_obj->vram_node;
 }
 
-/*
- * Cache sync.. this is a bit over-complicated, to fit dma-mapping
- * API. Really GPU cache is out of scope here (handled on cmdstream)
- * and all we need to do is invalidate newly allocated pages before
- * mapping to CPU as uncached/writecombine.
- *
- * On top of this, we have the added headache, that depending on
- * display generation, the display's iommu may be wired up to either
- * the toplevel drm device (mdss), or to the mdp sub-node, meaning
- * that here we either have dma-direct or iommu ops.
- *
- * Let this be a cautionary tail of abstraction gone wrong.
- */
-
-static void sync_for_device(struct msm_gem_object *msm_obj)
-{
-	struct device *dev = msm_obj->base.dev->dev;
-
-	if (get_dma_ops(dev)) {
-		dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	} else {
-		dma_map_sg(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	}
-}
-
-static void sync_for_cpu(struct msm_gem_object *msm_obj)
-{
-	struct device *dev = msm_obj->base.dev->dev;
-
-	if (get_dma_ops(dev)) {
-		dma_sync_sg_for_cpu(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	} else {
-		dma_unmap_sg(dev, msm_obj->sgt->sgl,
-			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
-	}
-}
-
 /* allocate pages from VRAM carveout, used when no IOMMU: */
 static struct page **get_pages_vram(struct drm_gem_object *obj, int npages)
 {
@@ -148,7 +108,8 @@ static struct page **get_pages(struct drm_gem_object *obj)
 		 * because display controller, GPU, etc. are not coherent:
 		 */
 		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-			sync_for_device(msm_obj);
+			dma_sync_sg_for_device(dev->dev, msm_obj->sgt->sgl,
+					msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 	}
 
 	return msm_obj->pages;
@@ -177,7 +138,9 @@ static void put_pages(struct drm_gem_object *obj)
 			 * GPU, etc. are not coherent:
 			 */
 			if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-				sync_for_cpu(msm_obj);
+				dma_sync_sg_for_cpu(obj->dev->dev, msm_obj->sgt->sgl,
+						msm_obj->sgt->nents,
+						DMA_BIDIRECTIONAL);
 
 		sg_free_table(msm_obj->sgt);
 		kfree(msm_obj->sgt);