From patchwork Wed Oct 13 06:49:55 2021
X-Patchwork-Submitter: Khalid Elmously
X-Patchwork-Id: 1540240
From: Khalid Elmously
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 01/13] swiotlb: remove the tbl_dma_addr argument to swiotlb_tbl_map_single
Date: Wed, 13 Oct 2021 02:49:55 -0400
Message-Id: <20211013065007.1302-2-khalid.elmously@canonical.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211013065007.1302-1-khalid.elmously@canonical.com>
References: <20211013065007.1302-1-khalid.elmously@canonical.com>

From: Christoph Hellwig

BugLink: https://bugs.launchpad.net/bugs/1943902

The tbl_dma_addr argument is used to check the DMA boundary for the
allocations, and thus needs to be a dma_addr_t. swiotlb-xen instead
passed a physical address, which could lead to incorrect results for
strange offsets. Fix this by removing the parameter entirely and
hard-coding the DMA address for io_tlb_start instead.

Fixes: 91ffe4ad534a ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
Signed-off-by: Konrad Rzeszutek Wilk
(backported from commit fc0021aa340af65a0a37d77be39e22aa886a6132)
[ kmously: the variable names passed to swiotlb_tbl_map_single() were
  different and needed manual adjusting.
  Also, the drivers/iommu/intel/iommu.c changes were applied to
  drivers/iommu/intel-iommu.c in this tree ]
Signed-off-by: Khalid Elmously
---
 drivers/iommu/intel-iommu.c |  1 -
 drivers/xen/swiotlb-xen.c   |  3 +--
 include/linux/swiotlb.h     | 10 +++-------
 kernel/dma/swiotlb.c        | 14 +++++---------
 4 files changed, 9 insertions(+), 19 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index ebb874fe6dbb4..2a1409e380485 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3872,7 +3872,6 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
 	 */
 	if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
 		tlb_addr = swiotlb_tbl_map_single(dev,
-				__phys_to_dma(dev, io_tlb_start),
 				paddr, size, aligned_size, dir, attrs);
 		if (tlb_addr == DMA_MAPPING_ERROR) {
 			goto swiotlb_error;
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 06346422f7432..32b92b733da1f 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -392,8 +392,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 */
 	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
 
-	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys,
-				     size, size, dir, attrs);
+	map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
 	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 0a8fced6aaec4..16361efb7ac90 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -45,13 +45,9 @@ enum dma_sync_target {
 	SYNC_FOR_DEVICE = 1,
 };
 
-extern phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
-					  dma_addr_t tbl_dma_addr,
-					  phys_addr_t phys,
-					  size_t mapping_size,
-					  size_t alloc_size,
-					  enum dma_data_direction dir,
-					  unsigned long attrs);
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
+		size_t mapping_size, size_t alloc_size,
+		enum dma_data_direction dir, unsigned long attrs);
 
 extern void swiotlb_tbl_unmap_single(struct device *hwdev,
 				     phys_addr_t tlb_addr,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f99b79d7e1235..4a22111a0183d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -446,14 +446,11 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 	}
 }
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
-				   dma_addr_t tbl_dma_addr,
-				   phys_addr_t orig_addr,
-				   size_t mapping_size,
-				   size_t alloc_size,
-				   enum dma_data_direction dir,
-				   unsigned long attrs)
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+		size_t mapping_size, size_t alloc_size,
+		enum dma_data_direction dir, unsigned long attrs)
 {
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
 	unsigned long flags;
 	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
@@ -675,8 +672,7 @@ bool swiotlb_map(struct device *dev, phys_addr_t *phys, dma_addr_t *dma_addr,
 	}
 
 	/* Oh well, have to allocate and map a bounce buffer. */
-	*phys = swiotlb_tbl_map_single(dev, __phys_to_dma(dev, io_tlb_start),
-			*phys, size, size, dir, attrs);
+	*phys = swiotlb_tbl_map_single(dev, *phys, size, size, dir, attrs);
 	if (*phys == (phys_addr_t)DMA_MAPPING_ERROR)
 		return false;
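
As context for reviewers, the sketch below is a small userspace model (not
kernel code, and not part of this patch) of why the boundary arithmetic wants
a dma_addr_t rather than a phys_addr_t: when the physical-to-DMA translation
applies an offset, the two address spaces disagree about where device segment
boundaries fall. DMA_OFFSET, BOUNDARY_MASK, phys_to_dma() and
boundary_segment() here are illustrative assumptions, not the kernel's actual
definitions.

/*
 * Toy userspace model of the bug described above -- NOT kernel code.
 * The offset, mask and helper names are illustrative assumptions.
 */
#include <inttypes.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;
typedef uint64_t dma_addr_t;

#define DMA_OFFSET    0x40000000ULL  /* assumed bus offset: dma = phys + 1 GiB */
#define BOUNDARY_MASK 0x00ffffffULL  /* assumed 16 MiB device segment boundary */

/* Assumed translation with a non-zero offset, as on some platforms. */
static dma_addr_t phys_to_dma(phys_addr_t paddr)
{
	return paddr + DMA_OFFSET;
}

/* Which 16 MiB segment an address falls into, from the device's view. */
static uint64_t boundary_segment(uint64_t addr)
{
	return (addr & ~BOUNDARY_MASK) >> 24;
}

int main(void)
{
	/* Hypothetical bounce-buffer start, sitting just below a boundary. */
	phys_addr_t io_tlb_start = 0x00ff0000ULL;

	/* Buggy callers passed a physical address for the boundary check... */
	uint64_t seg_from_phys = boundary_segment(io_tlb_start);
	/* ...but the device sees the DMA address, which lands elsewhere. */
	uint64_t seg_from_dma = boundary_segment(phys_to_dma(io_tlb_start));

	printf("segment from phys addr: %" PRIu64 "\n", seg_from_phys);
	printf("segment from dma  addr: %" PRIu64 "\n", seg_from_dma);
	return 0;
}

With the patch, the boundary address is derived inside
swiotlb_tbl_map_single() from io_tlb_start itself, so callers can no longer
hand it an address from the wrong space.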