
[v5,1/3] dma: improve DMA zone selection

Message ID 5200f289af1a9b80dfd329b6ed3d54e1d4a02876.1722578375.git.baruch@tkos.co.il (mailing list archive)
State Handled Elsewhere, archived
Series dma: support DMA zone starting above 4GB

Commit Message

Baruch Siach via Linuxppc-dev Aug. 2, 2024, 6:03 a.m. UTC
When the device DMA limit does not fit in the DMA32 zone, the allocation
should use the DMA zone, even when the DMA zone is stricter than needed.

The same goes for devices that can't allocate from the entire normal
zone; limit the allocation to DMA32 in that case.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
---
 kernel/dma/direct.c  | 6 +++---
 kernel/dma/swiotlb.c | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)
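
For illustration, here is a minimal sketch (hypothetical numbers, not from the patch) contrasting the selection policy before and after this change, assuming zone_dma_bits = 30 and DRAM ending at 8 GiB:

#include <linux/gfp.h>
#include <linux/dma-mapping.h>	/* DMA_BIT_MASK() */

/* Hypothetical value for illustration only. */
static const u64 end_of_dram = 8ULL << 30;	/* DRAM ends at 8 GiB */

static gfp_t old_policy(u64 phys_limit)
{
	if (phys_limit <= DMA_BIT_MASK(30))	/* zone_dma_bits = 30 */
		return GFP_DMA;
	if (phys_limit <= DMA_BIT_MASK(32))
		return GFP_DMA32;	/* a 3 GiB limit lands here, yet
					 * ZONE_DMA32 reaches 4 GiB, so the
					 * allocation may exceed the limit */
	return 0;
}

static gfp_t new_policy(u64 phys_limit)
{
	if (phys_limit < DMA_BIT_MASK(32))
		return GFP_DMA;		/* 3 GiB limit: stricter than
					 * needed, but always usable */
	if (phys_limit < end_of_dram)
		return GFP_DMA32;	/* e.g. a 6 GiB limit, where the
					 * old code would have returned 0 */
	return 0;
}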

Comments

kernel test robot Aug. 7, 2024, 12:04 p.m. UTC | #1
Hi Baruch,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 8400291e289ee6b2bf9779ff1c83a291501f017b]

url:    https://github.com/intel-lab-lkp/linux/commits/Baruch-Siach/dma-improve-DMA-zone-selection/20240803-074651
base:   8400291e289ee6b2bf9779ff1c83a291501f017b
patch link:    https://lore.kernel.org/r/5200f289af1a9b80dfd329b6ed3d54e1d4a02876.1722578375.git.baruch%40tkos.co.il
patch subject: [PATCH v5 1/3] dma: improve DMA zone selection
config: i386-randconfig-063-20240807 (https://download.01.org/0day-ci/archive/20240807/202408071931.W1GA8Ee2-lkp@intel.com/config)
compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240807/202408071931.W1GA8Ee2-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202408071931.W1GA8Ee2-lkp@intel.com/

All warnings (new ones prefixed by >>, old ones prefixed by <<):

>> WARNING: modpost: vmlinux: section mismatch in reference: __dma_direct_alloc_pages+0xcc (section: .text.__dma_direct_alloc_pages) -> memblock (section: .init.data)
WARNING: modpost: vmlinux: section mismatch in reference: __dma_direct_alloc_pages+0xd2 (section: .text.__dma_direct_alloc_pages) -> memblock (section: .init.data)
WARNING: modpost: vmlinux: section mismatch in reference: swiotlb_alloc_pool+0xa0 (section: .text.swiotlb_alloc_pool) -> memblock (section: .init.data)
WARNING: modpost: vmlinux: section mismatch in reference: swiotlb_alloc_pool+0xa6 (section: .text.swiotlb_alloc_pool) -> memblock (section: .init.data)
WARNING: modpost: missing MODULE_DESCRIPTION() in kernel/bpf/preload/bpf_preload.o
WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/devfreq/governor_userspace.o
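
For context on the section mismatch warnings: unless CONFIG_ARCH_KEEP_MEMBLOCK is set, most memblock code and data live in init sections that are discarded after boot, so referencing them from regular .text trips modpost. A paraphrased sketch of the mainline annotations (the exact form may vary by kernel version):

/* Paraphrased from include/linux/memblock.h, for illustration. */
#ifdef CONFIG_ARCH_KEEP_MEMBLOCK
#define __init_memblock
#else
#define __init_memblock __meminit	/* resolves to __init when
					 * CONFIG_MEMORY_HOTPLUG is off */
#endif

/* The definition ends up in a discardable init section... */
phys_addr_t __init_memblock memblock_end_of_DRAM(void);

/* ...so calling it from dma_direct_optimal_gfp_mask() or
 * swiotlb_alloc_tlb(), which live in plain .text, is a mismatch. */
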
Robin Murphy Aug. 7, 2024, 1:13 p.m. UTC | #2
On 2024-08-02 7:03 am, Baruch Siach wrote:
> When the device DMA limit does not fit in the DMA32 zone, the allocation
> should use the DMA zone, even when the DMA zone is stricter than needed.
> 
> The same goes for devices that can't allocate from the entire normal
> zone; limit the allocation to DMA32 in that case.

Per the bot report this only works for CONFIG_ARCH_KEEP_MEMBLOCK, 
however the whole concept looks wrong anyway. The logic here is that 
we're only forcing a particular zone if there's *no* chance of the 
higher zone being usable. For example, ignoring offsets for simplicity, 
if we have a 40-bit DMA mask then we *do* want to initially try 
allocating from ZONE_NORMAL even if max_pfn is above 40 bits, since we 
still might get a usable allocation from between 32 and 40 bits, and if 
we don't, then we'll fall back to retrying from the DMA zone(s) anyway.

I'm not sure if the rest of the series functionally depends on this 
change, but I think it would be too restrictive in the general case to 
be justified.

Thanks,
Robin.
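
(The fallback Robin describes is the retry logic in mainline __dma_direct_alloc_pages(); a condensed sketch, not verbatim, with dev, page, node, gfp, size and phys_limit coming from the enclosing function:)

again:
	if (!page)
		page = alloc_pages_node(node, gfp, get_order(size));
	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
		dma_free_contiguous(dev, page, size);
		page = NULL;

		/* first retry one zone lower... */
		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
		    phys_limit < DMA_BIT_MASK(64) &&
		    !(gfp & (GFP_DMA32 | GFP_DMA))) {
			gfp |= GFP_DMA32;
			goto again;
		}

		/* ...then fall all the way back to ZONE_DMA */
		if (IS_ENABLED(CONFIG_ZONE_DMA) && !(gfp & GFP_DMA)) {
			gfp = (gfp & ~GFP_DMA32) | GFP_DMA;
			goto again;
		}
	}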

> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Baruch Siach <baruch@tkos.co.il>
> ---
>   kernel/dma/direct.c  | 6 +++---
>   kernel/dma/swiotlb.c | 4 ++--
>   2 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 4480a3cd92e0..3b4be4ca3b08 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -4,7 +4,7 @@
>    *
>    * DMA operations that map physical memory directly without using an IOMMU.
>    */
> -#include <linux/memblock.h> /* for max_pfn */
> +#include <linux/memblock.h>
>   #include <linux/export.h>
>   #include <linux/mm.h>
>   #include <linux/dma-map-ops.h>
> @@ -59,9 +59,9 @@ static gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 *phys_limit)
>   	 * zones.
>   	 */
>   	*phys_limit = dma_to_phys(dev, dma_limit);
> -	if (*phys_limit <= DMA_BIT_MASK(zone_dma_bits))
> +	if (*phys_limit < DMA_BIT_MASK(32))
>   		return GFP_DMA;
> -	if (*phys_limit <= DMA_BIT_MASK(32))
> +	if (*phys_limit < memblock_end_of_DRAM())
>   		return GFP_DMA32;
>   	return 0;
>   }
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index df68d29740a0..043b0ecd3e8d 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -629,9 +629,9 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
>   	}
>   
>   	gfp &= ~GFP_ZONEMASK;
> -	if (phys_limit <= DMA_BIT_MASK(zone_dma_bits))
> +	if (phys_limit < DMA_BIT_MASK(32))
>   		gfp |= __GFP_DMA;
> -	else if (phys_limit <= DMA_BIT_MASK(32))
> +	else if (phys_limit < memblock_end_of_DRAM())
>   		gfp |= __GFP_DMA32;
>   
>   	while (IS_ERR(page = alloc_dma_pages(gfp, bytes, phys_limit))) {
Catalin Marinas Aug. 7, 2024, 1:58 p.m. UTC | #3
Thanks Robin for having a look.

On Wed, Aug 07, 2024 at 02:13:06PM +0100, Robin Murphy wrote:
> On 2024-08-02 7:03 am, Baruch Siach wrote:
> > When the device DMA limit does not fit in the DMA32 zone, the allocation
> > should use the DMA zone, even when the DMA zone is stricter than needed.
> > 
> > The same goes for devices that can't allocate from the entire normal
> > zone; limit the allocation to DMA32 in that case.
> 
> Per the bot report this only works for CONFIG_ARCH_KEEP_MEMBLOCK,

Yeah, I just noticed.

> however
> the whole concept looks wrong anyway. The logic here is that we're only
> forcing a particular zone if there's *no* chance of the higher zone being
> usable. For example, ignoring offsets for simplicity, if we have a 40-bit
> DMA mask then we *do* want to initially try allocating from ZONE_NORMAL even
> if max_pfn is above 40 bits, since we still might get a usable allocation
> from between 32 and 40 bits, and if we don't, then we'll fall back to
> retrying from the DMA zone(s) anyway.

Ah, I did not read the code further down in __dma_direct_alloc_pages();
it does fall back to a GFP_DMA allocation if !dma_coherent_ok().
Similarly with swiotlb_alloc_tlb(), it keeps retrying until the
allocation fails.

So yes, this patch can be dropped.
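
The swiotlb retry mentioned above is the loop at the end of swiotlb_alloc_tlb(), partly visible at the bottom of the patch hunk; roughly, paraphrasing mainline kernel/dma/swiotlb.c:

	while (IS_ERR(page = alloc_dma_pages(gfp, bytes, phys_limit))) {
		/* keep stepping down to a stricter zone while the
		 * allocation keeps failing */
		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
		    phys_limit < DMA_BIT_MASK(64) &&
		    !(gfp & (__GFP_DMA32 | __GFP_DMA)))
			gfp |= __GFP_DMA32;
		else if (IS_ENABLED(CONFIG_ZONE_DMA) &&
			 !(gfp & __GFP_DMA))
			gfp = (gfp & ~__GFP_DMA32) | __GFP_DMA;
		else
			return NULL;	/* no stricter zone left to try */
	}
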
Petr Tesařík Aug. 7, 2024, 2:12 p.m. UTC | #4
On Wed, 7 Aug 2024 14:58:49 +0100
Catalin Marinas <catalin.marinas@arm.com> wrote:

> Thanks Robin for having a look.
> 
> On Wed, Aug 07, 2024 at 02:13:06PM +0100, Robin Murphy wrote:
> > On 2024-08-02 7:03 am, Baruch Siach wrote:  
> > > When the device DMA limit does not fit in the DMA32 zone, the allocation
> > > should use the DMA zone, even when the DMA zone is stricter than needed.
> > > 
> > > The same goes for devices that can't allocate from the entire normal
> > > zone; limit the allocation to DMA32 in that case.
> > 
> > Per the bot report this only works for CONFIG_ARCH_KEEP_MEMBLOCK,  
> 
> Yeah, I just noticed.
> 
> > however
> > the whole concept looks wrong anyway. The logic here is that we're only
> > forcing a particular zone if there's *no* chance of the higher zone being
> > usable. For example, ignoring offsets for simplicity, if we have a 40-bit
> > DMA mask then we *do* want to initially try allocating from ZONE_NORMAL even
> > if max_pfn is above 40 bits, since we still might get a usable allocation
> > from between 32 and 40 bits, and if we don't, then we'll fall back to
> > retrying from the DMA zone(s) anyway.  
> 
> Ah, I did not read the code further down in __dma_direct_alloc_pages();
> it does fall back to a GFP_DMA allocation if !dma_coherent_ok().
> Similarly with swiotlb_alloc_tlb(), it keeps retrying until the
> allocation fails.

Er, you certainly mean it keeps retrying as long as the allocation
fails.

Yes, that's true. The initial GFP zone flags are a best-effort guess,
and a stricter zone can be chosen in the end. This whole logic tries to
select the highest zone that satisfies the limit, because allocating
some pages and freeing them immediately is wasteful, especially for
high-order allocations.

> So yes, this patch can be dropped.

+1

Petr T
kernel test robot Aug. 7, 2024, 4:40 p.m. UTC | #5
Hi Baruch,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 8400291e289ee6b2bf9779ff1c83a291501f017b]

url:    https://github.com/intel-lab-lkp/linux/commits/Baruch-Siach/dma-improve-DMA-zone-selection/20240803-074651
base:   8400291e289ee6b2bf9779ff1c83a291501f017b
patch link:    https://lore.kernel.org/r/5200f289af1a9b80dfd329b6ed3d54e1d4a02876.1722578375.git.baruch%40tkos.co.il
patch subject: [PATCH v5 1/3] dma: improve DMA zone selection
config: csky-randconfig-001-20240807 (https://download.01.org/0day-ci/archive/20240808/202408080035.rXXbb5Yc-lkp@intel.com/config)
compiler: csky-linux-gcc (GCC) 14.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240808/202408080035.rXXbb5Yc-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202408080035.rXXbb5Yc-lkp@intel.com/

All warnings (new ones prefixed by >>, old ones prefixed by <<):

WARNING: modpost: vmlinux: section mismatch in reference: dma_direct_optimal_gfp_mask+0x46 (section: .text) -> memblock_end_of_DRAM (section: .init.text)
>> WARNING: modpost: vmlinux: section mismatch in reference: sg_page.isra.0+0x1c (section: .text) -> memblock_end_of_DRAM (section: .init.text)
WARNING: modpost: missing MODULE_DESCRIPTION() in kernel/locking/test-ww_mutex.o
WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/clk/imx/clk-imxrt1050.o
WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/devfreq/governor_performance.o

Patch

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 4480a3cd92e0..3b4be4ca3b08 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -4,7 +4,7 @@ 
  *
  * DMA operations that map physical memory directly without using an IOMMU.
  */
-#include <linux/memblock.h> /* for max_pfn */
+#include <linux/memblock.h>
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/dma-map-ops.h>
@@ -59,9 +59,9 @@  static gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 *phys_limit)
 	 * zones.
 	 */
 	*phys_limit = dma_to_phys(dev, dma_limit);
-	if (*phys_limit <= DMA_BIT_MASK(zone_dma_bits))
+	if (*phys_limit < DMA_BIT_MASK(32))
 		return GFP_DMA;
-	if (*phys_limit <= DMA_BIT_MASK(32))
+	if (*phys_limit < memblock_end_of_DRAM())
 		return GFP_DMA32;
 	return 0;
 }
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index df68d29740a0..043b0ecd3e8d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -629,9 +629,9 @@  static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
 	}
 
 	gfp &= ~GFP_ZONEMASK;
-	if (phys_limit <= DMA_BIT_MASK(zone_dma_bits))
+	if (phys_limit < DMA_BIT_MASK(32))
 		gfp |= __GFP_DMA;
-	else if (phys_limit <= DMA_BIT_MASK(32))
+	else if (phys_limit < memblock_end_of_DRAM())
 		gfp |= __GFP_DMA32;
 
 	while (IS_ERR(page = alloc_dma_pages(gfp, bytes, phys_limit))) {