From patchwork Tue Jun 4 21:00:09 2019
X-Patchwork-Submitter: dann frazier
X-Patchwork-Id: 1110160
From: dann frazier
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 3/4][Unstable][SRU Disco] dma-contiguous: use fallback alloc_pages for single pages
Date: Tue, 4 Jun 2019 15:00:09 -0600
Message-Id: <20190604210010.7405-4-dann.frazier@canonical.com>
In-Reply-To: <20190604210010.7405-1-dann.frazier@canonical.com>
References: <20190604210010.7405-1-dann.frazier@canonical.com>

From: Nicolin Chen

BugLink: https://bugs.launchpad.net/bugs/1823753

The addresses within a single page are always contiguous, so there is
no real need to allocate single pages from the CMA area.

Since the CMA area has a limited, predefined size, it may run out of
space under heavy use, where a large number of CMA pages may end up
being allocated for single pages.

However, there is also a concern that a device might care where a page
comes from -- it might expect a page from its CMA area and act
differently if the page comes from somewhere else.

This patch uses the fallback alloc_pages path instead of single-page
allocations from the global CMA area when a device does not have its
own CMA area. This saves space in the global CMA area for larger CMA
allocations, and also reduces the CMA fragmentation that results from
trivial allocations.

Signed-off-by: Nicolin Chen
Tested-by: dann frazier
Signed-off-by: Christoph Hellwig
(cherry picked from commit bd2e75633c8012fc8a7431c82fda66237133bf7e linux-next)
Signed-off-by: dann frazier
---
 kernel/dma/contiguous.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 637b120d647b5..bfc0c17f2a3d4 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -223,14 +223,23 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
  * This function allocates contiguous memory buffer for specified device. It
  * first tries to use device specific contiguous memory area if available or
  * the default global one, then tries a fallback allocation of normal pages.
+ *
+ * Note that it bypasses one-page allocations from the global area, as
+ * the addresses within one page are always contiguous, so there is no need
+ * to waste CMA pages on them; this also helps reduce fragmentation.
  */
 struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
 {
 	int node = dev ? dev_to_node(dev) : NUMA_NO_NODE;
 	size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	size_t align = get_order(PAGE_ALIGN(size));
-	struct cma *cma = dev_get_cma_area(dev);
 	struct page *page = NULL;
+	struct cma *cma = NULL;
+
+	if (dev && dev->cma_area)
+		cma = dev->cma_area;
+	else if (count > 1)
+		cma = dma_contiguous_default_area;
 
 	/* CMA can be used only in the context which permits sleeping */
 	if (cma && gfpflags_allow_blocking(gfp)) {
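
For reviewers, here is a rough sketch of the complete function after this
change. The hunk above ends inside the function body; the cma_alloc() call
and the alloc_pages_node() fallback shown below are the pre-existing code
that follows the hunk, reproduced here as an assumption based on the
upstream tree around commit bd2e75633c8012 -- a simplified reading, not
guaranteed verbatim kernel source:

struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
{
	int node = dev ? dev_to_node(dev) : NUMA_NO_NODE;
	size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	size_t align = get_order(PAGE_ALIGN(size));
	struct page *page = NULL;
	struct cma *cma = NULL;

	/* Prefer the device's own CMA area when it has one... */
	if (dev && dev->cma_area)
		cma = dev->cma_area;
	/* ...but only consult the global CMA area for multi-page requests. */
	else if (count > 1)
		cma = dma_contiguous_default_area;

	/* CMA allocations can sleep, so require a blocking context. */
	if (cma && gfpflags_allow_blocking(gfp)) {
		size_t cma_align = min_t(size_t, align, CONFIG_CMA_ALIGNMENT);

		page = cma_alloc(cma, count, cma_align, gfp & __GFP_NOWARN);
	}

	/* Single-page requests (and failed CMA attempts) fall back to the
	 * normal page allocator.
	 */
	if (!page)
		page = alloc_pages_node(node, gfp, align);

	return page;
}

The key point is that dma_contiguous_default_area is only consulted when
count > 1, while a device-specific dev->cma_area is still honored even for
single pages -- which addresses the concern above about devices that care
where their pages come from.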