From patchwork Thu Jul 17 23:09:58 2014
X-Patchwork-Submitter: Nishanth Aravamudan
X-Patchwork-Id: 371264
Date: Thu, 17 Jul 2014 16:09:58 -0700
From: Nishanth Aravamudan
To: benh@kernel.crashing.org
Cc: Fenghua Yu, Tony Luck, linux-ia64@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, David Rientjes,
    Joonsoo Kim, linuxppc-dev@lists.ozlabs.org, Jiang Liu, Wanpeng Li
Subject: [RFC 1/2] workqueue: use the nearest NUMA node, not the local one
Message-ID: <20140717230958.GB32660@linux.vnet.ibm.com>
In-Reply-To: <20140717230923.GA32660@linux.vnet.ibm.com>
References: <20140717230923.GA32660@linux.vnet.ibm.com>

In the presence of memoryless nodes, the workqueue code incorrectly
uses cpu_to_node() to determine which node to prefer for memory
allocations: on such systems, cpu_to_node() can return a node that has
no memory at all. cpu_to_mem() should be used instead, as it returns
the nearest NUMA node that actually has memory.
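For illustration only (this is not part of the patch), a minimal sketch
of the distinction, assuming a kernel built with
CONFIG_HAVE_MEMORYLESS_NODES; the helper name is hypothetical:

	#include <linux/topology.h>	/* cpu_to_node(), cpu_to_mem() */
	#include <linux/nodemask.h>	/* node_state(), N_MEMORY */

	/* Hypothetical helper, for illustration only. */
	static int nearest_node_with_memory(int cpu)
	{
		int nid = cpu_to_node(cpu);	/* local node: may be memoryless */

		/*
		 * On a memoryless node, !node_state(nid, N_MEMORY) holds,
		 * so allocations hinted with nid lose locality. cpu_to_mem()
		 * returns the nearest node that does have memory (it is the
		 * identity mapping when CONFIG_HAVE_MEMORYLESS_NODES is off).
		 */
		if (!node_state(nid, N_MEMORY))
			return cpu_to_mem(cpu);

		return nid;
	}

The patch below applies exactly this substitution at the two places
where the workqueue code records a preferred node.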
Signed-off-by: Nishanth Aravamudan

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 35974ac..0bba022 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3547,7 +3547,12 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
 	for_each_node(node) {
 		if (cpumask_subset(pool->attrs->cpumask,
 				   wq_numa_possible_cpumask[node])) {
-			pool->node = node;
+			/*
+			 * We could use local_memory_node(node) here,
+			 * but it is expensive and the following caches
+			 * the same value.
+			 */
+			pool->node = cpu_to_mem(cpumask_first(pool->attrs->cpumask));
 			break;
 		}
 	}
@@ -4921,7 +4926,7 @@ static int __init init_workqueues(void)
 			pool->cpu = cpu;
 			cpumask_copy(pool->attrs->cpumask, cpumask_of(cpu));
 			pool->attrs->nice = std_nice[i++];
-			pool->node = cpu_to_node(cpu);
+			pool->node = cpu_to_mem(cpu);

 			/* alloc pool ID */
 			mutex_lock(&wq_pool_mutex);
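For reference, a simplified sketch of the equivalence the new comment
relies on: the per-cpu numa_mem value that cpu_to_mem() reads is seeded
once at boot with local_memory_node(), so the cached lookup is cheap.
The function name below is hypothetical; in the real kernel this
seeding happens in build_all_zonelists() in mm/page_alloc.c.

	#include <linux/topology.h>	/* set_cpu_numa_mem(), cpu_to_node() */
	#include <linux/mm.h>		/* local_memory_node() */

	/* Hypothetical helper, simplified for illustration. */
	static void seed_numa_mem(int cpu)
	{
		/*
		 * local_memory_node() walks the node's zonelist to find
		 * the first zone with memory -- correct but comparatively
		 * expensive, hence computed once and cached per cpu.
		 */
		set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
	}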