From patchwork Thu Jan 12 15:37:17 2017
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 714503
X-Patchwork-Delegate: davem@davemloft.net
From: Michal Hocko
To: Andrew Morton
Cc: Vlastimil Babka, David Rientjes, Mel Gorman, Johannes Weiner, Al Viro,
 LKML, Michal Hocko, Eric Dumazet, netdev@vger.kernel.org
Subject: [RFC PATCH 6/6] net: use kvmalloc with __GFP_REPEAT rather than open coded variant
Date: Thu, 12 Jan 2017 16:37:17 +0100
Message-Id: <20170112153717.28943-7-mhocko@kernel.org>
In-Reply-To: <20170112153717.28943-1-mhocko@kernel.org>
References: <20170112153717.28943-1-mhocko@kernel.org>

From: Michal Hocko

fq_alloc_node, alloc_netdev_mqs and netif_alloc* open code kmalloc with a
vmalloc fallback. Use the kvmalloc variant instead. Keep the __GFP_REPEAT
flag, based on the explanation from Eric:

"
At the time, tests on the hardware I had in my labs showed that vmalloc()
could deliver pages spread all over the memory and that was a small penalty
(once memory is fragmented enough, not at boot time)
"

The way the code is constructed, however, means that we prefer to go and
hit the OOM killer before we fall back to vmalloc for requests smaller
than 64kB. This is rather disruptive for something that can be achieved
with the fallback. On the other hand, __GFP_REPEAT doesn't have any useful
semantic for these requests. So the effect of this patch is that requests
smaller than 64kB now fall back to vmalloc more easily; two short sketches
after the diff illustrate the call-site change and why the fallback now
triggers.
Cc: Eric Dumazet
Cc: netdev@vger.kernel.org
Signed-off-by: Michal Hocko
---
 net/core/dev.c     | 24 +++++++++---------------
 net/sched/sch_fq.c | 12 +-----------
 2 files changed, 10 insertions(+), 26 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 56818f7eab2b..5cf2762387aa 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7111,12 +7111,10 @@ static int netif_alloc_rx_queues(struct net_device *dev)
 
 	BUG_ON(count < 1);
 
-	rx = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
-	if (!rx) {
-		rx = vzalloc(sz);
-		if (!rx)
-			return -ENOMEM;
-	}
+	rx = kvzalloc(sz, GFP_KERNEL | __GFP_REPEAT);
+	if (!rx)
+		return -ENOMEM;
+
 	dev->_rx = rx;
 
 	for (i = 0; i < count; i++)
@@ -7153,12 +7151,10 @@ static int netif_alloc_netdev_queues(struct net_device *dev)
 	if (count < 1 || count > 0xffff)
 		return -EINVAL;
 
-	tx = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
-	if (!tx) {
-		tx = vzalloc(sz);
-		if (!tx)
-			return -ENOMEM;
-	}
+	tx = kvzalloc(sz, GFP_KERNEL | __GFP_REPEAT);
+	if (!tx)
+		return -ENOMEM;
+
 	dev->_tx = tx;
 
 	netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
@@ -7691,9 +7687,7 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name,
 	/* ensure 32-byte alignment of whole construct */
 	alloc_size += NETDEV_ALIGN - 1;
 
-	p = kzalloc(alloc_size, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
-	if (!p)
-		p = vzalloc(alloc_size);
+	p = kvzalloc(alloc_size, GFP_KERNEL | __GFP_REPEAT);
 	if (!p)
 		return NULL;
 
diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index a4f738ac7728..594f77d89f6c 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -624,16 +624,6 @@ static void fq_rehash(struct fq_sched_data *q,
 	q->stat_gc_flows += fcnt;
 }
 
-static void *fq_alloc_node(size_t sz, int node)
-{
-	void *ptr;
-
-	ptr = kmalloc_node(sz, GFP_KERNEL | __GFP_REPEAT | __GFP_NOWARN, node);
-	if (!ptr)
-		ptr = vmalloc_node(sz, node);
-	return ptr;
-}
-
 static void fq_free(void *addr)
 {
 	kvfree(addr);
@@ -650,7 +640,7 @@ static int fq_resize(struct Qdisc *sch, u32 log)
 		return 0;
 
 	/* If XPS was setup, we can allocate memory on right NUMA node */
-	array = fq_alloc_node(sizeof(struct rb_root) << log,
+	array = kvmalloc_node(sizeof(struct rb_root) << log, GFP_KERNEL | __GFP_REPEAT,
 			      netdev_queue_numa_node_read(sch->dev_queue));
 	if (!array)
 		return -ENOMEM;
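
First sketch: the shape of the open-coded pattern all three call sites
shared, next to its kvzalloc() replacement. This is a minimal illustrative
sketch in kernel style (variable names borrowed from
netif_alloc_rx_queues()), not code lifted verbatim from the tree:

	/* before: try physically contiguous memory, fall back by hand */
	rx = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
	if (!rx) {
		rx = vzalloc(sz);
		if (!rx)
			return -ENOMEM;
	}

	/*
	 * after: kvzalloc() does the kmalloc attempt and the vmalloc
	 * fallback internally; the result is released with kvfree(),
	 * which handles both kmalloc'ed and vmalloc'ed pointers
	 */
	rx = kvzalloc(sz, GFP_KERNEL | __GFP_REPEAT);
	if (!rx)
		return -ENOMEM;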
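
Second sketch: why requests smaller than 64kB used to hit the OOM killer
first. kmalloc rounds a request up to a power-of-two slab, and
__GFP_REPEAT only changes the page allocator's behaviour for "costly"
orders, i.e. order > PAGE_ALLOC_COSTLY_ORDER (3), which with 4kB pages
means 64kB and larger. Below that the allocator already retries hard
enough to invoke the OOM killer, so the hand-rolled vmalloc fallback was
rarely reached. The sketch below shows how a kvmalloc-style helper avoids
that; it is illustrative only (the function name is made up and the real
mm/ implementation differs in details):

	static void *kvmalloc_node_sketch(size_t size, gfp_t flags, int node)
	{
		gfp_t kmalloc_flags = flags | __GFP_NOWARN;
		void *ret;

		/*
		 * Do not retry hard where __GFP_REPEAT buys nothing, so a
		 * failed kmalloc falls through to vmalloc instead of OOM.
		 */
		if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER) ||
		    !(flags & __GFP_REPEAT))
			kmalloc_flags |= __GFP_NORETRY;

		ret = kmalloc_node(size, kmalloc_flags, node);
		if (!ret)
			ret = vmalloc_node(size, node);
		return ret;
	}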