From patchwork Thu Jan 26 14:13:49 2017
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 720188
X-Patchwork-Delegate: davem@davemloft.net
Date: Thu, 26 Jan 2017 15:13:49 +0100
From: Michal Hocko
To: Daniel Borkmann
Cc: Alexei Starovoitov, Andrew Morton, Vlastimil Babka, Mel Gorman,
 Johannes Weiner, linux-mm, LKML, "netdev@vger.kernel.org",
 marcelo.leitner@gmail.com
Subject: Re: [PATCH 0/6 v3] kvmalloc
Message-ID: <20170126141349.GN6590@dhcp22.suse.cz>
In-Reply-To: <20170126134004.GM6590@dhcp22.suse.cz>
X-Mailing-List: netdev@vger.kernel.org

On Thu 26-01-17 14:40:04, Michal Hocko wrote:
> On Thu 26-01-17 14:10:06, Daniel Borkmann wrote:
> > On 01/26/2017 12:58 PM, Michal Hocko wrote:
> > > On Thu 26-01-17 12:33:55, Daniel Borkmann wrote:
> > > > On 01/26/2017 11:08 AM, Michal Hocko wrote:
> > > [...]
> > > > > If you disagree I can drop the bpf part of course...
> > > >
> > > > If we could consolidate these spots with kvmalloc() eventually, I'm
> > > > all for it. But even if __GFP_NORETRY is not covered down to all
> > > > possible paths, it kind of does have an effect already of saying
> > > > 'don't try too hard', so would it be harmful to still keep that for
> > > > now? If it's not, I'd personally prefer to just leave it as is until
> > > > there's some form of support by kvmalloc() and friends.
> > >
> > > Well, you can use kvmalloc(size, GFP_KERNEL|__GFP_NORETRY). It is not
> > > disallowed. It is not _supported_, which means that if it doesn't work
> > > as you expect you are on your own. Which is actually the situation
> > > right now as well. But I still think that this is just not the right
> > > thing to do. Even though it might happen to work in some cases, it
> > > gives a false impression of a solution. So I would rather go with
> >
> > Hmm. 'On my own' means we could potentially BUG somewhere down the
> > vmalloc implementation, etc., presumably? So it might in fact be
> > harmful to pass that, right?
>
> No, it would mean that it might eventually hit the behavior which you
> are trying to avoid - in other words, it may invoke the OOM killer even
> though __GFP_NORETRY means giving up before any system-wide disruptive
> actions are taken.

I will separate both the bpf and netfilter hunks into their own patch
with the clarification. Does the following look better?
---
From ab6b2d724228e4abcc69c44f5ab1ce91009aa91d Mon Sep 17 00:00:00 2001
From: Michal Hocko
Date: Thu, 26 Jan 2017 14:59:21 +0100
Subject: [PATCH] net, bpf: use kvzalloc helper

Both bpf_map_area_alloc and xt_alloc_table_info try really hard to play
nicely with large memory requests which can be triggered from userspace
(by an admin).
See 5bad87348c70 ("netfilter: x_tables: avoid warn and OOM killer on
vmalloc call") and d407bd25a204 ("bpf: don't trigger OOM killer under
pressure with map alloc") respectively. The current allocation pattern
strongly resembles the kvmalloc helper, except for one thing:
__GFP_NORETRY is not used for the vmalloc fallback. The main reason why
kvmalloc doesn't really support __GFP_NORETRY is that vmalloc doesn't
support this flag properly, and it is far from straightforward to make
it understand it, because there are some hard-coded GFP_KERNEL
allocations deep in the call chains. This patch simply replaces the
open-coded variants with kvmalloc and adds a note to push on MM people
to support __GFP_NORETRY in kvmalloc if this turns out to be really
needed, along with an OOM report pointing at vmalloc. If there is an
immediate need and no full support yet, then
kvmalloc(size, gfp | __GFP_NORETRY) will work as well as
__vmalloc(gfp | __GFP_NORETRY) - in other words, it might trigger the
OOM killer in some cases.

Cc: Daniel Borkmann
Cc: Alexei Starovoitov
Cc: Andrey Konovalov
Cc: Marcelo Ricardo Leitner
Cc: Pablo Neira Ayuso
Signed-off-by: Michal Hocko
---
 kernel/bpf/syscall.c     | 19 +++++--------------
 net/netfilter/x_tables.c | 16 ++++++----------
 2 files changed, 11 insertions(+), 24 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 19b6129eab23..a6dc4d596f14 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -53,21 +53,12 @@ void bpf_register_map_type(struct bpf_map_type_list *tl)
 
 void *bpf_map_area_alloc(size_t size)
 {
-	/* We definitely need __GFP_NORETRY, so OOM killer doesn't
-	 * trigger under memory pressure as we really just want to
-	 * fail instead.
+	/*
+	 * FIXME: we would really like to not trigger the OOM killer and rather
+	 * fail instead. This is not supported right now. Please nag MM people
+	 * if these OOM start bothering people.
 	 */
-	const gfp_t flags = __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO;
-	void *area;
-
-	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
-		area = kmalloc(size, GFP_USER | flags);
-		if (area != NULL)
-			return area;
-	}
-
-	return __vmalloc(size, GFP_KERNEL | __GFP_HIGHMEM | flags,
-			 PAGE_KERNEL);
+	return kvzalloc(size, GFP_USER);
 }
 
 void bpf_map_area_free(void *area)
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index d529989f5791..ba8ba633da72 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -995,16 +995,12 @@ struct xt_table_info *xt_alloc_table_info(unsigned int size)
 	if ((SMP_ALIGN(size) >> PAGE_SHIFT) + 2 > totalram_pages)
 		return NULL;
 
-	if (sz <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
-		info = kmalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
-	if (!info) {
-		info = __vmalloc(sz, GFP_KERNEL | __GFP_NOWARN |
-				 __GFP_NORETRY | __GFP_HIGHMEM,
-				 PAGE_KERNEL);
-		if (!info)
-			return NULL;
-	}
-	memset(info, 0, sizeof(*info));
+	/*
+	 * FIXME: we would really like to not trigger the OOM killer and rather
+	 * fail instead. This is not supported right now. Please nag MM people
+	 * if these OOM start bothering people.
+	 */
+	info = kvzalloc(sz, GFP_KERNEL);
 	info->size = size;
 	return info;
 }