Date: Mon, 30 Nov 2015 13:54:21 +0300
From: Vladimir Davydov
To: Johannes Weiner
Cc: Andrew Morton, David Miller, Michal Hocko, Tejun Heo, Eric Dumazet
Subject: Re: [PATCH 12/13] mm: memcontrol: account socket memory in unified hierarchy memory controller
Message-ID: <20151130105421.GA24704@esperanza>
In-Reply-To: <20151124215844.GA1373@cmpxchg.org>

On Tue, Nov 24, 2015 at 04:58:44PM -0500, Johannes Weiner wrote:
...
> @@ -5520,15 +5557,30 @@ void sock_release_memcg(struct sock *sk)
>   */
>  bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
>  {
> -	struct page_counter *counter;
> +	gfp_t gfp_mask = GFP_KERNEL;
> 
> -	if (page_counter_try_charge(&memcg->tcp_mem.memory_allocated,
> -				    nr_pages, &counter)) {
> -		memcg->tcp_mem.memory_pressure = 0;
> -		return true;
> +#ifdef CONFIG_MEMCG_KMEM
> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
> +		struct page_counter *counter;
> +
> +		if (page_counter_try_charge(&memcg->tcp_mem.memory_allocated,
> +					    nr_pages, &counter)) {
> +			memcg->tcp_mem.memory_pressure = 0;
> +			return true;
> +		}
> +		page_counter_charge(&memcg->tcp_mem.memory_allocated, nr_pages);
> +		memcg->tcp_mem.memory_pressure = 1;
> +		return false;
>  	}
> -	page_counter_charge(&memcg->tcp_mem.memory_allocated, nr_pages);
> -	memcg->tcp_mem.memory_pressure = 1;
> +#endif
> +	/* Don't block in the packet receive path */
> +	if (in_softirq())
> +		gfp_mask = GFP_NOWAIT;
> +
> +	if (try_charge(memcg, gfp_mask, nr_pages) == 0)
> +		return true;
> +
> +	try_charge(memcg, gfp_mask|__GFP_NOFAIL, nr_pages);

We won't trigger high reclaim if we get here, because try_charge does
not check the high threshold when it fails or forces the charge. I
think this should be fixed regardless of this patch; the fix is
attached below.

Also, I don't like calling try_charge twice: the second call will go
through all the try_charge steps for nothing.

What about checking the page_counter value after calling try_charge
instead:

	try_charge(memcg, gfp_mask|__GFP_NOFAIL, nr_pages);
	return page_counter_read(&memcg->memory) <= memcg->memory.limit;

or adding an out parameter to try_charge that would inform us if the
charge was forced?
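To illustrate the second idea, a rough sketch of the out-parameter
variant. This is purely hypothetical: the extra "forced" argument and
the changed try_charge signature are not part of the patch.

	/* Hypothetical: try_charge() reports whether it had to force. */
	static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
			      unsigned int nr_pages, bool *forced);

	bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg,
				     unsigned int nr_pages)
	{
		gfp_t gfp_mask = GFP_KERNEL;
		bool forced = false;

		/* Don't block in the packet receive path */
		if (in_softirq())
			gfp_mask = GFP_NOWAIT;

		/*
		 * A single NOFAIL call: the charge always succeeds, and
		 * try_charge() tells us whether it had to be forced past
		 * the limit.
		 */
		try_charge(memcg, gfp_mask | __GFP_NOFAIL, nr_pages, &forced);
		return !forced;
	}

Either way we would keep the charge/force logic in one place and avoid
walking through all of try_charge twice.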
>  	return false;
>  }
> 
> @@ -5539,10 +5591,32 @@ bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
>   */
>  void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
>  {
> -	page_counter_uncharge(&memcg->tcp_mem.memory_allocated, nr_pages);
> +#ifdef CONFIG_MEMCG_KMEM
> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
> +		page_counter_uncharge(&memcg->tcp_mem.memory_allocated,
> +				      nr_pages);
> +		return;
> +	}
> +#endif
> +	page_counter_uncharge(&memcg->memory, nr_pages);
> +	css_put_many(&memcg->css, nr_pages);

Why not simply

	cancel_charge(memcg, nr_pages);

?

Reviewed-by: Vladimir Davydov

---
From: Vladimir Davydov
Subject: [PATCH] memcg: check high threshold if forcing allocation

try_charge() does not check the high threshold if it forces the
charge. This is incorrect, because we could have failed to reclaim
memory only due to the current context (e.g. because we are in
softirq), so we do need to check the high threshold and try to
compensate for the excess once we are in a safe context.

Signed-off-by: Vladimir Davydov
---
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 79a29d564bff..e922965b572b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2112,13 +2112,14 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 		page_counter_charge(&memcg->memsw, nr_pages);
 	css_get_many(&memcg->css, nr_pages);
 
-	return 0;
+	goto check_high;
 
 done_restock:
 	css_get_many(&memcg->css, batch);
 	if (batch > nr_pages)
 		refill_stock(memcg, batch - nr_pages);
 
+check_high:
 	/*
 	 * If the hierarchy is above the normal consumption range, schedule
 	 * reclaim on returning to userland. We can perform reclaim here
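For reference, the code that the new check_high label falls through to
looks roughly like this in mainline of that era (the punt-to-userland
reclaim added for 4.4); this is a sketch from memory, so treat the
details as approximate:

check_high:
	/*
	 * If the cgroup ended up above its high boundary, record the
	 * overage and punt the reclaim to the return-to-userland path,
	 * where it is safe to block.
	 */
	do {
		if (page_counter_read(&memcg->memory) > memcg->high) {
			current->memcg_nr_pages_over_high += batch;
			set_notify_resume(current);
			break;
		}
	} while ((memcg = parent_mem_cgroup(memcg)));

	return 0;

With the goto in place, even a charge forced from softirq context gets
compensated the next time the task returns to userland, instead of
silently leaving the cgroup above its high boundary.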