From patchwork Thu Sep  6 17:57:39 2012
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 182253
X-Patchwork-Delegate: davem@davemloft.net
Subject: Re: [PATCH 1/4] slab: do ClearSlabPfmemalloc() for all pages of slab
From: JoonSoo Kim
To: Mel Gorman
Cc: Andrew Morton, Linux-MM, Linux-Netdev, LKML, David Miller, Chuck Lever,
	Pekka Enberg, Christoph Lameter
Date: Fri, 7 Sep 2012 02:57:39 +0900
In-Reply-To: <1346779479-1097-2-git-send-email-mgorman@suse.de>
References: <1346779479-1097-1-git-send-email-mgorman@suse.de>
	<1346779479-1097-2-git-send-email-mgorman@suse.de>
X-Mailing-List: netdev@vger.kernel.org

Added Christoph Lameter to Cc.

2012/9/5 Mel Gorman :
> Right now, we call ClearSlabPfmemalloc() for first page of slab when we
> clear SlabPfmemalloc flag. This is fine for most swap-over-network use
> cases as it is expected that order-0 pages are in use. Unfortunately it
> is possible that __ac_put_obj() checks SlabPfmemalloc on a tail page
> and while this is harmless, it is sloppy. This patch ensures that the head
> page is always used.
>
> This problem was originally identified by Joonsoo Kim.
>
> [js1304@gmail.com: Original implementation and problem identification]
> Signed-off-by: Mel Gorman
> ---
>  mm/slab.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index 811af03..d34a903 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1000,7 +1000,7 @@ static void *__ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
>  		l3 = cachep->nodelists[numa_mem_id()];
>  		if (!list_empty(&l3->slabs_free) && force_refill) {
>  			struct slab *slabp = virt_to_slab(objp);
> -			ClearPageSlabPfmemalloc(virt_to_page(slabp->s_mem));
> +			ClearPageSlabPfmemalloc(virt_to_head_page(slabp->s_mem));
>  			clear_obj_pfmemalloc(&objp);
>  			recheck_pfmemalloc_active(cachep, ac);
>  			return objp;

We can assume that slabp->s_mem always points into the head page, so
virt_to_head_page() is not needed here; virt_to_page() is sufficient.
> @@ -1032,7 +1032,7 @@ static void *__ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
>  {
>  	if (unlikely(pfmemalloc_active)) {
>  		/* Some pfmemalloc slabs exist, check if this is one */
> -		struct page *page = virt_to_page(objp);
> +		struct page *page = virt_to_head_page(objp);
>  		if (PageSlabPfmemalloc(page))
>  			set_obj_pfmemalloc(&objp);
>  	}
> --
> 1.7.9.2
>

If we always use the head page, the following suggestion looks better to me.
What do you think?

@@ -1921,10 +1921,9 @@ static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 			NR_SLAB_UNRECLAIMABLE, nr_pages);
 	for (i = 0; i < nr_pages; i++) {
 		__SetPageSlab(page + i);
-
-		if (page->pfmemalloc)
-			SetPageSlabPfmemalloc(page + i);
 	}
+	if (page->pfmemalloc)
+		SetPageSlabPfmemalloc(page);
 
 	if (kmemcheck_enabled && !(cachep->flags & SLAB_NOTRACK)) {
 		kmemcheck_alloc_shadow(page, cachep->gfporder, flags, nodeid);
@@ -1943,26 +1942,26 @@
  */
 static void kmem_freepages(struct kmem_cache *cachep, void *addr)
 {
-	unsigned long i = (1 << cachep->gfporder);
+	int nr_pages = (1 << cachep->gfporder);
+	int i;
 	struct page *page = virt_to_page(addr);
-	const unsigned long nr_freed = i;
 
 	kmemcheck_free_shadow(page, cachep->gfporder);
 
 	if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
 		sub_zone_page_state(page_zone(page),
-				NR_SLAB_RECLAIMABLE, nr_freed);
+				NR_SLAB_RECLAIMABLE, nr_pages);
 	else
 		sub_zone_page_state(page_zone(page),
-				NR_SLAB_UNRECLAIMABLE, nr_freed);
-	while (i--) {
-		BUG_ON(!PageSlab(page));
-		__ClearPageSlabPfmemalloc(page);
-		__ClearPageSlab(page);
-		page++;
+				NR_SLAB_UNRECLAIMABLE, nr_pages);
+	for (i = 0; i < nr_pages; i++) {
+		BUG_ON(!PageSlab(page + i));
+		__ClearPageSlab(page + i);
 	}
+	__ClearPageSlabPfmemalloc(page);
+
 	if (current->reclaim_state)
-		current->reclaim_state->reclaimed_slab += nr_freed;
+		current->reclaim_state->reclaimed_slab += nr_pages;
 	free_pages((unsigned long)addr, cachep->gfporder);
 }