From patchwork Thu Jul 12 06:40:29 2012
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 170567
X-Patchwork-Delegate: davem@davemloft.net
From: Mel Gorman <mgorman@suse.de>
To: Andrew Morton
Cc: Linux-MM, Linux-Netdev, LKML, David Miller, Neil Brown,
	Peter Zijlstra, Mike Christie, Eric B Munson, Eric Dumazet,
	Sebastian Andrzej Siewior, Mel Gorman
Subject: [PATCH 13/16] mm: Micro-optimise slab to avoid a function call
Date: Thu, 12 Jul 2012 07:40:29 +0100
Message-Id: <1342075232-29267-14-git-send-email-mgorman@suse.de>
X-Mailer: git-send-email 1.7.9.2
In-Reply-To: <1342075232-29267-1-git-send-email-mgorman@suse.de>
References: <1342075232-29267-1-git-send-email-mgorman@suse.de>

Getting and putting objects in SLAB currently requires a function call, but
the bulk of the work is related to PFMEMALLOC reserves, which are only
consumed when network-backed storage is critical. Use an inline function
to determine if the function call is required.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/slab.c | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 54bbfe4..c32fc28 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -117,6 +117,8 @@
 #include	<linux/memory.h>
 #include	<linux/prefetch.h>
 
+#include	<net/sock.h>
+
 #include	<asm/cacheflush.h>
 #include	<asm/tlbflush.h>
 #include	<asm/page.h>
@@ -990,7 +992,7 @@ out:
 	spin_unlock_irqrestore(&l3->list_lock, flags);
 }
 
-static void *ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
+static void *__ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
 						gfp_t flags, bool force_refill)
 {
 	int i;
@@ -1037,7 +1039,20 @@ static void *ac_get_obj(struct kmem_cache *cachep, struct array_cache *ac,
 	return objp;
 }
 
-static void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
+static inline void *ac_get_obj(struct kmem_cache *cachep,
+			struct array_cache *ac, gfp_t flags, bool force_refill)
+{
+	void *objp;
+
+	if (unlikely(sk_memalloc_socks()))
+		objp = __ac_get_obj(cachep, ac, flags, force_refill);
+	else
+		objp = ac->entry[--ac->avail];
+
+	return objp;
+}
+
+static void *__ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
 								void *objp)
 {
 	if (unlikely(pfmemalloc_active)) {
@@ -1047,6 +1062,15 @@ static void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
 		set_obj_pfmemalloc(&objp);
 	}
 
+	return objp;
+}
+
+static inline void ac_put_obj(struct kmem_cache *cachep, struct array_cache *ac,
+								void *objp)
+{
+	if (unlikely(sk_memalloc_socks()))
+		objp = __ac_put_obj(cachep, ac, objp);
+
 	ac->entry[ac->avail++] = objp;
 }
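
The pattern the patch relies on is a classic inline fast path with an
out-of-line slow path: the wrapper tests a cheap, almost-always-false
predicate (sk_memalloc_socks()) and only pays for a real function call when
PFMEMALLOC reserves are in play. What follows is a minimal standalone sketch
of that pattern, not the kernel code itself; the names memalloc_active,
obj_cache, cache_get and cache_get_slow are hypothetical stand-ins for
sk_memalloc_socks(), struct array_cache, ac_get_obj() and __ac_get_obj():

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for sk_memalloc_socks(): a cheap predicate
 * that is false unless network-backed storage is under pressure. */
static bool memalloc_active;

struct obj_cache {
	int avail;
	void *entry[16];
};

/* Out-of-line slow path, analogous to __ac_get_obj(): only reached
 * when the predicate is true, so the (elided) PFMEMALLOC-reserve
 * checks stay off the common path. */
static void *cache_get_slow(struct obj_cache *c)
{
	/* PFMEMALLOC-reserve handling would live here. */
	return c->entry[--c->avail];
}

/* Inline fast path, analogous to ac_get_obj() after the patch: in
 * the common case this compiles down to a test and an array pop,
 * with no call instruction. */
static inline void *cache_get(struct obj_cache *c)
{
	if (__builtin_expect(memalloc_active, 0))
		return cache_get_slow(c);
	return c->entry[--c->avail];
}

int main(void)
{
	struct obj_cache c = { .avail = 1, .entry = { "object" } };

	printf("got: %s\n", (char *)cache_get(&c));
	return 0;
}

In the common case the compiler inlines cache_get() down to the predicate
test plus an array pop at each call site; the call instruction, and the
PFMEMALLOC logic behind it, appear only on the rare branch. That is the
saving the changelog describes for ac_get_obj()/ac_put_obj().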