From patchwork Mon Jul 20 13:16:49 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gavin Guo
X-Patchwork-Id: 497709
From: Gavin Guo
To: kernel-team@lists.ubuntu.com
Subject: [Utopic][SRU][PATCH] Fix kmalloc slab creation sequence
Date: Mon, 20 Jul 2015 21:16:49 +0800
Message-Id: <1437398210-15107-3-git-send-email-gavin.guo@canonical.com>
In-Reply-To: <1437398210-15107-1-git-send-email-gavin.guo@canonical.com>
References: <1437398210-15107-1-git-send-email-gavin.guo@canonical.com>
X-Mailer: git-send-email 2.0.0
List-Id: Kernel team discussions

From: Christoph Lameter

BugLink: http://bugs.launchpad.net/bugs/1475204

This patch restores the slab creation sequence that was broken by commit
4066c33d0308f8 and also reverts the portions that introduced the
KMALLOC_LOOP_XXX macros. Those can never really work since the slab creation
is much more complex than just going from a minimum to a maximum number.

The latest upstream kernel boots cleanly on my machine with a 64 bit x86
configuration under KVM using either SLAB or SLUB.
Fixes: 4066c33d0308f8 ("support the slub_debug boot option")
Reported-by: Theodore Ts'o
Signed-off-by: Christoph Lameter
Signed-off-by: Linus Torvalds
(backported from commit a9730fca9946f3697410479e0ef1bd759ba00a77)
Signed-off-by: Gavin Guo
Conflicts:
	mm/slab_common.c
---
 include/linux/slab.h | 22 ----------------------
 mm/slab_common.c     | 33 +++++++++++++++++----------------
 2 files changed, 17 insertions(+), 38 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 02226e7..1d9abb7 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -154,30 +154,8 @@ size_t ksize(const void *);
 #define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN
 #define KMALLOC_MIN_SIZE ARCH_DMA_MINALIGN
 #define KMALLOC_SHIFT_LOW ilog2(ARCH_DMA_MINALIGN)
-/*
- * The KMALLOC_LOOP_LOW is the definition for the for loop index start number
- * to create the kmalloc_caches object in create_kmalloc_caches(). The first
- * and the second are 96 and 192. You can see that in the kmalloc_index(), if
- * the KMALLOC_MIN_SIZE <= 32, then return 1 (96). If KMALLOC_MIN_SIZE <= 64,
- * then return 2 (192). If the KMALLOC_MIN_SIZE is bigger than 64, we don't
- * need to initialize 96 and 192. Go directly to start the KMALLOC_SHIFT_LOW.
- */
-#if KMALLOC_MIN_SIZE <= 32
-#define KMALLOC_LOOP_LOW 1
-#elif KMALLOC_MIN_SIZE <= 64
-#define KMALLOC_LOOP_LOW 2
-#else
-#define KMALLOC_LOOP_LOW KMALLOC_SHIFT_LOW
-#endif
-
 #else
 #define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
-/*
- * The KMALLOC_MIN_SIZE of slub/slab/slob is 2^3/2^5/2^3. So, even slab is used.
- * The KMALLOC_MIN_SIZE <= 32. The kmalloc-96 and kmalloc-192 should also be
- * initialized.
- */
-#define KMALLOC_LOOP_LOW 1
 #endif
 
 #ifdef CONFIG_SLOB
diff --git a/mm/slab_common.c b/mm/slab_common.c
index ddf3a06..353527b 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -533,6 +533,12 @@ static struct {
 	{"kmalloc-67108864", 67108864}
 };
 
+static void new_kmalloc_cache(int idx, unsigned long flags)
+{
+	kmalloc_caches[idx] = create_kmalloc_cache(kmalloc_info[idx].name,
+					kmalloc_info[idx].size, flags);
+}
+
 /*
  * Create the kmalloc array. Some of the regular kmalloc arrays
  * may already have been created because they were needed to
@@ -583,25 +589,20 @@ void __init create_kmalloc_caches(unsigned long flags)
 		for (i = 128 + 8; i <= 192; i += 8)
 			size_index[size_index_elem(i)] = 8;
 	}
-	for (i = KMALLOC_LOOP_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
-		if (!kmalloc_caches[i]) {
-			kmalloc_caches[i] = create_kmalloc_cache(
-						kmalloc_info[i].name,
-						kmalloc_info[i].size,
-						flags);
-		}
+
+	for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
+		if (!kmalloc_caches[i])
+			new_kmalloc_cache(i, flags);
 
 		/*
-		 * "i == 2" is the "kmalloc-192" case which is the last special
-		 * case for initialization and it's the point to jump to
-		 * allocate the minimize size of the object. In slab allocator,
-		 * the KMALLOC_SHIFT_LOW = 5. So, it needs to skip 2^3 and 2^4
-		 * and go straight to allocate 2^5. If the ARCH_DMA_MINALIGN is
-		 * defined, it may be larger than 2^5 and here is also the
-		 * trick to skip the empty gap.
+		 * Caches that are not of the two-to-the-power-of size.
+		 * These have to be created immediately after the
+		 * earlier power of two caches
 		 */
-		if (i == 2)
-			i = (KMALLOC_SHIFT_LOW - 1);
+		if (KMALLOC_MIN_SIZE <= 32 && !kmalloc_caches[1] && i == 6)
+			new_kmalloc_cache(1, flags);
+		if (KMALLOC_MIN_SIZE <= 64 && !kmalloc_caches[2] && i == 7)
+			new_kmalloc_cache(2, flags);
 	}
 
 	/* Kmalloc array is now usable */
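
[Editor's note: the sketch below is an illustration only, not part of the patch.
It mocks, in user space, the creation order that the patched
create_kmalloc_caches() loop produces, so the effect of the i == 6 and i == 7
special cases for kmalloc-96 and kmalloc-192 is easy to see. The
KMALLOC_MIN_SIZE / KMALLOC_SHIFT_LOW / KMALLOC_SHIFT_HIGH values, the created[]
array and mock_new_kmalloc_cache() are assumptions chosen for the demo (typical
SLUB defaults on x86 with 4 KiB pages); the real kernel derives these from the
configured allocator and calls new_kmalloc_cache() instead.]

/*
 * User-space mock of the patched create_kmalloc_caches() loop.
 * Indices 1 and 2 are the 96- and 192-byte caches; every other
 * index i corresponds to the power-of-two size 1 << i.
 */
#include <stdio.h>
#include <stdbool.h>

#define KMALLOC_MIN_SIZE   8     /* assumed: SLUB default              */
#define KMALLOC_SHIFT_LOW  3     /* assumed: ilog2(KMALLOC_MIN_SIZE)   */
#define KMALLOC_SHIFT_HIGH 13    /* assumed: largest cache is 8192     */

static bool created[KMALLOC_SHIFT_HIGH + 1];

/* stand-in for new_kmalloc_cache(): just record and print the cache */
static void mock_new_kmalloc_cache(int idx)
{
	unsigned int size = (idx == 1) ? 96 : (idx == 2) ? 192 : 1u << idx;

	created[idx] = true;
	printf("create kmalloc-%u (index %d)\n", size, idx);
}

int main(void)
{
	int i;

	/* same shape as the patched loop in create_kmalloc_caches() */
	for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
		if (!created[i])
			mock_new_kmalloc_cache(i);

		/*
		 * Non-power-of-two caches are created immediately after
		 * the preceding power-of-two cache: 96 after 64 (i == 6),
		 * 192 after 128 (i == 7).
		 */
		if (KMALLOC_MIN_SIZE <= 32 && !created[1] && i == 6)
			mock_new_kmalloc_cache(1);
		if (KMALLOC_MIN_SIZE <= 64 && !created[2] && i == 7)
			mock_new_kmalloc_cache(2);
	}
	return 0;
}

Run as-is, this would print the caches in the order 8, 16, 32, 64, 96, 128,
192, 256, ..., 8192: kmalloc-96 and kmalloc-192 slot in right after kmalloc-64
and kmalloc-128, which is the ordering a single KMALLOC_LOOP_LOW start index
could not express for every possible KMALLOC_MIN_SIZE.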