From patchwork Tue Dec 20 13:06:58 2016
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 707422
X-Patchwork-Delegate: davem@davemloft.net
From: Michal Hocko
To: Andrew Morton
Cc: Christoph Lameter, Alexei Starovoitov, Andrey Konovalov,
	netdev@vger.kernel.org, LKML, Michal Hocko
Subject: [PATCH 1/2] mm, slab: make sure that KMALLOC_MAX_SIZE will fit into MAX_ORDER
Date: Tue, 20 Dec 2016 14:06:58 +0100
Message-Id: <20161220130659.16461-2-mhocko@kernel.org>
In-Reply-To: <20161220130659.16461-1-mhocko@kernel.org>
References: <20161220130659.16461-1-mhocko@kernel.org>
List-ID: X-Mailing-List: netdev@vger.kernel.org

From: Michal Hocko

Andrey Konovalov has reported the following warning triggered by the
syzkaller fuzzer:

WARNING: CPU: 1 PID: 9935 at mm/page_alloc.c:3511 __alloc_pages_nodemask+0x159c/0x1e20
Kernel panic - not syncing: panic_on_warn set ...
CPU: 1 PID: 9935 Comm: syz-executor0 Not tainted 4.9.0-rc7+ #34
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
 ffff88006949f2c8 ffffffff81f96b8a ffffffff00000200 1ffff1000d293dec
 ffffed000d293de4 0000000000000a06 0000000041b58ab3 ffffffff8598b510
 ffffffff81f968f8 0000000041b58ab3 ffffffff85942a58 ffffffff81432860
Call Trace:
 [< inline >] __dump_stack lib/dump_stack.c:15
 [] dump_stack+0x292/0x398 lib/dump_stack.c:51
 [] panic+0x1cb/0x3a9 kernel/panic.c:179
 [] __warn+0x1c4/0x1e0 kernel/panic.c:542
 [] warn_slowpath_null+0x2c/0x40 kernel/panic.c:585
 [< inline >] __alloc_pages_slowpath mm/page_alloc.c:3511
 [] __alloc_pages_nodemask+0x159c/0x1e20 mm/page_alloc.c:3781
 [] alloc_pages_current+0x1c7/0x6b0 mm/mempolicy.c:2072
 [< inline >] alloc_pages include/linux/gfp.h:469
 [] kmalloc_order+0x1f/0x70 mm/slab_common.c:1015
 [] kmalloc_order_trace+0x1f/0x160 mm/slab_common.c:1026
 [< inline >] kmalloc_large include/linux/slab.h:422
 [] __kmalloc+0x210/0x2d0 mm/slub.c:3723
 [< inline >] kmalloc include/linux/slab.h:495
 [] ep_write_iter+0x167/0xb50 drivers/usb/gadget/legacy/inode.c:664
 [< inline >] new_sync_write fs/read_write.c:499
 [] __vfs_write+0x483/0x760 fs/read_write.c:512
 []
vfs_write+0x170/0x4e0 fs/read_write.c:560
 [< inline >] SYSC_write fs/read_write.c:607
 [] SyS_write+0xfb/0x230 fs/read_write.c:599
 [] entry_SYSCALL_64_fastpath+0x1f/0xc2

The issue is caused by a missing size check on the request size in
ep_write_iter, which should be fixed. It points, however, to another
problem: SLUB defines KMALLOC_MAX_SIZE too large, because its
KMALLOC_SHIFT_MAX is (MAX_ORDER + PAGE_SHIFT), which means that the
resulting page allocator request might be of order MAX_ORDER, which is
too large (see __alloc_pages_slowpath). The same applies to the SLOB
allocator, which allows even larger sizes. Make sure that both are
capped properly and never request an order of MAX_ORDER or larger.

Reported-by: Andrey Konovalov
Signed-off-by: Michal Hocko
Acked-by: Christoph Lameter
---
 include/linux/slab.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 084b12bad198..4c5363566815 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -226,7 +226,7 @@ static inline const char *__check_heap_object(const void *ptr,
  * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
  */
 #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
-#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT)
+#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	3
 #endif
@@ -239,7 +239,7 @@ static inline const char *__check_heap_object(const void *ptr,
  * be allocated from the same page.
  */
 #define KMALLOC_SHIFT_HIGH	PAGE_SHIFT
-#define KMALLOC_SHIFT_MAX	30
+#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	3
 #endif