From patchwork Sat Jan 26 04:41:15 2013
X-Patchwork-Submitter: Zefan Li
X-Patchwork-Id: 215883
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <51035E6B.1000106@huawei.com>
Date: Sat, 26 Jan 2013 12:41:15 +0800
From: Li Zefan
To: David Miller
CC: , Gao feng, John Fastabend, Neil Horman
Subject: [PATCH 3.4.y 1/3] cgroup: fix panic in netprio_cgroup
References: <51035E4F.6030508@huawei.com>
In-Reply-To: <51035E4F.6030508@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

From: Gao feng

commit
b761c9b1f4f69eb53fb6147547a1ab25237a93b3 upstream.

We set max_prioidx to the index of the first zero bit of prioidx_map in
get_prioidx(). So when we delete a netprio cgroup with a low index and then
add a new netprio cgroup, max_prioidx is reset to that low index.

When we then set a higher-index cgroup's net_prio.ifpriomap, write_priomap()
calls update_netdev_tables() to allocate memory of size
sizeof(struct netprio_map) + sizeof(u32) * (max_prioidx + 1), so the array
that map->priomap points to holds only max_prioidx + 1 entries, fewer than
we actually need.

Fix this by adding a check in get_prioidx(): only update max_prioidx when
it is lower than the new prioidx.

Signed-off-by: Gao feng
Acked-by: Neil Horman
Signed-off-by: David S. Miller
Signed-off-by: Li Zefan
---
 net/core/netprio_cgroup.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c
index ba6900f..4435296 100644
--- a/net/core/netprio_cgroup.c
+++ b/net/core/netprio_cgroup.c
@@ -62,8 +62,9 @@ static int get_prioidx(u32 *prio)
 		return -ENOSPC;
 	}
 	set_bit(prioidx, prioidx_map);
+	if (atomic_read(&max_prioidx) < prioidx)
+		atomic_set(&max_prioidx, prioidx);
 	spin_unlock_irqrestore(&prioidx_map_lock, flags);
-	atomic_set(&max_prioidx, prioidx);
 	*prio = prioidx;
 	return 0;
 }