From patchwork Thu Apr 26 22:51:05 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Glauber Costa
X-Patchwork-Id: 155339
X-Patchwork-Delegate: davem@davemloft.net
From: Glauber Costa
Cc: Li Zefan, Tejun Heo, Glauber Costa, Johannes Weiner, Michal Hocko,
	Ingo Molnar, Jason Baron
Subject: [PATCH v4 1/3] make jump_labels wait while updates are in place
Date: Thu, 26 Apr 2012 19:51:05 -0300
Message-Id: <1335480667-8301-2-git-send-email-glommer@parallels.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1335480667-8301-1-git-send-email-glommer@parallels.com>
References: <1335480667-8301-1-git-send-email-glommer@parallels.com>
X-Mailing-List: netdev@vger.kernel.org

In mem cgroup, we need to guarantee that two concurrent updates of the
jump_label interface wait for each other. IOW, we can't have other
updates returning while the first one is still patching the kernel,
otherwise we'll race.
I believe this is something that can fit well in the static branch API,
without noticeable disadvantages:

* in the common case, it will be a quite simple lock/unlock operation;
* every context that calls static_branch_slow* already expects to be in
  sleeping context, because it will mutex_lock the unlikely case;
* static_key_slow_inc is not expected to be called in any fast path,
  otherwise it would be expected to have quite a different name.
  Therefore the mutex + atomic combination instead of just an atomic
  should not kill us.

Signed-off-by: Glauber Costa
CC: Tejun Heo
CC: Li Zefan
CC: Kamezawa Hiroyuki
CC: Johannes Weiner
CC: Michal Hocko
CC: Ingo Molnar
CC: Jason Baron
---
 kernel/jump_label.c |   21 +++++++++++----------
 1 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 4304919..5d09cb4 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -57,17 +57,16 @@ static void jump_label_update(struct static_key *key, int enable);
 
 void static_key_slow_inc(struct static_key *key)
 {
+	jump_label_lock();
 	if (atomic_inc_not_zero(&key->enabled))
-		return;
+		goto out;
 
-	jump_label_lock();
-	if (atomic_read(&key->enabled) == 0) {
-		if (!jump_label_get_branch_default(key))
-			jump_label_update(key, JUMP_LABEL_ENABLE);
-		else
-			jump_label_update(key, JUMP_LABEL_DISABLE);
-	}
+	if (!jump_label_get_branch_default(key))
+		jump_label_update(key, JUMP_LABEL_ENABLE);
+	else
+		jump_label_update(key, JUMP_LABEL_DISABLE);
 	atomic_inc(&key->enabled);
+out:
 	jump_label_unlock();
 }
 EXPORT_SYMBOL_GPL(static_key_slow_inc);
 
@@ -75,10 +74,11 @@ EXPORT_SYMBOL_GPL(static_key_slow_inc);
 static void __static_key_slow_dec(struct static_key *key,
 		unsigned long rate_limit, struct delayed_work *work)
 {
-	if (!atomic_dec_and_mutex_lock(&key->enabled, &jump_label_mutex)) {
+	jump_label_lock();
+	if (atomic_dec_and_test(&key->enabled)) {
 		WARN(atomic_read(&key->enabled) < 0,
 		     "jump label: negative count!\n");
-		return;
+		goto out;
 	}
 
 	if (rate_limit) {
@@ -90,6 +90,7 @@ static void __static_key_slow_dec(struct static_key *key,
 		else
 			jump_label_update(key, JUMP_LABEL_ENABLE);
 	}
+out:
 	jump_label_unlock();
 }