From patchwork Mon Sep 26 07:09:16 2011
X-Patchwork-Submitter: Benjamin Collins
X-Patchwork-Id: 164269
Message-Id: <0a0ebabf2caa367d6cff7ce6420bdbdd4206df45.1339455422.git.bcollins@ubuntu.com>
From: Madalin Bucur
Date: Mon, 26 Sep 2011 07:09:16 +0000
Subject: [PATCH 13/27] UBUNTU: SAUCE: net/flow: remove sleeping and deferral mechanism from flow_cache_flush
To: kernel-team@lists.ubuntu.com

flow_cache_flush() must not sleep, as it can be called from atomic
context. Also remove the schedule_work() deferral: under heavy network
load the deferred processing meant the flow cache garbage collection
never actually ran.

This patch is maintained by Freescale, who will eventually merge it
upstream directly. The powerpc-e500mc flavour uses this.
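The essence of the change is replacing the sleeping rendezvous
(wait_for_completion() on a struct completion) with a busy-wait on the
existing atomic cpuleft counter, and running the garbage collector
synchronously instead of deferring it through schedule_work(). A rough
userspace sketch of that wait pattern follows, with C11 atomics and
pthreads standing in for the kernel primitives; NWORKERS and
flush_worker are illustrative names, not from the patch:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define NWORKERS 4	/* stand-in for num_online_cpus() */

static atomic_int cpuleft = NWORKERS;

/* Stand-in for the per-CPU flush tasklet: do the flush work, then
 * decrement the shared counter instead of signalling a completion. */
static void *flush_worker(void *arg)
{
	(void)arg;
	/* ... per-CPU flush work would go here ... */
	atomic_fetch_sub(&cpuleft, 1);
	return NULL;
}

int main(void)
{
	pthread_t tids[NWORKERS];

	for (int i = 0; i < NWORKERS; i++)
		pthread_create(&tids[i], NULL, flush_worker, NULL);

	/* The non-sleeping wait used by the patch: spin until every
	 * worker has checked in; the kernel code uses cpu_relax(). */
	while (atomic_load(&cpuleft) != 0)
		sched_yield();

	for (int i = 0; i < NWORKERS; i++)
		pthread_join(tids[i], NULL);

	puts("all flush workers done");
	return 0;
}

The trade-off is that the flushing CPU now burns cycles until the last
responder checks in rather than sleeping; in exchange,
flow_cache_flush() never blocks, which is what lets it be entered from
atomic context as the commit message requires.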
Signed-off-by: Madalin Bucur
Signed-off-by: Ben Collins
---
 net/core/flow.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/net/core/flow.c b/net/core/flow.c
index e318c7e..0b66876 100644
--- a/net/core/flow.c
+++ b/net/core/flow.c
@@ -14,7 +14,6 @@
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/smp.h>
-#include <linux/completion.h>
 #include <linux/percpu.h>
 #include <linux/bitops.h>
 #include <linux/notifier.h>
@@ -49,7 +48,6 @@ struct flow_cache_percpu {
 struct flow_flush_info {
 	struct flow_cache		*cache;
 	atomic_t			cpuleft;
-	struct completion		completion;
 };
 
 struct flow_cache {
@@ -100,7 +98,7 @@ static void flow_entry_kill(struct flow_cache_entry *fle)
 	kmem_cache_free(flow_cachep, fle);
 }
 
-static void flow_cache_gc_task(struct work_struct *work)
+static void flow_cache_gc_task(void)
 {
 	struct list_head gc_list;
 	struct flow_cache_entry *fce, *n;
@@ -113,7 +111,6 @@ static void flow_cache_gc_task(struct work_struct *work)
 	list_for_each_entry_safe(fce, n, &gc_list, u.gc_list)
 		flow_entry_kill(fce);
 }
-static DECLARE_WORK(flow_cache_gc_work, flow_cache_gc_task);
 
 static void flow_cache_queue_garbage(struct flow_cache_percpu *fcp,
 				     int deleted, struct list_head *gc_list)
@@ -123,7 +120,7 @@ static void flow_cache_queue_garbage(struct flow_cache_percpu *fcp,
 		spin_lock_bh(&flow_cache_gc_lock);
 		list_splice_tail(gc_list, &flow_cache_gc_list);
 		spin_unlock_bh(&flow_cache_gc_lock);
-		schedule_work(&flow_cache_gc_work);
+		flow_cache_gc_task();
 	}
 }
 
@@ -320,8 +317,7 @@ static void flow_cache_flush_tasklet(unsigned long data)
 
 	flow_cache_queue_garbage(fcp, deleted, &gc_list);
 
-	if (atomic_dec_and_test(&info->cpuleft))
-		complete(&info->completion);
+	atomic_dec(&info->cpuleft);
 }
 
 static void flow_cache_flush_per_cpu(void *data)
@@ -339,22 +335,23 @@ static void flow_cache_flush_per_cpu(void *data)
 void flow_cache_flush(void)
 {
 	struct flow_flush_info info;
-	static DEFINE_MUTEX(flow_flush_sem);
+	static DEFINE_SPINLOCK(flow_flush_lock);
 
 	/* Don't want cpus going down or up during this. */
 	get_online_cpus();
-	mutex_lock(&flow_flush_sem);
+	spin_lock_bh(&flow_flush_lock);
 	info.cache = &flow_cache_global;
 	atomic_set(&info.cpuleft, num_online_cpus());
-	init_completion(&info.completion);
 
 	local_bh_disable();
 	smp_call_function(flow_cache_flush_per_cpu, &info, 0);
 	flow_cache_flush_tasklet((unsigned long)&info);
 	local_bh_enable();
 
-	wait_for_completion(&info.completion);
-	mutex_unlock(&flow_flush_sem);
+	while (atomic_read(&info.cpuleft) != 0)
+		cpu_relax();
+
+	spin_unlock_bh(&flow_flush_lock);
 	put_online_cpus();
 }