From patchwork Tue Oct 7 09:55:47 2008
Subject: Re: [PATCH, RFC] percpu_counters: make fbc->count read atomic on 32 bit architecture
From: Peter Zijlstra
To: Andrew Morton
Cc: Theodore Ts'o, "Aneesh Kumar K.V", linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org
In-Reply-To: <20081006232322.c383fac5.akpm@linux-foundation.org>
References: <20081006232322.c383fac5.akpm@linux-foundation.org>
Date: Tue, 07 Oct 2008 11:55:47 +0200
Message-Id: <1223373347.26330.19.camel@lappy.programming.kicks-ass.net>

On Mon, 2008-10-06 at 23:23 -0700, Andrew Morton wrote:
> On Sun, 05 Oct 2008 21:28:10 -0400 "Theodore Ts'o" wrote:
> >
> > The following patch has been sitting in the ext4 patch queue for about
> > six weeks.  It was there because it was a suspected cause of a block
> > allocation bug.  As I recall, we found the true root cause since then,
> > but the patch has stuck around since it addresses a potential problem.
> > Andrew has expressed concerns that this patch might have performance
> > impacts.
>
> Performance impacts I guess we'll just have to put up with.  iirc I was
> thinking that this implementation should be pushed down to a kernel-wide
> atomic64_t and then the percpu_counters would just use that type.

Something like so?  Or should I use a global seqlock hash table?  i386
could then do a version using cmpxchg8b, although I'm not sure that's a
win, as I've heard that's an awfully expensive op to use.
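(The RFC patch itself follows below.  For context, the consumer side of
Andrew's suggestion would look roughly like this sketch: fbc->count
becomes an atomic64_t, so the lockless percpu_counter_read() is no
longer a torn 64-bit load on 32-bit architectures.  This is only an
illustration loosely based on lib/percpu_counter.c, it glosses over the
locking that percpu_counter_sum() and percpu_counter_set() still need,
and it is not part of the patch below.)

struct percpu_counter {
	spinlock_t lock;	/* still used by sum()/set() */
	atomic64_t count;	/* was: s64 count */
	s32 *counters;
};

static inline s64 percpu_counter_read(struct percpu_counter *fbc)
{
	/* a proper atomic 64-bit read, even when BITS_PER_LONG == 32 */
	return atomic64_read(&fbc->count);
}

void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s32 *pcount = per_cpu_ptr(fbc->counters, get_cpu());
	s64 count = *pcount + amount;

	if (count >= batch || count <= -batch) {
		/* fold the per-cpu delta into the shared 64-bit count */
		atomic64_add(count, &fbc->count);
		*pcount = 0;
	} else {
		*pcount = count;
	}
	put_cpu();
}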
---
Subject:
From: Peter Zijlstra
Date: Tue Oct 07 11:52:37 CEST 2008

Signed-off-by: Peter Zijlstra
---
 include/asm-generic/atomic_64.h |  140 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 140 insertions(+)

Index: linux-2.6/include/asm-generic/atomic_64.h
===================================================================
--- /dev/null
+++ linux-2.6/include/asm-generic/atomic_64.h
@@ -0,0 +1,140 @@
+/*
+ * Generic atomic64_t for 32-bit architectures: a seqlock protects the
+ * 64-bit counter so that readers never see a torn value and writers
+ * are properly serialised against each other.
+ */
+#ifndef _ASM_GENERIC_ATOMIC64_H
+#define _ASM_GENERIC_ATOMIC64_H
+
+#include <linux/types.h>
+
+#if BITS_PER_LONG == 32
+
+#include <linux/seqlock.h>
+
+typedef struct {
+	s64 counter;
+	seqlock_t slock;
+} atomic64_t;
+
+#define ATOMIC64_INIT(i) { .counter = (i), .slock = __SEQLOCK_UNLOCKED(i.slock) }
+
+static inline s64 atomic64_read(atomic64_t *v)
+{
+	unsigned seq;
+	s64 val;
+
+	do {
+		seq = read_seqbegin(&v->slock);
+		val = v->counter;
+	} while (read_seqretry(&v->slock, seq));
+
+	return val;
+}
+
+static inline void atomic64_set(s64 i, atomic64_t *v)
+{
+	write_seqlock(&v->slock);
+	v->counter = i;
+	write_sequnlock(&v->slock);
+}
+
+static inline void atomic64_add(s64 i, atomic64_t *v)
+{
+	write_seqlock(&v->slock);
+	v->counter += i;
+	write_sequnlock(&v->slock);
+}
+
+static inline void atomic64_sub(s64 i, atomic64_t *v)
+{
+	write_seqlock(&v->slock);
+	v->counter -= i;
+	write_sequnlock(&v->slock);
+}
+
+static inline int atomic64_sub_and_test(s64 i, atomic64_t *v)
+{
+	int ret;
+
+	write_seqlock(&v->slock);
+	v->counter -= i;
+	ret = !v->counter;
+	write_sequnlock(&v->slock);
+
+	return ret;
+}
+
+static inline void atomic64_inc(atomic64_t *v)
+{
+	write_seqlock(&v->slock);
+	v->counter++;
+	write_sequnlock(&v->slock);
+}
+
+static inline void atomic64_dec(atomic64_t *v)
+{
+	write_seqlock(&v->slock);
+	v->counter--;
+	write_sequnlock(&v->slock);
+}
+
+static inline int atomic64_dec_and_test(atomic64_t *v)
+{
+	int ret;
+
+	write_seqlock(&v->slock);
+	v->counter--;
+	ret = !v->counter;
+	write_sequnlock(&v->slock);
+
+	return ret;
+}
+
+static inline int atomic64_inc_and_test(atomic64_t *v)
+{
+	int ret;
+
+	write_seqlock(&v->slock);
+	v->counter++;
+	ret = !v->counter;
+	write_sequnlock(&v->slock);
+
+	return ret;
+}
+
+static inline int atomic64_add_negative(s64 i, atomic64_t *v)
+{
+	int ret;
+
+	write_seqlock(&v->slock);
+	v->counter += i;
+	ret = v->counter < 0;
+	write_sequnlock(&v->slock);
+
+	return ret;
+}
+
+static inline s64 atomic64_add_return(s64 i, atomic64_t *v)
+{
+	s64 val;
+
+	write_seqlock(&v->slock);
+	v->counter += i;
+	val = v->counter;
+	write_sequnlock(&v->slock);
+
+	return val;
+}
+
+static inline s64 atomic64_sub_return(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return(-i, v);
+}
+
+#define atomic64_inc_return(v)	(atomic64_add_return(1, (v)))
+#define atomic64_dec_return(v)	(atomic64_sub_return(1, (v)))
+
+#endif /* BITS_PER_LONG == 32 */
+
+#endif /* _ASM_GENERIC_ATOMIC64_H */
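As for the global seqlock hash table alternative mentioned above, it
would look roughly like the sketch below: instead of embedding a
seqlock_t in every atomic64_t (which grows the type), a small shared
array of seqlocks is indexed by hashing the counter's address.  All
names here (ATOMIC64_HASH_BITS, atomic64_lock_hash, lock_for) are made
up for illustration; this is not what the patch above implements.

#include <linux/types.h>
#include <linux/seqlock.h>
#include <linux/hash.h>

#define ATOMIC64_HASH_BITS	5
#define ATOMIC64_NR_LOCKS	(1 << ATOMIC64_HASH_BITS)

/* one shared pool of seqlocks, used by all atomic64_t objects */
static seqlock_t atomic64_lock_hash[ATOMIC64_NR_LOCKS] = {
	[0 ... ATOMIC64_NR_LOCKS - 1] = __SEQLOCK_UNLOCKED(atomic64_lock_hash)
};

typedef struct {
	s64 counter;		/* the atomic64_t itself stays 8 bytes */
} atomic64_t;

static inline seqlock_t *lock_for(const atomic64_t *v)
{
	/* hash the counter's address into the shared lock array */
	return &atomic64_lock_hash[hash_ptr((void *)v, ATOMIC64_HASH_BITS)];
}

static inline s64 atomic64_read(const atomic64_t *v)
{
	seqlock_t *sl = lock_for(v);
	unsigned seq;
	s64 val;

	do {
		seq = read_seqbegin(sl);
		val = v->counter;
	} while (read_seqretry(sl, seq));

	return val;
}

static inline void atomic64_add(s64 i, atomic64_t *v)
{
	seqlock_t *sl = lock_for(v);

	write_seqlock(sl);
	v->counter += i;
	write_sequnlock(sl);
}

The trade-off is the usual one for hashed locks: a smaller atomic64_t
and fewer cache lines spent on locks, at the cost of false contention
between unrelated counters that happen to hash to the same seqlock.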