
[v6,1/4] atomics: Allow architectures to define their own __atomic_op_* helpers

Message ID 1450189457-10589-2-git-send-email-boqun.feng@gmail.com (mailing list archive)
State Accepted

Commit Message

Boqun Feng Dec. 15, 2015, 2:24 p.m. UTC
Some architectures have their own special barriers for acquire, release and
fence semantics, so the general memory barriers (smp_mb__*_atomic()) used in
the default __atomic_op_*() helpers may be too strong. Allow such
architectures to define their own helpers, which override the defaults.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/atomic.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
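
For illustration, an arch-specific override built on top of this patch might
look roughly like the sketch below. This is only a sketch, modelled on the
powerpc patches later in this series: PPC_ACQUIRE_BARRIER and
PPC_RELEASE_BARRIER are assumed to be the architecture's lighter-weight
barriers, and the real definitions would live in the arch's asm/atomic.h,
before it includes <linux/atomic.h>.

/*
 * Hypothetical arch override (sketch only): the relaxed op performs the
 * atomic update, and the arch's lighter barriers provide the acquire/
 * release ordering instead of the default smp_mb__{before,after}_atomic().
 */
#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");	\
	__ret;								\
})

#define __atomic_op_release(op, args...)				\
({									\
	__asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");	\
	op##_relaxed(args);						\
})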

Comments

Michael Ellerman Feb. 22, 2016, 9:45 a.m. UTC | #1
On Tue, 2015-12-15 at 14:24:14 UTC, Boqun Feng wrote:
> Some architectures have their own special barriers for acquire, release and
> fence semantics, so the general memory barriers (smp_mb__*_atomic()) used in
> the default __atomic_op_*() helpers may be too strong. Allow such
> architectures to define their own helpers, which override the defaults.
> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/e1ab7f39d7e0dbfbdefe148be3

cheers

Patch

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 301de78..5f3ee5a 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -34,20 +34,29 @@ 
  * The idea here is to build acquire/release variants by adding explicit
  * barriers on top of the relaxed variant. In the case where the relaxed
  * variant is already fully ordered, no additional barriers are needed.
+ *
+ * In addition, if an arch has a special barrier for acquire/release, it
+ * can implement its own __atomic_op_* helpers and reuse the same
+ * framework for building the variants.
  */
+#ifndef __atomic_op_acquire
 #define __atomic_op_acquire(op, args...)				\
 ({									\
 	typeof(op##_relaxed(args)) __ret  = op##_relaxed(args);		\
 	smp_mb__after_atomic();						\
 	__ret;								\
 })
+#endif
 
+#ifndef __atomic_op_release
 #define __atomic_op_release(op, args...)				\
 ({									\
 	smp_mb__before_atomic();					\
 	op##_relaxed(args);						\
 })
+#endif
 
+#ifndef __atomic_op_fence
 #define __atomic_op_fence(op, args...)					\
 ({									\
 	typeof(op##_relaxed(args)) __ret;				\
@@ -56,6 +65,7 @@ 
 	smp_mb__after_atomic();						\
 	__ret;								\
 })
+#endif
 
 /* atomic_add_return_relaxed */
 #ifndef atomic_add_return_relaxed
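
For readers unfamiliar with how these helpers are consumed, the ordered
variants further down in include/linux/atomic.h are built from the relaxed
ones roughly as follows (paraphrased from the existing file, not part of
this patch):

#ifndef atomic_add_return_acquire
#define  atomic_add_return_acquire(...)					\
	__atomic_op_acquire(atomic_add_return, __VA_ARGS__)
#endif

/*
 * With the default __atomic_op_acquire() above, a call such as
 * atomic_add_return_acquire(i, v) expands to roughly:
 *
 *	({
 *		int __ret = atomic_add_return_relaxed(i, v);
 *		smp_mb__after_atomic();
 *		__ret;
 *	})
 *
 * An architecture that defines its own __atomic_op_acquire() gets its
 * lighter barrier emitted at that point instead, without touching any of
 * the per-operation wrappers.
 */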