author		Paul E. McKenney <[email protected]>	2013-12-11 13:59:11 -0800
committer	Ingo Molnar <[email protected]>	2013-12-16 11:36:18 +0100
commit		919fc6e34831d1c2b58bfb5ae261dc3facc9b269 (patch)
tree		a9c7d8a8c4b089decea2e596a5f08b182bd8cc8e
parent		6303b9c87d52eaedc82968d3ff59c471e7682afc (diff)
powerpc: Full barrier for smp_mb__after_unlock_lock()
The powerpc lock acquisition sequence is as follows:

	lwarx; cmpwi; bne; stwcx.; lwsync;

Lock release is as follows:

	lwsync; stw;

If CPU 0 does a store (say, x=1) then a lock release, and CPU 1 does a
lock acquisition then a load (say, r1=y), then there is no guarantee of
a full memory barrier between the store to 'x' and the load from 'y'.
To see this, suppose that CPUs 0 and 1 are hardware threads in the same
core that share a store buffer, and that CPU 2 is in some other core,
and that CPU 2 does the following:

	y = 1; sync; r2 = x;

If 'x' and 'y' are both initially zero, then the lock acquisition and
release sequences above can result in r1 and r2 both being equal to
zero, which could not happen if unlock+lock were a full barrier.

This commit therefore makes powerpc's smp_mb__after_unlock_lock() be a
full barrier.

Signed-off-by: Paul E. McKenney <[email protected]>
Acked-by: Benjamin Herrenschmidt <[email protected]>
Reviewed-by: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: [email protected]
Cc: <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
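To make the changelog's scenario concrete, here is a minimal sketch of
the three-CPU litmus test in Linux-kernel-style C. The variable names,
the lock, and the per-CPU functions are hypothetical illustrations, not
code from this patch:

	int x, y;		/* both initially zero */
	spinlock_t mylock;	/* hypothetical lock */

	void cpu0(void)		/* store, then release the lock */
	{
		x = 1;
		spin_unlock(&mylock);	/* powerpc: lwsync; stw */
	}

	void cpu1(void)		/* acquire the lock, then load */
	{
		int r1;

		spin_lock(&mylock);	/* powerpc: lwarx; cmpwi; bne; stwcx.; lwsync */
		r1 = y;			/* can observe y == 0 */
	}

	void cpu2(void)		/* observer in some other core */
	{
		int r2;

		y = 1;
		smp_mb();		/* powerpc: sync */
		r2 = x;			/* can observe x == 0 */
	}

	/*
	 * When CPUs 0 and 1 are hardware threads sharing a store buffer,
	 * the hardware permits the outcome r1 == 0 && r2 == 0.  Inserting
	 * smp_mb__after_unlock_lock() after cpu1()'s spin_lock() forbids
	 * that outcome by upgrading unlock+lock to a full barrier.
	 */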
-rw-r--r--	arch/powerpc/include/asm/spinlock.h	2
1 file changed, 2 insertions, 0 deletions
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 5f54a744dcc5..f6e78d63fb6a 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -28,6 +28,8 @@
#include <asm/synch.h>
#include <asm/ppc-opcode.h>
+#define smp_mb__after_unlock_lock() smp_mb() /* Full ordering for lock. */
+
#define arch_spin_is_locked(x) ((x)->slock != 0)
#ifdef CONFIG_PPC64
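For context, a brief sketch of how a caller might pair the new macro
with an unlock+lock sequence. The struct, the locks, and the handoff()
function below are hypothetical; the macro itself is what this patch
defines:

	struct node {
		spinlock_t lock;
		/* ... protected state ... */
	};

	/*
	 * Hypothetical: hand off from one node's lock to another's while
	 * guaranteeing full ordering across the unlock+lock pair.
	 */
	static void handoff(struct node *old, struct node *new)
	{
		spin_unlock(&old->lock);
		spin_lock(&new->lock);
		smp_mb__after_unlock_lock();	/* smp_mb() on powerpc */
		/*
		 * Stores preceding the unlock are now fully ordered, on
		 * all CPUs, before memory accesses in this critical
		 * section.
		 */
	}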