author		Andrea Parri <[email protected]>	2018-12-03 15:04:49 -0800
committer	Ingo Molnar <[email protected]>	2019-01-21 11:06:55 +0100
commit		5b735eb1ce481b2f1674a47c0995944b1cb6f5d5 (patch)
tree		6d9cedef7675a70de468ca9f6d8c874006f37a7d /tools/perf/util/scripting-engines/trace-event-python.c
parent		d7393226d15add056285c8fc86723d54d7e0c77d (diff)
tools/memory-model: Model smp_mb__after_unlock_lock()
The kernel documents smp_mb__after_unlock_lock() the following way:

  "Place this after a lock-acquisition primitive to guarantee that
   an UNLOCK+LOCK pair acts as a full barrier.  This guarantee applies
   if the UNLOCK and LOCK are executed by the same CPU or if the
   UNLOCK and LOCK operate on the same lock variable."

Formalize in LKMM the above guarantee by defining (new) mb-links
according to the law:

  ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
	fencerel(After-unlock-lock) ; [M])

where the component ([UL] ; co ; [LKW]) identifies "UNLOCK+LOCK pairs
on the same lock variable" and the component ([UL] ; po ; [LKW])
identifies "UNLOCK+LOCK pairs executed by the same CPU".

In particular, the LKMM forbids the following two behaviors (the
second litmus test below is based on:

  Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html

c.f., Section "Tree RCU Grace Period Memory Ordering Building Blocks"):

C after-unlock-lock-same-cpu

(*
 * Result: Never
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	int r0;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);

	spin_lock(t);
	smp_mb__after_unlock_lock();
	r0 = READ_ONCE(*y);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r0;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r0 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r0=0)

C after-unlock-lock-same-lock-variable

(*
 * Result: Never
 *)

{}

P0(spinlock_t *s, int *x, int *y)
{
	int r0;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	r0 = READ_ONCE(*y);
	spin_unlock(s);
}

P1(spinlock_t *s, int *y, int *z)
{
	int r0;

	spin_lock(s);
	smp_mb__after_unlock_lock();
	WRITE_ONCE(*y, 1);
	r0 = READ_ONCE(*z);
	spin_unlock(s);
}

P2(int *z, int *x)
{
	int r0;

	WRITE_ONCE(*z, 1);
	smp_mb();
	r0 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r0=0 /\ 2:r0=0)

Signed-off-by: Andrea Parri <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Akira Yokosawa <[email protected]>
Cc: Alan Stern <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Daniel Lustig <[email protected]>
Cc: David Howells <[email protected]>
Cc: Jade Alglave <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Luc Maranget <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
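Editorial note: because the diffstat below is filtered to an unrelated path, the
actual hunk is not shown on this page. As a hedged sketch only, the law quoted in
the commit message would plug into the model as one more disjunct of the mb
relation in tools/memory-model/linux-kernel.cat; the surrounding clauses here are
assumptions about the pre-existing definition, not taken from this page:

  (* Sketch: mb relation with the assumed pre-existing fence links plus
   * the new After-unlock-lock link described above. *)
  let mb = ([M] ; fencerel(Mb) ; [M]) |
	([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
	([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
	([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
	([M] ; po ; [UL] ; (co | po) ; [LKW] ;
		fencerel(After-unlock-lock) ; [M])

With such a definition, both litmus tests above can be checked with herd7 against
the linux-kernel model, and the "exists" clauses are expected to report Never.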
Diffstat (limited to 'tools/perf/util/scripting-engines/trace-event-python.c')
0 files changed, 0 insertions, 0 deletions