author     Will Deacon <[email protected]>     2014-02-07 19:12:32 +0100
committer  Russell King <[email protected]>    2014-02-10 11:44:50 +0000
commit     7c8746a9eb287642deaad0e7c2cdf482dce5e4be (patch)
tree       d8740bb7222926df4bae9f93a6564839b08dc6c3 /net/lapb/lapb_subr.c
parent     bae0ca2bc550d1ec6a118fb8f2696f18c4da3d8e (diff)
ARM: 7955/1: spinlock: ensure we have a compiler barrier before sev
When unlocking a spinlock, we require the following, strictly ordered
sequence of events:
<barrier> /* dmb */
<unlock>
<barrier> /* dsb */
<sev>
Whilst the code does indeed reflect this in terms of the architecture,
the final <barrier> + <sev> have been contracted into a single inline
asm without a "memory" clobber, so the compiler is at liberty to
reorder the unlock to the end of the above sequence. In such a case,
a waiting CPU may be woken up before the lock has been unlocked, leading
to extremely poor performance.
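[Editor's note: to make the failure mode concrete, here is a minimal sketch, not the exact pre-patch source, of the pattern described above. The unlock is an ordinary C store, while the trailing dsb + sev sit in one inline asm with no "memory" clobber, so nothing constrains the compiler from sinking the store past that asm. The arch_spin_unlock() body is paraphrased from the ARM ticket-lock shape purely for illustration.]

    /* Illustrative sketch only; identifiers mirror the ARM spinlock code
     * but the bodies are simplified. */
    static inline void dsb_sev(void)
    {
            __asm__ __volatile__ (
                    "dsb\n"
                    "sev"
                    /* No "memory" clobber: the compiler may move the
                     * preceding unlock store to after this asm. */
            );
    }

    static inline void arch_spin_unlock(arch_spinlock_t *lock)
    {
            smp_mb();               /* <barrier>  (dmb) */
            lock->tickets.owner++;  /* <unlock>   plain store, not ordered
                                       against the asm below */
            dsb_sev();              /* <barrier> + <sev>, intended to come last */
    }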
This patch reworks the dsb_sev() function to make use of the dsb()
macro and ensure ordering against the unlock.
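[Editor's note: for reference, a hedged sketch of what the reworked helper looks like after this change, assuming the ARM dsb() macro expands to an inline asm that carries a "memory" clobber. The barrier then acts as a compiler barrier as well, so the unlock store cannot be reordered past it, and the sev is emitted on its own.]

    static inline void dsb_sev(void)
    {
            dsb(ishst);     /* DSB limited to stores; the macro's "memory"
                             * clobber orders it against the unlock store */
            __asm__(SEV);   /* wake CPUs waiting in wfe */
    }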
Cc: <[email protected]>
Reported-by: Mark Rutland <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
Diffstat (limited to 'net/lapb/lapb_subr.c')
0 files changed, 0 insertions, 0 deletions