author    Mark Rutland <mark.rutland@arm.com>	2023-06-05 08:00:58 +0100
committer Peter Zijlstra <peterz@infradead.org>	2023-06-05 09:57:13 +0200
commit    dda5f312bb09e56e7a1c3e3851f2000eb2e9c879 (patch)
tree      2c5d77a688caffdffb4b516c1e6b00baeadb1259 /scripts/atomic/fallbacks/fetch_add_unless
parent    497cc42bf53b55185ab3d39c634fbf09eb6681ae (diff)
locking/atomic: arm: fix sync ops
The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.
Fix this by defining sync ops with the required barriers.
Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.
Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-2-mark.rutland@arm.com
Diffstat (limited to 'scripts/atomic/fallbacks/fetch_add_unless')
0 files changed, 0 insertions, 0 deletions