author    Mark Rutland <[email protected]>  2023-06-05 08:00:58 +0100
committer Peter Zijlstra <[email protected]>  2023-06-05 09:57:13 +0200
commit    dda5f312bb09e56e7a1c3e3851f2000eb2e9c879 (patch)
tree      2c5d77a688caffdffb4b516c1e6b00baeadb1259 /arch/arm/lib/testsetbit.S
parent    497cc42bf53b55185ab3d39c634fbf09eb6681ae (diff)
locking/atomic: arm: fix sync ops
The sync_*() ops on arch/arm are defined in terms of the regular bitops with no special handling. This is not correct, as UP kernels elide barriers for the fully-ordered operations, and so the required ordering is lost when such UP kernels are run under a hypervisor on an SMP system.

Fix this by defining sync ops with the required barriers.

Note: On 32-bit arm, the sync_*() ops are currently only used by Xen, which requires ARMv7, but the semantics can be implemented for ARMv6+.

Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Diffstat (limited to 'arch/arm/lib/testsetbit.S')
-rw-r--r--  arch/arm/lib/testsetbit.S | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/arch/arm/lib/testsetbit.S b/arch/arm/lib/testsetbit.S
index f3192e55acc8..649dbab65d8d 100644
--- a/arch/arm/lib/testsetbit.S
+++ b/arch/arm/lib/testsetbit.S
@@ -10,3 +10,7 @@
.text
testop _test_and_set_bit, orreq, streq
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop _sync_test_and_set_bit, orreq, streq
+#endif