path: root/include/linux/atomic-fallback.h
Age | Commit message | Author | Files | Lines
2020-10-12 | asm-generic/atomic: Add try_cmpxchg() fallbacks | Peter Zijlstra | 1 | -10/+80
Only x86 provides try_cmpxchg() outside of the atomic_t interfaces; provide generic fallbacks to create this interface from the widely available cmpxchg() function.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/159870621515.1229682.15506193091065001742.stgit@devnote2
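For readers unfamiliar with the pattern, here is a minimal kernel-style sketch of how a try_cmpxchg()-style helper can be built from a plain cmpxchg() that returns the value found in memory. The my_try_cmpxchg name is hypothetical; the real fallbacks are generated into the atomic headers rather than written by hand.

    #include <linux/atomic.h>
    #include <linux/compiler.h>
    #include <linux/types.h>

    /* Sketch: derive a try_cmpxchg()-style operation from cmpxchg(). */
    static __always_inline bool my_try_cmpxchg(atomic_t *v, int *old, int new)
    {
            int o = *old;
            int r = atomic_cmpxchg(v, o, new);

            if (unlikely(r != o))
                    *old = r;       /* report the value actually observed */
            return likely(r == o);
    }

On success the caller proceeds as usual; on failure *old already holds the fresh value, so a retry loop does not need an extra atomic_read().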
2020-06-25 | locking/atomics: Provide the arch_atomic_ interface to generic code | Peter Zijlstra | 1 | -1/+235
Architectures with instrumented (KASAN/KCSAN) atomic operations natively provide arch_atomic_ variants that are not instrumented. It turns out that some generic code also requires arch_atomic_ in order to avoid instrumentation, so provide the arch_atomic_ interface as a direct map into the regular atomic_ interface for non-instrumented architectures.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
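As a rough illustration of the "direct map" idea (a sketch only, not the generated header's literal contents): on an architecture whose atomics are not instrumented, the arch_atomic_ names can simply alias the regular atomic_ interface, for example:

    /* Illustrative aliases for a non-instrumented architecture. */
    #define arch_atomic_read(v)             atomic_read(v)
    #define arch_atomic_set(v, i)           atomic_set((v), (i))
    #define arch_atomic_add_return(i, v)    atomic_add_return((i), (v))
    #define arch_atomic_cmpxchg(v, o, n)    atomic_cmpxchg((v), (o), (n))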
2020-06-11 | locking/atomics: Flip fallbacks and instrumentation | Peter Zijlstra | 1 | -7/+1
Currently, instrumentation of atomic primitives is done at the architecture level, while composites or fallbacks are provided at the generic level. The result is that there are no uninstrumented variants of the fallbacks. Since there is now a need for such variants, to isolate text poke from any form of instrumentation, invert this ordering.

Doing this means moving the instrumentation into the generic code as well as having (for now) two variants of the fallbacks.

Notes:

 - the various *cond_read* primitives are not proper fallbacks and got moved into linux/atomic.h. No arch_ variants are generated because the base primitives smp_cond_load*() are instrumented.

 - once all architectures are moved over to arch_atomic_, one of the fallback variants can be removed and some 2300 lines reclaimed.

 - atomic_{read,set}*() are no longer double-instrumented

Reported-by: Thomas Gleixner <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
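To make the resulting shape concrete, here is a hedged sketch (not the header's exact contents) of a generic, instrumented wrapper sitting on top of an uninstrumented arch_ primitive after the flip; the my_ name is hypothetical and the instrumentation hook shown is just one representative call.

    #include <linux/atomic.h>
    #include <linux/instrumented.h>

    /* Sketch: the generic wrapper instruments the access, then calls the
     * architecture's uninstrumented primitive. */
    static __always_inline int my_atomic_add_return(int i, atomic_t *v)
    {
            instrument_atomic_write(v, sizeof(*v));  /* KASAN/KCSAN hook */
            return arch_atomic_add_return(i, v);
    }

Code that must never be instrumented (such as the text poke path mentioned above) calls the arch_atomic_ names directly and skips the wrapper.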
2020-06-11 | asm-generic/atomic: Use __always_inline for fallback wrappers | Marco Elver | 1 | -169/+171
Use __always_inline for atomic fallback wrappers.

When building for size (CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to inline even relatively small static inline functions that are assumed to be inlinable, such as atomic ops. This can cause problems, for example in UACCESS regions.

While the fallback wrappers aren't pure wrappers, they are trivial nonetheless, and the function they wrap should determine the final inlining policy.

For x86 tinyconfig we observe:
 - vmlinux baseline:   1315988
 - vmlinux with patch: 1315928 (-60 bytes)

[ tglx: Cherry-picked from KCSAN ]

Suggested-by: Mark Rutland <[email protected]>
Signed-off-by: Marco Elver <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
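The change amounts to strengthening the attribute on the trivial wrappers. A hedged sketch follows; the my_ name is hypothetical, and the wrapper shown is just a typical one-line fallback.

    #include <linux/atomic.h>
    #include <linux/compiler.h>

    /* Previously such wrappers were plain "static inline", which -Os builds
     * could leave out of line.  With __always_inline the wrapper itself
     * always disappears and the wrapped operation alone determines the
     * final inlining policy. */
    static __always_inline int my_atomic_inc_return(atomic_t *v)
    {
            return atomic_add_return(1, v);
    }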
2019-02-13 | locking/atomics: Check atomic headers with sha1sum | Mark Rutland | 1 | -0/+1
We currently check the atomic headers at build-time to ensure they haven't been modified directly, and these checks require regenerating the headers in full. As this takes a few seconds, even when parallelized, this is too slow to run for every kernel build.

Instead, we can generate a hash of each header as we generate them, which we can cheaply check at build time (~0.16s for all headers).

This patch does so, updating headers with their hashes using the new gen-atomics.sh script. As some users apparently build the kernel without coreutils, lacking sha1sum, the checks are skipped in this case. Presumably, most developers have a working coreutils installation.

Signed-off-by: Mark Rutland <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2018-11-01 | locking/atomics: Switch to generated fallbacks | Mark Rutland | 1 | -0/+2294
As a step to ensuring the atomic* APIs are consistent, switch to fallbacks generated by gen-atomic-fallback.sh. These are checked in rather than generated with Kbuild, since:

 * This allows inspection of the atomics with git grep and ctags on a pristine tree, which Linus strongly prefers being able to do.

 * The fallbacks are not affected by machine details or configuration options, so it is not necessary to regenerate them to take these into account.

 * These are included by files required *very* early in the build process (e.g. for generating bounds.h), and we'd rather not complicate the top-level Kbuild file with dependencies.

The new fallback header should be equivalent to the old fallbacks in <linux/atomic.h>, but:

 * It is formatted a little differently due to scripting ensuring things are more regular than they used to be.

 * Fallbacks are now expanded in-place as static inline functions rather than macros.

 * The prototypes for fallbacks are arranged consistently with the return type on a separate line, to try to keep to a sensible line length.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Boqun Feng <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
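A hedged sketch of the generated style described in the entry above: an in-place static inline fallback with the return type on its own line. The my_ name is hypothetical; the real definitions are emitted by gen-atomic-fallback.sh.

    #include <linux/atomic.h>

    /* Sketch: a relaxed variant falling back to the fully ordered operation,
     * written as a static inline function rather than a macro. */
    static inline int
    my_atomic_fetch_add_relaxed(int i, atomic_t *v)
    {
            return atomic_fetch_add(i, v);
    }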