2021-06-22  locking/lockdep: Avoid to find wrong lock dep path in check_irq_usage()  (Boqun Feng; 1 file, -1/+11)
In step #3 of check_irq_usage(), we search backwards to find a lock whose usage conflicts with the usage of @target_entry1 on safe/unsafe. However, we should only take the irq-unsafe usage of @target_entry1 into consideration, because it could be a case where a lock is hardirq-unsafe but softirq-safe: check_irq_usage() finds it because its hardirq-unsafe usage could result in a hardirq-safe -> hardirq-unsafe deadlock, but since we currently don't filter out the other usage bits, we may find a lock dependency path softirq-unsafe -> softirq-safe, which in fact doesn't cause a deadlock. This can produce misleading lockdep splats. Fix this by only keeping the LOCKF_ENABLED_IRQ_ALL bits when we do the backwards search. Reported-by: Johannes Berg <[email protected]> Signed-off-by: Boqun Feng <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-22  locking/lockdep: Remove the unnecessary trace saving  (Boqun Feng; 1 file, -3/+0)
In print_bad_irq_dependency(), save_trace() is called to set ->trace for @prev_root to the current call trace. However, @prev_root corresponds to the held lock, which may not have been acquired in the current call trace, so it is wrong to use save_trace() to set ->trace of @prev_root. Moreover, with our adjustment of printing the backwards dependency path, the ->trace of @prev_root is unnecessary, so remove it. Reported-by: Johannes Berg <[email protected]> Signed-off-by: Boqun Feng <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-22  locking/lockdep: Fix the dep path printing for backwards BFS  (Boqun Feng; 1 file, -2/+106)
We use the same code to print the backwards lock dependency path as the forwards one, and this can result in incorrect printing, because for a backwards lock_list the ->trace is not the call trace where the lock of ->class was acquired. Fix this by introducing a separate function for printing the backwards dependency path. Also add a few comments about the printing while we are at it. Reported-by: Johannes Berg <[email protected]> Signed-off-by: Boqun Feng <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-22  selftests: futex: Add futex compare requeue test  (André Almeida; 4 files, -1/+142)
Add testing for futex_cmp_requeue(). The first test just requeues from one waiter to another and wakes it. The second performs both wake and requeue, and checks the return values to see whether the operation woke/requeued the expected number of waiters. Signed-off-by: André Almeida <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Davidlohr Bueso <[email protected]> Link: https://lore.kernel.org/r/[email protected]
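Illustration (not from the commit): a minimal userspace sketch of the futex_cmp_requeue() call convention the test exercises. This is not the selftest code itself; the raw syscall wrapper is an assumption for brevity, and with no other threads queued both calls simply return 0.

    #include <linux/futex.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Raw wrapper: val2 (nr_requeue) travels in the timeout argument slot. */
    static long futex(uint32_t *uaddr, int op, uint32_t val, uint32_t val2,
                      uint32_t *uaddr2, uint32_t val3)
    {
        return syscall(SYS_futex, uaddr, op, val, val2, uaddr2, val3);
    }

    int main(void)
    {
        static uint32_t f1, f2;

        /* Wake up to 1 waiter on f1 and requeue up to 1 more onto f2,
         * but only if f1 still contains the expected value 0. */
        long n = futex(&f1, FUTEX_CMP_REQUEUE_PRIVATE, 1, 1, &f2, 0);
        printf("woken+requeued from f1: %ld\n", n);

        /* Wake anything that was requeued onto f2. */
        n = futex(&f2, FUTEX_WAKE_PRIVATE, 1, 0, NULL, 0);
        printf("woken on f2: %ld\n", n);
        return 0;
    }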
2021-06-22  selftests: futex: Add futex wait test  (André Almeida; 4 files, -1/+177)
There are three different strategies to uniquely identify a futex in the kernel:
 - Private futexes: use the pointer to the mm_struct and the page address.
 - Shared futexes, when the page containing the address is a PageAnon page: use the same data as private futexes.
 - Shared futexes, when the page is not a PageAnon page: use an inode sequence number from struct inode and the page's index.
Create a selftest to check those three paths and the basic wait/wake mechanism. Signed-off-by: André Almeida <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Davidlohr Bueso <[email protected]> Link: https://lore.kernel.org/r/[email protected]
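Illustration (not from the commit): a minimal sketch of the private-futex wait/wake path the selftest covers. This is not the selftest itself; it uses a crude sleep() to let the waiter block, and should be compiled with -pthread.

    #include <linux/futex.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static uint32_t futex_word;    /* private futex: identified by mm + address */

    static long futex(uint32_t *uaddr, int op, uint32_t val)
    {
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
    }

    static void *waiter(void *arg)
    {
        (void)arg;
        /* Blocks only while futex_word still reads 0, so a wake-up that
         * races with this call cannot be lost. */
        futex(&futex_word, FUTEX_WAIT_PRIVATE, 0);
        printf("waiter woke, futex_word=%u\n", futex_word);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, waiter, NULL);
        sleep(1);                               /* let the waiter block */

        __atomic_store_n(&futex_word, 1, __ATOMIC_SEQ_CST);
        futex(&futex_word, FUTEX_WAKE_PRIVATE, 1);  /* wake one waiter */

        pthread_join(t, NULL);
        return 0;
    }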
2021-06-08  seqlock: Remove trailing semicolon in macros  (Huilong Deng; 1 file, -3/+3)
Macros should not use a trailing semicolon. Signed-off-by: Huilong Deng <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
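Illustration (not from the commit): a standalone, non-kernel example of why a trailing semicolon in a macro body is harmful — the stray ';' forms an empty statement that detaches a following else.

    #include <stdio.h>

    #define LOG_BAD(msg)  do { printf("%s\n", msg); } while (0);  /* trailing ';' */
    #define LOG_GOOD(msg) do { printf("%s\n", msg); } while (0)

    int main(void)
    {
        int x = 1;

        /* Swapping LOG_GOOD for LOG_BAD here fails to compile: the macro's
         * own ';' ends the if statement, so the 'else' below no longer has
         * an 'if' to bind to. */
        if (x)
            LOG_GOOD("x is set");
        else
            LOG_GOOD("x is clear");

        return 0;
    }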
2021-05-31  locking/lockdep: Reduce LOCKDEP dependency list  (Randy Dunlap; 1 file, -1/+0)
Some arches (um, sparc64, riscv, xtensa) cause a Kconfig warning for LOCKDEP. These arches select LOCKDEP_SUPPORT but are not listed as one of the arches that LOCKDEP depends on. Since all (16) arches that intend to have LOCKDEP support define the Kconfig symbol LOCKDEP_SUPPORT, replace the awkward list of arches that LOCKDEP depends on with the LOCKDEP_SUPPORT symbol. But wait: LOCKDEP_SUPPORT is included in LOCK_DEBUGGING_SUPPORT, which is already a dependency here, so LOCKDEP_SUPPORT is redundant and not needed. That leaves the FRAME_POINTER dependency, but it is part of an expression like this: depends on (A && B) && (FRAME_POINTER || B') where B' is a dependency of B, so if B is true then B' is true and the value of FRAME_POINTER does not matter. Thus we can also delete the FRAME_POINTER dependency. Fixes this kconfig warning (for um, sparc64, riscv, xtensa): WARNING: unmet direct dependencies detected for LOCKDEP Depends on [n]: DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y] && (FRAME_POINTER [=n] || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86) Selected by [y]: - PROVE_LOCKING [=y] && DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y] - LOCK_STAT [=y] && DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y] - DEBUG_LOCK_ALLOC [=y] && DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y] Fixes: 7d37cb2c912d ("lib: fix kconfig dependency on ARCH_WANT_FRAME_POINTERS") Signed-off-by: Randy Dunlap <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Waiman Long <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2021-05-31  locking/lockdep,doc: Improve readability of the block matrix  (Xiongwei Song; 1 file, -2/+2)
The block condition matrix uses 'E' as the writer notation, while the writer reminder below the matrix uses 'W'. To make them consistent and the matrix more readable, use 'W' to represent a writer. Suggested-by: Waiman Long <[email protected]> Suggested-by: Boqun Feng <[email protected]> Signed-off-by: Xiongwei Song <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Boqun Feng <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2021-05-26  locking/atomics: atomic-instrumented: simplify ifdeffery  (Mark Rutland; 2 files, -546/+3)
Now that all architectures implement ARCH_ATOMIC, the fallbacks are generated before the instrumented wrappers are generated. Due to this, in atomic-instrumented.h we can assume that the whole set of atomic functions has been generated. Likewise, atomic-instrumented.h doesn't need to provide a preprocessor definition for every atomic it wraps. This patch removes the redundant ifdeffery. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
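Illustration (not from the commit): roughly the shape of a wrapper in asm-generic/atomic-instrumented.h, simplified from the generated header (the real file is produced by the scripts/atomic/ generators) — the instrumentation hook runs first, then the arch_ implementation does the actual work.

    static __always_inline int
    atomic_add_return(int i, atomic_t *v)
    {
        instrument_atomic_read_write(v, sizeof(*v));
        return arch_atomic_add_return(i, v);
    }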
2021-05-26  locking/atomic: delete !ARCH_ATOMIC remnants  (Mark Rutland; 31 files, -2718/+3)
Now that all architectures implement ARCH_ATOMIC, we can make it mandatory, removing the Kconfig symbol and logic for !ARCH_ATOMIC. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: xtensa: move to ARCH_ATOMIC  (Mark Rutland; 3 files, -18/+19)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates xtensa to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Max Filippov <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: sparc: move to ARCH_ATOMIC  (Mark Rutland; 7 files, -80/+81)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates sparc to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: sh: move to ARCH_ATOMIC  (Mark Rutland; 6 files, -15/+16)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates sh to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rich Felker <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: riscv: move to ARCH_ATOMIC  (Mark Rutland; 3 files, -81/+82)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates riscv to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Palmer Dabbelt <[email protected]> Acked-by: Palmer Dabbelt <[email protected]> Cc: Albert Ou <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: powerpc: move to ARCH_ATOMIC  (Mark Rutland; 4 files, -83/+90)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates powerpc to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. While atomic_try_cmpxchg_lock() is not part of the common atomic API, it is given an `arch_` prefix for consistency. Signed-off-by: Mark Rutland <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: parisc: move to ARCH_ATOMIC  (Mark Rutland; 3 files, -23/+24)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates parisc to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Helge Deller <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: openrisc: move to ARCH_ATOMIC  (Mark Rutland; 3 files, -21/+26)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates openrisc to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Stafford Horne <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stefan Kristiansson <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: nios2: move to ARCH_ATOMIC  (Mark Rutland; 1 file, -0/+1)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates nios2 to ARCH_ATOMIC, using the asm-generic implementations. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: nds32: move to ARCH_ATOMIC  (Mark Rutland; 1 file, -0/+1)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates nds32 to ARCH_ATOMIC, using the asm-generic implementations. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Nick Hu <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: mips: move to ARCH_ATOMIC  (Mark Rutland; 4 files, -39/+43)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates mips to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: microblaze: move to ARCH_ATOMIC  (Mark Rutland; 1 file, -0/+1)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates microblaze to ARCH_ATOMIC, using the asm-generic implementations. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Michal Simek <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: m68k: move to ARCH_ATOMIC  (Mark Rutland; 4 files, -36/+37)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates m68k to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. While atomic_dec_and_test_lt() is not part of the common atomic API, it is also given an `arch_` prefix for consistency. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Geert Uytterhoeven <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Greg Ungerer <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: ia64: move to ARCH_ATOMIC  (Mark Rutland; 4 files, -40/+61)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates ia64 to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: hexagon: move to ARCH_ATOMIC  (Mark Rutland; 3 files, -16/+17)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates hexagon to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Brian Cain <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: h8300: move to ARCH_ATOMIC  (Mark Rutland; 1 file, -0/+1)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates h8300 to ARCH_ATOMIC, using the asm-generic implementations. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: csky: move to ARCH_ATOMIC  (Mark Rutland; 2 files, -4/+5)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates csky to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Guo Ren <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: arm: move to ARCH_ATOMIC  (Mark Rutland; 4 files, -57/+58)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates arm to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: arc: move to ARCH_ATOMIC  (Mark Rutland; 3 files, -35/+36)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates arc to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Vineet Gupta <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: alpha: move to ARCH_ATOMIC  (Mark Rutland; 3 files, -47/+54)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates alpha to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Note: xchg_local() is NOT currently part of the generic atomic arch_atomic API, and is not instrumented. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: cmpxchg: support ARCH_ATOMIC  (Mark Rutland; 1 file, -17/+44)
We'd like all architectures to convert to ARCH_ATOMIC, as this will enable functionality, and once all architectures are converted it will be possible to make significant cleanups to the atomic headers. A number of architectures use asm-generic/cmpxchg.h or asm-generic/cmpxchg-local.h, and it's impractical to convert the headers and all these architectures in one go. To make it possible to convert them one-by-one, let's make the asm-generic implementation function as either cmpxchg*() or arch_cmpxchg*() depending on whether ARCH_ATOMIC is selected. To do this, the generic implementations are prefixed as generic_cmpxchg_*(), and preprocessor definitions map cmpxchg_*()/arch_cmpxchg_*() onto these as appropriate. Once all users are moved over to ARCH_ATOMIC the ifdeffery in the header can be simplified and/or removed entirely. For existing users (none of which select ARCH_ATOMIC), there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
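Illustration (not from the commit): a simplified sketch of the mapping described above; the exact macro spellings in asm-generic/cmpxchg.h and cmpxchg-local.h may differ slightly from what is shown here.

    /* The implementation always exists under a generic_ name ... */
    #define generic_cmpxchg_local(ptr, o, n)                                   \
        ((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),\
                                                     (unsigned long)(n),       \
                                                     sizeof(*(ptr))))

    /* ... and is exposed as arch_cmpxchg*() or cmpxchg*() depending on
     * whether the architecture has been converted to ARCH_ATOMIC. */
    #ifdef CONFIG_ARCH_ATOMIC
    #define arch_cmpxchg_local  generic_cmpxchg_local
    #else
    #define cmpxchg_local       generic_cmpxchg_local
    #endif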
2021-05-26  locking/atomic: cmpxchg: make `generic` a prefix  (Mark Rutland; 10 files, -18/+18)
The asm-generic implementations of cmpxchg_local() and cmpxchg64_local() use a `_generic` suffix to distinguish themselves from arch code or wrappers used elsewhere. Subsequent patches will add ARCH_ATOMIC support to these implementations, and will distinguish more functions with a `generic` portion. To align with how ARCH_ATOMIC uses an `arch_` prefix, it would be helpful to use a `generic_` prefix rather than a `_generic` suffix. In preparation for this, this patch renames the existing functions to make `generic` a prefix rather than a suffix. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: atomic64: support ARCH_ATOMIC  (Mark Rutland; 2 files, -31/+79)
We'd like all architectures to convert to ARCH_ATOMIC, as this will enable functionality, and once all architectures are converted it will be possible to make significant cleanups to the atomic headers. A number of architectures use asm-generic/atomic64.h, and it's impractical to convert the header and all these architectures in one go. To make it possible to convert them one-by-one, let's make the asm-generic implementation function as either atomic64_*() or arch_atomic64_*() depending on whether ARCH_ATOMIC is selected. To do this, the generic implementations are prefixed as generic_atomic64_*(), and preprocessor definitions map atomic64_*()/arch_atomic64_*() onto these as appropriate. Once all users are moved over to ARCH_ATOMIC the ifdeffery in the header can be simplified and/or removed entirely. For existing users (none of which select ARCH_ATOMIC), there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: atomic: support ARCH_ATOMIC  (Mark Rutland; 1 file, -9/+62)
We'd like all architectures to convert to ARCH_ATOMIC, as this will enable functionality, and once all architectures are converted it will be possible to make significant cleanups to the atomic headers. A number of architectures use asm-generic/atomic.h, and it's impractical to convert the header and all these architectures in one go. To make it possible to convert them one-by-one, let's make the asm-generic implementation function as either atomic_*() or arch_atomic_*() depending on whether ARCH_ATOMIC is selected. To do this, the C implementations are prefixed as generic_atomic_*(), and preprocessor definitions map atomic_*()/arch_atomic_*() onto these as appropriate. Once all users are moved over to ARCH_ATOMIC the ifdeffery in the header can be simplified and/or removed entirely. For existing users (none of which select ARCH_ATOMIC), there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: atomic: simplify ifdeffery  (Mark Rutland; 1 file, -42/+4)
Now that asm-generic/atomic.h is only used by architectures without any architecture-specific atomic definitions, we know that there will be no architecture-specific implementations to override, and can remove the ifdeffery this has previously required, bringing it into line with asm-generic/atomic64.h. At the same time, we can implement atomic_add() and atomic_sub() directly using ATOMIC_OP(), since we know architectures won't provide atomic_add_return() or atomic_sub_return(). There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
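Illustration (not from the commit): a simplified sketch of the ATOMIC_OP() template in asm-generic/atomic.h (SMP flavour) that atomic_add()/atomic_sub() can now be generated from directly, since no architecture overrides the corresponding *_return() variants.

    #define ATOMIC_OP(op, c_op)                                             \
    static inline void atomic_##op(int i, atomic_t *v)                     \
    {                                                                       \
        int c, old;                                                         \
                                                                            \
        c = v->counter;                                                     \
        while ((old = cmpxchg(&v->counter, c, c c_op i)) != c)              \
            c = old;                                                        \
    }

    ATOMIC_OP(add, +)   /* expands to atomic_add() */
    ATOMIC_OP(sub, -)   /* expands to atomic_sub() */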
2021-05-26  locking/atomic: atomic: remove redundant include  (Mark Rutland; 1 file, -2/+0)
Since commit: 560cb12a4080a48b ("locking,arch: Rewrite generic atomic support") ... we conditionally include <linux/irqflags.h> before defining atomics using locking, and hence do not need to do so unconditionally later in the header. This patch removes the redundant include. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: atomic: remove stale comments  (Mark Rutland; 1 file, -37/+2)
The commentary in asm-generic/atomic.h is stale; let's bring it up to date:
 * The block comment at the start of the file mentions this is only usable on UP systems, but is immediately followed by an SMP implementation using cmpxchg. Let's delete the misleading statement.
 * A comment near the end of the file was originally at the top of the file, but over time rework has shuffled it near the end, and it has long been superseded by the block comment at the top of the file. Let's remove it.
 * Since asm-generic/atomic.h isn't the canonical documentation for the atomic ops, and since the existing comments are not in kerneldoc format, we don't need to document the semantics of each operation here (and this would be better done in a centralised document). Let's remove these comments.
Signed-off-by: Mark Rutland <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: openrisc: avoid asm-generic/atomic.h  (Mark Rutland; 1 file, -1/+7)
OpenRISC is the only architecture which uses asm-generic/atomic.h and also provides its own implementation of some functions, requiring ifdeffery in the asm-generic header. As OpenRISC provides the vast majority of functions itself, it would be simpler overall if it also provided the few functions it cribs from asm-generic. This patch decouples OpenRISC from asm-generic/atomic.h. Subsequent patches will simplify the asm-generic implementation and remove the now unnecessary ifdeffery. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Stafford Horne <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stefan Kristiansson <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: microblaze: use asm-generic exclusively  (Mark Rutland; 3 files, -37/+1)
Microblaze provides its own implementation of atomic_dec_if_positive(), but nothing else. For a while now, the conditional inc/dec ops have been optional, and the core code will provide generic implementations using the code templates in scripts/atomic/fallbacks/. For simplicity, and for consistency with the other conditional atomic ops, let's drop the microblaze implementation of atomic_dec_if_positive(), and use the generic implementation. With that, we can also drop the local asm/atomic.h and asm/cmpxchg.h headers, as asm-generic/atomic.h is mandatory-y, and we can pull in asm-generic/cmpxchg.h via generic-y. This matches what nios2 and nds32 do today. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Michal Simek <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
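Illustration (not from the commit): the generic fallback that replaces the microblaze implementation looks roughly like this, adapted (and simplified) from the scripts/atomic/fallbacks/dec_if_positive template.

    static __always_inline int
    atomic_dec_if_positive(atomic_t *v)
    {
        int dec, c = atomic_read(v);

        do {
            dec = c - 1;
            if (unlikely(dec < 0))      /* already zero or negative: do nothing */
                break;
        } while (!atomic_try_cmpxchg(v, &c, dec));

        return dec;                     /* new value, or the would-be negative one */
    }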
2021-05-26  locking/atomic: h8300: use asm-generic exclusively  (Mark Rutland; 3 files, -163/+1)
As h8300's implementation of the atomics isn't using any arch-specific functionality, and its implementation of cmpxchg only uses assembly to non-atomically swap two elements in memory, we may as well use the asm-generic atomic.h and cmpxchg.h, and avoid the duplicate code. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: net: use linux/atomic.h for xchg & cmpxchg  (Mark Rutland; 2 files, -2/+2)
As xchg*() and cmpxchg*() may be instrumented by atomic-instrumented.h, it's necessary to include <linux/atomic.h> to use these, rather than <asm/cmpxchg.h>, which is effectively an arch-internal header. In a couple of places we include <asm/cmpxchg.h>, but get away with this as <linux/atomic.h> gets pulled in indirectly by another include. Before we convert more architectures to use atomic-instrumented.h, let's fix these up to use <linux/atomic.h> so that we don't make things more fragile. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-26  locking/atomic: make ARCH_ATOMIC a Kconfig symbol  (Mark Rutland; 9 files, -7/+8)
Subsequent patches will move architectures over to the ARCH_ATOMIC API, after preparing the asm-generic atomic implementations to function with or without ARCH_ATOMIC. As some architectures use the asm-generic implementations exclusively (and don't have a local atomic.h), and to avoid the risk that ARCH_ATOMIC isn't defined in some cases we expect, let's make the ARCH_ATOMIC macro a Kconfig symbol instead, so that we can guarantee it is consistently available where needed. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-25  futex: Deduplicate cond_resched() invocation in futex_wake_op()  (Pavel Begunkov; 1 file, -5/+2)
After pagefaulting in futex_wake_op(), both branches do cond_resched() before retrying. Deduplicate it, as compilers cannot figure this out themselves. Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Davidlohr Bueso <[email protected]> Link: https://lore.kernel.org/r/9b2588c1fd33c91fb01c4e348a3b647ab2c8baab.1621258128.git.asml.silence@gmail.com
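Illustration (not from the commit): a hedged before/after sketch of the change in the fault-retry path of futex_wake_op() in kernel/futex.c; surrounding context is abbreviated, so this is a fragment rather than buildable code.

    /* before:
     *      if (!(flags & FLAGS_SHARED)) {
     *              cond_resched();
     *              goto retry_private;
     *      }
     *      cond_resched();
     *      goto retry;
     *
     * after: the common cond_resched() is hoisted above the branch
     */
    cond_resched();
    if (!(flags & FLAGS_SHARED))
        goto retry_private;
    goto retry;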
2021-05-12  selftests: futex: Expand timeout test  (André Almeida; 1 file, -16/+110)
Improve futex timeout testing by checking all the operations that support a timeout and their available modes. Signed-off-by: André Almeida <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-12  selftests: futex: Correctly include headers dirs  (André Almeida; 1 file, -1/+2)
When building selftests, the build system will install uapi linux headers at usr/include in the kernel source's root directory. When building with a different output folder, the headers will be installed at kselftests/usr/include. Add both paths so we can build the tests using up-to-date headers. Currently this rarely matters, since it's rare to find a build system with an outdated futex header, but it does matter when testing new futex operations. Signed-off-by: André Almeida <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-05-12  locking: Fix comment typos  (Ingo Molnar; 2 files, -7/+7)
A few snuck through. Signed-off-by: Ingo Molnar <[email protected]>
2021-05-11  Merge tag 'for-5.13-rc1-part2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux  (Linus Torvalds; 1 file, -0/+2)
Pull btrfs fix from David Sterba: "Handle transaction start error in btrfs_fileattr_set(). This is a fix for code introduced by the new fileattr merge" * tag 'for-5.13-rc1-part2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux: btrfs: handle transaction start error in btrfs_fileattr_set
2021-05-11  btrfs: handle transaction start error in btrfs_fileattr_set  (Ritesh Harjani; 1 file, -0/+2)
Add error handling in btrfs_fileattr_set() in case starting a transaction fails. This fixes btrfs/232, which otherwise used to fail with the signature below on Power. btrfs/232 [ 1119.474650] run fstests btrfs/232 at 2021-04-21 02:21:22 <...> [ 1366.638585] BUG: Unable to handle kernel data access on read at 0xffffffffffffff86 [ 1366.638768] Faulting instruction address: 0xc0000000009a5c88 cpu 0x0: Vector: 380 (Data SLB Access) at [c000000014f177b0] pc: c0000000009a5c88: btrfs_update_root_times+0x58/0xc0 lr: c0000000009a5c84: btrfs_update_root_times+0x54/0xc0 <...> pid = 24881, comm = fsstress btrfs_update_inode+0xa0/0x140 btrfs_fileattr_set+0x5d0/0x6f0 vfs_fileattr_set+0x2a8/0x390 do_vfs_ioctl+0x1290/0x1ac0 sys_ioctl+0x6c/0x120 system_call_exception+0x3d4/0x410 system_call_common+0xec/0x278 Fixes: 97fc29775487 ("btrfs: convert to fileattr") Signed-off-by: Ritesh Harjani <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
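Illustration (not from the commit): the standard kernel pattern for such a fix, as a hedged fragment; the exact label and cleanup used in btrfs_fileattr_set() may differ from what is shown here.

    trans = btrfs_start_transaction(root, 1);
    if (IS_ERR(trans)) {
        ret = PTR_ERR(trans);   /* propagate e.g. -ENOSPC or -EROFS */
        goto out_unlock;        /* illustrative label: undo earlier setup */
    }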
2021-05-10  Merge tag 'perf-tools-fixes-for-v5.13-2021-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux  (Linus Torvalds; 26 files, -22/+260)
Pull perf tools fixes from Arnaldo Carvalho de Melo: - Fix swapping of cpu_map and stat_config records. - Fix dynamic libbpf linking. - Disallow -c and -F option at the same time in 'perf record'. - Update headers with the kernel originals. - Silence warning for JSON ArchStd files. - Fix a build error on arm64 with clang. * tag 'perf-tools-fixes-for-v5.13-2021-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: tools headers UAPI: Sync perf_event.h with the kernel sources tools headers cpufeatures: Sync with the kernel sources tools include UAPI powerpc: Sync errno.h with the kernel headers tools arch: Update arch/x86/lib/mem{cpy,set}_64.S copies used in 'perf bench mem memcpy' tools headers UAPI: Sync linux/prctl.h with the kernel sources tools headers UAPI: Sync files changed by landlock, quotactl_path and mount_settattr new syscalls perf tools: Fix a build error on arm64 with clang tools headers kvm: Sync kvm headers with the kernel sources tools headers UAPI: Sync linux/kvm.h with the kernel sources perf tools: Fix dynamic libbpf link perf session: Fix swapping of cpu_map and stat_config records perf jevents: Silence warning for ArchStd files perf record: Disallow -c and -F option at the same time tools arch x86: Sync the msr-index.h copy with the kernel sources tools headers UAPI: Sync drm/i915_drm.h with the kernel sources tools headers UAPI: Update tools's copy of drm.h headers
2021-05-10  Merge tag 'for-5.13-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux  (Linus Torvalds; 11 files, -26/+55)
Pull btrfs fixes from David Sterba: "First batch of various fixes, here's a list of notable ones: - fix unmountable seed device after fstrim - fix silent data loss in zoned mode due to ordered extent splitting - fix race leading to unpersisted data and metadata on fsync - fix deadlock when cloning inline extents and using qgroups" * tag 'for-5.13-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux: btrfs: initialize return variable in cleanup_free_space_cache_v1 btrfs: zoned: sanity check zone type btrfs: fix unmountable seed device after fstrim btrfs: fix deadlock when cloning inline extents and using qgroups btrfs: fix race leading to unpersisted data and metadata on fsync btrfs: do not consider send context as valid when trying to flush qgroups btrfs: zoned: fix silent data loss after failure splitting ordered extent
2021-05-10  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds; 25 files, -342/+542)
Pull kvm fixes from Paolo Bonzini: - Lots of bug fixes. - Fix virtualization of RDPID - Virtualization of DR6_BUS_LOCK, which on bare metal is new to this release - More nested virtualization migration fixes (nSVM and eVMCS) - Fix for KVM guest hibernation - Fix for warning in SEV-ES SRCU usage - Block KVM from loading on AMD machines with 5-level page tables, due to the APM not mentioning how host CR4.LA57 exactly impacts the guest. * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (48 commits) KVM: SVM: Move GHCB unmapping to fix RCU warning KVM: SVM: Invert user pointer casting in SEV {en,de}crypt helpers kvm: Cap halt polling at kvm->max_halt_poll_ns tools/kvm_stat: Fix documentation typo KVM: x86: Prevent deadlock against tk_core.seq KVM: x86: Cancel pvclock_gtod_work on module removal KVM: x86: Prevent KVM SVM from loading on kernels with 5-level paging KVM: X86: Expose bus lock debug exception to guest KVM: X86: Add support for the emulation of DR6_BUS_LOCK bit KVM: PPC: Book3S HV: Fix conversion to gfn-based MMU notifier callbacks KVM: x86: Hide RDTSCP and RDPID if MSR_TSC_AUX probing failed KVM: x86: Tie Intel and AMD behavior for MSR_TSC_AUX to guest CPU model KVM: x86: Move uret MSR slot management to common x86 KVM: x86: Export the number of uret MSRs to vendor modules KVM: VMX: Disable loading of TSX_CTRL MSR the more conventional way KVM: VMX: Use common x86's uret MSR list as the one true list KVM: VMX: Use flag to indicate "active" uret MSRs instead of sorting list KVM: VMX: Configure list of user return MSRs at module init KVM: x86: Add support for RDPID without RDTSCP KVM: SVM: Probe and load MSR_TSC_AUX regardless of RDTSCP support in host ...