| Age | Commit message | Author | Files | Lines |
|---|---|---|---|---|
| 2023-03-28 | atomics: Provide atomic_add_negative() variants | Thomas Gleixner | 1 | -1/+37 |
| | atomic_add_negative() does not provide the relaxed/acquire/release variants. Provide them in preparation for a new scalable reference count algorithm. (A sketch of the fallback pattern for these variants follows the table.) Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] | | | |
| 2021-07-16 | locking/atomic: add arch_atomic_long*() | Mark Rutland | 1 | -329/+329 |
| | Now that all architectures provide arch_{atomic,atomic64}_*(), we can build arch_atomic_long_*() atop these, which can be safely used in noinstr code. The regular atomic_long_*() wrappers are built atop these, as we do for {atomic,atomic64}_*() atop arch_{atomic,atomic64}_*(). (A sketch of the wrapper pattern follows the table.) We don't provide arch_* versions of the cond_read*() variants, as we don't have arch_* versions of the underlying atomic/atomic64 functions (nor the smp_cond_load*() helpers these are typically based on). Note that the headers in this patch under include/linux/atomic/ are generated by the scripts in scripts/atomic/. Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected] | | | |
| 2021-07-16 | locking/atomic: centralize generated headers | Mark Rutland | 1 | -0/+1014 |
| | The generated atomic headers are only intended to be included directly by <linux/atomic.h>, but are spread across include/linux/ and include/asm-generic/, where people may be encouraged to include them. This patch centralizes them under include/linux/atomic/. (A sketch of the resulting include layout follows the table.) Other than the header guards and hashes, there is no change to any of the generated headers as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected] | | | |
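
For the atomic_add_negative() commit, the fallback pattern can be illustrated briefly. The sketch below is not the generated code itself (the real headers are emitted by scripts/atomic/); it assumes an architecture that implements only the arch_atomic_add_return*() orderings, from which each atomic_add_negative() flavour can be derived:

```c
/*
 * Illustrative sketch only: the real fallbacks are generated by
 * scripts/atomic/. Each ordering of atomic_add_negative() is derived
 * here from the matching arch_atomic_add_return() flavour.
 */
static __always_inline bool
arch_atomic_add_negative_relaxed(int i, atomic_t *v)
{
        /* No ordering guarantees beyond the atomicity of the RMW itself. */
        return arch_atomic_add_return_relaxed(i, v) < 0;
}

static __always_inline bool
arch_atomic_add_negative_acquire(int i, atomic_t *v)
{
        /* Later memory accesses cannot be hoisted before the RMW. */
        return arch_atomic_add_return_acquire(i, v) < 0;
}

static __always_inline bool
arch_atomic_add_negative_release(int i, atomic_t *v)
{
        /* Earlier memory accesses cannot be sunk past the RMW. */
        return arch_atomic_add_return_release(i, v) < 0;
}
```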
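The arch_atomic_long_*() commit layers long-sized wrappers over the arch_{atomic,atomic64}_*() primitives. The following is a hand-written illustration, not the generated include/linux/atomic/atomic-long.h; it assumes atomic_long_t maps onto atomic64_t on 64-bit kernels and atomic_t on 32-bit ones, with each operation forwarding to the matching arch_* primitive:

```c
/*
 * Illustrative sketch only: the real atomic-long header is generated
 * by scripts/atomic/. atomic_long_t is the machine-word-sized atomic,
 * so it forwards to atomic64 on 64-bit and atomic on 32-bit.
 */
#ifdef CONFIG_64BIT
typedef atomic64_t atomic_long_t;

static __always_inline long
arch_atomic_long_read(const atomic_long_t *v)
{
        return arch_atomic64_read(v);
}

static __always_inline long
arch_atomic_long_add_return(long i, atomic_long_t *v)
{
        return arch_atomic64_add_return(i, v);
}
#else
typedef atomic_t atomic_long_t;

static __always_inline long
arch_atomic_long_read(const atomic_long_t *v)
{
        return arch_atomic_read(v);
}

static __always_inline long
arch_atomic_long_add_return(long i, atomic_long_t *v)
{
        return arch_atomic_add_return(i, v);
}
#endif
```

Because these wrappers add no instrumentation of their own, they remain safe to use from noinstr code; per the commit message, the instrumented atomic_long_*() forms are layered on top of them.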
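For the header centralization commit, a sketch of the resulting layout may help. The include order shown is illustrative, but the file names match the generated headers the commit moves under include/linux/atomic/; <linux/atomic.h> remains the only header intended to include them directly:

```c
/*
 * Illustrative sketch of <linux/atomic.h> after centralization:
 * everything generated by scripts/atomic/ now lives under
 * include/linux/atomic/ rather than include/asm-generic/.
 */
#include <asm/atomic.h>                         /* arch-provided primitives   */
#include <linux/atomic/atomic-arch-fallback.h>  /* generated arch_* fallbacks */
#include <linux/atomic/atomic-long.h>           /* generated arch_atomic_long_*() */
#include <linux/atomic/atomic-instrumented.h>   /* generated instrumented wrappers */
```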