path: root/scripts/atomic/fallbacks/fetch_add_unless
author    Mark Rutland <mark.rutland@arm.com>    2023-06-05 08:00:59 +0100
committer Peter Zijlstra <peterz@infradead.org>  2023-06-05 09:57:13 +0200
commit    14d72d4b6f0e88b5f683c1a5b7a876a55055852d (patch)
tree      3127dfca6f49e32b40c3a596b2deddd09112970a /scripts/atomic/fallbacks/fetch_add_unless
parent    dda5f312bb09e56e7a1c3e3851f2000eb2e9c879 (diff)
locking/atomic: remove fallback comments
Currently a subset of the fallback templates have kerneldoc comments, resulting in a haphazard set of generated kerneldoc comments as only some operations have fallback templates to begin with.

We'd like to generate more consistent kerneldoc comments, and to do so we'll need to restructure the way the fallback code is generated.

To minimize churn and to make it easier to restructure the fallback code, this patch removes the existing kerneldoc comments from the fallback templates. We can add new kerneldoc comments in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-3-mark.rutland@arm.com
Diffstat (limited to 'scripts/atomic/fallbacks/fetch_add_unless')
-rwxr-xr-x  scripts/atomic/fallbacks/fetch_add_unless  9
1 file changed, 0 insertions, 9 deletions
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 68ce13c8b9da..81d2834f03d2 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,13 +1,4 @@
cat << EOF
-/**
- * arch_${atomic}_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type ${atomic}_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
static __always_inline ${int}
arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
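For context, the semantics that the removed kerneldoc described ("atomically adds @a to @v, so long as @v was not already @u; returns original value of @v") can be sketched as a standalone C11 analogue. This is a hypothetical illustration using `<stdatomic.h>` and a compare-exchange loop, not the kernel's actual template output; the function name `fetch_add_unless` here is purely illustrative:

```c
#include <stdatomic.h>

/* Hypothetical standalone sketch of the fetch_add_unless semantics:
 * atomically add 'a' to '*v' unless '*v' currently equals 'u'.
 * Returns the value '*v' held before the (attempted) update. */
static int fetch_add_unless(atomic_int *v, int a, int u)
{
	int c = atomic_load(v);

	do {
		if (c == u)
			break;	/* already the excluded value: leave *v alone */
		/* on failure, atomic_compare_exchange_weak reloads c and we retry */
	} while (!atomic_compare_exchange_weak(v, &c, c + a));

	return c;	/* original value, whether or not the add happened */
}
```

Because the old value is returned, a caller can tell whether the add took effect by comparing the return value against `u`.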