author | Alexander Shishkin <[email protected]> | 2017-08-29 17:01:03 +0300 |
---|---|---|
committer | Ingo Molnar <[email protected]> | 2017-09-29 13:28:30 +0200 |
commit | 5bce9db1894c998c5b85a34036d679ea6517668f (patch) | |
tree | 98d531a881d66e722e2f275a43f4fc3681d21630 | |
parent | 4c4de7d3c8383e3bf122cd84c61e7523df02b1ae (diff) | |
perf/core: Explain perf_sched_mutex
To clarify why atomic_inc_return(&perf_sched_events) is not sufficient and
a mutex is needed to order static branch enabling vs the atomic counter
increment, this adds a comment with a short explanation.
Signed-off-by: Alexander Shishkin <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
-rw-r--r-- | kernel/events/core.c | 5 |
1 file changed, 5 insertions, 0 deletions
```diff
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 6bc21e202ae4..5ee62714f9a6 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -9394,6 +9394,11 @@ static void account_event(struct perf_event *event)
 		inc = true;
 
 	if (inc) {
+		/*
+		 * We need the mutex here because static_branch_enable()
+		 * must complete *before* the perf_sched_count increment
+		 * becomes visible.
+		 */
 		if (atomic_inc_not_zero(&perf_sched_count))
 			goto enabled;
 
```
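For context: the hunk only adds the comment, while the code it refers to (the perf_sched_mutex slow path that follows the atomic_inc_not_zero() fast path) is outside the visible context lines. The sketch below is a simplified, hypothetical reconstruction of that pattern, not the verbatim kernel/events/core.c code; the identifiers perf_sched_mutex, perf_sched_count and perf_sched_events come from the commit, but the helper name and the exact body are illustrative only. It shows why a bare atomic_inc_return(&perf_sched_count) would not be enough: a second caller could observe a non-zero count and take the fast path while the static branch is still being patched in.

```c
/*
 * Hypothetical sketch of the enable-then-count pattern the new comment
 * documents. Not the verbatim upstream code; the function name is made up.
 */
#include <linux/atomic.h>
#include <linux/jump_label.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(perf_sched_mutex);
static atomic_t perf_sched_count = ATOMIC_INIT(0);
static DEFINE_STATIC_KEY_FALSE(perf_sched_events);

static void perf_sched_account_inc(void)
{
	/* Fast path: the key is already enabled, just take another reference. */
	if (atomic_inc_not_zero(&perf_sched_count))
		return;

	/*
	 * Slow path: the mutex guarantees that static_branch_enable() has
	 * fully completed before the first increment of perf_sched_count
	 * can be observed through the fast path above.  With a bare
	 * atomic_inc_return(), another task could see count > 0 and carry
	 * on while the static branch is still being enabled.
	 */
	mutex_lock(&perf_sched_mutex);
	if (!atomic_read(&perf_sched_count))
		static_branch_enable(&perf_sched_events);
	atomic_inc(&perf_sched_count);
	mutex_unlock(&perf_sched_mutex);
}
```

The real account_event() slow path additionally issues a full memory barrier after enabling the key to pair with the teardown side; the point of this patch's comment is narrower: the mutex, not the atomic itself, is what orders the static branch enable against the counter increment for concurrent callers.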