| author | Namhyung Kim <[email protected]> | 2024-02-06 21:05:44 -0800 |
|---|---|---|
| committer | Ingo Molnar <[email protected]> | 2024-04-10 06:13:57 +0200 |
| commit | 0259bf63f71e2accfeca4a4e346ede8edcc86aab (patch) | |
| tree | 9af8bdbc67f072d870c7df0de5df98d654f3696b /include | |
| parent | 9794563d4d053b1b46a0cc91901f0a11d8678c19 (diff) | |
perf/core: Optimize perf_adjust_freq_unthr_context()
perf_adjust_freq_unthr_context() was unnecessarily disabling and enabling the PMU for each event; this should be done once per PMU instead. Add a pmu_ctx->nr_freq counter so the check can be made at the PMU level. As the PMU context keeps separate active lists for the pinned and flexible groups, factor out a new function to do the work for one list.

Another minor optimization: PMUs with CAP_NO_INTERRUPT can be skipped even when sampling events need to be unthrottled, since a PMU that cannot raise overflow interrupts never has throttled sampling events.
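Only the include/ side of the change is shown on this page, so the reworked tick-path loop itself is not visible here. The following is a rough sketch of the structure the message describes; the helper name perf_adjust_freq_unthr_events(), the unthrottle parameter, and the exact order of the checks are assumptions based on the description, not taken from the patch.

```c
/*
 * Sketch only (kernel/events/core.c context assumed): disable/enable each
 * PMU once per PMU context rather than once per event, and skip PMU
 * contexts that cannot need any work.
 */
static void
perf_adjust_freq_unthr_context(struct perf_event_context *ctx, bool unthrottle)
{
	struct perf_event_pmu_context *pmu_ctx;

	list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) {
		/* No freq events here and no unthrottling needed: nothing to do. */
		if (!(pmu_ctx->nr_freq || unthrottle))
			continue;
		/* Skip PMU contexts with no active events at all. */
		if (!perf_pmu_ctx_is_active(pmu_ctx))
			continue;
		/* A PMU without overflow interrupts has no sampling events to adjust. */
		if (pmu_ctx->pmu->capabilities & PERF_PMU_CAP_NO_INTERRUPT)
			continue;

		/* One disable/enable pair per PMU instead of one per event. */
		perf_pmu_disable(pmu_ctx->pmu);
		perf_adjust_freq_unthr_events(&pmu_ctx->pinned_active);
		perf_adjust_freq_unthr_events(&pmu_ctx->flexible_active);
		perf_pmu_enable(pmu_ctx->pmu);
	}
}
```

The point of the restructuring is that the potentially expensive PMU disable/enable writes move out of the per-event loop and happen at most once per PMU context on each timer tick.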
Signed-off-by: Namhyung Kim <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Tested-by: Mingwei Zhang <[email protected]>
Reviewed-by: Ian Rogers <[email protected]>
Reviewed-by: Kan Liang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Diffstat (limited to 'include')
| -rw-r--r-- | include/linux/perf_event.h | 6 |
1 file changed, 6 insertions, 0 deletions
```diff
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index d2a15c0c6f8a..3e33b366347a 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -883,6 +883,7 @@ struct perf_event_pmu_context {
 
 	unsigned int			nr_events;
 	unsigned int			nr_cgroups;
+	unsigned int			nr_freq;
 
 	atomic_t			refcount; /* event <-> epc */
 	struct rcu_head			rcu_head;
@@ -897,6 +898,11 @@ struct perf_event_pmu_context {
 	int				rotate_necessary;
 };
 
+static inline bool perf_pmu_ctx_is_active(struct perf_event_pmu_context *epc)
+{
+	return !list_empty(&epc->flexible_active) || !list_empty(&epc->pinned_active);
+}
+
 struct perf_event_groups {
 	struct rb_root	tree;
 	u64		index;
```
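The new counter is only useful if it is kept in sync wherever frequency events are scheduled in and out; those hunks live in kernel/events/core.c and are outside this diffstat. A minimal sketch, assuming the update sites simply mirror the existing context-wide ctx->nr_freq accounting:

```c
/*
 * Hypothetical sched-in path, mirroring the existing ctx->nr_freq
 * accounting; the matching sched-out path would decrement both counters.
 */
if (event->attr.freq && event->attr.sample_freq) {
	ctx->nr_freq++;
	event->pmu_ctx->nr_freq++;	/* new per-PMU-context counter */
}
```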