path: root/tools/perf/util
Each entry below lists the commit date, subject, author, files changed and lines removed/added (-/+), followed by the full commit message.
2023-08-30  perf metric: Add #num_cpus_online literal  (Ian Rogers)  1 file changed, -0/+7
Returns the number of CPUs online, unlike #num_cpus that returns the number present. Add a test of the property. This will be used in future Intel metrics. Reviewed-by: Kan Liang <[email protected]> Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Eduard Zingerman <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Perry Taylor <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
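As a rough illustration of the online-vs-present distinction that the new literal exposes (this is not the perf implementation, just the standard sysconf() view of the same numbers):

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          long online  = sysconf(_SC_NPROCESSORS_ONLN);   /* what #num_cpus_online reports */
          long present = sysconf(_SC_NPROCESSORS_CONF);   /* roughly what #num_cpus reports */

          printf("online=%ld present=%ld\n", online, present);
          return 0;
  }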
2023-08-30  perf pmu: Remove str from perf_pmu_alias  (Ian Rogers)  1 file changed, -23/+10
Currently the value is only used in perf list. Compute the value just when needed to avoid unnecessary overhead. Recycle the strbuf to avoid memory allocation overhead. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-30  perf parse-events: Make common term list to strbuf helper  (Ian Rogers)  3 files changed, -41/+81
A term list is turned into a string for debug output and for the str value in the alias. Add a helper to do this based on existing code, but then fix for situations like events being identified. Use strbuf to manage the dynamic memory allocation and remove the 256 byte limit. Use in various places the string of the term list is required. Before: $ sudo perf stat -vv -e inst_retired.any true Using CPUID GenuineIntel-6-8D-1 intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch Attempting to add event pmu 'cpu' with 'inst_retired.any,' that may result in non-fatal errors After aliases, add event pmu 'cpu' with 'event,period,' that may result in non-fatal errors inst_retired.any -> cpu/inst_retired.any/ ... After: $ sudo perf stat -vv -e inst_retired.any true Using CPUID GenuineIntel-6-8D-1 intel_pt default config: tsc,mtc,mtc_period=3,psb_period=3,pt,branch Attempt to add: cpu/inst_retired.any/ ..after resolving event: cpu/event=0xc0,period=0x1e8483/ inst_retired.any -> cpu/event=0xc0,period=0x1e8483/ ... Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
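A minimal sketch of the strbuf-based pattern described above, assuming perf's util/strbuf.h API (strbuf_init/strbuf_addf/strbuf_release); the helper name and the flat term representation here are hypothetical, not the actual patch:

  #include <stdlib.h>
  #include <string.h>
  #include "util/strbuf.h"

  /* build "pmu/term1,term2,.../" without a fixed-size 256 byte buffer */
  static char *terms_to_string(const char *pmu_name, const char **terms, int nr_terms)
  {
          struct strbuf sb;
          char *ret;

          strbuf_init(&sb, /*hint=*/0);
          strbuf_addf(&sb, "%s/", pmu_name);
          for (int i = 0; i < nr_terms; i++)
                  strbuf_addf(&sb, "%s%s", i ? "," : "", terms[i]);
          strbuf_addf(&sb, "/");
          ret = strdup(sb.buf);
          strbuf_release(&sb);    /* strbuf can also be recycled instead of released */
          return ret;             /* caller frees */
  }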
2023-08-30  perf parse-events: Minor help message improvements  (Ian Rogers)  1 file changed, -2/+2
Be more specific and fix a typo. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-30  perf pmu: Avoid uninitialized use of alias->str  (Ian Rogers)  1 file changed, -0/+2
The alias is allocated with malloc, allowing uninitialized memory to be accessed. The initialization of str was moved late, after it could have been updated by a JSON event; however, this creates the potential for an uninitialized use. Fix this by assigning str to NULL early. Testing on ARM (Raspberry Pi) showed a memory leak in the same code, so add a zfree. Fixes: f63a536f03a2f64f ("perf pmu: Merge JSON events with sysfs at load time") Reported-by: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Ian Rogers <[email protected]> Acked-by: Namhyung Kim <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Sumanth Korikkar <[email protected]> Cc: Thomas Richter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-29  perf jevents: Use "default_core" for events with no Unit  (Ian Rogers)  1 file changed, -1/+6
The JSON Unit field encodes the name of the PMU to match the events to. When no name is given it has meant the "cpu" core PMU except for tests. On ARM, Intel hybrid and s390 the core PMU is named differently which means that using "cpu" for this case causes the events not to get matched to the PMU. Introduce a new "default_core" string for this case and in the pmu__name_match force all core PMUs to match this name. Fixes: 2e255b4f9f41f137 ("perf jevents: Group events by PMU") Reported-by: Arnaldo Carvalho de Melo <[email protected]> Reported-by: Thomas Richter <[email protected]> Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ian Rogers <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-29  perf tools: Convert to perf_record_header_attr_id()  (Namhyung Kim)  3 files changed, -4/+4
Instead of accessing the attr.id directly, use the perf_record_header_attr_id() helper to handle old versions. Signed-off-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-29  perf tools: Handle old data in PERF_RECORD_ATTR  (Namhyung Kim)  1 file changed, -5/+6
The PERF_RECORD_ATTR record is used in pipe mode to describe an event with its attribute and IDs. The ID table comes after the attr, and its size is calculated from the total record size and the attr size:

  n_ids = (total_record_size - end_of_the_attr_field) / sizeof(u64)

This is fine for most use cases, but sometimes the pipe output is saved to a file and processed later, and it becomes a problem if the attr size changed between record and report:

  $ perf record -o- > perf-pipe.data   # old version
  $ perf report -i- < perf-pipe.data   # new version

For example, if the attr size is 128 and there are 4 IDs, they would be saved in 168 bytes like below:

  8 byte:   perf event header { .type = PERF_RECORD_ATTR, .size = 168 },
  128 byte: perf event attr { .size = 128, ... },
  32 byte:  event IDs [] = { 1234, 1235, 1236, 1237 },

But when reporting later, the tool thinks the attr size is 136, so it only reads the last 3 entries as IDs:

  8 byte:   perf event header { .type = PERF_RECORD_ATTR, .size = 168 },
  136 byte: perf event attr { .size = 136, ... },
  24 byte:  event IDs [] = { 1235, 1236, 1237 }, // 1234 is missing

So the recorded version of the attr should be used: the attr already has a size field, so honor that size when reading the data. Fixes: 2c46dbb517a10b18 ("perf: Convert perf header attrs into attr events") Signed-off-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Tom Zanussi <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
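As an editorial illustration of the size calculation the fix implies — deriving the ID count from the recorded attr.size rather than this build's sizeof(struct perf_event_attr) — a minimal sketch (the function name is hypothetical, not the actual perf code):

  #include <linux/perf_event.h>   /* struct perf_event_header, struct perf_event_attr, __u64 */

  /* number of trailing event IDs in a PERF_RECORD_ATTR record */
  static __u64 record_attr_n_ids(const struct perf_event_header *hdr,
                                 const struct perf_event_attr *attr)
  {
          /* honor the size recorded in the attr, not sizeof(*attr) of the running tool */
          return (hdr->size - sizeof(*hdr) - attr->size) / sizeof(__u64);
  }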
2023-08-29  perf pmus: Skip duplicate PMUs and don't print list suffix by default  (Ian Rogers)  6 files changed, -11/+68
Add a PMUs scan that ignores duplicates. When there are multiple PMUs that differ only by suffix, by default just list the first one and skip all others. The scan routine checks that the PMU names match but doesn't enforce that the numbers are consecutive as for some PMUs there are gaps. If "-v" is passed to "perf list" then list all PMUs. With the previous change duplicate PMUs are no longer printed but the suffix of the first is printed. When duplicate PMUs are being skipped avoid printing the suffix. Before: $ perf list ... uncore_imc_free_running_0/data_read/ [Kernel PMU event] uncore_imc_free_running_0/data_total/ [Kernel PMU event] uncore_imc_free_running_0/data_write/ [Kernel PMU event] uncore_imc_free_running_1/data_read/ [Kernel PMU event] uncore_imc_free_running_1/data_total/ [Kernel PMU event] uncore_imc_free_running_1/data_write/ [Kernel PMU event] After: $ perf list ... uncore_imc_free_running/data_read/ [Kernel PMU event] uncore_imc_free_running/data_total/ [Kernel PMU event] uncore_imc_free_running/data_write/ [Kernel PMU event] ... $ perf list -v uncore_imc_free_running_0/data_read/ [Kernel PMU event] uncore_imc_free_running_0/data_total/ [Kernel PMU event] uncore_imc_free_running_0/data_write/ [Kernel PMU event] uncore_imc_free_running_1/data_read/ [Kernel PMU event] uncore_imc_free_running_1/data_total/ [Kernel PMU event] uncore_imc_free_running_1/data_write/ [Kernel PMU event] ... Reviewed-by: Kan Liang <[email protected]> Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-29  perf pmus: Sort pmus by name then suffix  (Ian Rogers)  1 file changed, -0/+49
Sort PMUs by name. If two PMUs have the same name but differ by suffix, sort the suffixes numerically. For example, "breakpoint" comes before "cpu", "uncore_imc_free_running_0" comes before "uncore_imc_free_running_1". Suffixes need to be treated specially as otherwise they will be ordered like 0, 1, 10, 11, .., 2, 20, 21, .., etc. Only PMUs starting with 'uncore_' are considered to have a potential suffix. Sorting of PMUs is done so that later patches can skip duplicate uncore PMUs that differ only by their suffix. Committer notes: Used the more compact, intention revealing strstarts() function we got from the kernel sources: - if (strncmp(str, "uncore_", 7)) + if (!strstarts(str, "uncore_")) Also in pmus_cmp() the lhs_num and rhs_num variables may end up not being set for non "uncore_" prefixed PMUs in pmu_name_len_no_suffix(), or at least gcc 7.5 in some distros (opensuse 15.5, to be EOLed in Dec/2024) thinks so, so initialize both to zero. Reviewed-by: Kan Liang <[email protected]> Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
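The ordering described above can be sketched roughly as follows; this is an editorial illustration of the idea, not the pmus_cmp()/pmu_name_len_no_suffix() code from the patch:

  #include <ctype.h>
  #include <stdlib.h>
  #include <string.h>

  /* length of the name without a trailing "_<number>" suffix; only uncore PMUs have one */
  static size_t name_len_no_suffix(const char *name, unsigned long *num)
  {
          size_t len = strlen(name);

          *num = 0;
          if (strncmp(name, "uncore_", 7))
                  return len;
          while (len > 1 && isdigit((unsigned char)name[len - 1]))
                  len--;
          if (len > 1 && name[len - 1] == '_') {
                  *num = strtoul(name + len, NULL, 10);
                  return len - 1;
          }
          return strlen(name);
  }

  /* compare base names lexically, then suffixes numerically (so _2 sorts before _10) */
  static int pmu_name_cmp(const char *lhs, const char *rhs)
  {
          unsigned long lhs_num, rhs_num;
          size_t lhs_len = name_len_no_suffix(lhs, &lhs_num);
          size_t rhs_len = name_len_no_suffix(rhs, &rhs_num);
          int ret = strncmp(lhs, rhs, lhs_len < rhs_len ? lhs_len : rhs_len);

          if (ret || lhs_len != rhs_len)
                  return ret ? ret : (lhs_len < rhs_len ? -1 : 1);
          return lhs_num < rhs_num ? -1 : (lhs_num > rhs_num ? 1 : 0);
  }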
2023-08-29  perf tools: Allow to use cpuinfo on LoongArch  (Yanteng Si)  2 files changed, -1/+3
Define these macros so that the CPU name can be displayed when running 'perf report' and 'perf timechart'. Committer notes: No need to have: if (strcasestr(buf, "Model Name")) { strlcpy(cpu_m, &buf[13], 255); break; } else if (strcasestr(buf, "model name")) { strlcpy(cpu_m, &buf[13], 255); break; } As the point of strcasestr() is to be case insensitive to both the haystack and the needle, so simplify the above to just: if (strcasestr(buf, "model name")) { strlcpy(cpu_m, &buf[13], 255); break; } Signed-off-by: Yanteng Si <[email protected]> Acked-by: Huacai Chen <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: [email protected] Cc: [email protected] Link: https://lore.kernel.org/r/db968a186a10e4629fe10c26a1210f7126ad41ec.1692962043.git.siyanteng@loongson.cn Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-25  perf build-id: Simplify build_id_cache__cachedir()  (Ian Rogers)  1 file changed, -4/+2
Initialize realname to NULL, rather than name. This avoids a cast and as realpath is either NULL or an allocated string, free can be called unconditionally. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Athira Rajeev <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: K Prateek Nayak <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Ming Wang <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: Wei Li <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-25  perf pmu: Make id const and add missing free  (Ian Rogers)  2 files changed, -1/+2
The struct pmu id is initialized from pmu_id that is read into allocated memory from a file, as such it needs free-ing in pmu__delete(). Make the id value const so that we can remove casts in tests. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Athira Rajeev <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: K Prateek Nayak <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Ming Wang <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: Wei Li <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-25  perf parse-events: Make term's config const  (Ian Rogers)  3 files changed, -8/+8
This avoids casts in tests. Use zfree in a few places to avoid warnings about a freeing a const pointer. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Athira Rajeev <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: K Prateek Nayak <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Ming Wang <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: Wei Li <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-25  perf pmu: Remove logic for PMU name being NULL  (Ian Rogers)  9 files changed, -40/+29
The PMU name could be NULL in the case of the fake_pmu. Initialize the name for the fake_pmu to "fake" so that all other logic can assume it is initialized. Add a const to the type of name so that a literal can be used to avoid additional initialization code. Propagate the const through related routines and remove the now-unnecessary "(char *)" casts. Doing this located a bug in builtin-list for the pmu_glob that was missing a strdup. Signed-off-by: Ian Rogers <[email protected]> Link: https://lore.kernel.org/r/[email protected] Cc: K Prateek Nayak <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: James Clark <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Wei Li <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Will Deacon <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mike Leach <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Kan Liang <[email protected]> Cc: Athira Rajeev <[email protected]> Cc: [email protected] Cc: Ming Wang <[email protected]> Cc: John Garry <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-25  perf header: Fix missing PMU caps  (Ian Rogers)  1 file changed, -15/+16
PMU caps are written as HEADER_PMU_CAPS or for the special case of the PMU "cpu" as HEADER_CPU_PMU_CAPS. As the PMU "cpu" is special, and not any "core" PMU, the logic had become broken and core PMUs not called "cpu" were not having their caps written. This affects ARM and s390 non-hybrid PMUs. Simplify the PMU caps writing logic to scan one fewer time and to be more explicit in its behavior. Fixes: 178ddf3bad981380 ("perf header: Avoid hybrid PMU list in write_pmu_caps") Reported-by: Wei Li <[email protected]> Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Athira Rajeev <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: K Prateek Nayak <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Ming Wang <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu: Lazily load sysfs aliases  (Ian Rogers)  2 files changed, -39/+44
Don't load sysfs aliases for a PMU when the PMU is first created, defer until an alias needs to be found. For the pmu-scan benchmark, average core PMU scanning is reduced by 30.8%, and average PMU scanning by 12.6%. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu: Be lazy about loading event info files from sysfs  (Ian Rogers)  1 file changed, -45/+83
Event info is only needed when an event is parsed or when merging data from a JSON and sysfs event. Be lazy in its loading to reduce file accesses. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu: Scan type early to fail an invalid PMU quickly  (Ian Rogers)  1 file changed, -7/+12
Scan sysfs PMU's type early so that format and aliases aren't attempted to be loaded if the PMU name is invalid. This is the case for event_pmu tokens in parse-events.y where a wildcard name is first assumed to be a PMU name. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu: Lazily add JSON events  (Ian Rogers)  2 files changed, -15/+50
Rather than scanning all JSON events and adding them when a PMU is created, add the alias when the JSON event is needed. Average core PMU scanning run time reduced by 60.2%. Average PMU scanning run time reduced by 15%. Page faults with no events reduced by 74 page faults, 4% of total. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu: Cache JSON events table  (Ian Rogers)  2 files changed, -9/+11
Cache the JSON events table so that finding it isn't done per event/alias. Change the events table find so that when the PMU is given, if the PMU has no JSON events return null. Update usage to always use the PMU variable. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu: Merge JSON events with sysfs at load time  (Ian Rogers)  1 file changed, -89/+88
Rather than loading all sysfs events and then parsing all JSON events and merging them with the ones that already exist, look for a corresponding JSON event and merge it immediately when a sysfs event is loaded. To simplify the logic, exit the perf_pmu__new_alias function early if an alias is added a second time, as merging has already been explicitly handled. Fix the copying of terms to a merged alias and some ENOMEM paths. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu: Prefer passing pmu to aliases list  (Ian Rogers)  1 file changed, -28/+16
The aliases list is part of the PMU. Rather than pass the aliases list, pass the full PMU simplifying some callbacks. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu: Parse sysfs events directly from a file  (Ian Rogers)  3 files changed, -38/+31
Rather than read a sysfs events file into a 256 byte char buffer, pass the FILE* directly to the lex/yacc parser. This avoids there being a maximum events file size. While changing the API, constify some arguments to remove unnecessary casts. Allocating the read buffer decreases the performance of pmu-scan by around 3%. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu-events: Reduce processed events by passing PMU  (Ian Rogers)  2 files changed, -25/+11
Pass the PMU to pmu_events_table__for_each_event so that entries that don't match don't need to be processed by callback. If a NULL PMU is passed then all PMUs are processed. 'perf bench internals pmu-scan's "Average PMU scanning" performance is reduced by about 5% on an Intel tigerlake. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf s390 s390_cpumcfdg_dump: Don't scan all PMUs  (Ian Rogers)  1 file changed, -24/+26
Rather than scanning all PMUs for a counter name, scan the PMU associated with the evsel of the sample. This is done to remove a dependence on pmu-events.h. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf parse-events: Improve error message for double setting  (Ian Rogers)  3 files changed, -9/+29
Double setting information for an event would produce an error message associated with the PMU rather than the term that was double setting. Improve the error message to be on the term. Before: $ perf stat -e 'cpu/inst_retired.any,inst_retired.any/' true event syntax error: 'cpu/inst_retired.any,inst_retired.any/' \___ Bad event or PMU Unabled to find PMU or event on a PMU of 'cpu' Run 'perf list' for a list of valid events $ After: $ perf stat -e 'cpu/inst_retired.any,inst_retired.any/' true event syntax error: '..etired.any,inst_retired.any/' \___ Bad event or PMU Unabled to find PMU or event on a PMU of 'cpu' Initial error: event syntax error: '..etired.any,inst_retired.any/' \___ Attempt to set event's scale twice Run 'perf list' for a list of valid events Signed-off-by: Ian Rogers <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu-events: Add extra underscore to function names  (Ian Rogers)  3 files changed, -7/+7
Add extra underscore before "for" of pmu_events_table_for_each_event and pmu_metrics_table_for_each_metric. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-24  perf pmu: Abstract alias/event struct  (Ian Rogers)  4 files changed, -264/+316
In order to be able to lazily compute aliases/events for a PMU, move the struct perf_pmu_alias into pmu.c. Add perf_pmu__find_event and perf_pmu__for_each_event that take a callback that is called for the found event or for each event. The layout of struct pmu and the event/alias list is unchanged but the API is altered so that aliases are no longer directly accessed, allowing for later changes. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
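To make the callback style concrete, here is a small self-contained toy; the struct shapes are assumptions for illustration, and the stop-on-non-zero-return convention is a common one rather than the documented contract of perf_pmu__for_each_event():

  #include <stdio.h>

  struct pmu_event_info { const char *name; const char *desc; };            /* assumed shape */
  struct toy_pmu { const struct pmu_event_info *events; int nr_events; };   /* stand-in for struct perf_pmu */

  typedef int (*pmu_event_cb)(void *state, const struct pmu_event_info *info);

  /* iterate without exposing the underlying alias/event list to the caller */
  static int toy_pmu__for_each_event(const struct toy_pmu *pmu, void *state, pmu_event_cb cb)
  {
          for (int i = 0; i < pmu->nr_events; i++) {
                  int ret = cb(state, &pmu->events[i]);
                  if (ret)
                          return ret;
          }
          return 0;
  }

  static int print_cb(void *state, const struct pmu_event_info *info)
  {
          (void)state;
          printf("%s: %s\n", info->name, info->desc);
          return 0;
  }

  int main(void)
  {
          const struct pmu_event_info events[] = {
                  { "data_read",  "example event description" },
                  { "data_write", "example event description" },
          };
          struct toy_pmu pmu = { events, 2 };

          return toy_pmu__for_each_event(&pmu, NULL, print_cb);
  }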
2023-08-24  perf pmu: Make the loading of formats lazy  (Ian Rogers)  3 files changed, -64/+105
The sysfs format files are loaded eagerly in a PMU. Add a flag so that we create the format but only load the contents when necessary. Reduce the size of the value in struct perf_pmu_format and avoid holes so there is no additional space requirement. For "perf stat -e cycles true" this reduces the number of openat calls from 648 to 573 (about 12%). The benchmark pmu scan speed is improved by roughly 5%. Before: $ perf bench internals pmu-scan Computing performance of sysfs PMU event scan for 100 times Average core PMU scanning took: 1061.100 usec (+- 9.965 usec) Average PMU scanning took: 4725.300 usec (+- 260.599 usec) After: $ perf bench internals pmu-scan Computing performance of sysfs PMU event scan for 100 times Average core PMU scanning took: 989.170 usec (+- 6.873 usec) Average PMU scanning took: 4520.960 usec (+- 251.272 usec) Committer testing: On a AMD Ryzen 5950x: Before: $ perf bench internals pmu-scan -i1000 # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 563.466 usec (+- 1.008 usec) Average PMU scanning took: 1619.174 usec (+- 23.627 usec) $ perf stat -r5 perf bench internals pmu-scan -i1000 # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 583.401 usec (+- 2.098 usec) Average PMU scanning took: 1677.352 usec (+- 24.636 usec) # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 553.254 usec (+- 0.825 usec) Average PMU scanning took: 1635.655 usec (+- 24.312 usec) # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 557.733 usec (+- 0.980 usec) Average PMU scanning took: 1600.659 usec (+- 23.344 usec) # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 554.906 usec (+- 0.774 usec) Average PMU scanning took: 1595.338 usec (+- 23.288 usec) # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 551.798 usec (+- 0.967 usec) Average PMU scanning took: 1623.213 usec (+- 23.998 usec) Performance counter stats for 'perf bench internals pmu-scan -i1000' (5 runs): 3276.82 msec task-clock:u # 0.990 CPUs utilized ( +- 0.82% ) 0 context-switches:u # 0.000 /sec 0 cpu-migrations:u # 0.000 /sec 1008 page-faults:u # 307.615 /sec ( +- 0.04% ) 12049614778 cycles:u # 3.677 GHz ( +- 0.07% ) (83.34%) 117507478 stalled-cycles-frontend:u # 0.98% frontend cycles idle ( +- 0.33% ) (83.32%) 27106761 stalled-cycles-backend:u # 0.22% backend cycles idle ( +- 9.55% ) (83.36%) 33294953848 instructions:u # 2.76 insn per cycle # 0.00 stalled cycles per insn ( +- 0.03% ) (83.31%) 6849825049 branches:u # 2.090 G/sec ( +- 0.03% ) (83.37%) 71533903 branch-misses:u # 1.04% of all branches ( +- 0.20% ) (83.30%) 3.3088 +- 0.0302 seconds time elapsed ( +- 0.91% ) $ After: $ perf stat -r5 perf bench internals pmu-scan -i1000 # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 550.702 usec (+- 0.958 usec) Average PMU scanning took: 1566.577 usec (+- 22.747 usec) # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 548.315 usec (+- 0.555 usec) Average PMU scanning 
took: 1565.499 usec (+- 22.760 usec) # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 548.073 usec (+- 0.555 usec) Average PMU scanning took: 1586.097 usec (+- 23.299 usec) # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 561.184 usec (+- 2.709 usec) Average PMU scanning took: 1567.153 usec (+- 22.548 usec) # Running 'internals/pmu-scan' benchmark: Computing performance of sysfs PMU event scan for 1000 times Average core PMU scanning took: 546.987 usec (+- 0.553 usec) Average PMU scanning took: 1562.814 usec (+- 22.729 usec) Performance counter stats for 'perf bench internals pmu-scan -i1000' (5 runs): 3170.86 msec task-clock:u # 0.992 CPUs utilized ( +- 0.22% ) 0 context-switches:u # 0.000 /sec 0 cpu-migrations:u # 0.000 /sec 1010 page-faults:u # 318.526 /sec ( +- 0.04% ) 11890047674 cycles:u # 3.750 GHz ( +- 0.14% ) (83.27%) 119090499 stalled-cycles-frontend:u # 1.00% frontend cycles idle ( +- 0.46% ) (83.40%) 32502449 stalled-cycles-backend:u # 0.27% backend cycles idle ( +- 8.32% ) (83.30%) 33119141261 instructions:u # 2.79 insn per cycle # 0.00 stalled cycles per insn ( +- 0.01% ) (83.37%) 6812816561 branches:u # 2.149 G/sec ( +- 0.01% ) (83.29%) 70157855 branch-misses:u # 1.03% of all branches ( +- 0.28% ) (83.38%) 3.19710 +- 0.00826 seconds time elapsed ( +- 0.26% ) $ Signed-off-by: Ian Rogers <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
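The lazy-format idea boils down to a 'loaded' flag checked on first use; a minimal standalone sketch of that pattern (field and function names are illustrative, not the perf code):

  #include <stdbool.h>
  #include <stdio.h>

  struct toy_pmu_formats {
          bool loaded;        /* set once the sysfs format/ files have been read */
          int  nr_formats;    /* stand-in for the parsed format list */
  };

  static void load_formats(struct toy_pmu_formats *f)
  {
          /* in perf this is where the PMU's sysfs format/ directory would be read */
          f->nr_formats = 3;
          f->loaded = true;
  }

  static int format_count(struct toy_pmu_formats *f)
  {
          if (!f->loaded)     /* pay the sysfs cost only when a caller actually needs formats */
                  load_formats(f);
          return f->nr_formats;
  }

  int main(void)
  {
          struct toy_pmu_formats f = { .loaded = false, .nr_formats = 0 };

          printf("%d formats\n", format_count(&f));
          return 0;
  }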
2023-08-23  perf pmu: Pass PMU rather than aliases and format  (Ian Rogers)  2 files changed, -36/+42
Pass the pmu so the aliases and format list can be better abstracted and later lazily loaded. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-23  perf pmu: Avoid passing format list to perf_pmu__format_bits()  (Ian Rogers)  3 files changed, -5/+5
Pass the PMU so the format list can be better abstracted and later lazily loaded. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] [ Did missing conversions in tools/perf/arch/arm*/util/cs-etm.c ] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-23  perf pmu: Avoid passing format list to perf_pmu__format_type  (Ian Rogers)  3 files changed, -4/+4
Pass the pmu so the format list can be better abstracted and later lazily loaded. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-23  perf pmu: Avoid passing format list to perf_pmu__config_terms()  (Ian Rogers)  2 files changed, -13/+9
Abstract the format list better, hiding it in the PMU, by changing perf_pmu__config_terms() to take the PMU rather than the format list in the PMU. Change the PMU test to pass a dummy PMU for this purpose. Changing the test allows perf_pmu__del_formats() to become static. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-23  perf pmu: Reduce scope of perf_pmu_error()  (Ian Rogers)  2 files changed, -2/+3
Move declaration from header file to pmu.y and make static. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-23  perf pmu: Move perf_pmu__set_format to pmu.y  (Ian Rogers)  3 files changed, -13/+12
Avoid having the function in the C and header file, as it is only used locally by pmu.y. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-23  perf pmu: Avoid a path name copy  (Ian Rogers)  1 file changed, -5/+7
Rather than read a base path and append into a 2nd path, read the base path directly into output buffer and append to that. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-23  perf script ibs: Remove unused include  (Ian Rogers)  1 file changed, -1/+0
Done to reduce dependencies on pmu-events.h. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-22  perf lzma: Convert some pr_err() to pr_debug() as callers already use pr_debug()  (Arnaldo Carvalho de Melo)  1 file changed, -7/+5
I noticed some error with: # perf list ex_ret_brn lzma: fopen failed on /usr/lib/modules/5.15.14-100.fc34.x86_64/kernel/net/bluetooth/bnep/bnep.ko.xz: 'No such file or directory' lzma: fopen failed on /usr/lib/modules/5.16.16-200.fc35.x86_64/kernel/drivers/gpu/drm/drm_kms_helper.ko.xz: 'No such file or directory' lzma: fopen failed on /usr/lib/modules/5.18.16-200.fc36.x86_64/kernel/arch/x86/crypto/crct10dif-pclmul.ko.xz: 'No such file or directory' lzma: fopen failed on /usr/lib/modules/5.16.16-200.fc35.x86_64/kernel/drivers/i2c/busses/i2c-piix4.ko.xz: 'No such file or directory' <BIG SNIP> Then using 'perf probe' + 'perf trace' to debug 'perf list', it seems its some inconsistency in the ~/.debug/ cache where broken build id symlinks that ends up making it try to uncompress some kernel modules using the lzma routines: 395.309 perf/3594447 probe_perf:lzma_decompress_to_file(__probe_ip: 6118448, input_string: "/usr/lib/modules/5.18.17-200.fc36.x86_64/kernel/drivers/nvme/host/nvme.ko.xz") lzma_decompress_to_file (/var/home/acme/bin/perf) filename__decompress (/var/home/acme/bin/perf) filename__read_build_id (/var/home/acme/bin/perf) filename__sprintf_build_id (inlined) build_id_cache__valid_id (inlined) build_id_cache__list_all (/var/home/acme/bin/perf) print_sdt_events (/var/home/acme/bin/perf) cmd_list (/var/home/acme/bin/perf) run_builtin (/var/home/acme/bin/perf) handle_internal_command (inlined) run_argv (inlined) main (/var/home/acme/bin/perf) __libc_start_call_main (/usr/lib64/libc.so.6) __libc_start_main@@GLIBC_2.34 (/usr/lib64/libc.so.6) _start (/var/home/acme/bin/perf) But callers of filename__decompress() already check its return and use pr_debug(), so be consistent and make functions it calls also use pr_debug(). Cc: Adrian Hunter <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-21  perf stat-display: Check if snprintf()'s fmt argument is NULL  (Kaige Ye)  1 file changed, -2/+2
It is undefined behavior to pass NULL as snprintf()'s fmt argument. Here is an example to trigger the problem: $ perf stat --metric-only -x, -e instructions -- sleep 1 insn per cycle, Segmentation fault (core dumped) With this patch: $ perf stat --metric-only -x, -e instructions -- sleep 1 insn per cycle, , Reviewed-by: Ian Rogers <[email protected]> Signed-off-by: Kaige Ye <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
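A minimal sketch of the kind of guard that avoids the undefined behaviour; this shows only the general pattern, not the actual change in perf's stat-display code:

  #include <stdio.h>

  /* passing fmt == NULL to snprintf() is undefined behaviour, so fall back to "" */
  static int safe_snprintf_str(char *buf, size_t size, const char *fmt)
  {
          return snprintf(buf, size, "%s", fmt ? fmt : "");
  }

  int main(void)
  {
          char buf[32];

          safe_snprintf_str(buf, sizeof(buf), NULL);   /* yields an empty string instead of UB */
          printf("[%s]\n", buf);
          return 0;
  }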
2023-08-21  perf bpf augmented_raw_syscalls: Add an assert to make sure sizeof(augmented_arg->value) is a power of two  (Arnaldo Carvalho de Melo)  1 file changed, -0/+1
Similar to what was done in the previous cset for sizeof(saddr), we need to make sure sizeof(augmented_arg->value) is a power of two to do bounds checking using &=: augmented_len &= sizeof(augmented_arg->value) - 1; Suggested-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
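The power-of-two precondition for that '&=' bounds check can be enforced at build time; a generic C11 sketch of the idea (the struct and its size are stand-ins, not the real BPF skeleton types):

  #include <assert.h>   /* static_assert (C11) */

  struct augmented_arg_sketch {
          char value[4096];   /* stand-in size; must be a power of two for the mask below */
  };

  static_assert((sizeof(((struct augmented_arg_sketch *)0)->value) &
                 (sizeof(((struct augmented_arg_sketch *)0)->value) - 1)) == 0,
                "augmented_arg value size must be a power of two");

  /* the masking the BPF verifier accepts: the length can never exceed the buffer */
  static unsigned int cap_len(unsigned int len)
  {
          return len & (sizeof(((struct augmented_arg_sketch *)0)->value) - 1);
  }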
2023-08-21  perf bpf augmented_raw_syscalls: Add an assert to make sure sizeof(saddr) is a power of two  (Arnaldo Carvalho de Melo)  1 file changed, -0/+11
We're using the BPF verifier suggestion: 22: (85) call bpf_probe_read#4 R2 min value is negative, either use unsigned or 'var &= const' That works only when const is a (power of two - 1) so add an assert to make sure that that is the case. Suggested-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-17  perf jevents: Add a new expression builtin strcmp_cpuid_str()  (James Clark)  6 files changed, -1/+45
This will allow writing formulas that are conditional on a specific CPU type or CPU version. It calls through to the existing strcmp_cpuid_str() function in Perf which has a default weak version, and an arch specific version for x86 and arm64. The function takes an 'ID' type value, which is a string. But in this case Arm CPU IDs are hex numbers prefixed with '0x'. metric.py assumes strings are only used by event names, and that they can't start with a number ('0'), so an additional change has to be made to the regex to convert hex numbers back to 'ID' types. Signed-off-by: James Clark <[email protected]> Reviewed-by: John Garry <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Eduard Zingerman <[email protected]> Cc: Haixin Yu <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Nick Forrington <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rob Herring <[email protected]> Cc: Sohom Datta <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-16  perf bpf_skel augmented_raw_syscalls: Cap the socklen parameter using &= sizeof(saddr)  (Arnaldo Carvalho de Melo)  1 file changed, -4/+2
sizeof(saddr) This works with: $ clang -v clang version 14.0.5 (Fedora 14.0.5-2.fc36) $ But not with: $ clang -v clang version 16.0.6 (Fedora 16.0.6-2.fc38) $ [root@quaco ~]# perf trace -e connect*,sendto* ping -c 10 localhost libbpf: prog 'sys_enter_sendto': BPF program load failed: Permission denied libbpf: prog 'sys_enter_sendto': -- BEGIN PROG LOAD LOG -- reg type unsupported for arg#0 function sys_enter_sendto#59 0: R1=ctx(off=0,imm=0) R10=fp0 ; int sys_enter_sendto(struct syscall_enter_args *args) 0: (bf) r6 = r1 ; R1=ctx(off=0,imm=0) R6_w=ctx(off=0,imm=0) 1: (b7) r1 = 0 ; R1_w=0 ; int key = 0; 2: (63) *(u32 *)(r10 -4) = r1 ; R1_w=0 R10=fp0 fp-8=0000???? 3: (bf) r2 = r10 ; R2_w=fp0 R10=fp0 ; 4: (07) r2 += -4 ; R2_w=fp-4 ; return bpf_map_lookup_elem(&augmented_args_tmp, &key); 5: (18) r1 = 0xffff8de5a5b8bc00 ; R1_w=map_ptr(off=0,ks=4,vs=8272,imm=0) 7: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=8272,imm=0) 8: (bf) r7 = r0 ; R0_w=map_value_or_null(id=1,off=0,ks=4,vs=8272,imm=0) R7_w=map_value_or_null(id=1,off=0,ks=4,vs=8272,imm=0) 9: (b7) r0 = 1 ; R0_w=1 ; if (augmented_args == NULL) 10: (15) if r7 == 0x0 goto pc+25 ; R7_w=map_value(off=0,ks=4,vs=8272,imm=0) ; unsigned int socklen = args->args[5]; 11: (79) r1 = *(u64 *)(r6 +56) ; R1_w=scalar() R6_w=ctx(off=0,imm=0) ; 12: (bf) r2 = r1 ; R1_w=scalar(id=2) R2_w=scalar(id=2) 13: (67) r2 <<= 32 ; R2_w=scalar(smax=9223372032559808512,umax=18446744069414584320,var_off=(0x0; 0xffffffff00000000),s32_min=0,s32_max=0,u32_max=0) 14: (77) r2 >>= 32 ; R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) 15: (b7) r8 = 128 ; R8=128 ; if (socklen > sizeof(augmented_args->saddr)) 16: (25) if r2 > 0x80 goto pc+1 ; R2=scalar(umax=128,var_off=(0x0; 0xff)) 17: (bf) r8 = r1 ; R1=scalar(id=2) R8_w=scalar(id=2) ; const void *sockaddr_arg = (const void *)args->args[4]; 18: (79) r3 = *(u64 *)(r6 +48) ; R3_w=scalar() R6=ctx(off=0,imm=0) ; bpf_probe_read(&augmented_args->saddr, socklen, sockaddr_arg); 19: (bf) r1 = r7 ; R1_w=map_value(off=0,ks=4,vs=8272,imm=0) R7=map_value(off=0,ks=4,vs=8272,imm=0) 20: (07) r1 += 64 ; R1_w=map_value(off=64,ks=4,vs=8272,imm=0) ; bpf_probe_read(&augmented_args->saddr, socklen, sockaddr_arg); 21: (bf) r2 = r8 ; R2_w=scalar(id=2) R8_w=scalar(id=2) 22: (85) call bpf_probe_read#4 R2 min value is negative, either use unsigned or 'var &= const' processed 22 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 1 -- END PROG LOAD LOG -- libbpf: prog 'sys_enter_sendto': failed to load: -13 libbpf: failed to load object 'augmented_raw_syscalls_bpf' libbpf: failed to load BPF skeleton 'augmented_raw_syscalls_bpf': -13 So use the suggested &= variant since sizeof(saddr) == 128 bytes. Cc: Adrian Hunter <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-16  perf parse-regs: Move out arch specific header from util/perf_regs.h  (Leo Yan)  1 file changed, -2/+0
util/perf_regs.h includes another perf_regs.h: #include <perf_regs.h> This pulls in the architecture specific header; for example, when building for an arm64 target, the header tools/perf/arch/arm64/include/perf_regs.h is included. This implicit way of including the architecture specific header is indirect, and util/perf_regs.c is coupled with the architecture specific definitions. This patch moves the arch specific header out of util/perf_regs.h to generalize the 'util' folder; as a result, the source files in the 'arch' folder explicitly include their architecture's perf_regs.h. Signed-off-by: Leo Yan <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Eric Lin <[email protected]> Cc: Fangrui Song <[email protected]> Cc: Guo Ren <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Babrou <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Ming Wang <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Sandipan Das <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-16perf parse-regs: Remove PERF_REGS_{MAX|MASK} from common codeLeo Yan3-6/+4
The macros PERF_REGS_MAX and PERF_REGS_MASK are architecture specific, so remove them from the common file util/perf_regs.c. As a side effect, the weak functions arch__intr_reg_mask() and arch__user_reg_mask() now just return zero; every arch defines its own versions in the 'arch' folder to return the correct values. Note that the intr/user register masks do not need to be resolved dynamically: these two functions are invoked during the recording phase, not the decoding phase, so they always run in the native environment. Signed-off-by: Leo Yan <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Eric Lin <[email protected]> Cc: Fangrui Song <[email protected]> Cc: Guo Ren <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Babrou <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Ming Wang <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Sandipan Das <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
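A minimal sketch of the weak-default pattern described above, assuming the mask functions return a u64 register bitmap; the common code keeps zero-returning weak stubs while each architecture supplies a strong override in its own arch/ directory:

  #include <stdint.h>

  /* Common code: weak stubs returning an empty register mask. */
  uint64_t __attribute__((weak)) arch__intr_reg_mask(void)
  {
          return 0;
  }

  uint64_t __attribute__((weak)) arch__user_reg_mask(void)
  {
          return 0;
  }

  /* arch/<arch>/util/ would then define strong versions of these two
   * functions that return the real masks for the native architecture. */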
2023-08-16perf unwind: Use perf_arch_reg_{ip|sp}() to substitute macrosLeo Yan5-16/+10
Use perf_arch_reg_ip() and perf_arch_reg_sp() instead of macros to obtain the register numbers of IP and SP. This change enables cross analysis in the unwinding code, so unwinding is no longer restricted to the values predefined by the macros. Consequently, the macros LIBUNWIND__ARCH_REG_{IP|SP} are removed since they are no longer used. Committer notes: Add the missing "util/env.h" header to make sure we have the definition for perf_env__arch(), which isn't available when building with NO_LIBUNWIND=1, i.e. it was previously being included by sheer luck. Signed-off-by: Leo Yan <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Eric Lin <[email protected]> Cc: Fangrui Song <[email protected]> Cc: Guo Ren <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Babrou <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Ming Wang <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Sandipan Das <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
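A hedged sketch of how an unwind entry point can resolve the IP/SP register numbers at runtime from the recorded architecture string rather than from the removed LIBUNWIND__ARCH_REG_{IP|SP} macros; only the perf_arch_reg_{ip|sp}() names come from this series, the helper and its shape are illustrative:

  #include <stdint.h>

  /* Assumed prototypes; perf_arch_reg_{ip|sp}() are introduced by the
   * parse-regs commit shown next in this log. */
  uint64_t perf_arch_reg_ip(const char *arch);
  uint64_t perf_arch_reg_sp(const char *arch);

  /* Illustrative helper: pick register numbers from the perf.data
   * architecture instead of compile-time macros, which is what makes
   * cross-architecture unwinding possible. */
  static void resolve_unwind_regs(const char *arch,
                                  uint64_t *ip, uint64_t *sp)
  {
          *ip = perf_arch_reg_ip(arch);
          *sp = perf_arch_reg_sp(arch);
  }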
2023-08-16perf parse-regs: Introduce functions perf_arch_reg_{ip|sp}()Leo Yan12-5/+186
The current code uses the macros PERF_REG_IP and PERF_REG_SP for parsing registers, and perf is built with these macros statically, which means it can only correctly analyze CPU registers for the native architecture and fails to support cross analysis (e.g. perf built on x86 cannot analyze Arm64's registers). We need to generalize util/perf_regs.c to support multiple architectures. As a first step, this commit introduces the new functions perf_arch_reg_ip() and perf_arch_reg_sp(), which dynamically return the IP and SP register index respectively according to the parameter "arch". Every architecture has its own functions (like __perf_reg_ip_arm64 and __perf_reg_sp_arm64); these architecture-specific functions are defined in per-arch source files under the folder util/perf-regs-arch, and in the end all of them are built into the tool for cross analysis. Committer notes: Make DWARF_MINIMAL_REGS() an inline function, so that we can use the __maybe_unused attribute for the 'arch' parameter; this avoids a build failure when that variable is unused in the callers, which happens when building on unsupported architectures, i.e. the ones without HAVE_PERF_REGS_SUPPORT defined. Signed-off-by: Leo Yan <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Eric Lin <[email protected]> Cc: Fangrui Song <[email protected]> Cc: Guo Ren <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Babrou <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Ming Wang <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Sandipan Das <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
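A minimal sketch of the runtime dispatch this commit describes; the per-arch index values and the fallback behaviour below are assumptions for illustration only:

  #include <stdint.h>
  #include <string.h>

  /* Per-arch helpers, kept in per-arch source files (names follow the
   * commit message; the returned indices are illustrative). */
  static uint64_t __perf_reg_ip_x86(void)   { return 8;  }
  static uint64_t __perf_reg_ip_arm64(void) { return 32; }

  /* Select the IP register index from the "arch" string at runtime,
   * replacing the compile-time PERF_REG_IP macro. */
  uint64_t perf_arch_reg_ip(const char *arch)
  {
          if (!strcmp(arch, "x86"))
                  return __perf_reg_ip_x86();
          if (!strcmp(arch, "arm64"))
                  return __perf_reg_ip_arm64();
          return 0; /* assumed fallback for unsupported architectures */
  }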
2023-08-16perf parse-regs: Refactor arch register parsing functionsLeo Yan14-716/+803
Every architecture has a specific register parsing function that returns a register name based on a register index. To support cross analysis (e.g. using a perf x86 binary to parse Arm64's perf data), all these register parsing functions are built into the tool, which is why all related functions were placed in util/perf_regs.c. Unfortunately, since util/perf_regs.c needs to include every arch's perf_regs.h, this easily introduces duplicated definitions coming from multiple headers, making the build fragile and maintenance difficult. We cannot simply move these register parsing functions into the corresponding 'arch' folder, because that folder is only conditionally built based on the target architecture. Therefore, this commit creates a new folder util/perf-regs-arch/ and uses a dedicated source file for every architecture's register parsing function to avoid definition conflicts. This is only a refactoring; no functionality change is expected. Committer notes: Had to add util/perf-regs-arch/*.c to tools/perf/util/python-ext-sources to keep 'perf test python' passing. Signed-off-by: Leo Yan <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Eric Lin <[email protected]> Cc: Fangrui Song <[email protected]> Cc: Guo Ren <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Babrou <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Ming Wang <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Sandipan Das <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
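A hedged sketch of the kind of per-arch helper such a dedicated file under util/perf-regs-arch/ would hold; the file path is an assumption, the function name mirrors the naming used in this series, and the (truncated) register table is illustrative:

  #include <stddef.h>

  /* e.g. a file for Arm64 under util/perf-regs-arch/ (path assumed):
   * map an Arm64 register index to its name. Always built in, so an
   * x86 perf binary can decode Arm64 perf.data files. */
  const char *__perf_reg_name_arm64(int id)
  {
          switch (id) {
          case 0:  return "x0";
          case 29: return "x29";
          case 30: return "lr";
          case 31: return "sp";
          case 32: return "pc";
          default: return NULL;
          }
  }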
2023-08-15perf cs-etm: Don't duplicate FIELD_GET()James Clark1-12/+2
linux/bitfield.h can be included as long as linux/kernel.h is included first, so change the order of the includes and drop the duplicate macro. Reviewed-by: John Garry <[email protected]> Signed-off-by: James Clark <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andrii Nakryiko <[email protected]> Cc: Eduard Zingerman <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Kan Liang <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mike Leach <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Nick Forrington <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rob Herring <[email protected]> Cc: Sohom Datta <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
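A small sketch of the include-order point above, valid inside the perf tools tree (the field mask below is illustrative): once linux/kernel.h comes first, linux/bitfield.h provides FIELD_GET() and the locally duplicated macro can be dropped:

  #include <linux/kernel.h>
  #include <linux/bitfield.h>  /* fine now that linux/kernel.h came first */

  /* Extract a 4-bit sub-field from a packet word using the shared macro
   * instead of a locally duplicated FIELD_GET(). */
  static inline unsigned long example_field(unsigned long payload)
  {
          return FIELD_GET(0xf0UL, payload);
  }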