author    Linus Torvalds <torvalds@linux-foundation.org>    2023-05-07 11:32:18 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>    2023-05-07 11:32:18 -0700
commit    f085df1be60abf670315c11036261cfaec16b2eb (patch)
tree      c02c07ad31578b90c3cc99be6b5ba680d163bca7 /tools/perf/arch
parent    17784de648be93b4eef0ef8fe28a16ff04feecc7 (diff)
parent    9a2d5178b9d51e1c5f9e08989ff97fc8d4893f31 (diff)
Merge tag 'perf-tools-for-v6.4-3-2023-05-06' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull perf tool updates from Arnaldo Carvalho de Melo:
"Third version of perf tool updates, with the build problems with with
using a 'vmlinux.h' generated from the main build fixed, and the bpf
skeleton build disabled by default.
Build:
- Require libtraceevent to build; it can be disabled using
NO_LIBTRACEEVENT=1 (see the build example after this list).
It is required for tools like 'perf sched', 'perf kvm', 'perf
trace', etc.
libtraceevent is available in most distros so installing
'libtraceevent-devel' should be a one-time event to continue
building perf as usual.
Using NO_LIBTRACEEVENT=1 produces tooling that is functional and
sufficient for lots of users not interested in those libtraceevent
dependent features.
- Allow Python support in 'perf script' when libtraceevent isn't
linked, as not all features require it; for instance, Intel PT does
not use tracepoints.
- Error if the python interpreter needed for jevents to work isn't
available and NO_JEVENTS=1 isn't set, preventing a build without
support for JSON vendor events, which is a rare but possible
condition. The two check error messages:
$(error ERROR: No python interpreter needed for jevents generation. Install python or build with NO_JEVENTS=1.)
$(error ERROR: Python interpreter needed for jevents generation too old (older than 3.6). Install a newer python or build with NO_JEVENTS=1.)
- Make libbpf 1.0 the minimum required when building with an
out-of-tree, distro-provided libbpf.
- Use libstdc++'s and LLVM libcxx's __cxa_demangle, a portable C++
demangler, and add a 'perf test' entry for it.
- Make the binutils libraries opt-in, as distros disable building
with them due to licensing; they were used for C++ demangling, for
instance.
- Switch libpfm4 to opt-out rather than opt-in; if libpfm-devel (or
an equivalent) isn't installed, there will just be a build warning:
Makefile.config:1144: libpfm4 not found, disables libpfm4 support. Please install libpfm4-dev
- Add a feature test for scandirat(), which is not yet implemented
in musl and uclibc, disabling features that need it, such as
scanning for tracepoints in /sys/kernel/tracing/events.
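For example, opting out of libtraceevent or JSON vendor events uses
the knobs mentioned above (a sketch; assumes an in-tree build and may
need adjusting for your setup):
$ make -C tools/perf NO_LIBTRACEEVENT=1
$ make -C tools/perf NO_JEVENTS=1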
perf BPF filters:
- New feature where BPF can be used to filter samples, for instance:
$ sudo ./perf record -e cycles --filter 'period > 1000' true
$ sudo ./perf script
perf-exec 2273949 546850.708501: 5029 cycles: ffffffff826f9e25 finish_wait+0x5 ([kernel.kallsyms])
perf-exec 2273949 546850.708508: 32409 cycles: ffffffff826f9e25 finish_wait+0x5 ([kernel.kallsyms])
perf-exec 2273949 546850.708526: 143369 cycles: ffffffff82b4cdbf xas_start+0x5f ([kernel.kallsyms])
perf-exec 2273949 546850.708600: 372650 cycles: ffffffff8286b8f7 __pagevec_lru_add+0x117 ([kernel.kallsyms])
perf-exec 2273949 546850.708791: 482953 cycles: ffffffff829190de __mod_memcg_lruvec_state+0x4e ([kernel.kallsyms])
true 2273949 546850.709036: 501985 cycles: ffffffff828add7c tlb_gather_mmu+0x4c ([kernel.kallsyms])
true 2273949 546850.709292: 503065 cycles: 7f2446d97c03 _dl_map_object_deps+0x973 (/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
- In addition to 'period' (PERF_SAMPLE_PERIOD), the other
PERF_SAMPLE_ values can be used for filtering, as well as some other
sample-accessible values; from tools/perf/Documentation/perf-record.txt
(a combined example follows the grammar):
Essentially the BPF filter expression is:
<term> <operator> <value> (("," | "||") <term> <operator> <value>)*
The <term> can be one of:
ip, id, tid, pid, cpu, time, addr, period, txn, weight, phys_addr,
code_pgsz, data_pgsz, weight1, weight2, weight3, ins_lat, retire_lat,
p_stage_cyc, mem_op, mem_lvl, mem_snoop, mem_remote, mem_lock,
mem_dtlb, mem_blk, mem_hops
The <operator> can be one of:
==, !=, >, >=, <, <=, &
The <value> can be one of:
<number> (for any term)
na, load, store, pfetch, exec (for mem_op)
l1, l2, l3, l4, cxl, io, any_cache, lfb, ram, pmem (for mem_lvl)
na, none, hit, miss, hitm, fwd, peer (for mem_snoop)
remote (for mem_remote)
na, locked (for mem_lock)
na, l1_hit, l1_miss, l2_hit, l2_miss, any_hit, any_miss, walk, fault (for mem_dtlb)
na, by_data, by_addr (for mem_blk)
hops0, hops1, hops2, hops3 (for mem_hops)
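For example, combining two terms with "," (which acts as a logical
AND; a sketch, the event and thresholds are arbitrary):
$ sudo perf record -e cycles --filter 'period > 1000, cpu == 0' -- true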
perf lock contention:
- Show lock type with address.
- Track and show mmap_lock, siglock and per-cpu rq_lock with address.
This is done for mmap_lock by following the current->mm pointer:
$ sudo ./perf lock con -abl -- sleep 10
contended total wait max wait avg wait address symbol
...
16344 312.30 ms 2.22 ms 19.11 us ffff8cc702595640
17686 310.08 ms 1.49 ms 17.53 us ffff8cc7025952c0
3 84.14 ms 45.79 ms 28.05 ms ffff8cc78114c478 mmap_lock
3557 76.80 ms 68.75 us 21.59 us ffff8cc77ca3af58
1 68.27 ms 68.27 ms 68.27 ms ffff8cda745dfd70
9 54.53 ms 7.96 ms 6.06 ms ffff8cc7642a48b8 mmap_lock
14629 44.01 ms 60.00 us 3.01 us ffff8cc7625f9ca0
3481 42.63 ms 140.71 us 12.24 us ffffffff937906ac vmap_area_lock
16194 38.73 ms 42.15 us 2.39 us ffff8cd397cbc560
11 38.44 ms 10.39 ms 3.49 ms ffff8ccd6d12fbb8 mmap_lock
1 5.43 ms 5.43 ms 5.43 ms ffff8cd70018f0d8
1674 5.38 ms 422.93 us 3.21 us ffffffff92e06080 tasklist_lock
581 4.51 ms 130.68 us 7.75 us ffff8cc9b1259058
5 3.52 ms 1.27 ms 703.23 us ffff8cc754510070
112 3.47 ms 56.47 us 31.02 us ffff8ccee38b3120
381 3.31 ms 73.44 us 8.69 us ffffffff93790690 purge_vmap_area_lock
255 3.19 ms 36.35 us 12.49 us ffff8d053ce30c80
- Update default map size to 16384.
- Allocate the single-letter option -M for --map-nr-entries, as it
is proving to be frequently used (see the example after this list).
- Fix struct rq lock access for older kernels with BPF's CO-RE
(Compile once, run everywhere).
- Fix problems found with MSan.
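Example use of the new -M option (a sketch; 32768 is an arbitrary
map size):
$ sudo perf lock con -abl -M 32768 -- sleep 10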
perf report/top:
- Add inline information when using --call-graph=fp or lbr, as was
already done for the --call-graph=dwarf callchain mode.
- Improve the 'srcfile' sort key performance by actually using an
optimization introduced in 6.2 for the 'srcline' sort key that
avoids calling addr2line for comparison with each sample.
perf sched:
- Make 'perf sched latency/map/replay' use "sched:sched_waking"
instead of "sched:sched_wakeup", consistent with 'perf record'
since d566a9c2d482 ("perf sched: Prefer sched_waking event when it
exists").
perf ftrace:
- Make system-wide the default target for the 'latency' subcommand;
run the following command, then generate some network traffic and
press Ctrl+C:
# perf ftrace latency -T __kfree_skb
^C
DURATION | COUNT | GRAPH |
0 - 1 us | 27 | ############# |
1 - 2 us | 22 | ########### |
2 - 4 us | 8 | #### |
4 - 8 us | 5 | ## |
8 - 16 us | 24 | ############ |
16 - 32 us | 2 | # |
32 - 64 us | 1 | |
64 - 128 us | 0 | |
128 - 256 us | 0 | |
256 - 512 us | 0 | |
512 - 1024 us | 0 | |
1 - 2 ms | 0 | |
2 - 4 ms | 0 | |
4 - 8 ms | 0 | |
8 - 16 ms | 0 | |
16 - 32 ms | 0 | |
32 - 64 ms | 0 | |
64 - 128 ms | 0 | |
128 - 256 ms | 0 | |
256 - 512 ms | 0 | |
512 - 1024 ms | 0 | |
1 - ... s | 0 | |
#
perf top:
- Add the --branch-history (LBR: Last Branch Record) option, just
like the one already available for 'perf record' (see the example
after this list).
- Fix segfault in thread__comm_len() where thread->comm was being
used outside thread->comm_lock.
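Example invocation of the new option (a sketch; assumes a CPU with
LBR support, such as a recent Intel part):
# perf top --branch-history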
perf annotate:
- Allow configuring objdump and addr2line in ~/.perfconfig, so that
you can use alternative binaries, such as LLVM's:
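For example (a sketch; the [annotate] section and key names here are
assumptions based on the description above, check 'perf config' on
your system):
$ cat ~/.perfconfig
[annotate]
objdump = llvm-objdump
addr2line = llvm-addr2line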
perf kvm:
- Add TUI mode for 'perf kvm stat report'.
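For example, record KVM events for 10 seconds and browse the result
(a sketch; the report opens the TUI when running on a terminal,
falling back to stdio otherwise):
# perf kvm stat record -- sleep 10
# perf kvm stat report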
Reference counting:
- Add reference count checking infrastructure to check for use after
free, done to the 'cpumap', 'namespaces', 'maps' and 'map' structs,
more to come.
To build with it, use -DREFCNT_CHECKING=1 on the make command line
when building tools/perf (see the example after this list).
Documented at:
https://perf.wiki.kernel.org/index.php/Reference_Count_Checking
- The above caught, for instance, this fix, present in this series:
- Fix maps use after put in 'perf test "Share thread maps"':
'maps' is copied from the leader, but the leader is put on line 79
and then 'maps' is used to read the reference count below - so
a use after put, with the put of maps happening within
thread__put.
Fixed by reversing the order of puts so that the leader is put
last.
- Also several fixes were made to places where reference counts were
not being held.
- Make this one of the tests in 'make -C tools/perf build-test' to
regularly build-test it and to make sure no direct accesses to the
reference-counted structs are made, doing that via accessors that
check the validity of the struct pointer.
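A build with the checking enabled could look like this (a sketch,
following the -DREFCNT_CHECKING=1 instruction above):
$ make -C tools/perf EXTRA_CFLAGS=-DREFCNT_CHECKING=1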
ARM64:
- Fix 'perf report' segfault when filtering coresight traces by
sparse lists of CPUs.
- Add support for 'simd' as a sort field for 'perf report', to show
ARM's NEON SIMD's predicate flags: "partial" and "empty".
arm64 vendor events:
- Add N1 metrics.
Intel vendor events:
- Add graniterapids, grandridge and sierraforest events.
- Refresh events for: alderlake, alderlaken, broadwell, broadwellde,
broadwellx, cascadelakex, haswell, haswellx, icelake, icelakex,
jaketown, meteorlake, knightslanding, sandybridge, sapphirerapids,
silvermont, skylake, tigerlake and westmereep-dp.
- Refresh metrics for alderlake-n, broadwell, broadwellde,
broadwellx, haswell, haswellx, icelakex, ivybridge, ivytown and
skylakex.
perf stat:
- Implement --topdown using JSON metrics (see the example after this
list).
- Add TopdownL1 JSON metric as a default if present, but disable it
for now for some Intel hybrid architectures; a series of patches
addressing this is being reviewed and will be submitted for v6.5.
- Use metrics for --smi-cost.
- Update topdown documentation.
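For example (a sketch; --topdown typically needs system-wide mode
and a CPU exposing the topdown metrics, and the output depends on
the JSON metrics available):
# perf stat --topdown -a -- sleep 1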
Vendor events (JSON) infrastructure:
- Add support for computing and printing metric threshold values. For
instance, here is one found in the sapphirerapids JSON file:
{
"BriefDescription": "Percentage of cycles spent in System Management Interrupts.",
"MetricExpr": "((msr@aperf@ - cycles) / msr@aperf@ if msr@smi@ > 0 else 0)",
"MetricGroup": "smi",
"MetricName": "smi_cycles",
"MetricThreshold": "smi_cycles > 0.1",
"ScaleUnit": "100%"
},
- Test parsing metric thresholds with the fake PMU in 'perf test
pmu-events'.
- Support for printing metric thresholds in 'perf list'.
- Add --metric-no-threshold option to 'perf stat'.
- Add rand (reverse and) and has_pmem (Optane memory) support to
metrics.
- Sort the list of input files to avoid depending on the order from
readdir(), helping to obtain reproducible builds.
S/390:
- Add common metrics: CPI (cycles per instruction), prbstate (ratio
of instructions executed in problem state compared to the total
number of instructions), and l1mp (level one instruction and data
cache misses per 100 instructions).
- Add cache metrics for z13, z14, z15 and z16.
- Add metric for TLB and cache.
ARM:
- Add raw decoding for SPE (Statistical Profiling Extension) v1.3 MTE
(Memory Tagging Extension) and MOPS (Memory Operations) load/store.
Intel PT hardware tracing:
- Add event type names UINTR (User interrupt delivered) and UIRET
(Exiting from user interrupt routine), documented in table 32-50
"CFE Packet Type and Vector Fields Details" in the Intel Processor
Trace chapter of The Intel SDM Volume 3 version 078.
- Add support for new branch instructions ERETS and ERETU.
- Fix CYC timestamps after standalone CBR.
ARM CoreSight hardware tracing:
- Allow user to override timestamp and contextid settings.
- Fix segfault in dso lookup.
- Fix timeless decode mode detection.
- Add separate decode paths for timeless and per-thread modes.
auxtrace:
- Fix address filter entire kernel size.
Miscellaneous:
- Fix use-after-free and unaligned bugs in the PLT handling routines.
- Use zfree() to reduce chances of use after free.
- Add missing 0x prefix for addresses printed in hexadecimal in 'perf
probe'.
- Suppress massive unsupported target platform errors in the unwind
code.
- Fix returning an incorrect build_id size in elf_read_build_id().
- Fix 'perf scripts intel-pt-events.py' IPC output for Python 2.
- Add missing new parameter in kfree_skb tracepoint to the python
scripts using it.
- Add 'perf bench syscall fork' benchmark.
- Add support for printing PERF_MEM_LVLNUM_UNC (Uncached access) in
'perf mem'.
- Fix wrong size expectation for perf test 'Setup struct
perf_event_attr' caused by the patch adding
perf_event_attr::config3.
- Fix some spelling mistakes"
* tag 'perf-tools-for-v6.4-3-2023-05-06' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (365 commits)
Revert "perf build: Make BUILD_BPF_SKEL default, rename to NO_BPF_SKEL"
Revert "perf build: Warn for BPF skeletons if endian mismatches"
perf metrics: Fix SEGV with --for-each-cgroup
perf bpf skels: Stop using vmlinux.h generated from BTF, use subset of used structs + CO-RE
perf stat: Separate bperf from bpf_profiler
perf test record+probe_libc_inet_pton: Fix call chain match on x86_64
perf test record+probe_libc_inet_pton: Fix call chain match on s390
perf tracepoint: Fix memory leak in is_valid_tracepoint()
perf cs-etm: Add fix for coresight trace for any range of CPUs
perf build: Fix unescaped # in perf build-test
perf unwind: Suppress massive unsupported target platform errors
perf script: Add new parameter in kfree_skb tracepoint to the python scripts using it
perf script: Print raw ip instead of binary offset for callchain
perf symbols: Fix return incorrect build_id size in elf_read_build_id()
perf list: Modify the warning message about scandirat(3)
perf list: Fix memory leaks in print_tracepoint_events()
perf lock contention: Rework offset calculation with BPF CO-RE
perf lock contention: Fix struct rq lock access
perf stat: Disable TopdownL1 on hybrid
perf stat: Avoid SEGV on counter->name
...
Diffstat (limited to 'tools/perf/arch')
28 files changed, 270 insertions, 347 deletions
diff --git a/tools/perf/arch/arm/tests/dwarf-unwind.c b/tools/perf/arch/arm/tests/dwarf-unwind.c index ccfa87055c4a..566fb6c0eae7 100644 --- a/tools/perf/arch/arm/tests/dwarf-unwind.c +++ b/tools/perf/arch/arm/tests/dwarf-unwind.c @@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample, return -1; } - stack_size = map->end - sp; + stack_size = map__end(map) - sp; stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size; memcpy(buf, (void *) sp, stack_size); diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c index 7f71c8a237ff..77cb03e6ff87 100644 --- a/tools/perf/arch/arm/util/cs-etm.c +++ b/tools/perf/arch/arm/util/cs-etm.c @@ -69,21 +69,29 @@ static const char * const metadata_ete_ro[] = { static bool cs_etm_is_etmv4(struct auxtrace_record *itr, int cpu); static bool cs_etm_is_ete(struct auxtrace_record *itr, int cpu); -static int cs_etm_set_context_id(struct auxtrace_record *itr, - struct evsel *evsel, int cpu) +static int cs_etm_validate_context_id(struct auxtrace_record *itr, + struct evsel *evsel, int cpu) { - struct cs_etm_recording *ptr; - struct perf_pmu *cs_etm_pmu; + struct cs_etm_recording *ptr = + container_of(itr, struct cs_etm_recording, itr); + struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu; char path[PATH_MAX]; - int err = -EINVAL; + int err; u32 val; - u64 contextid; + u64 contextid = + evsel->core.attr.config & + (perf_pmu__format_bits(&cs_etm_pmu->format, "contextid1") | + perf_pmu__format_bits(&cs_etm_pmu->format, "contextid2")); - ptr = container_of(itr, struct cs_etm_recording, itr); - cs_etm_pmu = ptr->cs_etm_pmu; + if (!contextid) + return 0; - if (!cs_etm_is_etmv4(itr, cpu)) - goto out; + /* Not supported in etmv3 */ + if (!cs_etm_is_etmv4(itr, cpu)) { + pr_err("%s: contextid not supported in ETMv3, disable with %s/contextid=0/\n", + CORESIGHT_ETM_PMU_NAME, CORESIGHT_ETM_PMU_NAME); + return -EINVAL; + } /* Get a handle on TRCIDR2 */ snprintf(path, PATH_MAX, "cpu%d/%s", @@ -92,27 +100,13 @@ static int cs_etm_set_context_id(struct auxtrace_record *itr, /* There was a problem reading the file, bailing out */ if (err != 1) { - pr_err("%s: can't read file %s\n", - CORESIGHT_ETM_PMU_NAME, path); - goto out; + pr_err("%s: can't read file %s\n", CORESIGHT_ETM_PMU_NAME, + path); + return err; } - /* User has configured for PID tracing, respects it. */ - contextid = evsel->core.attr.config & - (BIT(ETM_OPT_CTXTID) | BIT(ETM_OPT_CTXTID2)); - - /* - * If user doesn't configure the contextid format, parse PMU format and - * enable PID tracing according to the "contextid" format bits: - * - * If bit ETM_OPT_CTXTID is set, trace CONTEXTIDR_EL1; - * If bit ETM_OPT_CTXTID2 is set, trace CONTEXTIDR_EL2. 
- */ - if (!contextid) - contextid = perf_pmu__format_bits(&cs_etm_pmu->format, - "contextid"); - - if (contextid & BIT(ETM_OPT_CTXTID)) { + if (contextid & + perf_pmu__format_bits(&cs_etm_pmu->format, "contextid1")) { /* * TRCIDR2.CIDSIZE, bit [9-5], indicates whether contextID * tracing is supported: @@ -122,14 +116,14 @@ static int cs_etm_set_context_id(struct auxtrace_record *itr, */ val = BMVAL(val, 5, 9); if (!val || val != 0x4) { - pr_err("%s: CONTEXTIDR_EL1 isn't supported\n", - CORESIGHT_ETM_PMU_NAME); - err = -EINVAL; - goto out; + pr_err("%s: CONTEXTIDR_EL1 isn't supported, disable with %s/contextid1=0/\n", + CORESIGHT_ETM_PMU_NAME, CORESIGHT_ETM_PMU_NAME); + return -EINVAL; } } - if (contextid & BIT(ETM_OPT_CTXTID2)) { + if (contextid & + perf_pmu__format_bits(&cs_etm_pmu->format, "contextid2")) { /* * TRCIDR2.VMIDOPT[30:29] != 0 and * TRCIDR2.VMIDSIZE[14:10] == 0b00100 (32bit virtual contextid) @@ -138,35 +132,34 @@ static int cs_etm_set_context_id(struct auxtrace_record *itr, * Any value of VMIDSIZE >= 4 (i.e, > 32bit) is fine for us. */ if (!BMVAL(val, 29, 30) || BMVAL(val, 10, 14) < 4) { - pr_err("%s: CONTEXTIDR_EL2 isn't supported\n", - CORESIGHT_ETM_PMU_NAME); - err = -EINVAL; - goto out; + pr_err("%s: CONTEXTIDR_EL2 isn't supported, disable with %s/contextid2=0/\n", + CORESIGHT_ETM_PMU_NAME, CORESIGHT_ETM_PMU_NAME); + return -EINVAL; } } - /* All good, let the kernel know */ - evsel->core.attr.config |= contextid; - err = 0; - -out: - return err; + return 0; } -static int cs_etm_set_timestamp(struct auxtrace_record *itr, - struct evsel *evsel, int cpu) +static int cs_etm_validate_timestamp(struct auxtrace_record *itr, + struct evsel *evsel, int cpu) { - struct cs_etm_recording *ptr; - struct perf_pmu *cs_etm_pmu; + struct cs_etm_recording *ptr = + container_of(itr, struct cs_etm_recording, itr); + struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu; char path[PATH_MAX]; - int err = -EINVAL; + int err; u32 val; - ptr = container_of(itr, struct cs_etm_recording, itr); - cs_etm_pmu = ptr->cs_etm_pmu; + if (!(evsel->core.attr.config & + perf_pmu__format_bits(&cs_etm_pmu->format, "timestamp"))) + return 0; - if (!cs_etm_is_etmv4(itr, cpu)) - goto out; + if (!cs_etm_is_etmv4(itr, cpu)) { + pr_err("%s: timestamp not supported in ETMv3, disable with %s/timestamp=0/\n", + CORESIGHT_ETM_PMU_NAME, CORESIGHT_ETM_PMU_NAME); + return -EINVAL; + } /* Get a handle on TRCIRD0 */ snprintf(path, PATH_MAX, "cpu%d/%s", @@ -177,7 +170,7 @@ static int cs_etm_set_timestamp(struct auxtrace_record *itr, if (err != 1) { pr_err("%s: can't read file %s\n", CORESIGHT_ETM_PMU_NAME, path); - goto out; + return err; } /* @@ -189,24 +182,21 @@ static int cs_etm_set_timestamp(struct auxtrace_record *itr, */ val &= GENMASK(28, 24); if (!val) { - err = -EINVAL; - goto out; + return -EINVAL; } - /* All good, let the kernel know */ - evsel->core.attr.config |= (1 << ETM_OPT_TS); - err = 0; - -out: - return err; + return 0; } -#define ETM_SET_OPT_CTXTID (1 << 0) -#define ETM_SET_OPT_TS (1 << 1) -#define ETM_SET_OPT_MASK (ETM_SET_OPT_CTXTID | ETM_SET_OPT_TS) - -static int cs_etm_set_option(struct auxtrace_record *itr, - struct evsel *evsel, u32 option) +/* + * Check whether the requested timestamp and contextid options should be + * available on all requested CPUs and if not, tell the user how to override. + * The kernel will silently disable any unavailable options so a warning here + * first is better. In theory the kernel could still disable the option for + * some other reason so this is best effort only. 
+ */ +static int cs_etm_validate_config(struct auxtrace_record *itr, + struct evsel *evsel) { int i, err = -EINVAL; struct perf_cpu_map *event_cpus = evsel->evlist->core.user_requested_cpus; @@ -220,18 +210,11 @@ static int cs_etm_set_option(struct auxtrace_record *itr, !perf_cpu_map__has(online_cpus, cpu)) continue; - if (option & BIT(ETM_OPT_CTXTID)) { - err = cs_etm_set_context_id(itr, evsel, i); - if (err) - goto out; - } - if (option & BIT(ETM_OPT_TS)) { - err = cs_etm_set_timestamp(itr, evsel, i); - if (err) - goto out; - } - if (option & ~(BIT(ETM_OPT_CTXTID) | BIT(ETM_OPT_TS))) - /* Nothing else is currently supported */ + err = cs_etm_validate_context_id(itr, evsel, i); + if (err) + goto out; + err = cs_etm_validate_timestamp(itr, evsel, i); + if (err) goto out; } @@ -319,13 +302,6 @@ static int cs_etm_recording_options(struct auxtrace_record *itr, bool privileged = perf_event_paranoid_check(-1); int err = 0; - ptr->evlist = evlist; - ptr->snapshot_mode = opts->auxtrace_snapshot_mode; - - if (!record_opts__no_switch_events(opts) && - perf_can_record_switch_events()) - opts->record_switch_events = true; - evlist__for_each_entry(evlist, evsel) { if (evsel->core.attr.type == cs_etm_pmu->type) { if (cs_etm_evsel) { @@ -333,11 +309,7 @@ static int cs_etm_recording_options(struct auxtrace_record *itr, CORESIGHT_ETM_PMU_NAME); return -EINVAL; } - evsel->core.attr.freq = 0; - evsel->core.attr.sample_period = 1; - evsel->needs_auxtrace_mmap = true; cs_etm_evsel = evsel; - opts->full_auxtrace = true; } } @@ -345,6 +317,16 @@ static int cs_etm_recording_options(struct auxtrace_record *itr, if (!cs_etm_evsel) return 0; + ptr->evlist = evlist; + ptr->snapshot_mode = opts->auxtrace_snapshot_mode; + + if (!record_opts__no_switch_events(opts) && + perf_can_record_switch_events()) + opts->record_switch_events = true; + + cs_etm_evsel->needs_auxtrace_mmap = true; + opts->full_auxtrace = true; + ret = cs_etm_set_sink_attr(cs_etm_pmu, cs_etm_evsel); if (ret) return ret; @@ -414,8 +396,8 @@ static int cs_etm_recording_options(struct auxtrace_record *itr, } } - /* We are in full trace mode but '-m,xyz' wasn't specified */ - if (opts->full_auxtrace && !opts->auxtrace_mmap_pages) { + /* Buffer sizes weren't specified with '-m,xyz' so give some defaults */ + if (!opts->auxtrace_mmap_pages) { if (privileged) { opts->auxtrace_mmap_pages = MiB(4) / page_size; } else { @@ -423,7 +405,6 @@ static int cs_etm_recording_options(struct auxtrace_record *itr, if (opts->mmap_pages == UINT_MAX) opts->mmap_pages = KiB(256) / page_size; } - } if (opts->auxtrace_snapshot_mode) @@ -437,38 +418,36 @@ static int cs_etm_recording_options(struct auxtrace_record *itr, evlist__to_front(evlist, cs_etm_evsel); /* - * In the case of per-cpu mmaps, we need the CPU on the - * AUX event. We also need the contextID in order to be notified + * get the CPU on the sample - need it to associate trace ID in the + * AUX_OUTPUT_HW_ID event, and the AUX event for per-cpu mmaps. + */ + evsel__set_sample_bit(cs_etm_evsel, CPU); + + /* + * Also the case of per-cpu mmaps, need the contextID in order to be notified * when a context switch happened. 
*/ if (!perf_cpu_map__empty(cpus)) { - evsel__set_sample_bit(cs_etm_evsel, CPU); - - err = cs_etm_set_option(itr, cs_etm_evsel, - BIT(ETM_OPT_CTXTID) | BIT(ETM_OPT_TS)); - if (err) - goto out; + evsel__set_config_if_unset(cs_etm_pmu, cs_etm_evsel, + "timestamp", 1); + evsel__set_config_if_unset(cs_etm_pmu, cs_etm_evsel, + "contextid", 1); } /* Add dummy event to keep tracking */ - if (opts->full_auxtrace) { - struct evsel *tracking_evsel; - - err = parse_event(evlist, "dummy:u"); - if (err) - goto out; - - tracking_evsel = evlist__last(evlist); - evlist__set_tracking_event(evlist, tracking_evsel); - - tracking_evsel->core.attr.freq = 0; - tracking_evsel->core.attr.sample_period = 1; + err = parse_event(evlist, "dummy:u"); + if (err) + goto out; + evsel = evlist__last(evlist); + evlist__set_tracking_event(evlist, evsel); + evsel->core.attr.freq = 0; + evsel->core.attr.sample_period = 1; - /* In per-cpu case, always need the time of mmap events etc */ - if (!perf_cpu_map__empty(cpus)) - evsel__set_sample_bit(tracking_evsel, TIME); - } + /* In per-cpu case, always need the time of mmap events etc */ + if (!perf_cpu_map__empty(cpus)) + evsel__set_sample_bit(evsel, TIME); + err = cs_etm_validate_config(itr, cs_etm_evsel); out: return err; } @@ -659,8 +638,12 @@ static bool cs_etm_is_ete(struct auxtrace_record *itr, int cpu) { struct cs_etm_recording *ptr = container_of(itr, struct cs_etm_recording, itr); struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu; - int trcdevarch = cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCDEVARCH]); + int trcdevarch; + if (!cs_etm_pmu_path_exists(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCDEVARCH])) + return false; + + trcdevarch = cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCDEVARCH]); /* * ETE if ARCHVER is 5 (ARCHVER is 4 for ETM) and ARCHPART is 0xA13. * See ETM_DEVARCH_ETE_ARCH in coresight-etm4x.h @@ -675,8 +658,10 @@ static void cs_etm_save_etmv4_header(__u64 data[], struct auxtrace_record *itr, /* Get trace configuration register */ data[CS_ETMV4_TRCCONFIGR] = cs_etmv4_get_config(itr); - /* Get traceID from the framework */ - data[CS_ETMV4_TRCTRACEIDR] = coresight_get_trace_id(cpu); + /* traceID set to legacy version, in case new perf running on older system */ + data[CS_ETMV4_TRCTRACEIDR] = + CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG; + /* Get read-only information from sysFS */ data[CS_ETMV4_TRCIDR0] = cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR0]); @@ -694,8 +679,8 @@ static void cs_etm_save_etmv4_header(__u64 data[], struct auxtrace_record *itr, data[CS_ETMV4_TS_SOURCE] = (__u64) cs_etm_get_ro_signed(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TS_SOURCE]); else { - pr_warning("[%03d] pmu file 'ts_source' not found. Fallback to safe value (-1)\n", - cpu); + pr_debug3("[%03d] pmu file 'ts_source' not found. 
Fallback to safe value (-1)\n", + cpu); data[CS_ETMV4_TS_SOURCE] = (__u64) -1; } } @@ -707,8 +692,10 @@ static void cs_etm_save_ete_header(__u64 data[], struct auxtrace_record *itr, in /* Get trace configuration register */ data[CS_ETE_TRCCONFIGR] = cs_etmv4_get_config(itr); - /* Get traceID from the framework */ - data[CS_ETE_TRCTRACEIDR] = coresight_get_trace_id(cpu); + /* traceID set to legacy version, in case new perf running on older system */ + data[CS_ETE_TRCTRACEIDR] = + CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG; + /* Get read-only information from sysFS */ data[CS_ETE_TRCIDR0] = cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCIDR0]); @@ -729,8 +716,8 @@ static void cs_etm_save_ete_header(__u64 data[], struct auxtrace_record *itr, in data[CS_ETE_TS_SOURCE] = (__u64) cs_etm_get_ro_signed(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TS_SOURCE]); else { - pr_warning("[%03d] pmu file 'ts_source' not found. Fallback to safe value (-1)\n", - cpu); + pr_debug3("[%03d] pmu file 'ts_source' not found. Fallback to safe value (-1)\n", + cpu); data[CS_ETE_TS_SOURCE] = (__u64) -1; } } @@ -764,9 +751,9 @@ static void cs_etm_get_metadata(int cpu, u32 *offset, magic = __perf_cs_etmv3_magic; /* Get configuration register */ info->priv[*offset + CS_ETM_ETMCR] = cs_etm_get_config(itr); - /* Get traceID from the framework */ + /* traceID set to legacy value in case new perf running on old system */ info->priv[*offset + CS_ETM_ETMTRACEIDR] = - coresight_get_trace_id(cpu); + CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG; /* Get read-only information from sysFS */ info->priv[*offset + CS_ETM_ETMCCER] = cs_etm_get_ro(cs_etm_pmu, cpu, @@ -925,3 +912,22 @@ struct auxtrace_record *cs_etm_record_init(int *err) out: return NULL; } + +/* + * Set a default config to enable the user changed config tracking mechanism + * (CFG_CHG and evsel__set_config_if_unset()). If no default is set then user + * changes aren't tracked. + */ +struct perf_event_attr * +cs_etm_get_default_config(struct perf_pmu *pmu __maybe_unused) +{ + struct perf_event_attr *attr; + + attr = zalloc(sizeof(struct perf_event_attr)); + if (!attr) + return NULL; + + attr->sample_period = 1; + + return attr; +} diff --git a/tools/perf/arch/arm/util/pmu.c b/tools/perf/arch/arm/util/pmu.c index 887c8addc491..860a8b42b4b5 100644 --- a/tools/perf/arch/arm/util/pmu.c +++ b/tools/perf/arch/arm/util/pmu.c @@ -12,6 +12,7 @@ #include "arm-spe.h" #include "hisi-ptt.h" #include "../../../util/pmu.h" +#include "../cs-etm.h" struct perf_event_attr *perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused) @@ -20,6 +21,7 @@ struct perf_event_attr if (!strcmp(pmu->name, CORESIGHT_ETM_PMU_NAME)) { /* add ETM default config here */ pmu->selectable = true; + return cs_etm_get_default_config(pmu); #if defined(__aarch64__) } else if (strstarts(pmu->name, ARM_SPE_PMU_NAME)) { return arm_spe_pmu_default_config(pmu); diff --git a/tools/perf/arch/arm64/tests/dwarf-unwind.c b/tools/perf/arch/arm64/tests/dwarf-unwind.c index 46147a483049..90a7ef293ce7 100644 --- a/tools/perf/arch/arm64/tests/dwarf-unwind.c +++ b/tools/perf/arch/arm64/tests/dwarf-unwind.c @@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample, return -1; } - stack_size = map->end - sp; + stack_size = map__end(map) - sp; stack_size = stack_size > STACK_SIZE ? 
STACK_SIZE : stack_size; memcpy(buf, (void *) sp, stack_size); diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c index d4c234076541..3b1676ff03f9 100644 --- a/tools/perf/arch/arm64/util/arm-spe.c +++ b/tools/perf/arch/arm64/util/arm-spe.c @@ -36,29 +36,6 @@ struct arm_spe_recording { bool *wrapped; }; -static void arm_spe_set_timestamp(struct auxtrace_record *itr, - struct evsel *evsel) -{ - struct arm_spe_recording *ptr; - struct perf_pmu *arm_spe_pmu; - struct evsel_config_term *term = evsel__get_config_term(evsel, CFG_CHG); - u64 user_bits = 0, bit; - - ptr = container_of(itr, struct arm_spe_recording, itr); - arm_spe_pmu = ptr->arm_spe_pmu; - - if (term) - user_bits = term->val.cfg_chg; - - bit = perf_pmu__format_bits(&arm_spe_pmu->format, "ts_enable"); - - /* Skip if user has set it */ - if (bit & user_bits) - return; - - evsel->core.attr.config |= bit; -} - static size_t arm_spe_info_priv_size(struct auxtrace_record *itr __maybe_unused, struct evlist *evlist __maybe_unused) @@ -238,7 +215,8 @@ static int arm_spe_recording_options(struct auxtrace_record *itr, */ if (!perf_cpu_map__empty(cpus)) { evsel__set_sample_bit(arm_spe_evsel, CPU); - arm_spe_set_timestamp(itr, arm_spe_evsel); + evsel__set_config_if_unset(arm_spe_pmu, arm_spe_evsel, + "ts_enable", 1); } /* @@ -479,7 +457,7 @@ static void arm_spe_recording_free(struct auxtrace_record *itr) struct arm_spe_recording *sper = container_of(itr, struct arm_spe_recording, itr); - free(sper->wrapped); + zfree(&sper->wrapped); free(sper); } diff --git a/tools/perf/arch/arm64/util/kvm-stat.c b/tools/perf/arch/arm64/util/kvm-stat.c index 73d18e0ed6f6..6611aa21cba9 100644 --- a/tools/perf/arch/arm64/util/kvm-stat.c +++ b/tools/perf/arch/arm64/util/kvm-stat.c @@ -11,7 +11,6 @@ define_exit_reasons_table(arm64_trap_exit_reasons, kvm_arm_exception_class); const char *kvm_trap_exit_reason = "esr_ec"; const char *vcpu_id_str = "id"; -const int decode_str_len = 20; const char *kvm_exit_reason = "ret"; const char *kvm_entry_trace = "kvm:kvm_entry"; const char *kvm_exit_trace = "kvm:kvm_exit"; @@ -45,14 +44,14 @@ static bool event_begin(struct evsel *evsel, struct perf_sample *sample __maybe_unused, struct event_key *key __maybe_unused) { - return !strcmp(evsel->name, kvm_entry_trace); + return evsel__name_is(evsel, kvm_entry_trace); } static bool event_end(struct evsel *evsel, struct perf_sample *sample, struct event_key *key) { - if (!strcmp(evsel->name, kvm_exit_trace)) { + if (evsel__name_is(evsel, kvm_exit_trace)) { event_get_key(evsel, sample, key); return true; } diff --git a/tools/perf/arch/common.c b/tools/perf/arch/common.c index cfe63dfab03a..b951374bc49d 100644 --- a/tools/perf/arch/common.c +++ b/tools/perf/arch/common.c @@ -128,7 +128,7 @@ static int lookup_triplets(const char *const *triplets, const char *name) } static int perf_env__lookup_binutils_path(struct perf_env *env, - const char *name, const char **path) + const char *name, char **path) { int idx; const char *arch = perf_env__arch(env), *cross_env; @@ -200,7 +200,7 @@ out_error: return -1; } -int perf_env__lookup_objdump(struct perf_env *env, const char **path) +int perf_env__lookup_objdump(struct perf_env *env, char **path) { /* * For live mode, env->arch will be NULL and we can use diff --git a/tools/perf/arch/common.h b/tools/perf/arch/common.h index e965ed8bb328..4224c299cc70 100644 --- a/tools/perf/arch/common.h +++ b/tools/perf/arch/common.h @@ -6,7 +6,7 @@ struct perf_env; -int perf_env__lookup_objdump(struct perf_env *env, const 
char **path); +int perf_env__lookup_objdump(struct perf_env *env, char **path); bool perf_env__single_address_space(struct perf_env *env); #endif /* ARCH_PERF_COMMON_H */ diff --git a/tools/perf/arch/powerpc/tests/dwarf-unwind.c b/tools/perf/arch/powerpc/tests/dwarf-unwind.c index c9cb4b059392..32fffb593fbf 100644 --- a/tools/perf/arch/powerpc/tests/dwarf-unwind.c +++ b/tools/perf/arch/powerpc/tests/dwarf-unwind.c @@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample, return -1; } - stack_size = map->end - sp; + stack_size = map__end(map) - sp; stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size; memcpy(buf, (void *) sp, stack_size); diff --git a/tools/perf/arch/powerpc/util/header.c b/tools/perf/arch/powerpc/util/header.c index 78eef77d8a8d..c8d0dc775e5d 100644 --- a/tools/perf/arch/powerpc/util/header.c +++ b/tools/perf/arch/powerpc/util/header.c @@ -45,6 +45,6 @@ int arch_get_runtimeparam(const struct pmu_metric *pm) int count; char path[PATH_MAX] = "/devices/hv_24x7/interface/"; - atoi(pm->aggr_mode) == PerChip ? strcat(path, "sockets") : strcat(path, "coresperchip"); + strcat(path, pm->aggr_mode == PerChip ? "sockets" : "coresperchip"); return sysfs__read_int(path, &count) < 0 ? 1 : count; } diff --git a/tools/perf/arch/powerpc/util/kvm-stat.c b/tools/perf/arch/powerpc/util/kvm-stat.c index 1a9b40ea92a5..ea1220d66b67 100644 --- a/tools/perf/arch/powerpc/util/kvm-stat.c +++ b/tools/perf/arch/powerpc/util/kvm-stat.c @@ -14,7 +14,6 @@ #define NR_TPS 4 const char *vcpu_id_str = "vcpu_id"; -const int decode_str_len = 40; const char *kvm_entry_trace = "kvm_hv:kvm_guest_enter"; const char *kvm_exit_trace = "kvm_hv:kvm_guest_exit"; @@ -61,13 +60,13 @@ static bool hcall_event_end(struct evsel *evsel, struct perf_sample *sample __maybe_unused, struct event_key *key __maybe_unused) { - return (!strcmp(evsel->name, kvm_events_tp[3])); + return (evsel__name_is(evsel, kvm_events_tp[3])); } static bool hcall_event_begin(struct evsel *evsel, struct perf_sample *sample, struct event_key *key) { - if (!strcmp(evsel->name, kvm_events_tp[2])) { + if (evsel__name_is(evsel, kvm_events_tp[2])) { hcall_event_get_key(evsel, sample, key); return true; } @@ -80,7 +79,7 @@ static void hcall_event_decode_key(struct perf_kvm_stat *kvm __maybe_unused, { const char *hcall_reason = get_hcall_exit_reason(key->key); - scnprintf(decode, decode_str_len, "%s", hcall_reason); + scnprintf(decode, KVM_EVENT_NAME_LEN, "%s", hcall_reason); } static struct kvm_events_ops hcall_events = { diff --git a/tools/perf/arch/powerpc/util/skip-callchain-idx.c b/tools/perf/arch/powerpc/util/skip-callchain-idx.c index 20cd6244863b..b7223feec770 100644 --- a/tools/perf/arch/powerpc/util/skip-callchain-idx.c +++ b/tools/perf/arch/powerpc/util/skip-callchain-idx.c @@ -255,14 +255,14 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain) thread__find_symbol(thread, PERF_RECORD_MISC_USER, ip, &al); if (al.map) - dso = al.map->dso; + dso = map__dso(al.map); if (!dso) { pr_debug("%" PRIx64 " dso is NULL\n", ip); return skip_slot; } - rc = check_return_addr(dso, al.map->start, ip); + rc = check_return_addr(dso, map__start(al.map), ip); pr_debug("[DSO %s, sym %s, ip 0x%" PRIx64 "] rc %d\n", dso->long_name, al.sym->name, ip, rc); diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c index 0856b32f9e08..947bfad7aa59 100644 --- a/tools/perf/arch/powerpc/util/sym-handling.c +++ b/tools/perf/arch/powerpc/util/sym-handling.c @@ -104,7 +104,7 @@ void 
arch__fix_tev_from_maps(struct perf_probe_event *pev, lep_offset = PPC64_LOCAL_ENTRY_OFFSET(sym->arch_sym); - if (map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS) + if (map__dso(map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS) tev->point.offset += PPC64LE_LEP_OFFSET; else if (lep_offset) { if (pev->uprobes) @@ -131,7 +131,7 @@ void arch__post_process_probe_trace_events(struct perf_probe_event *pev, for (i = 0; i < ntevs; i++) { tev = &pev->tevs[i]; map__for_each_symbol(map, sym, tmp) { - if (map->unmap_ip(map, sym->start) == tev->point.address) { + if (map__unmap_ip(map, sym->start) == tev->point.address) { arch__fix_tev_from_maps(pev, tev, map, sym); break; } diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c index 0e136630659e..de925b0e35ce 100644 --- a/tools/perf/arch/s390/annotate/instructions.c +++ b/tools/perf/arch/s390/annotate/instructions.c @@ -39,7 +39,7 @@ static int s390_call__parse(struct arch *arch, struct ins_operands *ops, target.addr = map__objdump_2mem(map, ops->target.addr); if (maps__find_ams(ms->maps, &target) == 0 && - map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr) + map__rip_2objdump(target.ms.map, map__map_ip(target.ms.map, target.addr)) == ops->target.addr) ops->target.sym = target.ms.sym; return 0; diff --git a/tools/perf/arch/s390/util/Build b/tools/perf/arch/s390/util/Build index db6884086997..fa66f15a14ec 100644 --- a/tools/perf/arch/s390/util/Build +++ b/tools/perf/arch/s390/util/Build @@ -6,5 +6,6 @@ perf-$(CONFIG_DWARF) += dwarf-regs.o perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o perf-y += machine.o +perf-y += pmu.o perf-$(CONFIG_AUXTRACE) += auxtrace.o diff --git a/tools/perf/arch/s390/util/kvm-stat.c b/tools/perf/arch/s390/util/kvm-stat.c index 34da89ced29a..0aed92df51ba 100644 --- a/tools/perf/arch/s390/util/kvm-stat.c +++ b/tools/perf/arch/s390/util/kvm-stat.c @@ -19,7 +19,6 @@ define_exit_reasons_table(sie_diagnose_codes, diagnose_codes); define_exit_reasons_table(sie_icpt_prog_codes, icpt_prog_codes); const char *vcpu_id_str = "id"; -const int decode_str_len = 40; const char *kvm_exit_reason = "icptcode"; const char *kvm_entry_trace = "kvm:kvm_s390_sie_enter"; const char *kvm_exit_trace = "kvm:kvm_s390_sie_exit"; diff --git a/tools/perf/arch/s390/util/pmu.c b/tools/perf/arch/s390/util/pmu.c new file mode 100644 index 000000000000..11f03f32e3fd --- /dev/null +++ b/tools/perf/arch/s390/util/pmu.c @@ -0,0 +1,23 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright IBM Corp. 2023 + * Author(s): Thomas Richter <tmricht@linux.ibm.com> + */ + +#include <string.h> + +#include "../../../util/pmu.h" + +#define S390_PMUPAI_CRYPTO "pai_crypto" +#define S390_PMUPAI_EXT "pai_ext" +#define S390_PMUCPUM_CF "cpum_cf" + +struct perf_event_attr *perf_pmu__get_default_config(struct perf_pmu *pmu) +{ + if (!strcmp(pmu->name, S390_PMUPAI_CRYPTO) || + !strcmp(pmu->name, S390_PMUPAI_EXT) || + !strcmp(pmu->name, S390_PMUCPUM_CF)) + pmu->selectable = true; + return NULL; +} diff --git a/tools/perf/arch/x86/tests/dwarf-unwind.c b/tools/perf/arch/x86/tests/dwarf-unwind.c index a54dea7c112f..497593be80f2 100644 --- a/tools/perf/arch/x86/tests/dwarf-unwind.c +++ b/tools/perf/arch/x86/tests/dwarf-unwind.c @@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample, return -1; } - stack_size = map->end - sp; + stack_size = map__end(map) - sp; stack_size = stack_size > STACK_SIZE ? 
STACK_SIZE : stack_size; memcpy(buf, (void *) sp, stack_size); diff --git a/tools/perf/arch/x86/tests/insn-x86.c b/tools/perf/arch/x86/tests/insn-x86.c index 94b490c434d0..735257d205b5 100644 --- a/tools/perf/arch/x86/tests/insn-x86.c +++ b/tools/perf/arch/x86/tests/insn-x86.c @@ -29,6 +29,8 @@ struct test_data test_data_64[] = { #include "insn-x86-dat-64.c" {{0x0f, 0x01, 0xee}, 3, 0, NULL, NULL, "0f 01 ee \trdpkru"}, {{0x0f, 0x01, 0xef}, 3, 0, NULL, NULL, "0f 01 ef \twrpkru"}, + {{0xf2, 0x0f, 0x01, 0xca}, 4, 0, "erets", "indirect", "f2 0f 01 ca \terets"}, + {{0xf3, 0x0f, 0x01, 0xca}, 4, 0, "eretu", "indirect", "f3 0f 01 ca \teretu"}, {{0}, 0, 0, NULL, NULL, NULL}, }; @@ -49,6 +51,8 @@ static int get_op(const char *op_str) {"syscall", INTEL_PT_OP_SYSCALL}, {"sysret", INTEL_PT_OP_SYSRET}, {"vmentry", INTEL_PT_OP_VMENTRY}, + {"erets", INTEL_PT_OP_ERETS}, + {"eretu", INTEL_PT_OP_ERETU}, {NULL, 0}, }; struct val_data *val; diff --git a/tools/perf/arch/x86/util/auxtrace.c b/tools/perf/arch/x86/util/auxtrace.c index 3da506e13f49..330d03216b0e 100644 --- a/tools/perf/arch/x86/util/auxtrace.c +++ b/tools/perf/arch/x86/util/auxtrace.c @@ -26,11 +26,7 @@ struct auxtrace_record *auxtrace_record__init_intel(struct evlist *evlist, bool found_bts = false; intel_pt_pmu = perf_pmu__find(INTEL_PT_PMU_NAME); - if (intel_pt_pmu) - intel_pt_pmu->auxtrace = true; intel_bts_pmu = perf_pmu__find(INTEL_BTS_PMU_NAME); - if (intel_bts_pmu) - intel_bts_pmu->auxtrace = true; evlist__for_each_entry(evlist, evsel) { if (intel_pt_pmu && evsel->core.attr.type == intel_pt_pmu->type) diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c index e4288d09f3a0..5741ffe47312 100644 --- a/tools/perf/arch/x86/util/event.c +++ b/tools/perf/arch/x86/util/event.c @@ -19,7 +19,7 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool, struct machine *machine) { int rc = 0; - struct map *pos; + struct map_rb_node *pos; struct maps *kmaps = machine__kernel_maps(machine); union perf_event *event = zalloc(sizeof(event->mmap) + machine->id_hdr_size); @@ -33,11 +33,12 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool, maps__for_each_entry(kmaps, pos) { struct kmap *kmap; size_t size; + struct map *map = pos->map; - if (!__map__is_extra_kernel_map(pos)) + if (!__map__is_extra_kernel_map(map)) continue; - kmap = map__kmap(pos); + kmap = map__kmap(map); size = sizeof(event->mmap) - sizeof(event->mmap.filename) + PERF_ALIGN(strlen(kmap->name) + 1, sizeof(u64)) + @@ -58,9 +59,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool, event->mmap.header.size = size; - event->mmap.start = pos->start; - event->mmap.len = pos->end - pos->start; - event->mmap.pgoff = pos->pgoff; + event->mmap.start = map__start(map); + event->mmap.len = map__size(map); + event->mmap.pgoff = map__pgoff(map); event->mmap.pid = machine->pid; strlcpy(event->mmap.filename, kmap->name, PATH_MAX); diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c index cb59ce9b9638..d4193479a364 100644 --- a/tools/perf/arch/x86/util/evlist.c +++ b/tools/perf/arch/x86/util/evlist.c @@ -59,35 +59,28 @@ int arch_evlist__add_default_attrs(struct evlist *evlist, struct perf_event_attr *attrs, size_t nr_attrs) { - if (nr_attrs) - return ___evlist__add_default_attrs(evlist, attrs, nr_attrs); + if (!nr_attrs) + return 0; - return topdown_parse_events(evlist); + return ___evlist__add_default_attrs(evlist, attrs, nr_attrs); } -struct evsel *arch_evlist__leader(struct list_head *list) +int 
arch_evlist__cmp(const struct evsel *lhs, const struct evsel *rhs) { - struct evsel *evsel, *first, *slots = NULL; - bool has_topdown = false; - - first = list_first_entry(list, struct evsel, core.node); - - if (!topdown_sys_has_perf_metrics()) - return first; - - /* If there is a slots event and a topdown event then the slots event comes first. */ - __evlist__for_each_entry(list, evsel) { - if (evsel->pmu_name && !strncmp(evsel->pmu_name, "cpu", 3) && evsel->name) { - if (strcasestr(evsel->name, "slots")) { - slots = evsel; - if (slots == first) - return first; - } - if (strcasestr(evsel->name, "topdown")) - has_topdown = true; - if (slots && has_topdown) - return slots; - } + if (topdown_sys_has_perf_metrics() && + (!lhs->pmu_name || !strncmp(lhs->pmu_name, "cpu", 3))) { + /* Ensure the topdown slots comes first. */ + if (strcasestr(lhs->name, "slots")) + return -1; + if (strcasestr(rhs->name, "slots")) + return 1; + /* Followed by topdown events. */ + if (strcasestr(lhs->name, "topdown") && !strcasestr(rhs->name, "topdown")) + return -1; + if (!strcasestr(lhs->name, "topdown") && strcasestr(rhs->name, "topdown")) + return 1; } - return first; + + /* Default ordering by insertion index. */ + return lhs->core.idx - rhs->core.idx; } diff --git a/tools/perf/arch/x86/util/intel-pt.c b/tools/perf/arch/x86/util/intel-pt.c index 1e39a034cee9..17336da08b58 100644 --- a/tools/perf/arch/x86/util/intel-pt.c +++ b/tools/perf/arch/x86/util/intel-pt.c @@ -194,16 +194,19 @@ static u64 intel_pt_default_config(struct perf_pmu *intel_pt_pmu) int pos = 0; u64 config; char c; + int dirfd; + + dirfd = perf_pmu__event_source_devices_fd(); pos += scnprintf(buf + pos, sizeof(buf) - pos, "tsc"); - if (perf_pmu__scan_file(intel_pt_pmu, "caps/mtc", "%d", - &mtc) != 1) + if (perf_pmu__scan_file_at(intel_pt_pmu, dirfd, "caps/mtc", "%d", + &mtc) != 1) mtc = 1; if (mtc) { - if (perf_pmu__scan_file(intel_pt_pmu, "caps/mtc_periods", "%x", - &mtc_periods) != 1) + if (perf_pmu__scan_file_at(intel_pt_pmu, dirfd, "caps/mtc_periods", "%x", + &mtc_periods) != 1) mtc_periods = 0; if (mtc_periods) { mtc_period = intel_pt_pick_bit(mtc_periods, 3); @@ -212,13 +215,13 @@ static u64 intel_pt_default_config(struct perf_pmu *intel_pt_pmu) } } - if (perf_pmu__scan_file(intel_pt_pmu, "caps/psb_cyc", "%d", - &psb_cyc) != 1) + if (perf_pmu__scan_file_at(intel_pt_pmu, dirfd, "caps/psb_cyc", "%d", + &psb_cyc) != 1) psb_cyc = 1; if (psb_cyc && mtc_periods) { - if (perf_pmu__scan_file(intel_pt_pmu, "caps/psb_periods", "%x", - &psb_periods) != 1) + if (perf_pmu__scan_file_at(intel_pt_pmu, dirfd, "caps/psb_periods", "%x", + &psb_periods) != 1) psb_periods = 0; if (psb_periods) { psb_period = intel_pt_pick_bit(psb_periods, 3); @@ -227,8 +230,8 @@ static u64 intel_pt_default_config(struct perf_pmu *intel_pt_pmu) } } - if (perf_pmu__scan_file(intel_pt_pmu, "format/pt", "%c", &c) == 1 && - perf_pmu__scan_file(intel_pt_pmu, "format/branch", "%c", &c) == 1) + if (perf_pmu__scan_file_at(intel_pt_pmu, dirfd, "format/pt", "%c", &c) == 1 && + perf_pmu__scan_file_at(intel_pt_pmu, dirfd, "format/branch", "%c", &c) == 1) pos += scnprintf(buf + pos, sizeof(buf) - pos, ",pt,branch"); pr_debug2("%s default config: %s\n", intel_pt_pmu->name, buf); @@ -236,6 +239,7 @@ static u64 intel_pt_default_config(struct perf_pmu *intel_pt_pmu) intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format, buf, &config); + close(dirfd); return config; } @@ -488,7 +492,7 @@ static void intel_pt_valid_str(char *str, size_t len, u64 valid) } } -static int 
intel_pt_val_config_term(struct perf_pmu *intel_pt_pmu, +static int intel_pt_val_config_term(struct perf_pmu *intel_pt_pmu, int dirfd, const char *caps, const char *name, const char *supported, u64 config) { @@ -498,11 +502,11 @@ static int intel_pt_val_config_term(struct perf_pmu *intel_pt_pmu, u64 bits; int ok; - if (perf_pmu__scan_file(intel_pt_pmu, caps, "%llx", &valid) != 1) + if (perf_pmu__scan_file_at(intel_pt_pmu, dirfd, caps, "%llx", &valid) != 1) valid = 0; if (supported && - perf_pmu__scan_file(intel_pt_pmu, supported, "%d", &ok) == 1 && !ok) + perf_pmu__scan_file_at(intel_pt_pmu, dirfd, supported, "%d", &ok) == 1 && !ok) valid = 0; valid |= 1; @@ -531,56 +535,45 @@ out_err: static int intel_pt_validate_config(struct perf_pmu *intel_pt_pmu, struct evsel *evsel) { - int err; + int err, dirfd; char c; if (!evsel) return 0; + dirfd = perf_pmu__event_source_devices_fd(); + if (dirfd < 0) + return dirfd; + /* * If supported, force pass-through config term (pt=1) even if user * sets pt=0, which avoids senseless kernel errors. */ - if (perf_pmu__scan_file(intel_pt_pmu, "format/pt", "%c", &c) == 1 && + if (perf_pmu__scan_file_at(intel_pt_pmu, dirfd, "format/pt", "%c", &c) == 1 && !(evsel->core.attr.config & 1)) { pr_warning("pt=0 doesn't make sense, forcing pt=1\n"); evsel->core.attr.config |= 1; } - err = intel_pt_val_config_term(intel_pt_pmu, "caps/cycle_thresholds", + err = intel_pt_val_config_term(intel_pt_pmu, dirfd, "caps/cycle_thresholds", "cyc_thresh", "caps/psb_cyc", evsel->core.attr.config); if (err) - return err; + goto out; - err = intel_pt_val_config_term(intel_pt_pmu, "caps/mtc_periods", + err = intel_pt_val_config_term(intel_pt_pmu, dirfd, "caps/mtc_periods", "mtc_period", "caps/mtc", evsel->core.attr.config); if (err) - return err; + goto out; - return intel_pt_val_config_term(intel_pt_pmu, "caps/psb_periods", + err = intel_pt_val_config_term(intel_pt_pmu, dirfd, "caps/psb_periods", "psb_period", "caps/psb_cyc", evsel->core.attr.config); -} -static void intel_pt_config_sample_mode(struct perf_pmu *intel_pt_pmu, - struct evsel *evsel) -{ - u64 user_bits = 0, bits; - struct evsel_config_term *term = evsel__get_config_term(evsel, CFG_CHG); - - if (term) - user_bits = term->val.cfg_chg; - - bits = perf_pmu__format_bits(&intel_pt_pmu->format, "psb_period"); - - /* Did user change psb_period */ - if (bits & user_bits) - return; - - /* Set psb_period to 0 */ - evsel->core.attr.config &= ~bits; +out: + close(dirfd); + return err; } static void intel_pt_min_max_sample_sz(struct evlist *evlist, @@ -674,7 +667,8 @@ static int intel_pt_recording_options(struct auxtrace_record *itr, return 0; if (opts->auxtrace_sample_mode) - intel_pt_config_sample_mode(intel_pt_pmu, intel_pt_evsel); + evsel__set_config_if_unset(intel_pt_pmu, intel_pt_evsel, + "psb_period", 0); err = intel_pt_validate_config(intel_pt_pmu, intel_pt_evsel); if (err) diff --git a/tools/perf/arch/x86/util/iostat.c b/tools/perf/arch/x86/util/iostat.c index 7eb0a7b00b95..df7b5dfcc26a 100644 --- a/tools/perf/arch/x86/util/iostat.c +++ b/tools/perf/arch/x86/util/iostat.c @@ -10,6 +10,7 @@ #include <api/fs/fs.h> #include <linux/kernel.h> #include <linux/err.h> +#include <linux/zalloc.h> #include <limits.h> #include <stdio.h> #include <string.h> @@ -100,8 +101,8 @@ static void iio_root_ports_list_free(struct iio_root_ports_list *list) if (list) { for (idx = 0; idx < list->nr_entries; idx++) - free(list->rps[idx]); - free(list->rps); + zfree(&list->rps[idx]); + zfree(&list->rps); free(list); } } @@ -390,7 +391,7 @@ void 
iostat_release(struct evlist *evlist) evlist__for_each_entry(evlist, evsel) { if (rp != evsel->priv) { rp = evsel->priv; - free(evsel->priv); + zfree(&evsel->priv); } } } diff --git a/tools/perf/arch/x86/util/kvm-stat.c b/tools/perf/arch/x86/util/kvm-stat.c index c5dd54f6ef5e..424716518b75 100644 --- a/tools/perf/arch/x86/util/kvm-stat.c +++ b/tools/perf/arch/x86/util/kvm-stat.c @@ -18,7 +18,6 @@ static struct kvm_events_ops exit_events = { }; const char *vcpu_id_str = "vcpu_id"; -const int decode_str_len = 20; const char *kvm_exit_reason = "exit_reason"; const char *kvm_entry_trace = "kvm:kvm_entry"; const char *kvm_exit_trace = "kvm:kvm_exit"; @@ -47,7 +46,7 @@ static bool mmio_event_begin(struct evsel *evsel, return true; /* MMIO write begin event in kernel. */ - if (!strcmp(evsel->name, "kvm:kvm_mmio") && + if (evsel__name_is(evsel, "kvm:kvm_mmio") && evsel__intval(evsel, sample, "type") == KVM_TRACE_MMIO_WRITE) { mmio_event_get_key(evsel, sample, key); return true; @@ -64,7 +63,7 @@ static bool mmio_event_end(struct evsel *evsel, struct perf_sample *sample, return true; /* MMIO read end event in kernel.*/ - if (!strcmp(evsel->name, "kvm:kvm_mmio") && + if (evsel__name_is(evsel, "kvm:kvm_mmio") && evsel__intval(evsel, sample, "type") == KVM_TRACE_MMIO_READ) { mmio_event_get_key(evsel, sample, key); return true; @@ -77,7 +76,7 @@ static void mmio_event_decode_key(struct perf_kvm_stat *kvm __maybe_unused, struct event_key *key, char *decode) { - scnprintf(decode, decode_str_len, "%#lx:%s", + scnprintf(decode, KVM_EVENT_NAME_LEN, "%#lx:%s", (unsigned long)key->key, key->info == KVM_TRACE_MMIO_WRITE ? "W" : "R"); } @@ -102,7 +101,7 @@ static bool ioport_event_begin(struct evsel *evsel, struct perf_sample *sample, struct event_key *key) { - if (!strcmp(evsel->name, "kvm:kvm_pio")) { + if (evsel__name_is(evsel, "kvm:kvm_pio")) { ioport_event_get_key(evsel, sample, key); return true; } @@ -121,7 +120,7 @@ static void ioport_event_decode_key(struct perf_kvm_stat *kvm __maybe_unused, struct event_key *key, char *decode) { - scnprintf(decode, decode_str_len, "%#llx:%s", + scnprintf(decode, KVM_EVENT_NAME_LEN, "%#llx:%s", (unsigned long long)key->key, key->info ? "POUT" : "PIN"); } @@ -146,7 +145,7 @@ static bool msr_event_begin(struct evsel *evsel, struct perf_sample *sample, struct event_key *key) { - if (!strcmp(evsel->name, "kvm:kvm_msr")) { + if (evsel__name_is(evsel, "kvm:kvm_msr")) { msr_event_get_key(evsel, sample, key); return true; } @@ -165,7 +164,7 @@ static void msr_event_decode_key(struct perf_kvm_stat *kvm __maybe_unused, struct event_key *key, char *decode) { - scnprintf(decode, decode_str_len, "%#llx:%s", + scnprintf(decode, KVM_EVENT_NAME_LEN, "%#llx:%s", (unsigned long long)key->key, key->info ? 
"W" : "R"); } diff --git a/tools/perf/arch/x86/util/pmu.c b/tools/perf/arch/x86/util/pmu.c index 358340b34243..3c0de3370d7e 100644 --- a/tools/perf/arch/x86/util/pmu.c +++ b/tools/perf/arch/x86/util/pmu.c @@ -27,10 +27,14 @@ static bool cached_list; struct perf_event_attr *perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused) { #ifdef HAVE_AUXTRACE_SUPPORT - if (!strcmp(pmu->name, INTEL_PT_PMU_NAME)) + if (!strcmp(pmu->name, INTEL_PT_PMU_NAME)) { + pmu->auxtrace = true; return intel_pt_pmu_default_config(pmu); - if (!strcmp(pmu->name, INTEL_BTS_PMU_NAME)) + } + if (!strcmp(pmu->name, INTEL_BTS_PMU_NAME)) { + pmu->auxtrace = true; pmu->selectable = true; + } #endif return NULL; } @@ -67,7 +71,7 @@ out_delete: static int setup_pmu_alias_list(void) { - char path[PATH_MAX]; + int fd, dirfd; DIR *dir; struct dirent *dent; struct pmu_alias *pmu_alias; @@ -75,10 +79,11 @@ static int setup_pmu_alias_list(void) FILE *file; int ret = -ENOMEM; - if (!perf_pmu__event_source_devices_scnprintf(path, sizeof(path))) + dirfd = perf_pmu__event_source_devices_fd(); + if (dirfd < 0) return -1; - dir = opendir(path); + dir = fdopendir(dirfd); if (!dir) return -errno; @@ -87,11 +92,11 @@ static int setup_pmu_alias_list(void) !strcmp(dent->d_name, "..")) continue; - perf_pmu__pathname_scnprintf(path, sizeof(path), dent->d_name, "alias"); - if (!file_available(path)) + fd = perf_pmu__pathname_fd(dirfd, dent->d_name, "alias", O_RDONLY); + if (fd < 0) continue; - file = fopen(path, "r"); + file = fdopen(fd, "r"); if (!file) continue; diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c index 54810f9acd6f..9ad5e5c7bd27 100644 --- a/tools/perf/arch/x86/util/topdown.c +++ b/tools/perf/arch/x86/util/topdown.c @@ -1,19 +1,11 @@ // SPDX-License-Identifier: GPL-2.0 -#include <stdio.h> #include "api/fs/fs.h" +#include "util/evsel.h" #include "util/pmu.h" #include "util/topdown.h" -#include "util/evlist.h" -#include "util/debug.h" -#include "util/pmu-hybrid.h" #include "topdown.h" #include "evsel.h" -#define TOPDOWN_L1_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}" -#define TOPDOWN_L1_EVENTS_CORE "{slots,cpu_core/topdown-retiring/,cpu_core/topdown-bad-spec/,cpu_core/topdown-fe-bound/,cpu_core/topdown-be-bound/}" -#define TOPDOWN_L2_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}" -#define TOPDOWN_L2_EVENTS_CORE "{slots,cpu_core/topdown-retiring/,cpu_core/topdown-bad-spec/,cpu_core/topdown-fe-bound/,cpu_core/topdown-be-bound/,cpu_core/topdown-heavy-ops/,cpu_core/topdown-br-mispredict/,cpu_core/topdown-fetch-lat/,cpu_core/topdown-mem-bound/}" - /* Check whether there is a PMU which supports the perf metrics. */ bool topdown_sys_has_perf_metrics(void) { @@ -38,30 +30,6 @@ bool topdown_sys_has_perf_metrics(void) return has_perf_metrics; } -/* - * Check whether we can use a group for top down. - * Without a group may get bad results due to multiplexing. - */ -bool arch_topdown_check_group(bool *warn) -{ - int n; - - if (sysctl__read_int("kernel/nmi_watchdog", &n) < 0) - return false; - if (n > 0) { - *warn = true; - return false; - } - return true; -} - -void arch_topdown_group_warn(void) -{ - fprintf(stderr, - "nmi_watchdog enabled with topdown. 
May give wrong results.\n" - "Disable with echo 0 > /proc/sys/kernel/nmi_watchdog\n"); -} - #define TOPDOWN_SLOTS 0x0400 /* @@ -70,7 +38,6 @@ void arch_topdown_group_warn(void) * Only Topdown metric supports sample-read. The slots * event must be the leader of the topdown group. */ - bool arch_topdown_sample_read(struct evsel *leader) { if (!evsel__sys_has_perf_metrics(leader)) @@ -81,46 +48,3 @@ bool arch_topdown_sample_read(struct evsel *leader) return false; } - -const char *arch_get_topdown_pmu_name(struct evlist *evlist, bool warn) -{ - const char *pmu_name; - - if (!perf_pmu__has_hybrid()) - return "cpu"; - - if (!evlist->hybrid_pmu_name) { - if (warn) - pr_warning("WARNING: default to use cpu_core topdown events\n"); - evlist->hybrid_pmu_name = perf_pmu__hybrid_type_to_pmu("core"); - } - - pmu_name = evlist->hybrid_pmu_name; - - return pmu_name; -} - -int topdown_parse_events(struct evlist *evlist) -{ - const char *topdown_events; - const char *pmu_name; - - if (!topdown_sys_has_perf_metrics()) - return 0; - - pmu_name = arch_get_topdown_pmu_name(evlist, false); - - if (pmu_have_event(pmu_name, "topdown-heavy-ops")) { - if (!strcmp(pmu_name, "cpu_core")) - topdown_events = TOPDOWN_L2_EVENTS_CORE; - else - topdown_events = TOPDOWN_L2_EVENTS; - } else { - if (!strcmp(pmu_name, "cpu_core")) - topdown_events = TOPDOWN_L1_EVENTS_CORE; - else - topdown_events = TOPDOWN_L1_EVENTS; - } - - return parse_event(evlist, topdown_events); -} diff --git a/tools/perf/arch/x86/util/topdown.h b/tools/perf/arch/x86/util/topdown.h index 7eb81f042838..46bf9273e572 100644 --- a/tools/perf/arch/x86/util/topdown.h +++ b/tools/perf/arch/x86/util/topdown.h @@ -3,6 +3,5 @@ #define _TOPDOWN_H 1 bool topdown_sys_has_perf_metrics(void); -int topdown_parse_events(struct evlist *evlist); #endif |