author		Linus Torvalds <torvalds@linux-foundation.org>	2023-05-07 11:32:18 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2023-05-07 11:32:18 -0700
commit		f085df1be60abf670315c11036261cfaec16b2eb (patch)
tree		c02c07ad31578b90c3cc99be6b5ba680d163bca7 /tools/perf/util/cs-etm.c
parent		17784de648be93b4eef0ef8fe28a16ff04feecc7 (diff)
parent		9a2d5178b9d51e1c5f9e08989ff97fc8d4893f31 (diff)
Merge tag 'perf-tools-for-v6.4-3-2023-05-06' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull perf tool updates from Arnaldo Carvalho de Melo:
"Third version of perf tool updates, with the build problems with with
using a 'vmlinux.h' generated from the main build fixed, and the bpf
skeleton build disabled by default.
Build:
- Require libtraceevent to build; it can be disabled using
NO_LIBTRACEEVENT=1.
It is required for tools like 'perf sched', 'perf kvm', 'perf
trace', etc.
libtraceevent is available in most distros so installing
'libtraceevent-devel' should be a one-time event to continue
building perf as usual.
Using NO_LIBTRACEEVENT=1 produces tooling that is functional and
sufficient for the many users not interested in those
libtraceevent-dependent features.
- Allow Python support in 'perf script' when libtraceevent isn't
linked, as not all features require it; for instance, Intel PT does
not use tracepoints.
- Error out if the python interpreter needed for jevents to work
isn't available and NO_JEVENTS=1 isn't set, preventing an
inadvertent build without support for JSON vendor events, which is
a rare but possible condition. The two check error messages:
$(error ERROR: No python interpreter needed for jevents generation. Install python or build with NO_JEVENTS=1.)
$(error ERROR: Python interpreter needed for jevents generation too old (older than 3.6). Install a newer python or build with NO_JEVENTS=1.)
- Make libbpf 1.0 the minimum required when building with an
out-of-tree, distro-provided libbpf.
- Use libstdc++'s and LLVM libcxx's __cxa_demangle, a portable C++
demangler, and add a 'perf test' entry for it.
- Make the binutils libraries opt-in, as distros disable building
with them due to licensing; they were used for C++ demangling, for
instance.
- Switch libpfm4 to opt-out rather than opt-in; if libpfm-devel (or
equivalent) isn't installed, there will just be a build warning:
Makefile.config:1144: libpfm4 not found, disables libpfm4 support. Please install libpfm4-dev
- Add a feature test for scandirat(), which is so far not implemented
in musl and uclibc, disabling features that need it, such as
scanning for tracepoints in /sys/kernel/tracing/events.
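  Such a feature test boils down to a compile/link check along these
  lines - a minimal sketch assuming glibc's scandirat() prototype, not
  necessarily the exact file shipped under tools/build/feature:

    /* test-scandirat.c: fails to link on musl/uclibc, which lack it */
    #define _GNU_SOURCE
    #include <dirent.h>
    #include <stddef.h>

    int main(void)
    {
            struct dirent **namelist;

            /* compile/link check only; never meant to be run */
            return scandirat(0, ".", &namelist, NULL, alphasort) < 0;
    }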
perf BPF filters:
- New feature where BPF can be used to filter samples, for instance:
$ sudo ./perf record -e cycles --filter 'period > 1000' true
$ sudo ./perf script
perf-exec 2273949 546850.708501: 5029 cycles: ffffffff826f9e25 finish_wait+0x5 ([kernel.kallsyms])
perf-exec 2273949 546850.708508: 32409 cycles: ffffffff826f9e25 finish_wait+0x5 ([kernel.kallsyms])
perf-exec 2273949 546850.708526: 143369 cycles: ffffffff82b4cdbf xas_start+0x5f ([kernel.kallsyms])
perf-exec 2273949 546850.708600: 372650 cycles: ffffffff8286b8f7 __pagevec_lru_add+0x117 ([kernel.kallsyms])
perf-exec 2273949 546850.708791: 482953 cycles: ffffffff829190de __mod_memcg_lruvec_state+0x4e ([kernel.kallsyms])
true 2273949 546850.709036: 501985 cycles: ffffffff828add7c tlb_gather_mmu+0x4c ([kernel.kallsyms])
true 2273949 546850.709292: 503065 cycles: 7f2446d97c03 _dl_map_object_deps+0x973 (/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
- In addition to 'period' (PERF_SAMPLE_PERIOD), the other
PERF_SAMPLE_ values can be used for filtering, as well as some other
sample-accessible values; from tools/perf/Documentation/perf-record.txt:
Essentially the BPF filter expression is:
<term> <operator> <value> (("," | "||") <term> <operator> <value>)*
The <term> can be one of:
ip, id, tid, pid, cpu, time, addr, period, txn, weight, phys_addr,
code_pgsz, data_pgsz, weight1, weight2, weight3, ins_lat, retire_lat,
p_stage_cyc, mem_op, mem_lvl, mem_snoop, mem_remote, mem_lock,
mem_dtlb, mem_blk, mem_hops
The <operator> can be one of:
==, !=, >, >=, <, <=, &
The <value> can be one of:
<number> (for any term)
na, load, store, pfetch, exec (for mem_op)
l1, l2, l3, l4, cxl, io, any_cache, lfb, ram, pmem (for mem_lvl)
na, none, hit, miss, hitm, fwd, peer (for mem_snoop)
remote (for mem_remote)
na, locked (for mem_lock)
na, l1_hit, l1_miss, l2_hit, l2_miss, any_hit, any_miss, walk, fault (for mem_dtlb)
na, by_data, by_addr (for mem_blk)
hops0, hops1, hops2, hops3 (for mem_hops)
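  To make the grammar concrete, each <term> <operator> <value> unit can
  be thought of as a triple checked against the corresponding sample
  field - an illustrative C sketch, not perf's internal representation:

    /* illustrative only - not perf's internal types */
    enum filter_op { OP_EQ, OP_NEQ, OP_GT, OP_GTE, OP_LT, OP_LTE, OP_BAND };

    struct filter_entry {
            int             term;   /* e.g. the "period" sample field */
            enum filter_op  op;
            unsigned long   value;
    };

    /* "period > 1000" applied to one sample's period value */
    static int filter_match(const struct filter_entry *f, unsigned long sval)
    {
            switch (f->op) {
            case OP_EQ:   return sval == f->value;
            case OP_NEQ:  return sval != f->value;
            case OP_GT:   return sval >  f->value;
            case OP_GTE:  return sval >= f->value;
            case OP_LT:   return sval <  f->value;
            case OP_LTE:  return sval <= f->value;
            case OP_BAND: return (sval & f->value) != 0;
            }
            return 0;
    }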
perf lock contention:
- Show lock type with address.
- Track and show mmap_lock, siglock and per-cpu rq_lock with address.
This is done for mmap_lock by following the current->mm pointer
(see the sketch after the output below):
$ sudo ./perf lock con -abl -- sleep 10
contended total wait max wait avg wait address symbol
...
16344 312.30 ms 2.22 ms 19.11 us ffff8cc702595640
17686 310.08 ms 1.49 ms 17.53 us ffff8cc7025952c0
3 84.14 ms 45.79 ms 28.05 ms ffff8cc78114c478 mmap_lock
3557 76.80 ms 68.75 us 21.59 us ffff8cc77ca3af58
1 68.27 ms 68.27 ms 68.27 ms ffff8cda745dfd70
9 54.53 ms 7.96 ms 6.06 ms ffff8cc7642a48b8 mmap_lock
14629 44.01 ms 60.00 us 3.01 us ffff8cc7625f9ca0
3481 42.63 ms 140.71 us 12.24 us ffffffff937906ac vmap_area_lock
16194 38.73 ms 42.15 us 2.39 us ffff8cd397cbc560
11 38.44 ms 10.39 ms 3.49 ms ffff8ccd6d12fbb8 mmap_lock
1 5.43 ms 5.43 ms 5.43 ms ffff8cd70018f0d8
1674 5.38 ms 422.93 us 3.21 us ffffffff92e06080 tasklist_lock
581 4.51 ms 130.68 us 7.75 us ffff8cc9b1259058
5 3.52 ms 1.27 ms 703.23 us ffff8cc754510070
112 3.47 ms 56.47 us 31.02 us ffff8ccee38b3120
381 3.31 ms 73.44 us 8.69 us ffffffff93790690 purge_vmap_area_lock
255 3.19 ms 36.35 us 12.49 us ffff8d053ce30c80
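  The mmap_lock classification amounts to comparing the contended lock
  address against the lock embedded in current->mm. A BPF-side sketch,
  assuming a program built with vmlinux.h and CO-RE; names are
  illustrative rather than perf's actual skeleton code:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    static __always_inline int is_mmap_lock(__u64 lock_addr)
    {
            struct task_struct *curr = (void *)bpf_get_current_task();
            struct mm_struct *mm = BPF_CORE_READ(curr, mm);

            /* mmap_lock is embedded in mm_struct: compare addresses */
            return mm && lock_addr == (__u64)&mm->mmap_lock;
    }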
- Update default map size to 16384.
- Allocate the single-letter option -M for --map-nr-entries, as it is
proving to be frequently used.
- Fix struct rq lock access for older kernels with BPF's CO-RE
(Compile Once, Run Everywhere); see the sketch after this list.
- Fix problems found with MSan.
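  The CO-RE pattern behind the struct rq fix: the lock field was
  renamed ('lock' became '__lock') across kernel versions, so the BPF
  program declares both layouts and probes which field exists at load
  time. A sketch under those assumptions (the ___old/___new suffixes
  are the libbpf flavor convention; assumes vmlinux.h and
  bpf_core_read.h are in scope):

    struct rq___old {
            raw_spinlock_t lock;
    } __attribute__((preserve_access_index));

    struct rq___new {
            raw_spinlock_t __lock;
    } __attribute__((preserve_access_index));

    static void *rq_lock_ptr(struct rq *rq)
    {
            /* resolved at load time by libbpf's CO-RE relocations */
            if (bpf_core_field_exists(((struct rq___new *)rq)->__lock))
                    return &((struct rq___new *)rq)->__lock;

            return &((struct rq___old *)rq)->lock;
    }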
perf report/top:
- Add inline information when using --call-graph=fp or lbr, as was
already done for the --call-graph=dwarf callchain mode.
- Improve the 'srcfile' sort key performance by really using an
optimization introduced in 6.2 for the 'srcline' sort key that
avoids calling addr2line for comparison with each sample.
perf sched:
- Make 'perf sched latency/map/replay' use "sched:sched_waking"
instead of "sched:sched_wakeup", consistent with 'perf record'
since d566a9c2d482 ("perf sched: Prefer sched_waking event when it
exists").
perf ftrace:
- Make system-wide the default target for the latency subcommand. Run
the following command, then generate some network traffic and press
Ctrl+C (a sketch of the histogram bucketing follows the output):
# perf ftrace latency -T __kfree_skb
^C
DURATION | COUNT | GRAPH |
0 - 1 us | 27 | ############# |
1 - 2 us | 22 | ########### |
2 - 4 us | 8 | #### |
4 - 8 us | 5 | ## |
8 - 16 us | 24 | ############ |
16 - 32 us | 2 | # |
32 - 64 us | 1 | |
64 - 128 us | 0 | |
128 - 256 us | 0 | |
256 - 512 us | 0 | |
512 - 1024 us | 0 | |
1 - 2 ms | 0 | |
2 - 4 ms | 0 | |
4 - 8 ms | 0 | |
8 - 16 ms | 0 | |
16 - 32 ms | 0 | |
32 - 64 ms | 0 | |
64 - 128 ms | 0 | |
128 - 256 ms | 0 | |
256 - 512 ms | 0 | |
512 - 1024 ms | 0 | |
1 - ... s | 0 | |
#
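  The buckets double at each step, i.e. this is a log2 histogram;
  slot selection amounts to something like the following illustrative
  helper, where bucket 0 is [0,1) us and bucket k covers
  [2^(k-1), 2^k) us, capped at the final open-ended bucket:

    static int latency_slot(unsigned long long usec)
    {
            int slot = 0;

            /* 21 is the final, open-ended "1 - ... s" bucket */
            while (usec && slot < 21) {
                    usec >>= 1;
                    slot++;
            }
            return slot;
    }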
perf top:
- Add the --branch-history (LBR: Last Branch Record) option, like the
one already available for 'perf record'.
- Fix segfault in thread__comm_len() where thread->comm was being
used outside thread->comm_lock.
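  The shape of the fix is the usual one of taking the lock around the
  access - an illustrative sketch with hypothetical field layout, not
  the literal patch (perf's real thread struct keeps a list of comm
  entries guarded by comm_lock):

    static int thread__comm_len(struct thread *thread)
    {
            int len;

            down_read(&thread->comm_lock);     /* guard thread->comm */
            len = thread->comm ? strlen(thread->comm->str) : 0;
            up_read(&thread->comm_lock);

            return len;
    }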
perf annotate:
- Allow configuring objdump and addr2line in ~/.perfconfig, so that
alternative binaries, such as LLVM's, can be used.
perf kvm:
- Add TUI mode for 'perf kvm stat report'.
Reference counting:
- Add reference count checking infrastructure to check for use after
free, done for the 'cpumap', 'namespaces', 'maps' and 'map' structs,
with more to come.
To build with it use -DREFCNT_CHECKING=1 in the make command line
to build tools/perf. Documented at:
https://perf.wiki.kernel.org/index.php/Reference_Count_Checking
- The above caught, for instance, this fix, present in this series:
- Fix maps use after put in 'perf test "Share thread maps"':
'maps' is copied from the leader, but the leader is put on line 79
and then 'maps' is used to read the reference count below - so
a use after put, with the put of maps happening within
thread__put.
Fixed by reversing the order of puts so that the leader is put
last.
- Also several fixes were made to places where reference counts were
not being held.
- Make this one of the tests in 'make -C tools/perf build-test' to
regularly build-test it and to make sure no direct accesses to the
reference-counted structs are made; those go via accessors that
check the validity of the struct pointer.
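  In checked builds the accessor indirection is what catches stale
  pointers; the diff at the bottom of this page already uses one such
  accessor, map__dso(). Its shape is along these lines (treat the
  exact macro name as an assumption from the reference-count checking
  series rather than a quoted definition):

    /* field access goes through an accessor so a REFCNT_CHECKING=1
     * build can validate the wrapped struct pointer before the deref
     */
    static inline struct dso *map__dso(const struct map *map)
    {
            return RC_CHK_ACCESS(map)->dso;
    }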
ARM64:
- Fix 'perf report' segfault when filtering coresight traces by
sparse lists of CPUs.
- Add support for 'simd' as a sort field for 'perf report', to show
ARM's NEON SIMD's predicate flags: "partial" and "empty".
arm64 vendor events:
- Add N1 metrics.
Intel vendor events:
- Add graniterapids, grandridge and sierraforest events.
- Refresh events for: alderlake, alderlaken, broadwell, broadwellde,
broadwellx, cascadelakex, haswell, haswellx, icelake, icelakex,
jaketown, meteorlake, knightslanding, sandybridge, sapphirerapids,
silvermont, skylake, tigerlake and westmereep-dp.
- Refresh metrics for alderlake-n, broadwell, broadwellde,
broadwellx, haswell, haswellx, icelakex, ivybridge, ivytown and
skylakex.
perf stat:
- Implement --topdown using JSON metrics.
- Add TopdownL1 JSON metric as a default if present, but disable it
for now for some Intel hybrid architectures, a series of patches
addressing this is being reviewed and will be submitted for v6.5.
- Use metrics for --smi-cost.
- Update topdown documentation.
Vendor events (JSON) infrastructure:
- Add support for computing and printing metric threshold values. For
instance, here is one found in the sapphirerapids JSON file (a
worked evaluation follows it):
{
"BriefDescription": "Percentage of cycles spent in System Management Interrupts.",
"MetricExpr": "((msr@aperf@ - cycles) / msr@aperf@ if msr@smi@ > 0 else 0)",
"MetricGroup": "smi",
"MetricName": "smi_cycles",
"MetricThreshold": "smi_cycles > 0.1",
"ScaleUnit": "100%"
},
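  Plugging made-up counter readings into that MetricExpr shows how the
  threshold is applied (all values below are illustrative):

    #include <stdio.h>

    int main(void)
    {
            double aperf  = 1000000;  /* msr@aperf@ */
            double cycles =  950000;  /* cycles */
            double smi    =       3;  /* msr@smi@ */

            /* MetricExpr: (aperf - cycles) / aperf if smi > 0 else 0 */
            double smi_cycles = smi > 0 ? (aperf - cycles) / aperf : 0;

            /* ScaleUnit 100%; MetricThreshold: smi_cycles > 0.1 */
            printf("smi_cycles = %.1f%%%s\n", 100 * smi_cycles,
                   smi_cycles > 0.1 ? " (above threshold)" : "");
            return 0;
    }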
- Test parsing metric thresholds with the fake PMU in 'perf test
pmu-events'.
- Support for printing metric thresholds in 'perf list'.
- Add --metric-no-threshold option to 'perf stat'.
- Add rand (reverse and) and has_pmem (optane memory) support to
metrics.
- Sort the list of input files to avoid depending on the order from
readdir(), helping to obtain reproducible builds.
S/390:
- Add common metrics: CPI (cycles per instruction), prbstate (ratio
of instructions executed in problem state compared to total number
of instructions), and l1mp (Level one instruction and data cache
misses per 100 instructions).
- Add cache metrics for z13, z14, z15 and z16.
- Add metric for TLB and cache.
ARM:
- Add raw decoding for SPE (Statistical Profiling Extension) v1.3 MTE
(Memory Tagging Extension) and MOPS (Memory Operations) load/store.
Intel PT hardware tracing:
- Add event type names UINTR (User interrupt delivered) and UIRET
(Exiting from user interrupt routine), documented in table 32-50
"CFE Packet Type and Vector Fields Details" in the Intel Processor
Trace chapter of The Intel SDM Volume 3 version 078.
- Add support for new branch instructions ERETS and ERETU.
- Fix CYC timestamps after standalone CBR.
ARM CoreSight hardware tracing:
- Allow user to override timestamp and contextid settings.
- Fix segfault in dso lookup.
- Fix timeless decode mode detection.
- Add separate decode paths for timeless and per-thread modes.
auxtrace:
- Fix address filter entire kernel size.
Miscellaneous:
- Fix use-after-free and unaligned bugs in the PLT handling routines.
- Use zfree() to reduce chances of use after free.
- Add missing 0x prefix for addresses printed in hexadecimal in 'perf
probe'.
- Suppress massive unsupported target platform errors in the unwind
code.
- Fix returning an incorrect build_id size in elf_read_build_id().
- Fix 'perf scripts intel-pt-events.py' IPC output for Python 2.
- Add missing new parameter in kfree_skb tracepoint to the python
scripts using it.
- Add 'perf bench syscall fork' benchmark.
- Add support for printing PERF_MEM_LVLNUM_UNC (Uncached access) in
'perf mem'.
- Fix wrong size expectation for perf test 'Setup struct
perf_event_attr' caused by the patch adding
perf_event_attr::config3.
- Fix some spelling mistakes"
* tag 'perf-tools-for-v6.4-3-2023-05-06' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (365 commits)
Revert "perf build: Make BUILD_BPF_SKEL default, rename to NO_BPF_SKEL"
Revert "perf build: Warn for BPF skeletons if endian mismatches"
perf metrics: Fix SEGV with --for-each-cgroup
perf bpf skels: Stop using vmlinux.h generated from BTF, use subset of used structs + CO-RE
perf stat: Separate bperf from bpf_profiler
perf test record+probe_libc_inet_pton: Fix call chain match on x86_64
perf test record+probe_libc_inet_pton: Fix call chain match on s390
perf tracepoint: Fix memory leak in is_valid_tracepoint()
perf cs-etm: Add fix for coresight trace for any range of CPUs
perf build: Fix unescaped # in perf build-test
perf unwind: Suppress massive unsupported target platform errors
perf script: Add new parameter in kfree_skb tracepoint to the python scripts using it
perf script: Print raw ip instead of binary offset for callchain
perf symbols: Fix return incorrect build_id size in elf_read_build_id()
perf list: Modify the warning message about scandirat(3)
perf list: Fix memory leaks in print_tracepoint_events()
perf lock contention: Rework offset calculation with BPF CO-RE
perf lock contention: Fix struct rq lock access
perf stat: Disable TopdownL1 on hybrid
perf stat: Avoid SEGV on counter->name
...
Diffstat (limited to 'tools/perf/util/cs-etm.c')
-rw-r--r--	tools/perf/util/cs-etm.c	568
1 file changed, 463 insertions(+), 105 deletions(-)
diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
index f65bac5ddbdb..91299cc56bf7 100644
--- a/tools/perf/util/cs-etm.c
+++ b/tools/perf/util/cs-etm.c
@@ -38,6 +38,7 @@
 #include "tsc.h"
 #include <tools/libc_compat.h>
 #include "util/synthetic-events.h"
+#include "util/util.h"
 
 struct cs_etm_auxtrace {
 	struct auxtrace auxtrace;
@@ -49,10 +50,25 @@ struct cs_etm_auxtrace {
 	struct thread *unknown_thread;
 	struct perf_tsc_conversion tc;
 
-	u8 timeless_decoding;
-	u8 snapshot_mode;
-	u8 data_queued;
-	u8 has_virtual_ts; /* Virtual/Kernel timestamps in the trace. */
+	/*
+	 * Timeless has no timestamps in the trace so overlapping mmap lookups
+	 * are less accurate but produces smaller trace data. We use context IDs
+	 * in the trace instead of matching timestamps with fork records so
+	 * they're not really needed in the general case. Overlapping mmaps
+	 * happen in cases like between a fork and an exec.
+	 */
+	bool timeless_decoding;
+
+	/*
+	 * Per-thread ignores the trace channel ID and instead assumes that
+	 * everything in a buffer comes from the same process regardless of
+	 * which CPU it ran on. It also implies no context IDs so the TID is
+	 * taken from the auxtrace buffer.
+	 */
+	bool per_thread_decoding;
+	bool snapshot_mode;
+	bool data_queued;
+	bool has_virtual_ts; /* Virtual/Kernel timestamps in the trace. */
 
 	int num_cpu;
 	u64 latest_kernel_timestamp;
@@ -97,7 +113,7 @@ struct cs_etm_queue {
 /* RB tree for quick conversion between traceID and metadata pointers */
 static struct intlist *traceid_list;
 
-static int cs_etm__process_queues(struct cs_etm_auxtrace *etm);
+static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm);
 static int cs_etm__process_timeless_queues(struct cs_etm_auxtrace *etm,
 					   pid_t tid);
 static int cs_etm__get_data_block(struct cs_etm_queue *etmq);
@@ -196,6 +212,189 @@ int cs_etm__get_pid_fmt(u8 trace_chan_id, u64 *pid_fmt)
 	return 0;
 }
 
+static int cs_etm__map_trace_id(u8 trace_chan_id, u64 *cpu_metadata)
+{
+	struct int_node *inode;
+
+	/* Get an RB node for this CPU */
+	inode = intlist__findnew(traceid_list, trace_chan_id);
+
+	/* Something went wrong, no need to continue */
+	if (!inode)
+		return -ENOMEM;
+
+	/*
+	 * The node for that CPU should not be taken.
+	 * Back out if that's the case.
+	 */
+	if (inode->priv)
+		return -EINVAL;
+
+	/* All good, associate the traceID with the metadata pointer */
+	inode->priv = cpu_metadata;
+
+	return 0;
+}
+
+static int cs_etm__metadata_get_trace_id(u8 *trace_chan_id, u64 *cpu_metadata)
+{
+	u64 cs_etm_magic = cpu_metadata[CS_ETM_MAGIC];
+
+	switch (cs_etm_magic) {
+	case __perf_cs_etmv3_magic:
+		*trace_chan_id = (u8)(cpu_metadata[CS_ETM_ETMTRACEIDR] &
+				      CORESIGHT_TRACE_ID_VAL_MASK);
+		break;
+	case __perf_cs_etmv4_magic:
+	case __perf_cs_ete_magic:
+		*trace_chan_id = (u8)(cpu_metadata[CS_ETMV4_TRCTRACEIDR] &
+				      CORESIGHT_TRACE_ID_VAL_MASK);
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/*
+ * update metadata trace ID from the value found in the AUX_HW_INFO packet.
+ * This will also clear the CORESIGHT_TRACE_ID_UNUSED_FLAG flag if present.
+ */
+static int cs_etm__metadata_set_trace_id(u8 trace_chan_id, u64 *cpu_metadata)
+{
+	u64 cs_etm_magic = cpu_metadata[CS_ETM_MAGIC];
+
+	switch (cs_etm_magic) {
+	case __perf_cs_etmv3_magic:
+		cpu_metadata[CS_ETM_ETMTRACEIDR] = trace_chan_id;
+		break;
+	case __perf_cs_etmv4_magic:
+	case __perf_cs_ete_magic:
+		cpu_metadata[CS_ETMV4_TRCTRACEIDR] = trace_chan_id;
+		break;
+
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/*
+ * FIELD_GET (linux/bitfield.h) not available outside kernel code,
+ * and the header contains too many dependencies to just copy over,
+ * so roll our own based on the original
+ */
+#define __bf_shf(x) (__builtin_ffsll(x) - 1)
+#define FIELD_GET(_mask, _reg)						\
+	({								\
+		(typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask));	\
+	})
+
+/*
+ * Get a metadata for a specific cpu from an array.
+ *
+ */
+static u64 *get_cpu_data(struct cs_etm_auxtrace *etm, int cpu)
+{
+	int i;
+	u64 *metadata = NULL;
+
+	for (i = 0; i < etm->num_cpu; i++) {
+		if (etm->metadata[i][CS_ETM_CPU] == (u64)cpu) {
+			metadata = etm->metadata[i];
+			break;
+		}
+	}
+
+	return metadata;
+}
+
+/*
+ * Handle the PERF_RECORD_AUX_OUTPUT_HW_ID event.
+ *
+ * The payload associates the Trace ID and the CPU.
+ * The routine is tolerant of seeing multiple packets with the same association,
+ * but a CPU / Trace ID association changing during a session is an error.
+ */
+static int cs_etm__process_aux_output_hw_id(struct perf_session *session,
+					    union perf_event *event)
+{
+	struct cs_etm_auxtrace *etm;
+	struct perf_sample sample;
+	struct int_node *inode;
+	struct evsel *evsel;
+	u64 *cpu_data;
+	u64 hw_id;
+	int cpu, version, err;
+	u8 trace_chan_id, curr_chan_id;
+
+	/* extract and parse the HW ID */
+	hw_id = event->aux_output_hw_id.hw_id;
+	version = FIELD_GET(CS_AUX_HW_ID_VERSION_MASK, hw_id);
+	trace_chan_id = FIELD_GET(CS_AUX_HW_ID_TRACE_ID_MASK, hw_id);
+
+	/* check that we can handle this version */
+	if (version > CS_AUX_HW_ID_CURR_VERSION)
+		return -EINVAL;
+
+	/* get access to the etm metadata */
+	etm = container_of(session->auxtrace, struct cs_etm_auxtrace, auxtrace);
+	if (!etm || !etm->metadata)
+		return -EINVAL;
+
+	/* parse the sample to get the CPU */
+	evsel = evlist__event2evsel(session->evlist, event);
+	if (!evsel)
+		return -EINVAL;
+	err = evsel__parse_sample(evsel, event, &sample);
+	if (err)
+		return err;
+	cpu = sample.cpu;
+	if (cpu == -1) {
+		/* no CPU in the sample - possibly recorded with an old version of perf */
+		pr_err("CS_ETM: no CPU AUX_OUTPUT_HW_ID sample. Use compatible perf to record.");
+		return -EINVAL;
+	}
+
+	/* See if the ID is mapped to a CPU, and it matches the current CPU */
+	inode = intlist__find(traceid_list, trace_chan_id);
+	if (inode) {
+		cpu_data = inode->priv;
+		if ((int)cpu_data[CS_ETM_CPU] != cpu) {
+			pr_err("CS_ETM: map mismatch between HW_ID packet CPU and Trace ID\n");
+			return -EINVAL;
+		}
+
+		/* check that the mapped ID matches */
+		err = cs_etm__metadata_get_trace_id(&curr_chan_id, cpu_data);
+		if (err)
+			return err;
+		if (curr_chan_id != trace_chan_id) {
+			pr_err("CS_ETM: mismatch between CPU trace ID and HW_ID packet ID\n");
+			return -EINVAL;
+		}
+
+		/* mapped and matched - return OK */
+		return 0;
+	}
+
+	cpu_data = get_cpu_data(etm, cpu);
+	if (cpu_data == NULL)
+		return err;
+
+	/* not one we've seen before - lets map it */
+	err = cs_etm__map_trace_id(trace_chan_id, cpu_data);
+	if (err)
+		return err;
+
+	/*
+	 * if we are picking up the association from the packet, need to plug
+	 * the correct trace ID into the metadata for setting up decoders later.
+	 */
+	err = cs_etm__metadata_set_trace_id(trace_chan_id, cpu_data);
+	return err;
+}
+
 void cs_etm__etmq_set_traceid_queue_timestamp(struct cs_etm_queue *etmq,
 					      u8 trace_chan_id)
 {
@@ -330,7 +529,7 @@ static struct cs_etm_traceid_queue
 	struct cs_etm_traceid_queue *tidq, **traceid_queues;
 	struct cs_etm_auxtrace *etm = etmq->etm;
 
-	if (etm->timeless_decoding)
+	if (etm->per_thread_decoding)
 		trace_chan_id = CS_ETM_PER_THREAD_TRACEID;
 
 	traceid_queues_list = etmq->traceid_queues_list;
@@ -569,10 +768,15 @@ static int cs_etm__flush_events(struct perf_session *session,
 	if (!tool->ordered_events)
 		return -EINVAL;
 
-	if (etm->timeless_decoding)
+	if (etm->timeless_decoding) {
+		/*
+		 * Pass tid = -1 to process all queues. But likely they will have
+		 * already been processed on PERF_RECORD_EXIT anyway.
+		 */
 		return cs_etm__process_timeless_queues(etm, -1);
+	}
 
-	return cs_etm__process_queues(etm);
+	return cs_etm__process_timestamped_queues(etm);
 }
 
 static void cs_etm__free_traceid_queues(struct cs_etm_queue *etmq)
@@ -704,6 +908,7 @@ static u32 cs_etm__mem_access(struct cs_etm_queue *etmq, u8 trace_chan_id,
 	struct thread *thread;
 	struct machine *machine;
 	struct addr_location al;
+	struct dso *dso;
 	struct cs_etm_traceid_queue *tidq;
 
 	if (!etmq)
@@ -722,27 +927,31 @@ static u32 cs_etm__mem_access(struct cs_etm_queue *etmq, u8 trace_chan_id,
 		thread = etmq->etm->unknown_thread;
 	}
 
-	if (!thread__find_map(thread, cpumode, address, &al) || !al.map->dso)
+	if (!thread__find_map(thread, cpumode, address, &al))
+		return 0;
+
+	dso = map__dso(al.map);
+	if (!dso)
 		return 0;
 
-	if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR &&
-	    dso__data_status_seen(al.map->dso, DSO_DATA_STATUS_SEEN_ITRACE))
+	if (dso->data.status == DSO_DATA_STATUS_ERROR &&
+	    dso__data_status_seen(dso, DSO_DATA_STATUS_SEEN_ITRACE))
 		return 0;
 
-	offset = al.map->map_ip(al.map, address);
+	offset = map__map_ip(al.map, address);
 
 	map__load(al.map);
 
-	len = dso__data_read_offset(al.map->dso, machine, offset, buffer, size);
+	len = dso__data_read_offset(dso, machine, offset, buffer, size);
 
 	if (len <= 0) {
 		ui__warning_once("CS ETM Trace: Missing DSO. Use 'perf archive' or debuginfod to export data from the traced system.\n"
 				 "              Enable CONFIG_PROC_KCORE or use option '-k /path/to/vmlinux' for kernel symbols.\n");
-		if (!al.map->dso->auxtrace_warned) {
+		if (!dso->auxtrace_warned) {
 			pr_err("CS ETM Trace: Debug data not found for address %#"PRIx64" in %s\n",
-			       address,
-			       al.map->dso->long_name ? al.map->dso->long_name : "Unknown");
-			al.map->dso->auxtrace_warned = true;
+			       address, dso->long_name ? dso->long_name : "Unknown");
+			dso->auxtrace_warned = true;
 		}
 		return 0;
 	}
@@ -899,7 +1108,7 @@ static int cs_etm__queue_first_cs_timestamp(struct cs_etm_auxtrace *etm,
 	 * chronological order.
 	 *
 	 * Note that packets decoded above are still in the traceID's packet
-	 * queue and will be processed in cs_etm__process_queues().
+	 * queue and will be processed in cs_etm__process_timestamped_queues().
 	 */
 	cs_queue_nr = TO_CS_QUEUE_NR(queue_nr, trace_chan_id);
 	ret = auxtrace_heap__add(&etm->heap, cs_queue_nr, cs_timestamp);
@@ -1180,9 +1389,7 @@ static inline u64 cs_etm__resolve_sample_time(struct cs_etm_queue *etmq,
 	struct cs_etm_auxtrace *etm = etmq->etm;
 	struct cs_etm_packet_queue *packet_queue = &tidq->packet_queue;
 
-	if (etm->timeless_decoding)
-		return 0;
-	else if (etm->has_virtual_ts)
+	if (!etm->timeless_decoding && etm->has_virtual_ts)
 		return packet_queue->cs_timestamp;
 	else
 		return etm->latest_kernel_timestamp;
@@ -2162,7 +2369,7 @@ static void cs_etm__clear_all_traceid_queues(struct cs_etm_queue *etmq)
 	}
 }
 
-static int cs_etm__run_decoder(struct cs_etm_queue *etmq)
+static int cs_etm__run_per_thread_timeless_decoder(struct cs_etm_queue *etmq)
 {
 	int err = 0;
 	struct cs_etm_traceid_queue *tidq;
@@ -2200,6 +2407,51 @@ static int cs_etm__run_decoder(struct cs_etm_queue *etmq)
 	return err;
 }
 
+static int cs_etm__run_per_cpu_timeless_decoder(struct cs_etm_queue *etmq)
+{
+	int idx, err = 0;
+	struct cs_etm_traceid_queue *tidq;
+	struct int_node *inode;
+
+	/* Go through each buffer in the queue and decode them one by one */
+	while (1) {
+		err = cs_etm__get_data_block(etmq);
+		if (err <= 0)
+			return err;
+
+		/* Run trace decoder until buffer consumed or end of trace */
+		do {
+			err = cs_etm__decode_data_block(etmq);
+			if (err)
+				return err;
+
+			/*
+			 * cs_etm__run_per_thread_timeless_decoder() runs on a
+			 * single traceID queue because each TID has a separate
+			 * buffer. But here in per-cpu mode we need to iterate
+			 * over each channel instead.
+			 */
+			intlist__for_each_entry(inode,
+						etmq->traceid_queues_list) {
+				idx = (int)(intptr_t)inode->priv;
+				tidq = etmq->traceid_queues[idx];
+				cs_etm__process_traceid_queue(etmq, tidq);
+			}
+		} while (etmq->buf_len);
+
+		intlist__for_each_entry(inode, etmq->traceid_queues_list) {
+			idx = (int)(intptr_t)inode->priv;
+			tidq = etmq->traceid_queues[idx];
+			/* Flush any remaining branch stack entries */
+			err = cs_etm__end_block(etmq, tidq);
+			if (err)
+				return err;
+		}
+	}
+
+	return err;
+}
+
 static int cs_etm__process_timeless_queues(struct cs_etm_auxtrace *etm,
 					   pid_t tid)
 {
@@ -2214,22 +2466,30 @@ static int cs_etm__process_timeless_queues(struct cs_etm_auxtrace *etm,
 		if (!etmq)
 			continue;
 
-		tidq = cs_etm__etmq_get_traceid_queue(etmq,
-						      CS_ETM_PER_THREAD_TRACEID);
+		/*
+		 * Per-cpu mode has contextIDs in the trace and the decoder
+		 * calls cs_etm__set_pid_tid_cpu() automatically so no need
+		 * to do this here
+		 */
+		if (etm->per_thread_decoding) {
+			tidq = cs_etm__etmq_get_traceid_queue(
+				etmq, CS_ETM_PER_THREAD_TRACEID);
 
-		if (!tidq)
-			continue;
+			if (!tidq)
+				continue;
 
-		if ((tid == -1) || (tidq->tid == tid)) {
-			cs_etm__set_pid_tid_cpu(etm, tidq);
-			cs_etm__run_decoder(etmq);
-		}
+			if ((tid == -1) || (tidq->tid == tid)) {
+				cs_etm__set_pid_tid_cpu(etm, tidq);
+				cs_etm__run_per_thread_timeless_decoder(etmq);
+			}
+		} else
+			cs_etm__run_per_cpu_timeless_decoder(etmq);
 	}
 
 	return 0;
 }
 
-static int cs_etm__process_queues(struct cs_etm_auxtrace *etm)
+static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
 {
 	int ret = 0;
 	unsigned int cs_queue_nr, queue_nr, i;
@@ -2406,7 +2666,6 @@ static int cs_etm__process_event(struct perf_session *session,
 				 struct perf_sample *sample,
 				 struct perf_tool *tool)
 {
-	u64 sample_kernel_timestamp;
 	struct cs_etm_auxtrace *etm = container_of(session->auxtrace,
 						   struct cs_etm_auxtrace,
 						   auxtrace);
@@ -2419,33 +2678,39 @@ static int cs_etm__process_event(struct perf_session *session,
 		return -EINVAL;
 	}
 
-	if (sample->time && (sample->time != (u64) -1))
-		sample_kernel_timestamp = sample->time;
-	else
-		sample_kernel_timestamp = 0;
-
-	/*
-	 * Don't wait for cs_etm__flush_events() in per-thread/timeless mode to start the decode. We
-	 * need the tid of the PERF_RECORD_EXIT event to assign to the synthesised samples because
-	 * ETM_OPT_CTXTID is not enabled.
-	 */
-	if (etm->timeless_decoding &&
-	    event->header.type == PERF_RECORD_EXIT)
-		return cs_etm__process_timeless_queues(etm,
-						       event->fork.tid);
+	switch (event->header.type) {
+	case PERF_RECORD_EXIT:
+		/*
+		 * Don't need to wait for cs_etm__flush_events() in per-thread mode to
+		 * start the decode because we know there will be no more trace from
+		 * this thread. All this does is emit samples earlier than waiting for
+		 * the flush in other modes, but with timestamps it makes sense to wait
+		 * for flush so that events from different threads are interleaved
+		 * properly.
+		 */
+		if (etm->per_thread_decoding && etm->timeless_decoding)
+			return cs_etm__process_timeless_queues(etm,
+							       event->fork.tid);
+		break;
 
-	if (event->header.type == PERF_RECORD_ITRACE_START)
+	case PERF_RECORD_ITRACE_START:
 		return cs_etm__process_itrace_start(etm, event);
-	else if (event->header.type == PERF_RECORD_SWITCH_CPU_WIDE)
+
+	case PERF_RECORD_SWITCH_CPU_WIDE:
 		return cs_etm__process_switch_cpu_wide(etm, event);
 
-	if (!etm->timeless_decoding && event->header.type == PERF_RECORD_AUX) {
+	case PERF_RECORD_AUX:
 		/*
 		 * Record the latest kernel timestamp available in the header
 		 * for samples so that synthesised samples occur from this point
 		 * onwards.
 		 */
-		etm->latest_kernel_timestamp = sample_kernel_timestamp;
+		if (sample->time && (sample->time != (u64)-1))
+			etm->latest_kernel_timestamp = sample->time;
+		break;
+
+	default:
+		break;
 	}
 
 	return 0;
@@ -2517,26 +2782,29 @@ static int cs_etm__process_auxtrace_event(struct perf_session *session,
 	return 0;
 }
 
-static bool cs_etm__is_timeless_decoding(struct cs_etm_auxtrace *etm)
+static int cs_etm__setup_timeless_decoding(struct cs_etm_auxtrace *etm)
 {
 	struct evsel *evsel;
 	struct evlist *evlist = etm->session->evlist;
-	bool timeless_decoding = true;
 
 	/* Override timeless mode with user input from --itrace=Z */
-	if (etm->synth_opts.timeless_decoding)
-		return true;
+	if (etm->synth_opts.timeless_decoding) {
+		etm->timeless_decoding = true;
+		return 0;
+	}
 
 	/*
-	 * Circle through the list of event and complain if we find one
-	 * with the time bit set.
+	 * Find the cs_etm evsel and look at what its timestamp setting was
 	 */
-	evlist__for_each_entry(evlist, evsel) {
-		if ((evsel->core.attr.sample_type & PERF_SAMPLE_TIME))
-			timeless_decoding = false;
-	}
+	evlist__for_each_entry(evlist, evsel)
+		if (cs_etm__evsel_is_auxtrace(etm->session, evsel)) {
+			etm->timeless_decoding =
+				!(evsel->core.attr.config & BIT(ETM_OPT_TS));
			return 0;
+		}
 
-	return timeless_decoding;
+	pr_err("CS ETM: Couldn't find ETM evsel\n");
+	return -EINVAL;
 }
 
 /*
@@ -2644,12 +2912,27 @@ static int cs_etm__queue_aux_fragment(struct perf_session *session, off_t file_o
 	}
 
 	/*
-	 * In per-thread mode, CPU is set to -1, but TID will be set instead. See
-	 * auxtrace_mmap_params__set_idx(). Return 'not found' if neither CPU nor TID match.
+	 * In per-thread mode, auxtrace CPU is set to -1, but TID will be set instead. See
+	 * auxtrace_mmap_params__set_idx(). However, the sample AUX event will contain a
+	 * CPU as we set this always for the AUX_OUTPUT_HW_ID event.
+	 * So now compare only TIDs if auxtrace CPU is -1, and CPUs if auxtrace CPU is not -1.
+	 * Return 'not found' if mismatch.
 	 */
-	if ((auxtrace_event->cpu == (__u32) -1 && auxtrace_event->tid != sample->tid) ||
-	    auxtrace_event->cpu != sample->cpu)
+	if (auxtrace_event->cpu == (__u32) -1) {
+		etm->per_thread_decoding = true;
+		if (auxtrace_event->tid != sample->tid)
+			return 1;
+	} else if (auxtrace_event->cpu != sample->cpu) {
+		if (etm->per_thread_decoding) {
+			/*
+			 * Found a per-cpu buffer after a per-thread one was
+			 * already found
+			 */
+			pr_err("CS ETM: Inconsistent per-thread/per-cpu mode.\n");
+			return -EINVAL;
+		}
 		return 1;
+	}
 
 	if (aux_event->flags & PERF_AUX_FLAG_OVERWRITE) {
 		/*
@@ -2697,6 +2980,17 @@ static int cs_etm__queue_aux_fragment(struct perf_session *session, off_t file_o
 	return 1;
 }
 
+static int cs_etm__process_aux_hw_id_cb(struct perf_session *session, union perf_event *event,
+					u64 offset __maybe_unused, void *data __maybe_unused)
+{
+	/* look to handle PERF_RECORD_AUX_OUTPUT_HW_ID early to ensure decoders can be set up */
+	if (event->header.type == PERF_RECORD_AUX_OUTPUT_HW_ID) {
+		(*(int *)data)++; /* increment found count */
+		return cs_etm__process_aux_output_hw_id(session, event);
+	}
+	return 0;
+}
+
 static int cs_etm__queue_aux_records_cb(struct perf_session *session, union perf_event *event,
 					u64 offset __maybe_unused, void *data __maybe_unused)
 {
@@ -2804,18 +3098,77 @@ static bool cs_etm__has_virtual_ts(u64 **metadata, int num_cpu)
 	return true;
 }
 
+/* map trace ids to correct metadata block, from information in metadata */
+static int cs_etm__map_trace_ids_metadata(int num_cpu, u64 **metadata)
+{
+	u64 cs_etm_magic;
+	u8 trace_chan_id;
+	int i, err;
+
+	for (i = 0; i < num_cpu; i++) {
+		cs_etm_magic = metadata[i][CS_ETM_MAGIC];
+		switch (cs_etm_magic) {
+		case __perf_cs_etmv3_magic:
+			metadata[i][CS_ETM_ETMTRACEIDR] &= CORESIGHT_TRACE_ID_VAL_MASK;
+			trace_chan_id = (u8)(metadata[i][CS_ETM_ETMTRACEIDR]);
+			break;
+		case __perf_cs_etmv4_magic:
+		case __perf_cs_ete_magic:
+			metadata[i][CS_ETMV4_TRCTRACEIDR] &= CORESIGHT_TRACE_ID_VAL_MASK;
+			trace_chan_id = (u8)(metadata[i][CS_ETMV4_TRCTRACEIDR]);
+			break;
+		default:
+			/* unknown magic number */
+			return -EINVAL;
+		}
+		err = cs_etm__map_trace_id(trace_chan_id, metadata[i]);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+
+/*
+ * If we found AUX_HW_ID packets, then set any metadata marked as unused to the
+ * unused value to reduce the number of unneeded decoders created.
+ */
+static int cs_etm__clear_unused_trace_ids_metadata(int num_cpu, u64 **metadata)
+{
+	u64 cs_etm_magic;
+	int i;
+
+	for (i = 0; i < num_cpu; i++) {
+		cs_etm_magic = metadata[i][CS_ETM_MAGIC];
+		switch (cs_etm_magic) {
+		case __perf_cs_etmv3_magic:
+			if (metadata[i][CS_ETM_ETMTRACEIDR] & CORESIGHT_TRACE_ID_UNUSED_FLAG)
+				metadata[i][CS_ETM_ETMTRACEIDR] = CORESIGHT_TRACE_ID_UNUSED_VAL;
+			break;
+		case __perf_cs_etmv4_magic:
+		case __perf_cs_ete_magic:
+			if (metadata[i][CS_ETMV4_TRCTRACEIDR] & CORESIGHT_TRACE_ID_UNUSED_FLAG)
+				metadata[i][CS_ETMV4_TRCTRACEIDR] = CORESIGHT_TRACE_ID_UNUSED_VAL;
+			break;
+		default:
+			/* unknown magic number */
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
 int cs_etm__process_auxtrace_info_full(union perf_event *event,
 				       struct perf_session *session)
 {
 	struct perf_record_auxtrace_info *auxtrace_info = &event->auxtrace_info;
 	struct cs_etm_auxtrace *etm = NULL;
-	struct int_node *inode;
 	struct perf_record_time_conv *tc = &session->time_conv;
 	int event_header_size = sizeof(struct perf_event_header);
 	int total_size = auxtrace_info->header.size;
 	int priv_size = 0;
-	int num_cpu, trcidr_idx;
+	int num_cpu;
 	int err = 0;
+	int aux_hw_id_found;
 	int i, j;
 	u64 *ptr = NULL;
 	u64 **metadata = NULL;
@@ -2853,23 +3206,13 @@ int cs_etm__process_auxtrace_info_full(union perf_event *event,
 				cs_etm__create_meta_blk(ptr, &i,
 							CS_ETM_PRIV_MAX,
 							CS_ETM_NR_TRC_PARAMS_V0);
-
-			/* The traceID is our handle */
-			trcidr_idx = CS_ETM_ETMTRACEIDR;
-
 		} else if (ptr[i] == __perf_cs_etmv4_magic) {
 			metadata[j] =
 				cs_etm__create_meta_blk(ptr, &i,
							CS_ETMV4_PRIV_MAX,
							CS_ETMV4_NR_TRC_PARAMS_V0);
-
-			/* The traceID is our handle */
-			trcidr_idx = CS_ETMV4_TRCTRACEIDR;
 		} else if (ptr[i] == __perf_cs_ete_magic) {
 			metadata[j] = cs_etm__create_meta_blk(ptr, &i, CS_ETE_PRIV_MAX, -1);
-
-			/* ETE shares first part of metadata with ETMv4 */
-			trcidr_idx = CS_ETMV4_TRCTRACEIDR;
 		} else {
 			ui__error("CS ETM Trace: Unrecognised magic number %#"PRIx64". File could be from a newer version of perf.\n",
 				  ptr[i]);
@@ -2881,26 +3224,6 @@ int cs_etm__process_auxtrace_info_full(union perf_event *event,
 			err = -ENOMEM;
 			goto err_free_metadata;
 		}
-
-		/* Get an RB node for this CPU */
-		inode = intlist__findnew(traceid_list, metadata[j][trcidr_idx]);
-
-		/* Something went wrong, no need to continue */
-		if (!inode) {
-			err = -ENOMEM;
-			goto err_free_metadata;
-		}
-
-		/*
-		 * The node for that CPU should not be taken.
-		 * Back out if that's the case.
-		 */
-		if (inode->priv) {
-			err = -EINVAL;
-			goto err_free_metadata;
-		}
-
-		/* All good, associate the traceID with the metadata pointer */
-		inode->priv = metadata[j];
 	}
 
 	/*
@@ -2943,7 +3266,6 @@ int cs_etm__process_auxtrace_info_full(union perf_event *event,
 	etm->snapshot_mode = (ptr[CS_ETM_SNAPSHOT] != 0);
 	etm->metadata = metadata;
 	etm->auxtrace_type = auxtrace_info->type;
-	etm->timeless_decoding = cs_etm__is_timeless_decoding(etm);
 
 	/* Use virtual timestamps if all ETMs report ts_source = 1 */
 	etm->has_virtual_ts = cs_etm__has_virtual_ts(metadata, num_cpu);
@@ -2960,6 +3282,10 @@ int cs_etm__process_auxtrace_info_full(union perf_event *event,
 	etm->auxtrace.evsel_is_auxtrace = cs_etm__evsel_is_auxtrace;
 	session->auxtrace = &etm->auxtrace;
 
+	err = cs_etm__setup_timeless_decoding(etm);
+	if (err)
+		return err;
+
 	etm->unknown_thread = thread__new(999999999, 999999999);
 	if (!etm->unknown_thread) {
 		err = -ENOMEM;
@@ -2994,19 +3320,51 @@ int cs_etm__process_auxtrace_info_full(union perf_event *event,
 	if (err)
 		goto err_delete_thread;
 
+	/*
+	 * Map Trace ID values to CPU metadata.
+	 *
+	 * Trace metadata will always contain Trace ID values from the legacy algorithm. If the
+	 * files has been recorded by a "new" perf updated to handle AUX_HW_ID then the metadata
+	 * ID value will also have the CORESIGHT_TRACE_ID_UNUSED_FLAG set.
+	 *
+	 * The updated kernel drivers that use AUX_HW_ID to sent Trace IDs will attempt to use
+	 * the same IDs as the old algorithm as far as is possible, unless there are clashes
+	 * in which case a different value will be used. This means an older perf may still
+	 * be able to record and read files generate on a newer system.
+	 *
+	 * For a perf able to interpret AUX_HW_ID packets we first check for the presence of
+	 * those packets. If they are there then the values will be mapped and plugged into
+	 * the metadata. We then set any remaining metadata values with the used flag to a
+	 * value CORESIGHT_TRACE_ID_UNUSED_VAL - which indicates no decoder is required.
+	 *
+	 * If no AUX_HW_ID packets are present - which means a file recorded on an old kernel
+	 * then we map Trace ID values to CPU directly from the metadata - clearing any unused
+	 * flags if present.
+	 */
+
+	/* first scan for AUX_OUTPUT_HW_ID records to map trace ID values to CPU metadata */
+	aux_hw_id_found = 0;
+	err = perf_session__peek_events(session, session->header.data_offset,
+					session->header.data_size,
+					cs_etm__process_aux_hw_id_cb, &aux_hw_id_found);
+	if (err)
+		goto err_delete_thread;
+
+	/* if HW ID found then clear any unused metadata ID values */
+	if (aux_hw_id_found)
+		err = cs_etm__clear_unused_trace_ids_metadata(num_cpu, metadata);
+	/* otherwise, this is a file with metadata values only, map from metadata */
+	else
+		err = cs_etm__map_trace_ids_metadata(num_cpu, metadata);
+
+	if (err)
+		goto err_delete_thread;
+
 	err = cs_etm__queue_aux_records(session);
 	if (err)
 		goto err_delete_thread;
 
 	etm->data_queued = etm->queues.populated;
-	/*
-	 * Print warning in pipe mode, see cs_etm__process_auxtrace_event() and
-	 * cs_etm__queue_aux_fragment() for details relating to limitations.
-	 */
-	if (!etm->data_queued)
-		pr_warning("CS ETM warning: Coresight decode and TRBE support requires random file access.\n"
-			   "Continuing with best effort decoding in piped mode.\n\n");
-
 	return 0;
 
 err_delete_thread:
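As a usage note, the FIELD_GET rolled in the diff above unpacks the
AUX_OUTPUT_HW_ID payload along these lines. The mask values below are
illustrative placeholders; the real CS_AUX_HW_ID_* masks are defined in
the CoreSight headers:

    #include <stdio.h>

    #define __bf_shf(x) (__builtin_ffsll(x) - 1)
    #define FIELD_GET(_mask, _reg) \
            ({ \
                    (typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)); \
            })

    /* assumed layout for illustration only */
    #define EXAMPLE_VERSION_MASK    0xff00000000000000ULL
    #define EXAMPLE_TRACE_ID_MASK   0x000000000000007fULL

    int main(void)
    {
            unsigned long long hw_id = 0x10; /* trace ID 0x10, version 0 */

            printf("version=%llu trace_chan_id=%llu\n",
                   FIELD_GET(EXAMPLE_VERSION_MASK, hw_id),
                   FIELD_GET(EXAMPLE_TRACE_ID_MASK, hw_id));
            return 0;
    }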