|
The access("devices/system/cpu/cpu%d/topology/core_cpus", F_OK) check
fails unless the current directory happens to be "/sys", because the
path is relative. Simply try to read the file first instead.
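For illustration, a minimal sketch of the resulting pattern, with
hypothetical names and a simplified sysfs prefix (not the actual
tools/perf code): build the full path and simply try to open it,
treating failure as "attribute not present":
    #include <limits.h>
    #include <stdio.h>

    /* hypothetical helper, not the real tools/perf function */
    static FILE *open_core_cpus(const char *sysfs, int cpu)
    {
            char path[PATH_MAX];

            snprintf(path, sizeof(path),
                     "%s/devices/system/cpu/cpu%d/topology/core_cpus", sysfs, cpu);
            return fopen(path, "r");   /* NULL: attribute absent, fall back */
    }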
Fixes: 0ccdb8407a46 ("perf tools: Apply new CPU topology sysfs attributes")
Signed-off-by: Konstantin Khlebnikov <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Dmitry Monakhov <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/158817718710.747528.11009278875028211991.stgit@buzz
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Fix another memory leak found by applying LLVM's libfuzzer on parse_events().
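For context, a libFuzzer driver for this kind of testing looks roughly
like the sketch below. It is hypothetical, not the harness used for this
report, and parse_events_fuzz_target() stands in for the real tools/perf
entry point:
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* stand-in for the tools/perf parser being fuzzed */
    extern int parse_events_fuzz_target(const char *str);

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
            char *str = malloc(size + 1);

            if (!str)
                    return 0;
            memcpy(str, data, size);
            str[size] = '\0';                /* the parser wants a NUL terminated string */
            parse_events_fuzz_target(str);   /* leaks inside show up under ASan/LSan */
            free(str);
            return 0;
    }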
Signed-off-by: Ian Rogers <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Leo Yan <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: [email protected]
Link: http://lore.kernel.org/lkml/[email protected]
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
free_list_evsel() deals with tools/perf/ evsels, not with libperf
perf_evsels. Use the right destructor to avoid a leak, as
evsel__delete() frees things that perf_evsel__delete() doesn't.
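The leak pattern is the usual one when a 'derived' struct embeds a
'base' struct: freeing through the base destructor skips members only
the derived type owns. A generic illustration (these are not the real
evsel/perf_evsel definitions):
    #include <stdlib.h>

    struct base    { int idx; };                      /* think: libperf perf_evsel */
    struct derived { struct base b; char *name; };    /* think: tools/perf evsel   */

    static void base__delete(struct base *b)       { free(b); }

    static void derived__delete(struct derived *d)
    {
            free(d->name);   /* the member the base destructor knows nothing about */
            free(d);
    }

    /* Freeing a 'derived' object via base__delete() instead of
     * derived__delete() leaks d->name - the class of bug fixed here. */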
Signed-off-by: Ian Rogers <[email protected]>
Cc: Adrian Hunter <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Leo Yan <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: [email protected]
Link: http://lore.kernel.org/lkml/[email protected]
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Fix a memory leak found by applying LLVM's libfuzzer on parse_events().
Signed-off-by: Ian Rogers <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Leo Yan <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: [email protected]
Link: http://lore.kernel.org/lkml/[email protected]
[ split from a larger patch, use zfree() ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
I.e. so far we had just one event in that side band thread, a dummy one
with attr.bpf_event set, so that 'perf record' can go ahead and ask the
kernel for further information about BPF programs being loaded.
Allow for more than one event to be there, so that we can use it as
well for the upcoming --switch-output-event feature.
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Song Liu <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
To avoid dragging more stuff into the perf python binding in the
following csets.
Reported-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Namhyung Kim <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
For the upcoming --switch-output-event option we want to create the side
band event, populate it with the specified events and, if the option is
present multiple times, keep adding to it. Then, if BPF tracking is
required, use the first event and set its attr.bpf_event to get those
PERF_RECORD_BPF_EVENT metadata events too.
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Song Liu <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Renaming bpf_event__add_sb_event() to evlist__add_sb_event() and
requiring that the evlist be allocated beforehand.
This will allow the same side band thread and evlist to be used for
multiple purposes, in addition to reacting to PERF_RECORD_BPF_EVENT
records soon after they are generated.
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Song Liu <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Where state related to a 'perf top' session is grouped.
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Song Liu <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Trying to disentangle this a bit further: unfortunately it uses
parse_events(), but it's interesting to have it separated anyway, so do it.
Cc: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Commit 54b5091606c18 ("perf stat: Implement --metric-only mode") added
the function 'valid_only_metric()', which drops "Hz" or "hz" if it is
part of "ScaleUnit". This patch enables such metrics, since hv_24x7
supports a couple of frequency events.
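For reference, the check in question boils down to string matching on
the metric's "ScaleUnit"; a simplified sketch of such a filter
(illustrative only, not the exact stat-display code):
    #include <stdbool.h>
    #include <string.h>

    /* Is the metric's unit a frequency? These used to be skipped by
     * valid_only_metric()-style filtering in --metric-only mode. */
    static bool unit_is_frequency(const char *unit)
    {
            return unit && (strstr(unit, "Hz") || strstr(unit, "hz"));
    }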
Signed-off-by: Kajol Jain <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Anju T Sudhakar <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Jin Yao <[email protected]>
Cc: Joe Mario <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Madhavan Srinivasan <[email protected]>
Cc: Mamatha Inamdar <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Sukadev Bhattiprolu <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
This patch enhances the current metric infrastructure to handle "?" in
a metric expression. The "?" can be used for parameters whose value is
not known while creating the metric events and which can be replaced
later at runtime with the proper value. It also adds the flexibility to
create multiple events out of a single metric event added in the JSON
file.
The patch adds the function 'arch_get_runtimeparam', an arch specific
function that returns the number of metric events that need to be
created. By default it returns 1.
This infrastructure is needed for hv_24x7 socket/chip level events.
"hv_24x7" chip level events need the specific chip-id for which the data
is requested. 'arch_get_runtimeparam' is implemented in header.c and
extracts the number of sockets from the sysfs file "sockets" under
"/sys/devices/hv_24x7/interface/".
With this patch we basically create as many metric events as defined by
the runtime parameter. For that, a loop is added in
'metricgroup__add_metric' which creates multiple events at run time,
depending on the return value of 'arch_get_runtimeparam', and merges
those events into 'group_list'. To achieve that, the parameter value is
passed to 'expr__find_other' and replaces the "?" present in the metric
expression.
So, in the JSON file there is a single metric event, out of which
multiple events are created. To understand which data count belongs to
which parameter value, the parameter value is also printed in the
generic_metric function.
For example,
command:# ./perf stat -M PowerBUS_Frequency -C 0 -I 1000
1.000101867 9,356,933 hv_24x7/pm_pb_cyc,chip=0/ # 2.3 GHz PowerBUS_Frequency_0
1.000101867 9,366,134 hv_24x7/pm_pb_cyc,chip=1/ # 2.3 GHz PowerBUS_Frequency_1
2.000314878 9,365,868 hv_24x7/pm_pb_cyc,chip=0/ # 2.3 GHz PowerBUS_Frequency_0
2.000314878 9,366,092 hv_24x7/pm_pb_cyc,chip=1/ # 2.3 GHz PowerBUS_Frequency_1
Here, the _0 and _1 suffixes after PowerBUS_Frequency indicate the parameter value.
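Conceptually the expansion works like the sketch below: the single
metric expression from the JSON file carries a "?" placeholder, and one
event per runtime parameter value is created by substituting it. The
metric string and the hard-coded count are illustrative, not the actual
metricgroup/expr code:
    #include <stdio.h>

    /* hypothetical arch hook: how many copies of the metric to create */
    static int arch_get_runtimeparam(void) { return 2; /* e.g. two chips */ }

    int main(void)
    {
            const char *metric = "hv_24x7@pm_pb_cyc\\,chip\\=?@";
            char expanded[256];
            int i, n = arch_get_runtimeparam();

            for (i = 0; i < n; i++) {
                    const char *p = metric;
                    int j = 0;

                    for (; *p && j < (int)sizeof(expanded) - 16; p++) {
                            if (*p == '?')
                                    j += snprintf(expanded + j, 16, "%d", i);
                            else
                                    expanded[j++] = *p;
                    }
                    expanded[j] = '\0';
                    printf("PowerBUS_Frequency_%d -> %s\n", i, expanded);
            }
            return 0;
    }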
Signed-off-by: Kajol Jain <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Anju T Sudhakar <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Jin Yao <[email protected]>
Cc: Joe Mario <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Madhavan Srinivasan <[email protected]>
Cc: Mamatha Inamdar <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Sukadev Bhattiprolu <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Fixes coccicheck warnings:
tools/perf/builtin-diff.c:1565:2-3: Unneeded semicolon
tools/perf/builtin-lock.c:778:2-3: Unneeded semicolon
tools/perf/builtin-mem.c:126:2-3: Unneeded semicolon
tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c:555:2-3: Unneeded semicolon
tools/perf/util/ordered-events.c:317:2-3: Unneeded semicolon
tools/perf/util/synthetic-events.c:1131:2-3: Unneeded semicolon
tools/perf/util/trace-event-read.c:78:2-3: Unneeded semicolon
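The warning in each case is a stray ';' after a closing brace, e.g.:
    switch (type) {
    case 0:
            break;
    };      /* <- unneeded semicolon, should be just '}' */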
Reported-by: Hulk Robot <[email protected]>
Signed-off-by: Zou Wei <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The synthesize benchmark, run on a single process and thread, shows
perf_event__synthesize_mmap_events as the hottest function with fgets
and sscanf taking the majority of execution time.
fscanf performs similarly well. Replace the scanf call with manual
reading of each field of the /proc/pid/maps line, and remove some
unnecessary buffering.
This change also addresses potential, but unlikely, buffer overruns for
the string values read by scanf.
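The replacement is roughly the sketch below: walk the line once,
converting the hex fields with strtoull() and copying the strings with
explicit bounds, instead of one big sscanf(). Names are hypothetical and
the field handling is simplified relative to the real
perf_event__synthesize_mmap_events():
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* parse "start-end perms offset dev inode path" from one maps line */
    static int parse_maps_line(const char *line, uint64_t *start, uint64_t *end,
                               char *prot, size_t protsz, char *path, size_t pathsz)
    {
            char *p;

            *start = strtoull(line, &p, 16);
            if (*p++ != '-')
                    return -1;
            *end = strtoull(p, &p, 16);
            if (*p++ != ' ')
                    return -1;
            snprintf(prot, protsz, "%.4s", p);      /* e.g. "r-xp" */
            /* skip offset, dev and inode; anonymous mappings have no path */
            p = strchr(p, '/');
            snprintf(path, pathsz, "%s", p ? p : "");
            path[strcspn(path, "\n")] = '\0';
            return 0;
    }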
Performance before is:
$ sudo perf bench internals synthesize -m 16 -M 16 -s -t
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
Average synthesis took: 102.810 usec (+- 0.027 usec)
Average num. events: 17.000 (+- 0.000)
Average time per event 6.048 usec
Average data synthesis took: 106.325 usec (+- 0.018 usec)
Average num. events: 89.000 (+- 0.000)
Average time per event 1.195 usec
Computing performance of multi threaded perf event synthesis by
synthesizing events on CPU 0:
Number of synthesis threads: 16
Average synthesis took: 68103.100 usec (+- 441.234 usec)
Average num. events: 30703.000 (+- 0.730)
Average time per event 2.218 usec
And after is:
$ sudo perf bench internals synthesize -m 16 -M 16 -s -t
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
Average synthesis took: 50.388 usec (+- 0.031 usec)
Average num. events: 17.000 (+- 0.000)
Average time per event 2.964 usec
Average data synthesis took: 52.693 usec (+- 0.020 usec)
Average num. events: 89.000 (+- 0.000)
Average time per event 0.592 usec
Computing performance of multi threaded perf event synthesis by
synthesizing events on CPU 0:
Number of synthesis threads: 16
Average synthesis took: 45022.400 usec (+- 552.740 usec)
Average num. events: 30624.200 (+- 10.037)
Average time per event 1.470 usec
On an Intel Xeon 6154, compiling with Debian gcc 9.2.1.
Committer testing:
On an AMD Ryzen 5 3600X 6-Core Processor:
Before:
# perf bench internals synthesize --min-threads 12 --max-threads 12 --st --mt
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
Average synthesis took: 267.491 usec (+- 0.176 usec)
Average num. events: 56.000 (+- 0.000)
Average time per event 4.777 usec
Average data synthesis took: 277.257 usec (+- 0.169 usec)
Average num. events: 287.000 (+- 0.000)
Average time per event 0.966 usec
Computing performance of multi threaded perf event synthesis by
synthesizing events on CPU 0:
Number of synthesis threads: 12
Average synthesis took: 81599.500 usec (+- 346.315 usec)
Average num. events: 36096.100 (+- 2.523)
Average time per event 2.261 usec
#
After:
# perf bench internals synthesize --min-threads 12 --max-threads 12 --st --mt
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
Average synthesis took: 110.125 usec (+- 0.080 usec)
Average num. events: 56.000 (+- 0.000)
Average time per event 1.967 usec
Average data synthesis took: 118.518 usec (+- 0.057 usec)
Average num. events: 287.000 (+- 0.000)
Average time per event 0.413 usec
Computing performance of multi threaded perf event synthesis by
synthesizing events on CPU 0:
Number of synthesis threads: 12
Average synthesis took: 43490.700 usec (+- 284.527 usec)
Average num. events: 37028.500 (+- 0.563)
Average time per event 1.175 usec
#
Signed-off-by: Ian Rogers <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Acked-by: Namhyung Kim <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andrey Zhizhikin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
To control the degree of parallelism of the synthesize_mmap() code,
which scans /proc/PID/task/PID/maps and can be time consuming.
Mimic the 'perf top' way of handling the option.
If not specified, it defaults to 1 thread, i.e. the default behavior
before this option.
On a desktop computer the processing of /proc/PID/task/PID/maps isn't
slow enough to warrant parallel processing and the thread creation has
some cost - hence the default of 1. On a loaded server with
>100 cores it is possible to see synthesis times on the order of
seconds, and in this case having the option is desirable.
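The shape of the parallelism is the usual "split the PID list across N
workers" pattern; a minimal generic sketch with pthreads (hypothetical
names, not the actual __perf_event__synthesize_threads() code):
    #include <pthread.h>

    #define NR_PIDS    1000
    #define MAX_SYNTH  64

    struct work { int first, last; };       /* half-open slice of the PID array */

    static void *synth_worker(void *arg)
    {
            struct work *w = arg;

            for (int i = w->first; i < w->last; i++)
                    ;   /* read /proc/<pid>/task/<pid>/maps and emit the events */
            return NULL;
    }

    static void synthesize(int nr_threads)
    {
            pthread_t tid[MAX_SYNTH];
            struct work w[MAX_SYNTH];
            int chunk;

            if (nr_threads > MAX_SYNTH)
                    nr_threads = MAX_SYNTH;
            chunk = (NR_PIDS + nr_threads - 1) / nr_threads;

            for (int t = 0; t < nr_threads; t++) {
                    w[t].first = t * chunk;
                    w[t].last  = w[t].first + chunk < NR_PIDS ?
                                 w[t].first + chunk : NR_PIDS;
                    pthread_create(&tid[t], NULL, synth_worker, &w[t]);
            }
            for (int t = 0; t < nr_threads; t++)
                    pthread_join(tid[t], NULL);
    }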
As the processing is a synchronization point, it is legitimate to worry if
Amdahl's law will apply to this patch. Profiling with this patch in
place:
https://lore.kernel.org/lkml/[email protected]/
shows:
...
- 32.59% __perf_event__synthesize_threads
- 32.54% __event__synthesize_thread
+ 22.13% perf_event__synthesize_mmap_events
+ 6.68% perf_event__get_comm_ids.constprop.0
+ 1.49% process_synthesized_event
+ 1.29% __GI___readdir64
+ 0.60% __opendir
...
That is, the serialized part (process_synthesized_event) is only 1.49%
of the execution time, so there is plenty left to make parallel. This is
shown in the benchmark in this patch:
https://lore.kernel.org/lkml/[email protected]/
Computing performance of multi threaded perf event synthesis by
synthesizing events on CPU 0:
Number of synthesis threads: 1
Average synthesis took: 127729.000 usec (+- 3372.880 usec)
Average num. events: 21548.600 (+- 0.306)
Average time per event 5.927 usec
Number of synthesis threads: 2
Average synthesis took: 88863.500 usec (+- 385.168 usec)
Average num. events: 21552.800 (+- 0.327)
Average time per event 4.123 usec
Number of synthesis threads: 3
Average synthesis took: 83257.400 usec (+- 348.617 usec)
Average num. events: 21553.200 (+- 0.327)
Average time per event 3.863 usec
Number of synthesis threads: 4
Average synthesis took: 75093.000 usec (+- 422.978 usec)
Average num. events: 21554.200 (+- 0.200)
Average time per event 3.484 usec
Number of synthesis threads: 5
Average synthesis took: 64896.600 usec (+- 353.348 usec)
Average num. events: 21558.000 (+- 0.000)
Average time per event 3.010 usec
Number of synthesis threads: 6
Average synthesis took: 59210.200 usec (+- 342.890 usec)
Average num. events: 21560.000 (+- 0.000)
Average time per event 2.746 usec
Number of synthesis threads: 7
Average synthesis took: 54093.900 usec (+- 306.247 usec)
Average num. events: 21562.000 (+- 0.000)
Average time per event 2.509 usec
Number of synthesis threads: 8
Average synthesis took: 48938.700 usec (+- 341.732 usec)
Average num. events: 21564.000 (+- 0.000)
Average time per event 2.269 usec
Where the average time per synthesized event goes from 5.927 usec with
1 thread to 2.269 usec with 8. This isn't a linear speed up, as not all
of the synthesize code has been made parallel. If the synthesis time was
about 10 seconds, then using 8 threads may bring this down to less than 4.
Signed-off-by: Stephane Eranian <[email protected]>
Reviewed-by: Ian Rogers <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tony Jones <[email protected]>
Cc: yuzhoujian <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
As the code comments in perf_stat_process_counter() say, we calculate
the counter's data every interval, and the display code shows the
ps->res_stats avg value. We need to zero the stats for interval mode.
But the current code only zeros res_stats[0]; it doesn't zero
res_stats[1] and res_stats[2], which hold the counter's ena and run
values.
This patch zeros the whole res_stats[] for interval mode.
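Schematically (the struct layout below is illustrative, not the literal
tools/perf definitions), the fix is the difference between resetting one
element and resetting the whole array:
    #include <string.h>

    struct stats     { double mean, M2; unsigned long n; };   /* illustrative */
    struct stat_priv { struct stats res_stats[3]; };  /* [0]=val [1]=ena [2]=run */

    static void reset_interval_stats(struct stat_priv *ps)
    {
            /* the bug was clearing only res_stats[0] here */
            memset(ps->res_stats, 0, sizeof(ps->res_stats));
    }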
Fixes: 51fd2df1e882 ("perf stat: Fix interval output values")
Signed-off-by: Jin Yao <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jin Yao <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
al->sym may be NULL given the current if conditions, which may cause a segv.
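A rough sketch of the guard (the struct definitions here are an
illustrative subset, not the real addr_location/symbol):
    #include <stdbool.h>
    #include <string.h>

    struct symbol        { const char *name; };
    struct addr_location { struct symbol *sym; };

    static bool symbol_matches(struct addr_location *al, const char *wanted)
    {
            return al->sym && al->sym->name &&      /* al->sym may be NULL */
                   strcmp(al->sym->name, wanted) == 0;
    }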
Fixes: d2bedb7863e9 ("perf script: Allow --symbol to accept hexadecimal addresses")
Signed-off-by: Ian Rogers <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Do not bother with close() if fd is not valid, just to silence valgrind:
$ valgrind ./perf script
==59169== Memcheck, a memory error detector
==59169== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==59169== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==59169== Command: ./perf script
==59169==
==59169== Warning: invalid file descriptor -1 in syscall close()
==59169== Warning: invalid file descriptor -1 in syscall close()
==59169== Warning: invalid file descriptor -1 in syscall close()
==59169== Warning: invalid file descriptor -1 in syscall close()
==59169== Warning: invalid file descriptor -1 in syscall close()
==59169== Warning: invalid file descriptor -1 in syscall close()
==59169== Warning: invalid file descriptor -1 in syscall close()
==59169== Warning: invalid file descriptor -1 in syscall close()
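The change amounts to the usual guard around close(); a minimal sketch:
    #include <unistd.h>

    static void close_if_valid(int fd)
    {
            if (fd >= 0)    /* don't pass -1 to close(), valgrind warns about it */
                    close(fd);
    }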
Signed-off-by: Tommi Rantala <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Perf checks for duplicate entries in a callchain before adding an
entry. However the check is very slow, especially with deeper call
stacks: almost ~50% of the elapsed time of 'perf report' is spent on the
check when the call stack is always 32 deep.
The hist_entry__cmp() is used to compare the new entry with the old
entries. It goes through all the available sorts in the sort_list and
calls the specific cmp of each sort, which is very slow.
Actually, in most cases there are no duplicate entries in the callchain;
the symbols are usually different. It's much faster to do a quick check
on the symbols first and only do the full cmp when the symbols are
exactly the same.
The quick check only compares symbols, not the dso. Export
_sort__sym_cmp.
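Schematically, the fast path is a cheap symbol comparison in front of
the full sort-key walk (hypothetical names, not the exact hists/sort
code):
    #include <stdbool.h>

    struct sym   { unsigned long start, end; };
    struct entry { struct sym *sym; };

    /* cheap pre-filter: entries with different symbols can't be duplicates */
    static bool symbols_match(struct entry *a, struct entry *b)
    {
            if (!a->sym || !b->sym)
                    return a->sym == b->sym;
            return a->sym->start == b->sym->start && a->sym->end == b->sym->end;
    }

    /* only when symbols_match() returns true fall back to the full, slow
     * hist_entry__cmp() walk over every configured sort key */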
$ perf record --call-graph lbr ./tchain_edit_64
Without the patch
$ time perf report --stdio
real 0m21.142s
user 0m21.110s
sys 0m0.033s
With the patch
$ time perf report --stdio
real 0m10.977s
user 0m10.948s
sys 0m0.027s
Signed-off-by: Kan Liang <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
With the LBR stitching approach, the reconstructed LBR call stack
can break the HW limitation. However, it may reconstruct invalid call
stacks in some cases, e.g. exception handling such as setjmp/longjmp.
Also, it may impact the processing time, especially when the number of
samples with stitched LBRs is huge.
Add an option to enable the approach.
The option must be used with --call-graph lbr.
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
In LBR call stack mode, the depth of the reconstructed LBR call stack
is limited to the number of LBR registers.
For example, on skylake, the depth of reconstructed LBR call stack is
always <= 32.
# To display the perf.data header info, please use
# --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 6K of event 'cycles'
# Event count (approx.): 6487119731
#
# Children Self Command Shared Object Symbol
# ........ ........ ............... ..................
# ................................
99.97% 99.97% tchain_edit tchain_edit [.] f43
|
--99.64%--f11
f12
f13
f14
f15
f16
f17
f18
f19
f20
f21
f22
f23
f24
f25
f26
f27
f28
f29
f30
f31
f32
f33
f34
f35
f36
f37
f38
f39
f40
f41
f42
f43
For a call stack deeper than the LBR limit, HW will overwrite the LBR
register holding the oldest branch. Only partial call stacks can be
reconstructed.
However, the overwritten LBRs may still be retrieved from the previous
sample. At that moment, HW hadn't overwritten the LBR registers yet.
Perf tools can stitch those overwritten LBRs onto the current call stack
to get a more complete call stack.
To determine if LBRs can be stitched, perf tools need to compare the
current sample with the previous sample.
- They should have identical LBR records (same from, to and flags
values, and the same physical index of the LBR registers).
- The search starts from the base-of-stack of the current sample.
Once perf decides to stitch the previous LBRs, the corresponding LBR
cursor nodes are copied to 'lists'. The 'lists' tracks the LBR cursor
nodes which are going to be stitched.
When the stitching is over, the nodes are not freed immediately. They
are moved to 'free_lists' so the next stitching may reuse the space.
Both 'lists' and 'free_lists' are freed when all samples have been
processed.
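The 'lists'/'free_lists' handling described above is a standard
free-list pattern; a generic sketch with a simple singly linked list
(not the real callchain cursor code):
    #include <stdlib.h>

    struct node   { struct node *next; /* copied LBR cursor data lives here */ };
    struct stitch { struct node *lists, *free_lists; };

    static struct node *node_get(struct stitch *st)
    {
            struct node *n = st->free_lists;

            if (n)                          /* reuse a node from a prior stitch */
                    st->free_lists = n->next;
            else
                    n = calloc(1, sizeof(*n));
            if (!n)
                    return NULL;
            n->next = st->lists;            /* track it as part of this stitch */
            st->lists = n;
            return n;
    }

    static void stitch_done(struct stitch *st)
    {
            while (st->lists) {             /* park, don't free: reuse next time */
                    struct node *n = st->lists;

                    st->lists = n->next;
                    n->next = st->free_lists;
                    st->free_lists = n;
            }
    }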
Committer notes:
Fix the intel-pt.c initialization of the union with 'struct
branch_flags', that breaks the build with its unnamed union on older gcc
versions.
Uninline thread__free_stitch_list(), as it grew big and started dragging
includes to thread.h, so move it to thread.c where what it needs in
terms of headers are already there.
This fixes the build in several systems such as debian:experimental when
cross building to the MIPS32 architecture, i.e. in the other cases what
was needed was being included by sheer luck.
In file included from builtin-sched.c:11:
util/thread.h: In function 'thread__free_stitch_list':
util/thread.h:169:3: error: implicit declaration of function 'free' [-Werror=implicit-function-declaration]
169 | free(pos);
| ^~~~
util/thread.h:169:3: error: incompatible implicit declaration of built-in function 'free' [-Werror]
util/thread.h:19:1: note: include '<stdlib.h>' or provide a declaration of 'free'
18 | #include "callchain.h"
+++ |+#include <stdlib.h>
19 |
util/thread.h:174:3: error: incompatible implicit declaration of built-in function 'free' [-Werror]
174 | free(pos);
| ^~~~
util/thread.h:174:3: note: include '<stdlib.h>' or provide a declaration of 'free'
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The cursor nodes generated from a sample are eventually added into the
callchain. To avoid generating cursor nodes from previous samples again,
the previous cursor nodes are also saved for the LBR stitching approach.
Some options, e.g. hide-unresolved, may hide some LBRs. Add a 'valid'
field in struct callchain_cursor_node to indicate this case. The LBR
stitching approach will later append only the valid cursor nodes from
previous samples.
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
[ Use zfree() instead of open coded equivalent, and use it when freeing members of structs ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
To retrieve the overwritten LBRs from the previous sample for the LBR
stitching approach, perf has to save the previous sample.
Only allocate struct lbr_stitch once, when the LBR stitching approach is
enabled and the kernel supports hw_idx.
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
[ Use zalloc()/zfree() for thread->lbr_stitch ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The LBR stitch approach should be disabled by default, because:
- The stitching approach is based on LBR call stack technology. The
known limitations of LBR call stack technology still apply to the
approach, e.g. exception handling such as setjmp/longjmp will have
calls/returns that don't match.
- This approach is not foolproof. There can be cases where it creates
incorrect call stacks from incorrect matches. There is no attempt to
validate any matches in another way.
The 'lbr_stitch_enable' variable indicates whether the LBR stitch
approach is enabled; it is disabled by default. The following patch will
introduce a new option for each tool to enable the LBR stitch approach.
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Both the caller and the callee need to add the ip from the LBR to the callchain.
Factor out lbr_callchain_add_lbr_ip() to improve code readability.
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Both the caller and the callee need to add the kernel ip to the
callchain. Factor out lbr_callchain_add_kernel_ip() to improve code
readability.
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The LBR only collects the user call stack. To reconstruct a call stack,
both the kernel call stack and the user call stack are required. The
function resolve_lbr_callchain_sample() mixes the kernel call stack and
the user call stack.
Now, with the help of the HW idx, the perf tool can reconstruct a more
complete call stack by adding some user call stack from the previous
sample. However, the current implementation is hard to extend to support
that.
Current code path for resolve_lbr_callchain_sample()
for (j = 0; j < mix_chain_nr; j++) {
if (ORDER_CALLEE) {
if (kernel callchain)
Fill callchain info
else if (LBR callchain)
Fill callchain info
} else {
if (LBR callchain)
Fill callchain info
else if (kernel callchain)
Fill callchain info
}
add_callchain_ip();
}
With the patch,
if (ORDER_CALLEE) {
for (j = 0; j < NUM of kernel callchain) {
Fill callchain info
add_callchain_ip();
}
for (; j < mix_chain_nr) {
Fill callchain info
add_callchain_ip();
}
} else {
for (; j < NUM of LBR callchain) {
Fill callchain info
add_callchain_ip();
}
for (j = 0; j < mix_chain_nr) {
Fill callchain info
add_callchain_ip();
}
}
No functional changes.
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The indent is unnecessary in resolve_lbr_callchain_sample. Removing it
will make the following patch simpler.
Current code path for resolve_lbr_callchain_sample()
/* LBR only affects the user callchain */
if (i != chain_nr) {
body of the function
....
return 1;
}
return 0;
With the patch,
/* LBR only affects the user callchain */
if (i == chain_nr)
return 0;
body of the function
...
return 1;
No functional changes.
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
To stitch the LBR call stack, the max LBR information is required, so
the CPU PMU capabilities information has to be stored in the perf
header.
Add a new feature, HEADER_CPU_PMU_CAPS, for the CPU PMU capabilities.
Retrieve all CPU PMU capabilities, not just the max LBR information.
Add a max_branches variable to facilitate future usage.
Committer testing:
# ls -la /sys/devices/cpu/caps/
total 0
drwxr-xr-x. 2 root root 0 Apr 17 10:53 .
drwxr-xr-x. 6 root root 0 Apr 17 07:02 ..
-r--r--r--. 1 root root 4096 Apr 17 10:53 max_precise
#
# cat /sys/devices/cpu/caps/max_precise
0
# perf record sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.033 MB perf.data (7 samples) ]
#
# perf report --header-only | egrep 'cpu(desc|.*capabilities)'
# cpudesc : AMD Ryzen 5 3600X 6-Core Processor
# cpu pmu capabilities: max_precise=0
#
And then on an Intel machine:
$ ls -la /sys/devices/cpu/caps/
total 0
drwxr-xr-x. 2 root root 0 Apr 17 10:51 .
drwxr-xr-x. 6 root root 0 Apr 17 10:04 ..
-r--r--r--. 1 root root 4096 Apr 17 11:37 branches
-r--r--r--. 1 root root 4096 Apr 17 10:51 max_precise
-r--r--r--. 1 root root 4096 Apr 17 11:37 pmu_name
$ cat /sys/devices/cpu/caps/max_precise
3
$ cat /sys/devices/cpu/caps/branches
32
$ cat /sys/devices/cpu/caps/pmu_name
skylake
$ perf record sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.001 MB perf.data (8 samples) ]
$ perf report --header-only | egrep 'cpu(desc|.*capabilities)'
# cpudesc : Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
# cpu pmu capabilities: branches=32, max_precise=3, pmu_name=skylake
$
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The current rXXXX event specification creates an event under the
PERF_TYPE_RAW pmu type. This change allows using rXXXX within the pmu
syntax, so the pmu's type is used, via the following syntax:
-e 'cpu/r3c/'
-e 'cpum_cf/r0/'
The XXXX number goes directly into perf_event_attr::config, the same way
as with an '-e rXXXX' event. perf_event_attr::type is filled with the
pmu type.
Committer testing:
So, let's see what goes into perf_event_attr::config for, say, the
'instructions' PERF_TYPE_HARDWARE (0) event. First we should look at how
to encode this event as a PERF_TYPE_RAW event for this specific CPU, an
AMD Ryzen 5:
# cat /sys/devices/cpu/events/instructions
event=0xc0
#
Then try with it _and_ 'instructions', just to see that they are close
enough:
# perf stat -e rc0,instructions sleep 1
Performance counter stats for 'sleep 1':
919,794 rc0
919,898 instructions
1.000754579 seconds time elapsed
0.000715000 seconds user
0.000000000 seconds sys
#
Now we should try, before this patch, the PMU event encoding:
# perf stat -e cpu/rc0/ sleep 1
event syntax error: 'cpu/rc0/'
\___ unknown term
valid terms: event,edge,inv,umask,cmask,config,config1,config2,name,period,percore
#
Now with this patch, the three ways of specifying the 'instructions' CPU
counter are accepted:
# perf stat -e cpu/rc0/,rc0,instructions sleep 1
Performance counter stats for 'sleep 1':
892,948 cpu/rc0/
893,052 rc0
893,156 instructions
1.000931819 seconds time elapsed
0.000916000 seconds user
0.000000000 seconds sys
#
Requested-by: Thomas Richter <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Tested-by: Thomas Richter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Sumanth Korikkar <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The PMU capabilities information, which is located at
/sys/bus/event_source/devices/<dev>/caps, is required by the perf tool.
For example, the max LBR information is required to stitch the LBR call
stack.
Add perf_pmu__caps_parse() to parse the PMU capabilities information.
The information is stored in a list.
The following patch will store the capabilities information in the perf
header.
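Reading those caps boils down to iterating the directory and collecting
name=value pairs; a simplified stand-alone sketch (plain readdir/fopen,
not the actual perf_pmu__caps_parse() implementation):
    #include <dirent.h>
    #include <stdio.h>

    static void print_pmu_caps(const char *pmu)
    {
            char path[512], val[128];
            struct dirent *d;
            DIR *dir;
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/bus/event_source/devices/%s/caps", pmu);
            dir = opendir(path);
            if (!dir)
                    return;                 /* this PMU has no caps directory */
            while ((d = readdir(dir)) != NULL) {
                    if (d->d_name[0] == '.')
                            continue;
                    snprintf(path, sizeof(path),
                             "/sys/bus/event_source/devices/%s/caps/%s",
                             pmu, d->d_name);
                    f = fopen(path, "r");
                    if (f && fgets(val, sizeof(val), f))
                            printf("%s=%s", d->d_name, val);  /* val keeps '\n' */
                    if (f)
                            fclose(f);
            }
            closedir(dir);
    }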
Committer notes:
Here's an example of such directories and its files in an i5 7th gen
machine:
[root@seventh ~]# ls -lad /sys/bus/event_source/devices/*/caps
drwxr-xr-x. 2 root root 0 Apr 14 13:33 /sys/bus/event_source/devices/cpu/caps
drwxr-xr-x. 2 root root 0 Apr 14 13:33 /sys/bus/event_source/devices/intel_pt/caps
[root@seventh ~]# ls -la /sys/bus/event_source/devices/intel_pt/caps
total 0
drwxr-xr-x. 2 root root 0 Apr 14 13:33 .
drwxr-xr-x. 5 root root 0 Apr 14 13:12 ..
-r--r--r--. 1 root root 4096 Apr 16 13:10 cr3_filtering
-r--r--r--. 1 root root 4096 Apr 16 11:42 cycle_thresholds
-r--r--r--. 1 root root 4096 Apr 16 13:10 ip_filtering
-r--r--r--. 1 root root 4096 Apr 16 13:10 max_subleaf
-r--r--r--. 1 root root 4096 Apr 14 13:33 mtc
-r--r--r--. 1 root root 4096 Apr 14 13:33 mtc_periods
-r--r--r--. 1 root root 4096 Apr 16 13:10 num_address_ranges
-r--r--r--. 1 root root 4096 Apr 16 13:10 output_subsys
-r--r--r--. 1 root root 4096 Apr 16 13:10 payloads_lip
-r--r--r--. 1 root root 4096 Apr 16 13:10 power_event_trace
-r--r--r--. 1 root root 4096 Apr 14 13:33 psb_cyc
-r--r--r--. 1 root root 4096 Apr 14 13:33 psb_periods
-r--r--r--. 1 root root 4096 Apr 16 13:10 ptwrite
-r--r--r--. 1 root root 4096 Apr 16 13:10 single_range_output
-r--r--r--. 1 root root 4096 Apr 16 12:03 topa_multiple_entries
-r--r--r--. 1 root root 4096 Apr 16 13:10 topa_output
[root@seventh ~]# cat /sys/bus/event_source/devices/intel_pt/caps/topa_output
1
[root@seventh ~]# cat /sys/bus/event_source/devices/intel_pt/caps/topa_multiple_entries
1
[root@seventh ~]# cat /sys/bus/event_source/devices/intel_pt/caps/mtc
1
[root@seventh ~]# cat /sys/bus/event_source/devices/intel_pt/caps/power_event_trace
0
[root@seventh ~]#
[root@seventh ~]# ls -la /sys/bus/event_source/devices/cpu/caps/
total 0
drwxr-xr-x. 2 root root 0 Apr 14 13:33 .
drwxr-xr-x. 6 root root 0 Apr 14 13:12 ..
-r--r--r--. 1 root root 4096 Apr 16 13:10 branches
-r--r--r--. 1 root root 4096 Apr 14 13:33 max_precise
-r--r--r--. 1 root root 4096 Apr 16 13:10 pmu_name
[root@seventh ~]# cat /sys/bus/event_source/devices/cpu/caps/max_precise
3
[root@seventh ~]# cat /sys/bus/event_source/devices/cpu/caps/branches
32
[root@seventh ~]# cat /sys/bus/event_source/devices/cpu/caps/pmu_name
skylake
[root@seventh ~]#
Wow, first time I've heard about
/sys/bus/event_source/devices/cpu/caps/max_precise, I think I'll use it!
:-)
Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
When it is not possible for a non-privileged perf command to monitor at
the kernel level (:k), the fallback code forces a :u. That works if the
event was previously monitoring both levels. But if the event was
already constrained to kernel only, then it does not make sense to
restrict it to user only.
Given the code works by exclusion, a kernel only event would have:
attr->exclude_user = 1
The fallback code would add:
attr->exclude_kernel = 1
In the end the event would not monitor at either the user level or the
kernel level. In other words, it would count nothing.
An event programmed to monitor kernel only cannot be switched to user
only without seriously warning the user.
This patch forces an error in this case to make it clear the request
cannot really be satisfied.
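The new check amounts to refusing the fallback when user space was
already excluded; schematically (the attr field names are real, the
surrounding fallback logic is simplified):
    #include <stdbool.h>
    #include <linux/perf_event.h>

    /* return true if falling back to :u is acceptable for this event */
    static bool can_fallback_to_user(struct perf_event_attr *attr)
    {
            if (attr->exclude_user)
                    return false;   /* kernel-only event: :u would count nothing */
            attr->exclude_kernel = 1;
            return true;
    }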
Behavior with paranoid 1:
$ sudo bash -c "echo 1 > /proc/sys/kernel/perf_event_paranoid"
$ perf stat -e cycles:k sleep 1
Performance counter stats for 'sleep 1':
1,520,413 cycles:k
1.002361664 seconds time elapsed
0.002480000 seconds user
0.000000000 seconds sys
Old behavior with paranoid 2:
$ sudo bash -c "echo 2 > /proc/sys/kernel/perf_event_paranoid"
$ perf stat -e cycles:k sleep 1
Performance counter stats for 'sleep 1':
0 cycles:ku
1.002358127 seconds time elapsed
0.002384000 seconds user
0.000000000 seconds sys
New behavior with paranoid 2:
$ sudo bash -c "echo 2 > /proc/sys/kernel/perf_event_paranoid"
$ perf stat -e cycles:k sleep 1
Error:
You may not have permission to collect stats.
Consider tweaking /proc/sys/kernel/perf_event_paranoid,
which controls use of the performance events system by
unprivileged users (without CAP_PERFMON or CAP_SYS_ADMIN).
The current value is 2:
-1: Allow use of (almost) all events by all users
Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK
>= 0: Disallow ftrace function tracepoint by users without CAP_PERFMON or CAP_SYS_ADMIN
Disallow raw tracepoint access by users without CAP_SYS_PERFMON or CAP_SYS_ADMIN
>= 1: Disallow CPU event access by users without CAP_PERFMON or CAP_SYS_ADMIN
>= 2: Disallow kernel profiling by users without CAP_PERFMON or CAP_SYS_ADMIN
To make this setting permanent, edit /etc/sysctl.conf too, e.g.:
kernel.perf_event_paranoid = -1
v2 of this patch addresses the review feedback from [email protected].
Signed-off-by: Stephane Eranian <[email protected]>
Reviewed-by: Ian Rogers <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
When AUX area events are used in sampling mode, they must be the group
leader, but the group leader is also used for leader-sampling. However,
it is not desirable to use an AUX area event as the leader for
leader-sampling, because it doesn't have any samples of its own. To support
leader-sampling with AUX area events, use the 2nd event of the group as the
"leader" for the purposes of leader-sampling.
Example:
# perf record --kcore --aux-sample -e '{intel_pt//,cycles,instructions}:S' -c 10000 uname
[ perf record: Woken up 3 times to write data ]
[ perf record: Captured and wrote 0.786 MB perf.data ]
# perf report
Samples: 380 of events 'anon group { cycles, instructions }', Event count (approx.): 3026164
Children Self Command Shared Object Symbol
+ 38.76% 42.65% 0.00% 0.00% uname [kernel.kallsyms] [k] __x86_indirect_thunk_rax
+ 35.82% 31.33% 0.00% 0.00% uname ld-2.28.so [.] _dl_start_user
+ 34.29% 29.74% 0.55% 0.47% uname ld-2.28.so [.] _dl_start
+ 33.73% 28.62% 1.60% 0.97% uname ld-2.28.so [.] dl_main
+ 33.19% 29.04% 0.52% 0.32% uname ld-2.28.so [.] _dl_sysdep_start
+ 27.83% 33.74% 0.00% 0.00% uname [kernel.kallsyms] [k] do_syscall_64
+ 26.76% 33.29% 0.00% 0.00% uname [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
+ 23.78% 20.33% 5.97% 5.25% uname [kernel.kallsyms] [k] page_fault
+ 23.18% 24.60% 0.00% 0.00% uname libc-2.28.so [.] __libc_start_main
+ 22.64% 24.37% 0.00% 0.00% uname uname [.] _start
+ 21.04% 23.27% 0.00% 0.00% uname uname [.] main
+ 19.48% 18.08% 3.72% 3.64% uname ld-2.28.so [.] _dl_relocate_object
+ 19.47% 21.81% 0.00% 0.00% uname libc-2.28.so [.] setlocale
+ 19.44% 21.56% 0.52% 0.61% uname libc-2.28.so [.] _nl_find_locale
+ 17.87% 19.66% 0.00% 0.00% uname libc-2.28.so [.] _nl_load_locale_from_archive
+ 15.71% 13.73% 0.53% 0.52% uname [kernel.kallsyms] [k] do_page_fault
+ 15.18% 13.21% 1.03% 0.68% uname [kernel.kallsyms] [k] handle_mm_fault
+ 14.15% 12.53% 1.01% 1.12% uname [kernel.kallsyms] [k] __handle_mm_fault
+ 12.03% 9.67% 0.54% 0.32% uname ld-2.28.so [.] _dl_map_object
+ 10.55% 8.48% 0.00% 0.00% uname ld-2.28.so [.] openaux
+ 10.55% 20.20% 0.52% 0.61% uname libc-2.28.so [.] __run_exit_handlers
Committer notes:
Fixed up this problem:
util/record.c: In function ‘perf_evlist__config’:
util/record.c:256:3: error: too few arguments to function ‘perf_evsel__config_leader_sampling’
256 | perf_evsel__config_leader_sampling(evsel);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
util/record.c:190:13: note: declared here
190 | static void perf_evsel__config_leader_sampling(struct evsel *evsel,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Tools find the correct evsel, and therefore read format, using the event
ID, so it isn't necessary for all read formats to be the same. In the
case of leader-sampling of AUX area events, dummy tracking events will
have a different read format, so relax the validation to become a debug
message only.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
In preparation for adding support for leader sampling with AUX area events.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move leader-sampling configuration in preparation for adding support for
leader sampling with AUX area events.
Committer notes:
It only makes sense when configuring an evsel that is part of an evlist,
so the only case where it is called outside perf_evlist__config(), in
some 'perf test' entry, is safe, and even there we should just use
perf_evlist__config(), but since in that case we have just one evsel in
the evlist, it is equivalent.
Also fixed up this problem:
util/record.c: In function ‘perf_evlist__config’:
util/record.c:223:3: error: too many arguments to function ‘perf_evsel__config_leader_sampling’
223 | perf_evsel__config_leader_sampling(evsel, evlist);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
util/record.c:170:13: note: declared here
170 | static void perf_evsel__config_leader_sampling(struct evsel *evsel)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
perf_evsel__is_aux_event()
Move and globalize 2 functions from the auxtrace specific sources so
that they can be reused.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
[ Move to pmu.c, as moving to evsel.h breaks the python binding ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Currently, callchains can be synthesized only for synthesized events.
Support also synthesizing callchains for regular events.
Example:
# perf record --kcore --aux-sample -e '{intel_pt//,cycles}' -c 10000 uname
Linux
[ perf record: Woken up 3 times to write data ]
[ perf record: Captured and wrote 0.532 MB perf.data ]
# perf script --itrace=Ge | head -20
uname 4864 2419025.358181: 10000 cycles:
ffffffffbba56965 apparmor_bprm_committing_creds+0x35 ([kernel.kallsyms])
ffffffffbc400cd5 __indirect_thunk_start+0x5 ([kernel.kallsyms])
ffffffffbba07422 security_bprm_committing_creds+0x22 ([kernel.kallsyms])
ffffffffbb89805d install_exec_creds+0xd ([kernel.kallsyms])
ffffffffbb90d9ac load_elf_binary+0x3ac ([kernel.kallsyms])
uname 4864 2419025.358185: 10000 cycles:
ffffffffbba56db0 apparmor_bprm_committed_creds+0x20 ([kernel.kallsyms])
ffffffffbc400cd5 __indirect_thunk_start+0x5 ([kernel.kallsyms])
ffffffffbba07452 security_bprm_committed_creds+0x22 ([kernel.kallsyms])
ffffffffbb89809a install_exec_creds+0x4a ([kernel.kallsyms])
ffffffffbb90d9ac load_elf_binary+0x3ac ([kernel.kallsyms])
uname 4864 2419025.358189: 10000 cycles:
ffffffffbb86fdf6 vma_adjust_trans_huge+0x6 ([kernel.kallsyms])
ffffffffbb821660 __vma_adjust+0x160 ([kernel.kallsyms])
ffffffffbb897be7 shift_arg_pages+0x97 ([kernel.kallsyms])
ffffffffbb897ed9 setup_arg_pages+0x1e9 ([kernel.kallsyms])
ffffffffbb90d9f2 load_elf_binary+0x3f2 ([kernel.kallsyms])
Committer testing:
# perf record --kcore --aux-sample -e '{intel_pt//,cycles}' -c 10000 uname
Linux
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.233 MB perf.data ]
#
Then, before this patch:
# perf script --itrace=Ge | head -20
uname 28642 168664.856384: 10000 cycles: ffffffff9810aeaa commit_creds+0x2a ([kernel.kallsyms])
uname 28642 168664.856388: 10000 cycles: ffffffff982a24f1 mprotect_fixup+0x151 ([kernel.kallsyms])
uname 28642 168664.856392: 10000 cycles: ffffffff982a385b move_page_tables+0xbcb ([kernel.kallsyms])
uname 28642 168664.856396: 10000 cycles: ffffffff982fd4ec __mod_memcg_state+0x1c ([kernel.kallsyms])
uname 28642 168664.856400: 10000 cycles: ffffffff9829fddd do_mmap+0xfd ([kernel.kallsyms])
uname 28642 168664.856404: 10000 cycles: ffffffff9829c879 __vma_adjust+0x479 ([kernel.kallsyms])
uname 28642 168664.856408: 10000 cycles: ffffffff98238e94 __perf_addr_filters_adjust+0x34 ([kernel.kallsyms])
uname 28642 168664.856412: 10000 cycles: ffffffff98a38e0b down_write+0x1b ([kernel.kallsyms])
uname 28642 168664.856416: 10000 cycles: ffffffff983006a0 memcg_kmem_get_cache+0x0 ([kernel.kallsyms])
uname 28642 168664.856421: 10000 cycles: ffffffff98396eaf load_elf_binary+0x92f ([kernel.kallsyms])
uname 28642 168664.856425: 10000 cycles: ffffffff982e0222 kfree+0x62 ([kernel.kallsyms])
uname 28642 168664.856428: 10000 cycles: ffffffff9846dfd4 file_has_perm+0x54 ([kernel.kallsyms])
uname 28642 168664.856433: 10000 cycles: ffffffff98288911 vma_interval_tree_insert+0x51 ([kernel.kallsyms])
uname 28642 168664.856437: 10000 cycles: ffffffff9823e577 perf_event_mmap_output+0x27 ([kernel.kallsyms])
uname 28642 168664.856441: 10000 cycles: ffffffff98a26fa0 xas_load+0x40 ([kernel.kallsyms])
uname 28642 168664.856445: 10000 cycles: ffffffff98004f30 arch_setup_additional_pages+0x0 ([kernel.kallsyms])
uname 28642 168664.856448: 10000 cycles: ffffffff98a297c0 copy_user_generic_unrolled+0xa0 ([kernel.kallsyms])
uname 28642 168664.856452: 10000 cycles: ffffffff9853a87a strnlen_user+0x10a ([kernel.kallsyms])
uname 28642 168664.856456: 10000 cycles: ffffffff986638a7 randomize_page+0x27 ([kernel.kallsyms])
uname 28642 168664.856460: 10000 cycles: ffffffff98a3b645 _raw_spin_lock+0x5 ([kernel.kallsyms])
#
And after:
# perf script --itrace=Ge | head -20
uname 28642 168664.856384: 10000 cycles:
ffffffff9810aeaa commit_creds+0x2a ([kernel.kallsyms])
ffffffff9831fe87 install_exec_creds+0x17 ([kernel.kallsyms])
ffffffff983968d9 load_elf_binary+0x359 ([kernel.kallsyms])
ffffffff98e00c45 __x86_indirect_thunk_rax+0x5 ([kernel.kallsyms])
ffffffff98e00c45 __x86_indirect_thunk_rax+0x5 ([kernel.kallsyms])
uname 28642 168664.856388: 10000 cycles:
ffffffff982a24f1 mprotect_fixup+0x151 ([kernel.kallsyms])
ffffffff9831fa83 setup_arg_pages+0x123 ([kernel.kallsyms])
ffffffff9839691f load_elf_binary+0x39f ([kernel.kallsyms])
ffffffff98e00c45 __x86_indirect_thunk_rax+0x5 ([kernel.kallsyms])
ffffffff98e00c45 __x86_indirect_thunk_rax+0x5 ([kernel.kallsyms])
uname 28642 168664.856392: 10000 cycles:
ffffffff982a385b move_page_tables+0xbcb ([kernel.kallsyms])
ffffffff9831f889 shift_arg_pages+0xa9 ([kernel.kallsyms])
ffffffff9831fb4f setup_arg_pages+0x1ef ([kernel.kallsyms])
ffffffff9839691f load_elf_binary+0x39f ([kernel.kallsyms])
ffffffff98e00c45 __x86_indirect_thunk_rax+0x5 ([kernel.kallsyms])
#
Signed-off-by: Adrian Hunter <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
For reporting purposes, an evsel sample can have a callchain synthesized
from AUX area data. Add support for keeping track of synthesized sample
types. Note, the recorded sample_type cannot be changed because it is
needed to continue to parse events.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Using the 'type' variable to check for callchains is equivalent to
using evsel__has_callchain(evsel) and is how the other PERF_SAMPLE_ bits
are checked in this function, so use it to be consistent.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Add a thread stack function to create a call chain for hardware events
where the sample records get created some time after the event occurred.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Currently, callchains can be synthesized only for synthesized events. Add
an itrace option to synthesize callchains for regular events.
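Usage sketch, matching the before/after output earlier in this log (the
option letters appear there; anything beyond them is an assumption): with
an AUX area trace in perf.data, adding 'G' to --itrace attaches synthesized
call chains to the regular events, and 'e' additionally requests error
events:
  # perf script --itrace=Ge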
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
An AUX area event must be the group leader when recording traces in
sample mode, but that does not produce the expected results from
'perf report', because it expects the leader to provide samples.
Rather than teach 'perf report' about AUX area sampling, un-group the
AUX area event during processing, making the second event the group leader.
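A toy, self-contained sketch of that un-grouping step, shown before the
example that follows (the data layout and names are assumptions, not the
perf implementation):

#include <stdbool.h>
#include <stddef.h>

/* Toy model of an event list: each event stores the index of its group
 * leader; an event that points at itself is a leader. */
struct toy_evsel {
        bool is_aux_area;   /* e.g. an intel_pt// style event */
        bool sample_mode;   /* AUX area sampling was used */
        int  leader;        /* index of the group leader */
};

/* When a group is led by an AUX area event recorded in sample mode, point
 * the remaining members at the second event, leaving the AUX area event on
 * its own, so reporting code sees a leader that carries samples. */
static void ungroup_aux_leaders(struct toy_evsel *ev, size_t nr)
{
        for (size_t i = 0; i + 1 < nr; i++) {
                if (ev[i].leader != (int)i || !ev[i].is_aux_area || !ev[i].sample_mode)
                        continue;
                for (size_t j = i + 1; j < nr && ev[j].leader == (int)i; j++)
                        ev[j].leader = (int)(i + 1);  /* second event becomes leader */
        }
}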
Example:
$ perf record -e '{intel_pt//u,branch-misses:u}' -c 1 uname
Linux
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.080 MB perf.data ]
Before:
$ perf report
Samples: 800 of events 'anon group { intel_pt//u, branch-misses:u }', Event count (approx.): 800
Children Self Command Shared Object Symbol
0.00% 47.50% 0.00% 47.50% uname libc-2.28.so [.] _dl_addr
0.00% 16.38% 0.00% 16.38% uname ld-2.28.so [.] __GI___tunables_init
0.00% 54.75% 0.00% 4.75% uname ld-2.28.so [.] dl_main
0.00% 3.12% 0.00% 3.12% uname ld-2.28.so [.] _dl_map_object_from_fd
0.00% 2.38% 0.00% 2.38% uname ld-2.28.so [.] strcmp
0.00% 2.25% 0.00% 2.25% uname ld-2.28.so [.] _dl_check_map_versions
0.00% 2.00% 0.00% 2.00% uname ld-2.28.so [.] _dl_important_hwcaps
0.00% 2.00% 0.00% 2.00% uname ld-2.28.so [.] _dl_map_object_deps
0.00% 51.50% 0.00% 1.50% uname ld-2.28.so [.] _dl_sysdep_start
0.00% 1.25% 0.00% 1.25% uname ld-2.28.so [.] _dl_load_cache_lookup
0.00% 51.12% 0.00% 1.12% uname ld-2.28.so [.] _dl_start
0.00% 50.88% 0.00% 1.12% uname ld-2.28.so [.] do_lookup_x
0.00% 50.62% 0.00% 1.00% uname ld-2.28.so [.] _dl_lookup_symbol_x
0.00% 1.00% 0.00% 1.00% uname ld-2.28.so [.] _dl_map_object
0.00% 1.00% 0.00% 1.00% uname ld-2.28.so [.] _dl_next_ld_env_entry
0.00% 0.88% 0.00% 0.88% uname ld-2.28.so [.] _dl_cache_libcmp
0.00% 0.88% 0.00% 0.88% uname ld-2.28.so [.] _dl_new_object
0.00% 50.88% 0.00% 0.88% uname ld-2.28.so [.] _dl_relocate_object
0.00% 0.62% 0.00% 0.62% uname ld-2.28.so [.] _dl_init_paths
0.00% 0.62% 0.00% 0.62% uname ld-2.28.so [.] _dl_name_match_p
0.00% 0.50% 0.00% 0.50% uname ld-2.28.so [.] get_common_indeces.constprop.1
0.00% 0.50% 0.00% 0.50% uname ld-2.28.so [.] memmove
0.00% 0.50% 0.00% 0.50% uname ld-2.28.so [.] memset
0.00% 0.50% 0.00% 0.50% uname ld-2.28.so [.] open_verify.constprop.11
0.00% 0.38% 0.00% 0.38% uname ld-2.28.so [.] _dl_check_all_versions
0.00% 0.38% 0.00% 0.38% uname ld-2.28.so [.] _dl_find_dso_for_object
0.00% 0.38% 0.00% 0.38% uname ld-2.28.so [.] init_tls
0.00% 0.25% 0.00% 0.25% uname ld-2.28.so [.] __tunable_get_val
0.00% 0.25% 0.00% 0.25% uname ld-2.28.so [.] _dl_add_to_namespace_list
0.00% 0.25% 0.00% 0.25% uname ld-2.28.so [.] _dl_determine_tlsoffset
0.00% 0.25% 0.00% 0.25% uname ld-2.28.so [.] _dl_discover_osversion
0.00% 0.25% 0.00% 0.25% uname ld-2.28.so [.] calloc@plt
0.00% 0.25% 0.00% 0.25% uname ld-2.28.so [.] malloc
0.00% 0.25% 0.00% 0.25% uname ld-2.28.so [.] malloc@plt
0.00% 0.25% 0.00% 0.25% uname libc-2.28.so [.] _nl_load_locale_from_archive
0.00% 0.25% 0.00% 0.25% uname [unknown] [k] 0xffffffffa3a00010
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] __libc_scratch_buffer_set_array_size
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] _dl_allocate_tls_storage
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] _dl_catch_exception
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] _dl_setup_hash
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] _dl_sort_maps
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] _dl_sysdep_read_whole_file
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] access
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] calloc
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] mmap64
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] openaux
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] rtld_lock_default_lock_recursive
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] rtld_lock_default_unlock_recursive
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] strchr
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] strlen
0.00% 0.12% 0.00% 0.12% uname ld-2.28.so [.] 0x0000000000001080
0.00% 0.12% 0.00% 0.12% uname libc-2.28.so [.] __strchrnul_avx2
0.00% 0.12% 0.00% 0.12% uname libc-2.28.so [.] _nl_normalize_codeset
0.00% 0.12% 0.00% 0.12% uname libc-2.28.so [.] malloc
0.00% 0.12% 0.00% 0.12% uname [unknown] [k] 0xffffffffa3a011f0
0.00% 50.00% 0.00% 0.00% uname ld-2.28.so [.] _dl_start_user
0.00% 50.00% 0.00% 0.00% uname [unknown] [.] 0000000000000000
After:
Samples: 800 of event 'branch-misses:u', Event count (approx.): 800
Children Self Command Shared Object Symbol
54.75% 4.75% uname ld-2.28.so [.] dl_main
51.50% 1.50% uname ld-2.28.so [.] _dl_sysdep_start
51.12% 1.12% uname ld-2.28.so [.] _dl_start
50.88% 0.88% uname ld-2.28.so [.] _dl_relocate_object
50.88% 1.12% uname ld-2.28.so [.] do_lookup_x
50.62% 1.00% uname ld-2.28.so [.] _dl_lookup_symbol_x
50.00% 0.00% uname ld-2.28.so [.] _dl_start_user
50.00% 0.00% uname [unknown] [.] 0000000000000000
47.50% 47.50% uname libc-2.28.so [.] _dl_addr
16.38% 16.38% uname ld-2.28.so [.] __GI___tunables_init
3.12% 3.12% uname ld-2.28.so [.] _dl_map_object_from_fd
2.38% 2.38% uname ld-2.28.so [.] strcmp
2.25% 2.25% uname ld-2.28.so [.] _dl_check_map_versions
2.00% 2.00% uname ld-2.28.so [.] _dl_important_hwcaps
2.00% 2.00% uname ld-2.28.so [.] _dl_map_object_deps
1.25% 1.25% uname ld-2.28.so [.] _dl_load_cache_lookup
1.00% 1.00% uname ld-2.28.so [.] _dl_map_object
1.00% 1.00% uname ld-2.28.so [.] _dl_next_ld_env_entry
0.88% 0.88% uname ld-2.28.so [.] _dl_cache_libcmp
0.88% 0.88% uname ld-2.28.so [.] _dl_new_object
0.62% 0.62% uname ld-2.28.so [.] _dl_init_paths
0.62% 0.62% uname ld-2.28.so [.] _dl_name_match_p
0.50% 0.50% uname ld-2.28.so [.] get_common_indeces.constprop.1
0.50% 0.50% uname ld-2.28.so [.] memmove
0.50% 0.50% uname ld-2.28.so [.] memset
0.50% 0.50% uname ld-2.28.so [.] open_verify.constprop.11
0.38% 0.38% uname ld-2.28.so [.] _dl_check_all_versions
0.38% 0.38% uname ld-2.28.so [.] _dl_find_dso_for_object
0.38% 0.38% uname ld-2.28.so [.] init_tls
0.25% 0.25% uname ld-2.28.so [.] __tunable_get_val
0.25% 0.25% uname ld-2.28.so [.] _dl_add_to_namespace_list
0.25% 0.25% uname ld-2.28.so [.] _dl_determine_tlsoffset
0.25% 0.25% uname ld-2.28.so [.] _dl_discover_osversion
0.25% 0.25% uname ld-2.28.so [.] calloc@plt
0.25% 0.25% uname ld-2.28.so [.] malloc
0.25% 0.25% uname ld-2.28.so [.] malloc@plt
0.25% 0.25% uname libc-2.28.so [.] _nl_load_locale_from_archive
0.25% 0.25% uname [unknown] [k] 0xffffffffa3a00010
0.12% 0.12% uname ld-2.28.so [.] __libc_scratch_buffer_set_array_size
0.12% 0.12% uname ld-2.28.so [.] _dl_allocate_tls_storage
0.12% 0.12% uname ld-2.28.so [.] _dl_catch_exception
0.12% 0.12% uname ld-2.28.so [.] _dl_setup_hash
0.12% 0.12% uname ld-2.28.so [.] _dl_sort_maps
0.12% 0.12% uname ld-2.28.so [.] _dl_sysdep_read_whole_file
0.12% 0.12% uname ld-2.28.so [.] access
0.12% 0.12% uname ld-2.28.so [.] calloc
0.12% 0.12% uname ld-2.28.so [.] mmap64
0.12% 0.12% uname ld-2.28.so [.] openaux
0.12% 0.12% uname ld-2.28.so [.] rtld_lock_default_lock_recursive
0.12% 0.12% uname ld-2.28.so [.] rtld_lock_default_unlock_recursive
0.12% 0.12% uname ld-2.28.so [.] strchr
0.12% 0.12% uname ld-2.28.so [.] strlen
0.12% 0.12% uname ld-2.28.so [.] 0x0000000000001080
0.12% 0.12% uname libc-2.28.so [.] __strchrnul_avx2
0.12% 0.12% uname libc-2.28.so [.] _nl_normalize_codeset
0.12% 0.12% uname libc-2.28.so [.] malloc
0.12% 0.12% uname [unknown] [k] 0xffffffffa3a011f0
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Implement ->evsel_is_auxtrace() callback.
Signed-off-by: Adrian Hunter <[email protected]>
Acked-by: Thomas Richter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Implement ->evsel_is_auxtrace() callback.
Signed-off-by: Adrian Hunter <[email protected]>
Reviewed-by: Mathieu Poirier <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Implement ->evsel_is_auxtrace() callback.
Signed-off-by: Adrian Hunter <[email protected]>
Reviewed-by: Leo Yan <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kim Phillips <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Implement ->evsel_is_auxtrace() callback.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Implement ->evsel_is_auxtrace() callback.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Add an ->evsel_is_auxtrace() callback to identify whether a selected event
is an AUX area event.
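A hedged, self-contained sketch of the shape such a callback can take (the
stand-in types and the PMU-type comparison are assumptions, not the perf
declarations): the decoder reports whether a given event is its AUX area
event, for example by comparing the event's attr.type with the decoder's
PMU type.

#include <stdbool.h>
#include <stdint.h>

struct evsel_stub   { uint32_t attr_type; };     /* attr.type of the event  */
struct session_stub { uint32_t aux_pmu_type; };  /* PMU type of the decoder */

struct auxtrace_sketch {
        /* New callback: is this evsel the decoder's AUX area event? */
        bool (*evsel_is_auxtrace)(struct session_stub *session,
                                  struct evsel_stub *evsel);
};

/* Example implementation in the style of a PT-like decoder. */
static bool pt_like_evsel_is_auxtrace(struct session_stub *session,
                                      struct evsel_stub *evsel)
{
        return evsel->attr_type == session->aux_pmu_type;
}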
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kim Phillips <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Thomas Richter <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
This patch refactors the metricgroup__add_metric() function, moving part
of it into a new function, metricgroup__add_metric_param(). No logic
change.
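A toy sketch of the shape of such a split (the parameter lists below are
assumptions, not the perf prototypes): the table lookup stays in the
original function and the per-metric work moves into the new helper.

#include <string.h>

struct metric_sketch { const char *name; const char *expr; };

/* New helper: handle one already-looked-up metric. */
static int add_metric_param_sketch(const struct metric_sketch *m)
{
        /* previously inline in the caller: build the event group for m->expr */
        (void)m;
        return 0;
}

/* Original entry point keeps the lookup and delegates per-metric work. */
static int add_metric_sketch(const struct metric_sketch *table, int nr,
                             const char *name)
{
        for (int i = 0; i < nr; i++) {
                if (!strcmp(table[i].name, name))
                        return add_metric_param_sketch(&table[i]);
        }
        return -1;
}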
Signed-off-by: Kajol Jain <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Anju T Sudhakar <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Jin Yao <[email protected]>
Cc: Joe Mario <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Madhavan Srinivasan <[email protected]>
Cc: Mamatha Inamdar <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Sukadev Bhattiprolu <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|