path: root/tools
Age | Commit message | Author | Files | Lines
2019-04-03 | objtool: Fix sibling call detection | Peter Zijlstra | 1 | -31/+55

It turned out that we failed to detect some sibling calls, specifically those without relocation records, like:

  $ ./objdump-func.sh defconfig-build/mm/kasan/generic.o __asan_loadN
  0000 0000000000000840 <__asan_loadN>:
  0000    840:  48 8b 0c 24           mov    (%rsp),%rcx
  0004    844:  31 d2                 xor    %edx,%edx
  0006    846:  e9 45 fe ff ff        jmpq   690 <check_memory_region>

So extend the cross-function jump detection to also treat jumps that are not between known (or newly detected) parent/child functions as sibling calls, provided they jump to the start of the target function. That second part of the condition deals with random jumps into the middle of other functions, as can be found in arch/x86/lib/copy_user_64.S for example.

This then (with later patches applied) makes the above be recognised as a sibling call:

  mm/kasan/generic.o: warning: objtool: __asan_loadN()+0x6: call to check_memory_region() with UACCESS enabled

Also make sure to set insn->call_dest for sibling calls so we can know who we're calling. This is useful information when printing validation warnings later.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
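To illustrate the rule this commit adds, here is a minimal stand-alone sketch; the struct and helper names below are illustrative stand-ins, not objtool's actual types:

  #include <stdbool.h>

  /* Minimal stand-in types; the real objtool structures are richer. */
  struct func { unsigned long start; };                    /* first byte of the function */
  struct insn { struct func *func; unsigned long dest; };  /* jump and its target address */

  /*
   * A jump that leaves the current function is treated as a sibling
   * (tail) call when it lands exactly on the entry point of the target
   * function, even when there is no relocation record tying the two
   * functions together as parent/child.
   */
  static bool jump_is_sibling_call(const struct insn *jump, const struct func *target)
  {
          if (jump->func == target)
                  return false;                 /* stays inside the same function */

          return jump->dest == target->start;   /* tail call to the target's entry */
  }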
2019-04-03 | objtool: Rewrite alt->skip_orig | Peter Zijlstra | 1 | -6/+10
Really skip the original instruction flow, instead of letting it continue with NOPs. Since the alternative code flow already continues after the original instructions, only the alt-original is skipped. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-04-03 | objtool: Add --backtrace support | Peter Zijlstra | 4 | -6/+25

For when you want to know the path that reached your fail state:

  $ ./objtool check --no-fp --backtrace arch/x86/lib/usercopy_64.o
  arch/x86/lib/usercopy_64.o: warning: objtool: .altinstr_replacement+0x3: UACCESS disable without MEMOPs: __clear_user()
  arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x3a: (alt)
  arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x2e: (branch)
  arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x18: (branch)
  arch/x86/lib/usercopy_64.o: warning: objtool: .altinstr_replacement+0xffffffffffffffff: (branch)
  arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x5: (alt)
  arch/x86/lib/usercopy_64.o: warning: objtool: __clear_user()+0x0: <=== (func)

  0000000000000000 <__clear_user>:
     0: e8 00 00 00 00        callq  5 <__clear_user+0x5>
                        1: R_X86_64_PLT32 __fentry__-0x4
     5: 90                    nop
     6: 90                    nop
     7: 90                    nop
     8: 48 89 f0              mov    %rsi,%rax
     b: 48 c1 ee 03           shr    $0x3,%rsi
     f: 83 e0 07              and    $0x7,%eax
    12: 48 89 f1              mov    %rsi,%rcx
    15: 48 85 c9              test   %rcx,%rcx
    18: 74 0f                 je     29 <__clear_user+0x29>
    1a: 48 c7 07 00 00 00 00  movq   $0x0,(%rdi)
    21: 48 83 c7 08           add    $0x8,%rdi
    25: ff c9                 dec    %ecx
    27: 75 f1                 jne    1a <__clear_user+0x1a>
    29: 48 89 c1              mov    %rax,%rcx
    2c: 85 c9                 test   %ecx,%ecx
    2e: 74 0a                 je     3a <__clear_user+0x3a>
    30: c6 07 00              movb   $0x0,(%rdi)
    33: 48 ff c7              inc    %rdi
    36: ff c9                 dec    %ecx
    38: 75 f6                 jne    30 <__clear_user+0x30>
    3a: 90                    nop
    3b: 90                    nop
    3c: 90                    nop
    3d: 48 89 c8              mov    %rcx,%rax
    40: c3                    retq

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2019-04-03 | objtool: Rewrite add_ignores() | Peter Zijlstra | 2 | -32/+20
The whole add_ignores() thing was wildly weird; rewrite it according to 'modern' ways. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-04-03 | objtool: Handle function aliases | Peter Zijlstra | 2 | -5/+12
Function aliases result in different symbols for the same set of instructions; track a canonical symbol so there is a unique point of access. This again prepares the way for function attributes. And in particular the need for aliases comes from how KASAN uses them. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-04-03 | objtool: Set insn->func for alternatives | Peter Zijlstra | 1 | -0/+1
In preparation of function attributes, we need each instruction to have a valid link back to its function. Therefore make sure we set the function association for alternative instruction sequences; they are, after all, still part of the function. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2019-04-03 | x86/nospec, objtool: Introduce ANNOTATE_IGNORE_ALTERNATIVE | Peter Zijlstra | 1 | -4/+4

To facilitate other uses of ignoring alternatives, rename ANNOTATE_NOSPEC_IGNORE to ANNOTATE_IGNORE_ALTERNATIVE.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2019-04-02 | selftests: bpf: remove duplicate .flags initialization in ctx_skb.c | Stanislav Fomichev | 1 | -1/+0

  verifier/ctx_skb.c:708:11: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
          .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Signed-off-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
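For context, this is the warning class being silenced; the struct below is a made-up illustration, not the actual test definition:

  /* With designated initializers, a repeated designator silently overrides
   * the earlier one; clang reports this as -Winitializer-overrides
   * (GCC calls it -Woverride-init). */
  struct test_spec { int flags; int retval; };

  static struct test_spec spec = {
          .flags  = 1,
          .retval = 0,
          .flags  = 2,    /* overrides .flags = 1 above and triggers the warning */
  };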
2019-04-02 | selftests: bpf: fix -Wformat-invalid-specifier for bpf_obj_id.c | Stanislav Fomichev | 1 | -4/+4

Use standard C99 %zu for sizeof, not GCC's custom %Zu:

  bpf_obj_id.c:76:48: warning: invalid conversion specifier 'Z'

Signed-off-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
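A minimal illustration of the portable conversion (not taken from the test itself):

  #include <stdio.h>

  int main(void)
  {
          size_t len = sizeof(long long);

          /* %Zu is a GNU-only spelling; %zu is the C99 conversion for
           * size_t and is accepted by both GCC and clang. */
          printf("size: %zu bytes\n", len);
          return 0;
  }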
2019-04-02 | selftests: bpf: fix -Wformat-security warning for flow_dissector_load.c | Stanislav Fomichev | 1 | -1/+1

  flow_dissector_load.c:55:19: warning: format string is not a string literal (potentially insecure) [-Wformat-security]
          error(1, errno, command);
                          ^~~~~~~
  flow_dissector_load.c:55:19: note: treat the string as an argument to avoid this
          error(1, errno, command);
                          ^
                          "%s",
  1 warning generated.

Signed-off-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
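The shape of the fix, shown on a stand-alone helper; the function name is illustrative, only the error() call mirrors the one in flow_dissector_load.c:

  #include <errno.h>
  #include <error.h>   /* glibc error() helper already used by the test */

  static void fail_with(const char *command)
  {
          /* Passing 'command' directly as the format string is what
           * -Wformat-security flags; an explicit "%s" format is safe
           * even if the string contains '%' characters. */
          error(1, errno, "%s", command);
  }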
2019-04-02 | selftests: bpf: tests.h should depend on .c files, not the output | Stanislav Fomichev | 1 | -2/+2

This makes sure we don't put headers as input files when doing compilation, because clang complains about the following:

  clang-9: error: cannot specify -o when generating multiple output files
  ../lib.mk:152: recipe for target 'xxx/tools/testing/selftests/bpf/test_verifier' failed
  make: *** [xxx/tools/testing/selftests/bpf/test_verifier] Error 1
  make: *** Waiting for unfinished jobs....
  clang-9: error: cannot specify -o when generating multiple output files
  ../lib.mk:152: recipe for target 'xxx/tools/testing/selftests/bpf/test_progs' failed

Signed-off-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
2019-04-01 | perf vendor events intel: Update Silvermont to v14 | Andi Kleen | 3 | -5/+22
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update GoldmontPlus to v1.01 | Andi Kleen | 3 | -37/+51
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update Goldmont to v13 | Andi Kleen | 4 | -1127/+127
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update Bonnell to V4 | Andi Kleen | 2 | -2/+2
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update KnightsLanding events to v9 | Andi Kleen | 4 | -477/+474
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update Haswell events to v28 | Andi Kleen | 4 | -200/+213
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update IvyBridge events to v21 | Andi Kleen | 2 | -9/+5
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update SandyBridge events to v16 | Andi Kleen | 7 | -1295/+1301
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update JakeTown events to v20 | Andi Kleen | 2 | -11/+7
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update IvyTown events to v20 | Andi Kleen | 1 | -4/+0
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update HaswellX events to v20 | Andi Kleen | 3 | -178/+177
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update BroadwellX events to v14 | Andi Kleen | 4 | -189/+186
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update SkylakeX events to v1.12 | Andi Kleen | 5 | -1047/+899
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update Skylake events to v42 | Andi Kleen | 4 | -168/+3163
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update Broadwell-DE events to v7 | Andi Kleen | 2 | -7/+3
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update Broadwell events to v23 | Andi Kleen | 5 | -1676/+1685
Signed-off-by: Andi Kleen <[email protected]> Cc: Kan Liang <[email protected]> Cc: Jiri Olsa <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf vendor events intel: Update metrics from TMAM 3.5 | Andi Kleen | 11 | -385/+2321

Update all the Intel JSON metrics from Ahmad Yasin's TMAM 3.5 for Intel big core from Sandy Bridge to Cascade Lake. This has many improvements and new metrics:

 - New TopDownL1_SMT group that provides a per SMT thread version of --topdown that does not require -a anymore. The drawback is increased multiplexing, though, since L1 TopDown does not fit into 4 generic counters anymore.
 - Added SMT aware versions of other metrics.
 - Split SMT aware metrics into separate metrics to avoid unnecessary event collections.
 - New metrics for better branch analysis: estimated Branch_Mispredict_Costs, Instructions per taken Branch, Branch Instructions per Taken Branch, etc.
 - Instruction mix metrics: Instructions per load, Instructions per store, Instructions per Branch, Instructions per Call.
 - New cache metrics: bandwidth to L1/L2/L3 caches, L1/L2/L3 misses per kilo instructions, memory level parallelism.
 - New memory controller metrics: normalized memory bandwidth in interval mode, average memory latency, average number of parallel read requests.
 - 3DXP persistent memory metrics for Cascade Lake: 3DXP read latency, 3DXP read/write bandwidth.
 - Some other useful metrics like Instruction Level Parallelism.
 - Various other improvements.

Not all metrics are available on all CPUs. Skylake has the best coverage.

Signed-off-by: Andi Kleen <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf record: Implement --mmap-flush=<number> option | Alexey Budankov | 7 | -12/+89

Implement a --mmap-flush option that specifies the minimal number of bytes that is extracted from the mmaped kernel buffer to store into a trace. The default option value is 1 byte, which means that every time the trace writing thread finds some new data in the mmaped buffer the data is extracted, possibly compressed and written to a trace.

  $ tools/perf/perf record --mmap-flush 1024 -e cycles -- matrix.gcc
  $ tools/perf/perf record --aio --mmap-flush 1K -e cycles -- matrix.gcc

The option is independent from the -z setting, doesn't vary with compression level and can serve two purposes.

The first purpose is to increase the compression ratio of the trace data. Larger data chunks are compressed more effectively, so the implemented option allows specifying the data chunk size to compress. Also, in some cases executing more write syscalls with smaller data size can take longer than executing fewer write syscalls with bigger data size due to syscall overhead, so extracting bigger data chunks specified by the option value could additionally decrease runtime overhead.

The second purpose is to avoid the self-monitoring live-lock issue in system wide (-a) profiling mode. Profiling in system wide mode with compression (-a -z) can additionally induce data into the kernel buffers along with the data from monitored processes. If the performance data rate and volume from the monitored processes is high, then trace streaming and compression activity in the tool is also high. High tool process activity can lead to a subtle live-lock effect when compression of a single new byte from some mmaped kernel buffer leads to generation of the next single byte at some mmaped buffer. So the perf tool process ends up in endless self monitoring.

The implemented synch parameter is the means to force data to move independently from the specified flush threshold value. Despite the provided flush value, the tool needs the capability to unconditionally drain memory buffers, at least at the end of the collection.

Committer testing:

Running with the default value, i.e. as soon as there is something to read go on consuming, we first write the synthesized events, small chunks of about 128 bytes:

  # perf trace -m 2048 --call-graph dwarf -e write -- perf record
  <SNIP>
     101.142 ( 0.004 ms): perf/25821 write(fd: 3</root/perf.data>, buf: 0x210db60, count: 120) = 120
                            __libc_write (/usr/lib64/libpthread-2.28.so)
                            ion (/home/acme/bin/perf)
                            record__write (inlined)
                            process_synthesized_event (/home/acme/bin/perf)
                            perf_tool__process_synth_event (inlined)
                            perf_event__synthesize_mmap_events (/home/acme/bin/perf)

Then we move to reading the mmap buffers, consuming the events put there by the kernel perf infrastructure:

     107.561 ( 0.005 ms): perf/25821 write(fd: 3</root/perf.data>, buf: 0x7f1befc02000, count: 336) = 336
                            __libc_write (/usr/lib64/libpthread-2.28.so)
                            ion (/home/acme/bin/perf)
                            record__write (inlined)
                            record__pushfn (/home/acme/bin/perf)
                            perf_mmap__push (/home/acme/bin/perf)
                            record__mmap_read_evlist (inlined)
                            record__mmap_read_all (inlined)
                            __cmd_record (inlined)
                            cmd_record (/home/acme/bin/perf)
   12919.953 ( 0.136 ms): perf/25821 write(fd: 3</root/perf.data>, buf: 0x7f1befc83150, count: 184984) = 184984
  <SNIP same backtrace as in the 107.561 timestamp>
   12920.094 ( 0.155 ms): perf/25821 write(fd: 3</root/perf.data>, buf: 0x7f1befc02150, count: 261816) = 261816
  <SNIP same backtrace as in the 107.561 timestamp>
   12920.253 ( 0.093 ms): perf/25821 write(fd: 3</root/perf.data>, buf: 0x7f1befb81120, count: 170832) = 170832
  <SNIP same backtrace as in the 107.561 timestamp>

If we limit it to write only when more than 16MB are available for reading, it throttles that to a quarter of the --mmap-pages set for 'perf record', which by default gets to 528384 bytes, found out using 'record -v':

  mmap flush: 132096
  mmap size 528384B

With that in place all the writes coming from record__mmap_read_evlist(), i.e. from the mmap buffers set up by the kernel perf infrastructure, were at least 132096 bytes long.

Trying with a bigger mmap size:

  perf trace -e write perf record -v -m 2048 --mmap-flush 16M
   74982.928 ( 2.471 ms): perf/26500 write(fd: 3</root/perf.data>, buf: 0x7ff94a6cc000, count: 3580888) = 3580888
   74985.406 ( 2.353 ms): perf/26500 write(fd: 3</root/perf.data>, buf: 0x7ff949ecb000, count: 3453256) = 3453256
   74987.764 ( 2.629 ms): perf/26500 write(fd: 3</root/perf.data>, buf: 0x7ff9496ca000, count: 3859232) = 3859232
   74990.399 ( 2.341 ms): perf/26500 write(fd: 3</root/perf.data>, buf: 0x7ff948ec9000, count: 3769032) = 3769032
   74992.744 ( 2.064 ms): perf/26500 write(fd: 3</root/perf.data>, buf: 0x7ff9486c8000, count: 3310520) = 3310520
   74994.814 ( 2.619 ms): perf/26500 write(fd: 3</root/perf.data>, buf: 0x7ff947ec7000, count: 4194688) = 4194688
   74997.439 ( 2.787 ms): perf/26500 write(fd: 3</root/perf.data>, buf: 0x7ff9476c6000, count: 4029760) = 4029760

Was again limited to a quarter of the mmap size:

  mmap flush: 2098176
  mmap size 8392704B

A warning about that would be good to have but can be added later, something like: "max flush is a quarter of the mmap size, if wanting to bump the mmap flush further, bump the mmap size as well using -m/--mmap-pages".

Also rename the 'sync' parameters to 'synch' to keep tools/perf building with older glibcs:

  cc1: warnings being treated as errors
  builtin-record.c: In function 'record__mmap_read_evlist':
  builtin-record.c:775: warning: declaration of 'sync' shadows a global declaration
  /usr/include/unistd.h:933: warning: shadowed declaration is here
  builtin-record.c: In function 'record__mmap_read_all':
  builtin-record.c:856: warning: declaration of 'sync' shadows a global declaration
  /usr/include/unistd.h:933: warning: shadowed declaration is here

Signed-off-by: Alexey Budankov <[email protected]>
Reviewed-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools build: Implement libzstd feature check, LIBZSTD_DIR and NO_LIBZSTD defines | Alexey Budankov | 7 | -1/+49

Implement a libzstd feature check and NO_LIBZSTD and LIBZSTD_DIR defines to override the Zstd library sources or disable the feature from the command line:

  $ make -C tools/perf LIBZSTD_DIR=/path/to/zstd/sources/ clean all
  $ make -C tools/perf NO_LIBZSTD=1 clean all

The auto-detected feature status is reported just before compilation starts. If your system has some version of the zstd library preinstalled, the build system finds and uses it during the build. If you still prefer to compile with some other version of the zstd library, you can point the compilation at that version using the LIBZSTD_DIR define.

Signed-off-by: Alexey Budankov <[email protected]>
Reviewed-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
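Feature checks in tools/build are typically a tiny C file that only needs to compile and link against the library; a sketch of what such a probe could look like (not necessarily the exact test file added by this patch):

  #include <zstd.h>

  int main(void)
  {
          /* Linking successfully against libzstd is the whole test;
           * creating and freeing a compression context is enough. */
          ZSTD_CCtx *ctx = ZSTD_createCCtx();

          ZSTD_freeCCtx(ctx);
          return 0;
  }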
2019-04-01 | tools lib traceevent: Rename input arguments and local variables of libtraceevent from pevent to tep | Tzvetomir Stoyanov | 4 | -180/+180

"pevent" to "tep" renaming of:
 - all "pevent" input arguments of libtraceevent internal functions.
 - all local "pevent" variables of libtraceevent.

This makes the implementation consistent with the chosen naming convention, tep (trace event parser), and will avoid any confusion with the old pevent name.

Signed-off-by: Tzvetomir Stoyanov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: http://lore.kernel.org/linux-trace-devel/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf tools, tools lib traceevent: Rename "pevent" member of struct tep_event_filter to "tep" | Tzvetomir Stoyanov | 2 | -8/+8

The member "pevent" of the struct tep_event_filter is renamed to "tep". This makes the struct consistent with the chosen naming convention: tep (trace event parser), instead of the old pevent.

Signed-off-by: Tzvetomir Stoyanov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: http://lore.kernel.org/linux-trace-devel/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf tools, tools lib traceevent: Rename "pevent" member of struct tep_event to "tep" | Tzvetomir Stoyanov | 13 | -33/+33

The member "pevent" of the struct tep_event is renamed to "tep". This makes the struct consistent with the chosen naming convention: tep (trace event parser), instead of the old pevent.

Signed-off-by: Tzvetomir Stoyanov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: http://lore.kernel.org/linux-trace-devel/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools lib traceevent: Rename input arguments of libtraceevent APIs from pevent to tep | Tzvetomir Stoyanov | 16 | -442/+439

Input arguments of libtraceevent APIs are renamed from "struct tep_handle *pevent" to "struct tep_handle *tep". This makes the API consistent with the chosen naming convention: tep (trace event parser), instead of the old pevent.

Signed-off-by: Tzvetomir Stoyanov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: http://lore.kernel.org/linux-trace-devel/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools tools, tools lib traceevent: Make traceevent APIs more consistent | Tzvetomir Stoyanov | 6 | -46/+46

Rename some traceevent APIs for consistency:

  tep_pid_is_registered()   to  tep_is_pid_registered()
  tep_file_bigendian()      to  tep_is_file_bigendian()
    to make the names and return values consistent with other tep_is_... APIs

  tep_data_lat_fmt()        to  tep_data_latency_format()
    to make the name more descriptive

  tep_host_bigendian()      to  tep_is_bigendian()
  tep_set_host_bigendian()  to  tep_set_local_bigendian()
  tep_is_host_bigendian()   to  tep_is_local_bigendian()
    "host" can be confused with VMs, and "local" is about the local machine.

All tep_is_..._bigendian(struct tep_handle *tep) APIs return the saved data in the tep handle, while tep_is_bigendian() returns the running machine's endianness. All tep_is_... functions are modified to return bool value, instead of int.

Signed-off-by: Tzvetomir Stoyanov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
[ Removed some extra parenthesis around return statements ]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools lib traceevent: Remove call to exit() from tep_filter_add_filter_str() | Tzvetomir Stoyanov | 1 | -3/+0
This patch removes call to exit() from tep_filter_add_filter_str(). A library function should not force the application to exit. In the current implementation tep_filter_add_filter_str() calls exit() when a special "test_filters" mode is set, used only for debugging purposes. When this mode is set and a filter is added - its string is printed to the console and exit() is called. This patch changes the logic - when in "test_filters" mode, the filter string is still printed, but the exit() is not called. It is up to the application to track when "test_filters" mode is set and to call exit, if needed. Signed-off-by: Tzvetomir Stoyanov <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools lib traceevent: Remove tep filter trivial APIs | Tzvetomir Stoyanov | 2 | -185/+0

This patch removes the trivial filter tep APIs:

  enum tep_filter_trivial_type
  tep_filter_event_has_trivial()
  tep_update_trivial()
  tep_filter_clear_trivial()

Trivial filters are an optimization used only in the first version of KernelShark. The API is deprecated; the next KernelShark release does not use it.

Signed-off-by: Tzvetomir Stoyanov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools lib traceevent: Removed unneeded !! and return parenthesis | Steven Rostedt (VMware) | 1 | -2/+2
As return is not a function we do not need parenthesis around the return value. Also, a function returning bool does not need to add !!. Signed-off-by: Steven Rostedt (VMware) <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Tzvetomir Stoyanov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools lib traceevent: Implement new traceevent APIs for accessing struct tep_handler fields | Tzvetomir Stoyanov | 2 | -4/+108

As the struct tep_handler definition is not exposed as part of the libtraceevent API, its fields cannot be accessed directly by the library users. This patch implements new APIs which can be used to access the struct tep_handler fields:

  tep_get_event()                  - retrieves an event pointer at a specific index
  tep_get_first_event()            - is modified to use tep_get_event()
  tep_clear_flag()                 - clears a tep handle flag
  tep_test_flag()                  - tests if a given flag is set
  tep_get_header_timestamp_size()  - returns the size of the timestamp stored in the header
  tep_get_cpus()                   - returns the number of CPUs
  tep_is_old_format()              - returns true if data was created by an older kernel with the old data format
  tep_set_print_raw()              - have the output print in the raw format
  tep_set_test_filters()           - debugging utility for testing tep filters

Signed-off-by: Tzvetomir Stoyanov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: http://lore.kernel.org/linux-trace-devel/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
[ Renamed some newly added "pevent" to "tep" ]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
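A hedged usage sketch of a couple of the getters above, assuming the signatures implied by the descriptions; verify against the installed event-parse.h before relying on them:

  #include <stdio.h>
  #include <event-parse.h>   /* libtraceevent */

  static void dump_handle_info(struct tep_handle *tep)
  {
          /* Assumed: tep_get_cpus() returns an int count and
           * tep_get_first_event() returns NULL when no events are loaded. */
          struct tep_event *first = tep_get_first_event(tep);

          printf("cpus: %d\n", tep_get_cpus(tep));
          if (first)
                  printf("first event: %s:%s\n", first->system, first->name);
  }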
2019-04-01 | tools lib traceevent: Coding style fixes | Tzvetomir Stoyanov | 1 | -15/+15
Fixed few coding style problems in event-parse-api.c Signed-off-by: Tzvetomir Stoyanov <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Link: http://lore.kernel.org/linux-trace-devel/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools lib traceevent: Change description of few APIs | Tzvetomir Stoyanov | 2 | -12/+14

API descriptions should describe the purpose of the function, its parameters and return value. While working on the man pages implementation, I noticed mismatches in the descriptions of a few APIs. This patch changes the description of these APIs, making them consistent with the man pages:

  - tep_print_num_field()
  - tep_print_func_field()
  - tep_get_header_page_size()
  - tep_get_long_size()
  - tep_set_long_size()
  - tep_get_page_size()
  - tep_set_page_size()

Signed-off-by: Tzvetomir Stoyanov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: http://lore.kernel.org/linux-trace-devel/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools lib traceevent: Add more debugging to see various internal ring buffer entries | Steven Rostedt (Red Hat) | 2 | -0/+62

When trace-cmd report --debug is set, show the internal ring buffer entries like time-extends and padding. This requires adding a new kbuffer API to retrieve these items.

Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Tzvetomir Stoyanov <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | tools lib traceevent: Implement a new API, tep_list_events_copy() | Tzvetomir Stoyanov | 2 | -23/+91
Existing API tep_list_events() is not thread safe, it uses the internal array sort_events to keep cache of the sorted events and reuses it. This patch implements a new API, tep_list_events_copy(), which allocates new sorted array each time it is called. It could be used when a sorted events functionality is needed in thread safe use cases. It is up to the caller to free the array. Signed-off-by: Tzvetomir Stoyanov <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Link: http://lore.kernel.org/linux-trace-devel/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
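A usage sketch under stated assumptions: the signature mirrors tep_list_events() (handle plus a tep_event_sort_type such as TEP_EVENT_SORT_NAME), the returned array is NULL-terminated, and only the outer array is freed by the caller, as described above.

  #include <stdio.h>
  #include <stdlib.h>
  #include <event-parse.h>   /* libtraceevent */

  static void print_events_sorted_by_name(struct tep_handle *tep)
  {
          struct tep_event **events;
          int i;

          /* Thread-safe variant: allocates a fresh sorted array each call. */
          events = tep_list_events_copy(tep, TEP_EVENT_SORT_NAME);
          if (!events)
                  return;

          for (i = 0; events[i]; i++)
                  printf("%s:%s\n", events[i]->system, events[i]->name);

          /* Unlike with tep_list_events(), the copy is owned by the caller. */
          free(events);
  }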
2019-04-01 | tools lib traceevent: Add mono clocks to be parsed in seconds | Steven Rostedt (VMware) | 1 | -1/+2

The mono clocks can display in seconds instead of whole numbers:

  trace-cmd-521   [001] 99176715281005: sched_waking: comm=kworker/u16:2 pid=32118 prio=120 target_cpu=002
  trace-cmd-521   [001] 99176715286349: sched_wake_idle_without_ipi: cpu=2
  trace-cmd-521   [001] 99176715288047: sched_wakeup: kworker/u16:2:32118 [120] success=1 CPU:002
  trace-cmd-521   [001] 99176715290022: sched_waking: comm=trace-cmd pid=523 prio=120 target_cpu=000
  trace-cmd-521   [001] 99176715292332: sched_wake_idle_without_ipi: cpu=0
  trace-cmd-521   [001] 99176715292855: sched_wakeup: trace-cmd:523 [120] success=1 CPU:000
  trace-cmd-521   [001] 99176715300697: sched_stat_runtime: comm=trace-cmd pid=521 runtime=80233 [ns] vruntime=66705540554 [ns]

Break it up in seconds:

  trace-cmd-521   [001] 99176.715281: sched_waking: comm=kworker/u16:2 pid=32118 prio=120 target_cpu=002
  trace-cmd-521   [001] 99176.715286: sched_wake_idle_without_ipi: cpu=2
  trace-cmd-521   [001] 99176.715288: sched_wakeup: kworker/u16:2:32118 [120] success=1 CPU:002
  trace-cmd-521   [001] 99176.715290: sched_waking: comm=trace-cmd pid=523 prio=120 target_cpu=000
  trace-cmd-521   [001] 99176.715292: sched_wake_idle_without_ipi: cpu=0
  trace-cmd-521   [001] 99176.715293: sched_wakeup: trace-cmd:523 [120] success=1 CPU:000
  trace-cmd-521   [001] 99176.715301: sched_stat_runtime: comm=trace-cmd pid=521 runtime=80233 [ns] vruntime=66705540554 [ns]

Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Tzvetomir Stoyanov <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
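The conversion itself is just a fixed-point split of the nanosecond counter; a stand-alone illustration using the first timestamp from the example above:

  #include <stdio.h>

  int main(void)
  {
          unsigned long long ts_ns = 99176715281005ULL;   /* raw mono timestamp, in ns */

          /* 99176715281005 ns -> "99176.715281" (microsecond resolution) */
          printf("%llu.%06llu\n", ts_ns / 1000000000ULL,
                 (ts_ns % 1000000000ULL) / 1000ULL);
          return 0;
  }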
2019-04-01 | tools lib traceevent: Handle trace_printk() "%px" | Steven Rostedt (VMware) | 1 | -0/+1
With security updates, %p in the kernel is hashed to protect true kernel locations. But trace_printk() is not allowed in production systems, and when a pointer is used, there are many times that the actual address is needed. "%px" produces the real address. But libtraceevent does not know how to handle that extension. Add it. Signed-off-by: Steven Rostedt (VMware) <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Tzvetomir Stoyanov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
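A kernel-side illustration of why the specifier shows up in trace data (not part of this patch, which only teaches the parser about it):

  #include <linux/kernel.h>

  static void debug_object(void *obj)
  {
          /* "%p" is hashed on modern kernels; "%px" emits the raw pointer,
           * which is the form libtraceevent now knows how to parse. */
          trace_printk("object at %px (hashed form: %p)\n", obj, obj);
  }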
2019-04-01 | perf list: Output tool events | Andi Kleen | 3 | -2/+25

Add support in 'perf list' to output tool internal events, currently only 'duration_time'.

Committer testing:

  $ perf list dur*

  List of pre-defined events (to be used in -e):

    duration_time                                      [Tool event]

  Metric Groups:

  $ perf list sw

  List of pre-defined events (to be used in -e):

    alignment-faults                                   [Software event]
    bpf-output                                         [Software event]
    context-switches OR cs                             [Software event]
    cpu-clock                                          [Software event]
    cpu-migrations OR migrations                       [Software event]
    dummy                                              [Software event]
    emulation-faults                                   [Software event]
    major-faults                                       [Software event]
    minor-faults                                       [Software event]
    page-faults OR faults                              [Software event]
    task-clock                                         [Software event]

    duration_time                                      [Tool event]

  $ perf list | grep duration
    duration_time                                      [Tool event]
         [L1D miss outstandings duration in cycles]
          page walk duration are excluded in Skylake]
          load. EPT page walk duration are excluded in Skylake]
          page walk duration are excluded in Skylake]
          store. EPT page walk duration are excluded in Skylake]
          (instruction fetch) request. EPT page walk duration are excluded in
          instruction fetch request. EPT page walk duration are excluded in
  $

Signed-off-by: Andi Kleen <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf evsel: Support printing evsel name for 'duration_time' | Andi Kleen | 1 | -1/+10
Implement printing the correct name for duration_time Signed-off-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf stat: Implement duration_time as a proper event | Andi Kleen | 6 | -13/+86

The perf metric expressions use 'duration_time' internally to normalize events. Normal 'perf stat' without -x also prints the duration time. But when using -x, the interval is not output anywhere, which is inconvenient for any post processing which often wants to normalize values to time.

So implement 'duration_time' as a proper perf event that can be specified explicitly with -e. The previous implementation of 'duration_time' only worked for metric processing.

This adds the concept of a tool event that is handled by the tool. On the kernel level it is still mapped to the dummy software event, but the values are not read anymore, but instead computed by the tool.

Add proper plumbing to handle this in the event parser, and display it in 'perf stat'. We don't want 'duration_time' to be added up, so it's only printed for the first CPU.

  % perf stat -e duration_time,cycles true

   Performance counter stats for 'true':

             555,476 ns   duration_time
             771,958      cycles

         0.000555476 seconds time elapsed

         0.000644000 seconds user
         0.000000000 seconds sys

Signed-off-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf stat: Revert checks for duration_time | Andi Kleen | 1 | -18/+0
This reverts e864c5ca145e ("perf stat: Hide internal duration_time counter") but doing it manually since the code has now moved to a different file. The next patch will properly implement duration_time as a full event, so no need to hide it anymore. Signed-off-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-04-01 | perf list: Fix s390 counter long description for L1D_RO_EXCL_WRITES | Thomas Richter | 1 | -1/+1

Command

  # perf list --long-desc pmu

lists the long description of the available counters. For counter L1D_RO_EXCL_WRITES on machine types 3906 and 3907 the long description contains the counter number 'Counter:128 Name:' prefix. This is wrong. The fix changes the description text and removes this prefix.

Output before:

  [root@m35lp76 perf]# ./perf list --long-desc pmu
  ...
  L1D_ONDRAWER_L4_SOURCED_WRITES
       [A directory write to the Level-1 Data cache directory where the returned cache line was sourced from On-Drawer Level-4 cache]
  L1D_RO_EXCL_WRITES
       [Counter:128 Name:L1D_RO_EXCL_WRITES A directory write to the Level-1 Data cache where the line was originally in a Read-Only state in the cache but has been updated to be in the Exclusive state that allows stores to the cache line]
  ...

Output after:

  [root@m35lp76 perf]# ./perf list --long-desc pmu
  ...
  L1D_ONDRAWER_L4_SOURCED_WRITES
       [A directory write to the Level-1 Data cache directory where the returned cache line was sourced from On-Drawer Level-4 cache]
  L1D_RO_EXCL_WRITES
       [L1D_RO_EXCL_WRITES A directory write to the Level-1 Data cache where the line was originally in a Read-Only state in the cache but has been updated to be in the Exclusive state that allows stores to the cache line]
  ...

Signed-off-by: Thomas Richter <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Hendrik Brueckner <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Fixes: 109d59b900e7 ("perf vendor events s390: Add JSON files for IBM z14")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>