path: root/tools
2020-04-22  objtool: Avoid iterating !text section symbols  (Peter Zijlstra; 1 file, -1/+5)
validate_functions() iterates all sections and their symbols; this is pointless for !text sections, as they won't have instructions anyway.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
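A minimal sketch of the resulting guard, assuming objtool's existing helpers; validate_section_symbol() is a made-up stand-in for the per-function validation body:

    for_each_sec(file, sec) {
            /* !SHF_EXECINSTR sections carry no instructions; skip them */
            if (!(sec->sh.sh_flags & SHF_EXECINSTR))
                    continue;

            list_for_each_entry(func, &sec->symbol_list, list)
                    warnings += validate_section_symbol(file, sec, func);
    }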
2020-04-22  objtool: Use sec_offset_hash() for insn_hash  (Peter Zijlstra; 1 file, -2/+3)
In preparation for find_insn_containing(), change insn_hash to use sec_offset_hash(). This actually reduces runtime, probably because mixing in the section index reduces the collisions caused by text sections all starting their instructions at offset 0. Runtime on vmlinux.o drops from 3.1 to 2.5 seconds.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
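A sketch of the keying scheme, assuming tools' hash_64()/hash_add() helpers; the exact bit-mixing in the real sec_offset_hash() may differ:

    static inline u32 sec_offset_hash(struct section *sec, unsigned long offset)
    {
            /* fold the section index in, so that offset 0 of every text
             * section no longer lands in the same bucket */
            u64 key = ((u64)sec->idx << 32) | (u32)offset;

            return hash_64(key, 32);
    }

    hash_add(file->insn_hash, &insn->hash,
             sec_offset_hash(insn->sec, insn->offset));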
2020-04-22  objtool: Optimize !vmlinux.o again  (Peter Zijlstra; 3 files, -26/+52)
While running kbuild tests to see if the objtool changes affected them, I found a measurable regression:

                  pre            post
    real      1m13.594s      1m16.488s
    user     34m58.246s     35m23.947s
    sys       4m0.393s       4m27.312s

Perf showed that for small files the increased hash-table sizes were a measurable difference. Since we already have -l "vmlinux" to distinguish between the modes, make it also use a smaller portion of the hash-tables. This flips it into a small win:

    real      1m14.143s
    user     34m49.292s
    sys       3m44.746s

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
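A sketch of the mode-dependent sizing (names assumed; 'vmlinux' is the flag set by -l as described above), with the bucket count derived from this helper wherever the hash tables are initialized:

    #define ELF_HASH_BITS   20

    static inline int elf_hash_bits(void)
    {
            /* full-size tables only pay off for vmlinux.o */
            return vmlinux ? ELF_HASH_BITS : 16;
    }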
2020-04-22  objtool: Implement noinstr validation  (Peter Zijlstra; 5 files, -4/+112)
Validate that any call out of .noinstr.text is placed between instr_begin() and instr_end() annotations. This annotation is useful to ensure correct behaviour wrt tracing sensitive code like entry/exit and idle code. When we run code in a sensitive context we want a guarantee that no unknown code is run. Since this validation relies on knowing the section of call destination symbols, we must run it on vmlinux.o instead of on individual object files.

Add two options:

    -d/--duplicate    "duplicate validation for vmlinux"
    -l/--vmlinux      "vmlinux.o validation"

The latter is auto-detected when objname ends with "vmlinux.o"; the former forces all validations, including those already done on !vmlinux object files.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
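A usage sketch of the annotation contract being validated; the function and its body are hypothetical (in later kernels these annotations became instrumentation_begin()/instrumentation_end()):

    noinstr void arch_enter_sensitive(void)     /* hypothetical example */
    {
            /* calls to instrumentable code are forbidden here ... */

            instr_begin();
            do_traceable_work();    /* ... but allowed in between ... */
            instr_end();

            /* ... and forbidden again here */
    }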
2020-04-22  objtool: Fix !CFI insn_state propagation  (Peter Zijlstra; 4 files, -140/+157)
Objtool keeps per-instruction CFI state in struct insn_state and will save/restore this where required. However, insn_state has grown some !CFI state, and this must not be saved/restored (that would lose/destroy state). Fix this by moving the CFI-specific parts of insn_state into struct cfi_state.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
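A condensed sketch of the split; member lists are abridged and partly assumed:

    struct cfi_state {
            struct cfi_reg regs[CFI_NUM_REGS];
            struct cfi_reg cfa;
            int stack_size;
            bool drap;
            /* other CFI-only members elided */
    };

    struct insn_state {
            struct cfi_state cfi;   /* saved/restored as one unit */

            /* !CFI state: must survive a save/restore cycle */
            unsigned int uaccess_stack;
            bool uaccess;
            bool df;
    };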
2020-04-22  objtool: Rename struct cfi_state  (Peter Zijlstra; 4 files, -4/+4)
There's going to be a new struct cfi_state; rename this one to make room.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2020-04-22  objtool: Remove SAVE/RESTORE hints  (Peter Zijlstra; 3 files, -43/+5)
The SAVE/RESTORE hints are now unused; remove them.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2020-04-22  objtool: Introduce HINT_RET_OFFSET  (Peter Zijlstra; 3 files, -9/+20)
Normally objtool ensures a function keeps the stack layout invariant. But there is a useful exception: it is possible to stuff the return stack in order to 'inject' a 'call':

    push $fun
    ret

In this case the invariant mentioned above is violated. Add an objtool HINT to annotate this and allow a function to exit with a modified stack frame.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
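A sketch of the relaxed exit check this enables; the insn->ret_offset field name is assumed from the hint's purpose, not verified against the patch:

    /* at a return, a frame that differs from the initial one by exactly
     * the hinted amount is still considered well-formed */
    if (state->cfa.offset != initial_func_cfi.cfa.offset + insn->ret_offset)
            return true;    /* genuinely modified stack frame */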
2020-04-22  objtool: Better handle IRET  (Peter Zijlstra; 3 files, -15/+29)
Teach objtool a little more about IRET so that we can avoid using the SAVE/RESTORE annotation. In particular, make the weird corner case in insn->restore go away. The purpose of that corner case is to deal with the fact that UNWIND_HINT_RESTORE lands on the instruction after IRET, but that instruction can end up being outside the basic block. Consider:

    if (cond)
            sync_core()
    foo();

Then the hint will land on foo(), and we'll encounter the restore hint without ever having seen the save hint. By teaching objtool about the arch-specific exception frame size, and assuming that any IRET in an STT_FUNC symbol is an exception-frame-sized POP, we can remove the use of save/restore hints for this code.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2020-04-22  objtool: Support multiple stack_op per instruction  (Julien Thierry; 4 files, -31/+62)
Instruction sets can include more or less complex operations which might not fit the currently defined set of stack_ops. Combining more than one stack_op provides more flexibility to describe the behaviour of an instruction, and also reduces the need to define new stack_ops specific to a single instruction set. Allow instruction decoders to generate multiple stack_ops per instruction.

Signed-off-by: Julien Thierry <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
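A sketch of the data-structure change this implies; members are abridged, and handle_stack_op() is a made-up stand-in for the per-op state update:

    struct stack_op {
            struct op_dest dest;
            struct op_src src;
            struct list_head list;  /* new: ops chained per instruction */
    };

    /* struct instruction then grows a 'struct list_head stack_ops',
     * replacing its single embedded stack_op, and the checker walks
     * every op the decoder generated: */
    list_for_each_entry(op, &insn->stack_ops, list) {
            if (handle_stack_op(insn, state, op))
                    return 1;
    }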
2020-04-22  objtool: Remove redundant .rodata section name comparison  (Muchun Song; 1 file, -2/+2)
If the section name does not have the prefix '.rodata', the following function call can never return 0:

    strcmp(sec->name, C_JUMP_TABLE_SECTION)

So the exact name comparison is pointless; just remove it.

Signed-off-by: Muchun Song <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
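C_JUMP_TABLE_SECTION names a '.rodata..c_jump_table' section, so any exact match already passes the prefix test. A simplified before/after sketch (surrounding details of the rodata-marking loop omitted):

    /* before: the strcmp() can only succeed when the strncmp() matched */
    if (!strncmp(sec->name, ".rodata", 7) ||
        !strcmp(sec->name, C_JUMP_TABLE_SECTION))
            sec->rodata = true;

    /* after */
    if (!strncmp(sec->name, ".rodata", 7))
            sec->rodata = true;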
2020-04-22  objtool: Documentation: document UACCESS warnings  (Nick Desaulniers; 1 file, -0/+26)
Compiling with Clang and CONFIG_KASAN=y was exposing a few warnings:

    call to memset() with UACCESS enabled

Document how to fix these for future travelers.

Link: https://github.com/ClangBuiltLinux/linux/issues/876
Suggested-by: Kamalesh Babulal <[email protected]>
Suggested-by: Matt Helsley <[email protected]>
Suggested-by: Peter Zijlstra <[email protected]>
Suggested-by: Randy Dunlap <[email protected]>
Signed-off-by: Nick Desaulniers <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2020-04-22  objtool: Split out arch-specific CFI definitions  (Julien Thierry; 3 files, -20/+29)
Some CFI definitions used by generic objtool code have no reason to vary from one architecture to another. Keep those definitions in generic code and move the arch-specific ones to a new arch-specific header.

Signed-off-by: Julien Thierry <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2020-04-22  objtool: Add abstraction for destination offsets  (Raphael Gault; 3 files, -8/+27)
The jump and call destination relocation offsets are x86-specific. Abstract them by calling arch-specific implementations.

[ jthierry: Remove superfluous comment; replace other addend offsets with arch_dest_rela_offset() ]

Signed-off-by: Raphael Gault <[email protected]>
Signed-off-by: Julien Thierry <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
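A sketch of the x86 flavour this abstracts, assuming the usual rel32 convention (the 4-byte immediate is relative to the end of the field, hence the +4):

    unsigned long arch_dest_rela_offset(int addend)
    {
            return addend + 4;
    }

    /* generic code, which previously hard-coded the "+ 4" */
    dest_off = arch_dest_rela_offset(rela->addend);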
2020-04-22  objtool: Use arch specific values in restore_reg()  (Julien Thierry; 1 file, -2/+2)
The initial register state is set up by arch specific code. Use the value the arch code has set when restoring registers from the stack.

Suggested-by: Raphael Gault <[email protected]>
Signed-off-by: Julien Thierry <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
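A sketch of the restore mirroring the arch-provided initial state (shape assumed; initial_func_cfi is objtool's arch-initialized template):

    static void restore_reg(struct insn_state *state, unsigned char reg)
    {
            /* mirror whatever the arch set up, instead of a hard-coded
             * "undefined" value */
            state->regs[reg].base = initial_func_cfi.regs[reg].base;
            state->regs[reg].offset = initial_func_cfi.regs[reg].offset;
    }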
2020-04-22  objtool: Ignore empty alternatives  (Julien Thierry; 1 file, -0/+6)
The .alternatives section can contain entries with no original instructions. Objtool will currently crash when handling such an entry. Just skip that entry, but still give a warning to discourage useless entries.

Signed-off-by: Julien Thierry <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
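A sketch of the skip, assuming objtool's special_alt/WARN_FUNC helpers (exact field and variable names assumed):

    /* an entry with no original instructions has nothing to rewrite */
    if (special_alt->group && !special_alt->orig_len) {
            WARN_FUNC("empty alternative entry",
                      orig_insn->sec, orig_insn->offset);
            continue;
    }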
2020-04-22  objtool: Clean instruction state before each function validation  (Julien Thierry; 1 file, -7/+6)
When a function fails its validation, it might leave a stale state that will be used for the validation of other functions. That would cause false warnings on potentially valid functions. Reset the instruction state before the validation of each individual function.

Signed-off-by: Julien Thierry <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2020-04-22  objtool: Remove redundant checks on operand type  (Julien Thierry; 1 file, -3/+1)
POP operations are already in the code path where the destination operand is OP_DEST_REG. There is no need to check the operand type again.

Signed-off-by: Julien Thierry <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2020-04-22  objtool: Always do header sync check  (Julien Thierry; 1 file, -1/+1)
Currently, the check of tools files against their kernel equivalents is only done after every object file has been built. This means one might fix build issues against outdated headers without seeing a warning about it. Check headers before any object is built. Also, make it part of a FORCE'd recipe so every attempt to build objtool will report the outdated headers (if any).

Signed-off-by: Julien Thierry <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2020-04-22  objtool: Fix off-by-one in symbol_by_offset()  (Julien Thierry; 1 file, -1/+1)
Sometimes, WARN_FUNC() and other users of symbol_by_offset() will associate the first instruction of a symbol with the symbol preceding it. This is because symbol->offset + symbol->len is already outside of the symbol's range.

Fixes: 2a362ecc3ec9 ("objtool: Optimize find_symbol_*() and read_symbols()")
Signed-off-by: Julien Thierry <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
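A sketch of the rb-tree comparator in question: a symbol spans [offset, offset + len), so the end bound must be exclusive, and the fix is turning '>' into '>=':

    static int symbol_by_offset(const void *key, const struct rb_node *node)
    {
            const struct symbol *s = rb_entry(node, struct symbol, node);
            const unsigned long *o = key;

            if (*o < s->offset)
                    return -1;
            if (*o >= s->offset + s->len)   /* was: '>', the off-by-one */
                    return 1;

            return 0;       /* offset is inside this symbol */
    }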
2020-04-22  objtool: Fix 32bit cross builds  (Peter Zijlstra; 1 file, -1/+1)
Apparently, people do 64-bit builds on 32-bit machines.

Fixes: 74b873e49d92 ("objtool: Optimize find_rela_by_dest_range()")
Reported-by: [email protected]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2020-04-21  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds; 2 files, -1/+3)
Merge misc fixes from Andrew Morton: "15 fixes"

* emailed patches from Andrew Morton <[email protected]>:
  tools/vm: fix cross-compile build
  coredump: fix null pointer dereference on coredump
  mm: shmem: disable interrupt when acquiring info->lock in userfaultfd_copy path
  shmem: fix possible deadlocks on shmlock_user_lock
  vmalloc: fix remap_vmalloc_range() bounds checks
  mm/shmem: fix build without THP
  mm/ksm: fix NULL pointer dereference when KSM zero page is enabled
  tools/build: tweak unused value workaround
  checkpatch: fix a typo in the regex for $allocFunctions
  mm, gup: return EINTR when gup is interrupted by fatal signals
  mm/hugetlb: fix a addressing exception caused by huge_pte_offset
  MAINTAINERS: add an entry for kfifo
  mm/userfaultfd: disable userfaultfd-wp on x86_32
  slub: avoid redzone when choosing freepointer location
  sh: fix build error in mm/init.c
2020-04-21  Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost  (Linus Torvalds; 4 files, -2/+5)
Pull virtio fixes and cleanups from Michael Tsirkin:

 - Some bug fixes
 - Clean up a couple of issues that surfaced meanwhile
 - Disable vhost on ARM with OABI for now - to be fixed fully later in the cycle or in the next release

* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: (24 commits)
  vhost: disable for OABI
  virtio: drop vringh.h dependency
  virtio_blk: add a missing include
  virtio-balloon: Avoid using the word 'report' when referring to free page hinting
  virtio-balloon: make virtballoon_free_page_report() static
  vdpa: fix comment of vdpa_register_device()
  vdpa: make vhost, virtio depend on menu
  vdpa: allow a 32 bit vq alignment
  drm/virtio: fix up for include file changes
  remoteproc: pull in slab.h
  rpmsg: pull in slab.h
  virtio_input: pull in slab.h
  remoteproc: pull in slab.h
  virtio-rng: pull in slab.h
  virtgpu: pull in uaccess.h
  tools/virtio: make asm/barrier.h self contained
  tools/virtio: define aligned attribute
  virtio/test: fix up after IOTLB changes
  vhost: Create accessors for virtqueues private_data
  vdpasim: Return status in vdpasim_get_status
  ...
2020-04-21  tools/vm: fix cross-compile build  (Lucas Stach; 1 file, -0/+2)
Commit 7ed1c1901fe5 ("tools: fix cross-compile var clobbering") moved the setup of the CC variable to tools/scripts/Makefile.include to make the behavior consistent across all the tools Makefiles. As the vm tools missed the include, we end up with the wrong CC in a cross-compiling environment.

Fixes: 7ed1c1901fe5 ("tools: fix cross-compile var clobbering")
Signed-off-by: Lucas Stach <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Martin Kelly <[email protected]>
Cc: <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
2020-04-21  tools/build: tweak unused value workaround  (George Burgess IV; 1 file, -1/+1)
Clang has -Wself-assign enabled by default under -Wall, which always gets -Werror'ed on this file, causing sync-compare-and-swap to be disabled by default. The generally accepted way to spell "this value is intentionally unused" is casting it to `void`. This is accepted by both GCC and Clang with -Wall enabled: https://godbolt.org/z/qqZ9r3

Signed-off-by: George Burgess IV <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Nick Desaulniers <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
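The pattern, side by side (do_something() stands in for the real call):

    int ret = do_something();

    ret = ret;      /* old workaround: trips clang's -Wself-assign */
    (void)ret;      /* new: "intentionally unused" for GCC and Clang */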
2020-04-21  docs: Add rbtree documentation to the core-api  (Matthew Wilcox (Oracle); 2 files, -2/+2)
This file is close enough to being in rst format that I didn't feel the need to alter it in any way.

Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Michel Lespinasse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jonathan Corbet <[email protected]>
2020-04-21  selftests: kvm/set_memory_region_test: do not check RIP if the guest shuts down  (Paolo Bonzini; 1 file, -4/+9)
On AMD, the state of the VMCB is undefined after a shutdown VMEXIT. KVM takes a very conservative approach to that and resets the guest altogether when that happens. This causes the set_memory_region_test to fail because the RIP is 0xfff0 (the reset vector). Restrict the RIP test to KVM_EXIT_INTERNAL_ERROR in order to fix this.

Signed-off-by: Paolo Bonzini <[email protected]>
2020-04-21  tools/kvm_stat: add sample systemd unit file  (Stefan Raspl; 1 file, -0/+16)
Add a sample unit file as a basis for systemd integration of kvm_stat logs.

Signed-off-by: Stefan Raspl <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
2020-04-21  tools/kvm_stat: Add command line switch '-L' to log to file  (Stefan Raspl; 2 files, -13/+68)
To integrate with logrotate, we have a signal handler that will re-open the logfile. Assuming we have a systemd unit file with

    ExecStart=kvm_stat -dtc -s 10 -L /var/log/kvm_stat.csv
    ExecReload=/bin/kill -HUP $MAINPID

and a logrotate config featuring

    postrotate
        /bin/systemctl reload kvm_stat.service
    endscript

the overall flow will look like this:

 (1) systemd starts kvm_stat, logging to A.
 (2) At some point, logrotate runs, moving A to B. kvm_stat continues to write to B at this point.
 (3) After rotating, logrotate restarts the kvm_stat unit via systemctl.
 (4) The kvm_stat unit sends a SIGHUP to kvm_stat, finally making it switch over to writing to A again.

Note that in order to keep the structure of the CSV output intact, we make sure to write the header only once at the beginning of a file, in contrast to the standard log format. This implies that the header is suppressed when appending to an existing file; with the standard format, appending to an existing file starts out with a header.

Signed-off-by: Stefan Raspl <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
2020-04-21  tools/kvm_stat: add command line switch '-z' to skip zero records  (Stefan Raspl; 2 files, -8/+24)
When running in logging mode, skip records with all zeros (=empty records) to preserve space when logging to files.

Signed-off-by: Stefan Raspl <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
2020-04-20  bpf, selftests: Add test for BPF_STX BPF_B storing R10  (Luke Nelson; 1 file, -0/+40)
This patch adds a test to test_verifier that writes the lower 8 bits of R10 (aka FP) using BPF_B to an array map and reads the result back. The expected behavior is that the result should be the same as first copying R10 to R9, and then storing / loading the lower 8 bits of R9. This test catches a bug that was present in the x86-64 JIT that caused an incorrect encoding for BPF_STX BPF_B when the source operand is R10.

Signed-off-by: Xi Wang <[email protected]>
Signed-off-by: Luke Nelson <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
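A hedged sketch of the interesting instructions, using the insn macros from include/linux/filter.h; the map setup and the R9 reference sequence are omitted:

    /* r0 = pointer to an array map value (set up by earlier insns) */
    BPF_STX_MEM(BPF_B, BPF_REG_0, BPF_REG_10, 0), /* *(u8 *)(r0+0) = r10 */
    BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),  /* r0 = *(u8 *)(r0+0) */
    BPF_EXIT_INSN(),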
2020-04-20  bpf: Forbid XADD on spilled pointers for unprivileged users  (Jann Horn; 1 file, -0/+1)
When check_xadd() verifies an XADD operation on a pointer to a stack slot containing a spilled pointer, check_stack_read() verifies that the read, which is part of XADD, is valid. However, since the placeholder value -1 is passed as `value_regno`, check_stack_read() can only return a binary decision and can't return the type of the value that was read. The intent here is to verify whether the value read from the stack slot may be used as a SCALAR_VALUE; but since check_stack_read() doesn't check the type, and the type information is lost when check_stack_read() returns, this is not enforced, and a malicious user can abuse XADD to leak spilled kernel pointers.

Fix it by letting check_stack_read() verify that the value is usable as a SCALAR_VALUE if no type information is passed to the caller. To be able to use __is_pointer_value() in check_stack_read(), move it up.

Fix up the expected unprivileged error message for a BPF selftest that, until now, assumed that unprivileged users can use XADD on stack-spilled pointers. This also gives us a test for the behavior introduced in this patch for free.

In theory, this could also be fixed by forbidding XADD on stack spills entirely, since XADD is a locked operation (for operations on memory with concurrency) and there can't be any concurrency on the BPF stack; but Alexei has said that he wants to keep XADD on stack slots working to avoid changes to the test suite [1].

The following BPF program demonstrates how to leak a BPF map pointer as an unprivileged user using this bug:

    // r7 = map_pointer
    BPF_LD_MAP_FD(BPF_REG_7, small_map),
    // r8 = launder(map_pointer)
    BPF_STX_MEM(BPF_DW, BPF_REG_FP, BPF_REG_7, -8),
    BPF_MOV64_IMM(BPF_REG_1, 0),
    ((struct bpf_insn) {
            .code = BPF_STX | BPF_DW | BPF_XADD,
            .dst_reg = BPF_REG_FP,
            .src_reg = BPF_REG_1,
            .off = -8
    }),
    BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_FP, -8),

    // store r8 into map
    BPF_MOV64_REG(BPF_REG_ARG1, BPF_REG_7),
    BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
    BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -4),
    BPF_ST_MEM(BPF_W, BPF_REG_ARG2, 0, 0),
    BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
    BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
    BPF_EXIT_INSN(),
    BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_8, 0),
    BPF_MOV64_IMM(BPF_REG_0, 0),
    BPF_EXIT_INSN()

[1] https://lore.kernel.org/bpf/[email protected]/

Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2020-04-20  docs: fix broken references for ReST files that moved around  (Mauro Carvalho Chehab; 1 file, -2/+2)
Some broken references happened due to shifting files around and ReST renames. Those can't be auto-fixed by the script, so let's fix them manually.

Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Acked-by: Corentin Labbe <[email protected]>
Link: https://lore.kernel.org/r/64773a12b4410aaf3e3be89e3ec7e34de2484eea.1586881715.git.mchehab+huawei@kernel.org
Signed-off-by: Jonathan Corbet <[email protected]>
2020-04-20  selftests: pmtu: implement IPIP, SIT and ip6tnl PMTU discovery tests  (Lourdes Pedrajas; 1 file, -0/+122)
Add PMTU discovery tests for these encapsulations:

 - IPIP
 - SIT, mode ip6ip
 - ip6tnl, modes ip6ip6 and ipip6

Signed-off-by: Lourdes Pedrajas <[email protected]>
Reviewed-by: Stefano Brivio <[email protected]>
Reviewed-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2020-04-20  pm-graph v5.6  (Todd Brandt; 5 files, -371/+619)
sleepgraph:
 - force usage of python3 instead of using system default
 - fix bugzilla 204773 (https://bugzilla.kernel.org/show_bug.cgi?id=204773)
 - fix issue of platform info not being reset in -multi (logs fill up)
 - change -ftop call to "pm_suspend", this is one level below state_store
 - add -wificheck command to read out the current wifi device details
 - change -wifi behavior to poll /proc/net/wireless for wifi connect
 - add wifi reconnect time to timeline, include time in summary column
 - add "fail on wifi_resume" to timeline and summary when wifi fails
 - add a set of commands to collect data before/after suspend in the log
 - add "-cmdinfo" command which prints out all the data collected
 - check for cmd info tools at start, print found/missing in green/red
 - fix kernel suspend time calculation: the tool used to look for the start of pm_suspend_console, but the order has changed. The latest kernel starts with ksys_sync; use this instead
 - include time spent in mem/disk in the header (same as freeze/standby)
 - ignore turbostat 32-bit capability warnings
 - print to result.txt when -skiphtml is used, just say result: pass
 - don't exit on SIGTSTP, it's a ctrl-Z and the tool may come back
 - -multi argument supports duration as well as count: hours, minutes, seconds
 - update the -multi status output to be more informative
 - -maxfail sets maximum consecutive fails before a -multi run is aborted
 - in -summary, ignore dmesg/ftrace/html files that are 0 size

bootgraph:
 - force usage of python3 instead of using system default

README:
 - add endurance testing instructions

Makefile:
 - remove pycache on uninstall

Signed-off-by: Todd Brandt <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
2020-04-19  Merge tag 'x86-urgent-2020-04-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 3 files, -32/+71)
Pull x86 and objtool fixes from Thomas Gleixner:
 "A set of fixes for x86 and objtool:

  objtool:
   - Ignore the double UD2 which is emitted in BUG() when CONFIG_UBSAN_TRAP is enabled.
   - Support clang non-section symbols in objtool ORC dump
   - Fix switch table detection in .text.unlikely
   - Make the BP scratch register warning more robust.

  x86:
   - Increase microcode maximum patch size for AMD to cope with new CPUs which have a larger patch size.
   - Fix a crash in the resource control filesystem when the removal of the default resource group is attempted.
   - Preserve Code and Data Prioritization enabled state across CPU hotplug.
   - Update split lock cpu matching to use the new X86_MATCH macros.
   - Change the split lock enumeration as Intel finally decided that the IA32_CORE_CAPABILITIES bits are not architectural, contrary to what the SDM claims. !@#%$^!
   - Add Tremont CPU models to the split lock detection cpu match.
   - Add a missing static attribute to make sparse happy"

* tag 'x86-urgent-2020-04-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/split_lock: Add Tremont family CPU models
  x86/split_lock: Bits in IA32_CORE_CAPABILITIES are not architectural
  x86/resctrl: Preserve CDP enable over CPU hotplug
  x86/resctrl: Fix invalid attempt at removing the default resource group
  x86/split_lock: Update to use X86_MATCH_INTEL_FAM6_MODEL()
  x86/umip: Make umip_insns static
  x86/microcode/AMD: Increase microcode PATCH_MAX_SIZE
  objtool: Make BP scratch register warning more robust
  objtool: Fix switch table detection in .text.unlikely
  objtool: Support Clang non-section symbols in ORC generation
  objtool: Support Clang non-section symbols in ORC dump
  objtool: Fix CONFIG_UBSAN_TRAP unreachable warnings
2020-04-19  Merge tag 'perf-urgent-2020-04-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 22 files, -387/+646)
Pull perf tooling fixes and updates from Thomas Gleixner:

 - Fix the header line of perf stat output for '--metric-only --per-socket'
 - Fix the python build with clang
 - The usual tools UAPI header synchronization

* tag 'perf-urgent-2020-04-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  tools headers: Synchronize linux/bits.h with the kernel sources
  tools headers: Adopt verbatim copy of compiletime_assert() from kernel sources
  tools headers: Update x86's syscall_64.tbl with the kernel sources
  tools headers UAPI: Sync drm/i915_drm.h with the kernel sources
  tools headers UAPI: Update tools's copy of drm.h headers
  tools headers kvm: Sync linux/kvm.h with the kernel sources
  tools headers UAPI: Sync linux/fscrypt.h with the kernel sources
  tools include UAPI: Sync linux/vhost.h with the kernel sources
  tools arch x86: Sync asm/cpufeatures.h with the kernel sources
  tools headers UAPI: Sync linux/mman.h with the kernel
  tools headers UAPI: Sync sched.h with the kernel
  tools headers: Update linux/vdso.h and grab a copy of vdso/const.h
  perf stat: Fix no metric header if --per-socket and --metric-only set
  perf python: Check if clang supports -fno-semantic-interposition
  tools arch x86: Sync the msr-index.h copy with the kernel sources
2020-04-18  perf hist: Add fast path for duplicate entries check  (Kan Liang; 3 files, -1/+26)
Perf checks for duplicate entries in a callchain before adding an entry. However, the check is very slow, especially with a deep call stack. Almost ~50% of the elapsed time of perf report is spent on the check when the call stack is always 32 deep.

The hist_entry__cmp() is used to compare the new entry with the old entries. It goes through all the available sorts in the sort_list and calls the specific cmp of each sort, which is very slow. Actually, for most cases there are no duplicate entries in the callchain; the symbols are usually different. It's much faster to do a quick check for the symbols first, and only do the full cmp when the symbols are exactly the same. The quick check only checks symbols, not dso. Export _sort__sym_cmp.

    $ perf record --call-graph lbr ./tchain_edit_64

Without the patch:

    $ time perf report --stdio
    real    0m21.142s
    user    0m21.110s
    sys     0m0.033s

With the patch:

    $ time perf report --stdio
    real    0m10.977s
    user    0m10.948s
    sys     0m0.027s

Signed-off-by: Kan Liang <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
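A sketch of the shape of the fast path; the wrapper name is hypothetical, while _sort__sym_cmp() and hist_entry__cmp() are the functions named above:

    static int64_t hist_entry__fast_cmp(struct hist_entry *left,
                                        struct hist_entry *right)
    {
            /* quick check: different symbols can't be duplicates */
            int64_t cmp = _sort__sym_cmp(left->ms.sym, right->ms.sym);

            if (cmp)
                    return cmp;

            /* identical symbols: fall back to the full sort-list walk */
            return hist_entry__cmp(left, right);
    }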
2020-04-18  perf c2c: Add option to enable the LBR stitching approach  (Kan Liang; 2 files, -0/+23)
With the LBR stitching approach, the reconstructed LBR call stack can break the HW limitation. However, it may reconstruct invalid call stacks in some cases, e.g. exception handling such as setjmp/longjmp. Also, it may impact the processing time, especially when the number of samples with stitched LBRs is huge. Add an option to enable the approach.

Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18  perf top: Add option to enable the LBR stitching approach  (Kan Liang; 3 files, -0/+21)
With the LBR stitching approach, the reconstructed LBR call stack can break the HW limitation. However, it may reconstruct invalid call stacks in some cases, e.g. exception handling such as setjmp/longjmp. Also, it may impact the processing time, especially when the number of samples with stitched LBRs is huge. Add an option to enable the approach. The option must be used with --call-graph lbr.

Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18  perf script: Add option to enable the LBR stitching approach  (Kan Liang; 2 files, -0/+23)
With the LBR stitching approach, the reconstructed LBR call stack can break the HW limitation. However, it may reconstruct invalid call stacks in some cases, e.g. exception handling such as setjmp/longjmp. Also, it may impact the processing time, especially when the number of samples with stitched LBRs is huge. Add an option to enable the approach.

Committer testing:

    $ perf script --stitch-lbr
    <SNIP>
    tchain_edit 11131 15164.984292: 437491 cycles:u:
                401106 f43+0x0 (/wb/tchain_edit)
                40114c f42+0x18 (/wb/tchain_edit)
                401172 f41+0xe (/wb/tchain_edit)
                401194 f40+0x0 (/wb/tchain_edit)
                40119b f39+0x0 (/wb/tchain_edit)
                4011a2 f38+0x0 (/wb/tchain_edit)
                4011a9 f37+0x0 (/wb/tchain_edit)
                4011b0 f36+0x0 (/wb/tchain_edit)
                4011b7 f35+0x0 (/wb/tchain_edit)
                4011be f34+0x0 (/wb/tchain_edit)
                4011c5 f33+0x0 (/wb/tchain_edit)
                4011cc f32+0x0 (/wb/tchain_edit)
                401207 f31+0x34 (/wb/tchain_edit)
                401212 f30+0x0 (/wb/tchain_edit)
                401219 f29+0x0 (/wb/tchain_edit)
                401220 f28+0x0 (/wb/tchain_edit)
                401227 f27+0x0 (/wb/tchain_edit)
                40122e f26+0x0 (/wb/tchain_edit)
                401235 f25+0x0 (/wb/tchain_edit)
                40123c f24+0x0 (/wb/tchain_edit)
                401243 f23+0x0 (/wb/tchain_edit)
                40124a f22+0x0 (/wb/tchain_edit)
                401251 f21+0x0 (/wb/tchain_edit)
                401258 f20+0x0 (/wb/tchain_edit)
                40125f f19+0x0 (/wb/tchain_edit)
                401266 f18+0x0 (/wb/tchain_edit)
                40126d f17+0x0 (/wb/tchain_edit)
                401274 f16+0x0 (/wb/tchain_edit)
                40127b f15+0x0 (/wb/tchain_edit)
                401282 f14+0x0 (/wb/tchain_edit)
                401289 f13+0x0 (/wb/tchain_edit)
                401290 f12+0x0 (/wb/tchain_edit)
                401297 f11+0x0 (/wb/tchain_edit)
                40129e f10+0x0 (/wb/tchain_edit)
                4012a5 f9+0x0 (/wb/tchain_edit)
                4012ac f8+0x0 (/wb/tchain_edit)
                4012b3 f7+0x0 (/wb/tchain_edit)
                4012ba f6+0x0 (/wb/tchain_edit)
                4012c1 f5+0x0 (/wb/tchain_edit)
                4012c8 f4+0x0 (/wb/tchain_edit)
                4012cf f3+0x0 (/wb/tchain_edit)
                4012d6 f2+0x0 (/wb/tchain_edit)
                4012dd f1+0x0 (/wb/tchain_edit)
                4012e4 main+0x0 (/wb/tchain_edit)
          7f41a5016f41 __libc_start_main+0xf1 (/usr/lib64/libc-2.29.so)
    <SNIP>
    $

Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18  perf report: Add option to enable the LBR stitching approach  (Kan Liang; 2 files, -0/+23)
With the LBR stitching approach, the reconstructed LBR call stack can break the HW limitation. However, it may reconstruct invalid call stacks in some cases, e.g. exception handling such as setjmp/longjmp. Also, it may impact the processing time, especially when the number of samples with stitched LBRs is huge. Add an option to enable the approach.

    # To display the perf.data header info, please use
    # --header/--header-only options.
    #
    #
    # Total Lost Samples: 0
    #
    # Samples: 6K of event 'cycles'
    # Event count (approx.): 6492797701
    #
    # Children      Self  Command      Shared Object  Symbol
    # ........  ........  ...........  .............  ................
    #
        99.99%    99.99%  tchain_edit  tchain_edit    [.] f43
                |
                ---main f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13
                   f14 f15 f16 f17 f18 f19 f20 f21 f22 f23 f24 f25
                   f26 f27 f28 f29 f30 f31
                   |
                    --99.65%--f32 f33 f34 f35 f36 f37 f38 f39 f40
                              f41 f42 f43

Committer testing:

    $ perf record --call-graph lbr /wb/tchain_edit
    [ perf record: Woken up 23 times to write data ]
    [ perf record: Captured and wrote 5.578 MB perf.data (6839 samples) ]
    $ perf report --header-only | egrep 'cpu(desc|.*capabilities)'
    # cpudesc : Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
    # cpu pmu capabilities: branches=32, max_precise=3, pmu_name=skylake
    $

Before:

    $ perf report --no-children --stdio
    # To display the perf.data header info, please use --header/--header-only options.
    #
    #
    # Total Lost Samples: 0
    #
    # Samples: 6K of event 'cycles:u'
    # Event count (approx.): 6459523879
    #
    # Overhead  Command      Shared Object  Symbol
    # ........  ...........  .............  ................
    #
        99.95%  tchain_edit  tchain_edit    [.] f43
                |
                 --99.92%--f43 f42 f41 f40 f39 f38 f37 f36 f35 f34
                           f33 f32 f31 f30 f29 f28 f27 f26 f25 f24
                           f23 f22 f21 f20 f19 f18 f17 f16 f15 f14
                           f13 f12 f11
         0.03%  tchain_edit  tchain_edit    [.] f42
         0.01%  tchain_edit  tchain_edit    [.] f41
         0.00%  tchain_edit  tchain_edit    [.] f31
         0.00%  tchain_edit  ld-2.29.so     [.] _dl_relocate_object
         0.00%  tchain_edit  ld-2.29.so     [.] memmove
         0.00%  tchain_edit  [unknown]      [k] 0xffffffff93a00b17

After:

    $ perf report --stitch-lbr --no-children --stdio
    # To display the perf.data header info, please use --header/--header-only options.
    #
    #
    # Total Lost Samples: 0
    #
    # Samples: 6K of event 'cycles:u'
    # Event count (approx.): 6459496645
    #
    # Overhead  Command      Shared Object  Symbol
    # ........  ...........  .............  ................
    #
        99.97%  tchain_edit  tchain_edit    [.] f43
                |
                 --99.93%--f43 f42 f41 f40 f39 f38 f37 f36 f35 f34
                           f33 f32 f31 f30 f29 f28 f27 f26 f25 f24
                           f23 f22 f21 f20 f19 f18 f17 f16 f15 f14
                           f13 f12 f11 f10 f9 f8 f7 f6 f5 f4 f3 f2
                           f1 main __libc_start_main
         0.02%  tchain_edit  [unknown]      [k] 0xffffffff93a00b17
         0.01%  tchain_edit  tchain_edit    [.] f31
         0.00%  tchain_edit  ld-2.29.so     [.] _dl_important_hwcaps

Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18  perf callchain: Stitch LBR call stack  (Kan Liang; 6 files, -28/+188)
In LBR call stack mode, the depth of the reconstructed LBR call stack is limited by the number of LBR registers. For example, on Skylake the depth of the reconstructed LBR call stack is always <= 32:

    # To display the perf.data header info, please use
    # --header/--header-only options.
    #
    #
    # Total Lost Samples: 0
    #
    # Samples: 6K of event 'cycles'
    # Event count (approx.): 6487119731
    #
    # Children      Self  Command      Shared Object  Symbol
    # ........  ........  ...........  .............  ................
    #
        99.97%    99.97%  tchain_edit  tchain_edit    [.] f43
                |
                 --99.64%--f11 f12 f13 f14 f15 f16 f17 f18 f19 f20
                           f21 f22 f23 f24 f25 f26 f27 f28 f29 f30
                           f31 f32 f33 f34 f35 f36 f37 f38 f39 f40
                           f41 f42 f43

For a call stack which is deeper than the LBR limit, HW will overwrite the LBR register with the oldest branch, so only partial call stacks can be reconstructed. However, the overwritten LBRs may still be retrieved from the previous sample: at that moment, HW hasn't overwritten the LBR registers yet. Perf tools can stitch those overwritten LBRs onto the current call stack to get a more complete call stack.

To determine if LBRs can be stitched, perf tools need to compare the current sample with the previous sample:

 - They should have identical LBR records (same from, to and flags values, and the same physical index of LBR registers).
 - The searching starts from the base-of-stack of the current sample.

Once perf determines to stitch the previous LBRs, the corresponding LBR cursor nodes are copied to 'lists', which tracks the LBR cursor nodes that are going to be stitched. When the stitching is over, the nodes are not freed immediately; they are moved to 'free_lists' so the next stitching may reuse the space. Both 'lists' and 'free_lists' are freed when all samples are processed.

Committer notes:

Fix the intel-pt.c initialization of the union with 'struct branch_flags', which breaks the build with its unnamed union on older gcc versions.

Uninline thread__free_stitch_list(), as it grew big and started dragging includes into thread.h, so move it to thread.c, where what it needs in terms of headers is already there. This fixes the build on several systems, such as debian:experimental when cross-building to the MIPS32 architecture; in the other cases, what was needed was being included by sheer luck:

    In file included from builtin-sched.c:11:
    util/thread.h: In function 'thread__free_stitch_list':
    util/thread.h:169:3: error: implicit declaration of function 'free' [-Werror=implicit-function-declaration]
      169 |   free(pos);
          |   ^~~~
    util/thread.h:169:3: error: incompatible implicit declaration of built-in function 'free' [-Werror]
    util/thread.h:19:1: note: include '<stdlib.h>' or provide a declaration of 'free'
       18 | #include "callchain.h"
      +++ |+#include <stdlib.h>
       19 |
    util/thread.h:174:3: error: incompatible implicit declaration of built-in function 'free' [-Werror]
      174 |   free(pos);
          |   ^~~~
    util/thread.h:174:3: note: include '<stdlib.h>' or provide a declaration of 'free'

Signed-off-by: Kan Liang <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pavel Gerasimov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Vitaly Slobodskoy <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
In file included from builtin-sched.c:11: util/thread.h: In function 'thread__free_stitch_list': util/thread.h:169:3: error: implicit declaration of function 'free' [-Werror=implicit-function-declaration] 169 | free(pos); | ^~~~ util/thread.h:169:3: error: incompatible implicit declaration of built-in function 'free' [-Werror] util/thread.h:19:1: note: include '<stdlib.h>' or provide a declaration of 'free' 18 | #include "callchain.h" +++ |+#include <stdlib.h> 19 | util/thread.h:174:3: error: incompatible implicit declaration of built-in function 'free' [-Werror] 174 | free(pos); | ^~~~ util/thread.h:174:3: note: include '<stdlib.h>' or provide a declaration of 'free' Signed-off-by: Kan Liang <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18perf callchain: Save previous cursor nodes for LBR stitching approachKan Liang3-4/+83
The cursor nodes which generates from sample are eventually added into callchain. To avoid generating cursor nodes from previous samples again, the previous cursor nodes are also saved for LBR stitching approach. Some option, e.g. hide-unresolved, may hide some LBRs. Add a variable 'valid' in struct callchain_cursor_node to indicate this case. The LBR stitching approach will only append the valid cursor nodes from previous samples later. Signed-off-by: Kan Liang <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] [ Use zfree() instead of open coded equivalent, and use it when freeing members of structs ] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18perf thread: Save previous sample for LBR stitching approachKan Liang3-0/+36
To retrieve the overwritten LBRs from previous sample for LBR stitching approach, perf has to save the previous sample. Only allocate the struct lbr_stitch once, when LBR stitching approach is enabled and kernel supports hw_idx. Signed-off-by: Kan Liang <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] [ Use zalloc()/zfree() for thread->lbr_stitch ] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18perf thread: Add a knob for LBR stitch approachKan Liang2-0/+4
The LBR stitch approach should be disabled by default. Because - The stitching approach base on LBR call stack technology. The known limitations of LBR call stack technology still apply to the approach, e.g. Exception handing such as setjmp/longjmp will have calls/returns not match. - This approach is not foolproof. There can be cases where it creates incorrect call stacks from incorrect matches. There is no attempt to validate any matches in another way. The 'lbr_stitch_enable' is used to indicate whether enable LBR stitch approach, which is disabled by default. The following patch will introduce a new option for each tools to enable the LBR stitch approach. Signed-off-by: Kan Liang <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18perf machine: Factor out lbr_callchain_add_lbr_ip()Kan Liang1-70/+73
Both caller and callee needs to add ip from LBR to callchain. Factor out lbr_callchain_add_lbr_ip() to improve code readability. Signed-off-by: Kan Liang <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18perf machine: Factor out lbr_callchain_add_kernel_ip()Kan Liang1-22/+45
Both caller and callee needs to add kernel ip to callchain. Factor out lbr_callchain_add_kernel_ip() to improve code readability. Signed-off-by: Kan Liang <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18perf machine: Refine the function for LBR call stack reconstructionKan Liang1-35/+76
LBR only collect the user call stack. To reconstruct a call stack, both kernel call stack and user call stack are required. The function resolve_lbr_callchain_sample() mix the kernel call stack and user call stack. Now, with the help of HW idx, perf tool can reconstruct a more complete call stack by adding some user call stack from previous sample. However, current implementation is hard to be extended to support it. Current code path for resolve_lbr_callchain_sample() for (j = 0; j < mix_chain_nr; j++) { if (ORDER_CALLEE) { if (kernel callchain) Fill callchain info else if (LBR callchain) Fill callchain info } else { if (LBR callchain) Fill callchain info else if (kernel callchain) Fill callchain info } add_callchain_ip(); } With the patch, if (ORDER_CALLEE) { for (j = 0; j < NUM of kernel callchain) { Fill callchain info add_callchain_ip(); } for (; j < mix_chain_nr) { Fill callchain info add_callchain_ip(); } } else { for (; j < NUM of LBR callchain) { Fill callchain info add_callchain_ip(); } for (j = 0; j < mix_chain_nr) { Fill callchain info add_callchain_ip(); } } No functional changes. Signed-off-by: Kan Liang <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-04-18perf machine: Remove the indent in resolve_lbr_callchain_sampleKan Liang1-60/+63
The indent is unnecessary in resolve_lbr_callchain_sample. Removing it will make the following patch simpler. Current code path for resolve_lbr_callchain_sample() /* LBR only affects the user callchain */ if (i != chain_nr) { body of the function .... return 1; } return 0; With the patch, /* LBR only affects the user callchain */ if (i == chain_nr) return 0; body of the function ... return 1; No functional changes. Signed-off-by: Kan Liang <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>