path: root/tools
2020-03-12  bpf: Added new helper bpf_get_ns_current_pid_tgid  (Carlos Neira, 1 file changed, -1/+19)
Add a new bpf helper, bpf_get_ns_current_pid_tgid. This helper returns the pid and tgid of the current task when the task's pid namespace matches the dev_t and inode number provided; this allows us to instrument a process inside a container. Signed-off-by: Carlos Neira <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
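For reference, a minimal sketch of how such a helper can be invoked from a BPF program. This is illustrative only, not the kernel selftest for this commit; ns_dev and ns_ino are hypothetical globals assumed to be filled in by the loader from stat() on /proc/<pid>/ns/pid.

    /* Illustrative BPF program: resolve the current task's pid/tgid as seen
     * inside a target pid namespace identified by (ns_dev, ns_ino). */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    const volatile __u64 ns_dev;    /* hypothetical: set by the loader */
    const volatile __u64 ns_ino;    /* hypothetical: set by the loader */

    SEC("tracepoint/syscalls/sys_enter_write")
    int trace_write(void *ctx)
    {
            struct bpf_pidns_info ns = {};

            /* Returns non-zero when the current task is not in that namespace. */
            if (bpf_get_ns_current_pid_tgid(ns_dev, ns_ino, &ns, sizeof(ns)))
                    return 0;

            bpf_printk("ns pid=%u tgid=%u\n", ns.pid, ns.tgid);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";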
2020-03-13  tools: bpftool: Fix minor bash completion mistakes  (Quentin Monnet, 1 file changed, -8/+21)
Minor fixes for bash completion: addition of program name completion for two subcommands, and correction for program test-runs and map pinning. The completion for the following commands is fixed or improved: # bpftool prog run [TAB] # bpftool prog pin [TAB] # bpftool map pin [TAB] # bpftool net attach xdp name [TAB] Signed-off-by: Quentin Monnet <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-03-13  tools: bpftool: Allow all prog/map handles for pinning objects  (Quentin Monnet, 4 files changed, -32/+7)
Documentation and interactive help for bpftool have always explained that the regular handles for programs (id|name|tag|pinned) and maps (id|name|pinned) can be passed to the utility when attempting to pin objects (bpftool prog pin PROG / bpftool map pin MAP). THIS IS A LIE!! The tool actually accepts only ids, as the parsing is done in do_pin_any() in common.c instead of reusing the parsing functions that have long been generic for program and map handles. Instead of fixing the doc, fix the code. It is trivial to reuse the generic parsing, and to simplify do_pin_any() in the process. Do not accept to pin multiple objects at the same time with prog_parse_fds() or map_parse_fds() (this would require a more complex syntax for passing multiple sysfs paths and validating that they correspond to the number of e.g. programs we find for a given name or tag). Signed-off-by: Quentin Monnet <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-03-12  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds, 2 files changed, -3/+32)
Pull networking fixes from David Miller: "It looks like a decent sized set of fixes, but a lot of these are one liner off-by-one and similar type changes: 1) Fix netlink header pointer to calcular bad attribute offset reported to user. From Pablo Neira Ayuso. 2) Don't double clear PHY interrupts when ->did_interrupt is set, from Heiner Kallweit. 3) Add missing validation of various (devlink, nl802154, fib, etc.) attributes, from Jakub Kicinski. 4) Missing *pos increments in various netfilter seq_next ops, from Vasily Averin. 5) Missing break in of_mdiobus_register() loop, from Dajun Jin. 6) Don't double bump tx_dropped in veth driver, from Jiang Lidong. 7) Work around FMAN erratum A050385, from Madalin Bucur. 8) Make sure ARP header is pulled early enough in bonding driver, from Eric Dumazet. 9) Do a cond_resched() during multicast processing of ipvlan and macvlan, from Mahesh Bandewar. 10) Don't attach cgroups to unrelated sockets when in interrupt context, from Shakeel Butt. 11) Fix tpacket ring state management when encountering unknown GSO types. From Willem de Bruijn. 12) Fix MDIO bus PHY resume by checking mdio_bus_phy_may_suspend() only in the suspend context. From Heiner Kallweit" * git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (112 commits) net: systemport: fix index check to avoid an array out of bounds access tc-testing: add ETS scheduler to tdc build configuration net: phy: fix MDIO bus PM PHY resuming net: hns3: clear port base VLAN when unload PF net: hns3: fix RMW issue for VLAN filter switch net: hns3: fix VF VLAN table entries inconsistent issue net: hns3: fix "tc qdisc del" failed issue taprio: Fix sending packets without dequeueing them net: mvmdio: avoid error message for optional IRQ net: dsa: mv88e6xxx: Add missing mask of ATU occupancy register net: memcg: fix lockdep splat in inet_csk_accept() s390/qeth: implement smarter resizing of the RX buffer pool s390/qeth: refactor buffer pool code s390/qeth: use page pointers to manage RX buffer pool seg6: fix SRv6 L2 tunnels to use IANA-assigned protocol number net: dsa: Don't instantiate phylink for CPU/DSA ports unless needed net/packet: tpacket_rcv: do not increment ring index on drop sxgbe: Fix off by one in samsung driver strncpy size arg net: caif: Add lockdep expression to RCU traversal primitive MAINTAINERS: remove Sathya Perla as Emulex NIC maintainer ...
2020-03-13  libbpf: Split BTF presence checks into libbpf- and kernel-specific parts  (Andrii Nakryiko, 1 file changed, -5/+12)
The need for application BTF to be present differs between user-space libbpf and the kernel. Currently, BTF is mandatory in the kernel only when a BPF application is using STRUCT_OPS, while libbpf itself relies more heavily on the presence of BTF: - for BTF-defined maps; - for Kconfig externs; - for STRUCT_OPS as well. Thus, checks for the presence and validity of bpf_object's BTF need to be performed separately, which is what this patch does. Fixes: 5327644614a1 ("libbpf: Relax check whether BTF is mandatory") Reported-by: Michal Rostecki <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Cc: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-03-13  bpftool: Add _bpftool and profiler.skel.h to .gitignore  (Song Liu, 1 file changed, -0/+2)
These files are generated, so ignore them. Signed-off-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-03-13  bpftool: Skeleton should depend on libbpf  (Song Liu, 1 file changed, -2/+3)
Add the dependency to libbpf, to fix build errors like: In file included from skeleton/profiler.bpf.c:5: .../bpf_helpers.h:5:10: fatal error: 'bpf_helper_defs.h' file not found #include "bpf_helper_defs.h" ^~~~~~~~~~~~~~~~~~~ 1 error generated. make: *** [skeleton/profiler.bpf.o] Error 1 make: *** Waiting for unfinished jobs.... Fixes: 47c09d6a9f67 ("bpftool: Introduce "prog profile" command") Suggested-by: Quentin Monnet <[email protected]> Signed-off-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Acked-by: John Fastabend <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-03-13  bpftool: Only build bpftool-prog-profile if supported by clang  (Song Liu, 4 files changed, -5/+24)
bpftool-prog-profile requires clang to generate BTF for global variables. When building with an older clang that cannot do this, skip this command. This is achieved by adding a new feature, clang-bpf-global-var, to tools/build/feature. Signed-off-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-03-12  tc-testing: add ETS scheduler to tdc build configuration  (Davide Caratti, 1 file changed, -0/+1)
add CONFIG_NET_SCH_ETS to 'config', otherwise test suites using this file to perform a full tdc run will encounter the following warning: ok 645 e90e - Add ETS qdisc using bands # skipped - "-----> teardown stage" did not complete successfully Fixes: 82c664b69c8b ("selftests: qdiscs: Add test coverage for ETS Qdisc") Reported-by: Jamal Hadi Salim <[email protected]> Signed-off-by: Davide Caratti <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-03-12  xarray: Fix early termination of xas_for_each_marked  (Matthew Wilcox (Oracle), 4 files changed, -2/+91)
xas_for_each_marked() is using entry == NULL as a termination condition of the iteration. When xas_for_each_marked() is used protected only by RCU, this can however race with xas_store(xas, NULL) in the following way: TASK1 TASK2 page_cache_delete() find_get_pages_range_tag() xas_for_each_marked() xas_find_marked() off = xas_find_chunk() xas_store(&xas, NULL) xas_init_marks(&xas); ... rcu_assign_pointer(*slot, NULL); entry = xa_entry(off); And thus xas_for_each_marked() terminates prematurely possibly leading to missed entries in the iteration (translating to missing writeback of some pages or a similar problem). If we find a NULL entry that has been marked, skip it (unless we're trying to allocate an entry). Reported-by: Jan Kara <[email protected]> CC: [email protected] Fixes: ef8e5717db01 ("page cache: Convert delete_batch to XArray") Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
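The affected pattern is the lockless, RCU-protected walk over marked entries, as done by find_get_pages_range_tag(). Below is a schematic kernel-side fragment with placeholder names rather than the actual call sites; it only shows where the premature termination could bite.

    /* Schematic fragment: 'pages', 'start' and 'end' are placeholders. */
    XA_STATE(xas, &pages, start);
    struct page *page;

    rcu_read_lock();
    xas_for_each_marked(&xas, page, end, PAGECACHE_TAG_DIRTY) {
            if (xas_retry(&xas, page))
                    continue;
            /* Before this fix, a concurrent xas_store(&xas, NULL) from a
             * deletion path could end the loop here instead of the slot
             * simply being skipped, losing the rest of the marked range. */
            /* ... process page ... */
    }
    rcu_read_unlock();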
2020-03-12  selftests: net: Add SO_REUSEADDR test to check if 4-tuples are fully utilized.  (Kuniyuki Iwashima, 4 files changed, -0/+200)
This commit adds a test to check if we can fully utilize 4-tuples for connect() when all ephemeral ports are exhausted. The test program changes the local port range to use only one port, binds two sockets with or without SO_REUSEADDR and SO_REUSEPORT and with the same EUID or with different EUIDs, and then calls listen(). We should be able to bind only one socket having both SO_REUSEADDR and SO_REUSEPORT per EUID; this restriction prevents unintentional listen(). Signed-off-by: Kuniyuki Iwashima <[email protected]> Signed-off-by: David S. Miller <[email protected]>
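A minimal userspace sketch (not the selftest itself) of the socket setup the test exercises: two sockets bound to the same address and port, both with SO_REUSEADDR and SO_REUSEPORT set. Port 40000 is an arbitrary choice for illustration.

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static int bind_one(uint16_t port)
    {
            int one = 1;
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {
                    .sin_family = AF_INET,
                    .sin_port = htons(port),
                    .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
            };

            if (fd < 0)
                    return -1;
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
            setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    perror("bind");
                    close(fd);
                    return -1;
            }
            return fd;
    }

    int main(void)
    {
            int a = bind_one(40000);
            int b = bind_one(40000);  /* allowed only under the EUID rules above */

            printf("first=%d second=%d\n", a, b);
            return 0;
    }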
2020-03-12  bpftool: Use linux/types.h from source tree for profiler build  (Tobias Klauser, 2 files changed, -10/+12)
When compiling bpftool on a system where the /usr/include/asm symlink doesn't exist (e.g. on an Ubuntu system without gcc-multilib installed), the build fails with: CLANG skeleton/profiler.bpf.o In file included from skeleton/profiler.bpf.c:4: In file included from /usr/include/linux/bpf.h:11: /usr/include/linux/types.h:5:10: fatal error: 'asm/types.h' file not found #include <asm/types.h> ^~~~~~~~~~~~~ 1 error generated. make: *** [Makefile:123: skeleton/profiler.bpf.o] Error 1 This indicates that the build is using linux/types.h from system headers instead of source tree headers. To fix this, adjust the clang search path to include the necessary headers from tools/testing/selftests/bpf/include/uapi and tools/include/uapi. Also use __bitwise__ instead of __bitwise in skeleton/profiler.h to avoid clashing with the definition in tools/testing/selftests/bpf/include/uapi/linux/types.h. Signed-off-by: Tobias Klauser <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-03-12  perf record: Fix binding of AIO user space buffers to nodes  (Alexey Budankov, 1 file changed, -6/+15)
Correct maxnode parameter value passed to mbind() syscall to be the amount of node mask bits to analyze plus 1. Dynamically allocate node mask memory depending on the index of node of cpu being profiled. Fixes: c44a8b44ca9f ("perf record: Bind the AIO user space buffers to nodes") Signed-off-by: Alexey Budankov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] [ Remove leftover nr_bits + 1 comment in mbind() call ] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
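A hedged illustration of the maxnode convention described above; the helper name, the single-word mask and the exact "plus 1" arithmetic here follow the commit description and are assumptions, not the perf code (link with -lnuma).

    #include <numaif.h>     /* mbind(), MPOL_BIND */
    #include <stdio.h>
    #include <sys/mman.h>

    /* Bind an anonymous buffer to a single NUMA node. One mask word covers
     * nodes 0..63; real code sizes the mask dynamically, as the patch does. */
    long bind_buf_to_node(void *buf, size_t len, unsigned int node)
    {
            unsigned long nodemask[1] = { 1UL << node };

            /* maxnode = number of node mask bits to analyze plus 1. */
            return mbind(buf, len, MPOL_BIND, nodemask, node + 1 + 1, 0);
    }

    int main(void)
    {
            size_t len = 4096;
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (bind_buf_to_node(buf, len, 0))
                    perror("mbind");
            return 0;
    }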
2020-03-11  perf scripting perl: Add common_callchain to fix argument order  (Michael Petlan, 6 files changed, -20/+20)
Since common_callchain has been added to the argument array, we need to reflect it in perl-based scripts, because otherwise the following args would be shifted and thus incorrect. E.g. rw-by-pid and calculation of read and written bytes: Before: read counts by pid: pid comm # reads bytes_requested bytes_read ------ -------------------- ----------- ---------- ---------- 19301 dd 4 424510450039736 0 After: read counts by pid: pid comm # reads bytes_requested bytes_read ------ -------------------- ----------- ---------- ---------- 19301 dd 4 9536 4341 Committer testing: To see before and after, first do: # perf script record rw-by-pid ^C Now you'll have a perf.data file to report on, then do before and after using: # perf script report rw-by-pid And notice the bytes_requested/bytes_read, as above. Signed-off-by: Michael Petlan <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Benjamin Salon <[email protected]> Cc: Jiri Olsa <[email protected]> LPU-Reference: [email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-11  tools/runqslower: Add BPF_F_CURRENT_CPU for running selftest on older kernels  (Andrii Nakryiko, 1 file changed, -0/+1)
Libbpf compiles and runs a subset of selftests on each PR in its Github mirror repository. To allow still building up-to-date selftests against outdated kernel images, add the BPF_F_CURRENT_CPU definition back. N.B. BCC's runqslower version ([0]) doesn't need BPF_F_CURRENT_CPU due to use of a locally checked-in vmlinux.h, generated against a kernel with 1aae4bdd7879 ("bpf: Switch BPF UAPI #define constants used from BPF program side to enums") applied. [0] https://github.com/iovisor/bcc/pull/2809 Fixes: 367d82f17eff ("tools/runqslower: Drop copy/pasted BPF_F_CURRENT_CPU definiton") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
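The usual shape of such a compatibility fallback, shown here for illustration: define the constant only when the headers in use do not already provide it.

    /* Fallback for older uapi headers that predate the enum conversion. */
    #ifndef BPF_F_CURRENT_CPU
    #define BPF_F_CURRENT_CPU 0xffffffffULL
    #endif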
2020-03-11  perf intel-pt: Update intel-pt.txt file with new location of the documentation  (Adrian Hunter, 1 file changed, -0/+1)
Make it easy for people looking in intel-pt.txt to find the new file. Signed-off-by: Adrian Hunter <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jiri Olsa <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-11  perf intel-pt: Add Intel PT man page references  (Adrian Hunter, 5 files changed, -4/+13)
Add references to Intel PT man page in man pages of associated tools. Signed-off-by: Adrian Hunter <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jiri Olsa <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-11  perf intel-pt: Rename intel-pt.txt and put it in man page format  (Adrian Hunter, 1 file changed, -24/+33)
Make the Intel PT documentation into a man page. Signed-off-by: Adrian Hunter <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jiri Olsa <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-11  perf doc: Set man page date to last git commit  (Ian Rogers, 1 file changed, -1/+4)
Currently the man page dates reflect the date the man pages were built. This patch adjusts the date so that the date is when the man page last had a commit against it. The date is generated using 'git log'. Committer testing: $ git log -1 --pretty="format:%cd" --date=short tools/perf/Documentation/perf-top.txt 2020-01-14 Before: rm -rf /tmp/build/perf mkdir -p /tmp/build/perf make -C tools/perf O=/tmp/build/perf/ install $ date Wed 11 Mar 2020 10:21:19 AM -03 $ man perf-top | tail -1 perf 03/11/2020 PERF-TOP(1) $ After: rm -rf /tmp/build/perf mkdir -p /tmp/build/perf make -C tools/perf O=/tmp/build/perf/ install $ date Wed 11 Mar 2020 10:24:06 AM -03 $ man perf-top | tail -1 perf 2020-01-14 PERF-TOP(1) $ Signed-off-by: Ian Rogers <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Masanari Iida <[email protected]> Cc: Mukesh Ojha <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-11  perf cs-etm: Fix unsigned variable comparison to zero  (Leo Yan, 1 file changed, -1/+1)
The variable 'offset' in function cs_etm__sample() is u64 type, it's not appropriate to check it with 'while (offset > 0)'; this patch changes to 'while (offset)'. Signed-off-by: Leo Yan <[email protected]> Reviewed-by: Mathieu Poirier <[email protected]> Reviewed-by: Mike Leach <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Robert Walker <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: coresight ml <[email protected]> Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
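A small standalone illustration of why 'offset > 0' is misleading for an unsigned type: the comparison is just 'offset != 0', and writing it that way makes clear the variable can never go negative (an underflow would produce a huge value, not a negative one).

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t offset = 3;

            /* For an unsigned type, 'offset > 0' is identical to 'offset';
             * the latter states the intent without implying a signed range. */
            while (offset) {
                    printf("offset=%llu\n", (unsigned long long)offset);
                    offset--;
            }
            return 0;
    }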
2020-03-11  perf cs-etm: Optimize copying last branches  (Leo Yan, 1 file changed, -5/+17)
If an instruction range packet can generate multiple instruction samples, these samples share the same last branches; it's not necessary to copy the same last branches repeatedly for these samples within the same packet. This patch moves out the last branches copying from function cs_etm__synth_instruction_sample(), and execute it prior to generating instruction samples. Signed-off-by: Leo Yan <[email protected]> Reviewed-by: Mathieu Poirier <[email protected]> Reviewed-by: Mike Leach <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Robert Walker <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: coresight ml <[email protected]> Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-11  perf cs-etm: Correct synthesizing instruction samples  (Leo Yan, 1 file changed, -17/+70)
When 'etm->instructions_sample_period' is less than 'tidq->period_instructions', the function cs_etm__sample() cannot handle this case properly with its logic. Consider the following flow as an example: - If we set the itrace option '--itrace=i4', then function cs_etm__sample() has variables with initialized values: tidq->period_instructions = 0 etm->instructions_sample_period = 4 - When the first packet is coming: packet->instr_count = 10; the number of instructions executed in this packet is 10, thus update period_instructions as below: tidq->period_instructions = 0 + 10 = 10 instrs_over = 10 - 4 = 6 offset = 10 - 6 - 1 = 3 tidq->period_instructions = instrs_over = 6 - When the second packet is coming: packet->instr_count = 10; in the second pass, assume 10 instructions in the trace sample again: tidq->period_instructions = 6 + 10 = 16 instrs_over = 16 - 4 = 12 offset = 10 - 12 - 1 = -3 -> a negative value tidq->period_instructions = instrs_over = 12 After handling these two packets, there are the following issues: The first issue is that cs_etm__instr_addr() returns the address within the current trace sample of the instruction related to offset, so the offset is supposed to always be an unsigned value. But in fact, function cs_etm__sample() might calculate a negative offset value (in handling the second packet, the offset is -3) and pass it to cs_etm__instr_addr() as a u64, i.e. as a big positive integer. The second issue is that it only synthesizes 2 samples for sample period = 4. In theory, every packet has 10 instructions, so the two packets have 20 instructions in total, which should generate 5 samples (4 x 5 = 20). This is because cs_etm__sample() only calls cs_etm__synth_instruction_sample() once to generate an instruction sample per range packet. This patch fixes the logic in function cs_etm__sample(); the basic idea for handling a coming packet is: - To synthesize the first instruction sample, it combines the instructions left over from the previous packet and the head of the new packet, then generates continuous samples with the sample period; - At the tail of the new packet, if there are remaining instructions, these instructions are left for the subsequent sample. Suggested-by: Mike Leach <[email protected]> Signed-off-by: Leo Yan <[email protected]> Reviewed-by: Mathieu Poirier <[email protected]> Reviewed-by: Mike Leach <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Robert Walker <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: coresight ml <[email protected]> Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
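An illustrative model of the corrected accounting, not the actual cs_etm__sample() code: each packet contributes instr_count instructions, one sample is emitted every 'period' instructions, and the remainder is carried into the next packet.

    #include <stdio.h>

    static unsigned long long period_instructions;

    static void handle_packet(unsigned long long instr_count,
                              unsigned long long period)
    {
            period_instructions += instr_count;

            /* Emit one sample per full period; keep the remainder for the
             * next packet instead of dropping it. */
            while (period_instructions >= period) {
                    printf("synthesize sample\n");
                    period_instructions -= period;
            }
    }

    int main(void)
    {
            /* Two packets of 10 instructions with --itrace=i4:
             * 20 instructions => 5 samples, as expected above. */
            handle_packet(10, 4);
            handle_packet(10, 4);
            return 0;
    }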
2020-03-11  perf cs-etm: Continuously record last branch  (Leo Yan, 1 file changed, -3/+4)
Every time an instruction sample is synthesized, the last branch recording is reset. This is fine if the instruction period is big enough; for example, with the option '--itrace=i100000' the last branch array is reset for every sample with 100000 instructions per period, and before the next instruction sample is generated there are sufficient packets coming to fill the last branch array. On the other hand, if a very small period is set, the packets between two continuous instruction samples are significantly reduced, thus the last branch array is almost empty for a new instruction sample because of the frequent resetting. To allow the last branches to work properly for any instruction period, this patch avoids resetting the last branch for every instruction sample and only resets it when flushing the trace data. The last branches are reset in only two cases: one is trace starting, the other is discontinuous trace; all other cases keep recording last branches for continuous instruction samples. Signed-off-by: Leo Yan <[email protected]> Reviewed-by: Mathieu Poirier <[email protected]> Reviewed-by: Mike Leach <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Robert Walker <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: coresight ml <[email protected]> Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-11  perf cs-etm: Swap packets for instruction samples  (Leo Yan, 1 file changed, -20/+19)
If the option '--itrace=iNNN' is used with Arm CoreSight trace data, the perf tool fails to inject instruction samples; the root cause is that the packets are only swapped for branch samples and last branches but not for instruction samples, so the newly coming packets cannot be properly handled when only synthesizing instruction samples. To fix this issue, this patch refactors the code with a new function cs_etm__packet_swap() which is used to swap packets, and adds the condition for instruction samples. Signed-off-by: Leo Yan <[email protected]> Reviewed-by: Mathieu Poirier <[email protected]> Reviewed-by: Mike Leach <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Robert Walker <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: coresight ml <[email protected]> Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-11  perf map: Use strstarts() to look for Android libraries  (Arnaldo Carvalho de Melo, 1 file changed, -4/+4)
And add the '/' to avoid looking at things like "/system/libsomething", when all we want to know is whether it is like "/system/lib/something", i.e. whether it is in that system library dir. Using strstarts() avoids off-by-one errors like the one recently fixed in this file. Since this adds the '/', I separated this patch; another patch will make this consistent by removing other strncmp(str, prefix, manually calculated prefix length) usage. Reported-by: Dominik Czarnota <[email protected]> Acked-by: Dominik Czarnota <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Link: http://lore.kernel.org/lkml/CABEVAa0_q-uC0vrrqpkqRHy_9RLOSXOJxizMLm1n5faHRy2AeA@mail.gmail.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-11  perf map: Fix off by one in strncpy() size argument  (disconnect3d, 1 file changed, -1/+1)
This patch fixes an off-by-one error in strncpy size argument in tools/perf/util/map.c. The issue is that in: strncmp(filename, "/system/lib/", 11) the passed string literal: "/system/lib/" has 12 bytes (without the NULL byte) and the passed size argument is 11. As a result, the logic won't match the ending "/" byte and will pass filepaths that are stored in other directories e.g. "/system/libmalicious/bin" or just "/system/libmalicious". This functionality seems to be present only on Android. I assume the /system/ directory is only writable by the root user, so I don't think this bug has much (or any) security impact. Fixes: eca818369996 ("perf tools: Add automatic remapping of Android libraries") Signed-off-by: disconnect3d <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Changbin Du <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Keeping <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Michael Lentine <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Song Liu <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
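A standalone illustration of the bug, plus the alternative of deriving the length from the literal itself, which is what strstarts() in tools/include does.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Same idea as strstarts(): the prefix length is computed, not counted
     * by hand, so the trailing '/' cannot silently be dropped. */
    static bool starts_with(const char *str, const char *prefix)
    {
            return strncmp(str, prefix, strlen(prefix)) == 0;
    }

    int main(void)
    {
            const char *path = "/system/libmalicious/bin";

            /* Buggy: length 11 misses the final '/', so this matches. */
            printf("buggy: %d\n", strncmp(path, "/system/lib/", 11) == 0);
            /* Fixed: the full 12-byte prefix is compared, so it does not. */
            printf("fixed: %d\n", starts_with(path, "/system/lib/"));
            return 0;
    }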
2020-03-10  perf vendor events intel: Add NO_NMI_WATCHDOG metric constraint  (Kan Liang, 3 files changed, -3/+6)
Add NO_NMI_WATCHDOG metric constraint to Page_Walks_Utilization for Sky Lake and Cascade Lake. Committer testing: On a Lenovo T480S, Intel(R) Core(TM) i7-8650U Kaby Lake, that looking at x86's mapfile.csv file is a: $ grep -w skylake tools/perf/pmu-events/arch/x86/mapfile.csv GenuineIntel-6-[4589]E,v24,skylake,core $ So uses the constraint added in this patch in this file: tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json Before: # perf stat -a -M Page_Walks_Utilization sleep 2 Performance counter stats for 'system wide': <not counted> itlb_misses.walk_pending (0.00%) <not counted> dtlb_load_misses.walk_pending (0.00%) <not counted> dtlb_store_misses.walk_pending (0.00%) <not counted> ept.walk_pending (0.00%) <not counted> cycles (0.00%) 2.001750514 seconds time elapsed Some events weren't counted. Try disabling the NMI watchdog: echo 0 > /proc/sys/kernel/nmi_watchdog perf stat ... echo 1 > /proc/sys/kernel/nmi_watchdog The events in group usually have to be from the same PMU. Try reorganizing the group. # After: # perf stat -a -M Page_Walks_Utilization sleep 2 Splitting metric group Page_Walks_Utilization into standalone metrics. Try disabling the NMI watchdog to comply NO_NMI_WATCHDOG metric constraint: echo 0 > /proc/sys/kernel/nmi_watchdog perf stat ... echo 1 > /proc/sys/kernel/nmi_watchdog , Performance counter stats for 'system wide': 36,883,102 itlb_misses.walk_pending # 0.1 Page_Walks_Utilization (79.99%) 123,104,146 dtlb_load_misses.walk_pending (80.02%) 13,720,795 dtlb_store_misses.walk_pending (79.99%) 0 ept.walk_pending (79.99%) 1,519,948,400 cycles (80.01%) 2.002170780 seconds time elapsed # Before and after, if we disable the nmi_watchdog we get: # echo 0 > /proc/sys/kernel/nmi_watchdog # perf stat -a -M Page_Walks_Utilization sleep 2 Performance counter stats for 'system wide': 33,721,658 itlb_misses.walk_pending # 0.1 Page_Walks_Utilization 84,070,996 dtlb_load_misses.walk_pending 9,816,071 dtlb_store_misses.walk_pending 0 ept.walk_pending 704,920,899 cycles 2.002331670 seconds time elapsed # More information about the metric expressions: # perf stat -v -a -M Page_Walks_Utilization sleep 2 Using CPUID GenuineIntel-6-8E-A metric expr ( itlb_misses.walk_pending + dtlb_load_misses.walk_pending + dtlb_store_misses.walk_pending + ept.walk_pending ) / ( 2 * cycles ) for Page_Walks_Utilization found event itlb_misses.walk_pending found event dtlb_load_misses.walk_pending found event dtlb_store_misses.walk_pending found event ept.walk_pending found event cycles adding {itlb_misses.walk_pending,dtlb_load_misses.walk_pending,dtlb_store_misses.walk_pending,ept.walk_pending,cycles}:W -> cpu/umask=0x10,(null)=0x186a3,event=0x85/ -> cpu/umask=0x10,(null)=0x1e8483,event=0x8/ -> cpu/umask=0x10,(null)=0x1e8483,event=0x49/ -> cpu/umask=0x10,(null)=0x1e8483,event=0x4f/ itlb_misses.walk_pending: 8085772 16010162799 16010162799 dtlb_load_misses.walk_pending: 28134579 16010162799 16010162799 dtlb_store_misses.walk_pending: 7276535 16010162799 16010162799 ept.walk_pending: 2 16010162799 16010162799 cycles: 315140605 16010162799 16010162799 Performance counter stats for 'system wide': 8,085,772 itlb_misses.walk_pending # 0.1 Page_Walks_Utilization 28,134,579 dtlb_load_misses.walk_pending 7,276,535 dtlb_store_misses.walk_pending 2 ept.walk_pending 315,140,605 cycles 2.002333181 seconds time elapsed # Signed-off-by: Kan Liang <[email protected]> Acked-by: Jiri Olsa <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao 
<[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-10  perf metricgroup: Support metric constraint  (Kan Liang, 1 file changed, -1/+53)
Some metric groups have metric constraints. A metric group can be scheduled as a group only when certain constraints are applied. For example, Page_Walks_Utilization has a metric constraint, "NO_NMI_WATCHDOG". When the NMI watchdog is disabled, the metric group can be scheduled as a group; otherwise, the metric group is split into standalone metrics. Add a new function, metricgroup__has_constraint(), to check whether all constraints are applied. If not, split the metric group into standalone metrics. Currently, only one constraint, "NO_NMI_WATCHDOG", is checked. Print a warning for a metric group with the constraint when the NMI watchdog is enabled. Signed-off-by: Kan Liang <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-10  perf util: Factor out sysctl__nmi_watchdog_enabled()  (Kan Liang, 3 files changed, -4/+22)
The NMI watchdog status is required for metric group constraint examination. Factor out sysctl__nmi_watchdog_enabled() to retrieve the NMI watchdog status. Users may count more than one metric group each time. If so, the NMI watchdog status may be retrieved several times. To reduce the overhead, cache the NMI watchdog status. Replace the NMI watchdog status checking in print_footer() by sysctl__nmi_watchdog_enabled(). Suggested-by: Andi Kleen <[email protected]> Signed-off-by: Kan Liang <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
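A minimal sketch of the idea, not the perf implementation: read the nmi_watchdog sysctl once and cache the result for later callers.

    #include <stdio.h>

    static int nmi_watchdog_enabled(void)
    {
            static int cached = -1;   /* -1: not read yet */
            FILE *f;
            int value = 0;

            if (cached >= 0)
                    return cached;

            f = fopen("/proc/sys/kernel/nmi_watchdog", "r");
            if (f) {
                    if (fscanf(f, "%d", &value) != 1)
                            value = 0;
                    fclose(f);
            }
            cached = value ? 1 : 0;
            return cached;
    }

    int main(void)
    {
            printf("nmi_watchdog: %d\n", nmi_watchdog_enabled());
            return 0;
    }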
2020-03-10  perf metricgroup: Factor out metricgroup__add_metric_weak_group()  (Kan Liang, 1 file changed, -24/+33)
Factor out metricgroup__add_metric_weak_group(), which adds metrics into a weak group. The change improves code readability, because a following patch will introduce a function which adds standalone metrics. Signed-off-by: Kan Liang <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-10  perf jevents: Support metric constraint  (Kan Liang, 3 files changed, -7/+15)
A new field "MetricConstraint" is introduced in JSON event list. Extend jevents to parse the field and save the value in metric_constraint. Signed-off-by: Kan Liang <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-10  selftests/bpf: Add test for the packed enum member in struct/union  (Yoshiki Komachi, 1 file changed, -0/+42)
Add a simple test to the existing selftest program in order to make sure that a packed enum member in a struct/union does not exceed the struct/union size. Signed-off-by: Yoshiki Komachi <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
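What a packed enum member looks like, as an illustrative example rather than the selftest itself: with __attribute__((packed)) the enum occupies a single byte, and its size must still fit inside the enclosing struct.

    #include <stdio.h>

    enum pkt_type {
            PKT_A,
            PKT_B,
    } __attribute__((packed));

    struct header {
            unsigned char len;
            enum pkt_type type;   /* 1 byte thanks to the packed attribute */
    };

    int main(void)
    {
            printf("sizeof(enum pkt_type)=%zu sizeof(struct header)=%zu\n",
                   sizeof(enum pkt_type), sizeof(struct header));
            return 0;
    }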
2020-03-10  perf vendor events s390: Add new deflate counters for IBM z15  (Thomas Richter, 2 files changed, -5/+33)
Add support for new deflate counters: - Counter 247: cycles CPU spent obtaining access to Deflate unit - Counter 252: cycles CPU is using Deflate unit - Counter 264: Increments by one for every DEFLATE CONVERSION CALL instruction executed. - Counter 265: Increments by one for every DEFLATE CONVERSION CALL instruction executed that ended in Condition Codes 0, 1 or 2. Also adjust some crypto counter descriptions to the latest documentation. Signed-off-by: Thomas Richter <[email protected]> Reviewed-by: Sumanth Korikkar <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Vasily Gorbik <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-10  Merge tag 'linux-cpupower-5.6-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux  (Rafael J. Wysocki, 4 files changed, -3/+5)
Pull cpupower utility fix for v5.6 from Shuah Khan: "This cpupower update for Linux 5.6-rc6 consists of a fix from Mike Gilbert for build failures when -fno-common is enabled. -fno-common will be default in gcc v10." * tag 'linux-cpupower-5.6-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux: cpupower: avoid multiple definition with gcc -fno-common
2020-03-09  mptcp: selftests: add rcvbuf set option  (Florian Westphal, 2 files changed, -15/+54)
Allows running the tests with a fixed receive buffer by passing "-R <value>" to mptcp_connect.sh. While at it, add a default 10 second poll timeout so that "-t" becomes optional -- this makes mptcp_connect simpler to use during manual testing. Signed-off-by: Florian Westphal <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-03-09  perf block-info: Support color ops to print block percents in color  (Jin Yao, 1 file changed, -10/+15)
It would be nice to print the block percents with colors. This patch supports the 'Sampled Cycles%' and 'Avg Cycles%' printed in colors. For example, perf record -b ... perf report --total-cycles or perf report --total-cycles --stdio percent > 5%, colored in red percent > 0.5%, colored in green percent < 0.5%, default color Signed-off-by: Jin Yao <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf block-info: Allow selecting which columns to report and its order  (Jin Yao, 3 files changed, -20/+59)
Currently we use a predefined array to set the block info output formats, it's fixed and inflexible. This patch adds two parameters "block_hpps" and "nr_hpps" in block_info__create_report and other static functions, in order to let user decide which columns to report and with specified report ordering. It should be more flexible. Buffers will be allocated to contain the new fmts, of course, we need to release them before perf exits. Signed-off-by: Jin Yao <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf diff: Use __block_info__cmp() to replace block_pair_cmp()  (Jin Yao, 3 files changed, -21/+11)
'perf diff' uses block_pair_cmp() to compare two blocks. But block_info__cmp() has similar functionality and is a bit more complete. This patch removes block_pair_cmp() and uses __block_info__cmp() instead. __block_info__cmp() is wrapped by block_info__cmp() and doesn't receive a perf_hpp_fmt parameter. Signed-off-by: Jin Yao <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf block-info: Fix wrong block address comparison in block_info__cmp()  (Jin Yao, 1 file changed, -15/+6)
Commit 6041441870ab ("perf block: Cleanup and refactor block info functions") introduces block_info__cmp(), which compares two blocks. But the issues are: 1. It should return the strcmp cmp value only if it's not 0. 2. When symbol names are matched, we need to compare the addresses of blocks further. But it wrongly uses the symbol addresses for comparison. 3. If the syms are both NULL, we can't consider these two blocks are matched. This patch fixes above 3 issues. Fixes: 6041441870ab ("perf block: Cleanup and refactor block info functions") Signed-off-by: Jin Yao <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
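A schematic comparator following the three rules above. Field and type names here are assumed for illustration; the real function also takes a perf_hpp_fmt argument and operates on perf's block_info structures.

    #include <string.h>

    struct blk {
            const char *sym_name;           /* NULL when there is no symbol */
            unsigned long long start;       /* block start address */
    };

    int blk_cmp(const struct blk *a, const struct blk *b)
    {
            if (a->sym_name && b->sym_name) {
                    int cmp = strcmp(a->sym_name, b->sym_name);

                    /* Rule 1: return the strcmp value only when non-zero. */
                    if (cmp)
                            return cmp;
                    /* Rule 2: same symbol, so order by the block address,
                     * not the symbol address. */
                    if (a->start != b->start)
                            return a->start < b->start ? -1 : 1;
                    return 0;
            }
            /* Rule 3: two blocks without symbols are not a match. */
            if (!a->sym_name && !b->sym_name)
                    return -1;
            return a->sym_name ? 1 : -1;
    }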
2020-03-09  perf expr: Make expr__parse() return -1 on error  (Jiri Olsa, 2 files changed, -3/+3)
To match the error value of the expr__find_other function, so all exported expr functions return the same values: 0 on success, -1 on error. Signed-off-by: Jiri Olsa <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf expr: Straighten expr__parse()/expr__find_other() interface  (Jiri Olsa, 4 files changed, -12/+10)
Now that we have a flex parser we don't need to update the parsed string pointer, so the interface can just be passed the pointer to the expression instead of a pointer to pointer. Signed-off-by: Jiri Olsa <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf expr: Increase EXPR_MAX_OTHER to support metrics with more than 15 variables  (Jiri Olsa, 1 file changed, -1/+1)
We have metrics that define more than 15 variables, like Branch_Misprediction_Cost. Increasing the allowed variables count to 20. As Andy pointed out, we can't go too high in here, because some of the code has O(n^2) complexity (already_seen) and we might want to do some other changes (like using hash tables) before increasing the maximum even more. Signed-off-by: Jiri Olsa <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf expr: Move expr lexer to flex  (Jiri Olsa, 5 files changed, -141/+247)
Adding expr flex code instead of the manual parser code. So it's easily extensible in upcoming changes. The new flex code is in flex.l object and gets compiled like all the other flexers we use. It's defined as flex reentrant parser. It's used by both expr__parse and expr__find_other interfaces by separating the starting point. There's no intended change of functionality ;-) the test expr is passing. Signed-off-by: Jiri Olsa <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf expr: Add expr.c object  (Jiri Olsa, 3 files changed, -16/+20)
Add generic expr code into a new expr.c object. The expr.c object will be mainly used in a following change that gets rid of the manual flex code. Signed-off-by: Jiri Olsa <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf header: Add check for unexpected use of reserved members in event attr  (Kan Liang, 1 file changed, -0/+37)
The perf.data file may be generated by a newer version of the perf tool, which supports new input bits in attr, e.g. a new bit for branch_sample_type. The perf.data file may later be parsed by an older version of the perf tool, which may parse it incorrectly. There is no warning message for this case. The current perf header code never checks for unknown input bits in attr. When reading the event desc from the header, check the stored event attr. The reserved bits, sample type, read format and branch sample type will be checked. Signed-off-by: Kan Liang <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf evsel: Support PERF_SAMPLE_BRANCH_HW_INDEX  (Kan Liang, 3 files changed, -3/+14)
A new branch sample type PERF_SAMPLE_BRANCH_HW_INDEX has been introduced in latest kernel. Enable HW_INDEX by default in LBR call stack mode. If kernel doesn't support the sample type, switching it off. Add HW_INDEX in attr_fprintf as well. User can check whether the branch sample type is set via debug information or header. Committer testing: First collect some samples with LBR callchains, system wide, for a few seconds: # perf record --call-graph lbr -a sleep 5 [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.625 MB perf.data (224 samples) ] # Now lets use 'perf evlist -v' to look at the branch_sample_type: # perf evlist -v cycles: size: 120, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CALLCHAIN|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: USER|CALL_STACK|NO_FLAGS|NO_CYCLES|HW_INDEX # So the machine has the kernel feature, and it was correctly added to perf_event_attr.branch_sample_type, for the default 'cycles' event. If we do it in another machine, where the kernel lacks the HW_INDEX feature, we get: # perf record --call-graph lbr -a sleep 2s [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 1.690 MB perf.data (499 samples) ] # perf evlist -v cycles: size: 120, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CALLCHAIN|CPU|PERIOD|BRANCH_STACK, read_format: ID, disabled: 1, inherit: 1, mmap: 1, comm: 1, freq: 1, task: 1, precise_ip: 3, sample_id_all: 1, exclude_guest: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1, branch_sample_type: USER|CALL_STACK|NO_FLAGS|NO_CYCLES # No HW_INDEX in attr.branch_sample_type. Signed-off-by: Kan Liang <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-03-09  perf tools: Add hw_idx in struct branch_stack  (Kan Liang, 13 files changed, -71/+125)
The low level index of raw branch records for the most recent branch can be recorded in a sample with PERF_SAMPLE_BRANCH_HW_INDEX branch_sample_type. Extend struct branch_stack to support it. However, if the PERF_SAMPLE_BRANCH_HW_INDEX is not applied, only nr and entries[] will be output by kernel. The pointer of entries[] could be wrong, since the output format is different with new struct branch_stack. Add a variable no_hw_idx in struct perf_sample to indicate whether the hw_idx is output. Add get_branch_entry() to return corresponding pointer of entries[0]. To make dummy branch sample consistent as new branch sample, add hw_idx in struct dummy_branch_stack for cs-etm and intel-pt. Apply the new struct branch_stack for synthetic events as well. Extend test case sample-parsing to support new struct branch_stack. Committer notes: Renamed get_branch_entries() to perf_sample__branch_entries() to have proper namespacing and pave the way for this to be moved to libperf, eventually. Add 'static' to that inline as it is in a header. Add 'hw_idx' to 'struct dummy_branch_stack' in cs-etm.c to fix the build on arm64. Signed-off-by: Kan Liang <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexey Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Pavel Gerasimov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Vitaly Slobodskoy <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
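A sketch of the layout and accessor described above; the field names follow the commit message, while the simplified branch_entry and the function name here are illustrative, not the exact perf code.

    /* Simplified: the real branch_entry also carries a flags bitfield. */
    struct branch_entry {
            unsigned long long from;
            unsigned long long to;
    };

    struct branch_stack {
            unsigned long long      nr;
            unsigned long long      hw_idx;   /* new low-level LBR index */
            struct branch_entry     entries[];
    };

    /* When the sample was produced without PERF_SAMPLE_BRANCH_HW_INDEX,
     * entries[] actually starts where hw_idx sits, so an accessor has to
     * compensate instead of dereferencing entries[] directly. */
    static inline struct branch_entry *
    sample_branch_entries(struct branch_stack *bs, int no_hw_idx)
    {
            return no_hw_idx ? (struct branch_entry *)&bs->hw_idx : bs->entries;
    }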
2020-03-09  Merge tag 'ktest-v5.6' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest  (Linus Torvalds, 2 files changed, -19/+19)
Pull Ktest fixes and clean ups from Steven Rostedt: - Make the default option oldconfig instead of randconfig (one too many times I lost my config because I left the build type out) - Add timeout to ssh sync to sync before reboot (prevents test hangs) - A couple of spelling fix patches * tag 'ktest-v5.6' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest: ktest: Fix typos in ktest.pl ktest: Add timeout for ssh sync testing ktest: Make default build option oldconfig not randconfig ktest: Fix some typos in sample.conf
2020-03-10  bpftool: Fix typo in bash-completion  (Song Liu, 1 file changed, -1/+1)
_bpftool_get_map_names => _bpftool_get_prog_names for prog-attach|detach. Fixes: 99f9863a0c45 ("bpftool: Match maps by name") Signed-off-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-03-10  bpftool: Bash completion for "bpftool prog profile"  (Song Liu, 1 file changed, -1/+44)
Add bash completion for "bpftool prog profile" command. Signed-off-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]