path: root/tools
2018-10-21  radix tree tests: Convert item_kill_tree to XArray  (Matthew Wilcox, 1 file, -21/+9)
In preparation for the removal of the multiorder radix tree code, convert item_kill_tree() to use the XArray so it can still be called for XArrays containing multi-index entries. Signed-off-by: Matthew Wilcox <[email protected]>
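A teardown loop in that style might look like the following sketch (per-item freeing elided; this shape works for both single- and multi-index entries, which is what the conversion needs):

  #include <linux/xarray.h>

  /* Erase every entry while walking; storing NULL never allocates,
   * so this is safe to do under the xa_lock.
   */
  static void kill_all(struct xarray *xa)
  {
      XA_STATE(xas, xa, 0);
      void *entry;

      xas_lock(&xas);
      xas_for_each(&xas, entry, ULONG_MAX) {
          /* free the item itself here, then erase it in place */
          xas_store(&xas, NULL);
      }
      xas_unlock(&xas);
  }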
2018-10-21  radix tree tests: Move item_insert_order  (Matthew Wilcox, 3 files, -11/+22)
The remaining tests are not suitable for moving in-kernel, so move item_insert_order() into multiorder.c, make it static and make it use the XArray. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  radix tree test suite: Remove multiorder benchmarking  (Matthew Wilcox, 1 file, -29/+21)
The multiorder radix tree code is being removed, so remove the benchmarking of its performance. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  radix tree test suite: Remove __item_insert  (Matthew Wilcox, 2 files, -7/+1)
Inline it into its one caller. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  xarray: Move multiorder_check to in-kernel tests  (Matthew Wilcox, 1 file, -48/+0)
This version is a little less thorough in order to be a little quicker, but tests the important edge cases. Also test adding a multiorder entry at a non-canonical index, and erasing it. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  xarray: Move multiorder_shrink to kernel tests  (Matthew Wilcox, 1 file, -173/+0)
Test this functionality inside the kernel as well as in userspace. Also remove insert_bug() as there's no comparable thing to test in the XArray code. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  xarray: Move multiorder account test in-kernel  (Matthew Wilcox, 1 file, -24/+0)
Move this test to the in-kernel test suite, and enhance it to test several different orders. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  radix tree test suite: Convert iteration test to XArray  (Matthew Wilcox, 3 files, -58/+63)
With no code left in the kernel using the multiorder radix tree, convert the iteration test from the radix tree to the XArray. It's unlikely to suffer the same bug as the radix tree, but this test will prevent that bug from ever creeping into the XArray implementation. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  radix tree test suite: Convert tag_tagged_items to XArray  (Matthew Wilcox, 8 files, -57/+46)
The tag_tagged_items() function is supposed to test the page-writeback tagging code. Since that has been converted to the XArray, there's not much point in testing the radix tree's tagging code. This requires using the pthread mutex embedded in the xarray instead of an external lock, so remove the pthread mutexes which protect xarrays/radix trees. Also remove radix_tree_iter_tag_set() as this was the last user. Signed-off-by: Matthew Wilcox <[email protected]>
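A marked walk under that embedded lock might look like this sketch (illustrative only, not the test code itself):

  #include <linux/xarray.h>

  /* Count the entries carrying 'mark'.  The spinlock embedded in
   * struct xarray replaces the external pthread mutex the test suite
   * used to take around radix tree walks.
   */
  static unsigned long count_marked(struct xarray *xa, xa_mark_t mark)
  {
      XA_STATE(xas, xa, 0);
      unsigned long count = 0;
      void *entry;

      xas_lock(&xas);
      xas_for_each_marked(&xas, entry, ULONG_MAX, mark)
          count++;
      xas_unlock(&xas);
      return count;
  }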
2018-10-21  radix tree: Remove radix_tree_clear_tags  (Matthew Wilcox, 1 file, -29/+0)
The page cache was the only user of this interface and it has now been converted to the XArray. Transform the test into a test of xas_init_marks(). Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  radix tree: Remove split/join code  (Matthew Wilcox, 2 files, -338/+0)
radix_tree_split and radix_tree_join were never used upstream. Remove them; if they're needed in future they will be replaced by XArray equivalents. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  radix tree: Remove radix_tree_update_node_t  (Matthew Wilcox, 1 file, -1/+1)
The only user of this functionality was the workingset code, and it's now been converted to the XArray. Remove __radix_tree_delete_node() entirely as it was also only used by the workingset code. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  shmem: Convert find_swap_entry to XArray  (Matthew Wilcox, 3 files, -84/+0)
This is a 1:1 conversion. The major part of this patch is converting the test framework from userspace to kernel space and mirroring the algorithm now used in find_swap_entry(). Signed-off-by: Matthew Wilcox <[email protected]>
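The search shape in XArray terms is roughly the following (a simplified sketch; the real find_swap_entry() also pauses the walk periodically to reschedule):

  #include <linux/xarray.h>

  /* Return the index holding 'item', or -1UL if it is not present. */
  static unsigned long find_item(struct xarray *xa, void *item)
  {
      XA_STATE(xas, xa, 0);
      void *entry;

      rcu_read_lock();
      xas_for_each(&xas, entry, ULONG_MAX) {
          if (entry == item)
              break;        /* entry stays non-NULL on a hit */
      }
      rcu_read_unlock();
      return entry ? xas.xa_index : -1UL;
  }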
2018-10-21  radix tree test suite: Convert regression1 to XArray  (Matthew Wilcox, 1 file, -39/+19)
Now that the page cache lookup is using the XArray, convert this regression test from the radix tree API to the XArray so it's testing roughly the same thing it was testing before. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  page cache: Convert find_get_pages_contig to XArray  (Matthew Wilcox, 1 file, -23/+0)
There's no direct replacement for radix_tree_for_each_contig() in the XArray API as it's an unusual thing to do. Instead, open-code a loop using xas_next(). This removes the only user of radix_tree_for_each_contig() so delete the iterator from the API and the test suite code for it. Signed-off-by: Matthew Wilcox <[email protected]>
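The open-coded pattern looks roughly like this (a sketch of the idea, not the page-cache code itself; xas_retry() handling is omitted):

  #include <linux/xarray.h>

  /* Collect entries at consecutive indices starting at 'start',
   * stopping at the first hole: xas_next() steps to the next index
   * and returns NULL when that slot is empty, which is exactly the
   * contiguity check.
   */
  static unsigned int lookup_contig(struct xarray *xa, unsigned long start,
                                    void **results, unsigned int max)
  {
      XA_STATE(xas, xa, start);
      unsigned int ret = 0;
      void *entry;

      rcu_read_lock();
      for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
          if (ret == max)
              break;
          results[ret++] = entry;
      }
      rcu_read_unlock();
      return ret;
  }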
2018-10-21  ida: Convert to XArray  (Matthew Wilcox, 1 file, -5/+3)
Use the XA_TRACK_FREE ability to track which entries have a free bit, similarly to how it uses the radix tree's IDR_FREE tag. This eliminates the per-cpu ida_bitmap preload, and fixes the memory consumption regression I introduced when making the IDR able to store any pointer. Signed-off-by: Matthew Wilcox <[email protected]>
2018-10-21  xarray: Add XArray unconditional store operations  (Matthew Wilcox, 6 files, -1/+42)
xa_store() differs from radix_tree_insert() in that it will overwrite an existing element in the array rather than returning an error. This is the behaviour which most users want, and those that want more complex behaviour generally want to use the xas family of routines anyway. For memory allocation, xa_store() will first attempt to request memory from the slab allocator; if memory is not immediately available, it will drop the xa_lock and allocate memory, keeping a pointer in the xa_state. It does not use the per-CPU cache, although those will continue to exist until all radix tree users are converted to the xarray. This patch also includes xa_erase() and __xa_erase() for a streamlined way to store NULL. Since there is no need to allocate memory in order to store a NULL in the XArray, we do not need to trouble the user with deciding what memory allocation flags to use. Signed-off-by: Matthew Wilcox <[email protected]>
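Typical usage might look like the following sketch (store_item() and drop_item() are illustrative wrappers, not part of the API):

  #include <linux/xarray.h>

  /* Store 'item' at 'index', overwriting whatever was there.
   * xa_store() returns the previous entry, or an xa_err()-encoded
   * error such as -ENOMEM if allocation failed.
   */
  static int store_item(struct xarray *xa, unsigned long index, void *item)
  {
      void *curr = xa_store(xa, index, item, GFP_KERNEL);

      if (xa_is_err(curr))
          return xa_err(curr);
      /* curr is the displaced entry; NULL if the slot was empty */
      return 0;
  }

  /* Erasing is just storing NULL, so it never allocates and
   * therefore takes no GFP flags.
   */
  static void drop_item(struct xarray *xa, unsigned long index)
  {
      xa_erase(xa, index);
  }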
2018-10-21  xarray: Add XArray marks  (Matthew Wilcox, 4 files, -11/+118)
XArray marks are like the radix tree tags, only slightly more strongly typed. They are renamed in order to distinguish them from tagged pointers. This commit adds the basic get/set/clear operations. Signed-off-by: Matthew Wilcox <[email protected]>
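In use, the get/set/clear operations look roughly like this (a minimal sketch; XA_MARK_0 is one of the three marks each array carries):

  #include <linux/xarray.h>

  static DEFINE_XARRAY(xa);

  static bool mark_and_test(unsigned long index)
  {
      /* Marks only stick to indices that currently hold an entry. */
      xa_set_mark(&xa, index, XA_MARK_0);

      if (!xa_get_mark(&xa, index, XA_MARK_0))
          return false;    /* nothing stored at 'index' */

      xa_clear_mark(&xa, index, XA_MARK_0);
      return true;
  }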
2018-10-21  xarray: Add XArray load operation  (Matthew Wilcox, 8 files, -2/+39)
The xa_load function brings with it a lot of infrastructure; xa_empty(), xa_is_err(), and large chunks of the XArray advanced API that are used to implement xa_load. As the test-suite demonstrates, it is possible to use the XArray functions on a radix tree. The radix tree functions depend on the GFP flags being stored in the root of the tree, so it's not possible to use the radix tree functions on an XArray. Signed-off-by: Matthew Wilcox <[email protected]>
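A minimal lookup sketch using xa_load() and xa_empty() (has_item() is an illustrative wrapper; xa_is_err() matters mainly for the return values of storing operations):

  #include <linux/xarray.h>

  /* xa_load() returns the entry at 'index', or NULL if nothing is
   * stored there; it takes the RCU read lock internally, so no
   * locking is needed around a plain load.
   */
  static bool has_item(struct xarray *xa, unsigned long index)
  {
      if (xa_empty(xa))    /* whole array empty? */
          return false;

      return xa_load(xa, index) != NULL;
  }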
2018-10-21  xarray: Define struct xa_node  (Matthew Wilcox, 1 file, -15/+15)
This is a direct replacement for struct radix_tree_node. A couple of struct members have changed name, so convert those. Use a #define so that radix tree users continue to work without change. Signed-off-by: Matthew Wilcox <[email protected]> Reviewed-by: Josef Bacik <[email protected]>
2018-10-21  xarray: Add definition of struct xarray  (Matthew Wilcox, 6 files, -7/+19)
This is a direct replacement for struct radix_tree_root. Some of the struct members have changed name; convert those, and use a #define so that radix_tree users continue to work without change. Signed-off-by: Matthew Wilcox <[email protected]> Reviewed-by: Josef Bacik <[email protected]>
2018-10-20  selftests/bpf: fix return value comparison for tests in test_libbpf.sh  (Quentin Monnet, 1 file, -1/+1)
The return value for each test in test_libbpf.sh is compared with: if (( $? == 0 )) ; then ... This works well with bash, but not with dash, which /bin/sh is aliased to on some systems (such as Ubuntu). Let's replace this comparison by something that works on both shells. Signed-off-by: Quentin Monnet <[email protected]> Reviewed-by: Jakub Kicinski <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-20  bpf, libbpf: simplify and cleanup perf ring buffer walk  (Daniel Borkmann, 4 files, -52/+47)
Simplify bpf_perf_event_read_simple() a bit and fix up some minor things along the way: the return code in the header is not of type int but enum bpf_perf_event_ret instead. Once the callback indicates to break the loop walking event data, the event also needs to be consumed in data_tail since it has been processed already. Moreover, the bpf_perf_event_print_t callback should avoid void * as we actually get a pointer to struct perf_event_header, and thus applications can make use of container_of() to have type checks. The walk also doesn't have to use a modulo op since the ring size is required to be a power of two. Signed-off-by: Daniel Borkmann <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
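For example, with a typed header pointer the callback can recover its enclosing record (a self-contained illustration; struct my_event is hypothetical, not a libbpf type):

  #include <stddef.h>

  #ifndef container_of
  #define container_of(ptr, type, member) \
      ((type *)((char *)(ptr) - offsetof(type, member)))
  #endif

  struct perf_event_header {        /* as in <linux/perf_event.h> */
      unsigned int   type;
      unsigned short misc;
      unsigned short size;
  };

  struct my_event {                 /* hypothetical record layout */
      struct perf_event_header header;
      unsigned long long       addr;
  };

  static void handle(struct perf_event_header *hdr)
  {
      /* typed access instead of casting an opaque void * */
      struct my_event *ev = container_of(hdr, struct my_event, header);
      (void)ev;
  }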
2018-10-20  bpf, verifier: fix register type dump in xadd and st  (Daniel Borkmann, 1 file, -5/+5)
Using reg_type_str[insn->dst_reg] is incorrect since insn->dst_reg contains the register number but not the actual register type. Add a small reg_state() helper and use it to get to the type. Also fix up the test_verifier test cases that have an incorrect errstr. Fixes: 9d2be44a7f33 ("bpf: Reuse canonical string formatter for ctx errs") Signed-off-by: Daniel Borkmann <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
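The bug class is easy to show in miniature (a self-contained illustration of the indexing mistake, not the verifier code itself):

  #include <stdio.h>

  enum reg_type { NOT_INIT, SCALAR_VALUE, PTR_TO_CTX };

  static const char * const reg_type_str[] = {
      [NOT_INIT]     = "?",
      [SCALAR_VALUE] = "inv",
      [PTR_TO_CTX]   = "ctx",
  };

  struct reg { enum reg_type type; };

  static void dump_reg(const struct reg *regs, int dst_reg)
  {
      /* Wrong: dst_reg is a register *number*, not a type:
       *     printf("%s\n", reg_type_str[dst_reg]);
       * Right: look the register's state up first, then index
       * by its type:
       */
      printf("%s\n", reg_type_str[regs[dst_reg].type]);
  }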
2018-10-20  bpf: test_sockmap add options to use msg_push_data  (John Fastabend, 2 files, -26/+129)
Add options to run msg_push_data. This patch creates two more flags in test_sockmap that can be used to specify the offset and length of bytes to be added. The new options are --txmsg_start_push to specify where bytes should be inserted and --txmsg_end_push to specify how many bytes. This is analogous to the options used to pull data, --txmsg_start and --txmsg_end. In addition to adding the options, tests are added to the test suite, similar to what was done for msg_pull_data. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2018-10-20  bpf: libbpf support for msg_push_data  (John Fastabend, 2 files, -1/+21)
Add support for new bpf_msg_push_data in libbpf. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2018-10-20  Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Greg Kroah-Hartman, 12 files, -45/+39)
Ingo writes: "perf fixes: Misc perf tooling fixes."

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf tools: Stop fallbacking to kallsyms for vdso symbols lookup
  perf tools: Pass build flags to traceevent build
  perf report: Don't crash on invalid inline debug information
  perf cpu_map: Align cpu map synthesized events properly.
  perf tools: Fix tracing_path_mount proper path
  perf tools: Fix use of alternatives to find JDIR
  perf evsel: Store ids for events with their own cpus perf_event__synthesize_event_update_cpus
  perf vendor events intel: Fix wrong filter_band* values for uncore events
  Revert "perf tools: Fix PMU term format max value calculation"
  tools headers uapi: Sync kvm.h copy
  tools arch uapi: Sync the x86 kvm.h copy
2018-10-20  Merge tag 'trace-v4.19-rc8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace  (Greg Kroah-Hartman, 1 file, -0/+80)
Steven writes: "tracing: A few small fixes to synthetic events. Masami found some issues with the creation of synthetic events. The first two patches fix handling of unsigned type, and handling of a space before an ending semi-colon. The third patch adds a selftest to test the processing of synthetic events."

* tag 'trace-v4.19-rc8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  selftests: ftrace: Add synthetic event syntax testcase
  tracing: Fix synthetic event to allow semicolon at end
  tracing: Fix synthetic event to accept unsigned modifier
2018-10-20  selftests/powerpc: Add a test of wild bctr  (Michael Ellerman, 4 files, -2/+161)
This tests that a bctr (Branch to counter and link), ie. a function call, to a wildly out-of-bounds address is handled correctly. Some old kernel versions didn't handle it correctly, see eg: "powerpc/slb: Force a full SLB flush when we insert for a bad EA" https://lists.ozlabs.org/pipermail/linuxppc-dev/2017-April/157397.html Signed-off-by: Michael Ellerman <[email protected]>
2018-10-20  selftests/powerpc: Fix out-of-tree build errors  (Michael Ellerman, 4 files, -6/+1)
Some of our Makefiles don't do the right thing when building the selftests with O=, fix them up. Signed-off-by: Michael Ellerman <[email protected]>
2018-10-20  selftests/powerpc: Add test to verify rfi flush across a system call  (Naveen N. Rao, 5 files, -1/+305)
This adds a test to verify proper functioning of the rfi flush capability implemented to mitigate meltdown. The test works by measuring the number of L1d cache misses encountered while loading data from memory. Across a system call, since the L1d cache is flushed when rfi_flush is enabled, the number of cache misses is expected to be relative to the number of cachelines corresponding to the data being loaded. The current system setting is reflected via powerpc/rfi_flush under debugfs (assumed to be /sys/kernel/debug/). This test verifies the expected result with rfi_flush enabled as well as when it is disabled. Signed-off-by: Anton Blanchard <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Signed-off-by: Naveen N. Rao <[email protected]> [mpe: Add SPDX tags, clang format, skip if the debugfs is missing, use __u64 and SANE_USERSPACE_TYPES to avoid printf() build errors.] Signed-off-by: Michael Ellerman <[email protected]>
2018-10-20  selftests/powerpc: Move UCONTEXT_NIA() into utils.h  (Naveen N. Rao, 2 files, -8/+8)
... so that it can be used by others. Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2018-10-19  selftests: ftrace: Add synthetic event syntax testcase  (Masami Hiramatsu, 1 file, -0/+80)
Add a testcase to check the syntax and field types for synthetic_events interface. Link: http://lkml.kernel.org/r/153986838264.18251.16627517536956299922.stgit@devbox Acked-by: Shuah Khan <[email protected]> Signed-off-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-10-19  bpf: add tests for direct packet access from CGROUP_SKB  (Song Liu, 1 file, -0/+171)
Tests are added to make sure CGROUP_SKB cannot access: tc_classid, data_meta, flow_keys; can read and write: mark, priority, and cb[0-4]; and can read other fields. To make selftests with skb->sk work, a dummy sk is added in bpf_prog_test_run_skb(). Signed-off-by: Song Liu <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-19  bpf, libbpf: use correct barriers in perf ring buffer walk  (Daniel Borkmann, 1 file, -6/+4)
Given libbpf is a generic library and not restricted to x86-64 only, the compiler barrier in bpf_perf_event_read_simple() after fetching the head needs to be replaced with smp_rmb() at minimum. Also, writing out the tail we should use WRITE_ONCE() to avoid store tearing. Now that we have the logic in place in ring_buffer_read_head() and ring_buffer_write_tail() helper also used by perf tool which would select the correct and best variant for a given architecture (e.g. x86-64 can avoid CPU barriers entirely), make use of these in order to fix bpf_perf_event_read_simple(). Fixes: d0cabbb021be ("tools: bpf: move the event reading loop to libbpf") Fixes: 39111695b1b8 ("samples: bpf: add bpf_perf_event_output example") Signed-off-by: Daniel Borkmann <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Will Deacon <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
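Put together, the fixed walk has roughly this shape (a simplified sketch; copying out records that wrap around the buffer end is omitted):

  #include <linux/perf_event.h>
  #include <linux/ring_buffer.h>    /* tools/include helpers */

  static void walk_ring(struct perf_event_mmap_page *page,
                        void *base, __u64 size)
  {
      /* acquire-load pairs with the kernel's store of data_head */
      __u64 head = ring_buffer_read_head(page);
      __u64 tail = page->data_tail;

      while (tail != head) {
          struct perf_event_header *hdr =
              base + (tail & (size - 1)); /* size is a power of two */

          /* ... hand hdr to the callback ... */
          tail += hdr->size;    /* consume the record */
      }
      /* release-store makes the space visible to the producer */
      ring_buffer_write_tail(page, tail);
  }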
2018-10-19  tools, perf: add and use optimized ring_buffer_{read_head, write_tail} helpers  (Daniel Borkmann, 9 files, -12/+250)
Currently, on x86-64, perf uses LFENCE and MFENCE (rmb() and mb(), respectively) when processing events from the perf ring buffer, which is unnecessarily expensive as we can do it in a more lightweight way, in particular given this is a critical fast path in perf. According to Peter, rmb()/mb() were added back then via a94d342b9cb0 ("tools/perf: Add required memory barriers") at a time when the kernel still supported chips that needed it, but nowadays support for these has been ditched completely, therefore we can fix them up as well.

While for x86-64, replacing rmb() and mb() with smp_*() variants would result in just a compiler barrier for the former and LOCK + ADD for the latter (__sync_synchronize() uses the slower MFENCE by the way), Peter suggested we can use smp_{load_acquire,store_release}() instead for architectures where its implementation doesn't resolve into a slower smp_mb(). Thus, e.g. on x86-64 we would be able to avoid the CPU barrier entirely due to TSO. For architectures where the latter needs to use smp_mb(), e.g. on arm, we stick to the cheaper smp_rmb() variant for fetching the head.

This work adds helpers ring_buffer_read_head() and ring_buffer_write_tail() for the tools infrastructure that either switch to smp_load_acquire() for architectures where it is cheaper, or use READ_ONCE() + smp_rmb() barrier for those where it's not, in order to fetch the data_head from the perf control page, and use smp_store_release() to write the data_tail. The latter is an smp_mb() + WRITE_ONCE() combination, or a cheaper variant if the architecture allows for it. Those that rely on smp_rmb() and smp_mb() can further improve performance in a follow-up step by implementing the two under tools/arch/*/include/asm/barrier.h such that they don't have to fall back to rmb() and mb() in tools/include/asm/barrier.h.

Switch perf to use ring_buffer_read_head() and ring_buffer_write_tail() so it can make use of the optimizations. Later, we convert libbpf as well to use the same helpers.

Side note [0]: the topic has been raised of whether one could simply use the C11 gcc builtins [1] for the smp_load_acquire() and smp_store_release() instead:

  __atomic_load_n(ptr, __ATOMIC_ACQUIRE);
  __atomic_store_n(ptr, val, __ATOMIC_RELEASE);

The kernel and (presumably) tooling shipped along with the kernel has a minimum requirement of being able to build with gcc-4.6, and the latter does not have C11 builtins. While generally the C11 memory models don't align with the kernel's, the C11 load-acquire and store-release alone /could/ suffice, however. The issue is that this is implementation dependent on how the load-acquire and store-release are done by the compiler, and the mapping of supported compilers must align to be compatible with the kernel's implementation, and thus needs to be verified/tracked on a case-by-case basis whether they match (unless an architecture uses them also from the kernel side).

The implementations for smp_load_acquire() and smp_store_release() in this patch have been adapted from the kernel side ones to have a concrete and compatible mapping in place.

  [0] http://patchwork.ozlabs.org/patch/985422/
  [1] https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html

Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Will Deacon <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
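On architectures with a cheap load-acquire, the resulting helpers reduce to roughly this shape (a sketch adapted from the description above, not a verbatim copy of tools/include/linux/ring_buffer.h):

  static inline __u64 ring_buffer_read_head(struct perf_event_mmap_page *base)
  {
      /* smp_load_acquire(): a plain load plus a compiler barrier
       * on x86-64, thanks to TSO
       */
      return smp_load_acquire(&base->data_head);
  }

  static inline void ring_buffer_write_tail(struct perf_event_mmap_page *base,
                                            __u64 tail)
  {
      /* smp_store_release(): orders all reads of the ring data
       * before the tail update becomes visible to the kernel
       */
      smp_store_release(&base->data_tail, tail);
  }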
2018-10-19  selftests/bpf: add missing executables to .gitignore  (Anders Roxell, 1 file, -0/+2)
Fixes: 371e4fcc9d96 ("selftests/bpf: cgroup local storage-based network counters") Fixes: 370920c47b26 ("selftests/bpf: Test libbpf_{prog,attach}_type_by_name") Signed-off-by: Anders Roxell <[email protected]> Acked-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-19  selftests/bpf: add test cases for queue and stack maps  (Mauricio Vasquez B, 9 files, -1/+313)
test_maps: Tests that queue/stack maps are behaving correctly even in corner cases.
test_progs: Tests new ebpf helpers.
Signed-off-by: Mauricio Vasquez B <[email protected]> Acked-by: Song Liu <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-19  Sync uapi/bpf.h to tools/include  (Mauricio Vasquez B, 1 file, -1/+29)
Sync both files. Signed-off-by: Mauricio Vasquez B <[email protected]> Acked-by: Song Liu <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-19  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller, 1 file, -4/+9)
net/sched/cls_api.c has overlapping changes to a call to nlmsg_parse(), one (from 'net') added rtm_tca_policy instead of NULL to the 5th argument, and another (from 'net-next') added cb->extack instead of NULL to the 6th argument. net/ipv4/ipmr_base.c is a case of a bug fix in 'net' being done to code which moved (to mr_table_dump) in 'net-next'. Thanks to David Ahern for the heads up. Signed-off-by: David S. Miller <[email protected]>
2018-10-19  Merge tag 'usb-4.19-final' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb  (Greg Kroah-Hartman, 1 file, -0/+4)
I wrote: "USB fixes for 4.19-final. Here are a small number of last-minute USB driver fixes. Included here are:
  - spectre fix for usb storage gadgets
  - xhci fixes
  - cdc-acm fixes
  - usbip fixes for reported problems
All of these have been in linux-next with no reported issues."

* tag 'usb-4.19-final' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
  usb: gadget: storage: Fix Spectre v1 vulnerability
  USB: fix the usbfs flag sanitization for control transfers
  usb: xhci: pci: Enable Intel USB role mux on Apollo Lake platforms
  usb: roles: intel_xhci: Fix Unbalanced pm_runtime_enable
  cdc-acm: correct counting of UART states in serial state notification
  cdc-acm: do not reset notification buffer index upon urb unlinking
  cdc-acm: fix race between reset and control messaging
  usb: usbip: Fix BUG: KASAN: slab-out-of-bounds in vhci_hub_control()
  selftests: usbip: add wait after attach and before checking port status
2018-10-19  selftests/kvm: add missing executables to .gitignore  (Anders Roxell, 1 file, -0/+1)
Fixes: 18178ff86217 ("KVM: selftests: add Enlightened VMCS test") Signed-off-by: Anders Roxell <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2018-10-19  Merge tag 'kvmarm-for-v4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  (Paolo Bonzini, 5 files, -12/+36)
KVM/arm updates for 4.20:
  - Improved guest IPA space support (32 to 52 bits)
  - RAS event delivery for 32bit
  - PMU fixes
  - Guest entry hardening
  - Various cleanups
2018-10-18  tools: bpftool: use 4 context mode for the NFP disasm  (Jakub Kicinski, 4 files, -9/+20)
The nfp driver is currently always JITing the BPF for 4 context/thread mode of the NFP flow processors. Tell this to the disassembler, otherwise some registers may be incorrectly decoded. Signed-off-by: Jakub Kicinski <[email protected]> Reviewed-by: Jiong Wang <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2018-10-18  selftests/bpf: fix file resource leak in load_kallsyms  (Peng Hao, 1 file, -0/+1)
FILE pointer variable f is opened but never closed. Signed-off-by: Peng Hao <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
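The fix is the usual open/close pairing (a sketch of the pattern, assuming the helper reads /proc/kallsyms; not the exact selftest code):

  #include <stdio.h>

  static int load_symbols(void)
  {
      char line[256];
      FILE *f = fopen("/proc/kallsyms", "r");

      if (!f)
          return -1;
      while (fgets(line, sizeof(line), f)) {
          /* parse one symbol line */
      }
      fclose(f);    /* previously missing: f leaked on every call */
      return 0;
  }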
2018-10-18  usbip: tools: fix atoi() on non-null terminated string  (Colin Ian King, 1 file, -3/+3)
Currently the call to atoi is being passed a single char string that is not null terminated, so there is a potential read overrun along the stack when parsing for an integer value. Fix this by instead using a 2 char string that is initialized to all zeros to ensure that a 1 char read into the string is always terminated with a \0. Detected by cppcheck: "Invalid atoi() argument nr 1. A nul-terminated string is required." Fixes: 3391ba0e2792 ("usbip: tools: Extract generic code to be shared with vudc backend") Signed-off-by: Colin Ian King <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
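The shape of the fix (a minimal sketch of the pattern described above):

  #include <stdlib.h>

  static int digit_value(char c)
  {
      /* Two bytes, zero-initialized: buf[1] stays '\0', so atoi()
       * always sees a properly terminated string after the 1-char
       * read into buf[0].
       */
      char buf[2] = { 0 };

      buf[0] = c;
      return atoi(buf);
  }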
2018-10-18  Merge branches 'pm-devfreq' and 'pm-tools'  (Rafael J. Wysocki, 9 files, -898/+1096)
* pm-devfreq:
  PM / devfreq: remove redundant null pointer check before kfree
  PM / devfreq: stopping the governor before device_unregister()
  PM / devfreq: Convert to using %pOFn instead of device_node.name
  PM / devfreq: Make update_devfreq() public
  PM / devfreq: Don't adjust to user limits in governors
  PM / devfreq: Fix handling of min/max_freq == 0
  PM / devfreq: Drop custom MIN/MAX macros
  PM / devfreq: Fix devfreq_add_device() when drivers are built as modules.

* pm-tools:
  PM / tools: sleepgraph and bootgraph: upgrade to v5.2
  PM / tools: sleepgraph: first batch of v5.2 changes
  cpupower: Fix coredump on VMWare
  cpupower: Fix AMD Family 0x17 msr_pstate size
  cpupower: remove stringop-truncation warning
2018-10-18  Merge tag 'perf-urgent-for-mingo-4.19-20181017' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent  (Ingo Molnar, 12 files, -45/+39)
Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

  - Stop falling back to kallsyms for vDSO symbols lookup, this wasn't being really used and is not valid in arches such as Sparc, where user and kernel space don't share the address space, relying only on cpumode to figure out what DSOs to lookup (Arnaldo Carvalho de Melo)
  - Align CPU map synthesized events properly, fixing SIGBUS in CPUs like Sparc (David Miller)
  - Fix use of alternatives to find JDIR (Jarod Wilson)
  - Store IDs for events with their own CPUs when synthesizing user level event details (scale, unit, etc) events, fixing a crash when recording a PMU event with a cpumask defined (Jiri Olsa)
  - Fix wrong filter_band* values for uncore Intel vendor events (Jiri Olsa)
  - Fix detection of tracefs path in systems without tracefs, where that path should be the debugfs mountpoint plus "/tracing/" (Jiri Olsa)
  - Pass build flags to traceevent build, allowing using alternative flags in distro packages, RPM, for instance (Jiri Olsa)
  - Fix 'perf report' crash on invalid inline debug information (Milian Wolff)
  - Synch KVM UAPI copies (Arnaldo Carvalho de Melo)

Signed-off-by: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2018-10-17  bpf: fix doc of bpf_skb_adjust_room() in uapi  (Nicolas Dichtel, 1 file, -1/+1)
len_diff is signed. Fixes: fa15601ab31e ("bpf: add documentation for eBPF helpers (33-41)") CC: Quentin Monnet <[email protected]> Signed-off-by: Nicolas Dichtel <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2018-10-17  perf tools: Stop fallbacking to kallsyms for vdso symbols lookup  (Arnaldo Carvalho de Melo, 1 file, -19/+2)
David reports that:

<quote>
Perf has this hack where it uses the kernel symbol map as a backup when a symbol can't be found in the user's symbol table(s).

This causes problems because the tests driving this code path use machine__kernel_ip(), and that is completely meaningless on Sparc. On sparc64 the kernel and user live in physically separate virtual address spaces, rather than a shared one. And the kernel lives at a virtual address that overlaps common userspace addresses. So this test passes almost all the time when a user symbol lookup fails.

The consequence of this is that, if the unfound user virtual address in the sample doesn't match up to a kernel symbol either, we trigger things like this code in builtin-top.c:

  if (al.sym == NULL && al.map != NULL) {
      const char *msg = "Kernel samples will not be resolved.\n";
      /*
       * As we do lazy loading of symtabs we only will know if the
       * specified vmlinux file is invalid when we actually have a
       * hit in kernel space and then try to load it. So if we get
       * here and there are _no_ symbols in the DSO backing the
       * kernel map, bail out.
       *
       * We may never get here, for instance, if we use -K/
       * --hide-kernel-symbols, even if the user specifies an
       * invalid --vmlinux ;-)
       */
      if (!machine->kptr_restrict_warned && !top->vmlinux_warned &&
          __map__is_kernel(al.map) && map__has_symbols(al.map)) {
          if (symbol_conf.vmlinux_name) {
              char serr[256];
              dso__strerror_load(al.map->dso, serr, sizeof(serr));
              ui__warning("The %s file can't be used: %s\n%s",
                          symbol_conf.vmlinux_name, serr, msg);
          } else {
              ui__warning("A vmlinux file was not found.\n%s", msg);
          }
          if (use_browser <= 0)
              sleep(5);
          top->vmlinux_warned = true;
      }
  }

When I fire up a compilation on sparc, this triggers immediately.

I'm trying to figure out what the "backup to kernel map" code is accomplishing. I see some language in the current code and in the changes that have happened in this area talking about vdso. Does that really happen? The vdso is mapped into userspace virtual addresses, not kernel ones.

More history. This didn't cause problems on sparc some time ago, because the kernel IP check used to be "ip < 0" :-) Sparc kernel addresses are not negative. But now with machine__kernel_ip(), which works using the symbol table determined kernel address range, it does trigger.

What it all boils down to is that on architectures like sparc, machine__kernel_ip() should always return false in this scenario, and therefore this kind of logic:

  if (cpumode == PERF_RECORD_MISC_USER && machine &&
      mg != &machine->kmaps &&
      machine__kernel_ip(machine, al->addr)) {

is basically invalid. PERF_RECORD_MISC_USER implies no kernel address can possibly match for the sample/event in question (no matter how hard you try!) :-)
</>

So, I thought something had changed and in the past we would somehow find that address in the kallsyms, but I couldn't find anything to back that up, the patch introducing this is over a decade old, lots of things changed, so I was just thinking I was missing something.

I tried a gtod busy loop to generate vdso activity and added a 'perf probe' at that branch, on x86_64, to see if it ever gets hit. Made thread__find_map() noinline, as 'perf probe' in lines of inline functions seems to not be working, only at function start. (Masami?)

  # perf probe -x ~/bin/perf -L thread__find_map:57
  <thread__find_map@/home/acme/git/perf/tools/perf/util/event.c:57>
       57    if (cpumode == PERF_RECORD_MISC_USER && machine &&
       58        mg != &machine->kmaps &&
       59        machine__kernel_ip(machine, al->addr)) {
       60            mg = &machine->kmaps;
       61            load_map = true;
       62            goto try_again;
             }
         } else {
             /*
              * Kernel maps might be changed when loading symbols so loading
              * must be done prior to using kernel maps.
              */
       69    if (load_map)
       70        map__load(al->map);
       71    al->addr = al->map->map_ip(al->map, al->addr);

  # perf probe -x ~/bin/perf thread__find_map:60
  Added new event:
    probe_perf:thread__find_map (on thread__find_map:60 in /home/acme/bin/perf)

  You can now use it in all perf tools, such as:

    perf record -e probe_perf:thread__find_map -aR sleep 1

Then used this to see if, system wide, those probe points were being hit:

  # perf trace -e *perf:thread*/max-stack=8/
  ^C[root@jouet ~]#

No hits when running 'perf top' and:

  # cat gtod.c
  #include <sys/time.h>

  int main(void)
  {
      struct timeval tv;

      while (1)
          gettimeofday(&tv, 0);

      return 0;
  }
  [root@jouet c]# ./gtod
  ^C

Pressed 'P' in 'perf top' and the [vdso] samples are there:

  62.84% [vdso]                   [.] __vdso_gettimeofday
   8.13% gtod                     [.] main
   7.51% [vdso]                   [.] 0x0000000000000914
   5.78% [vdso]                   [.] 0x0000000000000917
   5.43% gtod                     [.] _init
   2.71% [vdso]                   [.] 0x000000000000092d
   0.35% [kernel]                 [k] native_io_delay
   0.33% libc-2.26.so             [.] __memmove_avx_unaligned_erms
   0.20% [vdso]                   [.] 0x000000000000091d
   0.17% [i2c_i801]               [k] i801_access
   0.06% firefox                  [.] free
   0.06% libglib-2.0.so.0.5400.3  [.] g_source_iter_next
   0.05% [vdso]                   [.] 0x0000000000000919
   0.05% libpthread-2.26.so       [.] __pthread_mutex_lock
   0.05% libpixman-1.so.0.34.0    [.] 0x000000000006d3a7
   0.04% [kernel]                 [k] entry_SYSCALL_64_trampoline
   0.04% libxul.so                [.] style::dom_apis::query_selector_slow
   0.04% [kernel]                 [k] module_get_kallsym
   0.04% firefox                  [.] malloc
   0.04% [vdso]                   [.] 0x0000000000000910

I added a 'perf probe' to thread__find_map:69, and that surely got tons of hits, i.e. for every map found, just to make sure the 'perf probe' command was really working.

In the process I noticed a bug: we only have records for '[vdso]' for pre-existing commands, i.e. ones that are running when we start 'perf top', when we will generate the PERF_RECORD_MMAP by looking at /proc/PID/maps. I.e. like this, for preexisting processes with a vdso map, again, tracing for all the system, only pre-existing processes get a [vdso] map (when having one):

  [root@jouet ~]# perf probe -x ~/bin/perf __machine__addnew_vdso
  Added new event:
    probe_perf:__machine__addnew_vdso (on __machine__addnew_vdso in /home/acme/bin/perf)

  You can now use it in all perf tools, such as:

    perf record -e probe_perf:__machine__addnew_vdso -aR sleep 1

  [root@jouet ~]# perf trace -e probe_perf:__machine__addnew_vdso/max-stack=8/
     0.000 probe_perf:__machine__addnew_vdso:(568eb3)
               __machine__addnew_vdso (/home/acme/bin/perf)
               map__new (/home/acme/bin/perf)
               machine__process_mmap2_event (/home/acme/bin/perf)
               machine__process_event (/home/acme/bin/perf)
               perf_event__process (/home/acme/bin/perf)
               perf_tool__process_synth_event (/home/acme/bin/perf)
               perf_event__synthesize_mmap_events (/home/acme/bin/perf)
               __event__synthesize_thread (/home/acme/bin/perf)

The kernel is generating a PERF_RECORD_MMAP for vDSOs, but somehow 'perf top' is not getting those records while 'perf record' is:

  # perf record ~acme/c/gtod
  ^C[ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.076 MB perf.data (1499 samples) ]

  # perf report -D | grep PERF_RECORD_MMAP2
  71293612401913 0x11b48 [0x70]: PERF_RECORD_MMAP2 25484/25484: [0x400000(0x1000) @ 0 fd:02 1137 541179306]: r-xp /home/acme/c/gtod
  71293612419012 0x11be0 [0x70]: PERF_RECORD_MMAP2 25484/25484: [0x7fa4a2783000(0x227000) @ 0 fd:00 3146370 854107250]: r-xp /usr/lib64/ld-2.26.so
  71293612432110 0x11c50 [0x60]: PERF_RECORD_MMAP2 25484/25484: [0x7ffcdb53a000(0x2000) @ 0 00:00 0 0]: r-xp [vdso]
  71293612509944 0x11cb0 [0x70]: PERF_RECORD_MMAP2 25484/25484: [0x7fa4a23cd000(0x3b6000) @ 0 fd:00 3149723 262067164]: r-xp /usr/lib64/libc-2.26.so

  # perf script | grep vdso | head
  gtod 25484 71293.612768: 2485554 cycles:ppp: 7ffcdb53a914 [unknown] ([vdso])
  gtod 25484 71293.613576: 2149343 cycles:ppp: 7ffcdb53a917 [unknown] ([vdso])
  gtod 25484 71293.614274: 1814652 cycles:ppp: 7ffcdb53aca8 __vdso_gettimeofday+0x98 ([vdso])
  gtod 25484 71293.614862: 1669070 cycles:ppp: 7ffcdb53acc5 __vdso_gettimeofday+0xb5 ([vdso])
  gtod 25484 71293.615404: 1451589 cycles:ppp: 7ffcdb53acc5 __vdso_gettimeofday+0xb5 ([vdso])
  gtod 25484 71293.615999: 1269941 cycles:ppp: 7ffcdb53ace6 __vdso_gettimeofday+0xd6 ([vdso])
  gtod 25484 71293.616405: 1177946 cycles:ppp: 7ffcdb53a914 [unknown] ([vdso])
  gtod 25484 71293.616775: 1121290 cycles:ppp: 7ffcdb53ac47 __vdso_gettimeofday+0x37 ([vdso])
  gtod 25484 71293.617150: 1037721 cycles:ppp: 7ffcdb53ace6 __vdso_gettimeofday+0xd6 ([vdso])
  gtod 25484 71293.617478:  994526 cycles:ppp: 7ffcdb53ace6 __vdso_gettimeofday+0xd6 ([vdso])

The patch is the obvious one, and with it we also continue to resolve vdso symbols for pre-existing processes in 'perf top' and for all processes in 'perf record' + 'perf report/script'.

Suggested-by: David Miller <[email protected]> Acked-by: David Miller <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: David Ahern <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Wang Nan <[email protected]> Link: https://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>