path: root/tools
2020-06-22  tools/bpftool: Show info for processes holding BPF map/prog/link/btf FDs  (Andrii Nakryiko, 10 files, -0/+378)
Add bpf_iter-based way to find all the processes that hold open FDs against BPF object (map, prog, link, btf). bpftool always attempts to discover this, but will silently give up if kernel doesn't yet support bpf_iter BPF programs. Process name and PID are emitted for each process (task group). Sample output for each of 4 BPF objects: $ sudo ./bpftool prog show 2694: cgroup_device tag 8c42dee26e8cd4c2 gpl loaded_at 2020-06-16T15:34:32-0700 uid 0 xlated 648B jited 409B memlock 4096B pids systemd(1) 2907: cgroup_skb name egress tag 9ad187367cf2b9e8 gpl loaded_at 2020-06-16T18:06:54-0700 uid 0 xlated 48B jited 59B memlock 4096B map_ids 2436 btf_id 1202 pids test_progs(2238417), test_progs(2238445) $ sudo ./bpftool map show 2436: array name test_cgr.bss flags 0x400 key 4B value 8B max_entries 1 memlock 8192B btf_id 1202 pids test_progs(2238417), test_progs(2238445) 2445: array name pid_iter.rodata flags 0x480 key 4B value 4B max_entries 1 memlock 8192B btf_id 1214 frozen pids bpftool(2239612) $ sudo ./bpftool link show 61: cgroup prog 2908 cgroup_id 375301 attach_type egress pids test_progs(2238417), test_progs(2238445) 62: cgroup prog 2908 cgroup_id 375344 attach_type egress pids test_progs(2238417), test_progs(2238445) $ sudo ./bpftool btf show 1202: size 1527B prog_ids 2908,2907 map_ids 2436 pids test_progs(2238417), test_progs(2238445) 1242: size 34684B pids bpftool(2258892) Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
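A rough sketch of how such a bpf_iter program can look (illustrative only; the section name and context fields follow the bpf_iter task_file interface, and the real pid_iter program shipped with bpftool differs in detail):

    SEC("iter/task_file")
    int iter(struct bpf_iter__task_file *ctx)
    {
            struct file *file = ctx->file;
            struct task_struct *task = ctx->task;

            if (!file || !task)
                    return 0;
            /* match file->f_op against the BPF object fops (bpf_map_fops, bpf_prog_fops,
             * bpf_link_fops, btf_fops) and, on a match, emit the object ID, task->tgid
             * and task->comm through the iterator's seq_file for bpftool to aggregate */
            return 0;
    }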
2020-06-22  libbpf: Wrap source argument of BPF_CORE_READ macro in parentheses  (Andrii Nakryiko, 1 file, -4/+4)
Wrap source argument of BPF_CORE_READ family of macros into parentheses to allow uses like this: BPF_CORE_READ((struct cast_struct *)src, a, b, c); Fixes: 7db3822ab991 ("libbpf: Add BPF_CORE_READ/BPF_CORE_READ_INTO helpers") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
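A minimal illustration of why the wrapping matters (hypothetical helper macros, not the real BPF_CORE_READ expansion; the issue is cast vs. member-access precedence):

    #define FIELD_BAD(src, a) src->a     /* FIELD_BAD((struct cast_struct *)src, a) expands to (struct cast_struct *)src->a  : cast applies to src->a */
    #define FIELD_OK(src, a)  (src)->a   /* FIELD_OK((struct cast_struct *)src, a)  expands to ((struct cast_struct *)src)->a : cast applies to src   */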
2020-06-22  tools/bpftool: Generalize BPF skeleton support and generate vmlinux.h  (Andrii Nakryiko, 7 files, -66/+45)
Adapt Makefile to support BPF skeleton generation beyond single profiler.bpf.c case. Also add vmlinux.h generation and switch profiler.bpf.c to use it. clang-bpf-global-var feature is extended and renamed to clang-bpf-co-re to check for support of preserve_access_index attribute, which, together with BTF for global variables, is the minimum requirement for modern BPF programs. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-06-22  tools/bpftool: Minimize bootstrap bpftool  (Andrii Nakryiko, 4 files, -32/+38)
Build a minimal "bootstrap mode" bpftool to enable skeleton (and, later, vmlinux.h) generation, instead of building an almost complete, but slightly different (w/o skeletons, etc.), bpftool just to bootstrap the complete bpftool build. The current approach doesn't scale well (engineering-wise) when adding more BPF programs and other complicated functionality to bpftool, as it requires constantly adjusting the code to work in both bootstrapped and normal mode. So it's better to build only a minimal bpftool version that supports just BPF skeleton code generation and BTF-to-C conversion. Thankfully, this is quite easy to accomplish thanks to the internal modularity of bpftool commands. This also allows new functionality to keep being added to bpftool in general, without needing to care about bootstrap mode for those new parts. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-06-22  tools/bpftool: Move map/prog parsing logic into common  (Andrii Nakryiko, 4 files, -308/+310)
Move functions that parse map and prog by id/tag/name/etc outside of map.c/prog.c, respectively. These functions are used outside of those files and are generic enough to be in common. This also makes heavy-weight map.c and prog.c more decoupled from the rest of bpftool files and facilitates more lightweight bootstrap bpftool variant. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-06-22  selftests/bpf: Add __ksym extern selftest  (Andrii Nakryiko, 2 files, -0/+103)
Validate libbpf is able to handle weak and strong kernel symbol externs in BPF code correctly. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Hao Luo <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-06-22  libbpf: Add support for extracting kernel symbol addresses  (Andrii Nakryiko, 3 files, -6/+144)
Add support for another special kind of extern in BPF code (in addition to the existing Kconfig externs): kernel symbol externs. Such externs allow BPF code to "know" a kernel symbol's address and either use it for comparisons with kernel data structures (e.g., struct file's f_op pointer, to distinguish different kinds of file), or, with the help of bpf_probe_read_kernel(), to follow pointers and read data from global variables. Kernel symbol addresses are found through /proc/kallsyms, which should be present on the system. Currently, such kernel symbol variables are typeless: they have to be defined as `extern const void <symbol>` and the only operation you can do with them (in C code) is to take their address. Such externs should reside in a special section, '.ksyms'. The bpf_helpers.h header provides a __ksym macro for this. Strong vs weak semantics stay the same as with Kconfig externs. If a symbol is not found in /proc/kallsyms, this is a failure for a strong (non-weak) extern, but a weak extern is defaulted to 0. If the same symbol is defined multiple times in /proc/kallsyms and any of the associated addresses differ, it is an error: the address is ambiguous, so libbpf errs on the side of caution rather than confusing the user with a randomly chosen address. In the future, once the kernel is extended with BTF information for variables, such ksym externs will be supported in a typed version, which will allow a BPF program to read the variable's contents directly, similarly to how it's done for fentry/fexit input arguments. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Hao Luo <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
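A short usage sketch, assuming a BPF program that includes vmlinux.h, bpf_helpers.h and bpf_core_read.h; the symbol names here are just examples:

    extern const void socket_file_ops __ksym;       /* strong: load fails if missing from /proc/kallsyms */
    extern const void bpf_link_fops __ksym __weak;  /* weak: address defaults to 0 if missing */

    static bool is_socket_file(struct file *f)
    {
            /* taking the extern's address is the only operation allowed on a typeless ksym */
            return (const void *)BPF_CORE_READ(f, f_op) == &socket_file_ops;
    }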
2020-06-22  libbpf: Generalize libbpf externs support  (Andrii Nakryiko, 1 file, -140/+206)
Switch existing Kconfig externs to be just one of few possible kinds of more generic externs. This refactoring is in preparation for ksymbol extern support, added in the follow up patch. There are no functional changes intended. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Hao Luo <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-06-22  selftests: forwarding: Add a test for pedit munge tcp, udp sport, dport  (Petr Machata, 1 file, -0/+198)
Add a test that checks that pedit adjusts port numbers of tcp and udp packets. Signed-off-by: Petr Machata <[email protected]> Signed-off-by: Ido Schimmel <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-06-23  libbpf: Add a bunch of attribute getters/setters for map definitions  (Andrii Nakryiko, 3 files, -10/+134)
Add a bunch of getters for various aspects of a BPF map. Some of these attributes (e.g., key_size, value_size, type, etc.) are available right now in struct bpf_map_def, but this patch adds getters allowing them to be fetched individually. The bpf_map_def approach isn't very scalable when ABI stability requirements are taken into account. It's much easier to extend libbpf and add support for new features when each aspect of a BPF map has a separate getter/setter. Getters follow the common naming convention of not explicitly having "get" in their names: bpf_map__type() returns the map type, bpf_map__key_size() returns the key_size. Setters, though, explicitly have "set" in their names: bpf_map__set_type(), bpf_map__set_key_size(). This patch ensures we now have a getter and a setter for the following map attributes: type, max_entries, map_flags, numa_node, key_size, value_size and ifindex. bpf_map__resize() enforces an unnecessary restriction of max_entries > 0. It is unnecessary because libbpf actually supports zero max_entries for some cases (e.g., for a PERF_EVENT_ARRAY map) and treats it specially at map creation time. To allow setting max_entries=0, a new bpf_map__set_max_entries() setter is added. bpf_map__resize()'s behavior is preserved for backwards-compatibility reasons. A map ifindex getter is added as well; there is a setter already, but no corresponding getter, so fix this asymmetry too. bpf_map__set_ifindex() itself is converted from a void function into an error-returning one, similar to the other setters. The only error returned right now is -EBUSY, if the BPF map is already loaded and has a corresponding FD. One attribute lacking any ability to get/set or even specify it declaratively is numa_node. This patch fixes this gap and adds both a programmatic getter/setter and support for a numa_node field in BTF-defined maps. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Toke Høiland-Jørgensen <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
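A brief usage sketch of the resulting API (assuming <bpf/libbpf.h> and an already opened bpf_object *obj; error handling omitted):

    struct bpf_map *map = bpf_object__find_map_by_name(obj, "my_map");

    __u32 key_sz  = bpf_map__key_size(map);   /* getters: no "get" in the name */
    __u32 entries = bpf_map__max_entries(map);

    bpf_map__set_max_entries(map, 0);         /* 0 is now allowed, unlike bpf_map__resize() */
    bpf_map__set_numa_node(map, 1);           /* setters return an error, e.g. -EBUSY after load */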
2020-06-22  libbpf: Forward-declare bpf_stats_type for systems with outdated UAPI headers  (Andrii Nakryiko, 1 file, -0/+2)
On systems that don't yet have the very latest linux/bpf.h header, enum bpf_stats_type will be undefined, causing compilation warnings. Prevent this by forward-declaring the enum. Fixes: 0bee106716cf ("libbpf: Add support for command BPF_ENABLE_STATS") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Song Liu <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
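The fix amounts to a forward declaration ahead of the prototype that uses the enum, roughly:

    enum bpf_stats_type; /* defined in up-to-date linux/bpf.h */

    LIBBPF_API int bpf_enable_stats(enum bpf_stats_type type);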
2020-06-22  selftests/bpf: Test access to bpf map pointer  (Andrey Ignatov, 3 files, -0/+780)
Add selftests to test access to map pointers from a bpf program for all map types except struct_ops (that one would need additional work). The verifier test focuses mostly on scenarios that must be rejected. The prog_tests test focuses on accessing multiple fields, both scalar and a nested struct, from a bpf program and verifies that those fields have the expected values. Signed-off-by: Andrey Ignatov <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: John Fastabend <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/139a6a17f8016491e39347849b951525335c6eb4.1592600985.git.rdna@fb.com
2020-06-22  bpf: Support access to bpf map fields  (Andrey Ignatov, 1 file, -1/+1)
There are multiple use-cases when it's convenient to have access to bpf map fields, both `struct bpf_map` and map type specific struct-s such as `struct bpf_array`, `struct bpf_htab`, etc. For example while working with sock arrays it can be necessary to calculate the key based on map->max_entries (some_hash % max_entries). Currently this is solved by communicating max_entries via "out-of-band" channel, e.g. via additional map with known key to get info about target map. That works, but is not very convenient and error-prone while working with many maps. In other cases necessary data is dynamic (i.e. unknown at loading time) and it's impossible to get it at all. For example while working with a hash table it can be convenient to know how much capacity is already used (bpf_htab.count.counter for BPF_F_NO_PREALLOC case). At the same time kernel knows this info and can provide it to bpf program. Fill this gap by adding support to access bpf map fields from bpf program for both `struct bpf_map` and map type specific fields. Support is implemented via btf_struct_access() so that a user can define their own `struct bpf_map` or map type specific struct in their program with only necessary fields and preserve_access_index attribute, cast a map to this struct and use a field. For example: struct bpf_map { __u32 max_entries; } __attribute__((preserve_access_index)); struct bpf_array { struct bpf_map map; __u32 elem_size; } __attribute__((preserve_access_index)); struct { __uint(type, BPF_MAP_TYPE_ARRAY); __uint(max_entries, 4); __type(key, __u32); __type(value, __u32); } m_array SEC(".maps"); SEC("cgroup_skb/egress") int cg_skb(void *ctx) { struct bpf_array *array = (struct bpf_array *)&m_array; struct bpf_map *map = (struct bpf_map *)&m_array; /* .. use map->max_entries or array->map.max_entries .. */ } Similarly to other btf_struct_access() use-cases (e.g. struct tcp_sock in net/ipv4/bpf_tcp_ca.c) the patch allows access to any fields of corresponding struct. Only reading from map fields is supported. For btf_struct_access() to work there should be a way to know btf id of a struct that corresponds to a map type. To get btf id there should be a way to get a stringified name of map-specific struct, such as "bpf_array", "bpf_htab", etc for a map type. Two new fields are added to `struct bpf_map_ops` to handle it: * .map_btf_name keeps a btf name of a struct returned by map_alloc(); * .map_btf_id is used to cache btf id of that struct. To make btf ids calculation cheaper they're calculated once while preparing btf_vmlinux and cached same way as it's done for btf_id field of `struct bpf_func_proto` While calculating btf ids, struct names are NOT checked for collision. Collisions will be checked as a part of the work to prepare btf ids used in verifier in compile time that should land soon. The only known collision for `struct bpf_htab` (kernel/bpf/hashtab.c vs net/core/sock_map.c) was fixed earlier. Both new fields .map_btf_name and .map_btf_id must be set for a map type for the feature to work. If neither is set for a map type, verifier will return ENOTSUPP on a try to access map_ptr of corresponding type. If just one of them set, it's verifier misconfiguration. Only `struct bpf_array` for BPF_MAP_TYPE_ARRAY and `struct bpf_htab` for BPF_MAP_TYPE_HASH are supported by this patch. Other map types will be supported separately. The feature is available only for CONFIG_DEBUG_INFO_BTF=y and gated by perfmon_capable() so that unpriv programs won't have access to bpf map fields. 
Signed-off-by: Andrey Ignatov <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: John Fastabend <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/6479686a0cd1e9067993df57b4c3eef0e276fec9.1592600985.git.rdna@fb.com
2020-06-22  perf parse-events: Declare flex header file output  (Ian Rogers, 1 file, -6/+9)
Declare flex header file output so that bison C files can depend upon them. As there are multiple output targets $@ is replaced by the target name. Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Garry <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf pmu: Add flex debug build flag  (Ian Rogers, 1 file, -1/+1)
Allow pmu parser's flex to be debugged as the parse-events and expr currently are. Enabling this requires the C code to call perf_pmu__flex_debug. Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: John Garry <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf pmu: Add bison debug build flag  (Ian Rogers, 1 file, -1/+1)
Allow pmu parser to be debugged as the parse-events and expr currently are. Enabling this requires the C code to set perf_pmu_debug. Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: John Garry <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf parse-events: Use automatic variable for yacc input  (Ian Rogers, 1 file, -3/+3)
This reduces the command line size slightly. Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: John Garry <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf parse-events: Use automatic variable for flex input  (Ian Rogers, 1 file, -3/+3)
This reduces the command line size slightly. Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: John Garry <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf evlist: Fix the class prefix for 'struct evlist' branch_type methods  (Arnaldo Carvalho de Melo, 4 files, -6/+4)
To differentiate from libperf's 'struct perf_evlist' methods. Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf evlist: Fix the class prefix for 'struct evlist' sample_id_all methods  (Arnaldo Carvalho de Melo, 4 files, -12/+12)
To differentiate from libperf's 'struct perf_evlist' methods. Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf evlist: Fix the class prefix for 'struct evlist' sample_type methods  (Arnaldo Carvalho de Melo, 6 files, -15/+15)
To differentiate from libperf's 'struct perf_evlist' methods. Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf evlist: Fix the class prefix for 'struct evlist' strerror methods  (Arnaldo Carvalho de Melo, 4 files, -8/+7)
To differentiate from libperf's 'struct perf_evlist' methods. Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf evlist: Fix the class prefix for 'struct evlist' 'add' evsel methods  (Arnaldo Carvalho de Melo, 7 files, -34/+27)
To differentiate from libperf's 'struct perf_evlist' methods. Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
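The pattern across this series of renames is dropping the perf_ prefix from the tool-side 'struct evlist' helpers, so that only libperf's 'struct perf_evlist' methods keep it, e.g. (an illustrative pair, assuming an 'add dummy' helper):

    /* before */ int perf_evlist__add_dummy(struct evlist *evlist);
    /* after  */ int evlist__add_dummy(struct evlist *evlist);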
2020-06-22  perf pmu: Improve CPU core PMU HW event list ordering  (John Garry, 1 file, -0/+7)
For perf list, the CPU core PMU HW event ordering is such that not all events may will be listed adjacent - consider this example: $ tools/perf/perf list List of pre-defined events (to be used in -e): duration_time [Tool event] branch-instructions OR cpu/branch-instructions/ [Kernel PMU event] branch-misses OR cpu/branch-misses/ [Kernel PMU event] bus-cycles OR cpu/bus-cycles/ [Kernel PMU event] cache-misses OR cpu/cache-misses/ [Kernel PMU event] cache-references OR cpu/cache-references/ [Kernel PMU event] cpu-cycles OR cpu/cpu-cycles/ [Kernel PMU event] cstate_core/c3-residency/ [Kernel PMU event] cstate_core/c6-residency/ [Kernel PMU event] cstate_core/c7-residency/ [Kernel PMU event] cstate_pkg/c2-residency/ [Kernel PMU event] cstate_pkg/c3-residency/ [Kernel PMU event] cstate_pkg/c6-residency/ [Kernel PMU event] cstate_pkg/c7-residency/ [Kernel PMU event] cycles-ct OR cpu/cycles-ct/ [Kernel PMU event] cycles-t OR cpu/cycles-t/ [Kernel PMU event] el-abort OR cpu/el-abort/ [Kernel PMU event] el-capacity OR cpu/el-capacity/ [Kernel PMU event] Notice in the above example how the cstate_core PMU events are mixed in the middle of the CPU core events. For my arm64 platform, all the uncore events get mixed in, making the list very disorganised: page-faults OR faults [Software event] task-clock [Software event] duration_time [Tool event] L1-dcache-load-misses [Hardware cache event] L1-dcache-loads [Hardware cache event] L1-icache-load-misses [Hardware cache event] L1-icache-loads [Hardware cache event] branch-load-misses [Hardware cache event] branch-loads [Hardware cache event] dTLB-load-misses [Hardware cache event] dTLB-loads [Hardware cache event] iTLB-load-misses [Hardware cache event] iTLB-loads [Hardware cache event] br_mis_pred OR armv8_pmuv3_0/br_mis_pred/ [Kernel PMU event] br_mis_pred_retired OR armv8_pmuv3_0/br_mis_pred_retired/ [Kernel PMU event] br_pred OR armv8_pmuv3_0/br_pred/ [Kernel PMU event] br_retired OR armv8_pmuv3_0/br_retired/ [Kernel PMU event] br_return_retired OR armv8_pmuv3_0/br_return_retired/ [Kernel PMU event] bus_access OR armv8_pmuv3_0/bus_access/ [Kernel PMU event] bus_cycles OR armv8_pmuv3_0/bus_cycles/ [Kernel PMU event] cid_write_retired OR armv8_pmuv3_0/cid_write_retired/ [Kernel PMU event] cpu_cycles OR armv8_pmuv3_0/cpu_cycles/ [Kernel PMU event] dtlb_walk OR armv8_pmuv3_0/dtlb_walk/ [Kernel PMU event] exc_return OR armv8_pmuv3_0/exc_return/ [Kernel PMU event] exc_taken OR armv8_pmuv3_0/exc_taken/ [Kernel PMU event] hisi_sccl1_ddrc0/act_cmd/ [Kernel PMU event] hisi_sccl1_ddrc0/flux_rcmd/ [Kernel PMU event] hisi_sccl1_ddrc0/flux_rd/ [Kernel PMU event] hisi_sccl1_ddrc0/flux_wcmd/ [Kernel PMU event] hisi_sccl1_ddrc0/flux_wr/ [Kernel PMU event] hisi_sccl1_ddrc0/pre_cmd/ [Kernel PMU event] hisi_sccl1_ddrc0/rnk_chg/ [Kernel PMU event] ... hisi_sccl7_l3c21/wr_hit_cpipe/ [Kernel PMU event] hisi_sccl7_l3c21/wr_hit_spipe/ [Kernel PMU event] hisi_sccl7_l3c21/wr_spipe/ [Kernel PMU event] inst_retired OR armv8_pmuv3_0/inst_retired/ [Kernel PMU event] inst_spec OR armv8_pmuv3_0/inst_spec/ [Kernel PMU event] itlb_walk OR armv8_pmuv3_0/itlb_walk/ [Kernel PMU event] l1d_cache OR armv8_pmuv3_0/l1d_cache/ [Kernel PMU event] l1d_cache_refill OR armv8_pmuv3_0/l1d_cache_refill/ [Kernel PMU event] l1d_cache_wb OR armv8_pmuv3_0/l1d_cache_wb/ [Kernel PMU event] l1d_tlb OR armv8_pmuv3_0/l1d_tlb/ [Kernel PMU event] l1d_tlb_refill OR armv8_pmuv3_0/l1d_tlb_refill/ [Kernel PMU event] So the events are list alphabetically. 
However, CPU core event listing is special from commit dc098b35b56f ("perf list: List kernel supplied event aliases"), in that the alias and full event is shown (in that order). As such, the core events may become sparse. Improve this by grouping the CPU core events and ensure that they are listed first for kernel PMU events. For the first example, above, this now looks like: duration_time [Tool event] branch-instructions OR cpu/branch-instructions/ [Kernel PMU event] branch-misses OR cpu/branch-misses/ [Kernel PMU event] bus-cycles OR cpu/bus-cycles/ [Kernel PMU event] cache-misses OR cpu/cache-misses/ [Kernel PMU event] cache-references OR cpu/cache-references/ [Kernel PMU event] cpu-cycles OR cpu/cpu-cycles/ [Kernel PMU event] cycles-ct OR cpu/cycles-ct/ [Kernel PMU event] cycles-t OR cpu/cycles-t/ [Kernel PMU event] el-abort OR cpu/el-abort/ [Kernel PMU event] el-capacity OR cpu/el-capacity/ [Kernel PMU event] el-commit OR cpu/el-commit/ [Kernel PMU event] el-conflict OR cpu/el-conflict/ [Kernel PMU event] el-start OR cpu/el-start/ [Kernel PMU event] instructions OR cpu/instructions/ [Kernel PMU event] mem-loads OR cpu/mem-loads/ [Kernel PMU event] mem-stores OR cpu/mem-stores/ [Kernel PMU event] ref-cycles OR cpu/ref-cycles/ [Kernel PMU event] topdown-fetch-bubbles OR cpu/topdown-fetch-bubbles/ [Kernel PMU event] topdown-recovery-bubbles OR cpu/topdown-recovery-bubbles/ [Kernel PMU event] topdown-slots-issued OR cpu/topdown-slots-issued/ [Kernel PMU event] topdown-slots-retired OR cpu/topdown-slots-retired/ [Kernel PMU event] topdown-total-slots OR cpu/topdown-total-slots/ [Kernel PMU event] tx-abort OR cpu/tx-abort/ [Kernel PMU event] tx-capacity OR cpu/tx-capacity/ [Kernel PMU event] tx-commit OR cpu/tx-commit/ [Kernel PMU event] tx-conflict OR cpu/tx-conflict/ [Kernel PMU event] tx-start OR cpu/tx-start/ [Kernel PMU event] cstate_core/c3-residency/ [Kernel PMU event] cstate_core/c6-residency/ [Kernel PMU event] cstate_core/c7-residency/ [Kernel PMU event] cstate_pkg/c2-residency/ [Kernel PMU event] cstate_pkg/c3-residency/ [Kernel PMU event] cstate_pkg/c6-residency/ [Kernel PMU event] cstate_pkg/c7-residency/ [Kernel PMU event] Signed-off-by: John Garry <[email protected]> Acked-by: Jiri Olsa <[email protected]> Acked-by: Namhyung Kim <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf pmu: List kernel supplied event aliases for arm64  (John Garry, 1 file, -1/+1)
In commit dc098b35b56f ("perf list: List kernel supplied event aliases"), the aliases for events are supplied in addition to CPU event in perf list. This relies on the name of the core PMU being "cpu", which is not the case for arm64, so arm64 has always missed this. Use generic is_pmu_core() helper which takes account of arm64 to make this feature work for arm64 (and possibly other archs). Sample, before: armv8_pmuv3_0/br_mis_pred/ [Kernel PMU event] after: br_mis_pred OR armv8_pmuv3_0/br_mis_pred/ [Kernel PMU event] Signed-off-by: John Garry <[email protected]> Acked-by: Jiri Olsa <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf cs-etm: Allow no CoreSight sink to be specified on command line  (Mike Leach, 1 file, -3/+3)
Adjust the handling of the session sink selection to allow no sink to be selected on the command line. This then forwards the sink selection to the CoreSight infrastructure which will attempt to select a sink based on the default sink select priorities. Signed-off-by: Mike Leach <[email protected]> Tested-by: Leo Yan <[email protected]> Cc: Mathieu Poirier <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Suzuki Poulouse <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf expr: Add < and > operators  (Ian Rogers, 3 files, -1/+12)
These are broadly useful but required to handle TMA metrics. For example encoding Ports_Utilization from: https://download.01.org/perfmon/TMA_Metrics.csv requires '<'. { "BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations.", "MetricExpr": "( ( cpu@EXE_ACTIVITY.EXE_BOUND_0_PORTS@ + cpu@EXE_ACTIVITY.1_PORTS_UTIL@ + ( cpu@EXE_ACTIVITY.2_PORTS_UTIL@ * ( ( ( cpu@UOPS_RETIRED.RETIRE_SLOTS@ ) / ( cpu@CPU_CLK_UNHALTED.THREAD@ ) ) / ( ( 4.000000 ) + 1.000000 ) ) ) ) / ( cpu@CPU_CLK_UNHALTED.THREAD@ ) if ( [email protected]_ACTIVE\\,cmask\\=1@ < cpu@EXE_ACTIVITY.EXE_BOUND_0_PORTS@ ) else ( ( cpu@EXE_ACTIVITY.EXE_BOUND_0_PORTS@ + cpu@EXE_ACTIVITY.1_PORTS_UTIL@ + ( cpu@EXE_ACTIVITY.2_PORTS_UTIL@ * ( ( ( cpu@UOPS_RETIRED.RETIRE_SLOTS@ ) / ( cpu@CPU_CLK_UNHALTED.THREAD@ ) ) / ( ( 4.000000 ) + 1.000000 ) ) ) ) - cpu@EXE_ACTIVITY.EXE_BOUND_0_PORTS@ ) / ( cpu@CPU_CLK_UNHALTED.THREAD@ ) )", "MetricGroup": "Topdown_Group_Ports_Utilization", "MetricName": "Topdown_Metric_Ports_Utilization" }, Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Paul Clarke <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf expr: Add d_ratio operation  (Ian Rogers, 3 files, -2/+15)
d_ratio avoids division by 0 yielding infinity, such as when a counter doesn't get scheduled. An example usage is: { "BriefDescription": "DCache L1 misses", "MetricExpr": "d_ratio(MEM_LOAD_RETIRED.L1_MISS, MEM_LOAD_RETIRED.L1_HIT + MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)", "MetricGroup": "DCache;DCache_L1", "MetricName": "DCache_L1_Miss", "ScaleUnit": "100%", } Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: John Garry <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Paul Clarke <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
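Semantically, d_ratio(x, y) is x / y with the divide-by-zero case collapsed to 0; roughly this evaluation rule (a sketch, not the actual expr.y code):

    static double d_ratio(double num, double den)
    {
            return den != 0.0 ? num / den : 0.0;
    }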
2020-06-22  perf script: Fixup some evsel/evlist method names  (Arnaldo Carvalho de Melo, 1 file, -5/+5)
Fixups related to the introduction of libperf, where the perf_{evsel,evlist}__ prefix is reserved for functions operating on struct perf_{evsel,evlist}. Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf tests: Add parse metric test for frontend metric  (Jiri Olsa, 1 file, -0/+25)
Adding new metric test for frontend metric. It's stolen from x86 pmu events. Committer testing: # perf test "Parse and process metrics" 67: Parse and process metrics : Ok # perf test -v "Parse and process metrics" # 67: Parse and process metrics : --- start --- test child forked, pid 104881 metric expr inst_retired.any / cpu_clk_unhalted.thread for IPC found event inst_retired.any found event cpu_clk_unhalted.thread adding {inst_retired.any,cpu_clk_unhalted.thread}:W metric expr idq_uops_not_delivered.core / (4 * (( ( cpu_clk_unhalted.thread / 2 ) * ( 1 + cpu_clk_unhalted.one_thread_active / cpu_clk_unhalted.ref_xclk ) ))) for Frontend_Bound_SMT found event cpu_clk_unhalted.one_thread_active found event cpu_clk_unhalted.ref_xclk found event idq_uops_not_delivered.core found event cpu_clk_unhalted.thread adding {cpu_clk_unhalted.one_thread_active,cpu_clk_unhalted.ref_xclk,idq_uops_not_delivered.core,cpu_clk_unhalted.thread}:W test child finished with 0 ---- end ---- Parse and process metrics: Ok # Had to fix it to initialize that 'struct value' array sentinel with a named initializer to fix the build with some versions of clang: tests/parse-metric.c:154:7: error: missing field 'val' initializer [-Werror,-Wmissing-field-initializers] { 0 }, Signed-off-by: Jiri Olsa <[email protected]> Acked-by: Ian Rogers <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf tests: Add parse metric test for ipc metric  (Jiri Olsa, 4 files, -0/+151)
Adding new test that process metrics code and checks the expected results. Starting with easy ipc metric. Committer testing: # perf test "Parse and process metrics" 67: Parse and process metrics : Ok # # perf test -v "Parse and process metrics" 67: Parse and process metrics : --- start --- test child forked, pid 103402 metric expr inst_retired.any / cpu_clk_unhalted.thread for IPC found event inst_retired.any found event cpu_clk_unhalted.thread adding {inst_retired.any,cpu_clk_unhalted.thread}:W test child finished with 0 ---- end ---- Parse and process metrics: Ok # Had to fix it to initialize that 'struct value' array sentinel with a named initializer to fix the build with some versions of clang: tests/parse-metric.c:135:7: error: missing field 'val' initializer [-Werror,-Wmissing-field-initializers] { 0 }, Signed-off-by: Jiri Olsa <[email protected]> Acked-by: Ian Rogers <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
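A hedged sketch of how such a test feeds fixed counter values and checks the computed metric (the struct layout and numbers are illustrative, including the named sentinel initializer mentioned above):

    static struct value ipc_vals[] = {
            { .event = "inst_retired.any",        .val = 300 },
            { .event = "cpu_clk_unhalted.thread", .val = 200 },
            { .event = NULL, },
    };

    /* expected: IPC = inst_retired.any / cpu_clk_unhalted.thread = 1.5 */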
2020-06-22  perf tools: Add test_generic_metric function  (Jiri Olsa, 2 files, -0/+17)
Adding test_generic_metric that prepares and runs given metric over the data from struct runtime_stat object. Signed-off-by: Jiri Olsa <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf tools: Release metric_events rblist  (Jiri Olsa, 3 files, -0/+21)
We don't release metric_events rblist, add the missing delete hook and call the release before leaving cmd_stat. Signed-off-by: Jiri Olsa <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf tools: Factor out prepare_metric function  (Jiri Olsa, 1 file, -19/+34)
Factoring out prepare_metric function so it can be used in test interface coming in following changes. Signed-off-by: Jiri Olsa <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf tools: Add metricgroup__parse_groups_test function  (Jiri Olsa, 2 files, -0/+20)
Add the metricgroup__parse_groups_test function. It will be used as test's interface to metric parsing in following changes. Signed-off-by: Jiri Olsa <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf tools: Add map to parse_groups() function  (Jiri Olsa, 1 file, -10/+13)
For testing purposes we need to pass our own map of events from parse_groups() through metricgroup__add_metric. Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf tools: Add fake_pmu to parse_group() function  (Jiri Olsa, 1 file, -2/+3)
Allow passing fake_pmu to the parse_groups() function so it can be used in the parse_events call. It will be passed by the upcoming metricgroup__parse_groups_test function. Committer notes: Made it a 'struct perf_pmu' pointer, in line with the changes at the start of this patchkit to avoid statics deep down in library code. Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf parse: Factor out parse_groups() function  (Jiri Olsa, 1 file, -6/+16)
Factor out the parse_groups function, it will be used for new test interface coming in following changes. Signed-off-by: Jiri Olsa <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf tests: Add another metric parsing test  (Jiri Olsa, 1 file, -3/+114)
The test goes through all metrics compiled for arch within pmu events and try to parse them. This test is different from 'test_parsing' in that we go through all the events in the current arch, not just one defined for current CPU model. Using 'fake_pmu' to parse events which do not have PMUs defined in the system. Say there's bad change in ivybridge metrics file, like: - a/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json + b/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json @@ -8,7 +8,7 @@ - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * (( + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / / (4 * the test fails with (on my kabylake laptop): $ perf test 'Parsing of PMU event table metrics with fake PMUs' -v parsing 'idq_uops_not_delivered.core / / (4 * (( ( cpu_clk_unh... syntax error, line 1 expr__parse failed test child finished with -1 ... The test also defines its own list of metrics and tries to parse them. It's handy for developing. Committer notes: Testing it: $ perf test fake 10: PMU events : 10.4: Parsing of PMU event table metrics with fake PMUs : FAILED! $ perf test -v fake |& tail parsing '(unc_p_freq_trans_cycles / unc_p_clockticks) * 100.' parsing '(unc_m_power_channel_ppd / unc_m_clockticks) * 100.' parsing '(unc_m_power_critical_throttle_cycles / unc_m_clockticks) * 100.' parsing '(unc_m_power_self_refresh / unc_m_clockticks) * 100.' parsing 'idq_uops_not_delivered.core / * (4 * cycles)' syntax error expr__parse failed test child finished with -1 ---- end ---- PMU events subtest 4: FAILED! $ And fix this error: tests/pmu-events.c:437:40: error: missing field 'idx' initializer [-Werror,-Wmissing-field-initializers] struct parse_events_error error = { 0 }; Signed-off-by: Jiri Olsa <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf pmu: Add a perf_pmu__fake object to use with __parse_events()  (Arnaldo Carvalho de Melo, 2 files, -0/+4)
When wanting to use the support in __parse_events() for fake pmus, just pass it. Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf parse: Provide a way to pass a fake_pmu to parse_events()  (Arnaldo Carvalho de Melo, 2 files, -9/+17)
This is an alternative patch to what Jiri sent that instead of changing all callers to parse_events() for allowing to pass a fake_pmu, provide another function specifically for that. From Jiri's patch: This way it's possible to parse events from PMUs which are not present in the system. It's available only for testing purposes coming in following changes, so all the current users set fake_pmu argument as false. Based-on-a-patch-by: Jiri Olsa <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Acked-by: Jiri Olsa <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
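Together with the perf_pmu__fake object above, a test can then roughly do the following (a sketch; the exact argument order of __parse_events() is illustrative):

    struct parse_events_error err = { .idx = 0, };  /* named field initializer, as noted in the test entries */

    /* parse an event for a PMU that is not present on this system */
    ret = __parse_events(evlist, "some_absent_pmu/some_event/", &err, &perf_pmu__fake);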
2020-06-22  perf tests: Factor check_parse_id function  (Jiri Olsa, 1 file, -6/+14)
Separating the generic part of check_parse_id function, so it can be used in following changes for the new test. Committer notes: Fix this error: tests/pmu-events.c:413:40: error: missing field 'idx' initializer [-Werror,-Wmissing-field-initializers] struct parse_events_error error = { 0 }; Signed-off-by: Jiri Olsa <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf tools: Add fake pmu support  (Jiri Olsa, 4 files, -8/+50)
Add a way to create a pmu event without the actual PMU being in place. This way we can test metrics defined for any processor. The interface is to define fake_pmu in struct parse_events_state data. It will be used only in tests via special interface function added in following changes. Signed-off-by: Jiri Olsa <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2020-06-22  perf annotate: Remove unneeded conversion to bool  (Jason Yan, 1 file, -1/+1)
The '>' expression itself is bool, no need to convert it to bool again. This fixes the following coccicheck warning: tools/perf/ui/browsers/annotate.c:212:30-35: WARNING: conversion to bool not needed here Signed-off-by: Jason Yan <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
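The pattern being removed, in isolation (an illustrative before/after, not the exact annotate.c line):

    bool below = (pcnt > threshold) ? true : false;  /* before: redundant conversion */
    bool below = pcnt > threshold;                   /* after: equivalent */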
2020-06-22  selftests/x86: Add a syscall_arg_fault_64 test for negative GSBASE  (Andy Lutomirski, 1 file, -0/+26)
If the kernel erroneously allows WRGSBASE and user code writes a negative value, paranoid_entry will get confused. Check for this by writing a negative value to GSBASE and doing SYSENTER with TF set. A successful run looks like: [RUN] SYSENTER with TF, invalid state, and GSBASE < 0 [SKIP] Illegal instruction A failed run causes a kernel hang, and I believe it's because we double-fault and then get a never ending series of page faults and, when we exhaust the double fault stack we double fault again, starting the process over. Signed-off-by: Andy Lutomirski <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/f4f71efc91b9eae5e3dae21c9aee1c70cf5f370e.1590620529.git.luto@kernel.org
2020-06-22  Merge tag 'spi-fix-v5.8-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi  (Linus Torvalds, 1 file, -5/+5)
Pull spi fixes from Mark Brown: "Quite a lot of fixes here for no single reason. There's a collection of the usual sort of device specific fixes and also a bunch of people have been working on spidev and the userspace test program spidev_test so they've got an unusually large collection of small fixes" * tag 'spi-fix-v5.8-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: spi: spidev: fix a potential use-after-free in spidev_release() spi: spidev: fix a race between spidev_release and spidev_remove spi: stm32-qspi: Fix error path in case of -EPROBE_DEFER spi: uapi: spidev: Use TABs for alignment spi: spi-fsl-dspi: Free DMA memory with matching function spi: tools: Add macro definitions to fix build errors spi: tools: Make default_tx/rx and input_tx static spi: dt-bindings: amlogic, meson-gx-spicc: Fix schema for meson-g12a spi: rspi: Use requested instead of maximum bit rate spi: spidev_test: Use %u to format unsigned numbers spi: sprd: switch the sequence of setting WDG_LOAD_LOW and _HIGH
2020-06-22  tools/virtio: Use tools/include/list.h instead of stubs  (Eugenio Pérez, 4 files, -9/+6)
It should not make any significant difference, but it reduces stub code. Signed-off-by: Eugenio Pérez <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Michael S. Tsirkin <[email protected]>
2020-06-22  tools/virtio: Reset index in virtio_test --reset.  (Eugenio Pérez, 1 file, -2/+24)
This way, the behavior seen by vhost is more like a VM's. Signed-off-by: Eugenio Pérez <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Michael S. Tsirkin <[email protected]>
2020-06-22  tools/virtio: Extract virtqueue initialization in vq_reset  (Eugenio Pérez, 1 file, -7/+14)
So we can reset after that in the main loop. Signed-off-by: Eugenio Pérez <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Michael S. Tsirkin <[email protected]>
2020-06-22  tools/virtio: Use __vring_new_virtqueue in virtio_test.c  (Eugenio Pérez, 1 file, -4/+3)
As updated in ("2a2d1382fe9d virtio: Add improved queue allocation API") Signed-off-by: Eugenio Pérez <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Michael S. Tsirkin <[email protected]>