path: root/tools
Age | Commit message | Author | Files | Lines
2021-10-25 | tools/latency-collector: Use correct size when writing queue_full_warning | Viktor Rosendahl | 1 | -1/+1
queue_full_warning is a pointer, so it is wrong to use sizeof to calculate the number of characters of the string it points to. The effect is that we only print out the first few characters of the warning string. The correct way is to use strlen(). We don't need to add 1 to the strlen() because we don't want to write the terminating null character to stdout. Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/r/8fd4bb65ef3da67feac9ce3258cdbe9824752cf1.1629198502.git.jing.yangyang@zte.com.cn Link: https://lore.kernel.org/r/[email protected] Reported-by: Zeal Robot <[email protected]> Signed-off-by: Viktor Rosendahl <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
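For illustration, a minimal self-contained sketch of the bug class described above (the message text and the surrounding code are assumptions, not the actual latency-collector source):

  #include <string.h>
  #include <unistd.h>

  static const char *queue_full_warning = "Could not queue trace, queue is full\n";

  int main(void)
  {
          /* Buggy: sizeof(queue_full_warning) is the size of the pointer (4 or 8),
           * so only the first few characters of the message get written. */
          write(STDOUT_FILENO, queue_full_warning, sizeof(queue_full_warning));

          /* Fixed: strlen() measures the string itself; no +1, because the
           * terminating NUL should not be written to stdout. */
          write(STDOUT_FILENO, queue_full_warning, strlen(queue_full_warning));
          return 0;
  }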
2021-10-25 | libbpf: Deprecate ambiguously-named bpf_program__size() API | Andrii Nakryiko | 1 | -0/+1
The name of the API doesn't convey clearly that this size is in number of bytes (there needed to be a separate comment to make this clear in libbpf.h). Further, measuring the size of BPF program in bytes is not exactly the best fit, because BPF programs always consist of 8-byte instructions. As such, bpf_program__insn_cnt() is a better alternative in pretty much any imaginable case. So schedule bpf_program__size() deprecation starting from v0.7 and it will be removed in libbpf 1.0. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
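A short sketch of the suggested migration, assuming a bpf_program obtained from an already opened bpf_object (the helper itself is illustrative):

  #include <stdio.h>
  #include <bpf/libbpf.h>

  static void report_prog_size(const struct bpf_program *prog)
  {
          /* Deprecated from v0.7: size in bytes, easy to misread as an insn count. */
          size_t bytes = bpf_program__size(prog);

          /* Preferred: number of 8-byte BPF instructions. */
          size_t insns = bpf_program__insn_cnt(prog);

          printf("%s: %zu bytes, %zu instructions\n",
                 bpf_program__name(prog), bytes, insns);
  }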
2021-10-25 | libbpf: Deprecate multi-instance bpf_program APIs | Andrii Nakryiko | 2 | -10/+18
Schedule deprecation of a set of APIs that are related to multi-instance bpf_programs:
- bpf_program__set_prep() ([0]);
- bpf_program__{set,unset}_instance() ([1]);
- bpf_program__nth_fd().

These APIs are obscure, very niche, and don't seem to be used much in practice. bpf_program__set_prep() is pretty useless for anything but the simplest BPF programs, as it doesn't allow adjusting BPF program load attributes, among other things. In short, it has already bitrotted and will bitrot some more if not removed. With the bpf_program__insns() API, which gives access to the post-processed BPF program instructions of any given entry-point BPF program, it's now possible to do whatever adjustments were possible with the set_prep() API before, but also more. Given any such use case is automatically an advanced use case, requiring users to stick to the low-level bpf_prog_load() APIs and manage their own prog FDs is reasonable.

[0] Closes: https://github.com/libbpf/libbpf/issues/299
[1] Closes: https://github.com/libbpf/libbpf/issues/300

Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-25 | libbpf: Add ability to fetch bpf_program's underlying instructions | Andrii Nakryiko | 3 | -4/+46
Add APIs providing read-only access to bpf_program BPF instructions ([0]). This is useful for diagnostics purposes, but it also allows a cleaner support for cloning BPF programs after libbpf did all the FD resolution and CO-RE relocations, subprog instructions appending, etc. Currently, cloning BPF program is possible only through hijacking a half-broken bpf_program__set_prep() API, which doesn't really work well for anything but most primitive programs. For instance, set_prep() API doesn't allow adjusting BPF program load parameters which are necessary for loading fentry/fexit BPF programs (the case where BPF program cloning is a necessity if doing some sort of mass-attachment functionality). Given bpf_program__set_prep() API is set to be deprecated, having a cleaner alternative is a must. libbpf internally already keeps track of linear array of struct bpf_insn, so it's not hard to expose it. The only gotcha is that libbpf previously freed instructions array during bpf_object load time, which would make this API much less useful overall, because in between bpf_object__open() and bpf_object__load() a lot of changes to instructions are done by libbpf. So this patch makes libbpf hold onto prog->insns array even after BPF program loading. I think this is a small price for added functionality and improved introspection of BPF program code. See retsnoop PR ([1]) for how it can be used in practice and code savings compared to relying on bpf_program__set_prep(). [0] Closes: https://github.com/libbpf/libbpf/issues/298 [1] https://github.com/anakryiko/retsnoop/pull/1 Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
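A minimal sketch of the new read-only instruction access, assuming a BPF object file path supplied by the caller (the iteration and printing are illustrative only):

  #include <stdio.h>
  #include <bpf/libbpf.h>
  #include <linux/bpf.h>

  static int dump_insns(const char *obj_path)
  {
          struct bpf_object *obj = bpf_object__open(obj_path);
          struct bpf_program *prog;

          if (!obj)
                  return -1;
          if (bpf_object__load(obj)) {
                  bpf_object__close(obj);
                  return -1;
          }

          bpf_object__for_each_program(prog, obj) {
                  /* with this change, insns stay valid even after load */
                  const struct bpf_insn *insns = bpf_program__insns(prog);
                  size_t i, cnt = bpf_program__insn_cnt(prog);

                  for (i = 0; i < cnt; i++)
                          printf("%s: insn %zu opcode 0x%02x\n",
                                 bpf_program__name(prog), i, insns[i].code);
          }

          bpf_object__close(obj);
          return 0;
  }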
2021-10-25 | libbpf: Fix off-by-one bug in bpf_core_apply_relo() | Andrii Nakryiko | 1 | -1/+1
Fix instruction index validity check which has off-by-one error. Fixes: 3ee4f5335511 ("libbpf: Split bpf_core_apply_relo() into bpf_program independent helper.") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-25 | bpftool: Switch to libbpf's hashmap for PIDs/names references | Quentin Monnet | 7 | -65/+72
In order to show PIDs and names for processes holding references to BPF programs, maps, links, or BTF objects, bpftool creates hash maps to store all relevant information. This commit is part of a set that transitions from the kernel's hash map implementation to the one coming with libbpf. The motivation is to make bpftool less dependent on kernel headers, to ease the path to a potential out-of-tree mirror, like libbpf has. This is the third and final step of the transition, in which we convert the hash maps used for storing the information about the processes holding references to BPF objects (programs, maps, links, BTF), and at last we drop the inclusion of tools/include/linux/hashtable.h. Note: Checkpatch complains about the use of __weak declarations, and about the missing empty lines after the bunch of empty function declarations when compiling without the BPF skeletons (none of these were introduced in this patch). We want to keep things as they are, and the reports should be safe to ignore. Signed-off-by: Quentin Monnet <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-25 | bpftool: Switch to libbpf's hashmap for programs/maps in BTF listing | Quentin Monnet | 2 | -68/+62
In order to show BPF programs and maps using BTF objects when the latter are being listed, bpftool creates hash maps to store all relevant items. This commit is part of a set that transitions from the kernel's hash map implementation to the one coming with libbpf. The motivation is to make bpftool less dependent on kernel headers, to ease the path to a potential out-of-tree mirror, like libbpf has. This commit focuses on the two hash maps used by bpftool when listing BTF objects to store references to programs and maps, and converts them to libbpf's implementation. Signed-off-by: Quentin Monnet <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-25 | bpftool: Switch to libbpf's hashmap for pinned paths of BPF objects | Quentin Monnet | 6 | -84/+111
In order to show pinned paths for BPF programs, maps, or links when listing them with the "-f" option, bpftool creates hash maps to store all relevant paths under the bpffs. So far, it would rely on the kernel implementation (from tools/include/linux/hashtable.h). We can make bpftool rely on libbpf's implementation instead. The motivation is to make bpftool less dependent on kernel headers, to ease the path to a potential out-of-tree mirror, like libbpf has. This commit is the first step of the conversion: the hash maps for pinned paths for programs, maps, and links are converted to libbpf's hashmap.{c,h}. Other hash maps, used for the PIDs of processes holding references to BPF objects, are left unchanged for now. On the build side, this requires adding a dependency on a second header internal to libbpf, and making it a dependency for the bootstrap bpftool version as well. The rest of the changes are a rather straightforward conversion. Signed-off-by: Quentin Monnet <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
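For reference, this is roughly how libbpf's internal hashmap is used; hashmap.h is an internal header, so the exact signatures may vary between libbpf versions, and the id-to-path mapping below is purely illustrative:

  #include <stdio.h>
  #include <stdint.h>
  #include <stdbool.h>
  #include "hashmap.h"          /* libbpf's internal tools/lib/bpf/hashmap.h */

  static size_t id_hash(const void *key, void *ctx)
  {
          return (size_t)(uintptr_t)key;
  }

  static bool id_equal(const void *k1, const void *k2, void *ctx)
  {
          return k1 == k2;
  }

  int main(void)
  {
          /* error handling elided for brevity */
          struct hashmap *map = hashmap__new(id_hash, id_equal, NULL);
          void *path;

          /* e.g. remember the pinned path of BPF object id 42 */
          hashmap__append(map, (void *)(uintptr_t)42, (void *)"/sys/fs/bpf/my_prog");

          if (hashmap__find(map, (void *)(uintptr_t)42, &path))
                  printf("id 42 is pinned at %s\n", (const char *)path);

          hashmap__free(map);
          return 0;
  }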
2021-10-25 | bpftool: Do not expose and init hash maps for pinned path in main.c | Quentin Monnet | 5 | -18/+24
BPF programs, maps, and links can all be listed with their pinned paths by bpftool when the "-f" option is provided. To do so, bpftool builds hash maps containing all pinned paths for each kind of object. These three hash maps are always initialised in main.c and exposed through main.h. There appears to be no particular reason to do so: we can just as well make them static to the files that need them (prog.c, map.c, and link.c respectively), and initialise them only when we want to show objects and the "-f" switch is provided. This may prevent unnecessary memory allocations if the implementation of the hash maps were to change in the future. Signed-off-by: Quentin Monnet <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-25 | bpftool: Remove Makefile dep. on $(LIBBPF) for $(LIBBPF_INTERNAL_HDRS) | Quentin Monnet | 1 | -2/+2
The dependency is only useful to make sure that the $(LIBBPF_HDRS_DIR) directory is created before we try to install locally the required libbpf internal header. Let's create this directory properly instead. This is in preparation of making $(LIBBPF_INTERNAL_HDRS) a dependency to the bootstrap bpftool version, in which case we want no dependency on $(LIBBPF). Signed-off-by: Quentin Monnet <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-25 | selftests/bpf: Split out bpf_verif_scale selftests into multiple tests | Andrii Nakryiko | 1 | -68/+152
Instead of using subtests in the bpf_verif_scale selftest, turn each scale sub-test into its own test. Each subtest is completely independent and just reuses a bit of common test running logic, so the conversion is trivial. For convenience, keep all of the BPF verifier scale tests in one file. This conversion shaves off a significant amount of time when running test_progs in parallel mode. E.g., just running the scale tests (-t verif_scale):

BEFORE
======
Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED
real 0m22.894s
user 0m0.012s
sys 0m22.797s

AFTER
=====
Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED
real 0m12.044s
user 0m0.024s
sys 0m27.869s

That is a ten-second saving right there. test_progs -j is not yet ready to be turned on by default, unfortunately, and some tests fail almost every time, but this is a good improvement nevertheless. Ignoring a few failures, here are the sequential vs. parallel run times when running all tests now:

SEQUENTIAL
==========
Summary: 206/953 PASSED, 4 SKIPPED, 0 FAILED
real 1m5.625s
user 0m4.211s
sys 0m31.650s

PARALLEL
========
Summary: 204/952 PASSED, 4 SKIPPED, 2 FAILED
real 0m35.550s
user 0m4.998s
sys 0m39.890s

Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-25 | selftests/bpf: Mark tc_redirect selftest as serial | Andrii Nakryiko | 1 | -1/+1
It seems to cause a lot of harm to kprobe/tracepoint selftests. Yucong mentioned before that it does manipulate sysfs, which might be the reason. So let's mark it as serial, though ideally it would be less intrusive on the system under test. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-25 | selftests/bpf: Support multiple tests per file | Andrii Nakryiko | 1 | -4/+3
Revamp how test discovery works for test_progs and allow multiple test entries per file. Any global void function with no arguments and serial_test_ or test_ prefix is considered a test. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
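In practice, the convention looks like this in a selftest source file (the test names here are hypothetical):

  /* prog_tests/example.c: with the revamped discovery, every global void
   * function with no arguments whose name starts with test_ or serial_test_
   * becomes its own test entry in test_progs. */

  #include <test_progs.h>

  static void shared_setup(void)
  {
          /* helpers stay static so they are not picked up as tests */
  }

  /* may run in parallel with other tests under test_progs -j */
  void test_example_basic(void)
  {
          shared_setup();
          ASSERT_EQ(1 + 1, 2, "arithmetic");
  }

  /* serial_test_ prefix: never run concurrently with other tests */
  void serial_test_example_exclusive(void)
  {
          shared_setup();
          ASSERT_OK(0, "always ok");
  }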
2021-10-25 | selftests/bpf: Normalize selftest entry points | Andrii Nakryiko | 6 | -15/+13
Ensure that all test entry points are global void functions with no input arguments. Mark a few subtest entry points as static. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-25 | kunit: tool: continue past invalid utf-8 output | Daniel Latypov | 2 | -3/+4
kunit.py currently crashes and fails to parse kernel output if it's not fully valid utf-8. This can come from memory corruption or just from inadvertently printing out binary data as strings. E.g. adding this line into a kunit test

pr_info("\x80")

will cause this exception:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 1961: invalid start byte

We can tell Python how to handle errors, see https://docs.python.org/3/library/codecs.html#error-handlers Unfortunately, it doesn't seem like there's a way to specify this in just one location, so we need to repeat ourselves quite a bit. Specify `errors='backslashreplace'` so we instead:
* print out the offending byte as '\x80'
* try and continue parsing the output
* as long as the TAP lines themselves are valid, we're fine

Fixed spelling/grammar in commit log: Shuah Khan <[email protected]> Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Tested-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
2021-10-25 | perf jevents: Fix some would-be warnings | John Garry | 1 | -6/+4
Before enabling warnings through HOSTCFLAGS, fix the would-be warnings:

  HOSTCC  pmu-events/jevents.o
pmu-events/jevents.c:74:22: warning: no previous prototype for ‘convert’ [-Wmissing-prototypes]
   74 | enum aggr_mode_class convert(const char *aggr_mode)
      |                      ^~~~~~~
pmu-events/jevents.c: In function ‘print_events_table_entry’:
pmu-events/jevents.c:373:8: warning: declaration of ‘topic’ shadows a global declaration [-Wshadow]
  373 | char *topic = pd->topic;
      |       ^~~~~
pmu-events/jevents.c:316:14: note: shadowed declaration is here
  316 | static char *topic;
      |              ^~~~~
pmu-events/jevents.c: In function ‘json_events’:
pmu-events/jevents.c:554:9: warning: declaration of ‘func’ shadows a global declaration [-Wshadow]
  554 | int (*func)(void *data, struct json_event *je),
      | ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pmu-events/jevents.c:85:15: note: shadowed declaration is here
   85 | typedef int (*func)(void *data, struct json_event *je);
      |               ^~~~
pmu-events/jevents.c: In function ‘main’:
pmu-events/jevents.c:1211:25: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
 1211 | char *err_string_ext = "";
      |                        ^~
pmu-events/jevents.c:1304:17: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
 1304 | err_string_ext = " for std arch event";
      |                 ^

Signed-off-by: John Garry <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ian Rogers <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kajol Jain <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf dso: Fix /proc/kcore access on 32 bit systems | James Clark | 1 | -1/+1
Because _LARGEFILE64_SOURCE is set in perf, file offset sizes can be 64 bits. If a workflow needs to open /proc/kcore on a 32 bit system (for example to decode Arm ETM kernel trace) then the size value will be wrapped to 32 bits in the function file_size() at this line: dso->data.file_size = st.st_size; Setting the file_size member to be u64 fixes the issue and allows /proc/kcore to be opened. Reported-by: Denis Nikitin <[email protected]> Signed-off-by: James Clark <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
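A stand-alone sketch of the truncation (the structs here are simplified stand-ins for the dso data, not the actual perf code):

  #define _LARGEFILE64_SOURCE
  #define _FILE_OFFSET_BITS 64
  #include <sys/stat.h>
  #include <stdint.h>
  #include <stdio.h>

  struct dso_data_old { uint32_t file_size; };  /* too narrow on 32-bit builds */
  struct dso_data_new { uint64_t file_size; };  /* u64 always holds st_size */

  int main(void)
  {
          struct stat st;
          struct dso_data_old o;
          struct dso_data_new n;

          if (stat("/proc/kcore", &st))
                  return 1;

          o.file_size = (uint32_t)st.st_size;   /* silently wraps for huge files */
          n.file_size = st.st_size;             /* preserves the full 64-bit size */

          printf("wrapped: %u, full: %llu\n",
                 o.file_size, (unsigned long long)n.file_size);
          return 0;
  }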
2021-10-25 | perf build: Suppress 'rm dlfilter' build message | Adrian Hunter | 1 | -0/+2
The following build message: rm dlfilters/dlfilter-test-api-v0.o is unwanted. The object file is being treated as an intermediate file and being automatically removed. Mark the object file as .SECONDARY to prevent removal and hence the message. Requested-by: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf list: Display hybrid PMU events with cpu type | Jin Yao | 8 | -24/+73
Add a new option '--cputype' to 'perf list' to display core-only PMU events or atom-only PMU events. Each hybrid PMU event has been assigned a PMU name, and this patch compares the PMU name before listing the result. For example:

perf list --cputype atom
...
cache:
  core_reject_l2q.any
       [Counts the number of request that were not accepted into the L2Q because the L2Q is FULL. Unit: cpu_atom]
...

The "Unit: cpu_atom" is displayed in the brief description section to indicate this is an atom event. Signed-off-by: Jin Yao <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Jin Yao <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf powerpc: Add support to expose instruction and data address registers as part of extended regs | Athira Rajeev | 3 | -4/+11
This patch enables presenting the Sampled Instruction Address Register (SIAR) and Sampled Data Address Register (SDAR) SPRs as part of the extended registers for the perf tool. Add these SPRs to sample_reg_mask on the tools side (to use with the -I? option). Reviewed-by: Kajol Jain <[email protected]> Signed-off-by: Athira Rajeev <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nageswara R Sastry <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf powerpc: Refactor the code definition of perf reg extended mask in tools side header file | Athira Rajeev | 1 | -8/+13
PERF_REG_PMU_MASK_300 and PERF_REG_PMU_MASK_31 define the mask value for extended registers. The current definition of these mask values uses hex constants and does not refer to the registers by name, making it less readable. This patch refactors the macro values in the perf tools side header file by or'ing together the actual register value constants. Suggested-by: Michael Ellerman <[email protected]> Reviewed-by: Kajol Jain <[email protected]> Signed-off-by: Athira Rajeev <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Madhavan Srinivasan <[email protected]> Cc: Nageswara R Sastry <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf session: Introduce reader EOF function | Alexey Bayduraev | 1 | -1/+7
Introduce function to check end-of-file status. Reviewed-by: Jiri Olsa <[email protected]> Signed-off-by: Alexey Bayduraev <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Antonov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexei Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Riccardo Mancini <[email protected]> Link: https://lore.kernel.org/r/b3b0e0904da01f9ec84d4ae9368df99ecd231598.1634113027.git.alexey.v.bayduraev@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf session: Introduce reader return codes | Alexey Bayduraev | 1 | -3/+8
Add READER_OK and READER_NODATA return codes to make the code more clear. Suggested-by: Jiri Olsa <[email protected]> Reviewed-by: Jiri Olsa <[email protected]> Reviewed-by: Riccardo Mancini <[email protected]> Signed-off-by: Alexey Bayduraev <[email protected]> Tested-by: Riccardo Mancini <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Antonov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexei Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/5fca481e91c3c5d2ba033d4c6e9b969f8033ab0f.1634113027.git.alexey.v.bayduraev@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf session: Move the event read code to a separate function | Alexey Bayduraev | 1 | -15/+31
Separate the reading code of a single event to a new reader__read_event() function. Suggested-by: Jiri Olsa <[email protected]> Reviewed-by: Jiri Olsa <[email protected]> Reviewed-by: Riccardo Mancini <[email protected]> Signed-off-by: Alexey Bayduraev <[email protected]> Tested-by: Riccardo Mancini <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Antonov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexei Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/ffe570d937138dd24f282978ce7ed9c46a06ff9b.1634113027.git.alexey.v.bayduraev@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf session: Move unmap code to reader__mmap | Alexey Bayduraev | 1 | -17/+13
Move the unmapping code to reader__mmap(), so that the mmap code is located together. Move the head/file_offset computation to reader__mmap(), so all the offset computation is located together and in one place only. Suggested-by: Jiri Olsa <[email protected]> Reviewed-by: Jiri Olsa <[email protected]> Reviewed-by: Riccardo Mancini <[email protected]> Signed-off-by: Alexey Bayduraev <[email protected]> Tested-by: Riccardo Mancini <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Antonov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexei Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/f1c5e17cfa1ecfe912d10b411be203b55d148bc7.1634113027.git.alexey.v.bayduraev@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf session: Move reader map code to a separate function | Alexey Bayduraev | 1 | -15/+28
Move the mapping code into a separate reader__mmap() function. Suggested-by: Jiri Olsa <[email protected]> Reviewed-by: Jiri Olsa <[email protected]> Reviewed-by: Riccardo Mancini <[email protected]> Signed-off-by: Alexey Bayduraev <[email protected]> Tested-by: Riccardo Mancini <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Antonov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexei Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/e445de5bb85bbd91287986802d6ed0ce1b419b5a.1634113027.git.alexey.v.bayduraev@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf session: Move init/release code to separate functions | Alexey Bayduraev | 1 | -13/+32
Separate the init/release code into reader__init() and reader__release_decomp() functions. Remove a duplicate call to ui_progress__init_size(); the same call can be found in __perf_session__process_events(). For multiple traces, ui_progress should be initialized with the total size before reader__init() is called. Suggested-by: Jiri Olsa <[email protected]> Reviewed-by: Jiri Olsa <[email protected]> Reviewed-by: Riccardo Mancini <[email protected]> Signed-off-by: Alexey Bayduraev <[email protected]> Tested-by: Riccardo Mancini <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Antonov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexei Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/8bacf247de220be8e57af1d2b796322175f5e257.1634113027.git.alexey.v.bayduraev@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf session: Introduce decompressor in reader object | Alexey Bayduraev | 2 | -16/+33
Introduce a decompressor data structure with pointers to the decomp objects and to the zstd object. We cannot just move session->zstd_data to decomp_data, as session->zstd_data is not only used for decompression. Add a decompressor data object to the reader object and introduce active_decomp in the perf_session object to select the current decompressor. Thus, decompression can be executed separately for each data file. Reviewed-by: Jiri Olsa <[email protected]> Reviewed-by: Riccardo Mancini <[email protected]> Signed-off-by: Alexey Bayduraev <[email protected]> Tested-by: Riccardo Mancini <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Antonov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexei Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/0eee270cb52aebcbd029c8445d9009fd17709d53.1634113027.git.alexey.v.bayduraev@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | perf session: Move all state items to reader object | Alexey Bayduraev | 1 | -28/+35
We need all the state info about a reader in a separate object to load data from multiple files, so that we can keep multiple readers at the same time. Move all items that need to be kept from reader__process_events() to the reader object. Introduce mmap_cur to keep the current mapping. Suggested-by: Jiri Olsa <[email protected]> Reviewed-by: Jiri Olsa <[email protected]> Reviewed-by: Riccardo Mancini <[email protected]> Signed-off-by: Alexey Bayduraev <[email protected]> Tested-by: Riccardo Mancini <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Antonov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexei Budankov <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/5c7bdebfaadd7fcb729bd999b181feccaa292e8e.1634113027.git.alexey.v.bayduraev@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
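To make the direction of this series easier to follow, here is a rough, illustrative sketch of a self-contained reader state object; apart from mmap_cur and the decompressor, which the commit messages name explicitly, the field names are assumptions rather than the actual perf code:

  #include <stdbool.h>
  #include <stdint.h>

  struct decomp_data;                    /* per-reader decompressor state */

  struct reader_state {
          char               *mmaps[8];       /* mapped chunks of one perf.data file */
          char               *mmap_cur;       /* mapping currently being consumed */
          uint64_t            mmap_size;
          uint64_t            file_offset;    /* file offset of the current mapping */
          uint64_t            file_pos;       /* absolute position of the next event */
          uint64_t            data_size;      /* total size of the event data */
          uint64_t            head;           /* read offset within the mapping */
          struct decomp_data *active_decomp;  /* decompressor used for this file */
  };

  /* once all per-reader state lives in one object, several files can be
   * read side by side by keeping one reader_state per file */
  static bool reader_state__eof(const struct reader_state *st)
  {
          return st->file_pos >= st->data_size;
  }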
2021-10-25 | perf intel-pt: Add support for PERF_RECORD_AUX_OUTPUT_HW_ID | Adrian Hunter | 2 | -5/+87
Originally, software only supported redirecting at most one PEBS event to Intel PT (PEBS-via-PT) because it was not able to differentiate one event from another. To overcome that, add support for the PERF_RECORD_AUX_OUTPUT_HW_ID side-band event. Committer notes: Cast the pointer arg to for_each_set_bit() to (unsigned long *), to fix the build on 32-bit systems. Reviewed-by: Alexander Shishkin <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Signed-off-by: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-10-25 | selftests: x86: fix [-Wstringop-overread] warn in test_process_vm_readv() | Shuah Khan | 1 | -1/+1
Fix the following [-Wstringop-overread] warning by passing in the variable instead of the value:

test_vsyscall.c: In function ‘test_process_vm_readv’:
test_vsyscall.c:500:22: warning: ‘__builtin_memcmp_eq’ specified bound 4096 exceeds source size 0 [-Wstringop-overread]
  500 |         if (!memcmp(buf, (const void *)0xffffffffff600000, 4096)) {
      |              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Signed-off-by: Shuah Khan <[email protected]>
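An illustrative sketch of the change described above (this paraphrases the pattern; it is not a quote of test_vsyscall.c):

  #include <string.h>

  static char buf[4096];

  static int vsyscall_page_matches(void)
  {
          /* Before: a literal address lets the compiler assume a zero-sized
           * source object and emit -Wstringop-overread:
           *   if (!memcmp(buf, (const void *)0xffffffffff600000, 4096)) ...
           */

          /* After: compare against a pointer variable instead of the value. */
          const void *vsyscall = (const void *)0xffffffffff600000UL;

          return !memcmp(buf, vsyscall, 4096);
  }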
2021-10-25 | selftests: mlxsw: Reduce test run time | Ido Schimmel | 2 | -18/+20
Instead of iterating over all the available trap policers, only perform the tests with three policers: The first, the last and the one in the middle of the range. On a Spectrum-3 system, this reduces the run time from almost an hour to a few minutes. Signed-off-by: Ido Schimmel <[email protected]> Reviewed-by: Petr Machata <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2021-10-25 | selftests: mlxsw: Use permanent neighbours instead of reachable ones | Ido Schimmel | 1 | -11/+11
The nexthop objects tests configure dummy reachable neighbours so that the nexthops will have a MAC address and be programmed to the device. Since these are dummy reachable neighbours, they can be transitioned by the kernel to a failed state if they are around for too long. This can happen, for example, if the "TIMEOUT" variable is configured with a too high value. Make the tests more robust by configuring the neighbours as permanent, so that the tests do not depend on the configured timeout value. Signed-off-by: Ido Schimmel <[email protected]> Reviewed-by: Petr Machata <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2021-10-25 | selftests: mlxsw: Add helpers for skipping selftests | Petr Machata | 8 | -24/+81
A number of mlxsw-specific selftests currently detect whether they are run on a compatible machine, and bail out silently when not. These checks are however done in a somewhat impenetrable manner, by directly comparing PCI IDs against a blacklist or a whitelist, and bailing out silently if the machine is not compatible. Instead, add a helper, mlxsw_only_on_spectrum(), which allows specifying the supported machines in a human-readable manner. If the current machine is incompatible, the helper emits a SKIP message and returns an error code, based on which the caller can gracefully bail out in a suitable way. This allows more readable conditions such as:

mlxsw_only_on_spectrum 2+ || return

Convert all existing open-coded guards to the new helper. Also add two new guards to do_mark_test() and do_drop_test(), which are supported only on Spectrum-2+ but lacked the corresponding check. Signed-off-by: Petr Machata <[email protected]> Signed-off-by: Ido Schimmel <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2021-10-25 | selftests: net: dsa: add a stress test for unlocked FDB operations | Vladimir Oltean | 1 | -0/+47
This test is a bit strange in that it is perhaps more manual than others: it does not transmit a clear OK/FAIL verdict, because user space does not have synchronous feedback from the kernel. If a hardware access fails, it is in deferred context. Nonetheless, on sja1105 I have used it successfully to find and solve a concurrency issue, so it can be used as a starting point for other driver maintainers too. Signed-off-by: Vladimir Oltean <[email protected]> Reviewed-by: Florian Fainelli <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2021-10-25 | selftests: lib: forwarding: allow tests to not require mz and jq | Vladimir Oltean | 1 | -2/+8
These programs are useful, but not all selftests require them. Additionally, on embedded boards without package management (things like buildroot), installing mausezahn or jq is not always as trivial as downloading a package from the web. So it is actually a bit annoying to require programs that are not used. Introduce options that can be set by scripts to not enforce these dependencies. For compatibility, default to "yes". Cc: Nikolay Aleksandrov <[email protected]> Cc: Ido Schimmel <[email protected]> Cc: Guillaume Nault <[email protected]> Cc: Po-Hsu Lin <[email protected]> Signed-off-by: Vladimir Oltean <[email protected]> Reviewed-by: Florian Fainelli <[email protected]> Reviewed-by: Ido Schimmel <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2021-10-25 | Revert "Merge branch 'dsa-rtnl'" | David S. Miller | 2 | -55/+2
This reverts commit 965e6b262f48257dbdb51b565ecfd84877a0ab5f, reversing changes made to 4d98bb0d7ec2d0b417df6207b0bafe1868bad9f8.
2021-10-25 | lkdtm/bugs: Check that a per-task stack canary exists | Kees Cook | 2 | -0/+2
Introduce REPORT_STACK_CANARY to check for differing stack canaries between two processes (i.e. that an architecture is correctly implementing per-task stack canaries), using the task_struct canary as the hint to locate in the stack. Requires that one of the processes being tested not be pid 1. Cc: Ard Biesheuvel <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Signed-off-by: Kees Cook <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2021-10-25 | selftests/lkdtm: Add way to repeat a test | Kees Cook | 1 | -1/+9
Some LKDTM tests need to be run more than once (usually to setup and then later trigger). Until now, the only case was the SOFT_LOCKUP test, which wasn't useful to run in the bulk selftests. The coming stack canary checking needs to run twice, so support this with a new test output prefix "repeat". Signed-off-by: Kees Cook <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2021-10-24 | selftests: net: dsa: add a stress test for unlocked FDB operations | Vladimir Oltean | 1 | -0/+47
This test is a bit strange in that it is perhaps more manual than others: it does not transmit a clear OK/FAIL verdict, because user space does not have synchronous feedback from the kernel. If a hardware access fails, it is in deferred context. Nonetheless, on sja1105 I have used it successfully to find and solve a concurrency issue, so it can be used as a starting point for other driver maintainers too. Signed-off-by: Vladimir Oltean <[email protected]> Reviewed-by: Florian Fainelli <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2021-10-24 | selftests: lib: forwarding: allow tests to not require mz and jq | Vladimir Oltean | 1 | -2/+8
These programs are useful, but not all selftests require them. Additionally, on embedded boards without package management (things like buildroot), installing mausezahn or jq is not always as trivial as downloading a package from the web. So it is actually a bit annoying to require programs that are not used. Introduce options that can be set by scripts to not enforce these dependencies. For compatibility, default to "yes". Cc: Nikolay Aleksandrov <[email protected]> Cc: Ido Schimmel <[email protected]> Cc: Guillaume Nault <[email protected]> Cc: Po-Hsu Lin <[email protected]> Signed-off-by: Vladimir Oltean <[email protected]> Reviewed-by: Florian Fainelli <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2021-10-22 | libbpf: Fix BTF header parsing checks | Andrii Nakryiko | 1 | -3/+9
Original code assumed fixed and correct BTF header length. That's not always the case, though, so fix this bug with a proper additional check. And use actual header length instead of sizeof(struct btf_header) in sanity checks. Fixes: 8a138aed4a80 ("bpf: btf: Add BTF support to libbpf") Reported-by: Evgeny Vereshchagin <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-22 | libbpf: Fix overflow in BTF sanity checks | Andrii Nakryiko | 1 | -2/+2
btf_header's str_off+str_len or type_off+type_len can overflow as they are u32s. This will lead to bypassing the sanity checks during BTF parsing, resulting in crashes afterwards. Fix by using 64-bit signed integers for comparison. Fixes: d8123624506c ("libbpf: Fix BTF data layout checks and allow empty BTF") Reported-by: Evgeny Vereshchagin <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
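The overflow is easy to illustrate with a minimal sketch of the unsafe and safe bounds checks (this is not the literal libbpf code):

  #include <stdbool.h>
  #include <stdint.h>

  struct hdr_sketch {
          uint32_t type_off, type_len;
          uint32_t str_off, str_len;
  };

  /* Unsafe: str_off + str_len is 32-bit arithmetic and can wrap around, so a
   * crafted header passes the check and later reads go out of bounds. */
  static bool str_section_ok_buggy(const struct hdr_sketch *h, uint32_t data_size)
  {
          return h->str_off + h->str_len <= data_size;
  }

  /* Safe: widen to 64 bits before adding, as the fix does. */
  static bool str_section_ok_fixed(const struct hdr_sketch *h, uint32_t data_size)
  {
          return (uint64_t)h->str_off + h->str_len <= data_size;
  }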
2021-10-22 | selftests/bpf: Add BTF_KIND_DECL_TAG typedef example in tag.c | Yonghong Song | 1 | -2/+7
Change value type in progs/tag.c to a typedef with a btf_decl_tag. With `bpftool btf dump file tag.o`, we have:

...
[14] TYPEDEF 'value_t' type_id=17
[15] DECL_TAG 'tag1' type_id=14 component_idx=-1
[16] DECL_TAG 'tag2' type_id=14 component_idx=-1
[17] STRUCT '(anon)' size=8 vlen=2
        'a' type_id=2 bits_offset=0
        'b' type_id=2 bits_offset=32
...

The btf_tag selftest also succeeded:

$ ./test_progs -t tag
#21 btf_tag:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
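Judging from the dump above, the source-level change in progs/tag.c plausibly has this shape (a reconstruction from the BTF output, not a quote of the selftest):

  #define __tag1 __attribute__((btf_decl_tag("tag1")))
  #define __tag2 __attribute__((btf_decl_tag("tag2")))

  /* the typedef becomes TYPEDEF 'value_t', and the two attributes become
   * DECL_TAG entries whose type_id points back at the typedef */
  typedef struct {
          int a;
          int b;
  } value_t __tag1 __tag2;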
2021-10-22 | selftests/bpf: Test deduplication for BTF_KIND_DECL_TAG typedef | Yonghong Song | 1 | -6/+41
Add unit tests for deduplication of BTF_KIND_DECL_TAG to typedef types. Also changed a few comments from "tag" to "decl_tag" to match BTF_KIND_DECL_TAG enum value name. Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-22 | selftests/bpf: Add BTF_KIND_DECL_TAG typedef unit tests | Yonghong Song | 1 | -0/+36
Test good and bad variants of typedef BTF_KIND_DECL_TAG encoding. Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-10-22 | selftests/bpf: Fix flow dissector tests | Stanislav Fomichev | 3 | -20/+18
- update custom loader to search by name, not section name
- update bpftool commands to use proper pin path

Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
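For context, looking a program up by its function name rather than by section name typically looks like this with libbpf (the program names below are illustrative):

  #include <bpf/libbpf.h>

  static struct bpf_program *find_dissector(struct bpf_object *obj)
  {
          /* Before: bpf_object__find_program_by_title(obj, "flow_dissector")
           * keyed on the section name, which is no longer unique. */

          /* After: key on the program's C function name instead. */
          return bpf_object__find_program_by_name(obj, "_dissect");
  }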
2021-10-22 | libbpf: Use func name when pinning programs with LIBBPF_STRICT_SEC_NAME | Stanislav Fomichev | 2 | -2/+14
We can't use section names anymore because they are not unique, and pinning objects with multiple programs sharing the same progtype/secname will fail. [0] Closes: https://github.com/libbpf/libbpf/issues/273 Fixes: 33a2c75c55e2 ("libbpf: add internal pin_name") Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
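A sketch of what this means for callers that opt in to the stricter behaviour (the pin path and flow are illustrative):

  #include <bpf/libbpf.h>
  #include <bpf/libbpf_legacy.h>

  static int open_load_pin(const char *obj_path)
  {
          struct bpf_object *obj;

          /* Opt in before opening the object: with LIBBPF_STRICT_SEC_NAME the
           * internal pin_name is derived from the program's function name, so
           * two programs sharing a section name no longer collide. */
          libbpf_set_strict_mode(LIBBPF_STRICT_SEC_NAME);

          obj = bpf_object__open(obj_path);
          if (!obj)
                  return -1;
          if (bpf_object__load(obj)) {
                  bpf_object__close(obj);
                  return -1;
          }

          /* each program ends up pinned at <path>/<function name> */
          return bpf_object__pin_programs(obj, "/sys/fs/bpf/progs");
  }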
2021-10-22 | bpftool: Avoid leaking the JSON writer prepared for program metadata | Quentin Monnet | 1 | -7/+9
Bpftool creates a new JSON object for writing program metadata in plain text mode, regardless of metadata being present or not. Then this writer is freed if any metadata has been found and printed, but it leaks otherwise. We cannot destroy the object unconditionally, because the destructor prints an undesirable line break. Instead, make sure the writer is created only after we have found program metadata to print. Found with valgrind. Fixes: aff52e685eb3 ("bpftool: Support dumping metadata") Signed-off-by: Quentin Monnet <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
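The shape of the fix, sketched with bpftool's json_writer helpers (simplified; the real code sits in bpftool's program metadata dumping path):

  #include <stdbool.h>
  #include <stdio.h>
  #include "json_writer.h"      /* bpftool's JSON writer helpers */

  static void show_metadata_plain(FILE *out, bool have_metadata)
  {
          json_writer_t *w;

          /* Create the writer only once we know there is metadata to print, so
           * the plain-text path neither leaks it nor emits a stray line break
           * from an unconditional jsonw_destroy(). */
          if (!have_metadata)
                  return;

          w = jsonw_new(out);
          if (!w)
                  return;

          jsonw_start_object(w);
          /* emit the metadata map entries here */
          jsonw_end_object(w);
          jsonw_destroy(&w);
  }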
2021-10-22 | selftests/bpf: Switch to new btf__type_cnt/btf__raw_data APIs | Hengqi Chen | 8 | -22/+22
Replace the calls to btf__get_nr_types/btf__get_raw_data in selftests with new APIs btf__type_cnt/btf__raw_data. The old APIs will be deprecated in libbpf v0.7+. Signed-off-by: Hengqi Chen <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
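The migration is mechanical; a short sketch of the before/after (note that btf__type_cnt() includes the implicit 'void' type 0, so the loop bound changes shape):

  #include <stdio.h>
  #include <bpf/btf.h>

  static void walk_types(const struct btf *btf)
  {
          __u32 id, n, raw_size;
          const void *raw;

          /* Old (to be deprecated in libbpf v0.7+):
           *   for (id = 1; id <= btf__get_nr_types(btf); id++) ...
           *   raw = btf__get_raw_data(btf, &raw_size);
           */

          /* New: */
          n = btf__type_cnt(btf);
          for (id = 1; id < n; id++) {
                  const struct btf_type *t = btf__type_by_id(btf, id);

                  printf("[%u] kind %u\n", id, btf_kind(t));
          }

          raw = btf__raw_data(btf, &raw_size);
          if (raw)
                  printf("raw BTF size: %u bytes\n", raw_size);
  }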