path: root/tools/lib
Age | Commit message | Author | Files | Lines
2021-10-01Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-nextJakub Kicinski7-362/+631
Daniel Borkmann says: ==================== bpf-next 2021-10-02 We've added 85 non-merge commits during the last 15 day(s) which contain a total of 132 files changed, 13779 insertions(+), 6724 deletions(-). The main changes are: 1) Massive update on test_bpf.ko coverage for JITs as preparatory work for an upcoming MIPS eBPF JIT, from Johan Almbladh. 2) Add a batched interface for RX buffer allocation in AF_XDP buffer pool, with driver support for i40e and ice from Magnus Karlsson. 3) Add legacy uprobe support to libbpf to complement recently merged legacy kprobe support, from Andrii Nakryiko. 4) Add bpf_trace_vprintk() as variadic printk helper, from Dave Marchevsky. 5) Support saving the register state in verifier when spilling <8byte bounded scalar to the stack, from Martin Lau. 6) Add libbpf opt-in for stricter BPF program section name handling as part of libbpf 1.0 effort, from Andrii Nakryiko. 7) Add a document to help clarifying BPF licensing, from Alexei Starovoitov. 8) Fix skel_internal.h to propagate errno if the loader indicates an internal error, from Kumar Kartikeya Dwivedi. 9) Fix build warnings with -Wcast-function-type so that the option can later be enabled by default for the kernel, from Kees Cook. 10) Fix libbpf to ignore STT_SECTION symbols in legacy map definitions as it otherwise errors out when encountering them, from Toke Høiland-Jørgensen. 11) Teach libbpf to recognize specialized maps (such as for perf RB) and internally remove BTF type IDs when creating them, from Hengqi Chen. 12) Various fixes and improvements to BPF selftests. ==================== Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2021-10-01libbpf: Support uniform BTF-defined key/value specification across all BPF mapsHengqi Chen1-0/+24
A bunch of BPF maps do not support specifying BTF types for key and value. This is non-uniform and inconvenient[0]. Currently, libbpf uses retry logic which removes BTF type IDs when BPF map creation fails. Instead of retrying, this commit recognizes those specialized maps and removes the BTF type IDs when creating the BPF map. [0] Closes: https://github.com/libbpf/libbpf/issues/355 Signed-off-by: Hengqi Chen <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
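For context, a minimal sketch of a BTF-defined map that benefits from this change: a perf event array declared with __type() annotations, which libbpf now accepts and quietly strips the BTF type IDs from at map-creation time (map name and sizes below are made up):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
            __uint(max_entries, 64);
            __type(key, int);    /* BTF key type, now accepted uniformly */
            __type(value, int);  /* BTF value type, dropped internally at map creation */
    } events SEC(".maps");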
2021-10-01libbpf: Fix memory leak in strsetAndrii Nakryiko1-0/+1
Free struct strset itself, not just its internal parts. Fixes: 90d76d3ececc ("libbpf: Extract internal set-of-strings datastructure APIs") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-30Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netJakub Kicinski1-1/+7
drivers/net/phy/bcm7xxx.c d88fd1b546ff ("net: phy: bcm7xxx: Fixed indirect MMD operations") f68d08c437f9 ("net: phy: bcm7xxx: Add EPHY entry for 72165") net/sched/sch_api.c b193e15ac69d ("net: prevent user from passing illegal stab size") 69508d43334e ("net_sched: Use struct_size() and flex_array_size() helpers") Both cases trivial - adjacent code additions. Signed-off-by: Jakub Kicinski <[email protected]>
2021-09-30libbpf: Fix segfault in light skeleton for objects without BTFKumar Kartikeya Dwivedi1-1/+2
When fed an empty BPF object, bpftool gen skeleton -L crashes at btf__set_fd() since it assumes presence of obj->btf, however for the sequence below clang adds no .BTF section (hence no BTF). Reproducer: $ touch a.bpf.c $ clang -O2 -g -target bpf -c a.bpf.c $ bpftool gen skeleton -L a.bpf.o /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ /* THIS FILE IS AUTOGENERATED! */ struct a_bpf { struct bpf_loader_ctx ctx; Segmentation fault (core dumped) The same occurs for files compiled without BTF info, i.e. without clang's -g flag. Fixes: 67234743736a (libbpf: Generate loader program out of BPF ELF file.) Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-29libbpf: Fix skel_internal.h to set errno on loader retval < 0Kumar Kartikeya Dwivedi1-2/+4
When the loader indicates an internal error (result of a checked bpf system call), it returns the result in attr.test.retval. However, tests that rely on ASSERT_OK_PTR on NULL (returned from light skeleton) may miss that NULL denotes an error if errno is set to 0. This would result in skel pointer being NULL, while ASSERT_OK_PTR returning 1, leading to a SEGV on dereference of skel, because libbpf_get_error relies on the assumption that errno is always set in case of error for ptr == NULL. In particular, this was observed for the ksyms_module test. When executed using `./test_progs -t ksyms`, prior tests manipulated errno and the test didn't crash when it failed at ksyms_module load, while using `./test_progs -t ksyms_module` crashed due to errno being untouched. Fixes: 67234743736a (libbpf: Generate loader program out of BPF ELF file.) Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-29libbpf: Properly ignore STT_SECTION symbols in legacy map definitionsToke Høiland-Jørgensen1-0/+2
The previous patch to ignore STT_SECTION symbols only added the ignore condition to one of the two loops that iterate over the symbols in the 'maps' section. This fails if there's more than one map definition in the 'maps' section, because the subsequent modulus check will fail, resulting in error messages like: libbpf: elf: unable to determine legacy map definition size in ./xdpdump_xdp.o Fix this by also ignoring STT_SECTION in the first loop. Fixes: c3e8c44a9063 ("libbpf: Ignore STT_SECTION symbols in 'maps' section") Signed-off-by: Toke Høiland-Jørgensen <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
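A hedged sketch of the check being applied when walking the symbols (simplified; the real libbpf code iterates differently, and the helper name here is made up):

    #include <stdbool.h>
    #include <gelf.h>

    /* Skip symbols that don't describe a legacy map definition, notably the
     * STT_SECTION symbol some binutils versions emit for the 'maps' section. */
    static bool ignore_maps_symbol(Elf_Data *symbols, int idx)
    {
            GElf_Sym sym;

            if (!gelf_getsym(symbols, idx, &sym))
                    return true;
            return GELF_ST_TYPE(sym.st_info) == STT_SECTION;
    }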
2021-09-29libbpf: Make gen_loader data aligned.Alexei Starovoitov1-1/+6
Align gen_loader data to 8 byte boundary to make sure union bpf_attr, bpf_insns and other structs are aligned. Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-28selftests/bpf: Switch sk_lookup selftests to strict SEC("sk_lookup") useAndrii Nakryiko1-1/+1
Update "sk_lookup/" definition to be a stand-alone type specifier, with backwards-compatible prefix match logic in non-libbpf-1.0 mode. Currently in selftests all the "sk_lookup/<whatever>" uses just use <whatever> for duplicated unique name encoding, which is redundant as BPF program's name (C function name) uniquely and descriptively identifies the intended use for such BPF programs. With libbpf's SEC_DEF("sk_lookup") definition updated, switch existing sk_lookup programs to use "unqualified" SEC("sk_lookup") section names, with no random text after it. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Dave Marchevsky <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-28libbpf: Add opt-in strict BPF program section name handling logicAndrii Nakryiko2-46/+99
Implement strict ELF section name handling for BPF programs. It utilizes the `libbpf_set_strict_mode()` framework and adds a new flag: LIBBPF_STRICT_SEC_NAME. If this flag is set, libbpf will enforce exact section name matching for a lot of program types that previously allowed just a partial prefix match. E.g., if previously SEC("xdp_whatever_i_want") was allowed, now in strict mode only SEC("xdp") will be accepted, which makes SEC("") definitions cleaner and more structured. SEC() now won't be used as yet another way to uniquely encode a BPF program identifier (for that the C function name is better and is guaranteed to be unique within a bpf_object). Now SEC() is strictly a BPF program type and, depending on program type, an extra load/attach parameter specification. Libbpf completely supports multiple BPF programs in the same ELF section, so multiple BPF programs of the same type/specification easily co-exist within the same bpf_object scope. Additionally, a new (for now internal) convention is introduced: a section name can be a stand-alone exact BPF program type specifier, but can also have extra parameters after a '/' delimiter. An example of such a section is "struct_ops", which can be specified by itself, but also allows specifying the intended operation to be attached to, e.g., "struct_ops/dctcp_init". Note that "struct_ops_some_op" is not allowed. Such a section definition is specified as "struct_ops+". This change is part of the libbpf 1.0 effort ([0], [1]). [0] Closes: https://github.com/libbpf/libbpf/issues/271 [1] https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#stricter-and-more-uniform-bpf-program-section-name-sec-handling Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Dave Marchevsky <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
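A sketch of how an application opts in (the flag name comes from this commit; error handling elided):

    #include <bpf/libbpf.h>
    #include <bpf/libbpf_legacy.h>

    int main(void)
    {
            /* Enforce exact SEC() names; can be OR'ed with other LIBBPF_STRICT_*
             * flags, or LIBBPF_STRICT_ALL for full libbpf-1.0 behavior. */
            libbpf_set_strict_mode(LIBBPF_STRICT_SEC_NAME);

            /* ... bpf_object__open()/load() as usual ... */
            return 0;
    }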
2021-09-28libbpf: Complete SEC() table unification for BPF_APROG_SEC/BPF_EAPROG_SECAndrii Nakryiko1-101/+35
Complete SEC() table refactoring towards unified form by rewriting BPF_APROG_SEC and BPF_EAPROG_SEC definitions with SEC_DEF(SEC_ATTACHABLE_OPT) (for optional expected_attach_type) and SEC_DEF(SEC_ATTACHABLE) (mandatory expected_attach_type), respectively. Drop BPF_APROG_SEC, BPF_EAPROG_SEC, and BPF_PROG_SEC_IMPL macros after that, leaving SEC_DEF() macro as the only one used. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Dave Marchevsky <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-28libbpf: Refactor ELF section handler definitionsAndrii Nakryiko1-111/+84
Refactor the ELF section handler definitions table to use a set of flags and a unified SEC_DEF() macro. This allows for a more succinct and table-like set of definitions, and makes it easier to extend the logic without adding more verbosity (this is utilized in later patches in the series). This approach also makes the libbpf-internal program pre-load callback not rely on the bpf_sec_def definition, which demonstrates that future pluggable ELF section handlers will be able to achieve a similar level of integration without libbpf having to expose extra types and APIs. For starters, update SEC_DEF() definitions and make them more succinct. Also convert BPF_PROG_SEC() and BPF_APROG_COMPAT() definitions to a common SEC_DEF() use. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Dave Marchevsky <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-28libbpf: Reduce reliance of attach_fns on sec_def internalsAndrii Nakryiko2-18/+30
Move closer to not relying on bpf_sec_def internals that won't be part of the public API once pluggable SEC() handlers are allowed. Drop the pre-calculated prefix length, and don't rely on its availability in various helpers. Also minimize reliance on knowing bpf_sec_def's prefix in the few places where section prefix shortcuts are supported (e.g., tp vs tracepoint, raw_tp vs raw_tracepoint). Given that checking a string for a given string-constant prefix is such a common operation, and so annoying to do in pure C code, add a small macro helper, str_has_pfx(), and reuse it throughout libbpf.c wherever prefix comparison is performed. With __builtin_constant_p() the helper can check a string for a given prefix, where the prefix is either a string literal (or a compile-time known string due to compiler optimization) or just a runtime string pointer; this saves a lot of typing and string literal duplication. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Dave Marchevsky <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
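A sketch of the helper described above (close to, but not guaranteed identical to, libbpf's internal definition):

    #include <string.h>

    /* Check whether 'str' starts with 'pfx'. If 'pfx' is a string literal,
     * __builtin_constant_p() lets sizeof() provide its length at compile time;
     * otherwise fall back to strlen(). */
    #define str_has_pfx(str, pfx) \
            (strncmp(str, pfx, __builtin_constant_p(pfx) ? sizeof(pfx) - 1 : strlen(pfx)) == 0)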
2021-09-28libbpf: Refactor internal sec_def handling to enable pluggabilityAndrii Nakryiko1-42/+87
Refactor internals of libbpf to allow adding custom SEC() handling logic easily from outside of libbpf. To that effect, each SEC()-handling registration sets the mandatory program type/expected attach type for a given prefix and can provide three callbacks called at different points of the BPF program lifetime: - an init callback, called right after bpf_program is initialized and prog_type/expected_attach_type is set. This happens during the bpf_object__open() step, close to the very end of constructing bpf_object, so all the libbpf APIs for querying and updating bpf_program properties should be available; - a pre-load callback, called right before the BPF_PROG_LOAD command is issued to the kernel. This callback has the ability to set both bpf_program properties and program load attributes, overriding and augmenting the standard libbpf handling of them; - an optional auto-attach callback, which makes a given SEC() handler support auto-attachment of a BPF program through the bpf_program__attach() API and/or the BPF skeleton's <skel>__attach() method. Each callback gets a `long cookie` parameter passed in, which is specified during SEC() handling. This can be used by callbacks to look up whatever additional information is necessary. This is not yet completely ready to be exposed to the outside world, mainly due to the non-public nature of struct bpf_prog_load_params. Instead of making it part of the public API, we'll wait until the planned low-level libbpf API improvements for BPF_PROG_LOAD and other typical bpf() syscall APIs, at which point we'll have a public, probably OPTS-based, way to fully specify BPF program load parameters, which will be used as an interface for custom pre-load callbacks. But this change itself is already a good first step to unify the BPF program handling logic even within libbpf itself. As one example, all the extra per-program type handling (sleepable bit, attach_btf_id resolution, unsetting optional expected attach type) is now more obvious and is gathered in one place. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Dave Marchevsky <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
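Purely illustrative shape of such a registration; the struct and field names below are hypothetical stand-ins, not libbpf's actual internal definitions, which this commit deliberately keeps private:

    #include <bpf/libbpf.h>

    struct sec_handler_sketch {
            const char *sec;                          /* e.g. "kprobe/" */
            enum bpf_prog_type prog_type;             /* mandatory program type */
            enum bpf_attach_type expected_attach_type;
            long cookie;                              /* passed to every callback */

            /* runs right after bpf_program init, during bpf_object__open() */
            int (*init_fn)(struct bpf_program *prog, long cookie);
            /* runs right before BPF_PROG_LOAD; may tweak load attributes */
            int (*preload_fn)(struct bpf_program *prog, long cookie);
            /* optional: enables bpf_program__attach() / <skel>__attach() */
            struct bpf_link *(*attach_fn)(const struct bpf_program *prog, long cookie);
    };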
2021-09-28libbpf: Add "tc" SEC_DEF which is a better name for "classifier"Andrii Nakryiko1-0/+1
As argued in [0], add a "tc" ELF section definition for the SCHED_CLS BPF program type. "classifier" is misleading terminology and should be migrated away from. [0] https://lore.kernel.org/bpf/[email protected]/ Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-28Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpfDavid S. Miller1-1/+7
Daniel Borkmann says: ==================== pull-request: bpf 2021-09-28 The following pull-request contains BPF updates for your *net* tree. We've added 10 non-merge commits during the last 14 day(s) which contain a total of 11 files changed, 139 insertions(+), 53 deletions(-). The main changes are: 1) Fix MIPS JIT jump code emission for too large offsets, from Piotr Krysiuk. 2) Fix x86 JIT atomic/fetch emission when dst reg maps to rax, from Johan Almbladh. 3) Fix cgroup_sk_alloc corner case when called from interrupt, from Daniel Borkmann. 4) Fix segfault in libbpf's linker for objects without BTF, from Kumar Kartikeya Dwivedi. 5) Fix bpf_jit_charge_modmem for applications with CAP_BPF, from Lorenz Bauer. 6) Fix return value handling for struct_ops BPF programs, from Hou Tao. 7) Various fixes to BPF selftests, from Jiri Benc. ==================== Signed-off-by: David S. Miller <[email protected]>
2021-09-28libbpf: Fix segfault in static linker for objects without BTFKumar Kartikeya Dwivedi1-1/+7
When a BPF object is compiled without BTF info (without -g), trying to link such objects using bpftool causes a SIGSEGV due to btf__get_nr_types accessing obj->btf, which is NULL. Fix this by checking for the NULL pointer and returning an error. Reproducer: $ cat a.bpf.c extern int foo(void); int bar(void) { return foo(); } $ cat b.bpf.c int foo(void) { return 0; } $ clang -O2 -target bpf -c a.bpf.c $ clang -O2 -target bpf -c b.bpf.c $ bpftool gen obj out a.bpf.o b.bpf.o Segmentation fault (core dumped) After fix: $ bpftool gen obj out a.bpf.o b.bpf.o libbpf: failed to find BTF info for object 'a.bpf.o' Error: failed to link 'a.bpf.o': Unknown error -22 (-22) Fixes: a46349227cd8 (libbpf: Add linker extern resolution support for functions and global variables) Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-27libbpf: Ignore STT_SECTION symbols in 'maps' sectionToke Høiland-Jørgensen1-2/+3
When parsing legacy map definitions, libbpf would error out when encountering an STT_SECTION symbol. This becomes a problem because some versions of binutils will produce SECTION symbols for every section when processing an ELF file, so BPF files run through 'strip' will end up with such symbols, making libbpf refuse to load them. There's not really any reason why erroring out is strictly necessary, so change libbpf to just ignore SECTION symbols when parsing the ELF. Signed-off-by: Toke Høiland-Jørgensen <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-23Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netJakub Kicinski1-23/+41
net/mptcp/protocol.c 977d293e23b4 ("mptcp: ensure tx skbs always have the MPTCP ext") efe686ffce01 ("mptcp: ensure tx skbs always have the MPTCP ext") same patch merged in both trees, keep net-next. Signed-off-by: Jakub Kicinski <[email protected]>
2021-09-21libbpf: Add legacy uprobe attaching supportAndrii Nakryiko1-8/+122
Similarly to the recently added legacy kprobe attach interface support through tracefs, support attaching uprobes using the legacy interface if the host kernel doesn't support the newer FD-based interface. For uprobes, the event name consists of a "libbpf_" prefix, PID, sanitized binary path, and offset within that binary. Structurally the code is aligned with the kprobe logic refactoring in the previous patch. struct bpf_link_perf is re-used and the same legacy_probe_name and legacy_is_retprobe fields are used to ensure proper cleanup on bpf_link__destroy(). Users should be aware, though, that on old kernels which don't support the FD-based interface for kprobe/uprobe attachment, if the application crashes before bpf_link__destroy() is called, legacy uprobe events will be left in tracefs. This is the same limitation as with the legacy kprobe interface. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
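A hedged sketch of what the legacy tracefs interface amounts to; the tracefs path, group name, and event-name scheme below are assumptions based on the commit description and the uprobetracer format, not libbpf's exact code:

    #include <stdio.h>
    #include <stddef.h>

    /* Append a probe definition such as
     *   p:uprobes/libbpf_1234__usr_bin_app_0x1234 /usr/bin/app:0x1234
     * to tracefs; the probe must later be removed with a "-:uprobes/<event>"
     * line, which is why a crash before cleanup leaves it behind. */
    static int add_legacy_uprobe(const char *event_name, const char *binary_path,
                                 size_t offset)
    {
            FILE *f = fopen("/sys/kernel/debug/tracing/uprobe_events", "a");

            if (!f)
                    return -1;
            fprintf(f, "p:uprobes/%s %s:0x%zx\n", event_name, binary_path, offset);
            return fclose(f) == 0 ? 0 : -1;
    }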
2021-09-21libbpf: Refactor and simplify legacy kprobe codeAndrii Nakryiko2-73/+88
Refactor legacy kprobe handling code to follow the same logic as the legacy uprobe logic added in the next patch: - add an append_to_file() helper that makes it simpler to work with the tracefs file-based interface for creating and deleting probes; - move probe/event name generation outside of the code that adds/removes it, which simplifies bookkeeping significantly; - change the probe name format to start with a "libbpf_" prefix and include the offset within the kernel function; - switch 'unsigned long' to 'size_t' for specifying kprobe offsets, which is consistent with how uprobes define that, simplifies printf()-ing internally, and also avoids unnecessary complications on architectures where sizeof(long) != sizeof(void *). This patch also implicitly fixes the invalid open() error handling present in poke_kprobe_events(), which this patch removes. Fixes: ca304b40c20d ("libbpf: Introduce legacy kprobe events support") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-21libbpf: Fix memory leak in legacy kprobe attach logicAndrii Nakryiko1-3/+7
In some error scenarios legacy_probe string won't be free()'d. Fix this. This was reported by Coverity static analysis. Fixes: ca304b40c20d ("libbpf: Introduce legacy kprobe events support") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-20libbpf: Add doc comments in libbpf.hGrant Seltzer1-8/+57
This adds comments above functions in libbpf.h which document their uses. These comments are of a format that doxygen and sphinx can pick up and render. These are rendered by libbpf.readthedocs.org These doc comments are for: - bpf_object__find_map_by_name() - bpf_map__fd() - bpf_map__is_internal() - libbpf_get_error() - libbpf_num_possible_cpus() Signed-off-by: Grant Seltzer <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
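An illustrative doc comment in that format, for one of the functions listed (the wording here is paraphrased, not the exact upstream text):

    /**
     * @brief **libbpf_num_possible_cpus()** returns the number of possible CPUs
     * on the system, e.g. for sizing per-CPU array values.
     * @return number of possible CPUs on success; a negative error code on failure
     *
     * Example:
     *
     *     int ncpus = libbpf_num_possible_cpus();
     *     if (ncpus < 0)
     *             return ncpus;
     */
    LIBBPF_API int libbpf_num_possible_cpus(void);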
2021-09-18libperf evsel: Make use of FD robust.Ian Rogers1-23/+41
FD uses xyarray__entry, which may return NULL if an index is out of bounds. If NULL is returned then a segv happens as FD unconditionally dereferences the pointer. This was happening with perf iostat, as shown below. The fix is to make FD an "int*" rather than an int and handle the NULL case as either invalid input or a closed fd. $ sudo gdb --args perf stat --iostat list ... Breakpoint 1, perf_evsel__alloc_fd (evsel=0x5555560951a0, ncpus=1, nthreads=1) at evsel.c:50 50 { (gdb) bt #0 perf_evsel__alloc_fd (evsel=0x5555560951a0, ncpus=1, nthreads=1) at evsel.c:50 #1 0x000055555585c188 in evsel__open_cpu (evsel=0x5555560951a0, cpus=0x555556093410, threads=0x555556086fb0, start_cpu=0, end_cpu=1) at util/evsel.c:1792 #2 0x000055555585cfb2 in evsel__open (evsel=0x5555560951a0, cpus=0x0, threads=0x555556086fb0) at util/evsel.c:2045 #3 0x000055555585d0db in evsel__open_per_thread (evsel=0x5555560951a0, threads=0x555556086fb0) at util/evsel.c:2065 #4 0x00005555558ece64 in create_perf_stat_counter (evsel=0x5555560951a0, config=0x555555c34700 <stat_config>, target=0x555555c2f1c0 <target>, cpu=0) at util/stat.c:590 #5 0x000055555578e927 in __run_perf_stat (argc=1, argv=0x7fffffffe4a0, run_idx=0) at builtin-stat.c:833 #6 0x000055555578f3c6 in run_perf_stat (argc=1, argv=0x7fffffffe4a0, run_idx=0) at builtin-stat.c:1048 #7 0x0000555555792ee5 in cmd_stat (argc=1, argv=0x7fffffffe4a0) at builtin-stat.c:2534 #8 0x0000555555835ed3 in run_builtin (p=0x555555c3f540 <commands+288>, argc=3, argv=0x7fffffffe4a0) at perf.c:313 #9 0x0000555555836154 in handle_internal_command (argc=3, argv=0x7fffffffe4a0) at perf.c:365 #10 0x000055555583629f in run_argv (argcp=0x7fffffffe2ec, argv=0x7fffffffe2e0) at perf.c:409 #11 0x0000555555836692 in main (argc=3, argv=0x7fffffffe4a0) at perf.c:539 ... (gdb) c Continuing. Error: The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (uncore_iio_0/event=0x83,umask=0x04,ch_mask=0xF,fc_mask=0x07/). /bin/dmesg | grep -i perf may provide additional information. Program received signal SIGSEGV, Segmentation fault. 0x00005555559b03ea in perf_evsel__close_fd_cpu (evsel=0x5555560951a0, cpu=1) at evsel.c:166 166 if (FD(evsel, cpu, thread) >= 0) v3 fixes a bug in perf_evsel__run_ioctl where the sense of a branch was backward. Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
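The caller-side pattern after the fix, as a simplified sketch (FD() stands in for libperf's internal accessor, which now yields an int pointer instead of an int):

    #include <stddef.h>

    /* NULL means the cpu/thread index was out of bounds (invalid input);
     * a negative value means the fd was never opened or was already closed. */
    static int fd_is_usable(const int *fd)
    {
            return fd != NULL && *fd >= 0;
    }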
2021-09-17libbpf: Use static const fmt string in __bpf_printkDave Marchevsky1-1/+7
The __bpf_printk convenience macro was using a 'char' fmt string holder as it predates support for globals in libbpf. Move to more efficient 'static const char', but provide a fallback to the old way via BPF_NO_GLOBAL_DATA so users on old kernels can still use the macro. Signed-off-by: Dave Marchevsky <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
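A sketch of the resulting pattern (close to bpf_helpers.h, but treat the exact macro text as illustrative):

    #ifdef BPF_NO_GLOBAL_DATA
    #define BPF_PRINTK_FMT_MOD              /* old kernels: keep fmt on the stack */
    #else
    #define BPF_PRINTK_FMT_MOD static const /* fmt becomes read-only global data */
    #endif

    #define __bpf_printk(fmt, ...)                          \
    ({                                                      \
            BPF_PRINTK_FMT_MOD char ____fmt[] = fmt;        \
            bpf_trace_printk(____fmt, sizeof(____fmt),      \
                             ##__VA_ARGS__);                \
    })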
2021-09-17libbpf: Modify bpf_printk to choose helper based on arg countDave Marchevsky1-8/+37
Instead of being a thin wrapper which calls into bpf_trace_printk, libbpf's bpf_printk convenience macro now chooses between bpf_trace_printk and bpf_trace_vprintk. If the arg count (excluding format string) is >3, use bpf_trace_vprintk, otherwise use the older helper. The motivation behind this added complexity - instead of migrating entirely to bpf_trace_vprintk - is to maintain good developer experience for users compiling against new libbpf but running on older kernels. Users who are passing <=3 args to bpf_printk will see no change in their bytecode. __bpf_vprintk functions similarly to BPF_SEQ_PRINTF and BPF_SNPRINTF macros elsewhere in the file - it allows use of bpf_trace_vprintk without manual conversion of varargs to u64 array. Previous implementation of bpf_printk macro is moved to __bpf_printk for use by the new implementation. This does change behavior of bpf_printk calls with >3 args in the "new libbpf, old kernels" scenario. Before this patch, attempting to use 4 args to bpf_printk results in a compile-time error. After this patch, using bpf_printk with 4 args results in a trace_vprintk helper call being emitted and a load-time failure on older kernels. Signed-off-by: Dave Marchevsky <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
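Usage is unchanged; only the emitted helper differs. A hedged example of a call that now compiles but needs a kernel providing the bpf_trace_vprintk helper (section name and values are arbitrary):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("tracepoint/syscalls/sys_enter_openat")
    int trace_openat(void *ctx)
    {
            int a = 1, b = 2, c = 3, d = 4;

            /* more than three args: expands to a bpf_trace_vprintk call */
            bpf_printk("a=%d b=%d c=%d d=%d", a, b, c, d);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";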
2021-09-17libbpf: Constify all high-level program attach APIsAndrii Nakryiko2-52/+52
Attach APIs shouldn't need to modify bpf_program/bpf_map structs, so change all struct bpf_program and struct bpf_map pointers to const pointers. This is completely backwards compatible with no functional change. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-17libbpf: Schedule open_opts.attach_prog_fd deprecation since v0.7Andrii Nakryiko3-0/+10
bpf_object_open_opts.attach_prog_fd makes a pretty strong assumption that a bpf_object contains either only a single freplace BPF program, or that all of the BPF programs in the BPF object are freplaces intended to replace different subprograms of the same target BPF program. This is both a bit confusing and limiting, and assumes too much. We've had the bpf_program__set_attach_target() API, which allows more fine-grained control over this, on a per-program level. As such, mark open_opts.attach_prog_fd as deprecated starting from v0.7, so that we are left with one, more universal, way of setting freplace targets. With the previous change to allow a NULL attach_func_name argument, and especially combined with BPF skeletons, bpf_program__set_attach_target() is arguably a more convenient and explicit API as well. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-17libbpf: Allow skipping attach_func_name in bpf_program__set_attach_target()Andrii Nakryiko1-1/+12
Allow using bpf_program__set_attach_target() to set only the target attach program FD, while letting libbpf take the target attach function name from the SEC() definition. This might be useful for scenarios where a bpf_object contains multiple related freplace BPF programs intended to replace different sub-programs in the target BPF program. In such a case all programs will have the same attach_prog_fd, but different attach_func_name. It's convenient to specify such target function names declaratively in SEC() definitions, but attach_prog_fd is a dynamic runtime setting. To simplify such a scenario, allow bpf_program__set_attach_target() to delay BTF ID resolution until BPF program load time by accepting a NULL attach_func_name. In that case the behavior will be similar to using bpf_object_open_opts.attach_prog_fd (which is marked deprecated since v0.7), but with the benefit of giving the user more control over what is attached to what. Such a setup allows having BPF programs attached to different target attach_prog_fds with target functions still declaratively recorded in the BPF source code in SEC() definitions. Selftest changes in the next patch should make this more obvious. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
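A usage sketch under assumed names (the object handle, program name, and target FD are placeholders):

    #include <bpf/libbpf.h>

    static int set_freplace_target(struct bpf_object *obj, int target_prog_fd)
    {
            struct bpf_program *prog;

            prog = bpf_object__find_program_by_name(obj, "my_freplace_prog");
            if (!prog)
                    return -1;
            /* NULL attach_func_name: keep the function from the SEC() definition
             * and defer BTF ID resolution to load time */
            return bpf_program__set_attach_target(prog, target_prog_fd, NULL);
    }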
2021-09-17libbpf: Deprecated bpf_object_open_opts.relaxed_core_relocsAndrii Nakryiko1-0/+1
It's irrelevant and hasn't been doing anything for a long while now. Deprecate it. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-17libbpf: Use pre-setup sec_def in libbpf_find_attach_btf_id()Andrii Nakryiko1-9/+5
Don't perform another search for sec_def inside libbpf_find_attach_btf_id(), as each recognized bpf_program already has prog->sec_def set. Also remove unnecessary NULL check for prog->sec_name, as it can never be NULL. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-15libbpf: Add sphinx code documentation commentsGrant Seltzer1-0/+70
This adds comments above five functions in btf.h which document their uses. These comments are of a format that doxygen and sphinx can pick up and render. These are rendered by libbpf.readthedocs.org Signed-off-by: Grant Seltzer <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-14libbpf: Add support for BTF_KIND_TAGYonghong Song6-3/+118
Add BTF_KIND_TAG support for parsing and dedup. Also added sanitization for BTF_KIND_TAG. If BTF_KIND_TAG is not supported in the kernel, sanitize it to INTs. Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-14libbpf: Rename btf_{hash,equal}_int to btf_{hash,equal}_int_tagYonghong Song1-8/+8
This patch renames functions btf_{hash,equal}_int() to btf_{hash,equal}_int_tag() so they can be reused for BTF_KIND_TAG support. There is no functionality change for this patch. Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-14libbpf: Minimize explicit iterator of section definition arrayAndrii Nakryiko1-27/+19
Remove almost all the code that explicitly iterated BPF program section definitions in favor of using find_sec_def(). The only remaining user of section_defs is libbpf_get_type_names that has to iterate all of them to construct its result. Having one internal API entry point for section definitions will simplify further refactorings around libbpf's program section definitions parsing. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-14libbpf: Simplify BPF program auto-attach codeAndrii Nakryiko1-39/+22
Remove the need to explicitly pass bpf_sec_def for auto-attachable BPF programs, as it is already recorded at bpf_object__open() time for all recognized type of BPF programs. This further reduces number of explicit calls to find_sec_def(), simplifying further refactorings. No functional changes are done by this patch. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-14libbpf: Ensure BPF prog types are set before relocationsAndrii Nakryiko1-42/+51
Refactor bpf_object__open() sequencing to perform BPF program type detection based on SEC() definitions before we get to relocation collection. This allows having more information about a BPF program by the time we get to, say, struct_ops relocation gathering. This, subsequently, simplifies the struct_ops logic and removes the need to perform extra find_sec_def() resolution. With this patch libbpf will require all struct_ops BPF programs to be marked with SEC("struct_ops") or SEC("struct_ops/xxx") annotations. Real-world applications are already doing that through something like the selftests' BPF_STRUCT_OPS() macro. This change streamlines libbpf's internal handling of SEC() definitions and is in the spirit of the upcoming libbpf-1.0 section strictness changes ([0]). [0] https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#stricter-and-more-uniform-bpf-program-section-name-sec-handling Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-14libbpf: Introduce legacy kprobe events supportRafael David Tinoco1-4/+124
Allow kprobe tracepoint events creation through legacy interface, as the kprobe dynamic PMUs support, used by default, was only created in v4.17. Store legacy kprobe name in struct bpf_perf_link, instead of creating a new "subclass" off of bpf_perf_link. This is ok as it's just two new fields, which are also going to be reused for legacy uprobe support in follow up patches. Signed-off-by: Rafael David Tinoco <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-13libbpf: Make libbpf_version.h non-auto-generatedAndrii Nakryiko4-13/+33
Turn previously auto-generated libbpf_version.h header into a normal header file. This prevents various tricky Makefile integration issues, simplifies the overall build process, but also allows to further extend it with some more versioning-related APIs in the future. To prevent accidental out-of-sync versions as defined by libbpf.map and libbpf_version.h, Makefile checks their consistency at build time. Simultaneously with this change bump libbpf.map to v0.6. Also undo adding libbpf's output directory into include path for kernel/bpf/preload, bpftool, and resolve_btfids, which is not necessary because libbpf_version.h is just a normal header like any other. Fixes: 0b46b7550560 ("libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecations") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-09libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecationsQuentin Monnet3-7/+38
Introduce a macro LIBBPF_DEPRECATED_SINCE(major, minor, message) to prepare the deprecation of two API functions. This macro marks functions as deprecated when libbpf's version reaches the values passed as arguments. As part of this change, the libbpf_version.h header is added with recorded major (LIBBPF_MAJOR_VERSION) and minor (LIBBPF_MINOR_VERSION) libbpf version macros. They are now part of the libbpf public API and can be relied upon by user code. libbpf_version.h is installed system-wide alongside other libbpf public headers. Due to this new build-time auto-generated header, in-kernel applications relying on libbpf (resolve_btfids, bpftool, bpf_preload) are updated to include libbpf's output directory as part of the list of include search paths. A better fix would be to use libbpf's make_install target to install the public API headers, but that cleanup is left as a future improvement. The build changes were tested by building the kernel (with KBUILD_OUTPUT and O= specified explicitly), bpftool, libbpf, selftests/bpf, and resolve_btfids. No problems were detected. Note that because of the constraints of the C preprocessor we have to write a few lines of macro magic for each version used to prepare a deprecation (0.6 for now). Also, use LIBBPF_DEPRECATED_SINCE() to schedule deprecation of btf__get_from_id() and btf__load(), which are replaced by btf__load_from_kernel_by_id() and btf__load_into_kernel(), respectively, starting from the future libbpf v0.6. This is part of the libbpf 1.0 effort ([0]). [0] Closes: https://github.com/libbpf/libbpf/issues/278 Co-developed-by: Quentin Monnet <[email protected]> Co-developed-by: Andrii Nakryiko <[email protected]> Signed-off-by: Quentin Monnet <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
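Roughly how the annotation is used on one of the two functions mentioned (the exact upstream placement and message may differ slightly):

    #include <linux/types.h>
    #include "libbpf_common.h"          /* LIBBPF_API, LIBBPF_DEPRECATED_SINCE */

    struct btf;

    LIBBPF_DEPRECATED_SINCE(0, 6, "use btf__load_from_kernel_by_id instead")
    LIBBPF_API int btf__get_from_id(__u32 id, struct btf **btf);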
2021-09-07libbpf: Fix build with latest gcc/binutils with LTOAndrii Nakryiko2-8/+21
After updating to binutils 2.35, the build began to fail with an assembler error. A bug was opened on the Red Hat Bugzilla a few days later for the same issue. Work around the problem by using the new `symver` attribute (introduced in GCC 10) as needed instead of assembler directives. This addresses Red Hat ([0]) and OpenSUSE ([1]) bug reports, as well as libbpf issue ([2]). [0]: https://bugzilla.redhat.com/show_bug.cgi?id=1863059 [1]: https://bugzilla.opensuse.org/show_bug.cgi?id=1188749 [2]: Closes: https://github.com/libbpf/libbpf/issues/338 Co-developed-by: Patrick McCarty <[email protected]> Co-developed-by: Michal Suchanek <[email protected]> Signed-off-by: Patrick McCarty <[email protected]> Signed-off-by: Michal Suchanek <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
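A sketch of the attribute-based form (the symbol and version names are made-up examples; a matching linker version script is still required at link time):

    /* GCC >= 10: attach the versioned symbol via an attribute on the definition
     * instead of an asm(".symver ...") directive, which LTO mishandles. */
    #if defined(__GNUC__) && __GNUC__ >= 10
    __attribute__((symver("do_thing@@LIB_1.0")))
    #endif
    int do_thing_v1(int x)
    {
            return x + 1;
    }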
2021-09-07libbpf: Change bpf_object_skeleton data field to const pointerMatt Smith1-1/+1
This change was necessary to enforce the implied contract that bpf_object_skeleton->data should not be mutated. The data will be cast to `void *` during assignment to handle the case where a user is compiling with older libbpf headers to avoid a compiler warning of `const void *` data being cast to `void *` Signed-off-by: Matt Smith <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-07libbpf: Don't crash on object files with no symbol tablesToke Høiland-Jørgensen1-0/+6
If libbpf encounters an ELF file that has been stripped of its symbol table, it will crash in bpf_object__add_programs() when trying to dereference the obj->efile.symbols pointer. Fix this by erroring out of bpf_object__elf_collect() if it is not able to find the symbol table. v2: - Move check into bpf_object__elf_collect() and add nice error message Fixes: 6245947c1b3c ("libbpf: Allow gaps in BPF program sections to support overriden weak functions") Signed-off-by: Toke Høiland-Jørgensen <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-09-05Merge tag 'perf-tools-for-v5.15-2021-09-04' of ↵Linus Torvalds4-8/+11
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux Pull perf tool updates from Arnaldo Carvalho de Melo: "New features: - Improvements for the flamegraph python script, including: - Display perf.data header - Display PIDs of user stacks - Added option to change color scheme - Default to blue/green color scheme to improve accessibility - Correctly identify kernel stacks when debuginfo is available - Improvements for 'perf bench futex': - Add --mlockall parameter - Add --broadcast and --pi to the 'requeue' sub benchmark - Add support for PMU aliases. - Introduce an ARM Coresight ETE decoder. - Add a 'perf bench' entry for evlist open/close operations, to help quantify improvements with multithreading 'perf record'. - Allow reporting the [un]throttle PERF_RECORD_ meta event in 'perf script's python scripting. - Add a 'perf test' entry for PMU aliases. - Add a 'perf test' entry for 'perf record/perf report/perf script' pipe mode. Fixes: - perf script dlfilter (API for filtering via dynamically loaded shared object introduced in v5.14) fixes and a 'perf test' entry for it. - Fix get_current_dir_name() compilation on Android. - Fix issues with asciidoc and double dashes uses. - Fix memory leaks in the BTF handling code. - Fix leftover problems in the Documentation from the infrastructure originally lifted from the git codebase. - Fix *probe_vfs_getname.sh 'perf test' failures. - Handle fd gaps in 'perf test's test__dso_data_reopen(). - Make sure to show disassembly warnings for 'perf annotate --stdio'. - Fix output from pipe to file and vice-versa in 'perf record/report/script'. - Correct 'perf data -h' output. - Fix wrong comm in system-wide mode with 'perf record --delay'. - Do not allow --for-each-cgroup without cpu in 'perf stat' - Make 'perf test --skip' work on shell tests. - Fix libperf's verbose printing. Misc improvements: - Preparatory patches for multithreading various 'perf record' phases (synthesizing, opening, recording, etc). - Add sparse context/locking annotations in compiler-types.h, also to help with the multithreading effort. - Optimize the generation of the arch specific errno tables used in 'perf trace'. - Optimize libperf's perf_cpu_map__max(). - Improve ARM's CoreSight warnings. - Report collisions in AUX records. - Improve warnings for the LLVM 'perf test' entry. - Improve the PMU events 'perf test' codebase. - perf test: Do not compare overheads in the zstd comp test - Better support annotation on ARM. - Update 'perf trace's cmd string table to decode sys_bpf() first arg. Vendor events: - Add JSON events and metrics for Intel's Ice Lake, Tiger Lake and Elkhart Lake. - Update JSON events and metrics for Intel's Cascade Lake and Sky Lake servers. Hardware tracing: - Improvements for the ARM hardware tracing auxtrace support" * tag 'perf-tools-for-v5.15-2021-09-04' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (130 commits) perf tests: Add test for PMU aliases perf pmu: Add PMU alias support perf session: Report collisions in AUX records perf script python: Allow reporting the [un]throttle PERF_RECORD_ meta event perf build: Report failure for testing feature libopencsd perf cs-etm: Show a warning for an unknown magic number perf cs-etm: Print the decoder name perf cs-etm: Create ETE decoder perf cs-etm: Update OpenCSD decoder for ETE perf cs-etm: Fix typo perf cs-etm: Save TRCDEVARCH register perf cs-etm: Refactor out ETMv4 header saving perf cs-etm: Initialise architecture based on TRCIDR1 perf cs-etm: Refactor initialisation of decoder params.
tools build: Fix feature detect clean for out of source builds perf evlist: Add evlist__for_each_entry_from() macro perf evsel: Handle precise_ip fallback in evsel__open_cpu() perf evsel: Move bpf_counter__install_pe() to success path in evsel__open_cpu() perf evsel: Move test_attr__open() to success path in evsel__open_cpu() perf evsel: Move ignore_missing_thread() to fallback code ...
2021-08-31libperf cpumap: Take into advantage it is sorted to optimize perf_cpu_map__max()Riccardo Mancini1-8/+2
Since commit 7074674e7338863e ("perf cpumap: Maintain cpumaps ordered and without dups"), perf_cpu_map elements are sorted in ascending order. This patch improves the perf_cpu_map__max() function by simply returning the last element. Committer notes: Do it as a ternary to keep it in just one return line, and add a comment explaining that the map is sorted and which function does the sorting. Signed-off-by: Riccardo Mancini <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/fb79f02e7b86ea8044d563adb1e9890c906f982f.1629490974.git.rickyman7@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
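A sketch of the optimization (the struct here mirrors the idea, not libperf's exact layout):

    struct cpu_map_sketch {
            int nr;         /* number of entries */
            int map[];      /* sorted ascending, no duplicates */
    };

    static int cpu_map_max(const struct cpu_map_sketch *cpus)
    {
            /* sorted and deduplicated, so the maximum is simply the last element */
            return cpus->nr > 0 ? cpus->map[cpus->nr - 1] : -1;
    }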
2021-08-31libsubcmd: add OPT_UINTEGER_OPTARG option typeRiccardo Mancini1-0/+1
This patch adds OPT_UINTEGER_OPTARG, which is the same as OPT_UINTEGER, but also makes it possible to use the option without any value, setting the variable to a default value, d. Signed-off-by: Riccardo Mancini <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/c46749b3dff796729078352ff164d363457a3587.1629490974.git.rickyman7@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
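A hypothetical usage sketch; the argument order (short option, long option, variable, default, help) is assumed from the commit description rather than taken from the header:

    #include <subcmd/parse-options.h>

    static unsigned int nr_threads;

    static const struct option bench_options[] = {
            /* "--threads" with no value falls back to the default of 4 */
            OPT_UINTEGER_OPTARG('t', "threads", &nr_threads, 4,
                                "number of threads (default 4 if no value given)"),
            OPT_END()
    };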
2021-08-30Merge remote-tracking branch 'torvalds/master' into perf/coreArnaldo Carvalho de Melo2-3/+4
To pick up fixes. Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-08-24libperf tests: Fix verbose printingShunsuke Nakamura1-0/+2
libperf's verbose printing checks the -v option every time the macro __T_START is called. Since there are currently four libperf tests registered, the macro __T_START is called four times, but verbose printing after the second time is not output. Reset the index of the element processed by getopt() and fix verbose printing so that it prints in all tests. Signed-off-by: Shunsuke Nakamura <[email protected]> Acked-by: Rob Herring <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2021-08-17libbpf: Add uprobe ref counter offset support for USDT semaphoresAndrii Nakryiko2-4/+17
When attaching to uprobes through perf subsystem, it's possible to specify offset of a so-called USDT semaphore, which is just a reference counted u16, used by kernel to keep track of how many tracers are attached to a given location. Support for this feature was added in [0], so just wire this through uprobe_opts. This is important to enable implementing USDT attachment and tracing through libbpf's bpf_program__attach_uprobe_opts() API. [0] a6ca88b241d5 ("trace_uprobe: support reference counter in fd-based uprobe") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
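A usage sketch (the binary path, both offsets, and the program handle are placeholders):

    #include <sys/types.h>
    #include <bpf/libbpf.h>

    static struct bpf_link *attach_usdt_like(struct bpf_program *prog, pid_t pid)
    {
            DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, opts,
                    .ref_ctr_offset = 0x1010,       /* offset of the USDT semaphore */
            );

            /* 0x2020 is the uprobe offset within the binary */
            return bpf_program__attach_uprobe_opts(prog, pid, "/usr/bin/myapp",
                                                   0x2020, &opts);
    }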
2021-08-17libbpf: Add bpf_cookie to perf_event, kprobe, uprobe, and tp attach APIsAndrii Nakryiko3-25/+127
Wire through bpf_cookie for all attach APIs that use perf_event_open under the hood: - for kprobes, extend existing bpf_kprobe_opts with bpf_cookie field; - for perf_event, uprobe, and tracepoint APIs, add their _opts variants and pass bpf_cookie through opts. For kernel that don't support BPF_LINK_CREATE for perf_events, and thus bpf_cookie is not supported either, return error and log warning for user. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
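A usage sketch for the kprobe variant (the kernel function name and cookie value are arbitrary):

    #include <bpf/libbpf.h>

    static struct bpf_link *attach_with_cookie(struct bpf_program *prog)
    {
            DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, opts,
                    .bpf_cookie = 0x1234,   /* readable in the program via bpf_get_attach_cookie() */
            );

            return bpf_program__attach_kprobe_opts(prog, "do_sys_openat2", &opts);
    }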