path: root/tools/lib/bpf
Age | Commit message | Author | Files | Lines
2022-01-24libbpf: Mark bpf_object__open_xattr() deprecatedChristy Lee2-1/+2
Mark bpf_object__open_xattr() as deprecated, use bpf_object__open_file() instead. [0] Closes: https://github.com/libbpf/libbpf/issues/287 Signed-off-by: Christy Lee <christylee@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220125010917.679975-1-christylee@fb.com
2022-01-24libbpf: Mark bpf_object__open_buffer() API deprecatedChristy Lee1-0/+1
Deprecate bpf_object__open_buffer() API in favor of the unified opts-based bpf_object__open_mem() API. Signed-off-by: Christy Lee <christylee@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220125005923.418339-2-christylee@fb.com
2022-01-24libbpf: Add "iter.s" section for sleepable bpf iterator programsKenny Yu1-0/+1
This adds a new bpf section "iter.s" to allow bpf iterator programs to be sleepable. Signed-off-by: Kenny Yu <kennyyu@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220124185403.468466-4-kennyyu@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
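A minimal BPF-side sketch of a sleepable iterator using the new section name (the "task" iterator target, a generated vmlinux.h, and the bpf_iter__task context are assumptions for illustration):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    char LICENSE[] SEC("license") = "GPL";

    /* the ".s" suffix marks the iterator program as sleepable, so
     * sleepable helpers such as bpf_copy_from_user() become usable */
    SEC("iter.s/task")
    int dump_task(struct bpf_iter__task *ctx)
    {
        struct task_struct *task = ctx->task;

        if (!task)
            return 0;

        bpf_printk("pid=%d", task->pid);
        return 0;
    }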
2022-01-21libbpf: Add SEC name for xdp frags programsLorenzo Bianconi1-0/+8
Introduce support for the following SEC entries for XDP frags property: - SEC("xdp.frags") - SEC("xdp.frags/devmap") - SEC("xdp.frags/cpumap") Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://lore.kernel.org/r/af23b6e4841c171ad1af01917839b77847a4bc27.1642758637.git.lorenzo@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
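A minimal sketch of a frags-aware XDP program using one of the new section names:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    char LICENSE[] SEC("license") = "GPL";

    /* "xdp.frags" declares that the program can handle non-linear
     * (multi-buffer) XDP frames */
    SEC("xdp.frags")
    int xdp_pass_frags(struct xdp_md *ctx)
    {
        return XDP_PASS;
    }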
2022-01-20libbpf: streamline low-level XDP APIsAndrii Nakryiko3-33/+117
Introduce 4 new netlink-based XDP APIs for attaching, detaching, and querying XDP programs: - bpf_xdp_attach; - bpf_xdp_detach; - bpf_xdp_query; - bpf_xdp_query_id. These APIs replace the bpf_set_link_xdp_fd, bpf_set_link_xdp_fd_opts, bpf_get_link_xdp_id, and bpf_get_link_xdp_info APIs ([0]). The latter don't follow a consistent naming pattern and some of them use non-extensible approaches (e.g., struct xdp_link_info, which can't be modified without breaking libbpf ABI). The approach I took with these low-level XDP APIs is similar to what we did with the low-level TC APIs. There is a nice duality of bpf_tc_attach vs bpf_xdp_attach, and so on. I left bpf_xdp_attach() able to detach when -1 is specified for prog_fd, for generality and convenience, but bpf_xdp_detach() is preferred due to clearer naming and associated semantics. Both bpf_xdp_attach() and bpf_xdp_detach() accept the same opts struct, allowing the caller to specify the expected old_prog_fd. While doing the refactoring, I noticed that the old APIs require users to specify opts with old_fd == -1 to declare a "don't care about the already attached XDP prog fd" condition. Otherwise, FD 0 is assumed, which is essentially never the intended behavior. So I made this behavior consistent with other kernel and libbpf APIs, in which a zero FD means "no FD". This seems to be more in line with the latest thinking in BPF land and should cause less user confusion, hopefully. For querying, I left two APIs: the more generic bpf_xdp_query(), allowing querying of multiple IDs and the attach mode, and a specialization of it, bpf_xdp_query_id(), which returns only the requested prog_id. Uses of the prog_id-returning bpf_get_link_xdp_id() were so prevalent across selftests and samples that it seemed a very common use case, and using bpf_xdp_query() for it felt very cumbersome, with a highly branched if/else chain based on flags and attach mode. The old APIs are scheduled for deprecation in the libbpf 0.8 release. [0] Closes: https://github.com/libbpf/libbpf/issues/309 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/r/20220120061422.2710637-2-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
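A minimal user-space sketch of the new APIs (ifindex and prog_fd are assumed to come from if_nametoindex() and bpf_program__fd()):

    #include <linux/if_link.h>
    #include <bpf/libbpf.h>

    static int swap_and_query(int ifindex, int prog_fd)
    {
        __u32 prog_id = 0;
        int err;

        /* a zero old_prog_fd in opts (or NULL opts) means "don't care
         * what is currently attached" */
        err = bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_SKB_MODE, NULL);
        if (err)
            return err;

        err = bpf_xdp_query_id(ifindex, XDP_FLAGS_SKB_MODE, &prog_id);
        if (err)
            return err;

        /* preferred over bpf_xdp_attach(ifindex, -1, ...) */
        return bpf_xdp_detach(ifindex, XDP_FLAGS_SKB_MODE, NULL);
    }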
2022-01-20libbpf: deprecate legacy BPF map definitionsAndrii Nakryiko3-1/+14
Enact deprecation of legacy BPF map definitions in SEC("maps") ([0]). For the definitions themselves, introduce the LIBBPF_STRICT_MAP_DEFINITIONS flag for libbpf strict mode. If it is set, error out on any struct bpf_map_def-based map definition. If not set, libbpf will print out a warning for each legacy BPF map to raise awareness that it goes away. For any use of the BPF_ANNOTATE_KV_PAIR() macro, which provides a legacy way to associate BTF key/value type information with a legacy BPF map definition, warn through libbpf's pr_warn() error message (but don't fail BPF object open). The BPF-side struct bpf_map_def is marked as deprecated. The user-space struct bpf_map_def has to be used internally in libbpf, so it is left untouched. It should be enough for bpf_map__def() to be marked deprecated to raise awareness that it goes away. bpftool is an interesting case that utilizes libbpf to open a BPF ELF object to generate a skeleton. As such, even though bpftool itself uses full-on strict libbpf mode (LIBBPF_STRICT_ALL), it has to relax it a bit for BPF map definition handling to minimize unnecessary disruptions. So opt bpftool out of LIBBPF_STRICT_MAP_DEFINITIONS. User code that will later use the generated skeleton will make its own decision whether to enforce LIBBPF_STRICT_MAP_DEFINITIONS or not. There are a few tests in selftests/bpf that consciously use legacy BPF map definitions to test libbpf functionality. For those, temporarily opt out of LIBBPF_STRICT_MAP_DEFINITIONS mode for the duration of those tests. [0] Closes: https://github.com/libbpf/libbpf/issues/272 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220120060529.1890907-4-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
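A sketch contrasting the deprecated SEC("maps") style with the BTF-defined SEC(".maps") replacement that LIBBPF_STRICT_MAP_DEFINITIONS expects (the map names are illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* deprecated legacy style: rejected under LIBBPF_STRICT_MAP_DEFINITIONS */
    struct bpf_map_def SEC("maps") legacy_map = {
        .type = BPF_MAP_TYPE_HASH,
        .key_size = sizeof(__u32),
        .value_size = sizeof(__u64),
        .max_entries = 128,
    };

    /* BTF-defined replacement in SEC(".maps") */
    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 128);
        __type(key, __u32);
        __type(value, __u64);
    } modern_map SEC(".maps");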
2022-01-19libbpf: Improve btf__add_btf() with an additional hashmap for strings.Kui-Feng Lee1-1/+30
Add a hashmap to map the string offsets from a source btf to the string offsets from a target btf to reduce overheads. btf__add_btf() calls btf__add_str() to add strings from a source to a target btf. This causes many string comparisons, and it is a major hotspot when adding a big btf. btf__add_str() uses strcmp() to check if a hash entry is the right one. The extra hashmap here compares string offsets instead, which is much cheaper. It remembers the results of btf__add_str() for later use to reduce the cost. We are parallelizing BTF encoding for pahole by creating separate btf instances for worker threads. These per-thread btf instances will be added to the btf instance of the main thread by calling btf__add_btf() to deduplicate and write out. With this patch and -j4, the running time of pahole drops to about 6.0s from 6.6s. The following lines are the summary of 'perf stat' w/o the change. 6.668126396 seconds time elapsed 13.451054000 seconds user 0.715520000 seconds sys The following lines are the summary w/ the change. 5.986973919 seconds time elapsed 12.939903000 seconds user 0.724152000 seconds sys V4 fixes a bug in the error checking of the pointer returned by hashmap__new(). [v3] https://lore.kernel.org/bpf/20220118232053.2113139-1-kuifeng@fb.com/ [v2] https://lore.kernel.org/bpf/20220114193713.461349-1-kuifeng@fb.com/ Signed-off-by: Kui-Feng Lee <kuifeng@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220119180214.255634-1-kuifeng@fb.com
2022-01-18libbpf: Define BTF_KIND_* constants in btf.h to avoid compilation errorsToke Høiland-Jørgensen1-1/+21
The btf.h header included with libbpf contains inline helper functions to check for various BTF kinds. These helpers directly reference the BTF_KIND_* constants defined in the kernel header, and because the header file is included in user applications, this happens in the user application compile units. This presents a problem if a user application is compiled on a system with older kernel headers because the constants are not available. To avoid this, add #defines of the constants directly in btf.h before using them. Since the kernel header moved to an enum for BTF_KIND_*, the #defines can shadow the enum values without any errors, so we only need #ifndef guards for the constants that predate the conversion to enum. We group these so there's only one guard for groups of values that were added together. [0] Closes: https://github.com/libbpf/libbpf/issues/436 Fixes: 223f903e9c83 ("bpf: Rename BTF_KIND_TAG to BTF_KIND_DECL_TAG") Fixes: 5b84bd10363e ("libbpf: Add support for BTF_KIND_TAG") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/bpf/20220118141327.34231-1-toke@redhat.com
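A sketch of the guard pattern described above (the kind values match the kernel UAPI; which kinds need an #ifndef guard is shown only illustratively):

    /* pre-enum kinds were plain #defines in the UAPI header, so guard
     * against clashing with an existing macro definition */
    #ifndef BTF_KIND_FLOAT
    #define BTF_KIND_FLOAT     16
    #endif
    /* kinds added after the kernel moved BTF_KIND_* to an enum: a plain
     * #define can shadow the enum value without causing an error */
    #define BTF_KIND_DECL_TAG  17
    #define BTF_KIND_TYPE_TAG  18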
2022-01-12libbpf: Deprecate bpf_map__def() APIChristy Lee1-1/+2
All fields accessed via bpf_map_def can now be accessed via appropriate getters and setters. Mark the bpf_map__def() API as deprecated. Signed-off-by: Christy Lee <christylee@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220108004218.355761-6-christylee@fb.com
2022-01-12libbpf: Fix possible NULL pointer dereference when destroying skeletonYafang Shao1-0/+3
When I checked the code in the skeleton header file generated from my own bpf prog, I found there may be a possible NULL pointer dereference when destroying the skeleton. Then I checked the in-tree bpf progs and found that this is a common issue. Let's take the generated samples/bpf/xdp_redirect_cpu.skel.h as an example. Below is the generated code in xdp_redirect_cpu__create_skeleton(): xdp_redirect_cpu__create_skeleton struct bpf_object_skeleton *s; s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s)); if (!s) goto error; ... error: bpf_object__destroy_skeleton(s); return -ENOMEM; After goto error, the NULL 's' will be dereferenced in bpf_object__destroy_skeleton(). We can simply fix this issue by adding a NULL check in bpf_object__destroy_skeleton(). Fixes: d66562fba1ce ("libbpf: Add BPF object skeleton support") Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220108134739.32541-1-laoar.shao@gmail.com
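A simplified sketch of the guarded destructor after such a fix (field names follow the public struct bpf_object_skeleton; treat the body as illustrative rather than the exact libbpf code):

    #include <stdlib.h>
    #include <bpf/libbpf.h>

    void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
    {
        if (!s)            /* tolerate the calloc-failure error path */
            return;

        if (s->progs)
            bpf_object__detach_skeleton(s);
        if (s->obj)
            bpf_object__close(*s->obj);

        free(s->maps);
        free(s->progs);
        free(s);
    }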
2022-01-12libbpf: Rename bpf_prog_attach_xattr() to bpf_prog_attach_opts()Christy Lee3-2/+12
All xattr APIs are being dropped, let's converge to the convention used in high-level APIs and rename bpf_prog_attach_xattr to bpf_prog_attach_opts. Signed-off-by: Christy Lee <christylee@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220107184604.3668544-2-christylee@fb.com
2022-01-12libbpf: Use IS_ERR_OR_NULL() in hashmap__free()Mauricio Vásquez1-2/+1
hashmap__new() uses ERR_PTR() to return an error so it's better to use IS_ERR_OR_NULL() in order to check the pointer before calling free(). This will prevent freeing an invalid pointer if somebody calls hashmap__free() with the result of a failed hashmap__new() call. Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20220107152620.192327-1-mauricio@kinvolk.io
2022-01-06libbpf: Add documentation for bpf_map batch operationsGrant Seltzer2-6/+117
This adds documentation for: - bpf_map_delete_batch() - bpf_map_lookup_batch() - bpf_map_lookup_and_delete_batch() - bpf_map_update_batch() This also updates the public API for the `keys` parameter of `bpf_map_delete_batch()`, and both the `keys` and `values` parameters of `bpf_map_update_batch()`, to be const. Signed-off-by: Grant Seltzer <grantseltzer@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220106201304.112675-1-grantseltzer@gmail.com
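A hedged usage sketch of one of the documented batch operations (key/value types are placeholders; on return, *count reflects how many entries were actually processed):

    #include <bpf/bpf.h>

    static int update_many(int map_fd, const __u32 *keys, const __u64 *values,
                           __u32 *count)
    {
        LIBBPF_OPTS(bpf_map_batch_opts, opts);

        /* update up to *count entries in a single syscall */
        return bpf_map_update_batch(map_fd, keys, values, count, &opts);
    }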
2022-01-05libbpf 1.0: Deprecate bpf_object__find_map_by_offset() APIChristy Lee1-1/+2
This API was created with simplistic assumptions about BPF map definitions. It hasn't worked for a while; deprecate it in preparation for libbpf 1.0. [0] Closes: https://github.com/libbpf/libbpf/issues/302 Signed-off-by: Christy Lee <christylee@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220105003120.2222673-1-christylee@fb.com
2022-01-05libbpf 1.0: Deprecate bpf_map__is_offload_neutral()Christy Lee1-0/+1
Deprecate bpf_map__is_offload_neutral(). It’s most probably broken already. PERF_EVENT_ARRAY isn’t the only map that’s not suitable for hardware offloading. Applications can directly check map type instead. [0] Closes: https://github.com/libbpf/libbpf/issues/306 Signed-off-by: Christy Lee <christylee@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220105000601.2090044-1-christylee@fb.com
2022-01-05libbpf: Support repeated legacy kprobes on same functionQiang Wang1-1/+4
If legacy kprobes are attached repeatedly to the same function in one process, libbpf will register them using the same probe name and get an -EBUSY error. So append an index to the probe name format to fix this problem. Co-developed-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Qiang Wang <wangqiang.wq.frank@bytedance.com> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211227130713.66933-2-wangqiang.wq.frank@bytedance.com
2022-01-05libbpf: Use probe_name for legacy kprobeQiang Wang1-1/+1
Fix a bug in commit 46ed5fc33db9, which wrongly used the func_name instead of probe_name to register legacy kprobe. Fixes: 46ed5fc33db9 ("libbpf: Refactor and simplify legacy kprobe code") Co-developed-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Qiang Wang <wangqiang.wq.frank@bytedance.com> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Tested-by: Hengqi Chen <hengqi.chen@gmail.com> Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com> Link: https://lore.kernel.org/bpf/20211227130713.66933-1-wangqiang.wq.frank@bytedance.com
2022-01-05libbpf: Deprecate bpf_perf_event_read_simple() APIChristy Lee2-8/+15
With the perf_buffer__poll() and perf_buffer__consume() APIs available, there is no reason to expose the bpf_perf_event_read_simple() API to users. If users need a custom perf buffer, they can re-implement the function. Mark bpf_perf_event_read_simple() deprecated and move the logic to a new static function so it can still be called by other functions in the same file. [0] Closes: https://github.com/libbpf/libbpf/issues/310 Signed-off-by: Christy Lee <christylee@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211229204156.13569-1-christylee@fb.com
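A hedged sketch of the suggested replacement path using the perf_buffer API (the opts-based perf_buffer__new() signature from libbpf v0.7 is assumed; map_fd is the FD of a BPF_MAP_TYPE_PERF_EVENT_ARRAY map):

    #include <bpf/libbpf.h>

    static void on_sample(void *ctx, int cpu, void *data, __u32 size)
    {
        /* consume one event */
    }

    static int poll_once(int map_fd)
    {
        struct perf_buffer *pb;
        int err;

        pb = perf_buffer__new(map_fd, 8 /* pages per CPU */, on_sample,
                              NULL /* lost_cb */, NULL /* ctx */, NULL /* opts */);
        err = libbpf_get_error(pb);
        if (err)
            return err;

        err = perf_buffer__poll(pb, 100 /* timeout, ms */);
        perf_buffer__free(pb);
        return err;
    }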
2021-12-28libbpf: Improve LINUX_VERSION_CODE detectionAndrii Nakryiko3-17/+28
Ubuntu reports an incorrect kernel version through uname(), which on older kernels leads to kprobe BPF programs failing to load due to the version check mismatch. Accommodate Ubuntu's quirks with LINUX_VERSION_CODE by using the Ubuntu-specific /proc/version_signature to fetch major/minor/patch versions to form LINUX_VERSION_CODE. While at it, consolidate libbpf's kernel version detection code between libbpf.c and libbpf_probes.c. [0] Closes: https://github.com/libbpf/libbpf/issues/421 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20211222231003.2334940-1-andrii@kernel.org
2021-12-28libbpf: Use 100-character limit to make bpf_tracing.h easier to readAndrii Nakryiko1-32/+22
Improve bpf_tracing.h's macro definition readability by keeping them single-line and better aligned. This makes it easier to follow all those variadic patterns. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20211222213924.1869758-2-andrii@kernel.org
2021-12-28libbpf: Normalize PT_REGS_xxx() macro definitionsAndrii Nakryiko1-225/+152
Refactor the PT_REGS macro definitions in bpf_tracing.h to avoid excessive duplication. We currently have classic PT_REGS_xxx() and CO-RE-enabled PT_REGS_xxx_CORE(). We are also about to add _SYSCALL variants, which would require excessive copying of all the per-architecture definitions. Instead, separate architecture-specific field/register names from the final macros that utilize them. That way, for the upcoming _SYSCALL variants, we'll be able to just define the x86_64 exception and otherwise have one common set of _SYSCALL macro definitions shared by all architectures. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Link: https://lore.kernel.org/bpf/20211222213924.1869758-1-andrii@kernel.org
2021-12-23libbpf: Do not use btf_dump__new() macro in C++ modeJiri Olsa1-0/+6
As reported in here [0], C++ compilers don't support __builtin_types_compatible_p(), so at least don't screw up compilation for them and let C++ users pick btf_dump__new vs btf_dump__new_deprecated explicitly. [0] https://github.com/libbpf/libbpf/issues/283#issuecomment-986100727 Fixes: 6084f5dc928f ("libbpf: Ensure btf_dump__new() and btf_dump_opts are future-proof") Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20211223131736.483956-1-jolsa@kernel.org
2021-12-17libbpf: Rework feature-probing APIsAndrii Nakryiko3-54/+236
Create three extensible alternatives to inconsistently named feature-probing APIs: - libbpf_probe_bpf_prog_type() instead of bpf_probe_prog_type(); - libbpf_probe_bpf_map_type() instead of bpf_probe_map_type(); - libbpf_probe_bpf_helper() instead of bpf_probe_helper(). Set up return values such that libbpf can report errors (e.g., if some combination of input arguments isn't possible to validate, etc), in addition to whether the feature is supported (return value 1) or not supported (return value 0). Also schedule deprecation of the three old APIs. Also schedule deprecation of bpf_probe_large_insn_limit(). Also fix all the existing detection logic for various program and map types that never worked: - BPF_PROG_TYPE_LIRC_MODE2; - BPF_PROG_TYPE_TRACING; - BPF_PROG_TYPE_LSM; - BPF_PROG_TYPE_EXT; - BPF_PROG_TYPE_SYSCALL; - BPF_PROG_TYPE_STRUCT_OPS; - BPF_MAP_TYPE_STRUCT_OPS; - BPF_MAP_TYPE_BLOOM_FILTER. The above prog/map types needed special setups and detection logic to work. A subsequent patch adds selftests that will make sure that all the detection logic keeps working for all current and future program and map types, avoiding otherwise inevitable bit rot. [0] Closes: https://github.com/libbpf/libbpf/issues/312 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Dave Marchevsky <davemarchevsky@fb.com> Cc: Julia Kartseva <hex@fb.com> Link: https://lore.kernel.org/bpf/20211217171202.3352835-2-andrii@kernel.org
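A hedged usage sketch of the new probing convention (return value 1 means supported, 0 not supported, negative means the probe itself failed):

    #include <bpf/libbpf.h>

    static bool supports_bloom_filter_map(void)
    {
        int ret = libbpf_probe_bpf_map_type(BPF_MAP_TYPE_BLOOM_FILTER, NULL);

        return ret == 1; /* treat probe errors (< 0) as "not supported" here */
    }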
2021-12-16tools/libbpf: Enable cross-building with clangJean-Philippe Brucker1-1/+2
Cross-building using clang requires passing the "-target" flag rather than using the CROSS_COMPILE prefix. Makefile.include transforms CROSS_COMPILE into CLANG_CROSS_FLAGS. Add them to the CFLAGS. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20211216163842.829836-4-jean-philippe@linaro.org
2021-12-14libbpf: Avoid reading past ELF data section end when copying licenseAndrii Nakryiko1-1/+4
Fix possible read beyond ELF "license" data section if the license string is not properly zero-terminated. Use the fact that libbpf_strlcpy never accesses the (N-1)st byte of the source string because it's replaced with '\0' anyways. If this happens, it's a violation of contract between libbpf and a user, but not handling this more robustly upsets CIFuzz, so given the fix is trivial, let's fix the potential issue. Fixes: 9fc205b413b3 ("libbpf: Add sane strncpy alternative and use it internally") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211214232054.3458774-1-andrii@kernel.org
2021-12-14libbpf: Mark bpf_object__find_program_by_title API deprecated.Kui-Feng Lee1-0/+1
Deprecate this API starting with v0.7. All callers should move to bpf_object__find_program_by_name if possible; otherwise, use bpf_object__for_each_program to find a program from a given section. [0] Closes: https://github.com/libbpf/libbpf/issues/292 Signed-off-by: Kui-Feng Lee <kuifeng@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211214035931.1148209-5-kuifeng@fb.com
2021-12-14libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPFAndrii Nakryiko6-39/+143
The need to increase RLIMIT_MEMLOCK to do anything useful with BPF is one of the first extremely frustrating gotchas that all new BPF users go through and in some cases have to learn the hard way. Luckily, starting with upstream Linux kernel version 5.11, the BPF subsystem dropped the dependency on memlock and uses memcg-based memory accounting instead. Unfortunately, detecting memcg-based BPF memory accounting is far from trivial (as can be evidenced by this patch), so in practice most BPF applications still do an unconditional RLIMIT_MEMLOCK increase. As we move towards libbpf 1.0, it would be good to allow users to forget about RLIMIT_MEMLOCK vs memcg and let libbpf do the sensible adjustment automatically. This patch paves the way forward in this matter. Libbpf will do feature detection of memcg-based accounting, and if detected, will do nothing. But if the kernel is too old, just like BCC, libbpf will automatically increase RLIMIT_MEMLOCK on behalf of the user application ([0]). As this is technically a breaking change, during the transition period applications have to opt into libbpf 1.0 mode by setting the LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK bit when calling libbpf_set_strict_mode(). Libbpf allows controlling the exact RLIMIT_MEMLOCK limit that is set with the libbpf_set_memlock_rlim_max() API. Passing 0 will make libbpf do nothing with RLIMIT_MEMLOCK. libbpf_set_memlock_rlim_max() has to be called before the first bpf_prog_load(), bpf_btf_load(), or bpf_object__load() call, otherwise it has no effect and will return -EBUSY. [0] Closes: https://github.com/libbpf/libbpf/issues/369 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211214195904.1785155-2-andrii@kernel.org
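A hedged sketch of how an application might opt in (the libbpf_set_memlock_rlim_max() name follows this commit; later releases may expose it under a different name):

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
        /* opt into libbpf 1.0 behavior: bump RLIMIT_MEMLOCK automatically
         * only if the kernel lacks memcg-based accounting (pre-5.11) */
        libbpf_set_strict_mode(LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK);

        /* optional cap; must run before the first bpf_prog_load(),
         * bpf_btf_load(), or bpf_object__load() call */
        libbpf_set_memlock_rlim_max(512UL * 1024 * 1024);

        /* ... open and load BPF objects as usual ... */
        return 0;
    }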
2021-12-14libbpf: Add sane strncpy alternative and use it internallyAndrii Nakryiko6-19/+31
strncpy() has notoriously error-prone semantics, which makes GCC complain about it a lot (and quite often completely falsely at that). Instead of pleasing GCC all the time (-Wno-stringop-truncation is unfortunately only supported by GCC, so it's a bit too messy to just enable it in the Makefile), add a libbpf-internal libbpf_strlcpy() helper which follows what FreeBSD's strlcpy() does and what most people would expect from strncpy(): it copies up to the first N-1 bytes from the source string into the destination string and ensures zero-termination afterwards. Replace all the relevant uses of strncpy/strncat/memcpy in libbpf with libbpf_strlcpy(). This also fixes the issue reported by Emmanuel Deloget in xsk.c where memcpy() could access the source string beyond its end. Fixes: 2f6324a3937f8 (libbpf: Support shared umems between queues and devices) Reported-by: Emmanuel Deloget <emmanuel.deloget@eho.link> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211211004043.2374068-1-andrii@kernel.org
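A standalone sketch of the described semantics (copy at most dst_sz - 1 bytes, always NUL-terminate); this is illustrative, not the exact libbpf_strlcpy() implementation:

    #include <stddef.h>

    static void strlcpy_sketch(char *dst, const char *src, size_t dst_sz)
    {
        size_t i;

        if (dst_sz == 0)
            return;

        /* never reads src[dst_sz - 1] or anything past the source's NUL */
        for (i = 0; i + 1 < dst_sz && src[i]; i++)
            dst[i] = src[i];
        dst[i] = '\0';
    }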
2021-12-14libbpf: Fix potential uninit memory readAndrii Nakryiko1-0/+1
In case of BPF_CORE_TYPE_ID_LOCAL we fill out target result explicitly. But targ_res itself isn't initialized in such a case, and subsequent call to bpf_core_patch_insn() might read uninitialized field (like fail_memsz_adjust in this case). So ensure that targ_res is zero-initialized for BPF_CORE_TYPE_ID_LOCAL case. This was reported by Coverity static analyzer. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20211214010032.3843804-1-andrii@kernel.org
2021-12-13libbpf: Add doc comments for bpf_program__(un)pin()Grant Seltzer1-0/+24
This adds doc comments for the two bpf_program pinning functions, bpf_program__pin() and bpf_program__unpin() Signed-off-by: Grant Seltzer <grantseltzer@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211209232222.541733-1-grantseltzer@gmail.com
2021-12-12libbpf: Don't validate TYPE_ID relo's original imm valueAndrii Nakryiko1-5/+14
During linking, type IDs in the resulting linked BPF object file can change, and so ldimm64 instructions corresponding to BPF_CORE_TYPE_ID_TARGET and BPF_CORE_TYPE_ID_LOCAL CO-RE relos can get their imm value out of sync with the actual CO-RE relocation information that's properly updated by the BPF linker during the linking process. We could teach the BPF linker to adjust such instructions, but it feels a bit too much for the linker to re-implement a good chunk of bpf_core_patch_insns logic just for this. This is a redundant safety check for TYPE_ID relocations, as the real validation is in matching CO-RE specs, so if that works fine, it's very unlikely that there is something wrong with the instruction itself. So, instead, teach libbpf (and the kernel) to ignore insn->imm for BPF_CORE_TYPE_ID_TARGET and BPF_CORE_TYPE_ID_LOCAL relos. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211213010706.100231-1-andrii@kernel.org
2021-12-11libbpf: Fix gen_loader assumption on number of programs.Alexei Starovoitov1-2/+3
libbpf's obj->nr_programs includes static and global functions. That number could be higher than the actual number of bpf programs going to be loaded by gen_loader. Passing a larger nr_programs to bpf_gen__init() doesn't hurt. Those extra stack slots will stay as zero. bpf_gen__finish() needs to check that the actual number of progs that gen_loader saw is less than or equal to obj->nr_programs. Fixes: ba05fd36b851 ("libbpf: Perform map fd cleanup for gen_loader in case of error") Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-12-10Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-nextJakub Kicinski19-629/+1185
Andrii Nakryiko says: ==================== bpf-next 2021-12-10 v2 We've added 115 non-merge commits during the last 26 day(s) which contain a total of 182 files changed, 5747 insertions(+), 2564 deletions(-). The main changes are: 1) Various samples fixes, from Alexander Lobakin. 2) BPF CO-RE support in kernel and light skeleton, from Alexei Starovoitov. 3) A batch of new unified APIs for libbpf, logging improvements, version querying, etc. Also a batch of old deprecations for old APIs and various bug fixes, in preparation for libbpf 1.0, from Andrii Nakryiko. 4) BPF documentation reorganization and improvements, from Christoph Hellwig and Dave Tucker. 5) Support for declarative initialization of BPF_MAP_TYPE_PROG_ARRAY in libbpf, from Hengqi Chen. 6) Verifier log fixes, from Hou Tao. 7) Runtime-bounded loops support with bpf_loop() helper, from Joanne Koong. 8) Extend branch record capturing to all platforms that support it, from Kajol Jain. 9) Light skeleton codegen improvements, from Kumar Kartikeya Dwivedi. 10) bpftool doc-generating script improvements, from Quentin Monnet. 11) Two libbpf v0.6 bug fixes, from Shuyi Cheng and Vincent Minet. 12) Deprecation warning fix for perf/bpf_counter, from Song Liu. 13) MAX_TAIL_CALL_CNT unification and MIPS build fix for libbpf, from Tiezhu Yang. 14) BTF_KIND_TYPE_TAG follow-up fixes, from Yonghong Song. 15) Selftests fixes and improvements, from Ilya Leoshkevich, Jean-Philippe Brucker, Jiri Olsa, Maxim Mikityanskiy, Tirthendu Sarkar, Yucong Sun, and others. * https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (115 commits) libbpf: Add "bool skipped" to struct bpf_map libbpf: Fix typo in btf__dedup@LIBBPF_0.0.2 definition bpftool: Switch bpf_object__load_xattr() to bpf_object__load() selftests/bpf: Remove the only use of deprecated bpf_object__load_xattr() selftests/bpf: Add test for libbpf's custom log_buf behavior selftests/bpf: Replace all uses of bpf_load_btf() with bpf_btf_load() libbpf: Deprecate bpf_object__load_xattr() libbpf: Add per-program log buffer setter and getter libbpf: Preserve kernel error code and remove kprobe prog type guessing libbpf: Improve logging around BPF program loading libbpf: Allow passing user log setting through bpf_object_open_opts libbpf: Allow passing preallocated log_buf when loading BTF into kernel libbpf: Add OPTS-based bpf_btf_load() API libbpf: Fix bpf_prog_load() log_buf logic for log_level 0 samples/bpf: Remove unneeded variable bpf: Remove redundant assignment to pointer t selftests/bpf: Fix a compilation warning perf/bpf_counter: Use bpf_map_create instead of bpf_create_map samples: bpf: Fix 'unknown warning group' build warning on Clang samples: bpf: Fix xdp_sample_user.o linking with Clang ... ==================== Link: https://lore.kernel.org/r/20211210234746.2100561-1-andrii@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-10libbpf: Add "bool skipped" to struct bpf_mapShuyi Cheng1-3/+8
Fix error: "failed to pin map: Bad file descriptor, path: /sys/fs/bpf/_rodata_str1_1." On old kernels, the global data map will not be created, see [0]. So we should skip the pinning of the global data map to avoid bpf_object__pin_maps returning an error. Therefore, when the map is not created, we mark "map->skipped" as true and then check it during relocation and during pinning. Fixes: 16e0c35c6f7a ("libbpf: Load global data maps lazily on legacy kernels") Signed-off-by: Shuyi Cheng <chengshuyi@linux.alibaba.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2021-12-10libbpf: Fix typo in btf__dedup@LIBBPF_0.0.2 definitionVincent Minet1-1/+1
The btf__dedup_deprecated name was misspelled in the definition of the compat symbol for btf__dedup. This leads it to be missing from the shared library. This fixes it. Fixes: 957d350a8b94 ("libbpf: Turn btf_dedup_opts into OPTS-based struct") Signed-off-by: Vincent Minet <vincent@vincent-minet.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211210063112.80047-1-vincent@vincent-minet.net
2021-12-10libbpf: Deprecate bpf_object__load_xattr()Andrii Nakryiko2-13/+11
Deprecate non-extensible bpf_object__load_xattr() in v0.8 ([0]). With log_level control through bpf_object_open_opts or bpf_program__set_log_level(), we are finally at the point where bpf_object__load_xattr() doesn't provide any functionality that can't be accessed through other (better) ways. The other feature, target_btf_path, is also controllable through bpf_object_open_opts. [0] Closes: https://github.com/libbpf/libbpf/issues/289 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-9-andrii@kernel.org
2021-12-10libbpf: Add per-program log buffer setter and getterAndrii Nakryiko3-17/+84
Allow setting a user-provided log buffer on a per-program basis ([0]). This gives a great deal of flexibility in terms of which programs are loaded with logging enabled and where the corresponding logs go. A log buffer set with bpf_program__set_log_buf() overrides the kernel_log_buf and kernel_log_size settings set at bpf_object open time through bpf_object_open_opts, if any. Adjust bpf_object_load_prog_instance() logic to not perform its own log buf allocation and load retry if a custom log buffer is provided by the user. [0] Closes: https://github.com/libbpf/libbpf/issues/418 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-8-andrii@kernel.org
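A hedged usage sketch of the per-program log buffer setter (the program name "handler" and the buffer size are placeholders):

    #include <errno.h>
    #include <bpf/libbpf.h>

    static char verifier_log[1024 * 1024];

    static int load_with_prog_log(struct bpf_object *obj)
    {
        struct bpf_program *prog;

        prog = bpf_object__find_program_by_name(obj, "handler");
        if (!prog)
            return -ENOENT;

        /* overrides any kernel_log_buf set via bpf_object_open_opts */
        bpf_program__set_log_buf(prog, verifier_log, sizeof(verifier_log));

        return bpf_object__load(obj);
    }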
2021-12-10libbpf: Preserve kernel error code and remove kprobe prog type guessingAndrii Nakryiko1-17/+2
Instead of rewriting the error code returned by the kernel during prog load with libbpf-specific variants, pass through the original error. There is now also no need to have a backup generic -LIBBPF_ERRNO__LOAD fallback error, as bpf_prog_load() guarantees that errno will be properly set no matter what. Also drop the completely outdated and pretty useless BPF_PROG_TYPE_KPROBE guessing logic. It's neither necessary nor helpful in modern BPF applications. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-7-andrii@kernel.org
2021-12-10libbpf: Improve logging around BPF program loadingAndrii Nakryiko1-19/+19
Add missing "prog '%s': " prefixes in a few places and consistently use markers for the beginning and end of program load logs. Here's an example of log output: libbpf: prog 'handler': BPF program load failed: Permission denied libbpf: -- BEGIN PROG LOAD LOG --- arg#0 reference type('UNKNOWN ') size cannot be determined: -22 ; out1 = in1; 0: (18) r1 = 0xffffc9000cdcc000 2: (61) r1 = *(u32 *)(r1 +0) ... 81: (63) *(u32 *)(r4 +0) = r5 R1_w=map_value(id=0,off=16,ks=4,vs=20,imm=0) R4=map_value(id=0,off=400,ks=4,vs=16,imm=0) invalid access to map value, value_size=16 off=400 size=4 R4 min value is outside of the allowed memory range processed 63 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0 -- END PROG LOAD LOG -- libbpf: failed to load program 'handler' libbpf: failed to load object 'test_skeleton' The entire verifier log, including the BEGIN and END markers, is now always output during a single print callback call. This should make it much easier to post-process or parse it, if necessary. It's not an explicit API guarantee, but it can be reasonably expected to stay like that. Also, __bpf_object__open is renamed to bpf_object_open(), as it's always an adventure to find the exact function that implements bpf_object's open phase, so drop the double underscores and use the internal libbpf naming convention. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-6-andrii@kernel.org
2021-12-10libbpf: Allow passing user log setting through bpf_object_open_optsAndrii Nakryiko3-3/+65
Allow users to provide their own custom log_buf, log_size, and log_level at the bpf_object level through bpf_object_open_opts. This log_buf will be used during BTF loading. A subsequent patch will use the same log_buf during BPF program loading, unless overridden at the per-bpf_program level. When such a custom log_buf is provided, libbpf won't attempt to retry loading of BTF with its own log buffer to capture the kernel's error log output. The user is responsible for providing a big enough buffer; otherwise they run the risk of getting an -ENOSPC error from the bpf() syscall. See also the comments in bpf_object_open_opts regarding log_level and log_buf interactions. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-5-andrii@kernel.org
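A hedged sketch of passing a user log buffer at open time (the kernel_log_* option names follow this commit's description; the buffer size is a placeholder):

    #include <bpf/libbpf.h>

    static char kernel_log[256 * 1024];

    static struct bpf_object *open_with_log(const char *path)
    {
        LIBBPF_OPTS(bpf_object_open_opts, opts,
            .kernel_log_buf = kernel_log,
            .kernel_log_size = sizeof(kernel_log),
            .kernel_log_level = 1,
        );

        return bpf_object__open_file(path, &opts);
    }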
2021-12-10libbpf: Allow passing preallocated log_buf when loading BTF into kernelAndrii Nakryiko2-23/+56
Add a libbpf-internal btf_load_into_kernel() that allows a preallocated log_buf and a custom log_level to be passed into the kernel during the BPF_BTF_LOAD call. When a custom log_buf is provided, btf_load_into_kernel() won't attempt a retry with an automatically allocated internal temporary buffer to capture the BTF validation log. It's important to note the relation between log_buf and log_level, which slightly deviates from the stricter kernel logic. From the kernel's POV, if log_buf is specified, log_level has to be > 0, and vice versa. While the kernel has good reasons to request such "sanity", this, in practice, is a bit inconvenient and restrictive for libbpf's high-level bpf_object APIs. So libbpf will allow setting a non-NULL log_buf with log_level == 0. This is fine and means attempting to load BTF without logging requested, but if it fails, retrying the load with the custom log_buf and log_level 1. Similar logic will be implemented for program loading. In practice this means that users can provide a custom log buffer just in case an error happens, but not request slower verbose logging all the time. This is also consistent with libbpf behavior when a custom log_buf is not set: libbpf first tries to load everything with log_level=0, and only if an error happens does it allocate an internal log buffer and retry with log_level=1. Also, while at it, make the BTF validation log more obvious and follow the log pattern libbpf is using for dumping the BPF verifier log during BPF_PROG_LOAD. BTF loading resulting in an error will look like this: libbpf: BTF loading error: -22 libbpf: -- BEGIN BTF LOAD LOG --- magic: 0xeb9f version: 1 flags: 0x0 hdr_len: 24 type_off: 0 type_len: 1040 str_off: 1040 str_len: 2063598257 btf_total_size: 1753 Total section length too long -- END BTF LOAD LOG -- libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring. This makes it much easier to find the relevant parts in libbpf log output. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-4-andrii@kernel.org
2021-12-10libbpf: Add OPTS-based bpf_btf_load() APIAndrii Nakryiko4-12/+69
Similar to previous bpf_prog_load() and bpf_map_create() APIs, add bpf_btf_load() API which is taking optional OPTS struct. Schedule bpf_load_btf() for deprecation in v0.8 ([0]). This makes naming consistent with BPF_BTF_LOAD command, sets up an API for extensibility in the future, moves options parameters (log-related fields) into optional options, and also allows to pass log_level directly. It also removes log buffer auto-allocation logic from low-level API (consistent with bpf_prog_load() behavior), but preserves a special treatment of log_level == 0 with non-NULL log_buf, which matches low-level bpf_prog_load() and high-level libbpf APIs for BTF and program loading behaviors. [0] Closes: https://github.com/libbpf/libbpf/issues/419 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-3-andrii@kernel.org
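A hedged usage sketch of the new low-level call (buffer sizes are placeholders; a negative return value is an error, otherwise a BTF FD):

    #include <bpf/bpf.h>

    static int load_btf_blob(const void *btf_data, size_t btf_size)
    {
        static char log_buf[64 * 1024];
        LIBBPF_OPTS(bpf_btf_load_opts, opts,
            .log_buf = log_buf,
            .log_size = sizeof(log_buf),
            .log_level = 1,
        );

        return bpf_btf_load(btf_data, btf_size, &opts);
    }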
2021-12-10libbpf: Fix bpf_prog_load() log_buf logic for log_level 0Andrii Nakryiko1-13/+16
To unify libbpf APIs behavior w.r.t. log_buf and log_level, fix bpf_prog_load() to follow the same logic as bpf_btf_load() and high-level bpf_object__load() API will follow in the subsequent patches: - if log_level is 0 and non-NULL log_buf is provided by a user, attempt load operation initially with no log_buf and log_level set; - if successful, we are done, return new FD; - on error, retry the load operation with log_level bumped to 1 and log_buf set; this way verbose logging will be requested only when we are sure that there is a failure, but will be fast in the common/expected success case. Of course, user can still specify log_level > 0 from the very beginning to force log collection. Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-2-andrii@kernel.org
2021-12-06libbpf: Add doc comments in libbpf.hGrant Seltzer1-0/+53
This adds comments above functions in libbpf.h which document their uses. These comments are of a format that doxygen and sphinx can pick up and render. These are rendered by libbpf.readthedocs.org These doc comments are for: - bpf_object__open_file() - bpf_object__open_mem() - bpf_program__attach_uprobe() - bpf_program__attach_uprobe_opts() Signed-off-by: Grant Seltzer <grantseltzer@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211206203709.332530-1-grantseltzer@gmail.com
2021-12-06libbpf: Fix trivial typohuangxuesen1-2/+2
Fix typo in comment from 'bpf_skeleton_map' to 'bpf_map_skeleton' and from 'bpf_skeleton_prog' to 'bpf_prog_skeleton'. Signed-off-by: huangxuesen <huangxuesen@kuaishou.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1638755236-3851199-1-git-send-email-hxseverything@gmail.com
2021-12-03libbpf: Reduce bpf_core_apply_relo_insn() stack usage.Alexei Starovoitov3-45/+51
Reduce bpf_core_apply_relo_insn() stack usage and bump BPF_CORE_SPEC_MAX_LEN limit back to 64. Fixes: 29db4bea1d10 ("bpf: Prepare relo_core.c for kernel duty.") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211203182836.16646-1-alexei.starovoitov@gmail.com
2021-12-02libbpf: Deprecate bpf_prog_load_xattr() APIAndrii Nakryiko2-0/+6
bpf_prog_load_xattr() is a high-level API that's named like the low-level BPF_PROG_LOAD wrapper APIs, but it actually operates on struct bpf_object. It's badly and confusingly misnamed, as it will load all the progs inside the bpf_object, returning the prog_fd of the very first BPF program. It also has a bunch of ad-hoc things like log_level override, map_ifindex auto-setting, etc. All this can be expressed more explicitly and cleanly through existing libbpf APIs. This patch marks bpf_prog_load_xattr() for deprecation in libbpf v0.8 ([0]). [0] Closes: https://github.com/libbpf/libbpf/issues/308 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-10-andrii@kernel.org
2021-12-02libbpf: Add API to get/set log_level at per-program levelAndrii Nakryiko4-1/+23
Add bpf_program__set_log_level() and bpf_program__log_level() to fetch and adjust log_level sent during BPF_PROG_LOAD command. This allows to selectively request more or less verbose output in BPF verifier log. Also bump libbpf version to 0.7 and make these APIs the first in v0.7. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-3-andrii@kernel.org
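A hedged sketch of the new per-program getter/setter pair (level 2 requests the most verbose verifier output):

    #include <bpf/libbpf.h>

    static void bump_verbosity(struct bpf_program *prog)
    {
        /* raise only this program's verifier verbosity; other programs
         * in the same bpf_object keep their current log_level */
        if (bpf_program__log_level(prog) < 2)
            bpf_program__set_log_level(prog, 2);
    }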
2021-12-02libbpf: Use __u32 fields in bpf_map_create_optsAndrii Nakryiko1-4/+4
Corresponding Linux UAPI struct uses __u32, not int, so keep it consistent. Fixes: 992c4225419a ("libbpf: Unify low-level map creation APIs w/ new bpf_map_create()") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-2-andrii@kernel.org
2021-12-02libbpf: Clean gen_loader's attach kind.Alexei Starovoitov1-1/+3
The gen_loader has to clear attach_kind otherwise the programs without attach_btf_id will fail load if they follow programs with attach_btf_id. Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211201181040.23337-12-alexei.starovoitov@gmail.com