path: root/tools/testing/selftests/bpf
2022-12-06  selftests/bpf: Use "is not set" instead of "=n"  [Daan De Meyer, 1 file, +1/-1]

"=n" is not valid kconfig syntax. Use "is not set" instead to indicate the option should be disabled.

Signed-off-by: Daan De Meyer <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-12-06  selftests/bpf: Install all required files to run selftests  [Daan De Meyer, 1 file, +4/-2]

When installing the selftests using "make -C tools/testing/selftests install", all the files required to run the selftests need to be installed as well. Make sure this is the case.

Signed-off-by: Daan De Meyer <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-12-06  selftests/bpf: Allow building bpf tests with CONFIG_XFRM_INTERFACE=[m|n]  [Martin KaFai Lau, 1 file, +9/-4]

It is useful to use vmlinux.h in the xfrm_info test like the other kfunc tests do. In particular, it is common for a kfunc bpf prog to need other core kernel structures from vmlinux.h. Although vmlinux.h is preferred, a ___local flavor of struct bpf_xfrm_info is needed in order to build the bpf selftests when CONFIG_XFRM_INTERFACE=[m|n].

Cc: Eyal Birger <[email protected]>
Fixes: 90a3a05eb33f ("selftests/bpf: add xfrm_info tests")
Signed-off-by: Martin KaFai Lau <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

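As a rough illustration of what such a CO-RE "flavor" looks like, a minimal sketch follows; the field layout is an assumption for the sketch, not necessarily the exact definition used by the test:

    /* Illustrative only: a locally defined ___flavor that stands in for the
     * kernel's struct bpf_xfrm_info when vmlinux.h cannot provide it because
     * CONFIG_XFRM_INTERFACE is m or n. Field names/types are assumptions. */
    struct bpf_xfrm_info___local {
            u32 if_id;
            int link;
    } __attribute__((preserve_access_index));
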
2022-12-05  selftests/bpf: add xfrm_info tests  [Eyal Birger, 5 files, +403/-0]

Test the xfrm_info kfunc helpers. The test setup creates three namespaces - NS0, NS1, NS2. XFRM tunnels are set up between NS0 and the two other namespaces. The kfunc helpers are used to steer traffic from NS0 to the other namespaces based on a userspace-populated bpf global variable, and to validate that the return traffic arrived from the desired namespace.

Signed-off-by: Eyal Birger <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Martin KaFai Lau <[email protected]>

2022-12-04  selftests/bpf: Fix conflicts with built-in functions in bpf_iter_ksym  [James Hilliard, 1 file, +3/-3]

Both tolower and toupper are built-in C functions; we should not redefine them, as this can result in a build error.

Fixes the following errors:

    progs/bpf_iter_ksym.c:10:20: error: conflicting types for built-in function 'tolower'; expected 'int(int)' [-Werror=builtin-declaration-mismatch]
       10 | static inline char tolower(char c)
          |                    ^~~~~~~
    progs/bpf_iter_ksym.c:5:1: note: 'tolower' is declared in header '<ctype.h>'
        4 | #include <bpf/bpf_helpers.h>
      +++ |+#include <ctype.h>
        5 |
    progs/bpf_iter_ksym.c:17:20: error: conflicting types for built-in function 'toupper'; expected 'int(int)' [-Werror=builtin-declaration-mismatch]
       17 | static inline char toupper(char c)
          |                    ^~~~~~~
    progs/bpf_iter_ksym.c:17:20: note: 'toupper' is declared in header '<ctype.h>'

See background on this sort of issue:

    https://stackoverflow.com/a/20582607
    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=12213

(C99, 7.1.3p1) "All identifiers with external linkage in any of the following subclauses (including the future library directions) are always reserved for use as identifiers with external linkage."

This is documented behavior in GCC: https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html#index-std-2

Signed-off-by: James Hilliard <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

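The general shape of the fix is to stop reusing the reserved identifiers; a minimal sketch of one way to do that (the selftest may use different helper names) is:

    /* Sketch only: use names that do not collide with the reserved ctype.h
     * identifiers instead of redefining tolower()/toupper(). */
    static inline char to_lower(char c)
    {
            if (c >= 'A' && c <= 'Z')
                    c += ('a' - 'A');
            return c;
    }

    static inline char to_upper(char c)
    {
            if (c >= 'a' && c <= 'z')
                    c -= ('a' - 'A');
            return c;
    }
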
2022-12-04  bpf: Add sleepable prog tests for cgrp local storage  [Yonghong Song, 2 files, +174/-0]

Add three tests for cgrp local storage support for sleepable progs. Two tests can load and run properly: one for cgroup_iter, another for passing current->cgroups->dfl_cgrp to the bpf_cgrp_storage_get() helper. One test has bpf_rcu_read_lock() and fails to load.

Signed-off-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-12-04  bpf: Do not mark certain LSM hook arguments as trusted  [Yonghong Song, 2 files, +12/-0]

Martin mentioned that the verifier cannot assume arguments from the LSM hook sk_alloc_security are trusted, since after the hook is called the sk ref_count is set to 1. This will overwrite the ref_count changed by the bpf program and may cause a ref_count underflow later on.

I then further checked some other hooks. For example, for the bpf_lsm_file_alloc() hook in fs/file_table.c:

    f->f_cred = get_cred(cred);
    error = security_file_alloc(f);
    if (unlikely(error)) {
            file_free_rcu(&f->f_rcuhead);
            return ERR_PTR(error);
    }

    atomic_long_set(&f->f_count, 1);

The input parameter 'f' to security_file_alloc() cannot be trusted either. Specifically, I investigated the bpf_map/bpf_prog/file/sk/task alloc/free lsm hooks. Except for bpf_map_alloc and task_alloc, arguments for all other hooks should not be considered trusted. This may not be a complete list, but it covers common usage for sk and task.

Fixes: 3f00c5239344 ("bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs")
Signed-off-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-12-04  selftests/bpf: Fix rcu_read_lock test with new MEM_RCU semantics  [Yonghong Song, 1 file, +45/-10]

Add MEM_RCU pointer null checking to the related tests. Also modify the task_acquire test so it takes an rcu ptr 'ptr' where 'ptr = rcu_ptr->rcu_field'.

Signed-off-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

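For illustration only (not the actual test code), the null-check pattern for a MEM_RCU pointer looks roughly like the sketch below; the hook, the walked field and the printout are assumptions, and the kfunc declarations/includes are omitted:

    /* Sketch: an RCU-protected pointer read under bpf_rcu_read_lock() is
     * MEM_RCU-tagged and may be NULL, so it must be checked before use. */
    SEC("tp_btf/task_newtask")
    int BPF_PROG(mem_rcu_sketch, struct task_struct *task, u64 clone_flags)
    {
            struct css_set *cgroups;

            bpf_rcu_read_lock();
            cgroups = task->cgroups;        /* MEM_RCU-tagged pointer */
            if (cgroups)
                    bpf_printk("dfl_cgrp %p", cgroups->dfl_cgrp);
            bpf_rcu_read_unlock();
            return 0;
    }
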
2022-12-02  selftests/bpf: Add GCC compatible builtins to bpf_legacy.h  [James Hilliard, 1 file, +13/-6]

The bpf_legacy.h header uses llvm-specific load functions; add GCC-compatible variants as well to fix tests using these functions under GCC.

Signed-off-by: James Hilliard <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-12-01  selftests/bpf: Validate multiple ref release_on_unlock logic  [Dave Marchevsky, 1 file, +16/-1]

Modify list_push_pop_multiple to alloc and insert nodes 2-at-a-time. Without the previous patch's fix, this block of code:

    bpf_spin_lock(lock);
    bpf_list_push_front(head, &f[i]->node);
    bpf_list_push_front(head, &f[i + 1]->node);
    bpf_spin_unlock(lock);

would fail the check_reference_leak check, as the release_on_unlock logic would miss a ref that should've been released.

Signed-off-by: Dave Marchevsky <[email protected]>
Cc: Kumar Kartikeya Dwivedi <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-12-01  selftests/bpf: Add ingress tests for txmsg with apply_bytes  [Pengcheng Yang, 1 file, +18/-0]

Currently, the ingress redirect is not covered in "txmsg test apply".

Signed-off-by: Pengcheng Yang <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Jakub Sitnicki <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-30  bpf: Tighten ptr_to_btf_id checks.  [Alexei Starovoitov, 2 files, +5/-4]

Networking programs typically don't require CAP_PERFMON, but through kfuncs like bpf_cast_to_kern_ctx() they can access memory through PTR_TO_BTF_ID. In such cases, enforce CAP_PERFMON. Also make sure that only GPL programs can access kernel data structures; all kfuncs require GPL already.

Also remove allow_ptr_to_map_access. It's the same as allow_ptr_leaks, and a different name for the same check only causes confusion.

Fixes: fd264ca02094 ("bpf: Add a kfunc to type cast from bpf uapi ctx to kernel ctx")
Fixes: 50c6b8a9aea2 ("selftests/bpf: Add a test for btf_type_tag "percpu"")
Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-12-01  selftests/bpf: Add bench test to arm64 and s390x denylist  [Daniel Borkmann, 2 files, +2/-0]

BPF CI fails for arm64 and s390x each with the following result:

    [...]
    All error logs:
    serial_test_kprobe_multi_bench_attach:PASS:get_syms 0 nsec
    serial_test_kprobe_multi_bench_attach:PASS:kprobe_multi_empty__open_and_load 0 nsec
    libbpf: prog 'test_kprobe_empty': failed to attach: Operation not supported
    serial_test_kprobe_multi_bench_attach:FAIL:bpf_program__attach_kprobe_multi_opts unexpected error: -95
    #92      kprobe_multi_bench_attach:FAIL
    [...]

Add the test to the deny list.

Fixes: 5b6c7e5c4434 ("selftests/bpf: Add attach bench test")
Signed-off-by: Daniel Borkmann <[email protected]>

2022-11-30  selftests/bpf: Make sure enum-less bpf_enable_stats() API works in C++ mode  [Andrii Nakryiko, 1 file, +10/-3]

Just a simple test to make sure we don't introduce unwanted compiler warnings and that the API still supports passing enums as an input argument.

Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-30  selftests/bpf: Avoid pinning prog when attaching to tc ingress in btf_skc_cls_ingress  [Martin KaFai Lau, 1 file, +10/-15]

This patch removes the need to pin the prog when attaching to tc ingress in the btf_skc_cls_ingress test. Instead, directly use bpf_tc_hook_create() and bpf_tc_attach(). The clsact qdisc will go away together with the netns, so there is no need to call bpf_tc_hook_destroy().

Signed-off-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Stanislav Fomichev <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

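For reference, the libbpf pattern these tests switch to looks roughly like the sketch below; the ifindex and program handle are placeholders, and error handling is trimmed:

    /* Sketch of attaching a tc ingress prog without pinning. */
    DECLARE_LIBBPF_OPTS(bpf_tc_hook, hook,
                        .ifindex = ifindex,
                        .attach_point = BPF_TC_INGRESS);
    DECLARE_LIBBPF_OPTS(bpf_tc_opts, opts,
                        .prog_fd = bpf_program__fd(skel->progs.cls_ingress));

    if (!bpf_tc_hook_create(&hook))
            bpf_tc_attach(&hook, &opts);
    /* no bpf_tc_hook_destroy(): the clsact qdisc dies with the netns */
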
2022-11-30  selftests/bpf: Remove serial from tests using {open,close}_netns  [Martin KaFai Lau, 5 files, +5/-5]

After removing the mount/umount dance from {open,close}_netns() in the previous patch, "serial_" can be removed from the tests using {open,close}_netns().

Signed-off-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Stanislav Fomichev <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-30  selftests/bpf: Remove the "/sys" mount and umount dance in {open,close}_netns  [Martin KaFai Lau, 1 file, +5/-46]

The previous patches have removed the need to do the mount and umount dance when switching netns. In particular:

* Avoid remounting /sys/fs/bpf to have a clean start
* Avoid remounting /sys to get an ifindex of a particular netns

This patch can finally remove the mount and umount dance in {open,close}_netns, which was unnecessarily complicated and error-prone.

Signed-off-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Stanislav Fomichev <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-30  selftests/bpf: Avoid pinning bpf prog in the netns_load_bpf() callers  [Martin KaFai Lau, 1 file, +27/-56]

This patch removes the need to pin prog in the remaining tests in tc_redirect.c by directly using the bpf_tc_hook_create() and bpf_tc_attach(). The clsact qdisc will go away together with the test netns, so no need to do bpf_tc_hook_destroy().

Signed-off-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Stanislav Fomichev <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-30  selftests/bpf: Avoid pinning bpf prog in the tc_redirect_peer_l3 test  [Martin KaFai Lau, 1 file, +12/-20]

This patch removes the need to pin prog in the tc_redirect_peer_l3 test by directly using the bpf_tc_hook_create() and bpf_tc_attach(). The clsact qdisc will go away together with the test netns, so no need to do bpf_tc_hook_destroy().

Signed-off-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Stanislav Fomichev <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-30  selftests/bpf: Avoid pinning bpf prog in the tc_redirect_dtime test  [Martin KaFai Lau, 1 file, +100/-49]

This patch removes the need to pin prog in the tc_redirect_dtime test by directly using the bpf_tc_hook_create() and bpf_tc_attach(). The clsact qdisc will go away together with the test netns, so no need to do bpf_tc_hook_destroy().

Signed-off-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Stanislav Fomichev <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-30  selftests/bpf: Use if_nametoindex instead of reading the /sys/net/class/*/ifindex  [Martin KaFai Lau, 1 file, +18/-28]

When switching netns, setns_by_fd() is doing dances in mounting/umounting the /sys directories. One reason is that the tc_redirect.c test depends on /sys/net/class/*/ifindex instead of using if_nametoindex(). if_nametoindex() uses ioctl() to get the ifindex.

This patch moves all /sys/net/class/*/ifindex usages to if_nametoindex(). The current code checks ifindex >= 0, which is incorrect; ifindex > 0 should be checked instead.

This patch also stores ifindex_veth_src and ifindex_veth_dst since a later patch will need them.

Signed-off-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Stanislav Fomichev <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

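A minimal sketch of the replacement lookup (the interface name is a placeholder):

    #include <net/if.h>

    /* if_nametoindex() returns the ifindex on success and 0 on failure,
     * which is why the check must be "ifindex > 0" rather than ">= 0". */
    unsigned int ifindex = if_nametoindex("veth_src");
    if (!ifindex)
            return -1;      /* lookup failed */
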
2022-11-29  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  [Jakub Kicinski, 2 files, +24/-20]

tools/lib/bpf/ringbuf.c
  927cbb478adf ("libbpf: Handle size overflow for ringbuf mmap")
  b486d19a0ab0 ("libbpf: checkpatch: Fixed code alignments in ringbuf.c")
  https://lore.kernel.org/all/[email protected]/

Signed-off-by: Jakub Kicinski <[email protected]>

2022-11-28  Daniel Borkmann says:  [Jakub Kicinski, 44 files, +4736/-68]

====================
bpf-next 2022-11-25

We've added 101 non-merge commits during the last 11 day(s) which contain a total of 109 files changed, 8827 insertions(+), 1129 deletions(-).

The main changes are:

1) Support for user defined BPF objects: the use case is to allocate own objects, build own object hierarchies and use the building blocks to build own data structures flexibly, for example, linked lists in BPF, from Kumar Kartikeya Dwivedi.

2) Add bpf_rcu_read_{,un}lock() support for sleepable programs, from Yonghong Song.

3) Add support for storing struct task_struct objects as kptrs in maps, from David Vernet.

4) Batch of BPF map documentation improvements, from Maryam Tahhan and Donald Hunter.

5) Improve BPF verifier to propagate nullness information for branches of register to register comparisons, from Eduard Zingerman.

6) Fix cgroup BPF iter infra to hold reference on the start cgroup, from Hou Tao.

7) Fix BPF verifier to not mark fentry/fexit program arguments as trusted given it is not the case for them, from Alexei Starovoitov.

8) Improve BPF verifier's realloc handling to better play along with dynamic runtime analysis tools like KASAN and friends, from Kees Cook.

9) Remove legacy libbpf mode support from bpftool, from Sahid Orentino Ferdjaoui.

10) Rework zero-len skb redirection checks to avoid potentially breaking existing BPF test infra users, from Stanislav Fomichev.

11) Two small refactorings which are independent and have been split out of the XDP queueing RFC series, from Toke Høiland-Jørgensen.

12) Fix a memory leak in LSM cgroup BPF selftest, from Wang Yufen.

13) Documentation on how to run BPF CI without patch submission, from Daniel Müller.

Signed-off-by: Jakub Kicinski <[email protected]>
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>

2022-11-24  selftests/bpf: Add tests for bpf_rcu_read_lock()  [Yonghong Song, 4 files, +450/-0]

Add a few positive/negative tests to test bpf_rcu_read_lock() and its corresponding verifier support. The new test will fail on s390x and aarch64, so an entry is added to each of their respective deny lists.

Acked-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-23  selftests/bpf: Add selftests for bpf_task_from_pid()  [David Vernet, 4 files, +91/-0]

Add some selftest testcases that validate the expected behavior of the bpf_task_from_pid() kfunc that was added in the prior patch.

Signed-off-by: David Vernet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-24  selftests/bpf: Add reproducer for decl_tag in func_proto argument  [Stanislav Fomichev, 1 file, +14/-0]

It should trigger a WARN_ON_ONCE in btf_type_id_size:

    RIP: 0010:btf_type_id_size+0x8bd/0x940 kernel/bpf/btf.c:1952
     btf_func_proto_check kernel/bpf/btf.c:4506 [inline]
     btf_check_all_types kernel/bpf/btf.c:4734 [inline]
     btf_parse_type_sec+0x1175/0x1980 kernel/bpf/btf.c:4763
     btf_parse kernel/bpf/btf.c:5042 [inline]
     btf_new_fd+0x65a/0xb00 kernel/bpf/btf.c:6709
     bpf_btf_load+0x6f/0x90 kernel/bpf/syscall.c:4342
     __sys_bpf+0x50a/0x6c0 kernel/bpf/syscall.c:5034
     __do_sys_bpf kernel/bpf/syscall.c:5093 [inline]
     __se_sys_bpf kernel/bpf/syscall.c:5091 [inline]
     __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5091
     do_syscall_64+0x54/0x70 arch/x86/entry/common.c:48

Signed-off-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-23  selftests/bpf: Mount debugfs in setns_by_fd  [Stanislav Fomichev, 4 files, +7/-3]

Jiri reports broken test_progs after recent commit 68f8e3d4b916 ("selftests/bpf: Make sure zero-len skbs aren't redirectable"). Apparently we don't remount debugfs when we switch back to the original network namespace. Let's explicitly mount /sys/kernel/debug.

0: https://lore.kernel.org/bpf/[email protected]/T/#ma66ca9c92e99eee0a25e40f422489b26ee0171c1

Fixes: a30338840fa5 ("selftests/bpf: Move open_netns() and close_netns() into network_helpers.c")
Reported-by: Jiri Olsa <[email protected]>
Signed-off-by: Stanislav Fomichev <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

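A hedged sketch of what that explicit mount can look like from the test helper (the selftest's actual call site and error handling may differ):

    #include <errno.h>
    #include <sys/mount.h>

    /* Sketch only: remount debugfs after switching network namespaces. */
    int err = mount("debugfs", "/sys/kernel/debug", "debugfs", 0, NULL);
    if (err && errno != EBUSY)
            return -errno;  /* EBUSY just means it is already mounted */
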
2022-11-22  selftests/bpf: Add selftests for bpf_cgroup_ancestor() kfunc  [David Vernet, 3 files, +47/-0]

bpf_cgroup_ancestor() allows BPF programs to access the ancestor of a struct cgroup *. This patch adds selftests that validate its expected behavior.

Signed-off-by: David Vernet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-22  selftests/bpf: Add cgroup kfunc / kptr selftests  [David Vernet, 5 files, +631/-0]

This patch adds a selftest suite to validate the cgroup kfuncs that were added in the prior patch.

Signed-off-by: David Vernet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-22  selftests/bpf: Workaround for llvm nop-4 bug  [Alexei Starovoitov, 2 files, +4/-4]

Currently LLVM fails to recognize .data.* as a data section and defaults to the .text section. Later the BPF backend tries to emit a 4-byte NOP instruction, which doesn't exist in the BPF ISA, and aborts. The fix for LLVM is pending: https://reviews.llvm.org/D138477

While waiting for the fix, let's work around the linked_list test case by using the .bss.* prefix, which is properly recognized by LLVM as a BSS section. Fix libbpf to support the .bss. prefix and adjust the tests.

Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-22  Revert "selftests/bpf: Temporarily disable linked list tests"  [Alexei Starovoitov, 4 files, +16/-34]

This reverts commit 0a2f85a1be4328d29aefa54684d10c23a3298fef.

Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-21  selftests/bpf: Make sure zero-len skbs aren't redirectable  [Stanislav Fomichev, 2 files, +183/-0]

LWT_XMIT to test L3 case, TC to test L2 case.

v2:
- s/veth_ifindex/ipip_ifindex/ in two places (Martin)
- add comment about which condition triggers the rejection (Martin)

Signed-off-by: Stanislav Fomichev <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Martin KaFai Lau <[email protected]>

2022-11-21  selftests/bpf: Make test_bench_attach serial  [Jiri Olsa, 1 file, +1/-3]

Alexei hit another rcu warning because of this test. Make test_bench_attach serial so it does not disrupt other tests during parallel test runs. While this change is not the fix, the warning should be less likely to hit with this test being executed serially.

Reported-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-21  selftests/bpf: Filter out default_idle from kprobe_multi bench  [Jiri Olsa, 1 file, +3/-1]

Alexei hit the following rcu warning when running prog_test -j:

    [  128.049567] WARNING: suspicious RCU usage
    [  128.049569] 6.1.0-rc2 #912 Tainted: G           O
    ...
    [  128.050944] kprobe_multi_link_handler+0x6c/0x1d0
    [  128.050947] ? kprobe_multi_link_handler+0x42/0x1d0
    [  128.050950] ? __cpuidle_text_start+0x8/0x8
    [  128.050952] ? __cpuidle_text_start+0x8/0x8
    [  128.050958] fprobe_handler.part.1+0xac/0x150
    [  128.050964] 0xffffffffa02130c8
    [  128.050991] ? default_idle+0x5/0x20
    [  128.050998] default_idle+0x5/0x20

It's caused by the bench test attaching a kprobe_multi link to the default_idle function, which is not executed in an rcu-safe context, so the kprobe handler on top of it triggers the rcu warning.

Filter out the default_idle function from the bench test.

Reported-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-21  bpf: Set and check spin lock value in sk_storage_map_test  [Xu Kuohai, 1 file, +20/-16]

Update sk_storage_map_test to make sure the kernel does not copy a non-zero user spin lock value to the kernel, and does not copy the kernel spin lock value to the user. If a user spin lock value were copied to the kernel, this test case would make the kernel spin on the copied lock, resulting in an rcu stall and softlockup.

Signed-off-by: Xu Kuohai <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-21  selftests/bpf: Add test for cgroup iterator on a dead cgroup  [Hou Tao, 1 file, +76/-0]

The test closes both the iterator link fd and the cgroup fd, and removes the cgroup file to make a dead cgroup before reading from the cgroup iterator. It also uses kern_sync_rcu() and usleep() to wait for the release of the start cgroup. If the start cgroup is not pinned by the cgroup iterator, reading from the iterator fd will trigger use-after-free.

Signed-off-by: Hou Tao <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Acked-by: Hao Luo <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-21  selftests/bpf: Add cgroup helper remove_cgroup()  [Hou Tao, 2 files, +20/-0]

Add remove_cgroup() to remove a cgroup which doesn't have any children or live processes. It will be used by the following patch to test cgroup iterator on a dead cgroup.

Signed-off-by: Hou Tao <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2022-11-20  bpftool: remove support of --legacy option for bpftool  [Sahid Orentino Ferdjaoui, 1 file, +3/-3]

Following:

  commit bd054102a8c7 ("libbpf: enforce strict libbpf 1.0 behaviors")
  commit 93b8952d223a ("libbpf: deprecate legacy BPF map definitions")

The --legacy option is no longer relevant as libbpf no longer supports it. libbpf_set_strict_mode() is a no-op operation.

Signed-off-by: Sahid Orentino Ferdjaoui <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Reviewed-by: Quentin Monnet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-20  bpf: Add type cast unit tests  [Yonghong Song, 3 files, +198/-0]

Three tests are added. One is from John Fastabend ([1]) which tests tracing style access for an xdp program from the kernel ctx. Another is a tc test to test both kernel ctx tracing style access and an explicit non-ctx type cast. The third one is for negative tests and includes two tests: a tp_bpf test where bpf_rdonly_cast() returns an untrusted ptr which cannot be used as a helper argument, and a tracepoint test where the kernel ctx is a u64.

Also added the test to DENYLIST.s390x since s390 does not currently support calling kernel functions in JIT mode.

[1] https://lore.kernel.org/bpf/[email protected]/

Signed-off-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-20  bpf/selftests: Add selftests for new task kfuncs  [David Vernet, 5 files, +640/-0]

A previous change added a series of kfuncs for storing struct task_struct objects as referenced kptrs. This patch adds a new task_kfunc test suite for validating their expected behavior.

Signed-off-by: David Vernet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-20  bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs  [David Vernet, 2 files, +3/-3]

Kfuncs currently support specifying the KF_TRUSTED_ARGS flag to signal to the verifier that it should enforce that a BPF program passes it a "safe", trusted pointer. Currently, "safe" means that the pointer is either PTR_TO_CTX, or is refcounted. There may be cases, however, where the kernel passes a BPF program a safe / trusted pointer to an object that the BPF program wishes to use as a kptr, but because the object does not yet have a ref_obj_id from the perspective of the verifier, the program would be unable to pass it to a KF_ACQUIRE | KF_TRUSTED_ARGS kfunc.

The solution is to expand the set of pointers that are considered trusted according to KF_TRUSTED_ARGS, so that programs can invoke kfuncs with these pointers without getting rejected by the verifier.

There is already a PTR_UNTRUSTED flag that is set in some scenarios, such as when a BPF program reads a kptr directly from a map without performing a bpf_kptr_xchg() call. These pointers of course can and should be rejected by the verifier. Unfortunately, however, PTR_UNTRUSTED does not cover all the cases for safety that need to be addressed to adequately protect kfuncs. Specifically, pointers obtained by a BPF program "walking" a struct are _not_ considered PTR_UNTRUSTED according to BPF. For example, say that we were to add a kfunc called bpf_task_acquire(), with KF_ACQUIRE | KF_TRUSTED_ARGS, to acquire a struct task_struct *. If we only used PTR_UNTRUSTED to signal that a task was unsafe to pass to a kfunc, the verifier would mistakenly allow the following unsafe BPF program to be loaded:

    SEC("tp_btf/task_newtask")
    int BPF_PROG(unsafe_acquire_task, struct task_struct *task, u64 clone_flags)
    {
            struct task_struct *acquired, *nested;

            nested = task->last_wakee;

            /* Would not be rejected by the verifier. */
            acquired = bpf_task_acquire(nested);
            if (!acquired)
                    return 0;

            bpf_task_release(acquired);
            return 0;
    }

To address this, this patch defines a new type flag called PTR_TRUSTED which tracks whether a PTR_TO_BTF_ID pointer is safe to pass to a KF_TRUSTED_ARGS kfunc or a BPF helper function. PTR_TRUSTED pointers are passed directly from the kernel as a tracepoint or struct_ops callback argument. Any nested pointer that is obtained from walking a PTR_TRUSTED pointer is no longer PTR_TRUSTED. From the example above, the struct task_struct *task argument is PTR_TRUSTED, but the 'nested' pointer obtained from 'task->last_wakee' is not PTR_TRUSTED.

A subsequent patch will add kfuncs for storing a task kfunc as a kptr, and then another patch will add selftests to validate.

Signed-off-by: David Vernet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-18  selftests/bpf: Skip spin lock failure test on s390x  [Kumar Kartikeya Dwivedi, 1 file, +6/-0]

Instead of adding the whole test to DENYLIST.s390x, which also has success test cases that should be run, just skip over failure test cases in case the JIT does not support kfuncs.

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-17  selftests/bpf: Temporarily disable linked list tests  [Kumar Kartikeya Dwivedi, 4 files, +34/-16]

The latest clang nightly as of writing crashes with the given test case for BPF linked lists wherever global glock, ghead, glock2 are used, hence comment out the parts that cause the crash, and prepare this commit so that it can be reverted when the fix has been made. More context in [0].

[0]: https://lore.kernel.org/bpf/[email protected]

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-17  selftests/bpf: Add BTF sanity tests  [Kumar Kartikeya Dwivedi, 1 file, +485/-0]

Preparing the metadata for bpf_list_head involves a complicated parsing step and type resolution for the contained value. Ensure that corner cases are tested against and invalid specifications in source are duly rejected. Also include tests for incorrect ownership relationships in the BTF.

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-17  selftests/bpf: Add BPF linked list API tests  [Kumar Kartikeya Dwivedi, 6 files, +1264/-0]

Include various tests covering the success and failure cases. Also, run the success cases at runtime to verify correctness of linked list manipulation routines, in addition to ensuring successful verification.

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-17  selftests/bpf: Add failure test cases for spin lock pairing  [Kumar Kartikeya Dwivedi, 2 files, +292/-1]

First, ensure that whenever a bpf_spin_lock is present in an allocation, the reg->id is preserved. This won't be true for global variables, however, since they have a single map value per map, hence the verifier hardcodes it to 0 (so that multiple pseudo ldimm64 insns can yield the same lock object per map at a given offset).

Next, add test cases for all possible combinations (kptr, global, map value, inner map value). Since we lifted the restriction on locking in inner maps, also add test cases for them. Currently, each lookup into an inner map gets a fresh reg->id, so even if the reg->map_ptr is the same, they will be treated as separate allocations and the incorrect unlock pairing will be rejected.

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

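As a rough illustration of the class of bug these tests exercise (not the actual selftest code; the struct and function here are made up), taking one lock and releasing another must be rejected:

    /* Illustrative only: the verifier must reject unlocking a lock other
     * than the one currently held. */
    struct elem {
            struct bpf_spin_lock lock;
            int data;
    };

    /* v1 and v2 point to two different allocations / map values */
    static void bad_pairing(struct elem *v1, struct elem *v2)
    {
            bpf_spin_lock(&v1->lock);
            v1->data = 42;
            bpf_spin_unlock(&v2->lock);     /* wrong lock: rejected at load time */
    }
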
2022-11-17  selftests/bpf: Update spinlock selftest  [Kumar Kartikeya Dwivedi, 3 files, +51/-47]

Make updates in preparation for adding more test cases to this selftest:
- Convert from CHECK_ to ASSERT macros.
- Use BPF skeleton
- Fix typo sping -> spin
- Rename spinlock.c -> spin_lock.c

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-17  selftests/bpf: Add __contains macro to bpf_experimental.h  [Kumar Kartikeya Dwivedi, 1 file, +2/-0]

Add a user-facing __contains macro which provides a convenient wrapper over the verbose, kernel-specific BTF declaration tag required to annotate BPF list head structs in user types.

Acked-by: Dave Marchevsky <[email protected]>
Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

2022-11-17  bpf: Introduce single ownership BPF linked list API  [Kumar Kartikeya Dwivedi, 1 file, +28/-0]

Add a linked list API for use in BPF programs, where it expects protection from the bpf_spin_lock in the same allocation as the bpf_list_head. For now, only one bpf_spin_lock can be present, hence that is assumed to be the one protecting the bpf_list_head.

The following functions are added to kick things off:

    // Add node to beginning of list
    void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node);

    // Add node to end of list
    void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node);

    // Remove node at beginning of list and return it
    struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head);

    // Remove node at end of list and return it
    struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head);

The lock protecting the bpf_list_head needs to be taken for all operations. The verifier ensures that the lock that needs to be taken is always held, and only the correct lock is taken for these operations. These checks are made statically by relying on the reg->id preserved for registers pointing into regions having both bpf_spin_lock and the objects protected by it. The comment over check_reg_allocation_locked in this change describes the logic in detail.

Note that bpf_list_push_front and bpf_list_push_back are meant to consume the object containing the node in the 1st argument, however that specific mechanism is intended to not release the ref_obj_id directly until the bpf_spin_unlock is called. In this commit, nothing is done, but the next commit will be introducing logic to handle this case, so it has been left as is for now.

bpf_list_pop_front and bpf_list_pop_back delete the first or last item of the list respectively, and return pointer to the element at the list_node offset. The user can then use container_of style macro to get the actual entry type. The verifier however statically knows the actual type, so the safety properties are still preserved.

With these additions, programs can now manage their own linked lists and store their objects in them.

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>

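A short usage sketch under the locking rule above; the struct name and the global lock/head are illustrative, the section/annotation details required by the real selftests are omitted, and bpf_obj_new() / __contains come from the same patch series:

    /* Illustrative only: push a freshly allocated node under the lock
     * that lives in the same allocation (here: global data) as the head. */
    struct foo {
            int data;
            struct bpf_list_node node;
    };

    struct bpf_spin_lock glock;
    struct bpf_list_head ghead __contains(foo, node);

    int list_sketch(void)
    {
            struct foo *f = bpf_obj_new(typeof(*f));

            if (!f)
                    return 0;

            bpf_spin_lock(&glock);
            bpf_list_push_front(&ghead, &f->node);
            bpf_spin_unlock(&glock);
            return 0;
    }
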
2022-11-17  bpf: Introduce bpf_obj_drop  [Kumar Kartikeya Dwivedi, 1 file, +13/-0]

Introduce bpf_obj_drop, which is the kfunc used to free allocated objects (allocated using bpf_obj_new). Pairing with bpf_obj_new, it implicitly destructs the fields part of the object automatically without user intervention.

Just like the previous patch, the btf_struct_meta that is needed to free up the special fields is passed as a hidden argument to the kfunc. For the user, a convenience macro hides over the kernel side kfunc which is named bpf_obj_drop_impl.

Continuing the previous example:

    void prog(void)
    {
            struct foo *f;

            f = bpf_obj_new(typeof(*f));
            if (!f)
                    return;
            bpf_obj_drop(f);
    }

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
