path: root/tools/testing/selftests/bpf/progs
Age  Commit message  (Author; files changed, lines -/+)
2020-09-25  bpf: selftest: Adapt sock_fields test to use skel and global variables  (Martin KaFai Lau; 1 file, -88/+66)
skel is used. Global variables are used to store the results from the bpf prog. addr_map, sock_result_map, and tcp_sock_result_map are gone. Instead, global variables listen_tp, srv_sa6, cli_tp, srv_tp, listen_sk, srv_sk, and cli_sk are added. Because of that, bpf_addr_array_idx and bpf_result_array_idx are also no longer needed. The CHECK() macro from test_progs.h is reused, and the test bails out as soon as a CHECK fails. shutdown() is used to ensure the previous data-ack is received. The bytes_acked, bytes_received, and pkt_out_cnt checks use "<" to accommodate the possibility that the final ack has not yet been received/sent. That is enough since it is not the focus of this test. The sk local storage is all initialized to 0xeB9F now, so check_sk_pkt_out_cnt() always checks against the 0xeB9F base. This keeps things simple. The next patch will reuse helpers from network_helpers.h to simplify things further. Signed-off-by: Martin KaFai Lau <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
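A rough sketch of the pattern described above (program and variable names here are illustrative, not the actual test's): results are kept in plain global variables instead of result maps, and the userspace side reads them through the bpftool-generated skeleton, e.g. skel->bss->egress_pkts.

    /* BPF side: result lives in a global variable (.bss), no result map needed */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    __u64 egress_pkts = 0;          /* read by userspace via the skeleton */

    SEC("cgroup_skb/egress")
    int count_egress(struct __sk_buff *skb)
    {
            egress_pkts++;
            return 1;               /* allow the packet */
    }

    char _license[] SEC("license") = "GPL";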
2020-09-25  bpf: selftest: Move sock_fields test into test_progs  (Martin KaFai Lau; 1 file, -0/+0)
This is a mechanical change to 1. move test_sock_fields.c to prog_tests/sock_fields.c and 2. rename progs/test_sock_fields_kern.c to progs/test_sock_fields.c. Minimal change is made to the code itself. The next patch will change it to use new ways of writing tests, e.g. skel and global variables. Signed-off-by: Martin KaFai Lau <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-09-23  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller; 24 files, -100/+570)
Alexei Starovoitov says: ==================== pull-request: bpf-next 2020-09-23 The following pull-request contains BPF updates for your *net-next* tree. We've added 95 non-merge commits during the last 22 day(s) which contain a total of 124 files changed, 4211 insertions(+), 2040 deletions(-). The main changes are: 1) Full multi function support in libbpf, from Andrii. 2) Refactoring of function argument checks, from Lorenz. 3) Make bpf_tail_call compatible with functions (subprograms), from Maciej. 4) Program metadata support, from YiFei. 5) bpf iterator optimizations, from Yonghong. ==================== Signed-off-by: David S. Miller <[email protected]>
2020-09-22  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller; 1 file, -0/+15)
Two minor conflicts: 1) net/ipv4/route.c, adding a new local variable while moving another local variable and removing its initial assignment. 2) drivers/net/dsa/microchip/ksz9477.c, overlapping changes. One pretty prints the port mode differently, whilst another changes the driver to try and obtain the port mode from the port node rather than the switch node. Signed-off-by: David S. Miller <[email protected]>
2020-09-21  selftests/bpf: Fix stat probe in d_path test  (Jiri Olsa; 1 file, -1/+8)
Some kernel builds might inline the vfs_getattr call within the fstat syscall code path, so the fentry/vfs_getattr trampoline is not called. Add security_inode_getattr to the allowlist and switch the d_path test stat trampoline to security_inode_getattr. Keeping dentry_open and filp_close, because they are in their own files, so unlikely to be inlined, but in case they are, adding security_file_open. Adding flags that indicate the trampolines were called and failing the test if any of them got missed, so it's easier to identify the issue next time. Fixes: e4d1af4b16f8 ("selftests/bpf: Add test for d_path helper") Suggested-by: Alexei Starovoitov <[email protected]> Signed-off-by: Jiri Olsa <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-09-17  selftests/bpf: Add tailcall_bpf2bpf tests  (Maciej Fijalkowski; 4 files, -0/+201)
Add four tests to the tailcalls selftest, explicitly named "tailcall_bpf2bpf_X", whose purpose is to validate that the combination of tailcalls with bpf2bpf calls works properly. These tests also validate LD_ABS from subprograms. Signed-off-by: Maciej Fijalkowski <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2020-09-15  selftests/bpf: Test load and dump metadata with bpftool and skel  (YiFei Zhu; 2 files, -0/+30)
This is a simple test to check that loading and dumping metadata in bpftool works, whether or not the metadata contents are used by the program. A C test is also added to make sure the skeleton code can read the metadata values. Signed-off-by: YiFei Zhu <[email protected]> Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Cc: YiFei Zhu <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-09-10  selftests/bpf: Define string const as global for test_sysctl_prog.c  (Yonghong Song; 1 file, -2/+2)
When tweaking llvm optimizations, I found that the selftest build failed with the following error:

    libbpf: elf: skipping unrecognized data section(6) .rodata.str1.1
    libbpf: prog 'sysctl_tcp_mem': bad map relo against '.L__const.is_tcp_mem.tcp_mem_name' in section '.rodata.str1.1'
    Error: failed to open BPF object file: Relocation failed
    make: *** [/work/net-next/tools/testing/selftests/bpf/test_sysctl_prog.skel.h] Error 255
    make: *** Deleting file `/work/net-next/tools/testing/selftests/bpf/test_sysctl_prog.skel.h'

The local string constant "tcp_mem_name" is put into the '.rodata.str1.1' section, which libbpf cannot handle. Using untweaked upstream llvm, "tcp_mem_name" is completely inlined after loop unrolling. Commit 7fb5eefd7639 ("selftests/bpf: Fix test_sysctl_loop{1, 2} failure due to clang change") solved a similar problem by defining the string const as a global. Let us do the same here for test_sysctl_prog.c so it can weather future potential llvm changes.

Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
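A minimal sketch of the pattern (illustrative code, not the exact selftest): the string literal is moved out of the function so the compiler places it in '.rodata', which libbpf can process, instead of '.rodata.str1.1'.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* global const string lands in .rodata instead of .rodata.str1.1 */
    const char tcp_mem_name[] = "net/ipv4/tcp_mem";

    static __always_inline int is_tcp_mem(struct bpf_sysctl *ctx)
    {
            char name[sizeof(tcp_mem_name)];    /* big enough for the full compare */
            unsigned char i;

            if (bpf_sysctl_get_name(ctx, name, sizeof(name), 0) < 0)
                    return 0;
            for (i = 0; i < sizeof(tcp_mem_name); ++i)
                    if (name[i] != tcp_mem_name[i])
                            return 0;
            return 1;
    }

    SEC("cgroup/sysctl")
    int sysctl_tcp_mem(struct bpf_sysctl *ctx)
    {
            if (!is_tcp_mem(ctx))
                    return 1;       /* allow sysctls we don't care about */
            /* ... inspect the tcp_mem sysctl here ... */
            return 1;
    }

    char _license[] SEC("license") = "GPL";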
2020-09-10  selftests: bpf: Test iterating a sockmap  (Lorenz Bauer; 3 files, -0/+55)
Add a test that exercises a basic sockmap / sockhash iteration. For now we simply count the number of elements seen. Once sockmap update from iterators works we can extend this to perform a full copy. Signed-off-by: Lorenz Bauer <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
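A rough sketch of what such an iterator program looks like (assuming the 'iter/sockmap' section and bpf_iter__sockmap context added by this series; names are illustrative):

    #include "vmlinux.h"        /* provides struct bpf_iter__sockmap and struct sock */
    #include <bpf/bpf_helpers.h>

    __u32 elems = 0;            /* read back by userspace via the skeleton */

    SEC("iter/sockmap")
    int count_elems(struct bpf_iter__sockmap *ctx)
    {
            struct sock *sk = ctx->sk;

            /* the iterator visits every slot; count only the occupied ones */
            if (sk)
                    elems++;
            return 0;
    }

    char _license[] SEC("license") = "GPL";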
2020-09-09  selftests/bpf: Fix test_sysctl_loop{1, 2} failure due to clang change  (Yonghong Song; 2 files, -4/+4)
Andrii reported that with the latest clang, when building selftests, we have errors like:

    error: progs/test_sysctl_loop1.c:23:16: in function sysctl_tcp_mem i32 (%struct.bpf_sysctl*):
    Looks like the BPF stack limit of 512 bytes is exceeded.
    Please move large on stack variables into BPF per-cpu array map.

The error is triggered by the following LLVM patch: https://reviews.llvm.org/D87134

For example, the following code is from test_sysctl_loop1.c:

    static __always_inline int is_tcp_mem(struct bpf_sysctl *ctx)
    {
        volatile char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string";
        ...
    }

Without the above LLVM patch, the compiler optimized the load of the string (59 bytes long) into 7 64-bit loads, one 8-bit load, and one 16-bit load, occupying 64 bytes of stack. With the above LLVM patch, the compiler only uses 8-bit loads, but the subregister is 32-bit, so the stack requirement becomes 4 * 59 = 236 bytes. Together with other stuff on the stack, the total stack size exceeds 512 bytes, hence the compiler complains and quits.

To fix the issue, removing the "volatile" keyword or changing "volatile" to "const"/"static const" does not work: the string is put in the .rodata.str1.1 section, which libbpf does not process, and it errors out with

    libbpf: elf: skipping unrecognized data section(6) .rodata.str1.1
    libbpf: prog 'sysctl_tcp_mem': bad map relo against '.L__const.is_tcp_mem.tcp_mem_name' in section '.rodata.str1.1'

Defining the string const as a global variable fixes the issue, as it puts the string constant in the '.rodata' section, which is recognized by libbpf. In the future, when libbpf can process '.rodata.str*.*' properly, the global definition can be changed back to a local definition.

Defining tcp_mem_name as a global, however, triggered a verifier failure:

    ./test_progs -n 7/21
    libbpf: load bpf program failed: Permission denied
    libbpf: -- BEGIN DUMP LOG ---
    libbpf: invalid stack off=0 size=1
    verification time 6975 usec
    stack depth 160+64
    processed 889 insns (limit 1000000) max_states_per_insn 4 total_states 14 peak_states 14 mark_read 10
    libbpf: -- END LOG --
    libbpf: failed to load program 'sysctl_tcp_mem'
    libbpf: failed to load object 'test_sysctl_loop2.o'
    test_bpf_verif_scale:FAIL:114
    #7/21 test_sysctl_loop2.o:FAIL

This actually exposed a bpf program bug. In test_sysctl_loop{1,2}, we have code like

    const char tcp_mem_name[] = "<...long string...>";
    ...
    char name[64];
    ...
    for (i = 0; i < sizeof(tcp_mem_name); ++i)
        if (name[i] != tcp_mem_name[i])
            return 0;

In the above code, if sizeof(tcp_mem_name) > 64, the name[i] access may be out of bounds. sizeof(tcp_mem_name) is 59 for test_sysctl_loop1.c and 79 for test_sysctl_loop2.c. Without the promotion-to-global change, the old compiler generated code where the overflowed stack access was actually filled with a valid value, thus hiding the bpf program bug. With the promotion-to-global change, the code is different; more specifically, the previous loading of constants to the stack is gone, "name" occupies stack[-64:0], and the overflowing access triggers a verifier error. To fix the issue, adjust the "name" buffer size properly.

Reported-by: Andrii Nakryiko <[email protected]> Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-09-08  selftests/bpf: Add test for map_ptr arithmetic  (Yonghong Song; 1 file, -1/+9)
Change selftest map_ptr_kern.c by disabling inlining for one of the subtests, which will fail the test without the previous verifier change. Also add verifier tests for both "map_ptr += scalar" and "scalar += map_ptr" arithmetic. Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-09-03  selftests/bpf: Add bpf_{update, delete}_map_elem in hashmap iter program  (Yonghong Song; 1 file, -0/+15)
Added bpf_map_{update,delete}_elem() calls on the very map element the iter program is visiting. Due to rcu protection, the visited map elements, although stale, should still contain correct values.

    $ ./test_progs -n 4/18
    #4/18 bpf_hash_map:OK
    #4 bpf_iter:OK
    Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
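A sketch of the idea (map layout and names are illustrative, not the actual test's):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_HASH);
            __uint(max_entries, 100);
            __type(key, __u64);
            __type(value, __u64);
    } hashmap1 SEC(".maps");

    SEC("iter/bpf_map_elem")
    int dump_bpf_hash_map(struct bpf_iter__bpf_map_elem *ctx)
    {
            __u64 *key = ctx->key;
            __u64 *val = ctx->value;

            if (!key || !val)
                    return 0;

            /* mutate the very element being visited; the visited copy stays
             * readable (although stale) thanks to rcu protection
             */
            bpf_map_delete_elem(&hashmap1, key);
            bpf_map_update_elem(&hashmap1, key, val, 0 /* BPF_ANY */);
            return 0;
    }

    char _license[] SEC("license") = "GPL";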
2020-09-03  selftests/bpf: Add __noinline variant of cls_redirect selftest  (Andrii Nakryiko; 2 files, -49/+58)
As one of the most complicated and close-to-real-world programs, cls_redirect is a good candidate to exercise libbpf's logic of handling bpf2bpf calls. So add a variant using explicit __noinline for the majority of functions except a few of the most basic ones. If those few functions are inlined, the verifier starts to complain about the program instruction limit of 1mln instructions being exceeded, most probably due to the instruction overhead of doing a sub-program call. Convert the user-space part of the selftest to have two sub-tests: with and without inlining. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Cc: Lorenz Bauer <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-09-03  selftests/bpf: Modernize xdp_noinline test w/ skeleton and __noinline  (Andrii Nakryiko; 1 file, -10/+26)
Update xdp_noinline to use BPF skeleton and force __noinline on helper sub-programs. Also, split existing logic into v4- and v6-only to complicate sub-program calling patterns (partially overlapped sets of functions for entry-level BPF programs). Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-09-03  selftests/bpf: Add subprogs to pyperf, strobemeta, and l4lb_noinline tests  (Andrii Nakryiko; 5 files, -32/+65)
Add use of non-inlined subprogs to a few bigger selftests to exercise libbpf's bpf2bpf handling logic. Also split the l4lb_all selftest into two sub-tests. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-09-03  selftests/bpf: Add selftest for multi-prog sections and bpf-to-bpf calls  (Andrii Nakryiko; 1 file, -0/+103)
Add a selftest exercising bpf-to-bpf subprogram calls, as well as multiple entry-point BPF programs per section. Also make sure that BPF CO-RE works for such setups, both for sub-programs and for multi-entry sections. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
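A sketch of the shape being tested, two entry-point programs sharing the same ELF section and both calling a common non-inlined subprogram (section and names are illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* __noinline forces a real bpf2bpf call instead of inlining */
    static __noinline __u32 get_tgid(void)
    {
            return bpf_get_current_pid_tgid() >> 32;
    }

    __u32 res1 = 0;
    __u32 res2 = 0;

    /* two entry points live in the same "raw_tp/sys_enter" section */
    SEC("raw_tp/sys_enter")
    int prog1(void *ctx)
    {
            res1 = get_tgid();
            return 0;
    }

    SEC("raw_tp/sys_enter")
    int prog2(void *ctx)
    {
            res2 = get_tgid();
            return 0;
    }

    char _license[] SEC("license") = "GPL";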
2020-09-02  selftests/bpf: Test task_file iterator without visiting pthreads  (Yonghong Song; 1 file, -1/+9)
Modified the existing bpf_iter_test_file.c program to check whether all accessed files are from the main thread or not.

    $ ./test_progs -n 4
    ...
    #4/7 task_file:OK
    ...
    #4 bpf_iter:OK
    Summary: 1/24 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-31  bpf: Remove bpf_lsm_file_mprotect from sleepable list.  (Alexei Starovoitov; 1 file, -17/+17)
Technically the bpf programs can sleep while attached to bpf_lsm_file_mprotect, but such programs need to access user memory. So they're in might_fault() category. Which means they cannot be called from file_mprotect lsm hook that takes write lock on mm->mmap_lock. Adjust the test accordingly. Also add might_fault() to __bpf_prog_enter_sleepable() to catch such deadlocks early. Fixes: 1e6c62a88215 ("bpf: Introduce sleepable BPF programs") Fixes: e68a144547fc ("selftests/bpf: Add sleepable tests") Reported-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-28  selftests/bpf: Add sleepable tests  (Alexei Starovoitov; 2 files, -2/+71)
Modify a few tests to sanity test sleepable bpf functionality. Running 'bench trig-fentry-sleep' vs 'bench trig-fentry' and 'perf report':

    sleepable with SRCU:
       3.86% bench  [k] __srcu_read_unlock
       3.22% bench  [k] __srcu_read_lock
       0.92% bench  [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
       0.50% bench  [k] bpf_trampoline_10297
       0.26% bench  [k] __bpf_prog_exit_sleepable
       0.21% bench  [k] __bpf_prog_enter_sleepable

    sleepable with RCU_TRACE:
       0.79% bench  [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
       0.72% bench  [k] bpf_trampoline_10381
       0.31% bench  [k] __bpf_prog_exit_sleepable
       0.29% bench  [k] __bpf_prog_enter_sleepable

    non-sleepable with RCU:
       0.88% bench  [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry
       0.84% bench  [k] bpf_trampoline_10297
       0.13% bench  [k] __bpf_prog_enter
       0.12% bench  [k] __bpf_prog_exit

Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: KP Singh <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-28  bpf: selftests: Add test for different inner map size  (Martin KaFai Lau; 1 file, -0/+31)
This patch tests that the inner map size can be different for reuseport_sockarray but has to be the same for arraymap. A new subtest "diff_size" is added for this. The existing test is moved to a subtest "lookup_update". Signed-off-by: Martin KaFai Lau <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-26  selftests/bpf: Test for map update access from within EXT programs  (Udip Pant; 2 files, -0/+74)
This adds further tests to ensure access permissions and restrictions are applied properly for some map types such as sock-map. It also adds another negative test to assert that static functions cannot be replaced. In 'unreliable' mode it still fails with the error 'tracing progs cannot use bpf_spin_lock yet', even with the change in the verifier. Signed-off-by: Udip Pant <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-26  selftests/bpf: Test for checking return code for the extended prog  (Udip Pant; 1 file, -0/+19)
This adds a test to enforce the same return-code check for the extended prog as is enforced for the target program. It asserts failure for a return code that is permitted without the patch in this series but restricted after it is applied. Signed-off-by: Udip Pant <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-26  selftests/bpf: Add test for freplace program with write access  (Udip Pant; 2 files, -0/+47)
This adds a selftest that tests the behavior when a freplace target program attempts to make a write access on a packet. The expectation is that the read or write access is granted based on the program type of the linked program and not on its own type (which is, e.g., BPF_PROG_TYPE_EXT). This test fails without the associated patch on the verifier. Signed-off-by: Udip Pant <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-25  selftests/bpf: Add test for d_path helper  (Jiri Olsa; 1 file, -0/+58)
Adding a test for the d_path helper, which is pretty much copied from Wenbo Zhang's test for bpf_get_fd_path, which never made it in. The test does fstat/close on several fd types, and verifies that the d_path helper works on kernel probes for the vfs_getattr/filp_close functions. Original-patch-by: Wenbo Zhang <[email protected]> Signed-off-by: Jiri Olsa <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
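A minimal sketch of using the helper on one of those probes (buffer size and names are illustrative):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char path[64] = {};         /* read back by userspace for verification */

    SEC("fentry/filp_close")
    int BPF_PROG(prog_close, struct file *file)
    {
            /* resolve the full path of the file being closed */
            bpf_d_path(&file->f_path, path, sizeof(path));
            return 0;
    }

    char _license[] SEC("license") = "GPL";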
2020-08-25  bpf: Add selftests for local_storage  (KP Singh; 1 file, -0/+140)
inode_local_storage:
 * Hook to the file_open and inode_unlink LSM hooks.
 * Create and unlink a temporary file.
 * Store some information in the inode's bpf_local_storage during file_open.
 * Verify that this information exists when the file is unlinked.

sk_local_storage:
 * Hook to the socket_post_create and socket_bind LSM hooks.
 * Open and bind a socket and set the sk_storage in the socket_post_create hook using the start_server helper.
 * Verify if the information is set in the socket_bind hook.

Signed-off-by: KP Singh <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
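A rough sketch of the inode part (map value and names are illustrative, not the test's exact code):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    struct {
            __uint(type, BPF_MAP_TYPE_INODE_STORAGE);
            __uint(map_flags, BPF_F_NO_PREALLOC);
            __type(key, int);
            __type(value, __u32);
    } inode_storage_map SEC(".maps");

    SEC("lsm/file_open")
    int BPF_PROG(file_open_hook, struct file *file)
    {
            __u32 *value;

            /* create (or look up) this inode's local storage and stash a marker;
             * the inode_unlink hook can later verify the marker is still there
             */
            value = bpf_inode_storage_get(&inode_storage_map, file->f_inode, 0,
                                          BPF_LOCAL_STORAGE_GET_F_CREATE);
            if (value)
                    *value = 0xdead;
            return 0;
    }

    char _license[] SEC("license") = "GPL";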
2020-08-25  bpf: Renames in preparation for bpf_local_storage  (KP Singh; 1 file, -3/+3)
A purely mechanical change to split the renaming from the actual generalization.

Flags/consts:
 SK_STORAGE_CREATE_FLAG_MASK -> BPF_LOCAL_STORAGE_CREATE_FLAG_MASK
 BPF_SK_STORAGE_CACHE_SIZE   -> BPF_LOCAL_STORAGE_CACHE_SIZE
 MAX_VALUE_SIZE              -> BPF_LOCAL_STORAGE_MAX_VALUE_SIZE

Structs:
 bucket              -> bpf_local_storage_map_bucket
 bpf_sk_storage_map  -> bpf_local_storage_map
 bpf_sk_storage_data -> bpf_local_storage_data
 bpf_sk_storage_elem -> bpf_local_storage_elem
 bpf_sk_storage      -> bpf_local_storage

The "sk" member in bpf_local_storage is also updated to "owner" in preparation for changing the type to void * in a subsequent patch.

Functions:
 selem_linked_to_sk             -> selem_linked_to_storage
 selem_alloc                    -> bpf_selem_alloc
 __selem_unlink_sk              -> bpf_selem_unlink_storage_nolock
 __selem_link_sk                -> bpf_selem_link_storage_nolock
 selem_unlink_sk                -> __bpf_selem_unlink_storage
 sk_storage_update              -> bpf_local_storage_update
 __sk_storage_lookup            -> bpf_local_storage_lookup
 bpf_sk_storage_map_free        -> bpf_local_storage_map_free
 bpf_sk_storage_map_alloc       -> bpf_local_storage_map_alloc
 bpf_sk_storage_map_alloc_check -> bpf_local_storage_map_alloc_check
 bpf_sk_storage_map_check_btf   -> bpf_local_storage_map_check_btf

Signed-off-by: KP Singh <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-24  bpf: selftests: Tcp header options  (Martin KaFai Lau; 2 files, -0/+948)
This patch adds tests for the new bpf tcp header option feature.

test_tcp_hdr_options.c:
 - It tests header option writing and parsing in the 3WHS: regular connection establishment, fastopen, and syncookie.
 - In syncookie, the passive side's bpf prog asks the active side to resend its bpf header option by specifying a RESEND bit in the outgoing SYNACK. handle_active_estab() and write_nodata_opt() have some details.
 - handle_passive_estab() has comments on fastopen.
 - It also has a test for header writing and parsing in the FIN packet.
 - Most of the tests write an experimental option 254 with magic 0xeB9F.
 - The no_exprm_estab() test also writes a regular TCP option without any magic.

test_misc_tcp_options.c:
 - It is a one-directional test. The active side writes options and the passive side parses them. The focus is to exercise the new helpers and API.
 - Testing the new helpers: bpf_load_hdr_opt() and bpf_store_hdr_opt().
 - Testing bpf_getsockopt(TCP_BPF_SYN).
 - Negative tests for the above helpers.
 - Testing sock_ops->skb_data.

Signed-off-by: Martin KaFai Lau <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
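A hedged sketch of the write side (struct layout and names are illustrative; the callbacks only fire once the BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG has been enabled with bpf_sock_ops_cb_flags_set()):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    struct tcp_exprm_opt {
            __u8  kind;
            __u8  len;
            __u16 magic;
            __u8  data[4];
    } __attribute__((packed));

    SEC("sockops")
    int write_exprm_opt(struct bpf_sock_ops *skops)
    {
            struct tcp_exprm_opt opt = {
                    .kind  = 254,                   /* experimental option */
                    .len   = sizeof(opt),
                    .magic = bpf_htons(0xeB9F),
            };

            switch (skops->op) {
            case BPF_SOCK_OPS_HDR_OPT_LEN_CB:
                    /* first callback: reserve room for the option */
                    bpf_reserve_hdr_opt(skops, sizeof(opt), 0);
                    break;
            case BPF_SOCK_OPS_WRITE_HDR_OPT_CB:
                    /* second callback: write the option into the header */
                    bpf_store_hdr_opt(skops, &opt, sizeof(opt), 0);
                    break;
            }
            return 1;
    }

    char _license[] SEC("license") = "GPL";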
2020-08-21  selftests: bpf: Test sockmap update from BPF  (Lorenz Bauer; 2 files, -0/+71)
Add a test which copies a socket from a sockmap into another sockmap or sockhash. This exercises bpf_map_update_elem support from BPF context. Compare the socket cookies from source and destination to ensure that the copy succeeded. Also check that the verifier rejects map_update from unsafe contexts. Signed-off-by: Lorenz Bauer <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
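A rough sketch of that copy operation (map definitions and the program section are illustrative, not necessarily the test's exact code):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_SOCKMAP);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, __u64);
    } src SEC(".maps");

    struct {
            __uint(type, BPF_MAP_TYPE_SOCKHASH);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, __u64);
    } dst SEC(".maps");

    SEC("classifier")
    int copy_sock_map(struct __sk_buff *skb)
    {
            struct bpf_sock *sk;
            __u32 key = 0;
            int failed = 0;

            sk = bpf_map_lookup_elem(&src, &key);
            if (!sk)
                    return 0;

            /* insert the looked-up socket into the destination map from BPF */
            if (bpf_map_update_elem(&dst, &key, sk, 0))
                    failed = 1;

            bpf_sk_release(sk);
            return failed;
    }

    char _license[] SEC("license") = "GPL";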
2020-08-20  selftests/bpf: List newest Clang built-ins needed for some CO-RE selftests  (Andrii Nakryiko; 1 file, -1/+3)
Record which built-ins are optional and needed for some of recent BPF CO-RE subtests. Document Clang diff that fixed corner-case issue with __builtin_btf_type_id(). Suggested-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-19  selftests/bpf: Add tests for ENUMVAL_EXISTS/ENUMVAL_VALUE relocations  (Andrii Nakryiko; 6 files, -0/+168)
Add tests validating existence and value relocations for enum value-based relocations. If __builtin_preserve_enum_value() built-in is not supported, skip tests. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
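These relocations are typically exercised through the bpf_core_enum_value_exists() and bpf_core_enum_value() macros from bpf_core_read.h; a hedged sketch (the probed enum and value are just an example):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    __u64 ringbuf_map_type = 0;
    bool has_ringbuf = false;

    SEC("raw_tp/sys_enter")
    int probe_enums(void *ctx)
    {
            /* ENUMVAL_EXISTS: does the target kernel define this enumerator? */
            if (bpf_core_enum_value_exists(enum bpf_map_type, BPF_MAP_TYPE_RINGBUF)) {
                    has_ringbuf = true;
                    /* ENUMVAL_VALUE: its numeric value on the target kernel */
                    ringbuf_map_type = bpf_core_enum_value(enum bpf_map_type,
                                                           BPF_MAP_TYPE_RINGBUF);
            }
            return 0;
    }

    char _license[] SEC("license") = "GPL";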
2020-08-19  selftests/bpf: Add CO-RE relo test for TYPE_ID_LOCAL/TYPE_ID_TARGET  (Andrii Nakryiko; 5 files, -14/+160)
Add tests for BTF type ID relocations. To allow testing this, enhance core_relo.c test runner to allow dynamic initialization of test inputs. If Clang doesn't have necessary support for new functionality, test is skipped. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-19  selftests/bpf: Test TYPE_EXISTS and TYPE_SIZE CO-RE relocations  (Andrii Nakryiko; 8 files, -1/+342)
Add selftests for TYPE_EXISTS and TYPE_SIZE relocations, testing correctness of relocations and handling of type compatibility/incompatibility. If __builtin_preserve_type_info() is not supported by the compiler, the tests are skipped. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-18  selftests/bpf: Add test validating failure on ambiguous relocation value  (Andrii Nakryiko; 2 files, -0/+29)
Add test simulating ambiguous field size relocation, while fields themselves are at the exact same offset. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-18  selftests/bpf: Fix test_vmlinux test to use bpf_probe_read_user()  (Andrii Nakryiko; 1 file, -3/+9)
The test is reading UAPI kernel structure from user-space. So it doesn't need CO-RE relocations and has to use bpf_probe_read_user(). Fixes: acbd06206bbb ("selftests/bpf: Add vmlinux.h selftest exercising tracing of syscalls") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-13  selftests/bpf: Make test_varlen work with 32-bit user-space arch  (Andrii Nakryiko; 1 file, -3/+3)
Despite bpftool generating data section memory layout that will work for 32-bit architectures on user-space side, BPF programs should be careful to not use ambiguous types like `long`, which have different size in 32-bit and 64-bit environments. Fix that in test by using __u64 explicitly, which is a recommended approach anyway. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-13  selftests/bpf: Correct various core_reloc 64-bit assumptions  (Andrii Nakryiko; 1 file, -32/+37)
Ensure that types are memory layout- and field alignment-compatible regardless of 32/64-bitness mix of libbpf and BPF architecture. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-08-13  bpf, selftests: Add tests to sock_ops for loading sk  (John Fastabend; 1 file, -0/+21)
Add tests that directly access the sock_ops sk field. Then use it to ensure a bad pointer access will fault if something goes wrong. We do three tests:

The first test ensures that when we read the sock_ops sk pointer into the same register we don't fault as described earlier. Here r9 is chosen as the temp register. The xlated code is,

  36: (7b) *(u64 *)(r1 +32) = r9
  37: (61) r9 = *(u32 *)(r1 +28)
  38: (15) if r9 == 0x0 goto pc+3
  39: (79) r9 = *(u64 *)(r1 +32)
  40: (79) r1 = *(u64 *)(r1 +0)
  41: (05) goto pc+1
  42: (79) r9 = *(u64 *)(r1 +32)

The second test ensures the temp register selection does not collide with an in-use register r9. Shown here, r8 is chosen because r9 is the sock_ops pointer. The xlated code is as follows,

  46: (7b) *(u64 *)(r9 +32) = r8
  47: (61) r8 = *(u32 *)(r9 +28)
  48: (15) if r8 == 0x0 goto pc+3
  49: (79) r8 = *(u64 *)(r9 +32)
  50: (79) r9 = *(u64 *)(r9 +0)
  51: (05) goto pc+1
  52: (79) r8 = *(u64 *)(r9 +32)

And finally, ensure we didn't break the base case where dst_reg does not equal the source register,

  56: (61) r2 = *(u32 *)(r1 +28)
  57: (15) if r2 == 0x0 goto pc+1
  58: (79) r2 = *(u64 *)(r1 +0)

Notice it takes us an extra four instructions when the src reg is the same as the dst reg: one to save the reg, two to restore depending on the branch taken, and a goto to jump over the second restore.

Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Song Liu <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/159718355325.4728.4163036953345999636.stgit@john-Precision-5820-Tower
2020-08-13  bpf, selftests: Add tests for sock_ops load with r9, r8, r7 registers  (John Fastabend; 1 file, -0/+7)
Loads in the sock_ops case when using high registers require extra logic to ensure the correct temporary value is used. We need to ensure the temp register does not use either the src_reg or dst_reg. Let's add an asm test to force the logic to be triggered. The xlated code is here,

  30: (7b) *(u64 *)(r9 +32) = r7
  31: (61) r7 = *(u32 *)(r9 +28)
  32: (15) if r7 == 0x0 goto pc+2
  33: (79) r7 = *(u64 *)(r9 +0)
  34: (63) *(u32 *)(r7 +916) = r8
  35: (79) r7 = *(u64 *)(r9 +32)

Notice r9 and r8 are not used as temp registers and r7 is chosen.

Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Song Liu <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/159718353345.4728.8805043614257933227.stgit@john-Precision-5820-Tower
2020-08-13  bpf, selftests: Add tests for ctx access in sock_ops with single register  (John Fastabend; 1 file, -0/+13)
To verify the fix ("bpf: sock_ops ctx access may stomp registers in corner case") we want to force the compiler to generate the following code when accessing a field with BPF_TCP_SOCK_GET_COMMON,

  r1 = *(u32 *)(r1 + 96) // r1 is the skops ptr

Rather than depend on clang to do this, we add the test with inline asm to the tcpbpf test. This saves us from having to create another runner and ensures that if we break this again test_tcpbpf will crash. With the above code we get the xlated code,

  11: (7b) *(u64 *)(r1 +32) = r9
  12: (61) r9 = *(u32 *)(r1 +28)
  13: (15) if r9 == 0x0 goto pc+4
  14: (79) r9 = *(u64 *)(r1 +32)
  15: (79) r1 = *(u64 *)(r1 +0)
  16: (61) r1 = *(u32 *)(r1 +2348)
  17: (05) goto pc+1
  18: (79) r9 = *(u64 *)(r1 +32)

We also add the normal case where src_reg != dst_reg so we can compare code generation easily from llvm-objdump and ensure that case continues to work correctly. The normal code is xlated to,

  20: (b7) r1 = 0
  21: (61) r1 = *(u32 *)(r3 +28)
  22: (15) if r1 == 0x0 goto pc+2
  23: (79) r1 = *(u64 *)(r3 +0)
  24: (61) r1 = *(u32 *)(r1 +2348)

Where the temp variable is not used.

Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Song Liu <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/159718351457.4728.3295119261717842496.stgit@john-Precision-5820-Tower
2020-08-01  selftests/bpf: Fix spurious test failures in core_retro selftest  (Andrii Nakryiko; 1 file, -0/+13)
The core_retro selftest uses a BPF program that's triggered on sys_enter system-wide, but has no protection from some unrelated process doing a syscall while the selftest is running. This leads to occasional test failures with unexpected PIDs being returned. Fix that by filtering out all processes that are not the test_progs process. Fixes: fcda189a5133 ("selftests/bpf: Add test relying only on CO-RE and no recent kernel features") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: John Fastabend <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
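One way to do such filtering, sketched with illustrative names (the actual fix may differ in details):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    const volatile int my_pid = 0;      /* set by test_progs before loading */

    int res = 0;

    SEC("tp/raw_syscalls/sys_enter")
    int handle_sys_enter(void *ctx)
    {
            int pid = bpf_get_current_pid_tgid() >> 32;

            /* ignore syscalls made by unrelated processes */
            if (pid != my_pid)
                    return 0;

            res = pid;
            return 0;
    }

    char _license[] SEC("license") = "GPL";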
2020-07-31  selftests/bpf: Verify socket storage in cgroup/sock_{create, release}  (Stanislav Fomichev; 1 file, -0/+19)
Augment udp_limit test to set and verify socket storage value. That should be enough to exercise the changes from the previous patch. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Song Liu <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-07-31  selftests/bpf: Test bpf_iter buffer access with negative offset  (Yonghong Song; 1 file, -0/+21)
Commit afbf21dce668 ("bpf: Support readonly/readwrite buffers in verifier") added readonly/readwrite buffer support which is currently used by bpf_iter tracing programs. It has a bug with incorrect parameter ordering which was later fixed by commit f6dfbe31e8fa ("bpf: Fix swapped arguments in calls to check_buffer_access"). This patch adds a test case with a negative offset access which will trigger the error path.

Without commit f6dfbe31e8fa, running the test case in the patch, the error message looks like:

  R1_w=rdwr_buf(id=0,off=0,imm=0) R10=fp0
  ; value_sum += *(__u32 *)(value - 4);
  2: (61) r1 = *(u32 *)(r1 -4)
  R1 invalid (null) buffer access: off=-4, size=4

With the above commit, the error message looks like:

  R1_w=rdwr_buf(id=0,off=0,imm=0) R10=fp0
  ; value_sum += *(__u32 *)(value - 4);
  2: (61) r1 = *(u32 *)(r1 -4)
  R1 invalid rdwr buffer access: off=-4, size=4

Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Song Liu <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-07-28  selftests/bpf: Add new bpf_iter context structs to fix build on old kernels  (Andrii Nakryiko; 1 file, -0/+18)
Add bpf_iter__bpf_map_elem and bpf_iter__bpf_sk_storage_map to bpf_iter.h. Fixes: 3b1c420bd882 ("selftests/bpf: Add a test for bpf sk_storage_map iterator") Fixes: 2a7c2fff7dd6 ("selftests/bpf: Add test for bpf hash map iterators") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-07-25  selftests/bpf: Add BPF XDP link selftests  (Andrii Nakryiko; 1 file, -0/+12)
Add a selftest validating all the attachment logic around BPF XDP link. Also test link updates and the get_obj_info() APIs. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-07-25  selftests/bpf: Test CGROUP_STORAGE behavior on shared egress + ingress  (YiFei Zhu; 2 files, -2/+71)
This mirrors the original egress-only test. The cgroup_storage is now extended to have two packet counters, one for egress and one for ingress. We also extend it to have two egress programs to test that egress will always share with other egress programs in the same cgroup. The behavior of the counters is exactly the same as in the original egress-only test. The test is split into two: one "isolated" test where the key type is struct bpf_cgroup_storage_key, which contains the attach type, so programs of different attach types will see different storages; and one "shared" test where the key type is u64, so programs of different attach types will see the same storage if they are attached to the same cgroup. Signed-off-by: YiFei Zhu <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/c756f5f1521227b8e6e90a453299dda722d7324d.1595565795.git.zhuyifei@google.com
2020-07-25  selftests/bpf: Test CGROUP_STORAGE map can't be used by multiple progs  (YiFei Zhu; 3 files, -3/+64)
The current assumption is that the lifetime of a cgroup storage is tied to the program's attachment. The storage is created in cgroup_bpf_attach and released upon cgroup_bpf_detach and cgroup_bpf_release. Because the current semantics is that each attachment gets a completely independent cgroup storage, and you can have multiple programs attached to the same (cgroup, attach type) pair, which is the key of the CGROUP_STORAGE map, looking up the map with this pair could yield multiple storages, and that is not permitted. Therefore, the kernel verifier checks that two programs cannot share the same CGROUP_STORAGE map, even if they have different expected attach types, considering that the actual attach type does not always have to be equal to the expected attach type. The test creates a CGROUP_STORAGE map and makes it shared across two different programs, one cgroup_skb/egress and one cgroup_skb/ingress. It asserts that the two programs cannot both be loaded, due to verifier failure for the above reason. Signed-off-by: YiFei Zhu <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/30a6b0da67ae6b0296c4d511bfb19c5f3d035916.1595565795.git.zhuyifei@google.com
2020-07-25  selftests/bpf: Add test for CGROUP_STORAGE map on multiple attaches  (YiFei Zhu; 1 file, -0/+30)
This test creates a parent cgroup, and a child of that cgroup. It attaches a cgroup_skb/egress program that simply counts packets, to a global variable (ARRAY map), and to a CGROUP_STORAGE map. The program is first attached to the parent cgroup only, then to parent and child. The test case sends a message within the child cgroup, and because the program is inherited across parent / child cgroups, it will trigger the egress program for both the parent and child, if they exist. The program, when looking up a CGROUP_STORAGE map, uses the cgroup and attach type of the attachment parameters; therefore, the two attachments use different cgroup storages. We assert that all packet counts return what we expect. Signed-off-by: YiFei Zhu <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/5a20206afa4606144691c7caa0d1b997cd60dec0.1595565795.git.zhuyifei@google.com
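A sketch of such a counting program (value layout and names are illustrative): the global counter is shared by every attachment of the program, while the cgroup storage value is per (cgroup, attach type) attachment.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
            __type(key, struct bpf_cgroup_storage_key);
            __type(value, __u64);
    } cgroup_counter SEC(".maps");

    __u64 shared_counter = 0;   /* one value, shared by all attachments */

    SEC("cgroup_skb/egress")
    int count_egress(struct __sk_buff *skb)
    {
            __u64 *per_attach;

            shared_counter++;

            /* cgroup storage: one value per (cgroup, attach type) attachment */
            per_attach = bpf_get_local_storage(&cgroup_counter, 0);
            (*per_attach)++;

            return 1;
    }

    char _license[] SEC("license") = "GPL";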
2020-07-25  selftests/bpf: Add callchain_stackid  (Song Liu; 1 file, -0/+59)
This tests the new helper functions bpf_get_stackid_pe and bpf_get_stack_pe. These two helpers have a different implementation for perf_event with PEBS entries. Signed-off-by: Song Liu <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-07-25  selftests/bpf: Add a test for out of bound rdonly buf access  (Yonghong Song; 1 file, -0/+35)
If the bpf program contains an out-of-bound access w.r.t. a particular map key/value size, the verification will still be okay, i.e., it will be accepted by the verifier. But it will be rejected at link_create time. A test is added here to ensure that a link_create failure does happen if an out-of-bound access happened. $ ./test_progs -n 4 ... #4/23 rdonly-buf-out-of-bound:OK ... Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-07-25  selftests/bpf: Add a test for bpf sk_storage_map iterator  (Yonghong Song; 1 file, -0/+34)
Added one test for bpf sk_storage_map_iterator. $ ./test_progs -n 4 ... #4/22 bpf_sk_storage_map:OK ... Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]