path: root/tools/testing/selftests/bpf
Age | Commit message | Author | Files | Lines
2023-11-18bpf: omit default off=0 and imm=0 in register state logAndrii Nakryiko4-35/+35
Simplify BPF verifier log further by omitting default (and frequently irrelevant) off=0 and imm=0 parts for non-SCALAR_VALUE registers. As can be seen from the fixed tests, this is often visual noise for PTR_TO_CTX registers and even for PTR_TO_PACKET registers. Omitting default values follows the rest of the register state logic: we omit default values to keep the verifier log succinct and to highlight interesting state that deviates from the default one. E.g., we do the same for var_off when it's unknown, which gives no additional information. Acked-by: Eduard Zingerman <[email protected]> Acked-by: Stanislav Fomichev <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-18bpf: emit map name in register state if applicable and availableAndrii Nakryiko1-5/+5
In complicated real-world applications, when debugging a verification error through the verifier log, it is often very useful to see the map name for a PTR_TO_MAP_VALUE register. Usually this needs to be inferred from key/value sizes and maybe by trying to guess the C code location, but it's not always clear. Given the verifier has the name, and it's never too long, let's just emit it for ptr_to_map_key, ptr_to_map_value, and const_ptr_to_map registers. We reshuffle the order a bit, so that map name, key size, and value size appear before offset and immediate values, which seems like a more logical order. Current output: R1_w=map_ptr(map=array_map,ks=4,vs=8,off=0,imm=0) But we'll get rid of the useless off=0 and imm=0 parts in the next patch. Acked-by: Eduard Zingerman <[email protected]> Acked-by: Stanislav Fomichev <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-17bpf: rename BPF_F_TEST_SANITY_STRICT to BPF_F_TEST_REG_INVARIANTSAndrii Nakryiko8-18/+17
Rename verifier internal flag BPF_F_TEST_SANITY_STRICT to more neutral BPF_F_TEST_REG_INVARIANTS. This is a follow up to [0]. A few selftests and veristat need to be adjusted in the same patch as well. [0] https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/ Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-16Merge tag 'net-6.7-rc2' of ↵Linus Torvalds6-17/+127
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net Pull networking fixes from Paolo Abeni: "Including fixes from BPF and netfilter. Current release - regressions: - core: fix undefined behavior in netdev name allocation - bpf: do not allocate percpu memory at init stage - netfilter: nf_tables: split async and sync catchall in two functions - mptcp: fix possible NULL pointer dereference on close Current release - new code bugs: - eth: ice: dpll: fix initial lock status of dpll Previous releases - regressions: - bpf: fix precision backtracking instruction iteration - af_unix: fix use-after-free in unix_stream_read_actor() - tipc: fix kernel-infoleak due to uninitialized TLV value - eth: bonding: stop the device in bond_setup_by_slave() - eth: mlx5: - fix double free of encap_header - avoid referencing skb after free-ing in drop path - eth: hns3: fix VF reset - eth: mvneta: fix calls to page_pool_get_stats Previous releases - always broken: - core: set SOCK_RCU_FREE before inserting socket into hashtable - bpf: fix control-flow graph checking in privileged mode - eth: ppp: limit MRU to 64K - eth: stmmac: avoid rx queue overrun - eth: icssg-prueth: fix error cleanup on failing initialization - eth: hns3: fix out-of-bounds access may occur when coalesce info is read via debugfs - eth: cortina: handle large frames Misc: - selftests: gso: support CONFIG_MAX_SKB_FRAGS up to 45" * tag 'net-6.7-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (78 commits) macvlan: Don't propagate promisc change to lower dev in passthru net: sched: do not offload flows with a helper in act_ct net/mlx5e: Check return value of snprintf writing to fw_version buffer for representors net/mlx5e: Check return value of snprintf writing to fw_version buffer net/mlx5e: Reduce the size of icosq_str net/mlx5: Increase size of irq name buffer net/mlx5e: Update doorbell for port timestamping CQ before the software counter net/mlx5e: Track xmit submission to PTP WQ after populating metadata map net/mlx5e: Avoid referencing skb after free-ing in drop path of mlx5e_sq_xmit_wqe net/mlx5e: Don't modify the peer sent-to-vport rules for IPSec offload net/mlx5e: Fix pedit endianness net/mlx5e: fix double free of encap_header in update funcs net/mlx5e: fix double free of encap_header net/mlx5: Decouple PHC .adjtime and .adjphase implementations net/mlx5: DR, Allow old devices to use multi destination FTE net/mlx5: Free used cpus mask when an IRQ is released Revert "net/mlx5: DR, Supporting inline WQE when possible" bpf: Do not allocate percpu memory at init stage net: Fix undefined behavior in netdev name allocation dt-bindings: net: ethernet-controller: Fix formatting error ...
2023-11-15selftests/bpf: add iter test requiring range x range logicAndrii Nakryiko1-0/+22
Add a simple verifier test that requires deriving reg bounds for one register from another register that's not a constant. This is a realistic example of iterating elements of an array with a fixed maximum number of elements, but a smaller actual number of elements. This small example was the original motivation for doing this whole patch set in the first place. Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-15veristat: add ability to set BPF_F_TEST_SANITY_STRICT flag with -r flagAndrii Nakryiko2-3/+11
Add a new flag -r (--test-sanity), similar to -t (--test-states), to add extra BPF program flags when loading BPF programs. This allows using veristat to easily catch sanity violations in production BPF programs. reg_bounds tests now also enforce the BPF_F_TEST_SANITY_STRICT flag. Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-15selftests/bpf: set BPF_F_TEST_SANITY_STRICT by defaultAndrii Nakryiko6-13/+33
Make sure to set the BPF_F_TEST_SANITY_STRICT program flag by default across most verifier tests (and a bunch of others that set custom prog flags). There are currently two tests that do fail validation, if enforced strictly: verifier_bounds/crossing_64_bit_signed_boundary_2 and verifier_bounds/crossing_32_bit_signed_boundary_2. To accommodate them, we teach test_loader a flag negation: __flag(!<flagname>) will *clear* the specified flag, allowing easy opt-out. We apply __flag(!BPF_F_TEST_SANITY_STRICT) to these two tests. Also sprinkle BPF_F_TEST_SANITY_STRICT everywhere where we already set the test-only BPF_F_TEST_RND_HI32 flag, for completeness. Acked-by: Eduard Zingerman <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
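As a rough, hypothetical sketch (not code from this patch), a selftest program opting out of the strict flag via the negation form described above might look like this, assuming the usual macros from tools/testing/selftests/bpf/progs/bpf_misc.h:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>
    #include "bpf_misc.h"

    SEC("socket")
    __description("hypothetical test opting out of strict sanity checking")
    __success __retval(0)
    __flag(!BPF_F_TEST_SANITY_STRICT)	/* clear the flag that is now set by default */
    int opt_out_example(void *ctx)
    {
    	return 0;
    }

    char _license[] SEC("license") = "GPL";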
2023-11-15selftests/bpf: add randomized reg_bounds testsAndrii Nakryiko1-7/+159
Add random case generation to reg_bounds.c and run them without SLOW_TESTS=1 to increase the chance of BPF CI catching latent issues. Suggested-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-15selftests/bpf: add range x range test to reg_boundsAndrii Nakryiko1-0/+86
Now that the verifier supports range vs range bounds adjustments, validate that by checking each generated range against every other generated range, across all supported operators (everything but JSET). We also add a few cases that were problematic during development, either for the verifier or for the selftest's range tracking implementation. Note that we utilize the same trick of splitting everything into multiple independent parallelizable tests by init_t and cond_t. This brings down verification time in parallel mode from more than 8 hours down to less than 1.5 hours. 106 million cases were successfully validated for the range vs range logic, in addition to about 7 million range vs const cases added in an earlier patch. Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-15selftests/bpf: adjust OP_EQ/OP_NE handling to use subranges for branch takenAndrii Nakryiko1-4/+26
Similar to kernel-side BPF verifier logic enhancements, use 32-bit subrange knowledge for is_branch_taken() logic in reg_bounds selftests. Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Eduard Zingerman <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-15selftests/bpf: BPF register range bounds testerAndrii Nakryiko1-0/+1838
Add a test to validate the BPF verifier's register range bounds tracking logic. The main bulk is a lot of auto-generated tests based on a small set of seed values for lower and upper 32 bits of full 64-bit values. Currently we validate only range vs const comparisons, but the idea is to start validating range over range comparisons in a subsequent patch set. When setting up initial register ranges we treat registers as one of u64/s64/u32/s32 numeric types, and then independently perform conditional comparisons based on a potentially different u64/s64/u32/s32 type. This tests lots of tricky cases of deriving bounds information across different numeric domains. Given there are lots of auto-generated cases, we guard them behind a SLOW_TESTS=1 envvar requirement, and skip them altogether otherwise. With the current full set of upper/lower seed values, all supported comparison operators and all the combinations of u64/s64/u32/s32 number domains, we get about 7.7 million tests, which run in about 35 minutes on my local qemu instance without parallelization. But we also split those tests by init/cond numeric types, which allows relying on test_progs's parallelization of tests with the `-j` option, getting run time down to about 5 minutes on 8 cores. It's still something that shouldn't be run during a normal test_progs run. But we can run it in a reasonable time, and so perhaps a nightly CI test run (once we have it) would be a good option for this. We also add a small set of tricky conditions that came up during development and triggered various bugs or corner cases in either the selftest's reimplementation of range bounds logic or in the verifier's logic itself. These are fast enough to be run as part of a normal test_progs run and are great for quick sanity checking. Let's take a look at test output to understand what's going on: $ sudo ./test_progs -t reg_bounds_crafted #191/1 reg_bounds_crafted/(u64)[0; 0xffffffff] (u64)< 0:OK ... #191/115 reg_bounds_crafted/(u64)[0; 0x17fffffff] (s32)< 0:OK ... #191/137 reg_bounds_crafted/(u64)[0xffffffff; 0x100000000] (u64)== 0:OK Each test case is uniquely and fully described by this generated string. E.g.: "(u64)[0; 0x17fffffff] (s32)< 0". This means that we initialize a register (R6) in such a way that the verifier knows that it can have a value in the [(u64)0; (u64)0x17fffffff] range. Another register (R7) is also set up as u64, but this time as a constant (zero in this case). They are then compared using a 32-bit signed < operation. Resulting TRUE/FALSE branches are evaluated (including cases where it's known that one of the branches will never be taken, in which case we validate that the verifier also determines this to be dead code). The test validates that the verifier's final register state matches the expected state based on the selftest's own reg_state logic, implemented from scratch for cross-checking purposes. These test names can be conveniently used for further debugging, and if -vv verboseness is requested we can get a corresponding verifier log (with mark_precise logs filtered out as irrelevant and distracting). The example below is slightly redacted for brevity, omitting irrelevant register output in some places, marked with [...]. $ sudo ./test_progs -a 'reg_bounds_crafted/(u32)[0; U32_MAX] (s32)< -1' -vv ...
VERIFIER LOG: ======================== func#0 @0 0: R1=ctx(off=0,imm=0) R10=fp0 0: (05) goto pc+2 3: (85) call bpf_get_current_pid_tgid#14 ; R0_w=scalar() 4: (bc) w6 = w0 ; R0_w=scalar() R6_w=scalar(smin=0,smax=umax=4294967295,var_off=(0x0; 0xffffffff)) 5: (85) call bpf_get_current_pid_tgid#14 ; R0_w=scalar() 6: (bc) w7 = w0 ; R0_w=scalar() R7_w=scalar(smin=0,smax=umax=4294967295,var_off=(0x0; 0xffffffff)) 7: (b4) w1 = 0 ; R1_w=0 8: (b4) w2 = -1 ; R2=4294967295 9: (ae) if w6 < w1 goto pc-9 9: R1=0 R6=scalar(smin=0,smax=umax=4294967295,var_off=(0x0; 0xffffffff)) 10: (2e) if w6 > w2 goto pc-10 10: R2=4294967295 R6=scalar(smin=0,smax=umax=4294967295,var_off=(0x0; 0xffffffff)) 11: (b4) w1 = -1 ; R1_w=4294967295 12: (b4) w2 = -1 ; R2_w=4294967295 13: (ae) if w7 < w1 goto pc-13 ; R1_w=4294967295 R7=4294967295 14: (2e) if w7 > w2 goto pc-14 14: R2_w=4294967295 R7=4294967295 15: (bc) w0 = w6 ; [...] R6=scalar(id=1,smin=0,smax=umax=4294967295,var_off=(0x0; 0xffffffff)) 16: (bc) w0 = w7 ; [...] R7=4294967295 17: (ce) if w6 s< w7 goto pc+3 ; R6=scalar(id=1,smin=0,smax=umax=4294967295,smin32=-1,var_off=(0x0; 0xffffffff)) R7=4294967295 18: (bc) w0 = w6 ; [...] R6=scalar(id=1,smin=0,smax=umax=4294967295,smin32=-1,var_off=(0x0; 0xffffffff)) 19: (bc) w0 = w7 ; [...] R7=4294967295 20: (95) exit from 17 to 21: [...] 21: (bc) w0 = w6 ; [...] R6=scalar(id=1,smin=umin=umin32=2147483648,smax=umax=umax32=4294967294,smax32=-2,var_off=(0x80000000; 0x7fffffff)) 22: (bc) w0 = w7 ; [...] R7=4294967295 23: (95) exit from 13 to 1: [...] 1: [...] 1: (b7) r0 = 0 ; R0_w=0 2: (95) exit processed 24 insns (limit 1000000) max_states_per_insn 0 total_states 2 peak_states 2 mark_read 1 ===================== The verifier log above is for the `(u32)[0; U32_MAX] (s32)< -1` use case, where a u32 range is used for initialization, followed by a signed < operator. Note how we use w6/w7 in this case for register initialization (it would be R6/R7 for 64-bit types) and then `if w6 s< w7` for comparison at instruction #17. It would be `if R6 < R7` for a 64-bit unsigned comparison. The above example gives a good impression of the overall structure of the BPF programs generated for reg_bounds tests. In the future, this "framework" can be extended to test not just conditional jumps, but also arithmetic operations. Adding randomized testing is another possibility. Some implementation notes. We basically have our own generics-like operations on numbers, where all the numbers are stored in u64, but how they are interpreted is passed as a runtime argument, enum num_t. Further, `struct range` represents a bounds range, and those are collected together into a minimal `struct reg_state`, which collects range bounds across all four numerical domains: u64, s64, u32, s32. Based on these primitives and `enum op`, representing the possible conditional operations (<, <=, >, >=, ==, !=), there is a set of generic helpers to perform "range arithmetics", which is used to maintain struct reg_state. We simulate what the verifier will do for the reg bounds of the R6 and R7 registers using these range and reg_state primitives. The simulated information is used to determine the branch-taken conclusion and the expected exact register state across all four number domains. The implementation of "range arithmetics" is more generic than what the verifier currently performs: it allows range over range comparisons and adjustments.
This is the intended end goal of this patch set overall, and verifier logic is enhanced in subsequent patches in this series to handle range vs range operations, at which point selftests are extended to validate these conditions as well. For now it's range vs const cases only. Note that tests are split into multiple groups by their numeric types for initialization of ranges and for the comparison operation. This allows using test_progs's -j parallelization to speed up tests, as we now have 16 groups of tests running in parallel. The overall reduction in running time this allows is pretty good: we go down from more than 30 minutes to slightly less than 5 minutes of running time. Acked-by: Eduard Zingerman <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Shung-Hsi Yu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
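To make the branch-taken side of the "range arithmetics" idea concrete, here is a tiny standalone C sketch (illustrative only, not the selftest's code) for the unsigned 64-bit domain: a `<` comparison of a tracked range against a constant is either always taken, never taken, or unknown:

    #include <stdint.h>
    #include <stdio.h>

    /* [min; max] inclusive range in the u64 domain, one of the four numeric
     * domains (u64/s64/u32/s32) the selftest tracks per register */
    struct range { uint64_t min, max; };

    /* 1: branch always taken, -1: never taken, 0: both outcomes possible */
    static int branch_taken_ult(struct range r, uint64_t c)
    {
    	if (r.max < c)
    		return 1;	/* entire range is below the constant */
    	if (r.min >= c)
    		return -1;	/* entire range is at or above the constant */
    	return 0;
    }

    int main(void)
    {
    	struct range r = { 0, 0x17fffffffULL };

    	printf("%d\n", branch_taken_ult(r, 0));			/* -1: r < 0 never holds */
    	printf("%d\n", branch_taken_ult(r, 0x200000000ULL));	/*  1: always below */
    	printf("%d\n", branch_taken_ult(r, 42));		/*  0: could go either way */
    	return 0;
    }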
2023-11-14selftests/bpf: Add selftests for cgroup1 hierarchyYafang Shao2-0/+229
Add selftests for cgroup1 hierarchy. The results are as follows: $ tools/testing/selftests/bpf/test_progs --name=cgroup1_hierarchy #36/1 cgroup1_hierarchy/test_cgroup1_hierarchy:OK #36/2 cgroup1_hierarchy/test_root_cgid:OK #36/3 cgroup1_hierarchy/test_invalid_level:OK #36/4 cgroup1_hierarchy/test_invalid_cgid:OK #36/5 cgroup1_hierarchy/test_invalid_hid:OK #36/6 cgroup1_hierarchy/test_invalid_cgrp_name:OK #36/7 cgroup1_hierarchy/test_invalid_cgrp_name2:OK #36/8 cgroup1_hierarchy/test_sleepable_prog:OK #36 cgroup1_hierarchy:OK Summary: 1/8 PASSED, 0 SKIPPED, 0 FAILED Besides, I also did some stress testing similar to patch #2 in this series, as follows (with CONFIG_PROVE_RCU_LIST enabled): - Continuously mounting and unmounting named cgroups in some tasks, for example: cgrp_name=$1 while true do mount -t cgroup -o none,name=$cgrp_name none /$cgrp_name umount /$cgrp_name done - Continuously running this selftest concurrently: while true; do ./test_progs --name=cgroup1_hierarchy; done They ran successfully without any RCU warnings in dmesg. Signed-off-by: Yafang Shao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-14selftests/bpf: Add a new cgroup helper get_cgroup_hierarchy_id()Yafang Shao2-0/+53
A new cgroup helper function, get_cgroup1_hierarchy_id(), has been introduced to obtain the ID of a cgroup1 hierarchy based on the provided cgroup name. This cgroup name can be obtained from the /proc/self/cgroup file. Signed-off-by: Yafang Shao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
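For orientation, a hypothetical re-implementation of that lookup (the real helper lives in cgroup_helpers.c and may differ): /proc/self/cgroup lines have the form "ID:name=CGRP_NAME:/path", so the hierarchy ID is the leading numeric field of the matching line.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* illustrative sketch, not the selftest helper */
    static int get_cgroup1_hierarchy_id(const char *cgrp_name)
    {
    	char line[256], needle[128];
    	FILE *f = fopen("/proc/self/cgroup", "r");
    	int id = -1;

    	if (!f)
    		return -1;
    	snprintf(needle, sizeof(needle), ":name=%s:", cgrp_name);
    	while (fgets(line, sizeof(line), f)) {
    		if (strstr(line, needle)) {
    			id = atoi(line);	/* hierarchy ID is the first field */
    			break;
    		}
    	}
    	fclose(f);
    	return id;
    }

    int main(void)
    {
    	printf("hierarchy id: %d\n", get_cgroup1_hierarchy_id("selftest"));
    	return 0;
    }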
2023-11-14selftests/bpf: Add a new cgroup helper get_classid_cgroup_id()Yafang Shao2-6/+23
Introduce a new helper function to retrieve the cgroup ID from a net_cls cgroup directory. Signed-off-by: Yafang Shao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-14selftests/bpf: Add parallel support for classidYafang Shao3-9/+13
Include the current pid in the classid cgroup path. This way, different testers relying on classid-based configurations will have distinct classid cgroup directories, enabling them to run concurrently. Additionally, we leverage the current pid as the classid, ensuring unique identification. Signed-off-by: Yafang Shao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-14selftests/bpf: Fix issues in setup_classid_environment()Yafang Shao1-4/+14
If the net_cls subsystem is already mounted, attempting to mount it again in setup_classid_environment() will result in a failure with the error code EBUSY. Despite this, tmpfs will have been successfully mounted at /sys/fs/cgroup/net_cls. Consequently, the /sys/fs/cgroup/net_cls directory will be empty, causing subsequent setup operations to fail. Here's an error log excerpt illustrating the issue when net_cls has already been mounted at /sys/fs/cgroup/net_cls prior to running setup_classid_environment(): - Before that change $ tools/testing/selftests/bpf/test_progs --name=cgroup_v1v2 test_cgroup_v1v2:PASS:server_fd 0 nsec test_cgroup_v1v2:PASS:client_fd 0 nsec test_cgroup_v1v2:PASS:cgroup_fd 0 nsec test_cgroup_v1v2:PASS:server_fd 0 nsec run_test:PASS:skel_open 0 nsec run_test:PASS:prog_attach 0 nsec test_cgroup_v1v2:PASS:cgroup-v2-only 0 nsec (cgroup_helpers.c:248: errno: No such file or directory) Opening Cgroup Procs: /sys/fs/cgroup/net_cls/cgroup.procs (cgroup_helpers.c:540: errno: No such file or directory) Opening cgroup classid: /sys/fs/cgroup/net_cls/cgroup-test-work-dir/net_cls.classid run_test:PASS:skel_open 0 nsec run_test:PASS:prog_attach 0 nsec (cgroup_helpers.c:248: errno: No such file or directory) Opening Cgroup Procs: /sys/fs/cgroup/net_cls/cgroup-test-work-dir/cgroup.procs run_test:FAIL:join_classid unexpected error: 1 (errno 2) test_cgroup_v1v2:FAIL:cgroup-v1v2 unexpected error: -1 (errno 2) (cgroup_helpers.c:248: errno: No such file or directory) Opening Cgroup Procs: /sys/fs/cgroup/net_cls/cgroup.procs #44 cgroup_v1v2:FAIL Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED - After that change $ tools/testing/selftests/bpf/test_progs --name=cgroup_v1v2 #44 cgroup_v1v2:OK Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED Signed-off-by: Yafang Shao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
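The essence of the fix, as a hedged standalone sketch (function name and error handling are illustrative, not the actual cgroup_helpers.c code): treat EBUSY from mount(2) as "already mounted" and keep going instead of failing and leaving an empty directory behind.

    #include <errno.h>
    #include <stdio.h>
    #include <sys/mount.h>

    static int mount_net_cls(const char *path)
    {
    	if (mount("net_cls", path, "cgroup", 0, "net_cls") == 0)
    		return 0;
    	if (errno == EBUSY)
    		return 0;	/* net_cls was already mounted here: reuse it */
    	fprintf(stderr, "mount net_cls failed: %d\n", errno);
    	return -errno;
    }

    int main(void)
    {
    	return mount_net_cls("/sys/fs/cgroup/net_cls") ? 1 : 0;
    }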
2023-11-13selftests/bpf: Add assert for user stacks in test_task_stackJordan Rome2-0/+7
This is a follow up to: commit b8e3a87a627b ("bpf: Add crosstask check to __bpf_get_stack"). This test ensures that the task iterator only gets a single user stack (for the current task). Signed-off-by: Jordan Rome <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Stanislav Fomichev <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2023-11-12Merge tag 'loongarch-6.7' of ↵Linus Torvalds6-6/+12
git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson Pull LoongArch updates from Huacai Chen: - support PREEMPT_DYNAMIC with static keys - relax memory ordering for atomic operations - support BPF CPU v4 instructions for LoongArch - some build and runtime warning fixes * tag 'loongarch-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson: selftests/bpf: Enable cpu v4 tests for LoongArch LoongArch: BPF: Support signed mod instructions LoongArch: BPF: Support signed div instructions LoongArch: BPF: Support 32-bit offset jmp instructions LoongArch: BPF: Support unconditional bswap instructions LoongArch: BPF: Support sign-extension mov instructions LoongArch: BPF: Support sign-extension load instructions LoongArch: Add more instruction opcodes and emit_* helpers LoongArch/smp: Call rcutree_report_cpu_starting() earlier LoongArch: Relax memory ordering for atomic operations LoongArch: Mark __percpu functions as always inline LoongArch: Disable module from accessing external data directly LoongArch: Support PREEMPT_DYNAMIC with static keys
2023-11-11selftests/bpf: Fix pyperf180 compilation failure with clang18Yonghong Song1-0/+22
With latest clang18 (main branch of llvm-project repo), when building bpf selftests, [~/work/bpf-next (master)]$ make -C tools/testing/selftests/bpf LLVM=1 -j The following compilation error happens: fatal error: error in backend: Branch target out of insn range ... Stack dump: 0. Program arguments: clang -g -Wall -Werror -D__TARGET_ARCH_x86 -mlittle-endian -I/home/yhs/work/bpf-next/tools/testing/selftests/bpf/tools/include -I/home/yhs/work/bpf-next/tools/testing/selftests/bpf -I/home/yhs/work/bpf-next/tools/include/uapi -I/home/yhs/work/bpf-next/tools/testing/selftests/usr/include -idirafter /home/yhs/work/llvm-project/llvm/build.18/install/lib/clang/18/include -idirafter /usr/local/include -idirafter /usr/include -Wno-compare-distinct-pointer-types -DENABLE_ATOMICS_TESTS -O2 --target=bpf -c progs/pyperf180.c -mcpu=v3 -o /home/yhs/work/bpf-next/tools/testing/selftests/bpf/pyperf180.bpf.o 1. <eof> parser at end of file 2. Code generation ... The compilation failure only happens with cpu=v2 and cpu=v3. cpu=v4 is okay since cpu=v4 supports 32-bit branch target offsets. The above failure is due to upstream llvm patch [1], where some inlining behavior changed in clang18. Previously, all 180 loop iterations were fully unrolled. To work around the issue, the bpf macro __BPF_CPU_VERSION__ (implemented in clang18 recently) is used to avoid unrolling changes if cpu=v4. If __BPF_CPU_VERSION__ is not available and the compiler is clang18, the unrolling amount is unconditionally reduced. [1] https://github.com/llvm/llvm-project/commit/1a2e77cf9e11dbf56b5720c607313a566eebb16e Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Tested-by: Alan Maguire <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
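The shape of the workaround, sketched with the preprocessor (the __BPF_CPU_VERSION__ macro name matches the commit text; the reduced iteration count below is illustrative, not the value used in pyperf180.c):

    /* keep the historical full unroll only where 32-bit branch offsets exist */
    #if defined(__BPF_CPU_VERSION__) && __BPF_CPU_VERSION__ >= 4
    #define UNROLL_ITERS 180	/* cpu=v4: full unroll stays within branch range */
    #elif defined(__clang__) && __clang_major__ >= 18
    #define UNROLL_ITERS 120	/* clang18 without the macro: unroll less (illustrative) */
    #else
    #define UNROLL_ITERS 180
    #endif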
2023-11-09selftests/bpf: add more test cases for check_cfg()Andrii Nakryiko1-0/+62
Add a few more simple cases to validate proper privileged vs unprivileged loop detection behavior. conditional_loop2 is the one reported by Hao Sun that triggered this set of fixes. Acked-by: Eduard Zingerman <[email protected]> Suggested-by: Hao Sun <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09bpf: fix control-flow graph checking in privileged modeAndrii Nakryiko2-6/+9
When a BPF program is verified in privileged mode, the BPF verifier allows bounded loops. This means that from the CFG point of view there are definitely some back-edges. The original commit adjusted check_cfg() logic to not detect back-edges in the control flow graph if they result from conditional jumps, with the idea that the subsequent full BPF verification process will determine whether such loops are bounded or not, and either accept or reject the BPF program. At least that's my reading of the intent. Unfortunately, the implementation of this idea doesn't work correctly in all possible situations. A conditional jump might not result in an immediate back-edge; just a few unconditional instructions later we can arrive at a back-edge. In such situations check_cfg() would reject the BPF program even in privileged mode, despite it possibly being a bounded loop. The next patch adds one simple program demonstrating such a scenario. To keep things simple, instead of trying to detect back edges in privileged mode, just assume every back edge is valid and let subsequent BPF verification prove or reject bounded loops. Note a few test changes. For unknown reasons, we have a few tests that are specified to detect a back-edge in privileged mode, but looking at their code it seems like the right outcome is passing check_cfg() and letting subsequent verification make a decision about bounded or unbounded looping. The bounded recursion case is also interesting. The example should pass, as recursion is limited to just a few levels and so we never reach the maximum number of nested frames and never exhaust the maximum stack depth. But the way the max stack depth logic works today, it falsely detects this as exceeding the max nested frame count. This patch series doesn't attempt to fix this orthogonal problem, so we just adjust the expected verifier failure. Suggested-by: Alexei Starovoitov <[email protected]> Fixes: 2589726d12a1 ("bpf: introduce bounded loops") Reported-by: Hao Sun <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09selftests/bpf: add edge case backtracking logic testAndrii Nakryiko1-0/+40
Add a dedicated selftest that tries to set up conditions to have a state with the same first and last instruction index, which actually is a loop 3->4->1->2->3. This confuses mark_chain_precision() if the verifier doesn't take jump history into account. Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09bpf: handle ldimm64 properly in check_cfg()Andrii Nakryiko1-4/+4
ldimm64 instructions are 16 bytes long, and so have to be handled appropriately in check_cfg(), just like the rest of the BPF verifier does. This has implications in three places: - when determining the next instruction for non-jump instructions; - when determining the next instruction for callback address ldimm64 instructions (in visit_func_call_insn()); - when checking for unreachable instructions, where the second half of ldimm64 is expected to be unreachable; We also take this as an opportunity to report jumps into the middle of ldimm64, and adjust a few test_verifier tests accordingly. Acked-by: Eduard Zingerman <[email protected]> Reported-by: Hao Sun <[email protected]> Fixes: 475fb78fbf48 ("bpf: verifier (add branch/goto checks)") Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
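For readers unfamiliar with the encoding, a minimal sketch of size-aware stepping (illustrative, not the verifier's check_cfg() code): ldimm64 (BPF_LD | BPF_IMM | BPF_DW) occupies two struct bpf_insn slots, so graph traversal must advance by two for it.

    #include <stdbool.h>
    #include <linux/bpf.h>

    /* illustrative helper: index of the next real instruction after insns[i] */
    static int next_insn_idx(const struct bpf_insn *insns, int i)
    {
    	bool is_ldimm64 = insns[i].code == (BPF_LD | BPF_IMM | BPF_DW);

    	return i + (is_ldimm64 ? 2 : 1);	/* second slot of ldimm64 is not an insn */
    }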
2023-11-09selftests: bpf: xskxceiver: ksft_print_msg: fix format type errorAnders Roxell1-7/+12
Cross-building selftests/bpf for arm64, format-specifier type errors show up like: xskxceiver.c:912:34: error: format specifies type 'int' but the argument has type '__u64' (aka 'unsigned long long') [-Werror,-Wformat] ksft_print_msg("[%s] expected meta_count [%d], got meta_count [%d]\n", ~~ %llu __func__, pkt->pkt_nb, meta->count); ^~~~~~~~~~~ xskxceiver.c:929:55: error: format specifies type 'unsigned long long' but the argument has type 'u64' (aka 'unsigned long') [-Werror,-Wformat] ksft_print_msg("Frag invalid addr: %llx len: %u\n", addr, len); ~~~~ ^~~~ Fix the issues by casting to (unsigned long long) and changing the specifiers to %llu from %d and %u, since with u64s it might be %llx or %lx, depending on the architecture. Signed-off-by: Anders Roxell <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
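The portable pattern being applied, shown as a tiny standalone example (printf stands in for ksft_print_msg):

    #include <stdio.h>
    #include <linux/types.h>

    int main(void)
    {
    	__u64 count = 42;

    	/* cast to unsigned long long so %llu is correct whether __u64 is
    	 * 'unsigned long' or 'unsigned long long' on a given architecture */
    	printf("meta_count [%llu]\n", (unsigned long long)count);
    	return 0;
    }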
2023-11-09selftests/bpf: Test bpf_refcount_acquire of node obtained via direct ldDave Marchevsky2-0/+104
This patch demonstrates that verifier changes earlier in this series result in bpf_refcount_acquire(mapval->stashed_kptr) passing verification. The added test additionally validates that stashing a kptr in mapval and - in a separate BPF program - refcount_acquiring the kptr without unstashing works as expected at runtime. Signed-off-by: Dave Marchevsky <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09selftests/bpf: Add test passing MAYBE_NULL reg to bpf_refcount_acquireDave Marchevsky1-0/+19
The test added in this patch exercises the logic fixed in the previous patch in this series. Before the previous patch's changes, bpf_refcount_acquire accepted MAYBE_NULL local kptrs; after the change the verifier correctly rejects such a call. Signed-off-by: Dave Marchevsky <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09veristat: add ability to filter top N resultsAndrii Nakryiko1-0/+10
Add ability to filter the top N results, both in replay/verifier mode and comparison mode. Just adding `-n10` will emit only the first 10 rows, or fewer if there are not enough rows. This is not just a shortcut instead of passing veristat output through `head`, though. Filtering out all the other rows influences final table formatting, as table column widths are calculated based on the actually emitted rows. To demonstrate the difference, compare two "equivalent" forms below, one using head and another using the -n argument. TOP N FEATURE ============= [vmuser@archvm bpf]$ sudo ./veristat -C ~/baseline-results-selftests.csv ~/sanity2-results-selftests.csv -e file,prog,insns,states -s '|insns_diff|' -n10 File Program Insns (A) Insns (B) Insns (DIFF) States (A) States (B) States (DIFF) ---------------------------------------- --------------------- --------- --------- ------------ ---------- ---------- ------------- test_seg6_loop.bpf.linked3.o __add_egr_x 12440 12360 -80 (-0.64%) 364 357 -7 (-1.92%) async_stack_depth.bpf.linked3.o async_call_root_check 145 145 +0 (+0.00%) 3 3 +0 (+0.00%) async_stack_depth.bpf.linked3.o pseudo_call_check 139 139 +0 (+0.00%) 3 3 +0 (+0.00%) atomic_bounds.bpf.linked3.o sub 7 7 +0 (+0.00%) 0 0 +0 (+0.00%) bench_local_storage_create.bpf.linked3.o kmalloc 5 5 +0 (+0.00%) 0 0 +0 (+0.00%) bench_local_storage_create.bpf.linked3.o sched_process_fork 22 22 +0 (+0.00%) 2 2 +0 (+0.00%) bench_local_storage_create.bpf.linked3.o socket_post_create 23 23 +0 (+0.00%) 2 2 +0 (+0.00%) bind4_prog.bpf.linked3.o bind_v4_prog 358 358 +0 (+0.00%) 33 33 +0 (+0.00%) bind6_prog.bpf.linked3.o bind_v6_prog 429 429 +0 (+0.00%) 37 37 +0 (+0.00%) bind_perm.bpf.linked3.o bind_v4_prog 15 15 +0 (+0.00%) 1 1 +0 (+0.00%) PIPING TO HEAD ============== [vmuser@archvm bpf]$ sudo ./veristat -C ~/baseline-results-selftests.csv ~/sanity2-results-selftests.csv -e file,prog,insns,states -s '|insns_diff|' | head -n12 File Program Insns (A) Insns (B) Insns (DIFF) States (A) States (B) States (DIFF) ----------------------------------------------------- ---------------------------------------------------- --------- --------- ------------ ---------- ---------- ------------- test_seg6_loop.bpf.linked3.o __add_egr_x 12440 12360 -80 (-0.64%) 364 357 -7 (-1.92%) async_stack_depth.bpf.linked3.o async_call_root_check 145 145 +0 (+0.00%) 3 3 +0 (+0.00%) async_stack_depth.bpf.linked3.o pseudo_call_check 139 139 +0 (+0.00%) 3 3 +0 (+0.00%) atomic_bounds.bpf.linked3.o sub 7 7 +0 (+0.00%) 0 0 +0 (+0.00%) bench_local_storage_create.bpf.linked3.o kmalloc 5 5 +0 (+0.00%) 0 0 +0 (+0.00%) bench_local_storage_create.bpf.linked3.o sched_process_fork 22 22 +0 (+0.00%) 2 2 +0 (+0.00%) bench_local_storage_create.bpf.linked3.o socket_post_create 23 23 +0 (+0.00%) 2 2 +0 (+0.00%) bind4_prog.bpf.linked3.o bind_v4_prog 358 358 +0 (+0.00%) 33 33 +0 (+0.00%) bind6_prog.bpf.linked3.o bind_v6_prog 429 429 +0 (+0.00%) 37 37 +0 (+0.00%) bind_perm.bpf.linked3.o bind_v4_prog 15 15 +0 (+0.00%) 1 1 +0 (+0.00%) Note all the wasted whitespace in the "PIPING TO HEAD" variant. Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09veristat: add ability to sort by stat's absolute valueAndrii Nakryiko1-12/+56
Add ability to sort results by absolute values of specified stats. This is especially useful to find biggest deviations in comparison mode. When comparing verifier change effect against a large base of BPF object files, it's necessary to see big changes both in positive and negative directions, as both might be a signal for regressions or bugs. The syntax is natural, e.g., adding `-s '|insns_diff|'^` will instruct veristat to sort by absolute value of instructions difference in ascending order. Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09selftests/bpf: Disable CONFIG_DEBUG_INFO_REDUCED in config.aarch64Anders Roxell1-0/+1
Building an arm64 kernel and selftests/bpf with defconfig + selftests/bpf/config and selftests/bpf/config.aarch64, the fragment CONFIG_DEBUG_INFO_REDUCED is enabled in arm64's defconfig. It should be disabled in the file selftests/bpf/config.aarch64, since if it's not disabled, CONFIG_DEBUG_INFO_BTF won't be enabled. Signed-off-by: Anders Roxell <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09selftests/bpf: Consolidate VIRTIO/9P configs in config.vm fileManu Bretelle5-38/+15
Those configs are needed to be able to run VMs somewhat consistently. For instance, ATM, s390x is missing `CONFIG_VIRTIO_CONSOLE`, which prevents s390x kernels built in CI from leveraging qemu-guest-agent. By moving them to `config.vm`, we should have selftest kernels which are equal in terms of VM functionality when they include this file. The set of configs enabled was picked using grep -h -E '(_9P|_VIRTIO)' config.x86_64 config | sort | uniq added to `config.vm` and then grep -vE '(_9P|_VIRTIO)' config.{x86_64,aarch64,s390x} As a side effect, some configs may have disappeared from the aarch64 and s390x kernels, but they should not be needed. CI will tell. Signed-off-by: Manu Bretelle <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09selftests/bpf: Retry map update for non-preallocated per-cpu mapHou Tao1-1/+19
BPF CI failed due to map_percpu_stats_percpu_hash from time to time [1]. It seems that the failure reason is that the per-cpu bpf memory allocator may not be able to allocate a per-cpu pointer successfully and cannot refill the free llist in time, so bpf_map_update_elem() will return -ENOMEM. Mitigate the problem by retrying the update operation for non-preallocated per-cpu maps. [1]: https://github.com/kernel-patches/bpf/actions/runs/6713177520/job/18244865326?pr=5909 Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
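A hedged sketch of the retry idea (hypothetical helper, not the exact code added to the test): retry only on ENOMEM and give the allocator's refill a moment between attempts.

    #include <errno.h>
    #include <unistd.h>
    #include <bpf/bpf.h>

    static int update_elem_retriable(int map_fd, const void *key, const void *value,
    				 __u64 flags, int attempts)
    {
    	while (attempts-- > 0) {
    		if (!bpf_map_update_elem(map_fd, key, value, flags))
    			return 0;
    		if (errno != ENOMEM)
    			break;		/* only -ENOMEM is considered transient */
    		usleep(1000);		/* let the per-cpu allocator refill its llist */
    	}
    	return -errno;
    }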
2023-11-09selftests/bpf: Export map_update_retriable()Hou Tao2-5/+17
Export map_update_retriable() to make it usable for other map_test cases. These cases may only need retry for specific errno, so add a new callback parameter to let map_update_retriable() decide whether or not the errno is retriable. Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09selftests/bpf: Use value with enough-size when updating per-cpu mapHou Tao1-3/+18
When updating a per-cpu map in the map_percpu_stats test, patch_map_thread() only passes a 4-byte-sized value to bpf_map_update_elem(). The expected size of the value is 8 * num_possible_cpus(), so fix it by passing a value of sufficient size for the per-cpu map update. Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
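For context, a small illustrative sketch of a correctly sized per-cpu value buffer from user space (using libbpf's libbpf_num_possible_cpus(); not the test's code):

    #include <errno.h>
    #include <stdlib.h>
    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    /* per-cpu maps exchange one value slot per possible CPU with user space */
    static int update_percpu_counter(int map_fd, int key)
    {
    	int ncpus = libbpf_num_possible_cpus();
    	__u64 *values = calloc(ncpus, sizeof(*values));	/* 8 * num_possible_cpus() bytes */
    	int err;

    	if (!values)
    		return -ENOMEM;
    	err = bpf_map_update_elem(map_fd, &key, values, BPF_ANY);
    	free(values);
    	return err;
    }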
2023-11-09selftests/bpf: satisfy compiler by having explicit return in btf testAndrii Nakryiko1-0/+1
Some compilers complain about get_pprint_mapv_size() not returning value in some code paths. Fix with explicit return. Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09selftests/bpf: fix RELEASE=1 build for tc_optsAndrii Nakryiko1-5/+1
The compiler complains about malloc(). We also don't need to dynamically allocate anything, so make life easier by using a statically sized buffer. Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09selftests/bpf: Add malloc failure checks in bpf_iterYuran Pereira1-1/+5
Since some malloc calls in bpf_iter may at times fail, this patch adds the appropriate failure checks, and ensures that any previously allocated resource is appropriately destroyed before returning from the function. Signed-off-by: Yuran Pereira <[email protected]> Acked-by: Yonghong Song <[email protected]> Acked-by: Kui-Feng Lee <[email protected]> Link: https://lore.kernel.org/r/DB3PR10MB6835F0ECA792265FA41FC39BE8A3A@DB3PR10MB6835.EURPRD10.PROD.OUTLOOK.COM Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-09selftests/bpf: Convert CHECK macros to ASSERT_* macros in bpf_iterYuran Pereira1-44/+35
As it was pointed out by Yonghong Song [1], in the bpf selftests the use of the ASSERT_* series of macros is preferred over the CHECK macro. This patch replaces all CHECK calls in bpf_iter with the appropriate ASSERT_* macros. [1] https://lore.kernel.org/lkml/[email protected] Suggested-by: Yonghong Song <[email protected]> Signed-off-by: Yuran Pereira <[email protected]> Acked-by: Yonghong Song <[email protected]> Acked-by: Kui-Feng Lee <[email protected]> Link: https://lore.kernel.org/r/DB3PR10MB6835E9C8DFCA226DD6FEF914E8A3A@DB3PR10MB6835.EURPRD10.PROD.OUTLOOK.COM Signed-off-by: Alexei Starovoitov <[email protected]>
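A rough before/after fragment of the conversion pattern (illustrative call sites, not a full test; the macros are the ones defined in test_progs.h):

    	/* before: CHECK() evaluates to true on failure and takes format args */
    	if (CHECK(err, "map_lookup", "unexpected err: %d\n", err))
    		goto close_iter;

    	/* after: ASSERT_OK() evaluates to true on success and logs the value itself */
    	if (!ASSERT_OK(err, "map_lookup"))
    		goto close_iter;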
2023-11-08selftests/bpf: Enable cpu v4 tests for LoongArchHengqi Chen6-6/+12
Enable the cpu v4 tests for LoongArch. Currently, we don't have BPF trampoline in LoongArch JIT, so the fentry test `test_ptr_struct_arg` still failed, will followup. Test result attached below: # ./test_progs -t verifier_sdiv,verifier_movsx,verifier_ldsx,verifier_gotol,verifier_bswap #316/1 verifier_bswap/BSWAP, 16:OK #316/2 verifier_bswap/BSWAP, 16 @unpriv:OK #316/3 verifier_bswap/BSWAP, 32:OK #316/4 verifier_bswap/BSWAP, 32 @unpriv:OK #316/5 verifier_bswap/BSWAP, 64:OK #316/6 verifier_bswap/BSWAP, 64 @unpriv:OK #316 verifier_bswap:OK #330/1 verifier_gotol/gotol, small_imm:OK #330/2 verifier_gotol/gotol, small_imm @unpriv:OK #330 verifier_gotol:OK #338/1 verifier_ldsx/LDSX, S8:OK #338/2 verifier_ldsx/LDSX, S8 @unpriv:OK #338/3 verifier_ldsx/LDSX, S16:OK #338/4 verifier_ldsx/LDSX, S16 @unpriv:OK #338/5 verifier_ldsx/LDSX, S32:OK #338/6 verifier_ldsx/LDSX, S32 @unpriv:OK #338/7 verifier_ldsx/LDSX, S8 range checking, privileged:OK #338/8 verifier_ldsx/LDSX, S16 range checking:OK #338/9 verifier_ldsx/LDSX, S16 range checking @unpriv:OK #338/10 verifier_ldsx/LDSX, S32 range checking:OK #338/11 verifier_ldsx/LDSX, S32 range checking @unpriv:OK #338 verifier_ldsx:OK #349/1 verifier_movsx/MOV32SX, S8:OK #349/2 verifier_movsx/MOV32SX, S8 @unpriv:OK #349/3 verifier_movsx/MOV32SX, S16:OK #349/4 verifier_movsx/MOV32SX, S16 @unpriv:OK #349/5 verifier_movsx/MOV64SX, S8:OK #349/6 verifier_movsx/MOV64SX, S8 @unpriv:OK #349/7 verifier_movsx/MOV64SX, S16:OK #349/8 verifier_movsx/MOV64SX, S16 @unpriv:OK #349/9 verifier_movsx/MOV64SX, S32:OK #349/10 verifier_movsx/MOV64SX, S32 @unpriv:OK #349/11 verifier_movsx/MOV32SX, S8, range_check:OK #349/12 verifier_movsx/MOV32SX, S8, range_check @unpriv:OK #349/13 verifier_movsx/MOV32SX, S16, range_check:OK #349/14 verifier_movsx/MOV32SX, S16, range_check @unpriv:OK #349/15 verifier_movsx/MOV32SX, S16, range_check 2:OK #349/16 verifier_movsx/MOV32SX, S16, range_check 2 @unpriv:OK #349/17 verifier_movsx/MOV64SX, S8, range_check:OK #349/18 verifier_movsx/MOV64SX, S8, range_check @unpriv:OK #349/19 verifier_movsx/MOV64SX, S16, range_check:OK #349/20 verifier_movsx/MOV64SX, S16, range_check @unpriv:OK #349/21 verifier_movsx/MOV64SX, S32, range_check:OK #349/22 verifier_movsx/MOV64SX, S32, range_check @unpriv:OK #349/23 verifier_movsx/MOV64SX, S16, R10 Sign Extension:OK #349/24 verifier_movsx/MOV64SX, S16, R10 Sign Extension @unpriv:OK #349 verifier_movsx:OK #361/1 verifier_sdiv/SDIV32, non-zero imm divisor, check 1:OK #361/2 verifier_sdiv/SDIV32, non-zero imm divisor, check 1 @unpriv:OK #361/3 verifier_sdiv/SDIV32, non-zero imm divisor, check 2:OK #361/4 verifier_sdiv/SDIV32, non-zero imm divisor, check 2 @unpriv:OK #361/5 verifier_sdiv/SDIV32, non-zero imm divisor, check 3:OK #361/6 verifier_sdiv/SDIV32, non-zero imm divisor, check 3 @unpriv:OK #361/7 verifier_sdiv/SDIV32, non-zero imm divisor, check 4:OK #361/8 verifier_sdiv/SDIV32, non-zero imm divisor, check 4 @unpriv:OK #361/9 verifier_sdiv/SDIV32, non-zero imm divisor, check 5:OK #361/10 verifier_sdiv/SDIV32, non-zero imm divisor, check 5 @unpriv:OK #361/11 verifier_sdiv/SDIV32, non-zero imm divisor, check 6:OK #361/12 verifier_sdiv/SDIV32, non-zero imm divisor, check 6 @unpriv:OK #361/13 verifier_sdiv/SDIV32, non-zero imm divisor, check 7:OK #361/14 verifier_sdiv/SDIV32, non-zero imm divisor, check 7 @unpriv:OK #361/15 verifier_sdiv/SDIV32, non-zero imm divisor, check 8:OK #361/16 verifier_sdiv/SDIV32, non-zero imm divisor, check 8 @unpriv:OK #361/17 verifier_sdiv/SDIV32, non-zero reg divisor, check 1:OK #361/18 
verifier_sdiv/SDIV32, non-zero reg divisor, check 1 @unpriv:OK #361/19 verifier_sdiv/SDIV32, non-zero reg divisor, check 2:OK #361/20 verifier_sdiv/SDIV32, non-zero reg divisor, check 2 @unpriv:OK #361/21 verifier_sdiv/SDIV32, non-zero reg divisor, check 3:OK #361/22 verifier_sdiv/SDIV32, non-zero reg divisor, check 3 @unpriv:OK #361/23 verifier_sdiv/SDIV32, non-zero reg divisor, check 4:OK #361/24 verifier_sdiv/SDIV32, non-zero reg divisor, check 4 @unpriv:OK #361/25 verifier_sdiv/SDIV32, non-zero reg divisor, check 5:OK #361/26 verifier_sdiv/SDIV32, non-zero reg divisor, check 5 @unpriv:OK #361/27 verifier_sdiv/SDIV32, non-zero reg divisor, check 6:OK #361/28 verifier_sdiv/SDIV32, non-zero reg divisor, check 6 @unpriv:OK #361/29 verifier_sdiv/SDIV32, non-zero reg divisor, check 7:OK #361/30 verifier_sdiv/SDIV32, non-zero reg divisor, check 7 @unpriv:OK #361/31 verifier_sdiv/SDIV32, non-zero reg divisor, check 8:OK #361/32 verifier_sdiv/SDIV32, non-zero reg divisor, check 8 @unpriv:OK #361/33 verifier_sdiv/SDIV64, non-zero imm divisor, check 1:OK #361/34 verifier_sdiv/SDIV64, non-zero imm divisor, check 1 @unpriv:OK #361/35 verifier_sdiv/SDIV64, non-zero imm divisor, check 2:OK #361/36 verifier_sdiv/SDIV64, non-zero imm divisor, check 2 @unpriv:OK #361/37 verifier_sdiv/SDIV64, non-zero imm divisor, check 3:OK #361/38 verifier_sdiv/SDIV64, non-zero imm divisor, check 3 @unpriv:OK #361/39 verifier_sdiv/SDIV64, non-zero imm divisor, check 4:OK #361/40 verifier_sdiv/SDIV64, non-zero imm divisor, check 4 @unpriv:OK #361/41 verifier_sdiv/SDIV64, non-zero imm divisor, check 5:OK #361/42 verifier_sdiv/SDIV64, non-zero imm divisor, check 5 @unpriv:OK #361/43 verifier_sdiv/SDIV64, non-zero imm divisor, check 6:OK #361/44 verifier_sdiv/SDIV64, non-zero imm divisor, check 6 @unpriv:OK #361/45 verifier_sdiv/SDIV64, non-zero reg divisor, check 1:OK #361/46 verifier_sdiv/SDIV64, non-zero reg divisor, check 1 @unpriv:OK #361/47 verifier_sdiv/SDIV64, non-zero reg divisor, check 2:OK #361/48 verifier_sdiv/SDIV64, non-zero reg divisor, check 2 @unpriv:OK #361/49 verifier_sdiv/SDIV64, non-zero reg divisor, check 3:OK #361/50 verifier_sdiv/SDIV64, non-zero reg divisor, check 3 @unpriv:OK #361/51 verifier_sdiv/SDIV64, non-zero reg divisor, check 4:OK #361/52 verifier_sdiv/SDIV64, non-zero reg divisor, check 4 @unpriv:OK #361/53 verifier_sdiv/SDIV64, non-zero reg divisor, check 5:OK #361/54 verifier_sdiv/SDIV64, non-zero reg divisor, check 5 @unpriv:OK #361/55 verifier_sdiv/SDIV64, non-zero reg divisor, check 6:OK #361/56 verifier_sdiv/SDIV64, non-zero reg divisor, check 6 @unpriv:OK #361/57 verifier_sdiv/SMOD32, non-zero imm divisor, check 1:OK #361/58 verifier_sdiv/SMOD32, non-zero imm divisor, check 1 @unpriv:OK #361/59 verifier_sdiv/SMOD32, non-zero imm divisor, check 2:OK #361/60 verifier_sdiv/SMOD32, non-zero imm divisor, check 2 @unpriv:OK #361/61 verifier_sdiv/SMOD32, non-zero imm divisor, check 3:OK #361/62 verifier_sdiv/SMOD32, non-zero imm divisor, check 3 @unpriv:OK #361/63 verifier_sdiv/SMOD32, non-zero imm divisor, check 4:OK #361/64 verifier_sdiv/SMOD32, non-zero imm divisor, check 4 @unpriv:OK #361/65 verifier_sdiv/SMOD32, non-zero imm divisor, check 5:OK #361/66 verifier_sdiv/SMOD32, non-zero imm divisor, check 5 @unpriv:OK #361/67 verifier_sdiv/SMOD32, non-zero imm divisor, check 6:OK #361/68 verifier_sdiv/SMOD32, non-zero imm divisor, check 6 @unpriv:OK #361/69 verifier_sdiv/SMOD32, non-zero reg divisor, check 1:OK #361/70 verifier_sdiv/SMOD32, non-zero reg divisor, check 1 @unpriv:OK #361/71 
verifier_sdiv/SMOD32, non-zero reg divisor, check 2:OK #361/72 verifier_sdiv/SMOD32, non-zero reg divisor, check 2 @unpriv:OK #361/73 verifier_sdiv/SMOD32, non-zero reg divisor, check 3:OK #361/74 verifier_sdiv/SMOD32, non-zero reg divisor, check 3 @unpriv:OK #361/75 verifier_sdiv/SMOD32, non-zero reg divisor, check 4:OK #361/76 verifier_sdiv/SMOD32, non-zero reg divisor, check 4 @unpriv:OK #361/77 verifier_sdiv/SMOD32, non-zero reg divisor, check 5:OK #361/78 verifier_sdiv/SMOD32, non-zero reg divisor, check 5 @unpriv:OK #361/79 verifier_sdiv/SMOD32, non-zero reg divisor, check 6:OK #361/80 verifier_sdiv/SMOD32, non-zero reg divisor, check 6 @unpriv:OK #361/81 verifier_sdiv/SMOD64, non-zero imm divisor, check 1:OK #361/82 verifier_sdiv/SMOD64, non-zero imm divisor, check 1 @unpriv:OK #361/83 verifier_sdiv/SMOD64, non-zero imm divisor, check 2:OK #361/84 verifier_sdiv/SMOD64, non-zero imm divisor, check 2 @unpriv:OK #361/85 verifier_sdiv/SMOD64, non-zero imm divisor, check 3:OK #361/86 verifier_sdiv/SMOD64, non-zero imm divisor, check 3 @unpriv:OK #361/87 verifier_sdiv/SMOD64, non-zero imm divisor, check 4:OK #361/88 verifier_sdiv/SMOD64, non-zero imm divisor, check 4 @unpriv:OK #361/89 verifier_sdiv/SMOD64, non-zero imm divisor, check 5:OK #361/90 verifier_sdiv/SMOD64, non-zero imm divisor, check 5 @unpriv:OK #361/91 verifier_sdiv/SMOD64, non-zero imm divisor, check 6:OK #361/92 verifier_sdiv/SMOD64, non-zero imm divisor, check 6 @unpriv:OK #361/93 verifier_sdiv/SMOD64, non-zero imm divisor, check 7:OK #361/94 verifier_sdiv/SMOD64, non-zero imm divisor, check 7 @unpriv:OK #361/95 verifier_sdiv/SMOD64, non-zero imm divisor, check 8:OK #361/96 verifier_sdiv/SMOD64, non-zero imm divisor, check 8 @unpriv:OK #361/97 verifier_sdiv/SMOD64, non-zero reg divisor, check 1:OK #361/98 verifier_sdiv/SMOD64, non-zero reg divisor, check 1 @unpriv:OK #361/99 verifier_sdiv/SMOD64, non-zero reg divisor, check 2:OK #361/100 verifier_sdiv/SMOD64, non-zero reg divisor, check 2 @unpriv:OK #361/101 verifier_sdiv/SMOD64, non-zero reg divisor, check 3:OK #361/102 verifier_sdiv/SMOD64, non-zero reg divisor, check 3 @unpriv:OK #361/103 verifier_sdiv/SMOD64, non-zero reg divisor, check 4:OK #361/104 verifier_sdiv/SMOD64, non-zero reg divisor, check 4 @unpriv:OK #361/105 verifier_sdiv/SMOD64, non-zero reg divisor, check 5:OK #361/106 verifier_sdiv/SMOD64, non-zero reg divisor, check 5 @unpriv:OK #361/107 verifier_sdiv/SMOD64, non-zero reg divisor, check 6:OK #361/108 verifier_sdiv/SMOD64, non-zero reg divisor, check 6 @unpriv:OK #361/109 verifier_sdiv/SMOD64, non-zero reg divisor, check 7:OK #361/110 verifier_sdiv/SMOD64, non-zero reg divisor, check 7 @unpriv:OK #361/111 verifier_sdiv/SMOD64, non-zero reg divisor, check 8:OK #361/112 verifier_sdiv/SMOD64, non-zero reg divisor, check 8 @unpriv:OK #361/113 verifier_sdiv/SDIV32, zero divisor:OK #361/114 verifier_sdiv/SDIV32, zero divisor @unpriv:OK #361/115 verifier_sdiv/SDIV64, zero divisor:OK #361/116 verifier_sdiv/SDIV64, zero divisor @unpriv:OK #361/117 verifier_sdiv/SMOD32, zero divisor:OK #361/118 verifier_sdiv/SMOD32, zero divisor @unpriv:OK #361/119 verifier_sdiv/SMOD64, zero divisor:OK #361/120 verifier_sdiv/SMOD64, zero divisor @unpriv:OK #361 verifier_sdiv:OK Summary: 5/163 PASSED, 0 SKIPPED, 0 FAILED # ./test_progs -t ldsx_insn test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22 libbpf: prog 
'test_ptr_struct_arg': failed to auto-attach: -524 test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524) #116/1 ldsx_insn/map_val and probed_memory:FAIL #116/2 ldsx_insn/ctx_member_sign_ext:OK #116/3 ldsx_insn/ctx_member_narrow_sign_ext:OK #116 ldsx_insn:FAIL All error logs: test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22 libbpf: prog 'test_ptr_struct_arg': failed to auto-attach: -524 test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524) #116/1 ldsx_insn/map_val and probed_memory:FAIL #116 ldsx_insn:FAIL Summary: 0/2 PASSED, 0 SKIPPED, 1 FAILED Signed-off-by: Hengqi Chen <[email protected]> Signed-off-by: Huacai Chen <[email protected]>
2023-11-07selftests/bpf: get trusted cgrp from bpf_iter__cgroup directlyChuyi Zhou1-12/+4
Commit f49843afde (selftests/bpf: Add tests for css_task iter combining with cgroup iter) added a test which demonstrates how css_task iter can be combined with cgroup iter. That test used bpf_cgroup_from_id() to convert bpf_iter__cgroup->cgroup to a trusted ptr which is pointless now, since with the previous fix, we can get a trusted cgroup directly from bpf_iter__cgroup. Signed-off-by: Chuyi Zhou <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Martin KaFai Lau <[email protected]>
2023-11-02selftests/bpf: Fix broken build where char is unsignedBjörn Töpel1-1/+1
There are architectures where char is not signed. If so, the following error is triggered: | xdp_hw_metadata.c:435:42: error: result of comparison of constant -1 \ | with expression of type 'char' is always true \ | [-Werror,-Wtautological-constant-out-of-range-compare] | 435 | while ((opt = getopt(argc, argv, "mh")) != -1) { | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~ | 1 error generated. Correct by changing the char to int. Fixes: bb6a88885fde ("selftests/bpf: Add options and frags to xdp_hw_metadata") Signed-off-by: Björn Töpel <[email protected]> Acked-by: Larysa Zaremba <[email protected]> Tested-by: Anders Roxell <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
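The general shape of the fix as a standalone example: getopt() returns int, so the loop variable must be int for the comparison with -1 to be meaningful on every architecture.

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
    	int opt;	/* was 'char'; where char is unsigned, 'opt != -1' is always true */

    	while ((opt = getopt(argc, argv, "mh")) != -1) {
    		switch (opt) {
    		case 'm':
    			printf("-m given\n");
    			break;
    		case 'h':
    		default:
    			printf("usage: %s [-m]\n", argv[0]);
    		}
    	}
    	return 0;
    }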
2023-11-01selftests/bpf: precision tracking test for BPF_NEG and BPF_ENDShung-Hsi Yu2-0/+95
As seen from the previous commit that fixes backtracking for BPF_ALU | BPF_TO_BE | BPF_END, both BPF_NEG and BPF_END require special handling. Add tests written with inline assembly to check that the verifier does not incorrectly use the src_reg field of BPF_NEG and BPF_END (including bswap added in v4). Suggested-by: Eduard Zingerman <[email protected]> Signed-off-by: Shung-Hsi Yu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-01selftests/bpf: Add test for using css_task iter in sleepable progsChuyi Zhou2-0/+20
This patch adds a test to prove that the css_task iter can be used in normal sleepable progs. Signed-off-by: Chuyi Zhou <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-01selftests/bpf: Add tests for css_task iter combining with cgroup iterChuyi Zhou2-0/+77
This patch adds a test which demonstrates how css_task iter can be combined with cgroup iter and it won't cause deadlock, though cgroup iter is not sleepable. Signed-off-by: Chuyi Zhou <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-01bpf: Relax allowlist for css_task iterChuyi Zhou1-2/+2
The newly added open-coded css_task iter would try to hold the global css_set_lock in bpf_iter_css_task_new, so the bpf side has to be careful about where it allows this iter to be used. The main concern is deadlocking on css_set_lock. check_css_task_iter_allowlist() in the verifier enforced that css_task can only be used in bpf_lsm hooks and sleepable bpf_iter. This patch relaxes the allowlist for the css_task iter. Any lsm and any iter (even non-sleepable) and any sleepable are safe, since they would not hold the css_set_lock before entering BPF prog context. This patch also fixes the misuse of BPF_TRACE_ITER in check_css_task_iter_allowlist, which compared bpf_prog_type with bpf_attach_type. Fixes: 9c66dc94b62ae ("bpf: Introduce css_task open-coded iterator kfuncs") Signed-off-by: Chuyi Zhou <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-01selftests/bpf: fix test_maps' use of bpf_map_create_optsAndrii Nakryiko1-15/+5
Use LIBBPF_OPTS() macro to properly initialize bpf_map_create_opts in test_maps' tests. Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
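For reference, the pattern in question as a minimal sketch (map type and parameters are made up for illustration):

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    int create_example_map(void)
    {
    	/* LIBBPF_OPTS() zero-initializes the struct and sets .sz, keeping the
    	 * call forward-compatible when libbpf grows bpf_map_create_opts */
    	LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_NO_PREALLOC);

    	return bpf_map_create(BPF_MAP_TYPE_HASH, "example_map",
    			      sizeof(int), sizeof(long long), 128, &opts);
    }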
2023-11-01bpf: Add __bpf_hook_{start,end} macrosDave Marchevsky1-4/+2
Not all uses of __diag_ignore_all(...) in BPF-related code in order to suppress warnings are wrapping kfunc definitions. Some "hook point" definitions - small functions meant to be used as attach points for fentry and similar BPF progs - need to suppress -Wmissing-declarations. We could use __bpf_kfunc_{start,end}_defs added in the previous patch in such cases, but this might be confusing to someone unfamiliar with BPF internals. Instead, this patch adds __bpf_hook_{start,end} macros, currently having the same effect as __bpf_kfunc_{start,end}_defs, then uses them to suppress warnings for two hook points in the kernel itself and some bpf_testmod hook points as well. Signed-off-by: Dave Marchevsky <[email protected]> Cc: Yafang Shao <[email protected]> Acked-by: Jiri Olsa <[email protected]> Acked-by: Yafang Shao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-01selftests/bpf: fix test_bpffsManu Bretelle1-3/+8
Currently this tests tries to umount /sys/kernel/debug (TDIR) but the system it is running on may have mounts below. For example, danobi/vmtest [0] VMs have mount -t tracefs tracefs /sys/kernel/debug/tracing as part of their init. This change instead creates a "random" directory under /tmp and uses this as TDIR. If the directory already exists, ignore the error and keep moving on. Test: Originally: $ vmtest -k $KERNEL_REPO/arch/x86_64/boot/bzImage "./test_progs -vv -a test_bpffs" => bzImage ===> Booting ===> Setting up VM ===> Running command [ 2.138818] bpf_testmod: loading out-of-tree module taints kernel. [ 2.140913] bpf_testmod: module verification failed: signature and/or required key missing - tainting kernel bpf_testmod.ko is already unloaded. Loading bpf_testmod.ko... Successfully loaded bpf_testmod.ko. test_test_bpffs:PASS:clone 0 nsec fn:PASS:unshare 0 nsec fn:PASS:mount / 0 nsec fn:FAIL:umount /sys/kernel/debug unexpected error: -1 (errno 16) bpf_testmod.ko is already unloaded. Loading bpf_testmod.ko... Successfully loaded bpf_testmod.ko. test_test_bpffs:PASS:clone 0 nsec test_test_bpffs:PASS:waitpid 0 nsec test_test_bpffs:FAIL:bpffs test failed 255#282 test_bpffs:FAIL Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED Successfully unloaded bpf_testmod.ko. Command failed with exit code: 1 After this change: $ vmtest -k $(make image_name) 'cd tools/testing/selftests/bpf && ./test_progs -vv -a test_bpffs' => bzImage ===> Booting ===> Setting up VM ===> Running command [ 2.295696] bpf_testmod: loading out-of-tree module taints kernel. [ 2.296468] bpf_testmod: module verification failed: signature and/or required key missing - tainting kernel bpf_testmod.ko is already unloaded. Loading bpf_testmod.ko... Successfully loaded bpf_testmod.ko. test_test_bpffs:PASS:clone 0 nsec fn:PASS:unshare 0 nsec fn:PASS:mount / 0 nsec fn:PASS:mount tmpfs 0 nsec fn:PASS:mkdir /tmp/test_bpffs_testdir/fs1 0 nsec fn:PASS:mkdir /tmp/test_bpffs_testdir/fs2 0 nsec fn:PASS:mount bpffs /tmp/test_bpffs_testdir/fs1 0 nsec fn:PASS:mount bpffs /tmp/test_bpffs_testdir/fs2 0 nsec fn:PASS:reading /tmp/test_bpffs_testdir/fs1/maps.debug 0 nsec fn:PASS:reading /tmp/test_bpffs_testdir/fs2/progs.debug 0 nsec fn:PASS:creating /tmp/test_bpffs_testdir/fs1/a 0 nsec fn:PASS:creating /tmp/test_bpffs_testdir/fs1/a/1 0 nsec fn:PASS:creating /tmp/test_bpffs_testdir/fs1/b 0 nsec fn:PASS:create_map(ARRAY) 0 nsec fn:PASS:pin map 0 nsec fn:PASS:stat(/tmp/test_bpffs_testdir/fs1/a) 0 nsec fn:PASS:renameat2(/fs1/a, /fs1/b, RENAME_EXCHANGE) 0 nsec fn:PASS:stat(/tmp/test_bpffs_testdir/fs1/b) 0 nsec fn:PASS:b should have a's inode 0 nsec fn:PASS:access(/tmp/test_bpffs_testdir/fs1/b/1) 0 nsec fn:PASS:stat(/tmp/test_bpffs_testdir/fs1/map) 0 nsec fn:PASS:renameat2(/fs1/c, /fs1/b, RENAME_EXCHANGE) 0 nsec fn:PASS:stat(/tmp/test_bpffs_testdir/fs1/b) 0 nsec fn:PASS:b should have c's inode 0 nsec fn:PASS:access(/tmp/test_bpffs_testdir/fs1/c/1) 0 nsec fn:PASS:renameat2(RENAME_NOREPLACE) 0 nsec fn:PASS:access(/tmp/test_bpffs_testdir/fs1/b) 0 nsec bpf_testmod.ko is already unloaded. Loading bpf_testmod.ko... Successfully loaded bpf_testmod.ko. test_test_bpffs:PASS:clone 0 nsec test_test_bpffs:PASS:waitpid 0 nsec test_test_bpffs:PASS:bpffs test 0 nsec #282 test_bpffs:OK Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED Successfully unloaded bpf_testmod.ko. 
[0] https://github.com/danobi/vmtest This is a follow-up of https://lore.kernel.org/bpf/[email protected]/T/ v1 -> v2: - use a TDIR name that is related to test - use C-style comments Signed-off-by: Manu Bretelle <[email protected]> Acked-by: Jiri Olsa <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
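The core of the change, sketched (directory name taken from the test output above; error handling is illustrative): create a private TDIR under /tmp and tolerate it already existing.

    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    static int setup_test_dir(const char *tdir)
    {
    	/* use a directory under /tmp instead of /sys/kernel/debug, and ignore
    	 * EEXIST so a leftover directory from an earlier run is not fatal */
    	if (mkdir(tdir, 0777) && errno != EEXIST) {
    		fprintf(stderr, "mkdir %s failed: %d\n", tdir, errno);
    		return -errno;
    	}
    	return 0;
    }

    int main(void)
    {
    	return setup_test_dir("/tmp/test_bpffs_testdir") ? 1 : 0;
    }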
2023-11-01selftests/bpf: Add test for immediate spilled to stackHao Sun1-0/+32
Add a test to check that the verifier correctly reasons about the sign of an immediate spilled to the stack by a BPF_ST instruction. Signed-off-by: Hao Sun <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-11-01Merge tag 'for-6.7/io_uring-sockopt-2023-10-30' of git://git.kernel.dk/linuxLinus Torvalds1-6/+107
Pull io_uring {get,set}sockopt support from Jens Axboe: "This adds support for using getsockopt and setsockopt via io_uring. The main use cases for this is to enable use of direct descriptors, rather than first instantiating a normal file descriptor, doing the option tweaking needed, then turning it into a direct descriptor. With this support, we can avoid needing a regular file descriptor completely. The net and bpf bits have been signed off on their side" * tag 'for-6.7/io_uring-sockopt-2023-10-30' of git://git.kernel.dk/linux: selftests/bpf/sockopt: Add io_uring support io_uring/cmd: Introduce SOCKET_URING_OP_SETSOCKOPT io_uring/cmd: Introduce SOCKET_URING_OP_GETSOCKOPT io_uring/cmd: return -EOPNOTSUPP if net is disabled selftests/net: Extract uring helpers to be reusable tools headers: Grab copy of io_uring.h io_uring/cmd: Pass compat mode in issue_flags net/socket: Break down __sys_getsockopt net/socket: Break down __sys_setsockopt bpf: Add sockptr support for setsockopt bpf: Add sockptr support for getsockopt
2023-11-01Merge 'bpf-next 2023-10-16' into loongarch-nextHuacai Chen92-926/+5605
LoongArch architecture changes for 6.7 (BPF CPU v4 support) depend on the bpf changes to fix conflicts in selftests and to work, so merge them to create a base.