path: root/tools/testing/selftests/bpf/prog_tests
Age | Commit message | Author | Files | Lines
2019-07-16 | selftests/bpf: fix perf_buffer on s390 | Ilya Leoshkevich | 2 | -17/+3
perf_buffer test fails for exactly the same reason test_attach_probe used to fail: different nanosleep syscall kprobe name. Reuse the test_attach_probe fix. Fixes: ee5cf82ce04a ("selftests/bpf: test perf buffer API") Signed-off-by: Ilya Leoshkevich <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2019-07-16 | selftests/bpf: skip nmi test when perf hw events are disabled | Ilya Leoshkevich | 1 | -1/+32
Some setups (e.g. virtual machines) might run with hardware perf events disabled. If this is the case, skip the test_send_signal_nmi test. Add a separate test involving a software perf event. This allows testing the perf event path regardless of hardware perf event support. Signed-off-by: Ilya Leoshkevich <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
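For illustration, a minimal sketch (not the actual selftest code) of probing whether hardware perf events are usable before running the NMI variant; the attr fields and skip behaviour shown are assumptions:

  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>

  /* Returns 1 if a hardware cycles counter can be opened, 0 otherwise
   * (e.g. in a VM with the PMU disabled, where the NMI test is skipped).
   */
  static int hw_perf_available(void)
  {
      struct perf_event_attr attr = {
          .type = PERF_TYPE_HARDWARE,
          .config = PERF_COUNT_HW_CPU_CYCLES,
          .size = sizeof(attr),
      };
      int fd = syscall(__NR_perf_event_open, &attr, 0 /* this process */,
                       -1 /* any cpu */, -1 /* no group */, 0);

      if (fd < 0)
          return 0;
      close(fd);
      return 1;
  }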
2019-07-15 | selftests/bpf: fix attach_probe on s390 | Ilya Leoshkevich | 1 | -0/+2
attach_probe test fails, because it cannot install a kprobe on a non-existent sys_nanosleep symbol. Use the correct symbol name for the nanosleep syscall on 64-bit s390. Don't bother adding one for 31-bit mode, since tests are compiled only in 64-bit mode. Fixes: 1e8611bbdfc9 ("selftests/bpf: add kprobe/uprobe selftests") Signed-off-by: Ilya Leoshkevich <[email protected]> Acked-by: Vasily Gorbik <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
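The gist of such a fix, sketched with hedging (the exact macro name and the per-arch symbol strings below are assumptions, not quoted from the patch):

  /* Assumed sketch: choose the kprobe symbol for the nanosleep syscall
   * per architecture instead of hardcoding "sys_nanosleep".
   */
  #if defined(__x86_64__)
  #define SYS_NANOSLEEP_KPROBE_NAME "__x64_sys_nanosleep"
  #elif defined(__s390x__)
  #define SYS_NANOSLEEP_KPROBE_NAME "__s390x_sys_nanosleep"
  #else
  #define SYS_NANOSLEEP_KPROBE_NAME "sys_nanosleep"
  #endif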
2019-07-08 | selftests/bpf: test perf buffer API | Andrii Nakryiko | 1 | -0/+100
Add test verifying perf buffer API functionality. Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Song Liu <[email protected]> Acked-by: Yonghong Song <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
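Roughly, with the libbpf perf buffer API of that time (perf_buffer_opts carrying the callbacks); map_fd here is assumed to be the fd of a BPF_MAP_TYPE_PERF_EVENT_ARRAY map that the program writes to via bpf_perf_event_output():

  #include <linux/types.h>
  #include <bpf/libbpf.h>

  static void on_sample(void *ctx, int cpu, void *data, __u32 size)
  {
      /* one event produced by the BPF program */
  }

  static void drain_perf_buffer(int map_fd)
  {
      struct perf_buffer_opts pb_opts = { .sample_cb = on_sample };
      struct perf_buffer *pb;

      pb = perf_buffer__new(map_fd, 1 /* pages per CPU */, &pb_opts);
      if (libbpf_get_error(pb))
          return;

      perf_buffer__poll(pb, 100 /* ms */);
      perf_buffer__free(pb);
  }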
2019-07-05 | selftests/bpf: convert existing tracepoint tests to new APIs | Andrii Nakryiko | 3 | -81/+32
Convert some existing tests that attach to tracepoints to use bpf_program__attach_tracepoint API instead. Signed-off-by: Andrii Nakryiko <[email protected]> Reviewed-by: Stanislav Fomichev <[email protected]> Acked-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
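In rough terms, the new-style attachment looks like this (the tracepoint category/name and the function wrapper are just an example):

  #include <bpf/libbpf.h>

  static void attach_and_check(struct bpf_program *prog)
  {
      struct bpf_link *link;

      /* replaces open-coded perf_event_open() + ioctl(PERF_EVENT_IOC_SET_BPF) */
      link = bpf_program__attach_tracepoint(prog, "syscalls", "sys_enter_nanosleep");
      if (libbpf_get_error(link))
          return;

      /* ... trigger the tracepoint and check the results ... */

      bpf_link__destroy(link);
  }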
2019-07-05 | selftests/bpf: add kprobe/uprobe selftests | Andrii Nakryiko | 1 | -0/+166
Add tests verifying kprobe/kretprobe/uprobe/uretprobe APIs work as expected. Signed-off-by: Andrii Nakryiko <[email protected]> Reviewed-by: Stanislav Fomichev <[email protected]> Acked-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
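A hedged sketch of the attach calls these tests exercise (the symbol, path, offset and wrapper function below are placeholders):

  #include <stdbool.h>
  #include <stddef.h>
  #include <bpf/libbpf.h>

  static void attach_probes(struct bpf_program *kprobe_prog,
                            struct bpf_program *uprobe_prog,
                            size_t func_offset /* placeholder */)
  {
      struct bpf_link *kp_link, *up_link;

      /* kprobe (retprobe=false) on a kernel symbol */
      kp_link = bpf_program__attach_kprobe(kprobe_prog, false /* retprobe */,
                                           "sys_nanosleep");
      if (libbpf_get_error(kp_link))
          return;

      /* uprobe (retprobe=false) at an offset inside this binary;
       * pid 0 means the current process
       */
      up_link = bpf_program__attach_uprobe(uprobe_prog, false /* retprobe */,
                                           0 /* self */, "/proc/self/exe",
                                           func_offset);
      if (libbpf_get_error(up_link)) {
          bpf_link__destroy(kp_link);
          return;
      }

      /* ... trigger both probes and check results ... */

      bpf_link__destroy(up_link);
      bpf_link__destroy(kp_link);
  }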
2019-07-05 | selftests/bpf: switch test to new attach_perf_event API | Andrii Nakryiko | 1 | -16/+15
Use new bpf_program__attach_perf_event() in test previously relying on direct ioctl manipulations. Signed-off-by: Andrii Nakryiko <[email protected]> Reviewed-by: Stanislav Fomichev <[email protected]> Acked-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
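Sketch of the replacement (pmu_fd is assumed to come from perf_event_open(); the wrapper function is illustrative):

  #include <bpf/libbpf.h>

  static void attach_via_link(struct bpf_program *prog, int pmu_fd)
  {
      /* replaces ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd) +
       * ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)
       */
      struct bpf_link *link = bpf_program__attach_perf_event(prog, pmu_fd);

      if (libbpf_get_error(link))
          return;

      /* ... run the workload, read counters/maps ... */

      bpf_link__destroy(link);
  }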
2019-06-20 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | David S. Miller | 1 | -11/+56
Alexei Starovoitov says:

====================
pull-request: bpf-next 2019-06-19

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) new SO_REUSEPORT_DETACH_BPF setsockopt, from Martin.

2) BTF based map definition, from Andrii.

3) support bpf_map_lookup_elem for xskmap, from Jonathan.

4) bounded loops and scalar precision logic in the verifier, from Alexei.
====================

Signed-off-by: David S. Miller <[email protected]>
2019-06-19 | selftests/bpf: add realistic loop tests | Alexei Starovoitov | 1 | -11/+56
Add a bunch of loop tests. Most of them are created by replacing '#pragma unroll' with '#pragma clang loop unroll(disable)'. Several tests are artificially large:

  /* partial unroll. llvm will unroll loop ~150 times.
   * C loop count -> 600.
   * Asm loop count -> 4.
   * 16k insns in loop body.
   * Total of 5 such loops. Total program size ~82k insns.
   */
  "./pyperf600.o",

  /* no unroll at all.
   * C loop count -> 600.
   * ASM loop count -> 600.
   * ~110 insns in loop body.
   * Total of 5 such loops. Total program size ~1500 insns.
   */
  "./pyperf600_nounroll.o",

  /* partial unroll. 19k insn in a loop.
   * Total program size 20.8k insn.
   * ~350k processed_insns
   */
  "./strobemeta.o",

Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
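The flavour of the source-level change, shown on a made-up loop (frame, sum and the bound are illustrative):

  /* before: clang fully unrolls the loop at -O2 */
  #pragma unroll
  for (i = 0; i < 600; i++)
      sum += frame[i];

  /* after: keep the loop so the verifier exercises its bounded-loop logic */
  #pragma clang loop unroll(disable)
  for (i = 0; i < 600; i++)
      sum += frame[i];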
2019-06-17 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 1 | -0/+1
Honestly all the conflicts were simple overlapping changes, nothing really interesting to report. Signed-off-by: David S. Miller <[email protected]>
2019-05-29 | selftests: bpf: fix compiler warning in flow_dissector test | Alakesh Haloi | 1 | -0/+1
Add missing header file following compiler warning:

  prog_tests/flow_dissector.c: In function ‘tx_tap’:
  prog_tests/flow_dissector.c:175:9: warning: implicit declaration of function ‘writev’; did you mean ‘write’? [-Wimplicit-function-declaration]
    return writev(fd, iov, ARRAY_SIZE(iov));
           ^~~~~~
           write

Fixes: 0905beec9f52 ("selftests/bpf: run flow dissector tests in skb-less mode")
Signed-off-by: Alakesh Haloi <[email protected]>
Acked-by: Song Liu <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
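The fix itself is just the missing include; writev() is declared in <sys/uio.h>:

  #include <sys/uio.h>    /* declares writev(); silences the implicit-declaration warning */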
2019-05-24 | selftests: bpf: enable hi32 randomization for all tests | Jiong Wang | 1 | -0/+1
The previous libbpf patch allows the user to specify "prog_flags" when loading bpf programs. To enable high 32-bit randomization for a test, we need to set BPF_F_TEST_RND_HI32 in "prog_flags". To enable such randomization for all tests, we need to make sure all places pass BPF_F_TEST_RND_HI32. Changing them one by one is not convenient; also, it would be better if a test could be switched back to "normal" running mode without code changes.

The program load APIs used across bpf selftests are mostly:

  bpf_prog_load: load from file
  bpf_load_program: load from raw insns

A test_stub.c is implemented for bpf selftests; it offers two functions for testing purposes:

  bpf_prog_test_load
  bpf_test_load_program

They are the same as "bpf_prog_load" and "bpf_load_program", except they also set BPF_F_TEST_RND_HI32. Given that the *_xattr functions are the APIs for customizing any "prog_flags", it makes little sense to put these two functions into libbpf. Instead, the following CFLAGS are passed when compiling host programs:

  -Dbpf_prog_load=bpf_prog_test_load
  -Dbpf_load_program=bpf_test_load_program

They redirect the used load APIs to the test versions, hence enabling high 32-bit randomization for these tests without changing source code. Besides all this, several testcases use "bpf_prog_load_attr" directly; their call sites are updated to pass BPF_F_TEST_RND_HI32.

Signed-off-by: Jiong Wang <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
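A hedged sketch of what such a wrapper can look like (the real test_stub.c may differ in details):

  #include <bpf/bpf.h>
  #include <bpf/libbpf.h>

  /* Same interface as bpf_prog_load(), but forces BPF_F_TEST_RND_HI32
   * through the _xattr variant so every caller gets hi32 randomization.
   */
  int bpf_prog_test_load(const char *file, enum bpf_prog_type type,
                         struct bpf_object **pobj, int *prog_fd)
  {
      struct bpf_prog_load_attr attr = {
          .file = file,
          .prog_type = type,
          .prog_flags = BPF_F_TEST_RND_HI32,
      };

      return bpf_prog_load_xattr(&attr, pobj, prog_fd);
  }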
2019-05-24 | tools/bpf: add selftest in test_progs for bpf_send_signal() helper | Yonghong Song | 1 | -0/+198
The test covered both nmi and tracepoint perf events.

  $ ./test_progs
  ...
  test_send_signal_tracepoint:PASS:tracepoint 0 nsec
  ...
  test_send_signal_common:PASS:tracepoint 0 nsec
  ...
  test_send_signal_common:PASS:perf_event 0 nsec
  ...
  test_send_signal:OK

Acked-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Yonghong Song <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
2019-05-23 | selftests/bpf: add pyperf scale test | Alexei Starovoitov | 1 | -13/+18
Add a snippet of the pyperf bpf program used to collect python stack traces, as a scale test for the verifier. At 189 loop iterations llvm 9.0 starts ignoring '#pragma unroll' and generates a partially unrolled loop instead. Hence use 50, 100, and 180 loop iterations to stress test. Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2019-05-16 | selftests/bpf: add prog detach to flow_dissector test | Stanislav Fomichev | 1 | -0/+1
In case we are not running in a namespace (which we don't do by default), let's try to detach the bpf program that we use for eth_get_headlen tests. Fixes: 0905beec9f52 ("selftests/bpf: run flow dissector tests in skb-less mode") Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2019-05-16 | selftests/bpf: add missing \n to flow_dissector CHECK errors | Stanislav Fomichev | 1 | -4/+4
Otherwise, in case of an error, everything gets mushed together. Fixes: a5cb33464e53 ("selftests/bpf: make flow dissector tests more extensible") Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2019-05-09 | selftests: bpf: initialize bpf_object pointers where needed | Lorenz Bauer | 3 | -2/+5
There are a few tests which call bpf_object__close on uninitialized bpf_object*, which may segfault. Explicitly zero-initialise these pointers to avoid this. Signed-off-by: Lorenz Bauer <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
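The pattern in question, illustrated with a placeholder object file name:

  #include <bpf/libbpf.h>

  void test_example(void)
  {
      struct bpf_object *obj = NULL;   /* the fix: start from NULL */
      int err, prog_fd;

      err = bpf_prog_load("./example_prog.o", BPF_PROG_TYPE_XDP, &obj, &prog_fd);
      if (err)
          goto out;                    /* obj was never assigned by libbpf */

      /* ... actual test checks ... */
  out:
      bpf_object__close(obj);          /* NULL-safe once obj is initialized */
  }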
2019-04-26 | selftests: bpf: test writable buffers in raw tps | Matt Mullins | 2 | -0/+122
This tests that:
  * a BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE cannot be attached if it uses either:
    * a variable offset to the tracepoint buffer, or
    * an offset beyond the size of the tracepoint buffer
  * a tracer can modify the buffer provided when attached to a writable tracepoint in bpf_prog_test_run

Signed-off-by: Matt Mullins <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
2019-04-23 | bpf/flow_dissector: don't adjust nhoff by ETH_HLEN in BPF_PROG_TEST_RUN | Stanislav Fomichev | 1 | -14/+9
Now that we use skb-less flow dissector let's return true nhoff and thoff. We used to adjust them by ETH_HLEN because that's how it was done in the skb case. For VLAN tests that looks confusing: nhoff is pointing to vlan parts :-\ Warning, this is an API change for BPF_PROG_TEST_RUN! Feel free to drop if you think that it's too late at this point to fix it. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2019-04-23 | selftests/bpf: run flow dissector tests in skb-less mode | Stanislav Fomichev | 1 | -2/+100
Export last_dissection map from flow dissector and use a known place in tun driver to trigger BPF flow dissection. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2019-04-23 | selftests/bpf: add flow dissector bpf_skb_load_bytes helper test | Stanislav Fomichev | 1 | -0/+48
When flow dissector is called without skb, we want to make sure bpf_skb_load_bytes invocations return error. Add small test which tries to read single byte from a packet. bpf_skb_load_bytes should always fail under BPF_PROG_TEST_RUN because it was converted to the skb-less mode. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
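The BPF-side check is along these lines (program name, includes and return handling are illustrative, not quoted from the test):

  #include <linux/bpf.h>
  #include "bpf_helpers.h"   /* selftests' helper header of that era, for SEC() etc. */

  /* Sketch of the flow dissector program: in skb-less BPF_PROG_TEST_RUN
   * mode the helper is expected to return an error.
   */
  SEC("flow_dissector")
  int flow_load_bytes(struct __sk_buff *skb)
  {
      char buf;

      if (bpf_skb_load_bytes(skb, 0, &buf, sizeof(buf)) < 0)
          return BPF_DROP;   /* expected outcome without a real skb */

      return BPF_OK;
  }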
2019-04-17 | selftests/bpf: fix a compilation error | Yonghong Song | 1 | -2/+2
I hit the following compilation error with gcc 4.8.5:

  prog_tests/flow_dissector.c: In function ‘test_flow_dissector’:
  prog_tests/flow_dissector.c:155:2: error: ‘for’ loop initial declarations are only allowed in C99 mode
    for (int i = 0; i < ARRAY_SIZE(tests); i++) {
    ^
  prog_tests/flow_dissector.c:155:2: note: use option -std=c99 or -std=gnu99 to compile your code

Let us fix the issue by avoiding this particular C99 feature.

Fixes: a5cb33464e53 ("selftests/bpf: make flow dissector tests more extensible")
Signed-off-by: Yonghong Song <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
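The fix amounts to hoisting the loop variable out of the for statement:

  int i;   /* gcc 4.8.5 defaults to a pre-C99 dialect */

  for (i = 0; i < ARRAY_SIZE(tests); i++) {
      /* run test case i */
  }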
2019-04-16 | selftests/bpf: make flow dissector tests more extensible | Stanislav Fomichev | 1 | -81/+116
Rewrite selftest to iterate over an array with input packet and expected flow_keys. This should make it easier to extend this test with additional cases without too much boilerplate. Signed-off-by: Stanislav Fomichev <[email protected]> Acked-by: Song Liu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2019-04-11 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | David S. Miller | 5 | -6/+301
Daniel Borkmann says:

====================
pull-request: bpf-next 2019-04-12

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Improve BPF verifier scalability for large programs through two optimizations: i) remove verifier states that are not useful in pruning, ii) stop walking parentage chain once first LIVE_READ is seen. Combined gives approx 20x speedup. Increase limits for accepting large programs under root, and add various stress tests, from Alexei.

2) Implement global data support in BPF. This enables static global variables for .data, .rodata and .bss sections to be properly handled which allows for more natural program development. This also opens up the possibility to optimize program workflow by compiling ELFs only once and later only rewriting section data before reload, from Daniel and with test cases and libbpf refactoring from Joe.

3) Add config option to generate BTF type info for vmlinux as part of the kernel build process. DWARF debug info is converted via pahole to BTF. Latter relies on libbpf and makes use of BTF deduplication algorithm which results in 100x savings compared to DWARF data. Resulting .BTF section is typically about 2MB in size, from Andrii.

4) Add BPF verifier support for stack access with variable offset from helpers and add various test cases along with it, from Andrey.

5) Extend bpf_skb_adjust_room() growth BPF helper to mark inner MAC header so that L2 encapsulation can be used for tc tunnels, from Alan.

6) Add support for input __sk_buff context in BPF_PROG_TEST_RUN so that users can define a subset of allowed __sk_buff fields that get fed into the test program, from Stanislav.

7) Add bpf fs multi-dimensional array tests for BTF test suite and fix up various UBSAN warnings in bpftool, from Yonghong.

8) Generate a pkg-config file for libbpf, from Luca.

9) Dump program's BTF id in bpftool, from Prashant.

10) libbpf fix to use smaller BPF log buffer size for AF_XDP's XDP program, from Magnus.

11) kallsyms related fixes for the case when symbols are not present in BPF selftests and samples, from Daniel.
====================

Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | selftests: bpf: add selftest for __sk_buff context in BPF_PROG_TEST_RUN | Stanislav Fomichev | 1 | -0/+89
Simple test that sets cb to {1,2,3,4,5} and priority to 6, runs bpf program that fails if cb is not what we expect and increments cb[i] and priority. When the test finishes, we check that cb is now {2,3,4,5,6} and priority is 7.

We also test the sanity checks:
  * ctx_in is provided, but ctx_size_in is zero (same for ctx_out/ctx_size_out)
  * unexpected non-zero fields in __sk_buff return EINVAL

Signed-off-by: Stanislav Fomichev <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
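Roughly how the userspace side of such a run looks with the then-new ctx fields of bpf_prog_test_run_attr (the wrapper function, packet payload and exact values are illustrative):

  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  static int run_with_skb_ctx(int prog_fd)
  {
      char pkt[64] = {};                       /* placeholder packet payload */
      struct __sk_buff skb_in = { .priority = 6, .cb = { 1, 2, 3, 4, 5 } };
      struct __sk_buff skb_out = {};
      struct bpf_prog_test_run_attr tattr = {
          .prog_fd       = prog_fd,
          .data_in       = pkt,
          .data_size_in  = sizeof(pkt),
          .ctx_in        = &skb_in,            /* new: input __sk_buff context */
          .ctx_size_in   = sizeof(skb_in),
          .ctx_out       = &skb_out,           /* new: context after the run */
          .ctx_size_out  = sizeof(skb_out),
      };

      /* on success skb_out.cb should be {2,3,4,5,6} and skb_out.priority 7 */
      return bpf_prog_test_run_xattr(&tattr);
  }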
2019-04-09 | bpf, selftest: test global data/bss/rodata sections | Joe Stringer | 1 | -0/+157
Add tests for libbpf relocation of static variable references into the .data, .rodata and .bss sections of the ELF, also add read-only test for .rodata. All passing:

  # ./test_progs
  [...]
  test_global_data:PASS:load program 0 nsec
  test_global_data:PASS:pass global data run 925 nsec
  test_global_data_number:PASS:relocate .bss reference 925 nsec
  test_global_data_number:PASS:relocate .data reference 925 nsec
  test_global_data_number:PASS:relocate .rodata reference 925 nsec
  test_global_data_number:PASS:relocate .bss reference 925 nsec
  test_global_data_number:PASS:relocate .data reference 925 nsec
  test_global_data_number:PASS:relocate .rodata reference 925 nsec
  test_global_data_number:PASS:relocate .bss reference 925 nsec
  test_global_data_number:PASS:relocate .bss reference 925 nsec
  test_global_data_number:PASS:relocate .rodata reference 925 nsec
  test_global_data_number:PASS:relocate .rodata reference 925 nsec
  test_global_data_number:PASS:relocate .rodata reference 925 nsec
  test_global_data_string:PASS:relocate .rodata reference 925 nsec
  test_global_data_string:PASS:relocate .data reference 925 nsec
  test_global_data_string:PASS:relocate .bss reference 925 nsec
  test_global_data_string:PASS:relocate .data reference 925 nsec
  test_global_data_string:PASS:relocate .bss reference 925 nsec
  test_global_data_struct:PASS:relocate .rodata reference 925 nsec
  test_global_data_struct:PASS:relocate .bss reference 925 nsec
  test_global_data_struct:PASS:relocate .rodata reference 925 nsec
  test_global_data_struct:PASS:relocate .data reference 925 nsec
  test_global_data_rdonly:PASS:test .rodata read-only map 925 nsec
  [...]
  Summary: 229 PASSED, 0 FAILED

Note map helper signatures have been changed to avoid warnings when passing in const data.

Joint work with Daniel Borkmann.

Signed-off-by: Joe Stringer <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
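Illustrative of what the program side relies on once these relocations work (variable names, values, section annotation and includes are made up for this sketch):

  #include <linux/bpf.h>
  #include <linux/types.h>
  #include "bpf_helpers.h"   /* for SEC(); the selftests' helper header of that era */

  /* libbpf turns references to these into lookups in the internal
   * .data / .rodata / .bss maps it creates at load time.
   */
  static __u64 number = 42;                    /* .data   */
  static const char string[] = "global data";  /* .rodata */
  static __u64 scratch;                        /* .bss    */

  SEC("classifier")
  int use_global_data(struct __sk_buff *skb)
  {
      scratch = number + sizeof(string);
      return 0;
  }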
2019-04-05 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 1 | -0/+68
Minor comment merge conflict in mlx5. Staging driver has a fixup due to the skb->xmit_more changes in 'net-next', but was removed in 'net'. Signed-off-by: David S. Miller <[email protected]>
2019-04-04 | samples, selftests/bpf: add NULL check for ksym_search | Daniel T. Lee | 1 | -2/+2
Since ksym_search was extended with verification logic for symbol existence, it can return NULL when the kernel symbols are not loaded. This commit adds NULL checks after each ksym_search call. Signed-off-by: Daniel T. Lee <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
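The shape of the added check (the printf text is illustrative; ksym_search and struct ksym come from the selftests' trace_helpers):

  struct ksym *sym = ksym_search(addr);

  if (!sym) {
      printf("ksym not found. Is kallsyms loaded?\n");
      return;
  }
  printf("%s\n", sym->name);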
2019-04-04 | selftests/bpf: add few verifier scale tests | Alexei Starovoitov | 1 | -0/+49
Add 3 basic tests that stress verifier scalability.

test_verif_scale1.c calls a non-inlined jhash() function 90 times on different positions in the packet. This test simulates network packet parsing. The jhash function is ~140 instructions and the main program is ~1200 insns.

test_verif_scale2.c force-inlines the jhash() function 90 times. This program is ~15k instructions long.

test_verif_scale3.c calls the non-inlined jhash() function 90 times as well, but this time jhash has to process 32 bytes from the packet instead of the 14 bytes in tests 1 and 2. The jhash function is ~230 insns and the main program is ~1200 insns.

$ test_progs -s can be used to see verifier stats.

Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
2019-04-03 | selftests/bpf: fix vlan handling in flow dissector program | Stanislav Fomichev | 1 | -0/+68
When we tail call PROG(VLAN) from parse_eth_proto we don't need to peek back to handle the vlan proto because we didn't adjust nhoff/thoff yet. Use flow_keys->n_proto, which we set in parse_eth_proto, instead, and properly increment nhoff as well.

Also, always use skb->protocol and don't look at skb->vlan_present. skb->vlan_present indicates that vlan information is stored out-of-band in skb->vlan_{tci,proto} and the vlan header is already pulled from the skb. That means skb->vlan_present == true is not relevant for the BPF flow dissector.

Add simple test cases with VLAN tagged frames:
  * single vlan for ipv4
  * double vlan for ipv6

Signed-off-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
2019-04-02 | selftests: bpf: fix -Wformat-invalid-specifier for bpf_obj_id.c | Stanislav Fomichev | 1 | -4/+4
Use standard C99 %zu for sizeof, not GCC's custom %Zu: bpf_obj_id.c:76:48: warning: invalid conversion specifier 'Z' Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2019-03-26 | selftests: bpf: don't depend on hardcoded perf sample_freq | Stanislav Fomichev | 1 | -1/+15
When running stacktrace_build_id_nmi, try to query kernel.perf_event_max_sample_rate sysctl and use it as a sample_freq. If there was an error reading sysctl, fallback to 5000. kernel.perf_event_max_sample_rate sysctl can drift and/or can be adjusted by the perf tool, so assuming a fixed number might be problematic on a long running machine. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
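A sketch of that fallback logic (function name is illustrative):

  #include <stdio.h>
  #include <linux/types.h>

  static __u64 read_perf_max_sample_freq(void)
  {
      __u64 sample_freq = 5000;    /* fallback when the sysctl can't be read */
      FILE *f = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r");

      if (!f)
          return sample_freq;
      if (fscanf(f, "%llu", &sample_freq) != 1)
          sample_freq = 5000;
      fclose(f);
      return sample_freq;
  }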
2019-03-12 | selftests/bpf: fix segfault of test_progs when prog loading failed | Yonghong Song | 2 | -2/+2
The test_progs subtests, test_spin_lock() and test_map_lock(), require BTF to be present to run successfully. Currently, when BTF fails to load, test_progs will segfault:

  $ ./test_progs
  ...
  12: (bf) r1 = r8
  13: (85) call bpf_spin_lock#93
  map 'hash_map' has to have BTF in order to use bpf_spin_lock
  libbpf: -- END LOG --
  libbpf: failed to load program 'map_lock_demo'
  libbpf: failed to load object './test_map_lock.o'
  test_map_lock:bpf_prog_load errno 13
  Segmentation fault

The segfault is caused by the uninitialized variable "obj", which is used in bpf_object__close(obj) when the bpf prog fails to load. Initializing the variable "obj" to NULL in both places fixes the problem.

  $ ./test_progs
  ...
  Summary: 219 PASSED, 2 FAILED

Fixes: b4d4556c3266 ("selftests/bpf: add bpf_spin_lock verifier tests")
Fixes: ba72a7b4badb ("selftests/bpf: test for BPF_F_LOCK")
Reported-by: Daniel Borkmann <[email protected]>
Signed-off-by: Yonghong Song <[email protected]>
Acked-by: Song Liu <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
2019-03-07 | selftests: bpf: test_progs: initialize duration in signal_pending test | Stanislav Fomichev | 1 | -1/+1
CHECK macro implicitly uses duration. We call CHECK() a couple of times before duration is initialized from bpf_prog_test_run(). Explicitly set duration to 0 to avoid compiler warnings. Fixes: 740f8a657221 ("selftests/bpf: make sure signal interrupts BPF_PROG_TEST_RUN") Signed-off-by: Stanislav Fomichev <[email protected]> Acked-by: Yonghong Song <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2019-03-02 | selftests: bpf: break up test_progs - misc | Stanislav Fomichev | 9 | -0/+749
Move the rest of prog tests into separate files. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2019-03-02 | selftests: bpf: break up test_progs - spinlock | Stanislav Fomichev | 2 | -0/+104
Move spinlock prog tests into separate files. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2019-03-02 | selftests: bpf: break up test_progs - tracepoint | Stanislav Fomichev | 4 | -0/+431
Move tracepoint prog tests into separate files. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2019-03-02 | selftests: bpf: break up test_progs - stackmap | Stanislav Fomichev | 4 | -0/+477
Move stackmap prog tests into separate files. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2019-03-02 | selftests: bpf: break up test_progs - xdp | Stanislav Fomichev | 3 | -0/+159
Move xdp prog tests into separate files. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2019-03-02 | selftests: bpf: break up test_progs - pkt access | Stanislav Fomichev | 2 | -0/+53
Move pkt access prog tests into separate files. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
2019-03-02 | selftests: bpf: break up test_progs - preparations | Stanislav Fomichev | 1 | -0/+1
Add a new prog_tests directory where tests are supposed to land. Each prog_tests/<filename>.c is expected to have a global function with signature 'void test_<filename>(void)'. The Makefile automatically generates a prog_tests/tests.h file with an entry for each prog_tests file:

  #ifdef DECLARE
  extern void test_<filename>(void);
  ...
  #endif

  #ifdef CALL
  test_<filename>();
  ...
  #endif

prog_tests/tests.h is included in test_progs.c in two places with appropriate defines. This scheme allows us to move each function with a separate patch without breaking anything. Compared to the recent verifier split, each separate file here is a compilation unit and test_progs.[ch] is now used as a place to put some common routines that might be used by multiple tests.

Signed-off-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
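On the consuming side, test_progs.c then pulls the generated header in twice, roughly like this (a sketch; the include path and surrounding code are assumptions):

  /* declarations at file scope */
  #define DECLARE
  #include <prog_tests/tests.h>
  #undef DECLARE

  int main(void)
  {
      /* ... setup ... */

      /* call every test_<filename>() */
  #define CALL
  #include <prog_tests/tests.h>
  #undef CALL

      /* ... print summary ... */
      return 0;
  }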