path: root/tools/testing/selftests/bpf/progs
Age | Commit message | Author | Files | Lines
2020-06-22 | selftests/bpf: Add __ksym extern selftest | Andrii Nakryiko | 1 | -0/+32
Validate libbpf is able to handle weak and strong kernel symbol externs in BPF code correctly. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Hao Luo <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
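A minimal sketch of what such externs look like on the BPF side (the symbol names and program shape are illustrative, not necessarily the selftest's actual code):

  // SPDX-License-Identifier: GPL-2.0
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* strong extern: loading fails if the kernel lacks this symbol */
  extern const void bpf_link_fops __ksym;
  /* weak extern: resolves to NULL when the symbol is missing */
  extern const void nonexistent_symbol __ksym __attribute__((weak));

  __u64 strong_addr, weak_addr;

  SEC("raw_tp/sys_enter")
  int handler(const void *ctx)
  {
          strong_addr = (__u64)&bpf_link_fops;
          weak_addr = (__u64)&nonexistent_symbol; /* 0 if unresolved */
          return 0;
  }

  char _license[] SEC("license") = "GPL";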
2020-06-22 | selftests/bpf: Test access to bpf map pointer | Andrey Ignatov | 1 | -0/+686
Add selftests to test access to map pointers from a bpf program for all map types except struct_ops (that one would need additional work). The verifier test focuses mostly on scenarios that must be rejected. The prog_tests test focuses on accessing multiple fields, both scalar and a nested struct, from a bpf program and verifies that those fields have the expected values. Signed-off-by: Andrey Ignatov <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: John Fastabend <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/139a6a17f8016491e39347849b951525335c6eb4.1592600985.git.rdna@fb.com
2020-06-17 | selftests/bpf: Make sure optvals > PAGE_SIZE are bypassed | Stanislav Fomichev | 1 | -2/+52
We rely on the fact that we can pass > sizeof(int) optvals to the SOL_IP+IP_FREEBIND option (the kernel will take the first 4 bytes). In the BPF program we check that we can only touch PAGE_SIZE bytes, while the real optlen is PAGE_SIZE * 2. In both cases, we override the value to some predefined one and trim the optlen. Also, let's modify the existing IP_TOS usecase to test the optlen=0 case, where the BPF program just passes the data through as is. Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
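A hedged sketch of the BPF side of this pattern (includes elided; structure per the description above):

  SEC("cgroup/setsockopt")
  int _setsockopt(struct bpf_sockopt *ctx)
  {
          __u8 *optval_end = ctx->optval_end;
          __u8 *optval = ctx->optval;

          if (ctx->level == SOL_IP && ctx->optname == IP_FREEBIND) {
                  /* the kernel exposes at most PAGE_SIZE of optval to
                   * BPF; with a real optlen of PAGE_SIZE * 2 the rest
                   * is bypassed, so bound all accesses accordingly */
                  if (optval + 1 > optval_end)
                          return 0; /* bounds check for the verifier */
                  optval[0] = 0x55; /* override with a predefined value */
                  ctx->optlen = 1;  /* trim the optlen */
          }
          return 1;
  }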
2020-06-12 | libbpf: Support pre-initializing .bss global variables | Andrii Nakryiko | 1 | -4/+15
Remove invalid assumption in libbpf that .bss map doesn't have to be updated in kernel. With addition of skeleton and memory-mapped initialization image, .bss doesn't have to be all zeroes when BPF map is created, because user-code might have initialized those variables from user-space. Fixes: eba9c5f498a1 ("libbpf: Refactor global data map initialization") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-06-11 | selftests/bpf: Add cgroup_skb/egress test for load_bytes_relative | YiFei Zhu | 1 | -0/+48
When cgroup_skb/egress triggers, the MAC header is not set. Add a test that asserts that reading the MAC header returns -EFAULT while reading the NET header succeeds. The test result from within the eBPF program is stored in a 1-element array map that the userspace then reads and asserts on. Another assertion is added that reading from a large offset, past the end of the packet, returns -EFAULT. Signed-off-by: YiFei Zhu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Stanislav Fomichev <[email protected]> Link: https://lore.kernel.org/bpf/9028ccbea4385a620e69c0a104f469ffd655c01e.1591812755.git.zhuyifei@google.com
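A sketch of the core assertions (for brevity the result goes into a global here, instead of the 1-element array map the test uses):

  int test_result; /* read by userspace */

  SEC("cgroup_skb/egress")
  int load_bytes_relative(struct __sk_buff *skb)
  {
          __u8 buf[14]; /* sizeof(struct ethhdr) */

          /* MAC header is not set on cgroup_skb/egress: expect -EFAULT */
          if (bpf_skb_load_bytes_relative(skb, 0, buf, sizeof(buf),
                                          BPF_HDR_START_MAC) != -EFAULT)
                  return 1;
          /* NET header is set: expect success */
          if (bpf_skb_load_bytes_relative(skb, 0, buf, sizeof(buf),
                                          BPF_HDR_START_NET))
                  return 1;
          test_result = 1;
          return 1;
  }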
2020-06-09 | bpf: Selftests and tools use struct bpf_devmap_val from uapi | Jesper Dangaard Brouer | 2 | -3/+2
Sync tools uapi bpf.h header file and update selftests that use struct bpf_devmap_val. Signed-off-by: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/159170951195.2102545.1833108712124273987.stgit@firesoul
2020-06-02 | bpf, selftests: Adapt cls_redirect to call csum_level helper | Daniel Borkmann | 1 | -3/+6
Adapt bpf_skb_adjust_room() to pass in BPF_F_ADJ_ROOM_NO_CSUM_RESET flag and use the new bpf_csum_level() helper to inc/dec the checksum level by one after the encap/decap. Signed-off-by: Daniel Borkmann <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Lorenz Bauer <[email protected]> Link: https://lore.kernel.org/bpf/e7458f10e3f3d795307cbc5ad870112671d9c6f7.1591108731.git.daniel@iogearbox.net
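Roughly the shape of the change on the decap path (a sketch; encap_len and the adjust mode are illustrative):

  static int decap(struct __sk_buff *skb, __s32 encap_len)
  {
          /* shrink the room without zeroing the checksum level... */
          if (bpf_skb_adjust_room(skb, -encap_len, BPF_ADJ_ROOM_MAC,
                                  BPF_F_ADJ_ROOM_NO_CSUM_RESET))
                  return TC_ACT_SHOT;
          /* ...then account for one less level of encapsulation */
          if (bpf_csum_level(skb, BPF_CSUM_LEVEL_DEC))
                  return TC_ACT_SHOT;
          return TC_ACT_OK;
  }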
2020-06-01 | selftests/bpf: Convert test_flow_dissector to use BPF skeleton | Jakub Sitnicki | 1 | -10/+10
Switch flow dissector test setup from custom BPF object loader to BPF skeleton to save boilerplate and prepare for testing higher-level API for attaching flow dissector with bpf_link. To avoid depending on program order in the BPF object when populating the flow dissector PROG_ARRAY map, change the program section names to contain the program index into the map. This follows the example set by tailcall tests. Signed-off-by: Jakub Sitnicki <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-06-01 | selftests/bpf: Add test for SO_BINDTODEVICE opt of bpf_setsockopt | Ferenc Fejes | 1 | -0/+33
This test is intended to verify that the SO_BINDTODEVICE option works in bpf_setsockopt. Because we are already at the SOL_SOCKET level in this connect bpf prog, it is safe to verify the sanity at the beginning of connect_v4_prog by calling the bind_to_device test helper. The testing environment is already created by the test_sock_addr.sh script, so this test assumes that two netdevices already exist in the system: a veth pair with names test_sock_addr1 and test_sock_addr2. The test first tries to bind the socket to those devices. Then the test assumes there is no netdevice named "nonexistent_dev", so bpf_setsockopt will return an ENODEV error. At the end the test removes the device binding from the socket by binding it to an empty name. Signed-off-by: Ferenc Fejes <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/3f055b8e45c65639c5c73d0b4b6c589e60b86f15.1590871065.git.fejes@inf.elte.hu
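The helper boils down to a sequence of bpf_setsockopt() calls along these lines (a sketch of the behavior described above):

  static int bind_to_device(struct bpf_sock_addr *ctx)
  {
          char veth1[] = "test_sock_addr1";
          char veth2[] = "test_sock_addr2";
          char missing[] = "nonexistent_dev";
          char none[] = "";

          if (bpf_setsockopt(ctx, SOL_SOCKET, SO_BINDTODEVICE,
                             &veth1, sizeof(veth1)))
                  return 1;
          if (bpf_setsockopt(ctx, SOL_SOCKET, SO_BINDTODEVICE,
                             &veth2, sizeof(veth2)))
                  return 1;
          /* no such device: the helper must fail (ENODEV) */
          if (bpf_setsockopt(ctx, SOL_SOCKET, SO_BINDTODEVICE,
                             &missing, sizeof(missing)) == 0)
                  return 1;
          /* an empty name removes the binding again */
          if (bpf_setsockopt(ctx, SOL_SOCKET, SO_BINDTODEVICE,
                             &none, sizeof(none)))
                  return 1;
          return 0;
  }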
2020-06-01 | bpf, selftests: Add test for ktls with skb bpf ingress policy | John Fastabend | 1 | -1/+45
This adds a test for bpf ingress policy. To ensure data writes happen as expected with extra TLS headers, we run these tests with data verification enabled by default. This verifies that received packets have "PASS" stamped into the front of the payload. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/159079363965.5745.3390806911628980210.stgit@john-Precision-5820-Tower Signed-off-by: Alexei Starovoitov <[email protected]>
2020-06-01 | selftest: Add tests for XDP programs in devmap entries | David Ahern | 2 | -0/+66
Add tests to verify the ability to add an XDP program to an entry in a DEVMAP. Add negative tests to show that DEVMAP programs cannot be attached to devices as a normal XDP program, and that accesses to egress_ifindex require the BPF_XDP_DEVMAP attach type. Signed-off-by: David Ahern <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Toke Høiland-Jørgensen <[email protected]> Link: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2020-06-01 | bpf: Add BPF ringbuf and perf buffer benchmarks | Andrii Nakryiko | 2 | -0/+93
Extend the bench framework with the ability to have a benchmark-provided child argument parser for custom benchmark-specific parameters. This makes the generic bench code modular and independent from any specific benchmark. Also implement a set of benchmarks for the new BPF ring buffer and the existing perf buffer. 4 benchmarks were implemented, 2 variations for each of BPF ringbuf and perfbuf:

- rb-libbpf utilizes the stock libbpf ring_buffer manager for reading data;
- rb-custom implements custom ring buffer setup and reading code, to eliminate overheads inherent in generic libbpf code due to callback functions and the need to update the consumer position after each consumed record instead of batching updates (due to the pessimistic assumption that the user callback might take a long time and thus could unnecessarily hold ring buffer space for too long);
- pb-libbpf uses stock libbpf perf_buffer code with all the default settings, though it uses the higher-performance raw event callback to minimize unnecessary overhead;
- pb-custom implements its own custom consumer code to minimize any possible overhead of the generic libbpf implementation and indirect function calls.

All of the tests support the default mode, in which no data notification is skipped, as well as sampled mode (with the --rb-sampled flag), which triggers epoll notifications less frequently and reduces overhead. As will be shown, this mode is especially critical for the perf buffer, which suffers from the high overhead of wakeups in the kernel. Otherwise, all benchmarks generate a batch of records the same way, using a fentry/sys_getpgid BPF program that pushes a bunch of records in a tight loop and records the number of successful and dropped samples. Each record is a small 8-byte integer, to minimize the effect of memory copying with bpf_perf_event_output() and bpf_ringbuf_output(). Benchmarks that have only one producer implement an optional back-to-back mode, in which record production and consumption alternate on the same CPU. This is the highest-throughput happy case, showing the ultimate performance achievable with either BPF ringbuf or perfbuf. All the scenarios below are implemented in the benchs/run_bench_ringbufs.sh script. Tests were performed on a 28-core/56-thread Intel Xeon E5-2680 v4 @ 2.40GHz CPU.

Single-producer, parallel producer
==================================
rb-libbpf            12.054 ± 0.320M/s (drops 0.000 ± 0.000M/s)
rb-custom             8.158 ± 0.118M/s (drops 0.001 ± 0.003M/s)
pb-libbpf             0.931 ± 0.007M/s (drops 0.000 ± 0.000M/s)
pb-custom             0.965 ± 0.003M/s (drops 0.000 ± 0.000M/s)

Single-producer, parallel producer, sampled notification
========================================================
rb-libbpf            11.563 ± 0.067M/s (drops 0.000 ± 0.000M/s)
rb-custom            15.895 ± 0.076M/s (drops 0.000 ± 0.000M/s)
pb-libbpf             9.889 ± 0.032M/s (drops 0.000 ± 0.000M/s)
pb-custom             9.866 ± 0.028M/s (drops 0.000 ± 0.000M/s)

Single producer on one CPU, consumer on another one, both running at full speed. Curiously, rb-libbpf has higher throughput than the objectively faster (due to a more lightweight consumer code path) rb-custom. It appears that the faster consumer causes the kernel to send notifications more frequently, because the consumer is caught up more often. Performance of the perf buffer suffers from the default "no sampling" policy and the huge overhead it causes. In sampled mode, rb-custom wins very significantly by eliminating too-frequent in-kernel wakeups; the gain appears to be more than 2x.
Perf buffer achieves even more impressive wins, compared to stock perfbuf settings, with a 10x improvement in throughput at a 1:500 sampling rate. The trade-off is that with sampling, the application might not get the next X events until the X+1st arrives, which is not always acceptable. With a steady influx of events, though, this shouldn't be a problem. Overall, single-producer performance of the ring buffer seems to be better in both sampled and non-sampled modes, and it especially beats the perf buffer without sampling, thanks to the ring buffer's adaptive notification approach.

Single-producer, back-to-back mode
==================================
rb-libbpf            15.507 ± 0.247M/s (drops 0.000 ± 0.000M/s)
rb-libbpf-sampled    14.692 ± 0.195M/s (drops 0.000 ± 0.000M/s)
rb-custom            21.449 ± 0.157M/s (drops 0.000 ± 0.000M/s)
rb-custom-sampled    20.024 ± 0.386M/s (drops 0.000 ± 0.000M/s)
pb-libbpf             1.601 ± 0.015M/s (drops 0.000 ± 0.000M/s)
pb-libbpf-sampled     8.545 ± 0.064M/s (drops 0.000 ± 0.000M/s)
pb-custom             1.607 ± 0.022M/s (drops 0.000 ± 0.000M/s)
pb-custom-sampled     8.988 ± 0.144M/s (drops 0.000 ± 0.000M/s)

Here we test back-to-back mode, which is arguably the best-case scenario for both BPF ringbuf and perfbuf, because there is no contention and, for ringbuf, also no excessive notification, because the consumer appears to be behind after the first record. For ringbuf, the custom consumer code clearly wins, with 21.5 vs 16 million records per second exchanged between producer and consumer. Sampled mode actually hurts a bit, due to slightly slower producer logic (it needs to fetch the amount of data available to decide whether to skip or force a notification). Perfbuf with wakeup sampling gets a 5.5x throughput increase compared to the no-sampling version. There also doesn't seem to be noticeable overhead from the generic libbpf handling code.

Perfbuf back-to-back, effect of sample rate
===========================================
pb-sampled-1          1.035 ± 0.012M/s (drops 0.000 ± 0.000M/s)
pb-sampled-5          3.476 ± 0.087M/s (drops 0.000 ± 0.000M/s)
pb-sampled-10         5.094 ± 0.136M/s (drops 0.000 ± 0.000M/s)
pb-sampled-25         7.118 ± 0.153M/s (drops 0.000 ± 0.000M/s)
pb-sampled-50         8.169 ± 0.156M/s (drops 0.000 ± 0.000M/s)
pb-sampled-100        8.887 ± 0.136M/s (drops 0.000 ± 0.000M/s)
pb-sampled-250        9.180 ± 0.209M/s (drops 0.000 ± 0.000M/s)
pb-sampled-500        9.353 ± 0.281M/s (drops 0.000 ± 0.000M/s)
pb-sampled-1000       9.411 ± 0.217M/s (drops 0.000 ± 0.000M/s)
pb-sampled-2000       9.464 ± 0.167M/s (drops 0.000 ± 0.000M/s)
pb-sampled-3000       9.575 ± 0.273M/s (drops 0.000 ± 0.000M/s)

This benchmark shows the effect of event sampling for perfbuf, using back-to-back mode for the highest throughput. Notifying only on every 5th record gives a 3.5x speed-up. 250-500 appears to be the point of diminishing returns, with an almost 9x speed-up. Most benchmarks use 500 as the default sampling for pb-raw and pb-custom.
Ringbuf back-to-back, effect of sample rate
===========================================
rb-sampled-1          1.106 ± 0.010M/s (drops 0.000 ± 0.000M/s)
rb-sampled-5          4.746 ± 0.149M/s (drops 0.000 ± 0.000M/s)
rb-sampled-10         7.706 ± 0.164M/s (drops 0.000 ± 0.000M/s)
rb-sampled-25        12.893 ± 0.273M/s (drops 0.000 ± 0.000M/s)
rb-sampled-50        15.961 ± 0.361M/s (drops 0.000 ± 0.000M/s)
rb-sampled-100       18.203 ± 0.445M/s (drops 0.000 ± 0.000M/s)
rb-sampled-250       19.962 ± 0.786M/s (drops 0.000 ± 0.000M/s)
rb-sampled-500       20.881 ± 0.551M/s (drops 0.000 ± 0.000M/s)
rb-sampled-1000      21.317 ± 0.532M/s (drops 0.000 ± 0.000M/s)
rb-sampled-2000      21.331 ± 0.535M/s (drops 0.000 ± 0.000M/s)
rb-sampled-3000      21.688 ± 0.392M/s (drops 0.000 ± 0.000M/s)

The analogous benchmark for the ring buffer also shows a great advantage (in terms of throughput) of skipping notifications: skipping every 5th one gives a 4x boost. Also similar to the perfbuf case, 250-500 seems to be the point of diminishing returns, giving roughly 20x better results. Keep in mind that for this test notifications are controlled manually with BPF_RB_NO_WAKEUP and BPF_RB_FORCE_WAKEUP. As can be seen from the previous benchmarks, adaptive notification based on the consumer's position provides the same (or, due to a simpler load generator on the BPF side, even slightly better) benefits in the favorable back-to-back scenario. An overzealous and fast consumer that is almost always caught up will make throughput numbers smaller; that's the case when manual notification control might prove extremely beneficial.

Ringbuf back-to-back, reserve+commit vs output
==============================================
reserve              22.819 ± 0.503M/s (drops 0.000 ± 0.000M/s)
output               18.906 ± 0.433M/s (drops 0.000 ± 0.000M/s)

Ringbuf sampled, reserve+commit vs output
=========================================
reserve-sampled      15.350 ± 0.132M/s (drops 0.000 ± 0.000M/s)
output-sampled       14.195 ± 0.144M/s (drops 0.000 ± 0.000M/s)

BPF ringbuf supports two sets of APIs with different usability and performance trade-offs: bpf_ringbuf_reserve()+bpf_ringbuf_submit() vs bpf_ringbuf_output(). This benchmark clearly shows the superiority of the reserve+submit approach, despite the small 8-byte record size.

Single-producer, consumer/producer competing on the same CPU, low batch count
=============================================================================
rb-libbpf             3.045 ± 0.020M/s (drops 3.536 ± 0.148M/s)
rb-custom             3.055 ± 0.022M/s (drops 3.893 ± 0.066M/s)
pb-libbpf             1.393 ± 0.024M/s (drops 0.000 ± 0.000M/s)
pb-custom             1.407 ± 0.016M/s (drops 0.000 ± 0.000M/s)

This benchmark shows one of the worst-case scenarios, in which producer and consumer do not coordinate *and* fight for the same CPU. No batch count or sampling settings were able to eliminate drops for the ring buffer; the producer is simply too fast for the consumer to keep up. Still, both ringbuf and perfbuf are able to pass through quite a lot of messages, which is more than enough for many applications.
Ringbuf, multi-producer contention
==================================
rb-libbpf nr_prod 1  10.916 ± 0.399M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 2   4.931 ± 0.030M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 3   4.880 ± 0.006M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 4   3.926 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 8   4.011 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 12  3.967 ± 0.016M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 16  2.604 ± 0.030M/s (drops 0.001 ± 0.002M/s)
rb-libbpf nr_prod 20  2.233 ± 0.003M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 24  2.085 ± 0.015M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 28  2.055 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 32  1.962 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 36  2.089 ± 0.005M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 40  2.118 ± 0.006M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 44  2.105 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 48  2.120 ± 0.058M/s (drops 0.000 ± 0.001M/s)
rb-libbpf nr_prod 52  2.074 ± 0.024M/s (drops 0.007 ± 0.014M/s)

Ringbuf uses a very short-duration spinlock during the reservation phase, to check a few invariants, increment the producer count, and set the record header. This is the biggest point of contention for the ringbuf implementation. This benchmark evaluates the effect of multiple competing writers on the overall throughput of a single shared ring buffer. Overall throughput drops almost 2x when going from a single to two highly-contended producers, gradually dropping with additional competing producers. The performance drop stabilizes at around 20 producers and hovers around 2 million records/s even with 50+ fighting producers, which is a 5x drop compared to the non-contended case. A good spinlock implementation in the kernel helps maintain decent performance here. Note that in the intended real-world scenarios it's not expected to get even close to such high levels of contention. But if contention does become a problem, there is always the option of sharding a few ring buffers across a set of CPUs. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2020-06-01 | selftests/bpf: Add BPF ringbuf selftests | Andrii Nakryiko | 2 | -0/+155
Both the singleton BPF ringbuf and the BPF ringbuf with map-in-map use cases are tested. The reserve+submit/discard and output variants of the API are also validated. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
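A hedged sketch contrasting the two API variants being validated (map size and record layout are illustrative):

  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          __uint(max_entries, 4096);
  } ringbuf SEC(".maps");

  struct event { __u64 val; };

  SEC("fentry/__x64_sys_getpgid")
  int output_variant(void *ctx)
  {
          struct event e = { .val = 42 };

          /* copies the sample from the stack into the ring buffer */
          bpf_ringbuf_output(&ringbuf, &e, sizeof(e), 0);
          return 0;
  }

  SEC("fentry/__x64_sys_getpgid")
  int reserve_variant(void *ctx)
  {
          struct event *e;

          e = bpf_ringbuf_reserve(&ringbuf, sizeof(*e), 0);
          if (!e)
                  return 0;         /* ring buffer is full */
          e->val = 42;              /* written in place, no extra copy */
          bpf_ringbuf_submit(e, 0); /* or bpf_ringbuf_discard(e, 0) */
          return 0;
  }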
2020-06-01 | bpf, selftests: Test probe_* helpers from SCHED_CLS | John Fastabend | 1 | -0/+28
Let's test using the probe* helpers in SCHED_CLS network programs as well, just to be sure these keep working. It's cheap to add the extra test, and it provides a second context to test outside of sk_msg after we generalized the probe* helpers to all networking types. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Yonghong Song <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/159033911685.12355.15951980509828906214.stgit@john-Precision-5820-Tower Signed-off-by: Alexei Starovoitov <[email protected]>
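A sketch of the pattern (program and variable names are illustrative; loading such a program requires CAP_PERFMON or CAP_SYS_ADMIN):

  SEC("classifier")
  int probe_from_cls(struct __sk_buff *skb)
  {
          struct task_struct *task;
          char comm[16] = {};

          /* probe_* and get_current_task now work from SCHED_CLS too */
          task = (struct task_struct *)bpf_get_current_task();
          BPF_CORE_READ_STR_INTO(&comm, task, comm);
          bpf_printk("tc prog ran in task %s\n", comm);
          return 0; /* TC_ACT_OK */
  }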
2020-06-01 | bpf, selftests: Add sk_msg helpers load and attach test | John Fastabend | 1 | -0/+47
The test itself is not particularly useful, but it encodes a common pattern we have: do a sk storage lookup, then, depending on the data there, decide if we need to do more work or alternatively allow the packet to PASS. If we need to do more work, consult task_struct for more information about the running task. Finally, based on this additional information, drop or pass the data. In this case the suspicious check is not very realistic, but it encodes the general pattern and uses the helpers, so we test the workflow. This is a load test to ensure the verifier correctly handles this case. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/159033909665.12355.6166415847337547879.stgit@john-Precision-5820-Tower Signed-off-by: Alexei Starovoitov <[email protected]>
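A hedged sketch of that pattern (map layout and the task-based check are illustrative):

  struct {
          __uint(type, BPF_MAP_TYPE_SK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, __u64);
  } sk_stg_map SEC(".maps");

  SEC("sk_msg")
  int prog_msg_verdict(struct sk_msg_md *msg)
  {
          struct task_struct *task;
          __u64 *sk_stg;

          /* 1) sk storage lookup... */
          sk_stg = bpf_sk_storage_get(&sk_stg_map, msg->sk, 0,
                                      BPF_SK_STORAGE_GET_F_CREATE);
          if (!sk_stg)
                  return SK_DROP;
          /* 2) ...then consult the running task... */
          task = (struct task_struct *)bpf_get_current_task();
          *sk_stg = BPF_CORE_READ(task, pid);
          /* 3) ...and decide the verdict on that basis */
          return SK_PASS;
  }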
2020-05-24 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | David S. Miller | 1 | -0/+8
The MSCC bug fix in 'net' had to be slightly adjusted because the register accesses are done slightly differently in net-next. Signed-off-by: David S. Miller <[email protected]>
2020-05-21 | bpf: Selftests, add printk to test_sk_lookup_kern to encode null ptr check | John Fastabend | 1 | -0/+1
Adding a printk to test_sk_lookup_kern created the reported failure where a pointer type is checked twice for NULL. Let's add it to the progs test test_sk_lookup_kern.c so we test the case from C all the way into the verifier. We already have printks in selftests, so it seems OK to add another one. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/159009170603.6313.1715279795045285176.stgit@john-Precision-5820-Tower
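The added line boils down to this pattern (a sketch; tuple setup elided):

  struct bpf_sock *sk;

  sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv4),
                         BPF_F_CURRENT_NETNS, 0);
  /* the ternary checks sk for NULL once... */
  bpf_printk("sk %d\n", sk ? 1 : 0);
  /* ...and the release path checks it a second time */
  if (sk)
          bpf_sk_release(sk);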
2020-05-20 | bpf: Prevent mmap()'ing read-only maps as writable | Andrii Nakryiko | 1 | -0/+8
As discussed in [0], it's dangerous to allow mapping a BPF map that's meant to be frozen and is read-only on the BPF program side, because that allows user-space to keep a writable view of the page even after it is frozen. This is exacerbated by the BPF verifier making the strong assumption that the contents of such a frozen map will remain unchanged. To prevent this, disallow mapping BPF_F_RDONLY_PROG mmap()'able BPF maps as writable, ever. [0] https://lore.kernel.org/bpf/CAEf4BzYGWYhXdp6BJ7_=9OQPJxQpgug080MMjdSB72i9R+5c6g@mail.gmail.com/ Fixes: fc9702273e2e ("bpf: Add mmap() support for BPF_MAP_TYPE_ARRAY") Suggested-by: Jann Horn <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Reviewed-by: Jann Horn <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
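From userspace the new behavior looks roughly like this (a sketch; map_fd is assumed to refer to an array map created with BPF_F_MMAPABLE | BPF_F_RDONLY_PROG):

  void *ptr;

  /* writable mapping of a prog-read-only map must now fail... */
  ptr = mmap(NULL, map_sz, PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, 0);
  assert(ptr == MAP_FAILED);

  /* ...while a read-only mapping keeps working */
  ptr = mmap(NULL, map_sz, PROT_READ, MAP_SHARED, map_fd, 0);
  assert(ptr != MAP_FAILED);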
2020-05-19 | selftests/bpf: Convert bpf_iter_test_kern{3,4}.c to define own bpf_iter_meta | Andrii Nakryiko | 2 | -0/+30
b9f4c01f3e0b ("selftest/bpf: Make bpf_iter selftest compilable against old vmlinux.h") missed the fact that bpf_iter_test_kern{3,4}.c are not just including bpf_iter_test_kern_common.h and need similar bpf_iter_meta re-definition explicitly. Fixes: b9f4c01f3e0b ("selftest/bpf: Make bpf_iter selftest compilable against old vmlinux.h") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-05-19 | selftest/bpf: Make bpf_iter selftest compilable against old vmlinux.h | Andrii Nakryiko | 6 | -0/+98
It's good to be able to compile the bpf_iter selftests even on systems that don't have the very latest vmlinux.h, e.g., for libbpf tests against older kernels in Travis CI. To that end, re-define bpf_iter_meta and the corresponding bpf_iter context structs in each selftest. To avoid type clashes with vmlinux.h, rename vmlinux.h's definitions to get them out of the way. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Acked-by: Jesper Dangaard Brouer <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
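The trick these selftests use looks like this (a sketch matching the description above):

  /* rename vmlinux.h's definition out of the way... */
  #define bpf_iter_meta bpf_iter_meta___not_used
  #include "vmlinux.h"
  #undef bpf_iter_meta

  /* ...then provide our own, kept relocatable via CO-RE */
  struct bpf_iter_meta {
          struct seq_file *seq;
          __u64 session_id;
          __u64 seq_num;
  } __attribute__((preserve_access_index));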
2020-05-19 | bpf, testing: Add get{peer,sock}name selftests to test_progs | Daniel Borkmann | 2 | -4/+125
Extend the existing connect_force_port test to assert the get{peer,sock}name programs as well. The workflow for e.g. IPv4 is as follows: i) the server binds to a concrete port, ii) the client calls getsockname() on the server fd, which exposes 1.2.3.4:60000 to the client, iii) the client connects to the service address 1.2.3.4:60000; the BPF program binds the socket to a concrete local address (127.0.0.1:22222) and remaps the service address to a concrete backend address (127.0.0.1:60123), iv) the client then calls getsockname() on its own fd to verify the local address (127.0.0.1:22222) and getpeername() on its own fd, which then publishes the service address (1.2.3.4:60000) instead of the actual backend. The same workflow is done for IPv6, just with different address/port tuples.

  # ./test_progs -t connect_force_port
  #14 connect_force_port:OK
  Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Daniel Borkmann <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Acked-by: Andrey Ignatov <[email protected]> Link: https://lore.kernel.org/bpf/3343da6ad08df81af715a95d61a84fb4a960f2bf.1589841594.git.daniel@iogearbox.net
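A sketch of the getpeername4 side of this workflow (addresses as in the description above; error handling elided):

  SEC("cgroup/getpeername4")
  int getpeername_v4_prog(struct bpf_sock_addr *ctx)
  {
          /* publish the service address instead of the real backend */
          if (ctx->user_ip4 == bpf_htonl(0x7f000001) &&   /* 127.0.0.1 */
              ctx->user_port == bpf_htons(60123)) {       /* backend */
                  ctx->user_ip4 = bpf_htonl(0x01020304);  /* 1.2.3.4 */
                  ctx->user_port = bpf_htons(60000);      /* service */
          }
          return 1;
  }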
2020-05-16 | bpf: Selftests, remove prints from sockmap tests | John Fastabend | 1 | -155/+3
The prints in the test_sockmap programs were only useful when we didn't have enough control over the test infrastructure to know from the user program what was being pushed into the kernel side. Now that we have, or shortly will have, better test controls, let's remove the prints. This means we can remove half the programs and clean up the bpf side. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Jakub Sitnicki <[email protected]> Link: https://lore.kernel.org/bpf/158939720756.15176.9806965887313279429.stgit@john-Precision-5820-Tower
2020-05-16 | bpf: Selftests, move sockmap bpf prog header into progs | John Fastabend | 1 | -0/+451
Moves test_sockmap_kern.h into progs directory but does not change code at all. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Jakub Sitnicki <[email protected]> Link: https://lore.kernel.org/bpf/158939718921.15176.5766299102332077086.stgit@john-Precision-5820-Tower
2020-05-15 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | David S. Miller | 1 | -2/+2
Move the bpf verifier trace check into the new switch statement in HEAD. Resolve the overlapping changes in hinic, where bug fixes overlap the addition of VF support. Signed-off-by: David S. Miller <[email protected]>
2020-05-14 | selftests/bpf: Xdp_adjust_tail add grow tail tests | Jesper Dangaard Brouer | 1 | -0/+33
Extend the BPF selftest xdp_adjust_tail with grow-tail tests, which are added as subtests. The first grow test stays in the same form as the original shrink test. The second grow test uses the newer bpf_prog_test_run_xattr() calls and does extra checking of the data contents. Signed-off-by: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/158945350567.97035.9632611946765811876.stgit@firesoul
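The grow case is driven by the same helper as the shrink case, just with a positive offset (a sketch; the offset is illustrative):

  SEC("xdp")
  int _xdp_adjust_tail_grow(struct xdp_md *xdp)
  {
          /* a positive offset grows the frame, bounded by the
           * available tailroom; a negative one shrinks it */
          if (bpf_xdp_adjust_tail(xdp, 8))
                  return XDP_DROP; /* not enough tailroom */
          return XDP_PASS;
  }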
2020-05-14 | selftests/bpf: Adjust BPF selftest for xdp_adjust_tail | Jesper Dangaard Brouer | 1 | -6/+6
The current selftest for the BPF helper xdp_adjust_tail only shrinks the tail. Make it clearer that this is a shrink test case. Signed-off-by: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/158945350058.97035.17280775016196207372.stgit@firesoul
2020-05-14 | selftests/bpf: Test for sk helpers in cgroup skb | Andrey Ignatov | 1 | -0/+97
Test bpf_sk_lookup_tcp, bpf_sk_release, bpf_sk_cgroup_id and bpf_sk_ancestor_cgroup_id helpers from cgroup skb program. The test creates a testing cgroup, starts a TCPv6 server inside the cgroup and creates two client sockets: one inside testing cgroup and one outside. Then it attaches cgroup skb program to the cgroup that checks all TCP segments coming to the server and allows only those coming from the cgroup of the server. If a segment comes from a peer outside of the cgroup, it'll be dropped. Finally the test checks that client from inside testing cgroup can successfully connect to the server, but client outside the cgroup fails to connect by timeout. The main goal of the test is to check newly introduced bpf_sk_{,ancestor_}cgroup_id helpers. It also checks a couple of socket lookup helpers (tcp & release), but lookup helpers were introduced much earlier and covered by other tests. Here it's mostly checked that they can be called from cgroup skb. Signed-off-by: Andrey Ignatov <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/171f4c5d75e8ff4fe1c4e8c1c12288b5240a4549.1589486450.git.rdna@fb.com
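The core of the cgroup skb program looks roughly like this (a sketch; tuple setup elided, and expected_cgroup_id is a global set by userspace):

  __u64 expected_cgroup_id;

  SEC("cgroup_skb/ingress")
  int ingress_filter(struct __sk_buff *skb)
  {
          struct bpf_sock_tuple tuple = {};
          struct bpf_sock *sk;
          __u64 cgid;

          /* fill tuple from the TCP/IPv6 headers... (elided) */
          sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv6),
                                 BPF_F_CURRENT_NETNS, 0);
          if (!sk)
                  return 1;
          cgid = bpf_sk_cgroup_id(sk);
          bpf_sk_release(sk);
          /* allow only segments from the server's own cgroup */
          return cgid == expected_cgroup_id;
  }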
2020-05-14 | selftests/bpf: Enforce returning 0 for fentry/fexit programs | Yonghong Song | 1 | -2/+2
There are a few fentry/fexit programs returning non-0. The tests with these programs will break with the previous patch, which enforced the return-0 rule. Fix them properly. Fixes: ac065870d928 ("selftests/bpf: Add BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros") Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-05-13 | selftest/bpf: Add BPF triggering benchmark | Andrii Nakryiko | 1 | -0/+47
It is sometimes desirable to be able to trigger a BPF program from user-space with minimal overhead. sys_enter would seem to be a good candidate, yet in a lot of cases there will be a lot of noise from syscalls triggered by other processes on the system. So while searching for a low-overhead alternative, I've stumbled upon the getpgid() syscall, which seems to be specific enough to not suffer from accidental syscalls by other apps. This set of benchmarks compares tp, raw_tp w/ filtering by syscall ID, kprobe, fentry, and fmod_ret with returning an error (so that the syscall would not be executed), to determine the lowest-overhead way. Here are the results on my machine (using the benchs/run_bench_trigger.sh script):

  base    : 9.200 ± 0.319M/s
  tp      : 6.690 ± 0.125M/s
  rawtp   : 8.571 ± 0.214M/s
  kprobe  : 6.431 ± 0.048M/s
  fentry  : 8.955 ± 0.241M/s
  fmodret : 8.903 ± 0.135M/s

So it seems like fmodret doesn't give much benefit for such a lightweight syscall. Raw tracepoint is pretty decent despite the additional filtering logic, but it will be called for any other syscall in the system, which rules it out. Fentry, though, seems to add the least amount of overhead and achieves 97.3% of the performance of the baseline no-BPF-attached syscall. Using getpgid() seems preferable to the set_task_comm() approach from test_overhead, as it's about 2.35x faster in baseline performance. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: John Fastabend <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
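The BPF side of each benchmark is deliberately trivial; a sketch of the fentry and fmod_ret variants:

  long hits = 0;

  SEC("fentry/__x64_sys_getpgid")
  int bench_trigger_fentry(void *ctx)
  {
          __sync_add_and_fetch(&hits, 1);
          return 0;
  }

  SEC("fmod_ret/__x64_sys_getpgid")
  int bench_trigger_fmodret(void *ctx)
  {
          __sync_add_and_fetch(&hits, 1);
          return -22; /* -EINVAL: the real syscall is never executed */
  }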
2020-05-13 | selftest/bpf: Fmod_ret prog and implement test_overhead as part of bench | Andrii Nakryiko | 1 | -0/+6
Add an fmod_ret BPF program to the existing test_overhead selftest. Also re-implement the user-space benchmarking part in the benchmark runner to compare results. Results with ./bench are consistently somewhat lower than test_overhead's, but the relative performance of the various types of BPF programs stays consistent (e.g., kretprobe is noticeably slower). This slowdown seems to come from the fact that test_overhead is single-threaded, while the benchmark always spins off at least one thread for the producer. This has been confirmed by hacking up a multi-threaded test_overhead variant and also a single-threaded bench variant. Results are below. The run_bench_rename.sh script from the benchs/ subdirectory was used to produce the results for ./bench.

Single-threaded implementations
===============================

  /* bench: single-threaded, atomics */
  base      : 4.622 ± 0.049M/s
  kprobe    : 3.673 ± 0.052M/s
  kretprobe : 2.625 ± 0.052M/s
  rawtp     : 4.369 ± 0.089M/s
  fentry    : 4.201 ± 0.558M/s
  fexit     : 4.309 ± 0.148M/s
  fmodret   : 4.314 ± 0.203M/s

  /* selftest: single-threaded, no atomics */
  task_rename base        4555K events per sec
  task_rename kprobe      3643K events per sec
  task_rename kretprobe   2506K events per sec
  task_rename raw_tp      4303K events per sec
  task_rename fentry      4307K events per sec
  task_rename fexit       4010K events per sec
  task_rename fmod_ret    3984K events per sec

Multi-threaded implementations
==============================

  /* bench: multi-threaded w/ atomics */
  base      : 3.910 ± 0.023M/s
  kprobe    : 3.048 ± 0.037M/s
  kretprobe : 2.300 ± 0.015M/s
  rawtp     : 3.687 ± 0.034M/s
  fentry    : 3.740 ± 0.087M/s
  fexit     : 3.510 ± 0.009M/s
  fmodret   : 3.485 ± 0.050M/s

  /* selftest: multi-threaded w/ atomics */
  task_rename base        3872K events per sec
  task_rename kprobe      3068K events per sec
  task_rename kretprobe   2350K events per sec
  task_rename raw_tp      3731K events per sec
  task_rename fentry      3639K events per sec
  task_rename fexit       3558K events per sec
  task_rename fmod_ret    3511K events per sec

  /* selftest: multi-threaded, no atomics */
  task_rename base        3945K events per sec
  task_rename kprobe      3298K events per sec
  task_rename kretprobe   2451K events per sec
  task_rename raw_tp      3718K events per sec
  task_rename fentry      3782K events per sec
  task_rename fexit       3543K events per sec
  task_rename fmod_ret    3526K events per sec

Note that the fact that the ./bench benchmark always uses atomic increments for counting, while test_overhead doesn't, doesn't influence the test results all that much. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: John Fastabend <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-05-11 | bpf, libbpf: Replace zero-length array with flexible-array | Gustavo A. R. Silva | 1 | -1/+1
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99:

  struct foo {
          int stuff;
          struct boo array[];
  };

By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kinds of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on. Also, notice that dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1] sizeof(flexible-array-member) triggers a warning because flexible array members have incomplete type[1]. There are some instances of code in which the sizeof operator is being incorrectly/erroneously applied to zero-length arrays and the result is zero. Such instances may be hiding some bugs. So, this work (flexible-array member conversions) will also help to get completely rid of those sorts of issues. This issue was found with the help of Coccinelle. [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://github.com/KSPP/linux/issues/21 [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour") Signed-off-by: Gustavo A. R. Silva <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/20200507185057.GA13981@embeddedor
2020-05-09 | tools/bpf: selftests: Add bpf_iter selftests | Yonghong Song | 5 | -0/+100
The added test includes the following subtests:
- test verifier change for btf_id_or_null
- test load/create_iter/read for ipv6_route/netlink/bpf_map/task/task_file
- test anon bpf iterator
- test anon bpf iterator reading one char at a time
- test file bpf iterator
- test overflow (single bpf program output not overflow)
- test overflow (single bpf program output overflows)
- test bpf prog returning 1

The ipv6_route test exercises the following verifier change:
- access fields in the variable-length array of the structure.

The netlink load test exercises the following verifier change:
- put a btf_id ptr value on the stack, accessible to tracing/iter programs.

The anon bpf iterator also tests link auto-attach through the skeleton.

  $ test_progs -n 2
  #2/1 btf_id_or_null:OK
  #2/2 ipv6_route:OK
  #2/3 netlink:OK
  #2/4 bpf_map:OK
  #2/5 task:OK
  #2/6 task_file:OK
  #2/7 anon:OK
  #2/8 anon-read-one-char:OK
  #2/9 file:OK
  #2/10 overflow:OK
  #2/11 overflow-e2big:OK
  #2/12 prog-ret-1:OK
  #2 bpf_iter:OK
  Summary: 1/12 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-05-09 | tools/bpf: selftests: Add iter progs for bpf_map/task/task_file | Yonghong Song | 3 | -0/+79
The implementation is arbitrary, just to show how the bpf programs can be written for bpf_map/task/task_file. They can be customized for specific needs. For example, for bpf_map, the iterator prints out:

  $ cat /sys/fs/bpf/my_bpf_map
      id   refcnt  usercnt  locked_vm
       3        2        0         20
       6        2        0         20
       9        2        0         20
      12        2        0         20
      13        2        0         20
      16        2        0         20
      19        2        0         20
  %%% END %%%

For task, the iterator prints out:

  $ cat /sys/fs/bpf/my_task
    tgid      gid
       1        1
       2        2
    ....
    1944     1944
    1948     1948
    1949     1949
    1953     1953
  === END ===

For task/file, the iterator prints out:

  $ cat /sys/fs/bpf/my_task_file
    tgid      gid       fd      file
       1        1        0 ffffffff95c97600
       1        1        1 ffffffff95c97600
       1        1        2 ffffffff95c97600
    ....
    1895     1895      255 ffffffff95c8fe00
    1932     1932        0 ffffffff95c8fe00
    1932     1932        1 ffffffff95c8fe00
    1932     1932        2 ffffffff95c8fe00
    1932     1932        3 ffffffff95c185c0

This is able to print out all open files (fd and file->f_op), so the user can compare f_op against a particular kernel file_operations to find what it is. For example, from /proc/kallsyms we can find

  ffffffff95c185c0 r eventfd_fops

so we know tgid 1932 fd 3 is an eventfd file descriptor. Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
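A sketch of what such an iterator program looks like for the task target (BPF_SEQ_PRINTF is a convenience wrapper around the bpf_seq_printf() helper):

  SEC("iter/task")
  int dump_task(struct bpf_iter__task *ctx)
  {
          struct seq_file *seq = ctx->meta->seq;
          struct task_struct *task = ctx->task;

          if (!task)
                  return 0;
          /* print the header exactly once, at the start of iteration */
          if (ctx->meta->seq_num == 0)
                  BPF_SEQ_PRINTF(seq, "    tgid      gid\n");
          BPF_SEQ_PRINTF(seq, "%8d %8d\n", task->tgid, task->pid);
          return 0;
  }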
2020-05-09 | tools/bpf: selftests: Add iterator programs for ipv6_route and netlink | Yonghong Song | 2 | -0/+128
Two bpf programs are added in this patch for the netlink and ipv6_route targets. On my VM, I am able to achieve identical results compared to /proc/net/netlink and /proc/net/ipv6_route.

  $ cat /proc/net/netlink
  sk               Eth Pid        Groups   Rmem Wmem Dump Locks Drops Inode
  000000002c42d58b 0   0          00000000 0    0    0    2     0     7
  00000000a4e8b5e1 0   1          00000551 0    0    0    2     0     18719
  00000000e1b1c195 4   0          00000000 0    0    0    2     0     16422
  000000007e6b29f9 6   0          00000000 0    0    0    2     0     16424
  ....
  00000000159a170d 15  1862       00000002 0    0    0    2     0     1886
  000000009aca4bc9 15  3918224839 00000002 0    0    0    2     0     19076
  00000000d0ab31d2 15  1          00000002 0    0    0    2     0     18683
  000000008398fb08 16  0          00000000 0    0    0    2     0     27

  $ cat /sys/fs/bpf/my_netlink
  sk               Eth Pid        Groups   Rmem Wmem Dump Locks Drops Inode
  000000002c42d58b 0   0          00000000 0    0    0    2     0     7
  00000000a4e8b5e1 0   1          00000551 0    0    0    2     0     18719
  00000000e1b1c195 4   0          00000000 0    0    0    2     0     16422
  000000007e6b29f9 6   0          00000000 0    0    0    2     0     16424
  ....
  00000000159a170d 15  1862       00000002 0    0    0    2     0     1886
  000000009aca4bc9 15  3918224839 00000002 0    0    0    2     0     19076
  00000000d0ab31d2 15  1          00000002 0    0    0    2     0     18683
  000000008398fb08 16  0          00000000 0    0    0    2     0     27

  $ cat /proc/net/ipv6_route
  fe800000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000001 00000000 00000001 eth0
  00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo
  00000000000000000000000000000001 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000003 00000000 80200001 lo
  fe80000000000000c04b03fffe7827ce 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001 eth0
  ff000000000000000000000000000000 08 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000003 00000000 00000001 eth0
  00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo

  $ cat /sys/fs/bpf/my_ipv6_route
  fe800000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000001 00000000 00000001 eth0
  00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo
  00000000000000000000000000000001 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000003 00000000 80200001 lo
  fe80000000000000c04b03fffe7827ce 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001 eth0
  ff000000000000000000000000000000 08 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000003 00000000 00000001 eth0
  00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo

Signed-off-by: Yonghong Song <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-05-09 | bpf: Allow any port in bpf_bind helper | Stanislav Fomichev | 2 | -0/+56
We want to have tighter control on what ports we bind to in the BPF_CGROUP_INET{4,6}_CONNECT hooks, even if it means connect() becomes slightly more expensive. The expensive part comes from the fact that we now need to call inet_csk_get_port(), which verifies that the port is not used and allocates an entry in the hash table for it. Since we can't rely on "snum || !bind_address_no_port" to prevent us from calling the POST_BIND hook anymore, let's add another bind flag to indicate that the call site is a BPF program.

v5:
* fix wrong AF_INET (should be AF_INET6) in the bpf program for v6

v3:
* More bpf_bind documentation refinements (Martin KaFai Lau)
* Add UDP tests as well (Martin KaFai Lau)
* Don't start the thread, just do socket+bind+listen (Martin KaFai Lau)

v2:
* Update documentation (Andrey Ignatov)
* Pass BIND_FORCE_ADDRESS_NO_PORT conditionally (Andrey Ignatov)

Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Andrey Ignatov <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
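On the BPF side this enables patterns like the following sketch (address and port are illustrative):

  SEC("cgroup/connect4")
  int connect_v4_prog(struct bpf_sock_addr *ctx)
  {
          struct sockaddr_in sa = {};

          sa.sin_family = AF_INET;
          sa.sin_addr.s_addr = bpf_htonl(0x7f000001); /* 127.0.0.1 */
          sa.sin_port = bpf_htons(22222); /* a concrete port is now OK */
          if (bpf_bind(ctx, (struct sockaddr *)&sa, sizeof(sa)))
                  return 0; /* reject the connect() */
          return 1;
  }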
2020-05-01 | selftests/bpf: Use reno instead of dctcp | Stanislav Fomichev | 1 | -3/+3
Andrey pointed out that we can use reno instead of dctcp for CC tests and drop CONFIG_TCP_CONG_DCTCP=y requirement. Fixes: beecf11bc218 ("bpf: Bpf_{g,s}etsockopt for struct bpf_sock_addr") Suggested-by: Andrey Ignatov <[email protected]> Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-05-01 | bpf: Bpf_{g,s}etsockopt for struct bpf_sock_addr | Stanislav Fomichev | 1 | -0/+46
Currently, the bpf_getsockopt and bpf_setsockopt helpers operate on the 'struct bpf_sock_ops' context in BPF_PROG_TYPE_SOCK_OPS programs. Let's generalize them and make them available for 'struct bpf_sock_addr'. That way, in the future, we can allow those helpers in more places. As an example, let's expose those 'struct bpf_sock_addr' based helpers to BPF_CGROUP_INET{4,6}_CONNECT hooks. That way we can override CC before the connection is made.

v3:
* Expose custom helpers for the bpf_sock_addr context instead of doing a generic bpf_sock argument (as suggested by Daniel). Even with a try_socket_lock that doesn't sleep we have a problem where the context sk is already locked and the socket lock is non-nestable.

v2:
* s/BPF_PROG_TYPE_CGROUP_SOCKOPT/BPF_PROG_TYPE_SOCK_OPS/

Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Acked-by: John Fastabend <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
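For example, overriding the congestion control before connect() proceeds could look like this sketch (assuming the usual SOL_TCP/TCP_CONGESTION definitions are in scope; "reno" matches the follow-up fix above):

  SEC("cgroup/connect6")
  int set_cc(struct bpf_sock_addr *ctx)
  {
          char cc[] = "reno";

          if (ctx->type != SOCK_STREAM)
                  return 1;
          /* override CC before the connection is made */
          if (bpf_setsockopt(ctx, SOL_TCP, TCP_CONGESTION, cc, sizeof(cc)))
                  return 0;
          return 1;
  }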
2020-05-01 | bpf: Add selftest for BPF_ENABLE_STATS | Song Liu | 1 | -0/+18
Add a test for BPF_ENABLE_STATS, which should enable run_time_ns stats.

  ~/selftests/bpf# ./test_progs -t enable_stats -v
  test_enable_stats:PASS:skel_open_and_load 0 nsec
  test_enable_stats:PASS:get_stats_fd 0 nsec
  test_enable_stats:PASS:attach_raw_tp 0 nsec
  test_enable_stats:PASS:get_prog_info 0 nsec
  test_enable_stats:PASS:check_stats_enabled 0 nsec
  test_enable_stats:PASS:check_run_cnt_valid 0 nsec
  Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Song Liu <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
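From userspace the API is nearly a one-liner; a sketch of the test's flow (prog_fd is assumed to be an attached program's fd):

  int stats_fd = bpf_enable_stats(BPF_STATS_RUN_TIME);
  /* ... trigger the program a few times ... */
  struct bpf_prog_info info = {};
  __u32 len = sizeof(info);

  bpf_obj_get_info_by_fd(prog_fd, &info, &len);
  /* info.run_time_ns and info.run_cnt should now be non-zero */
  close(stats_fd); /* stats stay enabled only while the fd is open */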
2020-04-29 | selftests/bpf: Use SOCKMAP for server sockets in bpf_sk_assign test | Jakub Sitnicki | 1 | -50/+32
Update bpf_sk_assign test to fetch the server socket from SOCKMAP, now that map lookup from BPF in SOCKMAP is enabled. This way the test TC BPF program doesn't need to know what address server socket is bound to. Signed-off-by: Jakub Sitnicki <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: John Fastabend <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
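A sketch of the map definition and the in-program lookup (layout illustrative; error handling trimmed):

  struct {
          __uint(type, BPF_MAP_TYPE_SOCKMAP);
          __uint(max_entries, 1);
          __type(key, __u32);
          __type(value, __u64);
  } server_map SEC(".maps");

  /* inside the TC program: */
  __u32 zero = 0;
  struct bpf_sock *sk;

  sk = bpf_map_lookup_elem(&server_map, &zero);
  if (!sk)
          return TC_ACT_SHOT;
  bpf_sk_assign(skb, sk, 0);
  bpf_sk_release(sk); /* the lookup returns a referenced socket */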
2020-04-28 | libbpf: Add BTF-defined map-in-map support | Andrii Nakryiko | 1 | -0/+76
As discussed at LPC 2019 ([0]), this patch brings (quite belated) support for declarative BTF-defined map-in-map support in libbpf. It allows defining ARRAY_OF_MAPS and HASH_OF_MAPS BPF maps without any user-space initialization code involved. Additionally, it allows initializing an outer map's slots with references to the respective inner maps at load time, also completely declaratively. Despite the weak type system of C, the way BTF-defined map-in-map definitions work makes it actually quite hard to accidentally initialize an outer map with incompatible inner maps. This being C, of course, it's still possible, but even that would be caught at load time and an error returned, with a helpful debug log pointing exactly to the slot that failed to be initialized. As an example, here's a rather advanced HASH_OF_MAPS declaration and initialization example, filling slots #0 and #4 with two inner maps:

  #include <bpf/bpf_helpers.h>

  struct inner_map {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, int);
  } inner_map1 SEC(".maps"),
    inner_map2 SEC(".maps");

  struct outer_hash {
          __uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
          __uint(max_entries, 5);
          __uint(key_size, sizeof(int));
          __array(values, struct inner_map);
  } outer_hash SEC(".maps") = {
          .values = {
                  [0] = &inner_map2,
                  [4] = &inner_map1,
          },
  };

Here's the relevant part of the libbpf debug log showing pretty clearly what's going on with map-in-map initialization:

  libbpf: .maps relo #0: for 6 value 0 rel.r_offset 96 name 260 ('inner_map1')
  libbpf: .maps relo #0: map 'outer_arr' slot [0] points to map 'inner_map1'
  libbpf: .maps relo #1: for 7 value 32 rel.r_offset 112 name 249 ('inner_map2')
  libbpf: .maps relo #1: map 'outer_arr' slot [2] points to map 'inner_map2'
  libbpf: .maps relo #2: for 7 value 32 rel.r_offset 144 name 249 ('inner_map2')
  libbpf: .maps relo #2: map 'outer_hash' slot [0] points to map 'inner_map2'
  libbpf: .maps relo #3: for 6 value 0 rel.r_offset 176 name 260 ('inner_map1')
  libbpf: .maps relo #3: map 'outer_hash' slot [4] points to map 'inner_map1'
  libbpf: map 'inner_map1': created successfully, fd=4
  libbpf: map 'inner_map2': created successfully, fd=5
  libbpf: map 'outer_hash': created successfully, fd=7
  libbpf: map 'outer_hash': slot [0] set to map 'inner_map2' fd=5
  libbpf: map 'outer_hash': slot [4] set to map 'inner_map1' fd=4

Notice from the log above that fd=6 (not logged explicitly) is used for the inner "prototype" map, necessary for the creation of the outer map. It is destroyed immediately after the outer map is created. See also the included selftest with some extra comments explaining extra details of usage. Additionally, similar initialization syntax and libbpf functionality can be used to initialize BPF_PROG_ARRAY with references to BPF sub-programs. This can be done in follow-up patches, if there is demand for it. [0] https://linuxplumbersconf.org/event/4/contributions/448/ Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Toke Høiland-Jørgensen <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-04-28 | selftests/bpf: Test bpf_link's get_next_id, get_fd_by_id, and get_obj_info | Andrii Nakryiko | 1 | -11/+3
Extend bpf_obj_id selftest to verify bpf_link's observability APIs. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
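A sketch of iterating all bpf_links in the system with these observability APIs:

  __u32 id = 0;

  while (!bpf_link_get_next_id(id, &id)) {
          int fd = bpf_link_get_fd_by_id(id);
          struct bpf_link_info info = {};
          __u32 len = sizeof(info);

          if (fd < 0)
                  continue;
          if (!bpf_obj_get_info_by_fd(fd, &info, &len))
                  printf("link id %u type %u\n", info.id, info.type);
          close(fd);
  }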
2020-04-28 | selftests/bpf: fix test_sysctl_prog with alu32 | Alexei Starovoitov | 1 | -1/+1
Similar to commit b7a0d65d80a0 ("bpf, testing: Workaround a verifier failure for test_progs") fix test_sysctl_prog.c as well. Signed-off-by: Alexei Starovoitov <[email protected]>
2020-04-26 | selftests/bpf: Add cls_redirect classifier | Lorenz Bauer | 2 | -0/+1112
cls_redirect is a TC clsact based replacement for the glb-redirect iptables module available at [1]. It enables what GitHub calls "second chance" flows [2], similarly proposed by the Beamer paper [3]. In contrast to glb-redirect, it also supports migrating UDP flows as long as connected sockets are used. cls_redirect is in production at Cloudflare, as part of our own L4 load balancer. We have modified the encapsulation format slightly from glb-redirect: glbgue_chained_routing.private_data_type has been repurposed to form a version field and several flags. Both have been arranged in a way that a private_data_type value of zero matches the current glb-redirect behaviour. This means that cls_redirect will understand packets in glb-redirect format, but not vice versa. The test suite only covers basic features. For example, cls_redirect will correctly forward path MTU discovery packets, but this is not exercised. It is also possible to switch the encapsulation format to GRE on the last hop, which is also not tested. There are two major distinctions from glb-redirect: first, cls_redirect relies on receiving encapsulated packets directly from a router. This is because we don't have access to the neighbour tables from BPF, yet. See forward_to_next_hop for details. Second, cls_redirect performs decapsulation instead of using separate ipip and sit tunnel devices. This avoids issues with the sit tunnel [4] and makes deploying the classifier easier: decapsulated packets appear on the same interface, so existing firewall rules continue to work as expected. The code base started its life on v4.19, so there are most likely still holdovers from old workarounds. In no particular order:

- The function buf_off is required to defeat a clang optimization that leads to the verifier rejecting the program due to pointer arithmetic in the wrong order.
- The function pkt_parse_ipv6 is force-inlined, because it would otherwise be rejected due to returning a pointer to stack memory.
- The functions fill_tuple and classify_tcp contain kludges, because we've run out of function arguments.
- The logic in general is rather nested, due to verifier restrictions. I think this is either because the verifier loses track of constants on the stack, or because it can't track enum-like variables.

1: https://github.com/github/glb-director/tree/master/src/glb-redirect
2: https://github.com/github/glb-director/blob/master/docs/development/second-chance-design.md
3: https://www.usenix.org/conference/nsdi18/presentation/olteanu
4: https://github.com/github/glb-director/issues/64

Signed-off-by: Lorenz Bauer <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-04-24 | selftests/bpf: Fix a couple of broken test_btf cases | Stanislav Fomichev | 3 | -39/+15
Commit 51c39bb1d5d1 ("bpf: Introduce function-by-function verification") introduced a function linkage flag and changed the error message from "vlen != 0" to "Invalid func linkage", which broke some fake BPF programs. Adjust the test accordingly. AFAICT, the programs don't really need any arguments and only look at BTF for maps, so let's drop the args altogether.

Before:

  BTF raw test[103] (func (Non zero vlen)): do_test_raw:3703:FAIL expected err_str:vlen != 0
  magic: 0xeb9f
  version: 1
  flags: 0x0
  hdr_len: 24
  type_off: 0
  type_len: 72
  str_off: 72
  str_len: 10
  btf_total_size: 106
  [1] INT (anon) size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
  [2] INT (anon) size=4 bits_offset=0 nr_bits=32 encoding=(none)
  [3] FUNC_PROTO (anon) return=0 args=(1 a, 2 b)
  [4] FUNC func type_id=3
  Invalid func linkage

  BTF libbpf test[1] (test_btf_haskv.o):
  libbpf: load bpf program failed: Invalid argument
  libbpf: -- BEGIN DUMP LOG ---
  libbpf: Validating test_long_fname_2() func#1...
  Arg#0 type PTR in test_long_fname_2() is not supported yet.
  processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
  libbpf: -- END LOG --
  libbpf: failed to load program 'dummy_tracepoint'
  libbpf: failed to load object 'test_btf_haskv.o'
  do_test_file:4201:FAIL bpf_object__load: -4007

  BTF libbpf test[2] (test_btf_newkv.o):
  libbpf: load bpf program failed: Invalid argument
  libbpf: -- BEGIN DUMP LOG ---
  libbpf: Validating test_long_fname_2() func#1...
  Arg#0 type PTR in test_long_fname_2() is not supported yet.
  processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
  libbpf: -- END LOG --
  libbpf: failed to load program 'dummy_tracepoint'
  libbpf: failed to load object 'test_btf_newkv.o'
  do_test_file:4201:FAIL bpf_object__load: -4007

  BTF libbpf test[3] (test_btf_nokv.o):
  libbpf: load bpf program failed: Invalid argument
  libbpf: -- BEGIN DUMP LOG ---
  libbpf: Validating test_long_fname_2() func#1...
  Arg#0 type PTR in test_long_fname_2() is not supported yet.
  processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
  libbpf: -- END LOG --
  libbpf: failed to load program 'dummy_tracepoint'
  libbpf: failed to load object 'test_btf_nokv.o'
  do_test_file:4201:FAIL bpf_object__load: -4007

Fixes: 51c39bb1d5d1 ("bpf: Introduce function-by-function verification") Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-04-24 | selftests/bpf: Add test for freplace program with expected_attach_type | Toke Høiland-Jørgensen | 2 | -12/+34
This adds a new selftest that tests the ability to attach an freplace program to a program type that relies on the expected_attach_type of the target program to pass verification. Signed-off-by: Toke Høiland-Jørgensen <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2020-04-02 | bpf, lsm: Fix the file_mprotect LSM test. | KP Singh | 1 | -4/+4
The test was previously using an mprotect on heap memory allocated using malloc and was expecting the allocation to always use sbrk(2). This is, however, not always true, and in certain conditions malloc may end up using anonymous mmaps for heap allocations. This means that the following condition that is used in the "lsm/file_mprotect" program is not sufficient to detect all mprotect calls done on heap memory:

  is_heap = (vma->vm_start >= vma->vm_mm->start_brk &&
             vma->vm_end <= vma->vm_mm->brk);

The test is updated to use an mprotect on memory allocated on the stack. While this results in the splitting of the vma, this happens only after the security_file_mprotect hook. So, the condition used in the BPF program holds true. Fixes: 03e54f100d57 ("bpf: lsm: Add selftests for BPF_PROG_TYPE_LSM") Reported-by: Alexei Starovoitov <[email protected]> Signed-off-by: KP Singh <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
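The updated program roughly takes this shape (a sketch; monitored_pid is a global the test sets from userspace):

  SEC("lsm/file_mprotect")
  int BPF_PROG(test_int_hook, struct vm_area_struct *vma,
               unsigned long reqprot, unsigned long prot, int ret)
  {
          int is_stack;

          if (ret != 0)
                  return ret;
          /* the stack vma is only split after this hook runs */
          is_stack = (vma->vm_start <= vma->vm_mm->start_stack &&
                      vma->vm_end >= vma->vm_mm->start_stack);
          if (is_stack &&
              monitored_pid == (bpf_get_current_pid_tgid() >> 32))
                  return -EPERM; /* deny the mprotect */
          return 0;
  }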
2020-03-30 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | David S. Miller | 7 | -2/+319
Signed-off-by: David S. Miller <[email protected]>
2020-03-30 | selftests/bpf: Test FD-based cgroup attachment | Andrii Nakryiko | 1 | -0/+24
Add selftests to exercise FD-based cgroup BPF program attachments and their intermixing with legacy cgroup BPF attachments. Auto-detachment and program replacement (both unconditional and cmpxchg-like) are tested as well. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
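From the test's perspective the FD-based flow is a few libbpf calls (a sketch; skeleton and program names are illustrative):

  struct bpf_link *link;

  link = bpf_program__attach_cgroup(skel->progs.egress, cgroup_fd);
  if (libbpf_get_error(link))
          goto cleanup;
  /* replace the program behind the same link; a cmpxchg-like swap
   * is available via the lower-level bpf_link_update() */
  err = bpf_link__update_program(link, skel->progs.egress_alt);
  bpf_link__destroy(link); /* detaches the program as well */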
2020-03-30 | bpf: Test_progs, add test to catch retval refine error handling | John Fastabend | 1 | -0/+26
Before this series the verifier would clamp the return bounds of bpf_get_stack() to [0, X], and this led the verifier to believe that a JMP_JSLT 0 would be false, so it would prune that path. The result is that anything hidden behind that JSLT would be unverified. Add a test to catch this case by hiding a goto pc-1 behind the check, which will cause an infinite loop if not rejected. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/158560423908.10843.11783152347709008373.stgit@john-Precision-5820-Tower
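The pattern under test, sketched (buffer and size names are illustrative):

  int usize = bpf_get_stack(ctx, raw_data, max_len, BPF_F_USER_STACK);

  /* the verifier must keep the [-ERRNO, max_len] bounds: if it
   * wrongly clamps the return value to [0, max_len], it "proves"
   * this branch dead and never verifies what hides behind it */
  if (usize < 0)
          return 0;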
2020-03-30 | selftests: bpf: Extend sk_assign tests for UDP | Joe Stringer | 1 | -4/+65
Add support for testing UDP sk_assign to the existing tests. Signed-off-by: Joe Stringer <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Lorenz Bauer <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
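The UDP path mirrors the existing TCP one (a sketch):

  static int handle_udp(struct __sk_buff *skb, struct bpf_sock_tuple *tuple)
  {
          struct bpf_sock *sk;
          int ret;

          sk = bpf_sk_lookup_udp(skb, tuple, sizeof(tuple->ipv4),
                                 BPF_F_CURRENT_NETNS, 0);
          if (!sk)
                  return TC_ACT_SHOT;
          ret = bpf_sk_assign(skb, sk, 0); /* steer to this socket */
          bpf_sk_release(sk);
          return ret ? TC_ACT_SHOT : TC_ACT_OK;
  }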