path: root/tools
Age | Commit message | Author | Files | Lines
2023-12-14 | selftests/net: convert fib_nexthop_multiprefix to run it in unique namespace | Hangbin Liu | 1 | -50/+48
Here is the test result after conversion.

]# ./fib_nexthop_multiprefix.sh
TEST: IPv4: host 0 to host 1, mtu 1300 [ OK ]
TEST: IPv6: host 0 to host 1, mtu 1300 [ OK ]
TEST: IPv4: host 0 to host 2, mtu 1350 [ OK ]
TEST: IPv6: host 0 to host 2, mtu 1350 [ OK ]
TEST: IPv4: host 0 to host 3, mtu 1400 [ OK ]
TEST: IPv6: host 0 to host 3, mtu 1400 [ OK ]
TEST: IPv4: host 0 to host 1, mtu 1300 [ OK ]
TEST: IPv6: host 0 to host 1, mtu 1300 [ OK ]
TEST: IPv4: host 0 to host 2, mtu 1350 [ OK ]
TEST: IPv6: host 0 to host 2, mtu 1350 [ OK ]
TEST: IPv4: host 0 to host 3, mtu 1400 [ OK ]
TEST: IPv6: host 0 to host 3, mtu 1400 [ OK ]

Acked-by: David Ahern <[email protected]>
Signed-off-by: Hangbin Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | selftests/net: fix grep checking for fib_nexthop_multiprefix | Hangbin Liu | 1 | -2/+2
When running the fib_nexthop_multiprefix test I saw all IPv6 tests fail, e.g.

]# ./fib_nexthop_multiprefix.sh
TEST: IPv4: host 0 to host 1, mtu 1300 [ OK ]
TEST: IPv6: host 0 to host 1, mtu 1300 [FAIL]

With -v it shows

COMMAND: ip netns exec h0 /usr/sbin/ping6 -s 1350 -c5 -w5 2001:db8:101::1
PING 2001:db8:101::1(2001:db8:101::1) 1350 data bytes
From 2001:db8:100::64 icmp_seq=1 Packet too big: mtu=1300
--- 2001:db8:101::1 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

Route get
2001:db8:101::1 via 2001:db8:100::64 dev eth0 src 2001:db8:100::1 metric 1024 expires 599sec mtu 1300 pref medium

Searching for:
2001:db8:101::1 from :: via 2001:db8:100::64 dev eth0 src 2001:db8:100::1 .* mtu 1300

The reason is that when CONFIG_IPV6_SUBTREES is not enabled, rt6_fill_node() does not put the RTA_SRC info in the route output, so the pattern the test greps for never matches.

After the fix:

]# ./fib_nexthop_multiprefix.sh
TEST: IPv4: host 0 to host 1, mtu 1300 [ OK ]
TEST: IPv6: host 0 to host 1, mtu 1300 [ OK ]

Fixes: 735ab2f65dce ("selftests: Add test with multiple prefixes using single nexthop")
Signed-off-by: Hangbin Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | selftests/net: convert fcnal-test.sh to run it in unique namespace | Hangbin Liu | 2 | -18/+14
Here is the test result after conversion. There are some failures, but they also exist on my system without this patch, so they are not affected by this patch and I will check the reason later.

]# time ./fcnal-test.sh
/usr/bin/which: no nettest in (/root/.local/bin:/root/bin:/usr/share/Modules/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
###########################################################################
IPv4 ping
###########################################################################
#################################################################
No VRF
SYSCTL: net.ipv4.raw_l3mdev_accept=0
TEST: ping out - ns-B IP [ OK ]
TEST: ping out, device bind - ns-B IP [ OK ]
TEST: ping out, address bind - ns-B IP [ OK ]
...
#################################################################
SNAT on VRF
TEST: IPv4 TCP connection over VRF with SNAT [ OK ]
TEST: IPv6 TCP connection over VRF with SNAT [ OK ]

Tests passed: 893
Tests failed: 21

real 52m48.178s
user 0m34.158s
sys 1m42.976s

BTW, this test needs a really long time, so expand the timeout to 1h.

Acked-by: David Ahern <[email protected]>
Signed-off-by: Hangbin Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | selftests/net: convert srv6_end_dt6_l3vpn_test.sh to run it in unique namespace | Hangbin Liu | 1 | -25/+21
As the name \${rt-${rt}} may confuse readers, rename the variables hs/rt in setup_rt/setup_hs to hid and rid. Here is the test result after conversion.

]# ./srv6_end_dt6_l3vpn_test.sh
################################################################################
TEST SECTION: IPv6 routers connectivity test
################################################################################
TEST: Routers connectivity: rt-1 -> rt-2 [ OK ]
TEST: Routers connectivity: rt-2 -> rt-1 [ OK ]
...
TEST: Hosts isolation: hs-t200-4 -X-> hs-t100-2 [ OK ]

Tests passed: 18
Tests failed: 0

Signed-off-by: Hangbin Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | selftests/net: convert srv6_end_dt4_l3vpn_test.sh to run it in unique namespace | Hangbin Liu | 1 | -27/+21
As the name \${rt-${rt}} may confuse readers, rename the variables hs/rt in setup_rt/setup_hs to hid and rid. Here is the test result after conversion.

]# ./srv6_end_dt4_l3vpn_test.sh
################################################################################
TEST SECTION: IPv6 routers connectivity test
################################################################################
TEST: Routers connectivity: rt-1 -> rt-2 [ OK ]
TEST: Routers connectivity: rt-2 -> rt-1 [ OK ]
...
TEST: Hosts isolation: hs-t200-4 -X-> hs-t100-2 [ OK ]

Tests passed: 18
Tests failed: 0

Signed-off-by: Hangbin Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | selftests/net: convert srv6_end_dt46_l3vpn_test.sh to run it in unique namespace | Hangbin Liu | 1 | -27/+24
As the name \${rt-${rt}} may confuse readers, rename the variables hs/rt in setup_rt/setup_hs to hid and rid. Here is the test result after conversion.

]# ./srv6_end_dt46_l3vpn_test.sh
################################################################################
TEST SECTION: IPv6 routers connectivity test
################################################################################
TEST: Routers connectivity: rt-1 -> rt-2 [ OK ]
TEST: Routers connectivity: rt-2 -> rt-1 [ OK ]
...
TEST: IPv4 Hosts isolation: hs-t200-4 -X-> hs-t100-2 [ OK ]

Tests passed: 34
Tests failed: 0

Signed-off-by: Hangbin Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | selftests/net: add variable NS_LIST for lib.sh | Hangbin Liu | 1 | -0/+8
Add a global variable NS_LIST to store all the namespaces that setup_ns created, so the caller can call cleanup_all_ns() instead of remembering all the netns names when using cleanup_ns(). Signed-off-by: Hangbin Liu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | tools: ynl-gen: print prototypes for recursive stuff | Jakub Kicinski | 1 | -5/+39
We avoid printing forward declarations and prototypes for most types by sorting things topologically. But if structs nest, we do need the forward declarations; there's no other way. Reviewed-by: Donald Hunter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | tools: ynl-gen: store recursive nests by a pointer | Jakub Kicinski | 1 | -2/+14
To avoid infinite nesting, store recursive structs by pointer. If a recursive struct is placed in the op directly, the first instance can be stored by value. That makes the code much less of a pain for the majority of practical uses. Reviewed-by: Donald Hunter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
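As an illustration of the resulting layout, a recursive nest ends up behind a pointer while the first, op-level instance stays by value. This is a hand-written sketch with hypothetical names, not actual ynl-gen output:

/* Hypothetical shape of generated C for a recursive attribute nest. */
struct example_rule_attrs {
	struct example_rule_attrs *nested_rule; /* recursive nest -> stored by pointer */
	unsigned int action;
};

struct example_rule_add_req {
	/* instance placed directly in the op can be stored by value */
	struct example_rule_attrs rule;
};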
2023-12-14 | tools: ynl-gen: re-sort ignoring recursive nests | Jakub Kicinski | 1 | -21/+31
We try to keep the structures and helpers "topologically sorted", to avoid forward declarations. When recursive nests are at play we need to sort twice, because structs which end up being marked as recursive will get a full set of forward declarations, so we should ignore them for the purpose of sorting. Reviewed-by: Donald Hunter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | tools: ynl-gen: record information about recursive nests | Jakub Kicinski | 1 | -2/+17
Track which nests are recursive. Non-recursive nesting gets rendered in C as directly nested structs. For recursive ones we need to put a pointer in, rather than the full struct. Track this information; no change to generated code yet. Reviewed-by: Donald Hunter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | tools: ynl-gen: fill in implementations for TypeUnused | Jakub Kicinski | 1 | -0/+9
Fill in more empty handlers for TypeUnused. When 'unused' attr gets specified in a nested set we have to cleanly skip it during code generation. Reviewed-by: Donald Hunter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | tools: ynl-gen: support fixed headers in genetlink | Jakub Kicinski | 3 | -8/+45
Support genetlink families using simple fixed headers. Assume fixed header is identical for all ops of the family for now. Fixed headers are added to the request and reply structs as a _hdr member, and copied to/from netlink messages appropriately. Reviewed-by: Donald Hunter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
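For illustration, a request struct for such a family might be shaped roughly as below. All names here are hypothetical placeholders, not real generated code:

/* Hypothetical generated request struct for a genetlink family
 * that uses a fixed header. */
struct example_fixed_hdr {
	unsigned char cmd;
	unsigned char version;
};

struct example_op_do_req {
	struct example_fixed_hdr _hdr;   /* copied into the netlink message payload */
	struct {
		unsigned int ifindex:1;  /* presence bit for the optional attribute */
	} _present;
	unsigned int ifindex;
};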
2023-12-14 | tools: ynl-gen: use enum user type for members and args | Jakub Kicinski | 1 | -3/+2
Commit 30c902001534 ("tools: ynl-gen: use enum name from the spec") added a pre-cooked user type for enums. Use it to stop ignoring the enum-name provided in the spec. This changes a type in struct ethtool_tunnel_udp_entry but is generally inconsequential for current families. Reviewed-by: Donald Hunter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | tools: ynl-gen: add missing request free helpers for dumps | Jakub Kicinski | 1 | -0/+1
The code gen generates a prototype for dump request free in the header, but no implementation in the source. Reviewed-by: Donald Hunter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | selftests/bpf: utilize string values for delegate_xxx mount options | Andrii Nakryiko | 1 | -20/+32
Use both hex-based and string-based way to specify delegate mount options for BPF FS. Acked-by: John Fastabend <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-14 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 8 | -9/+20
Cross-merge networking fixes after downstream PR.

Conflicts:

drivers/net/ethernet/intel/iavf/iavf_ethtool.c
  3a0b5a2929fd ("iavf: Introduce new state machines for flow director")
  95260816b489 ("iavf: use iavf_schedule_aq_request() helper")
https://lore.kernel.org/all/[email protected]/

drivers/net/ethernet/broadcom/bnxt/bnxt.c
  c13e268c0768 ("bnxt_en: Fix HWTSTAMP_FILTER_ALL packet timestamp logic")
  c2f8063309da ("bnxt_en: Refactor RX VLAN acceleration logic.")
  a7445d69809f ("bnxt_en: Add support for new RX and TPA_START completion types for P7")
  1c7fd6ee2fe4 ("bnxt_en: Rename some macros for the P5 chips")
https://lore.kernel.org/all/[email protected]/

drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
  bd6781c18cb5 ("bnxt_en: Fix wrong return value check in bnxt_close_nic()")
  84793a499578 ("bnxt_en: Skip nic close/open when configuring tstamp filters")
https://lore.kernel.org/all/[email protected]/

drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
  3d7a3f2612d7 ("net/mlx5: Nack sync reset request when HotPlug is enabled")
  cecf44ea1a1f ("net/mlx5: Allow sync reset flow when BF MGT interface device is present")
https://lore.kernel.org/all/[email protected]/

No adjacent changes.

Signed-off-by: Jakub Kicinski <[email protected]>
2023-12-14 | bpf: xfrm: Add selftest for bpf_xdp_get_xfrm_state() | Daniel Xu | 2 | -2/+65
This commit extends test_tunnel selftest to test the new XDP xfrm state lookup kfunc. Co-developed-by: Antony Antony <[email protected]> Signed-off-by: Antony Antony <[email protected]> Signed-off-by: Daniel Xu <[email protected]> Link: https://lore.kernel.org/r/e704e9a4332e3eac7b458e4bfdec8fcc6984cdb6.1702593901.git.dxu@dxuuu.xyz Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-14 | bpf: selftests: Move xfrm tunnel test to test_progs | Daniel Xu | 3 | -95/+151
test_progs is better than a shell script because C is a bit easier to maintain than shell. It's also easier to use new infrastructure like memory-mapped global variables from C via the BPF skeleton. Co-developed-by: Antony Antony <[email protected]> Signed-off-by: Antony Antony <[email protected]> Signed-off-by: Daniel Xu <[email protected]> Link: https://lore.kernel.org/r/a350db9e08520c64544562d88ec005a039124d9b.1702593901.git.dxu@dxuuu.xyz Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-14 | bpf: selftests: test_tunnel: Use vmlinux.h declarations | Daniel Xu | 2 | -55/+22
vmlinux.h declarations are more ergonomic, especially when working with kfuncs. The uapi headers are often incomplete for kfunc definitions.

This commit also switches bitfield accesses to use CO-RE helpers. Switching to vmlinux.h definitions makes the verifier very unhappy with raw bitfield accesses. The error is:

; md.u.md2.dir = direction;
33: (69) r1 = *(u16 *)(r2 +11)
misaligned stack access off (0x0; 0x0)+-64+11 size 2

Fix by using CO-RE-aware bitfield reads and writes.

Co-developed-by: Antony Antony <[email protected]>
Signed-off-by: Antony Antony <[email protected]>
Signed-off-by: Daniel Xu <[email protected]>
Link: https://lore.kernel.org/r/884bde1d9a351d126a3923886b945ea6b1b0776b.1702593901.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <[email protected]>
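A minimal sketch of what such a CO-RE-aware bitfield access looks like, assuming struct erspan_md2 and its dir bitfield come from vmlinux.h; this is illustrative and not the actual selftest code:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

static __always_inline void md2_set_dir(struct erspan_md2 *md2, __u8 direction)
{
	/* relocatable write: byte offset, shift and mask come from CO-RE relocations */
	BPF_CORE_WRITE_BITFIELD(md2, dir, direction);
}

static __always_inline __u8 md2_get_dir(struct erspan_md2 *md2)
{
	/* matching CO-RE-aware read of the same bitfield */
	return BPF_CORE_READ_BITFIELD(md2, dir);
}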
2023-12-14 | bpf: selftests: test_tunnel: Setup fresh topology for each subtest | Daniel Xu | 1 | -5/+2
This helps with determinism because individual setup/teardown prevents leaking state between different subtests. Signed-off-by: Daniel Xu <[email protected]> Link: https://lore.kernel.org/r/0fb59fa16fb58cca7def5239df606005a3e8dd0e.1702593901.git.dxu@dxuuu.xyz Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-14 | selftests/bpf: Remove flaky test_btf_id test | Yonghong Song | 1 | -5/+0
With the previous patch, one of the subtests in test_btf_id becomes flaky and may fail. The following is a failing example:

Error: #26 btf
Error: #26/174 btf/BTF ID
Error: #26/174 btf/BTF ID
btf_raw_create:PASS:check 0 nsec
btf_raw_create:PASS:check 0 nsec
test_btf_id:PASS:check 0 nsec
...
test_btf_id:PASS:check 0 nsec
test_btf_id:FAIL:check BTF lingers
do_test_get_info:FAIL:check failed: -1

The test tries to prove that a btf_id is not available after the map is closed. But the btf_id is now freed only after a workqueue and an RCU grace period, compared to just after an RCU grace period previously. Depending on system workload, the workqueue could take quite some time to execute bpf_map_free_deferred(), which may cause the test failure. Instead of adding arbitrary delays, let us remove the logic that checks btf_id availability after the map is closed.

Signed-off-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-14 | perf top: Uniform the event name for the hybrid machine | Kan Liang | 4 | -27/+28
It's hard to distinguish the default cycles events among hybrid PMUs. For example:

$ perf top
Available samples
385 cycles:P
903 cycles:P

Other tools, e.g., perf record, uniquify the event name and add the hybrid PMU name before opening the event, so the events can be easily distinguished. Apply the same methodology to perf top as well.

The evlist__uniquify_name() will be invoked by both record and top, so move it to util/evlist.c.

With the patch:

$ perf top
Available samples
148 cpu_atom/cycles:P/
1K cpu_core/cycles:P/

Reviewed-by: Ian Rogers <[email protected]>
Signed-off-by: Kan Liang <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Hector Martin <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-12-14 | perf top: Use evsel's cpus to replace user_requested_cpus | Kan Liang | 1 | -2/+2
perf top errors out on a hybrid machine:

$ perf top
Error:
The cycles:P event is not supported.

perf top expects that "cycles" is collected on all CPUs in the system. But for hybrid there is no single "cycles" event which can cover all CPUs. Perf has to split it into two cycles events, e.g., cpu_core/cycles/ and cpu_atom/cycles/. Each event has its own CPU mask. If an event is opened on an unsupported CPU, the open fails. That's the reason for the error above.

Perf should only open the cycles event on the corresponding CPU. The commit ef91871c960e ("perf evlist: Propagate user CPU maps intersecting core PMU maps") intersects the requested CPU map with the CPU map of the PMU. Use the evsel's cpus to replace user_requested_cpus.

The evlist's threads are also propagated to the evsel's threads in __perf_evlist__propagate_maps(). For a system-wide event, perf appends a dummy event and assigns it to the evsel's threads. For a per-thread event, the evlist's thread_map is assigned to the evsel's threads. Like the other tools, e.g., perf record, use the evsel's threads when opening an event.

Reported-by: Arnaldo Carvalho de Melo <[email protected]>
Reviewed-by: Ian Rogers <[email protected]>
Signed-off-by: Kan Liang <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Hector Martin <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Closes: https://lore.kernel.org/linux-perf-users/[email protected]/
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-12-14 | perf unwind-libunwind: Fix base address for .eh_frame | Namhyung Kim | 1 | -1/+1
The base address of a DSO mapping should correspond to the start of the file. Usually DSOs are mapped with pgoff 0, so it doesn't matter if the start of the map address is used. But DSOs generated for JIT code don't start from 0, so the offset should be subtracted to calculate the .eh_frame table offsets correctly. Fixes: dc2cf4ca866f5715 ("perf unwind: Fix segbase for ld.lld linked objects") Reviewed-by: Ian Rogers <[email protected]> Signed-off-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Fangrui Song <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Milian Wolff <[email protected]> Cc: Pablo Galindo <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-12-14 | perf unwind-libdw: Handle JIT-generated DSOs properly | Namhyung Kim | 1 | -4/+17
Usually DSOs are mapped from the beginning of the file, so the base address of the DSO can be calculated as map->start - map->pgoff. However, JIT DSOs, which are generated by `perf inject -j`, are mapped with only the code segment. This confuses the unwind-libdw code and makes it reject processing unwinds in the JIT DSOs. It should use the map start address as the base for them to fix the confusion. Fixes: 1fe627da30331024 ("perf unwind: Take pgoff into account when reporting elf to libdwfl") Signed-off-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Fangrui Song <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Milian Wolff <[email protected]> Cc: Pablo Galindo <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
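A minimal sketch of the base-address rule described in these two fixes; the function and its parameters are illustrative stand-ins, not the actual perf code:

#include <stdbool.h>

/* Base address choice: regular DSOs vs JIT DSOs from 'perf inject -j'. */
static unsigned long dso_base_addr(unsigned long map_start,
				   unsigned long map_pgoff,
				   bool is_jit_dso)
{
	/* Regular DSOs: file offset 0 corresponds to map start minus pgoff. */
	if (!is_jit_dso)
		return map_start - map_pgoff;

	/* JIT DSOs map only the code segment, so the map start itself is the base. */
	return map_start;
}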
2023-12-14 | perf genelf: Set ELF program header addresses properly | Namhyung Kim | 1 | -3/+3
The text section starts after the ELF headers so PHDR.p_vaddr and others should have the correct addresses. Fixes: babd04386b1df8c3 ("perf jit: Include program header in ELF files") Reviewed-by: Ian Rogers <[email protected]> Signed-off-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Fangrui Song <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Lieven Hey <[email protected]> Cc: Milian Wolff <[email protected]> Cc: Pablo Galindo <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-12-14 | perf stat: Combine the -A/--no-aggr and --no-merge options | Ian Rogers | 5 | -29/+33
The -A or --no-aggr option disables aggregation of core events:

$ perf stat -A -e cycles,data_total -a true

Performance counter stats for 'system wide':

CPU0   1,287,665  cycles
CPU1   1,831,681  cycles
CPU2  27,345,998  cycles
CPU3   1,964,799  cycles
CPU4     236,174  cycles
CPU5   3,302,825  cycles
CPU6   9,201,446  cycles
CPU7   1,403,043  cycles
CPU0      110.90 MiB  data_total

0.008961761 seconds time elapsed

The --no-merge option disables the aggregation of uncore events:

$ perf stat --no-merge -e cycles,data_total -a true

Performance counter stats for 'system wide':

38,482,778  cycles
     15.04 MiB  data_total [uncore_imc_free_running_1]
     15.00 MiB  data_total [uncore_imc_free_running_0]

0.005915155 seconds time elapsed

Having two options confuses users, who generally don't appreciate the difference in PMUs. Keep all the options but make it so they all disable aggregation of both core and uncore events:

$ perf stat -A -e cycles,data_total -a true

Performance counter stats for 'system wide':

CPU0     85,878  cycles
CPU1     88,179  cycles
CPU2     60,872  cycles
CPU3  3,265,567  cycles
CPU4     82,357  cycles
CPU5     83,383  cycles
CPU6     84,156  cycles
CPU7    220,803  cycles
CPU0       2.38 MiB  data_total [uncore_imc_free_running_0]
CPU0       2.38 MiB  data_total [uncore_imc_free_running_1]

0.001397205 seconds time elapsed

Update the relevant 'perf stat' man page information.

Reviewed-by: Kan Liang <[email protected]>
Signed-off-by: Ian Rogers <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Athira Jajeev <[email protected]>
Cc: Changbin Du <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: James Clark <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: John Garry <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: Kaige Ye <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Nick Desaulniers <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-12-14 | add selftest for statmount/listmount | Miklos Szeredi | 4 | -0/+621
Initial selftest for the new statmount() and listmount() syscalls. Signed-off-by: Miklos Szeredi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Christian Brauner <[email protected]>
2023-12-14 | selftests/xsk: Fix for SEND_RECEIVE_UNALIGNED test | Tushar Vyavahare | 1 | -9/+16
Fix the test broken by the shared umem test and framework enhancement commit. Correct the current implementation of pkt_stream_replace_half() by ensuring that nb_valid_entries is not set to half, as this is not true for all the tests. Ensure that the expected value for valid_entries for the SEND_RECEIVE_UNALIGNED test equals the total number of packets sent, which is 4096. Create a new function called pkt_stream_pkt_set() that allows for packet modification to meet specific requirements while ensuring the accurate maintenance of the valid packet count to prevent inconsistencies in packet tracking. Fixes: 6d198a89c004 ("selftests/xsk: Add a test for shared umem feature") Reported-by: Maciej Fijalkowski <[email protected]> Signed-off-by: Tushar Vyavahare <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Maciej Fijalkowski <[email protected]> Acked-by: Magnus Karlsson <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2023-12-13 | bpf: sockmap, test for unconnected af_unix sock | John Fastabend | 1 | -0/+34
Add a test to sockmap_basic to ensure af_unix sockets that are not connected cannot be added to the map. Ensure we keep DGRAM sockets working, however, as these typically will not be connected. Signed-off-by: John Fastabend <[email protected]> Acked-by: Jakub Sitnicki <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Martin KaFai Lau <[email protected]>
2023-12-13 | selftests/bpf: Check VLAN tag and proto in xdp_metadata | Larysa Zaremba | 3 | -2/+26
Verify whether the VLAN tag and proto are set correctly. To simulate a "stripped" VLAN tag on veth, send the test packet from a VLAN interface. Also, add a TO_STR() macro for convenience. Acked-by: Stanislav Fomichev <[email protected]> Signed-off-by: Larysa Zaremba <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | selftests/bpf: Add AF_INET packet generation to xdp_metadata | Larysa Zaremba | 1 | -19/+97
The easiest way to simulate a stripped VLAN tag in veth is to send a packet from a VLAN interface attached to the veth. Unfortunately, this approach is incompatible with AF_XDP on the TX side, because VLAN interfaces do not have such a feature. Check both packets sent via AF_XDP TX and a regular socket. The AF_INET packet will also have a filled-in hash type (XDP_RSS_TYPE_L4), unlike the AF_XDP packet, so more values can be checked. Signed-off-by: Larysa Zaremba <[email protected]> Acked-by: Stanislav Fomichev <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | selftests/bpf: Add flags and VLAN hint to xdp_hw_metadata | Larysa Zaremba | 3 | -12/+76
Add the VLAN hint to the xdp_hw_metadata program. Also, to make the metadata layout more straightforward, add a flags field to pass information about the validity of each hint separately. Acked-by: Stanislav Fomichev <[email protected]> Signed-off-by: Larysa Zaremba <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | selftests/bpf: Allow VLAN packets in xdp_hw_metadata | Larysa Zaremba | 2 | -1/+17
Make VLAN c-tag and s-tag XDP hint testing more convenient by not skipping VLAN-ed packets. Allow both 802.1ad and 802.1Q headers. Acked-by: Stanislav Fomichev <[email protected]> Signed-off-by: Larysa Zaremba <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | xdp: Add VLAN tag hint | Larysa Zaremba | 2 | -0/+4
Implement functionality that enables drivers to expose the VLAN tag to XDP code. The VLAN tag is represented by 2 variables:

- protocol ID, which is passed to bpf code in BE
- VLAN TCI, in host byte order

Acked-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Larysa Zaremba <[email protected]>
Acked-by: Jesper Dangaard Brouer <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
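For illustration, an XDP program consuming this hint might look roughly like the sketch below. It assumes the bpf_xdp_metadata_rx_vlan_tag() kfunc introduced around this series; treat the exact signature and return convention as assumptions:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Assumed kfunc declaration for the VLAN rx hint. */
extern int bpf_xdp_metadata_rx_vlan_tag(const struct xdp_md *ctx,
					__be16 *vlan_proto,
					__u16 *vlan_tci) __ksym;

SEC("xdp")
int read_vlan_hint(struct xdp_md *ctx)
{
	__be16 proto = 0;   /* 802.1Q/802.1ad ethertype, big endian */
	__u16 tci = 0;      /* PCP/DEI/VID, host byte order */

	/* A non-zero return means the driver did not provide the hint. */
	if (bpf_xdp_metadata_rx_vlan_tag(ctx, &proto, &tci))
		return XDP_PASS;

	bpf_printk("vlan proto=0x%x vid=%u", bpf_ntohs(proto), tci & 0x0fff);
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";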
2023-12-13 | selftests/bpf: add tests for LIBBPF_BPF_TOKEN_PATH envvar | Andrii Nakryiko | 1 | -0/+112
Add a new subtest validating LIBBPF_BPF_TOKEN_PATH envvar semantics. Extend the existing test to validate that LIBBPF_BPF_TOKEN_PATH allows disabling implicit BPF token creation by setting the envvar to an empty string. Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | libbpf: support BPF token path setting through LIBBPF_BPF_TOKEN_PATH envvar | Andrii Nakryiko | 2 | -6/+21
To allow an external admin authority to override the default BPF FS location (/sys/fs/bpf) for implicit BPF token creation, teach libbpf to recognize the LIBBPF_BPF_TOKEN_PATH envvar. If it is specified and the user application didn't explicitly specify either the bpf_token_path or the bpf_token_fd option, it will be treated exactly like the bpf_token_path option, overriding the default /sys/fs/bpf location and making the BPF token mandatory. Suggested-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | selftests/bpf: add tests for BPF object load with implicit token | Andrii Nakryiko | 1 | -0/+76
Add a test to validate libbpf's implicit BPF token creation from default BPF FS location (/sys/fs/bpf). Also validate that disabling this implicit BPF token creation works. Acked-by: John Fastabend <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | selftests/bpf: add BPF object loading tests with explicit token passing | Andrii Nakryiko | 3 | -0/+185
Add a few tests that attempt to load a BPF object containing a privileged map, a privileged program, and one requiring mandatory BTF uploading into the kernel (to validate token FD propagation to the BPF_BTF_LOAD command). Acked-by: John Fastabend <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | libbpf: wire up BPF token support at BPF object level | Andrii Nakryiko | 4 | -12/+158
Add BPF token support to BPF object-level functionality.

BPF token is supported by BPF object logic either as an explicitly provided BPF token from outside (through BPF FS path or explicit BPF token FD), or implicitly (unless prevented through bpf_object_open_opts).

Implicit mode is assumed to be the most common one for user namespaced unprivileged workloads. The assumption is that a privileged container manager sets up the default BPF FS mount point at /sys/fs/bpf with BPF token delegation options (delegate_{cmds,maps,progs,attachs} mount options). The BPF object during loading will attempt to create a BPF token from the /sys/fs/bpf location, and pass it for all relevant operations (currently, map creation, BTF load, and program load). In this implicit mode, if BPF token creation fails for whatever reason (BPF FS is not mounted, or the kernel doesn't support BPF token, etc.), this is not considered an error. The BPF object loading sequence will proceed with no BPF token.

In explicit BPF token mode, the user explicitly provides either a custom BPF FS mount point path, or creates a BPF token on their own and just passes the token FD directly. In such a case, the BPF object will either dup() the token FD (to not require the caller to hold onto it for the entire duration of the BPF object lifetime) or will attempt to create a BPF token from the provided BPF FS location. If BPF token creation fails, that is considered a critical error and BPF object load fails with an error.

Libbpf provides a way to disable implicit BPF token creation, if it causes any troubles (BPF token is designed to be completely optional and shouldn't cause any problems even if provided, but in the world of BPF LSM, custom security logic can be installed that might change the outcome depending on the presence of a BPF token). To disable libbpf's default BPF token creation behavior, the user should provide either an invalid BPF token FD (negative), or an empty bpf_token_path option.

BPF token presence can influence libbpf's feature probing, so if a BPF object has an associated BPF token, feature probing is instructed to use the BPF object-specific feature detection cache and token FD.

Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
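A minimal usage sketch of the explicit mode described above, assuming the bpf_token_path open option named in this commit; the object file name is a placeholder:

#include <bpf/libbpf.h>

/* Sketch: open and load a BPF object with an explicit BPF FS path used
 * for BPF token creation. */
int open_with_token(void)
{
	LIBBPF_OPTS(bpf_object_open_opts, opts,
		.bpf_token_path = "/sys/fs/bpf",   /* delegated BPF FS mount */
	);
	struct bpf_object *obj;

	obj = bpf_object__open_file("my_prog.bpf.o", &opts);
	if (!obj)
		return -1;

	/* The token (if created) is used for map creation, BTF load and
	 * program load during bpf_object__load(). */
	if (bpf_object__load(obj)) {
		bpf_object__close(obj);
		return -1;
	}

	bpf_object__close(obj);
	return 0;
}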
2023-12-13 | libbpf: wire up token_fd into feature probing logic | Andrii Nakryiko | 5 | -46/+66
Adjust feature probing callbacks to take into account an optional token_fd. In unprivileged contexts, some feature detectors would fail to detect kernel support just because a BPF program, BPF map, or BTF object can't be loaded due to the privileged nature of those operations. So when a BPF object is loaded with a BPF token, this token should be used for feature probing. This patch sets up support for this scenario, but we don't yet pass a non-zero token FD. This will be added in the next patch. We also switched the BPF cookie detector from using a kprobe program to a tracepoint one, as tracepoint is a somewhat less dangerous BPF program type and has a higher likelihood of being allowed through a BPF token in the future. This change has no effect on detection behavior. Acked-by: John Fastabend <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | libbpf: move feature detection code into its own file | Andrii Nakryiko | 6 | -466/+479
It's quite a lot of well-isolated code, so it seems like a good candidate to move out of libbpf.c to reduce its size. Acked-by: John Fastabend <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | libbpf: further decouple feature checking logic from bpf_object | Andrii Nakryiko | 3 | -11/+22
Add a feat_supported() helper that accepts a feature cache instead of a bpf_object. This allows low-level code in bpf.c to not know or care about the higher-level concept of bpf_object, yet it will still be able to utilize custom feature checking in cases where a BPF token might influence the outcome. Acked-by: John Fastabend <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | libbpf: split feature detectors definitions from cached results | Andrii Nakryiko | 1 | -6/+12
Split the list of supported feature detectors, with their corresponding callbacks, from the actual cached supported/missing values. This will allow more flexible per-token or per-object feature detectors in subsequent refactorings. Acked-by: John Fastabend <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | bpf: selftests: Add verifier tests for CO-RE bitfield writes | Daniel Xu | 2 | -0/+102
Add some tests that exercise BPF_CORE_WRITE_BITFIELD() macro. Since some non-trivial bit fiddling is going on, make sure various edge cases (such as adjacent bitfields and bitfields at the edge of structs) are exercised. Acked-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Xu <[email protected]> Link: https://lore.kernel.org/r/72698a1080fa565f541d5654705255984ea2a029.1702325874.git.dxu@dxuuu.xyz Signed-off-by: Martin KaFai Lau <[email protected]>
2023-12-13 | bpf: selftests: test_loader: Support __btf_path() annotation | Daniel Xu | 2 | -0/+8
This commit adds support for per-prog btf_custom_path. This is necessary for testing CO-RE relocations on non-vmlinux types using test_loader infrastructure. Acked-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Xu <[email protected]> Link: https://lore.kernel.org/r/660ea7f2fdbdd5103bc1af87c9fc931f05327926.1702325874.git.dxu@dxuuu.xyz Signed-off-by: Martin KaFai Lau <[email protected]>
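A rough illustration of how such a per-prog annotation might be used in a test_loader-based program; the macro usage, header names, and BTF file path here are assumptions, not taken from the actual patch:

/* Sketch only: relocate this test program against a custom BTF blob
 * instead of vmlinux BTF. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"

SEC("tc")
__btf_path("/sys/fs/bpf/custom_types.btf")   /* hypothetical path */
__success
int rely_on_custom_btf(void *ctx)
{
	return 0;
}

char _license[] SEC("license") = "GPL";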
2023-12-13 | libbpf: Add BPF_CORE_WRITE_BITFIELD() macro | Daniel Xu | 1 | -0/+32
=== Motivation ===

Similar to reading from CO-RE bitfields, we need a CO-RE aware bitfield writing wrapper to make the verifier happy. Two alternatives to this approach are:

1. Use the upcoming `preserve_static_offset` [0] attribute to disable CO-RE on specific structs.
2. Use broader byte-sized writes to write to bitfields.

(1) is a bit hard to use. It requires specific and not-very-obvious annotations to bpftool generated vmlinux.h. It's also not generally available in released LLVM versions yet.

(2) makes the code quite hard to read and write. And especially if BPF_CORE_READ_BITFIELD() is already being used, it makes more sense to have an inverse helper for writing.

=== Implementation details ===

Since the logic is a bit non-obvious, I thought it would be helpful to explain exactly what's going on.

To start, it helps to explain what LSHIFT_U64 (lshift) and RSHIFT_U64 (rshift) are designed to mean. Consider the core of the BPF_CORE_READ_BITFIELD() algorithm:

val <<= __CORE_RELO(s, field, LSHIFT_U64);
val = val >> __CORE_RELO(s, field, RSHIFT_U64);

Basically what happens is we lshift to clear the non-relevant (blank) higher order bits. Then we rshift to bring the relevant bits (bitfield) down to LSB position (while also clearing blank lower order bits). To illustrate:

Start:  ........XXX......
Lshift: XXX......00000000
Rshift: 00000000000000XXX

where `.` means blank bit, `0` means 0 bit, and `X` means bitfield bit. After the two operations, the bitfield is ready to be interpreted as a regular integer.

Next, we want to build an alternative (but more helpful) mental model of lshift and rshift. That is, to consider:

* rshift as the total number of blank bits in the u64
* lshift as the number of blank bits left of the bitfield in the u64

Take a moment to consider why that is true by consulting the above diagram. With this insight, we can now state the following relationship:

          bitfield
             _
            | |
    0.....00XXX0...00
    |______|   |____|
     lshift    (rshift - lshift)

That is, we know the number of higher order blank bits is just lshift. And the number of lower order blank bits is (rshift - lshift).

Finally, we can examine the core of the write side algorithm:

mask = (~0ULL << rshift) >> lshift;              // 1
val = (val & ~mask) | ((nval << rpad) & mask);   // 2

1. Compute a mask where the set bits are the bitfield bits. The first left shift zeros out exactly the number of blank bits, leaving a bitfield sized set of 1s. The subsequent right shift inserts the correct amount of higher order blank bits.

2. On the left of the `|`, mask out the bitfield bits. This creates 0s where the new bitfield bits will go. On the right of the `|`, bring nval into the correct bit position and mask out any bits that fall outside of the bitfield. Finally, by bor'ing the two halves, we get the final set of bits to write back.

[0]: https://reviews.llvm.org/D133361

Co-developed-by: Eduard Zingerman <[email protected]>
Signed-off-by: Eduard Zingerman <[email protected]>
Co-developed-by: Jonathan Lemon <[email protected]>
Signed-off-by: Jonathan Lemon <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Xu <[email protected]>
Link: https://lore.kernel.org/r/4d3dd215a4fd57d980733886f9c11a45e1a9adf3.1702325874.git.dxu@dxuuu.xyz
Signed-off-by: Martin KaFai Lau <[email protected]>
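As a concrete, plain-C sanity check of the arithmetic above; the lshift/rshift values are made-up stand-ins for what the CO-RE relocations would produce, so this is only an illustration of the bit math, not the macro itself:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Imagine a 3-bit bitfield inside a u64 load, with (hypothetical
	 * relocation results) lshift = 8 blank bits above it and
	 * rshift = 61 total blank bits, i.e. 53 blank bits below it. */
	const unsigned int lshift = 8, rshift = 61;
	const unsigned int rpad = rshift - lshift;     /* lower blank bits */
	uint64_t val  = 0x00ABCDEF12345678ULL;         /* raw u64 containing the field */
	uint64_t nval = 0x5;                           /* new 3-bit value to store */

	/* Read side: clear the high blanks, then bring the field to the LSBs. */
	uint64_t old = (val << lshift) >> rshift;

	/* Write side: build a mask over the bitfield bits, then splice nval in. */
	uint64_t mask = (~0ULL << rshift) >> lshift;
	uint64_t out  = (val & ~mask) | ((nval << rpad) & mask);

	printf("old field=%llu, u64 after write=%#llx\n",
	       (unsigned long long)old, (unsigned long long)out);
	return 0;
}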
2023-12-13 | selftests/bpf: fix compiler warnings in RELEASE=1 mode | Andrii Nakryiko | 2 | -2/+2
When compiling BPF selftests with RELEASE=1, we get two new warnings, which are treated as errors. Fix them. Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Yonghong Song <[email protected]> Acked-by: John Fastabend <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-12-13 | perf hisi-ptt: Fix one memory leakage in hisi_ptt_process_auxtrace_event() | Yicong Yang | 1 | -0/+1
ASan complains about a memory leak in hisi_ptt_process_auxtrace_event(): the data buffer is not freed. Since currently we only support the raw dump trace mode, the data buffer is used only within this function. So fix this by freeing the data buffer before going out. Fixes: 5e91e57e68090c0e ("perf auxtrace arm64: Add support for parsing HiSilicon PCIe Trace packet") Reviewed-by: Ian Rogers <[email protected]> Signed-off-by: Yicong Yang <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Jonathan Cameron <[email protected]> Cc: Junhao He <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Qi Liu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>