path: root/tools
Age | Commit message | Author | Files | Lines
2022-11-22selftests: net: Add cross-compilation support for BPF programsBjörn Töpel1-4/+41
The selftests/net build does not have proper cross-compilation support and does not properly state libbpf as a dependency. Mimic/copy the BPF build from selftests/bpf, which has the nice side-effect that libbpf is built as well. Signed-off-by: Björn Töpel <[email protected]> Reviewed-by: Anders Roxell <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Paolo Abeni <[email protected]>
2022-11-21selftests/bpf: Make sure zero-len skbs aren't redirectableStanislav Fomichev2-0/+183
LWT_XMIT to test L3 case, TC to test L2 case. v2: - s/veth_ifindex/ipip_ifindex/ in two places (Martin) - add comment about which condition triggers the rejection (Martin) Signed-off-by: Stanislav Fomichev <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Martin KaFai Lau <[email protected]>
2022-11-21selftests/bpf: Make test_bench_attach serialJiri Olsa1-3/+1
Alexei hit another rcu warning because of this test. Make test_bench_attach serial so it does not disrupt other tests during a parallel test run. While this change is not the fix, the warning should be less likely to trigger with this test executed serially. Reported-by: Alexei Starovoitov <[email protected]> Signed-off-by: Jiri Olsa <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-21selftests/bpf: Filter out default_idle from kprobe_multi benchJiri Olsa1-1/+3
Alexei hit the following rcu warning when running prog_test -j. [ 128.049567] WARNING: suspicious RCU usage [ 128.049569] 6.1.0-rc2 #912 Tainted: G O ... [ 128.050944] kprobe_multi_link_handler+0x6c/0x1d0 [ 128.050947] ? kprobe_multi_link_handler+0x42/0x1d0 [ 128.050950] ? __cpuidle_text_start+0x8/0x8 [ 128.050952] ? __cpuidle_text_start+0x8/0x8 [ 128.050958] fprobe_handler.part.1+0xac/0x150 [ 128.050964] 0xffffffffa02130c8 [ 128.050991] ? default_idle+0x5/0x20 [ 128.050998] default_idle+0x5/0x20 It's caused by the bench test attaching a kprobe_multi link to the default_idle function, which is not executed in an rcu-safe context, so the kprobe handler on top of it triggers the rcu warning. Filter out the default_idle function from the bench test. Reported-by: Alexei Starovoitov <[email protected]> Signed-off-by: Jiri Olsa <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
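A minimal, illustrative sketch of the filtering idea (the helper name is hypothetical, not the selftest's own code): when collecting the available symbols for the kprobe_multi bench, skip default_idle, which runs while RCU is not watching and would trip the warning above when attached.

    #include <string.h>
    #include <stdbool.h>

    /* Hypothetical helper: reject symbols known to run outside an
     * RCU-watched context, such as default_idle. */
    static bool bench_symbol_allowed(const char *name)
    {
            return strcmp(name, "default_idle") != 0;
    }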
2022-11-21bpf: Set and check spin lock value in sk_storage_map_testXu Kuohai1-16/+20
Update sk_storage_map_test to make sure the kernel does not copy a non-zero user spin lock value into the kernel, and does not copy the kernel spin lock value out to user space. If the user spin lock value is copied to the kernel, this test case will make the kernel spin on the copied lock, resulting in an rcu stall and a softlockup. Signed-off-by: Xu Kuohai <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-21selftests/bpf: Add test for cgroup iterator on a dead cgroupHou Tao1-0/+76
The test closes both the iterator link fd and the cgroup fd, and removes the cgroup file to make a dead cgroup before reading from the cgroup iterator. It also uses kern_sync_rcu() and usleep() to wait for the release of the start cgroup. If the start cgroup is not pinned by the cgroup iterator, reading from the iterator fd will trigger a use-after-free. Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Yonghong Song <[email protected]> Acked-by: Hao Luo <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2022-11-21selftests/bpf: Add cgroup helper remove_cgroup()Hou Tao2-0/+20
Add remove_cgroup() to remove a cgroup which doesn't have any children or live processes. It will be used by the following patch to test cgroup iterator on a dead cgroup. Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2022-11-21selftests/net: Find nettest in current directoryDaniel Díaz2-8/+13
The `nettest` binary, built from `selftests/net/nettest.c`, was expected to be found in the path during test execution of `fcnal-test.sh` and `pmtu.sh`, leading to tests getting skipped when the binary is not installed in the system, as can be seen in these logs found in the wild [1]: # TEST: vti4: PMTU exceptions [SKIP] [ 350.600250] IPv6: ADDRCONF(NETDEV_CHANGE): veth_b: link becomes ready [ 350.607421] IPv6: ADDRCONF(NETDEV_CHANGE): veth_a: link becomes ready # 'nettest' command not found; skipping tests # xfrm6udp not supported # TEST: vti6: PMTU exceptions (ESP-in-UDP) [SKIP] [ 351.605102] IPv6: ADDRCONF(NETDEV_CHANGE): veth_b: link becomes ready [ 351.612243] IPv6: ADDRCONF(NETDEV_CHANGE): veth_a: link becomes ready # 'nettest' command not found; skipping tests # xfrm4udp not supported The `unicast_extensions.sh` tests also rely on `nettest`, but it runs fine there because it looks for the binary in the current working directory [2]: The same mechanism that works for the Unicast extensions tests is here copied over to the PMTU and functional tests. [1] https://lkft.validation.linaro.org/scheduler/job/5839508#L6221 [2] https://lkft.validation.linaro.org/scheduler/job/5839508#L7958 Signed-off-by: Daniel Díaz <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-11-21NFC: nci: Extend virtual NCI deinit testDmitry Vyukov1-0/+11
Extend the test to check the scenario when NCI core tries to send data to already closed device to ensure that nothing bad happens. Signed-off-by: Dmitry Vyukov <[email protected]> Cc: Bongsu Jeon <[email protected]> Cc: Krzysztof Kozlowski <[email protected]> Cc: Jakub Kicinski <[email protected]> Cc: [email protected] Signed-off-by: David S. Miller <[email protected]>
2022-11-21tools/arch/x86: intel_sdsi: Add support for reading meter certificatesDavid E. Box1-2/+108
Add option to read and decode On Demand meter certificates. Link: https://github.com/intel/intel-sdsi/blob/master/meter-certificate.rst Signed-off-by: David E. Box <[email protected]> Reviewed-by: Hans de Goede <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Hans de Goede <[email protected]>
2022-11-21tools/arch/x86: intel_sdsi: Add support for new GUIDDavid E. Box1-11/+25
The structure and content of the On Demand registers is based on the GUID which is read from hardware through sysfs. Add support for decoding the registers of a new GUID 0xF210D9EF. Signed-off-by: David E. Box <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Hans de Goede <[email protected]> Signed-off-by: Hans de Goede <[email protected]>
2022-11-21tools/arch/x86: intel_sdsi: Read more On Demand registersDavid E. Box1-5/+45
Add decoding of the following On Demand register fields: 1. NVRAM content authorization error status 2. Enabled features: telemetry and attestation 3. Key provisioning status 4. NVRAM update limit 5. PCU_CR3_CAPID_CFG Link: https://github.com/intel/intel-sdsi/blob/master/state-certificate-encoding.rst Signed-off-by: David E. Box <[email protected]> Reviewed-by: Hans de Goede <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Hans de Goede <[email protected]>
2022-11-21tools/arch/x86: intel_sdsi: Add Intel On Demand textDavid E. Box1-6/+6
Intel Software Defined Silicon (SDSi) is now officially known as Intel On Demand. Change text in tool to indicate this. Signed-off-by: David E. Box <[email protected]> Reviewed-by: Hans de Goede <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Hans de Goede <[email protected]>
2022-11-21tools/arch/x86: intel_sdsi: Add support for reading state certificatesDavid E. Box1-70/+198
Add option to read and decode On Demand state certificates. Link: https://github.com/intel/intel-sdsi/blob/master/state-certificate-encoding.rst Signed-off-by: David E. Box <[email protected]> Reviewed-by: Hans de Goede <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Hans de Goede <[email protected]>
2022-11-20bpftool: remove function free_btf_vmlinux()Sahid Orentino Ferdjaoui1-6/+1
The function contains a single btf__free() call which can be inlined. Credits to Yonghong Song. Signed-off-by: Sahid Orentino Ferdjaoui <[email protected]> Acked-by: Yonghong Song <[email protected]> Suggested-by: Yonghong Song <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-20bpftool: clean-up usage of libbpf_get_error()Sahid Orentino Ferdjaoui8-45/+39
bpftool is now fully compliant with libbpf 1.0 mode and is not expected to be compiled against pre-1.0 libbpf, so clean up the usage of libbpf_get_error(). The changes keep returned errors negative. - In tools/bpf/bpftool/btf.c this fixes an uninitialized local variable `err` in function do_dump(), because it may now be returned without having been set. - This also removes the checks for NULL pointers before calling btf__free(), because that function already does the check. Signed-off-by: Sahid Orentino Ferdjaoui <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
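A minimal sketch (not the actual bpftool diff) of the pattern this clean-up moves to: with libbpf 1.0, APIs return NULL and set errno on failure, so libbpf_get_error() can be replaced by a plain pointer check while keeping the returned error negative.

    #include <errno.h>
    #include <bpf/btf.h>

    /* Illustrative only: load split BTF, reporting failure the libbpf 1.0 way. */
    static int load_base_btf(const char *path, struct btf **res)
    {
            struct btf *btf = btf__parse_split(path, NULL);

            if (!btf)
                    return -errno;  /* was: err = libbpf_get_error(btf); */
            *res = btf;
            return 0;
    }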
2022-11-20bpftool: fix error message when function can't register struct_opsSahid Orentino Ferdjaoui1-3/+2
strerror() expects to be passed errno, not a value derived from the returned pointer. This also removes the use of libbpf_get_error() from this part of the code. Signed-off-by: Sahid Orentino Ferdjaoui <[email protected]> Acked-by: Yonghong Song <[email protected]> Suggested-by: Quentin Monnet <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
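A hedged sketch of the corrected pattern (function name and message are illustrative, not the exact bpftool code): the attach call returns NULL and sets errno on failure, and that errno value is what strerror() expects.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <bpf/libbpf.h>

    static struct bpf_link *register_struct_ops(struct bpf_map *map)
    {
            struct bpf_link *link = bpf_map__attach_struct_ops(map);

            if (!link)
                    fprintf(stderr, "can't register struct_ops: %s\n",
                            strerror(errno));
            return link;
    }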
2022-11-20bpftool: replace return value PTR_ERR(NULL) with 0Sahid Orentino Ferdjaoui1-4/+2
There is no reason to keep PTR_ERR() when kern_btf is NULL; just return 0. This also removes the use of libbpf_get_error() from this part of the code. Signed-off-by: Sahid Orentino Ferdjaoui <[email protected]> Acked-by: Yonghong Song <[email protected]> Suggested-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-20bpftool: remove support of --legacy option for bpftoolSahid Orentino Ferdjaoui7-36/+6
Following: commit bd054102a8c7 ("libbpf: enforce strict libbpf 1.0 behaviors") commit 93b8952d223a ("libbpf: deprecate legacy BPF map definitions") The --legacy option is no longer relevant as libbpf no longer supports it. libbpf_set_strict_mode() is a no-op operation. Signed-off-by: Sahid Orentino Ferdjaoui <[email protected]> Acked-by: Yonghong Song <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-20bpf: Add type cast unit testsYonghong Song3-0/+198
Three tests are added. One is from John Fastabend ([1]) which tests tracing style access for an xdp program from the kernel ctx. Another is a tc test to test both kernel ctx tracing style access and an explicit non-ctx type cast. The third one is for negative tests and includes two tests: a tp_bpf test where bpf_rdonly_cast() returns an untrusted ptr which cannot be used as a helper argument, and a tracepoint test where the kernel ctx is a u64. Also added the test to DENYLIST.s390x since s390 does not currently support calling kernel functions in JIT mode. [1] https://lore.kernel.org/bpf/[email protected]/ Signed-off-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
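A hedged sketch of the kernel-ctx tracing-style access the tc test exercises (the kfunc declaration mirrors the kernel prototype; this is not the selftest source): bpf_cast_to_kern_ctx() turns the program's __sk_buff ctx into the underlying struct sk_buff, which can then be read directly.

    #include <vmlinux.h>
    #include <bpf/bpf_helpers.h>

    void *bpf_cast_to_kern_ctx(void *obj) __ksym;

    SEC("tc")
    int cast_demo(struct __sk_buff *ctx)
    {
            struct sk_buff *kskb = bpf_cast_to_kern_ctx(ctx);

            /* tracing-style read through the kernel sk_buff */
            return kskb->len == ctx->len ? 0 : 1;
    }

    char _license[] SEC("license") = "GPL";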
2022-11-20bpf/selftests: Add selftests for new task kfuncsDavid Vernet5-0/+640
A previous change added a series of kfuncs for storing struct task_struct objects as referenced kptrs. This patch adds a new task_kfunc test suite for validating their expected behavior. Signed-off-by: David Vernet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-20bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncsDavid Vernet2-3/+3
Kfuncs currently support specifying the KF_TRUSTED_ARGS flag to signal to the verifier that it should enforce that a BPF program passes it a "safe", trusted pointer. Currently, "safe" means that the pointer is either PTR_TO_CTX, or is refcounted. There may be cases, however, where the kernel passes a BPF program a safe / trusted pointer to an object that the BPF program wishes to use as a kptr, but because the object does not yet have a ref_obj_id from the perspective of the verifier, the program would be unable to pass it to a KF_ACQUIRE | KF_TRUSTED_ARGS kfunc. The solution is to expand the set of pointers that are considered trusted according to KF_TRUSTED_ARGS, so that programs can invoke kfuncs with these pointers without getting rejected by the verifier. There is already a PTR_UNTRUSTED flag that is set in some scenarios, such as when a BPF program reads a kptr directly from a map without performing a bpf_kptr_xchg() call. These pointers of course can and should be rejected by the verifier. Unfortunately, however, PTR_UNTRUSTED does not cover all the cases for safety that need to be addressed to adequately protect kfuncs. Specifically, pointers obtained by a BPF program "walking" a struct are _not_ considered PTR_UNTRUSTED according to BPF. For example, say that we were to add a kfunc called bpf_task_acquire(), with KF_ACQUIRE | KF_TRUSTED_ARGS, to acquire a struct task_struct *. If we only used PTR_UNTRUSTED to signal that a task was unsafe to pass to a kfunc, the verifier would mistakenly allow the following unsafe BPF program to be loaded: SEC("tp_btf/task_newtask") int BPF_PROG(unsafe_acquire_task, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired, *nested; nested = task->last_wakee; /* Would not be rejected by the verifier. */ acquired = bpf_task_acquire(nested); if (!acquired) return 0; bpf_task_release(acquired); return 0; } To address this, this patch defines a new type flag called PTR_TRUSTED which tracks whether a PTR_TO_BTF_ID pointer is safe to pass to a KF_TRUSTED_ARGS kfunc or a BPF helper function. PTR_TRUSTED pointers are passed directly from the kernel as a tracepoint or struct_ops callback argument. Any nested pointer that is obtained from walking a PTR_TRUSTED pointer is no longer PTR_TRUSTED. From the example above, the struct task_struct *task argument is PTR_TRUSTED, but the 'nested' pointer obtained from 'task->last_wakee' is not PTR_TRUSTED. A subsequent patch will add kfuncs for storing a task kfunc as a kptr, and then another patch will add selftests to validate. Signed-off-by: David Vernet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
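For contrast, a hedged sketch of the accepted counterpart (kfunc declarations mirror the kernel prototypes; not taken from the selftests): passing the tracepoint argument itself, which is PTR_TRUSTED, to the KF_TRUSTED_ARGS kfunc is allowed, while the walked 'nested' pointer above is not.

    #include <vmlinux.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
    void bpf_task_release(struct task_struct *p) __ksym;

    SEC("tp_btf/task_newtask")
    int BPF_PROG(safe_acquire_task, struct task_struct *task, u64 clone_flags)
    {
            struct task_struct *acquired;

            /* task comes straight from the kernel, so it is PTR_TRUSTED */
            acquired = bpf_task_acquire(task);
            if (!acquired)
                    return 0;
            bpf_task_release(acquired);
            return 0;
    }

    char _license[] SEC("license") = "GPL";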
2022-11-18libbpf: Ignore hashmap__find() result explicitly in btf_dumpAndrii Nakryiko1-1/+1
Coverity is reporting that btf_dump_name_dups() doesn't check return result of hashmap__find() call. This is intentional, so make it explicit with (void) cast. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2022-11-18selftests: cgroup: fix unsigned comparison with less than zeroYueHaibing1-2/+3
'size' is unsigned, so it is never less than zero. Link: https://lkml.kernel.org/r/[email protected] Fixes: 6c26df84e1f2 ("selftests: cgroup: return -errno from cg_read()/cg_write() on failure") Signed-off-by: YueHaibing <[email protected]> Reviewed-by: Yosry Ahmed <[email protected]> Acked-by: Roman Gushchin <[email protected]> Reviewed-by: Kamalesh Babulal <[email protected]> Cc: David Rientjes <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Tejun Heo <[email protected]> Cc: zefan li <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
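A hedged, generic illustration of the bug class (not the selftest's cg_read() itself): with an unsigned type the error branch is dead code, while a signed ssize_t restores it.

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    static ssize_t read_or_errno(int fd, char *buf, size_t len)
    {
            ssize_t size = read(fd, buf, len); /* was declared unsigned */

            if (size < 0)           /* never true for an unsigned 'size' */
                    return -errno;
            return size;
    }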
2022-11-18selftests/vm: add local_config.h and local_config.mk to .gitignoreZhao Gongyi1-0/+2
Add local_config.h and local_config.mk to .gitignore to avoid accidentally checking it in. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Zhao Gongyi <[email protected]> Cc: Shuah Khan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2022-11-18tools/accounting/procacct: remove some unused variablesXiongfeng Wang1-6/+1
Drop the following unused variables inherited from getdelays.c: 'aggr_len', 'len2', 'cmd_type', 'tid', 'containerset', 'containerpath' and 'sigset'. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Xiongfeng Wang <[email protected]> Cc: "Dr. Thomas Orgis" <[email protected]> Cc: Ismael Luceno <[email protected]> Cc: Xiongfeng Wang <[email protected]> Cc: Yang Yingliang <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2022-11-18proc: fixup uptime selftestAlexey Dobriyan1-1/+2
syscall(3) returns -1 and sets errno on error, unlike the "syscall" instruction. Systems which have <= 32/64 CPUs are unaffected. If there are more CPUs, the test won't bounce to all of them before completing. Link: https://lkml.kernel.org/r/Y1bUiT7VRXlXPQa1@p183 Fixes: 1f5bd0547654 ("proc: selftests: test /proc/uptime") Signed-off-by: Alexey Dobriyan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
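A hedged sketch of the convention the fix relies on (helper name and affinity call are illustrative, not the selftest code): the syscall(3) wrapper reports errors by returning -1 and setting errno, unlike a raw "syscall" instruction which returns a negative errno value.

    #define _GNU_SOURCE
    #include <errno.h>
    #include <sched.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int pin_to_cpu(int cpu)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            if (syscall(SYS_sched_setaffinity, 0, sizeof(set), &set) == -1)
                    return -errno;  /* wrapper convention: -1 plus errno */
            return 0;
    }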
2022-11-18selftests/bpf: Skip spin lock failure test on s390xKumar Kartikeya Dwivedi1-0/+6
Instead of adding the whole test to DENYLIST.s390x, which also has success test cases that should be run, just skip over failure test cases in case the JIT does not support kfuncs. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-18Merge tag 'char-misc-6.1-rc6' of ↵Linus Torvalds1-2/+2
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc Pull char/misc driver fixes from Greg KH: "Here are some small char/misc and other driver fixes for 6.1-rc6 to resolve some reported problems. Included in here are: - iio driver fixes - binder driver fix - nvmem driver fix - vme_vmci information leak fix - parport fix - slimbus configuration fix - coreboot firmware bugfix - speakup build fix and crash fix All of these have been in linux-next for a while with no reported issues" * tag 'char-misc-6.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (22 commits) firmware: coreboot: Register bus in module init nvmem: u-boot-env: fix crc32_data_offset on redundant u-boot-env slimbus: qcom-ngd: Fix build error when CONFIG_SLIM_QCOM_NGD_CTRL=y && CONFIG_QCOM_RPROC_COMMON=m docs: update mediator contact information in CoC doc slimbus: stream: correct presence rate frequencies nvmem: lan9662-otp: Fix compatible string binder: validate alloc->mm in ->mmap() handler parport_pc: Avoid FIFO port location truncation siox: fix possible memory leak in siox_device_add() misc/vmw_vmci: fix an infoleak in vmci_host_do_receive_datagram() speakup: replace utils' u_char with unsigned char speakup: fix a segfault caused by switching consoles tools: iio: iio_generic_buffer: Fix read size iio: imu: bno055: uninitialized variable bug in bno055_trigger_handler() iio: adc: at91_adc: fix possible memory leak in at91_adc_allocate_trigger() iio: adc: mp2629: fix potential array out of bound access iio: adc: mp2629: fix wrong comparison of channel iio: pressure: ms5611: changed hardcoded SPI speed to value limited iio: pressure: ms5611: fixed value compensation bug iio: accel: bma400: Ensure VDDIO is enable defore reading the chip ID. ...
2022-11-18kselftest/arm64: Use preferred form for predicate load/storesMark Brown1-2/+2
The preferred form of the str/ldr for predicate registers with an immediate of zero is to omit the zero, and the clang built-in assembler rejects the zero immediate. Drop the immediate. Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2022-11-18selftests/net: fix missing xdp_dummyWang Yufen5-15/+23
After commit afef88e65554 ("selftests/bpf: Store BPF object files with .bpf.o extension"), we should use xdp_dummy.bpf.o instead of xdp_dummy.o. In addition, use the BPF_FILE variable to hold the BPF object file name, so it can be more easily identified and modified. Fixes: afef88e65554 ("selftests/bpf: Store BPF object files with .bpf.o extension") Signed-off-by: Wang Yufen <[email protected]> Cc: Daniel Müller <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-11-18selftests: add a selftest for sctp vrfXin Long3-0/+317
This patch adds 12 small test cases: 01-04 test the sysctl net.sctp.l3mdev_accept. 05-10 test that the connection can be created only when binding to the right l3mdev device. 11-12 test two socks binding to different l3mdev devices at the same time; each of them can process the packets from the corresponding peer. The tests run for both IPv4 and IPv6 SCTP. Signed-off-by: Xin Long <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-11-17selftests: mptcp: fix mibit vs mbit mix upMatthieu Baerts1-2/+3
The estimated time assumed the rate was expressed in mibit (bit * 1024^2), but it is in mbit (bit * 1000^2). This makes the threshold higher, in a more realistic way, to avoid false positives reported by CI instances. Before this patch the thresholds were at 7561/4005ms; now they are at 7906/4178ms. While at it, also fix a typo in the linked comment, spotted by Mat. Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/310 Fixes: 1a418cb8e888 ("mptcp: simult flow self-tests") Suggested-by: Paolo Abeni <[email protected]> Reviewed-by: Mat Martineau <[email protected]> Signed-off-by: Matthieu Baerts <[email protected]> Signed-off-by: Mat Martineau <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]>
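A hedged arithmetic sketch (the selftest itself is shell; this only illustrates the unit fix): treating the rate as mbit (1000^2 bits/s) instead of mibit (1024^2 bits/s) raises the expected transfer time, and thus the threshold, by a factor of 1024^2 / 1000^2, roughly 1.05.

    /* Expected transfer time in ms for 'bytes' at 'rate_mbit' megabit/s. */
    static unsigned long long expected_time_ms(unsigned long long bytes,
                                               unsigned long long rate_mbit)
    {
            return bytes * 8 * 1000 / (rate_mbit * 1000 * 1000);
    }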
2022-11-17selftests: mptcp: run mptcp_sockopt from a new netnsMatthieu Baerts1-4/+5
Not running it from a new netns causes issues if some MPTCP settings are modified, e.g. if MPTCP is disabled from the sysctl knob, if multiple addresses are available and added to the MPTCP path-manager, etc. In these cases, the created connection will not behave as expected, e.g. unable to create an MPTCP socket, more than one subflow is seen, etc. A new "sandbox" net namespace is now created and used to run mptcp_sockopt from this controlled environment. Fixes: ce9979129a0b ("selftests: mptcp: add mptcp getsockopt test cases") Reviewed-by: Mat Martineau <[email protected]> Signed-off-by: Matthieu Baerts <[email protected]> Signed-off-by: Mat Martineau <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]>
2022-11-17selftests: mptcp: gives slow test-case more timePaolo Abeni1-3/+3
On slow or busy VM, some test-cases still fail because the data transfer completes before the endpoint manipulation actually took effect. Address the issue by artificially increasing the runtime for the relevant test-cases. Fixes: ef360019db40 ("selftests: mptcp: signal addresses testcases") Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/309 Reviewed-by: Mat Martineau <[email protected]> Signed-off-by: Paolo Abeni <[email protected]> Signed-off-by: Mat Martineau <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]>
2022-11-17selftests/bpf: Temporarily disable linked list testsKumar Kartikeya Dwivedi4-16/+34
The latest clang nightly as of writing crashes with the given test case for BPF linked lists wherever global glock, ghead, glock2 are used, hence comment out the parts that cause the crash, and prepare this commit so that it can be reverted when the fix has been made. More context in [0]. [0]: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17selftests/bpf: Add BTF sanity testsKumar Kartikeya Dwivedi1-0/+485
Preparing the metadata for bpf_list_head involves a complicated parsing step and type resolution for the contained value. Ensure that corner cases are tested against and invalid specifications in source are duly rejected. Also include tests for incorrect ownership relationships in the BTF. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17selftests/bpf: Add BPF linked list API testsKumar Kartikeya Dwivedi6-0/+1264
Include various tests covering the success and failure cases. Also, run the success cases at runtime to verify correctness of linked list manipulation routines, in addition to ensuring successful verification. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17selftests/bpf: Add failure test cases for spin lock pairingKumar Kartikeya Dwivedi2-1/+292
First, ensure that whenever a bpf_spin_lock is present in an allocation, the reg->id is preserved. This won't be true for global variables however, since they have a single map value per map, hence the verifier hardcodes it to 0 (so that multiple pseudo ldimm64 insns can yield the same lock object per map at a given offset). Next, add test cases for all possible combinations (kptr, global, map value, inner map value). Since we lifted the restriction on locking in inner maps, also add test cases for them. Currently, each lookup into an inner map gets a fresh reg->id, so even if the reg->map_ptr is the same, they will be treated as separate allocations and the incorrect unlock pairing will be rejected. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
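A hedged sketch of one pairing the new failure tests expect the verifier to reject (map layout and names are illustrative, not the selftest sources): each bpf_map_lookup_elem() gets its own reg->id, so taking the lock through one value and releasing it through another is an incorrect pairing.

    #include <vmlinux.h>
    #include <bpf/bpf_helpers.h>

    struct lock_val {
            struct bpf_spin_lock lock;
            int data;
    };

    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __uint(max_entries, 2);
            __type(key, int);
            __type(value, struct lock_val);
    } lock_map SEC(".maps");

    SEC("tc")
    int lock_pair_mismatch(void *ctx)
    {
            struct lock_val *v1, *v2;
            int k1 = 0, k2 = 1;

            v1 = bpf_map_lookup_elem(&lock_map, &k1);
            v2 = bpf_map_lookup_elem(&lock_map, &k2);
            if (!v1 || !v2)
                    return 0;

            bpf_spin_lock(&v1->lock);
            bpf_spin_unlock(&v2->lock); /* wrong allocation: expected to be rejected */
            return 0;
    }

    char _license[] SEC("license") = "GPL";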
2022-11-17selftests/bpf: Update spinlock selftestKumar Kartikeya Dwivedi3-47/+51
Make updates in preparation for adding more test cases to this selftest: - Convert from CHECK_ to ASSERT macros. - Use BPF skeleton - Fix typo sping -> spin - Rename spinlock.c -> spin_lock.c Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17selftests/bpf: Add __contains macro to bpf_experimental.hKumar Kartikeya Dwivedi1-0/+2
Add user facing __contains macro which provides a convenient wrapper over the verbose kernel specific BTF declaration tag required to annotate BPF list head structs in user types. Acked-by: Dave Marchevsky <[email protected]> Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
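A hedged sketch of what the macro expands to (the exact tag string lives in bpf_experimental.h): a BTF declaration tag naming the contained value type and its bpf_list_node member.

    /* Illustrative expansion; see bpf_experimental.h for the real definition. */
    #define __contains(name, node) \
            __attribute__((btf_decl_tag("contains:" #name ":" #node)))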
2022-11-17bpf: Introduce single ownership BPF linked list APIKumar Kartikeya Dwivedi1-0/+28
Add a linked list API for use in BPF programs, where it expects protection from the bpf_spin_lock in the same allocation as the bpf_list_head. For now, only one bpf_spin_lock can be present hence that is assumed to be the one protecting the bpf_list_head. The following functions are added to kick things off: // Add node to beginning of list void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node); // Add node to end of list void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node); // Remove node at beginning of list and return it struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head); // Remove node at end of list and return it struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head); The lock protecting the bpf_list_head needs to be taken for all operations. The verifier ensures that the lock that needs to be taken is always held, and only the correct lock is taken for these operations. These checks are made statically by relying on the reg->id preserved for registers pointing into regions having both bpf_spin_lock and the objects protected by it. The comment over check_reg_allocation_locked in this change describes the logic in detail. Note that bpf_list_push_front and bpf_list_push_back are meant to consume the object containing the node in the 1st argument, however that specific mechanism is intended to not release the ref_obj_id directly until the bpf_spin_unlock is called. In this commit, nothing is done, but the next commit will be introducing logic to handle this case, so it has been left as is for now. bpf_list_pop_front and bpf_list_pop_back delete the first or last item of the list respectively, and return pointer to the element at the list_node offset. The user can then use container_of style macro to get the actual entry type. The verifier however statically knows the actual type, so the safety properties are still preserved. With these additions, programs can now manage their own linked lists and store their objects in them. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
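A hedged end-to-end usage sketch (illustrative only; it assumes the declarations from bpf_experimental.h plus the bpf_obj_new/bpf_obj_drop entries below, and the foo/bar layout mirrors the example in the bpf_obj_new entry): every list operation runs under the bpf_spin_lock that sits in the same allocation as the bpf_list_head.

    #include <vmlinux.h>
    #include <bpf/bpf_helpers.h>
    #include "bpf_experimental.h"

    struct bar {
            struct bpf_list_node node;
    };

    struct foo {
            struct bpf_spin_lock lock;
            struct bpf_list_head head __contains(bar, node);
    };

    #ifndef container_of
    #define container_of(ptr, type, member) \
            ((type *)((void *)(ptr) - offsetof(type, member)))
    #endif

    SEC("tc")
    int list_demo(void *ctx)
    {
            struct bpf_list_node *n;
            struct foo *f;
            struct bar *b;

            f = bpf_obj_new(typeof(*f));
            if (!f)
                    return 0;
            b = bpf_obj_new(typeof(*b));
            if (!b) {
                    bpf_obj_drop(f);
                    return 0;
            }

            bpf_spin_lock(&f->lock);
            bpf_list_push_front(&f->head, &b->node); /* list takes ownership of b */
            n = bpf_list_pop_back(&f->head);
            bpf_spin_unlock(&f->lock);

            if (n)
                    bpf_obj_drop(container_of(n, struct bar, node));
            bpf_obj_drop(f);
            return 0;
    }

    char _license[] SEC("license") = "GPL";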
2022-11-17bpf: Introduce bpf_obj_dropKumar Kartikeya Dwivedi1-0/+13
Introduce bpf_obj_drop, which is the kfunc used to free allocated objects (allocated using bpf_obj_new). Pairing with bpf_obj_new, it implicitly destructs the fields part of object automatically without user intervention. Just like the previous patch, btf_struct_meta that is needed to free up the special fields is passed as a hidden argument to the kfunc. For the user, a convenience macro hides over the kernel side kfunc which is named bpf_obj_drop_impl. Continuing the previous example: void prog(void) { struct foo *f; f = bpf_obj_new(typeof(*f)); if (!f) return; bpf_obj_drop(f); } Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17bpf: Introduce bpf_obj_newKumar Kartikeya Dwivedi1-0/+25
Introduce type safe memory allocator bpf_obj_new for BPF programs. The kernel side kfunc is named bpf_obj_new_impl, as passing hidden arguments to kfuncs still requires having them in prototype, unlike BPF helpers which always take 5 arguments and have them checked using bpf_func_proto in verifier, ignoring unset argument types. Introduce __ign suffix to ignore a specific kfunc argument during type checks, then use this to introduce support for passing type metadata to the bpf_obj_new_impl kfunc. The user passes the BTF ID of the type it wants to allocate in program BTF, the verifier then rewrites the first argument as the size of this type, after performing some sanity checks (to ensure it exists and it is a struct type). The second argument is also fixed up and passed by the verifier. This is the btf_struct_meta for the type being allocated. It would be needed mostly for the offset array which is required for zero initializing special fields while leaving the rest of storage in uninitialized state. It would also be needed in the next patch to perform proper destruction of the object's special fields. Under the hood, bpf_obj_new will call bpf_mem_alloc and bpf_mem_free, using the any-context BPF memory allocator introduced recently. To this end, a global instance of the BPF memory allocator is initialized on boot to be used for this purpose. This 'bpf_global_ma' serves all allocations for bpf_obj_new. In the future, bpf_obj_new variants will allow specifying a custom allocator. Note that now that bpf_obj_new can be used to allocate objects that can be linked to BPF linked list (when future linked list helpers are available), we need to also free the elements using bpf_mem_free. However, since the draining of elements is done outside the bpf_spin_lock, we need to do migrate_disable around the call since bpf_list_head_free can be called from map free path where migration is enabled. Otherwise, when called from BPF programs migration is already disabled. A convenience macro is included in the bpf_experimental.h header to hide over the ugly details of the implementation, leading to user code looking similar to a language level extension which allocates and constructs fields of a user type. struct bar { struct bpf_list_node node; }; struct foo { struct bpf_spin_lock lock; struct bpf_list_head head __contains(bar, node); }; void prog(void) { struct foo *f; f = bpf_obj_new(typeof(*f)); if (!f) return; ... } A key piece of this story is still missing, i.e. the free function, which will come in the next patch. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17bpf: Rewrite kfunc argument handlingKumar Kartikeya Dwivedi3-4/+4
As we continue to add more features, argument types, kfunc flags, and different extensions to kfuncs, the code to verify the correctness of the kfunc prototype wrt the passed in registers has become ad-hoc and ugly to read. To make life easier, and make a very clear split between different stages of argument processing, move all the code into verifier.c and refactor into easier to read helpers and functions. This also makes sharing code within the verifier easier with kfunc argument processing. This will be more and more useful in later patches as we are now moving to implement very core BPF helpers as kfuncs, to keep them experimental before baking into UAPI. Remove all kfunc related bits now from btf_check_func_arg_match, as users have been converted away to refactored kfunc argument handling. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netJakub Kicinski16-29/+35907
include/linux/bpf.h 1f6e04a1c7b8 ("bpf: Fix offset calculation error in __copy_map_value and zero_map_value") aa3496accc41 ("bpf: Refactor kptr_off_tab into btf_record") f71b2f64177a ("bpf: Refactor map->off_arr handling") https://lore.kernel.org/all/[email protected]/ Signed-off-by: Jakub Kicinski <[email protected]>
2022-11-18random: use random.trust_{bootloader,cpu} command line option onlyJason A. Donenfeld1-2/+0
It's very unusual to have both a command line option and a compile time option, and apparently that's confusing to people. Also, basically everybody enables the compile time option now, which means people who want to disable this wind up having to use the command line option to ensure that anyway. So just reduce the number of moving pieces and nix the compile time option in favor of the more versatile command line option. Signed-off-by: Jason A. Donenfeld <[email protected]>
2022-11-17libbpf: Check the validity of size in user_ring_buffer__reserve()Hou Tao1-0/+4
The top two bits of size are used as the busy and discard flags, so reject a reservation that has any of these special bits set in the size. With the addition of the validity check, there is also no need to check whether or not total_size overflows. Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
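A hedged sketch of the check being added (a simplified extract, not the full libbpf function): the two flag bits come from the UAPI ring buffer constants, and a size carrying either of them is rejected up front, which also removes the need for a separate total_size overflow check.

    #include <errno.h>
    #include <linux/bpf.h>

    static int validate_reserve_size(__u32 size)
    {
            /* top two bits are reserved as the busy and discard flags */
            if (size & (BPF_RINGBUF_BUSY_BIT | BPF_RINGBUF_DISCARD_BIT))
                    return -E2BIG;
            return 0;
    }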
2022-11-17libbpf: Handle size overflow for user ringbuf mmapHou Tao1-2/+8
Similar to the overflow problem with ringbuf mmap, in user_ringbuf_map() 2 * max_entries may overflow u32 when mapping the writable region. Fix it by casting the size of the writable mmap region into a __u64 and checking whether or not there will be overflow during mmap. Fixes: b66ccae01f1d ("bpf: Add libbpf logic for user-space ring buffer") Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2022-11-17libbpf: Handle size overflow for ringbuf mmapHou Tao1-4/+8
The maximum size of a ringbuf is 2GB on an x86-64 host, so 2 * max_entries will overflow u32 when mapping the producer page and data pages. Only casting max_entries to size_t is not enough, because for a 32-bit application on a 64-bit kernel the size of the read-only mmap region could also overflow size_t. Fix it by casting the size of the read-only mmap region into a __u64 and checking whether or not there will be overflow during mmap. Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support") Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
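A hedged sketch of the overflow guard (simplified from the description above, not the exact libbpf code): compute the mmap length as a __u64 first and refuse it if it does not fit in size_t, which catches the 32-bit application on 64-bit kernel case.

    #include <stdbool.h>
    #include <unistd.h>
    #include <linux/types.h>

    static bool ringbuf_mmap_len(__u32 max_entries, __u64 *len)
    {
            __u64 mmap_sz = (__u64)getpagesize() + 2 * (__u64)max_entries;

            if (mmap_sz != (__u64)(size_t)mmap_sz)
                    return false;   /* would overflow size_t on this host */
            *len = mmap_sz;
            return true;
    }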