path: root/tools/testing
2022-11-21  KVM: selftests: Rename 'evmcs_test' to 'hyperv_evmcs'  [Vitaly Kuznetsov]  (3 files, -2/+2)
Conform to the rest of the Hyper-V emulation selftests which have the 'hyperv' prefix. Get rid of the '_test' suffix as well, as the purpose of this code is fairly obvious. Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-21  KVM: selftests: hyperv_svm_test: Introduce L2 TLB flush test  [Vitaly Kuznetsov]  (2 files, -5/+58)
Enable Hyper-V L2 TLB flush and check that Hyper-V TLB flush hypercalls from L2 don't exit to L1 unless 'TlbLockCount' is set in the Partition assist page. Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
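For reference, the Partition assist page involved here is tiny. A minimal sketch of the structure and the knob the test relies on (the field name follows hyperv-tlfs.h; the helper is illustrative, not the selftest's actual code):

	#include <stdint.h>

	/* Per the Hyper-V TLFS, the page starts with TlbLockCount. */
	struct hv_partition_assist_pg {
		uint32_t tlb_lock_count;
	};

	/* Illustrative: with a non-zero TlbLockCount, L2 TLB flush
	 * hypercalls are expected to exit to L1 rather than be
	 * handled by L0 directly. */
	static inline void hv_force_tlb_flush_exits(struct hv_partition_assist_pg *pg)
	{
		pg->tlb_lock_count = 2;
	}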
2022-11-21  KVM: selftests: evmcs_test: Introduce L2 TLB flush test  [Vitaly Kuznetsov]  (2 files, -2/+50)
Enable Hyper-V L2 TLB flush and check that Hyper-V TLB flush hypercalls from L2 don't exit to L1 unless 'TlbLockCount' is set in the Partition assist page. Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-21  KVM: selftests: Introduce rdmsr_from_l2() and use it for MSR-Bitmap tests  [Vitaly Kuznetsov]  (2 files, -21/+23)
Hyper-V MSR-Bitmap tests do RDMSR from L2 to exit to L1. While 'evmcs_test' correctly clobbers all GPRs (which are not preserved), 'hyperv_svm_test' does not. Introduce a more generic rdmsr_from_l2() to avoid code duplication and remove hardcoding of MSRs. Do not put it in common code because it is really just a selftests bug rather than a processor feature that requires it. Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
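A minimal sketch of what such a helper can look like, with the clobber list making the "GPRs are not preserved" assumption explicit (illustrative, not necessarily the exact selftest code):

	#include <stdint.h>

	/* RDMSR from L2 to force an exit to L1; L1 does not preserve
	 * GPRs across the exit, so mark them all as clobbered. */
	static inline void rdmsr_from_l2(uint32_t msr)
	{
		__asm__ __volatile__("rdmsr" : : "c"(msr) :
				     "rax", "rbx", "rdx", "rsi", "rdi",
				     "r8", "r9", "r10", "r11", "r12",
				     "r13", "r14", "r15", "memory");
	}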
2022-11-21  KVM: selftests: Stuff RAX/RCX with 'safe' values in vmmcall()/vmcall()  [Vitaly Kuznetsov]  (3 files, -10/+24)
vmmcall()/vmcall() are used to exit from L2 to L1 and no concrete hypercall ABI is currently followed. With the introduction of Hyper-V L2 TLB flush it becomes (theoretically) possible that L0 will take responsibility for handling the call and no L1 exit will happen. Prevent this by stuffing RAX (KVM ABI) and RCX (Hyper-V ABI) with 'safe' values. While at it, convert vmmcall() to 'static inline', make it set up a stack frame and move it to include/x86_64/svm_util.h. Signed-off-by: Vitaly Kuznetsov <[email protected]> Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
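A sketch of the resulting vmmcall() shape (the 'safe' constants are illustrative placeholders; the point is only that RAX/RCX must not decode as a hypercall L0 would consume):

	/* Exit from L2 to L1 via VMMCALL without looking like a real
	 * hypercall under either the KVM (RAX) or Hyper-V (RCX) ABI. */
	static inline void vmmcall(void)
	{
		__asm__ __volatile__("push %%rbp; mov %%rsp, %%rbp; "
				     "vmmcall; pop %%rbp"
				     : : "a"(0xdeadbeefull), "c"(0xbeefdeadull)
				     : "memory");
	}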
2022-11-21  KVM: selftests: Allocate Hyper-V partition assist page  [Vitaly Kuznetsov]  (2 files, -0/+10)
In preparation for testing Hyper-V L2 TLB flush hypercalls, allocate the so-called Partition assist page. Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-21  KVM: selftests: Create a vendor independent helper to allocate Hyper-V specific test pages  [Vitaly Kuznetsov]  (6 files, -33/+48)
There's no need to pollute VMX and SVM code with Hyper-V specific stuff and allocate Hyper-V specific test pages for all tests, as only a few really need them. Create a dedicated struct and an allocation helper. Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
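The dedicated struct plausibly groups each page's host mapping, GPA, and guest pointer together; a sketch under that assumption (field names are guesses in the spirit of the description, not verbatim):

	#include <stdint.h>

	struct hyperv_test_pages {
		/* VP assist page */
		void *vp_assist_hva;
		uint64_t vp_assist_gpa;
		void *vp_assist;

		/* Partition assist page */
		void *partition_assist_hva;
		uint64_t partition_assist_gpa;
		void *partition_assist;

		/* Enlightened VMCS */
		void *enlightened_vmcs_hva;
		uint64_t enlightened_vmcs_gpa;
		void *enlightened_vmcs;
	};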
2022-11-21  KVM: selftests: Split off load_evmcs() from load_vmcs()  [Vitaly Kuznetsov]  (3 files, -22/+24)
In preparation for putting Hyper-V specific test pages into a dedicated struct, move the eVMCS load logic out of load_vmcs(). Tests call load_vmcs() directly and the only one which needs the 'enlightened' version is evmcs_test, so there's not much gain in keeping this merged. Temporarily pass both GPA and HVA to load_evmcs(). Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-21  KVM: selftests: Move Hyper-V VP assist page enablement out of evmcs.h  [Vitaly Kuznetsov]  (5 files, -38/+56)
The Hyper-V VP assist page is not eVMCS specific, it is also used for enlightened nSVM. Move the code to a vendor-neutral place. Reviewed-by: Maxim Levitsky <[email protected]> Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
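In vendor-neutral form the enablement is a single MSR write; a sketch (MSR index and enable bit follow the TLFS/hyperv-tlfs.h; wrmsr() is assumed from the selftest library):

	#include <stdint.h>

	#define HV_X64_MSR_VP_ASSIST_PAGE		0x40000073
	#define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE	0x00000001ull

	extern void wrmsr(uint32_t msr, uint64_t value);	/* selftest library */

	static inline void enable_vp_assist(uint64_t vp_assist_pa)
	{
		wrmsr(HV_X64_MSR_VP_ASSIST_PAGE,
		      (vp_assist_pa & ~0xfffull) | HV_X64_MSR_VP_ASSIST_PAGE_ENABLE);
	}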
2022-11-21  KVM: selftests: Sync 'struct hv_vp_assist_page' definition with hyperv-tlfs.h  [Vitaly Kuznetsov]  (1 file, -5/+17)
The 'struct hv_vp_assist_page' definition doesn't match TLFS. Also, define 'struct hv_nested_enlightenments_control' and use it instead of an opaque '__u64'. Reviewed-by: Maxim Levitsky <[email protected]> Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-21  KVM: selftests: Sync 'struct hv_enlightened_vmcs' definition with hyperv-tlfs.h  [Vitaly Kuznetsov]  (1 file, -4/+6)
The 'struct hv_enlightened_vmcs' definition in selftests is not '__packed' and so we rely on the compiler doing the right padding. This is not obvious, so it seems beneficial to use the same definition as in the kernel. Reviewed-by: Maxim Levitsky <[email protected]> Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-21  KVM: selftests: Hyper-V PV TLB flush selftest  [Vitaly Kuznetsov]  (4 files, -0/+693)
Introduce a selftest for Hyper-V PV TLB flush hypercalls (HvFlushVirtualAddressSpace/HvFlushVirtualAddressSpaceEx, HvFlushVirtualAddressList/HvFlushVirtualAddressListEx). The test creates one 'sender' vCPU and two 'worker' vCPUs which busy-loop reading from a certain GVA, checking the observed value. The sender vCPU swaps the data page with another page filled with a different value. The expectation for the workers is also altered. Without a TLB flush on the worker vCPUs, they may continue to observe the old value. To guard against accidental TLB flushes on the worker vCPUs the test is repeated 100 times. Hyper-V TLB flush hypercalls are tested in both 'normal' and 'XMM fast' modes. Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-21  NFC: nci: Extend virtual NCI deinit test  [Dmitry Vyukov]  (1 file, -0/+11)
Extend the test to check the scenario where the NCI core tries to send data to an already closed device, to ensure that nothing bad happens. Signed-off-by: Dmitry Vyukov <[email protected]> Cc: Bongsu Jeon <[email protected]> Cc: Krzysztof Kozlowski <[email protected]> Cc: Jakub Kicinski <[email protected]> Cc: [email protected] Signed-off-by: David S. Miller <[email protected]>
2022-11-20  bpftool: remove support of --legacy option for bpftool  [Sahid Orentino Ferdjaoui]  (1 file, -3/+3)
Following commit bd054102a8c7 ("libbpf: enforce strict libbpf 1.0 behaviors") and commit 93b8952d223a ("libbpf: deprecate legacy BPF map definitions"), the --legacy option is no longer relevant as libbpf no longer supports it. libbpf_set_strict_mode() is a no-op. Signed-off-by: Sahid Orentino Ferdjaoui <[email protected]> Acked-by: Yonghong Song <[email protected]> Reviewed-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-20  bpf: Add type cast unit tests  [Yonghong Song]  (3 files, -0/+198)
Three tests are added. One is from John Fastabend ([1]) which tests tracing-style access for an xdp program from the kernel ctx. Another is a tc test to test both kernel ctx tracing-style access and explicit non-ctx type cast. The third one covers two negative tests: a tp_bpf test where bpf_rdonly_cast() returns an untrusted ptr which cannot be used as a helper argument, and a tracepoint test where the kernel ctx is a u64. Also added the test to DENYLIST.s390x since s390 does not currently support calling kernel functions in JIT mode. [1] https://lore.kernel.org/bpf/[email protected]/ Signed-off-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-20  bpf/selftests: Add selftests for new task kfuncs  [David Vernet]  (5 files, -0/+640)
A previous change added a series of kfuncs for storing struct task_struct objects as referenced kptrs. This patch adds a new task_kfunc test suite for validating their expected behavior. Signed-off-by: David Vernet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-20  bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs  [David Vernet]  (2 files, -3/+3)
Kfuncs currently support specifying the KF_TRUSTED_ARGS flag to signal to the verifier that it should enforce that a BPF program passes it a "safe", trusted pointer. Currently, "safe" means that the pointer is either PTR_TO_CTX, or is refcounted. There may be cases, however, where the kernel passes a BPF program a safe / trusted pointer to an object that the BPF program wishes to use as a kptr, but because the object does not yet have a ref_obj_id from the perspective of the verifier, the program would be unable to pass it to a KF_ACQUIRE | KF_TRUSTED_ARGS kfunc. The solution is to expand the set of pointers that are considered trusted according to KF_TRUSTED_ARGS, so that programs can invoke kfuncs with these pointers without getting rejected by the verifier. There is already a PTR_UNTRUSTED flag that is set in some scenarios, such as when a BPF program reads a kptr directly from a map without performing a bpf_kptr_xchg() call. These pointers of course can and should be rejected by the verifier. Unfortunately, however, PTR_UNTRUSTED does not cover all the cases for safety that need to be addressed to adequately protect kfuncs. Specifically, pointers obtained by a BPF program "walking" a struct are _not_ considered PTR_UNTRUSTED according to BPF. For example, say that we were to add a kfunc called bpf_task_acquire(), with KF_ACQUIRE | KF_TRUSTED_ARGS, to acquire a struct task_struct *. If we only used PTR_UNTRUSTED to signal that a task was unsafe to pass to a kfunc, the verifier would mistakenly allow the following unsafe BPF program to be loaded:

	SEC("tp_btf/task_newtask")
	int BPF_PROG(unsafe_acquire_task, struct task_struct *task, u64 clone_flags)
	{
		struct task_struct *acquired, *nested;

		nested = task->last_wakee;

		/* Would not be rejected by the verifier. */
		acquired = bpf_task_acquire(nested);
		if (!acquired)
			return 0;

		bpf_task_release(acquired);
		return 0;
	}

To address this, this patch defines a new type flag called PTR_TRUSTED which tracks whether a PTR_TO_BTF_ID pointer is safe to pass to a KF_TRUSTED_ARGS kfunc or a BPF helper function. PTR_TRUSTED pointers are passed directly from the kernel as a tracepoint or struct_ops callback argument. Any nested pointer that is obtained from walking a PTR_TRUSTED pointer is no longer PTR_TRUSTED. From the example above, the struct task_struct *task argument is PTR_TRUSTED, but the 'nested' pointer obtained from 'task->last_wakee' is not PTR_TRUSTED. A subsequent patch will add kfuncs for storing a task kfunc as a kptr, and then another patch will add selftests to validate. Signed-off-by: David Vernet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-18  selftests: cgroup: fix unsigned comparison with less than zero  [YueHaibing]  (1 file, -2/+3)
'size' is unsigned, so it is never less than zero. Link: https://lkml.kernel.org/r/[email protected] Fixes: 6c26df84e1f2 ("selftests: cgroup: return -errno from cg_read()/cg_write() on failure") Signed-off-by: YueHaibing <[email protected]> Reviewed-by: Yosry Ahmed <[email protected]> Acked-by: Roman Gushchin <[email protected]> Reviewed-by: Kamalesh Babulal <[email protected]> Cc: David Rientjes <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Tejun Heo <[email protected]> Cc: zefan li <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
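The bug class is easy to reproduce in isolation; a minimal sketch (generic C, not the selftest's actual code):

	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[64];

		/* BUG: read() returns ssize_t; storing it in an unsigned
		 * type makes the `< 0` error check always false, so
		 * failures are silently treated as success. */
		size_t bad = read(-1, buf, sizeof(buf));
		if (bad < 0)		/* always false; compilers warn here */
			perror("read");

		/* Fix: keep the signed type for the error check. */
		ssize_t good = read(-1, buf, sizeof(buf));
		if (good < 0)
			perror("read");

		return 0;
	}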
2022-11-18  selftests/vm: add local_config.h and local_config.mk to .gitignore  [Zhao Gongyi]  (1 file, -0/+2)
Add local_config.h and local_config.mk to .gitignore to avoid accidentally checking them in. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Zhao Gongyi <[email protected]> Cc: Shuah Khan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2022-11-18  proc: fixup uptime selftest  [Alexey Dobriyan]  (1 file, -1/+2)
syscall(3) returns -1 and sets errno on error, unlike the "syscall" instruction. Systems which have <= 32/64 CPUs are unaffected. The test won't bounce to all CPUs before completing if there are more of them. Link: https://lkml.kernel.org/r/Y1bUiT7VRXlXPQa1@p183 Fixes: 1f5bd0547654 ("proc: selftests: test /proc/uptime") Signed-off-by: Alexey Dobriyan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
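The two error conventions in a minimal sketch (generic C, not the selftest itself): the raw instruction returns -errno in RAX, while the libc wrapper folds that into -1 plus errno:

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		/* Deliberately invalid syscall number: the raw "syscall"
		 * instruction would put -ENOSYS in RAX, but syscall(3)
		 * returns -1 and sets errno instead. */
		long ret = syscall(-1);
		if (ret == -1)
			printf("errno = %d (%s)\n", errno, strerror(errno));
		return 0;
	}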
2022-11-18  selftests/bpf: Skip spin lock failure test on s390x  [Kumar Kartikeya Dwivedi]  (1 file, -0/+6)
Instead of adding the whole test to DENYLIST.s390x (the test also has success cases that should be run), just skip over the failure test cases when the JIT does not support kfuncs. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-18  KVM: selftests: Export vm_vaddr_unused_gap() to make it possible to request unmapped ranges  [Vitaly Kuznetsov]  (2 files, -2/+3)
Currently, tests can only request a new vaddr range by using vm_vaddr_alloc()/vm_vaddr_alloc_page()/vm_vaddr_alloc_pages(), but these functions allocate and map physical pages too. Make it possible to request an unmapped range too. Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-18  KVM: selftests: Fill in vm->vpages_mapped bitmap in virt_map() too  [Vitaly Kuznetsov]  (1 file, -2/+3)
Similar to vm_vaddr_alloc(), virt_map() needs to reflect the mapping in vm->vpages_mapped. While at it, remove unneeded code wrapping in vm_vaddr_alloc(). Reviewed-by: Andrew Jones <[email protected]> Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-18  KVM: selftests: Hyper-V PV IPI selftest  [Vitaly Kuznetsov]  (5 files, -4/+349)
Introduce a selftest for Hyper-V PV IPI hypercalls (HvCallSendSyntheticClusterIpi, HvCallSendSyntheticClusterIpiEx). The test creates one 'sender' vCPU and two 'receiver' vCPUs and then issues various combinations of send-IPI hypercalls in both 'normal' and 'fast' (with XMM input where necessary) mode. Later, the test checks whether IPIs were delivered to the expected destination vCPU[s]. Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-18  KVM: selftests: Move the function doing Hyper-V hypercall to a common header  [Vitaly Kuznetsov]  (2 files, -19/+20)
All Hyper-V specific tests issuing hypercalls need this. Reviewed-by: Maxim Levitsky <[email protected]> Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
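The shared helper follows the Hyper-V hypercall convention (control in RCX, input GPA in RDX, output GPA in R8, status returned in RAX); a sketch of that shape for Intel ('vmcall'; AMD would use 'vmmcall'), illustrative rather than verbatim:

	#include <stdint.h>

	static inline uint64_t hyperv_hypercall(uint64_t control,
						uint64_t input_gpa,
						uint64_t output_gpa)
	{
		uint64_t hv_status;

		__asm__ __volatile__("mov %3, %%r8\n\t"
				     "vmcall"
				     : "=a"(hv_status),
				       "+c"(control), "+d"(input_gpa)
				     : "r"(output_gpa)
				     : "cc", "memory", "r8", "r9", "r10", "r11");

		return hv_status;
	}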
2022-11-18  KVM: selftests: Move HYPERV_LINUX_OS_ID definition to a common header  [Vitaly Kuznetsov]  (2 files, -4/+5)
HYPERV_LINUX_OS_ID needs to be written to HV_X64_MSR_GUEST_OS_ID by each Hyper-V specific selftest. Reviewed-by: Maxim Levitsky <[email protected]> Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
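Both definitions are one-liners; a sketch (the MSR index comes from the TLFS, and the 0x8100 vendor encoding matches the kernel's own Linux guest OS ID; wrmsr() is assumed from the selftest library):

	#include <stdint.h>

	#define HV_X64_MSR_GUEST_OS_ID	0x40000000
	#define HYPERV_LINUX_OS_ID	((uint64_t)0x8100 << 48)

	extern void wrmsr(uint32_t msr, uint64_t value);	/* selftest library */

	static inline void hyperv_identify_guest(void)
	{
		wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
	}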
2022-11-18  KVM: selftests: Better XMM read/write helpers  [Vitaly Kuznetsov]  (1 file, -34/+36)
The set_xmm()/get_xmm() helpers are fairly useless as they only read 64 bits from 128-bit registers. Moreover, these helpers are not used. Borrow _kvm_read_sse_reg()/_kvm_write_sse_reg() from KVM, limiting them to XMM0-XMM8 for now. Reviewed-by: Maxim Levitsky <[email protected]> Reviewed-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-18  x86/hyperv: KVM: Rename "hv_enlightenments" to "hv_vmcb_enlightenments"  [Sean Christopherson]  (2 files, -3/+3)
Now that KVM isn't littered with "struct hv_enlightenments" casts, rename the struct to "hv_vmcb_enlightenments" to highlight the fact that the struct is specifically for SVM's VMCB. No functional change intended. Signed-off-by: Sean Christopherson <[email protected]> Reviewed-by: Michael Kelley <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-18  KVM: SVM: Add a proper field for Hyper-V VMCB enlightenments  [Sean Christopherson]  (2 files, -3/+5)
Add a union to provide hv_enlightenments side-by-side with the sw_reserved bytes that Hyper-V's enlightenments overlay. Casting sw_reserved everywhere is messy, confusing, and unnecessarily unsafe. No functional change intended. Signed-off-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
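Sketched, the change turns the software-reserved tail of 'struct vmcb_control_area' into a union (the enlightenments struct below is a simplified placeholder for the real hyperv-tlfs.h definition; it is sized to match the 32 software-reserved bytes):

	#include <stdint.h>

	struct hv_vmcb_enlightenments {		/* placeholder, 32 bytes */
		uint32_t hv_enlightenments_control;
		uint32_t hv_vp_id;
		uint64_t hv_vm_id;
		uint64_t partition_assist_page;
		uint64_t reserved;
	} __attribute__((packed));

	struct vmcb_control_sw_bytes {		/* sketch of the VMCB tail */
		union {
			struct hv_vmcb_enlightenments hv_enlightenments;
			uint8_t reserved_sw[32];
		};
	};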
2022-11-18  KVM: selftests: Move "struct hv_enlightenments" to x86_64/svm.h  [Sean Christopherson]  (2 files, -18/+17)
Move Hyper-V's VMCB "struct hv_enlightenments" to the svm.h header so that the struct can be referenced in "struct vmcb_control_area". Alternatively, a dedicated header for SVM+Hyper-V could be added, a la x86_64/evmcs.h, but it doesn't appear that Hyper-V will end up needing a wholesale replacement for the VMCB. No functional change intended. Signed-off-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-18  x86/hyperv: Move VMCB enlightenment definitions to hyperv-tlfs.h  [Sean Christopherson]  (1 file, -3/+3)
Move Hyper-V's VMCB enlightenment definitions to the TLFS header; the definitions come directly from the TLFS[*], not from KVM. No functional change intended. [*] https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/tlfs/datatypes/hv_svm_enlightened_vmcb_fields [vitaly: rename VMCB_HV_ -> HV_VMCB_ to match the rest of hyperv-tlfs.h, keep svm/hyperv.h] Signed-off-by: Sean Christopherson <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2022-11-18  kselftest/arm64: Use preferred form for predicate load/stores  [Mark Brown]  (1 file, -2/+2)
The preferred form of str/ldr for predicate registers with an immediate of zero is to omit the zero, and the clang built-in assembler rejects the zero immediate. Drop the immediate. Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2022-11-18  selftests/net: fix missing xdp_dummy  [Wang Yufen]  (5 files, -15/+23)
After commit afef88e65554 ("selftests/bpf: Store BPF object files with .bpf.o extension"), we should use xdp_dummy.bpf.o instead of xdp_dummy.o. In addition, use the BPF_FILE variable to save the BPF object file name, which can be better identified and modified. Fixes: afef88e65554 ("selftests/bpf: Store BPF object files with .bpf.o extension") Signed-off-by: Wang Yufen <[email protected]> Cc: Daniel Müller <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-11-18  selftests: add a selftest for sctp vrf  [Xin Long]  (3 files, -0/+317)
This patch adds 12 small test cases: 01-04 test the sysctl net.sctp.l3mdev_accept. 05-10 test that a connection can be created only when binding to the right l3mdev device. 11-12 test two socks binding to different l3mdev devices at the same time; each of them can process the packets from its corresponding peer. The tests run for both IPv4 and IPv6 SCTP. Signed-off-by: Xin Long <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-11-17  selftests: mptcp: fix mibit vs mbit mix up  [Matthieu Baerts]  (1 file, -2/+3)
The estimated time assumed the rate was expressed in mibit (bit * 1024^2) but it is in mbit (bit * 1000^2). This makes the thresholds higher, but in a more realistic way, to avoid false positives reported by CI instances. Before this patch, the thresholds were at 7561/4005ms; now they are at 7906/4178ms. While at it, also fix a typo in the linked comment, spotted by Mat. Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/310 Fixes: 1a418cb8e888 ("mptcp: simult flow self-tests") Suggested-by: Paolo Abeni <[email protected]> Reviewed-by: Mat Martineau <[email protected]> Signed-off-by: Matthieu Baerts <[email protected]> Signed-off-by: Mat Martineau <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]>
2022-11-17  selftests: mptcp: run mptcp_sockopt from a new netns  [Matthieu Baerts]  (1 file, -4/+5)
Not running it from a new netns causes issues if some MPTCP settings are modified, e.g. if MPTCP is disabled from the sysctl knob, if multiple addresses are available and added to the MPTCP path-manager, etc. In these cases, the created connection will not behave as expected, e.g. unable to create an MPTCP socket, more than one subflow is seen, etc. A new "sandbox" net namespace is now created and used to run mptcp_sockopt from this controlled environment. Fixes: ce9979129a0b ("selftests: mptcp: add mptcp getsockopt test cases") Reviewed-by: Mat Martineau <[email protected]> Signed-off-by: Matthieu Baerts <[email protected]> Signed-off-by: Mat Martineau <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]>
2022-11-17  selftests: mptcp: gives slow test-case more time  [Paolo Abeni]  (1 file, -3/+3)
On slow or busy VM, some test-cases still fail because the data transfer completes before the endpoint manipulation actually took effect. Address the issue by artificially increasing the runtime for the relevant test-cases. Fixes: ef360019db40 ("selftests: mptcp: signal addresses testcases") Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/309 Reviewed-by: Mat Martineau <[email protected]> Signed-off-by: Paolo Abeni <[email protected]> Signed-off-by: Mat Martineau <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]>
2022-11-17  selftests/bpf: Temporarily disable linked list tests  [Kumar Kartikeya Dwivedi]  (4 files, -16/+34)
The latest clang nightly as of writing crashes with the given test case for BPF linked lists wherever global glock, ghead, glock2 are used, hence comment out the parts that cause the crash, and prepare this commit so that it can be reverted when the fix has been made. More context in [0]. [0]: https://lore.kernel.org/bpf/[email protected] Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17  selftests/bpf: Add BTF sanity tests  [Kumar Kartikeya Dwivedi]  (1 file, -0/+485)
Preparing the metadata for bpf_list_head involves a complicated parsing step and type resolution for the contained value. Ensure that corner cases are tested against and invalid specifications in source are duly rejected. Also include tests for incorrect ownership relationships in the BTF. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17  selftests/bpf: Add BPF linked list API tests  [Kumar Kartikeya Dwivedi]  (6 files, -0/+1264)
Include various tests covering the success and failure cases. Also, run the success cases at runtime to verify correctness of linked list manipulation routines, in addition to ensuring successful verification. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17  selftests/bpf: Add failure test cases for spin lock pairing  [Kumar Kartikeya Dwivedi]  (2 files, -1/+292)
First, ensure that whenever a bpf_spin_lock is present in an allocation, the reg->id is preserved. This won't be true for global variables, however, since they have a single map value per map, hence the verifier hardcodes it to 0 (so that multiple pseudo ldimm64 insns can yield the same lock object per map at a given offset). Next, add test cases for all possible combinations (kptr, global, map value, inner map value). Since we lifted the restriction on locking in inner maps, also add test cases for them. Currently, each lookup into an inner map gets a fresh reg->id, so even if the reg->map_ptr is the same, they will be treated as separate allocations and the incorrect unlock pairing will be rejected. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17  selftests/bpf: Update spinlock selftest  [Kumar Kartikeya Dwivedi]  (3 files, -47/+51)
Make updates in preparation for adding more test cases to this selftest:
- Convert from CHECK_ to ASSERT macros.
- Use BPF skeleton.
- Fix typo: sping -> spin.
- Rename spinlock.c -> spin_lock.c.
Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17  selftests/bpf: Add __contains macro to bpf_experimental.h  [Kumar Kartikeya Dwivedi]  (1 file, -0/+2)
Add user facing __contains macro which provides a convenient wrapper over the verbose kernel specific BTF declaration tag required to annotate BPF list head structs in user types. Acked-by: Dave Marchevsky <[email protected]> Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
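The wrapper plausibly expands to the BTF declaration tag that the kernel-side parsing looks for; a sketch under that assumption (tag format 'contains:<type>:<member>'; the bpf_list_* and bpf_spin_lock types come from vmlinux.h):

	/* Annotate a bpf_list_head with the type and member it contains. */
	#define __contains(name, node) \
		__attribute__((btf_decl_tag("contains:" #name ":" #node)))

	/* Usage: a list head holding 'struct bar' linked via its 'node'. */
	struct bar {
		struct bpf_list_node node;
	};

	struct foo {
		struct bpf_spin_lock lock;
		struct bpf_list_head head __contains(bar, node);
	};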
2022-11-17  bpf: Introduce single ownership BPF linked list API  [Kumar Kartikeya Dwivedi]  (1 file, -0/+28)
Add a linked list API for use in BPF programs, where it expects protection from the bpf_spin_lock in the same allocation as the bpf_list_head. For now, only one bpf_spin_lock can be present, hence that is assumed to be the one protecting the bpf_list_head. The following functions are added to kick things off:

	// Add node to beginning of list
	void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node);

	// Add node to end of list
	void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node);

	// Remove node at beginning of list and return it
	struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head);

	// Remove node at end of list and return it
	struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head);

The lock protecting the bpf_list_head needs to be taken for all operations. The verifier ensures that the lock that needs to be taken is always held, and only the correct lock is taken for these operations. These checks are made statically by relying on the reg->id preserved for registers pointing into regions having both bpf_spin_lock and the objects protected by it. The comment over check_reg_allocation_locked in this change describes the logic in detail. Note that bpf_list_push_front and bpf_list_push_back are meant to consume the object containing the node in the 1st argument, however that specific mechanism is intended to not release the ref_obj_id directly until the bpf_spin_unlock is called. In this commit, nothing is done, but the next commit will be introducing logic to handle this case, so it has been left as is for now. bpf_list_pop_front and bpf_list_pop_back delete the first or last item of the list respectively, and return a pointer to the element at the list_node offset. The user can then use a container_of style macro to get the actual entry type. The verifier however statically knows the actual type, so the safety properties are still preserved. With these additions, programs can now manage their own linked lists and store their objects in them. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
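Combined with bpf_obj_new/bpf_obj_drop from the neighboring patches, usage might look like the sketch below (assumes vmlinux.h, bpf_experimental.h, a container_of helper, and the selftests' private() section macro; illustrative, not one of the selftests verbatim):

	struct elem {
		int value;
		struct bpf_list_node node;
	};

	/* Lock and list head must live in the same allocation/map value. */
	private(A) struct bpf_spin_lock glock;
	private(A) struct bpf_list_head ghead __contains(elem, node);

	SEC("tc")
	int push_and_pop(void *ctx)
	{
		struct bpf_list_node *n;
		struct elem *e;

		e = bpf_obj_new(typeof(*e));
		if (!e)
			return 0;

		bpf_spin_lock(&glock);
		bpf_list_push_front(&ghead, &e->node);	/* list takes ownership */
		n = bpf_list_pop_back(&ghead);
		bpf_spin_unlock(&glock);

		if (n) {
			e = container_of(n, struct elem, node);
			bpf_obj_drop(e);	/* single owner frees */
		}
		return 0;
	}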
2022-11-17  bpf: Introduce bpf_obj_drop  [Kumar Kartikeya Dwivedi]  (1 file, -0/+13)
Introduce bpf_obj_drop, which is the kfunc used to free allocated objects (allocated using bpf_obj_new). Pairing with bpf_obj_new, it implicitly destructs the fields part of the object automatically without user intervention. Just like the previous patch, the btf_struct_meta that is needed to free up the special fields is passed as a hidden argument to the kfunc. For the user, a convenience macro hides over the kernel side kfunc which is named bpf_obj_drop_impl. Continuing the previous example:

	void prog(void)
	{
		struct foo *f;

		f = bpf_obj_new(typeof(*f));
		if (!f)
			return;

		bpf_obj_drop(f);
	}

Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17  bpf: Introduce bpf_obj_new  [Kumar Kartikeya Dwivedi]  (1 file, -0/+25)
Introduce the type safe memory allocator bpf_obj_new for BPF programs. The kernel side kfunc is named bpf_obj_new_impl, as passing hidden arguments to kfuncs still requires having them in the prototype, unlike BPF helpers which always take 5 arguments and have them checked using bpf_func_proto in the verifier, ignoring unset argument types. Introduce a __ign suffix to ignore a specific kfunc argument during type checks, then use this to introduce support for passing type metadata to the bpf_obj_new_impl kfunc. The user passes the BTF ID of the type it wants to allocate in program BTF, the verifier then rewrites the first argument as the size of this type, after performing some sanity checks (to ensure it exists and it is a struct type). The second argument is also fixed up and passed by the verifier. This is the btf_struct_meta for the type being allocated. It would be needed mostly for the offset array which is required for zero initializing special fields while leaving the rest of the storage in an uninitialized state. It would also be needed in the next patch to perform proper destruction of the object's special fields. Under the hood, bpf_obj_new will call bpf_mem_alloc and bpf_mem_free, using the any-context BPF memory allocator introduced recently. To this end, a global instance of the BPF memory allocator is initialized on boot to be used for this purpose. This 'bpf_global_ma' serves all allocations for bpf_obj_new. In the future, bpf_obj_new variants will allow specifying a custom allocator. Note that now that bpf_obj_new can be used to allocate objects that can be linked to a BPF linked list (when future linked list helpers are available), we need to also free the elements using bpf_mem_free. However, since the draining of elements is done outside the bpf_spin_lock, we need to do migrate_disable around the call since bpf_list_head_free can be called from the map free path where migration is enabled. Otherwise, when called from BPF programs migration is already disabled. A convenience macro is included in the bpf_experimental.h header to hide over the ugly details of the implementation, leading to user code looking similar to a language level extension which allocates and constructs fields of a user type:

	struct bar {
		struct bpf_list_node node;
	};

	struct foo {
		struct bpf_spin_lock lock;
		struct bpf_list_head head __contains(bar, node);
	};

	void prog(void)
	{
		struct foo *f;

		f = bpf_obj_new(typeof(*f));
		if (!f)
			return;
		...
	}

A key piece of this story is still missing, i.e. the free function, which will come in the next patch. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17  bpf: Rewrite kfunc argument handling  [Kumar Kartikeya Dwivedi]  (3 files, -4/+4)
As we continue to add more features, argument types, kfunc flags, and different extensions to kfuncs, the code to verify the correctness of the kfunc prototype wrt the passed in registers has become ad-hoc and ugly to read. To make life easier, and make a very clear split between different stages of argument processing, move all the code into verifier.c and refactor into easier to read helpers and functions. This also makes sharing code within the verifier easier with kfunc argument processing. This will be more and more useful in later patches as we are now moving to implement very core BPF helpers as kfuncs, to keep them experimental before baking into UAPI. Remove all kfunc related bits now from btf_check_func_arg_match, as users have been converted away to refactored kfunc argument handling. Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2022-11-17  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  [Jakub Kicinski]  (10 files, -7/+35807)
Conflicts: include/linux/bpf.h
  1f6e04a1c7b8 ("bpf: Fix offset calculation error in __copy_map_value and zero_map_value")
  aa3496accc41 ("bpf: Refactor kptr_off_tab into btf_record")
  f71b2f64177a ("bpf: Refactor map->off_arr handling")
  https://lore.kernel.org/all/[email protected]/
Signed-off-by: Jakub Kicinski <[email protected]>
2022-11-18  random: use random.trust_{bootloader,cpu} command line option only  [Jason A. Donenfeld]  (1 file, -2/+0)
It's very unusual to have both a command line option and a compile time option, and apparently that's confusing to people. Also, basically everybody enables the compile time option now, which means people who want to disable this wind up having to use the command line option to ensure that anyway. So just reduce the number of moving pieces and nix the compile time option in favor of the more versatile command line option. Signed-off-by: Jason A. Donenfeld <[email protected]>
2022-11-17  selftests/bpf: Pass target triple to get_sys_includes macro  [Björn Töpel]  (1 file, -4/+8)
When cross-compiling [1], the get_sys_includes make macro should use the target system include path, and not the build hosts system include path. Make clang honor the CROSS_COMPILE triple. [1] e.g. "ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- make" Signed-off-by: Björn Töpel <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Tested-by: Anders Roxell <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]