path: root/tools
Age | Commit message | Author | Files | Lines
2022-03-09 | memblock tests: Add memblock_alloc_try_nid tests for top down | Karolina Drobnik | 4 | -2/+692
Add tests for memblock_alloc_try_nid for top down allocation direction. As the definition of this function is pretty close to the core memblock_alloc_range_nid, the test cases implemented here cover most of the code paths related to the memory allocations. The tested scenarios are:
 - Region can be allocated within the requested range (both with aligned and misaligned boundaries)
 - Region can be allocated between two already existing entries
 - Not enough space between already reserved regions
 - Memory range is too narrow but memory can be allocated before the maximum address
 - Edge cases:
   + Minimum address is below memblock_start_of_DRAM()
   + Maximum address is above memblock_end_of_DRAM()
Add checks for both allocation directions:
 - Region starts at the min_addr and ends at max_addr
 - Maximum address is too close to the beginning of the available memory
 - Memory at the range boundaries is reserved but there is enough space to allocate a new region
Signed-off-by: Karolina Drobnik <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Link: https://lore.kernel.org/r/d6c282e0f9f62c15bf74c216214604764232d637.1646055639.git.karolinadrobnik@gmail.com
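A minimal sketch of one such top-down check, assuming the suite's setup_memblock() helper, the SZ_*/SMP_CACHE_BYTES constants from its common headers, and plain assert()-style verification; the memblock_alloc_try_nid() signature matches the in-kernel one:

  /* Sketch: allocate within a requested range, expect top-down placement. */
  #include <linux/memblock.h>
  #include <assert.h>

  static int alloc_try_nid_top_down_simple_check(void)
  {
      struct memblock_region *rgn = &memblock.reserved.regions[0];
      phys_addr_t size = SZ_128;
      phys_addr_t min_addr = memblock_start_of_DRAM() + SMP_CACHE_BYTES;
      phys_addr_t max_addr = min_addr + SZ_512;
      void *ptr;

      setup_memblock();    /* register the simulated memory (test helper) */

      ptr = memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr,
                                   max_addr, NUMA_NO_NODE);

      assert(ptr);                           /* allocation succeeded */
      assert(rgn->size == size);             /* one reserved region  */
      assert(rgn->base == max_addr - size);  /* placed top down      */

      return 0;
  }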
2022-03-09 | memblock tests: Add memblock_alloc_from tests for bottom up | Karolina Drobnik | 1 | -4/+171
Add checks for memblock_alloc_from for bottom up allocation direction. The tested scenarios are:
 - Not enough space to allocate memory at the minimal address
 - Minimal address parameter is smaller than the start address of the available memory
 - Minimal address parameter is too close to the end of the available memory
Add test case wrappers to test both directions in the same context.
Signed-off-by: Karolina Drobnik <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Link: https://lore.kernel.org/r/506cf5293c8a21c012b7ea87b14af07754d3e656.1646055639.git.karolinadrobnik@gmail.com
2022-03-09 | memblock tests: Add memblock_alloc_from tests for top down | Karolina Drobnik | 4 | -1/+239
Add checks for memblock_alloc_from for the default allocation direction. The tested scenarios are:
 - Not enough space to allocate memory at the minimal address
 - Minimal address parameter is smaller than the start address of the available memory
 - Minimal address is too close to the available memory
Add a simple memblock_alloc_from test that can be used to test both allocation directions (minimal address is aligned or misaligned).
Signed-off-by: Karolina Drobnik <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Link: https://lore.kernel.org/r/3dd645f437975fd393010b95b8faa85d2b86490a.1646055639.git.karolinadrobnik@gmail.com
2022-03-09 | memblock tests: Add memblock_alloc tests for bottom up | Karolina Drobnik | 1 | -4/+320
Add checks for memblock_alloc for bottom up allocation direction. The tested scenarios are:
 - Region can be allocated on the first fit (with and without region merging)
 - Region can be allocated on the second fit (with and without region merging)
Add test case wrappers to test both directions in the same context.
Signed-off-by: Karolina Drobnik <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Link: https://lore.kernel.org/r/426674eee20d99dca49caf1ee0142a83dccbc98d.1646055639.git.karolinadrobnik@gmail.com
2022-03-09 | memblock tests: Add memblock_alloc tests for top down | Karolina Drobnik | 4 | -1/+447
Add checks for memblock_alloc for top down allocation direction. The tested scenarios are:
 - Region can be allocated on the first fit (with and without region merging)
 - Region can be allocated on the second fit (with and without region merging)
Add checks for both allocation directions:
 - Region can be allocated between two already existing entries
 - Limited memory available
 - All memory is reserved
 - No available memory registered with memblock
Signed-off-by: Karolina Drobnik <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Link: https://lore.kernel.org/r/26ccf409b8ff0394559d38d792b2afb24b55887c.1646055639.git.karolinadrobnik@gmail.com
2022-03-09 | memblock tests: Add simulation of physical memory | Karolina Drobnik | 4 | -2/+37
Allocation functions that return virtual addresses (with the exception of the _raw variant) clear the allocated memory after reserving it. This requires valid memory ranges in memblock.memory. Introduce a memory_block variable to store memory that can be registered with the memblock data structure. Move the assert.h and size.h includes to common.h to share them between the test files.
Signed-off-by: Karolina Drobnik <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Link: https://lore.kernel.org/r/dce115503c74a6936c44694b00014658a1bb6522.1646055639.git.karolinadrobnik@gmail.com
2022-03-09 | memblock tests: Split up reset_memblock function | Karolina Drobnik | 3 | -32/+33
All memblock data structure fields are reset in one function. In some test cases, it's preferred to reset the memory region arrays without modifying other values like the allocation direction flag. Extract two functions from reset_memblock, so it's possible to reset different parts of memblock:
 - reset_memblock_regions - reset region arrays and their counters
 - reset_memblock_attributes - set other fields to their default values
Update checks in basic_api.c to use the new definitions. Remove the reset_memblock call from memblock_initialization_check, so the true initial values are tested.
Signed-off-by: Karolina Drobnik <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Link: https://lore.kernel.org/r/5cc1ba9a0ade922dbf4ba450165b81a9ed17d4a9.1646055639.git.karolinadrobnik@gmail.com
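A sketch of how such a split could look, assuming the struct memblock field names used by the in-kernel definition (regions, cnt, total_size, name, bottom_up, current_limit) and the MEMBLOCK_ALLOC_ANYWHERE default; this mirrors the description above rather than the exact patch:

  #include <linux/memblock.h>
  #include <string.h>

  /* Reset only the region arrays and their counters. */
  void reset_memblock_regions(void)
  {
      memset(memblock.memory.regions, 0,
             memblock.memory.cnt * sizeof(struct memblock_region));
      memblock.memory.cnt = 1;
      memblock.memory.total_size = 0;

      memset(memblock.reserved.regions, 0,
             memblock.reserved.cnt * sizeof(struct memblock_region));
      memblock.reserved.cnt = 1;
      memblock.reserved.total_size = 0;
  }

  /* Restore the remaining fields to their default values. */
  void reset_memblock_attributes(void)
  {
      memblock.memory.name = "memory";
      memblock.reserved.name = "reserved";
      memblock.bottom_up = false;
      memblock.current_limit = MEMBLOCK_ALLOC_ANYWHERE;
  }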
2022-03-08 | selftests: mptcp: add implicit endpoint test case | Paolo Abeni | 2 | -1/+126
Ensure implicit endpoints are created when expected and that user-space can update them.
Reviewed-by: Matthieu Baerts <[email protected]>
Co-developed-by: Matthieu Baerts <[email protected]>
Signed-off-by: Matthieu Baerts <[email protected]>
Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
2022-03-08 | mptcp: introduce implicit endpoints | Paolo Abeni | 1 | -2/+2
In some edge scenarios, an MPTCP subflow can use a local address mapped by an "implicit" endpoint created by the in-kernel path manager. The presence of such endpoints can be confusing, as their creation is hard to track and will prevent later endpoint creation from user-space using the same address. Define a new endpoint flag to mark implicit endpoints and allow user-space to replace implicit endpoints with user-provided data at endpoint creation time.
Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
2022-03-08 | mptcp: more careful RM_ADDR generation | Paolo Abeni | 1 | -6/+36
The in-kernel MPTCP path manager, when processing the MPTCP_PM_CMD_FLUSH_ADDR command, generates RM_ADDR events for each known local address. While that is allowed by the RFC, it makes the exact number of RM_ADDRs generated unpredictable when both ends flush the PM addresses. This change restricts RM_ADDR generation to previously explicitly announced addresses, and adjusts the expected results in a bunch of related self-tests.
Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
2022-03-08 | selftests: mptcp: Rename wait function | Mat Martineau | 1 | -2/+2
The "selftests: mptcp: improve 'fair usage on close' stability" commit changed that self test to check the TcpAttemptFails MIB instead of looking for TW sockets. The associated bash function wasn't renamed in that commit because of the merge conflicts it would cause, so this commit updates the function name as Paolo originally intended. Cc: Paolo Abeni <[email protected]> Reviewed-by: Matthieu Baerts <[email protected]> Signed-off-by: Mat Martineau <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]>
2022-03-08 | selftests: mptcp: join: allow running -cCi | Matthieu Baerts | 1 | -39/+28
Without this patch, no tests would be run when launching:
  mptcp_join.sh -cCi
in any order or with a combination of two of these letters. The recommended way with getopt is to first parse all options and then act. This allows some actions to be done with priority, e.g. displaying the help menu and stopping. It also means the global variables changing the behaviour of this selftest -- like the ones behind the -cCi options -- can be set before running the different tests. By doing that, we can also avoid a long and unreadable regex.
Signed-off-by: Matthieu Baerts <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
2022-03-08 | Improve stability of find_vma BPF test | Mykola Lysenko | 1 | -9/+19
Remove an unneeded sleep and increase the length of the dummy CPU-intensive computation to guarantee test process execution. Also, complete the aforementioned computation as soon as the test success criteria are met.
Signed-off-by: Mykola Lysenko <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2022-03-08 | Improve send_signal BPF test stability | Mykola Lysenko | 2 | -8/+11
Substitute the sleep with a dummy CPU-intensive computation. Finish the aforementioned computation as soon as the signal is delivered to the test process. Make the BPF code execute only when the PID global variable is set.
Signed-off-by: Mykola Lysenko <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
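A sketch of the general idea (busy work that ends as soon as the signal lands), with illustrative names rather than the test's actual identifiers:

  #include <signal.h>

  static volatile sig_atomic_t signal_delivered;

  static void handler(int sig)
  {
      signal_delivered = 1;    /* flag checked by the busy loop below */
  }

  static void install_handler(void)
  {
      signal(SIGUSR1, handler);
  }

  /* Dummy CPU-intensive computation that stops early once the signal arrives. */
  static unsigned long dummy_cpu_work(void)
  {
      unsigned long i, sum = 0;

      for (i = 0; i < (1UL << 30) && !signal_delivered; i++)
          sum += i * i;

      return sum;
  }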
2022-03-08 | Improve perf related BPF tests (sample_freq issue) | Mykola Lysenko | 4 | -5/+5
The Linux kernel may automatically reduce the kernel.perf_event_max_sample_rate value when running tests in parallel on slow systems. The kernel checks against this limit when opening a perf event with the freq=1 parameter set. The lower bound is 1000. This patch reduces the sample_freq value to 1000 in all BPF tests that use sample_freq, to ensure they can always open the perf event.
Signed-off-by: Mykola Lysenko <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
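A minimal sketch of what "freq=1 with sample_freq=1000" looks like when opening a perf event directly via the syscall; the event type/config chosen here is illustrative, not necessarily what the tests use:

  #include <linux/perf_event.h>
  #include <sys/syscall.h>
  #include <string.h>
  #include <unistd.h>

  static int open_sample_event(void)
  {
      struct perf_event_attr attr;

      memset(&attr, 0, sizeof(attr));
      attr.size = sizeof(attr);
      attr.type = PERF_TYPE_SOFTWARE;
      attr.config = PERF_COUNT_SW_CPU_CLOCK;
      attr.freq = 1;             /* interpret sample_freq as a rate in Hz */
      attr.sample_freq = 1000;   /* at or below the kernel's lower bound  */

      return syscall(__NR_perf_event_open, &attr, 0 /* pid */,
                     -1 /* cpu */, -1 /* group_fd */, 0 /* flags */);
  }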
2022-03-08 | tools: Fix unavoidable GCC call in Clang builds | Adrian Ratiu | 1 | -0/+4
In ChromeOS and Gentoo we catch any unwanted mixed Clang/LLVM and GCC/binutils usage via toolchain wrappers which fail builds. This has revealed that GCC is called unconditionally in Clang configured builds to populate GCC_TOOLCHAIN_DIR. Allow the user to override CLANG_CROSS_FLAGS to avoid the GCC call - in our case we set the var directly in the ebuild recipe. In theory Clang could be able to autodetect these settings so this logic could be removed entirely, but in practice as the commit cebdb7374577 ("tools: Help cross-building with clang") mentions, this does not always work, so giving distributions more control to specify their flags & sysroot is beneficial. Suggested-by: Manoj Gupta <[email protected]> Suggested-by: Nathan Chancellor <[email protected]> Signed-off-by: Adrian Ratiu <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Nathan Chancellor <[email protected]> Cc: Nick Desaulniers <[email protected]> Link: https://lore.kernel.org/lkml/87czjk4osi.fsf@ryzen9.i-did-not-set--mail-host-address--so-tickle-me Link: https://lore.kernel.org/bpf/[email protected]
2022-03-08 | KVM: selftests: Add test to populate a VM with the max possible guest mem | Sean Christopherson | 3 | -0/+294
Add a selftest that enables populating a VM with the maximum amount of guest memory allowed by the underlying architecture. Abuse KVM's memslots by mapping a single host memory region into multiple memslots so that the selftest doesn't require a system with terabytes of RAM.
Default to 512gb of guest memory, which isn't all that interesting, but should work on all MMUs and doesn't take an exorbitant amount of memory or time. E.g. testing with ~64tb of guest memory takes the better part of an hour, and requires 200gb of memory for KVM's page tables when using 4kb pages.
To inflict maximum abuse on KVM's MMU, default to 4kb pages (or whatever the not-hugepage size is) in the backing store (memfd). Use memfd for the host backing store to ensure that hugepages are guaranteed when requested, and to give the user explicit control of the size of hugepage being tested.
By default, spin up as many vCPUs as there are available to the selftest, and distribute the work of dirtying each 4kb chunk of memory across all vCPUs. Dirtying guest memory forces KVM to populate its page tables, and also forces KVM to write back accessed/dirty information to struct page when the guest memory is freed.
On x86, perform two passes with an MMU context reset between each pass to coerce KVM into dropping all references to the MMU root, e.g. to emulate a vCPU dropping the last reference. Perform both passes and all rendezvous on all architectures in the hope that arm64 and s390x can gain similar shenanigans in the future.
Measure and report the duration of each operation, which is helpful not only to verify the test is working as intended, but also to easily evaluate the performance differences between page sizes.
Provide command line options to limit the amount of guest memory, set the size of each slot (i.e. of the host memory region), set the number of vCPUs, and to enable usage of hugepages.
Signed-off-by: Sean Christopherson <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
2022-03-08 | KVM: selftests: Define cpu_relax() helpers for s390 and x86 | Sean Christopherson | 2 | -0/+13
Add cpu_relax() for s390 and x86 for use in arch-agnostic tests. arm64 already defines its own version. Signed-off-by: Sean Christopherson <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
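A sketch of what such per-arch helpers typically look like; "rep; nop" encodes PAUSE on x86, while on s390 a compiler barrier is assumed to be sufficient:

  #if defined(__x86_64__) || defined(__i386__)
  static inline void cpu_relax(void)
  {
      asm volatile("rep; nop" ::: "memory");    /* PAUSE */
  }
  #elif defined(__s390x__)
  static inline void cpu_relax(void)
  {
      asm volatile("" ::: "memory");            /* barrier only */
  }
  #endif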
2022-03-08 | KVM: selftests: Split out helper to allocate guest mem via memfd | Sean Christopherson | 2 | -18/+25
Extract the code for allocating guest memory via memfd out of vm_userspace_mem_region_add() and into a new helper, kvm_memfd_alloc(). A future selftest to populate a guest with the maximum amount of guest memory will abuse KVM's memslots to alias guest memory regions to a single memfd-backed host region, i.e. needs to back a guest with memfd memory without a 1:1 association between a memslot and a memfd instance. No functional change intended. Signed-off-by: Sean Christopherson <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
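A reduced sketch of what a memfd-backed allocation helper of this shape could do, assuming memfd_create()/ftruncate() are all that is needed and collapsing error handling into asserts (the real helper may do more, e.g. preallocate the file):

  #define _GNU_SOURCE
  #include <sys/mman.h>    /* memfd_create(), MFD_* (glibc >= 2.27) */
  #include <unistd.h>
  #include <stdbool.h>
  #include <assert.h>

  static int kvm_memfd_alloc(size_t size, bool hugepages)
  {
      unsigned int flags = MFD_CLOEXEC;
      int fd, r;

      if (hugepages)
          flags |= MFD_HUGETLB;    /* guarantees hugepages when requested */

      fd = memfd_create("kvm_selftest", flags);
      assert(fd >= 0);

      r = ftruncate(fd, size);     /* size the backing file */
      assert(!r);

      return fd;
  }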
2022-03-08 | KVM: selftests: Move raw KVM_SET_USER_MEMORY_REGION helper to utils | Sean Christopherson | 3 | -27/+36
Move set_memory_region_test's KVM_SET_USER_MEMORY_REGION helper to KVM's utils so that it can be used by other tests. Provide a raw version as well as an assert-success version to reduce the amount of boilerplate code needed for basic usage. No functional change intended.
Signed-off-by: Sean Christopherson <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
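A sketch of the raw/assert-success pair described above, assuming vm_fd is the VM's ioctl file descriptor; struct kvm_userspace_memory_region and KVM_SET_USER_MEMORY_REGION are the UAPI definitions from <linux/kvm.h>, while the helper names here are illustrative:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>
  #include <stdint.h>
  #include <assert.h>

  /* Raw variant: returns the ioctl result so callers can test failure paths. */
  static int __vm_set_user_memory_region(int vm_fd, uint32_t slot, uint32_t flags,
                                         uint64_t gpa, uint64_t size, void *hva)
  {
      struct kvm_userspace_memory_region region = {
          .slot = slot,
          .flags = flags,
          .guest_phys_addr = gpa,
          .memory_size = size,
          .userspace_addr = (uintptr_t)hva,
      };

      return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
  }

  /* Assert-success variant for the common case. */
  static void vm_set_user_memory_region(int vm_fd, uint32_t slot, uint32_t flags,
                                        uint64_t gpa, uint64_t size, void *hva)
  {
      int ret = __vm_set_user_memory_region(vm_fd, slot, flags, gpa, size, hva);

      assert(ret == 0);
  }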
2022-03-08 | selftests/bpf: Make test_lwt_ip_encap more stable and faster | Felix Maurer | 1 | -1/+9
In test_lwt_ip_encap, the ingress IPv6 encap test failed from time to time. The failure occurred when an IPv4 ping through the IPv6 GRE encapsulation did not receive a reply within the timeout. The IPv4 ping and the IPv6 ping in the test used different timeouts (1 sec for IPv4 and 6 sec for IPv6), probably taking into account that IPv6 might need longer to successfully complete. However, when IPv4 pings (with the short timeout) are encapsulated into the IPv6 tunnel, the delays of IPv6 apply.
The actual reason for the long delays with IPv6 was that the IPv6 neighbor discovery sometimes did not complete in time. This was caused by the outgoing interface only having a tentative link local address, i.e., not having completed DAD for that lladdr. The ND was successfully retried after 1 sec but that was too late for the ping timeout.
The IPv6 addresses for the test were already added with nodad. However, for the lladdrs, DAD was still performed. We now disable DAD in the test netns completely and just assume that the two lladdrs on each veth pair do not collide. This removes all the delays for IPv6 traffic in the test.
Without the delays, we can now also reduce the timeout of the IPv6 ping to 1 sec. This makes the whole test complete faster because we don't need to wait for the excessive timeout for each IPv6 ping that is supposed to fail.
Fixes: 0fde56e4385b0 ("selftests: bpf: add test_lwt_ip_encap selftest")
Signed-off-by: Felix Maurer <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/bpf/4987d549d48b4e316cd5b3936de69c8d4bc75a4f.1646305899.git.fmaurer@redhat.com
2022-03-08 | turbostat: fix PC6 displaying on some systems | Artem Bityutskiy | 1 | -1/+1
'MSR_PKG_CST_CONFIG_CONTROL' encodes the deepest allowed package C-state limit, and turbostat decodes it.
Before this patch: turbostat does not recognize value "3" on Ice Lake Xeon (ICX) and Sapphire Rapids Xeon (SPR), treats it as "unknown", and does not display any package C-states in the results table.
After this patch: turbostat recognizes value 3 on ICX and SPR, treats it as "PC6", and correctly displays package C-states in the results table.
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
2022-03-08 | Revert "netfilter: nat: force port remap to prevent shadowing well-known ports" | Florian Westphal | 1 | -3/+2
This reverts commit 878aed8db324bec64f3c3f956e64d5ae7375a5de. This change breaks existing setups where conntrack is used with asymmetric paths. In these cases, the NAT transformation occurs on the syn-ack instead of the syn:
 1. SYN x:12345 -> y:443        // sent by initiator, received by responder
 2. SYNACK y:443 -> x:12345     // first packet seen by conntrack, as sent by responder
 3. tuple_force_port_remap() gets called, sees 'tcp from 443 to port 12345 NAT' -> picks a new source port, initiator receives
 4. SYNACK y:$RANDOM -> x:12345 // connection is never established
While it's possible to avoid the breakage with NOTRACK rules, a kernel update should not break working setups. An alternative to the revert is to augment conntrack to tag mid-stream connections plus more code in the nat core to skip NAT for such connections; however, this leads to more interaction/integration between conntrack and NAT. Therefore, revert; users will need to add explicit nat rules to avoid port shadowing.
Link: https://lore.kernel.org/netfilter-devel/[email protected]/#R
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2051413
Signed-off-by: Florian Westphal <[email protected]>
2022-03-08 | selftests: tpm: add async space test with noneexisting handle | Tadeusz Struk | 1 | -0/+16
Add a test for /dev/tpmrm0 in async mode that checks if the code handles invalid handles correctly. Cc: Jarkko Sakkinen <[email protected]> Cc: Shuah Khan <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Tested-by: Jarkko Sakkinen<[email protected]> Signed-off-by: Tadeusz Struk <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]>
2022-03-08 | selftests: tpm2: Determine available PCR bank | Stefan Berger | 2 | -8/+52
Determine an available PCR bank to be used by a test case by querying the capability TPM2_GET_CAP. The TPM2 returns TPML_PCR_SELECTIONS, which contains an array of TPMS_PCR_SELECTIONs indicating available PCR banks and the bitmasks that show which PCRs are enabled in each bank. Collect the data in a dictionary. From the dictionary, determine the PCR bank that has the PCRs enabled that the test needs. This avoids test failures with TPM2s that either do not have a SHA-1 bank or whose SHA-1 bank is disabled.
Signed-off-by: Stefan Berger <[email protected]>
Tested-by: Jarkko Sakkinen <[email protected]>
Reviewed-by: Jarkko Sakkinen <[email protected]>
Signed-off-by: Jarkko Sakkinen <[email protected]>
2022-03-07 | bpf/docs: Update list of architectures supported. | KP Singh | 1 | -1/+1
vmtest.sh also supports s390x now. Signed-off-by: KP Singh <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2022-03-07 | bpf/docs: Update vmtest docs for static linking | KP Singh | 1 | -0/+8
Dynamic linking when compiling on the host can cause issues when the libc version does not match the one in the VM image. Update the docs to explain how to do this.
Before:
  ./vmtest.sh -- ./test_progs -t test_ima
  ./test_progs: /usr/lib/libc.so.6: version `GLIBC_2.33' not found (required by ./test_progs)
After:
  LDLIBS=-static ./vmtest.sh -- ./test_progs -t test_ima
  test_ima:OK
  Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
Reported-by: "Geyslan G. Bem" <[email protected]>
Signed-off-by: KP Singh <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2022-03-07 | libbpf: Fix array_size.cocci warning | Guo Zhengkui | 2 | -3/+4
Fix the following coccicheck warning:
  tools/lib/bpf/bpf.c:114:31-32: WARNING: Use ARRAY_SIZE
  tools/lib/bpf/xsk.c:484:34-35: WARNING: Use ARRAY_SIZE
  tools/lib/bpf/xsk.c:485:35-36: WARNING: Use ARRAY_SIZE
It has been tested with gcc (Debian 8.3.0-6) 8.3.0 on x86_64.
Signed-off-by: Guo Zhengkui <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
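The pattern the coccicheck rule asks for, shown on an illustrative array (not the actual libbpf code): replace the open-coded sizeof division with the shared ARRAY_SIZE() macro.

  #include <stddef.h>
  #include <stdio.h>

  #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

  int main(void)
  {
      static const int values[] = { 1, 2, 3, 4 };
      size_t i;

      /* before: for (i = 0; i < sizeof(values) / sizeof(values[0]); i++) */
      for (i = 0; i < ARRAY_SIZE(values); i++)
          printf("%d\n", values[i]);

      return 0;
  }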
2022-03-07 | libbpf: Unmap rings when umem deleted | lic121 | 1 | -0/+11
xsk_umem__create() does mmap for the fill/comp rings, but xsk_umem__delete() doesn't do the unmap. This works fine for regular cases, because xsk_socket__delete() does the unmap for the rings. But in the case where xsk_socket__create_shared() fails, the umem rings are not unmapped. fill_save/comp_save are checked to determine if the rings have already been unmapped by xsk. If fill_save and comp_save are NULL, it means that the rings have already been used by xsk; then they are supposed to be unmapped by xsk_socket__delete(). Otherwise, xsk_umem__delete() does the unmap.
Fixes: 2f6324a3937f ("libbpf: Support shared umems between queues and devices")
Signed-off-by: Cheng Li <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2022-03-07 | Merge tag 'x86_bugs_for_v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 | -1/+1
Pull x86 spectre fixes from Borislav Petkov:
 - Mitigate Spectre v2-type Branch History Buffer attacks on machines which support eIBRS, i.e., the hardware-assisted speculation restriction, after it has been shown that such machines are vulnerable even with the hardware mitigation.
 - Do not use the default LFENCE-based Spectre v2 mitigation on AMD as it is insufficient to mitigate such attacks. Instead, switch to retpolines on all AMD by default.
 - Update the docs and add some warnings for the obviously vulnerable cmdline configurations.
* tag 'x86_bugs_for_v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/speculation: Warn about eIBRS + LFENCE + Unprivileged eBPF + SMT
  x86/speculation: Warn about Spectre v2 LFENCE mitigation
  x86/speculation: Update link to AMD speculation whitepaper
  x86/speculation: Use generic retpoline by default on AMD
  x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting
  Documentation/hw-vuln: Update spectre doc
  x86/speculation: Add eIBRS + Retpoline options
  x86/speculation: Rename RETPOLINE_AMD to RETPOLINE_LFENCE
2022-03-07 | kselftest/arm64: Log the PIDs of the parent and child in sve-ptrace | Mark Brown | 1 | -0/+2
If the test triggers a problem it may well result in a log message from the kernel such as a WARN() or BUG(). If these include a PID it can help with debugging to know if it was the parent or child process that triggered the issue, since the test is just creating a new thread the process name will be the same either way. Print the PIDs of the parent and child on startup so users have this information to hand should it be needed. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Shuah Khan <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
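A minimal sketch of the idea: log both PIDs right after fork() so kernel WARN()/BUG() output containing a PID can be attributed to the parent or the child. ksft_print_msg() is the kselftest logging helper; the include path and function name here are illustrative of the change's shape, not the exact patch.

  #include <sys/types.h>
  #include <unistd.h>
  #include "../../kselftest.h"

  static pid_t fork_and_log(void)
  {
      pid_t child = fork();

      if (child == 0)
          return 0;    /* child continues with the traced work */

      /* Parent logs both PIDs so kernel splats can be matched to a process. */
      ksft_print_msg("Parent is %d, child is %d\n", getpid(), child);

      return child;
  }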
2022-03-07 | Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | Linus Torvalds | 2 | -0/+4
Pull virtio fixes from Michael Tsirkin: "Some last minute fixes that took a while to get ready. Not regressions, but they look safe and seem to be worth to have"
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
  tools/virtio: handle fallout from folio work
  tools/virtio: fix virtio_test execution
  vhost: remove avail_event arg from vhost_update_avail_event()
  virtio: drop default for virtio-mem
  vdpa: fix use-after-free on vp_vdpa_remove
  virtio-blk: Remove BUG_ON() in virtio_queue_rq()
  virtio-blk: Don't use MAX_DISCARD_SEGMENTS if max_discard_seg is zero
  vhost: fix hung thread due to erroneous iotlb entries
  vduse: Fix returning wrong type in vduse_domain_alloc_iova()
  vdpa/mlx5: add validation for VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command
  vdpa/mlx5: should verify CTRL_VQ feature exists for MQ
  vdpa: factor out vdpa_set_features_unlocked for vdpa internal use
  virtio_console: break out of buf poll on remove
  virtio: document virtio_reset_device
  virtio: acknowledge all features before access
  virtio: unexport virtio_finalize_features
2022-03-08 | selftest/powerpc: Add PAPR sysfs attributes sniff test | Pratik R. Sampat | 4 | -0/+117
Include a test case to check that the energy- and frequency-related sysfs attribute files exist and are populated.
Signed-off-by: Pratik R. Sampat <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2022-03-08 | selftests/powerpc: Add test for real address error handling | Ganesh Goudar | 4 | -1/+75
Add a test for real address or control memory address access error handling, using the NX-GZIP engine. The error is injected by accessing the control memory address with an illegal instruction; on successful handling, the process attempting the access receives SIGBUS.
Signed-off-by: Ganesh Goudar <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2022-03-07 | nds32: Remove the architecture | Alan Kao | 5 | -37/+0
The nds32 architecture, also known as AndeStar V3, is a custom 32-bit RISC target designed by Andes Technologies. Support was added to the kernel in 2016 as the replacement RISC-V based V5 processors were already announced, and maintained by (current or former) Andes employees. As explained by Alan Kao, new customers are now all using RISC-V, and all known nds32 users are already on longterm stable kernels provided by Andes, with no development work going into mainline support any more. While the port is still in a reasonably good shape, it only gets worse over time without active maintainers, so it seems best to remove it before it becomes unusable. As always, if it turns out that there are mainline users after all, and they volunteer to maintain the port in the future, the removal can be reverted. Link: https://lore.kernel.org/linux-mm/[email protected]/ Link: https://lore.kernel.org/lkml/[email protected]/ Link: https://www.andestech.com/en/products-solutions/andestar-architecture/ Signed-off-by: Alan Kao <[email protected]> [arnd: rewrite changelog to provide more background] Signed-off-by: Arnd Bergmann <[email protected]>
2022-03-07 | Merge branch 'topic/func-desc-lkdtm' into next | Michael Ellerman | 1 | -0/+1
Merge a topic branch we are maintaining with some cross-architecture changes to function descriptor handling and their use in LKDTM. From Christophe's cover letter:
  Fix LKDTM for PPC64/IA64/PARISC
  PPC64/IA64/PARISC have function descriptors. LKDTM doesn't work on those three architectures because LKDTM messes up function descriptors with functions. This series does some cleanup in the three architectures and refactors function descriptors so that it can then easily use it in a generic way in LKDTM.
2022-03-07 | selftests: net: fix array_size.cocci warning | Guo Zhengkui | 1 | -1/+1
Fix the following coccicheck warning:
  tools/testing/selftests/net/reuseport_bpf_numa.c:89:28-29: WARNING: Use ARRAY_SIZE.
It has been tested with gcc (Debian 8.3.0-6) 8.3.0 on x86_64.
Signed-off-by: Guo Zhengkui <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2022-03-06 | tools/virtio: handle fallout from folio work | Michael S. Tsirkin | 1 | -0/+3
just add a stub Signed-off-by: Michael S. Tsirkin <[email protected]>
2022-03-06 | tools/virtio: fix virtio_test execution | Stefano Garzarella | 1 | -0/+1
virtio_test hangs on __vring_new_virtqueue() because `vqs_list_lock` is not initialized. Let's initialize it in vdev_info_init(). Signed-off-by: Stefano Garzarella <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Michael S. Tsirkin <[email protected]> Acked-by: Jason Wang <[email protected]>
2022-03-05 | selftests/bpf: Add a test for btf_type_tag "percpu" | Hao Luo | 3 | -29/+215
Add a test for the percpu btf_type_tag. Similar to the "user" tag, we test the following cases:
 1. __percpu struct field.
 2. __percpu as function parameter.
 3. per_cpu_ptr() accepts dynamically allocated __percpu memory.
Because the test for "user" and the test for "percpu" are very similar, a little bit of refactoring has been done in btf_tag.c. Basically, both tests share the same function for loading vmlinux and module btf.
Example output from the log:
  > ./test_progs -v -t btf_tag
  libbpf: prog 'test_percpu1': BPF program load failed: Permission denied
  libbpf: prog 'test_percpu1': -- BEGIN PROG LOAD LOG --
  ...
  ; g = arg->a;
  1: (61) r1 = *(u32 *)(r1 +0)
  R1 is ptr_bpf_testmod_btf_type_tag_1 access percpu memory: off=0
  ...
  test_btf_type_tag_mod_percpu:PASS:btf_type_tag_percpu 0 nsec
  #26/6 btf_tag/btf_type_tag_percpu_mod1:OK
  libbpf: prog 'test_percpu2': BPF program load failed: Permission denied
  libbpf: prog 'test_percpu2': -- BEGIN PROG LOAD LOG --
  ...
  ; g = arg->p->a;
  2: (61) r1 = *(u32 *)(r1 +0)
  R1 is ptr_bpf_testmod_btf_type_tag_1 access percpu memory: off=0
  ...
  test_btf_type_tag_mod_percpu:PASS:btf_type_tag_percpu 0 nsec
  #26/7 btf_tag/btf_type_tag_percpu_mod2:OK
  libbpf: prog 'test_percpu_load': BPF program load failed: Permission denied
  libbpf: prog 'test_percpu_load': -- BEGIN PROG LOAD LOG --
  ...
  ; g = (__u64)cgrp->rstat_cpu->updated_children;
  2: (79) r1 = *(u64 *)(r1 +48)
  R1 is ptr_cgroup_rstat_cpu access percpu memory: off=48
  ...
  test_btf_type_tag_vmlinux_percpu:PASS:btf_type_tag_percpu_load 0 nsec
  #26/8 btf_tag/btf_type_tag_percpu_vmlinux_load:OK
  load_btfs:PASS:could not load vmlinux BTF 0 nsec
  test_btf_type_tag_vmlinux_percpu:PASS:btf_type_tag_percpu 0 nsec
  test_btf_type_tag_vmlinux_percpu:PASS:btf_type_tag_percpu_helper 0 nsec
  #26/9 btf_tag/btf_type_tag_percpu_vmlinux_helper:OK
Signed-off-by: Hao Luo <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2022-03-05 | selftests/bpf: Add tests for kfunc register offset checks | Kumar Kartikeya Dwivedi | 1 | -0/+83
Include a few verifier selftests that test against the problems being fixed by previous commits, i.e. release kfuncs always require the PTR_TO_BTF_ID's fixed offset and var_off to be 0, and a negative offset is not permitted and returns a helpful error message.
Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2022-03-05 | bpf: Disallow negative offset in check_ptr_off_reg | Kumar Kartikeya Dwivedi | 2 | -5/+5
check_ptr_off_reg only allows a fixed offset to be set for PTR_TO_BTF_ID, where reg->off < 0 doesn't make sense. This would shift the pointer backwards, and fails later in btf_struct_ids_match or btf_struct_walk due to an out-of-bounds access (since the offset is interpreted as unsigned). Improve the verifier by rejecting this case with a better error message for BPF helpers and kfuncs, by putting a check inside the check_func_arg_reg_off function. Also, update existing verifier selftests to work with the new error string.
Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
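A kernel-side sketch of the shape of the added rejection, not the verifier's exact code; verbose() stands in for the verifier's internal logging helper and the function name is illustrative:

  #include <linux/bpf_verifier.h>
  #include <linux/errno.h>

  static int reject_negative_ptr_off(struct bpf_verifier_env *env,
                                     const struct bpf_reg_state *reg, int regno)
  {
      /* A negative fixed offset would walk the pointer backwards and later be
       * treated as a huge unsigned offset by the BTF walkers, so refuse it
       * up front with a clear message. */
      if (reg->off < 0) {
          verbose(env, "R%d: negative offset %d on BTF pointer is not allowed\n",
                  regno, reg->off);
          return -EINVAL;
      }

      return 0;
  }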
2022-03-05 | kselftest/vm: fix tests build with old libc | Chengming Zhou | 1 | -0/+1
The error message when I build the vm tests on debian10 (GLIBC 2.28):
  userfaultfd.c: In function `userfaultfd_pagemap_test':
  userfaultfd.c:1393:37: error: `MADV_PAGEOUT' undeclared (first use in this function); did you mean `MADV_RANDOM'?
    if (madvise(area_dst, test_pgsize, MADV_PAGEOUT))
                                       ^~~~~~~~~~~~
                                       MADV_RANDOM
This patch includes these newer definitions from the UAPI linux/mman.h, which fixes the tests build on systems without these definitions in glibc's sys/mman.h.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Chengming Zhou <[email protected]>
Reviewed-by: Shuah Khan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
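A minimal sketch of the idea: pull the MADV_* values from the UAPI header so the code still builds when the libc's <sys/mman.h> predates MADV_PAGEOUT; the helper function is illustrative, only the include is the point.

  #include <sys/mman.h>
  #include <linux/mman.h>    /* UAPI header: provides MADV_PAGEOUT and friends */

  static int pageout_range(void *addr, size_t len)
  {
      /* MADV_PAGEOUT comes from linux/mman.h on older glibc versions. */
      return madvise(addr, len, MADV_PAGEOUT);
  }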
2022-03-05 | selftests/vm: cleanup hugetlb file after mremap test | Mike Kravetz | 2 | -8/+21
The hugepage-mremap test will create a file in a hugetlb filesystem. In a default 'run_vmtests' run, the file will contain all the hugetlb pages. After the test, the file remains and there are no free hugetlb pages for subsequent tests. This causes those hugetlb tests to fail. Change hugepage-mremap to take the name of the hugetlb file as an argument. Unlink the file within the test, and just to be sure remove the file in the run_vmtests script. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Kravetz <[email protected]> Reviewed-by: Shuah Khan <[email protected]> Acked-by: Yosry Ahmed <[email protected]> Reviewed-by: Muchun Song <[email protected]> Reviewed-by: Mina Almasry <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
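A sketch of the shape of the change: take the hugetlb file name from argv and unlink it once the test is done with it, so no hugetlb pages stay pinned for later tests. Names and structure are illustrative, not the actual test source.

  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(int argc, char *argv[])
  {
      const char *file = argc > 1 ? argv[1] : NULL;
      int fd;

      if (!file) {
          fprintf(stderr, "Usage: %s <hugetlb file>\n", argv[0]);
          return EXIT_FAILURE;
      }

      fd = open(file, O_CREAT | O_RDWR, 0644);
      if (fd < 0) {
          perror("open");
          return EXIT_FAILURE;
      }

      /* ... mmap/mremap test body would go here ... */

      unlink(file);    /* release the hugetlb pages for subsequent tests */
      close(fd);
      return EXIT_SUCCESS;
  }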
2022-03-05 | memblock tests: Fix testing with 32-bit physical addresses | Karolina Drobnik | 1 | -2/+4
Building memblock simulator on x86_64 with 32BIT_PHYS_ADDR_T=1 produces "cast to pointer from integer of different size" warnings. Fix them by building the binary in 32-bit environment when using 32-bit physical addresses. Signed-off-by: Karolina Drobnik <[email protected]> Signed-off-by: Mike Rapoport <[email protected]>
2022-03-05 | selftests/bpf: Add custom SEC() handling selftest | Andrii Nakryiko | 2 | -0/+239
Add a selftest validating various aspects of libbpf's handling of custom SEC() handlers. It also demonstrates how libraries can ensure very early callbacks registration and unregistration using __attribute__((constructor))/__attribute__((destructor)) functions. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Tested-by: Alan Maguire <[email protected]> Reviewed-by: Alan Maguire <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2022-03-05 | libbpf: Support custom SEC() handlers | Andrii Nakryiko | 4 | -53/+268
Allow registering and unregistering custom handlers for BPF programs. This allows user applications and libraries to plug into libbpf's declarative SEC() definition handling logic, offloading complex and intricate custom logic into external libraries while still providing a great user experience.
One such example is a USDT handling library, which has a lot of code and complexity that doesn't make sense to put into libbpf directly, but it would be really great for users to be able to specify BPF programs with something like SEC("usdt/<path-to-binary>:<usdt_provider>:<usdt_name>") and have the correct BPF program type set (BPF_PROG_TYPE_KPROBE, as it is a uprobe) and even support BPF skeleton's auto-attach logic.
In some cases, it might even be a good idea to override libbpf's default handling, like for SEC("perf_event") programs. With a custom library, it's possible to extend the logic to support specifying a perf event specification right there in the SEC() definition without burdening libbpf with lots of custom logic or extra library dependencies (e.g., libpfm4). With the current patch it's possible to override libbpf's SEC("perf_event") handling and specify a completely custom one.
Further, it's possible to specify a generic fallback handling for any SEC() that doesn't match any other custom or standard libbpf handlers. This allows accommodating whatever legacy use cases there might be, if necessary.
See the doc comments for libbpf_register_prog_handler() and libbpf_unregister_prog_handler() for detailed semantics.
This patch also bumps the libbpf development version to v0.8 and adds the new APIs there.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Tested-by: Alan Maguire <[email protected]>
Reviewed-by: Alan Maguire <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
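A rough sketch of how a library might use the registration API named in the commit message for a hypothetical SEC("usdt") handler. The options struct name and fields shown here (LIBBPF_OPTS with a .cookie) reflect my reading of the v0.8 interface and should be checked against libbpf.h:

  #include <bpf/libbpf.h>

  static int register_usdt_handler(void)
  {
      LIBBPF_OPTS(libbpf_prog_handler_opts, opts,
          .cookie = 1,    /* passed back to the handler's callbacks */
      );

      /* USDT programs are uprobes under the hood, hence BPF_PROG_TYPE_KPROBE. */
      return libbpf_register_prog_handler("usdt", BPF_PROG_TYPE_KPROBE,
                                          0 /* expected_attach_type */, &opts);
  }

  static void unregister_usdt_handler(int handler_id)
  {
      libbpf_unregister_prog_handler(handler_id);
  }

As the neighboring selftest entry notes, such registration can be done very early via __attribute__((constructor)) functions so the handler is in place before any bpf_object is opened.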
2022-03-05 | libbpf: Allow BPF program auto-attach handlers to bail out | Andrii Nakryiko | 1 | -55/+85
Allow some BPF program types to support auto-attach only in a subset of cases. Currently, if some BPF program type specifies an attach callback, it is assumed that during the skeleton attach operation all such programs either successfully attach or the entire skeleton attachment fails. If some program doesn't support auto-attachment from the skeleton, such BPF program types shouldn't have an attach callback specified.
This is limiting for cases when, depending on how full the SEC("") definition is, there could either be enough details to support auto-attach or there might not be, and the user has to use some specific API to provide more details at runtime. One specific example of such desired behavior might be SEC("uprobe"). If it's specified as just uprobe, auto-attach isn't possible. But if it's SEC("uprobe/<some_binary>:<some_func>") then there are enough details to support auto-attach.
Note that there is a somewhat subtle difference between the auto-attach behavior of BPF skeletons and using the "generic" bpf_program__attach(prog) (which uses the same attach handlers under the cover). A skeleton allows some programs within the bpf_object to not have auto-attach implemented and doesn't treat that as an error. Instead such BPF programs are just skipped during the skeleton's (optional) attach step. bpf_program__attach(), on the other hand, is called when the user *expects* auto-attach to work, so if the specified program doesn't implement or doesn't support auto-attach functionality, that will be treated as an error.
Another improvement to the way libbpf is handling SEC()s would be to not require providing a dummy kernel function name for kprobe. Currently, SEC("kprobe/whatever") is necessary even if the actual kernel function is determined by the user at runtime and bpf_program__attach_kprobe() is used to specify it. With the changes in this patch, it's possible to support both SEC("kprobe") and SEC("kprobe/<actual_kernel_function>"), while only in the latter case auto-attach will be performed. In the former one, such a kprobe will be skipped during the skeleton attach operation.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Tested-by: Alan Maguire <[email protected]>
Reviewed-by: Alan Maguire <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
2022-03-04 | selftests: mptcp: update output info of chk_rm_nr | Geliang Tang | 1 | -8/+9
This patch updates the output info of chk_rm_nr. Rename 'sf' to 'rmsf', which means 'remove subflow'. Add a display of whether the inverted namespaces have been used to check the mib counters. The new output looks like this:
  002 remove multiple subflows   syn[ ok ] - synack[ ok ] - ack[ ok ]
                                 rm [ ok ] - rmsf  [ ok ]
  003 remove single address      syn[ ok ] - synack[ ok ] - ack[ ok ]
                                 add[ ok ] - echo  [ ok ]
                                 rm [ ok ] - rmsf  [ ok ]   invert
Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
2022-03-04 | selftests: mptcp: add more arguments for chk_join_nr | Geliang Tang | 1 | -14/+33
This patch adds five more arguments for chk_join_nr(). Their default values are all zero. The first two, csum_ns1 and csum_ns2, are passed to chk_csum_nr() to check the mib counters of the checksum errors in ns1 and ns2. A '+' can be added to these two arguments to indicate that multiple checksum errors are allowed when doing this check. For example,
  chk_csum_nr "" +2 +2
indicates that two or more checksum errors are allowed in both ns1 and ns2. The remaining two, fail_nr and rst_nr, are passed to chk_fail_nr() and chk_rst_nr() respectively, to check the sending and receiving mib counters of MP_FAIL and MP_RST. Also do some cleanups in chk_fail_nr(): rename two local variables and update the output message.
Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>