path: root/tools
2021-12-02  libbpf: Use __u32 fields in bpf_map_create_opts  (Andrii Nakryiko, 1 file, -4/+4)
Corresponding Linux UAPI struct uses __u32, not int, so keep it consistent. Fixes: 992c4225419a ("libbpf: Unify low-level map creation APIs w/ new bpf_map_create()") Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  tools/resolve_btfids: Skip unresolved symbol warning for empty BTF sets  (Kumar Kartikeya Dwivedi, 1 file, -3/+5)
resolve_btfids prints a warning when it finds an unresolved symbol (id == 0) in id_patch. This can be the case for BTF sets that are empty (due to disabled config options), hence printing warnings for certain builds, most recently seen in [0]. The reason is that id->cnt aliases id->id in struct btf_id, leading to an empty set showing up as ID 0 when we get to id_patch, which triggers the warning. Since sets are an exception here, accommodate this by reusing a hole in struct btf_id for a bool is_set member, setting it to true for BTF sets when setting id->cnt, and use that to skip the extraneous warning. [0]: https://lore.kernel.org/all/[email protected]
Before:
; ./tools/bpf/resolve_btfids/resolve_btfids -v -b vmlinux net/ipv4/tcp_cubic.ko
adding symbol tcp_cubic_kfunc_ids
WARN: resolve_btfids: unresolved symbol tcp_cubic_kfunc_ids
patching addr 0: ID 0 [tcp_cubic_kfunc_ids]
sorting addr 4: cnt 0 [tcp_cubic_kfunc_ids]
update ok for net/ipv4/tcp_cubic.ko
After:
; ./tools/bpf/resolve_btfids/resolve_btfids -v -b vmlinux net/ipv4/tcp_cubic.ko
adding symbol tcp_cubic_kfunc_ids
patching addr 0: ID 0 [tcp_cubic_kfunc_ids]
sorting addr 4: cnt 0 [tcp_cubic_kfunc_ids]
update ok for net/ipv4/tcp_cubic.ko
Fixes: 0e32dfc80bae ("bpf: Enable TCP congestion control kfunc from modules") Reported-by: Pavel Skripkin <[email protected]> Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Song Liu <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
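A rough sketch of the bookkeeping described above (the exact field layout in tools/bpf/resolve_btfids/main.c may differ; this is illustrative only). Because id and cnt share storage, an empty set (cnt == 0) is indistinguishable from an unresolved symbol (id == 0) without the extra flag:

```c
/* Illustrative sketch, not the verbatim resolve_btfids definition. */
struct btf_id {
	char	*name;
	union {
		int	 id;	/* resolved BTF type ID (0 = unresolved) */
		int	 cnt;	/* for sets: number of members (0 = empty set) */
	};
	bool	 is_set;	/* true for BTF sets, so id_patch can skip the
				 * "unresolved symbol" warning when cnt == 0 */
	/* ... */
};
```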
2021-12-02  selftests/bpf: Update test names for xchg and cmpxchg  (Paul E. McKenney, 1 file, -2/+2)
The test_cmpxchg() and test_xchg() functions say "test_run add". Therefore, make them say "test_run cmpxchg" and "test_run xchg", respectively. Signed-off-by: Paul E. McKenney <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/20211201005030.GA3071525@paulmck-ThinkPad-P17-Gen-1
2021-12-02  selftests/bpf: Build testing_helpers.o out of tree  (Jean-Philippe Brucker, 1 file, -18/+22)
Add $(OUTPUT) prefix to testing_helpers.o, so it can be built out of tree when necessary. At the moment, in addition to being built in-tree even when out-of-tree is required, testing_helpers.o is not built with the right recipe when cross-building. For consistency the other helpers, cgroup_helpers and trace_helpers, can also be passed as objects instead of source. Use *_HELPERS variable to keep the Makefile readable. Fixes: f87c1930ac29 ("selftests/bpf: Merge test_stub.c into testing_helpers.c") Signed-off-by: Jean-Philippe Brucker <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 11 files, -106/+318)
Signed-off-by: Jakub Kicinski <[email protected]>
2021-12-02  Merge tag 'net-5.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds, 4 files, -5/+32)
Pull networking fixes from Jakub Kicinski:
"Including fixes from wireless, and wireguard. Mostly scattered driver changes this week, with one big clump in mv88e6xxx. Nothing of note, really.
Current release - regressions:
- smc: keep smc_close_final()'s error code during active close
Current release - new code bugs:
- iwlwifi: various static checker fixes (int overflow, leaks, missing error codes)
- rtw89: fix size of firmware header before transfer, avoid crash
- mt76: fix timestamp check in tx_status; fix pktid leak
- mscc: ocelot: fix missing unlock on error in ocelot_hwstamp_set()
Previous releases - regressions:
- smc: fix list corruption in smc_lgr_cleanup_early
- ipv4: convert fib_num_tclassid_users to atomic_t
Previous releases - always broken:
- tls: fix authentication failure in CCM mode
- vrf: reset IPCB/IP6CB when processing outbound pkts, prevent incorrect processing
- dsa: mv88e6xxx: fixes for various device errata
- rds: correct socket tunable error in rds_tcp_tune()
- ipv6: fix memory leak in fib6_rule_suppress
- wireguard: reset peer src endpoint when netns exits
- wireguard: improve resilience to DoS around incoming handshakes
- tcp: fix page frag corruption on page fault which involves TCP
- mpls: fix missing attributes in delete notifications
- mt7915: fix NULL pointer dereference with ad-hoc mode
Misc:
- rt2x00: be more lenient about EPROTO errors during start
- mlx4_en: update reported link modes for 1/10G"
* tag 'net-5.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (85 commits)
net: dsa: b53: Add SPI ID table
gro: Fix inconsistent indenting
selftests: net: Correct case name
net/rds: correct socket tunable error in rds_tcp_tune()
mctp: Don't let RTM_DELROUTE delete local routes
net/smc: Keep smc_close_final rc during active close
ibmvnic: drop bad optimization in reuse_tx_pools()
ibmvnic: drop bad optimization in reuse_rx_pools()
net/smc: fix wrong list_del in smc_lgr_cleanup_early
Fix Comment of ETH_P_802_3_MIN
ethernet: aquantia: Try MAC address from device tree
ipv4: convert fib_num_tclassid_users to atomic_t
net: avoid uninit-value from tcp_conn_request
net: annotate data-races on txq->xmit_lock_owner
octeontx2-af: Fix a memleak bug in rvu_mbox_init()
net/mlx4_en: Fix an use-after-free bug in mlx4_en_try_alloc_resources()
vrf: Reset IPCB/IP6CB when processing outbound pkts in vrf dev xmit
net: qlogic: qlcnic: Fix a NULL pointer dereference in qlcnic_83xx_add_rings()
net: dsa: mv88e6xxx: Link in pcs_get_state() if AN is bypassed
net: dsa: mv88e6xxx: Fix inband AN for 2500base-x on 88E6393X family
...
2021-12-02  selftests/bpf: Add CO-RE relocations to verifier scale test.  (Alexei Starovoitov, 1 file, -2/+2)
Add 182 CO-RE relocations to verifier scale test. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  selftests/bpf: Revert CO-RE removal in test_ksyms_weak.  (Alexei Starovoitov, 1 file, -1/+1)
Commit 087cba799ced ("selftests/bpf: Add weak/typeless ksym test for light skeleton") added test_ksyms_weak to light skeleton testing, but removed CO-RE access. Revert that part of the commit, since the light skeleton can use CO-RE in the kernel. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  selftests/bpf: Additional test for CO-RE in the kernel.  (Alexei Starovoitov, 3 files, -1/+119)
Add a test where the randmap() function is appended to three different bpf programs. That checks the struct bpf_core_relo replication logic and offset adjustment in the gen loader part of libbpf. A fourth bpf program has 360 CO-RE relocations from vmlinux, bpf_testmod, and a non-existing type. It tests the candidate cache logic. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  selftests/bpf: Convert map_ptr_kern test to use light skeleton.  (Alexei Starovoitov, 2 files, -10/+9)
To further exercise CO-RE in the kernel, convert the map_ptr_kern test to a light skeleton. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  selftests/bpf: Improve inner_map test coverage.  (Alexei Starovoitov, 1 file, -2/+14)
Check that hash and array inner maps are properly initialized. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  selftests/bpf: Add lskel version of kfunc test.  (Alexei Starovoitov, 2 files, -1/+25)
Add light skeleton version of kfunc_call_test_subprog test. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  libbpf: Clean gen_loader's attach kind.  (Alexei Starovoitov, 1 file, -1/+3)
The gen_loader has to clear attach_kind, otherwise programs without attach_btf_id will fail to load if they follow programs with attach_btf_id. Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.") Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  libbpf: Support init of inner maps in light skeleton.  (Alexei Starovoitov, 3 files, -3/+31)
Add ability to initialize inner maps in light skeleton. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  libbpf: Use CO-RE in the kernel in light skeleton.  (Alexei Starovoitov, 3 files, -33/+120)
Without lskel the CO-RE relocations are processed by libbpf before any other work is done. Instead, when lskel is needed, remember each relocation as a RELO_CORE kind. Then, when the loader prog is generated for a given bpf program, pass the CO-RE relos of that program to the gen loader via bpf_gen__record_relo_core(). The gen loader will remember them as-is and pass them later as-is into the kernel. The normal libbpf flow is to process CO-RE early, before call relos happen. In the case of gen_loader the core relos have to be added to the other relos so they are copied together when a bpf static function is appended in different places to other main bpf progs. During the copy, append_subprog_relos() will adjust insn_idx for normal relos and for the RELO_CORE kind too. When that is done, each struct reloc_desc has correct relos for its specific main prog. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  libbpf: Cleanup struct bpf_core_cand.  (Andrii Nakryiko, 2 files, -15/+17)
Remove two redundant fields from struct bpf_core_cand. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  bpf: Pass a set of bpf_core_relo-s to prog_load command.  (Alexei Starovoitov, 2 files, -54/+58)
struct bpf_core_relo is generated by llvm and processed by libbpf. It's a de-facto uapi. With CO-RE in the kernel, struct bpf_core_relo becomes uapi de-jure. Add the ability to pass a set of 'struct bpf_core_relo' to the prog_load command and let the kernel perform CO-RE relocations. Note that struct bpf_line_info and struct bpf_func_info have the same layout when passed from LLVM to libbpf and from libbpf to the kernel, except that their "insn_off" field means "byte offset" when LLVM generates it; libbpf then converts it to an "insn index" before passing it to the kernel. The struct bpf_core_relo's "insn_off" field is always a "byte offset". Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
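For reference, a hedged sketch of the record being promoted to uapi here (field names as recalled from the libbpf definition; treat it as illustrative rather than the authoritative include/uapi/linux/bpf.h text):

```c
/* Sketch of one CO-RE relocation record; LLVM emits one per relocatable
 * access and, with this series, a set of them can be handed to the
 * BPF_PROG_LOAD command for the kernel to resolve. */
struct bpf_core_relo {
	__u32 insn_off;       /* byte offset of the instruction to patch */
	__u32 type_id;        /* BTF type ID of the root (anchor) type */
	__u32 access_str_off; /* offset of the "0:1:5"-style access string */
	enum bpf_core_relo_kind kind; /* what to relocate: field offset,
	                               * type size, enum value, etc. */
};
```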
2021-12-02  bpf: Define enum bpf_core_relo_kind as uapi.  (Alexei Starovoitov, 4 files, -60/+63)
enum bpf_core_relo_kind is generated by llvm and processed by libbpf. It's a de-facto uapi. With CO-RE in the kernel the bpf_core_relo_kind values become uapi de-jure. Also rename them with a BPF_CORE_ prefix to distinguish them from conflicting names in bpf_core_read.h. The enums bpf_field_info_kind, bpf_type_id_kind, bpf_type_info_kind and bpf_enum_value_kind are used to pass different values from the bpf program into llvm. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
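As a hedged illustration, the renamed constants as recalled from the resulting uapi header (the patch itself is authoritative):

```c
/* Illustrative sketch of the BPF_CORE_-prefixed relocation kinds. */
enum bpf_core_relo_kind {
	BPF_CORE_FIELD_BYTE_OFFSET = 0,  /* field byte offset */
	BPF_CORE_FIELD_BYTE_SIZE   = 1,  /* field size in bytes */
	BPF_CORE_FIELD_EXISTS      = 2,  /* field existence in target kernel */
	BPF_CORE_FIELD_SIGNED      = 3,  /* field signedness */
	BPF_CORE_FIELD_LSHIFT_U64  = 4,  /* bitfield-specific left shift */
	BPF_CORE_FIELD_RSHIFT_U64  = 5,  /* bitfield-specific right shift */
	BPF_CORE_TYPE_ID_LOCAL     = 6,  /* type ID in local BPF object */
	BPF_CORE_TYPE_ID_TARGET    = 7,  /* type ID in target kernel */
	BPF_CORE_TYPE_EXISTS       = 8,  /* type existence in target kernel */
	BPF_CORE_TYPE_SIZE         = 9,  /* type size in bytes */
	BPF_CORE_ENUMVAL_EXISTS    = 10, /* enum value existence */
	BPF_CORE_ENUMVAL_VALUE     = 11, /* enum value integer value */
};
```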
2021-12-02  bpf: Prepare relo_core.c for kernel duty.  (Alexei Starovoitov, 1 file, -11/+65)
Make relo_core.c compile for both the kernel and user space libbpf. Note the patch reduces BPF_CORE_SPEC_MAX_LEN from 64 to 32. This is the maximum number of nested structs and arrays. For example: struct sample { int a; struct { int b[10]; }; }; struct sample *s = ...; int *y = &s->b[5]; This field access is encoded as "0:1:0:5" and the spec len is 4. A follow-up patch might bump it back to 64. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
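Spelling out the example above (the per-index reading is an interpretation of the encoding, not text from the commit):

```c
/* The access from the commit message, with each spec component annotated. */
struct sample {
	int a;
	struct {
		int b[10];
	};
};

struct sample obj;
struct sample *s = &obj;
int *y = &s->b[5];
/* Encoded access spec "0:1:0:5", spec len 4:
 *   0 - s[0], the initial dereference of the pointer
 *   1 - member #1 of struct sample (the anonymous struct; 'a' is member #0)
 *   0 - member #0 of the anonymous struct (the array b)
 *   5 - element 5 of b
 * BPF_CORE_SPEC_MAX_LEN bounds how many such components one spec may carry.
 */
```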
2021-12-02  libbpf: Replace btf__type_by_id() with btf_type_by_id().  (Alexei Starovoitov, 3 files, -13/+10)
To prepare relo_core.c to be compiled in the kernel and the user space replace btf__type_by_id with btf_type_by_id. In libbpf btf__type_by_id and btf_type_by_id have different behavior. bpf_core_apply_relo_insn() needs behavior of uapi btf__type_by_id vs internal btf_type_by_id, but type_id range check is already done in bpf_core_apply_relo(), so it's safe to replace it everywhere. The kernel btf_type_by_id() does the check anyway. It doesn't hurt. Suggested-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-12-02  selftests: net: remove meaningless help option  (Li Zhijian, 1 file, -2/+0)
$ ./fcnal-test.sh -t help
Test names: help
It looks like the intent was to list the available tests, but it doesn't do the right thing. I will add another option to do that in a later patch. Signed-off-by: Li Zhijian <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2021-12-02  selftests: net: Correct case name  (Li Zhijian, 1 file, -2/+2)
ipv6_addr_bind/ipv4_addr_bind are function names. Previously, the bind tests would not be run by default due to the wrong case names. Fixes: 34d0302ab861 ("selftests: Add ipv6 address bind tests to fcnal-test") Fixes: 75b2b2b3db4c ("selftests: Add ipv4 address bind tests to fcnal-test") Signed-off-by: Li Zhijian <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2021-11-30  tools/memory-model: litmus: Add two tests for unlock(A)+lock(B) ordering  (Boqun Feng, 3 files, -0/+76)
The memory model has been updated to provide a stronger ordering guarantee for unlock(A)+lock(B) on the same CPU/thread. Therefore add two litmus tests describing this new guarantee. These tests are simple yet clearly show the usage of the new guarantee, and they can also serve as self tests for the modification to the model. Co-developed-by: Alan Stern <[email protected]> Signed-off-by: Alan Stern <[email protected]> Signed-off-by: Boqun Feng <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  tools/memory-model: doc: Describe the requirement of the litmus-tests directory  (Boqun Feng, 1 file, -0/+12)
It's better to have some "standard" for which tests should be put in the litmus-tests directory, because it helps future contributors understand whether they should work on litmus-tests in the kernel or in Paul's GitHub repo. Therefore explain a little bit about what a "representative" litmus test is. Signed-off-by: Boqun Feng <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  tools/memory-model: Provide extra ordering for unlock+lock pair on the same CPU  (Boqun Feng, 2 files, -22/+28)
A recent discussion[1] shows that we are in favor of strengthening the ordering of unlock + lock on the same CPU: an unlock and a po-after lock should provide the so-called RCtso ordering, that is, a memory access S po-before the unlock should be ordered against a memory access R po-after the lock, unless S is a store and R is a load. The strengthening meets programmers' expectation that a "sequence of two locked regions to be ordered wrt each other" (from Linus), and can reduce the mental burden when using locks. Therefore add it to LKMM. [1]: https://lore.kernel.org/lkml/[email protected]/ Co-developed-by: Alan Stern <[email protected]> Signed-off-by: Alan Stern <[email protected]> Signed-off-by: Boqun Feng <[email protected]> Reviewed-by: Michael Ellerman <[email protected]> (powerpc) Acked-by: Palmer Dabbelt <[email protected]> (RISC-V) Acked-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
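A minimal illustration of the guarantee described above (lock and variable names are made up for the example; this is a sketch of the rule, not code from the patch):

```c
/* Same CPU/thread throughout. */
spin_lock(&a);
WRITE_ONCE(x, 1);        /* S: access po-before the unlock */
spin_unlock(&a);
spin_lock(&b);           /* the po-after lock, possibly a different lock */
WRITE_ONCE(y, 1);        /* R: access po-after the lock */
spin_unlock(&b);

/* With the strengthened model, S is ordered before R for every combination
 * except store->load: had R been a READ_ONCE() while S stays a store, no
 * ordering would be guaranteed (RCtso = everything but store->load). */
```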
2021-11-30  torture: Properly redirect kvm-remote.sh "echo" commands  (Paul E. McKenney, 1 file, -5/+5)
The echo commands following initialization of the "oldrun" variable need to be "tee"d to $oldrun/remote-log. This commit fixes several stragglers. Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  torture: Fix incorrectly redirected "exit" in kvm-remote.sh  (Paul E. McKenney, 1 file, -1/+1)
The "exit 4" in kvm-remote.sh is pointlessly redirected, so this commit removes the redirection. Fixes: 0092eae4cb4e ("torture: Add kvm-remote.sh script for distributed rcutorture test runs") Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  rcutorture: Test RCU Tasks lock-contention detection  (Paul E. McKenney, 1 file, -0/+1)
This commit adjusts the TRACE02 scenario to use a pair of callback-flood kthreads. This in turn forces lock contention on the single RCU Tasks Trace callback queue, which forces use of all CPUs' queues, thus testing this transition. (No, there is not yet any way to transition back.) Cc: Neeraj Upadhyay <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  rcutorture: Cause TREE02 and TREE10 scenarios to do more callback flooding  (Paul E. McKenney, 2 files, -0/+2)
This commit enables two callback-flood kthreads for the TREE02 scenario and 28 for the TREE10 scenario. Cc: Neeraj Upadhyay <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  torture: Retry download once before giving up  (Paul E. McKenney, 1 file, -2/+9)
Currently, a transient network error can kill a run if it happens while downloading the tarball to one of the target systems. This commit therefore does a 60-second wait and then a retry. If further experience indicates, a more elaborate mechanism might be used later. Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  torture: Make kvm-find-errors.sh report link-time undefined symbols  (Paul E. McKenney, 2 files, -3/+4)
This commit makes kvm-find-errors.sh check for and report undefined symbols that are detected at link time. Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  torture: Catch kvm.sh help text up with actual options  (Paul E. McKenney, 1 file, -3/+6)
This commit brings the kvm.sh script's help text up to date with recently (and some not-so-recently) added parameters. Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  tools/nolibc: Implement gettid()  (Mark Brown, 1 file, -0/+18)
Allow test programs to determine their thread ID. Signed-off-by: Mark Brown <[email protected]> Cc: Willy Tarreau <[email protected]> Signed-off-by: Willy Tarreau <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
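A hedged sketch of what such a nolibc wrapper typically looks like (the real addition may split this into a sys_gettid()/gettid() pair; the exact shape here is an assumption):

```c
/* Sketch of a nolibc-style gettid() built on the raw syscall. */
static __attribute__((unused))
pid_t gettid(void)
{
	return my_syscall0(__NR_gettid);
}
```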
2021-11-30  tools/nolibc: x86-64: Use `mov $60,%eax` instead of `mov $60,%rax`  (Ammar Faizi, 1 file, -1/+1)
Note that a mov to a 32-bit register zero-extends into the full 64-bit register. Thus `mov $60,%eax` has the same effect as `mov $60,%rax`. Use the shorter opcode to achieve the same thing.
```
b8 3c 00 00 00          mov $60,%eax   (5 bytes)  [1]
48 c7 c0 3c 00 00 00    mov $60,%rax   (7 bytes)  [2]
```
Currently, we use [2]. Change it to [1] for shorter code. Signed-off-by: Ammar Faizi <[email protected]> Signed-off-by: Willy Tarreau <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  tools/nolibc: x86: Remove `r8`, `r9` and `r10` from the clobber list  (Ammar Faizi, 1 file, -14/+19)
Linux x86-64 syscall only clobbers rax, rcx and r11 (and "memory"). - rax for the return value. - rcx to save the return address. - r11 to save the rflags. Other registers are preserved. Having r8, r9 and r10 in the syscall clobber list is harmless, but this results in a missed-optimization. As the syscall doesn't clobber r8-r10, GCC should be allowed to reuse their value after the syscall returns to userspace. But since they are in the clobber list, GCC will always miss this opportunity. Remove them from the x86-64 syscall clobber list to help GCC generate better code and fix the comment. See also the x86-64 ABI, section A.2 AMD64 Linux Kernel Conventions, A.2.1 Calling Conventions [1]. Extra note: Some people may think it does not really give a benefit to remove r8, r9 and r10 from the syscall clobber list because the impression of syscall is a C function call, and function call always clobbers those 3. However, that is not the case for nolibc.h, because we have a potential to inline the "syscall" instruction (which its opcode is "0f 05") to the user functions. All syscalls in the nolibc.h are written as a static function with inline ASM and are likely always inline if we use optimization flag, so this is a profit not to have r8, r9 and r10 in the clobber list. Here is the example where this matters. Consider the following C code: ``` #include "tools/include/nolibc/nolibc.h" #define read_abc(a, b, c) __asm__ volatile("nop"::"r"(a),"r"(b),"r"(c)) int main(void) { int a = 0xaa; int b = 0xbb; int c = 0xcc; read_abc(a, b, c); write(1, "test\n", 5); read_abc(a, b, c); return 0; } ``` Compile with: gcc -Os test.c -o test -nostdlib With r8, r9, r10 in the clobber list, GCC generates this: 0000000000001000 <main>: 1000: f3 0f 1e fa endbr64 1004: 41 54 push %r12 1006: 41 bc cc 00 00 00 mov $0xcc,%r12d 100c: 55 push %rbp 100d: bd bb 00 00 00 mov $0xbb,%ebp 1012: 53 push %rbx 1013: bb aa 00 00 00 mov $0xaa,%ebx 1018: 90 nop 1019: b8 01 00 00 00 mov $0x1,%eax 101e: bf 01 00 00 00 mov $0x1,%edi 1023: ba 05 00 00 00 mov $0x5,%edx 1028: 48 8d 35 d1 0f 00 00 lea 0xfd1(%rip),%rsi 102f: 0f 05 syscall 1031: 90 nop 1032: 31 c0 xor %eax,%eax 1034: 5b pop %rbx 1035: 5d pop %rbp 1036: 41 5c pop %r12 1038: c3 ret GCC thinks that syscall will clobber r8, r9, r10. So it spills 0xaa, 0xbb and 0xcc to callee saved registers (r12, rbp and rbx). This is clearly extra memory access and extra stack size for preserving them. But syscall does not actually clobber them, so this is a missed optimization. Now without r8, r9, r10 in the clobber list, GCC generates better code: 0000000000001000 <main>: 1000: f3 0f 1e fa endbr64 1004: 41 b8 aa 00 00 00 mov $0xaa,%r8d 100a: 41 b9 bb 00 00 00 mov $0xbb,%r9d 1010: 41 ba cc 00 00 00 mov $0xcc,%r10d 1016: 90 nop 1017: b8 01 00 00 00 mov $0x1,%eax 101c: bf 01 00 00 00 mov $0x1,%edi 1021: ba 05 00 00 00 mov $0x5,%edx 1026: 48 8d 35 d3 0f 00 00 lea 0xfd3(%rip),%rsi 102d: 0f 05 syscall 102f: 90 nop 1030: 31 c0 xor %eax,%eax 1032: c3 ret Cc: Andy Lutomirski <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: [email protected] Cc: "H. Peter Anvin" <[email protected]> Cc: David Laight <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Signed-off-by: Ammar Faizi <[email protected]> Link: https://gitlab.com/x86-psABIs/x86-64-ABI/-/wikis/x86-64-psABI [1] Link: https://lore.kernel.org/lkml/[email protected]/ Signed-off-by: Willy Tarreau <[email protected]> Signed-off-by: Paul E. 
McKenney <[email protected]>
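A hedged sketch of a nolibc-style wrapper with the reduced clobber list (the my_syscall1 name follows nolibc's conventions, but the body here is illustrative, not the patch text):

```c
/* One-argument syscall wrapper: the kernel only clobbers rax (return
 * value), rcx (return address) and r11 (rflags), so r8-r10 are left out
 * of the clobber list and stay available to the compiler across the call. */
static long my_syscall1(long num, long arg1)
{
	long ret;
	register long _num  __asm__("rax") = num;
	register long _arg1 __asm__("rdi") = arg1;

	__asm__ volatile (
		"syscall"
		: "=a"(ret)
		: "r"(_arg1), "0"(_num)
		: "rcx", "r11", "memory"
	);
	return ret;
}
```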
2021-11-30  tools/nolibc: fix incorrect truncation of exit code  (Willy Tarreau, 1 file, -8/+5)
Ammar Faizi reported that our exit code handling is wrong. We truncate it to the lowest 8 bits, but the syscall itself is expected to take a regular 32-bit signed integer, not an unsigned char. It's the kernel that later truncates it to the lowest 8 bits. The difference is visible in strace, where the program below used to show exit(255) instead of exit(-1): int main(void) { return -1; } This patch applies the fix to all archs. x86_64, i386, arm64, armv7 and mips were all tested and confirmed to work fine now. RISC-V was not tested, but the change is trivial and exactly the same as for the other archs. Reported-by: Ammar Faizi <[email protected]> Cc: [email protected] Signed-off-by: Willy Tarreau <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  tools/nolibc: i386: fix initial stack alignment  (Willy Tarreau, 1 file, -1/+9)
After re-checking the spec and comparing stack offsets with glibc, the last pushed argument must be 16-byte aligned (i.e. aligned before the call) so that in the callee esp+4 is a multiple of 16; the principle is the 32-bit equivalent of what Ammar fixed for x86_64. It's possible that 32-bit code using SSE2 or MMX could have been affected. In addition, the frame pointer ought to be zero at the deepest level. Link: https://gitlab.com/x86-psABIs/i386-ABI/-/wikis/Intel386-psABI Cc: Ammar Faizi <[email protected]> Cc: [email protected] Signed-off-by: Willy Tarreau <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  tools/nolibc: x86-64: Fix startup code bug  (Ammar Faizi, 1 file, -2/+8)
Before this patch, the `_start` function looks like this: ``` 0000000000001170 <_start>: 1170: pop %rdi 1171: mov %rsp,%rsi 1174: lea 0x8(%rsi,%rdi,8),%rdx 1179: and $0xfffffffffffffff0,%rsp 117d: sub $0x8,%rsp 1181: call 1000 <main> 1186: movzbq %al,%rdi 118a: mov $0x3c,%rax 1191: syscall 1193: hlt 1194: data16 cs nopw 0x0(%rax,%rax,1) 119f: nop ``` Note the "and" to %rsp with $-16, it makes the %rsp be 16-byte aligned, but then there is a "sub" with $0x8 which makes the %rsp no longer 16-byte aligned, then it calls main. That's the bug! What actually the x86-64 System V ABI mandates is that right before the "call", the %rsp must be 16-byte aligned, not after the "call". So the "sub" with $0x8 here breaks the alignment. Remove it. An example where this rule matters is when the callee needs to align its stack at 16-byte for aligned move instruction, like `movdqa` and `movaps`. If the callee can't align its stack properly, it will result in segmentation fault. x86-64 System V ABI also mandates the deepest stack frame should be zero. Just to be safe, let's zero the %rbp on startup as the content of %rbp may be unspecified when the program starts. Now it looks like this: ``` 0000000000001170 <_start>: 1170: pop %rdi 1171: mov %rsp,%rsi 1174: lea 0x8(%rsi,%rdi,8),%rdx 1179: xor %ebp,%ebp # zero the %rbp 117b: and $0xfffffffffffffff0,%rsp # align the %rsp 117f: call 1000 <main> 1184: movzbq %al,%rdi 1188: mov $0x3c,%rax 118f: syscall 1191: hlt 1192: data16 cs nopw 0x0(%rax,%rax,1) 119d: nopl (%rax) ``` Cc: Bedirhan KURT <[email protected]> Cc: Louvian Lyndal <[email protected]> Reported-by: Peter Cordes <[email protected]> Signed-off-by: Ammar Faizi <[email protected]> [wt: I did this on purpose due to a misunderstanding of the spec, other archs will thus have to be rechecked, particularly i386] Cc: [email protected] Signed-off-by: Willy Tarreau <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  rcu: Remove the RCU_FAST_NO_HZ Kconfig option  (Paul E. McKenney, 3 files, -3/+0)
All of the uses of CONFIG_RCU_FAST_NO_HZ=y that I have seen involve systems with RCU callbacks offloaded. In this situation, all that this Kconfig option does is slow down idle entry/exit with an additional always-taken early exit. If this is the only use case, then this Kconfig option is nothing but an attractive nuisance that needs to go away. This commit therefore removes the RCU_FAST_NO_HZ Kconfig option. Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  torture: Remove RCU_FAST_NO_HZ from rcu scenarios  (Paul E. McKenney, 6 files, -6/+0)
All of the rcu scenarios that mention CONFIG_RCU_FAST_NO_HZ disable it. But this Kconfig option is disabled by default, so this commit removes the pointless "CONFIG_RCU_FAST_NO_HZ=n" lines from these scenarios. Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  torture: Remove RCU_FAST_NO_HZ from rcuscale and refscale scenarios  (Paul E. McKenney, 6 files, -6/+0)
All of the rcuscale and refscale scenarios that mention the Kconfig option CONFIG_RCU_FAST_NO_HZ disable it. But this Kconfig option is disabled by default, so this commit removes the pointless "CONFIG_RCU_FAST_NO_HZ=n" lines from these scenarios. Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  rcutorture: Add CONFIG_PREEMPT_DYNAMIC=n to tiny scenarios  (Paul E. McKenney, 5 files, -0/+5)
With CONFIG_PREEMPT_DYNAMIC=y, the kernel builds with CONFIG_PREEMPTION=y because preemption can be enabled at runtime. This prevents any tests of Tiny RCU or Tiny SRCU from running correctly. This commit therefore explicitly sets CONFIG_PREEMPT_DYNAMIC=n for those scenarios. Signed-off-by: Paul E. McKenney <[email protected]>
2021-11-30  libbpf: Avoid reload of imm for weak, unresolved, repeating ksym  (Kumar Kartikeya Dwivedi, 1 file, -3/+2)
Alexei pointed out that we can use BPF_REG_0 which already contains imm from move_blob2blob computation. Note that we now compare the second insn's imm, but this should not matter, since both will be zeroed out for the error case for the insn populated earlier. Suggested-by: Alexei Starovoitov <[email protected]> Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Song Liu <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-11-30  libbpf: Avoid double stores for success/failure case of ksym relocations  (Kumar Kartikeya Dwivedi, 1 file, -16/+21)
Instead, jump directly to the success case stores when ret >= 0; otherwise do the default 0 value store and jump over the success case. This is better in terms of readability. Readjust the code for kfunc relocation as well to follow a similar pattern, which also leads to easier-to-follow code. Suggested-by: Alexei Starovoitov <[email protected]> Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Song Liu <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-11-30  selftest/bpf/benchs: Add bpf_loop benchmark  (Joanne Koong, 7 files, -1/+203)
Add benchmark to measure the throughput and latency of the bpf_loop call. Testing this on my dev machine on 1 thread, the data is as follows:
nr_loops: 10       bpf_loop - throughput: 198.519 ± 0.155 M ops/s, latency: 5.037 ns/op
nr_loops: 100      bpf_loop - throughput: 247.448 ± 0.305 M ops/s, latency: 4.041 ns/op
nr_loops: 500      bpf_loop - throughput: 260.839 ± 0.380 M ops/s, latency: 3.834 ns/op
nr_loops: 1000     bpf_loop - throughput: 262.806 ± 0.629 M ops/s, latency: 3.805 ns/op
nr_loops: 5000     bpf_loop - throughput: 264.211 ± 1.508 M ops/s, latency: 3.785 ns/op
nr_loops: 10000    bpf_loop - throughput: 265.366 ± 3.054 M ops/s, latency: 3.768 ns/op
nr_loops: 50000    bpf_loop - throughput: 235.986 ± 20.205 M ops/s, latency: 4.238 ns/op
nr_loops: 100000   bpf_loop - throughput: 264.482 ± 0.279 M ops/s, latency: 3.781 ns/op
nr_loops: 500000   bpf_loop - throughput: 309.773 ± 87.713 M ops/s, latency: 3.228 ns/op
nr_loops: 1000000  bpf_loop - throughput: 262.818 ± 4.143 M ops/s, latency: 3.805 ns/op
From this data, we can see that the latency per loop decreases as the number of loops increases. On this particular machine, each loop had an overhead of about ~4 ns, and we were able to run ~250 million loops per second. Signed-off-by: Joanne Koong <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-11-30  selftests/bpf: Measure bpf_loop verifier performance  (Joanne Koong, 5 files, -4/+169)
This patch tests bpf_loop in pyperf and strobemeta, and measures the verifier performance of replacing the traditional for loop with bpf_loop. The results are as follows:
~strobemeta~
Baseline: verification time 6808200 usec, stack depth 496, processed 554252 insns (limit 1000000), max_states_per_insn 16, total_states 15878, peak_states 13489, mark_read 3110, #192 verif_scale_strobemeta:OK (unrolled loop)
Using bpf_loop: verification time 31589 usec, stack depth 96+400, processed 1513 insns (limit 1000000), max_states_per_insn 2, total_states 106, peak_states 106, mark_read 60, #193 verif_scale_strobemeta_bpf_loop:OK
~pyperf600~
Baseline: verification time 29702486 usec, stack depth 368, processed 626838 insns (limit 1000000), max_states_per_insn 7, total_states 30368, peak_states 30279, mark_read 748, #182 verif_scale_pyperf600:OK (unrolled loop)
Using bpf_loop: verification time 148488 usec, stack depth 320+40, processed 10518 insns (limit 1000000), max_states_per_insn 10, total_states 705, peak_states 517, mark_read 38, #183 verif_scale_pyperf600_bpf_loop:OK
Using the bpf_loop helper led to approximately a 99% decrease in the verification time and in the number of instructions. Signed-off-by: Joanne Koong <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-11-30  selftests/bpf: Add bpf_loop test  (Joanne Koong, 2 files, -0/+257)
Add test for bpf_loop testing a variety of cases: various nr_loops, null callback ctx, invalid flags, nested callbacks. Signed-off-by: Joanne Koong <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-11-30  bpf: Add bpf_loop helper  (Joanne Koong, 1 file, -0/+25)
This patch adds the kernel-side and API changes for a new helper function, bpf_loop:
long bpf_loop(u32 nr_loops, void *callback_fn, void *callback_ctx, u64 flags);
where long (*callback_fn)(u32 index, void *ctx);
bpf_loop invokes the "callback_fn" **nr_loops** times or until the callback_fn returns 1. The callback_fn can only return 0 or 1, and this is enforced by the verifier. The callback_fn index is zero-indexed. A few things to please note:
~ The "u64 flags" parameter is currently unused but is included in case a future use case for it arises.
~ In the kernel-side implementation of bpf_loop (kernel/bpf/bpf_iter.c), bpf_callback_t is used as the callback function cast.
~ A program can have nested bpf_loop calls, but the program must still adhere to the verifier constraint on its stack depth (the stack depth cannot exceed MAX_BPF_STACK).
~ Recursive callback_fns do not pass the verifier, due to the call stack for these being too deep.
~ The next patch will include the tests and benchmark.
Signed-off-by: Joanne Koong <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
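A minimal sketch of a BPF program using the helper described above (the bpf_loop signature comes from the commit message; the section name, context struct and header mix are assumptions and require a libbpf new enough to declare bpf_loop):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct loop_ctx {
	__u32 sum;
};

/* Callback: invoked once per iteration with the zero-based index.
 * Returning 0 continues the loop, returning 1 breaks out early. */
static long accumulate(__u32 index, void *ctx)
{
	struct loop_ctx *lc = ctx;

	lc->sum += index;
	return 0;
}

SEC("tracepoint/syscalls/sys_enter_getpid")
int sum_indices(void *ctx)
{
	struct loop_ctx lc = {};

	/* flags must be 0 for now, per the commit message */
	bpf_loop(100, accumulate, &lc, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```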
2021-11-30  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 4 files, -80/+257)
Pull kvm fixes from Paolo Bonzini:
"ARM64:
- Fix constant sign extension affecting TCR_EL2 and preventing running on ARMv8.7 models due to spurious bits being set
- Fix use of helpers using PSTATE early on exit by always sampling it as soon as the exit takes place
- Move pkvm's 32bit handling into a common helper
RISC-V:
- Fix incorrect KVM_MAX_VCPUS value
- Unmap stage2 mapping when deleting/moving a memslot
x86:
- Fix and downgrade BUG_ON due to uninitialized cache
- Many APICv and MOVE_ENC_CONTEXT_FROM fixes
- Correctly emulate TLB flushes around nested vmentry/vmexit and when the nested hypervisor uses VPID
- Prevent modifications to CPUID after the VM has run
- Other smaller bugfixes
Generic:
- Memslot handling bugfixes"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (44 commits)
KVM: fix avic_set_running for preemptable kernels
KVM: VMX: clear vmx_x86_ops.sync_pir_to_irr if APICv is disabled
KVM: SEV: accept signals in sev_lock_two_vms
KVM: SEV: do not take kvm->lock when destroying
KVM: SEV: Prohibit migration of a VM that has mirrors
KVM: SEV: Do COPY_ENC_CONTEXT_FROM with both VMs locked
selftests: sev_migrate_tests: add tests for KVM_CAP_VM_COPY_ENC_CONTEXT_FROM
KVM: SEV: move mirror status to destination of KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM
KVM: SEV: initialize regions_list of a mirror VM
KVM: SEV: cleanup locking for KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM
KVM: SEV: do not use list_replace_init on an empty list
KVM: x86: Use a stable condition around all VT-d PI paths
KVM: x86: check PIR even for vCPUs with disabled APICv
KVM: VMX: prepare sync_pir_to_irr for running with APICv disabled
KVM: selftests: page_table_test: fix calculation of guest_test_phys_mem
KVM: x86/mmu: Handle "default" period when selectively waking kthread
KVM: MMU: shadow nested paging does not have PKU
KVM: x86/mmu: Remove spurious TLB flushes in TDP MMU zap collapsible path
KVM: x86/mmu: Use yield-safe TDP MMU root iter in MMU notifier unmapping
KVM: X86: Use vcpu->arch.walk_mmu for kvm_mmu_invlpg()
...
2021-11-30  tools: Fix math.h breakage  (Matthew Wilcox (Oracle), 3 files, -21/+29)
Commit 98e1385ef24b ("include/linux/radix-tree.h: replace kernel.h with the necessary inclusions") broke the radix tree test suite in two different ways; first by including math.h which didn't exist in the tools directory, and second by removing an implicit include of spinlock.h before lockdep.h. Fix both issues. Cc: Andy Shevchenko <[email protected]> Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: Andy Shevchenko <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>