2024-03-11  devlink: Add comments to use netlink gen tool  (William Tu; 1 file, -1/+4)
Add a comment to remind people not to manually modify net/devlink/netlink_gen.c, but to use tools/net/ynl/ynl-regen.sh to regenerate it. Signed-off-by: William Tu <[email protected]> Suggested-by: Jiri Pirko <[email protected]> Reviewed-by: Jiri Pirko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  nfp: flower: handle acti_netdevs allocation failure  (Duoming Zhou; 1 file, -0/+5)
The kmalloc_array() in nfp_fl_lag_do_work() will return NULL if physical memory has run out. As a result, dereferencing acti_netdevs would lead to a NULL pointer dereference. This patch adds a check for allocation failure; if it happens, the delayed work is rescheduled to try again. Fixes: bb9a8d031140 ("nfp: flower: monitor and offload LAG groups") Signed-off-by: Duoming Zhou <[email protected]> Reviewed-by: Louis Peens <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
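A rough sketch of the pattern described above; the work item name and requeue delay below are illustrative assumptions, not necessarily the driver's exact identifiers:

	/* nfp_fl_lag_do_work(): bail out gracefully if the allocation fails */
	acti_netdevs = kmalloc_array(entry->slave_cnt,
				     sizeof(*acti_netdevs), GFP_KERNEL);
	if (!acti_netdevs) {
		/* retry later instead of dereferencing a NULL pointer */
		schedule_delayed_work(&lag->work, NFP_FL_LAG_DELAY);
		continue;
	}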
2024-03-11  net/packet: Add getsockopt support for PACKET_COPY_THRESH  (Juntong Deng; 2 files, -3/+6)
Currently getsockopt does not support PACKET_COPY_THRESH, and we are unable to get the value of the PACKET_COPY_THRESH socket option through getsockopt. This patch adds getsockopt support for PACKET_COPY_THRESH. In addition, this patch converts access to copy_thresh to READ_ONCE/WRITE_ONCE. Signed-off-by: Juntong Deng <[email protected]> Reviewed-by: Willem de Bruijn <[email protected]> Reviewed-by: Jason Xing <[email protected]> Link: https://lore.kernel.org/r/AM6PR03MB58487A9704FD150CF76F542899272@AM6PR03MB5848.eurprd03.prod.outlook.com Signed-off-by: Jakub Kicinski <[email protected]>
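For illustration, a small user-space sketch of reading the option back once this patch is applied; this is standard getsockopt() usage on an AF_PACKET socket (requires CAP_NET_RAW), not code from the patch itself:

	#include <stdio.h>
	#include <sys/socket.h>
	#include <arpa/inet.h>
	#include <linux/if_packet.h>
	#include <linux/if_ether.h>

	int main(void)
	{
		int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
		int copy_thresh = 0;
		socklen_t len = sizeof(copy_thresh);

		if (fd < 0) {
			perror("socket");
			return 1;
		}
		/* previously only settable; with this patch it can be read back */
		if (getsockopt(fd, SOL_PACKET, PACKET_COPY_THRESH,
			       &copy_thresh, &len) < 0)
			perror("getsockopt(PACKET_COPY_THRESH)");
		else
			printf("PACKET_COPY_THRESH = %d\n", copy_thresh);
		return 0;
	}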
2024-03-11  net/netlink: Add getsockopt support for NETLINK_LISTEN_ALL_NSID  (Juntong Deng; 1 file, -0/+3)
Currently getsockopt does not support NETLINK_LISTEN_ALL_NSID, and we are unable to get the value of the NETLINK_LISTEN_ALL_NSID socket option through getsockopt. This patch adds getsockopt support for NETLINK_LISTEN_ALL_NSID. Signed-off-by: Juntong Deng <[email protected]> Link: https://lore.kernel.org/r/AM6PR03MB58482322B7B335308DA56FE599272@AM6PR03MB5848.eurprd03.prod.outlook.com Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  Merge tag 'x86-apic-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 86 files, -1569/+1555)
Pull x86 APIC updates from Thomas Gleixner: "Rework of APIC enumeration and topology evaluation. The current implementation has a couple of shortcomings: - It fails to handle hybrid systems correctly. - The APIC registration code which handles CPU number assignments is in the middle of the APIC code and detached from the topology evaluation. - The various mechanisms which enumerate APICs, ACPI, MPPARSE and guest specific ones, tweak global variables as they see fit or in case of XENPV just hack around the generic mechanisms completely. - The CPUID topology evaluation code is sprinkled all over the vendor code and reevaluates global variables on every hotplug operation. - There is no way to analyze topology on the boot CPU before bringing up the APs. This causes problems for infrastructure like PERF which needs to size certain aspects upfront or could be simplified if that would be possible. - The APIC admission and CPU number association logic is incomprehensible and overly complex and needs to be kept around after boot instead of completing this right after the APIC enumeration. This update addresses these shortcomings with the following changes: - Rework the CPUID evaluation code so it is common for all vendors and provides information about the APIC ID segments in a uniform way independent of the number of segments (Thread, Core, Module, ..., Die, Package) so that this information can be computed instead of rewriting global variables of dubious value over and over. - A few cleanups and simplifications of the APIC, IO/APIC and related interfaces to prepare for the topology evaluation changes. - Separation of the parser stages so the early evaluation which tries to find the APIC address can be separately overridden from the late evaluation which enumerates and registers the local APIC as further preparation for sanitizing the topology evaluation. - A new registration and admission logic which - encapsulates the inner workings so that parsers and guest logic can no longer fiddle in it - uses the APIC ID segments to build topology bitmaps at registration time - provides a sane admission logic - allows detecting the crash kernel case, where CPU0 does not run on the real BSP, automatically. This is required to prevent sending INIT/SIPI sequences to the real BSP which would reset the whole machine. This was so far handled by a tedious command line parameter, which does not even work in nested crash scenarios. - Associates CPU number after the enumeration completed and prevents the late registration of APICs, which was somehow tolerated before. - Converting all parsers and guest enumeration mechanisms over to the new interfaces. This allows getting rid of all global variable tweaking from the parsers and enumeration mechanisms and sanitizes the XEN[PV] handling so it can use CPUID evaluation for the first time. - Mopping up existing sins by taking the information from the APIC ID segment bitmaps. This evaluates hybrid systems correctly on the boot CPU and allows for cleanups and fixes in the related drivers, e.g. PERF.
The series has been extensively tested and the minimal late fallout due to a broken ACPI/MADT table has been addressed by tightening the admission logic further" * tag 'x86-apic-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (76 commits) x86/topology: Ignore non-present APIC IDs in a present package x86/apic: Build the x86 topology enumeration functions on UP APIC builds too smp: Provide 'setup_max_cpus' definition on UP too smp: Avoid 'setup_max_cpus' namespace collision/shadowing x86/bugs: Use fixed addressing for VERW operand x86/cpu/topology: Get rid of cpuinfo::x86_max_cores x86/cpu/topology: Provide __num_[cores|threads]_per_package x86/cpu/topology: Rename topology_max_die_per_package() x86/cpu/topology: Rename smp_num_siblings x86/cpu/topology: Retrieve cores per package from topology bitmaps x86/cpu/topology: Use topology logical mapping mechanism x86/cpu/topology: Provide logical pkg/die mapping x86/cpu/topology: Simplify cpu_mark_primary_thread() x86/cpu/topology: Mop up primary thread mask handling x86/cpu/topology: Use topology bitmaps for sizing x86/cpu/topology: Let XEN/PV use topology from CPUID/MADT x86/xen/smp_pv: Count number of vCPUs early x86/cpu/topology: Assign hotpluggable CPUIDs during init x86/cpu/topology: Reject unknown APIC IDs on ACPI hotplug x86/topology: Add a mechanism to track topology via APIC IDs ...
2024-03-11  Merge branch 'bpf-introduce-bpf-arena'  (Andrii Nakryiko; 37 files, -40/+2028)
Alexei Starovoitov says: ==================== bpf: Introduce BPF arena. From: Alexei Starovoitov <[email protected]> v2->v3: - contains bpf bits only, but cc-ing past audience for continuity - since prerequisite patches landed, this series focuses on the main functionality of bpf_arena. - adopted Andrii's approach to support arena in libbpf. - simplified LLVM support. Instead of two instructions it's now only one. - switched to cond_break (instead of open coded iters) in selftests - implemented several follow-ups that will be sent after this set . remember first IP and bpf insn that faulted in arena. report to user space via bpftool . copy paste and tweak glob_match() aka mini-regex as a selftests/bpf - see patch 1 for detailed description of bpf_arena v1->v2: - Improved commit log with reasons for using vmap_pages_range() in arena. Thanks to Johannes - Added support for __arena global variables in bpf programs - Fixed race conditions spotted by Barret - Fixed wrap32 issue spotted by Barret - Fixed bpf_map_mmap_sz() the way Andrii suggested The work on bpf_arena was inspired by Barret's work: https://github.com/google/ghost-userspace/blob/main/lib/queue.bpf.h that implements queues, lists and AVL trees completely as bpf programs using a giant bpf array map and integer indices instead of pointers. bpf_arena is a sparse array that allows using normal C pointers to build such data structures. The last few patches implement a page_frag allocator, a linked list and a hash table as bpf programs. v1: bpf programs have multiple options to communicate with user space: - Various ring buffers (perf, ftrace, bpf): The data is streamed unidirectionally from bpf to user space. - Hash map: The bpf program populates elements, and user space consumes them via bpf syscall. - mmap()-ed array map: Libbpf creates an array map that is directly accessed by the bpf program and mmap-ed to user space. It's the fastest way. Its disadvantage is that memory for the whole array is reserved at the start. ==================== Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Andrii Nakryiko <[email protected]>
2024-03-11  selftests/bpf: Add bpf_arena_htab test.  (Alexei Starovoitov; 6 files, -0/+243)
bpf_arena_htab.h - hash table implemented as bpf program Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  selftests/bpf: Add bpf_arena_list test.  (Alexei Starovoitov; 4 files, -0/+314)
bpf_arena_alloc.h - implements page_frag allocator as a bpf program. bpf_arena_list.h - doubly linked link list as a bpf program. Compiled as a bpf program and as native C code. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  selftests/bpf: Add unit tests for bpf_arena_alloc/free_pages  (Alexei Starovoitov; 6 files, -2/+227)
Add unit tests for bpf_arena_alloc/free_pages() functionality and bpf_arena_common.h with a set of common helpers and macros that is used in this test and the following patches. Also modify test_loader that didn't support running bpf_prog_type_syscall programs. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  bpf: Add helper macro bpf_addr_space_cast()  (Alexei Starovoitov; 1 file, -0/+43)
Introduce helper macro bpf_addr_space_cast() that emits: rX = rX instruction with off = BPF_ADDR_SPACE_CAST and encodes dest and src address_space-s into imm32. It's useful with older LLVM that doesn't emit this insn automatically. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  libbpf: Recognize __arena global variables.  (Andrii Nakryiko; 3 files, -13/+120)
LLVM automatically places __arena variables into ".arena.1" ELF section. In order to use such global variables, the bpf program must include a definition of the arena map in the ".maps" section, like: struct { __uint(type, BPF_MAP_TYPE_ARENA); __uint(map_flags, BPF_F_MMAPABLE); __uint(max_entries, 1000); /* number of pages */ __ulong(map_extra, 2ull << 44); /* start of mmap() region */ } arena SEC(".maps"); libbpf recognizes both uses of arena and creates a single `struct bpf_map *` instance in libbpf APIs. ".arena.1" ELF section data is used as initial data image, which is exposed through skeleton and bpf_map__initial_value() to the user, if they need to tune it before the load phase. During load phase, this initial image is copied over into mmap()'ed region corresponding to arena, and discarded. A few small checks here and there had to be added to make sure this approach works with bpf_map__initial_value(), mostly due to hard-coded assumption that map->mmaped is set up with mmap() syscall and should be munmap()'ed. For arena, .arena.1 can be (much) smaller than maximum arena size, so this smaller data size has to be tracked separately. Given it is enforced that there is only one arena for the entire bpf_object instance, we just keep it in a separate field. This can be generalized if necessary later. All global variables from ".arena.1" section are accessible from user space via skel->arena->name_of_var. For bss/data/rodata the skeleton/libbpf perform the following sequence: 1. addr = mmap(MAP_ANONYMOUS) 2. user space optionally modifies global vars 3. map_fd = bpf_create_map() 4. bpf_update_map_elem(map_fd, addr) // to store values into the kernel 5. mmap(addr, MAP_FIXED, map_fd) after step 5, user space sees the values it wrote at step 2 at the same addresses. The arena doesn't support update_map_elem, hence skeleton/libbpf do: 1. addr = malloc(sizeof SEC ".arena.1") 2. user space optionally modifies global vars 3. map_fd = bpf_create_map(MAP_TYPE_ARENA) 4. real_addr = mmap(map->map_extra, MAP_SHARED | MAP_FIXED, map_fd) 5. memcpy(real_addr, addr) // this will fault-in and allocate pages In the end, the look and feel of global data vs __arena global data is the same from the bpf prog pov. Another complication is: struct { __uint(type, BPF_MAP_TYPE_ARENA); } arena SEC(".maps"); int __arena foo; int bar; ptr1 = &foo; // relocation against ".arena.1" section ptr2 = &arena; // relocation against ".maps" section ptr3 = &bar; // relocation against ".bss" section For the kernel, ptr1 and ptr2 point to the same arena's map_fd while ptr3 points to a different global array's map_fd. For the verifier: ptr1->type == unknown_scalar ptr2->type == const_ptr_to_map ptr3->type == ptr_to_map_value After verification, from JIT pov all 3 ptr-s are normal ld_imm64 insns. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  bpftool: Recognize arena map type  (Alexei Starovoitov; 2 files, -2/+2)
Teach bpftool to recognize arena map type. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  libbpf: Add support for bpf_arena.  (Alexei Starovoitov; 2 files, -8/+46)
mmap() bpf_arena right after creation, since the kernel needs to remember the address returned from mmap. This is user_vm_start. LLVM will generate bpf_arena_cast_user() instructions where necessary and JIT will add upper 32-bit of user_vm_start to such pointers. Fix up bpf_map_mmap_sz() to compute mmap size as map->value_size * map->max_entries for arrays and PAGE_SIZE * map->max_entries for arena. Don't set BTF at arena creation time, since it doesn't support it. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
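A simplified illustration of the sizing rule described above; libbpf's real helper operates on a struct bpf_map, and BPF_MAP_TYPE_ARENA is assumed to be defined by the installed headers:

	#include <unistd.h>
	#include <linux/bpf.h>

	/* simplified stand-in for libbpf's bpf_map_mmap_sz() */
	static size_t map_mmap_sz(unsigned int type, unsigned int value_size,
				  unsigned int max_entries)
	{
		const size_t page_sz = sysconf(_SC_PAGE_SIZE);
		size_t sz;

		if (type == BPF_MAP_TYPE_ARENA)
			sz = page_sz * max_entries;            /* arena: pages */
		else
			sz = (size_t)value_size * max_entries; /* array: elements */

		return (sz + page_sz - 1) & ~(page_sz - 1);    /* page-align */
	}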
2024-03-11  libbpf: Add __arg_arena to bpf_helpers.h  (Alexei Starovoitov; 1 file, -0/+1)
Add __arg_arena to bpf_helpers.h Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Kumar Kartikeya Dwivedi <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  bpf: Recognize btf_decl_tag("arg:arena") as PTR_TO_ARENA.  (Alexei Starovoitov; 3 files, -4/+31)
In global bpf functions recognize btf_decl_tag("arg:arena") as PTR_TO_ARENA. Note, when the verifier sees: __weak void foo(struct bar *p) it recognizes 'p' as PTR_TO_MEM and 'struct bar' has to be a struct with scalars. Hence the only way to use arena pointers in global functions is to tag them with "arg:arena". Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  bpf: Recognize addr_space_cast instruction in the verifier.  (Alexei Starovoitov; 5 files, -9/+109)
rY = addr_space_cast(rX, 0, 1) tells the verifier that rY->type = PTR_TO_ARENA. Any further operations on PTR_TO_ARENA register have to be in 32-bit domain. The verifier will mark load/store through PTR_TO_ARENA with PROBE_MEM32. JIT will generate them as kern_vm_start + 32bit_addr memory accesses. rY = addr_space_cast(rX, 1, 0) tells the verifier that rY->type = unknown scalar. If arena->map_flags has BPF_F_NO_USER_CONV set then convert cast_user to mov32 as well. Otherwise JIT will convert it to: rY = (u32)rX; if (rY) rY |= arena->user_vm_start & ~(u64)~0U; Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  bpf: Add x86-64 JIT support for bpf_addr_space_cast instruction.  (Alexei Starovoitov; 3 files, -1/+47)
LLVM generates bpf_addr_space_cast instruction while translating pointers between native (zero) address space and __attribute__((address_space(N))). The addr_space=1 is reserved as bpf_arena address space. rY = addr_space_cast(rX, 0, 1) is processed by the verifier and converted to normal 32-bit move: wX = wY rY = addr_space_cast(rX, 1, 0) has to be converted by JIT: aux_reg = upper_32_bits of arena->user_vm_start aux_reg <<= 32 wX = wY // clear upper 32 bits of dst register if (wX) // if not zero add upper bits of user_vm_start wX |= aux_reg JIT can do it more efficiently: mov dst_reg32, src_reg32 // 32-bit move shl dst_reg, 32 or dst_reg, user_vm_start rol dst_reg, 32 xor r11, r11 test dst_reg32, dst_reg32 // check if lower 32-bit are zero cmove r11, dst_reg // if so, set dst_reg to zero // Intel swapped src/dst register encoding in CMOVcc Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Eduard Zingerman <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  bpf: Add x86-64 JIT support for PROBE_MEM32 pseudo instructions.  (Alexei Starovoitov; 3 files, -1/+194)
Add support for [LDX | STX | ST], PROBE_MEM32, [B | H | W | DW] instructions. They are similar to PROBE_MEM instructions with the following differences: - PROBE_MEM has to check that the address is in the kernel range with src_reg + insn->off >= TASK_SIZE_MAX + PAGE_SIZE check - PROBE_MEM doesn't support store - PROBE_MEM32 relies on the verifier to clear upper 32-bit in the register - PROBE_MEM32 adds 64-bit kern_vm_start address (which is stored in %r12 in the prologue) Due to bpf_arena constructions such %r12 + %reg + off16 access is guaranteed to be within arena virtual range, so no address check at run-time. - PROBE_MEM32 allows STX and ST. If they fault the store is a nop. When LDX faults the destination register is zeroed. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  bpf: Disasm support for addr_space_cast instruction.  (Alexei Starovoitov; 3 files, -0/+18)
LLVM generates rX = addr_space_cast(rY, dst_addr_space, src_addr_space) instruction when pointers in non-zero address space are used by the bpf program. Recognize this insn in uapi and in bpf disassembler. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Acked-by: Kumar Kartikeya Dwivedi <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2024-03-11  bpf: Introduce bpf_arena.  (Alexei Starovoitov; 9 files, -2/+635)
Introduce bpf_arena, which is a sparse shared memory region between the bpf program and user space. Use cases: 1. User space mmap-s bpf_arena and uses it as a traditional mmap-ed anonymous region, like memcached or any key/value storage. The bpf program implements an in-kernel accelerator. XDP prog can search for a key in bpf_arena and return a value without going to user space. 2. The bpf program builds arbitrary data structures in bpf_arena (hash tables, rb-trees, sparse arrays), while user space consumes them. 3. bpf_arena is a "heap" of memory from the bpf program's point of view. The user space may mmap it, but the bpf program will not convert pointers to user base at run-time to improve bpf program speed. Initially, the kernel vm_area and user vma are not populated. User space can fault in pages within the range. While servicing a page fault, bpf_arena logic will insert a new page into the kernel and user vmas. The bpf program can allocate pages from that region via bpf_arena_alloc_pages(). This kernel function will insert pages into the kernel vm_area. The subsequent fault-in from user space will populate that page into the user vma. The BPF_F_SEGV_ON_FAULT flag at arena creation time can be used to prevent fault-in from user space. In such a case, if a page is not allocated by the bpf program and not present in the kernel vm_area, the user process will segfault. This is useful for use cases 2 and 3 above. bpf_arena_alloc_pages() is similar to user space mmap(). It allocates pages either at a specific address within the arena or allocates a range with the maple tree. bpf_arena_free_pages() is analogous to munmap(), which frees pages and removes the range from the kernel vm_area and from user process vmas. bpf_arena can be used as a bpf program "heap" of up to 4GB. The speed of the bpf program is more important than ease of sharing with user space. This is use case 3. In such a case, the BPF_F_NO_USER_CONV flag is recommended. It will tell the verifier to treat the rX = bpf_arena_cast_user(rY) instruction as a 32-bit move wX = wY, which will improve bpf prog performance. Otherwise, bpf_arena_cast_user is translated by JIT to conditionally add the upper 32 bits of user vm_start (if the pointer is not NULL) to arena pointers before they are stored into memory. This way, user space sees them as valid 64-bit pointers. Diff https://github.com/llvm/llvm-project/pull/84410 enables the LLVM BPF backend to generate the bpf_addr_space_cast() instruction to cast pointers between address_space(1), which is reserved for bpf_arena pointers, and the default address space zero. All arena pointers in a bpf program written in C language are tagged as __attribute__((address_space(1))). Hence, clang provides helpful diagnostics when pointers cross address space. Libbpf and the kernel support only address_space == 1. All other address space identifiers are reserved. rX = bpf_addr_space_cast(rY, /* dst_as */ 1, /* src_as */ 0) tells the verifier that rX->type = PTR_TO_ARENA. Any further operations on PTR_TO_ARENA register have to be in the 32-bit domain. The verifier will mark load/store through PTR_TO_ARENA with PROBE_MEM32. JIT will generate them as kern_vm_start + 32bit_addr memory accesses. The behavior is similar to copy_from_kernel_nofault() except that no address checks are necessary. The address is guaranteed to be in the 4GB range. If the page is not present, the destination register is zeroed on read, and the operation is ignored on write.
rX = bpf_addr_space_cast(rY, 0, 1) tells the verifier that rX->type = unknown scalar. If arena->map_flags has BPF_F_NO_USER_CONV set, then the verifier converts such cast instructions to mov32. Otherwise, JIT will emit native code equivalent to: rX = (u32)rY; if (rY) rX |= clear_lo32_bits(arena->user_vm_start); /* replace hi32 bits in rX */ After such conversion, the pointer becomes a valid user pointer within bpf_arena range. The user process can access data structures created in bpf_arena without any additional computations. For example, a linked list built by a bpf program can be walked natively by user space. Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]> Reviewed-by: Barret Rhoden <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
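A hedged sketch of how a bpf program might use such an arena, pieced together from the description above and the selftests in this series; the kfunc signatures, the __arena attribute macro and NUMA_NO_NODE are assumptions here, and the usual includes (vmlinux.h, bpf_helpers.h, the arena helpers) are omitted:

	/* arena map shared between the bpf program and user space */
	struct {
		__uint(type, BPF_MAP_TYPE_ARENA);
		__uint(map_flags, BPF_F_MMAPABLE);
		__uint(max_entries, 10);	/* number of pages */
	} arena SEC(".maps");

	SEC("syscall")
	int use_arena(void *ctx)
	{
		/* allocate one page anywhere in the arena (addr == NULL) */
		void __arena *page = bpf_arena_alloc_pages(&arena, NULL, 1,
							   NUMA_NO_NODE, 0);
		if (!page)
			return 1;

		/* plain C pointer dereference into the shared region */
		*(__u64 __arena *)page = 1;

		bpf_arena_free_pages(&arena, page, 1);
		return 0;
	}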
2024-03-12  tpm: tis_i2c: Add compatible string nuvoton,npct75x  (Lukas Wunner; 1 file, -0/+2)
Add "nuvoton,npct75x" as well as the fallback compatible string "tcg,tpm-tis-i2c" to the TPM TIS I²C driver. They're used by: arch/arm/boot/dts/aspeed/aspeed-bmc-ibm-bonnell.dts arch/arm/boot/dts/aspeed/aspeed-bmc-ibm-everest.dts And by all accounts, NPCT75x is supported by the driver: https://lore.kernel.org/all/[email protected]/ https://lore.kernel.org/all/[email protected]/ Signed-off-by: Lukas Wunner <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]>
2024-03-12  tpm_tis: Add compatible string atmel,at97sc3204  (Lukas Wunner; 1 file, -0/+1)
Commit 420d439849ca ("tpm_tis: Allow tpm_tis to be bound using DT") added the fallback compatible "tcg,tpm-tis-mmio" to the TPM TIS driver, but not the chip-specific "atmel,at97sc3204". However it did document it as a valid compatible string. Add it to tis_of_platform_match[] for consistency. Signed-off-by: Lukas Wunner <[email protected]> Cc: Jason Gunthorpe <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]>
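An illustrative, trimmed shape of the change; the real match table may contain more entries, only the two compatibles mentioned above are shown:

	static const struct of_device_id tis_of_platform_match[] = {
		{ .compatible = "atmel,at97sc3204" },	/* chip-specific, added */
		{ .compatible = "tcg,tpm-tis-mmio" },	/* generic fallback */
		{ /* sentinel */ },
	};
	MODULE_DEVICE_TABLE(of, tis_of_platform_match);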
2024-03-12  tpm_tis_spi: Add compatible string atmel,attpm20p  (Lukas Wunner; 1 file, -0/+1)
Commit 4f2a348aa365 ("arm64: dts: imx8mm-venice-gw73xx: add TPM device") added a devicetree node for the Trusted Platform Module on certain Gateworks boards. The commit only used the generic "tcg,tpm_tis-spi" compatible string, but public documentation shows that the chip is an ATTPM20P from Atmel (nowadays Microchip): https://trac.gateworks.com/wiki/tpm Add the chip to the supported compatible strings of the TPM TIS SPI driver. For reference, a datasheet is available at: https://ww1.microchip.com/downloads/en/DeviceDoc/ATTPM20P-Trusted-Platform-Module-TPM-2.0-SPI-Interface-Summary-Data-Sheet-DS40002082A.pdf Signed-off-by: Lukas Wunner <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Cc: Tim Harvey <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]>
2024-03-12  dt-bindings: tpm: Add compatible string atmel,attpm20p  (Lukas Wunner; 1 file, -0/+1)
Commit 4f2a348aa365 ("arm64: dts: imx8mm-venice-gw73xx: add TPM device") added a devicetree node for the Trusted Platform Module on certain Gateworks boards. The commit only used the generic "tcg,tpm_tis-spi" compatible string, but public documentation shows that the chip is an ATTPM20P from Atmel (nowadays Microchip): https://trac.gateworks.com/wiki/tpm Add the chip to the supported compatible strings of the TPM TIS SPI schema. For reference, a datasheet is available at: https://ww1.microchip.com/downloads/en/DeviceDoc/ATTPM20P-Trusted-Platform-Module-TPM-2.0-SPI-Interface-Summary-Data-Sheet-DS40002082A.pdf Signed-off-by: Lukas Wunner <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Cc: Tim Harvey <[email protected]> Acked-by: Rob Herring <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]>
2024-03-12  tpm,tpm_tis: Avoid warning splat at shutdown  (Lino Sanfilippo; 1 file, -2/+1)
If interrupts are not activated the work struct 'free_irq_work' is not initialized. This results in a warning splat at module shutdown. Fix this by always initializing the work regardless of whether interrupts are activated or not. cc: [email protected] Fixes: 481c2d14627d ("tpm,tpm_tis: Disable interrupts after 1000 unhandled IRQs") Reported-by: Jarkko Sakkinen <[email protected]> Closes: https://lore.kernel.org/all/[email protected]/ Signed-off-by: Lino Sanfilippo <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]>
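A sketch of the fix's shape, assuming illustrative names for the work item, its handler and the interrupt setup path (not the exact driver code):

	/* always initialize, even when running in polling mode, so the
	 * shutdown path can safely cancel/flush the work
	 */
	INIT_WORK(&priv->free_irq_work, tpm_tis_free_irq_func);

	if (irq_enabled)
		/* interrupt setup may still fail and fall back to polling */
		tpm_tis_probe_irq(chip, intmask);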
2024-03-12  tpm/tpm_ftpm_tee: fix all kernel-doc warnings  (Randy Dunlap; 1 file, -3/+3)
Change @pdev to @dev in 2 places to match the function parameters. Correct one function name in kernel-doc comment to match the function implementation. This prevents these warnings: tpm_ftpm_tee.c:217: warning: Function parameter or struct member 'dev' not described in 'ftpm_tee_probe' tpm_ftpm_tee.c:217: warning: Excess function parameter 'pdev' description in 'ftpm_tee_probe' tpm_ftpm_tee.c:313: warning: Function parameter or struct member 'dev' not described in 'ftpm_tee_remove' tpm_ftpm_tee.c:313: warning: Excess function parameter 'pdev' description in 'ftpm_tee_remove' tpm_ftpm_tee.c:348: warning: expecting prototype for ftpm_tee_shutdown(). Prototype was for ftpm_plat_tee_shutdown() instead Signed-off-by: Randy Dunlap <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]>
2024-03-11  ravb: Correct buffer size to map for R-Car Rx  (Niklas Söderlund; 1 file, -1/+1)
When creating a helper to allocate and align an skb, one location where the skb data size was updated was missed. This can lead to a warning being printed when the memory is unmapped, as it now always unmaps the maximum frame size instead of the size after it has been aligned. This was correctly done for RZ/G2L but missed for R-Car. Fixes: cfbad64706c1 ("ravb: Create helper to allocate skb and align it") Signed-off-by: Niklas Söderlund <[email protected]> Reviewed-by: Sergey Shtylyov <[email protected]> Reviewed-by: Simon Horman <[email protected]> Reviewed-by: Geert Uytterhoeven <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  net: amt: Remove generic .ndo_get_stats64  (Breno Leitao; 1 file, -1/+0)
Commit 3e2f544dd8a33 ("net: get stats64 if device if driver is configured") moved the callback to dev_get_tstats64() to net core, so, unless the driver is doing some custom stats collection, it does not need to set .ndo_get_stats64. Since this driver now relies on NETDEV_PCPU_STAT_TSTATS, it doesn't need to set the dev_get_tstats64() generic .ndo_get_stats64 function pointer. Signed-off-by: Breno Leitao <[email protected]> Reviewed-by: Taehee Yoo <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  net: amt: Move stats allocation to core  (Breno Leitao; 1 file, -7/+2)
With commit 34d21de99cea9 ("net: Move {l,t,d}stats allocation to core and convert veth & vrf"), stats allocation could be done on net core instead of this driver. With this new approach, the driver doesn't have to bother with error handling (allocation failure checking, making sure free happens in the right spot, etc). This is core responsibility now. Move amt driver to leverage the core allocation. Signed-off-by: Breno Leitao <[email protected]> Reviewed-by: Taehee Yoo <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
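A sketch of what leveraging the core allocation looks like in a driver's setup path; the function name is illustrative and the snippet is trimmed:

	static void amt_link_setup(struct net_device *dev)
	{
		/* ask the core to allocate and free per-CPU tstats */
		dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS;
		/* no netdev_alloc_pcpu_stats()/free_percpu() in the driver,
		 * and no error handling for a failed stats allocation
		 */
	}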
2024-03-11  netlink: specs: support generating code for genl socket priv  (Jakub Kicinski; 5 files, -0/+69)
The family struct is auto-generated for new families, support use of the sock_priv_* mechanism added in commit a731132424ad ("genetlink: introduce per-sock family private storage"). For example if the family wants to use struct sk_buff as its private struct (unrealistic but just for illustration), it would add to its spec: kernel-family: headers: [ "linux/skbuff.h" ] sock-priv: struct sk_buff ynl-gen-c will declare the appropriate priv size and hook in function prototypes to be implemented by the family. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  tools: ynl: remove trailing semicolon  (Jakub Kicinski; 1 file, -1/+1)
Commit e8a6c515ff5f ("tools: ynl: allow user to pass enum string instead of scalar value") added a semicolon at the end of a line. Reviewed-by: Jiri Pirko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  net: ipv6: exthdrs: get rid of ipv6_skb_net()  (Justin Iurman; 1 file, -9/+1)
Get rid of ipv6_skb_net() which is only used in ipv6_hop_ioam(). Signed-off-by: Justin Iurman <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  Merge branch 'selftests-mptcp-various-improvements'  (Jakub Kicinski; 8 files, -286/+312)
Matthieu Baerts says: ==================== selftests: mptcp: various improvements In this series from Geliang, there are various improvements in MPTCP selftests: sharing code, doing actions the same way, colours, etc. Patch 1 prints all error messages to stdout: what was done in almost all other MPTCP selftests. This can now easily be changed later if needed. Patch 2 makes sure the test counter is continuous in mptcp_connect.sh. Patch 3 aligns the messages that are printed in mptcp_connect.sh. Patch 4 prints each test result in mptcp_sockopt.sh, similar to what we have in the TAP output. Patch 5 moves the different test counters to a single one in mptcp_lib.sh, to unify how it is used. Patch 6 moves how titles are printed from mptcp_join.sh to the lib, to be reused in patch 7 by all other MPTCP selftests. Patch 8 uses the '+=' operator to append strings instead of repeating twice the variable name: that's shorter, easier to read. Patch 9 adds colours for the [ OK ], [SKIP], [FAIL] and INFO keywords in all MPTCP selftests. Patches 10 to 12 are some preparation patches for patch 13: patch 10 modifies how some 'test_fail' helpers are called, patch 11 moves a helper from userspace_pm.sh to the lib, and patch 12 changes where titles are printed in userspace_pm.sh. Patch 13 moves some duplicated helpers from mptcp_join.sh and userspace_pm.sh to mptcp_lib.sh. Patch 14 moves duplicated read-only variables from mptcp_join.sh and userspace_pm.sh to mptcp_lib.sh as well. Patch 15 uses explicit variables instead of hard-coded numbers for the exit status. ==================== Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-0-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: use KSFT_SKIP/KSFT_PASS/KSFT_FAIL  (Geliang Tang; 6 files, -26/+25)
This patch uses the public var KSFT_SKIP in mptcp_lib.sh instead of ksft_skip, and drops 'ksft_skip=4' in mptcp_join.sh. Use KSFT_PASS and KSFT_FAIL macros instead of 0 and 1 after 'exit ' and 'ret=' in all scripts: exit 0 -> exit ${KSFT_PASS} exit 1 -> exit ${KSFT_FAIL} ret=0 -> ret=${KSFT_PASS} ret=1 -> ret=${KSFT_FAIL} Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-15-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: declare event macros in mptcp_lib  (Geliang Tang; 3 files, -23/+29)
MPTCP event macros (SUB_ESTABLISHED, LISTENER_CREATED, LISTENER_CLOSED), and the protocol family macros (AF_INET, AF_INET6) are defined in both mptcp_join.sh and userspace_pm.sh. In order not to duplicate code, this patch declares them all in mptcp_lib.sh with MPTCP_LIB_ prefixes. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-14-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: add mptcp_lib_verify_listener_events  (Geliang Tang; 3 files, -38/+30)
To avoid duplicated code in different MPTCP selftests, we can add and use helpers defined in mptcp_lib.sh. The helper verify_listener_events() is defined both in mptcp_join.sh and userspace_pm.sh, export it into mptcp_lib.sh and rename it with mptcp_lib_ prefix. Use this new helper in both scripts. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-13-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: print_test out of verify_listener_events  (Geliang Tang; 1 file, -6/+2)
verify_listener_events() helper will be exported into mptcp_lib.sh as a public function, but print_test() is invoked in it, which is a private function in userspace_pm.sh only. So this patch moves print_test() out of verify_listener_events(). Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-12-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: extract mptcp_lib_check_expected  (Geliang Tang; 2 files, -31/+32)
Extract the main part of check_expected() in userspace_pm.sh to a new function mptcp_lib_check_expected() in mptcp_lib.sh. It will be used in both mptcp_join.sh and userspace_pm.sh. check_expected_one() is moved into mptcp_lib.sh too as mptcp_lib_check_expected_one(). Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-11-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: call test_fail without argument  (Geliang Tang; 2 files, -6/+13)
This patch modifies test_fail() in userspace_pm.sh to call mptcp_lib_pr_fail() only if there are arguments (if [ ${#} -gt 0 ]), and adds the argument "unexpected type: ${type}" when calling test_fail() from test_remove(). Then mptcp_lib_pr_fail() can be used in check_expected_one() instead of test_fail(). The same is done in mptcp_join.sh: fail_test() is called without argument, and this helper is adapted not to call print_fail() in this case. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-10-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: print test results with colors  (Geliang Tang; 8 files, -87/+90)
To unify the output formats of all test scripts, this patch adds four more helpers: mptcp_lib_pr_ok() mptcp_lib_pr_skip() mptcp_lib_pr_fail() mptcp_lib_pr_info() to print out [ OK ], [SKIP], [FAIL] and 'INFO: ' with colors. Use them in all scripts to print the "ok/skip/fail/info' using the same 'format'. Having colors helps to quickly identify issues when looking at a long list of output logs and results. Note that now all print the same keywords, which was not the case before, but it is good to uniform that. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-9-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: use += operator to append strings  (Geliang Tang; 2 files, -40/+43)
This patch uses addition assignment operator (+=) to append strings instead of duplicating the variable name in mptcp_connect.sh and mptcp_join.sh. This can make the statements shorter. Note: in mptcp_connect.sh, add a local variable extra in do_transfer to save the various extra warning logs, using += to append it. And add a new variable tc_info to save various tc info, also using += to append it. This can make the code more readable and prepare for the next commit. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-8-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: print test results with counters  (Geliang Tang; 6 files, -14/+16)
This patch adds a new helper mptcp_lib_print_title(), a wrapper of mptcp_lib_inc_test_counter() and mptcp_lib_pr_title_counter(), to print out test counter in each test result and increase the counter. Use this helper to print out test counters for every tests in diag.sh, mptcp_connect.sh, mptcp_sockopt.sh, pm_netlink.sh, simult_flows.sh, and userspace_pm.sh. diag.sh: 01 no msk on netns creation [ ok ] 02 listen match for dport 10000 [ ok ] 03 listen match for sport 10000 [ ok ] 04 listen match for saddr and sport [ ok ] 05 all listen sockets [ ok ] mptcp_connect.sh: 01 New MPTCP socket can be blocked via sysctl [ OK ] 02 Validating network environment with pings [ OK ] INFO: Using loss of 0.85% delay 31 ms reorder .. with delay 7ms on ns3eth4 03 ns1 MPTCP -> ns1 (10.0.1.1:10000 ) MPTCP (duration 69ms) [ OK ] 04 ns1 MPTCP -> ns1 (10.0.1.1:10001 ) TCP (duration 20ms) [ OK ] 05 ns1 TCP -> ns1 (10.0.1.1:10002 ) MPTCP (duration 16ms) [ OK ] mptcp_sockopt.sh: 01 Transfer v4 [ OK ] 02 Mark v4 [ OK ] 03 Transfer v6 [ OK ] 04 Mark v6 [ OK ] 05 SOL_MPTCP sockopt v4 [ OK ] pm_netlink.sh: 01 defaults addr list [ OK ] 02 simple add/get addr [ OK ] 03 dump addrs [ OK ] 04 simple del addr [ OK ] 05 dump addrs after del [ OK ] simult_flows.sh: 01 balanced bwidth 7391 max 8456 [ OK ] 02 balanced bwidth - reverse direction 7403 max 8456 [ OK ] 03 balanced bwidth with unbalanced delay 7429 max 8456 [ OK ] 04 balanced bwidth with unbalanced delay - reverse ... 7485 max 8456 [ OK ] 05 unbalanced bwidth 7549 max 8456 [ OK ] userspace_pm.sh: 01 Created network namespaces ns1, ns2 [ OK ] INFO: Make connections 02 Established IPv4 MPTCP Connection ns2 => ns1 [ OK ] 03 Established IPv6 MPTCP Connection ns2 => ns1 [ OK ] INFO: Announce tests 04 ADD_ADDR 10.0.2.2 (ns2) => ns1, invalid token [ OK ] 05 ADD_ADDR id:67 10.0.2.2 (ns2) => ns1, reuse port [ OK ] Having test counters helps to quickly identify issues when looking at a long list of output logs and results. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-7-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: add print_title in mptcp_lib  (Geliang Tang; 2 files, -10/+13)
This patch adds a new variable MPTCP_LIB_TEST_FORMAT as the test title printing format. Also add a helper mptcp_lib_print_title() to use this format to print the test title with test counters. They are used in mptcp_join.sh first. Each MPTCP selftest has subtests, and it helps to give them a number to quickly identify them. This can be managed by mptcp_lib.sh, reusing what has been done here. The following commit will use these new helpers in the other tests. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-6-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: export TEST_COUNTER variable  (Geliang Tang; 5 files, -16/+14)
The variable TEST_COUNT is used in mptcp_connect.sh and mptcp_join.sh as a test counter and is initialized to 0, while the variable test_cnt is used in diag.sh and simult_flows.sh and is initialized to 1. To maintain consistency, this patch renames them all as MPTCP_LIB_TEST_COUNTER, initializes it to 1, and exports it into mptcp_lib.sh. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-5-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: sockopt: print every test result  (Geliang Tang; 1 file, -17/+25)
Only total test results are printed out in mptcp_sockopt.sh: PASS: all packets had packet mark set PASS: SOL_MPTCP getsockopt has expected information PASS: TCP_INQ cmsg/ioctl -t tcp PASS: TCP_INQ cmsg/ioctl -6 -t tcp PASS: TCP_INQ cmsg/ioctl -r tcp PASS: TCP_INQ cmsg/ioctl -6 -r tcp PASS: TCP_INQ cmsg/ioctl -r tcp -t tcp They do not match the test results: ok 1 - mptcp_sockopt: mark ipv4 ok 2 - mptcp_sockopt: transfer ipv4 ok 3 - mptcp_sockopt: mark ipv6 ok 4 - mptcp_sockopt: transfer ipv6 ok 5 - mptcp_sockopt: sockopt v4 ok 6 - mptcp_sockopt: sockopt v6 ok 7 - mptcp_sockopt: TCP_INQ: -t tcp ok 8 - mptcp_sockopt: TCP_INQ: -6 -t tcp ok 9 - mptcp_sockopt: TCP_INQ: -r tcp ok 10 - mptcp_sockopt: TCP_INQ: -6 -r tcp ok 11 - mptcp_sockopt: TCP_INQ: -r tcp -t tcp 'mptcp_sockopt.sh' now displays more detailed results and why (what you had in a former patch from v6, merged here). It no longer displays 'PASS:', because it is duplicated info now that the details are displayed: Transfer v4 [ OK ] Mark v4 [ OK ] Transfer v6 [ OK ] Mark v6 [ OK ] SOL_MPTCP sockopt v4 [ OK ] SOL_MPTCP sockopt v6 [ OK ] TCP_INQ cmsg/ioctl -t tcp [ OK ] TCP_INQ cmsg/ioctl -6 -t tcp [ OK ] TCP_INQ cmsg/ioctl -r tcp [ OK ] TCP_INQ cmsg/ioctl -6 -r tcp [ OK ] TCP_INQ cmsg/ioctl -r tcp -t tcp [ OK ] Also fix the TAP output: ok 1 - mptcp_sockopt: transfer ipv4 ok 2 - mptcp_sockopt: mark ipv4 ok 3 - mptcp_sockopt: transfer ipv6 ok 4 - mptcp_sockopt: mark ipv6 ok 5 - mptcp_sockopt: sockopt v4 ok 6 - mptcp_sockopt: sockopt v6 ok 7 - mptcp_sockopt: TCP_INQ: -t tcp ok 8 - mptcp_sockopt: TCP_INQ: -6 -t tcp ok 9 - mptcp_sockopt: TCP_INQ: -r tcp ok 10 - mptcp_sockopt: TCP_INQ: -6 -r tcp ok 11 - mptcp_sockopt: TCP_INQ: -r tcp -t tcp Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-4-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: connect: fix misaligned output  (Geliang Tang; 1 file, -3/+10)
The first [ OK ] in the output of mptcp_connect.sh misaligns with the others: New MPTCP socket can be blocked via sysctl [ OK ] INFO: validating network environment with pings INFO: Using loss of 0.85% delay 16 ms reorder 95% 70% with delay 4ms on ns1 MPTCP -> ns1 (10.0.1.1:10000 ) MPTCP (duration 184ms) [ OK ] ns1 MPTCP -> ns1 (10.0.1.1:10001 ) TCP (duration 50ms) [ OK ] ns1 TCP -> ns1 (10.0.1.1:10002 ) MPTCP (duration 55ms) [ OK ] This patch aligns them by using 69 chars to display the first two lines, and 50 chars for the other. Since 19 chars are used to display duration time. Also print out a [ OK ] at the end of the 2nd line for consistency. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-3-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: connect: add dedicated port counter  (Geliang Tang; 1 file, -3/+3)
This patch adds a new dedicated counter 'port' instead of TEST_COUNT to increase port numbers in mptcp_connect.sh. This can avoid outputting discontinuous test counters. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-2-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  selftests: mptcp: print all error messages to stdout  (Geliang Tang; 2 files, -10/+11)
Some error messages are printed to stderr while the others are printed to 'stdout'. As part of the unification, this patch drops "1>&2" so that all error messages are printed to 'stdout'. Signed-off-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://lore.kernel.org/r/20240308-upstream-net-next-20240308-selftests-mptcp-unification-v1-1-4f42c347b653@kernel.org Signed-off-by: Jakub Kicinski <[email protected]>
2024-03-11  net: wan: framer/pef2256: Convert to platform remove callback returning void  (Uwe Kleine-König; 1 file, -4/+2)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <[email protected]> Acked-by: Herve Codina <[email protected]> Link: https://lore.kernel.org/r/9684419fd714cc489a3ef36d838d3717bb6aec6d.1709886922.git.u.kleine-koenig@pengutronix.de Signed-off-by: Jakub Kicinski <[email protected]>
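An illustrative sketch of such a conversion (trimmed; the probe function and driver name are assumed rather than copied from the driver):

	static void pef2256_remove(struct platform_device *pdev)
	{
		/* teardown only; there is no longer a return value to get wrong */
	}

	static struct platform_driver pef2256_driver = {
		.driver		= { .name = "pef2256" },
		.probe		= pef2256_probe,
		.remove_new	= pef2256_remove,	/* was .remove, returning int */
	};
	module_platform_driver(pef2256_driver);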
2024-03-11  Merge tag 'timers-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 43 files, -562/+3198)
Pull timer updates from Thomas Gleixner: "A large set of updates and features for timers and timekeeping: - The hierarchical timer pull model When timer wheel timers are armed they are placed into the timer wheel of a CPU which is likely to be busy at the time of expiry. This is done to avoid wakeups on potentially idle CPUs. This is wrong in several aspects: 1) The heuristics to select the target CPU are wrong by definition as the chance to get the prediction right is close to zero. 2) Due to #1 it is possible that timers are accumulated on a single target CPU. 3) The required computation in the enqueue path is just overhead for dubious value especially under the consideration that the vast majority of timer wheel timers are either canceled or rearmed before they expire. The timer pull model avoids the above by removing the target computation on enqueue and queueing timers always on the CPU on which they get armed. This is achieved by having separate wheels for CPU pinned timers and global timers which do not care about where they expire. As long as a CPU is busy it handles both the pinned and the global timers which are queued on the CPU local timer wheels. When a CPU goes idle it evaluates its own timer wheels: - If the first expiring timer is a pinned timer, then the global timers can be ignored as the CPU will wake up before they expire. - If the first expiring timer is a global timer, then the expiry time is propagated into the timer pull hierarchy and the CPU makes sure to wake up for the first pinned timer. The timer pull hierarchy organizes CPUs in groups of eight at the lowest level and at the next levels groups of eight groups up to the point where no further aggregation of groups is required, i.e. the number of levels is log8(NR_CPUS). The magic number of eight has been established by experimentation, but can be adjusted if needed. In each group one busy CPU acts as the migrator. It's only one CPU to avoid lock contention on remote timer wheels. The migrator CPU checks in its own timer wheel handling whether there are other CPUs in the group which have gone idle and have global timers to expire. If there are global timers to expire, the migrator locks the remote CPU timer wheel and handles the expiry. Depending on the group level in the hierarchy this handling can require walking the hierarchy downwards to the CPU level. Special care is taken when the last CPU goes idle. At this point the CPU is the systemwide migrator at the top of the hierarchy and it therefore cannot delegate to the hierarchy. It needs to arm its own timer device to expire either at the first expiring timer in the hierarchy or at the first CPU local timer, whichever expires first. This completely removes the overhead from the enqueue path, which is e.g. for networking a true hotpath and trades it for a slightly more complex idle path. This has been in development for a couple of years and the final series has been extensively tested by various teams from silicon vendors and ran through extensive CI. There have been slight performance improvements observed on network centric workloads and an Intel team confirmed that this allows them to power down a die completely on a multi-die socket for the first time in a mostly idle scenario. There is only one outstanding ~1.5% regression on a specific overloaded netperf test which is currently being investigated, but the rest is either positive or neutral performance wise and positive on the power management side.
- Fixes for the timekeeping interpolation code for cross-timestamps: cross-timestamps are used for PTP to get snapshots from hardware timers and interpolate them back to clock MONOTONIC. The changes address a few corner cases in the interpolation code which got the math and logic wrong. - Simplification of the clocksource watchdog retry logic to automatically adjust to handle larger systems correctly instead of having more incomprehensible command line parameters. - Treewide consolidation of the VDSO data structures. - The usual small improvements and cleanups all over the place" * tag 'timers-core-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (62 commits) timer/migration: Fix quick check reporting late expiry tick/sched: Fix build failure for CONFIG_NO_HZ_COMMON=n vdso/datapage: Quick fix - use asm/page-def.h for ARM64 timers: Assert no next dyntick timer look-up while CPU is offline tick: Assume timekeeping is correctly handed over upon last offline idle call tick: Shut down low-res tick from dying CPU tick: Split nohz and highres features from nohz_mode tick: Move individual bit features to debuggable mask accesses tick: Move got_idle_tick away from common flags tick: Assume the tick can't be stopped in NOHZ_MODE_INACTIVE mode tick: Move broadcast cancellation up to CPUHP_AP_TICK_DYING tick: Move tick cancellation up to CPUHP_AP_TICK_DYING tick: Start centralizing tick related CPU hotplug operations tick/sched: Don't clear ts::next_tick again in can_stop_idle_tick() tick/sched: Rename tick_nohz_stop_sched_tick() to tick_nohz_full_stop_tick() tick: Use IS_ENABLED() whenever possible tick/sched: Remove useless oneshot ifdeffery tick/nohz: Remove duplicate between lowres and highres handlers tick/nohz: Remove duplicate between tick_nohz_switch_to_nohz() and tick_setup_sched_timer() hrtimer: Select housekeeping CPU during migration ...