path: root/tools/lib
2022-12-06  libbpf: Parse usdt args without offset on x86 (e.g. 8@(%rsp))  (Timo Hunziker, 1 file, +8/-0)
Parse USDT arguments like "8@(%rsp)" on x86. These are emitted by SystemTap. The argument syntax is similar to the existing "memory dereference case", but with the offset left out as it's zero (i.e. read the value from the address in the register). We treat it the same as the memory dereference case, but set the offset to 0. I've tested that this fixes the "unrecognized arg #N spec: 8@(%rsp).." error I ran into when attaching to a probe with such an argument. Attaching and reading the correct argument values works. Something similar might be needed for the other supported architectures. [0] Closes: https://github.com/libbpf/libbpf/issues/559 Signed-off-by: Timo Hunziker <timo.hunziker@gmx.ch> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221203123746.2160-1-timo.hunziker@eclipso.ch
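For illustration only, here is a stand-alone sketch (not the libbpf patch itself) of how such a "size@(%register)" spec can be recognized with sscanf and treated as a register dereference with offset 0; the struct and function names are made up for the example:

  #include <stdio.h>

  struct usdt_arg_sketch {
          int arg_sz;    /* e.g. 8 (bytes); may be negative for signed args */
          char reg[16];  /* register name without '%', e.g. "rsp" */
          long off;      /* dereference offset; implicitly 0 for "N@(%reg)" */
  };

  static int parse_reg_deref_no_off(const char *spec, struct usdt_arg_sketch *arg)
  {
          int len = 0;

          /* matches e.g. "8@(%rsp)": size, then a parenthesized register, no offset */
          if (sscanf(spec, " %d @ ( %%%15[^)] ) %n", &arg->arg_sz, arg->reg, &len) == 2 &&
              spec[len] == '\0') {
                  arg->off = 0; /* same handling as "N@off(%reg)" with off == 0 */
                  return 0;
          }
          return -1;
  }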
2022-12-02  libbpf: Improve usability of libbpf Makefile  (Xin Liu, 1 file, +17/-0)
The current libbpf Makefile does not contain a help command, which makes it inconvenient to use. Similar to perf's Makefile help command, provide a help command that lists the commands supported by libbpf's make and what each of them does. Signed-off-by: Xin Liu <liuxin350@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221202081738.128513-1-liuxin350@huawei.com
2022-11-30  libbpf: Avoid enum forward-declarations in public API in C++ mode  (Andrii Nakryiko, 1 file, +7/-0)
C++ enum forward declarations are fundamentally not compatible with pure C enum definitions, so libbpf's use of the `enum bpf_stats_type;` forward declaration in the libbpf/bpf.h public API header is causing C++ compilation issues. More details can be found in [0], but it comes down to C++ supporting enum forward declarations only with an explicitly specified backing type: enum bpf_stats_type: int; In C (where, I believe, this is supported as a GCC extension), such a forward declaration is simply: enum bpf_stats_type; Further, in the Linux UAPI this enum is defined in the pure C way: enum bpf_stats_type { BPF_STATS_RUN_TIME = 0, }; And even though in both cases the backing type is int, which can be confirmed by looking at the DWARF information, for the C++ compiler the actual enum definition and the forward declaration are incompatible. To eliminate this problem, for C++ mode define the input argument as int, which makes the enum unnecessary in the libbpf public header. This solves the issue and, as demonstrated by the next patch, doesn't cause any unwanted compiler warnings, at least with the default warnings setting. [0] https://stackoverflow.com/questions/42766839/c11-enum-forward-causes-underlying-type-mismatch [1] Closes: https://github.com/libbpf/libbpf/issues/249 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20221130200013.2997831-1-andrii@kernel.org
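A minimal header-pattern sketch of the approach (not the exact libbpf declaration): in C++ mode the argument becomes a plain int, so no enum forward declaration is needed, while C mode keeps the incomplete enum declaration:

  #ifdef __cplusplus
  /* C++: a plain `enum bpf_stats_type;` would need an explicit backing type */
  int bpf_enable_stats(int type);
  #else
  enum bpf_stats_type; /* defined in the kernel UAPI header linux/bpf.h */
  int bpf_enable_stats(enum bpf_stats_type type);
  #endif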
2022-11-29  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 3 files, +22/-8)
tools/lib/bpf/ringbuf.c 927cbb478adf ("libbpf: Handle size overflow for ringbuf mmap") b486d19a0ab0 ("libbpf: checkpatch: Fixed code alignments in ringbuf.c") https://lore.kernel.org/all/20221121122707.44d1446a@canb.auug.org.au/ Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-11-22  selftests/bpf: Workaround for llvm nop-4 bug  (Alexei Starovoitov, 1 file, +2/-1)
Currently LLVM fails to recognize .data.* as a data section and defaults to the .text section. Later the BPF backend tries to emit a 4-byte NOP instruction, which doesn't exist in the BPF ISA, and aborts. The fix for LLVM is pending: https://reviews.llvm.org/D138477 While waiting for the fix, let's work around the linked_list test case by using the .bss.* prefix, which is properly recognized by LLVM as a BSS section. Fix libbpf to support the .bss. prefix and adjust the tests. Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-11-18  libbpf: Ignore hashmap__find() result explicitly in btf_dump  (Andrii Nakryiko, 1 file, +1/-1)
Coverity is reporting that btf_dump_name_dups() doesn't check the return result of the hashmap__find() call. This is intentional, so make it explicit with a (void) cast. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20221117192824.4093553-1-andrii@kernel.org
2022-11-17  libbpf: Check the validity of size in user_ring_buffer__reserve()  (Hou Tao, 1 file, +4/-0)
The top two bits of size are used as busy and discard flags, so reject any reservation that has either of these special bits set in the size. With the addition of the validity check, there is also no need to check whether or not total_size overflows. Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221116072351.1168938-5-houtao@huaweicloud.com
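A minimal sketch of such a check; the bit values mirror the kernel's BPF_RINGBUF_BUSY_BIT / BPF_RINGBUF_DISCARD_BIT definitions, but the surrounding function is illustrative rather than the actual libbpf code:

  #include <errno.h>
  #include <stdint.h>

  #define RB_BUSY_BIT    (1U << 31) /* top bit marks a record as in-flight */
  #define RB_DISCARD_BIT (1U << 30) /* next bit marks a record as discarded */

  static int reserve_size_ok(uint32_t size)
  {
          /* a sample whose size uses these bits would corrupt the record header */
          if (size & (RB_BUSY_BIT | RB_DISCARD_BIT))
                  return -E2BIG;
          return 0;
  }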
2022-11-17  libbpf: Handle size overflow for user ringbuf mmap  (Hou Tao, 1 file, +8/-2)
Similar to the overflow problem in ringbuf mmap, in user_ringbuf_map() 2 * max_entries may overflow u32 when mapping the writable region. Fix it by casting the size of the writable mmap region to a __u64 and checking whether there will be an overflow during mmap. Fixes: b66ccae01f1d ("bpf: Add libbpf logic for user-space ring buffer") Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221116072351.1168938-4-houtao@huaweicloud.com
2022-11-17  libbpf: Handle size overflow for ringbuf mmap  (Hou Tao, 1 file, +8/-4)
The maximum size of a ringbuf is 2GB on an x86-64 host, so 2 * max_entries will overflow u32 when mapping the producer page and data pages. Casting max_entries to size_t alone is not enough, because for a 32-bit application on a 64-bit kernel the size of the read-only mmap region could also overflow size_t. So fix it by casting the size of the read-only mmap region to a __u64 and checking whether there will be an overflow during mmap. Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support") Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221116072351.1168938-3-houtao@huaweicloud.com
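Both overflow fixes above follow the same pattern; here is a hedged, simplified sketch: do the length arithmetic in 64 bits (producer page plus the double-mapped data pages) and refuse to mmap when the result no longer fits in the process's size_t, e.g. a 32-bit process with max_entries close to 2GB:

  #include <stddef.h>
  #include <stdint.h>
  #include <unistd.h>

  static int ringbuf_mmap_sz(uint32_t max_entries, size_t *out)
  {
          uint64_t sz = (uint64_t)sysconf(_SC_PAGE_SIZE) + 2ULL * max_entries;

          /* on a 32-bit process size_t is 32 bits; reject anything that wraps */
          if (sz != (uint64_t)(size_t)sz)
                  return -1;
          *out = (size_t)sz;
          return 0;
  }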
2022-11-17  libbpf: Use page size as max_entries when probing ring buffer map  (Hou Tao, 1 file, +1/-1)
Use the page size as max_entries when probing the ring buffer map, otherwise the probe may fail on a host with a 64KB page size (e.g., an ARM64 host). After the fix, the output of "bpftool feature" on such a host is correct. Before: eBPF map_type ringbuf is NOT available; eBPF map_type user_ringbuf is NOT available. After: eBPF map_type ringbuf is available; eBPF map_type user_ringbuf is available. Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221116072351.1168938-2-houtao@huaweicloud.com
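A hedged sketch of such a probe using libbpf's bpf_map_create(): a ringbuf map's max_entries must be a page-aligned power of two, so a hard-coded 4096 fails on 64KB-page hosts while sysconf(_SC_PAGE_SIZE) works everywhere:

  #include <stdbool.h>
  #include <unistd.h>
  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  static bool ringbuf_map_supported(void)
  {
          int fd = bpf_map_create(BPF_MAP_TYPE_RINGBUF, NULL, 0, 0,
                                  sysconf(_SC_PAGE_SIZE), NULL);

          if (fd < 0)
                  return false; /* kernel rejected the map type or its params */
          close(fd);
          return true;
  }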
2022-11-14  libbpf: checkpatch: Fixed code alignments in ringbuf.c  (Kang Minchul, 1 file, +2/-2)
Fixed some checkpatch issues in ringbuf.c Signed-off-by: Kang Minchul <tegongkang@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20221113190648.38556-4-tegongkang@gmail.com
2022-11-14  libbpf: Fixed various checkpatch issues in libbpf.c  (Kang Minchul, 1 file, +28/-17)
Fixed following checkpatch issues: WARNING: Block comments use a trailing */ on a separate line + * other BPF program's BTF object */ WARNING: Possible repeated word: 'be' + * name. This is important to be be able to find corresponding BTF ERROR: switch and case should be at the same indent + switch (ext->kcfg.sz) { + case 1: *(__u8 *)ext_val = value; break; + case 2: *(__u16 *)ext_val = value; break; + case 4: *(__u32 *)ext_val = value; break; + case 8: *(__u64 *)ext_val = value; break; + default: ERROR: trailing statements should be on next line + case 1: *(__u8 *)ext_val = value; break; ERROR: trailing statements should be on next line + case 2: *(__u16 *)ext_val = value; break; ERROR: trailing statements should be on next line + case 4: *(__u32 *)ext_val = value; break; ERROR: trailing statements should be on next line + case 8: *(__u64 *)ext_val = value; break; ERROR: code indent should use tabs where possible + }$ WARNING: please, no spaces at the start of a line + }$ WARNING: Block comments use a trailing */ on a separate line + * for faster search */ ERROR: code indent should use tabs where possible +^I^I^I^I^I^I &ext->kcfg.is_signed);$ WARNING: braces {} are not necessary for single statement blocks + if (err) { + return err; + } ERROR: code indent should use tabs where possible +^I^I^I^I sizeof(*obj->btf_modules), obj->btf_module_cnt + 1);$ Signed-off-by: Kang Minchul <tegongkang@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20221113190648.38556-3-tegongkang@gmail.com
2022-11-14  libbpf: checkpatch: Fixed code alignments in btf.c  (Kang Minchul, 1 file, +3/-2)
Fixed some checkpatch issues in btf.c Signed-off-by: Kang Minchul <tegongkang@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20221113190648.38556-2-tegongkang@gmail.com
2022-11-14  libbpf: Use correct return pointer in attach_raw_tp  (Jiri Olsa, 1 file, +1/-1)
We need to pass '*link' to final libbpf_get_error, because that one holds the return value, not 'link'. Fixes: 4fa5bcfe07f7 ("libbpf: Allow BPF program auto-attach handlers to bail out") Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221114145257.882322-1-jolsa@kernel.org
2022-11-14  libbpf: Fix uninitialized warning in btf_dump_dump_type_data  (David Michael, 1 file, +1/-1)
GCC 11.3.0 fails to compile btf_dump.c due to the following error, which seems to originate in btf_dump_struct_data where the returned value would be uninitialized if btf_vlen returns zero. btf_dump.c: In function ‘btf_dump_dump_type_data’: btf_dump.c:2363:12: error: ‘err’ may be used uninitialized in this function [-Werror=maybe-uninitialized] 2363 | if (err < 0) | ^ Fixes: 920d16af9b42 ("libbpf: BTF dumper support for typed data") Signed-off-by: David Michael <fedora.dm0@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Acked-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/87zgcu60hq.fsf@gmail.com
2022-11-11  libbpf: Hashmap.h update to fix build issues using LLVM14  (Eduard Zingerman, 1 file, +2/-1)
A fix for the LLVM compilation error while building bpftool. Replaces the expression: _Static_assert((p) == NULL || ...) with the expression: _Static_assert((__builtin_constant_p((p)) ? (p) == NULL : 0) || ...) When "p" is not a constant, the former is not considered to be a constant expression by LLVM 14. The error was introduced in the following patch set: [1]. The error was reported here: [2]. [1] https://lore.kernel.org/bpf/20221109142611.879983-1-eddyz87@gmail.com/ [2] https://lore.kernel.org/all/202211110355.BcGcbZxP-lkp@intel.com/ Reported-by: kernel test robot <lkp@intel.com> Fixes: c302378bc157 ("libbpf: Hashmap interface update to allow both long and void* keys/values") Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20221110223240.1350810-1-eddyz87@gmail.com
2022-11-09  libbpf: Resolve unambigous forward declarations  (Eduard Zingerman, 1 file, +139/-4)
Resolve forward declarations that don't take part in type graph comparisons if the declaration name is unambiguous. Example: CU #1: struct foo; // standalone forward declaration struct foo *some_global; CU #2: struct foo { int x; }; struct foo *another_global; The `struct foo` from CU #1 is not a part of any definition that is compared against another definition while `btf_dedup_struct_types` processes structural types. After `btf_dedup_struct_types` the BTF looks as follows: [1] STRUCT 'foo' size=4 vlen=1 ... [2] INT 'int' size=4 ... [3] PTR '(anon)' type_id=1 [4] FWD 'foo' fwd_kind=struct [5] PTR '(anon)' type_id=4 This commit adds a new pass, `btf_dedup_resolve_fwds`, that maps such forward declarations to structs or unions with an identical name when that name is not ambiguous. The pass is positioned before `btf_dedup_ref_types` so that types [3] and [5] can be merged as the same type after [1] and [4] are merged. The final result for the example above looks as follows: [1] STRUCT 'foo' size=4 vlen=1 'x' type_id=2 bits_offset=0 [2] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED [3] PTR '(anon)' type_id=1 For a defconfig kernel with BTF enabled this removes 63 forward declarations. Examples of removed declarations: `pt_regs`, `in6_addr`. The running time of the `btf__dedup` function increases by about 3%. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221109142611.879983-3-eddyz87@gmail.com
2022-11-09  libbpf: Hashmap interface update to allow both long and void* keys/values  (Eduard Zingerman, 7 files, +119/-109)
An update for libbpf's hashmap interface from void* -> void* to a polymorphic one, allowing both long and void* keys and values. This simplifies many use cases in libbpf as hashmaps there are mostly integer to integer. Perf copies the hashmap implementation from libbpf and has to be updated as well. Changes to libbpf, selftests/bpf and perf are packed as a single commit to avoid compilation issues with any future bisect. The polymorphic interface is achieved by hiding hashmap interface functions behind auxiliary macros that take care of the necessary type casts, for example: #define hashmap_cast_ptr(p) \ ({ \ _Static_assert((p) == NULL || sizeof(*(p)) == sizeof(long),\ #p " pointee should be a long-sized integer or a pointer"); \ (long *)(p); \ }) bool hashmap_find(const struct hashmap *map, long key, long *value); #define hashmap__find(map, key, value) \ hashmap_find((map), (long)(key), hashmap_cast_ptr(value)) - the hashmap__find macro casts the key and value parameters to long and long* respectively - hashmap_cast_ptr ensures that the value pointer points to memory of the appropriate size. This hack was suggested by Andrii Nakryiko in [1]. This is a follow-up to [2]. [1] https://lore.kernel.org/bpf/CAEf4BzZ8KFneEJxFAaNCCFPGqp20hSpS2aCj76uRk3-qZUH5xg@mail.gmail.com/ [2] https://lore.kernel.org/bpf/af1facf9-7bc8-8a3d-0db4-7b3f333589a2@meta.com/T/#m65b28f1d6d969fcd318b556db6a3ad499a42607d Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221109142611.879983-2-eddyz87@gmail.com
2022-11-04  libbpf: Resolve enum fwd as full enum64 and vice versa  (Eduard Zingerman, 1 file, +25/-50)
Changes de-duplication logic for enums in the following way: - update btf_hash_enum to ignore size and kind fields to get ENUM and ENUM64 types in a same hash bucket; - update btf_compat_enum to consider enum fwd to be compatible with full enum64 (and vice versa); This allows BTF de-duplication in the following case: // CU #1 enum foo; struct s { enum foo *a; } *x; // CU #2 enum foo { x = 0xfffffffff // big enough to force enum64 }; struct s { enum foo *a; } *y; De-duplicated BTF prior to this commit: [1] ENUM64 'foo' encoding=UNSIGNED size=8 vlen=1 'x' val=68719476735ULL [2] INT 'long unsigned int' size=8 bits_offset=0 nr_bits=64 encoding=(none) [3] STRUCT 's' size=8 vlen=1 'a' type_id=4 bits_offset=0 [4] PTR '(anon)' type_id=1 [5] PTR '(anon)' type_id=3 [6] STRUCT 's' size=8 vlen=1 'a' type_id=8 bits_offset=0 [7] ENUM 'foo' encoding=UNSIGNED size=4 vlen=0 [8] PTR '(anon)' type_id=7 [9] PTR '(anon)' type_id=6 De-duplicated BTF after this commit: [1] ENUM64 'foo' encoding=UNSIGNED size=8 vlen=1 'x' val=68719476735ULL [2] INT 'long unsigned int' size=8 bits_offset=0 nr_bits=64 encoding=(none) [3] STRUCT 's' size=8 vlen=1 'a' type_id=4 bits_offset=0 [4] PTR '(anon)' type_id=1 [5] PTR '(anon)' type_id=3 Enum forward declarations in C do not provide information about enumeration values range. Thus the `btf_type->size` field is meaningless for forward enum declarations. In fact, GCC does not encode size in DWARF for forward enum declarations (but dwarves sets enumeration size to a default value of `sizeof(int) * 8` when size is not specified see dwarf_loader.c:die__create_new_enumeration). Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221101235413.1824260-1-eddyz87@gmail.com
2022-11-02  Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski, 4 files, +125/-78)
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next Daniel Borkmann says: ==================== bpf-next 2022-11-02 We've added 70 non-merge commits during the last 14 day(s) which contain a total of 96 files changed, 3203 insertions(+), 640 deletions(-). The main changes are: 1) Make cgroup local storage available to non-cgroup attached BPF programs such as tc BPF ones, from Yonghong Song. 2) Avoid unnecessary deadlock detection and failures wrt BPF task storage helpers, from Martin KaFai Lau. 3) Add LLVM disassembler as default library for dumping JITed code in bpftool, from Quentin Monnet. 4) Various kprobe_multi_link fixes related to kernel modules, from Jiri Olsa. 5) Optimize x86-64 JIT with emitting BMI2-based shift instructions, from Jie Meng. 6) Improve BPF verifier's memory type compatibility for map key/value arguments, from Dave Marchevsky. 7) Only create mmap-able data section maps in libbpf when data is exposed via skeletons, from Andrii Nakryiko. 8) Add an autoattach option for bpftool to load all object assets, from Wang Yufen. 9) Various memory handling fixes for libbpf and BPF selftests, from Xu Kuohai. 10) Initial support for BPF selftest's vmtest.sh on arm64, from Manu Bretelle. 11) Improve libbpf's BTF handling to dedup identical structs, from Alan Maguire. 12) Add BPF CI and denylist documentation for BPF selftests, from Daniel Müller. 13) Check BPF cpumap max_entries before doing allocation work, from Florian Lehner. * tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (70 commits) samples/bpf: Fix typo in README bpf: Remove the obsolte u64_stats_fetch_*_irq() users. bpf: check max_entries before allocating memory bpf: Fix a typo in comment for DFS algorithm bpftool: Fix spelling mistake "disasembler" -> "disassembler" selftests/bpf: Fix bpftool synctypes checking failure selftests/bpf: Panic on hard/soft lockup docs/bpf: Add documentation for new cgroup local storage selftests/bpf: Add test cgrp_local_storage to DENYLIST.s390x selftests/bpf: Add selftests for new cgroup local storage selftests/bpf: Fix test test_libbpf_str/bpf_map_type_str bpftool: Support new cgroup local storage libbpf: Support new cgroup local storage bpf: Implement cgroup storage available to non-cgroup-attached bpf progs bpf: Refactor some inode/task/sk storage functions for reuse bpf: Make struct cgroup btf id global selftests/bpf: Tracing prog can still do lookup under busy lock selftests/bpf: Ensure no task storage failure for bpf_lsm.s prog due to deadlock detection bpf: Add new bpf_task_storage_delete proto with no deadlock detection bpf: bpf_task_storage_delete_recur does lookup first before the deadlock check ... ==================== Link: https://lore.kernel.org/r/20221102062120.5724-1-daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-25  libbpf: Support new cgroup local storage  (Yonghong Song, 2 files, +2/-0)
Add support for new cgroup local storage. Acked-by: David Vernet <void@manifault.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20221026042856.673989-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-10-25  libbpf: Btf dedup identical struct test needs check for nested structs/arrays  (Alan Maguire, 1 file, +5/-3)
When examining module BTF, it is common to see core kernel structures such as sk_buff, net_device duplicated in the module. After adding debug messaging to BTF it turned out that much of the problem was down to the identical struct test failing during deduplication; sometimes the compiler adds identical structs. However it turns out sometimes that type ids of identical struct members can also differ, even when the containing structs are still identical. To take an example, for struct sk_buff, debug messaging revealed that the identical struct matching was failing for the anon struct "headers"; specifically for the first field: __u8 __pkt_type_offset[0]; /* 128 0 */ Looking at the code in BTF deduplication, we have code that guards against the possibility of identical struct definitions, down to type ids, and identical array definitions. However in this case we have a struct which is being defined twice but does not have identical type ids since each duplicate struct has separate type ids for the above array member. A similar problem (though not observed) could occur for struct-in-struct. The solution is to make the "identical struct" test check members not just for matching ids, but to also check if they in turn are identical structs or arrays. The results of doing this are quite dramatic (for some modules at least); I see the number of type ids drop from around 10000 to just over 1000 in one module for example. For testing use latest pahole or apply [1], otherwise dedups can fail for the reasons described there. Also fix return type of btf_dedup_identical_arrays() as suggested by Andrii to match boolean return type used elsewhere. Fixes: efdd3eb8015e ("libbpf: Accommodate DWARF/compiler bug with duplicated structs") Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1666622309-22289-1-git-send-email-alan.maguire@oracle.com [1] https://lore.kernel.org/bpf/1666364523-9648-1-git-send-email-alan.maguire
2022-10-21  libbpf: Avoid allocating reg_name with sscanf in parse_usdt_arg()  (Xu Kuohai, 1 file, +6/-10)
The reg_name in parse_usdt_arg() is used to hold a register name, which is short enough to fit in a 16-byte array, so we can define reg_name as char reg_name[16] to avoid dynamically allocating it with sscanf. Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com> Signed-off-by: Xu Kuohai <xukuohai@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20221018145538.2046842-1-xukuohai@huaweicloud.com
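An illustrative contrast (not the actual patch): the POSIX "%m" modifier makes sscanf allocate the buffer, which then has to be freed on every path, while a width-limited conversion fills a caller-provided array:

  #include <stdio.h>
  #include <stdlib.h>

  static void sscanf_buffer_styles(const char *spec)
  {
          char *heap_reg = NULL;  /* "%m": sscanf allocates, caller must free */
          char fixed_reg[16];     /* width-limited: no allocation, nothing to free */

          if (sscanf(spec, "%m[a-z0-9]", &heap_reg) == 1)
                  free(heap_reg);

          if (sscanf(spec, "%15[a-z0-9]", fixed_reg) != 1)
                  fixed_reg[0] = '\0';
  }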
2022-10-20  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 1 file, +2/-3)
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-19  libbpf: only add BPF_F_MMAPABLE flag for data maps with global vars  (Andrii Nakryiko, 1 file, +78/-19)
Teach libbpf to not add BPF_F_MMAPABLE flag unnecessarily for ARRAY maps that are backing data sections, if such data sections don't expose any variables to user-space. Exposed variables are those that have STB_GLOBAL or STB_WEAK ELF binding and correspond to BTF VAR's BTF_VAR_GLOBAL_ALLOCATED linkage. The overall idea is that if some data section doesn't have any variable that is exposed through BPF skeleton, then there is no reason to make such BPF array mmapable. Making BPF array mmapable is not a free no-op action, because BPF verifier doesn't allow users to put special objects (such as BPF spin locks, RB tree nodes, linked list nodes, kptrs, etc; anything that has a sensitive internal state that should not be modified arbitrarily from user space) into mmapable arrays, as there is no way to prevent user space from corrupting such sensitive state through direct memory access through memory-mapped region. By making sure that libbpf doesn't add BPF_F_MMAPABLE flag to BPF array maps corresponding to data sections that only have static variables (which are not supposed to be visible to user space according to libbpf and BPF skeleton rules), users now can have spinlocks, kptrs, etc in either default .bss/.data sections or custom .data.* sections (assuming there are no global variables in such sections). The only possible hiccup with this approach is the need to use global variables during BPF static linking, even if it's not intended to be shared with user space through BPF skeleton. To allow such scenarios, extend libbpf's STV_HIDDEN ELF visibility attribute handling to variables. Libbpf is already treating global hidden BPF subprograms as static subprograms and adjusts BTF accordingly to make BPF verifier verify such subprograms as static subprograms with preserving entire BPF verifier state between subprog calls. This patch teaches libbpf to treat global hidden variables as static ones and adjust BTF information accordingly as well. This allows to share variables between multiple object files during static linking, but still keep them internal to BPF program and not get them exposed through BPF skeleton. Note, that if the user has some advanced scenario where they absolutely need BPF_F_MMAPABLE flag on .data/.bss/.rodata BPF array map despite only having static variables, they still can achieve this by forcing it through explicit bpf_map__set_map_flags() API. Acked-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Dave Marchevsky <davemarchevsky@fb.com> Link: https://lore.kernel.org/r/20221019002816.359650-3-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
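A usage sketch for the escape hatch mentioned at the end; the map name is hypothetical, the calls are regular libbpf APIs, and this must run after bpf_object__open() but before bpf_object__load():

  #include <errno.h>
  #include <linux/bpf.h>
  #include <bpf/libbpf.h>

  static int force_mmapable(struct bpf_object *obj, const char *map_name)
  {
          struct bpf_map *map = bpf_object__find_map_by_name(obj, map_name);

          if (!map)
                  return -ENOENT;
          /* re-add BPF_F_MMAPABLE even though the section has only static vars */
          return bpf_map__set_map_flags(map, bpf_map__map_flags(map) | BPF_F_MMAPABLE);
  }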
2022-10-19  libbpf: clean up and refactor BTF fixup step  (Andrii Nakryiko, 1 file, +42/-54)
Refactor libbpf's BTF fixup step during BPF object open phase. The only functional change is that we now ignore BTF_VAR_GLOBAL_EXTERN variables during fix up, not just BTF_VAR_STATIC ones, which shouldn't cause any change in behavior as there shouldn't be any extern variable in data sections for valid BPF object anyways. Otherwise it's just collapsing two functions that have no reason to be separate, and switching find_elf_var_offset() helper to return entire symbol pointer, not just its offset. This will be used by next patch to get ELF symbol visibility. While refactoring, also "normalize" debug messages inside btf_fixup_datasec() to follow general libbpf style and print out data section name consistently, where it's available. Acked-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20221019002816.359650-2-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-10-18  Merge tag 'for-netdev' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski, 6 files, +117/-21)
git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next Daniel Borkmann says: ==================== pull-request: bpf-next 2022-10-18 We've added 33 non-merge commits during the last 14 day(s) which contain a total of 31 files changed, 874 insertions(+), 538 deletions(-). The main changes are: 1) Add RCU grace period chaining to BPF to wait for the completion of access from both sleepable and non-sleepable BPF programs, from Hou Tao & Paul E. McKenney. 2) Improve helper UAPI by explicitly defining BPF_FUNC_xxx integer values. In the wild we have seen OS vendors doing buggy backports where helper call numbers mismatched. This is an attempt to make backports more foolproof, from Andrii Nakryiko. 3) Add libbpf *_opts API-variants for bpf_*_get_fd_by_id() functions, from Roberto Sassu. 4) Fix libbpf's BTF dumper for structs with padding-only fields, from Eduard Zingerman. 5) Fix various libbpf bugs which have been found from fuzzing with malformed BPF object files, from Shung-Hsi Yu. 6) Clean up an unneeded check on existence of SSE2 in BPF x86-64 JIT, from Jie Meng. 7) Fix various ASAN bugs in both libbpf and selftests when running the BPF selftest suite on arm64, from Xu Kuohai. 8) Fix missing bpf_iter_vma_offset__destroy() call in BPF iter selftest and use in-skeleton link pointer to remove an explicit bpf_link__destroy(), from Jiri Olsa. 9) Fix BPF CI breakage by pointing to iptables-legacy instead of relying on symlinked iptables which got upgraded to iptables-nft, from Martin KaFai Lau. 10) Minor BPF selftest improvements all over the place, from various others. * tag 'for-netdev' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (33 commits) bpf/docs: Update README for most recent vmtest.sh bpf: Use rcu_trace_implies_rcu_gp() for program array freeing bpf: Use rcu_trace_implies_rcu_gp() in local storage map bpf: Use rcu_trace_implies_rcu_gp() in bpf memory allocator rcu-tasks: Provide rcu_trace_implies_rcu_gp() selftests/bpf: Use sys_pidfd_open() helper when possible libbpf: Fix null-pointer dereference in find_prog_by_sec_insn() libbpf: Deal with section with no data gracefully libbpf: Use elf_getshdrnum() instead of e_shnum selftest/bpf: Fix error usage of ASSERT_OK in xdp_adjust_tail.c selftests/bpf: Fix error failure of case test_xdp_adjust_tail_grow selftest/bpf: Fix memory leak in kprobe_multi_test selftests/bpf: Fix memory leak caused by not destroying skeleton libbpf: Fix memory leak in parse_usdt_arg() libbpf: Fix use-after-free in btf_dump_name_dups selftests/bpf: S/iptables/iptables-legacy/ in the bpf_nf and xdp_synproxy test selftests/bpf: Alphabetize DENYLISTs selftests/bpf: Add tests for _opts variants of bpf_*_get_fd_by_id() libbpf: Introduce bpf_link_get_fd_by_id_opts() libbpf: Introduce bpf_btf_get_fd_by_id_opts() ... ==================== Link: https://lore.kernel.org/r/20221018210631.11211-1-daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-14  libperf: Do not include non-UAPI linux/compiler.h header  (Arnaldo Carvalho de Melo, 1 file, +2/-3)
It's just for that __packed define, so use it expanded as __attribute__((packed)), like the other files in /usr/include do. This problem was preventing the libperf examples from building on ALT Linux and Fedora 35; fix it. Reported-by: Vitaly Chikunov <vt@altlinux.org> Acked-by: Ian Rogers <irogers@google.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Dmitry Levin <ldv@altlinux.org> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lore.kernel.org/lkml/Y0lnpl2Ix7VljVDc@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-10-13  libbpf: Fix null-pointer dereference in find_prog_by_sec_insn()  (Shung-Hsi Yu, 1 file, +3/-0)
When there are no program sections, obj->programs is left unallocated, and find_prog_by_sec_insn()'s search lands on &obj->programs[0] == NULL, and will cause null-pointer dereference in the following access to prog->sec_idx. Guard the search with obj->nr_programs similar to what's being done in __bpf_program__iter() to prevent null-pointer access from happening. Fixes: db2b8b06423c ("libbpf: Support CO-RE relocations for multi-prog sections") Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221012022353.7350-4-shung-hsi.yu@suse.com
2022-10-13  libbpf: Deal with section with no data gracefully  (Shung-Hsi Yu, 1 file, +5/-1)
The ELF section data pointer returned by libelf may be NULL (if the section has type SHT_NOBITS), so null-check the section data pointer before attempting to copy the license and kversion sections. Fixes: cb1e5e961991 ("bpf tools: Collect version and license from ELF sections") Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221012022353.7350-3-shung-hsi.yu@suse.com
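A small sketch of the guard described above: for SHT_NOBITS sections such as .bss, libelf hands back an Elf_Data with a NULL d_buf, so bail out before copying; the helper name is illustrative:

  #include <stddef.h>
  #include <libelf.h>

  static const void *section_bytes(Elf_Scn *scn, size_t *size)
  {
          Elf_Data *data = elf_getdata(scn, NULL);

          if (!data || !data->d_buf)
                  return NULL; /* no backing bytes to copy */
          *size = data->d_size;
          return data->d_buf;
  }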
2022-10-13  libbpf: Use elf_getshdrnum() instead of e_shnum  (Shung-Hsi Yu, 1 file, +9/-4)
This commit replaces e_shnum with the elf_getshdrnum() helper to fix two oss-fuzz-reported heap-buffer overflows in __bpf_object__open. Both reports are incorrectly marked as fixed while still being reproducible in the latest libbpf. # clusterfuzz-testcase-minimized-bpf-object-fuzzer-5747922482888704 libbpf: loading object 'fuzz-object' from buffer libbpf: sec_cnt is 0 libbpf: elf: section(1) .data, size 0, link 538976288, flags 2020202020202020, type=2 libbpf: elf: section(2) .data, size 32, link 538976288, flags 202020202020ff20, type=1 ================================================================= ==13==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020000000c0 at pc 0x0000005a7b46 bp 0x7ffd12214af0 sp 0x7ffd12214ae8 WRITE of size 4 at 0x6020000000c0 thread T0 SCARINESS: 46 (4-byte-write-heap-buffer-overflow-far-from-bounds) #0 0x5a7b45 in bpf_object__elf_collect /src/libbpf/src/libbpf.c:3414:24 #1 0x5733c0 in bpf_object_open /src/libbpf/src/libbpf.c:7223:16 #2 0x5739fd in bpf_object__open_mem /src/libbpf/src/libbpf.c:7263:20 ... The issue lies in libbpf's direct use of the e_shnum field in the ELF header as the section header count, whereas libelf implements extra logic: when e_shnum == 0 && e_shoff != 0, it uses the sh_size member of the initial section header as the real section header count (part of the ELF spec, to accommodate the situation where the section header count is larger than SHN_LORESERVE). The above inconsistency led to libbpf writing into a zero-entry calloc area. So instead of using e_shnum directly, use the elf_getshdrnum() helper provided by libelf to retrieve the section header count into sec_cnt. Fixes: 0d6988e16a12 ("libbpf: Fix section counting logic") Fixes: 25bbbd7a444b ("libbpf: Remove assumptions about uniqueness of .rodata/.data/.bss maps") Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=40868 Link: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=40957 Link: https://lore.kernel.org/bpf/20221012022353.7350-2-shung-hsi.yu@suse.com
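A minimal sketch of the replacement: ask libelf for the real section header count instead of trusting ehdr->e_shnum, which the ELF spec allows to be 0 when the count is stored in the first section header:

  #include <stddef.h>
  #include <libelf.h>

  static int get_section_count(Elf *elf, size_t *sec_cnt)
  {
          if (elf_getshdrnum(elf, sec_cnt))
                  return -1; /* libelf error; see elf_errmsg(-1) for details */
          return 0;
  }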
2022-10-13  libbpf: Fix memory leak in parse_usdt_arg()  (Xu Kuohai, 1 file, +4/-7)
In the arm64 version of parse_usdt_arg(), when sscanf returns 2, reg_name is allocated but not freed. Fix it. Fixes: 0f8619929c57 ("libbpf: Usdt aarch64 arg parsing support") Signed-off-by: Xu Kuohai <xukuohai@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/bpf/20221011120108.782373-3-xukuohai@huaweicloud.com
2022-10-13  libbpf: Fix use-after-free in btf_dump_name_dups  (Xu Kuohai, 1 file, +26/-3)
ASAN reports an use-after-free in btf_dump_name_dups: ERROR: AddressSanitizer: heap-use-after-free on address 0xffff927006db at pc 0xaaaab5dfb618 bp 0xffffdd89b890 sp 0xffffdd89b928 READ of size 2 at 0xffff927006db thread T0 #0 0xaaaab5dfb614 in __interceptor_strcmp.part.0 (test_progs+0x21b614) #1 0xaaaab635f144 in str_equal_fn tools/lib/bpf/btf_dump.c:127 #2 0xaaaab635e3e0 in hashmap_find_entry tools/lib/bpf/hashmap.c:143 #3 0xaaaab635e72c in hashmap__find tools/lib/bpf/hashmap.c:212 #4 0xaaaab6362258 in btf_dump_name_dups tools/lib/bpf/btf_dump.c:1525 #5 0xaaaab636240c in btf_dump_resolve_name tools/lib/bpf/btf_dump.c:1552 #6 0xaaaab6362598 in btf_dump_type_name tools/lib/bpf/btf_dump.c:1567 #7 0xaaaab6360b48 in btf_dump_emit_struct_def tools/lib/bpf/btf_dump.c:912 #8 0xaaaab6360630 in btf_dump_emit_type tools/lib/bpf/btf_dump.c:798 #9 0xaaaab635f720 in btf_dump__dump_type tools/lib/bpf/btf_dump.c:282 #10 0xaaaab608523c in test_btf_dump_incremental tools/testing/selftests/bpf/prog_tests/btf_dump.c:236 #11 0xaaaab6097530 in test_btf_dump tools/testing/selftests/bpf/prog_tests/btf_dump.c:875 #12 0xaaaab6314ed0 in run_one_test tools/testing/selftests/bpf/test_progs.c:1062 #13 0xaaaab631a0a8 in main tools/testing/selftests/bpf/test_progs.c:1697 #14 0xffff9676d214 in __libc_start_main ../csu/libc-start.c:308 #15 0xaaaab5d65990 (test_progs+0x185990) 0xffff927006db is located 11 bytes inside of 16-byte region [0xffff927006d0,0xffff927006e0) freed by thread T0 here: #0 0xaaaab5e2c7c4 in realloc (test_progs+0x24c7c4) #1 0xaaaab634f4a0 in libbpf_reallocarray tools/lib/bpf/libbpf_internal.h:191 #2 0xaaaab634f840 in libbpf_add_mem tools/lib/bpf/btf.c:163 #3 0xaaaab636643c in strset_add_str_mem tools/lib/bpf/strset.c:106 #4 0xaaaab6366560 in strset__add_str tools/lib/bpf/strset.c:157 #5 0xaaaab6352d70 in btf__add_str tools/lib/bpf/btf.c:1519 #6 0xaaaab6353e10 in btf__add_field tools/lib/bpf/btf.c:2032 #7 0xaaaab6084fcc in test_btf_dump_incremental tools/testing/selftests/bpf/prog_tests/btf_dump.c:232 #8 0xaaaab6097530 in test_btf_dump tools/testing/selftests/bpf/prog_tests/btf_dump.c:875 #9 0xaaaab6314ed0 in run_one_test tools/testing/selftests/bpf/test_progs.c:1062 #10 0xaaaab631a0a8 in main tools/testing/selftests/bpf/test_progs.c:1697 #11 0xffff9676d214 in __libc_start_main ../csu/libc-start.c:308 #12 0xaaaab5d65990 (test_progs+0x185990) previously allocated by thread T0 here: #0 0xaaaab5e2c7c4 in realloc (test_progs+0x24c7c4) #1 0xaaaab634f4a0 in libbpf_reallocarray tools/lib/bpf/libbpf_internal.h:191 #2 0xaaaab634f840 in libbpf_add_mem tools/lib/bpf/btf.c:163 #3 0xaaaab636643c in strset_add_str_mem tools/lib/bpf/strset.c:106 #4 0xaaaab6366560 in strset__add_str tools/lib/bpf/strset.c:157 #5 0xaaaab6352d70 in btf__add_str tools/lib/bpf/btf.c:1519 #6 0xaaaab6353ff0 in btf_add_enum_common tools/lib/bpf/btf.c:2070 #7 0xaaaab6354080 in btf__add_enum tools/lib/bpf/btf.c:2102 #8 0xaaaab6082f50 in test_btf_dump_incremental tools/testing/selftests/bpf/prog_tests/btf_dump.c:162 #9 0xaaaab6097530 in test_btf_dump tools/testing/selftests/bpf/prog_tests/btf_dump.c:875 #10 0xaaaab6314ed0 in run_one_test tools/testing/selftests/bpf/test_progs.c:1062 #11 0xaaaab631a0a8 in main tools/testing/selftests/bpf/test_progs.c:1697 #12 0xffff9676d214 in __libc_start_main ../csu/libc-start.c:308 #13 0xaaaab5d65990 (test_progs+0x185990) The reason is that the key stored in hash table name_map is a string address, and the string memory is allocated by realloc() function, when the memory is resized by realloc() later, the 
old memory may be freed, so the address stored in name_map references freed memory, causing a use-after-free. Fix it by storing a duplicated string address in name_map. Fixes: 919d2b1dbb07 ("libbpf: Allow modification of BTF and add btf__add_str API") Signed-off-by: Xu Kuohai <xukuohai@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/bpf/20221011120108.782373-2-xukuohai@huaweicloud.com
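A simplified sketch of the fix direction (ownership and cleanup of the copies are omitted; the real btf_dump code manages that itself): key the map with a stable strdup() copy rather than a pointer into the BTF string section, which btf__add_str() may reallocate:

  #include <errno.h>
  #include <stdlib.h>
  #include <string.h>
  #include "hashmap.h" /* libbpf's internal hashmap */

  static int remember_name(struct hashmap *name_map, const char *name)
  {
          char *dup = strdup(name); /* survives later realloc of the BTF strings */

          if (!dup)
                  return -ENOMEM;
          if (hashmap__set(name_map, dup, 1, NULL, NULL)) {
                  free(dup);
                  return -ENOMEM;
          }
          return 0;
  }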
2022-10-11  Merge tag 'perf-tools-for-v6.1-1-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux  (Linus Torvalds, 6 files, +46/-19)
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux Pull perf tools updates from Arnaldo Carvalho de Melo: - Add support for AMD on 'perf mem' and 'perf c2c', the kernel enablement patches went via tip. Example: $ sudo perf mem record -- -c 10000 ^C[ perf record: Woken up 227 times to write data ] [ perf record: Captured and wrote 58.760 MB perf.data (836978 samples) ] $ sudo perf mem report -F mem,sample,snoop Samples: 836K of event 'ibs_op//', Event count (approx.): 8418762 Memory access Samples Snoop N/A 700620 N/A L1 hit 126675 N/A L2 hit 424 N/A L3 hit 664 HitM L3 hit 10 N/A Local RAM hit 2 N/A Remote RAM (1 hop) hit 8558 N/A Remote Cache (1 hop) hit 3 N/A Remote Cache (1 hop) hit 2 HitM Remote Cache (2 hops) hit 10 HitM Remote Cache (2 hops) hit 6 N/A Uncached hit 4 N/A $ - "perf lock" improvements: - Add -E/--entries option to limit the number of entries to display, say to ask for just the top 5 contended locks. - Add -q/--quiet option to suppress header and debug messages. - Add a 'perf test' kernel lock contention entry to test 'perf lock'. - "perf lock contention" improvements: - Ask BPF's bpf_get_stackid() to skip some callchain entries. The ones closer to the tooling are bpf related and not that interesting, the ones calling the locking function are the ones we're interested in, example of a full, unskipped callstack: - Allow changing the callstack depth and number of entries to skip. 1 10.74 us 10.74 us 10.74 us spinlock __bpf_trace_contention_begin+0xb 0xffffffffc03b5c47 bpf_prog_bf07ae9e2cbd02c5_contention_begin+0x117 0xffffffffc03b5c47 bpf_prog_bf07ae9e2cbd02c5_contention_begin+0x117 0xffffffffbb8b8e75 bpf_trace_run2+0x35 0xffffffffbb7eab9b __bpf_trace_contention_begin+0xb 0xffffffffbb7ebe75 queued_spin_lock_slowpath+0x1f5 0xffffffffbc1c26ff _raw_spin_lock+0x1f 0xffffffffbb841015 tick_do_update_jiffies64+0x25 0xffffffffbb8409ee tick_irq_enter+0x9e - Show full callstack in verbose mode (-v option), sometimes this is desirable instead of showing just one callstack entry. - Allow multiple time ranges in 'perf record --delay' to help in reducing the amount of data collected from hardware tracing (Intel PT, etc) when there is a rough idea of periods of time where events of interest take time. - Add Intel PT to record only decoder debug messages when error happens. - Improve layout of Intel PT man page. - Add new branch types: alignment, data and inst faults and arch specific ones, such as fiq, debug_halt, debug_exit, debug_inst and debug_data on arm64. Kernel enablement went thru the tip tree. - Fix 'perf probe' error log check in 'perf test' when no debuginfo is available. - Fix 'perf stat' aggregation mode logic, it should be looking at the CPU not at the core number. - Fix flags parsing in 'perf trace' filters. - Introduce compact encoding of CPU range encoding on perf.data, to avoid having a bitmap with all the CPUs. - Improvements to the 'perf stat' metrics, including adding "core_wide", and computing "smt" from the CPU topology. - Add support to the new PERF_FORMAT_LOST perf_event_attr.read_format, that allows tooling to ask for the precise number of lost samples for a given event. - Add 'addr' sort key to see just the address of sampled instructions: $ perf record -o- true | perf report -i- -s addr [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.000 MB - ] # Samples: 12 of event 'cycles:u' # Event count (approx.): 252512 # # Overhead Address # ........ .................. 
42.96% 0x7f96f08443d7 29.55% 0x7f96f0859b50 14.76% 0x7f96f0852e02 8.30% 0x7f96f0855028 4.43% 0xffffffff8de01087 perf annotate: Toggle full address <-> offset display - Add 'f' hotkey to the 'perf annotate' TUI interface when in 'disassembler output' mode ('o' hotkey) to toggle showing full virtual address or just the offset. - Cache DSO build-ids when synthesizing PERF_RECORD_MMAP records for pre-existing threads, at the start of a 'perf record' session, speeding up that record startup phase. - Add a command line option to specify build ids in 'perf inject'. - Update JSON event files for the Intel alderlake, broadwell, broadwellde, broadwellx, cascadelakex, haswell, haswellx, icelake, icelakex, ivybridge, ivytown, jaketown, sandybridge, sapphirerapids, skylake, skylakex, and tigerlake processors. - Update vendor JSON event files for the ARM Neoverse V1 and E1 platforms. - Add a 'perf test' entry for 'perf mem' where a struct has false sharing and this gets detected in the 'perf mem' output, tested with Intel, AMD and ARM64 systems. - Add a 'perf test' entry to test the resolution of java symbols, where an output like this is expected: 8.18% jshell jitted-50116-29.so [.] Interpreter 0.75% Thread-1 jitted-83602-1670.so [.] jdk.internal.jimage.BasicImageReader.getString(int) - Add tests for the ARM64 CoreSight hardware tracing feature, with specially crafted pureloop, memcpy, thread loop and unroll tread that then gets traced and the output compared with expected output. Documentation explaining it is also included. - Add per thread Intel PT 'perf test' entry to check that PERF_RECORD_TEXT_POKE events are recorded per CPU, resulting in a mixture of per thread and per CPU events and mmaps, verify that this gets all recorded correctly. - Introduce pthread mutex wrappers to allow for building with clang's -Wthread-safety, i.e. using the "guarded_by" "pt_guarded_by" "lockable", "exclusive_lock_function", "exclusive_trylock_function", "exclusive_locks_required", and "no_thread_safety_analysis" compiler function attributes. - Fix empty version number when building outside of a git repo. - Improve feature detection display when multiple versions of a feature are present, such as for binutils libbfd, that has a mix of possible ways to detect according to the Linux distribution. Previously in some cases we had: Auto-detecting system features <SNIP> ... libbfd: [ on ] ... libbfd-liberty: [ on ] ... libbfd-liberty-z: [ on ] <SNIP> Now for this case we show just the main feature: Auto-detecting system features <SNIP> ... libbfd: [ on ] <SNIP> - Remove some unused structs, variables, macros, function prototypes and includes from various places. 
* tag 'perf-tools-for-v6.1-1-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (169 commits) perf script: Add missing fields in usage hint perf mem: Print "LFB/MAB" for PERF_MEM_LVLNUM_LFB perf mem/c2c: Avoid printing empty lines for unsupported events perf mem/c2c: Add load store event mappings for AMD perf mem/c2c: Set PERF_SAMPLE_WEIGHT for LOAD_STORE events perf mem: Add support for printing PERF_MEM_LVLNUM_{CXL|IO} perf amd ibs: Sync arch/x86/include/asm/amd-ibs.h header with the kernel tools headers UAPI: Sync include/uapi/linux/perf_event.h header with the kernel perf stat: Fix cpu check to use id.cpu.cpu in aggr_printout() perf test coresight: Add relevant documentation about ARM64 CoreSight testing perf test: Add git ignore for tmp and output files of ARM CoreSight tests perf test coresight: Add unroll thread test shell script perf test coresight: Add unroll thread test tool perf test coresight: Add thread loop test shell scripts perf test coresight: Add thread loop test tool perf test coresight: Add memcpy thread test shell script perf test coresight: Add memcpy thread test tool perf test: Add git ignore for perf data generated by the ARM CoreSight tests perf test: Add arm64 asm pureloop test shell script perf test: Add asm pureloop test tool ...
2022-10-10  libbpf: Introduce bpf_link_get_fd_by_id_opts()  (Roberto Sassu, 3 files, +14/-1)
Introduce bpf_link_get_fd_by_id_opts(), for symmetry with bpf_map_get_fd_by_id_opts(), to let the caller pass the newly introduced data structure bpf_get_fd_by_id_opts. Keep the existing bpf_link_get_fd_by_id(), and call bpf_link_get_fd_by_id_opts() with NULL as opts argument, to prevent setting open_flags. Currently, the kernel does not support non-zero open_flags for bpf_link_get_fd_by_id_opts(), and a call with them will result in an error returned by the bpf() system call. The caller should always pass zero open_flags. Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221006110736.84253-6-roberto.sassu@huaweicloud.com
2022-10-10  libbpf: Introduce bpf_btf_get_fd_by_id_opts()  (Roberto Sassu, 3 files, +14/-1)
Introduce bpf_btf_get_fd_by_id_opts(), for symmetry with bpf_map_get_fd_by_id_opts(), to let the caller pass the newly introduced data structure bpf_get_fd_by_id_opts. Keep the existing bpf_btf_get_fd_by_id(), and call bpf_btf_get_fd_by_id_opts() with NULL as opts argument, to prevent setting open_flags. Currently, the kernel does not support non-zero open_flags for bpf_btf_get_fd_by_id_opts(), and a call with them will result in an error returned by the bpf() system call. The caller should always pass zero open_flags. Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221006110736.84253-5-roberto.sassu@huaweicloud.com
2022-10-10  libbpf: Introduce bpf_prog_get_fd_by_id_opts()  (Roberto Sassu, 3 files, +14/-1)
Introduce bpf_prog_get_fd_by_id_opts(), for symmetry with bpf_map_get_fd_by_id_opts(), to let the caller pass the newly introduced data structure bpf_get_fd_by_id_opts. Keep the existing bpf_prog_get_fd_by_id(), and call bpf_prog_get_fd_by_id_opts() with NULL as opts argument, to prevent setting open_flags. Currently, the kernel does not support non-zero open_flags for bpf_prog_get_fd_by_id_opts(), and a call with them will result in an error returned by the bpf() system call. The caller should always pass zero open_flags. Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221006110736.84253-4-roberto.sassu@huaweicloud.com
2022-10-10  libbpf: Introduce bpf_get_fd_by_id_opts and bpf_map_get_fd_by_id_opts()  (Roberto Sassu, 3 files, +22/-1)
Define a new data structure called bpf_get_fd_by_id_opts, with the member open_flags, to be used by callers of the _opts variants of bpf_*_get_fd_by_id() to specify the permissions needed for the file descriptor to be obtained. Also, introduce bpf_map_get_fd_by_id_opts(), to let the caller pass a bpf_get_fd_by_id_opts structure. Finally, keep the existing bpf_map_get_fd_by_id(), and call bpf_map_get_fd_by_id_opts() with NULL as opts argument, to request read-write permissions (current behavior). Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221006110736.84253-3-roberto.sassu@huaweicloud.com
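A usage sketch for the new _opts variants, assuming the bpf_get_fd_by_id_opts layout described above; the same pattern applies to the prog/btf/link variants introduced in the commits above, and map_id would come from e.g. bpf_map_get_next_id():

  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  static int map_fd_readonly(__u32 map_id)
  {
          LIBBPF_OPTS(bpf_get_fd_by_id_opts, opts,
                      .open_flags = BPF_F_RDONLY); /* ask for a read-only fd */

          return bpf_map_get_fd_by_id_opts(map_id, &opts);
  }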
2022-10-10  libbpf: Fix LIBBPF_1.0.0 declaration in libbpf.map  (Roberto Sassu, 1 file, +1/-1)
Add the missing LIBBPF_0.8.0 at the end of the LIBBPF_1.0.0 declaration, similarly to other version declarations. Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221006110736.84253-2-roberto.sassu@huaweicloud.com
2022-10-10  Merge tag 'bitmap-6.1-rc1' of https://github.com/norov/linux  (Linus Torvalds, 1 file, +69/-80)
Pull bitmap updates from Yury Norov: - Fix unsigned comparison to -1 in CPUMAP_FILE_MAX_BYTES (Phil Auld) - cleanup nr_cpu_ids vs nr_cpumask_bits mess (me) This series cleans that mess and adds new config FORCE_NR_CPUS that allows to optimize cpumask subsystem if the number of CPUs is known at compile-time. - optimize find_bit() functions (me) Reworks find_bit() functions based on new FIND_{FIRST,NEXT}_BIT() macros. - add find_nth_bit() (me) Adds find_nth_bit(), which is ~70 times faster than bitcounting with for_each() loop: for_each_set_bit(bit, mask, size) if (n-- == 0) return bit; Also adds bitmap_weight_and() to let people replace this pattern: tmp = bitmap_alloc(nbits); bitmap_and(tmp, map1, map2, nbits); weight = bitmap_weight(tmp, nbits); bitmap_free(tmp); with a single bitmap_weight_and() call. - repair cpumask_check() (me) After switching cpumask to use nr_cpu_ids, cpumask_check() started generating many false-positive warnings. This series fixes it. - Add for_each_cpu_andnot() and for_each_cpu_andnot() (Valentin Schneider) Extends the API with one more function and applies it in sched/core. * tag 'bitmap-6.1-rc1' of https://github.com/norov/linux: (28 commits) sched/core: Merge cpumask_andnot()+for_each_cpu() into for_each_cpu_andnot() lib/test_cpumask: Add for_each_cpu_and(not) tests cpumask: Introduce for_each_cpu_andnot() lib/find_bit: Introduce find_next_andnot_bit() cpumask: fix checking valid cpu range lib/bitmap: add tests for for_each() loops lib/find: optimize for_each() macros lib/bitmap: introduce for_each_set_bit_wrap() macro lib/find_bit: add find_next{,_and}_bit_wrap cpumask: switch for_each_cpu{,_not} to use for_each_bit() net: fix cpu_max_bits_warn() usage in netif_attrmask_next{,_and} cpumask: add cpumask_nth_{,and,andnot} lib/bitmap: remove bitmap_ord_to_pos lib/bitmap: add tests for find_nth_bit() lib: add find_nth{,_and,_andnot}_bit() lib/bitmap: add bitmap_weight_and() lib/bitmap: don't call __bitmap_weight() in kernel code tools: sync find_bit() implementation lib/find_bit: optimize find_next_bit() functions lib/find_bit: create find_first_zero_bit_le() ...
2022-10-06  perf tools: Remove special handling of system-wide evsel  (Namhyung Kim, 1 file, +0/-3)
For system-wide evsels, the thread map should be dummy - i.e. it has a single entry of -1. But the code guarantees such a thread map, so no need to handle it specially. No functional change intended. Reviewed-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20221003204647.1481128-6-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-10-06  libperf: Propagate maps only if necessary  (Namhyung Kim, 2 files, +5/-7)
The current code propagates an evsel's cpu map settings to the evlist when it's added to an evlist. But the evlist->all_cpus and each evsel's cpus will be updated in perf_evlist__set_maps() later. There is no need to do it before the evlist's cpus are actually set. In fact, it discards this intermediate all_cpus map at the beginning of perf_evlist__set_maps(). Let's not do this. It's only needed when an evsel is added after the evlist cpu/thread maps are set. Reviewed-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20221003204647.1481128-3-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-10-06  libperf: Populate system-wide evsel maps  (Namhyung Kim, 1 file, +9/-6)
Set proper cpu and thread maps for system-wide evsels, regardless of the user-requested cpu, in __perf_evlist__propagate_maps(). Those evsels need to be active on all cpus always. Do it in libperf so that we can guarantee it has proper maps. Reviewed-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20221003204647.1481128-2-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-10-05  bpftool: Print newline before '}' for struct with padding only fields  (Eduard Zingerman, 1 file, +5/-1)
btf_dump_emit_struct_def attempts to print empty structures on a single line, e.g. `struct empty {}`. However, it has to account for the case when there are no regular fields but some padding fields in the struct. In such a case `vlen` would be zero, but size would be non-zero. E.g. here is struct bpf_timer from vmlinux.h before this patch: struct bpf_timer { long: 64; long: 64;}; And after this patch: struct bpf_dynptr { long: 64; long: 64; }; Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221001104425.415768-1-eddyz87@gmail.com
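A hedged sketch of the distinction being drawn (the real btf_dump logic differs in detail): only a struct with no members and zero size is "truly empty" and may print on one line, while a padding-only struct has vlen == 0 but non-zero size, so its closing '}' belongs on a new line:

  #include <stdbool.h>
  #include <bpf/btf.h>

  static bool struct_prints_on_one_line(const struct btf_type *t)
  {
          /* padding-only structs fail the size check and get the multi-line form */
          return btf_vlen(t) == 0 && t->size == 0;
  }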
2022-10-04  Merge tag 'net-next-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next  (Linus Torvalds, 19 files, +817/-222)
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next Pull networking updates from Jakub Kicinski: "Core: - Introduce and use a single page frag cache for allocating small skb heads, clawing back the 10-20% performance regression in UDP flood test from previous fixes. - Run packets which already went thru HW coalescing thru SW GRO. This significantly improves TCP segment coalescing and simplifies deployments as different workloads benefit from HW or SW GRO. - Shrink the size of the base zero-copy send structure. - Move TCP init under a new slow / sleepable version of DO_ONCE(). BPF: - Add BPF-specific, any-context-safe memory allocator. - Add helpers/kfuncs for PKCS#7 signature verification from BPF programs. - Define a new map type and related helpers for user space -> kernel communication over a ring buffer (BPF_MAP_TYPE_USER_RINGBUF). - Allow targeting BPF iterators to loop through resources of one task/thread. - Add ability to call selected destructive functions. Expose crash_kexec() to allow BPF to trigger a kernel dump. Use CAP_SYS_BOOT check on the loading process to judge permissions. - Enable BPF to collect custom hierarchical cgroup stats efficiently by integrating with the rstat framework. - Support struct arguments for trampoline based programs. Only structs with size <= 16B and x86 are supported. - Invoke cgroup/connect{4,6} programs for unprivileged ICMP ping sockets (instead of just TCP and UDP sockets). - Add a helper for accessing CLOCK_TAI for time sensitive network related programs. - Support accessing network tunnel metadata's flags. - Make TCP SYN ACK RTO tunable by BPF programs with TCP Fast Open. - Add support for writing to Netfilter's nf_conn:mark. Protocols: - WiFi: more Extremely High Throughput (EHT) and Multi-Link Operation (MLO) work (802.11be, WiFi 7). - vsock: improve support for SO_RCVLOWAT. - SMC: support SO_REUSEPORT. - Netlink: define and document how to use netlink in a "modern" way. Support reporting missing attributes via extended ACK. - IPSec: support collect metadata mode for xfrm interfaces. - TCPv6: send consistent autoflowlabel in SYN_RECV state and RST packets. - TCP: introduce optional per-netns connection hash table to allow better isolation between namespaces (opt-in, at the cost of memory and cache pressure). - MPTCP: support TCP_FASTOPEN_CONNECT. - Add NEXT-C-SID support in Segment Routing (SRv6) End behavior. - Adjust IP_UNICAST_IF sockopt behavior for connected UDP sockets. - Open vSwitch: - Allow specifying ifindex of new interfaces. - Allow conntrack and metering in non-initial user namespace. - TLS: support the Korean ARIA-GCM crypto algorithm. - Remove DECnet support. Driver API: - Allow selecting the conduit interface used by each port in DSA switches, at runtime. - Ethernet Power Sourcing Equipment and Power Device support. - Add tc-taprio support for queueMaxSDU parameter, i.e. setting per traffic class max frame size for time-based packet schedules. - Support PHY rate matching - adapting between differing host-side and link-side speeds. - Introduce QUSGMII PHY mode and 1000BASE-KX interface mode. - Validate OF (device tree) nodes for DSA shared ports; make phylink-related properties mandatory on DSA and CPU ports. Enforcing more uniformity should allow transitioning to phylink. - Require that flash component name used during update matches one of the components for which version is reported by info_get(). - Remove "weight" argument from driver-facing NAPI API as much as possible. 
It's one of those magic knobs which seemed like a good idea at the time but is too indirect to use in practice. - Support offload of TLS connections with 256 bit keys. New hardware / drivers: - Ethernet: - Microchip KSZ9896 6-port Gigabit Ethernet Switch - Renesas Ethernet AVB (EtherAVB-IF) Gen4 SoCs - Analog Devices ADIN1110 and ADIN2111 industrial single pair Ethernet (10BASE-T1L) MAC+PHY. - Rockchip RV1126 Gigabit Ethernet (a version of stmmac IP). - Ethernet SFPs / modules: - RollBall / Hilink / Turris 10G copper SFPs - HALNy GPON module - WiFi: - CYW43439 SDIO chipset (brcmfmac) - CYW89459 PCIe chipset (brcmfmac) - BCM4378 on Apple platforms (brcmfmac) Drivers: - CAN: - gs_usb: HW timestamp support - Ethernet PHYs: - lan8814: cable diagnostics - Ethernet NICs: - Intel (100G): - implement control of FCS/CRC stripping - port splitting via devlink - L2TPv3 filtering offload - nVidia/Mellanox: - tunnel offload for sub-functions - MACSec offload, w/ Extended packet number and replay window offload - significantly restructure, and optimize the AF_XDP support, align the behavior with other vendors - Huawei: - configuring DSCP map for traffic class selection - querying standard FEC statistics - querying SerDes lane number via ethtool - Marvell/Cavium: - egress priority flow control - MACSec offload - AMD/SolarFlare: - PTP over IPv6 and raw Ethernet - small / embedded: - ax88772: convert to phylink (to support SFP cages) - altera: tse: convert to phylink - ftgmac100: support fixed link - enetc: standard Ethtool counters - macb: ZynqMP SGMII dynamic configuration support - tsnep: support multi-queue and use page pool - lan743x: Rx IP & TCP checksum offload - igc: add xdp frags support to ndo_xdp_xmit - Ethernet high-speed switches: - Marvell (prestera): - support SPAN port features (traffic mirroring) - nexthop object offloading - Microchip (sparx5): - multicast forwarding offload - QoS queuing offload (tc-mqprio, tc-tbf, tc-ets) - Ethernet embedded switches: - Marvell (mv88e6xxx): - support RGMII cmode - NXP (felix): - standardized ethtool counters - Microchip (lan966x): - QoS queuing offload (tc-mqprio, tc-tbf, tc-cbs, tc-ets) - traffic policing and mirroring - link aggregation / bonding offload - QUSGMII PHY mode support - Qualcomm 802.11ax WiFi (ath11k): - cold boot calibration support on WCN6750 - support to connect to a non-transmit MBSSID AP profile - enable remain-on-channel support on WCN6750 - Wake-on-WLAN support for WCN6750 - support to provide transmit power from firmware via nl80211 - support to get power save duration for each client - spectral scan support for 160 MHz - MediaTek WiFi (mt76): - WiFi-to-Ethernet bridging offload for MT7986 chips - RealTek WiFi (rtw89): - P2P support" * tag 'net-next-6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1864 commits) eth: pse: add missing static inlines once: rename _SLOW to _SLEEPABLE net: pse-pd: add regulator based PSE driver dt-bindings: net: pse-dt: add bindings for regulator based PoDL PSE controller ethtool: add interface to interact with Ethernet Power Equipment net: mdiobus: search for PSE nodes by parsing PHY nodes. 
net: mdiobus: fwnode_mdiobus_register_phy() rework error handling net: add framework to support Ethernet PSE and PDs devices dt-bindings: net: phy: add PoDL PSE property net: marvell: prestera: Propagate nh state from hw to kernel net: marvell: prestera: Add neighbour cache accounting net: marvell: prestera: add stub handler neighbour events net: marvell: prestera: Add heplers to interact with fib_notifier_info net: marvell: prestera: Add length macros for prestera_ip_addr net: marvell: prestera: add delayed wq and flush wq on deinit net: marvell: prestera: Add strict cleanup of fib arbiter net: marvell: prestera: Add cleanup of allocated fib_nodes net: marvell: prestera: Add router nexthops ABI eth: octeon: fix build after netif_napi_add() changes net/mlx5: E-Switch, Return EBUSY if can't get mode lock ...
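As a rough illustration of the user ring buffer item above, the sketch below shows how user space might feed samples into a BPF_MAP_TYPE_USER_RINGBUF map through libbpf's user_ring_buffer API (a BPF program drains them with the bpf_user_ringbuf_drain() helper). The map handle, the my_msg layout and the send_one() helper are invented for this example; they are not part of the merge.

  #include <string.h>
  #include <bpf/libbpf.h>

  /* Hypothetical sample layout agreed with the BPF program. */
  struct my_msg {
          unsigned long long id;
          char payload[64];
  };

  int send_one(struct bpf_map *user_rb_map)
  {
          struct user_ring_buffer *rb;
          struct my_msg *msg;

          /* Attach a producer to the BPF_MAP_TYPE_USER_RINGBUF map. */
          rb = user_ring_buffer__new(bpf_map__fd(user_rb_map), NULL);
          if (!rb)
                  return -1;

          /* Reserve space in the ring, fill it, then hand it to the kernel. */
          msg = user_ring_buffer__reserve(rb, sizeof(*msg));
          if (!msg) {
                  user_ring_buffer__free(rb);
                  return -1;
          }
          memset(msg, 0, sizeof(*msg));
          msg->id = 1;
          strcpy(msg->payload, "hello");
          user_ring_buffer__submit(rb, msg);

          user_ring_buffer__free(rb);
          return 0;
  }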
2022-10-04perf subcmd: Set environment variable "PREFIX"Leo Yan1-0/+3
Set the environment variable "PREFIX"; it will be used by invoked shell scripts, e.g. a shell script can use it to find library paths. Signed-off-by: Leo Yan <leo.yan@linaro.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20220925025835.70364-2-leo.yan@linaro.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
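A minimal sketch of the idea, not the actual tools/lib/subcmd change: the parent exports PREFIX before spawning a script so the child can derive library paths from it. The run_script() wrapper, the ./check.sh path and the /usr/local prefix are assumptions made up for the illustration.

  #include <stdlib.h>
  #include <unistd.h>

  /* Export PREFIX, then run a shell script that may read it,
   * e.g. the script could do: LIB_DIR="$PREFIX/lib". */
  int run_script(const char *path, const char *prefix)
  {
          if (setenv("PREFIX", prefix, 1))   /* overwrite any stale value */
                  return -1;
          return execl("/bin/sh", "sh", path, (char *)NULL);
  }

  int main(void)
  {
          return run_script("./check.sh", "/usr/local");
  }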
2022-10-04perf tools: Add debug messages and comments for testingAdrian Hunter1-0/+2
Add debug messages to enable scripts to track aspects of 'perf record' behaviour. The messages will be consumed after 'perf record' has run, with the exception of "perf record has started", which is consequently flushed. Add comments so that developers know which messages are also being used by test scripts. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220912083412.7058-11-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-10-04perf cpumap: Add range data encodingIan Rogers1-0/+14
Cpumaps often encode a range covering all CPUs; add a compact encoding that doesn't require a bit mask or a list of all CPUs. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Bayduraev <alexey.v.bayduraev@linux.intel.com> Cc: Athira Jajeev <atrajeev@linux.vnet.ibm.com> Cc: Colin Ian King <colin.king@intel.com> Cc: Dave Marchevsky <davemarchevsky@fb.com> Cc: German Gomez <german.gomez@arm.com> Cc: Gustavo A. R. Silva <gustavoars@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Kees Kook <keescook@chromium.org> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Riccardo Mancini <rickyman7@gmail.com> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Link: https://lore.kernel.org/r/20220614143353.1559597-7-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
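The sketch below illustrates the general idea of a range encoding; the struct, field and constant names are hypothetical and do not reflect the actual perf cpumap ABI.

  #include <stdint.h>
  #include <stdio.h>

  /* A [start, end] pair describes a contiguous CPU range in 4 bytes,
   * instead of a bit mask sized for every possible CPU. */
  struct cpu_range_enc {
          uint16_t start_cpu;   /* first CPU in the range */
          uint16_t end_cpu;     /* last CPU in the range, inclusive */
  };

  static int range_contains(const struct cpu_range_enc *r, uint16_t cpu)
  {
          return cpu >= r->start_cpu && cpu <= r->end_cpu;
  }

  int main(void)
  {
          /* "All CPUs" on a 128-CPU machine: 4 bytes vs a 16-byte mask. */
          struct cpu_range_enc all = { .start_cpu = 0, .end_cpu = 127 };

          printf("cpu 5 in range: %d\n", range_contains(&all, 5));
          return 0;
  }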
2022-10-04perf events: Prefer union over variable length arrayIan Rogers1-1/+10
Casts can introduce alignment issues; prefer a union for perf_record_event_update. Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Bayduraev <alexey.v.bayduraev@linux.intel.com> Cc: Athira Jajeev <atrajeev@linux.vnet.ibm.com> Cc: Colin Ian King <colin.king@intel.com> Cc: Dave Marchevsky <davemarchevsky@fb.com> Cc: German Gomez <german.gomez@arm.com> Cc: Gustavo A. R. Silva <gustavoars@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Clark <james.clark@arm.com> Cc: Kees Kook <keescook@chromium.org> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Riccardo Mancini <rickyman7@gmail.com> Cc: Song Liu <songliubraving@fb.com> Cc: Stephane Eranian <eranian@google.com> Link: https://lore.kernel.org/r/20220614143353.1559597-6-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
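The following generic sketch contrasts the two patterns; it is not the real perf_record_event_update definition, only an illustration of why a union keeps the payload's alignment visible to the compiler while casting a trailing byte array does not.

  #include <stdint.h>
  #include <string.h>

  struct payload_cpus {
          uint64_t nr;
          uint64_t cpu[1];
  };

  /* Cast-based variant: data[] is only byte-aligned on paper, so
   * (struct payload_cpus *)ev->data relies on the caller getting the
   * alignment right and may be undefined behaviour on strict targets. */
  struct event_update_cast {
          uint64_t type;
          char data[];
  };

  /* Union-based variant: the union member forces 8-byte alignment and
   * the payload is reachable without any cast. */
  struct event_update_union {
          uint64_t type;
          union {
                  struct payload_cpus cpus;
                  char bytes[64];
          } u;
  };

  static uint64_t first_cpu(struct event_update_union *ev)
  {
          return ev->u.cpus.cpu[0];   /* no cast needed */
  }

  int main(void)
  {
          struct event_update_union ev;

          memset(&ev, 0, sizeof(ev));
          ev.u.cpus.nr = 1;
          ev.u.cpus.cpu[0] = 3;
          return (int)first_cpu(&ev);  /* 3 */
  }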
2022-10-04perf record: Fix way of handling non-perf-event pollfdsAdrian Hunter1-2/+3
perf record __cmd_record() does not poll evlist pollfds. Instead it polls thread_data[0].pollfd. That happens whether or not threads are being used. perf record duplicates evlist mmap pollfds as needed for separate threads. The non-perf-event represented by evlist->ctl_fd has to be handled separately, which is done explicitly, duplicating it into the thread_data[0] pollfds. That approach neglects any other non-perf-event file descriptors. Currently there is also done_fd which needs the same handling. Add a new generalized approach. Add fdarray_flag__non_perf_event to identify the file descriptors that need the special handling. For those cases, also keep a mapping between the evlist pollfd index and the thread pollfd index, so that the evlist revents can be updated. Although this patch adds the new handling, it does not take it into use. There is no functional change, but it is the precursor to a fix, so is marked as a fix. Fixes: 415ccb58f68a6beb ("perf record: Introduce thread specific data array") Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Acked-by: Ian Rogers <irogers@google.com> Cc: Alexey Bayduraev <alexey.v.bayduraev@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lore.kernel.org/r/20220824072814.16422-2-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
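Below is a hedged sketch of the index-mapping idea described above; the fd_mapping structure and propagate_revents() helper are invented for illustration and are not the perf record implementation.

  #include <poll.h>
  #include <stddef.h>

  #define MAX_FDS 32

  struct fd_mapping {
          int evlist_idx[MAX_FDS];  /* position of the fd in the evlist pollfd array */
          int thread_idx[MAX_FDS];  /* position of its duplicate in the per-thread array */
          size_t nr;                /* number of mapped (non-perf-event) fds */
  };

  /* After poll() on the per-thread array returns, copy revents back to the
   * evlist-side entries so code that only looks at the evlist still works. */
  void propagate_revents(struct pollfd *evlist_fds,
                         const struct pollfd *thread_fds,
                         const struct fd_mapping *map)
  {
          for (size_t i = 0; i < map->nr; i++)
                  evlist_fds[map->evlist_idx[i]].revents =
                          thread_fds[map->thread_idx[i]].revents;
  }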