path: root/tools
2023-08-09  nexthop: Fix infinite nexthop dump when using maximum nexthop ID  (Ido Schimmel, 1 file, -0/+5)

A netlink dump callback can return a positive number to signal that more
information needs to be dumped or zero to signal that the dump is complete. In
the second case, the core netlink code will append the NLMSG_DONE message to
the skb in order to indicate to user space that the dump is complete.

The nexthop dump callback always returns a positive number if nexthops were
filled in the provided skb, even if the dump is complete. This means that a
dump will span at least two recvmsg() calls as long as nexthops are present.
In the last recvmsg() call the dump callback will not fill in any nexthops
because the previous call indicated that the dump should restart from the last
dumped nexthop ID plus one.

 # ip nexthop add id 1 blackhole
 # strace -e sendto,recvmsg -s 5 ip nexthop
 sendto(3, [[{nlmsg_len=24, nlmsg_type=RTM_GETNEXTHOP, nlmsg_flags=NLM_F_REQUEST|NLM_F_DUMP, nlmsg_seq=1691394315, nlmsg_pid=0}, {nh_family=AF_UNSPEC, nh_scope=RT_SCOPE_UNIVERSE, nh_protocol=RTPROT_UNSPEC, nh_flags=0}], {nlmsg_len=0, nlmsg_type=0 /* NLMSG_??? */, nlmsg_flags=0, nlmsg_seq=0, nlmsg_pid=0}], 152, 0, NULL, 0) = 152
 recvmsg(3, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base=NULL, iov_len=0}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_TRUNC}, MSG_PEEK|MSG_TRUNC) = 36
 recvmsg(3, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base=[{nlmsg_len=36, nlmsg_type=RTM_NEWNEXTHOP, nlmsg_flags=NLM_F_MULTI, nlmsg_seq=1691394315, nlmsg_pid=343}, {nh_family=AF_INET, nh_scope=RT_SCOPE_UNIVERSE, nh_protocol=RTPROT_UNSPEC, nh_flags=0}, [[{nla_len=8, nla_type=NHA_ID}, 1], {nla_len=4, nla_type=NHA_BLACKHOLE}]], iov_len=32768}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 36
 id 1 blackhole
 recvmsg(3, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base=NULL, iov_len=0}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_TRUNC}, MSG_PEEK|MSG_TRUNC) = 20
 recvmsg(3, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base=[{nlmsg_len=20, nlmsg_type=NLMSG_DONE, nlmsg_flags=NLM_F_MULTI, nlmsg_seq=1691394315, nlmsg_pid=343}, 0], iov_len=32768}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 20
 +++ exited with 0 +++

This behavior is both inefficient and buggy. If the last nexthop to be dumped
had the maximum ID of 0xffffffff, then the dump will restart from 0
(0xffffffff + 1) and never end:

 # ip nexthop add id $((2**32-1)) blackhole
 # ip nexthop
 id 4294967295 blackhole
 id 4294967295 blackhole
 [...]

Fix by adjusting the dump callback to return zero when the dump is complete.
After the fix only one recvmsg() call is made and the NLMSG_DONE message is
appended to the RTM_NEWNEXTHOP response:

 # ip nexthop add id $((2**32-1)) blackhole
 # strace -e sendto,recvmsg -s 5 ip nexthop
 sendto(3, [[{nlmsg_len=24, nlmsg_type=RTM_GETNEXTHOP, nlmsg_flags=NLM_F_REQUEST|NLM_F_DUMP, nlmsg_seq=1691394080, nlmsg_pid=0}, {nh_family=AF_UNSPEC, nh_scope=RT_SCOPE_UNIVERSE, nh_protocol=RTPROT_UNSPEC, nh_flags=0}], {nlmsg_len=0, nlmsg_type=0 /* NLMSG_??? */, nlmsg_flags=0, nlmsg_seq=0, nlmsg_pid=0}], 152, 0, NULL, 0) = 152
 recvmsg(3, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base=NULL, iov_len=0}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_TRUNC}, MSG_PEEK|MSG_TRUNC) = 56
 recvmsg(3, {msg_name={sa_family=AF_NETLINK, nl_pid=0, nl_groups=00000000}, msg_namelen=12, msg_iov=[{iov_base=[[{nlmsg_len=36, nlmsg_type=RTM_NEWNEXTHOP, nlmsg_flags=NLM_F_MULTI, nlmsg_seq=1691394080, nlmsg_pid=342}, {nh_family=AF_INET, nh_scope=RT_SCOPE_UNIVERSE, nh_protocol=RTPROT_UNSPEC, nh_flags=0}, [[{nla_len=8, nla_type=NHA_ID}, 4294967295], {nla_len=4, nla_type=NHA_BLACKHOLE}]], [{nlmsg_len=20, nlmsg_type=NLMSG_DONE, nlmsg_flags=NLM_F_MULTI, nlmsg_seq=1691394080, nlmsg_pid=342}, 0]], iov_len=32768}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 56
 id 4294967295 blackhole
 +++ exited with 0 +++

Note that if the NLMSG_DONE message cannot be appended because of size
limitations, then another recvmsg() will be needed, but the core netlink code
will not invoke the dump callback and simply reply with a NLMSG_DONE message
since it knows that the callback previously returned zero.

Add a test that fails before the fix:

 # ./fib_nexthops.sh -t basic
 [...]
 TEST: Maximum nexthop ID dump                                       [FAIL]
 [...]

And passes after it:

 # ./fib_nexthops.sh -t basic
 [...]
 TEST: Maximum nexthop ID dump                                       [ OK ]
 [...]

Fixes: ab84be7e54fc ("net: Initial nexthop code")
Reported-by: Petr Machata <[email protected]>
Closes: https://lore.kernel.org/netdev/[email protected]/
Signed-off-by: Ido Schimmel <[email protected]>
Reviewed-by: Petr Machata <[email protected]>
Reviewed-by: David Ahern <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
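[Editor's note] As background, here is a hypothetical kernel-style sketch of
the dump-callback return convention this fix relies on; fill_nexthops() and
the error handling are illustrative stand-ins, not the actual kernel diff:

 /* Hypothetical netlink dump callback. The contract: a positive return
  * means "call me again", zero means "done, core appends NLMSG_DONE",
  * negative means error. The bug was returning a positive value even
  * when the dump had completed. */
 static int nh_dump(struct sk_buff *skb, struct netlink_callback *cb)
 {
         int err;

         err = fill_nexthops(skb, cb);   /* stand-in: fills skb, bumps cb->args */
         if (err < 0 && err != -EMSGSIZE)
                 return err;             /* real error: abort the dump */
         if (err == -EMSGSIZE)
                 return skb->len;        /* skb full: more to come */
         return 0;                       /* complete: don't restart from ID + 1 */
 }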
2023-08-09  tools: ynl-gen: add missing empty line between policies  (Jakub Kicinski, 1 file, -0/+1)

We're missing an empty line between policies. DPLL will need this.

Tested-by: Vadim Fedorenko <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
2023-08-09  tools: ynl-gen: avoid rendering empty validate field  (Jiri Pirko, 1 file, -3/+4)

When dont-validate flags are filtered out for a do/dump op, the list may be
empty. In that case, avoid rendering the validate field.

Fixes: fa8ba3502ade ("ynl-gen-c.py: render netlink policies static for split ops")
Signed-off-by: Jiri Pirko <[email protected]>
Reviewed-by: Jakub Kicinski <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
2023-08-09  x86/cpu: Fix Gracemont uarch  (Peter Zijlstra, 1 file, -1/+1)

Alderlake N is an E-core-only product using the Gracemont
micro-architecture. It fits the pre-existing naming scheme perfectly fine;
adhere to it.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Rafael J. Wysocki <[email protected]>
Acked-by: Hans de Goede <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-08-09  io_uring: kill io_uring userspace examples  (Pavel Begunkov, 9 files, -1440/+0)

There are tons of io_uring tests and examples in liburing and on the
Internet. If you're looking for a benchmark, io_uring-bench.c is just a
badly outdated version of fio/io_uring. And for a basic, condensed init
template for the likes of selftests, take a peek at io_uring_zerocopy_tx.c.
Kill tools/io_uring/; it's a burden keeping it here.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/7c740701d3b475dcad8c92602a551044f72176b4.1691543666.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
2023-08-09  tools/power/x86/intel-speed-select: v1.17 release  (Srinivas Pandruvada, 1 file, -1/+1)

This version addresses issues with:
 - CPU count display for power domain != 0
 - Support for more than 8 sockets
 - Error when the max CPU count is exceeded in one request
 - Preventing CPU 0 hotplug attempts on kernel version 6.5 or later
 - Changing the mem-frequency display to max-mem-frequency

Signed-off-by: Srinivas Pandruvada <[email protected]>
2023-08-09  tools/power/x86/intel-speed-select: Change mem-frequency display name  (Srinivas Pandruvada, 1 file, -1/+1)

The mem-frequency displayed for each profile is not the actual memory
frequency of the DIMMs, but the maximum the CPU can support. Change the
mem-frequency field to max-mem-frequency.

Signed-off-by: Srinivas Pandruvada <[email protected]>
2023-08-09  KVM: riscv: selftests: Add get-reg-list test  (Haibo Xu, 3 files, -0/+876)

The get-reg-list test is used to check for KVM register regressions during
VM migration, which happen when the destination host kernel is missing
registers that the source host kernel has. The blessed list of registers
was created by running on v6.5-rc3.

Signed-off-by: Haibo Xu <[email protected]>
Reviewed-by: Andrew Jones <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-09  KVM: selftests: Add skip_set facility to get_reg_list test  (Haibo Xu, 2 files, -6/+16)

Add new skips_set members to vcpu_reg_sublist so as to skip the set
operation on some registers.

Suggested-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Reviewed-by: Andrew Jones <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-09  KVM: selftests: Only do get/set tests on present blessed list  (Haibo Xu, 1 file, -11/+18)

Only do the get/set tests on present and blessed registers since we don't
know the capabilities of any new ones.

Suggested-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Reviewed-by: Andrew Jones <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-09  KVM: arm64: selftests: Move finalize_vcpu back to run_test  (Haibo Xu, 3 files, -17/+21)

No functional changes. Just move the finalize_vcpu call back to run_test
and use the weak-function trick to prepare for the operation in riscv.

Suggested-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Reviewed-by: Andrew Jones <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-09  KVM: arm64: selftests: Move reject_set check logic to a function  (Haibo Xu, 2 files, -1/+11)

No functional changes. Just move the reject_set check logic to a function
so we can check for a specific errno. This is a preparation for supporting
reject_set in riscv.

Suggested-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Reviewed-by: Andrew Jones <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-09  KVM: arm64: selftests: Finish generalizing get-reg-list  (Andrew Jones, 1 file, -5/+21)

Add some unfortunate #ifdeffery to ensure the common get-reg-list.c can be
compiled and run with other architectures. The next architecture to support
get-reg-list should now only need to provide $(ARCH_DIR)/get-reg-list.c,
where the arch-specific print_reg() and vcpu_configs[] get defined.

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-09  KVM: arm64: selftests: Split get-reg-list test code  (Andrew Jones, 3 files, -358/+398)

Split the arch-neutral test code out of aarch64/get-reg-list.c into
get-reg-list.c. To do this we invent a new make variable, $(SPLIT_TESTS),
which expects common parts to be in the KVM selftests root and the
counterparts to have the same name, but be in $(ARCH_DIR). There's still
some work to be done to de-aarch64 the common get-reg-list.c, but we leave
that to the next patch to avoid modifying too much code while moving it.

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-08  selftests/bpf: relax expected log messages to allow emitting BPF_ST  (Eduard Zingerman, 2 files, -6/+33)

An update [1] to the LLVM BPF backend seeks to enable generation of the
BPF_ST instruction when CPUv4 is selected. This affects expected log
messages for the following selftests:
 - log_fixup/missing_map
 - spin_lock/lock_id_mapval_preserve
 - spin_lock/lock_id_innermapval_preserve

Expected messages in these tests hard-code instruction numbers for BPF
programs compiled from C. These instruction numbers change when BPF_ST is
allowed, because a single BPF_ST instruction replaces a pair of
BPF_MOV/BPF_STX instructions, e.g.:

   r1 = 42;
   *(u32 *)(r10 - 8) = r1;   --->   *(u32 *)(r10 - 8) = 42;

This commit updates the expected log messages to avoid matching specific
instruction numbers (program positions can still be uniquely identified).

[1] https://reviews.llvm.org/D140804 "[BPF] support for BPF_ST instruction in codegen"

Signed-off-by: Eduard Zingerman <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Martin KaFai Lau <[email protected]>
2023-08-08  selftests/bpf: remove duplicated functions  (Kui-Feng Lee, 1 file, -76/+17)

The file cgroup_tcp_skb.c contains redundant implementations
(create_server_sock_v6(), connect_client_server_v6() and
get_sock_port_v6()) of functions similar to those found in
network_helpers.c. Let's eliminate these duplicated functions.

Changes from v1:
 - Remove get_sock_port_v6() as well.

v1: https://lore.kernel.org/all/[email protected]/

Signed-off-by: Kui-Feng Lee <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Martin KaFai Lau <[email protected]>
2023-08-08  tools/power/x86/intel-speed-select: Prevent CPU 0 offline  (Srinivas Pandruvada, 1 file, -0/+34)

Kernel version 6.5 deprecated CPU 0 hotplug, which causes all requests to
offline CPU 0 to fail. Check the kernel version and ignore CPU 0 hotplug
requests, with a debug message suggesting the cgroup isolation feature for
CPU 0 instead.

Signed-off-by: Srinivas Pandruvada <[email protected]>
2023-08-08  tools/power/x86/intel-speed-select: Error on CPU count exceed in request  (Srinivas Pandruvada, 1 file, -1/+13)

There is a limit on the number of CPUs in one request, set to 256.
Currently the tool silently ignores requests with a count over 256. Give an
error message to indicate this.

Signed-off-by: Srinivas Pandruvada <[email protected]>
2023-08-08  tools/power/x86/intel-speed-select: Support more than 8 sockets  (Frank Ramsay, 1 file, -1/+1)

MAX_PACKAGE_COUNT limits intel-speed-select to systems with 8 sockets or
fewer. On a system with more than 8 sockets, intel-speed-select silently
ignores everything beyond the 8th socket, rendering the tool useless for
those systems. Increase MAX_PACKAGE_COUNT to support systems with up to 32
sockets.

Signed-off-by: Frank Ramsay <[email protected]>
Signed-off-by: Srinivas Pandruvada <[email protected]>
2023-08-08  tools/power/x86/intel-speed-select: Fix CPU count display  (Srinivas Pandruvada, 1 file, -0/+1)

Fix the CPU count display for power domain != 0. In the function, punit_id
is always 0, so the CPU count was never incremented for power domain
id != 0. Update punit_id after the call to update_punit_cpu_info() to what
is actually received from the kernel.

Signed-off-by: Srinivas Pandruvada <[email protected]>
2023-08-08  iocost_monitor: improve it by adding iocg wait_ms  (Chengming Zhou, 1 file, -4/+8)

The iocg can have three throttled metrics: wait, debt, delay. This patch
adds the missing wait_ms to IocgStat to show the latest wait_ms of the
iocg.

While we are here, group the iocg usage percentages "inflt%" and "usage%"
together, and group the iocg throttled metrics "wait", "debt" and "delay"
together.

Effect after changes:

 nvme0n1 RUN per=50.0ms cur_per=177105.713:v1053528.587 busy= +0 vrate=135.00%:270.00% params=ssd_dfl(CQ)
                   active     weight      hweight%  inflt%  usage%   wait   debt  delay
 InterfererGroup0      *    100/ 100   54.28/ 9.09    0.34   24.07   0.00   0.00   0.00
 interfered            *    84/ 1000  45.72/ 90.91    0.48   41.09   0.00   0.00   0.00

Signed-off-by: Chengming Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
2023-08-08  iocost_monitor: print vrate inuse along with base_vrate  (Chengming Zhou, 1 file, -2/+5)

The real vrate iocost uses is not base_vrate, but the atomic vtime_rate. We
need the iocost_monitor tool to display this real vrate that iocost uses,
to check whether the boosted, compensated vrate is normal.

Effect after change:

 nvme0n1 RUN per=50.0ms cur_per=172116.580:v1040587.433 busy= +0 \
         vrate=135.00%:270.00% params=ssd_dfl(CQ)
               ^
               |
               this is the real vrate in use

Signed-off-by: Chengming Zhou <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
2023-08-08  iocost_monitor: fix kernel queue kobj changes  (Chengming Zhou, 1 file, -1/+1)

When I use iocost_monitor on nvme0n1, this error shows up:

 "Could not find ioc for nvme0n1"

There is no kobj in struct request_queue in recent kernels; commit
2bd85221a625 ("block: untangle request_queue refcounting from sysfs") moved
the queue kobj to struct gendisk. Fix it by using mq_kobj, which is at the
same level as the queue kobj.

Signed-off-by: Chengming Zhou <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
2023-08-08  selftests/rseq: Use rseq_unqual_scalar_typeof in macros  (Mathieu Desnoyers, 6 files, -13/+13)

Use rseq_unqual_scalar_typeof() rather than typeof() in macros to remove
the volatile qualifier (if there is one in the input argument), thus
generating better assembly code in those scenarios.

Also add extra brackets around the "p" parameter in RSEQ_READ_ONCE(),
RSEQ_WRITE_ONCE(), and rseq_unqual_scalar_typeof() across architectures to
preserve expectations of operator priority. Here is an example that shows
how operator priority may be an issue with missing parentheses:

 #define m(p)                        \
 do {                                \
         __typeof__(*p) v = 0;       \
 } while (0)

 void fct(unsigned long long *p1)
 {
         m(p1 + 1); /* works */
         m(1 + p1); /* broken */
 }

Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2023-08-08  selftests/rseq: Fix arm64 buggy load-acquire/store-release macros  (Mathieu Desnoyers, 1 file, -28/+30)

The arm64 load-acquire/store-release macros from the Linux kernel rseq
selftests are buggy. Replace them with a working implementation.

Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2023-08-08  selftests/rseq: Implement rseq_unqual_scalar_typeof  (Mathieu Desnoyers, 1 file, -0/+26)

Allow defining variables and performing casts with a typeof variant which
removes the volatile and const qualifiers. This prevents declaring a stack
variable with a volatile qualifier within a macro, which would generate
sub-optimal assembler. This is imported from the "librseq" project.

Signed-off-by: Mathieu Desnoyers <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
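[Editor's note] As a rough illustration of how such a macro can be built in
C11, here is a sketch based on librseq; coverage and naming may differ from
the actual implementation:

 /* _Generic's controlling expression undergoes lvalue conversion, which
  * drops qualifiers, so a volatile int expression selects the plain
  * "int" branch. Non-scalar types fall through to "default" unchanged. */
 #define rseq_unqual_scalar_typeof(x)                            \
         __typeof__(_Generic((x),                                \
                 char: (char)0,                                  \
                 signed char: (signed char)0,                    \
                 unsigned char: (unsigned char)0,                \
                 short: (short)0,                                \
                 unsigned short: (unsigned short)0,              \
                 int: (int)0,                                    \
                 unsigned int: (unsigned int)0,                  \
                 long: (long)0,                                  \
                 unsigned long: (unsigned long)0,                \
                 long long: (long long)0,                        \
                 unsigned long long: (unsigned long long)0,      \
                 default: (x)))

 /* usage: given "volatile int *p", this declares a plain int: */
 /*   rseq_unqual_scalar_typeof(*(p)) tmp;                      */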
2023-08-08  selftests/rseq: Fix CID_ID typo in Makefile  (Mathieu Desnoyers, 1 file, -1/+1)

Ensure that the basic percpu ops tests are effectively built against
mm_cid.

Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2023-08-08  perf stat: Don't display zero tool counts  (Ian Rogers, 1 file, -0/+5)

Andi reported (see link below) a regression when printing the
'duration_time' tool event, where it gets printed as "not counted" for most
of the CPUs. Fix it by skipping zero counts for tool events.

Reported-by: Andi Kleen <[email protected]>
Signed-off-by: Ian Rogers <[email protected]>
Tested-by: Andi Kleen <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Athira Rajeev <[email protected]>
Cc: Claire Jensen <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/all/ZMlrzcVrVi1lTDmn@tassilo/
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-08  KVM: arm64: selftests: Delete core_reg_fixup  (Andrew Jones, 1 file, -73/+10)

core_reg_fixup() complicates sharing the get-reg-list test with other
architectures. Rather than work at keeping it, with plenty of #ifdeffery,
just delete it, as it's unlikely anyone will test a kernel based on
anything older than v5.2 with the get-reg-list test, which is a test meant
to check for regressions in new kernels. (And an older version of the test
can still be used for older kernels if necessary.)

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-08  KVM: arm64: selftests: Rename vcpu_config and add to kvm_util.h  (Andrew Jones, 2 files, -38/+38)

Rename vcpu_config to vcpu_reg_list to be more specific and add it to
kvm_util.h. While it may not get used outside get-reg-list tests, exporting
it doesn't hurt, as long as it has a unique enough name. This is a step in
the direction of sharing most of the get-reg-list test code between
architectures.

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-08  KVM: arm64: selftests: Remove print_reg's dependency on vcpu_config  (Andrew Jones, 1 file, -26/+26)

print_reg() and its helpers only use the vcpu_config pointer for
config_name(). So just pass the config name in instead, which is used as a
prefix in asserts. print_reg() can now be compiled independently of
config_name().

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-08  KVM: arm64: selftests: Drop SVE cap check in print_reg  (Andrew Jones, 1 file, -14/+1)

The check doesn't prove much anyway, as the reg lists could be messed up
too. Just drop the check to simplify making print_reg more independent.

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-08  KVM: arm64: selftests: Replace str_with_index with strdup_printf  (Andrew Jones, 3 files, -18/+22)

The original author of aarch64/get-reg-list.c (me) was wearing tunnel
vision goggles when implementing str_with_index(). There's no reason to
have such a special-case string function. Instead, take inspiration from
glib and implement strdup_printf. The implementation builds on vasprintf(),
which requires _GNU_SOURCE, but we require _GNU_SOURCE in most files
already.

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Haibo Xu <[email protected]>
Signed-off-by: Anup Patel <[email protected]>
2023-08-08  perf script: Print "cgroup" field on the same line as "comm"  (Ivan Babrou, 1 file, -11/+11)

Commit 3fd7a168bf51 ("perf script: Add 'cgroup' field for output") added
support for printing the cgroup path in perf script output. It was okay if
you didn't want any stacks:

 $ sudo perf script --comms jpegtran:23f4bf -F comm,tid,cpu,time,cgroup
 jpegtran:23f4bf 3321915 [013] 404718.587488: /idle.slice/polish.service
 jpegtran:23f4bf 3321915 [031] 404718.592073: /idle.slice/polish.service

With stacks it gets messier, as the cgroup is printed after the stack:

 $ perf script --comms jpegtran:23f4bf -F comm,tid,cpu,time,cgroup,ip,sym
 jpegtran:23f4bf 3321915 [013] 404718.587488:
         5c554 compress_output
         570d9 jpeg_finish_compress
         3476e jpegtran_main
         330ee jpegtran::main
         326e2 core::ops::function::FnOnce::call_once (inlined)
         326e2 std::sys_common::backtrace::__rust_begin_short_backtrace
  /idle.slice/polish.service
 jpegtran:23f4bf 3321915 [031] 404718.592073:
         8474d jsimd_encode_mcu_AC_first_prepare_sse2.PADDING
  55af68e62fff [unknown]
  /idle.slice/polish.service

Let's instead print the cgroup on the same line as the comm:

 $ perf script --comms jpegtran:23f4bf -F comm,tid,cpu,time,cgroup,ip,sym
 jpegtran:23f4bf 3321915 [013] 404718.587488: /idle.slice/polish.service
         5c554 compress_output
         570d9 jpeg_finish_compress
         3476e jpegtran_main
         330ee jpegtran::main
         326e2 core::ops::function::FnOnce::call_once (inlined)
         326e2 std::sys_common::backtrace::__rust_begin_short_backtrace
 jpegtran:23f4bf 3321915 [031] 404718.592073: /idle.slice/polish.service
         8474d jsimd_encode_mcu_AC_first_prepare_sse2.PADDING
  55af68e62fff [unknown]

Fixes: 3fd7a168bf514979 ("perf script: Add 'cgroup' field for output")
Signed-off-by: Ivan Babrou <[email protected]>
Acked-by: Ian Rogers <[email protected]>
Acked-by: Namhyung Kim <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-08  tools arch x86: Sync the msr-index.h copy with the kernel sources  (Arnaldo Carvalho de Melo, 1 file, -0/+1)

To pick up the changes from these csets:

 522b1d69219d8f08 ("x86/cpu/amd: Add a Zenbleed fix")

That cause no changes to tooling:

 $ tools/perf/trace/beauty/tracepoints/x86_msr.sh > before
 $ cp arch/x86/include/asm/msr-index.h tools/arch/x86/include/asm/msr-index.h
 $ tools/perf/trace/beauty/tracepoints/x86_msr.sh > after
 $ diff -u before after
 $

Just silences this perf build warning:

 Warning: Kernel ABI header differences:
   diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h

Cc: Adrian Hunter <[email protected]>
Cc: Borislav Petkov (AMD) <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-08  Revert "perf report: Append inlines to non-DWARF callchains"  (Arnaldo Carvalho de Melo, 1 file, -5/+0)

This reverts commit 46d21ec067490ab9cdcc89b9de5aae28786a8b8e.

The tests were made with a specific workload; further tests on a recently
updated Fedora 38 system with a system-wide perf.data file show 'perf
report' taking excessive time resolving inlines in vmlinux, so let's revert
this until a full investigation and improvement of the addr2line support
code is made.

Reported-by: Jesper Dangaard Brouer <[email protected]>
Acked-by: Artem Savkov <[email protected]>
Tested-by: Jesper Dangaard Brouer <[email protected]>
Cc: Andrii Nakryiko <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Milian Wolff <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-07  workqueue: Implement non-strict affinity scope for unbound workqueues  (Tejun Heo, 2 files, -11/+26)

An unbound workqueue can be served by multiple worker_pools to improve
locality. The segmentation is achieved by grouping CPUs into pods. By
default, the cache boundaries according to cpus_share_cache() define how
the CPUs are grouped. Say a workqueue is allowed to run on all CPUs and the
system has two L3 caches. The workqueue would be mapped to two worker_pools,
each serving one L3 cache domain.

While this improves locality, because the pod boundaries are strict, it
limits the total bandwidth a given issuer can consume. For example, say
there is a thread pinned to a CPU issuing enough work items to saturate the
whole machine. With the machine segmented into two pods, no matter how many
work items it issues, it can only use half of the CPUs on the system.

While this limitation has existed for a very long time, it wasn't very
pronounced because the affinity grouping used to always be by NUMA nodes.
With cache boundaries as the default and support for even finer grained
scopes (smt and cpu), it is now a lot more pressing problem.

This patch implements non-strict affinity scope, where the pod boundaries
aren't enforced strictly. Going back to the previous example, the workqueue
would still be mapped to two worker_pools; however, the affinity
enforcement would be soft. The workers in both pools would have their
cpus_allowed set to the whole machine, thus allowing the scheduler to
migrate them anywhere on the machine. However, whenever an idle worker is
woken up, the workqueue code asks the scheduler to bring back the task
within the pod if the worker is outside. ie. work items start executing
within their affinity scope but can be migrated outside as the scheduler
sees fit. This removes the hard cap on utilization while maintaining the
benefits of affinity scopes.

After the earlier ->__pod_cpumask changes, the implementation is pretty
simple. When non-strict, which is the new default (see the sketch after
this entry):

 * pool_allowed_cpus() returns @pool->attrs->cpumask instead of
   ->__pod_cpumask so that the workers are allowed to run on any CPU that
   the associated workqueues allow.

 * If the idle worker task's ->wake_cpu is outside the pod, kick_pool()
   sets the field to a CPU within the pod.

This would be the first use of task_struct->wake_cpu outside scheduler
proper, so it isn't clear whether this would be acceptable. However, other
methods of migrating tasks are significantly more expensive and are likely
prohibitively so if we want to do this on every work item. This needs
discussion with scheduler folks.

There is also a race window where setting ->wake_cpu wouldn't be effective
as the target task is still on CPU. However, the window is pretty small and
this being a best-effort optimization, it doesn't seem to warrant more
complexity at the moment.

While the non-strict cache affinity scopes seem to be the best option, the
performance picture interacts with the affinity scope and is a bit
complicated to fully discuss in this patch, so the behavior is made easily
selectable through wqattrs and sysfs, and the next patch will add
documentation to discuss performance implications.

v2: pool->attrs->affn_strict is set to true for per-cpu worker_pools.

Signed-off-by: Tejun Heo <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
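[Editor's note] A minimal sketch of the pool_allowed_cpus() behavior
described above, simplified from the patch description rather than copied
from the kernel source:

 /* Non-strict pools let workers run anywhere the workqueue allows;
  * strict pools confine them to the pod's cpumask. */
 static const struct cpumask *pool_allowed_cpus(struct worker_pool *pool)
 {
         if (pool->attrs->affn_strict)
                 return pool->attrs->__pod_cpumask;  /* confine to the pod */
         return pool->attrs->cpumask;                /* whole wq cpumask */
 }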
2023-08-07  workqueue: Add multiple affinity scopes and interface to select them  (Tejun Heo, 1 file, -6/+9)

Add three more affinity scopes, WQ_AFFN_CPU, SMT and CACHE, and make CACHE
the default. The code changes to actually add the additional scopes are
trivial.

Also add the module parameter "workqueue.default_affinity_scope" to
override the default scope and an "affinity_scope" sysfs file to configure
it per workqueue. wq_dump.py and documentation are updated accordingly.

This enables significant flexibility in configuring how unbound workqueues
behave. If the affinity scope is set to "cpu", it'll behave close to a
per-cpu workqueue. On the other hand, "system" removes all locality
boundaries.

Many modern machines have multiple L3 caches while often being mostly
uniform in terms of memory access. Thus, workqueue's previous behavior of
spreading work items in each NUMA node had negative performance
implications from unnecessarily crossing L3 boundaries between issue and
execution. However, picking a finer grained affinity scope also has a
downside in that an issuer in one group can't utilize CPUs in other groups.

While dependent on the specifics of the workload, there's usually a
noticeable penalty in crossing L3 boundaries, so let's default to CACHE.
This issue will be further addressed and documented with examples in future
patches.

Signed-off-by: Tejun Heo <[email protected]>
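[Editor's note] For orientation, a rough map of the scope levels described
above; the names follow the patch description, but treat the exact set and
ordering as an assumption:

 /* Affinity scopes, from finest to coarsest pod grouping. */
 enum wq_affn_scope {
         WQ_AFFN_CPU,     /* one pod per CPU, close to a per-cpu wq */
         WQ_AFFN_SMT,     /* pods follow SMT sibling boundaries */
         WQ_AFFN_CACHE,   /* pods follow cpus_share_cache(), the default */
         WQ_AFFN_NUMA,    /* pods follow NUMA nodes, the old behavior */
         WQ_AFFN_SYSTEM,  /* a single pod, no locality boundaries */
 };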
2023-08-07  workqueue: Add tools/workqueue/wq_dump.py which prints out workqueue configuration  (Tejun Heo, 1 file, -0/+166)

Lack of visibility has always been a pain point for workqueues. While the
recently added wq_monitor.py improved the situation, it's still difficult
to understand what worker pools are active in the system, how workqueues
map to them, and why. The lack of visibility into how workqueues are
configured is going to become more noticeable as workqueue improves
locality awareness and provides more mechanisms to customize
locality-related behaviors.

Now that the basic framework for more flexible locality support is in
place, this is a good time to improve the situation. This patch adds
tools/workqueue/wq_dump.py, which prints out the topology configuration,
worker pools, and how workqueues are mapped to pools. Read the command's
help message for more details.

Signed-off-by: Tejun Heo <[email protected]>
2023-08-07  selftests/bpf: Add bpf_get_func_ip test for uprobe inside function  (Jiri Olsa, 2 files, -4/+60)

Adding a get_func_ip test for a uprobe inside a function that validates
that the get_func_ip helper returns the correct probe address value.

Tested-by: Alan Maguire <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Martin KaFai Lau <[email protected]>
2023-08-07  selftests/bpf: Add bpf_get_func_ip tests for uprobe on function entry  (Jiri Olsa, 2 files, -2/+34)

Adding get_func_ip tests for a uprobe on function entry that validate that
bpf_get_func_ip returns proper values from both the uprobe and the return
uprobe.

Tested-by: Alan Maguire <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Martin KaFai Lau <[email protected]>
2023-08-07  bpf: Add support for bpf_get_func_ip helper for uprobe program  (Jiri Olsa, 1 file, -1/+6)

Adding support for the bpf_get_func_ip helper for uprobe programs to return
the probed address for both uprobe and return uprobe. We discussed this in
[1] and agreed that uprobe can have a special use of the bpf_get_func_ip
helper that differs from kprobe.

The kprobe bpf_get_func_ip returns:
 - the address of the function if the probe is attached on function entry,
   for both kprobe and return kprobe
 - 0 if the probe is not attached on function entry

The uprobe bpf_get_func_ip returns:
 - the address of the probe, for both uprobe and return uprobe

The reason for this semantic change is that the kernel can't really tell if
the probed user space address is a function entry.

The uprobe program is actually a kprobe-type program attached as a uprobe.
One of the consequences of this design is that uprobes do not have their
own set of helpers, but share them with kprobes.

As we need different functionality for the bpf_get_func_ip helper for
uprobe, I'm adding a bool value to bpf_trace_run_ctx, so the helper can
detect that it's executed in uprobe context and call specific code.

The is_uprobe bool is set as true in bpf_prog_run_array_sleepable, which is
currently used only for executing bpf programs in uprobe. Renaming
bpf_prog_run_array_sleepable to bpf_prog_run_array_uprobe to reflect that
it's only used for uprobes and that it sets run_ctx.is_uprobe, as suggested
by Yafang Shao.

Suggested-by: Andrii Nakryiko <[email protected]>
Tested-by: Alan Maguire <[email protected]>
[1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@mail.gmail.com/
Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Viktor Malik <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Martin KaFai Lau <[email protected]>
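[Editor's note] To make the semantics concrete, here is a hedged sketch of
a uprobe BPF program using the helper; the binary path, function name, and
section spelling are illustrative assumptions, not taken from the patch:

 // Uprobe program: bpf_get_func_ip() returns the probed address,
 // per the uprobe semantics described above.
 #include "vmlinux.h"
 #include <bpf/bpf_helpers.h>

 SEC("uprobe//usr/bin/myapp:my_func")   /* hypothetical target */
 int trace_entry(struct pt_regs *ctx)
 {
         __u64 ip = bpf_get_func_ip(ctx);   /* probed address */

         bpf_printk("uprobe hit at 0x%llx", ip);
         return 0;
 }

 char LICENSE[] SEC("license") = "GPL";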
2023-08-07  Merge tag 'x86_bugs_srso' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2 files, -2/+5)

Pull x86/srso fixes from Borislav Petkov:
 "Add a mitigation for the speculative RAS (Return Address Stack)
  overflow vulnerability on AMD processors. In short, this is yet
  another issue where userspace poisons a microarchitectural structure
  which can then be used to leak privileged information through a side
  channel"

* tag 'x86_bugs_srso' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/srso: Tie SBPB bit setting to microcode patch detection
  x86/srso: Add a forgotten NOENDBR annotation
  x86/srso: Fix return thunks in generated code
  x86/srso: Add IBPB on VMEXIT
  x86/srso: Add IBPB
  x86/srso: Add SRSO_NO support
  x86/srso: Add IBPB_BRTYPE support
  x86/srso: Add a Speculative RAS Overflow mitigation
  x86/bugs: Increase the x86 bugs vector size to two u32s
2023-08-07  selftests/bpf: Add a movsx selftest for sign-extension of R10  (Yonghong Song, 1 file, -0/+22)

A movsx selftest is added for sign-extension of the frame pointer R10.
Verification fails for both privileged and unprivileged prog runs.

Signed-off-by: Yonghong Song <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Martin KaFai Lau <[email protected]>
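[Editor's note] An illustrative-only sketch, in the style of the BPF
verifier selftests, of the kind of program such a test exercises; the exact
mnemonics and the verifier error string are assumptions, and the __failure,
__msg, __naked and __clobber_all macros come from the selftests' bpf_misc.h:

 #include "vmlinux.h"
 #include <bpf/bpf_helpers.h>
 #include "bpf_misc.h"

 SEC("socket")
 __failure __msg("R10 is invalid")      /* hypothetical message */
 __naked void movsx_of_frame_pointer(void)
 {
         asm volatile (
         "r0 = (s8)r10;"                /* cpuv4 sign-extending move from R10 */
         "exit;"
         ::: __clobber_all);
 }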
2023-08-07  perf probe: Make synthesize_perf_probe_point() private to probe-event.c  (Arnaldo Carvalho de Melo, 2 files, -2/+3)

Not used in any other place, so just make it static.

Cc: Adrian Hunter <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lore.kernel.org/lkml/ZM0pjfOe6R4X%[email protected]/
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-07  perf probe: Free string returned by synthesize_perf_probe_point() on failure in synthesize_perf_probe_command()  (Arnaldo Carvalho de Melo, 1 file, -2/+6)

Building perf with EXTRA_CFLAGS="-fsanitize=address", a leak was detected
elsewhere and led to an audit, where we found that
synthesize_perf_probe_command() may leak the synthesize_perf_probe_point()
return value on failure. Fix it.

Cc: Adrian Hunter <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-08-07  perf probe: Free string returned by synthesize_perf_probe_point() on failure to add a probe  (Arnaldo Carvalho de Melo, 1 file, -2/+3)

Building perf with EXTRA_CFLAGS="-fsanitize=address", a leak is detected
when trying to add a probe to a non-existent function:

 # perf probe -x ~/bin/perf dso__neW
 Probe point 'dso__neW' not found.
   Error: Failed to add events.

 =================================================================
 ==296634==ERROR: LeakSanitizer: detected memory leaks

 Direct leak of 128 byte(s) in 1 object(s) allocated from:
     #0 0x7f67642ba097 in calloc (/lib64/libasan.so.8+0xba097)
     #1 0x7f67641a76f1 in allocate_cfi (/lib64/libdw.so.1+0x3f6f1)

 Direct leak of 65 byte(s) in 1 object(s) allocated from:
     #0 0x7f67642b95b5 in __interceptor_realloc.part.0 (/lib64/libasan.so.8+0xb95b5)
     #1 0x6cac75 in strbuf_grow util/strbuf.c:64
     #2 0x6ca934 in strbuf_init util/strbuf.c:25
     #3 0x9337d2 in synthesize_perf_probe_point util/probe-event.c:2018
     #4 0x92be51 in try_to_find_probe_trace_events util/probe-event.c:964
     #5 0x93d5c6 in convert_to_probe_trace_events util/probe-event.c:3512
     #6 0x93d6d5 in convert_perf_probe_events util/probe-event.c:3529
     #7 0x56f37f in perf_add_probe_events /var/home/acme/git/perf-tools-next/tools/perf/builtin-probe.c:354
     #8 0x572fbc in __cmd_probe /var/home/acme/git/perf-tools-next/tools/perf/builtin-probe.c:738
     #9 0x5730f2 in cmd_probe /var/home/acme/git/perf-tools-next/tools/perf/builtin-probe.c:766
     #10 0x635d81 in run_builtin /var/home/acme/git/perf-tools-next/tools/perf/perf.c:323
     #11 0x6362c1 in handle_internal_command /var/home/acme/git/perf-tools-next/tools/perf/perf.c:377
     #12 0x63667a in run_argv /var/home/acme/git/perf-tools-next/tools/perf/perf.c:421
     #13 0x636b8d in main /var/home/acme/git/perf-tools-next/tools/perf/perf.c:537
     #14 0x7f676302950f in __libc_start_call_main (/lib64/libc.so.6+0x2950f)

 SUMMARY: AddressSanitizer: 193 byte(s) leaked in 2 allocation(s).

synthesize_perf_probe_point() returns a "detached" strbuf, i.e. a malloc'ed
string that needs to be free'd. An audit will be performed to find other
such cases.

Acked-by: Masami Hiramatsu <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
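[Editor's note] The shape of both leak fixes, condensed into a hedged
sketch; the error values and surrounding control flow are illustrative, not
the actual perf diff:

 /* The string returned by synthesize_perf_probe_point() is malloc'ed
  * (a "detached" strbuf), so every exit path must free it, including
  * the error paths that previously leaked. */
 char *buf = synthesize_perf_probe_point(&pev->point);

 if (!buf)
         return -EINVAL;
 if (strbuf_addstr(&sb, buf) < 0) {
         free(buf);              /* the previously missing free */
         return -ENOMEM;
 }
 free(buf);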
2023-08-07  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 2 files, -1/+5)

Pull kvm fixes from Paolo Bonzini:
 "x86:
   - Fix SEV race condition

  ARM:
   - Fixes for the configuration of SVE/SME traps when hVHE mode is in use
   - Allow use of pKVM on systems with FF-A implementations that are v1.0 compatible
   - Request/release percpu IRQs (arch timer, vGIC maintenance) correctly when pKVM is in use
   - Fix function prototype after __kvm_host_psci_cpu_entry() rename
   - Skip to the next instruction when emulating writes to TCR_EL1 on AmpereOne systems

  Selftests:
   - Fix missing include"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  selftests/rseq: Fix build with undefined __weak
  KVM: SEV: remove ghcb variable declarations
  KVM: SEV: only access GHCB fields once
  KVM: SEV: snapshot the GHCB before accessing it
  KVM: arm64: Skip instruction after emulating write to TCR_EL1
  KVM: arm64: fix __kvm_host_psci_cpu_entry() prototype
  KVM: arm64: Fix resetting SME trap values on reset for (h)VHE
  KVM: arm64: Fix resetting SVE trap values on reset for hVHE
  KVM: arm64: Use the appropriate feature trap register when activating traps
  KVM: arm64: Helper to write to appropriate feature trap register based on mode
  KVM: arm64: Disable SME traps for (h)VHE at setup
  KVM: arm64: Use the appropriate feature trap register for SVE at EL2 setup
  KVM: arm64: Factor out code for checking (h)VHE mode into a macro
  KVM: arm64: Rephrase percpu enable/disable tracking in terms of hyp
  KVM: arm64: Fix hardware enable/disable flows for pKVM
  KVM: arm64: Allow pKVM on v1.0 compatible FF-A implementations
2023-08-06  tools: Get rid of IRQ_MOVE_CLEANUP_VECTOR from tools  (Xin Li, 2 files, -8/+1)

IRQ_MOVE_CLEANUP_VECTOR is no longer in use. Remove the last traces.

Signed-off-by: Xin Li <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-08-06  tools/nolibc: unistd.h: reorder the syscall macros  (Zhangjin Wu, 1 file, -2/+2)

Reorder the macros to match the order in which they are used, and align
most of them.

Signed-off-by: Zhangjin Wu <[email protected]>
Signed-off-by: Willy Tarreau <[email protected]>