path: root/tools
Age | Commit message | Author | Files | Lines
2023-04-06 | perf vendor events intel: Update free running tigerlake events | Ian Rogers | 2 | -36/+50
Fix the topic, PMU name, event code and umask. These updates were generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py with this PR: https://github.com/intel/perfmon/pull/66 Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Xing Zhengjun <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | perf vendor events intel: Update free running snowridgex events | Ian Rogers | 2 | -66/+30
Fix the PMU names, event code and umask. Remove UNC_IIO_BANDWIDTH_OUT events that aren't supported. These updates were generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py with this PR: https://github.com/intel/perfmon/pull/66 Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Xing Zhengjun <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | perf vendor events intel: Correct knightslanding memory topic | Ian Rogers | 2 | -36/+38
Correct the memory topic of events for the imc related PMUs. These updates were generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py with this PR: https://github.com/intel/perfmon/pull/66 Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Xing Zhengjun <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | perf vendor events intel: Update free running icelakex events | Ian Rogers | 2 | -58/+30
Fix the PMU names, event code and umask. Remove UNC_IIO_BANDWIDTH_OUT events that aren't supported. These updates were generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py with this PR: https://github.com/intel/perfmon/pull/66 Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Xing Zhengjun <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | perf vendor events intel: Update free running alderlake events | Ian Rogers | 2 | -8/+24
Fix the PMU name, event code and umask. These updates were generated by: https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py with this PR: https://github.com/intel/perfmon/pull/66 Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Xing Zhengjun <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | perf pmu: Sort and remove duplicates using JSON PMU name | Ian Rogers | 1 | -16/+31
We may have a lot of copies of a particular uncore PMU, such as uncore_cha_0 to uncore_cha_59 on Intel sapphirerapids. The JSON events may match each of PMUs and so the events are copied to it. In 'perf list' this means we see the same JSON event 60 times as events on different PMUs don't have duplicates removed. There are 284 uncore_cha events on sapphirerapids. Rather than use the PMU's name to sort and remove duplicates, use the JSON PMU name. This reduces the 60 copies back down to 1 and has the side effect of speeding things like the "perf all PMU test" shell test. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Suzuki Poulouse <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
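To illustrate the dedup idea, a minimal sketch only (not the actual perf code; the struct and field names here are hypothetical): sort the listed events by the JSON PMU name rather than the per-instance PMU name, then collapse adjacent equal entries.

  /* Sketch only; struct/field names are illustrative, not perf's. */
  #include <stdlib.h>
  #include <string.h>

  struct sevent {
          const char *name;        /* event name from the JSON alias */
          const char *pmu_name;    /* JSON PMU name, e.g. "uncore_cha" */
          int is_dup;
  };

  static int cmp_sevent(const void *a, const void *b)
  {
          const struct sevent *as = a, *bs = b;
          int ret = strcmp(as->pmu_name ? as->pmu_name : "",
                           bs->pmu_name ? bs->pmu_name : "");

          if (ret)
                  return ret;
          return strcmp(as->name, bs->name);
  }

  static void mark_duplicates(struct sevent *events, size_t n)
  {
          size_t i;

          qsort(events, n, sizeof(*events), cmp_sevent);
          for (i = 1; i < n; i++)
                  if (!cmp_sevent(&events[i - 1], &events[i]))
                          events[i].is_dup = 1;   /* uncore_cha_0..59 collapse to one */
  }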
2023-04-06 | perf pmu: Improve name/comments, avoid a memory allocation | Ian Rogers | 2 | -10/+25
Improve documentation around perf_pmu_alias pmu_name and on functions. Reduce the scope of pmu_uncore_alias_match to just file. Rename perf_pmu__valid_suffix to the more revealing perf_pmu__match_ignoring_suffix. Add a short-cut to perf_pmu__match_ignoring_suffix for PMU names that don't also have a socket value, and can therefore avoid a memory allocation. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Suzuki Poulouse <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
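Roughly, matching while ignoring a trailing instance suffix looks like the sketch below, under the assumption that suffixes have the form '_<digits>'; the helper name and logic are illustrative, not the exact perf implementation.

  #include <stdbool.h>
  #include <string.h>

  /* Does "uncore_cha_59" name the same PMU as the JSON name "uncore_cha"? */
  static bool match_ignoring_suffix(const char *pmu_name, const char *json_name)
  {
          size_t len = strlen(json_name);

          if (strncmp(pmu_name, json_name, len))
                  return false;
          if (pmu_name[len] == '\0')      /* exact match: short-cut, no allocation */
                  return true;
          if (pmu_name[len] != '_' || !pmu_name[len + 1])
                  return false;
          /* accept only an all-digit suffix */
          return strspn(pmu_name + len + 1, "0123456789") ==
                 strlen(pmu_name + len + 1);
  }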
2023-04-06 | perf pmu: Fewer const casts | Ian Rogers | 1 | -6/+6
struct pmu_event has const char*s, only unit needs to be non-const for the sake of passing as an out argument to strtod(). Reduce the const casts from 4 down to 1. Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Suzuki Poulouse <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
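For context, the reason unit stays non-const is strtod()'s out parameter, which is a plain char **. A sketch of the constraint (the struct below is illustrative, not the real struct pmu_event):

  #include <stdlib.h>

  struct event_sketch {
          const char *name;
          const char *desc;
          char *unit;       /* non-const: receives strtod()'s end pointer */
  };

  static double parse_scale_and_unit(struct event_sketch *ev, const char *scale)
  {
          /* double strtod(const char *nptr, char **endptr): endptr must be
           * char **, so the member it points into cannot be const char *. */
          return strtod(scale, &ev->unit);
  }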
2023-04-06 | perf lock contention: Do not try to update if hash map is full | Namhyung Kim | 1 | -3/+19
It doesn't delete data in the task_data and lock_stat maps. The data is kept there until it's consumed by userspace at the end. But it calls bpf_map_update_elem() again and again, and the data will be discarded if the map is full. This is not good. Worse, in the bpf_map_update_elem(), it keeps trying to get a new node even if the map was full. I guess it makes sense if it deletes some node like in the tstamp map (that's why I didn't make the change there). In a pre-allocated hash map, that means it'd iterate all CPU to check the freelist. And it has a bad performance impact on large machines. I've checked it on my 64 CPU machine with this. $ perf bench sched messaging -g 1000 # Running 'sched/messaging' benchmark: # 20 sender and receiver processes per group # 1000 groups == 40000 processes run Total time: 2.825 [sec] And I used the task mode, so that it can guarantee the map is full. The default map entry size is 16K and this workload has 40K tasks. Before: $ sudo ./perf lock con -abt -E3 -- perf bench sched messaging -g 1000 # Running 'sched/messaging' benchmark: # 20 sender and receiver processes per group # 1000 groups == 40000 processes run Total time: 11.299 [sec] contended total wait max wait avg wait pid comm 19284 3.51 s 3.70 ms 181.91 us 1305863 sched-messaging 243 84.09 ms 466.67 us 346.04 us 1336608 sched-messaging 177 66.35 ms 12.08 ms 374.88 us 1220416 node For some reason, it didn't report the data failures. But you can see the total time in the workload is increased a lot (2.8 -> 11.3). If it fails early when the map is full, it goes back to normal. After: $ sudo ./perf lock con -abt -E3 -- perf bench sched messaging -g 1000 # Running 'sched/messaging' benchmark: # 20 sender and receiver processes per group # 1000 groups == 40000 processes run Total time: 3.044 [sec] contended total wait max wait avg wait pid comm 18743 591.92 ms 442.96 us 31.58 us 1431454 sched-messaging 51 210.64 ms 207.45 ms 4.13 ms 1468724 sched-messaging 81 68.61 ms 65.79 ms 847.07 us 1463183 sched-messaging === output for debug === bad: 1164137, total: 2253341 bad rate: 51.66 % histogram of failure reasons task: 0 stack: 0 time: 0 data: 1164137 Signed-off-by: Namhyung Kim <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Hao Luo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Juri Lelli <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Song Liu <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
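The fail-early idea, in rough BPF-side C (a sketch, not the exact tool code; the flag, counter and struct names are illustrative):

  /* Sketch: stop inserting new keys once the pre-allocated hash map is full. */
  static int data_map_full;                        /* illustrative global flag */

  static void update_lock_stat(void *lock_stat_map, struct key_t *key,
                               struct val_t *first, long *data_fail)
  {
          struct val_t *data = bpf_map_lookup_elem(lock_stat_map, key);

          if (data)
                  return;                          /* update existing entry in place */
          if (data_map_full) {                     /* don't walk the freelist again */
                  __sync_fetch_and_add(data_fail, 1);
                  return;
          }
          if (bpf_map_update_elem(lock_stat_map, key, first, BPF_NOEXIST) < 0) {
                  data_map_full = 1;               /* assume the map is full from now on */
                  __sync_fetch_and_add(data_fail, 1);
          }
  }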
2023-04-06 | perf lock contention: Revise needs_callstack() condition | Namhyung Kim | 2 | -2/+2
It needs callstacks for two reasons: * for stack aggregation mode, the map key is the stack id and it can also show the full stack traces when -v is used * for other aggregation modes, the stack filter can be used to limit lock contentions from known call paths The -v option is meaningful (in terms of stack trace) only for stack aggregation mode, so it should not set the save_callstack for other mode like with -t or -l options. I've noticed this with the following command line: $ sudo ./perf lock con -ablv -E 3 -M 16 -- ./perf bench sched messaging ... contended total wait max wait avg wait address symbol 88 4.59 ms 108.07 us 52.13 us ffff935757f46ec0 (spinlock) 33 905.22 us 73.67 us 27.43 us ffff935757f41700 (spinlock) 28 703.69 us 79.28 us 25.13 us ffff938a3d9b0c80 rq_lock (spinlock) === output for debug === bad: 12272, total: 12421 bad rate: 98.80 % histogram of failure reasons task: 8285 stack: 3987 <---------- here time: 0 data: 0 It should not have any failure on stacks since it doesn't use it. No functional change intended. Signed-off-by: Namhyung Kim <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Hao Luo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Juri Lelli <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Song Liu <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | perf lock contention: Update total/bad stats for hidden entries | Namhyung Kim | 3 | -1/+15
When -E option is used, it only prints the given number of entries but the event stat at the end should have the numbers for entire entries. Likewise, -S option will hide entries that don't have the named function in the callstack. Also update event stat for them. Signed-off-by: Namhyung Kim <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Hao Luo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Juri Lelli <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Song Liu <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | perf lock contention: Add data failure stat | Namhyung Kim | 4 | -2/+8
It's possible to fail to update the data when the lock_stat map is full. We should check that case and show the number at the end. $ sudo ./perf lock con -ablv -E3 -- ./perf bench sched messaging ... contended total wait max wait avg wait address symbol 6157 208.48 ms 69.29 us 33.86 us ffff934c001c1f00 (spinlock) 4030 72.04 ms 61.84 us 17.88 us ffff934c000415c0 (spinlock) 3201 50.30 ms 47.73 us 15.71 us ffff934c2eead850 (spinlock) === output for debug === bad: 0, total: 13388 bad rate: 0.00 % histogram of failure reasons task: 0 stack: 0 time: 0 data: 0 <----- added Signed-off-by: Namhyung Kim <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Hao Luo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Juri Lelli <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Song Liu <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | perf lock contention: Update default map size to 16384 | Namhyung Kim | 4 | -6/+7
The BPF hash map will align the map size to a power of 2. So 10k would be 16k anyway. Let's have the actual size to avoid confusions. Signed-off-by: Namhyung Kim <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Hao Luo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Juri Lelli <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Song Liu <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
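The rounding the commit refers to is plain next-power-of-two alignment, e.g. a requested 10240 entries becomes 16384:

  /* Illustrative helper: smallest power of two >= n. */
  static unsigned long next_pow2(unsigned long n)
  {
          unsigned long p = 1;

          while (p < n)
                  p <<= 1;
          return p;
  }
  /* next_pow2(10240) == 16384, hence the new default of 16384. */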
2023-04-06 | perf lock contention: Use -M for --map-nr-entries | Namhyung Kim | 2 | -1/+2
Users often want to change the map size, let's add a short option (-M) for that. Signed-off-by: Namhyung Kim <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Hao Luo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Juri Lelli <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Song Liu <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | perf lock contention: Simplify parse_lock_type() | Namhyung Kim | 1 | -35/+8
The get_type_flag() should check both str and name fields in the lock_type_table so that it can find the appropriate flag without retrying with ':R' or ':W' suffix from the caller. Also fix a typo in the rt-mutex. Signed-off-by: Namhyung Kim <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Hao Luo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Juri Lelli <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Song Liu <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | tools: Rename __fallthrough to fallthrough | Liam Howlett | 15 | -23/+21
Rename the fallthrough attribute to better align with the kernel version. Copy the definition from include/linux/compiler_attributes.h including the #else clause. Adding the #else clause allows the tools compiler.h header to drop the check for a definition entirely and keeps both definitions together. Change any __fallthrough statements to fallthrough anywhere it was used within perf. This allows other tools to use the same key word as the kernel. Committer notes: Did some missing conversions to: builtin-list.c Also included gtk.h before the 'fallthrough' definition in: tools/perf/ui/gtk/hists.c tools/perf/ui/gtk/helpline.c tools/perf/ui/gtk/browser.c As it is the arg name for a macro in glib.h: /var/home/acme/git/perf-tools-next/tools/include/linux/compiler-gcc.h:16:55: error: missing binary operator before token "(" 16 | # define fallthrough __attribute__((__fallthrough__)) | ^ /usr/include/glib-2.0/glib/gmacros.h:637:28: note: in expansion of macro ‘fallthrough’ 637 | #if g_macro__has_attribute(fallthrough) Reviewed-by: Miguel Ojeda <[email protected]> Signed-off-by: Liam Howlett <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rasmus Villemoes <[email protected]> Cc: Tom Rix <[email protected]> Cc: [email protected] <[email protected]> Cc: [email protected] <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
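For reference, the shape of the definition being copied (abbreviated from include/linux/compiler_attributes.h):

  #if __has_attribute(__fallthrough__)
  # define fallthrough    __attribute__((__fallthrough__))
  #else
  # define fallthrough    do {} while (0)  /* fallthrough */
  #endif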
2023-04-06 | perf pmu: Fix a few potential fd leaks | Ian Rogers | 1 | -2/+10
Ensure fd is closed on error paths. Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: German Gomez <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Suzuki Poulouse <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
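The general shape of such a fix, as an illustrative sketch (not the exact perf hunk; do_parse() and path are placeholders):

  #include <fcntl.h>
  #include <unistd.h>

  static int do_parse(int fd) { (void)fd; return 0; }   /* placeholder work */

  static int read_sysfs_value(const char *path)
  {
          int ret = -1;
          int fd = open(path, O_RDONLY);

          if (fd < 0)
                  return -1;
          if (do_parse(fd) < 0)
                  goto out_close;  /* an early return here would leak fd */
          ret = 0;
  out_close:
          close(fd);               /* fd closed on both success and error paths */
          return ret;
  }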
2023-04-06 | perf pmu: Make parser reentrant | Ian Rogers | 4 | -13/+37
By default bison uses global state for compatibility with yacc. Make the parser reentrant so that it may be used in asynchronous and multithreaded situations. Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Gaosheng Cui <[email protected]> Cc: German Gomez <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: James Clark <[email protected]> Cc: Jing Zhang <[email protected]> Cc: Leo Yan <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ravi Bangoria <[email protected]> Cc: Rob Herring <[email protected]> Cc: Sean Christopherson <[email protected]> Cc: Suzuki Poulouse <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2023-04-06 | selftests/bpf: Add verifier tests for code pattern '<const> <cond_op> <non_const>' | Yonghong Song | 1 | -0/+460
Add various tests for code pattern '<const> <cond_op> <non_const>' to exercise the previous verifier patch. The following is the veristat comparison of the number of processed insns, previous patch vs. this patch:
  File                          Program      Insns (A)  Insns (B)  Insns (DIFF)
  ----------------------------  -----------  ---------  ---------  -------------
  test_seg6_loop.bpf.linked3.o  __add_egr_x  12423      12314      -109 (-0.88%)
Only one program is affected, with a minor change. Signed-off-by: Yonghong Song <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-04-06 | bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier | Yonghong Song | 1 | -1/+1
Currently, the verifier does not handle '<const> <cond_op> <non_const>' well. For example:
  ...
  10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
  11: (b7) r2 = 0                       ; R2_w=0
  12: (2d) if r2 > r1 goto pc+2
  13: (b7) r0 = 0
  14: (95) exit
  15: (65) if r1 s> 0x1 goto pc+3
  16: (0f) r0 += r1
  ...
At insn 12, the verifier decides that both the true and false branches are possible, but actually only the false branch is possible. The verifier already supports patterns '<non_const> <cond_op> <const>'. Add support for patterns '<const> <cond_op> <non_const>' in a similar way. Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10' due to this change. Signed-off-by: Yonghong Song <[email protected]> Acked-by: Dave Marchevsky <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-04-06 | selftests/bpf: Add tests for non-constant cond_op NE/EQ bound deduction | Yonghong Song | 2 | -0/+181
Add various tests for code pattern '<non-const> NE/EQ <const>' implemented in the previous verifier patch. Without the verifier patch, these new tests will fail. Signed-off-by: Yonghong Song <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-04-06 | KVM: selftests: Verify LBRs are disabled if vPMU is disabled | Sean Christopherson | 1 | -0/+29
Verify that disabling the guest's vPMU via CPUID also disables LBRs. KVM has had at least one bug where LBRs would remain enabled even though the intent was to disable everything PMU related. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Add negative testcase for PEBS format in PERF_CAPABILITIES | Sean Christopherson | 1 | -0/+10
Expand the immutable features sub-test for PERF_CAPABILITIES to verify KVM rejects any attempt to use a PEBS format other than the host's. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Refactor LBR_FMT test to avoid use of separate macro | Sean Christopherson | 1 | -7/+6
Rework the LBR format test to use the bitfield instead of a separate mask macro, mainly so that adding a nearly-identical PEBS format test doesn't have to copy-paste-tweak the macro too. No functional change intended. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Drop "all done!" printf() from PERF_CAPABILITIES test | Sean Christopherson | 1 | -2/+0
Drop the arbitrary "done" message from the VMX PMU caps test, it's pretty obvious the test is done when the process exits. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Test post-KVM_RUN writes to PERF_CAPABILITIES | Sean Christopherson | 1 | -0/+13
Now that KVM disallows changing PERF_CAPABILITIES after KVM_RUN, expand the host side checks to verify KVM rejects any attempts to change bits from userspace. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Expand negative testing of guest writes to PERF_CAPABILITIES | Sean Christopherson | 1 | -7/+54
Test that the guest can't write 0 to PERF_CAPABILITIES, can't write the current value, and can't toggle _any_ bits. There is no reason to special case the LBR format. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Test all immutable non-format bits in PERF_CAPABILITIES | Sean Christopherson | 1 | -3/+27
Add negative testing of all immutable bits in PERF_CAPABILITIES, i.e. single bits that are reserved-0 or are effectively reserved-1 by KVM. Omit LBR and PEBS format bits from the test as it's easier to test them manually than it is to add safeguards to the common path, e.g. toggling a single bit can yield a format of '0', which is legal as a "disable" value. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Test all fungible features in PERF_CAPABILITIES | Sean Christopherson | 1 | -5/+24
Verify that userspace can set all fungible features in PERF_CAPABILITIES. Drop the now unused #define of the "full-width writes" flag. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Drop now-redundant checks on PERF_CAPABILITIES writes | Sean Christopherson | 1 | -6/+0
Now that vcpu_set_msr() verifies the expected "read what was wrote" semantics of all durable MSRs, including PERF_CAPABILITIES, drop the now-redundant manual checks in the VMX PMU caps test. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Verify KVM preserves userspace writes to "durable" MSRs | Sean Christopherson | 1 | -1/+16
Assert that KVM provides "read what you wrote" semantics for all "durable" MSRs (for lack of a better name). The extra coverage is cheap from a runtime performance perspective, and verifying the behavior in the common helper avoids gratuitous copy+paste in individual tests. Note, this affects all tests that set MSRs from userspace! Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
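Conceptually, the common-helper check amounts to the sketch below; it assumes the KVM selftest helpers vcpu_set_msr(), vcpu_get_msr() and TEST_ASSERT() with roughly these signatures, and is not the literal patch.

  /* Sketch of the "read what you wrote" assertion for durable MSRs.
   * Assumes the kvm selftest headers (kvm_util.h / processor.h). */
  static void set_and_verify_durable_msr(struct kvm_vcpu *vcpu,
                                         uint64_t msr_index, uint64_t val)
  {
          uint64_t readback;

          vcpu_set_msr(vcpu, msr_index, val);
          readback = vcpu_get_msr(vcpu, msr_index);
          TEST_ASSERT(readback == val,
                      "MSR 0x%lx: wrote 0x%lx, read back 0x%lx",
                      msr_index, val, readback);
  }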
2023-04-06 | KVM: selftests: Print out failing MSR and value in vcpu_set_msr() | Sean Christopherson | 1 | -8/+24
Reimplement vcpu_set_msr() as a macro and pretty print the failing MSR (when possible) and the value if KVM_SET_MSRS fails, instead of using the standard KVM_IOCTL_ERROR(). KVM_SET_MSRS is somewhat odd in that it returns the index of the last successful write, i.e. will be '0' on failure barring an entirely different KVM bug. And for writing MSRs, the MSR being written and the value being written are almost always relevant to the failure, i.e. just saying "failed!" doesn't help debug. Place the string goo in a separate macro in anticipation of using it to further expand MSR testing. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Assert that full-width PMC writes are supported if PDCM=1 | Sean Christopherson | 1 | -0/+3
KVM emulates full-width PMC writes in software, assert that KVM reports full-width writes as supported if PERF_CAPABILITIES is supported. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Move 0/initial value PERF_CAPS checks to dedicated sub-test | Sean Christopherson | 1 | -6/+19
Use a separate sub-test to verify userspace can clear PERF_CAPABILITIES and restore it to the KVM-supported value, as the testcase isn't unique to the LBR format. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | KVM: selftests: Split PMU caps sub-tests to avoid writing MSR after KVM_RUN | Sean Christopherson | 1 | -20/+31
Split the PERF_CAPABILITIES subtests into two parts so that the LBR format testcases don't execute after KVM_RUN. Similar to the guest CPUID model, KVM will soon disallow changing PERF_CAPABILITIES after KVM_RUN, at which point attempting to set the MSR after KVM_RUN will yield false positives and/or false negatives depending on what the test is trying to do. Land the LBR format test in a more generic "immutable features" test in anticipation of expanding its scope to other immutable features. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2023-04-06 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 2 | -0/+2
Conflicts:
  drivers/net/ethernet/google/gve/gve.h
    3ce934558097 ("gve: Secure enough bytes in the first TX desc for all TCP pkts")
    75eaae158b1b ("gve: Add XDP DROP and TX support for GQI-QPL format")
    https://lore.kernel.org/all/[email protected]/
    https://lore.kernel.org/all/[email protected]/
Adjacent changes:
  net/can/isotp.c
    051737439eae ("can: isotp: fix race between isotp_sendsmg() and isotp_release()")
    96d1c81e6a04 ("can: isotp: add module parameter for maximum pdu size")
Signed-off-by: Jakub Kicinski <[email protected]>
2023-04-06 | Merge tag 'net-6.3-rc6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Linus Torvalds | 1 | -0/+1
Pull networking fixes from Jakub Kicinski:
 "Including fixes from wireless and can.

  Current release - regressions:
   - wifi: mac80211:
      - fix potential null pointer dereference
      - fix receiving mesh packets in forwarding=0 networks
      - fix mesh forwarding

  Current release - new code bugs:
   - virtio/vsock: fix leaks due to missing skb owner

  Previous releases - regressions:
   - raw: fix NULL deref in raw_get_next().
   - sctp: check send stream number after wait_for_sndbuf
   - qrtr:
      - fix a refcount bug in qrtr_recvmsg()
      - do not do DEL_SERVER broadcast after DEL_CLIENT
   - wifi: brcmfmac: fix SDIO suspend/resume regression
   - wifi: mt76: fix use-after-free in fw features query.
   - can: fix race between isotp_sendsmg() and isotp_release()
   - eth: mtk_eth_soc: fix remaining throughput regression
   - eth: ice: reset FDIR counter in FDIR init stage

  Previous releases - always broken:
   - core: don't let netpoll invoke NAPI if in xmit context
   - icmp: guard against too small mtu
   - ipv6: fix an uninit variable access bug in __ip6_make_skb()
   - wifi: mac80211: fix the size calculation of ieee80211_ie_len_eht_cap()
   - can: fix poll() to not report false EPOLLOUT events
   - eth: gve: secure enough bytes in the first TX desc for all TCP pkts"

* tag 'net-6.3-rc6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (47 commits)
  net: stmmac: check fwnode for phy device before scanning for phy
  net: stmmac: Add queue reset into stmmac_xdp_open() function
  selftests: net: rps_default_mask.sh: delete veth link specifically
  net: fec: make use of MDIO C45 quirk
  can: isotp: fix race between isotp_sendsmg() and isotp_release()
  can: isotp: isotp_ops: fix poll() to not report false EPOLLOUT events
  can: isotp: isotp_recvmsg(): use sock_recv_cmsgs() to get SOCK_RXQ_OVFL infos
  can: j1939: j1939_tp_tx_dat_new(): fix out-of-bounds memory access
  gve: Secure enough bytes in the first TX desc for all TCP pkts
  netlink: annotate lockless accesses to nlk->max_recvmsg_len
  ethtool: reset #lanes when lanes is omitted
  ping: Fix potentail NULL deref for /proc/net/icmp.
  raw: Fix NULL deref in raw_get_next().
  ice: Reset FDIR counter in FDIR init stage
  ice: fix wrong fallback logic for FDIR
  net: stmmac: fix up RX flow hash indirection table when setting channels
  net: ethernet: ti: am65-cpsw: Fix mdio cleanup in probe
  wifi: mt76: ignore key disable commands
  wifi: ath11k: reduce the MHI timeout to 20s
  ipv6: Fix an uninit variable access bug in __ip6_make_skb()
  ...
2023-04-06 | Merge tag 'linux-kselftest-fixes-6.3-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest | Linus Torvalds | 1 | -0/+1
Pull Kselftest fixes from Shuah Khan:
 "One single fix to mount_setattr_test build failure"

* tag 'linux-kselftest-fixes-6.3-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  selftests mount: Fix mount_setattr_test builds failed
2023-04-06 | ACPICA: actbl2: Replace 1-element arrays with flexible arrays | Kees Cook | 1 | -2/+2
ACPICA commit 44f1af0664599e87bebc3a1260692baa27b2f264 Similar to "Replace one-element array with flexible-array", replace the 1-element array with a proper flexible array member as defined by C99. This allows the code to operate without tripping compile-time and run-time bounds checkers (e.g. via __builtin_object_size(), -fsanitize=bounds, and/or -fstrict-flex-arrays=3). The sizeof() uses with struct acpi_nfit_flush_address and struct acpi_nfit_smbios have been adjusted to drop the open-coded subtraction of the trailing single element. The result is no binary differences in either the .text or .data sections. Link: https://github.com/acpica/acpica/commit/44f1af06 Signed-off-by: Bob Moore <[email protected]> Co-developed-by: Dan Williams <[email protected]> Signed-off-by: Dan Williams <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
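In schematic form (placeholder struct names, not ACPICA's), the change and the sizeof() adjustment look like:

  /* Before: C89-style one-element array; length computed with a subtraction. */
  struct flush_sketch_old {
          unsigned short device_handle;
          unsigned short hint_count;
          unsigned long long hint_address[1];
  };
  /* bytes = sizeof(struct flush_sketch_old) - sizeof(unsigned long long)
   *         + count * sizeof(unsigned long long); */

  /* After: C99 flexible array member; the open-coded subtraction goes away. */
  struct flush_sketch_new {
          unsigned short device_handle;
          unsigned short hint_count;
          unsigned long long hint_address[];
  };
  /* bytes = sizeof(struct flush_sketch_new)
   *         + count * sizeof(unsigned long long); */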
2023-04-06 | ACPICA: Update all copyrights/signons to 2023 | Bob Moore | 10 | -10/+10
ACPICA commit 25bddd1824b1e450829468a64bbdcb38074ba3d2 Copyright updates to 2023. Link: https://github.com/acpica/acpica/commit/25bddd18 Signed-off-by: Bob Moore <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2023-04-06 | selftests: xsk: Add test UNALIGNED_INV_DESC_4K1_FRAME_SIZE | Kal Conley | 2 | -0/+25
Add unaligned descriptor test for frame size of 4001. Using an odd frame size ensures that the end of the UMEM is not near a page boundary. This allows testing descriptors that straddle the end of the UMEM but not a page. This test used to fail without the previous commit ("xsk: Fix unaligned descriptor validation"). Signed-off-by: Kal Conley <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Martin KaFai Lau <[email protected]>
2023-04-06 | selftests/bpf: fix xdp_redirect xdp-features selftest for veth driver | Lorenzo Bianconi | 1 | -3/+27
The xdp-features supported by the veth driver are no longer static; they depend on the veth configuration (e.g. whether GRO is enabled/disabled and the TX/RX queue configuration). Take this into account in the xdp_redirect xdp-features selftest for the veth driver. Fixes: fccca038f300 ("veth: take into account device reconfiguration for xdp_features flag") Signed-off-by: Lorenzo Bianconi <[email protected]> Link: https://lore.kernel.org/r/bc35455cfbb1d4f7f52536955ded81ad47d8dc54.1680777371.git.lorenzo@kernel.org Signed-off-by: Martin KaFai Lau <[email protected]>
2023-04-06 | selftests/net: fix typo in tcp_mmap | Eric Dumazet | 1 | -1/+1
kernel test robot reported the following warning:
  All warnings (new ones prefixed by >>):
  tcp_mmap.c: In function 'child_thread':
  >> tcp_mmap.c:211:61: warning: 'lu' may be used uninitialized in this function [-Wmaybe-uninitialized]
    211 |   zc.length = min(chunk_size, FILE_SZ - lu);
We want to read FILE_SZ bytes, so the correct expression should be (FILE_SZ - total). Fixes: 5c5945dc695c ("selftests/net: Add SHA256 computation over data sent in tcp_mmap") Reported-by: kernel test robot <[email protected]> Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/ Signed-off-by: Eric Dumazet <[email protected]> Cc: Xiaoyan Li <[email protected]> Cc: Kuniyuki Iwashima <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Paolo Abeni <[email protected]>
2023-04-06 | selftests/clone3: fix number of tests in ksft_set_plan | Tobias Klauser | 1 | -2/+2
Commit 515bddf0ec41 ("selftests/clone3: test clone3 with CLONE_NEWTIME") added an additional test, so the number passed to ksft_set_plan needs to be bumped accordingly. Also use ksft_finished() to print results and exit. This will catch future mismatches between ksft_set_plan() and the number of tests being run. Fixes: 515bddf0ec41 ("selftests/clone3: test clone3 with CLONE_NEWTIME") Cc: Christian Brauner <[email protected]> Signed-off-by: Tobias Klauser <[email protected]> Reviewed-by: Christian Brauner <[email protected]> Signed-off-by: Christian Brauner <[email protected]>
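The shape of the fix, using the kselftest.h helpers (a sketch; the plan count shown is illustrative, not clone3's actual count):

  #include "../kselftest.h"

  int main(void)
  {
          ksft_print_header();
          ksft_set_plan(6);   /* must match the number of reported test results */

          /* ... each test reports via ksft_test_result(ok, "name\n") ... */

          /* ksft_finished() compares the plan with the pass+fail+skip counts
           * and exits accordingly, so plan/test mismatches are caught. */
          ksft_finished();
  }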
2023-04-05 | bpftool: Clean up _bpftool_once_attr() calls in bash completion | Quentin Monnet | 1 | -13/+3
In bpftool's bash completion file, function _bpftool_once_attr() is able to process multiple arguments. There are a few locations where this function is called multiple times in a row, each time for a single argument; let's pass all arguments instead to minimize the number of function calls required for the completion. Signed-off-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-04-05 | bpftool: Support printing opcodes and source file references in CFG | Quentin Monnet | 7 | -15/+83
Add support for displaying opcodes or/and file references (filepath, line and column numbers) when dumping the control flow graphs of loaded BPF programs with bpftool. The filepaths in the records are absolute. To avoid blocks on the graph to get too wide, we truncate them when they get too long (but we always keep the entire file name). In the unlikely case where the resulting file name is ambiguous, it remains possible to get the full path with a regular dump (no CFG). Signed-off-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-04-05 | bpftool: Support "opcodes", "linum", "visual" simultaneously | Quentin Monnet | 3 | -39/+47
When dumping a program, the keywords "opcodes" (for printing the raw opcodes), "linum" (for displaying the filename, line number, column number along with the source code), and "visual" (for generating the control flow graph for translated programs) are mutually exclusive. But there's no reason why they should be. Let's make it possible to pass several of them at once. The "file FILE" option, which makes bpftool output a binary image to a file, remains incompatible with the others. Signed-off-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-04-05 | bpftool: Return an error on prog dumps if both CFG and JSON are required | Quentin Monnet | 2 | -6/+11
We do not support JSON output for control flow graphs of programs with bpftool. So far, requiring both the CFG and JSON output would result in producing a null JSON object. It makes more sense to raise an error directly when parsing command line arguments and options, so that users know they won't get any output they might expect. If JSON is required for the graph, we leave it to Graphviz instead: # bpftool prog dump xlated <REF> visual | dot -Tjson Suggested-by: Eduard Zingerman <[email protected]> Signed-off-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-04-05 | bpftool: Support inline annotations when dumping the CFG of a program | Quentin Monnet | 6 | -26/+88
We support dumping the control flow graph of loaded programs to the DOT format with bpftool, but so far this feature wouldn't display the source code lines available through BTF along with the eBPF bytecode. Let's add support for these annotations, to make it easier to read the graph. In prog.c, we move the call to dump_xlated_cfg() in order to pass and use the full struct dump_data, instead of creating a minimal one in draw_bb_node(). We pass the pointer to this struct down to dump_xlated_for_graph() in xlated_dumper.c, where most of the logics is added. We deal with BTF mostly like we do for plain or JSON output, except that we cannot use a "nr_skip" value to skip a given number of linfo records (we don't process the BPF instructions linearly, and apart from the root of the graph we don't know how many records we should skip, so we just store the last linfo and make sure the new one we find is different before printing it). When printing the source instructions to the label of a DOT graph node, there are a few subtleties to address. We want some special newline markers, and there are some characters that we must escape. To deal with them, we introduce a new dedicated function btf_dump_linfo_dotlabel() in btf_dumper.c. We'll reuse this function in a later commit to format the filepath, line, and column references as well. Signed-off-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2023-04-05 | bpftool: Fix bug for long instructions in program CFG dumps | Quentin Monnet | 1 | -0/+7
When dumping the control flow graphs for programs using the 16-byte long load instruction, we need to skip the second part of this instruction when looking for the next instruction to process. Otherwise, we end up printing "BUG_ld_00" from the kernel disassembler in the CFG. Fixes: efcef17a6d65 ("tools: bpftool: generate .dot graph from CFG information") Signed-off-by: Quentin Monnet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
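The crux of the fix, as a sketch (visit_insn() is a placeholder for the dumper's per-instruction work, not a bpftool function):

  #include <linux/bpf.h>      /* struct bpf_insn, BPF_LD, BPF_IMM, BPF_DW */

  static void visit_insn(const struct bpf_insn *insn)
  {
          (void)insn;         /* placeholder: dump it, add it to the CFG, ... */
  }

  static void walk_insns(const struct bpf_insn *insns, unsigned int cnt)
  {
          unsigned int i;

          for (i = 0; i < cnt; i++) {
                  visit_insn(&insns[i]);
                  /* A 16-byte ld_imm64 occupies two struct bpf_insn slots;
                   * step over the pseudo second half so it isn't decoded
                   * on its own (which is what produced "BUG_ld_00"). */
                  if (insns[i].code == (BPF_LD | BPF_IMM | BPF_DW))
                          i++;
          }
  }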