|
Add the perf_evlist__mmap_cb_idx() function to call auxtrace_mmap_params__set_idx()
for each new index during the perf_evlist__mmap_ops() call.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Add the perf_evlist_mmap_ops::mmap callback to be called in
mmap_per_evsel() to actually mmap the map.
Add libperf's perf_evlist__mmap_cb_mmap() function as libperf's mmap
callback.
A newly mmapped map gets its refcount set to 2 in mmap__mmap(); we follow that
in the mmap callback. We will move this to a common place after we switch to
perf_evlist__mmap().
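A minimal sketch of the shape of such a callback, under the assumption that it
simply hands off to the low-level mmap function moved earlier in this series
(the exact prototype and body may differ from the real perf/libperf code):

  static int perf_evlist__mmap_cb_mmap(struct perf_mmap *map,
                                       struct perf_mmap_param *mp,
                                       int output, int cpu)
  {
          /* delegate to the low-level mmap; the refcount convention
           * (2 for a newly mmapped map) is kept here as well */
          return perf_mmap__mmap(map, mp, output, cpu);
  }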
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Add the perf_evlist_mmap_ops::get callback to be called in
mmap_per_evsel() to get/allocate the 'struct perf_mmap' object.
Add libperf's perf_evlist__mmap_cb_get() function as libperf's get
callback.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Add the perf_evlist_mmap_ops::idx callback to be called in
mmap_per_cpu() and mmap_per_thread() with the current cpu and thread
indexes.
That is what the current aux code needs, so perf will use this callback to
set the aux index.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
This is so we can pass specific callbacks to the evlist's mmap code.
There will be a specific call to this function from perf's
evlist__mmap() and libperf's perf_evlist__mmap() functions in the following
changes.
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Jiri Olsa <[email protected]>
|
|
Add libperf's versions of the perf_evlist__mmap()/munmap() functions and
export them in the perf/evlist.h header.
It's the backbone of what we have in the perf code. The following changes
will add the needed callbacks and then we'll finally switch the perf code to
use libperf's version.
Add mmap/mmap_ovw 'struct perf_mmap' object arrays to hold maps for
libperf's evlist.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move perf_mmap__read_event() from tools/perf to libperf and export it in
the perf/mmap.h header.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move perf_mmap__read_done() from tools/perf to libperf and export it in
the perf/mmap.h header.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move perf_mmap__read_init() from tools/perf to libperf and export it in
the perf/mmap.h header.
Also add pr_debug2()/pr_debug3() macro support, because the code uses
them.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move perf_mmap__consume() from tools/perf to libperf and export it in
the perf/mmap.h header.
Also move the needed helpers perf_mmap__write_tail(),
perf_mmap__read_head() and perf_mmap__empty().
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
We will move this code to libperf shortly, so we need to free it of
'struct auxtrace_mmap' usage, because it won't be available in libperf
(for now).
The perf_event_mmap_page::aux_size is set when the aux mmap is mapped,
so the check is equivalent.
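A minimal sketch of the equivalent check, assuming access to the mmapped
control page through the map's base pointer (the helper name here is
illustrative, not the actual perf code):

  static bool perf_mmap__has_aux(struct perf_mmap *map)
  {
          struct perf_event_mmap_page *pc = map->base;

          /* aux_size is filled in by the kernel once the aux area is mmapped */
          return pc->aux_size != 0;
  }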
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move perf_mmap__put() from tools/perf to libperf.
Once perf_mmap__put() is moved, we need a way to call application-related
unmap code (AIO and aux-related code for perf) when the map goes away.
Add the perf_mmap::unmap callback to do that.
The unmap path from perf is:
  perf_mmap__put                           (libperf)
    perf_mmap__munmap                      (libperf)
      map->unmap_cb -> perf_mmap__unmap_cb (perf)
        mmap__munmap                       (perf)
Committer notes:
Add missing linux/kernel.h to tools/perf/lib/mmap.c to get the BUG_ON
definition.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move perf_mmap__unmap() from tools/perf to libperf, into the internal
header internal/mmap.h. It will be used in the following patches. Also
rename the existing perf function to mmap__munmap().
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move perf_mmap__get() from tools/perf to libperf in the internal header
internal/mmap.h.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move perf_mmap__mmap() from tools/perf to libperf; it will be used in
the following patches. Also rename the existing perf function to
mmap__mmap().
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Move perf_mmap__mmap_len() from tools/perf to libperf; it will be used
in the following patches. Also rename the existing perf function to
mmap__mmap_len().
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Add libperf's version of the mmap params object, 'struct perf_mmap_param',
with the basics: 'prot' and 'mask'. Encapsulate it in the current
'struct mmap_params' object.
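A minimal sketch of the two objects described, with just the fields named
above (the embedding field name is an assumption; the real structs carry
more):

  /* libperf, internal header */
  struct perf_mmap_param {
          int prot;
          int mask;
  };

  /* perf side: embeds the libperf params */
  struct mmap_params {
          struct perf_mmap_param core;
          /* perf-specific fields (aux, AIO, ...) stay here */
  };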
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Add the perf_mmap__init() function to initialize 'struct perf_mmap' objects.
Add it in a new mmap.c source file, which will carry all the mmap-related
functions.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Being const + weak breaks with some compilers that constant-propagate
from the weak symbol. This behavior is outside the specification, but in
LLVM it was chosen to match GCC's behavior.
LLVM's implementation was set in this patch:
https://github.com/llvm/llvm-project/commit/f49573d1eedcf1e44893d5a062ac1b72c8419646
A const + weak symbol is set to be weak_odr:
https://llvm.org/docs/LangRef.html
ODR is the One Definition Rule; given that there is one constant definition,
constant propagation is possible. It is possible to get this code to
miscompile with LLVM when applying link time optimization. As compilers
become more aggressive, this is likely to break in more instances.
Move the definition of sample_reg_masks to the conditional part of
perf_regs.h and guard usage with HAVE_PERF_REGS_SUPPORT. This avoids the
weak symbol.
Fix an issue from patch v1 when HAVE_PERF_REGS_SUPPORT isn't defined.
In v3, add perf_regs.c for architectures that HAVE_PERF_REGS_SUPPORT but
don't declare sample_reg_masks.
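A simplified sketch of the resulting arrangement, with the declaration and
its users guarded by the same symbol (the real header has more in the
guarded section; needs <stdio.h> for the example loop):

  /* perf_regs.h: no weak definition, only a guarded declaration */
  #ifdef HAVE_PERF_REGS_SUPPORT
  extern const struct sample_reg sample_reg_masks[];
  #endif

  /* a user guards its loop the same way */
  static void list_sample_regs(void)
  {
  #ifdef HAVE_PERF_REGS_SUPPORT
          const struct sample_reg *r;

          for (r = sample_reg_masks; r->name != NULL; r++)
                  printf("  %s\n", r->name);
  #endif
  }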
Further notes:
Jiri asked:
"Is this just a precaution or you actually saw some breakage?"
Ian answered:
"We saw a breakage with clang with thinlto enabled for linking. Our
compiler team had recently seen, and were surprised by, a similar issue
and were able to dig out the weak ODR issue."
Signed-off-by: Ian Rogers <[email protected]>
Reviewed-by: Nick Desaulniers <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Albert Ou <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Alexey Budankov <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: [email protected]
Cc: Guo Ren <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: [email protected]
Cc: Mao Han <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Similar to commit 2c57da356800 ("selftests: kvm: fix sync_regs_test with
newer gccs") and commit 204c91eff798a ("KVM: selftests: do not blindly
clobber registers in guest asm"), we had better not rely on gcc leaving
r11 untouched. We can write the simple ucall inline and have the guest
code completely as a small assembler function.
Signed-off-by: Christian Borntraeger <[email protected]>
Suggested-by: Paolo Bonzini <[email protected]>
Reviewed-by: Thomas Huth <[email protected]>
|
|
'struct xdp_umem_reg' has 4 bytes of padding at the end that makes
valgrind complain about passing uninitialized stack memory to the
syscall:
Syscall param socketcall.setsockopt() points to uninitialised byte(s)
at 0x4E7AB7E: setsockopt (in /usr/lib64/libc-2.29.so)
by 0x4BDE035: xsk_umem__create@@LIBBPF_0.0.4 (xsk.c:172)
Uninitialised value was created by a stack allocation
at 0x4BDDEBA: xsk_umem__create@@LIBBPF_0.0.4 (xsk.c:140)
The padding bytes appeared after the introduction of the new 'flags' field;
a memset() is required to clear them.
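A minimal sketch of the fix in xsk_umem__create(), with the field
assignments abbreviated ('struct xdp_umem_reg' comes from <linux/if_xdp.h>;
variable names follow the function's context):

  struct xdp_umem_reg mr;

  /* zero the whole struct, including the 4 trailing padding bytes,
   * before handing it to the kernel */
  memset(&mr, 0, sizeof(mr));
  mr.addr = (uintptr_t)umem_area;
  mr.len = size;
  mr.chunk_size = umem->config.frame_size;
  mr.headroom = umem->config.frame_headroom;
  mr.flags = umem->config.flags;

  err = setsockopt(umem->fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr));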
Fixes: 10d30e301732 ("libbpf: add flags to umem config")
Signed-off-by: Ilya Maximets <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
The existing padding test case for btf_dump was supposed to test padding
generation at the end of a struct, but its expected output was specified
incorrectly. Fix this.
Fixes: 2d2a3ad872f8 ("selftests/bpf: add btf_dump BTF-to-C conversion tests")
Reported-by: John Fastabend <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Convert test_btf_dump into a part of test_progs, instead of
a stand-alone test binary.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Fix a case where explicit padding at the end of a struct is necessary
due to non-standard alignment requirements of fields (which BTF doesn't
capture explicitly).
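An illustrative (made-up) example of the situation: BTF records member
offsets and the overall struct size, but not alignment attributes, so the
dumper has to emit explicit tail padding to preserve sizeof():

  /* suppose the original type was
   *     struct aligned_pair { int x; int y; } __attribute__((aligned(16)));
   * the regenerated C then needs padding at the end to stay 16 bytes: */
  struct aligned_pair {
          int x;
          int y;
          char __pad_8[8];
  };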
Fixes: 351131b51c7a ("libbpf: add btf_dump API for BTF-to-C conversion")
Reported-by: John Fastabend <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Tested-by: John Fastabend <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Continuing from the previous cset comment, now that the filter expression
works:
# perf trace -e msr:* --filter="msr!=FS_BASE && msr != IA32_TSC_DEADLINE && msr != 0x830 && msr != 0x83f && msr !=IA32_SPEC_CTRL" --filter-pids 3750
0.000 Timer/5033 msr:write_msr(msr: SYSCALL_MASK, val: 292608)
0.009 Timer/5033 msr:write_msr(msr: LSTAR, val: -1398800368)
0.010 Timer/5033 msr:write_msr(msr: TSC_AUX, val: 4)
0.050 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
45.661 gnome-terminal/12595 msr:write_msr(msr: SYSCALL_MASK, val: 292608)
45.672 gnome-terminal/12595 msr:write_msr(msr: LSTAR, val: -1398800368)
45.675 gnome-terminal/12595 msr:write_msr(msr: TSC_AUX, val: 3)
54.852 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
130.508 Timer/4050 msr:write_msr(msr: SYSCALL_MASK, val: 292608)
130.527 Timer/4050 msr:write_msr(msr: LSTAR, val: -1398800368)
130.531 Timer/4050 msr:write_msr(msr: TSC_AUX, val: 3)
140.924 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
164.738 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
603.578 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
620.809 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
690.115 JS Watchdog/4259 msr:write_msr(msr: SYSCALL_MASK, val: 292608)
690.136 JS Watchdog/4259 msr:write_msr(msr: LSTAR, val: -1398800368)
690.141 JS Watchdog/4259 msr:write_msr(msr: TSC_AUX, val: 3)
690.186 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
759.016 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
^C[root@quaco ~]#
Or look at the first 3 write_msr events for that IA32_TSC_DEADLINE to learn why
it happens so often:
# perf trace --max-events=3 --max-stack=8 -e msr:* --filter="msr==IA32_TSC_DEADLINE" --filter-pids 3750
0.000 :0/0 msr:write_msr(msr: IA32_TSC_DEADLINE, val: 19296732550862)
do_trace_write_msr ([kernel.kallsyms])
do_trace_write_msr ([kernel.kallsyms])
lapic_next_deadline ([kernel.kallsyms])
clockevents_program_event ([kernel.kallsyms])
hrtimer_interrupt ([kernel.kallsyms])
smp_apic_timer_interrupt ([kernel.kallsyms])
apic_timer_interrupt ([kernel.kallsyms])
cpuidle_enter_state ([kernel.kallsyms])
32.646 :0/0 msr:write_msr(msr: IA32_TSC_DEADLINE, val: 19296800134158)
do_trace_write_msr ([kernel.kallsyms])
do_trace_write_msr ([kernel.kallsyms])
lapic_next_deadline ([kernel.kallsyms])
clockevents_program_event ([kernel.kallsyms])
hrtimer_start_range_ns ([kernel.kallsyms])
tick_nohz_restart_sched_tick ([kernel.kallsyms])
tick_nohz_idle_exit ([kernel.kallsyms])
do_idle ([kernel.kallsyms])
32.802 :0/0 msr:write_msr(msr: IA32_TSC_DEADLINE, val: 19297507436922)
do_trace_write_msr ([kernel.kallsyms])
do_trace_write_msr ([kernel.kallsyms])
lapic_next_deadline ([kernel.kallsyms])
clockevents_program_event ([kernel.kallsyms])
hrtimer_try_to_cancel ([kernel.kallsyms])
hrtimer_cancel ([kernel.kallsyms])
tick_nohz_restart_sched_tick ([kernel.kallsyms])
tick_nohz_idle_exit ([kernel.kallsyms])
#
And if some of the strings can't be found:
# trace -e msr:* --filter="msr!=SPECULATIVE_EXECUTION_PROBLEMS_SOLUTION && msr != IA32_TSC_DEADLINE && msr != 0x830 && msr != 0x83f && msr !=IA32_SPEC_CTRL" --filter-pids 3750
"SPECULATIVE_EXECUTION_PROBLEMS_SOLUTION" not found for "msr" in "msr:read_msr", can't set filter "(msr!=SPECULATIVE_EXECUTION_PROBLEMS_SOLUTION && msr != IA32_TSC_DEADLINE && msr != 0x830 && msr != 0x83f && msr !=IA32_SPEC_CTRL) && (common_pid != 28131 && common_pid != 3750)"
#
The next step is to automatically wire up the pre-existing strarrays, of
which there are quite a few.
The strtoul() methods will be further enhanced to allow for looking at other
arguments in a syscall/tracepoint, just like going from integer to string
(scnprintf methods), so that those "val" lines for the msr tracepoints can be
properly formatted or even resolved into some string.
Cc: Adrian Hunter <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
So that one can try things like:
# perf trace -e msr:* --filter="msr!=FS_BASE && msr != IA32_TSC_DEADLINE && msr != 0x830 && msr != 0x83f && msr !=IA32_SPEC_CTRL" --filter-pids 3750
That, at this point in the patchset, without any strtoul in place for
tracepoint arguments, will result in:
No resolver (strtoul) for "msr" in "msr:read_msr", can't set filter "(msr!=FS_BASE && msr != IA32_TSC_DEADLINE && msr != 0x830 && msr != 0x83f && msr !=IA32_SPEC_CTRL) && (common_pid != 25407 && common_pid != 3750)"
#
See you in the next cset!
Cc: Adrian Hunter <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
And also for 'struct strarray', since it's needed to implement
strarrays__strtoul(). This just traverses the entries and, when finding a
match, returns (offset + index), i.e. the value associated with the
searched string.
E.g. "EFER" (MSR_EFER) returns:
# grep -w EFER -B2 /tmp/build/perf/trace/beauty/generated/x86_arch_MSRs_array.c
#define x86_64_specific_MSRs_offset 0xc0000080
static const char *x86_64_specific_MSRs[] = {
[0xc0000080 - x86_64_specific_MSRs_offset] = "EFER",
#
0xc0000080
This will be auto-attached to 'struct syscall_arg_fmt' entries
associated with strarrays as soon as we add a ->strarray and ->strarrays
to 'struct syscall_arg_fmt'.
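A minimal, self-contained sketch of that reverse lookup (types simplified,
not the exact 'struct strarray' used in perf; needs <stdbool.h>, <stdint.h>
and <string.h>):

  struct strarray {
          uint64_t offset;        /* e.g. x86_64_specific_MSRs_offset */
          int nr_entries;
          const char **entries;
  };

  static bool strarray__strtoul(const struct strarray *sa, const char *name,
                                uint64_t *ret)
  {
          int i;

          for (i = 0; i < sa->nr_entries; ++i) {
                  if (sa->entries[i] && strcmp(sa->entries[i], name) == 0) {
                          *ret = sa->offset + i;  /* "EFER" -> 0xc0000080 */
                          return true;
                  }
          }

          return false;
  }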
Cc: Adrian Hunter <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
This will go from a string to a number, so that filter expressions can
be constructed with strings and then, before applying the tracepoint
filters (or eBPF, in the future), we can map those strings to numbers.
The first one will be for 'msr' tracepoint arguments, but we will quickly
be able to reuse all the strarrays for that.
Cc: Adrian Hunter <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Similar to what is in 'perf record', it works just like there:
# perf trace -e msr:*
328.297 :0/0 msr:write_msr(msr: FS_BASE, val: 140240388381888)
328.302 :0/0 msr:write_msr(msr: FS_BASE, val: 140240388381888)
328.306 :0/0 msr:write_msr(msr: FS_BASE, val: 140240388381888)
328.317 :0/0 msr:write_msr(msr: FS_BASE, val: 140240388381888)
328.322 :0/0 msr:write_msr(msr: FS_BASE, val: 140240388381888)
328.327 :0/0 msr:write_msr(msr: FS_BASE, val: 140240388381888)
328.331 :0/0 msr:write_msr(msr: FS_BASE, val: 140240388381888)
328.336 :0/0 msr:write_msr(msr: FS_BASE, val: 140240388381888)
328.340 :0/0 ^Cmsr:write_msr(msr: FS_BASE, val: 140240388381888)
#
So, for a system-wide trace session looking at the write_msr tracepoint
we see a flood of MSR_FS_BASE writes; we need to get the number for that:
# grep FS_BASE /tmp/build/perf/trace/beauty/generated/x86_arch_MSRs_array.c
[0xc0000100 - x86_64_specific_MSRs_offset] = "FS_BASE",
#
And then use it in a filter:
# perf trace -e msr:* --filter="msr!=0xc0000100"
<SNIP>
942.177 :0/0 msr:write_msr(msr: IA32_TSC_DEADLINE, val: 3056931068232)
942.199 :0/0 msr:write_msr(msr: IA32_TSC_DEADLINE, val: 3057135655252)
942.203 :0/0 msr:write_msr(msr: IA32_TSC_DEADLINE, val: 3056931068222)
942.231 :0/0 msr:write_msr(msr: IA32_TSC_DEADLINE, val: 3056998373022)
942.241 :0/0 msr:write_msr(msr: IA32_TSC_DEADLINE, val: 3056931068236)
<SNIP>
#
Ok, let's filter that one too, it's too noisy:
# grep TSC_DEADLINE /tmp/build/perf/trace/beauty/generated/x86_arch_MSRs_array.c
[0x000006E0] = "IA32_TSC_DEADLINE",
#
# perf trace -e msr:* --filter="msr!=0xc0000100 && msr!=0x6e0" -a sleep 0.1
0.000 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
0.066 CPU 0/KVM/4895 msr:write_msr(msr: IA32_SPEC_CTRL, val: 6)
0.070 CPU 0/KVM/4895 msr:write_msr(msr: 0x830, val: 34359740667)
0.099 CPU 0/KVM/4895 msr:read_msr(msr: IA32_SYSENTER_ESP, val: -2199021993472)
0.100 CPU 0/KVM/4895 msr:read_msr(msr: IA32_APICBASE, val: 4276096000)
0.101 CPU 0/KVM/4895 msr:read_msr(msr: IA32_DEBUGCTLMSR)
0.109 :0/0 msr:write_msr(msr: IA32_SPEC_CTRL)
1.000 :0/0 msr:write_msr(msr: 0x830, val: 17179871485)
18.893 :0/0 msr:write_msr(msr: 0x83f, val: 246)
28.810 :0/0 msr:write_msr(msr: 0x830, val: 68719479037)
40.117 CPU 0/KVM/4895 msr:write_msr(msr: IA32_SPEC_CTRL, val: 6)
40.127 CPU 0/KVM/4895 msr:read_msr(msr: IA32_DEBUGCTLMSR)
40.139 CPU 0/KVM/4895 msr:write_msr(msr: LSTAR, val: -2130661312)
40.141 CPU 0/KVM/4895 msr:write_msr(msr: SYSCALL_MASK, val: 14080)
40.142 CPU 0/KVM/4895 msr:write_msr(msr: TSC_AUX)
40.144 CPU 0/KVM/4895 msr:write_msr(msr: KERNEL_GS_BASE)
40.147 CPU 0/KVM/4895 msr:write_msr(msr: IA32_SPEC_CTRL)
40.148 CPU 0/KVM/4895 msr:write_msr(msr: IA32_FLUSH_CMD, val: 1)
40.151 CPU 0/KVM/4895 msr:write_msr(msr: IA32_SPEC_CTRL, val: 6)
^C
#
One can combine that with filtering pids as well:
# perf trace -e msr:* --filter="msr!=0xc0000100 && msr!=0x6e0" --filter-pids 4895 -a sleep 0.09
0.000 :0/0 msr:write_msr(msr: 0x830, val: 4294969597)
0.291 gnome-terminal/2790 msr:write_msr(msr: SYSCALL_MASK, val: 292608)
0.294 gnome-terminal/2790 msr:write_msr(msr: LSTAR, val: -1935671280)
0.295 gnome-terminal/2790 msr:write_msr(msr: TSC_AUX, val: 6)
10.940 gnome-terminal/2790 msr:write_msr(msr: 0x830, val: 4294969597)
15.943 gnome-shell/2096 msr:write_msr(msr: 0x830, val: 4294969597)
16.975 :0/0 msr:write_msr(msr: 0x830, val: 4294969597)
19.560 :0/0 msr:write_msr(msr: 0x83f, val: 246)
25.162 :0/0 msr:read_msr(msr: IA32_TSC_ADJUST)
25.807 JS Watchdog/3635 msr:write_msr(msr: IA32_SPEC_CTRL, val: 6)
25.820 :0/0 msr:write_msr(msr: IA32_SPEC_CTRL)
25.941 gnome-terminal/2790 msr:write_msr(msr: 0x830, val: 4294969597)
26.941 gnome-terminal/2790 msr:write_msr(msr: 0x830, val: 4294969597)
29.942 gnome-terminal/2790 msr:write_msr(msr: 0x830, val: 4294969597)
45.313 :0/0 msr:write_msr(msr: 0x83f, val: 246)
56.945 gnome-terminal/2790 msr:write_msr(msr: 0x830, val: 4294969597)
60.946 gnome-terminal/2790 msr:write_msr(msr: 0x830, val: 4294969597)
74.096 JS Watchdog/8971 msr:write_msr(msr: IA32_SPEC_CTRL, val: 6)
74.130 :0/0 msr:write_msr(msr: IA32_SPEC_CTRL)
79.673 :0/0 msr:write_msr(msr: 0x83f, val: 246)
79.947 gnome-terminal/2790 msr:write_msr(msr: 0x830, val: 17179871485)
#
Or for just a pid, with callchains:
# grep SYSCALL_MAS /tmp/build/perf/trace/beauty/generated/x86_arch_MSRs_array.c
[0xc0000084 - x86_64_specific_MSRs_offset] = "SYSCALL_MASK",
# perf trace -e msr:* --filter="msr==0xc0000084" --pid 2790 --call-graph=dwarf
0.000 gnome-terminal/2790 msr:write_msr(msr: SYSCALL_MASK, val: 292608)
do_trace_write_msr ([kernel.kallsyms])
do_trace_write_msr ([kernel.kallsyms])
kvm_on_user_return ([kvm])
fire_user_return_notifiers ([kernel.kallsyms])
exit_to_usermode_loop ([kernel.kallsyms])
do_syscall_64 ([kernel.kallsyms])
entry_SYSCALL_64 ([kernel.kallsyms])
__GI___poll (inlined)
9299.073 gnome-terminal/2790 msr:write_msr(msr: SYSCALL_MASK, val: 292608)
do_trace_write_msr ([kernel.kallsyms])
do_trace_write_msr ([kernel.kallsyms])
kvm_on_user_return ([kvm])
fire_user_return_notifiers ([kernel.kallsyms])
exit_to_usermode_loop ([kernel.kallsyms])
do_syscall_64 ([kernel.kallsyms])
entry_SYSCALL_64 ([kernel.kallsyms])
__GI___poll (inlined)
9348.374 gnome-terminal/2790 msr:write_msr(msr: SYSCALL_MASK, val: 292608)
do_trace_write_msr ([kernel.kallsyms])
do_trace_write_msr ([kernel.kallsyms])
kvm_on_user_return ([kvm])
fire_user_return_notifiers ([kernel.kallsyms])
exit_to_usermode_loop ([kernel.kallsyms])
do_syscall_64 ([kernel.kallsyms])
entry_SYSCALL_64 ([kernel.kallsyms])
__GI___poll (inlined)
<SNIP>
#
Ok, so that is just another way for KVM to emit MSR writes :-)
The next step: eliminate those greps by getting the filter expression,
looking for the arg names and then for the arrays associated with them to
do a reverse lookup.
Also allow those filters to be associated with strace-like syscall
names.
After that: augment the 'val' arg for 'msr:write_msr' based on the first
arg, 'msr'.
Then, do that with eBPF too, not just with tracepoint filters.
Cc: Adrian Hunter <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Marcelo Tosatti <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
We'll need this to support 'perf trace -e tracepoint --filter=expr', as
the command line tracepoint filter is attached to the preceding evsel,
just like in 'perf record'. And when we go on to set pid filters, which
we do at a minimum to filter out 'perf trace's own syscalls, we need to
append to, not set, the tp filter.
Cc: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Will be used by 'perf trace' to support 'perf trace --filter', where we need
to append to any pre-existing filter.
When parse_filter() gets invoked to process --filter, it'll set the
filter to the one specified on the command line. Later on, when we filter
out 'perf trace's own pid to avoid an event feedback loop, we need to
preserve the command line filter put in place by parse_filter().
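A minimal sketch of the append semantics, matching the "(existing) && (new)"
form visible in the filter strings elsewhere in this series (not the exact
perf helper; needs _GNU_SOURCE for asprintf(), plus <stdio.h>, <stdlib.h>
and <string.h>):

  static char *filter_append(const char *old, const char *new)
  {
          char *result;

          if (old == NULL)        /* nothing set yet: just take the new one */
                  return strdup(new);

          if (asprintf(&result, "(%s) && (%s)", old, new) < 0)
                  return NULL;

          return result;
  }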
Cc: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Will be used to append such lists to existing filters.
Cc: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
So that we can go from:
# perf trace -e msr:write_msr --max-stack=16 sleep 1
0.000 sleep/6740 msr:write_msr(msr: 3221225728, val: 139636317451648)
do_trace_write_msr ([kernel.kallsyms])
do_trace_write_msr ([kernel.kallsyms])
do_arch_prctl_64 ([kernel.kallsyms])
__x64_sys_arch_prctl ([kernel.kallsyms])
do_syscall_64 ([kernel.kallsyms])
entry_SYSCALL_64 ([kernel.kallsyms])
init_tls (/usr/lib64/ld-2.29.so)
dl_main (/usr/lib64/ld-2.29.so)
_dl_sysdep_start (/usr/lib64/ld-2.29.so)
_dl_start (/usr/lib64/ld-2.29.so)
#
To:
# perf trace -e msr:write_msr --max-stack=16 sleep 1
0.000 sleep/8519 msr:write_msr(msr: FS_BASE, val: 139878031705472)
do_trace_write_msr ([kernel.kallsyms])
do_trace_write_msr ([kernel.kallsyms])
do_arch_prctl_64 ([kernel.kallsyms])
__x64_sys_arch_prctl ([kernel.kallsyms])
do_syscall_64 ([kernel.kallsyms])
entry_SYSCALL_64 ([kernel.kallsyms])
init_tls (/usr/lib64/ld-2.29.so)
dl_main (/usr/lib64/ld-2.29.so)
_dl_sysdep_start (/usr/lib64/ld-2.29.so)
_dl_start (/usr/lib64/ld-2.29.so)
#
This, in reverse, will allow for symbolic system call/tracepoint
filtering.
Cc: Adrian Hunter <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
We need to wrap those autogenerated string arrays with the
strarrays__scnprintf() formatter, so do it.
Cc: Adrian Hunter <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
For instance 'msr' appears in several tracepoints, so we can associate
it with a single scnprintf() routine auto-generated from kernel headers,
as will be done in followup patches.
Start with an empty array of associations.
Cc: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
This way we generate the source with the table for later use by plugins,
etc.
I.e. after running:
$ make -C tools/perf O=/tmp/build/perf
We end up with:
$ head /tmp/build/perf/trace/beauty/generated/x86_arch_MSRs_array.c
static const char *x86_MSRs[] = {
[0x00000000] = "IA32_P5_MC_ADDR",
[0x00000001] = "IA32_P5_MC_TYPE",
[0x00000010] = "IA32_TSC",
[0x00000017] = "IA32_PLATFORM_ID",
[0x0000001b] = "IA32_APICBASE",
[0x00000020] = "KNC_PERFCTR0",
[0x00000021] = "KNC_PERFCTR1",
[0x00000028] = "KNC_EVNTSEL0",
[0x00000029] = "KNC_EVNTSEL1",
$
Now it's just a matter of using it, first in a libtraceevent plugin.
At some point we should move tools/perf/trace/beauty to tools/beauty/,
so that it can be used more generally and even made available externally
like libbpf, libperf, libtraceevent, etc.
Cc: Adrian Hunter <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Without parameters it'll parse tools/arch/x86/include/asm/msr-index.h
and output a table usable by tools, which will later be wired up to a
libtraceevent plugin registered from perf's glue code:
$ tools/perf/trace/beauty/tracepoints/x86_msr.sh
static const char *x86_MSRs[] = {
<SNIP>
[0x00000034] = "SMI_COUNT",
[0x0000003a] = "IA32_FEATURE_CONTROL",
[0x0000003b] = "IA32_TSC_ADJUST",
[0x00000040] = "LBR_CORE_FROM",
[0x00000048] = "IA32_SPEC_CTRL",
[0x00000049] = "IA32_PRED_CMD",
<SNIP>
[0x0000010b] = "IA32_FLUSH_CMD",
[0x0000010F] = "TSX_FORCE_ABORT",
<SNIP>
[0x00000198] = "IA32_PERF_STATUS",
[0x00000199] = "IA32_PERF_CTL",
<SNIP>
[0x00000da0] = "IA32_XSS",
[0x00000dc0] = "LBR_INFO_0",
[0x00000ffc] = "IA32_BNDCFGS_RSVD",
};
#define x86_64_specific_MSRs_offset 0xc0000080
static const char *x86_64_specific_MSRs[] = {
[0xc0000080 - x86_64_specific_MSRs_offset] = "EFER",
[0xc0000081 - x86_64_specific_MSRs_offset] = "STAR",
[0xc0000082 - x86_64_specific_MSRs_offset] = "LSTAR",
[0xc0000083 - x86_64_specific_MSRs_offset] = "CSTAR",
[0xc0000084 - x86_64_specific_MSRs_offset] = "SYSCALL_MASK",
<SNIP>
[0xc0000103 - x86_64_specific_MSRs_offset] = "TSC_AUX",
[0xc0000104 - x86_64_specific_MSRs_offset] = "AMD64_TSC_RATIO",
};
#define x86_AMD_V_KVM_MSRs_offset 0xc0010000
static const char *x86_AMD_V_KVM_MSRs[] = {
[0xc0010000 - x86_AMD_V_KVM_MSRs_offset] = "K7_EVNTSEL0",
<SNIP>
[0xc0010114 - x86_AMD_V_KVM_MSRs_offset] = "VM_CR",
[0xc0010115 - x86_AMD_V_KVM_MSRs_offset] = "VM_IGNNE",
[0xc0010117 - x86_AMD_V_KVM_MSRs_offset] = "VM_HSAVE_PA",
<SNIP>
[0xc0010240 - x86_AMD_V_KVM_MSRs_offset] = "F15H_NB_PERF_CTL",
[0xc0010241 - x86_AMD_V_KVM_MSRs_offset] = "F15H_NB_PERF_CTR",
[0xc0010280 - x86_AMD_V_KVM_MSRs_offset] = "F15H_PTSC",
};
Then these will in turn be hooked up in a follow up patch to be used by
strarrays__scnprintf().
Cc: Adrian Hunter <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
We need it for things like MSRs that are sparse and go over MAXINT.
Cc: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Luis Cláudio Gonçalves <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Since the following commit:
b4adfe8e05f1 ("locking/lockdep: Remove unused argument in __lock_release")
@nested is no longer used in lock_release(), so remove it from all
lock_release() calls and friends.
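An illustrative caller update for that signature change (the 'obj' variable
is made up):

  /* before: lock_release(struct lockdep_map *lock, int nested, unsigned long ip) */
  lock_release(&obj->dep_map, 0, _RET_IP_);

  /* after: the @nested argument is gone */
  lock_release(&obj->dep_map, _RET_IP_);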
Signed-off-by: Qian Cai <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Will Deacon <[email protected]>
Acked-by: Daniel Vetter <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Newer versions of GCC (>= 9) demand that the bound passed to strncpy() be
explicitly smaller than the size of the destination. Thus, the terminating
NUL character has to be taken into account in the strncpy() call.
This will avoid the following compiling error:
tlbie_test.c: In function 'main':
tlbie_test.c:639:4: error: 'strncpy' specified bound 100 equals destination size
strncpy(logdir, optarg, LOGDIR_NAME_SIZE);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
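A minimal sketch of the usual fix for this class of warning, with the
constants mirroring the error message above (needs <string.h>):

  #define LOGDIR_NAME_SIZE 100

  static void set_logdir(char logdir[LOGDIR_NAME_SIZE], const char *optarg)
  {
          /* bound the copy to size - 1 and terminate explicitly, so the
           * terminating NUL always fits inside the destination */
          strncpy(logdir, optarg, LOGDIR_NAME_SIZE - 1);
          logdir[LOGDIR_NAME_SIZE - 1] = '\0';
  }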
Signed-off-by: Desnes A. Nunes do Rosario <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Out of the three nc implementations widely in use, at least two (BSD netcat
and nmap-ncat) do not support -l combined with -s. Modify the nc invocation
to be accepted by all of them.
Fixes: 17a90a788473 ("selftests/bpf: test that GSO works in lwt_ip_encap")
Signed-off-by: Jiri Benc <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/bpf/9f177682c387f3f943bb64d849e6c6774df3c5b4.1570539863.git.jbenc@redhat.com
|
|
Many distributions enable rp_filter. However, the flow dissector test
generates packets that have 1.1.1.1 set as (inner) source address without
this address being reachable. This causes the selftest to fail.
The selftests should not assume a particular initial configuration. Switch
off rp_filter.
Fixes: 50b3ed57dee9 ("selftests/bpf: test bpf flow dissection")
Signed-off-by: Jiri Benc <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Petar Penkov <[email protected]>
Link: https://lore.kernel.org/bpf/513a298f53e99561d2f70b2e60e2858ea6cda754.1570539863.git.jbenc@redhat.com
|
|
Validate BPF_CORE_READ correctness and the handling of up to 9 levels of
nesting using cyclic task->(group_leader->)*->tgid chains.
Also add a test of the maximum-depth BPF_CORE_READ_STR_INTO() macro.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: John Fastabend <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Add a few macros simplifying BCC-like multi-level probe reads, while also
emitting CO-RE relocations for each read.
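An illustrative use of the variadic read these macros enable (assuming the
BPF_CORE_READ() macro from the added header; the field chain is just an
example):

  /* given struct task_struct *task, this single statement follows
   * task->mm->exe_file->f_inode->i_ino, emitting a CO-RE relocation
   * for every pointer hop instead of a chain of bpf_probe_read() calls */
  __u64 ino = BPF_CORE_READ(task, mm, exe_file, f_inode, i_ino);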
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: John Fastabend <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Move bpf_helpers.h, bpf_tracing.h, and bpf_endian.h into libbpf. Move
the bpf_helper_defs.h generation into libbpf's Makefile. Ensure all those
headers are installed alongside the other libbpf headers. Also, adjust
the selftests' and samples' include paths to include libbpf now.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Split off the PT_REGS-related helpers into the bpf_tracing.h header. Adjust
selftests and samples to include it where necessary.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: John Fastabend <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
To allow adding a variadic BPF_CORE_READ macro with slightly different
syntax and semantics, define CORE_READ in the CO-RE reloc tests as
a thin wrapper around the low-level bpf_core_read() macro, which in turn is
just a wrapper around bpf_probe_read().
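A sketch of such a thin wrapper (the exact macro in the tests may differ
slightly):

  /* bpf_core_read(dst, sz, src) mirrors bpf_probe_read() but records a CO-RE
   * relocation for the source expression; CORE_READ just fixes the size */
  #define CORE_READ(dst, src) bpf_core_read(dst, sizeof(*(dst)), src)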
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: John Fastabend <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Split off a few legacy things from bpf_helpers.h into a separate
bpf_legacy.h file:
- load_{byte|half|word};
- remove extra inner_idx and numa_node fields from bpf_map_def and
introduce bpf_map_def_legacy for use in samples;
- move BPF_ANNOTATE_KV_PAIR into bpf_legacy.h.
Adjust samples and selftests accordingly by either including
bpf_legacy.h and using bpf_map_def_legacy, or switching to BTF-defined
maps altogether.
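For reference, a BTF-defined map in the style the samples and selftests were
switched to (map name and sizes are made up; the __uint()/__type() macros and
SEC() come from bpf_helpers.h):

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 1024);
          __type(key, __u32);
          __type(value, __u64);
  } counts SEC(".maps");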
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: John Fastabend <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Having GCC provide its own bpf-helper.h is not the right approach and is
going to be changed. Undo the bpf_helpers.h change before moving
bpf_helpers.h into libbpf.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Song Liu <[email protected]>
Acked-by: Ilya Leoshkevich <[email protected]>
Acked-by: John Fastabend <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull Kselftest fixes from Shuah Khan:
"Fixes for existing tests and the framework.
Cristian Marussi's patches add the ability to skip targets (tests) and
to exclude tests that didn't build from the run list. These patches improve
the Kselftest results. The ability to skip targets helps avoid running
tests that aren't supported in certain environments. As an example, the
bpf tests from mainline aren't supported on stable kernels and have a
dependency on bleeding-edge llvm. Being able to skip bpf on systems
that can't meet this llvm dependency will be helpful.
Kselftest can be built and installed from the main Makefile. This
change helps simplify Kselftest use cases, which addresses a request from
users.
Kees Cook added per test timeout support to limit individual test
run-time"
* tag 'linux-kselftest-5.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
selftests: watchdog: Add command line option to show watchdog_info
selftests: watchdog: Validate optional file argument
selftests/kselftest/runner.sh: Add 45 second timeout per test
kselftest: exclude failed TARGETS from runlist
kselftest: add capability to skip chosen TARGETS
selftests: Add kselftest-all and kselftest-install targets
|