|
An earlier kernel patch allowed enabling PT and LBR at the same time on
Goldmont.
commit ccbebba4c6bf ("perf/x86/intel/pt: Bypass PT vs. LBR exclusivity
if the core supports it")
However, users still cannot use Intel PT and LBRs simultaneously:
$ sudo perf record -e cycles,intel_pt//u -b -- sleep 1
Error:
PMU Hardware doesn't support sampling/overflow-interrupts.
The perf tool implicitly adds a dummy event for Intel PT. The dummy
event is a software event, which doesn't support LBR.
Always set "no branch" for the dummy event in Intel PT.
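A minimal sketch of the idea, assuming the dummy/tracking evsel is called
tracking_evsel and using the evsel sample-bit helper of that era; the
function name is made up and this is not the exact hunk from the patch:
#include "evsel.h"      /* perf's tree-internal evsel API */

static void intel_pt_strip_branches_from_dummy(struct perf_evsel *tracking_evsel)
{
        /* A software dummy event cannot carry LBR data, so make sure no
         * branch sampling is requested on it. */
        perf_evsel__reset_sample_bit(tracking_evsel, BRANCH_STACK);
        tracking_evsel->attr.branch_sample_type = 0;
}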
Signed-off-by: Kan Liang <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The tracking event (a dummy software event) was introduced to collect
side-band information; any additional sampling on it is wasteful.
no_aux_samples should be set for the tracking event.
Signed-off-by: Kan Liang <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Macro fusion merges two instructions into a single micro-op. The Intel
Core platform performs this hardware optimization under limited
circumstances.
For example, CMP + JCC can be "fused" and executed/retired together.
With sampling, this means the sample sometimes lands on the JCC and
sometimes on the CMP, so the fused instruction pair should be considered
together.
On Nehalem, the fused instruction pairs are:
cmp/test + jcc.
On other newer CPUs:
cmp/test/add/sub/and/inc/dec + jcc.
This patch adds an x86-specific function which checks whether two
instructions form a "fused" pair. For non-x86 arches, the function is
just NULL.
Changelog:
v4: Move the CPU model checking to symbol__disassemble and save the CPU
family/model in the arch structure.
This avoids re-checking every time a jump arrow is printed.
v3: Add checking for Nehalem (CMP, TEST). For other newer Intel CPUs
just check by default (CMP, TEST, ADD, SUB, AND, INC, DEC).
v2: Remove the original weak function. Arnaldo pointed out that doing it
as a weak function that will be overridden by the host arch doesn't
work. So now it's implemented as an arch-specific function.
Committer fix:
Do not access evsel->evlist->env->cpuid, ->env can be null, introduce
perf_evsel__env_cpuid(), just like perf_evsel__env_arch(), also used in
this function call.
The original patch was segfaulting 'perf top' + annotation.
But this essentially disables the fused-instructions augmentation in
'perf top'; the right thing to do is to get the cpuid from the running
kernel, which is left for a later patch.
Signed-off-by: Yao Jin <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Add ptwrite to the op code map and the perf tools new instructions test.
To run the test:
$ tools/perf/perf test "x86 ins"
39: Test x86 instruction decoder - new instructions : Ok
Or to see the details:
$ tools/perf/perf test -v "x86 ins" 2>&1 | grep ptwrite
For information about ptwrite, refer to the Intel SDM.
Signed-off-by: Adrian Hunter <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Branch tracing is enabled by default, so a fake config bit called 'pt'
(pass-through) was added to allow the 'branch enable' bit to have effect.
Add a default config of 'pt,branch', which allows users to disable
branch tracing using 'branch=0' instead of having to specify
'pt,branch=0'.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Andi Kleen <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
These defines were probably dragged in from sampling support in earlier
patches. They can be put back when needed.
Signed-off-by: Kim Phillips <[email protected]>
Acked-by: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Removing various instances of unnecessary includes, reducing the maze of
header dependencies.
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Not needed in this header, added to the places that need poll(), wait()
and a few other prototypes.
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
And remove it from util.h, disentangling it a bit more.
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Removing it from util.h, part of an effort to disentangle the include
hell that makes changes to util.h, or to anything it includes, cause a
complete rebuild of the tools.
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Needed to use the PRI[xu](32,64) formatting macros.
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
To test it, build samples/statx/test_statx, which I did as:
$ make headers_install
$ cc -I ~/git/linux/usr/include samples/statx/test-statx.c -o /tmp/statx
And then use perf trace on it:
# perf trace -e statx /tmp/statx /etc/passwd
statx(/etc/passwd) = 0
results=7ff
Size: 3496 Blocks: 8 IO Block: 4096 regular file
Device: fd:00 Inode: 280156 Links: 1
Access: (0644/-rw-r--r--) Uid: 0 Gid: 0
Access: 2017-03-29 16:01:01.650073438-0300
Modify: 2017-03-10 16:25:14.156479354-0300
Change: 2017-03-10 16:25:14.171479328-0300
0.000 ( 0.007 ms): statx/30648 statx(dfd: CWD, filename: 0x7ef503f4, flags: SYMLINK_NOFOLLOW, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7fff7ef4eb10) = 0
#
Using the test-statx.c options to change the mask:
# perf trace -e statx /tmp/statx -O /etc/passwd > /dev/null
0.000 ( 0.008 ms): statx/30745 statx(dfd: CWD, filename: 0x3a0753f4, flags: SYMLINK_NOFOLLOW, mask: BTIME, buffer: 0x7ffd3a0735c0) = 0
#
# perf trace -e statx /tmp/statx -A /etc/passwd > /dev/null
0.000 ( 0.010 ms): statx/30757 statx(dfd: CWD, filename: 0xa94e63f4, flags: SYMLINK_NOFOLLOW|NO_AUTOMOUNT, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7ffea94e49d0) = 0
#
# trace --no-inherit -e statx /tmp/statx -F /etc/passwd > /dev/null
0.000 ( 0.011 ms): statx(dfd: CWD, filename: 0x3b02d3f3, flags: SYMLINK_NOFOLLOW|STATX_FORCE_SYNC, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7ffd3b02c850) = 0
#
# trace --no-inherit -e statx /tmp/statx -F -L /etc/passwd > /dev/null
0.000 ( 0.008 ms): statx(dfd: CWD, filename: 0x15cff3f3, flags: STATX_FORCE_SYNC, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7fff15cfdda0) = 0
#
# trace --no-inherit -e statx /tmp/statx -D -O /etc/passwd > /dev/null
0.000 ( 0.009 ms): statx(dfd: CWD, filename: 0xfa37f3f3, flags: SYMLINK_NOFOLLOW|STATX_DONT_SYNC, mask: BTIME, buffer: 0x7ffffa37da20) = 0
#
Adding a probe to get the filename collected as well:
# perf probe 'vfs_getname=getname_flags:72 pathname=result->name:string'
Added new event:
probe:vfs_getname (on getname_flags:72 with pathname=result->name:string)
You can now use it in all perf tools, such as:
perf record -e probe:vfs_getname -aR sleep 1
# trace --no-inherit -e statx /tmp/statx -D -O /etc/passwd > /dev/null
0.169 ( 0.007 ms): statx(dfd: CWD, filename: /etc/passwd, flags: SYMLINK_NOFOLLOW|STATX_DONT_SYNC, mask: BTIME, buffer: 0x7ffda9bf50f0) = 0
#
The same technique could be used to collect and beautify the result put
in the 'buffer' argument.
Finally, do a system-wide 'perf trace' session looking for any use of
statx, then run the test proggie with various flags:
# trace -e statx
16612.967 ( 0.028 ms): statx/4562 statx(dfd: CWD, filename: /tmp/statx, flags: SYMLINK_NOFOLLOW, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7ffef195d660) = 0
33064.447 ( 0.011 ms): statx/4569 statx(dfd: CWD, filename: /tmp/statx, flags: SYMLINK_NOFOLLOW|STATX_FORCE_SYNC, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7ffc5484c790) = 0
36050.891 ( 0.023 ms): statx/4576 statx(dfd: CWD, filename: /tmp/statx, flags: SYMLINK_NOFOLLOW, mask: BTIME, buffer: 0x7ffeb18b66e0) = 0
38039.889 ( 0.023 ms): statx/4584 statx(dfd: CWD, filename: /tmp/statx, flags: SYMLINK_NOFOLLOW, mask: TYPE|MODE|NLINK|UID|GID|ATIME|MTIME|CTIME|INO|SIZE|BLOCKS|BTIME, buffer: 0x7fff1db0ea90) = 0
^C#
This one also starts moving the beautifiers from files directly included
in builtin-trace.c to separate objects + a beauty.h header with
prototypes, so that we can add test cases in tools/perf/tests/ to fire
syscalls with various arguments and then get them intercepted as
syscalls:sys_enter_foo or raw_syscalls:sys_enter + sys_exit to then
format and check that the formatted output is the one we expect.
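As an illustration of the kind of beautifier being split out here (not
the code added by this patch; the helper name and the subset of flags
handled are assumptions), decoding the statx 'flags' argument into the
SYMLINK_NOFOLLOW|... form seen above could look like this:
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>

/* Caller provides a buffer large enough for the decoded flag names. */
static void statx_flags__string(unsigned int flags, char *bf, size_t size)
{
        static const struct { unsigned int bit; const char *name; } tbl[] = {
                { AT_SYMLINK_NOFOLLOW, "SYMLINK_NOFOLLOW" },
                { AT_NO_AUTOMOUNT,     "NO_AUTOMOUNT"     },
        };
        size_t i;

        bf[0] = '\0';
        for (i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++) {
                if (!(flags & tbl[i].bit))
                        continue;
                if (bf[0])
                        strncat(bf, "|", size - strlen(bf) - 1);
                strncat(bf, tbl[i].name, size - strlen(bf) - 1);
        }
}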
Cc: Adrian Hunter <[email protected]>
Cc: Al Viro <[email protected]>
Cc: David Ahern <[email protected]>
Cc: David Howells <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
SDT marker arguments are in the N@OP format, where N is the size of the
argument and OP is the actual assembly operand. OP is an arch-dependent
component, hence its parsing logic should also be placed under
tools/perf/arch/.
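A minimal sketch of splitting such a descriptor (illustrative, not the
perf parser itself):
#include <stdio.h>

/* Split an SDT argument descriptor of the form "N@OP" (e.g. "-4@%eax",
 * "8@%rbx") into its size (N) and operand (OP) parts. */
static int sdt_split_arg(const char *desc, int *size, char op[64])
{
        return sscanf(desc, "%d@%63s", size, op) == 2 ? 0 : -1;
}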
Signed-off-by: Ravi Bangoria <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Alexis Berlemont <[email protected]>
Cc: Hemant Kumar <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Naveen N. Rao <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
I found a couple of events using the al, bl, cl and dl registers for
arguments. These are not directly accepted by uprobe_events and thus
need to be mapped to ax, bx, cx and dx respectively.
A few examples:
/usr/bin/qemu-system-s390x
css_adapter_interrupt: 1@%bl
css_chpid_add: 1@%cl 1@%sil 1@%dl
dma_bdrv_io: 8@%rbx 8@%rbp -8@%r14 1@%al
/usr/bin/postgres
buffer__read__done: ... -1@-bash -1@%al
buffer__read__start: ... -1@%al
I didn't find any sdt events using the ah, bh, ... registers, but I also
don't see any reason not to use them, so there might be rare events
using these registers, and if so, perf should have renaming logic for
them too.
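A hedged sketch of that renaming; the table contents and function name
are illustrative, not the exact perf code:
#include <string.h>

/* Map 8-bit register names that uprobe_events rejects onto the
 * corresponding 16-bit names it does accept. */
static const char *sdt_rename_reg(const char *reg)
{
        static const struct { const char *sdt, *uprobe; } renames[] = {
                { "al", "ax" }, { "bl", "bx" },
                { "cl", "cx" }, { "dl", "dx" },
        };
        size_t i;

        for (i = 0; i < sizeof(renames) / sizeof(renames[0]); i++)
                if (!strcmp(reg, renames[i].sdt))
                        return renames[i].uprobe;
        return reg;     /* already acceptable, e.g. ax, rsi, ... */
}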
Signed-off-by: Ravi Bangoria <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Alexis Berlemont <[email protected]>
Cc: Hemant Kumar <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Naveen N. Rao <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
'perf probe' fails for sdt markers whose arguments use rNN registers
(with a b/w/d suffix), %rsp, %esp, %sil, etc. Add renaming logic for
these registers.
Signed-off-by: Ravi Bangoria <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Alexis Berlemont <[email protected]>
Cc: Madhavan Srinivasan <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Naveen N. Rao <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
An sdt probe can be associated with arguments but they were not passed
to the user probe tracing interface (uprobe_events); this patch adapts
the sdt argument descriptors according to the uprobe input format.
As the uprobe parser does not support scaled address mode, perf will
skip arguments which cannot be adapted to the uprobe format.
Here are the results:
$ perf buildid-cache -v --add test_sdt
$ perf probe -x test_sdt sdt_libfoo:table_frob
$ perf probe -x test_sdt sdt_libfoo:table_diddle
$ perf record -e sdt_libfoo:table_frob -e sdt_libfoo:table_diddle test_sdt
$ perf script
test_sdt ... 666.255678: sdt_libfoo:table_frob: (4004d7) arg0=0 arg1=0
test_sdt ... 666.255683: sdt_libfoo:table_diddle: (40051a) arg0=0 arg1=0
test_sdt ... 666.255686: sdt_libfoo:table_frob: (4004d7) arg0=1 arg1=2
test_sdt ... 666.255689: sdt_libfoo:table_diddle: (40051a) arg0=3 arg1=4
test_sdt ... 666.255692: sdt_libfoo:table_frob: (4004d7) arg0=2 arg1=4
test_sdt ... 666.255694: sdt_libfoo:table_diddle: (40051a) arg0=6 arg1=8
Signed-off-by: Alexis Berlemont <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Hemant Kumar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Having "test" in almost all test descriptions is redundant, simplify it
removing and rewriting tests with such descriptions.
End result:
# perf test
1: vmlinux symtab matches kallsyms : Ok
2: Detect openat syscall event : Ok
3: Detect openat syscall event on all cpus : Ok
4: Read samples using the mmap interface : Ok
5: Parse event definition strings : Ok
6: PERF_RECORD_* events & perf_sample fields : Ok
7: Parse perf pmu format : Ok
8: DSO data read : Ok
9: DSO data cache : Ok
10: DSO data reopen : Ok
11: Roundtrip evsel->name : Ok
12: Parse sched tracepoints fields : Ok
13: syscalls:sys_enter_openat event fields : Ok
14: Setup struct perf_event_attr : Ok
15: Match and link multiple hists : Ok
16: 'import perf' in python : Ok
17: Breakpoint overflow signal handler : Ok
18: Breakpoint overflow sampling : Ok
19: Number of exit events of a simple workload : Ok
20: Software clock events period values : Ok
21: Object code reading : Ok
22: Sample parsing : Ok
23: Use a dummy software event to keep tracking: Ok
24: Parse with no sample_id_all bit set : Ok
25: Filter hist entries : Ok
26: Lookup mmap thread : Ok
27: Share thread mg : Ok
28: Sort output of hist entries : Ok
29: Cumulate child hist entries : Ok
30: Track with sched_switch : Ok
31: Filter fds with revents mask in a fdarray : Ok
32: Add fd to a fdarray, making it autogrow : Ok
33: kmod_path__parse : Ok
34: Thread map : Ok
35: LLVM search and compile :
35.1: Basic BPF llvm compile : Ok
35.2: kbuild searching : Ok
35.3: Compile source for BPF prologue generation: Ok
35.4: Compile source for BPF relocation : Ok
36: Session topology : Ok
37: BPF filter :
37.1: Basic BPF filtering : Ok
37.2: BPF prologue generation : Ok
37.3: BPF relocation checker : Ok
38: Synthesize thread map : Ok
39: Synthesize cpu map : Ok
40: Synthesize stat config : Ok
41: Synthesize stat : Ok
42: Synthesize stat round : Ok
43: Synthesize attr update : Ok
44: Event times : Ok
45: Read backward ring buffer : Ok
46: Print cpu map : Ok
47: Probe SDT events : Ok
48: is_printable_array : Ok
49: Print bitmap : Ok
50: perf hooks : Ok
51: x86 rdpmc : Ok
52: Convert perf time to TSC : Ok
53: DWARF unwind : Ok
54: x86 instruction decoder - new instructions : Ok
55: Intel cqm nmi context read : Skip
#
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Another step in supporting cross-arch annotation.
The arch-specific tables are put in:
tools/perf/arch/$ARCH/annotation/instructions.c
which, so far, just plug instructions into a bunch of parsers/formatters,
but may gain more as the need arises.
This is an alternative implementation to a previous attempt made by Ravi
Bangoria.
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Chris Riyder <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kim Phillips <[email protected]>
Cc: Markus Trippelsdorf <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Naveen N. Rao <[email protected]>
Cc: Pawel Moll <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ravi Bangoria <[email protected]>
Cc: Russell King <[email protected]>
Cc: Taeung Song <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Introduced in commit f9afc6197e9b ("x86: Wire up protection keys system
calls").
This will make 'perf trace' aware of them on x86_64.
Cc: Adrian Hunter <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
To catch up with the recent compat pread/pwrite changes which, albeit
not being used by 'perf trace' due to some raw_syscalls tracepoint
limitations, trigger this warning when building perf:
Warning: x86_64's syscall_64.tbl differs from kernel
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Implement the code to match CPU types to mapfile types for x86 based on
CPUID. This extends an existing similar function, but changes it to use
the x86 mapfile cpu description. This allows resolving event lists
generated by jevents.
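A hedged sketch of how such a CPUID description string could be built
for matching against the x86 mapfile; the exact string format
(e.g. "GenuineIntel-6-4E") is an assumption here:
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

static int get_cpuid_str(char *buf, size_t sz)
{
        unsigned int eax, ebx, ecx, edx, family, model;
        char vendor[13];

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
                return -1;
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';

        __get_cpuid(1, &eax, &ebx, &ecx, &edx);
        family = (eax >> 8) & 0xf;
        model  = (eax >> 4) & 0xf;
        if (family == 0x6 || family == 0xf)
                model += ((eax >> 16) & 0xf) << 4;      /* extended model */
        if (family == 0xf)
                family += (eax >> 20) & 0xff;           /* extended family */

        return snprintf(buf, sz, "%s-%u-%X", vendor, family, model);
}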
Signed-off-by: Andi Kleen <[email protected]>
Signed-off-by: Sukadev Bhattiprolu <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Madhavan Srinivasan <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The address filter is needed to help decode the trace, so store it in
the AUXTRACE_INFO event.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Previously the maximum non-turbo ratio was calculated from the TSC
frequency, assuming a 100 MHz multiplier, which is correct for current
hardware supporting Intel PT. However, more recent kernels also export
the value directly, so use that in preference to the calculated one.
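A minimal sketch of the fallback calculation described above; the
rounding behaviour is an assumption:
/* Derive the maximum non-turbo ratio from the TSC frequency, assuming a
 * 100 MHz multiplier and rounding to the nearest integer ratio. */
static unsigned int max_non_turbo_ratio(unsigned long long tsc_freq_hz)
{
        return (unsigned int)((tsc_freq_hz + 50000000ULL) / 100000000ULL);
}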
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
In 293d5b439483 ("perf probe: Support probing on offline cross-arch binary")
DWARF register tables were introduced for many architectures, with the one
for the "dx" register being broken for x86_64. This got noticed by the
'perf test bpf' testcase, which shows this difference between a failing
run, with the aforementioned patch, and a successful one:
-Writing event: p:perf_bpf_probe/func _text+5197232 f_mode=+68(%di):x32 offset=%si:s64 orig=dx:s32
-Failed to write event: Invalid argument
-bpf_probe: failed to apply perf probe eventsFailed to add events selected by BPF
+Writing event: p:perf_bpf_probe/func _text+5197232 f_mode=+68(%di):x32 offset=%si:s64 orig=%dx:s32
Add the missing '%' to '%dx' to fix this.
Acked-by: Masami Hiramatsu <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Fixes: 293d5b439483 ("perf probe: Support probing on offline cross-arch binary")
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Support probing on an offline cross-architecture binary by getting the
target machine's arch from the ELF header and choosing the correct
register string for that machine.
Here is an example:
-----
$ perf probe --vmlinux=./vmlinux-arm --definition 'do_sys_open $params'
p:probe/do_sys_open do_sys_open+0 dfd=%r5:s32 filename=%r1:u32 flags=%r6:s32 mode=%r3:u16
-----
Here, we can get probe/do_sys_open from the above and append it to the
target machine's tracing/kprobe_events file in the tracefs mount point,
usually /sys/kernel/debug/tracing/kprobe_events (or
/sys/kernel/tracing/kprobe_events).
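A hedged sketch of reading the target arch from the ELF header with
libelf (not the exact perf code):
#include <fcntl.h>
#include <gelf.h>
#include <unistd.h>

/* Return the ELF e_machine of 'path' (e.g. EM_ARM, EM_AARCH64,
 * EM_X86_64), so the matching register table can be selected. */
static int elf_machine(const char *path)
{
        int fd = open(path, O_RDONLY);
        GElf_Ehdr ehdr;
        Elf *elf;
        int machine = -1;

        if (fd < 0)
                return -1;
        elf_version(EV_CURRENT);
        elf = elf_begin(fd, ELF_C_READ, NULL);
        if (elf && gelf_getehdr(elf, &ehdr))
                machine = ehdr.e_machine;
        if (elf)
                elf_end(elf);
        close(fd);
        return machine;
}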
Signed-off-by: Masami Hiramatsu <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/147214229717.23638.6440579792548044658.stgit@devbox
[ Add definition for EM_AARCH64 to fix the build on at least centos 6, debian 7 & ubuntu 12.04.5 ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
In order to successfully decode Intel PT traces, context switch events
are needed from the moment the trace starts. Currently that is ensured
by using the 'immediate' flag which enables the switch event when it is
opened.
However, since commit 86c2786994bd ("perf intel-pt: Add support for
PERF_RECORD_SWITCH") that might not always happen. When tracing
system-wide the context switch event is added to the tracking event
which was not set as 'immediate'. Change that so it is.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: [email protected] # v4.4+
Fixes: 86c2786994bd ("perf intel-pt: Add support for PERF_RECORD_SWITCH")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull libnvdimm updates from Dan Williams:
- Replace pcommit with ADR / directed-flushing.
The pcommit instruction, which has not shipped on any product, is
deprecated. Instead, the requirement is that platforms implement
either ADR, or provide one or more flush addresses per nvdimm.
ADR (Asynchronous DRAM Refresh) flushes data in posted write buffers
to the memory controller on a power-fail event.
Flush addresses are defined in ACPI 6.x as an NVDIMM Firmware
Interface Table (NFIT) sub-structure: "Flush Hint Address Structure".
A flush hint is an mmio address that when written and fenced assures
that all previous posted writes targeting a given dimm have been
flushed to media.
- On-demand ARS (address range scrub).
Linux uses the results of the ACPI ARS commands to track bad blocks
in pmem devices. When latent errors are detected we re-scrub the
media to refresh the bad block list, userspace can also request a
re-scrub at any time.
- Support for the Microsoft DSM (device specific method) command
format.
- Support for EDK2/OVMF virtual disk device memory ranges.
- Various fixes and cleanups across the subsystem.
* tag 'libnvdimm-for-4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: (41 commits)
libnvdimm-btt: Delete an unnecessary check before the function call "__nd_device_register"
nfit: do an ARS scrub on hitting a latent media error
nfit: move to nfit/ sub-directory
nfit, libnvdimm: allow an ARS scrub to be triggered on demand
libnvdimm: register nvdimm_bus devices with an nd_bus driver
pmem: clarify a debug print in pmem_clear_poison
x86/insn: remove pcommit
Revert "KVM: x86: add pcommit support"
nfit, tools/testing/nvdimm/: unify shutdown paths
libnvdimm: move ->module to struct nvdimm_bus_descriptor
nfit: cleanup acpi_nfit_init calling convention
nfit: fix _FIT evaluation memory leak + use after free
tools/testing/nvdimm: add manufacturing_{date|location} dimm properties
tools/testing/nvdimm: add virtual ramdisk range
acpi, nfit: treat virtual ramdisk SPA as pmem region
pmem: kill __pmem address space
pmem: kill wmb_pmem()
libnvdimm, pmem: use nvdimm_flush() for namespace I/O writes
fs/dax: remove wmb_pmem()
libnvdimm, pmem: flush posted-write queues on shutdown
...
|
|
The pcommit instruction is being deprecated in favor of either ADR
(asynchronous DRAM refresh: flush-on-power-fail) at the platform level, or
posted-write-queue flush addresses as defined by the ACPI 6.x NFIT (NVDIMM
Firmware Interface Table).
Cc: Thomas Gleixner <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: Josh Poimboeuf <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Xiao Guangrong <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Ross Zwisler <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
|
|
Previous patches added support for Intel's AVX-512 instructions to the
kernel and perf tools instruction decoders.
AVX-512 instructions are documented in Intel Architecture Instruction
Set Extensions Programming Reference (February 2016).
Add a representative set of instructions to perf's "new instructions"
test. e.g.
perf test "new instructions"
Or to view a particular instruction:
perf test -v "new instructions" 2>&1 | grep vbroadcasti64x4
Signed-off-by: Adrian Hunter <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: X86 ML <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
vcvtph2ps does not have an immediate operand, so remove the erroneous
'Ib' from its opcode map entry. Add vcvtph2ps to the perf tools new
instructions test to verify it.
Signed-off-by: Adrian Hunter <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: X86 ML <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Another case of a file using definitions and getting them by chance, via
indirect header inclusion; fix it.
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The tools so far have been using the strerror_r() GNU variant, which
returns a string, be it the buffer passed or something else.
But that, besides being tricky in cases where we expect that the
function using strerror_r() returns the error formatted in a provided
buffer (we have to check if it returned something else and copy that
instead), breaks the build on systems not using glibc, like Alpine
Linux, where the musl libc is used.
So, introduce yet another wrapper, str_error_r(), that has the GNU
interface, but uses the portable XSI variant of strerror_r(), so that
users rest assured that the provided buffer is used and is what gets
returned.
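A minimal sketch of such a wrapper, assuming the XSI strerror_r() is the
one in effect (i.e. _GNU_SOURCE is not defined when <string.h> is
included):
#include <stdio.h>
#include <string.h>

char *str_error_r(int errnum, char *buf, size_t buflen)
{
        int err = strerror_r(errnum, buf, buflen);      /* XSI: returns an int */

        if (err)
                snprintf(buf, buflen, "INTERNAL ERROR: strerror_r(%d, %p, %zd)=%d",
                         errnum, buf, buflen, err);
        return buf;
}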
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Noticed by the build system, that emitted this warning:
Warning: x86_64's syscall_64.tbl differs from kernel
This was due to the wiring up of the recently added preadv2 & pwritev2
syscalls to the compat code, which hadn't been done by the patch
introducing those syscalls: 4babf2c5efb7 ("x86: wire up preadv2 and
pwritev2").
The patch doing the compat wiring was:
482dd2ef1244 ("x86/syscalls: Wire up compat readv2/writev2 syscalls")
This just silences the perf build warning, as compat syscalls still
can't be supported in 'perf trace' due to limitations in the
raw_syscalls:sys_{enter,exit} tracepoints it relies on.
Reported-by: Ingo Molnar <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Milian Wolff <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
To match the semantics of list.h in the kernel, which is used to
implement those macros.
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Milian Wolff <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Taeung Song <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Because the destructor will call close(), and that will do the disable.
And destructors can accept NULL, just like free(), so there is no need
to check for it.
Cc: Adrian Hunter <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Jiri Olsa <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Support x86(32-bit) cross platform callchain unwind.
Signed-off-by: He Kuang <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Ekaterina Tumanova <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Sukadev Bhattiprolu <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
CONFIG_LIBUNWIND/NO_LIBUNWIND are changed to CONFIG_LOCAL_LIBUNWIND/
NO_LOCAL_LIBUNWIND for retaining the local unwind features. The new
CONFIG_LIBUNWIND means that local or remote unwind (or both) is
supported, and NO_LIBUNWIND means that neither local nor remote unwind
is supported.
LIBUNWIND_LIBS is dropped from LDFLAGS if local libunwind is not
supported.
Signed-off-by: He Kuang <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Ekaterina Tumanova <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Sukadev Bhattiprolu <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Add basic plumbing for TopDown in perf stat.
TopDown is intended to replace the frontend cycles idle / backend cycles
idle metrics in the standard perf stat output. These metrics are not
reliable in many workloads, due to out-of-order effects.
This implements a new --topdown mode in perf stat (similar to
--transaction) that measures the pipeline bottlenecks using standardized
formulas. The measurement can all be done with 5 counters (one of them a
fixed counter).
The result is four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring
that describe the CPU pipeline behavior at a high level.
The full TopDown methodology has many hierarchical metrics. This
implementation only supports level 1, which can be collected without
multiplexing. A full implementation of TopDown on top of perf is
available in pmu-tools toplev (http://github.com/andikleen/pmu-tools).
The current version works on Intel Core CPUs starting with Sandy Bridge,
and Atom CPUs starting with Silvermont. In principle the generic metrics
should also be implementable on other out-of-order CPUs.
TopDown level 1 uses a set of abstracted events which are generic to
out-of-order CPU cores (although some CPUs may not implement all of
them):
topdown-total-slots       Available slots in the pipeline
topdown-slots-issued      Slots issued into the pipeline
topdown-slots-retired     Slots successfully retired
topdown-fetch-bubbles     Pipeline gaps in the frontend
topdown-recovery-bubbles  Pipeline gaps during recovery from misspeculation
These events then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
Add a new --topdown option to enable the events. When --topdown is
specified, set up events for all topdown events supported by the kernel.
Add topdown-* as a special case to the event parser, as is needed for
all events containing '-'.
The actual code to compute the metrics is in follow-on patches.
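For reference, a sketch of the standard TopDown level-1 formulas that
those follow-on patches compute, expressed as fractions of total
pipeline slots:
struct topdown_counts {
        double total_slots, slots_issued, slots_retired;
        double fetch_bubbles, recovery_bubbles;
};

static void topdown_level1(const struct topdown_counts *c,
                           double *fe_bound, double *bad_spec,
                           double *retiring, double *be_bound)
{
        *fe_bound = c->fetch_bubbles / c->total_slots;
        *bad_spec = (c->slots_issued - c->slots_retired +
                     c->recovery_bubbles) / c->total_slots;
        *retiring = c->slots_retired / c->total_slots;
        *be_bound = 1.0 - *fe_bound - *bad_spec - *retiring;
}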
v2: Use standard sysctl read function.
v3: Move x86 specific code to arch/
v4: Enable --metric-only implicitly for topdown.
v5: Add --single-thread option to not force per core mode
v6: Fix output order of topdown metrics
v7: Allow combining with -d
v8: Remove --single-thread again
v9: Rename functions, adding arch_ and topdown_.
v10: Expand man page and describe TopDown better
Paste intro into commit description.
Print error when malloc fails.
Signed-off-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
It is possible that all events in an evlist are overwritable.
perf_event__synth_time_conv() should not crash in this case.
record__pick_pc() is used to check availability.
Signed-off-by: Wang Nan <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: He Kuang <[email protected]>
[ Split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Introduced in commit 4babf2c5efb7 ("x86: wire up preadv2 and pwritev2").
This will make 'perf trace' aware of them.
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Milian Wolff <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Turn current clean output:
$ make clean
rm -f arch/x86/include/generated/asm/syscalls_64.c
CLEAN libbpf
CLEAN libapi
into:
$ make clean
CLEAN x86
CLEAN libapi
CLEAN libbpf
Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: TJ <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Instead receive a callchain_param pointer to configure callchain
aspects, not doing so if NULL is passed.
This will allow fine-grained control over which evsels in an evlist get
callchains enabled.
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Milian Wolff <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
To fix the build on Fedora Rawhide (gcc 6.0.0 20160311 (Red Hat 6.0.0-0.17):
CC /tmp/build/perf/arch/x86/util/dwarf-regs.o
arch/x86/util/dwarf-regs.c:66:36: error: 'x86_32_regoffset_table' defined but not used [-Werror=unused-const-variable=]
static const struct pt_regs_offset x86_32_regoffset_table[] = {
^~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
We used libaudit to map ids to syscall names and vice-versa, but that
imposes a delay in supporting new syscalls, having to wait for libaudit
to get those new syscalls on its tables.
To remove that delay, for x86_64 initially, grab a copy of
arch/x86/entry/syscalls/syscall_64.tbl and use it to generate those
tables.
Syscalls currently not available in audit-libs:
# trace -e copy_file_range,membarrier,mlock2,pread64,pwrite64,timerfd_create,userfaultfd
Error: Invalid syscall copy_file_range, membarrier, mlock2, pread64, pwrite64, timerfd_create, userfaultfd
Hint: try 'perf list syscalls:sys_enter_*'
Hint: and: 'man syscalls'
#
With this patch:
# trace -e copy_file_range,membarrier,mlock2,pread64,pwrite64,timerfd_create,userfaultfd
8505.733 ( 0.010 ms): gnome-shell/2519 timerfd_create(flags: 524288) = 36
8506.688 ( 0.005 ms): gnome-shell/2519 timerfd_create(flags: 524288) = 40
30023.097 ( 0.025 ms): qemu-system-x8/24629 pwrite64(fd: 18, buf: 0x7f63ae382000, count: 4096, pos: 529592320) = 4096
31268.712 ( 0.028 ms): qemu-system-x8/24629 pwrite64(fd: 18, buf: 0x7f63afd8b000, count: 4096, pos: 2314133504) = 4096
31268.854 ( 0.016 ms): qemu-system-x8/24629 pwrite64(fd: 18, buf: 0x7f63afda2000, count: 4096, pos: 2314137600) = 4096
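A hedged sketch of the kind of table generated from syscall_64.tbl
entries such as "0  common  read  sys_read"; the array name and the
selection of entries are illustrative:
static const char *syscalltbl_x86_64[] = {
        [0]   = "read",
        [1]   = "write",
        [2]   = "open",
        [257] = "openat",
        [326] = "copy_file_range",
};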
Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
[ Added make dep for 'prepare' in 'LIBPERF_IN', fix by Wang Nan to fix the parallel build ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
For Intel PT / BTS, define the environment variable that selects TSC
timestamps in the jitdump file.
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Intel PT uses TSC as a timestamp, so add support for using TSC instead
of the monotonic clock. Use of TSC is selected by an environment
variable "JITDUMP_USE_ARCH_TIMESTAMP" and flagged in the jitdump file
with flag JITDUMP_FLAGS_ARCH_TIMESTAMP.
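A minimal sketch of the timestamp selection (helper name assumed; the
real agent code may differ in detail):
#include <stdlib.h>
#include <time.h>
#include <x86intrin.h>

/* Use raw TSC for jitdump timestamps when JITDUMP_USE_ARCH_TIMESTAMP is
 * set (and flag the file with JITDUMP_FLAGS_ARCH_TIMESTAMP); otherwise
 * keep using the monotonic clock. */
static unsigned long long jitdump_timestamp(void)
{
        struct timespec ts;

        if (getenv("JITDUMP_USE_ARCH_TIMESTAMP"))
                return __rdtsc();

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}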
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Sukadev Bhattiprolu <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
[ Added the fixup from He Kuang to make it build on other arches, ]
[ such as aarch64, to avoid inserting this bisection breakage upstream ]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Intel PT uses the time members from the perf_event_mmap_page to convert
between TSC and perf time.
Due to a lack of foresight when Intel PT was implemented, those time
members were recorded in the (implementation dependent) AUXTRACE_INFO
event, the structure of which is generally inaccessible outside of the
Intel PT decoder. However now the conversion between TSC and perf time
is needed when processing a jitdump file when Intel PT has been used for
tracing.
So add a user event to record the time members. 'perf record' will
synthesize the event if the information is available. And session
processing will put a copy of the event on the session so that tools
like 'perf inject' can easily access it.
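A sketch of the payload such an event carries; the layout is shown for
illustration and the authoritative definition is the
PERF_RECORD_TIME_CONV handling in the perf sources:
#include <linux/perf_event.h>
#include <linux/types.h>

struct time_conv_event {
        struct perf_event_header header;        /* a perf-tools user event */
        __u64 time_shift;
        __u64 time_mult;
        __u64 time_zero;
};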
Signed-off-by: Adrian Hunter <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Build jitdump only on the architectures defined in the util/genelf.h
file, to avoid breaking the build on other arches.
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Colin Ian King <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
Commit b9511cd761fa ("perf/x86: Fix time_shift in perf_event_mmap_page")
altered the time conversion algorithms documented in the perf_event.h
header file, to use 64-bit shifts. That was done to make the code more
future-proof (i.e. some time in the future a 32-bit shift could be
allowed). Reflect those changes in perf tools.
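For reference, the documented cycles-to-time conversion with 64-bit
shift arithmetic looks roughly like this, with variable names following
the perf_event_mmap_page fields:
#include <linux/types.h>

static __u64 cyc_to_perf_time(__u64 cyc, __u32 time_mult, __u16 time_shift,
                              __u64 time_zero)
{
        __u64 quot = cyc >> time_shift;
        __u64 rem  = cyc & (((__u64)1 << time_shift) - 1);

        return time_zero + quot * time_mult + ((rem * time_mult) >> time_shift);
}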
Signed-off-by: Adrian Hunter <[email protected]>
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
The sa_flags field is not being initialized, so a garbage value is being
passed to sigaction. Initialize it to zero.
Signed-off-by: Colin Ian King <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|