author    | Nadav Amit <[email protected]>  | 2018-10-05 13:27:17 -0700
committer | Ingo Molnar <[email protected]> | 2018-10-06 15:52:16 +0200
commit    | d5a581d84ae6b8a4a740464b80d8d9cf1e7947b2 (patch)
tree      | 754b18d150fb4bff9643c97f8124e329dc241c1b /tools/perf/scripts/python/exported-sql-viewer.py
parent    | 0474d5d9d2f7f3b11262f7bf87d0e7314ead9200 (diff)
x86/cpufeature: Macrofy inline assembly code to work around GCC inlining bugs
As described in:
77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")
GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.
The workaround is to define an assembly macro and call it from the inline
assembly block - which is pretty pointless indirection in the static_cpu_has()
case, but is worth it to improve overall inlining quality.
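To illustrate the pattern, here is a minimal sketch (the macro and function
names are made up for this example; this is not the kernel's actual
static_cpu_has() implementation):

    /*
     * GCC estimates the cost of an asm() statement from the number of lines
     * in its template, so a long inline-asm sequence makes the containing
     * function look big and defeats inlining.  Defining the sequence as an
     * assembler macro and invoking that macro from the asm() keeps the
     * template down to a single line.
     */

    /*
     * Define the assembler macro once.  The kernel emits such macros from a
     * dedicated assembly file so every translation unit can use them; a
     * file-scope asm() is enough for a single file.
     */
    asm(
    ".macro MY_TWO_INSN_SEQ reg:req\n"
    "\txorq \\reg, \\reg\n"
    "\tnotq \\reg\n"
    ".endm\n"
    );

    static inline unsigned long my_all_ones(void)
    {
    	unsigned long v;

    	/* A one-line asm() template instead of the full sequence. */
    	asm("MY_TWO_INSN_SEQ %0" : "=r" (v) : : "cc");
    	return v;
    }

The assembler expands MY_TWO_INSN_SEQ only after GCC has already made its
inlining decisions, so the caller's estimated size no longer grows with the
length of the assembly sequence.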
The patch slightly increases the kernel size:
text data bss dec hex filename
18162879 10226256 2957312 31346447 1de4f0f ./vmlinux before
18163528 10226300 2957312 31347140 1de51c4 ./vmlinux after (+693)
And enables the inlining of functions such as free_ldt_pgtables().
Tested-by: Kees Cook <[email protected]>
Signed-off-by: Nadav Amit <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Link: https://lore.kernel.org/lkml/[email protected]/T/#u
Signed-off-by: Ingo Molnar <[email protected]>
Diffstat (limited to 'tools/perf/scripts/python/exported-sql-viewer.py')
0 files changed, 0 insertions, 0 deletions