author     Andy Lutomirski <[email protected]>   2016-01-28 15:11:25 -0800
committer  Ingo Molnar <[email protected]>       2016-01-29 09:46:38 +0100
commit     302f5b260c322696cbeb962a263a4d2d99864aed (patch)
tree       5220f43391b35c862d52f47c12f1cb5877ac838a /arch/x86/entry/syscall_64.c
parent     cfcbadb49dabb05efa23e1a0f95f3391c0a815bc (diff)
x86/entry/64: Always run ptregs-using syscalls on the slow path
64-bit syscalls currently have an optimization in which they are
called with partial pt_regs. A small handful require full
pt_regs.
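To make the distinction concrete, here is a minimal userspace sketch (invented for illustration, not kernel source) of why a clone()-style syscall cannot live with a partial frame: the fast path fills only the argument and clobbered registers, so the callee-saved slots that a full-pt_regs handler would read are never written.

#include <stdio.h>

struct fake_pt_regs {
	/* filled on every syscall entry (arguments, return value) */
	unsigned long di, si, dx, r10, r8, r9, ax;
	/* callee-saved: only meaningful when the full frame is built */
	unsigned long bx, bp, r12, r13, r14, r15;
};

/* A clone()-like handler copies the caller's complete register
 * state into the child, so it needs every field to be valid. */
static void spawn_child(const struct fake_pt_regs *regs)
{
	printf("child would start with bx=%lx r12=%lx\n",
	       regs->bx, regs->r12);
}

int main(void)
{
	struct fake_pt_regs partial = { .di = 42 };	/* fast-path frame */
	/* On a real partial frame bx/r12 hold junk; they are zero here
	 * only because C zero-initializes the remaining members. */
	spawn_child(&partial);
	return 0;
}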
In the 32-bit and compat cases, I cleaned this up by forcing
full pt_regs for all syscalls. The performance hit doesn't
really matter as the affected system calls are fundamentally
heavy and this is the 32-bit compat case.
I want to clean up the 64-bit case as well, but I don't want to
hurt fast path performance. To do that, I want to force the
syscalls that use pt_regs onto the slow path. This will enable
us to make slow path syscalls be real ABI-compliant C functions.
Use the new syscall entry qualification machinery for this.
'stub_clone' is now 'stub_clone/ptregs'.
The next patch will eliminate the stubs, and we'll just have
'sys_clone/ptregs'.
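The qualifier is spelled in the entry-point column of arch/x86/entry/syscalls/syscall_64.tbl; after this patch the clone line looks roughly like the following (illustrative; see the full patch for the exact table changes):

56	common	clone			stub_clone/ptregs

The table-generation script splits that column on '/', handing the symbol and the qualifier to __SYSCALL_64() as separate arguments.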
As of this patch, two-phase entry tracing is no longer used. It
has served its purpose (namely a huge speedup on some workloads
prior to more general opportunistic SYSRET support), and once
the dust settles I'll send patches to back it out.
The implementation is heavily based on a patch from Brian Gerst:
http://lkml.kernel.org/g/[email protected]
Originally-From: Brian Gerst <[email protected]>
Signed-off-by: Andy Lutomirski <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Frédéric Weisbecker <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Linux Kernel Mailing List <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/b9beda88460bcefec6e7d792bd44eca9b760b0c4.1454022279.git.luto@kernel.org
Signed-off-by: Ingo Molnar <[email protected]>
Diffstat (limited to 'arch/x86/entry/syscall_64.c')
 arch/x86/entry/syscall_64.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/entry/syscall_64.c b/arch/x86/entry/syscall_64.c
index a1d408772ae6..9dbc5abb6162 100644
--- a/arch/x86/entry/syscall_64.c
+++ b/arch/x86/entry/syscall_64.c
@@ -6,11 +6,14 @@
 #include <asm/asm-offsets.h>
 #include <asm/syscall.h>
 
-#define __SYSCALL_64(nr, sym, qual) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long) ;
+#define __SYSCALL_64_QUAL_(sym) sym
+#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_##sym
+
+#define __SYSCALL_64(nr, sym, qual) extern asmlinkage long __SYSCALL_64_QUAL_##qual(sym)(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
 #include <asm/syscalls_64.h>
 #undef __SYSCALL_64
 
-#define __SYSCALL_64(nr, sym, qual) [nr] = sym,
+#define __SYSCALL_64(nr, sym, qual) [nr] = __SYSCALL_64_QUAL_##qual(sym),
 
 extern long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
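The trick above is token pasting: __SYSCALL_64_QUAL_##qual appends the qualifier to a helper-macro name, so an empty qualifier selects the symbol unchanged while a ptregs qualifier prefixes it with ptregs_. A minimal standalone C demo of the same mechanism (symbols and return values invented for illustration; the real ptregs_* symbols are assembly stubs that redirect to the slow path):

#include <stdio.h>

/* The two helper macros, as in the patch: */
#define __SYSCALL_64_QUAL_(sym)       sym
#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_##sym

/* Table-entry macro: paste the qualifier, then expand the helper. */
#define __SYSCALL_64(nr, sym, qual) [nr] = __SYSCALL_64_QUAL_##qual(sym),

/* Stand-ins for real entry points (illustration only): */
static long sys_read(void)          { return 0; } /* fast path */
static long ptregs_stub_clone(void) { return 1; } /* slow-path stub */

typedef long (*sys_call_ptr_t)(void);

static const sys_call_ptr_t table[64] = {
	__SYSCALL_64(0,  sys_read,   )        /* -> [0]  = sys_read, */
	__SYSCALL_64(56, stub_clone, ptregs)  /* -> [56] = ptregs_stub_clone, */
};

int main(void)
{
	printf("%ld %ld\n", table[0](), table[56]());
	return 0;
}

Note how this keeps the fast path untouched: unqualified entries still resolve to the bare symbol, so only the /ptregs entries pay for the redirect.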