author      Pan Xinhui <[email protected]>          2016-11-02 05:08:28 -0400
committer   Ingo Molnar <[email protected]>         2016-11-22 12:48:05 +0100
commit      d9345c65eb7930ac6755cf593ee7686f4029ccf4 (patch)
tree        997e488123a15a9725bd6c6ee454500f4b63a66d
parent      02cb689b2c102178c83e763e4f34b3efe7f969e2 (diff)
sched/core: Introduce the vcpu_is_preempted(cpu) interface
This patch is the first step in adding support for improved lock
holder preemption behaviour.
vcpu_is_preempted(cpu) does the obvious thing: it tells us whether a
vCPU is preempted or not.
Defaults to false on architectures that don't support it.
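
As a minimal sketch (not part of this patch) of the intended use, an
optimistic spin loop could consult the new interface like this;
owner_worth_spinning_on() is a hypothetical helper invented for
illustration, while task_cpu() and task_struct::on_cpu are existing
kernel facilities:

/*
 * Hypothetical helper, assuming only this patch: stop an optimistic
 * spin when the lock owner is not running, or when its vCPU has been
 * preempted by the hypervisor.  On architectures without support,
 * vcpu_is_preempted() is compile-time false and the second check
 * vanishes.
 */
static inline bool owner_worth_spinning_on(struct task_struct *owner)
{
	/* Owner task no longer running on any CPU: give up the spin. */
	if (!READ_ONCE(owner->on_cpu))
		return false;

	/* Owner's vCPU preempted: spinning only burns cycles. */
	if (vcpu_is_preempted(task_cpu(owner)))
		return false;

	return true;
}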
Suggested-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Signed-off-by: Pan Xinhui <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[ Translated the changelog to English. ]
Acked-by: Christian Borntraeger <[email protected]>
Acked-by: Paolo Bonzini <[email protected]>
Cc: [email protected]
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
-rw-r--r--  include/linux/sched.h  12
1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index dc37cbe2b13c..37261afbf16a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -3510,6 +3510,18 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 
 #endif /* CONFIG_SMP */
 
+/*
+ * In order to reduce various lock holder preemption latencies provide an
+ * interface to see if a vCPU is currently running or not.
+ *
+ * This allows us to terminate optimistic spin loops and block, analogous to
+ * the native optimistic spin heuristic of testing if the lock owner task is
+ * running or not.
+ */
+#ifndef vcpu_is_preempted
+# define vcpu_is_preempted(cpu)	false
+#endif
+
 extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
 extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
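
For context, a hedged sketch of how an architecture could override the
default above: it defines vcpu_is_preempted in an arch header before
<linux/sched.h> installs the fallback.  The hv_vcpu_state structure and
hv_state[] array standing in for hypervisor-shared memory are invented
for this sketch and do not appear in this patch:

/*
 * Illustrative arch-side override (not from this patch).  A real
 * implementation would read a preemption flag the hypervisor
 * maintains in memory shared with the guest.
 */
struct hv_vcpu_state {
	u8 preempted;		/* set by the hypervisor while the vCPU is preempted */
};
extern struct hv_vcpu_state hv_state[];

static inline bool arch_vcpu_is_preempted(int cpu)
{
	return READ_ONCE(hv_state[cpu].preempted);
}
#define vcpu_is_preempted(cpu)	arch_vcpu_is_preempted(cpu)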