author	Peter Zijlstra <[email protected]>	2014-11-10 10:54:35 +0100
committer	Ingo Molnar <[email protected]>	2014-11-16 10:04:17 +0100
commit	7af683350cb0ddd0e9d3819b4eb7abe9e2d3e709 (patch)
tree	1e05f9a67523237c633bf1ec18afd1fd6327e2d7
parent	c123588b3b193d06588dfb51f475407f835ebfb2 (diff)
sched/numa: Avoid selecting oneself as swap target
Because the whole numa task selection stuff runs with preemption enabled
(it's long and expensive) we can end up migrating and selecting oneself
as a swap target. This doesn't really work out well -- we end up trying
to acquire the same lock twice for the swap migrate -- so avoid this.

Reported-and-Tested-by: Sasha Levin <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
-rw-r--r--kernel/sched/fair.c7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 34baa60f8a7b..3af3d1e7df9b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1180,6 +1180,13 @@ static void task_numa_compare(struct task_numa_env *env,
raw_spin_unlock_irq(&dst_rq->lock);
/*
+ * Because we have preemption enabled we can get migrated around and
+ * end up trying to select ourselves (current == env->p) as a swap candidate.
+ */
+ if (cur == env->p)
+ goto unlock;
+
+ /*
* "imp" is the fault differential for the source task between the
* source and destination node. Calculate the total differential for
* the source task and potential destination task. The more negative