author     Peter Zijlstra <[email protected]>	2017-09-06 12:51:31 +0200
committer  Ingo Molnar <[email protected]>	2017-09-07 09:29:31 +0200
commit     a731ebe6f17bff9e7ca12ef227f9da4d5bdf8425
tree       bc8e65cc2ff460199206c3c80ce9f6b0527b364a
parent     24e700e291d52bd200212487e2b654c0aa3f07a2
sched/fair: Fix wake_affine_llc() balancing rules
Chris Wilson reported that the SMT balance rules got the +1 on the
wrong side, resulting in a bias towards the current LLC, which the
load-balancer would then try to undo.
Reported-by: Chris Wilson <[email protected]>
Tested-by: Chris Wilson <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: 90001d67be2f ("sched/fair: Fix wake_affine() for !NUMA_BALANCING")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
-rw-r--r--  kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8d5868771cb3..9dd2ce1e5ca2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5435,7 +5435,7 @@ wake_affine_llc(struct sched_domain *sd, struct task_struct *p,
 		return false;
 
 	/* if this cache has capacity, come here */
-	if (this_stats.has_capacity && this_stats.nr_running < prev_stats.nr_running+1)
+	if (this_stats.has_capacity && this_stats.nr_running+1 < prev_stats.nr_running)
 		return true;
 
 	/*
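To see the effect of moving the +1, here is a standalone sketch in
plain C (not kernel code; this_nr and prev_nr are hypothetical
stand-ins for this_stats.nr_running and prev_stats.nr_running, and
has_capacity is assumed true). The old test this_nr < prev_nr+1
reduces to this_nr <= prev_nr, so the wakee is pulled even on a tie;
the fixed test presumably charges the wakee itself to this LLC before
comparing, pulling only when this LLC stays less loaded afterwards.

	#include <stdbool.h>
	#include <stdio.h>

	/* Old rule: this_nr < prev_nr + 1, i.e. this_nr <= prev_nr.
	 * Pulls the wakee even on a tie -- the bias towards the
	 * current LLC described in the changelog. */
	static bool pull_old(unsigned int this_nr, unsigned int prev_nr)
	{
		return this_nr < prev_nr + 1;
	}

	/* Fixed rule: compare the post-migration count (this_nr + 1)
	 * against prev_nr, since the wakee will run on this LLC. */
	static bool pull_new(unsigned int this_nr, unsigned int prev_nr)
	{
		return this_nr + 1 < prev_nr;
	}

	int main(void)
	{
		unsigned int n;

		/* Tie and near-tie cases: prev runs 4 tasks throughout. */
		for (n = 2; n <= 4; n++)
			printf("this_nr=%u prev_nr=4  old=%d new=%d\n",
			       n, pull_old(n, 4), pull_new(n, 4));
		/*
		 * Output:
		 *   this_nr=2 prev_nr=4  old=1 new=1
		 *   this_nr=3 prev_nr=4  old=1 new=0
		 *   this_nr=4 prev_nr=4  old=1 new=0
		 */
		return 0;
	}

In the tie case the old rule keeps pulling work to the current LLC,
which is exactly the imbalance the load-balancer would then try to
undo.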