| author | Mike Galbraith <[email protected]> | 2015-07-14 17:39:50 +0200 |
|---|---|---|
| committer | Ingo Molnar <[email protected]> | 2015-08-03 12:21:23 +0200 |
| commit | 63b0e9edceec10fa41ec33393a1515a5ff444277 (patch) | |
| tree | d895837f47d954fc91e98862d853dfd810e56feb | /include/linux |
| parent | fbd705a0c6184580d0e2fbcbd47a37b6e5822511 (diff) | |
sched/fair: Beef up wake_wide()
Josef Bacik reported that Facebook sees better performance with their
1:N load (1 dispatch/node, N workers/node) when carrying an old patch
to try very hard to wake to an idle CPU. While looking at wake_wide(),
I noticed that it doesn't pay attention to the wakeup of a many-partner
waker, returning 1 only when waking one of its many partners.
Correct that, letting explicit domain flags override the heuristic.
While at it, adjust the task_struct bits; we don't need a 64-bit counter.
Tested-by: Josef Bacik <[email protected]>
Signed-off-by: Mike Galbraith <[email protected]>
[ Tidy things up. ]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: kernel-team <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
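To make the "beefed up" decision concrete, here is a minimal, self-contained C sketch of the check described in the message above: both the waker's and the wakee's flip counts are considered, scaled by how many CPUs share a last-level cache. The stand-in struct, the fixed llc_size constant, and the main() scaffolding are illustrative assumptions for this sketch; the real implementation lives in kernel/sched/fair.c, which this include/linux-limited view does not show.

```c
#include <stdio.h>

/* Illustrative stand-ins: in the kernel these counters live in
 * struct task_struct (see the diff below) and the scale factor comes
 * from the number of CPUs sharing a last-level cache. */
struct task {
	unsigned int wakee_flips;	/* how often this task switches wakees */
};

static const unsigned int llc_size = 8;	/* assumed CPUs per LLC domain */

/*
 * Return 1 ("wake wide": skip the affine wakeup fast path) only when
 * both sides look like a many-partner relationship: the busier side
 * flips wakees at least llc_size times more often than the other, and
 * even the quieter side flips more often than llc_size.
 */
static int wake_wide(const struct task *waker, const struct task *wakee)
{
	unsigned int master = waker->wakee_flips;
	unsigned int slave  = wakee->wakee_flips;

	if (master < slave) {		/* treat the busier side as the "master" */
		unsigned int tmp = master;
		master = slave;
		slave  = tmp;
	}
	if (slave < llc_size || master < slave * llc_size)
		return 0;		/* looks like a 1:1 pairing, stay affine */
	return 1;			/* looks like a 1:N load, spread the wakeup */
}

int main(void)
{
	struct task dispatcher = { .wakee_flips = 400 };  /* wakes many workers */
	struct task worker     = { .wakee_flips = 12 };   /* mostly wakes the dispatcher back */

	printf("wake_wide(dispatcher -> worker) = %d\n",
	       wake_wide(&dispatcher, &worker));
	return 0;
}
```

With these sample counts the dispatcher side dominates by more than the llc_size factor, so the sketch returns 1 and the 1:N wakeup is spread rather than pulled onto the waker's cache domain.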
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/sched.h | 4 |
1 file changed, 2 insertions, 2 deletions
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 7412070a25cc..65a8a8651596 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1359,9 +1359,9 @@ struct task_struct {
 #ifdef CONFIG_SMP
 	struct llist_node wake_entry;
 	int on_cpu;
-	struct task_struct *last_wakee;
-	unsigned long wakee_flips;
+	unsigned int wakee_flips;
 	unsigned long wakee_flip_decay_ts;
+	struct task_struct *last_wakee;
 	int wake_cpu;
 #endif
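The hunk above is also why an unsigned int is enough for wakee_flips: the flip counter is halved roughly once a second and only bumped when the wakee actually changes, so it stays small. Below is a hedged, self-contained sketch of that bookkeeping, modeled on how the kernel decays the counter; the plain seconds timestamp (instead of jiffies/HZ) and the struct/main() scaffolding are illustrative assumptions, not the actual kernel code.

```c
#include <stdio.h>
#include <time.h>

/* The three fields touched by the hunk above, in their new order,
 * carried here by a stand-in struct for illustration. */
struct task {
	unsigned int wakee_flips;		/* 32 bits suffice: the counter decays before it can grow large */
	unsigned long wakee_flip_decay_ts;	/* last decay time (seconds in this sketch) */
	struct task *last_wakee;
};

/*
 * Called by the waker for every wakeup it performs: halve the flip
 * counter about once per second, and count a "flip" only when the
 * wakee differs from the previous one.  wake_wide() then compares
 * these counters on both ends of the wakeup.
 */
static void record_wakee(struct task *waker, struct task *wakee)
{
	unsigned long now = (unsigned long)time(NULL);

	if (now > waker->wakee_flip_decay_ts + 1) {
		waker->wakee_flips >>= 1;
		waker->wakee_flip_decay_ts = now;
	}

	if (waker->last_wakee != wakee) {
		waker->last_wakee = wakee;
		waker->wakee_flips++;
	}
}

int main(void)
{
	struct task dispatcher = { 0 }, worker_a = { 0 }, worker_b = { 0 };

	record_wakee(&dispatcher, &worker_a);	/* flip: NULL -> worker_a      */
	record_wakee(&dispatcher, &worker_b);	/* flip: worker_a -> worker_b  */
	record_wakee(&dispatcher, &worker_b);	/* same wakee, no flip         */

	printf("dispatcher.wakee_flips = %u\n", dispatcher.wakee_flips);	/* prints 2 */
	return 0;
}
```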