author    Rik van Riel <[email protected]>	2014-06-23 11:46:13 -0400
committer Ingo Molnar <[email protected]>	2014-07-05 11:17:34 +0200
commit    28a21745190a0ca613cab817bfe3dc65373158bf
tree      258f0b418b980f3cc19ec7762487bc15b35a59e1 /fs/jbd2/commit.c
parent    f0b8a4afd6a8c500161e45065a91738b490bf5ae
sched/numa: Move power adjustment into load_too_imbalanced()
Currently the NUMA code scales the load on each node with the
amount of CPU power available on that node, but it does not
apply any adjustment to the load of the task that is being
moved over.

On systems with SMT/HT, this results in a task being weighed
much more heavily than a CPU core, so a task move that would
even out the load between nodes gets disallowed.

The correct thing is to apply the power correction to the
numbers after we have first applied the move of the tasks'
loads to them.

This also allows us to do the power correction with a multiplication,
rather than a division.

Also drop two function arguments from load_too_imbalanced(), since it
already takes those factors from env.
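The ordering described above (apply the move first, then the power correction, done as a multiplication rather than a division) can be sketched roughly as follows. This is a simplified illustration, not the actual kernel code: the struct, its field names, and the threshold comparison are invented for the example.

```c
#include <assert.h>

/*
 * Hypothetical environment, loosely modeled on what the kernel's
 * task_numa_env carries; the real fields and scaling differ.
 */
struct numa_env {
	long src_load, dst_load;   /* current load on source/destination node */
	long src_power, dst_power; /* CPU power (capacity) of each node */
	long imbalance_pct;        /* allowed imbalance, e.g. 125 == 25% slack */
};

/* Would moving 'load' from src to dst leave the nodes too imbalanced? */
static int load_too_imbalanced(long load, const struct numa_env *env)
{
	/* Step 1: apply the move to the raw load figures first. */
	long src = env->src_load - load;
	long dst = env->dst_load + load;

	/* We care about the size of the imbalance, not its direction. */
	if (dst < src) {
		long tmp = src;
		src = dst;
		dst = tmp;
	}

	/*
	 * Step 2: the power correction. Comparing dst/dst_power against
	 * src/src_power (with imbalance_pct slack) is done by
	 * cross-multiplying, so no division is needed.
	 */
	return dst * env->src_power * 100 >
	       src * env->dst_power * env->imbalance_pct;
}
```

Because the task's load is folded into both nodes' totals before the capacity scaling, the task is weighed on the same scale as the nodes, which is the point of the patch.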
Signed-off-by: Rik van Riel <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Linus Torvalds <[email protected]>
Cc: [email protected]
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>