author:    Vincent Guittot <[email protected]>  2016-11-08 10:53:47 +0100
committer: Ingo Molnar <[email protected]>        2016-11-16 10:29:11 +0100
commit:    d03266910a533d874c01ef2ca8dc73009f2925fa (patch)
tree:      71022b0227d650600512374141ec706fa16355b4
parent:    4e5160766fccfa41bbd38bac11f92dce993644aa (diff)
sched/fair: Fix task group initialization
Task moves are now propagated down to the root, so the utilization of a
cfs_rq reflects reality and no longer needs to be estimated at init.
Signed-off-by: Vincent Guittot <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Dietmar Eggemann <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: [email protected]
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 090a9bb51ab2..02605f2826a2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9198,7 +9198,7 @@ void online_fair_sched_group(struct task_group *tg)
 		se = tg->se[i];

 		raw_spin_lock_irq(&rq->lock);
-		post_init_entity_util_avg(se);
+		attach_entity_cfs_rq(se);
 		sync_throttle(tg, i);
 		raw_spin_unlock_irq(&rq->lock);
 	}