author		Chengming Zhou <[email protected]>	2022-08-18 20:47:59 +0800
committer	Peter Zijlstra <[email protected]>	2022-08-23 11:01:18 +0200
commit		5d6da83c44af70ede7bfd0fd6d1ef8a3b3e0402c (patch)
tree		e6924adb72d878b590c9464b4338411688aa8252
parent		39c4261191bf05e7eb310f852980a6d0afe5582a (diff)
sched/fair: Reset sched_avg last_update_time before set_task_rq()
set_task_rq() -> set_task_rq_fair() will try to synchronize the blocked
task's sched_avg when migrating, which is not needed for an already
detached task.

task_change_group_fair() detaches the task's sched_avg from the previous
cfs_rq first, so reset sched_avg last_update_time before set_task_rq()
to avoid that.
Signed-off-by: Chengming Zhou <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Dietmar Eggemann <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
-rw-r--r--	kernel/sched/fair.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2c0eb2a4e341..e4c0929a6e71 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11660,12 +11660,12 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 static void task_change_group_fair(struct task_struct *p)
 {
 	detach_task_cfs_rq(p);
-	set_task_rq(p, task_cpu(p));

 #ifdef CONFIG_SMP
 	/* Tell se's cfs_rq has been changed -- migrated */
 	p->se.avg.last_update_time = 0;
 #endif
+	set_task_rq(p, task_cpu(p));
 	attach_task_cfs_rq(p);
 }