author     Vincent Guittot <[email protected]>   2022-01-11 14:46:56 +0100
committer  Peter Zijlstra <[email protected]>     2022-01-18 12:09:58 +0100
commit     98b0d890220d45418cfbc5157b3382e6da5a12ab
tree       a00783bf4513fbf4ea55500defa57a28f6ac6a16
parent     a06247c6804f1a7c86a2e5398a4c1f1db1471848
sched/pelt: Relax the sync of util_sum with util_avg
Rick reported performance regressions in bugzilla, caused by the CPU frequency
being lower than before:
https://bugzilla.kernel.org/show_bug.cgi?id=215045
He bisected the problem to:
commit 1c35b07e6d39 ("sched/fair: Ensure _sum and _avg values stay consistent")
This commit forces util_sum to be synced with the new util_avg after
removing the contribution of a task and before the next periodic sync. By
doing so, util_sum is rounded down to its lower bound and can lose up to
LOAD_AVG_MAX-1 of accumulated contribution that has not yet been
reflected in util_avg.
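To make the rounding loss concrete, here is a small stand-alone sketch
(plain user-space C, not the kernel code; the LOAD_AVG_MAX value matches
the PELT code, the example numbers are made up):

  #include <stdio.h>

  #define LOAD_AVG_MAX 47742   /* maximum PELT divider */

  int main(void)
  {
      unsigned long long divider = LOAD_AVG_MAX;
      unsigned long long util_sum = 10ULL * LOAD_AVG_MAX + 4321;
      /* util_avg is util_sum scaled down by the divider (integer division) */
      unsigned long long util_avg = util_sum / divider;
      /* re-deriving util_sum from util_avg drops the division remainder ... */
      unsigned long long synced_sum = util_avg * divider;

      /* ... here 4321, and in general anything up to divider - 1 */
      printf("lost contribution: %llu\n", util_sum - synced_sum);
      return 0;
  }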
Instead of always setting util_sum to the lower bound of util_avg, which can
significantly lower the utilization of the root cfs_rq after propagating the
change down into the hierarchy, we revert the change of util_sum and
propagate the difference.
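A hedged sketch of that idea (simplified user-space C; sub_positive() mimics
the kernel helper of the same name, while the divider and the example values
are made up):

  #include <stdio.h>

  #define LOAD_AVG_MAX     47742
  #define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)

  /* Subtract without going below zero, like the kernel's sub_positive(). */
  static void sub_positive(unsigned long long *val, unsigned long long sub)
  {
      *val = (*val > sub) ? *val - sub : 0;
  }

  int main(void)
  {
      unsigned long long divider = PELT_MIN_DIVIDER + 512; /* example */
      unsigned long long util_avg = 100;
      unsigned long long util_sum = util_avg * divider + 3000;
      unsigned long long removed = 40;  /* detached task's util_avg */

      /* remove the task's contribution from util_avg ... */
      sub_positive(&util_avg, removed);
      /* ... and propagate the same difference into util_sum instead of
       * re-deriving util_sum from util_avg; the accumulated remainder
       * (here 3000) is preserved. */
      sub_positive(&util_sum, removed * divider);

      printf("util_avg=%llu util_sum=%llu\n", util_avg, util_sum);
      return 0;
  }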
In addition, we also check that the cfs_rq's util_sum always stays above the
lower bound for a given util_avg, as it has been observed that a
sched_entity's util_sum is sometimes above the cfs_rq's one.
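A hedged sketch of that lower-bound check (the clamp_util_sum() helper is
purely illustrative; PELT_MIN_DIVIDER is assumed to be the minimum possible
PELT divider, i.e. LOAD_AVG_MAX - 1024):

  #define LOAD_AVG_MAX     47742
  #define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)

  /* util_sum must never drop below the smallest value consistent with
   * util_avg, i.e. util_avg scaled by the minimum possible divider. */
  static unsigned long long clamp_util_sum(unsigned long long util_sum,
                                           unsigned long long util_avg)
  {
      unsigned long long lo = util_avg * PELT_MIN_DIVIDER;

      return util_sum > lo ? util_sum : lo;
  }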
Fixes: 1c35b07e6d39 ("sched/fair: Ensure _sum and _avg values stay consistent")
Reported-by: Rick Yiu <[email protected]>
Signed-off-by: Vincent Guittot <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Dietmar Eggemann <[email protected]>
Tested-by: Sachin Sant <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]