author    Tejun Heo <[email protected]>  2024-08-05 12:39:10 -1000
committer Peter Zijlstra <[email protected]>  2024-08-07 12:44:16 +0200
commit    924e2904da9b5edec61611918b98ab1f7fccc461
tree      dd3efc7fa0f9ee8f44b1289cad1988f3f1f599d2
parent    cea5a3472ac43f18590e1bd6b842f808347a810c
sched/fair: Make balance_fair() test sched_fair_runnable() instead of rq->nr_running
balance_fair() skips newidle balancing if rq->nr_running - there are
already tasks on the rq, so no need to try to pull tasks. This tests the
total number of queued tasks on the CPU instead of only the fair class,
but is still correct as the rq can currently only have fair class tasks
while balance_fair() is running.

However, with the addition of sched_ext below the fair class, this will
not hold anymore and will make put_prev_task_balance() skip sched_ext's
balance() incorrectly: when a CPU has only lower priority class tasks,
rq->nr_running would still be positive and balance_fair() would return 1
even when fair doesn't have any tasks to run.

Update balance_fair() to use sched_fair_runnable(), which tests
rq->cfs.nr_running, a count that is kept up to date by bandwidth
throttling. Note that pick_next_task_fair() already uses
sched_fair_runnable() in its optimized path for the same purpose.

Reported-by: Peter Zijlstra <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Chengming Zhou <[email protected]>
Reviewed-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
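For reference, sched_fair_runnable() is a small helper in
kernel/sched/sched.h; at the time of this commit it amounted to the
following (a sketch for context, not part of this patch):

	/* kernel/sched/sched.h (sketch): runnable check scoped to the fair class */
	static inline bool sched_fair_runnable(struct rq *rq)
	{
		/*
		 * cfs.nr_running counts only fair-class tasks and is
		 * decremented by CFS bandwidth throttling, unlike the
		 * class-agnostic rq->nr_running used before this patch.
		 */
		return rq->cfs.nr_running > 0;
	}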
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 795ceef5e7e1..6d39a824bbe1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8355,7 +8355,7 @@ static void set_cpus_allowed_fair(struct task_struct *p, struct affinity_context
 static int
 balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 {
-	if (rq->nr_running)
+	if (sched_fair_runnable(rq))
 		return 1;
 
 	return sched_balance_newidle(rq, rf) != 0;
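To see why a spurious 1 from balance_fair() is harmful, consider the
balance pass in put_prev_task_balance() (kernel/sched/core.c). The loop
below is a simplified sketch of its structure around the time of this
commit; exact details vary by kernel version:

	/* kernel/sched/core.c (simplified sketch): the pre-pick balance pass */
	static void put_prev_task_balance(struct rq *rq, struct task_struct *prev,
					  struct rq_flags *rf)
	{
		const struct sched_class *class;

		/*
		 * Walk the classes from prev's class downward. The pass stops
		 * as soon as some class reports a runnable task at its
		 * priority or higher, so a balance() callback that wrongly
		 * returns 1 (as balance_fair() did when only lower-class
		 * tasks were queued) keeps the balance() of classes below it,
		 * such as sched_ext's, from ever being called.
		 */
		for_class_range(class, prev->sched_class, &idle_sched_class) {
			if (class->balance(rq, prev, rf))
				break;
		}

		put_prev_task(rq, prev);
	}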