author:    Peter Zijlstra <[email protected]>  2023-10-31 09:53:08 +0100
committer: Frederic Weisbecker <[email protected]>  2023-11-01 21:39:58 +0100
commit:    85d68222ddc5f4522e456d97d201166acb50f716
tree:      a4104a2c1aa357fc29666bd774daebade3aaffd4
parent:    2656821f1f202d58224551b71eff41aafd1edf8b
rcu: Break rcu_node_0 --> &rq->__lock order
Commit 851a723e45d1 ("sched: Always clear user_cpus_ptr in
do_set_cpus_allowed()") added a kfree() call to free any user-provided
affinity mask, if present. This was later changed to kfree_rcu() in
commit 9a5418bc48ba ("sched/core: Use kfree_rcu() in
do_set_cpus_allowed()") to avoid a circular locking dependency, as
sketched below.
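As a rough illustration of that kfree_rcu() change (not the actual
sched/core.c hunk; the struct and function names here are hypothetical),
the idea is to defer the free to an RCU callback instead of freeing
inline in the locked context:

  /*
   * Illustrative sketch only: the type and helper names are made up,
   * not the real do_set_cpus_allowed() code.
   */
  struct user_mask_box {
          struct rcu_head rcu;            /* required by kfree_rcu() */
          cpumask_var_t mask;
  };

  static void drop_user_mask(struct user_mask_box *old)
  {
          /* Before 9a5418bc48ba: an immediate kfree() from a context
           * that may hold scheduler locks. */
          /* kfree(old); */

          /* After: defer the free until a grace period has elapsed,
           * so nothing is freed inline under those locks. */
          kfree_rcu(old, rcu);
  }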
It turns out that even kfree_rcu() is not enough to avoid the
circular locking problem. As reported by the kernel test robot,
the following circular locking dependency now exists:
&rdp->nocb_lock --> rcu_node_0 --> &rq->__lock
Solve this by breaking the rcu_node_0 --> &rq->__lock chain: move
the resched_cpu() call out from under the rcu_node lock.
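A minimal sketch of that pattern (hypothetical function and predicate
names; this is not the actual kernel/rcu/tree.c hunk): decide under the
rcu_node lock, but call resched_cpu() only after the lock has been
dropped, since resched_cpu() acquires rq->__lock:

  static void example_check_cpu(struct rcu_node *rnp, int cpu)
  {
          bool needs_resched = false;

          raw_spin_lock_irq_rcu_node(rnp);
          if (cpu_is_stalling_gp(cpu))    /* hypothetical predicate */
                  needs_resched = true;   /* record only, do not call yet */
          raw_spin_unlock_irq_rcu_node(rnp);

          /* resched_cpu() takes rq->__lock; calling it here, outside
           * the rcu_node lock, avoids rcu_node_0 --> &rq->__lock. */
          if (needs_resched)
                  resched_cpu(cpu);
  }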
[peterz: heavily borrowed from Waiman's Changelog]
[paulmck: applied Zqiang feedback]
Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
Reported-by: kernel test robot <[email protected]>
Acked-by: Waiman Long <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/oe-lkp/[email protected]
Signed-off-by: Paul E. McKenney <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>