author		Frederic Weisbecker <[email protected]>	2021-10-11 16:51:30 +0200
committer	Paul E. McKenney <[email protected]>	2021-12-07 16:24:44 -0800
commit		118e0d4a1bc85d4ecea0427e440a72d21ffbfa6a
tree		201c06531e250335ce5120a0331ed0c301f13a05
parent		614ddad17f22a22e035e2ea37a04815f50362017
rcu/nocb: Make local rcu_nocb_lock_irqsave() safe against concurrent deoffloading
rcu_nocb_lock_irqsave() can be preempted between the call to
rcu_segcblist_is_offloaded() and the actual locking. This matters now
that rcu_core() is preemptible on PREEMPT_RT and the (de-)offloading
process can interrupt the softirq or the rcuc kthread.
As a result, we may locklessly call into code that requires the nocb lock.
In practice, this is a problem when callbacks are accelerated from rcu_core().
Simply disabling interrupts before (instead of after) checking the NOCB
offload state fixes the issue.
Reported-and-tested-by: Valentin Schneider <[email protected]>
Tested-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Josh Triplett <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Neeraj Upadhyay <[email protected]>
Cc: Uladzislau Rezki <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
-rw-r--r--	kernel/rcu/tree.h	16
1 file changed, 10 insertions, 6 deletions
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 305cf6aeb408..4f6c67b3ccd5 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -447,12 +447,16 @@ static void rcu_nocb_unlock_irqrestore(struct rcu_data *rdp,
 static void rcu_lockdep_assert_cblist_protected(struct rcu_data *rdp);
 #ifdef CONFIG_RCU_NOCB_CPU
 static void __init rcu_organize_nocb_kthreads(void);
-#define rcu_nocb_lock_irqsave(rdp, flags)				\
-do {									\
-	if (!rcu_segcblist_is_offloaded(&(rdp)->cblist))		\
-		local_irq_save(flags);					\
-	else								\
-		raw_spin_lock_irqsave(&(rdp)->nocb_lock, (flags));	\
+
+/*
+ * Disable IRQs before checking offloaded state so that local
+ * locking is safe against concurrent de-offloading.
+ */
+#define rcu_nocb_lock_irqsave(rdp, flags)				\
+do {									\
+	local_irq_save(flags);						\
+	if (rcu_segcblist_is_offloaded(&(rdp)->cblist))			\
+		raw_spin_lock(&(rdp)->nocb_lock);			\
 } while (0)
 #else /* #ifdef CONFIG_RCU_NOCB_CPU */
 #define rcu_nocb_lock_irqsave(rdp, flags)	local_irq_save(flags)
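
Note the asymmetry in the new expansion: because local_irq_save() now runs
first, the offloaded branch can take the plain raw_spin_lock() rather than
raw_spin_lock_irqsave(). A sketch of a typical caller pairing (the helper
names appear in the kernel; the body is illustrative):

	unsigned long flags;

	rcu_nocb_lock_irqsave(rdp, flags);	/* IRQs off, then lock if offloaded */
	/* ... manipulate rdp->cblist under full protection ... */
	rcu_nocb_unlock_irqrestore(rdp, flags);	/* unlock if held, then restore IRQs */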