| author | Linus Torvalds <[email protected]> | 2022-12-12 07:47:15 -0800 | 
|---|---|---|
| committer | Linus Torvalds <[email protected]> | 2022-12-12 07:47:15 -0800 | 
| commit | 1fab45ab6e823f9d7e5bc9520b2aa6564d6d58a7 (patch) | |
| tree | 0fed32c7ec3b36f8050c49281c3161ec3834df9a /kernel/rcu/tree_plugin.h | |
| parent | 830b3c68c1fb1e9176028d02ef86f3cf76aa2476 (diff) | |
| parent | 87492c06e68d802852c7ba76b4d3fde50807d72a (diff) | |
Merge tag 'rcu.2022.12.02a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu
Pull RCU updates from Paul McKenney:
 - Documentation updates. This is the second in a series from an ongoing
   review of the RCU documentation.
 - Miscellaneous fixes.
 - Introduce a default-off Kconfig option, depending on RCU_NOCB_CPU,
   that causes call_rcu() to introduce delays on CPUs listed in the
   nohz_full or rcu_nocbs boot-argument CPU lists.
   These delays result in significant power savings on nearly idle
   Android and ChromeOS systems. These savings range from a few percent
   to more than ten percent.
   This series also includes several commits that change call_rcu() to a
   new call_rcu_hurry() function that avoids these delays in a few
   cases, for example, where timely wakeups are required. Several of
   these are outside of RCU and thus have acks and reviews from the
   relevant maintainers.
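   The split described above can be sketched as follows. This is a
   hedged illustration, not code from this series: the struct, callback,
   and function names ("my_obj" and friends) are hypothetical, while
   call_rcu(), call_rcu_hurry(), kfree(), and container_of() are the
   real kernel APIs.

   ```c
   #include <linux/rcupdate.h>
   #include <linux/slab.h>

   struct my_obj {
   	struct rcu_head rcu;
   	int data;
   };

   static void my_obj_free_cb(struct rcu_head *head)
   {
   	kfree(container_of(head, struct my_obj, rcu));
   }

   /* Ordinary teardown: freeing memory is not latency-sensitive, so on
    * nocb CPUs with the new Kconfig option enabled, this callback may be
    * batched and delayed to save power. */
   static void my_obj_release(struct my_obj *obj)
   {
   	call_rcu(&obj->rcu, my_obj_free_cb);
   }

   /* A path where something (e.g. a timely wakeup) depends on the
    * callback running soon: opt out of the laziness. */
   static void my_obj_release_now(struct my_obj *obj)
   {
   	call_rcu_hurry(&obj->rcu, my_obj_free_cb);
   }
   ```

   The commits below (dst_release(), queue_rcu_work(), percpu-refcount,
   scsi_error) are exactly this kind of opt-out conversion.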
 - Create an srcu_read_lock_nmisafe() and an srcu_read_unlock_nmisafe()
   for architectures that support NMIs, but which do not provide
   NMI-safe this_cpu_inc(). These NMI-safe SRCU functions are required
   by the upcoming lockless printk() work by John Ogness et al.
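   A minimal sketch of the new reader API, again hedged: the
   srcu_struct name "my_srcu" and the handler are hypothetical, while
   DEFINE_STATIC_SRCU() and the _nmisafe read-side functions are the
   APIs this series adds.

   ```c
   #include <linux/srcu.h>

   DEFINE_STATIC_SRCU(my_srcu);

   /* From NMI context, the _nmisafe variants must be used; per the
    * srcu commits below, lockdep now warns when the NMI-safe and
    * NMI-unsafe flavors are mixed on one srcu_struct. */
   static void my_nmi_handler(void)
   {
   	int idx;

   	idx = srcu_read_lock_nmisafe(&my_srcu);
   	/* ... NMI-safe read-side critical section ... */
   	srcu_read_unlock_nmisafe(&my_srcu, idx);
   }
   ```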
 - Changes providing minor but important increases in torture test
   coverage for the new RCU polled-grace-period APIs.
 - Changes to torturescript that avoid redundant kernel builds, thus
   providing about a 30% speedup for the torture.sh acceptance test.
* tag 'rcu.2022.12.02a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (49 commits)
  net: devinet: Reduce refcount before grace period
  net: Use call_rcu_hurry() for dst_release()
  workqueue: Make queue_rcu_work() use call_rcu_hurry()
  percpu-refcount: Use call_rcu_hurry() for atomic switch
  scsi/scsi_error: Use call_rcu_hurry() instead of call_rcu()
  rcu/rcutorture: Use call_rcu_hurry() where needed
  rcu/rcuscale: Use call_rcu_hurry() for async reader test
  rcu/sync: Use call_rcu_hurry() instead of call_rcu
  rcuscale: Add laziness and kfree tests
  rcu: Shrinker for lazy rcu
  rcu: Refactor code a bit in rcu_nocb_do_flush_bypass()
  rcu: Make call_rcu() lazy to save power
  rcu: Implement lockdep_rcu_enabled for !CONFIG_DEBUG_LOCK_ALLOC
  srcu: Debug NMI safety even on archs that don't require it
  srcu: Explain the reason behind the read side critical section on GP start
  srcu: Warn when NMI-unsafe API is used in NMI
  arch/s390: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  arch/loongarch: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  rcu: Fix __this_cpu_read() lockdep warning in rcu_force_quiescent_state()
  rcu-tasks: Make grace-period-age message human-readable
  ...
Diffstat (limited to 'kernel/rcu/tree_plugin.h')
| -rw-r--r-- | kernel/rcu/tree_plugin.h | 5 | 
1 files changed, 4 insertions, 1 deletions
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index e3142ee35fc6..7b0fe741a088 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1221,11 +1221,13 @@ static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp)
  * We don't include outgoingcpu in the affinity set, use -1 if there is
  * no outgoing CPU.  If there are no CPUs left in the affinity set,
  * this function allows the kthread to execute on any CPU.
+ *
+ * Any future concurrent calls are serialized via ->boost_kthread_mutex.
  */
 static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
 {
 	struct task_struct *t = rnp->boost_kthread_task;
-	unsigned long mask = rcu_rnp_online_cpus(rnp);
+	unsigned long mask;
 	cpumask_var_t cm;
 	int cpu;

@@ -1234,6 +1236,7 @@ static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
 	if (!zalloc_cpumask_var(&cm, GFP_KERNEL))
 		return;
 	mutex_lock(&rnp->boost_kthread_mutex);
+	mask = rcu_rnp_online_cpus(rnp);
 	for_each_leaf_node_possible_cpu(rnp, cpu)
 		if ((mask & leaf_node_cpu_bit(rnp, cpu)) &&
 		    cpu != outgoingcpu)