path: root/kernel/rcupdate.c
Age | Commit message | Author | Files, lines
2008-02-13 | rcupdate: fix comment | Paul E. McKenney | 1 file, -1/+4
This comment caused some consternation during fastcall removal. Make it truthful. Signed-off-by: Paul E. McKenney <[email protected]> Cc: Harvey Harrison <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-01-25 | Preempt-RCU: fix rcu_barrier for preemptive environment. | Paul E. McKenney | 1 file, -0/+10
Fix rcu_barrier() to work properly in a preemptive kernel environment. Also, the ordering of callbacks must be preserved while moving callbacks to another CPU during CPU hotplug. Signed-off-by: Gautham R Shenoy <[email protected]> Signed-off-by: Dipankar Sarma <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Steven Rostedt <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-01-25 | Preempt-RCU: reorganize RCU code into rcuclassic.c and rcupdate.c | Paul E. McKenney | 1 file, -550/+25
This patch re-organizes the RCU code to enable multiple implementations of RCU. Users of RCU continue to include rcupdate.h and the RCU interfaces remain the same. This is in preparation for subsequently merging the preemptible RCU implementation. Signed-off-by: Gautham R Shenoy <[email protected]> Signed-off-by: Dipankar Sarma <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Steven Rostedt <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-01-25 | Preempt-RCU: Use softirq instead of tasklets for | Dipankar Sarma | 1 file, -8/+17
This patch makes RCU use softirq instead of tasklets. It also adds a memory barrier after raising the softirq in order to ensure that the CPU sees the most recently updated value of rcu->cur while processing callbacks. The discussion of the related theoretical race pointed out by James Huang can be found here: http://lkml.org/lkml/2007/11/20/603 Signed-off-by: Gautham R Shenoy <[email protected]> Signed-off-by: Steven Rostedt <[email protected]> Signed-off-by: Dipankar Sarma <[email protected]> Reviewed-by: Steven Rostedt <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
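A minimal sketch of the mechanism described above, assuming the RCU_SOFTIRQ softirq this patch introduces; the helper name is illustrative, not necessarily the one used in the patch:

	/* Kick RCU callback processing via softirq rather than a tasklet. */
	static inline void raise_rcu_softirq_sketch(void)
	{
		raise_softirq(RCU_SOFTIRQ);
		/*
		 * Memory barrier after raising the softirq so that the CPU
		 * processing callbacks sees the latest value of rcu->cur.
		 */
		smp_mb();
	}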
2008-01-22 | rcu: fix section mismatch | Randy Dunlap | 1 file, -1/+1
rcu_online_cpu() should be __cpuinit instead of __devinit. WARNING: vmlinux.o(.text+0x4b6d5): Section mismatch: reference to .init.text: (between 'rcu_cpu_notify' and 'wakeme_after_rcu') Signed-off-by: Randy Dunlap <[email protected]> Cc: Sam Ravnborg <[email protected]> Acked-by: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-10-17 | Clean up duplicate includes in kernel/ | Jesper Juhl | 1 file, -1/+0
This patch cleans up duplicate includes in kernel/ Signed-off-by: Jesper Juhl <[email protected]> Acked-by: Paul E. McKenney <[email protected]> Reviewed-by: Satyam Sharma <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-10-11 | lockdep: annotate rcu_read_{,un}lock{,_bh} | Peter Zijlstra | 1 file, -0/+8
lockdep annotate rcu_read_{,un}lock{,_bh} in order to catch imbalanced usage. Signed-off-by: Peter Zijlstra <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Cc: Paul E. McKenney <[email protected]>
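A rough illustration of the annotation idea (not necessarily the exact code added): treat the RCU read-side critical section as a lockdep "lock", so an unlock without a matching lock gets reported:

	#include <linux/lockdep.h>

	/* One global lockdep map representing "inside an RCU read-side section". */
	static struct lock_class_key rcu_lock_key;
	struct lockdep_map rcu_lock_map =
		STATIC_LOCKDEP_MAP_INIT("rcu_read_lock", &rcu_lock_key);

	/*
	 * rcu_read_lock()/rcu_read_unlock() (and the _bh variants) then
	 * acquire/release this map, letting lockdep catch imbalanced usage.
	 */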
2007-05-09 | Add suspend-related notifications for CPU hotplug | Rafael J. Wysocki | 1 file, -0/+2
Since nonboot CPUs are now disabled after tasks and devices have been frozen and the CPU hotplug infrastructure is used for this purpose, we need special CPU hotplug notifications that will help the CPU-hotplug-aware subsystems distinguish normal CPU hotplug events from CPU hotplug events related to a system-wide suspend or resume operation in progress. This patch introduces such notifications and causes them to be used during suspend and resume transitions. It also changes all of the CPU-hotplug-aware subsystems to take these notifications into consideration (for now they are handled in the same way as the corresponding "normal" ones). [[email protected]: cleanups] Signed-off-by: Rafael J. Wysocki <[email protected]> Cc: Gautham R Shenoy <[email protected]> Cc: Pavel Machek <[email protected]> Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
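A hedged sketch of how a CPU-hotplug-aware notifier treats the new *_FROZEN (suspend/resume) events the same as the normal ones, as described above; the handler names follow kernel conventions and are illustrative:

	static int __cpuinit example_cpu_notify(struct notifier_block *self,
						unsigned long action, void *hcpu)
	{
		long cpu = (long)hcpu;

		switch (action) {
		case CPU_UP_PREPARE:
		case CPU_UP_PREPARE_FROZEN:	/* suspend/resume path: same handling */
			rcu_online_cpu(cpu);
			break;
		case CPU_DEAD:
		case CPU_DEAD_FROZEN:
			rcu_offline_cpu(cpu);
			break;
		default:
			break;
		}
		return NOTIFY_OK;
	}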
2006-12-07 | [PATCH] rcu: add a prefetch() in rcu_do_batch() | Eric Dumazet | 1 file, -1/+3
On some workloads (for example when a lot of close() syscalls are done), RCU qlen can be quite large, and RCU heads are no longer in cpu cache when rcu_do_batch() is called. This patch adds a prefetch() in rcu_do_batch() to give the CPU a hint to bring back cache lines containing 'struct rcu_head's. Most list manipulation macros include prefetch(), but open-coded loops do not (at least with current C compilers :) ). I got a nice speedup on a trivial benchmark (3.48 us per iteration instead of 3.95 us on a 1.6 GHz Pentium-M): while (1) { pipe(p); close(p[0]); close(p[1]); } Signed-off-by: Eric Dumazet <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
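A condensed sketch of the idea, assuming an open-coded callback loop of the kind rcu_do_batch() uses; the function name is illustrative:

	#include <linux/prefetch.h>
	#include <linux/rcupdate.h>

	static void do_batch_sketch(struct rcu_head *list)
	{
		struct rcu_head *next;

		while (list) {
			next = list->next;
			prefetch(next);		/* start pulling the next rcu_head into cache */
			list->func(list);	/* invoke this callback */
			list = next;
		}
	}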
2006-10-04 | [PATCH] rcu: simplify/improve batch tuning | Oleg Nesterov | 1 file, -8/+3
Kill a hard-to-calculate 'rsinterval' boot parameter and per-cpu rcu_data.last_rs_qlen. Instead, it adds a flag rcu_ctrlblk.signaled, which records the fact that one of the CPUs has sent a resched IPI since the last rcu_start_batch(). Roughly speaking, we need two rcu_start_batch()s in order to move callbacks from ->nxtlist to ->donelist. This means that when ->qlen exceeds qhimark and continues to grow, we should send a resched IPI, and then do it again after we have gone through a quiescent state. On the other hand, if it was already sent, we don't need to do it again when another CPU detects overflow of the queue. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Paul E. McKenney <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
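A hedged sketch of the "send the resched IPI only once per batch" logic described above; the exact placement in the real code may differ:

	static void force_quiescent_state_sketch(struct rcu_ctrlblk *rcp)
	{
		int cpu;

		if (rcp->signaled)
			return;			/* an IPI was already sent for this batch */
		rcp->signaled = 1;		/* cleared again by rcu_start_batch() */
		for_each_online_cpu(cpu)	/* push everyone through a quiescent state */
			if (cpu != smp_processor_id())
				smp_send_reschedule(cpu);
	}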
2006-09-13 | [PATCH] rcu_do_batch: make ->qlen decrement irq safe | Oleg Nesterov | 1 file, -1/+5
rcu_do_batch() decrements rdp->qlen with irqs enabled. This is not good: ->qlen can also be modified by call_rcu() from interrupt context. Decrement ->qlen once with irqs disabled, after the main loop. Signed-off-by: Oleg Nesterov <[email protected]> Cc: Dipankar Sarma <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
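A sketch of the fix: count completed callbacks in the loop and fold them into ->qlen once with interrupts off, so a call_rcu() arriving from interrupt context cannot race with the update; field names are those used throughout this log:

	static void rcu_do_batch_sketch(struct rcu_data *rdp, struct rcu_head *list)
	{
		struct rcu_head *next;
		int count = 0;

		while (list) {
			next = list->next;
			list->func(list);
			list = next;
			count++;
		}
		local_irq_disable();
		rdp->qlen -= count;	/* one irq-safe adjustment instead of one per callback */
		local_irq_enable();
	}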
2006-07-31 | [PATCH] cpu hotplug: replace __devinit* with __cpuinit* for cpu notifications | Chandra Seetharaman | 1 file, -2/+2
A few of the callback functions and notifier blocks that are associated with cpu notifications incorrectly have __devinit and __devinitdata. They should be __cpuinit and __cpuinitdata instead. It makes no functional difference but wastes text area when CONFIG_HOTPLUG is enabled and CONFIG_HOTPLUG_CPU is not. This patch fixes all those instances. Signed-off-by: Chandra Seetharaman <[email protected]> Cc: Ashok Raj <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-07-03 | [PATCH] lockdep: locking init debugging improvement | Ingo Molnar | 1 file, -2/+2
Locking init improvement: - introduce and use __SPIN_LOCK_UNLOCKED for array initializations, to pass in the name string of locks, used by debugging Signed-off-by: Ingo Molnar <[email protected]> Signed-off-by: Arjan van de Ven <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-06-27 | [PATCH] cpu hotplug: revert initdata patch submitted for 2.6.17 | Chandra Seetharaman | 1 file, -1/+1
This patch reverts notifier_block changes made in 2.6.17 Signed-off-by: Chandra Seetharaman <[email protected]> Cc: Ashok Raj <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-06-27 | [PATCH] cpu hotplug: revert init patch submitted for 2.6.17 | Chandra Seetharaman | 1 file, -1/+1
In 2.6.17, there was a problem with cpu_notifiers and XFS. I provided a band-aid solution to solve that problem. In the process, I undid all the changes you both were making to ensure that these notifiers were available only at init time (unless CONFIG_HOTPLUG_CPU is defined). We deferred the real fix to 2.6.18. Here is a set of patches that fixes the XFS problem cleanly and makes the cpu notifiers available only at init time (unless CONFIG_HOTPLUG_CPU is defined). If CONFIG_HOTPLUG_CPU is defined then cpu notifiers are available at run time. This patch reverts the notifier_call changes made in 2.6.17. Signed-off-by: Chandra Seetharaman <[email protected]> Cc: Ashok Raj <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-06-27 | [PATCH] rcutorture: add call_rcu_bh() operations | Paul E. McKenney | 1 file, -0/+10
Add operations for the call_rcu_bh() variant of RCU. Also add an rcu_batches_completed_bh() function, which is needed by rcutorture. Signed-off-by: Paul E. McKenney <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-06-23 | [PATCH] Make RCU API inaccessible to non-GPL Linux kernel modules | Paul E. McKenney | 1 file, -11/+2
Remove synchronize_kernel() (deprecated 2-APR-2005 in http://lkml.org/lkml/2005/4/3/11) and make the RCU API inaccessible to non-GPL Linux kernel modules (as was announced more than one year ago in http://lkml.org/lkml/2005/4/3/8). Tested on x86 and ppc64. Signed-off-by: "Paul E. McKenney" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-05-15 | [PATCH] RCU: introduce rcu_needs_cpu() interface | Heiko Carstens | 1 file, -0/+19
With "Paul E. McKenney" <[email protected]> Introduce the rcu_needs_cpu() interface. This can be used to tell if there will be a new rcu batch on a cpu soon by looking at the curlist pointer. This can be used to avoid entering a tickless idle state in which the cpu would miss that a new batch is ready when rcu_start_batch() is called on a different cpu. Signed-off-by: Heiko Carstens <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
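A simplified sketch of what such an interface can look like, based only on the description above (the real implementation may also consult the _bh variant and rcu_pending()):

	int rcu_needs_cpu_sketch(int cpu)
	{
		struct rcu_data *rdp = &per_cpu(rcu_data, cpu);

		/* A batch in flight or queued callbacks means the tick is still needed. */
		return rdp->curlist != NULL || rdp->nxtlist != NULL;
	}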
2006-04-26 | [PATCH] Remove __devinit and __cpuinit from notifier_call definitions | Chandra Seetharaman | 1 file, -1/+1
A few of the notifier_chain_register() callers use __init in the definition of notifier_call. It is incorrect, as the function definition should be available after the initializations (they do not unregister them during initializations). This patch fixes all such usages to _not_ have notifier_call in the __init section. Signed-off-by: Chandra Seetharaman <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-04-26 | [PATCH] Remove __devinitdata from notifier block definitions | Chandra Seetharaman | 1 file, -1/+1
A few of the notifier_chain_register() callers use __devinitdata in the definition of the notifier_block data structure. It is incorrect, as the data structure should be available after the initializations (they do not unregister them during initializations). This was leading to an oops when the notifier chain is invoked for those callbacks after initialization. This patch fixes all such usages to _not_ have the notifier_block data structure in the init data section. Signed-off-by: Chandra Seetharaman <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-03-24 | [PATCH] rcu_process_callbacks: don't cli() while testing ->nxtlist | Oleg Nesterov | 1 file, -3/+2
__rcu_process_callbacks() disables interrupts to protect itself from call_rcu(), which adds new entries to ->nxtlist. However, we can check "->nxtlist != NULL" with interrupts enabled: we can't get "false positives" because call_rcu() can only change this condition from 0 to 1. Tested with rcutorture.ko. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Dipankar Sarma <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
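A sketch of the resulting shape of the code: the cheap test runs with interrupts enabled, and only the actual list move runs with them disabled; field names are those used throughout this log:

	static void process_callbacks_sketch(struct rcu_data *rdp)
	{
		if (rdp->nxtlist && !rdp->curlist) {	/* safe to test with irqs on */
			local_irq_disable();
			rdp->curlist = rdp->nxtlist;	/* promote pending callbacks */
			rdp->curtail = rdp->nxttail;
			rdp->nxtlist = NULL;
			rdp->nxttail = &rdp->nxtlist;
			local_irq_enable();
		}
	}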
2006-03-23 | [PATCH] kernel/rcupdate.c: make two structs static | Adrian Bunk | 1 file, -2/+2
This patch makes two needlessly global structs static. Signed-off-by: Adrian Bunk <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-03-23 | [PATCH] convert kernel/rcupdate.c:rcu_barrier_sema to mutex | Ingo Molnar | 1 file, -5/+5
Convert kernel/rcupdate's rcu_barrier_sema to mutex. Signed-off-by: Ingo Molnar <[email protected]> Acked-by: "Paul E. McKenney" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-03-20 | [PATCH] add EXPORT_SYMBOL_GPL_FUTURE() to RCU subsystem | Greg Kroah-Hartman | 1 file, -3/+3
As the RCU symbols are going to be changed to GPL in the near future, let's warn users that this is going to happen. Cc: Paul McKenney <[email protected]> Acked-by: Dipankar Sarma <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
2006-03-08 | [PATCH] rcu batch tuning | Dipankar Sarma | 1 file, -18/+58
This patch adds new tunables for RCU queue and finished batches. There are two types of controls - number of completed RCU updates invoked in a batch (blimit) and monitoring for high rate of incoming RCUs on a cpu (qhimark, qlowmark). By default, the per-cpu batch limit is set to a small value. If the input RCU rate exceeds the high watermark, we do two things - force quiescent state on all cpus and set the batch limit of the CPU to INTMAX. Setting batch limit to INTMAX forces all finished RCUs to be processed in one shot. If we have more than INTMAX RCUs queued up, then we have bigger problems anyway. Once the incoming queued RCUs fall below the low watermark, the batch limit is set to the default. Signed-off-by: Dipankar Sarma <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: "David S. Miller" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
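A condensed sketch of the watermark behaviour described above; in the real code the two halves live in different functions (the enqueue path and the batch-processing path):

	/* Enqueue side: too many pending callbacks, switch to drain mode. */
	if (unlikely(++rdp->qlen > qhimark)) {
		rdp->blimit = INT_MAX;			/* process all finished callbacks at once */
		force_quiescent_state(rdp, &rcu_ctrlblk);
	}

	/* Processing side: pressure is gone, restore the small default limit. */
	if (rdp->blimit == INT_MAX && rdp->qlen <= qlowmark)
		rdp->blimit = blimit;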
2006-01-10 | [PATCH] rcu: fix hotplug-cpu ->donelist leak | Oleg Nesterov | 1 file, -1/+2
Pointed out by Srivatsa Vaddagiri <[email protected]>. rcu_do_batch() stops after processing maxbatch callbacks on ->donelist, leaving rcu_tasklet in TASKLET_STATE_SCHED state. If a CPU_DEAD event happens, the remaining ->donelist entries are lost, because rcu_offline_cpu() kills this tasklet. With this patch, ->donelist migrates along with ->curlist and ->nxtlist to the current cpu. Compile tested. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Paul E. McKenney <[email protected]> Cc: Srivatsa Vaddagiri <[email protected]> Cc: Dipankar Sarma <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
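A hedged sketch of the migration helper implied by this entry, appending the dead CPU's list at the survivor's tail so callback ordering is preserved:

	static void rcu_move_batch_sketch(struct rcu_data *this_rdp,
					  struct rcu_head *list, struct rcu_head **tail)
	{
		local_irq_disable();
		*this_rdp->nxttail = list;	/* splice at the tail: order preserved */
		if (list)
			this_rdp->nxttail = tail;
		local_irq_enable();
	}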
2006-01-10 | [PATCH] rcu: join rcu_ctrlblk and rcu_state | Oleg Nesterov | 1 file, -44/+38
This patch moves rcu_state into the rcu_ctrlblk. I think there are no reasons why we should have 2 different variables to control rcu state. Every user of rcu_state also has "rcu_ctrlblk *rcp" in the parameter list. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Paul E. McKenney <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-01-09 | [PATCH] rcu: don't set ->next_pending in rcu_start_batch() | Oleg Nesterov | 1 file, -7/+4
I think it is better to set ->next_pending in the caller, when it is needed. This saves one parameter, and this coincides with cpu_quiet() behaviour, which sets ->completed = ->cur itself. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Paul E. McKenney <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-01-09 | [PATCH] rcu: uninline __rcu_pending() | Oleg Nesterov | 1 file, -0/+30
__rcu_pending() is rather fat and called twice from rcu_pending(). rcu_pending() has multiple callers, and is not that small either. This patch uninlines both of them. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Paul E. McKenney <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-01-08 | [PATCH] rcu file: use atomic primitives | Nick Piggin | 1 file, -14/+0
Use atomic_inc_not_zero for rcu files instead of special case rcuref. Signed-off-by: Nick Piggin <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
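A sketch of the lock-free lookup pattern this enables; fcheck() and f_count are used here as the era's conventional names, and the fragment is illustrative rather than the exact change:

	struct file *file;

	rcu_read_lock();
	file = fcheck(fd);				/* RCU-protected fd table lookup */
	if (file && !atomic_inc_not_zero(&file->f_count))
		file = NULL;				/* already being freed: treat as a miss */
	rcu_read_unlock();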
2006-01-08 | [PATCH] RCU signal handling | Ingo Molnar | 1 file, -0/+1
RCU tasklist_lock and RCU signal handling: send signals RCU-read-locked instead of tasklist_lock read-locked. This is a scalability improvement on SMP and a preemption-latency improvement under PREEMPT_RCU. Signed-off-by: Paul E. McKenney <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Acked-by: William Irwin <[email protected]> Cc: Roland McGrath <[email protected]> Cc: Oleg Nesterov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-01-08 | [PATCH] Change maxaligned_in_smp alignment macros to internodealigned_in_smp macros | Ravikiran G Thirumalai | 1 file, -2/+2
____cacheline_maxaligned_in_smp is currently used to align critical structures and avoid false sharing. It uses per-arch L1_CACHE_SHIFT_MAX, and people find L1_CACHE_SHIFT_MAX useless. However, we have been using ____cacheline_maxaligned_in_smp to align structures on the internode cacheline size. As per Andi's suggestion, the following patch kills ____cacheline_maxaligned_in_smp and introduces INTERNODE_CACHE_SHIFT, which defaults to L1_CACHE_SHIFT for all arches. Arches needing L3/Internode cacheline alignment can define INTERNODE_CACHE_SHIFT in the arch asm/cache.h. The patch replaces ____cacheline_maxaligned_in_smp with ____cacheline_internodealigned_in_smp. With this patch, L1_CACHE_SHIFT_MAX can be killed. Signed-off-by: Ravikiran Thirumalai <[email protected]> Signed-off-by: Shai Fultheim <[email protected]> Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2005-12-12 | [PATCH] Fix RCU race in access of nohz_cpu_mask | Srivatsa Vaddagiri | 1 file, -5/+13
Accessing nohz_cpu_mask before incrementing rcp->cur is racy. It can cause tickless idle CPUs to be included in rsp->cpumask, which will extend grace periods unnecessarily. Fix this race. It has been tested using extensions to the RCU torture module that force various CPUs to become idle. Signed-off-by: Srivatsa Vaddagiri <[email protected]> Cc: Dipankar Sarma <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
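A fragment-level sketch of the corrected ordering the fix describes: start the new grace period first, then sample nohz_cpu_mask, with a barrier in between, so a CPU going tickless is either excluded from the wait mask or will notice the new batch:

	rcp->cur++;					/* the new grace period is now visible */
	smp_mb();					/* order ->cur update against mask sampling */
	cpus_andnot(rsp->cpumask, cpu_online_map, nohz_cpu_mask);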
2005-12-12 | [PATCH] add rcu_barrier() synchronization point | Dipankar Sarma | 1 file, -0/+41
This introduces a new interface, rcu_barrier(), which waits until all the RCU callbacks queued up to this call have completed. Reiser4 needs this, because we do more than just free a memory object in our RCU callback: we also remove it from the list hanging off the super-block. This means that before freeing the reiser4-specific portion of the super-block (during umount) we have to wait until all pending RCU callbacks are executed. The only change made to the original patch for reiser4 is the exporting of rcu_barrier(). Cc: Hans Reiser <[email protected]> Cc: Vladimir V. Saveliev <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
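A minimal sketch of the mechanism: post one callback on every CPU and wait until the last has run, using a counter and a completion; details of the real function may differ:

	static atomic_t rcu_barrier_cpu_count;
	static struct completion rcu_barrier_completion;

	static void rcu_barrier_callback_sketch(struct rcu_head *notused)
	{
		if (atomic_dec_and_test(&rcu_barrier_cpu_count))
			complete(&rcu_barrier_completion);
	}

	/*
	 * rcu_barrier() then sets the count to the number of online CPUs,
	 * queues this callback on each of them with call_rcu(), and calls
	 * wait_for_completion() on rcu_barrier_completion.
	 */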
2005-10-30 | [PATCH] RCU torture-testing kernel module | Paul E. McKenney | 1 file, -0/+10
This patch is a rewrite of the one submitted on October 1st, using modules (http://marc.theaimsgroup.com/?l=linux-kernel&m=112819093522998&w=2). This rewrite adds a tristate CONFIG_RCU_TORTURE_TEST, which enables an intense torture test of the RCU infrastructure. This is needed due to the continued changes to the RCU infrastructure to accommodate dynamic ticks, CPU hotplug, realtime, and so on. Most of the code is in a separate file that is compiled only if the CONFIG variable is set. Documentation on how to run the test and interpret the output is also included. This code has been tested on i386 and ppc64, and an earlier version of the code has received extensive testing on a number of architectures as part of the PREEMPT_RT patchset. Signed-off-by: "Paul E. McKenney" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2005-10-17 | [PATCH] rcu: keep rcu callback event counter | Eric Dumazet | 1 file, -0/+11
This makes call_rcu() keep track of how many events there are on the RCU list, and cause a reschedule event when the list gets too long. This helps keep RCU event lists down. Signed-off-by: Linus Torvalds <[email protected]>
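Sketched in call_rcu() terms; the 10000 threshold and the set_need_resched() call (as it existed at the time) are illustrative of the idea rather than authoritative:

	/*
	 * In call_rcu(): account for the new callback and, if this CPU's list
	 * has grown too long, ask for a reschedule so a quiescent state (and
	 * therefore a batch run) happens soon.
	 */
	if (unlikely(++rdp->count > 10000))
		set_need_resched();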
2005-10-17 | Increase default RCU batching sharply | Linus Torvalds | 1 file, -1/+1
Dipankar made RCU limit the batch size to improve latency, but that approach is unworkable: it can cause the RCU queues to grow without bounds, since the batch limiter ended up limiting the callbacks. So make the limit much higher, and start planning on instead limiting the batch size by doing RCU callbacks more often if the queue looks like it might be growing too long. Signed-off-by: Linus Torvalds <[email protected]>
2005-09-09 | [PATCH] files: rcuref APIs | Dipankar Sarma | 1 file, -0/+14
Adds a set of primitives to do reference counting for objects that are looked up without locks using RCU. Signed-off-by: Ravikiran Thirumalai <[email protected]> Signed-off-by: Dipankar Sarma <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2005-05-01 | [PATCH] Deprecate synchronize_kernel, GPL replacement | Paul E. McKenney | 1 file, -2/+14
The synchronize_kernel() primitive is used for quite a few different purposes: waiting for RCU readers, waiting for NMIs, waiting for interrupts, and so on. This makes RCU code harder to read, since synchronize_kernel() might or might not have matching rcu_read_lock()s. This patch creates a new synchronize_rcu() that is to be used for RCU readers and a new synchronize_sched() that is used for the rest. These two new primitives currently have the same implementation, but this might well change with additional real-time support. Both new primitives are GPL-only; the old primitive is deprecated. Signed-off-by: Paul E. McKenney <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
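A typical updater that the new synchronize_rcu() is meant for; the lock, list, and object names here are hypothetical:

	spin_lock(&my_list_lock);
	list_del_rcu(&obj->node);	/* unpublish: new readers can no longer find obj */
	spin_unlock(&my_list_lock);
	synchronize_rcu();		/* wait for all pre-existing RCU readers to finish */
	kfree(obj);			/* now nothing can still hold a reference */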
2005-05-01 | [PATCH] kernel/rcupdate.c: make the exports EXPORT_SYMBOL_GPL | Paul E. McKenney | 1 file, -3/+3
The gpl exports need to be put back. Moving them to GPL -- but in a measured manner, as I proposed on this list some months ago -- is fine. Changing these particular exports precipitously is most definitely -not- fine. Here is my earlier proposal: http://marc.theaimsgroup.com/?l=linux-kernel&m=110520930301813&w=2 See below for a patch that puts the exports back, along with an updated version of my earlier patch that starts the process of moving them to GPL. I will also be following this message with RFC patches that introduce two (EXPORT_SYMBOL_GPL) interfaces to replace synchronize_kernel(), which then becomes deprecated. Signed-off-by: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2005-04-16 | Linux-2.6.12-rc2 | Linus Torvalds | 1 file, -0/+470
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!