path: root/kernel/workqueue.c
Age  Commit message  Author  Files  Lines
2010-06-29  workqueue: merge feature parameters into flags  (Tejun Heo; 1 file, -10/+7)
Currently, __create_workqueue_key() takes @singlethread and @freezeable parameters and stores them separately in workqueue_struct. Merge them into a single flags parameter and field, and use WQ_FREEZEABLE and WQ_SINGLE_THREAD. Signed-off-by: Tejun Heo <[email protected]>
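A rough sketch of the merged representation (only the names come from the patch; the exact bit values and struct layout are assumptions for illustration):

	enum {
		WQ_FREEZEABLE		= 1 << 0,	/* replaces @freezeable */
		WQ_SINGLE_THREAD	= 1 << 1,	/* replaces @singlethread */
	};

	struct workqueue_struct {
		unsigned int		flags;		/* WQ_* flags: one field instead of two */
		/* ... */
	};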
2010-06-29  workqueue: misc/cosmetic updates  (Tejun Heo; 1 file, -47/+84)
Make the following updates in preparation for concurrency managed workqueue. None of these changes causes any visible behavior difference.
* Add comments and adjust indentations to data structures and several functions.
* Rename wq_per_cpu() to get_cwq() and swap the position of two parameters for consistency. Convert a direct per_cpu_ptr() access to wq->cpu_wq to get_cwq().
* Add work_static() and update set_wq_data() such that it sets the flags part to WORK_STRUCT_PENDING | WORK_STRUCT_STATIC if static | @extra_flags.
* Move the sanity check on work->entry emptiness from queue_work_on() to __queue_work(), which all queueing paths share.
* Make __queue_work() take @cpu and @wq instead of @cwq.
* Restructure flush_work() and __create_workqueue_key() to make them easier to modify.
Signed-off-by: Tejun Heo <[email protected]>
2010-06-29  workqueue: kill RT workqueue  (Tejun Heo; 1 file, -6/+0)
With stop_machine() converted to use cpu_stop, the RT workqueue doesn't have any users left. Kill RT workqueue support. Signed-off-by: Tejun Heo <[email protected]>
2010-06-14  lockdep: Add an in_workqueue_context() lockdep-based test function  (Paul E. McKenney; 1 file, -0/+15)
Some recent uses of RCU make use of workqueues. In these uses, execution within the context of a specific workqueue takes the place of the usual RCU read-side primitives such as rcu_read_lock(), and flushing of workqueues takes the place of the usual RCU grace-period primitives. Checking for correct use of rcu_dereference() in such cases requires a test of whether the code is executing in the context of a particular workqueue. This commit adds an in_workqueue_context() function that provides this test. This new function is only defined when lockdep is enabled, which allows it to be used as the second argument of rcu_dereference_check(). Signed-off-by: Paul E. McKenney <[email protected]>
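A hypothetical usage sketch ("foo_ptr" and "foo_wq" are made-up names: an RCU-protected pointer updated only by work items running on a particular workqueue):

	/* The lockdep-only in_workqueue_context() serves as the "check"
	 * condition: no rcu_read_lock() is needed when we are known to be
	 * executing in the context of foo_wq. */
	struct foo *f = rcu_dereference_check(foo_ptr,
					      in_workqueue_context(foo_wq));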
2010-05-27  kernel/: convert cpu notifier to return encapsulated errno value  (Akinobu Mita; 1 file, -4/+5)
With the previous modification, a cpu notifier can return an encapsulated errno value. This converts the cpu notifiers for kernel/*.c. Signed-off-by: Akinobu Mita <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-04-30  workqueue: change cancel_work_sync() to clear work->data  (Oleg Nesterov; 1 file, -1/+11)
In short: change cancel_work_sync(work) to mark this work as "never queued" upon return. When cancel_work_sync(work) succeeds, we know that this work can't be queued or running, and since we own WORK_STRUCT_PENDING, nobody can change the bits in work->data under us. This means we can also clear the "cwq" part along with the _PENDING bit locklessly before return; unless the work is queued, nobody can assume get_wq_data() is stable, even under cwq->lock. This change can speed up subsequent cancel/flush requests, and as Dmitry pointed out it simplifies the usage of work_structs which can be queued on different workqueues. Consider this pseudo code from the input subsystem:

	struct workqueue_struct *WQ;
	struct work_struct *WORK;

	for (;;) {
		WQ = create_workqueue();
		...
		if (condition())
			queue_work(WQ, WORK);
		...
		cancel_work_sync(WORK);
		destroy_workqueue(WQ);
	}

If condition() returns T and then F, cancel_work_sync() will crash the kernel because WORK->data still points to the already destroyed workqueue. With this patch the code above becomes correct. Suggested-by: Dmitry Torokhov <[email protected]> Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Tejun Heo <[email protected]>
2010-04-30  workqueue: warn about flush_scheduled_work()  (Alan Stern; 1 file, -0/+24)
This patch (as1319) adds kerneldoc and a pointed warning to flush_scheduled_work(). Signed-off-by: Alan Stern <[email protected]> Signed-off-by: Tejun Heo <[email protected]>
2010-04-30  workqueue: flush_delayed_work: keep the original workqueue for re-queueing  (Oleg Nesterov; 1 file, -1/+1)
flush_delayed_work() always uses keventd_wq for re-queueing, but it should use the workqueue this dwork was queued on. Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Tejun Heo <[email protected]>
2009-12-10  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq  (Linus Torvalds; 1 file, -3/+128)
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: Add debugobjects support
2009-11-17  workqueue: fix race condition in schedule_on_each_cpu()  (Tejun Heo; 1 file, -15/+13)
Commit 65a64464349883891e21e74af16c05d6e1eeb4e9 ("HWPOISON: Allow schedule_on_each_cpu() from keventd") which allows schedule_on_each_cpu() to be called from keventd added a race condition. schedule_on_each_cpu() may race with cpu hotplug and end up executing the function twice on a cpu. Fix it by moving direct execution into the section protected with get/put_online_cpus(). While at it, update code such that direct execution is done after works have been scheduled for all other cpus and drop unnecessary cpu != orig test from flush loop. Signed-off-by: Tejun Heo <[email protected]> Cc: Andi Kleen <[email protected]> Acked-by: Oleg Nesterov <[email protected]> Cc: Ingo Molnar <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
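A sketch of the fixed flow, reconstructed from the description above (simplified, not the verbatim patch; current_is_keventd() and the other helpers are the workqueue/CPU-hotplug APIs of the time):

	int schedule_on_each_cpu(work_func_t func)
	{
		int cpu, orig = -1;
		struct work_struct *works;

		works = alloc_percpu(struct work_struct);
		if (!works)
			return -ENOMEM;

		get_online_cpus();

		/* When running from keventd, execute the current CPU's work
		 * directly instead of queueing it, to avoid deadlocking. */
		if (current_is_keventd())
			orig = raw_smp_processor_id();

		for_each_online_cpu(cpu) {
			struct work_struct *work = per_cpu_ptr(works, cpu);

			INIT_WORK(work, func);
			if (cpu != orig)
				schedule_work_on(cpu, work);
		}

		/* Direct execution happens after works have been scheduled
		 * for all other CPUs, inside the get/put_online_cpus()
		 * section, closing the hotplug race. */
		if (orig >= 0)
			func(per_cpu_ptr(works, orig));

		/* note: no cpu != orig test needed here */
		for_each_online_cpu(cpu)
			flush_work(per_cpu_ptr(works, cpu));

		put_online_cpus();
		free_percpu(works);
		return 0;
	}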
2009-11-16  workqueue: Add debugobjects support  (Thomas Gleixner; 1 file, -3/+128)
Add debugobject support to track the life time of work_structs. While at it, remove duplicate definition of INIT_DELAYED_WORK_ON_STACK(). Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Tejun Heo <[email protected]>
2009-10-29  Merge branch 'hwpoison-2.6.32' of git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6  (Linus Torvalds; 1 file, -2/+19)
* 'hwpoison-2.6.32' of git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6:
  HWPOISON: fix invalid page count in printk output
  HWPOISON: Allow schedule_on_each_cpu() from keventd
  HWPOISON: fix /proc/meminfo alignment
  HWPOISON: fix oops on ksm pages
  HWPOISON: Fix page count leak in hwpoison late kill in do_swap_page
  HWPOISON: return early on non-LRU pages
  HWPOISON: Add brief hwpoison description to Documentation
  HWPOISON: Clean up PR_MCE_KILL interface
2009-10-19  HWPOISON: Allow schedule_on_each_cpu() from keventd  (Andi Kleen; 1 file, -2/+19)
Right now when calling schedule_on_each_cpu() from keventd there is a deadlock because it tries to schedule a work item on the current CPU too. This happens via lru_add_drain_all() in hwpoison. Just call the function for the current CPU in this case. This is actually faster too. Debugging with Fengguang Wu & Max Asbock Signed-off-by: Andi Kleen <[email protected]>
2009-10-14  workqueue: add 'flush_delayed_work()' to run and wait for delayed work  (Linus Torvalds; 1 file, -0/+18)
It basically turns a delayed work into an immediate work, and then waits for it to finish, thus allowing you to force (and wait for) an immediate flush of a delayed work. We'll want to use this in the tty layer to clean up tty_flush_to_ldisc(). Acked-by: Oleg Nesterov <[email protected]> [ Fixed to use 'del_timer_sync()' as noted by Oleg ] Signed-off-by: Linus Torvalds <[email protected]>
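A plausible implementation sketch of the described behavior (assuming the keventd_wq-based internals of the time; wq_per_cpu() and __queue_work() are internal helpers, and the 2010-04-30 entry above fixes the re-queueing to use the work's original workqueue):

	void flush_delayed_work(struct delayed_work *dwork)
	{
		if (del_timer_sync(&dwork->timer)) {
			/* Timer was still pending: queue the work right
			 * now instead of waiting for it to fire. */
			__queue_work(wq_per_cpu(keventd_wq, get_cpu()),
				     &dwork->work);
			put_cpu();
		}
		flush_work(&dwork->work);
	}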
2009-09-11  Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds; 1 file, -2/+0)
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (64 commits)
  sched: Fix sched::sched_stat_wait tracepoint field
  sched: Disable NEW_FAIR_SLEEPERS for now
  sched: Keep kthreads at default priority
  sched: Re-tune the scheduler latency defaults to decrease worst-case latencies
  sched: Turn off child_runs_first
  sched: Ensure that a child can't gain time over it's parent after fork()
  sched: enable SD_WAKE_IDLE
  sched: Deal with low-load in wake_affine()
  sched: Remove short cut from select_task_rq_fair()
  sched: Turn on SD_BALANCE_NEWIDLE
  sched: Clean up topology.h
  sched: Fix dynamic power-balancing crash
  sched: Remove reciprocal for cpu_power
  sched: Try to deal with low capacity, fix update_sd_power_savings_stats()
  sched: Try to deal with low capacity
  sched: Scale down cpu_power due to RT tasks
  sched: Implement dynamic cpu_power
  sched: Add smt_gain
  sched: Update the cpu_power sum during load-balance
  sched: Add SD_PREFER_SIBLING
  ...
2009-09-09  sched: Keep kthreads at default priority  (Mike Galbraith; 1 file, -2/+0)
Remove the kthread/workqueue priority boost; it increases worst-case desktop latencies. Signed-off-by: Mike Galbraith <[email protected]> Acked-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-04  workqueues: Improve schedule_work() documentation  (Bart Van Assche; 1 file, -1/+6)
Two important aspects of the schedule_work() function are not yet documented:
- that it is allowed to pass a struct work_struct * to this function that is already on the kernel-global workqueue;
- the meaning of its return value.
The patch below documents both aspects. Signed-off-by: Bart Van Assche <[email protected]> Cc: "Greg Kroah-Hartman" <[email protected]> Cc: Andrew Morton <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
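A usage sketch illustrating both documented aspects ("my_work" is a hypothetical work item):

	static struct work_struct my_work;

	/* Safe even if my_work is already on the kernel-global
	 * workqueue; the return value says whether it was newly queued. */
	if (schedule_work(&my_work))
		pr_debug("queued on the kernel-global workqueue\n");
	else
		pr_debug("was already pending, not queued again\n");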
2009-06-02  ftrace, workqueuetrace: make workqueue tracepoints use TRACE_EVENT macro  (Zhaolei; 1 file, -9/+2)
v3: [email protected]: Change TRACE_EVENT definition to new format introduced by Steven Rostedt: consolidate trace and trace_event headers v2: [email protected]: print the function names instead of addr, and zap the work addr v1: [email protected]: Make workqueue tracepoints use TRACE_EVENT macro TRACE_EVENT is a more generic way to define tracepoints. Doing so adds these new capabilities to the tracepoints: - zero-copy and per-cpu splice() tracing - binary tracing without printf overhead - structured logging records exposed under /debug/tracing/events - trace events embedded in function tracer output and other plugins - user-defined, per tracepoint filter expressions Then, this patch converts DEFINE_TRACE to TRACE_EVENT in workqueue related tracepoints. [ Impact: expand workqueue tracer to events tracing ] Signed-off-by: Zhao Lei <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tom Zanussi <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Andrew Morton <[email protected]> Signed-off-by: KOSAKI Motohiro <[email protected]> Signed-off-by: Frederic Weisbecker <[email protected]>
2009-04-09  work_on_cpu(): rewrite it to create a kernel thread on demand  (Andrew Morton; 1 file, -17/+19)
Impact: circular locking bugfix
The various implementations and proposed implementations of work_on_cpu() are vulnerable to various deadlocks because they all used queues of some form. Unrelated pieces of kernel code thus gained dependencies wherein if one work_on_cpu() caller holds a lock which some other work_on_cpu() callback also takes, the kernel could rarely deadlock. Fix this by creating a short-lived kernel thread for each work_on_cpu() invocation. This is not terribly fast, but the only current caller of work_on_cpu() is pci_call_probe(). It would be nice to find some other way of doing the node-local allocations in the PCI probe code so that we can zap work_on_cpu() altogether. The code there is rather nasty. I can't think of anything simple at this time... Cc: Ingo Molnar <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Rusty Russell <[email protected]>
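A sketch of the on-demand kthread approach described above (close to, but not necessarily identical with, the actual patch):

	struct work_for_cpu {
		struct completion completion;
		long (*fn)(void *);
		void *arg;
		long ret;
	};

	static int do_work_for_cpu(void *_wfc)
	{
		struct work_for_cpu *wfc = _wfc;

		wfc->ret = wfc->fn(wfc->arg);
		complete(&wfc->completion);
		return 0;
	}

	long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
	{
		struct task_struct *sub_thread;
		struct work_for_cpu wfc = {
			.completion = COMPLETION_INITIALIZER_ONSTACK(wfc.completion),
			.fn = fn,
			.arg = arg,
		};

		/* A short-lived thread per invocation: no shared queue,
		 * hence no cross-caller lock dependencies. */
		sub_thread = kthread_create(do_work_for_cpu, &wfc, "work_for_cpu");
		if (IS_ERR(sub_thread))
			return PTR_ERR(sub_thread);
		kthread_bind(sub_thread, cpu);
		wake_up_process(sub_thread);
		wait_for_completion(&wfc.completion);

		return wfc.ret;
	}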
2009-04-05  Merge branch 'tracing-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds; 1 file, -1/+15)
* 'tracing-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (413 commits)
  tracing, net: fix net tree and tracing tree merge interaction
  tracing, powerpc: fix powerpc tree and tracing tree interaction
  ring-buffer: do not remove reader page from list on ring buffer free
  function-graph: allow unregistering twice
  trace: make argument 'mem' of trace_seq_putmem() const
  tracing: add missing 'extern' keywords to trace_output.h
  tracing: provide trace_seq_reserve()
  blktrace: print out BLK_TN_MESSAGE properly
  blktrace: extract duplidate code
  blktrace: fix memory leak when freeing struct blk_io_trace
  blktrace: fix blk_probes_ref chaos
  blktrace: make classic output more classic
  blktrace: fix off-by-one bug
  blktrace: fix the original blktrace
  blktrace: fix a race when creating blk_tree_root in debugfs
  blktrace: fix timestamp in binary output
  tracing, Text Edit Lock: cleanup
  tracing: filter fix for TRACE_EVENT_FORMAT events
  ftrace: Using FTRACE_WARN_ON() to check "freed record" in ftrace_release()
  x86: kretprobe-booster interrupt emulation code fix
  ...
Fix up trivial conflicts in arch/parisc/include/asm/ftrace.h, include/linux/memory.h, kernel/extable.c and kernel/module.c
2009-04-02  workqueue: avoid recursion in run_workqueue()  (Lai Jiangshan; 1 file, -30/+11)
1) lockdep will complain when run_workqueue() performs recursion.
2) The recursive implementation of run_workqueue() means that flush_workqueue() and its documentation are inconsistent. This may hide deadlocks and other bugs.
3) The recursion in run_workqueue() will poison cwq->current_work, but flush_work() and __cancel_work_timer(), etcetera need a reliable cwq->current_work.
Signed-off-by: Lai Jiangshan <[email protected]> Acked-by: Oleg Nesterov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Rusty Russell <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2009-04-02  Merge branch 'tracing/core-v2' into tracing-for-linus  (Ingo Molnar; 1 file, -1/+15)
Conflicts:
	include/linux/slub_def.h
	lib/Kconfig.debug
	mm/slob.c
	mm/slub.c
2009-03-30  cpumask: use new cpumask_ functions in core code.  (Rusty Russell; 1 file, -3/+3)
Impact: cleanup
Time to clean up remaining laggards using the old cpu_ functions. Signed-off-by: Rusty Russell <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: [email protected]
2009-02-03  Merge branches 'tracing/ftrace', 'tracing/kmemtrace' and 'linus' into tracing/core  (Ingo Molnar; 1 file, -10/+10)
2009-01-19  work_on_cpu: Use our own workqueue.  (Rusty Russell; 1 file, -1/+7)
Impact: remove potential clashes with generic kevent workqueue
Annoyingly, some places we want to use work_on_cpu are already in workqueues. As per Ingo's suggestion, we create a different workqueue for work_on_cpu. Signed-off-by: Rusty Russell <[email protected]> Signed-off-by: Mike Travis <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-01-19  work_on_cpu: don't try to get_online_cpus() in work_on_cpu.  (Rusty Russell; 1 file, -10/+4)
Impact: remove potential circular lock dependency with cpu hotplug lock
This has caused more problems than it solved, with a pile of cpu hotplug locking issues. Followup patches will get_online_cpus() in callers that need it, but if they don't do it they're no worse than before when they were using set_cpus_allowed without locking. Signed-off-by: Rusty Russell <[email protected]> Signed-off-by: Mike Travis <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-01-14  tracing: add a new workqueue tracer  (Frederic Weisbecker; 1 file, -1/+15)
Impact: new tracer
The workqueue tracer provides some statistical information about each cpu workqueue thread, such as the number of works inserted and executed since its creation. It can help to evaluate the amount of work each of them has to perform. For example, it can help a developer decide whether to choose a per cpu workqueue instead of a singlethreaded one. It only traces statistical information for now, but it will probably later provide event tracing too. Such a tracer could also help, and be improved, to support rt priority sorted workqueue development.
To have a snapshot of the workqueues state at any time, just do:

	cat /debugfs/tracing/trace_stat/workqueues

I.e.:

	1    125    125   reiserfs/1
	1      0      0   scsi_tgtd/1
	1      0      0   aio/1
	1      0      0   ata/1
	1    114    114   kblockd/1
	1      0      0   kintegrityd/1
	1   2147   2147   events/1
	0      0      0   kpsmoused
	0    105    105   reiserfs/0
	0      0      0   scsi_tgtd/0
	0      0      0   aio/0
	0      0      0   ata_aux
	0      0      0   ata/0
	0      0      0   cqueue
	0      0      0   kacpi_notify
	0      0      0   kacpid
	0    149    149   kblockd/0
	0      0      0   kintegrityd/0
	0   1000   1000   khelper
	0   2270   2270   events/0

Changes in V2:
_ Drop the static array based on NR_CPU and dynamically allocate the stat array with num_possible_cpus() and other cpu mask facilities....
_ Trace workqueue insertion at a bit lower level (insert_work instead of queue_work) to handle even the workqueue barriers.
Signed-off-by: Frederic Weisbecker <[email protected]> Signed-off-by: Steven Rostedt <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-01-01  cpumask: convert kernel/workqueue.c  (Rusty Russell; 1 file, -12/+14)
Impact: Reduce memory usage, use new cpumask API.
cpu_populated_map becomes a cpumask_var_t, and cpu_singlethread_map becomes a plain cpumask pointer: it is simply the cpumask containing the first possible CPU anyway. Signed-off-by: Rusty Russell <[email protected]>
2008-11-14  Merge branch 'master' into next  (James Morris; 1 file, -0/+45)
Conflicts:
	security/keys/internal.h
	security/keys/process_keys.c
	security/keys/request_key.c
Fixed conflicts above by using the non-'tsk' versions. Signed-off-by: James Morris <[email protected]>
2008-11-14  CRED: Rename is_single_threaded() to is_wq_single_threaded()  (David Howells; 1 file, -4/+4)
Rename is_single_threaded() to is_wq_single_threaded() so that a new is_single_threaded() can be created that refers to tasks rather than waitqueues. Signed-off-by: David Howells <[email protected]> Reviewed-by: James Morris <[email protected]> Signed-off-by: James Morris <[email protected]>
2008-11-06  cpumask: introduce new API, without changing anything  (Rusty Russell; 1 file, -0/+45)
Impact: introduce new APIs
We want to deprecate cpumasks on the stack, as we are headed for ginormous numbers of CPUs. Eventually, we want to head towards an undefined 'struct cpumask' so they can never be declared on stack.
1) New cpumask functions which take pointers instead of copies. (cpus_* -> cpumask_*)
2) Several new helpers to reduce requirements for temporary cpumasks (cpumask_first_and, cpumask_next_and, cpumask_any_and)
3) Helpers for declaring cpumasks on or offstack for large NR_CPUS (cpumask_var_t, alloc_cpumask_var and free_cpumask_var)
4) 'struct cpumask' for explicitness and to mark new-style code.
5) Make iterator functions stop at nr_cpu_ids (a runtime constant), not NR_CPUS, for time efficiency and for smaller dynamic allocations in future.
6) cpumask_copy() so we can allocate less than a full cpumask eventually (for alloc_cpumask_var), and so we can eliminate the 'struct cpumask' definition eventually.
7) work_on_cpu() helper for doing a task on a CPU, rather than saving the old cpumask for the current thread and manipulating it.
8) smp_call_function_many(), which is smp_call_function_mask() except taking a cpumask pointer.
Note that this patch simply introduces the new functions and leaves the obsolescent ones in place. This is to simplify the transition patches. Signed-off-by: Rusty Russell <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
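An illustrative sketch of the new pointer-based style (the caller is hypothetical, and cpu_online_mask is assumed to be available as a 'const struct cpumask *'):

	static int example(void)
	{
		cpumask_var_t mask;
		int cpu;

		/* (3) on- or offstack allocation, depending on NR_CPUS */
		if (!alloc_cpumask_var(&mask, GFP_KERNEL))
			return -ENOMEM;

		/* (6) explicit copy instead of struct assignment */
		cpumask_copy(mask, cpu_online_mask);

		/* (2) combined helper avoids building a temporary AND mask */
		cpu = cpumask_first_and(mask, cpu_online_mask);

		free_cpumask_var(mask);
		return cpu;
	}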
2008-10-22  workqueue: introduce create_rt_workqueue  (Heiko Carstens; 1 file, -1/+6)
create_rt_workqueue will create a real time prioritized workqueue. This is needed for the conversion of stop_machine to a workqueue based implementation. This patch adds yet another parameter to __create_workqueue_key to tell it that we want an rt workqueue. However, it looks like we should rather have something like "int type" instead of separate singlethread, freezable and rt parameters. Signed-off-by: Heiko Carstens <[email protected]> Signed-off-by: Rusty Russell <[email protected]> Cc: Ingo Molnar <[email protected]>
2008-10-16  Remove Andrew Morton's old email accounts  (Francois Cami; 1 file, -1/+1)
People can use the real name as an index into MAINTAINERS to find the current email address. Signed-off-by: Francois Cami <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-08-12  Merge branch 'core/locking' into core/urgent  (Ingo Molnar; 1 file, -12/+12)
2008-08-11  lockdep: rename map_[acquire|release]() => lock_map_[acquire|release]()  (Ingo Molnar; 1 file, -12/+12)
the names were too generic:

	drivers/uio/uio.c:87: error: expected identifier or '(' before 'do'
	drivers/uio/uio.c:87: error: expected identifier or '(' before 'while'
	drivers/uio/uio.c:113: error: 'map_release' undeclared here (not in a function)

Signed-off-by: Ingo Molnar <[email protected]>
2008-08-11  lockdep: map_acquire  (Peter Zijlstra; 1 file, -12/+12)
Most of the free-standing lock_acquire() usages look remarkably similar; sweep them into a new helper. Signed-off-by: Peter Zijlstra <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-07-30  workqueues: add comments to __create_workqueue_key()  (Oleg Nesterov; 1 file, -1/+12)
Dmitry Adamushko pointed out that the error handling in __create_workqueue_key() is not clear; add a comment. Signed-off-by: Oleg Nesterov <[email protected]> Cc: Dmitry Adamushko <[email protected]> Cc: Ingo Molnar <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-07-25  workqueues: do CPU_UP_CANCELED if CPU_UP_PREPARE fails  (Oleg Nesterov; 1 file, -3/+6)
The bug was pointed out by Akinobu Mita <[email protected]>, and this patch is based on his original patch. workqueue_cpu_callback(CPU_UP_PREPARE) expects that if it returns NOTIFY_BAD, _cpu_up() will then send CPU_UP_CANCELED. However, this is not true since commit a0d8cdb652d35af9319a9e0fb7134de2a276c636 ("cpu hotplug: cpu: deliver CPU_UP_CANCELED only to NOTIFY_OKed callbacks with CPU_UP_PREPARE"). The callback which has returned NOTIFY_BAD will not receive CPU_UP_CANCELED. Change the code to fulfil the CPU_UP_CANCELED logic if CPU_UP_PREPARE fails. Signed-off-by: Oleg Nesterov <[email protected]> Reported-by: Akinobu Mita <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-07-25  workqueues: schedule_on_each_cpu() can use schedule_work_on()  (Oleg Nesterov; 1 file, -2/+1)
schedule_on_each_cpu() can use schedule_work_on() to avoid the code duplication. Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-07-25  workqueues: queue_work() can use queue_work_on()  (Oleg Nesterov; 1 file, -7/+4)
queue_work() can use queue_work_on() to avoid the code duplication. Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
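The described reuse, sketched (likely close to the actual result of the patch, but a reconstruction):

	int queue_work(struct workqueue_struct *wq, struct work_struct *work)
	{
		int ret;

		/* pin the current CPU for the duration of the call */
		ret = queue_work_on(get_cpu(), wq, work);
		put_cpu();

		return ret;
	}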
2008-07-25  workqueues: lockdep annotations for flush_work()  (Oleg Nesterov; 1 file, -0/+5)
Add lockdep annotations to flush_work() and update the comment. Signed-off-by: Oleg Nesterov <[email protected]> Cc: Jarek Poplawski <[email protected]> Acked-by: Johannes Berg <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
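A sketch of the kind of annotation added (a fragment, not the literal patch; the helpers shown use the lock_map_* names from the 2008-08-11 rename above):

	/* Inside flush_work(), before waiting: take and immediately release
	 * the workqueue's lockdep pseudo-lock, so that lockdep can flag a
	 * flush_work() issued from within a work item on the same queue. */
	lock_map_acquire(&cwq->wq->lockdep_map);
	lock_map_release(&cwq->wq->lockdep_map);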
2008-07-25  workqueues: make get_online_cpus() useable for work->func()  (Oleg Nesterov; 1 file, -9/+9)
workqueue_cpu_callback(CPU_DEAD) flushes cwq->thread under cpu_maps_update_begin(). This means that the multithreaded workqueues can't use get_online_cpus() due to the possible deadlock, a very bad and very old problem. Introduce the new state, CPU_POST_DEAD, which is called after cpu_hotplug_done() but before cpu_maps_update_done(). Change workqueue_cpu_callback() to use CPU_POST_DEAD instead of CPU_DEAD. This means that create/destroy functions can't rely on get_online_cpus() any longer and should take cpu_add_remove_lock instead. [[email protected]: fix CONFIG_SMP=n] Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Gautham R Shenoy <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Max Krasnyansky <[email protected]> Cc: Paul Jackson <[email protected]> Cc: Paul Menage <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Vegard Nossum <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Ingo Molnar <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-07-25  workqueues: schedule_on_each_cpu: use flush_work()  (Oleg Nesterov; 1 file, -1/+2)
Change schedule_on_each_cpu() to use flush_work() instead of flush_workqueue(), this way we don't wait for other work_struct's which can be queued meanwhile. Signed-off-by: Oleg Nesterov <[email protected]> Cc: Jarek Poplawski <[email protected]> Cc: Max Krasnyansky <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-07-25  workqueues: implement flush_work()  (Oleg Nesterov; 1 file, -0/+46)
Most users of flush_workqueue() can be changed to use cancel_work_sync(), but sometimes we really need to wait for the completion and cancelling is not an option. schedule_on_each_cpu() is a good example. Add the new helper, flush_work(work), which waits for the completion of the specific work_struct. More precisely, it "flushes" the result of the last queue_work() which is visible to the caller. For example, this code

	queue_work(wq, work);
	/* WINDOW */
	queue_work(wq, work);

	flush_work(work);

doesn't necessarily work "as expected". What can happen in the WINDOW above is:
- wq starts the execution of work->func()
- the caller migrates to another CPU
Now, after the 2nd queue_work(), this work is active on the previous CPU, and at the same time it is queued on another. In this case flush_work(work) may return before the first work->func() completes. It is trivial to add another helper

	int flush_work_sync(struct work_struct *work)
	{
		return flush_work(work) || wait_on_work(work);
	}

which works "more correctly", but it has to iterate over all CPUs and is thus much slower than flush_work(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Max Krasnyansky <[email protected]> Acked-by: Jarek Poplawski <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-07-25  workqueues: insert_work: use "list_head *" instead of "int tail"  (Oleg Nesterov; 1 file, -10/+7)
insert_work() inserts the new work_struct before or after cwq->worklist, depending on the "int tail" parameter. Change it to accept "list_head *" instead, this shrinks .text a bit and allows us to insert the barrier after specific work_struct. Signed-off-by: Oleg Nesterov <[email protected]> Cc: Jarek Poplawski <[email protected]> Cc: Max Krasnyansky <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
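A sketch of the changed helper (the parameter shape comes from the description above; the body is a simplified reconstruction of the workqueue internals of the time):

	static void insert_work(struct cpu_workqueue_struct *cwq,
				struct work_struct *work,
				struct list_head *head)
	{
		set_wq_data(work, cwq);
		/* ensure work->data is visible before the list update */
		smp_wmb();
		list_add_tail(&work->entry, head);
		wake_up(&cwq->more_work);
	}

	/* the common "append at the tail" caller simply passes the list head: */
	insert_work(cwq, work, &cwq->worklist);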
2008-07-24  pm: introduce new interfaces schedule_work_on() and queue_work_on()  (Zhang Rui; 1 file, -1/+38)
This interface allows adding a job on a specific cpu. Although a work struct on a cpu will be scheduled to another cpu if the cpu dies, there is a recursion if a work task tries to offline the cpu it's running on. We need to schedule the task to a specific cpu in this case. http://bugzilla.kernel.org/show_bug.cgi?id=10897 [[email protected]: cleanups] Signed-off-by: Zhang Rui <[email protected]> Tested-by: Rus <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]> Acked-by: Pavel Machek <[email protected]> Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
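A minimal usage sketch (the work item and the CPU choice are hypothetical):

	/* Run the offline job on CPU 0 so the work item never tries to
	 * offline the CPU it is currently running on. */
	schedule_work_on(0, &offline_work);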
2008-07-06  Merge commit 'v2.6.26-rc9' into cpus4096  (Ingo Molnar; 1 file, -1/+1)
2008-07-04  Christoph has moved  (Christoph Lameter; 1 file, -1/+1)
Remove all [email protected] addresses from the kernel tree since they will become invalid on June 27th. Change my maintainer email address for the slab allocators to [email protected] (which will be the new email address for the future). Signed-off-by: Christoph Lameter <[email protected]> Signed-off-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Matt Mackall <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-05-23  core: use performance variant for_each_cpu_mask_nr  (Mike Travis; 1 file, -3/+3)
Change references from for_each_cpu_mask to for_each_cpu_mask_nr where appropriate Reviewed-by: Paul Jackson <[email protected]> Reviewed-by: Christoph Lameter <[email protected]> Signed-off-by: Mike Travis <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]>
2008-05-01  workqueue: remove redundant function invocation  (Andrew Liu; 1 file, -4/+2)
timer_stats_timer_set_start_info() is invoked twice; additionally, the invocation of this function can be moved to where it is only called when a delay is really required. Signed-off-by: Andrew Liu <[email protected]> Cc: Pavel Machek <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Oleg Nesterov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>