path: root/kernel/sched.c
Date | Commit message | Author | Files, lines changed
2009-09-02 | sched: Provide iowait counters | Arjan van de Ven | 1 file, -0/+4
For counting how long an application has been waiting for (disk) IO, there currently is only the HZ sample driven information available, while for all other counters in this class, a high resolution version is available via CONFIG_SCHEDSTATS. In order to make an improved bootchart tool possible, we also need a higher resolution version of the iowait time. This patch below adds this scheduler statistic to the kernel. Signed-off-by: Arjan van de Ven <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
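[Editor's note: a minimal stand-alone C sketch of the delta-accounting pattern the patch describes; the function and field names are hypothetical, not the kernel's schedstats fields.]

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical per-task stats; the real schedstats fields differ. */
    struct task_stats {
            uint64_t iowait_start_ns;   /* timestamp when the task blocked on IO */
            uint64_t iowait_sum_ns;     /* total high-resolution iowait time */
    };

    static uint64_t now_ns(void)
    {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    /* Called when the task starts waiting for IO. */
    static void iowait_begin(struct task_stats *st)
    {
            st->iowait_start_ns = now_ns();
    }

    /* Called when the IO completes: accumulate the exact wait, not a HZ sample. */
    static void iowait_end(struct task_stats *st)
    {
            st->iowait_sum_ns += now_ns() - st->iowait_start_ns;
    }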
2009-08-29 | sched: Rename init_cfs_rq => init_tg_cfs_rq | Anirban Sinha | 1 file, -5/+5
... so that it does not share a common name with a function within the same scope. Signed-off-by: Anirban Sinha <[email protected]> LKML-Reference: <DDFD17CC94A9BD49A82147DDF7D545C501EA98A6@exchange.ZeugmaSystems.local> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-28 | sched: Fix division by zero - really | Peter Zijlstra | 1 file, -21/+29
When re-computing the shares for each task group's cpu representation we need the ratio of weight on each cpu vs the total weight of the sched domain. Since load-balancing is loosely (read not) synchronized, the weight of individual cpus can change between doing the sum and calculating the ratio. The previous patch dealt with only one of the race scenarios, this patch side steps them all by saving a snapshot of all the individual cpu weights, thereby always working on a consistent set. Signed-off-by: Peter Zijlstra <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: Balbir Singh <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Yinghai Lu <[email protected]> LKML-Reference: <1251371336.18584.77.camel@twins> Signed-off-by: Ingo Molnar <[email protected]>
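[Editor's note: a small stand-alone C sketch of the snapshot idea, with hypothetical names: copy the per-cpu weights once, sum the copy, and derive every ratio from the same consistent set, guarding against a zero sum.]

    #include <string.h>

    #define NR_CPUS_DEMO 8

    /* weights[] may be updated concurrently elsewhere, so take one snapshot
     * and compute both the sum and the per-cpu ratios from that snapshot. */
    static void compute_shares(const unsigned long weights[NR_CPUS_DEMO],
                               unsigned long total_shares,
                               unsigned long shares_out[NR_CPUS_DEMO])
    {
            unsigned long snap[NR_CPUS_DEMO];
            unsigned long sum = 0;
            int i;

            memcpy(snap, weights, sizeof(snap));    /* consistent snapshot */
            for (i = 0; i < NR_CPUS_DEMO; i++)
                    sum += snap[i];
            if (!sum)                               /* avoid divide-by-zero */
                    sum = 1;
            for (i = 0; i < NR_CPUS_DEMO; i++)
                    shares_out[i] = total_shares * snap[i] / sum;
    }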
2009-08-23 | rcu: Renamings to increase RCU clarity | Paul E. McKenney | 1 file, -1/+1
Make RCU-sched, RCU-bh, and RCU-preempt be underlying implementations, with "RCU" defined in terms of one of the three. Update the outdated rcu_qsctr_inc() names, as these functions no longer increment anything. Signed-off-by: Paul E. McKenney <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] LKML-Reference: <12509746132696-git-send-email-> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-21 | sched: Avoid division by zero | Peter Zijlstra | 1 file, -13/+10
Patch a5004278f0525dcb9aa43703ef77bf371ea837cd (sched: Fix cgroup smp fairness) introduced the possibility of a divide-by-zero because load-balancing is not synchronized between sched_domains. This can cause the state of cpus to change between the first and second loop over the sched domain in tg_shares_up(). Reported-by: Yinghai Lu <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Jes Sorensen <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Linus Torvalds <[email protected]> LKML-Reference: <1250855934.7538.30.camel@twins> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-20 | sched: Use for_each_class macro in move_one_task() | Hiroshi Shimamoto | 1 file, -1/+2
Replace for loop with the macro for_each_class to cleanup. Signed-off-by: Hiroshi Shimamoto <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Consolidate definition of variable sd in __build_sched_domains | Andreas Herrmann | 1 file, -9/+4
Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out build of NUMA sched groups from __build_sched_domains | Andreas Herrmann | 1 file, -63/+67
... to further strip down __build_sched_domains(). Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out build of ALLNODES sched groups from __build_sched_domains | Andreas Herrmann | 1 file, -5/+8
For the sake of completeness. Now all calls to init_sched_build_groups() are contained in build_sched_groups(). Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out build of CPU sched groups from __build_sched_domains | Andreas Herrmann | 1 file, -9/+9
... to further strip down __build_sched_domains(). Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out build of MC sched groups from __build_sched_domains | Andreas Herrmann | 1 file, -13/+10
... to further strip down __build_sched_domains(). Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out build of SMT sched groups from __build_sched_domains | Andreas Herrmann | 1 file, -11/+20
... to further strip down __build_sched_domains(). Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out build of SMT sched domain from __build_sched_domains | Andreas Herrmann | 1 file, -13/+19
... to further strip down __build_sched_domains(). Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out build of MC sched domain from __build_sched_domains | Andreas Herrmann | 1 file, -12/+18
... to further strip down __build_sched_domains(). Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out build of CPU sched domain from __build_sched_domains | Andreas Herrmann | 1 file, -9/+17
... to further strip down __build_sched_domains(). Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out build of NUMA sched domain from __build_sched_domains | Andreas Herrmann | 1 file, -25/+32
... to further strip down __build_sched_domains(). Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Separate out allocation/free/goto-hell from __build_sched_domains | Andreas Herrmann | 1 file, -72/+99
Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-18 | sched: Use structure to store local data in __build_sched_domains | Andreas Herrmann | 1 file, -76/+89
Signed-off-by: Andreas Herrmann <[email protected]> Cc: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-15 | Merge commit 'v2.6.31-rc6' into core/rcu | Ingo Molnar | 1 file, -17/+44
Merge reason: the branch was on pre-rc1 .30, update to latest. Signed-off-by: Ingo Molnar <[email protected]>
2009-08-14 | Merge branch 'percpu-for-linus' into percpu-for-next | Tejun Heo | 1 file, -17/+44
Conflicts:
  arch/sparc/kernel/smp_64.c
  arch/x86/kernel/cpu/perf_counter.c
  arch/x86/kernel/setup_percpu.c
  drivers/cpufreq/cpufreq_ondemand.c
  mm/percpu.c

Conflicts in the core and arch percpu code are mostly from commit ed78e1e078dd44249f88b1dd8c76dafb39567161, which substituted many num_possible_cpus() calls with nr_cpu_ids. As the for-next branch has moved all the first-chunk allocators into mm/percpu.c, the changes are moved from arch code to mm/percpu.c. Signed-off-by: Tejun Heo <[email protected]>
2009-08-03 | cputime: Optimize jiffies_to_cputime(1) | Stanislaw Gruszka | 1 file, -5/+4
For powerpc with CONFIG_VIRT_CPU_ACCOUNTING, jiffies_to_cputime(1) is not a compile-time constant and the run-time calculation is quite expensive. To optimize, we use a precomputed value. For all other architectures it is a preprocessor definition. Signed-off-by: Stanislaw Gruszka <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
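[Editor's note: a generic stand-alone sketch of the optimization, with hypothetical names: compute the expensive conversion once during setup and reuse the cached value instead of redoing the arithmetic on every call.]

    /* Hypothetical stand-in for jiffies_to_cputime(1) being costly at run time. */
    static unsigned long expensive_conversion(unsigned long jiffies)
    {
            return jiffies * 1000000UL / 250UL;     /* pretend this is expensive */
    }

    static unsigned long cputime_one_jiffy;         /* cached value, set once */

    static void conversion_init(void)               /* call once during setup */
    {
            cputime_one_jiffy = expensive_conversion(1);
    }

    static unsigned long account_ticks(unsigned long ticks)
    {
            return ticks * cputime_one_jiffy;       /* cheap: reuse the cached value */
    }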
2009-08-02 | lockdep: Introduce lockdep_assert_held() | Peter Zijlstra | 1 file, -0/+2
Add a lockdep helper to validate that we indeed are the owner of a lock. Signed-off-by: Peter Zijlstra <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
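[Editor's note: a hedged kernel-style sketch of how the new helper is typically used; update_rq_stats() is hypothetical, only lockdep_assert_held() comes from the patch.]

    /* Kernel-context sketch (not stand-alone): document and check a locking
     * precondition. With lockdep enabled the assertion warns if the caller
     * does not actually hold the lock; otherwise it has no run-time cost. */
    static void update_rq_stats(struct rq *rq)
    {
            lockdep_assert_held(&rq->lock);         /* caller must hold rq->lock */
            rq->nr_switches++;                      /* now provably safe to touch */
    }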
2009-08-02 | sched: Ensure the migration task doesn't go away during use | Peter Zijlstra | 1 file, -0/+4
Like sched_migrate_task(), set_cpus_allowed_ptr() should hold onto the migration thread too. Signed-off-by: Peter Zijlstra <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-02 | sched: Fully integrate cpus_active_map and root-domain code | Gregory Haskins | 1 file, -1/+1
Reflect "active" cpus in the rq->rd->online field, instead of the online_map. The motivation is that things that use the root-domain code (such as cpupri) only care about cpus classified as "active" anyway. By synchronizing the root-domain state with the active map, we allow several optimizations. For instance, we can remove an extra cpumask_and from the scheduler hotpath by utilizing rq->rd->online (since it is now a cached version of cpu_active_map & rq->rd->span). Signed-off-by: Gregory Haskins <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Acked-by: Max Krasnyansky <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-02 | sched: Enhance the pre/post scheduling logic | Gregory Haskins | 1 file, -32/+50
We currently have an explicit "needs_post" vtable method which returns a stack variable for whether we should later run post-schedule. This leads to an awkward exchange of the variable as it bubbles back up out of the context switch. Peter Zijlstra observed that this information could be stored in the run-queue itself instead of handled on the stack. Therefore, we revert to the method of having context_switch return void, and update an internal rq->post_schedule variable when we require further processing. In addition, we fix a race condition where we try to access current->sched_class without holding the rq->lock. This is technically racy, as the sched-class could change out from under us. Instead, we reference the per-rq post_schedule variable with the runqueue unlocked, but with preemption disabled to see if we need to reacquire the rq->lock. Finally, we clean the code up slightly by removing the #ifdef CONFIG_SMP conditionals from the schedule() call, and implement some inline helper functions instead. This patch passes checkpatch, and rt-migrate. Signed-off-by: Gregory Haskins <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
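[Editor's note: a simplified stand-alone C sketch, with hypothetical types, of the idea above: record the "needs post-processing" decision in the queue itself rather than returning it up through the context-switch path on the stack.]

    struct runqueue_demo {
            int post_schedule;      /* set by the scheduling path, consumed afterwards */
    };

    static void pick_next(struct runqueue_demo *rq, int needs_push)
    {
            rq->post_schedule = needs_push;     /* stash the decision in the queue */
    }

    static void finish_schedule(struct runqueue_demo *rq)
    {
            if (rq->post_schedule) {            /* cheap unlocked peek ... */
                    /* ... then take the queue lock and do the deferred work */
                    rq->post_schedule = 0;
            }
    }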
2009-08-02 | sched: Check for pushing rt tasks after all scheduling | Steven Rostedt | 1 file, -11/+27
The current method for pushing RT tasks after scheduling only happens after a context switch. But we found cases where a task is set up on a run queue to be pushed but the push never happens because the scheduler chooses the same task. This bug was found with the help of Gregory Haskins and the use of ftrace (trace_printk). It took several days for both of us, analyzing the code and the trace output, to find this. Signed-off-by: Steven Rostedt <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-02 | sched: Optimize unused cgroup configuration | Peter Zijlstra | 1 file, -2/+14
When cgroup group scheduling is built in, skip some code paths if we don't have any (but the root) cgroups configured. Signed-off-by: Peter Zijlstra <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-02 | sched: Fix cgroup smp fairness | Peter Zijlstra | 1 file, -8/+20
Commit ec4e0e2fe018992d980910db901637c814575914 ("fix inconsistency when redistribute per-cpu tg->cfs_rq shares") broke cgroup smp fairness. In order to avoid starvation of newly placed tasks, we never quite set the share of an empty cpu group-task to 0, but instead we set it as if there's a single NICE-0 task present. If however we actually set this in cfs_rq[cpu]->shares, that means the total shares for that group will be slightly inflated every time we balance, causing the observed unfairness. Fix this by setting cfs_rq[cpu]->shares to 0 but actually setting the effective weight of the related se to the inflated number. Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <1248696557.6987.1615.camel@twins> Signed-off-by: Ingo Molnar <[email protected]>
2009-08-02 | Merge branch 'sched/urgent' into sched/core | Ingo Molnar | 1 file, -2/+2
Merge reason: avoid upcoming patch conflict. Signed-off-by: Ingo Molnar <[email protected]>
2009-07-24 | sched: Fix return value of migration_init() | Thomas Gleixner | 1 file, -1/+1
migration_init() returns the return value of the hotplug notifier. In the success case this is NOTIFY_OK which is 1. initcall_debug evaluates that as an error code because init calls are expected to return 0 on success. Signed-off-by: Thomas Gleixner <[email protected]>
2009-07-18 | sched: Pull up the might_sleep() check into cond_resched() | Frederic Weisbecker | 1 file, -7/+5
might_sleep() is called late-ish in cond_resched(), after the need_resched()/preempt enabled/system running tests are checked. It's better to check for sleeping while atomic earlier, and not depend on environment state that reduces the chance of detecting a problem. Also define the cond_resched_*() helpers as macros, so that the FILE/LINE reported in the sleeping-while-atomic warning displays the real origin and not sched.h.

Changes in v2:
 - Call __might_sleep() directly instead of might_sleep(), which may call cond_resched()
 - Turn cond_resched() into a macro so that the file:line couple reported refers to the caller of cond_resched() and not __cond_resched() itself.

Changes in v3:
 - Also propagate this __might_sleep() pull-up to cond_resched_lock() and cond_resched_softirq()

Signed-off-by: Frederic Weisbecker <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
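[Editor's note: a tiny stand-alone C example of why cond_resched() was turned into a macro; because macros expand at the call site, __FILE__ and __LINE__ name the caller rather than the helper's own source file. Names here are hypothetical.]

    #include <stdio.h>

    static void __might_sleep_demo(const char *file, int line)
    {
            printf("sleeping-while-atomic check requested at %s:%d\n", file, line);
    }

    /* The macro captures the caller's location, not this header's. */
    #define might_sleep_demo() __might_sleep_demo(__FILE__, __LINE__)

    int main(void)
    {
            might_sleep_demo();     /* reports this file and this line */
            return 0;
    }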
2009-07-18 | sched: Add a preempt count base offset to __might_sleep() | Frederic Weisbecker | 1 file, -4/+11
Add a preempt count base offset to compare against the current preempt level count. It prepares to pull up the might_sleep check from cond_resched() to cond_resched_lock() and cond_resched_bh(). For these two helpers, we need to respectively ensure that once we'll unlock the given spinlock / reenable local softirqs, we will reach a sleepable state. Signed-off-by: Frederic Weisbecker <[email protected]> [ Move and rename preempt_count_equals() ] Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-07-18 | sched: Cover the CONFIG_DEBUG_SPINLOCK_SLEEP off-case for __might_sleep() | Frederic Weisbecker | 1 file, -2/+1
Cover the off case for __might_sleep(), so that we avoid #ifdefs in files that make use of it. Especially, this prepares for the __might_sleep() pull up on cond_resched(). Signed-off-by: Frederic Weisbecker <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-07-18 | sched: Remove obsolete comment in __cond_resched() | Frederic Weisbecker | 1 file, -5/+0
Remove the outdated comment from __cond_resched() related to the now removed Big Kernel Semaphore. Reported-by: Arnd Bergmann <[email protected]> Reported-by: Ingo Molnar <[email protected]> Signed-off-by: Frederic Weisbecker <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-07-18 | sched: Drop the need_resched() loop from cond_resched() | Frederic Weisbecker | 1 file, -5/+3
The schedule() function is a loop that reschedules the current task while the TIF_NEED_RESCHED flag is set:

    void schedule(void)
    {
    need_resched:
            /* schedule code */
            if (need_resched())
                    goto need_resched;
    }

And cond_resched() repeats this loop:

    do {
            add_preempt_count(PREEMPT_ACTIVE);
            schedule();
            sub_preempt_count(PREEMPT_ACTIVE);
    } while (need_resched());

This loop is needless because schedule() already does the check, and nothing can set TIF_NEED_RESCHED between schedule()'s exit and the loop's need_resched() check. So remove this needless loop. Signed-off-by: Frederic Weisbecker <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-07-18 | Merge branch 'linus' into sched/core | Ingo Molnar | 1 file, -15/+42
Merge reason: branch had an old upstream base (-rc1-ish), but also merge to avoid a conflict. Signed-off-by: Ingo Molnar <[email protected]>
2009-07-18 | sched: fix load average accounting vs. cpu hotplug | Thomas Gleixner | 1 file, -2/+2
The new load average code clears rq->calc_load_active on CPU_ONLINE. That's wrong as the new onlined CPU might have got a scheduler tick already and accounted the delta to the stale value of the time we offlined the CPU. Clear the value when we cleanup the dead CPU instead. Also move the update of the calc_load_update time for the newly online CPU to CPU_UP_PREPARE to avoid that the CPU plays catch up with the stale update time value. Signed-off-by: Thomas Gleixner <[email protected]>
2009-07-16 | Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/peterz/linux-2.6-sched | Linus Torvalds | 1 file, -10/+33
* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/peterz/linux-2.6-sched:
  sched: Fix bug in SCHED_IDLE interaction with group scheduling
  sched: Fix rt_rq->pushable_tasks initialization in init_rt_rq()
  sched: Reset sched stats on fork()
  sched_rt: Fix overload bug on rt group scheduling
  sched: Documentation/sched-rt-group: Fix style issues & bump version
2009-07-10 | sched: optimize cond_resched() | Peter Zijlstra | 1 file, -5/+9
Optimize cond_resched() by removing one conditional. Currently cond_resched() checks system_state == SYSTEM_RUNNING in order to avoid scheduling before the scheduler is running. We can however, as per suggestion of Matt, use PREEMPT_ACTIVE to accomplish that very same. Suggested-by: Matt Mackall <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Acked-by: Matt Mackall <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2009-07-10 | sched: Fix rt_rq->pushable_tasks initialization in init_rt_rq() | Fabio Checconi | 1 file, -1/+1
init_rt_rq() initializes only rq->rt.pushable_tasks, and not the pushable_tasks field of the passed rt_rq. The plist is not used uninitialized since the only pushable_tasks plists used are the ones of root rt_rqs; anyway reinitializing the list on every group creation corrupts the root plist, losing its previous contents. Signed-off-by: Fabio Checconi <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> CC: Gregory Haskins <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-07-10 | sched: Reset sched stats on fork() | Lucas De Marchi | 1 file, -9/+31
The sched_stat fields are currently not reset upon fork. Ingo's recent commit 6c594c21fcb02c662f11c97be4d7d2b73060a205 did reset nr_migrations, but it didn't reset any of the others. This patch resets all sched_stat fields on fork. Signed-off-by: Lucas De Marchi <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
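[Editor's note: a stand-alone C sketch of the pattern, with a hypothetical struct: when duplicating a task, the statistics sub-structure is zeroed rather than inherited, so the child starts with clean counters.]

    #include <string.h>

    struct sched_stats_demo {
            unsigned long wait_sum;
            unsigned long iowait_sum;
            unsigned long nr_migrations;
    };

    struct task_demo {
            int prio;
            struct sched_stats_demo stats;
    };

    static void fork_copy(struct task_demo *child, const struct task_demo *parent)
    {
            *child = *parent;                               /* inherit scheduling state */
            memset(&child->stats, 0, sizeof(child->stats)); /* but reset all statistics */
    }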
2009-07-10 | sched_rt: Fix overload bug on rt group scheduling | Peter Zijlstra | 1 file, -0/+1
Fixes an easily triggerable BUG() when setting process affinities. Make sure to count the number of migratable tasks in the same place: the root rt_rq. Otherwise the number doesn't make sense and we'll hit the BUG in set_cpus_allowed_rt(). Also, make sure we only count tasks, not groups (this is probably already taken care of by the fact that rt_se->nr_cpus_allowed will be 0 for groups, but be more explicit) Tested-by: Thomas Gleixner <[email protected]> CC: [email protected] Signed-off-by: Peter Zijlstra <[email protected]> Acked-by: Gregory Haskins <[email protected]> LKML-Reference: <1247067476.9777.57.camel@twins> Signed-off-by: Ingo Molnar <[email protected]>
2009-07-03 | rcu: Add synchronize_sched_expedited() primitive | Paul E. McKenney | 1 file, -2/+127
This adds the synchronize_sched_expedited() primitive that implements the "big hammer" expedited RCU grace periods. This primitive is placed in kernel/sched.c rather than kernel/rcupdate.c due to its need to interact closely with the migration_thread() kthread. The idea is to wake up this kthread with req->task set to NULL, in response to which the kthread reports the quiescent state resulting from the kthread having been scheduled. Because this patch needs to fallback to the slow versions of the primitives in response to some races with CPU onlining and offlining, a new synchronize_rcu_bh() primitive is added as well. Signed-off-by: Paul E. McKenney <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] LKML-Reference: <12459460982947-git-send-email-> Signed-off-by: Ingo Molnar <[email protected]>
2009-06-29 | sched: Hide runqueues from direct reference at source code level for __raw_get_cpu_var() | Hitoshi Mitake | 1 file, -2/+3
Hide __raw_get_cpu_var() as well - thus all the direct references to runqueues will be abstracted out. Signed-off-by: Hitoshi Mitake <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-06-29 | Merge branch 'linus' into sched/core | Ingo Molnar | 1 file, -14/+16
Merge reason: we will merge a dependent patch. Signed-off-by: Ingo Molnar <[email protected]>
2009-06-24 | percpu: use DEFINE_PER_CPU_SHARED_ALIGNED() | Tejun Heo | 1 file, -2/+2
There are a few places where ___cacheline_aligned* is used with DEFINE_PER_CPU(). Use DEFINE_PER_CPU_SHARED_ALIGNED() instead. DEFINE_PER_CPU_SHARED_ALIGNED() applies alignment only on SMPs. While all other converted places used _in_smp variant or only get compiled for SMP, net/rds used unconditional ____cacheline_aligned. I don't see any reason these data structures should be aligned on UP and thus converted together. Signed-off-by: Tejun Heo <[email protected]> Cc: Mike Frysinger <[email protected]> Cc: Tony Luck <[email protected]> Cc: Andy Grover <[email protected]>
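[Editor's note: a hedged kernel-style fragment, with a hypothetical per-CPU variable, showing the conversion the patch describes; the _SHARED_ALIGNED variant only pads to a cache line on SMP builds, so UP kernels avoid the wasted space.]

    #include <linux/percpu.h>
    #include <linux/cache.h>

    struct demo_stats {
            unsigned long events;
    };

    /* Before: unconditional alignment, paid even on uniprocessor builds:
     *   static DEFINE_PER_CPU(struct demo_stats, demo_stats) ____cacheline_aligned;
     * After: alignment applied only where false sharing can actually occur. */
    static DEFINE_PER_CPU_SHARED_ALIGNED(struct demo_stats, demo_stats);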
2009-06-20 | Merge branch 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 1 file, -1/+2
* 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (49 commits)
  perfcounter: Handle some IO return values
  perf_counter: Push perf_sample_data through the swcounter code
  perf_counter tools: Define and use our own u64, s64 etc. definitions
  perf_counter: Close race in perf_lock_task_context()
  perf_counter, x86: Improve interactions with fast-gup
  perf_counter: Simplify and fix task migration counting
  perf_counter tools: Add a data file header
  perf_counter: Update userspace callchain sampling uses
  perf_counter: Make callchain samples extensible
  perf report: Filter to parent set by default
  perf_counter tools: Handle lost events
  perf_counter: Add event overlow handling
  fs: Provide empty .set_page_dirty() aop for anon inodes
  perf_counter: tools: Makefile tweaks for 64-bit powerpc
  perf_counter: powerpc: Add processor back-end for MPC7450 family
  perf_counter: powerpc: Make powerpc perf_counter code safe for 32-bit kernels
  perf_counter: powerpc: Change how processor-specific back-ends get selected
  perf_counter: powerpc: Use unsigned long for register and constraint values
  perf_counter: powerpc: Enable use of software counters on 32-bit powerpc
  perf_counter tools: Add and use isprint()
  ...
2009-06-20 | Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 1 file, -1/+1
* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  sched: Fix out of scope variable access in sched_slice()
  sched: Hide runqueues from direct refer at source code level
  sched: Remove unneeded __ref tag
  sched, x86: Fix cpufreq + sched_clock() TSC scaling
2009-06-19 | perf_counter: Simplify and fix task migration counting | Peter Zijlstra | 1 file, -1/+2
The task migrations counter was causing rare and hard to decipher memory corruptions under load. After a day of debugging and bisection we found that the problem was introduced with: 3f731ca: perf_counter: Fix cpu migration counter Turning them off fixes the crashes. Incidentally, the whole perf_counter_task_migration() logic can be done simpler as well, by injecting a proper sw-counter event. This cleanup also fixed the crashes. The precise failure mode is not completely clear yet, but we are clearly not unhappy about having a fix ;-) Signed-off-by: Peter Zijlstra <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Corey Ashford <[email protected]> Cc: Marcelo Tosatti <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <[email protected]>
2009-06-18 | kthreads: simplify migration_thread() exit path | Oleg Nesterov | 1 file, -10/+4
Now that kthread_stop() can be used even if the task has already exited, we can kill the "wait_to_die:" loop in migration_thread(). But we must pin rq->migration_thread after creation. Actually, I don't think CPU_UP_CANCELED or CPU_DEAD should wait for ->migration_thread exit. Perhaps we can simplify this code a bit more. migration_call() can set ->should_stop and forget about this thread. But we need a new helper in kthread.c for that. Signed-off-by: Oleg Nesterov <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Pavel Emelyanov <[email protected]> Cc: Rusty Russell <[email protected]> Cc: Vitaliy Gusev <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
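[Editor's note: a hedged kernel-style sketch of the "pin after creation" idea mentioned above; migration_fn and cpu are hypothetical, the helpers are the usual kernel ones.]

    /* Kernel-context sketch (not stand-alone): take a reference on the kthread
     * right after creating it, so kthread_stop() remains safe even if the
     * thread has already exited by the time we get around to stopping it. */
    struct task_struct *t;

    t = kthread_create(migration_fn, NULL, "migration/%d", cpu);
    if (!IS_ERR(t)) {
            get_task_struct(t);     /* pin: dropped below with put_task_struct() */
            wake_up_process(t);
    }

    /* ... later, e.g. on CPU_DEAD or CPU_UP_CANCELED ... */
    kthread_stop(t);
    put_task_struct(t);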