path: root/kernel
Age | Commit message | Author | Files | Lines
2010-09-15 | perf events: Clean up pid passing | Matt Helsley | 3 files | -15/+13

The kernel perf event creation path shouldn't use find_task_by_vpid() because a vpid exists in a specific namespace. find_task_by_vpid() uses current's pid namespace, which isn't always the correct namespace to use for the vpid in all the places perf_event_create_kernel_counter() (and thus find_get_context()) is called.

The goal is to clean up pid namespace handling and prevent bugs like:

    https://bugzilla.kernel.org/show_bug.cgi?id=17281

Instead of using pids, switch find_get_context() to use task struct pointers directly. The syscall is responsible for resolving the pid to a task struct. This moves the pid namespace resolution into the syscall, much like every other syscall that takes pid parameters.

Signed-off-by: Matt Helsley <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Robin Green <[email protected]>
Cc: Prasad <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mahesh Salgaonkar <[email protected]>
LKML-Reference: <a134e5e392ab0204961fd1a62c84a222bf5874a9.1284407763.git.matthltc@us.ibm.com>
Signed-off-by: Ingo Molnar <[email protected]>
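A minimal sketch of the direction this takes: the syscall resolves the vpid in the caller's namespace and hands a task_struct pointer down. find_task_by_vpid(), get_task_struct() and rcu_read_lock() are real kernel APIs; the helper name and error handling here are illustrative only.

    /* sketch: resolve the pid in the *caller's* namespace, then pass
     * the task_struct down instead of a raw pid */
    static struct task_struct *resolve_target_task(pid_t vpid)
    {
            struct task_struct *task;

            rcu_read_lock();
            task = vpid ? find_task_by_vpid(vpid) : current;
            if (task)
                    get_task_struct(task);  /* hold a ref across the call */
            rcu_read_unlock();

            return task;  /* NULL means -ESRCH to the caller */
    }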
2010-09-15 | perf events: Split out task search into helper | Matt Helsley | 1 file | -23/+40

Split out the code which searches for non-exiting tasks into its own helper. Creating this helper not only makes the code slightly more readable, it also prepares for moving the search out of find_get_context() in a subsequent commit.

Signed-off-by: Matt Helsley <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Robin Green <[email protected]>
Cc: Prasad <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mahesh Salgaonkar <[email protected]>
LKML-Reference: <561205417b450b8a4bf7488374541d64b4690431.1284407762.git.matthltc@us.ibm.com>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-15 | hw breakpoints: Fix pid namespace bug | Matt Helsley | 1 file | -1/+2

Hardware breakpoints can't be registered within pid namespaces because tsk->pid is passed rather than the pid in the current namespace. (See https://bugzilla.kernel.org/show_bug.cgi?id=17281 )

This is a quick fix demonstrating the problem, but it is not the best solution: passing pids internally is itself what invites pid namespace bugs. Subsequent patches will show a better approach.

Much thanks to Frederic Weisbecker <[email protected]> for doing the bulk of the work finding this bug.

Signed-off-by: Matt Helsley <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Robin Green <[email protected]>
Cc: Prasad <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mahesh Salgaonkar <[email protected]>
LKML-Reference: <f63454af09fb1915717251570423eb9ddd338340.1284407762.git.matthltc@us.ibm.com>
Signed-off-by: Ingo Molnar <[email protected]>
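The shape of the quick fix, sketched from the changelog: translate the task's pid into the current pid namespace instead of passing the global tsk->pid. task_pid_vnr() is the real helper for this; the surrounding call is approximate.

    /* before: global pid, wrong when seen from inside a pid namespace */
    return perf_event_create_kernel_counter(attr, -1, tsk->pid, triggered);

    /* after: pid as current's namespace sees it */
    return perf_event_create_kernel_counter(attr, -1, task_pid_vnr(tsk),
                                            triggered);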
2010-09-15 | watchdog: Avoid kernel crash when disabling watchdog | Stephane Eranian | 1 file | -0/+3

If you boot with the watchdog disabled (i.e., nowatchdog) and then try to disable it via /proc/sys/kernel/watchdog, you get a kernel crash. The reason is that you are trying to cancel an hrtimer which has never been initialized.

This patch fixes this by skipping execution of watchdog_disable_all_cpus() when the watchdog is marked disabled from boot.

Signed-off-by: Stephane Eranian <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
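A sketch of the guard described above; the flag name is assumed from the nowatchdog boot handling and may differ from the actual patch.

    static int no_watchdog;  /* assumed flag, set when 'nowatchdog' is parsed */

    static void watchdog_disable_all_cpus(void)
    {
            if (no_watchdog)
                    return;  /* never enabled: no hrtimer to cancel */

            /* ... cancel per-cpu hrtimers, stop watchdog threads ... */
    }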
2010-09-15 | Merge branch 'tip/perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into perf/core | Ingo Molnar | 20 files | -184/+409
2010-09-14 | tracing: Remove leftover FTRACE_ENABLE/DISABLE_MCOUNT enums | Steven Rostedt | 1 file | -14/+4

The enums FTRACE_ENABLE_MCOUNT and FTRACE_DISABLE_MCOUNT were used as commands to ftrace_run_update_code(). But these commands were used by the old nasty ftrace daemon that has long been slain. This is a cleanup patch to remove the references to these enums and simplify the code a little.

Reported-by: Wu Zhangjin <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
2010-09-14 | tracing: Do not trace in irq when funcgraph-irq option is zero | Steven Rostedt | 1 file | -1/+22

When the function graph tracer's funcgraph-irq option is zero, disable tracing in IRQs. This makes the option have two effects:

1) When reading the trace file, do not display the functions that happen in interrupt context (when detected)

2) [new] When recording a trace, skip those that are detected to be in interrupt by the in_irq() function

Note, in_irq() is updated at irq_enter() and irq_exit(). There are still functions recorded by the function graph tracer that are in interrupt context but outside the irq_enter()/irq_exit() routines.

Signed-off-by: Steven Rostedt <[email protected]>
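A condensed sketch of the recording-side intent; the real detection latches state at the irq boundaries, and funcgraph_irq_enabled() here is a hypothetical accessor for the option bit.

    /* in the function-graph entry callback: when the funcgraph-irq
     * option is off, refuse to record anything in interrupt context */
    if (!funcgraph_irq_enabled() && in_irq())
            return 0;  /* skip this function */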
2010-09-14 | tracing: Add funcgraph-irq option for function graph tracer. | Jiri Olsa | 1 file | -1/+100

It's handy to be able to disable the irq-related output and not have to jump over irq-related code when you have no interest in it. The option is enabled by default, so there's no change to current behaviour. It affects only the final output, so all the irq-related data stays in the ring buffer.

Signed-off-by: Jiri Olsa <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
2010-09-14 | tracing: Fix reading of set_ftrace_filter across lists | Steven Rostedt | 1 file | -3/+3

If we do:

    # cd /sys/kernel/debug
    # echo 'do_IRQ:traceon schedule:traceon sys_write:traceon' > \
            set_ftrace_filter
    # cat set_ftrace_filter

we get the following output:

    #### all functions enabled ####
    sys_write:traceon:unlimited
    schedule:traceon:unlimited
    do_IRQ:traceon:unlimited

This outputs two lists. One is the fact that all functions are currently enabled for function tracing; the other has three probed functions, which happen to have 'traceon' as their commands.

Currently, when reading the first list (functions enabled) the seq_file code will receive a NULL from the t_next() function, causing it to exit early. This makes read() from userspace stop reading at this border. Although read is allowed to do this, some (broken) applications might consider this an end of file and stop early.

This patch adds the start of the second list to t_next() when it finishes the first list. It is a simple change and gives the set_ftrace_filter file nicer reading ability.

Signed-off-by: Steven Rostedt <[email protected]>
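The shape of the fix, as a sketch: when the function list runs out, t_next() hands off to the start of the probe-hash list instead of returning NULL (t_hash_start() is taken to be the existing entry point for the hash iterator).

    static void *t_next(struct seq_file *m, void *v, loff_t *pos)
    {
            /* ... advance through the function records ... */
            if (!rec)
                    /* end of the function list: continue into the
                     * probed-functions hash instead of stopping */
                    return t_hash_start(m, pos);
            return rec;
    }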
2010-09-14 | tracing: Keep track of set_ftrace_filter position and allow lseek again | Steven Rostedt | 1 file | -8/+26

This patch keeps track of the index within the elements of set_ftrace_filter, and if the position goes backwards, it nicely resets and starts from the beginning again. This allows lseek and pread to work properly now.

Signed-off-by: Steven Rostedt <[email protected]>
2010-09-14 | tracing: Replace typecasted void pointer in set_ftrace_filter code | Steven Rostedt | 1 file | -21/+46

The set_ftrace_filter uses seq_file and reads from two lists. The pointer returned by t_next() can either be of type struct dyn_ftrace or struct ftrace_func_probe. If there is a bug (there was one), the wrong pointer may be used and the reference can cause an oops.

This patch makes t_next() and friends only return the iterator structure, which now has pointers of type struct dyn_ftrace and struct ftrace_func_probe. t_show() can now test whether each pointer is NULL, and if a pointer exists, it is guaranteed to be of the correct type. Now if there's a bug, only wrong data will be shown, but not an oops.

Cc: Chris Wright <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
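A sketch of the typed iterator this describes, so t_show() can test which pointer is set instead of reinterpreting a void * (field layout approximate):

    struct ftrace_iterator {
            struct dyn_ftrace        *func;   /* set while walking functions */
            struct ftrace_func_probe *probe;  /* set while walking probes */
            /* position bookkeeping elided */
    };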
2010-09-14 | tracing: Do not reset *pos in set_ftrace_filter | Steven Rostedt | 1 file | -2/+6

After the filtered functions are read, the probed functions are read from the hash in set_ftrace_filter. When the hashed probed functions are read, the *pos passed in is reset. Instead of modifying the pos given to the read function, just record the pos where the filtered functions ended and subtract from that.

Signed-off-by: Steven Rostedt <[email protected]>
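A sketch of the bookkeeping, assuming a func_pos field on the iterator for the boundary between the two lists:

    /* remember where the function list ended ... */
    iter->func_pos = *pos;

    /* ... and index into the hash relative to that boundary,
     * without ever touching the *pos that seq_file gave us */
    hash_index = *pos - iter->func_pos;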
2010-09-13 | sched: Improve latencies under load by decreasing minimum scheduling granularity | Ingo Molnar | 1 file | -3/+3

Mathieu reported bad latencies with make -j10 kind of kbuild workloads - which is mostly caused by us scheduling with a too coarse granularity.

Reduce the minimum granularity some more, to make sure we can meet the latency target.

I got the following results (make -j10 kbuild load, average of 3 runs):

    vanilla:
        maximum latency: 38278.9 µs
        average latency:  7730.1 µs

    patched:
        maximum latency: 22702.1 µs
        average latency:  6684.8 µs

Mathieu also measured it:

    | * wakeup-latency.c (SIGEV_THREAD) with make -j10
    |
    | - Mainline 2.6.35.2 kernel
    |
    |   maximum latency: 45762.1 µs
    |   average latency:  7348.6 µs
    |
    | - With only Peter's smaller min_gran (shown below):
    |
    |   maximum latency: 29100.6 µs
    |   average latency:  6684.1 µs

Reported-by: Mathieu Desnoyers <[email protected]>
Reported-by: Linus Torvalds <[email protected]>
Acked-by: Mathieu Desnoyers <[email protected]>
Suggested-by: Peter Zijlstra <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-13 | perf: Fix free_event() | Peter Zijlstra | 1 file | -1/+3

With the context rework stuff we can actually end up freeing an event before it gets attached to a context.

Reported-by: Cyrill Gorcunov <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
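The shape of the fix, sketched under the assumption that the context pointer is simply NULL until the event is attached:

    static void free_event(struct perf_event *event)
    {
            /* ... accounting teardown elided ... */

            if (event->ctx)          /* may be freed before attach */
                    put_ctx(event->ctx);

            kfree(event);
    }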
2010-09-13 | perf: Sanitize the RCU logic | Peter Zijlstra | 1 file | -8/+9

Simplify things and simply synchronize against two RCU variants for PMU unregister -- we don't care about performance; it's module unload, if anything.

Reported-by: Frederic Weisbecker <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Paul E. McKenney <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-11 | Merge branch 'pm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6 | Linus Torvalds | 2 files | -21/+68

* 'pm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6:
  PM / Hibernate: Avoid hitting OOM during preallocation of memory
  PM QoS: Correct pr_debug() misuse and improve parameter checks
  PM: Prevent waiting forever on asynchronous resume after failing suspend
2010-09-11 | PM / Hibernate: Avoid hitting OOM during preallocation of memory | Rafael J. Wysocki | 1 file | -20/+65

There is a problem in hibernate_preallocate_memory() that it calls preallocate_image_memory() with an argument that may be greater than the total number of available non-highmem memory pages. If that's the case, the OOM condition is guaranteed to trigger, which in turn can cause a significant slowdown during hibernation.

To avoid that, make preallocate_image_memory() adjust its argument before calling preallocate_image_pages(), so that the total number of saveable non-highmem pages left is not less than the minimum size of a hibernation image. Change hibernate_preallocate_memory() to try to allocate from highmem if the number of pages allocated by preallocate_image_memory() is too low.

Modify free_unnecessary_pages() to take all possible memory allocation patterns into account.

Reported-by: KOSAKI Motohiro <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Tested-by: M. Vefa Bicakci <[email protected]>
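A sketch of the clamping, following the changelog (identifiers approximate; alloc_normal is assumed to count non-highmem pages already claimed):

    static unsigned long preallocate_image_memory(unsigned long nr_pages,
                                                  unsigned long avail_normal)
    {
            unsigned long alloc;

            if (avail_normal <= alloc_normal)
                    return 0;           /* nothing left to spare */

            alloc = avail_normal - alloc_normal;
            if (nr_pages < alloc)
                    alloc = nr_pages;   /* never take more than requested */

            return preallocate_image_pages(alloc, GFP_IMAGE);
    }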
2010-09-11 | Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 2 files | -2/+6

* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86, tsc: Fix a preemption leak in restore_sched_clock_state()
  sched: Move sched_avg_update() to update_cpu_load()
2010-09-11 | PM QoS: Correct pr_debug() misuse and improve parameter checks | mark gross | 1 file | -1/+3

Correct some pr_debug() misuse and add a stronger parameter check to pm_qos_write() for the ASCII hex value case. Thanks to Dan Carpenter for pointing out the problem!

Signed-off-by: mark gross <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
2010-09-10 | perf: Fix perf_init_event() | Peter Zijlstra | 1 file | -2/+5

We ought to return -ENOENT when none of the registered PMUs recognise the requested event.

This fixes a boot crash that occurs if no PMU is available but the NMI watchdog tries to register an event.

Reported-by: Ingo Molnar <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
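Roughly, the lookup loop with the missing fall-through case filled in (a sketch; list and lock names follow the pmu registration code):

    struct pmu *pmu = NULL;
    int idx, ret;

    idx = srcu_read_lock(&pmus_srcu);
    list_for_each_entry_rcu(pmu, &pmus, entry) {
            ret = pmu->event_init(event);
            if (!ret)
                    goto unlock;          /* this pmu accepted the event */
            if (ret != -ENOENT) {
                    pmu = ERR_PTR(ret);   /* real error: propagate it */
                    goto unlock;
            }
    }
    pmu = ERR_PTR(-ENOENT);               /* nobody recognised the event */
unlock:
    srcu_read_unlock(&pmus_srcu, idx);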
2010-09-10 | Merge branch 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 4 files | -22/+34

* 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  tracing: t_start: reset FTRACE_ITER_HASH in case of seek/pread
  perf symbols: Fix multiple initialization of symbol system
  perf: Fix CPU hotplug
  perf, trace: Fix module leak
  tracing/kprobe: Fix handling of C-unlike argument names
  tracing/kprobes: Fix handling of argument names
  perf probe: Fix handling of arguments names
  perf probe: Fix return probe support
  tracing/kprobe: Fix a memory leak in error case
  tracing: Do not allow llseek to set_ftrace_filter
2010-09-10 | perf: Ensure we call add_event_to_ctx() with the right locks held | Peter Zijlstra | 1 file | -0/+3

Even though we call it from the inherit path, where the child is not yet accessible, we need to hold ctx->lock: add_event_to_ctx() assumes IRQs are disabled.

Signed-off-by: Peter Zijlstra <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
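A sketch of the inherit-path fix:

    /* the child context isn't visible to anyone yet, but take its
     * lock anyway (disabling IRQs) to satisfy add_event_to_ctx() */
    raw_spin_lock_irqsave(&child_ctx->lock, flags);
    add_event_to_ctx(child_event, child_ctx);
    raw_spin_unlock_irqrestore(&child_ctx->lock, flags);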
2010-09-09 | tracing: t_start: reset FTRACE_ITER_HASH in case of seek/pread | Chris Wright | 1 file | -0/+2

Be sure to avoid entering t_show() with FTRACE_ITER_HASH set without having properly started the iterator to iterate the hash. This case is degenerate and, as discovered by Robert Swiecki, can cause t_hash_show() to misuse a pointer. This causes a NULL ptr deref with possible security implications. Tracked as CVE-2010-3079.

Cc: Robert Swiecki <[email protected]>
Cc: Eugene Teo <[email protected]>
Cc: <[email protected]>
Signed-off-by: Chris Wright <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
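The one-line shape of the fix in t_start(), as a sketch: clear the stale flag before starting a fresh walk.

    /* a seek back to the start must not leave FTRACE_ITER_HASH set,
     * or t_hash_show() would treat a function record as a probe */
    iter->flags &= ~FTRACE_ITER_HASH;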
2010-09-09 | swap: revert special hibernation allocation | Hugh Dickins | 3 files | -5/+3

Please revert 2.6.36-rc commit d2997b1042ec150616c1963b5e5e919ffd0b0ebf "hibernation: freeze swap at hibernation". It complicated matters by adding a second swap allocation path, just for hibernation, without in any way fixing the issue that it was intended to address: page reclaim after fixing the hibernation image might free swap from a page already imaged as swapcache, letting its swap be reallocated to store a different page of the image, resulting in data corruption if the imaged page were freed as clean and then swapped back in. Pages freed to si->swap_map were still in danger of being reallocated by the alternative allocation path.

I guess it inadvertently fixed slow SSD swap allocation for hibernation, as reported by Nigel Cunningham: by missing out the discards that occur on the usual swap allocation path; but that was unintentional, and needs a separate fix.

Signed-off-by: Hugh Dickins <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: KOSAKI Motohiro <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Ondrej Zary <[email protected]>
Cc: Andrea Gelmini <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Nigel Cunningham <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2010-09-09 | kernel/groups.c: fix integer overflow in groups_search | Jerome Marchand | 1 file | -3/+2

gid_t is an unsigned int. If group_info contains a gid greater than INT_MAX, the groups_search() function may look on the wrong side of the search tree. This solves some unfair "permission denied" problems.

Signed-off-by: Jerome Marchand <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
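The overflow: computing a signed difference of two gids wraps once a gid exceeds INT_MAX, sending the binary search the wrong way. A sketch of the fix, comparing directly instead of subtracting:

    /* before (overflow-prone):
     *     int cmp = grp - GROUP_AT(group_info, mid);
     */
    if (grp > GROUP_AT(group_info, mid))
            left = mid + 1;
    else if (grp < GROUP_AT(group_info, mid))
            right = mid;
    else
            return 1;   /* found */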
2010-09-09 | cgroups: fix API thinko | Michael S. Tsirkin | 1 file | -6/+7

Add cgroup_attach_task_all().

The existing cgroup_attach_task_current_cg() API is called by a thread to attach another thread to all of its cgroups; this is unsuitable for cases where a privileged task wants to attach itself to the cgroups of a less privileged one, since the call must be made from the context of the target task.

This patch adds a more generic cgroup_attach_task_all() API that allows both the source task and the to-be-moved task to be specified. cgroup_attach_task_current_cg() becomes a specialization of the more generic new function.

[[email protected]: rewrote changelog]
[[email protected]: address reviewer comments]
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Alex Williamson <[email protected]>
Acked-by: Paul Menage <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: Ben Blum <[email protected]>
Cc: Sridhar Samudrala <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
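The resulting API shape, sketched from the changelog:

    /* generic: attach @tsk to all cgroups of @from */
    int cgroup_attach_task_all(struct task_struct *from,
                               struct task_struct *tsk);

    /* the old call becomes a thin wrapper over the generic one */
    static inline int cgroup_attach_task_current_cg(struct task_struct *tsk)
    {
            return cgroup_attach_task_all(current, tsk);
    }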
2010-09-09 | gcov: fix null-pointer dereference for certain module types | Peter Oberparleiter | 1 file | -64/+180

The gcov-kernel infrastructure expects that each object file is loaded only once. This may not be true, e.g. when loading multiple kernel modules which are linked to the same object file. As a result, loading such kernel modules will result in incorrect gcov results, while unloading will cause a null-pointer dereference.

This patch fixes these problems by changing the gcov-kernel infrastructure so that multiple profiling data sets can be associated with one debugfs entry. It applies to 2.6.36-rc1.

Signed-off-by: Peter Oberparleiter <[email protected]>
Reported-by: Werner Spies <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2010-09-09 | perf: Fix up delayed_put_task_struct() | Peter Zijlstra | 2 files | -3/+9

I missed a perf_event_ctxp user when converting it to an array. Pull this last user into perf_event.c as well and fix it up.

Signed-off-by: Peter Zijlstra <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Optimize context ops | Peter Zijlstra | 1 file | -0/+6

Assuming we don't mix events of different pmus onto a single context (with the exception of software events inside a hardware group), we can now assume that all events on a particular context belong to the same pmu, hence we can disable the pmu for the entire context operations.

This reduces the amount of hardware writes.

The exception for swevents comes from the fact that the sw pmu disable is a nop.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
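The pattern, sketched: bracket a whole context operation with a single pmu disable/enable pair instead of toggling per event (the scheduled-out operation shown is illustrative).

    perf_pmu_disable(ctx->pmu);   /* one hardware write for the whole op */
    ctx_sched_out(ctx, cpuctx, EVENT_ALL);
    perf_pmu_enable(ctx->pmu);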
2010-09-09 | perf: Provide a separate task context for swevents | Peter Zijlstra | 2 files | -11/+31

Since software events are always schedulable, mixing them up with hardware events (which are not) can lead to funny scheduling oddities. Giving them their own context solves this.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
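The split, sketched: each pmu nominates which task-context slot it lives in. Enum names follow the upstream convention but are reproduced from memory.

    enum perf_event_task_context {
            perf_hw_context = 0,    /* hardware events */
            perf_sw_context,        /* always-schedulable software events */
            perf_nr_task_contexts,
    };

    /* a software pmu picks the software slot */
    pmu->task_ctx_nr = perf_sw_context;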
2010-09-09 | perf: Multiple task contexts | Peter Zijlstra | 1 file | -105/+231

Provide the infrastructure for multiple task contexts.

A more flexible approach would have resulted in more pointer chases in the scheduling hot-paths. This approach has the limitation of a static number of task contexts.

Since I expect most external PMUs to be system wide, or at least node wide (as per the Intel uncore unit), they won't actually need a task context.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Clean up perf_event_context allocation | Peter Zijlstra | 1 file | -15/+26

Unify the two perf_event_context allocation sites.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Move some code around | Peter Zijlstra | 1 file | -100/+100

Move all inherit code near each other.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Per-pmu-per-cpu contexts | Peter Zijlstra | 1 file | -68/+110

Allocate per-cpu contexts per pmu.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
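A sketch of what per-pmu allocation looks like at registration time (the init helper name is assumed):

    /* each pmu gets its own set of per-cpu contexts */
    pmu->pmu_cpu_context = alloc_percpu(struct perf_cpu_context);
    if (!pmu->pmu_cpu_context)
            goto out;   /* error handling elided */

    for_each_possible_cpu(cpu) {
            struct perf_cpu_context *cpuctx =
                    per_cpu_ptr(pmu->pmu_cpu_context, cpu);

            __perf_event_init_context(&cpuctx->ctx);
            cpuctx->ctx.pmu = pmu;   /* tie the context back to its pmu */
    }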
2010-09-09 | perf: Per cpu-context rotation timer | Peter Zijlstra | 2 files | -19/+63

Give each cpu-context its own timer so that it is a self-contained entity; this eases the way for per-pmu-per-cpu contexts and provides the basic infrastructure to allow different rotation times per pmu.

Things to look at:
 - folding the tick and these TICK_NSEC timers
 - separate task context rotation

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Remove the swevent hash-table from the cpu context | Peter Zijlstra | 1 file | -46/+58

Separate the swevent hash-table from the cpu_context bits in preparation for per-pmu cpu contexts. This keeps the swevent hash a global entity.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Separate find_get_context() from event initialization | Peter Zijlstra | 1 file | -38/+35

Separate find_get_context() from the event allocation and initialization so that we may make find_get_context() depend on the event's pmu in a later patch.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
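The reordering this enables, sketched from the syscall's perspective (argument lists approximate):

    /* allocate and initialise the event first ... */
    event = perf_event_alloc(&attr, cpu, task, group_leader, NULL,
                             overflow_handler);
    if (IS_ERR(event))
            return PTR_ERR(event);

    /* ... so the context lookup can key off event->pmu later on */
    ctx = find_get_context(event->pmu, task, cpu);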
2010-09-09 | perf: Remove the sysfs bits | Peter Zijlstra | 1 file | -124/+0

Neither the overcommit nor the reservation sysfs parameters were actually working; remove them as they'll only get in the way.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Rework the PMU methods | Peter Zijlstra | 3 files | -72/+104

Replace pmu::{enable,disable,start,stop,unthrottle} with pmu::{add,del,start,stop}, all of which take a flags argument.

The new interface extends the capability to stop a counter while keeping it scheduled on the PMU. We replace the throttled state with the generic stopped state.

This also allows us to efficiently stop/start counters over certain code paths (like IRQ handlers).

It also allows scheduling a counter without it starting, allowing for a generic frozen state (useful for rotating stopped counters).

The stopped state is implemented in two different ways, depending on how the architecture implemented the throttled state:

 1) We disable the counter:
    a) the pmu has per-counter enable bits, we flip that
    b) we program a NOP event, preserving the counter state

 2) We store the counter state and ignore all read/overflow events

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
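The new method set and its flags, sketched (flag values illustrative):

    struct pmu {
            /* ... */
            int  (*add)   (struct perf_event *event, int flags);
            void (*del)   (struct perf_event *event, int flags);
            void (*start) (struct perf_event *event, int flags);
            void (*stop)  (struct perf_event *event, int flags);
    };

    #define PERF_EF_START   0x01  /* start the counter when adding    */
    #define PERF_EF_RELOAD  0x02  /* reload the counter when starting */
    #define PERF_EF_UPDATE  0x04  /* update the counter when stopping */

This is what makes the frozen state possible: add() without PERF_EF_START schedules a counter on the PMU without running it.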
2010-09-09 | perf: Shrink hw_perf_event | Peter Zijlstra | 1 file | -7/+6

Use hw_perf_event::period_left instead of hw_perf_event::remaining and win back 8 bytes.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Default PMU ops | Peter Zijlstra | 1 file | -12/+52

Provide default implementations for the pmu txn methods; this allows us to remove some conditional code.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Per PMU disable | Peter Zijlstra | 1 file | -12/+19

Changes perf_disable() into perf_pmu_disable().

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
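A sketch of the per-pmu, per-cpu refcounted disable this introduces (field name assumed to be a per-cpu counter on the pmu):

    void perf_pmu_disable(struct pmu *pmu)
    {
            int *count = this_cpu_ptr(pmu->pmu_disable_count);
            if (!(*count)++)                /* first disable does the work */
                    pmu->pmu_disable(pmu);
    }

    void perf_pmu_enable(struct pmu *pmu)
    {
            int *count = this_cpu_ptr(pmu->pmu_disable_count);
            if (!--(*count))                /* last enable re-arms the pmu */
                    pmu->pmu_enable(pmu);
    }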
2010-09-09 | perf: Reduce perf_disable() usage | Peter Zijlstra | 1 file | -36/+1

Since the current perf_disable() usage is only an optimization, remove it for now. This eases the removal of the __weak hw_perf_enable() interface.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Unindent labels | Peter Zijlstra | 1 file | -19/+24

Fixup random annoying style bits.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | perf: Register PMU implementations | Peter Zijlstra | 2 files | -303/+320

Simple registration interface for struct pmu; this provides the infrastructure for removing all the weak functions.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
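The registration surface, as a sketch; the example pmu and its callbacks are illustrative, not from any architecture.

    int  perf_pmu_register(struct pmu *pmu);     /* add to the global pmu list */
    void perf_pmu_unregister(struct pmu *pmu);   /* synchronise RCU, then remove */

    /* an architecture pmu registers itself at init time, e.g.: */
    static struct pmu my_arch_pmu = {
            .event_init = my_event_init,
            .add        = my_add,
            .del        = my_del,
            /* ... */
    };
    perf_pmu_register(&my_arch_pmu);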
2010-09-09 | perf: Deconstify struct pmu | Peter Zijlstra | 1 file | -13/+13

    sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | Merge branch 'perf/urgent' into perf/core | Ingo Molnar | 5 files | -35/+77

Merge reason: Pick up pending fixes before applying dependent new changes.

Signed-off-by: Ingo Molnar <[email protected]>
2010-09-09 | sched: Move sched_avg_update() to update_cpu_load() | Suresh Siddha | 2 files | -2/+6

Currently sched_avg_update() (which updates the rt_avg stats in the rq) is getting called from scale_rt_power() (in the load-balance context), which doesn't take rq->lock.

Fix it by moving sched_avg_update() to the more appropriate update_cpu_load(), where the CFS load gets updated as well.

Signed-off-by: Suresh Siddha <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
LKML-Reference: <1282596171.2694.3.camel@sbsiddha-MOBL3>
Signed-off-by: Ingo Molnar <[email protected]>
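A sketch of the destination: update_cpu_load() runs with rq->lock held, so the rt_avg decay becomes safe there.

    static void update_cpu_load(struct rq *this_rq)
    {
            /* ... existing CFS load-average updates ... */

            /* rt_avg decays here too, now safely under rq->lock */
            sched_avg_update(this_rq);
    }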
2010-09-09 | perf: Fix CPU hotplug | Peter Zijlstra | 1 file | -3/+3

Since we have UP_PREPARE, we should also have UP_CANCELED.

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
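The shape of the fix in the perf hotplug notifier, sketched: a cancelled bring-up must tear down what UP_PREPARE set up.

    switch (action & ~CPU_TASKS_FROZEN) {
    case CPU_UP_PREPARE:
            perf_event_init_cpu(cpu);
            break;

    case CPU_UP_CANCELED:      /* the newly added case */
    case CPU_DOWN_PREPARE:
            perf_event_exit_cpu(cpu);
            break;
    }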
2010-09-09 | perf, trace: Fix module leak | Li Zefan | 1 file | -0/+3

Commit 1c024eca ("perf, trace: Optimize tracepoints by using per-tracepoint-per-cpu hlist to track events") caused a module refcount leak.

Reported-And-Tested-by: Avi Kivity <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>