path: root/kernel
2013-02-15  posix-cpu-timers: Fix nanosleep task_struct leak  (Stanislaw Gruszka; 1 file, -2/+21)
The trinity fuzzer triggered a task_struct reference leak via clock_nanosleep with CPU timers. do_cpu_nanosleep() calls posix_cpu_timer_create(), but misses the corresponding posix_cpu_timer_del(), which leads to the task_struct reference leak. Reported-and-tested-by: Tommi Rantala <[email protected]> Signed-off-by: Stanislaw Gruszka <[email protected]> Cc: Dave Jones <[email protected]> Cc: John Stultz <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
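For illustration, a minimal sketch of the leak pattern being fixed. The two timer functions exist in kernel/posix-cpu-timers.c, but the body below is a simplified assumption, not the actual diff:

/*
 * Simplified sketch, not the real code: posix_cpu_timer_create() takes a
 * reference on the target task_struct, so every exit path of the
 * nanosleep helper must pair it with posix_cpu_timer_del().
 */
static int do_cpu_nanosleep_sketch(const clockid_t which_clock, int flags,
                                   struct timespec *rqtp)
{
        struct k_itimer timer;
        int error;

        memset(&timer, 0, sizeof(timer));
        timer.it_clock = which_clock;
        error = posix_cpu_timer_create(&timer);  /* grabs a task_struct ref */
        if (error)
                return error;

        /* ... arm the timer with rqtp and sleep until it fires or a signal ... */

        posix_cpu_timer_del(&timer);             /* the missing counterpart */
        return error;
}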
2013-02-14  perf/hwbp: Fix cleanup in case of kzalloc failure  (Daniel Baluta; 1 file, -1/+1)
Obviously this is a typo and could result in memory leaks if kzalloc fails on a given cpu. Signed-off-by: Daniel Baluta <[email protected]> Acked-by: Frederic Weisbecker <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
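The bug class is a one-character slip in a per-cpu error path. A hedged illustration, where example_ptr and the function are hypothetical stand-ins for the hw_breakpoint code:

static DEFINE_PER_CPU(void *, example_ptr);     /* hypothetical storage */

static int __init alloc_percpu_example(void)
{
        int cpu, err_cpu;

        for_each_possible_cpu(cpu) {
                void *p = kzalloc(sizeof(unsigned int), GFP_KERNEL);

                if (!p)
                        goto err_alloc;
                per_cpu(example_ptr, cpu) = p;
        }
        return 0;

err_alloc:
        for_each_possible_cpu(err_cpu) {
                /* the typo class: kfree(..., cpu) here frees the wrong slot */
                kfree(per_cpu(example_ptr, err_cpu));
                if (err_cpu == cpu)
                        break;
        }
        return -ENOMEM;
}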
2013-02-14  Merge branch 'fortglx/3.9/time' of git://git.linaro.org/people/jstultz/linux into timers/core  (Thomas Gleixner; 2 files, -2/+14)
2013-02-14  stop_machine: Use smpboot threads  (Thomas Gleixner; 2 files, -86/+52)
Use the smpboot thread infrastructure. Mark the stopper thread selfparking and park it after it has finished the take_cpu_down() work. Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rusty Russell <[email protected]> Cc: Paul McKenney <[email protected]> Cc: Srivatsa S. Bhat <[email protected]> Cc: Arjan van de Veen <[email protected]> Cc: Paul Turner <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Magnus Damm <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2013-02-14  stop_machine: Store task reference in a separate per cpu variable  (Thomas Gleixner; 1 file, -16/+16)
To allow the stopper thread being managed by the smpboot thread infrastructure separate out the task storage from the stopper data structure. Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rusty Russell <[email protected]> Cc: Paul McKenney <[email protected]> Cc: Srivatsa S. Bhat <[email protected]> Cc: Arjan van de Veen <[email protected]> Cc: Paul Turner <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Magnus Damm <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2013-02-14  smpboot: Allow selfparking per cpu threads  (Thomas Gleixner; 1 file, -2/+3)
The stop machine threads are still killed when a cpu goes offline. The reason is that the thread is used to bring the cpu down, so it can't be parked along with the other per cpu threads. Allow a per cpu thread to be excluded from automatic parking, so it can park itself once it's done. Add a create callback function as well. Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rusty Russell <[email protected]> Cc: Paul McKenney <[email protected]> Cc: Srivatsa S. Bhat <[email protected]> Cc: Arjan van de Veen <[email protected]> Cc: Paul Turner <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Magnus Damm <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
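A sketch of the per-cpu thread descriptor this change extends, after include/linux/smpboot.h of that era with fields abridged; treat exact layout as an assumption:

/*
 * A thread registered with .selfparking set is skipped by the automatic
 * park/unpark machinery and must park itself; .create runs when the
 * thread is first created for a cpu.
 */
struct smp_hotplug_thread {
        struct task_struct __percpu     **store;
        int                             (*thread_should_run)(unsigned int cpu);
        void                            (*thread_fn)(unsigned int cpu);
        void                            (*create)(unsigned int cpu);    /* new */
        void                            (*setup)(unsigned int cpu);
        void                            (*park)(unsigned int cpu);
        void                            (*unpark)(unsigned int cpu);
        bool                            selfparking;                    /* new */
        const char                      *thread_comm;
};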
2013-02-14  burying unused conditionals  (Al Viro; 1 file, -12/+0)
__ARCH_WANT_SYS_RT_SIGACTION, __ARCH_WANT_SYS_RT_SIGSUSPEND, __ARCH_WANT_COMPAT_SYS_RT_SIGSUSPEND, __ARCH_WANT_COMPAT_SYS_SCHED_RR_GET_INTERVAL - not used anymore CONFIG_GENERIC_{SIGALTSTACK,COMPAT_RT_SIG{ACTION,QUEUEINFO,PENDING,PROCMASK}} - can be assumed always set.
2013-02-14  make do_sigaltstack() static  (Al Viro; 1 file, -1/+1)
Signed-off-by: Al Viro <[email protected]>
2013-02-13  workqueue: rename cpu_workqueue to pool_workqueue  (Tejun Heo; 2 files, -218/+217)
workqueue has moved away from global_cwqs to worker_pools, and with the scheduled custom worker pools, workqueues will be associated with pools which don't have anything to do with CPUs. The workqueue code went through a significant amount of change recently, and a mass renaming isn't likely to hurt much additionally. Let's replace 'cpu' with 'pool' so that it reflects the current design:
* s/struct cpu_workqueue_struct/struct pool_workqueue/
* s/cpu_wq/pool_wq/
* s/cwq/pwq/
This patch is purely cosmetic. Signed-off-by: Tejun Heo <[email protected]>
2013-02-13  workqueue: reimplement is_chained_work() using current_wq_worker()  (Tejun Heo; 1 file, -25/+8)
is_chained_work() was added before current_wq_worker() and implemented its own ham-fisted way of finding out whether %current is a workqueue worker - it iterates through all possible workers. Drop the custom implementation and reimplement using current_wq_worker(). Signed-off-by: Tejun Heo <[email protected]>
2013-02-13  workqueue: fix is_chained_work() regression  (Tejun Heo; 1 file, -1/+1)
c9e7cf273f ("workqueue: move busy_hash from global_cwq to worker_pool") incorrectly converted is_chained_work() to use get_gcwq() inside for_each_gcwq_cpu() while removing get_gcwq(). As a cwq might not exist for all possible workqueue CPUs, @cwq can be NULL and the following cwq dereferences can lead to an oops. Fix it by using for_each_cwq_cpu() instead, which is the better one to use anyway as we only need to check pools that the wq is associated with. Signed-off-by: Tejun Heo <[email protected]>
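A hedged sketch of the hazard and the fix shape; the body check is a hypothetical placeholder, not the real is_chained_work() logic:

static bool is_chained_work_sketch(struct workqueue_struct *wq)
{
        unsigned int cpu;

        for_each_cwq_cpu(cpu, wq) {     /* was: for_each_gcwq_cpu(cpu) */
                struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);

                /*
                 * With for_each_gcwq_cpu() this lookup could return NULL
                 * on cpus where the wq has no cwq; here cwq is always valid.
                 */
                if (cwq_runs_chained_work(cwq)) /* hypothetical check */
                        return true;
        }
        return false;
}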
2013-02-12  kernel/pid.c: reenable interrupts when alloc_pid() fails because init has exited  (Eric W. Biederman; 1 file, -1/+1)
We're forgetting to reenable local interrupts on an error path. Signed-off-by: "Eric W. Biederman" <[email protected]> Reported-by: Josh Boyer <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
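The shape of the bug class, as an illustrative fragment rather than the verbatim diff; the helper name is hypothetical:

/*
 * pidmap_lock is taken with spin_lock_irq(), so the early-return path
 * must use spin_unlock_irq() as well, or the cpu keeps running with
 * interrupts disabled.
 */
spin_lock_irq(&pidmap_lock);
if (!pid_ns_accepting_pids(ns)) {       /* hypothetical: init has exited */
        spin_unlock_irq(&pidmap_lock);  /* was: spin_unlock(&pidmap_lock) */
        goto out_free;
}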
2013-02-12  Merge branch 'timers/for-arm' into timers/core  (Thomas Gleixner; 1 file, -8/+14)
2013-02-12  clockevents: Fix generic broadcast for FEAT_C3STOP  (Mark Rutland; 1 file, -8/+14)
Commit 12ad100046: "clockevents: Add generic timer broadcast function" made tick_device_uses_broadcast set up the generic broadcast function for dummy devices (where !tick_device_is_functional(dev)), but neglected to set up the broadcast function for devices that stop in low power states (with the CLOCK_EVT_FEAT_C3STOP flag). When these devices enter low power states they will not have the generic broadcast function assigned, and will bring down the system when an attempt is made to broadcast to them. This patch ensures that the broadcast function is also assigned for devices which require broadcast in low power states. Reported-by: Stephen Warren <[email protected]> Signed-off-by: Mark Rutland <[email protected]> Tested-by: Stephen Warren <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Signed-off-by: Thomas Gleixner <[email protected]>
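A hedged sketch of the shape of the fix in tick_device_uses_broadcast(); the helper name follows the commit description, and the surrounding details are assumptions:

/* Sketch only: install the generic broadcast function in both cases. */
if (!tick_device_is_functional(dev)) {
        /* dummy device: the broadcast device does all the work */
        tick_device_setup_broadcast_func(dev);  /* assumed helper name */
        /* ... mark this cpu in the broadcast mask ... */
} else if (dev->features & CLOCK_EVT_FEAT_C3STOP) {
        /* the fix: devices that stop in deep C-states need it too */
        tick_device_setup_broadcast_func(dev);
}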
2013-02-09  Merge tag 'v3.8-rc6' into next/cleanup  (Olof Johansson; 2 files, -10/+12)
Linux 3.8-rc6
2013-02-09  suspend: enable freeze timeout configuration through sys  (Li Fei; 2 files, -2/+29)
At present the freeze timeout is fixed at 20s, which is pointless when one thread is frozen while holding a mutex and another thread is blocked trying to acquire that mutex: in that case freezing will unavoidably fail. If no new wakeup event is registered, the system wastes up to 20s on such a doomed freezing attempt. With this patch the timeout can be configured to a smaller value, so a doomed freezing attempt is aborted earlier, a later freeze can be triggered earlier, and more power is saved. In the normal case on a mobile phone freezing takes very little time; on some platforms it takes only about 20ms to freeze user space processes and 10ms to freeze kernel freezable threads. Signed-off-by: Liu Chuansheng <[email protected]> Signed-off-by: Li Fei <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
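A sketch of what such a sysfs knob looks like, following the shape of the attributes in kernel/power/main.c; the attribute and variable wiring below is an assumption, not the verbatim patch:

/* freeze_timeout_msecs is the deadline used by try_to_freeze_tasks() */
extern unsigned int freeze_timeout_msecs;

static ssize_t pm_freeze_timeout_show(struct kobject *kobj,
                                      struct kobj_attribute *attr, char *buf)
{
        return sprintf(buf, "%u\n", freeze_timeout_msecs);
}

static ssize_t pm_freeze_timeout_store(struct kobject *kobj,
                                       struct kobj_attribute *attr,
                                       const char *buf, size_t n)
{
        unsigned long val;

        if (kstrtoul(buf, 10, &val))
                return -EINVAL;
        freeze_timeout_msecs = val;
        return n;
}

power_attr(pm_freeze_timeout);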
2013-02-09  PM: Introduce suspend state PM_SUSPEND_FREEZE  (Zhang Rui; 2 files, -14/+57)
PM_SUSPEND_FREEZE state is a general state that does not need any platform specific support; it equals frozen processes + suspended devices + idle processors. Compared with PM_SUSPEND_MEMORY, PM_SUSPEND_FREEZE saves less power because the system is still in a running state, but it has less resume latency because it does not touch the BIOS and the processors stay in the idle state. Compared with RTPM/idle, PM_SUSPEND_FREEZE saves more power because:
1. the processor gets longer sleep time since processes are frozen; the deeper the C-state the processor supports, the more power we can save.
2. PM_SUSPEND_FREEZE uses the system suspend code path, so we also get power savings from devices that do not have good RTPM support.
This state is useful for 1) platforms that do not have STR, or have a broken STR; 2) platforms that have an extremely low power idle state which can be used to replace STR.
The following describes how the PM_SUSPEND_FREEZE state works:
1. echo freeze > /sys/power/state
2. the processes are frozen.
3. all the devices are suspended.
4. all the processors are blocked on a wait queue.
5. all the processors idle and enter a (deep) C-state.
6. an interrupt fires.
7. a processor is woken up and handles the irq.
8. if it is a general event: a) the irq handler runs and quits; b) goto step 4.
9. if it is a real wake event (say, a power button press, keyboard touch or mouse move): a) the irq handler runs and activates the wakeup source; b) wakeup_source_activate() notifies the wait queue; c) the system starts resuming from PM_SUSPEND_FREEZE.
10. all the devices are resumed.
11. all the processes are unfrozen.
12. the system is back in the working state.
Known issue: wakeup from the new PM_SUSPEND_FREEZE state may behave differently from the previous suspend states. Take an ACPI platform for example: some GPEs are only enabled when the system is in a sleep state, to wake the system back from S3/S4, but we do not touch these GPEs during the transition to PM_SUSPEND_FREEZE. This means we may lose some wake events. On the other hand, as we do not disable all interrupts during PM_SUSPEND_FREEZE, we may get some extra "wakeup" interrupts that are not available in S3/S4.
The patches have been tested on an old Sony laptop, and here are the results:
Average power: 1. RTPM/idle for half an hour: 14.8W, 12.6W, 14.1W, 12.5W, 14.4W, 13.2W, 12.9W; 2. Freeze for half an hour: 11W, 10.4W, 9.4W, 11.3W, 10.5W; 3. RTPM/idle for three hours: 11.6W; 4. Freeze for three hours: 10W; 5. Suspend to memory: 0.5~0.9W.
Average resume latency: 1. RTPM/idle with a black screen (from pressing keyboard to screen back): less than 0.2s; 2. Freeze (from pressing power button to screen back): 2.50s; 3. Suspend to memory (from pressing power button to screen back): 4.33s.
From the results, we can see that all platforms should benefit from this patch, even those that do not have Low Power S0. Signed-off-by: Zhang Rui <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2013-02-09  kprobes: fix wait_for_kprobe_optimizer()  (Tejun Heo; 1 file, -8/+15)
wait_for_kprobe_optimizer() seems largely broken. It uses optimizer_comp which is never re-initialized, so wait_for_kprobe_optimizer() will never wait for anything once kprobe_optimizer() finishes all pending jobs for the first time. Also, aside from completion, delayed_work_pending() is %false once kprobe_optimizer() starts execution and wait_for_kprobe_optimizer() won't wait for it. Reimplement it so that it flushes optimizing_work until [un]optimizing_lists are empty. Note that this also makes optimizing_work execute immediately if someone's waiting for it, which is the nicer behavior. Only compile tested. Signed-off-by: Tejun Heo <[email protected]> Acked-by: Masami Hiramatsu <[email protected]> Cc: Ananth N Mavinakayanahalli <[email protected]> Cc: Anil S Keshavamurthy <[email protected]> Cc: "David S. Miller" <[email protected]>
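A sketch of the flush-until-empty reimplementation described above, following kernel/kprobes.c of that era (minor details may differ):

static void wait_for_kprobe_optimizer(void)
{
        mutex_lock(&kprobe_mutex);

        while (!list_empty(&optimizing_list) ||
               !list_empty(&unoptimizing_list)) {
                mutex_unlock(&kprobe_mutex);

                /* this will also make optimizing_work execute immediately */
                flush_delayed_work(&optimizing_work);
                /* optimizing_work might not have been queued yet, relax */
                cpu_relax();

                mutex_lock(&kprobe_mutex);
        }

        mutex_unlock(&kprobe_mutex);
}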
2013-02-08  time, Fix setting of hardware clock in NTP code  (Prarit Bhargava; 2 files, -2/+14)
At init time, if the system time is "warped" forward in warp_clock() it will differ from the hardware clock by sys_tz.tz_minuteswest. This time difference is not taken into account when ntp updates the hardware clock, and this causes the system time to jump forward by this offset every reboot. The kernel must take this offset into account when writing the system time to the hardware clock in the ntp code. This patch adds persistent_clock_is_local which indicates that an offset has been applied in warp_clock() and accounts for the "warp" before writing the hardware clock. x86 does not have this problem as rtc writes are software limited to a +/-15 minute window relative to the current rtc time. Other arches, such as powerpc, however do a full synchronization of the system time to the rtc and will see this problem. [v2]: generated against tip/timers/core Signed-off-by: Prarit Bhargava <[email protected]> Cc: John Stultz <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: John Stultz <[email protected]>
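A hedged sketch of the adjustment in the NTP path that writes the system time back to the hardware clock; the local variable name is an assumption, the flag and offset follow the description above:

/* Sketch: undo the warp_clock() timezone offset before hitting the RTC. */
struct timespec adjust = now;                   /* system time to write back */

if (persistent_clock_is_local)                  /* set when warp_clock() ran */
        adjust.tv_sec -= (sys_tz.tz_minuteswest * 60);
update_persistent_clock(adjust);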
2013-02-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller; 7 files, -19/+44)
Synchronize with 'net' in order to sort out some l2tp, wireless, and ipv6 GRE fixes that will be built on top of in 'net-next'. Signed-off-by: David S. Miller <[email protected]>
2013-02-08  uprobes/perf: Avoid uprobe_apply() whenever possible  (Oleg Nesterov; 1 file, -6/+36)
uprobe_perf_open/close call the costly uprobe_apply() every time; we can avoid it if:
- "nr_systemwide != 0" is not changed.
- There is another process/thread with the same ->mm.
- copy_process() does inherit_event(); dup_mmap() preserves the inserted breakpoints.
- event->attr.enable_on_exec == T; we can rely on uprobe_mmap() called by the exec/mmap paths.
- tp_target is exiting. Only _close() checks PF_EXITING; I don't think TRACE_REG_PERF_OPEN can hit the dying task too often.
Signed-off-by: Oleg Nesterov <[email protected]>
2013-02-08  uprobes/perf: Teach trace_uprobe/perf code to use UPROBE_HANDLER_REMOVE  (Oleg Nesterov; 1 file, -7/+14)
Change uprobe_trace_func() and uprobe_perf_func() to return "int". Change uprobe_dispatcher() to return "trace_ret | perf_ret", although this is not needed; currently TP_FLAG_TRACE/TP_FLAG_PROFILE are mutually exclusive. The only functional change is that uprobe_perf_func() checks the filtering too and returns UPROBE_HANDLER_REMOVE if nobody wants to trace current.
Testing:
# perf probe -x /lib/libc.so.6 syscall
# perf record -e probe_libc:syscall -i perl -e 'fork; syscall -1 for 1..10; wait'
# perf report --show-total-period
    100.00% 10 perl libc-2.8.so [.] syscall
Before this patch:
# cat /sys/kernel/debug/tracing/uprobe_profile
/lib/libc.so.6 syscall 20
A child process doesn't have a counter, but it still hits this breakpoint "copied" by dup_mmap(). After the patch:
# cat /sys/kernel/debug/tracing/uprobe_profile
/lib/libc.so.6 syscall 11
The child process hits this int3 only once and does unapply_uprobe(). Signed-off-by: Oleg Nesterov <[email protected]>
2013-02-08  uprobes/perf: Teach trace_uprobe/perf code to pre-filter  (Oleg Nesterov; 1 file, -3/+43)
Finally implement uprobe_perf_filter(), which checks ->nr_systemwide or ->perf_events to figure out whether we need to insert the breakpoint. uprobe_perf_open/close are changed to do uprobe_apply(true/false) when a new perf event comes or goes away. Note that currently this is very suboptimal:
- uprobe_register() called by TRACE_REG_PERF_REGISTER becomes a heavy nop, consumer->filter() always returns F at this stage. As was already discussed, we need uprobe_register_only() to avoid the costly register_for_each_vma() when possible.
- uprobe_apply() is often overkill. Unless "nr_systemwide != 0" changes, we need uprobe_apply_mm(); unapply_uprobe() is almost what we need.
- uprobe_apply() can simply be avoided sometimes, see the next changes.
Testing:
# perf probe -x /lib/libc.so.6 syscall
# perl -e 'syscall -1 while 1' &
[1] 530
# perf record -e probe_libc:syscall perl -e 'syscall -1 for 1..10; sleep 1'
# perf report --show-total-period
    100.00% 10 perl libc-2.8.so [.] syscall
Before this patch:
# cat /sys/kernel/debug/tracing/uprobe_profile
/lib/libc.so.6 syscall 79291
The huge ->nrhit == 79291 reflects the fact that the background process 530 constantly hits this breakpoint too, even though it doesn't contribute to the output. After the patch:
# cat /sys/kernel/debug/tracing/uprobe_profile
/lib/libc.so.6 syscall 10
This shows that only the target process was punished by int3. Signed-off-by: Oleg Nesterov <[email protected]>
2013-02-08  uprobes/perf: Teach trace_uprobe/perf code to track the active perf_event's  (Oleg Nesterov; 1 file, -0/+55)
Introduce "struct trace_uprobe_filter" which records the "active" perf_event's attached to ftrace_event_call. For a start we simply use list_head; we can optimize this later if needed. For example, we do not really need to record an event with ->parent != NULL, we can rely on parent->child_list, and we can certainly do some optimizations for the case when 2 events have the same ->tp_target or tp_target->mm. Change trace_uprobe_register() to process TRACE_REG_PERF_OPEN/CLOSE and add/del this perf_event to the list. We can probably avoid any locking, but let's start with the "obviously correct" trace_uprobe_filter->rwlock which protects everything. Signed-off-by: Oleg Nesterov <[email protected]>
2013-02-08  uprobes: Introduce uprobe_apply()  (Oleg Nesterov; 1 file, -4/+35)
Currently it is not possible to change the filtering constraints after uprobe_register(), so a consumer can not, say, start to trace a task/mm which was previously filtered out, or remove the no longer needed bp's. Introduce uprobe_apply(), which simply does register_for_each_vma() again to consult uprobe_consumer->filter() and install/remove the breakpoints. The only complication is that register_for_each_vma() can no longer assume that uprobe->consumers should be consulted if is_register == T, so we change it to accept "struct uprobe_consumer *new" instead. Unlike uprobe_register(), uprobe_apply(true) doesn't do "unregister" if register_for_each_vma() fails; it is up to the caller to handle the error. Note: we probably need to clean up the current interface; it is strange that uprobe_apply/unregister need inode/offset. We should either change uprobe_register() to return "struct uprobe *", or add a private ->uprobe member to uprobe_consumer. And in the long term uprobe_apply() should take a single argument, uprobe or consumer; even "bool add" should go away. Signed-off-by: Oleg Nesterov <[email protected]>
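A sketch of the new entry point after kernel/events/uprobes.c of that era; treat the exact body as an assumption:

/*
 * Re-run register_for_each_vma() for one registered consumer so its
 * ->filter() is consulted again and breakpoints are installed or
 * removed to match the new constraints.
 */
int uprobe_apply(struct inode *inode, loff_t offset,
                 struct uprobe_consumer *uc, bool add)
{
        struct uprobe *uprobe;
        struct uprobe_consumer *con;
        int ret = -ENOENT;

        uprobe = find_uprobe(inode, offset);
        if (!uprobe)
                return ret;

        down_write(&uprobe->register_rwsem);
        /* only re-apply for a consumer that is actually registered */
        for (con = uprobe->consumers; con && con != uc; con = con->next)
                ;
        if (con)
                ret = register_for_each_vma(uprobe, add ? uc : NULL);
        up_write(&uprobe->register_rwsem);
        put_uprobe(uprobe);
        return ret;
}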
2013-02-08  perf: Introduce hw_perf_event->tp_target and ->tp_list  (Oleg Nesterov; 1 file, -1/+4)
sys_perf_event_open()->perf_init_event(event) is called before find_get_context(event), this means that event->ctx == NULL when class->reg(TRACE_REG_PERF_REGISTER/OPEN) is called and thus it can't know if this event is per-task or system-wide. This patch adds hw_perf_event->tp_target for PERF_TYPE_TRACEPOINT, this is analogous to PERF_TYPE_BREAKPOINT/bp_target we already have. The patch also moves ->bp_target up so that it can overlap with the new member; this can help the compiler to generate better code. trace_uprobe_register() will use it for prefiltering to avoid the unnecessary breakpoints in mm's we do not want to trace. ->tp_target doesn't have its own reference, but we can rely on the fact that either sys_perf_event_open() holds a reference, or it is equal to event->ctx->task. So this pointer is always valid until free_event(). Also add the "struct list_head tp_list" into this union. It is not strictly necessary, but it can simplify the next changes and we can add it for free. Signed-off-by: Oleg Nesterov <[email protected]>
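A sketch of the layout described above, with fields abridged from include/linux/perf_event.h of that era; treat the exact placement as an assumption:

struct hw_perf_event {
        union {
                /* ... hardware/software members elided ... */
                struct { /* tracepoint */
                        /* analogous to bp_target, for PERF_TYPE_TRACEPOINT */
                        struct task_struct      *tp_target;
                        /* for tp_event->class->perf_events */
                        struct list_head        tp_list;
                };
                struct { /* breakpoint */
                        struct arch_hw_breakpoint       info;
                        struct list_head                bp_list;
                        /* moved up so it can overlap tp_target */
                        struct task_struct              *bp_target;
                };
        };
        /* ... remaining members elided ... */
};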
2013-02-08  uprobes/perf: Always increment trace_uprobe->nhit  (Oleg Nesterov; 1 file, -2/+1)
Move tu->nhit++ from uprobe_trace_func() to uprobe_dispatcher(). ->nhit counts how many times we hit the breakpoint inserted by this uprobe; we do not want to lose this info if the uprobe was enabled by sys_perf_event_open(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes/tracing: Kill uprobe_trace_consumer, embed uprobe_consumer into trace_uprobe  (Oleg Nesterov; 1 file, -29/+6)
trace_uprobe->consumer and "struct uprobe_trace_consumer" add unnecessary indirection and complicate the code for no reason. This patch simply embeds uprobe_consumer into "struct trace_uprobe"; all other changes only fix the compilation errors. Signed-off-by: Oleg Nesterov <[email protected]>
2013-02-08  uprobes/tracing: Introduce is_trace_uprobe_enabled()  (Oleg Nesterov; 2 files, -3/+7)
probe_event_enable/disable() check tu->consumer != NULL to avoid the wrong uprobe_register/unregister(). We are going to kill this pointer and "struct uprobe_trace_consumer", so we add the new helper, is_trace_uprobe_enabled(), which can rely on TP_FLAG_TRACE/TP_FLAG_PROFILE instead. Note: the current logic doesn't look optimal, it is not clear why TP_FLAG_TRACE/TP_FLAG_PROFILE are mutually exclusive, we will probably change this later. Also kill the unused TP_FLAG_UPROBE. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes/tracing: Ensure inode != NULL in create_trace_uprobe()  (Oleg Nesterov; 1 file, -3/+3)
probe_event_enable/disable() check tu->inode != NULL at the start. This is ugly: if igrab() can fail, create_trace_uprobe() should not succeed and "postpone" the failure. And the S_ISREG(inode->i_mode) check added by d24d7dbf is not safe. Note: alloc_uprobe() should probably check igrab() != NULL as well. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes/tracing: Fully initialize uprobe_trace_consumer before uprobe_register()  (Oleg Nesterov; 1 file, -6/+7)
probe_event_enable() does uprobe_register() and only after that sets utc->tu and tu->consumer/flags. This can race with uprobe_dispatcher(), which can miss these assignments or see them out of order. Nothing really bad can happen, but this doesn't look clean/safe. And it does not allow the use of the uprobe_consumer->filter() we are going to add, which is called by uprobe_register() and needs utc->tu. Change this code to initialize everything before uprobe_register(), and reset tu->consumer/flags if it fails. We can't race with event_disable(), the caller holds event_mutex, and if we could, the code would be wrong anyway. In fact I think uprobe_trace_consumer should die; it buys nothing but complicates the code. We can simply add uprobe_consumer into trace_uprobe. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes/tracing: Fix dentry/mount leak in create_trace_uprobe()  (Oleg Nesterov; 1 file, -4/+6)
create_trace_uprobe() does kern_path() to find ->d_inode, but forgets to do path_put(). We can do this right after igrab(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Add exports for module use  (Josh Stone; 2 files, -0/+9)
The original pull message for uprobes (commit 654443e2) noted: This tree includes uprobes support in 'perf probe' - but SystemTap (and other tools) can take advantage of user probe points as well. In order to actually be usable in module-based tools like SystemTap, the interface needs to be exported. This patch first adds the obvious exports for uprobe_register and uprobe_unregister. Then it also adds one for task_user_regset_view, which is necessary to get the correct state of userspace registers. Signed-off-by: Josh Stone <[email protected]> Signed-off-by: Oleg Nesterov <[email protected]>
2013-02-08  uprobes: Kill the bogus IS_ERR_VALUE(xol_vaddr) check  (Oleg Nesterov; 1 file, -2/+1)
utask->xol_vaddr is either zero or valid, remove the bogus IS_ERR_VALUE() check in xol_free_insn_slot(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Do not allocate current->utask unnecessarily  (Oleg Nesterov; 1 file, -10/+6)
handle_swbp() does get_utask() before can_skip_sstep() for no reason, we do not need ->utask if can_skip_sstep() succeeds. Move get_utask() to pre_ssout() who actually starts to use it. Move the initialization of utask->active_uprobe/state as well. This way the whole initialization is consolidated in pre_ssout(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]>
2013-02-08  uprobes: Fix utask->xol_vaddr leak in pre_ssout()  (Oleg Nesterov; 1 file, -1/+8)
pre_ssout() should do xol_free_insn_slot() if arch_uprobe_pre_xol() fails, otherwise nobody will free the allocated slot. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Do not play with utask in xol_get_insn_slot()  (Oleg Nesterov; 1 file, -16/+21)
pre_ssout()->xol_get_insn_slot() path is confusing and buggy. This patch cleans up the code; the next one fixes the bug. Change xol_get_insn_slot() to only allocate the slot and do nothing more; move the initialization of utask->xol_vaddr/vaddr into pre_ssout(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Turn add_utask() into get_utask()  (Oleg Nesterov; 1 file, -18/+9)
Rename add_utask() into get_utask() and change it to allocate on demand to simplify the caller. Like get_xol_area() it will have more users. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
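A sketch of the renamed helper, following kernel/events/uprobes.c of that era: allocate current->utask lazily and return it, so callers no longer need a separate "add" step.

static struct uprobe_task *get_utask(void)
{
        if (!current->utask)
                current->utask = kzalloc(sizeof(struct uprobe_task),
                                         GFP_KERNEL);
        return current->utask;  /* NULL on allocation failure */
}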
2013-02-08  uprobes: Fold xol_alloc_area() into get_xol_area()  (Oleg Nesterov; 1 file, -22/+16)
Currently only xol_get_insn_slot() does get_xol_area() + xol_alloc_area(), but this will have more users and we do not want to copy-and-paste this code. This patch simply moves xol_alloc_area() into get_xol_area() to simplify the current and future code. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Move alloc_page() from xol_add_vma() to xol_alloc_area()  (Oleg Nesterov; 1 file, -19/+13)
Move alloc_page() from xol_add_vma() to xol_alloc_area() to cleanup the code. This separates the memory allocations and consolidates the -EALREADY cleanups and the error handling. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Change handle_swbp() to expose bp_vaddr to handler_chain()  (Oleg Nesterov; 2 files, -10/+9)
Change handle_swbp() to set regs->ip = bp_vaddr in advance, this is what consumer->handler() needs but uprobe_get_swbp_addr() is not exported. This also simplifies the code and makes it more consistent across the supported architectures. handle_swbp() becomes the only caller of uprobe_get_swbp_addr(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Ananth N Mavinakayanahalli <[email protected]>
2013-02-08  uprobes: Teach handler_chain() to filter out the probed task  (Oleg Nesterov; 1 file, -10/+48)
Currently there are 2 problems with pre-filtering: 1. It is not possible to add/remove a task (mm) after uprobe_register(). 2. A forked child inherits all breakpoints and uprobe_consumer can not control this. This patch does the first step to improve the filtering. handler_chain() removes the breakpoints installed by this uprobe from current->mm if all handlers return UPROBE_HANDLER_REMOVE. Note that handler_chain() relies on ->register_rwsem to avoid the race with uprobe_register/unregister, which can add/del a consumer, or even remove and then insert a new uprobe at the same address. Perhaps we will add uprobe_apply_mm(uprobe, mm, is_register) and teach copy_mm() to do filter(UPROBE_FILTER_FORK), but I think this change makes sense anyway. Note: instead of checking the retcode from uc->handler, we could add uc->filter(UPROBE_FILTER_BPHIT). But I think it is not optimal to call 2 hooks in a row; this buys nothing, and if handler/filter do something nontrivial they will probably do the same work twice. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
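A hedged sketch of the handler_chain() logic described above (shape only; the real function differs in details): the breakpoint is pulled out of current->mm only if every consumer votes for removal, and ->register_rwsem keeps the consumer list stable against register/unregister.

static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
{
        struct uprobe_consumer *uc;
        int remove = UPROBE_HANDLER_REMOVE;

        down_read(&uprobe->register_rwsem);
        for (uc = uprobe->consumers; uc; uc = uc->next) {
                int rc = uc->handler(uc, regs);

                /* a single "keep" vote clears the remove bit */
                remove &= rc;
        }
        if (remove && uprobe->consumers)
                unapply_uprobe(uprobe, current->mm);
        up_read(&uprobe->register_rwsem);
}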
2013-02-08  uprobes: Reintroduce uprobe_consumer->filter()  (Oleg Nesterov; 1 file, -7/+11)
Finally add uprobe_consumer->filter() and change consumer_filter() to actually call this method. Note that ->filter() accepts mm_struct, not task_struct. Because: 1. We do not have for_each_mm_user(mm, task). 2. Even if we implement for_each_mm_user(), ->filter() can use it itself. 3. It is not clear who will actually need this interface to do the "nontrivial" filtering. Another argument is "enum uprobe_filter_ctx", consumer->filter() can use it to figure out why/where it was called. For example, perhaps we can add UPROBE_FILTER_PRE_REGISTER used by build_map_info() to quickly "nack" the unwanted mm's. In this case consumer should know that it is called under ->i_mmap_mutex. See the previous discussion at http://marc.info/?t=135214229700002 Perhaps we should pass more arguments, vma/vaddr? Note: this patch obviously can't help to filter out the child created by fork(), this will be addressed later. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
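A sketch of the callback added here, following include/linux/uprobes.h of that era; treat the exact members as assumptions:

enum uprobe_filter_ctx {
        UPROBE_FILTER_REGISTER,
        UPROBE_FILTER_UNREGISTER,
        UPROBE_FILTER_MMAP,
};

struct uprobe_consumer {
        int (*handler)(struct uprobe_consumer *self, struct pt_regs *regs);
        /* per-mm filtering; ctx tells the consumer which path is asking */
        bool (*filter)(struct uprobe_consumer *self,
                       enum uprobe_filter_ctx ctx, struct mm_struct *mm);
        struct uprobe_consumer *next;
};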
2013-02-08  uprobes: Rationalize the usage of filter_chain()  (Oleg Nesterov; 1 file, -23/+21)
filter_chain() was added into install_breakpoint/remove_breakpoint to simplify the initial changes, but this is sub-optimal. This patch shifts the callsite to the callers, register_for_each_vma() and uprobe_mmap(). This way:
- It will be easier to add new arguments. This is the main reason; we can do more optimizations later.
- register_for_each_vma(is_register => true) can be optimized: we only need to consult the new consumer. The previous consumers were already asked when they called uprobe_register().
This patch also moves the MMF_HAS_UPROBES check from remove_breakpoint(); this allows us to avoid the potentially costly filter_chain(). Note that register_for_each_vma(is_register => false) doesn't really need to take ->consumer_rwsem, but I don't think it makes sense to optimize this and introduce filter_chain_lockless(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Kill uprobes_mutex[], separate alloc_uprobe() and __uprobe_register()  (Oleg Nesterov; 1 file, -36/+15)
uprobe_register() and uprobe_unregister() are the only users of mutex_lock(uprobes_hash(inode)), and the only reason why we can't simply remove it is that we need to ensure that delete_uprobe() is not possible after alloc_uprobe() and before consumer_add(). IOW, we need to ensure that when we take uprobe->register_rwsem this uprobe is still valid and we didn't race with _unregister() which called delete_uprobe() in between. With this patch uprobe_register() simply checks uprobe_is_active() and retries if it hits this very unlikely race. uprobes_mutex[] is no longer needed and can be removed. There is another reason for this change, prepare_uprobe() should be folded into alloc_uprobe() and we do not want to hold the extra locks around read_mapping_page/etc. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Introduce uprobe_is_active()  (Oleg Nesterov; 1 file, -0/+8)
The lifetime of uprobe->rb_node and uprobe->inode is not refcounted, delete_uprobe() is called when we detect that uprobe has no consumers, and it would be deadly wrong to do this twice. Change delete_uprobe() to WARN() if it was already called. We use RB_CLEAR_NODE() to mark uprobe "inactive", then RB_EMPTY_NODE() can be used to detect this case. RB_EMPTY_NODE() is not used directly, we add the trivial helper for the next change. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
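A sketch of the helper and the invariant described above, following kernel/events/uprobes.c of that era (details may differ): RB_CLEAR_NODE() in delete_uprobe() marks the uprobe inactive, so a second delete can be detected cheaply.

static bool uprobe_is_active(struct uprobe *uprobe)
{
        return !RB_EMPTY_NODE(&uprobe->rb_node);
}

static void delete_uprobe(struct uprobe *uprobe)
{
        if (WARN_ON(!uprobe_is_active(uprobe)))
                return;         /* deadly wrong to delete twice */

        spin_lock(&uprobes_treelock);
        rb_erase(&uprobe->rb_node, &uprobes_tree);
        spin_unlock(&uprobes_treelock);
        RB_CLEAR_NODE(&uprobe->rb_node);        /* for uprobe_is_active() */
        iput(uprobe->inode);
        put_uprobe(uprobe);
}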
2013-02-08  uprobes: Kill uprobe_events, use RB_EMPTY_ROOT() instead  (Oleg Nesterov; 1 file, -12/+7)
uprobe_events counts the number of uprobes in uprobes_tree but it is used as a boolean. We can use RB_EMPTY_ROOT() instead. Probably no_uprobe_events() added by this patch can have more callers, say, mmf_recalc_uprobes(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Anton Arapov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Kill uprobe->copy_mutex  (Oleg Nesterov; 1 file, -4/+3)
Now that ->register_rwsem is safe under ->mmap_sem we can kill ->copy_mutex and abuse down_write(&uprobe->consumer_rwsem). This makes prepare_uprobe() even more ugly, but we should kill it anyway. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Kill UPROBE_RUN_HANDLER flag  (Oleg Nesterov; 1 file, -18/+5)
Simply remove UPROBE_RUN_HANDLER and the corresponding code. It can only help if uprobe has a single consumer, and in fact it is no longer needed after handler_chain() was changed to use ->register_rwsem, we simply can not race with uprobe_register(). Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>
2013-02-08  uprobes: Change filter_chain() to iterate ->consumers list  (Oleg Nesterov; 1 file, -8/+13)
Now that it is safe to use ->consumer_rwsem under ->mmap_sem we can almost finish the implementation of filter_chain(). It still lacks the actual uc->filter(...) call, but otherwise it is ready; it just pretends that ->filter() always returns true. Signed-off-by: Oleg Nesterov <[email protected]> Acked-by: Srikar Dronamraju <[email protected]>