path: root/mm
Age  Commit message  (Author; files changed, lines -deleted/+added)
2010-08-09  memblock: Fix memblock_is_region_reserved() to return a boolean  (Benjamin Herrenschmidt; 1 file, -1/+1)
All callers expect a boolean result which is true if the region overlaps a reserved region. However, the implementation actually returns -1 if there is no overlap, and a region index (0 based) if there is. Make it behave as callers (and common sense) expect. Signed-off-by: Benjamin Herrenschmidt <[email protected]>
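A minimal sketch of the fixed helper, reconstructed from the description above; the 'memblock_overlaps_region()' name and the u64 signature are assumptions about the memblock internals of this era:

    /* memblock_overlaps_region() returns -1 when there is no overlap and a
     * 0-based region index when there is, so comparing against zero yields
     * the boolean the callers expect. */
    int memblock_is_region_reserved(u64 base, u64 size)
    {
        return memblock_overlaps_region(&memblock.reserved, base, size) >= 0;
    }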
2010-08-08  kmemleak: Fix typo in the comment  (Holger Hans Peter Freyther; 1 file, -1/+1)
Fix typo in comment. Signed-off-by: Holger Hans Peter Freyther <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2010-08-07  writeback: fix bad _bh spinlock nesting  (Jens Axboe; 1 file, -2/+3)
Fix a bug where a lock is _bh nested within another _bh lock, but forgets to use the _bh variant for unlock. Furthermore, it's not necessary to nest _bh locks; the inner lock can just use spin_lock(). So fix up the bug by making that change. Signed-off-by: Jens Axboe <[email protected]>
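For illustration, the nesting rule the fix relies on; the lock names here are hypothetical:

    spin_lock_bh(&bdi_lock);       /* disables bottom halves */
    spin_lock(&bdi->wb_lock);      /* BHs already off; plain variant suffices */
    /* ... critical section ... */
    spin_unlock(&bdi->wb_lock);
    spin_unlock_bh(&bdi_lock);     /* re-enables bottom halves */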
2010-08-07  writeback: cleanup bdi_register  (Artem Bityutskiy; 1 file, -19/+11)
This patch makes sure we first initialize everything and set the BDI_registered flag, and only after this add the bdi to 'bdi_list'. Current code adds the bdi to the list too early, and as a result the WARN(!test_bit(BDI_registered, &bdi->state)) in the bdi forker is triggered. Also, it is in general good practice to make things visible only when they are fully initialized. This patch also does a few micro clean-ups:
1. Removes the 'exit' label, which does nothing but return. This allows us to get rid of a few braces and the 'ret' variable and makes the code smaller.
2. If 'kthread_run()' fails, return the error code it reports, not a hard-coded '-ENOMEM'. Theoretically, some day 'kthread_run()' can return something else. Also, in case of failure it is not necessary to set 'bdi->wb.task' to NULL.
Signed-off-by: Artem Bityutskiy <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: add new tracepoints  (Artem Bityutskiy; 1 file, -0/+2)
Add 2 new trace points to the periodic write-back wake up case, just like we do in the 'bdi_queue_work()' function. Namely, introduce:
1. trace_writeback_wake_thread(bdi)
2. trace_writeback_wake_forker_thread(bdi)
The first event is triggered every time we wake up a bdi thread to start periodic background write-out. The second event is triggered only when the bdi thread does not exist and should be created by the forker thread. This patch was suggested by Dave Chinner and Christoph Hellwig. Signed-off-by: Artem Bityutskiy <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
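A sketch of where the two events plausibly fire; the surrounding wake-up code is an assumption reconstructed from the description above:

    if (bdi->wb.task) {
        /* bdi thread exists: wake it for periodic background write-out */
        trace_writeback_wake_thread(bdi);
        wake_up_process(bdi->wb.task);
    } else {
        /* no bdi thread yet: the forker thread has to create it */
        trace_writeback_wake_forker_thread(bdi);
        wake_up_process(default_backing_dev_info.wb.task);
    }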
2010-08-07  writeback: remove unnecessary init_timer call  (Artem Bityutskiy; 1 file, -1/+0)
The 'setup_timer()' function also calls 'init_timer()', so the extra 'init_timer()' call is not needed. Indeed, 'setup_timer()' is basically 'init_timer()' plus callback function and data pointers initialization. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
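An illustration of the redundancy, using the timer API of this era (callback plus 'unsigned long' data); the timer field and callback names are assumptions:

    init_timer(&bdi->wb.wakeup_timer);           /* redundant */
    setup_timer(&bdi->wb.wakeup_timer, wakeup_timer_fn,
                (unsigned long)bdi);             /* already calls init_timer() */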
2010-08-07  writeback: optimize periodic bdi thread wakeups  (Artem Bityutskiy; 1 file, -16/+57)
When the first inode for a bdi is marked dirty, we wake up the bdi thread which should take care of the periodic background write-out. However, the write-out will actually start only 'dirty_writeback_interval' centisecs later, so we can delay the wake-up. This change was requested by Nick Piggin, who pointed out that if we delay the wake-up, we weed out 2 unnecessary context switches, which matters because '__mark_inode_dirty()' is a hot-path function. This patch introduces a new function - 'bdi_wakeup_thread_delayed()' - which sets up a timer to wake up the bdi thread and returns. So the wake-up is delayed. We also delete the timer in bdi threads just before writing-back. And synchronously delete it when unregistering the bdi. At the unregister point the bdi does not have any users, so no one can arm it again. Since now we take 'bdi->wb_lock' in the timer, which can execute in softirq context, we have to use 'spin_lock_bh()' for 'bdi->wb_lock'. This patch makes this change as well. This patch also moves the 'bdi_wb_init()' function down in the file to avoid a forward declaration of 'bdi_wakeup_thread_delayed()'. Signed-off-by: Artem Bityutskiy <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
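A minimal sketch of the helper; the function name comes from the message above, while the body is a reconstruction ('dirty_writeback_interval' is in centisecs, hence the *10 conversion to msecs):

    static void bdi_wakeup_thread_delayed(struct backing_dev_info *bdi)
    {
        unsigned long timeout;

        /* Arm (or re-arm) a timer instead of waking the thread right now;
         * the write-out would not start before this interval anyway. */
        timeout = msecs_to_jiffies(dirty_writeback_interval * 10);
        mod_timer(&bdi->wb.wakeup_timer, jiffies + timeout);
    }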
2010-08-07  writeback: prevent unnecessary bdi threads wakeups  (Artem Bityutskiy; 1 file, -3/+10)
Finally, we can get rid of unnecessary wake-ups in bdi threads, which are very bad for battery-driven devices. There are two types of activities bdi threads do:
1. process bdi works from the 'bdi->work_list'
2. periodic write-back
So there are 2 sources of wake-up events for bdi threads:
1. 'bdi_queue_work()' - submits bdi works
2. '__mark_inode_dirty()' - adds dirty I/O to bdi's
The former already has bdi wake-up code. The latter does not, and this patch adds it. '__mark_inode_dirty()' is a hot-path function, but this patch adds another 'spin_lock(&bdi->wb_lock)' there. However, it is taken only in rare cases when the bdi has no dirty inodes. So adding this spinlock should be fine and should not affect performance. This patch makes sure bdi threads and the forker thread do not wake up if there is nothing to do. The forker thread will nevertheless wake up at least every 5 min. to check whether it has to kill a bdi thread. This can also be optimized, but is not worth it. This patch also tidies up the warning about an unregistered bdi, and turns it from an ugly crocodile to a simple 'WARN()' statement. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: move bdi threads exiting logic to the forker thread  (Artem Bityutskiy; 1 file, -11/+58)
Currently, bdi threads can decide to exit if there were no useful activities for 5 minutes. However, this causes nasty races: we can easily oops in 'bdi_queue_work()' if the bdi thread decides to exit while we are waking it up. And even if we do not oops, but the bdi thread exits immediately after we wake it up, we'd lose the wake-up event and have an unnecessary delay (up to 5 secs) in the bdi work processing. This patch makes the forker thread the central place which not only creates bdi threads, but also kills them if they were inactive long enough. This is better design-wise. Another reason why this change was done is to prepare for the further changes which will prevent the bdi threads from waking up every 5 sec and wasting power. Indeed, when the task does not wake up periodically anymore, it won't be able to exit either. This patch also moves the 'wake_up_bit()' call from the bdi thread to the forker thread as well. So now the forker thread sets the BDI_pending bit, then forks the task or kills it, then clears the bit and wakes up the waiting process. The only process which may wait on the bit is 'bdi_wb_shutdown()'. This function was changed as well - now it first removes the bdi from the 'bdi_list', then waits on the 'BDI_pending' bit. Once it wakes up, it is guaranteed that the forker thread won't race with it, because the bdi is not visible. Note, the forker thread sets the 'BDI_pending' bit under the 'bdi->wb_lock', which is essential for proper serialization. And additionally, when we change 'bdi->wb.task', we now take the 'bdi->work_lock', to make sure that we do not lose wake-ups which we otherwise would when racing with, say, 'bdi_queue_work()'. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: restructure bdi forker loop a little  (Artem Bityutskiy; 1 file, -30/+39)
This patch re-structures the bdi forker a little:
1. Add a 'bdi_cap_flush_forker(bdi)' condition check to the bdi loop. The reason for this is that the forker thread can start _before_ the 'BDI_registered' flag is set (see 'bdi_register()'), so the WARN() statement will fire for the default bdi. I observed this warning at boot-up.
2. Introduce an enum 'action' and use a "switch" statement in the outer loop. This is a preparation for the further patch which will teach the forker thread to kill bdi threads, so we'll have another case in the "switch" statement. This change was suggested by Christoph Hellwig.
This patch is just a small step towards the coming change where the forker thread will kill the bdi threads. It should simplify reviewing the following changes, which would otherwise be larger. This patch also amends comments a little. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: do not remove bdi from bdi_list  (Artem Bityutskiy; 1 file, -21/+10)
The forker thread removes bdis from 'bdi_list' before forking the bdi thread. But this is wrong for at least 2 reasons.
Reason #1: if we temporarily remove a bdi from the list, we may miss works which would otherwise be given to us.
Reason #2: this is racy; indeed, 'bdi_wb_shutdown()' expects that bdis are always in the 'bdi_list' (see 'bdi_remove_from_list()'), and when it races with the forker thread, it can shut down the bdi thread at the same time as the forker creates it.
This patch makes sure the forker thread never removes bdis from 'bdi_list' (which was suggested by Christoph Hellwig). In order to make sure that we do not race with 'bdi_wb_shutdown()', we have to hold the 'bdi_lock' while walking the 'bdi_list' and setting the 'BDI_pending' flag. NOTE! The error path is interesting. Currently, when we fail to create a bdi thread, we move the bdi to the tail of 'bdi_list'. But if we never remove the bdi from the list, we cannot move it to the tail either, because then we can mess up the RCU readers which walk the list. And also, we'd have the race described above in "Reason #2". But I do not think that adding to the tail is at all important, so I just do not do that. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: simplify bdi code a little  (Artem Bityutskiy; 1 file, -64/+18)
This patch simplifies the bdi code a little by removing the 'pending_list', which is redundant. Indeed, currently the forker thread ('bdi_forker_thread()') works like this:
1. In a loop, fetch all bdi's which have works but have no writeback thread and move them to the 'pending_list'.
2. If the list is empty, sleep for 5 sec.
3. Otherwise, take one bdi from the list, fork the writeback thread for this bdi, and repeat the loop.
IOW, it first moves everything to the 'pending_list', then processes only one element, and so on. This patch simplifies the algorithm, which is now as follows.
1. Find the first bdi which has a work and remove it from the global list of bdi's (bdi_list).
2. If there was no such bdi, sleep 5 sec.
3. Fork the writeback thread for this bdi and repeat the loop.
IOW, now we find the first bdi to process, process it, and so on. This is simpler and involves fewer lists. The bonus now is that we can get rid of a couple of functions, as well as remove complications which involve 'call_rcu()' and 'bdi->rcu_head'. This patch also makes sure we use 'list_add_tail_rcu()' instead of plain 'list_add_tail()', but this piece of code is going to be removed in the next patch anyway. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: do not lose wake-ups in the forker thread - 2  (Artem Bityutskiy; 1 file, -0/+4)
Currently, if someone submits jobs for the default bdi, we can lose wake-up events. E.g., this can happen if 'bdi_queue_work()' is called when 'bdi_forker_thread()' is executing code after 'wb_do_writeback(me, 0)', but before 'set_current_state(TASK_INTERRUPTIBLE)'. This situation is unlikely, and the result is not very severe - we'll just delay the execution of the work - but this is still not very nice. This patch fixes the issue by checking whether the default bdi has works before the forker thread goes to sleep. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
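A hedged sketch of the added check, inside the forker loop; the list and variable names are assumptions reconstructed from the description:

    set_current_state(TASK_INTERRUPTIBLE);
    /* A work may have been queued between wb_do_writeback() and the state
     * change above; re-check before actually sleeping. */
    if (!list_empty(&me->bdi->work_list)) {
        __set_current_state(TASK_RUNNING);
        continue;    /* go process the freshly queued work */
    }
    /* ...otherwise sleep as before */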
2010-08-07  writeback: do not lose wake-ups in the forker thread - 1  (Artem Bityutskiy; 1 file, -2/+1)
Currently the forker thread can lose wake-ups, which may lead to unnecessary delays in processing bdi works. E.g., consider the following scenario.
1. 'bdi_forker_thread()' walks the 'bdi_list', finds out there is nothing to do, and is about to finish the loop.
2. A bdi thread decides to exit because it was inactive for a long time.
3. 'bdi_queue_work()' adds a work to the bdi which just exited, so it wakes up the forker thread.
4. But 'bdi_forker_thread()' executes 'set_current_state(TASK_INTERRUPTIBLE)' and goes to sleep. We lose a wake-up.
Losing the wake-up is not fatal, but this means that the bdi work processing will be delayed by up to 5 sec. This race is theoretical - I never hit it - but it is worth fixing. The fix is to execute 'set_current_state(TASK_INTERRUPTIBLE)' _before_ walking 'bdi_list', not after. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: fix possible race when creating bdi threads  (Artem Bityutskiy; 1 file, -17/+11)
This patch fixes a very unlikely race condition on the bdi forker thread error path: when bdi thread creation fails, 'bdi->wb.task' may contain the error code for a short period of time. If at the same time someone submits a work to this bdi, we can end up with an oops in 'bdi_queue_work()' while executing 'wake_up_process(wb->task)'. This patch fixes the issue by introducing a temporary variable 'task' and storing the possible error code there, so that 'wb->task' never takes erroneous values. Note, this race is very unlikely and I never hit it, so it is theoretical, but nevertheless worth fixing. This patch also merges 2 comments which were previously separate. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
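A minimal sketch of the fix; the 'flush-%s' thread naming is an assumption in line with the flusher thread names seen in the trace output elsewhere in this log:

    struct task_struct *task;

    /* Park kthread_run()'s result in a local first, so an ERR_PTR() value
     * is never visible to readers of wb->task. */
    task = kthread_run(bdi_writeback_thread, &bdi->wb,
                       "flush-%s", dev_name(bdi->dev));
    if (!IS_ERR(task))
        bdi->wb.task = task;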
2010-08-07  writeback: harmonize writeback threads naming  (Artem Bityutskiy; 1 file, -13/+13)
The write-back code mixes the words "thread" and "task" for the same things. This is not a big deal, but still an inconsistency. hch: a convention I tend to use and I've seen in various places is to always use _task for the storage of the task_struct pointer, and thread everywhere else. This especially helps with having foo_thread for the actual thread and foo_task for a global variable keeping the task_struct pointer. This patch renames:
* 'bdi_add_default_flusher_task()' -> 'bdi_add_default_flusher_thread()'
* 'bdi_forker_task()' -> 'bdi_forker_thread()'
because bdi threads are 'bdi_writeback_thread()', so these names are more consistent. This patch also amends the comments and makes them refer to the forker and bdi threads as "thread", not "task". Also, while at it, make the 'bdi_add_default_flusher_thread()' declaration use 'static void' instead of 'void static' and make checkpatch.pl happy. Signed-off-by: Artem Bityutskiy <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: Add tracing to write_cache_pages  (Dave Chinner; 1 file, -0/+1)
Add a trace event to the ->writepage loop in write_cache_pages to give visibility into how the ->writepage call is changing variables within the writeback control structure. Of most interest is how wbc->nr_to_write changes from call to call, especially with filesystems that write multiple pages in ->writepage. Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: Add tracing to balance_dirty_pages  (Dave Chinner; 1 file, -0/+4)
Tracing high level background writeback events is good, but it doesn't give the entire picture. Add visibility into write throttling to catch IO dispatched by foreground throttling of processes dirtying lots of pages. Signed-off-by: Dave Chinner <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: Initial tracing support  (Dave Chinner; 1 file, -0/+3)
Trace the queue/sched/exec parts of the writeback loop. This provides insight into when and why flusher threads are scheduled to run. E.g. a sync invocation leaves traces like:
sync-[...]: writeback_queue: bdi 8:0: sb_dev 8:1 nr_pages=7712 sync_mode=0 kupdate=0 range_cyclic=0 background=0
flush-8:0-[...]: writeback_exec: bdi 8:0: sb_dev 8:1 nr_pages=7712 sync_mode=0 kupdate=0 range_cyclic=0 background=0
This also lays the foundation for adding more writeback tracing to provide deeper insight into the whole writeback path. The original tracing code is from Jens Axboe, though this version is a rewrite as a result of the code being traced changing significantly. Signed-off-by: Dave Chinner <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: merge bdi_writeback_task and bdi_start_fn  (Christoph Hellwig; 1 file, -43/+1)
Move all code for the writeback thread into fs/fs-writeback.c instead of splitting it over two functions in two files. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  writeback: remove wb_list  (Christoph Hellwig; 1 file, -54/+29)
The wb_list member of struct backing_device_info always has exactly one element. Just use the direct bdi->wb pointer instead and simplify some code. Also remove bdi_task_init which is now trivial to prepare for the next patch. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  block: unify flags for struct bio and struct request  (Christoph Hellwig; 1 file, -1/+1)
Remove the current bio flags and reuse the request flags for the bio, too. This allows us to more easily trace the type of I/O from the filesystem down to the block driver. There were two flags in the bio that were missing in the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD. Also I've renamed two request flags that had a superfluous RW in them. Note that the flags are in bio.h despite having the REQ_ name - as blkdev.h includes bio.h, that is the only way to go for now. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2010-08-07  percpu: add __percpu notations to UP allocator  (Namhyung Kim; 1 file, -2/+2)
Add __percpu notations to UP percpu allocator. Signed-off-by: Namhyung Kim <[email protected]> Signed-off-by: Tejun Heo <[email protected]>
2010-08-06  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6  (Linus Torvalds; 3 files, -45/+52)
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
  slub: Allow removal of slab caches during boot
  Revert "slub: Allow removal of slab caches during boot"
  slub numa: Fix rare allocation from unexpected node
  slab: use deferable timers for its periodic housekeeping
  slub: Use kmem_cache flags to detect if slab is in debugging mode.
  slub: Allow removal of slab caches during boot
  slub: Check kasprintf results in kmem_cache_init()
  SLUB: Constants need UL
  slub: Use a constant for a unspecified node.
  SLOB: Free objects to their own list
  slab: fix caller tracking on !CONFIG_DEBUG_SLAB && CONFIG_TRACING
2010-08-06  Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds; 1 file, -1/+1)
* 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: Ioremap: fix wrong physical address handling in PAT code
  x86, tlb: Clean up and correct used type
  x86, iomap: Fix wrong page aligned size calculation in ioremapping code
  x86, mm: Create symbolic index into address_markers array
  x86, ioremap: Fix normal ram range check
  x86, ioremap: Fix incorrect physical address handling in PAE mode
  x86-64, mm: Initialize VDSO earlier on 64 bits
  x86, kmmio/mmiotrace: Fix double free of kmmio_fault_pages
2010-08-06  Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds; 4 files, -4/+8)
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (162 commits)
  tracing/kprobes: unregister_trace_probe needs to be called under mutex
  perf: expose event__process function
  perf events: Fix mmap offset determination
  perf, powerpc: fsl_emb: Restore setting perf_sample_data.period
  perf, powerpc: Convert the FSL driver to use local64_t
  perf tools: Don't keep unreferenced maps when unmaps are detected
  perf session: Invalidate last_match when removing threads from rb_tree
  perf session: Free the ref_reloc_sym memory at the right place
  x86,mmiotrace: Add support for tracing STOS instruction
  perf, sched migration: Librarize task states and event headers helpers
  perf, sched migration: Librarize the GUI class
  perf, sched migration: Make the GUI class client agnostic
  perf, sched migration: Make it vertically scrollable
  perf, sched migration: Parameterize cpu height and spacing
  perf, sched migration: Fix key bindings
  perf, sched migration: Ignore unhandled task states
  perf, sched migration: Handle ignored migrate out events
  perf: New migration tool overview
  tracing: Drop cpparg() macro
  perf: Use tracepoint_synchronize_unregister() to flush any pending tracepoint call
  ...
Fix up trivial conflicts in Makefile and drivers/cpufreq/cpufreq.c
2010-08-06  Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds; 2 files, -2/+0)
* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  Revert "net: Make accesses to ->br_port safe for sparse RCU"
  mce: convert to rcu_dereference_index_check()
  net: Make accesses to ->br_port safe for sparse RCU
  vfs: add fs.h to define struct file
  lockdep: Add an in_workqueue_context() lockdep-based test function
  rcu: add __rcu API for later sparse checking
  rcu: add an rcu_dereference_index_check()
  tree/tiny rcu: Add debug RCU head objects
  mm: remove all rcu head initializations
  fs: remove all rcu head initializations, except on_stack initializations
  powerpc: remove all rcu head initializations
2010-08-05  Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/linux-2.6-kgdb  (Linus Torvalds; 1 file, -0/+7)
* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/linux-2.6-kgdb:
  debug_core,kdb: fix crash when arch does not have single step
  kgdb,x86: use macro HBP_NUM to replace magic number 4
  kgdb,mips: remove unused kgdb_cpu_doing_single_step operations
  mm,kdb,kgdb: Add a debug reference for the kdb kmap usage
  KGDB: Remove set but unused newPC
  ftrace,kdb: Allow dumping a specific cpu's buffer with ftdump
  ftrace,kdb: Extend kdb to be able to dump the ftrace buffer
  kgdb,powerpc: Replace hardcoded offset by BREAK_INSTR_SIZE
  arm,kgdb: Add ability to trap into debugger on notify_die
  gdbstub: do not directly use dbg_reg_def[] in gdb_cmd_reg_set()
  gdbstub: Implement gdbserial 'p' and 'P' packets
  kgdb,arm: Individual register get/set for arm
  kgdb,mips: Individual register get/set for mips
  kgdb,x86: Individual register get/set for x86
  kgdb,kdb: individual register set and and get API
  gdbstub: Optimize kgdb's "thread:" response for the gdb serial protocol
  kgdb: remove custom hex_to_bin() implementation
2010-08-05  mm,kdb,kgdb: Add a debug reference for the kdb kmap usage  (Jason Wessel; 1 file, -0/+7)
The kdb kmap should never get used outside of the kernel debugger exception context. Signed-off-by: Jason Wessel <[email protected]> CC: Andrew Morton <[email protected]> CC: Ingo Molnar <[email protected]> CC: [email protected]
2010-08-04  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu  (Linus Torvalds; 1 file, -36/+49)
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
  percpu: allow limited allocation before slab is online
  percpu: make @dyn_size always mean min dyn_size in first chunk init functions
2010-08-04  Merge branches 'slab/fixes', 'slob/fixes', 'slub/cleanups' and 'slub/fixes' into for-linus  (Pekka Enberg; 3 files, -45/+52)
2010-08-04  Merge branch 'kvm-updates/2.6.36' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds; 1 file, -0/+33)
* 'kvm-updates/2.6.36' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (198 commits)
  KVM: VMX: Fix host GDT.LIMIT corruption
  KVM: MMU: using __xchg_spte more smarter
  KVM: MMU: cleanup spte set and accssed/dirty tracking
  KVM: MMU: don't atomicly set spte if it's not present
  KVM: MMU: fix page dirty tracking lost while sync page
  KVM: MMU: fix broken page accessed tracking with ept enabled
  KVM: MMU: add missing reserved bits check in speculative path
  KVM: MMU: fix mmu notifier invalidate handler for huge spte
  KVM: x86 emulator: fix xchg instruction emulation
  KVM: x86: Call mask notifiers from pic
  KVM: x86: never re-execute instruction with enabled tdp
  KVM: Document KVM_GET_SUPPORTED_CPUID2 ioctl
  KVM: x86: emulator: inc/dec can have lock prefix
  KVM: MMU: Eliminate redundant temporaries in FNAME(fetch)
  KVM: MMU: Validate all gptes during fetch, not just those used for new pages
  KVM: MMU: Simplify spte fetch() function
  KVM: MMU: Add gpte_valid() helper
  KVM: MMU: Add validate_direct_spte() helper
  KVM: MMU: Add drop_large_spte() helper
  KVM: MMU: Use __set_spte to link shadow pages
  ...
2010-08-03  slub: Allow removal of slab caches during boot  (Christoph Lameter; 1 file, -9/+15)
Serialize kmem_cache_create and kmem_cache_destroy using the slub_lock. This is only possible after the use of the slub_lock during dynamic dma creation has been removed. Then make sure that the setup of the slab sysfs entries does not race with kmem_cache_create and kmem_cache_destroy. If a slab cache is removed before we have set up sysfs, then simply skip over the sysfs handling. Cc: Benjamin Herrenschmidt <[email protected]> Cc: Roland Dreier <[email protected]> Signed-off-by: Christoph Lameter <[email protected]> Signed-off-by: Pekka Enberg <[email protected]>
2010-08-03  Revert "slub: Allow removal of slab caches during boot"  (Pekka Enberg; 1 file, -7/+0)
This reverts commit f5b801ac38a9612b380ee9a75ab1861f0594e79f.
2010-08-02  Merge commit 'v2.6.35' into perf/core  (Ingo Molnar; 1 file, -3/+13)
Conflicts:
  tools/perf/Makefile
  tools/perf/util/hist.c
Merge reason: Resolve the conflicts and update to latest upstream. Signed-off-by: Ingo Molnar <[email protected]>
2010-08-01  KVM: Fix a race condition for usage of is_hwpoison_address()  (Huang Ying; 1 file, -0/+3)
is_hwpoison_address accesses the page table, so the caller must hold current->mm->mmap_sem in read mode. So fix its usage in hva_to_pfn of kvm accordingly. Comment is_hwpoison_address to remind other users. Reported-by: Avi Kivity <[email protected]> Signed-off-by: Huang Ying <[email protected]> Signed-off-by: Avi Kivity <[email protected]>
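The documented calling convention, as a simplified sketch (not the exact hva_to_pfn code):

    int hwpoison;

    down_read(&current->mm->mmap_sem);    /* required: the check walks the page table */
    hwpoison = is_hwpoison_address(addr);
    up_read(&current->mm->mmap_sem);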
2010-08-01  KVM: Avoid killing userspace through guest SRAO MCE on unmapped pages  (Huang Ying; 1 file, -0/+30)
In common cases, a guest SRAO MCE will cause the corresponding poisoned page to be un-mapped and SIGBUS to be sent to QEMU-KVM; QEMU-KVM will then relay the MCE to the guest OS. But it is reported that if the poisoned page is accessed in the guest after unmapping and before the MCE is relayed to the guest OS, userspace will be killed. The reason is as follows. Because the poisoned page has been un-mapped, a guest access will cause a guest exit and kvm_mmu_page_fault will be called. kvm_mmu_page_fault can not get the poisoned page for the fault address, so kernel and user space MMIO processing is tried in turn. In user MMIO processing, the poisoned page is accessed again, and userspace is killed by force_sig_info. To fix the bug, kvm_mmu_page_fault sends a HWPOISON signal to QEMU-KVM and does not try kernel and user space MMIO processing for the poisoned page. [xiao: fix warning introduced by avi] Reported-by: Max Asbock <[email protected]> Signed-off-by: Huang Ying <[email protected]> Signed-off-by: Xiao Guangrong <[email protected]> Signed-off-by: Marcelo Tosatti <[email protected]> Signed-off-by: Avi Kivity <[email protected]>
2010-07-30  mm: fix ia64 crash when gcore reads gate area  (Hugh Dickins; 1 file, -3/+13)
Debian's ia64 autobuilders have been seeing kernel freeze or reboot when running the gdb testsuite (Debian bug 588574): dannf bisected to 2.6.32 62eede62dafb4a6633eae7ffbeb34c60dba5e7b1 "mm: ZERO_PAGE without PTE_SPECIAL"; and reproduced it with gdb's gcore on a simple target. I'd missed updating the gate_vma handling in __get_user_pages(): that happens to use vm_normal_page() (nowadays failing on the zero page), yet reported success even when it failed to get a page - boom when access_process_vm() tried to copy that to its intermediate buffer. Fix this, resisting cleanups: in particular, leave it for now reporting success when not asked to get any pages - very probably safe to change, but let's not risk it without testing exposure. Why did ia64 crash with 16kB pages, but succeed with 64kB pages? Because setup_gate() pads each 64kB of its gate area with zero pages. Reported-by: Andreas Barth <[email protected]> Bisected-by: dann frazier <[email protected]> Signed-off-by: Hugh Dickins <[email protected]> Tested-by: dann frazier <[email protected]> Cc: [email protected] Signed-off-by: Linus Torvalds <[email protected]>
2010-07-29  slub numa: Fix rare allocation from unexpected node  (Christoph Lameter; 1 file, -1/+1)
The network developers have seen sporadic allocations resulting in objects coming from unexpected NUMA nodes despite asking for objects from a specific node. This is due to get_partial() calling get_any_partial() if partial slabs are exhausted for a node, even if a node was specified and therefore one would expect allocations only from the specified node. get_any_partial() sporadically may return a slab from a foreign node to gradually reduce the size of partial lists on remote nodes and thereby reduce total memory use for a slab cache. The behavior is controlled by the remote_defrag_ratio of each cache. Strictly speaking this is permitted behavior since __GFP_THISNODE was not specified for the allocation, but it is certainly surprising. This patch makes sure that the remote defrag behavior only occurs if no node was specified. Signed-off-by: Christoph Lameter <[email protected]> Signed-off-by: Pekka Enberg <[email protected]>
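A sketch of the changed fallback condition, assuming the slub convention of this era that -1 means "no node specified"; the surrounding function is simplified:

    static struct page *get_partial(struct kmem_cache *s, gfp_t flags, int node)
    {
        struct page *page;
        int searchnode = (node == -1) ? numa_node_id() : node;

        page = get_partial_node(get_node(s, searchnode));
        /* Only defragment remote partial lists when the caller did not
         * ask for a specific node. */
        if (page || node != -1)
            return page;
        return get_any_partial(s, flags);
    }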
2010-07-27  vmap: add flag to allow lazy unmap to be disabled at runtime  (Jeremy Fitzhardinge; 1 file, -0/+4)
Add a flag to force lazy_max_pages() to return zero, preventing any outstanding lazily-unmapped pages. We'll need this for Xen. Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Konrad Rzeszutek Wilk <[email protected]> Acked-by: Nick Piggin <[email protected]>
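A minimal sketch of the knob's effect; the flag name and the batch-size constant are assumptions:

    bool vmap_lazy_unmap __read_mostly = true;    /* runtime switch */

    static unsigned long lazy_max_pages(void)
    {
        /* With lazy unmap disabled, never batch up unmapped ranges:
         * every vunmap() is flushed immediately. */
        if (!vmap_lazy_unmap)
            return 0;
        return 32UL * 1024 * 1024 / PAGE_SIZE;    /* illustrative batch size */
    }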
2010-07-21  Merge branch 'linus' into perf/core  (Ingo Molnar; 9 files, -23/+592)
Merge reason: Pick up the latest perf fixes. Signed-off-by: Ingo Molnar <[email protected]>
2010-07-20  x86,nobootmem: make alloc_bootmem_node fall back to other node when 32bit numa is used  (Yinghai Lu; 2 files, -4/+23)
Borislav Petkov reported his 32bit numa system has a problem:
[ 0.000000] Reserving total of 4c00 pages for numa KVA remap
[ 0.000000] kva_start_pfn ~ 32800 max_low_pfn ~ 375fe
[ 0.000000] max_pfn = 238000
[ 0.000000] 8202MB HIGHMEM available.
[ 0.000000] 885MB LOWMEM available.
[ 0.000000] mapped low ram: 0 - 375fe000
[ 0.000000] low ram: 0 - 375fe000
[ 0.000000] alloc (nid=8 100000 - 7ee00000) (1000000 - ffffffff) 1000 1000 => 34e7000
[ 0.000000] alloc (nid=8 100000 - 7ee00000) (1000000 - ffffffff) 200 40 => 34c9d80
[ 0.000000] alloc (nid=0 100000 - 7ee00000) (1000000 - ffffffffffffffff) 180 40 => 34e6140
[ 0.000000] alloc (nid=1 80000000 - c7e60000) (1000000 - ffffffffffffffff) 240 40 => 80000000
[ 0.000000] BUG: unable to handle kernel paging request at 40000000
[ 0.000000] IP: [<c2c8cff1>] __alloc_memory_core_early+0x147/0x1d6
[ 0.000000] *pdpt = 0000000000000000 *pde = f000ff53f000ff00 ...
[ 0.000000] Call Trace:
[ 0.000000] [<c2c8b4f8>] ? __alloc_bootmem_node+0x216/0x22f
[ 0.000000] [<c2c90c9b>] ? sparse_early_usemaps_alloc_node+0x5a/0x10b
[ 0.000000] [<c2c9149e>] ? sparse_init+0x1dc/0x499
[ 0.000000] [<c2c79118>] ? paging_init+0x168/0x1df
[ 0.000000] [<c2c780ff>] ? native_pagetable_setup_start+0xef/0x1bb
It looks like it allocates too high an address for bootmem. Try to cut the limit with get_max_mapped(). Reported-by: Borislav Petkov <[email protected]> Tested-by: Conny Seidel <[email protected]> Signed-off-by: Yinghai Lu <[email protected]> Cc: <[email protected]> [2.6.34.x] Cc: Ingo Molnar <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Lee Schermerhorn <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-07-20  mm/vmscan.c: fix mapping use after free  (Nick Piggin; 1 file, -1/+1)
We need lock_page_nosync() here because we have no reference to the mapping when taking the page lock. Signed-off-by: Nick Piggin <[email protected]> Reviewed-by: Johannes Weiner <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-07-20  slab: use deferable timers for its periodic housekeeping  (Arjan van de Ven; 1 file, -1/+1)
slab has a "once every 2 seconds" timer for its housekeeping. As the number of logical processors grows, it's more and more common that this 2 second timer becomes the primary wakeup source. This patch turns this housekeeping timer into a deferrable timer, which means that the timer does not interrupt idle, but just runs at the next event that wakes the cpu up. The impact is that the timer likely runs a bit later, but during the delay no code is running, so there's not all that much reason for a difference in housekeeping to occur because of this delay. Signed-off-by: Arjan van de Ven <[email protected]> Signed-off-by: Pekka Enberg <[email protected]>
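A sketch of how such a housekeeping timer becomes deferrable; the surrounding start_cpu_timer() structure and the per-CPU variable name are assumptions:

    static void start_cpu_timer(int cpu)
    {
        struct delayed_work *reap_work = &per_cpu(slab_reap_work, cpu);

        if (keventd_up() && reap_work->work.func == NULL) {
            init_reap_node(cpu);
            /* Deferrable: the timer no longer interrupts idle; it fires
             * together with the next non-deferrable wakeup instead. */
            INIT_DELAYED_WORK_DEFERRABLE(reap_work, cache_reap);
            schedule_delayed_work_on(cpu, reap_work,
                                     __round_jiffies_relative(HZ, cpu));
        }
    }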
2010-07-19  kmemleak: Add DocBook style comments to kmemleak.c  (Catalin Marinas; 1 file, -21/+59)
The description and parameters of the kmemleak API weren't obvious. This patch adds comments clarifying the API usage. Signed-off-by: Catalin Marinas <[email protected]> Acked-by: Pekka Enberg <[email protected]>
2010-07-19  kmemleak: Introduce a default off mode for kmemleak  (Jason Baron; 1 file, -1/+13)
Introduce a new DEBUG_KMEMLEAK_DEFAULT_OFF config parameter that allows kmemleak to be disabled by default, but enabled on the command line via: kmemleak=on. Although a reboot is required to turn it on, it's still useful to not require a re-compile. Signed-off-by: Jason Baron <[email protected]> Signed-off-by: Catalin Marinas <[email protected]> Acked-by: Pekka Enberg <[email protected]>
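A sketch of a boot parameter handler of this shape; the helper and variable names are assumptions, only the 'kmemleak=on' syntax comes from the message above:

    static int __init kmemleak_boot_config(char *str)
    {
        if (!str)
            return -EINVAL;
        if (strcmp(str, "off") == 0)
            kmemleak_disable();            /* assumed existing helper */
        else if (strcmp(str, "on") == 0)
            kmemleak_skip_disable = 1;     /* override the default-off build */
        else
            return -EINVAL;
        return 0;
    }
    early_param("kmemleak", kmemleak_boot_config);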
2010-07-19  kmemleak: Show more information for objects found by alias  (Catalin Marinas; 1 file, -1/+3)
There may be situations when an object is freed using a pointer inside the memory block. Kmemleak should show more information to help with debugging. Signed-off-by: Catalin Marinas <[email protected]> Acked-by: Pekka Enberg <[email protected]>
2010-07-19  kmemleak: Add support for NO_BOOTMEM configurations  (Catalin Marinas; 1 file, -0/+5)
With commits 08677214 and 59be5a8e, alloc_bootmem()/free_bootmem() and friends use the early_res functions for memory management when NO_BOOTMEM is enabled. This patch adds the kmemleak calls in the corresponding code paths for bootmem allocations. Signed-off-by: Catalin Marinas <[email protected]> Acked-by: Pekka Enberg <[email protected]> Acked-by: Yinghai Lu <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: [email protected]
2010-07-19  kmemleak: Annotate false positive in init_section_page_cgroup()  (Catalin Marinas; 1 file, -0/+7)
The pointer to the page_cgroup table allocated in init_section_page_cgroup() is stored in section->page_cgroup as (base - pfn). Since this value does not point to the beginning or inside the allocated memory block, kmemleak reports a false positive. This was reported in bugzilla.kernel.org as #16297. Signed-off-by: Catalin Marinas <[email protected]> Reported-by: Adrien Dessemond <[email protected]> Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Andrew Morton <[email protected]>
2010-07-19  mm: add context argument to shrinker callback  (Dave Chinner; 1 file, -3/+5)
The current shrinker implementation requires the registered callback to have global state to work from. This makes it difficult to shrink caches that are not global (e.g. per-filesystem caches). Pass the shrinker structure to the callback so that users can embed the shrinker structure in the context the shrinker needs to operate on and get back to it in the callback via container_of(). Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]>
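A sketch of the intended usage pattern described above; the 'myfs' names are hypothetical, and the callback signature follows the shrink_slab convention of the time (nr_to_scan plus a gfp mask):

    struct myfs_mount {                    /* hypothetical per-fs state */
        struct shrinker shrinker;
        /* ... per-filesystem caches ... */
    };

    static int myfs_shrink(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
    {
        /* Recover the embedding context from the shrinker pointer. */
        struct myfs_mount *mp = container_of(shrink, struct myfs_mount, shrinker);

        /* ... trim mp's caches; return the remaining object count ... */
        return 0;
    }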