path: root/mm
Age  Commit message  Author  Files  Lines
2010-08-09  vmscan: kill prev_priority completely  (KOSAKI Motohiro)  4 files, -92/+0
Since 2.6.28 zone->prev_priority is unused, so it can be removed safely. This reduces stack usage slightly. Now I have to say that I'm sorry. Two years ago I thought prev_priority could be integrated again and would be useful, but four (or more) attempts have not produced good performance numbers, so I am giving up on that approach. The rest of this changelog is notes on prev_priority: why it existed in the first place and why it may not be necessary any more. This information is based heavily on discussions between Andrew Morton, Rik van Riel and Kosaki Motohiro, who is heavily quoted from. Historically prev_priority was important because it determined when the VM would start unmapping PTE pages, i.e. there are no balances of note within the VM, Anon vs File and Mapped vs Unmapped. Without prev_priority, there is a potential risk of unnecessarily increasing minor faults, as a large amount of read activity of use-once pages could push mapped pages to the end of the LRU and get them unmapped. There is no proof this is still a problem, but currently it is not considered to be one. Active files are not deactivated if the active file list is smaller than the inactive list, reducing the likelihood that file-mapped pages are pushed off the LRU, and referenced executable pages are kept on the active list to avoid them being pushed out by read activity. Even if it is a problem, prev_priority wouldn't work nowadays. First of all, current vmscan still has a lot of UP-centric code, which exposes some weaknesses on machines with dozens of CPUs. I think we need more and more improvement. The problem is that current vmscan mixes up per-system pressure, per-zone pressure and per-task pressure a bit. For example, prev_priority tries to boost priority to match other concurrent priorities, but if the other task has a mempolicy restriction this is unnecessary, and it also causes unduly large latency and excessive reclaim. Per-task priority plus the prev_priority adjustment emulates per-system pressure, but this has two issues: 1) the emulation is too rough and brutal, and 2) we need per-zone pressure, not per-system. Another example: currently DEF_PRIORITY is 12, which means the LRU is rotated about 2 cycles (1/4096 + 1/2048 + 1/1024 + ... + 1) before invoking the OOM killer. But if 10,000 threads enter DEF_PRIORITY reclaim at the same time, the system is under higher memory pressure than priority==0 (1/4096 * 10,000 > 2). prev_priority can't solve such a multithreaded workload issue. In other words, the prev_priority concept assumes the system doesn't have lots of threads. Signed-off-by: KOSAKI Motohiro <[email protected]> Signed-off-by: Mel Gorman <[email protected]> Reviewed-by: Johannes Weiner <[email protected]> Reviewed-by: Rik van Riel <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Chris Mason <[email protected]> Cc: Nick Piggin <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Michael Rubin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
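For reference, the bracketed series above is geometric: 1/4096 + 1/2048 + ... + 1/2 + 1 = 2 - 1/4096, i.e. just under two full LRU cycles for DEF_PRIORITY == 12, which is where the "about 2 cycles" figure comes from.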
2010-08-09  vmscan: tracing: add trace event when a page is written  (Mel Gorman)  1 file, -0/+2
Add a trace event for when page reclaim queues a page for IO and records whether it is synchronous or asynchronous. Excessive synchronous IO for a process can result in noticeable stalls during direct reclaim. Excessive IO from page reclaim may indicate that the system is seriously under provisioned for the amount of dirty pages that exist. Signed-off-by: Mel Gorman <[email protected]> Acked-by: Rik van Riel <[email protected]> Acked-by: Larry Woodman <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Chris Mason <[email protected]> Cc: Nick Piggin <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Michael Rubin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  vmscan: tracing: add trace events for LRU page isolation  (Mel Gorman)  1 file, -0/+16
Add an event for when pages are isolated en-masse from the LRU lists. This event augments the information available on LRU traffic and can be used to evaluate lumpy reclaim. [[email protected]: coding-style fixes] Signed-off-by: Mel Gorman <[email protected]> Acked-by: Rik van Riel <[email protected]> Acked-by: Larry Woodman <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Chris Mason <[email protected]> Cc: Nick Piggin <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Michael Rubin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  vmscan: tracing: add trace events for kswapd wakeup, sleeping and direct reclaim  (Mel Gorman)  1 file, -4/+20
Add two trace events for kswapd waking up and going to sleep for the purposes of tracking kswapd activity, and two trace events for direct reclaim beginning and ending. The information can be used to work out how much time a process or the system is spending on the reclamation of pages and, in the case of direct reclaim, how many pages were reclaimed for that process. High-frequency triggering of these events could point to memory pressure problems. Signed-off-by: Mel Gorman <[email protected]> Acked-by: Rik van Riel <[email protected]> Acked-by: Larry Woodman <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Chris Mason <[email protected]> Cc: Nick Piggin <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Michael Rubin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
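For readers unfamiliar with tracepoints: such events are defined with the kernel's TRACE_EVENT() macro. A minimal sketch of what a kswapd-wakeup event of this kind typically looks like (the exact name and fields here are illustrative, not quoted from the patch):

    TRACE_EVENT(mm_vmscan_kswapd_wake,

        TP_PROTO(int nid, int order),
        TP_ARGS(nid, order),

        TP_STRUCT__entry(
            __field(int, nid)
            __field(int, order)
        ),

        TP_fast_assign(
            __entry->nid   = nid;
            __entry->order = order;
        ),

        /* what appears in the trace buffer for each wakeup */
        TP_printk("nid=%d order=%d", __entry->nid, __entry->order)
    );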
2010-08-09  vmscan: recalculate lru_pages on each priority  (KOSAKI Motohiro)  1 file, -13/+8
shrink_zones() takes a relatively long time and lru_pages can change dramatically during shrink_zones(), so lru_pages should be recalculated for each priority. Signed-off-by: KOSAKI Motohiro <[email protected]> Acked-by: Rik van Riel <[email protected]> Reviewed-by: Minchan Kim <[email protected]> Acked-by: Johannes Weiner <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
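Schematically, the reclaim loop then looks roughly like this (a hedged sketch; the helper names approximate the 2.6.35-era code rather than quoting it):

    for (priority = DEF_PRIORITY; priority >= 0; priority--) {
        unsigned long lru_pages = 0;

        /* recompute each round: zone LRU sizes can change dramatically
         * while shrink_zones() is running */
        for_each_zone_zonelist(zone, z, zonelist, high_zoneidx)
            lru_pages += zone_reclaimable_pages(zone);

        shrink_zones(priority, zonelist, sc);
        shrink_slab(sc->nr_scanned, sc->gfp_mask, lru_pages);
        /* ... check progress, possibly break out early ... */
    }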
2010-08-09  vmscan: zone_reclaim don't call disable_swap_token()  (KOSAKI Motohiro)  1 file, -1/+0
The swap token hasn't worked when zone reclaim is enabled since the day it was born, because __zone_reclaim() always calls disable_swap_token() unconditionally. This kills the swap token feature completely. As far as I know, nobody wants that. Remove the call. Signed-off-by: KOSAKI Motohiro <[email protected]> Acked-by: Rik van Riel <[email protected]> Reviewed-by: Minchan Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mm: implement writeback livelock avoidance using page tagging  (Jan Kara)  1 file, -17/+53
We try to avoid livelocks of writeback when someone steadily creates dirty pages in a mapping we are writing out. For memory-cleaning writeback, using nr_to_write works reasonably well, but we cannot really use it for data integrity writeback. This patch tries to solve the problem. The idea is simple: tag all pages that should be written back with a special tag (TOWRITE) in the radix tree. This can be done rather quickly, and thus livelocks should not happen in practice. Then we start doing the hard work of locking pages and sending them to disk only for those pages that have the TOWRITE tag set. Note: adding a new radix tree tag grows the radix tree node from 288 to 296 bytes for 32-bit archs and from 552 to 560 bytes for 64-bit archs. However, the number of slab/slub items per page remains the same (13 and 7 respectively). Signed-off-by: Jan Kara <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Nick Piggin <[email protected]> Cc: Chris Mason <[email protected]> Cc: Theodore Ts'o <[email protected]> Cc: Jens Axboe <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
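A sketch of the two-pass idea (helper names are approximate or illustrative, not quoted from the patch; write_out_page() is a placeholder for the real per-page writeout):

    /* Pass 1: quickly tag every page that is dirty right now */
    tag_pages_for_writeback(mapping, start, end);

    /* Pass 2: lock and write only TOWRITE-tagged pages; pages dirtied
     * after pass 1 are left for the next writeback pass and can no
     * longer livelock this data-integrity sync */
    while (pagevec_lookup_tag(&pvec, mapping, &index,
                              PAGECACHE_TAG_TOWRITE, PAGEVEC_SIZE)) {
        int i;

        for (i = 0; i < pagevec_count(&pvec); i++)
            write_out_page(pvec.pages[i]);   /* placeholder */
        pagevec_release(&pvec);
    }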
2010-08-09  rmap: add anon_vma bug checks  (Andrea Arcangeli)  1 file, -0/+3
Verify the refcounting doesn't go wrong, and resurrect the check in __page_check_anon_rmap as in old anon-vma code. Signed-off-by: Andrea Arcangeli <[email protected]> Signed-off-by: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  rmap: resurrect page_address_in_vma anon_vma check  (Andrea Arcangeli)  1 file, -3/+4
With root anon-vma it's trivial to keep doing the usual check as in old-anon-vma code. Signed-off-by: Andrea Arcangeli <[email protected]> Signed-off-by: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  rmap: always use anon_vma root pointer  (Andrea Arcangeli)  1 file, -6/+12
Always use the anon_vma->root pointer instead of anon_vma_chain.prev. Also optimize the map paths: if a mapping is already established there is no need to overwrite it with the root anon-vma list; we can keep the more fine-grained anon-vma and skip the overwrite: see the PageAnon check in the !exclusive case. This is also the optimization that hid the ksm bug, as it tends to make ksm_might_need_to_copy skip the copy, but only the proper fix to ksm_might_need_to_copy guarantees not triggering the ksm bug unless ksm is in use. This is an optimization only... [[email protected]: coding-style fixes] Signed-off-by: Andrea Arcangeli <[email protected]> Signed-off-by: Rik van Riel <[email protected]> [[email protected]: fix false positive BUG_ON in __page_set_anon_rmap] Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  rmap: always add new vmas at the end  (Andrea Arcangeli)  1 file, -1/+1
Make sure to always add new VMAs at the end of the list. This is important so rmap_walk does not miss a VMA that was created during the rmap_walk. The old code got this right most of the time due to luck, but was buggy when anon_vma_prepare reused a mergeable anon_vma. Signed-off-by: Andrea Arcangeli <[email protected]> Signed-off-by: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mmap: remove unnecessary lock from __vma_link  (Andrea Arcangeli)  1 file, -2/+0
There's no anon-vma related mangling happening inside __vma_link anymore so no need of anon_vma locking there. Signed-off-by: Andrea Arcangeli <[email protected]> Signed-off-by: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  shmem: reduce pagefault lock contention  (Shaohua Li)  1 file, -21/+49
I'm running a shmem pagefault test case (see attached file) on a 64 CPU system. The profile shows shmem_inode_info->lock is heavily contended and 100% of CPU time is spent trying to get the lock. In the pagefault (no swap) case, shmem_getpage takes the lock twice; the second acquisition is avoidable if we preallocate a page, so we save one round of locking. This is what the patch below does. The result of the test case: 2.6.35-rc3: ~20s; 2.6.35-rc3 + patch: ~12s, so this is a 40% improvement. One might argue whether we could have better locking for shmem. But even if shmem were lockless, the pagefault path would soon contend heavily on the pagecache lock, because shmem must add the new page to the pagecache. So before we have better locking for the pagecache, improving shmem locking doesn't buy much more. I did a similar pagefault test against a ramfs file; that test result is ~10.5s. [[email protected]: fix comment, clean up code layout, eliminate code duplication] Signed-off-by: Shaohua Li <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: "Zhang, Yanmin" <[email protected]> Cc: Tim Chen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
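Schematically, the fast path changes from lock/unlock/allocate/relock to allocate-then-lock-once. A hedged sketch of the idea (shmem_page_present() is a made-up placeholder for the page-cache/swap lookup, and the surrounding variables are assumed from context):

    struct page *prealloc;

    /* allocate before taking info->lock so the common no-swap fault
     * only takes the spinlock once */
    prealloc = shmem_alloc_page(gfp, info, idx);
    if (!prealloc)
        return -ENOMEM;

    spin_lock(&info->lock);
    if (!shmem_page_present(info, idx)) {       /* placeholder check */
        filepage = prealloc;
        prealloc = NULL;
        /* ... charge the block and add filepage to the page cache ... */
    }
    spin_unlock(&info->lock);

    if (prealloc)
        page_cache_release(prealloc);   /* lost the race; drop the spare page */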
2010-08-09  tmpfs: make tmpfs scalable with percpu_counter for used blocks  (Tim Chen)  1 file, -23/+17
The current implementation of tmpfs is not scalable. We found that stat_lock is contended by multiple threads when we need to get a new page, leading to useless spinning inside this spin lock. This patch makes use of the percpu_counter library to maintain a local count of used blocks, speeding up the getting and returning of pages. The acquisition of stat_lock then becomes unnecessary for getting and returning blocks, improving the performance of tmpfs on systems with a large number of CPUs. On a 4-socket, 32-core NHM-EX system, we saw an improvement of 270%. The implementation below has a slight chance of a race between threads causing a small overshoot of the maximum configured blocks. However, any overshoot is small and is bounded by the number of cpus. This happens when the number of used blocks is slightly below the maximum configured blocks when a thread checks the used block count, and another thread allocates the last block before the current thread does. This should not be a problem for tmpfs, as the overshoot is most likely to be a few blocks and bounded. If a strict limit is really desired, then configure the max blocks to be the limit less the number of cpus in the system. Signed-off-by: Tim Chen <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
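A hedged sketch of the percpu_counter pattern described above (shmem_charge_block/shmem_uncharge_block are made-up helper names for illustration; the counter must be set up with percpu_counter_init() at mount time):

    static struct percpu_counter used_blocks;  /* replaces the stat_lock-protected count */

    static int shmem_charge_block(unsigned long max_blocks)
    {
        if (max_blocks &&
            percpu_counter_compare(&used_blocks, max_blocks) >= 0)
            return -ENOSPC;     /* racing threads may overshoot by at most nr_cpus blocks */
        percpu_counter_inc(&used_blocks);
        return 0;
    }

    static void shmem_uncharge_block(void)
    {
        percpu_counter_dec(&used_blocks);
    }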
2010-08-09  gcc-4.6: mm: fix unused but set warnings  (Andi Kleen)  3 files, -5/+1
No real bugs, just some dead code and some fixups. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mempolicy: reduce stack size of migrate_pages()  (KOSAKI Motohiro)  1 file, -13/+25
migrate_pages() is using >500 bytes stack. Reduce it. mm/mempolicy.c: In function 'sys_migrate_pages': mm/mempolicy.c:1344: warning: the frame size of 528 bytes is larger than 512 bytes [[email protected]: don't play with a might-be-NULL pointer] Signed-off-by: KOSAKI Motohiro <[email protected]> Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Reviewed-by: Christoph Lameter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mm: use for_each_online_cpu() in vmstat  (Minchan Kim)  1 file, -3/+3
sum_vm_events passes a cpumask to for_each_cpu(), but this is pointless since we have for_each_online_cpu. Although the overhead is trivial, it's not good for coding consistency. Let's use for_each_online_cpu instead of for_each_cpu with a cpumask argument. Signed-off-by: Minchan Kim <[email protected]> Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Acked-by: Christoph Lameter <[email protected]> Reviewed-by: KOSAKI Motohiro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
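The resulting function looks roughly like this (a sketch; the cpumask parameter of the old version is simply dropped):

    static void sum_vm_events(unsigned long *ret)
    {
        int cpu, i;

        memset(ret, 0, NR_VM_EVENT_ITEMS * sizeof(unsigned long));

        /* was: for_each_cpu(cpu, cpumask) with an always-online mask */
        for_each_online_cpu(cpu) {
            struct vm_event_state *this = &per_cpu(vm_event_states, cpu);

            for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
                ret[i] += this->event[i];
        }
    }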
2010-08-09  oom: fold __out_of_memory into out_of_memory  (David Rientjes)  1 file, -36/+29
__out_of_memory() only has a single caller, so fold it into out_of_memory() and add a comment about locking for its call to oom_kill_process(). Signed-off-by: David Rientjes <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Acked-by: KOSAKI Motohiro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: remove constraint argument from select_bad_process and __out_of_memory  (David Rientjes)  1 file, -10/+8
select_bad_process() and __out_of_memory() do not need their enum oom_constraint arguments: it's possible to pass a NULL nodemask if constraint == CONSTRAINT_MEMORY_POLICY in the caller, out_of_memory(). Signed-off-by: David Rientjes <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Acked-by: KOSAKI Motohiro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mm: rename try_set_zone_oom() to try_set_zonelist_oom()  (Minchan Kim)  2 files, -3/+3
We have been using the names try_set_zone_oom and clear_zonelist_oom. The role of these functions is to lock the zonelist to prevent parallel OOM. So clear_zonelist_oom makes sense, but try_set_zone_oom is rather awkward and does not match clear_zonelist_oom. Let's rename it to try_set_zonelist_oom. Signed-off-by: Minchan Kim <[email protected]> Acked-by: David Rientjes <[email protected]> Reviewed-by: KOSAKI Motohiro <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: remove unnecessary code and cleanup  (David Rientjes)  1 file, -46/+10
Remove the redundancy in __oom_kill_task() since: - init can never be passed to this function: it will never be PF_EXITING or selectable from select_bad_process(), and - it will never be passed a task from oom_kill_task() without an ->mm and we're unconcerned about detachment from exiting tasks, there's no reason to protect them against SIGKILL or access to memory reserves. Also moves the kernel log message to a higher level since the verbosity is not always emitted here; we need not print an error message if an exiting task is given a longer timeslice. __oom_kill_task() only has a single caller, so it can be merged into that function at the same time. Signed-off-by: David Rientjes <[email protected]> Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: remove special handling for pagefault ooms  (David Rientjes)  1 file, -29/+57
It is possible to remove the special pagefault oom handler by simply oom locking all system zones and then calling directly into out_of_memory(). All populated zones must have ZONE_OOM_LOCKED set, otherwise there is a parallel oom killing in progress that will lead to eventual memory freeing so it's not necessary to needlessly kill another task. The context in which the pagefault is allocating memory is unknown to the oom killer, so this is done on a system-wide level. If a task has already been oom killed and hasn't fully exited yet, this will be a no-op since select_bad_process() recognizes tasks across the system with TIF_MEMDIE set. Signed-off-by: David Rientjes <[email protected]> Acked-by: Nick Piggin <[email protected]> Acked-by: KOSAKI Motohiro <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: extract panic helper function  (David Rientjes)  1 file, -24/+29
There are various points in the oom killer where the kernel must determine whether to panic or not. It's better to extract this to a helper function to remove all the confusion as to its semantics. Also fix a call to dump_header() where tasklist_lock is not read-locked, as required. There's no functional change with this patch. Acked-by: KOSAKI Motohiro <[email protected]> Signed-off-by: David Rientjes <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: avoid oom killer for lowmem allocations  (David Rientjes)  1 file, -9/+20
If memory has been depleted in lowmem zones even with the protection afforded to it by /proc/sys/vm/lowmem_reserve_ratio, it is unlikely that killing current users will help. The memory is either reclaimable (or migratable) already, in which case we should not invoke the oom killer at all, or it is pinned by an application for I/O. Killing such an application may leave the hardware in an unspecified state and there is no guarantee that it will be able to make a timely exit. Lowmem allocations are now failed in oom conditions when __GFP_NOFAIL is not used so that the task can perhaps recover or try again later. Previously, the heuristic provided some protection for those tasks with CAP_SYS_RAWIO, but this is no longer necessary since we will not be killing tasks for the purposes of ISA allocations. high_zoneidx is gfp_zone(gfp_flags), meaning that ZONE_NORMAL will be the default for all allocations that are not __GFP_DMA, __GFP_DMA32, __GFP_HIGHMEM, and __GFP_MOVABLE on kernels configured to support those flags. Testing for high_zoneidx being less than ZONE_NORMAL will only return true for allocations that have either __GFP_DMA or __GFP_DMA32. Acked-by: KOSAKI Motohiro <[email protected]> Signed-off-by: David Rientjes <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
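Conceptually, the check described above amounts to an early bail-out in the OOM path along these lines (a sketch, not the verbatim kernel code):

    /* The OOM killer does not help lowmem requests: such memory is either
     * already reclaimable/migratable or pinned for I/O, so killing tasks
     * is unlikely to free ZONE_DMA/ZONE_DMA32 pages. */
    if (high_zoneidx < ZONE_NORMAL)
        goto out;   /* fail the allocation unless __GFP_NOFAIL keeps retrying */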
2010-08-09  oom: enable oom tasklist dump by default  (David Rientjes)  1 file, -1/+1
The oom killer tasklist dump, enabled with the oom_dump_tasks sysctl, is very helpful information in diagnosing why a user's task has been killed. It emits useful information such as each eligible thread's memory usage that can determine why the system is oom, so it should be enabled by default. Signed-off-by: David Rientjes <[email protected]> Acked-by: KOSAKI Motohiro <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: select task from tasklist for mempolicy ooms  (David Rientjes)  2 files, -36/+112
The oom killer presently kills current whenever there is no more memory free or reclaimable on its mempolicy's nodes. There is no guarantee that current is a memory-hogging task or that killing it will free any substantial amount of memory, however. In such situations, it is better to scan the tasklist for tasks that are allowed to allocate on current's set of nodes and kill the task with the highest badness() score. This ensures that the most memory-hogging task, or the one configured by the user with /proc/pid/oom_adj, is always selected in such scenarios. Signed-off-by: David Rientjes <[email protected]> Reviewed-by: KOSAKI Motohiro <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: sacrifice child with highest badness score for parent  (David Rientjes)  1 file, -11/+29
When a task is chosen for oom kill, the oom killer first attempts to sacrifice a child not sharing its parent's memory instead. Unfortunately, this often kills in a seemingly random fashion based on the ordering of the selected task's child list. Additionally, it is not guaranteed at all to free a large amount of memory that we need to prevent additional oom killing in the very near future. Instead, we now only attempt to sacrifice the worst child not sharing its parent's memory, if one exists. The worst child is indicated with the highest badness() score. This serves two advantages: we kill a memory-hogging task more often, and we allow the configurable /proc/pid/oom_adj value to be considered as a factor in which child to kill. Reviewers may observe that the previous implementation would iterate through the children and attempt to kill each until one was successful and then the parent if none were found while the new code simply kills the most memory-hogging task or the parent. Note that the only time oom_kill_task() fails, however, is when a child does not have an mm or has a /proc/pid/oom_adj of OOM_DISABLE. badness() returns 0 for both cases, so the final oom_kill_task() will always succeed. Signed-off-by: David Rientjes <[email protected]> Acked-by: Rik van Riel <[email protected]> Acked-by: Nick Piggin <[email protected]> Acked-by: Balbir Singh <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: filter tasks not sharing the same cpuset  (David Rientjes)  1 file, -8/+2
Tasks that do not share the same set of allowed nodes with the task that triggered the oom should not be considered as candidates for oom kill. Tasks in other cpusets with a disjoint set of mems would be unfairly penalized otherwise because of oom conditions elsewhere; an extreme example could unfairly kill all other applications on the system if a single task in a user's cpuset sets itself to OOM_DISABLE and then uses more memory than allowed. Killing tasks outside of current's cpuset rarely would free memory for current anyway. To use a sane heuristic, we must ensure that killing a task would likely free memory for current and avoid needlessly killing others at all costs just because their potential memory freeing is unknown. It is better to kill current than another task needlessly. Signed-off-by: David Rientjes <[email protected]> Acked-by: Rik van Riel <[email protected]> Acked-by: Nick Piggin <[email protected]> Acked-by: Balbir Singh <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: avoid sending exiting tasks a SIGKILL  (David Rientjes)  1 file, -1/+1
It's unnecessary to SIGKILL a task that is already PF_EXITING and can actually cause a NULL pointer dereference of the sighand if it has already been detached. Instead, simply set TIF_MEMDIE so it has access to memory reserves and can quickly exit as the comment implies. Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: David Rientjes <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: give current access to memory reserves if it has been killed  (David Rientjes)  1 file, -0/+10
It's possible to livelock the page allocator if a thread has mm->mmap_sem and fails to make forward progress because the oom killer selects another thread sharing the same ->mm to kill that cannot exit until the semaphore is dropped. The oom killer will not kill multiple tasks at the same time; each oom killed task must exit before another task may be killed. Thus, if one thread is holding mm->mmap_sem and cannot allocate memory, all threads sharing the same ->mm are blocked from exiting as well. In the oom kill case, that means the thread holding mm->mmap_sem will never free additional memory since it cannot get access to memory reserves, and the thread that depends on it with access to memory reserves cannot exit because it cannot acquire the semaphore. Thus, the page allocator livelocks. When the oom killer is called and current happens to have a pending SIGKILL, this patch automatically gives it access to memory reserves and returns. Upon returning to the page allocator, its allocation will hopefully succeed so it can quickly exit and free its memory. If not, the page allocator will fail the allocation if it is not __GFP_NOFAIL. Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: David Rientjes <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
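The fix amounts to an early exit in the OOM path along these lines (sketch):

    /* If current already has a pending SIGKILL, killing someone else will
     * not help; give it reserves so it can quickly exit and free its memory. */
    if (fatal_signal_pending(current)) {
        set_thread_flag(TIF_MEMDIE);
        return;
    }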
2010-08-09  oom: dump_tasks use find_lock_task_mm too fix  (David Rientjes)  1 file, -2/+2
When find_lock_task_mm() returns a thread other than p in dump_tasks(), its name should be displayed instead. This is the thread that will be targeted by the oom killer, not its mm-less parent. This also allows us to safely dereference task->comm without needing get_task_comm(). While we're here, remove the cast on task_cpu(task) as Andrew suggested. Signed-off-by: David Rientjes <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Cc: Balbir Singh <[email protected]> Cc: Oleg Nesterov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: improve commentary in dump_tasks()  (David Rientjes)  1 file, -8/+3
The comments in dump_tasks() should be updated to be more clear about why tasks are filtered and how they are filtered by its argument. An unnecessary comment concerning a check for is_global_init() is removed since it isn't of importance. Suggested-by: Andrew Morton <[email protected]> Signed-off-by: David Rientjes <[email protected]> Acked-by: KOSAKI Motohiro <[email protected]> Cc: Balbir Singh <[email protected]> Cc: Oleg Nesterov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: dump_tasks use find_lock_task_mm too  (KOSAKI Motohiro)  1 file, -18/+21
dump_tasks() should use find_lock_task_mm() too; it is necessary to protect against the task-exiting race. dump_tasks() currently filters any task that does not have an attached ->mm, since it incorrectly assumes that such a task must either be in the process of exiting and have detached its memory, or be a kernel thread; multithreaded tasks may actually have subthreads that have a valid ->mm pointer, and those threads should actually be displayed. This change finds those threads, if they exist, and emits their information along with the rest of the candidate tasks for kill. Signed-off-by: KOSAKI Motohiro <[email protected]> Signed-off-by: David Rientjes <[email protected]> Cc: Balbir Singh <[email protected]> Cc: Oleg Nesterov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: introduce find_lock_task_mm() to fix !mm false positives  (Oleg Nesterov)  1 file, -31/+43
Almost all ->mm == NULL checks in oom_kill.c are wrong. The current code assumes that a task without ->mm has already released its memory and ignores the process. However this is not necessarily true when the process is multithreaded; other live sub-threads can use this ->mm. - Remove the "if (!p->mm)" check in select_bad_process(), it is just wrong. - Add the new helper, find_lock_task_mm(), which finds the live thread which uses the memory and takes task_lock() to pin ->mm. - Change oom_badness() to use this helper instead of just checking ->mm != NULL. - As David pointed out, select_bad_process() must never choose a task without ->mm, but no matter what oom_badness() returns the task can be chosen if nothing else has been found yet. Change oom_badness() to return int, change it to return -1 if find_lock_task_mm() fails, and change select_bad_process() to check points >= 0. Note! This patch is not enough, we need more changes. - oom_badness() was fixed, but oom_kill_task() still ignores the task without ->mm. - oom_forkbomb_penalty() should use find_lock_task_mm() too, and it also needs other changes to actually find the first-descendant children. This will be addressed later. [[email protected]: use in badness(), __oom_kill_task()] Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: David Rientjes <[email protected]> Signed-off-by: KOSAKI Motohiro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
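A sketch of the helper described above (close in spirit to the real find_lock_task_mm(), though simplified):

    struct task_struct *find_lock_task_mm(struct task_struct *p)
    {
        struct task_struct *t = p;

        do {
            task_lock(t);
            if (likely(t->mm))
                return t;       /* returned with task_lock held, ->mm pinned */
            task_unlock(t);
        } while_each_thread(p, t);

        return NULL;            /* no live thread is using the mm */
    }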
2010-08-09  oom: PF_EXITING check should take mm into account  (Oleg Nesterov)  1 file, -1/+1
select_bad_process() checks PF_EXITING to detect a task which is going to release its memory, but the logic is very wrong. - A single process P with a dead group leader disables select_bad_process() completely; it will always return ERR_PTR() while P can live forever. - If the PF_EXITING task has already released its ->mm, it doesn't make sense to expect it is going to free more memory (except task_struct/etc). Change the code to ignore PF_EXITING tasks without ->mm. Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: David Rientjes <[email protected]> Cc: Balbir Singh <[email protected]> Acked-by: KOSAKI Motohiro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  oom: check PF_KTHREAD instead of !mm to skip kthreads  (Oleg Nesterov)  1 file, -6/+3
select_bad_process() thinks a kernel thread can't have ->mm != NULL, but this is not true due to use_mm(). Change the code to check PF_KTHREAD. Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: David Rientjes <[email protected]> Acked-by: KOSAKI Motohiro <[email protected]> Cc: Balbir Singh <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mm: extend KSM refcounts to the anon_vma root  (Rik van Riel)  3 files, -19/+54
KSM reference counts can cause an anon_vma to exist after the process it belongs to has already exited. Because the anon_vma lock now lives in the root anon_vma, we need to ensure that the root anon_vma stays around until after all the "child" anon_vmas have been freed. The obvious way to do this is to have a "child" anon_vma take a reference to the root in anon_vma_fork. When the anon_vma is freed at munmap or process exit, we drop the refcount in anon_vma_unlink and possibly free the root anon_vma. The KSM anon_vma reference count function also needs to be modified to deal with the possibility of freeing two levels of anon_vma. The easiest way to do this is to break out the KSM magic and make it generic. When compiling without CONFIG_KSM, this code is compiled out. Signed-off-by: Rik van Riel <[email protected]> Tested-by: Larry Woodman <[email protected]> Acked-by: Larry Woodman <[email protected]> Reviewed-by: Minchan Kim <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Acked-by: Mel Gorman <[email protected]> Acked-by: Linus Torvalds <[email protected]> Tested-by: Dave Young <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mm: always lock the root (oldest) anon_vma  (Rik van Riel)  3 files, -10/+24
Always (and only) lock the root (oldest) anon_vma whenever we do something in an anon_vma. The recently introduced anon_vma scalability is due to the rmap code scanning only the VMAs that need to be scanned. Many common operations still took the anon_vma lock on the root anon_vma, so always taking that lock is not expected to introduce any scalability issues. However, always taking the same lock does mean we only need to take one lock, which means rmap_walk on pages from any anon_vma in the vma is excluded from occurring during an munmap, expand_stack or other operation that needs to exclude rmap_walk and similar functions. Also add the proper locking to vma_adjust. Signed-off-by: Rik van Riel <[email protected]> Tested-by: Larry Woodman <[email protected]> Acked-by: Larry Woodman <[email protected]> Reviewed-by: Minchan Kim <[email protected]> Reviewed-by: KAMEZAWA Hiroyuki <[email protected]> Acked-by: Mel Gorman <[email protected]> Acked-by: Linus Torvalds <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
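With the root anon_vma pointer in place (added by the "mm: track the root (oldest) anon_vma" commit below), the locking helpers end up looking roughly like this (a sketch, not the verbatim kernel code):

    static inline void anon_vma_lock(struct anon_vma *anon_vma)
    {
        /* every anon_vma in the tree shares its root's lock */
        spin_lock(&anon_vma->root->lock);
    }

    static inline void anon_vma_unlock(struct anon_vma *anon_vma)
    {
        spin_unlock(&anon_vma->root->lock);
    }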
2010-08-09  mm: track the root (oldest) anon_vma  (Rik van Riel)  1 file, -2/+16
Track the root (oldest) anon_vma in each anon_vma tree. Because we only take the lock on the root anon_vma, we cannot use the lock on higher-up anon_vmas to lock anything. This makes it impossible to do an indirect lookup of the root anon_vma, since the data structures could go away from under us. However, a direct pointer is safe because the root anon_vma is always the last one that gets freed on munmap or exit, by virtue of the same_vma list order and unlink_anon_vmas walking the list forward. [[email protected]: fix typo] Signed-off-by: Rik van Riel <[email protected]> Acked-by: Mel Gorman <[email protected]> Acked-by: KAMEZAWA Hiroyuki <[email protected]> Tested-by: Larry Woodman <[email protected]> Acked-by: Larry Woodman <[email protected]> Reviewed-by: Minchan Kim <[email protected]> Acked-by: Linus Torvalds <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mm: change direct call of spin_lock(anon_vma->lock) to inline function  (Rik van Riel)  4 files, -21/+21
Substitute a direct call of spin_lock(anon_vma->lock) with an inline function doing exactly the same. This makes it easier to do the substitution to the root anon_vma lock in a following patch. We will deal with the handful of special locks (nested, dec_and_lock, etc) separately. Signed-off-by: Rik van Riel <[email protected]> Acked-by: Mel Gorman <[email protected]> Acked-by: KAMEZAWA Hiroyuki <[email protected]> Tested-by: Larry Woodman <[email protected]> Acked-by: Larry Woodman <[email protected]> Reviewed-by: Minchan Kim <[email protected]> Acked-by: Linus Torvalds <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mm: rename anon_vma_lock to vma_lock_anon_vma  (Rik van Riel)  1 file, -7/+7
Rename anon_vma_lock to vma_lock_anon_vma. This matches the naming style used in page_lock_anon_vma and will come in really handy further down in this patch series. Signed-off-by: Rik van Riel <[email protected]> Acked-by: Mel Gorman <[email protected]> Acked-by: KAMEZAWA Hiroyuki <[email protected]> Tested-by: Larry Woodman <[email protected]> Acked-by: Larry Woodman <[email protected]> Reviewed-by: Minchan Kim <[email protected]> Acked-by: Linus Torvalds <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  hugetlb: call mmu notifiers on hugepage cow  (Doug Doan)  1 file, -0/+6
When a copy-on-write occurs, we take one of two paths in handle_mm_fault: through handle_pte_fault for normal pages, or through hugetlb_fault for huge pages. In the normal page case, we eventually get to do_wp_page and call mmu notifiers via ptep_clear_flush_notify. There is no callout to the mmu notifiers in the huge page case. This patch fixes that. Signed-off-by: Doug Doan <[email protected]> Acked-by: Mel Gorman <[email protected]> Cc: Andrea Arcangeli <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mm: provide init_mm mm_context initializer  (Heiko Carstens)  1 file, -0/+6
Provide an INIT_MM_CONTEXT initializer macro which can be used to statically initialize mm_struct:mm_context of init_mm. This way we can get rid of code which does the initialization at run time (on s390). In addition, the current code can be found at a place where it is not expected. So let's have a common initializer which architectures can use if needed. This is based on a patch from Suzuki Poulose. Signed-off-by: Heiko Carstens <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Suzuki Poulose <[email protected]> Cc: Alexey Dobriyan <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
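The idea, roughly (a sketch; the field selection is abbreviated and the default-empty macro is an illustration of the pattern, not the exact patch):

    /* arch code may define INIT_MM_CONTEXT to statically set up init_mm's
     * mm_context; otherwise it expands to nothing */
    #ifndef INIT_MM_CONTEXT
    #define INIT_MM_CONTEXT(name)
    #endif

    struct mm_struct init_mm = {
        .mm_rb      = RB_ROOT,
        .pgd        = swapper_pg_dir,
        .mm_users   = ATOMIC_INIT(2),
        .mm_count   = ATOMIC_INIT(1),
        .mmap_sem   = __RWSEM_INITIALIZER(init_mm.mmap_sem),
        INIT_MM_CONTEXT(init_mm)    /* instead of patching it at run time */
    };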
2010-08-09  mm: use ERR_CAST  (Julia Lawall)  1 file, -1/+1
Use ERR_CAST(x) rather than ERR_PTR(PTR_ERR(x)). The former makes more clear what is the purpose of the operation, which otherwise looks like a no-op. The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/) // <smpl> @@ type T; T x; identifier f; @@ T f (...) { <+... - ERR_PTR(PTR_ERR(x)) + x ...+> } @@ expression x; @@ - ERR_PTR(PTR_ERR(x)) + ERR_CAST(x) // </smpl> Signed-off-by: Julia Lawall <[email protected]> Cc: Nick Piggin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  mm: use memdup_user  (Julia Lawall)  1 file, -8/+3
Use memdup_user when user data is immediately copied into the allocated region. The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/) // <smpl> @@ expression from,to,size,flag; position p; identifier l1,l2; @@ - to = \(kmalloc@p\|kzalloc@p\)(size,flag); + to = memdup_user(from,size); if ( - to==NULL + IS_ERR(to) || ...) { <+... when != goto l1; - -ENOMEM + PTR_ERR(to) ...+> } - if (copy_from_user(to, from, size) != 0) { - <+... when != goto l2; - -EFAULT - ...+> - } // </smpl> Signed-off-by: Julia Lawall <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2010-08-09  switch shmem.c to ->evict_inode()  (Al Viro)  1 file, -4/+4
Signed-off-by: Al Viro <[email protected]>
2010-08-09  check ATTR_SIZE constraints in inode_change_ok  (Christoph Hellwig)  2 files, -12/+31
Make sure we check the truncate constraints early on in ->setattr by adding those checks to inode_change_ok. Also clean up and document inode_change_ok to make this obvious. As a fallout we don't have to call inode_newsize_ok from simple_setsize and simplify it down to a truncate_setsize which doesn't return an error. This simplifies a lot of setattr implementations and means we use truncate_setsize almost everywhere. Get rid of fat_setsize now that it's trivial and mark ext2_setsize static to make the calling convention obvious. Keep the inode_newsize_ok in vmtruncate for now as all callers need an audit for its removal anyway. Note: setattr code in ecryptfs doesn't call inode_change_ok at all and needs a deeper audit, but that is left for later. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
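The resulting canonical ->setattr shape is roughly the following (a sketch; simple_fs_setattr is a made-up name for illustration, and error handling is reduced to the essentials):

    static int simple_fs_setattr(struct dentry *dentry, struct iattr *attr)
    {
        struct inode *inode = dentry->d_inode;
        int error;

        /* now also checks the ATTR_SIZE / truncate constraints up front */
        error = inode_change_ok(inode, attr);
        if (error)
            return error;

        if ((attr->ia_valid & ATTR_SIZE) && attr->ia_size != inode->i_size)
            truncate_setsize(inode, attr->ia_size);   /* cannot fail */

        setattr_copy(inode, attr);
        mark_inode_dirty(inode);
        return 0;
    }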
2010-08-09  always call inode_change_ok early in ->setattr  (Christoph Hellwig)  1 file, -4/+6
Make sure we call inode_change_ok before doing any changes in ->setattr, and make sure to call it even if our fs wants to ignore normal UNIX permissions, but use the ATTR_FORCE to skip those. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2010-08-09  rename generic_setattr  (Christoph Hellwig)  1 file, -1/+1
Despite its name it is not a generic implementation of ->setattr, but rather a helper to copy attributes from a struct iattr to the inode. Rename it to setattr_copy to reflect this fact. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2010-08-09  slab: fix object alignment  (Carsten Otte)  1 file, -2/+2
This patch fixes alignment of slab objects in case CONFIG_DEBUG_PAGEALLOC is active. Before this spot in kmem_cache_create, we have this situation: - align contains the required alignment of the object - cachep->obj_offset is 0 or equals align in case of CONFIG_DEBUG_SLAB - size equals the size of the object, or object plus trailing redzone in case of CONFIG_DEBUG_SLAB. This spot tries to fill one page per object if the object is in certain size limits; however, setting obj_offset to PAGE_SIZE - size breaks the object alignment, since size may not be aligned with the required alignment. This patch simply adds an ALIGN(size, align) to the equation and fixes the object size detection accordingly. This code in drivers/s390/cio/qdio_setup_init has led to incorrectly aligned slab objects (sizeof(struct qdio_q) equals 1792): qdio_q_cache = kmem_cache_create("qdio_q", sizeof(struct qdio_q), 256, 0, NULL); Acked-by: Christoph Lameter <[email protected]> Signed-off-by: Carsten Otte <[email protected]> Signed-off-by: Pekka Enberg <[email protected]>
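Roughly, the affected CONFIG_DEBUG_PAGEALLOC code in kmem_cache_create becomes something like the following (a sketch reconstructed from the description above, not the verbatim diff):

    if (size >= malloc_sizes[INDEX_L3 + 1].cs_size
        && cachep->obj_size > cache_line_size()
        && ALIGN(size, align) < PAGE_SIZE) {        /* was: size < PAGE_SIZE */
        /* keep the offset a multiple of the requested alignment */
        cachep->obj_offset += PAGE_SIZE - ALIGN(size, align);   /* was: PAGE_SIZE - size */
        size = PAGE_SIZE;
    }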