path: root/arch/m32r/mm
Age  Commit message  [Author, Files, Lines changed]
2013-09-12  arch: mm: pass userspace fault flag to generic fault handler  [Johannes Weiner, 1 file, -4/+6]
Unlike global OOM handling, memory cgroup code will invoke the OOM killer in any OOM situation because it has no way of telling faults occurring in kernel context - which could be handled more gracefully - from user-triggered faults. Pass a flag that identifies faults originating in user space from the architecture-specific fault handlers to generic code so that memcg OOM handling can be improved. Signed-off-by: Johannes Weiner <[email protected]> Reviewed-by: Michal Hocko <[email protected]> Cc: David Rientjes <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Cc: azurIt <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
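To illustrate the direction of this change, a minimal sketch of how an arch fault handler can report user-space faults to generic code (the user-mode predicate shown is a placeholder, not the literal m32r diff):

    /* do_page_fault() sketch: tell the generic mm code the fault came from user space */
    unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

    if (error_code & ACE_USERMODE)          /* placeholder: "fault taken in user mode" */
            flags |= FAULT_FLAG_USER;       /* new flag consumed by memcg OOM handling */

    fault = handle_mm_fault(mm, vma, address, flags);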
2013-07-03  mm/m32r: prepare for killing free_all_bootmem_node()  [Jiang Liu, 1 file, -13/+4]
Prepare for killing free_all_bootmem_node() by using free_all_bootmem(). Signed-off-by: Jiang Liu <[email protected]> Cc: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2013-07-03  mm/m32r: prepare for removing num_physpages and simplify mem_init()  [Jiang Liu, 2 files, -49/+6]
Prepare for removing num_physpages and simplify mem_init(). Signed-off-by: Jiang Liu <[email protected]> Cc: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
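A sketch of the simplified mem_init() this series moves architectures towards (illustrative only; it assumes the generic helpers introduced in the same series and is not the verbatim m32r result):

    void __init mem_init(void)
    {
            set_max_mapnr(get_num_physpages());     /* replaces the open-coded num_physpages math */
            free_all_bootmem();                     /* releases bootmem and updates totalram_pages */
            mem_init_print_info(NULL);              /* replaces the hand-rolled "Memory: ..." printk */
    }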
2013-07-03  mm: concentrate modification of totalram_pages into the mm core  [Jiang Liu, 1 file, -1/+1]
Concentrate code to modify totalram_pages into the mm core, so that arch memory initialization code doesn't need to take care of it. With these changes applied, only the following functions from the mm core modify the global variable totalram_pages: free_bootmem_late(), free_all_bootmem(), free_all_bootmem_node(), adjust_managed_page_count(). With this patch applied, it will be much easier for us to keep totalram_pages and zone->managed_pages consistent. Signed-off-by: Jiang Liu <[email protected]> Acked-by: David Howells <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: "Michael S. Tsirkin" <[email protected]> Cc: <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jeremy Fitzhardinge <[email protected]> Cc: Jianguo Wu <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Kamezawa Hiroyuki <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Michel Lespinasse <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Rusty Russell <[email protected]> Cc: Tang Chen <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Wen Congyang <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yasuaki Ishimatsu <[email protected]> Cc: Yinghai Lu <[email protected]> Cc: Russell King <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2013-07-03  mm: enhance free_reserved_area() to support poisoning memory with zero  [Jiang Liu, 1 file, -2/+2]
Address more review comments from last round of code review. 1) Enhance free_reserved_area() to support poisoning freed memory with pattern '0'. This could be used to get rid of poison_init_mem() on ARM64. 2) A previous patch has disabled memory poison for initmem on s390 by mistake, so restore to the original behavior. 3) Remove redundant PAGE_ALIGN() when calling free_reserved_area(). Signed-off-by: Jiang Liu <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: "Michael S. Tsirkin" <[email protected]> Cc: <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: David Howells <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jeremy Fitzhardinge <[email protected]> Cc: Jianguo Wu <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Kamezawa Hiroyuki <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Michel Lespinasse <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Rusty Russell <[email protected]> Cc: Tang Chen <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Wen Congyang <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yasuaki Ishimatsu <[email protected]> Cc: Yinghai Lu <[email protected]> Cc: Russell King <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2013-07-03  mm: change signature of free_reserved_area() to fix building warnings  [Jiang Liu, 1 file, -1/+1]
Change signature of free_reserved_area() according to Russell King's suggestion to fix following build warnings: arch/arm/mm/init.c: In function 'mem_init': arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default] free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL); ^ In file included from include/linux/mman.h:4:0, from arch/arm/mm/init.c:15: include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *' extern unsigned long free_reserved_area(unsigned long start, unsigned long end, mm/page_alloc.c: In function 'free_reserved_area': >> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default] In file included from arch/mips/include/asm/page.h:49:0, from include/linux/mmzone.h:20, from include/linux/gfp.h:4, from include/linux/mm.h:8, from mm/page_alloc.c:18: arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int' mm/page_alloc.c: In function 'free_area_init_nodes': mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds] Also address some minor code review comments. Signed-off-by: Jiang Liu <[email protected]> Reported-by: Arnd Bergmann <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: "Michael S. Tsirkin" <[email protected]> Cc: <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: David Howells <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jeremy Fitzhardinge <[email protected]> Cc: Jianguo Wu <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Kamezawa Hiroyuki <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Michel Lespinasse <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Rusty Russell <[email protected]> Cc: Tang Chen <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Wen Congyang <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yasuaki Ishimatsu <[email protected]> Cc: Yinghai Lu <[email protected]> Cc: Russell King <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
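The signature change described above, sketched as before/after prototypes (reconstructed from the warnings quoted in the message):

    /* before: integer addresses, so callers passing __va(...) or linker symbols needed casts */
    extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
                                            int poison, char *s);

    /* after: pointer-based, matching how callers actually use it */
    extern unsigned long free_reserved_area(void *start, void *end,
                                            int poison, char *s);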
2013-04-29  mm/m32r: use common help functions to free reserved pages  [Jiang Liu, 1 file, -23/+3]
Use common help functions to free reserved pages. Also include <asm/sections.h> to avoid local declarations. Signed-off-by: Jiang Liu <[email protected]> Cc: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
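What "use common help functions" typically amounts to for an arch like m32r, as a hedged sketch (argument details are illustrative, and free_reserved_area()'s exact signature changed again shortly afterwards, see the 2013-07-03 entries above):

    void free_initmem(void)
    {
            free_initmem_default(0);        /* generic helper frees the __init_begin..__init_end range */
    }

    #ifdef CONFIG_BLK_DEV_INITRD
    void free_initrd_mem(unsigned long start, unsigned long end)
    {
            free_reserved_area(start, end, 0, "initrd");
    }
    #endif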
2012-03-28  Disintegrate asm/system.h for M32R  [David Howells, 2 files, -2/+0]
Disintegrate asm/system.h for M32R. Signed-off-by: David Howells <[email protected]> cc: [email protected]
2011-05-25  m32r, mm: set all online nodes in N_NORMAL_MEMORY  [David Rientjes, 1 file, -0/+1]
For m32r, N_NORMAL_MEMORY represents all nodes that have present memory since it does not support HIGHMEM. This patch sets the bit at the time the node is initialized. If N_NORMAL_MEMORY is not accurate, slub may encounter errors since it uses this nodemask to setup per-cache kmem_cache_node data structures. Signed-off-by: David Rientjes <[email protected]> Cc: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
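The fix itself is essentially a one-liner during node setup; a sketch assuming the node id is at hand:

    node_set_state(nid, N_NORMAL_MEMORY);   /* record that this node has normal memory, so slub sets up kmem_cache_node for it */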
2011-05-25  mm: now that all old mmu_gather code is gone, remove the storage  [Peter Zijlstra, 1 file, -2/+0]
Fold all the mmu_gather rework patches into one for submission Signed-off-by: Peter Zijlstra <[email protected]> Reported-by: Hugh Dickins <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: David Miller <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Russell King <[email protected]> Cc: Paul Mundt <[email protected]> Cc: Jeff Dike <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Tony Luck <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Cc: Mel Gorman <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Cc: Nick Piggin <[email protected]> Cc: Namhyung Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2011-03-31  Fix common misspellings  [Lucas De Marchi, 1 file, -2/+2]
Fixes generated by 'codespell' and manually reviewed. Signed-off-by: Lucas De Marchi <[email protected]>
2010-06-04  m32r: invoke oom-killer from page fault  [Nick Piggin, 1 file, -10/+4]
As explained in commit 1c0fe6e3bd ("mm: invoke oom-killer from page fault"), we want to call the architecture-independent oom killer when getting an unexplained OOM from handle_mm_fault, rather than simply killing current. Signed-off-by: Nick Piggin <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
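The general shape of this conversion across architectures (a sketch of the pattern, not the exact m32r hunk):

    out_of_memory:
            up_read(&mm->mmap_sem);
            if (!user_mode(regs))
                    goto no_context;
            /* instead of killing current directly, defer to the generic OOM killer */
            pagefault_out_of_memory();
            return;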
2010-03-30  include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h  [Tejun Heo, 1 file, -0/+1]
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies. The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of the conversion. http://userweb.kernel.org/~tj/misc/slabh-sweep.py The script does the following:
* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to an implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.
Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch. Signed-off-by: Tejun Heo <[email protected]> Guess-its-ok-by: Christoph Lameter <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Lee Schermerhorn <[email protected]>
2010-02-20  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself  [Russell King, 2 files, -4/+4]
On VIVT ARM, when we have multiple shared mappings of the same file in the same MM, we need to ensure that we have coherency across all copies. We do this via make_coherent() by making the pages uncacheable. This used to work fine, until we allowed highmem with highpte - we now have a page table which is mapped as required, and is not available for modification via update_mmu_cache(). Ralf Baechle suggested getting rid of the PTE value passed to update_mmu_cache(): On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables to construct a pointer to the pte again. Passing a pte_t * is much more elegant. Maybe we might even replace the pte argument with the pte_t? Ben Herrenschmidt would also like the pte pointer for PowerPC: Passing the ptep in there is exactly what I want. I want that -instead- of the PTE value, because I have issue on some ppc cases, for I$/D$ coherency, where set_pte_at() may decide to mask out the _PAGE_EXEC. So, pass in the mapped page table pointer into update_mmu_cache(), and remove the PTE value, updating all implementations and call sites to suit. Includes a fix from Stephen Rothwell: sparc: fix fallout from update_mmu_cache API change Signed-off-by: Stephen Rothwell <[email protected]> Acked-by: Benjamin Herrenschmidt <[email protected]> Signed-off-by: Russell King <[email protected]>
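The resulting interface change, sketched as prototypes:

    /* before: implementations received a PTE value they could not write back */
    void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t pte);

    /* after: implementations receive a pointer into the (possibly highpte) page table */
    void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep);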
2009-10-04  m32r: Fix set_memory() for DISCONTIGMEM  [Hirokazu Takata, 1 file, -1/+4]
In case CONFIG_DISCONTIGMEM is set, the memory size of the system was always determined by CONFIG_MEMORY_SIZE and was not changeable. This patch fixes set_memory() of arch/m32r/mm/discontig.c so that we can specify the memory size with the "mem=<size>" kernel parameter. Signed-off-by: Hirokazu Takata <[email protected]>
2009-10-04  m32r: fix tme_handler  [Hirokazu Takata, 1 file, -4/+8]
Fix pmd_bad check code of tme_handler (TLB Miss Exception handler). The correct _KERNPG_TABLE value is not 0x263(=611) but 0x163. Signed-off-by: Hirokazu Takata <[email protected]>
2009-09-22  arches: drop superfluous casts in nr_free_pages() callers  [Geert Uytterhoeven, 1 file, -1/+1]
Commit 96177299416dbccb73b54e6b344260154a445375 ("Drop free_pages()") modified nr_free_pages() to return 'unsigned long' instead of 'unsigned int'. This made the casts to 'unsigned long' in most callers superfluous, so remove them. [[email protected]: coding-style fixes] Signed-off-by: Geert Uytterhoeven <[email protected]> Reviewed-by: Christoph Lameter <[email protected]> Acked-by: Ingo Molnar <[email protected]> Acked-by: Russell King <[email protected]> Acked-by: David S. Miller <[email protected]> Acked-by: Kyle McMartin <[email protected]> Acked-by: WANG Cong <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: Haavard Skinnemoen <[email protected]> Cc: Mikael Starvik <[email protected]> Cc: "Luck, Tony" <[email protected]> Cc: Hirokazu Takata <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: David Howells <[email protected]> Acked-by: Benjamin Herrenschmidt <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Paul Mundt <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Michal Simek <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2009-06-21  Move FAULT_FLAG_xyz into handle_mm_fault() callers  [Linus Torvalds, 1 file, -1/+1]
This allows the callers to now pass down the full set of FAULT_FLAG_xyz flags to handle_mm_fault(). All callers have been (mechanically) converted to the new calling convention, there's almost certainly room for architectures to clean up their code and then add FAULT_FLAG_RETRY when that support is added. Signed-off-by: Linus Torvalds <[email protected]>
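The mechanical conversion at each call site looks like this (sketch):

    /* before: the last argument was a plain "is this a write fault" integer */
    fault = handle_mm_fault(mm, vma, address, write);

    /* after: callers pass explicit FAULT_FLAG_* bits */
    fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);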
2009-06-16  page allocator: use allocation flags as an index to the zone watermark  [Mel Gorman, 1 file, -3/+3]
ALLOC_WMARK_MIN, ALLOC_WMARK_LOW and ALLOC_WMARK_HIGH determine whether pages_min, pages_low or pages_high is used as the zone watermark when allocating the pages. Two branches in the allocator hotpath determine which watermark to use. This patch uses the flags as an array index into a watermark array that is indexed with WMARK_* defines accessed via helpers. All call sites that use zone->pages_* are updated to use the helpers for accessing the values and the array offsets for setting. Signed-off-by: Mel Gorman <[email protected]> Reviewed-by: Christoph Lameter <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Nick Piggin <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Lee Schermerhorn <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
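The helpers this refers to, roughly as they appear in <linux/mmzone.h> after the change:

    /* zone->pages_min/low/high become one array indexed by the WMARK_* defines */
    #define min_wmark_pages(z)  (z->watermark[WMARK_MIN])
    #define low_wmark_pages(z)  (z->watermark[WMARK_LOW])
    #define high_wmark_pages(z) (z->watermark[WMARK_HIGH])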
2008-09-14  generic: make PFN_PHYS explicitly return phys_addr_t  [Jeremy Fitzhardinge, 1 file, -2/+2]
PFN_PHYS, as its name suggests, turns a pfn into a physical address. However, it is a macro which just operates on its argument without modifying its type. pfns are typed unsigned long, but an unsigned long may not be long enough to hold a physical address (32-bit systems with more than 32 bits of physical address). Make sure we cast to phys_addr_t to return a complete result. Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
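The fix amounts to adding a cast in the macro (sketch of the include/linux/pfn.h change):

    /* before: the result keeps the type of x (typically unsigned long) */
    #define PFN_PHYS(x)     ((x) << PAGE_SHIFT)

    /* after: the result is wide enough for >32-bit physical addresses on 32-bit systems */
    #define PFN_PHYS(x)     ((phys_addr_t)(x) << PAGE_SHIFT)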
2008-07-26  m32r: use generic show_mem()  [Johannes Weiner, 1 file, -36/+0]
Remove arch-specific show_mem() in favor of the generic version. This also removes the following redundant information display: - free swap pages, printed by show_swap_cache_info() - pages in swapcache, printed by show_swap_cache_info() where show_mem() calls show_free_areas(), which calls show_swap_cache_info(). Signed-off-by: Johannes Weiner <[email protected]> Cc: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-07-24  bootmem: replace node_boot_start in struct bootmem_data  [Johannes Weiner, 2 files, -5/+2]
Almost all users of this field need a PFN instead of a physical address, so replace node_boot_start with node_min_pfn. [[email protected]: fix spurious BUG_ON() in mark_bootmem()] Signed-off-by: Johannes Weiner <[email protected]> Cc: <[email protected]> Signed-off-by: Lee Schermerhorn <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-07-24  mm: drop unneeded pgdat argument from free_area_init_node()  [Johannes Weiner, 2 files, -3/+2]
free_area_init_node() gets passed in the node id as well as the node descriptor. This is redundant as the function can trivially get the node descriptor itself by means of NODE_DATA() and the node's id. I checked all the users and NODE_DATA() seems to be usable everywhere from where this function is called. Signed-off-by: Johannes Weiner <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-07-24  mm: move bootmem descriptors definition to a single place  [Johannes Weiner, 1 file, -3/+1]
There are a lot of places that define either a single bootmem descriptor or an array of them. Use only one central array with MAX_NUMNODES items instead. Signed-off-by: Johannes Weiner <[email protected]> Acked-by: Ralf Baechle <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Russell King <[email protected]> Cc: Tony Luck <[email protected]> Cc: Hirokazu Takata <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Kyle McMartin <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Paul Mundt <[email protected]> Cc: David S. Miller <[email protected]> Cc: Yinghai Lu <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Andy Whitcroft <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-02-07  Introduce flags for reserve_bootmem()  [Bernhard Walle, 1 file, -2/+3]
This patchset adds a flags variable to reserve_bootmem() and uses the BOOTMEM_EXCLUSIVE flag in crashkernel reservation code to detect collisions between crashkernel area and already used memory. This patch: Change the reserve_bootmem() function to accept a new flag BOOTMEM_EXCLUSIVE. If that flag is set, the function returns with -EBUSY if the memory already has been reserved in the past. This is to avoid conflicts. Because that code runs before SMP initialisation, there's no race condition inside reserve_bootmem_core(). [[email protected]: coding-style fixes] [[email protected]: fix powerpc build] Signed-off-by: Bernhard Walle <[email protected]> Cc: <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: Vivek Goyal <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-10-19  pid namespaces: define is_global_init() and is_container_init()  [Serge E. Hallyn, 1 file, -1/+1]
is_init() is an ambiguous name for the pid==1 check. Split it into is_global_init() and is_container_init(). A cgroup init has its tsk->pid == 1. A global init also has its tsk->pid == 1 and its active pid namespace is the init_pid_ns. But rather than check the active pid namespace, compare the task structure with 'init_pid_ns.child_reaper', which is initialized during boot to the /sbin/init process and never changes. Changelog: 2.6.22-rc4-mm2-pidns1: - Use 'init_pid_ns.child_reaper' to determine if a given task is the global init (/sbin/init) process. This would improve performance and remove dependence on the task_pid(). 2.6.21-mm2-pidns2: - [Sukadev Bhattiprolu] Changed is_container_init() calls in {powerpc, ppc,avr32}/traps.c for the _exception() call to is_global_init(). This way, we kill only the cgroup if the cgroup's init has a bug rather than force a kernel panic. [[email protected]: fix comment] [[email protected]: Use is_global_init() in arch/m32r/mm/fault.c] [[email protected]: kernel/pid.c: remove unused exports] [[email protected]: Fix capability.c to work with threaded init] Signed-off-by: Serge E. Hallyn <[email protected]> Signed-off-by: Sukadev Bhattiprolu <[email protected]> Acked-by: Pavel Emelianov <[email protected]> Cc: Eric W. Biederman <[email protected]> Cc: Cedric Le Goater <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Herbert Poetzel <[email protected]> Cc: Kirill Korotaev <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
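For arch/m32r/mm/fault.c this is a call-site rename; a sketch of the kind of check involved (the surrounding OOM-retry code is illustrative, not quoted from the m32r source):

    /* OOM path in do_page_fault(): never kill the global init */
    if (is_global_init(tsk)) {      /* was: is_init(tsk) */
            yield();
            goto survive;           /* retry the fault instead of killing init */
    }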
2007-10-16  During VM oom condition, kill all threads in process group  [Will Schmidt, 1 file, -1/+1]
We have had complaints where a threaded application is left in a bad state after one of its threads is killed when we hit a VM: out_of_memory condition. Killing just one of the process threads can leave the application in a bad state, whereas killing the entire process group would allow for the application to restart, or be otherwise handled, and makes it very obvious that something has gone wrong. This change allows the entire process group to be taken down, rather than just the one thread. Signed-off-by: Will Schmidt <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: Russell King <[email protected]> Cc: Ian Molton <[email protected]> Cc: Haavard Skinnemoen <[email protected]> Cc: Mikael Starvik <[email protected]> Cc: David Howells <[email protected]> Cc: Andi Kleen <[email protected]> Cc: "Luck, Tony" <[email protected]> Cc: Hirokazu Takata <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Roman Zippel <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Kyle McMartin <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Paul Mundt <[email protected]> Cc: Kazumoto Kojima <[email protected]> Cc: Richard Curnow <[email protected]> Cc: William Lee Irwin III <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Chris Zankel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-07-19  mm: fault feedback #2  [Nick Piggin, 1 file, -12/+11]
This patch completes Linus's wish that the fault return codes be made into bit flags, which I agree makes everything nicer. This requires all handle_mm_fault callers to be modified (possibly the modifications should go further and do things like fault accounting in handle_mm_fault -- however that would be for another patch). [[email protected]: fix alpha build] [[email protected]: fix s390 build] [[email protected]: fix sparc build] [[email protected]: fix sparc64 build] [[email protected]: fix ia64 build] Signed-off-by: Nick Piggin <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: Russell King <[email protected]> Cc: Ian Molton <[email protected]> Cc: Bryan Wu <[email protected]> Cc: Mikael Starvik <[email protected]> Cc: David Howells <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: "Luck, Tony" <[email protected]> Cc: Hirokazu Takata <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Roman Zippel <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Paul Mundt <[email protected]> Cc: Kazumoto Kojima <[email protected]> Cc: Richard Curnow <[email protected]> Cc: William Lee Irwin III <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Jeff Dike <[email protected]> Cc: Paolo 'Blaisorblade' Giarrusso <[email protected]> Cc: Miles Bader <[email protected]> Cc: Chris Zankel <[email protected]> Acked-by: Kyle McMartin <[email protected]> Acked-by: Haavard Skinnemoen <[email protected]> Acked-by: Ralf Baechle <[email protected]> Acked-by: Andi Kleen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> [ Still apparently needs some ARM and PPC loving - Linus ] Signed-off-by: Linus Torvalds <[email protected]>
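After this change an arch fault handler checks bit flags rather than comparing magic return values; the canonical converted block looks roughly like this (sketch):

    fault = handle_mm_fault(mm, vma, address, write);
    if (unlikely(fault & VM_FAULT_ERROR)) {
            if (fault & VM_FAULT_OOM)
                    goto out_of_memory;
            else if (fault & VM_FAULT_SIGBUS)
                    goto do_sigbus;
            BUG();
    }
    if (fault & VM_FAULT_MAJOR)
            tsk->maj_flt++;     /* major fault: had to do I/O */
    else
            tsk->min_flt++;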
2007-05-11  m32r: fix tme_handler to check _PAGE_PRESENT bit  [Hirokazu Takata, 1 file, -9/+13]
Fix the tlb-miss handler (tme_handler) to check _PAGE_PRESENT bit in order to handle file-mapped or swapped-out pages correctly. This patch is required to fix unexpected page errors for m32r. Signed-off-by: Hitoshi Yamamoto <[email protected]> Signed-off-by: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-05-08  header cleaning: don't include smp_lock.h when not used  [Randy Dunlap, 2 files, -2/+0]
Remove includes of <linux/smp_lock.h> where it is not used/needed. Suggested by Al Viro. Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc, sparc64, and arm (all 59 defconfigs). Signed-off-by: Randy Dunlap <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-02-11  [PATCH] Consolidate bust_spinlocks()  [Kirill Korotaev, 2 files, -52/+0]
Part of long forgotten patch http://groups.google.com/group/fa.linux.kernel/msg/e98e941ce1cf29f6?dmode=source Since then, m32r grabbed two copies. Leave s390 copy because of important absence of CONFIG_VT, but remove references to non-existent timerlist_lock. ia64 also loses timerlist_lock. Signed-off-by: Alexey Dobriyan <[email protected]> Acked-by: Martin Schwidefsky <[email protected]> Cc: Andi Kleen <[email protected]> Cc: "Luck, Tony" <[email protected]> Cc: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-02-11  [PATCH] m32r: cosmetic updates and trivial fixes  [Hirokazu Takata, 2 files, -8/+2]
Cosmetic updates and trivial fixes of m32r arch-dependent files. - Remove RCS ID strings and trailing white lines - Other misc. cosmetic updates Signed-off-by: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-02-11  [PATCH] m32r: fix do_page_fault and update_mmu_cache  [Hirokazu Takata, 1 file, -21/+19]
Fix do_page_fault and update_mmu_cache. * Fix do_page_fault (vmalloc_fault:) to pass error_code correctly to update_mmu_cache by using a thread-fault code for all m32r chips. * Fix update_mmu_cache for the OPSP chip - the #ifdef CONFIG_CHIP_OPSP portion is a workaround for OPSP; add a notfound-case operation to update_mmu_cache for OPSP like the other m32r chips. - Fix pte_data, which was not initialized if no entry was found. Signed-off-by: Kazuhiro Inaoka <[email protected]> Signed-off-by: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-12-08  [PATCH] m32r: fix ace_handler to pass full 32-bit address  [Hirokazu Takata, 1 file, -2/+2]
Don't mask the lower 12 bits of the page fault address. In the current m32r kernel implementation, we use an access exception to detect page faults. This patch fixes ace_handler (the access exception handler) for m32r. In order to check userspace addresses in do_page_fault, we have to pass the full 32-bit address to do_page_fault. Signed-off-by: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-12-07  [PATCH] initrd: remove unused false condition for initrd_start  [Henry Nestler, 1 file, -3/+1]
After LOADER_TYPE && INITRD_START are true, the short if-condition for INITRD_START can never be false. Remove unused code from the else condition. Signed-off-by: Henry Nestler <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-10-04  Remove all inclusions of <linux/config.h>  [Dave Jones, 1 file, -1/+0]
kbuild explicitly includes this at build time. Signed-off-by: Dave Jones <[email protected]>
2006-10-01  [PATCH] Generic ioremap_page_range: m32r conversion  [Haavard Skinnemoen, 1 file, -86/+7]
Convert m32r to use generic ioremap_page_range() Signed-off-by: Haavard Skinnemoen <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-09-29  [PATCH] pidspace: is_init()  [Sukadev Bhattiprolu, 1 file, -1/+1]
This is an updated version of Eric Biederman's is_init() patch (http://lkml.org/lkml/2006/2/6/280). It applies cleanly to 2.6.18-rc3 and replaces a few more instances of ->pid == 1 with is_init(). Further, is_init() checks the pid and thus removes the dependency on Eric's other patches for now. Eric's original description: There are a lot of places in the kernel where we test for init because we give it special properties. Most significantly, init must not die. This results in code all over the kernel testing ->pid == 1. Introduce is_init() to capture this case. With multiple pid spaces, for all of the cases affected we are looking for only the first process on the system, not some other process that has pid == 1. Signed-off-by: Eric W. Biederman <[email protected]> Signed-off-by: Sukadev Bhattiprolu <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Serge Hallyn <[email protected]> Cc: Cedric Le Goater <[email protected]> Cc: <[email protected]> Acked-by: Paul Mackerras <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
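The helper introduced here is a thin wrapper over the old open-coded check; a sketch of the definition:

    static inline int is_init(struct task_struct *tsk)
    {
            return tsk->pid == 1;   /* replaces scattered "tsk->pid == 1" tests */
    }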
2006-09-26  [PATCH] reduce MAX_NR_ZONES: fix MAX_NR_ZONES array initializations  [Christoph Lameter, 1 file, -1/+1]
Fix array initialization in lots of arches. The number of zones may now be reduced from 4 to 2 for many arches. Fix the array initialization for the zones array for all architectures so that it does not initialize a fixed number of elements. Signed-off-by: Christoph Lameter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-06-30  Remove obsolete #include <linux/config.h>  [Jörn Engel, 4 files, -4/+0]
Signed-off-by: Jörn Engel <[email protected]> Signed-off-by: Adrian Bunk <[email protected]>
2006-06-30  typo fixes: occuring -> occurring  [Adrian Bunk, 1 file, -1/+1]
Signed-off-by: Adrian Bunk <[email protected]>
2006-03-27  [PATCH] unify PFN_* macros  [Dave Hansen, 2 files, -0/+2]
Just about every architecture defines some macros to do operations on pfns. They're all virtually identical. This patch consolidates all of them. One minor glitch is that at least i386 uses them in a very skeletal header file. To keep away from #include dependency hell, I stuck the new definitions in a new, isolated header. Of all of the implementations, sh64 is the only one that varied a bit. It used some masks to ensure that any sign-extension got ripped away before the arithmetic is done. This has been posted to the sh64 maintainers and the development list. Compiles on x86, x86_64, ia64 and ppc64. Signed-off-by: Dave Hansen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
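The consolidated macros, roughly as introduced in the new <linux/pfn.h>:

    #define PFN_ALIGN(x)    (((unsigned long)(x) + (PAGE_SIZE - 1)) & PAGE_MASK)
    #define PFN_UP(x)       (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
    #define PFN_DOWN(x)     ((x) >> PAGE_SHIFT)
    #define PFN_PHYS(x)     ((x) << PAGE_SHIFT)     /* later changed to return phys_addr_t, see the 2008-09-14 entry above */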
2006-03-27  [PATCH] for_each_online_pgdat: remove sorting pgdat  [KAMEZAWA Hiroyuki, 1 file, -6/+0]
Because nodes were linked into pgdat_list in *reverse* order (by default), some arches had to sort the list themselves. for_each_pgdat() has gone; for_each_online_pgdat() uses node_online_map, which doesn't need to be sorted. This patch removes the code for sorting pgdat. Signed-off-by: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-03-27  [PATCH] for_each_online_pgdat: renaming for_each_pgdat  [KAMEZAWA Hiroyuki, 1 file, -1/+1]
Replace for_each_pgdat() with for_each_online_pgdat(). Signed-off-by: KAMEZAWA Hiroyuki <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-03-22  [PATCH] remove set_page_count() outside mm/  [Nick Piggin, 1 file, -2/+2]
set_page_count usage outside mm/ is limited to setting the refcount to 1. Remove set_page_count from outside mm/, and replace those users with init_page_count() and set_page_refcounted(). This allows more debug checking, and tighter control on how code is allowed to play around with page->_count. Signed-off-by: Nick Piggin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
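Outside mm/ the conversion is a straight substitution (sketch):

    /* before */
    set_page_count(page, 1);

    /* after: same effect, but mm/ keeps control over how page->_count is manipulated */
    init_page_count(page);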
2006-01-06  [PATCH] m32r: Fix M32104 cache flushing routines  [Hirokazu Takata, 1 file, -7/+21]
This patch fixes cache memory parameter setting for the M32104 target. Until now, its performance seemed to have been degraded due to an incorrect cache parameter setting. * arch/m32r/boot/setup.S: Set the SFR (Special Function Registers) region to be explicitly non-cacheable. * arch/m32r/mm/cache.c: Fix the cache flushing routines not to switch off the M32104 cache. Signed-off-by: Hayato Fujiwara <[email protected]> Signed-off-by: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-01-06  [PATCH] m32r: Support M32104UT target platform  [Hirokazu Takata, 1 file, -0/+10]
This patch adds support for a new target platform, the Renesas M32104UT evaluation board. The M32104UT is an eval board based on the uT-Engine specification. This board has an MMU-less M32R family processor, the M32104. http://www-wa0.personal-media.co.jp/pmc/archive/te/te_m32104_e.pdf This board is one of the most popular M32R platforms, so we have ported Linux/M32R to it. Signed-off-by: Naoto Sugai <[email protected]> Signed-off-by: Hirokazu Takata <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2005-10-29  [PATCH] memory hotplug locking: node_size_lock  [Dave Hansen, 1 file, -1/+8]
pgdat->node_size_lock is basically only needed in one place in the normal code: show_mem(), which is the arch-specific sysrq-m printing function. Strictly speaking, the architectures not doing memory hotplug do not need this locking in show_mem(). However, they are all included for completeness. This should also make any future consolidation of all of the implementations a little more straightforward. This lock is also held in the sparsemem code during a memory removal, as sections are invalidated. This is the place where pfn_valid() is made false for a memory area that's being removed. The lock is only required when doing pfn_valid() operations on memory for which the user does not already hold a reference on the page, such as in show_mem(). Signed-off-by: Dave Hansen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2005-10-29  [PATCH] mm: init_mm without ptlock  [Hugh Dickins, 1 file, -3/+1]
First step in pushing down the page_table_lock. init_mm.page_table_lock has been used throughout the architectures (usually for ioremap): not to serialize kernel address space allocation (that's usually vmlist_lock), but because pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it. Reverse that: don't lock or unlock init_mm.page_table_lock in any of the architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take and drop it when allocating a new one, to check lest a racing task already did. Similarly no page_table_lock in vmalloc's map_vm_area. Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle user mms, which are converted only by a later patch, for now they have to lock differently according to whether or not it's init_mm. If sources get muddled, there's a danger that an arch source taking init_mm.page_table_lock will be mixed with common source also taking it (or neither take it). So break the rules and make another change, which should break the build for such a mismatch: remove the redundant mm arg from pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13). Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64 used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64 map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free took page_table_lock for no good reason. Signed-off-by: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
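For callers such as the arch ioremap code, the visible change is the dropped mm argument and the dropped explicit locking (sketch):

    /* before: caller locked init_mm.page_table_lock and passed the mm explicitly */
    pte = pte_alloc_kernel(&init_mm, pmd, addr);

    /* after: the helper takes the lock itself only when it really allocates */
    pte = pte_alloc_kernel(pmd, addr);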
2005-08-23  [PATCH] missing exports on m32r  [Al Viro, 1 file, -0/+2]
missing exports on m32r Signed-off-by: Al Viro <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>