path: root/arch/arm/mm/init.c
Age | Commit message | Author | Files | Lines
2020-12-22 | Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm | Linus Torvalds | 1 | -1/+0
Pull ARM updates from Russell King: - Rework phys/virt translation - Add KASan support - Move DT out of linear map region - Use more PC-relative addressing in assembly - Remove FP emulation handling while in kernel mode - Link with '-z norelro' - remove old check for GCC <= 4.2 in ARM unwinder code - disable big endian if using clang's linker * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (46 commits) ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section ARM: 9038/1: Link with '-z norelro' ARM: 9037/1: uncompress: Add OF_DT_MAGIC macro ARM: 9036/1: uncompress: Fix dbgadtb size parameter name ARM: 9035/1: uncompress: Add be32tocpu macro ARM: 9033/1: arm/smp: Drop the macro S(x,s) ARM: 9032/1: arm/mm: Convert PUD level pgtable helper macros into functions ARM: 9031/1: hyp-stub: remove unused .L__boot_cpu_mode_offset symbol ARM: 9044/1: vfp: use undef hook for VFP support detection ARM: 9034/1: __div64_32(): straighten up inline asm constraints ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode ARM: 9029/1: Make iwmmxt.S support Clang's integrated assembler ARM: 9028/1: disable KASAN in call stack capturing routines ARM: 9026/1: unwind: remove old check for GCC <= 4.2 ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN depends on !LD_IS_LLD ARM: 9024/1: Drop useless cast of "u64" to "long long" ARM: 9023/1: Spelling s/mmeory/memory/ ARM: 9022/1: Change arch/arm/lib/mem*.S to use WEAK instead of .weak ARM: kvm: replace open coded VA->PA calculations with adr_l call ARM: head.S: use PC relative insn sequence to calculate PHYS_OFFSET ...
2020-12-15 | arm, arm64: move free_unused_memmap() to generic mm | Mike Rapoport | 1 | -78/+0
ARM and ARM64 free unused parts of the memory map just before the initialization of the page allocator. To allow holes in the memory map both architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID. Allowing holes in the memory map for FLATMEM may be useful for small machines, such as ARC and m68k and will enable those architectures to cease using DISCONTIGMEM and still support more than one memory bank. Move the functions that free unused memory map to generic mm and enable them in case HAVE_ARCH_PFN_VALID=y. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Catalin Marinas <[email protected]> [arm64] Cc: Alexey Dobriyan <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: John Paul Adrian Glaubitz <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Matt Turner <[email protected]> Cc: Meelis Roos <[email protected]> Cc: Michael Schmitz <[email protected]> Cc: Russell King <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-11-04 | ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations | Ard Biesheuvel | 1 | -2/+2
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages") it rounded the beginning of each region upwards and the end of each region downwards. However, after that commit free_highpages() rounds the beginning and end of each region downwards, and we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. Restore the original rounding of the region boundaries to avoid freeing reserved pages. Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages") Link: https://lore.kernel.org/r/[email protected]/ Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Ard Biesheuvel <[email protected]> Co-developed-by: Mike Rapoport <[email protected]> Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Max Filippov <[email protected]>
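To illustrate the restored behaviour, the per-range handling looks roughly like this (a sketch with illustrative variable names, not the literal hunk): the start of each free range is rounded up to a page boundary and the end rounded down, so a partially reserved page at either edge is never handed to the page allocator.
    /* Only free pages that lie entirely within the free memblock range. */
    unsigned long start_pfn = PFN_UP(range_start);    /* round start upwards */
    unsigned long end_pfn = PFN_DOWN(range_end);      /* round end downwards */

    for (; start_pfn < end_pfn; start_pfn++)
            free_highmem_page(pfn_to_page(start_pfn));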
2020-10-27 | ARM: 9012/1: move device tree mapping out of linear region | Ard Biesheuvel | 1 | -1/+0
On ARM, setting up the linear region is tricky, given the constraints around placement and alignment of the memblocks, and how the kernel itself as well as the DT are placed in physical memory. Let's simplify matters a bit, by moving the device tree mapping to the top of the address space, right between the end of the vmalloc region and the start of the fixmap region, and create a read-only mapping for it that is independent of the size of the linear region, and how it is organized. Since this region was formerly used as a guard region, which will now be populated fully on LPAE builds by this read-only mapping (which will still be able to function as a guard region for stray writes), bump the start of the [underutilized] fixmap region by 512 KB as well, to ensure that there is always a proper guard region here. Doing so still leaves ample room for the fixmap space, even with NR_CPUS set to its maximum value of 32. Tested-by: Linus Walleij <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Russell King <[email protected]>
2020-10-15 | Merge tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 1 | -1/+1
Pull dma-mapping updates from Christoph Hellwig: - rework the non-coherent DMA allocator - move private definitions out of <linux/dma-mapping.h> - lower CMA_ALIGNMENT (Paul Cercueil) - remove the omap1 dma address translation in favor of the common code - make dma-direct aware of multiple dma offset ranges (Jim Quinlan) - support per-node DMA CMA areas (Barry Song) - increase the default seg boundary limit (Nicolin Chen) - misc fixes (Robin Murphy, Thomas Tai, Xu Wang) - various cleanups * tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits) ARM/ixp4xx: add a missing include of dma-map-ops.h dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling dma-direct: factor out a dma_direct_alloc_from_pool helper dma-direct check for highmem pages in dma_direct_alloc_pages dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h> dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma dma-mapping: move dma-debug.h to kernel/dma/ dma-mapping: remove <asm/dma-contiguous.h> dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h> dma-contiguous: remove dma_contiguous_set_default dma-contiguous: remove dev_set_cma_area dma-contiguous: remove dma_declare_contiguous dma-mapping: split <linux/dma-mapping.h> cma: decrease CMA_ALIGNMENT lower limit to 2 firewire-ohci: use dma_alloc_pages dma-iommu: implement ->alloc_noncoherent dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods dma-mapping: add a new dma_alloc_pages API dma-mapping: remove dma_cache_sync 53c700: convert to dma_alloc_noncoherent ...
2020-10-13 | arch, mm: replace for_each_memblock() with for_each_mem_pfn_range() | Mike Rapoport | 1 | -7/+4
There are several occurrences of the following pattern: for_each_memblock(memory, reg) { start_pfn = memblock_region_memory_base_pfn(reg); end_pfn = memblock_region_memory_end_pfn(reg); /* do something with start_pfn and end_pfn */ } Rather than iterate over all memblock.memory regions and each time query for their start and end PFNs, use for_each_mem_pfn_range() iterator to get simpler and clearer code. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Baoquan He <[email protected]> Acked-by: Miguel Ojeda <[email protected]> [.clang-format] Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Daniel Axtens <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Emil Renner Berthing <[email protected]> Cc: Hari Bathini <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jonathan Cameron <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
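For illustration, the converted loop looks roughly like this (a sketch, not a literal hunk from the series); the iterator hands back the PFN bounds directly, so the per-region conversion helpers are no longer needed:
    unsigned long start_pfn, end_pfn;
    int i;

    /* Walk memblock.memory, receiving PFN bounds for each region. */
    for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
            /* do something with start_pfn and end_pfn */
    }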
2020-10-13 | arm, xtensa: simplify initialization of high memory pages | Mike Rapoport | 1 | -40/+8
free_highpages() in both arm and xtensa essentially open-code for_each_free_mem_range() loop to detect high memory pages that were not reserved and that should be initialized and passed to the buddy allocator. Replace open-coded implementation of for_each_free_mem_range() with usage of memblock API to simplify the code. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Tested-by: Max Filippov <[email protected]> [xtensa] Reviewed-by: Max Filippov <[email protected]> [xtensa] Cc: Andy Lutomirski <[email protected]> Cc: Baoquan He <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Daniel Axtens <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Emil Renner Berthing <[email protected]> Cc: Hari Bathini <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jonathan Cameron <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-10-06 | dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h> | Christoph Hellwig | 1 | -1/+1
Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c. Signed-off-by: Christoph Hellwig <[email protected]>
2020-08-07 | mm/sparse: cleanup the code surrounding memory_present() | Mike Rapoport | 1 | -7/+2
After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP we have two equivalent functions that call memory_present() for each region in memblock.memory: sparse_memory_present_with_active_regions() and memblocks_present(). Moreover, all architectures have a call to either of these functions preceding the call to sparse_init() and in most cases they are called one after the other. Mark the regions from memblock.memory as present during sparse_init() by making sparse_init() call memblocks_present(), make memblocks_present() and memory_present() functions static and remove the redundant sparse_memory_present_with_active_regions() function. Also remove the no longer required HAVE_MEMORY_PRESENT configuration option. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04 | arm: add support for folded p4d page tables | Mike Rapoport | 1 | -1/+1
Implement primitives necessary for the 4th level folding, add walks of p4d level where appropriate, and remove __ARCH_USE_5LEVEL_HACK. [[email protected]: fix kexec] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Tested-by: Marek Szyprowski <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Fenghua Yu <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: James Morse <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Julien Thierry <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Stefan Kristiansson <[email protected]> Cc: Suzuki K Poulose <[email protected]> Cc: Tony Luck <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-03 | arm: simplify detection of memory zone boundaries | Mike Rapoport | 1 | -59/+7
free_area_init() only requires the definition of the maximal PFN for each of the supported zones rather than the calculation of actual zone sizes and the sizes of the holes between the zones. After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Tested-by: Hoan Tran <[email protected]> [arm64] Cc: Baoquan He <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
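On ARM the resulting zone setup then reduces to roughly the following sketch (assuming the usual ZONE_NORMAL/ZONE_HIGHMEM split; not the literal patch):
    static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
                                       unsigned long max_high)
    {
            unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

            max_zone_pfn[ZONE_NORMAL] = max_low;
    #ifdef CONFIG_HIGHMEM
            max_zone_pfn[ZONE_HIGHMEM] = max_high;
    #endif
            free_area_init(max_zone_pfn);
    }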
2020-01-25 | ARM: 8949/1: mm: mark free_memmap as __init | Olof Johansson | 1 | -1/+1
As of commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly"), free_memmap() might not always be inlined, and thus is triggering a section warning: WARNING: vmlinux.o(.text.unlikely+0x904): Section mismatch in reference from the function free_memmap() to the function .meminit.text:memblock_free() Mark it as __init, since the caller (free_unused_memmap()) already is. Fixes: ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly") Signed-off-by: Olof Johansson <[email protected]> Signed-off-by: Russell King <[email protected]>
2019-10-27 | ARM: 8917/1: mm: include <asm/set_memory.h> | Ben Dooks (Codethink) | 1 | -0/+1
set_kernel_text_rw() and set_kernel_text_ro() are declared in <asm/set_memory.h>, but this header is not included by init.c, which defines these functions. Silence the following warnings by including the <asm/set_memory.h> header. arch/arm/mm/init.c:669:6: warning: symbol 'set_kernel_text_rw' was not declared. Should it be static? arch/arm/mm/init.c:678:6: warning: symbol 'set_kernel_text_ro' was not declared. Should it be static? Signed-off-by: Ben Dooks <[email protected]> Signed-off-by: Russell King <[email protected]>
2019-10-27 | ARM: 8916/1: mm: make set_section_perms() static | Ben Dooks (Codethink) | 1 | -2/+2
The set_section_perms() is not defined outside of the init.c file, so make it static to avoid the following warning: arch/arm/mm/init.c:596:6: warning: symbol 'set_section_perms' was not declared. Should it be static? Signed-off-by: Ben Dooks <[email protected]> Signed-off-by: Russell King <[email protected]>
2019-10-27 | ARM: 8907/1: arch: reuse addr variable in pfn_valid | Clemens Gruber | 1 | -1/+1
Avoid calling __pfn_to_phys twice. Signed-off-by: Clemens Gruber <[email protected]> Signed-off-by: Russell King <[email protected]>
2019-08-28 | ARM: 8901/1: add a criteria for pfn_valid of arm | zhaoyang | 1 | -0/+5
pfn_valid() can be wrong when parsing an invalid pfn whose physical address exceeds BITS_PER_LONG bits, as the MSB will be trimmed when shifted. The issue originally arises from the call stack below, which corresponds to an access of /proc/kpageflags from userspace with an invalid pfn parameter, and leads to a kernel panic.
[46886.723249] c7 [<c031ff98>] (stable_page_flags) from [<c03203f8>]
[46886.723264] c7 [<c0320368>] (kpageflags_read) from [<c0312030>]
[46886.723280] c7 [<c0311fb0>] (proc_reg_read) from [<c02a6e6c>]
[46886.723290] c7 [<c02a6e24>] (__vfs_read) from [<c02a7018>]
[46886.723301] c7 [<c02a6f74>] (vfs_read) from [<c02a778c>]
[46886.723315] c7 [<c02a770c>] (SyS_pread64) from [<c0108620>] (ret_fast_syscall+0x0/0x28)
Signed-off-by: Zhaoyang Huang <[email protected]> Signed-off-by: Russell King <[email protected]>
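The added criterion can be pictured as a round-trip check (a sketch of the idea rather than the exact patch): if converting the pfn to a physical address and back does not reproduce the same pfn, the upper bits were truncated and the pfn cannot be valid.
    int pfn_valid(unsigned long pfn)
    {
            phys_addr_t addr = __pfn_to_phys(pfn);

            /* Reject pfns whose physical address overflowed and was truncated. */
            if (__phys_to_pfn(addr) != pfn)
                    return 0;

            return memblock_is_map_memory(addr);
    }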
2019-08-23 | ARM: 8874/1: mm: only adjust sections of valid mm structures | Doug Berger | 1 | -1/+2
A timing hazard exists when an early fork/exec thread begins exiting and sets its mm pointer to NULL while a separate core tries to update the section information. This commit ensures that the mm pointer is not NULL before setting its section parameters. The arguments provided by commit 11ce4b33aedc ("ARM: 8672/1: mm: remove tasklist locking from update_sections_early()") are equally valid for not requiring grabbing the task_lock around this check. Fixes: 08925c2f124f ("ARM: 8464/1: Update all mm structures with section adjustments") Signed-off-by: Doug Berger <[email protected]> Acked-by: Laura Abbott <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Florian Fainelli <[email protected]> Cc: Rob Herring <[email protected]> Cc: "Steven Rostedt (VMware)" <[email protected]> Cc: Peng Fan <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Signed-off-by: Russell King <[email protected]>
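A sketch of the guarded walk (illustrative, following the shape of the existing update_sections_early()): the thread's mm is only used when it is non-NULL, so an early thread that has already passed exit_mm() is simply skipped.
    struct task_struct *t, *s;

    for_each_process(t) {
            if (t->flags & PF_KTHREAD)
                    continue;
            for_each_thread(t, s)
                    if (s->mm)      /* skip threads whose mm is already gone */
                            set_section_perms(perms, n, true, s->mm);
    }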
2019-07-24 | arm: use swiotlb for bounce buffering on LPAE configs | Christoph Hellwig | 1 | -0/+5
The DMA API requires that 32-bit DMA masks are always supported, but on arm LPAE configs they do not currently work when memory is present above 4GB. Wire up the swiotlb code like for all other architectures to provide the bounce buffering in that case. Fixes: 21e07dba9fb11 ("scsi: reduce use of block bounce buffers"). Reported-by: Roger Quadros <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]> Tested-by: Vignesh Raghavendra <[email protected]>
2019-07-08 | Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm | Linus Torvalds | 1 | -6/+16
Pull ARM updates from Russell King: - Add a "cut here" to make it clearer where oops dumps should be cut from - we already have a marker for the end of the dumps. - Add logging severity to show_pte() - Drop unnecessary common-page-size linker flag - Errata workarounds for Cortex A12 857271, Cortex A17 857272 and Cortex A7 814220. - Remove some unused variables that had started to provoke a compiler warning. * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: ARM: 8863/1: stm32: select ARM errata 814220 ARM: 8862/1: errata: 814220-B-Cache maintenance by set/way operations can execute out of order ARM: 8865/1: mm: remove unused variables ARM: 8864/1: Add workaround for I-Cache line size mismatch between CPU cores ARM: 8861/1: errata: Workaround errata A12 857271 / A17 857272 ARM: 8860/1: VDSO: Drop implicit common-page-size linker flag ARM: arrange show_pte() to issue severity-based messages ARM: add "8<--- cut here ---" to kernel dumps
2019-06-20 | ARM: 8865/1: mm: remove unused variables | YueHaibing | 1 | -6/+0
Fix gcc warnings: arch/arm/mm/init.c: In function 'mem_init': arch/arm/mm/init.c:456:13: warning: unused variable 'itcm_end' [-Wunused-variable] extern u32 itcm_end; ^ arch/arm/mm/init.c:455:13: warning: unused variable 'dtcm_end' [-Wunused-variable] extern u32 dtcm_end; ^ They are not used any more since commit 1c31d4e96b8c ("ARM: 8820/1: mm: Stop printing the virtual memory layout") Link: https://lkml.org/lkml/2019/5/12/82 Signed-off-by: YueHaibing <[email protected]> Reviewed-by: Geert Uytterhoeven <[email protected]> Reviewed-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Russell King <[email protected]>
2019-06-20 | ARM: 8864/1: Add workaround for I-Cache line size mismatch between CPU cores | Marek Szyprowski | 1 | -0/+16
Some big.LITTLE systems have an I-Cache line size mismatch between the LITTLE and big cores. This patch adds a workaround for proper I-Cache support on such systems. Without it, some classes of userspace code (typically self-modifying) might suffer from random SIGILL failures. A similar workaround already exists for the ARM64 architecture; it was added by commit 116c81f427ff ("arm64: Work around systems with mismatched cache line sizes"). Signed-off-by: Marek Szyprowski <[email protected]> Signed-off-by: Russell King <[email protected]>
2019-06-19 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 | Thomas Gleixner | 1 | -4/+1
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation # extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-16 | Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm | Linus Torvalds | 1 | -16/+1
Pull ARM updates from Russell King: "ARM development updates: - more unified assembly conversions for clang - drop obsolete -mauto-it assembler option - remove arm_memory_present in preference to the generic version - remove unused asm/limits.h header - vdso linker update We tried to make the assembler warn if unified syntax was not used, but unfortunately older versions of GCC warn, so the commit had to be reverted" * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: Revert "ARM: 8846/1: warn if divided syntax assembler is used" ARM: 8858/1: vdso: use $(LD) instead of $(CC) to link VDSO ARM: 8855/1: remove unused <asm/limits.h> ARM: 8850/1: use memblocks_present ARM: 8854/1: drop -mauto-it ARM: 8846/1: warn if divided syntax assembler is used ARM: 8853/1: drop WASM to work around LLVM issue ARM: 8852/1: uaccess: use unified assembler language syntax ARM: 8851/1: add TUSERCOND() macro for conditional postfix
2019-05-14 | initramfs: move the legacy keepinitrd parameter to core code | Christoph Hellwig | 1 | -19/+6
No need to handle the freeing disable in arch code when we already have a core hook (and a different name for the option) for it. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Catalin Marinas <[email protected]> [arm64] Acked-by: Mike Rapoport <[email protected]> Cc: Geert Uytterhoeven <[email protected]> [m68k] Cc: Steven Price <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-04-23 | ARM: 8850/1: use memblocks_present | Peng Fan | 1 | -16/+1
arm_memory_present() does the same thing as memblocks_present(), so let's use the common memblocks_present() instead of the platform-specific arm_memory_present(). Patchwork: https://patchwork.kernel.org/patch/10805693/ Signed-off-by: Peng Fan <[email protected]> Signed-off-by: Russell King <[email protected]>
2019-03-15 | Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm | Linus Torvalds | 1 | -65/+4
Pull ARM updates from Russell King: - An improvement from Ard Biesheuvel, who noted that the identity map setup was taking a long time due to flush_cache_louis(). - Update a comment about dma_ops from Wolfram Sang. - Remove use of "-p" with ld, where this flag has been a no-op since 2004. - Remove the printing of the virtual memory layout, which is no longer useful since we hide pointers. - Correct SCU help text. - Remove legacy TWD registration method. - Add pgprot_device() implementation for mapping PCI sysfs resource files. - Initialise PFN limits earlier for kmemleak. - Fix argument count to match macro definition (affects clang builds). - Use unified assembler language almost everywhere for clang, and other clang improvements (from Stefan Agner, Nathan Chancellor). - Support security extension for noMMU and other noMMU cleanups (from Vladimir Murzin). - Remove unnecessary SMP bringup code (which was incorrectly copy'n'pasted from the ARM platform implementations) and remove it from the arch code to discourage further copies of it appearing. - Add Cortex A9 erratum preventing kexec working on some SoCs. - AMBA bus identification updates from Mike Leach. - More use of raw spinlocks to avoid -RT kernel issues (from Yang Shi and Sebastian Andrzej Siewior). - MCPM hyp/svc mode mismatch fixes from Marek Szyprowski. * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (32 commits) ARM: 8849/1: NOMMU: Fix encodings for PMSAv8's PRBAR4/PRLAR4 ARM: 8848/1: virt: Align GIC version check with arm64 counterpart ARM: 8847/1: pm: fix HYP/SVC mode mismatch when MCPM is used ARM: 8845/1: use unified assembler in c files ARM: 8844/1: use unified assembler in assembly files ARM: 8843/1: use unified assembler in headers ARM: 8841/1: use unified assembler in macros ARM: 8840/1: use a raw_spinlock_t in unwind ARM: 8839/1: kprobe: make patch_lock a raw_spinlock_t ARM: 8837/1: coresight: etmv4: Update ID register table to add UCI support ARM: 8836/1: drivers: amba: Update component matching to use the CoreSight UCI values. ARM: 8838/1: drivers: amba: Updates to component identification for driver matching. ARM: 8833/1: Ensure that NEON code always compiles with Clang ARM: avoid Cortex-A9 livelock on tight dmb loops ARM: smp: remove arch-provided "pen_release" ARM: actions: remove boot_lock and pen_release ARM: oxnas: remove CPU hotplug implementation ARM: qcom: remove unnecessary boot_lock ARM: 8832/1: NOMMU: Limit visibility for CONFIG_FLASH_{MEM_BASE,SIZE} ARM: 8831/1: NOMMU: pmsa-v8: remove unneeded semicolon ...
2019-03-12 | memblock: memblock_phys_alloc(): don't panic | Mike Rapoport | 1 | -0/+4
Make the memblock_phys_alloc() function an inline wrapper for memblock_phys_alloc_range() and update the memblock_phys_alloc() callers to check the returned value and panic in case of error. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Guo Ren <[email protected]> [c-sky] Cc: Heiko Carstens <[email protected]> Cc: Juergen Gross <[email protected]> [Xen] Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Paul Burton <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Rob Herring <[email protected]> Cc: Rob Herring <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-03-12 | memblock: replace memblock_alloc_base(ANYWHERE) with memblock_phys_alloc | Mike Rapoport | 1 | -1/+1
The calls to memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE) and memblock_phys_alloc(size, align) are equivalent as both try to allocate 'size' bytes with 'align' alignment anywhere in the memory and panic if the allocation fails. The conversion is done using the following semantic patch: @@ expression size, align; @@ - memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE) + memblock_phys_alloc(size, align) Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Guo Ren <[email protected]> [c-sky] Cc: Heiko Carstens <[email protected]> Cc: Juergen Gross <[email protected]> [Xen] Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Paul Burton <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Rob Herring <[email protected]> Cc: Rob Herring <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-02-01 | ARM: 8826/1: mm: initialize pfn limits with find_limits() | Doug Berger | 1 | -16/+4
The max_low_pfn value must be set before sparse_init() is called to keep the early memblock allocations and frees balanced for kmemleak initialization when sparsemem is enabled. This commit accomplishes that by replacing the local variables min, max_low, and max_high with the global limit variables min_low_pfn, max_low_pfn, and max_pfn respectively in bootmem_init(). The global variables are initialized directly by find_limits() and used in the remainder of the function. Fixes: 9099daed9c69 ("mm: kmemleak: avoid using __va() on addresses that don't have a lowmem mapping") Cc: Catalin Marinas <[email protected]> Acked-by: Mike Rapoport <[email protected]> Signed-off-by: Doug Berger <[email protected]> Signed-off-by: Russell King <[email protected]>
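Schematically, the change amounts to the following sketch: the global limits are filled in directly, so max_low_pfn is already valid by the time sparse_init() performs its early memblock allocations.
    /* In bootmem_init(): initialize the global PFN limits directly. */
    find_limits(&min_low_pfn, &max_low_pfn, &max_pfn);

    /* max_low_pfn is now set before sparse_init() allocates memory. */
    sparse_init();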
2019-02-01 | ARM: 8820/1: mm: Stop printing the virtual memory layout | Geert Uytterhoeven | 1 | -49/+0
Since commit ad67b74d2469d9b8 ("printk: hash addresses printed with %p"), the virtual memory layout printed during boot up contains "ptrval" instead of actual addresses: Memory: 501296K/524288K available (6144K kernel code, 528K rwdata, 1944K rodata, 1024K init, 7584K bss, 22992K reserved, 0K cma-reserved) Virtual kernel memory layout: vector : 0xffff0000 - 0xffff1000 ( 4 kB) fixmap : 0xffc00000 - 0xfff00000 (3072 kB) vmalloc : 0xe0800000 - 0xff800000 ( 496 MB) lowmem : 0xc0000000 - 0xe0000000 ( 512 MB) modules : 0xbf000000 - 0xc0000000 ( 16 MB) .text : 0x(ptrval) - 0x(ptrval) (7136 kB) .init : 0x(ptrval) - 0x(ptrval) (1024 kB) .data : 0x(ptrval) - 0x(ptrval) ( 529 kB) .bss : 0x(ptrval) - 0x(ptrval) (7585 kB) Instead of changing the printing to "%px", and leaking virtual memory layout information again, just remove the printing completely, cfr. e.g. commits 071929dbdd865f77 ("arm64: Stop printing the virtual memory layout") and 31833332f7987636 ("m68k/mm: Stop printing the virtual memory layout"). All interesting information (actual section sizes) is already printed by mem_init_print_info() just above anyway. Signed-off-by: Geert Uytterhoeven <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Russell King <[email protected]>
2018-11-26 | arch: Move initrd= parsing into do_mounts_initrd.c | Florian Fainelli | 1 | -17/+0
ARC, ARM, ARM64 and Unicore32 are all capable of parsing the "initrd=" command line parameter to allow specifying the physical address and size of an initrd. Move that parsing into init/do_mounts_initrd.c such that we no longer duplicate that logic. Signed-off-by: Florian Fainelli <[email protected]> Reviewed-by: Mike Rapoport <[email protected]> Signed-off-by: Rob Herring <[email protected]>
2018-11-26 | of/fdt: Populate phys_initrd_start/phys_initrd_size from FDT | Florian Fainelli | 1 | -6/+0
Now that we have central and global variables holding the physical address and size of the initrd, we can have early_init_dt_check_for_initrd() populate phys_initrd_start/phys_initrd_size for us. This allows us to remove a chunk of code from arch/arm/mm/init.c introduced with commit 65939301acdb ("arm: set initrd_start/initrd_end for fdt scan"). Signed-off-by: Florian Fainelli <[email protected]> Reviewed-by: Mike Rapoport <[email protected]> Signed-off-by: Rob Herring <[email protected]>
2018-11-26 | arch: Make phys_initrd_start and phys_initrd_size global variables | Florian Fainelli | 1 | -3/+2
Make phys_initrd_start and phys_initrd_size global variables declared in init/do_mounts_initrd.c such that we can later have generic code in drivers/of/fdt.c populate those variables for us. This requires both the ARM and unicore32 implementations to be properly guarded against CONFIG_BLK_DEV_INITRD, and also initialize the variables to the expected default values (unicore32). Signed-off-by: Florian Fainelli <[email protected]> Reviewed-by: Mike Rapoport <[email protected]> Signed-off-by: Rob Herring <[email protected]>
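In sketch form (placement per the description above), the generic definitions are plain globals, with the previous per-architecture copies removed:
    /* init/do_mounts_initrd.c (sketch) */
    phys_addr_t phys_initrd_start __initdata;
    unsigned long phys_initrd_size __initdata;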
2018-10-31 | mm: remove include/linux/bootmem.h | Mike Rapoport | 1 | -1/+0
Move remaining definitions and declarations from include/linux/bootmem.h into include/linux/memblock.h and remove the redundant header. The includes were replaced with the semantic patch below and then semi-automated removal of duplicated '#include <linux/memblock.h> @@ @@ - #include <linux/bootmem.h> + #include <linux/memblock.h> [[email protected]: dma-direct: fix up for the removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: powerpc: fix up for removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Burton <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Serge Semin <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-10-31 | memblock: rename free_all_bootmem to memblock_free_all | Mike Rapoport | 1 | -1/+1
The conversion is done using sed -i 's@free_all_bootmem@memblock_free_all@' \ $(git grep -l free_all_bootmem) Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Burton <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Serge Semin <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-07-11 | ARM: 8780/1: ftrace: Only set kernel memory back to read-only after boot | Steven Rostedt (VMware) | 1 | -0/+9
Dynamic ftrace requires modifying the code segments that are usually set to read-only. To do this, a per arch function is called both before and after the ftrace modifications are performed. The "before" function will set kernel code text to read-write to allow for ftrace to make the modifications, and the "after" function will set the kernel code text back to "read-only" to keep the kernel code text protected. The issue happens when dynamic ftrace is tested at boot up. The test is done before the kernel code text has been set to read-only. But the "before" and "after" calls are still performed. The "after" call will change the kernel code text to read-only prematurely, and other boot code that expects this code to be read-write will fail. The solution is to add a variable that is set when the kernel code text is expected to be converted to read-only, and make the ftrace "before" and "after" calls do nothing if that variable is not yet set. This is similar to the x86 solution from commit 162396309745 ("ftrace, x86: make kernel text writable only for conversions"). Link: http://lkml.kernel.org/r/[email protected] Reported-by: Stefan Agner <[email protected]> Tested-by: Stefan Agner <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Russell King <[email protected]>
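The guard can be sketched as follows (the flag name mirrors the x86 solution and is illustrative): the rw/ro hooks become no-ops until the kernel text has actually been made read-only by mark_rodata_ro(), which sets the flag.
    static int kernel_set_to_readonly __read_mostly;    /* set in mark_rodata_ro() */

    void set_kernel_text_rw(void)
    {
            if (!kernel_set_to_readonly)
                    return;         /* boot-time ftrace test: text is still writable */

            set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), false,
                              current->active_mm);
    }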
2018-03-09 | ARM: simplify and fix linker script for TCM | Nicolas Pitre | 1 | -11/+0
Let's put the TCM stuff in the __init section directly. No need for a separately freed memory area. Remove redundant linker sections, as well as comments that were more confusing than no comments at all. Finally make it XIP compatible by using LOAD_OFFSET in the section LMA specification. Signed-off-by: Nicolas Pitre <[email protected]> Tested-by: Chris Brandt <[email protected]>
2018-01-21 | ARM: 8737/1: mm: dump: add checking for writable and executable | Jinbum Park | 1 | -0/+2
Page mappings with full RWX permissions are a security risk. x86 and arm64 have an option to walk the page tables and dump any bad pages (1404d6f13e47 ("arm64: dump: Add checking for writable and exectuable pages")). Add a similar implementation for arm. Reviewed-by: Kees Cook <[email protected]> Tested-by: Laura Abbott <[email protected]> Reviewed-by: Laura Abbott <[email protected]> Signed-off-by: Jinbum Park <[email protected]> Signed-off-by: Russell King <[email protected]>
2017-11-26 | Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm | Linus Torvalds | 1 | -2/+2
Pull ARM fixes from Russell King: - LPAE fixes for kernel-readonly regions - Fix for get_user_pages_fast on LPAE systems - avoid tying decompressor to a particular platform if DEBUG_LL is enabled - BUG if we attempt to return to userspace but the to-be-restored PSR value keeps us in privileged mode (defeating an issue that ftracetest found) * 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm: ARM: BUG if jumping to usermode address in kernel mode ARM: 8722/1: mm: make STRICT_KERNEL_RWX effective for LPAE ARM: 8721/1: mm: dump: check hardware RO bit for LPAE ARM: make decompressor debug output user selectable ARM: fix get_user_pages_fast
2017-11-21 | ARM: 8722/1: mm: make STRICT_KERNEL_RWX effective for LPAE | Philip Derrin | 1 | -2/+2
Currently, for ARM kernels with CONFIG_ARM_LPAE and CONFIG_STRICT_KERNEL_RWX enabled, the 2MiB pages mapping the kernel code and rodata are writable. They are marked read-only in a software bit (L_PMD_SECT_RDONLY) but the hardware read-only bit is not set (PMD_SECT_AP2). For user mappings, the logic that propagates the software bit to the hardware bit is in set_pmd_at(); but for the kernel, section_update() writes the PMDs directly, skipping this logic. The fix is to set PMD_SECT_AP2 for read-only sections in section_update(), at the same time as L_PMD_SECT_RDONLY. Fixes: 1e3479225acb ("ARM: 8275/1: mm: fix PMD_SECT_RDONLY undeclared compile error") Signed-off-by: Philip Derrin <[email protected]> Reported-by: Neil Dick <[email protected]> Tested-by: Neil Dick <[email protected]> Tested-by: Laura Abbott <[email protected]> Reviewed-by: Kees Cook <[email protected]> Cc: [email protected] Signed-off-by: Russell King <[email protected]>
2017-09-28 | ARM: 8696/1: mm: Remove dead code in mem_init() | Vladimir Murzin | 1 | -10/+0
The code in question checks memory constraints to set the default policy for overcommit; however, we only support a 4K page size, so the condition always evaluates to false. Remove that dead code. Signed-off-by: Vladimir Murzin <[email protected]> Signed-off-by: Russell King <[email protected]>
2017-04-26 | ARM: 8672/1: mm: remove tasklist locking from update_sections_early() | Grygorii Strashko | 1 | -5/+8
The below backtrace can be observed on -rt kernel with CONFIG_DEBUG_MODULE_RONX (4.9 kernel CONFIG_DEBUG_RODATA) option enabled: BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:993 in_atomic(): 1, irqs_disabled(): 128, pid: 14, name: migration/0 1 lock held by migration/0/14: #0: (tasklist_lock){+.+...}, at: [<c01183e8>] update_sections_early+0x24/0xdc irq event stamp: 38 hardirqs last enabled at (37): [<c08f6f7c>] _raw_spin_unlock_irq+0x24/0x68 hardirqs last disabled at (38): [<c01fdfe8>] multi_cpu_stop+0xd8/0x138 softirqs last enabled at (0): [<c01303ec>] copy_process.part.5+0x238/0x1b64 softirqs last disabled at (0): [< (null)>] (null) Preemption disabled at: [<c01fe244>] cpu_stopper_thread+0x80/0x10c CPU: 0 PID: 14 Comm: migration/0 Not tainted 4.9.21-rt16-02220-g49e319c #15 Hardware name: Generic DRA74X (Flattened Device Tree) [<c0112014>] (unwind_backtrace) from [<c010d370>] (show_stack+0x10/0x14) [<c010d370>] (show_stack) from [<c049beb8>] (dump_stack+0xa8/0xd4) [<c049beb8>] (dump_stack) from [<c01631a0>] (___might_sleep+0x1bc/0x2ac) [<c01631a0>] (___might_sleep) from [<c08f7244>] (__rt_spin_lock+0x1c/0x30) [<c08f7244>] (__rt_spin_lock) from [<c08f77a4>] (rt_read_lock+0x54/0x68) [<c08f77a4>] (rt_read_lock) from [<c01183e8>] (update_sections_early+0x24/0xdc) [<c01183e8>] (update_sections_early) from [<c01184b0>] (__fix_kernmem_perms+0x10/0x1c) [<c01184b0>] (__fix_kernmem_perms) from [<c01fe010>] (multi_cpu_stop+0x100/0x138) [<c01fe010>] (multi_cpu_stop) from [<c01fe24c>] (cpu_stopper_thread+0x88/0x10c) [<c01fe24c>] (cpu_stopper_thread) from [<c015edc4>] (smpboot_thread_fn+0x174/0x31c) [<c015edc4>] (smpboot_thread_fn) from [<c015a988>] (kthread+0xf0/0x108) [<c015a988>] (kthread) from [<c0108818>] (ret_from_fork+0x14/0x3c) Freeing unused kernel memory: 1024K (c0d00000 - c0e00000) The stop_machine() is called with cpus = NULL from fix_kernmem_perms() and mark_rodata_ro() which means only one CPU will execute update_sections_early() while all other CPUs will spin and wait. Hence, it's safe to remove tasklist locking from update_sections_early(). As part of this change also mark functions which are local to this module as static. Signed-off-by: Grygorii Strashko <[email protected]> Acked-by: Laura Abbott <[email protected]> Acked-by: Kees Cook <[email protected]> Signed-off-by: Russell King <[email protected]>
2017-03-02 | sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task.h> | Ingo Molnar | 1 | -0/+1
We are going to split <linux/sched/task.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/task.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-03-02 | sched/headers: Prepare for new header dependencies before moving code to <linux/sched/signal.h> | Ingo Molnar | 1 | -0/+1
We are going to split <linux/sched/signal.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/signal.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-02-28 | Merge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm | Linus Torvalds | 1 | -23/+41
Pull ARM updates from Russell King: - nommu updates from Afzal Mohammed cleaning up the vectors support - allow DMA memory "mapping" for nommu from Benjamin Gaignard - fixing a correctness issue with R_ARM_PREL31 relocations in the module linker - add strlen() prototype for the decompressor - support for DEBUG_VIRTUAL from Florian Fainelli - adjusting memory bounds after memory reservations have been registered - uniphier cache handling updates from Masahiro Yamada - initrd and Thumb Kconfig cleanups * 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (23 commits) ARM: mm: round the initrd reservation to page boundaries ARM: mm: clean up initrd initialisation ARM: mm: move initrd init code out of arm_memblock_init() ARM: 8655/1: improve NOMMU definition of pgprot_*() ARM: 8654/1: decompressor: add strlen prototype ARM: 8652/1: cache-uniphier: clean up active way setup code ARM: 8651/1: cache-uniphier: include <linux/errno.h> instead of <linux/types.h> ARM: 8650/1: module: handle negative R_ARM_PREL31 addends correctly ARM: 8649/2: nommu: remove Hivecs configuration is asm ARM: 8648/2: nommu: display vectors base ARM: 8647/2: nommu: dynamic exception base address setting ARM: 8646/1: mmu: decouple VECTORS_BASE from Kconfig ARM: 8644/1: Reduce "CPU: shutdown" message to debug level ARM: 8641/1: treewide: Replace uses of virt_to_phys with __pa_symbol ARM: 8640/1: Add support for CONFIG_DEBUG_VIRTUAL ARM: 8639/1: Define KERNEL_START and KERNEL_END ARM: 8638/1: mtd: lart: Rename partition defines to be prefixed with PART_ ARM: 8637/1: Adjust memory boundaries after reservations ARM: 8636/1: Cleanup sanity_check_meminfo ARM: add CPU_THUMB_CAPABLE to indicate possible Thumb support ...
2017-02-28 | ARM: mm: round the initrd reservation to page boundaries | Russell King | 1 | -5/+18
Round the initrd memblock reservation to page boundaries to prevent other data sharing the initrd pages. This prevents an allocation possibly overlapping with the initrd, which would later get trampled on in free_initrd_mem(). Signed-off-by: Russell King <[email protected]>
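A sketch of the page-rounded reservation (using the usual phys_initrd_start/phys_initrd_size pair): the reserved window only ever grows, so the partial pages at both ends of the initrd stay off-limits to other allocations.
    phys_addr_t start = round_down(phys_initrd_start, PAGE_SIZE);
    unsigned long size = phys_initrd_size + (phys_initrd_start - start);

    size = round_up(size, PAGE_SIZE);       /* cover the trailing partial page too */
    memblock_reserve(start, size);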
2017-02-28 | ARM: mm: clean up initrd initialisation | Russell King | 1 | -12/+15
Rather than repeatedly testing phys_initrd_size to see if the initrd is still enabled, return from the new function to avoid executing the remaining initialisation. Signed-off-by: Russell King <[email protected]>
2017-02-28 | ARM: mm: move initrd init code out of arm_memblock_init() | Russell King | 1 | -4/+9
Move the ARM initrd initialisation code out of arm_memblock_init() into its own function, so it can be cleaned up. Signed-off-by: Russell King <[email protected]>
2017-02-28 | ARM: 8646/1: mmu: decouple VECTORS_BASE from Kconfig | Afzal Mohammed | 1 | -2/+2
For MMU configurations, VECTORS_BASE is always 0xffff0000, a macro definition will suffice. For no-MMU, exception base address is dynamically determined in subsequent patches. To preserve bisectability, now make the macro applicable for no-MMU scenario too. Thanks to 0-DAY kernel test infrastructure that found the bisectability issue. This macro will be restricted to MMU case upon dynamically determining exception base address for no-MMU. Once exception address is handled dynamically for no-MMU, VECTORS_BASE can be removed from Kconfig. Signed-off-by: afzal mohammed <[email protected]> Tested-by: Vladimir Murzin <[email protected]> Signed-off-by: Russell King <[email protected]>
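In sketch form, the Kconfig symbol becomes a plain constant for the MMU case (the value is the one quoted above):
    /* arch/arm/include/asm/memory.h (sketch) */
    #define VECTORS_BASE            UL(0xffff0000)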
2017-02-28 | ARM: 8639/1: Define KERNEL_START and KERNEL_END | Florian Fainelli | 1 | -5/+2
In preparation for adding CONFIG_DEBUG_VIRTUAL support, define a set of common constants: KERNEL_START and KERNEL_END which abstract CONFIG_XIP_KERNEL vs. !CONFIG_XIP_KERNEL. Update the code where relevant. Acked-by: Russell King <[email protected]> Signed-off-by: Florian Fainelli <[email protected]> Signed-off-by: Russell King <[email protected]>
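A sketch of the abstraction, with assumed symbol choices: an XIP kernel's RAM footprint starts at _sdata because its text executes from flash, otherwise the kernel starts at _stext.
    #ifdef CONFIG_XIP_KERNEL
    #define KERNEL_START            _sdata
    #else
    #define KERNEL_START            _stext
    #endif
    #define KERNEL_END              _end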