path: root/arch/mips/mm
Age | Commit message | Author | Files | Lines
2020-04-04 | Merge tag 'dma-mapping-5.7' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 1 | -6/+1
Pull dma-mapping updates from Christoph Hellwig: - fix an integer overflow in the coherent pool (Kevin Grandemange) - provide support for in-place uncached remapping and use that for openrisc - fix the arm coherent allocator to take the bus limit into account * tag 'dma-mapping-5.7' of git://git.infradead.org/users/hch/dma-mapping: ARM/dma-mapping: merge __dma_supported into arm_dma_supported ARM/dma-mapping: take the bus limit into account in __dma_alloc ARM/dma-mapping: remove get_coherent_dma_mask openrisc: use the generic in-place uncached DMA allocator dma-direct: provide a arch_dma_clear_uncached hook dma-direct: make uncached_kernel_address more general dma-direct: consolidate the error handling in dma_direct_alloc_pages dma-direct: remove the cached_kernel_address hook dma-coherent: fix integer overflow in the reserved-memory dma allocation
2020-04-02 | mm: allow VM_FAULT_RETRY for multiple times | Peter Xu | 1 | -1/+0
The idea comes from a discussion between Linus and Andrea [1]. Before this patch we only allowed a page fault to be retried once. We achieved this by clearing the FAULT_FLAG_ALLOW_RETRY flag when calling handle_mm_fault() the second time. This was mainly used to avoid unexpected starvation of the system by looping forever to handle the page fault on a single page. However that should hardly happen, and after all, for each code path to return VM_FAULT_RETRY we'll first wait for a condition (during which time we should possibly yield the cpu) before VM_FAULT_RETRY is really returned. This patch removes the restriction by keeping the FAULT_FLAG_ALLOW_RETRY flag when we receive VM_FAULT_RETRY. It means that the page fault handler can now retry the page fault multiple times if necessary without needing to generate another page fault event. Meanwhile we still keep the FAULT_FLAG_TRIED flag so the page fault handler can still identify whether a page fault is the first attempt or not. Then we'll have these combinations of fault flags (only considering the ALLOW_RETRY flag and the TRIED flag):
- ALLOW_RETRY and !TRIED: the page fault allows retry, and this is the first try
- ALLOW_RETRY and TRIED: the page fault allows retry, and this is not the first try
- !ALLOW_RETRY and !TRIED: the page fault does not allow retry at all
- !ALLOW_RETRY and TRIED: this is forbidden and should never be used
In existing code we have multiple places that have taken special care of the first condition above by checking against (fault_flags & FAULT_FLAG_ALLOW_RETRY). This patch introduces a simple helper to detect the first retry of a page fault by checking against both (fault_flags & FAULT_FLAG_ALLOW_RETRY) and !(fault_flags & FAULT_FLAG_TRIED), because now even the 2nd try will have ALLOW_RETRY set; it then uses that helper in all existing special paths. One example is in __lock_page_or_retry(): now we'll drop the mmap_sem only in the first attempt of the page fault and we'll keep it in follow-up retries, so the old locking behavior is retained. This will be a nice enhancement for the current code [2] and at the same time supporting material for the future userfaultfd-writeprotect work, since in that work there will always be an explicit userfault writeprotect retry for protected pages, and if that cannot resolve the page fault (e.g., when userfaultfd-writeprotect is used in conjunction with swapped pages) then we'll possibly need a 3rd retry of the page fault. It might also benefit other potential users who will have similar requirements, like userfault write-protection. GUP code is not touched yet and will be covered in a follow-up patch. Please read the thread below for more information. [1] https://lore.kernel.org/lkml/[email protected]/ [2] https://lore.kernel.org/lkml/[email protected]/ Suggested-by: Linus Torvalds <[email protected]> Suggested-by: Andrea Arcangeli <[email protected]> Signed-off-by: Peter Xu <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Tested-by: Brian Geffon <[email protected]> Cc: Bobby Powers <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Denis Plotnikov <[email protected]> Cc: "Dr . David Alan Gilbert" <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: "Kirill A . Shutemov" <[email protected]> Cc: Martin Cracauer <[email protected]> Cc: Marty McFadden <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Maya Gokhale <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Mike Kravetz <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Pavel Emelyanov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-04-02 | mm: introduce FAULT_FLAG_DEFAULT | Peter Xu | 1 | -1/+1
Although there are tons of arch-specific page fault handlers, most of them still share the same initial value of the page fault flags. Say, nearly all of the page fault handlers would allow the fault to be retried, and they also allow the fault to respond to SIGKILL. Let's define a default value for the fault flags to replace those initial page fault flags that were copied over. With this, it'll be far easier to introduce a new fault flag that can be used by all the architectures instead of touching all the archs. Signed-off-by: Peter Xu <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Tested-by: Brian Geffon <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Bobby Powers <[email protected]> Cc: Denis Plotnikov <[email protected]> Cc: "Dr . David Alan Gilbert" <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: "Kirill A . Shutemov" <[email protected]> Cc: Martin Cracauer <[email protected]> Cc: Marty McFadden <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Maya Gokhale <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Mike Kravetz <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Pavel Emelyanov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
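Based on that description, the default is just the OR of the flags every handler was already setting; a sketch (the kernel's actual definition may include more bits):

    /* Retries are allowed and the fault can be killed by SIGKILL. */
    #define FAULT_FLAG_DEFAULT  (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE)

    /* Typical use at the top of an arch page fault handler: */
    unsigned int flags = FAULT_FLAG_DEFAULT;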
2020-04-02 | mm: introduce fault_signal_pending() | Peter Xu | 1 | -1/+1
For most architectures, we've got a quick path to detect a fatal signal after handle_mm_fault(). Introduce a helper for that quick path. It cleans up the current code a bit so we don't need to duplicate the same check across archs. More importantly, this will be a unified place where we handle the signal immediately after an interrupted page fault, so it'll be much easier for us if we want to change the signal-handling behavior for all the archs later on. Note that currently only some of the archs use this new helper, because some archs have their own way to handle signals. In the follow-up patches, we'll try to apply this helper to all the rest of the archs. Another note is that the "regs" parameter in the new helper is not used yet. It'll be used very soon. For now we keep it in this patch only to avoid touching all the archs again in the follow-up patches. [[email protected]: fix sparse warnings] Link: http://lkml.kernel.org/r/20200311145921.GD479302@xz-x1 Signed-off-by: Peter Xu <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Tested-by: Brian Geffon <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Bobby Powers <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Denis Plotnikov <[email protected]> Cc: "Dr . David Alan Gilbert" <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: "Kirill A . Shutemov" <[email protected]> Cc: Martin Cracauer <[email protected]> Cc: Marty McFadden <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Maya Gokhale <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Mike Kravetz <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Pavel Emelyanov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
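A plausible shape for the helper, inferred from the description above; the exact in-tree body may differ, and regs is intentionally unused here, as the text explains:

    /* After handle_mm_fault(): did a fatal signal interrupt the fault? */
    static inline bool fault_signal_pending(vm_fault_t fault_flags,
                                            struct pt_regs *regs)
    {
            return unlikely((fault_flags & VM_FAULT_RETRY) &&
                            fatal_signal_pending(current));
    }

Arch fault handlers can then replace their open-coded "if (fatal_signal_pending(current) && (fault & VM_FAULT_RETRY))" checks with a single call.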
2020-03-25 | MIPS/tlbex: Fix LDDIR usage in setup_pw() for Loongson-3 | Huacai Chen | 1 | -1/+4
LDDIR/LDPTE are Loongson-3's acceleration instructions for page table walking. If BD (Base Directory, the 4th page directory) is not enabled, then GDOffset is biased by BadVAddr[63:62]. So, if GDOffset (aka BadVAddr[47:36] for Loongson-3) is big enough, "0b11(BadVAddr[63:62])|BadVAddr[47:36]|...." can point far beyond swapper_pg_dir. This means swapper_pg_dir may NOT be accessed by LDDIR correctly, so fix it by setting PWDirExt in CP0_PWCtl. Cc: <[email protected]> Signed-off-by: Pei Huang <[email protected]> Signed-off-by: Huacai Chen <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
2020-03-16 | MIPS: c-r4k: Invalidate BMIPS5000 ZSCM prefetch lines | Kamal Dasu | 1 | -0/+29
The Zephyr secondary cache is 256KB, with 128B lines and 32B sectors. A secondary cache line can contain two instruction cache lines (64B) or four data cache lines (32B). The hardware prefetcher detects streaming accesses and prefetches ahead of the processor. Add support to invalidate the BMIPS5000 CPU Zephyr secondary cache module (ZSCM) on DMA from a device, so that the data returned is coherent during DMA read operations. Signed-off-by: Kamal Dasu <[email protected]> Reviewed-by: Florian Fainelli <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
2020-03-16 | dma-direct: make uncached_kernel_address more general | Christoph Hellwig | 1 | -1/+1
Rename the symbol to arch_dma_set_uncached, and pass a size to it as well as allow an error return. That will allow reusing this hook for in-place pagetable remapping. As the in-place remap doesn't always require an explicit cache flush, also detangle ARCH_HAS_DMA_PREP_COHERENT from ARCH_HAS_DMA_SET_UNCACHED. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Robin Murphy <[email protected]>
2020-03-16 | dma-direct: remove the cached_kernel_address hook | Christoph Hellwig | 1 | -5/+0
dma-direct now finds the kernel address for coherent allocations based on the dma address, so the cached_kernel_address hook is unused and can be removed entirely. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Robin Murphy <[email protected]>
2020-02-28 | MIPS: reduce print level for cache information | Oleksij Rempel | 4 | -23/+23
The default printk log level is KERN_WARNING. This makes automatic log parsing problematic, since we get false-positive alarms on non-critical information. Set all non-critical cache-related information to KERN_INFO, the same level used by most kernel drivers. Signed-off-by: Oleksij Rempel <[email protected]> Signed-off-by: Thomas Bogendoerfer <[email protected]>
2020-01-09 | MIPS: mm: Place per_cpu on different nodes, if NUMA is enabled | Thomas Bogendoerfer | 1 | -0/+45
Implement placing of per_cpu data into memory local to the CPU. Signed-off-by: Thomas Bogendoerfer <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected]
2019-11-28 | Merge branch 'master' of ↵ | Linus Torvalds | 1 | -12/+6
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux; tag 'dma-mapping-5.5' of git://git.infradead.org/users/hch/dma-mapping Pull dma-mapping updates from Christoph Hellwig: - improve dma-debug scalability (Eric Dumazet) - tiny dma-debug cleanup (Dan Carpenter) - check for vmap memory in dma_map_single (Kees Cook) - check for dma_addr_t overflows in dma-direct when using DMA offsets (Nicolas Saenz Julienne) - switch the x86 sta2x11 SOC to use more generic DMA code (Nicolas Saenz Julienne) - fix arm-nommu dma-ranges handling (Vladimir Murzin) - use __initdata in CMA (Shyam Saini) - replace the bus dma mask with a limit (Nicolas Saenz Julienne) - merge the remapping helpers into the main dma-direct flow (me) - switch xtensa to the generic dma remap handling (me) - various cleanups around dma_capable (me) - remove unused dev arguments to various dma-noncoherent helpers (me) * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux: * tag 'dma-mapping-5.5' of git://git.infradead.org/users/hch/dma-mapping: (22 commits) dma-mapping: treat dev->bus_dma_mask as a DMA limit dma-direct: exclude dma_direct_map_resource from the min_low_pfn check dma-direct: don't check swiotlb=force in dma_direct_map_resource dma-debug: clean up put_hash_bucket() powerpc: remove support for NULL dev in __phys_to_dma / __dma_to_phys dma-direct: avoid a forward declaration for phys_to_dma dma-direct: unify the dma_capable definitions dma-mapping: drop the dev argument to arch_sync_dma_for_* x86/PCI: sta2x11: use default DMA address translation dma-direct: check for overflows on 32 bit DMA addresses dma-debug: increase HASH_SIZE dma-debug: reorder struct dma_debug_entry fields xtensa: use the generic uncached segment support dma-mapping: merge the generic remapping helpers into dma-direct dma-direct: provide mmap and get_sgtable method overrides dma-direct: remove the dma_handle argument to __dma_direct_alloc_pages dma-direct: remove __dma_direct_free_pages usb: core: Remove redundant vmap checks kernel: dma-contiguous: mark CMA parameters __initdata/__initconst dma-debug: add a schedule point in debug_dma_dump_mappings() ...
2019-11-22 | mips: add support for folded p4d page tables | Mike Rapoport | 8 | -12/+38
Implement primitives necessary for the 4th level folding, add walks of the p4d level where appropriate, replace 5level-fixup.h with pgtable-nop4d.h and drop usage of __ARCH_USE_5LEVEL_HACK. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: Mike Rapoport <[email protected]>
2019-11-22 | mips: drop __pXd_offset() macros that duplicate pXd_index() ones | Mike Rapoport | 3 | -5/+5
The __pXd_offset() macros are identical to the pXd_index() macros and there is no point to keep both of them. All architectures define and use pXd_index() so let's keep only those to make mips consistent with the rest of the kernel. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: Mike Rapoport <[email protected]>
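For reference, the two macro families compute the same index; a typical MIPS-style definition looks like this (shift and size names are the usual pgtable header ones):

    #define pgd_index(address)    (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
    /* The dropped alias was simply: */
    #define __pgd_offset(address) pgd_index(address)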
2019-11-20 | dma-mapping: drop the dev argument to arch_sync_dma_for_* | Christoph Hellwig | 1 | -6/+6
These are pure cache maintenance routines, so drop the unused struct device argument. Signed-off-by: Christoph Hellwig <[email protected]> Suggested-by: Daniel Vetter <[email protected]>
2019-11-11 | dma-direct: provide mmap and get_sgtable method overrides | Christoph Hellwig | 1 | -6/+0
For dma-direct we know that the DMA address is an encoding of the physical address that we can trivially decode. Use that fact to provide implementations that do not need the arch_dma_coherent_to_pfn architecture hook. Note that we can still only support mmap of non-coherent memory if the architecture provides a way to set an uncached bit in the page tables. This must be true for architectures that use the generic remap helpers, but other architectures can also manually select it. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Max Filippov <[email protected]>
2019-11-01 | Merge tag 'mips_fixes_5.4_3' into mips-next | Paul Burton | 1 | -8/+15
Pull in mips-fixes primarily to gain build fixes in order to allow better testing of mips-next. A few MIPS fixes: - Fix VDSO time-related function behavior for systems where we need to fall back to syscalls, but were instead returning bogus results. - A fix to TLB exception handlers for Cavium Octeon systems where they would inadvertently clobber the $1/$at register. - A build fix for bcm63xx configurations. - Switch to using my @kernel.org email address. Signed-off-by: Paul Burton <[email protected]>
2019-10-31 | MIPS: Loongson64: Rename CPU TYPES | Jiaxun Yang | 4 | -22/+22
CPU_LOONGSON2 -> CPU_LOONGSON2EF
CPU_LOONGSON3 -> CPU_LOONGSON64
Newer Loongson-2 products (2G/2H/2K1000) can share a kernel implementation with Loongson-3, while 2E/2F are less similar to the other LOONGSON64 products. Signed-off-by: Jiaxun Yang <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
2019-10-23 | MIPS: tlbex: Fix build_restore_pagemask KScratch restore | Paul Burton | 1 | -8/+15
build_restore_pagemask() will restore the value of register $1/$at when its restore_scratch argument is non-zero, and aims to do so by filling a branch delay slot. Commit 0b24cae4d535 ("MIPS: Add missing EHB in mtc0 -> mfc0 sequence.") added an EHB instruction (Execution Hazard Barrier) prior to restoring $1 from a KScratch register, in order to resolve a hazard that can result in stale values of the KScratch register being observed. In particular, P-class CPUs from MIPS with out of order execution pipelines such as the P5600 & P6600 are affected. Unfortunately this EHB instruction was inserted in the branch delay slot causing the MFC0 instruction which performs the restoration to no longer execute along with the branch. The result is that the $1 register isn't actually restored, ie. the TLB refill exception handler clobbers it - which is exactly the problem the EHB is meant to avoid for the P-class CPUs. Similarly build_get_pgd_vmalloc() will restore the value of $1/$at when its mode argument equals refill_scratch, and suffers from the same problem. Fix this by in both cases moving the EHB earlier in the emitted code. There's no reason it needs to immediately precede the MFC0 - it simply needs to be between the MTC0 & MFC0. This bug only affects Cavium Octeon systems which use build_fast_tlb_refill_handler(). Signed-off-by: Paul Burton <[email protected]> Fixes: 0b24cae4d535 ("MIPS: Add missing EHB in mtc0 -> mfc0 sequence.") Cc: Dmitry Korotin <[email protected]> Cc: [email protected] # v3.15+ Cc: [email protected] Cc: [email protected]
2019-10-09 | MIPS: Provide unroll() macro, use it for cache ops | Paul Burton | 1 | -4/+8
Currently we have a lot of duplication in asm/r4kcache.h to handle manually unrolling loops of cache ops for various line sizes, and we have to explicitly handle the difference in cache op immediate width between MIPSr6 & earlier ISA revisions with further duplication. Introduce an unroll() macro in asm/unroll.h which expands to a switch statement which is used to call a function or expand a preprocessor macro a compile-time constant number of times in a row - effectively explicitly unrolling a loop. We make use of this here to remove the cache op duplication & will use it further in later patches. A nice side effect of this is that calculating the cache op offset immediate is now the compiler's responsibility, so we're no longer sensitive to the width change of that immediate in MIPSr6 & will be similarly agnostic to immediate width in any future supported ISA. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected]
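A simplified sketch of the switch-based unrolling idea (the in-tree macro supports larger counts and adds compile-time sanity checks on `times`):

    /* Invoke fn(...) `times` times, with `times` a compile-time constant,
     * by falling through the cases of a switch statement. */
    #define unroll(times, fn, ...) do {                         \
            switch (times) {                                    \
            case 4: fn(__VA_ARGS__); /* fall through */         \
            case 3: fn(__VA_ARGS__); /* fall through */         \
            case 2: fn(__VA_ARGS__); /* fall through */         \
            case 1: fn(__VA_ARGS__); /* fall through */         \
            case 0: break;                                      \
            }                                                   \
    } while (0)

Because the repetition happens at the C level, the compiler computes each cache op's offset immediate itself, which is what makes the MIPSr6 immediate-width difference a non-issue.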
2019-10-07 | MIPS: Loongson: Add Loongson-3A R4 basic support | Huacai Chen | 1 | -1/+2
All Loongson-3 CPU family:
Code-name         Brand-name        PRId
Loongson-3A R1    Loongson-3A1000   0x6305
Loongson-3A R2    Loongson-3A2000   0x6308
Loongson-3A R2.1  Loongson-3A2000   0x630c
Loongson-3A R3    Loongson-3A3000   0x6309
Loongson-3A R3.1  Loongson-3A3000   0x630d
Loongson-3A R4    Loongson-3A4000   0xc000
Loongson-3B R1    Loongson-3B1000   0x6306
Loongson-3B R2    Loongson-3B1500   0x6307
Features of the R4 revision of Loongson-3A:
- All R2/R3 features, including SFB, V-Cache, FTLB, RIXI, DSP, etc.
- Support for variable ASID bits.
- Support for the MSA and VZ extensions.
- Support for the CPUCFG (CPU config) and CSR (Control and Status Register) extensions.
- 64 entries of VTLB (classic TLB), 2048 entries of FTLB (8-way set-associative).
64-bit Loongson processors now have three types of PRID.IMP: 0x6300 is the classic one, so we call it PRID_IMP_LOONGSON_64C (e.g., Loongson-2E/2F/3A1000/3B1000/3B1500/3A2000/3A3000); 0x6100 is for some processors which have reduced capabilities, so we call it PRID_IMP_LOONGSON_64R (e.g., Loongson-2K); 0xc000 is supposed to cover all new processors in general (e.g., Loongson-3A4000+), so we call it PRID_IMP_LOONGSON_64G. Signed-off-by: Huacai Chen <[email protected]> Signed-off-by: Jiaxun Yang <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Fuxin Zhang <[email protected]> Cc: Zhangjin Wu <[email protected]> Cc: Huacai Chen <[email protected]>
2019-09-24 | mips: use generic mmap top-down layout and brk randomization | Alexandre Ghiti | 1 | -96/+0
mips uses a top-down layout by default that exactly fits the generic functions, so get rid of arch specific code and use the generic version by selecting ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT. As ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT selects ARCH_HAS_ELF_RANDOMIZE, use the generic version of arch_randomize_brk since it also fits. Note that this commit also removes the possibility for mips to have elf randomization and no MMU: without MMU, the security added by randomization is worth nothing. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Alexandre Ghiti <[email protected]> Acked-by: Paul Burton <[email protected]> Reviewed-by: Kees Cook <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: James Hogan <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-09-24 | mips: replace arch specific way to determine 32bit task with generic version | Alexandre Ghiti | 1 | -1/+2
Mips uses TASK_IS_32BIT_ADDR to determine if a task is 32bit, but this define is mips specific and other arches do not have it: instead, use !IS_ENABLED(CONFIG_64BIT) || is_compat_task() condition. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Alexandre Ghiti <[email protected]> Acked-by: Paul Burton <[email protected]> Reviewed-by: Kees Cook <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: James Hogan <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-09-24 | mips: adjust brk randomization offset to fit generic version | Alexandre Ghiti | 1 | -3/+4
This commit simply bumps up to 32MB and 1GB the random offset of brk, compared to 8MB and 256MB, for 32bit and 64bit respectively. Link: http://lkml.kernel.org/r/[email protected] Suggested-by: Kees Cook <[email protected]> Signed-off-by: Alexandre Ghiti <[email protected]> Acked-by: Paul Burton <[email protected]> Reviewed-by: Kees Cook <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: James Hogan <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
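The generic brk randomization these offsets line up with looks roughly like this (a sketch based on the generic mm code; details may differ):

    unsigned long arch_randomize_brk(struct mm_struct *mm)
    {
            /* 32MB of brk randomization for 32-bit tasks ... */
            if (!IS_ENABLED(CONFIG_64BIT) || is_compat_task())
                    return randomize_page(mm->brk, SZ_32M);

            /* ... and 1GB for 64-bit tasks. */
            return randomize_page(mm->brk, SZ_1G);
    }

The same !IS_ENABLED(CONFIG_64BIT) || is_compat_task() test is what replaces the mips-only TASK_IS_32BIT_ADDR check mentioned in the previous entry.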
2019-09-24 | mips: use STACK_TOP when computing mmap base address | Alexandre Ghiti | 1 | -2/+2
The mmap base address must be computed with respect to the stack top address; using TASK_SIZE is wrong since STACK_TOP and TASK_SIZE are not equivalent. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Alexandre Ghiti <[email protected]> Acked-by: Kees Cook <[email protected]> Acked-by: Paul Burton <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: James Hogan <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-09-24 | mips: properly account for stack randomization and stack guard gap | Alexandre Ghiti | 1 | -2/+12
This commit takes care of stack randomization and stack guard gap when computing mmap base address and checks if the task asked for randomization. This fixes the problem uncovered and not fixed for arm here: https://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Alexandre Ghiti <[email protected]> Acked-by: Kees Cook <[email protected]> Acked-by: Paul Burton <[email protected]> Reviewed-by: Luis Chamberlain <[email protected]> Cc: Albert Ou <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: James Hogan <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
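A sketch of the kind of mmap_base() computation this describes, following the pattern used by the equivalent fixes on other architectures (MIN_GAP/MAX_GAP and the overflow handling are assumptions here, not taken from the mips patch itself):

    static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
    {
            unsigned long gap = rlim_stack->rlim_cur;
            unsigned long pad = stack_guard_gap;

            /* Account for stack randomization if necessary. */
            if (current->flags & PF_RANDOMIZE)
                    pad += (STACK_RND_MASK << PAGE_SHIFT);

            /* Values close to RLIM_INFINITY can overflow. */
            if (gap + pad > gap)
                    gap += pad;

            if (gap < MIN_GAP)
                    gap = MIN_GAP;
            else if (gap > MAX_GAP)
                    gap = MAX_GAP;

            return PAGE_ALIGN(STACK_TOP - gap - rnd);
    }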
2019-09-22 | Merge tag 'mips_5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux | Linus Torvalds | 7 | -347/+108
Pull MIPS updates from Paul Burton: "Main MIPS changes: - boot_mem_map is removed, providing a nice cleanup made possible by the recent removal of bootmem. - Some fixes to atomics, in general providing compiler barriers for smp_mb__{before,after}_atomic plus fixes specific to Loongson CPUs or MIPS32 systems using cmpxchg64(). - Conversion to the new generic VDSO infrastructure courtesy of Vincenzo Frascino. - Removal of undefined behavior in set_io_port_base(), fixing the behavior of some MIPS kernel configurations when built with recent clang versions. - Initial MIPS32 huge page support, functional on at least Ingenic SoCs. - pte_special() is now supported for some configurations, allowing among other things generic fast GUP to be used. - Miscellaneous fixes & cleanups. And platform specific changes: - Major improvements to Ingenic SoC support from Paul Cercueil, mostly enabled by the inclusion of the new TCU (timer-counter unit) drivers he's spent a very patient year or so working on. Plus some fixes for X1000 SoCs from Zhou Yanjie. - Netgear R6200 v1 systems are now supported by the bcm47xx platform. - DT updates for BMIPS, Lantiq & Microsemi Ocelot systems" * tag 'mips_5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (89 commits) MIPS: Detect bad _PFN_SHIFT values MIPS: Disable pte_special() for MIPS32 with RiXi MIPS: ralink: deactivate PCI support for SOC_MT7621 mips: compat: vdso: Use legacy syscalls as fallback MIPS: Drop Loongson _CACHE_* definitions MIPS: tlbex: Remove cpu_has_local_ebase MIPS: tlbex: Simplify r3k check MIPS: Select R3k-style TLB in Kconfig MIPS: PCI: refactor ioc3 special handling mips: remove ioremap_cachable mips/atomic: Fix smp_mb__{before,after}_atomic() mips/atomic: Fix loongson_llsc_mb() wreckage mips/atomic: Fix cmpxchg64 barriers MIPS: Octeon: remove duplicated include from dma-octeon.c firmware: bcm47xx_nvram: Allow COMPILE_TEST firmware: bcm47xx_nvram: Correct size_t printf format MIPS: Treat Loongson Extensions as ASEs MIPS: Remove dev_err() usage after platform_get_irq() MIPS: dts: mscc: describe the PTP ready interrupt MIPS: dts: mscc: describe the PTP register range ...
2019-09-20 | MIPS: Detect bad _PFN_SHIFT values | Paul Burton | 1 | -0/+6
Two recent commits have fixed issues where _PFN_SHIFT grew too large for some MIPS32 kernel configurations due to the introduction of too many pgprot bits in our PTEs. Tracking down such issues can be tricky, so add a BUILD_BUG_ON() to help. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected]
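The check amounts to a one-line build-time assertion along these lines (the exact condition in the tree may be narrower, e.g. limited to 32-bit configurations):

    /* PTEs must still have room for the PFN above all the pgprot bits. */
    BUILD_BUG_ON(_PFN_SHIFT > PAGE_SHIFT);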
2019-09-03 | MIPS: tlbex: Remove cpu_has_local_ebase | Paul Burton | 1 | -7/+2
The cpu_has_local_ebase macro is, confusingly, not used to indicate whether the EBase register is local to a CPU or not. Instead it indicates whether we want to generate the TLB refill exception vector each time a CPU is brought online. Doing this makes little sense on any system, since we always use the same value for EBase & thus we cannot have different TLB refill exception handlers per CPU. Regenerating the code is not only pointless but also can be actively harmful, as commit 8759934e2b6b ("MIPS: Build uasm-generated code only once to avoid CPU Hotplug problem") described. That commit introduced cpu_has_local_ebase to disable the handler regeneration for Loongson machines, but this is by no means a Loongson-specific problem. Remove cpu_has_local_ebase & simply generate the TLB refill handler once during boot, just like the rest of the TLB exception handlers. Signed-off-by: Paul Burton <[email protected]> Reviewed-by: Philippe Mathieu-Daudé <[email protected]> Cc: [email protected]
2019-09-03 | MIPS: tlbex: Simplify r3k check | Paul Burton | 1 | -30/+22
We already know whether a CPU has r3k style exceptions, including TLB exceptions, by checking cpu_has_3kex. Remove the list of CPU types in build_tlb_refill_handler() & check cpu_has_3kex instead. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected]
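The shape of the simplification, sketched (the real build_tlb_refill_handler() has additional config guards and run-once handling):

    void build_tlb_refill_handler(void)
    {
            /* Previously: a switch over CPU_R3000, CPU_TX39XX, ... */
            if (cpu_has_3kex)
                    build_r3000_tlb_refill_handler();
            else
                    build_r4000_tlb_refill_handler();
    }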
2019-09-03 | MIPS: Select R3k-style TLB in Kconfig | Paul Burton | 1 | -2/+3
Currently areas where we need to determine whether the TLB is R3k-style need to check for either of CONFIG_CPU_R3000 || CONFIG_CPU_TX39XX. Introduce a new CONFIG_CPU_R3K_TLB & select it from both of the above, allowing us to simplify checks for R3k-style TLBs by only checking for this new Kconfig option. Signed-off-by: Paul Burton <[email protected]> Reviewed-by: Philippe Mathieu-Daudé <[email protected]> Cc: [email protected]
2019-08-29 | dma-mapping: remove arch_dma_mmap_pgprot | Christoph Hellwig | 1 | -8/+0
arch_dma_mmap_pgprot is used for two things: 1) to override the "normal" uncached page attributes for mapping memory coherent to devices that can't snoop the CPU caches 2) to provide the special DMA_ATTR_WRITE_COMBINE semantics on older arm systems and some mips platforms Replace one with the pgprot_dmacoherent macro that is already provided by arm and much simpler to use, and lift the DMA_ATTR_WRITE_COMBINE handling to common code with an explicit arch opt-in. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> # m68k Acked-by: Paul Burton <[email protected]> # mips
2019-08-23 | MIPS: mm: Fix highmem compile | Paul Burton | 1 | -0/+2
Commit a5718fe8f70f ("MIPS: mm: Drop boot_mem_map") removed the definition of a page variable for some reason, but that variable is still used. Restore it to fix compilation with CONFIG_HIGHMEM enabled. Signed-off-by: Paul Burton <[email protected]>
2019-08-23 | MIPS: mm: Drop boot_mem_map | Jiaxun Yang | 1 | -57/+37
Initialize maar by resource map and replace page_is_ram by memblock_is_memory. Signed-off-by: Jiaxun Yang <[email protected]> [[email protected]: - Fix bad MAAR address calculations. - Use ALIGN() & define maar_align to make it clearer what's going on with address manipulations. - Drop the new used field from struct maar_config. - Rework the RAM walk to avoid iterating over the cfg array needlessly to find the first unused entry, then count used entries at the end. Instead just keep the count as we go.] Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected]
2019-08-11 | MIPS: tlbex: Explicitly cast _PAGE_NO_EXEC to a boolean | Nathan Chancellor | 1 | -1/+1
clang warns:

    arch/mips/mm/tlbex.c:634:19: error: use of logical '&&' with constant operand [-Werror,-Wconstant-logical-operand]
            if (cpu_has_rixi && _PAGE_NO_EXEC) {
                             ^  ~~~~~~~~~~~~~
    arch/mips/mm/tlbex.c:634:19: note: use '&' for a bitwise operation
            if (cpu_has_rixi && _PAGE_NO_EXEC) {
                             ^~
                             &
    arch/mips/mm/tlbex.c:634:19: note: remove constant to silence this warning
            if (cpu_has_rixi && _PAGE_NO_EXEC) {
                            ~^~~~~~~~~~~~~~~~
    1 error generated.

Explicitly cast this value to a boolean so that clang understands we intend for this to be a non-zero value. Fixes: 00bf1c691d08 ("MIPS: tlbex: Avoid placing software PTE bits in Entry* PFN fields") Link: https://github.com/ClangBuiltLinux/linux/issues/609 Signed-off-by: Nathan Chancellor <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
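The cast described amounts to an explicit boolean conversion of the constant; roughly:

    /* Before: clang objects to a constant operand of '&&'. */
    if (cpu_has_rixi && _PAGE_NO_EXEC) { /* ... */ }

    /* After: !! makes the non-zero test explicit; the logic is unchanged. */
    if (cpu_has_rixi && !!_PAGE_NO_EXEC) { /* ... */ }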
2019-08-05 | MIPS: Ingenic: Fix bugs when detecting X1000's L2 cache. | Zhou Yanjie | 1 | -7/+20
1. Fix bugs when detecting the L2 cache sets value. 2. Fix bugs when detecting the L2 cache ways value. Signed-off-by: Zhou Yanjie <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected]
2019-07-23 | MIPS: Remove unused R8000 CPU support | Paul Burton | 3 | -244/+0
Our R8000 CPU support can only be included if a system selects CONFIG_SYS_HAS_CPU_R8000. No system does, making all R8000-related CPU support dead code. Remove it. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected]
2019-07-23 | MIPS: Remove unused R5432 CPU support | Paul Burton | 2 | -2/+0
Our R5432 CPU support can only be included if a system selects CONFIG_SYS_HAS_CPU_R5432. No system does, making all R5432-related CPU support dead code. Remove it. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected]
2019-07-23 | MIPS: Remove unused R4300 CPU support | Paul Burton | 2 | -2/+0
Our R4300 CPU support can only be included if a system selects CONFIG_SYS_HAS_CPU_R4300. No system does, making all R4300-related CPU support dead code. Remove it. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected]
2019-07-21 | MIPS: Rename JZRISC to XBURST | Paul Cercueil | 2 | -2/+2
The real name of the CPU present in the JZ line of SoCs from Ingenic is XBurst, not JZRISC. Signed-off-by: Paul Cercueil <[email protected]> [[email protected]: Leave /proc/cpuinfo string as-is.] Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
2019-07-21 | MIPS: Add partial 32-bit huge page support | Daniel Silsby | 1 | -0/+20
This adds initial support for huge pages to 32-bit MIPS systems. Systems with extended addressing enabled (EVA,XPA,Alchemy/Netlogic) are not yet supported. With huge pages enabled, this implementation will increase page table memory overhead to match that of a 64-bit MIPS system. However, the cache-friendliness of page table walks is not affected significantly. Signed-off-by: Daniel Silsby <[email protected]> Signed-off-by: Paul Cercueil <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
2019-07-12 | Merge tag 'dma-mapping-5.3' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 2 | -19/+9
Pull dma-mapping updates from Christoph Hellwig: - move the USB special case that bounced DMA through a device bar into the USB code instead of handling it in the common DMA code (Laurentiu Tudor and Fredrik Noring) - don't dip into the global CMA pool for single page allocations (Nicolin Chen) - fix a crash when allocating memory for the atomic pool failed during boot (Florian Fainelli) - move support for MIPS-style uncached segments to the common code and use that for MIPS and nios2 (me) - make support for DMA_ATTR_NON_CONSISTENT and DMA_ATTR_NO_KERNEL_MAPPING generic (me) - convert nds32 to the generic remapping allocator (me) * tag 'dma-mapping-5.3' of git://git.infradead.org/users/hch/dma-mapping: (29 commits) dma-mapping: mark dma_alloc_need_uncached as __always_inline MIPS: only select ARCH_HAS_UNCACHED_SEGMENT for non-coherent platforms usb: host: Fix excessive alignment restriction for local memory allocations lib/genalloc.c: Add algorithm, align and zeroed family of DMA allocators nios2: use the generic uncached segment support in dma-direct nds32: use the generic remapping allocator for coherent DMA allocations arc: use the generic remapping allocator for coherent DMA allocations dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code dma-mapping: add a dma_alloc_need_uncached helper openrisc: remove the partial DMA_ATTR_NON_CONSISTENT support arc: remove the partial DMA_ATTR_NON_CONSISTENT support arm-nommu: remove the partial DMA_ATTR_NON_CONSISTENT support ARM: dma-mapping: allow larger DMA mask than supported dma-mapping: truncate dma masks to what dma_addr_t can hold iommu/dma: Apply dma_{alloc,free}_contiguous functions dma-remap: Avoid de-referencing NULL atomic_pool MIPS: use the generic uncached segment support in dma-direct dma-direct: provide generic support for uncached kernel segments au1100fb: fix DMA API abuse ...
2019-07-12 | MIPS: use the generic get_user_pages_fast code | Christoph Hellwig | 2 | -304/+0
The mips code is mostly equivalent to the generic one, minus various bugfixes and an arch override for gup_fast_permitted. Note that this defines ARCH_HAS_PTE_SPECIAL for mips as mips has pte_special and pte_mkspecial implemented and used in the existing gup code. They are no-op stubs, though, which makes me a little unsure whether this is really the right thing to do. Note that this also adds back a missing cpu_has_dc_aliases check for __get_user_pages_fast, which the old code was only doing for get_user_pages_fast. This clearly looks like an oversight, as any condition that makes get_user_pages_fast unsafe also applies to __get_user_pages_fast. [[email protected]: MIPS: don't select ARCH_HAS_PTE_SPECIAL] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Jason Gunthorpe <[email protected]> Tested-by: Guenter Roeck <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: David Miller <[email protected]> Cc: James Hogan <[email protected]> Cc: Khalid Aziz <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Paul Burton <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Rich Felker <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-07-08 | Merge branch 'siginfo-linus' of ↵ | Linus Torvalds | 1 | -2/+2
git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace Pull force_sig() argument change from Eric Biederman: "A source of error over the years has been that force_sig has taken a task parameter when it is only safe to use force_sig with the current task. The force_sig function is built for delivering synchronous signals such as SIGSEGV where the userspace application caused a synchronous fault (such as a page fault) and the kernel responded with a signal. Because the name force_sig does not make this clear, and because the force_sig takes a task parameter the function force_sig has been abused for sending other kinds of signals over the years. Slowly those have been fixed when the oopses have been tracked down. This set of changes fixes the remaining abusers of force_sig and carefully rips out the task parameter from force_sig and friends making this kind of error almost impossible in the future" * 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace: (27 commits) signal/x86: Move tsk inside of CONFIG_MEMORY_FAILURE in do_sigbus signal: Remove the signal number and task parameters from force_sig_info signal: Factor force_sig_info_to_task out of force_sig_info signal: Generate the siginfo in force_sig signal: Move the computation of force into send_signal and correct it. signal: Properly set TRACE_SIGNAL_LOSE_INFO in __send_signal signal: Remove the task parameter from force_sig_fault signal: Use force_sig_fault_to_task for the two calls that don't deliver to current signal: Explicitly call force_sig_fault on current signal/unicore32: Remove tsk parameter from __do_user_fault signal/arm: Remove tsk parameter from __do_user_fault signal/arm: Remove tsk parameter from ptrace_break signal/nds32: Remove tsk parameter from send_sigtrap signal/riscv: Remove tsk parameter from do_trap signal/sh: Remove tsk parameter from force_sig_info_fault signal/um: Remove task parameter from send_sigtrap signal/x86: Remove task parameter from send_sigtrap signal: Remove task parameter from force_sig_mceerr signal: Remove task parameter from force_sig signal: Remove task parameter from force_sigsegv ...
2019-06-24 | MIPS: Add missing EHB in mtc0 -> mfc0 sequence. | Dmitry Korotin | 1 | -9/+20
Add a missing EHB (Execution Hazard Barrier) in mtc0 -> mfc0 sequence. Without this execution hazard barrier it's possible for the value read back from the KScratch register to be the value from before the mtc0. Reproducible on P5600 & P6600. The hazard is documented in the MIPS Architecture Reference Manual Vol. III: MIPS32/microMIPS32 Privileged Resource Architecture (MD00088), rev 6.03 table 8.1 which includes:

    Producer  | Consumer | Hazard
    ----------|----------|----------------------------
    mtc0      | mfc0     | any coprocessor 0 register

Signed-off-by: Dmitry Korotin <[email protected]> [[email protected]: - Commit message tweaks. - Add Fixes tags. - Mark for stable back to v3.15 where P5600 support was introduced.] Signed-off-by: Paul Burton <[email protected]> Fixes: 3d8bfdd03072 ("MIPS: Use C0_KScratch (if present) to hold PGD pointer.") Fixes: 829dcc0a956a ("MIPS: Add MIPS P5600 probe support") Cc: [email protected] Cc: [email protected] # v3.15+
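In the uasm-generated handlers the required ordering looks roughly like this (a sketch; the register and scratch naming is simplified and not taken verbatim from tlbex.c):

    UASM_i_MTC0(p, 1, c0_kscratch(), scratch_reg);  /* stash $1 in KScratch */
    /* ... emitted handler body ... */
    uasm_i_ehb(p);                                  /* clear the mtc0 -> mfc0 hazard */
    UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);  /* restore $1 */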
2019-06-16 | MIPS: Fix bounds check virt_addr_valid | Hauke Mehrtens | 1 | -1/+1
The bounds check used the uninitialized variable vaddr; it should use the given parameter kaddr instead. When using the uninitialized value the compiler assumed it to be 0 and optimized this function to just return 0 in all cases. This should make the function check the range of the given address and only do the page map check in case it is in the expected range of virtual addresses. Fixes: 074a1e1167af ("MIPS: Bounds check virt_addr_valid") Cc: [email protected] # v4.12+ Cc: Paul Burton <[email protected]> Signed-off-by: Hauke Mehrtens <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected]
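Roughly the corrected shape of the check (a sketch; the exact bounds used upstream may differ):

    int __virt_addr_valid(const volatile void *kaddr)
    {
            /* Convert the parameter, not the uninitialized local it replaced. */
            unsigned long vaddr = (unsigned long)kaddr;

            if (vaddr < PAGE_OFFSET || vaddr >= MAP_BASE)
                    return 0;

            return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
    }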
2019-06-08 | Merge tag 'mips_fixes_5.2_1' of ↵ | Linus Torvalds | 1 | -1/+6
git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux Pull MIPS fixes from Paul Burton: - Declare ginvt() __always_inline due to its use of an argument as an inline asm immediate. - A VDSO build fix following Kbuild changes made this cycle. - A fix for boot failures on txx9 systems following memory initialization changes made this cycle. - Bounds check virt_addr_valid() to prevent it spuriously indicating that bogus addresses are valid, in turn fixing hardened usercopy failures that have been present since v4.12. - Build uImage.gz for pistachio systems by default, since this is the image we need in order to actually boot on a board. - Remove an unused variable in our uprobes code. * tag 'mips_fixes_5.2_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: MIPS: uprobes: remove set but not used variable 'epc' MIPS: pistachio: Build uImage.gz by default MIPS: Make virt_addr_valid() return bool MIPS: Bounds check virt_addr_valid MIPS: TXx9: Fix boot crash in free_initmem() MIPS: remove a space after -I to cope with header search paths for VDSO MIPS: mark ginvt() as __always_inline
2019-06-03 | MIPS: use the generic uncached segment support in dma-direct | Christoph Hellwig | 1 | -17/+9
Stop providing the arch alloc/free hooks and just expose the segment offset instead. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Paul Burton <[email protected]>
2019-06-03 | MIPS: remove the _dma_cache_wback_inv export | Christoph Hellwig | 1 | -2/+0
This export is not used in modular code, which is a good thing as everyone should use the proper DMA API instead. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Paul Burton <[email protected]>
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156 | Thomas Gleixner | 2 | -28/+2
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 1334 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Richard Fontana <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 | Thomas Gleixner | 1 | -5/+1
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>