path: root/arch/mips/mm
Age  Commit message  (Author)  [Files, Lines changed]
2015-09-10  dma-mapping: consolidate dma_{alloc,free}_{attrs,coherent}  (Christoph Hellwig) [1 file, -7/+0]
Since 2009 we have a nice asm-generic header implementing lots of DMA API functions for architectures using struct dma_map_ops, but unfortunately it's still missing a lot of APIs that all architectures still have to duplicate. This series consolidates the remaining functions, although we still need arch opt outs for two of them as a few architectures have very non-standard implementations. This patch (of 5): The coherent DMA allocator works the same over all architectures supporting dma_map operations. This patch consolidates them and converges the minor differences: - the debug_dma helpers are now called from all architectures, including those that were previously missing them - dma_alloc_from_coherent and dma_release_from_coherent are now always called from the generic alloc/free routines instead of the ops; dma-mapping-common.h always includes dma-coherent.h to get the definitions for them, or the stubs if the architecture doesn't support this feature - checks for ->alloc / ->free presence are removed. There is only one magic instance of dma_map_ops without them (mic_dma_ops) and that one is x86 only anyway. Besides that only x86 needs special treatment to replace a default device if none is passed and tweak the gfp_flags. An optional arch hook is provided for that. [[email protected]: fix build] [[email protected]: fix xtensa] Signed-off-by: Christoph Hellwig <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Russell King <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Michal Simek <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Andy Shevchenko <[email protected]> Signed-off-by: Guenter Roeck <[email protected]> Signed-off-by: Max Filippov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
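For illustration, a minimal sketch of the consolidated allocator described above, roughly as it would sit in dma-mapping-common.h. This is not the exact patch; in particular the arch_dma_alloc_attrs() name is inferred from the commit's mention of an "optional arch hook" and should be treated as an assumption.

static inline void *dma_alloc_attrs(struct device *dev, size_t size,
                                    dma_addr_t *dma_handle, gfp_t flag,
                                    struct dma_attrs *attrs)
{
        struct dma_map_ops *ops = get_dma_ops(dev);
        void *cpu_addr;

        BUG_ON(!ops);

        /* The per-device coherent pool is handled here for everyone now. */
        if (dma_alloc_from_coherent(dev, size, dma_handle, &cpu_addr))
                return cpu_addr;

        /* Optional arch hook (x86): substitute a default device, fix gfp. */
        if (!arch_dma_alloc_attrs(&dev, &flag))
                return NULL;

        cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs);
        debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr);
        return cpu_addr;
}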
2015-09-03  MIPS: Set trap_no field in thread_struct on exception.  (Ralf Baechle) [1 file, -4/+4]
This reverts commit 7281cd22973008a782860e48ed8d85d00204168c and adds actual functionality to use the field.
2015-09-03  MIPS: Add implementation of dma_map_ops.mmap()  (Alex Smith) [1 file, -0/+35]
The generic implementation of dma_map_ops.mmap(), dma_common_mmap(), is not correct for non-coherent devices. It expects to be passed a virtual address previously returned by dma_alloc_coherent(), which for a non-coherent device will return a KSEG1 address. It then attempts to convert that virtual address to a physical address using virt_to_page() which will yield an incorrect address. Also, dma_common_mmap() does not handle the DMA_ATTR_WRITE_COMBINE attribute, and therefore dma_mmap_writecombine() will not actually set the appropriate pgprot_t flags for write combining. This patch adds an implementation of dma_map_ops.mmap() that correctly handles KSEG1 addresses, and enables write combining when requested. Signed-off-by: Alex Smith <[email protected]> Cc: Sadegh Abbasi <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10808/ Signed-off-by: Ralf Baechle <[email protected]>
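A hedged sketch of what such an mmap hook looks like follows. It is simplified rather than the exact patch, and CAC_ADDR() stands in for whatever helper dma-default.c uses to turn a KSEG1 allocation back into its cached/linear alias.

static int mips_dma_mmap_sketch(struct device *dev, struct vm_area_struct *vma,
                                void *cpu_addr, dma_addr_t dma_addr,
                                size_t size, struct dma_attrs *attrs)
{
        unsigned long user_count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
        unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
        unsigned long addr = (unsigned long)cpu_addr;
        unsigned long off = vma->vm_pgoff;
        unsigned long pfn;

        /* Undo the uncached KSEG1 mapping first: virt_to_page() on a
         * KSEG1 address would yield the wrong page. */
        if (!plat_device_is_coherent(dev))
                addr = CAC_ADDR(addr);
        pfn = page_to_pfn(virt_to_page((void *)addr));

        if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs))
                vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
        else
                vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

        if (off >= count || user_count > count - off)
                return -ENXIO;

        return remap_pfn_range(vma, vma->vm_start, pfn + off,
                               user_count << PAGE_SHIFT, vma->vm_page_prot);
}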
2015-09-03  MIPS: Refactor dumping of TLB registers for r3k/r4k  (James Hogan) [1 file, -1/+1]
The TLB registers are dumped in a couple of places: - sysrq_tlbdump_single() - when dumping TLB state. - do_mcheck() - in response to a machine check error. The main TLB registers also differ between r3k and r4k, but r4k appears to be assumed. Refactor this code into a dump_tlb_regs() function, implemented for both r3k and r4k, and used by both of the above functions. Fixes: d1e9a4f54735 ("MIPS: Add SysRq operation to dump TLBs on all CPUs") Suggested-by: Maciej W. Rozycki <[email protected]> Signed-off-by: James Hogan <[email protected]> Cc: Maciej W. Rozycki <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10721/ Signed-off-by: Ralf Baechle <[email protected]>
2015-09-03  MIPS: mm: default platform_maar_init using bootmem data  (Paul Burton) [1 file, -2/+34]
Introduce a default weak implementation of platform_maar_init which makes use of the data that platforms already provide to the bootmem allocator. This should hopefully cover the most common configurations, reduce the duplication of information provided by platforms & leave platforms with the option of providing a custom implementation if required. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Cc: Paolo Bonzini <[email protected]> Cc: Steven J. Hill <[email protected]> Cc: [email protected] Cc: Ard Biesheuvel <[email protected]> Patchwork: https://patchwork.linux-mips.org/patch/10676/ Signed-off-by: Ralf Baechle <[email protected]>
2015-08-26  MIPS: Add platform callback before initializing the L2 cache  (Markos Chandras) [1 file, -0/+10]
Allow platforms to perform platform-specific steps before configuring the L2 cache. This is necessary for platforms with CM3 since the L2 parameters no longer live in the Config2 register. Signed-off-by: Markos Chandras <[email protected]> Cc: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10642/ Signed-off-by: Ralf Baechle <[email protected]>
2015-08-26  MIPS: CM3: Add support for CM3 L2 cache.  (Paul Burton) [1 file, -0/+32]
Detect the L2 cache configuration from GCR_L2_CONFIG when a CM3 is present in the system, rather than from Config2 which does not expose the L2 configuration on I6400. Signed-off-by: Paul Burton <[email protected]> Signed-off-by: Markos Chandras <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10641/ Signed-off-by: Ralf Baechle <[email protected]>
2015-08-26  MIPS: Add cases for CPU_I6400  (Markos Chandras) [1 file, -0/+1]
Add a CPU_I6400 case to various switch statements, doing the same thing as for CPU_P5600. Signed-off-by: Markos Chandras <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10635/ Signed-off-by: Ralf Baechle <[email protected]>
2015-08-03  MIPS: Partially disable RIXI support.  (Ralf Baechle) [1 file, -4/+4]
Execution of break instructions, trap instructions, emulation of unaligned loads or floating point instructions - anything that tries to read the instruction's opcode from userspace - needs read access to a page. RIXI (Read Inhibit / Execute Inhibit) support however allows the creation of pages that are executable but not readable. On such a mapping the attempted load of the opcode by the kernel is going to cause an endless loop of page faults. The quick workaround for this is to disable the combinations that the kernel currently isn't able to handle, which are the executable-but-unreadable mappings. Signed-off-by: Ralf Baechle <[email protected]>
2015-08-03  MIPS: Handle page faults of executable but unreadable pages correctly.  (Ralf Baechle) [1 file, -1/+2]
Without this we end up taking exceptions in an endless loop, hanging the thread. Signed-off-by: Ralf Baechle <[email protected]>
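The shape of the fix, as a hedged sketch (the helper name and exact placement inside do_page_fault() are illustrative, not the mainline diff): a read fault on a VMA without VM_READ is acceptable only when it is the instruction fetch itself.

/* Sketch: decide whether a read fault on an execute-only mapping is OK. */
static bool fault_ok_on_exec_only(struct pt_regs *regs,
                                  struct vm_area_struct *vma,
                                  unsigned long address)
{
        if (vma->vm_flags & VM_READ)
                return true;
        /* The fetch itself faults with EPC == address; let it proceed.
         * A data read of the opcode (kernel emulation paths) must fail
         * with SIGSEGV instead of faulting forever. */
        return exception_epc(regs) == address;
}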
2015-07-10  MIPS: c-r4k: Extend way_string array  (Paul Burton) [1 file, -1/+3]
The L2 cache in the I6400 core has 16 ways, so extend the way_string array to take such caches into account. [[email protected]: Other already supported CPUs are free to support more than 8 ways of cache as well.] Signed-off-by: Paul Burton <[email protected]> Signed-off-by: Markos Chandras <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10640/ Signed-off-by: Ralf Baechle <[email protected]>
2015-07-10  MIPS: c-r4k: Fix cache flushing for MT cores  (Markos Chandras) [1 file, -3/+11]
MT_SMP is not the only SMP option for MT cores. The MT_SMP option allows more than one VPE per core to appear as a secondary CPU in the system. Because of how CM works, it propagates the address-based cache ops to the secondary cores but not the index-based ones. Because of that, the code does not use IPIs to flush the L1 caches on secondary cores because the CM would have done that already. However, the CM functionality is independent of the type of SMP kernel so even in non-MT kernels, IPIs are not necessary. As a result of which, we change the conditional to depend on the CM presence. Moreover, since VPEs on the same core share the same L1 caches, there is no need to send an IPI on all of them so we calculate a suitable cpumask with only one VPE per core. Signed-off-by: Markos Chandras <[email protected]> Cc: <[email protected]> # 3.15+ Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10654/ Signed-off-by: Ralf Baechle <[email protected]>
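A hedged sketch of the one-VPE-per-core mask computation described above (simplified; the mainline code differs in detail and is assumed here to use cpu_data[].core to identify siblings):

static void sketch_mask_one_vpe_per_core(struct cpumask *mask)
{
        int cpu, sibling;
        bool covered;

        cpumask_clear(mask);
        for_each_online_cpu(cpu) {
                covered = false;
                /* Sibling VPEs share the L1 caches: one IPI per core. */
                for_each_cpu(sibling, mask)
                        if (cpu_data[sibling].core == cpu_data[cpu].core)
                                covered = true;
                if (!covered)
                        cpumask_set_cpu(cpu, mask);
        }
}

A caller would fall back to IPIs over this mask only when no CM is present to globalize the index-based cache ops.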
2015-06-27  Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus  (Linus Torvalds) [6 files, -39/+69]
Pull MIPS updates from Ralf Baechle: - Improvements to the tlb_dump code - KVM fixes - Add support for appended DTB - Minor improvements to the R12000 support - Various platform improvements for BCM47xx - The usual pile of minor cleanups - A number of BPF fixes and improvements - Some improvements to the support for R3000 and DECstations - Some improvements to the ATH79 platform support - A major patchset for the JZ4740 SOC adding support for the CI20 platform - Add support for the Pistachio SOC - Minor BMIPS/BCM63xx platform support improvements. - Avoid "SYNC 0" as memory barrier when unlocking spinlocks - Add support for the XWR-1750 board. - Paul's __cpuinit/__cpuinitdata cleanups. - New Malta CPU boards support large memory so enable ZONE_DMA32. * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (131 commits) MIPS: spinlock: Adjust arch_spin_lock back-off time MIPS: asmmacro: Ensure 64-bit FP registers are used with MSA MIPS: BCM47xx: Simplify handling SPROM revisions MIPS: Cobalt Don't use module_init in non-modular MTD registration. MIPS: BCM47xx: Move NVRAM driver to the drivers/firmware/ MIPS: use for_each_sg() MIPS: BCM47xx: Don't select BCMA_HOST_PCI MIPS: BCM47xx: Add helper variable for storing NVRAM length MIPS: IRQ/IP27: Move IRQ allocation API to platform code. MIPS: Replace smp_mb with release barrier function in unlocks. MIPS: i8259: DT support MIPS: Malta: Basic DT plumbing MIPS: include errno.h for ENODEV in mips-cm.h MIPS: Define GCR_GIC_STATUS register fields MIPS: BPF: Introduce BPF ASM helpers MIPS: BPF: Use BPF register names to describe the ABI MIPS: BPF: Move register definition to the BPF header MIPS: net: BPF: Replace RSIZE with SZREG MIPS: BPF: Free up some callee-saved registers MIPS: Xtalk: Update xwidget.h with known Xtalk device numbers ...
2015-06-24  mm/hugetlb: reduce arch dependent code about huge_pmd_unshare  (Zhang Zhen) [1 file, -5/+0]
Currently we have many duplicates in definitions of huge_pmd_unshare. In all architectures this function just returns 0 when CONFIG_ARCH_WANT_HUGE_PMD_SHARE is N. This patch puts the default implementation in mm/hugetlb.c and lets these architectures use the common code. Signed-off-by: Zhang Zhen <[email protected]> Cc: Russell King <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Tony Luck <[email protected]> Cc: James Hogan <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: David Rientjes <[email protected]> Cc: James Yang <[email protected]> Cc: Aneesh Kumar <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-06-22  Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds) [3 files, -3/+8]
Pull scheduler updates from Ingo Molnar: "The main changes are: - lockless wakeup support for futexes and IPC message queues (Davidlohr Bueso, Peter Zijlstra) - Replace spinlocks with atomics in thread_group_cputimer(), to improve scalability (Jason Low) - NUMA balancing improvements (Rik van Riel) - SCHED_DEADLINE improvements (Wanpeng Li) - clean up and reorganize preemption helpers (Frederic Weisbecker) - decouple page fault disabling machinery from the preemption counter, to improve debuggability and robustness (David Hildenbrand) - SCHED_DEADLINE documentation updates (Luca Abeni) - topology CPU masks cleanups (Bartosz Golaszewski) - /proc/sched_debug improvements (Srikar Dronamraju)" * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (79 commits) sched/deadline: Remove needless parameter in dl_runtime_exceeded() sched: Remove superfluous resetting of the p->dl_throttled flag sched/deadline: Drop duplicate init_sched_dl_class() declaration sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target sched/deadline: Make init_sched_dl_class() __init sched/deadline: Optimize pull_dl_task() sched/preempt: Add static_key() to preempt_notifiers sched/preempt: Fix preempt notifiers documentation about hlist_del() within unsafe iteration sched/stop_machine: Fix deadlock between multiple stop_two_cpus() sched/debug: Add sum_sleep_runtime to /proc/<pid>/sched sched/debug: Replace vruntime with wait_sum in /proc/sched_debug sched/debug: Properly format runnable tasks in /proc/sched_debug sched/numa: Only consider less busy nodes as numa balancing destinations Revert 095bebf61a46 ("sched/numa: Do not move past the balance point if unbalanced") sched/fair: Prevent throttling in early pick_next_task_fair() preempt: Reorganize the notrace definitions a bit preempt: Use preempt_schedule_context() as the official tracing preemption point sched: Make preempt_schedule_context() function-tracing safe x86: Remove cpu_sibling_mask() and cpu_core_mask() x86: Replace cpu_**_mask() with topology_**_cpumask() ...
2015-06-21  MIPS: use for_each_sg()  (Akinobu Mita) [1 file, -10/+20]
This replaces the plain loop over the sglist array with for_each_sg() macro which consists of sg_next() function calls. Since MIPS doesn't select ARCH_HAS_SG_CHAIN, it is not necessary to use for_each_sg() in order to loop over each sg element. But this can help find problems with drivers that do not properly initialize their sg tables when CONFIG_DEBUG_SG is enabled. Signed-off-by: Akinobu Mita <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9930/ Signed-off-by: Ralf Baechle <[email protected]>
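As a hedged sketch of the conversion (assuming the page-based __dma_sync() helper in dma-default.c; not the literal diff), the open-coded pointer walk becomes:

static void sketch_sync_sg_for_device(struct device *dev,
                                      struct scatterlist *sglist, int nents,
                                      enum dma_data_direction direction)
{
        struct scatterlist *sg;
        int i;

        /* Before: for (i = 0; i < nents; i++, sglist++) { ... } */
        for_each_sg(sglist, sg, nents, i) {
                if (!plat_device_is_coherent(dev))
                        __dma_sync(sg_page(sg), sg->offset, sg->length,
                                   direction);
        }
}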
2015-06-21  MIPS: tlbex: Avoid unnecessary _PAGE_PRESENT shifts  (James Hogan) [1 file, -6/+17]
Commit c5b367835cfc ("MIPS: Add support for XPA.") added generation of a shift by _PAGE_PRESENT_SHIFT in build_pte_present() and build_pte_writable(), however except for the XPA case this is always zero making it unnecessary. Make the shift conditional upon _PAGE_PRESENT_SHIFT being non-zero to save an instruction in those cases. Fixes: c5b367835cfc ("MIPS: Add support for XPA.") Signed-off-by: James Hogan <[email protected]> Cc: Steven J. Hill <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9889/ Signed-off-by: Ralf Baechle <[email protected]>
2015-06-21  MIPS: tlbex: Fix broken offsets on r2 without XPA  (James Hogan) [1 file, -4/+8]
Commit c5b367835cfc ("MIPS: Add support for XPA.") changed build_pte_present() and build_pte_writable() to assume a constant offset of _PAGE_READ and _PAGE_WRITE relative to _PAGE_PRESENT, however this is no longer true for some MIPS32R2 builds since commit be0c37c985ed ("MIPS: Rearrange PTE bits into fixed positions.") which moved the _PAGE_READ PTE bit away from the _PAGE_PRESENT bit, with the _PAGE_WRITE bit falling into its place. Make use of the _PAGE_READ and _PAGE_WRITE definitions to calculate the correct mask to apply instead of hard coding 3 (for _PAGE_PRESENT | _PAGE_READ) or 5 (for _PAGE_PRESENT | _PAGE_WRITE). Fixes: c5b367835cfc ("MIPS: Add support for XPA.") Signed-off-by: James Hogan <[email protected]> Cc: Steven J. Hill <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9888/ Signed-off-by: Ralf Baechle <[email protected]>
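A hedged sketch covering both tlbex fixes above; this is not the mainline build_pte_present(), but it shows the conditional shift and the mask derived from the PTE bit definitions instead of a hard-coded 3 or 5.

static void sketch_pte_present_check(u32 **p, struct uasm_reloc **r,
                                     unsigned int pte, unsigned int t,
                                     enum label_id lid)
{
        const unsigned int mask =
                (_PAGE_PRESENT | _PAGE_READ) >> _PAGE_PRESENT_SHIFT;

        if (_PAGE_PRESENT_SHIFT)        /* skip a useless shift by zero */
                uasm_i_srl(p, t, pte, _PAGE_PRESENT_SHIFT);
        else
                uasm_i_move(p, t, pte);
        uasm_i_andi(p, t, t, mask);     /* mask from the definitions */
        uasm_i_xori(p, t, t, mask);
        uasm_il_bnez(p, r, t, lid);     /* not present/readable: nopage */
}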
2015-06-21  MIPS: tlbex.c: Remove new instance of __cpuinitdata that crept back in  (Paul Gortmaker) [1 file, -1/+1]
We removed __cpuinit support (leaving no-op stubs) quite some time ago. However a new instance was added in commit c5b367835cfc7a8ef53b9670a409ff ("MIPS: Add support for XPA.") Since we want to clobber the stubs soon, get this removed now. Signed-off-by: Paul Gortmaker <[email protected]> Cc: Steven J. Hill <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9894/ Signed-off-by: Ralf Baechle <[email protected]>
2015-06-21  MIPS: c-r4k: Remove legacy __cpuinit section that crept in  (Paul Gortmaker) [1 file, -1/+1]
We removed __cpuinit support (leaving no-op stubs) quite some time ago. However a new instance was added in commit 4caa906ee949b7002cc1558bbe3744 ("MIPS: mm: c-r4k: Build EVA {d,i}cache flushing functions") Since we want to clobber the stubs soon, get this removed now. Signed-off-by: Paul Gortmaker <[email protected]> Cc: Leonid Yegoshin <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9893/ Signed-off-by: Ralf Baechle <[email protected]>
2015-06-21  MIPS: BCM77xx: Remove legacy __cpuinit{,data} sections that crept in  (Paul Gortmaker) [1 file, -1/+1]
We removed __cpuinit support (leaving no-op stubs) quite some time ago. However a few more crept in as of commit 6ee1d93455384cef8a0426effe85da2 ("MIPS: BCM47XX: Detect more then 128 MiB of RAM (HIGHMEM)") Since we want to clobber the stubs soon, get this removed now. Signed-off-by: Paul Gortmaker <[email protected]> Cc: Rafał Miłecki <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9892/ Signed-off-by: Ralf Baechle <[email protected]>
2015-06-21  MIPS: tlb-r3k: Optimise a TLBWI barrier in TLB invalidation  (Maciej W. Rozycki) [1 file, -2/+2]
Replace an explicit barrier with a useful processor instruction in TLB invalidation, following several other such cases elsewhere in `tlb-r3k.c'. Signed-off-by: Maciej W. Rozycki <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10196/ Signed-off-by: Ralf Baechle <[email protected]>
2015-06-21  MIPS: tlb-r3k: Move CP0.Wired register initialisation to `tlb_init'  (Maciej W. Rozycki) [2 files, -5/+8]
Move the initialisation of the CP0.Wired register implemented by Toshiba TX3922 and TX3927 processors from `tx39_cache_init' to `tlb_init' where it belongs, correcting code structure and making sure initialisation does not rely on `tx39_cache_init' being called before `tlb_init' to work correctly. Make `r3k_have_wired_reg' static as it's no longer externally referred to; remove a stale declaration too. Signed-off-by: Maciej W. Rozycki <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10195/ Signed-off-by: Ralf Baechle <[email protected]>
2015-06-21  MIPS: tlb-r3k: Also invalidate wired TLB entries on boot  (Maciej W. Rozycki) [1 file, -11/+13]
Most R3k processor implementations have their 8 first TLB entries fixed as wired, so we always skip them in TLB invalidation. That however means any leftover entries present there at boot will stay throughout the life of the kernel, unless replaced with new ones. So rename `local_flush_tlb_all' to `local_flush_tlb_from' and make it accept the TLB entry to start from. Then use 0 initially at bootstrap, and the first regular entry later on, bypassing any wired entries. Wrap the latter arrangement into a new `local_flush_tlb_all' entry point. There is no need to disable interrupts in the call made from `tlb_init' because it's made before the interrupt subsystem has been initialised; this is also true for secondary processors, should we ever support R3k SMP. So move this piece of code to new `local_flush_tlb_all'. Signed-off-by: Maciej W. Rozycki <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10194/ Signed-off-by: Ralf Baechle <[email protected]>
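A hedged sketch of the resulting split (simplified; the real code also honours the Wired register on TX39 parts and the r3k ASID handling, so details here are assumptions):

void local_flush_tlb_from(int entry)
{
        unsigned long old_ctx = read_c0_entryhi() & ASID_MASK;

        write_c0_entrylo0(0);
        for (; entry < current_cpu_data.tlbsize; entry++) {
                write_c0_index(entry << 8);     /* R3k index field */
                write_c0_entryhi(KSEG1);        /* VPN that never matches */
                tlb_write_indexed();
        }
        write_c0_entryhi(old_ctx);
}

void local_flush_tlb_all(void)
{
        unsigned long flags;

        local_irq_save(flags);
        local_flush_tlb_from(8);        /* skip the 8 wired entries */
        local_irq_restore(flags);
}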
2015-06-06  MIPS: c-r4k: Fix typo in probe_scache()  (Joshua Kinard) [1 file, -1/+1]
Fixes a typo in arch/mips/mm/c-r4k.c's probe_scache(). Signed-off-by: Joshua Kinard <[email protected]> Signed-off-by: Ralf Baechle <[email protected]>
2015-05-19  mm/fault, arch: Use pagefault_disable() to check for disabled pagefaults in the handler  (David Hildenbrand) [1 file, -2/+2]
Introduce faulthandler_disabled() and use it to check for irq context and disabled pagefaults (via pagefault_disable()) in the pagefault handlers. Please note that we keep the in_atomic() checks in place - to detect whether in irq context (in which case preemption is always properly disabled). In contrast, preempt_disable() should never be used to disable pagefaults. With !CONFIG_PREEMPT_COUNT, preempt_disable() doesn't modify the preempt counter, and therefore the result of in_atomic() differs. We validate that condition by using might_fault() checks when calling might_sleep(). Therefore, add a comment to faulthandler_disabled(), describing why this is needed. faulthandler_disabled() and pagefault_disable() are defined in linux/uaccess.h, so let's properly add that include to all relevant files. This patch is based on a patch from Thomas Gleixner. Reviewed-and-tested-by: Thomas Gleixner <[email protected]> Signed-off-by: David Hildenbrand <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-05-19  sched/preempt, mm/kmap, MIPS: Disable preemption in kmap_coherent() explicitly  (David Hildenbrand) [1 file, -0/+2]
k(un)map_coherent relies on pagefault_disable() to also disable preemption. Let's make this explicit, to prepare for pagefault_disable() not touching preemption anymore. This patch is based on a patch by Yang Shi on the -rt tree: "k{un}map_coherent are just called when cpu_has_dc_aliases == 1 with VIPT cache. However, actually, the most modern MIPS processors have PIPT dcache without dcache alias issue. In such case, k{un}map_atomic will be called with preempt enabled." Reviewed-and-tested-by: Thomas Gleixner <[email protected]> Signed-off-by: David Hildenbrand <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-05-19  sched/preempt, mm/kmap: Explicitly disable/enable preemption in kmap_atomic_*  (David Hildenbrand) [1 file, -1/+4]
The existing code relies on pagefault_disable() implicitly disabling preemption, so that no schedule will happen between kmap_atomic() and kunmap_atomic(). Let's make this explicit, to prepare for pagefault_disable() not touching preemption anymore. Reviewed-and-tested-by: Thomas Gleixner <[email protected]> Signed-off-by: David Hildenbrand <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
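For reference, a hedged sketch of the MIPS kmap_atomic() with the now-explicit preempt_disable(); debug checks are omitted, and kmap_pte is the file-scope PTE pointer set up at highmem init.

void *kmap_atomic(struct page *page)
{
        unsigned long vaddr;
        int idx, type;

        preempt_disable();              /* explicit, no longer implied */
        pagefault_disable();
        if (!PageHighMem(page))
                return page_address(page);

        type = kmap_atomic_idx_push();
        idx = type + KM_TYPE_NR * smp_processor_id();
        vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
        set_pte(kmap_pte - idx, mk_pte(page, PAGE_KERNEL));
        local_flush_tlb_one((unsigned long)vaddr);

        return (void *)vaddr;
}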
2015-05-15  MIPS: tlb-r4k: Fix PG_ELPA comment  (James Hogan) [1 file, -1/+1]
The ELPA bit in PageGrain is all about large *physical* addresses, so correct the reference to "large virtual address" in the comment above where it is set for MIPS64. Signed-off-by: James Hogan <[email protected]> Cc: David Daney <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/10038/ Signed-off-by: Ralf Baechle <[email protected]>
2015-04-17  Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus  (Linus Torvalds) [7 files, -67/+148]
Pull MIPS updates from Ralf Baechle: "This is the main pull request for MIPS for Linux 4.1. Most noteworthy: - Add more Octeon-optimized crypto functions - Octeon crypto preemption and locking fixes - Little endian support for Octeon - Use correct CSR to soft reset Octeons - Support LEDs on the Octeon-based DSR-1000N - Fix PCI interrupt mapping for the Octeon-based DSR-1000N - Mark prom_free_prom_memory() as __init for a number of systems - Support for Imagination's Pistachio SOC. This includes arch and CLK bits. I'd like to merge pinctrl bits later - Improve parallelism of csum_partial for certain pipelines - Organize DTB files in subdirs like other architectures - Implement read_sched_clock for all MIPS platforms other than Octeon - Massive series of 38 fixes and cleanups for the FPU emulator / kernel - Further FPU emulator work to support new features. This sits on a separate branch which also has been pulled into the 4.1 KVM branch - Clean up and fixes for the SEAD3 eval board; remove unused file - Various updates for Netlogic platforms - A number of small updates for Loongson 3 platforms - Increase the memory limit for ATH79 platforms to 256MB - A fair number of fixes and updates for BCM47xx platforms - Finish the implementation of XPA support - MIPS FDC support. No, not floppy controller but Fast Debug Channel :) - Detect the R16000 used in SGI legacy platforms - Fix Kconfig dependencies for the SSB bus support" * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (265 commits) MIPS: Makefile: Fix MIPS ASE detection code MIPS: asm: elf: Set O32 default FPU flags MIPS: BCM47XX: Fix detecting Microsoft MN-700 & Asus WL500G MIPS: Kconfig: Disable SMP/CPS for 64-bit MIPS: Hibernate: flush TLB entries earlier MIPS: smp-cps: cpu_set FPU mask if FPU present MIPS: lose_fpu(): Disable FPU when MSA enabled MIPS: ralink: add missing symbol for RALINK_ILL_ACC MIPS: ralink: Fix bad config symbol in PCI makefile. SSB: fix Kconfig dependencies MIPS: Malta: Detect and fix bad memsize values Revert "MIPS: Avoid pipeline stalls on some MIPS32R2 cores." MIPS: Octeon: Delete override of cpu_has_mips_r2_exec_hazard. MIPS: Fix cpu_has_mips_r2_exec_hazard. MIPS: kernel: entry.S: Set correct ISA level for mips_ihb MIPS: asm: spinlock: Fix addiu instruction for R10000_LLSC_WAR case MIPS: r4kcache: Use correct base register for MIPS R6 cache flushes MIPS: Kconfig: Fix typo for the r2-to-r6 emulator kernel parameter MIPS: unaligned: Fix regular load/store instruction emulation for EVA MIPS: unaligned: Surround load/store macros in do {} while statements ...
2015-04-14  mm: expose arch_mmap_rnd when available  (Kees Cook) [1 file, -2/+2]
When an architecture fully supports randomizing the ELF load location, a per-arch mmap_rnd() function is used to find a randomized mmap base. In preparation for randomizing the location of ET_DYN binaries separately from mmap, this renames and exports these functions as arch_mmap_rnd(). Additionally introduces CONFIG_ARCH_HAS_ELF_RANDOMIZE for describing this feature on architectures that support it (which is a superset of ARCH_BINFMT_ELF_RANDOMIZE_PIE, since s390 already supports a separated ET_DYN ASLR from mmap ASLR without the ARCH_BINFMT_ELF_RANDOMIZE_PIE logic). Signed-off-by: Kees Cook <[email protected]> Cc: Hector Marco-Gisbert <[email protected]> Cc: Russell King <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: "David A. Long" <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Arun Chandran <[email protected]> Cc: Yann Droneaud <[email protected]> Cc: Min-Hua Chen <[email protected]> Cc: Paul Burton <[email protected]> Cc: Alex Smith <[email protected]> Cc: Markos Chandras <[email protected]> Cc: Vineeth Vijayan <[email protected]> Cc: Jeff Bailey <[email protected]> Cc: Michael Holzheu <[email protected]> Cc: Ben Hutchings <[email protected]> Cc: Behan Webster <[email protected]> Cc: Ismael Ripoll <[email protected]> Cc: Jan-Simon Mller <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-14  mips: extract logic for mmap_rnd()  (Kees Cook) [1 file, -8/+16]
In preparation for splitting out ET_DYN ASLR, extract the mmap ASLR selection into a separate function. Signed-off-by: Kees Cook <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Cc: Ralf Baechle <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
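The extracted helper looks roughly like the following (a sketch based on the long-standing randomization logic in arch/mips/mm/mmap.c; treat the details as approximate):

static unsigned long mmap_rnd(void)
{
        unsigned long rnd;

        rnd = (unsigned long)get_random_int();
        rnd <<= PAGE_SHIFT;
        if (TASK_IS_32BIT_ADDR)
                rnd &= 0xfffffful;      /* smaller window for 32-bit tasks */
        else
                rnd &= 0xffffffful;

        return rnd;
}

arch_pick_mmap_layout() then applies the value only when randomization is enabled for the task.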
2015-04-13  Merge branch '4.0-fixes' into mips-for-linux-next  (Ralf Baechle) [4 files, -54/+119]
2015-04-10  Revert "MIPS: Avoid pipeline stalls on some MIPS32R2 cores."  (Ralf Baechle) [1 file, -19/+2]
For a discussion, see http://patchwork.linux-mips.org/patch/9539/. This reverts commit 625c0a21700bdb90844d926a1508a17a77e369c9.
2015-04-02  MIPS: c-r4k.c: Fix the 74K D-cache alias erratum workaround  (Maciej W. Rozycki) [1 file, -8/+15]
Fix the 74K D-cache alias erratum workaround so that it actually works. Our current code sets MIPS_CACHE_VTAG for the D-cache, but that flag only has any effect for the I-cache. Additionally MIPS_CACHE_PINDEX is set for the D-cache if CP0.Config7.AR is also set for an affected processor, leading to confusing information in the bootstrap log (the flag isn't used beyond that). So delete the setting of MIPS_CACHE_VTAG and rely on MIPS_CACHE_ALIASES, set in a common place, removing I-cache coherency issues seen in GDB testing with software breakpoints, gdbserver and ptrace(2), on affected systems. While at it add a little piece of explanation of what CP0.Config6.SYND is so that people do not have to chase documentation. Signed-off-by: Maciej W. Rozycki <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/8507/ Signed-off-by: Ralf Baechle <[email protected]>
2015-04-01  MIPS: Add R16000 detection  (Joshua Kinard) [4 files, -4/+11]
This allows the kernel to correctly detect an R16000 MIPS CPU on systems that have those. Otherwise, such systems will detect the CPU as an R14000, due to similarities in the CPU PRId value. Signed-off-by: Joshua Kinard <[email protected]> Cc: Linux MIPS List <[email protected]> Patchwork: https://patchwork.linux-mips.org/patch/9092/ Signed-off-by: Ralf Baechle <[email protected]>
2015-04-01  MIPS: DMA: Implement platform hook to perform post-DMA cache flushes.  (Ralf Baechle) [1 file, -1/+3]
Signed-off-by: Ralf Baechle <[email protected]>
2015-03-25  MIPS: Fix race condition in lazy cache flushing.  (Lars Persson) [1 file, -0/+12]
The lazy cache flushing implemented in the MIPS kernel suffers from a race condition that is exposed by do_set_pte() in mm/memory.c. A pre-condition is a file-system that writes to the page from the CPU in its readpage method and then calls flush_dcache_page(). One example is ubifs. Another pre-condition is that the dcache flush is postponed in __flush_dcache_page(). Upon a page fault for an executable mapping not existing in the page-cache, the following will happen:
1. Write to the page
2. flush_dcache_page
3. flush_icache_page
4. set_pte_at
5. update_mmu_cache (commits the flush of a dcache-dirty page)
Between steps 4 and 5 another thread can hit the same page and it will encounter a valid pte. Because the data still is in the L1 dcache the CPU will fetch stale data from L2 into the icache and execute garbage. This fix moves the commit of the cache flush to step 3 to close the race window. It also reduces the amount of flushes on non-executable mappings because we never enter __flush_dcache_page() for non-aliasing CPUs. Regressions can occur in drivers that mistakenly rely on the flush_dcache_page() in get_user_pages() for DMA operations. [[email protected]: Folded in patch 9346 to fix highmem issue.] Signed-off-by: Lars Persson <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9346/ Patchwork: https://patchwork.linux-mips.org/patch/9738/ Signed-off-by: Ralf Baechle <[email protected]>
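The essence of the fix, as a hedged sketch (the mainline version also deals with highmem pages via the folded patch 9346): commit the postponed D-cache flush from the flush_icache_page() step, before set_pte_at() publishes the mapping.

void __flush_icache_page(struct vm_area_struct *vma, struct page *page)
{
        unsigned long addr;

        if (PageHighMem(page))
                return;                 /* simplified; see the folded fix */

        if (Page_dcache_dirty(page)) {
                addr = (unsigned long)page_address(page);
                flush_data_cache_page(addr);
                ClearPageDcacheDirty(page);
        }
}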
2015-03-25  Revert "MIPS: Remove race window in page fault handling"  (Lars Persson) [1 file, -19/+8]
Revert commit 2a4a8b1e5d9d ("MIPS: Remove race window in page fault handling") because it increased the number of flushed dcache pages and became a performance problem for some workloads. Signed-off-by: Lars Persson <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9345/ Signed-off-by: Ralf Baechle <[email protected]>
2015-03-19  MIPS: Add support for XPA.  (Steven J. Hill) [3 files, -14/+95]
Add support for extended physical addressing (XPA) so that 32-bit platforms can access equal to or greater than 40 bits of physical addresses. NOTE: 1) XPA and EVA are not the same and cannot be used simultaneously. 2) If you configure your kernel for XPA, the PTEs and all address sizes become 64-bit. 3) Your platform MUST have working HIGHMEM support. Signed-off-by: Steven J. Hill <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9355/ Signed-off-by: Ralf Baechle <[email protected]>
2015-03-18  MIPS: Rearrange PTE bits into fixed positions.  (Steven J. Hill) [1 file, -2/+2]
This patch rearranges the PTE bits into fixed positions for R2 and later cores. In the past, the TLB handling code did runtime checking of RI/XI and adjusted the shifts and rotates in order to fit the largest PFN value into the PTE. The checking now occurs when building the TLB handler, thus eliminating those checks. These new arrangements also define the largest possible PFN value that can fit in the PTE. HUGE page support is only available for 64-bit cores. Layouts of the PTE bits are now:
64-bit, R1 or earlier: CCC D V G [S H] M A W R P
32-bit, R1 or earlier: CCC D V G M A W R P
64-bit, R2 or later:   CCC D V G RI/R XI [S H] M A W P
32-bit, R2 or later:   CCC D V G RI/R XI M A W P
[[email protected]: Fix another build error *rant* *rant*] Signed-off-by: Steven J. Hill <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9353/ Signed-off-by: Ralf Baechle <[email protected]>
2015-02-21  Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus  (Linus Torvalds) [9 files, -41/+104]
Pull MIPS updates from Ralf Baechle: "This is the main pull request for MIPS: - a number of fixes that didn't make the 3.19 release. - a number of cleanups. - preliminary support for Cavium's Octeon 3 SOCs which feature up to 48 MIPS64 R3 cores with FPU and hardware virtualization. - support for MIPS R6 processors. Revision 6 of the MIPS architecture is a major revision of the MIPS architecture which does away with many of the original sins of the architecture such as branch delay slots. This and other changes in R6 require major changes throughout the entire MIPS core architecture code and make up for the lion's share of this pull request. - finally some preparatory work for eXtended Physical Address support, which allows support of up to 40 bit of physical address space on 32 bit processors" [ Ahh, MIPS can't leave the PAE brain damage alone. It's like every CPU architect has to make that mistake, but pee in the snow by changing the TLA. But whether it's called PAE, LPAE or XPA, it's horrid crud - Linus ] * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (114 commits) MIPS: sead3: Corrected get_c0_perfcount_int MIPS: mm: Remove dead macro definitions MIPS: OCTEON: irq: add CIB and other fixes MIPS: OCTEON: Don't do acknowledge operations for level triggered irqs. MIPS: OCTEON: More OCTEONIII support MIPS: OCTEON: Remove setting of processor specific CVMCTL icache bits. MIPS: OCTEON: Core-15169 Workaround and general CVMSEG cleanup. MIPS: OCTEON: Update octeon-model.h code for new SoCs. MIPS: OCTEON: Implement DCache errata workaround for all CN6XXX MIPS: OCTEON: Add little-endian support to asm/octeon/octeon.h MIPS: OCTEON: Implement the core-16057 workaround MIPS: OCTEON: Delete unused COP2 saving code MIPS: OCTEON: Use correct instruction to read 64-bit COP0 register MIPS: OCTEON: Save and restore CP2 SHA3 state MIPS: OCTEON: Fix FP context save. MIPS: OCTEON: Save/Restore wider multiply registers in OCTEON III CPUs MIPS: boot: Provide more uImage options MIPS: Remove unneeded #ifdef __KERNEL__ from asm/processor.h MIPS: ip22-gio: Remove legacy suspend/resume support mips: pci: Add ifdef around pci_proc_domain ...
2015-02-20  MIPS: mm: Remove dead macro definitions  (Andreas Ruprecht) [2 files, -16/+0]
In commit c441d4a54c6e ("MIPS: mm: Only build one microassembler that is suitable"), the Makefile at arch/mips/mm was rewritten to only build the "right" microassembler file, depending on whether CONFIG_CPU_MICROMIPS is set or not. In the files, however, there are still preprocessor definitions depending on CONFIG_CPU_MICROMIPS. The #ifdef around them can now never evaluate to true, so let's remove them altogether. This inconsistency was found using the undertaker-checkpatch tool. Signed-off-by: Andreas Ruprecht <[email protected]> Reviewed-by: Maciej W. Rozycki <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Valentin Rothberg <[email protected]> Cc: Paul Bolle <[email protected]> Patchwork: https://patchwork.linux-mips.org/patch/9267/ Signed-off-by: Ralf Baechle <[email protected]>
2015-02-20  MIPS: OCTEON: Implement DCache errata workaround for all CN6XXX  (David Daney) [1 file, -1/+1]
Make messages refer to all CN6XXX. Signed-off-by: David Daney <[email protected]> Signed-off-by: Aleksey Makarov <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/8941/ Signed-off-by: Ralf Baechle <[email protected]>
2015-02-20  MIPS: Add set/clear CP0 macros for PageGrain register  (Steven J. Hill) [1 file, -3/+3]
Build set and clear macros for the PageGrain register. Signed-off-by: Steven J. Hill <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/9289/ Signed-off-by: Ralf Baechle <[email protected]>
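With such macros in place (presumably generated by the __BUILD_SET_C0() family in mipsregs.h), a caller can toggle PageGrain bits in a single read-modify-write; an illustrative, hedged usage:

        set_c0_pagegrain(PG_ELPA);              /* enable large phys addrs */
        clear_c0_pagegrain(PG_RIE | PG_XIE);    /* no read/execute inhibit */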
2015-02-17  MIPS: mm: scache: Add secondary cache support for MIPS R6 cores  (Markos Chandras) [2 files, -2/+4]
The secondary cache initialization and configuration code is processor specific so we need to handle MIPS R6 cores as well. Signed-off-by: Markos Chandras <[email protected]>
2015-02-17  MIPS: mm: c-r4k: Set the correct ISA level  (Markos Chandras) [1 file, -1/+1]
The local_r4k_flush_cache_sigtramp function uses the 'cache' instruction inside an asm block. However, MIPS R6 changed the opcode for the cache instruction and as a result of which we need to set the correct ISA level. Signed-off-by: Markos Chandras <[email protected]>
2015-02-17  MIPS: mm: tlbex: Use cpu_has_mips_r2_exec_hazard for the EHB instruction  (Leonid Yegoshin) [1 file, -3/+3]
MIPS uses the cpu_has_mips_r2_exec_hazard macro to determine whether the EHB instruction is available or not. This is necessary for MIPS R6 which also supports the EHB instruction. Signed-off-by: Leonid Yegoshin <[email protected]> Signed-off-by: Markos Chandras <[email protected]>
2015-02-17  MIPS: mm: page: Add MIPS R6 support  (Markos Chandras) [1 file, -4/+26]
The MIPS R6 pref instruction only has 9 bits for the immediate field so skip the micro-assembler PREF instruction if the offset does not fit in 9 bits. Moreover, bit 30 (Pref_PrepareForStore) is no longer valid in MIPS R6, so we change the default for all MIPS R6 processors to bit 5 (Pref_StoreStreamed). Signed-off-by: Markos Chandras <[email protected]>
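A hedged sketch of the guard described above (page.c builds its copy/clear loops with uasm; the helper below is illustrative, not the mainline code):

static void sketch_emit_pref(u32 **buf, unsigned int hint, int offset,
                             unsigned int reg)
{
        if (cpu_has_mips_r6) {
                /* R6 pref takes a 9-bit signed offset: -256..255. */
                if (offset > 0xff || offset < -0x100)
                        return;
                if (hint == Pref_PrepareForStore)
                        hint = Pref_StoreStreamed;      /* hint 30 is gone */
        }
        uasm_i_pref(buf, hint, offset, reg);
}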
2015-02-16  MIPS: mm: Add MIPS R6 instruction encodings  (Leonid Yegoshin) [1 file, -0/+32]
MIPS R6 defines new opcodes for ll, sc, cache and pref instructions so we need to take these into consideration in the micro-assembler. Signed-off-by: Leonid Yegoshin <[email protected]> Signed-off-by: Markos Chandras <[email protected]>