path: root/arch/arm/include/asm/cacheflush.h
Age | Commit message | Author | Files | Lines
2024-07-03 | mm: remove page_mapping() | Matthew Wilcox (Oracle) | 1 file, -1/+1
All callers are now converted, delete this compatibility wrapper. Also fix up some comments which referred to page_mapping. Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Cc: Eric Biggers <[email protected]> Cc: Sidhartha Kumar <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
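For context, a minimal sketch of the kind of conversion this removal implies; the helper name below is hypothetical and only illustrates how callers obtain the mapping via folios instead of the removed wrapper:

    #include <linux/pagemap.h>

    /* Illustrative (hypothetical) helper: what converted callers do now
     * that page_mapping() is gone. */
    static struct address_space *example_get_mapping(struct page *page)
    {
            return folio_mapping(page_folio(page));
    }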
2023-12-14 | mm: Introduce flush_cache_vmap_early() | Alexandre Ghiti | 1 file, -0/+2
The pcpu setup when using the page allocator sets up a new vmalloc mapping very early in the boot process, so early that it cannot use the flush_cache_vmap() function, which may depend on structures not yet initialized (for example on riscv, we currently send an IPI to flush the other CPUs' TLBs). But on some architectures we must call flush_cache_vmap(): for example, on riscv, some uarchs can cache invalid TLB entries, so we need to flush the newly established mapping to avoid taking an exception. So fix this by introducing a new function, flush_cache_vmap_early(), which is called right after setting the new page table entry and before accessing this new mapping. This new function implements a local TLB flush on riscv and is a no-op for other architectures (same as today). Signed-off-by: Alexandre Ghiti <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Dennis Zhou <[email protected]>
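A minimal sketch of the no-op fallback described above, assuming the usual asm-generic pattern of an overridable static inline; the exact guard and placement may differ from the real patch:

    #ifndef flush_cache_vmap_early
    static inline void flush_cache_vmap_early(unsigned long start, unsigned long end)
    {
            /* no-op by default; riscv overrides this with a local TLB flush */
    }
    #endif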
2023-08-24 | mm: rationalise flush_icache_pages() and flush_icache_page() | Matthew Wilcox (Oracle) | 1 file, -7/+0
Move the default (no-op) implementation of flush_icache_pages() to <linux/cacheflush.h> from <asm-generic/cacheflush.h>. Remove the flush_icache_page() wrapper from each architecture and provide it in <linux/cacheflush.h> instead. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
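Roughly what the consolidated defaults look like; this is a sketch assuming the generic no-op form, not a verbatim copy of linux/cacheflush.h:

    #ifndef flush_icache_pages
    static inline void flush_icache_pages(struct vm_area_struct *vma,
                                          struct page *page, unsigned int nr)
    {
            /* default: nothing to do on architectures with coherent I-caches */
    }
    #endif

    /* The old single-page call becomes a thin wrapper over the range version. */
    static inline void flush_icache_page(struct vm_area_struct *vma,
                                         struct page *page)
    {
            flush_icache_pages(vma, page, 1);
    }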
2023-08-24 | arm: implement the new page table range API | Matthew Wilcox (Oracle) | 1 file, -9/+15
Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Change the PG_dcache_clean flag from being per-page to per-folio, which makes __dma_page_dev_to_cpu() a bit more exciting. Also add flush_cache_pages(), even though this isn't used by generic code (yet?) [[email protected]: fix potential endless loop in __dma_page_dev_to_cpu()] Link: https://lkml.kernel.org/r/[email protected] [[email protected]: fix folio conversion in __dma_page_dev_to_cpu()] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Signed-off-by: Marek Szyprowski <[email protected]> Acked-by: Mike Rapoport (IBM) <[email protected]> Reviewed-by: Russell King (Oracle) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
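Conceptually, set_ptes() installs nr consecutive PTEs covering one folio. The sketch below is generic and simplified: __set_one_pte() and pte_next_pfn() are stand-ins for the architecture's single-entry setter and PFN-advance helper, and the real ARM version also folds in its cache bookkeeping:

    /* Simplified sketch; __set_one_pte() and pte_next_pfn() are stand-ins. */
    static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte, unsigned int nr)
    {
            for (;;) {
                    __set_one_pte(mm, addr, ptep, pte);     /* install one entry */
                    if (--nr == 0)
                            break;
                    ptep++;
                    addr += PAGE_SIZE;
                    pte = pte_next_pfn(pte);                /* advance to the next page frame */
            }
    }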
2022-03-23 | Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm | Linus Torvalds | 1 file, -9/+3
Pull ARM updates from Russell King:
 "Updates for IRQ stacks and virtually mapped stack support, and ftrace:

   - Support for IRQ and vmap'ed stacks

     This covers all the work related to implementing IRQ stacks and
     vmap'ed stacks for all 32-bit ARM systems that are currently
     supported by the Linux kernel, including RiscPC and Footbridge.

     It has been submitted for review in four different waves:

      - IRQ stacks support for v7 SMP systems [0]
      - vmap'ed stacks support for v7 SMP systems [1]
      - extending support for both IRQ stacks and vmap'ed stacks for all
        remaining configurations, including v6/v7 SMP multiplatform
        kernels and uniprocessor configurations including v7-M [2]
      - fixes and updates in [3]

   - ftrace fixes and cleanups

     Make all flavors of ftrace available on all builds, regardless of
     ISA choice, unwinder choice or compiler [4]:

      - use ADD not POP where possible
      - fix a couple of Thumb2 related issues
      - enable HAVE_FUNCTION_GRAPH_FP_TEST for robustness
      - enable the graph tracer with the EABI unwinder
      - avoid clobbering frame pointer registers to make Clang happy

   - Fixes for the above"

[0] https://lore.kernel.org/linux-arm-kernel/[email protected]/
[1] https://lore.kernel.org/linux-arm-kernel/[email protected]/
[2] https://lore.kernel.org/linux-arm-kernel/[email protected]/
[3] https://lore.kernel.org/linux-arm-kernel/[email protected]/
[4] https://lore.kernel.org/linux-arm-kernel/[email protected]/

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (62 commits)
  ARM: fix building NOMMU ARMv4/v5 kernels
  ARM: unwind: only permit stack switch when unwinding call_with_stack()
  ARM: Revert "unwind: dump exception stack from calling frame"
  ARM: entry: fix unwinder problems caused by IRQ stacks
  ARM: unwind: set frame.pc correctly for current-thread unwinding
  ARM: 9184/1: return_address: disable again for CONFIG_ARM_UNWIND=y
  ARM: 9183/1: unwind: avoid spurious warnings on bogus code addresses
  Revert "ARM: 9144/1: forbid ftrace with clang and thumb2_kernel"
  ARM: mach-bcm: disable ftrace in SMC invocation routines
  ARM: cacheflush: avoid clobbering the frame pointer
  ARM: kprobes: treat R7 as the frame pointer register in Thumb2 builds
  ARM: ftrace: enable the graph tracer with the EABI unwinder
  ARM: unwind: track location of LR value in stack frame
  ARM: ftrace: enable HAVE_FUNCTION_GRAPH_FP_TEST
  ARM: ftrace: avoid unnecessary literal loads
  ARM: ftrace: avoid redundant loads or clobbering IP
  ARM: ftrace: use trampolines to keep .init.text in branching range
  ARM: ftrace: use ADD not POP to counter PUSH at entry
  ARM: ftrace: ensure that ADR takes the Thumb bit into account
  ARM: make get_current() and __my_cpu_offset() __always_inline
  ...
2022-02-09 | ARM: cacheflush: avoid clobbering the frame pointer | Ard Biesheuvel | 1 file, -9/+3
Thumb2 uses R7 rather than R11 as the frame pointer, and even if we rarely use a frame pointer to begin with when building in Thumb2 mode, there are cases where it is required by the compiler (Clang when inserting profiling hooks via -pg) However, preserving and restoring the frame pointer is risky, as any unhandled exceptions raised in the mean time will produce a bogus backtrace, and it would be better not to touch the frame pointer at all. This is the case even when CONFIG_FRAME_POINTER is not set, as the unwind directive used by the unwinder may also use R7 or R11 as the unwind anchor, even if the frame pointer is not managed strictly according to the frame pointer ABI. So let's tweak the cacheflush asm code not to clobber R7 or R11 at all, so that we can drop R7 from the clobber lists of the inline asm blocks that call these routines, and remove the code that preserves/restores R11. Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]>
2021-11-17 | Add linux/cacheflush.h | Matthew Wilcox (Oracle) | 1 file, -1/+0
Many architectures do not include asm-generic/cacheflush.h, so turn the includes on their head and add linux/cacheflush.h which includes asm/cacheflush.h. Move the flush_dcache_folio() declaration from asm-generic/cacheflush.h to linux/cacheflush.h and change linux/highmem.h to include linux/cacheflush.h instead of asm/cacheflush.h so that all necessary places will see flush_dcache_folio(). More functions should have their default implementations moved in the future, but those are for follow-on patches. This fixes csky, sparc and sparc64 which were missed in the commit which added flush_dcache_folio(). Fixes: 08b0b0059bf1 ("mm: Add flush_dcache_folio()") Suggested-by: Christoph Hellwig <[email protected]> Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]>
2021-10-18 | mm: Add flush_dcache_folio() | Matthew Wilcox (Oracle) | 1 file, -0/+1
This is a default implementation which calls flush_dcache_page() on each page in the folio. If architectures can do better, they should implement their own version of it. Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: Vlastimil Babka <[email protected]>
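The default described above is essentially a per-page loop; a sketch under that assumption:

    void flush_dcache_folio(struct folio *folio)
    {
            long i, nr = folio_nr_pages(folio);

            /* Fall back to flushing each constituent page individually. */
            for (i = 0; i < nr; i++)
                    flush_dcache_page(folio_page(folio, i));
    }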
2021-09-03 | mm: remove flush_kernel_dcache_page | Christoph Hellwig | 1 file, -3/+1
flush_kernel_dcache_page is a rather confusing interface that implements a subset of flush_dcache_page by not being able to properly handle page cache mapped pages. The only callers left are in the exec code, as all other previous callers were incorrect since they could have dealt with page cache pages. Replace the calls to flush_kernel_dcache_page with calls to flush_dcache_page, which for all architectures either does exactly the same thing or can contain one or more of the following: 1) an optimization to defer the cache flush for page cache pages not mapped into userspace 2) additional flushing for mapped page cache pages if cache aliases are possible Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Linus Torvalds <[email protected]> Reviewed-by: Ira Weiny <[email protected]> Cc: Alex Shi <[email protected]> Cc: Geoff Levand <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Guo Ren <[email protected]> Cc: Helge Deller <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Cercueil <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Ulf Hansson <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-06-08 | arm: rename flush_cache_user_range to flush_icache_user_range | Christoph Hellwig | 1 file, -2/+2
flush_icache_user_range will be the name for a generic primitive. Move the arm name so that arm already has an implementation. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Russell King <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-08 | arm,sparc,unicore32: remove flush_icache_user_range | Christoph Hellwig | 1 file, -3/+0
flush_icache_user_range is only used by <asm-generic/cacheflush.h>, so remove it from the architectures that implement it, but don't use <asm-generic/cacheflush.h>. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Russell King <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Guan Xuetao <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2019-07-08 | Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm | Linus Torvalds | 1 file, -0/+7
Pull ARM updates from Russell King:

 - Add a "cut here" to make it clearer where oops dumps should be cut
   from - we already have a marker for the end of the dumps.

 - Add logging severity to show_pte()

 - Drop unnecessary common-page-size linker flag

 - Errata workarounds for Cortex A12 857271, Cortex A17 857272 and
   Cortex A7 814220.

 - Remove some unused variables that had started to provoke a compiler
   warning.

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8863/1: stm32: select ARM errata 814220
  ARM: 8862/1: errata: 814220-B-Cache maintenance by set/way operations can execute out of order
  ARM: 8865/1: mm: remove unused variables
  ARM: 8864/1: Add workaround for I-Cache line size mismatch between CPU cores
  ARM: 8861/1: errata: Workaround errata A12 857271 / A17 857272
  ARM: 8860/1: VDSO: Drop implicit common-page-size linker flag
  ARM: arrange show_pte() to issue severity-based messages
  ARM: add "8<--- cut here ---" to kernel dumps
2019-06-20 | ARM: 8864/1: Add workaround for I-Cache line size mismatch between CPU cores | Marek Szyprowski | 1 file, -0/+7
Some big.LITTLE systems have an I-Cache line size mismatch between the LITTLE and big cores. This patch adds a workaround for proper I-Cache support on such systems. Without it, some classes of userspace code (typically self-modifying) might suffer from random SIGILL failures. A similar workaround already exists for the ARM64 architecture. It has been added by commit 116c81f427ff ("arm64: Work around systems with mismatched cache line sizes"). Signed-off-by: Marek Szyprowski <[email protected]> Signed-off-by: Russell King <[email protected]>
2019-06-19 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 | Thomas Gleixner | 1 file, -4/+1
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation # extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2018-06-15 | docs: Fix some broken references | Mauro Carvalho Chehab | 1 file, -1/+1
As we move stuff around, some doc references are broken. Fix some of them via this script: ./scripts/documentation-file-ref-check --fix Manually checked if the produced result is valid, removing a few false-positives. Acked-by: Takashi Iwai <[email protected]> Acked-by: Masami Hiramatsu <[email protected]> Acked-by: Stephen Boyd <[email protected]> Acked-by: Charles Keepax <[email protected]> Acked-by: Mathieu Poirier <[email protected]> Reviewed-by: Coly Li <[email protected]> Signed-off-by: Mauro Carvalho Chehab <[email protected]> Acked-by: Jonathan Corbet <[email protected]>
2018-04-11 | page cache: use xa_lock | Matthew Wilcox | 1 file, -4/+2
Remove the address_space ->tree_lock and use the xa_lock newly added to the radix_tree_root. Rename the address_space ->page_tree to ->i_pages, since we don't really care that it's a tree. [[email protected]: fix nds32, fs/dax.c] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox <[email protected]> Acked-by: Jeff Layton <[email protected]> Cc: Darrick J. Wong <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Ryusuke Konishi <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
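A sketch of the locking change described above; the surrounding function is illustrative, only the field and lock names come from the message:

    static void example_with_pages_locked(struct address_space *mapping)
    {
            /* Before this patch: spin_lock_irq(&mapping->tree_lock) around
             * accesses to mapping->page_tree. After it, the xa_lock embedded
             * in the XArray is used and the field is called ->i_pages. */
            xa_lock_irq(&mapping->i_pages);
            /* ... walk or modify the page cache entries ... */
            xa_unlock_irq(&mapping->i_pages);
    }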
2017-06-30 | randstruct: opt-out externally exposed function pointer structs | Kees Cook | 1 file, -1/+1
Some function pointer structures are used externally to the kernel, like the paravirt structures. These should never be randomized, so mark them as such, in preparation for enabling randstruct's automatic selection of all-function-pointer structures. These markings are verbatim from Brad Spengler/PaX Team's code in the last public patch of grsecurity/PaX based on my understanding of the code. Changes or omissions from the original code are mine and don't reflect the original grsecurity/PaX code. Signed-off-by: Kees Cook <[email protected]>
2017-05-08 | treewide: decouple cacheflush.h and set_memory.h | Laura Abbott | 1 file, -1/+0
Now that all call sites have been converted, completely decouple cacheflush.h and set_memory.h. [[email protected]: kprobes/x86: merge fix for set_memory.h decoupling] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Laura Abbott <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-05-08 | treewide: move set_memory_* functions away from cacheflush.h | Laura Abbott | 1 file, -20/+1
Patch series "set_memory_* functions header refactor", v3. The set_memory_* APIs came out of a desire to have a better way to change memory attributes. Many of these attributes were linked to cache functionality so the prototypes were put in cacheflush.h. These days, the APIs have grown and have a much wider use than just cache APIs. To support this growth, split off set_memory_* and friends into a separate header file to avoid growing cacheflush.h for APIs that have nothing to do with caches. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Laura Abbott <[email protected]> Acked-by: Russell King <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
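In practical terms, callers that used to pick up set_memory_*() transitively via cacheflush.h now include the dedicated header; a sketch of the intended include pattern (exact header paths can vary per architecture):

    #include <asm/cacheflush.h>     /* cache maintenance only */
    #include <asm/set_memory.h>     /* set_memory_ro()/rw()/x()/nx() now live here */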
2017-02-07 | arch: Rename CONFIG_DEBUG_RODATA and CONFIG_DEBUG_MODULE_RONX | Laura Abbott | 1 file, -1/+1
Both of these options are poorly named. The features they provide are necessary for system security and should not be considered debug only. Change the names to CONFIG_STRICT_KERNEL_RWX and CONFIG_STRICT_MODULE_RWX to better describe what these options do. Signed-off-by: Laura Abbott <[email protected]> Acked-by: Jessica Yu <[email protected]> Signed-off-by: Kees Cook <[email protected]>
2016-08-26 | ARM: 8601/1: Remove unused secure_flush_area API | Andy Gross | 1 file, -17/+0
This patch removes the unused secure_flush_area function. The only consumer of this function has moved to using the streaming DMA APIs. Signed-off-by: Andy Gross <[email protected]> Signed-off-by: Russell King <[email protected]>
2016-02-22 | asm-generic: Consolidate mark_rodata_ro() | Kees Cook | 1 file, -1/+0
Instead of defining mark_rodata_ro() in each architecture, consolidate it. Signed-off-by: Kees Cook <[email protected]> Acked-by: Will Deacon <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Gross <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Ashok Kumar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dan Williams <[email protected]> Cc: David Brown <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: Emese Revfy <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Helge Deller <[email protected]> Cc: James E.J. Bottomley <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Luis R. Rodriguez <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mathias Krause <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Nicolas Pitre <[email protected]> Cc: PaX Team <[email protected]> Cc: Paul Gortmaker <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ross Zwisler <[email protected]> Cc: Russell King <[email protected]> Cc: Rusty Russell <[email protected]> Cc: Stephen Boyd <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Toshi Kani <[email protected]> Cc: [email protected] Cc: linux-arch <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-11 | firmware: qcom_scm-32: replace open-coded call to __cpuc_flush_dcache_area() | Russell King | 1 file, -0/+17
Rather than directly accessing architecture-internal functions, provide a "method"-centric wrapper for qcom_scm-32 to do what's necessary to ensure that the secure monitor can see the data. This is called "secure_flush_area" and ensures that the specified memory area is coherent across the secure boundary. Acked-by: Andy Gross <[email protected]> Reviewed-by: Stephen Boyd <[email protected]> Signed-off-by: Russell King <[email protected]>
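A sketch of what such a wrapper plausibly does, based on the description above (push the lines out of the CPU's data cache and past any outer cache so the secure side sees them); not a verbatim copy of the helper that was added:

    static inline void secure_flush_area(const void *addr, size_t size)
    {
            phys_addr_t start = virt_to_phys(addr);

            /* Clean/invalidate the data cache for the buffer ... */
            __cpuc_flush_dcache_area((void *)addr, size);
            /* ... and make sure it is visible beyond any outer (L2) cache. */
            outer_flush_range(start, start + size);
    }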
2015-08-01 | ARM: reduce visibility of dmac_* functions | Russell King | 1 file, -4/+0
The dmac_* functions are private to the ARM DMA API implementation, and should not be used by drivers. In order to discourage their use, remove their prototypes and macros from asm/*.h. We have to leave dmac_flush_range() behind as Exynos and MSM IOMMU code use these; once these sites are fixed, this can be moved also. Signed-off-by: Russell King <[email protected]>
2015-05-28 | ARM: 8380/1: bpf: fix NOMMU build | Arnd Bergmann | 1 file, -0/+7
arch/arm/net/built-in.o: In function `bpf_jit_compile':
:(.text+0x2758): undefined reference to `set_memory_ro'
arch/arm/net/built-in.o: In function `bpf_jit_free':
:(.text+0x27ac): undefined reference to `set_memory_rw'
Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Russell King <[email protected]>
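One plausible shape of the fix, assuming no-op stubs are sufficient when there is no MMU to enforce permissions; the actual patch may differ in placement and guards:

    #ifndef CONFIG_MMU
    static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
    static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
    #endif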
2014-10-16 | ARM: mm: allow text and rodata sections to be read-only | Kees Cook | 1 file, -0/+10
This introduces CONFIG_DEBUG_RODATA, making kernel text and rodata read-only. Additionally, this splits rodata from text so that rodata can also be NX, which may lead to wasted memory when aligning to SECTION_SIZE. The read-only areas are made writable during ftrace updates and kexec. Signed-off-by: Kees Cook <[email protected]> Tested-by: Laura Abbott <[email protected]> Acked-by: Nicolas Pitre <[email protected]>
2014-09-30 | ARM: 8177/1: cacheflush: Fix v7_exit_coherency_flush exynos build breakage on ARMv6 | Krzysztof Kozlowski | 1 file, -0/+1
This fixes build breakage of platsmp.c if ARMv6 was chosen for compile-time options (e.g. by building allmodconfig):
$ make allmodconfig
$ make
  CC      arch/arm/mach-exynos/platsmp.o
/tmp/ccdQM0Eg.s: Assembler messages:
/tmp/ccdQM0Eg.s:432: Error: selected processor does not support ARM mode `isb '
/tmp/ccdQM0Eg.s:437: Error: selected processor does not support ARM mode `isb '
/tmp/ccdQM0Eg.s:438: Error: selected processor does not support ARM mode `dsb '
make[1]: *** [arch/arm/mach-exynos/platsmp.o] Error 1
The error was introduced in commit "ARM: EXYNOS: Move code from hotplug.c to platsmp.c". Previously, code using the v7_exit_coherency_flush() macro was built with the '-march=armv7-a' flag, but this flag disappeared during the move. Fix this by annotating the v7_exit_coherency_flush() asm code with the armv7-a architecture. Signed-off-by: Krzysztof Kozlowski <[email protected]> Reported-by: Mark Brown <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Kukjin Kim <[email protected]> Signed-off-by: Russell King <[email protected]>
2014-08-27 | ARM: 8129/1: errata: work around Cortex-A15 erratum 830321 using dummy strex | Mark Rutland | 1 file, -1/+0
On revisions of Cortex-A15 prior to r3p3, a CLREX instruction at PL1 may falsely trigger a watchpoint exception, leading to potential data aborts during exception return and/or livelock. This patch resolves the issue in the following ways: - Replacing our uses of CLREX with a dummy STREX sequence instead (as we did for v6 CPUs). - Removing the clrex code from v7_exit_coherency_flush and derivatives, since this only exists as a minor performance improvement when non-cached exclusives are in use (Linux doesn't use these). Benchmarking on a variety of ARM cores revealed no measurable performance difference with this change applied, so the change is performed unconditionally and no new Kconfig entry is added. Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Will Deacon <[email protected]> Cc: [email protected] Signed-off-by: Russell King <[email protected]>
2014-05-25 | ARM: 8043/1: uprobes need icache flush after xol write | Victor Kamensky | 1 file, -0/+2
After an instruction write into the xol area, on the ARMv7 architecture the code needs to flush the dcache and icache to sync them up for the given set of addresses. Having just a 'flush_dcache_page(page)' call is not enough - it is possible to have a stale instruction sitting in the icache for the given xol area slot address. Introduce an arch_uprobe_copy_ixol weak function that by default calls the uprobes copy_to_page function and then flush_dcache_page, and on ARM define a new one that handles the xol slot copy in an ARM-specific way. The flush_uprobe_xol_access function shares/reuses the implementation of flush_ptrace_access and takes care of writing the instruction to the user-land address space on the given variety of different cache types on ARM CPUs. Because flush_uprobe_xol_access does not have a vma around, flush_ptrace_access was split into two parts: a first part that retrieves the set of conditions from the vma, and a common part that receives those conditions as flags. Note the ARM cache flush function needs the kernel address through which the instruction write happened, so instead of using the uprobes copy_to_page function the code was changed to explicitly map the page and do the memcpy. Note the arch_uprobe_copy_ixol function, in a similar way to the copy_to_user_page function, has preempt_disable/preempt_enable. Signed-off-by: Victor Kamensky <[email protected]> Acked-by: Oleg Nesterov <[email protected]> Reviewed-by: David A. Long <[email protected]> Signed-off-by: Russell King <[email protected]>
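A simplified sketch of the ARM-specific copy path described above: map the xol page, write the instruction with memcpy() so the kernel address is known, then sync the D/I caches for that range. Preemption handling and error paths are omitted:

    void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
                               void *src, unsigned long len)
    {
            void *xol_page_kaddr = kmap_atomic(page);
            void *dst = xol_page_kaddr + (vaddr & ~PAGE_MASK);

            /* Write the probed instruction via the kernel mapping. */
            memcpy(dst, src, len);

            /* Flush caches so the user-space mapping sees the new instruction. */
            flush_uprobe_xol_access(page, vaddr, dst, len);

            kunmap_atomic(xol_page_kaddr);
    }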
2014-05-25 | ARM: 8055/1: cacheflush: use -st dsb option for ensuring completion | Will Deacon | 1 file, -1/+1
dsb st can be used to ensure completion of pending cache maintenance operations, so use it for the v7 cache maintenance operations. Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Russell King <[email protected]>
2014-02-17 | ARM: 7957/1: add DSB after icache flush in __flush_icache_all() | Vinayak Kale | 1 file, -0/+1
Add DSB after icache flush to complete the cache maintenance operation. Signed-off-by: Vinayak Kale <[email protected]> Acked-by: Catalin Marinas <[email protected]> Cc: <[email protected]> Signed-off-by: Russell King <[email protected]>
2013-12-11 | ARM: mm: Define set_memory_* functions for ARM | Laura Abbott | 1 file, -0/+5
Other architectures define various set_memory functions to allow attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.) Currently, these functions are missing on ARM. Define these in an appropriate manner for ARM. Signed-off-by: Laura Abbott <[email protected]> Signed-off-by: Russell King <[email protected]>
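The API surface this adds for ARM follows the form used on other architectures (a virtual address plus a number of pages); prototypes as sketched here:

    int set_memory_ro(unsigned long addr, int numpages);
    int set_memory_rw(unsigned long addr, int numpages);
    int set_memory_x(unsigned long addr, int numpages);
    int set_memory_nx(unsigned long addr, int numpages);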
2013-10-29 | ARM: 7861/1: cacheflush: consolidate single-CPU ARMv7 cache disabling code | Nicolas Pitre | 1 file, -0/+46
This code is becoming duplicated in many places. So let's consolidate it into a handy macro that is known to be right and available for reuse. Signed-off-by: Nicolas Pitre <[email protected]> Acked-by: Dave Martin <[email protected]> Signed-off-by: Russell King <[email protected]>
2013-08-28 | Merge branch 'for-rmk/cacheflush-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable | Russell King | 1 file, -2/+1
2013-08-20 | ARM: cacheflush: don't round address range up to nearest page | Will Deacon | 1 file, -2/+1
The flush_cache_user_range macro takes a pair of addresses describing the start and end of the virtual address range to flush. Due to an accidental oversight when flush_cache_user_range was introduced, the address range was rounded up so that the start and end addresses were page-aligned. For historical reference, the interesting commits in history.git are: 10eacf1775e1 ("[ARM] Clean up ARM cache handling interfaces (part 1)") 71432e79b76b ("[ARM] Add flush_cache_user_page() for sys_cacheflush()") This patch removes the alignment code, reducing the amount of flushing required for ranges that are not an exact multiple of PAGE_SIZE. Reviewed-by: Catalin Marinas <[email protected]> Reported-by: Jonathan Austin <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2013-08-12 | ARM: cacheflush: use -ishst dsb variant for ensuring flush completion | Will Deacon | 1 file, -1/+1
flush_cache_vmap contains a dsb to ensure that any cacheflushing operations to flush out newly written ptes have completed. This patch adds the -ishst option to the dsb, since that is all that is required for completing cacheflushing in the inner-shareable domain. Signed-off-by: Will Deacon <[email protected]>
2013-06-17 | ARM: 7755/1: handle user space mapped pages in flush_kernel_dcache_page | Simon Baatz | 1 file, -3/+1
Commit f8b63c1 made flush_kernel_dcache_page a no-op assuming that the pages it needs to handle are kernel mapped only. However, for example when doing direct I/O, pages with user space mappings may occur. Thus, continue to do lazy flushing if there are no user space mappings. Otherwise, flush the kernel cache lines directly. Signed-off-by: Simon Baatz <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Cc: <[email protected]> # 3.2+ Signed-off-by: Russell King <[email protected]>
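A condensed sketch of the logic described above, assuming the VIVT/aliasing checks and page_mapping() helpers of that era; the real function also has to deal with highmem pages:

    void flush_kernel_dcache_page(struct page *page)
    {
            if (cache_is_vivt() || cache_is_vipt_aliasing()) {
                    struct address_space *mapping = page_mapping(page);

                    /* Page cache page with no user mappings: keep flushing lazy. */
                    if (mapping && !mapping_mapped(mapping))
                            return;

                    /* Otherwise flush the kernel alias directly. */
                    __cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
            }
    }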
2013-04-24 | ARM: cacheflush: add synchronization helpers for mixed cache state accesses | Nicolas Pitre | 1 file, -0/+75
Algorithms used by the MCPM layer rely on state variables which are accessed while the cache is either active or inactive, depending on the code path and the active state. This patch introduces generic cache maintenance helpers to provide the necessary cache synchronization for such state variables to always hit main memory in an ordered way. Signed-off-by: Nicolas Pitre <[email protected]> Acked-by: Russell King <[email protected]> Acked-by: Dave Martin <[email protected]>
2012-09-25 | ARM: mm: implement LoUIS API for cache maintenance ops | Lorenzo Pieralisi | 1 file, -0/+15
ARM v7 architecture introduced the concept of cache levels and related control registers. New processors like A7 and A15 embed an L2 unified cache controller that becomes part of the cache level hierarchy. Some operations in the kernel like cpu_suspend and __cpu_disable do not require a flush of the entire cache hierarchy to DRAM but just the cache levels belonging to the Level of Unification Inner Shareable (LoUIS), which in most of ARM v7 systems correspond to L1. The current cache flushing API used in cpu_suspend and __cpu_disable, flush_cache_all(), ends up flushing the whole cache hierarchy since for v7 it cleans and invalidates all cache levels up to Level of Coherency (LoC) which cripples system performance when used in hot paths like hotplug and cpuidle. Therefore a new kernel cache maintenance API must be added to cope with latest ARM system requirements. This patch adds flush_cache_louis() to the ARM kernel cache maintenance API. This function cleans and invalidates all data cache levels up to the Level of Unification Inner Shareable (LoUIS) and invalidates the instruction cache for processors that support it (> v7). This patch also creates an alias of the cache LoUIS function to flush_kern_all for all processor versions prior to v7, so that the current cache flushing behaviour is unchanged for those processors. v7 cache maintenance code implements a cache LoUIS function that cleans and invalidates the D-cache up to LoUIS and invalidates the I-cache, according to the new API. Reviewed-by: Santosh Shilimkar <[email protected]> Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Lorenzo Pieralisi <[email protected]> Tested-by: Shawn Guo <[email protected]>
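Usage-wise, the hot paths mentioned above swap a full-hierarchy flush for the cheaper LoUIS variant; an illustrative (hypothetical) caller:

    static void cpu_going_down_example(void)
    {
            /*
             * Before: flush_cache_all() cleaned/invalidated all the way to the
             * Level of Coherency. For hotplug/cpuidle it is enough to clean
             * and invalidate up to the Level of Unification Inner Shareable.
             */
            flush_cache_louis();
    }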
2012-07-31 | ARM: 7479/1: mm: avoid NULL dereference when flushing gate_vma with VIVT caches | Will Deacon | 1 file, -2/+6
The vivt_flush_cache_{range,page} functions check that the mm_struct of the VMA being flushed has been active on the current CPU before performing the cache maintenance. The gate_vma has a NULL mm_struct pointer and, as such, will cause a kernel fault if we try to flush it with the above operations. This happens during ELF core dumps, which include the gate_vma as it may be useful for debugging purposes. This patch adds checks to the VIVT cache flushing functions so that VMAs with a NULL mm_struct are flushed unconditionally (the vectors page may be dirty if we use it to store the current TLS pointer). Cc: <[email protected]> # 3.4+ Reported-by: Gilles Chanteperdrix <[email protected]> Tested-by: Uros Bizjak <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Russell King <[email protected]>
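A sketch of the guard described above: flush unconditionally when there is no mm (the gate_vma case), otherwise only when the mm has run on this CPU. Simplified from the actual VIVT helpers:

    static inline void vivt_flush_cache_range(struct vm_area_struct *vma,
                                              unsigned long start, unsigned long end)
    {
            struct mm_struct *mm = vma->vm_mm;

            /* gate_vma has no mm; flush unconditionally in that case. */
            if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
                    __cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
                                            vma->vm_flags);
    }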
2012-05-02 | ARM: 7408/1: cacheflush: return error to userspace when flushing syscall fails | Will Deacon | 1 file, -2/+2
The cacheflush syscall can fail for two reasons: (1) The arguments are invalid (nonsensical address range or no VMA) (2) The region generates a translation fault on a VIPT or PIPT cache This patch allows do_cache_op to return an error code to userspace in the case of the above. The various coherent_user_range implementations are modified to return 0 in the case of VIVT caches or -EFAULT in the case of an abort on v6/v7 cores. Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Russell King <[email protected]>
2012-04-19 | ARM: 7365/1: drop unused parameter from flush_cache_user_range | Dima Zavin | 1 file, -1/+1
vma isn't used and flush_cache_user_range isn't a standard macro that is used on several archs with the same prototype. In fact only unicore32 has a macro with the same name (with an identical implementation and no in-tree users). This is a part of a patch proposed by Dima Zavin (with Message-id: [email protected]) that didn't get accepted. Cc: Dima Zavin <[email protected]> Acked-by: Catalin Marinas <[email protected]> Signed-off-by: Uwe Kleine-König <[email protected]> Signed-off-by: Russell King <[email protected]>
2011-03-16 | Merge branch 'v6v7' into devel | Russell King | 1 file, -1/+2
Conflicts:
	arch/arm/include/asm/cacheflush.h
	arch/arm/include/asm/proc-fns.h
	arch/arm/mm/Kconfig
2011-02-12 | ARM: move cache/processor/fault glue to separate include files | Russell King | 1 file, -131/+2
This allows the cache/processor/fault glue to be more easily used from assembler code. Tested on Assabet and Tegra 2. Tested-by: Colin Cross <[email protected]> Signed-off-by: Russell King <[email protected]>
2011-02-02 | ARM: v6/v7 cache: allow cache calls to be optimized | Russell King | 1 file, -10/+10
The v6 cache call optimization was disabled to allow the optional block cache operations to be substituted on CPUs which supported those operations. However, as that functionality was removed, we no longer need to prevent this optimization from being taken advantage of. The v7 cache call optimization was just a copy of the v6, so fix that too. Tested-by: Will Deacon <[email protected]> Signed-off-by: Russell King <[email protected]>
2011-02-02 | ARM: v6k: introduce CPU_V6K option | Russell King | 1 file, -2/+3
Introduce a CPU_V6K configuration option for platforms to select if they have a V6K CPU core. This allows us to identify whether we need to support ARMv6 CPUs without the V6K SMP extensions at build time. Currently CPU_V6K is just an alias for CPU_V6, and all places which reference CPU_V6 are replaced by (CPU_V6 || CPU_V6K). Select CPU_V6K from platforms which are known to be V6K-only. Acked-by: Tony Lindgren <[email protected]> Tested-by: Sourav Poddar <[email protected]> Tested-by: Will Deacon <[email protected]> Signed-off-by: Russell King <[email protected]>
2010-10-18 | Merge branches 'at91', 'dcache', 'ftrace', 'hwbpt', 'misc', 'mmci', 's3c', 'st-ux' and 'unwind' into devel | Russell King | 1 file, -21/+44
2010-10-04 | ARM: 6405/1: Handle __flush_icache_all for CONFIG_SMP_ON_UP | Tony Lindgren | 1 file, -15/+41
Do this by adding flush_icache_all to cache_fns for ARMv6 and 7. As flush_icache_all may need to be called from flush_kern_cache_all, add it as the first entry in the cache_fns. Note that now we can remove the ARM_ERRATA_411920 dependency on !SMP so it can be selected on UP ARMv6 processors, such as omap2. Signed-off-by: Tony Lindgren <[email protected]> Signed-off-by: Anand Gadiyar <[email protected]> Signed-off-by: Russell King <[email protected]>
2010-09-19 | ARM: 6382/1: Remove superfluous flush_kernel_dcache_page() | Catalin Marinas | 1 file, -3/+0
Since page cache pages are now considered 'dirty' by default, the cache flushing is handled via __flush_dcache_page() when a page gets mapped to user space. Highmem pages on VIVT systems are flushed during kunmap() and flush_kernel_dcache_page() was already a no-op in this case. ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE is still defined since ARM needs specific implementations for flush_kernel_vmap_range() and invalidate_kernel_vmap_range(). Cc: Nicolas Pitre <[email protected]> Signed-off-by: Catalin Marinas <[email protected]> Signed-off-by: Russell King <[email protected]>
2010-09-19 | ARM: 6379/1: Assume new page cache pages have dirty D-cache | Catalin Marinas | 1 file, -3/+3
There are places in Linux where writes to newly allocated page cache pages happen without a subsequent call to flush_dcache_page() (several PIO drivers including USB HCD). This patch changes the meaning of PG_arch_1 to be PG_dcache_clean and always flush the D-cache for a newly mapped page in update_mmu_cache(). The patch also sets the PG_arch_1 bit in the DMA cache maintenance function to avoid additional cache flushing in update_mmu_cache(). Tested-by: Rabin Vincent <[email protected]> Cc: Nicolas Pitre <[email protected]> Signed-off-by: Catalin Marinas <[email protected]> Signed-off-by: Russell King <[email protected]>
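The resulting pattern in update_mmu_cache(), sketched under the assumptions stated in the message (PG_arch_1 now reads as PG_dcache_clean, and a newly mapped page is flushed unless already marked clean); the wrapper function name is illustrative:

    static void example_update_mmu_cache(struct address_space *mapping,
                                         struct page *page)
    {
            /* Flush once when the page is first mapped to user space,
             * then remember that its D-cache is clean. */
            if (!test_and_set_bit(PG_dcache_clean, &page->flags))
                    __flush_dcache_page(mapping, page);
    }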