path: root/arch/nios2/mm/cacheflush.c
2023-08-24  nios2: fix flush_dcache_page() for usage from irq context  (Helge Deller; 1 file, -2/+3)
Since at least kernel 6.1, flush_dcache_page() is called with IRQs disabled, e.g. from aio_complete(). But the current implementation for flush_dcache_page() on NIOS2 unintentionally re-enables IRQs, which may lead to deadlocks.

Fix it by using xa_lock_irqsave() and xa_unlock_irqrestore() for the flush_dcache_mmap_*lock() macros instead.

Link: https://lkml.kernel.org/r/ZOTF5WWURQNH9+iw@p100
Signed-off-by: Helge Deller <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
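As an illustration of the fix described above, a minimal sketch assuming the flush_dcache_mmap_*lock() macros wrap the mapping's i_pages xarray lock; the _irqsave/_irqrestore macro names and the caller shape shown here are assumptions, not the verbatim patch:

    #include <linux/xarray.h>

    /* IRQ-safe variants: save and restore the interrupt state instead of
     * unconditionally re-enabling IRQs on unlock. */
    #define flush_dcache_mmap_lock_irqsave(mapping, flags)          \
            xa_lock_irqsave(&(mapping)->i_pages, flags)
    #define flush_dcache_mmap_unlock_irqrestore(mapping, flags)     \
            xa_unlock_irqrestore(&(mapping)->i_pages, flags)

    /* Caller pattern inside flush_dcache_page(): */
    unsigned long flags;

    flush_dcache_mmap_lock_irqsave(mapping, flags);
    /* ... walk mapping->i_mmap and flush user-space aliases ... */
    flush_dcache_mmap_unlock_irqrestore(mapping, flags);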
2023-08-24  nios2: implement the new page table range API  (Matthew Wilcox (Oracle); 1 file, -36/+43)
Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Mike Rapoport (IBM) <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
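A minimal sketch of the loop shape that set_ptes() follows (the generic pattern from this API series; PFN_PTE_SHIFT and the surrounding hooks are assumptions here, not the nios2-specific patch):

    static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte, unsigned int nr)
    {
            for (;;) {
                    set_pte(ptep, pte);     /* install one entry */
                    if (--nr == 0)
                            break;
                    ptep++;
                    /* advance to the next page frame of the folio */
                    pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
            }
    }

Under this scheme, the single-page update_mmu_cache() reduces to update_mmu_cache_range() with a count of one.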
2021-04-30  mm: move page_mapping_file to pagemap.h  (Matthew Wilcox (Oracle); 1 file, -0/+1)
page_mapping_file() is only used by some architectures, and then it is usually only used in one place. Make it a static inline function so other architectures don't have to carry this dead code.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Cc: Huang Ying <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
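The resulting helper is tiny; a sketch of the static inline (an anonymous page in the swap cache has no file mapping, so it short-circuits to NULL):

    static inline struct address_space *page_mapping_file(struct page *page)
    {
            /* Swap-cache (anonymous) pages have no file mapping. */
            if (unlikely(PageSwapCache(page)))
                    return NULL;
            return page_mapping(page);
    }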
2019-03-07  nios2: update_mmu_cache preload the TLB with the new PTE  (Nicholas Piggin; 1 file, -3/+4)
Rather than flushing the TLB entry when installing a new PTE and letting the fast TLB-reload handler re-fill the TLB, refill the TLB entry directly when removing the old one.

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Ley Foon Tan <[email protected]>
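A sketch of the resulting shape of update_mmu_cache(), assuming a reload_tlb_page() helper that writes the translation straight into the TLB (the helper name and the surrounding checks are assumptions):

    void update_mmu_cache(struct vm_area_struct *vma,
                          unsigned long address, pte_t *ptep)
    {
            pte_t pte = *ptep;

            /* was: flush_tlb_page(vma, address);
             * Write the new translation directly, so the next access
             * hits the TLB instead of taking a fast-miss refill first. */
            reload_tlb_page(vma, address, pte);

            /* ... existing icache/dcache alias handling follows ... */
    }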
2019-03-07  nios2: update_mmu_cache clear the old entry from the TLB  (Nicholas Piggin; 1 file, -0/+2)
Fault paths like do_read_fault will install a Linux pte with the young bit clear. The CPU will fault again because the TLB has not been updated; this time a valid pte exists, so handle_pte_fault will just set the young bit with ptep_set_access_flags, which flushes the TLB. With the TLB flushed, the next attempt goes to the fast TLB handler, which loads the TLB with the new Linux pte, and the access proceeds.

This design is fragile in that it depends on the young bit being clear after the initial Linux fault. A proposed core mm change to immediately set the young bit upon such a fault results in ptep_set_access_flags not flushing the TLB, because it finds no change to the pte. The spurious fault fix path only flushes the TLB if the access was a store; if it was a load, the result is an infinite loop of page faults.

This change adds a TLB flush in update_mmu_cache, which removes the stale TLB entry upon the first fault. This causes the fast TLB handler to load the new pte and avoids the Linux page fault entirely.

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Ley Foon Tan <[email protected]>
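A sketch of the fix, under the assumption that only the flush is added and the existing alias handling is untouched:

    void update_mmu_cache(struct vm_area_struct *vma,
                          unsigned long address, pte_t *ptep)
    {
            /* Remove any stale TLB entry for this address, so the next
             * access takes the fast TLB miss handler and loads the new
             * Linux pte, instead of re-entering the page fault path. */
            flush_tlb_page(vma, address);

            /* ... existing icache/dcache alias handling follows ... */
    }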
2018-04-05  mm: fix races between swapoff and flush dcache  (Huang Ying; 1 file, -2/+2)
Thanks to commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB trunks"), after swapoff the address_space associated with the swap device will be freed. So page_mapping() users which may touch the address_space need some kind of mechanism to prevent the address_space from being freed while it is being accessed.

The dcache flushing functions (flush_dcache_page(), etc.) in architecture-specific code may access the address_space of the swap device for anonymous pages in the swap cache via the page_mapping() function. But in some cases there is no mechanism to prevent the swap device from being swapped off, for example:

    CPU1                                  CPU2
    __get_user_pages()                    swapoff()
      flush_dcache_page()
        mapping = page_mapping()
                                            exit_swap_address_space()
                                              ...
                                                kvfree(spaces)
        mapping_mapped(mapping)

The address space may be accessed after being freed.

But per cachetlb.txt and Russell King, flush_dcache_page() only cares about file cache pages; for anonymous pages, flush_anon_page() should be used. The implementation of flush_dcache_page() in all architectures follows this too: they check whether page_mapping() is NULL and whether mapping_mapped() is true to determine whether to flush the dcache immediately, and they use the interval tree (mapping->i_mmap) to find all user-space mappings. Neither mapping_mapped() nor mapping->i_mmap is used by anonymous pages in the swap cache at all.

So, to fix the race between swapoff and dcache flushing, __page_mapping() is added to return the address_space for file cache pages and NULL otherwise. All page_mapping() calls in the dcache flushing functions are replaced with page_mapping_file().

[[email protected]: simplify page_mapping_file(), per Mike]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: "Huang, Ying" <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Chen Liqin <[email protected]>
Cc: Russell King <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Guan Xuetao <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Chris Zankel <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Ley Foon Tan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Mike Rapoport <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
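In an architecture's flush_dcache_page() the fix is a one-line substitution; a sketch (the deferral detail and the __flush_dcache_page() helper are assumptions, not the verbatim nios2 diff):

    void flush_dcache_page(struct page *page)
    {
            struct address_space *mapping;

            /* was: mapping = page_mapping(page); */
            mapping = page_mapping_file(page);

            /* Anonymous pages in the swap cache now yield NULL, so the
             * mapping_mapped() check and the mapping->i_mmap walk can
             * never touch a swap address_space being freed by swapoff(). */
            if (mapping && !mapping_mapped(mapping)) {
                    /* no user mappings: defer via PG_arch_1 (elided) */
                    return;
            }

            __flush_dcache_page(mapping, page);     /* assumed helper */
    }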
2015-11-26  nios2: fix cache coherency  (Ley Foon Tan; 1 file, -20/+4)
There is an intermittent cache coherency issue caught in toolchain tests. Revert to using flushd.

Signed-off-by: Ley Foon Tan <[email protected]>
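A sketch of the reverted write-back loop, assuming the cpuinfo field names; flushd operates by cache index with no tag match required, which is why walking one cache-size worth of lines is enough to write back everything:

    static void __flush_dcache(unsigned long start, unsigned long end)
    {
            unsigned long addr;

            start &= ~(cpuinfo.dcache_line_size - 1);
            end += (cpuinfo.dcache_line_size - 1);
            end &= ~(cpuinfo.dcache_line_size - 1);

            /* flushd hits lines by index only, so one cache size worth
             * of addresses covers the whole dcache */
            if (end > start + cpuinfo.dcache_size)
                    end = start + cpuinfo.dcache_size;

            for (addr = start; addr < end; addr += cpuinfo.dcache_line_size)
                    __asm__ __volatile__("flushd 0(%0)\n" : : "r"(addr));
    }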
2015-04-24  nios2: rework cache  (Ley Foon Tan; 1 file, -16/+36)
- flush dcache before flushing the instruction cache (ordering sketched below)
- rework update_mmu_cache and flush_dcache_page
- add shmparam.h

Signed-off-by: Ley Foon Tan <[email protected]>
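A sketch of the ordering the rework establishes (helper names assumed): dirty data must reach memory before the icache is invalidated, so that instruction fetches see the new bytes:

    void flush_icache_range(unsigned long start, unsigned long end)
    {
            __flush_dcache(start, end);     /* write dirty lines back first */
            __flush_icache(start, end);     /* then refetch instructions */
    }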
2015-04-20  nios2: remove end address checking for initda  (Ley Foon Tan; 1 file, -3/+0)
Remove the end-address check from the initda function. The initda instruction must be issued for each cache line address, from the start address through the end address.

Signed-off-by: Ley Foon Tan <[email protected]>
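A sketch of the per-line invalidate loop once the end-address clamp is gone (cpuinfo field names assumed); unlike an index-based operation, initda acts only on the single line it addresses, so every line in [start, end) must be visited:

    static void __invalidate_dcache(unsigned long start, unsigned long end)
    {
            unsigned long addr;

            start &= ~(cpuinfo.dcache_line_size - 1);
            end += (cpuinfo.dcache_line_size - 1);
            end &= ~(cpuinfo.dcache_line_size - 1);

            /* no clamp to dcache_size: initda invalidates only on an
             * address match, so each line is issued individually */
            for (addr = start; addr < end; addr += cpuinfo.dcache_line_size)
                    __asm__ __volatile__("initda 0(%0)\n" : : "r"(addr));
    }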
2015-04-10  nios2: fix cache coherency issue when debug with gdb  (Ley Foon Tan; 1 file, -3/+0)
Remove the end-address check from the flushda function. Each cache line address from start to end must be flushed with the flushda instruction, because flushda only flushes a cache line when both the tag and line fields match.

Also change to the ldwio instruction (which bypasses the cache) to load the instruction that caused the trap. We are interested in the actual instruction the processor executed, so the load should be uncached; note that EA might be a cached userspace address.

Signed-off-by: Ley Foon Tan <[email protected]>
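For the ldwio part, a hedged sketch of fetching the trapping instruction with a cache-bypassing load (the helper name is hypothetical):

    /* Read the instruction word at the exception address with ldwio, an
     * I/O (cache-bypassing) load, so we see the bytes the processor
     * actually executed even when EA aliases a cached user mapping. */
    static u32 fetch_insn_uncached(unsigned long ea)
    {
            u32 insn;

            __asm__ __volatile__("ldwio %0, 0(%1)\n"
                                 : "=r"(insn)
                                 : "r"(ea));
            return insn;
    }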
2014-12-08  nios2: Cache handling  (Ley Foon Tan; 1 file, -0/+271)
This patch adds the functionality required for cache maintenance.

Signed-off-by: Ley Foon Tan <[email protected]>