author:    Mike Kravetz <[email protected]>  2023-01-03 16:27:32 -0800
committer: Andrew Morton <[email protected]>  2023-01-18 17:12:55 -0800
commit:    e9adcfecf572fcfaa9f8525904cf49c709974f73
tree:      a80baa77042b0cafac34b8953541966b60794b6f  /mm/page-writeback.c
parent:    bbc61844b4645d54c147a82654ac974bb7be85de
mm: remove zap_page_range and create zap_vma_pages
zap_page_range was originally designed to unmap pages within an address
range that could span multiple vmas.  While working on [1], it was
discovered that all callers of zap_page_range pass a range entirely within
a single vma.  In addition, the mmu notification call within
zap_page_range does not correctly handle ranges that span multiple vmas:
when crossing a vma boundary, a new mmu_notifier_range_init/end call pair
against the new vma should be made.
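For illustration only, a hedged sketch of what correct multi-vma behavior
would require (zap_range_multi_vma is a hypothetical name, not kernel
code): each vma crossed needs its own notification window, which
zap_page_range_single() already provides for a single vma:

	#include <linux/mm.h>

	/*
	 * Hypothetical helper, not a kernel function: a zap spanning
	 * multiple vmas would have to give each vma its own notifier
	 * window.  Caller must hold the mmap lock.
	 */
	static void zap_range_multi_vma(struct mm_struct *mm,
					unsigned long start, unsigned long end)
	{
		struct vm_area_struct *vma;

		vma = find_vma_intersection(mm, start, end);
		while (vma && vma->vm_start < end) {
			unsigned long s = max(start, vma->vm_start);
			unsigned long e = min(end, vma->vm_end);

			/*
			 * zap_page_range_single() sets up and tears down
			 * its own mmu_notifier_range for this vma, supplying
			 * the per-vma init/end pair that zap_page_range's
			 * single shared range did not.
			 */
			zap_page_range_single(vma, s, e - s, NULL);
			vma = find_vma(mm, vma->vm_end);
		}
	}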
Instead of fixing zap_page_range, do the following:
- Create a new routine zap_vma_pages() that will remove all pages within
  the passed vma.  Most users of zap_page_range pass the entire vma and
  can use this new routine (a sketch follows the list below).
- For callers of zap_page_range not passing the entire vma, instead call
zap_page_range_single().
- Remove zap_page_range.
[1] https://lore.kernel.org/linux-mm/[email protected]/
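Since every remaining caller operates on a single vma, zap_vma_pages() can
be a thin wrapper around zap_page_range_single().  A minimal sketch
consistent with the description above (the actual definition lands in
mm/memory.c):

	#include <linux/mm.h>

	/* Unmap every page in the given vma. */
	void zap_vma_pages(struct vm_area_struct *vma)
	{
		zap_page_range_single(vma, vma->vm_start,
				      vma->vm_end - vma->vm_start, NULL);
	}

Converting a whole-vma caller is then mechanical:
zap_page_range(vma, vma->vm_start, vma->vm_end - vma->vm_start) becomes
zap_vma_pages(vma); partial-range callers switch to
zap_page_range_single() directly.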
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Kravetz <[email protected]>
Suggested-by: Peter Xu <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Peter Xu <[email protected]>
Acked-by: Heiko Carstens <[email protected]> [s390]
Reviewed-by: Christoph Hellwig <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nadav Amit <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Diffstat (limited to 'mm/page-writeback.c')
 mm/page-writeback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 337cafe9978c..e91f94b3438b 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2690,7 +2690,7 @@ void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
  *
  * The caller must hold lock_page_memcg().  Most callers have the folio
  * locked.  A few have the folio blocked from truncation through other
- * means (eg zap_page_range() has it mapped and is holding the page table
+ * means (eg zap_vma_pages() has it mapped and is holding the page table
  * lock).  This can also be called from mark_buffer_dirty(), which I
  * cannot prove is always protected against truncate.
  */