author    | Matthew Wilcox (Oracle) <[email protected]> | 2020-10-13 16:51:24 -0700
committer | Linus Torvalds <[email protected]>           | 2020-10-13 18:38:29 -0700
commit    | e6e88712e43b7942df451508aafc2f083266f56b
tree      | ee72fadc853f21222d73acc482f343bac49df7f6
parent    | f5df8635c5a3c912919c91be64aa198554b0f9ed
mm: optimise madvise WILLNEED
Instead of calling find_get_entry() for every page index, use an XArray
iterator to skip over NULL entries, and avoid calling get_page(),
because we only want the swap entries.
[[email protected]: fix LTP soft lockups]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jani Nikula <[email protected]>
Cc: Matthew Auld <[email protected]>
Cc: William Kucharski <[email protected]>
Cc: Qian Cai <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r-- | mm/madvise.c | 21
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 0e0d61003fc6..9b065d412e5f 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -224,25 +224,28 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
 		struct address_space *mapping)
 {
-	pgoff_t index;
+	XA_STATE(xas, &mapping->i_pages, linear_page_index(vma, start));
+	pgoff_t end_index = end / PAGE_SIZE;
 	struct page *page;
-	swp_entry_t swap;
 
-	for (; start < end; start += PAGE_SIZE) {
-		index = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+	rcu_read_lock();
+	xas_for_each(&xas, page, end_index) {
+		swp_entry_t swap;
 
-		page = find_get_entry(mapping, index);
-		if (!xa_is_value(page)) {
-			if (page)
-				put_page(page);
+		if (!xa_is_value(page))
 			continue;
-		}
+		xas_pause(&xas);
+		rcu_read_unlock();
+
 		swap = radix_to_swp_entry(page);
 		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
 							NULL, 0, false);
 		if (page)
 			put_page(page);
+
+		rcu_read_lock();
 	}
+	rcu_read_unlock();
 
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 }
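
As an aid to reading the diff, the sketch below writes out the same iteration pattern as a standalone, commented function: walk the mapping's i_pages with an XArray cursor, act only on value (swap) entries, and use xas_pause() plus an RCU unlock/relock around read_swap_cache_async(), which may sleep and so should not be called inside the RCU read-side critical section (the soft-lockup fixup noted in the changelog). This is illustrative only, not kernel source: the function name prefetch_swap_entries() is made up for the example, and the #includes are assumptions about what such a translation unit would need.

/*
 * Illustrative sketch only, not part of the patch: an annotated version of
 * the pattern introduced above.  prefetch_swap_entries() is a hypothetical
 * name; the includes are assumed.
 */
#include <linux/mm.h>
#include <linux/rcupdate.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/xarray.h>

static void prefetch_swap_entries(struct address_space *mapping,
				  pgoff_t first, pgoff_t last)
{
	XA_STATE(xas, &mapping->i_pages, first);	/* cursor starting at 'first' */
	struct page *page;

	rcu_read_lock();
	xas_for_each(&xas, page, last) {	/* visits only non-NULL slots */
		swp_entry_t swap;

		if (!xa_is_value(page))		/* a real page is already in the cache */
			continue;

		/*
		 * read_swap_cache_async() may sleep, so park the cursor and
		 * leave the RCU read-side critical section before calling it.
		 */
		xas_pause(&xas);
		rcu_read_unlock();

		swap = radix_to_swp_entry(page);	/* decode the value entry */
		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
					     NULL, 0, false);
		if (page)
			put_page(page);		/* only wanted to start the read */

		rcu_read_lock();		/* re-acquire before resuming the walk */
	}
	rcu_read_unlock();
}

Relative to the old per-index loop, the XArray iterator skips over empty slots and never takes a get_page()/put_page() reference on pages that are already in the page cache, which is the saving the changelog describes.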