authorKirill A. Shutemov <[email protected]>2020-06-03 16:00:12 -0700
committerLinus Torvalds <[email protected]>2020-06-03 20:09:46 -0700
commita980df33e9351e5474c06ec0fd96b2f409e2ff22 (patch)
treec902f4e3a65408f225a32aa86dea12120505d318
parentffe945e633b527d5a4577b42cbadec3c7cbcf096 (diff)
khugepaged: drain all LRU caches before scanning pages
Having a page in an LRU add cache offsets the page refcount and gives a false negative on PageLRU(), which reduces the collapse success rate. Drain all LRU add caches before scanning. This happens relatively rarely and should not disturb the system too much.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Tested-by: Zi Yan <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Acked-by: Yang Shi <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Ralph Campbell <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
 mm/khugepaged.c | 2 ++
 1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c436fd390296..8a74b9705a65 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2079,6 +2079,8 @@ static void khugepaged_do_scan(void)
 	barrier(); /* write khugepaged_pages_to_scan to local stack */
+ lru_add_drain_all();
+
 	while (progress < pages) {
 		if (!khugepaged_prealloc_page(&hpage, &wait))
 			break;