author    | Minchan Kim <[email protected]>    | 2019-09-25 16:49:11 -0700
committer | Linus Torvalds <[email protected]> | 2019-09-25 17:51:41 -0700
commit    | 8940b34a4e082ae11498ddae8432f2ac07685d1c (patch)
tree      | adf414c3963d76ca7f370348e9e5974cc7a3f26d
parent    | 9c276cc65a58faf98be8e56962745ec99ab87636 (diff)
mm: replace PAGEREF_RECLAIM_CLEAN with PAGEREF_RECLAIM
The local variable references in shrink_page_list defaults to
PAGEREF_RECLAIM_CLEAN.  Its purpose is to prevent dirty pages from being
reclaimed while CMA is migrating pages.  Strictly speaking, it is not
needed, because CMA already forbids writeback via .may_writepage = 0 in
reclaim_clean_pages_from_list.  Moreover, the default gets in the way of
an upcoming patch: it prevents anonymous pages from being swapped out
even when force_reclaim = true in shrink_page_list.  So change the
default value of references to PAGEREF_RECLAIM and rename force_reclaim
to ignore_references to make its meaning clearer.
This is preparatory work for the next patch.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Minchan Kim <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Chris Zankel <[email protected]>
Cc: Daniel Colascione <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Hillf Danton <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Joel Fernandes (Google) <[email protected]>
Cc: kbuild test robot <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Oleksandr Natalenko <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Richard Henderson <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: Sonny Rao <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Tim Murray <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r-- | mm/vmscan.c | 6
1 file changed, 3 insertions, 3 deletions
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4911754c93b7..d8bbaf068c35 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1123,7 +1123,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 						struct scan_control *sc,
 						enum ttu_flags ttu_flags,
 						struct reclaim_stat *stat,
-						bool force_reclaim)
+						bool ignore_references)
 {
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
@@ -1137,7 +1137,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		struct address_space *mapping;
 		struct page *page;
 		int may_enter_fs;
-		enum page_references references = PAGEREF_RECLAIM_CLEAN;
+		enum page_references references = PAGEREF_RECLAIM;
 		bool dirty, writeback;
 		unsigned int nr_pages;
 
@@ -1268,7 +1268,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			}
 		}
 
-		if (!force_reclaim)
+		if (!ignore_references)
 			references = page_check_references(page, sc);
 
 		switch (references) {
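
For context on the ".may_writepage = 0" point in the log, below is a hedged
sketch of the CMA-side caller, reclaim_clean_pages_from_list(), loosely
reconstructed from mm/vmscan.c of that era.  The exact field settings and the
NR_ISOLATED_FILE accounting are simplified or omitted; treat it as an
illustration of why the PAGEREF_RECLAIM_CLEAN default was redundant on this
path, not as the verbatim tree state.

/*
 * Hedged sketch, not the exact upstream code: CMA hands only clean file
 * pages to shrink_page_list() and never enables writeback, so the old
 * PAGEREF_RECLAIM_CLEAN default never mattered here.
 */
unsigned long reclaim_clean_pages_from_list(struct zone *zone,
					    struct list_head *page_list)
{
	struct scan_control sc = {
		.gfp_mask = GFP_KERNEL,
		.priority = DEF_PRIORITY,
		.may_unmap = 1,
		/* .may_writepage stays 0: dirty pages are never written out */
	};
	struct reclaim_stat stat;
	unsigned long nr_reclaimed;
	struct page *page, *next;
	LIST_HEAD(clean_pages);

	/* Only clean, non-movable file pages are isolated for reclaim. */
	list_for_each_entry_safe(page, next, page_list, lru) {
		if (page_is_file_cache(page) && !PageDirty(page) &&
		    !__PageMovable(page)) {
			ClearPageActive(page);
			list_move(&page->lru, &clean_pages);
		}
	}

	/*
	 * The final argument is the renamed ignore_references (formerly
	 * force_reclaim): page_check_references() is skipped entirely, so
	 * the pages are reclaimed regardless of the references default.
	 */
	nr_reclaimed = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
					TTU_IGNORE_ACCESS, &stat, true);

	list_splice(&clean_pages, page_list);
	return nr_reclaimed;
}

With writeback disabled and only clean pages on the list, the guarantee of
reclaim comes from passing ignore_references = true, which is why the patch
can safely switch the default to PAGEREF_RECLAIM for every other caller.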