author | Yang Shi <[email protected]> | 2020-04-06 20:04:21 -0700
---|---|---
committer | Linus Torvalds <[email protected]> | 2020-04-07 10:43:38 -0700
commit | 6aeff241fe6c4561a842b344c1ca14a700ec8441 |
tree | 59acfa02969675d81b54d5cee9b7590833052ef2 |
parent | d08221a0807b0489f0081476bcd36da88722560b |
mm/migrate.c: migrate PG_readahead flag
Currently the migration code does not migrate the PG_readahead flag.
In theory this incurs a slight performance loss, as the application
may have to ramp its readahead back up again. In practice the effect
is likely hidden by something else, since migration is typically
triggered by compaction or NUMA balancing, either of which should be
more noticeable.
Migrate the flag after end_page_writeback(), since that call may clear
the PG_reclaim flag, which shares the same bit as PG_readahead, on the
new page.
[[email protected]: tweak comment]
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mel Gorman <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r-- | mm/migrate.c | 8
1 file changed, 8 insertions, 0 deletions
diff --git a/mm/migrate.c b/mm/migrate.c
index c550230664b1..1a205503be3f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -647,6 +647,14 @@ void migrate_page_states(struct page *newpage, struct page *page)
 	if (PageWriteback(newpage))
 		end_page_writeback(newpage);
 
+	/*
+	 * PG_readahead shares the same bit with PG_reclaim. The above
+	 * end_page_writeback() may clear PG_readahead mistakenly, so set the
+	 * bit after that.
+	 */
+	if (PageReadahead(page))
+		SetPageReadahead(newpage);
+
 	copy_page_owner(page, newpage);
 
 	mem_cgroup_migrate(page, newpage);
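
For context on why the ordering in the hunk above matters: PG_readahead and
PG_reclaim occupy the same page-flag bit, so clearing one clears the other.
The toy userspace program below is a minimal sketch of that hazard, not
kernel code; the names toy_page and toy_end_page_writeback and the bit
position are invented for illustration. It mimics the migrate_page_states()
sequence and shows that copying the readahead bit before the
writeback-completion step loses it, while copying it afterwards (as the
patch does) preserves it.

/*
 * Toy userspace model of the aliasing hazard addressed by this patch.
 * PG_readahead and PG_reclaim share one bit, so "ending writeback" on
 * the new page (which clears PG_reclaim) also clears PG_readahead if
 * the flag was copied too early.  All names and bit positions here are
 * illustrative only; this is not kernel code.
 */
#include <stdio.h>

#define TOY_PG_RECLAIM   (1UL << 4)
#define TOY_PG_READAHEAD TOY_PG_RECLAIM    /* same bit, two meanings */

struct toy_page { unsigned long flags; };

/* stands in for end_page_writeback(): completion drops PG_reclaim */
static void toy_end_page_writeback(struct toy_page *p)
{
        p->flags &= ~TOY_PG_RECLAIM;
}

static int has_readahead(const struct toy_page *p)
{
        return !!(p->flags & TOY_PG_READAHEAD);
}

int main(void)
{
        struct toy_page oldpage = { .flags = TOY_PG_READAHEAD };
        struct toy_page newpage = { .flags = 0 };

        /* broken order: copy the flag first, then end writeback */
        newpage.flags |= oldpage.flags & TOY_PG_READAHEAD;
        toy_end_page_writeback(&newpage);
        printf("copy before writeback end: readahead=%d\n",
               has_readahead(&newpage));    /* flag lost */

        /* patched order: end writeback first, then migrate the flag */
        newpage.flags = 0;
        toy_end_page_writeback(&newpage);
        if (has_readahead(&oldpage))
                newpage.flags |= TOY_PG_READAHEAD;
        printf("copy after writeback end:  readahead=%d\n",
               has_readahead(&newpage));    /* flag preserved */

        return 0;
}

Compiled with gcc, the first printf reports 0 (the copied bit was wiped)
and the second reports 1, which is the ordering the patch enforces by
setting PG_readahead on the new page only after end_page_writeback().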