| author | Matthew Wilcox <[email protected]> | 2019-05-13 17:16:44 -0700 |
|---|---|---|
| committer | Linus Torvalds <[email protected]> | 2019-05-14 09:47:45 -0700 |
| commit | 5fd4ca2d84b249f0858ce28cf637cf25b61a398f | |
| tree | a7660b8f5f9fa02945070e2ab918eb645c92ba36 /include/linux | |
| parent | cefdca0a86be517bc390fc4541e3674b8e7803b0 | |
mm: page cache: store only head pages in i_pages
Transparent Huge Pages are currently stored in i_pages as pointers to
consecutive subpages. This patch changes that to storing consecutive
pointers to the head page in preparation for storing huge pages more
efficiently in i_pages.
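For illustration only, here is a userspace sketch of that layout change (the names `slots` and `order` are ours, and a plain array stands in for the XArray that backs i_pages; this is not kernel code):

```c
#include <stdio.h>

/* Stand-in for struct page; a plain array models the XArray slots. */
struct page { char pad; };

int main(void)
{
	unsigned int order = 3;			/* pretend order-3 compound page */
	unsigned long nr = 1UL << order;	/* 8 subpages */
	struct page subpages[8];
	struct page *slots[8];
	unsigned long i;

	/* Old scheme: slot i holds a pointer to subpage i. */
	for (i = 0; i < nr; i++)
		slots[i] = &subpages[i];

	/* New scheme: every slot holds the head page... */
	for (i = 0; i < nr; i++)
		slots[i] = &subpages[0];

	/* ...and a lookup recomputes the subpage from the index. */
	unsigned long index = 5;
	unsigned long mask = (1UL << order) - 1;
	struct page *sub = slots[index] + (index & mask);

	printf("recovered subpage %ld\n", (long)(sub - subpages));	/* prints 5 */
	return 0;
}
```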
Large parts of this are "inspired" by Kirill's patch
https://lore.kernel.org/lkml/[email protected]/
[[email protected]: fix swapcache pages]
Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: hugetlb stores pages in page cache differently]
Link: http://lkml.kernel.org/r/20190404134553.vuvhgmghlkiw2hgl@kshutemo-mobl1
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox <[email protected]>
Acked-by: Jan Kara <[email protected]>
Reviewed-by: Kirill Shutemov <[email protected]>
Reviewed-and-tested-by: Song Liu <[email protected]>
Tested-by: William Kucharski <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Tested-by: Qian Cai <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Song Liu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/pagemap.h | 13 |
|---|---|---|
1 files changed, 13 insertions, 0 deletions
```diff
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index bcf909d0de5f..2e8438a1216a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -333,6 +333,19 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
 			mapping_gfp_mask(mapping));
 }
 
+static inline struct page *find_subpage(struct page *page, pgoff_t offset)
+{
+	unsigned long mask;
+
+	if (PageHuge(page))
+		return page;
+
+	VM_BUG_ON_PAGE(PageTail(page), page);
+
+	mask = (1UL << compound_order(page)) - 1;
+	return page + (offset & mask);
+}
+
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
 struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
```
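The arithmetic in the new find_subpage() is compact: compound_order(page) is log2 of the number of subpages, so the mask selects the low bits of the page-cache index that address a subpage within the compound page. Hugetlbfs pages (PageHuge) are indexed in units of the huge page size, so for them the head page is already the right answer. A minimal userspace sketch of the same arithmetic, with illustrative values not taken from the patch:

```c
#include <assert.h>
#include <stdio.h>

/* Userspace model of the mask arithmetic in find_subpage(). */
int main(void)
{
	/* An order-9 THP (512 subpages) occupies indices 512..1023. */
	unsigned int order = 9;
	unsigned long mask = (1UL << order) - 1;	/* 0x1ff */

	/*
	 * A lookup for index 520 now loads the head page's slot; the
	 * low bits of the index pick the subpage within the THP.
	 */
	unsigned long offset = 520;
	assert((offset & mask) == 8);	/* find_subpage() -> head + 8 */

	printf("subpage index within THP: %lu\n", offset & mask);
	return 0;
}
```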