| author | Matthew Wilcox (Oracle) <[email protected]> | 2019-09-23 15:34:52 -0700 |
|---|---|---|
| committer | Linus Torvalds <[email protected]> | 2019-09-24 15:54:08 -0700 |
| commit | 4101196b19d7f905dca5dcf46cd35eb758cf06c0 | |
| tree | f19a6fe24db9f749ef3e8c808eba6a067a336aa8 | |
| parent | 875d91b11a201276ac3a9ab79f8b0fa3dc4ee8fd | |
mm: page cache: store only head pages in i_pages
Transparent Huge Pages are currently stored in i_pages as pointers to
consecutive subpages. This patch changes that to storing consecutive
pointers to the head page in preparation for storing huge pages more
efficiently in i_pages.
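To make the new layout concrete, here is a minimal userspace C sketch, not kernel code: the slots[] array stands in for the XArray behind i_pages, and struct page, compound_nr(), and find_subpage() here are simplified toys rather than the kernel definitions. Before the patch, four slots covered by an order-2 compound page would hold four distinct subpage pointers; after it, they all hold the head page, and a lookup recovers the subpage by masking the offset:

```c
#include <assert.h>
#include <stdio.h>

#define NR_SLOTS 16

/* Toy stand-in for the kernel's struct page. */
struct page {
	unsigned int order;	/* log2 of the number of subpages */
};

/* Number of subpages in a compound page: always a power of two. */
static unsigned long compound_nr(const struct page *page)
{
	return 1UL << page->order;
}

/*
 * Mirrors the patch's find_subpage(): every slot the compound page
 * covers stores the head pointer, so a lookup recovers the subpage
 * by masking the offset with compound_nr - 1. This relies on the
 * compound page being naturally aligned in page-offset space.
 */
static struct page *find_subpage(struct page *head, unsigned long offset)
{
	return head + (offset & (compound_nr(head) - 1));
}

int main(void)
{
	/* An order-2 compound page: head pages[0] plus three tails. */
	struct page pages[4] = { { .order = 2 }, { 0 }, { 0 }, { 0 } };
	struct page *slots[NR_SLOTS] = { 0 };

	/*
	 * Old scheme: slots 4..7 stored &pages[0]..&pages[3] individually.
	 * New scheme: all four slots store the head page pointer.
	 */
	for (unsigned long i = 4; i < 8; i++)
		slots[i] = &pages[0];

	/* A lookup at page offset 6 still resolves to subpage 2. */
	struct page *sub = find_subpage(slots[6], 6);
	assert(sub == &pages[2]);
	printf("offset 6 -> subpage index %td\n", sub - pages);
	return 0;
}
```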
Large parts of this are "inspired" by Kirill's patch
https://lore.kernel.org/lkml/[email protected]/
Kirill and Huang Ying contributed several fixes.
[[email protected]: use compound_nr, squish uninit-var warning]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox <[email protected]>
Acked-by: Jan Kara <[email protected]>
Reviewed-by: Kirill Shutemov <[email protected]>
Reviewed-by: Song Liu <[email protected]>
Tested-by: Song Liu <[email protected]>
Tested-by: William Kucharski <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Tested-by: Qian Cai <[email protected]>
Tested-by: Mikhail Gavrilov <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Song Liu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/pagemap.h | 10 |
1 file changed, 10 insertions, 0 deletions
```diff
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c7552459a15f..37a4d9e32cd3 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -333,6 +333,16 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
 			mapping_gfp_mask(mapping));
 }
 
+static inline struct page *find_subpage(struct page *page, pgoff_t offset)
+{
+	if (PageHuge(page))
+		return page;
+
+	VM_BUG_ON_PAGE(PageTail(page), page);
+
+	return page + (offset & (compound_nr(page) - 1));
+}
+
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
 struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
```
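To unpack the arithmetic in the added helper (our reading of the diff, not text from the commit): compound_nr(page) is the number of subpages in a compound page and is always a power of two, so offset & (compound_nr(page) - 1) yields the offset's index within the compound page, relying on compound pages being naturally aligned in the file's page-offset space. For a 2MB THP built from 512 4KB subpages, a lookup at page offset 515 computes 515 & 511 = 3 and returns head + 3. The PageHuge() early return covers hugetlbfs, which indexes i_pages in units of the huge page size, so the head page is already the right answer there; the VM_BUG_ON_PAGE(PageTail(page), page) asserts the patch's new invariant that tail pages never appear in i_pages.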