This saves five calls to compound_head(), totalling 60 bytes of text.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
This is the folio equivalent of page_evictable(). Unfortunately, it's
different from !folio_test_unevictable(), but I think it's used in places
where you have to be a VM expert and can reasonably be expected to know
the difference.
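A minimal sketch of the helper (hedged; the exact implementation may differ slightly). The mapping's unevictable state matters as well as the folio flag, which is why this is not simply !folio_test_unevictable():

    static inline bool folio_evictable(struct folio *folio)
    {
            bool ret;

            /* Prevent address_space of inode and swap cache from being freed */
            rcu_read_lock();
            ret = !mapping_unevictable(folio_mapping(folio)) &&
                            !folio_test_unevictable(folio);
            rcu_read_unlock();
            return ret;
    }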
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
This nets us 178 bytes of savings from removing calls to compound_head.
The three callers all grow a little, but each of them will be converted
to use folios soon, so that's fine.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Reimplement redirty_page_for_writepage() as a wrapper around
folio_redirty_for_writepage(). Account the number of pages in the
folio, add kernel-doc and move the prototype to writeback.h.
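The compatibility wrapper then reduces to a one-liner along these lines (a sketch based on the description above, not necessarily the exact patch):

    bool redirty_page_for_writepage(struct writeback_control *wbc,
                    struct page *page)
    {
            return folio_redirty_for_writepage(wbc, page_folio(page));
    }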
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Account the number of pages in the folio that we're redirtying.
Turn account_page_redirty() into a wrapper around it. Also turn
the comment on folio_account_redirty() into kernel-doc and
edit it slightly so it makes sense to its potential callers.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Transform clear_page_dirty_for_io() into folio_clear_dirty_for_io()
and add a compatibility wrapper. Also move the declaration to pagemap.h
as this is page cache functionality that doesn't need to be used by the
rest of the kernel.
Increases the size of the kernel by 79 bytes. While we remove a few
calls to compound_head(), we add a call to folio_nr_pages() to get the
stats correct for the eventual support of multi-page folios.
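The wrapper presumably looks like this (a sketch; the return type is assumed from the folio function):

    bool clear_page_dirty_for_io(struct page *page)
    {
            return folio_clear_dirty_for_io(page_folio(page));
    }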
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Turn __cancel_dirty_page() into __folio_cancel_dirty() and add wrappers.
Move the prototypes into pagemap.h since this is page cache functionality.
Saves 44 bytes of kernel text in total; 33 bytes from __folio_cancel_dirty
and 11 from two callers of cancel_dirty_page().
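The wrappers plausibly look like the following sketch; checking the dirty flag first keeps the common clean-folio case cheap:

    static inline void folio_cancel_dirty(struct folio *folio)
    {
            /* Avoid atomic ops, locking, etc. if not actually dirty */
            if (folio_test_dirty(folio))
                    __folio_cancel_dirty(folio);
    }

    static inline void cancel_dirty_page(struct page *page)
    {
            folio_cancel_dirty(page_folio(page));
    }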
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Get the statistics right; compound pages were being accounted as a
single page. This didn't matter until now, as no filesystem that
supported compound pages did writeback. Also move the declaration
to pagemap.h since this is part of the page cache. Add a wrapper for
account_page_cleaned().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Reimplement __set_page_dirty_nobuffers() as a wrapper around
filemap_dirty_folio(). Eventually folio_mark_dirty() will pass
the folio's mapping to the address space's ->dirty_folio()
operation, so add the parameter to filemap_dirty_folio() now.
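A sketch of the resulting wrapper (hedged; the exact code may differ), showing the mapping being passed explicitly:

    int __set_page_dirty_nobuffers(struct page *page)
    {
            return filemap_dirty_folio(page_mapping(page), page_folio(page));
    }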
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Rename writeback_dirty_page() to writeback_dirty_folio() and
wait_on_page_writeback() to folio_wait_writeback().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Turn __set_page_dirty() into a wrapper around __folio_mark_dirty().
Convert account_page_dirtied() into folio_account_dirtied() and account
the number of pages in the folio to support multi-page folios.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Reimplement set_page_dirty() as a wrapper around folio_mark_dirty().
There is no change to filesystems as they were already being called
with the compound_head of the page being marked dirty. We avoid
several calls to compound_head(), both statically (by using
folio_test_dirty() instead of PageDirty()) and dynamically (by
calling folio_mapping() instead of page_mapping()).
Also return bool instead of int to show the range of values actually
returned, and add kernel-doc.
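The wrapper is minimal (a sketch; existing filesystems keep the int-returning signature):

    int set_page_dirty(struct page *page)
    {
            return folio_mark_dirty(page_folio(page));
    }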
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Rename set_page_writeback() to folio_start_writeback() to match
folio_end_writeback(). Do not bother with wrappers that return void;
callers are perfectly capable of ignoring return values.
Add wrappers for set_page_writeback(), set_page_writeback_keepwrite() and
test_set_page_writeback() for compatibility with existing filesystems.
The main advantage of this patch is getting the statistics right,
although it does eliminate a couple of calls to compound_head().
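Two of the compatibility wrappers might look like this (a sketch; set_page_writeback_keepwrite() follows the same pattern):

    static inline bool set_page_writeback(struct page *page)
    {
            return folio_start_writeback(page_folio(page));
    }

    static inline bool test_set_page_writeback(struct page *page)
    {
            return set_page_writeback(page);
    }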
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
test_clear_page_writeback() is actually an mm-internal function, although
it's named as if it's a pagecache function. Move it to mm/internal.h,
rename it to __folio_end_writeback() and change the return type to bool.
The conversion from page to folio is mostly about accounting the number
of pages being written back, although it does eliminate a couple of
calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Allow for accounting N pages at once instead of one page at a time.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
When batching events (such as writing back N pages in a single I/O), it
is better to do one flex_proportion operation instead of N. There is
only one caller of __fprop_inc_percpu_max(), and it's the one we're
going to change in the next patch, so rename it instead of adding a
compatibility wrapper.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
|
|
This is the folio equivalent of migrate_page_copy(), which is retained
as a wrapper for filesystems which are not yet converted to folios.
Also convert copy_huge_page() to folio_copy().
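folio_copy() amounts to a loop over the constituent pages, roughly as below (a sketch; the cond_resched() keeps large copies preemptible):

    void folio_copy(struct folio *dst, struct folio *src)
    {
            long i = 0;
            long nr = folio_nr_pages(src);

            for (;;) {
                    copy_highpage(folio_page(dst, i), folio_page(src, i));
                    if (++i == nr)
                            break;
                    cond_resched();
            }
    }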
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Turn migrate_page_states() into a wrapper around folio_migrate_flags().
Also convert two functions only called from folio_migrate_flags() to
be folio-based. ksm_migrate_page() becomes folio_migrate_ksm() and
copy_page_owner() becomes folio_copy_owner(). folio_migrate_flags()
alone shrinks by two thirds -- 1967 bytes down to 642 bytes.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Reimplement migrate_page_move_mapping() as a wrapper around
folio_migrate_mapping(). Saves 193 bytes of kernel text.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Transform page_mkclean() into folio_mkclean() and add a page_mkclean()
wrapper around folio_mkclean().
folio_mkclean() is 15 bytes smaller than page_mkclean(), but the kernel
is enlarged by 33 bytes due to inlining page_folio() into each caller.
This will go away once the callers are converted to use folio_mkclean().
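The wrapper itself is a one-liner; the 33 bytes come from page_folio() being inlined at each call site (sketch):

    static inline int page_mkclean(struct page *page)
    {
            return folio_mkclean(page_folio(page));
    }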
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Convert mark_page_accessed() to folio_mark_accessed(). It already
operated on the entire compound page, but now we can avoid calling
compound_head() quite so many times. Shrinks the function from 424 bytes
to 295 bytes (a saving of 129 bytes). The compatibility wrapper is 30
bytes, plus 8 bytes for the exported symbol, so the kernel shrinks by a
net 91 bytes.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
This replaces activate_page() and eliminates lots of calls to
compound_head(). Saves net 118 bytes of kernel text. There are still
some redundant calls to page_folio() here which will be removed when
pagevec_lru_move_fn() is converted to use folios.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
This is a default implementation which calls flush_dcache_page() on
each page in the folio. If architectures can do better, they should
implement their own version of it.
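The generic version is essentially this loop (a sketch; architectures with ranged cache maintenance can do much better):

    void flush_dcache_folio(struct folio *folio)
    {
            long i, nr = folio_nr_pages(folio);

            for (i = 0; i < nr; i++)
                    flush_dcache_page(folio_page(folio, i));
    }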
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Vladimir Zapolskiy reports:
commit a7259df76702 ("memblock: make memblock_find_in_range method private")
triggers a kernel panic while running kmemleak on OF platforms with NOMAP
regions:

    Unable to handle kernel paging request at virtual address fff000021e00000
    [...]
    scan_block+0x64/0x170
    scan_gray_list+0xe8/0x17c
    kmemleak_scan+0x270/0x514
    kmemleak_write+0x34c/0x4ac
Indeed, NOMAP regions don't have linear map entries so an attempt to scan
these areas would fault.
Prevent such faults by excluding NOMAP regions from kmemleak.
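One plausible shape of the fix (a sketch, assuming memblock's internal memblock_setclr_flag() helper; the actual patch may differ): drop a region from kmemleak's object tree at the point where it is marked NOMAP:

    int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
    {
            /*
             * NOMAP regions get no linear map entries, so make sure
             * kmemleak never tries to scan them.
             */
            kmemleak_free_part_phys(base, size);
            return memblock_setclr_flag(base, size, 1, MEMBLOCK_NOMAP);
    }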
Link: https://lore.kernel.org/all/[email protected]
Fixes: a7259df76702 ("memblock: make memblock_find_in_range method private")
Signed-off-by: Mike Rapoport <[email protected]>
Tested-by: Vladimir Zapolskiy <[email protected]>
|
|
Architectures supported by KASAN_HW_TAGS can provide an asymmetric mode
of execution. On MTE-enabled arm64 hardware, for example, this is the
asymmetric tag-checking mode of execution: the CPU triggers a fault
synchronously on a tag mismatch during a load operation, and
asynchronously updates a register when a tag mismatch is detected
during a store operation.
Extend the KASAN HW execution mode kernel command line parameter to
support asymmetric mode.
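A sketch of the parameter extension (hedged; the names mirror the existing sync/async handling), selectable at boot with kasan.mode=asymm:

    enum kasan_arg_mode {
            KASAN_ARG_MODE_DEFAULT,
            KASAN_ARG_MODE_SYNC,
            KASAN_ARG_MODE_ASYNC,
            KASAN_ARG_MODE_ASYMM,   /* new: asymmetric mode */
    };

    /* in the kasan.mode early_param parser */
    else if (!strcmp(arg, "asymm"))
            kasan_arg_mode = KASAN_ARG_MODE_ASYMM;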
Cc: Dmitry Vyukov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Reviewed-by: Andrey Konovalov <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
After merging async mode for KASAN_HW_TAGS, a duplicate of the
kasan_flag_async flag was erroneously left in the code.
Remove the duplicate.
Note: this change introduces no functional changes.
Cc: Dmitry Vyukov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: Evgenii Stepanov <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Reviewed-by: Andrey Konovalov <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
All callers hand in 0 and never will hand in anything else.
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
Convert __page_rmapping to folio_raw_mapping and move it to mm/internal.h.
It's only a couple of instructions (load and mask), so it's definitely
going to be cheaper to inline it than to call it. Leave page_rmapping()
out of line. Change page_anon_vma() to not call folio_raw_mapping() --
it's more efficient to do the subtraction than the mask.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
This function already assumed it was being passed a head page. No real
change here, except that thp_nr_pages() compiles away on kernels with
THP compiled out while folio_nr_pages() is always present. Also convert
page_memcg_rcu() to folio_memcg_rcu().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
These are the folio equivalents of relock_page_lruvec_irq() and
relock_page_lruvec_irqsave(). Also convert page_matches_lruvec()
to folio_matches_lruvec().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
These are the folio equivalents of lock_page_lruvec() and similar
functions. Also convert lruvec_memcg_debug() to take a folio.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
This replaces mem_cgroup_page_lruvec(). All callers converted.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
This saves dozens of bytes of text by eliminating a lot of calls to
compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
These are the folio equivalents of lock_page_memcg() and
unlock_page_memcg().
lock_page_memcg() and unlock_page_memcg() have too many callers to be
easily replaced in a single patch, so reimplement them as wrappers for
now to be cleaned up later when enough callers have been converted to
use folios.
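The interim wrappers are trivial (sketch):

    void lock_page_memcg(struct page *page)
    {
            folio_memcg_lock(page_folio(page));
    }

    void unlock_page_memcg(struct page *page)
    {
            folio_memcg_unlock(page_folio(page));
    }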
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
The page was only being used for the memcg and to gather trace
information, so this is a simple conversion. The only caller of
mem_cgroup_track_foreign_dirty() will be converted to folios in a later
patch, so doing this now makes that patch simpler.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Convert all callers of mem_cgroup_migrate() to call page_folio() first.
They all look like they're using head pages already, but this proves it.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Convert all the callers to call page_folio(). Most of them were already
using a head page, but a few of them I can't prove were, so this may
actually fix a bug.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Use a folio rather than a page to ensure that we're only operating on
base or head pages, and not tail pages.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Convert all callers of mem_cgroup_charge() to call page_folio() on the
page they're currently passing in. Many of them will be converted to
use folios themselves soon.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
The memcg_data is only set on the head page, so enforce that by
typing it as a folio.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
memcg information is only stored in the head page, so the memcg
subsystem needs to assure that all accesses are to the head page.
The first step is converting page_memcg() to folio_memcg().
The callers of page_memcg() and PageMemcgKmem() are not yet ready to be
converted to use folios, so retain them as wrappers around folio_memcg()
and folio_memcg_kmem(). They will be converted in a later patch set.
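The retained wrappers are one-liners along these lines (sketch):

    static inline struct mem_cgroup *page_memcg(struct page *page)
    {
            return folio_memcg(page_folio(page));
    }

    static inline bool PageMemcgKmem(struct page *page)
    {
            return folio_memcg_kmem(page_folio(page));
    }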
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
memcg_check_events() only uses the page's nid, so call page_to_nid() in
the callers to make the interface easier to understand.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
Opencode this one-line function in its three callers.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
By using the node id in mem_cgroup_update_tree(), we can delete
soft_limit_tree_from_page() and mem_cgroup_page_nodeinfo(). Saves 42
bytes of kernel text on my config.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
The last use of 'page' was removed by commit 468c398233da ("mm:
memcontrol: switch to native NR_ANON_THPS counter"), so we can now remove
the parameter from the function.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
|
|
This function is the equivalent of page_mapped(). It is slightly
shorter as we do not need to handle the PageTail() case. Reimplement
page_mapped() as a wrapper around folio_mapped(). folio_mapped()
is 13 bytes smaller than page_mapped(), but the page_mapped() wrapper
is 30 bytes, for a net increase of 17 bytes of text.
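The 30-byte wrapper in question (a sketch):

    bool page_mapped(struct page *page)
    {
            return folio_mapped(page_folio(page));
    }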
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
|
|
end_page_private_2() becomes folio_end_private_2(),
wait_on_page_private_2() becomes folio_wait_private_2() and
wait_on_page_private_2_killable() becomes folio_wait_private_2_killable().
Adjust the fscache equivalents to call page_folio() before calling these
functions to avoid adding wrappers. Ends up costing 1 byte of text
in ceph & netfs, but the core shrinks by three calls to page_folio().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
|
|
Reinforce that page flags are actually in the head page by changing the
type from page to folio. Increases the size of cachefiles by two bytes,
but the kernel core is unchanged in size.
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Jeff Layton <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Reviewed-by: David Howells <[email protected]>
|
|
Convert wake_up_page_bit() to folio_wake_bit(). All callers have a folio,
so use it directly. Saves 66 bytes of text in end_page_private_2().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Jeff Layton <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
|
|
Rename wait_on_page_bit() to folio_wait_bit(). We must always wait on
the folio, otherwise we won't be woken up due to the tail page hashing
to a different bucket from the head page.
This commit shrinks the kernel by 770 bytes, mostly due to moving
the page waitqueue lookup into folio_wait_bit_common().
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Jeff Layton <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
|