2024-09-09mm: increment gen # before restarting traversalKinsey Ho1-10/+12
The generation number in struct mem_cgroup_reclaim_iter should be incremented on every round-trip. Currently, it is possible for a concurrent reclaimer to jump in at the end of the hierarchy, causing a traversal restart (resetting the iteration position) without incrementing the generation number. By resetting the position without incrementing the generation, it's possible for another ongoing mem_cgroup_iter() thread to walk the tree twice. Move the traversal restart such that the generation number is incremented before the restart. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kinsey Ho <[email protected]> Reviewed-by: T.J. Mercier <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Michal Koutný <[email protected]> Cc: Muchun Song <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Yosry Ahmed <[email protected]> Cc: Zefan Li <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
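As an illustration, here is a minimal sketch of the intended ordering, using a simplified stand-in for the real iterator in mm/memcontrol.c (names abbreviated; locking elided):

    struct reclaim_iter {
        struct mem_cgroup *position;
        unsigned int generation;
    };

    /* On hitting the end of the hierarchy, bump the generation *before*
     * resetting the position, so walkers that started in the old
     * generation observe the restart and terminate, rather than walking
     * the tree a second time. */
    static void iter_restart(struct reclaim_iter *iter)
    {
        iter->generation++;     /* previously done only after the reset */
        iter->position = NULL;  /* restart traversal from the root */
    }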
2024-09-09mm: don't hold css->refcnt during traversalKinsey Ho1-17/+1
To obtain the pointer to the next memcg position, mem_cgroup_iter() currently holds css->refcnt during memcg traversal only to put css->refcnt at the end of the routine. This isn't necessary as an rcu_read_lock is already held throughout the function. The use of the RCU read lock with css_next_descendant_pre() guarantees that sibling linkage is safe without holding a ref on the passed-in @css. Remove css->refcnt usage during traversal by leveraging RCU. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kinsey Ho <[email protected]> Reviewed-by: T.J. Mercier <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Michal Koutný <[email protected]> Cc: Muchun Song <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Yosry Ahmed <[email protected]> Cc: Zefan Li <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
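A sketch of the resulting pattern, assuming a simplified loop body; css_next_descendant_pre() is safe under rcu_read_lock(), so a reference is taken only on the css that is actually returned:

    rcu_read_lock();
    while ((css = css_next_descendant_pre(css, root_css))) {
        /* Sibling/descendant linkage is RCU-protected; no per-step
         * css refcount is needed while the read lock is held. */
        if (css_tryget(css)) {
            next = css;   /* pin only the position we hand back */
            break;
        }
    }
    rcu_read_unlock();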
2024-09-09cgroup: clarify css sibling linkage is protected by cgroup_mutex or RCUKinsey Ho2-8/+14
Patch series "Improve mem_cgroup_iter()", v4. Incremental cgroup iteration is being used again [1]. This patchset improves the reliability of mem_cgroup_iter(). It also improves simplicity and code readability. [1] https://lore.kernel.org/[email protected]/ This patch (of 5): Explicitly document that css sibling/descendant linkage is protected by cgroup_mutex or RCU. Also, document in css_next_descendant_pre() and similar functions that it isn't necessary to hold a ref on @pos. The following changes in this patchset rely on this clarification for simplification in memcg iteration code. Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Suggested-by: Yosry Ahmed <[email protected]> Reviewed-by: Michal Koutný <[email protected]> Signed-off-by: Kinsey Ho <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Muchun Song <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Zefan Li <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: T.J. Mercier <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09mm/page_alloc: fix build with CONFIG_UNACCEPTED_MEMORY=nAndrew Morton1-11/+5
When has_unaccepted_memory() is unused, it prevents kernel builds with clang, `make W=1` and CONFIG_WERROR=y: mm/page_alloc.c:7036:20: error: unused function 'has_unaccepted_memory' [-Werror,-Wunused-function] 7036 | static inline bool has_unaccepted_memory(void) | ^~~~~~~~~~~~~~~~~~~~~ Fix it by removing the CONFIG_UNACCEPTED_MEMORY=n stub. Link: https://lkml.kernel.org/r/[email protected] Reported-by: Andy Shevchenko <[email protected]> Closes: https://lkml.kernel.org/r/[email protected] Signed-off-by: Andrew Morton <[email protected]>
2024-09-09mm: migrate: remove unused includesKefeng Wang1-7/+0
random.h is not needed since commit 6c542ab75714 ("mm/demotion: build demotion targets based on explicit memory tiers"); all functions were moved into memory-tiers. nsproxy.h is not needed since commit 228ebcbe634a ("Uninline find_task_by_xxx set of functions"); nsproxy is gone and we only call find_task_by_vpid() now. hugetlb_cgroup.h is not needed since commit ab5ac90aecf5 ("mm, hugetlb: do not rely on overcommit limit during migration"); only move_hugetlb_state() is called now, and it belongs to hugetlb.h, which is already included. balloon_compaction.h: we now have the more general movable_operations for non-LRU movable page migration, so it can be dropped. memremap.h, userfaultfd_k.h and oom.h were introduced for zone device page migration, but all those functions have moved into migrate_device.c, so they are not needed anymore either. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kefeng Wang <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09mm: thp: simplify split_huge_pages_pid()Nanyong Sun1-6/+1
The helper find_get_task_by_vpid() can totally replace the task_struct find logic in split_huge_pages_pid(), so use it to simplify the code. Also delete the needless comment, as the helper function's name already explains what it is doing here. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Nanyong Sun <[email protected]> Cc: Kefeng Wang <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09mm: migrate: simplify find_mm_struct()Nanyong Sun1-7/+1
Use find_get_task_by_vpid() to replace the task_struct find logic in find_mm_struct(). Note that this patch moves the ptrace_may_access() call out of the rcu_read_lock() scope; this is OK because it does not actually need RCU protection: find_get_task_by_vpid() already gets the pid and task safely, so ptrace_may_access() can use the task safely, similar to what sched_core_share_pid() does. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Nanyong Sun <[email protected]> Cc: Kefeng Wang <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
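A sketch of the simplified lookup, assuming the standard helpers; find_get_task_by_vpid() returns the task with a reference already held, so ptrace_may_access() can safely run outside any rcu_read_lock() section:

    static struct mm_struct *get_target_mm(pid_t pid)
    {
        struct task_struct *task;
        struct mm_struct *mm;

        task = find_get_task_by_vpid(pid); /* task ref held on success */
        if (!task)
            return ERR_PTR(-ESRCH);

        /* No RCU section needed here: we hold a task reference. */
        if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) {
            put_task_struct(task);
            return ERR_PTR(-EPERM);
        }
        mm = get_task_mm(task);
        put_task_struct(task);
        return mm ? mm : ERR_PTR(-EINVAL);
    }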
2024-09-09mm/damon/tests/core-kunit: skip damon_test_nr_accesses_to_accesses_bp() if aggr_interval is zeroSeongJae Park2-1/+19
The aggregation interval of the test-purpose damon_attrs for damon_test_nr_accesses_to_accesses_bp() becomes zero on 32-bit architectures, since the int and long types have the same size there. As a result, the damon_nr_accesses_to_accesses_bp() call with the test data triggers a divide-by-zero exception. damon_nr_accesses_to_accesses_bp() shouldn't be called with such data, and the non-test code avoids that by checking for the case in damon_update_monitoring_results(). Skip the test code in that case, and add an explicit caution about the case in the comment for the test target function. Link: https://lkml.kernel.org/r/[email protected] Fixes: 5e06ad590096 ("mm/damon/core-test: test max_nr_accesses overflow caused divide-by-zero") Signed-off-by: SeongJae Park <[email protected]> Reported-by: Guenter Roeck <[email protected]> Closes: https://lore.kernel.org/[email protected] Cc: Brendan Higgins <[email protected]> Cc: David Gow <[email protected]> Cc: Guenter Roeck <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
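An illustrative sketch of the guard, simplified from the DAMON helpers (field names from struct damon_attrs):

    /* The nr_accesses -> basis points conversion divides by
     * max_nr_accesses = aggr_interval / sample_interval, which becomes
     * zero when aggr_interval < sample_interval (possible on 32-bit,
     * where int and long are both 32 bits wide). */
    unsigned int max_nr_accesses =
        attrs->aggr_interval / attrs->sample_interval;

    if (!max_nr_accesses)
        return; /* skip the test: the conversion would divide by zero */
    accesses_bp = nr_accesses * 10000 / max_nr_accesses;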
2024-09-09uprobes: use vm_special_mapping close() functionalitySven Schnelle3-21/+17
The following KASAN splat was shown: [ 44.505448] ================================================================== [ 44.505455] BUG: KASAN: slab-use-after-free in special_mapping_close+0x9c/0xc8 [ 44.505471] Read of size 8 at addr 00000000868dac48 by task sh/1384 [ 44.505479] [ 44.505486] CPU: 51 UID: 0 PID: 1384 Comm: sh Not tainted 6.11.0-rc6-next-20240902-dirty #1496 [ 44.505503] Hardware name: IBM 3931 A01 704 (z/VM 7.3.0) [ 44.505508] Call Trace: [ 44.505511] [<000b0324d2f78080>] dump_stack_lvl+0xd0/0x108 [ 44.505521] [<000b0324d2f5435c>] print_address_description.constprop.0+0x34/0x2e0 [ 44.505529] [<000b0324d2f5464c>] print_report+0x44/0x138 [ 44.505536] [<000b0324d1383192>] kasan_report+0xc2/0x140 [ 44.505543] [<000b0324d2f52904>] special_mapping_close+0x9c/0xc8 [ 44.505550] [<000b0324d12c7978>] remove_vma+0x78/0x120 [ 44.505557] [<000b0324d128a2c6>] exit_mmap+0x326/0x750 [ 44.505563] [<000b0324d0ba655a>] __mmput+0x9a/0x370 [ 44.505570] [<000b0324d0bbfbe0>] exit_mm+0x240/0x340 [ 44.505575] [<000b0324d0bc0228>] do_exit+0x548/0xd70 [ 44.505580] [<000b0324d0bc1102>] do_group_exit+0x132/0x390 [ 44.505586] [<000b0324d0bc13b6>] __s390x_sys_exit_group+0x56/0x60 [ 44.505592] [<000b0324d0adcbd6>] do_syscall+0x2f6/0x430 [ 44.505599] [<000b0324d2f78434>] __do_syscall+0xa4/0x170 [ 44.505606] [<000b0324d2f9454c>] system_call+0x74/0x98 [ 44.505614] [ 44.505616] Allocated by task 1384: [ 44.505621] kasan_save_stack+0x40/0x70 [ 44.505630] kasan_save_track+0x28/0x40 [ 44.505636] __kasan_kmalloc+0xa0/0xc0 [ 44.505642] __create_xol_area+0xfa/0x410 [ 44.505648] get_xol_area+0xb0/0xf0 [ 44.505652] uprobe_notify_resume+0x27a/0x470 [ 44.505657] irqentry_exit_to_user_mode+0x15e/0x1d0 [ 44.505664] pgm_check_handler+0x122/0x170 [ 44.505670] [ 44.505672] Freed by task 1384: [ 44.505676] kasan_save_stack+0x40/0x70 [ 44.505682] kasan_save_track+0x28/0x40 [ 44.505687] kasan_save_free_info+0x4a/0x70 [ 44.505693] __kasan_slab_free+0x5a/0x70 [ 44.505698] kfree+0xe8/0x3f0 [ 44.505704] __mmput+0x20/0x370 [ 44.505709] exit_mm+0x240/0x340 [ 44.505713] do_exit+0x548/0xd70 [ 44.505718] do_group_exit+0x132/0x390 [ 44.505722] __s390x_sys_exit_group+0x56/0x60 [ 44.505727] do_syscall+0x2f6/0x430 [ 44.505732] __do_syscall+0xa4/0x170 [ 44.505738] system_call+0x74/0x98 The problem is that uprobe_clear_state() kfrees struct xol_area, which contains struct vm_special_mapping *xol_mapping. This one is passed to _install_special_mapping() in xol_add_vma(). __mmput() reads: static inline void __mmput(struct mm_struct *mm) { VM_BUG_ON(atomic_read(&mm->mm_users)); uprobe_clear_state(mm); exit_aio(mm); ksm_exit(mm); khugepaged_exit(mm); /* must run before exit_mmap */ exit_mmap(mm); ... } So uprobe_clear_state() at the beginning frees the memory area containing the vm_special_mapping data, but exit_mmap() uses this address later via vma->vm_private_data (which was set in _install_special_mapping()). Fix this by moving uprobe_clear_state() to uprobes.c and using it as the close() callback.
[[email protected]: remove unneeded condition] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Fixes: 223febc6e557 ("mm: add optional close() to struct vm_special_mapping") Signed-off-by: Sven Schnelle <[email protected]> Suggested-by: Linus Torvalds <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Masami Hiramatsu <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
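A sketch of the shape of the fix, assuming the close() hook from commit 223febc6e557 (handler body elided); freeing the xol_area from here ties its lifetime to VMA teardown in exit_mmap() instead of racing ahead of it:

    static void uprobe_xol_close(const struct vm_special_mapping *sm,
                                 struct vm_area_struct *vma)
    {
        /* free the xol_area here, when the VMA is removed, instead of
         * from uprobe_clear_state() before exit_mmap() runs */
    }

    static const struct vm_special_mapping xol_mapping = {
        .name  = "[uprobes]",
        .close = uprobe_xol_close,
    };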
2024-09-09mm: page_alloc: fix missed updates of PGFREE in free_unref_{page/folios}Yosry Ahmed1-6/+4
PGFREE is currently updated in two code paths: - __free_pages_ok(): for pages freed to the buddy allocator. - free_unref_page_commit(): for pages freed to the pcplists. Before commit df1acc856923 ("mm/page_alloc: avoid conflating IRQs disabled with zone->lock"), free_unref_page_commit() used to fall back to freeing isolated pages directly to the buddy allocator through free_one_page(). This was done _after_ updating PGFREE, so the counter was correctly updated. However, that commit moved the fallback logic to its callers (now called free_unref_page() and free_unref_folios()), so PGFREE was no longer updated in this fallback case. The code has since evolved, and there are now more cases in free_unref_page() and free_unref_folios() where we fall back to calling free_one_page() (e.g. !pcp_allowed_order(), pcp_spin_trylock() fails). These cases also miss updating PGFREE. To make sure PGFREE is updated in all cases where pages are freed to the buddy allocator, move the update down the stack to free_one_page(). This was noticed through code inspection, although it should be noticeable at runtime (at least with some workloads). Link: https://lkml.kernel.org/r/[email protected] Fixes: df1acc856923 ("mm/page_alloc: avoid conflating IRQs disabled with zone->lock") Signed-off-by: Yosry Ahmed <[email protected]> Cc: Brendan Jackman <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Vlastimil Babka <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
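A sketch of the resulting choke point, with a simplified free_one_page() (the real one also takes the zone lock and fpi flags):

    static void free_one_page(struct zone *zone, struct page *page,
                              unsigned long pfn, unsigned int order,
                              fpi_t fpi_flags)
    {
        /* ... acquire zone->lock, __free_one_page(), release lock ... */

        /* Count PGFREE here so every path that frees to the buddy
         * allocator, including the pcplist fallbacks, is covered. */
        __count_vm_events(PGFREE, 1 << order);
    }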
2024-09-09mm: care about shadow stack guard gap when getting an unmapped areaMark Brown1-0/+14
As covered in the commit log for c44357c2e76b ("x86/mm: care about shadow stack guard gap during placement"), our current mmap() implementation does not take care to ensure that a new mapping isn't placed with existing mappings inside its own guard gaps. This is particularly important for shadow stacks since if two shadow stacks end up getting placed adjacent to each other then they can overflow into each other, which weakens the protection offered by the feature. On x86 there is a custom arch_get_unmapped_area() which was updated by the above commit to cover this case by specifying a start_gap for allocations with VM_SHADOW_STACK. Both arm64 and RISC-V have equivalent features and use the generic implementation of arch_get_unmapped_area(), so let's make the equivalent change there so they also don't get shadow stack pages placed without guard pages. x86 uses a single-page guard; this is also sufficient for arm64, where we either do single word pops and pushes or unconstrained writes. Architectures which do not have this feature will define VM_SHADOW_STACK to VM_NONE and hence be unaffected. Link: https://lkml.kernel.org/r/20240904-mm-generic-shadow-stack-guard-v2-3-a46b8b6dc0ed@kernel.org Signed-off-by: Mark Brown <[email protected]> Suggested-by: Rick Edgecombe <[email protected]> Acked-by: Lorenzo Stoakes <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Dave Hansen <[email protected]> Cc: David S. Miller <[email protected]> Cc: Gerald Schaefer <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: James Bottomley <[email protected]> Cc: John Paul Adrian Glaubitz <[email protected]> Cc: Liam R. Howlett <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Naveen N Rao <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
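A sketch of the generic-side change, assuming the start_gap field the x86 work added to struct vm_unmapped_area_info:

    /* inside generic_get_unmapped_area(), before vm_unmapped_area() */
    struct vm_unmapped_area_info info = {};

    info.length = len;
    /* Keep a one-page guard in front of a new shadow stack so two
     * shadow stacks cannot be placed immediately adjacent. */
    if (vm_flags & VM_SHADOW_STACK)
        info.start_gap = PAGE_SIZE;

    return vm_unmapped_area(&info);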
2024-09-09mm: pass vm_flags to generic_get_unmapped_area()Mark Brown3-8/+10
In preparation for using vm_flags to ensure guard pages for shadow stacks, supply them as an argument to generic_get_unmapped_area(). The only user outside of the core code is the PowerPC book3s64 implementation, which is trivially wrapping the generic implementation in the radix_enabled() case. No functional changes. Link: https://lkml.kernel.org/r/20240904-mm-generic-shadow-stack-guard-v2-2-a46b8b6dc0ed@kernel.org Signed-off-by: Mark Brown <[email protected]> Acked-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Acked-by: Michael Ellerman <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Dave Hansen <[email protected]> Cc: David S. Miller <[email protected]> Cc: "Edgecombe, Rick P" <[email protected]> Cc: Gerald Schaefer <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: James Bottomley <[email protected]> Cc: John Paul Adrian Glaubitz <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Naveen N Rao <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09mm: make arch_get_unmapped_area() take vm_flags by defaultMark Brown18-81/+51
Patch series "mm: Care about shadow stack guard gap when getting an unmapped area", v2. As covered in the commit log for c44357c2e76b ("x86/mm: care about shadow stack guard gap during placement") our current mmap() implementation does not take care to ensure that a new mapping isn't placed with existing mappings inside it's own guard gaps. This is particularly important for shadow stacks since if two shadow stacks end up getting placed adjacent to each other then they can overflow into each other which weakens the protection offered by the feature. On x86 there is a custom arch_get_unmapped_area() which was updated by the above commit to cover this case by specifying a start_gap for allocations with VM_SHADOW_STACK. Both arm64 and RISC-V have equivalent features and use the generic implementation of arch_get_unmapped_area() so let's make the equivalent change there so they also don't get shadow stack pages placed without guard pages. The arm64 and RISC-V shadow stack implementations are currently on the list: https://lore.kernel.org/r/20240829-arm64-gcs-v12-0-42fec94743 https://lore.kernel.org/lkml/[email protected]/ Given the addition of the use of vm_flags in the generic implementation we also simplify the set of possibilities that have to be dealt with in the core code by making arch_get_unmapped_area() take vm_flags as standard. This is a bit invasive since the prototype change touches quite a few architectures but since the parameter is ignored the change is straightforward, the simplification for the generic code seems worth it. This patch (of 3): When we introduced arch_get_unmapped_area_vmflags() in 961148704acd ("mm: introduce arch_get_unmapped_area_vmflags()") we did so as part of properly supporting guard pages for shadow stacks on x86_64, which uses a custom arch_get_unmapped_area(). Equivalent features are also present on both arm64 and RISC-V, both of which use the generic implementation of arch_get_unmapped_area() and will require equivalent modification there. Rather than continue to deal with having two versions of the functions let's bite the bullet and have all implementations of arch_get_unmapped_area() take vm_flags as a parameter. The new parameter is currently ignored by all implementations other than x86. The only caller that doesn't have a vm_flags available is mm_get_unmapped_area(), as for the x86 implementation and the wrapper used on other architectures this is modified to supply no flags. No functional changes. Link: https://lkml.kernel.org/r/20240904-mm-generic-shadow-stack-guard-v2-0-a46b8b6dc0ed@kernel.org Link: https://lkml.kernel.org/r/20240904-mm-generic-shadow-stack-guard-v2-1-a46b8b6dc0ed@kernel.org Signed-off-by: Mark Brown <[email protected]> Acked-by: Lorenzo Stoakes <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Acked-by: Helge Deller <[email protected]> [parisc] Cc: Alexander Gordeev <[email protected]> Cc: Andreas Larsson <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Dave Hansen <[email protected]> Cc: David S. Miller <[email protected]> Cc: "Edgecombe, Rick P" <[email protected]> Cc: Gerald Schaefer <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: "H. 
Peter Anvin" <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: James Bottomley <[email protected]> Cc: John Paul Adrian Glaubitz <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Naveen N Rao <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09mm/damon/tests/vaddr-kunit: init maple tree without MT_FLAGS_LOCK_EXTERNSeongJae Park1-1/+1
damon_test_three_regions_in_vmas() initializes a maple tree with MM_MT_FLAGS. The flags contain MT_FLAGS_LOCK_EXTERN, which means mt_lock of the maple tree will not be used, and therefore the maple tree initialization code skips initialization of the mt_lock. However, __link_vmas(), which adds vmas for the test to the maple tree, uses the mt_lock. In other words, the uninitialized spinlock is used. The problem becomes clear when spinlock debugging is turned on, since it reports a spinlock bad magic bug. Fix the issue by excluding MT_FLAGS_LOCK_EXTERN from the maple tree initialization flags. Note that we don't use empty flags, to keep it similar to the usage of the mm maple tree and to be prepared for possible future changes, as suggested by Liam. Link: https://lkml.kernel.org/r/[email protected] Fixes: d0cf3dd47f0d ("damon: convert __damon_va_three_regions to use the VMA iterator") Signed-off-by: SeongJae Park <[email protected]> Reported-by: Guenter Roeck <[email protected]> Closes: https://lore.kernel.org/[email protected] Suggested-by: Liam R. Howlett <[email protected]> Tested-by: Guenter Roeck <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
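The fix amounts to spelling the flags without MT_FLAGS_LOCK_EXTERN (a sketch; with it dropped, mt_init_flags() sets up the internal mt_lock that __link_vmas() takes):

    /* before (sketch): MM_MT_FLAGS includes MT_FLAGS_LOCK_EXTERN, so
     * mt_lock is left uninitialized, yet __link_vmas() spins on it */
    mt_init_flags(&mm->mm_mt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_USE_RCU);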
2024-09-09mm: Kconfig: fixup zsmalloc configurationSergey Senozhatsky1-1/+1
zsmalloc is not exclusive to zswap. Commit b3fbd58fcbb1 ("mm: Kconfig: simplify zswap configuration") made CONFIG_ZSMALLOC only visible when CONFIG_ZSWAP is selected, which makes it impossible to configure zsmalloc-specific features (stats, chain-size, etc.) via menuconfig on systems that use ZRAM but don't have ZSWAP enabled. Make the zsmalloc prompt visible when either ZRAM or ZSWAP is selected. Link: https://lkml.kernel.org/r/[email protected] Fixes: b3fbd58fcbb1 ("mm: Kconfig: simplify zswap configuration") Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Minchan Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09filemap: fix the last_index of mm_filemap_get_pagesTakaya Saeki1-1/+1
In commit b6273b55d885 ("filemap: add trace events for get_pages, map_pages, and fault"), mm_filemap_get_pages was added to trace page cache access. However, it tracks an extra page beyond the end of the accessed range. This patch fixes it by replacing last_index with last_index - 1. Link: https://lkml.kernel.org/r/[email protected] Fixes: b6273b55d885 ("filemap: add trace events for get_pages, map_pages, and fault") Signed-off-by: Takaya Saeki <[email protected]> Cc: Junichi Uekawa <[email protected]> Cc: Masami Hiramatsu <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Steven Rostedt (Google) <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
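The shape of the fix, sketched from the description (last_index is an exclusive bound at this call site):

    /* trace the last page actually accessed, not one past it */
    trace_mm_filemap_get_pages(mapping, index, last_index - 1);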
2024-09-09mm,tmpfs: consider end of file write in shmem_is_hugeRik van Riel7-39/+42
Take the end of a file write into consideration when deciding whether or not to use huge pages for tmpfs files when the tmpfs filesystem is mounted with huge=within_size. This allows large writes that append to the end of a file to automatically use large pages. Benchmark: 4MB sequential writes without fallocate to a 16GB tmpfs file with fio. The numbers without THP or with huge=always stay the same, but the performance with huge=within_size now matches that of huge=always.
huge          before      after
4kB pages     1560 MB/s   1560 MB/s
within_size   1560 MB/s   4720 MB/s
always        4720 MB/s   4720 MB/s
[[email protected]: coding-style cleanups] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Rik van Riel <[email protected]> Reviewed-by: Baolin Wang <[email protected]> Tested-by: Baolin Wang <[email protected]> Cc: Darrick J. Wong <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Vlastimil Babka <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
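A simplified sketch of the huge=within_size decision with the write end folded in (the exact helper and rounding in mm/shmem.c differ; this is illustrative only):

    case SHMEM_HUGE_WITHIN_SIZE:
        /* treat an appending write like an already-grown file */
        size = max(i_size_read(inode), (loff_t)write_end);
        if (round_up(size, HPAGE_PMD_SIZE) >> PAGE_SHIFT > index)
            return true;
        break;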
2024-09-09zram: support priority parameter in recompressionSergey Senozhatsky2-1/+16
The recompress device attribute supports the alg=NAME parameter so that we can specify one particular algorithm to perform recompression with. However, with algorithm params we can now have several otherwise identical secondary algorithms, each with its own parameter tuning (e.g. priority 1 configured to use a more aggressive level, and priority 2 configured to use a pre-trained dictionary). Support a priority=NUM parameter so that we can correctly determine which secondary algorithm we want to use. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
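A hypothetical sketch of the parsing side, following the attribute's existing key=value style (variable names invented for illustration):

    /* inside recompress_store()'s parameter loop */
    if (!strcmp(param, "priority")) {
        ret = kstrtoint(val, 10, &prio);
        /* priority 0 is the primary algorithm; secondary slots only */
        if (ret || prio < 1 || prio >= ZRAM_MAX_COMPS)
            return -EINVAL;
    }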
2024-09-09Documentation/zram: add documentation for algorithm parametersSergey Senozhatsky1-8/+39
Add a brief description of compression algorithms' parameters: compression level and pre-trained dictionary. [[email protected]: trivial fixup] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: add dictionary support to zstd backendSergey Senozhatsky1-26/+124
This adds support for pre-trained zstd dictionaries [1]. The dictionary is set up in params once (per-comp) and loaded into cctx and dctx by reference, so we don't allocate extra memory. TEST ==== *** zstd /sys/block/zram0/mm_stat 1750654976 504565092 514203648 0 514203648 1 0 34204 34204 *** zstd dict=/etc/zstd-dict-amd64 /sys/block/zram0/mm_stat 1750638592 465851259 475373568 0 475373568 1 0 34185 34185 *** zstd level=8 dict=/etc/zstd-dict-amd64 /sys/block/zram0/mm_stat 1750642688 430765171 439955456 0 439955456 1 0 34185 34185 [1] https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md#dictionary-builder Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: add dictionary support to lz4hcSergey Senozhatsky1-7/+68
Support the pre-trained dictionary param. Just like lz4, lz4hc doesn't mandate a specific dictionary format, and zstd --train can be used to train a dictionary for lz4, according to [1]. TEST ==== *** lz4hc /sys/block/zram0/mm_stat 1750638592 608954620 621031424 0 621031424 1 0 34288 34288 *** lz4hc dict=/etc/lz4-dict-amd64 /sys/block/zram0/mm_stat 1750671360 505068582 514994176 0 514994176 1 0 34278 34278 [1] https://github.com/lz4/lz4/issues/557 Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: add dictionary support to lz4Sergey Senozhatsky1-7/+67
Support the pre-trained dictionary param. lz4 doesn't mandate a specific dictionary format, and even zstd --train can be used to train a dictionary for lz4, according to [1]. TEST ==== *** lz4 /sys/block/zram0/mm_stat 1750654976 664188565 676864000 0 676864000 1 0 34288 34288 *** lz4 dict=/etc/lz4-dict-amd64 /sys/block/zram0/mm_stat 1750638592 619891141 632053760 0 632053760 1 0 34278 34278 *** lz4 level=5 dict=/etc/lz4-dict-amd64 /sys/block/zram0/mm_stat 1750638592 727174243 740810752 0 740810752 1 0 34437 34437 [1] https://github.com/lz4/lz4/issues/557 Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: move immutable comp params away from per-CPU contextSergey Senozhatsky9-98/+168
Immutable params never change once comp has been allocated and set up, so we don't need to store multiple copies of them in each per-CPU backend context. Move those to a per-comp zcomp_params and pass it to backend callbacks for request execution. Basically, this means parameters are shared between different contexts. Also introduce two new backend callbacks: setup_params() and release_params(). First, we need to validate params in a driver-specific way; second, a driver may want to allocate its own representation of the params, which is needed to execute requests. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
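A sketch of the resulting ops split; setup_params()/release_params() are named in this changelog, while the remaining members and types are assumed:

    struct zcomp_backend_ops_sketch {
        /* validate/pre-process immutable per-comp params, once */
        int  (*setup_params)(struct zcomp_params *params);
        void (*release_params)(struct zcomp_params *params);
        /* per-request execution receives the shared params */
        int  (*compress)(struct zcomp_params *params,
                         struct zcomp_ctx *ctx, struct zcomp_req *req);
    };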
2024-09-09zram: introduce zcomp_ctx structureSergey Senozhatsky9-135/+149
Keep run-time driver data (scratch buffers, etc.) in the zcomp_ctx structure. This structure is allocated per-CPU because drivers (backends) need to modify its content during request execution. We will split mutable and immutable driver data; this is a preparation patch. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: introduce zcomp_req structureSergey Senozhatsky9-72/+77
Encapsulate compression/decompression data in zcomp_req structure. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
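A sketch of what such a request wrapper plausibly carries (field names assumed):

    struct zcomp_req_sketch {
        const unsigned char *src; /* input buffer */
        size_t src_len;
        unsigned char *dst;       /* output buffer */
        size_t dst_len;           /* in: capacity, out: bytes produced */
    };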
2024-09-09zram: add support for dict comp configSergey Senozhatsky1-9/+36
Handle the dict=path algorithm param so that we can read a pre-trained compression algorithm dictionary, which we then pass to the backend configuration. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
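Usage composes with the algorithm_params attribute documented in the entry below; for example (dictionary path illustrative, matching the TEST sections above):

    echo "algo=zstd dict=/etc/zstd-dict-amd64" > /sys/block/zram0/algorithm_params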
2024-09-09zram: introduce algorithm_params device attributeSergey Senozhatsky3-0/+76
This attribute is used to set up compression algorithms' parameters, so we can tweak algorithms' characteristics. At this point only 'level' is supported (to be extended in the future). Each call sets up parameters for one particular algorithm, which should be specified either by the algorithm's priority or algo name. This is expected to be called after the corresponding algorithm is selected via comp_algorithm or recomp_algorithm. echo "priority=0 level=1" > /sys/block/zram0/algorithm_params or echo "algo=zstd level=1" > /sys/block/zram0/algorithm_params Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: recalculate zstd compression params onceSergey Senozhatsky1-2/+3
zstd compression params depend on the level, but are constant for a given instance of the zstd compression backend. Calculate them once (during ctx creation). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: introduce zcomp_params structureSergey Senozhatsky11-24/+67
We will store per-algorithm parameters there (compression level, dictionary, dictionary size, etc.). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: check that backends array has at least one backendSergey Senozhatsky2-6/+21
Make sure that the backends array has something apart from the sentinel NULL value. We also select LZO_BACKEND if no backends were selected. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: add 842 compression backend supportSergey Senozhatsky5-0/+94
Add s/w 842 compression support. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: add zlib compression backend supportSergey Senozhatsky5-0/+158
Add s/w zlib (inflate/deflate) compression. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: pass estimated src size hint to zstdSergey Senozhatsky1-2/+2
zram works with PAGE_SIZE buffers, so we always know the exact size of the source buffer and hence can pass estimated_src_size to zstd_get_params(). On x86_64, for example, this hint reduces the size of the work memory buffer from 1303520 bytes down to 90080 bytes. Given that compression streams are per-CPU, that's quite a memory saving. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
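The hint is essentially a one-argument change (sketch; zstd_get_params() and the workspace-bound helper are the in-kernel zstd API from linux/zstd.h):

    /* zram always compresses PAGE_SIZE buffers, so say so up front */
    zstd_parameters params = zstd_get_params(level, PAGE_SIZE);
    size_t wksp_size = zstd_cctx_workspace_bound(&params.cParams);
    /* per the changelog: ~1303520 bytes -> ~90080 bytes per stream */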
2024-09-09zram: add zstd compression backend supportSergey Senozhatsky5-0/+123
Add s/w zstd compression. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: add lz4hc compression backend supportSergey Senozhatsky5-2/+100
Add s/w lz4hc compression support. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: add lz4 compression backend supportSergey Senozhatsky5-0/+98
Add s/w lz4 compression support. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: add lzo and lzorle compression backends supportSergey Senozhatsky7-0/+140
Add s/w lzo/lzorle compression support. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09zram: introduce custom comp backends APISergey Senozhatsky5-160/+78
Moving to a custom backends implementation gives us the ability to have our own minimalistic and extendable API, and makes algorithm tunings possible. The list of compression backends is empty at this point; we will add backends in the followup patches. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Nick Terrell <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09lib: zstd: fix null-deref in ZSTD_createCDict_advanced2()Sergey Senozhatsky1-0/+2
ZSTD_createCDict_advanced2() must ensure that ZSTD_createCDict_advanced_internal() has successfully allocated the cdict. customMalloc() may be called under a low-memory condition and may be unable to allocate the workspace for the cdict. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Nick Terrell <[email protected]> Cc: Minchan Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
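The shape of the fix, sketched in upstream-zstd style (exact argument list elided):

    cdict = ZSTD_createCDict_advanced_internal(/* ... */);
    if (!cdict)      /* customMalloc() can fail under memory pressure */
        return NULL; /* bail out instead of dereferencing NULL */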
2024-09-09lib: lz4hc: export LZ4_resetStreamHC symbolSergey Senozhatsky1-0/+1
This symbol is needed to enable lz4hc dictionary support. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Nick Terrell <[email protected]> Cc: Minchan Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09lib: zstd: export API needed for dictionary supportSergey Senozhatsky3-0/+252
Patch series "zram: introduce custom comp backends API", v7. This series introduces support for run-time compression algorithms tuning, so users, for instance, can adjust compression/acceleration levels and provide pre-trained compression/decompression dictionaries which certain algorithms support. At this point we stop supporting (old/deprecated) comp API. We may add new acomp API support in the future, but before that zram needs to undergo some major rework (we are not ready for async compression). Some benchmarks for reference (look at column #2) *** init zstd /sys/block/zram0/mm_stat 1750659072 504622188 514355200 0 514355200 1 0 34204 34204 *** init zstd dict=/home/ss/zstd-dict-amd64 /sys/block/zram0/mm_stat 1750650880 465908890 475398144 0 475398144 1 0 34185 34185 *** init zstd level=8 dict=/home/ss/zstd-dict-amd64 /sys/block/zram0/mm_stat 1750654976 430803319 439873536 0 439873536 1 0 34185 34185 *** init lz4 /sys/block/zram0/mm_stat 1750646784 664266564 677060608 0 677060608 1 0 34288 34288 *** init lz4 dict=/home/ss/lz4-dict-amd64 /sys/block/zram0/mm_stat 1750650880 619990300 632102912 0 632102912 1 0 34278 34278 *** init lz4hc /sys/block/zram0/mm_stat 1750630400 609023822 621232128 0 621232128 1 0 34288 34288 *** init lz4hc dict=/home/ss/lz4-dict-amd64 /sys/block/zram0/mm_stat 1750659072 505133172 515231744 0 515231744 1 0 34278 34278 Recompress init zram zstd (prio=0), zstd level=5 (prio 1), zstd with dict (prio 2) *** zstd /sys/block/zram0/mm_stat 1750982656 504630584 514269184 0 514269184 1 0 34204 34204 *** idle recompress priority=1 (zstd level=5) /sys/block/zram0/mm_stat 1750982656 488645601 525438976 0 514269184 1 0 34204 34204 *** idle recompress priority=2 (zstd dict) /sys/block/zram0/mm_stat 1750982656 460869640 517914624 0 514269184 1 0 34185 34204 This patch (of 24): We need to export a number of API functions that enable advanced zstd usage - C/D dictionaries, dictionaries sharing between contexts, etc. Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sergey Senozhatsky <[email protected]> Cc: Nick Terrell <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09maple_tree: fix comment typo on ma_flag of allocation treeWei Yang1-3/+3
The maple tree flag for an allocation tree is MT_FLAGS_ALLOC_RANGE. Just correct the comment. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Wei Yang <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09mm: fix folio_alloc_noprof()Kent Overstreet1-1/+1
folio_alloc_noprof() wasn't calling the _noprof version, causing allocations to be accounted here instead of to the caller. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kent Overstreet <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
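The fix is plausibly a one-liner of this shape (sketch; staying inside the _noprof chain keeps the allocation attributed to the outermost caller):

    static inline struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
    {
        /* must call the _noprof variant, not the accounted
         * __folio_alloc_node(), or the allocation is charged here */
        return __folio_alloc_node_noprof(gfp, order, numa_node_id());
    }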
2024-09-09maple_tree: cleanup function descriptionsWei Yang1-58/+47
This patch tries to clean up some function descriptions: * function name mismatches * parameter name mismatches * parameters should all end with ':' * missing '*' prefix when a parameter is a pointer Some parameter descriptions are still missing; I didn't add them since I am not sure of their exact meaning. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Wei Yang <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09mm: page_alloc: simplify page del and expandHuan Yang1-10/+25
When a page is deleted from the buddy list and needs to be expanded, free_pages is accounted in the zone's migratetype. The current way is to subtract the page count of the current order when deleting, and then add pages back when expanding. This is unnecessary: when staying within the same migratetype, we can directly record the difference between the removed high-order page and what expand() added back, and subtract that once. This patch merges the accounting so that free_pages is updated only once, after del and expand are done. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Huan Yang <[email protected]> Reviewed-by: Vlastimil Babka <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-09-09selftests/mm: relax test to fail after 100 migration failuresDev Jain1-6/+11
It was recently observed at [1] that during the folio unmapping stage of migration, when the PTEs are cleared, a racing thread faulting on that folio may increase the refcount of the folio, sleep on the folio lock (the migration path has the lock), and migration ultimately fails when asserting the actual refcount against the expected one. As a result, the migration selftest fails on shared-anon mappings. The above reinforces the fact that migration is a best-effort service; therefore, it is wrong to fail the test for just a single failure; hence, fail the test after 100 consecutive failures (where 100 is still a subjective choice). Note that this has no effect on the execution time of the test since that is controlled by a timeout. [1] https://lore.kernel.org/all/[email protected]/ Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Dev Jain <[email protected]> Suggested-by: David Hildenbrand <[email protected]> Reviewed-by: Ryan Roberts <[email protected]> Tested-by: Ryan Roberts <[email protected]> Cc: Alistair Popple <[email protected]> Cc: Aneesh Kumar K.V <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Baolin Wang <[email protected]> Cc: Barry Song <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Gavin Shan <[email protected]> Cc: "Huang, Ying" <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Kirill A. Shutemov <[email protected]> Cc: Lance Yang <[email protected]> Cc: Mark Brown <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Oscar Salvador <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yang Shi <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
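A sketch of the relaxed check, with the loop and helper names invented for illustration:

    #define MAX_MIGRATE_FAILURES 100 /* subjective, per the changelog */

    int consecutive_failures = 0;

    while (!timeout_expired()) {          /* overall runtime unchanged */
        if (try_migrate_folio() == 0) {
            consecutive_failures = 0;     /* best effort: reset on success */
        } else if (++consecutive_failures >= MAX_MIGRATE_FAILURES) {
            fail_test("migration failed %d times in a row",
                      consecutive_failures);
            break;
        }
    }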
2024-09-09mm/vmalloc.c: make use of the helper macro LIST_HEAD()Hongbo Li1-8/+3
A list_head can be initialized at definition time with LIST_HEAD() instead of calling INIT_LIST_HEAD(), which simplifies the code here. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Hongbo Li <[email protected]> Reviewed-by: Uladzislau Rezki (Sony) <[email protected]> Cc: Christoph Hellwig <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
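For illustration, the pattern being replaced:

    /* before: separate definition plus run-time init */
    struct list_head purge_list;
    INIT_LIST_HEAD(&purge_list);

    /* after: initialized at definition */
    LIST_HEAD(purge_list);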
2024-09-09mm: add sysfs entry to disable splitting underused THPsUsama Arif2-0/+36
If disabled, THPs faulted in or collapsed will not be added to _deferred_list, and therefore won't be considered for splitting under memory pressure if underused. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Usama Arif <[email protected]> Cc: Alexander Zhu <[email protected]> Cc: Barry Song <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Domenico Cerasuolo <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Kairui Song <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Nico Pache <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Ryan Roberts <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Shuang Zhai <[email protected]> Cc: Shuang Zhai <[email protected]> Cc: Yu Zhao <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
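Assuming the knob this patch adds is named shrink_underused under the THP sysfs directory (name taken on trust from this series), disabling it would look like:

    echo 0 > /sys/kernel/mm/transparent_hugepage/shrink_underused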
2024-09-09mm: split underused THPsUsama Arif6-3/+69
This is an attempt to mitigate the issue of running out of memory when THP is always enabled. During runtime whenever a THP is being faulted in (__do_huge_pmd_anonymous_page) or collapsed by khugepaged (collapse_huge_page), the THP is added to _deferred_list. Whenever memory reclaim happens in Linux, the kernel runs the deferred_split shrinker which goes through the _deferred_list. If the folio was partially mapped, the shrinker attempts to split it. If the folio is not partially mapped, the shrinker checks if the THP was underused, i.e. how many of the base 4K pages of the entire THP were zero-filled. If this number goes above a certain threshold (decided by /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none), the shrinker will attempt to split that THP. Then at remap time, the pages that were zero-filled are mapped to the shared zeropage, hence saving memory. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Usama Arif <[email protected]> Suggested-by: Rik van Riel <[email protected]> Co-authored-by: Johannes Weiner <[email protected]> Cc: Alexander Zhu <[email protected]> Cc: Barry Song <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Domenico Cerasuolo <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Kairui Song <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Nico Pache <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Ryan Roberts <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Shuang Zhai <[email protected]> Cc: Yu Zhao <[email protected]> Cc: Shuang Zhai <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
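A sketch of the underused check (simplified; the threshold plumbing and the exact helper live in mm/huge_memory.c):

    static bool thp_underused(struct folio *folio)
    {
        int num_zero_pages = 0;
        long i;

        for (i = 0; i < folio_nr_pages(folio); i++) {
            void *kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
            bool zero_filled = !memchr_inv(kaddr, 0, PAGE_SIZE);

            kunmap_local(kaddr);
            /* split once more than max_ptes_none base pages are zero */
            if (zero_filled && ++num_zero_pages > khugepaged_max_ptes_none)
                return true;
        }
        return false;
    }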
2024-09-09mm: introduce a pageflag for partially mapped foliosUsama Arif8-21/+56
Currently folio->_deferred_list is used to keep track of partially mapped folios that are going to be split under memory pressure. In the next patch, all THPs that are faulted in and collapsed by khugepaged are also going to be tracked using _deferred_list. This patch introduces a pageflag to be able to distinguish between partially mapped folios and others in the deferred_list at split time in deferred_split_scan. It's needed because __folio_remove_rmap decrements _mapcount, _large_mapcount and _entire_mapcount, hence it won't be possible to distinguish between partially mapped folios and others in deferred_split_scan. Even though it introduces an extra flag to track if the folio is partially mapped, there is no functional change intended with this patch and the flag is not useful in this patch itself; it will become useful in the next patch, when _deferred_list has non-partially-mapped folios. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Usama Arif <[email protected]> Cc: Alexander Zhu <[email protected]> Cc: Barry Song <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Domenico Cerasuolo <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Kairui Song <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Nico Pache <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Ryan Roberts <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Shuang Zhai <[email protected]> Cc: Yu Zhao <[email protected]> Cc: Shuang Zhai <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]>