Age | Commit message | Author | Files | Lines
2021-02-24 | mm/list_lru.c: remove kvfree_rcu_local() | Shakeel Butt | 1 | -10/+2
The list_lru file used to have local kvfree_rcu() which was renamed by commit e0feed08ab41 ("mm/list_lru.c: Rename kvfree_rcu() to local variant") to introduce the globally visible kvfree_rcu(). Now we have global kvfree_rcu(), so remove the local kvfree_rcu_local() and just use the global one. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Shakeel Butt <[email protected]> Reviewed-by: Uladzislau Rezki <[email protected]> Reviewed-by: Kirill Tkhai <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
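A minimal sketch of what this kind of conversion looks like; the structure and field names below are illustrative, not the exact ones from mm/list_lru.c:

    /* Before: a local RCU callback that only frees the object. */
    static void kvfree_rcu_local(struct rcu_head *head)
    {
    	struct list_lru_memcg *mlru = container_of(head, struct list_lru_memcg, rcu);

    	kvfree(mlru);
    }
    ...
    call_rcu(&old->rcu, kvfree_rcu_local);

    /* After: the globally visible helper does the same thing. */
    kvfree_rcu(old, rcu);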
2021-02-24 | mm: memcontrol: replace the loop with a list_for_each_entry() | Muchun Song | 1 | -27/+8
The special rule for the list walk has been gone since commit a9d5adeeb4b2 ("mm/memcontrol: allow to uncharge page without using page->lru field"). So remove the strange comment and replace the loop with a list_for_each_entry(). There is only one caller of uncharge_list(), so just fold it into mem_cgroup_uncharge_list() and remove it. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Muchun Song <[email protected]> Acked-by: Johannes Weiner <[email protected]> Acked-by: Roman Gushchin <[email protected]> Reviewed-by: Miaohe Lin <[email protected]> Acked-by: Michal Hocko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
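The shape of such a conversion, sketched with illustrative names (not the exact code from memcontrol.c):

    /* Before: open-coded walk that manually follows page->lru. */
    next = page_list->next;
    do {
    	page = list_entry(next, struct page, lru);
    	next = page->lru.next;
    	uncharge_page(page, &ug);
    } while (next != page_list);

    /* After: the plain list iterator, since page->lru is no longer reused mid-walk. */
    list_for_each_entry(page, page_list, lru)
    	uncharge_page(page, &ug);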
2021-02-24 | mm/memcontrol: remove redundant NULL check | Yang Li | 1 | -2/+1
Fix the below warning reported by coccicheck: mm/memcontrol.c:451:3-9: WARNING: NULL check before some freeing functions is not needed. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Yang Li <[email protected]> Reported-by: Abaci Robot <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
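The coccicheck warning refers to the well-known pattern below (a generic illustration, not the exact lines from memcontrol.c); kfree(), kvfree() and friends already tolerate a NULL pointer, so the check is redundant:

    /* Before */
    if (ptr)
    	kvfree(ptr);

    /* After */
    kvfree(ptr);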
2021-02-24 | mm: page_counter: re-layout structure to reduce false sharing | Feng Tang | 1 | -1/+8
When checking a memory cgroup related performance regression [1], we found from the perf c2c profiling data high false sharing for accessing 'usage' and 'parent'. On a 64-bit system, 'usage' and 'parent' are close to each other and easily end up in one cacheline (for a cacheline size of 64+ bytes). 'usage' is usually written, while 'parent' is usually read, given the cgroup's hierarchical counting nature. So move 'parent' to the end of the structure to make sure they are in different cache lines. Following are some performance data with the patch, against v5.11-rc1. In the data, A is a platform with 2 sockets 48C/96T, B is a platform with 4 sockets 72C/144T, a %stddev is shown only when it is bigger than 2%, and P100/P50 means the number of test tasks equals 100%/50% of nr_cpu.

    will-it-scale/malloc1
    ---------------------
                 v5.11-rc1           v5.11-rc1+patch
    A-P100       15782 ± 2%   -0.1%   15765 ± 3%   will-it-scale.per_process_ops
    A-P50        21511        +8.9%   23432        will-it-scale.per_process_ops
    B-P100        9155        +2.2%    9357        will-it-scale.per_process_ops
    B-P50        10967        +7.1%   11751 ± 2%   will-it-scale.per_process_ops

    will-it-scale/pagefault2
    ------------------------
                 v5.11-rc1           v5.11-rc1+patch
    A-P100       79028        +3.0%   81411        will-it-scale.per_process_ops
    A-P50       183960 ± 2%   +4.4%  192078 ± 2%   will-it-scale.per_process_ops
    B-P100       85966        +9.9%   94467 ± 3%   will-it-scale.per_process_ops
    B-P50       198195        +9.8%  217526        will-it-scale.per_process_ops

    fio (4k/1M is block size)
    -------------------------
                 v5.11-rc1           v5.11-rc1+patch
    A-P50-r-4k   16881 ± 2%   +1.2%   17081 ± 2%   fio.read_bw_MBps
    A-P50-w-4k    3931        +4.5%    4111 ± 2%   fio.write_bw_MBps
    A-P50-r-1M   15178        -0.2%   15154        fio.read_bw_MBps
    A-P50-w-1M    3924        +0.1%    3929        fio.write_bw_MBps

[1] https://lore.kernel.org/lkml/20201102091543.GM31092@shao2-debian/ Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Feng Tang <[email protected]> Reviewed-by: Roman Gushchin <[email protected]> Reviewed-by: Shakeel Butt <[email protected]> Acked-by: Johannes Weiner <[email protected]> Acked-by: Michal Hocko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
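A sketch of the idea; struct page_counter has more fields than shown, and this only illustrates separating the hot, frequently written counter from the read-mostly pointer:

    struct page_counter {
    	atomic_long_t usage;		/* written on every charge/uncharge */
    	unsigned long min;
    	unsigned long low;
    	unsigned long high;
    	unsigned long max;
    	/* ... other mostly-read fields ... */

    	/*
    	 * Keep the read-mostly 'parent' away from 'usage' so the two do
    	 * not share a cache line and ping-pong between CPUs.
    	 */
    	struct page_counter *parent;
    };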
2021-02-24 | mm: kmem: make __memcg_kmem_(un)charge static | Roman Gushchin | 2 | -6/+8
I've noticed that __memcg_kmem_charge() and __memcg_kmem_uncharge() are not used anywhere except memcontrol.c. Yet they are not declared as static and are declared in memcontrol.h. This patch makes them static. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Roman Gushchin <[email protected]> Reviewed-by: Shakeel Butt <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm: memcg: add swapcache stat for memcg v2 | Shakeel Butt | 8 | -27/+32
This patch adds a swapcache stat for cgroup v2. The swapcache represents the memory that is accounted against both the memory and the swap limit of the cgroup. The main motivation behind exposing the swapcache stat is to enable users to gracefully migrate from cgroup v1's memsw counter to cgroup v2's memory and swap counters. Cgroup v1's memsw limit allows users to limit the memory+swap usage of a workload, but without control over the exact proportion of memory and swap. Cgroup v2 provides separate limits for memory and swap, which enables more control over the exact usage of memory and swap individually for the workload. With a few small subtleties, v1's memsw limit can be replaced with the sum of v2's memory and swap limits. However, the alternative for memsw usage is not yet available in cgroup v2. Exposing a per-cgroup swapcache stat enables that alternative: adding the memory usage and swap usage and subtracting the swapcache approximates the memsw usage. This will help in the transparent migration of workloads that depend on memsw usage and limits to v2's memory and swap counters. The reasons these applications are still interested in this approximate memsw usage are: (1) these applications are not really interested in two separate memory and swap usage metrics; a single usage metric is simpler for them to use and reason about. (2) The memsw usage metric hides the underlying system's swap setup from the applications. Applications with multiple instances running in a datacenter with heterogeneous systems (some have swap and some don't) will keep seeing a consistent view of their usage. [[email protected]: fix CONFIG_SWAP=n build] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Shakeel Butt <[email protected]> Acked-by: Michal Hocko <[email protected]> Reviewed-by: Roman Gushchin <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Muchun Song <[email protected]> Cc: Yang Shi <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
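A tiny illustration of the approximation described above, as a hypothetical helper rather than kernel code: swapcache pages are charged against both counters, so they are subtracted once.

    /* memsw_estimate: memory + swap, minus the pages counted twice (swapcache). */
    static unsigned long memsw_estimate(unsigned long memory_current,
    				    unsigned long swap_current,
    				    unsigned long swapcache)
    {
    	return memory_current + swap_current - swapcache;
    }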
2021-02-24 | mm/memcg: remove rcu locking for lock_page_lruvec function series | Alex Shi | 1 | -6/+0
lock_page_lruvec() and its variants used rcu_read_lock() with the intention of safeguarding against the mem_cgroup being destroyed concurrently; but so long as they are called under the specified conditions (as they are), there is no way for the page's mem_cgroup to be destroyed. Delete the unnecessary rcu_read_lock() and _unlock(). Hugh Dickins polished the commit log. Thanks a lot! Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Alex Shi <[email protected]> Acked-by: Hugh Dickins <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/memcg: revise the using condition of lock_page_lruvec function series | Alex Shi | 1 | -4/+5
lock_page_lruvec() and its variants are safe to use under the same conditions as commit_charge(): add lock_page_memcg() to the comment. Polished with Hugh Dickins' suggestions, thanks! Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Alex Shi <[email protected]> Acked-by: Hugh Dickins <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm: memcontrol: make the slab calculation consistent | Muchun Song | 1 | -39/+66
Although the ratio of the slab entry is one, we should still read the ratio from the related memory_stats entry instead of hard-coding it. And the local variable size already holds the value of slab_unreclaimable, so we do not need to read it again. To do this we would need some code like below:

    	if (unlikely(memory_stats[i].idx == NR_SLAB_UNRECLAIMABLE_B)) {
    -		size = memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B) +
    -		       memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B);
    +		VM_BUG_ON(i < 1);
    +		VM_BUG_ON(memory_stats[i - 1].idx != NR_SLAB_RECLAIMABLE_B);
    +		size += memcg_page_state(memcg, memory_stats[i - 1].idx) *
    +			memory_stats[i - 1].ratio;

It requires a series of VM_BUG_ONs or comments to ensure these two items are actually adjacent and in the right order. So it would probably be easier to implement this using a wrapper that has a big switch() for unit conversion. More details of this discussion can be found at: https://lore.kernel.org/patchwork/patch/1348611/ This fixes the ratio inconsistency and removes the need for the ordering guarantee. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Muchun Song <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Feng Tang <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Michal Hocko <[email protected]> Cc: NeilBrown <[email protected]> Cc: Pankaj Gupta <[email protected]> Cc: Rafael. J. Wysocki <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Sami Tolvanen <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
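A sketch of the wrapper approach mentioned above: a small switch() decides the unit for each item, so the output code no longer depends on the order of the memory_stats[] array. The function names and the item list here are illustrative, not the exact implementation.

    /* Illustrative only: return the factor that converts an item to bytes. */
    static int memcg_page_state_unit(int item)
    {
    	switch (item) {
    	case NR_SLAB_RECLAIMABLE_B:
    	case NR_SLAB_UNRECLAIMABLE_B:
    		return 1;		/* already counted in bytes */
    	default:
    		return PAGE_SIZE;	/* counted in pages */
    	}
    }

    static unsigned long memcg_page_state_output(struct mem_cgroup *memcg, int item)
    {
    	return memcg_page_state(memcg, item) * memcg_page_state_unit(item);
    }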
2021-02-24 | mm: memcontrol: convert NR_FILE_PMDMAPPED account to pages | Muchun Song | 4 | -6/+8
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors this can amount to GBs of memory. For example, for a 96-CPU system, the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total. The THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the THP vmstat counters more accurate. So we convert the NR_FILE_PMDMAPPED account to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform. In the end, the vmstat counters are expressed in pages, kB and bytes: the B/KB suffix tells us that the unit is bytes or kB, and the rest, without a suffix, are pages. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Muchun Song <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Feng Tang <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Michal Hocko <[email protected]> Cc: NeilBrown <[email protected]> Cc: Pankaj Gupta <[email protected]> Cc: Rafael. J. Wysocki <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Sami Tolvanen <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
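The 23.4375 GB figure quoted in this series works out as follows, assuming a 4 KiB base page size so that one THP is 512 pages = 2 MiB, and a per-cpu drift of up to the threshold on every CPU:

    125 (per-cpu threshold) x 96 (CPUs) x 2 MiB (one THP) = 24000 MiB = 23.4375 GiB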
2021-02-24 | mm: memcontrol: convert NR_SHMEM_PMDMAPPED account to pages | Muchun Song | 5 | -10/+15
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors this can amount to GBs of memory. For example, for a 96-CPU system, the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total. The THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the THP vmstat counters more accurate. So we convert the NR_SHMEM_PMDMAPPED account to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform. In the end, the vmstat counters are expressed in pages, kB and bytes: the B/KB suffix tells us that the unit is bytes or kB, and the rest, without a suffix, are pages. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Muchun Song <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Feng Tang <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Michal Hocko <[email protected]> Cc: NeilBrown <[email protected]> Cc: Pankaj Gupta <[email protected]> Cc: Rafael. J. Wysocki <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Sami Tolvanen <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm: memcontrol: convert NR_SHMEM_THPS account to pages | Muchun Song | 9 | -33/+12
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors this can amount to GBs of memory. For example, for a 96-CPU system, the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total. The THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the THP vmstat counters more accurate. So we convert the NR_SHMEM_THPS account to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform. In the end, the vmstat counters are expressed in pages, kB and bytes: the B/KB suffix tells us that the unit is bytes or kB, and the rest, without a suffix, are pages. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Muchun Song <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Feng Tang <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Michal Hocko <[email protected]> Cc: NeilBrown <[email protected]> Cc: Pankaj Gupta <[email protected]> Cc: Rafael. J. Wysocki <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Sami Tolvanen <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm: memcontrol: convert NR_FILE_THPS account to pages | Muchun Song | 7 | -10/+14
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors this can amount to GBs of memory. For example, for a 96-CPU system, the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total. The THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the THP vmstat counters more accurate. So we convert the NR_FILE_THPS account to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform. In the end, the vmstat counters are expressed in pages, kB and bytes: the B/KB suffix tells us that the unit is bytes or kB, and the rest, without a suffix, are pages. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Muchun Song <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Feng Tang <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Michal Hocko <[email protected]> Cc: NeilBrown <[email protected]> Cc: Pankaj Gupta <[email protected]> Cc: Rafael. J. Wysocki <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Sami Tolvanen <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm: memcontrol: convert NR_ANON_THPS account to pages | Muchun Song | 8 | -28/+44
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors this can amount to GBs of memory. For example, for a 96-CPU system, the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total. The THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the THP vmstat counters more accurate. So we convert the NR_ANON_THPS account to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform. In the end, the vmstat counters are expressed in pages, kB and bytes: the B/KB suffix tells us that the unit is bytes or kB, and the rest, without a suffix, are pages. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Muchun Song <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Rafael. J. Wysocki <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Vladimir Davydov <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Sami Tolvanen <[email protected]> Cc: Feng Tang <[email protected]> Cc: NeilBrown <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Pankaj Gupta <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm: memcontrol: fix NR_ANON_THPS accounting in charge moving | Muchun Song | 1 | -4/+2
Patch series "Convert all THP vmstat counters to pages", v6. This patch series aims to convert all THP vmstat counters to pages. The units of the vmstat counters are mixed: some are pages, some are bytes, some are HPAGE_PMD_NR, and some are KiB. When we want to expose these vmstat counters to userspace, we have to know which unit each counter uses. When the unit is bytes or kB, both are clearly distinguishable by the B/KB suffix. But for the THP vmstat counters, we may make mistakes. For example, below are some bug fixes for the THP vmstat counters: - 7de2e9f195b9 ("mm: memcontrol: correct the NR_ANON_THPS counter of hierarchical memcg") - The first commit in this series ("fix NR_ANON_THPS accounting in charge moving") This patch series makes the code clearer and puts all the THP vmstat counters in units of pages. In the end, the vmstat counters are expressed in pages, kB and bytes: the B/KB suffix tells us that the unit is bytes or kB, and the rest, without a suffix, are pages. In this series, I changed the unit of the following vmstat counters from HPAGE_PMD_NR to pages; however, there is no change to the print format of the output to user space. - NR_ANON_THPS - NR_FILE_THPS - NR_SHMEM_THPS - NR_SHMEM_PMDMAPPED - NR_FILE_PMDMAPPED Doing this also makes the THP vmstat counters more accurate. This series is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Because we use struct per_cpu_nodestat to cache the vmstat counters, the statistics are inaccurate, especially the THP vmstat counters. On systems with hundreds of processors this can amount to GBs of memory. For example, for a 96-CPU system, the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total. The THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the THP vmstat counters more accurate. From this point of view, I think this conversion is reasonable. Thanks Hugh for mentioning this. This was inspired by Johannes and Roman. Thanks to them. This patch (of 7): The unit of NR_ANON_THPS is HPAGE_PMD_NR already, so it should inc/dec by one rather than nr_pages. Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Fixes: 468c398233da ("mm: memcontrol: switch to native NR_ANON_THPS counter") Signed-off-by: Muchun Song <[email protected]> Acked-by: Michal Hocko <[email protected]> Acked-by: Johannes Weiner <[email protected]> Acked-by: Pankaj Gupta <[email protected]> Reviewed-by: Roman Gushchin <[email protected]> Reviewed-by: Shakeel Butt <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Feng Tang <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: NeilBrown <[email protected]> Cc: Rafael. J. Wysocki <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Sami Tolvanen <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm: memcontrol: optimize per-lruvec stats counter memory usage | Muchun Song | 2 | -3/+21
The vmstat threshold is 32 (MEMCG_CHARGE_BATCH); actually the threshold can be as big as MEMCG_CHARGE_BATCH * PAGE_SIZE, which still fits into an s32. So introduce struct batched_lruvec_stat to optimize memory usage. The size of struct lruvec_stat is 304 bytes on 64-bit systems, and it is a per-cpu structure, so with this patch we can save 304 / 2 * ncpu bytes per memcg per node, where ncpu is the number of possible CPUs. If there are c memory cgroups (including dying cgroups) and n NUMA nodes in the system, we save (152 * ncpu * c * n) bytes in total. [[email protected]: fix typo in comment] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Muchun Song <[email protected]> Reviewed-by: Shakeel Butt <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Vladimir Davydov <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Chris Down <[email protected]> Cc: Yafang Shao <[email protected]> Cc: Wei Yang <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
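A sketch of the resulting layout; the 304 bytes above correspond to 38 long counters of 8 bytes each, so halving each counter to an s32 saves 152 bytes per CPU. The exact struct and field names in the patch may differ.

    /* Per-cpu batching buffer: values never exceed the charge batch, so s32 is enough. */
    struct batched_lruvec_stat {
    	s32 count[NR_VM_NODE_STAT_ITEMS];
    };

    /* The long-based struct lruvec_stat is kept only where full-width counters are needed. */
    struct lruvec_stat {
    	long count[NR_VM_NODE_STAT_ITEMS];
    };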
2021-02-24 | mm: memcg/slab: pre-allocate obj_cgroups for slab caches with SLAB_ACCOUNT | Roman Gushchin | 5 | -29/+31
In general it's unknown in advance whether a slab page will contain accounted objects or not. In order to avoid memory waste, an obj_cgroup vector is allocated dynamically when the need to account a new object arises. Such an approach is memory efficient, but requires an expensive cmpxchg() to set up the memcg/objcgs pointer, because an allocation can race with a different allocation on another cpu. But in some common cases it's known for sure that a slab page will contain accounted objects: if the page belongs to a slab cache with the SLAB_ACCOUNT flag set. This includes such popular objects as vm_area_struct, anon_vma, task_struct, etc. In such cases we can pre-allocate the objcgs vector and simply assign it to the page without any atomic operations, because at this early stage the page is not visible to anyone else. A very simplistic benchmark (allocating 10000000 64-byte objects in a row) shows an ~15% win. In real life it seems that most workloads are not very sensitive to the speed of (accounted) slab allocations. [[email protected]: open-code set_page_objcgs() and add some comments, by Johannes] Link: https://lkml.kernel.org/r/[email protected] [[email protected]: fix it for mm-slub-call-account_slab_page-after-slab-page-initialization-fix.patch] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Roman Gushchin <[email protected]> Acked-by: Johannes Weiner <[email protected]> Reviewed-by: Shakeel Butt <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Christoph Lameter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/swap: don't SetPageWorkingset unconditionally during swapin | Yu Zhao | 1 | -1/+0
We are capable of SetPageWorkingset based on refault distances after commit aae466b0052e ("mm/swap: implement workingset detection for anonymous LRU"). This is done by workingset_refault(), which is right above the unconditional SetPageWorkingset deleted by this patch. The unconditional SetPageWorkingset miscategorizes pages that are read ahead or never belonged to the working set (e.g., tmpfs pages accessed only once by fd). When those pages are swapped in (after they were swapped out) for the first time, they skew PSI (when using async swap). When this happens again, depending on their refault distances, they might skew workingset_restore_anon counter in addition to PSI because their shadows indicate they were part of the working set. Historically, SetPageWorkingset was added as part of the PSI series, and Johannes said: "It was meant to mark incoming pages under IO with SetPageWorkingset when waiting for them constituted a memory stall. On the page cache side, because we HAVE workingset detection, this was specific to recently evicted pages that had been active in their previous life. On the anon side, the aging algorithm had no distinction between workingset and sporadically used pages. Given the choice between a) no swapin stalls are pressure and b) all swapin stalls are pressure, I went with the latter in order to detect swap storms. The false positive case - high rate of swapin without severe memory pressure - was relatively unlikely, because we tried to avoid swapping until everything was completely on fire in the first place." Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Yu Zhao <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Acked-by: Johannes Weiner <[email protected]> Acked-by: Joonsoo Kim <[email protected]> Acked-by: Michal Hocko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/swap_state: constify static struct attribute_group | Rikard Falkeborn | 1 | -1/+1
The only usage of swap_attr_group is to pass its address to sysfs_create_group() which takes a pointer to const attribute_group. Make it const to allow the compiler to put it in read-only memory. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Rikard Falkeborn <[email protected]> Reviewed-by: Amy Parker <[email protected]> Acked-by: "Huang, Ying" <[email protected]> Reviewed-by: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
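The change amounts to adding a const qualifier, roughly as below; the attribute array and member names are illustrative rather than the exact ones from mm/swap_state.c:

    static struct attribute *swap_attrs[] = {
    	&vma_ra_enabled_attr.attr,
    	NULL,
    };

    /* const lets the compiler place the group descriptor in read-only memory. */
    static const struct attribute_group swap_attr_group = {
    	.attrs = swap_attrs,
    };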
2021-02-24 | mm/page_io: use pr_alert_ratelimited for swap read/write errors | Georgi Djakov | 1 | -6/+6
If there are errors during swap read or write, they can easily fill the log buffer and push out any previous messages that might be useful for debugging, especially on systems that rely only on the kernel ring buffer for logging. For example, on systems using zram as swap, we are more likely to see any page allocation errors preceding the swap write errors if the alerts are ratelimited. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Georgi Djakov <[email protected]> Acked-by: Minchan Kim <[email protected]> Reviewed-by: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
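The conversion itself is mechanical; the message text and arguments below are a simplified illustration of the swap I/O error reports in mm/page_io.c:

    /* Before: every failed swap read immediately hits the ring buffer. */
    pr_alert("Read-error on swap-device (%u:%u:%llu)\n", major, minor, offset);

    /* After: repeated errors are rate limited, so earlier messages survive. */
    pr_alert_ratelimited("Read-error on swap-device (%u:%u:%llu)\n", major, minor, offset);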
2021-02-24 | mm/swapfile.c: fix debugging information problem | Stephen Zhang | 1 | -4/+4
Once the function name is changed, it may be easy to forget to modify the corresponding code here. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Stephen Zhang <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/swap_slots.c: remove redundant NULL check | Yang Li | 1 | -2/+1
Fix the below warning reported by coccicheck: mm/swap_slots.c:197:3-9: WARNING: NULL check before some freeing functions is not needed. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Yang Li <[email protected]> Reported-by: Abaci Robot <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm: backing-dev: Remove duplicated macro definition | Baolin Wang | 1 | -4/+2
Move the K() macro a little earlier in the file to remove the duplicated macro definition. Link: https://lkml.kernel.org/r/d1ccdf2d3116dce9814f2bcc1f0415ecb4c76ea5.1612862230.git.baolin.wang@linux.alibaba.com Signed-off-by: Baolin Wang <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | fs/buffer.c: add checking buffer head stat before clear | Yang Guo | 1 | -1/+2
clear_buffer_new() is used to clear the buffer head's 'new' state. When PAGE_SIZE is 64K, most buffer heads in the list do not need to be cleared, and clear_buffer_new() has an expensive atomic modification operation. Let's check the buffer head state before clearing it, as __block_write_begin_int() does, which is good for performance. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Yang Guo <[email protected]> Signed-off-by: Shaokun Zhang <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Nick Piggin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
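The pattern being added, sketched: test the bit first so the atomic clear only happens for buffers that actually have the flag set.

    /* Before: unconditional atomic bit clear on every buffer head. */
    clear_buffer_new(bh);

    /* After: skip the atomic op when the 'new' bit is not set anyway. */
    if (buffer_new(bh))
    	clear_buffer_new(bh);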
2021-02-24 | mm/filemap: simplify generic_file_read_iter | Christoph Hellwig | 1 | -6/+4
Avoid the pointless goto out just for returning retval. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: rename generic_file_buffered_read to filemap_read | Christoph Hellwig | 3 | -22/+19
Rename generic_file_buffered_read to match the naming of filemap_fault, also update the written parameter to a more descriptive name and improve the kerneldoc comment. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: don't relock the page after calling readpage | Matthew Wilcox (Oracle) | 1 | -14/+7
We don't need to get the page lock again; we just need to wait for the I/O to finish, so use wait_on_page_locked_killable() like the other callers of ->readpage. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: restructure filemap_get_pages | Matthew Wilcox (Oracle) | 1 | -43/+28
Remove the got_pages label, remove indentation, rename find_page to retry, simplify error handling. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: split filemap_readahead out of filemap_get_pages | Matthew Wilcox (Oracle) | 1 | -5/+14
This simplifies the error handling. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: add filemap_range_uptodate | Matthew Wilcox (Oracle) | 1 | -26/+39
Move the complicated condition and the calculations out of filemap_update_page() into its own function. [[email protected]: unlock page before dropping its refcount] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: move the iocb checks into filemap_update_page | Matthew Wilcox (Oracle) | 1 | -14/+10
We don't need to give up when a non-blocking request sees a !Uptodate page. We may be able to satisfy the read from a partially-uptodate page. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: convert filemap_update_page to return an errno | Matthew Wilcox (Oracle) | 1 | -21/+17
Use AOP_TRUNCATED_PAGE to indicate that no error occurred, but the page we looked up is no longer valid. In this case, the reference to the page will have been removed; if we hit any other error, the caller will release the reference. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: change filemap_create_page calling conventions | Matthew Wilcox (Oracle) | 1 | -26/+27
By moving the iocb flag checks to the caller, we can pass the file and the page index instead of the iocb. It never needed the iter. By passing the pagevec, we can return an errno (or AOP_TRUNCATED_PAGE) instead of an ERR_PTR. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: change filemap_read_page calling conventions | Matthew Wilcox (Oracle) | 1 | -47/+42
Make this function more generic by passing the file instead of the iocb. Check in the callers whether we should call readpage or not. Also make it return an errno / 0 / AOP_TRUNCATED_PAGE, and make calling put_page() the caller's responsibility. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: don't call ->readpage if IOCB_WAITQ is set | Matthew Wilcox (Oracle) | 1 | -12/+2
The readpage operation can block in many (most?) filesystems, so we should punt to a work queue instead of calling it. This was the last caller of lock_page_for_iocb(), so remove it. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: inline __wait_on_page_locked_async into caller | Matthew Wilcox (Oracle) | 1 | -31/+22
The previous patch removed wait_on_page_locked_async(), so inline __wait_on_page_locked_async into __lock_page_async(). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: support readpage splitting a page | Matthew Wilcox (Oracle) | 1 | -53/+23
For page splitting to succeed, the thread asking to split the page has to be the only one with a reference to the page. Calling wait_on_page_locked() while holding a reference to the page will effectively prevent this from happening with sufficient threads waiting on the same page. Use put_and_wait_on_page_locked() to sleep without holding a reference to the page, then retry the page lookup after the page is unlocked. Since we now get the page lock a little earlier in filemap_update_page(), we can eliminate a number of duplicate checks. The original intent (commit ebded02788b5 ("avoid unnecessary calls to lock_page when waiting for IO to complete during a read")) behind getting the page lock later was to avoid re-locking the page after it has been brought uptodate by another thread. We still avoid that because we go through the normal lookup path again after the winning thread has brought the page uptodate. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: pass a sleep state to put_and_wait_on_page_locked | Matthew Wilcox (Oracle) | 4 | -8/+10
This is prep work for the next patch, but I think at least one of the current callers would prefer a killable sleep to an uninterruptible one. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: use head pages in generic_file_buffered_read | Matthew Wilcox (Oracle) | 1 | -37/+85
Add filemap_get_read_batch() which returns the head pages which represent a contiguous array of bytes in the file. It also stops when encountering a page marked as Readahead or !Uptodate (but does return that page) so it can be handled appropriately by filemap_get_pages(). That lets us remove the loop in filemap_get_pages() and check only the last page. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: convert filemap_get_pages to take a pagevec | Matthew Wilcox (Oracle) | 1 | -44/+38
Using a pagevec lets us keep the pages and the number of pages together which simplifies a lot of the calling conventions. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Kent Overstreet <[email protected]> Cc: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: remove dynamically allocated array from filemap_read | Matthew Wilcox (Oracle) | 1 | -13/+2
Increasing the batch size runs into diminishing returns. It's probably better to make, eg, three calls to filemap_get_pages() than it is to call into kmalloc(). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Reviewed-by: Miaohe Lin <[email protected]> Cc: Kent Overstreet <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: rename generic_file_buffered_read subfunctions | Matthew Wilcox (Oracle) | 1 | -29/+15
Patch series "Refactor generic_file_buffered_read", v5. This is a combination of Christoph's work to refactor generic_file_buffered_read() and some of my large-page support which was disrupted by Kent's refactoring of generic_file_buffered_read. This patch (of 18): The recent split of generic_file_buffered_read() created some very long function names which are hard to distinguish from each other. Rename as follows: generic_file_buffered_read_readpage -> filemap_read_page generic_file_buffered_read_pagenotuptodate -> filemap_update_page generic_file_buffered_read_no_cached_page -> filemap_create_page generic_file_buffered_read_get_pages -> filemap_get_pages Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Reviewed-by: Miaohe Lin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/filemap: don't revert iter on -EIOCBQUEUED | Pavel Begunkov | 1 | -2/+4
Currently, if I/O is enqueued for async execution direct paths of generic_file_{read,write}_iter() will always revert the iter. There are no users expecting that, and that is also costly. Leave iterators as is on -EIOCBQUEUED. Link: https://lkml.kernel.org/r/f5247b60e7abbd2ff850cd108491f53a2e0c501a.1610207781.git.asml.silence@gmail.com Signed-off-by: Pavel Begunkov <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Jens Axboe <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
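A sketch of the idea for the direct-I/O read path of generic_file_read_iter(); this is simplified and the surrounding code is omitted, so treat it as an illustration rather than the exact diff:

    count = iov_iter_count(iter);
    retval = mapping->a_ops->direct_IO(iocb, iter);
    ...
    /*
     * Only rewind the iterator for a completed (possibly short) synchronous
     * request; an async request (-EIOCBQUEUED) still owns the iterator.
     */
    if (retval != -EIOCBQUEUED)
    	iov_iter_revert(iter, count - iov_iter_count(iter));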
2021-02-24 | mm/filemap: remove unused parameter and change to void type for replace_page_cache_page() | Baolin Wang | 3 | -12/+3
Since commit 74d609585d8b ("page cache: Add and replace pages using the XArray") was merged, replace_page_cache_page() cannot fail and always returns 0, so we can remove the redundant return value and make the function void. Moreover, remove the unused gfp_mask. Link: https://lkml.kernel.org/r/609c30e5274ba15d8b90c872fd0d8ac437a9b2bb.1610071401.git.baolin.wang@linux.alibaba.com Signed-off-by: Baolin Wang <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Miklos Szeredi <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/page_owner: use helper function zone_end_pfn() to get end_pfn | Miaohe Lin | 1 | -2/+2
Commit 108bcc96ef70 ("mm: add & use zone_end_pfn() and zone_spans_pfn()") introduced the helper zone_end_pfn() to calculate the zone end pfn, but pagetypeinfo_showmixedcount_print() forgot to use it. Also, the initialization of the local variable pfn is duplicated; remove one. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Miaohe Lin <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
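The helper simply wraps the start + span calculation, so the change looks like:

    /* Before: open-coded */
    unsigned long end_pfn = zone->zone_start_pfn + zone->spanned_pages;

    /* After: use the helper introduced by commit 108bcc96ef70 */
    unsigned long end_pfn = zone_end_pfn(zone);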
2021-02-24 | mm/debug_vm_pgtable/basic: iterate over entire protection_map[] | Anshuman Khandual | 1 | -12/+35
Currently the basic tests just validate various page table transformations after starting with vm_get_page_prot(VM_READ|VM_WRITE|VM_EXEC) protection. Instead, scan over the entire protection_map[] for better coverage. This also makes sure that all these basic page table transformation checks hold true irrespective of the starting protection value for the page table entry. There is also a slight change in the debug print format for the basic tests to capture the protection value being tested. The modified output looks something like:

    [pte_basic_tests ]: Validating PTE basic ()
    [pte_basic_tests ]: Validating PTE basic (read)
    [pte_basic_tests ]: Validating PTE basic (write)
    [pte_basic_tests ]: Validating PTE basic (read|write)
    [pte_basic_tests ]: Validating PTE basic (exec)
    [pte_basic_tests ]: Validating PTE basic (read|exec)
    [pte_basic_tests ]: Validating PTE basic (write|exec)
    [pte_basic_tests ]: Validating PTE basic (read|write|exec)
    [pte_basic_tests ]: Validating PTE basic (shared)
    [pte_basic_tests ]: Validating PTE basic (read|shared)
    [pte_basic_tests ]: Validating PTE basic (write|shared)
    [pte_basic_tests ]: Validating PTE basic (read|write|shared)
    [pte_basic_tests ]: Validating PTE basic (exec|shared)
    [pte_basic_tests ]: Validating PTE basic (read|exec|shared)
    [pte_basic_tests ]: Validating PTE basic (write|exec|shared)
    [pte_basic_tests ]: Validating PTE basic (read|write|exec|shared)

This also adds a missing 'struct mm_struct *' argument to pud_basic_tests(). This never got exposed before because PUD-based THP is available only on the x86 platform, where the mm_pmd_folded(mm) call gets macro-replaced without requiring the mm_struct, i.e. __is_defined(__PAGETABLE_PMD_FOLDED). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Tested-by: Gerald Schaefer <[email protected]> [s390] Reviewed-by: Steven Price <[email protected]> Suggested-by: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Gerald Schaefer <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Vineet Gupta <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/debug_vm_pgtable/basic: add validation for dirtiness after write protect | Anshuman Khandual | 2 | -4/+43
Patch series "mm/debug_vm_pgtable: Some minor updates", v3. This series contains some cleanups and new test suggestions from Catalin from an earlier discussion: https://lore.kernel.org/linux-mm/20201123142237.GF17833@gaia/ This patch (of 2): This adds validation tests for dirtiness after write protect conversion for each page table level. There are two new separate test types involved here. The first test ensures that a given page table entry does not become dirty after pxx_wrprotect(). This is important for platforms like arm64 which transfer and drop the hardware dirty bit (!PTE_RDONLY) to the software dirty bit while making the entry write protected. This test ensures that no fresh page table entry can be created with the hardware dirty bit set. The second test ensures that a given page table entry always preserves the dirty information across pxx_wrprotect(). This adds two previously missing PUD level basic tests and, while here, fixes pxx_wrprotect() related typos in the documentation file. Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Suggested-by: Catalin Marinas <[email protected]> Tested-by: Gerald Schaefer <[email protected]> [s390] Cc: Christophe Leroy <[email protected]> Cc: Gerald Schaefer <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Steven Price <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/debug: improve memcg debugging | Matthew Wilcox (Oracle) | 1 | -5/+5
The memcg_data is only valid on the head page, not the tail pages. Change the format and location of the printout within the dump to match the other parts of struct page better. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: Zi Yan <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Johannes Weiner <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm/slub: minor coding style tweaks | Zhiyuan Dai | 1 | -1/+1
Add whitespace to fix coding style issues and improve code readability. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Zhiyuan Dai <[email protected]> Acked-by: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-24 | mm, slub: remove slub_memcg_sysfs boot param and CONFIG_SLUB_MEMCG_SYSFS_ON | Vlastimil Babka | 3 | -38/+0
The boot param and config determine the value of memcg_sysfs_enabled, which is unused since commit 10befea91b61 ("mm: memcg/slab: use a single set of kmem_caches for all allocations") as there are no per-memcg kmem caches anymore. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Acked-by: Roman Gushchin <[email protected]> Acked-by: David Rientjes <[email protected]> Reviewed-by: Miaohe Lin <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>