path: root/include/linux/debugobjects.h
author    Zi Yan <[email protected]>  2023-09-13 16:12:44 -0400
committer Andrew Morton <[email protected]>  2023-10-04 10:32:29 -0700
commit    2e7cfe5cd5b6b0b98abf57a3074885979e187c1c (patch)
tree      df2ad3549014423a0dbf40287b47ae33bd4baded /include/linux/debugobjects.h
parent    3dfbb555c98ac55b9d911f9af0e35014b445fb41 (diff)
mm/cma: use nth_page() in place of direct struct page manipulation
Patch series "Use nth_page() in place of direct struct page manipulation", v3.

On SPARSEMEM without VMEMMAP, struct page is not guaranteed to be contiguous, since each memory section's memmap might be allocated independently. hugetlb pages can go beyond a memory section size, so direct struct page manipulation on hugetlb pages/subpages might yield a wrong struct page. The kernel provides nth_page() to do the manipulation properly. Use it wherever code can see hugetlb pages.

This patch (of 5):

When dealing with hugetlb pages, manipulating struct page pointers directly can reach a wrong struct page, since struct page is not guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page() to handle it properly. Without the fix, page_kasan_tag_reset() could reset the tags of the wrong page, causing an incorrect KASAN result. No related bug has been reported; the fix comes from code inspection.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Signed-off-by: Zi Yan <[email protected]>
Reviewed-by: Muchun Song <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
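For illustration, a minimal sketch of the pattern this series fixes; it is not the literal hunk from mm/cma.c, and the loop context is an assumption based on the description above. nth_page() itself is the existing helper from include/linux/mm.h, which resolves the n-th subpage via its pfn when SPARSEMEM is enabled without VMEMMAP, and falls back to plain pointer arithmetic otherwise.

    /*
     * Illustrative sketch only, not the literal diff. On SPARSEMEM
     * without VMEMMAP, memmap sections may be allocated independently,
     * so "page + i" can walk off the end of one section's memmap for
     * allocations spanning a section (e.g. hugetlb-sized CMA areas).
     *
     * nth_page() (include/linux/mm.h) handles this:
     *   #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
     *   #define nth_page(page, n)  pfn_to_page(page_to_pfn((page)) + (n))
     *   #else
     *   #define nth_page(page, n)  ((page) + (n))
     *   #endif
     */

    /* Before: direct pointer arithmetic, wrong across section boundaries. */
    for (i = 0; i < count; i++)
        page_kasan_tag_reset(page + i);

    /* After: resolve each subpage through its pfn. */
    for (i = 0; i < count; i++)
        page_kasan_tag_reset(nth_page(page, i));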
Diffstat (limited to 'include/linux/debugobjects.h')
0 files changed, 0 insertions, 0 deletions