author     Vincent Whitchurch <[email protected]>    2019-11-30 17:54:20 -0800
committer  Linus Torvalds <[email protected]>    2019-12-01 12:59:05 -0800
commit     4c29700ed9908c15feeb84a40a415f4e921c5a66 (patch)
tree       1df623783aa5d3443eec55681d02c8f29a0675d9
parent     c5e79ef561b0292fa4448d3ea5de6430143b9f70 (diff)
mm/sparse: consistently do not zero memmap
sparsemem without VMEMMAP has two allocation paths to allocate the memory
needed for its memmap (done in sparse_mem_map_populate()).

In one allocation path (sparse_buffer_alloc() succeeds), the memory is not
zeroed (since it was previously allocated with memblock_alloc_try_nid_raw()).

In the other allocation path (sparse_buffer_alloc() fails and
sparse_mem_map_populate() falls back to memblock_alloc_try_nid()), the
memory is zeroed.

AFAICS this difference does not appear to be on purpose.  If the code is
supposed to work with non-initialized memory (__init_single_page() takes
care of zeroing the struct pages which are actually used), we should
consistently not zero the memory, to avoid masking bugs.

(I noticed this because on my ARM64 platform, with 1 GiB of memory the
first [and only] section is allocated from the zeroing path, while with
2 GiB of memory the first 1 GiB section is allocated from the non-zeroing
path.)

Michal: "the main user visible problem is a memory wastage.  The overall
amount of memory should be small.  I wouldn't call it stable material."

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vincent Whitchurch <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r--    mm/sparse.c    2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/sparse.c b/mm/sparse.c
index f6891c1992b1..01e467adc219 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -458,7 +458,7 @@ struct page __init *__populate_section_memmap(unsigned long pfn,
 	if (map)
 		return map;
 
-	map = memblock_alloc_try_nid(size,
+	map = memblock_alloc_try_nid_raw(size,
 					  PAGE_SIZE, addr,
 					  MEMBLOCK_ALLOC_ACCESSIBLE, nid);
 	if (!map)
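
For context, the hunk above ends mid-function.  Below is a simplified sketch
of the two allocation paths described in the commit message, as the
non-VMEMMAP __populate_section_memmap() stands after this patch.  It is
reconstructed from the commit message and the hunk, not copied verbatim from
mm/sparse.c, so details such as the full parameter list and the exact panic
message are illustrative only.

/*
 * Simplified sketch (not the verbatim upstream code) of the two memmap
 * allocation paths after this patch.
 */
struct page __init *__populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
	unsigned long size = section_map_size();
	struct page *map = sparse_buffer_alloc(size);
	phys_addr_t addr = __pa(MAX_DMA_ADDRESS);

	/*
	 * Path 1: the pre-allocated sparse buffer.  That buffer was set up
	 * with memblock_alloc_try_nid_raw(), so this memory is not zeroed.
	 */
	if (map)
		return map;

	/*
	 * Path 2: the fallback.  Before this patch it used
	 * memblock_alloc_try_nid(), which zeroes the allocation; the patch
	 * switches it to the _raw variant so both paths consistently return
	 * uninitialized memory (__init_single_page() initializes the struct
	 * pages that are actually used).
	 */
	map = memblock_alloc_try_nid_raw(size, PAGE_SIZE, addr,
					 MEMBLOCK_ALLOC_ACCESSIBLE, nid);
	if (!map)
		panic("%s: Failed to allocate %lu bytes\n", __func__, size);

	return map;
}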