author	Pavel Tatashin <[email protected]>	2018-08-17 15:44:52 -0700
committer	Linus Torvalds <[email protected]>	2018-08-17 16:20:28 -0700
commit	720e14ebec642bc56c44e5e60a2d595900e5bbf0 (patch)
tree	0cf640d04be919592ed18fc197491732f7e630fd
parent	50a7ca3c6fc86955f99fc432fc8a186b968b365b (diff)
mm: skip invalid pages a block at a time in zero_resv_unavail()
The role of zero_resv_unavail() is to make sure that every struct page
that is allocated but is not backed by memory accessible to the kernel
is zeroed, rather than left in some uninitialized state.

Since struct pages are allocated in blocks (2M pages in the x86 case),
we can skip pageblock_nr_pages at a time once the first pfn in a block
is found to be invalid.  This optimization helps because, on x86, every
hole in the e820 map is now marked as reserved in memblock and thus
passes through this function.

This function is called before sched_clock() is initialized, so I used
my x86 early boot clock patches to measure the performance improvement.
With a 1T hole on an i7-8700, boot currently spends 0.606918s in this
function; with this optimization it spends 0.001103s.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Pavel Tatashin <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Cc: Pasha Tatashin <[email protected]>
Cc: Steven Sistare <[email protected]>
Cc: Daniel Jordan <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: "Huang, Ying" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r--	mm/page_alloc.c	5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0922ef5d2e46..33f6745bb649 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6405,8 +6405,11 @@ void __paginginit zero_resv_unavail(void)
 	pgcnt = 0;
 	for_each_resv_unavail_range(i, &start, &end) {
 		for (pfn = PFN_DOWN(start); pfn < PFN_UP(end); pfn++) {
-			if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages)))
+			if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
+				pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
+					+ pageblock_nr_pages - 1;
 				continue;
+			}
 			mm_zero_struct_page(pfn_to_page(pfn));
 			pgcnt++;
 		}