| author | Ritesh Harjani (IBM) <[email protected]> | 2024-10-18 21:47:56 +0530 |
|---|---|---|
| committer | Michael Ellerman <[email protected]> | 2024-10-21 15:26:50 +1100 |
| commit | 6faeac507beb2935d9171a01c3877b0505689c58 (patch) | |
| tree | 1b873a3f968676b785deb091241ddbb96de6d25a /arch/powerpc/kernel/setup-common.c | |
| parent | adfaec30ffaceecd565e06adae367aa944acc3c9 (diff) | |
powerpc/fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
This patch moves all CMA-related initialization and alignment code
into fadump_cma_init(), which is now called at the end. This also
means that [reserve_dump_area_start, boot_memory_size] is kept page
aligned during fadump_reserve_mem(). Later, fadump_cma_init() extracts
the aligned chunk from this region and hands it to CMA. This also
inherently fixes an issue in the current code where
reserve_dump_area_start is not aligned when physical memory has holes
and the suitable chunk starts at an unaligned boundary.
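
To make the fixup concrete, here is a minimal userspace sketch of the
alignment arithmetic described above (illustrative only, not the kernel
code; the 64K page size and all addresses are hypothetical):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 64K page size for the example. */
#define PAGE_SIZE 0x10000ULL
#define ALIGN_UP(x, a)   (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
	/* Suppose the suitable chunk found during fadump_reserve_mem()
	 * starts unaligned because physical memory has holes. Both
	 * values below are made up for illustration. */
	uint64_t reserve_dump_area_start = 0x2000C000ULL;
	uint64_t boot_memory_size = 0x40000000ULL;	/* 1 GiB */

	/* Round the start up and shrink the size down so the extracted
	 * chunk is page aligned and fully inside the reservation. */
	uint64_t cma_start = ALIGN_UP(reserve_dump_area_start, PAGE_SIZE);
	uint64_t cma_size  = ALIGN_DOWN(boot_memory_size -
					(cma_start - reserve_dump_area_start),
					PAGE_SIZE);

	printf("aligned CMA chunk: [0x%" PRIx64 ", 0x%" PRIx64 ")\n",
	       cma_start, cma_start + cma_size);
	return 0;
}
```

Rounding the start up while shrinking the size down guarantees the
chunk handed to CMA never extends past the original reservation.
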
After this we should be able to call fadump_cma_init() independently
later in setup_arch(), where pageblock_order is non-zero (CMA needs
its region aligned to pageblock_order pages, so the init must run that
late).
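
For context, a hedged sketch of what handing the aligned chunk to CMA
from fadump_cma_init() could look like. cma_init_reserved_mem() is the
real kernel API, and fw_dump.reserve_dump_area_start /
fw_dump.boot_memory_size are the fields named above, but the function
body and the "fadump" area name are assumptions, not the actual patch:

```c
#include <linux/cma.h>
#include <linux/printk.h>

/* Sketch only, not the actual patch. cma_init_reserved_mem() rejects
 * a base/size pair that is not aligned to CMA's minimum alignment,
 * which is derived from pageblock_order, hence fadump_cma_init() must
 * run after pageblock_order has been initialized (non-zero). */
static struct cma *fadump_cma;

static void fadump_cma_init(void)
{
	int rc;

	rc = cma_init_reserved_mem(fw_dump.reserve_dump_area_start,
				   fw_dump.boot_memory_size,
				   0, "fadump", &fadump_cma);
	if (rc)
		pr_warn("fadump: CMA init failed, rc = %d\n", rc);
}
```
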
Suggested-by: Sourabh Jain <[email protected]>
Acked-by: Hari Bathini <[email protected]>
Reviewed-by: Madhavan Srinivasan <[email protected]>
Signed-off-by: Ritesh Harjani (IBM) <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://patch.msgid.link/805d6b900968fb9402ad8f4e4775597db42085c4.1729146153.git.ritesh.list@gmail.com
Diffstat (limited to 'arch/powerpc/kernel/setup-common.c')
0 files changed, 0 insertions, 0 deletions