| author | Frank van der Linden <[email protected]> | 2024-04-04 16:25:15 +0000 |
|---|---|---|
| committer | Andrew Morton <[email protected]> | 2024-04-25 20:56:43 -0700 |
| commit | 55d134a7b499c77e7cfd0ee41046f3c376e791e5 (patch) | |
| tree | 66dc97cb5fff9ad261651d0d7f873e92de0ac530 /tools/perf/scripts/python/bin | |
| parent | b174f139bdc8aaaf72f5b67ad1bd512c4868a87e (diff) | |
mm/hugetlb: pass correct order_per_bit to cma_declare_contiguous_nid
The hugetlb_cma code passes 0 in the order_per_bit argument to
cma_declare_contiguous_nid (the alignment, computed using the page order,
is correctly passed in).
This causes each bit in the CMA allocation bitmap to represent a 4k
page, making the bitmaps potentially very large, and slower to scan.
E.g. for a 4k page size on x86, hugetlb_cma=64G would mean a bitmap
size of (64G / 4k) / 8 == 2M. With HUGETLB_PAGE_ORDER as order_per_bit,
as intended, this would be (64G / 2M) / 8 == 4k. So, that's quite a
difference.
Also, this restricted the hugetlb_cma area to ((PAGE_SIZE <<
MAX_PAGE_ORDER) * 8) * PAGE_SIZE (e.g. 128G on x86), since
bitmap_alloc uses normal page allocation, and is thus restricted by
MAX_PAGE_ORDER. Specifying anything above that would fail the CMA
initialization.
So, correctly pass in the order instead.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Frank van der Linden <[email protected]>
Acked-by: Roman Gushchin <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>