author	Roman Gushchin <[email protected]>	2020-06-03 15:58:42 -0700
committer	Linus Torvalds <[email protected]>	2020-06-03 20:09:44 -0700
commit	16867664936e32423375bf44d240f440fff194cb (patch)
tree	6bc077f97f3aa8eb00756eca0f8ed026d8870777
parent	58b7f1194fe1e188a1687e45c3475a98906aae4b (diff)
mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations
Currently a cma area is barely used by the page allocator because it is used
only as a fallback from movable; however, kswapd tries hard to make sure that
the fallback path isn't used.

This results in a system evicting memory and pushing data into swap, while
lots of CMA memory is still available.  This happens despite the fact that
alloc_contig_range is perfectly capable of moving any movable allocations
out of the way of an allocation.

To effectively use the cma area, let's alter the rules: if the zone has more
free cma pages than half of the total free pages in the zone, use cma
pageblocks first and fall back to movable blocks in the case of failure.

[[email protected]: ifdef the cma-specific code]
  Link: http://lkml.kernel.org/r/[email protected]

Co-developed-by: Rik van Riel <[email protected]>
Signed-off-by: Roman Gushchin <[email protected]>
Signed-off-by: Rik van Riel <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: Minchan Kim <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r--	mm/page_alloc.c	14
1 file changed, 14 insertions(+), 0 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cbe73a5610a1..5207a9e86388 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2752,6 +2752,20 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 {
 	struct page *page;
 
+#ifdef CONFIG_CMA
+	/*
+	 * Balance movable allocations between regular and CMA areas by
+	 * allocating from CMA when over half of the zone's free memory
+	 * is in the CMA area.
+	 */
+	if (migratetype == MIGRATE_MOVABLE &&
+	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
+	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		page = __rmqueue_cma_fallback(zone, order);
+		if (page)
+			return page;
+	}
+#endif
 retry:
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
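
For illustration only (not part of the commit): a minimal userspace C sketch of
the balancing rule described in the commit message, i.e. prefer the CMA area for
movable allocations once free CMA pages exceed half of the zone's free pages.
The struct and function names below are made up for this example; they are not
the kernel's.

	/* Standalone sketch of the "prefer CMA when CMA holds more than
	 * half of the free pages" heuristic. Hypothetical names only. */
	#include <stdbool.h>
	#include <stdio.h>

	struct zone_counters {
		unsigned long nr_free_pages;     /* all free pages in the zone */
		unsigned long nr_free_cma_pages; /* free pages in the CMA area */
	};

	/* Return true when a movable allocation should try CMA pageblocks first. */
	static bool prefer_cma_first(const struct zone_counters *z)
	{
		return z->nr_free_cma_pages > z->nr_free_pages / 2;
	}

	int main(void)
	{
		struct zone_counters balanced  = { .nr_free_pages = 1000, .nr_free_cma_pages = 300 };
		struct zone_counters cma_heavy = { .nr_free_pages = 1000, .nr_free_cma_pages = 600 };

		printf("balanced zone  -> try CMA first: %s\n",
		       prefer_cma_first(&balanced) ? "yes" : "no");
		printf("CMA-heavy zone -> try CMA first: %s\n",
		       prefer_cma_first(&cma_heavy) ? "yes" : "no");
		return 0;
	}

Comparing against half of the zone's free pages keeps the regular and CMA areas
roughly in balance: movable allocations drain whichever side currently holds the
larger share of free memory, instead of leaving CMA idle until the movable
fallback path is reached.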