author     Michal Hocko <[email protected]>        2019-11-05 21:16:44 -0800
committer  Linus Torvalds <[email protected]>  2019-11-06 08:47:50 -0800
commit     93b3a674485f6a4b8ffff85d1682d5e8b7c51560 (patch)
tree       a65fd106e799a74eb29750b2b3341b082d493995
parent     abaed0112c1db08be15a784a2c5c8a8b3063cdd3 (diff)
mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo
pagetypeinfo_showfree_print is called with zone->lock held in irq mode.
This is not really nice because it blocks both interrupts on that
CPU and the page allocator. On large machines this might even trigger
the hard lockup detector.
Considering pagetypeinfo is a debugging tool, we do not really need
exact numbers here. The primary reason to look at the output is to see
how pageblocks are spread among the different migratetypes, and a low
number of pages is much more interesting; therefore, putting a bound on
the number of pages counted on the free_list sounds like a reasonable
tradeoff.
The new output will simply tell
[...]
Node 6, zone Normal, type Movable >100000 >100000 >100000 >100000 41019 31560 23996 10054 3229 983 648
instead of
Node 6, zone Normal, type Movable 399568 294127 221558 102119 41019 31560 23996 10054 3229 983 648
The limit has been chosen arbitrarily and is subject to future change
should there be a need for it.
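For illustration, here is a minimal, self-contained sketch of the capping pattern outside the kernel; the names count_capped, CAP and struct node are hypothetical stand-ins, and the actual change is in the diff at the bottom of this page:

	#include <stdbool.h>

	#define CAP 100000	/* same arbitrary bound as the patch */

	struct node { struct node *next; };

	/*
	 * Count list entries, but stop at CAP and flag via *overflow
	 * that the true count is only known to be ">= CAP". Bounding
	 * the walk bounds the time spent, which matters when the
	 * caller holds a spinlock with interrupts disabled.
	 */
	static unsigned long count_capped(const struct node *head,
					  bool *overflow)
	{
		unsigned long freecount = 0;
		const struct node *curr;

		*overflow = false;
		for (curr = head; curr; curr = curr->next) {
			if (++freecount >= CAP) {
				*overflow = true;
				break;
			}
		}
		return freecount;
	}

The caller can then print ">100000" rather than an exact count, which is the same information a human reader of a debugging file actually needs.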
While we are at it, also drop the zone lock after each free_list
iteration, which helps IRQ and page allocator responsiveness even
further, as the time the lock is held with IRQs disabled is now always
bounded by those 100k pages.
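The lock-drop pattern, sketched in kernel style; only the spin_lock_irq()/spin_unlock_irq() and cond_resched() calls are real kernel APIs, while my_lock, nr_chunks and process_one_chunk() are hypothetical placeholders for the zone lock and one bounded free_list walk:

	#include <linux/spinlock.h>
	#include <linux/sched.h>

	/* hypothetical bounded unit of work; stands in for one free_list walk */
	static void process_one_chunk(int chunk)
	{
	}

	/*
	 * Do a bounded amount of work per lock hold, then drop the lock
	 * and call cond_resched() so pending interrupts, the scheduler
	 * and other lock waiters (here: the page allocator) can make
	 * progress before the next chunk.
	 */
	static void walk_in_chunks(spinlock_t *my_lock, int nr_chunks)
	{
		int chunk;

		spin_lock_irq(my_lock);
		for (chunk = 0; chunk < nr_chunks; chunk++) {
			process_one_chunk(chunk);
			spin_unlock_irq(my_lock);
			cond_resched();
			spin_lock_irq(my_lock);
		}
		spin_unlock_irq(my_lock);
	}

Because each hold is bounded, the worst-case IRQ-off time no longer scales with the total length of the free lists.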
[[email protected]: tweak comment text, per David Hildenbrand]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Suggested-by: Andrew Morton <[email protected]>
Reviewed-by: Waiman Long <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Acked-by: Rafael Aquini <[email protected]>
Acked-by: David Rientjes <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: Song Liu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r--	mm/vmstat.c	23
1 file changed, 20 insertions(+), 3 deletions(-)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4e885ecd44d1..a8222041bd44 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1383,12 +1383,29 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
 			unsigned long freecount = 0;
 			struct free_area *area;
 			struct list_head *curr;
+			bool overflow = false;
 
 			area = &(zone->free_area[order]);
 
-			list_for_each(curr, &area->free_list[mtype])
-				freecount++;
-			seq_printf(m, "%6lu ", freecount);
+			list_for_each(curr, &area->free_list[mtype]) {
+				/*
+				 * Cap the free_list iteration because it might
+				 * be really large and we are under a spinlock
+				 * so a long time spent here could trigger a
+				 * hard lockup detector. Anyway this is a
+				 * debugging tool so knowing there is a handful
+				 * of pages of this order should be more than
+				 * sufficient.
+				 */
+				if (++freecount >= 100000) {
+					overflow = true;
+					break;
+				}
+			}
+			seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
+			spin_unlock_irq(&zone->lock);
+			cond_resched();
+			spin_lock_irq(&zone->lock);
 		}
 		seq_putc(m, '\n');
 	}