| author    | Minchan Kim <[email protected]>       | 2016-12-12 16:42:08 -0800 |
|-----------|----------------------------------------------|---------------------------|
| committer | Linus Torvalds <[email protected]> | 2016-12-12 18:55:07 -0800 |
| commit    | 4855e4a7f29d6d10b0b9c84e189c770c9a94e91e     |                           |
| tree      | eb75748238b9fd7e2be6b7f2e885f0526f24796a     |                           |
| parent    | 88ed365ea227aa28841a8d6e196c9a261c76fffd     |                           |
mm: prevent double decrease of nr_reserved_highatomic
There is a race between page freeing and unreserve_highatomic_pageblock():
    CPU 0                                    CPU 1

    free_hot_cold_page
      mt = get_pfnblock_migratetype
      set_pcppage_migratetype(page, mt)
                                             unreserve_highatomic_pageblock
                                               spin_lock_irqsave(&zone->lock)
                                               move_freepages_block
                                               set_pageblock_migratetype(page)
                                               spin_unlock_irqrestore(&zone->lock)
      free_pcppages_bulk
        __free_one_page(mt) <- mt is stale
Because of this race, a page freed on CPU 0 can end up on a free list
that no longer matches its pageblock's migratetype, since the
pageblock's type was changed underneath it. The highatomic unreserve
logic can then decrease the reserved count for the same pageblock
several times, creating a mismatch between nr_reserved_highatomic and
the number of pageblocks actually reserved.
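To make the accounting failure concrete, here is a minimal userspace
sketch of the pre-patch behaviour; the names mimic mm/page_alloc.c, but
the constant, the helper, and the single-pageblock scenario are
illustrative assumptions, not kernel code:

```c
#include <stdio.h>

/* Illustrative stand-in for the kernel's pageblock_nr_pages. */
#define PAGEBLOCK_NR_PAGES 512UL

/* Model: exactly one highatomic pageblock is reserved. */
static unsigned long nr_reserved_highatomic = PAGEBLOCK_NR_PAGES;

/*
 * Pre-patch behaviour: decrement for every pageblock reached through
 * the MIGRATE_HIGHATOMIC free list, without re-checking whether the
 * pageblock is actually still highatomic.
 */
static void unreserve_buggy(void)
{
    nr_reserved_highatomic -= PAGEBLOCK_NR_PAGES;
}

int main(void)
{
    unreserve_buggy();  /* legitimate unreserve: counter reaches 0 */
    unreserve_buggy();  /* second hit via a stale page on the list */
    printf("nr_reserved_highatomic = %lu\n", nr_reserved_highatomic);
    /* The unsigned counter has wrapped far past zero. */
    return 0;
}
```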
So this patch verifies whether the pageblock is still highatomic before
accounting, and decreases the count only if it is.
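The same model with that guard applied, again a sketch rather than the
verbatim kernel diff: re-read the pageblock's migratetype before
touching the counter, and clamp the subtraction so it cannot drop below
zero:

```c
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL
#define MIN(a, b) ((a) < (b) ? (a) : (b))

enum migratetype { MIGRATE_MOVABLE, MIGRATE_HIGHATOMIC };

/* One reserved highatomic pageblock, as in the sketch above. */
static enum migratetype pageblock_mt = MIGRATE_HIGHATOMIC;
static unsigned long nr_reserved_highatomic = PAGEBLOCK_NR_PAGES;

static void unreserve_fixed(void)
{
    /* Decrease the count only if the pageblock is still highatomic. */
    if (pageblock_mt != MIGRATE_HIGHATOMIC)
        return;
    /* Clamp so a racing decrement can never underflow the counter. */
    nr_reserved_highatomic -=
        MIN(PAGEBLOCK_NR_PAGES, nr_reserved_highatomic);
    pageblock_mt = MIGRATE_MOVABLE;  /* the block is converted once */
}

int main(void)
{
    unreserve_fixed();  /* legitimate unreserve: counter reaches 0 */
    unreserve_fixed();  /* stale retry: the guard makes it a no-op */
    printf("nr_reserved_highatomic = %lu\n", nr_reserved_highatomic);
    return 0;
}
```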
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Minchan Kim <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Sangseok Lee <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>