author		David Hildenbrand <[email protected]>	2020-10-13 16:55:21 -0700
committer	Linus Torvalds <[email protected]>	2020-10-13 18:38:33 -0700
commit		51030a53d81e308f55e0e1d2048d23d8c8d16e5b
tree		c7f17c391b86d8314ca2f02d30459befd42dcfbe
parent		c9c510dc2964420038f8527125a2cd5d8fb79cb6
mm/page_isolation: exit early when pageblock is isolated in set_migratetype_isolate()
Right now, if we have two isolations racing on a pageblock that's in the
MOVABLE zone, we would trigger the WARN_ON_ONCE(). Let's just return
directly, simplifying error handling.
The behavior in question was introduced in commit 3d680bdf60a5 ("mm/page_isolation: fix
potential warning from user"). As far as I can see, we currently don't
have alloc_contig_range() users that use ZONE_MOVABLE (anymore), so
this is currently more of a cleanup and a preparation for the future than a fix.
Signed-off-by: David Hildenbrand <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Baoquan He <[email protected]>
Reviewed-by: Pankaj Gupta <[email protected]>
Acked-by: Mike Kravetz <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Michael S. Tsirkin <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Jason Wang <[email protected]>
Cc: Mike Rapoport <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r--	mm/page_isolation.c	9
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 63a3db10a8c0..ad3aa7ac59a7 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -29,10 +29,12 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 	/*
 	 * We assume the caller intended to SET migrate type to isolate.
 	 * If it is already set, then someone else must have raced and
-	 * set it before us. Return -EBUSY
+	 * set it before us.
 	 */
-	if (is_migrate_isolate_page(page))
-		goto out;
+	if (is_migrate_isolate_page(page)) {
+		spin_unlock_irqrestore(&zone->lock, flags);
+		return -EBUSY;
+	}
 
 	/*
 	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
@@ -52,7 +54,6 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 		ret = 0;
 	}
 
-out:
 	spin_unlock_irqrestore(&zone->lock, flags);
 	if (!ret) {
 		drain_all_pages(zone);
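Editorial note (not part of the commit): the patch replaces a "goto out" that fell
through to a shared unlock-and-report path with an early unlock-and-return when the
pageblock is already isolated, so the second of two racing isolations simply gets
-EBUSY instead of reaching the WARN_ON_ONCE() in the failure path. A minimal
userspace sketch of that control-flow pattern, using hypothetical stand-ins
(struct fake_zone, try_isolate) rather than the real kernel types, would be:

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for struct zone: a lock plus an "already isolated" flag. */
struct fake_zone {
	pthread_mutex_t lock;
	bool isolated;
};

/*
 * Mirrors the shape of the patched set_migratetype_isolate(): if the block is
 * already isolated (someone else raced and set it before us), drop the lock
 * and return -EBUSY immediately instead of jumping to a shared "out:" label.
 */
static int try_isolate(struct fake_zone *z)
{
	pthread_mutex_lock(&z->lock);

	if (z->isolated) {
		pthread_mutex_unlock(&z->lock);
		return -EBUSY;	/* early exit: we lost the race */
	}

	z->isolated = true;	/* we won the race; do the real work here */

	pthread_mutex_unlock(&z->lock);
	return 0;
}

int main(void)
{
	struct fake_zone z = { PTHREAD_MUTEX_INITIALIZER, false };

	printf("first:  %d\n", try_isolate(&z));	/* 0 */
	printf("second: %d\n", try_isolate(&z));	/* -EBUSY */
	return 0;
}

Build with "cc -pthread"; the second call returns -EBUSY without any warning,
which is the behavior the patch gives the kernel's racing-isolation case.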