author     Balbir Singh <[email protected]>      2018-12-28 00:33:24 -0800
committer  Linus Torvalds <[email protected]>    2018-12-28 12:11:46 -0800
commit     5eb570a8d9248e0c1358078a59916d0e337e695b (patch)
tree       9163f5c55857e2439bd9c2d37919a6e79d08218e
parent     c8f61cfc871fadfb73ad3eacd64fda457279e911 (diff)
mm/hotplug: optimize clear_hwpoisoned_pages()
In hot remove, we try to clear poisoned pages, but a small optimization to
check if num_poisoned_pages is 0 helps remove the iteration through
nr_pages.
[[email protected]: tweak comment text]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Balbir Singh <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r--	mm/sparse.c	| 9
1 file changed, 9 insertions, 0 deletions
diff --git a/mm/sparse.c b/mm/sparse.c
index 3abc8cc50201..691544a2814c 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -740,6 +740,15 @@ static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
 	if (!memmap)
 		return;
 
+	/*
+	 * A further optimization is to have per section refcounted
+	 * num_poisoned_pages. But that would need more space per memmap, so
+	 * for now just do a quick global check to speed up this routine in the
+	 * absence of bad pages.
+	 */
+	if (atomic_long_read(&num_poisoned_pages) == 0)
+		return;
+
 	for (i = 0; i < nr_pages; i++) {
 		if (PageHWPoison(&memmap[i])) {
 			atomic_long_sub(1, &num_poisoned_pages);
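
For illustration only (not part of the commit): a minimal standalone C sketch of the same "check a global counter before scanning" fast path, using C11 atomics in userspace. The names poisoned_total, page_is_poisoned() and clear_poisoned_range() are hypothetical stand-ins for the kernel's num_poisoned_pages, PageHWPoison() and clear_hwpoisoned_pages().

/*
 * Illustrative userspace sketch, not kernel code. It mirrors the patch above:
 * if the global poisoned-page counter is zero, skip the O(nr_pages) scan.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static atomic_long poisoned_total;	/* global count of poisoned pages */

struct page {
	bool hwpoison;
};

static bool page_is_poisoned(const struct page *p)
{
	return p->hwpoison;
}

static void clear_poisoned_range(struct page *memmap, size_t nr_pages)
{
	if (!memmap)
		return;

	/*
	 * Fast path: with no poisoned pages anywhere, there is nothing to
	 * clear in this range either, so avoid walking nr_pages entries.
	 */
	if (atomic_load(&poisoned_total) == 0)
		return;

	for (size_t i = 0; i < nr_pages; i++) {
		if (page_is_poisoned(&memmap[i])) {
			atomic_fetch_sub(&poisoned_total, 1);
			memmap[i].hwpoison = false;
		}
	}
}

int main(void)
{
	struct page pages[4] = { { false }, { true }, { false }, { false } };

	atomic_fetch_add(&poisoned_total, 1);	/* one poisoned page exists */
	clear_poisoned_range(pages, 4);
	printf("poisoned pages left: %ld\n", atomic_load(&poisoned_total));
	return 0;
}

The trade-off is the same one the in-tree comment describes: a single global counter keeps the check cheap and needs no extra per-section storage, at the cost of still scanning the whole range whenever any poisoned page exists anywhere in the system.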