author     Hillf Danton <[email protected]>    2012-01-10 15:08:30 -0800
committer  Linus Torvalds <[email protected]>  2012-01-10 16:30:45 -0800
commit     ea5768c74b8e0d6a866508fc6399d5ff958da5e3 (patch)
tree       c9669e800bca28d9b905b6e47c206a5ea3fe149b
parent     1ebb7044c9142c67d1d2b04d84010b4810a43fd8 (diff)
mm/hugetlb.c: avoid bogus counter of surplus huge page
If we have to hand the newly allocated huge page back to the page
allocator for any reason, the changed counter should be recovered.
This affects only s390 at present.
Signed-off-by: Hillf Danton <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r--  mm/hugetlb.c  2
1 file changed, 1 insertion, 1 deletion
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bb7dc405634f..ea8c3a4cd2ae 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -800,7 +800,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
 
 	if (page && arch_prepare_hugepage(page)) {
 		__free_pages(page, huge_page_order(h));
-		return NULL;
+		page = NULL;
 	}
 
 	spin_lock(&hugetlb_lock);
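For context, the sketch below is a simplified, user-space model of the control
flow around this hunk, not the kernel code itself: in alloc_buddy_huge_page()
the surplus/huge-page counters are bumped speculatively before the allocation,
and the rollback for a failed allocation sits after the arch_prepare_hugepage()
check (under hugetlb_lock in the kernel). The toy_* names and the standalone
counters here are hypothetical stand-ins; only the return-NULL vs. page = NULL
distinction mirrors the actual patch.

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for the real hugetlb state and helpers (hypothetical). */
static long nr_huge_pages;
static long surplus_huge_pages;

static void *toy_alloc_page(void)      { return malloc(4096); }
static void  toy_free_page(void *p)    { free(p); }
static int   toy_arch_prepare(void *p) { (void)p; return -1; /* simulate the s390 failure */ }

/*
 * Modeled on the shape of alloc_buddy_huge_page(): counters are bumped
 * up front, and the single rollback point for a failed allocation comes
 * after the arch-prepare check.
 */
static void *toy_alloc_buddy_huge_page(void)
{
	void *page;

	/* Speculatively account for the new surplus page. */
	nr_huge_pages++;
	surplus_huge_pages++;

	page = toy_alloc_page();

	if (page && toy_arch_prepare(page)) {
		toy_free_page(page);
		/*
		 * The fix: set page = NULL and fall through instead of
		 * returning here.  An early return would skip the rollback
		 * below and leave both counters bogusly inflated.
		 */
		page = NULL;
	}

	if (!page) {
		/* Roll back the speculative accounting. */
		nr_huge_pages--;
		surplus_huge_pages--;
	}

	return page;
}

int main(void)
{
	toy_alloc_buddy_huge_page();
	/* With the fix, both counters are back to 0 after the failure path. */
	printf("nr=%ld surplus=%ld\n", nr_huge_pages, surplus_huge_pages);
	return 0;
}
```

With the pre-patch "return NULL;" the rollback block is never reached, which is
the bogus surplus counter the commit message refers to.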