From 397d2300b08cdee052053e362018cdb6dd65eea2 Mon Sep 17 00:00:00 2001
From: Christophe Leroy
Date: Thu, 9 May 2019 12:59:38 +0000
Subject: powerpc/32s: fix flush_hash_pages() on SMP

flush_hash_pages() runs with data translation off, so the current
task_struct has to be accessed using its physical address.

Fixes: f7354ccac844 ("powerpc/32: Remove CURRENT_THREAD_INFO and rename TI_CPU")
Cc: stable@vger.kernel.org # v5.1+
Reported-by: Erhard F.
Signed-off-by: Christophe Leroy
Signed-off-by: Michael Ellerman
---
 arch/powerpc/mm/book3s32/hash_low.S | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

(limited to 'arch/powerpc/mm')

diff --git a/arch/powerpc/mm/book3s32/hash_low.S b/arch/powerpc/mm/book3s32/hash_low.S
index e27792d0b744..8366c2abeafc 100644
--- a/arch/powerpc/mm/book3s32/hash_low.S
+++ b/arch/powerpc/mm/book3s32/hash_low.S
@@ -539,7 +539,8 @@ _GLOBAL(flush_hash_pages)
 #ifdef CONFIG_SMP
 	lis	r9, (mmu_hash_lock - PAGE_OFFSET)@ha
 	addi	r9, r9, (mmu_hash_lock - PAGE_OFFSET)@l
-	lwz	r8,TASK_CPU(r2)
+	tophys	(r8, r2)
+	lwz	r8, TASK_CPU(r8)
 	oris	r8,r8,9
 10:	lwarx	r0,0,r9
 	cmpi	0,r0,0
-- cgit
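To make the one-line rationale concrete: while flush_hash_pages() runs with MSR[DR] (data translation) cleared, every address the CPU issues is treated as physical, so the current task pointer held in r2 (a linear-map virtual address) has to be converted to its physical equivalent before TASK_CPU can be loaded through it; that conversion is what the tophys macro performs. The C sketch below only illustrates that relationship; the PAGE_OFFSET value and the example pointer value are assumptions for the illustration, not taken from the patch.

#include <stdint.h>
#include <stdio.h>

/* Illustrative value: the usual start of the ppc32 kernel linear map.
 * The real kernel derives this from CONFIG_PAGE_OFFSET. */
#define PAGE_OFFSET 0xc0000000UL

/* Rough C equivalent of the tophys asm macro: a linear-map virtual address
 * maps to physical memory at a fixed offset, so the translation is a
 * simple subtraction. */
static uintptr_t tophys_sketch(uintptr_t vaddr)
{
	return vaddr - PAGE_OFFSET;
}

int main(void)
{
	/* Made-up value standing in for r2 (the current task_struct pointer). */
	uintptr_t current_vaddr = 0xc1234560UL;

	/* With translation off, loading TASK_CPU directly through the virtual
	 * address would target physical 0xc1234560, which is not where the
	 * task_struct actually lives; the fix translates the pointer first. */
	printf("virtual %#lx -> physical %#lx\n",
	       (unsigned long)current_vaddr,
	       (unsigned long)tophys_sketch(current_vaddr));
	return 0;
}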
From 7338874c337f01dc84597a5500a588732725ffc6 Mon Sep 17 00:00:00 2001
From: Michael Ellerman
Date: Tue, 14 May 2019 23:00:58 +1000
Subject: powerpc/mm: Fix crashes with hugepages & 4K pages

The recent commit to clean up ifdefs in the hugepage initialisation led
to crashes when using 4K pages, as reported by Sachin:

  BUG: Kernel NULL pointer dereference at 0x0000001c
  Faulting instruction address: 0xc000000001d1e58c
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE PAGE_SIZE=4K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
  ...
  CPU: 3 PID: 4635 Comm: futex_wake04 Tainted: G W O 5.1.0-next-20190507-autotest #1
  NIP:  c000000001d1e58c LR: c000000001d1e54c CTR: 0000000000000000
  REGS: c000000004937890 TRAP: 0300  MSR: 8000000000009033  CR: 22424822  XER: 00000000
  CFAR: c00000000183e9e0 DAR: 000000000000001c DSISR: 40000000 IRQMASK: 0
  ...
  NIP kmem_cache_alloc+0xbc/0x5a0
  LR  kmem_cache_alloc+0x7c/0x5a0
  Call Trace:
    huge_pte_alloc+0x580/0x950
    hugetlb_fault+0x9a0/0x1250
    handle_mm_fault+0x490/0x4a0
    __do_page_fault+0x77c/0x1f00
    do_page_fault+0x28/0x50
    handle_page_fault+0x18/0x38

This is caused by us trying to allocate from a NULL kmem cache in
__hugepte_alloc(). The kmem cache is NULL because it was never
allocated in hugetlbpage_init(), because add_huge_page_size() returned
an error.

The reason add_huge_page_size() returned an error is a simple typo: we
are calling check_and_get_huge_psize(size) when we should be passing
shift instead.

The fact that we're able to trigger this path when the kmem caches are
NULL is a separate bug, ie. we should not advertise any hugepage sizes
if we haven't set up the required caches for them.

This was only seen with 4K pages; with 64K pages we don't need to
allocate any extra kmem caches because the 16M hugepage just occupies a
single entry at the PMD level.

Fixes: 723f268f19da ("powerpc/mm: cleanup ifdef mess in add_huge_page_size()")
Reported-by: Sachin Sant
Tested-by: Sachin Sant
Signed-off-by: Michael Ellerman
Reviewed-by: Christophe Leroy
Reviewed-by: Aneesh Kumar K.V
---
 arch/powerpc/mm/hugetlbpage.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'arch/powerpc/mm')

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index c5c9ff2d7afc..b5d92dc32844 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -556,7 +556,7 @@ static int __init add_huge_page_size(unsigned long long size)
 	if (size <= PAGE_SIZE || !is_power_of_2(size))
 		return -EINVAL;
 
-	mmu_psize = check_and_get_huge_psize(size);
+	mmu_psize = check_and_get_huge_psize(shift);
 	if (mmu_psize < 0)
 		return -EINVAL;
 
-- cgit
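The size-versus-shift mix-up is easy to see in isolation: the supported hugepage sizes are keyed by shift (log2 of the size in bytes), so a raw byte count such as 16777216 never matches, the lookup fails, and add_huge_page_size() bails out before the kmem cache for that size is ever created. The sketch below is a stand-alone illustration of that relationship; supported_shifts and get_psize_sketch() are invented stand-ins, not the kernel's mmu_psize_defs or check_and_get_huge_psize().

#include <stdio.h>

/* Invented stand-in for the kernel's supported-hugepage-size table:
 * keyed by shift (log2 of the page size), not by the size itself. */
static const unsigned int supported_shifts[] = { 24, 34 };   /* e.g. 16M, 16G */

/* Hypothetical equivalent of check_and_get_huge_psize(): returns an index
 * for a supported shift, or -1 (cf. -EINVAL) otherwise. */
static int get_psize_sketch(unsigned int shift)
{
	for (unsigned int i = 0; i < sizeof(supported_shifts) / sizeof(supported_shifts[0]); i++)
		if (supported_shifts[i] == shift)
			return (int)i;
	return -1;
}

int main(void)
{
	unsigned long long size = 16ULL << 20;   /* a 16M hugepage */
	unsigned int shift = 0;

	while ((1ULL << shift) < size)           /* shift = log2(size) = 24 */
		shift++;

	/* The typo: the raw size (16777216) is never a valid shift, so the
	 * lookup always fails and the hugepage size is dropped. */
	printf("lookup(size)  = %d\n", get_psize_sketch((unsigned int)size));
	/* The fix: pass the shift (24), which the table actually understands. */
	printf("lookup(shift) = %d\n", get_psize_sketch(shift));
	return 0;
}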