path: root/arch/powerpc/include
Age | Commit message | Author | Files | Lines
2022-05-19powerpc/code-patching: Inline is_offset_in_{cond}_branch_range()Christophe Leroy1-2/+27
The tests in is_offset_in_branch_range() and is_offset_in_cond_branch_range() are simple and worth inlining. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/a05be0ccb7373e6a9789a1988fcd0c810f5f9269.1652074503.git.christophe.leroy@csgroup.eu
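For reference, a minimal sketch of what such inlined range checks look like; the numeric bounds below encode the PowerPC I-form (26-bit, roughly ±32 MB) and B-form (16-bit, roughly ±32 KB) word-aligned branch displacements and are reconstructed here rather than quoted from the patch:

```c
#include <stdbool.h>

/* Unconditional branch (I-form): 24-bit immediate, shifted left by 2. */
static inline bool is_offset_in_branch_range(long offset)
{
	return offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3);
}

/* Conditional branch (B-form): 14-bit immediate, shifted left by 2. */
static inline bool is_offset_in_cond_branch_range(long offset)
{
	return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
}
```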
2022-05-19powerpc/signal: Report minimum signal frame size to userspace via AT_MINSIGSTKSZNicholas Piggin3-2/+21
Implement the AT_MINSIGSTKSZ AUXV entry, allowing userspace to dynamically size stack allocations in a manner forward-compatible with new processor state saved in the signal frame. For now these statically find the maximum signal frame size rather than doing any runtime testing of features to minimise the size. glibc 2.34 will take advantage of this, as will applications that use _SC_MINSIGSTKSZ and _SC_SIGSTKSZ. Reviewed-by: Tulio Magno Quites Machado Filho <[email protected]> Signed-off-by: Nicholas Piggin <[email protected]> References: 94b07c1f8c39 ("arm64: signal: Report signal frame size to userspace via auxv") Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
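A userspace sketch of how the new entry can be consumed; the AT_MINSIGSTKSZ fallback value and the extra headroom are assumptions, not part of the patch:

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/auxv.h>

#ifndef AT_MINSIGSTKSZ
#define AT_MINSIGSTKSZ 51	/* assumed constant for older libc headers */
#endif

int main(void)
{
	/* getauxval() returns 0 if the running kernel does not provide the entry. */
	unsigned long min = getauxval(AT_MINSIGSTKSZ);
	size_t sz = (min ? min : (unsigned long)MINSIGSTKSZ) + 4096; /* headroom is arbitrary */

	stack_t ss = { .ss_sp = malloc(sz), .ss_size = sz, .ss_flags = 0 };
	if (!ss.ss_sp || sigaltstack(&ss, NULL))
		return 1;

	printf("AT_MINSIGSTKSZ=%lu, installed a %zu byte alternate signal stack\n", min, sz);
	return 0;
}
```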
2022-05-19powerpc/64: Bump SIGSTKSZ and MINSIGSTKSZNicholas Piggin1-0/+5
The sad tale of SIGSTKSZ and MINSIGSTKSZ is documented in glibc.git commit f7c399cff5bd ("PowerPC SIGSTKSZ"), which explains why glibc does not use the kernel defines for these constants. Since then in fact there has been a further expansion of the signal stack frame size on little-endian with linux commit 573ebfa6601f ("powerpc: Increase stack redzone for 64-bit userspace to 512 bytes"), which has caused it to exceed even the glibc defines. See kernel commit 63dee5df43a3 ("powerpc: Allow 4224 bytes of stack expansion for the signal frame") for more details of the history of the expansion. Increase MINSIGSTKSZ to 8192 which is double the current glibc value and fits the current stack frame with room to grow. SIGSTKSZ is set to 4x the minimum as convention. glibc will have to be updated as well. Reviewed-by: Tulio Magno Quites Machado Filho <[email protected]> Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-19Merge branch 'topic/ppc-kvm' into nextMichael Ellerman6-30/+11
Merge our KVM topic branch.
2022-05-19KVM: PPC: Book3s: Remove real mode interrupt controller hcalls handlersAlexey Kardashevskiy1-7/+0
Currently we have two sets of interrupt controller hypercall handlers, for real and virtual modes; this dates from POWER8 times when switching the MMU on was considered an expensive operation. POWER9 however does not have dependent threads and the MMU is enabled for handling hcalls, so the XIVE native or XICS-on-XIVE real mode handlers never execute on real P9 and later CPUs. This untemplates the handlers, keeps only the real mode handlers for native XICS (up to POWER8), and removes the rest of the dead code. The changes to the functions are mechanical, except for a few missing empty lines added to make checkpatch.pl happy. The default implemented hcalls list already contains XICS hcalls so no change there. This should not cause any behavioral change. Reported-by: kernel test robot <[email protected]> Signed-off-by: Alexey Kardashevskiy <[email protected]> Acked-by: Cédric Le Goater <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-19KVM: PPC: Book3s: Retire H_PUT_TCE/etc real mode handlersAlexey Kardashevskiy3-11/+2
LoPAPR defines a guest visible IOMMU with hypercalls to use it - H_PUT_TCE/etc. These were first implemented on POWER7, where hypercalls would trap into KVM in real mode (with the MMU off). The problem with real mode is that some memory is not available and some API usage crashed the host, but enabling the MMU was an expensive operation. The problems with the real mode handlers are: 1. Occasionally these cannot complete the request so the code is copied+modified to work in the virtual mode, very little is shared; 2. The real mode handlers have to be linked into vmlinux to work; 3. An exception in real mode immediately reboots the machine. If the small DMA window is used, the real mode handlers bring better performance. However since POWER8, there has always been a bigger DMA window which VMs use to map the entire VM memory to avoid calling H_PUT_TCE. Such 1:1 mapping happens once and uses H_PUT_TCE_INDIRECT (a bulk version of H_PUT_TCE) whose virtual mode handler is even closer to its real mode version. On POWER9 hypercalls trap straight to the virtual mode so the real mode handlers never execute on POWER9 and later CPUs. So with the current use of the DMA windows and the MMU improvements in POWER9 and later, there is no point in duplicating the code. 32-bit passed-through devices may slow down but we do not have many of these in practice. For example, with this applied, a 1Gbit ethernet adapter still demonstrates above 800Mbit/s of actual throughput. This removes the real mode handlers from KVM and related code from the powernv platform. This updates the list of implemented hcalls in KVM-HV as the realmode handlers are removed. This changes ABI - kvmppc_h_get_tce() moves to the KVM module and kvmppc_find_table() is static now. Signed-off-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-13mm: change huge_ptep_clear_flush() to return the original pteBaolin Wang1-3/+6
Patch series "Fix CONT-PTE/PMD size hugetlb issue when unmapping or migrating", v4. presently, migrating a hugetlb page or unmapping a poisoned hugetlb page, we'll use ptep_clear_flush() and set_pte_at() to nuke the page table entry and remap it, and this is incorrect for CONT-PTE or CONT-PMD size hugetlb page, which will cause potential data consistent issue. This patch set will change to use hugetlb related APIs to fix this issue. Note: Mike pointed out the huge_ptep_get() will only return the one specific value, and it would not take into account the dirty or young bits of CONT-PTE/PMDs like the huge_ptep_get_and_clear() [1]. This inconsistent issue is not introduced by this patch set, and this issue will be addressed in another thread [2]. Meanwhile the uffd for hugetlb case [3] pointed out by Gerald also needs another patch to address. [1] https://lore.kernel.org/linux-mm/[email protected]/ [2] https://lore.kernel.org/all/[email protected]/ [3] https://lore.kernel.org/linux-mm/20220503120343.6264e126@thinkpad/ This patch (of 3): It is incorrect to use ptep_clear_flush() to nuke a hugetlb page table when unmapping or migrating a hugetlb page, and will change to use huge_ptep_clear_flush() instead in the following patches. So this is a preparation patch, which changes the huge_ptep_clear_flush() to return the original pte to help to nuke a hugetlb page table. [[email protected]: fix build in several more architectures] Link: https://lkml.kernel.org/r/[email protected] [[email protected]: fixup] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/20f77ddab90baa249bd24504c413189b82acde69.1652270205.git.baolin.wang@linux.alibaba.com Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/dcf065868cce35bceaf138613ad27f17bb7c0c19.1652147571.git.baolin.wang@linux.alibaba.com Signed-off-by: Baolin Wang <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Acked-by: Mike Kravetz <[email protected]> Reviewed-by: Muchun Song <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: James Bottomley <[email protected]> Cc: Helge Deller <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Alexander Gordeev <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Rich Felker <[email protected]> Cc: David S. Miller <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Gerald Schaefer <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2022-05-13powerpc: define get_cycles macro for arch-overrideJason A. Donenfeld1-0/+1
PowerPC defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Acked-by: Michael Ellerman <[email protected]> Signed-off-by: Jason A. Donenfeld <[email protected]>
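The "dance" referred to above is the self-referential define that lets generic code probe for an arch override with the preprocessor. A standalone illustration (the values and the fallback are made up):

```c
#include <stdio.h>

/* "arch" side: provide the function and a macro of the same name. */
static inline unsigned long get_cycles(void)
{
	return 42;	/* stand-in for reading the CPU timebase */
}
#define get_cycles get_cycles

/* "generic" side: the fallback is only compiled in when no override was seen. */
#ifndef get_cycles
static inline unsigned long get_cycles(void)
{
	return 0;
}
#endif

int main(void)
{
	printf("get_cycles() = %lu\n", get_cycles());
	return 0;
}
```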
2022-05-13KVM: PPC: Book3S HV P9: Move cede logic out of XIVE escalation rearmingNicholas Piggin1-2/+2
Move the cede abort logic out of xive escalation rearming and into the caller to prepare for handling a similar case with nested guest entry. Signed-off-by: Nicholas Piggin <[email protected]> Reviewed-by: Cédric Le Goater <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-13KVM: PPC: Book3S HV: Remove KVMPPC_NR_LPIDSNicholas Piggin1-3/+0
KVMPPC_NR_LPIDS no longer represents any size restriction on the LPID space and can be removed. A CPU with more than 12 LPID bits implemented will now be able to create more than 4095 guests. Signed-off-by: Nicholas Piggin <[email protected]> Reviewed-by: Fabiano Rosas <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-13KVM: PPC: Book3S Nested: Use explicit 4096 LPID maximumNicholas Piggin1-1/+6
Rather than tie this to KVMPPC_NR_LPIDS which is becoming more dynamic, fix it to 4096 (12-bits) explicitly for now. kvmhv_get_nested() does not have to check against KVM_MAX_NESTED_GUESTS because the L1 partition table registration hcall already did that, and it checks against the partition table size. This patch also puts all the partition table size calculations into the same form, using 12 for the architected size field shift and 4 for the shift corresponding to the partition table entry size. Reviewed-by: Fabiano Rosas <[email protected]> Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-13KVM: PPC: Book3S HV Nested: Change nested guest lookup to use idrNicholas Piggin1-2/+1
This removes the fixed sized kvm->arch.nested_guests array. Signed-off-by: Nicholas Piggin <[email protected]> Reviewed-by: Fabiano Rosas <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-13KVM: PPC: Book3S HV: Update LPID allocator init for POWER9, NestedNicholas Piggin2-3/+1
The LPID allocator init is changed to: - use mmu_lpid_bits rather than hard-coding; - use KVM_MAX_NESTED_GUESTS for nested hypervisors; - not reserve the top LPID on POWER9 and newer CPUs. The reserved LPID is made a POWER7/8-specific detail. Signed-off-by: Nicholas Piggin <[email protected]> Reviewed-by: Fabiano Rosas <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-13KVM: PPC: Remove kvmppc_claim_lpidNicholas Piggin1-1/+0
Removing kvmppc_claim_lpid makes it a bit simpler to change the underlying implementation of the LPID allocator API in a future patch. The host LPID is always 0, so that can be a detail of the allocator. If the allocator range is restricted, that can reserve LPIDs at the top of the range. This allows kvmppc_claim_lpid to be removed. Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-13KVM: PPC: Book3S HV: HFSCR[PREFIX] does not existNicholas Piggin1-1/+0
This facility is controlled by FSCR only. Reserved bits should not be set in the HFSCR register (although it's likely harmless as this position would not be re-used, and the L0 is forgiving here too). Signed-off-by: Nicholas Piggin <[email protected]> Reviewed-by: Fabiano Rosas <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-13powerpc: add asm/stat.h to UAPI compile-test coverageMasahiro Yamada1-5/+5
asm/stat.h is currently excluded from the UAPI compile-test for ARCH=powerpc because of errors like the following: HDRTEST usr/include/asm/stat.h In file included from <command-line>:32: ./usr/include/asm/stat.h:32:2: error: unknown type name 'ino_t' 32 | ino_t st_ino; | ^~~~~ ./usr/include/asm/stat.h:35:2: error: unknown type name 'mode_t' 35 | mode_t st_mode; | ^~~~~~ ./usr/include/asm/stat.h:40:2: error: unknown type name 'uid_t' 40 | uid_t st_uid; | ^~~~~ ./usr/include/asm/stat.h:41:2: error: unknown type name 'gid_t' 41 | gid_t st_gid; | ^~~~~ The errors can be fixed by prefixing the types with __kernel_. Then, remove the no-header-test entry from usr/include/Makefile. Signed-off-by: Masahiro Yamada <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]>
2022-05-11powerpc/code-patching: Use jump_label for testing freed initmemChristophe Leroy1-0/+2
Once init is done, initmem is freed forever so there is no need to test system_state at every call to patch_instruction(). Use a jump_label. This reduces the time needed to activate ftrace on an 8xx by 2%. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/0aee964721cab7316cffde21a2ca223cee14d373.1647962456.git.christophe.leroy@csgroup.eu
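A hedged kernel-style sketch of the jump_label pattern being described; the key name and helper below are illustrative, not the exact code from the patch:

```c
#include <linux/jump_label.h>
#include <asm/sections.h>

/* Illustrative key name; defaults to false while init memory is still present. */
static DEFINE_STATIC_KEY_FALSE(init_mem_is_free);

/* Hypothetical helper: is addr patchable init text that has not been freed yet? */
static bool is_patchable_initmem(unsigned long addr)
{
	/* Once the key is enabled, this test is patched out at the branch
	 * site, so the freed-init check costs a single nop at runtime. */
	if (static_branch_likely(&init_mem_is_free))
		return false;
	return init_section_contains((void *)addr, 4);
}

/* Called once when init memory is released, e.g. from free_initmem(). */
void mark_initmem_freed(void)
{
	static_branch_enable(&init_mem_is_free);
}
```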
2022-05-09powerpc/pgtable: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE for book3sDavid Hildenbrand1-1/+20
Right now, the last 5 bits (0x1f) of the swap entry are used for the type and the bit before that (0x20) is used for _PAGE_SWP_SOFT_DIRTY. We cannot use 0x40, as that collides with _RPAGE_RSV1 -- contained in _PAGE_HPTEFLAGS. The next candidate would be _RPAGE_SW3 (0x200) -- which is used for _PAGE_SOFT_DIRTY for !swp ptes. So let's just use _PAGE_SOFT_DIRTY for _PAGE_SWP_SOFT_DIRTY (to make it easier to grasp) and use 0x20 now for _PAGE_SWP_EXCLUSIVE. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: David Hildenbrand <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Don Dutile <[email protected]> Cc: Gerald Schaefer <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jan Kara <[email protected]> Cc: Jann Horn <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: John Hubbard <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Liang Zhang <[email protected]> Cc: Matthew Wilcox (Oracle) <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Mike Kravetz <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Nadav Amit <[email protected]> Cc: Oded Gabbay <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Pedro Demarchi Gomes <[email protected]> Cc: Peter Xu <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2022-05-09powerpc/pgtable: remove _PAGE_BIT_SWAP_TYPE for book3sDavid Hildenbrand1-7/+5
The swap type is simply stored in bits 0x1f of the swap pte. Let's simplify by just getting rid of _PAGE_BIT_SWAP_TYPE. It's not like that we can simply change it: _PAGE_SWP_SOFT_DIRTY would suddenly fall into _RPAGE_RSV1, which isn't possible and would make the BUILD_BUG_ON(_PAGE_HPTEFLAGS & _PAGE_SWP_SOFT_DIRTY) angry. While at it, make it clearer which bit we're actually using for _PAGE_SWP_SOFT_DIRTY by just using the proper define and introduce and use SWP_TYPE_MASK. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: David Hildenbrand <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Don Dutile <[email protected]> Cc: Gerald Schaefer <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jan Kara <[email protected]> Cc: Jann Horn <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: John Hubbard <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Liang Zhang <[email protected]> Cc: Matthew Wilcox (Oracle) <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Mike Kravetz <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Nadav Amit <[email protected]> Cc: Oded Gabbay <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Pedro Demarchi Gomes <[email protected]> Cc: Peter Xu <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2022-05-08powerpc/8xx: Simplify flush_tlb_kernel_range()Christophe Leroy1-1/+11
In the same spirit as commit 63f501e07a85 ("powerpc/8xx: Simplify TLB handling"), simplify flush_tlb_kernel_range() for 8xx. 8xx cannot be SMP, and has 'tlbie' and 'tlbia' instructions, so an inline version of flush_tlb_kernel_range() for 8xx is worth it. With this patch, the first leg of change_page_attr() is: 2c: 55 29 00 3c rlwinm r9,r9,0,0,30 30: 91 23 00 00 stw r9,0(r3) 34: 7c 00 22 64 tlbie r4,r0 38: 7c 00 04 ac hwsync 3c: 38 60 00 00 li r3,0 40: 4e 80 00 20 blr Before the patch it was: 30: 55 29 00 3c rlwinm r9,r9,0,0,30 34: 91 2a 00 00 stw r9,0(r10) 38: 94 21 ff f0 stwu r1,-16(r1) 3c: 7c 08 02 a6 mflr r0 40: 38 83 10 00 addi r4,r3,4096 44: 90 01 00 14 stw r0,20(r1) 48: 48 00 00 01 bl 48 <change_page_attr+0x48> 48: R_PPC_REL24 flush_tlb_kernel_range 4c: 80 01 00 14 lwz r0,20(r1) 50: 38 60 00 00 li r3,0 54: 7c 08 03 a6 mtlr r0 58: 38 21 00 10 addi r1,r1,16 5c: 4e 80 00 20 blr Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/d2610043419ce3e0e53a85386baf2c3625af5cfb.1647877442.git.christophe.leroy@csgroup.eu
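A sketch of what the inline 8xx variant could look like, consistent with the single tlbie/sync sequence in the listing above; the exact kernel implementation may differ in details such as the range threshold:

```c
static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	start &= PAGE_MASK;

	if (end - start <= PAGE_SIZE)
		/* One page: targeted invalidate, matching the "tlbie r4" above. */
		asm volatile ("tlbie %0; sync" : : "r" (start) : "memory");
	else
		/* Larger ranges: 8xx has tlbia, so just flush everything. */
		asm volatile ("sync; tlbia; isync" : : "memory");
}
```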
2022-05-08powerpc: Use rol32() instead of opencoding in csum_fold()Christophe Leroy1-8/+9
rol32(x, 16) will do the rotate using rlwinm. No need to open-code it using inline assembly. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/794337eff7bb803d2c4e67d9eee635390c4c48fe.1646812553.git.christophe.leroy@csgroup.eu
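A standalone sketch of the fold using rol32(); plain C integer types are used here rather than the kernel's __wsum/__sum16:

```c
#include <stdint.h>
#include <stdio.h>

static inline uint32_t rol32(uint32_t x, unsigned int r)
{
	return (x << r) | (x >> (32 - r));
}

/* Fold a 32-bit ones' complement partial sum into 16 bits. */
static inline uint16_t csum_fold(uint32_t sum)
{
	/* Adding the halfword-swapped value makes any carry out of the low
	 * half land in the high half, so the folded result is simply the
	 * complemented upper 16 bits. */
	return (uint16_t)(~(sum + rol32(sum, 16)) >> 16);
}

int main(void)
{
	printf("0x%04x\n", csum_fold(0x0001fffeu));	/* prints 0x0000 */
	return 0;
}
```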
2022-05-08powerpc: Add missing headersChristophe Leroy1-0/+1
Don't inherit headers "by chance" from asm/prom.h, asm/mpc52xx.h, asm/pci.h etc... Include the needed headers, and remove asm/prom.h when it was needed exclusively for pulling in necessary headers. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/be8bdc934d152a7d8ee8d1a840d5596e2f7d85e0.1646767214.git.christophe.leroy@csgroup.eu
2022-05-05termbits: Convert octal defines to hexIlpo Järvinen1-101/+101
Many archs have termbits.h as octal numbers. It makes it hard for humans to parse the magnitude of large numbers correctly and to compare them with hex ones of the same define. Convert octal values to hex. The first step is an automated conversion with: for i in $(git ls-files | grep 'termbits\.h'); do awk --non-decimal-data '/^#define\s+[A-Z][A-Z0-9]*\s+0[0-9]/ { l=int(((length($3) - 1) * 3 + 3) / 4); repl = sprintf("0x%0" l "x", $3); print gensub(/[^[:blank:]]+/, repl, 3); next} {print}' $i > $i~; mv $i~ $i; done On top of that, some manual processing on alignment and number of zeros. In addition, small tweaks to formatting of a few comments on the same lines. Signed-off-by: Ilpo Järvinen <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Acked-by: Michael Ellerman <[email protected]> (powerpc) Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2022-05-06powerpc: Add missing declaration in asm/drmem.hChristophe Leroy1-0/+3
Don't rely on random inclusion of linux/of.h by users of asm/drmem.h. Add a forward declaration of struct property and struct device_node. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/5643ec410e51b749db0636471cb7979524f9ed0e.1646767214.git.christophe.leroy@csgroup.eu
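The idea, as a short sketch (the function shown is hypothetical, not the real drmem API):

```c
/* asm/drmem.h-style sketch: forward declarations instead of linux/of.h. */
struct device_node;	/* fully defined in linux/of.h */
struct property;	/* fully defined in linux/of.h */

/* Pointers to incomplete types are fine in a declaration; only the .c
 * files that dereference them need the real definitions. */
int drmem_parse_lmbs(struct device_node *dn, const struct property *prop);
```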
2022-05-06powerpc: Include asm/reg.h in asm/svm.hChristophe Leroy1-0/+2
is_secure_guest() uses mfmsr(). Don't rely on users to include asm/reg.h; include it in asm/svm.h. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/482c82c8a29d5fb3ea279b34f107e0e775001344.1646767214.git.christophe.leroy@csgroup.eu
2022-05-06powerpc: Don't include asm/prom.h in asm/parport.hChristophe Leroy1-1/+1
parport.h needs only of_irq.h, no need to go via asm/prom.h Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/ec796ee56cf61f16ba24e62a9d3525d11931538c.1646767214.git.christophe.leroy@csgroup.eu
2022-05-06powerpc/64: Move pci_device_from_OF_node() out of asm/pci-bridge.hChristophe Leroy1-12/+2
Move pci_device_from_OF_node() into pci64.c because it needs the definition of struct device_node and is not worth inlining. PPC32 already has it in pci32.c. That way pci-bridge.h doesn't need linux/of.h (brought in by asm/prom.h via asm/pci.h). Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/3c88286b55413730d7784133993a46ef4a3607ce.1646767214.git.christophe.leroy@csgroup.eu
2022-05-06powerpc: Reduce csum_add() complexity for PPC64Christophe Leroy1-5/+4
PPC64 does everything in C; gcc is able to skip the calculation when one of the operands is zero. Move the constant folding into the PPC32 part. This helps GCC and reduces ppc64_defconfig by 170 bytes. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/a4ca63dd4c4b09e1906d08fb814af5a41d0f3fcb.1644651363.git.christophe.leroy@csgroup.eu
2022-05-06powerpc: Reject probes on instructions that can't be single steppedNaveen N. Rao2-0/+54
Per the ISA, a Trace interrupt is not generated for: - [h|u]rfi[d] - rfscv - sc, scv, and Trap instructions that trap - Power-Saving Mode instructions - other instructions that cause interrupts (other than Trace interrupts) - the first instructions of any interrupt handler (applies to Branch and Single Step tracing; CIABR matches may still occur) - instructions that are emulated by software Add a helper to check for instructions belonging to the first four categories above and to reject kprobes, uprobes and xmon breakpoints on such instructions. We reject probing on instructions belonging to these categories across all ISA versions and across both BookS and BookE. For trap instructions, we can't know in advance if they can cause a trap, and there is no good reason to allow probing on those. Also, uprobes already refuses to probe trap instructions and kprobes does not allow probes on trap instructions used for kernel warnings and bugs. As such, stop allowing any type of probes/breakpoints on trap instruction across uprobes, kprobes and xmon. For some of the fp/altivec instructions that can generate an interrupt and which we emulate in the kernel (altivec assist, for example), we check and turn off single stepping in emulate_single_step(). Instructions generating a DSI are restarted and single stepping normally completes once the instruction is completed. In uprobes, if a single stepped instruction results in a non-fatal signal to be delivered to the task, such signals are "delayed" until after the instruction completes. For fatal signals, single stepping is cancelled and the instruction restarted in-place so that core dump captures proper addresses. In kprobes, we do not allow probes on instructions having an extable entry and we also do not allow probing interrupt vectors. Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/f56ee979d50b8711fae350fc97870f3ca34acd75.1648648712.git.naveen.n.rao@linux.vnet.ibm.com
2022-05-06powerpc: Sort and de-dup primary opcodes in ppc-opcode.hNaveen N. Rao1-38/+31
Some of the primary opcodes are duplicated. Remove those, and sort the rest of the primary opcodes to make it easy to read. Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/a05edf638a2638d708fc2db0272f6317837b5eab.1648648712.git.naveen.n.rao@linux.vnet.ibm.com
2022-05-05powerpc/mm: Convert to default topdown mmap layoutChristophe Leroy1-2/+0
Select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and remove arch/powerpc/mm/mmap.c. This change reuses the generic framework added by commit 67f3977f805b ("arm64, mm: move generic mmap layout functions to mm") without any functional change. Comparison between the powerpc implementation and the generic one: - mmap_is_legacy() is identical. - arch_mmap_rnd() does exactly the same although it's written slightly differently. - MIN_GAP and MAX_GAP are identical. - mmap_base() does the same but uses STACK_RND_MASK which provides the same values as stack_maxrandom_size(). - arch_pick_mmap_layout() is identical. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/518f9def87d3c889d5958103e7463cf45a2f673d.1649523076.git.christophe.leroy@csgroup.eu
2022-05-05powerpc/mm: Move get_unmapped_area functions to slice.cChristophe Leroy2-6/+6
hugetlb_get_unmapped_area() is now identical to the generic version if only RADIX is enabled, so move it to slice.c and let it fallback on the generic one when HASH MMU is not compiled in. Do the same with arch_get_unmapped_area() and arch_get_unmapped_area_topdown(). Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/b5d9c124e82889e0cb115c150915a0c0d84eb960.1649523076.git.christophe.leroy@csgroup.eu
2022-05-05powerpc/mm: Use generic_hugetlb_get_unmapped_area()Christophe Leroy1-4/+0
Use the generic version of arch_hugetlb_get_unmapped_area() which is now available at all times. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/05f77014c619061638ecc52a0a4136eb04cc2799.1649523076.git.christophe.leroy@csgroup.eu
2022-05-05powerpc/mm: Use generic_get_unmapped_area() and call it from arch_get_unmapped_area()Christophe Leroy1-0/+8
Use the generic version of arch_get_unmapped_area() which is now available at all times instead of its copy radix__arch_get_unmapped_area(). To allow that for PPC64, add arch_get_mmap_base() and arch_get_mmap_end() macros. Instead of setting mm->get_unmapped_area() to either arch_get_unmapped_area() or generic_get_unmapped_area(), always set it to arch_get_unmapped_area() and call generic_get_unmapped_area() from there when radix is enabled. Do the same with radix__arch_get_unmapped_area_topdown(). Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/393be1fa386446443682fdb74544d733f68ef3bb.1649523076.git.christophe.leroy@csgroup.eu
2022-05-05powerpc/mm: Remove CONFIG_PPC_MM_SLICESChristophe Leroy2-8/+1
CONFIG_PPC_MM_SLICES is always selected by hash book3s/64. CONFIG_PPC_MM_SLICES is never selected by other platforms. Remove it. Signed-off-by: Christophe Leroy <[email protected]> Reviewed-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/dc2cdc204de8978574bf7c02329b6cfc4db0bce7.1649523076.git.christophe.leroy@csgroup.eu
2022-05-05powerpc/mm: Make slice specific to book3s/64Christophe Leroy4-47/+19
Since commit 555904d07eef ("powerpc/8xx: MM_SLICE is not needed anymore") only book3s/64 selects CONFIG_PPC_MM_SLICES. Move slice.c into mm/book3s64/. Move the necessary bits into asm/book3s/64/slice.h and remove asm/slice.h. Signed-off-by: Christophe Leroy <[email protected]> Reviewed-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/4a0d74ef1966a5902b5fd4ac4b513a760a6d675a.1649523076.git.christophe.leroy@csgroup.eu
2022-05-04powerpc/eeh: Remove unused inline functionsYueHaibing1-4/+0
pseries_eeh_init_edev() is used exclusively in eeh_pseries.c, so make it static and remove the unused inline function. pseries_eeh_init_edev_recursive() is only called from files built with CONFIG_HOTPLUG_PCI_RPA, which depends on CONFIG_PSERIES and CONFIG_EEH, so the unused inline version can be removed. Suggested-by: Christophe Leroy <[email protected]> Signed-off-by: YueHaibing <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-04powerpc/kuap: Remove unused inline function __kuap_assert_locked()YueHaibing1-1/+0
commit 2341964e27b0 ("powerpc/kuap: Remove __kuap_assert_locked()") left behind this one, remove it. Signed-off-by: YueHaibing <[email protected]> Acked-by: Christophe Leroy <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-04powerpc/smp: Remove unused inline functionsYueHaibing1-2/+0
commit 441c19c8a290 ("powerpc/kvm/book3s_hv: Rework the secondary inhibit code") left this behind, so it can be removed. Signed-off-by: YueHaibing <[email protected]> Reviewed-by: Daniel Axtens <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-04powerpc: Fix missing declaration of [en/dis]able_kernel_altivec()Magali Lemes1-0/+9
When CONFIG_PPC64 is set and CONFIG_ALTIVEC is not the following build failures occur: drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/dc_fpu.c: In function 'dc_fpu_begin': >> drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/dc_fpu.c:61:17: error: implicit declaration of function 'enable_kernel_altivec'; did you mean 'enable_kernel_vsx'? [-Werror=implicit-function-declaration] 61 | enable_kernel_altivec(); | ^~~~~~~~~~~~~~~~~~~~~ | enable_kernel_vsx drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/dc_fpu.c: In function 'dc_fpu_end': >> drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/dc_fpu.c:89:17: error: implicit declaration of function 'disable_kernel_altivec'; did you mean 'disable_kernel_vsx'? [-Werror=implicit-function-declaration] 89 | disable_kernel_altivec(); | ^~~~~~~~~~~~~~~~~~~~~~ | disable_kernel_vsx cc1: some warnings being treated as errors This commit adds stub instances of both enable_kernel_altivec() and disable_kernel_altivec() the same way as done in commit bd73758803c2 regarding enable_kernel_vsx() and disable_kernel_vsx(). Reported-by: kernel test robot <[email protected]> Signed-off-by: Magali Lemes <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
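The stub pattern being added looks roughly like this (placement and guards are assumptions based on the commit text):

```c
#ifdef CONFIG_ALTIVEC
extern void enable_kernel_altivec(void);
extern void disable_kernel_altivec(void);
#else
/* No-op stubs so callers still build when AltiVec support is compiled out. */
static inline void enable_kernel_altivec(void) { }
static inline void disable_kernel_altivec(void) { }
#endif
```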
2022-05-04powerpc/eeh: Remove unused inline function eeh_dev_phb_init_dynamic()YueHaibing1-2/+0
commit 475028efc708 ("powerpc/eeh: Remove eeh_dev_phb_init_dynamic()") left this behind, so it can be removed. Signed-off-by: YueHaibing <[email protected]> Reviewed-by: Daniel Axtens <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-05-04powerpc/time: Fix sparse warningsHe Ying1-0/+1
We found these warnings in arch/powerpc/kernel/time.c as follows: warning: symbol 'decrementer_max' was not declared. Should it be static? warning: symbol 'rtc_lock' was not declared. Should it be static? warning: symbol 'dtl_consumer' was not declared. Should it be static? Declare 'decrementer_max' in powerpc asm/time.h. Include linux/mc146818rtc.h in powerpc kernel/time.c where 'rtc_lock' is declared. And remove duplicated declaration of 'rtc_lock' in powerpc platforms/chrp/time.c because it has included linux/mc146818rtc.h. Move 'dtl_consumer' definition after "include <asm/dtl.h>" because it is declared there. Reported-by: Hulk Robot <[email protected]> Signed-off-by: He Ying <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Reviewed-by: Alexandre Belloni <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
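The general shape of the fix for these sparse warnings is to give each global a single declaration visible to both its definition and its users; a sketch, with the type and value shown being illustrative:

```c
/* asm/time.h (declaration seen by everyone) */
extern u64 decrementer_max;

/* kernel/time.c (definition) */
#include <asm/time.h>		/* brings in the declaration above */
#include <linux/mc146818rtc.h>	/* declares rtc_lock, so no local extern is needed */

u64 decrementer_max = 0x7fffffff;	/* illustrative default */
```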
2022-04-28powerpc/mm: enable ARCH_HAS_VM_GET_PAGE_PROTAnshuman Khandual1-12/+0
This defines and exports a platform specific custom vm_get_page_prot() via subscribing ARCH_HAS_VM_GET_PAGE_PROT. While here, this also localizes arch_vm_get_page_prot() as __vm_get_page_prot() and moves it near vm_get_page_prot(). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: David S. Miller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Khalid Aziz <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2022-04-26asm-generic: compat: Cleanup duplicate definitionsGuo Ren1-25/+5
There are 7 64bit architectures that support Linux COMPAT mode to run 32bit applications. A lot of definitions are duplicate: - COMPAT_USER_HZ - COMPAT_RLIM_INFINITY - COMPAT_OFF_T_MAX - __compat_uid_t, __compat_uid_t - compat_dev_t - compat_ipc_pid_t - struct compat_flock - struct compat_flock64 - struct compat_statfs - struct compat_ipc64_perm, compat_semid64_ds, compat_msqid64_ds, compat_shmid64_ds Cleanup duplicate definitions and merge them into asm-generic. Signed-off-by: Guo Ren <[email protected]> Signed-off-by: Guo Ren <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Tested-by: Heiko Stuebner <[email protected]> Acked-by: Helge Deller <[email protected]> # parisc Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Palmer Dabbelt <[email protected]>
2022-04-26fs: stat: compat: Add __ARCH_WANT_COMPAT_STATGuo Ren1-0/+1
RISC-V doesn't need compat_stat, so use __ARCH_WANT_COMPAT_STAT to exclude the unnecessary SYSCALL functions. Signed-off-by: Guo Ren <[email protected]> Signed-off-by: Guo Ren <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Tested-by: Heiko Stuebner <[email protected]> Acked-by: Helge Deller <[email protected]> # parisc Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Palmer Dabbelt <[email protected]>
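A sketch of the opt-in pattern (the powerpc file placement is assumed from the diffstat; the gated syscalls are only hinted at):

```c
/* arch/powerpc/include/asm/unistd.h (assumed location) */
#define __ARCH_WANT_COMPAT_STAT

/* generic code can then gate the compat stat syscalls on the macro: */
#ifdef __ARCH_WANT_COMPAT_STAT
/* compat_sys_newstat(), compat_sys_newlstat(), compat_sys_newfstat(), ... */
#endif
```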
2022-04-26compat: consolidate the compat_flock{,64} definitionChristoph Hellwig1-16/+0
Provide a single common definition for the compat_flock and compat_flock64 structures using the same tricks as for the native variants. Another extra define is added for the packing required on x86. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Guo Ren <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Tested-by: Heiko Stuebner <[email protected]> Acked-by: Helge Deller <[email protected]> # parisc Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Palmer Dabbelt <[email protected]>
2022-04-26uapi: always define F_GETLK64/F_SETLK64/F_SETLKW64 in fcntl.hChristoph Hellwig1-4/+0
The F_GETLK64/F_SETLK64/F_SETLKW64 fcntl opcodes are only implemented for the 32-bit syscall APIs, but are also needed for compat handling on 64-bit kernels. Consolidate them in unistd.h instead of defining the internal compat definitions in compat.h, which is rather error prone (e.g. parisc gets the values wrong currently). Note that before this change they were never visible to userspace due to the fact that CONFIG_64BIT is only set for kernel builds. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Guo Ren <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Tested-by: Heiko Stuebner <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Palmer Dabbelt <[email protected]>
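For reference, the asm-generic values look like this (individual architectures may define different numbers, which is exactly the kind of mismatch the commit mentions for parisc):

```c
#ifndef F_GETLK64
#define F_GETLK64	12	/* uses struct flock64 */
#define F_SETLK64	13
#define F_SETLKW64	14
#endif
```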
2022-04-26powerpc/fadump: save CPU reg data in vmcore when PHYP terminates LPARHari Bathini1-1/+1
An LPAR can be terminated by the POWER Hypervisor (PHYP) for various reasons. If FADump was configured when PHYP terminates the LPAR, a platform-assisted dump is initiated to save the kernel dump. But CPU register data would not be processed/saved in the vmcore in such a case, because the CPU mask is set in crash_fadump() at the time of a kernel crash and it remains unset when the LPAR is terminated abruptly by PHYP. To get around the problem, initialize cpu_mask to cpu_possible_mask so as to ensure all possible CPUs' register data is processed for the vmcore generated on a PHYP terminated LPAR. Also, rename the crash info member variable from online_mask to cpu_mask as it doesn't necessarily have to be the online CPU mask always. Signed-off-by: Hari Bathini <[email protected]> Reviewed-by: Mahesh Salgaonkar <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-04-23powerpc: Remove unused SLOW_DOWN_IO definitionBjorn Helgaas1-2/+0
Remove unused SLOW_DOWN_IO definition. Signed-off-by: Bjorn Helgaas <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-04-18swiotlb: add a SWIOTLB_ANY flag to lift the low memory restrictionChristoph Hellwig2-4/+1
Power SVM wants to allocate a swiotlb buffer that is not restricted to low memory for the trusted hypervisor scheme. Consolidate the support for this into the swiotlb_init interface by adding a new flag. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Konrad Rzeszutek Wilk <[email protected]> Tested-by: Boris Ostrovsky <[email protected]>