path: root/arch/powerpc/include/asm
Age | Commit message | Author | Files | Lines
2022-08-02 | Merge tag 'random-6.0-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random | Linus Torvalds | 2 | -29/+6
Pull random number generator updates from Jason Donenfeld:
 "Though there's been a decent amount of RNG-related development during this last cycle, not all of it is coming through this tree, as this cycle saw a shift toward tackling early boot time seeding issues, which took place in other trees as well.

 Here's a summary of the various patches:

 - The CONFIG_ARCH_RANDOM .config option and the "nordrand" boot option have been removed, as they overlapped with the more widely supported and more sensible options, CONFIG_RANDOM_TRUST_CPU and "random.trust_cpu". This change allowed simplifying a bit of arch code.

 - x86's RDRAND boot time test has been made a bit more robust, with RDRAND disabled if it's clearly producing bogus results. This would be a tip.git commit, technically, but I took it through random.git to avoid a large merge conflict.

 - The RNG has long since mixed in a timestamp very early in boot, on the premise that a computer that does the same things, but does so starting at different points in wall time, could be made to still produce a different RNG state. Unfortunately, the clock isn't set early in boot on all systems, so now we mix in that timestamp when the time is actually set.

 - User Mode Linux now uses the host OS's getrandom() syscall to generate a bootloader RNG seed and later on treats getrandom() as the platform's RDRAND-like faculty.

 - The arch_get_random_{seed_,}_long() family of functions is now arch_get_random_{seed_,}_longs(), which enables certain platforms, such as s390, to exploit considerable performance advantages from requesting multiple CPU random numbers at once, while at the same time compiling down to the same code as before on platforms like x86.

 - A small cleanup changing a cmpxchg() into a try_cmpxchg(), from Uros.

 - A comment spelling fix"

More info about other random number changes that come in through various architecture trees is in the full commentary in the pull request:

  https://lore.kernel.org/all/[email protected]/

* tag 'random-6.0-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
  random: correct spelling of "overwrites"
  random: handle archrandom with multiple longs
  um: seed rng using host OS rng
  random: use try_cmpxchg in _credit_init_bits
  timekeeping: contribute wall clock to rng on time change
  x86/rdrand: Remove "nordrand" flag in favor of "random.trust_cpu"
  random: remove CONFIG_ARCH_RANDOM
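The cmpxchg() to try_cmpxchg() cleanup mentioned above follows a common kernel pattern. A minimal, illustrative sketch of that pattern (loosely modeled on the "credit init bits" idea; the helper name, field layout and POOL_BITS_SKETCH cap are assumptions, not the actual random.c code):

  #include <linux/atomic.h>
  #include <linux/minmax.h>

  #define POOL_BITS_SKETCH 256U

  static void credit_bits_sketch(unsigned int *init_bits, unsigned int add)
  {
          unsigned int new, orig = READ_ONCE(*init_bits);

          do {
                  /* try_cmpxchg() refreshes 'orig' on failure, so no re-read */
                  new = min(orig + add, POOL_BITS_SKETCH);
          } while (!try_cmpxchg(init_bits, &orig, new));
  }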
2022-08-02 | Merge tag 'integrity-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity | Linus Torvalds | 1 | -0/+14
Pull integrity updates from Mimi Zohar:
 "Aside from the one EVM cleanup patch, all the other changes are kexec related.

 On different architectures different keyrings are used to verify the kexec'ed kernel image signature. Here are a number of preparatory cleanup patches and the patches themselves for making the keyrings - builtin_trusted_keyring, .machine, .secondary_trusted_keyring, and .platform - consistent across the different architectures"

* tag 'integrity-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
  kexec, KEYS, s390: Make use of built-in and secondary keyring for signature verification
  arm64: kexec_file: use more system keyrings to verify kernel image signature
  kexec, KEYS: make the code in bzImage64_verify_sig generic
  kexec: clean up arch_kexec_kernel_verify_sig
  kexec: drop weak attribute from functions
  kexec_file: drop weak attribute from functions
  evm: Use IS_ENABLED to initialize .enabled
2022-08-01 | powerpc: drop dependency on <asm/machdep.h> in archrandom.h | Yury Norov | 1 | -8/+1
archrandom.h includes <asm/machdep.h> to refer to ppc_md. This causes a circular header dependency if generic nodemask.h includes random.h:

  In file included from include/linux/cred.h:16,
                   from include/linux/seq_file.h:13,
                   from arch/powerpc/include/asm/machdep.h:6,
                   from arch/powerpc/include/asm/archrandom.h:5,
                   from include/linux/random.h:109,
                   from include/linux/nodemask.h:97,
                   from include/linux/list_lru.h:12,
                   from include/linux/fs.h:13,
                   from include/linux/compat.h:17,
                   from arch/powerpc/kernel/asm-offsets.c:12:
  include/linux/sched.h:1203:9: error: unknown type name 'nodemask_t'
   1203 |         nodemask_t                      mems_allowed;
        |         ^~~~~~~~~~

Fix it by removing the <asm/machdep.h> dependency from archrandom.h.

Now that arch_get_random_seed_long() has moved to a C file and is not exported, it is not available to modules. As Michael Ellerman says:

  I think we actually don't need it exported to modules, I think it's a private detail of the RNG <-> architecture interface, not something that modules should be calling.

CC: Andy Shevchenko <[email protected]>
CC: Benjamin Herrenschmidt <[email protected]>
CC: Michael Ellerman <[email protected]>
CC: Paul Mackerras <[email protected]>
CC: Rasmus Villemoes <[email protected]>
CC: Stephen Rothwell <[email protected]>
CC: [email protected]
Suggested-by: Michael Ellerman <[email protected]> (for non-exporting)
Acked-by: Michael Ellerman <[email protected]>
Signed-off-by: Yury Norov <[email protected]>
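The shape of the fix can be sketched as follows. This is an approximation of the change, not the exact diff; the file placement and fallback behaviour are assumptions based on the description above:

  /* arch/powerpc/include/asm/archrandom.h: no <asm/machdep.h> include,
   * only a prototype remains. */
  bool __must_check arch_get_random_seed_long(unsigned long *v);

  /* A .c file (e.g. setup-common.c): the ppc_md access, and hence the
   * machdep.h include, lives here and is not exported to modules. */
  #include <asm/machdep.h>

  bool __must_check arch_get_random_seed_long(unsigned long *v)
  {
          if (ppc_md.get_random_seed)
                  return ppc_md.get_random_seed(v);
          return false;
  }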
2022-08-01 | Merge branch 'topic/ppc-kvm' into next | Michael Ellerman | 1 | -3/+0
Bring in a few more commits we are keeping in our KVM topic branch.
2022-07-28 | powerpc/powernv: rename remaining rng powernv_ functions to pnv_ | Jason A. Donenfeld | 1 | -1/+1
The preferred nomenclature is pnv_, not powernv_, but rng.c used powernv_ for some reason, which isn't consistent with the rest. A recent commit added a few pnv_ functions to rng.c, making the file a bit of a mishmash. This commit just replaces the rest of them. Fixes: f3eac426657d ("powerpc/powernv: wire up rng during setup_arch") Signed-off-by: Jason A. Donenfeld <[email protected]> Tested-by: Sachin Sant <[email protected]> [mpe: Reorder after bug fix commits] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-07-28 | powerpc/powernv/kvm: Use darn for H_RANDOM on Power9 | Jason A. Donenfeld | 1 | -5/+0
The existing logic in KVM to support guests calling H_RANDOM only works on Power8, because it looks for an RNG in the device tree, but on Power9 we just use darn. In addition the existing code needs to work in real mode, so we have the special cased powernv_get_random_real_mode() to deal with that. Instead just have KVM call ppc_md.get_random_seed(), and do the real mode check inside of there, that way we use whatever RNG is available, including darn on Power9. Fixes: e928e9cb3601 ("KVM: PPC: Book3S HV: Add fast real-mode H_RANDOM implementation.") Cc: [email protected] # v4.1+ Signed-off-by: Jason A. Donenfeld <[email protected]> Tested-by: Sachin Sant <[email protected]> [mpe: Rebase on previous commit, update change log appropriately] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-07-28 | powerpc/pseries: define driver for Platform KeyStore | Nayna Jain | 1 | -0/+11
PowerVM provides an isolated Platform KeyStore (PKS) storage allocation for each LPAR, with individually managed access controls, to store sensitive information securely. It provides a new set of hypervisor calls for the Linux kernel to access PKS storage.

Define the POWER LPAR Platform KeyStore (PLPKS) driver using the H_CALL interface to access PKS storage.

Signed-off-by: Nayna Jain <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2022-07-28 | powerpc/crash: save cpu register data in crash_smp_send_stop() | Hari Bathini | 1 | -0/+1
During kdump, two sets of NMI IPIs are sent to secondary CPUs if the 'crash_kexec_post_notifiers' option is set: the first set to stop the CPUs and the other set to collect register data. Instead, capture register data for the secondary CPUs while stopping them. Also, fall back to smp_send_stop() in case the function gets called without kdump configured.

Signed-off-by: Hari Bathini <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2022-07-28 | powerpc: Finally remove unnecessary headers from asm/prom.h | Christophe Leroy | 1 | -8/+2
Remove all headers included from asm/prom.h which are not used by asm/prom.h itself. Declare struct device_node and struct property locally to avoid including of.h Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/4be954abef978b34cff9193fc566ffefdd3517bb.1657264228.git.christophe.leroy@csgroup.eu
2022-07-28 | powerpc: Remove asm/prom.h from asm/mpc52xx.h and asm/pci.h | Christophe Leroy | 2 | -2/+2
asm/pci.h and asm/mpc52xx.h don't need asm/prom.h Declare struct device_node locally to avoid including of.h Signed-off-by: Christophe Leroy <[email protected]> [mpe: Add missing include of prom.h to of_rtc.c] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/cf5243343e2364c2b40f22ee5ad9a6e2453d1121.1657264228.git.christophe.leroy@csgroup.eu
2022-07-27 | powerpc/44x: Fix build failure with GCC 12 (unrecognized opcode: `wrteei') | Christophe Leroy | 3 | -2/+7
Building ppc40x_defconfig leads to the following error:

  CC      arch/powerpc/kernel/idle.o
  {standard input}: Assembler messages:
  {standard input}:67: Error: unrecognized opcode: `wrteei'
  {standard input}:78: Error: unrecognized opcode: `wrteei'

Add -mcpu=440 by default and alternatively 464 and 476. Once that's done, -mcpu=powerpc is only for book3s/32 now.

But then comes:

  CC      arch/powerpc/kernel/io.o
  {standard input}: Assembler messages:
  {standard input}:198: Error: unrecognized opcode: `eieio'
  {standard input}:230: Error: unrecognized opcode: `eieio'
  {standard input}:245: Error: unrecognized opcode: `eieio'
  {standard input}:254: Error: unrecognized opcode: `eieio'
  {standard input}:273: Error: unrecognized opcode: `eieio'
  {standard input}:396: Error: unrecognized opcode: `eieio'
  {standard input}:404: Error: unrecognized opcode: `eieio'
  {standard input}:423: Error: unrecognized opcode: `eieio'
  {standard input}:512: Error: unrecognized opcode: `eieio'
  {standard input}:520: Error: unrecognized opcode: `eieio'
  {standard input}:539: Error: unrecognized opcode: `eieio'
  {standard input}:628: Error: unrecognized opcode: `eieio'
  {standard input}:636: Error: unrecognized opcode: `eieio'
  {standard input}:655: Error: unrecognized opcode: `eieio'

Fix it by replacing eieio with mbar on booke.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/b0d982e223314ed82ab959f5d4ad2c4c00bedb99.1657549153.git.christophe.leroy@csgroup.eu
2022-07-27 | powerpc/ppc-opcode: Define and use PPC_RAW_SETB() | Christophe Leroy | 1 | -1/+1
We have PPC_INST_SETB and then build the 'setb' instruction at the point of use. Instead, define PPC_RAW_SETB() and use it.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/b08a4f26919a8f8cdcf7544ab552d9c1c63418b5.1657205708.git.christophe.leroy@csgroup.eu
2022-07-27 | powerpc/ppc-opcode: Define and use PPC_RAW_TRAP() and PPC_RAW_TW() | Christophe Leroy | 2 | -1/+4
Add and use PPC_RAW_TRAP() instead of opencoding. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/52c7e522e56a38e3ff0363906919445920005a8f.1657205708.git.christophe.leroy@csgroup.eu
2022-07-27 | powerpc/probes: Remove ppc_opcode_t | Christophe Leroy | 3 | -3/+2
ppc_opcode_t is just a u32. There is no point in hiding u32 behind such a typedef. Remove it.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/b2d762191b095530789ac8b71b167c6740bb6aed.1657205708.git.christophe.leroy@csgroup.eu
2022-07-27 | powerpc: Remove remaining parts of oprofile | Christophe Leroy | 1 | -3/+0
Commit 9850b6c69356 ("arch: powerpc: Remove oprofile") removed oprofile. Remove all remaining parts of it. Signed-off-by: Christophe Leroy <[email protected]> Acked-by: Viresh Kumar <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/298432fe1a14c0a415760011d72c3f0999efd5e2.1657204631.git.christophe.leroy@csgroup.eu
2022-07-27 | powerpc/64s: Remove spurious fault flushing for NMMU | Nicholas Piggin | 1 | -3/+23
Commit 6d8278c414cb2 ("powerpc/64s/radix: do not flush TLB on spurious fault") removed the TLB flush for spurious faults, except when a coprocessor (nest MMU) maps the address space. This is not needed because the NMMU workaround in the PTE permission upgrade paths prevents PTEs existing with less restrictive access permissions than their corresponding TLB entries have. Remove it and replace with a comment. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-07-27 | powerpc/pci: Hide pci_create_OF_bus_map() for non-chrp code | Pali Rohár | 1 | -0/+2
Function pci_create_OF_bus_map() is used only in chrp code. So hide it from all other platforms. Signed-off-by: Pali Rohár <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-07-27 | powerpc/pci: Hide pci_device_from_OF_node() for non-powermac code | Pali Rohár | 1 | -0/+2
Function pci_device_from_OF_node() is used only in powermac code. So hide it from all other platforms. Signed-off-by: Pali Rohár <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-07-27 | powerpc/watchdog: introduce a NMI watchdog's factor | Laurent Dufour | 1 | -0/+2
Introduce a factor to apply to the NMI watchdog timeout. This factor is a percentage added to the watchdog_thresh value. The value is set under watchdog_mutex protection, and lockup_detector_reconfigure() is called to recompute wd_panic_timeout_tb. Once the factor is set, it remains in effect until it is set back to 0, which means no impact.

Signed-off-by: Laurent Dufour <[email protected]>
Reviewed-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
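As a rough illustration of the arithmetic only (the function and variable names below are made up for the example, not the kernel's):

  /* A factor of 0 leaves the timeout untouched; e.g. thresh = 30s and
   * factor = 25 gives an effective timeout of 37500 ms. */
  static unsigned long wd_timeout_ms_sketch(unsigned long thresh_s,
                                            unsigned long factor_pct)
  {
          unsigned long base_ms = thresh_s * 1000;

          return base_ms + (base_ms * factor_pct) / 100;
  }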
2022-07-25 | random: handle archrandom with multiple longs | Jason A. Donenfeld | 1 | -24/+6
The archrandom interface was originally designed for x86, which supplies RDRAND/RDSEED for receiving random words into registers, resulting in one function to generate an int and another to generate a long. However, other architectures don't follow this. On arm64, the SMCCC TRNG interface can return between one and three longs. On s390, the CPACF TRNG interface can return arbitrary amounts, with four longs having the same cost as one. On UML, the os_getrandom() interface can return arbitrary amounts. So change the api signature to take a "max_longs" parameter designating the maximum number of longs requested, and then return the number of longs generated. Since callers need to check this return value and loop anyway, each arch implementation does not bother implementing its own loop to try again to fill the maximum number of longs. Additionally, all existing callers pass in a constant max_longs parameter. Taken together, these two things mean that the codegen doesn't really change much for one-word-at-a-time platforms, while performance is greatly improved on platforms such as s390. Acked-by: Heiko Carstens <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Mark Rutland <[email protected]> Acked-by: Michael Ellerman <[email protected]> Acked-by: Borislav Petkov <[email protected]> Signed-off-by: Jason A. Donenfeld <[email protected]>
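A sketch of how a caller might use the new interface, assuming the signature described above (arch_get_random_longs(unsigned long *v, size_t max_longs) returning how many longs were produced):

  #include <asm/archrandom.h>

  /* Fill 'buf' with up to 'longs' arch-provided random words; stop early
   * if the arch RNG is absent or fails. */
  static size_t fill_arch_random_sketch(unsigned long *buf, size_t longs)
  {
          size_t filled = 0;

          while (filled < longs) {
                  size_t got = arch_get_random_longs(buf + filled, longs - filled);

                  if (!got)
                          break;
                  filled += got;
          }
          return filled;
  }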
2022-07-22 | PCI: Move isa_dma_bridge_buggy out of asm/dma.h | Stafford Horne | 1 | -6/+0
The isa_dma_bridge_buggy symbol is only used for x86_32, and only x86_32 platforms or quirks ever set it.

Add a new linux/isa-dma.h header that #defines isa_dma_bridge_buggy to 0 except on x86_32, where we keep it as a variable, and remove all the arch-specific definitions.

[bhelgaas: commit log]
Suggested-by: Arnd Bergmann <[email protected]>
Suggested-by: Christoph Hellwig <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Stafford Horne <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]>
2022-07-22 | PCI: Remove pci_get_legacy_ide_irq() and asm-generic/pci.h | Stafford Horne | 1 | -1/+0
pci_get_legacy_ide_irq() is only used on platforms that support PNP, so many architectures define it but never use it. Replace uses of it with ATA_PRIMARY_IRQ() and ATA_SECONDARY_IRQ(), which provide the same functionality.

Since pci_get_legacy_ide_irq() is no longer used, remove all the architecture-specific definitions of it as well as asm-generic/pci.h, which only provides pci_get_legacy_ide_irq().

[bhelgaas: commit log]
Co-developed-by: Arnd Bergmann <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Stafford Horne <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]>
Acked-by: Pierre Morel <[email protected]>
Acked-by: Rafael J. Wysocki <[email protected]>
2022-07-21 | mmu_gather: Remove per arch tlb_{start,end}_vma() | Peter Zijlstra | 1 | -2/+0
Scattered across the archs are 3 basic forms of tlb_{start,end}_vma(). Provide two new MMU_GATHER_ knobs to enumerate them and remove the per-arch tlb_{start,end}_vma() implementations.

- MMU_GATHER_NO_FLUSH_CACHE indicates the arch has flush_cache_range() but does *NOT* want to call it for each VMA.

- MMU_GATHER_MERGE_VMAS indicates the arch wants to merge the invalidate across multiple VMAs if possible.

With these it is possible to capture the three forms:

  1) empty stubs; select MMU_GATHER_NO_FLUSH_CACHE and MMU_GATHER_MERGE_VMAS

  2) start: flush_cache_range(), end: empty; select MMU_GATHER_MERGE_VMAS

  3) start: flush_cache_range(), end: flush_tlb_range(); default

Obviously, if the architecture does not have flush_cache_range() then it also doesn't need to select MMU_GATHER_NO_FLUSH_CACHE.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: David Miller <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
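A rough sketch of how the generic header can derive the three forms from the two knobs (simplified; the real mmu_gather code has additional bookkeeping):

  static inline void tlb_start_vma(struct mmu_gather *tlb,
                                   struct vm_area_struct *vma)
  {
  #ifndef CONFIG_MMU_GATHER_NO_FLUSH_CACHE
          /* Forms 2 and 3: flush the cache range per VMA. */
          flush_cache_range(vma, vma->vm_start, vma->vm_end);
  #endif
  }

  static inline void tlb_end_vma(struct mmu_gather *tlb,
                                 struct vm_area_struct *vma)
  {
  #ifndef CONFIG_MMU_GATHER_MERGE_VMAS
          /* Form 3: invalidate per VMA instead of merging across VMAs. */
          flush_tlb_range(vma, vma->vm_start, vma->vm_end);
  #endif
  }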
2022-07-20 | KVM: PPC: Book3s HV: Remove unused function kvmppc_bad_interrupt | Murilo Opsfelder Araujo | 1 | -1/+0
The commit fae5c9f3664b ("KVM: PPC: Book3S HV: remove ISA v3.0 and v3.1 support from P7/8 path") removed the last reference to the function. Fixes: fae5c9f3664b ("KVM: PPC: Book3S HV: remove ISA v3.0 and v3.1 support from P7/8 path") Signed-off-by: Murilo Opsfelder Araujo <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-07-20 | KVM: PPC: Book3S HV: Remove kvmhv_p9_[set,restore]_lpcr declarations | Murilo Opsfelder Araujo | 1 | -2/+0
The commit b1b1697ae0cc ("KVM: PPC: Book3S HV: Remove support for running HPT guest on RPT host without mixed mode support") removed the last references to these functions. Fixes: b1b1697ae0cc ("KVM: PPC: Book3S HV: Remove support for running HPT guest on RPT host without mixed mode support") Signed-off-by: Murilo Opsfelder Araujo <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-07-20 | powerpc/pseries: add FW_FEATURE_WATCHDOG flag | Scott Cheloha | 1 | -1/+2
PAPR v2.12 specifies a new optional function set, "hcall-watchdog", for the /rtas/ibm,hypertas-functions property. The presence of this function set indicates support for the H_WATCHDOG hypercall. Check for this function set and, if present, set the new FW_FEATURE_WATCHDOG flag. Signed-off-by: Scott Cheloha <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-07-20 | powerpc/pseries: hvcall.h: add H_WATCHDOG opcode, H_NOOP return code | Scott Cheloha | 1 | -1/+3
PAPR v2.12 defines a new hypercall, H_WATCHDOG. The hypercall permits guest control of one or more virtual watchdog timers. Add the opcode for the H_WATCHDOG hypercall to hvcall.h. While here, add a definition for H_NOOP, a possible return code for H_WATCHDOG. Signed-off-by: Scott Cheloha <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
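Taken together with the FW_FEATURE_WATCHDOG flag from the previous entry, a guest-side caller might look roughly like this. The argument layout is schematic only (the real layout is defined by PAPR v2.12) and the helper name is made up:

  #include <linux/errno.h>
  #include <asm/firmware.h>
  #include <asm/hvcall.h>

  /* Schematic: operation / watchdog number / timeout arguments are
   * placeholders, not the PAPR-defined register layout. */
  static long hwatchdog_call_sketch(unsigned long op, unsigned long wdt_num,
                                    unsigned long timeout_ms)
  {
          if (!firmware_has_feature(FW_FEATURE_WATCHDOG))
                  return -ENODEV;

          return plpar_hcall_norets(H_WATCHDOG, op, wdt_num, timeout_ms);
  }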
2022-07-18 | random: remove CONFIG_ARCH_RANDOM | Jason A. Donenfeld | 2 | -5/+0
When RDRAND was introduced, there was much discussion on whether it should be trusted and how the kernel should handle that. Initially, two mechanisms cropped up: CONFIG_ARCH_RANDOM, a compile-time switch, and "nordrand", a boot-time switch.

Later the thinking evolved. With a properly designed RNG, using RDRAND values alone won't harm anything, even if the outputs are malicious. Rather, the issue is whether those values are being *trusted* to be good or not. And so a new set of options were introduced as the real ones that people use -- CONFIG_RANDOM_TRUST_CPU and "random.trust_cpu". With these options, RDRAND is used, but it's not always credited. So in the worst case, it does nothing, and in the best case, maybe it helps.

Along the way, CONFIG_ARCH_RANDOM's meaning got sort of pulled into the center and became something certain platforms force-select.

The old options don't really help with much, and it's a bit odd to have special handling for these instructions when the kernel can deal fine with the existence or untrusted existence or broken existence or non-existence of that CPU capability.

Simplify the situation by removing CONFIG_ARCH_RANDOM and using the ordinary asm-generic fallback pattern instead, keeping the two options that are actually used. This leaves "nordrand" in place for now, as its removal will take a different route.

Acked-by: Michael Ellerman <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Acked-by: Borislav Petkov <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
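The "ordinary asm-generic fallback pattern" amounts to stub helpers that report no arch randomness. A sketch, using the bool-returning signatures that existed at the time of this commit (illustrative, not the exact header):

  /* asm-generic style stubs for architectures without an arch RNG */
  static inline bool __must_check arch_get_random_long(unsigned long *v)
  {
          return false;
  }

  static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
  {
          return false;
  }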
2022-07-17 | powerpc/mm: move protection_map[] inside the platform | Anshuman Khandual | 1 | -19/+1
This moves protection_map[] inside the platform and while here, also enable ARCH_HAS_VM_GET_PAGE_PROT on 32 bit and nohash 64 (aka book3e/64) platforms via DECLARE_VM_GET_PAGE_PROT. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Reviewed-by: Christophe Leroy <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dinh Nguyen <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jeff Dike <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Sam Ravnborg <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: WANG Xuerui <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2022-07-15 | kexec: drop weak attribute from functions | Naveen N. Rao | 1 | -0/+5
Drop the __weak attribute from functions in kexec_core.c:
- machine_kexec_post_load()
- arch_kexec_protect_crashkres()
- arch_kexec_unprotect_crashkres()
- crash_free_reserved_phys_range()

Link: https://lkml.kernel.org/r/c0f6219e03cb399d166d518ab505095218a902dd.1656659357.git.naveen.n.rao@linux.vnet.ibm.com
Signed-off-by: Naveen N. Rao <[email protected]>
Suggested-by: Eric Biederman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Mimi Zohar <[email protected]>
2022-07-15 | kexec_file: drop weak attribute from functions | Naveen N. Rao | 1 | -0/+9
As requested (http://lkml.kernel.org/r/[email protected]), this series converts weak functions in kexec to use the #ifdef approach. Quoting the 3e35142ef99fe ("kexec_file: drop weak attribute from arch_kexec_apply_relocations[_add]") changelog:

: Since commit d1bcae833b32f1 ("ELF: Don't generate unused section symbols") [1], binutils (v2.36+) started dropping section symbols that it thought were unused. This isn't an issue in general, but with kexec_file.c, gcc is placing kexec_arch_apply_relocations[_add] into a separate .text.unlikely section and the section symbol ".text.unlikely" is being dropped. Due to this, recordmcount is unable to find a non-weak symbol in .text.unlikely to generate a relocation record against.

This patch (of 2):

Drop the __weak attribute from functions in kexec_file.c:
- arch_kexec_kernel_image_probe()
- arch_kimage_file_post_load_cleanup()
- arch_kexec_kernel_image_load()
- arch_kexec_locate_mem_hole()
- arch_kexec_kernel_verify_sig()

arch_kexec_kernel_image_load() calls into kexec_image_load_default(), so drop the static attribute for the latter. arch_kexec_kernel_verify_sig() is not overridden by any architecture, so drop the __weak attribute.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/2cd7ca1fe4d6bb6ca38e3283c717878388ed6788.1656659357.git.naveen.n.rao@linux.vnet.ibm.com
Signed-off-by: Naveen N. Rao <[email protected]>
Suggested-by: Eric Biederman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Mimi Zohar <[email protected]>
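The "#ifdef approach" that replaces __weak lets an architecture both declare its override and #define the symbol name, so the generic side only compiles its default when no override exists. A hedged sketch (names taken from the list above, signatures and placement simplified):

  /* arch header (e.g. asm/kexec.h) when the arch provides its own: */
  int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf);
  #define arch_kexec_locate_mem_hole arch_kexec_locate_mem_hole

  /* generic side: default used only when the macro isn't defined. */
  #ifndef arch_kexec_locate_mem_hole
  static inline int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
  {
          return kexec_locate_mem_hole(kbuf);
  }
  #endif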
2022-07-09 | Merge branch 'topic/ppc-kvm' into next | Michael Ellerman | 2 | -1/+23
Merge KVM related commits we are keeping in a topic branch in case of any conflicts with generic KVM changes.
2022-07-09 | Merge branch 'fixes' into next | Michael Ellerman | 1 | -0/+9
Merge our fixes branch. In particular this brings in commit 986481618023 ("powerpc/book3e: Fix PUD allocation size in map_kernel_page()") which fixes a build failure in next, because commit 2db2008e6363 ("powerpc/64e: Rewrite p4d_populate() as a static inline function") depends on it.
2022-06-29 | context_tracking: Split user tracking Kconfig | Frederic Weisbecker | 1 | -1/+1
Context tracking is going to be used not only to track user transitions but also idle/IRQs/NMIs. The user tracking part will then become a separate feature. Prepare Kconfig for that. [ frederic: Apply Max Filippov feedback. ] Signed-off-by: Frederic Weisbecker <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Neeraj Upadhyay <[email protected]> Cc: Uladzislau Rezki <[email protected]> Cc: Joel Fernandes <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Nicolas Saenz Julienne <[email protected]> Cc: Marcelo Tosatti <[email protected]> Cc: Xiongfeng Wang <[email protected]> Cc: Yu Liao <[email protected]> Cc: Phil Auld <[email protected]> Cc: Paul Gortmaker<[email protected]> Cc: Alex Belits <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Nicolas Saenz Julienne <[email protected]> Tested-by: Nicolas Saenz Julienne <[email protected]>
2022-06-29 | powerpc/bpf: Fix use of user_pt_regs in uapi | Naveen N. Rao | 1 | -0/+9
Trying to build a .c file that includes <linux/bpf_perf_event.h>:

  $ cat test_bpf_headers.c
  #include <linux/bpf_perf_event.h>

throws the below error:

  /usr/include/linux/bpf_perf_event.h:14:28: error: field ‘regs’ has incomplete type
     14 |         bpf_user_pt_regs_t regs;
        |                            ^~~~

This is because we typedef bpf_user_pt_regs_t to 'struct user_pt_regs' in arch/powerpc/include/uapi/asm/bpf_perf_event.h, but 'struct user_pt_regs' is not exposed to userspace.

Powerpc has both pt_regs and user_pt_regs structures. However, unlike arm64 and s390, we expose user_pt_regs to userspace as just 'pt_regs'. As such, we should typedef bpf_user_pt_regs_t to 'struct pt_regs' for userspace. Within the kernel though, we want to typedef bpf_user_pt_regs_t to 'struct user_pt_regs'.

Remove arch/powerpc/include/uapi/asm/bpf_perf_event.h so that the uapi/asm-generic version of the header is exposed to userspace. Introduce arch/powerpc/include/asm/bpf_perf_event.h so that we can typedef bpf_user_pt_regs_t to 'struct user_pt_regs' for use within the kernel.

Note that this was not showing up with the bpf selftest build since tools/include/uapi/asm/bpf_perf_event.h didn't include the powerpc variant.

Fixes: a6460b03f945ee ("powerpc/bpf: Fix broken uapi for BPF_PROG_TYPE_PERF_EVENT")
Cc: [email protected] # v4.20+
Signed-off-by: Naveen N. Rao <[email protected]>
[mpe: Use typical naming for header include guard]
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
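The resulting split can be sketched like this (abbreviated; header guards and includes omitted, treat as a schematic of the description above rather than the exact headers):

  /* uapi view (what userspace sees via the asm-generic header): powerpc
   * exposes user_pt_regs to userspace under the name 'pt_regs'. */
  typedef struct pt_regs bpf_user_pt_regs_t;

  /* arch/powerpc/include/asm/bpf_perf_event.h (kernel-internal view): */
  typedef struct user_pt_regs bpf_user_pt_regs_t;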
2022-06-29 | powerpc/64: Drop ppc_inst_as_str() | Michael Ellerman | 1 | -19/+0
The ppc_inst_as_str() macro tries to make printing variable length, aka "prefixed", instructions convenient. It mostly succeeds, but it does hide an on-stack buffer, which triggers stack protector.

More problematically it doesn't compile at all with GCC 12, with -Wdangling-pointer, due to the fact that it returns the char buffer declared inside the macro:

  arch/powerpc/kernel/trace/ftrace.c: In function '__ftrace_modify_call':
  ./include/linux/printk.h:475:44: error: using a dangling pointer to '__str' [-Werror=dangling-pointer=]
    475 | #define printk(fmt, ...) printk_index_wrap(_printk, fmt, ##__VA_ARGS__)
    ...
  arch/powerpc/kernel/trace/ftrace.c:567:17: note: in expansion of macro 'pr_err'
    567 |                 pr_err("Not expected bl: opcode is %s\n", ppc_inst_as_str(op));
        |                 ^~~~~~
  ./arch/powerpc/include/asm/inst.h:156:14: note: '__str' declared here
    156 |         char __str[PPC_INST_STR_LEN];   \
        |              ^~~~~

This could be fixed by having the caller declare the buffer, but in some places there'd need to be two buffers.

In all cases where ppc_inst_as_str() is used the output is not really meant for user consumption, it's almost always indicative of a kernel bug. A simpler solution is to just print the value as an unsigned long. For normal instructions the output is identical. For prefixed instructions the value is printed as a single 64-bit quantity, whereas previously the low half was printed first. But that is good enough for debug output, especially as prefixed instructions will be rare in kernel code in practice.

Old:
  c000000000111170  60420000      ori     r2,r2,0
  c000000000111174  04100001 e580fb00    .long 0xe580fb0004100001

New:
  c00000000010f90c  60420000      ori     r2,r2,0
  c00000000010f910  e580fb0004100001    .long 0xe580fb0004100001

Reported-by: Bagas Sanjaya <[email protected]>
Reported-by: Petr Mladek <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Tested-by: Bagas Sanjaya <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
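The replacement pattern is simply to print the instruction value as an unsigned long. Assuming a helper along the lines of ppc_inst_as_ulong() (shown here as an assumption, not a confirmed API), the ftrace example above becomes roughly:

  /* Sketch: print the (possibly prefixed) instruction as one value. */
  pr_err("Not expected bl: opcode is %08lx\n", ppc_inst_as_ulong(op));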
2022-06-29 | KVM: PPC: Align pt_regs in kvm_vcpu_arch structure | Fabiano Rosas | 1 | -1/+5
The H_ENTER_NESTED hypercall receives as second parameter the address of a region of memory containing the values for the nested guest privileged registers. We currently use the pt_regs structure contained within kvm_vcpu_arch for that end. Most hypercalls that receive a memory address expect that region to not cross a 4K page boundary. We would want H_ENTER_NESTED to follow the same pattern so this patch ensures the pt_regs structure sits within a page. Note: the pt_regs structure is currently 384 bytes in size, so aligning to 512 is sufficient to ensure it will not cross a 4K page and avoids punching too big a hole in struct kvm_vcpu_arch. Signed-off-by: Fabiano Rosas <[email protected]> Signed-off-by: Murilo Opsfelder Araújo <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
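Schematically, the change amounts to aligning the member so the 384-byte structure cannot straddle a 4K page; the field placement and surrounding struct contents shown here are illustrative:

  struct kvm_vcpu_arch {
          /* ... other fields ... */

          /* 384 bytes today; 512-byte alignment keeps it within one 4K
           * page without punching too big a hole in the struct. */
          struct pt_regs regs __aligned(512);

          /* ... other fields ... */
  };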
2022-06-29 | KVM: PPC: Book3S HV: tracing: Add missing hcall names | Fabiano Rosas | 1 | -0/+8
The kvm_trace_symbol_hcall macro is missing several of the hypercalls defined in hvcall.h. Add the most common ones that are issued during guest lifetime, including the ones that are only used by QEMU and SLOF. Signed-off-by: Fabiano Rosas <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-06-29 | KVM: PPC: Book3S HV: Provide more detailed timings for P9 entry path | Fabiano Rosas | 1 | -5/+7
Alter the data collection points for the debug timing code in the P9 path to be more in line with what the code does.

The points where we accumulate time are now the following:

  vcpu_entry:  From vcpu_run_hv entry until the start of the inner loop;
  guest_entry: From the start of the inner loop until the guest entry in asm;
  in_guest:    From the guest entry in asm until the return to KVM C code;
  guest_exit:  From the return into KVM C code until the corresponding hypercall/page fault handling or re-entry into the guest;
  hypercall:   Time spent handling hcalls in the kernel (hcalls can go to QEMU, not accounted here);
  page_fault:  Time spent handling page faults;
  vcpu_exit:   vcpu_run_hv exit (almost no code here currently).

Like before, these are exposed in debugfs in a file called "timings". There are four values:

  - number of occurrences of the accumulation point;
  - total time the vcpu spent in the phase in ns;
  - shortest time the vcpu spent in the phase in ns;
  - longest time the vcpu spent in the phase in ns.

===
Before:

  rm_entry: 53132 16793518 256 4060
  rm_intr: 53132 2125914 22 340
  rm_exit: 53132 24108344 374 2180
  guest: 53132 40980507996 404 9997650
  cede: 0 0 0 0

After:

  vcpu_entry: 34637 7716108 178 4416
  guest_entry: 52414 49365608 324 747542
  in_guest: 52411 40828715840 258 9997480
  guest_exit: 52410 19681717182 826 102496674
  vcpu_exit: 34636 1744462 38 182
  hypercall: 45712 22878288 38 1307962
  page_fault: 992 111104034 568 168688

With just one instruction (hcall):

  vcpu_entry: 1 942 942 942
  guest_entry: 1 4044 4044 4044
  in_guest: 1 1540 1540 1540
  guest_exit: 1 3542 3542 3542
  vcpu_exit: 1 80 80 80
  hypercall: 0 0 0 0
  page_fault: 0 0 0 0
===

Signed-off-by: Fabiano Rosas <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2022-06-29 | KVM: PPC: Book3S HV: Decouple the debug timing from the P8 entry path | Fabiano Rosas | 1 | -0/+8
We are currently doing the timing for debug purposes of the P9 entry path using the accumulators and terminology defined by the old entry path for P8 machines. Not only the "real-mode" and "napping" mentions are out of place for the P9 Radix entry path but also we cannot change them because the timing code is coupled to the structures defined in struct kvm_vcpu_arch. Add a new CONFIG_KVM_BOOK3S_HV_P9_TIMING to enable the timing code for the P9 entry path. For now, just add the new CONFIG and duplicate the structures. A subsequent patch will add the P9 changes. Signed-off-by: Fabiano Rosas <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-06-29 | powerpc/64e: KASAN Full support for BOOK3E/64 | Christophe Leroy | 1 | -1/+12
We now have memory organised in a way that allows implementing KASAN.

Unlike book3s/64, book3e always has translation active so the only thing needed to use KASAN is to setup an early zero shadow mapping just after setting a stack pointer and before calling early_setup().

The memory layout is now as follows:

  +------------------------+  Kernel virtual map end (0xc000200000000000)
  |                        |
  |   16TB of KASAN map    |
  |                        |
  +------------------------+  Kernel KASAN shadow map start
  |                        |
  |    16TB of IO map      |
  |                        |
  +------------------------+  Kernel IO map start
  |                        |
  |   16TB of vmemmap      |
  |                        |
  +------------------------+  Kernel vmemmap start
  |                        |
  |    16TB of vmap        |
  |                        |
  +------------------------+  Kernel virt start (0xc000100000000000)
  |                        |
  |  64TB of linear mem    |
  |                        |
  +------------------------+  Kernel linear (0xc.....)

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/0bef8beda27baf71e3b9e8b13e620fba6e19499b.1656427701.git.christophe.leroy@csgroup.eu
2022-06-29 | powerpc/64e: Reorganise virtual memory | Christophe Leroy | 1 | -3/+4
Reduce the size of the IO map in order to leave the last quarter of the virtual map for KASAN shadow mapping. This gives the following layout:

  +------------------------+  Kernel virtual map end (0xc000200000000000)
  |                        |
  |   16TB (unused)        |
  |                        |
  +------------------------+  Kernel IO map end
  |                        |
  |    16TB of IO map      |
  |                        |
  +------------------------+  Kernel IO map start
  |                        |
  |   16TB of vmemmap      |
  |                        |
  +------------------------+  Kernel vmemmap start
  |                        |
  |    16TB of vmap        |
  |                        |
  +------------------------+  Kernel virt start (0xc000100000000000)
  |                        |
  |  64TB of linear mem    |
  |                        |
  +------------------------+  Kernel linear (0xc.....)

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/54ef01673bf14228106afd629f795c83acb9a00c.1656427701.git.christophe.leroy@csgroup.eu
2022-06-29 | powerpc/64e: Move virtual memory closer to linear memory | Christophe Leroy | 1 | -1/+1
Today nohash/64 has linear memory based at 0xc000000000000000 and virtual memory based at 0x8000000000000000. In order to implement KASAN, we need to regroup both areas.

Move virtual memory to 0xc000100000000000.

This complicates the TLB miss handlers a bit. Until now, a memory region was easily identified with the 4 higher bits of the address:
- 0 ==> User
- c ==> Linear Memory
- 8 ==> Virtual Memory

Now we need to rely on the 20 higher bits, with:
- 0xxxx ==> User
- c0000 ==> Linear Memory
- c0001 ==> Virtual Memory

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/4b225168031449fc34fc7132f3923cc8dc54af60.1656427701.git.christophe.leroy@csgroup.eu
2022-06-29 | powerpc/64e: Remove unused REGION related macros | Christophe Leroy | 1 | -12/+0
Those macros are not used anywhere. Remove them as they are soon going to be wrong and are not worth modifying as they are not used. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/f0efde8cee0924c3991790042b176ac77ad35e1f.1656427701.git.christophe.leroy@csgroup.eu
2022-06-29 | powerpc/64e: Remove MMU_FTR_USE_TLBRSRV and MMU_FTR_USE_PAIRED_MAS | Christophe Leroy | 1 | -12/+0
Commit fb5a515704d7 ("powerpc: Remove platforms/wsp and associated pieces") removed the last CPU having features MMU_FTRS_A2 and commit cd68098bcedd ("powerpc: Clean up MMU_FTRS_A2 and MMU_FTR_TYPE_3E") removed MMU_FTRS_A2 which was the last user of MMU_FTR_USE_TLBRSRV and MMU_FTR_USE_PAIRED_MAS. Remove all code that relies on MMU_FTR_USE_TLBRSRV and MMU_FTR_USE_PAIRED_MAS. With this change done, TLB miss can happen before the mmu feature fixups. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/cfd5a0ecdb1598da968832e1bddf7431ec267200.1656427701.git.christophe.leroy@csgroup.eu
2022-06-29 | powerpc/64e: Rewrite p4d_populate() as a static inline function | Christophe Leroy | 1 | -1/+4
Rewrite p4d_populate() as a static inline function instead of a macro. This change allows typechecking and would have helped detecting a recently found bug in map_kernel_page(). Signed-off-by: Christophe Leroy <[email protected]> Acked-by: Mike Rapoport <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/1b416f8a8fe1bc3f4e01175680ce310b7eb3a1e4.1655974565.git.christophe.leroy@csgroup.eu
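A minimal sketch of the macro-to-static-inline conversion; the body shown assumes p4d_set() as the underlying helper, per the nohash/64 headers:

  /* Before (no type checking on the arguments): */
  #define p4d_populate(MM, P4D, PUD)      p4d_set(P4D, (unsigned long)PUD)

  /* After (arguments are now type checked by the compiler): */
  static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)
  {
          p4d_set(p4d, (unsigned long)pud);
  }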
2022-06-29 | powerpc: Remove _PAGE_SAO stub for book3e/64 | Christophe Leroy | 1 | -2/+0
Since commit 634093c59a12 ("powerpc/mm: enable ARCH_HAS_VM_GET_PAGE_PROT"), _PAGE_SAO is used only in arch/powerpc/mm/book3s64/pgtable.c The _PAGE_SAO stub defined as 0 for book3e/64 can be removed. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/715e644fb3c7d992c0b71f6165ab6cf8c682055a.1655706069.git.christophe.leroy@csgroup.eu
2022-06-29 | powerpc/irq: Make __do_irq() static | Christophe Leroy | 1 | -1/+0
Since commit 48cf12d88969 ("powerpc/irq: Inline call_do_irq() and call_do_softirq()"), __do_irq() is not used outside irq.c Reorder functions and make __do_irq() static and drop the declaration in irq.h. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/adbe1c8315ec2d63259f41468e82e51677bb1eda.1654769775.git.christophe.leroy@csgroup.eu
2022-06-29 | powerpc/irq: remove inline assembly in hard_irq_disable macro | Christophe Leroy | 1 | -3/+1
Use WRITE_ONCE() instead of open-coding the saving of the current stack pointer.

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/9f05937d8722ddd2064a7c2362d8f9000e15e1ba.1652863723.git.christophe.leroy@csgroup.eu
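The pattern being replaced can be sketched as follows; the saved_r1 field, the exact asm constraints, and the use of current_stack_pointer are based on the surrounding powerpc code and should be read as illustrative:

  /* Before: open-coded inline assembly storing r1 into the paca. */
  asm volatile("std 1, %0" : "=m" (local_paca->saved_r1));

  /* After: plain C, letting the compiler emit the store. */
  WRITE_ONCE(local_paca->saved_r1, current_stack_pointer);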
2022-06-29 | powerpc/irq: Replace #ifdefs by IS_ENABLED() | Christophe Leroy | 2 | -19/+16
Replace #ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG and #ifdef CONFIG_PERF_EVENTS by IS_ENABLED() in hw_irq.h and plpar_wrappers.h Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/c1ded642f8d9002767f8fed48ed6d1e76254ed73.1652862729.git.christophe.leroy@csgroup.eu
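The conversion follows the standard IS_ENABLED() idiom; a generic before/after sketch (the body of the debug check is a placeholder, not the actual hw_irq.h code):

  /* Before: code compiled out entirely when the option is off. */
  #ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
          do_soft_mask_debug_check();
  #endif

  /* After: always compiled (so it can't bit-rot), optimised away when
   * the option is off. */
          if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
                  do_soft_mask_debug_check();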