path: root/arch/powerpc
Age | Commit message | Author | Files | Lines
2021-06-29mm: replace CONFIG_NEED_MULTIPLE_NODES with CONFIG_NUMAMike Rapoport7-10/+10
After removal of DISCONTIGMEM the NEED_MULTIPLE_NODES and NUMA configuration options are equivalent. Drop CONFIG_NEED_MULTIPLE_NODES and use CONFIG_NUMA instead. Done with

$ sed -i 's/CONFIG_NEED_MULTIPLE_NODES/CONFIG_NUMA/' \
    $(git grep -wl CONFIG_NEED_MULTIPLE_NODES)
$ sed -i 's/NEED_MULTIPLE_NODES/NUMA/' \
    $(git grep -wl NEED_MULTIPLE_NODES)

with manual tweaks afterwards. [[email protected]: fix arm boot crash] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Acked-by: David Hildenbrand <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Matt Turner <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Vineet Gupta <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-06-29arch/powerpc/kvm/book3s: use vma_lookup() in kvmppc_hv_setup_htab_rma()Liam Howlett1-2/+2
Using vma_lookup() removes the requirement to check if the address is within the returned vma. The code is easier to understand and more compact. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Reviewed-by: Laurent Dufour <[email protected]> Acked-by: David Hildenbrand <[email protected]> Acked-by: Davidlohr Bueso <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
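The pattern being replaced looks roughly like the following (a hedged sketch rather than the actual diff; 'kvm->mm', 'hva' and the error label are illustrative):

    /* Before: find_vma() returns the next vma when the address is unmapped,
     * so callers had to re-check that the address really falls inside it. */
    vma = find_vma(kvm->mm, hva);
    if (!vma || vma->vm_start > hva)
        goto out;

    /* After: vma_lookup() only returns a vma that contains the address,
     * so the extra range check disappears. */
    vma = vma_lookup(kvm->mm, hva);
    if (!vma)
        goto out;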
2021-06-29arch/powerpc/kvm/book3s_hv_uvmem: use vma_lookup() instead of find_vma_intersection()Liam Howlett1-1/+1
vma_lookup() finds the vma of a specific address with a cleaner interface and is more readable. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Reviewed-by: Laurent Dufour <[email protected]> Acked-by: David Hildenbrand <[email protected]> Acked-by: Davidlohr Bueso <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-06-28Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvmLinus Torvalds42-1600/+2174
Pull kvm updates from Paolo Bonzini: "This covers all architectures (except MIPS) so I don't expect any other feature pull requests this merge window. ARM: - Add MTE support in guests, complete with tag save/restore interface - Reduce the impact of CMOs by moving them in the page-table code - Allow device block mappings at stage-2 - Reduce the footprint of the vmemmap in protected mode - Support the vGIC on dumb systems such as the Apple M1 - Add selftest infrastructure to support multiple configuration and apply that to PMU/non-PMU setups - Add selftests for the debug architecture - The usual crop of PMU fixes PPC: - Support for the H_RPT_INVALIDATE hypercall - Conversion of Book3S entry/exit to C - Bug fixes S390: - new HW facilities for guests - make inline assembly more robust with KASAN and co x86: - Allow userspace to handle emulation errors (unknown instructions) - Lazy allocation of the rmap (host physical -> guest physical address) - Support for virtualizing TSC scaling on VMX machines - Optimizations to avoid shattering huge pages at the beginning of live migration - Support for initializing the PDPTRs without loading them from memory - Many TLB flushing cleanups - Refuse to load if two-stage paging is available but NX is not (this has been a requirement in practice for over a year) - A large series that separates the MMU mode (WP/SMAP/SMEP etc.) from CR0/CR4/EFER, using the MMU mode everywhere once it is computed from the CPU registers - Use PM notifier to notify the guest about host suspend or hibernate - Support for passing arguments to Hyper-V hypercalls using XMM registers - Support for Hyper-V TLB flush hypercalls and enlightened MSR bitmap on AMD processors - Hide Hyper-V hypercalls that are not included in the guest CPUID - Fixes for live migration of virtual machines that use the Hyper-V "enlightened VMCS" optimization of nested virtualization - Bugfixes (not many) Generic: - Support for retrieving statistics without debugfs - Cleanups for the KVM selftests API" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (314 commits) KVM: x86: rename apic_access_page_done to apic_access_memslot_enabled kvm: x86: disable the narrow guest module parameter on unload selftests: kvm: Allows userspace to handle emulation errors. kvm: x86: Allow userspace to handle emulation errors KVM: x86/mmu: Let guest use GBPAGES if supported in hardware and TDP is on KVM: x86/mmu: Get CR4.SMEP from MMU, not vCPU, in shadow page fault KVM: x86/mmu: Get CR0.WP from MMU, not vCPU, in shadow page fault KVM: x86/mmu: Drop redundant rsvd bits reset for nested NPT KVM: x86/mmu: Optimize and clean up so called "last nonleaf level" logic KVM: x86: Enhance comments for MMU roles and nested transition trickiness KVM: x86/mmu: WARN on any reserved SPTE value when making a valid SPTE KVM: x86/mmu: Add helpers to do full reserved SPTE checks w/ generic MMU KVM: x86/mmu: Use MMU's role to determine PTTYPE KVM: x86/mmu: Collapse 32-bit PAE and 64-bit statements for helpers KVM: x86/mmu: Add a helper to calculate root from role_regs KVM: x86/mmu: Add helper to update paging metadata KVM: x86/mmu: Don't update nested guest's paging bitmasks if CR0.PG=0 KVM: x86/mmu: Consolidate reset_rsvds_bits_mask() calls KVM: x86/mmu: Use MMU role_regs to get LA57, and drop vCPU LA57 helper KVM: x86/mmu: Get nested MMU's root level from the MMU's role ...
2021-06-28Merge tag 'sched-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds3-9/+9
Pull scheduler updates from Ingo Molnar:

- Changes to core scheduling facilities:
   - Add "Core Scheduling" via CONFIG_SCHED_CORE=y, which enables coordinated scheduling across SMT siblings. This is a much requested feature for cloud computing platforms, to allow the flexible utilization of SMT siblings, without exposing untrusted domains to information leaks & side channels, plus to ensure more deterministic computing performance on SMT systems used by heterogeneous workloads. There are new prctls to set core scheduling groups, which allows more flexible management of workloads that can share siblings.
   - Fix task->state access anti-patterns that may result in missed wakeups and rename it to ->__state in the process to catch new abuses.

- Load-balancing changes:
   - Tweak newidle_balance for fair-sched, to improve 'memcache'-like workloads.
   - "Age" (decay) average idle time, to better track & improve workloads such as 'tbench'.
   - Fix & improve energy-aware (EAS) balancing logic & metrics.
   - Fix & improve the uclamp metrics.
   - Fix task migration (taskset) corner case on !CONFIG_CPUSET.
   - Fix RT and deadline utilization tracking across policy changes
   - Introduce a "burstable" CFS controller via cgroups, which allows bursty CPU-bound workloads to borrow a bit against their future quota to improve overall latencies & batching. Can be tweaked via /sys/fs/cgroup/cpu/<X>/cpu.cfs_burst_us.
   - Rework asymmetric topology/capacity detection & handling.

- Scheduler statistics & tooling:
   - Disable delayacct by default, but add a sysctl to enable it at runtime if tooling needs it. Use static keys and other optimizations to make it more palatable.
   - Use sched_clock() in delayacct, instead of ktime_get_ns().

- Misc cleanups and fixes.

* tag 'sched-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (72 commits)
  sched/doc: Update the CPU capacity asymmetry bits
  sched/topology: Rework CPU capacity asymmetry detection
  sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag
  psi: Fix race between psi_trigger_create/destroy
  sched/fair: Introduce the burstable CFS controller
  sched/uclamp: Fix uclamp_tg_restrict()
  sched/rt: Fix Deadline utilization tracking during policy change
  sched/rt: Fix RT utilization tracking during policy change
  sched: Change task_struct::state
  sched,arch: Remove unused TASK_STATE offsets
  sched,timer: Use __set_current_state()
  sched: Add get_current_state()
  sched,perf,kvm: Fix preemption condition
  sched: Introduce task_is_running()
  sched: Unbreak wakeups
  sched/fair: Age the average idle time
  sched/cpufreq: Consider reduced CPU capacity in energy calculation
  sched/fair: Take thermal pressure into account while estimating energy
  thermal/cpufreq_cooling: Update offline CPUs per-cpu thermal_pressure
  sched/fair: Return early from update_tg_cfs_load() if delta == 0
  ...
2021-06-28Merge tag 'perf-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds1-17/+0
Pull perf events updates from Ingo Molnar:

- Platform PMU driver updates:
   - x86 Intel uncore driver updates for Skylake (SNR) and Icelake (ICX) servers
   - Fix RDPMC support
   - Fix [extended-]PEBS-via-PT support
   - Fix Sapphire Rapids event constraints
   - Fix :ppp support on Sapphire Rapids
   - Fix fixed counter sanity check on Alder Lake & X86_FEATURE_HYBRID_CPU
   - Other heterogeneous-PMU fixes

- Kprobes:
   - Remove the unused and misguided kprobe::fault_handler callbacks.
   - Warn about kprobes taking a page fault.
   - Fix the 'nmissed' stat counter.

- Misc cleanups and fixes.

* tag 'perf-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf: Fix task context PMU for Hetero
  perf/x86/intel: Fix instructions:ppp support in Sapphire Rapids
  perf/x86/intel: Add more events requires FRONTEND MSR on Sapphire Rapids
  perf/x86/intel: Fix fixed counter check warning for some Alder Lake
  perf/x86/intel: Fix PEBS-via-PT reload base value for Extended PEBS
  perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
  kprobes: Do not increment probe miss count in the fault handler
  x86,kprobes: WARN if kprobes tries to handle a fault
  kprobes: Remove kprobe::fault_handler
  uprobes: Update uprobe_write_opcode() kernel-doc comment
  perf/hw_breakpoint: Fix DocBook warnings in perf hw_breakpoint
  perf/core: Fix DocBook warnings
  perf/core: Make local function perf_pmu_snapshot_aux() static
  perf/x86/intel/uncore: Enable I/O stacks to IIO PMON mapping on ICX
  perf/x86/intel/uncore: Enable I/O stacks to IIO PMON mapping on SNR
  perf/x86/intel/uncore: Generalize I/O stacks to PMON mapping procedure
  perf/x86/intel/uncore: Drop unnecessary NULL checks after container_of()
2021-06-28Merge tag 'locking-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds3-83/+89
Pull locking updates from Ingo Molnar:

- Core locking & atomics:
   - Convert all architectures to ARCH_ATOMIC: move every architecture to ARCH_ATOMIC, then get rid of ARCH_ATOMIC and all the transitory facilities and #ifdefs. Much reduction in complexity from that series: 63 files changed, 756 insertions(+), 4094 deletions(-)
   - Self-test enhancements

- Futexes:
   - Add the new FUTEX_LOCK_PI2 ABI, which is a variant that doesn't set FLAGS_CLOCKRT (i.e. uses CLOCK_MONOTONIC). [ The temptation to repurpose FUTEX_LOCK_PI's implicit setting of FLAGS_CLOCKRT & invert the flag's meaning to avoid having to introduce a new variant was resisted successfully. ]
   - Enhance futex self-tests

- Lockdep:
   - Fix dependency path printouts
   - Optimize trace saving
   - Broaden & fix wait-context checks

- Misc cleanups and fixes.

* tag 'locking-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (52 commits)
  locking/lockdep: Correct the description error for check_redundant()
  futex: Provide FUTEX_LOCK_PI2 to support clock selection
  futex: Prepare futex_lock_pi() for runtime clock selection
  lockdep/selftest: Remove wait-type RCU_CALLBACK tests
  lockdep/selftests: Fix selftests vs PROVE_RAW_LOCK_NESTING
  lockdep: Fix wait-type for empty stack
  locking/selftests: Add a selftest for check_irq_usage()
  lockding/lockdep: Avoid to find wrong lock dep path in check_irq_usage()
  locking/lockdep: Remove the unnecessary trace saving
  locking/lockdep: Fix the dep path printing for backwards BFS
  selftests: futex: Add futex compare requeue test
  selftests: futex: Add futex wait test
  seqlock: Remove trailing semicolon in macros
  locking/lockdep: Reduce LOCKDEP dependency list
  locking/lockdep,doc: Improve readability of the block matrix
  locking/atomics: atomic-instrumented: simplify ifdeffery
  locking/atomic: delete !ARCH_ATOMIC remnants
  locking/atomic: xtensa: move to ARCH_ATOMIC
  locking/atomic: sparc: move to ARCH_ATOMIC
  locking/atomic: sh: move to ARCH_ATOMIC
  ...
2021-06-26powerpc/interrupt: Use names in check_return_regs_valid()Christophe Leroy1-2/+2
regs->trap == 0x3000 is trap_is_scv(); trap 0x500 is INTERRUPT_EXTERNAL. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/d48bf0184a1de185eb0ed3282247f8a294710674.1624632537.git.christophe.leroy@csgroup.eu
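A hedged illustration of what "use names" means here (not the actual check_return_regs_valid() hunk; handle_it() is a made-up consumer):

    /* Before: compare against raw trap numbers. */
    if (regs->trap == 0x3000 || regs->trap == 0x500)
        handle_it(regs);

    /* After: the named helpers/constants document the intent. */
    if (trap_is_scv(regs) || TRAP(regs) == INTERRUPT_EXTERNAL)
        handle_it(regs);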
2021-06-26powerpc/interrupt: Also use exit_must_hard_disable() on PPC32Christophe Leroy1-5/+3
Reduce #ifdefs a bit by making exit_must_hard_disable() return true on PPC32. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/52531029563c1fc823b790058e799d0ca71b028c.1624631463.git.christophe.leroy@csgroup.eu
2021-06-25Merge tag 'kvmarm-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEADPaolo Bonzini6-38/+79
KVM/arm64 updates for v5.14.

- Add MTE support in guests, complete with tag save/restore interface
- Reduce the impact of CMOs by moving them in the page-table code
- Allow device block mappings at stage-2
- Reduce the footprint of the vmemmap in protected mode
- Support the vGIC on dumb systems such as the Apple M1
- Add selftest infrastructure to support multiple configuration and apply that to PMU/non-PMU setups
- Add selftests for the debug architecture
- The usual crop of PMU fixes
2021-06-26powerpc/sysfs: Replace sizeof(arr)/sizeof(arr[0]) with ARRAY_SIZEJason Wang1-6/+6
The ARRAY_SIZE macro is more compact and more formal in linux source. Signed-off-by: Jason Wang <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
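The transformation is mechanical; a small sketch under assumed names ('attrs' and 'register_one()' are made up for illustration):

    static struct attribute *attrs[8];
    int i;

    /* Before: open-coded element count. */
    for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
        register_one(attrs[i]);

    /* After: ARRAY_SIZE() (linux/kernel.h) expands to the same value but is
     * shorter and refuses to compile if 'attrs' is not a real array. */
    for (i = 0; i < ARRAY_SIZE(attrs); i++)
        register_one(attrs[i]);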
2021-06-26powerpc/ptrace: Refactor regs_set_return_{msr/ip}Christophe Leroy1-8/+2
regs_set_return_msr() and regs_set_return_ip() have a copy of the code of set_return_regs_changed(). Call the latter instead. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/baf64a91557d3811c155616a6aa23ed7b3b21da4.1624619582.git.christophe.leroy@csgroup.eu
2021-06-26powerpc/ptrace: Move set_return_regs_changed() before regs_set_return_{msr/ip}Christophe Leroy1-5/+5
regs_set_return_msr() and regs_set_return_ip() have a copy of the code of set_return_regs_changed(). Move up set_return_regs_changed() so it can be reused by regs_set_return_{msr/ip} Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/49f4fb051a3e1cb69f7305d5b6768aec14727c32.1624619582.git.christophe.leroy@csgroup.eu
2021-06-26powerpc/stacktrace: Fix spurious "stale" traces in raise_backtrace_ipi()Michael Ellerman1-6/+20
In raise_backtrace_ipi() we iterate through the cpumask of CPUs, sending each an IPI asking them to do a backtrace, but we don't wait for the backtrace to happen. We then iterate through the CPU mask again, and if any CPU hasn't done the backtrace and cleared itself from the mask, we print a trace on its behalf, noting that the trace may be "stale". This works well enough when a CPU is not responding, because in that case it doesn't receive the IPI and the sending CPU is left to print the trace. But when all CPUs are responding we are left with a race between the sending and receiving CPUs, if the sending CPU wins the race then it will erroneously print a trace. This leads to spurious "stale" traces from the sending CPU, which can then be interleaved messily with the receiving CPU, note the CPU numbers, eg: [ 1658.929157][ C7] rcu: Stack dump where RCU GP kthread last ran: [ 1658.929223][ C7] Sending NMI from CPU 7 to CPUs 1: [ 1658.929303][ C1] NMI backtrace for cpu 1 [ 1658.929303][ C7] CPU 1 didn't respond to backtrace IPI, inspecting paca. [ 1658.929362][ C1] CPU: 1 PID: 325 Comm: kworker/1:1H Tainted: G W E 5.13.0-rc2+ #46 [ 1658.929405][ C7] irq_soft_mask: 0x01 in_mce: 0 in_nmi: 0 current: 325 (kworker/1:1H) [ 1658.929465][ C1] Workqueue: events_highpri test_work_fn [test_lockup] [ 1658.929549][ C7] Back trace of paca->saved_r1 (0xc0000000057fb400) (possibly stale): [ 1658.929592][ C1] NIP: c00000000002cf50 LR: c008000000820178 CTR: c00000000002cfa0 To fix it, change the logic so that the sending CPU waits 5s for the receiving CPU to print its trace. If the receiving CPU prints its trace successfully then the sending CPU just continues, avoiding any spurious "stale" trace. This has the added benefit of allowing all CPUs to print their traces in order and avoids any interleaving of their output. Fixes: 5cc05910f26e ("powerpc/64s: Wire up arch_trigger_cpumask_backtrace()") Cc: [email protected] # v4.18+ Reported-by: Nathan Lynch <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/pseries/vas: Include irqdomain.hMichael Ellerman1-0/+1
There are patches in flight to break the dependency between asm/irq.h and linux/irqdomain.h, which would break compilation of vas.c because it needs the declaration of irq_create_mapping() etc. So add an explicit include of irqdomain.h to avoid that becoming a problem in future. Reported-by: Stephen Rothwell <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc: mark local variables around longjmp as volatileArnd Bergmann2-13/+13
gcc-11 points out that modifying local variables next to a longjmp/setjmp may cause undefined behavior:

arch/powerpc/kexec/crash.c: In function 'crash_kexec_prepare_cpus.constprop':
arch/powerpc/kexec/crash.c:108:22: error: variable 'ncpus' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/kexec/crash.c:109:13: error: variable 'tries' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c: In function 'xmon_print_symbol':
arch/powerpc/xmon/xmon.c:3625:21: error: variable 'name' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c: In function 'stop_spus':
arch/powerpc/xmon/xmon.c:4057:13: error: variable 'i' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c: In function 'restart_spus':
arch/powerpc/xmon/xmon.c:4098:13: error: variable 'i' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c: In function 'dump_opal_msglog':
arch/powerpc/xmon/xmon.c:3008:16: error: variable 'pos' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c: In function 'show_pte':
arch/powerpc/xmon/xmon.c:3207:29: error: variable 'tsk' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c: In function 'show_tasks':
arch/powerpc/xmon/xmon.c:3302:29: error: variable 'tsk' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c: In function 'xmon_core':
arch/powerpc/xmon/xmon.c:494:13: error: variable 'cmd' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c:860:21: error: variable 'bp' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c:860:21: error: variable 'bp' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/xmon/xmon.c:492:48: error: argument 'fromipi' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]

According to the documentation, marking these as 'volatile' is sufficient to avoid the problem, and it shuts up the warning. Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
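The underlying C rule, shown as a small standalone sketch (an assumed example, not the xmon/kexec code; do_risky_thing() is hypothetical): locals modified between setjmp() and a later longjmp() must be volatile, otherwise their value after the jump is indeterminate.

    #include <setjmp.h>

    static jmp_buf env;

    int count_attempts(void)
    {
        volatile int tries = 0;  /* volatile keeps the value defined across longjmp() */

        if (setjmp(env) == 0) {
            tries++;
            do_risky_thing();    /* may longjmp(env, 1) on failure */
        }
        return tries;
    }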
2021-06-25powerpc/pmu: Make the generic compat PMU use the architected eventsPaul Mackerras1-36/+134
This changes generic-compat-pmu.c so that it only uses architected events defined in Power ISA v3.0B, rather than event encodings which, while common to all the IBM Power Systems implementations, are nevertheless implementation-specific rather than architected. The intention is that any CPU implementation designed to conform to Power ISA v3.0B or later can use generic-compat-pmu.c. In addition to the existing events for cycles and instructions, this adds several other architected events, including alternative encodings for some events. In order to make it possible to measure cycles and instructions at the same time as each other, we set the CC5-6RUN bit in MMCR0, which makes PMC5 and PMC6 count instructions and cycles regardless of the run bit, so their events are now PM_CYC and PM_INST_CMPL rather than PM_RUN_CYC and PM_RUN_INST_CMPL (the latter are still available via other event codes). Note that POWER9 has an erratum where one architected event (PM_FLOP_CMPL, floating-point operations completed, code 0x100f4) does not work correctly. Given that there is a specific PMU driver for P9 which will be used in preference to generic-compat-pmu.c, that is not a real problem. Signed-off-by: Paul Mackerras <[email protected]> Reviewed-by: Madhavan Srinivasan <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/pseries/dlpar: use rtas_get_sensor()Nathan Lynch1-6/+3
Instead of making bare calls to get-sensor-state, use rtas_get_sensor(), which correctly handles busy and extended delay statuses. Fixes: ab519a011caa ("powerpc/pseries: Kernel DLPAR Infrastructure") Signed-off-by: Nathan Lynch <[email protected]> Reviewed-by: Laurent Dufour <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/rtas-rtc: remove unused constantNathan Lynch1-1/+1
RTAS_CLOCK_BUSY is unused, remove it. Signed-off-by: Nathan Lynch <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/papr_scm: trivial: fix typo in a commentKajol Jain1-1/+1
There is a spelling mistake "byes" -> "bytes" in a comment of function drc_pmem_query_stats(). Fix that typo. Signed-off-by: Kajol Jain <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc: Fix is_kvm_guest() / kvm_para_available()Michael Ellerman3-7/+11
Commit a21d1becaa3f ("powerpc: Reintroduce is_kvm_guest() as a fast-path check") added is_kvm_guest() and changed kvm_para_available() to use it. is_kvm_guest() checks a static key, kvm_guest, and that static key is set in check_kvm_guest(). The problem is check_kvm_guest() is only called on pseries, and even then only in some configurations. That means is_kvm_guest() always returns false on all non-pseries and some pseries depending on configuration. That's a bug. For PR KVM guests this is noticeable because they no longer do live patching of themselves, which can be detected by the omission of a message in dmesg such as:

  KVM: Live patching for a fast VM worked

To fix it make check_kvm_guest() an initcall, to ensure it's always called at boot. It needs to be core so that it runs before kvm_guest_init() which is postcore. To be an initcall it needs to return int, where 0 means success, so update that. We still call it manually in pSeries_smp_probe(), because that runs before init calls are run. Fixes: a21d1becaa3f ("powerpc: Reintroduce is_kvm_guest() as a fast-path check") Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
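A hedged sketch of the resulting shape (the device-tree path and compatible string are recalled from context, not quoted from the patch):

    static int __init check_kvm_guest(void)
    {
        struct device_node *hyper_node;

        hyper_node = of_find_node_by_path("/hypervisor");
        if (!hyper_node)
            return 0;

        if (of_device_is_compatible(hyper_node, "linux,kvm"))
            static_branch_enable(&kvm_guest);

        of_node_put(hyper_node);
        return 0;    /* initcalls return 0 on success */
    }
    core_initcall(check_kvm_guest);    /* core: runs before the postcore kvm_guest_init() */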
2021-06-25powerpc/64s: Make prom_init require RELOCATABLEMichael Ellerman2-56/+3
When we boot from open firmware (OF) using PPC_OF_BOOT_TRAMPOLINE, aka. prom_init, we run parts of the kernel at an address other than the link address. That happens because OF loads the kernel above zero (OF is at zero) and we run prom_init before copying the kernel down to zero. Currently that works even for non-relocatable kernels, because we do various fixups to the prom_init code to make it run where it's loaded. However those fixups are not sufficient if the kernel becomes large enough. In that case prom_init()'s final call to __start() can end up generating a plt branch: bl c000000002000018 <00000078.plt_branch.__start> That results in the kernel jumping to the linked address of __start, 0xc000000000000000, when really it needs to jump to the 0xc000000000000000 + the runtime address because the kernel is still running at the load address. We could do further shenanigans to handle that, see Jordan's patch for example: https://lore.kernel.org/linuxppc-dev/[email protected] However it is much simpler to just require a kernel with prom_init() to be built relocatable. The result works in all configurations without further work, and requires less code. This should have no effect on most people, as our defconfigs and essentially all distro configs already have RELOCATABLE enabled. Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/bpf: Use bctrl for making function callsNaveen N. Rao2-8/+8
blrl corrupts the link stack. Instead use bctrl when making function calls from BPF programs. Reported-by: Anton Blanchard <[email protected]> Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/xmon: Add support for running a command on all cpus in xmonNaveen N. Rao1-22/+126
It is sometimes desirable to run a command on all cpus in xmon. A typical scenario is to obtain the backtrace from all cpus in xmon if there is a soft lockup. Add rudimentary support for the same. The command to be run on all cpus should be prefixed with 'c#'. As an example, 'c#t' will run 't' command and produce a backtrace on all cpus in xmon. Since many xmon commands are not sensible for running in this manner, we only allow a predefined list of commands -- 'r', 'S' and 't' for now. Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/configs: Enable STACK_TRACER and FTRACE_SYSCALLS in some of the configsNaveen N. Rao3-0/+5
Both these config options are generally enabled in distro kernels. Enable the same in a few powerpc64 configs to get better coverage and testing. Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/kprobes: Warn if instruction patching failedNaveen N. Rao1-2/+2
When arming and disarming probes, we currently assume that instruction patching can never fail, and don't have a mechanism to surface errors. Add a warning in case instruction patching ever fails. Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/18d7b1309f938c08ce07738100932b551bdd3a52.1621416666.git.naveen.n.rao@linux.vnet.ibm.com
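A minimal sketch of the idea (assumed shape, not the literal hunk): stop discarding the return value of patch_instruction() when writing the breakpoint.

    /* Before: a patching failure is silently ignored. */
    patch_instruction(p->addr, ppc_inst(BREAKPOINT_INSTRUCTION));

    /* After: an unexpected patching failure is at least reported. */
    WARN_ON(patch_instruction(p->addr, ppc_inst(BREAKPOINT_INSTRUCTION)));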
2021-06-25powerpc/kprobes: Roll IS_RFI() macro into IS_RFID()Naveen N. Rao2-6/+5
In kprobes and xmon, we should exclude both 32-bit and 64-bit variants of mtmsr and rfi instructions from being stepped. Have IS_RFID() also detect a rfi instruction similar to IS_MTMSRD(). Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/eee32e1b75dae85d471c89b4c0a123ad4b0aabf8.1621416666.git.naveen.n.rao@linux.vnet.ibm.com
2021-06-25powerpc/papr_scm: Add support for reporting dirty-shutdown-countVaibhav Jain2-0/+36
Persistent memory devices like NVDIMMs can lose cached writes in case something prevents flush on power-fail. Such situations are termed dirty shutdowns and are exposed to applications as a last-shutdown-state (LSS) flag and a dirty-shutdown-counter (DSC), as described at [1]. The latter is useful in conditions where multiple applications want to detect a dirty shutdown event without racing with one another. PAPR-NVDIMMs have so far only exposed LSS style flags to indicate a dirty-shutdown-state. This patch further adds support for DSC via the "ibm,persistence-failed-count" device tree property of an NVDIMM. This property is a monotonically increasing 64-bit counter that indicates the number of times an NVDIMM has encountered a dirty-shutdown event causing persistence loss. Since this value is not expected to change after system boot, papr_scm reads & caches its value during NVDIMM probe and exposes it as a PAPR sysfs attribute named 'dirty_shutdown' to match the similarly named NFIT sysfs attribute. Also this value is available to libnvdimm via the PAPR_PDSM_HEALTH payload. 'struct nd_papr_pdsm_health' has been extended to add a new member called 'dimm_dsc', whose presence is indicated by the newly introduced PDSM_DIMM_DSC_VALID flag. References: [1] https://pmem.io/documents/Dirty_Shutdown_Handling-V1.0.pdf Signed-off-by: Vaibhav Jain <[email protected]> Reviewed-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/papr_scm: Make 'perf_stats' invisible if perf-stats unavailableVaibhav Jain1-11/+24
In case performance stats for an nvdimm are not available, reading the 'perf_stats' sysfs file returns an -ENOENT error. A better approach is to make the 'perf_stats' file entirely invisible to indicate that performance stats for an nvdimm are unavailable. So this patch updates 'papr_nd_attribute_group' to add an 'is_visible' callback, implemented as the newly introduced 'papr_nd_attribute_visible()', that returns an appropriate mode in case performance stats aren't supported in a given nvdimm. Also the initialization of 'papr_scm_priv.stat_buffer_len' is moved from papr_scm_nvdimm_init() to papr_scm_probe() so that its value is available when 'papr_nd_attribute_visible()' is called during nvdimm initialization. Even though the 'perf_stats' attribute has been available since v5.9, there are no known user-space tools/scripts that depend on the presence of its sysfs file. Hence I don't expect any user-space breakage with this patch. Fixes: 2d02bf835e57 ("powerpc/papr_scm: Fetch nvdimm performance stats from PHYP") Signed-off-by: Vaibhav Jain <[email protected]> Reviewed-by: Dan Williams <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
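The mechanism is the attribute group's is_visible hook; a hedged sketch of the pattern (field and attribute names mirror those mentioned above, but the details are assumed):

    static umode_t papr_nd_attribute_visible(struct kobject *kobj,
                                             struct attribute *attr, int n)
    {
        struct device *dev = kobj_to_dev(kobj);
        struct nvdimm *nvdimm = to_nvdimm(dev);
        struct papr_scm_priv *p = nvdimm_provider_data(nvdimm);

        /* Hide 'perf_stats' entirely when the platform gave us no stats. */
        if (attr == &dev_attr_perf_stats.attr && p->stat_buffer_len == 0)
            return 0;

        return attr->mode;
    }

    static struct attribute_group papr_nd_attribute_group = {
        .name       = "papr",
        .is_visible = papr_nd_attribute_visible,
        .attrs      = papr_nd_attributes,
    };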
2021-06-25powerpc/kprobes: Fix Oops by passing ppc_inst as a pointer to emulate_step() on ppc32Naveen N. Rao1-2/+6
Trying to use a kprobe on ppc32 results in the below splat:

BUG: Unable to handle kernel data access on read at 0x7c0802a6
Faulting instruction address: 0xc002e9f0
Oops: Kernel access of bad area, sig: 11 [#1]
BE PAGE_SIZE=4K PowerPC 44x Platform
Modules linked in:
CPU: 0 PID: 89 Comm: sh Not tainted 5.13.0-rc1-01824-g3a81c0495fdb #7
NIP: c002e9f0 LR: c0011858 CTR: 00008a47
REGS: c292fd50 TRAP: 0300 Not tainted (5.13.0-rc1-01824-g3a81c0495fdb)
MSR: 00009000 <EE,ME> CR: 24002002 XER: 20000000
DEAR: 7c0802a6 ESR: 00000000
<snip>
NIP [c002e9f0] emulate_step+0x28/0x324
LR [c0011858] optinsn_slot+0x128/0x10000
Call Trace:
 opt_pre_handler+0x7c/0xb4 (unreliable)
 optinsn_slot+0x128/0x10000
 ret_from_syscall+0x0/0x28

The offending instruction is:
81 24 00 00     lwz     r9,0(r4)

Here, we are trying to load the second argument to emulate_step(): struct ppc_inst, which is the instruction to be emulated. On ppc64, structures are passed in registers when passed by value. However, per the ppc32 ABI, structures are always passed to functions as pointers. This isn't being adhered to when setting up the call to emulate_step() in the optprobe trampoline. Fix the same. Fixes: eacf4c0202654a ("powerpc: Enable OPTPROBES on PPC32") Cc: [email protected] Signed-off-by: Naveen N. Rao <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/5bdc8cbc9a95d0779e27c9ddbf42b40f51f883c0.1624425798.git.christophe.leroy@csgroup.eu
2021-06-24KVM: debugfs: Reuse binary stats descriptorsJing Zhang2-58/+0
To remove code duplication, use the binary stats descriptors in the implementation of the debugfs interface for statistics. This unifies the definition of statistics for the binary and debugfs interfaces. Signed-off-by: Jing Zhang <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2021-06-24KVM: stats: Support binary stats retrieval for a VCPUJing Zhang2-0/+83
Add a VCPU ioctl to get a statistics file descriptor by which a read functionality is provided for userspace to read out VCPU stats header, descriptors and data. Define VCPU statistics descriptors and header for all architectures. Reviewed-by: David Matlack <[email protected]> Reviewed-by: Ricardo Koller <[email protected]> Reviewed-by: Krish Sadhukhan <[email protected]> Reviewed-by: Fuad Tabba <[email protected]> Tested-by: Fuad Tabba <[email protected]> #arm64 Signed-off-by: Jing Zhang <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2021-06-24KVM: stats: Support binary stats retrieval for a VMJing Zhang2-0/+34
Add a VM ioctl to get a statistics file descriptor by which a read functionality is provided for userspace to read out VM stats header, descriptors and data. Define VM statistics descriptors and header for all architectures. Reviewed-by: David Matlack <[email protected]> Reviewed-by: Ricardo Koller <[email protected]> Reviewed-by: Krish Sadhukhan <[email protected]> Reviewed-by: Fuad Tabba <[email protected]> Tested-by: Fuad Tabba <[email protected]> #arm64 Signed-off-by: Jing Zhang <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2021-06-24KVM: stats: Add fd-based API to read binary stats dataJing Zhang1-1/+1
This commit defines the API for userspace and prepares the common functionality to support per-VM/VCPU binary stats data readings. The KVM stats are currently only accessible via debugfs, which has some shortcomings this series is intended to fix:

1. The current debugfs stats solution in KVM could be disabled when kernel Lockdown mode is enabled, which is a potential risk for production.
2. The current debugfs stats solution in KVM is organized as "one stats per file"; it is good for debugging, but not efficient for production.
3. The stats read/clear in the current debugfs solution in KVM are protected by the global kvm_lock.

Besides that, there are some other benefits with this change:

1. All KVM VM/VCPU stats can be read out in a bulk by one copy to userspace.
2. A schema is used to describe KVM statistics. From userspace's perspective, the KVM statistics are self-describing.
3. With the fd-based solution, a separate telemetry would be able to read KVM stats in a less privileged environment.
4. After the initial setup by reading in stats descriptors, a telemetry only needs to read the stats data itself, no more parsing or setup is needed.

Reviewed-by: David Matlack <[email protected]> Reviewed-by: Ricardo Koller <[email protected]> Reviewed-by: Krish Sadhukhan <[email protected]> Reviewed-by: Fuad Tabba <[email protected]> Tested-by: Fuad Tabba <[email protected]> #arm64 Signed-off-by: Jing Zhang <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
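From userspace, the flow against the uapi added in this series looks roughly like the following (a hedged example, not lifted from the selftests; error handling omitted):

    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    static void read_stats_header(int vm_fd)
    {
        struct kvm_stats_header header;
        int stats_fd;

        /* One stats fd per VM (a vCPU fd works the same way). */
        stats_fd = ioctl(vm_fd, KVM_GET_STATS_FD, NULL);

        /* The header at offset 0 describes where the id string, the
         * descriptors and the data block live within the same fd. */
        pread(stats_fd, &header, sizeof(header), 0);

        /* ... read descriptors at header.desc_offset and values at
         * header.data_offset as needed ... */
        close(stats_fd);
    }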
2021-06-24KVM: stats: Separate generic stats from architecture specific onesJing Zhang6-28/+23
Generic KVM stats are those collected in architecture independent code or those supported by all architectures; put all generic statistics in a separate structure. This ensures that they are defined the same way in the statistics API which is being added, removing duplication among different architectures in the declaration of the descriptors. No functional change intended. Reviewed-by: David Matlack <[email protected]> Reviewed-by: Ricardo Koller <[email protected]> Reviewed-by: Krish Sadhukhan <[email protected]> Signed-off-by: Jing Zhang <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2021-06-25powerpc/64s: Fix copy-paste data exposure into newly created tasksNicholas Piggin1-16/+32
copy-paste contains implicit "copy buffer" state that can contain arbitrary user data (if the user process executes a copy instruction). This could be snooped by another process if a context switch hits while the state is live. So cp_abort is executed on context switch to clear out possible sensitive data and prevent the leak. cp_abort is done after the low level _switch(), which means it is never reached by newly created tasks, so they could snoop on this buffer between their first and second context switch. Fix this by doing the cp_abort before calling _switch. Add some comments which should make the issue harder to miss. Fixes: 07d2a628bc000 ("powerpc/64s: Avoid cpabort in context switch when possible") Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/32: Avoid #ifdef nested with FTR_SECTION on booke syscall entryChristophe Leroy1-2/+2
On booke, SYSCALL_ENTRY macro nests an FTR_SECTION with a #ifdef CONFIG_KVM_BOOKE_HV. Duplicate the single instruction alternative to avoid nesting. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/33db61d5f85146262dbe26648f8f87eca3cae393.1622818435.git.christophe.leroy@csgroup.eu
2021-06-25powerpc/32: Reduce code duplication of system call entryChristophe Leroy3-37/+19
booke and non booke do pretty similar things in the SYSCALL_ENTRY macro just before jumping to transfer_to_syscall(). Do them in transfer_to_syscall() instead. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/552e27fa09394a6bc70585fcdfa237f99a5d1267.1622818435.git.christophe.leroy@csgroup.eu
2021-06-25powerpc/32: Interchange r1 and r11 in SYSCALL_ENTRY on bookeChristophe Leroy2-10/+11
To better match non booke version of SYSCALL_ENTRY macro, interchange r1 and r11 in the booke version. While at it, in both versions use r1 instead of r11 to save _NIP and _CCR. All other uses of r11 will go away in next patch, so don't bother changing them for now. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/1684c39724a069b0ce1aa82eaee6ec194e354e4e.1622818435.git.christophe.leroy@csgroup.eu
2021-06-25powerpc/32: Interchange r10 and r12 in SYSCALL_ENTRY on non bookeChristophe Leroy1-17/+17
To better match booke version of SYSCALL_ENTRY macro, interchange r10 and r12 in the non booke version. Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/5ab3a517bc883a2fc905fb2cb5ee9344f37b2cfa.1622818435.git.christophe.leroy@csgroup.eu
2021-06-25powerpc: Remove klimitChristophe Leroy5-10/+5
klimit is a global variable initialised at build time with the value of _end. This variable is never modified, so _end symbol can be used directly. Remove klimit. Signed-off-by: Christophe Leroy <[email protected]> Reviewed-by: Kefeng Wang <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/9fa9ba6807c17f93f35a582c199c646c4a8bfd9c.1622800638.git.christophe.leroy@csgroup.eu
2021-06-25powerpc/mm: Properly coalesce pages in ptdumpChristophe Leroy1-19/+3
Commit aaa229529244 ("powerpc/mm: Add physical address to Linux page table dump") changed range coalescing to only combine ranges that are both virtually and physically contiguous, in order to avoid erroneous combination of unrelated mappings in IOREMAP space. But in the VMALLOC space, mappings almost never have contiguous physical pages, so the commit mentioned above leads to dumping one line per page for vmalloc mappings. Taking into account that vmalloc always leaves a gap between two areas, we never have two mappings dumped as a single combination even if they have the exact same flags. The only space that may have encountered such an issue was the early IOREMAP, which does not use the vmalloc engine. But previous commits added gaps between early IO mappings, so it is not an issue anymore. That commit created some difficulties with KASAN mappings, see commit cabe8138b23c ("powerpc: dump as a single line areas mapping a single physical page.") and with huge pages, see commit b00ff6d8c1c3 ("powerpc/ptdump: Properly handle non standard page size"). So, almost revert commit aaa229529244 to properly coalesce pages mapped with the same flags as before; only keep the display of the first physical address of the range, as it can be useful especially for IO mappings. It brings powerpc back to the same level as other architectures and simplifies the conversion to GENERIC PTDUMP.

With the patch:

---[ kasan shadow mem start ]---
0xf8000000-0xf8ffffff 0x07000000 16M huge rw present dirty accessed
0xf9000000-0xf91fffff 0x01434000 2M r present accessed
0xf9200000-0xf95affff 0x02104000 3776K rw present dirty accessed
0xfef5c000-0xfeffffff 0x01434000 656K r present accessed
---[ kasan shadow mem end ]---

Before:

---[ kasan shadow mem start ]---
0xf8000000-0xf8ffffff 0x07000000 16M huge rw present dirty accessed
0xf9000000-0xf91fffff 0x01434000 16K r present accessed
0xf9200000-0xf9203fff 0x02104000 16K rw present dirty accessed
0xf9204000-0xf9207fff 0x0213c000 16K rw present dirty accessed
0xf9208000-0xf920bfff 0x02174000 16K rw present dirty accessed
0xf920c000-0xf920ffff 0x02188000 16K rw present dirty accessed
0xf9210000-0xf9213fff 0x021dc000 16K rw present dirty accessed
0xf9214000-0xf9217fff 0x02220000 16K rw present dirty accessed
0xf9218000-0xf921bfff 0x023c0000 16K rw present dirty accessed
0xf921c000-0xf921ffff 0x023d4000 16K rw present dirty accessed
0xf9220000-0xf9227fff 0x023ec000 32K rw present dirty accessed
...
0xf93b8000-0xf93e3fff 0x02614000 176K rw present dirty accessed
0xf93e4000-0xf94c3fff 0x027c0000 896K rw present dirty accessed
0xf94c4000-0xf94c7fff 0x0236c000 16K rw present dirty accessed
0xf94c8000-0xf94cbfff 0x041f0000 16K rw present dirty accessed
0xf94cc000-0xf94cffff 0x029c0000 16K rw present dirty accessed
0xf94d0000-0xf94d3fff 0x041ec000 16K rw present dirty accessed
0xf94d4000-0xf94d7fff 0x0407c000 16K rw present dirty accessed
0xf94d8000-0xf94f7fff 0x041c0000 128K rw present dirty accessed
...
0xf95ac000-0xf95affff 0x042b0000 16K rw present dirty accessed
0xfef5c000-0xfeffffff 0x01434000 16K r present accessed
---[ kasan shadow mem end ]---

Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/c56ce1f5c3c75adc9811b1a5f9c410fa74183a8d.1618828806.git.christophe.leroy@csgroup.eu
2021-06-25powerpc/mm: Leave a gap between early allocated IO areasChristophe Leroy2-3/+3
Vmalloc system leaves a gap between allocated areas. It helps catching overflows. Do the same for IO areas which are allocated with early_ioremap_range() until slab_is_available(). Signed-off-by: Christophe Leroy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/c433e358190fb5d47650463ea1ab755fc7b73e6e.1618828806.git.christophe.leroy@csgroup.eu
2021-06-25powerpc/papr_scm: Properly handle UUID types and APIAndy Shevchenko1-9/+18
Parse to and export from the UUID's own type, before dereferencing. This also fixes a wrong comment (a Little Endian UUID is something else) and should eliminate the direct strict-type assignments. Fixes: 43001c52b603 ("powerpc/papr_scm: Use ibm,unit-guid as the iset cookie") Fixes: 259a948c4ba1 ("powerpc/pseries/scm: Use a specific endian format for storing uuid from the device tree") Signed-off-by: Andy Shevchenko <[email protected]> Reviewed-by: Aneesh Kumar K.V <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/pseries: fail quicker in dlpar_memory_add_by_ic()Daniel Henrique Barboza1-8/+6
The validation done at the start of dlpar_memory_add_by_ic() is an all-or-nothing scenario - if any LMB in the range is marked as RESERVED we can fail right away. We can then remove the 'lmbs_available' var and its check against 'lmbs_to_add' since the whole LMB range was already validated in the previous step. Signed-off-by: Daniel Henrique Barboza <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/pseries: break early in dlpar_memory_add_by_count() loopsDaniel Henrique Barboza1-5/+12
After a successful dlpar_add_lmb() call the LMB is marked as reserved. Later on, depending on whether we added enough LMBs or not, we rely on the marked LMBs to see which ones might need to be removed, and we remove the reservation of all of them. This is done in for_each_drmem_lmb() loops without any break condition. This means that we're going to check all LMBs of the partition even after going through all the reserved ones. This patch adds break conditions in both loops to avoid this. The 'lmbs_added' variable was renamed to 'lmbs_reserved', and it's now being decremented each time an LMB reservation is removed, indicating whether there are still marked LMBs to be processed. Signed-off-by: Daniel Henrique Barboza <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
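A hedged sketch of the loop shape after the change (not the literal hunk; it reuses the drmem helpers this code area is built on):

    int lmbs_reserved = lmbs_to_add;    /* assumed bookkeeping from the caller */

    for_each_drmem_lmb(lmb) {
        if (!drmem_lmb_reserved(lmb))
            continue;

        /* ... act on the marked LMB, then drop its reservation ... */
        drmem_remove_lmb_reservation(lmb);

        /* Once every marked LMB has been handled there is no point in
         * walking the remaining LMBs of the partition. */
        if (--lmbs_reserved == 0)
            break;
    }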
2021-06-25powerpc/pseries: skip reserved LMBs in dlpar_memory_add_by_count()Daniel Henrique Barboza1-0/+3
The function is counting reserved LMBs as available to be added, but they aren't. This will cause the function to miscalculate the available LMBs and can trigger errors later on when executing dlpar_add_lmb(). Signed-off-by: Daniel Henrique Barboza <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc: Offline CPU in stop_this_cpu()Nicholas Piggin1-0/+11
printk_safe_flush_on_panic() has special lock breaking code for the case where we panic()ed with the console lock held. It relies on panic IPI causing other CPUs to mark themselves offline. Do as most other architectures do. This effectively reverts commit de6e5d38417e ("powerpc: smp_send_stop do not offline stopped CPUs"), unfortunately it may result in some false positive warnings, but the alternative is more situations where we can crash without getting messages out. Fixes: de6e5d38417e ("powerpc: smp_send_stop do not offline stopped CPUs") Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
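A hedged sketch of the added step (the surrounding function is simplified and the spin loop in the real code differs):

    static void stop_this_cpu(void *dummy)
    {
        /* Mark this CPU offline so panic-time code such as
         * printk_safe_flush_on_panic() stops waiting for it,
         * as most other architectures already do. */
        set_cpu_online(smp_processor_id(), false);

        hard_irq_disable();
        while (1)
            ;
    }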
2021-06-25powerpc: Make PPC_IRQ_SOFT_MASK_DEBUG depend on PPC64Nicholas Piggin1-0/+1
32-bit platforms don't have irq soft masking. Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-25powerpc/64s: Remove irq mask workaround in accumulate_stolen_time()Nicholas Piggin1-11/+0
The caller has been moved to C after irq soft-mask state has been reconciled, and Linux IRQs have been marked as disabled, so this no longer needs to play games with IRQ internals. Fixes: 68b34588e202 ("powerpc/64/sycall: Implement syscall entry/exit logic in C") Signed-off-by: Nicholas Piggin <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]