path: root/arch/arm64/kernel
Age | Commit message | Author | Files | Lines
2020-09-30 | Merge remote-tracking branch 'arm64/for-next/ghostbusters' into kvm-arm64/hyp-pcpu | Marc Zyngier | 10 | -687/+818
Signed-off-by: Marc Zyngier <[email protected]>
2020-09-30 | kvm: arm64: Set up hyp percpu data for nVHE | David Brazdil | 1 | -0/+8
Add a hyp percpu section to the linker script and rename the corresponding ELF sections of hyp/nvhe object files. This moves all nVHE-specific percpu variables to the new hyp percpu section. Allocate a sufficient amount of memory for all percpu hyp regions at global KVM init time and create the corresponding hyp mappings. The base addresses of the hyp percpu regions are kept in a dynamically allocated array in the kernel. Add NULL checks in the PMU event-reset code, as it may run before KVM memory is initialized. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
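As an illustration of the allocation step described above (not the literal patch), the init-time setup could look roughly as follows; the array name, the nvhe_percpu_size() helper and the error handling are assumptions, while create_hyp_mappings() and PAGE_HYP are existing KVM/arm64 interfaces:

    /* Hedged sketch: allocate one hyp per-CPU region per possible CPU at KVM
     * init time, remember its base address, and map it at EL2. Names such as
     * kvm_arm_hyp_percpu_base and nvhe_percpu_size() are illustrative. */
    static unsigned long *kvm_arm_hyp_percpu_base;

    static int __init init_hyp_percpu_regions(void)
    {
            int cpu, err;

            kvm_arm_hyp_percpu_base = kcalloc(num_possible_cpus(),
                                              sizeof(*kvm_arm_hyp_percpu_base),
                                              GFP_KERNEL);
            if (!kvm_arm_hyp_percpu_base)
                    return -ENOMEM;

            for_each_possible_cpu(cpu) {
                    void *base = (void *)__get_free_pages(GFP_KERNEL,
                                            get_order(nvhe_percpu_size()));
                    if (!base)
                            return -ENOMEM;

                    kvm_arm_hyp_percpu_base[cpu] = (unsigned long)base;

                    /* Make the region visible to the nVHE hypervisor at EL2. */
                    err = create_hyp_mappings(base, base + nvhe_percpu_size(),
                                              PAGE_HYP);
                    if (err)
                            return err;
            }
            return 0;
    }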
2020-09-30 | kvm: arm64: Create separate instances of kvm_host_data for VHE/nVHE | David Brazdil | 1 | -1/+0
Host CPU context is stored in a global per-cpu variable `kvm_host_data`. In preparation for introducing independent per-CPU region for nVHE hyp, create two separate instances of `kvm_host_data`, one for VHE and one for nVHE. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-30 | kvm: arm64: Duplicate arm64_ssbd_callback_required for nVHE hyp | David Brazdil | 1 | -1/+0
Hyp keeps track of which cores require SSBD callback by accessing a kernel-proper global variable. Create an nVHE symbol of the same name and copy the value from kernel proper to nVHE as KVM is being enabled on a core. Done in preparation for separating percpu memory owned by kernel proper and nVHE. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-30 | kvm: arm64: Only define __kvm_ex_table for CONFIG_KVM | David Brazdil | 1 | -0/+4
Minor cleanup that only creates __kvm_ex_table ELF section and related symbols if CONFIG_KVM is enabled. Also useful as more hyp-specific sections will be added. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-30 | kvm: arm64: Move nVHE hyp namespace macros to hyp_image.h | David Brazdil | 2 | -2/+1
Minor cleanup to move all macros related to prefixing nVHE hyp section and symbol names into one place: hyp_image.h. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-29 | arm64: Add support for PR_SPEC_DISABLE_NOEXEC prctl() option | Will Deacon | 2 | -4/+40
The PR_SPEC_DISABLE_NOEXEC option to the PR_SPEC_STORE_BYPASS prctl() allows the SSB mitigation to be enabled only until the next execve(), at which point the state will revert back to PR_SPEC_ENABLE and the mitigation will be disabled. Add support for PR_SPEC_DISABLE_NOEXEC on arm64. Reported-by: Anthony Steinhauser <[email protected]> Signed-off-by: Will Deacon <[email protected]>
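For reference, a user-space task opts in to this behaviour through the existing speculation-control prctl() interface; a minimal example (only the return value is checked) might look like:

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
            /*
             * Disable Speculative Store Bypass for this task; per the
             * PR_SPEC_DISABLE_NOEXEC semantics the kernel reverts to
             * PR_SPEC_ENABLE (mitigation off) at the next execve().
             */
            if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                      PR_SPEC_DISABLE_NOEXEC, 0, 0)) {
                    perror("PR_SPEC_DISABLE_NOEXEC not supported");
                    return 1;
            }
            return 0;
    }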
2020-09-29 | arm64: Pull in task_stack_page() to Spectre-v4 mitigation code | Will Deacon | 1 | -0/+1
The kbuild robot reports that we're relying on an implicit inclusion to get a definition of task_stack_page() in the Spectre-v4 mitigation code, which is not always in place for some configurations:

| arch/arm64/kernel/proton-pack.c:329:2: error: implicit declaration of function 'task_stack_page' [-Werror,-Wimplicit-function-declaration]
| task_pt_regs(task)->pstate |= val;
| ^
| arch/arm64/include/asm/processor.h:268:36: note: expanded from macro 'task_pt_regs'
| ((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
| ^
| arch/arm64/kernel/proton-pack.c:329:2: note: did you mean 'task_spread_page'?

Add the missing include to fix the build error.

Fixes: a44acf477220 ("arm64: Move SSBD prctl() handler alongside other spectre mitigation code")
Reported-by: Anthony Steinhauser <[email protected]>
Reported-by: kernel test robot <[email protected]>
Link: https://lore.kernel.org/r/202009260013.Ul7AD29w%[email protected]
Signed-off-by: Will Deacon <[email protected]>
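The fix itself is a one-line include: task_stack_page() is declared in the scheduler's task-stack header, so the mitigation code pulls it in explicitly instead of relying on indirect inclusion:

    /* arch/arm64/kernel/proton-pack.c */
    #include <linux/sched/task_stack.h>   /* declares task_stack_page() */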
2020-09-29 | KVM: arm64: Allow patching EL2 vectors even when KASLR is not enabled | Will Deacon | 1 | -2/+0
Patching the EL2 exception vectors is integral to the Spectre-v2 workaround, where it can be necessary to execute CPU-specific sequences to nobble the branch predictor before running the hypervisor text proper. Remove the dependency on CONFIG_RANDOMIZE_BASE and allow the EL2 vectors to be patched even when KASLR is not enabled. Fixes: 7a132017e7a5 ("KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE") Reported-by: kernel test robot <[email protected]> Link: https://lore.kernel.org/r/202009221053.Jv1XsQUZ%[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Get rid of arm64_ssbd_state | Marc Zyngier | 1 | -2/+0
Out with the old ghost, in with the new... Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | KVM: arm64: Simplify handling of ARCH_WORKAROUND_2 | Marc Zyngier | 2 | -16/+0
Owing to the fact that the host kernel is always mitigated, we can drastically simplify the WA2 handling by keeping the mitigation state ON when entering the guest. This means the guest is either unaffected or not mitigated. This results in a nice simplification of the mitigation space, and the removal of a lot of code that was never really used anyway. Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Rewrite Spectre-v4 mitigation code | Will Deacon | 7 | -335/+394
Rewrite the Spectre-v4 mitigation handling code to follow the same approach as that taken by Spectre-v2. For now, report to KVM that the system is vulnerable (by forcing 'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in subsequent steps. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Move SSBD prctl() handler alongside other spectre mitigation code | Will Deacon | 3 | -130/+119
As part of the spectre consolidation effort to shift all of the ghosts into their own proton pack, move all of the horrible SSBD prctl() code out of its own 'ssbd.c' file. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Rename ARM64_SSBD to ARM64_SPECTRE_V4 | Will Deacon | 1 | -1/+1
In a similar manner to the renaming of ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2, rename ARM64_SSBD to ARM64_SPECTRE_V4. This isn't _entirely_ accurate, as we also need to take into account the interaction with SSBS, but that will be taken care of in subsequent patches. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Treat SSBS as a non-strict system feature | Will Deacon | 1 | -3/+3
If all CPUs discovered during boot have SSBS, then spectre-v4 will be considered to be "mitigated". However, we still allow late CPUs without SSBS to be onlined, albeit with a "SANITY CHECK" warning. This is problematic for userspace because it means that the system can quietly transition to "Vulnerable" at runtime. Avoid this by treating SSBS as a non-strict system feature: if all of the CPUs discovered during boot have SSBS, then late arriving secondaries better have it as well. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Rewrite Spectre-v2 mitigation code | Will Deacon | 2 | -233/+291
The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain. This is largely due to it being written hastily, without much clue as to how things would pan out, and also because it ends up mixing policy and state in such a way that it is very difficult to figure out what's going on. Rewrite the Spectre-v2 mitigation so that it clearly separates state from policy and follows a more structured approach to handling the mitigation. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Introduce separate file for spectre mitigations and reporting | Will Deacon | 3 | -7/+33
The spectre mitigation code is spread over a few different files, which makes it both hard to follow and hard to remove should we want to do that in future. Introduce a new file for housing the spectre mitigations, and populate it with the spectre-v1 reporting code to start with. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2 | Will Deacon | 1 | -1/+1
For better or worse, the world knows about "Spectre" and not about "Branch predictor hardening". Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2 as part of moving all of the Spectre mitigations into their own little corner. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | KVM: arm64: Simplify install_bp_hardening_cb() | Will Deacon | 1 | -20/+7
Use is_hyp_mode_available() to detect whether or not we need to patch the KVM vectors for branch hardening, which avoids the need to take the vector pointers as parameters. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE | Will Deacon | 1 | -2/+2
The removal of CONFIG_HARDEN_BRANCH_PREDICTOR means that CONFIG_KVM_INDIRECT_VECTORS is synonymous with CONFIG_RANDOMIZE_BASE, so replace it. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Remove Spectre-related CONFIG_* options | Will Deacon | 4 | -27/+3
The spectre mitigations are too configurable for their own good, leading to confusing logic trying to figure out when we should mitigate and when we shouldn't. Although the plethora of command-line options need to stick around for backwards compatibility, the default-on CONFIG options that depend on EXPERT can be dropped, as the mitigations only do anything if the system is vulnerable, a mitigation is available and the command-line hasn't disabled it. Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of enabling this code unconditionally. Signed-off-by: Will Deacon <[email protected]>
2020-09-29 | arm64: Run ARCH_WORKAROUND_2 enabling code on all CPUs | Marc Zyngier | 1 | -0/+7
Commit 606f8e7b27bf ("arm64: capabilities: Use linear array for detection and verification") changed the way we deal with per-CPU errata by only calling the .matches() callback until one CPU is found to be affected. At this point, .matches() stops being called, and .cpu_enable() will be called on all CPUs. This breaks the ARCH_WORKAROUND_2 handling, as only a single CPU will be mitigated. In order to address this, forcefully call the .matches() callback from a .cpu_enable() callback, which brings us back to the original behaviour. Fixes: 606f8e7b27bf ("arm64: capabilities: Use linear array for detection and verification") Cc: <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
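The shape of the fix described above (re-running detection from the enable path) is roughly the following; the function name is illustrative, while the .matches()/.cpu_enable() callback signatures and SCOPE_LOCAL_CPU come from the arm64 cpufeature framework:

    /* Hedged sketch: run the per-CPU detection again from .cpu_enable() so
     * that every CPU applies the ARCH_WORKAROUND_2 mitigation, not just the
     * first CPU that happened to match during capability detection. */
    static void cpu_enable_ssbd_mitigation(const struct arm64_cpu_capabilities *cap)
    {
            /* .matches() has the side effect of enabling the workaround locally. */
            cap->matches(cap, SCOPE_LOCAL_CPU);
    }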
2020-09-28 | arm64: cpufeature: Export symbol read_sanitised_ftr_reg() | Jean-Philippe Brucker | 1 | -0/+1
The SMMUv3 driver would like to read the MMFR0 PARANGE field in order to share CPU page tables with devices. Allow the driver to be built as a module by exporting the read_sanitised_ftr_reg() cpufeature symbol. Signed-off-by: Jean-Philippe Brucker <[email protected]> Reviewed-by: Jonathan Cameron <[email protected]> Acked-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-28 | arm64: perf: Defer irq_work to IPI_IRQ_WORK | Julien Thierry | 1 | -9/+5
When handling events, armv8pmu_handle_irq() calls perf_event_overflow(), and subsequently calls irq_work_run() to handle any work queued by perf_event_overflow(). As perf_event_overflow() raises IPI_IRQ_WORK when queuing the work, this isn't strictly necessary and the work could be handled as part of the IPI_IRQ_WORK handler. In the common case the IPI handler will run immediately after the PMU IRQ handler, and where the PE is heavily loaded with interrupts other handlers may run first, widening the window where some counters are disabled. In practice this window is unlikely to be a significant issue, and removing the call to irq_work_run() would make the PMU IRQ handler NMI safe in addition to making it simpler, so let's do that. [Alexandru E.: Reworded commit message] Signed-off-by: Julien Thierry <[email protected]> Signed-off-by: Alexandru Elisei <[email protected]> Cc: Julien Thierry <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-28 | arm64: perf: Remove PMU locking | Julien Thierry | 1 | -28/+0
The PMU is disabled and enabled, and the counters are programmed from contexts where interrupts or preemption is disabled. The functions to toggle the PMU and to program the PMU counters access the registers directly and don't access data modified by the interrupt handler. That, and the fact that they're always called from non-preemptible contexts, means that we don't need to disable interrupts or use a spinlock. [Alexandru E.: Explained why locking is not needed, removed WARN_ONs] Signed-off-by: Julien Thierry <[email protected]> Signed-off-by: Alexandru Elisei <[email protected]> Tested-by: Sumit Garg <[email protected]> (Developerbox) Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-28 | arm64: perf: Avoid PMXEV* indirection | Mark Rutland | 1 | -14/+85
Currently we access the counter registers and their respective type registers indirectly. This requires us to write to PMSELR, issue an ISB, then access the relevant PMXEV* registers. This is unfortunate, because:

* Under virtualization, accessing one register requires two traps to the hypervisor, even though we could access the register directly with a single trap.

* We have to issue an ISB which we could otherwise avoid the cost of.

* When we use NMIs, the NMI handler will have to save/restore the select register in case the code it preempted was attempting to access a counter or its type register.

We can avoid these issues by directly accessing the relevant registers. This patch adds helpers to do so.

In armv8pmu_enable_event() we still need the ISB to prevent the PE from reordering the write to PMINTENSET_EL1 register. If the interrupt is enabled before we disable the counter and the new event is configured, we might get an interrupt triggered by the previously programmed event overflowing, but which we wrongly attribute to the event that we are enabling. Execute an ISB after we disable the counter.

In the process, remove the comment that refers to the ARMv7 PMU.

[Julien T.: Don't inline read/write functions to avoid big code-size increase, remove unused read_pmevtypern function, fix counter index issue.]
[Alexandru E.: Removed comment, removed trailing semicolons in macros, added ISB]

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Julien Thierry <[email protected]>
Signed-off-by: Alexandru Elisei <[email protected]>
Tested-by: Sumit Garg <[email protected]> (Developerbox)
Cc: Julien Thierry <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Catalin Marinas <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
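The direct-access helpers take roughly the following shape; this is a compressed sketch rather than the full patch, with most of the 0..30 cases elided:

    /* Hedged sketch: select the banked PMEVCNTR<n>_EL0 register at compile
     * time with a switch instead of writing n to PMSELR_EL0, issuing an ISB
     * and then reading PMXEVCNTR_EL0 indirectly. */
    static u64 read_pmevcntrn(int n)
    {
            switch (n) {
            case 0:  return read_sysreg(pmevcntr0_el0);
            case 1:  return read_sysreg(pmevcntr1_el0);
            case 2:  return read_sysreg(pmevcntr2_el0);
            /* ... cases 3..30 follow the same pattern ... */
            default:
                    WARN(1, "Invalid PMEV* index\n");
                    return 0;
            }
    }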
2020-09-28 | arm64: perf: Add missing ISB in armv8pmu_enable_counter() | Alexandru Elisei | 1 | -0/+5
Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In armv8pmu_enable_event(), the PE can reorder configuring the event type after we have enabled the counter and the interrupt. This can lead to an interrupt being asserted because of the previous event type that we were counting using the same counter, not the one that we've just configured. The same rationale applies to writes to the PMINTENSET_EL1 register. The PE can reorder enabling the interrupt at any point in the future after we have enabled the event. Prevent both situations from happening by adding an ISB just before we enable the event counter. Fixes: 030896885ade ("arm64: Performance counters support") Reported-by: Julien Thierry <[email protected]> Signed-off-by: Alexandru Elisei <[email protected]> Tested-by: Sumit Garg <[email protected]> (Developerbox) Cc: Julien Thierry <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
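In effect the change boils down to a barrier in front of the enable write; a simplified version of the resulting helper (as described above, not copied verbatim) is:

    static inline void armv8pmu_enable_counter(u32 mask)
    {
            /*
             * Make sure the preceding event-type and interrupt-enable writes
             * are visible before the counter starts counting; without the ISB
             * the PE may reorder them and raise an interrupt attributed to the
             * previously programmed event.
             */
            isb();
            write_sysreg(mask, pmcntenset_el0);
    }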
2020-09-28 | arm64: perf: Add support for caps under sysfs | Shaokun Zhang | 1 | -33/+70
ARMv8.4-PMU introduces the PMMIR_EL1 register, and some new PMU events, such as STALL_SLOT, are related to it. Let's add a caps directory to /sys/bus/event_source/devices/armv8_pmuv3_0/ and expose the slots value from the PMMIR_EL1 register in this entry, so that user programs can read it from sysfs directly. /sys/bus/event_source/devices/armv8_pmuv3_0/caps/slots is exposed under sysfs. If both ARMv8.4-PMU and the STALL_SLOT event are implemented, it returns the slots from PMMIR_EL1; otherwise it returns 0. Signed-off-by: Shaokun Zhang <[email protected]> Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-28 | Merge branch 'irq/ipi-as-irq', remote-tracking branches 'origin/irq/dw' and 'origin/irq/owl' into irq/irqchip-next | Marc Zyngier | 1 | -1/+3
Signed-off-by: Marc Zyngier <[email protected]>
2020-09-25 | kbuild: remove cc-option test of -Werror=date-time | Masahiro Yamada | 1 | -1/+1
The minimal compiler versions, GCC 4.9 and Clang 10, support this flag. Here is the godbolt: https://godbolt.org/z/xvjcMa Signed-off-by: Masahiro Yamada <[email protected]> Reviewed-by: Nathan Chancellor <[email protected]> Acked-by: Will Deacon <[email protected]>
2020-09-25 | kbuild: remove cc-option test of -fno-strict-overflow | Masahiro Yamada | 1 | -1/+1
The minimal compiler versions, GCC 4.9 and Clang 10, support this flag. Here is the godbolt: https://godbolt.org/z/odq8h9 Signed-off-by: Masahiro Yamada <[email protected]> Reviewed-by: Nathan Chancellor <[email protected]> Acked-by: Will Deacon <[email protected]>
2020-09-25 | kbuild: preprocess module linker script | Masahiro Yamada | 1 | -5/+0
There was a request to preprocess the module linker script like we do for the vmlinux one. (https://lkml.org/lkml/2020/8/21/512) The difference between vmlinux.lds and module.lds is that the latter is needed for external module builds, thus must be cleaned up by 'make mrproper' instead of 'make clean'. Also, it must be created by 'make modules_prepare'. You cannot put it in arch/$(SRCARCH)/kernel/, which is cleaned up by 'make clean'. I moved arch/$(SRCARCH)/kernel/module.lds to arch/$(SRCARCH)/include/asm/module.lds.h, which is included from scripts/module.lds.S. scripts/module.lds is fine because 'make clean' keeps all the build artifacts under scripts/. You can add arch-specific sections in <asm/module.lds.h>. Signed-off-by: Masahiro Yamada <[email protected]> Tested-by: Jessica Yu <[email protected]> Acked-by: Will Deacon <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Palmer Dabbelt <[email protected]> Reviewed-by: Kees Cook <[email protected]> Acked-by: Jessica Yu <[email protected]>
2020-09-21 | arm64: Move console stack display code to stacktrace.c | Mark Brown | 2 | -65/+65
Currently the code for displaying a stack trace on the console is located in traps.c rather than stacktrace.c, even though it uses the unwinding code that lives in stacktrace.c. This can be confusing and makes the code hard to find, since such output is often referred to as a stack trace, which might mislead the unwary. Due to this, and since traps.c doesn't interact with this code except via the public interfaces, move the code to stacktrace.c to make it easier to find. Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-21 | arm64: Run ARCH_WORKAROUND_1 enabling code on all CPUs | Marc Zyngier | 1 | -0/+8
Commit 73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack thereof") changed the way we deal with ARCH_WORKAROUND_1, by moving most of the enabling code to the .matches() callback. This has the unfortunate effect that the workaround gets only enabled on the first affected CPU, and no other. In order to address this, forcefully call the .matches() callback from a .cpu_enable() callback, which brings us back to the original behaviour. Fixes: 73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack thereof") Cc: <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-09-21 | arm64: Make use of ARCH_WORKAROUND_1 even when KVM is not enabled | Marc Zyngier | 1 | -2/+5
We seem to be pretending that we don't have any firmware mitigation when KVM is not compiled in, which is not quite expected. Bring back the mitigation in this case. Fixes: 4db61fef16a1 ("arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations") Cc: <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-09-21 | arm64/sve: Implement a helper to load SVE registers from FPSIMD state | Julien Grall | 1 | -0/+17
In a follow-up patch, we may save the FPSIMD state rather than the full SVE state when the state has to be zeroed on return to userspace (e.g. during a syscall). Introduce a helper to load SVE vectors from FPSIMD state and zero the rest of the SVE registers. Signed-off-by: Julien Grall <[email protected]> Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Dave Martin <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-21 | arm64/sve: Implement a helper to flush SVE registers | Julien Grall | 1 | -0/+8
Introduce a new helper that will zero all SVE registers but the first 128 bits of each vector. This will be used by subsequent patches to avoid costly store/manipulate/reload sequences in places like do_sve_acc(). Signed-off-by: Julien Grall <[email protected]> Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Dave Martin <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-21 | arm64/signal: Update the comment in preserve_sve_context | Julien Grall | 1 | -1/+2
The SVE state is saved by fpsimd_signal_preserve_current_state() and not preserve_fpsimd_context(). Update the comment in preserve_sve_context to reflect the current behavior. Signed-off-by: Julien Grall <[email protected]> Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Dave Martin <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-21 | arm64/fpsimd: Update documentation of do_sve_acc | Julien Grall | 1 | -1/+1
fpsimd_restore_current_state() enables and disables the SVE access trap based on TIF_SVE, not task_fpsimd_load(). Update the documentation of do_sve_acc to reflect this behavior. Signed-off-by: Julien Grall <[email protected]> Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Dave Martin <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-18 | arch_topology, arm, arm64: define arch_scale_freq_invariant() | Valentin Schneider | 1 | -0/+7
arch_scale_freq_invariant() is used by schedutil to determine whether the scheduler's load-tracking signals are frequency invariant. Its definition is overridable, though by default it is hardcoded to 'true' if arch_scale_freq_capacity() is defined ('false' otherwise). This behaviour is not overridden on arm, arm64 and other users of the generic arch topology driver, which is somewhat precarious: arch_scale_freq_capacity() will always be defined, yet not all cpufreq drivers are guaranteed to drive the frequency invariance scale factor setting. In other words, the load-tracking signals may very well *not* be frequency invariant. Now that cpufreq can be queried on whether the current driver is driving the Frequency Invariance (FI) scale setting, the current situation can be improved. This combines the query of whether cpufreq supports the setting of the frequency scale factor, with whether all online CPUs are counter-based FI enabled. While cpufreq FI enablement applies at system level, for all CPUs, counter-based FI support could also be used for only a subset of CPUs to set the invariance scale factor. Therefore, if cpufreq-based FI support is present, we consider the system to be invariant. If missing, we require all online CPUs to be counter-based FI enabled in order for the full system to be considered invariant. If the system ends up not being invariant, a new condition is needed in the counter initialization code that disables all scale factor setting based on counters. Precedence of counters over cpufreq use is not important here. The invariant status is only given to the system if all CPUs have at least one method of setting the frequency scale factor. Signed-off-by: Valentin Schneider <[email protected]> Signed-off-by: Ionela Voinescu <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Viresh Kumar <[email protected]> Reviewed-by: Sudeep Holla <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
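A minimal sketch of the resulting decision, assuming helper names along the lines of those used by the generic topology and cpufreq code, is:

    /* Hedged sketch: the system is treated as frequency invariant if cpufreq
     * drives the scale factor, or if every online CPU can drive it from
     * activity-monitor counters. Helper names are assumptions. */
    bool topology_scale_freq_invariant(void)
    {
            return cpufreq_supports_freq_invariance() ||
                   arch_freq_counters_available(cpu_online_mask);
    }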
2020-09-18 | arch_topology, cpufreq: constify arch_* cpumasks | Valentin Schneider | 1 | -1/+1
The passed cpumask arguments to arch_set_freq_scale() and arch_freq_counters_available() are only iterated over, so reflect this in the prototype. This also allows to pass system cpumasks like cpu_online_mask without getting a warning. Signed-off-by: Valentin Schneider <[email protected]> Signed-off-by: Ionela Voinescu <[email protected]> Acked-by: Catalin Marinas <[email protected]> Acked-by: Viresh Kumar <[email protected]> Reviewed-by: Sudeep Holla <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2020-09-18 | arm64: Fix -Wunused-function warning when !CONFIG_HOTPLUG_CPU | YueHaibing | 1 | -1/+3
If CONFIG_HOTPLUG_CPU is n, gcc warns:

arch/arm64/kernel/smp.c:967:13: warning: ‘ipi_teardown’ defined but not used [-Wunused-function]
 static void ipi_teardown(int cpu)
             ^~~~~~~~~~~~

Guard it with #ifdef CONFIG_HOTPLUG_CPU.

Signed-off-by: YueHaibing <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-09-18 | arm64: Improve diagnostics when trapping BRK with FAULT_BRK_IMM | Will Deacon | 2 | -1/+18
When generating instructions at runtime, for example due to kernel text patching or the BPF JIT, we can emit a trapping BRK instruction if we are asked to encode an invalid instruction such as an out-of-range branch. This is indicative of a bug in the caller, and will result in a crash on executing the generated code. Unfortunately, the message from the crash is really unhelpful, and mumbles something about ptrace:

| Unexpected kernel BRK exception at EL1
| Internal error: ptrace BRK handler: f2000100 [#1] SMP

We can do better than this. Install a break handler for FAULT_BRK_IMM, which is the immediate used to encode the "I've been asked to generate an invalid instruction" error, and triage the faulting PC to determine whether or not the failure occurred in the BPF JIT.

Link: https://lore.kernel.org/r/20200915141707.GB26439@willie-the-truck
Reported-by: Ilias Apalodimas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
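Conceptually the new handler is a kernel break hook keyed on FAULT_BRK_IMM; a hedged sketch (the in_bpf_jit() triage helper and the exact message are assumptions) looks like:

    /* Hedged sketch: report who generated the bogus instruction instead of
     * the generic "ptrace BRK handler" message. */
    static int reserved_fault_handler(struct pt_regs *regs, unsigned int esr)
    {
            pr_err("%s generated an invalid instruction at %pS!\n",
                   in_bpf_jit(regs) ? "BPF JIT" : "Kernel text patching",
                   (void *)instruction_pointer(regs));

            return DBG_HOOK_ERROR;  /* we cannot recover from this */
    }

    static struct break_hook fault_break_hook = {
            .fn  = reserved_fault_handler,
            .imm = FAULT_BRK_IMM,
    };

    /* registered early via register_kernel_break_hook(&fault_break_hook) */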
2020-09-18 | arm64/fpsimd: Fix missing-prototypes in fpsimd.c | Tian Tao | 1 | -0/+2
Fix the following warnings.

arch/arm64/kernel/fpsimd.c:935:6: warning: no previous prototype for ‘do_sve_acc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:962:6: warning: no previous prototype for ‘do_fpsimd_acc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:971:6: warning: no previous prototype for ‘do_fpsimd_exc’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:1266:6: warning: no previous prototype for ‘kernel_neon_begin’ [-Wmissing-prototypes]
arch/arm64/kernel/fpsimd.c:1292:6: warning: no previous prototype for ‘kernel_neon_end’ [-Wmissing-prototypes]

Signed-off-by: Tian Tao <[email protected]>
Reviewed-by: Dave Martin <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
2020-09-18 | arm64: stacktrace: Convert to ARCH_STACKWALK | Mark Brown | 1 | -69/+10
Historically architectures have had duplicated code in their stack trace implementations for filtering what gets traced. In order to avoid this duplication some generic code has been provided using a new interface arch_stack_walk(), enabled by selecting ARCH_STACKWALK in Kconfig, which factors all this out into the generic stack trace code. Convert arm64 to use this common infrastructure. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Miroslav Benes <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
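After the conversion, the arm64 entry point reduces to a thin wrapper around the existing unwinder; a simplified sketch (current-task handling omitted) of what the generic interface expects is:

    /* Hedged sketch: arch_stack_walk() just seeds a stackframe and lets the
     * existing walk_stackframe() unwinder feed each PC to the generic
     * consume_entry callback. */
    void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
                         struct task_struct *task, struct pt_regs *regs)
    {
            struct stackframe frame;

            if (regs)
                    start_backtrace(&frame, regs->regs[29], regs->pc);
            else
                    start_backtrace(&frame, thread_saved_fp(task),
                                    thread_saved_pc(task));

            walk_stackframe(task, &frame, consume_entry, cookie);
    }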
2020-09-18 | arm64: stacktrace: Make stack walk callback consistent with generic code | Mark Brown | 3 | -13/+12
As with the generic arch_stack_walk() code, the arm64 stack walk code takes a callback that is called per stack frame. Currently the arm64 code always passes a struct stackframe to the callback while the generic code just passes the pc; however, none of the users ever reference anything in the struct other than the pc value. The arm64 code also uses a return type of int while the generic code uses bool, though in both cases the return value is a boolean and the sense is inverted between the two. In order to reduce code duplication when arm64 is converted to use arch_stack_walk(), change the signature and return sense of the arm64-specific callback to match that of the generic code. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Miroslav Benes <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-17 | arm64: paravirt: Initialize steal time when cpu is online | Andrew Jones | 1 | -11/+15
Steal time initialization requires mapping a memory region which invokes a memory allocation. Doing this at CPU starting time results in the following trace when CONFIG_DEBUG_ATOMIC_SLEEP is enabled:

BUG: sleeping function called from invalid context at mm/slab.h:498
in_atomic(): 1, irqs_disabled(): 128, non_block: 0, pid: 0, name: swapper/1
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.9.0-rc5+ #1
Call trace:
 dump_backtrace+0x0/0x208
 show_stack+0x1c/0x28
 dump_stack+0xc4/0x11c
 ___might_sleep+0xf8/0x130
 __might_sleep+0x58/0x90
 slab_pre_alloc_hook.constprop.101+0xd0/0x118
 kmem_cache_alloc_node_trace+0x84/0x270
 __get_vm_area_node+0x88/0x210
 get_vm_area_caller+0x38/0x40
 __ioremap_caller+0x70/0xf8
 ioremap_cache+0x78/0xb0
 memremap+0x9c/0x1a8
 init_stolen_time_cpu+0x54/0xf0
 cpuhp_invoke_callback+0xa8/0x720
 notify_cpu_starting+0xc8/0xd8
 secondary_start_kernel+0x114/0x180
CPU1: Booted secondary processor 0x0000000001 [0x431f0a11]

However we don't need to initialize steal time at CPU starting time. We can simply wait until CPU online time, just sacrificing a bit of accuracy by returning zero for steal time until we know better. While at it, add __init to the functions that are only called by pv_time_init() which is __init.

Signed-off-by: Andrew Jones <[email protected]>
Fixes: e0685fa228fd ("arm64: Retrieve stolen time as paravirtualized guest")
Cc: [email protected]
Reviewed-by: Steven Price <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
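The structural change is to register the initialization at an online (sleepable) hotplug state rather than a starting (atomic) one; a hedged sketch using the generic cpuhp API, with the function and callback names as assumptions, is:

    /* Hedged sketch: CPUHP_AP_ONLINE_DYN callbacks run in a context where
     * memremap()'s allocation is allowed to sleep, unlike the CPU-starting
     * callbacks that run with interrupts disabled. */
    static int __init stolen_time_setup(void)
    {
            int ret;

            ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
                                    "hypervisor/arm/pvtime:online",
                                    stolen_time_cpu_online,       /* maps the region */
                                    stolen_time_cpu_down_prepare  /* unmaps it */);
            return ret < 0 ? ret : 0;
    }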
2020-09-17 | Merge remote-tracking branch 'origin/irq/ipi-as-irq' into irq/irqchip-next | Marc Zyngier | 2 | -54/+84
Signed-off-by: Marc Zyngier <[email protected]>
2020-09-17 | arm64: Remove custom IRQ stat accounting | Marc Zyngier | 2 | -28/+13
Let's switch the arm64 code to the core accounting, which already does everything we need. Reviewed-by: Valentin Schneider <[email protected]> Acked-by: Catalin Marinas <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2020-09-17 | arm64: Kill __smp_cross_call and co | Marc Zyngier | 1 | -31/+7
The old IPI registration interface is now unused on arm64, so let's get rid of it. Reviewed-by: Valentin Schneider <[email protected]> Acked-by: Catalin Marinas <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>