path: root/arch/arm64/kernel/cpu_errata.c
2019-11-01  arm64: apply ARM64_ERRATUM_845719 workaround for Brahma-B53 core  (Doug Berger; 1 file, -2/+11)
The Broadcom Brahma-B53 core is susceptible to the issue described by ARM64_ERRATUM_845719, so this commit enables the workaround to be applied when executing on that core. Since there are now multiple entries to match, we must convert the existing ARM64_ERRATUM_845719 into an erratum list.
Signed-off-by: Doug Berger <[email protected]>
Signed-off-by: Florian Fainelli <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
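As a sketch of what such a list-based entry looks like (the MIDR identifiers and revision ranges here are assumptions for illustration, not the literal diff):

    #ifdef CONFIG_ARM64_ERRATUM_845719
    static const struct midr_range erratum_845719_list[] = {
            /* Cortex-A53 r0p[01234] */
            MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4),
            /* Broadcom Brahma-B53 r0p0 (assumed) */
            MIDR_REV(MIDR_BRAHMA_B53, 0, 0),
            {},
    };
    #endif

            {
                    .desc = "ARM erratum 845719",
                    .capability = ARM64_WORKAROUND_845719,
                    /* Matches any core in the list above */
                    ERRATA_MIDR_RANGE_LIST(erratum_845719_list),
            },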
2019-10-31  arm64: cpufeature: Enable Qualcomm Falkor errata 1009 for Kryo  (Bjorn Andersson; 1 file, -6/+14)
The Kryo cores share errata 1009 with Falkor, so add their model definitions and enable it for them as well.
Signed-off-by: Bjorn Andersson <[email protected]>
[will: Update entry in silicon-errata.rst]
Signed-off-by: Will Deacon <[email protected]>
2019-10-29  arm64: cpufeature: Enable Qualcomm Falkor/Kryo errata 1003  (Bjorn Andersson; 1 file, -0/+1)
With the introduction of commit cce360b54ce6 ("arm64: capabilities: Filter the entries based on a given mask"), the Qualcomm Falkor/Kryo errata 1003 workaround is no longer applied. The result of not applying errata 1003 is that MSM8996 runs into various RCU stalls and fails to boot most of the time. Give 1003 a "type" to ensure it is not filtered out in update_cpu_capabilities().
Fixes: cce360b54ce6 ("arm64: capabilities: Filter the entries based on a given mask")
Cc: [email protected]
Reported-by: Mark Brown <[email protected]>
Suggested-by: Will Deacon <[email protected]>
Signed-off-by: Bjorn Andersson <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
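Concretely, the fix amounts to giving the capability entry an explicit type; a sketch of the resulting entry (field values assumed from the surrounding code, not the literal diff):

            {
                    .desc = "Qualcomm Technologies Falkor/Kryo erratum 1003",
                    .capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
                    /* The one-line fix: without a type, the entry is
                     * filtered out by update_cpu_capabilities(). */
                    .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
                    .matches = cpucap_multi_entry_cap_matches,
                    .match_list = qcom_erratum_1003_list,
            },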
2019-10-28  Merge branch 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core  (Catalin Marinas; 1 file, -3/+10)
Similarly to erratum 1165522, which affects Cortex-A76, A57 and A72 respectively suffer from errata 1319537 and 1319367, potentially resulting in TLB corruption if the CPU speculates an AT instruction while switching guests. The fix is slightly more involved since we don't have VHE to help us here, but the idea is the same: when switching a guest in, we must prevent any speculated AT from being able to parse the page tables until S2 is up and running. Only at this stage can we allow AT to take place. For this, we always restore the guest sysregs first, except for its SCTLR and TCR registers, which must be set with SCTLR.M=1 and TCR.EPD{0,1} = {1, 1}, effectively disabling the PTW and TLB allocation. Once S2 is set up, we restore the guest's SCTLR and TCR. Similar things must be done on TLB invalidation...
* 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
  arm64: Enable and document ARM errata 1319367 and 1319537
  arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context
  arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs
  arm64: KVM: Reorder system register restoration and stage-2 activation
  arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions
2019-10-28  Merge branch 'for-next/neoverse-n1-stale-instr' into for-next/core  (Catalin Marinas; 1 file, -1/+31)
Neoverse-N1 cores with the 'COHERENT_ICACHE' feature may fetch stale instructions when software depends on prefetch-speculation-protection instead of explicit synchronization. [0] The workaround is to trap I-cache maintenance and issue an inner-shareable TLBI. The affected cores have a coherent I-cache, so the I-cache maintenance isn't necessary. The core tells user-space it can skip it with CTR_EL0.DIC. We also have to trap this register to hide the bit, forcing DIC-aware user-space to perform the maintenance. To avoid trapping all cache maintenance, this workaround depends on a firmware component that only traps I-cache maintenance from EL0 and performs the workaround. For user-space, the kernel's work is to trap CTR_EL0 to hide DIC, and produce a fake IminLine. EL3 traps the now-necessary I-cache maintenance and performs the inner-shareable TLBI that makes everything better.
[0] https://developer.arm.com/docs/sden885747/latest/arm-neoverse-n1-mp050-software-developer-errata-notice
* for-next/neoverse-n1-stale-instr:
  arm64: Silence clang warning on mismatched value/register sizes
  arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space
  arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419
  arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419
2019-10-25  arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419  (James Morse; 1 file, -1/+31)
Cores affected by Neoverse-N1 #1542419 could execute a stale instruction when a branch is updated to point to freshly generated instructions. To work around this issue we need user-space to issue unnecessary icache maintenance that we can trap. Start by hiding CTR_EL0.DIC.
Reviewed-by: Suzuki K Poulose <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
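A rough sketch of the detection side, assuming the helper and range names (the exact affected revision range comes from the erratum notice and is assumed here):

    static const struct midr_range neoverse_n1_1542419 =
            MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 0, 4, 0);  /* assumed r0p0..r4p0 */

    static bool
    has_neoverse_n1_erratum_1542419(const struct arm64_cpu_capabilities *entry,
                                    int scope)
    {
            u32 midr = read_cpuid_id();
            bool has_dic = read_cpuid_cachetype() & BIT(CTR_DIC_SHIFT);

            /* Affected cores advertise DIC - the very bit we must hide */
            return is_midr_in_range(midr, &neoverse_n1_1542419) && has_dic;
    }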
2019-10-21  arm/arm64: Make use of the SMCCC 1.1 wrapper  (Steven Price; 1 file, -52/+29)
Rather than directly choosing which function to use based on psci_ops.conduit, use the new arm_smccc_1_1 wrapper instead. In some cases we still need to do some operations based on the conduit, but the code duplication is removed. No functional change.
Signed-off-by: Steven Price <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
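The shape of the change, sketched under the assumption that the wrapper in question is arm_smccc_1_1_invoke() (call site simplified, not the literal diff): instead of switching on psci_ops.conduit at every call site, callers let the wrapper pick SMC vs HVC:

            struct arm_smccc_res res;

            /* The wrapper selects SMC or HVC from the detected conduit */
            arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                                 ARM_SMCCC_ARCH_WORKAROUND_1, &res);
            if ((int)res.a0 < 0)
                    return;         /* workaround not implemented */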
2019-10-18  arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions  (Marc Zyngier; 1 file, -3/+10)
Rework the EL2 vector hardening that is only selected for A57 and A72 so that the table can also be used for ARM64_WORKAROUND_1319367.
Acked-by: Catalin Marinas <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2019-10-17  Merge branch 'errata/tx2-219' into for-next/fixes  (Will Deacon; 1 file, -0/+38)
Workaround for Cavium/Marvell ThunderX2 erratum #219.
* errata/tx2-219:
  arm64: Allow CAVIUM_TX2_ERRATUM_219 to be selected
  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR
  arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT
  arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set
2019-10-14  arm64: errata: use arm_smccc_1_1_get_conduit()  (Mark Rutland; 1 file, -25/+12)
Now that we have arm_smccc_1_1_get_conduit(), we can hide the PSCI implementation details from the arm64 cpu errata code, so let's do so. As arm_smccc_1_1_get_conduit() implicitly checks that the SMCCC version is at least SMCCC_VERSION_1_1, we no longer need to check this explicitly where switch statements have a default case, e.g. in has_ssbd_mitigation(). There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Lorenzo Pieralisi <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
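A sketch of the resulting pattern (enum values as in the SMCCC header; surrounding error handling elided):

            struct arm_smccc_res res;

            switch (arm_smccc_1_1_get_conduit()) {
            case SMCCC_CONDUIT_HVC:
                    arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                                      ARM_SMCCC_ARCH_WORKAROUND_1, &res);
                    break;
            case SMCCC_CONDUIT_SMC:
                    arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                                      ARM_SMCCC_ARCH_WORKAROUND_1, &res);
                    break;
            default:
                    /* SMCCC_CONDUIT_NONE: no SMCCC 1.1, nothing to do */
                    return;
            }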
2019-10-08  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR  (Marc Zyngier; 1 file, -0/+5)
As a PRFM instruction racing against a TTBR update can have undesirable effects on TX2, NOP-out such PRFM on cores that are affected by the TX2-219 erratum.
Cc: <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2019-10-08  arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT  (Marc Zyngier; 1 file, -0/+33)
It appears that the only case where we need to apply the TX2_219_TVM mitigation is when the core is in SMT mode. So let's condition the enabling on detecting a CPU whose MPIDR_EL1.Aff0 is non-zero.
Cc: <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
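A simplified sketch of that test (the in-tree matcher also checks the affected-MIDR list first; the function name is assumed):

    static bool
    needs_tx2_tvm_workaround(const struct arm64_cpu_capabilities *entry,
                             int scope)
    {
            u64 mpidr = read_cpuid_mpidr();

            /* On TX2, Aff0 != 0 means this PE is an SMT thread sibling */
            return MPIDR_AFFINITY_LEVEL(mpidr, 0) != 0;
    }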
2019-10-01  arm64: errata: Update stale comment  (Thierry Reding; 1 file, -2/+2)
Commit 73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack thereof") renamed the caller of the install_bp_hardening_cb() function but forgot to update a comment, which can be confusing when trying to follow the code flow.
Fixes: 73f381660959 ("arm64: Advertise mitigation of Spectre-v2, or lack thereof")
Signed-off-by: Thierry Reding <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2019-07-05  arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests  (Andre Przywara; 1 file, -5/+18)
Recent commits added the explicit notion of "workaround not required" to the state of the Spectre v2 (aka. BP_HARDENING) workaround, where we just had "needed" and "unknown" before. Export this knowledge to the rest of the kernel and enhance the existing kvm_arm_harden_branch_predictor() to report this new state as well. Export this new state to guests when they use KVM's firmware interface emulation.
Signed-off-by: Andre Przywara <[email protected]>
Reviewed-by: Steven Price <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner; 1 file, -12/+1)
Based on 1 normalized pattern(s): "this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses". Extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s).
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Alexios Zavras <[email protected]>
Reviewed-by: Allison Randal <[email protected]>
Reviewed-by: Enrico Weigelt <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-23  arm64: Handle erratum 1418040 as a superset of erratum 1188873  (Marc Zyngier; 1 file, -10/+14)
We already mitigate erratum 1188873, affecting Cortex-A76 and Neoverse-N1 r0p0 to r2p0. It turns out that revisions r0p0 to r3p1 of the same cores are affected by erratum 1418040, which has the same workaround as 1188873. Let's expand the range of affected revisions to match 1418040, and repaint all occurrences of 1188873 to 1418040. Whilst we're there, do a bit of reformatting in silicon-errata.txt and drop a now-unnecessary dependency on ARM_ARCH_TIMER_OOL_WORKAROUND.
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2019-05-23  arm64: errata: Add workaround for Cortex-A76 erratum #1463225  (Will Deacon; 1 file, -0/+24)
Revisions of the Cortex-A76 CPU prior to r4p0 are affected by an erratum that can prevent interrupts from being taken when single-stepping. This patch implements a software workaround to prevent userspace from effectively being able to disable interrupts.
Cc: <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2019-05-01  Merge branch 'for-next/timers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into for-next/core  (Will Deacon; 1 file, -2/+11)
Conflicts:
  arch/arm64/Kconfig
  arch/arm64/include/asm/arch_timer.h
2019-05-01  arm64/speculation: Support 'mitigations=' cmdline option  (Josh Poimboeuf; 1 file, -1/+5)
Configure arm64 runtime CPU speculation bug mitigations in accordance with the 'mitigations=' cmdline option. This affects Meltdown, Spectre v2, and Speculative Store Bypass. The default behavior is unchanged.
Signed-off-by: Josh Poimboeuf <[email protected]>
[will: reorder checks so KASLR implies KPTI and SSBS is affected by cmdline]
Signed-off-by: Will Deacon <[email protected]>
2019-05-01  arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB  (Will Deacon; 1 file, -4/+6)
SSBS provides a relatively cheap mitigation for SSB, but it is still a mitigation, and its presence does not indicate that the CPU is unaffected by the vulnerability. Tweak the mitigation logic so that we report the correct string in sysfs.
Signed-off-by: Will Deacon <[email protected]>
2019-05-01  arm64: add sysfs vulnerability show for speculative store bypass  (Jeremy Linton; 1 file, -0/+42)
Return status based on ssbd_state and __ssb_safe. If the mitigation is disabled, or the firmware isn't responding, then return the expected machine state based on a whitelist of known-good cores. Given a heterogeneous machine, the overall machine vulnerability defaults to safe but is reset to unsafe when we miss the whitelist and the firmware doesn't explicitly tell us the core is safe. In order to make that work, we delay transitioning to vulnerable until we know the firmware isn't responding, to avoid a case where we miss the whitelist but the firmware goes ahead and reports the core is not vulnerable. If all the cores in the machine have SSBS, then __ssb_safe will remain true.
Tested-by: Stefan Wahren <[email protected]>
Signed-off-by: Jeremy Linton <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
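A condensed sketch of the resulting sysfs logic (state names as used by the arm64 SSBD code; string literals assumed):

    ssize_t cpu_show_spec_store_bypass(struct device *dev,
                                       struct device_attribute *attr, char *buf)
    {
            /* __ssb_safe stays true only if no core missed the whitelist */
            if (__ssb_safe)
                    return sprintf(buf, "Not affected\n");

            switch (ssbd_state) {
            case ARM64_SSBD_KERNEL:
            case ARM64_SSBD_FORCE_ENABLE:
                    if (IS_ENABLED(CONFIG_ARM64_SSBD))
                            return sprintf(buf,
                                "Mitigation: Speculative Store Bypass disabled via prctl\n");
            }

            return sprintf(buf, "Vulnerable\n");
    }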
2019-04-30  arm64: Apply ARM64_ERRATUM_1188873 to Neoverse-N1  (Marc Zyngier; 1 file, -2/+11)
Neoverse-N1 is also affected by ARM64_ERRATUM_1188873, so let's add it to the list of affected CPUs.
Signed-off-by: Marc Zyngier <[email protected]>
[will: Update silicon-errata.txt]
Signed-off-by: Will Deacon <[email protected]>
2019-04-26  arm64: Always enable ssb vulnerability detection  (Jeremy Linton; 1 file, -4/+5)
Ensure we are always able to detect whether or not the CPU is affected by SSB, so that we can later advertise this to userspace.
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
[will: Use IS_ENABLED instead of #ifdef]
Signed-off-by: Will Deacon <[email protected]>
2019-04-26  arm64: add sysfs vulnerability show for spectre-v2  (Jeremy Linton; 1 file, -1/+26)
Track whether all the cores in the machine are vulnerable to Spectre-v2, and whether all the vulnerable cores have been mitigated. We then expose this information to userspace via sysfs.
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2019-04-26  arm64: Always enable spectre-v2 vulnerability detection  (Jeremy Linton; 1 file, -7/+8)
Ensure we are always able to detect whether or not the CPU is affected by Spectre-v2, so that we can later advertise this to userspace.
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2019-04-26  arm64: Use firmware to detect CPUs that are not affected by Spectre-v2  (Marc Zyngier; 1 file, -9/+23)
The SMCCC ARCH_WORKAROUND_1 service can indicate that although the firmware knows about the Spectre-v2 mitigation, this particular CPU is not vulnerable, and it is thus not necessary to call the firmware on this CPU. Let's use this information to our benefit.
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
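The discovery convention, sketched: querying ARCH_FEATURES for ARCH_WORKAROUND_1 returns 0 when the workaround is required and implemented, a positive value (1) when this CPU is not vulnerable, and a negative value when unsupported. Assuming the HVC conduit for illustration:

            struct arm_smccc_res res;

            arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                              ARM_SMCCC_ARCH_WORKAROUND_1, &res);

            if ((int)res.a0 < 0)
                    return false;   /* firmware has no workaround */
            if (res.a0 == 1)
                    return true;    /* firmware: this CPU not vulnerable */
            /* res.a0 == 0: install the firmware call as the mitigation */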
2019-04-26  arm64: Advertise mitigation of Spectre-v2, or lack thereof  (Marc Zyngier; 1 file, -53/+56)
We currently have a list of CPUs affected by Spectre-v2, for which we check that the firmware implements ARCH_WORKAROUND_1. It turns out that not all firmware implements the required mitigation, and that we fail to let the user know about it. Instead, let's slightly revamp our checks and rely on a whitelist of cores that are known to be non-vulnerable, and let the user know the status of the mitigation in the kernel log.
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2019-04-26  arm64: Add sysfs vulnerability show for spectre-v1  (Mian Yousaf Kaukab; 1 file, -0/+6)
Spectre-v1 has been mitigated and the mitigation is always active. Report this to userspace via sysfs.
Signed-off-by: Mian Yousaf Kaukab <[email protected]>
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
Acked-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
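Since the mitigation is unconditional, the show function can be a one-liner; a sketch, using the string reported by other architectures:

    ssize_t cpu_show_spectre_v1(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
            /* Array-index masking is always applied, so always mitigated */
            return sprintf(buf, "Mitigation: __user pointer sanitization\n");
    }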
2019-04-26  arm64: Provide a command line to disable spectre_v2 mitigation  (Jeremy Linton; 1 file, -0/+13)
There are various reasons, such as benchmarking, to disable Spectre-v2 mitigation on a machine. Provide a command-line option to do so.
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: [email protected]
Signed-off-by: Will Deacon <[email protected]>
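A sketch of the command-line hook, assuming the option is spelled nospectre_v2 as on other architectures:

    static bool __nospectre_v2;
    static int __init parse_nospectre_v2(char *str)
    {
            /* Flag consulted later by the BP-hardening matcher */
            __nospectre_v2 = true;
            return 0;
    }
    early_param("nospectre_v2", parse_nospectre_v2);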
2019-01-10  arm64: kpti: Avoid rewriting early page tables when KASLR is enabled  (Will Deacon; 1 file, -1/+1)
A side effect of commit c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well") is that the linear map is created with page granularity, which means that transitioning the early page table from global to non-global mappings when enabling kpti can take a significant amount of time during boot. Given that most CPU implementations do not require kpti, this mainly impacts KASLR builds, where kpti is forcefully enabled. However, in these situations we know early on that non-global mappings are required and can avoid the use of global mappings from the beginning. The only gotcha is Cavium erratum #27456, which we must detect based on the MIDR value of the boot CPU.
Reviewed-by: Ard Biesheuvel <[email protected]>
Reported-by: John Garry <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
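The boot-CPU check has to run before the errata framework does, so it is a raw MIDR test; a sketch under the assumption that the MIDR list is exported from cpu_errata.c (the function name here is hypothetical):

    extern const struct midr_range cavium_erratum_27456_cpus[];

    static bool __init kpti_needed_early(void)  /* hypothetical helper */
    {
            /* Erratum #27456 is incompatible with non-global mappings */
            if (is_midr_in_range_list(read_cpuid_id(),
                                      cavium_erratum_27456_cpus))
                    return false;

            /* KASLR in use implies kpti, so skip global mappings now */
            return kaslr_offset() > 0;
    }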
2018-12-25  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 1 file, -82/+67)
Pull arm64 festive updates from Will Deacon: "In the end, we ended up with quite a lot more than I expected:
  - Support for ARMv8.3 Pointer Authentication in userspace (CRIU and kernel-side support to come later)
  - Support for per-thread stack canaries, pending an update to GCC that is currently undergoing review
  - Support for kexec_file_load(), which permits secure boot of a kexec payload but also happens to improve the performance of kexec dramatically because we can avoid the sucky purgatory code from userspace. Kdump will come later (requires updates to libfdt).
  - Optimisation of our dynamic CPU feature framework, so that all detected features are enabled via a single stop_machine() invocation
  - KPTI whitelisting of Cortex-A CPUs unaffected by Meltdown, so that they can benefit from global TLB entries when KASLR is not in use
  - 52-bit virtual addressing for userspace (kernel remains 48-bit)
  - Patch in LSE atomics for per-cpu atomic operations
  - Custom preempt.h implementation to avoid unconditional calls to preempt_schedule() from preempt_enable()
  - Support for the new 'SB' Speculation Barrier instruction
  - Vectorised implementation of XOR checksumming and CRC32 optimisations
  - Workaround for Cortex-A76 erratum #1165522
  - Improved compatibility with Clang/LLD
  - Support for TX2 system PMUS for profiling the L3 cache and DMC
  - Reflect read-only permissions in the linear map by default
  - Ensure MMIO reads are ordered with subsequent calls to Xdelay()
  - Initial support for memory hotplug
  - Tweak the threshold when we invalidate the TLB by-ASID, so that mremap() performance is improved for ranges spanning multiple PMDs.
  - Minor refactoring and cleanups"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (125 commits)
  arm64: kaslr: print PHYS_OFFSET in dump_kernel_offset()
  arm64: sysreg: Use _BITUL() when defining register bits
  arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches
  arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4
  arm64: docs: document pointer authentication
  arm64: ptr auth: Move per-thread keys from thread_info to thread_struct
  arm64: enable pointer authentication
  arm64: add prctl control for resetting ptrauth keys
  arm64: perf: strip PAC when unwinding userspace
  arm64: expose user PAC bit positions via ptrace
  arm64: add basic pointer authentication support
  arm64/cpufeature: detect pointer authentication
  arm64: Don't trap host pointer auth use to EL2
  arm64/kvm: hide ptrauth from guests
  arm64/kvm: consistently handle host HCR_EL2 flags
  arm64: add pointer authentication register bits
  arm64: add comments about EC exception levels
  arm64: perf: Treat EXCLUDE_EL* bit definitions as unsigned
  arm64: kpti: Whitelist Cortex-A CPUs that don't implement the CSV3 field
  arm64: enable per-task stack canaries
  ...
2018-12-13  arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches  (Will Deacon; 1 file, -33/+1)
Open-coding the pointer-auth HWCAPs is a mess and can be avoided by reusing the multi-cap logic from the CPU errata framework. Move the multi_entry_cap_matches code to cpufeature.h and reuse it for the pointer auth HWCAPs.
Reviewed-by: Suzuki Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2018-12-10  Merge branch 'kvm/cortex-a76-erratum-1165522' into aarch64/for-next/core  (Will Deacon; 1 file, -0/+8)
Pull in KVM workaround for A76 erratum #1165522.
Conflicts:
  arch/arm64/include/asm/cpucaps.h
2018-12-10  arm64: KVM: Force VHE for systems affected by erratum 1165522  (Marc Zyngier; 1 file, -0/+8)
In order to easily mitigate ARM erratum 1165522, we need to force affected CPUs to run in VHE mode if using KVM.
Reviewed-by: James Morse <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2018-12-06  arm64: capabilities: Merge duplicate entries for Qualcomm erratum 1003  (Suzuki K Poulose; 1 file, -9/+16)
Remove duplicate entries for Qualcomm erratum 1003. Since the entries are not purely based on generic MIDR checks, use the multi_cap_entry type to merge the entries.
Cc: Christopher Covington <[email protected]>
Cc: Will Deacon <[email protected]>
Reviewed-by: Vladimir Murzin <[email protected]>
Tested-by: Vladimir Murzin <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2018-12-06  arm64: capabilities: Merge duplicate Cavium erratum entries  (Suzuki K Poulose; 1 file, -26/+24)
Merge duplicate entries for a single capability using the midr range list for Cavium errata 30115 and 27456.
Cc: Andrew Pinski <[email protected]>
Cc: David Daney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Catalin Marinas <[email protected]>
Reviewed-by: Vladimir Murzin <[email protected]>
Tested-by: Vladimir Murzin <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2018-12-06  arm64: capabilities: Merge entries for ARM64_WORKAROUND_CLEAN_CACHE  (Suzuki K Poulose; 1 file, -12/+16)
We have two entries for the ARM64_WORKAROUND_CLEAN_CACHE capability:
  1) ARM Errata 826319, 827319, 824069, 819472 on A53 r0p[012]
  2) ARM Erratum 819472 on A53 r0p[01]
Both have the same workaround. Merge these entries to avoid duplicate entries for a single capability. Add a new Kconfig entry to control the "capability" entry, to make it easier to handle combinations of the CONFIGs.
Cc: Will Deacon <[email protected]>
Cc: Andre Przywara <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2018-11-29  arm64: Add workaround for Cortex-A76 erratum 1286807  (Catalin Marinas; 1 file, -3/+17)
On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual address for a cacheable mapping of a location is being accessed by a core while another core is remapping the virtual address to a new physical page using the recommended break-before-make sequence, then under very rare circumstances TLBI+DSB completes before a read using the translation being invalidated has been observed by other observers. The workaround repeats the TLBI+DSB operation and is shared with the Qualcomm Falkor erratum 1009 workaround.
Reviewed-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
2018-11-27  arm64: Use a raw spinlock in __install_bp_hardening_cb()  (James Morse; 1 file, -3/+3)
__install_bp_hardening_cb() is called via stop_machine() as part of the cpu_enable callback. To force each CPU to take its turn when allocating slots, they take a spinlock. With the RT patches applied, the spinlock becomes a mutex, and we get warnings about sleeping while in stop_machine():
| [ 0.319176] CPU features: detected: RAS Extension Support
| [ 0.319950] BUG: scheduling while atomic: migration/3/36/0x00000002
| [ 0.319955] Modules linked in:
| [ 0.319958] Preemption disabled at:
| [ 0.319969] [<ffff000008181ae4>] cpu_stopper_thread+0x7c/0x108
| [ 0.319973] CPU: 3 PID: 36 Comm: migration/3 Not tainted 4.19.1-rt3-00250-g330fc2c2a880 #2
| [ 0.319975] Hardware name: linux,dummy-virt (DT)
| [ 0.319976] Call trace:
| [ 0.319981] dump_backtrace+0x0/0x148
| [ 0.319983] show_stack+0x14/0x20
| [ 0.319987] dump_stack+0x80/0xa4
| [ 0.319989] __schedule_bug+0x94/0xb0
| [ 0.319991] __schedule+0x510/0x560
| [ 0.319992] schedule+0x38/0xe8
| [ 0.319994] rt_spin_lock_slowlock_locked+0xf0/0x278
| [ 0.319996] rt_spin_lock_slowlock+0x5c/0x90
| [ 0.319998] rt_spin_lock+0x54/0x58
| [ 0.320000] enable_smccc_arch_workaround_1+0xdc/0x260
| [ 0.320001] __enable_cpu_capability+0x10/0x20
| [ 0.320003] multi_cpu_stop+0x84/0x108
| [ 0.320004] cpu_stopper_thread+0x84/0x108
| [ 0.320008] smpboot_thread_fn+0x1e8/0x2b0
| [ 0.320009] kthread+0x124/0x128
| [ 0.320010] ret_from_fork+0x10/0x18
Switch this to a raw spinlock, as we know this is only called with IRQs masked.
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
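The fix itself is small; sketched (slot allocation elided):

    static DEFINE_RAW_SPINLOCK(bp_lock);  /* was: DEFINE_SPINLOCK(bp_lock) */

    static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
                                          const char *hyp_vecs_start,
                                          const char *hyp_vecs_end)
    {
            /* stop_machine() runs this with IRQs masked: a sleeping lock
             * (what PREEMPT_RT makes of spinlock_t) would BUG here, so a
             * raw spinlock must be used instead. */
            raw_spin_lock(&bp_lock);
            /* ... allocate a hyp vector slot and install fn ... */
            raw_spin_unlock(&bp_lock);
    }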
2018-10-19  arm64: KVM: Guests can skip __install_bp_hardening_cb()s HYP work  (James Morse; 1 file, -0/+9)
enable_smccc_arch_workaround_1() passes NULL as the hyp_vecs start and end if the HVC conduit is in use and ARM_SMCCC_ARCH_WORKAROUND_1 is detected. If the guest kernel happened to be built with KVM_INDIRECT_VECTORS, we go on to allocate a slot, memcpy() the empty workaround in, and do the appropriate cache maintenance. This works as we always tell memcpy() the range is 0, so it never accesses the NULL src pointer, but we still do the cache maintenance. If hyp_vecs_start is NULL, we know we're a guest; just update the fn like the !KVM_INDIRECT_VECTORS version.
Reviewed-by: Julien Thierry <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
2018-10-16  arm64: cpufeature: Trap CTR_EL0 access only where it is necessary  (Suzuki K Poulose; 1 file, -1/+6)
When there is a mismatch in the CTR_EL0 field, we trap access to CTR from EL0 on all CPUs to expose the safe value. However, we could skip trapping on a CPU which matches the safe value.
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
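A sketch of the per-CPU enable callback implementing this (field and variable names assumed from the cpufeature code; treat as illustrative):

    static void
    cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *cap)
    {
            u64 mask = arm64_ftr_reg_ctrel0.strict_mask;

            /* Only trap EL0 CTR reads on CPUs that differ from the
             * system-wide safe value; matching CPUs read it natively. */
            if ((read_cpuid_cachetype() & mask) !=
                (arm64_ftr_reg_ctrel0.sys_val & mask))
                    sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
    }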
2018-10-16  arm64: cpufeature: Fix handling of CTR_EL0.IDC field  (Suzuki K Poulose; 1 file, -3/+24)
CTR_EL0.IDC reports the data cache clean requirements for instruction-to-data coherence. However, if the field is 0, we need to check the CLIDR_EL1 fields to detect the status of the feature. Currently we don't do this, and we generate a warning and taint the kernel when there is a mismatch in the field among the CPUs. Also, userspace doesn't have a reliable way to check the CLIDR_EL1 register for the status. This patch fixes the problem by checking the CLIDR_EL1 fields when (CTR_EL0.IDC == 0) and updating the kernel's copy of CTR_EL0 for the CPU with the actual status of the feature. This allows the sanity-check infrastructure to do the proper checking of the fields and also allows the CTR_EL0 emulation code to supply the real status of the feature. Now, if a CPU has raw CTR_EL0.IDC == 0 and effective IDC == 1 (with overall system-wide IDC == 1), we need to expose the real value to the user. So we trap CTR_EL0 access on the CPU which reports incorrect CTR_EL0.IDC.
Fixes: commit 6ae4b6e057888 ("arm64: Add support for new control bits CTR_EL0.DIC and CTR_EL0.IDC")
Cc: Shanker Donthineni <[email protected]>
Cc: Philip Elcan <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
2018-10-01  arm64: arch_timer: Add workaround for ARM erratum 1188873  (Marc Zyngier; 1 file, -0/+8)
When running on Cortex-A76, a timer access from an AArch32 EL0 task may end up with a corrupted value or register. The workaround for this is to trap these accesses at EL1/EL2 and execute them there. This only affects versions r0p0, r1p0 and r2p0 of the CPU.
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
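The affected-revision match is the usual MIDR range entry; a sketch:

            {
                    .desc = "ARM erratum 1188873",
                    .capability = ARM64_WORKAROUND_1188873,
                    /* Cortex-A76 r0p0 - r2p0 */
                    ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 2, 0),
            },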
2018-09-19  arm64: cpu_errata: Remove ARM64_MISMATCHED_CACHE_LINE_SIZE  (Will Deacon; 1 file, -13/+2)
There's no need to treat mismatched cache-line sizes reported by CTR_EL0 differently to any other mismatched fields that we treat as "STRICT" in the cpufeature code. In both cases we need to trap and emulate EL0 accesses to the register, so drop ARM64_MISMATCHED_CACHE_LINE_SIZE and rely on ARM64_MISMATCHED_CACHE_TYPE instead.
Reviewed-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
[[email protected]: move ARM64_HAS_CNP in the empty cpucaps.h slot]
Signed-off-by: Catalin Marinas <[email protected]>
2018-09-14  arm64: cpu: Move errata and feature enable callbacks closer to callers  (Will Deacon; 1 file, -0/+6)
The cpu errata and feature enable callbacks are only called via their respective arm64_cpu_capabilities structure and therefore shouldn't exist in the global namespace. Move the PAN, RAS and cache maintenance emulation enable callbacks into the same files as their corresponding arm64_cpu_capabilities structures, making them static in the process.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
2018-09-14  arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3  (Will Deacon; 1 file, -2/+24)
On CPUs with support for PSTATE.SSBS, the kernel can toggle the SSBD state without needing to call into firmware. This patch hooks into the existing SSBD infrastructure so that SSBS is used on CPUs that support it, but it's all made horribly complicated by the very real possibility of big/little systems that don't uniformly provide the new capability.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
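A sketch of the cheap toggle, with hypothetical function names; note the inverted sense of the bit (SSBS set permits store bypass, SSBS clear enforces the mitigation), and SET_PSTATE_SSBS() is assumed to emit the MSR immediate form as the PAN/UAO helpers do:

    static void ssbd_mitigation_enable(void)    /* hypothetical name */
    {
            /* SSBS clear: store bypass disabled, mitigation active */
            asm volatile(SET_PSTATE_SSBS(0));
    }

    static void ssbd_mitigation_disable(void)   /* hypothetical name */
    {
            /* SSBS set: speculation allowed, mitigation off */
            asm volatile(SET_PSTATE_SSBS(1));
    }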
2018-07-12  arm64: kill config_sctlr_el1()  (Mark Rutland; 1 file, -2/+1)
Now that we have sysreg_clear_set(), we can consistently use this instead of config_sctlr_el1().
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Dave Martin <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
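For instance, an enable callback clearing an SCTLR_EL1 trap bit goes from the old helper to (sketch; the UCI bit is chosen only for illustration):

            /* was: config_sctlr_el1(SCTLR_EL1_UCI, 0); */
            sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCI, 0);

sysreg_clear_set(reg, clear, set) reads the register, applies the clear/set masks, and only writes back when the value actually changes.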
2018-07-06  arm64: errata: Don't define type field twice for arm64_errata[] entries  (Will Deacon; 1 file, -2/+0)
The ERRATA_MIDR_REV_RANGE macro assigns ARM64_CPUCAP_LOCAL_CPU_ERRATUM to the '.type' field of the 'struct arm64_cpu_capabilities', so there's no need to assign it explicitly as well.
Signed-off-by: Will Deacon <[email protected]>
2018-07-05  arm64: IPI each CPU after invalidating the I-cache for kernel mappings  (Will Deacon; 1 file, -1/+1)
When invalidating the instruction cache for a kernel mapping via flush_icache_range(), it is also necessary to flush the pipeline for other CPUs so that instructions fetched into the pipeline before the I-cache invalidation are discarded. For example, if module 'foo' is unloaded and then module 'bar' is loaded into the same area of memory, a CPU could end up executing instructions from 'foo' when branching into 'bar' if these instructions were fetched into the pipeline before 'foo' was unloaded. Whilst this is highly unlikely to occur in practice, particularly as any exception acts as a context-synchronizing operation, following the letter of the architecture requires us to execute an ISB on each CPU in order for the new instruction stream to be visible.
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
2018-07-05  arm64: Handle mismatched cache type  (Suzuki K Poulose; 1 file, -3/+14)
Track mismatches in the cache type register (CTR_EL0), other than the D/I min line sizes, and trap user accesses if there are any.
Fixes: be68a8aaf925 ("arm64: cpufeature: Fix CTR_EL0 field definitions")
Cc: <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Catalin Marinas <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>