path: root/arch/arm64/kernel/cpu_errata.c
Age | Commit message | Author | Files | Lines
2018-07-05arm64: Fix mismatched cache line size detectionSuzuki K Poulose1-2/+4
If there is a mismatch in the I/D min line size, we must always use the system wide safe value both in applications and in the kernel, while performing cache operations. However, we have been checking more bits than just the min line sizes, which triggers false negatives. We may need to trap the user accesses in such cases, but not necessarily patch the kernel. This patch fixes the check to do the right thing as advertised. A new capability will be added to check mismatches in other fields and ensure we trap the CTR accesses. Fixes: be68a8aaf925 ("arm64: cpufeature: Fix CTR_EL0 field definitions") Cc: <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Catalin Marinas <[email protected]> Reported-by: Will Deacon <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
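A minimal sketch of the corrected check, assuming the CTR_CACHE_MINLINE_MASK, read_cpuid_cachetype() and arm64_ftr_reg_ctrel0 names used in this area of the tree; only the I/D minimum line size fields are compared against the system-wide safe value:

    /* Compare only DminLine/IminLine against the system-wide safe CTR_EL0. */
    static bool
    has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
                                   int scope)
    {
            u64 mask = CTR_CACHE_MINLINE_MASK;      /* I/D min line size fields */

            return (read_cpuid_cachetype() & mask) !=
                   (arm64_ftr_reg_ctrel0.sys_val & mask);
    }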
2018-06-08Merge tag 'arm64-upstream' of ↵Linus Torvalds1-0/+182
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull arm64 updates from Catalin Marinas: "Apart from the core arm64 and perf changes, the Spectre v4 mitigation touches the arm KVM code and the ACPI PPTT support touches drivers/ (acpi and cacheinfo). I should have the maintainers' acks in place. Summary: - Spectre v4 mitigation (Speculative Store Bypass Disable) support for arm64 using SMC firmware call to set a hardware chicken bit - ACPI PPTT (Processor Properties Topology Table) parsing support and enable the feature for arm64 - Report signal frame size to user via auxv (AT_MINSIGSTKSZ). The primary motivation is Scalable Vector Extensions which requires more space on the signal frame than the currently defined MINSIGSTKSZ - ARM perf patches: allow building arm-cci as module, demote dev_warn() to dev_dbg() in arm-ccn event_init(), miscellaneous cleanups - cmpwait() WFE optimisation to avoid some spurious wakeups - L1_CACHE_BYTES reverted back to 64 (for performance reasons that have to do with some network allocations) while keeping ARCH_DMA_MINALIGN to 128. cache_line_size() returns the actual hardware Cache Writeback Granule - Turn LSE atomics on by default in Kconfig - Kernel fault reporting tidying - Some #include and miscellaneous cleanups" * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (53 commits) arm64: Fix syscall restarting around signal suppressed by tracer arm64: topology: Avoid checking numa mask for scheduler MC selection ACPI / PPTT: fix build when CONFIG_ACPI_PPTT is not enabled arm64: cpu_errata: include required headers arm64: KVM: Move VCPU_WORKAROUND_2_FLAG macros to the top of the file arm64: signal: Report signal frame size to userspace via auxv arm64/sve: Thin out initialisation sanity-checks for sve_max_vl arm64: KVM: Add ARCH_WORKAROUND_2 discovery through ARCH_FEATURES_FUNC_ID arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests arm64: KVM: Add ARCH_WORKAROUND_2 support for guests arm64: KVM: Add HYP per-cpu accessors arm64: ssbd: Add prctl interface for per-thread mitigation arm64: ssbd: Introduce thread flag to control userspace mitigation arm64: ssbd: Restore mitigation status on CPU resume arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation arm64: ssbd: Add global mitigation state accessor arm64: Add 'ssbd' command-line option arm64: Add ARCH_WORKAROUND_2 probing arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2 arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1 ...
2018-06-05arm64: cpu_errata: include required headersArnd Bergmann1-0/+2
Without including psci.h and arm-smccc.h, we now get a build failure in some configurations: arch/arm64/kernel/cpu_errata.c: In function 'arm64_update_smccc_conduit': arch/arm64/kernel/cpu_errata.c:278:10: error: 'psci_ops' undeclared (first use in this function); did you mean 'sysfs_ops'? arch/arm64/kernel/cpu_errata.c: In function 'arm64_set_ssbd_mitigation': arch/arm64/kernel/cpu_errata.c:311:3: error: implicit declaration of function 'arm_smccc_1_1_hvc' [-Werror=implicit-function-declaration] arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL); Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
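The fix itself is just the two missing includes at the top of cpu_errata.c, along the lines of:

    #include <linux/arm-smccc.h>
    #include <linux/psci.h>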
2018-05-31arm64: ssbd: Restore mitigation status on CPU resumeMarc Zyngier1-1/+1
On a system where firmware can dynamically change the state of the mitigation, the CPU will always come up with the mitigation enabled, including when coming back from suspend. If the user has requested "no mitigation" via a command line option, let's enforce it by calling into the firmware again to disable it. Similarly, for a resume from hibernate, the mitigation could have been disabled by the boot kernel. Let's ensure that it is set back on in that case. Acked-by: Will Deacon <[email protected]> Reviewed-by: Mark Rutland <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-05-31arm64: ssbd: Skip apply_ssbd if not using dynamic mitigationMarc Zyngier1-0/+14
In order to avoid checking arm64_ssbd_callback_required on each kernel entry/exit even if no mitigation is required, let's add yet another alternative that by default jumps over the mitigation, and that gets nop'ed out if we're doing dynamic mitigation. Think of it as a poor man's static key... Reviewed-by: Julien Grall <[email protected]> Reviewed-by: Mark Rutland <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
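A sketch of how such an alternative callback could look, assuming the arm64_get_ssbd_state()/ARM64_SSBD_KERNEL names from the rest of this series; the default instruction is a branch over apply_ssbd, and it is rewritten to a NOP only when dynamic mitigation is in use:

    void __init arm64_enable_wa2_handling(struct alt_instr *alt,
                                          __le32 *origptr, __le32 *updptr,
                                          int nr_inst)
    {
            BUG_ON(nr_inst != 1);   /* we patch a single branch instruction */

            /* Only NOP out the "skip apply_ssbd" branch for dynamic mitigation */
            if (arm64_get_ssbd_state() == ARM64_SSBD_KERNEL)
                    *updptr = cpu_to_le32(aarch64_insn_gen_nop());
    }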
2018-05-31arm64: Add 'ssbd' command-line optionMarc Zyngier1-16/+87
On a system where the firmware implements ARCH_WORKAROUND_2, it may be useful to either permanently enable or disable the workaround for cases where the user decides that they'd rather not get a trap overhead, and keep the mitigation permanently on or off instead of switching it on exception entry/exit. In any case, default to the mitigation being enabled. Reviewed-by: Julien Grall <[email protected]> Reviewed-by: Mark Rutland <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
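A sketch of the command-line parsing, assuming the option takes the force-on, force-off and kernel values and that the global state variable is called ssbd_state:

    static int __init ssbd_cfg(char *buf)
    {
            if (!buf || !buf[0])
                    return -EINVAL;

            if (!strncmp(buf, "force-on", 8))
                    ssbd_state = ARM64_SSBD_FORCE_ENABLE;
            else if (!strncmp(buf, "force-off", 9))
                    ssbd_state = ARM64_SSBD_FORCE_DISABLE;
            else if (!strncmp(buf, "kernel", 6))
                    ssbd_state = ARM64_SSBD_KERNEL; /* dynamic, the default */
            else
                    return -EINVAL;

            return 0;
    }
    early_param("ssbd", ssbd_cfg);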
2018-05-31arm64: Add ARCH_WORKAROUND_2 probingMarc Zyngier1-0/+69
As for Spectre variant-2, we rely on SMCCC 1.1 to provide the discovery mechanism for detecting the SSBD mitigation. A new capability is also allocated for that purpose, and a config option. Reviewed-by: Julien Grall <[email protected]> Reviewed-by: Mark Rutland <[email protected]> Acked-by: Will Deacon <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
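A simplified sketch of the SMCCC 1.1 probe (the helper name is a placeholder), assuming the standard arm_smccc_1_1_hvc()/arm_smccc_1_1_smc() wrappers and the ARM_SMCCC_ARCH_WORKAROUND_2 constant; firmware answers the ARCH_FEATURES query with a negative value when the workaround is not implemented:

    static bool ssbd_smccc_supported(void)
    {
            struct arm_smccc_res res;

            if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
                    return false;   /* ARCH_FEATURES needs SMCCC 1.1 */

            switch (psci_ops.conduit) {
            case PSCI_CONDUIT_HVC:
                    arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                                      ARM_SMCCC_ARCH_WORKAROUND_2, &res);
                    break;
            case PSCI_CONDUIT_SMC:
                    arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                                      ARM_SMCCC_ARCH_WORKAROUND_2, &res);
                    break;
            default:
                    return false;
            }

            /* >= 0 means implemented; the exact value encodes the mode */
            return (int)res.a0 >= 0;
    }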
2018-05-31arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2Marc Zyngier1-0/+2
In a heterogeneous system, we can end up with both affected and unaffected CPUs. Let's check their status before calling into the firmware. Reviewed-by: Julien Grall <[email protected]> Reviewed-by: Mark Rutland <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-05-31arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1Marc Zyngier1-0/+24
In order for the kernel to protect itself, let's call the SSBD mitigation implemented by the higher exception level (either hypervisor or firmware) on each transition between userspace and kernel. We must take the PSCI conduit into account in order to target the right exception level, hence the introduction of a runtime patching callback. Reviewed-by: Mark Rutland <[email protected]> Reviewed-by: Julien Grall <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
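A sketch of the runtime patching callback, assuming the aarch64_insn_get_hvc_value()/aarch64_insn_get_smc_value() helpers; the placeholder instruction in the entry code is rewritten to either an HVC or an SMC depending on the PSCI conduit:

    void __init arm64_update_smccc_conduit(struct alt_instr *alt,
                                           __le32 *origptr, __le32 *updptr,
                                           int nr_inst)
    {
            u32 insn;

            BUG_ON(nr_inst != 1);

            switch (psci_ops.conduit) {
            case PSCI_CONDUIT_HVC:
                    insn = aarch64_insn_get_hvc_value();
                    break;
            case PSCI_CONDUIT_SMC:
                    insn = aarch64_insn_get_smc_value();
                    break;
            default:
                    return;         /* no conduit: leave the placeholder alone */
            }

            *updptr = cpu_to_le32(insn);
    }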
2018-05-09arm64: capabilities: Add NVIDIA Denver CPU to bp_harden listDavid Gilhooley1-0/+1
The NVIDIA Denver CPU also needs a PSCI call to harden the branch predictor. Signed-off-by: David Gilhooley <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-04-11arm64: Move the content of bpi.S to hyp-entry.SMarc Zyngier1-2/+2
bpi.S was introduced as we were starting to build the Spectre v2 mitigation framework, and it was rather unclear that it would become strictly KVM specific. Now that the picture is a lot clearer, let's move the content of that file to hyp-entry.S, where it actually belongs. Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-04-11arm64: Get rid of __smccc_workaround_1_hvc_*Marc Zyngier1-6/+3
The very existence of __smccc_workaround_1_hvc_* is a thinko, as KVM will never use a HVC call to perform the branch prediction invalidation. Even as a nested hypervisor, it would use an SMC instruction. Let's get rid of it. Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-04-11arm64: capabilities: Rework EL2 vector hardening entryMarc Zyngier1-9/+11
Since 5e7951ce19ab ("arm64: capabilities: Clean up midr range helpers"), capabilities must be represented with a single entry. If multiple CPU types can use the same capability, then they need to be enumerated in a list. The EL2 hardening stuff (which affects both A57 and A72) managed to escape the conversion in the above patch thanks to the 4.17 merge window. Let's fix it now. Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-04-11arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardeningShanker Donthineni1-47/+19
The function SMCCC_ARCH_WORKAROUND_1 was introduced as part of the SMC Calling Convention v1.1 to mitigate CVE-2017-5715. This patch uses the standard call SMCCC_ARCH_WORKAROUND_1 for Falkor chips instead of the silicon provider service ID 0xC2001700. Cc: <[email protected]> # 4.14+ Signed-off-by: Shanker Donthineni <[email protected]> [maz: reworked errata framework integration] Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-04-09Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvmLinus Torvalds1-5/+20
Pull kvm updates from Paolo Bonzini: "ARM: - VHE optimizations - EL2 address space randomization - speculative execution mitigations ("variant 3a", aka execution past invalid privilege register access) - bugfixes and cleanups PPC: - improvements for the radix page fault handler for HV KVM on POWER9 s390: - more kvm stat counters - virtio gpu plumbing - documentation - facilities improvements x86: - support for VMware magic I/O port and pseudo-PMCs - AMD pause loop exiting - support for AMD core performance extensions - support for synchronous register access - expose nVMX capabilities to userspace - support for Hyper-V signaling via eventfd - use Enlightened VMCS when running on Hyper-V - allow userspace to disable MWAIT/HLT/PAUSE vmexits - usual roundup of optimizations and nested virtualization bugfixes Generic: - API selftest infrastructure (though the only tests are for x86 as of now)" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (174 commits) kvm: x86: fix a prototype warning kvm: selftests: add sync_regs_test kvm: selftests: add API testing infrastructure kvm: x86: fix a compile warning KVM: X86: Add Force Emulation Prefix for "emulate the next instruction" KVM: X86: Introduce handle_ud() KVM: vmx: unify adjacent #ifdefs x86: kvm: hide the unused 'cpu' variable KVM: VMX: remove bogus WARN_ON in handle_ept_misconfig Revert "KVM: X86: Fix SMRAM accessing even if VM is shutdown" kvm: Add emulation for movups/movupd KVM: VMX: raise internal error for exception during invalid protected mode state KVM: nVMX: Optimization: Dont set KVM_REQ_EVENT when VMExit with nested_run_pending KVM: nVMX: Require immediate-exit when event reinjected to L2 and L1 event pending KVM: x86: Fix misleading comments on handling pending exceptions KVM: x86: Rename interrupt.pending to interrupt.injected KVM: VMX: No need to clear pending NMI/interrupt on inject realmode interrupt x86/kvm: use Enlightened VMCS when running on Hyper-V x86/hyper-v: detect nested features x86/hyper-v: define struct hv_enlightened_vmcs and clean field bits ...
2018-04-04Merge tag 'arm64-upstream' of ↵Linus Torvalds1-137/+179
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull arm64 updates from Will Deacon: "Nothing particularly stands out here, probably because people were tied up with spectre/meltdown stuff last time around. Still, the main pieces are: - Rework of our CPU features framework so that we can whitelist CPUs that don't require kpti even in a heterogeneous system - Support for the IDC/DIC architecture extensions, which allow us to elide instruction and data cache maintenance when writing out instructions - Removal of the large memory model which resulted in suboptimal codegen by the compiler and increased the use of literal pools, which could potentially be used as ROP gadgets since they are mapped as executable - Rework of forced signal delivery so that the siginfo_t is well-formed and handling of show_unhandled_signals is consolidated and made consistent between different fault types - More siginfo cleanup based on the initial patches from Eric Biederman - Workaround for Cortex-A55 erratum #1024718 - Some small ACPI IORT updates and cleanups from Lorenzo Pieralisi - Misc cleanups and non-critical fixes" * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (70 commits) arm64: uaccess: Fix omissions from usercopy whitelist arm64: fpsimd: Split cpu field out from struct fpsimd_state arm64: tlbflush: avoid writing RES0 bits arm64: cmpxchg: Include linux/compiler.h in asm/cmpxchg.h arm64: move percpu cmpxchg implementation from cmpxchg.h to percpu.h arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC arm64: fpsimd: include <linux/init.h> in fpsimd.h drivers/perf: arm_pmu_platform: do not warn about affinity on uniprocessor perf: arm_spe: include linux/vmalloc.h for vmap() Revert "arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size)" arm64: cpufeature: Avoid warnings due to unused symbols arm64: Add work around for Arm Cortex-A55 Erratum 1024718 arm64: Delay enabling hardware DBM feature arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35 arm64: capabilities: Handle shared entries arm64: capabilities: Add support for checks based on a list of MIDRs arm64: Add helpers for checking CPU MIDR against a range arm64: capabilities: Clean up midr range helpers arm64: capabilities: Change scope of VHE to Boot CPU feature ...
2018-03-28arm64: Add temporary ERRATA_MIDR_ALL_VERSIONS compatibility macroMarc Zyngier1-2/+6
MIDR_ALL_VERSIONS is changing, and won't have the same meaning in 4.17, and the right thing to use will be ERRATA_MIDR_ALL_VERSIONS. In order to cope with the merge window, let's add a compatibility macro that will allow a relatively smooth transition, and that can be removed post 4.17-rc1. Signed-off-by: Marc Zyngier <[email protected]>
2018-03-28Revert "arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening"Marc Zyngier1-19/+36
Creates far too many conflicts with arm64/for-next/core, to be resent post -rc1. This reverts commit f9f5dc19509bbef6f5e675346f1a7d7b846bdb12. Signed-off-by: Marc Zyngier <[email protected]>
2018-03-27arm64: cpufeature: Avoid warnings due to unused symbolsWill Deacon1-3/+3
An allnoconfig build complains about unused symbols due to functions that are called via conditional cpufeature and cpu_errata table entries. Annotate these as __maybe_unused if they are likely to be generic, or predicate their compilation on the same option as the table entry if they are specific to a given alternative. Signed-off-by: Will Deacon <[email protected]>
2018-03-26arm64: capabilities: Handle shared entriesSuzuki K Poulose1-7/+48
Some capabilities have different criteria for detection and different associated actions, even though they all share the same capability bit. So far we have used multiple entries with the same capability bit to handle this. This is prone to errors, as the cpu_enable is invoked for each entry, irrespective of whether the detection rule applies to the CPU or not. It also complicates other helpers, e.g. __this_cpu_has_cap. This patch adds a wrapper entry to cover all the possible variations of a capability by maintaining a list of matches + cpu_enable callbacks. To avoid complicating the prototypes for "matches()", we use arm64_cpu_capabilities to maintain the list and ignore all the other fields except matches & cpu_enable. This ensures: 1) the capability is set when at least one of the entries detects it, 2) action is taken only for the entries that match, which avoids explicit checks in cpu_enable() before taking some action. The only constraint here is that all the entries should have the same "type" (i.e, scope and conflict rules). If a cpu_enable() method is associated with multiple matches for a single capability, care should be taken that either the match criteria are mutually exclusive, or that the method is robust against being called multiple times. This also reverts the changes introduced by commit 67948af41f2e6818ed ("arm64: capabilities: Handle duplicate entries for a capability"). Cc: Robin Murphy <[email protected]> Reviewed-by: Dave Martin <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
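Roughly, the wrapper entry looks like the sketch below; the capability number, list name and enable hook are placeholders, while match_list and multi_entry_cap_matches are the pieces this patch introduces:

    static const struct arm64_cpu_capabilities example_cap_list[] = {
            {
                    .matches = is_affected_midr_range,      /* detection rule #1 */
                    .cpu_enable = example_enable_fixup,     /* placeholder hook */
                    /* MIDR range fields for one affected implementation */
            },
            {
                    .matches = is_affected_midr_range,      /* detection rule #2 */
                    /* no cpu_enable: detection only */
            },
            {},
    };

    static const struct arm64_cpu_capabilities shared_entry = {
            .desc = "Example shared workaround",
            .capability = ARM64_WORKAROUND_EXAMPLE,         /* placeholder */
            .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
            .matches = multi_entry_cap_matches, /* true if any list entry matches */
            .match_list = example_cap_list,
    };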
2018-03-26arm64: capabilities: Add support for checks based on a list of MIDRsSuzuki K Poulose1-37/+44
Add helpers for detecting an erratum on a list of MIDR ranges of affected CPUs that share the same workaround. Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Reviewed-by: Dave Martin <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-03-26arm64: Add helpers for checking CPU MIDR against a rangeSuzuki K Poulose1-11/+5
Add helpers for checking if the given CPU midr falls in a range of variants/revisions for a given model. Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Reviewed-by: Dave Martin <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
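A sketch of the helpers, assuming they sit next to the existing MIDR macros in cputype.h:

    struct midr_range {
            u32 model;
            u32 rv_min;     /* MIDR_CPU_VAR_REV(min variant, min revision) */
            u32 rv_max;     /* MIDR_CPU_VAR_REV(max variant, max revision) */
    };

    #define MIDR_RANGE(m, v_min, r_min, v_max, r_max)        \
            {                                                 \
                    .model = (m),                             \
                    .rv_min = MIDR_CPU_VAR_REV(v_min, r_min), \
                    .rv_max = MIDR_CPU_VAR_REV(v_max, r_max), \
            }

    static inline bool is_midr_in_range(u32 midr, const struct midr_range *range)
    {
            return MIDR_IS_CPU_MODEL_RANGE(midr, range->model,
                                           range->rv_min, range->rv_max);
    }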
2018-03-26arm64: capabilities: Clean up midr range helpersSuzuki K Poulose1-48/+60
We are about to introduce generic MIDR range helpers. Clean up the existing helpers in erratum handling, preparing them to use the generic version. Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Reviewed-by: Dave Martin <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-03-26arm64: capabilities: Add flags to handle the conflicts on late CPUSuzuki K Poulose1-4/+4
When a CPU is brought up, it is checked against the caps that are known to be enabled on the system (via verify_local_cpu_capabilities()). Based on the state of the capability on the CPU vs. that of the system, we could have the following combinations of conflict:

    x-----------------------------x
    | Type | System   | Late CPU |
    |-----------------------------|
    |  a   |   y      |    n     |
    |-----------------------------|
    |  b   |   n      |    y     |
    x-----------------------------x

Case (a) is not permitted for caps which are system features, which the system expects all the CPUs to have (e.g. VHE), while (a) is ignored for all errata workarounds. However, there could be exceptions to the plain filtering approach; e.g., KPTI is an optional feature for a late CPU as long as the system already enables it. Case (b) is not permitted for errata workarounds that cannot be activated after the kernel has finished booting, and we ignore (b) for features. Here, yet again, KPTI is an exception: if a late CPU needs KPTI we are too late to enable it (because we change the allocation of ASIDs etc). Add two different flags to indicate how the conflict should be handled: ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - CPUs may have the capability; ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU - CPUs may not have the capability. Now that we have the flags to describe the behaviour of the errata and the features as we treat them, define types for ERRATUM and FEATURE. Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Reviewed-by: Dave Martin <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-03-26arm64: capabilities: Prepare for fine grained capabilitiesSuzuki K Poulose1-4/+4
We use arm64_cpu_capabilities to represent CPU ELF HWCAPs exposed to userspace and the CPU hwcaps used by the kernel, which include cpu features and CPU errata workarounds. Capabilities have some properties that decide how they should be treated:

1) Detection, i.e. scope: A cap could be "detected" either:
   - if it is present on at least one CPU (SCOPE_LOCAL_CPU), or
   - if it is present on all the CPUs (SCOPE_SYSTEM)

2) When is it enabled? A cap is treated as "enabled" when the system takes some action based on whether the capability is detected or not, e.g. setting some control register, patching the kernel code. Right now, we treat all caps as enabled at boot-time, after all the CPUs are brought up by the kernel. But there are certain caps which are enabled early during the boot (e.g. VHE, GIC_CPUIF for NMI) and the kernel starts using them even before the secondary CPUs are brought up. We would need a way to describe this for each capability.

3) Conflict on a late CPU: When a CPU is brought up, it is checked against the caps that are known to be enabled on the system (via verify_local_cpu_capabilities()). Based on the state of the capability on the CPU vs. that of the system, we could have the following combinations of conflict:

    x-----------------------------x
    | Type | System   | Late CPU |
    |-----------------------------|
    |  a   |   y      |    n     |
    |-----------------------------|
    |  b   |   n      |    y     |
    x-----------------------------x

Case (a) is not permitted for caps which are system features, which the system expects all the CPUs to have (e.g. VHE), while (a) is ignored for all errata workarounds. However, there could be exceptions to the plain filtering approach; e.g., KPTI is an optional feature for a late CPU as long as the system already enables it. Case (b) is not permitted for errata workarounds which require a workaround that cannot be delayed, and we ignore (b) for features. Here, yet again, KPTI is an exception: if a late CPU needs KPTI we are too late to enable it (because we change the allocation of ASIDs etc). So this calls for a lot more fine grained behaviour for each capability. And if we define all the attributes to control their behaviour properly, we may be able to use a single table for the CPU hwcaps (which cover errata and features, not the ELF HWCAPs). This is a preparatory step to get there. More bits will be added for the properties listed above. We are going to use a bit-mask to encode all the properties of a capability. This patch encodes the "SCOPE" of the capability. As such there is no change in how the capabilities are treated. Cc: Mark Rutland <[email protected]> Reviewed-by: Dave Martin <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-03-26arm64: capabilities: Move errata processing codeSuzuki K Poulose1-33/+0
We have errata work around processing code in cpu_errata.c, which calls back into helpers defined in cpufeature.c. Now that we are going to make the handling of capabilities generic, by adding the information to each capability, move the errata work around specific processing code. No functional changes. Cc: Will Deacon <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Andre Przywara <[email protected]> Reviewed-by: Dave Martin <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-03-26arm64: capabilities: Update prototype for enable call backDave Martin1-28/+25
We issue the enable() call back for all CPU hwcaps capabilities available on the system, on all the CPUs. So far we have ignored the argument passed to the call back, which had a prototype to accept a "void *" for use with on_each_cpu() and later with stop_machine(). However, with commit 0a0d111d40fd1 ("arm64: cpufeature: Pass capability structure to ->enable callback"), there are some users of the argument who want the matching capability struct pointer where there are multiple matching criteria for a single capability. Clean up the declaration of the call back to make it clear. 1) Renamed to cpu_enable(), to imply taking necessary actions on the called CPU for the entry. 2) Pass a const pointer to the capability, to allow the call back to check the entry (e.g., to check if any action is needed on the CPU). 3) We don't care about the result of the call back, turning this into a void return. Cc: Will Deacon <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Andre Przywara <[email protected]> Cc: James Morse <[email protected]> Acked-by: Robin Murphy <[email protected]> Reviewed-by: Julien Thierry <[email protected]> Signed-off-by: Dave Martin <[email protected]> [suzuki: convert more users, rename call back and drop results] Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
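In other words, the callback field moves from something like int (*enable)(void *) to a shape along these lines:

    /* Called on every CPU for which the capability/erratum entry matched;
     * the entry itself is passed so the callback can inspect it. */
    void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);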
2018-03-19arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardeningShanker Donthineni1-36/+19
The function SMCCC_ARCH_WORKAROUND_1 was introduced as part of the SMC Calling Convention v1.1 to mitigate CVE-2017-5715. This patch uses the standard call SMCCC_ARCH_WORKAROUND_1 for Falkor chips instead of the silicon provider service ID 0xC2001700. Cc: <[email protected]> # 4.14+ Signed-off-by: Shanker Donthineni <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2018-03-19arm64: Enable ARM64_HARDEN_EL2_VECTORS on Cortex-A57 and A72Marc Zyngier1-0/+12
Cortex-A57 and A72 are vulnerable to the so-called "variant 3a" of Meltdown, where an attacker can speculatively obtain the value of a privileged system register. By enabling ARM64_HARDEN_EL2_VECTORS on these CPUs, obtaining VBAR_EL2 is not disclosing the hypervisor mappings anymore. Acked-by: Catalin Marinas <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2018-03-19arm64: Make BP hardening slot counter availableMarc Zyngier1-5/+4
We're about to need to allocate hardening slots from other parts of the kernel (in order to support ARM64_HARDEN_EL2_VECTORS). Turn the counter into an atomic_t and make it available to the rest of the kernel. Also add BP_HARDEN_EL2_SLOTS as the number of slots instead of the hardcoded 4... Acked-by: Catalin Marinas <[email protected]> Reviewed-by: Andrew Jones <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2018-03-09arm64: Relax ARM_SMCCC_ARCH_WORKAROUND_1 discoveryMarc Zyngier1-2/+2
A recent update to the ARM SMCCC ARCH_WORKAROUND_1 specification allows firmware to return a non-zero, positive value to describe that although the mitigation is implemented at the higher exception level, the CPU on which the call is made is not affected. Let's relax the check on the return value from ARCH_WORKAROUND_1 so that we only error out if the returned value is negative. Fixes: b092201e0020 ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support") Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
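The relaxed check boils down to treating only negative returns as an error, roughly:

    /* a0 < 0: workaround not available on this CPU.
     * a0 == 0: implemented; a0 > 0: implemented, but this CPU is unaffected. */
    if ((int)res.a0 < 0)
            return false;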
2018-03-09arm64/kernel: enable A53 erratum #843419 handling at runtimeArd Biesheuvel1-0/+9
Omit patching of ADRP instruction at module load time if the current CPUs are not susceptible to the erratum. Signed-off-by: Ard Biesheuvel <[email protected]> [will: Drop duplicate initialisation of .def_scope field] Signed-off-by: Will Deacon <[email protected]>
2018-03-09arm64/errata: add REVIDR handling to frameworkArd Biesheuvel1-3/+18
In some cases, core variants that are affected by a certain erratum also exist in versions that have the erratum fixed, and this fact is recorded in a dedicated bit in system register REVIDR_EL1. Since the architecture does not require that a certain bit retains its meaning across different variants of the same model, each such REVIDR bit is tightly coupled to a certain revision/variant value, and so we need a list of revidr_mask/midr pairs to carry this information. So add the struct member and the associated macros and handling to allow REVIDR fixes to be taken into account. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-02-12arm64: Add missing Falkor part number for branch predictor hardeningShanker Donthineni1-0/+9
References to CPU part number MIDR_QCOM_FALKOR were dropped from the mailing list patch due to mainline/arm64 branch dependency. So this patch adds the missing part number. Fixes: ec82b567a74f ("arm64: Implement branch predictor hardening for Falkor") Acked-by: Marc Zyngier <[email protected]> Signed-off-by: Shanker Donthineni <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06arm64: Kill PSCI_GET_VERSION as a variant-2 workaroundMarc Zyngier1-32/+13
Now that we've standardised on SMCCC v1.1 to perform the branch prediction invalidation, let's drop the previous band-aid. If vendors haven't updated their firmware to do SMCCC 1.1, they haven't updated PSCI either, so we don't lose anything. Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-02-06arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening supportMarc Zyngier1-1/+67
Add the detection and runtime code for ARM_SMCCC_ARCH_WORKAROUND_1. It is lovely. Really. Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-01-23arm64: Branch predictor hardening for Cavium ThunderX2Jayachandran C1-0/+10
Use PSCI based mitigation for speculative execution attacks targeting the branch predictor. We use the same mechanism as the one used for Cortex-A CPUs: we expect the PSCI version call to have a side effect of clearing the BTBs. Acked-by: Will Deacon <[email protected]> Signed-off-by: Jayachandran C <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-01-23arm64: Run enable method for errata work arounds on late CPUsSuzuki K Poulose1-3/+6
When a CPU is brought up after we have finalised the system wide capabilities (i.e, features and errata), we make sure the new CPU doesn't need a new errata work around which has not been detected already. However we don't run the enable() method on the new CPU for the errata work arounds already detected. This could cause the new CPU to run without potential work arounds. It is up to the "enable()" method to decide if this CPU should do something about the errata. Fixes: commit 6a6efbb45b7d95c84 ("arm64: Verify CPU errata work arounds on hotplugged CPU") Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Andre Przywara <[email protected]> Cc: Dave Martin <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-01-14arm64: cpu_errata: Add Kryo to Falkor 1003 errataStephen Boyd1-0/+21
The Kryo CPUs are also affected by the Falkor 1003 errata, so we need to do the same workaround on Kryo CPUs. The MIDR is slightly more complicated here, where the PART number is not always the same when looking at all the bits from 15 to 4. Drop the lower 8 bits and just look at the top 4 to see if it's '2' and then consider those as Kryo CPUs. This covers all the combinations without having to list them all out. Fixes: 38fd94b0275c ("arm64: Work around Falkor erratum 1003") Acked-by: Will Deacon <[email protected]> Signed-off-by: Stephen Boyd <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
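A sketch of the resulting check, assuming the MIDR_* mask/shift names from cputype.h and the entry's midr_model field; the part-number field is reduced to its top nibble before comparing against a Kryo model value:

    static bool is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
    {
            u32 model = read_cpuid_id();

            /* Keep implementer, architecture, and only bits [15:12] of the
             * part number: Kryo parts have 0x2 in that nibble. */
            model &= MIDR_IMPLEMENTOR_MASK | (0xf00 << MIDR_PARTNUM_SHIFT) |
                     MIDR_ARCHITECTURE_MASK;

            return model == entry->midr_model;
    }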
2018-01-08arm64: Implement branch predictor hardening for FalkorShanker Donthineni1-2/+38
Falkor is susceptible to branch predictor aliasing and can theoretically be attacked by malicious code. This patch implements a mitigation for these attacks, preventing any malicious entries from affecting other victim contexts. Signed-off-by: Shanker Donthineni <[email protected]> [will: fix label name when !CONFIG_KVM and remove references to MIDR_FALKOR] Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-01-08arm64: Implement branch predictor hardening for affected Cortex-A CPUsWill Deacon1-0/+42
Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing and can theoretically be attacked by malicious code. This patch implements a PSCI-based mitigation for these CPUs when available. The call into firmware will invalidate the branch predictor state, preventing any malicious entries from affecting other victim contexts. Co-developed-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2018-01-08arm64: Add skeleton to harden the branch predictor against aliasing attacksWill Deacon1-0/+74
Aliasing attacks against CPU branch predictors can allow an attacker to redirect speculative control flow on some CPUs and potentially divulge information from one context to another. This patch adds initial skeleton code behind a new Kconfig option to enable implementation-specific mitigations against these attacks for CPUs that are affected. Co-developed-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2017-06-15arm64: Add workaround for Cavium Thunder erratum 30115David Daney1-0/+21
Some Cavium Thunder CPUs suffer a problem where a KVM guest may inadvertently cause the host kernel to quit receiving interrupts. Use the Group-0/1 trapping in order to deal with it. [maz]: Adapted patch to the Group-0/1 trapping, reworked commit log Tested-by: Alexander Graf <[email protected]> Acked-by: Catalin Marinas <[email protected]> Reviewed-by: Eric Auger <[email protected]> Signed-off-by: David Daney <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Christoffer Dall <[email protected]>
2017-04-07arm64: cpu_errata: Add capability to advertise Cortex-A73 erratum 858921Marc Zyngier1-0/+8
In order to work around Cortex-A73 erratum 858921 in a subsequent patch, add the required capability that advertise the erratum. As the configuration option it depends on is not present yet, this has no immediate effect. Acked-by: Thomas Gleixner <[email protected]> Acked-by: Daniel Lezcano <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2017-04-07arm64: cpu_errata: Allow an erratum to be match for all revisions of a coreMarc Zyngier1-0/+7
Some minor errata may not be fixed in further revisions of a core, leading to a situation where the workaround needs to be updated each time an updated core is released. Introduce a MIDR_ALL_VERSIONS match helper that will work for all versions of that MIDR, once and for all. Acked-by: Thomas Gleixner <[email protected]> Acked-by: Mark Rutland <[email protected]> Acked-by: Daniel Lezcano <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
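A sketch of the helper, assuming the entry fields used by the existing MIDR_RANGE() initialisers in this file:

    #define MIDR_ALL_VERSIONS(model)                                \
            .def_scope = SCOPE_LOCAL_CPU,                           \
            .matches = is_affected_midr_range,                      \
            .midr_model = model,                                    \
            .midr_range_min = 0,                                    \
            .midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)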
2017-02-10arm64: Work around Falkor erratum 1003Christopher Covington1-0/+9
The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum is triggered, page table entries using the new translation table base address (BADDR) will be allocated into the TLB using the old ASID. All circumstances leading to the incorrect ASID being cached in the TLB arise when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory operation is in the process of performing a translation using the specific TTBRx_EL1 being written, and the memory operation uses a translation table descriptor designated as non-global. EL2 and EL3 code changing the EL1&0 ASID is not subject to this erratum because hardware is prohibited from performing translations from an out-of-context translation regime. Consider the following pseudo code:

    write new BADDR and ASID values to TTBRx_EL1

Replacing the above sequence with the one below will ensure that no TLB entries with an incorrect ASID are used by software:

    write reserved value to TTBRx_EL1[ASID]
    ISB
    write new value to TTBRx_EL1[BADDR]
    ISB
    write new value to TTBRx_EL1[ASID]
    ISB

When the above sequence is used, page table entries using the new BADDR value may still be incorrectly allocated into the TLB using the reserved ASID. Yet this will not reduce functionality, since TLB entries incorrectly tagged with the reserved ASID will never be hit by a later instruction. Based on work by Shanker Donthineni <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Christopher Covington <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2017-02-01arm64: Work around Falkor erratum 1009Christopher Covington1-0/+9
During a TLB invalidate sequence targeting the inner shareable domain, Falkor may prematurely complete the DSB before all loads and stores using the old translation are observed. Instruction fetches are not subject to the conditions of this erratum. If the original code sequence includes multiple TLB invalidate instructions followed by a single DSB, only one of the TLB instructions needs to be repeated to work around this erratum. While the erratum only applies to cases in which the TLBI specifies the inner-shareable domain (*IS form of TLBI) and the DSB is ISH form or stronger (OSH, SYS), this change applies the workaround overabundantly, to local TLBI and DSB NSH sequences as well, for simplicity. Based on work by Shanker Donthineni <[email protected]> Signed-off-by: Christopher Covington <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2017-01-13arm64: errata: Provide macro for major and minor cpu revisionsRobert Richter1-6/+9
Definitions of cpu ranges are hard to read if the cpu variant is not zero. Provide a MIDR_CPU_VAR_REV() macro to describe the full hardware revision of a cpu including variant and (minor) revision. Signed-off-by: Robert Richter <[email protected]> Signed-off-by: Will Deacon <[email protected]>
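A sketch of the macro and of how an erratum range reads with it, assuming the three-argument MIDR_RANGE() helper used in this file at the time (the model name below is a placeholder):

    #define MIDR_CPU_VAR_REV(var, rev) \
            (((var) << MIDR_VARIANT_SHIFT) | (rev))

    /* e.g., "variant 1, revisions 0..2" becomes: */
    MIDR_RANGE(MIDR_EXAMPLE_PART, MIDR_CPU_VAR_REV(1, 0), MIDR_CPU_VAR_REV(1, 2))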
2016-10-20arm64: cpufeature: Schedule enable() calls instead of calling them via IPIJames Morse1-1/+2
The enable() call for a cpufeature/errata is called using on_each_cpu(). This issues a cross-call IPI to get the work done. Implicitly, this stashes the running PSTATE in SPSR when the CPU receives the IPI, and restores it when we return. This means an enable() call can never modify PSTATE. To allow PAN to do this, change the on_each_cpu() call to use stop_machine(). This schedules the work on each CPU which allows us to modify PSTATE. This involves changing the prototype of all the enable() functions. enable_cpu_capabilities() is called during boot and enables the feature on all online CPUs. This path now uses stop_machine(). CPU features for hotplug'd CPUs are enabled by verify_local_cpu_features() which only acts on the local CPU, and can already modify the running PSTATE as it is called from secondary_start_kernel(). Reported-by: Tony Thompson <[email protected]> Reported-by: Vladimir Murzin <[email protected]> Signed-off-by: James Morse <[email protected]> Cc: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2016-09-09arm64: Work around systems with mismatched cache line sizesSuzuki K Poulose1-0/+22
Systems with differing CPU i-cache/d-cache line sizes can cause problems with the cache management by software when the execution is migrated from one to another. Usually, the application reads the cache size on a CPU and then uses that length to perform cache operations. However, if it gets migrated to another CPU with a smaller cache line size, things could go completely wrong. To prevent such cases, always use the smallest cache line size among the CPUs. The kernel CPU feature infrastructure already keeps track of the safe value for all CPUID registers including CTR. This patch works around the problem by:

- For the kernel, dynamically patch the kernel to read the cache size from the system wide copy of CTR_EL0.
- For applications, trap read accesses to CTR_EL0 (by clearing the SCTLR.UCT) and emulate the mrs instruction to return the system wide safe value of CTR_EL0.

For faster access (i.e., avoiding a lookup of the system wide value of CTR_EL0 via read_system_reg), we keep track of the pointer to the table entry for CTR_EL0 in the CPU feature infrastructure. Cc: Mark Rutland <[email protected]> Cc: Andre Przywara <[email protected]> Cc: Will Deacon <[email protected]> Cc: Catalin Marinas <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
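A sketch of the userspace half, assuming the config_sctlr_el1() helper and the SCTLR_EL1_UCT bit; the kernel half is handled by alternative-patched code that reads the system-wide CTR value instead of the local register:

    static int cpu_enable_trap_ctr_access(void *__unused)
    {
            /* Clear SCTLR_EL1.UCT so that EL0 reads of CTR_EL0 trap to EL1,
             * where the mrs is emulated with the system-wide safe value. */
            config_sctlr_el1(SCTLR_EL1_UCT, 0);
            return 0;
    }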