2020-12-02  arm64: uaccess cleanup macro naming  (Mark Rutland; 6 files changed, -26/+26)

Now that the uaccess primitives use LDTR/STTR unconditionally, the uao_{ldp,stp,user_alternative} asm macros are misnamed and have a redundant argument. Let's remove the redundant argument and rename these to user_{ldp,stp,ldst} respectively to clean this up.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
2020-12-02  arm64: uaccess: split user/kernel routines  (Mark Rutland; 2 files changed, -65/+47)

This patch separates arm64's user and kernel memory access primitives into distinct routines, adding new __{get,put}_kernel_nofault() helpers to access kernel memory, upon which core code builds larger copy routines.

The kernel access routines (using LDR/STR) are not affected by PAN (when legitimately accessing kernel memory), nor are they affected by UAO. Switching to KERNEL_DS may set UAO, but this does not adversely affect the kernel access routines.

The user access routines (using LDTR/STTR) are not affected by PAN (when legitimately accessing user memory), but are affected by UAO. As these are only legitimate to use under USER_DS with UAO clear, this should not be problematic.

Routines performing atomics to user memory (futex and deprecated instruction emulation) still need to transiently clear PAN, and these are left as-is. These are never used on kernel memory.

Subsequent patches will refactor the uaccess helpers to remove redundant code, and will also remove the redundant PAN/UAO manipulation.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
2020-12-02  arm64: uaccess: refactor __{get,put}_user  (Mark Rutland; 1 file changed, -17/+27)

As a step towards implementing __{get,put}_kernel_nofault(), this patch splits most user-memory specific logic out of __{get,put}_user(), with the memory access and fault handling in new __{raw_get,put}_mem() helpers.

For now the LDR/LDTR patching is left within the *get_mem() helpers, and will be removed in a subsequent patch.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
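For orientation, a rough sketch of the resulting shape of these helpers, assuming the names from the two uaccess commits above; this is illustrative only, and the real macros in arch/arm64/include/asm/uaccess.h additionally handle multiple access sizes and extable fixups:

	/*
	 * Sketch (assumed shapes, not the exact arm64 macros): a single
	 * memory-access helper is parameterised by the load instruction,
	 * so the user accessor passes the unprivileged "ldtr" form while
	 * the kernel accessor passes plain "ldr".
	 */
	#define __raw_get_mem(ldr_insn, x, ptr, err)			\
		__get_mem_asm(ldr_insn, x, ptr, err)	/* asm + fault fixup (assumed helper) */

	#define __raw_get_user(x, ptr, err)				\
		__raw_get_mem("ldtr", x, ptr, err)	/* user memory only */

	#define __get_kernel_nofault(dst, src, type, err_label)		\
	do {								\
		int __gkn_err = 0;					\
		__raw_get_mem("ldr", *((type *)(dst)),			\
			      (__force type *)(src), __gkn_err);	\
		if (unlikely(__gkn_err))				\
			goto err_label;					\
	} while (0)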
2020-12-02  arm64: uaccess: simplify __copy_user_flushcache()  (Mark Rutland; 1 file changed, -3/+1)

Currently __copy_user_flushcache() open-codes raw_copy_from_user(), and doesn't use uaccess_mask_ptr() on the user address. Let's have it call raw_copy_from_user(), which is both a simplification and ensures that user pointers are masked under speculation.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
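A minimal sketch of the simplified routine, assuming the helper named in the commit message plus an arm64 cache-maintenance helper along the lines of __clean_dcache_area_pop() (that name is an assumption for illustration):

	static unsigned long __copy_user_flushcache(void *to,
						    const void __user *from,
						    unsigned long n)
	{
		/* raw_copy_from_user() masks the user pointer under speculation. */
		unsigned long rc = raw_copy_from_user(to, from, n);

		/* rc is the number of bytes NOT copied; flush only what landed. */
		__clean_dcache_area_pop(to, n - rc);

		return rc;
	}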
2020-12-02  arm64: uaccess: rename privileged uaccess routines  (Mark Rutland; 3 files changed, -8/+8)

We currently have many uaccess_*{enable,disable}*() variants, which subsequent patches will cut down as part of removing set_fs() and friends. Once this simplification is made, most uaccess routines will only need to ensure that the user page tables are mapped in TTBR0, as is currently dealt with by uaccess_ttbr0_{enable,disable}().

The existing uaccess_{enable,disable}() routines ensure that user page tables are mapped in TTBR0, and also disable PAN protections, which is necessary to be able to use atomics on user memory, but also permits unrelated privileged accesses to user memory.

As a preparatory step, let's rename uaccess_{enable,disable}() to uaccess_{enable,disable}_privileged(), highlighting this caveat and discouraging wider misuse. Subsequent patches can reuse the uaccess_{enable,disable}() naming for the common case of ensuring the user page tables are mapped in TTBR0.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
2020-12-02  arm64: sdei: explicitly simulate PAN/UAO entry  (Mark Rutland; 2 files changed, -6/+39)

In preparation for removing addr_limit and set_fs() we must decouple the SDEI PAN/UAO manipulation from the uaccess code, and explicitly reinitialize these as required.

SDEI enters the kernel with a non-architectural exception, and prior to the most recent revision of the specification (ARM DEN 0054B), PSTATE bits (e.g. PAN, UAO) are not manipulated in the same way as for architectural exceptions. Notably, older versions of the spec can be read ambiguously as to whether PSTATE bits are inherited unchanged from the interrupted context or whether they are generated from scratch, with TF-A doing the latter.

We have three cases to consider:

1) The existing TF-A implementation of SDEI will clear PAN and clear UAO (along with other bits in PSTATE) when delivering an SDEI exception.

2) In theory, implementations of SDEI prior to revision B could inherit PAN and UAO (along with other bits in PSTATE) unchanged from the interrupted context. However, in practice such implementations do not exist.

3) Going forward, new implementations of SDEI must clear UAO, and depending on SCTLR_ELx.SPAN must either inherit or set PAN.

As we can ignore (2) we can assume that upon SDEI entry, UAO is always clear, though PAN may be clear, inherited, or set per SCTLR_ELx.SPAN. Therefore, we must explicitly initialize PAN, but do not need to do anything for UAO.

Considering what we need to do:

* When set_fs() is removed, force_uaccess_begin() will have no HW side-effects. As this only clears UAO, which we can assume has already been cleared upon entry, this is not a problem. We do not need to add code to manipulate UAO explicitly.

* PAN may be cleared upon entry (in case 1 above), so where a kernel is built to use PAN and this is supported by all CPUs, the kernel must set PAN upon entry to ensure expected behaviour.

* PAN may be inherited from the interrupted context (in case 3 above), and so where a kernel is not built to use PAN or where PAN support is not uniform across CPUs, the kernel must clear PAN to ensure expected behaviour.

This patch reworks the SDEI code accordingly, explicitly setting PAN to the expected state in all cases. To cater for the cases where the kernel does not use PAN or this is not uniformly supported by hardware we add a new cpu_has_pan() helper which can be used regardless of whether the kernel is built to use PAN. The existing system_uses_ttbr0_pan() is redefined in terms of system_uses_hw_pan() both for clarity and as a minor optimization when HW PAN is not selected.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: James Morse <[email protected]>
Cc: James Morse <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
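For orientation, a sketch of what the two helpers mentioned above could look like, assuming the usual arm64 cpufeature accessors; the exact field shift and helper names are assumptions, not copied from the patch:

	/*
	 * Sketch: cpu_has_pan() queries the local CPU's ID register directly,
	 * so it works whether or not the kernel is built to use PAN. The SW
	 * TTBR0-PAN scheme is only relevant when HW PAN is not in use.
	 */
	static inline bool cpu_has_pan(void)
	{
		u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);

		return cpuid_feature_extract_unsigned_field(mmfr1,
							    ID_AA64MMFR1_PAN_SHIFT);
	}

	static inline bool system_uses_ttbr0_pan(void)
	{
		return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
		       !system_uses_hw_pan();
	}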
2020-12-02  arm64: sdei: move uaccess logic to arch/arm64/  (Mark Rutland; 2 files changed, -20/+12)

The SDEI support code is split across arch/arm64/ and drivers/firmware/; largely this is split so that the arch-specific portions are under arch/arm64, and the management logic is under drivers/firmware/. However, exception entry fixups are currently under drivers/firmware.

Let's move the exception entry fixups under arch/arm64/. This de-clutters the management logic, and puts all the arch-specific portions in one place. Doing this also allows the fixups to be applied earlier, so things like PAN and UAO will be in a known good state before we run other logic. This will also make subsequent refactoring easier.

Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: James Morse <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
2020-12-02  arm64: head.S: always initialize PSTATE  (Mark Rutland; 2 files changed, -11/+26)

As with SCTLR_ELx and other control registers, some PSTATE bits are UNKNOWN out-of-reset, and we may not be able to rely on hardware or firmware to initialize them to our liking prior to entry to the kernel, e.g. in the primary/secondary boot paths and return from idle/suspend.

It would be more robust (and easier to reason about) if we consistently initialized PSTATE to a default value, as we do with control registers. This will ensure that the kernel is not adversely affected by bits it is not aware of, e.g. when support for a feature such as PAN/UAO is disabled.

This patch ensures that PSTATE is consistently initialized at boot time via an ERET. This is not intended to relax the existing requirements (e.g. DAIF bits must still be set prior to entering the kernel). For features detected dynamically (which may require system-wide support), it is still necessary to subsequently modify PSTATE.

As ERET is not always a Context Synchronization Event, an ISB is placed before each exception return to ensure updates to control registers have taken effect. This handles the kernel being entered with SCTLR_ELx.EOS clear (or any future control bits being in an UNKNOWN state).

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
2020-12-02  arm64: head.S: cleanup SCTLR_ELx initialization  (Mark Rutland; 3 files changed, -10/+16)

Let's make SCTLR_ELx initialization a bit clearer by using meaningful names for the initialization values, following the same scheme for SCTLR_EL1 and SCTLR_EL2. These definitions will be used more widely in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
2020-12-02  arm64: head.S: rename el2_setup -> init_kernel_el  (Mark Rutland; 2 files changed, -8/+9)

For a while now el2_setup has performed some basic initialization of EL1 even when the kernel is booted at EL1, so the name is a little misleading. Further, some comments are stale as with VHE it doesn't drop the CPU to EL1.

To clarify things, rename el2_setup to init_kernel_el, and update comments to be clearer as to the function's purpose.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
2020-12-02  arm64: add C wrappers for SET_PSTATE_*()  (Mark Rutland; 3 files changed, -3/+7)

To make callsites easier to read, add trivial C wrappers for the SET_PSTATE_*() helpers, and convert trivial uses over to these. The new wrappers will be used further in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
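A sketch of the kind of wrapper the commit describes; the particular PSTATE bits wrapped here (PAN/UAO/SSBS) are an assumption, and SET_PSTATE_*() expands to an MSR-immediate instruction:

	/* Thin wrappers so callsites read as plain function calls. */
	#define set_pstate_pan(x)	asm volatile(SET_PSTATE_PAN(x))
	#define set_pstate_uao(x)	asm volatile(SET_PSTATE_UAO(x))
	#define set_pstate_ssbs(x)	asm volatile(SET_PSTATE_SSBS(x))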
2020-12-02  arm64: ensure ERET from kthread is illegal  (Mark Rutland; 1 file changed, -9/+8)

For consistency, all tasks have a pt_regs reserved at the highest portion of their task stack. Among other things, this ensures that a task's SP is always pointing within its stack rather than pointing immediately past the end.

While it is never legitimate to ERET from a kthread, we take pains to initialize pt_regs for kthreads as if this were legitimate. As this is never legitimate, the effects of an erroneous return are rarely tested.

Let's simplify things by initializing a kthread's pt_regs such that an ERET is caught as an illegal exception return, and removing the explicit initialization of other exception context. Note that as spectre_v4_enable_task_mitigation() only manipulates the PSTATE within the unused regs this is safe to remove.

As user tasks will have their exception context initialized via start_thread() or start_compat_thread(), this should only impact cases where something has gone very wrong and we'd like that to be clearly indicated.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
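A hedged sketch of the idea in the kthread branch of copy_thread(); marking PSTATE with the illegal-execution bit is an assumption based on the description above, and the actual initialization may differ:

	/*
	 * Sketch: zero the reserved pt_regs and mark PSTATE illegal, so an
	 * erroneous ERET from a kthread is caught as an illegal exception
	 * return rather than silently "returning" somewhere.
	 */
	memset(childregs, 0, sizeof(struct pt_regs));
	childregs->pstate = PSR_MODE_EL1h | PSR_IL_BIT;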
2020-11-30  KVM: arm64: Advertise ID_AA64PFR0_EL1.CSV3=1 if the CPUs are Meltdown-safe  (Marc Zyngier; 3 files changed, -5/+18)

Cores that predate the introduction of ID_AA64PFR0_EL1.CSV3 to the ARMv8 architecture have this field set to 0, even if some of them are not affected by the vulnerability.

The kernel maintains a list of unaffected cores (A53, A55 and a few others) so that it doesn't impose an expensive mitigation unnecessarily.

As we do for CSV2, let's expose the CSV3 property to guests that run on HW that is effectively not vulnerable. This can be reset to zero by writing to the ID register from userspace, ensuring that VMs can be migrated despite the new property being set.

Reported-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-30  KVM: arm64: Delay the polling of the GICR_VPENDBASER.Dirty bit  (Shenming Lu; 6 files changed, -4/+47)

In order to reduce the impact of the VPT parsing happening on the GIC, we can split the vcpu residency in two phases:

- programming GICR_VPENDBASER: this still happens in vcpu_load()
- checking for the VPT parsing to be complete: this can happen on vcpu entry (in kvm_vgic_flush_hwstate())

This allows the GIC and the CPU to work in parallel, removing some of the entry overhead.

Suggested-by: Marc Zyngier <[email protected]>
Signed-off-by: Shenming Lu <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-28  arm64: Make the Meltdown mitigation state available  (Marc Zyngier; 2 files changed, -3/+19)

Our Meltdown mitigation state isn't exposed outside of cpufeature.c, contrary to the rest of the Spectre mitigation state. As we are going to use it in KVM, expose an arm64_get_meltdown_state() helper which returns the same possible values as arm64_get_spectre_v?_state().

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/misc-5.11' into kvmarm-master/next  (Marc Zyngier; 5 files changed, -59/+37)
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/cache-demux' into kvmarm-master/next  (Marc Zyngier; 2 files changed, -10/+31)
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: selftests: Filter out DEMUX registers  (Andrew Jones; 1 file changed, -9/+30)

DEMUX register presence depends on the host's hardware (the CLIDR_EL1 register to be precise). This means there's no set of them that we can bless and that it's possible to encounter new ones when running on different hardware (which would generate "Consider adding them ..." messages, but we'll never want to add them).

Remove the ones we have in the blessed list and filter them out of the new list, but also provide a new command line switch to list them if one so desires.

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-27  KVM: arm64: CSSELR_EL1 max is 13  (Andrew Jones; 1 file changed, -1/+1)

Not counting TnD, which KVM doesn't currently consider, CSSELR_EL1 can have a maximum value of 0b1101 (13), which corresponds to an instruction cache at level 7. With CSSELR_MAX set to 12 we can only select up to cache level 6. Change it to 14.

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
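A small worked example of the encoding behind that limit (ignoring TnD, as the commit does): CSSELR_EL1.Level holds (cache level - 1) in bits [3:1], and InD in bit 0 selects the instruction cache, so the level-7 instruction cache encodes to 13. The snippet below is a self-contained illustration, not KVM code:

	#include <stdio.h>

	/* CSSELR_EL1 selector value for a given cache level (1-7) and InD bit. */
	static unsigned int csselr(unsigned int level, unsigned int ind)
	{
		return ((level - 1) << 1) | ind;
	}

	int main(void)
	{
		/* Instruction cache at level 7: (7 - 1) * 2 + 1 = 13 (0b1101). */
		printf("L7 I-cache selector: %u\n", csselr(7, 1));
		/* Data/unified cache at level 7: (7 - 1) * 2 = 12. */
		printf("L7 D-cache selector: %u\n", csselr(7, 0));
		return 0;
	}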
2020-11-27  KVM: arm64: Remove unused __extended_idmap_trampoline() prototype  (Will Deacon; 1 file changed, -1/+0)

__extended_idmap_trampoline() was removed a long time ago by 3421e9d88d7a ("arm64: KVM: Simplify HYP init/teardown") so remove the unused function prototype.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-27  KVM: arm64: Remove kvm_arch_vm_ioctl_check_extension()  (Will Deacon; 4 files changed, -55/+34)

kvm_arch_vm_ioctl_check_extension() is only called from kvm_vm_ioctl_check_extension(), so we can inline it and remove the extra function.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-27  KVM: arm64: Move 'struct kvm_arch_memory_slot' out of uapi/  (Will Deacon; 2 files changed, -3/+3)

'struct kvm_arch_memory_slot' isn't part of the user ABI, so move it out of the uapi/ headers in case we start using it in future and accidentally back ourselves into a corner.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-27  Merge branch 'kvm-arm64/vector-rework' into kvmarm-master/next  (Marc Zyngier; 13 files changed, -260/+215)
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/pmu-undef' into kvmarm-master/next  (Marc Zyngier; 5 files changed, -53/+32)
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Get rid of the PMU ready state  (Marc Zyngier; 2 files changed, -4/+0)

The PMU ready state has no user left. Goodbye.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Gate kvm_pmu_update_state() on the PMU feature  (Marc Zyngier; 1 file changed, -1/+1)

We currently gate the update of the PMU state on the PMU being "ready". The "ready" state is only set to true when the first vcpu run is successful, and if it isn't, we never reach the update code.

So the "ready" state is never the right thing to check for, and it should instead be the presence of the PMU feature, which makes a bit more sense.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Remove dead PMU sysreg decoding code  (Marc Zyngier; 1 file changed, -5/+4)

The handling of traps in access_pmu_evcntr() has a couple of ominous "else return false;" statements that don't make any sense: the decoding tree covers all the registers that trap to this handler, and returning false implies that we change PC, which we don't.

Get rid of what is evidently dead code.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Remove PMU RAZ/WI handling  (Marc Zyngier; 1 file changed, -30/+0)

There is no RAZ/WI handling allowed for the PMU registers in the ARMv8 architecture. Nobody can remember how we came to the conclusion that we could do this, but the ARMv8 ARM is pretty clear that we cannot.

Remove the RAZ/WI handling of the PMU system registers when it is not configured.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Inject UNDEF on PMU access when no PMU configured  (Marc Zyngier; 1 file changed, -4/+8)

The ARMv8 architecture says that in the absence of FEAT_PMUv3, all the PMU-related registers generate an UNDEF. Let's make sure that all our PMU handlers catch this case by hooking into check_pmu_access_disabled(), and add checks in a couple of other places.

Note that we still cannot deliver an exception into the guest as the offending cases are already caught by the RAZ/WI handling. But this puts us one step closer to architectural compliance.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Refuse illegal KVM_ARM_VCPU_PMU_V3 at reset time  (Marc Zyngier; 2 files changed, -2/+6)

We currently accept configuring a PMU when a vcpu is created, even if the HW (or the host) doesn't support it. This results in failures when attributes get set, which is a bit odd as we should have failed the vcpu creation in the first place.

Move the check to the point where we check the vcpu feature set, and fail early if we cannot support a PMU. This further simplifies the attribute handling.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Set ID_AA64DFR0_EL1.PMUVer to 0 when no PMU support  (Marc Zyngier; 1 file changed, -1/+6)

We always expose the HW view of PMU in ID_AA64DFR0_EL1.PMUVer, even when the PMU feature is disabled, while the architecture says that FEAT_PMUv3 not being implemented should result in this field being zero. Let's follow the architecture's guidance.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Refuse to run VCPU if PMU is not initialized  (Alexandru Elisei; 1 file changed, -4/+4)

When enabling the PMU in kvm_arm_pmu_v3_enable(), KVM returns early if the PMU's 'created' flag is false and skips any other checks. Because PMU emulation is gated only on the VCPU feature being set, this makes it possible for userspace to get away with setting the VCPU feature but not doing any initialization for the PMU. Fix it by returning an error when trying to run the VCPU if the PMU hasn't been initialized correctly.

The PMU is marked as created only if the interrupt ID has been set when using an in-kernel irqchip. This means the same check in kvm_arm_pmu_v3_enable() is redundant, remove it.

Signed-off-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
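A hedged sketch of the check described above, placed in the enable path that runs before the first vcpu run; the 'vcpu->arch.pmu.created' field path and the surrounding body are assumptions drawn from the commit message:

	int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
	{
		/* Feature requested by userspace but PMU never initialized. */
		if (!vcpu->arch.pmu.created)
			return -EINVAL;	/* previously returned 0 and skipped the checks */

		/* ... existing interrupt/irqchip sanity checks follow ... */
		return 0;
	}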
2020-11-27  KVM: arm64: Add kvm_vcpu_has_pmu() helper  (Marc Zyngier; 2 files changed, -5/+6)

There are a number of places where we check for the KVM_ARM_VCPU_PMU_V3 feature. Wrap this check into a new kvm_vcpu_has_pmu(), and use it at the existing locations.

No functional change.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
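A sketch of the helper, assuming it simply wraps the existing feature-bit test (the 'arch.features' bitmap is an assumption; only KVM_ARM_VCPU_PMU_V3 is named by the commit):

	/* True if userspace requested KVM_ARM_VCPU_PMU_V3 for this vcpu. */
	#define kvm_vcpu_has_pmu(vcpu)					\
		(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))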
2020-11-27  Merge branch 'kvm-arm64/host-hvc-table' into kvmarm-master/next  (Marc Zyngier; 8 files changed, -117/+241)
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/copro-no-more' into kvmarm-master/next  (Marc Zyngier; 10 files changed, -274/+151)
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/el2-pc' into kvmarm-master/next  (Marc Zyngier; 21 files changed, -737/+666)
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Avoid repetitive stack access on host EL1 to EL2 exception  (Marc Zyngier; 1 file changed, -3/+3)

Registers x0/x1 get repeatedly pushed and popped during a host HVC call. Instead, leave the registers on the stack, trading a store instruction on the fast path for an add on the slow path.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Simplify __kvm_enable_ssbs()  (Marc Zyngier; 4 files changed, -14/+6)

Move the setting of SSBS directly into the HVC handler, using the C helpers rather than the inline assembly code.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Patch kimage_voffset instead of loading the EL1 value  (Marc Zyngier; 4 files changed, -7/+30)

Directly using the kimage_voffset variable is fine for now, but will become more problematic as we start distrusting EL1. Instead, patch the kimage_voffset into the HYP text, ensuring we don't have to load an untrusted value later on.

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-16  KVM: arm64: Remove redundant hyp vectors entry  (Will Deacon; 3 files changed, -3/+9)

The hyp vectors entry corresponding to HYP_VECTOR_DIRECT (i.e. when neither Spectre-v2 nor Spectre-v3a are present) is unused, as we can simply dispatch straight to __kvm_hyp_vector in this case.

Remove the redundant vector, and massage the logic for resolving a slot to a vectors entry.

Reported-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  arm64: spectre: Consolidate spectre-v3a detection  (Will Deacon; 3 files changed, -11/+15)

The spectre-v3a mitigation is split between cpu_errata.c and spectre.c, with the former handling detection of the problem and the latter handling enabling of the workaround.

Move the detection logic alongside the enabling logic, like we do for the other spectre mitigations.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  arm64: spectre: Rename ARM64_HARDEN_EL2_VECTORS to ARM64_SPECTRE_V3A  (Will Deacon; 8 files changed, -18/+22)

Since ARM64_HARDEN_EL2_VECTORS is really a mitigation for Spectre-v3a, rename it accordingly for consistency with the v2 and v4 mitigations.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Allocate hyp vectors statically  (Will Deacon; 9 files changed, -183/+126)

The EL2 vectors installed when a guest is running point at one of the following configurations for a given CPU:

- Straight at __kvm_hyp_vector
- A trampoline containing an SMC sequence to mitigate Spectre-v2 and then a direct branch to __kvm_hyp_vector
- A dynamically-allocated trampoline which has an indirect branch to __kvm_hyp_vector
- A dynamically-allocated trampoline containing an SMC sequence to mitigate Spectre-v2 and then an indirect branch to __kvm_hyp_vector

The indirect branches mean that VA randomization at EL2 isn't trivially bypassable using Spectre-v3a (where the vector base is readable by the guest).

Rather than populate these vectors dynamically, configure everything statically and use an enumerated type to identify the vector "slot" corresponding to one of the configurations above. This not only simplifies the code, but also makes it much easier to implement at EL2 later on.

Signed-off-by: Will Deacon <[email protected]>
[maz: fixed double call to kvm_init_vector_slots() on nVHE]
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
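For orientation, one way the four configurations above map onto an enumerated slot type; only HYP_VECTOR_DIRECT appears elsewhere in this log (see the "Remove redundant hyp vectors entry" commit above), and the other names are illustrative assumptions:

	/* Sketch: one slot per configuration listed above. */
	enum arm64_hyp_spectre_vector {
		HYP_VECTOR_DIRECT,		/* straight to __kvm_hyp_vector */
		HYP_VECTOR_SPECTRE_DIRECT,	/* SMC sequence, direct branch */
		HYP_VECTOR_INDIRECT,		/* trampoline, indirect branch */
		HYP_VECTOR_SPECTRE_INDIRECT,	/* SMC sequence, indirect branch */
	};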
2020-11-16  KVM: arm64: Re-jig logic when patching hardened hyp vectors  (Will Deacon; 1 file changed, -2/+2)

The hardened hyp vectors are not used on systems running with VHE or CPUs without the ARM64_HARDEN_EL2_VECTORS capability.

Re-jig the checking logic slightly in kvm_patch_vector_branch() so that it's a bit clearer what we're looking for. This is purely cosmetic.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Move BP hardening helpers into spectre.h  (Will Deacon; 4 files changed, -30/+32)

The BP hardening helpers are an integral part of the Spectre-v2 mitigation, so move them into asm/spectre.h and inline the arm64_get_bp_hardening_data() function at the same time.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Make BP hardening globals static instead  (Will Deacon; 3 files changed, -6/+8)

Branch predictor hardening of the hyp vectors is partially driven by a couple of global variables ('__kvm_bp_vect_base' and '__kvm_harden_el2_vector_slot'). However, these are only used within a single compilation unit, so internalise them there instead.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Move kvm_get_hyp_vector() out of header file  (Will Deacon; 2 files changed, -45/+44)

kvm_get_hyp_vector() has only one caller, so move it out of kvm_mmu.h and inline it into a new function, cpu_set_hyp_vector(), for setting the vector.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Tidy up kvm_map_vector()  (Will Deacon; 1 file changed, -14/+14)

The bulk of the work in kvm_map_vector() is conditional on the ARM64_HARDEN_EL2_VECTORS capability, so return early if that is not set and make the code a bit easier to read.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Remove redundant Spectre-v2 code from kvm_map_vector()  (Will Deacon; 1 file changed, -5/+0)

'__kvm_bp_vect_base' is only used when dealing with the hardened vectors so remove the redundant assignments in kvm_map_vectors().

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-15  Linux 5.10-rc4  (Linus Torvalds; 1 file changed, -1/+1)