author     Will Deacon <[email protected]>       2020-11-13 11:38:44 +0000
committer  Marc Zyngier <[email protected]>      2020-11-16 10:43:05 +0000
commit     b881cdce77b48bd488f268041f32951bab89bb0f (patch)
tree       ee08e27ab59bdf43f567d3edaa8e0ea5151d14a3 /arch/arm64/kvm/va_layout.c
parent     da592e68a5a333b81111bd6336838764732f723e (diff)
KVM: arm64: Allocate hyp vectors statically
The EL2 vectors installed when a guest is running point at one of the
following configurations for a given CPU:
- Straight at __kvm_hyp_vector
- A trampoline containing an SMC sequence to mitigate Spectre-v2 and
then a direct branch to __kvm_hyp_vector
- A dynamically-allocated trampoline which has an indirect branch to
__kvm_hyp_vector
- A dynamically-allocated trampoline containing an SMC sequence to
mitigate Spectre-v2 and then an indirect branch to __kvm_hyp_vector
The indirect branches mean that VA randomization at EL2 isn't trivially
bypassable using Spectre-v3a (where the vector base is readable by the
guest).
Rather than populate these vectors dynamically, configure everything
statically and use an enumerated type to identify the vector "slot"
corresponding to one of the configurations above. This both simplifies
the code and makes it much easier to implement at EL2 later on.
Signed-off-by: Will Deacon <[email protected]>
[maz: fixed double call to kvm_init_vector_slots() on nVHE]
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
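As a rough illustration of the slot scheme described above, here is a minimal sketch of such an enumerated type; the identifier names and the trailing count entry are assumptions made for illustration, not necessarily the ones introduced by the patch:

enum hyp_vector_slot {
	HYP_VECTOR_DIRECT,		/* straight at __kvm_hyp_vector */
	HYP_VECTOR_SPECTRE_DIRECT,	/* SMC sequence, then direct branch */
	HYP_VECTOR_INDIRECT,		/* indirect branch to __kvm_hyp_vector */
	HYP_VECTOR_SPECTRE_INDIRECT,	/* SMC sequence, then indirect branch */
	__HYP_VECTOR_MAX,		/* number of slots */
};

With one slot per configuration, the vector tables can be laid out back to back at build time, and per-CPU selection reduces to picking a slot index rather than generating trampoline code at runtime.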
Diffstat (limited to 'arch/arm64/kvm/va_layout.c')
-rw-r--r--  arch/arm64/kvm/va_layout.c  11
1 file changed, 1 insertion, 10 deletions
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index 760db2c84b9b..cc8e8756600f 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -137,7 +137,7 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
 	u64 addr;
 	u32 insn;
 
-	BUG_ON(nr_inst != 5);
+	BUG_ON(nr_inst != 4);
 
 	if (!cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS) ||
 	    WARN_ON_ONCE(has_vhe())) {
@@ -160,15 +160,6 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
 	 */
 	addr += KVM_VECTOR_PREAMBLE;
 
-	/* stp x0, x1, [sp, #-16]! */
-	insn = aarch64_insn_gen_load_store_pair(AARCH64_INSN_REG_0,
-						AARCH64_INSN_REG_1,
-						AARCH64_INSN_REG_SP,
-						-16,
-						AARCH64_INSN_VARIANT_64BIT,
-						AARCH64_INSN_LDST_STORE_PAIR_PRE_INDEX);
-	*updptr++ = cpu_to_le32(insn);
-
 	/* movz x0, #(addr & 0xffff) */
 	insn = aarch64_insn_gen_movewide(AARCH64_INSN_REG_0,
 					 (u16)addr,
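With the stp gone, the dynamically patched sequence that remains is the movz/movk loads that build the vector address in x0 plus the final indirect branch, which is why the BUG_ON now expects 4 instructions. As a hedged sketch of the static layout this enables (the helper name and the 2KB slot size are illustrative assumptions, not the kernel's own identifiers), a slot index can be turned into a vector address with simple arithmetic on a single base:

/*
 * Sketch only: assume each EL2 vector table occupies 2KB (16 vectors of
 * 128 bytes each), laid out back to back from 'base'.
 */
#define HYP_VECTOR_SLOT_SIZE	0x800UL

static inline unsigned long hyp_vector_slot_to_addr(unsigned long base,
						    int slot)
{
	return base + (unsigned long)slot * HYP_VECTOR_SLOT_SIZE;
}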