| author | Marc Zyngier <[email protected]> | 2024-06-14 15:45:43 +0100 |
|---|---|---|
| committer | Oliver Upton <[email protected]> | 2024-06-19 08:14:37 +0000 |
| commit | 8e236efa4cd2df8b270784a33d7e334933789f1a (patch) | |
| tree | 2c90f19b82c0572125b11b23e6e177aa78358dab /arch/arm64/kvm/nested.c | |
| parent | 67fda56e76da4c4be9a8502d7211dbba024576d2 (diff) | |
KVM: arm64: nv: Handle TLB invalidation targeting L2 stage-1
While dealing with TLB invalidation targeting the guest hypervisor's
own stage-1 was easy, doing the same thing for its own guests is
a bit more involved.
Since such an invalidation is scoped by VMID, it needs to apply to
all s2_mmu contexts that have been tagged by that VMID, irrespective
of the value of VTTBR_EL2.BADDR.
So for each s2_mmu context matching that VMID, we invalidate the
corresponding TLBs, each context having its own "physical" VMID.
Co-developed-by: Jintack Lim <[email protected]>
Co-developed-by: Christoffer Dall <[email protected]>
Signed-off-by: Jintack Lim <[email protected]>
Signed-off-by: Christoffer Dall <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Oliver Upton <[email protected]>
Diffstat (limited to 'arch/arm64/kvm/nested.c')
| -rw-r--r-- | arch/arm64/kvm/nested.c | 35 |
1 file changed, 35 insertions, 0 deletions
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 114a3f59c28b..a2734d135211 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -364,6 +364,41 @@ int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 	return ret;
 }
 
+/*
+ * We can have multiple *different* MMU contexts with the same VMID:
+ *
+ * - S2 being enabled or not, hence differing by the HCR_EL2.VM bit
+ *
+ * - Multiple vcpus using private S2s (huh huh...), hence differing by the
+ *   VTTBR_EL2.BADDR address
+ *
+ * - A combination of the above...
+ *
+ * We can always identify which MMU context to pick at run-time. However,
+ * TLB invalidation involving a VMID must take action on all the TLBs using
+ * this particular VMID. This translates into applying the same invalidation
+ * operation to all the contexts that are using this VMID. Moar phun!
+ */
+void kvm_s2_mmu_iterate_by_vmid(struct kvm *kvm, u16 vmid,
+				const union tlbi_info *info,
+				void (*tlbi_callback)(struct kvm_s2_mmu *,
+						      const union tlbi_info *))
+{
+	write_lock(&kvm->mmu_lock);
+
+	for (int i = 0; i < kvm->arch.nested_mmus_size; i++) {
+		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+		if (!kvm_s2_mmu_valid(mmu))
+			continue;
+
+		if (vmid == get_vmid(mmu->tlb_vttbr))
+			tlbi_callback(mmu, info);
+	}
+
+	write_unlock(&kvm->mmu_lock);
+}
+
 struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
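For context, here is a minimal sketch of how a trap handler for a by-VMID TLBI issued by the L1 guest hypervisor could drive this iterator, matching the fan-out described in the commit message. Only kvm_s2_mmu_iterate_by_vmid(), get_vmid() and the tlbi_info/callback types come from the patch above; the handler and callback names, the vcpu_read_sys_reg()/VTTBR_EL2 decode and the __kvm_tlb_flush_vmid flush are assumptions for illustration, not part of this commit.

#include <linux/kvm_host.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_nested.h>

/*
 * Hypothetical callback, invoked once per shadow stage-2 MMU whose guest
 * VMID matches. Each matching context has its own "physical" VMID, so
 * flushing that context's VMID is sufficient.
 */
static void flush_one_s2(struct kvm_s2_mmu *mmu, const union tlbi_info *info)
{
	/* Assumed flush path; a real handler may scope this more finely. */
	kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
}

/*
 * Hypothetical trap handler for a TLBI VMALL*-style operation from the L1
 * hypervisor: decode the VMID it targeted from its VTTBR_EL2, then fan out
 * to every shadow stage-2 context tagged with that VMID, regardless of
 * their VTTBR_EL2.BADDR.
 */
static void handle_l1_tlbi_by_vmid(struct kvm_vcpu *vcpu)
{
	u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);

	/* No per-range payload needed for a full-VMID flush in this sketch. */
	kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr), NULL,
				   flush_one_s2);
}

The iterator walks kvm->arch.nested_mmus under the mmu_lock write lock, so callbacks run with the array stable and no context can be recycled mid-invalidation.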