2020-10-30  arm64: cpufeature: upgrade hyp caps to final  (Mark Rutland, 3 files, -15/+24)
We finalize caps before initializing kvm hyp code, and any use of cpus_have_const_cap() in kvm hyp code generates redundant and potentially unsound code to read the cpu_hwcaps array. A number of helper functions used in both hyp context and regular kernel context use cpus_have_const_cap(), as some regular kernel code runs before the capabilities are finalized. It's tedious and error-prone to write separate copies of these for hyp and non-hyp code. So that we can avoid the redundant code, let's automatically upgrade cpus_have_const_cap() to cpus_have_final_cap() when used in hyp context. With this change, there's never a reason to access the cpu_hwcaps array from hyp code, and we don't need to create an NVHE alias for this. This should have no effect on non-hyp code. Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Cc: David Brazdil <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
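For illustration, the upgraded helper can be sketched as follows (assuming the is_hyp_code() helper from the patch below; the exact code may differ):

    static __always_inline bool cpus_have_const_cap(int num)
    {
            if (is_hyp_code())
                    return cpus_have_final_cap(num); /* caps are finalized before hyp runs */
            else if (system_capabilities_finalized())
                    return __cpus_have_const_cap(num);
            else
                    return cpus_have_cap(num);
    }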
2020-10-30  arm64: cpufeature: reorder cpus_have_{const, final}_cap()  (Mark Rutland, 1 file, -8/+8)
In a subsequent patch we'll modify cpus_have_const_cap() to call cpus_have_final_cap(), and hence we need to define cpus_have_final_cap() first. To make subsequent changes easier to follow, this patch reorders the two without making any other changes. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Cc: David Brazdil <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-10-30  KVM: arm64: Factor out is_{vhe,nvhe}_hyp_code()  (Mark Rutland, 1 file, -5/+16)
Currently has_vhe() detects whether it is being compiled for VHE/NVHE hyp code based on preprocessor definitions, and uses this knowledge to avoid redundant runtime checks. There are other cases where we'd like to use this knowledge, so let's factor the preprocessor checks out into separate helpers. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Cc: David Brazdil <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
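The factored-out helpers amount to compile-time checks along these lines (a sketch following the patch):

    static __always_inline bool is_vhe_hyp_code(void)
    {
            /* Only defined while compiling the VHE hyp object */
            return __is_defined(__KVM_VHE_HYPERVISOR__);
    }

    static __always_inline bool is_nvhe_hyp_code(void)
    {
            /* Only defined while compiling the nVHE hyp object */
            return __is_defined(__KVM_NVHE_HYPERVISOR__);
    }

    static __always_inline bool is_hyp_code(void)
    {
            return is_vhe_hyp_code() || is_nvhe_hyp_code();
    }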
2020-10-29  KVM: arm64: Force PTE mapping on fault resulting in a device mapping  (Santosh Shukla, 1 file, -0/+1)
VFIO allows a device driver to resolve a fault by mapping an MMIO range. This can subsequently cause user_mem_abort() to try and compute a huge mapping based on the MMIO pfn, which is a sure recipe for things to go wrong. Instead, force a PTE mapping when the pfn faulted in has a device mapping. Fixes: 6d674e28f642 ("KVM: arm/arm64: Properly handle faulting of device mappings") Suggested-by: Marc Zyngier <[email protected]> Signed-off-by: Santosh Shukla <[email protected]> [maz: rewritten commit message] Signed-off-by: Marc Zyngier <[email protected]> Reviewed-by: Gavin Shan <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected]
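The fix boils down to one line in user_mem_abort() (a sketch, not the verbatim diff):

    if (kvm_is_device_pfn(pfn)) {
            device = true;
            force_pte = true;       /* never infer a huge mapping from a device pfn */
    }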
2020-10-29  KVM: arm64: Use fallback mapping sizes for contiguous huge page sizes  (Gavin Shan, 1 file, -7/+19)
Although huge pages can be created out of multiple contiguous PMDs or PTEs, the corresponding sizes are not supported at Stage-2 yet. Instead of failing the mapping, fall back to the nearest supported mapping size (CONT_PMD to PMD and CONT_PTE to PTE respectively). Suggested-by: Marc Zyngier <[email protected]> Signed-off-by: Gavin Shan <[email protected]> [maz: rewritten commit message] Signed-off-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
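The fallback can be pictured as a switch on the VMA shift, roughly as follows (a sketch of the approach; helper names follow the existing mmu code):

    switch (vma_shift) {
    case CONT_PMD_SHIFT:
            vma_shift = PMD_SHIFT;          /* CONT_PMD unsupported at stage-2 */
            fallthrough;
    case PMD_SHIFT:
            if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
                    break;
            fallthrough;
    case CONT_PTE_SHIFT:
            vma_shift = PAGE_SHIFT;         /* CONT_PTE unsupported at stage-2 */
            force_pte = true;
            fallthrough;
    case PAGE_SHIFT:
            break;
    }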
2020-10-29  KVM: arm64: Fix masks in stage2_pte_cacheable()  (Will Deacon, 1 file, -1/+1)
stage2_pte_cacheable() tries to figure out whether the mapping installed in its 'pte' parameter is cacheable or not. Unfortunately, it fails miserably because it extracts the memory attributes from the entry using FIELD_GET(), which returns the attributes shifted down to bit 0, but then compares this with the unshifted value generated by the PAGE_S2_MEMATTR() macro. A direct consequence of this bug is that cache maintenance is silently skipped, which in turn causes 32-bit guests to crash early on when their set/way maintenance is trapped but not emulated correctly. Fix the broken masks by avoiding the use of FIELD_GET() altogether. Fixes: 6d9d2115c480 ("KVM: arm64: Add support for stage-2 map()/unmap() in generic page-table") Reported-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Cc: Quentin Perret <[email protected]> Link: https://lore.kernel.org/r/[email protected]
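In essence, the fixed helper looks like this (a sketch; constants follow the generic page-table code):

    static bool stage2_pte_cacheable(kvm_pte_t pte)
    {
            /*
             * FIELD_GET() would shift the attributes down to bit 0, while
             * PAGE_S2_MEMATTR() produces the encoding at its in-place
             * offset, so the old comparison could never match. Mask in
             * place and compare like with like instead.
             */
            u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;

            return memattr == PAGE_S2_MEMATTR(NORMAL);
    }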
2020-10-29  KVM: arm64: Fix AArch32 handling of DBGD{CCINT,SCRext} and DBGVCR  (Marc Zyngier, 2 files, -3/+4)
The DBGD{CCINT,SCRext} and DBGVCR register entries in the cp14 array are missing their target register, resulting in all accesses being targeted at the guard sysreg (indexed by __INVALID_SYSREG__). Point the emulation code at the actual register entries. Fixes: bdfb4b389c8d ("arm64: KVM: add trap handlers for AArch32 debug registers") Signed-off-by: Marc Zyngier <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected]
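Illustratively, the fix is to supply the missing target register in each affected descriptor (field layout approximate, shown for DBGDCCINT):

    /* Before: no target register, so accesses hit the guard sysreg */
    { Op1( 0), CRn( 0), CRm( 2), Op2( 0), trap_debug_regs, NULL },
    /* After: route the access to the vcpu's DBGDCCINT copy */
    { Op1( 0), CRn( 0), CRm( 2), Op2( 0), trap_debug_regs, NULL, cp14_DBGDCCINT },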
2020-10-29  KVM: arm64: Allocate stage-2 pgd pages with GFP_KERNEL_ACCOUNT  (Will Deacon, 1 file, -1/+1)
For consistency with the rest of the stage-2 page-table page allocations (performed using a kvm_mmu_memory_cache), ensure that __GFP_ACCOUNT is included in the GFP flags for the PGD pages. Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Reviewed-by: Gavin Shan <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Quentin Perret <[email protected]> Link: https://lore.kernel.org/r/[email protected]
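That is, roughly (a sketch of the PGD allocation in kvm_pgtable_stage2_init()):

    pgt->pgd = alloc_pages_exact(pgd_sz, GFP_KERNEL_ACCOUNT | __GFP_ZERO);
    if (!pgt->pgd)
            return -ENOMEM;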
2020-10-29  KVM: arm64: Drop useless PAN setting on host EL1 to EL2 transition  (Marc Zyngier, 1 file, -2/+0)
Setting PSTATE.PAN when entering EL2 on nVHE doesn't make much sense as this bit only means something for translation regimes that include EL0. This obviously isn't the case in the nVHE case, so let's drop this setting. Signed-off-by: Marc Zyngier <[email protected]> Reviewed-by: Vladimir Murzin <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-10-29  KVM: arm64: Remove leftover kern_hyp_va() in nVHE TLB invalidation  (Marc Zyngier, 1 file, -1/+0)
The new calling convention says that pointers coming from the SMCCC interface are turned into their HYP version in the host HVC handler. However, there is still a stray kern_hyp_va() in the TLB invalidation code, which could result in a corrupted pointer. Drop the spurious conversion. Fixes: a071261d9318 ("KVM: arm64: nVHE: Fix pointers during SMCCC convertion") Signed-off-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-10-29  KVM: arm64: Don't corrupt tpidr_el2 on failed HVC call  (Marc Zyngier, 1 file, -7/+16)
The hyp-init code starts by stashing a register in TPIDR_EL2 in order to free it up. This happens whether or not the HVC call is legal. Although nothing wrong seems to come out of it, it feels odd to alter the EL2 state for something that eventually returns an error. Instead, use the fact that we know exactly which bits of the __kvm_hyp_init call are non-zero to perform the check with a series of EOR/ROR instructions, combined with a build-time check that the value is the one we expect. Signed-off-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-10-02  Merge branches 'kvm-arm64/pt-new' and 'kvm-arm64/pmu-5.9' into kvmarm-master/next  (Marc Zyngier, 3 files, -21/+30)
Signed-off-by: Marc Zyngier <[email protected]>
2020-10-02  KVM: arm64: Ensure user_mem_abort() return value is initialised  (Will Deacon, 1 file, -1/+1)
If a change in the MMU notifier sequence number forces user_mem_abort() to return early when attempting to handle a stage-2 fault, we return uninitialised stack to kvm_handle_guest_abort(), which could potentially result in the injection of an external abort into the guest or a spurious return to userspace. Neither of these are what we want to do. Initialise 'ret' to 0 in user_mem_abort() so that bailing due to a change in the MMU notifier sequence number is treated as though the fault was handled. Reported-by: kernel test robot <[email protected]> Reported-by: Dan Carpenter <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Reviewed-by: Alexandru Elisei <[email protected]> Reviewed-by: Gavin Shan <[email protected]> Cc: Gavin Shan <[email protected]> Cc: Alexandru Elisei <[email protected]> Link: https://lore.kernel.org/r/[email protected]
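The shape of the fix (a heavily elided sketch):

    static int user_mem_abort(/* ... */)
    {
            int ret = 0;    /* bailing on a stale MMU notifier seqno now reads as "handled" */
            /* ... */
            if (mmu_notifier_retry(kvm, mmu_seq))
                    goto out_unlock;        /* previously returned uninitialised stack */
            /* ... */
    out_unlock:
            spin_unlock(&kvm->mmu_lock);
            return ret;
    }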
2020-10-02  KVM: arm64: Pass level hint to TLBI during stage-2 permission fault  (Will Deacon, 1 file, -7/+16)
Alex pointed out that we don't pass a level hint to the TLBI instruction when handling a stage-2 permission fault, even though the walker does at some point have the level information in its hands. Rework stage2_update_leaf_attrs() so that it can optionally return the level of the updated pte to its caller, which can in turn be used to provide the correct TLBI level hint. Reported-by: Alexandru Elisei <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Reviewed-by: Alexandru Elisei <[email protected]> Reviewed-by: Gavin Shan <[email protected]> Cc: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected] Link: https://lore.kernel.org/r/[email protected]
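The reworked call site then looks something like this (a sketch; names follow the generic page-table code):

    u32 level;
    int ret;

    ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level);
    if (!ret)
            /* Invalidate only the level that was actually updated */
            kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, pgt->mmu, addr, level);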
2020-10-02  KVM: arm64: Fix some documentation build warnings  (Mauro Carvalho Chehab, 1 file, -13/+13)
As warned by make htmldocs:

  .../Documentation/virt/kvm/devices/vcpu.rst:70: WARNING: Malformed table. Text in column margin in table line 2.

  =======  ======================================================
  -ENODEV: PMUv3 not supported or GIC not initialized
  -ENXIO:  PMUv3 not properly configured or in-kernel irqchip not
           configured as required prior to calling this attribute
  -EBUSY:  PMUv3 already initialized
  -EINVAL: Invalid filter range
  =======  ======================================================

The ':' characters on two of the lines overflow into the column margin. Besides that, the other tables in the file don't use ':', so just drop them. While here, also fix another warning introduced by the same patch:

  .../Documentation/virt/kvm/devices/vcpu.rst:88: WARNING: Block quote ends without a blank line; unexpected unindent.

by marking the C code as a literal block. Fixes: 8be86a5eec04 ("KVM: arm64: Document PMU filtering API") Signed-off-by: Mauro Carvalho Chehab <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Paolo Bonzini <[email protected]> Link: https://lore.kernel.org/r/b5385dd0213f1f070667925bf7a807bf5270ba78.1601616399.git.mchehab+huawei@kernel.org
2020-09-30  Merge branch 'kvm-arm64/hyp-pcpu' into kvmarm-master/next  (Marc Zyngier, 43 files, -1197/+1262)
Signed-off-by: Marc Zyngier <[email protected]>
2020-09-30  Merge remote-tracking branch 'arm64/for-next/ghostbusters' into kvm-arm64/hyp-pcpu  (Marc Zyngier, 32 files, -1046/+981)
Signed-off-by: Marc Zyngier <[email protected]>
2020-09-30  kvm: arm64: Remove unnecessary hyp mappings  (David Brazdil, 2 files, -36/+0)
With all nVHE per-CPU variables being part of the hyp per-CPU region, mapping them individually is no longer necessary. They are mapped to hyp as part of the overall per-CPU region. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Andrew Scull <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-30  kvm: arm64: Set up hyp percpu data for nVHE  (David Brazdil, 5 files, -6/+87)
Add a hyp percpu section to the linker script and rename the corresponding ELF sections of hyp/nvhe object files. This moves all nVHE-specific percpu variables to the new hyp percpu section. Allocate a sufficient amount of memory for all percpu hyp regions at global KVM init time and create corresponding hyp mappings. The base addresses of hyp percpu regions are kept in a dynamically allocated array in the kernel. Add NULL checks in the PMU event-reset code as it may run before KVM memory is initialized. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
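The allocation step looks roughly like this (a sketch of the KVM init path; helper names approximate):

    for_each_possible_cpu(cpu) {
            struct page *page;

            /* One private copy of the hyp per-CPU region per possible CPU */
            page = alloc_pages(GFP_KERNEL, nvhe_percpu_order());
            if (!page)
                    goto out_err;

            memcpy(page_address(page), CHOOSE_NVHE_SYM(__per_cpu_start),
                   nvhe_percpu_size());
            kvm_arm_hyp_percpu_base[cpu] = (unsigned long)page_address(page);
    }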
2020-09-30  kvm: arm64: Create separate instances of kvm_host_data for VHE/nVHE  (David Brazdil, 6 files, -9/+13)
Host CPU context is stored in a global per-cpu variable `kvm_host_data`. In preparation for introducing an independent per-CPU region for nVHE hyp, create two separate instances of `kvm_host_data`, one for VHE and one for nVHE. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-30  kvm: arm64: Duplicate arm64_ssbd_callback_required for nVHE hyp  (David Brazdil, 4 files, -2/+19)
Hyp keeps track of which cores require the SSBD callback by accessing a kernel-proper global variable. Create an nVHE symbol of the same name and copy the value from kernel proper to nVHE as KVM is being enabled on a core. Done in preparation for separating percpu memory owned by kernel proper and nVHE. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-30  kvm: arm64: Add helpers for accessing nVHE hyp per-cpu vars  (David Brazdil, 1 file, -2/+23)
Defining a per-CPU variable in hyp/nvhe will result in its name being prefixed with __kvm_nvhe_. Add helpers for declaring these variables in kernel proper and accessing them with this_cpu_ptr and per_cpu_ptr. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
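A sketch of the helpers (macro bodies approximate, shown in their final form once the hyp per-CPU region and its base array exist):

    /* Declare a hyp per-CPU variable in kernel proper, under its prefixed name */
    #define DECLARE_KVM_NVHE_PER_CPU(type, sym) \
            DECLARE_PER_CPU(type, kvm_nvhe_sym(sym))

    /* Resolve a hyp per-CPU variable for a given CPU from kernel proper */
    #define per_cpu_ptr_nvhe_sym(sym, cpu)                                  \
            ({                                                              \
                    unsigned long base, off;                                \
                    base = kvm_arm_hyp_percpu_base[cpu];                    \
                    off = (unsigned long)&CHOOSE_NVHE_SYM(sym) -            \
                          (unsigned long)&CHOOSE_NVHE_SYM(__per_cpu_start); \
                    base ? (typeof(CHOOSE_NVHE_SYM(sym)) *)(base + off)     \
                         : NULL;                                            \
            })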
2020-09-30  kvm: arm64: Remove hyp_adr/ldr_this_cpu  (David Brazdil, 3 files, -24/+21)
The hyp_adr/ldr_this_cpu helpers were introduced for use in hyp code because they always needed to use TPIDR_EL2 for base, while adr/ldr_this_cpu from kernel proper would select between TPIDR_EL2 and _EL1 based on VHE/nVHE. Simplify this now that the hyp mode case can be handled using the __KVM_VHE/NVHE_HYPERVISOR__ macros. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Andrew Scull <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-30  kvm: arm64: Remove __hyp_this_cpu_read  (David Brazdil, 7 files, -32/+36)
this_cpu_ptr is meant for use in kernel proper because it selects between TPIDR_EL1/2 based on nVHE/VHE. __hyp_this_cpu_ptr was used in hyp to always select TPIDR_EL2. Unify all users behind this_cpu_ptr and friends by selecting _EL2 register under __KVM_NVHE_HYPERVISOR__. VHE continues selecting the register using alternatives. Under CONFIG_DEBUG_PREEMPT, the kernel helpers perform a preemption check which is omitted by the hyp helpers. Preserve the behavior for nVHE by overriding the corresponding macros under __KVM_NVHE_HYPERVISOR__. Extend the checks into VHE hyp code. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Andrew Scull <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
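Conceptually, the per-CPU offset selection becomes (a sketch of asm/percpu.h):

    static inline unsigned long __hyp_my_cpu_offset(void)
    {
            /* nVHE hyp always uses TPIDR_EL2 as the per-CPU base */
            return read_sysreg(tpidr_el2);
    }

    #ifdef __KVM_NVHE_HYPERVISOR__
    #define __my_cpu_offset __hyp_my_cpu_offset()
    #else
    #define __my_cpu_offset __kern_my_cpu_offset()  /* TPIDR_EL1/EL2 via alternatives */
    #endif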
2020-09-30  kvm: arm64: Only define __kvm_ex_table for CONFIG_KVM  (David Brazdil, 1 file, -0/+4)
Minor cleanup that only creates __kvm_ex_table ELF section and related symbols if CONFIG_KVM is enabled. Also useful as more hyp-specific sections will be added. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-30  kvm: arm64: Move nVHE hyp namespace macros to hyp_image.h  (David Brazdil, 4 files, -9/+14)
Minor cleanup to move all macros related to prefixing nVHE hyp section and symbol names into one place: hyp_image.h. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
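The gathered macros look roughly like this (a sketch of hyp_image.h):

    /* KVM nVHE code has its own symbol namespace, prefixed with __kvm_nvhe_ */
    #define kvm_nvhe_sym(sym)       __kvm_nvhe_##sym

    /* Rename an ELF section so the hyp linker script can match it */
    #define HYP_SECTION_NAME(NAME)  .hyp##NAME
    #define HYP_SECTION(NAME) \
            HYP_SECTION_NAME(NAME) : { *(NAME NAME##.*) }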
2020-09-30  kvm: arm64: Partially link nVHE hyp code, simplify HYPCOPY  (David Brazdil, 4 files, -27/+72)
Relying on objcopy to prefix the ELF section names of the nVHE hyp code is brittle and prevents us from using wildcards to match specific section names. Improve the build rules by partially linking all '.nvhe.o' files and prefixing their ELF section names using a linker script. Continue using objcopy for prefixing ELF symbol names. One immediate advantage of this approach is that all subsections matching a pattern can be merged into a single prefixed section, eg. .text and .text.* can be linked into a single '.hyp.text'. This removes the need for -fno-reorder-functions on GCC and will be useful in the future too: LTO builds use .text subsections, compilers routinely generate .rodata subsections, etc. Partially linking all hyp code into a single object file also makes it easier to analyze. Signed-off-by: David Brazdil <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-29  arm64: Add support for PR_SPEC_DISABLE_NOEXEC prctl() option  (Will Deacon, 2 files, -4/+40)
The PR_SPEC_DISABLE_NOEXEC option to the PR_SPEC_STORE_BYPASS prctl() allows the SSB mitigation to be enabled only until the next execve(), at which point the state will revert back to PR_SPEC_ENABLE and the mitigation will be disabled. Add support for PR_SPEC_DISABLE_NOEXEC on arm64. Reported-by: Anthony Steinhauser <[email protected]> Signed-off-by: Will Deacon <[email protected]>
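From userspace, the option is exercised like the other PR_SPEC_STORE_BYPASS controls:

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
            /* Enable the SSB mitigation for this task until the next execve() */
            if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                      PR_SPEC_DISABLE_NOEXEC, 0, 0))
                    perror("prctl");
            return 0;
    }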
2020-09-29  arm64: Pull in task_stack_page() to Spectre-v4 mitigation code  (Will Deacon, 1 file, -0/+1)
The kbuild robot reports that we're relying on an implicit inclusion to get a definition of task_stack_page() in the Spectre-v4 mitigation code, which is not always in place for some configurations:

  | arch/arm64/kernel/proton-pack.c:329:2: error: implicit declaration of function 'task_stack_page' [-Werror,-Wimplicit-function-declaration]
  |         task_pt_regs(task)->pstate |= val;
  |         ^
  | arch/arm64/include/asm/processor.h:268:36: note: expanded from macro 'task_pt_regs'
  |         ((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
  |                                           ^
  | arch/arm64/kernel/proton-pack.c:329:2: note: did you mean 'task_spread_page'?

Add the missing include to fix the build error. Fixes: a44acf477220 ("arm64: Move SSBD prctl() handler alongside other spectre mitigation code") Reported-by: Anthony Steinhauser <[email protected]> Reported-by: kernel test robot <[email protected]> Link: https://lore.kernel.org/r/202009260013.Ul7AD29w%[email protected] Signed-off-by: Will Deacon <[email protected]>
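i.e. the fix is the one-line include:

    #include <linux/sched/task_stack.h>     /* for task_stack_page() */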
2020-09-29  KVM: arm64: Allow patching EL2 vectors even when KASLR is not enabled  (Will Deacon, 6 files, -58/+36)
Patching the EL2 exception vectors is integral to the Spectre-v2 workaround, where it can be necessary to execute CPU-specific sequences to nobble the branch predictor before running the hypervisor text proper. Remove the dependency on CONFIG_RANDOMIZE_BASE and allow the EL2 vectors to be patched even when KASLR is not enabled. Fixes: 7a132017e7a5 ("KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE") Reported-by: kernel test robot <[email protected]> Link: https://lore.kernel.org/r/202009221053.Jv1XsQUZ%[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-09-29  arm64: Get rid of arm64_ssbd_state  (Marc Zyngier, 2 files, -16/+0)
Out with the old ghost, in with the new... Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-09-29  KVM: arm64: Convert ARCH_WORKAROUND_2 to arm64_get_spectre_v4_state()  (Marc Zyngier, 3 files, -14/+30)
Convert the KVM WA2 code to using the Spectre infrastructure, making the code much more readable. It also allows us to take SSBS into account for the mitigation. Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-09-29  KVM: arm64: Get rid of kvm_arm_have_ssbd()  (Marc Zyngier, 1 file, -23/+0)
kvm_arm_have_ssbd() is now completely unused, get rid of it. Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-09-29  KVM: arm64: Simplify handling of ARCH_WORKAROUND_2  (Marc Zyngier, 14 files, -163/+41)
Owing to the fact that the host kernel is always mitigated, we can drastically simplify the WA2 handling by keeping the mitigation state ON when entering the guest. This means the guest is either unaffected or mitigated. This results in a nice simplification of the mitigation space, and the removal of a lot of code that was never really used anyway. Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-09-29  arm64: Rewrite Spectre-v4 mitigation code  (Will Deacon, 9 files, -352/+401)
Rewrite the Spectre-v4 mitigation handling code to follow the same approach as that taken by Spectre-v2. For now, report to KVM that the system is vulnerable (by forcing 'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in subsequent steps. Signed-off-by: Will Deacon <[email protected]>
2020-09-29  arm64: Move SSBD prctl() handler alongside other spectre mitigation code  (Will Deacon, 3 files, -130/+119)
As part of the spectre consolidation effort to shift all of the ghosts into their own proton pack, move all of the horrible SSBD prctl() code out of its own 'ssbd.c' file. Signed-off-by: Will Deacon <[email protected]>
2020-09-29  arm64: Rename ARM64_SSBD to ARM64_SPECTRE_V4  (Will Deacon, 3 files, -3/+3)
In a similar manner to the renaming of ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2, rename ARM64_SSBD to ARM64_SPECTRE_V4. This isn't _entirely_ accurate, as we also need to take into account the interaction with SSBS, but that will be taken care of in subsequent patches. Signed-off-by: Will Deacon <[email protected]>
2020-09-29  arm64: Treat SSBS as a non-strict system feature  (Will Deacon, 1 file, -3/+3)
If all CPUs discovered during boot have SSBS, then spectre-v4 will be considered to be "mitigated". However, we still allow late CPUs without SSBS to be onlined, albeit with a "SANITY CHECK" warning. This is problematic for userspace because it means that the system can quietly transition to "Vulnerable" at runtime. Avoid this by treating SSBS as a non-strict system feature: if all of the CPUs discovered during boot have SSBS, then late arriving secondaries better have it as well. Signed-off-by: Will Deacon <[email protected]>
2020-09-29  arm64: Group start_thread() functions together  (Will Deacon, 1 file, -12/+12)
The is_ttbrX_addr() functions have somehow ended up in the middle of the start_thread() functions, so move them out of the way to keep the code readable. Signed-off-by: Will Deacon <[email protected]>
2020-09-29  KVM: arm64: Set CSV2 for guests on hardware unaffected by Spectre-v2  (Marc Zyngier, 1 file, -0/+3)
If the system is not affected by Spectre-v2, then advertise to the KVM guest that it is not affected, without the need for a safelist in the guest. Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-09-29  arm64: Rewrite Spectre-v2 mitigation code  (Will Deacon, 8 files, -264/+327)
The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain. This is largely due to it being written hastily, without much clue as to how things would pan out, and also because it ends up mixing policy and state in such a way that it is very difficult to figure out what's going on. Rewrite the Spectre-v2 mitigation so that it clearly separates state from policy and follows a more structured approach to handling the mitigation. Signed-off-by: Will Deacon <[email protected]>
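The state side reduces to a small, explicit enum (following the new proton-pack code):

    enum mitigation_state {
            SPECTRE_UNAFFECTED,
            SPECTRE_MITIGATED,
            SPECTRE_VULNERABLE,
    };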
2020-09-29  arm64: Introduce separate file for spectre mitigations and reporting  (Will Deacon, 3 files, -7/+33)
The spectre mitigation code is spread over a few different files, which makes it both hard to follow and hard to remove, should we want to do that in future. Introduce a new file for housing the spectre mitigations, and populate it with the spectre-v1 reporting code to start with. Signed-off-by: Will Deacon <[email protected]>
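The spectre-v1 reporting that seeds the new file is just the sysfs hook (a sketch):

    ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
                                char *buf)
    {
            /* arm64 unconditionally sanitizes __user pointers */
            return sprintf(buf, "Mitigation: __user pointer sanitization\n");
    }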
2020-09-29  arm64: Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2  (Will Deacon, 4 files, -17/+16)
For better or worse, the world knows about "Spectre" and not about "Branch predictor hardening". Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2 as part of moving all of the Spectre mitigations into their own little corner. Signed-off-by: Will Deacon <[email protected]>
2020-09-29  KVM: arm64: Simplify install_bp_hardening_cb()  (Will Deacon, 1 file, -20/+7)
Use is_hyp_mode_available() to detect whether or not we need to patch the KVM vectors for branch hardening, which avoids the need to take the vector pointers as parameters. Signed-off-by: Will Deacon <[email protected]>
2020-09-29  KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE  (Will Deacon, 6 files, -9/+6)
The removal of CONFIG_HARDEN_BRANCH_PREDICTOR means that CONFIG_KVM_INDIRECT_VECTORS is synonymous with CONFIG_RANDOMIZE_BASE, so replace it. Signed-off-by: Will Deacon <[email protected]>
2020-09-29  arm64: Remove Spectre-related CONFIG_* options  (Will Deacon, 11 files, -80/+4)
The spectre mitigations are too configurable for their own good, leading to confusing logic trying to figure out when we should mitigate and when we shouldn't. Although the plethora of command-line options need to stick around for backwards compatibility, the default-on CONFIG options that depend on EXPERT can be dropped, as the mitigations only do anything if the system is vulnerable, a mitigation is available and the command-line hasn't disabled it. Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of enabling this code unconditionally. Signed-off-by: Will Deacon <[email protected]>
2020-09-29  arm64: Run ARCH_WORKAROUND_2 enabling code on all CPUs  (Marc Zyngier, 1 file, -0/+7)
Commit 606f8e7b27bf ("arm64: capabilities: Use linear array for detection and verification") changed the way we deal with per-CPU errata by only calling the .matches() callback until one CPU is found to be affected. At this point, .matches() stops being called, and .cpu_enable() will be called on all CPUs. This breaks the ARCH_WORKAROUND_2 handling, as only a single CPU will be mitigated. In order to address this, forcefully call the .matches() callback from a .cpu_enable() callback, which brings us back to the original behaviour. Fixes: 606f8e7b27bf ("arm64: capabilities: Use linear array for detection and verification") Cc: <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Will Deacon <[email protected]>
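Roughly (a sketch; names approximate):

    static void cpu_enable_ssbd_mitigation(const struct arm64_cpu_capabilities *cap)
    {
            /*
             * .cpu_enable() runs on every CPU, so use it to re-run the
             * detection locally and mitigate each affected core.
             */
            if (ssbd_state != ARM64_SSBD_FORCE_DISABLE)
                    cap->matches(cap, SCOPE_LOCAL_CPU);
    }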
2020-09-29  Merge branch 'kvm-arm64/pmu-5.9' into kvmarm-master/next  (Marc Zyngier, 7 files, -30/+257)
Signed-off-by: Marc Zyngier <[email protected]>
2020-09-29  KVM: arm64: Match PMU error code descriptions with error conditions  (Alexandru Elisei, 1 file, -4/+5)
Update the description of the PMU KVM_{GET, SET}_DEVICE_ATTR error codes to be a better match for the code that returns them. Signed-off-by: Alexandru Elisei <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Reviewed-by: Andrew Jones <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-29  KVM: arm64: Add undocumented return values for PMU device control group  (Alexandru Elisei, 1 file, -0/+2)
KVM_ARM_VCPU_PMU_V3_IRQ returns -EFAULT if get_user() fails when reading the interrupt number from kvm_device_attr.addr. KVM_ARM_VCPU_PMU_V3_INIT returns the error value from kvm_vgic_set_owner(). kvm_arm_pmu_v3_init() checks that the vgic has been initialized and the interrupt number is valid, but kvm_vgic_set_owner() can still return the error code -EEXIST if another device has already claimed the interrupt. Signed-off-by: Alexandru Elisei <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Reviewed-by: Andrew Jones <[email protected]> Link: https://lore.kernel.org/r/[email protected]