AgeCommit messageAuthorFilesLines
2021-04-22Merge branch 'kvm-arm64/kill_oprofile_dependency' into kvmarm-master/nextMarc Zyngier8-84/+6
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-22perf: Get rid of oprofile leftoversMarc Zyngier2-7/+0
perf_pmu_name() and perf_num_counters() are unused. Drop them. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210414134409.1266357-6-maz@kernel.org
2021-04-22sh: Get rid of oprofile leftoversMarc Zyngier1-18/+0
perf_pmu_name() and perf_num_counters() are unused. Drop them. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20210414134409.1266357-5-maz@kernel.org
2021-04-22s390: Get rid of oprofile leftoversMarc Zyngier1-21/+0
perf_pmu_name() and perf_num_counters() are unused. Drop them. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Heiko Carstens <hca@linux.ibm.com> Link: https://lore.kernel.org/r/20210414134409.1266357-4-maz@kernel.org
2021-04-22arm64: Get rid of oprofile leftoversMarc Zyngier1-30/+0
perf_pmu_name() and perf_num_counters() are now unused. Drop them. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210414134409.1266357-3-maz@kernel.org
2021-04-22KVM: arm64: Divorce the perf code from oprofile helpersMarc Zyngier3-7/+6
KVM/arm64 is the sole user of perf_num_counters(), and really could do without it. Stop using the obsolete API by relying on the existing probing code. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210414134409.1266357-2-maz@kernel.org
2021-04-20Merge branch 'kvm-arm64/ptp' into kvmarm-master/nextMarc Zyngier2-3/+4
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-20KVM: arm64: Fix Function ID typo for PTP_KVM serviceZenghui Yu1-2/+2
Per include/linux/arm-smccc.h, the Function ID of PTP_KVM service is defined as ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID. Fix the typo in documentation to keep the git grep consistent. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210417113804.1992-1-yuzenghui@huawei.com
2021-04-20ptp: Don't print an error if ptp_kvm is not supportedJon Hunter1-1/+2
Commit 300bb1fe7671 ("ptp: arm/arm64: Enable ptp_kvm for arm/arm64") enabled ptp_kvm support for ARM platforms, and on any ARM platform that does not support it the following error message is displayed:
  ERR KERN fail to initialize ptp_kvm
For platforms that do not support ptp_kvm this error is misleading, so fix it by only printing the message if the error returned by kvm_arch_ptp_init() is not -EOPNOTSUPP. Note that today -EOPNOTSUPP is only returned by ARM platforms when ptp_kvm is not supported. Fixes: 300bb1fe7671 ("ptp: arm/arm64: Enable ptp_kvm for arm/arm64") Signed-off-by: Jon Hunter <jonathanh@nvidia.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210420132419.1318148-1-jonathanh@nvidia.com
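A minimal sketch of the reworked error path, assuming the helper name from the series (kvm_arch_ptp_init()); this is illustrative, not the literal diff:

```c
	ret = kvm_arch_ptp_init();
	if (ret) {
		/* Stay quiet when the platform simply lacks ptp_kvm support. */
		if (ret != -EOPNOTSUPP)
			pr_err("fail to initialize ptp_kvm");
		return ret;
	}
```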
2021-04-15Merge branch 'kvm-arm64/nvhe-panic-info' into kvmarm-master/nextMarc Zyngier1-0/+7
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-15bug: Provide dummy version of bug_get_file_line() when !GENERIC_BUGMarc Zyngier1-0/+7
Provide the missing dummy bug_get_file_line() implementation when GENERIC_BUG isn't selected. Reported-by: kernel test robot <lkp@intel.com> Fixes: 26dbc7e299c7 ("bug: Factor out a getter for a bug's file line") Cc: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
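A sketch of what such a fallback typically looks like; the prototype mirrors the getter referenced above, and the exact out-parameter handling is an assumption:

```c
#ifndef CONFIG_GENERIC_BUG
/* Do-nothing fallback so callers still build when GENERIC_BUG=n. */
static inline void bug_get_file_line(struct bug_entry *bug, const char **file,
				     unsigned int *line)
{
	*file = NULL;
	*line = 0;
}
#endif
```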
2021-04-13Merge remote-tracking branch 'coresight/next-ETE-TRBE' into kvmarm-master/nextMarc Zyngier2-2/+2
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13coresight: trbe: Fix return value check in arm_trbe_register_coresight_cpu()Wei Yongjun1-1/+1
In case of error, devm_kasprintf() returns a NULL pointer, not an ERR_PTR(). The IS_ERR() test in the return value check should therefore be replaced with a NULL test. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Link: https://lore.kernel.org/r/20210409094901.1903622-1-weiyongjun1@huawei.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
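A sketch of the corrected check; the format string and variable names here are illustrative rather than taken from the driver:

```c
	/* devm_kasprintf() returns NULL on allocation failure, never an
	 * ERR_PTR()-encoded pointer, so test for NULL. */
	name = devm_kasprintf(dev, GFP_KERNEL, "trbe%d", cpu);
	if (!name)
		return -ENOMEM;
```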
2021-04-13coresight: core: Make symbol 'csdev_sink' staticWei Yongjun1-1/+1
The sparse tool complains as follows: drivers/hwtracing/coresight/coresight-core.c:26:1: warning: symbol '__pcpu_scope_csdev_sink' was not declared. Should it be static? As csdev_sink is not used outside of coresight-core.c after the introduction of coresight_[set|get]_percpu_sink() helpers, this change marks it static. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Link: https://lore.kernel.org/r/20210409094900.1902783-1-weiyongjun1@huawei.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
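A sketch of the change, assuming the per-CPU declaration form implied by the sparse warning:

```c
/* The per-CPU sink pointer is only used inside coresight-core.c,
 * so give it static (file-local) linkage. */
static DEFINE_PER_CPU(struct coresight_device *, csdev_sink);
```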
2021-04-13Merge remote-tracking branch 'arm64/for-next/neon-softirqs-disabled' into kvmarm-master/nextMarc Zyngier8-85/+39
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge remote-tracking branch 'arm64/for-next/vhe-only' into kvmarm-master/nextMarc Zyngier7-29/+89
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/vlpi-save-restore' into kvmarm-master/nextMarc Zyngier6-12/+119
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/vgic-5.13' into kvmarm-master/nextMarc Zyngier12-45/+706
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/ptp' into kvmarm-master/nextMarc Zyngier25-78/+443
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/nvhe-wxn' into kvmarm-master/nextMarc Zyngier3-17/+7
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/nvhe-sve' into kvmarm-master/nextMarc Zyngier1-0/+2
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/nvhe-panic-info' into kvmarm-master/nextMarc Zyngier14-56/+95
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/misc-5.13' into kvmarm-master/nextMarc Zyngier5-3/+40
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/memslot-fixes' into kvmarm-master/nextMarc Zyngier2-8/+14
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/host-stage2' into kvmarm-master/nextMarc Zyngier51-348/+2607
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13Merge branch 'kvm-arm64/debug-5.13' into kvmarm-master/nextMarc Zyngier6-29/+83
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13KVM: arm/arm64: Fix KVM_VGIC_V3_ADDR_TYPE_REDIST readEric Auger1-2/+2
When reading the base address of a REDIST region through KVM_VGIC_V3_ADDR_TYPE_REDIST we expect the redistributor region list to be populated with a single element. However, list_first_entry() expects the list to be non-empty. Instead we should use list_first_entry_or_null(), which returns NULL if the list is empty. Fixes: dbd9733ab674 ("KVM: arm/arm64: Replace the single rdist region by a list") Cc: <Stable@vger.kernel.org> # v4.18+ Signed-off-by: Eric Auger <eric.auger@redhat.com> Reported-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210412150034.29185-1-eric.auger@redhat.com
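A sketch of the safer lookup; the list head spelling and the error value are assumptions based on the description above:

```c
	struct vgic_redist_region *rdreg;

	/* Tolerate an empty redistributor region list instead of
	 * dereferencing a bogus "first" entry. */
	rdreg = list_first_entry_or_null(&vgic->rd_regions,
					 struct vgic_redist_region, list);
	if (!rdreg)
		return -ENXIO;
```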
2021-04-12arm64: fpsimd: run kernel mode NEON with softirqs disabledArd Biesheuvel8-15/+31
Kernel mode NEON can be used in task or softirq context, but only in a non-nesting manner, i.e., softirq context is only permitted if the interrupt was not taken at a point where the kernel was using the NEON in task context. This means all users of kernel mode NEON have to be aware of this limitation, and either need to provide scalar fallbacks that may be much slower (up to 20x for AES instructions) and potentially less safe, or use an asynchronous interface that defers processing to a later time when the NEON is guaranteed to be available. Given that grabbing and releasing the NEON is cheap, we can relax this restriction, by increasing the granularity of kernel mode NEON code, and always disabling softirq processing while the NEON is being used in task context. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-4-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
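For context, the usage pattern this change relaxes is the usual kernel-mode NEON bracket from <asm/neon.h>; this is an illustrative sketch, not part of the patch:

```c
	/* With this change, the begin/end pair also keeps softirqs from
	 * running on top of in-progress NEON work in task context. */
	kernel_neon_begin();
	/* ... SIMD-accelerated processing ... */
	kernel_neon_end();
```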
2021-04-12arm64: assembler: introduce wxN aliases for wN registersArd Biesheuvel1-0/+8
The AArch64 asm syntax has this slightly tedious property that the names used in mnemonics to refer to registers depend on whether the opcode in question targets the entire 64-bits (xN), or only the least significant 8, 16 or 32 bits (wN). When writing parameterized code such as macros, this can be annoying, as macro arguments don't lend themselves to indexed lookups, and so generating a reference to wN in a macro that receives xN as an argument is problematic. For instance, an upcoming patch that modifies the implementation of the cond_yield macro to be able to refer to 32-bit registers would need to modify invocations such as cond_yield 3f, x8 to cond_yield 3f, 8 so that the second argument can be token pasted after x or w to emit the correct register reference. Unfortunately, this interferes with the self documenting nature of the first example, where the second argument is obviously a register, whereas in the second example, one would need to go and look at the code to find out what '8' means. So let's fix this by defining wxN aliases for all xN registers, which resolve to the 32-bit alias of each respective 64-bit register. This allows the macro implementation to paste the xN reference after a w to obtain the correct register name. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-12arm64: assembler: remove conditional NEON yield macrosArd Biesheuvel1-70/+0
The users of the conditional NEON yield macros have all been switched to the simplified cond_yield macro, and so the NEON specific ones can be removed. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11KVM: arm64: Don't advertise FEAT_SPE to guestsAlexandru Elisei1-0/+2
Even though KVM sets up MDCR_EL2 to trap accesses to the SPE buffer and sampling control registers and to inject an undefined exception, the presence of FEAT_SPE is still advertised in the ID_AA64DFR0_EL1 register, if the hardware supports it. Getting an undefined exception when accessing a register usually happens for a hardware feature which is not implemented, and indeed this is how PMU emulation is handled when the virtual machine has been created without the KVM_ARM_VCPU_PMU_V3 feature. Let's be consistent and never advertise FEAT_SPE, because KVM doesn't have support for emulating it yet. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210409152154.198566-3-alexandru.elisei@arm.com
2021-04-11KVM: arm64: Don't print warning when trapping SPE registersAlexandru Elisei2-0/+15
KVM sets up MDCR_EL2 to trap accesses to the SPE buffer and sampling control registers and it relies on the fact that KVM injects an undefined exception for unknown registers. This mechanism of injecting undefined exceptions also prints a warning message for the host kernel; for example, when a guest tries to access PMSIDR_EL1: [ 2.691830] kvm [142]: Unsupported guest sys_reg access at: 80009e78 [800003c5] [ 2.691830] { Op0( 3), Op1( 0), CRn( 9), CRm( 9), Op2( 7), func_read }, This is unnecessary, because KVM has explicitly configured trapping of those registers and is well aware of their existence. Prevent the warning by adding the SPE registers to the list of registers that KVM emulates. The access function will inject the undefined exception. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210409152154.198566-2-alexandru.elisei@arm.com
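A sketch of a sys_regs.c-style handler whose only job is to inject the undefined exception, so the generic "Unsupported guest sys_reg access" warning is never hit; the function name is illustrative:

```c
static bool trap_undef(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
		       const struct sys_reg_desc *r)
{
	/* Register is known but unsupported: inject UNDEF silently. */
	kvm_inject_undefined(vcpu);
	return false;
}
```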
2021-04-09KVM: arm64: Fully zero the vcpu state on resetMarc Zyngier1-0/+5
On vcpu reset, we expect all the registers to be brought back to their initial state, which happens to be a bunch of zeroes. However, a recent commit broke this, and is now leaving a bunch of registers (such as the FP state) with whatever was left by the guest. My bad. Zero the rest of the state (32bit SPSRs and FPSIMD state). Cc: stable@vger.kernel.org Fixes: e47c2055c68e ("KVM: arm64: Make struct kvm_regs userspace-only") Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-09KVM: arm64: Clarify vcpu reset behaviourMarc Zyngier1-0/+12
Although the KVM_ARM_VCPU_INIT documentation mentions that the registers are reset to their "initial values", it doesn't describe what these values are. Describe this state explicitly. Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-08arm64: Get rid of CONFIG_ARM64_VHEMarc Zyngier4-28/+1
CONFIG_ARM64_VHE was introduced with ARMv8.1 (some 7 years ago), and has been enabled by default for almost all that time. Given that newer systems that are VHE capable are finally becoming available, and that some systems are even incapable of not running VHE, drop the configuration altogether. Anyone willing to stick to non-VHE on VHE hardware for obscure reasons should use the 'kvm-arm.mode=nvhe' command-line option. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-4-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08arm64: Cope with CPUs stuck in VHE modeMarc Zyngier3-8/+52
It seems that the CPUs part of the SoC known as Apple M1 have the terrible habit of being stuck with HCR_EL2.E2H==1, in violation of the architecture. Try and work around this deplorable state of affairs by detecting the stuck bit early and short-circuit the nVHE dance. Additional filtering code ensures that attempts at switching to nVHE from the command-line are also ignored. It is still unknown whether there are many more such nuggets to be found... Reported-by: Hector Martin <marcan@marcan.st> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-3-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08arm64: cpufeature: Allow early filtering of feature overrideMarc Zyngier3-0/+36
Some CPUs are broken enough that some overrides need to be rejected at the earliest opportunity. In some cases, that's right at cpu feature override time. Provide the necessary infrastructure to filter out overrides, and to report such filtered out overrides to the core cpufeature code. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-2-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08Merge remote-tracking branch 'coresight/next-ETE-TRBE' into kvmarm-master/nextMarc Zyngier30-77/+2055
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-08KVM: arm64: Fix table format for PTP documentationMarc Zyngier1-11/+11
The documentation build legitimately screams about the PTP documentation table being misformatted. Fix it by adjusting the table width guides. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-07KVM: arm64: Mark the kvmarm ML as moderated for non-subscribersMarc Zyngier1-1/+1
The kvmarm mailing list is moderated for non-subscribers, but that was never advertised. Fix this in the hope that people will subscribe before posting, saving me the hassle of letting their posts through by hand. Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-07KVM: arm64: Initialize VCPU mdcr_el2 before loading itAlexandru Elisei3-28/+63
When a VCPU is created, the kvm_vcpu struct is initialized to zero in kvm_vm_ioctl_create_vcpu(). On VHE systems, the first time vcpu.arch.mdcr_el2 is loaded on hardware is in vcpu_load(), before it is set to a sensible value in kvm_arm_setup_debug() later in the run loop. The result is that KVM executes for a short time with MDCR_EL2 set to zero. This has several unintended consequences:
* Setting MDCR_EL2.HPMN to 0 is constrained unpredictable according to ARM DDI 0487G.a, page D13-3820. The behavior specified by the architecture in this case is for the PE to behave as if MDCR_EL2.HPMN is set to a value less than or equal to PMCR_EL0.N, which means that an unknown number of counters are now disabled by MDCR_EL2.HPME, which is zero.
* The host configuration for the other debug features controlled by MDCR_EL2 is temporarily lost. This has been harmless so far, as Linux doesn't use the other fields, but that might change in the future.
Let's avoid both issues by initializing the VCPU's mdcr_el2 field in kvm_vcpu_first_run_init(), thus making sure that the MDCR_EL2 register has a consistent value after each vcpu_load(). Fixes: d5a21bcc2995 ("KVM: arm64: Move common VHE/non-VHE trap config in separate functions") Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210407144857.199746-3-alexandru.elisei@arm.com
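A sketch of the intent, based on the existing debug setup code quoted conventions; the exact extra trap bits and placement in first-run init are assumptions:

```c
	/* Keep the host's HPMN setting and store a sane MDCR_EL2 value
	 * before the first vcpu_load() ever installs it on hardware. */
	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
```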
2021-04-07Documentation: KVM: Document KVM_GUESTDBG_USE_HW control flag for arm64Alexandru Elisei1-1/+2
Commit 21b6f32f9471 ("KVM: arm64: guest debug, define API headers") added the arm64 KVM_GUESTDBG_USE_HW flag for the KVM_SET_GUEST_DEBUG ioctl and commit 834bf88726f0 ("KVM: arm64: enable KVM_CAP_SET_GUEST_DEBUG") documented and implemented the flag functionality. Since its introduction, at no point was the flag known by any name other than KVM_GUESTDBG_USE_HW for the arm64 architecture, so refer to it as such in the documentation. CC: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210407144857.199746-2-alexandru.elisei@arm.com
2021-04-07ptp: arm/arm64: Enable ptp_kvm for arm/arm64Jianyong Wu4-1/+64
Currently, there is no mechanism to keep time in sync between guest and host in an arm/arm64 virtualization environment. Time in the guest will drift compared with the host after boot up, as they may both use third party time sources to correct their time respectively. The time deviation will be in the order of milliseconds. But in some scenarios, like in a cloud environment, higher time precision is required. The kvm ptp clock, which chooses the host clock source as a reference clock to sync time between guest and host, has already been adopted by x86 and brings the time sync error from milliseconds down to nanoseconds. This patch enables the kvm ptp clock for arm/arm64 and improves clock sync precision significantly. Test results with and without the kvm ptp clock on arm/arm64 are shown below. They are derived from the output of the command 'chronyc sources'; pay particular attention to the 'Last sample' column, which shows the offset between the local clock and the source at the last measurement.
no kvm ptp in guest:
MS Name/IP address        Stratum Poll Reach LastRx Last sample
========================================================================
^* dns1.synet.edu.cn             2    6   377     13  +1040us[+1581us] +/- 21ms
^* dns1.synet.edu.cn             2    6   377     21  +1040us[+1581us] +/- 21ms
^* dns1.synet.edu.cn             2    6   377     29  +1040us[+1581us] +/- 21ms
^* dns1.synet.edu.cn             2    6   377     37  +1040us[+1581us] +/- 21ms
^* dns1.synet.edu.cn             2    6   377     45  +1040us[+1581us] +/- 21ms
^* dns1.synet.edu.cn             2    6   377     53  +1040us[+1581us] +/- 21ms
^* dns1.synet.edu.cn             2    6   377     61  +1040us[+1581us] +/- 21ms
^* dns1.synet.edu.cn             2    6   377      4   -130us[ +796us] +/- 21ms
^* dns1.synet.edu.cn             2    6   377     12   -130us[ +796us] +/- 21ms
^* dns1.synet.edu.cn             2    6   377     20   -130us[ +796us] +/- 21ms
in host:
MS Name/IP address        Stratum Poll Reach LastRx Last sample
========================================================================
^* 120.25.115.20                 2    7   377     72   -470us[ -603us] +/- 18ms
^* 120.25.115.20                 2    7   377     92   -470us[ -603us] +/- 18ms
^* 120.25.115.20                 2    7   377    112   -470us[ -603us] +/- 18ms
^* 120.25.115.20                 2    7   377      2   +872ns[-6808ns] +/- 17ms
^* 120.25.115.20                 2    7   377     22   +872ns[-6808ns] +/- 17ms
^* 120.25.115.20                 2    7   377     43   +872ns[-6808ns] +/- 17ms
^* 120.25.115.20                 2    7   377     63   +872ns[-6808ns] +/- 17ms
^* 120.25.115.20                 2    7   377     83   +872ns[-6808ns] +/- 17ms
^* 120.25.115.20                 2    7   377    103   +872ns[-6808ns] +/- 17ms
^* 120.25.115.20                 2    7   377    123   +872ns[-6808ns] +/- 17ms
dns1.synet.edu.cn is the network reference clock for the guest and 120.25.115.20 is the network reference clock for the host. We can't get the clock error between guest and host directly, but a rough estimate puts it in the order of hundreds of us to ms.
with kvm ptp in guest (chrony has been disabled in the host to remove the disturbance from the network clock):
MS Name/IP address        Stratum Poll Reach LastRx Last sample
========================================================================
*  PHC0                          0    3   377      8     -7ns[  +1ns] +/-  3ns
*  PHC0                          0    3   377      8     +1ns[ +16ns] +/-  3ns
*  PHC0                          0    3   377      6     -4ns[  -0ns] +/-  6ns
*  PHC0                          0    3   377      6     -8ns[ -12ns] +/-  5ns
*  PHC0                          0    3   377      5     +2ns[  +4ns] +/-  4ns
*  PHC0                          0    3   377     13     +2ns[  +4ns] +/-  4ns
*  PHC0                          0    3   377     12     -4ns[  -6ns] +/-  4ns
*  PHC0                          0    3   377     11     -8ns[ -11ns] +/-  6ns
*  PHC0                          0    3   377     10    -14ns[ -20ns] +/-  4ns
*  PHC0                          0    3   377      8     +4ns[  +5ns] +/-  4ns
PHC0 is the ptp clock, which uses the host clock as its source. We can see that the clock difference between host and guest is in the order of nanoseconds.
Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-8-jianyong.wu@arm.com
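A sketch of the guest-side hypercall that backs kvm ptp on arm64, using <linux/arm-smccc.h>: one SMCCC call asks the host for its wall clock and a counter snapshot. KVM_PTP_VIRT_COUNTER selects the virtual counter view; the result register layout is described in Documentation/virt/kvm/arm/ptp_kvm.rst, and the error convention below is an assumption:

```c
	struct arm_smccc_res hvc_res;

	arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID,
			     KVM_PTP_VIRT_COUNTER, &hvc_res);
	if ((long)hvc_res.a0 < 0)
		return -EOPNOTSUPP;	/* host doesn't implement the service */
```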
2021-04-07KVM: arm64: Add support for the KVM PTP serviceJianyong Wu7-0/+107
Implement the hypervisor side of the KVM PTP interface. The service offers wall time and cycle count from host to guest. The caller must specify whether they want the host's view of either the virtual or physical counter. Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-7-jianyong.wu@arm.com
2021-04-07clocksource: Add clocksource id for arm arch counterJianyong Wu2-0/+3
Add clocksource id to the ARM generic counter so that it can be easily identified from callers such as ptp_kvm. Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-6-jianyong.wu@arm.com
2021-04-07time: Add mechanism to recognize clocksource in time_get_snapshotThomas Gleixner5-5/+27
System time snapshots are not conveying information about the current clocksource which was used, but callers like the PTP KVM guest implementation have the requirement to evaluate the clocksource type to select the appropriate mechanism. Introduce a clocksource id field in struct clocksource which is by default set to CSID_GENERIC (0). Clocksource implementations can set that field to a value which allows to identify the clocksource. Store the clocksource id of the current clocksource in the system_time_snapshot so callers can evaluate which clocksource was used to take the snapshot and act accordingly. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-5-jianyong.wu@arm.com
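A sketch of how a caller such as the PTP KVM guest code can use the new field; CSID_ARM_ARCH_COUNTER comes from the follow-up arm patch and the error handling is illustrative:

```c
	struct system_time_snapshot snap;

	/* ktime_get_snapshot() now records which clocksource backed the
	 * snapshot, so bail out if it is not one we can cross-timestamp. */
	ktime_get_snapshot(&snap);
	if (snap.cs_id != CSID_ARM_ARCH_COUNTER)
		return -EOPNOTSUPP;
```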
2021-04-07ptp: Reorganize ptp_kvm.c to make it arch-independentJianyong Wu4-62/+139
Currently, the ptp_kvm module contains a lot of x86-specific code. Let's move this code into a new arch-specific file in the same directory, and rename the arch-independent file to ptp_kvm_common.c. Acked-by: Richard Cochran <richardcochran@gmail.com> Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-4-jianyong.wu@arm.com
2021-04-07KVM: selftests: vgic_init kvm selftests fixupEric Auger3-171/+136
Bring some improvements/rationalization over the first version of the vgic_init selftests:
- ucall_init is moved into run_cpu()
- vcpu_args_set is not called as it is not needed
- whenever a helper is supposed to succeed, call the non "_" version
- helpers do not return -errno; instead errno is checked by the caller
- the vm_gic struct is used whenever possible, as well as vm_gic_destroy
- _kvm_create_device takes an additional fd parameter
Signed-off-by: Eric Auger <eric.auger@redhat.com> Suggested-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210407135937.533141-1-eric.auger@redhat.com
2021-04-07KVM: arm64: Don't retrieve memory slot again in page fault handlerGavin Shan1-2/+7
There is no need to retrieve the memory slot again in user_mem_abort() because the corresponding memory slot has already been passed in by the caller. This saves some CPU cycles: for example, the time used to write 1GB of memory backed by 2MB hugetlb pages and write-protected drops by 6.8%, from 928ms to 864ms. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210316041126.81860-4-gshan@redhat.com
2021-04-07KVM: arm64: Use find_vma_intersection()Gavin Shan1-4/+6
find_vma_intersection() already exists to look up a VMA that intersects a given range. Use it where applicable to simplify the code. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210316041126.81860-3-gshan@redhat.com
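A minimal sketch of the helper's use; the mm pointer, range, and error handling here are illustrative rather than taken from the patch:

```c
	struct vm_area_struct *vma;

	/* find_vma_intersection() returns the first VMA overlapping
	 * [hva, hva + 1), or NULL when nothing intersects. */
	vma = find_vma_intersection(current->mm, hva, hva + 1);
	if (!vma)
		return -EFAULT;
```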