This converts the EP93xx SPI master driver to use GPIO
descriptors for chip select handling.
EP93xx was using platform data to pass in GPIO lines; by converting all
board files to use GPIO descriptor tables, the core will look up the
GPIO lines for the SPI device in the same manner as for device tree.
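For illustration, a board-file entry of the kind this conversion
introduces looks roughly like the sketch below (the controller name
"spi0", chip label "A" and line offset 2 are made up for the example):

  #include <linux/gpio/machine.h>

  static struct gpiod_lookup_table ep93xx_spi_cs_gpio_table = {
          .dev_id = "spi0",       /* illustrative controller dev name */
          .table = {
                  /* "cs" is the con_id the SPI core looks up */
                  GPIO_LOOKUP("A", 2, "cs", GPIO_ACTIVE_LOW),
                  { },
          },
  };

  /* registered from the board init code */
  gpiod_add_lookup_table(&ep93xx_spi_cs_gpio_table);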
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
|
|
The current implementation has two errors:
1: The second xor instruction clears the carry flag, which is needed
by the following sbb instruction.
2: The operand order encoded for the sbb instruction is wrong; the
emitted instruction is "sbb dreg_hi,ecx", but what we need is
"sbb ecx,dreg_hi".
This patch rewrites the implementation to fix both errors.
It fixes the following failures reported by bpf/test_verifier on the
x32 platform when the JIT is enabled:
"
0: (b4) w1 = 4
1: (b4) w2 = 4
2: (1f) r2 -= r1
3: (4f) r2 |= r1
4: (87) r2 = -r2
5: (c7) r2 s>>= 63
6: (5f) r1 &= r2
7: (bf) r0 = r1
8: (95) exit
processed 9 insns (limit 131072), stack depth 0
0: (b4) w1 = 4
1: (b4) w2 = 4
2: (1f) r2 -= r1
3: (4f) r2 |= r1
4: (87) r2 = -r2
5: (c7) r2 s>>= 63
6: (5f) r1 &= r2
7: (bf) r0 = r1
8: (95) exit
processed 9 insns (limit 131072), stack depth 0
......
Summary: 1189 PASSED, 125 SKIPPED, 15 FAILED
"
Signed-off-by: Wang YanQing <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
|
|
The current method to compare 64-bit numbers for conditional jump is:
1) Compare the high 32-bit first.
2) If the high 32-bit isn't the same, then goto step 4.
3) Compare the low 32-bit.
4) Check the desired condition.
This method is right for unsigned comparison, but it is buggy for signed
comparison, because it performs a signed comparison on the low 32 bits
too. A 64-bit number has only one sign bit, the MSB of the 64-bit value,
so it is wrong to treat the low 32 bits as a signed number and compare
them with a signed comparison.
This patch fixes the bug and adds a testcase in selftests/bpf for such bug.
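A small C model of the corrected comparison (sketch only): the high
words are compared signed, but once they are equal the low words must
be compared unsigned:

  #include <stdint.h>

  /* signed 64-bit "a < b" using 32-bit halves */
  static int slt64(int32_t a_hi, uint32_t a_lo, int32_t b_hi, uint32_t b_lo)
  {
          if (a_hi != b_hi)
                  return a_hi < b_hi;  /* signed: only the high word has the sign bit */
          return a_lo < b_lo;          /* unsigned: the low word carries no sign */
  }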
Signed-off-by: Wang YanQing <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Pull ARC fixes from Vineet Gupta:
"A few minor fixes for ARC.
- regression in memset if line size != 64
- avoid panic if PAE and IOC"
* tag 'arc-5.1-final' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
ARC: memset: fix build with L1_CACHE_SHIFT != 6
ARC: [hsdk] Make it easier to add PAE40 region to DTB
ARC: PAE40: don't panic and instead turn off hw ioc
|
|
The pvclock_page and hvclock_page variables are (as the name implies)
addresses of pages, created by the linker script.
But we declared them as just "extern u8" variables, which _works_, but
now that gcc does some more bounds checking, it causes warnings like
warning: array subscript 1 is outside array bounds of ‘u8[1]’
when we then access more than one byte from those variables.
Fix this by simply making the declaration of the variables match
reality, which makes the compiler happy too.
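The shape of the change, sketched (the hidden-visibility attribute
mirrors what the vDSO already uses):

  /* before: "works", but gcc's bounds checking sees a 1-byte object */
  extern u8 hvclock_page;

  /* after: the declaration matches the page the linker script creates */
  extern u8 hvclock_page[PAGE_SIZE]
          __attribute__((visibility("hidden")));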
Signed-off-by: Linus Torvalds <[email protected]>
|
|
When a VCPU doesn't have pointer auth, we want to hide all four pointer
auth ID register fields from the guest, not just one of them.
Fixes: 384b40caa8af ("KVM: arm/arm64: Context-switch ptrauth registers")
Reported-by: Andrew Murray <[email protected]>
Fscked-up-by: Marc Zyngier <[email protected]>
Acked-by: Will Deacon <[email protected]>
Tested-by: Andrew Murray <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into for-next/core
Conflicts:
arch/arm64/Kconfig
arch/arm64/include/asm/arch_timer.h
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into for-next/core
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into for-next/core
|
|
Select the TISCI Interrupt Router and Interrupt Aggregator drivers, and
all their dependencies, for TI SoCs based on the K3 architecture.
Suggested-by: Marc Zyngier <[email protected]>
Signed-off-by: Lokesh Vutla <[email protected]>
Acked-by: Tony Lindgren <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
|
|
Configure arm64 runtime CPU speculation bug mitigations in accordance
with the 'mitigations=' cmdline option. This affects Meltdown, Spectre
v2, and Speculative Store Bypass.
The default behavior is unchanged.
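The option takes the same values as on the other architectures that
already support it, e.g.:

  mitigations=off         # disable all optional CPU mitigations
  mitigations=auto        # mitigate, keep SMT enabled (default)
  mitigations=auto,nosmt  # also disable SMT if a mitigation requires it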
Signed-off-by: Josh Poimboeuf <[email protected]>
[will: reorder checks so KASLR implies KPTI and SSBS is affected by cmdline]
Signed-off-by: Will Deacon <[email protected]>
|
|
SSBS provides a relatively cheap mitigation for SSB, but it is still a
mitigation and its presence does not indicate that the CPU is unaffected
by the vulnerability.
Tweak the mitigation logic so that we report the correct string in sysfs.
Signed-off-by: Will Deacon <[email protected]>
|
|
Enable the CPU vulnerability show functions for spectre_v1, spectre_v2,
meltdown and store-bypass.
Signed-off-by: Mian Yousaf Kaukab <[email protected]>
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Return status based on ssbd_state and __ssb_safe. If the
mitigation is disabled, or the firmware isn't responding, then
return the expected machine state based on a whitelist of known
good cores.
Given a heterogeneous machine, the overall machine vulnerability
defaults to safe but is reset to unsafe when we miss the whitelist
and the firmware doesn't explicitly tell us the core is safe.
In order to make that work, we delay transitioning to vulnerable
until we know the firmware isn't responding, to avoid a case
where we miss the whitelist but the firmware goes ahead and
reports the core is not vulnerable. If all the cores in the
machine have SSBS, then __ssb_safe will remain true.
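As a rough standalone model of that reporting policy (the names below
are illustrative, not the kernel's):

  enum fw_reply { FW_SAFE, FW_UNSAFE, FW_NOT_RESPONDING };

  static const char *ssb_status(int on_whitelist, enum fw_reply fw,
                                int all_cores_have_ssbs)
  {
          if (on_whitelist || fw == FW_SAFE || all_cores_have_ssbs)
                  return "Not affected";
          if (fw == FW_NOT_RESPONDING)
                  return "Vulnerable";  /* missed whitelist, no fw answer */
          return "Mitigation: Speculative Store Bypass disabled";
  }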
Tested-by: Stefan Wahren <[email protected]>
Signed-off-by: Jeremy Linton <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
__early_cpu_boot_status is of type long. Use the .quad assembler
directive to allocate the proper size.
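That is, the storage sketched roughly as:

  /* a C "long" is 8 bytes on arm64, so reserve a quadword, not a word */
  __early_cpu_boot_status:
          .quad   0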
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Arun KS <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
There is a kernel crash that happens if rt_sigreturn() is called inside
a transactional block.
This crash happens if the kernel hits an in-kernel page fault when
accessing userspace memory, usually through copy_ckvsx_to_user(). A
major page fault calls might_sleep(), which can cause a task
reschedule. A task reschedule (switch_to()) reclaims and recheckpoints
the TM state, but, in the signal return path, the checkpointed memory
was already reclaimed, so the exception stack holds an MSR with
MSR[TS]=0.
When the code returns from might_sleep() after a task reschedule
happened, the task resumes with its memory recheckpointed and
CPU MSR[TS] = suspended.
This means that might_sleep() has a side effect if it is called with
CPU MSR[TS] = 0 while the task has regs->msr[TS] != 0.
This side effect can cause a TM Bad Thing: at the exception entrance
the stack saved MSR[TS]=0, and this is what will be used at RFID, but
the processor has MSR[TS] = Suspended; this transition is invalid, so
a TM Bad Thing exception is raised, causing the following crash:
Unexpected TM Bad Thing exception at c00000000000e9ec (msr 0x8000000302a03031) tm_scratch=800000010280b033
cpu 0xc: Vector: 700 (Program Check) at [c00000003ff1fd70]
pc: c00000000000e9ec: fast_exception_return+0x100/0x1bc
lr: c000000000032948: handle_rt_signal64+0xb8/0xaf0
sp: c0000004263ebc40
msr: 8000000302a03031
current = 0xc000000415050300
paca = 0xc00000003ffc4080 irqmask: 0x03 irq_happened: 0x01
pid = 25006, comm = sigfuz
Linux version 5.0.0-rc1-00001-g3bd6e94bec12 (breno@debian) (gcc version 8.2.0 (Debian 8.2.0-3)) #899 SMP Mon Jan 7 11:30:07 EST 2019
WARNING: exception is not recoverable, can't continue
enter ? for help
[c0000004263ebc40] c000000000032948 handle_rt_signal64+0xb8/0xaf0 (unreliable)
[c0000004263ebd30] c000000000022780 do_notify_resume+0x2f0/0x430
[c0000004263ebe20] c00000000000e844 ret_from_except_lite+0x70/0x74
--- Exception: c00 (System Call) at 00007fffbaac400c
SP (7fffeca90f40) is in userspace
The solution for this problem is to run the sigreturn code with
regs->msr[TS] disabled, thus avoiding the side effect above.
This does not seem to be a problem, since regs->msr will be replaced by
the ucontext value anyway, so it is being flushed already; in this case,
it is simply flushed earlier.
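A minimal sketch of the idea (the MSR[TS] field layout below is an
assumption for illustration):

  #include <stdint.h>

  #define MSR_TS_MASK (3ULL << 33)   /* assumed MSR[TS] bit positions */

  /* enter the sigreturn path with MSR[TS] cleared, so an in-kernel page
   * fault plus reschedule cannot leave the CPU in Suspended while the
   * exception stack still says TS=0 */
  static inline uint64_t msr_drop_ts(uint64_t msr)
  {
          return msr & ~MSR_TS_MASK;
  }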
Signed-off-by: Breno Leitao <[email protected]>
Acked-by: Michael Neuling <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
This patch fixes the below crash by making sure we touch the subpage
protection related structures only if we know they are allocated on
the platform. With radix translation we don't allocate the hash context
at all, and trying to access subpage_prot_table results in:
Faulting instruction address: 0xc00000000008bdb4
Oops: Kernel access of bad area, sig: 11 [#1]
LE PAGE_SIZE=64K MMU=Radix MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV
....
NIP [c00000000008bdb4] sys_subpage_prot+0x74/0x590
LR [c00000000000b688] system_call+0x5c/0x70
Call Trace:
[c00020002c6b7d30] [c00020002c6b7d90] 0xc00020002c6b7d90 (unreliable)
[c00020002c6b7e20] [c00000000000b688] system_call+0x5c/0x70
Instruction dump:
fb61ffd8 fb81ffe0 fba1ffe8 fbc1fff0 fbe1fff8 f821ff11 e92d1178 f9210068
39200000 e92d0968 ebe90630 e93f03e8 <eb891038> 60000000 3860fffe e9410068
We also move the subpage_prot_table with mmap_sem held, to avoid a
race between two parallel subpage_prot syscalls.
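The guard is roughly the following (radix_enabled() is the existing
predicate for the active MMU mode; sketch only):

  /* on radix there is no hash context, hence no subpage_prot_table */
  if (radix_enabled())
          return -ENOENT;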
Fixes: 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t for radix")
Reported-by: Sachin Sant <[email protected]>
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Tested-by: Sachin Sant <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
Print more information about an MCE error, namely whether it is a
hardware or a software error.
Some MCE errors can be easily categorized as hardware or software
errors, e.g. UEs are due to hardware errors, whereas an error
triggered by invalid usage of tlbie is a pure software bug. But not
all MCE errors can be easily categorized as either software or
hardware. There are errors like multihit errors which are usually the
result of a software bug, but in some rare cases a hardware failure
can cause a multihit error. In the past, we have seen cases where,
after replacing a faulty chip, multihit errors stopped occurring. The
same goes for parity errors, which are usually due to faulty hardware,
but there is a chance that a multihit can also cause a parity error.
For such errors it is difficult to determine the real cause. Hence
this patch classifies MCE errors into the following four categories:
1. Hardware error:
UE and Link timeout failure errors.
2. Probable hardware error (some chance of software cause)
SLB/ERAT/TLB Parity errors.
3. Software error
Invalid tlbie form.
4. Probable software error (some chance of hardware cause)
SLB/ERAT/TLB Multihit errors.
Sample output:
MCE: CPU80: machine check (Warning) Guest SLB Multihit DAR: 000001001b6e0320 [Recovered]
MCE: CPU80: PID: 24765 Comm: qemu-system-ppc Guest NIP: [00007fffa309dc60]
MCE: CPU80: Probable Software error (some chance of hardware cause)
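Expressed as a C sketch, the classification above amounts to
(illustrative only; not the kernel's actual identifiers):

  enum mce_error_class {
          MCE_HW_ERROR,           /* 1: UE, link timeout */
          MCE_PROBABLE_HW_ERROR,  /* 2: SLB/ERAT/TLB parity */
          MCE_SW_ERROR,           /* 3: invalid tlbie form */
          MCE_PROBABLE_SW_ERROR,  /* 4: SLB/ERAT/TLB multihit */
  };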
Signed-off-by: Mahesh Salgaonkar <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
Currently all machine check errors are printed as severe errors, which
isn't correct. Print soft errors as warnings instead of severe errors.
Signed-off-by: Mahesh Salgaonkar <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
Also add the cpu number while displaying the MCE log. This will help
keep the logs cleaner when an MCE hits on multiple cpus simultaneously.
Before the changes the MCE output was:
Severe Machine check interrupt [Recovered]
NIP [d00000000ba80280]: insert_slb_entry.constprop.0+0x278/0x2c0 [mcetest_slb]
Initiator: CPU
Error type: SLB [Multihit]
Effective address: d00000000ba80280
After this patch series changes the MCE output will be:
MCE: CPU80: machine check (Warning) Host SLB Multihit [Recovered]
MCE: CPU80: NIP: [d00000000b550280] insert_slb_entry.constprop.0+0x278/0x2c0 [mcetest_slb]
MCE: CPU80: Probable software error (some chance of hardware cause)
UE in host application:
MCE: CPU48: machine check (Severe) Host UE Load/Store DAR: 00007fffc6079a80 paddr: 0000000f8e260000 [Not recovered]
MCE: CPU48: PID: 4584 Comm: find NIP: [0000000010023368]
MCE: CPU48: Hardware error
and for MCE in Guest:
MCE: CPU80: machine check (Warning) Guest SLB Multihit DAR: 000001001b6e0320 [Recovered]
MCE: CPU80: PID: 24765 Comm: qemu-system-ppc Guest NIP: [00007fffa309dc60]
MCE: CPU80: Probable software error (some chance of hardware cause)
Signed-off-by: Mahesh Salgaonkar <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
When analysing sources of OS jitter, I noticed that doorbells cannot be
traced.
Signed-off-by: Anton Blanchard <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
In commit 2bf1071a8d50 ("powerpc/64s: Remove POWER9 DD1 support") the
usage of 'dummy_copy_buffer' in the function __switch_to() was removed.
Since it is not used anywhere else, remove it completely.
This removes the following warning:
arch/powerpc/kernel/process.c:1156:17: error: 'dummy_copy_buffer' defined but not used
Suggested-by: Christophe Leroy <[email protected]>
Signed-off-by: Mathieu Malaterre <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
Currently an error return from kobject_init_and_add() is not followed
by a call to kobject_put(). This means there is a memory leak.
Add a call to kobject_put() in the error path of kobject_init_and_add().
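The error path then takes the usual form (sketch):

  error = kobject_init_and_add(kobj, ktype, parent, "%s", name);
  if (error) {
          kobject_put(kobj);  /* drops the reference kobject_init() took */
          return error;
  }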
Signed-off-by: Tobin C. Harding <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Tyrel Datwyler <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
Towards the goal of removing cc-ldoption: --hash-style= was added to
binutils 2.17.50.0.2 in 2006, and the minimum version of binutils
required for the kernel according to Documentation/process/changes.rst
is 2.20, so the linker flag can be passed unconditionally.
Suggested-by: Masahiro Yamada <[email protected]>
Signed-off-by: Nick Desaulniers <[email protected]>
Reviewed-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
The size checks in vmx_nested_state are wrong because the calculations
are made based on the size of a pointer to a struct kvm_nested_state
rather than the size of a struct kvm_nested_state.
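The bug class, reduced to a standalone example (the struct contents are
a stand-in for the real definition):

  #include <stdio.h>

  struct kvm_nested_state { char data[4096]; };   /* stand-in */

  int main(void)
  {
          struct kvm_nested_state *state = NULL;
          printf("%zu\n", sizeof(state));    /* 8: size of the pointer */
          printf("%zu\n", sizeof(*state));   /* 4096: size of the struct */
          return 0;
  }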
Reported-by: Felix Wilhelm <[email protected]>
Signed-off-by: Jim Mattson <[email protected]>
Reviewed-by: Drew Schmitt <[email protected]>
Reviewed-by: Marc Orr <[email protected]>
Reviewed-by: Peter Shier <[email protected]>
Reviewed-by: Krish Sadhukhan <[email protected]>
Fixes: 8fcc4b5923af5de58b80b53a069453b135693304
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use specific inline functions for RIP and RSP instead of
going through kvm_register_read and kvm_register_write,
which are quite a mouthful. kvm_rsp_read and kvm_rsp_write
did not exist, so add them.
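The new helpers are thin wrappers, along these lines (sketch):

  static inline unsigned long kvm_rsp_read(struct kvm_vcpu *vcpu)
  {
          return kvm_register_read(vcpu, VCPU_REGS_RSP);
  }

  static inline void kvm_rsp_write(struct kvm_vcpu *vcpu, unsigned long val)
  {
          kvm_register_write(vcpu, VCPU_REGS_RSP, val);
  }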
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
... now that there is no overhead when using dedicated accessors.
Opportunistically remove a bogus "FIXME" in handle_rdmsr() regarding
the upper 32 bits of RAX and RDX. Zeroing the upper 32 bits is
architecturally correct as 32-bit writes in 64-bit mode unconditionally
clear the upper 32 bits.
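A quick user-space illustration of that architectural rule (sketch):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t rax = 0xdeadbeefcafebabeULL;
          /* a 32-bit write in 64-bit mode zero-extends into the full
           * register; C models it as assigning a uint32_t value */
          rax = (uint32_t)0x12345678;
          printf("%#llx\n", (unsigned long long)rax);   /* 0x12345678 */
          return 0;
  }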
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Except for RSP and RIP, which are held in VMX's VMCS, GPRs are always
treated as "available and dirty" on both VMX and SVM, i.e. they are
unconditionally loaded/saved immediately before/after VM-Enter/VM-Exit.
Eliminating the unnecessary caching code reduces the size of KVM by a
non-trivial amount, much of which comes from the most common code paths.
E.g. on x86_64, kvm_emulate_cpuid() is reduced from 342 to 182 bytes and
kvm_emulate_hypercall() from 1362 to 1143, with the total size of KVM
dropping by ~1000 bytes. With CONFIG_RETPOLINE=y, the numbers are even
more pronounced, e.g.: 353->182, 1418->1172 and well over 2000 bytes.
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
The pfn_valid check is not sufficient because it only checks whether a
page has a struct page; if "mem=" was passed to the kernel, some valid
pages won't have a struct page. This means that if a guest was assigned
valid memory that lies beyond the mem= boundary, it will be passed
uncached to the guest no matter what the guest caching attributes for
this memory are.
Introduce a new function, e820__mapped_raw_any(), which is equivalent
to e820__mapped_any() but uses the original, unmodified e820 map, and
use it to identify real *RAM*.
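Callers can then ask the raw firmware map directly, e.g. (sketch):

  /* true if the raw e820 map says [start, end) is RAM, even when
   * mem= trimmed the kernel's own view of memory */
  bool is_ram = e820__mapped_raw_any(start, end, E820_TYPE_RAM);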
Signed-off-by: KarimAllah Ahmed <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use page_address_valid in a few more locations that are already
checking for a page-aligned address that does not cross the maximum
physical address.
Signed-off-by: KarimAllah Ahmed <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use kvm_vcpu_map for accessing the enlightened VMCS since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use kvm_vcpu_map for accessing the shadow VMCS since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use the new mapping API for mapping guest memory to avoid depending on
"struct page".
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use kvm_vcpu_map in emulator_cmpxchg_emulated since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use kvm_vcpu_map when mapping the posted interrupt descriptor table since
using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory
that has a "struct page".
One additional semantic change is that the virtual host mapping lifecycle
has changed a bit: it now has the same lifetime as the pinning of the
interrupt descriptor table page on the host side.
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use kvm_vcpu_map when mapping the virtual APIC page since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".
One additional semantic change is that the virtual host mapping lifecycle
has changed a bit: it now has the same lifetime as the pinning of the
virtual APIC page on the host side.
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use kvm_vcpu_map when mapping the L1 MSR bitmap since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".
Signed-off-by: KarimAllah Ahmed <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Use kvm_vcpu_map to map the VMCS12 from guest memory because
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
cmpxchg_gpte() calls get_user_pages_fast() to retrieve the number of
pages and the respective struct page to map in the kernel virtual
address space.
This doesn't work if get_user_pages_fast() is invoked with a userspace
virtual address that's backed by PFNs outside of kernel reach (e.g., when
limiting the kernel memory with mem= in the command line and using
/dev/mem to map memory).
If get_user_pages_fast() fails, look up the VMA that backs the userspace
virtual address, compute the PFN and the physical address, and map it in
the kernel virtual address space with memremap().
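The fallback mapping itself is along these lines (sketch; the error
handling of the real patch is elided):

  /* map a PFN that has no struct page into the kernel */
  void *vaddr = memremap(pfn << PAGE_SHIFT, PAGE_SIZE, MEMREMAP_WB);
  if (!vaddr)
          return -EFAULT;
  /* ... perform the cmpxchg at vaddr + offset_in_page(uaddr) ... */
  memunmap(vaddr);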
Signed-off-by: Filippo Sironi <[email protected]>
Signed-off-by: KarimAllah Ahmed <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Update the PML table without mapping and unmapping the page. This also
avoids using kvm_vcpu_gpa_to_page(..) which assumes that there is a "struct
page" for guest memory.
As a side-effect of using kvm_write_guest_page the page is also properly
marked as dirty.
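The write then takes roughly this form (sketch; variable names are
illustrative):

  /* append one GPA to the PML log in guest memory; the helper also
   * marks the page dirty */
  if (kvm_write_guest_page(vcpu->kvm, gfn, &gpa, offset, sizeof(gpa)))
          return 1;   /* error path */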
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Read the data directly from guest memory instead of the map->read->unmap
sequence. This also avoids using kvm_vcpu_gpa_to_page() and kmap(), which
assume that there is a "struct page" for guest memory.
Suggested-by: Jim Mattson <[email protected]>
Signed-off-by: KarimAllah Ahmed <[email protected]>
Reviewed-by: Jim Mattson <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
The hardware configuration register has some useful bits which can be
used by guests. Implement McStatusWrEn which can be used by guests when
injecting MCEs with the in-kernel mce-inject module.
For that, we need to set bit 18 - McStatusWrEn - first, before writing
the MCi_STATUS registers (otherwise we #GP).
Add the required machinery to do so.
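From the guest's point of view the sequence is roughly the following
(sketch; MSR names as in the kernel headers, "mci_status" and "bank"
are illustrative):

  u64 hwcr;

  rdmsrl(MSR_K7_HWCR, hwcr);
  wrmsrl(MSR_K7_HWCR, hwcr | BIT_ULL(18));        /* McStatusWrEn */
  wrmsrl(MSR_IA32_MCx_STATUS(bank), mci_status);  /* no #GP now */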
Signed-off-by: Borislav Petkov <[email protected]>
Cc: Jim Mattson <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: KVM <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Yazen Ghannam <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
The capabilities header depends on asm/vmx.h but doesn't explicitly
include said file. This currently doesn't cause problems as all users
of capabilities.h first include asm/vmx.h, but the issue often results in
build errors if someone starts moving things around the VMX files.
Fixes: 3077c1910882 ("KVM: VMX: Move capabilities structs and helpers to dedicated file")
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Smatch complains about this:
arch/x86/kvm/vmx/vmx.c:5730 dump_vmcs()
warn: KERN_* level not at start of string
The code should be using pr_cont() instead of pr_err().
Fixes: 9d609649bb29 ("KVM: vmx: print more APICv fields in dump_vmcs")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Ten percent of nothin' is... let me do the math here. Nothin' into
nothin', carry the nothin'...
Cc: Wanpeng Li <[email protected]>
Reviewed-by: Liran Alon <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Checking for a pending non-periodic interrupt in start_hv_timer() leads
to restart_apic_timer() making an unnecessary call to start_sw_timer()
due to start_hv_timer() returning false.
Alternatively, start_hv_timer() could return %true when there is a
pending non-periodic interrupt, but that approach is less intuitive,
i.e. would require a beefy comment to explain an otherwise simple check.
Cc: Liran Alon <[email protected]>
Cc: Wanpeng Li <[email protected]>
Suggested-by: Liran Alon <[email protected]>
Suggested-by: Paolo Bonzini <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Refactor kvm_x86_ops->set_hv_timer to use an explicit parameter for
stating that the timer has expired. Overloading the return value is
unnecessarily clever, e.g. can lead to confusion over the proper return
value from start_hv_timer() when r==1.
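I.e. the hook's signature becomes, roughly:

  /* sketch: expiry reported via an explicit out-parameter */
  int (*set_hv_timer)(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc,
                      bool *expired);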
Cc: Wanpeng Li <[email protected]>
Cc: Liran Alon <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Explicitly call cancel_hv_timer() instead of returning %false to coerce
restart_apic_timer() into canceling it by way of start_sw_timer().
Functionally, the existing code is correct in the sense that it doesn't
do anything visibly wrong, e.g. generate spurious interrupts or miss
an interrupt. But it's extremely confusing and inefficient, e.g. there
are multiple extraneous calls to apic_timer_expired() that effectively
get dropped due to @timer_pending being %true.
Cc: Wanpeng Li <[email protected]>
Cc: Liran Alon <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
...now that VMX's preemption timer, i.e. the hv_timer, also adjusts its
programmed time based on lapic_timer_advance_ns. Without the delay, a
guest can see a timer interrupt arrive before the requested time when
KVM is using the hv_timer to emulate the guest's interrupt.
Fixes: c5ce8235cffa0 ("KVM: VMX: Optimize tscdeadline timer latency")
Cc: <[email protected]>
Cc: Wanpeng Li <[email protected]>
Reviewed-by: Liran Alon <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|