path: root/arch/x86
Age  Commit message  (Author; files changed, lines removed/added)
2024-07-02  x86/resctrl: Prepare to split rdt_domain structure  (Tony Luck; 5 files, -69/+69)
The rdt_domain structure is used for both control and monitor features. It is about to be split into separate structures for these two usages because the scope for control and monitoring features for a resource will be different for future resources. To allow for common code that scans a list of domains looking for a specific domain id, move all the common fields ("list", "id", "cpu_mask") into their own structure within the rdt_domain structure. Signed-off-by: Tony Luck <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Tested-by: Babu Moger <[email protected]> Link: https://lore.kernel.org/r/[email protected]
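As a rough sketch of the layout change described above (the struct name and anything beyond the "list", "id" and "cpu_mask" members are assumptions for illustration, not taken from the patch):

  /* Common fields shared by control and monitor domains. */
  struct rdt_domain_hdr {
          struct list_head        list;
          int                     id;
          struct cpumask          cpu_mask;
  };

  struct rdt_domain {
          struct rdt_domain_hdr   hdr;    /* scanned by the common domain-lookup code */
          /* control- and monitor-specific fields follow */
  };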
2024-07-02  x86/resctrl: Prepare for new domain scope  (Tony Luck; 4 files, -17/+42)
Resctrl resources operate on subsets of CPUs in the system with the defining attribute of each subset being an instance of a particular level of cache. E.g. all CPUs sharing an L3 cache would be part of the same domain. In preparation for features that are scoped at the NUMA node level, change the code from explicit references to "cache_level" to a more generic scope. At this point the only options for this scope are groups of CPUs that share an L2 cache or L3 cache. Clean up the error handling when looking up domains. Report invalid ids before calling rdt_find_domain() in preparation for better messages when scope can be other than cache scope. This means that rdt_find_domain() will never return an error. So remove checks for error from the call sites. Signed-off-by: Tony Luck <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Reinette Chatre <[email protected]> Tested-by: Babu Moger <[email protected]> Link: https://lore.kernel.org/r/[email protected]
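A hedged sketch of what a generic scope could look like in place of a raw cache level (the enum name and values are assumptions; only the L2/L3 options and the future NUMA-node scope are stated above):

  enum resctrl_scope {
          RESCTRL_L2_CACHE = 2,   /* domain = CPUs sharing an L2 cache instance */
          RESCTRL_L3_CACHE = 3,   /* domain = CPUs sharing an L3 cache instance */
          /* a NUMA-node scope would be added here for future resources */
  };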
2024-07-02  x86/xen: Convert comma to semicolon  (Chen Ni; 1 file, -2/+2)
Replace a comma between expression statements by a semicolon. Fixes: 8310b77b48c5 ("Xen/gnttab: handle p2m update errors on a per-slot basis") Signed-off-by: Chen Ni <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Juergen Gross <[email protected]>
2024-07-02  x86/xen/time: Reduce Xen timer tick  (Frediano Ziglio; 1 file, -1/+1)
The current timer tick is causing some deadlines to be missed. The current high value constant was probably due to an old bug in the Xen timer implementation causing errors if the deadline was in the future. This was fixed in Xen commit 19c6cbd90965 ("xen/vcpu: ignore VCPU_SSHOTTMR_future"). Usage of VCPU_SSHOTTMR_future in the Linux kernel was removed by commit c06b6d70feb3 ("xen/x86: don't lose event interrupts"). Signed-off-by: Frediano Ziglio <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Juergen Gross <[email protected]>
2024-07-02  x86/efi: Drop support for fake EFI memory maps  (Ard Biesheuvel; 8 files, -270/+10)
Between kexec and confidential VM support, handling the EFI memory maps correctly on x86 is already proving to be rather difficult (as opposed to other EFI architectures which manage to never modify the EFI memory map to begin with). EFI fake memory map support is essentially a development hack (for testing new support for the 'special purpose' and 'more reliable' EFI memory attributes) that leaked into production code. The regions marked in this manner are not actually recognized as such by the firmware itself or the EFI stub (and never have been), and marking memory as 'more reliable' seems rather futile if the underlying memory is just ordinary RAM. Marking memory as 'special purpose' in this way is also dubious, but may be in use in production code nonetheless. However, the same should be achievable by using the memmap= command line option with the ! operator. EFI fake memmap support is not enabled by any of the major distros (Debian, Fedora, SUSE, Ubuntu) and does not exist on other architectures, so let's drop support for it. Acked-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Dan Williams <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
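For illustration only (the size and address below are made up, not taken from the patch): passing memmap=2G!8G on the kernel command line overrides the e820 type of the 2 GiB of RAM starting at the 8 GiB boundary, using the documented memmap=nn[KMG]!ss[KMG] form that the text above suggests in place of the fake memory map.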
2024-07-01  x86/alternatives, kvm: Fix a couple of CALLs without a frame pointer  (Borislav Petkov (AMD); 4 files, -7/+10)
objtool complains: arch/x86/kvm/kvm.o: warning: objtool: .altinstr_replacement+0xc5: call without frame pointer save/setup vmlinux.o: warning: objtool: .altinstr_replacement+0x2eb: call without frame pointer save/setup Make sure %rSP is an output operand to the respective asm() statements. The test_cc() hunk and ALT_OUTPUT_SP() courtesy of peterz. Also from him add some helpful debugging info to the documentation. Now on to the explanations: tl;dr: The alternatives macros are pretty fragile. If I do ALT_OUTPUT_SP(output) in order to be able to package in a %rsp reference for objtool so that a stack frame gets properly generated, the inline asm input operand with positional argument 0 in clear_page(): "0" (page) gets "renumbered" due to the added : "+r" (current_stack_pointer), "=D" (page) and then gcc says: ./arch/x86/include/asm/page_64.h:53:9: error: inconsistent operand constraints in an ‘asm’ The fix is to use an explicit "D" constraint which points to a singleton register class (gcc terminology) which ends up doing what is expected here: the page pointer - input and output - should be in the same %rdi register. Other register classes have more than one register in them - example: "r" and "=r" or "A": ‘A’ The ‘a’ and ‘d’ registers. This class is used for instructions that return double word results in the ‘ax:dx’ register pair. Single word values will be allocated either in ‘ax’ or ‘dx’. so using "D" and "=D" just works in this particular case. And yes, one would say, sure, why don't you do "+D" but then: : "+r" (current_stack_pointer), "+D" (page) : [old] "i" (clear_page_orig), [new1] "i" (clear_page_rep), [new2] "i" (clear_page_erms), : "cc", "memory", "rax", "rcx") now find the Waldo^Wcomma which throws a wrench into all this. Because that silly macro has an "input..." consume-all last macro arg and in it, one is supposed to supply input *and* clobbers, leading to silly syntax snafus. Yap, they need to be cleaned up, one fine day... Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/ Reported-by: kernel test robot <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Sean Christopherson <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/20240625112056.GDZnqoGDXgYuWBDUwu@fat_crate.local
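A minimal standalone illustration of the constraint point being made (this is userspace demo code, not the kernel macro; the asm body and variable names are made up): with an explicit "D"/"=D" pair the pointer stays in %rdi for both input and output no matter how many other outputs are added, whereas a positional "0" constraint would get renumbered.

  #include <stdio.h>

  int main(void)
  {
          char buf[8] = "hello";
          char *page = buf;
          unsigned long extra_out;

          /* extra_out stands in for the added current_stack_pointer output */
          asm volatile("movb $72, (%%rdi)"        /* write 'H' through %rdi */
                       : "=r" (extra_out), "=D" (page)
                       : "D" (page)
                       : "memory");
          (void)extra_out;

          printf("%s\n", buf);    /* prints "Hello" */
          return 0;
  }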
2024-06-30  x86-32: fix cmpxchg8b_emu build error with clang  (Linus Torvalds; 1 file, -7/+5)
The kernel test robot reported that clang no longer compiles the 32-bit x86 kernel in some configurations due to commit 95ece48165c1 ("locking/atomic/x86: Rewrite x86_32 arch_atomic64_{,fetch}_{and,or,xor}() functions"). The build fails with arch/x86/include/asm/cmpxchg_32.h:149:9: error: inline assembly requires more registers than available and the reason seems to be that not only does the cmpxchg8b instruction need four fixed registers (EDX:EAX and ECX:EBX), with the emulation fallback the inline asm also wants a fifth fixed register for the address (it uses %esi for that, but that's just a software convention with cmpxchg8b_emu). Avoiding using another pointer input to the asm (and just forcing it to use the "0(%esi)" addressing that we end up requiring for the sw fallback) seems to fix the issue. Reported-by: kernel test robot <[email protected]> Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/ Fixes: 95ece48165c1 ("locking/atomic/x86: Rewrite x86_32 arch_atomic64_{,fetch}_{and,or,xor}() functions") Link: https://lore.kernel.org/all/[email protected]/ Suggested-by: Uros Bizjak <[email protected]> Reviewed-and-Tested-by: Uros Bizjak <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2024-06-29  x86/cpu/intel: Drop stray FAM6 check with new Intel CPU model defines  (Andrew Cooper; 1 file, -11/+7)
The outer if () should have been dropped when switching to c->x86_vfm. Fixes: 6568fc18c2f6 ("x86/cpu/intel: Switch to new Intel CPU model defines") Signed-off-by: Andrew Cooper <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Tony Luck <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-06-28  Merge tag 'hardening-v6.10-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux  (Linus Torvalds; 1 file, -9/+6)
Pull hardening fixes from Kees Cook:
 - Remove invalid tty __counted_by annotation (Nathan Chancellor)
 - Add missing MODULE_DESCRIPTION()s for KUnit string tests (Jeff Johnson)
 - Remove non-functional per-arch kstack entropy filtering
* tag 'hardening-v6.10-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  tty: mxser: Remove __counted_by from mxser_board.ports[]
  randomize_kstack: Remove non-functional per-arch entropy filtering
  string: kunit: add missing MODULE_DESCRIPTION() macros
2024-06-28  x86: stop playing stack games in profile_pc()  (Linus Torvalds; 1 file, -19/+1)
The 'profile_pc()' function is used for timer-based profiling, which isn't really all that relevant any more to begin with, but it also ends up making assumptions based on the stack layout that aren't necessarily valid. Basically, the code tries to account the time spent in spinlocks to the caller rather than the spinlock, and while I support that as a concept, it's not worth the code complexity or the KASAN warnings when no serious profiling is done using timers anyway these days. And the code really does depend on stack layout that is only true in the simplest of cases. We've lost the comment at some point (I think when the 32-bit and 64-bit code was unified), but it used to say: Assume the lock function has either no stack frame or a copy of eflags from PUSHF. which explains why it just blindly loads a word or two straight off the stack pointer and then takes a minimal look at the values to just check if they might be eflags or the return pc: Eflags always has bits 22 and up cleared unlike kernel addresses but that basic stack layout assumption assumes that there isn't any lock debugging etc going on that would complicate the code and cause a stack frame. It causes KASAN unhappiness reported for years by syzkaller [1] and others [2]. With no real practical reason for this any more, just remove the code. Just for historical interest, here's some background commits relating to this code from 2006: 0cb91a229364 ("i386: Account spinlocks to the caller during profiling for !FP kernels") 31679f38d886 ("Simplify profile_pc on x86-64") and a code unification from 2009: ef4512882dbe ("x86: time_32/64.c unify profile_pc") but the basics of this thing actually goes back to before the git tree. Link: https://syzkaller.appspot.com/bug?extid=84fe685c02cd112a2ac3 [1] Link: https://lore.kernel.org/all/CAK55_s7Xyq=nh97=K=G1sxueOFrJDAvPOJAL4TPTCAYvmxO9_A@mail.gmail.com/ [2] Signed-off-by: Linus Torvalds <[email protected]>
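For reference, a rough sketch of the heuristic being deleted, reconstructed from the description above (not the exact removed code):

  unsigned long profile_pc(struct pt_regs *regs)
  {
          unsigned long pc = instruction_pointer(regs);

          if (!user_mode(regs) && in_lock_functions(pc)) {
                  unsigned long *sp = (unsigned long *)regs->sp;

                  /* Bits 22+ set: looks like a kernel return address;
                   * otherwise assume it is saved EFLAGS and skip it. */
                  if (sp[0] >> 22)
                          return sp[0];
                  if (sp[1] >> 22)
                          return sp[1];
          }
          return pc;
  }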
2024-06-28  x86/cpufeatures: Add HWP highest perf change feature flag  (Srinivas Pandruvada; 1 file, -0/+1)
When CPUID[6].EAX[15] is set to 1, this CPU supports notification for HWP (Hardware P-states) highest performance change. Add a feature flag to check if the CPU supports HWP highest performance change. Signed-off-by: Srinivas Pandruvada <[email protected]> Link: https://patch.msgid.link/[email protected] Signed-off-by: Rafael J. Wysocki <[email protected]>
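A hedged sketch of the raw CPUID check the new flag corresponds to (the name of the feature flag added by the patch is not shown here; cpuid_eax() and BIT() are standard kernel helpers):

  /* CPUID.06H:EAX[15]: HWP highest performance change notification */
  bool hwp_highest_perf_change = cpuid_eax(0x6) & BIT(15);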
2024-06-28  KVM: X86: Remove unnecessary GFP_KERNEL_ACCOUNT for temporary variables  (Peng Hao; 1 file, -5/+4)
Some variables allocated in kvm_arch_vcpu_ioctl are released when the function exits, so there is no need to set GFP_KERNEL_ACCOUNT. Signed-off-by: Peng Hao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: x86/pmu: Introduce distinct macros for GP/fixed counter max number  (Dapeng Mi; 6 files, -25/+31)
Refine the macros which define maximum General Purpose (GP) and fixed counter numbers. Currently the macro KVM_INTEL_PMC_MAX_GENERIC is used to represent the maximum supported General Purpose (GP) counter number ambiguously across Intel and AMD platforms. This would cause issues if AMD begins to support more GP counters than Intel. Thus a bunch of new macros including vendor specific and vendor independent are introduced to replace the old macros. The vendor independent macros are used in x86 common code to hide vendor difference and eliminate the ambiguity. No logic changes are introduced in this patch. Signed-off-by: Dapeng Mi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Co-developed-by: Sean Christopherson <[email protected]> Signed-off-by: Sean Christopherson <[email protected]>
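An illustrative shape for the macro split described (the names and values below are assumptions; only the idea of vendor-specific plus vendor-independent maxima comes from the text):

  #define KVM_MAX_NR_INTEL_GP_COUNTERS    8
  #define KVM_MAX_NR_AMD_GP_COUNTERS      6

  /* Vendor-independent cap used by x86 common code. */
  #define KVM_MAX_NR_GP_COUNTERS                                          \
          (KVM_MAX_NR_INTEL_GP_COUNTERS > KVM_MAX_NR_AMD_GP_COUNTERS ?    \
           KVM_MAX_NR_INTEL_GP_COUNTERS : KVM_MAX_NR_AMD_GP_COUNTERS)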
2024-06-28  KVM: x86: WARN if a vCPU gets a valid wakeup that KVM can't yet inject  (Sean Christopherson; 1 file, -1/+4)
WARN if a blocking vCPU is awakened by a valid wake event that KVM can't inject, e.g. because KVM needs to complete a nested VM-enter, or needs to re-inject an exception. For the nested VM-Enter case, KVM is supposed to clear "nested_run_pending" if L1 puts L2 into HLT, i.e. entering HLT "completes" the nested VM-Enter. And for already-injected exceptions, it should be impossible for the vCPU to be in a blocking state if a VM-Exit occurred while an exception was being vectored. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: nVMX: Fold requested virtual interrupt check into has_nested_events()  (Sean Christopherson; 7 files, -33/+5)
Check for a Requested Virtual Interrupt, i.e. a virtual interrupt that is pending delivery, in vmx_has_nested_events() and drop the one-off kvm_x86_ops.guest_apic_has_interrupt() hook. In addition to dropping a superfluous hook, this fixes a bug where KVM would incorrectly treat virtual interrupts _for L2_ as always enabled due to kvm_arch_interrupt_allowed(), by way of vmx_interrupt_blocked(), treating IRQs as enabled if L2 is active and vmcs12 is configured to exit on IRQs, i.e. KVM would treat a virtual interrupt for L2 as a valid wake event based on L1's IRQ blocking status. Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: nVMX: Check for pending posted interrupts when looking for nested events  (Sean Christopherson; 1 file, -2/+34)
Check for pending (and notified!) posted interrupts when checking if L2 has a pending wake event, as a fully posted/notified virtual interrupt is a valid wake event for HLT. Note that KVM must check vmx->nested.pi_pending to avoid prematurely waking L2, e.g. even if KVM sees a non-zero PID.PIR and PID.ON=1, the virtual interrupt won't actually be recognized until a notification IRQ is received by the vCPU or the vCPU does (nested) VM-Enter. Fixes: 26844fee6ade ("KVM: x86: never write to memory from kvm_vcpu_check_block()") Cc: [email protected] Cc: Maxim Levitsky <[email protected]> Reported-by: Jim Mattson <[email protected]> Closes: https://lore.kernel.org/all/[email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: VMX: Split out the non-virtualization part of vmx_interrupt_blocked()  (Sean Christopherson; 2 files, -3/+9)
Move the non-VMX chunk of the "interrupt blocked" checks to a separate helper so that KVM can reuse the code to detect if interrupts are blocked for L2, e.g. to determine if a virtual interrupt _for L2_ is a valid wake event. If L1 disables HLT-exiting for L2, nested APICv is enabled, and L2 HLTs, then L2 virtual interrupts are valid wake events, but if and only if interrupts are unblocked for L2. Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
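A hedged sketch of the kind of split described (the helper name and the exact accessors are assumptions; the nested-virtualization check stays in the outer function):

  /* Non-VMX-specific part: RFLAGS.IF and STI/MOV-SS interrupt shadows. */
  static bool __vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
  {
          return !(vmx_get_rflags(vcpu) & X86_EFLAGS_IF) ||
                 (vmx_get_interrupt_shadow(vcpu) &
                  (KVM_X86_SHADOW_INT_STI | KVM_X86_SHADOW_INT_MOV_SS));
  }

  bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
  {
          if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
                  return false;

          return __vmx_interrupt_blocked(vcpu);
  }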
2024-06-28  KVM: nVMX: Request immediate exit iff pending nested event needs injection  (Sean Christopherson; 3 files, -4/+4)
When requesting an immediate exit from L2 in order to inject a pending event, do so only if the pending event actually requires manual injection, i.e. if and only if KVM actually needs to regain control in order to deliver the event. Avoiding the "immediate exit" isn't simply an optimization, it's necessary to make forward progress, as the "already expired" VMX preemption timer trick that KVM uses to force a VM-Exit has higher priority than events that aren't directly injected. At present time, this is a glorified nop as all events processed by vmx_has_nested_events() require injection, but that will not hold true in the future, e.g. if there's a pending virtual interrupt in vmcs02.RVI. I.e. if KVM is trying to deliver a virtual interrupt to L2, the expired VMX preemption timer will trigger VM-Exit before the virtual interrupt is delivered, and KVM will effectively hang the vCPU in an endless loop of forced immediate VM-Exits (because the pending virtual interrupt never goes away). Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: nVMX: Add a helper to get highest pending from Posted Interrupt vector  (Sean Christopherson; 2 files, -2/+13)
Add a helper to retrieve the highest pending vector given a Posted Interrupt descriptor. While the actual operation is straightforward, it's surprisingly easy to mess up, e.g. if one tries to reuse lapic.c's find_highest_vector(), which doesn't work with PID.PIR due to the APIC's IRR and ISR component registers being physically discontiguous (they're 4-byte registers aligned at 16-byte intervals). To make PIR handling more consistent with respect to IRR and ISR handling, return -1 to indicate "no interrupt pending". Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
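A rough sketch of such a helper, assuming the usual 256-bit PIR bitmap in the posted-interrupt descriptor (the helper and field names are illustrative):

  static int pi_find_highest_vector(struct pi_desc *pi_desc)
  {
          int vec;

          /* find_last_bit() returns the bitmap size (256) when no bit is set. */
          vec = find_last_bit((unsigned long *)pi_desc->pir, 256);
          return vec < 256 ? vec : -1;
  }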
2024-06-28  KVM: VMX: Switch __vmx_exit() and kvm_x86_vendor_exit() in vmx_exit()  (Kai Huang; 1 file, -1/+1)
In the vmx_init() error handling path, __vmx_exit() is done before kvm_x86_vendor_exit(). The same order should be followed in vmx_exit(), but currently __vmx_exit() is done after kvm_x86_vendor_exit() there. Switch their order to fix this. Fixes: e32b120071ea ("KVM: VMX: Do _all_ initialization before exposing /dev/kvm to userspace") Signed-off-by: Kai Huang <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: VMX: Remove unnecessary INVEPT[GLOBAL] from hardware enable path  (Sean Christopherson; 1 file, -3/+0)
Remove the completely pointless global INVEPT, i.e. EPT TLB flush, from KVM's VMX enablement path. KVM always does a targeted TLB flush when using a "new" EPT root, in quotes because "new" simply means a root that isn't currently being used by the vCPU. KVM also _deliberately_ runs with stale TLB entries for defunct roots, i.e. doesn't do a TLB flush when vCPUs stop using roots, precisely because KVM does the flush on first use. As called out by the comment in kvm_mmu_load(), the reason KVM flushes on first use is because KVM can't guarantee the correctness of past hypervisors. Jumping back to the global INVEPT, when the painfully terse commit 1439442c7b25 ("KVM: VMX: Enable EPT feature for KVM") was added, the effective TLB flush being performed was:

  static void vmx_flush_tlb(struct kvm_vcpu *vcpu)
  {
          vpid_sync_vcpu_all(to_vmx(vcpu));
  }

I.e. KVM was not flushing EPT TLB entries when allocating a "new" root, which very strongly suggests that the global INVEPT during hardware enabling was a misguided hack that addressed the most obvious symptom, but failed to fix the underlying bug. Reviewed-by: Paolo Bonzini <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: nVMX: Update VMCS12_REVISION comment to state it should never change  (Sean Christopherson; 1 file, -6/+8)
Rewrite the comment above VMCS12_REVISION to unequivocally state that the ID must never change. KVM_{G,S}ET_NESTED_STATE have been officially supported for some time now, i.e. changing VMCS12_REVISION would break userspace. Opportunistically add a blurb to the CHECK_OFFSET() comment to make it explicitly clear that new fields are allowed, i.e. that the restriction on the layout is all about backwards compatibility. No functional change intended. Cc: Jim Mattson <[email protected]> Reviewed-by: Jim Mattson <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  randomize_kstack: Remove non-functional per-arch entropy filtering  (Kees Cook; 1 file, -9/+6)
An unintended consequence of commit 9c573cd31343 ("randomize_kstack: Improve entropy diffusion") was that the per-architecture entropy size filtering reduced how many bits were being added to the mix, rather than how many bits were being used during the offsetting. All architectures fell back to the existing default of 0x3FF (10 bits), which will consume at most 1KiB of stack space. It seems that this is working just fine, so let's avoid the confusion and update everything to use the default. The prior intent of the per-architecture limits was:

  arm64:   capped at 0x1FF (9 bits), 5 bits effective
  powerpc: uncapped (10 bits), 6 or 7 bits effective
  riscv:   uncapped (10 bits), 6 bits effective
  x86:     capped at 0xFF (8 bits), 5 (x86_64) or 6 (ia32) bits effective
  s390:    capped at 0xFF (8 bits), undocumented effective entropy

Current discussion has led to just dropping the original per-architecture filters. The additional entropy appears to be safe for arm64, x86, and s390. Quoting Arnd, "There is no point pretending that 15.75KB is somehow safe to use while 15.00KB is not." Co-developed-by: Yuntao Liu <[email protected]> Signed-off-by: Yuntao Liu <[email protected]> Fixes: 9c573cd31343 ("randomize_kstack: Improve entropy diffusion") Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Arnd Bergmann <[email protected]> Acked-by: Mark Rutland <[email protected]> Acked-by: Heiko Carstens <[email protected]> # s390 Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
2024-06-28  KVM: SVM: Use sev_es_host_save_area() helper when initializing tsc_aux  (Sean Christopherson; 1 file, -9/+6)
Use sev_es_host_save_area() instead of open coding an equivalent when setting the MSR_TSC_AUX field during setup. No functional change intended. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: SVM: Force sev_es_host_save_area() to be inlined (for noinstr usage)  (Sean Christopherson; 1 file, -1/+1)
Force sev_es_host_save_area() to be always inlined, as it's used in the low level VM-Enter/VM-Exit path, which is non-instrumentable. vmlinux.o: warning: objtool: svm_vcpu_enter_exit+0xb0: call to sev_es_host_save_area() leaves .noinstr.text section vmlinux.o: warning: objtool: svm_vcpu_enter_exit+0xbf: call to sev_es_host_save_area.isra.0() leaves .noinstr.text section Fixes: c92be2fd8edf ("KVM: SVM: Save/restore non-volatile GPRs in SEV-ES VMRUN via host save area") Reported-by: Borislav Petkov <[email protected]> Tested-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: x86: Add missing MODULE_DESCRIPTION() macros  (Jeff Johnson; 2 files, -0/+2)
Add module descriptions for the vendor modules to fix allmodconfig 'make W=1' warnings: WARNING: modpost: missing MODULE_DESCRIPTION() in arch/x86/kvm/kvm-intel.o WARNING: modpost: missing MODULE_DESCRIPTION() in arch/x86/kvm/kvm-amd.o Signed-off-by: Jeff Johnson <[email protected]> Link: https://lore.kernel.org/r/[email protected] [sean: split kvm.ko change to separate commit] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  KVM: Validate hva in kvm_gpc_activate_hva() to fix __kvm_gpc_refresh() WARN  (Pei Li; 1 file, -1/+1)
Check that the virtual address is "ok" when activating a gfn_to_pfn_cache with a host VA to ensure that KVM never attempts to use a bad address. This fixes a bug where KVM fails to check the incoming address when handling KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO_HVA in kvm_xen_vcpu_set_attr(). Reported-by: [email protected] Closes: https://syzkaller.appspot.com/bug?extid=fd555292a1da3180fc82 Tested-by: [email protected] Signed-off-by: Pei Li <[email protected]> Reviewed-by: Paul Durrant <[email protected]> Reviewed-by: David Woodhouse <[email protected]> Link: https://lore.kernel.org/r/[email protected] [sean: rewrite changelog with --verbose] Signed-off-by: Sean Christopherson <[email protected]>
2024-06-28  x86/bugs: Add 'spectre_bhi=vmexit' cmdline option  (Josh Poimboeuf; 1 file, -5/+11)
In cloud environments it can be useful to *only* enable the vmexit mitigation and leave syscalls vulnerable. Add that as an option. This is similar to the old spectre_bhi=auto option which was removed with the following commit: 36d4fe147c87 ("x86/bugs: Remove CONFIG_BHI_MITIGATION_AUTO and spectre_bhi=auto") with the main difference being that this has a more descriptive name and is disabled by default. Mitigation switch requested by Maksim Davydov <[email protected]>. [ bp: Massage. ] Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Daniel Sneddon <[email protected]> Reviewed-by: Nikolay Borisov <[email protected]> Link: https://lore.kernel.org/r/2cbad706a6d5e1da2829e5e123d8d5c80330148c.1719381528.git.jpoimboe@kernel.org
2024-06-28  x86/syscall: Mark exit[_group] syscall handlers __noreturn  (Josh Poimboeuf; 7 files, -23/+36)
The direct-call syscall dispatch function doesn't know that the exit() and exit_group() syscall handlers don't return, so the call sites aren't optimized accordingly. Fix that by marking the exit syscall declarations __noreturn. Fixes the following warnings: vmlinux.o: warning: objtool: x64_sys_call+0x2804: __x64_sys_exit() is missing a __noreturn annotation vmlinux.o: warning: objtool: ia32_sys_call+0x29b6: __ia32_sys_exit_group() is missing a __noreturn annotation Fixes: 1e3ad78334a6 ("x86/syscall: Don't force use of indirect calls for system calls") Closes: https://lkml.kernel.org/lkml/6dba9b32-db2c-4e6d-9500-7a08852f17a3@paulmck-laptop Reported-by: Paul E. McKenney <[email protected]> Signed-off-by: Josh Poimboeuf <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Tested-by: Paul E. McKenney <[email protected]> Link: https://lore.kernel.org/r/5d8882bc077d8eadcc7fd1740b56dfb781f12288.1719381528.git.jpoimboe@kernel.org
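A sketch of the annotation being added, using the handler names from the objtool warnings above (the declaration style is illustrative):

  /* Tell the compiler these syscall handlers never return. */
  __noreturn long __x64_sys_exit(const struct pt_regs *regs);
  __noreturn long __ia32_sys_exit_group(const struct pt_regs *regs);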
2024-06-27  Merge tag 'amd-pstate-v6.11-2024-06-26' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/superm1/linux  (Rafael J. Wysocki; 1 file, -0/+2)
Merge more amd-pstate driver updates for 6.11 from Mario Limonciello: "Add support for amd-pstate core performance boost support which allows controlling which CPU cores can operate above nominal frequencies for short periods of time."
* tag 'amd-pstate-v6.11-2024-06-26' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/superm1/linux:
  Documentation: cpufreq: amd-pstate: update doc for Per CPU boost control method
  cpufreq: amd-pstate: Cap the CPPC.max_perf to nominal_perf if CPB is off
  cpufreq: amd-pstate: initialize core precision boost state
  cpufreq: acpi: move MSR_K7_HWCR_CPB_DIS_BIT into msr-index.h
2024-06-27  Merge back cpufreq material for v6.11.  (Rafael J. Wysocki; 2 files, -0/+2)
2024-06-26  cpufreq: acpi: move MSR_K7_HWCR_CPB_DIS_BIT into msr-index.h  (Perry Yuan; 1 file, -0/+2)
Some other drivers also need to use MSR_K7_HWCR_CPB_DIS_BIT for the CPB control bit, so it makes sense to move the definition to a common header file to allow other drivers to use it. No intentional functional impact. Suggested-by: Gautham Ranjal Shenoy <[email protected]> Signed-off-by: Perry Yuan <[email protected]> Acked-by: Rafael J. Wysocki <[email protected]> Acked-by: Huang Rui <[email protected]> Link: https://lore.kernel.org/r/78b6c75e6cffddce3e950dd543af6ae9f8eeccc3.1718988436.git.perry.yuan@amd.com Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mario Limonciello <[email protected]>
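A hedged illustration of how the shared definition would typically be used (the call site below is made up; rdmsrl(), MSR_K7_HWCR and BIT_ULL() are standard kernel definitions):

  u64 val;

  rdmsrl(MSR_K7_HWCR, val);
  if (val & BIT_ULL(MSR_K7_HWCR_CPB_DIS_BIT))
          pr_info("core performance boost is disabled\n");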
2024-06-25  x86/vmware: Add TDX hypercall support  (Alexey Makhalov; 2 files, -0/+97)
VMware hypercalls use I/O port, VMCALL or VMMCALL instructions. Add a call to __tdx_hypercall() in order to support TDX guests. No change in high bandwidth hypercalls, as only low bandwidth ones are supported for TDX guests. [ bp: Massage, clear on-stack struct tdx_module_args variable. ] Co-developed-by: Tim Merrifield <[email protected]> Signed-off-by: Tim Merrifield <[email protected]> Signed-off-by: Alexey Makhalov <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-06-25  x86/vmware: Remove legacy VMWARE_HYPERCALL* macros  (Alexey Makhalov; 1 file, -26/+0)
No more direct use of these macros should be allowed. The vmware_hypercallX API still uses the new implementation of the VMWARE_HYPERCALL macro internally, but it is not exposed outside of vmware.h. Signed-off-by: Alexey Makhalov <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-06-25  x86/vmware: Correct macro names  (Alexey Makhalov; 1 file, -4/+4)
VCPU_RESERVED and LEGACY_X2APIC are not VMware hypercall commands; they are bits in the return value of the VMWARE_CMD_GETVCPU_INFO command. Change the VMWARE_CMD_ prefix to GETVCPU_INFO_ and move the bit-shift operation into the macro body. Fixes: 4cca6ea04d31c ("x86/apic: Allow x2apic without IR on VMware platform") Signed-off-by: Alexey Makhalov <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-06-25  x86/vmware: Use VMware hypercall API  (Alexey Makhalov; 1 file, -70/+25)
Remove VMWARE_CMD macro and move to vmware_hypercall API. No functional changes intended. Use u32/u64 instead of uint32_t/uint64_t across the file. Signed-off-by: Alexey Makhalov <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-06-25  x86/vmware: Introduce VMware hypercall API  (Alexey Makhalov; 2 files, -22/+327)
Introduce a vmware_hypercall family of functions. It is a common implementation to be used by the VMware guest code and virtual device drivers in an architecture-independent manner. The API consists of the vmware_hypercallX and vmware_hypercall_hb_{out,in} sets of functions, analogous to KVM's hypercall API. The architecture-specific implementation is hidden inside. This will simplify future enhancements to VMware hypercalls, such as SEV-ES and TDX related changes, without the need to modify callers in device driver code. The current implementation extends an idea from bac7b4e84323 ("x86/vmware: Update platform detection code for VMCALL/VMMCALL hypercalls") to have a slow, but safe path vmware_hypercall_slow() earlier during boot when alternatives are not yet applied. The code inherits the VMWARE_CMD logic from the commit mentioned above. Move common macros from vmware.c to vmware.h. [ bp: Fold in a fix: https://lore.kernel.org/r/[email protected] ] Signed-off-by: Alexey Makhalov <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-06-25  syscalls: fix compat_sys_io_pgetevents_time64 usage  (Arnd Bergmann; 1 file, -1/+1)
Using sys_io_pgetevents() as the entry point for compat mode tasks works almost correctly, but misses the sign extension for the min_nr and nr arguments. This was addressed on parisc by switching to compat_sys_io_pgetevents_time64() in commit 6431e92fc827 ("parisc: io_pgetevents_time64() needs compat syscall in 32-bit compat mode"), as well as by using more sophisticated system call wrappers on x86 and s390. However, arm64, mips, powerpc, sparc and riscv still have the same bug. Change all of them over to use compat_sys_io_pgetevents_time64() like parisc already does. This was clearly the intention when the function was originally added, but it got hooked up incorrectly in the tables. Cc: [email protected] Fixes: 48166e6ea47d ("y2038: add 64-bit time_t syscalls to all 32-bit architectures") Acked-by: Heiko Carstens <[email protected]> # s390 Signed-off-by: Arnd Bergmann <[email protected]>
2024-06-25  x86/kmsan: Fix hook for unaligned accesses  (Brian Johannesmeyer; 1 file, -1/+4)
When called with a 'from' that is not 4-byte-aligned, string_memcpy_fromio() calls the movs() macro to copy the first few bytes, so that 'from' becomes 4-byte-aligned before calling rep_movs(). This movs() macro modifies 'to', and the subsequent line modifies 'n'. As a result, on unaligned accesses, kmsan_unpoison_memory() uses the updated (aligned) values of 'to' and 'n'. Hence, it does not unpoison the entire region. Save the original values of 'to' and 'n', and pass those to kmsan_unpoison_memory(), so that the entire region is unpoisoned. Signed-off-by: Brian Johannesmeyer <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Alexander Potapenko <[email protected]> Link: https://lore.kernel.org/r/[email protected]
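A minimal sketch of the fix described, with the body of the copy elided (only the saved-original idea and kmsan_unpoison_memory() come from the text above; the rest is illustrative):

  static void string_memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
  {
          void *orig_to = to;     /* save before the alignment prologue mutates 'to' */
          size_t orig_n = n;

          /* ... movs() alignment prologue and rep_movs() adjust to/from/n ... */

          kmsan_unpoison_memory(orig_to, orig_n);
  }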
2024-06-24  x86/platform/iosf_mbi: Convert PCIBIOS_* return codes to errnos  (Ilpo Järvinen; 1 file, -2/+2)
iosf_mbi_pci_{read,write}_mdr() use pci_{read,write}_config_dword(), which return PCIBIOS_* codes, but the functions also return -ENODEV, which is not a compatible error code. As neither of the functions is itself a PCI read/write function, they should return normal errnos. Convert the PCIBIOS_* return codes into normal errnos using pcibios_err_to_errno() before returning them. Fixes: 46184415368a ("arch: x86: New MailBox support driver for Intel SOC's") Signed-off-by: Ilpo Järvinen <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-06-24  x86/pci/xen: Fix PCIBIOS_* return code handling  (Ilpo Järvinen; 1 file, -2/+2)
xen_pcifront_enable_irq() uses pci_read_config_byte() that returns PCIBIOS_* codes. The error handling, however, assumes the codes are normal errnos because it checks for < 0. xen_pcifront_enable_irq() also returns the PCIBIOS_* code back to the caller but the function is used as the (*pcibios_enable_irq) function which should return normal errnos. Convert the error check to plain non-zero check which works for PCIBIOS_* return codes and convert the PCIBIOS_* return code using pcibios_err_to_errno() into normal errno before returning it. Fixes: 3f2a230caf21 ("xen: handled remapped IRQs when enabling a pcifront PCI device.") Signed-off-by: Ilpo Järvinen <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Link: https://lore.kernel.org/r/[email protected]
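The same conversion pattern applies here and in the following similar fixes; a hedged illustration, not the actual hunk (the config register and variable names are made up):

  u8 line;
  int rc;

  rc = pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &line);
  if (rc)         /* PCIBIOS_* codes are positive, so check for non-zero */
          return pcibios_err_to_errno(rc);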
2024-06-24  x86/pci/intel_mid_pci: Fix PCIBIOS_* return code handling  (Ilpo Järvinen; 1 file, -2/+2)
intel_mid_pci_irq_enable() uses pci_read_config_byte() that returns PCIBIOS_* codes. The error handling, however, assumes the codes are normal errnos because it checks for < 0. intel_mid_pci_irq_enable() also returns the PCIBIOS_* code back to the caller but the function is used as the (*pcibios_enable_irq) function which should return normal errnos. Convert the error check to plain non-zero check which works for PCIBIOS_* return codes and convert the PCIBIOS_* return code using pcibios_err_to_errno() into normal errno before returning it. Fixes: 5b395e2be6c4 ("x86/platform/intel-mid: Make IRQ allocation a bit more flexible") Signed-off-by: Ilpo Järvinen <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-06-24  x86/of: Return consistent error type from x86_of_pci_irq_enable()  (Ilpo Järvinen; 1 file, -1/+1)
x86_of_pci_irq_enable() returns PCIBIOS_* code received from pci_read_config_byte() directly and also -EINVAL which are not compatible error types. x86_of_pci_irq_enable() is used as (*pcibios_enable_irq) function which should not return PCIBIOS_* codes. Convert the PCIBIOS_* return code from pci_read_config_byte() into normal errno using pcibios_err_to_errno(). Fixes: 96e0a0797eba ("x86: dtb: Add support for PCI devices backed by dtb nodes") Signed-off-by: Ilpo Järvinen <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-06-24  clocksource: hyper-v: Use lapic timer in a TDX VM without paravisor  (Dexuan Cui; 1 file, -1/+15)
In a TDX VM without paravisor, currently the default timer is the Hyper-V timer, which depends on the slow VM Reference Counter MSR: the Hyper-V TSC page is not enabled in such a VM because the VM uses Invariant TSC as a better clocksource and it's challenging to mark the Hyper-V TSC page shared in very early boot. Lower the rating of the Hyper-V timer so the local APIC timer becomes the default timer in such a VM, and print a warning in case Invariant TSC is unavailable in such a VM. This change should cause no perceivable performance difference. Cc: [email protected] # 6.6+ Reviewed-by: Roman Kisel <[email protected]> Signed-off-by: Dexuan Cui <[email protected]> Reviewed-by: Michael Kelley <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Wei Liu <[email protected]> Message-ID: <[email protected]>
2024-06-23  Merge tag 'x86_urgent_for_v6.10_rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -1/+2)
Pull x86 fixes from Borislav Petkov:
 - An ARM-relevant fix to not free default RMIDs of a resource control group
 - A randconfig build fix for the VMware virtual GPU driver
* tag 'x86_urgent_for_v6.10_rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/resctrl: Don't try to free nonexistent RMIDs
  drm/vmwgfx: Fix missing HYPERVISOR_GUEST dependency
2024-06-22  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds; 2 files, -7/+6)
Pull kvm fixes from Paolo Bonzini:
 "ARM:
  - Fix dangling references to a redistributor region if the vgic was prematurely destroyed.
  - Properly mark FFA buffers as released, ensuring that both parties can make forward progress.
 x86:
  - Allow getting/setting MSRs for SEV-ES guests, if they're using the pre-6.9 KVM_SEV_ES_INIT API.
  - Always sync pending posted interrupts to the IRR prior to IOAPIC route updates, so that EOIs are intercepted properly if the old routing table requested that.
 Generic:
  - Avoid __fls(0)
  - Fix reference leak on hwpoisoned page
  - Fix a race in kvm_vcpu_on_spin() by ensuring loads and stores are atomic.
  - Fix bug in __kvm_handle_hva_range() where KVM calls a function pointer that was intended to be a marker only (nothing bad happens but kind of a mine and also technically undefined behavior)
  - Do not bother accounting allocations that are small and freed before getting back to userspace.
 Selftests:
  - Fix compilation for RISC-V.
  - Fix a "shift too big" goof in the KVM_SEV_INIT2 selftest.
  - Compute the max mappable gfn for KVM selftests on x86 using GuestMaxPhyAddr from KVM's supported CPUID (if it's available)"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: SEV-ES: Fix svm_get_msr()/svm_set_msr() for KVM_SEV_ES_INIT guests
  KVM: Discard zero mask with function kvm_dirty_ring_reset
  virt: guest_memfd: fix reference leak on hwpoisoned page
  kvm: do not account temporary allocations to kmem
  MAINTAINERS: Drop Wanpeng Li as a Reviewer for KVM Paravirt support
  KVM: x86: Always sync PIR to IRR prior to scanning I/O APIC routes
  KVM: Stop processing *all* memslots when "null" mmu_notifier handler is found
  KVM: arm64: FFA: Release hyp rx buffer
  KVM: selftests: Fix RISC-V compilation
  KVM: arm64: Disassociate vcpus from redistributor region on teardown
  KVM: Fix a data race on last_boosted_vcpu in kvm_vcpu_on_spin()
  KVM: selftests: x86: Prioritize getting max_gfn from GuestPhysBits
  KVM: selftests: Fix shift of 32 bit unsigned int more than 32 bits
2024-06-21  KVM: SEV-ES: Fix svm_get_msr()/svm_set_msr() for KVM_SEV_ES_INIT guests  (Michael Roth; 1 file, -2/+2)
With commit 27bd5fdc24c0 ("KVM: SEV-ES: Prevent MSR access post VMSA encryption"), older VMMs like QEMU 9.0 and older will fail when booting SEV-ES guests with something like the following error:

  qemu-system-x86_64: error: failed to get MSR 0x174
  qemu-system-x86_64: ../qemu.git/target/i386/kvm/kvm.c:3950: kvm_get_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.

This is because older VMMs might still call svm_get_msr()/svm_set_msr() for SEV-ES guests after guest boot, even though those interfaces were essentially just noops because the vCPU state is encrypted and stored separately in the VMSA. Now those VMMs will get an -EINVAL and generally crash. Newer VMMs that are aware of KVM_SEV_INIT2, however, are already aware of the stricter limitations on what vCPU state can be sync'd during guest run-time, so newer QEMU for instance will work both for the legacy KVM_SEV_ES_INIT interface as well as KVM_SEV_INIT2. So when using KVM_SEV_INIT2 it's okay to assume userspace can deal with -EINVAL, whereas for legacy KVM_SEV_ES_INIT the kernel might be dealing with an older VMM and so it needs to assume that returning -EINVAL might break the VMM. Address this by only returning -EINVAL if the guest was started with KVM_SEV_INIT2. Otherwise, just silently return. Cc: Ravi Bangoria <[email protected]> Cc: Nikunj A Dadhania <[email protected]> Reported-by: Srikanth Aithal <[email protected]> Closes: https://lore.kernel.org/lkml/37usuu4yu4ok7be2hqexhmcyopluuiqj3k266z4gajc2rcj4yo@eujb23qc3zcm/ Fixes: 27bd5fdc24c0 ("KVM: SEV-ES: Prevent MSR access post VMSA encryption") Signed-off-by: Michael Roth <[email protected]> Message-ID: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2024-06-20  bpf: remove unused parameter in bpf_jit_binary_pack_finalize  (Rafael Passos; 1 file, -2/+2)
Fixes a compiler warning. The bpf_jit_binary_pack_finalize function was taking an extra bpf_prog parameter that went unused. This removes it and updates the callers accordingly. Signed-off-by: Rafael Passos <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2024-06-20  bpf, x64: Remove tail call detection  (Leon Hwang; 1 file, -9/+2)
As 'prog->aux->tail_call_reachable' correctly reflects whether a tail call is present, it's unnecessary to detect tail calls in the x86 JIT. Therefore, let's remove the detection code. Signed-off-by: Leon Hwang <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
2024-06-20  KVM: x86/tdp_mmu: Take a GFN in kvm_tdp_mmu_fast_pf_get_last_sptep()  (Rick Edgecombe; 3 files, -4/+3)
Pass fault->gfn into kvm_tdp_mmu_fast_pf_get_last_sptep(), instead of passing fault->addr and then converting it to a GFN. Future changes will make fault->addr and fault->gfn differ when running TDX guests. The GFN will be conceptually the same as it is for normal VMs, but fault->addr may contain a TDX specific bit that differentiates between "shared" and "private" memory. This bit will be used to direct faults to be handled on different roots, either the normal "direct" root or a new type of root that handles private memory. The TDP iterators will process the traditional GFN concept and apply the required TDX specifics depending on the root type. For this reason, they need to operate on a regular GFN and not the addr, which may contain these special TDX specific bits. Today kvm_tdp_mmu_fast_pf_get_last_sptep() takes fault->addr and then immediately converts it to a GFN with a bit shift. However, this would unfortunately retain the TDX specific bits in what is supposed to be a traditional GFN. Excluding TDX's needs, it is also unnecessary to pass fault->addr and convert it to a GFN when the GFN is already on hand. So instead just pass the GFN into kvm_tdp_mmu_fast_pf_get_last_sptep() and use it directly. Signed-off-by: Rick Edgecombe <[email protected]> Message-ID: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
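A minimal sketch of the call-site change described (the surrounding fault-handler code is illustrative; only the helper name and the fault->addr/fault->gfn fields come from the text above):

  /* Before: the helper derived the GFN from the faulting address itself. */
  sptep = kvm_tdp_mmu_fast_pf_get_last_sptep(vcpu, fault->addr, &spte);

  /* After: pass the GFN that is already on hand. */
  sptep = kvm_tdp_mmu_fast_pf_get_last_sptep(vcpu, fault->gfn, &spte);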