path: root/arch/x86/kernel/paravirt.c
Age  Commit message  (Author, files changed, lines -/+)
2024-08-07  x86/paravirt: Fix incorrect virt spinlock setting on bare metal  (Chen Yu, 1 file, -4/+3)
The kernel can change spinlock behavior when running as a guest, but this guest-friendly behavior causes performance problems on bare metal. The kernel uses a static key to switch between the two modes.

In theory, the static key is enabled by default (run in guest mode) and should be disabled on bare metal (and in some guests that want native behavior or paravirt spinlocks).

A performance drop was reported when running an encode/decode workload and the BenchSEE cache sub-workload. Bisection points to commit ce0a1b608bfc ("x86/paravirt: Silence unused native_pv_lock_init() function warning"): when CONFIG_PARAVIRT_SPINLOCKS is disabled, virt_spin_lock_key is incorrectly set to true on bare metal, so the qspinlock degenerates to a test-and-set spinlock and performance drops.

Set the default value of virt_spin_lock_key to false. If booting in a VM, enable the key. Later, during VM initialization, if another, more efficient spinlock implementation is preferred (e.g. paravirt spinlocks), or the user wants the native qspinlock (via the 'nopvspin' boot command line option), virt_spin_lock_key is disabled accordingly.

This results in the following decision matrix:

  X86_FEATURE_HYPERVISOR     Y    Y    Y    N
  CONFIG_PARAVIRT_SPINLOCKS  Y    Y    N    Y/N
  PV spinlock                Y    N    N    Y/N
  virt_spin_lock_key         N    Y/N  Y    N

Fixes: ce0a1b608bfc ("x86/paravirt: Silence unused native_pv_lock_init() function warning")
Reported-by: Prem Nath Dey <[email protected]>
Reported-by: Xiaoping Zhou <[email protected]>
Suggested-by: Dave Hansen <[email protected]>
Suggested-by: Qiuxu Zhuo <[email protected]>
Suggested-by: Nikolay Borisov <[email protected]>
Signed-off-by: Chen Yu <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Nikolay Borisov <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/all/[email protected]
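A minimal sketch of the resulting logic (illustrative, not the literal diff; it assumes the upstream names virt_spin_lock_key and native_pv_lock_init() and the usual static-key helpers):

    #include <linux/jump_label.h>
    #include <asm/cpufeature.h>

    DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);

    void __init native_pv_lock_init(void)
    {
            /* Only guests ever want the test-and-set fallback. */
            if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
                    static_branch_enable(&virt_spin_lock_key);
    }

Guest-side init code can later switch the key off again when paravirt spinlocks or 'nopvspin' take over; bare metal never enables it in the first place.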
2023-12-10  x86/paravirt: Remove no longer needed paravirt patching code  (Juergen Gross, 1 file, -30/+0)
Now that paravirt is using the alternatives patching infrastructure, remove the paravirt patching code. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2023-12-10  x86/paravirt: Move some functions and defines to alternative.c  (Juergen Gross, 1 file, -21/+9)
As a preparation for replacing paravirt patching completely by alternative patching, move some backend functions and #defines to the alternatives code and header. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2023-09-19  x86/xen: move paravirt lazy code  (Juergen Gross, 1 file, -67/+0)
Only Xen is using the paravirt lazy mode code, so it can be moved to Xen specific sources. This allows to make some of the functions static or to merge them into their only call sites. While at it do a rename from "paravirt" to "xen" for all moved specifiers. No functional change. Signed-off-by: Juergen Gross <[email protected]> Reviewed-by: Boris Ostrovsky <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Juergen Gross <[email protected]>
2023-08-28  Merge tag 'x86-cleanups-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -1/+2)
Pull misc x86 cleanups from Ingo Molnar:
 "The following commit deserves special mention:

    22dc02f81cddd Revert "sched/fair: Move unused stub functions to header"

  This is in x86/cleanups, because the revert is a re-application of a number of cleanups that got removed inadvertently"

[ This also effectively undoes the amd_check_microcode() microcode declaration change I had done in my microcode loader merge in commit 42a7f6e3ffe0 ("Merge tag 'x86_microcode_for_v6.6_rc1' [...]"). I picked the declaration change by Arnd from this branch instead, which put it in <asm/processor.h> instead of <asm/microcode.h> like I had done in my merge resolution - Linus ]

* tag 'x86-cleanups-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/platform/uv: Refactor code using deprecated strncpy() interface to use strscpy()
  x86/hpet: Refactor code using deprecated strncpy() interface to use strscpy()
  x86/platform/uv: Refactor code using deprecated strcpy()/strncpy() interfaces to use strscpy()
  x86/qspinlock-paravirt: Fix missing-prototype warning
  x86/paravirt: Silence unused native_pv_lock_init() function warning
  x86/alternative: Add a __alt_reloc_selftest() prototype
  x86/purgatory: Include header for warn() declaration
  x86/asm: Avoid unneeded __div64_32 function definition
  Revert "sched/fair: Move unused stub functions to header"
  x86/apic: Hide unused safe_smp_processor_id() on 32-bit UP
  x86/cpu: Fix amd_check_microcode() declaration
2023-08-03  x86/paravirt: Fix tlb_remove_table function callback prototype warning  (Kees Cook, 1 file, -2/+6)
Under W=1, this warning is visible in Clang 16 and newer:

  arch/x86/kernel/paravirt.c:337:4: warning: cast from 'void (*)(struct mmu_gather *, struct page *)' to 'void (*)(struct mmu_gather *, void *)' converts to incompatible function type [-Wcast-function-type-strict]
                  (void (*)(struct mmu_gather *, void *))tlb_remove_page,
                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Add a direct wrapper instead, which will make this warning (and potential KCFI failures) go away.

Reported-by: kernel test robot <[email protected]>
Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
Cc: Juergen Gross <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Sami Tolvanen <[email protected]>
Cc: Nathan Chancellor <[email protected]>
Cc: Ajay Kaher <[email protected]>
Cc: Alexey Makhalov <[email protected]>
Cc: VMware PV-Drivers Reviewers <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: [email protected]
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Kees Cook <[email protected]>
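The shape of the fix, roughly (a sketch assuming the wrapper simply forwards to tlb_remove_page() with the prototype the paravirt hook expects):

    static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
    {
            /* Same behaviour as the old cast, but with a compatible prototype. */
            tlb_remove_page(tlb, table);
    }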
2023-08-03  x86/paravirt: Silence unused native_pv_lock_init() function warning  (Arnd Bergmann, 1 file, -1/+2)
The native_pv_lock_init() function is only used in SMP configurations and declared in asm/qspinlock.h which is not used in UP kernels, but the function is still defined for both, which causes a warning:

  arch/x86/kernel/paravirt.c:76:13: error: no previous prototype for 'native_pv_lock_init' [-Werror=missing-prototypes]

Move the declaration to asm/paravirt.h so it is visible even with CONFIG_SMP but short-circuit the definition to turn it into an empty function.

Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-03-17  x86/paravirt: Convert simple paravirt functions to asm  (Juergen Gross, 1 file, -21/+6)
All functions referenced via __PV_IS_CALLEE_SAVE() need to be assembler functions, as those function calls are hidden from the compiler.

In case the kernel is compiled with "-fzero-call-used-regs", the compiler will clobber caller-saved registers at the end of C functions, which will result in unexpectedly zeroed registers at the call site of the related paravirt functions.

Replace the C functions with DEFINE_PARAVIRT_ASM() constructs using the same instructions as the related paravirt calls in the PVOP_ALT_[V]CALLEE*() macros. And since they're not C functions visible to the compiler anymore, the latter won't do the callee-clobbered zeroing invoked by -fzero-call-used-regs and thus won't corrupt registers.

[ bp: Extend commit message. ]

Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
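For example, a trivial flags-reading helper ends up as a pure asm stub along these lines (illustrative; the exact instruction string and section used by the patch may differ):

    DEFINE_PARAVIRT_ASM(pv_native_save_fl, "pushf; pop %rax", .noinstr.text);

Because the body is emitted by the assembler, -fzero-call-used-regs never sees it and cannot append a register-clearing epilogue.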
2023-03-06  x86/paravirt: Merge activate_mm() and dup_mmap() callbacks  (Juergen Gross, 1 file, -2/+1)
The two paravirt callbacks .mmu.activate_mm() and .mmu.dup_mmap() are sharing the same implementations in all cases: for Xen PV guests they are pinning the PGD of the new mm_struct, and for all other cases they are a NOP. In the end, both callbacks are meant to register an address space with the underlying hypervisor, so there needs to be only a single callback for that purpose. So merge them to a common callback .mmu.enter_mmap() (in contrast to the corresponding already existing .mmu.exit_mmap()). As the first parameter of the old callbacks isn't used, drop it from the replacement one. [ bp: Remove last occurrence of paravirt_activate_mm() in asm/mmu_context.h ] Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Boris Ostrovsky <[email protected]> Reviewed-by: Srivatsa S. Bhat (VMware) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2023-02-21  Merge tag 'x86_cpu_for_v6.3_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -0/+1)
Pull x86 cpuid updates from Borislav Petkov:

 - Cache the AMD debug registers in per-CPU variables to avoid MSR writes where possible, when supporting a debug registers swap feature for SEV-ES guests

 - Add support for AMD's version of eIBRS called Automatic IBRS which is a set-and-forget control of indirect branch restriction speculation resources on privilege change

 - Add support for a new x86 instruction - LKGS - Load kernel GS which is part of the FRED infrastructure

 - Reset SPEC_CTRL upon init to accommodate use cases like kexec which rediscover

 - Other smaller fixes and cleanups

* tag 'x86_cpu_for_v6.3_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/amd: Cache debug register values in percpu variables
  KVM: x86: Propagate the AMD Automatic IBRS feature to the guest
  x86/cpu: Support AMD Automatic IBRS
  x86/cpu, kvm: Add the SMM_CTL MSR not present feature
  x86/cpu, kvm: Add the Null Selector Clears Base feature
  x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf
  x86/cpu, kvm: Add the NO_NESTED_DATA_BP feature
  KVM: x86: Move open-coded CPUID leaf 0x80000021 EAX bit propagation code
  x86/cpu, kvm: Add support for CPUID_80000021_EAX
  x86/gsseg: Add the new <asm/gsseg.h> header to <asm/asm-prototypes.h>
  x86/gsseg: Use the LKGS instruction if available for load_gs_index()
  x86/gsseg: Move load_gs_index() to its own new header file
  x86/gsseg: Make asm_load_gs_index() take an u16
  x86/opcode: Add the LKGS instruction to x86-opcode-map
  x86/cpufeature: Add the CPU feature bit for LKGS
  x86/bugs: Reset speculation control settings on init
  x86/cpu: Remove redundant extern x86_read_arch_cap_msr()
2023-01-13  cpuidle, xenpv: Make more PARAVIRT_XXL noinstr clean  (Peter Zijlstra, 1 file, -2/+12)
objtool found a few cases where this code called out into instrumented code:

  vmlinux.o: warning: objtool: acpi_idle_enter_s2idle+0xde: call to wbinvd() leaves .noinstr.text section
  vmlinux.o: warning: objtool: default_idle+0x4: call to arch_safe_halt() leaves .noinstr.text section
  vmlinux.o: warning: objtool: xen_safe_halt+0xa: call to HYPERVISOR_sched_op.constprop.0() leaves .noinstr.text section

Solve this by:

 - marking arch_safe_halt(), wbinvd(), native_wbinvd() and HYPERVISOR_sched_op() as __always_inline().

 - Explicitly uninlining xen_safe_halt() and pv_native_wbinvd() [they were already uninlined by the compiler on use as function pointers] and annotating them as 'noinstr'.

 - Annotating pv_native_safe_halt() as 'noinstr'.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Tested-by: Tony Lindgren <[email protected]>
Tested-by: Ulf Hansson <[email protected]>
Reviewed-by: Srivatsa S. Bhat (VMware) <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Acked-by: Rafael J. Wysocki <[email protected]>
Acked-by: Frederic Weisbecker <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
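A sketch of what the last bullet amounts to (illustrative, using the backend name given above):

    static noinstr void pv_native_safe_halt(void)
    {
            native_safe_halt();     /* STI; HLT */
    }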
2023-01-12  x86/gsseg: Move load_gs_index() to its own new header file  (H. Peter Anvin (Intel), 1 file, -0/+1)
GS is a special segment on x86_64, move load_gs_index() to its own new header file to simplify header inclusion. No change in functionality. Signed-off-by: H. Peter Anvin (Intel) <[email protected]> Signed-off-by: Xin Li <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-11-24  x86/paravirt: Use common macro for creating simple asm paravirt functions  (Juergen Gross, 1 file, -21/+2)
There are some paravirt assembler functions which share a common pattern. Introduce a macro DEFINE_PARAVIRT_ASM() for creating them.

Note that this macro includes explicit alignment of the generated functions, so __raw_callee_save___kvm_vcpu_is_preempted(), _paravirt_nop() and paravirt_ret0() are now aligned at 4 byte boundaries.

The explicit _paravirt_nop() prototype in paravirt.c isn't needed, as it is already included in paravirt_types.h.

Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Srivatsa S. Bhat (VMware) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
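The macro is roughly of this shape (a simplified sketch, not the exact upstream definition; the real one also emits symbol type/size directives and the ENDBR/alignment details mentioned above):

    #define DEFINE_PARAVIRT_ASM(func, instr, sec)               \
            asm (".pushsection " #sec ", \"ax\"\n"              \
                 ".global " #func "\n"                          \
                 ".align 4\n"                                   \
                 #func ":\n\t"                                  \
                 instr "\n\t"                                   \
                 ASM_RET                                        \
                 ".popsection")

so that, e.g., DEFINE_PARAVIRT_ASM(paravirt_ret0, "xor %eax, %eax", .entry.text); replaces one of the formerly open-coded asm() blocks.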
2022-10-17  x86/paravirt: Properly align PV functions  (Thomas Gleixner, 1 file, -0/+2)
Ensure inline asm functions are consistently aligned with compiler generated and SYM_FUNC_START*() functions. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-03-15  x86/ibt,paravirt: Sprinkle ENDBR  (Peter Zijlstra, 1 file, -0/+2)
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Kees Cook <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-03-15  x86/entry,xen: Early rewrite of restore_regs_and_return_to_kernel()  (Peter Zijlstra, 1 file, -4/+0)
By doing an early rewrite of 'jmp native_iret` in restore_regs_and_return_to_kernel() we can get rid of the last INTERRUPT_RETURN user and paravirt_iret. Suggested-by: Andrew Cooper <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-03-15  x86/ibt,paravirt: Use text_gen_insn() for paravirt_patch()  (Peter Zijlstra, 1 file, -20/+3)
Less duplication is more better. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-12-08  x86: Prepare inline-asm for straight-line-speculation  (Peter Zijlstra, 1 file, -2/+2)
Replace all ret/retq instructions with ASM_RET in preparation of making it more than a single instruction. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-11-11  Merge branch 'kvm-guest-sev-migration' into kvm-master  (Paolo Bonzini, 1 file, -0/+1)
Add guest api and guest kernel support for SEV live migration. Introduces a new hypercall to notify the host of changes to the page encryption status. If the page is encrypted then it must be migrated through the SEV firmware or a helper VM sharing the key. If page is not encrypted then it can be migrated normally by userspace. This new hypercall is invoked using paravirt_ops. Conflicts: sev_active() replaced by cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT).
2021-11-11  mm: x86: Invoke hypercall when page encryption status is changed  (Brijesh Singh, 1 file, -0/+1)
Invoke a hypercall when a memory region is changed from encrypted -> decrypted and vice versa. Hypervisor needs to know the page encryption status during the guest migration. Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Joerg Roedel <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Reviewed-by: Steve Rutherford <[email protected]> Reviewed-by: Venu Busireddy <[email protected]> Signed-off-by: Brijesh Singh <[email protected]> Signed-off-by: Ashish Kalra <[email protected]> Reviewed-by: Borislav Petkov <[email protected]> Message-Id: <0a237d5bb08793916c7790a3e653a2cbe7485761.1629726117.git.ashish.kalra@amd.com> Signed-off-by: Paolo Bonzini <[email protected]>
2021-11-02  x86/xen: switch initial pvops IRQ functions to dummy ones  (Juergen Gross, 1 file, -1/+12)
The initial pvops functions handling irq flags will only ever be called before interrupts are enabled, so switch them to be dummy functions:

 - xen_save_fl() can always return 0
 - xen_irq_disable() is a nop
 - xen_irq_enable() can BUG()

Add some generic paravirt functions for that purpose.

Signed-off-by: Juergen Gross <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Boris Ostrovsky <[email protected]>
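A sketch of what such dummies can look like (function names here are illustrative, not necessarily the ones the patch adds):

    static noinstr unsigned long pv_early_save_fl(void)
    {
            return 0;               /* interrupts are still off this early */
    }

    static noinstr void pv_early_irq_disable(void)
    {
            /* nothing to do: interrupts are already disabled */
    }

    static noinstr void pv_early_irq_enable(void)
    {
            BUG();                  /* nobody may enable interrupts this early */
    }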
2021-09-17  x86/xen: Make irq_disable() noinstr  (Peter Zijlstra, 1 file, -1/+6)
  vmlinux.o: warning: objtool: pv_ops[31]: native_irq_disable
  vmlinux.o: warning: objtool: pv_ops[31]: __raw_callee_save_xen_irq_disable
  vmlinux.o: warning: objtool: pv_ops[31]: xen_irq_disable_direct
  vmlinux.o: warning: objtool: lock_is_held_type()+0x5b: call to pv_ops[31]() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2021-09-17  x86/xen: Make irq_enable() noinstr  (Peter Zijlstra, 1 file, -1/+6)
  vmlinux.o: warning: objtool: pv_ops[32]: native_irq_enable
  vmlinux.o: warning: objtool: pv_ops[32]: __raw_callee_save_xen_irq_enable
  vmlinux.o: warning: objtool: pv_ops[32]: xen_irq_enable_direct
  vmlinux.o: warning: objtool: lock_is_held_type()+0xfe: call to pv_ops[32]() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2021-09-17  x86/xen: Make set_debugreg() noinstr  (Peter Zijlstra, 1 file, -3/+6)
  vmlinux.o: warning: objtool: pv_ops[2]: xen_set_debugreg
  vmlinux.o: warning: objtool: pv_ops[2]: native_set_debugreg
  vmlinux.o: warning: objtool: exc_debug()+0x3b: call to pv_ops[2]() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2021-09-17  x86/xen: Make get_debugreg() noinstr  (Peter Zijlstra, 1 file, -2/+6)
  vmlinux.o: warning: objtool: pv_ops[1]: xen_get_debugreg
  vmlinux.o: warning: objtool: pv_ops[1]: native_get_debugreg
  vmlinux.o: warning: objtool: exc_debug()+0x25: call to pv_ops[1]() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2021-09-17  x86/xen: Make write_cr2() noinstr  (Peter Zijlstra, 1 file, -1/+6)
  vmlinux.o: warning: objtool: pv_ops[42]: native_write_cr2
  vmlinux.o: warning: objtool: pv_ops[42]: xen_write_cr2
  vmlinux.o: warning: objtool: exc_nmi()+0x127: call to pv_ops[42]() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2021-09-17  x86/xen: Make read_cr2() noinstr  (Peter Zijlstra, 1 file, -1/+6)
  vmlinux.o: warning: objtool: pv_ops[41]: native_read_cr2
  vmlinux.o: warning: objtool: pv_ops[41]: xen_read_cr2
  vmlinux.o: warning: objtool: pv_ops[41]: xen_read_cr2_direct
  vmlinux.o: warning: objtool: exc_double_fault()+0x15: call to pv_ops[41]() leaves .noinstr.text section

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2021-04-29  Merge tag 'x86-mm-2021-04-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -1/+1)
Pull x86 tlb updates from Ingo Molnar:
 "The x86 MM changes in this cycle were:

  - Implement concurrent TLB flushes, which overlaps the local TLB flush with the remote TLB flush. In testing this improved sysbench performance measurably by a couple of percentage points, especially if TLB-heavy security mitigations are active.

  - Further micro-optimizations to improve the performance of TLB flushes"

* tag 'x86-mm-2021-04-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  smp: Micro-optimize smp_call_function_many_cond()
  smp: Inline on_each_cpu_cond() and on_each_cpu()
  x86/mm/tlb: Remove unnecessary uses of the inline keyword
  cpumask: Mark functions as pure
  x86/mm/tlb: Do not make is_lazy dirty for no reason
  x86/mm/tlb: Privatize cpu_tlbstate
  x86/mm/tlb: Flush remote and local TLBs concurrently
  x86/mm/tlb: Open-code on_each_cpu_cond_mask() for tlb_is_not_lazy()
  x86/mm/tlb: Unify flush_tlb_func_local() and flush_tlb_func_remote()
  smp: Run functions concurrently in smp_call_function_many_cond()
2021-03-11  x86/paravirt: Have only one paravirt patch function  (Juergen Gross, 1 file, -18/+2)
There is no need any longer to have different paravirt patch functions for native and Xen. Eliminate native_patch() and rename paravirt_patch_default() to paravirt_patch(). Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2021-03-11  x86/paravirt: Switch functions with custom code to ALTERNATIVE  (Juergen Gross, 1 file, -10/+6)
Instead of using paravirt patching for custom code sequences use ALTERNATIVE for the functions with custom code replacements. Instead of patching an ud2 instruction for unpopulated vector entries into the caller site, use a simple function just calling BUG() as a replacement. Simplify the register defines for assembler paravirt calling, as there isn't much usage left. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2021-03-11  x86/paravirt: Switch iret pvops to ALTERNATIVE  (Juergen Gross, 1 file, -24/+2)
The iret paravirt op is rather special as it is using a jmp instead of a call instruction. Switch it to ALTERNATIVE. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2021-03-11  x86/paravirt: Switch time pvops functions to use static_call()  (Juergen Gross, 1 file, -4/+9)
The time pvops functions are the only ones left which might be used in 32-bit mode and which return a 64-bit value. Switch them to use the static_call() mechanism instead of pvops, as this allows quite some simplification of the pvops implementation. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
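The pattern, sketched for the steal-clock hook (simplified; the real series covers sched_clock the same way and lets guests update the static call instead of a pv-op):

    #include <linux/static_call.h>

    static u64 native_steal_clock(int cpu)
    {
            return 0;               /* bare metal: no stolen time */
    }

    DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock);

    static __always_inline u64 paravirt_steal_clock(int cpu)
    {
            return static_call(pv_steal_clock)(cpu);
    }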
2021-03-06  x86/mm/tlb: Flush remote and local TLBs concurrently  (Nadav Amit, 1 file, -1/+1)
To improve TLB shootdown performance, flush the remote and local TLBs concurrently. Introduce flush_tlb_multi() that does so. Introduce paravirtual versions of flush_tlb_multi() for KVM, Xen and Hyper-V (Xen and Hyper-V are only compile-tested).

While the updated smp infrastructure is capable of running a function on a single local core, it is not optimized for this case. The multiple function calls and the indirect branch introduce some overhead, and might make local TLB flushes slower than they were before the recent changes.

Before calling the SMP infrastructure, check if only a local TLB flush is needed to restore the lost performance in this common case. This requires checking mm_cpumask() one more time, but unless this mask is updated very frequently, this should not impact performance negatively.

Signed-off-by: Nadav Amit <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>  # Hyper-v parts
Reviewed-by: Juergen Gross <[email protected]>  # Xen and paravirt parts
Reviewed-by: Dave Hansen <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
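Roughly, the local fast path described in the last paragraph looks like this (an illustrative sketch, not the exact upstream code; the function name is made up for the example):

    static void flush_mm_range_sketch(struct mm_struct *mm,
                                      struct flush_tlb_info *info)
    {
            /* Is any CPU besides this one using the mm? */
            if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
                    flush_tlb_multi(mm_cpumask(mm), info);  /* remote + local, concurrently */
            } else {
                    /* Only this CPU: flush locally and skip the IPI machinery. */
                    local_irq_disable();
                    flush_tlb_func(info);
                    local_irq_enable();
            }
    }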
2021-02-10  x86/pv: Rework arch_local_irq_restore() to not use popf  (Juergen Gross, 1 file, -1/+0)
POPF is a rather expensive operation, so don't use it for restoring irq flags. Instead, test whether interrupts are enabled in the flags parameter and enable interrupts via STI in that case. This results in the restore_fl paravirt op to be no longer needed. Suggested-by: Andy Lutomirski <[email protected]> Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
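With the restore_fl pv-op gone, the restore path can be written along these lines (a sketch, close to but not necessarily identical with the final irqflags.h code):

    static __always_inline void arch_local_irq_restore(unsigned long flags)
    {
            /* Only ever enable: if IF was clear, interrupts are already off. */
            if (flags & X86_EFLAGS_IF)
                    arch_local_irq_enable();        /* STI, or the paravirt equivalent */
    }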
2021-02-10  x86/xen: Drop USERGS_SYSRET64 paravirt call  (Juergen Gross, 1 file, -4/+1)
USERGS_SYSRET64 is used to return from a syscall via SYSRET, but a Xen PV guest will nevertheless use the IRET hypercall, as there is no sysret PV hypercall defined. So instead of testing all the prerequisites for doing a sysret and then mangling the stack for Xen PV again for doing an iret just use the iret exit from the beginning. This can easily be done via an ALTERNATIVE like it is done for the sysenter compat case already. It should be noted that this drops the optimization in Xen for not restoring a few registers when returning to user mode, but it seems as if the saved instructions in the kernel more than compensate for this drop (a kernel build in a Xen PV guest was slightly faster with this patch applied). While at it remove the stale sysret32 remnants. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2021-02-10  x86/pv: Switch SWAPGS to ALTERNATIVE  (Juergen Gross, 1 file, -1/+0)
SWAPGS is used only for interrupts coming from user mode or for returning to user mode. So there is no reason to use the PARAVIRT framework, as it can easily be replaced by an ALTERNATIVE depending on X86_FEATURE_XENPV. There are several instances using the PV-aware SWAPGS macro in paths which are never executed in a Xen PV guest. Replace those with the plain swapgs instruction. For SWAPGS_UNSAFE_STACK the same applies. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Borislav Petkov <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-08-15  x86/paravirt: Remove set_pte_at() pv-op  (Juergen Gross, 1 file, -1/+0)
On x86 set_pte_at() is now always falling back to set_pte(). So instead of having this fallback after the paravirt maze just drop the set_pte_at paravirt operation and let set_pte_at() use the set_pte() function directly. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
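After the change, set_pte_at() on x86 reduces to something like this (sketch of the end state described above):

    static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                                  pte_t *ptep, pte_t pte)
    {
            set_pte(ptep, pte);     /* mm and addr are no longer needed */
    }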
2020-08-15  x86/paravirt: Remove 32-bit support from CONFIG_PARAVIRT_XXL  (Juergen Gross, 1 file, -18/+0)
The last 32-bit user of stuff under CONFIG_PARAVIRT_XXL is gone. Remove 32-bit specific parts. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-07-18  x86/ioperm: Fix io bitmap invalidation on Xen PV  (Andy Lutomirski, 1 file, -1/+2)
tss_invalidate_io_bitmap() wasn't wired up properly through the pvop machinery, so the TSS and Xen's io bitmap would get out of sync whenever disabling a valid io bitmap. Add a new pvop for tss_invalidate_io_bitmap() to fix it. This is XSA-329. Fixes: 22fe5b0439dd ("x86/ioperm: Move TSS bitmap update to exit to user work") Signed-off-by: Andy Lutomirski <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/d53075590e1f91c19f8af705059d3ff99424c020.1595030016.git.luto@kernel.org
2020-06-09  mm: reorder includes after introduction of linux/pgtable.h  (Mike Rapoport, 1 file, -1/+1)
The replacement of <asm/pgtable.h> with <linux/pgtable.h> left the include of the latter in the middle of the asm includes. Fix this up with the aid of the below script and manual adjustments here and there.

    import sys
    import re

    if len(sys.argv) is not 3:
        print "USAGE: %s <file> <header>" % (sys.argv[0])
        sys.exit(1)

    hdr_to_move="#include <linux/%s>" % sys.argv[2]
    moved = False
    in_hdrs = False

    with open(sys.argv[1], "r") as f:
        lines = f.readlines()
        for _line in lines:
            line = _line.rstrip(' ')
            if line == hdr_to_move:
                continue
            if line.startswith("#include <linux/"):
                in_hdrs = True
            elif not moved and in_hdrs:
                moved = True
                print hdr_to_move
            print line

Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Cain <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chris Zankel <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Greentime Hu <[email protected]>
Cc: Greg Ungerer <[email protected]>
Cc: Guan Xuetao <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Ley Foon Tan <[email protected]>
Cc: Mark Salter <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Nick Hu <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Russell King <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vincent Chen <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
2020-06-09  mm: introduce include/linux/pgtable.h  (Mike Rapoport, 1 file, -1/+1)
The include/linux/pgtable.h is going to be the home of generic page table manipulation functions. Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and make the latter include asm/pgtable.h. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-04-26  x86/tlb: Move __flush_tlb_one_user() out of line  (Thomas Gleixner, 1 file, -5/+0)
cpu_tlbstate is exported because various TLB-related functions need access to it, but cpu_tlbstate is sensitive information which should only be accessed by well-contained kernel functions and not be directly exposed to modules.

As a third step, move __flush_tlb_one_user() out of line and hide the native function. The latter can be static when CONFIG_PARAVIRT is disabled.

Consolidate the name space while at it and remove the pointless extra wrapper in the paravirt code.

No functional change.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Alexandre Chartre <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-04-26  x86/tlb: Move __flush_tlb_global() out of line  (Thomas Gleixner, 1 file, -9/+0)
cpu_tlbstate is exported because various TLB-related functions need access to it, but cpu_tlbstate is sensitive information which should only be accessed by well-contained kernel functions and not be directly exposed to modules. As a second step, move __flush_tlb_global() out of line and hide the native function. The latter can be static when CONFIG_PARAVIRT is disabled. Consolidate the namespace while at it and remove the pointless extra wrapper in the paravirt code. No functional change. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Alexandre Chartre <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-04-26  x86/tlb: Move __flush_tlb() out of line  (Thomas Gleixner, 1 file, -6/+1)
cpu_tlbstate is exported because various TLB-related functions need access to it, but cpu_tlbstate is sensitive information which should only be accessed by well-contained kernel functions and not be directly exposed to modules. As a first step, move __flush_tlb() out of line and hide the native function. The latter can be static when CONFIG_PARAVIRT is disabled. Consolidate the namespace while at it and remove the pointless extra wrapper in the paravirt code. No functional change. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Alexandre Chartre <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-02-29  x86/ioperm: Add new paravirt function update_io_bitmap()  (Juergen Gross, 1 file, -0/+5)
Commit 111e7b15cf10f6 ("x86/ioperm: Extend IOPL config to control ioperm() as well") reworked the iopl syscall to use I/O bitmaps. Unfortunately this broke Xen PV domains using that syscall as there is currently no I/O bitmap support in PV domains. Add I/O bitmap support via a new paravirt function update_io_bitmap which Xen PV domains can use to update their I/O bitmaps via a hypercall. Fixes: 111e7b15cf10f6 ("x86/ioperm: Extend IOPL config to control ioperm() as well") Reported-by: Jan Beulich <[email protected]> Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Jan Beulich <[email protected]> Reviewed-by: Jan Beulich <[email protected]> Cc: <[email protected]> # 5.5 Link: https://lkml.kernel.org/r/[email protected]
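A rough sketch of how callers see the new hook (simplified and illustrative; the native backend keeps updating the hardware TSS copy, while Xen PV wires the op to a hypercall):

    static inline void tss_update_io_bitmap(void)
    {
            PVOP_VCALL0(cpu.update_io_bitmap);
    }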
2019-11-16  x86/iopl: Remove legacy IOPL option  (Thomas Gleixner, 1 file, -2/+0)
The IOPL emulation via the I/O bitmap is sufficient. Remove the legacy cruft dealing with the (e)flags based IOPL mechanism. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Juergen Gross <[email protected]> (Paravirt and Xen parts) Acked-by: Andy Lutomirski <[email protected]>
2019-07-22  x86/paravirt: Drop {read,write}_cr8() hooks  (Andrew Cooper, 1 file, -4/+0)
There is a lot of infrastructure for functionality which is used exclusively in __{save,restore}_processor_state() on the suspend/resume path. cr8 is an alias of APIC_TASKPRI, and APIC_TASKPRI is saved/restored by lapic_{suspend,resume}(). Saving and restoring cr8 independently of the rest of the Local APIC state isn't a clever thing to be doing. Delete the suspend/resume cr8 handling, which shrinks the size of struct saved_context, and allows for the removal of both PVOPS. Signed-off-by: Andrew Cooper <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-07-17  x86/paravirt: Make read_cr2() CALLEE_SAVE  (Peter Zijlstra, 1 file, -1/+1)
The one paravirt read_cr2() implementation (Xen) is actually quite trivial and doesn't need to clobber anything other than the return register. Making read_cr2() CALLEE_SAVE avoids all the PUSH/POP nonsense and allows more convenient use from assembly. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2019-07-08  Merge branch 'x86-paravirt-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -23/+23)
Pull x86 paravirt updates from Ingo Molnar:
 "A handful of paravirt patching code enhancements to make it more robust against patching failures, and related cleanups and not so related cleanups - by Thomas Gleixner and myself"

* 'x86-paravirt-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/paravirt: Rename paravirt_patch_site::instrtype to paravirt_patch_site::type
  x86/paravirt: Standardize 'insn_buff' variable names
  x86/paravirt: Match paravirt patchlet field definition ordering to initialization ordering
  x86/paravirt: Replace the paravirt patch asm magic
  x86/paravirt: Unify the 32/64 bit paravirt patching code
  x86/paravirt: Detect over-sized patching bugs in paravirt_patch_call()
  x86/paravirt: Detect over-sized patching bugs in paravirt_patch_insns()
  x86/paravirt: Remove bogus extern declarations
2019-05-24  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 102  (Thomas Gleixner, 1 file, -13/+1)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 51 franklin st fifth floor boston ma 02110 1301 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 50 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Richard Fontana <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>