path: root/arch/x86/include/asm
2020-09-17  compat: lift compat_s64 and compat_u64 to <asm-generic/compat.h>  (Christoph Hellwig, 1 file, -2/+0)
Lift the compat_s64 and compat_u64 definitions into common code, using the COMPAT_FOR_U64_ALIGNMENT symbol for the x86 special case. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
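For reference, the lifted definitions in <asm-generic/compat.h> look roughly like this sketch; x86 defines COMPAT_FOR_U64_ALIGNMENT because 32-bit x86 aligns 64-bit quantities to only 4 bytes:

    /* Sketch of the lifted typedefs in <asm-generic/compat.h> */
    #ifdef COMPAT_FOR_U64_ALIGNMENT
    typedef s64 __attribute__((aligned(4))) compat_s64;
    typedef u64 __attribute__((aligned(4))) compat_u64;
    #else
    typedef s64 compat_s64;
    typedef u64 compat_u64;
    #endif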
2020-09-16  ACPI: processor: Use CPUIDLE_FLAG_TLB_FLUSHED  (Peter Zijlstra, 1 file, -2/+0)
Make acpi_processor_idle() use the generic TLB flushing code. This again removes RCU usage after rcu_idle_enter(). (XXX make every C3 invalidate TLBs, not just C3-BM) Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Borislav Petkov <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2020-09-16  x86/irq: Make most MSI ops XEN private  (Thomas Gleixner, 1 file, -2/+0)
Nothing except XEN uses the setup/teardown ops. Hide them there. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/irq: Cleanup the arch_*_msi_irqs() leftovers  (Thomas Gleixner, 2 files, -12/+0)
Get rid of all the gunk and remove the 'select PCI_MSI_ARCH_FALLBACK' from the x86 Kconfig so the weak functions in the PCI core are replaced by stubs which emit a warning. This ensures that any failure to set the irq domain pointer results in a warning when the device is used. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/pci: Set default irq domain in pcibios_add_device()  (Thomas Gleixner, 1 file, -0/+2)
Now that interrupt remapping sets the irqdomain pointer when a PCI device is added, it's possible to store the default irq domain in the device struct in pcibios_add_device(). If the bus to which a device is connected has an irq domain associated, then this domain is used; otherwise the default domain (PCI/MSI native or XEN PCI/MSI) is used. Using the bus domain ensures that special MSI bus domains like VMD work. This makes XEN and the non-remapped native case work solely based on the irq domain pointer in struct device for PCI/MSI, and allows removing the arch fallback and making most of the x86_msi ops private to XEN in the next steps. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
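A minimal sketch of the lookup order described above; the default-domain variable name and the exact hook body are assumptions, while dev_get_msi_domain()/dev_set_msi_domain() are the generic accessors:

    int pcibios_add_device(struct pci_dev *dev)
    {
            /* Prefer a bus-specific MSI domain (e.g. VMD), else fall back
             * to the default PCI/MSI domain stored at init time. */
            if (!dev_get_msi_domain(&dev->dev)) {
                    struct irq_domain *d = dev_get_msi_domain(&dev->bus->dev);

                    dev_set_msi_domain(&dev->dev, d ? d : x86_pci_msi_default_domain);
            }
            return 0;
    }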
2020-09-16  x86/irq: Initialize PCI/MSI domain at PCI init time  (Thomas Gleixner, 2 files, -2/+7)
No point in initializing the default PCI/MSI interrupt domain early and no point in creating it when XEN PV/HVM/DOM0 are active. Move the initialization to pci_arch_init() and convert it to init ops so that XEN can override it, as XEN has its own PCI/MSI management. The XEN override comes in a later step. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/pci: Reduce #ifdeffery in PCI init code  (Thomas Gleixner, 1 file, -0/+11)
Adding a function call before the first #ifdef in arch_pci_init() triggers a 'mixed declarations and code' warning if PCI_DIRECT is enabled. Use stub functions and move the #ifdeffery to the header file where it is not in the way. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/msi: Use generic MSI domain ops  (Thomas Gleixner, 1 file, -2/+0)
pci_msi_get_hwirq() and pci_msi_set_desc() are no longer special. Enable the generic MSI domain ops in the core and PCI MSI code unconditionally and get rid of the x86-specific implementations in the x86 MSI code and in the hyperv PCI driver. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/msi: Consolidate MSI allocation  (Thomas Gleixner, 1 file, -8/+0)
Convert the interrupt remap drivers to retrieve the pci device from the msi descriptor and use info::hwirq. This is the first step to prepare x86 for using the generic MSI domain ops. Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Wei Liu <[email protected]> Acked-by: Joerg Roedel <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/irq: Consolidate UV domain allocation  (Thomas Gleixner, 1 file, -9/+12)
Move the UV specific fields into their own struct for readability's sake. Get rid of the #ifdeffery as it does not matter at all whether the alloc info is a couple of bytes longer or not. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/irq: Consolidate DMAR irq allocation  (Thomas Gleixner, 1 file, -6/+0)
None of the DMAR specific fields are required. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/ioapic: Consolidate IOAPIC allocation  (Thomas Gleixner, 1 file, -11/+12)
Move the IOAPIC specific fields into their own struct and reuse the common devid. Get rid of the #ifdeffery as it does not matter at all whether the alloc info is a couple of bytes longer or not. Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Wei Liu <[email protected]> Acked-by: Joerg Roedel <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/msi: Consolidate HPET allocation  (Thomas Gleixner, 1 file, -7/+0)
None of the magic HPET fields are required in any way. Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Joerg Roedel <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/irq: Prepare consolidation of irq_alloc_info  (Thomas Gleixner, 1 file, -6/+16)
struct irq_alloc_info is a horrible zoo of unnamed structs in a union. Many of the struct fields can be generic and don't have to be type specific like hpet_id, ioapic_id... Provide a generic set of members to prepare for the consolidation. The goal is to make irq_alloc_info have the same basic members as the generic msi_alloc_info so generic MSI domain ops can be reused and yet more mess can be avoided when (non-PCI) device MSI support comes along. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
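The consolidated layout then looks roughly like this sketch, with the generic members up front (mirroring msi_alloc_info) and the type-specific data pushed into a union:

    /* Sketch of the consolidated irq_alloc_info */
    struct irq_alloc_info {
            enum irq_alloc_type     type;
            u32                     flags;
            u32                     devid;
            irq_hw_number_t         hwirq;
            const struct cpumask    *mask;
            struct msi_desc         *desc;
            void                    *data;

            union {
                    struct ioapic_alloc_info        ioapic;
                    struct uv_alloc_info            uv;
            };
    };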
2020-09-16  iommu/irq_remapping: Consolidate irq domain lookup  (Thomas Gleixner, 1 file, -8/+0)
Now that the iommu implementations handle the X86_*_GET_PARENT_DOMAIN types, consolidate the two getter functions. Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Joerg Roedel <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/irq: Add allocation type for parent domain retrieval  (Thomas Gleixner, 1 file, -0/+2)
irq_remapping_ir_irq_domain() is used to retrieve the remapping parent domain for an allocation type. irq_remapping_irq_domain() is for retrieving the actual device domain for allocating interrupts for a device. The two functions are similar and can be unified by using explicit modes for parent irq domain retrieval. Add X86_IRQ_ALLOC_TYPE_IOAPIC/HPET_GET_PARENT and use it in the iommu implementations. Drop the parent domain retrieval for PCI_MSI/X as that is unused. Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Joerg Roedel <[email protected]> Link: https://lore.kernel.org/r/[email protected]
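After this series the allocation-type enum ends up looking roughly like the sketch below; the names follow the surrounding log (including the PCI_MSI rename in the next entry), but the exact ordering is an assumption:

    /* Sketch: dedicated types for parent-domain retrieval at the end */
    enum irq_alloc_type {
            X86_IRQ_ALLOC_TYPE_IOAPIC = 1,
            X86_IRQ_ALLOC_TYPE_HPET,
            X86_IRQ_ALLOC_TYPE_PCI_MSI,
            X86_IRQ_ALLOC_TYPE_PCI_MSIX,
            X86_IRQ_ALLOC_TYPE_DMAR,
            X86_IRQ_ALLOC_TYPE_UV,
            X86_IRQ_ALLOC_TYPE_IOAPIC_GET_PARENT,
            X86_IRQ_ALLOC_TYPE_HPET_GET_PARENT,
    };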
2020-09-16  x86/irq: Rename X86_IRQ_ALLOC_TYPE_MSI to reflect PCI dependency  (Thomas Gleixner, 1 file, -2/+2)
No functional change. Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Joerg Roedel <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/msi: Move compose message callback where it belongs  (Thomas Gleixner, 1 file, -0/+8)
Composing the MSI message at the MSI chip level is wrong because the underlying parent domain is the one which knows how the message should be composed for the direct vector delivery or the interrupt remapping table entry. The interrupt remapping aware PCI/MSI chip does that already. Make the direct delivery chip do the same and move the composition of the direct delivery MSI message to the vector domain irq chip. This prepares for the upcoming device MSI support to avoid having architecture specific knowledge in the device MSI domain irq chips. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-16  x86/init: Remove unused init ops  (Thomas Gleixner, 2 files, -20/+0)
Some past platform removal forgot to get rid of this unused ballast. Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-09-10  x86/efi: Add GHCB mappings when SEV-ES is active  (Tom Lendacky, 1 file, -0/+2)
Calling down to EFI runtime services can result in the firmware performing VMGEXIT calls. The firmware is likely to use the GHCB of the OS (e.g., for setting EFI variables), so each GHCB in the system needs to be identity-mapped in the EFI page tables, as unencrypted, to avoid page faults. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: Moved GHCB mapping loop to sev-es.c ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
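A sketch of the mapping loop (moved to sev-es.c per the note above); the helper and field names are close to, but not guaranteed to match, the upstream code:

    /* Identity-map every per-CPU GHCB into the EFI page tables, unencrypted */
    int __init sev_es_efi_map_ghcbs(pgd_t *pgd)
    {
            struct sev_es_runtime_data *data;
            unsigned long address, pflags;
            int cpu;
            u64 pfn;

            if (!sev_es_active())
                    return 0;

            pflags = _PAGE_NX | _PAGE_RW;

            for_each_possible_cpu(cpu) {
                    data = per_cpu(runtime_data, cpu);

                    address = __pa(&data->ghcb_page);
                    pfn = address >> PAGE_SHIFT;

                    /* identity-mapped and unencrypted (no _PAGE_ENC) */
                    if (kernel_map_pages_in_pgd(pgd, pfn, address, 1, pflags))
                            return 1;
            }

            return 0;
    }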
2020-09-10  objtool: Make unwind hint definitions available to other architectures  (Julien Thierry, 2 files, -79/+11)
Unwind hints are useful to provide objtool with information about stack states in non-standard functions/code. While the type of information being provided might be very arch-specific, the mechanism for providing it can be useful for other architectures. Move the relevant unwind hint definitions to a common header for all architectures to see. [ jpoimboe: REGS_IRET -> REGS_PARTIAL ] Signed-off-by: Julien Thierry <[email protected]> Signed-off-by: Josh Poimboeuf <[email protected]>
2020-09-10  objtool: Rename frame.h -> objtool.h  (Julien Thierry, 1 file, -1/+1)
Header frame.h is getting more code annotations to help objtool analyze object files. Rename the file to objtool.h. [ jpoimboe: add objtool.h to MAINTAINERS ] Signed-off-by: Julien Thierry <[email protected]> Signed-off-by: Josh Poimboeuf <[email protected]>
2020-09-10  perf/x86/amd/ibs: Support 27-bit extended Op/cycle counter  (Kim Phillips, 1 file, -0/+1)
IBS hardware with the OpCntExt feature gets a 7-bit wider internal counter. Both the maximum and current count bitfields in the IBS_OP_CTL register are extended to support reading and writing it. No changes are necessary to the driver for handling the extra contiguous current count bits (IbsOpCurCnt), as the driver already passes through 32 bits of that field. However, the driver has to do some extra bit manipulation when converting from a period to the non-contiguous (although conveniently aligned) extra bits in the IbsOpMaxCnt bitfield. This decreases IBS Op interrupt overhead when the period is over 1,048,560 (0xffff0), which would previously activate the driver's software counter. That threshold is now 134,217,712 (0x7fffff0). Signed-off-by: Kim Phillips <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
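A sketch of the period-to-MaxCnt conversion described above. MaxCnt is stored in units of 16: IBS_OP_CTL bits 15:0 hold period[19:4], and the OpCntExt bits 26:20 hold period[26:20] directly, which is what "conveniently aligned" means. The mask values follow the register layout; the helper name is made up for illustration:

    #define IBS_OP_MAX_CNT          0x0000FFFFULL
    #define IBS_OP_MAX_CNT_EXT_MASK (0x7FULL << 20)

    static inline u64 ibs_op_max_cnt_bits(u64 period)
    {
            /* low 16 bits: period / 16; extended bits: kept in place */
            return ((period >> 4) & IBS_OP_MAX_CNT) |
                   (period & IBS_OP_MAX_CNT_EXT_MASK);
    }

    /* Max encodable period: 0x7fffff0 = 134,217,712;
     * without OpCntExt:     0xffff0   =   1,048,560 (matches the text). */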
2020-09-10  perf/x86/amd/ibs: Fix raw sample data accumulation  (Kim Phillips, 1 file, -0/+1)
Neither IbsBrTarget nor OPDATA4 are populated in IBS Fetch mode. Don't accumulate them into the raw sample user data in that case. Also, in Fetch mode, add saving the IBS Fetch Control Extended MSR. Technically, there is an ABI change here with respect to the IBS raw sample data format. I don't see any perf driver version information being included in perf.data file headers, but existing users can detect whether the size of the sample record has been reduced by 8 bytes to determine whether the IBS driver has this fix. Fixes: 904cb3677f3a ("perf/x86/amd/ibs: Update IBS MSRs and feature definitions") Reported-by: Stephane Eranian <[email protected]> Signed-off-by: Kim Phillips <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle NMI State  (Joerg Roedel, 1 file, -0/+7)
When running under SEV-ES, the kernel has to tell the hypervisor when to open the NMI window again after an NMI was injected. This is done with an NMI-complete message to the hypervisor. Add code to the kernel's NMI handler to send this message right at the beginning of do_nmi(). This always allows nesting NMIs. [ bp: Mark __sev_es_nmi_complete() noinstr: vmlinux.o: warning: objtool: exc_nmi()+0x17: call to __sev_es_nmi_complete() leaves .noinstr.text section While at it, use __pa_nodebug() for the same reason due to CONFIG_DEBUG_VIRTUAL=y: vmlinux.o: warning: objtool: __sev_es_nmi_complete()+0xd9: call to __phys_addr() leaves .noinstr.text section ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/head/64: Don't call verify_cpu() on starting APs  (Joerg Roedel, 1 file, -0/+1)
The APs are not ready to handle exceptions when verify_cpu() is called in secondary_startup_64(). Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/smpboot: Load TSS and getcpu GDT entry before loading IDT  (Joerg Roedel, 1 file, -0/+1)
The IDT on 64-bit contains vectors which use paranoid_entry() and/or IST stacks. To make these vectors work, the TSS and the getcpu GDT entry need to be set up before the IDT is loaded. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/realmode: Setup AP jump table  (Tom Lendacky, 1 file, -0/+5)
As part of the GHCB specification, the booting of APs under SEV-ES requires an AP jump table when transitioning from one layer of code to another (e.g. when going from UEFI to the OS). As a result, each layer that parks an AP must provide the physical address of an AP jump table to the next layer via the hypervisor. Upon booting of the kernel, read the AP jump table address from the hypervisor. Under SEV-ES, APs are started using the INIT-SIPI-SIPI sequence. Before issuing the first SIPI request for an AP, the start CS and IP are programmed into the AP jump table. Upon issuing the SIPI request, the AP will awaken and jump to that start CS:IP address. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: - Adapted to different code base - Moved AP table setup from SIPI sending path to real-mode setup code - Fix sparse warnings ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/realmode: Add SEV-ES specific trampoline entry point  (Joerg Roedel, 1 file, -0/+3)
The code at the trampoline entry point is executed in real mode. In real mode, #VC exceptions can't be handled, so anything that might cause such an exception must be avoided. In the standard trampoline entry code this is the WBINVD instruction and the call to verify_cpu(), neither of which is needed anyway when running as an SEV-ES guest. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/paravirt: Allow hypervisor-specific VMMCALL handling under SEV-ES  (Joerg Roedel, 1 file, -1/+15)
Add two new paravirt callbacks to provide hypervisor-specific processor state in the GHCB and to copy state from the hypervisor back to the processor. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Add a Runtime #VC Exception Handler  (Tom Lendacky, 1 file, -0/+6)
Add the handlers for #VC exceptions invoked at runtime. Signed-off-by: Tom Lendacky <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/entry/64: Add entry code for #VC handler  (Joerg Roedel, 3 files, -0/+46)
The #VC handler needs special entry code because: 1. It runs on an IST stack 2. It needs to be able to handle nested #VC exceptions To make this work, the entry code is implemented to pretend it doesn't use an IST stack. When entered from user mode or from the early SYSCALL entry path, it switches to the task stack. If entered from kernel mode, it tries to switch back to the previous stack in the IRET frame. The stack found in the IRET frame is validated first, and if it is not safe to use it for the #VC handler, the code will switch to a fall-back stack (the #VC2 IST stack). From there, it can cause nested exceptions again. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
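The stack-selection policy can be sketched in C roughly as follows. This is a simplification: the real checks live in the entry assembly plus a C helper, and the names here are approximations:

    static unsigned long vc_pick_stack(struct pt_regs *regs)
    {
            struct stack_info info;

            /* Early SYSCALL path: the IRET frame is not trustworthy yet */
            if (ip_within_syscall_gap(regs))
                    return __this_cpu_ist_top_va(VC2);

            /* Only reuse the interrupted stack if it is a known-good one */
            if (get_stack_info_noinstr((unsigned long *)regs->sp, current, &info))
                    return regs->sp;

            /* Otherwise fall back to the #VC2 IST stack */
            return __this_cpu_ist_top_va(VC2);
    }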
2020-09-09  x86/dumpstack/64: Add noinstr version of get_stack_info()  (Joerg Roedel, 1 file, -0/+2)
The get_stack_info() functionality is needed in the entry code for the #VC exception handler. Provide a version of it in the .noinstr.text section which can be called safely from there. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Adjust #VC IST Stack on entering NMI handler  (Joerg Roedel, 1 file, -0/+19)
When an NMI hits in the #VC handler entry code before it has switched to another stack, any subsequent #VC exception in the NMI code-path will overwrite the interrupted #VC handler's stack. Make sure this doesn't happen by explicitly adjusting the #VC IST entry in the NMI handler for the time it can cause #VC exceptions. [ bp: Touchups, spelling fixes. ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
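A sketch of the adjustment, close to the upstream __sev_es_ist_enter()/__sev_es_ist_exit() pair (only the enter side shown; treat the details as approximate):

    void noinstr __sev_es_ist_enter(struct pt_regs *regs)
    {
            unsigned long old_ist, new_ist;

            /* Read the current #VC IST entry */
            old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);

            /* If the NMI interrupted the #VC handler on its IST stack,
             * make room below the live frame; otherwise just step down. */
            if (on_vc_stack(regs->sp))
                    new_ist = ALIGN_DOWN(regs->sp, 8) - sizeof(old_ist);
            else
                    new_ist = old_ist - sizeof(old_ist);

            /* Save the old entry so __sev_es_ist_exit() can restore it */
            *(unsigned long *)new_ist = old_ist;

            this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], new_ist);
    }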
2020-09-09  x86/sev-es: Allocate and map an IST stack for #VC handler  (Joerg Roedel, 2 files, -12/+22)
Allocate and map an IST stack and an additional fall-back stack for the #VC handler. The memory for the stacks is allocated only when SEV-ES is active. The #VC handler needs to use an IST stack because a #VC exception can be raised from kernel space with an unsafe stack, e.g. in the SYSCALL entry path. Since the #VC exception can be nested, the #VC handler switches back to the interrupted stack when entered from kernel space. If switching back is not possible, the fall-back stack is used. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Setup per-CPU GHCBs for the runtime handler  (Tom Lendacky, 1 file, -0/+2)
The runtime handler needs one GHCB per CPU. Set them up and map them unencrypted. [ bp: Touchups and simplification. ] Signed-off-by: Tom Lendacky <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Setup GHCB-based boot #VC handler  (Joerg Roedel, 3 files, -1/+6)
Add the infrastructure to handle #VC exceptions when the kernel runs on virtual addresses and has mapped a GHCB. This handler will be used until the runtime #VC handler takes over. Since the handler runs very early, disable instrumentation for sev-es.c. [ bp: Make vc_ghcb_invalidate() __always_inline so that it can be inlined in noinstr functions like __sev_es_nmi_complete(). ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Setup an early #VC handler  (Joerg Roedel, 1 file, -0/+3)
Setup an early handler for #VC exceptions. There is no GHCB mapped yet, so just re-use the vc_no_ghcb_handler(). It can only handle CPUID exit-codes, but that should be enough to get the kernel through verify_cpu() and __startup_64() until it runs on virtual addresses. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> [ boot failure Error: kernel_ident_mapping_init() failed. ] Reported-by: kernel test robot <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-08  x86: remove address space overrides using set_fs()  (Christoph Hellwig, 3 files, -37/+2)
Stop providing the possibility to override the address space using set_fs() now that there is no need for that any more. To properly handle the TASK_SIZE_MAX checking for 4 vs 5-level page tables on x86 a new alternative is introduced, which just like the one in entry_64.S has to use the hardcoded virtual address bits to escape the fact that TASK_SIZE_MAX isn't actually a constant when 5-level page tables are enabled. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-09-08  x86: make TASK_SIZE_MAX usable from assembly code  (Christoph Hellwig, 2 files, -3/+3)
For 64-bit the only thing missing was a strategic _AC, and for 32-bit we need to use __PAGE_OFFSET instead of PAGE_OFFSET in the TASK_SIZE definition to escape the explicit unsigned long cast. This just works because __PAGE_OFFSET is defined using _AC itself and thus never needs the cast anyway. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Al Viro <[email protected]>
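The resulting definitions are small; roughly (64-bit first, 32-bit second, both sketches of the described change):

    /* 64-bit: _AC() drops the UL suffix when included from assembly */
    #define TASK_SIZE_MAX   ((_AC(1, UL) << __VIRTUAL_MASK_SHIFT) - PAGE_SIZE)

    /* 32-bit: __PAGE_OFFSET is already _AC()-based, so no cast is needed */
    #define TASK_SIZE       __PAGE_OFFSET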
2020-09-08  x86: move PAGE_OFFSET, TASK_SIZE & friends to page_{32,64}_types.h  (Christoph Hellwig, 3 files, -49/+49)
At least for 64-bit this moves them closer to some of the defines they are based on, and it prepares for using the TASK_SIZE_MAX definition from assembly. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-09-07  x86/sev-es: Add SEV-ES Feature Detection  (Joerg Roedel, 2 files, -0/+5)
Add a sev_es_active() function for checking whether SEV-ES is enabled. Also cache the value of MSR_AMD64_SEV at boot to speed up the feature checking in the running code. [ bp: Remove "!!" in sev_active() too. ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
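A sketch of the resulting check; the variable and bit names follow the text and the MSR_AMD64_SEV bit layout:

    #define MSR_AMD64_SEV_ENABLED           BIT_ULL(0)
    #define MSR_AMD64_SEV_ES_ENABLED        BIT_ULL(1)

    static u64 sev_status __ro_after_init;  /* cached MSR_AMD64_SEV value */

    bool sev_es_active(void)
    {
            return sev_status & MSR_AMD64_SEV_ES_ENABLED;
    }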
2020-09-07  x86/head/64: Move early exception dispatch to C code  (Joerg Roedel, 2 files, -2/+4)
Move the assembly coded dispatch between page-faults and all other exceptions to C code to make it easier to maintain and extend. Also change the return-type of early_make_pgtable() to bool and make it static. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
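The dispatch then reduces to a small C function, roughly like this sketch (function name per the upstream head64.c; page faults try the early page-table fixup first, everything else goes to the generic early fixup):

    void __init do_early_exception(struct pt_regs *regs, int trapnr)
    {
            /* Page faults first try to build an early page-table entry */
            if (trapnr == X86_TRAP_PF &&
                early_make_pgtable(native_read_cr2()))
                    return;

            /* Everything else goes through the generic early fixup */
            early_fixup_exception(regs, trapnr);
    }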
2020-09-07  x86/idt: Make IDT init functions static inlines  (Joerg Roedel, 2 files, -0/+34)
Move these two functions from kernel/idt.c to include/asm/desc.h: * init_idt_data() * idt_init_desc() These functions are needed to setup IDT entries very early and need to be called from head64.c. To be usable this early, these functions need to be compiled without instrumentation and the stack-protector feature. These features need to be kept enabled for kernel/idt.c, so head64.c must use its own versions. [ bp: Take Kees' suggested patch title and add his Rev-by. ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
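For reference, the moved helper looks roughly like this in its new home (a sketch; details per the upstream desc.h):

    static inline void init_idt_data(struct idt_data *data, unsigned int n,
                                     const void *addr)
    {
            BUG_ON(n > 0xFF);

            memset(data, 0, sizeof(*data));
            data->vector    = n;
            data->addr      = addr;
            data->segment   = __KERNEL_CS;
            data->bits.type = GATE_INTERRUPT;
            data->bits.p    = 1;
    }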
2020-09-07  x86/head/64: Install a CPU bringup IDT  (Joerg Roedel, 1 file, -0/+1)
Add a separate bringup IDT for the CPU bringup code that will be used until the kernel switches to the idt_table. There are two reasons for a separate IDT: 1) When the idt_table is set up and the secondary CPUs are booted, it contains entries (e.g. IST entries) which require certain CPU state to be set up. This includes a working TSS (for IST), MSR_GS_BASE (for stack protector) or CR4.FSGSBASE (for the paranoid_entry path). By using a dedicated IDT for early boot this state does not need to be set up early. 2) The idt_table is static to idt.c, so any function using or modifying it must be in idt.c too. That means that all compiler-driven instrumentation like tracing or KASAN is also active in this code. But during early CPU bringup the environment is not set up for this instrumentation to work correctly. To avoid all of these hassles and make early exception handling robust, use a dedicated bringup IDT. The IDT is loaded two times, first on the boot CPU while the kernel is still running on direct-mapped addresses, and again later after the switch to kernel addresses has happened. The second IDT load happens on the boot and secondary CPUs. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/head/64: Install startup GDT  (Joerg Roedel, 1 file, -0/+1)
Handling exceptions during boot requires a working GDT. The kernel GDT can't be used on the direct mapping, so load a startup GDT and set up segments. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/fpu: Move xgetbv()/xsetbv() into a separate header  (Joerg Roedel, 2 files, -29/+35)
The xgetbv() function is needed in the pre-decompression boot code, but asm/fpu/internal.h can't be included there directly. Doing so opens the door to include-hell due to various include-magic in boot/compressed/misc.h. Avoid that by moving xgetbv()/xsetbv() to a separate header file and include it instead. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
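The moved helpers, roughly as they appear in the new header (a sketch; the plain mnemonics assume a reasonably modern assembler, while the kernel historically used a .byte sequence):

    static inline u64 xgetbv(u32 index)
    {
            u32 eax, edx;

            asm volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (index));
            return eax + ((u64)edx << 32);
    }

    static inline void xsetbv(u32 index, u64 value)
    {
            u32 eax = value;
            u32 edx = value >> 32;

            asm volatile("xsetbv" :: "a" (eax), "d" (edx), "c" (index));
    }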
2020-09-07  x86/boot/compressed/64: Setup a GHCB-based VC Exception handler  (Joerg Roedel, 1 file, -0/+39)
Install an exception handler for the #VC exception that uses a GHCB. Also add the infrastructure for handling different exit codes by decoding the instruction that caused the exception, plus error handling. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/boot/compressed/64: Add stage1 #VC handler  (Joerg Roedel, 3 files, -0/+39)
Add the first handler for #VC exceptions. At stage 1 there is no GHCB yet because the kernel might still be running on the EFI page table. The stage 1 handler is limited to the MSR-based protocol to talk to the hypervisor and can only support CPUID exit-codes, but that is enough to get to stage 2. [ bp: Zap superfluous newlines after rd/wrmsr instruction mnemonics. ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
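A sketch of the MSR-based GHCB protocol used for a CPUID exit; the constants are the GHCB spec values, while the helper name is made up for illustration:

    #define MSR_AMD64_SEV_ES_GHCB   0xc0010130
    #define GHCB_CPUID_REQ_CODE     0x004ULL
    #define GHCB_CPUID_RESP_CODE    0x005ULL

    /* reg: 0=EAX, 1=EBX, 2=ECX, 3=EDX */
    static u64 ghcb_cpuid_req(u32 fn, unsigned int reg)
    {
            return GHCB_CPUID_REQ_CODE | ((u64)(reg & 3) << 30) | ((u64)fn << 32);
    }

    /* The caller writes the request to the GHCB MSR, issues VMGEXIT
     * (rep; vmmcall), then reads the MSR back: the low 12 bits must be
     * GHCB_CPUID_RESP_CODE and bits 63:32 hold the requested register. */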
2020-09-07  x86/boot/compressed/64: Add IDT Infrastructure  (Joerg Roedel, 1 file, -0/+3)
Add code needed to set up an IDT in the early pre-decompression boot code. The IDT is loaded first in startup_64, which is after EfiExitBootServices() has been called, and later reloaded when the kernel image has been relocated to the end of the decompression area. This allows setting up different IDT handlers before and after the relocation. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]