path: root/arch/x86
Age | Commit message | Author | Files | Lines

2020-09-07 | x86/head/64: Load segment registers earlier | Joerg Roedel | 1 | -26/+26

Make sure segments are properly set up before setting up an IDT and doing anything that might cause a #VC exception. This is later needed for early exception handling.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/head/64: Load GDT after switch to virtual addresses | Joerg Roedel | 1 | -8/+8

Load the GDT right after switching to virtual addresses to make sure there is a defined GDT for exception handling.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/head/64: Install startup GDT | Joerg Roedel | 3 | -0/+48

Handling exceptions during boot requires a working GDT. The kernel GDT can't be used on the direct mapping, so load a startup GDT and set up the segments.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/idt: Split idt_data setup out of set_intr_gate() | Joerg Roedel | 1 | -8/+14

The code to set up idt_data is needed for early exception handling, but set_intr_gate() can't be used that early because it has pv-ops in its code path which don't work that early. Split the idt_data initialization part out of set_intr_gate() so that it can be used separately.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

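For illustration, a helper split out this way plausibly looks like the following sketch; the names and fields follow the kernel's idt_data layout, but treat it as illustrative rather than the verbatim patch:

/* Fill an idt_data entry without touching pv-ops; usable in early boot. */
static void init_idt_data(struct idt_data *data, unsigned int n,
                          const void *addr)
{
        BUG_ON(n > 0xFF);

        memset(data, 0, sizeof(*data));
        data->vector    = n;
        data->addr      = addr;
        data->segment   = __KERNEL_CS;
        data->bits.type = GATE_INTERRUPT;
        data->bits.p    = 1;
}
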
2020-09-07 | x86/sev-es: Add CPUID handling to #VC handler | Tom Lendacky | 2 | -0/+39

Handle #VC exceptions caused by CPUID instructions. These happen in early boot code when the KASLR code checks for RDTSC.

Signed-off-by: Tom Lendacky <[email protected]>
[ [email protected]: Adapt to #VC handling framework ]
Co-developed-by: Joerg Roedel <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/fpu: Move xgetbv()/xsetbv() into a separate header | Joerg Roedel | 2 | -29/+35

The xgetbv() function is needed in the pre-decompression boot code, but asm/fpu/internal.h can't be included there directly. Doing so opens the door to include-hell due to various include-magic in boot/compressed/misc.h. Avoid that by moving xgetbv()/xsetbv() to a separate header file and including it instead.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

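For reference, such a standalone header plausibly contains wrappers of this shape (the header-guard name is an assumption; the .byte sequences are the architectural XGETBV/XSETBV opcodes, traditionally spelled out for old assemblers):

#ifndef _ASM_X86_FPU_XCR_H
#define _ASM_X86_FPU_XCR_H

/* Read an extended control register: index in %ecx, value in %edx:%eax. */
static inline u64 xgetbv(u32 index)
{
        u32 eax, edx;

        asm volatile(".byte 0x0f,0x01,0xd0" /* xgetbv */
                     : "=a" (eax), "=d" (edx) : "c" (index));
        return eax + ((u64)edx << 32);
}

static inline void xsetbv(u32 index, u64 value)
{
        u32 eax = value;
        u32 edx = value >> 32;

        asm volatile(".byte 0x0f,0x01,0xd1" /* xsetbv */
                     :: "a" (eax), "d" (edx), "c" (index));
}

#endif /* _ASM_X86_FPU_XCR_H */
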
2020-09-07 | x86/sev-es: Add support for handling IOIO exceptions | Tom Lendacky | 2 | -0/+246

Add support for decoding and handling #VC exceptions for IOIO events.

[ [email protected]: Adapted code to #VC handling framework ]
Co-developed-by: Joerg Roedel <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Unmap GHCB page before booting the kernel | Joerg Roedel | 3 | -2/+35

Force a page-fault on any further accesses to the GHCB page when they shouldn't happen anymore. This will catch any bugs where a #VC exception is raised even though none is expected anymore.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Setup a GHCB-based VC Exception handler | Joerg Roedel | 10 | -1/+331

Install an exception handler for the #VC exception that uses a GHCB. Also add the infrastructure for handling different exit codes by decoding the instruction that caused the exception, along with the error handling.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Add set_page_en/decrypted() helpers | Joerg Roedel | 2 | -0/+135

The functions are needed to map the GHCB for SEV-ES guests. The GHCB is used for communication with the hypervisor, so its content must not be encrypted. Once the GHCB is no longer needed, it must be mapped encrypted again so that the running kernel image can safely re-use the memory.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

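A minimal sketch of the decrypt direction, assuming a hypothetical lookup_pte() page-table walker (the real boot code walks and splits the identity map itself):

/*
 * Toggle the SEV encryption bit (C-bit) for one 4K page in the identity
 * map and flush its stale TLB entry.
 */
static int set_page_decrypted(unsigned long address)
{
        pte_t *ptep = lookup_pte(address);      /* hypothetical walker */

        if (!ptep)
                return -EFAULT;

        /* The GHCB is shared with the hypervisor: clear the C-bit. */
        set_pte(ptep, __pte(pte_val(*ptep) & ~sme_me_mask));

        /* Flush the old, encrypted translation for this address. */
        asm volatile("invlpg (%0)" :: "r" (address) : "memory");

        return 0;
}
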
2020-09-07 | x86/boot/compressed/64: Check return value of kernel_ident_mapping_init() | Joerg Roedel | 1 | -2/+5

The function can fail to create an identity mapping; check for that and bail out if it happens.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Call set_sev_encryption_mask() earlier | Joerg Roedel | 2 | -4/+8

Call set_sev_encryption_mask() while still in the stage 1 #VC handler, because the stage 2 handler needs the kernel's own page tables to be set up, and calling set_sev_encryption_mask() is a prerequisite for that.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Add stage1 #VC handler | Joerg Roedel | 9 | -0/+160

Add the first handler for #VC exceptions. At stage 1 there is no GHCB yet because the kernel might still be running on the EFI page table. The stage 1 handler is limited to the MSR-based protocol to talk to the hypervisor and can only support CPUID exit-codes, but that is enough to get to stage 2.

[ bp: Zap superfluous newlines after rd/wrmsr instruction mnemonics. ]
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

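A sketch of the MSR-based CPUID protocol, with constants taken from the GHCB specification rather than from this patch; sev_es_terminate() is a hypothetical stand-in for the guest-termination path:

/*
 * GHCB MSR protocol for CPUID: request code 0x004 in bits 11:0 of the
 * GHCB MSR, register selector (0=EAX..3=EDX) in bits 31:30, CPUID
 * function in bits 63:32. VMGEXIT is encoded as "rep; vmmcall".
 */
#define MSR_AMD64_SEV_ES_GHCB   0xc0010130
#define GHCB_CPUID_REQ(fn, reg) (0x004ULL | ((u64)(reg) << 30) | ((u64)(fn) << 32))
#define GHCB_CPUID_RESP         0x005ULL

static u64 ghcb_msr_cpuid(u32 fn, unsigned int reg)
{
        u64 val = GHCB_CPUID_REQ(fn, reg);
        u32 low = (u32)val, high = (u32)(val >> 32);

        asm volatile("wrmsr" :: "c" (MSR_AMD64_SEV_ES_GHCB),
                                "a" (low), "d" (high));
        asm volatile("rep; vmmcall" ::: "memory");      /* VMGEXIT */
        asm volatile("rdmsr" : "=a" (low), "=d" (high)
                             : "c" (MSR_AMD64_SEV_ES_GHCB));

        val = ((u64)high << 32) | low;
        if ((val & 0xfffULL) != GHCB_CPUID_RESP)
                sev_es_terminate();     /* hypothetical: no recovery possible */

        return val >> 32;       /* the requested register's value */
}
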
2020-09-07 | x86/boot/compressed/64: Change add_identity_map() to take start and end | Joerg Roedel | 1 | -10/+5

Changing the function to take start and end as parameters instead of start and size simplifies the callers, which don't need to calculate the size if they already have start and end.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Don't pre-map memory in KASLR code | Joerg Roedel | 3 | -37/+3

With the page-fault handler in place, the identity mapping can be built on-demand. So remove the code which manually creates the mappings and unexport/remove the functions used for it.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Always switch to own page table | Joerg Roedel | 3 | -25/+32

When booted through startup_64(), the kernel keeps running on the EFI page table until the KASLR code sets up its own page table. Without KASLR, the pre-decompression boot code never switches off the EFI page table. Change that by unconditionally switching to a kernel-controlled page table after relocation.

This makes sure the kernel can make changes to the mapping when necessary, for example map pages unencrypted in SEV and SEV-ES guests.

Also, remove the debug_putstr() calls in initialize_identity_maps() because the function now runs before console_init() is called.

[ bp: Massage commit message. ]
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Add page-fault handler | Joerg Roedel | 4 | -0/+49

Install a page-fault handler to add an identity mapping to addresses not yet mapped. Also do some sanity checking of the error code. This makes non-SEV-ES machines use the exception handling infrastructure in the pre-decompression boot code too, making it less likely to break in the future.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

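A simplified sketch of such a handler, reusing the add_identity_map(start, end) signature from the patch above; error() is the pre-decompression error helper, everything else is illustrative:

void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
{
        unsigned long address = native_read_cr2();

        /*
         * A protection or reserved-bit fault means the page is present;
         * that is a real bug, not a missing identity mapping.
         */
        if (error_code & (X86_PF_PROT | X86_PF_RSVD))
                error("Unexpected page-fault in boot code");

        /* Map the whole 2M region around the faulting address 1:1. */
        address &= PMD_MASK;
        add_identity_map(address, address + PMD_SIZE);
}
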
2020-09-07 | x86/boot/compressed/64: Rename kaslr_64.c to ident_map_64.c | Joerg Roedel | 4 | -10/+18

The file contains only code related to identity-mapped page tables. Rename the file and always compile it in.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Add IDT Infrastructure | Joerg Roedel | 6 | -1/+147

Add code needed to set up an IDT in the early pre-decompression boot code. The IDT is loaded first in startup_64, which is after EfiExitBootServices() has been called, and later reloaded when the kernel image has been relocated to the end of the decompression area. This allows setting up different IDT handlers before and after the relocation.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/boot/compressed/64: Disable red-zone usage | Joerg Roedel | 1 | -1/+1

The x86-64 ABI defines a red zone on the stack: the 128-byte area beyond the location pointed to by %rsp is considered to be reserved and shall not be modified by signal or interrupt handlers. Therefore, functions may use this area for temporary data that is not needed across function calls. In particular, leaf functions may use this area for their entire stack frame, rather than adjusting the stack pointer in the prologue and epilogue.

This is not compatible with exception handling, because the IRET frame is written by the hardware at the current stack pointer, and the functions handling the exception will then overwrite the temporary variables of the interrupted function, causing undefined behavior. So disable red-zone usage for the pre-decompression boot code.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

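The compiler-level switch behind this is -mno-red-zone; in kbuild terms the change is plausibly a one-liner of this shape (the exact Makefile and location are assumptions, not taken from the patch):

# arch/x86/boot/compressed/Makefile (illustrative)
KBUILD_CFLAGS += -mno-red-zone
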
2020-09-07 | x86/insn: Add insn_has_rep_prefix() helper | Joerg Roedel | 2 | -0/+25

Add a function to check whether an instruction has a REP prefix.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Masami Hiramatsu <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

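A sketch of such a check on top of the in-kernel instruction decoder; the committed helper may differ in detail, but REP/REPE is prefix byte 0xF3 and REPNE/REPNZ is 0xF2:

#include <asm/insn.h>

static bool has_rep_prefix(struct insn *insn)
{
        int i;

        insn_get_prefixes(insn);        /* make sure prefixes are decoded */

        for (i = 0; i < insn->prefixes.nbytes; i++) {
                insn_byte_t p = insn->prefixes.bytes[i];

                if (p == 0xf2 || p == 0xf3)
                        return true;
        }

        return false;
}
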
2020-09-07 | KVM: SVM: Use __packed shorthand | Borislav Petkov | 1 | -6/+6

Use the shorthand to make it more readable. No functional changes.

Signed-off-by: Borislav Petkov <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/insn: Add insn_get_modrm_reg_off() | Joerg Roedel | 2 | -0/+24

Add a function to the instruction decoder which returns the pt_regs offset of the register specified in the reg field of the modrm byte.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

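A sketch of the mapping: the reg field is ModRM bits 5:3, extended to four bits by REX.R on x86-64, then translated through a pt_regs offset table (pt_regoff[] and modrm_reg_off() are illustrative names, not the committed code):

static const int pt_regoff[] = {
        offsetof(struct pt_regs, ax),  offsetof(struct pt_regs, cx),
        offsetof(struct pt_regs, dx),  offsetof(struct pt_regs, bx),
        offsetof(struct pt_regs, sp),  offsetof(struct pt_regs, bp),
        offsetof(struct pt_regs, si),  offsetof(struct pt_regs, di),
        offsetof(struct pt_regs, r8),  offsetof(struct pt_regs, r9),
        offsetof(struct pt_regs, r10), offsetof(struct pt_regs, r11),
        offsetof(struct pt_regs, r12), offsetof(struct pt_regs, r13),
        offsetof(struct pt_regs, r14), offsetof(struct pt_regs, r15),
};

static int modrm_reg_off(struct insn *insn)
{
        int reg = X86_MODRM_REG(insn->modrm.value);

        /* REX.R extends the reg field to registers r8-r15. */
        if (insn->rex_prefix.nbytes && X86_REX_R(insn->rex_prefix.value))
                reg += 8;

        return pt_regoff[reg];
}
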
2020-09-07 | KVM: SVM: Add GHCB Accessor functions | Joerg Roedel | 1 | -0/+43

Building a correct GHCB for the hypervisor requires setting valid bits in the GHCB. Simplify that process by providing accessor functions to set values, to update the valid bitmap, and to check the valid bitmap in KVM.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

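The pattern plausibly looks like the macro sketch below: each setter writes the field and marks it in the valid_bitmap so the consumer knows which fields carry meaningful data. Names follow the GHCB layout but are illustrative rather than verbatim:

#define GHCB_BITMAP_IDX(field)                                          \
        (offsetof(struct vmcb_save_area, field) / sizeof(u64))

#define DEFINE_GHCB_SETTER(field)                                       \
        static inline void ghcb_set_##field(struct ghcb *ghcb, u64 value) \
        {                                                               \
                __set_bit(GHCB_BITMAP_IDX(field),                       \
                          (unsigned long *)&ghcb->save.valid_bitmap);   \
                ghcb->save.field = value;                               \
        }

DEFINE_GHCB_SETTER(rax)                 /* ghcb_set_rax(ghcb, val)          */
DEFINE_GHCB_SETTER(sw_exit_code)        /* ghcb_set_sw_exit_code(ghcb, val) */
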
2020-09-07 | x86/umip: Factor out instruction decoding | Joerg Roedel | 3 | -22/+48

Factor out the code used to decode an instruction with the correct address and operand sizes to a helper function. No functional changes.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | KVM: SVM: Add GHCB definitions | Tom Lendacky | 2 | -2/+51

Extend the vmcb_save_area with SEV-ES fields and add a new 'struct ghcb' which will be used for guest-hypervisor communication.

Signed-off-by: Tom Lendacky <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/umip: Factor out instruction fetch | Joerg Roedel | 3 | -20/+46

Factor out the code to fetch the instruction from user-space to a helper function. No functional changes.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | KVM: SVM: nested: Don't allocate VMCB structures on stack | Joerg Roedel | 1 | -14/+33

Do not allocate a vmcb_control_area and a vmcb_save_area on the stack, as these structures will become larger with future extensions of SVM and thus the svm_set_nested_state() stack frame would become too large.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-07 | x86/insn: Make inat-tables.c suitable for pre-decompression code | Joerg Roedel | 1 | -1/+49

The inat-tables.c file has some arrays in it that contain pointers to other arrays. These pointers need to be relocated when the kernel image is moved to a different location. The pre-decompression boot code has no support for applying ELF relocations, so initialize these arrays at runtime in the pre-decompression code to make sure all pointers are correctly initialized.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

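A generic illustration of the underlying issue (deliberately not the inat code itself): a statically initialized pointer table needs load-time ELF relocations, while runtime assignment does not:

static const int table_a[] = { 1, 2 };
static const int table_b[] = { 3, 4 };

/* Needs relocations: the initializers are absolute addresses. */
static const int *tables_static[] = { table_a, table_b };

/* Relocation-free alternative: assign the pointers at runtime. */
static const int *tables_runtime[2];

static void init_tables(void)
{
        tables_runtime[0] = table_a;
        tables_runtime[1] = table_b;
}
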
2020-09-07 | x86/traps: Move pf error codes to <asm/trap_pf.h> | Joerg Roedel | 2 | -18/+25

Move the definition of the x86 page-fault error code bits to a new header file asm/trap_pf.h. This makes it easier to include them into pre-decompression boot code. No functional changes.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

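For reference, the architectural page-fault error code bits carried by the header; the X86_PF_* names match the kernel's existing definitions that this patch merely relocates:

enum x86_pf_error_code {
        X86_PF_PROT  = 1 << 0,  /* 0: no page found, 1: protection fault */
        X86_PF_WRITE = 1 << 1,  /* 0: read access,   1: write access */
        X86_PF_USER  = 1 << 2,  /* 0: kernel-mode,   1: user-mode access */
        X86_PF_RSVD  = 1 << 3,  /* reserved bit set in a paging entry */
        X86_PF_INSTR = 1 << 4,  /* fault was an instruction fetch */
        X86_PF_PK    = 1 << 5,  /* protection-keys violation */
};
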
2020-09-07 | x86/cpufeatures: Add SEV-ES CPU feature | Tom Lendacky | 3 | -1/+4

Add CPU feature detection for Secure Encrypted Virtualization with Encrypted State. This feature enhances SEV by also encrypting the guest register state, making it inaccessible to the hypervisor.

Signed-off-by: Tom Lendacky <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

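A sketch of the detection path, assuming AMD's documented CPUID leaf Fn8000_001F, where EAX bit 1 reports SEV and bit 3 reports SEV-ES (the helper name is illustrative):

static bool sev_es_cpu_supported(void)
{
        unsigned int eax = cpuid_eax(0x8000001f);

        return !!(eax & BIT(3));        /* SEV-ES */
}
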
2020-09-07 | Merge 'x86/cpu' to pick up dependent bits | Borislav Petkov | 6 | -31/+59

Pick up work happening in parallel to avoid nasty merge conflicts later.

Signed-off-by: Borislav Petkov <[email protected]>

2020-09-07 | Merge 'x86/kaslr' to pick up dependent bits | Borislav Petkov | 2 | -135/+107

Signed-off-by: Borislav Petkov <[email protected]>

2020-09-04 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 37 | -90/+84

We got slightly different patches removing a double word in a comment in net/ipv4/raw.c - picked the version from net.

Simple conflict in drivers/net/ethernet/ibm/ibmvnic.c. Use cached values instead of VNIC login response buffer (following what commit 507ebe6444a4 ("ibmvnic: Fix use-after-free of VNIC login response buffer") did).

Signed-off-by: Jakub Kicinski <[email protected]>

2020-09-05 | x86/resctrl: Fix spelling in user-visible warning messages | Colin Ian King | 1 | -2/+2

Fix spelling mistake "Could't" -> "Couldn't" in user-visible warning messages.

[ bp: Massage commit message; s/cpu/CPU/g ]
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-04 | x86/mce: Increase maximum number of banks to 64 | Akshay Gupta | 1 | -6/+2

...because future AMD systems will support up to 64 MCA banks per CPU.

MAX_NR_BANKS is used to allocate a number of data structures, and it is used as a ceiling for values read from MCG_CAP[Count]. Therefore, this change will have no functional effect on existing systems with 32 or fewer MCA banks per CPU. However, this will increase the size of the following structures:

Global bitmaps:
- core.c / mce_banks_ce_disabled
- core.c / all_banks
- core.c / valid_banks
- core.c / toclear
- Total: 32 new bits * 4 bitmaps = 16 new bytes

Per-CPU bitmaps:
- core.c / mce_poll_banks
- intel.c / mce_banks_owned
- Total: 32 new bits * 2 bitmaps = 8 new bytes

The bitmaps are arrays of longs. So this change will only affect 32-bit execution, since there will be one additional long used. There will be no additional memory use on 64-bit execution, because the size of long is 64 bits.

Global structs:
- amd.c / struct smca_bank smca_banks[]: 16 bytes per bank
- core.c / struct mce_bank_dev mce_bank_devs[]: 56 bytes per bank
- Total: 32 new banks * (16 + 56) bytes = 2304 new bytes

Per-CPU structs:
- core.c / struct mce_bank mce_banks_array[]: 16 bytes per bank
- Total: 32 new banks * 16 bytes = 512 new bytes

32-bit:
- Total global size increase: 2320 bytes
- Total per-CPU size increase: 520 bytes

64-bit:
- Total global size increase: 2304 bytes
- Total per-CPU size increase: 512 bytes

This additional memory should still fit within the existing .data section of the kernel binary. However, in the case where it doesn't fit, an additional page (4kB) of memory will be added to the binary to accommodate the extra data, which will be the maximum size increase of vmlinux.

Signed-off-by: Akshay Gupta <[email protected]>
[ Adjust commit message and code comment. ]
Signed-off-by: Yazen Ghannam <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

2020-09-04 | x86/entry: Unbreak 32bit fast syscall | Thomas Gleixner | 1 | -9/+20

Andy reported that the syscall tracing for 32bit fast syscall fails:

# ./tools/testing/selftests/x86/ptrace_syscall_32
...
[RUN] SYSEMU
[FAIL] Initial args are wrong (nr=224, args=10 11 12 13 14 4289172732)
...
[RUN] SYSCALL
[FAIL] Initial args are wrong (nr=29, args=0 0 0 0 0 4289172732)

The reason is that the conversion to generic entry code moved the retrieval of the sixth argument (EBP) after the point where the syscall entry work runs, i.e. ptrace, seccomp, audit...

Unbreak it by providing a split-up version of syscall_enter_from_user_mode():

- syscall_enter_from_user_mode_prepare() establishes state and enables interrupts
- syscall_enter_from_user_mode_work() runs the entry work

Replace the call to syscall_enter_from_user_mode() in the 32bit fast syscall C-entry with the split functions and stick the EBP retrieval between them, as sketched below.

Fixes: 27d6b4d14f5c ("x86/entry: Use generic syscall entry function")
Reported-by: Andy Lutomirski <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

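A heavily simplified sketch of the resulting ordering; the two helper names are from this commit, everything else is illustrative:

static bool fast_syscall_32_sketch(struct pt_regs *regs)
{
        /* Establish state and enable interrupts. */
        syscall_enter_from_user_mode_prepare(regs);

        /* Fetch the sixth argument from the user stack into regs->bp. */
        if (get_user(*(u32 *)&regs->bp, (u32 __user *)(unsigned long)regs->sp))
                return false;   /* simplified failure path */

        /* Only now run the entry work (ptrace, seccomp, audit, ...). */
        syscall_enter_from_user_mode_work(regs, regs->orig_ax);

        /* ... dispatch the syscall ... */
        return true;
}
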
2020-09-04 | x86/debug: Change thread.debugreg6 to thread.virtual_dr6 | Peter Zijlstra | 5 | -24/+26

Current usage of thread.debugreg6 is convoluted at best. It starts life as a copy of the hardware DR6 value, but then various bits are cleared and set.

Replace this with a new variable thread.virtual_dr6 that is initialized to 0 when DR6 is read and only gains bits, at the same time the actual (on stack) dr6 value which is read from the hardware only gets bits cleared.

Suggested-by: Andy Lutomirski <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Support negative polarity DR6 bits | Peter Zijlstra | 3 | -6/+5

DR6 has a whole bunch of bits that have negative polarity; they were architecturally reserved and defined to be 1 and are now getting used. Since they're 1 by default, 0 becomes the signal value. Handle this by XOR-ing the read DR6 value with the reserved mask; this will flip them around such that 1 is the signal value (positive polarity).

Current Linux doesn't yet support any of these bits, but there are two defined:

- DR6[11] Bus Lock Debug Exception (ISEr39)
- DR6[16] Restricted Transactional Memory (SDM)

Update ptrace_{set,get}_debugreg() to provide/consume the value in architectural polarity. Although afaict ptrace_set_debugreg(6) is pointless, the value is not consumed anywhere. Change hw_breakpoint_restore() to always write the DR6_RESERVED value to DR6; again, no consumer for that write.

Suggested-by: Andrew Cooper <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

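A sketch of the polarity flip; DR6_RESERVED (0xFFFF0FF0) matches the kernel's definition, the helper name is illustrative:

/*
 * DR6_RESERVED is the mask of bits that architecturally read as 1.
 * XOR-ing the raw value with it gives every bit, including the new
 * negative-polarity ones, a positive signal value.
 */
static unsigned long read_dr6_positive_polarity(void)
{
        unsigned long dr6;

        get_debugreg(dr6, 6);
        set_debugreg(DR6_RESERVED, 6);  /* park DR6 at its quiescent value */

        return dr6 ^ DR6_RESERVED;      /* 1 now means "signaled" everywhere */
}
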
2020-09-04 | x86/debug: Simplify hw_breakpoint_handler() | Peter Zijlstra | 1 | -6/+2

This is called with interrupts disabled, there's no point in using get_cpu() and per_cpu().

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Remove aout_dump_debugregs() | Peter Zijlstra | 2 | -38/+0

Unused remnants for the bit-bucket.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Remove the historical junk | Peter Zijlstra | 1 | -11/+12

Remove the historical junk and replace it with a WARN and a comment. The problem is that even though the kernel only uses TF single-step in kprobes and KGDB, both of which consume the event before this, QEMU/KVM has bugs in this area that can trigger this state so it has to be dealt with.

Suggested-by: Brian Gerst <[email protected]>
Suggested-by: Andy Lutomirski <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Move cond_local_irq_enable() block into exc_debug_user() | Peter Zijlstra | 1 | -29/+29

The cond_local_irq_enable() block, dealing with vm86 and sending signals, is only relevant for #DB-from-user; move it there. This then reduces handle_debug() to only the notifier call, so rename it to notify_debug().

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Move historical SYSENTER junk into exc_debug_kernel() | Peter Zijlstra | 1 | -24/+25

The historical SYSENTER junk is explicitly for from-kernel, so move it to the #DB-from-kernel handler. It is ordered after the notifier, which is important for KGDB, which uses TF single-step and needs to consume the event before that point.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Simplify #DB signal code | Peter Zijlstra | 1 | -6/+9

There's no point in calculating si_code if it's not going to be used.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Remove handle_debug(.user) argument | Peter Zijlstra | 1 | -11/+10

The handle_debug(.user) argument is used to terminate the #DB handler early for the INT1-from-kernel case, since the kernel doesn't use INT1. Remove the argument and handle this explicitly in #DB-from-kernel.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Acked-by: Andy Lutomirski <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Move kprobe_debug_handler() into exc_debug_kernel() | Peter Zijlstra | 2 | -6/+8

Kprobes are on kernel text, and thus only matter for #DB-from-kernel. Kprobes are ordered before the generic notifier; preserve that order.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Acked-by: Andy Lutomirski <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Sync BTF earlier | Peter Zijlstra | 1 | -7/+7

Move the BTF sync near the DR6 load, as this will be the only common code guaranteed to run on every #DB.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Acked-by: Andy Lutomirski <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/debug: Allow a single level of #DB recursion | Andy Lutomirski | 1 | -34/+31

Trying to clear DR7 around a #DB from usermode malfunctions if the task schedules when delivering SIGTRAP. Rather than trying to define a special no-recursion region, just allow a single level of recursion. The same mechanism is used for NMI, and it hasn't caused any problems yet.

Fixes: 9f58fdde95c9 ("x86/db: Split out dr6/7 handling")
Reported-by: Kyle Huey <[email protected]>
Debugged-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Andy Lutomirski <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/8b9bd05f187231df008d48cf818a6a311cbd5c98.1597882384.git.luto@kernel.org
Link: https://lore.kernel.org/r/[email protected]

2020-09-04 | x86/entry: Fix AC assertion | Peter Zijlstra | 1 | -2/+10

The WARN added in commit 3c73b81a9164 ("x86/entry, selftests: Further improve user entry sanity checks") unconditionally triggers on an IVB machine because it does not support SMAP. For !SMAP hardware the CLAC/STAC instructions are patched out, and thus if userspace sets AC, it is still set after entry.

Fixes: 3c73b81a9164 ("x86/entry, selftests: Further improve user entry sanity checks")
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Daniel Thompson <[email protected]>
Acked-by: Andy Lutomirski <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]

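A sketch of the adjusted assertion, close to but not necessarily verbatim the committed check; the wrapper name is illustrative:

static __always_inline void check_user_regs_ac(struct pt_regs *regs)
{
        if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
                /* Without SMAP, CLAC/STAC are patched out and AC survives. */
                WARN_ON_ONCE(boot_cpu_has(X86_FEATURE_SMAP) &&
                             (regs->flags & X86_EFLAGS_AC));
}
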