path: root/arch/x86
Age         Commit message  (Author, files changed, lines -/+)
2020-09-10  perf/x86/amd/ibs: Don't include randomized bits in get_ibs_op_count()  (Kim Phillips, 1 file, -4/+8)
get_ibs_op_count() adds hardware's current count (IbsOpCurCnt) bits to its count regardless of hardware's valid status. According to the PPR for AMD Family 17h Model 31h B0 55803 Rev 0.54, if the counter rolls over, valid status is set, and the lower 7 bits of IbsOpCurCnt are randomized by hardware. Don't include those bits in the driver's event count. Fixes: 8b1e13638d46 ("perf/x86-ibs: Fix usage of IBS op current count") Signed-off-by: Kim Phillips <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
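Illustration: the fix described above amounts to trusting IbsOpCurCnt only when the valid bit is clear; once the counter has rolled over, the count is derived from MaxCnt alone. A minimal sketch, assuming the IBS_OP_* and IBS_CAPS_* macros of the in-tree IBS driver (the actual patch may differ in detail):

    static u64 get_ibs_op_count(u64 config)
    {
        u64 count = 0;

        /*
         * If the counter rolled over (valid bit set), the lower 7 bits of
         * IbsOpCurCnt are randomized by hardware, so derive the count from
         * MaxCnt only; otherwise CurCnt holds the usable count.
         */
        if (config & IBS_OP_VAL)
            count = (config & IBS_OP_MAX_CNT) << 4;    /* MaxCnt is in units of 16 ops */
        else if (ibs_caps & IBS_CAPS_RDWROPCNT)
            count = (config & IBS_OP_CUR_CNT) >> 32;

        return count;
    }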
2020-09-10  perf/x86/amd: Fix sampling Large Increment per Cycle events  (Kim Phillips, 1 file, -2/+2)
Commit 5738891229a2 ("perf/x86/amd: Add support for Large Increment per Cycle Events") mistakenly zeroes the upper 16 bits of the count in set_period(). That's fine for counting with perf stat, but not sampling with perf record when only Large Increment events are being sampled. To enable sampling, we sign extend the upper 16 bits of the merged counter pair as described in the Family 17h PPRs: "Software wanting to preload a value to a merged counter pair writes the high-order 16-bit value to the low-order 16 bits of the odd counter and then writes the low-order 48-bit value to the even counter. Reading the even counter of the merged counter pair returns the full 64-bit value." Fixes: 5738891229a2 ("perf/x86/amd: Add support for Large Increment per Cycle Events") Signed-off-by: Kim Phillips <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
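The preload sequence quoted from the PPR translates roughly into the sketch below; wrmsrl() and x86_pmu_event_addr() are the usual perf/x86 helpers, and the function name is illustrative only:

    /* Preload a 64-bit period into a merged (even/odd) counter pair. */
    static void amd_preload_merged_ctr(int even_idx, u64 period)
    {
        /* High-order 16 bits go into the low-order 16 bits of the odd counter. */
        wrmsrl(x86_pmu_event_addr(even_idx + 1), (period >> 48) & 0xffffULL);

        /* Low-order 48 bits go into the even counter. */
        wrmsrl(x86_pmu_event_addr(even_idx), period & GENMASK_ULL(47, 0));

        /* Reading back the even counter returns the full 64-bit value. */
    }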
2020-09-10  perf/amd/uncore: Set all slices and threads to restore perf stat -a behaviour  (Kim Phillips, 1 file, -20/+8)
Commit 2f217d58a8a0 ("perf/x86/amd/uncore: Set the thread mask for F17h L3 PMCs") inadvertently changed the uncore driver's behaviour wrt perf tool invocations with or without a CPU list, specified with -C / --cpu=.

Change the behaviour of the driver to assume the former all-cpu (-a) case, which is the more commonly desired default.

This fixes '-a -A' invocations without explicit cpu lists (-C) to not count L3 events only on behalf of the first thread of the first core in the L3 domain.

BEFORE: Activity performed by the first thread of the last core (CPU#43) in CPU#40's L3 domain is not reported by CPU#40:

  sudo perf stat -a -A -e l3_request_g1.caching_l3_cache_accesses taskset -c 43 perf bench mem memcpy -s 32mb -l 100 -f default
  ...
  CPU36        21,835  l3_request_g1.caching_l3_cache_accesses
  CPU40        87,066  l3_request_g1.caching_l3_cache_accesses
  CPU44        17,360  l3_request_g1.caching_l3_cache_accesses
  ...

AFTER: The L3 domain activity is now reported by CPU#40:

  sudo perf stat -a -A -e l3_request_g1.caching_l3_cache_accesses taskset -c 43 perf bench mem memcpy -s 32mb -l 100 -f default
  ...
  CPU36       354,891  l3_request_g1.caching_l3_cache_accesses
  CPU40     1,780,870  l3_request_g1.caching_l3_cache_accesses
  CPU44       315,062  l3_request_g1.caching_l3_cache_accesses
  ...

Fixes: 2f217d58a8a0 ("perf/x86/amd/uncore: Set the thread mask for F17h L3 PMCs")
Signed-off-by: Kim Phillips <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
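In effect the driver now programs the L3 PMC to count for every slice and every thread sharing the L3, instead of only the current one. A sketch, assuming the AMD64_L3_SLICE_MASK / AMD64_L3_THREAD_MASK definitions from asm/perf_event.h (helper name illustrative):

    /* Enable counting on all L3 slices and all threads sharing the L3. */
    static u64 amd_l3_config_all_slices_threads(u64 config)
    {
        return config | AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK;
    }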
2020-09-10  perf/x86/intel/ds: Fix x86_pmu_stop warning for large PEBS  (Kan Liang, 1 file, -12/+20)
A warning as below may be triggered when sampling with large PEBS.

  [  410.411250] perf: interrupt took too long (72145 > 71975), lowering kernel.perf_event_max_sample_rate to 2000
  [  410.724923] ------------[ cut here ]------------
  [  410.729822] WARNING: CPU: 0 PID: 16397 at arch/x86/events/core.c:1422 x86_pmu_stop+0x95/0xa0
  [  410.933811]  x86_pmu_del+0x50/0x150
  [  410.937304]  event_sched_out.isra.0+0xbc/0x210
  [  410.941751]  group_sched_out.part.0+0x53/0xd0
  [  410.946111]  ctx_sched_out+0x193/0x270
  [  410.949862]  __perf_event_task_sched_out+0x32c/0x890
  [  410.954827]  ? set_next_entity+0x98/0x2d0
  [  410.958841]  __schedule+0x592/0x9c0
  [  410.962332]  schedule+0x5f/0xd0
  [  410.965477]  exit_to_usermode_loop+0x73/0x120
  [  410.969837]  prepare_exit_to_usermode+0xcd/0xf0
  [  410.974369]  ret_from_intr+0x2a/0x3a
  [  410.977946] RIP: 0033:0x40123c
  [  411.079661] ---[ end trace bc83adaea7bb664a ]---

In the non-overflow context, e.g., context switch, with large PEBS, perf may stop an event twice. An example is below.

  // max_samples_per_tick is adjusted to 2
  // NMI is triggered
  intel_pmu_handle_irq()
    handle_pmi_common()
      drain_pebs()
        __intel_pmu_pebs_event()
          perf_event_overflow()
            __perf_event_account_interrupt()
              hwc->interrupts = 1
              return 0
  // A context switch happens right after the NMI.
  // In the same tick, the perf_throttled_seq is not changed.
  perf_event_task_sched_out()
    perf_pmu_sched_task()
      intel_pmu_drain_pebs_buffer()
        __intel_pmu_pebs_event()
          perf_event_overflow()
            __perf_event_account_interrupt()
              ++hwc->interrupts >= max_samples_per_tick
              return 1
        x86_pmu_stop();   # First stop
    perf_event_context_sched_out()
      task_ctx_sched_out()
        ctx_sched_out()
          event_sched_out()
            x86_pmu_del()
              x86_pmu_stop();   # Second stop and trigger the warning

Perf should only invoke the perf_event_overflow() in the overflow context.

Current drain_pebs() is called from:
  - handle_pmi_common()           -- overflow context
  - intel_pmu_pebs_sched_task()   -- non-overflow context
  - intel_pmu_pebs_disable()      -- non-overflow context
  - intel_pmu_auto_reload_read()  -- possible overflow context

With PERF_SAMPLE_READ + PERF_FORMAT_GROUP, the function may be invoked in the NMI handler. But, before calling the function, the PEBS buffer has already been drained. The __intel_pmu_pebs_event() will not be called in the possible overflow context.

To fix the issue, an indicator is required to distinguish between the overflow context aka handle_pmi_common() and other cases. The dummy regs pointer can be used as the indicator. In the non-overflow context, perf should treat the last record the same as other PEBS records, and doesn't invoke the generic overflow handler.

Fixes: 21509084f999 ("perf/x86/intel: Handle multiple records in the PEBS buffer")
Reported-by: Like Xu <[email protected]>
Suggested-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Like Xu <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
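The "dummy regs pointer" idea can be pictured as follows: only the PMI path hands real interrupt registers to the PEBS drain code, so the last-record handling can branch on that pointer. A sketch of the shape of the fix (not the exact patch), using the existing perf_event_overflow() / perf_event_output() interfaces; iregs is the interrupt-regs pointer passed down into __intel_pmu_pebs_event():

    /* Handling of the last PEBS record of a drain: */
    if (iregs) {
        /*
         * Overflow context (handle_pmi_common()): account the interrupt
         * and stop the event if the generic handler asks for it.
         */
        if (perf_event_overflow(event, &data, regs))
            x86_pmu_stop(event, 0);
    } else {
        /*
         * Non-overflow context (sched_task/disable): emit the sample like
         * any other PEBS record, without touching hwc->interrupts.
         */
        perf_event_output(event, &data, regs);
    }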
2020-09-10  x86/tsc: Use seqcount_latch_t  (Ahmed S. Darwish, 1 file, -5/+5)
Latch sequence counters have unique read and write APIs, and thus seqcount_latch_t was recently introduced at seqlock.h. Use that new data type instead of plain seqcount_t. This adds the necessary type-safety and ensures that only latching-safe seqcount APIs are to be used. Signed-off-by: Ahmed S. Darwish <[email protected]> [peterz: unwreck cyc2ns_read_begin()] Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
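The read side of the cyc2ns data then follows the standard seqcount_latch_t pattern; roughly as below (structure names follow the existing cyc2ns code in tsc.c, details simplified, and the retry helper's exact name varies between kernel versions):

    static struct {
        seqcount_latch_t   seq;
        struct cyc2ns_data data[2];   /* two copies, selected by seq & 1 */
    } cyc2ns;

    static void cyc2ns_read(struct cyc2ns_data *out)
    {
        unsigned int seq, idx;

        do {
            seq = raw_read_seqcount_latch(&cyc2ns.seq);
            idx = seq & 1;
            *out = cyc2ns.data[idx];
        } while (read_seqcount_latch_retry(&cyc2ns.seq, seq));
    }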
2020-09-09  x86/sev-es: Handle NMI State  (Joerg Roedel, 4 files, -0/+32)
When running under SEV-ES, the kernel has to tell the hypervisor when to open the NMI window again after an NMI was injected. This is done with an NMI-complete message to the hypervisor. Add code to the kernel's NMI handler to send this message right at the beginning of do_nmi(). This always allows nesting NMIs. [ bp: Mark __sev_es_nmi_complete() noinstr: vmlinux.o: warning: objtool: exc_nmi()+0x17: call to __sev_es_nmi_complete() leaves .noinstr.text section While at it, use __pa_nodebug() for the same reason due to CONFIG_DEBUG_VIRTUAL=y: vmlinux.o: warning: objtool: __sev_es_nmi_complete()+0xd9: call to __phys_addr() leaves .noinstr.text section ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Support CPU offline/online  (Joerg Roedel, 2 files, -0/+64)
Add a play_dead handler when running under SEV-ES. This is needed because the hypervisor can't deliver an SIPI request to restart the AP. Instead, the kernel has to issue a VMGEXIT to halt the VCPU until the hypervisor wakes it up again. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/head/64: Don't call verify_cpu() on starting APs  (Joerg Roedel, 3 files, -0/+19)
The APs are not ready to handle exceptions when verify_cpu() is called in secondary_startup_64(). Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/smpboot: Load TSS and getcpu GDT entry before loading IDT  (Joerg Roedel, 3 files, -1/+25)
The IDT on 64-bit contains vectors which use paranoid_entry() and/or IST stacks. To make these vectors work, the TSS and the getcpu GDT entry need to be set up before the IDT is loaded. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/realmode: Setup AP jump table  (Tom Lendacky, 4 files, -2/+93)
As part of the GHCB specification, the booting of APs under SEV-ES requires an AP jump table when transitioning from one layer of code to another (e.g. when going from UEFI to the OS). As a result, each layer that parks an AP must provide the physical address of an AP jump table to the next layer via the hypervisor. Upon booting of the kernel, read the AP jump table address from the hypervisor. Under SEV-ES, APs are started using the INIT-SIPI-SIPI sequence. Before issuing the first SIPI request for an AP, the start CS and IP is programmed into the AP jump table. Upon issuing the SIPI request, the AP will awaken and jump to that start CS:IP address. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: - Adapted to different code base - Moved AP table setup from SIPI sending path to real-mode setup code - Fix sparse warnings ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/realmode: Add SEV-ES specific trampoline entry point  (Joerg Roedel, 3 files, -0/+26)
The code at the trampoline entry point is executed in real-mode. In real-mode, #VC exceptions can't be handled so anything that might cause such an exception must be avoided. In the standard trampoline entry code this is the WBINVD instruction and the call to verify_cpu(), which are both not needed anyway when running as an SEV-ES guest. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/vmware: Add VMware-specific handling for VMMCALL under SEV-ES  (Doug Covelli, 1 file, -5/+45)
Add VMware-specific handling for #VC faults caused by VMMCALL instructions. Signed-off-by: Doug Covelli <[email protected]> Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: - Adapt to different paravirt interface ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/kvm: Add KVM-specific VMMCALL handling under SEV-ES  (Tom Lendacky, 1 file, -6/+29)
Implement the callbacks to copy the processor state required by KVM to the GHCB. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: - Split out of a larger patch - Adapt to different callback functions ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/paravirt: Allow hypervisor-specific VMMCALL handling under SEV-ES  (Joerg Roedel, 2 files, -1/+27)
Add two new paravirt callbacks to provide hypervisor-specific processor state in the GHCB and to copy state from the hypervisor back to the processor. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle #DB Events  (Joerg Roedel, 1 file, -0/+17)
Handle #VC exceptions caused by #DB exceptions in the guest. Those must be handled outside of instrumentation_begin()/end() so that the handler will not be raised recursively. Handle them by calling the kernel's debug exception handler. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle #AC Events  (Joerg Roedel, 1 file, -0/+19)
Implement a handler for #VC exceptions caused by #AC exceptions. The #AC exception is just forwarded to do_alignment_check() and not pushed down to the hypervisor, as requested by the SEV-ES GHCB Standardization Specification. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle VMMCALL Events  (Tom Lendacky, 1 file, -0/+23)
Implement a handler for #VC exceptions caused by VMMCALL instructions. This is only a starting point, VMMCALL emulation under SEV-ES needs further hypervisor-specific changes to provide additional state. [ bp: Drop "this patch". ] Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: Adapt to #VC handling infrastructure ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle MWAIT/MWAITX Events  (Tom Lendacky, 1 file, -0/+10)
Implement a handler for #VC exceptions caused by MWAIT and MWAITX instructions. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: Adapt to #VC handling infrastructure ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle MONITOR/MONITORX Events  (Tom Lendacky, 1 file, -0/+13)
Implement a handler for #VC exceptions caused by MONITOR and MONITORX instructions. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: Adapt to #VC handling infrastructure ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle INVD Events  (Tom Lendacky, 1 file, -0/+4)
Implement a handler for #VC exceptions caused by INVD instructions. Since Linux should never use INVD, just mark it as unsupported. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: Adapt to #VC handling infrastructure ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle RDPMC Events  (Tom Lendacky, 1 file, -0/+22)
Implement a handler for #VC exceptions caused by RDPMC instructions. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: Adapt to #VC handling infrastructure ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle RDTSC(P) Events  (Tom Lendacky, 3 files, -0/+31)
Implement a handler for #VC exceptions caused by RDTSC and RDTSCP instructions. Also make it available in the pre-decompression stage because the KASLR code uses RDTSC/RDTSCP to gather entropy and some hypervisors intercept these instructions. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: - Adapt to #VC handling infrastructure - Make it available early ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle WBINVD Events  (Tom Lendacky, 1 file, -0/+9)
Implement a handler for #VC exceptions caused by WBINVD instructions. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: Adapt to #VC handling framework ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle DR7 read/write events  (Tom Lendacky, 1 file, -0/+85)
Add code to handle #VC exceptions on DR7 register reads and writes. This is needed early because show_regs() reads DR7 to print it out. Under SEV-ES, there is currently no support for saving/restoring the DRx registers but software expects to be able to write to the DR7 register. For now, cache the value written to DR7 and return it on read attempts, but do not touch the real hardware DR7. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: - Adapt to #VC handling framework - Support early usage ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
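A minimal sketch of the caching idea (the reset value macro, per-CPU variable and helper names here are illustrative, not the patch's):

    #define DR7_RESET_VALUE  0x400UL   /* architectural reset value of DR7 */

    static DEFINE_PER_CPU(unsigned long, cached_dr7) = DR7_RESET_VALUE;

    /* #VC intercept for "mov %reg, %dr7": remember the value, skip real HW. */
    static void vc_dr7_write(unsigned long val)
    {
        this_cpu_write(cached_dr7, val);
    }

    /* #VC intercept for "mov %dr7, %reg": hand back what was last written. */
    static unsigned long vc_dr7_read(void)
    {
        return this_cpu_read(cached_dr7);
    }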
2020-09-09  x86/sev-es: Handle MSR events  (Tom Lendacky, 1 file, -0/+28)
Implement a handler for #VC exceptions caused by RDMSR/WRMSR instructions. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: Adapt to #VC handling infrastructure. ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
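The GHCB protocol for these exits is: put the MSR number (and, for a write, the value) into the GHCB, request the SVM_EXIT_MSR exit via VMGEXIT, then copy the result registers back. A rough sketch, assuming the GHCB accessor helpers and the sev_es_ghcb_hv_call() wrapper used throughout this SEV-ES series:

    static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
                                        bool write)
    {
        struct pt_regs *regs = ctxt->regs;
        enum es_result ret;

        ghcb_set_rcx(ghcb, regs->cx);          /* MSR number */
        if (write) {
            ghcb_set_rax(ghcb, regs->ax);      /* low 32 bits of the value */
            ghcb_set_rdx(ghcb, regs->dx);      /* high 32 bits of the value */
        }

        /* exit_info_1: 0 = RDMSR, 1 = WRMSR */
        ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_MSR, write, 0);

        if (ret == ES_OK && !write) {
            regs->ax = ghcb->save.rax;
            regs->dx = ghcb->save.rdx;
        }

        return ret;
    }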
2020-09-09  x86/sev-es: Handle MMIO String Instructions  (Joerg Roedel, 1 file, -0/+77)
Add handling for emulation of the MOVS instruction on MMIO regions, as done by the memcpy_toio() and memcpy_fromio() functions. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle MMIO events  (Tom Lendacky, 2 files, -0/+227)
Add a handler for #VC exceptions caused by MMIO intercepts. These intercepts come along as nested page faults on pages with reserved bits set. Signed-off-by: Tom Lendacky <[email protected]> [ [email protected]: Adapt to VC handling framework ] Co-developed-by: Joerg Roedel <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Handle instruction fetches from user-space  (Joerg Roedel, 1 file, -9/+22)
When a #VC exception is triggered by user-space, the instruction decoder needs to read the instruction bytes from user addresses. Enhance vc_decode_insn() to safely fetch kernel and user instructions. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Wire up existing #VC exit-code handlers  (Joerg Roedel, 2 files, -4/+9)
Re-use the handlers for CPUID- and IOIO-caused #VC exceptions in the early boot handler. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Add a Runtime #VC Exception Handler  (Tom Lendacky, 3 files, -8/+255)
Add the handlers for #VC exceptions invoked at runtime. Signed-off-by: Tom Lendacky <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/entry/64: Add entry code for #VC handler  (Joerg Roedel, 5 files, -0/+171)
The #VC handler needs special entry code because:

  1. It runs on an IST stack
  2. It needs to be able to handle nested #VC exceptions

To make this work, the entry code is implemented to pretend it doesn't use an IST stack. When entered from user-mode or early SYSCALL entry path it switches to the task stack. If entered from kernel-mode it tries to switch back to the previous stack in the IRET frame.

The stack found in the IRET frame is validated first, and if it is not safe to use it for the #VC handler, the code will switch to a fall-back stack (the #VC2 IST stack). From there, it can cause nested exceptions again.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/dumpstack/64: Add noinstr version of get_stack_info()  (Joerg Roedel, 4 files, -20/+30)
The get_stack_info() functionality is needed in the entry code for the #VC exception handler. Provide a version of it in the .text.noinstr section which can be called safely from there. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Adjust #VC IST Stack on entering NMI handler  (Joerg Roedel, 3 files, -0/+81)
When an NMI hits in the #VC handler entry code before it has switched to another stack, any subsequent #VC exception in the NMI code-path will overwrite the interrupted #VC handler's stack. Make sure this doesn't happen by explicitly adjusting the #VC IST entry in the NMI handler for the time it can cause #VC exceptions. [ bp: Touchups, spelling fixes. ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Allocate and map an IST stack for #VC handler  (Joerg Roedel, 5 files, -14/+63)
Allocate and map an IST stack and an additional fall-back stack for the #VC handler. The memory for the stacks is allocated only when SEV-ES is active. The #VC handler needs to use an IST stack because a #VC exception can be raised from kernel space with unsafe stack, e.g. in the SYSCALL entry path. Since the #VC exception can be nested, the #VC handler switches back to the interrupted stack when entered from kernel space. If switching back is not possible, the fall-back stack is used. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Setup per-CPU GHCBs for the runtime handler  (Tom Lendacky, 3 files, -1/+60)
The runtime handler needs one GHCB per-CPU. Set them up and map them unencrypted. [ bp: Touchups and simplification. ] Signed-off-by: Tom Lendacky <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Setup GHCB-based boot #VC handler  (Joerg Roedel, 9 files, -8/+176)
Add the infrastructure to handle #VC exceptions when the kernel runs on virtual addresses and has mapped a GHCB. This handler will be used until the runtime #VC handler takes over. Since the handler runs very early, disable instrumentation for sev-es.c. [ bp: Make vc_ghcb_invalidate() __always_inline so that it can be inlined in noinstr functions like __sev_es_nmi_complete(). ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Setup an early #VC handler  (Joerg Roedel, 3 files, -1/+57)
Setup an early handler for #VC exceptions. There is no GHCB mapped yet, so just re-use the vc_no_ghcb_handler(). It can only handle CPUID exit-codes, but that should be enough to get the kernel through verify_cpu() and __startup_64() until it runs on virtual addresses. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> [ boot failure Error: kernel_ident_mapping_init() failed. ] Reported-by: kernel test robot <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/sev-es: Compile early handler code into kernel image  (Joerg Roedel, 3 files, -10/+175)
Setup sev-es.c and include the code from the pre-decompression stage to also build it into the image of the running kernel. Temporarily add __maybe_unused annotations to avoid build warnings until the functions get used. [ bp: Use the non-tracing rd/wrmsr variants because: vmlinux.o: warning: objtool: __sev_es_nmi_complete()+0x11f: \ call to do_trace_write_msr() leaves .noinstr.text section as __sev_es_nmi_complete() is noinstr due to being called from the NMI handler exc_nmi(). ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-09  x86/defconfigs: Explicitly unset CONFIG_64BIT in i386_defconfig  (Daniel Díaz, 1 file, -0/+1)
A recent refresh of the defconfigs got rid of the following (unset) config:

  # CONFIG_64BIT is not set

Innocuous as it seems, when the config file is saved again the behavior is changed so that CONFIG_64BIT=y. Currently,

  $ make i386_defconfig
  $ grep CONFIG_64BIT .config
  CONFIG_64BIT=y

whereas previously (and with this patch):

  $ make i386_defconfig
  $ grep CONFIG_64BIT .config
  # CONFIG_64BIT is not set

( This was found with weird compiler errors on OpenEmbedded builds, as the compiler was unable to cope with 64-bits data types. )

Fixes: 1d0e12fd3a84 ("x86/defconfigs: Refresh defconfig files")
Reported-by: Jarkko Nikula <[email protected]>
Reported-by: Andy Shevchenko <[email protected]>
Tested-by: Sedat Dilek <[email protected]>
Signed-off-by: Daniel Díaz <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
2020-09-08  x86: remove address space overrides using set_fs()  (Christoph Hellwig, 8 files, -77/+39)
Stop providing the possibility to override the address space using set_fs() now that there is no need for that any more. To properly handle the TASK_SIZE_MAX checking for 4 vs 5-level page tables on x86 a new alternative is introduced, which just like the one in entry_64.S has to use the hardcoded virtual address bits to escape the fact that TASK_SIZE_MAX isn't actually a constant when 5-level page tables are enabled. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-09-08  x86: make TASK_SIZE_MAX usable from assembly code  (Christoph Hellwig, 2 files, -3/+3)
For 64-bit the only thing missing was a strategic _AC, and for 32-bit we need to use __PAGE_OFFSET instead of PAGE_OFFSET in the TASK_SIZE definition to escape the explicit unsigned long cast. This just works because __PAGE_OFFSET is defined using _AC itself and thus never needs the cast anyway. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Al Viro <[email protected]>
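Concretely, the 64-bit definition only needs its UL suffix expressed through _AC() so the same line also assembles; a simplified sketch of the resulting definitions (the 32-bit TASK_SIZE expression is abbreviated here):

    /* 64-bit: _AC(1, UL) expands to 1UL in C and to plain 1 in assembly. */
    #define TASK_SIZE_MAX  ((_AC(1, UL) << __VIRTUAL_MASK_SHIFT) - PAGE_SIZE)

    /* 32-bit: __PAGE_OFFSET is already defined via _AC(), so no cast is needed. */
    #define TASK_SIZE      __PAGE_OFFSET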
2020-09-08  x86: move PAGE_OFFSET, TASK_SIZE & friends to page_{32,64}_types.h  (Christoph Hellwig, 3 files, -49/+49)
At least for 64-bit this moves them closer to some of the defines they are based on, and it prepares for using the TASK_SIZE_MAX definition from assembly. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-09-08  uaccess: add infrastructure for kernel builds with set_fs()  (Christoph Hellwig, 1 file, -0/+1)
Add a CONFIG_SET_FS option that is selected by architectures that implement set_fs, which is all of them initially. If the option is not set, stubs for routines related to overriding the address space are provided so that architectures can start to opt out of providing set_fs. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Kees Cook <[email protected]> Signed-off-by: Al Viro <[email protected]>
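For an architecture that does not select CONFIG_SET_FS, the generic header can then provide no-op versions of the address-space override helpers; a sketch of what such a stub looks like (the exact set of stubbed routines is an assumption here):

    #ifndef CONFIG_SET_FS
    /* Without set_fs(), user accesses can never be redirected to kernel memory. */
    static inline bool uaccess_kernel(void)
    {
        return false;
    }
    #endif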
2020-09-08  x86/kprobes: Use generic kretprobe trampoline handler  (Masami Hiramatsu, 1 file, -105/+3)
Use the generic kretprobe trampoline handler. Use regs->sp for framepointer verification. [ mingo: Minor edits. ] Signed-off-by: Masami Hiramatsu <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/159870601250.1229682.14598707734683575237.stgit@devnote2
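With the generic handler, the C part of the arch trampoline collapses to roughly a single call; a sketch of the shape (the generic helper takes the trampoline address and a frame pointer, for which x86 passes &regs->sp):

    __used __visible void *trampoline_handler(struct pt_regs *regs)
    {
        /*
         * Let the generic kretprobe code find and run the matching
         * return-probe instances; regs->sp serves as the frame pointer
         * used to verify them.
         */
        return (void *)kretprobe_trampoline_handler(regs, &kretprobe_trampoline,
                                                    &regs->sp);
    }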
2020-09-08  x86/sev-es: Print SEV-ES info into the kernel log  (Joerg Roedel, 1 file, -3/+26)
Refactor the message printed to the kernel log which indicates whether SEV or SME, etc is active. This will scale better in the future when more memory encryption features might be added. Also add SEV-ES to the list of features. [ bp: Massage. ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
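The resulting print-out can be built as a simple feature list, roughly along these lines (function name illustrative):

    static void print_mem_encrypt_feature_info(void)
    {
        pr_info("AMD Memory Encryption Features active:");

        if (sme_active())
            pr_cont(" SME");
        if (sev_active())
            pr_cont(" SEV");
        if (sev_es_active())
            pr_cont(" SEV-ES");

        pr_cont("\n");
    }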
2020-09-07  x86/sev-es: Add SEV-ES Feature Detection  (Joerg Roedel, 4 files, -1/+16)
Add a sev_es_active() function for checking whether SEV-ES is enabled. Also cache the value of MSR_AMD64_SEV at boot to speed up the feature checking in the running code. [ bp: Remove "!!" in sev_active() too. ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
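A sketch of the cached status word and the detection helpers (MSR and bit names as in asm/msr-index.h; where the one-time MSR read happens is an assumption):

    static u64 sev_status __ro_after_init;

    /* Read MSR_AMD64_SEV once during early boot, e.g. from the SME setup path. */
    static void __init sev_cache_status(void)
    {
        rdmsrl(MSR_AMD64_SEV, sev_status);
    }

    bool sev_active(void)
    {
        return sev_status & MSR_AMD64_SEV_ENABLED;
    }

    bool sev_es_active(void)
    {
        return sev_status & MSR_AMD64_SEV_ES_ENABLED;
    }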
2020-09-07  x86/head/64: Move early exception dispatch to C code  (Joerg Roedel, 4 files, -16/+20)
Move the assembly coded dispatch between page-faults and all other exceptions to C code to make it easier to maintain and extend. Also change the return-type of early_make_pgtable() to bool and make it static. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
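The C-side dispatcher then only has to separate page-faults (which may be fixed up by building another identity mapping) from everything else; a sketch of that shape:

    void __init do_early_exception(struct pt_regs *regs, int trapnr)
    {
        /* Early page-faults may just need another identity-mapped page table. */
        if (trapnr == X86_TRAP_PF &&
            early_make_pgtable(native_read_cr2()))
            return;

        /* Anything else goes through the regular early fixup path. */
        early_fixup_exception(regs, trapnr);
    }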
2020-09-07  x86/idt: Make IDT init functions static inlines  (Joerg Roedel, 3 files, -34/+34)
Move these two functions from kernel/idt.c to include/asm/desc.h:

  * init_idt_data()
  * idt_init_desc()

These functions are needed to setup IDT entries very early and need to be called from head64.c. To be usable this early, these functions need to be compiled without instrumentation and the stack-protector feature.

These features need to be kept enabled for kernel/idt.c, so head64.c must use its own versions.

[ bp: Take Kees' suggested patch title and add his Rev-by. ]

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/head/64: Install a CPU bringup IDT  (Joerg Roedel, 3 files, -0/+45)
Add a separate bringup IDT for the CPU bringup code that will be used until the kernel switches to the idt_table. There are two reasons for a separate IDT:

  1) When the idt_table is set up and the secondary CPUs are booted, it contains entries (e.g. IST entries) which require certain CPU state to be set up. This includes a working TSS (for IST), MSR_GS_BASE (for stack protector) or CR4.FSGSBASE (for the paranoid_entry path). By using a dedicated IDT for early boot this state need not to be set up early.

  2) The idt_table is static to idt.c, so any function using/modifying must be in idt.c too. That means that all compiler driven instrumentation like tracing or KASAN is also active in this code. But during early CPU bringup the environment is not set up for this instrumentation to work correctly.

To avoid all of these hassles and make early exception handling robust, use a dedicated bringup IDT.

The IDT is loaded two times, first on the boot CPU while the kernel is still running on direct mapped addresses, and again later after the switch to kernel addresses has happened. The second IDT load happens on the boot and secondary CPUs.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/head/64: Switch to initial stack earlier  (Joerg Roedel, 1 file, -3/+6)
Make sure there is a stack once the kernel runs from virtual addresses. At this stage any secondary CPU which boots will have lost its stack because the kernel switched to a new page-table which does not map the real-mode stack anymore. This is needed for handling early #VC exceptions caused by instructions like CPUID. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]