path: root/arch/x86
Age | Commit message | Author | Files | Lines
2024-08-06 | x86/traps: Enable UBSAN traps on x86 | Gatlin Newhouse | 2 | -5/+66
Currently ARM64 extracts which specific sanitizer has caused a trap via encoded data in the trap instruction. Clang on x86 currently encodes the same data in the UD1 instruction but x86 handle_bug() and is_valid_bugaddr() currently only look at UD2. Bring x86 to parity with ARM64, similar to commit 25b84002afb9 ("arm64: Support Clang UBSAN trap codes for better reporting"). See the llvm links for information about the code generation. Enable the reporting of UBSAN sanitizer details on x86 compiled with clang when CONFIG_UBSAN_TRAP=y by analysing UD1 and retrieving the type immediate which is encoded by the compiler after the UD1. [ tglx: Simplified it by moving the printk() into handle_bug() ] Signed-off-by: Gatlin Newhouse <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: Kees Cook <[email protected]> Link: https://lore.kernel.org/all/[email protected] Link: https://github.com/llvm/llvm-project/commit/c5978f42ec8e9#diff-bb68d7cd885f41cfc35843998b0f9f534adb60b415f647109e597ce448e92d9f Link: https://github.com/llvm/llvm-project/blob/main/llvm/lib/Target/X86/X86InstrSystem.td#L27
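As an illustration of the decode step described above, here is a minimal user-space sketch. It is not the kernel's handle_bug() code: the UD1 operand form assumed here (ModRM 0x40 plus a one-byte type immediate) is only one of the encodings a compiler may emit, and the function names are made up.

#include <stdint.h>
#include <stdio.h>

/* Standard x86 encodings: UD2 = 0F 0B, UD1 = 0F B9 /r. */
static int sketch_decode_bug(const uint8_t *addr, uint8_t *ubsan_type)
{
	if (addr[0] != 0x0f)
		return 0;

	if (addr[1] == 0x0b)		/* UD2: plain BUG, no type data */
		return 2;

	if (addr[1] == 0xb9) {		/* UD1: UBSAN trap */
		/* Assumed operand form: ModRM 0x40 followed by a disp8 that
		 * holds the UBSAN type immediate. Real decoding must handle
		 * the other ModRM/displacement forms as well. */
		*ubsan_type = addr[3];
		return 4;
	}

	return 0;
}

int main(void)
{
	const uint8_t insn[] = { 0x0f, 0xb9, 0x40, 0x03 };	/* hypothetical UD1 */
	uint8_t type = 0;

	if (sketch_decode_bug(insn, &type) == 4)
		printf("UBSAN trap, type immediate %#x\n", type);
	return 0;
}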
2024-08-05 | perf/x86/intel/bts: Fix comment about default perf_event_paranoid setting | James Clark | 1 | -3/+0
The default paranoid setting was updated in commit 0161028b7c8a ("perf/core: Change the default paranoia level to 2") so this comment is no longer true. Signed-off-by: James Clark <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2024-08-05 | perf/x86/intel/uncore: Use D0:F0 as a default device | Zhenyu Wang | 1 | -0/+4
Some uncore PMON registers are located in the MMIO space of the Host Bridge and DRAM Controller device, which is located at D0:F0 for Tiger Lake and later client generations. Use D0:F0 as a default device, so there is no need to keep adding the complete Device ID list for each generation anymore. Signed-off-by: Zhenyu Wang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Kan Liang <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-08-05 | perf/x86/intel/uncore: Add LNL uncore iMC freerunning support | Zhenyu Wang | 1 | -0/+1
The LNL uncore iMC freerunning counters are kept the same as on previous hardware. Signed-off-by: Zhenyu Wang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Kan Liang <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-08-05 | perf/x86/intel/uncore: Add Lunar Lake support | Kan Liang | 3 | -0/+141
The uncore subsystem for Lunar Lake is similar to the previous Meteor Lake. The uncore PerfMon registers are located at both MSR and MMIO space. The ARB and iMC are kept; there is no difference from the Meteor Lake. Move the global control initialization to the first box of the CBOX. The sNCU is moved to the MMIO space. The HBO is newly added and can only be accessed from the MMIO space. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-08-05 | perf/x86/intel/uncore: Factor out common MMIO init and ops functions | Kan Liang | 1 | -17/+30
Some uncore PMON registers are located in the MMIO space. For the client machine, the MMIO space is usually located at D0:F0 but in a different BAR. For example, some uncore PMON registers are located in the SAF BAR, not the MCHBAR, in the Lunar Lake. The current __uncore_imc_init_box() hard codes the BAR information. Factor out uncore_get_box_mmio_addr(), which takes the BAR information as a parameter. The only change is the error output message: the hardcoded name 'MCHBAR' is replaced by the offset of a BAR. Add a new macro, MMIO_UNCORE_COMMON_OPS(), since the MMIO ops functions are usually the same among different generations. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-08-05 | perf/x86/intel/uncore: Add Arrow Lake support | Kan Liang | 1 | -0/+3
From the perspective of the uncore PMU, the Arrow Lake is the same as the previous Meteor Lake. The only difference is the event list, which will be supported in the perf tool later. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-08-05 | x86/mm/ident_map: Use gbpages only where full GB page should be mapped. | Steve Wahl | 1 | -5/+18
When ident_pud_init() uses only GB pages to create identity maps, large ranges of addresses not actually requested can be included in the resulting table; a 4K request will map a full GB. This can include a lot of extra address space past that requested, including areas marked reserved by the BIOS. That allows processor speculation into reserved regions, which on UV systems can cause system halts. Only use GB pages when map creation requests include the full GB page of space. Fall back to using smaller 2M pages when only portions of a GB page are included in the request. No attempt is made to coalesce mapping requests. If a request requires a map entry at the 2M (pmd) level, subsequent mapping requests within the same 1G region will also be at the pmd level, even if adjacent or overlapping such requests could have been combined to map a full GB page. Existing usage starts with larger regions and then adds smaller regions, so this should not have any great consequence. Signed-off-by: Steve Wahl <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Pavin Joseph <[email protected]> Tested-by: Sarah Brofeldt <[email protected]> Tested-by: Eric Hagberg <[email protected]> Link: https://lore.kernel.org/all/[email protected]
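A hedged sketch of the selection rule described above; the function below is illustrative only and is not the actual ident_pud_init() code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PUD_SIZE	(1ULL << 30)	/* 1 GB */

/* Use a gbpage for a given 1 GB slot only when the requested range covers
 * the whole slot; otherwise the caller falls back to 2 MB mappings so that
 * reserved areas adjacent to the request are not mapped by accident. */
static bool gbpage_fully_covered(uint64_t req_start, uint64_t req_end,
				 uint64_t slot_start)
{
	return req_start <= slot_start && req_end >= slot_start + PUD_SIZE;
}

int main(void)
{
	/* A 4K request inside a 1 GB slot must not be mapped with a gbpage. */
	printf("%d\n", gbpage_fully_covered(0x40000000ULL, 0x40001000ULL,
					    0x40000000ULL));	/* prints 0 */
	/* A request spanning the whole slot may use one. */
	printf("%d\n", gbpage_fully_covered(0x40000000ULL, 0x80000000ULL,
					    0x40000000ULL));	/* prints 1 */
	return 0;
}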
2024-08-05 | x86/kexec: Add EFI config table identity mapping for kexec kernel | Tao Liu | 1 | -0/+27
A kexec kernel boot failure is sometimes observed on AMD CPUs due to an unmapped EFI config table array. This can be seen when "nogbpages" is on the kernel command line, and has been observed as a full BIOS reboot rather than a successful kexec. This was also the cause of reported regressions attributed to Commit 7143c5f4cf20 ("x86/mm/ident_map: Use gbpages only where full GB page should be mapped.") which was subsequently reverted. To avoid this page fault, explicitly include the EFI config table array in the kexec identity map. Further explanation: The following 2 commits caused the EFI config table array to be accessed when enabling SEV at kernel startup. commit ec1c66af3a30 ("x86/compressed/64: Detect/setup SEV/SME features earlier during boot") commit c01fce9cef84 ("x86/compressed: Add SEV-SNP feature detection/setup") This is in the code that examines whether SEV should be enabled or not, so it can even affect systems that are not SEV capable. This may result in a page fault if the EFI config table array's address is unmapped. Since the page fault occurs before the new kernel establishes its own identity map and page fault routines, it is unrecoverable and kexec fails. Most often, this problem is not seen because the EFI config table array gets included in the map by the luck of being placed at a memory address close enough to other memory areas that *are* included in the map created by kexec. Both the "nogbpages" command line option and the "use gbpages only where full GB page should be mapped" change greatly reduce the chance of being included in the map by luck, which is why the problem appears. Signed-off-by: Tao Liu <[email protected]> Signed-off-by: Steve Wahl <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Pavin Joseph <[email protected]> Tested-by: Sarah Brofeldt <[email protected]> Tested-by: Eric Hagberg <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/all/[email protected]
2024-08-04 | Merge tag 'x86-urgent-2024-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 8 | -17/+36
Pull x86 fixes from Thomas Gleixner:

 - Prevent a deadlock on cpu_hotplug_lock in the aperf/mperf driver.

   A recent change in the ACPI code which consolidated code paths moved
   the invocation of init_freq_invariance_cppc() into a CPU hotplug
   handler. The first invocation on AMD CPUs ends up enabling a static
   branch which deadlocks, because the static branch enable tries to
   acquire cpu_hotplug_lock while that lock is already held for write by
   the hotplug machinery.

   Use static_branch_enable_cpuslocked() instead and take the hotplug
   lock for read in the Intel code path, which is invoked from the
   architecture code outside of the CPU hotplug operations.

 - Fix the number of reserved bits in the sev_config structure bit field
   so that the bitfield does not exceed 64 bits.

 - Add missing Zen5 model numbers.

 - Fix the alignment assumptions of pti_clone_pgtable() and
   clone_entry_text() on 32-bit:

   The code assumes PMD-aligned code sections, but on 32-bit the kernel
   entry text is not PMD aligned. So depending on the code size and
   location, which is configuration and compiler dependent, entry text
   can cross a PMD boundary. As the start is not PMD aligned, adding PMD
   size to the start address produces an address larger than the end
   address, which results in partially mapped entry code for user space.
   That causes endless recursion on the first entry from userspace
   (usually #PF).

   Cure this by aligning the start address in the addition so it ends up
   at the next PMD start address.

   clone_entry_text() enforces PMD mapping, but on 32-bit the tail might
   eventually be PTE mapped, which causes a map fail because the PMD for
   the tail is not a large page mapping. Use PTI_LEVEL_KERNEL_IMAGE for
   the clone() invocation, which resolves to PTE on 32-bit and PMD on
   64-bit.

 - Zero the 8-byte case for get_user() on range check failure on 32-bit.

   The recent consolidation of the 8-byte get_user() case broke the
   zeroing in the failure case again. Establish it by clearing ECX
   before the range check and not afterwards, as that obviously can't be
   reached when the range check fails.

* tag 'x86-urgent-2024-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/uaccess: Zero the 8-byte get_range case on failure on 32-bit
  x86/mm: Fix pti_clone_entry_text() for i386
  x86/mm: Fix pti_clone_pgtable() alignment assumption
  x86/setup: Parse the builtin command line before merging
  x86/CPU/AMD: Add models 0x60-0x6f to the Zen5 range
  x86/sev: Fix __reserved field in sev_config
  x86/aperfmperf: Fix deadlock on cpu_hotplug_lock
2024-08-04 | Merge tag 'perf-urgent-2024-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2 | -12/+15
Pull x86 perf fixes from Thomas Gleixner:

 - Move the smp_processor_id() invocation back into the non-preemptible
   region, so that the result is valid to use

 - Add the missing package C2 residency counters for Sierra Forest CPUs
   to make the newly added support actually useful

* tag 'perf-urgent-2024-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86: Fix smp_processor_id()-in-preemptible warnings
  perf/x86/intel/cstate: Add pkg C2 residency counter for Sierra Forest
2024-08-02 | x86/hyperv: Set X86_FEATURE_TSC_KNOWN_FREQ when Hyper-V provides frequency | Michael Kelley | 1 | -0/+1
A Linux guest on Hyper-V gets the TSC frequency from a synthetic MSR, if available. In this case, set X86_FEATURE_TSC_KNOWN_FREQ so that Linux doesn't unnecessarily do refined TSC calibration when setting up the TSC clocksource. With this change, a message such as this is no longer output during boot when the TSC is used as the clocksource: [ 1.115141] tsc: Refined TSC clocksource calibration: 2918.408 MHz Furthermore, the guest and host will have exactly the same view of the TSC frequency, which is important for features such as the TSC deadline timer that are emulated by the Hyper-V host. Signed-off-by: Michael Kelley <[email protected]> Reviewed-by: Roman Kisel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Wei Liu <[email protected]> Message-ID: <[email protected]>
2024-08-02 | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm | Linus Torvalds | 7 | -20/+24
Pull kvm updates from Paolo Bonzini:
 "The bulk of the changes here is a largish change to guest_memfd,
  delaying the clearing and encryption of guest-private pages until they
  are actually added to guest page tables. This started as "let's make
  it impossible to misuse the API" for SEV-SNP; but then it ballooned a
  bit.

  The new logic is generally simpler and more ready for hugepage support
  in guest_memfd.

  Summary:

   - fix latent bug in how usage of large pages is determined for
     confidential VMs

   - fix "underline too short" in docs

   - eliminate log spam from limited APIC timer periods

   - disallow pre-faulting of memory before SEV-SNP VMs are initialized

   - delay clearing and encrypting private memory until it is added to
     guest page tables

   - this change also enables another small cleanup: the checks in
     SNP_LAUNCH_UPDATE that limit it to non-populated, private pages can
     now be moved in the common kvm_gmem_populate() function

   - fix compilation error that the RISC-V merge introduced in selftests"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: x86/mmu: fix determination of max NPT mapping level for private pages
  KVM: riscv: selftests: Fix compile error
  KVM: guest_memfd: abstract how prepared folios are recorded
  KVM: guest_memfd: let kvm_gmem_populate() operate only on private gfns
  KVM: extend kvm_range_has_memory_attributes() to check subset of attributes
  KVM: cleanup and add shortcuts to kvm_range_has_memory_attributes()
  KVM: guest_memfd: move check for already-populated page to common code
  KVM: remove kvm_arch_gmem_prepare_needed()
  KVM: guest_memfd: make kvm_gmem_prepare_folio() operate on a single struct kvm
  KVM: guest_memfd: delay kvm_gmem_prepare_folio() until the memory is passed to the guest
  KVM: guest_memfd: return locked folio from __kvm_gmem_get_pfn
  KVM: rename CONFIG_HAVE_KVM_GMEM_* to CONFIG_HAVE_KVM_ARCH_GMEM_*
  KVM: guest_memfd: do not go through struct page
  KVM: guest_memfd: delay folio_mark_uptodate() until after successful preparation
  KVM: guest_memfd: return folio from __kvm_gmem_get_pfn()
  KVM: x86: disallow pre-fault for SNP VMs before initialization
  KVM: Documentation: Fix title underline too short warning
  KVM: x86: Eliminate log spam from limited APIC timer periods
2024-08-02 | x86/tsc: Check for sockets instead of CPUs to make code match comment | Paul E. McKenney | 1 | -1/+1
The unsynchronized_tsc() eventually checks num_possible_cpus(), and if the system is non-Intel and the number of possible CPUs is greater than one, assumes that TSCs are unsynchronized. This despite the comment saying "assume multi socket systems are not synchronized", that is, socket rather than CPU. This behavior was preserved by commit 8fbbc4b45ce3 ("x86: merge tsc_init and clocksource code") and by the previous relevant commit 7e69f2b1ead2 ("clocksource: Remove the update callback"). The clocksource drivers were added by commit 5d0cf410e94b ("Time: i386 Clocksource Drivers") back in 2006, and the comment still said "socket" rather than "CPU". Therefore, bravely (and perhaps foolishly) make the code match the comment. Note that it is possible to bypass both code and comment by booting with tsc=reliable, but this also disables the clocksource watchdog, which is undesirable when trust in the TSC is strictly limited. Reported-by: Zhengxu Chen <[email protected]> Reported-by: Danielle Costantino <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/all/[email protected]
2024-08-02 | Merge branch 'kvm-fixes' into HEAD | Paolo Bonzini | 7 | -20/+24
* fix latent bug in how usage of large pages is determined for
  confidential VMs

* fix "underline too short" in docs

* eliminate log spam from limited APIC timer periods

* disallow pre-faulting of memory before SEV-SNP VMs are initialized

* delay clearing and encrypting private memory until it is added to
  guest page tables

* this change also enables another small cleanup: the checks in
  SNP_LAUNCH_UPDATE that limit it to non-populated, private pages can
  now be moved in the common kvm_gmem_populate() function
2024-08-02 | clockevents/drivers/i8253: Fix stop sequence for timer 0 | David Woodhouse | 1 | -11/+0
According to the data sheet, writing the MODE register should stop the
counter (and thus the interrupts). This appears to work on real hardware,
at least modern Intel and AMD systems. It should also work on Hyper-V.

However, on some buggy virtual machines the mode change doesn't have any
effect until the counter is subsequently loaded (or perhaps when the IRQ
next fires).

So, set MODE 0 and then load the counter, to ensure that those buggy VMs
do the right thing and the interrupts stop. And then write MODE 0 *again*
to stop the counter on compliant implementations too.

Apparently, Hyper-V keeps firing the IRQ *repeatedly* even in mode zero
when it should only happen once, but the second MODE write stops that too.

Userspace test program (mostly written by tglx):
=====
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdint.h>
#include <sys/io.h>

#define BUILDIO(bwl, bw, type)						\
static __always_inline void __out##bwl(type value, uint16_t port)	\
{									\
	asm volatile("out" #bwl " %" #bw "0, %w1"			\
		     : : "a"(value), "Nd"(port));			\
}									\
									\
static __always_inline type __in##bwl(uint16_t port)			\
{									\
	type value;							\
	asm volatile("in" #bwl " %w1, %" #bw "0"			\
		     : "=a"(value) : "Nd"(port));			\
	return value;							\
}

BUILDIO(b, b, uint8_t)

#define inb __inb
#define outb __outb

#define PIT_MODE	0x43
#define PIT_CH0		0x40
#define PIT_CH2		0x42

static int is8254;

static void dump_pit(void)
{
	if (is8254) {
		// Latch and output counter and status
		outb(0xC2, PIT_MODE);
		printf("%02x %02x %02x\n", inb(PIT_CH0), inb(PIT_CH0), inb(PIT_CH0));
	} else {
		// Latch and output counter
		outb(0x0, PIT_MODE);
		printf("%02x %02x\n", inb(PIT_CH0), inb(PIT_CH0));
	}
}

int main(int argc, char* argv[])
{
	int nr_counts = 2;

	if (argc > 1)
		nr_counts = atoi(argv[1]);

	if (argc > 2)
		is8254 = 1;

	if (ioperm(0x40, 4, 1) != 0)
		return 1;

	dump_pit();

	printf("Set oneshot\n");
	outb(0x38, PIT_MODE);
	outb(0x00, PIT_CH0);
	outb(0x0F, PIT_CH0);
	dump_pit();
	usleep(1000);
	dump_pit();

	printf("Set periodic\n");
	outb(0x34, PIT_MODE);
	outb(0x00, PIT_CH0);
	outb(0x0F, PIT_CH0);
	dump_pit();
	usleep(1000);
	dump_pit();
	dump_pit();
	usleep(100000);
	dump_pit();
	usleep(100000);
	dump_pit();

	printf("Set stop (%d counter writes)\n", nr_counts);
	outb(0x30, PIT_MODE);
	while (nr_counts--)
		outb(0xFF, PIT_CH0);
	dump_pit();
	usleep(100000);
	dump_pit();
	usleep(100000);
	dump_pit();

	printf("Set MODE 0\n");
	outb(0x30, PIT_MODE);
	dump_pit();
	usleep(100000);
	dump_pit();
	usleep(100000);
	dump_pit();

	return 0;
}
=====

Suggested-by: Sean Christopherson <[email protected]> Co-developed-by: Li RongQing <[email protected]> Signed-off-by: Li RongQing <[email protected]> Signed-off-by: David Woodhouse <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Michael Kelley <[email protected]> Link: https://lore.kernel.org/all/[email protected]
2024-08-02 | x86/i8253: Disable PIT timer 0 when not in use | David Woodhouse | 1 | -2/+9
Leaving the PIT interrupt running can cause noticeable steal time for virtual guests. The VMM generally has a timer which toggles the IRQ input to the PIC and I/O APIC, which takes CPU time away from the guest. Even on real hardware, running the counter may use power needlessly (albeit not much). Make sure it's turned off if it isn't going to be used. Signed-off-by: David Woodhouse <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Michael Kelley <[email protected]> Link: https://lore.kernel.org/all/[email protected]
2024-08-02 | uretprobe: change syscall number, again | Arnd Bergmann | 1 | -1/+1
Despite multiple attempts to get the syscall number assignment right for
the newly added uretprobe syscall, we ended up with a bit of a mess:

 - The number is defined as 467 based on the assumption that the xattrat
   family of syscalls would use 463 through 466, but those did not make
   it into 6.11.

 - The include/uapi/asm-generic/unistd.h file still lists the number 463,
   but the new scripts/syscall.tbl that was supposed to have the same
   data lists 467 instead as the number for arc, arm64, csky, hexagon,
   loongarch, nios2, openrisc and riscv. None of these architectures
   actually provide a uretprobe syscall.

 - All the other architectures (powerpc, arm, mips, ...) don't list this
   syscall at all.

There are two ways to make it consistent again: either list it with the
same syscall number on all architectures, or only list it on x86 but not
in scripts/syscall.tbl and asm-generic/unistd.h.

Based on the most recent discussion, it seems like we won't need it
anywhere else, so just remove the inconsistent assignment and instead
move the x86 number to the next available one in the architecture
specific range, which is 335.

Fixes: 5c28424e9a34 ("syscalls: Fix to add sys_uretprobe to syscall.tbl") Fixes: 190fec72df4a ("uprobe: Wire up uretprobe system call") Fixes: 63ded110979b ("uprobe: Change uretprobe syscall scope and number") Acked-by: Masami Hiramatsu (Google) <[email protected]> Reviewed-by: Jiri Olsa <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]>
2024-08-02 | x86/pkeys: Restore altstack access in sigreturn() | Aruna Ramakrishna | 1 | -3/+3
A process can disable access to the alternate signal stack by not enabling the altstack's PKEY in the PKRU register. Nevertheless, the kernel updates the PKRU temporarily for signal handling. However, in sigreturn(), restore_sigcontext() will restore the PKRU to the user-defined PKRU value. This will cause restore_altstack() to fail with a SIGSEGV as it needs read access to the altstack which is prohibited by the user-defined PKRU value. Fix this by restoring altstack before restoring PKRU. Signed-off-by: Aruna Ramakrishna <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/all/[email protected]
2024-08-02 | x86/pkeys: Update PKRU to enable all pkeys before XSAVE | Aruna Ramakrishna | 2 | -4/+19
If the alternate signal stack is protected by a different PKEY than the current execution stack, copying XSAVE data to the sigaltstack will fail if its PKEY is not enabled in the PKRU register. It's unknown which pkey was used by the application for the altstack, so enable all PKEYS before XSAVE. But this updated PKRU value is also pushed onto the sigframe, which means the register value restored from sigcontext will be different from the user-defined one, which is incorrect. Fix that by overwriting the PKRU value on the sigframe with the original, user-defined PKRU. Signed-off-by: Aruna Ramakrishna <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/all/[email protected]
2024-08-02 | x86/pkeys: Add helper functions to update PKRU on the sigframe | Aruna Ramakrishna | 4 | -0/+43
In the case where a user thread sets up an alternate signal stack protected by the default PKEY (i.e. PKEY 0), while the thread's stack is protected by a non-zero PKEY, both these PKEYS have to be enabled in the PKRU register for the signal to be delivered to the application correctly. However, the PKRU value restored after handling the signal must not enable this extra PKEY (i.e. PKEY 0) - i.e., the PKRU value in the sigframe has to be overwritten with the user-defined value. Add helper functions that will update PKRU value in the sigframe after XSAVE. Note that sig_prepare_pkru() makes no assumption about which PKEY could be used to protect the altstack (i.e. it may not be part of init_pkru), and so enables all PKEYS. No functional change. Signed-off-by: Aruna Ramakrishna <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/all/[email protected]
2024-08-02 | x86/pkeys: Add PKRU as a parameter in signal handling functions | Aruna Ramakrishna | 3 | -5/+6
Assume there's a multithreaded application that runs untrusted user code.
Each thread has its stack/code protected by a non-zero PKEY, and the PKRU
register is set up such that only that particular non-zero PKEY is
enabled. Each thread also sets up an alternate signal stack to handle
signals, which is protected by PKEY zero.

The PKEYs man page documents that the PKRU will be reset to init_pkru
when the signal handler is invoked, which means that PKEY zero access
will be enabled. But this reset happens after the kernel attempts to push
fpu state to the alternate stack, which is not (yet) accessible by the
kernel, which leads to a new SIGSEGV being sent to the application,
terminating it.

Enabling both the non-zero PKEY (for the thread) and PKEY zero in
userspace will not work for this use case. It cannot have the alt stack
writeable by all - the rationale here is that the code running in that
thread (using a non-zero PKEY) is untrusted and should not have access to
the alternate signal stack (that uses PKEY zero), to prevent the return
address of a function from being changed.

The expectation is that kernel should be able to set up the alternate
signal stack and deliver the signal to the application even if PKEY zero
is explicitly disabled by the application. The signal handler
accessibility should not be dictated by whatever PKRU value the thread
sets up.

The PKRU register is managed by XSAVE, which means the sigframe contents
must match the register contents - which is not the case here. It's
required that the signal frame contains the user-defined PKRU value (so
that it is restored correctly from sigcontext) but the actual register
must be reset to init_pkru so that the alt stack is accessible and the
signal can be delivered to the application. It seems that the proper fix
here would be to remove PKRU from the XSAVE framework and manage it
separately, which is quite complicated.

As a workaround, do this:

	orig_pkru = rdpkru();
	wrpkru(orig_pkru & init_pkru_value);
	xsave_to_user_sigframe();
	put_user(pkru_sigframe_addr, orig_pkru)

In preparation for writing PKRU to sigframe, pass PKRU as an additional
parameter down the call chain from get_sigframe().

No functional change.

Signed-off-by: Aruna Ramakrishna <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/all/[email protected]
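A self-contained user-space sketch of that ordering, for illustration only: rdpkru()/wrpkru() are open-coded with their standard opcodes, init_pkru_value is an assumed placeholder, and the sigframe slot is stood in for by a plain variable (running it requires a pkeys-capable CPU with OSPKE enabled).

#include <stdint.h>
#include <stdio.h>

/* Assumed placeholder for the kernel's init_pkru value. */
static const uint32_t init_pkru_value = 0x55555554;

static inline uint32_t rdpkru(void)
{
	uint32_t eax, edx;

	/* rdpkru (0F 01 EE), requires ECX = 0 */
	asm volatile(".byte 0x0f,0x01,0xee" : "=a"(eax), "=d"(edx) : "c"(0));
	(void)edx;
	return eax;
}

static inline void wrpkru(uint32_t pkru)
{
	/* wrpkru (0F 01 EF), requires ECX = EDX = 0 */
	asm volatile(".byte 0x0f,0x01,0xef" : : "a"(pkru), "c"(0), "d"(0));
}

/* Mirrors the ordering of the workaround above: relax PKRU so the needed
 * pkeys are enabled while the (elided) XSAVE to the sigframe runs, then
 * make the sigframe carry the original user-defined PKRU value. */
static void sketch_sigframe_pkru(uint32_t *pkru_on_sigframe)
{
	uint32_t orig_pkru = rdpkru();

	wrpkru(orig_pkru & init_pkru_value);
	/* ... xsave_to_user_sigframe() would run here ... */
	*pkru_on_sigframe = orig_pkru;
}

int main(void)
{
	uint32_t sigframe_pkru = 0;

	sketch_sigframe_pkru(&sigframe_pkru);
	printf("PKRU stored on sigframe: %#x\n", sigframe_pkru);
	return 0;
}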
2024-08-02 | Merge branch 'linus' into x86/mm | Thomas Gleixner | 237 | -10378/+12376
Bring x86 and selftests up to date
2024-08-02 | perf,x86: avoid missing caller address in stack traces captured in uprobe | Andrii Nakryiko | 1 | -0/+63
When tracing user functions with uprobe functionality, it's common to install the probe (e.g., a BPF program) at the first instruction of the function. This is often going to be `push %rbp` instruction in function preamble, which means that within that function frame pointer hasn't been established yet. This leads to consistently missing an actual caller of the traced function, because perf_callchain_user() only records current IP (capturing traced function) and then following frame pointer chain (which would be caller's frame, containing the address of caller's caller). So when we have target_1 -> target_2 -> target_3 call chain and we are tracing an entry to target_3, captured stack trace will report target_1 -> target_3 call chain, which is wrong and confusing. This patch proposes a x86-64-specific heuristic to detect `push %rbp` (`push %ebp` on 32-bit architecture) instruction being traced. Given entire kernel implementation of user space stack trace capturing works under assumption that user space code was compiled with frame pointer register (%rbp/%ebp) preservation, it seems pretty reasonable to use this instruction as a strong indicator that this is the entry to the function. In that case, return address is still pointed to by %rsp/%esp, so we fetch it and add to stack trace before proceeding to unwind the rest using frame pointer-based logic. We also check for `endbr64` (for 64-bit modes) as another common pattern for function entry, as suggested by Josh Poimboeuf. Even if we get this wrong sometimes for uprobes attached not at the function entry, it's OK because stack trace will still be overall meaningful, just with one extra bogus entry. If we don't detect this, we end up with guaranteed to be missing caller function entry in the stack trace, which is worse overall. Signed-off-by: Andrii Nakryiko <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
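A user-space sketch of the byte patterns the heuristic keys on; push %rbp/%ebp is opcode 0x55 and endbr64 is F3 0F 1E FA, but the function and its interface here are illustrative rather than the perf_callchain_user() code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static const uint8_t ENDBR64[4] = { 0xf3, 0x0f, 0x1e, 0xfa };

/* If the instruction under the uprobe looks like a function entry, the
 * return address is still at the top of the stack, so it can be appended
 * to the trace before frame-pointer unwinding takes over. */
static bool looks_like_function_entry(const uint8_t *insn, size_t len)
{
	if (len >= 1 && insn[0] == 0x55)		/* push %rbp / %ebp */
		return true;
	if (len >= 4 && !memcmp(insn, ENDBR64, 4))	/* endbr64 */
		return true;
	return false;
}

int main(void)
{
	const uint8_t entry_insn[] = { 0x55 };		    /* push %rbp   */
	const uint8_t other_insn[] = { 0x48, 0x89, 0xe5 };  /* mov %rsp,%rbp */

	printf("%d %d\n",
	       looks_like_function_entry(entry_insn, sizeof(entry_insn)),
	       looks_like_function_entry(other_insn, sizeof(other_insn)));
	return 0;
}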
2024-08-01 | x86/uaccess: Zero the 8-byte get_range case on failure on 32-bit | David Gow | 1 | -1/+3
While zeroing the upper 32 bits of an 8-byte getuser on 32-bit x86 was fixed by commit 8c860ed825cb ("x86/uaccess: Fix missed zeroing of ia32 u64 get_user() range checking") it was broken again in commit 8a2462df1547 ("x86/uaccess: Improve the 8-byte getuser() case"). This is because the register which holds the upper 32 bits (%ecx) is being cleared _after_ the check_range, so if the range check fails, %ecx is never cleared. This can be reproduced with: ./tools/testing/kunit/kunit.py run --arch i386 usercopy Instead, clear %ecx _before_ check_range in the 8-byte case. This reintroduces a bit of the ugliness we were trying to avoid by adding another #ifndef CONFIG_X86_64, but at least keeps check_range from needing a separate bad_get_user_8 jump. Fixes: 8a2462df1547 ("x86/uaccess: Improve the 8-byte getuser() case") Signed-off-by: David Gow <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Linus Torvalds <[email protected]> Link: https://lore.kernel.org/all/[email protected]
2024-08-01 | KVM: x86/mmu: fix determination of max NPT mapping level for private pages | Ackerley Tng | 1 | -1/+1
The `if (req_max_level)` test was meant to ignore req_max_level if PG_LEVEL_NONE was returned. Hence, this function should return max_level instead of the ignored req_max_level. This is only a latent issue for now, since guest_memfd does not support large pages. Signed-off-by: Ackerley Tng <[email protected]> Message-ID: <[email protected]> Fixes: f32fb32820b1 ("KVM: x86: Add hook for determining max NPT mapping level") Signed-off-by: Paolo Bonzini <[email protected]>
2024-08-01 | x86/mce: Use mce_prep_record() helpers for apei_smca_report_x86_error() | Yazen Ghannam | 1 | -8/+8
Current AMD systems can report MCA errors using the ACPI Boot Error Record Table (BERT). The BERT entries for MCA errors will be an x86 Common Platform Error Record (CPER) with an MSR register context that matches the MCAX/SMCA register space. However, the BERT will not necessarily be processed on the CPU that reported the MCA errors. Therefore, the correct CPU number needs to be determined and the information saved in struct mce. Use the newly defined mce_prep_record_*() helpers to get the correct data. Also, add an explicit check to verify that a valid CPU number was found from the APIC ID search. Signed-off-by: Yazen Ghannam <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Nikolay Borisov <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Borislav Petkov (AMD) <[email protected]>
2024-08-01 | x86/mce: Define mce_prep_record() helpers for common and per-CPU fields | Yazen Ghannam | 2 | -11/+25
Generally, MCA information for an error is gathered on the CPU that reported the error. In this case, CPU-specific information from the running CPU will be correct. However, this will be incorrect if the MCA information is gathered while running on a CPU that didn't report the error. One example is creating an MCA record using mce_prep_record() for errors reported from ACPI. Split mce_prep_record() so that there is a helper function to gather common, i.e. not CPU-specific, information and another helper for CPU-specific information. Leave mce_prep_record() defined as-is for the common case when running on the reporting CPU. Get MCG_CAP in the global helper even though the register is per-CPU. This value is not already cached per-CPU like other values. And it does not assist with any per-CPU decoding or handling. Signed-off-by: Yazen Ghannam <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Nikolay Borisov <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Borislav Petkov (AMD) <[email protected]>
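A hedged sketch of the split this describes; the struct and field set below are simplified stand-ins rather than the kernel's struct mce, and only the helper naming follows the commit.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Simplified stand-in for struct mce; only what the sketch needs. */
struct mce_record {
	uint64_t time;		/* common: when the record was assembled   */
	uint32_t cpu;		/* per-CPU: logical CPU that saw the error */
	uint32_t apicid;	/* per-CPU: APIC ID of that CPU            */
};

/* Common fields: correct no matter which CPU assembles the record. */
static void mce_prep_record_common(struct mce_record *m)
{
	m->time = (uint64_t)time(NULL);
}

/* Per-CPU fields: must describe the *reporting* CPU, which for e.g. the
 * ACPI/BERT path is not necessarily the CPU running this code. */
static void mce_prep_record_per_cpu(uint32_t cpu, uint32_t apicid,
				    struct mce_record *m)
{
	m->cpu = cpu;
	m->apicid = apicid;
}

/* Common case: running on the reporting CPU, old one-call behaviour. */
static void mce_prep_record(uint32_t this_cpu, uint32_t this_apicid,
			    struct mce_record *m)
{
	mce_prep_record_common(m);
	mce_prep_record_per_cpu(this_cpu, this_apicid, m);
}

int main(void)
{
	struct mce_record m = { 0 };

	mce_prep_record(3, 6, &m);
	printf("cpu %u apicid %u time %llu\n", m.cpu, m.apicid,
	       (unsigned long long)m.time);
	return 0;
}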
2024-08-01 | x86/mce: Rename mce_setup() to mce_prep_record() | Yazen Ghannam | 4 | -7/+7
There is no MCE "setup" done in mce_setup(). Rather, this function initializes and prepares an MCE record. Rename the function to highlight what it does. No functional change is intended. Suggested-by: Borislav Petkov <[email protected]> Signed-off-by: Yazen Ghannam <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Nikolay Borisov <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Borislav Petkov (AMD) <[email protected]>
2024-08-01 | x86/mm: Fix pti_clone_entry_text() for i386 | Peter Zijlstra | 1 | -1/+1
While x86_64 has PMD aligned text sections, i386 does not have this luxury. Notably ALIGN_ENTRY_TEXT_END is empty and _etext has PAGE alignment. This means that text on i386 can be page granular at the tail end, which in turn means that the PTI text clones should consistently account for this. Make pti_clone_entry_text() consistent with pti_clone_kernel_text(). Fixes: 16a3fe634f6a ("x86/mm/pti: Clone kernel-image on PTE level for 32 bit") Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
2024-08-01 | x86/mm: Fix pti_clone_pgtable() alignment assumption | Peter Zijlstra | 1 | -3/+3
Guenter reported dodgy crashes on an i386-nosmp build using GCC-11 that had the form of endless traps until the entry stack was exhausted and then #DF from the stack guard. It turned out that pti_clone_pgtable() had alignment assumptions on the start address, notably it hard assumes start is PMD aligned. This is true on x86_64, but very much not true on i386. These assumptions can cause the end condition to malfunction, leading to a 'short' clone. Guess what happens when the user mapping has a short copy of the entry text? Use the correct increment form for addr to avoid alignment assumptions. Fixes: 16a3fe634f6a ("x86/mm/pti: Clone kernel-image on PTE level for 32 bit") Reported-by: Guenter Roeck <[email protected]> Tested-by: Guenter Roeck <[email protected]> Suggested-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
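A small stand-alone illustration of the increment-form difference (round_up() here is a local macro and the address is made up; the kernel's loop is not reproduced):

#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE	(1ULL << 21)	/* 2 MB */
#define round_up(x, a)	(((x) + (a) - 1) & ~((uint64_t)(a) - 1))

int main(void)
{
	uint64_t addr = 0xc1234000ULL;	/* not PMD aligned, as on i386 */

	/* Broken assumption: adding PMD_SIZE to an unaligned start can
	 * overshoot the end address and terminate the clone loop early. */
	uint64_t next_wrong = addr + PMD_SIZE;

	/* Aligned form: step to the next PMD boundary instead. */
	uint64_t next_aligned = round_up(addr + 1, PMD_SIZE);

	printf("unaligned next: %#llx, aligned next: %#llx\n",
	       (unsigned long long)next_wrong,
	       (unsigned long long)next_aligned);
	return 0;
}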
2024-07-31 | x86/setup: Parse the builtin command line before merging | Borislav Petkov (AMD) | 3 | -8/+23
Commit in Fixes was added as a catch-all for cases where the cmdline is parsed before being merged with the builtin one. And promptly one issue appeared, see Link below. The microcode loader really needs to parse it that early, but the merging happens later. Reshuffling the early boot nightmare^W code to handle that properly would be a painful exercise for another day so do the chicken thing and parse the builtin cmdline too before it has been merged. Fixes: 0c40b1c7a897 ("x86/setup: Warn when option parsing is done too early") Reported-by: Mike Lothian <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/all/20240730152108.GAZqkE5Dfi9AuKllRw@fat_crate.local Link: https://lore.kernel.org/r/20240722152330.GCZp55ck8E_FT4kPnC@fat_crate.local
2024-07-31 | x86/tsc: Use topology_max_packages() to get package number | Feng Tang | 1 | -5/+3
Commit b50db7095fe0 ("x86/tsc: Disable clocksource watchdog for TSC on
qualified platorms") was introduced to solve the problem that the TSC
clocksource is sometimes wrongly judged as unstable by watchdogs like
'jiffies', HPET, etc.

In it, the hardware package number is a key factor for judging whether to
disable the watchdog for TSC, and 'nr_online_nodes' was chosen because, at
that time (kernel v5.1x), it was available in the early boot phase before
registering the 'tsc-early' clocksource, where all non-boot CPUs are not
brought up yet.

Dave and Rui pointed out there are many cases in which 'nr_online_nodes'
is cheated and not accurate, like:

 * SNC (sub-numa cluster) mode enabled
 * numa emulation (numa=fake=8 etc.)
 * numa=off
 * platforms with CPU-less HBM nodes, CPU-less Optane memory nodes.
 * 'maxcpus=' cmdline setup, where chopped CPUs could be onlined later
 * 'nr_cpus=', 'possible_cpus=' cmdline setup, where chopped CPUs can not
   be onlined after boot

The SNC case is the most user-visible case, as many CSP (Cloud Service
Provider) enable this feature in their server fleets. When SNC3 is
enabled, a 2 socket machine will appear to have 6 NUMA nodes, and get
impacted by the issue in reality.

Thomas' recent patchset of refactoring x86 topology code improves
topology_max_packages() greatly, by making it more accurate and available
in the early boot phase, which works well in most of the above cases.

The only exceptions are the 'nr_cpus=' and 'possible_cpus=' setups, which
may under-estimate the package number. During topology setup, the boot CPU
iterates through all enumerated APIC IDs and either accepts or rejects
each APIC ID. For accepted IDs, it figures out which bits of the ID map to
the package number, and tracks which package numbers have been seen in a
bitmap. topology_max_packages() just returns the number of bits set in
that bitmap.

'nr_cpus=' and 'possible_cpus=' can cause more APIC IDs to be rejected and
can artificially lower the number of bits in the package bitmap and thus
topology_max_packages(). This means that, for example, a system with 8
physical packages might reject all the CPUs on 6 of those packages and be
left with only 2 packages and 2 bits set in the package bitmap. It needs
the TSC watchdog, but would disable it anyway. This isn't ideal, but it
only happens for debug-oriented options. This is fixable by tracking the
package numbers for rejected CPUs, but it's not worth the trouble for
debugging.

So use topology_max_packages() to replace nr_online_nodes().

Reported-by: Dave Hansen <[email protected]> Signed-off-by: Feng Tang <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Waiman Long <[email protected]> Link: https://lore.kernel.org/all/[email protected] Closes: https://lore.kernel.org/lkml/[email protected]/
2024-07-31 | perf/x86: Fix smp_processor_id()-in-preemptible warnings | Li Huafei | 1 | -10/+12
The following bug was triggered on a system built with
CONFIG_DEBUG_PREEMPT=y:

 # echo p > /proc/sysrq-trigger

 BUG: using smp_processor_id() in preemptible [00000000] code: sh/117
 caller is perf_event_print_debug+0x1a/0x4c0
 CPU: 3 UID: 0 PID: 117 Comm: sh Not tainted 6.11.0-rc1 #109
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
 Call Trace:
  <TASK>
  dump_stack_lvl+0x4f/0x60
  check_preemption_disabled+0xc8/0xd0
  perf_event_print_debug+0x1a/0x4c0
  __handle_sysrq+0x140/0x180
  write_sysrq_trigger+0x61/0x70
  proc_reg_write+0x4e/0x70
  vfs_write+0xd0/0x430
  ? handle_mm_fault+0xc8/0x240
  ksys_write+0x9c/0xd0
  do_syscall_64+0x96/0x190
  entry_SYSCALL_64_after_hwframe+0x4b/0x53

This is because the commit d4b294bf84db ("perf/x86: Hybrid PMU support
for counters") took smp_processor_id() outside the irq critical section.
If a preemption occurs in perf_event_print_debug() and the task is
migrated to another cpu, we may get incorrect pmu debug information.
Move smp_processor_id() back inside the irq critical section to fix this
issue.

Fixes: d4b294bf84db ("perf/x86: Hybrid PMU support for counters") Signed-off-by: Li Huafei <[email protected]> Reviewed-and-tested-by: K Prateek Nayak <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Kan Liang <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/CPU/AMD: Add models 0x60-0x6f to the Zen5 range | Perry Yuan | 1 | -1/+1
Add some new Zen5 models for the 0x1A family. [ bp: Merge the 0x60 and 0x70 ranges. ] Signed-off-by: Perry Yuan <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for GDS | Breno Leitao | 2 | -1/+12
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where
some mitigations have entries in Kconfig, and they could be modified,
while other mitigations do not have Kconfig entries, and could not be
controlled at build time.

Create a new kernel config that allows GDS to be completely disabled,
similarly to the "gather_data_sampling=off" or "mitigations=off" kernel
command-line.

Now, there are two options for GDS mitigation:

 * CONFIG_MITIGATION_GDS=n -> Mitigation disabled (New)
 * CONFIG_MITIGATION_GDS=y -> Mitigation enabled (GDS_MITIGATION_FULL)

Suggested-by: Josh Poimboeuf <[email protected]> Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Remove GDS Force Kconfig option | Breno Leitao | 2 | -23/+0
Remove the MITIGATION_GDS_FORCE Kconfig option, which aggressively disables AVX as a mitigation for Gather Data Sampling (GDS) vulnerabilities. This option is not widely used by distros. While removing the Kconfig option, retain the runtime configuration ability through the `gather_data_sampling=force` kernel parameter. This allows users to still enable this aggressive mitigation if needed, without baking it into the kernel configuration. Simplify the kernel configuration while maintaining flexibility for runtime mitigation choices. Suggested-by: Borislav Petkov <[email protected]> Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Daniel Sneddon <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for SSB | Breno Leitao | 2 | -4/+16
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where some mitigations have entries in Kconfig, and they could be modified, while other mitigations do not have Kconfig entries, and could not be controlled at build time. Create an entry for the SSB CPU mitigation under CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable it at compilation time. Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for Spectre V2 | Breno Leitao | 2 | -4/+17
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where some mitigations have entries in Kconfig, and they could be modified, while other mitigations do not have Kconfig entries, and could not be controlled at build time. Create an entry for the Spectre V2 CPU mitigation under CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable it at compilation time. Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for SRBDS | Breno Leitao | 2 | -1/+16
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where some mitigations have entries in Kconfig, and they could be modified, while other mitigations do not have Kconfig entries, and could not be controlled at build time. Create an entry for the SRBDS CPU mitigation under CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable it at compilation time. Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for Spectre v1 | Breno Leitao | 2 | -1/+12
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where some mitigations have entries in Kconfig, and they could be modified, while other mitigations do not have Kconfig entries, and could not be controlled at build time. Create an entry for the Spectre v1 CPU mitigation under CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable it at compilation time. Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for RETBLEED | Breno Leitao | 2 | -1/+14
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where some mitigations have entries in Kconfig, and they could be modified, while other mitigations do not have Kconfig entries, and could not be controlled at build time. Create an entry for the RETBLEED CPU mitigation under CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable it at compilation time. Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for L1TF | Breno Leitao | 2 | -1/+12
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where some mitigations have entries in Kconfig, and they could be modified, while other mitigations do not have Kconfig entries, and could not be controlled at build time. Create an entry for the L1TF CPU mitigation under CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable it at compilation time. Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for MMIO Stale Data | Breno Leitao | 2 | -1/+14
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where some mitigations have entries in Kconfig, and they could be modified, while other mitigations do not have Kconfig entries, and could not be controlled at build time. Create an entry for the MMIO Stale Data CPU mitigation under CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable it at compilation time. Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for TAA | Breno Leitao | 2 | -1/+13
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where some mitigations have entries in Kconfig, and they could be modified, while other mitigations do not have Kconfig entries, and could not be controlled at build time. Create an entry for the TAA CPU mitigation under CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable it at compilation time. Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/bugs: Add a separate config for MDS | Breno Leitao | 2 | -1/+11
Currently, the CONFIG_SPECULATION_MITIGATIONS is halfway populated, where some mitigations have entries in Kconfig, and they could be modified, while other mitigations do not have Kconfig entries, and could not be controlled at build time. Create an entry for the MDS CPU mitigation under CONFIG_SPECULATION_MITIGATIONS. This allows users to enable or disable it at compilation time. Signed-off-by: Breno Leitao <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Acked-by: Josh Poimboeuf <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2024-07-30 | x86/sev: Fix __reserved field in sev_config | Pavan Kumar Paluri | 1 | -1/+1
sev_config currently has debug, ghcbs_initialized, and use_cas fields. However, __reserved count has not been updated. Fix this. Fixes: 34ff65901735 ("x86/sev: Use kernel provided SVSM Calling Areas") Signed-off-by: Pavan Kumar Paluri <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
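An illustrative bitfield showing the invariant the fix restores; the named flags follow the commit message, but this is not the kernel's sev_config definition and the exact widths here are assumptions.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Every new flag must be paid for by shrinking __reserved, so the widths
 * still sum to exactly 64 bits. */
struct sev_config_sketch {
	uint64_t debug			: 1,
		 ghcbs_initialized	: 1,
		 use_cas		: 1,
		 __reserved		: 61;	/* 1 + 1 + 1 + 61 == 64 */
};

static_assert(sizeof(struct sev_config_sketch) == sizeof(uint64_t),
	      "flag widths plus __reserved must fit exactly in one u64");

int main(void)
{
	printf("sizeof(struct sev_config_sketch) = %zu\n",
	       sizeof(struct sev_config_sketch));
	return 0;
}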
2024-07-30 | x86/microcode/AMD: Fix a -Wsometimes-uninitialized clang false positive | Borislav Petkov (AMD) | 1 | -1/+1
Initialize equiv_id in order to shut up:

  arch/x86/kernel/cpu/microcode/amd.c:714:6: warning: variable 'equiv_id' is \
  used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
          if (x86_family(bsp_cpuid_1_eax) < 0x17) {
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

because clang doesn't do interprocedural analysis for warnings to see
that this variable won't be used uninitialized.

Fixes: 94838d230a6c ("x86/microcode/AMD: Use the family,model,stepping encoded in the patch ID") Reported-by: kernel test robot <[email protected]> Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/ Signed-off-by: Borislav Petkov (AMD) <[email protected]>
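A reduced stand-alone example of the warning pattern and the fix; it only mimics the shape of the code in the warning and is not the microcode loader itself.

#include <stdio.h>

static unsigned int lookup_equiv_id(unsigned int family)
{
	return family + 1;		/* stand-in for the real table lookup */
}

/* Without the "= 0", clang warns that equiv_id may be read uninitialized
 * whenever the 'if' condition is false, because it does not do the
 * interprocedural analysis needed to rule that path out. */
static unsigned int demo(unsigned int family)
{
	unsigned int equiv_id = 0;

	if (family < 0x17)
		equiv_id = lookup_equiv_id(family);

	return equiv_id;
}

int main(void)
{
	printf("%u\n", demo(0x16));
	return 0;
}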
2024-07-29 | bpf, x64: Fix tailcall hierarchy | Leon Hwang | 1 | -28/+79
This patch fixes a tailcall issue caused by abusing the tailcall in
bpf2bpf feature.

As we know, tail_call_cnt propagates by rax from caller to callee when a
subprog is called in a tailcall context. But, like the following example,
MAX_TAIL_CALL_CNT won't work because of missing tail_call_cnt
back-propagation from callee to caller.

\#include <linux/bpf.h>
\#include <bpf/bpf_helpers.h>
\#include "bpf_legacy.h"

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

int count = 0;

static __noinline int subprog_tail1(struct __sk_buff *skb)
{
	bpf_tail_call_static(skb, &jmp_table, 0);
	return 0;
}

static __noinline int subprog_tail2(struct __sk_buff *skb)
{
	bpf_tail_call_static(skb, &jmp_table, 0);
	return 0;
}

SEC("tc")
int entry(struct __sk_buff *skb)
{
	volatile int ret = 1;

	count++;
	subprog_tail1(skb);
	subprog_tail2(skb);

	return ret;
}

char __license[] SEC("license") = "GPL";

At run time, the tail_call_cnt in entry() will be propagated to
subprog_tail1() and subprog_tail2(). But, when the tail_call_cnt in
subprog_tail1() updates when bpf_tail_call_static(), the tail_call_cnt
in entry() won't be updated at the same time. As a result, in entry(),
when tail_call_cnt in entry() is less than MAX_TAIL_CALL_CNT and
subprog_tail1() returns because of the MAX_TAIL_CALL_CNT limit,
bpf_tail_call_static() in subprog_tail2() is able to run because the
tail_call_cnt in subprog_tail2() propagated from entry() is less than
MAX_TAIL_CALL_CNT.

So, how many tailcalls are there for this case if no error happens?

From a top-down view, the calls form a hierarchy, layer by layer. With
this view, there will be 2+4+8+...+2^33 = 2^34 - 2 = 17,179,869,182
tailcalls for this case.

How about there are N subprog_tail() in entry()? There will be almost
N^34 tailcalls.

Then, in this patch, it resolves this case on x86_64. Instead of
propagating tail_call_cnt from caller to callee, it propagates its
pointer, tail_call_cnt_ptr, tcc_ptr for short.

However, where does it store tail_call_cnt? It stores tail_call_cnt on
the stack of the main prog. When a tail call happens in a subprog, it
increments tail_call_cnt by tcc_ptr. Meanwhile, it stores
tail_call_cnt_ptr on the stack of the main prog, too. And, before
jumping to the tail callee, it has to pop tail_call_cnt and
tail_call_cnt_ptr.

Then, at the prologue of a subprog, it must not make rax a
tail_call_cnt_ptr again. It has to reuse the tail_call_cnt_ptr from the
caller. As a result, at run time, it has to recognize whether rax is
tail_call_cnt or tail_call_cnt_ptr at the prologue by:

1. rax is tail_call_cnt if rax is <= MAX_TAIL_CALL_CNT.
2. rax is tail_call_cnt_ptr if rax is > MAX_TAIL_CALL_CNT, because a
   pointer won't be <= MAX_TAIL_CALL_CNT.

Here's an example to dump the JITed code.
struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

int count = 0;

static __noinline int subprog_tail(struct __sk_buff *skb)
{
	bpf_tail_call_static(skb, &jmp_table, 0);
	return 0;
}

SEC("tc")
int entry(struct __sk_buff *skb)
{
	int ret = 1;

	count++;
	subprog_tail(skb);
	subprog_tail(skb);

	return ret;
}

When bpftool p d j id 42:

int entry(struct __sk_buff * skb):
bpf_prog_0c0f4c2413ef19b1_entry:
; int entry(struct __sk_buff *skb)
   0:	endbr64
   4:	nopl	(%rax,%rax)
   9:	xorq	%rax, %rax		;; rax = 0 (tail_call_cnt)
   c:	pushq	%rbp
   d:	movq	%rsp, %rbp
  10:	endbr64
  14:	cmpq	$33, %rax		;; if rax > 33, rax = tcc_ptr
  18:	ja	0x20			;; if rax > 33 goto 0x20 ---+
  1a:	pushq	%rax			;; [rbp - 8] = rax = 0      |
  1b:	movq	%rsp, %rax		;; rax = rbp - 8            |
  1e:	jmp	0x21			;; ---------+               |
  20:	pushq	%rax			;; <--------|---------------+
  21:	pushq	%rax			;; <--------+ [rbp - 16] = rax
  22:	pushq	%rbx			;; callee saved
  23:	movq	%rdi, %rbx		;; rbx = skb (callee saved)
; count++;
  26:	movabsq	$-82417199407104, %rdi
  30:	movl	(%rdi), %esi
  33:	addl	$1, %esi
  36:	movl	%esi, (%rdi)
; subprog_tail(skb);
  39:	movq	%rbx, %rdi		;; rdi = skb
  3c:	movq	-16(%rbp), %rax		;; rax = tcc_ptr
  43:	callq	0x80			;; call subprog_tail()
; subprog_tail(skb);
  48:	movq	%rbx, %rdi		;; rdi = skb
  4b:	movq	-16(%rbp), %rax		;; rax = tcc_ptr
  52:	callq	0x80			;; call subprog_tail()
; return ret;
  57:	movl	$1, %eax
  5c:	popq	%rbx
  5d:	leave
  5e:	retq

int subprog_tail(struct __sk_buff * skb):
bpf_prog_3a140cef239a4b4f_subprog_tail:
; int subprog_tail(struct __sk_buff *skb)
   0:	endbr64
   4:	nopl	(%rax,%rax)
   9:	nopl	(%rax)			;; do not touch tail_call_cnt
   c:	pushq	%rbp
   d:	movq	%rsp, %rbp
  10:	endbr64
  14:	pushq	%rax			;; [rbp - 8] = rax (tcc_ptr)
  15:	pushq	%rax			;; [rbp - 16] = rax (tcc_ptr)
  16:	pushq	%rbx			;; callee saved
  17:	pushq	%r13			;; callee saved
  19:	movq	%rdi, %rbx		;; rbx = skb
; asm volatile("r1 = %[ctx]\n\t"
  1c:	movabsq	$-105487587488768, %r13	;; r13 = jmp_table
  26:	movq	%rbx, %rdi		;; 1st arg, skb
  29:	movq	%r13, %rsi		;; 2nd arg, jmp_table
  2c:	xorl	%edx, %edx		;; 3rd arg, index = 0
  2e:	movq	-16(%rbp), %rax		;; rax = [rbp - 16] (tcc_ptr)
  35:	cmpq	$33, (%rax)
  39:	jae	0x4e			;; if *tcc_ptr >= 33 goto 0x4e --------+
  3b:	jmp	0x4e			;; jmp bypass, toggled by poking       |
  40:	addq	$1, (%rax)		;; (*tcc_ptr)++                        |
  44:	popq	%r13			;; callee saved                        |
  46:	popq	%rbx			;; callee saved                        |
  47:	popq	%rax			;; undo rbp-16 push                    |
  48:	popq	%rax			;; undo rbp-8 push                     |
  49:	nopl	(%rax,%rax)		;; tail call target, toggled by poking |
; return 0;				;;                                     |
  4e:	popq	%r13			;; restore callee saved <--------------+
  50:	popq	%rbx			;; restore callee saved
  51:	leave
  52:	retq

Furthermore, when trampoline is the caller of bpf prog, which is
tail_call_reachable, it is required to propagate rax through trampoline.

Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT") Fixes: e411901c0b77 ("bpf: allow for tailcalls in BPF subprograms for x64 JIT") Reviewed-by: Eduard Zingerman <[email protected]> Signed-off-by: Leon Hwang <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]> Signed-off-by: Andrii Nakryiko <[email protected]>
2024-07-29 | x86/aperfmperf: Fix deadlock on cpu_hotplug_lock | Jonathan Cameron | 1 | -2/+4
The broken patch results in a call to init_freq_invariance_cppc() in a CPU hotplug handler in both the path for initially present CPUs and those hotplugged later. That function includes a one time call to amd_set_max_freq_ratio() which in turn calls freq_invariance_enable() that has a static_branch_enable() which takes the cpu_hotplug_lock which is already held. Avoid the deadlock by using static_branch_enable_cpuslocked() as the lock will always be already held. The equivalent path on Intel does not already hold this lock, so take it around the call to freq_invariance_enable(), which results in it being held over the call to register_syscall_ops, which looks to be safe to do. Fixes: c1385c1f0ba3 ("ACPI: processor: Simplify initial onlining to use same path for cold and hotplug") Closes: https://lore.kernel.org/all/CABXGCsPvqBfL5hQDOARwfqasLRJ_eNPBbCngZ257HOe=xbWDkA@mail.gmail.com/ Reported-by: Mikhail Gavrilov <[email protected]> Suggested-by: Thomas Gleixner <[email protected]> Signed-off-by: Jonathan Cameron <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Tested-by: Mikhail Gavrilov <[email protected]> Tested-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
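A user-space analogue of the two enable paths, for illustration only: the real cpu_hotplug_lock is a percpu rwsem and the real APIs are static_branch_enable()/static_branch_enable_cpuslocked(); here a plain, non-recursive mutex stands in to show why re-taking the lock from the hotplug callback would self-deadlock.

#include <pthread.h>
#include <stdio.h>

/* Stand-in for cpu_hotplug_lock (illustrative only). */
static pthread_mutex_t hotplug_lock = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of static_branch_enable(): takes the lock itself, so it would
 * deadlock if the caller already held it. */
static void branch_enable(void)
{
	pthread_mutex_lock(&hotplug_lock);
	/* ... patch code sites ... */
	pthread_mutex_unlock(&hotplug_lock);
}

/* Analogue of static_branch_enable_cpuslocked(): caller must already hold
 * the lock, no extra locking here. */
static void branch_enable_cpuslocked(void)
{
	/* ... patch code sites ... */
}

/* Hotplug callback path (the AMD case): the lock is already held by the
 * hotplug machinery, so only the _cpuslocked variant is safe. */
static void hotplug_callback(void)
{
	pthread_mutex_lock(&hotplug_lock);
	branch_enable_cpuslocked();
	pthread_mutex_unlock(&hotplug_lock);
}

int main(void)
{
	hotplug_callback();
	branch_enable();	/* fine when the lock is not already held */
	printf("no deadlock\n");
	return 0;
}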