path: root/arch/x86/boot/compressed/misc.h
2024-09-01  mm: rework accept memory helpers  [Kirill A. Shutemov, 1 file, -1/+1]
Make accept_memory() and range_contains_unaccepted_memory() take 'start' and 'size' arguments instead of 'start' and 'end'. Remove accept_page(), replacing it with direct calls to accept_memory(). The accept_page() name is going to be used for a different function. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kirill A. Shutemov <[email protected]> Suggested-by: David Hildenbrand <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Mike Rapoport (Microsoft) <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: Vlastimil Babka <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2024-02-06  x86/boot: Add a message about ignored early NMIs  [NOMURA JUNICHI (野村 淳一), 1 file, -0/+1]
Commit 78a509fba9c9 ("x86/boot: Ignore NMIs during very early boot") added an empty handler in the early boot stage to avoid boot failure due to spurious NMIs. Add a diagnostic message to show that early NMIs have occurred.

[ bp: Touchups. ]

[ Committer note: tested by stopping the guest really early and injecting NMIs through qemu's monitor. Result:

      early console in setup code
      Spurious early NMIs ignored: 13
      ...
]

Suggested-by: Borislav Petkov <[email protected]>
Suggested-by: H. Peter Anvin <[email protected]>
Signed-off-by: Jun'ichi Nomura <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Link: https://lore.kernel.org/lkml/20231130103339.GCZWhlA196uRklTMNF@fat_crate.local
2024-02-06  x86/boot: Add error_putdec() helper  [H. Peter Anvin, 1 file, -0/+2]
Add a helper to print decimal numbers to early console. Suggested-by: Borislav Petkov (AMD) <[email protected]> Signed-off-by: H. Peter Anvin (Intel) <[email protected]> Signed-off-by: Jun'ichi Nomura <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/lkml/20240123112624.GBZa-iYP1l9SSYtr-V@fat_crate.local/ Link: https://lore.kernel.org/r/[email protected]
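For illustration, a minimal decimal-printing routine of this kind could look like the sketch below; the use of __putstr() as the underlying early-console output routine is an assumption, and this is not necessarily the exact implementation added by the patch.

    /*
     * Illustrative sketch only: convert an unsigned long to decimal and
     * emit it through the decompressor's early-console string routine
     * (assumed here to be __putstr()).
     */
    static void putdec_sketch(unsigned long value)
    {
            char buf[21];           /* fits 2^64-1 plus the terminating NUL */
            char *p = buf + sizeof(buf) - 1;

            *p = '\0';
            do {
                    *--p = '0' + (value % 10);
                    value /= 10;
            } while (value);

            __putstr(p);
    }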
2023-11-30  x86/boot: Ignore NMIs during very early boot  [Jun'ichi Nomura, 1 file, -0/+1]
When there are two racing NMIs on x86, the first NMI invokes the NMI handler and the second NMI is latched until IRET is executed. If panic on NMI and panic kexec are enabled, the first NMI triggers the panic and starts booting the next kernel via kexec. Note that the second NMI is still latched. During the early boot of the next kernel, once an IRET is executed as a result of a page fault, the second NMI is unlatched and invokes the NMI handler. However, the NMI handler is not set up at that early stage of boot, which results in a boot failure. Avoid such problems by setting up a NOP handler for early NMIs.

[ mingo: Refined the changelog. ]

Signed-off-by: Jun'ichi Nomura <[email protected]>
Signed-off-by: Derek Barbosa <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Peter Zijlstra <[email protected]>
2023-10-18  x86/boot: Rename conflicting 'boot_params' pointer to 'boot_params_ptr'  [Ard Biesheuvel, 1 file, -1/+0]
The x86 decompressor is built and linked as a separate executable, but it shares components with the kernel proper, which are either #include'd as C files, or linked into the decompressor as a static library (e.g. the EFI stub).

Both the kernel itself and the decompressor define a global symbol 'boot_params' to refer to the boot_params struct, but in the former case it refers to the struct directly, whereas in the decompressor it refers to a global pointer variable holding the address of the struct boot_params passed by the bootloader or constructed from scratch.

This ambiguity is unfortunate, and makes it impossible to assign this decompressor variable from the x86 EFI stub, given that declaring it as extern results in a clash. So rename the decompressor version (whose scope is limited) to boot_params_ptr.

[ mingo: Renamed 'boot_params_p' to 'boot_params_ptr' for clarity ]

Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: [email protected]
2023-08-07  x86/decompressor: Assign paging related global variables earlier  [Ard Biesheuvel, 1 file, -2/+0]
There is no need to defer the assignment of the paging related global variables 'pgdir_shift' and 'ptrs_per_p4d' until after the trampoline is cleaned up, so assign them as soon as it is clear that 5-level paging will be enabled. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2023-06-06  x86/boot/compressed: Handle unaccepted memory  [Kirill A. Shutemov, 1 file, -0/+10]
The firmware will pre-accept the memory used to run the stub. But, the stub is responsible for accepting the memory into which it decompresses the main kernel. Accept memory just before decompression starts. The stub is also responsible for choosing a physical address in which to place the decompressed kernel image. The KASLR mechanism will randomize this physical address. Since the accepted memory region is relatively small, KASLR would be quite ineffective if it only used the pre-accepted area (EFI_CONVENTIONAL_MEMORY). Ensure that KASLR randomizes among the entire physical address space by also including EFI_UNACCEPTED_MEMORY. Signed-off-by: Kirill A. Shutemov <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Liam Merwick <[email protected]> Reviewed-by: Tom Lendacky <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2023-04-04  x86/boot: Centralize __pa()/__va() definitions  [Kirill A. Shutemov, 1 file, -0/+9]
Replace multiple __pa()/__va() definitions with a single one in misc.h. Signed-off-by: Kirill A. Shutemov <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Reviewed-by: Mike Rapoport <[email protected]> Reviewed-by: Dave Hansen <[email protected]> Link: https://lore.kernel.org/all/20230330114956.20342-2-kirill.shutemov%40linux.intel.com
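Since the decompressor runs identity-mapped, such definitions can be simple casts. A sketch of what a centralized definition in misc.h might look like (the exact form in the tree may differ):

    /* In the identity-mapped decompressor, physical == virtual. */
    #define __pa(x)        ((unsigned long)(x))
    #define __va(x)        ((void *)((unsigned long)(x)))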
2023-01-19  x86/sev: Add SEV-SNP guest feature negotiation support  [Nikunj A Dadhania, 1 file, -0/+2]
The hypervisor can enable various new features (SEV_FEATURES[1:63]) and start a SNP guest. Some of these features need guest side implementation. If any of these features are enabled without it, the behavior of the SNP guest will be undefined. It may fail booting in a non-obvious way making it difficult to debug. Instead of allowing the guest to continue and have it fail randomly later, detect this early and fail gracefully. The SEV_STATUS MSR indicates features which the hypervisor has enabled. While booting, SNP guests should ascertain that all the enabled features have guest side implementation. In case a feature is not implemented in the guest, the guest terminates booting with GHCB protocol Non-Automatic Exit(NAE) termination request event, see "SEV-ES Guest-Hypervisor Communication Block Standardization" document (currently at https://developer.amd.com/wp-content/resources/56421.pdf), section "Termination Request". Populate SW_EXITINFO2 with mask of unsupported features that the hypervisor can easily report to the user. More details in the AMD64 APM Vol 2, Section "SEV_STATUS MSR". [ bp: - Massage. - Move snp_check_features() call to C code. Note: the CC:stable@ aspect here is to be able to protect older, stable kernels when running on newer hypervisors. Or not "running" but fail reliably and in a well-defined manner instead of randomly. ] Fixes: cbd3d4f7c4e5 ("x86/sev: Check SEV-SNP features support") Signed-off-by: Nikunj A Dadhania <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Tom Lendacky <[email protected]> Cc: <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-08-24  x86/boot: Don't propagate uninitialized boot_params->cc_blob_address  [Michael Roth, 1 file, -1/+11]
In some cases, bootloaders will leave boot_params->cc_blob_address uninitialized rather than zeroing it out. This field is only meant to be set by the boot/compressed kernel in order to pass information to the uncompressed kernel when SEV-SNP support is enabled. Therefore, there are no cases where the bootloader-provided values should be treated as anything other than garbage. Otherwise, the uncompressed kernel may attempt to access this bogus address, leading to a crash during early boot. Normally, sanitize_boot_params() would be used to clear out such fields but that happens too late: sev_enable() may have already initialized it to a valid value that should not be zeroed out. Instead, have sev_enable() zero it out unconditionally beforehand. Also ensure this happens for !CONFIG_AMD_MEM_ENCRYPT as well by also including this handling in the sev_enable() stub function. [ bp: Massage commit message and comments. ] Fixes: b190a043c49a ("x86/sev: Add SEV-SNP feature detection/setup") Reported-by: Jeremi Piotrowski <[email protected]> Reported-by: [email protected] Signed-off-by: Michael Roth <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: [email protected] Link: https://bugzilla.kernel.org/show_bug.cgi?id=216387 Link: https://lore.kernel.org/r/[email protected]
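A sketch of the stub-side handling described above, assuming the field name from the commit text; the real stub may differ in detail:

    #ifndef CONFIG_AMD_MEM_ENCRYPT
    /*
     * Even without SEV support compiled in, scrub the bootloader-provided
     * value so the uncompressed kernel never sees garbage.
     */
    static inline void sev_enable(struct boot_params *bp)
    {
            if (bp)
                    bp->cc_blob_address = 0;
    }
    #endif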
2022-05-23  Merge tag 'x86_tdx_for_v5.19_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds, 1 file, -1/+3]
Pull Intel TDX support from Borislav Petkov:

 "Intel Trust Domain Extensions (TDX) support.

  This is the Intel version of a confidential computing solution called Trust Domain Extensions (TDX). This series adds support to run the kernel as part of a TDX guest. It provides similar guest protections to AMD's SEV-SNP like guest memory and register state encryption, memory integrity protection and a lot more.

  Design-wise, it differs from AMD's solution considerably: it uses a software module which runs in a special CPU mode called Secure Arbitration Mode (SEAM). As the name suggests, this module serves as sort of an arbiter which the confidential guest calls for services it needs during its lifetime.

  Just like AMD's SNP set, this series reworks and streamlines certain parts of x86 arch code so that this feature can be properly accommodated"

* tag 'x86_tdx_for_v5.19_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (34 commits)
  x86/tdx: Fix RETs in TDX asm
  x86/tdx: Annotate a noreturn function
  x86/mm: Fix spacing within memory encryption features message
  x86/kaslr: Fix build warning in KASLR code in boot stub
  Documentation/x86: Document TDX kernel architecture
  ACPICA: Avoid cache flush inside virtual machines
  x86/tdx/ioapic: Add shared bit for IOAPIC base address
  x86/mm: Make DMA memory shared for TD guest
  x86/mm/cpa: Add support for TDX shared memory
  x86/tdx: Make pages shared in ioremap()
  x86/topology: Disable CPU online/offline control for TDX guests
  x86/boot: Avoid #VE during boot for TDX platforms
  x86/boot: Set CR0.NE early and keep it set during the boot
  x86/acpi/x86/boot: Add multiprocessor wake-up support
  x86/boot: Add a trampoline for booting APs via firmware handoff
  x86/tdx: Wire up KVM hypercalls
  x86/tdx: Port I/O: Add early boot support
  x86/tdx: Port I/O: Add runtime hypercalls
  x86/boot: Port I/O: Add decompression-time support for TDX
  x86/boot: Port I/O: Allow to hook up alternative helpers
  ...
2022-04-17  x86/boot: Add an efi.h header for the decompressor  [Borislav Petkov, 1 file, -1/+2]
Copy the needed symbols only and remove the kernel proper includes. No functional changes. Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-04-07  x86/boot: Port I/O: Allow to hook up alternative helpers  [Kirill A. Shutemov, 1 file, -1/+1]
Port I/O instructions trigger #VE in the TDX environment. In response to the exception, the kernel emulates these instructions using hypercalls. But during early boot, at the decompression stage, it is cumbersome to deal with #VE. It is cleaner to go to hypercalls directly, bypassing #VE handling.

Add a way to hook up alternative port I/O helpers in the boot stub with a new pio_ops structure. For now, set the ops structure to just call the normal I/O operation functions.

The out*()/in*() macros are redefined to use the pio_ops callbacks, which eliminates the need to change call sites. io_delay() is changed to use the port I/O helper instead of inline assembly.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Dave Hansen <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
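A rough sketch of the hook mechanism being described; the member names (f_inb/f_outb) and the set of widths shown are assumptions for illustration:

    /* Indirect port I/O through replaceable callbacks (sketch). */
    struct port_io_ops {
            u8   (*f_inb)(u16 port);
            void (*f_outb)(u8 v, u16 port);
            /* ...remaining 16/32-bit variants... */
    };

    extern struct port_io_ops pio_ops;

    /* Call sites keep using inb()/outb(); only the ops table changes. */
    #define inb(port)        pio_ops.f_inb(port)
    #define outb(v, port)    pio_ops.f_outb(v, port)

The benefit of routing through the table is that a TDX guest can later swap in hypercall-based callbacks without touching any caller.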
2022-04-07  x86: Consolidate port I/O helpers  [Kirill A. Shutemov, 1 file, -1/+1]
There are two implementations of port I/O helpers: one in the kernel and one in the boot stub. Move the helpers required for both to <asm/shared/io.h> and use the one implementation everywhere. Signed-off-by: Kirill A. Shutemov <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
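For reference, the single-byte port accessors shared between the kernel and the boot stub boil down to the usual inline-assembly wrappers; this is a sketch rather than the exact <asm/shared/io.h> contents:

    static inline void outb(u8 value, u16 port)
    {
            /* Write AL to the port in DX (or an 8-bit immediate). */
            asm volatile("outb %0, %1" : : "a" (value), "Nd" (port));
    }

    static inline u8 inb(u16 port)
    {
            u8 value;

            /* Read a byte from the port into AL. */
            asm volatile("inb %1, %0" : "=a" (value) : "Nd" (port));
            return value;
    }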
2022-04-07  x86/tdx: Detect TDX at early kernel decompression time  [Kuppuswamy Sathyanarayanan, 1 file, -0/+2]
The early decompression code does port I/O for its console output. But, handling the decompression-time port I/O demands a different approach from normal runtime because the IDT required to support #VE based port I/O emulation is not yet set up. Paravirtualizing I/O calls during the decompression step is acceptable because the decompression code doesn't have a lot of call sites to IO instruction. To support port I/O in decompression code, TDX must be detected before the decompression code might do port I/O. Detect whether the kernel runs in a TDX guest. Add an early_is_tdx_guest() interface to query the cached TDX guest status in the decompression code. TDX is detected with CPUID. Make cpuid_count() accessible outside boot/cpuflags.c. TDX detection in the main kernel is very similar. Move common bits into <asm/shared/tdx.h>. The actual port I/O paravirtualization will come later in the series. Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]> Signed-off-by: Kirill A. Shutemov <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Reviewed-by: Tony Luck <[email protected]> Reviewed-by: Dave Hansen <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
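The CPUID-based detection can be sketched as below; the leaf number (0x21) and the "IntelTDX    " vendor string follow the TDX specification as generally documented, and the register-to-byte ordering shown is an assumption rather than a quote of the patch:

    #define TDX_CPUID_LEAF_ID        0x21
    #define TDX_IDENT                "IntelTDX    "

    /* Sketch: detect a TDX guest from the CPUID vendor leaf. */
    static bool is_tdx_guest_sketch(void)
    {
            u32 eax, sig[3];

            /* Signature bytes are assumed to come back in EBX, EDX, ECX order. */
            cpuid_count(TDX_CPUID_LEAF_ID, 0, &eax, &sig[0], &sig[2], &sig[1]);

            return !memcmp(TDX_IDENT, sig, sizeof(sig));
    }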
2022-04-07  x86/compressed/64: Add identity mapping for Confidential Computing blob  [Michael Roth, 1 file, -0/+2]
The run-time kernel will need to access the Confidential Computing blob very early during boot to access the CPUID table it points to. At that stage, it will be relying on the identity-mapped page table set up by the boot/compressed kernel, so make sure the blob and the CPUID table it points to are mapped in advance. [ bp: Massage. ] Signed-off-by: Michael Roth <[email protected]> Signed-off-by: Brijesh Singh <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-04-07  x86/compressed: Export and rename add_identity_map()  [Michael Roth, 1 file, -0/+1]
SEV-specific code will need to add some additional mappings, but doing this within ident_map_64.c requires some SEV-specific helpers to be exported and some SEV-specific struct definitions to be pulled into ident_map_64.c. Instead, export add_identity_map() so SEV-specific (and other subsystem-specific) code can be better contained outside of ident_map_64.c. While at it, rename the function to kernel_add_identity_map(), similar to the kernel_ident_mapping_init() function it relies upon. No functional changes. Suggested-by: Borislav Petkov <[email protected]> Signed-off-by: Michael Roth <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-04-06  x86/compressed/acpi: Move EFI vendor table lookup to helper  [Michael Roth, 1 file, -0/+13]
Future patches for SEV-SNP-validated CPUID will also require early parsing of the EFI configuration. Incrementally move the related code into a set of helpers that can be re-used for that purpose. [ bp: Unbreak unnecessarily broken lines. ] Signed-off-by: Michael Roth <[email protected]> Signed-off-by: Brijesh Singh <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-04-06  x86/compressed/acpi: Move EFI config table lookup to helper  [Michael Roth, 1 file, -0/+9]
Future patches for SEV-SNP-validated CPUID will also require early parsing of the EFI configuration. Incrementally move the related code into a set of helpers that can be re-used for that purpose. [ bp: Remove superfluous zeroing of a stack variable. ] Signed-off-by: Michael Roth <[email protected]> Signed-off-by: Brijesh Singh <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-04-06  x86/compressed/acpi: Move EFI system table lookup to helper  [Michael Roth, 1 file, -0/+6]
Future patches for SEV-SNP-validated CPUID will also require early parsing of the EFI configuration. Incrementally move the related code into a set of helpers that can be re-used for that purpose. Signed-off-by: Michael Roth <[email protected]> Signed-off-by: Brijesh Singh <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-04-06  x86/compressed/acpi: Move EFI detection to helper  [Michael Roth, 1 file, -0/+16]
Future patches for SEV-SNP-validated CPUID will also require early parsing of the EFI configuration. Incrementally move the related code into a set of helpers that can be re-used for that purpose. First, carve out the functionality which determines the EFI environment type the machine is booting on. [ bp: Massage commit message. ] Signed-off-by: Michael Roth <[email protected]> Signed-off-by: Brijesh Singh <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-04-06  x86/compressed: Add helper for validating pages in the decompression stage  [Brijesh Singh, 1 file, -0/+4]
Many of the integrity guarantees of SEV-SNP are enforced through the Reverse Map Table (RMP). Each RMP entry contains the GPA at which a particular page of DRAM should be mapped. The VMs can request the hypervisor to add pages in the RMP table via the Page State Change VMGEXIT defined in the GHCB specification. Inside each RMP entry is a Validated flag; this flag is automatically cleared to 0 by the CPU hardware when a new RMP entry is created for a guest. Each VM page can be either validated or invalidated, as indicated by the Validated flag in the RMP entry. Memory access to a private page that is not validated generates a #VC. A VM must use the PVALIDATE instruction to validate a private page before using it. To maintain the security guarantee of SEV-SNP guests, when transitioning pages from private to shared, the guest must invalidate the pages before asking the hypervisor to change the page state to shared in the RMP table. After the pages are mapped private in the page table, the guest must issue a page state change VMGEXIT to mark the pages private in the RMP table and validate them. Upon boot, BIOS should have validated the entire system memory. During the kernel decompression stage, early_setup_ghcb() uses set_page_decrypted() to make the GHCB page shared (i.e. clear encryption attribute). And while exiting from the decompression, it calls set_page_encrypted() to make the page private. Add snp_set_page_{private,shared}() helpers that are used by set_page_{decrypted,encrypted}() to change the page state in the RMP table. [ bp: Massage commit message and comments. ] Signed-off-by: Brijesh Singh <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
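The ordering constraints described above can be summarized in a sketch; the helper names pvalidate_page() and page_state_change() and the state constants are placeholders for illustration, not the kernel's actual API:

    /* Sketch of the ordering rules only; helper names are hypothetical. */
    static void snp_make_page_shared_sketch(unsigned long paddr)
    {
            /* 1. Invalidate while the page is still private... */
            pvalidate_page(paddr, false);
            /* 2. ...then ask the hypervisor to flip it to shared in the RMP. */
            page_state_change(paddr, SNP_PAGE_STATE_SHARED);
    }

    static void snp_make_page_private_sketch(unsigned long paddr)
    {
            /* 1. Mark the page private in the RMP first... */
            page_state_change(paddr, SNP_PAGE_STATE_PRIVATE);
            /* 2. ...then validate it before the guest uses it. */
            pvalidate_page(paddr, true);
    }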
2022-04-06  x86/compressed/64: Detect/setup SEV/SME features earlier during boot  [Michael Roth, 1 file, -2/+2]
With upcoming SEV-SNP support, SEV-related features need to be initialized earlier during boot, at the same point the initial #VC handler is set up, so that the SEV-SNP CPUID table can be utilized during the initial feature checks. Also, SEV-SNP feature detection will rely on EFI helper functions to scan the EFI config table for the Confidential Computing blob, and so would need to be implemented at least partially in C. Currently set_sev_encryption_mask() is used to initialize the sev_status and sme_me_mask globals that advertise what SEV/SME features are available in a guest. Rename it to sev_enable() to better reflect that (SME is only enabled in the case of SEV guests in the boot/compressed kernel), and move it to just after the stage1 #VC handler is set up so that it can be used to initialize SEV-SNP as well in future patches. While at it, re-implement it as C code so that all SEV feature detection can be better consolidated with upcoming SEV-SNP feature detection, which will also be in C. The 32-bit entry path remains unchanged, as it never relied on the set_sev_encryption_mask() initialization to begin with. [ bp: Massage commit message. ] Signed-off-by: Michael Roth <[email protected]> Signed-off-by: Brijesh Singh <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-11-02  Merge tag 'x86_core_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds, 1 file, -0/+2]
Pull x86 core updates from Borislav Petkov:

 - Do not #GP on userspace use of CLI/STI but pretend it was a NOP to keep old userspace from breaking. Adjust the corresponding iopl selftest to that.

 - Improve stack overflow warnings to say which stack got overflowed and raise the exception stack sizes to 2 pages since overflowing the single page of exception stack is very easy to do nowadays with all the tracing machinery enabled. With that, rip out the custom mapping of AMD SEV's too.

 - A bunch of changes in preparation for FGKASLR like supporting more than 64K section headers in the relocs tool, correct ORC lookup table size to cover the whole kernel .text and other adjustments.

* tag 'x86_core_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  selftests/x86/iopl: Adjust to the faked iopl CLI/STI usage
  vmlinux.lds.h: Have ORC lookup cover entire _etext - _stext
  x86/boot/compressed: Avoid duplicate malloc() implementations
  x86/boot: Allow a "silent" kaslr random byte fetch
  x86/tools/relocs: Support >64K section headers
  x86/sev: Make the #VC exception stacks part of the default stacks storage
  x86: Increase exception stack sizes
  x86/mm/64: Improve stack overflow warnings
  x86/iopl: Fake iopl(3) CLI/STI usage
2021-10-27  x86/boot/compressed: Avoid duplicate malloc() implementations  [Kees Cook, 1 file, -0/+2]
The early malloc() and free() implementation in include/linux/decompress/mm.h (which is also included by the static decompressors) is static. This is fine when the only thing interested in using malloc() is the decompression code, but the x86 early boot environment may use malloc() in a couple places, leading to a potential collision when the static copies of the available memory region ("malloc_ptr") gets reset to the global "free_mem_ptr" value. As it happened, the existing usage pattern was accidentally safe because each user did 1 malloc() and 1 free() before returning and were not nested:

    extract_kernel()  (misc.c)
        choose_random_location()  (kaslr.c)
            mem_avoid_init()
                handle_mem_options()
                    malloc()
                    ...
                    free()
        ...
        parse_elf()  (misc.c)
            malloc()
            ...
            free()

Once the future FGKASLR series is added, however, it will insert additional malloc() calls local to fgkaslr.c in the middle of parse_elf()'s malloc()/free() pair:

    parse_elf()  (misc.c)
        malloc()
        if (...) {
            layout_randomized_image(output, &ehdr, phdrs);
                malloc() <- boom
            ...
        else
            layout_image(output, &ehdr, phdrs);
        free()

To avoid collisions, there must be a single implementation of malloc(). Adjust include/linux/decompress/mm.h so that visibility can be controlled, provide prototypes in misc.h, and implement the functions in misc.c. This also results in a small size savings:

    $ size vmlinux.before vmlinux.after
       text    data     bss     dec     hex filename
    8842314     468  178320 9021102  89a6ae vmlinux.before
    8842240     468  178320 9021028  89a664 vmlinux.after

Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2021-09-25  lib/string: Move helper functions out of string.c  [Kees Cook, 1 file, -0/+2]
The core functions of string.c are those that may be implemented by per-architecture functions, or overloaded by FORTIFY_SOURCE. As a result, it needs to be built with __NO_FORTIFY. Without this, macros will collide with function declarations. This was accidentally working due to -ffreestanding (on some architectures). Make this deterministic by explicitly setting __NO_FORTIFY and move all the helper functions into string_helpers.c so that they gain the fortification coverage they had been missing. Cc: Andrew Morton <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: Andy Lavr <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Bartosz Golaszewski <[email protected]> Acked-by: Andy Shevchenko <[email protected]> Signed-off-by: Kees Cook <[email protected]>
2021-05-12  x86/boot/compressed: Enable -Wundef  [Nick Desaulniers, 1 file, -1/+1]
A discussion around -Wundef showed that there were still a few boolean Kconfigs where #if was used rather than #ifdef to guard different code. Kconfig doesn't define boolean configs, which can result in -Wundef warnings.

arch/x86/boot/compressed/Makefile resets the CFLAGS used for this directory, and doesn't re-enable -Wundef as the top level Makefile does. If re-added, with RANDOMIZE_BASE and X86_NEED_RELOCS disabled, the following warnings are visible:

    arch/x86/boot/compressed/misc.h:82:5: warning: 'CONFIG_RANDOMIZE_BASE' is not defined, evaluates to 0 [-Wundef]
    arch/x86/boot/compressed/misc.c:175:5: warning: 'CONFIG_X86_NEED_RELOCS' is not defined, evaluates to 0 [-Wundef]

Simply fix these and re-enable this warning for this directory.

Suggested-by: Nathan Chancellor <[email protected]>
Suggested-by: Joe Perches <[email protected]>
Signed-off-by: Nick Desaulniers <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Kees Cook <[email protected]>
Reviewed-by: Nathan Chancellor <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
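The class of fix is the usual one for boolean Kconfig symbols; a minimal example of the pattern (not the exact hunks from this patch):

    /* Warns under -Wundef when the boolean option is disabled (symbol undefined): */
    #if CONFIG_RANDOMIZE_BASE
    /* ... KASLR-only declarations ... */
    #endif

    /* Warning-free form for a boolean Kconfig symbol: */
    #ifdef CONFIG_RANDOMIZE_BASE
    /* ... KASLR-only declarations ... */
    #endif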
2021-03-18  x86/boot/compressed/64: Cleanup exception handling before booting kernel  [Joerg Roedel, 1 file, -0/+6]
Disable the exception handling before booting the kernel to make sure any exceptions that happen during early kernel boot are not directed to the pre-decompression code. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-12-22  kasan, x86, s390: update undef CONFIG_KASAN  [Andrey Konovalov, 1 file, -0/+1]
With the introduction of hardware tag-based KASAN, some kernel checks of this kind:

    #ifdef CONFIG_KASAN

will be updated to:

    #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)

x86 and s390 use a trick to #undef CONFIG_KASAN for some of the code that isn't linked with the KASAN runtime and shouldn't have any KASAN annotations. Also #undef CONFIG_KASAN_GENERIC along with CONFIG_KASAN.

Link: https://lkml.kernel.org/r/9d84bfaaf8fabe0fc89f913c9e420a30bd31a260.1606161801.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Signed-off-by: Vincenzo Frascino <[email protected]>
Reviewed-by: Marco Elver <[email protected]>
Acked-by: Vasily Gorbik <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
Tested-by: Vincenzo Frascino <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Branislav Rankov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Evgenii Stepanov <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2020-10-29  x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path  [Joerg Roedel, 1 file, -0/+2]
Check whether the hypervisor reported the correct C-bit when running as an SEV guest. Using a wrong C-bit position could be used to leak sensitive data from the guest to the hypervisor. The check function is in a separate file: arch/x86/kernel/sev_verify_cbit.S so that it can be re-used in the running kernel image. [ bp: Massage. ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Tom Lendacky <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-10  x86/sev-es: Check required CPU features for SEV-ES  [Martin Radev, 1 file, -2/+3]
Make sure the machine supports RDRAND, otherwise there is no trusted source of randomness in the system. To also check this in the pre-decompression stage, make has_cpuflag() not depend on CONFIG_RANDOMIZE_BASE anymore. Signed-off-by: Martin Radev <[email protected]> Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/boot/compressed/64: Unmap GHCB page before booting the kernel  [Joerg Roedel, 1 file, -0/+6]
Force a page-fault on any further accesses to the GHCB page when they shouldn't happen anymore. This will catch any bugs where a #VC exception is raised even though none is expected anymore. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/boot/compressed/64: Setup a GHCB-based VC Exception handler  [Joerg Roedel, 1 file, -0/+7]
Install an exception handler for #VC exception that uses a GHCB. Also add the infrastructure for handling different exit-codes by decoding the instruction that caused the exception and error handling. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/boot/compressed/64: Add set_page_en/decrypted() helpers  [Joerg Roedel, 1 file, -0/+2]
The functions are needed to map the GHCB for SEV-ES guests. The GHCB is used for communication with the hypervisor, so its content must not be encrypted. After the GHCB is not needed anymore it must be mapped encrypted again so that the running kernel image can safely re-use the memory. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/boot/compressed/64: Add stage1 #VC handler  [Joerg Roedel, 1 file, -0/+1]
Add the first handler for #VC exceptions. At stage 1 there is no GHCB yet because the kernel might still be running on the EFI page table. The stage 1 handler is limited to the MSR-based protocol to talk to the hypervisor and can only support CPUID exit-codes, but that is enough to get to stage 2. [ bp: Zap superfluous newlines after rd/wrmsr instruction mnemonics. ] Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/boot/compressed/64: Don't pre-map memory in KASLR code  [Joerg Roedel, 1 file, -10/+0]
With the page-fault handler in place, the identity mapping can be built on-demand. So remove the code which manually creates the mappings and unexport/remove the functions used for it.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/boot/compressed/64: Add page-fault handler  [Joerg Roedel, 1 file, -0/+6]
Install a page-fault handler to add an identity mapping to addresses not yet mapped. Also do some checking whether the error code is sane. This makes non SEV-ES machines use the exception handling infrastructure in the pre-decompressions boot code too, making it less likely to break in the future. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
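The on-demand mapping idea reduces to a handler of roughly this shape; the handler and mapping-helper names follow other entries in this log, but the body is a sketch (the error-code sanity checks mentioned above are omitted), not the exact code:

    void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
    {
            unsigned long address = native_read_cr2();

            /* Identity-map the whole 2M region around the faulting address. */
            address &= PMD_MASK;
            kernel_add_identity_map(address, address + PMD_SIZE);
    }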
2020-09-07  x86/boot/compressed/64: Rename kaslr_64.c to ident_map_64.c  [Joerg Roedel, 1 file, -0/+8]
The file contains only code related to identity-mapped page tables. Rename the file and compile it always in. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-09-07  x86/boot/compressed/64: Add IDT Infrastructure  [Joerg Roedel, 1 file, -0/+5]
Add the code needed to set up an IDT in the early pre-decompression boot code. The IDT is loaded first in startup_64, which is after EfiExitBootServices() has been called, and later reloaded when the kernel image has been relocated to the end of the decompression area. This allows different IDT handlers to be set up before and after the relocation.

Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2020-07-31  x86/kaslr: Replace 'unsigned long long' with 'u64'  [Arvind Sankar, 1 file, -2/+2]
No functional change. Signed-off-by: Arvind Sankar <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2020-03-28  x86/boot/compressed: Fix debug_puthex() parameter type  [Joerg Roedel, 1 file, -1/+1]
In the CONFIG_X86_VERBOSE_BOOTUP=Y case, the debug_puthex() macro just turns into __puthex(), which takes 'unsigned long' as parameter. But in the CONFIG_X86_VERBOSE_BOOTUP=N case, it is a function which takes 'unsigned char *', causing compile warnings when the function is used. Fix the parameter type to get rid of the warnings. Signed-off-by: Joerg Roedel <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
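The shape of the fix: both configurations must expose the same parameter type. A sketch (the macro/stub split is assumed from the commit text):

    #ifdef CONFIG_X86_VERBOSE_BOOTUP
    /* Verbose boot: forwards to __puthex(), which takes unsigned long. */
    #define debug_puthex(x)        __puthex(x)
    #else
    /* Quiet boot: the stub must accept the same type to avoid warnings. */
    static inline void debug_puthex(unsigned long value) { }
    #endif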
2019-07-18  x86, boot: Remove multiple copy of static function sanitize_boot_params()  [Zhenzhong Duan, 1 file, -1/+0]
Kernel build warns:

    'sanitize_boot_params' defined but not used [-Wunused-function]

in the following files:

    arch/x86/boot/compressed/cmdline.c
    arch/x86/boot/compressed/error.c
    arch/x86/boot/compressed/early_serial_console.c
    arch/x86/boot/compressed/acpi.c

That's because they each include misc.h, which includes a definition of sanitize_boot_params() via bootparam_utils.h. Remove the inclusion from misc.h and have the C files include bootparam_utils.h directly.

Signed-off-by: Zhenzhong Duan <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2019-03-27  x86/boot: Fix incorrect ifdeffery scope  [Baoquan He, 1 file, -2/+2]
The declarations related to immovable memory handling are out of the BOOT_COMPRESSED_MISC_H #ifdef scope, wrap them inside. Signed-off-by: Baoquan He <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Chao Fan <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Juergen Gross <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-02-06  x86/boot: Fix randconfig build error due to MEMORY_HOTREMOVE  [Borislav Petkov, 1 file, -1/+1]
When building randconfigs, one of the failures is:

    ld: arch/x86/boot/compressed/kaslr.o: in function `choose_random_location':
    kaslr.c:(.text+0xbf7): undefined reference to `count_immovable_mem_regions'
    ld: kaslr.c:(.text+0xcbe): undefined reference to `immovable_mem'
    make[2]: *** [arch/x86/boot/compressed/vmlinux] Error 1

because CONFIG_ACPI is not enabled in this particular .config but CONFIG_MEMORY_HOTREMOVE is and count_immovable_mem_regions() is unresolvable because it is defined in compressed/acpi.c which is the compilation unit that depends on CONFIG_ACPI.

Add CONFIG_ACPI to the explicit dependencies for MEMORY_HOTREMOVE.

Signed-off-by: Borislav Petkov <[email protected]>
Cc: Chao Fan <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
2019-02-06  x86/boot: Fix cmdline_find_option() prototype visibility  [Borislav Petkov, 1 file, -2/+0]
ac09c5f43cf6 ("x86/boot: Build the command line parsing code unconditionally") enabled building the command line parsing code unconditionally but it forgot to remove the respective ifdeffery around the prototypes in the misc.h header, leading to

    arch/x86/boot/compressed/acpi.c: In function ‘get_acpi_rsdp’:
    arch/x86/boot/compressed/acpi.c:37:8: warning: implicit declaration of function ‘cmdline_find_option’ [-Wimplicit-function-declaration]
      ret = cmdline_find_option("acpi_rsdp", val, MAX_ADDR_LEN);
            ^~~~~~~~~~~~~~~~~~~

for configs where neither CONFIG_EARLY_PRINTK nor CONFIG_RANDOMIZE_BASE was defined.

Drop the ifdeffery in the header too.

Fixes: ac09c5f43cf6 ("x86/boot: Build the command line parsing code unconditionally")
Reported-by: kbuild test robot <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Cc: Chao Fan <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/5c51daf0.83pQEkvDZILqoSYW%[email protected]
Link: https://lkml.kernel.org/r/[email protected]
2019-02-01  x86/boot/KASLR: Limit KASLR to extract the kernel in immovable memory only  [Chao Fan, 1 file, -0/+1]
KASLR may randomly choose a range which is located in movable memory regions. As a result, this will break memory hotplug and make the movable memory chosen by KASLR immovable. Therefore, limit KASLR to choose memory regions in the immovable range after consulting the SRAT table. [ bp: - Rewrite commit message. - Trim comments. ] Signed-off-by: Chao Fan <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Baoquan He <[email protected]> Cc: [email protected] Cc: Dave Hansen <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: [email protected] Cc: Ingo Molnar <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: Kees Cook <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: [email protected] Cc: Thomas Gleixner <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-02-01  x86/boot: Parse SRAT table and count immovable memory regions  [Chao Fan, 1 file, -0/+10]
Parse SRAT for the immovable memory regions and use that information to control which offset KASLR selects so that it doesn't overlap with any movable region. [ bp: - Move struct mem_vector where it is visible so that it builds. - Correct comments. - Rewrite commit message. ] Signed-off-by: Chao Fan <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Baoquan He <[email protected]> Cc: <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Juergen Gross <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-02-01  x86/boot: Early parse RSDP and save it in boot_params  [Chao Fan, 1 file, -0/+7]
The RSDP is needed by KASLR so parse it early and save it in boot_params.acpi_rsdp_addr, before KASLR setup runs. RSDP is needed by other kernel facilities so have the parsing code built-in instead of a long "depends on" line in Kconfig. [ bp: - Trim commit message and comments - Add CONFIG_ACPI dependency in the Makefile - Move ->acpi_rsdp_addr assignment with the rest of boot_params massaging in extract_kernel(). ] Signed-off-by: Chao Fan <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: [email protected] Cc: Cao jin <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: [email protected] Cc: Ingo Molnar <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: Kees Cook <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: [email protected] Cc: Thomas Gleixner <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-02-01  x86/boot: Add "acpi_rsdp=" early parsing  [Chao Fan, 1 file, -0/+3]
KASLR may randomly choose offsets which are located in movable memory regions resulting in the movable memory becoming immovable. The ACPI SRAT (System/Static Resource Affinity Table) describes memory ranges including ranges of memory provided by hot-added memory devices. In order to access SRAT, one needs the Root System Description Pointer (RSDP) with which to find the Root/Extended System Description Table (R/XSDT) which then contains the system description tables of which SRAT is one of. In case the RSDP address has been passed on the command line (kexec-ing a second kernel) parse it from there. [ bp: Rewrite the commit message and cleanup the code. ] Signed-off-by: Chao Fan <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: [email protected] Cc: [email protected] Cc: "H. Peter Anvin" <[email protected]> Cc: [email protected] Cc: Ingo Molnar <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: Kees Cook <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: [email protected] Cc: Thomas Gleixner <[email protected]> Cc: Tom Lendacky <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
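An illustrative way to pick up such an override with the decompressor's cmdline_find_option(); the buffer size and the numeric-conversion helper used here are assumptions, not a quote of the patch:

    static unsigned long get_cmdline_acpi_rsdp_sketch(void)
    {
            unsigned long addr = 0;
            char val[32];        /* assumed large enough for the address string */

            if (cmdline_find_option("acpi_rsdp", val, sizeof(val)) > 0)
                    addr = simple_strtoull(val, NULL, 0);        /* conversion helper assumed available */

            return addr;
    }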
2018-09-03  x86/paravirt: Introduce new config option PARAVIRT_XXL  [Juergen Gross, 1 file, -0/+1]
A large amount of paravirt ops is used by Xen PV guests only. Add a new config option PARAVIRT_XXL which is selected by XEN_PV. Later we can put the Xen PV only paravirt ops under the PARAVIRT_XXL umbrella. Since irq related paravirt ops are used only by VSMP and Xen PV, let VSMP select PARAVIRT_XXL, too, in order to enable moving the irq ops under PARAVIRT_XXL. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]