path: root/arch/arm64/kernel
2020-04-29  arm64: vdso: use consistent 'abi' nomenclature  (Mark Rutland; 1 file, -35/+34)
The current code doesn't use a consistent naming scheme for structures, enums, or variables, making it harder than necessary to determine the relationship between these. Let's make this easier by consistently using 'vdso_abi' nomenclature. The 'vdso_lookup' array is renamed to 'vdso_info' to describe what it contains rather than how it is consumed. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Vincenzo Frascino <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
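The end state is roughly the shape below, sketched as a paraphrase of the patch rather than a verbatim excerpt (field details are illustrative):

	/* One ABI enum keys the structures, variables and lookup array. */
	enum vdso_abi {
		VDSO_ABI_AA64,	/* native AArch64 vDSO */
		VDSO_ABI_AA32,	/* compat AArch32 vDSO */
	};

	static struct vdso_abi_info {
		const char *name;		/* mapping name, e.g. "vdso" */
		const char *vdso_code_start;	/* bounds of the vDSO image */
		const char *vdso_code_end;
		/* ... per-ABI mapping state ... */
	} vdso_info[] __ro_after_init = {
		[VDSO_ABI_AA64] = { .name = "vdso" },
		[VDSO_ABI_AA32] = { .name = "vdso32" },
	};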
2020-04-29  arm64: vdso: simplify arch_vdso_type ifdeffery  (Mark Rutland; 1 file, -10/+5)
Currently we have some ifdeffery to determine the number of elements in enum arch_vdso_type as VDSO_TYPES, rather than the usual pattern of having the enum define this:

| enum foo_type {
|	FOO_TYPE_A,
|	FOO_TYPE_B,
| #ifdef CONFIG_C
|	FOO_TYPE_C,
| #endif
|	NR_FOO_TYPES
| }

... however, given we only use this number to size the vdso_lookup[] array, this is redundant anyway as the compiler can automatically size the array to fit all defined elements. So let's remove VDSO_TYPES to simplify the code. At the same time, let's use designated initializers for the array elements so that these are guaranteed to be at the expected indices, regardless of how we modify the structure. For clarity, the redundant explicit initialization of the enum elements is dropped. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Vincenzo Frascino <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
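The replacement is the standard C idiom sketched below (generic names, not the kernel's): designated initializers pin each entry to its enum index, and the compiler sizes the array from the largest index used, so no trailing NR_ count is needed:

	enum foo_type {
		FOO_TYPE_A,
		FOO_TYPE_B,
	#ifdef CONFIG_C
		FOO_TYPE_C,
	#endif
	};

	/* Sized automatically; entries stay at their enum indices even if
	 * the enum is reordered or conditionally extended. */
	static const char *foo_names[] = {
		[FOO_TYPE_A] = "a",
		[FOO_TYPE_B] = "b",
	#ifdef CONFIG_C
		[FOO_TYPE_C] = "c",
	#endif
	};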
2020-04-29  arm64: vdso: remove aarch32_vdso_pages[]  (Mark Rutland; 1 file, -7/+12)
The aarch32_vdso_pages[] array is unnecessarily confusing. We only ever use the C_VECTORS and C_SIGPAGE slots, and the other slots are unused despite having corresponding mappings (sharing pages with the AArch64 vDSO). Let's make this clearer by using separate variables for the vectors page and the sigreturn page. A subsequent patch will clean up the C_* naming and conflation of pages with mappings. Note that since both the vectors page and sig page are single pages, and the mapping is a single page long, their pages arrays do not need to be NULL-terminated (and this was not the case with the existing code for the sig page, as it was the last entry in the aarch32_vdso_pages array). There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Vincenzo Frascino <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
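After the change, the two used slots become standalone variables along these lines (names per the commit text; the exact definitions are a sketch):

	/* One page each, so no NULL-terminated pages array is required. */
	static struct page *aarch32_vectors_page __ro_after_init;
	static struct page *aarch32_sig_page __ro_after_init;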
2020-04-28  Merge branch 'work.sysctl' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Daniel Borkmann; 2 files, -3/+2)
Pull in Christoph Hellwig's series that changes the sysctl's ->proc_handler methods to take kernel pointers instead. It gets rid of the set_fs address space overrides used by BPF. As per discussion, pull in the feature branch into bpf-next as it relates to BPF sysctl progs. Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]/T/
2020-04-28  efi/libstub/arm64: align PE/COFF sections to segment alignment  (Ard Biesheuvel; 2 files, -2/+3)
The arm64 kernel's segment alignment is fixed at 64 KB for any page size, and relocatable kernels are able to fix up any misalignment of the kernel image with respect to the 2 MB section alignment that is mandated by the arm64 boot protocol. Let's increase the PE/COFF section alignment to the same value, so that kernels loaded by the UEFI PE/COFF loader are guaranteed to end up at an address that doesn't require any reallocation to be done if the kernel is relocatable. Signed-off-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: vdso: Add '-Bsymbolic' to ldflags  (Vincenzo Frascino; 1 file, -3/+5)
Commit 28b1a824a4f44 ("arm64: vdso: Substitute gettimeofday() with C implementation") introduced an unused 'VDSO_LDFLAGS' variable to the vdso Makefile, suggesting that we should be passing '-Bsymbolic' to the linker, as we do when linking the compat vDSO. Although it's not strictly necessary to pass this flag, it would be required if we were to add any internal references to the exported symbols. It's also consistent with how we link the compat vDSO, so, since there's no real downside to passing it, add '-Bsymbolic' to the ldflags for the native vDSO. Fixes: 28b1a824a4f44 ("arm64: vdso: Substitute gettimeofday() with C implementation") Reported-by: Geoff Levand <[email protected]> Signed-off-by: Vincenzo Frascino <[email protected]> Cc: Will Deacon <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64/kernel: Fix range on invalidating dcache for boot page tables  (Gavin Shan; 2 files, -3/+10)
Prior to commit 8eb7e28d4c642c31 ("arm64/mm: move runtime pgds to rodata"), idmap_pg_dir, tramp_pg_dir, reserved_ttbr0, swapper_pg_dir, and init_pg_dir were contiguous at the end of the kernel image. The maintenance at the end of __create_page_tables assumed these were contiguous, and affected everything from the start of idmap_pg_dir to the end of init_pg_dir. That commit moved all but init_pg_dir into the .rodata section, with other data placed between idmap_pg_dir and init_pg_dir, but did not update the maintenance. Hence the maintenance is performed on much more data than necessary (but as the bootloader previously made this clean to the PoC there is no functional problem). As we only alter idmap_pg_dir and init_pg_dir, we only need to perform maintenance for these. As the other dirs are in .rodata, the bootloader will have initialised them as expected and cleaned them to the PoC. The kernel will initialize them as necessary after enabling the MMU. This patch reworks the maintenance to only cover idmap_pg_dir and init_pg_dir to avoid this unnecessary work. Signed-off-by: Gavin Shan <[email protected]> Reviewed-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: cpufeature: Add an overview comment for the cpufeature framework  (Will Deacon; 1 file, -0/+50)
Now that Suzuki isn't within throwing distance, I thought I'd better add a rough overview comment to cpufeature.c so that it doesn't take me days to remember how it works next time. Reviewed-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: cpufeature: Relax checks for AArch32 support at EL[0-2]  (Will Deacon; 1 file, -7/+3)
We don't need to be quite as strict about mismatched AArch32 support, which is good because the friendly hardware folks have been busy mismatching this to their hearts' content.
* We don't care about EL2 or EL3 (there are silly comments concerning the latter, so remove those)
* EL1 support is gated by the ARM64_HAS_32BIT_EL1 capability and handled gracefully when a mismatch occurs
* EL0 support is gated by the ARM64_HAS_32BIT_EL0 capability and handled gracefully when a mismatch occurs
Relax the AArch32 checks to FTR_NONSTRICT. Tested-by: Sai Prakash Ranjan <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: cpufeature: Relax AArch32 system checks if EL1 is 64-bit only  (Will Deacon; 1 file, -1/+32)
If AArch32 is not supported at EL1, the AArch32 feature register fields no longer advertise support for some system features:
* ISAR4.SMC
* PFR1.{Virt_frac, Sec_frac, Virtualization, Security, ProgMod}
In which case, we don't need to emit "SANITY CHECK" failures for all of them. Add logic to relax the strictness of individual feature register fields at runtime and use this for the fields above if 32-bit EL1 is not supported. Tested-by: Sai Prakash Ranjan <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
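The relaxation boils down to clearing the field's bit in the register's strict mask, along the lines of this sketch (helper names follow the cpufeature code; details are assumptions based on the commit text):

	/* Stop treating one feature-register field as strict, so a mismatch
	 * on it no longer triggers a "SANITY CHECK" warning and taint. */
	static void relax_cpu_ftr_reg(u32 sys_id, int field)
	{
		const struct arm64_ftr_bits *ftrp;
		struct arm64_ftr_reg *regp = get_arm64_ftr_reg(sys_id);

		if (!regp)
			return;

		for (ftrp = regp->ftr_bits; ftrp->width; ftrp++) {
			if (ftrp->shift == field) {
				regp->strict_mask &= ~arm64_ftr_mask(ftrp);
				break;
			}
		}
	}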
2020-04-28  arm64: cpufeature: Factor out checking of AArch32 features  (Will Deacon; 1 file, -47/+65)
update_cpu_features() is pretty large, so split out the checking of the AArch32 features into a separate function and call it after checking the AArch64 features. Tested-by: Sai Prakash Ranjan <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: cpufeature: Remove redundant call to id_aa64pfr0_32bit_el0()  (Will Deacon; 1 file, -3/+1)
There's no need to call id_aa64pfr0_32bit_el0() twice because the sanitised value of ID_AA64PFR0_EL1 has already been updated for the CPU being onlined. Remove the redundant function call. Tested-by: Sai Prakash Ranjan <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: cpufeature: Add CPU capability for AArch32 EL1 support  (Will Deacon; 1 file, -0/+12)
Although we emit a "SANITY CHECK" warning and taint the kernel if we detect a CPU mismatch for AArch32 support at EL1, we still online the CPU with disastrous consequences for any running 32-bit VMs. Introduce a capability for AArch32 support at EL1 so that late onlining of incompatible CPUs is forbidden. Tested-by: Sai Prakash Ranjan <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Acked-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
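The table entry for such a capability looks roughly like the sketch below; the exact field values are assumptions following the existing cpufeature conventions:

	{
		.desc = "32-bit EL1 Support",
		.capability = ARM64_HAS_32BIT_EL1,
		/* A system feature: a late CPU lacking it may not be onlined. */
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		.sys_reg = SYS_ID_AA64PFR0_EL1,
		.sign = FTR_UNSIGNED,
		.field_pos = ID_AA64PFR0_EL1_SHIFT,
		.min_field_value = ID_AA64PFR0_EL1_32BIT_64BIT,
	},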
2020-04-28  arm64: cpufeature: Spell out register fields for ID_ISAR4 and ID_PFR1  (Will Deacon; 1 file, -2/+26)
In preparation for runtime updates to the strictness of some AArch32 features, spell out the register fields for ID_ISAR4 and ID_PFR1 to make things clearer to read. Note that this isn't functionally necessary, as the feature arrays themselves are not modified dynamically and remain 'const'. Tested-by: Sai Prakash Ranjan <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
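Spelled out, each 4-bit field gets its own entry in the usual ARM64_FTR_BITS form, sketched here for ID_ISAR4 (the *_SHIFT macro names are assumptions):

	static const struct arm64_ftr_bits ftr_id_isar4[] = {
		ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SWP_FRAC_SHIFT, 4, 0),
		ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_PSR_M_SHIFT, 4, 0),
		/* ... one entry per field, including the SMC field relaxed by
		 * the previous patch when EL1 is 64-bit only ... */
		ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_UNPRIV_SHIFT, 4, 0),
		ARM64_FTR_END,
	};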
2020-04-28  arm64: cpufeature: Relax check for IESB support  (Sai Prakash Ranjan; 1 file, -1/+1)
We don't care if IESB is supported or not as we always set SCTLR_ELx.IESB and, if it works, that's really great. Relax the ID_AA64MMFR2.IESB cpufeature check so that we don't warn and taint if it's mismatched. [will: rewrote commit message] Signed-off-by: Sai Prakash Ranjan <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: smp: Make cpus_stuck_in_kernel static  (Zou Wei; 1 file, -1/+1)
Fix the following sparse warning: arch/arm64/kernel/smp.c:68:5: warning: symbol 'cpus_stuck_in_kernel' was not declared. Should it be static? Reported-by: Hulk Robot <[email protected]> Signed-off-by: Zou Wei <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: entry: remove unneeded semicolon in el1_sync_handler()  (Jason Yan; 1 file, -1/+1)
Fix the following coccicheck warning: arch/arm64/kernel/entry-common.c:97:2-3: Unneeded semicolon Reported-by: Hulk Robot <[email protected]> Signed-off-by: Jason Yan <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64/kernel: vmlinux.lds: drop redundant discard/keep macros  (Ard Biesheuvel; 1 file, -8/+2)
ARM_EXIT_KEEP and ARM_EXIT_DISCARD are always defined in the same way, so we don't really need them in the first place. Signed-off-by: Ard Biesheuvel <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: kexec_file: Avoid temp buffer for RNG seed  (George Spelvin; 1 file, -4/+4)
After using get_random_bytes(), you want to wipe the buffer afterward so the seed remains secret. In this case, we can eliminate the temporary buffer entirely. fdt_setprop_placeholder() returns a pointer to the property value buffer, allowing us to put the random data directly in there without using a temporary buffer at all. Faster and less stack all in one. Signed-off-by: George Spelvin <[email protected]> Acked-by: Will Deacon <[email protected]> Cc: Hsin-Yi Wang <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
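libfdt makes this direct: fdt_setprop_placeholder() reserves the property inside the blob and returns a pointer to it, which get_random_bytes() can fill in place. A minimal sketch (the RNG_SEED_SIZE constant and the error handling are abbreviated assumptions):

	void *prop;
	int ret;

	/* Reserve space for /chosen/rng-seed inside the FDT blob... */
	ret = fdt_setprop_placeholder(fdt, node, "rng-seed",
				      RNG_SEED_SIZE, &prop);
	if (ret)
		return -EINVAL;

	/* ...and write the random bytes straight into it: no temporary
	 * buffer on the stack, so nothing secret to wipe afterwards. */
	get_random_bytes(prop, RNG_SEED_SIZE);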
2020-04-28  arm64: rename stext to primary_entry  (Ard Biesheuvel; 4 files, -14/+13)
For historical reasons, the primary entry routine living somewhere in the inittext section is called stext(), which is confusing, given that there is also a section marker called _stext which lives at a fixed offset in the image (either 64 or 4096 bytes, depending on whether CONFIG_EFI is enabled). Let's rename stext to primary_entry(), which is a better description and reflects the secondary_entry() routine that already exists for SMP boot. Signed-off-by: Ard Biesheuvel <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Mark Brown <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: simplify ptrauth initialization  (Mark Rutland; 4 files, -14/+10)
Currently __cpu_setup conditionally initializes the address authentication keys and enables them in SCTLR_EL1, doing so differently for the primary CPU and secondary CPUs, and skipping this work for CPUs returning from an idle state. For the latter case, cpu_do_resume restores the keys and SCTLR_EL1 value after the MMU has been enabled. This flow is rather difficult to follow, so instead let's move the primary and secondary CPU initialization into their respective boot paths. By following the example of cpu_do_resume and doing so once the MMU is enabled, we can always initialize the keys from the values in thread_struct, and avoid the machinery necessary to pass the keys in secondary_data or open-coding initialization for the boot CPU. This means we perform an additional RMW of SCTLR_EL1, but we already do this in the cpu_do_resume path, and for other features in cpufeature.c, so this isn't a major concern in a bringup path. Note that even while the enable bits are clear, the key registers are accessible. As this now renders the argument to __cpu_setup redundant, let's also remove that entirely. Future extensions can follow a similar approach to initialize values that differ for primary/secondary CPUs. Signed-off-by: Mark Rutland <[email protected]> Tested-by: Amit Daniel Kachhap <[email protected]> Reviewed-by: Amit Daniel Kachhap <[email protected]> Cc: Amit Daniel Kachhap <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: James Morse <[email protected]> Cc: Suzuki K Poulose <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-28  arm64: remove ptrauth_keys_install_kernel sync arg  (Mark Rutland; 1 file, -2/+2)
The 'sync' argument to the ptrauth_keys_install_kernel macro is somewhat opaque at callsites, so instead let's have regular and _nosync variants of the macro to make this a little more obvious. Signed-off-by: Mark Rutland <[email protected]> Cc: Amit Daniel Kachhap <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2020-04-27  sysctl: pass kernel pointers to ->proc_handler  (Christoph Hellwig; 2 files, -3/+2)
Instead of having all the sysctl handlers deal with user pointers, which is rather hairy in terms of the BPF interaction, copy the input to and from userspace in common code. This also means that the strings are always NUL-terminated by the common code, making the API a little bit safer. As most handlers just pass the data through to one of the common handlers, a lot of the changes are mechanical. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Andrey Ignatov <[email protected]> Signed-off-by: Al Viro <[email protected]>
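Concretely, the handler prototype loses its __user annotation; handlers now see a kernel buffer that common code has already copied in (and, for strings, NUL-terminated). Sketched before/after:

	/* Before: each handler copied to/from userspace itself. */
	typedef int proc_handler(struct ctl_table *ctl, int write,
				 void __user *buffer, size_t *lenp, loff_t *ppos);

	/* After: common code does the copying; handlers get kernel pointers. */
	typedef int proc_handler(struct ctl_table *ctl, int write,
				 void *buffer, size_t *lenp, loff_t *ppos);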
2020-04-15  arm64: vdso: don't free unallocated pages  (Mark Rutland; 1 file, -12/+1)
The aarch32_vdso_pages[] array never has entries allocated in the C_VVAR or C_VDSO slots, and as the array is zero initialized these contain NULL. However in __aarch32_alloc_vdso_pages() when aarch32_alloc_kuser_vdso_page() fails we attempt to free the page whose struct page is at NULL, which is obviously nonsensical. This patch removes the erroneous page freeing. Fixes: 7c1deeeb0130 ("arm64: compat: VDSO setup for compat layer") Cc: <[email protected]> # 5.3.x- Cc: Vincenzo Frascino <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2020-04-09  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 1 file, -1/+1)
Pull arm64 fixes from Catalin Marinas:
- Ensure that the compiler and linker versions are aligned so that ld doesn't complain about not understanding a .note.gnu.property section (emitted when pointer authentication is enabled).
- Force -mbranch-protection=none when the feature is not enabled, in case a compiler may choose a different default value.
- Remove CONFIG_DEBUG_ALIGN_RODATA. It was never in defconfig and rarely enabled.
- Fix the instruction mask used to check for 16-bit Thumb-2 instructions in the emulation of the SETEND instruction (it could match the bottom half of a 32-bit Thumb-2 instruction).
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: armv8_deprecated: Fix undef_hook mask for thumb setend
  arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature
  arm64: Always force a branch protection mode when the compiler has one
  arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch
  init/kconfig: Add LD_VERSION
2020-04-08  arm64: armv8_deprecated: Fix undef_hook mask for thumb setend  (Fredrik Strupe; 1 file, -1/+1)
For thumb instructions, call_undef_hook() in traps.c first reads a u16, and if the u16 indicates a T32 instruction (u16 >= 0xe800), a second u16 is read, which then makes up the lower half-word of a T32 instruction. For T16 instructions, the second u16 is not read, which makes the resulting u32 opcode always have the upper half set to 0. However, having the upper half of instr_mask in the undef_hook set to 0 masks out the upper half of all thumb instructions - both T16 and T32. This results in trapped T32 instructions with the lower half-word equal to the T16 encoding of setend (b650) being matched, even though the upper half-word is not 0000 and thus indicates a T32 opcode. An example of such a T32 instruction is eaa0b650, which should raise a SIGILL since T32 instructions with an eaa prefix are unallocated as per the Arm ARM, but instead works as a SETEND because the second half-word is set to b650. This patch fixes the issue by extending instr_mask to include the upper u32 half, which will still match T16 instructions where the upper half is 0, but not T32 instructions. Fixes: 2d888f48e056 ("arm64: Emulate SETEND for AArch32 tasks") Cc: <[email protected]> # 4.0.x- Reviewed-by: Suzuki K Poulose <[email protected]> Signed-off-by: Fredrik Strupe <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
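In terms of the undef_hook table, the fix simply widens the mask so the upper half-word must really be zero for a T16 match; roughly as below (values per the commit description; the remaining entry fields are elided):

	static struct undef_hook setend_hooks[] = {
		{
			/* Was 0x0000fff7, which ignored the upper half-word
			 * entirely, so T32 opcodes such as 0xeaa0b650 matched
			 * as a T16 SETEND. */
			.instr_mask	= 0xfffffff7,
			.instr_val	= 0x0000b650,
			/* ... pstate mask/val and handler ... */
		},
		/* ... */
	};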
2020-04-03  Merge tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx  (Linus Torvalds; 3 files, -0/+3)
Pull SPDX updates from Greg KH: "Here are three SPDX patches for 5.7-rc1. One fixes up the SPDX tag for a single driver, while the other two go through the tree and add SPDX tags for all of the .gitignore files as needed. Nothing too complex, but you will get a merge conflict with your current tree, that should be trivial to handle (one file modified by two things, one file deleted.) All three of these have been in linux-next for a while, with no reported issues other than the merge conflict"
* tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx:
  ASoC: MT6660: make spdxcheck.py happy
  .gitignore: add SPDX License Identifier
  .gitignore: remove too obvious comments
2020-03-31  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 27 files, -431/+808)
Pull arm64 updates from Catalin Marinas: "The bulk is in-kernel pointer authentication, activity monitors and lots of asm symbol annotations. I also queued the sys_mremap() patch commenting the asymmetry in the address untagging. Summary:
- In-kernel Pointer Authentication support (previously only offered to user space).
- ARM Activity Monitors (AMU) extension support allowing better CPU utilisation numbers for the scheduler (frequency invariance).
- Memory hot-remove support for arm64.
- Lots of asm annotations (SYM_*) in preparation for the in-kernel Branch Target Identification (BTI) support.
- arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the PMU init callbacks, support for new DT compatibles.
- IPv6 header checksum optimisation.
- Fixes: SDEI (software delegated exception interface) double-lock on hibernate with shared events.
- Minor clean-ups and refactoring: cpu_ops accessor, cpu_do_switch_mm() converted to C, cpufeature finalisation helper.
- sys_mremap() comment explaining the asymmetric address untagging behaviour"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits)
  mm/mremap: Add comment explaining the untagging behaviour of mremap()
  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed
  arm64: move kimage_vaddr to .rodata
  arm64: use mov_q instead of literal ldr
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  ...
2020-03-30  Merge tag 'timers-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 3 files, -16/+11)
Pull timekeeping and timer updates from Thomas Gleixner: "Core:
- Consolidation of the vDSO build infrastructure to address the difficulties of cross-builds for ARM64 compat vDSO libraries by restricting the exposure of header content to the vDSO build. This is achieved by splitting out header content into separate headers, which contain only the minimally required information necessary to build the vDSO. These new headers are included from the kernel headers and the vDSO specific files.
- Enhancements to the generic vDSO library allowing more fine grained control over the compiled in code, further reducing architecture specific storage and preparing for adopting the generic library by PPC.
- Cleanup and consolidation of the exit related code in posix CPU timers.
- Small cleanups and enhancements here and there
Drivers:
- The obligatory new drivers: Ingenic JZ47xx and X1000 TCU support
- Correct the clock rate of PIT64b global clock
- setup_irq() cleanup
- Preparation for PWM and suspend support for the TI DM timer
- Expand the fttmr010 driver to support ast2600 systems
- The usual small fixes, enhancements and cleanups all over the place"
* tag 'timers-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (80 commits)
  Revert "clocksource/drivers/timer-probe: Avoid creating dead devices"
  vdso: Fix clocksource.h macro detection
  um: Fix header inclusion
  arm64: vdso32: Enable Clang Compilation
  lib/vdso: Enable common headers
  arm: vdso: Enable arm to use common headers
  x86/vdso: Enable x86 to use common headers
  mips: vdso: Enable mips to use common headers
  arm64: vdso32: Include common headers in the vdso library
  arm64: vdso: Include common headers in the vdso library
  arm64: Introduce asm/vdso/processor.h
  arm64: vdso32: Code clean up
  linux/elfnote.h: Replace elf.h with UAPI equivalent
  scripts: Fix the inclusion order in modpost
  common: Introduce processor.h
  linux/ktime.h: Extract common header for vDSO
  linux/jiffies.h: Extract common header for vDSO
  linux/time64.h: Extract common header for vDSO
  linux/time32.h: Extract common header for vDSO
  linux/time.h: Extract common header for vDSO
  ...
2020-03-30  Merge tag 'smp-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 2 files, -10/+7)
Pull core SMP updates from Thomas Gleixner: "CPU (hotplug) updates:
- Support for locked CSD objects in smp_call_function_single_async() which allows to simplify callsites in the scheduler core and MIPS
- Treewide consolidation of CPU hotplug functions which ensures the consistency between the sysfs interface and kernel state. The low level functions cpu_up/down() are now confined to the core code and no longer accessible from random code"
* tag 'smp-core-2020-03-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (22 commits)
  cpu/hotplug: Ignore pm_wakeup_pending() for disable_nonboot_cpus()
  cpu/hotplug: Hide cpu_up/down()
  cpu/hotplug: Move bringup of secondary CPUs out of smp_init()
  torture: Replace cpu_up/down() with add/remove_cpu()
  firmware: psci: Replace cpu_up/down() with add/remove_cpu()
  xen/cpuhotplug: Replace cpu_up/down() with device_online/offline()
  parisc: Replace cpu_up/down() with add/remove_cpu()
  sparc: Replace cpu_up/down() with add/remove_cpu()
  powerpc: Replace cpu_up/down() with add/remove_cpu()
  x86/smp: Replace cpu_up/down() with add/remove_cpu()
  arm64: hibernate: Use bringup_hibernate_cpu()
  cpu/hotplug: Provide bringup_hibernate_cpu()
  arm64: Use reboot_cpu instead of hardconding it to 0
  arm64: Don't use disable_nonboot_cpus()
  ARM: Use reboot_cpu instead of hardcoding it to 0
  ARM: Don't use disable_nonboot_cpus()
  ia64: Replace cpu_down() with smp_shutdown_nonboot_cpus()
  cpu/hotplug: Create a new function to shutdown nonboot cpus
  cpu/hotplug: Add new {add,remove}_cpu() functions
  sched/core: Remove rq.hrtick_csd_pending
  ...
2020-03-30  Merge branch 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 3 files, -76/+27)
Pull EFI updates from Ingo Molnar: "The EFI changes in this cycle are much larger than usual, for two (positive) reasons:
- The GRUB project is showing signs of life again, resulting in the introduction of the generic Linux/UEFI boot protocol, instead of x86 specific hacks which are increasingly difficult to maintain. There's hope that all future extensions will now go through that boot protocol.
- Preparatory work for RISC-V EFI support.
The main changes are:
- Boot time GDT handling changes
- Simplify handling of EFI properties table on arm64
- Generic EFI stub cleanups, to improve command line handling, file I/O, memory allocation, etc.
- Introduce a generic initrd loading method based on calling back into the firmware, instead of relying on the x86 EFI handover protocol or device tree.
- Introduce a mixed mode boot method that does not rely on the x86 EFI handover protocol either, and could potentially be adopted by other architectures (if another one ever surfaces where one execution mode is a superset of another)
- Clean up the contents of 'struct efi', and move out everything that doesn't need to be stored there.
- Incorporate support for UEFI spec v2.8A changes that permit firmware implementations to return EFI_UNSUPPORTED from UEFI runtime services at OS runtime, and expose a mask of which ones are supported or unsupported via a configuration table.
- Partial fix for the lack of by-VA cache maintenance in the decompressor on 32-bit ARM.
- Changes to load device firmware from EFI boot service memory regions
- Various documentation updates and minor code cleanups and fixes"
* 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (114 commits)
  efi/libstub/arm: Fix spurious message that an initrd was loaded
  efi/libstub/arm64: Avoid image_base value from efi_loaded_image
  partitions/efi: Fix partition name parsing in GUID partition entry
  efi/x86: Fix cast of image argument
  efi/libstub/x86: Use ULONG_MAX as upper bound for all allocations
  efi: Fix a mistype in comments mentioning efivar_entry_iter_begin()
  efi/libstub: Avoid linking libstub/lib-ksyms.o into vmlinux
  efi/x86: Preserve %ebx correctly in efi_set_virtual_address_map()
  efi/x86: Ignore the memory attributes table on i386
  efi/x86: Don't relocate the kernel unless necessary
  efi/x86: Remove extra headroom for setup block
  efi/x86: Add kernel preferred address to PE header
  efi/x86: Decompress at start of PE image load address
  x86/boot/compressed/32: Save the output address instead of recalculating it
  efi/libstub/x86: Deal with exit() boot service returning
  x86/boot: Use unsigned comparison for addresses
  efi/x86: Avoid using code32_start
  efi/x86: Make efi32_pe_entry() more readable
  efi/x86: Respect 32-bit ABI in efi32_pe_entry()
  efi/x86: Annotate the LOADED_IMAGE_PROTOCOL_GUID with SYM_DATA
  ...
2020-03-25  arm64: hibernate: Use bringup_hibernate_cpu()  (Qais Yousef; 1 file, -8/+5)
Use bringup_hibernate_cpu() instead of open coding it. [ tglx: Split out the core change ] Signed-off-by: Qais Yousef <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
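The open-coded sequence collapses to one call into the core helper; a rough before/after sketch of the resume path (abridged, not a verbatim diff):

	/* Before: open-coded bringup of the CPU we hibernated on. */
	if (!cpu_online(sleep_cpu)) {
		ret = cpu_up(sleep_cpu);
		if (ret)
			return ret;
	}

	/* After: the core helper performs the same check, bringup and
	 * error reporting in one place. */
	ret = bringup_hibernate_cpu(sleep_cpu);
	if (ret)
		return ret;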
2020-03-25  arm64: Use reboot_cpu instead of hardconding it to 0  (Qais Yousef; 1 file, -1/+1)
Use `reboot_cpu` variable instead of hardcoding 0 as the reboot cpu in machine_shutdown(). Signed-off-by: Qais Yousef <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-03-25  arm64: Don't use disable_nonboot_cpus()  (Qais Yousef; 1 file, -2/+2)
disable_nonboot_cpus() is not safe to use when doing machine_down(), because it relies on freeze_secondary_cpus(), which in turn is a suspend/resume-related freeze and could abort if the logic detects any pending activities that can prevent finishing the offlining process. Besides, disable_nonboot_cpus() depends on CONFIG_PM_SLEEP_SMP, which is an orthogonal config to rely on to ensure this function works correctly. Signed-off-by: Qais Yousef <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
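Together with the previous patch, arm64's machine_shutdown() ends up along these lines (a sketch of the destination, not a verbatim diff):

	void machine_shutdown(void)
	{
		/* Offline every CPU except the designated reboot CPU via the
		 * hotplug-based helper, rather than the suspend/resume-oriented
		 * disable_nonboot_cpus(), which can abort and depends on
		 * CONFIG_PM_SLEEP_SMP. */
		smp_shutdown_nonboot_cpus(reboot_cpu);
	}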
2020-03-25  Merge branch 'for-next/kernel-ptrauth' into for-next/core  (Catalin Marinas; 10 files, -38/+122)
* for-next/kernel-ptrauth:
  : Return address signing - in-kernel support
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: install user ptrauth keys at kernel exit time
  arm64: rename ptrauth key structures to be user-specific
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: cpufeature: Fix meta-capability cpufeature check
2020-03-25  Merge branch 'for-next/asm-cleanups' into for-next/core  (Catalin Marinas; 4 files, -10/+10)
* for-next/asm-cleanups:
  : Various asm clean-ups (alignment, mov_q vs ldr, .idmap)
  arm64: move kimage_vaddr to .rodata
  arm64: use mov_q instead of literal ldr
2020-03-25  Merge branch 'for-next/asm-annotations' into for-next/core  (Catalin Marinas; 6 files, -144/+139)
* for-next/asm-annotations:
  : Modernise arm64 assembly annotations
  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
  arm64: Mark call_smc_arch_workaround_1 as __maybe_unused
  arm64: entry-ftrace.S: Fix missing argument for CONFIG_FUNCTION_GRAPH_TRACER=y
  arm64: vdso32: Convert to modern assembler annotations
  arm64: vdso: Convert to modern assembler annotations
  arm64: sdei: Annotate SDEI entry points using new style annotations
  arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations
  arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs
  arm64: kvm: Annotate assembly using modern annoations
  arm64: kernel: Convert to modern annotations for assembly data
  arm64: head: Annotate stext and preserve_boot_args as code
  arm64: head.S: Convert to modern annotations for assembly functions
  arm64: ftrace: Modernise annotation of return_to_handler
  arm64: ftrace: Correct annotation of ftrace_caller assembly
  arm64: entry-ftrace.S: Convert to modern annotations for assembly functions
  arm64: entry: Additional annotation conversions for entry.S
  arm64: entry: Annotate ret_from_fork as code
  arm64: entry: Annotate vector table and handlers as code
  arm64: crypto: Modernize names for AES function macros
  arm64: crypto: Modernize some extra assembly annotations
2020-03-25  Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei', 'for-next/amu', 'for-next/final-cap-helper', 'for-next/cpu_ops-cleanup', 'for-next/misc' and 'for-next/perf' into for-next/core  (Catalin Marinas; 13 files, -241/+539)
* for-next/memory-hotremove:
  : Memory hot-remove support for arm64
  arm64/mm: Enable memory hot remove
  arm64/mm: Hold memory hotplug lock while walking for kernel page table dump
* for-next/arm_sdei:
  : SDEI: fix double locking on return from hibernate and clean-up
  firmware: arm_sdei: clean up sdei_event_create()
  firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
  firmware: arm_sdei: fix possible double-lock on hibernate error path
  firmware: arm_sdei: fix double-lock on hibernate with shared events
* for-next/amu:
  : ARMv8.4 Activity Monitors support
  clocksource/drivers/arm_arch_timer: validate arch_timer_rate
  arm64: use activity monitors for frequency invariance
  cpufreq: add function to get the hardware max frequency
  Documentation: arm64: document support for the AMU extension
  arm64/kvm: disable access to AMU registers from kvm guests
  arm64: trap to EL1 accesses to AMU counters from EL0
  arm64: add support for the AMU extension v1
* for-next/final-cap-helper:
  : Introduce cpus_have_final_cap_helper(), migrate arm64 KVM to it
  arm64: kvm: hyp: use cpus_have_final_cap()
  arm64: cpufeature: add cpus_have_final_cap()
* for-next/cpu_ops-cleanup:
  : cpu_ops[] access code clean-up
  arm64: Introduce get_cpu_ops() helper function
  arm64: Rename cpu_read_ops() to init_cpu_ops()
  arm64: Declare ACPI parking protocol CPU operation if needed
* for-next/misc:
  : Various fixes and clean-ups
  arm64: define __alloc_zeroed_user_highpage
  arm64/kernel: Simplify __cpu_up() by bailing out early
  arm64: remove redundant blank for '=' operator
  arm64: kexec_file: Fixed code style.
  arm64: add blank after 'if'
  arm64: fix spelling mistake "ca not" -> "cannot"
  arm64: entry: unmask IRQ in el0_sp()
  arm64: efi: add efi-entry.o to targets instead of extra-$(CONFIG_EFI)
  arm64: csum: Optimise IPv6 header checksum
  arch/arm64: fix typo in a comment
  arm64: remove gratuitious/stray .ltorg stanzas
  arm64: Update comment for ASID() macro
  arm64: mm: convert cpu_do_switch_mm() to C
  arm64: fix NUMA Kconfig typos
* for-next/perf:
  : arm64 perf updates
  arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
  KVM: arm64: limit PMU version to PMUv3 for ARMv8.1
  arm64: cpufeature: Extract capped perfmon fields
  arm64: perf: Clean up enable/disable calls
  perf: arm-ccn: Use scnprintf() for robustness
  arm64: perf: Support new DT compatibles
  arm64: perf: Refactor PMU init callbacks
  perf: arm_spe: Remove unnecessary zero check on 'nr_pages'
2020-03-25  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL  (Mark Brown; 1 file, -1/+1)
New assembly annotations have recently been introduced which aim to make the way we describe symbols in assembly more consistent. Recently the arm64 assembly was converted to use these, but install_el2_stub was missed. Signed-off-by: Mark Brown <[email protected]> [[email protected]: changed to SYM_L_LOCAL] Signed-off-by: Catalin Marinas <[email protected]>
2020-03-25  .gitignore: add SPDX License Identifier  (Masahiro Yamada; 3 files, -0/+3)
Add SPDX License Identifier to all .gitignore files. Signed-off-by: Masahiro Yamada <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
2020-03-25  arm64: bti: Document behaviour for dynamically linked binaries  (Mark Brown; 1 file, -0/+5)
For dynamically linked binaries the interpreter is responsible for setting PROT_BTI on everything except itself. The dynamic linker needs to be aware of PROT_BTI, for example in order to avoid dropping that when marking executable pages read only after doing relocations, and doing everything in userspace ensures that we don't get any issues due to divergences in behaviour between the kernel and dynamic linker within a single executable. Add a comment indicating that this is intentional to the code to help people trying to understand what's going on. Signed-off-by: Mark Brown <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
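From the dynamic linker's side, the contract is simply to keep PROT_BTI set whenever it re-protects executable pages; a hedged userspace sketch (the helper name is hypothetical):

	#include <sys/mman.h>

	#ifndef PROT_BTI
	#define PROT_BTI 0x10	/* arm64-specific, per the kernel UAPI headers */
	#endif

	/* After processing relocations, make code pages non-writable again
	 * while preserving PROT_BTI, instead of dropping to PROT_READ|PROT_EXEC. */
	static int reprotect_text(void *addr, size_t len)
	{
		return mprotect(addr, len, PROT_READ | PROT_EXEC | PROT_BTI);
	}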
2020-03-24  arm64: Introduce get_cpu_ops() helper function  (Gavin Shan; 4 files, -31/+61)
This introduces get_cpu_ops() to return the CPU operations according to the given CPU index. For now, it simply returns the @cpu_ops[cpu] as before. Also, helper function __cpu_try_die() is introduced to be shared by cpu_die() and ipi_cpu_crash_stop(). So it shouldn't introduce any functional changes. Signed-off-by: Gavin Shan <[email protected]> Signed-off-by: Catalin Marinas <[email protected]> Acked-by: Mark Rutland <[email protected]>
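The accessor itself is tiny; sketched per the commit description:

	static const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init;

	/* Single accessor, so other files no longer index cpu_ops[] directly. */
	const struct cpu_operations *get_cpu_ops(int cpu)
	{
		return cpu_ops[cpu];
	}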
2020-03-24  arm64: Rename cpu_read_ops() to init_cpu_ops()  (Gavin Shan; 3 files, -3/+3)
This renames cpu_read_ops() to init_cpu_ops(), as the function is only called in the initialization phase. Also, we will introduce get_cpu_ops() in the subsequent patches, to retrieve the CPU operations by the given CPU index, and cpu_read_ops() and get_cpu_ops() would be difficult to distinguish by name alone. Signed-off-by: Gavin Shan <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2020-03-24  arm64: Declare ACPI parking protocol CPU operation if needed  (Gavin Shan; 1 file, -0/+2)
It's obvious we needn't declare the corresponding CPU operation when CONFIG_ARM64_ACPI_PARKING_PROTOCOL is disabled, even though it doesn't cause any compile warnings. Signed-off-by: Gavin Shan <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2020-03-24  arm64: move kimage_vaddr to .rodata  (Remi Denis-Courmont; 1 file, -5/+7)
This datum is not referenced from .idmap.text: it does not need to be mapped in idmap. Let's move it to .rodata, as it is never written to after early boot of the primary CPU. (Maybe .data.ro_after_init would be cleaner though?) Signed-off-by: Rémi Denis-Courmont <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2020-03-24  arm64: use mov_q instead of literal ldr  (Remi Denis-Courmont; 3 files, -5/+3)
In practice, this requires only 2 instructions, or even only 1 for the idmap_pg_dir size (with 4 or 64 KiB pages). Only the MAIR values needed more than 2 instructions and it was already converted to mov_q by 95b3f74bec203804658e17f86fe20755bb8abcb9. Signed-off-by: Remi Denis-Courmont <[email protected]> Signed-off-by: Catalin Marinas <[email protected]> Acked-by: Mark Rutland <[email protected]>
2020-03-21  arm64: vdso32: Enable Clang Compilation  (Vincenzo Frascino; 1 file, -0/+11)
Enable Clang Compilation for the vdso32 library. Signed-off-by: Vincenzo Frascino <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Nathan Chancellor <[email protected]> # build Tested-by: Stephen Boyd <[email protected]> Reviewed-by: Nathan Chancellor <[email protected]> Acked-by: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-03-21  arm64: vdso32: Include common headers in the vdso library  (Vincenzo Frascino; 1 file, -2/+0)
The vDSO library should only include the necessary headers required for a userspace library (UAPI and a minimal set of kernel headers). To make this possible it is necessary to isolate from the kernel headers the common parts that are strictly necessary to build the library. Refactor the vdso32 implementation to include common headers. Signed-off-by: Vincenzo Frascino <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-03-21  arm64: vdso: Include common headers in the vdso library  (Vincenzo Frascino; 1 file, -2/+0)
The vDSO library should only include the necessary headers required for a userspace library (UAPI and a minimal set of kernel headers). To make this possible it is necessary to isolate from the kernel headers the common parts that are strictly necessary to build the library. Refactor the vdso implementation to include common headers. Signed-off-by: Vincenzo Frascino <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-03-21  arm64: vdso32: Code clean up  (Vincenzo Frascino; 1 file, -12/+0)
The compat vdso library had some checks that are no longer relevant. Remove the unused code from the compat vDSO library. Note: This patch is preparatory for a future one that will introduce asm/vdso/processor.h on arm64. Signed-off-by: Vincenzo Frascino <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Link: https://lkml.kernel.org/r/[email protected]