path: root/arch/arm64/kernel
Age | Commit message | Author | Files | Lines
2022-09-16 | arm64: alternatives: add alternative_has_feature_*() | Mark Rutland | 2 | -29/+0
Currently we use a mixture of alternative sequences and static branches to handle features detected at boot time. For ease of maintenance we generally prefer to use static branches in C code, but this has a few downsides:

* Each static branch has metadata in the __jump_table section, which is not discarded after features are finalized. This wastes some space, and slows down the patching of other static branches.

* The static branches are patched at a different point in time from the alternatives, so changes are not atomic. This leaves a transient period where there could be a mismatch between the behaviour of alternatives and static branches, which could be problematic for some features (e.g. pseudo-NMI).

* More (instrumentable) kernel code is executed to patch each static branch, which can be risky when patching certain features (e.g. irqflags management for pseudo-NMI).

* When CONFIG_JUMP_LABEL=n, static branches are turned into a load of a flag and a conditional branch. This means it isn't safe to use such static branches in an alternative address space (e.g. the NVHE/PKVM hyp code), where the generated address isn't safe to access.

To deal with these issues, this patch introduces new alternative_has_feature_*() helpers, which work like static branches but are patched using alternatives. This ensures the patching is performed at the same time as other alternative patching, allows the metadata to be freed after patching, and is safe for use in alternative address spaces.

Note that all supported toolchains have asm goto support, and since commit a0a12c3ed057af57 ("asm goto: eradicate CC_HAS_ASM_GOTO") the CC_HAS_ASM_GOTO Kconfig symbol has been removed, so no feature check is necessary, and we can always make use of asm goto.

Additionally, note that:

* This has no impact on cpus_have_cap(), which is a dynamic check.

* This has no functional impact on cpus_have_const_cap(). The branches are patched slightly later than before this patch, but these branches are not reachable until caps have been finalised.

* It is now invalid to use cpus_have_final_cap() in the window between feature detection and patching. All existing uses are only expected after patching anyway, so this should not be a problem.

* The LSE atomics will now be enabled during alternatives patching rather than immediately before. As the LL/SC and LSE atomics are functionally equivalent this should not be problematic.

When building defconfig with GCC 12.1.0, the resulting Image is 64KiB smaller:

| % ls -al Image-*
| -rw-r--r-- 1 mark mark 37108224 Aug 23 09:56 Image-after
| -rw-r--r-- 1 mark mark 37173760 Aug 23 09:54 Image-before

According to bloat-o-meter.pl:

| add/remove: 44/34 grow/shrink: 602/1294 up/down: 39692/-61108 (-21416)
| Function old new delta
| [...]
| Total: Before=16618336, After=16596920, chg -0.13%
| add/remove: 0/2 grow/shrink: 0/0 up/down: 0/-1296 (-1296)
| Data old new delta
| arm64_const_caps_ready 16 - -16
| cpu_hwcap_keys 1280 - -1280
| Total: Before=8987120, After=8985824, chg -0.01%
| add/remove: 0/0 grow/shrink: 0/0 up/down: 0/0 (0)
| RO Data old new delta
| Total: Before=18408, After=18408, chg +0.00%

Signed-off-by: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: James Morse <[email protected]> Cc: Joey Gouly <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
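[Editor's note: a helper of this shape can be written with asm goto so the compiler emits a plain branch that the alternatives framework later patches. This is a minimal sketch based on the description above; the exact macro spelling is an assumption from the mainline implementation, not quoted from the patch.]

  static __always_inline bool
  alternative_has_feature_likely(unsigned long feature)
  {
          /*
           * Default to branching to l_no ("feature absent"); once caps are
           * finalised, alternatives patching replaces the branch with a NOP,
           * so the fast path falls through to "return true".
           */
          asm_volatile_goto(
          ALTERNATIVE("b	%l[l_no]", "nop", %[feature])
          :
          : [feature] "i" (feature)
          :
          : l_no);

          return true;
  l_no:
          return false;
  }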
2022-09-16 | arm64: alternatives: have callbacks take a cap | Mark Rutland | 3 | -18/+35
Today, callback alternatives are special-cased within __apply_alternatives(), and are applied alongside patching for system capabilities as ARM64_NCAPS is not part of the boot_capabilities feature mask. This special-casing is less than ideal. Giving special meaning to ARM64_NCAPS for this requires some structures and loops to use ARM64_NCAPS + 1 (AKA ARM64_NPATCHABLE), while others use ARM64_NCAPS. It's also not immediately clear that callback alternatives are only applied when applying alternatives for system-wide features. To make this a bit clearer, this patch changes the way that callback alternatives are identified, removing the special-casing of ARM64_NCAPS and allowing callback alternatives to be associated with a cpucap as with all other alternatives. New cpucaps, ARM64_ALWAYS_BOOT and ARM64_ALWAYS_SYSTEM, are added which are always detected alongside boot cpu capabilities and system capabilities respectively. All existing callback alternatives are made to use ARM64_ALWAYS_SYSTEM, and so will be patched at the same point during the boot flow as before. Subsequent patches will make more use of these new cpucaps. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: James Morse <[email protected]> Cc: Joey Gouly <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64: alternatives: make alt_region const | Mark Rutland | 1 | -14/+12
We never alter a struct alt_region after creation, and we open-code the bounds of the kernel alternatives region in two functions. The duplication is a bit unfortunate for clarity (and in future we're likely to have more functions altering alternative regions), and to avoid accidents it would be good to make the structure const. This patch adds a shared struct `kernel_alternatives` alt_region for the main kernel image, and marks the alt_regions as const to prevent unintentional modification. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: James Morse <[email protected]> Cc: Joey Gouly <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
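[Editor's note: a sketch of what the shared, const region can look like, assuming the usual __alt_instructions linker symbols; the names follow the commit description and mainline from memory, not a verified quote of this diff.]

  struct alt_region {
          struct alt_instr *begin;
          struct alt_instr *end;
  };

  /* One read-only description of the kernel image's alternatives. */
  static const struct alt_region kernel_alternatives = {
          .begin = (struct alt_instr *)__alt_instructions,
          .end   = (struct alt_instr *)__alt_instructions_end,
  };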
2022-09-16 | arm64: alternatives: hoist print out of __apply_alternatives() | Mark Rutland | 1 | -2/+4
Printing in the middle of __apply_alternatives() is potentially unsafe and not all that helpful given these days we practically always patch *something*. Hoist the print out of __apply_alternatives(), and add separate prints to __apply_alternatives() and apply_alternatives_all(), which will make it easier to spot if either patching call goes wrong. Signed-off-by: Mark Rutland <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: James Morse <[email protected]> Cc: Joey Gouly <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64: alternatives: proton-pack: prepare for cap changes | Mark Rutland | 1 | -1/+1
The spectre patching callbacks use cpus_have_final_cap(), and subsequent patches will make it invalid to call cpus_have_final_cap() before alternatives patching has completed. In preparation for said change, this patch modifies the spectre patching callbacks to use cpus_have_cap(). This is not subject to patching, and will dynamically check the cpu_hwcaps array, which is functionally equivalent to the existing behaviour. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Joey Gouly <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: James Morse <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
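[Editor's note: for context, cpus_have_cap() is a dynamic check along these lines (a sketch of the mainline helper, from memory): it simply tests the cpu_hwcaps bitmap, so it is valid both before and after alternatives patching.]

  static inline bool cpus_have_cap(unsigned int num)
  {
          if (num >= ARM64_NCAPS)
                  return false;
          /* Reads the bitmap directly; no code patching involved. */
          return test_bit(num, cpu_hwcaps);
  }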
2022-09-16 | arm64: errata: remove BF16 HWCAP due to incorrect result on Cortex-A510 | James Morse | 1 | -0/+26
Cortex-A510's erratum #2658417 causes two BF16 instructions to return the wrong result in rare circumstances when a pair of A510 CPUs are using shared neon hardware. The two instructions affected are BFMMLA and VMMLA, support for these is indicated by the BF16 HWCAP. Remove it on affected platforms. Signed-off-by: James Morse <[email protected]> Link: https://lore.kernel.org/r/[email protected] [[email protected]: add revision to the Kconfig help; remove .type] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64: cpufeature: Expose get_arm64_ftr_reg() outside cpufeature.c | James Morse | 1 | -1/+1
get_arm64_ftr_reg() returns the properties of a system register based on its instruction encoding. This is needed by an erratum workaround in cpu_errata.c to modify the user-space visible view of id registers. Signed-off-by: James Morse <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64: cpufeature: Force HWCAP to be based on the sysreg visible to user-space | James Morse | 1 | -7/+30
arm64 advertises hardware features to user-space via HWCAPs, and by emulating access to the CPUs' id registers. The cpufeature code has a sanitised system-wide view of an id register, and a sanitised user-space view of an id register, where some features use their 'safe' value instead of the hardware value. It is currently possible for a HWCAP to be advertised where the user-space view of the id register does not show the feature as supported. Erratum workarounds need to remove both the HWCAP and the feature from the user-space view of the id register. This involves duplicating the code, and spreading it over cpufeature.c and cpu_errata.c. Make the HWCAP code use the user-space view of id registers. This ensures the values never diverge, and allows erratum workarounds to remove a HWCAP by modifying the user-space view of the id register. Signed-off-by: James Morse <[email protected]> Reviewed-by: Suzuki K Poulose <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64/sysreg: Use feature numbering for PMU and SPE revisions | Mark Brown | 1 | -2/+2
Currently the kernel refers to the versions of the PMU and SPE features by the version of the architecture where those features were updated, but the ARM (Architecture Reference Manual) refers to them using the FEAT_ names for the features. To improve consistency, to help with updating for newer features, and since v9 will make our current naming scheme a bit more confusing, update the macros identifying features to use the FEAT_ based scheme. Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64/sysreg: Add _EL1 into ID_AA64DFR0_EL1 definition names | Mark Brown | 3 | -12/+12
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64DFR0_EL1 to follow the convention. No functional changes. Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64/sysreg: Align field names in ID_AA64DFR0_EL1 with architecture | Mark Brown | 3 | -12/+12
The naming scheme the architecture uses for the fields in ID_AA64DFR0_EL1 does not align well with kernel conventions, using as it does a lot of MixedCase in various arrangements. In preparation for automatically generating the defines for this register rename the defines used to match what is in the architecture. Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64: rework BTI exception handling | Mark Rutland | 2 | -4/+20
If a BTI exception is taken from EL1, the entry code will treat this as an unhandled exception and will panic() the kernel. This is inconsistent with the way we handle FPAC exceptions, which have a dedicated handler and only kill the thread from which the exception was taken, and we don't log all the information that could be relevant to debug the issue. The code in do_bti() has:

  BUG_ON(!user_mode(regs));

... and it seems like the intent was to call this for EL1 BTI exceptions, as with FPAC, but this was omitted due to an oversight. This patch adds separate EL0 and EL1 BTI exception handlers, with the latter calling die() directly to report the original context the BTI exception was taken from. This matches our handling of FPAC exceptions. Prior to this patch, a BTI failure is reported as:

| Unhandled 64-bit el1h sync exception on CPU0, ESR 0x0000000034000002 -- BTI
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00131-g7d937ff0221d-dirty #9
| Hardware name: linux,dummy-virt (DT)
| pstate: 20400809 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=-c)
| pc : test_bti_callee+0x4/0x10
| lr : test_bti_caller+0x1c/0x28
| sp : ffff80000800bdf0
| x29: ffff80000800bdf0 x28: 0000000000000000 x27: 0000000000000000
| x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| x23: ffff80000a2b8000 x22: 0000000000000000 x21: 0000000000000000
| x20: ffff8000099fa5b0 x19: ffff800009ff7000 x18: fffffbfffda37000
| x17: 3120676e696d7573 x16: 7361202c6e6f6974 x15: 0000000041a90000
| x14: 0040000000000041 x13: 0040000000000001 x12: ffff000001a90000
| x11: fffffbfffda37480 x10: 0068000000000703 x9 : 0001000040000000
| x8 : 0000000000090000 x7 : 0068000000000f03 x6 : 0060000000000f83
| x5 : ffff80000a2b6000 x4 : ffff0000028d0000 x3 : ffff800009f78378
| x2 : 0000000000000000 x1 : 0000000040210000 x0 : ffff8000080257e4
| Kernel panic - not syncing: Unhandled exception
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00131-g7d937ff0221d-dirty #9
| Hardware name: linux,dummy-virt (DT)
| Call trace:
|  dump_backtrace.part.0+0xcc/0xe0
|  show_stack+0x18/0x5c
|  dump_stack_lvl+0x64/0x80
|  dump_stack+0x18/0x34
|  panic+0x170/0x360
|  arm64_exit_nmi.isra.0+0x0/0x80
|  el1h_64_sync_handler+0x64/0xd0
|  el1h_64_sync+0x64/0x68
|  test_bti_callee+0x4/0x10
|  smp_cpus_done+0xb0/0xbc
|  smp_init+0x7c/0x8c
|  kernel_init_freeable+0x128/0x28c
|  kernel_init+0x28/0x13c
|  ret_from_fork+0x10/0x20

With this patch applied, a BTI failure is reported as:

| Internal error: Oops - BTI: 0000000034000002 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00132-g0ad98265d582-dirty #8
| Hardware name: linux,dummy-virt (DT)
| pstate: 20400809 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=-c)
| pc : test_bti_callee+0x4/0x10
| lr : test_bti_caller+0x1c/0x28
| sp : ffff80000800bdf0
| x29: ffff80000800bdf0 x28: 0000000000000000 x27: 0000000000000000
| x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| x23: ffff80000a2b8000 x22: 0000000000000000 x21: 0000000000000000
| x20: ffff8000099fa5b0 x19: ffff800009ff7000 x18: fffffbfffda37000
| x17: 3120676e696d7573 x16: 7361202c6e6f6974 x15: 0000000041a90000
| x14: 0040000000000041 x13: 0040000000000001 x12: ffff000001a90000
| x11: fffffbfffda37480 x10: 0068000000000703 x9 : 0001000040000000
| x8 : 0000000000090000 x7 : 0068000000000f03 x6 : 0060000000000f83
| x5 : ffff80000a2b6000 x4 : ffff0000028d0000 x3 : ffff800009f78378
| x2 : 0000000000000000 x1 : 0000000040210000 x0 : ffff800008025804
| Call trace:
|  test_bti_callee+0x4/0x10
|  smp_cpus_done+0xb0/0xbc
|  smp_init+0x7c/0x8c
|  kernel_init_freeable+0x128/0x28c
|  kernel_init+0x28/0x13c
|  ret_from_fork+0x10/0x20
| Code: d50323bf d53cd040 d65f03c0 d503233f (d50323bf)

Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Mark Brown <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Amit Daniel Kachhap <[email protected]> Cc: James Morse <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64: rework FPAC exception handling | Mark Rutland | 2 | -8/+12
If an FPAC exception is taken from EL1, the entry code will call do_ptrauth_fault(), where due to:

  BUG_ON(!user_mode(regs))

... the kernel will report a problem within do_ptrauth_fault() rather than reporting the original context the FPAC exception was taken from. The pt_regs and ESR value reported will be from within do_ptrauth_fault() and the code dump will be for the BRK in BUG_ON(), which isn't sufficient to debug the cause of the original exception. This patch makes the reporting better by having separate EL0 and EL1 FPAC exception handlers, with the latter calling die() directly to report the original context the FPAC exception was taken from. Note that we only need to prevent kprobes of the EL1 FPAC handler, since the EL0 FPAC handler cannot be called recursively. For consistency with do_el0_svc*(), I've named the split functions do_el{0,1}_fpac() rather than do_el{0,1}_ptrauth_fault(). I've also clarified the comment to not imply there are causes other than FPAC exceptions. Prior to this patch FPAC exceptions are reported as:

| kernel BUG at arch/arm64/kernel/traps.c:517!
| Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00130-g9c8a180a1cdf-dirty #12
| Hardware name: FVP Base RevC (DT)
| pstate: 00400009 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : do_ptrauth_fault+0x3c/0x40
| lr : el1_fpac+0x34/0x54
| sp : ffff80000a3bbc80
| x29: ffff80000a3bbc80 x28: ffff0008001d8000 x27: 0000000000000000
| x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| x23: 0000000020400009 x22: ffff800008f70fa4 x21: ffff80000a3bbe00
| x20: 0000000072000000 x19: ffff80000a3bbcb0 x18: fffffbfffda37000
| x17: 3120676e696d7573 x16: 7361202c6e6f6974 x15: 0000000081a90000
| x14: 0040000000000041 x13: 0040000000000001 x12: ffff000001a90000
| x11: fffffbfffda37480 x10: 0068000000000703 x9 : 0001000080000000
| x8 : 0000000000090000 x7 : 0068000000000f03 x6 : 0060000000000783
| x5 : ffff80000a3bbcb0 x4 : ffff0008001d8000 x3 : 0000000072000000
| x2 : 0000000000000000 x1 : 0000000020400009 x0 : ffff80000a3bbcb0
| Call trace:
|  do_ptrauth_fault+0x3c/0x40
|  el1h_64_sync_handler+0xc4/0xd0
|  el1h_64_sync+0x64/0x68
|  test_pac+0x8/0x10
|  smp_init+0x7c/0x8c
|  kernel_init_freeable+0x128/0x28c
|  kernel_init+0x28/0x13c
|  ret_from_fork+0x10/0x20
| Code: 97fffe5e a8c17bfd d50323bf d65f03c0 (d4210000)

With this patch applied FPAC exceptions are reported as:

| Internal error: Oops - FPAC: 0000000072000000 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00132-g78846e1c4757-dirty #11
| Hardware name: FVP Base RevC (DT)
| pstate: 20400009 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : test_pac+0x8/0x10
| lr : 0x0
| sp : ffff80000a3bbe00
| x29: ffff80000a3bbe00 x28: 0000000000000000 x27: 0000000000000000
| x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| x23: ffff80000a2c8000 x22: 0000000000000000 x21: 0000000000000000
| x20: ffff8000099fa5b0 x19: ffff80000a007000 x18: fffffbfffda37000
| x17: 3120676e696d7573 x16: 7361202c6e6f6974 x15: 0000000081a90000
| x14: 0040000000000041 x13: 0040000000000001 x12: ffff000001a90000
| x11: fffffbfffda37480 x10: 0068000000000703 x9 : 0001000080000000
| x8 : 0000000000090000 x7 : 0068000000000f03 x6 : 0060000000000783
| x5 : ffff80000a2c6000 x4 : ffff0008001d8000 x3 : ffff800009f88378
| x2 : 0000000000000000 x1 : 0000000080210000 x0 : ffff000001a90000
| Call trace:
|  test_pac+0x8/0x10
|  smp_init+0x7c/0x8c
|  kernel_init_freeable+0x128/0x28c
|  kernel_init+0x28/0x13c
|  ret_from_fork+0x10/0x20
| Code: d50323bf d65f03c0 d503233f aa1f03fe (d50323bf)

Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Mark Brown <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Amit Daniel Kachhap <[email protected]> Cc: James Morse <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
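[Editor's note: the split EL0/EL1 handler pattern used by this patch (and by the BTI rework above) looks roughly like the following sketch; the function bodies are reconstructed from the commit description, not quoted from the diff.]

  /* EL0: deliver a SIGILL to the offending user thread. */
  void do_el0_fpac(struct pt_regs *regs, unsigned long esr)
  {
          force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
  }

  /* EL1: report the faulting kernel context directly. */
  void do_el1_fpac(struct pt_regs *regs, unsigned long esr)
  {
          /*
           * An FPAC exception in the kernel is unrecoverable; die() logs the
           * pt_regs and ESR of the original context instead of BUG()ing
           * inside a shared handler.
           */
          die("Oops - FPAC", regs, esr);
  }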
2022-09-16 | arm64: consistently pass ESR_ELx to die() | Mark Rutland | 2 | -14/+14
Currently, bug_handler() and kasan_handler() call die() with '0' as the 'err' value, whereas die_kernel_fault() passes the ESR_ELx value. For consistency, this patch ensures we always pass the ESR_ELx value to die(). As this is only called for exceptions taken from kernel mode, there should be no user-visible change as a result of this patch. For UNDEFINED exceptions, I've had to modify do_undefinstr() and its callers to pass the ESR_ELx value. In all cases the ESR_ELx value had already been read and was available. Signed-off-by: Mark Rutland <[email protected]> Cc: Mark Brown <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Amit Daniel Kachhap <[email protected]> Cc: James Morse <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Reviewed-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-16 | arm64: die(): pass 'err' as long | Mark Rutland | 1 | -3/+3
Recently, we reworked a lot of code to consistently pass ESR_ELx as a 64-bit quantity. However, we missed that this can be passed into die() and __die() as the 'err' parameter, where it is truncated to a 32-bit int. As notify_die() already takes 'err' as a long, this patch changes die() and __die() to also take 'err' as a long, ensuring that the full value of ESR_ELx is retained. At the same time, die() is updated to consistently log 'err' as a zero-padded 64-bit quantity. Subsequent patches will pass the ESR_ELx value to die() for a number of exceptions. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Mark Brown <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Amit Daniel Kachhap <[email protected]> Cc: James Morse <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
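[Editor's note: a sketch of the widened signatures and the zero-padded logging implied by the description above; the format string and internals are assumptions, not quoted from the patch.]

  /* 'err' is a long so the full 64-bit ESR_ELx value is retained. */
  void die(const char *str, struct pt_regs *regs, long err);

  static int __die(const char *str, long err, struct pt_regs *regs)
  {
          static int die_counter;

          /* %016lx logs 'err' as a zero-padded 64-bit quantity. */
          pr_emerg("Internal error: %s: %016lx [#%d]\n",
                   str, err, ++die_counter);
          /* ... register dump and notify_die() as before ... */
          return 0;
  }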
2022-09-16 | arm64: report EL1 UNDEFs better | Mark Rutland | 1 | -1/+3
If an UNDEFINED exception is taken from EL1, and do_undefinstr() doesn't find any suitable undef_hook, it will call:

  BUG_ON(!user_mode(regs))

... and the kernel will report a failure within do_undefinstr() rather than reporting the original context that the UNDEFINED exception was taken from. The pt_regs and ESR value reported within the BUG() handler will be from within do_undefinstr() and the code dump will be for the BRK in BUG_ON(), which isn't sufficient to debug the cause of the original exception. This patch makes the reporting better by having do_undefinstr() call die() directly in this case to report the original context from which the UNDEFINED exception was taken. Prior to this patch, an undefined instruction is reported as:

| kernel BUG at arch/arm64/kernel/traps.c:497!
| Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 0 Comm: swapper Not tainted 5.19.0-rc3-00127-geff044f1b04e-dirty #3
| Hardware name: linux,dummy-virt (DT)
| pstate: 000000c5 (nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : do_undefinstr+0x28c/0x2ac
| lr : do_undefinstr+0x298/0x2ac
| sp : ffff800009f63bc0
| x29: ffff800009f63bc0 x28: ffff800009f73c00 x27: ffff800009644a70
| x26: ffff8000096778a8 x25: 0000000000000040 x24: 0000000000000000
| x23: 00000000800000c5 x22: ffff800009894060 x21: ffff800009f63d90
| x20: 0000000000000000 x19: ffff800009f63c40 x18: 0000000000000006
| x17: 0000000000403000 x16: 00000000bfbfd000 x15: ffff800009f63830
| x14: ffffffffffffffff x13: 0000000000000000 x12: 0000000000000019
| x11: 0101010101010101 x10: 0000000000161b98 x9 : 0000000000000000
| x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000000000
| x5 : ffff800009f761d0 x4 : 0000000000000000 x3 : ffff80000a2b80f8
| x2 : 0000000000000000 x1 : ffff800009f73c00 x0 : 00000000800000c5
| Call trace:
|  do_undefinstr+0x28c/0x2ac
|  el1_undef+0x2c/0x4c
|  el1h_64_sync_handler+0x84/0xd0
|  el1h_64_sync+0x64/0x68
|  setup_arch+0x550/0x598
|  start_kernel+0x88/0x6ac
|  __primary_switched+0xb8/0xc0
| Code: 17ffff95 a9425bf5 17ffffb8 a9025bf5 (d4210000)

With this patch applied, an undefined instruction is reported as:

| Internal error: Oops - Undefined instruction: 0 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 0 Comm: swapper Not tainted 5.19.0-rc3-00128-gf27cfcc80e52-dirty #5
| Hardware name: linux,dummy-virt (DT)
| pstate: 800000c5 (Nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : setup_arch+0x550/0x598
| lr : setup_arch+0x50c/0x598
| sp : ffff800009f63d90
| x29: ffff800009f63d90 x28: 0000000081000200 x27: ffff800009644a70
| x26: ffff8000096778c8 x25: 0000000000000040 x24: 0000000000000000
| x23: 0000000000000100 x22: ffff800009f69a58 x21: ffff80000a2b80b8
| x20: 0000000000000000 x19: 0000000000000000 x18: 0000000000000006
| x17: 0000000000403000 x16: 00000000bfbfd000 x15: ffff800009f63830
| x14: ffffffffffffffff x13: 0000000000000000 x12: 0000000000000019
| x11: 0101010101010101 x10: 0000000000161b98 x9 : 0000000000000000
| x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000000000
| x5 : 0000000000000008 x4 : 0000000000000010 x3 : 0000000000000000
| x2 : 0000000000000000 x1 : 0000000000000000 x0 : 0000000000000000
| Call trace:
|  setup_arch+0x550/0x598
|  start_kernel+0x88/0x6ac
|  __primary_switched+0xb8/0xc0
| Code: b4000080 90ffed80 912ac000 97db745f (00000000)

Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Mark Brown <[email protected]> Cc: Alexandru Elisei <[email protected]> Cc: Amit Daniel Kachhap <[email protected]> Cc: James Morse <[email protected]> Cc: Will Deacon <[email protected]> Reviewed-by: Anshuman Khandual <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-11 | kernel: exit: cleanup release_thread() | Kefeng Wang | 1 | -4/+0
Only x86 has its own release_thread(); introduce a new weak release_thread() function to clean up the empty definitions in other architectures. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kefeng Wang <[email protected]> Acked-by: Guo Ren <[email protected]> [csky] Acked-by: Russell King (Oracle) <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Brian Cain <[email protected]> Acked-by: Michael Ellerman <[email protected]> [powerpc] Acked-by: Stafford Horne <[email protected]> [openrisc] Acked-by: Catalin Marinas <[email protected]> [arm64] Acked-by: Huacai Chen <[email protected]> [LoongArch] Cc: Alexander Gordeev <[email protected]> Cc: Anton Ivanov <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian Borntraeger <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dinh Nguyen <[email protected]> Cc: Guo Ren <[email protected]> [csky] Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: James Bottomley <[email protected]> Cc: Johannes Berg <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nicholas Piggin <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Stefan Kristiansson <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Xuerui Wang <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
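[Editor's note: the weak default is a one-liner in kernel/exit.c; architectures that need real work (currently only x86) override it. A sketch:]

  /* Default no-op; x86 provides its own definition. */
  void __weak release_thread(struct task_struct *dead_task)
  {
  }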
2022-09-10 | arm64: mm: fix resume for 52-bit enabled builds | Joey Gouly | 1 | -0/+3
__cpu_setup() was changed to take the actual number of VA bits in x0; however, the resume path was not updated at the same time. Load `vabits_actual` in the resume path, to ensure that the correct number of VA bits is used. This fixes booting v6.0-rc kernels on my Juno. Signed-off-by: Joey Gouly <[email protected]> Fixes: 0aaa68532e9d ("arm64: mm: fix booting with 52-bit address space") Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Ard Biesheuvel <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2022-09-09 | arm64: spectre: increase parameters that can be used to turn off bhb mitigation individually | Liu Song | 1 | -1/+9
In our environment, we found that the BHB mitigation has a significant impact on benchmark performance; for example, in the lmbench "process fork && exit" test, performance drops by 20%. It is therefore necessary to be able to turn off the mitigation individually via the kernel command line, avoiding the need to recompile the kernel with a different config. Signed-off-by: Liu Song <[email protected]> Acked-by: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
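[Editor's note: a command-line switch of this kind is typically wired up with early_param(); this sketch assumes the mainline parameter name nospectre_bhb and is reconstructed from memory rather than quoted from the patch.]

  static bool __nospectre_bhb;

  static int __init parse_spectre_bhb_param(char *str)
  {
          __nospectre_bhb = true;
          return 0;
  }
  early_param("nospectre_bhb", parse_spectre_bhb_param);

  /* Later, when deciding whether to enable the mitigation (sketch): */
  /* if (__nospectre_bhb || cpu_mitigations_off()) return; */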
2022-09-09 | arm64: run softirqs on the per-CPU IRQ stack | Qi Zheng | 1 | -0/+14
Currently arm64 supports per-CPU IRQ stack, but softirqs are still handled in the task context. Since any call to local_bh_enable() at any level in the task's call stack may trigger a softirq processing run, which could potentially cause a task stack overflow if the combined stack footprints exceed the stack's size, let's run these softirqs on the IRQ stack as well. Signed-off-by: Qi Zheng <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Acked-by: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
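[Editor's note: on arm64 this can reuse the existing call_on_irq_stack() helper; a sketch of the shape of the change in arch/arm64/kernel/irq.c, based on the description above (the exact wrapper naming is an assumption).]

  #ifdef CONFIG_SOFTIRQ_ON_OWN_STACK
  static void ____do_softirq(struct pt_regs *regs)
  {
          __do_softirq();
  }

  /* Run pending softirqs on the per-CPU IRQ stack, not the task stack. */
  void do_softirq_own_stack(void)
  {
          call_on_irq_stack(NULL, ____do_softirq);
  }
  #endif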
2022-09-09 | arm64: stacktrace: track hyp stacks in unwinder's address space | Mark Rutland | 1 | -1/+1
Currently unwind_next_frame_record() has an optional callback to convert the address space of the FP. This is necessary for the NVHE unwinder, which tracks the stacks in the hyp VA space, but accesses the frame records in the kernel VA space. This is a bit unfortunate since it clutters unwind_next_frame_record(), which will get in the way of future rework. Instead, this patch changes the NVHE unwinder to track the stacks in the kernel's VA space and translate the FP prior to calling unwind_next_frame_record(). This removes the need for the translate_fp() callback, as all unwinders consistently track stacks in the native address space of the unwinder. At the same time, this patch consolidates the generation of the stack addresses behind the stackinfo_get_*() helpers. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Kalesh Singh <[email protected]> Reviewed-by: Madhavan T. Venkataraman <[email protected]> Reviewed-by: Mark Brown <[email protected]> Cc: Fuad Tabba <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64: stacktrace: track all stack boundaries explicitly | Mark Rutland | 1 | -53/+38
Currently we call an on_accessible_stack() callback for each step of the unwinder, requiring redundant work to be performed in the core of the unwind loop (e.g. disabling preemption around accesses to per-cpu variables containing stack boundaries). To prevent unwind loops which go through a stack multiple times, we have to track the set of unwound stacks, requiring a stack_type enum which needs to cater for all the stacks of all possible callees. To prevent loops within a stack, we must track the prior FP values. This patch reworks the unwinder to minimize the work in the core of the unwinder, and to remove the need for the stack_type enum. The set of accessible stacks (and their boundaries) are determined at the start of the unwind, and the current stack is tracked during the unwind, with completed stacks removed from the set of accessible stacks. This makes the boundary checks more accurate (e.g. detecting overlapped frame records), and removes the need for separate tracking of the prior FP and visited stacks. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Kalesh Singh <[email protected]> Reviewed-by: Madhavan T. Venkataraman <[email protected]> Reviewed-by: Mark Brown <[email protected]> Cc: Fuad Tabba <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64: stacktrace: rework stack boundary discovery | Mark Rutland | 2 | -24/+43
In subsequent patches we'll want to acquire the stack boundaries ahead-of-time, and we'll need to be able to acquire the relevant stack_info regardless of whether we have an object that happens to be on the stack. This patch replaces the on_XXX_stack() helpers with stackinfo_get_XXX() helpers, with the caller being responsible for checking whether an object is on a relevant stack. For the moment this is moved into the on_accessible_stack() functions, making these slightly larger; subsequent patches will remove the on_accessible_stack() functions and simplify the logic. The on_irq_stack() and on_task_stack() helpers are kept as these are used by IRQ entry sequences and stackleak respectively. As they're only used as predicates, the stack_info pointer parameter is removed in both cases. As the on_accessible_stack() functions are always passed a non-NULL info pointer, these now update info unconditionally. When updating the type to STACK_TYPE_UNKNOWN, the low/high bounds are also modified, but as these will not be consumed this should have no adverse effect. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Kalesh Singh <[email protected]> Reviewed-by: Madhavan T. Venkataraman <[email protected]> Reviewed-by: Mark Brown <[email protected]> Cc: Fuad Tabba <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
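[Editor's note: the resulting split looks roughly like this sketch; names follow the commit message, and the stackinfo_on_stack() predicate is assumed from the same series.]

  static struct stack_info stackinfo_get_irq(void)
  {
          unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr);
          unsigned long high = low + IRQ_STACK_SIZE;

          return (struct stack_info) {
                  .low = low,
                  .high = high,
          };
  }

  /* Kept as a predicate for IRQ entry; the caller does the check itself. */
  static inline bool on_irq_stack(unsigned long sp, unsigned long size)
  {
          struct stack_info info = stackinfo_get_irq();

          return stackinfo_on_stack(&info, sp, size);
  }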
2022-09-09 | arm64: stacktrace: move SDEI stack helpers to stacktrace code | Mark Rutland | 2 | -34/+9
For clarity and ease of maintenance, it would be helpful for all the stack helpers to be in the same place. Move the SDEI stack helpers into the stacktrace code where all the other stack helpers live. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Kalesh Singh <[email protected]> Reviewed-by: Madhavan T. Venkataraman <[email protected]> Reviewed-by: Mark Brown <[email protected]> Cc: Fuad Tabba <[email protected]> Cc: James Morse <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64: stacktrace: rename unwind_next_common() -> unwind_next_frame_record() | Mark Rutland | 1 | -1/+1
The unwind_next_common() function unwinds a single frame record. There are other unwind steps (e.g. unwinding through trampolines) which are handled in the regular kernel unwinder, and in future there may be other common unwind helpers. Clarify the purpose of unwind_next_common() by renaming it to unwind_next_frame_record(). At the same time, add commentary, and delete the redundant comment at the top of asm/stacktrace/common.h. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Kalesh Singh <[email protected]> Reviewed-by: Madhavan T. Venkataraman <[email protected]> Reviewed-by: Mark Brown <[email protected]> Cc: Fuad Tabba <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64: stacktrace: simplify unwind_next_common() | Mark Rutland | 1 | -2/+1
Currently unwind_next_common() takes a pointer to a stack_info which is only ever used within unwind_next_common(). Make it a local variable and simplify callers. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <[email protected]> Reviewed-by: Kalesh Singh <[email protected]> Reviewed-by: Madhavan T. Venkataraman <[email protected]> Reviewed-by: Mark Brown <[email protected]> Cc: Fuad Tabba <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Will Deacon <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64: alternative: patch alternatives in the vDSO | Joey Gouly | 3 | -3/+35
Make it possible to use alternatives in the vDSO, so that better implementations can be used if possible. Signed-off-by: Joey Gouly <[email protected]> Cc: Will Deacon <[email protected]> Cc: Vincenzo Frascino <[email protected]> Cc: Mark Rutland <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64: module: move find_section to header | Joey Gouly | 1 | -15/+0
Move it to the header so that the implementation can be shared by the alternatives code. Signed-off-by: Joey Gouly <[email protected]> Cc: Will Deacon <[email protected]> Acked-by: Mark Rutland <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
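[Editor's note: find_section() is a small ELF section lookup, so sharing it from a header is natural; a sketch of the helper as it exists in mainline, reproduced from memory.]

  static inline const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
                                             const Elf_Shdr *sechdrs,
                                             const char *name)
  {
          const Elf_Shdr *s, *se;
          const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

          /* Walk the section headers, matching on the section name. */
          for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
                  if (strcmp(name, secstrs + s->sh_name) == 0)
                          return s;
          }

          return NULL;
  }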
2022-09-09 | arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 SME enumeration | Mark Brown | 1 | -2/+2
In preparation for automatic generation of constants update the define for SME being implemented to the convention we are using, no functional change. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 BTI enumeration | Mark Brown | 1 | -2/+2
In preparation for automatic generation of constants update the define for BTI being implemented to the convention we are using, no functional change. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 fractional version fields | Mark Brown | 1 | -2/+2
The naming for fractional version fields in ID_AA64PFR1_EL1 does not align with that in the architecture, lacking underscores and using upper case where the architecture uses lower case. In preparation for automatic generation of defines, bring the code in sync with the architecture, no functional change. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming for MTE feature enumeration | Mark Brown | 1 | -4/+4
In preparation for conversion to automatic generation, refresh the names given to the items in the MTE feature enumeration to reflect our standard pattern for naming, corresponding to the architecture feature names they reflect. No functional change. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming for SSBS feature enumeration | Mark Brown | 1 | -3/+3
In preparation for conversion to automatic generation refresh the names given to the items in the SSBS feature enumeration to reflect our standard pattern for naming, corresponding to the architecture feature names they reflect. No functional change. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming for ID_AA64PFR0_EL1.AdvSIMD constants | Mark Brown | 1 | -3/+3
The architecture refers to the register field identifying advanced SIMD as AdvSIMD but the kernel refers to it as ASIMD. Use the architecture's naming. No functional changes. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming for ID_AA64PFR0_EL1 constants | Mark Brown | 1 | -4/+4
We generally refer to the baseline feature implemented as _IMP so in preparation for automatic generation of register defines update those for ID_AA64PFR0_EL1 to reflect this. In the case of ASIMD we don't actually use the define so just remove it. No functional changes. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming for ID_AA64MMFR2_EL1.CnP | Mark Brown | 1 | -2/+2
The kernel refers to ID_AA64MMFR2_EL1.CnP as CNP. In preparation for automatic generation of defines for the system registers bring the naming used by the kernel in sync with that of DDI0487H.a. No functional change. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming for ID_AA64MMFR2_EL1.VARange | Mark Brown | 2 | -3/+3
The kernel refers to ID_AA64MMFR2_EL1.VARange as LVA. In preparation for automatic generation of defines for the system registers bring the naming used by the kernel in sync with that of DDI0487H.a. No functional change. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming for ID_AA64MMFR1_EL1 fields | Kristina Martsenko | 4 | -22/+22
In preparation for converting the ID_AA64MMFR1_EL1 system register defines to automatic generation, rename them to follow the conventions used by other automatically generated registers:

* Add _EL1 in the register name.
* Rename fields to match the names in the ARM ARM:
  * LOR -> LO
  * HPD -> HPDS
  * VHE -> VH
  * HADBS -> HAFDBS
  * SPECSEI -> SpecSEI
  * VMIDBITS -> VMIDBits

There should be no functional change as a result of this patch. Signed-off-by: Kristina Martsenko <[email protected]> Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming of ID_AA64MMFR0_EL1.ASIDBits | Mark Brown | 1 | -1/+1
For some reason we refer to ID_AA64MMFR0_EL1.ASIDBits as ASID. Add BITS into the name, bringing the naming into sync with DDI0487H.a. Due to the large amount of MixedCase in this register which isn't really consistent with either the kernel style or the majority of the architecture the use of upper case is preserved. No functional changes. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Standardise naming of ID_AA64MMFR0_EL1.BigEnd | Mark Brown | 1 | -1/+1
For some reason we refer to ID_AA64MMFR0_EL1.BigEnd as BIGENDEL. Remove the EL from the name, bringing the naming into sync with DDI0487H.a. Due to the large amount of MixedCase in this register which isn't really consistent with either the kernel style or the majority of the architecture the use of upper case is preserved. No functional changes. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Add _EL1 into ID_AA64PFR1_EL1 constant names | Mark Brown | 3 | -25/+25
Our standard is to include the _EL1 in the constant names for registers, but we did not do that for ID_AA64PFR1_EL1; update it to do so in preparation for conversion to automatic generation. No functional change. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Add _EL1 into ID_AA64PFR0_EL1 definition names | Mark Brown | 4 | -38/+38
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64PFR0_EL1 to follow the convention. No functional changes. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Add _EL1 into ID_AA64MMFR2_EL1 definition names | Mark Brown | 2 | -23/+23
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64MMFR2_EL1 to follow the convention. No functional changes. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-09 | arm64/sysreg: Add _EL1 into ID_AA64MMFR0_EL1 definition names | Mark Brown | 2 | -20/+20
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64MMFR0_EL1 to follow the convention. No functional changes. Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Kristina Martsenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-08 | arm64/ptrace: Don't clear calling process' TIF_SME on OOM | Mark Brown | 1 | -2/+0
If allocating memory for the target SVE state in za_set() fails, we clear TIF_SME for the ptracing task, which is obviously not correct. If we are here, we know that the target task already had neither TIF_SVE nor TIF_SME set, since we only need to allocate if either the target had not used either SVE or SME and had no need to allocate state before, or we just changed the vector length with vec_set_vector_length(), which clears TIF_ for us on allocation failure. So just remove the clear entirely. Reported-by: Wang ShaoBo <[email protected]> Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2022-09-06 | arm64/sysreg: Add hwcap for SVE EBF16 | Mark Brown | 2 | -0/+2
SVE has a separate identification register indicating support for BFloat16 operations. Add a hwcap identifying support for EBF16 in this register, mirroring what we did for the non-SVE case. While there is currently an architectural requirement for BF16 support to be the same in SVE and non-SVE contexts, there are separate identification registers, so this separate hwcap helps avoid issues if that requirement were to be relaxed in the future; we have already chosen to have a separate capability for base BF16 support. Signed-off-by: Mark Brown <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-06 | arm64: compat: Implement misalignment fixups for multiword loads | Ard Biesheuvel | 2 | -0/+388
The 32-bit ARM kernel implements fixups on behalf of user space when using LDM/STM or LDRD/STRD instructions on addresses that are not 32-bit aligned. This is not something that is supported by the architecture, but was done anyway to increase compatibility with user space software, which mostly targeted x86 at the time and did not care about aligned accesses. This feature is one of the remaining impediments to being able to switch to 64-bit kernels on 64-bit capable hardware running 32-bit user space, so let's implement it for the arm64 compat layer as well. Note that the intent is to implement the exact same handling of misaligned multi-word loads and stores as the 32-bit kernel does, including what appears to be missing support for user space programs that rely on SETEND to switch to a different byte order and back. Also, like the 32-bit ARM version, we rely on the faulting address reported by the CPU to infer the memory address, instead of decoding the instruction fully to obtain this information. This implementation is taken from the 32-bit ARM tree, with all pieces removed that deal with instructions other than LDRD/STRD and LDM/STM, or that deal with alignment exceptions taken in kernel mode. Cc: [email protected] Cc: Vagrant Cascadian <[email protected]> Cc: Riku Voipio <[email protected]> Cc: Steve McIntyre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Link: https://lore.kernel.org/r/[email protected] [[email protected]: change the option to 'default n'] Signed-off-by: Catalin Marinas <[email protected]>
2022-09-01 | arm64: head: Ignore bogus KASLR displacement on non-relocatable kernels | Ard Biesheuvel | 1 | -0/+2
Even non-KASLR kernels can be built as relocatable, to work around broken bootloaders that violate the rules regarding physical placement of the kernel image - in this case, the physical offset modulo 2 MiB is used as the KASLR offset, and all absolute symbol references are fixed up in the usual way. This workaround is enabled by default. CONFIG_RELOCATABLE can also be disabled entirely, in which case the relocation code and the code that captures the offset are omitted from the build. However, since commit aacd149b6238 ("arm64: head: avoid relocating the kernel twice for KASLR"), this code got out of sync, and we still add the offset to the kernel virtual address before populating the page tables even though we never capture it. This means we add a bogus value instead, breaking the boot entirely. Fixes: aacd149b6238 ("arm64: head: avoid relocating the kernel twice for KASLR") Signed-off-by: Ard Biesheuvel <[email protected]> Tested-by: Mikulas Patocka <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
2022-09-01 | arm64/kexec: Fix missing extra range for crashkres_low. | Levi Yun | 1 | -1/+1
Like crashk_res, calling the crash_exclude_mem_range() function with the crashk_low_res area needs an extra crash_mem range too. Add one more extra cmem slot in case crashk_low_res is used. Signed-off-by: Levi Yun <[email protected]> Fixes: 944a45abfabc ("arm64: kdump: Reimplement crashkernel=X") Cc: <[email protected]> # 5.19.x Acked-by: Baoquan He <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
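[Editor's note: the underlying rule is that each crash_exclude_mem_range() call can split one range into two, so the range array needs a spare slot per excluded region. A sketch of the sizing logic, reconstructed from memory of machine_kexec_file.c rather than quoted from the patch:]

  struct crash_mem *cmem;
  unsigned int nr_ranges;
  phys_addr_t start, end;
  u64 i;

  /* One spare slot per excluded region: crashk_res and crashk_low_res. */
  nr_ranges = 2;
  for_each_mem_range(i, &start, &end)
          nr_ranges++;

  cmem = kmalloc(struct_size(cmem, ranges, nr_ranges), GFP_KERNEL);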
2022-08-23 | arm64/sme: Don't flush SVE register state when handling SME traps | Mark Brown | 1 | -11/+0
Currently as part of handling a SME access trap we flush the SVE register state. This is not needed and would corrupt register state if the task has access to the SVE registers already. For non-streaming mode accesses the required flushing will be done in the SVE access trap. For streaming mode SVE register accesses the architecture guarantees that the register state will be flushed when streaming mode is entered or exited so there is no need for us to do so. Simply remove the register initialisation. Fixes: 8bd7f91c03d8 ("arm64/sme: Implement traps and syscall handling for SME") Signed-off-by: Mark Brown <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>