path: root/arch/x86/kernel/apic
Age  Commit message  (Author, Files, Lines)
2023-08-09  x86/apic/ipi: Code cleanup  (Thomas Gleixner, 1 file, -16/+7)
Remove completely useless and mindlessly copied comments and tidy up the code which causes eye bleed when looking at it. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic/32: Remove x86_cpu_to_logical_apicid  (Thomas Gleixner, 2 files, -58/+9)
This per-CPU variable is just yet another form of voodoo programming. The boot ordering is:

    per_cpu(x86_cpu_to_logical_apicid, cpu) = 1U << cpu;
    .....
    setup_apic()
       apic->init_apic_ldr()
          default_init_apic_ldr()
             apic_write(APIC_LDR, SET_APIC_LOGICAL_ID(1UL << smp_processor_id()));
       id = GET_APIC_LOGICAL_ID(apic_read(APIC_LDR));
       WARN_ON(id != per_cpu(x86_cpu_to_logical_apicid, cpu));
       per_cpu(x86_cpu_to_logical_apicid, cpu) = id;

So first the default is written into the LDR and then validated against the same default which was set up during early boot APIC enumeration. Brilliant, isn't it? The comment above the per-CPU variable declaration describes it well: 'Let's keep it ugly for now.' Remove the useless gunk and use '1U << cpu' consistently all over the place. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic/32: Sanitize logical APIC ID handling  (Thomas Gleixner, 4 files, -38/+2)
apic::x86_32_early_logical_apicid() is yet another historical joke. It is used to preset the x86_cpu_to_logical_apicid per-CPU variable during APIC enumeration with:

    - 1 shifted left by the CPU number
    - the physical APIC ID in the case of bigsmp

The latter is hilarious because bigsmp uses physical destination mode, which can never use the logical APIC ID. It gets even worse: as bigsmp can be enforced late in the boot process, the probe function overwrites the per-CPU variable once more, even though it is never used for this APIC type. Remove that gunk and store '1U << cpunr', but only if the CPU number is less than 8, because the default logical destination mode only allows up to 8 CPUs. This is just an intermediate step before removing the per-CPU insanity completely. Stay tuned. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
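As a sketch of the resulting rule (illustrative helper name, not from the patch; the default flat logical destination mode has an 8-bit LDR field, one bit per CPU):

    /* Sketch: logical APIC ID in default (flat) logical destination mode */
    static inline u32 default_logical_apicid(unsigned int cpu)
    {
            /* Only CPUs 0-7 fit into the 8-bit logical APIC ID bitmap */
            return cpu < 8 ? 1U << cpu : 0;
    }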
2023-08-09  x86/apic: Get rid of apic_phys  (Thomas Gleixner, 1 file, -9/+10)
No need for an extra variable to find out whether the APIC has been mapped or is accessible (X2APIC mode). Provide an inline for this and check apic_mmio_base, which is only set when the local APIC has been mapped. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
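A minimal sketch of such an inline, assuming the variables named in the message (the actual helper name in the patch may differ):

    static inline bool apic_accessible(void)
    {
            /* X2APIC mode uses MSR accesses; otherwise the APIC must be MMIO-mapped */
            return x2apic_mode || apic_mmio_base;
    }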
2023-08-09  x86/apic: Remove check_phys_apicid_present()  (Thomas Gleixner, 9 files, -21/+0)
The only silly usage site is gone. Remove the gunk, which was even outright wrong in the bigsmp_32 case, where it returned true unconditionally. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Sanitize num_processors handling  (Thomas Gleixner, 1 file, -3/+6)
num_processors is 0 by default and only gets incremented when local APICs are registered. Make init_apic_mappings(), which tries to enable the local APIC when no SMP configuration was found, set num_processors to 1. This makes it possible to remove yet another check for the local APIC and yet another place which registers the boot CPU's local APIC ID. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Sanitize APIC address setup  (Thomas Gleixner, 1 file, -19/+10)
Convert places which just write mp_lapic_addr and let them register the local APIC address directly, instead of relying on other code to magically do so. Add a WARN_ON() to register_lapic_address() which triggers when it is invoked more than once during boot. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Split register_apic_address()  (Thomas Gleixner, 1 file, -8/+14)
Split the fixmap setup out of register_lapic_address() and reuse it when the X2APIC is disabled during setup. This avoids registering the APIC address (setting 'mp_lapic_addr') twice. [ dhansen: changelog wording tweak ] Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Make some APIC init functions bool  (Thomas Gleixner, 1 file, -18/+18)
Quite a few APIC init functions are pure boolean, but use the success = 0, fail < 0 model. That's confusing as hell when reading through the code. Convert them to boolean. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
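The conversion pattern, sketched with a hypothetical function (hw_ok() stands in for the actual probe logic):

    /* Before: success = 0, fail < 0 */
    static int apic_probe_something(void)
    {
            return hw_ok() ? 0 : -1;
    }

    /* After: the call site reads naturally as 'if (!apic_probe_something())' */
    static bool apic_probe_something(void)
    {
            return hw_ok();
    }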
2023-08-09  x86/apic: Remove the pointless APIC version check  (Thomas Gleixner, 1 file, -15/+4)
This historical leftover is really uninteresting today. Whatever MPTABLE or MADT delivers, only the hardware is trusted anyway. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Register boot CPU APIC early  (Thomas Gleixner, 1 file, -70/+50)
Register the boot CPU APIC right when the boot CPU's APIC ID is read from the hardware. There is no point in doing this in random places and playing wild heuristics to keep the boot CPU's APIC ID slot and CPU number 0 reserved. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Consolidate boot_cpu_physical_apicid initialization sites  (Thomas Gleixner, 1 file, -70/+32)
boot_cpu_physical_apicid is written in random places and ultimately filled with the APIC ID read from the local APIC. That causes it to have inconsistent state when the MPTABLE is broken, and as a consequence tons of moronic checks are sprinkled all over the place. Consolidate the code and read it exactly once, either when X2APIC mode is detected early or when the APIC mapping is established. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Nuke unused apic::inquire_remote_apic()  (Thomas Gleixner, 8 files, -17/+0)
Put it on the pile of the other historical leftovers. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Remove unused max_physical_apicid  (Thomas Gleixner, 1 file, -8/+0)
max_physical_apicid is assigned but never read. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Get rid of hard_smp_processor_id()  (Thomas Gleixner, 4 files, -8/+3)
No point in having a wrapper around read_apic_id(). Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Remove pointless x86_bios_cpu_apicid  (Thomas Gleixner, 4 files, -9/+4)
It's a useless copy of x86_cpu_to_apicid. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic/ioapic: Rename skip_ioapic_setup  (Thomas Gleixner, 2 files, -12/+12)
Another variable name which is confusing at best. Convert to bool. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-09  x86/apic: Rename disable_apic  (Thomas Gleixner, 4 files, -16/+16)
It reflects a state and not a command. Make it bool while at it. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Sohil Mehta <[email protected]> Tested-by: Juergen Gross <[email protected]> # Xen PV (dom0 and unpriv. guest)
2023-08-06  x86/vector: Replace IRQ_MOVE_CLEANUP_VECTOR with a timer callback  (Thomas Gleixner, 1 file, -20/+78)
The leftovers of a moved interrupt are cleaned up once the interrupt is raised on the new target CPU. Keeping the vector valid on the original target CPU guarantees that no interrupt can be lost if the affinity change races with a concurrent interrupt from the device. The cleanup utilizes the lowest priority interrupt vector, which makes sure that in the unlikely case where the to-be-cleaned-up interrupt is pending in the local APIC's IRR, the cleanup vector does not live-lock. But there is no real reason to use an interrupt vector for cleaning up the leftovers of a moved interrupt. It's not a high performance operation. The only requirement is that it happens on the original target CPU. Convert it to use a timer instead and adjust the code accordingly. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Xin Li <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
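A hedged sketch of the timer-based scheme (helper and variable names are illustrative; the key API is add_timer_on(), which arms a timer on a specific CPU):

    #include <linux/timer.h>
    #include <linux/percpu.h>
    #include <linux/jiffies.h>

    static DEFINE_PER_CPU(struct timer_list, vector_cleanup_timer);

    /* Runs in timer context on the original target CPU and frees the
     * leftover vector(s) of moved interrupts. */
    static void vector_cleanup_callback(struct timer_list *tmr)
    {
            /* walk the pending cleanup list and release the vectors */
    }

    static void vector_schedule_cleanup_sketch(unsigned int prev_cpu)
    {
            struct timer_list *tmr = per_cpu_ptr(&vector_cleanup_timer, prev_cpu);

            /* No IPI vector needed: arm the timer on the previous target CPU */
            if (!timer_pending(tmr)) {
                    tmr->expires = jiffies + 1;
                    add_timer_on(tmr, prev_cpu);
            }
    }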
2023-08-06  x86/vector: Rename send_cleanup_vector() to vector_schedule_cleanup()  (Thomas Gleixner, 1 file, -4/+4)
Rename send_cleanup_vector() to vector_schedule_cleanup() to prepare for replacing the vector cleanup IPI with a timer callback. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Xin Li <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Steve Wahl <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2023-07-31  x86/apic: Hide unused safe_smp_processor_id() on 32-bit UP  (Arnd Bergmann, 1 file, -0/+2)
When CONFIG_SMP is disabled in a 32-bit config, the prototype for safe_smp_processor_id() is hidden, which causes a W=1 warning:

    arch/x86/kernel/apic/ipi.c:316:5: error: no previous prototype for 'safe_smp_processor_id' [-Werror=missing-prototypes]

Since there are no callers in this configuration, just hide the definition as well. [ bp: Clarify it is a 32-bit config. ] Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
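The fix boils down to compiling the function only when the prototype exists, along these lines (body elided to a placeholder):

    #ifdef CONFIG_SMP
    int safe_smp_processor_id(void)
    {
            /* real body unchanged; it is simply not built on 32-bit UP */
            return smp_processor_id();
    }
    #endif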
2023-06-26  Merge tag 'x86_platform_for_6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -123/+195)
Pull x86 platform updates from Dave Hansen: "Allow CPUs in SGI/HPE Ultraviolet to start using Sub-NUMA clustering (SNC) mode. SNC has been around outside the UV world for a while but evidently never worked on UV systems. SNC is rather notorious for breaking bad assumptions of a 1:1 relationship between physical sockets and NUMA nodes. The UV code was rather prolific with these assumptions and took quite a bit of refactoring to remove them"

* tag 'x86_platform_for_6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/platform/uv: Update UV[23] platform code for SNC
  x86/platform/uv: Remove remaining BUG_ON() and BUG() calls
  x86/platform/uv: UV support for sub-NUMA clustering
  x86/platform/uv: Helper functions for allocating and freeing conversion tables
  x86/platform/uv: When searching for minimums, start at INT_MAX not 99999
  x86/platform/uv: Fix printed information in calc_mmioh_map
  x86/platform/uv: Introduce helper function uv_pnode_to_socket.
  x86/platform/uv: Add platform resolving #defines for misc GAM_MMIOH_REDIRECT*
2023-06-26  Merge tag 'smp-core-2023-06-26' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -11/+29)
Pull SMP updates from Thomas Gleixner: "A large update for SMP management:

 - Parallel CPU bringup

   The reason why people are interested in parallel bringup is to shorten the (kexec) reboot time of cloud servers to reduce the downtime of the VM tenants.

   The current fully serialized bringup does the following per AP:

     1) Prepare callbacks (allocate, initialize, create threads)
     2) Kick the AP alive (e.g. INIT/SIPI on x86)
     3) Wait for the AP to report alive state
     4) Let the AP continue through the atomic bringup
     5) Let the AP run the threaded bringup to full online state

   There are two significant delays:

     #3 The time for an AP to report alive state in start_secondary() on x86 has been measured in the range between 350us and 3.5ms depending on vendor and CPU type, BIOS microcode size, etc.

     #4 The atomic bringup does the microcode update. This has been measured to take up to ~8ms on the primary threads depending on the microcode patch size to apply.

   On a two socket SKL server with 56 cores (112 threads) the boot CPU spends about 800ms on current mainline busy waiting for the APs to come up and apply microcode. That's more than 80% of the actual onlining procedure.

   This can be reduced significantly by splitting the bringup mechanism into two parts:

     1) Run the prepare callbacks and kick the AP alive for each AP which needs to be brought up. The APs wake up, do their firmware initialization and run the low level kernel startup code including microcode loading in parallel up to the first synchronization point. (#1 and #2 above)

     2) Run the rest of the bringup code strictly serialized per CPU (#3 - #5 above) as it's done today. Parallelizing that stage of the CPU bringup might be possible in theory, but it's questionable whether the required surgery would be justified for a pretty small gain.

   If the system is large enough, the first AP is already waiting at the first synchronization point when the boot CPU has finished the wake-up of the last AP. That reduces the AP bringup time on that SKL from ~800ms to ~80ms, i.e. by a factor of ~10x. The actual gain varies wildly depending on the system, CPU, microcode patch size and other factors. There are some opportunities to reduce the overhead further, but that needs some deep surgery in the x86 CPU bringup code. For now this is only enabled on x86, but the core functionality obviously works for all SMP capable architectures.

 - Enhancements for SMP function call tracing so it is possible to locate the scheduling and the actual execution points. That makes it possible to measure IPI delivery time precisely."

* tag 'smp-core-2023-06-26' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/tip/tip: (45 commits)
  trace,smp: Add tracepoints for scheduling remotelly called functions
  trace,smp: Add tracepoints around remotelly called functions
  MAINTAINERS: Add CPU HOTPLUG entry
  x86/smpboot: Fix the parallel bringup decision
  x86/realmode: Make stack lock work in trampoline_compat()
  x86/smp: Initialize cpu_primary_thread_mask late
  cpu/hotplug: Fix off by one in cpuhp_bringup_mask()
  x86/apic: Fix use of X{,2}APIC_ENABLE in asm with older binutils
  x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it
  x86/smpboot: Support parallel startup of secondary CPUs
  x86/smpboot: Implement a bit spinlock to protect the realmode stack
  x86/apic: Save the APIC virtual base address
  cpu/hotplug: Allow "parallel" bringup up to CPUHP_BP_KICK_AP_STATE
  x86/apic: Provide cpu_primary_thread mask
  x86/smpboot: Enable split CPU startup
  cpu/hotplug: Provide a split up CPUHP_BRINGUP mechanism
  cpu/hotplug: Reset task stack state in _cpu_up()
  cpu/hotplug: Remove unused state functions
  riscv: Switch to hotplug core state synchronization
  parisc: Switch to hotplug core state synchronization
  ...
2023-06-19  x86/apic: Fix kernel panic when booting with intremap=off and x2apic_phys  (Dheeraj Kumar Srivastava, 1 file, -1/+4)
When booting with "intremap=off" and "x2apic_phys" on the kernel command line, the physical x2APIC driver ends up being used even when x2APIC mode is disabled ("intremap=off" disables x2APIC mode). This happens because the first compound condition check in x2apic_phys_probe() is false due to x2apic_mode == 0, and so the following one returns true because default_acpi_madt_oem_check() has already selected the physical x2APIC driver. This results in the following panic:

    kernel BUG at arch/x86/kernel/apic/io_apic.c:2409!
    invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
    CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.4.0-rc2-ver4.1rc2 #2
    Hardware name: Dell Inc. PowerEdge R6515/07PXPY, BIOS 2.3.6 07/06/2021
    RIP: 0010:setup_IO_APIC+0x9c/0xaf0
    Call Trace:
     <TASK>
     ? native_read_msr
     apic_intr_mode_init
     x86_late_time_init
     start_kernel
     x86_64_start_reservations
     x86_64_start_kernel
     secondary_startup_64_no_verify
     </TASK>

which is:

    setup_IO_APIC:
      apic_printk(APIC_VERBOSE, "ENABLING IO-APIC IRQs\n");
      for_each_ioapic(ioapic)
          BUG_ON(mp_irqdomain_create(ioapic));

Return 0 to denote that x2APIC has not been enabled when probing the physical x2APIC driver. [ bp: Massage commit message heavily. ] Fixes: 9ebd680bd029 ("x86, apic: Use probe routines to simplify apic selection") Signed-off-by: Dheeraj Kumar Srivastava <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Reviewed-by: Kishon Vijay Abraham I <[email protected]> Reviewed-by: Vasant Hegde <[email protected]> Reviewed-by: Cyrill Gorcunov <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
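The shape of the fix, sketched from the description (bail out early when x2APIC mode is off so the driver is never claimed):

    static int x2apic_phys_probe(void)
    {
            /* x2APIC mode disabled, e.g. by intremap=off: never select this driver */
            if (!x2apic_mode)
                    return 0;

            if (x2apic_phys || x2apic_fadt_phys())
                    return 1;

            return apic == &apic_x2apic_phys;
    }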
2023-05-31  x86/platform/uv: Update UV[23] platform code for SNC  (Steve Wahl, 1 file, -4/+38)
Previous Sub-NUMA Clustering changes need not just a count of blades present, but a count that includes any missing ids for blades not present; in other words, the range from lowest to highest blade id. Signed-off-by: Steve Wahl <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Link: https://lore.kernel.org/all/20230519190752.3297140-9-steve.wahl%40hpe.com
2023-05-31  x86/platform/uv: Remove remaining BUG_ON() and BUG() calls  (Steve Wahl, 1 file, -2/+4)
Replace BUG() and BUG_ON() with WARN_ON_ONCE() and carry on as best we can. Signed-off-by: Steve Wahl <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Link: https://lore.kernel.org/all/20230519190752.3297140-8-steve.wahl%40hpe.com
2023-05-31  x86/platform/uv: UV support for sub-NUMA clustering  (Steve Wahl, 1 file, -68/+94)
Sub-NUMA clustering (SNC) invalidates previous assumptions of a 1:1 relationship between blades, sockets, and nodes. Fix these assumptions and build tables correctly when SNC is enabled. Signed-off-by: Steve Wahl <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Link: https://lore.kernel.org/all/20230519190752.3297140-7-steve.wahl%40hpe.com
2023-05-31  x86/platform/uv: Helper functions for allocating and freeing conversion tables  (Steve Wahl, 1 file, -44/+53)
Add alloc_conv_table() and FREE_1_TO_1_TABLE() to reduce duplicated code among the conversion tables we use. Signed-off-by: Steve Wahl <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Link: https://lore.kernel.org/all/20230519190752.3297140-6-steve.wahl%40hpe.com
2023-05-31  x86/platform/uv: When searching for minimums, start at INT_MAX not 99999  (Steve Wahl, 1 file, -2/+2)
Using a starting value of INT_MAX rather than 999999 or 99999 means this algorithm won't fail should the numbers being compared ever exceed this value. Signed-off-by: Steve Wahl <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Link: https://lore.kernel.org/all/20230519190752.3297140-5-steve.wahl%40hpe.com
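In plain C terms, the pattern is:

    #include <limits.h>

    /* Start at INT_MAX so any legitimate value wins; a magic sentinel
     * like 99999 breaks once real ids exceed it. */
    static int min_id(const int *id, int n)
    {
            int min = INT_MAX;

            for (int i = 0; i < n; i++) {
                    if (id[i] < min)
                            min = id[i];
            }
            return min;
    }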
2023-05-31  x86/platform/uv: Fix printed information in calc_mmioh_map  (Steve Wahl, 1 file, -6/+7)
Fix incorrect mask names and values in calc_mmioh_map() that caused it to print wrong NASID information. Also, an unused blade position is not an error condition, but will yield an invalid NASID value, so change the invalid NASID message from an error to a debug message. Signed-off-by: Steve Wahl <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Link: https://lore.kernel.org/all/20230519190752.3297140-4-steve.wahl%40hpe.com
2023-05-29  x86/smp: Initialize cpu_primary_thread_mask late  (Thomas Gleixner, 1 file, -1/+17)
Marking primary threads in the cpumask during early boot is only correct in certain configurations, but broken e.g. for the legacy hyperthreading detection. This is due to the complete mess in the CPUID evaluation code, which only half-initializes smp_num_siblings during early init and fixes it up later when identify_boot_cpu() is invoked. So using smp_num_siblings before identify_boot_cpu() leads to incorrect results. Fixing the early CPU init code to provide the proper data is a larger scale surgery, as the code has dependencies on data structures which are not initialized during early boot. Move the initialization of cpu_primary_thread_mask, which depends on smp_num_siblings being correct, to an early initcall so that it is set up correctly before SMP bringup. Fixes: f54d4434c281 ("x86/apic: Provide cpu_primary_thread mask") Reported-by: "Kirill A. Shutemov" <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Kirill A. Shutemov <[email protected]> Link: https://lore.kernel.org/r/87sfbhlwp9.ffs@tglx
2023-05-15  x86/smpboot: Support parallel startup of secondary CPUs  (David Woodhouse, 1 file, -1/+1)
In parallel startup mode the APs are kicked alive by the control CPU quickly after each other and run through the early startup code in parallel. The real-mode startup code is already serialized with a bit-spinlock to protect the real-mode stack. In parallel startup mode the smpboot_control variable obviously cannot contain the Linux CPU number, so the APs have to determine their Linux CPU number on their own. This is required to find the CPU's per-CPU offset in order to find the idle task stack and other per-CPU data. To achieve this, export the cpuid_to_apicid[] array so that each AP can find its own CPU number by searching therein based on its APIC ID. Introduce a flag in the top bits of smpboot_control which indicates that the AP should find its CPU number by reading the APIC ID from the APIC. This is required because CPUID-based APIC ID retrieval can only provide the initial APIC ID, which might have been overruled by the firmware. Some AMD APUs come up with APIC ID = initial APIC ID + 0x10, so an APIC ID to CPU number lookup would fail miserably if based on CPUID. Also virtualization can make its own APIC ID assignments. The only requirement is that the APIC IDs are consistent with the ACPI/MADT table. For the boot CPU, or in case parallel bringup is disabled, the control bits are empty and the CPU number is directly available in bits 0-23 of smpboot_control. [ tglx: Initial proof of concept patch with bitlock and APIC ID lookup ] [ dwmw2: Rework and testing, commit message, CPUID 0x1 and CPU0 support ] [ seanc: Fix stray override of initial_gs in common_cpu_up() ] [ Oleksandr Natalenko: reported suspend/resume issue fixed in x86_acpi_suspend_lowlevel ] [ tglx: Make it read the APIC ID from the APIC instead of using CPUID, split the bitlock part out ] Co-developed-by: Thomas Gleixner <[email protected]> Co-developed-by: Brian Gerst <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Brian Gerst <[email protected]> Signed-off-by: David Woodhouse <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Oleksandr Natalenko <[email protected]> Tested-by: Helge Deller <[email protected]> # parisc Tested-by: Guilherme G. Piccoli <[email protected]> # Steam Deck Link: https://lore.kernel.org/r/[email protected]
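A sketch of the lookup an AP performs (cpuid_to_apicid[] is the array exported by this change; the helper name is hypothetical):

    static int find_cpu_number(u32 apicid)
    {
            unsigned int cpu;

            /* Works because the APIC IDs are consistent with ACPI/MADT */
            for (cpu = 0; cpu < nr_cpu_ids; cpu++) {
                    if (cpuid_to_apicid[cpu] == apicid)
                            return cpu;
            }
            return -1;      /* not enumerated: this AP cannot come online */
    }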
2023-05-15  x86/apic: Save the APIC virtual base address  (Thomas Gleixner, 1 file, -0/+4)
For parallel CPU bringup it's required to read the APIC ID in the low level startup code. The virtual APIC base address is a constant because it is a fixmap address. Exposing that constant, which is composed via macros, to assembly code is non-trivial due to header inclusion hell. Aside from that, it is only constant because of the vsyscall ABI requirement. Once vsyscall is out of the picture the fixmap can be placed at runtime. Avoid header hell, stay flexible, and store the address in a variable which can be exposed to the low level startup code. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Oleksandr Natalenko <[email protected]> Tested-by: Helge Deller <[email protected]> # parisc Tested-by: Guilherme G. Piccoli <[email protected]> # Steam Deck Link: https://lore.kernel.org/r/[email protected]
2023-05-15  x86/apic: Provide cpu_primary_thread mask  (Thomas Gleixner, 1 file, -11/+9)
Make the primary thread tracking CPU-mask-based in preparation for simpler handling of parallel bootup. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> Tested-by: Oleksandr Natalenko <[email protected]> Tested-by: Helge Deller <[email protected]> # parisc Tested-by: Guilherme G. Piccoli <[email protected]> # Steam Deck Link: https://lore.kernel.org/r/[email protected]
2023-04-25  Merge tag 'x86-apic-2023-04-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 3 files, -52/+93)
Pull x86 APIC updates from Thomas Gleixner:

 - Fix the incorrect handling of atomic offset updates in reserve_eilvt_offset()

   The return value of atomic_cmpxchg() is not compared against the old value, it is compared against the new value, which means a successful update looks like a failure and takes a second round. Convert it to atomic_try_cmpxchg() which does the right thing.

 - Handle IO/APIC-less systems correctly

   When no IO/APIC is advertised by ACPI, the computation of the lower bound for dynamically allocated interrupts like MSI goes wrong. This lower bound is used to exclude the IO/APIC legacy GSI space, as that must stay reserved for the legacy interrupts. In case the system, e.g. a VM, does not advertise an IO/APIC, the lower bound stays at 0. 0 is an invalid interrupt number except for the legacy timer interrupt on x86. The return value is unchecked in the core code, so it ends up allocating interrupt number 0, which is subsequently considered to be invalid by the caller, e.g. the MSI allocation code. A similar problem was already cured for device tree based systems years ago, but that missed, or did not envision, the zero IO/APIC case. Consolidate the zero check and return the provided "from" argument to the core code call site, which is guaranteed to be greater than 0.

 - Simplify the X2APIC cluster CPU mask logic for CPU hotplug

   Per cluster CPU masks are required for X2APIC in cluster mode to determine the correct cluster for a target CPU when calculating the destination for IPIs. These masks are established when CPUs are brought up. The first CPU in a cluster must allocate a new cluster CPU mask. As this happens during the early startup of a CPU, where memory allocations cannot be done, the mask has to be allocated by the control CPU. The current implementation allocates a clustermask just in case, and if the CPU to be brought up is the first in a cluster, it takes over this allocation from a global pointer. This works nicely in the fully serialized CPU bringup scenario which is used today, but would fail completely for parallel bringup of CPUs. The cluster association of a CPU can be computed from the APIC ID, which is enumerated by ACPI/MADT. So the cluster CPU masks can be preallocated and associated upfront, and the upcoming CPUs just need to set their corresponding bit. Aside from preparing for parallel bringup, this is a valuable simplification on its own.

 - Remove global variables which control the early startup of secondary CPUs on 64-bit

   The only information which is needed by a starting CPU is the Linux CPU number. The CPU number allows it to retrieve the rest of the required data from already existing per-CPU storage. So instead of initial_stack, early_gdt_descr and initial_gs, provide a new variable smpboot_control which contains the Linux CPU number for now. The starting CPU can retrieve and compute all required information for startup from there. Aside from being a cleanup, this is also preparing for parallel CPU bringup, where starting CPUs will look up their Linux CPU number via the APIC ID when smpboot_control has the corresponding control bit set.

 - Make cc_vendor globally accessible

   Subsequent parallel bringup changes require access to cc_vendor because confidential computing platforms need special treatment in the early startup phase vs. CPUID and APIC ID readouts. The change makes cc_vendor global and provides stub accessors in case that CONFIG_ARCH_HAS_CC_PLATFORM is not set.

   This was merged from the x86/cc branch in anticipation of further parallel bringup commits which require access to cc_vendor. Due to late discoveries of fundamental issues with those patches these commits never happened. The merge commit is unfortunately in the middle of the APIC commits, so unraveling it would have required a rebase or revert. As the parallel bringup seems to be well on its way for 6.5, this would be just pointless churn. As the commit does not contain any functional change it's not a risk to keep it.

* tag 'x86-apic-2023-04-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/ioapic: Don't return 0 from arch_dynirq_lower_bound()
  x86/apic: Fix atomic update of offset in reserve_eilvt_offset()
  x86/coco: Export cc_vendor
  x86/smpboot: Reference count on smpboot_setup_warm_reset_vector()
  x86/smpboot: Remove initial_gs
  x86/smpboot: Remove early_gdt_descr on 64-bit
  x86/smpboot: Remove initial_stack on 64-bit
  x86/apic/x2apic: Allow CPU cluster_mask to be populated in parallel
2023-04-12  x86/ioapic: Don't return 0 from arch_dynirq_lower_bound()  (Saurabh Sengar, 1 file, -5/+9)
arch_dynirq_lower_bound() is invoked by the core interrupt code to retrieve the lowest possible Linux interrupt number for dynamically allocated interrupts like MSI. The x86 implementation uses this to exclude the IO/APIC GSI space. This works correctly as long as there is an IO/APIC registered, but returns 0 if not. This has been observed in VMs where the BIOS does not advertise an IO/APIC. 0 is an invalid interrupt number except for the legacy timer interrupt on x86. The return value is unchecked in the core code, so it ends up allocating interrupt number 0, which is subsequently considered to be invalid by the caller, e.g. the MSI allocation code. The function already has a check for 0 in the case that an IO/APIC is registered, as ioapic_dynirq_base is 0 in case of device tree setups. Consolidate this and zero-check both ioapic_dynirq_base and gsi_top, which is used in the case that no IO/APIC is registered. Fixes: 3e5bedc2c258 ("x86/apic: Fix arch_dynirq_lower_bound() bug for DT enabled machines") Signed-off-by: Saurabh Sengar <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
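The consolidated check then looks roughly like this (sketched from the description, not copied from the patch):

    unsigned int arch_dynirq_lower_bound(unsigned int from)
    {
            unsigned int ret;

            /* ioapic_dynirq_base is 0 on DT setups; gsi_top is 0 when no
             * IO/APIC is registered */
            ret = ioapic_dynirq_base ? : gsi_top;

            /* Never hand 0 back to the core code; 'from' is known to be > 0 */
            return ret ? : from;
    }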
2023-04-07  x86/apic: Fix atomic update of offset in reserve_eilvt_offset()  (Uros Bizjak, 1 file, -3/+2)
The detection of atomic update failure in reserve_eilvt_offset() is not correct. The value returned by atomic_cmpxchg() should be compared to the old value from the location to be updated. If these two are the same, then atomic update succeeded and "eilvt_offsets[offset]" location is updated to "new" in an atomic way. Otherwise, the atomic update failed and it should be retried with the value from "eilvt_offsets[offset]" - exactly what atomic_try_cmpxchg() does in a correct and more optimal way. Fixes: a68c439b1966c ("apic, x86: Check if EILVT APIC registers are available (AMD only)") Signed-off-by: Uros Bizjak <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
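The difference, sketched (make_entry() is a hypothetical stand-in for the offset/vector packing):

    /* Broken: a successful atomic_cmpxchg() returns the OLD value, not
     * 'new', so comparing the result against 'new' misreads success as
     * failure and forces a second round. */
    do {
            old = atomic_read(&eilvt_offsets[offset]);
            new = make_entry(old);
    } while (atomic_cmpxchg(&eilvt_offsets[offset], old, new) != new);

    /* Correct: atomic_try_cmpxchg() returns true on success and, on
     * failure, reloads 'old' from memory for the retry. */
    old = atomic_read(&eilvt_offsets[offset]);
    do {
            new = make_entry(old);
    } while (!atomic_try_cmpxchg(&eilvt_offsets[offset], &old, new));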
2023-03-26  x86/ioremap: Add hypervisor callback for private MMIO mapping in coco VM  (Michael Kelley, 1 file, -2/+8)
Current code always maps MMIO devices as shared (decrypted) in a confidential computing VM. But Hyper-V guest VMs on AMD SEV-SNP with vTOM use a paravisor running in VMPL0 to emulate some devices, such as the IO-APIC and TPM. In such a case, the device must be accessed as private (encrypted) because the paravisor emulates the device at an address below vTOM, where all accesses are encrypted. Add a new hypervisor callback to determine if an MMIO address should be mapped private. The callback allows hypervisor-specific code to handle any quirks, the use of a paravisor, etc. in determining whether a mapping must be private. If the callback is not used by a hypervisor, default to returning "false", which is consistent with normal coco VM behavior. Use this callback as another special case to check for when doing ioremap(). Just checking the starting address is sufficient, as an ioremap range must be all private or all shared. Also use the callback in the early boot IO-APIC mapping code that uses the fixmap. [ bp: Touchups. ] Signed-off-by: Michael Kelley <[email protected]> Signed-off-by: Borislav Petkov (AMD) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
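Conceptually, the ioremap path gains a check of this shape (a sketch; the pgprot helpers are shown for illustration):

    /* Checking the start address is sufficient: an ioremap range must
     * be entirely private or entirely shared. */
    if (x86_platform.hyper.is_private_mmio(phys_addr))
            prot = pgprot_encrypted(prot);  /* paravisor-emulated device */
    else
            prot = pgprot_decrypted(prot);  /* normal coco VM: shared MMIO */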
2023-03-21  x86/apic/x2apic: Allow CPU cluster_mask to be populated in parallel  (David Woodhouse, 1 file, -44/+82)
Each of the sibling CPUs in a cluster uses the same clustermask. The first CPU in a cluster will need a new clustermask allocated, while subsequent siblings will use the same clustermask as the first. However, the CPU being brought up cannot yet perform memory allocations at the point that this occurs in init_x2apic_ldr(). So at present, the alloc_clustermask() function allocates a clustermask just in case it's needed, storing it in the global cluster_hotplug_mask. A CPU which is the first sibling of a cluster will "take" it from there and set cluster_hotplug_mask to NULL, in order for alloc_clustermask() to allocate a new one before bringing up the next CPU. To facilitate parallel bringup of CPUs in future, switch to a model where alloc_clustermask() prepopulates the clustermask in the per_cpu data for each present CPU in the cluster in advance. All that the CPU needs to do for itself in init_x2apic_ldr() is set its own bit in that mask. The 'node' and 'clusterid' members of struct cluster_mask are thus redundant, and it can become a simple struct cpumask instead. Suggested-by: Thomas Gleixner <[email protected]> Signed-off-by: David Woodhouse <[email protected]> Signed-off-by: Usama Arif <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Paul E. McKenney <[email protected]> Tested-by: Kim Phillips <[email protected]> Tested-by: Oleksandr Natalenko <[email protected]> Tested-by: Guilherme G. Piccoli <[email protected]> Link: https://lore.kernel.org/r/[email protected]
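With the masks prepopulated by the control CPU, the bring-up path reduces to the CPU setting its own bit, roughly (per-CPU variable name assumed):

    static DEFINE_PER_CPU(struct cpumask *, cluster_masks);

    static void init_x2apic_ldr(void)
    {
            struct cpumask *cmsk = this_cpu_read(cluster_masks);

            /* No allocation here, so this is safe during early CPU startup */
            BUG_ON(!cmsk);
            cpumask_set_cpu(smp_processor_id(), cmsk);
    }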
2023-02-13  x86/ioapic: Use irq_domain_create_hierarchy()  (Johan Hovold, 1 file, -5/+2)
Use the irq_domain_create_hierarchy() helper to create the hierarchical domain, which both serves as documentation and avoids poking at irqdomain internals. Reviewed-by: Philippe Mathieu-Daudé <[email protected]> Tested-by: Hsin-Yi Wang <[email protected]> Tested-by: Mark-PK Tsai <[email protected]> Signed-off-by: Johan Hovold <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-12-12  Merge tag 'x86-apic-2022-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -5/+8)
Pull x86 apic update from Thomas Gleixner: "A set of changes for the x86 APIC code:

 - Handle the case where x2APIC is enabled and locked by the BIOS on a kernel with CONFIG_X86_X2APIC=n gracefully. Instead of a panic which does not make it to the graphical console during very early boot, simply disable the local APIC completely and boot with the PIC and very limited functionality, which makes it possible to diagnose the issue

 - Convert x86 APIC device tree bindings to YAML

 - Extend x86 APIC device tree bindings to configure interrupt delivery mode and handle this during init. This allows to boot with device tree on platforms which lack a legacy PIC"

* tag 'x86-apic-2022-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/of: Add support for boot time interrupt delivery mode configuration
  x86/of: Replace printk(KERN_LVL) with pr_lvl()
  dt-bindings: x86: apic: Introduce new optional bool property for lapic
  dt-bindings: x86: apic: Convert Intel's APIC bindings to YAML schema
  x86/of: Remove unused early_init_dt_add_memory_arch()
  x86/apic: Handle no CONFIG_X86_X2APIC on systems with x2APIC enabled by BIOS
2022-12-05  x86/apic/msi: Enable PCI/IMS  (Thomas Gleixner, 1 file, -0/+5)
Enable IMS in the domain init and allocation mapping code, but do not enable it on the vector domain as discussed in various threads on LKML. The interrupt remap domains can expand this setting like they do with PCI multi MSI. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Acked-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-12-05  x86/apic/msi: Remove arch_create_remap_msi_irq_domain()  (Thomas Gleixner, 1 file, -41/+1)
and related code which is no longer required now that the interrupt remap code has been converted to MSI parent domains. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Acked-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-12-05  iommu/amd: Switch to MSI base domains  (Thomas Gleixner, 1 file, -0/+1)
Remove the global PCI/MSI irqdomain implementation and provide the required MSI parent ops so the PCI/MSI code can detect the new parent and setup per device domains. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Acked-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-12-05  iommu/vt-d: Switch to MSI parent domains  (Thomas Gleixner, 1 file, -0/+2)
Remove the global PCI/MSI irqdomain implementation and provide the required MSI parent ops so the PCI/MSI code can detect the new parent and setup per device domains. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Acked-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-12-05  x86/apic/vector: Provide MSI parent domain  (Thomas Gleixner, 1 file, -47/+129)
Enable MSI parent domain support in the x86 vector domain and fixup the checks in the iommu implementations to check whether device::msi::domain is the default MSI parent domain. That keeps the existing logic to protect e.g. devices behind VMD working. The interrupt remap PCI/MSI code still works because the underlying vector domain still provides the same functionality. None of the other x86 PCI/MSI, e.g. XEN and HyperV, implementations are affected either. They still work the same way both at the low level and the PCI/MSI implementations they provide. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Acked-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-12-05  genirq/msi: Move IRQ_DOMAIN_MSI_NOMASK_QUIRK to MSI flags  (Thomas Gleixner, 1 file, -3/+2)
It's truly an MSI-only flag, and for the upcoming per-device MSI domains this must be in the MSI flags so it can be set during domain setup without exposing this quirk outside of x86. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Jason Gunthorpe <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Acked-by: Marc Zyngier <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-12-02  x86/apic: Handle no CONFIG_X86_X2APIC on systems with x2APIC enabled by BIOS  (Mateusz Jończyk, 1 file, -5/+8)
A kernel that was compiled without CONFIG_X86_X2APIC was unable to boot on platforms that have x2APIC already enabled in the BIOS before starting the kernel. The kernel was supposed to panic with an appropriate error message in validate_x2apic() due to the missing X2APIC support. However, validate_x2apic() was run too late in the boot cycle, and the kernel tried to initialize the APIC nonetheless. This resulted in an earlier panic in setup_local_APIC() because the APIC was not registered. In my experiments, a panic message in setup_local_APIC() was not visible in the graphical console, which resulted in a hang with no indication what has gone wrong. Instead of calling panic(), disable the APIC, which results in a somewhat working system with the PIC only (and no SMP). This way the user is able to diagnose the problem more easily. Disabling X2APIC mode is not an option because it's impossible on systems with locked x2APIC. The proper place to disable the APIC in this case is in check_x2apic(), which is called early from setup_arch(). Doing this in __apic_intr_mode_select() is too late. Make check_x2apic() unconditionally available and remove the empty stub. Reported-by: Paul Menzel <[email protected]> Reported-by: Robert Elliott (Servers) <[email protected]> Signed-off-by: Mateusz Jończyk <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/lkml/[email protected] Link: https://lore.kernel.org/lkml/[email protected] Link: https://lore.kernel.org/all/[email protected]
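The early check then goes in this direction (a sketch based on the description; exact messages and flag names may differ):

    void __init check_x2apic(void)
    {
            if (!x2apic_enabled())
                    return;

            if (!IS_ENABLED(CONFIG_X86_X2APIC)) {
                    pr_err("x2APIC enabled by BIOS but CONFIG_X86_X2APIC is not set; disabling APIC\n");
                    /* Cannot revert to xAPIC (it may be locked): run with the PIC only */
                    disable_apic = 1;
            }
    }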
2022-11-17  x86/apic: Remove X86_IRQ_ALLOC_CONTIGUOUS_VECTORS  (Thomas Gleixner, 2 files, -8/+2)
Now that the PCI/MSI core code does early checking for multi-MSI support X86_IRQ_ALLOC_CONTIGUOUS_VECTORS is not required anymore. Remove the flag and rely on MSI_FLAG_MULTI_PCI_MSI. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Jason Gunthorpe <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-08-31  x86/apic: Don't disable x2APIC if locked  (Daniel Sneddon, 1 file, -4/+40)
The APIC supports two modes: legacy APIC (or xAPIC) and Extended APIC (or x2APIC). x2APIC mode is mostly compatible with legacy APIC, but it disables the memory-mapped APIC interface in favor of one that uses MSRs. The APIC mode is controlled by the EXTD bit in the IA32_APIC_BASE MSR. The MMIO/xAPIC interface has some problems, most notably the APIC LEAK [1]. This bug allows an attacker to use the APIC MMIO interface to extract data from the SGX enclave. Introduce support for a new feature that allows the BIOS to lock the APIC in x2APIC mode. If the APIC is locked in x2APIC mode and the kernel tries to disable the APIC or revert to legacy APIC mode, a GP fault will occur. Introduce support for a new MSR (IA32_XAPIC_DISABLE_STATUS) and handle the new locked mode when the LEGACY_XAPIC_DISABLED bit is set by preventing the kernel from trying to disable the x2APIC. On platforms with the IA32_XAPIC_DISABLE_STATUS MSR, if SGX or TDX are enabled, the LEGACY_XAPIC_DISABLED bit will be set by the BIOS. If legacy APIC is required, then SGX and TDX need to be disabled in the BIOS. [1]: https://aepicleak.com/aepicleak.pdf Signed-off-by: Daniel Sneddon <[email protected]> Signed-off-by: Dave Hansen <[email protected]> Acked-by: Dave Hansen <[email protected]> Tested-by: Neelima Krishnan <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
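A hedged sketch of the detection (MSR name and bit per the commit; the exact MSR index here is an assumption, and rdmsrl_safe() guards against the MSR being absent):

    #define MSR_IA32_XAPIC_DISABLE_STATUS   0x000000BD      /* assumed index */
    #define LEGACY_XAPIC_DISABLED           BIT(0)

    static bool apic_is_x2apic_locked(void)
    {
            u64 msr;

            if (rdmsrl_safe(MSR_IA32_XAPIC_DISABLE_STATUS, &msr))
                    return false;   /* MSR not present: not locked */

            return msr & LEGACY_XAPIC_DISABLED;
    }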