path: root/arch/x86/kernel/apic
Age | Commit message | Author | Files | Lines
2016-05-04 | x86/platform/UV: Fold blade info into per node hub info structs | Mike Travis | 1 | -111/+101
Migrate references from the blade info structs to the per node hub info structs. This phases out the allocation of the list of per blade info structs on node 0, in favor of a per node hub info struct allocated on the node's local memory. There are also some minor cosmetic changes in the comments and whitespace to clean things up a bit. Tested-by: Dimitri Sivanich <[email protected]> Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-05-04 | x86/platform/UV: Allocate common per node hub info structs on local node | Mike Travis | 1 | -15/+44
Allocate and set up per node hub info structs. CPU 0/Node 0 hub info is statically allocated to be accessible early in system startup. The remaining hub info structs are allocated on the node's local memory, and shared among the CPUs on that node. This leaves the small amount of info unique to each CPU in the per CPU info struct. Memory is saved by combining the common per node info fields to common node local structs. In addition, since the info is read-only after setup, it should stay in the L3 cache of the local processor socket. This should therefore improve the cache hit rate when a group of cpus on a node are all interrupted for a common task. Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Reviewed-by: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
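As a rough illustration of the allocation pattern described above (one struct per node, placed on that node's local memory and shared by all of the node's CPUs), here is a minimal sketch. The struct layout, the array and the per-cpu pointer names are invented for the example; only the kernel APIs used (kzalloc_node(), the node/cpu iterators, per-cpu variables) are real.

    /* Illustrative sketch only: one info struct per node, allocated on that
     * node's local memory and shared by every CPU of the node. */
    #include <linux/slab.h>
    #include <linux/percpu.h>
    #include <linux/cpumask.h>
    #include <linux/nodemask.h>
    #include <linux/topology.h>

    struct my_hub_info {
            unsigned long   gpa_mask;       /* fields common to the whole node */
            unsigned short  pnode;
    };

    static struct my_hub_info *my_node_info[MAX_NUMNODES];
    static DEFINE_PER_CPU(struct my_hub_info *, my_hub_info_ptr);

    static int __init my_alloc_node_hub_info(void)
    {
            int node, cpu;

            for_each_online_node(node) {
                    /* kzalloc_node() places the struct on the node itself */
                    my_node_info[node] = kzalloc_node(sizeof(*my_node_info[node]),
                                                      GFP_KERNEL, node);
                    if (!my_node_info[node])
                            return -ENOMEM;
            }

            /* every CPU of a node points at the node's single copy */
            for_each_possible_cpu(cpu)
                    per_cpu(my_hub_info_ptr, cpu) = my_node_info[cpu_to_node(cpu)];

            return 0;
    }

Because the data is read-mostly after setup, a single per-node copy keeps it hot in the local socket's cache when several CPUs of that node are interrupted for the same task.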
2016-05-04 | x86/platform/UV: Move blade local processor ID to the per cpu info struct | Mike Travis | 1 | -1/+1
Move references to blade local processor ID to the new per cpu info structs. Create an access function that makes this move, and other potential moves opaque to callers of this function. Define a flag that indicates to callers in external GPL modules that this function replaces any local definition. This allows calling source code to be built for both pre-UV4 kernels as well as post-UV4 kernels. Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Cc: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
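A hypothetical sketch of the accessor-plus-flag pattern this commit describes; the struct, field and macro names below are illustrative stand-ins, not the real UV identifiers.

    #include <linux/percpu.h>

    struct my_cpu_info {
            unsigned char   blade_cpu_id;   /* blade local processor id */
    };
    static DEFINE_PER_CPU(struct my_cpu_info, my_cpu_info);

    /* External modules test this define to know the accessor exists. */
    #define MY_HAVE_BLADE_PROCESSOR_ID_ACCESSOR 1

    static inline int my_blade_processor_id(int cpu)
    {
            return per_cpu(my_cpu_info, cpu).blade_cpu_id;
    }

An external GPL module can then wrap its use in #ifdef MY_HAVE_BLADE_PROCESSOR_ID_ACCESSOR and fall back to its old local definition otherwise, so the same source builds against both pre-UV4 and post-UV4 kernels.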
2016-05-04 | x86/platform/UV: Move scir info to the per cpu info struct | Mike Travis | 1 | -9/+9
Change the references to the SCIR fields to the new per cpu info structs. Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Cc: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-05-04 | x86/platform/UV: Create per cpu info structs to replace per hub info structs | Mike Travis | 1 | -5/+10
The major portion of the hub info is common to all cpus on that hub. This is step one of moving the per cpu hub info to a per node hub info struct. This patch creates the small per cpu info struct that will contain only information specific to each CPU. Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Cc: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-05-04 | x86/platform/UV: Update MMIOH setup function to work for both UV3 and UV4 | Mike Travis | 1 | -1/+2
Since UV3 and UV4 MMIOH regions are setup the same, we can use a common function to setup both. Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Cc: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-05-04 | x86/platform/UV: Clean up redundancies after merge of UV4 MMR definitions | Mike Travis | 1 | -1/+0
Clean up any redundancies caused by the new UV4 MMR definitions superseding any previous definitions local to functions. Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Reviewed-by: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-05-04 | x86/platform/UV: Prep for UV4 MMR updates | Mike Travis | 1 | -84/+124
Cleanup patch to rearrange code and modify some defines so that the next patch, which adds the new UV4 MMR definitions, can be merged cleanly.
* Clean up the M/N related address constants (M is the # of address bits per blade, N is the # of blade selection bits per SSI/partition).
* Fix the lookup of the alias overlay addresses and NMI definitions to allow for flexibility in newer UV architecture types.
Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Cc: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-05-04 | x86/platform/UV: Add UV MMR Illegal Access Function | Mike Travis | 1 | -0/+12
This new function is generated by the UV MMR generation script to identify MMR registers and fields that are not defined for a specific UV architecture. With this switch, the immediate panic can be replaced with a message and a bad return value allowing either hardware or the emulator to diagnose the problem. It allows functions common to some UV arches to use common defines that might not be fully defined for all arches, as long as they do not reference them on the unsupported arches. Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Cc: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
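A sketch of the idea: report an MMR that is undefined for the running UV architecture and hand back a recognizable bad value instead of panicking, so hardware or the emulator can diagnose the access. The function name, message and sentinel value below are assumptions for illustration.

    #include <linux/kernel.h>

    #define MY_UV_UNDEFINED_MMR     ~0UL    /* illustrative "bad" return value */

    static unsigned long my_uv_undefined(const char *str)
    {
            /* replaces an immediate panic: log and let the caller diagnose */
            pr_crit("UV: %s is undefined on this UV architecture\n", str);
            return MY_UV_UNDEFINED_MMR;
    }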
2016-05-04 | x86/platform/UV: Add UV4 Specific Defines | Mike Travis | 1 | -3/+9
Add UV4 specific defines to determine if current system type is a UV4 system. Tested-by: John Estabrook <[email protected]> Tested-by: Gary Kroening <[email protected]> Tested-by: Nathan Zimmer <[email protected]> Signed-off-by: Mike Travis <[email protected]> Reviewed-by: Dimitri Sivanich <[email protected]> Cc: Andrew Banman <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Russ Anderson <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-04-29 | Merge branch 'x86/urgent' into x86/asm, to refresh the tree | Ingo Molnar | 1 | -1/+2
Signed-off-by: Ingo Molnar <[email protected]>
2016-04-28 | x86/apic: Handle zero vector gracefully in clear_vector_irq() | Keith Busch | 1 | -1/+2
If x86_vector_alloc_irq() fails, x86_vector_free_irqs() is invoked to clean up the already allocated vectors. This subsequently calls clear_vector_irq(). The failed irq has no vector assigned, which triggers the BUG_ON(!vector) in clear_vector_irq(). We cannot suppress the call to x86_vector_free_irqs() for the failed interrupt, because the other data related to this irq must be cleaned up as well. So calling clear_vector_irq() with vector == 0 is legitimate. Remove the BUG_ON and return if vector is zero. [ tglx: Massaged changelog ] Fixes: b5dc8e6c21e7 "x86/irq: Use hierarchical irqdomain to manage CPU interrupt vectors" Signed-off-by: Keith Busch <[email protected]> Cc: [email protected] Signed-off-by: Thomas Gleixner <[email protected]>
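A simplified sketch of the change described above (not the exact kernel function): a zero vector just means the allocation failed before a vector was assigned, so there is nothing to clear.

    static void my_clear_irq_vector(int cpu, int vector)
    {
            if (!vector)            /* was: BUG_ON(!vector) */
                    return;

            /* ... clear the per-cpu vector_irq[] entry for 'vector' as before ... */
    }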
2016-04-13 | x86/cpufeature: Replace cpu_has_apic with boot_cpu_has() usage | Borislav Petkov | 5 | -15/+15
Signed-off-by: Borislav Petkov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-04-13 | x86/cpufeature: Replace cpu_has_tsc with boot_cpu_has() usage | Borislav Petkov | 1 | -5/+5
Signed-off-by: Borislav Petkov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: Dmitry Torokhov <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Thomas Sailer <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-31 | x86/cpufeature: Remove cpu_has_x2apic | Borislav Petkov | 1 | -1/+1
Signed-off-by: Borislav Petkov <[email protected]> Acked-by: Tony Luck <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-24 | Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 3 | -21/+74
Pull x86 fixes from Ingo Molnar:
 "Misc fixes:
   - fix hotplug bugs
   - fix irq live lock
   - fix various topology handling bugs
   - fix APIC ACK ordering
   - fix PV iopl handling
   - fix speling
   - fix/tweak memcpy_mcsafe() return value
   - fix fbcon bug
   - remove stray prototypes"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/msr: Remove unused native_read_tscp()
  x86/apic: Remove declaration of unused hw_nmi_is_cpu_stuck
  x86/oprofile/nmi: Add missing hotplug FROZEN handling
  x86/hpet: Use proper mask to modify hotplug action
  x86/apic/uv: Fix the hotplug notifier
  x86/apb/timer: Use proper mask to modify hotplug action
  x86/topology: Use total_cpus not nr_cpu_ids for logical packages
  x86/topology: Fix Intel HT disable
  x86/topology: Fix logical package mapping
  x86/irq: Cure live lock in fixup_irqs()
  x86/tsc: Prevent NULL pointer deref in calibrate_delay_is_known()
  x86/apic: Fix suspicious RCU usage in smp_trace_call_function_interrupt()
  x86/iopl: Fix iopl capability check on Xen PV
  x86/iopl/64: Properly context-switch IOPL on Xen PV
  selftests/x86: Add an iopl test
  x86/mm, x86/mce: Fix return type/value for memcpy_mcsafe()
  x86/video: Don't assume all FB devices are PCI devices
  arch/x86/irq: Purge useless handler declarations from hw_irq.h
  x86: Fix misspellings in comments
2016-03-22 | kernel: add kcov code coverage | Dmitry Vyukov | 1 | -0/+4
kcov provides code coverage collection for coverage-guided fuzzing (randomized testing). Coverage-guided fuzzing is a testing technique that uses coverage feedback to determine new interesting inputs to a system. A notable user-space example is AFL (http://lcamtuf.coredump.cx/afl/). However, this technique is not widely used for kernel testing due to missing compiler and kernel support. kcov does not aim to collect as much coverage as possible. It aims to collect more or less stable coverage that is a function of syscall inputs. To achieve this goal it does not collect coverage in soft/hard interrupts, and instrumentation of some inherently non-deterministic or non-interesting parts of the kernel is disabled (e.g. scheduler, locking). Currently there is a single coverage collection mode (tracing), but the API anticipates additional collection modes. Initially I also implemented a second mode which exposes coverage in a fixed-size hash table of counters (what Quentin used in his original patch). I've dropped the second mode for simplicity. This patch adds the necessary support on the kernel side. The complementary compiler support was added in gcc revision 231296. We've used this support to build the syzkaller system call fuzzer, which has found 90 kernel bugs in just 2 months: https://github.com/google/syzkaller/wiki/Found-Bugs We've also found 30+ bugs in our internal systems with syzkaller. Another (yet unexplored) direction where kcov coverage would greatly help is more traditional "blob mutation". For example, mounting a random blob as a filesystem, or receiving a random blob over the wire. Why not gcov? A typical fuzzing loop looks as follows: (1) reset coverage, (2) execute a bit of code, (3) collect coverage, repeat. A typical coverage can be just a dozen basic blocks (e.g. an invalid input). In such a context gcov becomes prohibitively expensive, as the reset/collect coverage steps depend on the total number of basic blocks/edges in the program (in the case of the kernel it is about 2M). The cost of kcov depends only on the number of executed basic blocks/edges. On top of that, the kernel requires per-thread coverage because there are always background threads and unrelated processes that also produce coverage. With inlined gcov instrumentation per-thread coverage is not possible. kcov exposes kernel PCs and control flow to user-space, which is insecure. But debugfs should not be mapped as user accessible. Based on a patch by Quentin Casasnovas. [[email protected]: make task_struct.kcov_mode have type `enum kcov_mode'] [[email protected]: unbreak allmodconfig] [[email protected]: follow x86 Makefile layout standards] Signed-off-by: Dmitry Vyukov <[email protected]> Reviewed-by: Kees Cook <[email protected]> Cc: syzkaller <[email protected]> Cc: Vegard Nossum <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Tavis Ormandy <[email protected]> Cc: Will Deacon <[email protected]> Cc: Quentin Casasnovas <[email protected]> Cc: Kostya Serebryany <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Kees Cook <[email protected]> Cc: Bjorn Helgaas <[email protected]> Cc: Sasha Levin <[email protected]> Cc: David Drysdale <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Kirill A. Shutemov <[email protected]> Cc: Jiri Slaby <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-19 | x86/apic/uv: Fix the hotplug notifier | Thomas Gleixner | 1 | -1/+2
The notifier is missing the CPU_DOWN_FAILED transition. That leaves the heartbeat disabled when CPU_DOWN_PREPARE fails. It also does not handle the FROZEN transition variants. That might not be an issue for UV, but it's inconsistent. Signed-off-by: Thomas Gleixner <[email protected]> Cc: Dimitri Sivanich <[email protected]>
2016-03-18 | x86/irq: Cure live lock in fixup_irqs() | Thomas Gleixner | 1 | -18/+70
Harry reported that he's able to trigger a system freeze with cpu hot unplug. The freeze turned out to be a live lock caused by recent changes in irq_force_complete_move(). When fixup_irqs() and from there irq_force_complete_move() is called on the dying cpu, then all other cpus are in stop machine and wait for the dying cpu to complete the teardown. If there is a move of an interrupt pending, then irq_force_complete_move() sends the cleanup IPI to the cpus in the old_domain mask and waits for them to clear the mask. That's obviously impossible as those cpus are firmly stuck in stop machine with interrupts disabled. I should have known that, but I completely overlooked it, being concentrated on the locking issues around the vectors. And the existence of the call to __irq_complete_move() in the code, which actually sends the cleanup IPI, made it reasonable to wait for that cleanup to complete. That call was bogus even before the recent changes as it was just a pointless distraction. We have to look at two cases:
1) The move_in_progress flag of the interrupt is set. This means the ioapic has been updated with the new vector, but it has not fired yet. In theory there is a race: set_ioapic(new_vector) <-- Interrupt is raised before update is effective, i.e. it's raised on the old vector. So if the target cpu cannot handle that interrupt before the old vector is cleaned up, we get a spurious interrupt and in the worst case the ioapic irq line becomes stale, but my experiments so far have only resulted in spurious interrupts. But in case of cpu hotplug this should be a non-issue because if the affinity update happens right before all cpus rendezvous in stop machine, there is no way that the interrupt can be blocked on the target cpu because all cpus loop first with interrupts enabled in stop machine, so the old vector is not yet cleaned up when the interrupt fires. So the only way to run into this issue is if the delivery of the interrupt on the apic/system bus would be delayed beyond the point where the target cpu disables interrupts in stop machine. I doubt that it can happen, but at least there is a theoretical chance. Virtualization might be able to expose this, but AFAICT the IOAPIC emulation is not as stupid as the real hardware. I've spent quite some time over the weekend trying to enforce that situation, though I was not able to trigger the delayed case.
2) The move_in_progress flag is not set and the old_domain cpu mask is not empty. That means that an interrupt was delivered after the change and the cleanup IPI has been sent to the cpus in old_domain, but not all CPUs have responded to it yet.
In both cases we can assume that the next interrupt will arrive on the new vector, so we can clean up the old vectors on the cpus in the old_domain cpu mask. Fixes: 98229aa36caa "x86/irq: Plug vector cleanup race" Reported-by: Harry Junior <[email protected]> Tested-by: Tony Luck <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Joe Lawrence <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Ben Hutchings <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1603140931430.3657@nanos Signed-off-by: Thomas Gleixner <[email protected]>
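A condensed sketch of the resulting logic (simplified; the struct, lock and helper names mirror the description above rather than the exact kernel code): on CPU unplug every other CPU sits in stop machine with interrupts disabled, so waiting for a cleanup IPI can only live lock, and the old vectors are released directly under the vector lock instead.

    #include <linux/cpumask.h>
    #include <linux/spinlock.h>

    struct my_apic_chip_data {
            struct cpumask  old_domain;     /* cpus still holding the old vector */
            int             move_in_progress;
    };

    static DEFINE_RAW_SPINLOCK(my_vector_lock);

    static void my_release_old_vectors(struct my_apic_chip_data *data)
    {
            /* hypothetical helper: return the old per-cpu vector entries */
    }

    static void my_irq_force_complete_move(struct my_apic_chip_data *data)
    {
            raw_spin_lock(&my_vector_lock);

            if (!data->move_in_progress && cpumask_empty(&data->old_domain))
                    goto unlock;                    /* nothing pending */

            /*
             * Case 1: move_in_progress set - the new vector is programmed
             * but has not fired yet.  Case 2: the cleanup IPI went out but
             * not all CPUs in old_domain have handled it.  Either way the
             * next interrupt arrives on the new vector, so the old entries
             * can be released right here instead of waiting for an IPI.
             */
            my_release_old_vectors(data);
            data->move_in_progress = 0;
            cpumask_clear(&data->old_domain);
    unlock:
            raw_spin_unlock(&my_vector_lock);
    }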
2016-03-17 | Merge branch 'x86/cleanups' into x86/urgent | Ingo Molnar | 2 | -2/+2
Pull in some merge window leftovers. Signed-off-by: Ingo Molnar <[email protected]>
2016-03-15 | Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2 | -1/+61
Pull irq updates from Thomas Gleixner:
 "The 4.6 pile of irq updates contains:
   - Support for IPI irqdomains to support proper integration of IPIs to and from coprocessors. The first user of this new facility is MIPS. The relevant MIPS patches come with the core to avoid merge ordering issues and have been acked by Ralf.
   - A new command line option to set the default interrupt affinity mask at boot time.
   - Support for some more new ARM and MIPS interrupt controllers: tango, alpine-msix and bcm6345-l1
   - Two small cleanups for x86/apic which we merged into irq/core to avoid yet another branch in x86 with two tiny commits.
   - The usual set of updates, cleanups in drivers/irqchip. Mostly in the area of ARM-GIC, arada-37-xp and atmel chips. Nothing outstanding here"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (56 commits)
  irqchip/irq-alpine-msi: Release the correct domain on error
  irqchip/mxs: Fix error check of of_io_request_and_map()
  irqchip/sunxi-nmi: Fix error check of of_io_request_and_map()
  genirq: Export IRQ functions for module use
  irqchip/gic/realview: Support more RealView DCC variants
  Documentation/bindings: Document the Alpine MSIX driver
  irqchip: Add the Alpine MSIX interrupt controller
  irqchip/gic-v3: Always return IRQ_SET_MASK_OK_DONE in gic_set_affinity
  irqchip/gic-v3-its: Mark its_init() and its children as __init
  irqchip/gic-v3: Remove gic_root_node variable from the ITS code
  irqchip/gic-v3: ACPI: Add redistributor support via GICC structures
  irqchip/gic-v3: Add ACPI support for GICv3/4 initialization
  irqchip/gic-v3: Refactor gic_of_init() for GICv3 driver
  x86/apic: Deinline _flat_send_IPI_mask, save ~150 bytes
  x86/apic: Deinline __default_send_IPI_*, save ~200 bytes
  dt-bindings: interrupt-controller: Add SoC-specific compatible string to Marvell ODMI
  irqchip/mips-gic: Add new DT property to reserve IPIs
  MIPS: Delete smp-gic.c
  MIPS: Make smp CMP, CPS and MT use the new generic IPI functions
  MIPS: Add generic SMP IPI support
  ...
2016-03-15 | Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 | -2/+2
Pull x86 asm updates from Ingo Molnar:
 "This is another big update. Main changes are:
   - lots of x86 system call (and other traps/exceptions) entry code enhancements. In particular the complex parts of the 64-bit entry code have been migrated to C code as well, and a number of dusty corners have been refreshed. (Andy Lutomirski)
   - vDSO special mapping robustification and general cleanups (Andy Lutomirski)
   - cpufeature refactoring, cleanups and speedups (Borislav Petkov)
   - lots of other changes ..."

* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (64 commits)
  x86/cpufeature: Enable new AVX-512 features
  x86/entry/traps: Show unhandled signal for i386 in do_trap()
  x86/entry: Call enter_from_user_mode() with IRQs off
  x86/entry/32: Change INT80 to be an interrupt gate
  x86/entry: Improve system call entry comments
  x86/entry: Remove TIF_SINGLESTEP entry work
  x86/entry/32: Add and check a stack canary for the SYSENTER stack
  x86/entry/32: Simplify and fix up the SYSENTER stack #DB/NMI fixup
  x86/entry: Only allocate space for tss_struct::SYSENTER_stack if needed
  x86/entry: Vastly simplify SYSENTER TF (single-step) handling
  x86/entry/traps: Clear DR6 early in do_debug() and improve the comment
  x86/entry/traps: Clear TIF_BLOCKSTEP on all debug exceptions
  x86/entry/32: Restore FLAGS on SYSEXIT
  x86/entry/32: Filter NT and speed up AC filtering in SYSENTER
  x86/entry/compat: In SYSENTER, sink AC clearing below the existing FLAGS test
  selftests/x86: In syscall_nt, test NT|TF as well
  x86/asm-offsets: Remove PARAVIRT_enabled
  x86/entry/32: Introduce and use X86_BUG_ESPFIX instead of paravirt_enabled
  uprobes: __create_xol_area() must nullify xol_mapping.fault
  x86/cpufeature: Create a new synthetic cpu capability for machine check recovery
  ...
2016-03-08 | x86/apic: Deinline _flat_send_IPI_mask, save ~150 bytes | Denys Vlasenko | 1 | -1/+1
_flat_send_IPI_mask: 157 bytes, 3 callsites

      text     data      bss       dec     hex filename
  96183823 20860520 36122624 153166967 9212477 vmlinux1_before
  96183699 20860520 36122624 153166843 92123fb vmlinux

Signed-off-by: Denys Vlasenko <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Daniel J Blueman <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Travis <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08 | x86/apic: Deinline __default_send_IPI_*, save ~200 bytes | Denys Vlasenko | 1 | -0/+60
__default_send_IPI_shortcut: 49 bytes, 2 callsites
__default_send_IPI_dest_field: 108 bytes, 7 callsites

      text     data      bss       dec     hex filename
  96184086 20860488 36122624 153167198 921255e vmlinux_before
  96183823 20860520 36122624 153166967 9212477 vmlinux

Signed-off-by: Denys Vlasenko <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Daniel J Blueman <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Travis <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-02-29 | x86/topology: Create logical package id | Thomas Gleixner | 1 | -0/+14
For per package oriented services we must be able to rely on the number of CPU packages to be within bounds. Create a tracking facility, which
- calculates the number of possible packages depending on nr_cpu_ids after boot
- makes sure that the package id is within the number of possible packages. If the apic id is outside, we map it to a logical package id if there is enough space available.
Provide interfaces for drivers to query the mapping and do translations from physical to logical ids. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Harish Chegondi <[email protected]> Cc: Jacob Pan <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Luis R. Rodriguez <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Toshi Kani <[email protected]> Cc: Vince Weaver <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
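A toy sketch of the physical-to-logical package mapping idea. The real implementation lives in smpboot.c and sizes its tables from the possible CPU count and handles more corner cases; the array size, names and sentinel below are assumptions.

    #include <linux/bitmap.h>
    #include <linux/bitops.h>
    #include <linux/errno.h>
    #include <linux/init.h>

    #define MY_MAX_PKGS     64

    static int my_phys_to_logical[MY_MAX_PKGS];     /* -1 = not mapped yet */
    static DECLARE_BITMAP(my_logical_used, MY_MAX_PKGS);

    static void __init my_logical_pkg_init(void)
    {
            int i;

            for (i = 0; i < MY_MAX_PKGS; i++)
                    my_phys_to_logical[i] = -1;
    }

    static int my_assign_logical_pkg(unsigned int phys_pkg)
    {
            int logical;

            if (phys_pkg >= MY_MAX_PKGS)
                    return -ENOSPC;         /* physical id out of tracked range */

            if (my_phys_to_logical[phys_pkg] >= 0)
                    return my_phys_to_logical[phys_pkg];    /* already mapped */

            logical = find_first_zero_bit(my_logical_used, MY_MAX_PKGS);
            if (logical >= MY_MAX_PKGS)
                    return -ENOSPC;         /* more packages than budgeted for */

            __set_bit(logical, my_logical_used);
            my_phys_to_logical[phys_pkg] = logical;
            return logical;
    }

Per-package consumers then index by the dense logical id, which is guaranteed to stay within the possible-package bound regardless of how sparse the physical/apic ids are.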
2016-02-24 | x86: Fix misspellings in comments | Adam Buchbinder | 2 | -2/+2
Signed-off-by: Adam Buchbinder <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-02-18 | Merge branch 'x86/urgent' into x86/asm, to pick up fixes | Ingo Molnar | 3 | -78/+154
Signed-off-by: Ingo Molnar <[email protected]>
2016-01-30 | x86/cpufeature: Replace the old static_cpu_has() with safe variant | Borislav Petkov | 1 | -2/+2
So the old one didn't work properly before alternatives had run. And it was supposed to provide an optimized JMP because the assumption was that the offset it is jumping to is within a signed byte and thus a two-byte JMP. So I did an x86_64 allyesconfig build and dumped all possible sites where static_cpu_has() was used. The optimization amounted to all in all 12(!) places where static_cpu_has() had generated a 2-byte JMP. Which has saved us a whopping 36 bytes! This clearly is not worth the trouble so we can remove it. The only place where the optimization might count - in __switch_to() - we will handle differently. But that's not subject of this patch. Signed-off-by: Borislav Petkov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-01-19 | x86/platform/UV: Remove EFI memmap quirk for UV2+ | Alex Thorlton | 1 | -1/+4
Commit a5d90c923bcf ("x86/efi: Quirk out SGI UV") added a quirk to efi_apply_memmap_quirks to force SGI UV systems to fall back to the old EFI memmap mechanism. We have a BIOS fix for this issue on all systems except for UV1. This commit fixes up the EFI quirk/MMR mapping code so that we only apply the special case to UV1 hardware. Signed-off-by: Alex Thorlton <[email protected]> Reviewed-by: Matt Fleming <[email protected]> Cc: Dimitri Sivanich <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Hedi Berriche <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Travis <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-01-15 | x86/irq: Plug vector cleanup race | Thomas Gleixner | 1 | -10/+53
We still can end up with a stale vector due to the following: CPU0 CPU1 CPU2 lock_vector() data->move_in_progress=0 sendIPI() unlock_vector() set_affinity() assign_irq_vector() lock_vector() handle_IPI move_in_progress = 1 lock_vector() unlock_vector() move_in_progress == 1 So we need to serialize the vector assignment against a pending cleanup. The solution is rather simple now. We not only check for the move_in_progress flag in assign_irq_vector(), we also check whether there is still a cleanup pending in the old_domain cpumask. If so, we return -EBUSY to the caller and let him deal with it. Though we have to be careful in the cpu unplug case. If the cleanout has not yet completed then the following setaffinity() call would return -EBUSY. Add code which prevents this. Full context is here: http://lkml.kernel.org/r/[email protected] Reported-and-tested-by: Joe Lawrence <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Call irq_force_move_complete with irq descriptor | Thomas Gleixner | 1 | -4/+7
First of all, there is no point in looking up the irq descriptor again, but we also need the descriptor for the final cleanup race fix in the next patch. Make that change separate. No functional difference. Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Remove outgoing CPU from vector cleanup mask | Thomas Gleixner | 1 | -2/+16
We want to synchronize new vector assignments with a pending cleanup. Remove a dying cpu from a pending cleanup mask. Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Remove the cpumask allocation from send_cleanup_vector() | Thomas Gleixner | 1 | -13/+3
There is no need to allocate a new cpumask for sending the cleanup vector. The old_domain mask is now protected by the vector_lock, so we can safely remove the offline cpus from it and send the IPI with the resulting mask. Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Clear move_in_progress before sending cleanup IPI | Thomas Gleixner | 1 | -1/+3
send_cleanup_vector() fiddles with the old_domain mask unprotected because it relies on the protection by the move_in_progress flag. But this is fatal, as the flag is reset after the IPI has been sent. So a cpu which receives the IPI can still see the flag set and therefore ignores the cleanup request. If no other cleanup request happens then the vector stays stale on that cpu and in case of an irq removal the vector still persists. That can lead to a use-after-free when the next cleanup IPI happens. Protect the code with vector_lock and clear move_in_progress before sending the IPI. This does not plug the race which Joe reported because: CPU0 CPU1 CPU2 lock_vector() data->move_in_progress=0 sendIPI() unlock_vector() set_affinity() assign_irq_vector() lock_vector() handle_IPI move_in_progress = 1 lock_vector() unlock_vector() move_in_progress == 1 The full fix comes with a later patch. Reported-and-tested-by: Joe Lawrence <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
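An ordering sketch for the fix above, combined with the cpumask handling described in the neighbouring entries (simplified; the struct, lock and IPI helper are stand-ins, not the exact kernel code): the old_domain bookkeeping and the clearing of move_in_progress happen under the vector lock, and the flag is cleared before the cleanup IPI goes out, so no CPU that receives the IPI can still observe the stale flag.

    #include <linux/cpumask.h>
    #include <linux/spinlock.h>

    struct my_apic_chip_data {
            struct cpumask  old_domain;
            int             move_in_progress;
    };

    static DEFINE_RAW_SPINLOCK(my_vector_lock);

    static void my_send_cleanup_ipi(const struct cpumask *mask)
    {
            /* hypothetical: kick the cleanup vector on the cpus in 'mask' */
    }

    static void my_send_cleanup_vector(struct my_apic_chip_data *data)
    {
            raw_spin_lock(&my_vector_lock);
            /* drop offline cpus; no separate temporary mask is needed */
            cpumask_and(&data->old_domain, &data->old_domain, cpu_online_mask);
            data->move_in_progress = 0;     /* cleared before the IPI is sent */
            if (!cpumask_empty(&data->old_domain))
                    my_send_cleanup_ipi(&data->old_domain);
            raw_spin_unlock(&my_vector_lock);
    }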
2016-01-15 | x86/irq: Remove offline cpus from vector cleanup | Thomas Gleixner | 1 | -2/+6
No point of keeping offline cpus in the cleanup mask. Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Get rid of code duplication | Thomas Gleixner | 1 | -18/+15
Reusing an existing vector and assigning a new vector has duplicated code. Consolidate it. This is also a preparatory patch for finally plugging the cleanup race. Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Copy vectormask instead of an AND operation | Thomas Gleixner | 1 | -1/+1
In the case that the new vector mask is a subset of the existing mask, there is no point in doing an AND operation of currentmask & newmask. The result is newmask. So we can simply copy the new mask to the current mask and be done with it. Preparatory patch for further consolidation. Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Check vector allocation early | Thomas Gleixner | 1 | -13/+25
__assign_irq_vector() uses the vector_cpumask which is assigned by apic->vector_allocation_domain() without doing basic sanity checks. That can result in a situation where the final assignment of a newly found vector fails in apic->cpu_mask_to_apicid_and(). So we have to do rollbacks for no reason. apic->cpu_mask_to_apicid_and() only fails if vector_cpumask & requested_cpumask & cpu_online_mask is empty. Check for this condition right away and, if the result is empty, try immediately the next possible cpu in the requested mask. So in case of a failure the old setting is unchanged and we can remove the rollback code. Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
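A sketch of the early viability check, stated as a small helper (the function name is made up; only the cpumask iterators are real API): if the apic-provided vector_cpumask cannot yield an online target for the requested mask, the final cpu_mask_to_apicid_and() step would fail anyway, so the search loop can simply move on to the next candidate cpu with nothing to roll back.

    #include <linux/cpumask.h>
    #include <linux/types.h>

    static bool my_vector_mask_viable(const struct cpumask *vector_cpumask,
                                      const struct cpumask *requested)
    {
            unsigned int cpu;

            /* is vector_cpumask & requested & cpu_online_mask non-empty? */
            for_each_cpu_and(cpu, vector_cpumask, requested)
                    if (cpu_online(cpu))
                            return true;
            return false;
    }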
2016-01-15 | x86/irq: Reorganize the search in assign_irq_vector | Thomas Gleixner | 1 | -8/+16
Split out the code which advances the target cpu for the search so we can reuse it for the next patch which adds an early validation check for the vectormask which we get from the apic. Add comments while at it. Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Reorganize the return path in assign_irq_vector | Thomas Gleixner | 1 | -14/+8
Use an explicit goto for the cases where we have success in the search/update and return -ENOSPC if the search loop ends due to no space. Preparatory patch for fixes. No functional change. Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Do not use apic_chip_data.old_domain as temporary buffer | Jiang Liu | 1 | -3/+5
Function __assign_irq_vector() makes use of apic_chip_data.old_domain as a temporary buffer, which is in the way of using apic_chip_data.old_domain for synchronizing the vector cleanup with the vector assignment code. Use a proper temporary cpumask for this. [ tglx: Renamed the mask to searched_cpumask for clarity ] Signed-off-by: Jiang Liu <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Fix a race in x86_vector_free_irqs() | Jiang Liu | 1 | -8/+8
There's a race condition between

    x86_vector_free_irqs()
    {
        free_apic_chip_data(irq_data->chip_data);
        xxxxx   // irq_data->chip_data has been freed, but the pointer
                // hasn't been reset yet
        irq_domain_reset_irq_data(irq_data);
    }

and

    smp_irq_move_cleanup_interrupt()
    {
        raw_spin_lock(&vector_lock);
        data = apic_chip_data(irq_desc_get_irq_data(desc));
        access data->xxxx   // may access freed memory
        raw_spin_unlock(&desc->lock);
    }

which may cause smp_irq_move_cleanup_interrupt() to access freed memory. Call irq_domain_reset_irq_data(), which clears the pointer with vector lock held. [ tglx: Free memory outside of lock held region. ] Signed-off-by: Jiang Liu <[email protected]> Tested-by: Borislav Petkov <[email protected]> Tested-by: Joe Lawrence <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: [email protected] #4.3+ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-15 | x86/irq: Call chip->irq_set_affinity in proper context | Thomas Gleixner | 1 | -1/+5
setup_ioapic_dest() calls irqchip->irq_set_affinity() completely unprotected. That's wrong in several aspects:
- it opens a race window where irq_set_affinity() can be interrupted and the irq chip left in an inconsistent state.
- it triggers a lockdep splat when we fix the vector race for 4.3+ because vector lock is taken with interrupts enabled.
The proper calling convention is irq descriptor lock held and interrupts disabled. Reported-and-tested-by: Borislav Petkov <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Jeremiah Mahler <[email protected]> Cc: [email protected] Cc: Guenter Roeck <[email protected]> Cc: Joe Lawrence <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1601140919420.3575@nanos Signed-off-by: Thomas Gleixner <[email protected]>
2016-01-11 | Merge branch 'x86-apic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 12 | -27/+99
Pull x86 apic updates from Ingo Molnar:
 "The main changes in this cycle were:
   - introduce optimized single IPI sending methods on modern APICs (Linus Torvalds, Thomas Gleixner)
   - kexec/crash APIC handling fixes and enhancements (Hidehiro Kawai)
   - extend lapic vector saving/restoring to the CMCI (MCE) vector as well (Juergen Gross)
   - various fixes and enhancements (Jake Oshins, Len Brown)"

* 'x86-apic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
  x86/irq: Export functions to allow MSI domains in modules
  Documentation: Document kernel.panic_on_io_nmi sysctl
  x86/nmi: Save regs in crash dump on external NMI
  x86/apic: Introduce apic_extnmi command line parameter
  kexec: Fix race between panic() and crash_kexec()
  panic, x86: Allow CPUs to save registers even if looping in NMI context
  panic, x86: Fix re-entrance problem due to panic on NMI
  x86/apic: Fix the saving and restoring of lapic vectors during suspend/resume
  x86/smpboot: Re-enable init_udelay=0 by default on modern CPUs
  x86/smp: Remove single IPI wrapper
  x86/apic: Use default send single IPI wrapper
  x86/apic: Provide default send single IPI wrapper
  x86/apic: Implement single IPI for apic_noop
  x86/apic: Wire up single IPI for apic_numachip
  x86/apic: Wire up single IPI for x2apic_uv
  x86/apic: Implement single IPI for x2apic_phys
  x86/apic: Wire up single IPI for bigsmp_apic
  x86/apic: Remove pointless indirections from bigsmp_apic
  x86/apic: Wire up single IPI for apic_physflat
  x86/apic: Remove pointless indirections from apic_physflat
  ...
2015-12-30 | x86/numachip: Fix NumaConnect2 MMCFG PCI access | Daniel J Blueman | 1 | -4/+1
The MMCFG PCI accessors weren't being set up for NumaConnect2 correctly due to over-early assignment; this would create the potential for the wrong PCI domain to be accessed. Fix this by using the correct arch-specific PCI init function. Signed-off-by: Daniel J Blueman <[email protected]> Acked-by: Steffen Persvold <[email protected]> Cc: Daniel Lezcano <[email protected]> Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2015-12-20 | x86/irq: Export functions to allow MSI domains in modules | Jake Oshins | 2 | -3/+7
The Linux kernel already has the concept of IRQ domain, wherein a component can expose a set of IRQs which are managed by a particular interrupt controller chip or other subsystem. The PCI driver exposes the notion of an IRQ domain for Message-Signaled Interrupts (MSI) from PCI Express devices. This patch exposes the functions which are necessary for creating a MSI IRQ domain within a module. [ tglx: Split it into x86 and core irq parts ] Signed-off-by: Jake Oshins <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2015-12-19 | x86/apic: Introduce apic_extnmi command line parameter | Hidehiro Kawai | 1 | -2/+32
This patch introduces a command line parameter apic_extnmi: apic_extnmi=( bsp|all|none ) The default value is "bsp" and this is the current behavior: only the Boot-Strapping Processor receives an external NMI. "all" allows external NMIs to be broadcast to all CPUs. This would raise the success rate of panic on NMI when BSP hangs in NMI context or the external NMI is swallowed by other NMI handlers on the BSP. If you specify "none", no CPUs receive external NMIs. This is useful for the dump capture kernel so that it cannot be shot down by accidentally pressing the external NMI button (on platforms which have it) while saving a crash dump. Signed-off-by: Hidehiro Kawai <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Bandan Das <[email protected]> Cc: Baoquan He <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Joerg Roedel <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: [email protected] Cc: [email protected] Cc: "Maciej W. Rozycki" <[email protected]> Cc: Masami Hiramatsu <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ricardo Ribalda Delgado <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Viresh Kumar <[email protected]> Cc: Vivek Goyal <[email protected]> Cc: x86-ml <[email protected]> Link: http://lkml.kernel.org/r/20151210014632.25437.43778.stgit@softrs Signed-off-by: Borislav Petkov <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]>
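A sketch of how a three-way switch like apic_extnmi=(bsp|all|none) is typically parsed with early_param(); the enum values and variable names below are illustrative stand-ins rather than the real ones.

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    enum { MY_EXTNMI_BSP, MY_EXTNMI_ALL, MY_EXTNMI_NONE };
    static int my_apic_extnmi = MY_EXTNMI_BSP;      /* default: BSP only */

    static int __init setup_my_apic_extnmi(char *arg)
    {
            if (!arg)
                    return -EINVAL;

            if (!strncmp("all", arg, 3))
                    my_apic_extnmi = MY_EXTNMI_ALL;         /* broadcast to every CPU */
            else if (!strncmp("none", arg, 4))
                    my_apic_extnmi = MY_EXTNMI_NONE;        /* e.g. dump capture kernel */
            else if (!strncmp("bsp", arg, 3))
                    my_apic_extnmi = MY_EXTNMI_BSP;
            else
                    return -EINVAL;

            return 0;
    }
    early_param("apic_extnmi", setup_my_apic_extnmi);

The parsed value would then steer how the LVT LINT1/external-NMI delivery mode is programmed during APIC setup.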
2015-12-19 | Merge branch 'linus' into x86/apic | Thomas Gleixner | 1 | -1/+5
Pull in update changes so we can apply conflicting patches
2015-11-24 | x86/apic: Fix the saving and restoring of lapic vectors during suspend/resume | Juergen Gross | 1 | -1/+10
Saving and restoring lapic vectors in lapic_suspend() and lapic_resume() is not consistent: the thmr vector saving is guarded by a different config option than the restore part. The cmci vector isn't handled at all. Those inconsistencies are not very critical, as the missing cmci vector will be set via mce resume handling, the wrong config option used for restoring the thmr vector can't be configured differently than the one which should be used. Nevertheless correct the thmr vector restore and add cmci vector handling. Signed-off-by: Juergen Gross <[email protected]> Acked-by: Borislav Petkov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] [ Minor code edits. ] Signed-off-by: Ingo Molnar <[email protected]>
2015-11-07 | x86/irq: Probe for PIC presence before allocating descs for legacy IRQs | Vitaly Kuznetsov | 1 | -1/+5
Commit d32932d02e18 ("x86/irq: Convert IOAPIC to use hierarchical irqdomain interfaces") brought a regression for Hyper-V Gen2 instances. These instances don't have i8259 legacy PIC but they use legacy IRQs for serial port, rtc, and acpi. With this commit included we end up with these IRQs not initialized. Earlier, there was a special workaround for legacy IRQs in mp_map_pin_to_irq() doing mp_irqdomain_map() without looking at nr_legacy_irqs() and now we fail in __irq_domain_alloc_irqs() when irq_domain_alloc_descs() returns -EEXIST. The essence of the issue seems to be that early_irq_init() calls arch_probe_nr_irqs() to figure out the number of legacy IRQs before we probe for i8259 and gets 16. Later when init_8259A() is called we switch to NULL legacy PIC and nr_legacy_irqs() starts to return 0 but we already have 16 descs allocated. Solve the issue by separating i8259 probe from init and calling it in arch_probe_nr_irqs() before we actually use nr_legacy_irqs() information. Fixes: d32932d02e18 ("x86/irq: Convert IOAPIC to use hierarchical irqdomain interfaces") Signed-off-by: Vitaly Kuznetsov <[email protected]> Cc: Jiang Liu <[email protected]> Cc: K. Y. Srinivasan <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
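A sketch of the probe-before-count idea (simplified; the struct and globals are stand-ins, not the real legacy_pic interface): give the legacy PIC a probe() hook and run it from arch_probe_nr_irqs() so that the legacy IRQ count is already correct when the preallocated interrupt descriptors are sized, even on machines without an i8259 such as Hyper-V Gen2.

    #include <linux/init.h>

    struct my_legacy_pic {
            int     nr_legacy_irqs;
            void    (*probe)(void); /* detect the i8259; switch to the null PIC if absent */
    };

    static struct my_legacy_pic my_null_pic = { .nr_legacy_irqs = 0 };
    static struct my_legacy_pic *my_legacy_pic = &my_null_pic;

    int __init my_arch_probe_nr_irqs(void)
    {
            if (my_legacy_pic->probe)
                    my_legacy_pic->probe();         /* may drop nr_legacy_irqs to 0 */

            /* only now is the legacy count trustworthy */
            return my_legacy_pic->nr_legacy_irqs;   /* plus the GSI/MSI estimate in reality */
    }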