path: root/arch/arm64
Age | Commit message | Author | Files | Lines
2014-10-03 | arm64/efi: uefi_init error handling fix | Dave Young | 1 | -3/+4
There is an early memmap leak in the uefi_init() error path; fix it and slightly tune the error handling code. Signed-off-by: Dave Young <dyoung@redhat.com> Acked-by: Mark Salter <msalter@redhat.com> Reported-by: Will Deacon <will.deacon@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
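A minimal sketch of the goto-out cleanup pattern such a fix typically takes (the function body and identifiers are illustrative, not the actual EFI init code):

    static int __init uefi_init_sketch(u64 efi_system_table)
    {
            efi_system_table_t *systab;
            int retval = 0;

            systab = early_memremap(efi_system_table,
                                    sizeof(efi_system_table_t));
            if (!systab)
                    return -ENOMEM;

            if (systab->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE) {
                    retval = -EINVAL;       /* error path must not skip the unmap below */
                    goto out;
            }
            /* ... walk vendor string and config tables ... */
    out:
            early_memunmap(systab, sizeof(efi_system_table_t));
            return retval;
    }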
2014-10-03 | arm64: Remove unneeded extern keyword | Geoff Levand | 1 | -2/+2
Function prototypes are never definitions, so remove any 'extern' keyword from the function prototypes in cpu_ops.h. Fixes warnings emitted by checkpatch. Signed-off-by: Geoff Levand <geoff@infradead.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-03 | ARM64: make of_device_ids const | Uwe Kleine-König | 1 | -1/+1
of_device_ids (i.e. compatible strings and the respective data) are not supposed to change at runtime. All functions working with of_device_ids provided by <linux/of.h> work with const of_device_ids. So mark the only non-const struct in arch/arm64 as const, too. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-03 | arm{,64}/xen: Remove "EXPERIMENTAL" in the description of the Xen options | Julien Grall | 1 | -1/+1
The Xen ARM API has been stable since Xen 4.4 and everything has been upstreamed in Linux for ARM and ARM64. Therefore we can drop "EXPERIMENTAL" from the Xen option in both Kconfig files. Signed-off-by: Julien Grall <julien.grall@linaro.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org
2014-10-03 | locking,arch: Use ACCESS_ONCE() instead of cast to volatile in atomic_read() | Pranith Kumar | 1 | -2/+2
Use the much more reader friendly ACCESS_ONCE() instead of the cast to volatile. This is purely a stylistic change. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no> Acked-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-arch@vger.kernel.org Link: http://lkml.kernel.org/r/1411482607-20948-1-git-send-email-bobby.prani@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
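The change is essentially the stylistic one-liner below (sketched; the exact arm64 atomic.h hunk may differ slightly). Both forms force a single, uncached load of the counter:

    /* before: open-coded cast to volatile */
    #define atomic_read(v)  (*(volatile int *)&(v)->counter)
    /* after: same semantics, more readable */
    #define atomic_read(v)  ACCESS_ONCE((v)->counter)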
2014-10-02 | Merge branches 'fiq' (early part), 'fixes', 'l2c' (early part) and 'misc' into for-next | Russell King | 2 | -4/+9
2014-10-02 | ARM: 8167/1: extend the reserved memory for initrd to be page aligned | Yalin Wang | 1 | -1/+7
This patch extends the reserved initrd region so that its start and end addresses are page aligned, which lets us free all of its memory, including the partially used head and tail pages. If the start or end address of the initrd is not page aligned, that page cannot be freed by free_initrd_mem(). Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
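A minimal sketch of the alignment described, assuming the usual phys_initrd_start/phys_initrd_size bookkeeping (not the literal arm_memblock_init() diff):

    /* Reserve outward to page boundaries so free_initrd_mem() can later
     * hand the partially used head/tail pages back to the page allocator. */
    phys_addr_t aligned_start = round_down(phys_initrd_start, PAGE_SIZE);
    phys_addr_t aligned_end = round_up(phys_initrd_start + phys_initrd_size,
                                       PAGE_SIZE);

    memblock_reserve(aligned_start, aligned_end - aligned_start);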
2014-10-02 | ARM: 8168/1: extend __init_end to a page align address | Yalin Wang | 1 | -1/+1
This patch changes the __init_end address to a page-aligned address, so that free_initmem() can free the whole .init section: if the end address is not page aligned, it is rounded down to a page boundary and the unaligned tail page is never freed. Signed-off-by: wang <yalin.wang2010@gmail.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-10-02 | arm64: Use phys_addr_t type for physical address | Min-Hua Chen | 2 | -2/+2
Change the type of the physical address from unsigned long to phys_addr_t, making valid_phys_addr_range() more readable. Signed-off-by: Min-Hua Chen <orca.chen@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-02 | arm64, defconfig: Enable Cavium Thunder SoC in defconfig | Robert Richter | 1 | -0/+1
This patch enables Thunder SoCs in the arm64 defconfig. This is esp. useful to add Thunder platforms to automated builds based on arm64 defconfig. Signed-off-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2014-10-02 | arm64, thunder: Add Kconfig option for Cavium Thunder SoC Family | Radha Mohan Chintakuntla | 2 | -2/+6
This introduces ARCH_THUNDER to enable SoC-specific drivers and dtb files. Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com> Signed-off-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2014-10-02 | arm64, thunder: Add initial dts for Cavium Thunder SoC | Radha Mohan Chintakuntla | 3 | -0/+470
Add initial device tree nodes for Cavium Thunder SoCs with support for 48 cores and gicv3. The dtsi file requires further changes, esp. for pci, gicv3-its and smmu. These changes will be added later together with the device drivers. Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com> Signed-off-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2014-10-02 | kbuild: arm: Do not define "comma" twice | Masahiro Yamada | 1 | -2/+0
The definition of "comma" already exists in scripts/Kbuild.include; we should not define it a second time. Signed-off-by: Masahiro Yamada <yamada.m@jp.panasonic.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Michal Marek <mmarek@suse.cz>
2014-10-02 | arm64: Use DMA_ERROR_CODE to denote failed allocation | Sean Paul | 1 | -1/+1
This patch replaces the static assignment of ~0 to dma_handle with DMA_ERROR_CODE to be consistent with other platforms. Signed-off-by: Sean Paul <seanpaul@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
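The change is essentially the one-liner below (context around the failed-allocation path omitted):

    /* before */
    *dma_handle = ~0;
    /* after: use the sentinel other platforms already use */
    *dma_handle = DMA_ERROR_CODE;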
2014-09-30 | arm64: Add architectural support for PCI | Liviu Dudau | 7 | -2/+134
Use the generic PCI domain and OF functions to provide support for PCI on arm64. [bhelgaas: Change comments to use generic PCI, not just PCIe. Nothing at this level is PCIe-specific.] Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-29 | clocksource: arm_arch_timer: Consolidate arch_timer_evtstrm_enable | Nathan Lynch | 1 | -14/+0
The arch_timer_evtstrm_enable hooks in arm and arm64 are substantially similar, the only difference being a CONFIG_COMPAT-conditional section which is relevant only for arm64. Copy the arm64 version to the driver, removing the arch-specific hooks. Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com>
2014-09-29 | clocksource: arm_arch_timer: Enable counter access for 32-bit ARM | Nathan Lynch | 1 | -17/+0
The only difference between arm and arm64's implementations of arch_counter_set_user_access is that 32-bit ARM does not enable user access to the virtual counter. We want to enable this access for the 32-bit ARM VDSO, so copy the arm64 version to the driver itself, and remove the arch-specific implementations. Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com>
2014-09-27 | Merge tag 'kvm-arm-for-3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-next | Paolo Bonzini | 7 | -21/+25
Changes for KVM for arm/arm64 for 3.18. This includes a bunch of changes:
 - Support read-only memory slots on arm/arm64
 - Various changes to fix Sparse warnings
 - Correctly detect write vs. read Stage-2 faults
 - Various VGIC cleanups and fixes
 - Dynamic VGIC data structure sizing
 - Fix SGI set_clear_pend offset bug
 - Fix VTTBR_BADDR Mask
 - Correctly report the FSC on Stage-2 faults
Conflicts:
	virt/kvm/eventfd.c [duplicate, different patch where the kvm-arm version broke x86. The kvm tree instead has the right one]
2014-09-26 | arm/arm64: KVM: Report correct FSC for unsupported fault types | Christoffer Dall | 1 | -0/+5
When we catch something that's not a permission fault or a translation fault, we log the unsupported FSC in the kernel log, but we were masking off the bottom bits of the FSC which was not very helpful. Also correctly report the FSC for data and instruction faults rather than telling people it was a DFCS, which doesn't exist in the ARM ARM. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-09-26 | arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc | Joel Schopp | 2 | -4/+14
The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits and not all the bits in the PA range. This is clearly a bug that manifests itself on systems that allocate memory in the higher address space range. [ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT instead of a hard-coded value and to move the alignment check of the allocation to mmu.c. Also added a comment explaining why we hardcode the IPA range and changed the stage-2 pgd allocation to be based on the 40 bit IPA range instead of the maximum possible 48 bit PA range. - Christoffer ] Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Joel Schopp <joel.schopp@amd.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-09-26 | arm/arm64: unexport restart handlers | Guenter Roeck | 1 | -1/+0
Implementing a restart handler in a module doesn't make sense, as there would be no guarantee that the module is loaded when a restart is needed. Unexport arm_pm_restart to ensure that no one gets the idea to do it anyway. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Heiko Stuebner <heiko@sntech.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jonas Jensen <jonas.jensen@gmail.com> Cc: Maxime Ripard <maxime.ripard@free-electrons.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Wim Van Sebroeck <wim@iguana.be> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2014-09-26 | arm64: support restart through restart handler call chain | Guenter Roeck | 1 | -0/+2
The kernel core now supports a restart handler call chain to restart the system. Call it if arm_pm_restart is not set. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Heiko Stuebner <heiko@sntech.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jonas Jensen <jonas.jensen@gmail.com> Cc: Maxime Ripard <maxime.ripard@free-electrons.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Wim Van Sebroeck <wim@iguana.be> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
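Roughly, the fallback described lands in the arm64 machine_restart() path as follows (a sketch; surrounding shutdown and IRQ handling omitted):

    if (arm_pm_restart)
            arm_pm_restart(reboot_mode, cmd);   /* legacy hook, if a platform set it */
    else
            do_kernel_restart(cmd);             /* new restart handler call chain */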
2014-09-25 | arm64: Fix typos in KGDB macros | Catalin Marinas | 3 | -14/+14
Some of the KGDB macros used for generating the BRK instructions had the wrong spelling for DBG and KGDB abbreviations. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-25 | arm64: insn: Add return statements after BUG_ON() | Mark Brown | 2 | -0/+33
Following a recent series of enhancements to the insn code, the ARMv8 allnoconfig build has been generating a large number of warnings of the form:

	arch/arm64/kernel/insn.c:689:8: warning: 'insn' may be used uninitialized in this function [-Wmaybe-uninitialized]

This is because BUG() and related macros can be compiled out, so execution paths that would normally result in a panic compile down to no-ops instead. I wasn't able to immediately identify a sensible return value to use in these cases, so just return AARCH64_BREAK_FAULT - this is all "should never happen" code so hopefully it never has a practical impact. Signed-off-by: Mark Brown <broonie@kernel.org> [catalin.marinas@arm.com: AARCH64_BREAK_FAULT definition contributed by Daniel Borkmann] [catalin.marinas@arm.com: replace return 0 with AARCH64_BREAK_FAULT] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
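The pattern, sketched on a hypothetical encoder rather than the real insn.c functions: if BUG() is configured out, the function still returns a guaranteed-faulting encoding instead of an uninitialized value.

    static u32 example_gen_insn(int op)
    {
            u32 insn;

            switch (op) {
            case 0:
                    insn = aarch64_insn_gen_nop();
                    break;
            default:
                    BUG();                      /* may compile to nothing on allnoconfig */
                    return AARCH64_BREAK_FAULT; /* so fail safe with a faulting opcode */
            }

            return insn;
    }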
2014-09-24 | kvm: Add arch specific mmu notifier for page invalidation | Tang Chen | 1 | -0/+5
This will be used to let the guest run while the APIC access page is not pinned. Because subsequent patches will fill in the function for x86, place the (still empty) x86 implementation in the x86.c file instead of adding an inline function in kvm_host.h. Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-09-24 | kvm: Fix page ageing bugs | Andres Lagar-Cavilla | 1 | -1/+2
1. We were calling clear_flush_young_notify in unmap_one, but we are within an mmu notifier invalidate range scope. The spte exists no more (due to range_start) and the accessed bit info has already been propagated (due to kvm_pfn_set_accessed). Simply call clear_flush_young.
2. We clear_flush_young on a primary MMU PMD, but this may be mapped as a collection of PTEs by the secondary MMU (e.g. during log-dirty). This required expanding the interface of the clear_flush_young mmu notifier, so a lot of code has been trivially touched.
3. In the absence of shadow_accessed_mask (e.g. EPT A bit), we emulate the access bit by blowing the spte. This requires proper synchronizing with MMU notifier consumers, like every other removal of spte's does.
Signed-off-by: Andres Lagar-Cavilla <andreslc@google.com> Acked-by: Rik van Riel <riel@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-09-23 | audit: arm64: Remove the audit arch argument to audit_syscall_entry | Eric Paris | 1 | -2/+2
The arm64 tree added calls to audit_syscall_entry() and rightly included the syscall number. The interface has since been changed to not need the syscall number. As such, arm64 should no longer pass that value. Signed-off-by: Eric Paris <eparis@redhat.com>
2014-09-23 | arm64: audit: Add audit hook in syscall_trace_enter/exit() | AKASHI Takahiro | 1 | -0/+7
This patch adds auditing functions on entry to and exit from every system call invocation. Acked-by: Richard Guy Briggs <rgb@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
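A hedged sketch of the hooks described (register names follow arm64's pt_regs; the exact audit_syscall_entry() argument list has varied across kernel versions, including the arch-argument removal noted in the entry above):

    asmlinkage int syscall_trace_enter(struct pt_regs *regs)
    {
            /* ... existing tracehook / trace-event handling ... */
            audit_syscall_entry(syscall_get_arch(), regs->syscallno,
                                regs->orig_x0, regs->regs[1],
                                regs->regs[2], regs->regs[3]);
            return regs->syscallno;
    }

    asmlinkage void syscall_trace_exit(struct pt_regs *regs)
    {
            audit_syscall_exit(regs);
            /* ... existing tracehook / trace-event handling ... */
    }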
2014-09-23 | arm64: debug: don't re-enable debug exceptions on return from el1_dbg | Will Deacon | 1 | -1/+0
When returning from a debug exception taken from EL1, we unmask debug exceptions after handling the exception. This is crucial for debug exceptions taken from EL0, so that any kernel work on the ret_to_user path can be debugged by kgdb. However, when returning back to EL1 the only thing left to do is to restore the original register state before the exception return. If single-step has been enabled by the debug exception handler, we will get stuck in an infinite debug exception loop, since we will take the step exception as soon as we unmask debug exceptions. This patch avoids unmasking debug exceptions on the debug exception return path when the exception was taken from EL1. Fixes: 2a2830703a23 (arm64: debug: avoid accessing mdscr_el1 on fault paths where possible) Cc: <stable@vger.kernel.org> #3.16+ Reported-by: David Long <dave.long@linaro.org> Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-22 | Revert "arm64: dmi: Add SMBIOS/DMI support" | Catalin Marinas | 3 | -50/+0
This reverts commit 668ebd106860f09f43993517f786a2ddfd0f9ebe, because of lots of warnings during boot if Linux isn't started as an EFI application:

WARNING: CPU: 4 PID: 1 at /work/Linux/linux-2.6-aarch64/drivers/firmware/dmi_scan.c:591 dmi_matches+0x10c/0x110()
dmi check: not initialized yet.
Modules linked in:
CPU: 4 PID: 1 Comm: swapper/0 Not tainted 3.17.0-rc4+ #606
Call trace:
[<ffffffc000087fb0>] dump_backtrace+0x0/0x124
[<ffffffc0000880e4>] show_stack+0x10/0x1c
[<ffffffc0004d58f8>] dump_stack+0x74/0xb8
[<ffffffc0000ab640>] warn_slowpath_common+0x8c/0xb4
[<ffffffc0000ab6b4>] warn_slowpath_fmt+0x4c/0x58
[<ffffffc0003f2d7c>] dmi_matches+0x108/0x110
[<ffffffc0003f2da8>] dmi_check_system+0x24/0x68
[<ffffffc0006974c4>] atkbd_init+0x10/0x34
[<ffffffc0000814ac>] do_one_initcall+0x88/0x1a0
[<ffffffc00067aab4>] kernel_init_freeable+0x148/0x1e8
[<ffffffc0004d2c64>] kernel_init+0x10/0xd4

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-22 | arm64: use generic dma-contiguous.h | Zubair Lutfullah Kakakhel | 2 | -28/+1
dma-contiguous.h is now in asm-generic. Use that to avoid code repetition in arm64. Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com> Acked-by: Michal Nazarewicz <mina86@mina86.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: will.deacon@arm.com Cc: tglx@linutronix.de Cc: mingo@redhat.com Cc: hpa@zytor.com Cc: arnd@arndb.de Cc: gregkh@linuxfoundation.org Cc: m.szyprowski@samsung.com Cc: x86@kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: linux-arch@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/7358/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-09-22 | arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers | Catalin Marinas | 2 | -31/+7
Commit 6ecba8eb51b7 (arm64: Use bus notifiers to set per-device coherent DMA ops) introduced bus notifiers to set the coherent dma ops based on the 'dma-coherent' DT property. Since the generic of_dma_configure() handles this property for platform and AMBA devices, replace the notifiers with set_arch_dma_coherent_ops(). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
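A sketch of the replacement hook the text describes (the helper and ops names are assumptions based on the arm64 code of that era, not quoted from the patch):

    static inline int set_arch_dma_coherent_ops(struct device *dev)
    {
            set_dma_ops(dev, &coherent_swiotlb_dma_ops);
            return 0;
    }
    #define set_arch_dma_coherent_ops  set_arch_dma_coherent_ops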
2014-09-22 | arm64: dmi: Add SMBIOS/DMI support | Yi Li | 3 | -0/+50
SMBIOS is important for server hardware vendors. It implements a spec for providing descriptive information about the platform. Things like serial numbers, physical layout of the ports, build configuration data, and the like. This has been tested by dmidecode and lshw tools. This patch adds the call to dmi_scan_machine() to arm64_enter_virtual_mode(), as that is the point where the EFI Configuration Tables are registered as being available. It needs to be in an early_initcall anyway as dmi_id_init(), which is an arch_initcall itself, depends on dmi_scan_machine() having been called already. Signed-off-by: Yi Li <yi.li@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-19 | Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 | -2/+1
Pull x86 fixes from Ingo Molnar: "Misc fixes: EFI fixes, a build fix, a page table dumping (debug) fix and a clang build fix"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  efi/arm64: Fix fdt-related memory reservation
  x86/mm: Apply the section attribute to the variable, not its type
  x86/efi: Fixup GOT in all boot code paths
  x86/efi: Only load initrd above 4g on second try
  x86-64, ptdump: Mark espfix area only if existent
  x86, irq: Fix build error caused by 9eabc99a635a77cbf09
2014-09-19 | arm64: Correct ftrace calls to aarch64_insn_gen_branch_imm() | Catalin Marinas | 1 | -4/+6
The aarch64_insn_gen_branch_imm() function takes an enum as the last argument rather than a bool. It happens to work because AARCH64_INSN_BRANCH_LINK matches 'true' but better to use the actual type. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
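The call-site change, roughly (variable names illustrative; AARCH64_INSN_BRANCH_LINK is the enum value named in the text):

    /* before: works only because AARCH64_INSN_BRANCH_LINK happens to equal 'true' */
    new = aarch64_insn_gen_branch_imm(pc, addr, true);
    /* after: pass the intended enum value explicitly */
    new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);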
2014-09-18 | arm/arm64: KVM: vgic: make number of irqs a configurable attribute | Marc Zyngier | 1 | -0/+1
In order to make the number of interrupts configurable, use the new fancy device management API to add KVM_DEV_ARM_VGIC_GRP_NR_IRQS as a VGIC configurable attribute. Userspace can now specify the exact size of the GIC (by increments of 32 interrupts). Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-09-18 | Merge remote-tracking branch 'kvm/next' into queue | Christoffer Dall | 1 | -4/+8
Conflicts:
	arch/arm64/include/asm/kvm_host.h
	virt/kvm/arm/vgic.c
2014-09-17 | arm64:mm: initialize max_mapnr using function set_max_mapnr | Ganapatrao Kulkarni | 1 | -1/+1
Initialize max_mapnr using the set_max_mapnr() helper function instead of a direct assignment. Also do not add PHYS_PFN_OFFSET to max_pfn, since it already contains it. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@caviumnetworks.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
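Roughly the shape of the change (sketched from the description; the original right-hand side is an assumption):

    /* before: direct assignment, double-counting PHYS_PFN_OFFSET */
    max_mapnr = pfn_to_page(max_pfn + PHYS_PFN_OFFSET) - mem_map;
    /* after: use the helper; max_pfn already includes the offset */
    set_max_mapnr(pfn_to_page(max_pfn) - mem_map);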
2014-09-15 | setup: Move unmask of async interrupts after possible earlycon setup | Jon Masters | 1 | -5/+6
The kernel wants to enable reporting of asynchronous interrupts (i.e. System Errors) as early as possible. But if this happens too early then any pending System Error on initial entry into the kernel may never be reported where a user can see it. This situation will occur if the kernel is configured with CONFIG_PANIC_ON_OOPS set and (default or command line) enabled, in which case the kernel will panic as intended, however the associated logging messages indicating this failure condition will remain only in the kernel ring buffer and never be flushed out to the (not yet configured) console. Therefore, this patch moves the enabling of asynchronous interrupts during early setup to as early as reasonable, but after parsing any possible earlycon parameters setting up earlycon. Signed-off-by: Jon Masters <jcm@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-15 | arm64: LLVMLinux: Fix inline arm64 assembly for use with clang | Mark Charlebois | 1 | -1/+1
Remove '#' from immediate parameter in AARCH64 inline assembly in mmu. This code now works with both gcc and clang. Signed-off-by: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Behan Webster <behanw@converseincode.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-13 | arm64: Tell irq work about self IPI support | Frederic Weisbecker | 4 | -2/+14
ARM64 irq work self-IPI support depends on __smp_cross_call pointing to some relevant IRQ controller operations. This information should be available after the call to init_IRQ(). Let's implement arch_irq_work_has_interrupt() accordingly. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
2014-09-13 | irq_work: Introduce arch_irq_work_has_interrupt() | Peter Zijlstra | 1 | -1/+2
The nohz full code needs irq work to trigger its own interrupt so that the subsystem can work even when the tick is stopped. Let's introduce arch_irq_work_has_interrupt(), which archs can override to advertise their support for this ability. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
2014-09-12 | Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux | Linus Torvalds | 3 | -8/+28
Pull arm64 fixes from Will Deacon: "Just a couple of stragglers here:
 - fix an issue migrating interrupts on CPU hotplug
 - fix a potential information leak of TLS registers across an exec (Nathan has sent a corresponding patch for arch/arm/ to rmk)"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: flush TLS registers during exec
  arm64: use irq_set_affinity with force=false when migrating irqs
2014-09-12 | arm64: pageattr: Correctly adjust unaligned start addresses | Laura Abbott | 1 | -1/+2
The start address needs to be actually updated after it is detected to be unaligned. Adjust it and the end address properly. Reported-by: Zi Shen Lim <zlim.lnx@gmail.com> Reviewed-by: Zi Shen Lim <zlim.lnx@gmail.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-12 | net: bpf: arm64: fix module memory leak when JIT image build fails | Daniel Borkmann | 1 | -1/+3
On ARM64, when the BPF JIT compiler fills the JIT image body with opcodes during translation of eBPF into ARM64 opcodes, we may fail for several reasons during that phase: one being that we jump to the notyet label for not yet supported eBPF instructions such as BPF_ST. In that case we only free offsets, but not the actual allocated target image where opcodes are being stored. Fix it by calling module_free() on dismantle time in case of errors. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Acked-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
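A sketch of the error path described (structure simplified from the arm64 JIT compile flow; labels and fields are approximations):

    /* The JIT image has already been allocated at this point; on a
     * body-build failure, free the image itself, not just the offsets. */
    if (build_body(&ctx)) {
            module_free(NULL, ctx.image);   /* previously leaked on this path */
            goto out;
    }
    /* ... emit prologue/epilogue, flush icache, set prog->bpf_func ... */
    out:
            kfree(ctx.offset);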
2014-09-12 | Merge arm64 CPU suspend branch | Catalin Marinas | 8 | -32/+216
* cpuidle:
  arm64: add PSCI CPU_SUSPEND based cpu_suspend support
  arm64: kernel: introduce cpu_init_idle CPU operation
  arm64: kernel: refactor the CPU suspend API for retention states
  Documentation: arm: define DT idle states bindings
2014-09-12 | arm64: add PSCI CPU_SUSPEND based cpu_suspend support | Lorenzo Pieralisi | 1 | -0/+104
This patch implements the cpu_suspend cpu operations method through the PSCI CPU SUSPEND API. The PSCI implementation translates the idle state index passed by the cpu_suspend core call into a valid PSCI state according to the PSCI states initialized at boot through the cpu_init_idle() CPU operations hook. The PSCI CPU suspend operation hook checks if the PSCI state is a standby state. If it is, it calls the PSCI suspend implementation straight away, without saving any context. If the state is a power-down state, the kernel calls the __cpu_suspend API (which saves the CPU context) and passes the PSCI suspend finisher as a parameter, so that PSCI can be called by the __cpu_suspend implementation after saving and flushing the context, as the last function before power down. For power-down states, the entry point is set to the cpu_resume physical address, which represents the default kernel execution address following a CPU reset. Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
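A hedged sketch of the dispatch described (structure and identifiers approximate the arm64 PSCI back-end of that era; they are not quoted from the patch):

    static int cpu_psci_cpu_suspend(unsigned long index)
    {
            struct psci_power_state *state = __this_cpu_read(psci_power_state);

            /* idle state 0 (plain WFI) is handled by the cpuidle core */
            if (state[index - 1].type == PSCI_POWER_STATE_TYPE_STANDBY)
                    /* retention state: no context to save, call PSCI directly */
                    return psci_ops.cpu_suspend(state[index - 1], 0);

            /* power-down state: save context, let the finisher invoke PSCI */
            return __cpu_suspend(index, psci_suspend_finisher);
    }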
2014-09-12 | arm64: kernel: introduce cpu_init_idle CPU operation | Lorenzo Pieralisi | 4 | -0/+48
The CPUidle subsystem on ARM64 machines requires the idle states implementation back-end to initialize idle state parameters at boot. This patch adds a hook in the CPU operations structure that should be initialized by the CPU operations back-end in order to provide a function that initializes cpu idle states. This patch also adds the infrastructure to the arm64 kernel required to export the CPU-operations-based initialization interface, so that drivers (ie CPUidle) can use it when they are initialized at probe time. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-12 | arm64: kernel: refactor the CPU suspend API for retention states | Lorenzo Pieralisi | 3 | -32/+64
CPU suspend is the standard kernel interface to be used to enter low-power states on ARM64 systems. The current cpu_suspend implementation by default assumes that all low-power states lose the CPU context, so the CPU registers must be saved and cleaned to DRAM upon state entry. Furthermore, the current cpu_suspend() implementation assumes that if the CPU suspend back-end method returns when called, this has to be considered an error regardless of the return code (which can be successful), since the CPU was not expected to return from a code path that is different from the cpu_resume code path - eg returning from the reset vector. All in all this means that the current API does not cope well with low-power states that preserve the CPU context when entered (ie retention states), since the context is saved for nothing on state entry for those states and a successful state entry can return as a normal function return, which is considered an error by the current CPU suspend implementation.

This patch refactors the cpu_suspend() API so that it can be split into two separate functionalities. The arm64 cpu_suspend API just provides a wrapper around the CPU suspend operation hook. A new function is introduced (for architecture code use only) for states that require context saving upon entry:

	__cpu_suspend(unsigned long arg, int (*fn)(unsigned long))

__cpu_suspend() saves the context on function entry and calls the so-called suspend finisher (ie fn) to complete the suspend operation. The finisher is not expected to return, unless it fails, in which case the error is propagated back to the __cpu_suspend caller. The API refactoring results in the following pseudo code call sequence for a suspending CPU, when triggered from a kernel subsystem:

	/*
	 * int cpu_suspend(unsigned long idx)
	 * @idx: idle state index
	 */
	{
	-> cpu_suspend(idx)
		|---> CPU operations suspend hook called, if present
			|--> if (retention_state)
				|--> direct suspend back-end call (eg PSCI suspend)
			     else
				|--> __cpu_suspend(idx, &back_end_finisher);
	}

By refactoring the cpu_suspend API this way, the CPU operations back-end has a chance to detect whether idle states require state saving or not and can call the required suspend operations accordingly, either through a simple function call or indirectly through __cpu_suspend(), which carries out state saving and suspend finisher dispatching to complete idle state entry. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-11 | arm64: flush TLS registers during exec | Will Deacon | 2 | -0/+24
Nathan reports that we leak TLS information from the parent context during an exec, as we don't clear the TLS registers when flushing the thread state. This patch updates the flushing code so that we:
 (1) Unconditionally zero the tpidr_el0 register (since this is fully context switched for native tasks and zeroed for compat tasks)
 (2) Zero the tp_value state in thread_info before clearing the tpidrro_el0 register for compat tasks (since this is only writable by the set_tls compat syscall and therefore not fully switched).
A missing compiler barrier is also added to the compat set_tls syscall. Cc: <stable@vger.kernel.org> Acked-by: Nathan Lynch <Nathan_Lynch@mentor.com> Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
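A sketch of the flush described, simplified from the process.c change (exact helper names may differ):

    static void tls_thread_flush(void)
    {
            asm ("msr tpidr_el0, xzr");             /* (1) always zero the native TLS register */

            if (is_compat_task()) {
                    current->thread.tp_value = 0;   /* (2) clear the saved value first... */
                    barrier();                      /* ...and order it before the register write */
                    asm ("msr tpidrro_el0, xzr");
            }
    }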