path: root/arch/arm64/kernel
2015-10-28  arm64: compat: fix stxr failure case in SWP emulation  (Will Deacon; 1 file, -7/+9)
If the STXR instruction fails in the SWP emulation code, we leave *data overwritten with the loaded value, therefore corrupting the data written by a subsequent, successful attempt. This patch re-jigs the code so that we only write back to *data once we know that the update has happened. Cc: <[email protected]> Fixes: bd35a4adc413 ("arm64: Port SWP/SWPB emulation support from arm") Reported-by: Shengjiu Wang <[email protected]> Reported-by: Vladimir Murzin <[email protected]> Signed-off-by: Will Deacon <[email protected]>
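For illustration, a minimal sketch of the fixed pattern with a hypothetical helper (not the kernel's actual emulation code, which wraps the exclusive pair in user-access macros and a retry loop):

  static int swp_step(unsigned int *addr, unsigned int *data)
  {
  	unsigned int loaded, failed;

  	asm volatile(
  	"	ldxr	%w0, [%2]\n"		/* load-exclusive the old value */
  	"	stxr	%w1, %w3, [%2]\n"	/* try to store the new value */
  	: "=&r" (loaded), "=&r" (failed)
  	: "r" (addr), "r" (*data)
  	: "memory");

  	if (failed)
  		return -EAGAIN;	/* caller retries; crucially, *data is untouched */

  	*data = loaded;		/* write back only after a successful STXR */
  	return 0;
  }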
2015-10-28  efi: Use correct type for struct efi_memory_map::phys_map  (Ard Biesheuvel; 1 file, -2/+2)
We have been getting away with using a void* for the physical address of the UEFI memory map, since, even on 32-bit platforms with 64-bit physical addresses, no truncation takes place if the memory map has been allocated by the firmware (which only uses 1:1 virtually addressable memory), which is usually the case. However, commit: 0f96a99dab36 ("efi: Add "efi_fake_mem" boot option") adds code that clones and modifies the UEFI memory map, and the clone may live above 4 GB on 32-bit platforms. This means our use of void* for struct efi_memory_map::phys_map has graduated from 'incorrect but working' to 'incorrect and broken', and we need to fix it. So redefine struct efi_memory_map::phys_map as phys_addr_t, and get rid of a bunch of casts that are now unneeded. Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Matt Fleming <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-10-25  Merge branch 'acpi-init'  (Rafael J. Wysocki; 2 files, -32/+1)
* acpi-init:
  clocksource: cosmetic: Drop OF 'dependency' from symbols
  clocksource / arm_arch_timer: Convert to ACPI probing
  clocksource: Add new CLKSRC_{PROBE,ACPI} config symbols
  clocksource / ACPI: Add probing infrastructure for ACPI-based clocksources
  irqchip / GIC: Convert the GIC driver to ACPI probing
  irqchip / ACPI: Add probing infrastructure for ACPI-based irqchips
  ACPI: Add early device probing infrastructure
2015-10-22  Merge tag 'firmware/psci-1.0' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux into next/drivers  (Olof Johansson; 1 file, -14/+0)
This pull request contains patches that enable PSCI 1.0 firmware features for arm/arm64 platforms:
- Lorenzo Pieralisi adds support for the PSCI_FEATURES call, manages various 1.0 specification updates (power state id and function return values) and provides PSCI v1.0 DT bindings
- Sudeep Holla implements PSCI v1.0 system suspend support to enable PSCI based suspend-to-RAM
* tag 'firmware/psci-1.0' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux:
  drivers: firmware: psci: add system suspend support
  drivers: firmware: psci: define more generic PSCI_FN_NATIVE macro
  drivers: firmware: psci: add PSCI v1.0 DT bindings
  drivers: firmware: psci: add extended stateid power_state support
  drivers: firmware: psci: add PSCI_FEATURES call
  drivers: firmware: psci: move power_state handling to generic code
  drivers: firmware: psci: add INVALID_ADDRESS return value
Signed-off-by: Olof Johansson <[email protected]>
2015-10-21  arm64: Constify hwcap name string arrays  (Dave Martin; 1 file, -3/+3)
The hwcap string arrays used for generating the contents of /proc/cpuinfo are currently arrays of non-const pointers. There's no need for these pointers to be mutable, so this patch makes them const so that they can be moved to .rodata. Signed-off-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64/debug: Make use of the system wide safe value  (Suzuki K. Poulose; 1 file, -2/+4)
Use the system wide value of ID_AA64DFR0 to make safer decisions. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Move FP/ASIMD hwcap handling to common code  (Suzuki K. Poulose; 2 files, -11/+7)
FP/ASIMD support is detected in fpsimd_init(), which is built in unconditionally. Let's move the hwcap handling to the central place. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64/HWCAP: Use system wide safe values  (Suzuki K. Poulose; 1 file, -72/+91)
Extend struct arm64_cpu_capabilities to handle the HWCAP detection and make use of the system wide value of the feature registers for a reliable set of HWCAPs. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64/capabilities: Make use of system wide safe value  (Suzuki K. Poulose; 1 file, -20/+59)
Now that we can reliably read the system wide safe value for a feature register, use that to compute the system capability. This patch also replaces the 'feature-register-specific' methods with a generic routine to check the capability. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Delay cpu feature capability checks  (Suzuki K. Poulose; 3 files, -5/+100)
At the moment we run through the arm64_features capability list for each CPU and set the capability if one of the CPUs supports it. This could be problematic in a heterogeneous system with differing capabilities. Delay the CPU feature checks until all the enabled CPUs are up (i.e., smp_cpus_done()), so that we can make better decisions based on the overall system capability. Once we decide and advertise the capabilities, the alternatives can be applied. From this state, we cannot roll back a feature to disabled based on the values from a new hotplugged CPU, due to the runtime patching and other reasons. So, for all new CPUs, we need to make sure that they have the established system capabilities. Failing that, we bring the CPU down, preventing it from turning online. Once the capabilities are decided, any new CPU booting up goes through verification to ensure that it has all the enabled capabilities, and also invokes the respective enable() method on the CPU. The CPU errata checks are not delayed and are still executed per-CPU to detect the respective capabilities. If we ever come across a non-errata capability that needs to be checked on each CPU, we could introduce it via a new capability table (or introduce a flag), which can be processed per CPU. The next patch will make the feature checks use the system wide safe value of a feature register. NOTE: The enable() method associated with a capability is scheduled on all the CPUs (which is the only use case at the moment). If we ever need a different type of enable() which only needs to be run once on any CPU, we should be able to handle that when needed. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> [[email protected]: static variable and coding style fixes] Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Refactor check_cpu_capabilities  (Suzuki K. Poulose; 2 files, -4/+12)
check_cpu_capabilities runs through a given list of caps, checks if the system has each cap, updates the system capability bitmap and also runs any enable() methods associated with them. All of this is not quite obvious from the name 'check'. This patch splits check_cpu_capabilities into two parts:
1) update_cpu_capabilities => runs through the given list and updates the system wide capability map.
2) enable_cpu_capabilities => runs through the given list and invokes enable() (if any) for the caps enabled on the system.
Cc: Andre Przywara <[email protected]> Cc: Will Deacon <[email protected]> Suggested-by: Catalin Marinas <[email protected]> Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
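A rough sketch of the resulting shape (simplified bodies; the real code lives in cpufeature.c and works on the kernel's arm64_cpu_capabilities table, so the iteration and logging details here are illustrative):

  static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
  {
  	for (; caps->desc; caps++) {
  		if (!caps->matches(caps))
  			continue;
  		pr_info("CPU features: detected: %s\n", caps->desc);
  		cpus_set_cap(caps->capability);		/* update system-wide map */
  	}
  }

  static void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
  {
  	for (; caps->desc; caps++)
  		if (caps->enable && cpus_have_cap(caps->capability))
  			caps->enable(NULL);		/* run the enable() hook */
  }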
2015-10-21  arm64: Cleanup mixed endian support detection  (Suzuki K. Poulose; 1 file, -22/+0)
Make use of the system wide safe register to decide the support for mixed endian. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Read system wide CPUID value  (Suzuki K. Poulose; 1 file, -0/+9)
Add an API for reading the safe CPUID value across the system from the new infrastructure. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Consolidate CPU Sanity check to CPU Feature infrastructure  (Suzuki K. Poulose; 2 files, -145/+132)
This patch consolidates the CPU Sanity check to the new infrastructure. Cc: Mark Rutland <[email protected]> Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Keep track of CPU feature registers  (Suzuki K. Poulose; 2 files, -1/+431)
This patch adds an infrastructure to keep track of the CPU feature registers on the system. For each register, the infrastructure keeps track of the system wide safe value of the feature bits. It also tracks which fields of a register should be matched strictly across all the CPUs on the system for the SANITY check infrastructure. The feature bits are classified into the following three types depending on the implication of the possible values; this information is used to decide the safe value for a feature:
LOWER_SAFE  - the smaller value is safer
HIGHER_SAFE - the bigger value is safer
EXACT       - we can't decide between the two, so a predefined safe_value is used
This infrastructure will later be used to make better decisions for:
- kernel features (e.g. KVM, Debug)
- SANITY check
- CPU capability
- ELF HWCAP
- exposing CPU feature registers to userspace
Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> [[email protected]: whitespace fix] Signed-off-by: Catalin Marinas <[email protected]>
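A sketch of how the safe value could be arbitrated per field (illustrative only; the real infrastructure works on per-register descriptor tables carrying field widths and strictness flags, and the enum/function names here are assumptions):

  enum ftr_type { FTR_LOWER_SAFE, FTR_HIGHER_SAFE, FTR_EXACT };

  static s64 arbitrate_field(enum ftr_type type, s64 safe_val, s64 cur, s64 newv)
  {
  	switch (type) {
  	case FTR_LOWER_SAFE:
  		return min(cur, newv);	/* the smaller value is safer */
  	case FTR_HIGHER_SAFE:
  		return max(cur, newv);	/* the bigger value is safer */
  	case FTR_EXACT:
  	default:
  		/* can't arbitrate: keep only an exact match, else the preset */
  		return cur == newv ? cur : safe_val;
  	}
  }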
2015-10-21  arm64: Move /proc/cpuinfo handling code  (Suzuki K. Poulose; 2 files, -123/+124)
This patch moves the /proc/cpuinfo handling code: arch/arm64/kernel/{setup.c to cpuinfo.c} No functional changes Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Move mixed endian support detection  (Suzuki K. Poulose; 2 files, -21/+22)
Move the mixed endian support detection code to cpufeature.c from cpuinfo.c. This also moves the update_cpu_features() used by mixed endian detection code, which will get more functionality. Also moves the ID register field shifts to asm/sysreg.h, where all the useful definitions will end up in later patches. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Move cpu feature detection code  (Suzuki K. Poulose; 2 files, -108/+109)
This patch moves the CPU feature detection code from arch/arm64/kernel/{setup.c to cpufeature.c} The plan is to consolidate all the CPU feature handling in cpufeature.c. Apart from changing pr_fmt from "alternatives" to "cpu features", there are no functional changes. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Delay cpuinfo_store_boot_cpu  (Suzuki K. Poulose; 2 files, -8/+3)
At the moment the boot CPU stores the cpuinfo long before the PERCPU areas are initialised by the kernel. This could be problematic as the non-boot CPU data structures might get copied with the data from the boot CPU, giving us no chance to detect if a particular CPU updated its cpuinfo. This patch delays the boot cpu store to smp_prepare_boot_cpu(). Also kills the setup_processor() which no longer does meaningful work. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Delay ELF HWCAP initialisation until all CPUs are up  (Suzuki K. Poulose; 2 files, -8/+9)
Delay the ELF HWCAP initialisation until all the (enabled) CPUs are up, i.e., smp_cpus_done(). This is in preparation for detecting the common features across the CPUs and creating a consistent ELF HWCAP for the system. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-21  arm64: Make the CPU information more clear  (Suzuki K. Poulose; 2 files, -3/+3)
At early boot, we print the CPU version/revision. On a heterogeneous system, we could have different types of CPUs, so print the CPU info for all active CPUs; secondary CPUs print the message only when they come online. Also, remove the redundant 'revision' information, which doesn't make any sense without the 'variant' field. Signed-off-by: Suzuki K. Poulose <[email protected]> Tested-by: Dave Martin <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-19  arm64: Synchronise dump_backtrace() with perf callchain  (Jungseok Lee; 1 file, -5/+10)
Unlike perf callchain relying on walk_stackframe(), dump_backtrace() has its own backtrace logic. A major difference between them is the moment a symbol is recorded. Perf writes down a symbol *before* calling unwind_frame(), but dump_backtrace() prints it out *after* unwind_frame(). As a result, the last valid symbol cannot be hooked in the dump_backtrace() case. This patch addresses the issue by synchronising dump_backtrace() with perf callchain. A simple test and its results are as follows:
- crash trigger
  $ sudo echo c > /proc/sysrq-trigger
- current status
  Call trace:
  [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
  [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
  [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
  [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
  [<fffffe00001f2638>] __vfs_write+0x44/0x104
  [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
  [<fffffe00001f3730>] SyS_write+0x50/0xb0
- with this change
  Call trace:
  [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
  [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
  [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
  [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
  [<fffffe00001f2638>] __vfs_write+0x44/0x104
  [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
  [<fffffe00001f3730>] SyS_write+0x50/0xb0
  [<fffffe00000939ec>] el0_svc_naked+0x20/0x28
Note that this patch does not cover the case where the MMU is disabled. The last stack frame of swapper, for example, has a PC in the form of a physical address. Unfortunately, a simple conversion using phys_to_virt() cannot cover all scenarios since PC is retrieved from LR - 4, not LR. It is a big tradeoff to change both head.S and unwind_frame() for only a few symbols in *.S. Thus, this hunk does not take care of that case. Cc: AKASHI Takahiro <[email protected]> Cc: James Morse <[email protected]> Cc: Mark Rutland <[email protected]> Signed-off-by: Jungseok Lee <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
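The essence of the change is the loop ordering in dump_backtrace(), roughly (a simplified sketch with argument lists trimmed, not the exact kernel code):

  	while (1) {
  		dump_backtrace_entry(frame.pc);	/* record the symbol first */
  		if (unwind_frame(&frame) < 0)	/* then try to unwind further */
  			break;			/* the last frame was still printed */
  	}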
2015-10-19  arm64: add cpu_idle tracepoints to arch_cpu_idle  (Jisheng Zhang; 1 file, -0/+3)
Currently, if cpuidle is disabled or not supported, powertop reports zero wakeups and zero events. This is because the cpu_idle tracepoints are missing. This patch makes the cpu_idle tracepoints always available, even if cpuidle is disabled or not supported. Signed-off-by: Jisheng Zhang <[email protected]> Acked-by: Lorenzo Pieralisi <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
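The fix amounts to bracketing the idle entry with the tracepoints, roughly (sketch of the resulting function, simplified):

  void arch_cpu_idle(void)
  {
  	trace_cpu_idle_rcuidle(1, smp_processor_id());
  	cpu_do_idle();			/* wait for interrupt */
  	local_irq_enable();
  	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
  }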
2015-10-19  arm64: Add page size to the kernel image header  (Ard Biesheuvel; 1 file, -1/+4)
This patch adds the page size to the arm64 kernel image header so that one can infer the PAGESIZE used by the kernel. This will be helpful in diagnosing failures to boot the kernel with a page size not supported by the CPU. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Suzuki K. Poulose <[email protected]> Acked-by: Catalin Marinas <[email protected]> Reviewed-by: Christoffer Dall <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-19  arm64: Check for selected granule support  (Suzuki K. Poulose; 1 file, -2/+15)
Ensure that the selected page size is supported by the CPU(s); if it is not, park the CPU. Cc: Will Deacon <[email protected]> Signed-off-by: Suzuki K. Poulose <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
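Conceptually, the check reads ID_AA64MMFR0_EL1 and parks the CPU when the configured granule is unsupported. A C rendering for illustration only; the real check runs in head.S assembly before the MMU is enabled:

  	u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);

  	/* ID_AA64MMFR0_EL1.TGran4, bits [31:28]: 0x0 means 4K is supported */
  	if (IS_ENABLED(CONFIG_ARM64_4K_PAGES) && ((mmfr0 >> 28) & 0xf) != 0)
  		while (1)
  			wfe();	/* park: CPU cannot use the selected granule */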
2015-10-19  arm64: Handle 4 level page table for swapper  (Suzuki K. Poulose; 1 file, -1/+4)
At the moment, we only support a maximum of a 3-level page table for the swapper. With a 48-bit VA, 64K pages need only 3 levels and 4K pages use section mapping. Add support for a 4-level page table for the swapper, needed by 16K pages. Cc: Will Deacon <[email protected]> Signed-off-by: Suzuki K. Poulose <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-19  arm64: Move swapper pagetable definitions  (Suzuki K. Poulose; 2 files, -29/+9)
Move the kernel pagetable (both swapper and idmap) definitions from the generic asm/page.h to a new file, asm/kernel-pgtable.h. This is mostly a cosmetic change, to clean up asm/page.h and get rid of the arch specific details which are not needed by the generic code. Also renames the symbols to prevent conflicts, e.g., BLOCK_SHIFT => SWAPPER_BLOCK_SHIFT. Cc: Will Deacon <[email protected]> Signed-off-by: Suzuki K. Poulose <[email protected]> Reviewed-by: Ard Biesheuvel <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Acked-by: Mark Rutland <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-16  arm64: debug: Fix typo in debug-monitors.c  (Yang Shi; 1 file, -1/+1)
Fix handers to handlers. Signed-off-by: Yang Shi <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-16  arm64: AArch32 user space PC alignment exception  (Mark Salyzyn; 1 file, -0/+2)
ARMv7 does not have a PC alignment exception. ARMv8 AArch32 user space, however, can produce a PC alignment exception. Add a handler so that we do not dump an unexpected stack trace in the logs. Signed-off-by: Mark Salyzyn <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-14  Merge tag 'efi-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi into core/efi  (Ingo Molnar; 1 file, -14/+5)
Pull v4.4 EFI updates from Matt Fleming:
- Make the EFI System Resource Table (ESRT) driver explicitly non-modular by ripping out the module_* code, since Kconfig doesn't allow it to be built as a module anyway. (Paul Gortmaker)
- Make the x86 efi=debug kernel parameter, which enables EFI debug code and output, generic and usable by arm64. (Leif Lindholm)
- Add support to the x86 EFI boot stub for 64-bit Graphics Output Protocol frame buffer addresses. (Matt Fleming)
- Detect when the UEFI v2.5 EFI_PROPERTIES_TABLE feature is enabled in the firmware and set an efi.flags bit so the kernel knows when it can apply more strict runtime mapping attributes. (Ard Biesheuvel)
- Auto-load the efi-pstore module on EFI systems, just like we currently do for the efivars module. (Ben Hutchings)
- Add the "efi_fake_mem" kernel parameter, which allows the system's EFI memory map to be updated with additional attributes for specific memory ranges. This is useful for testing the kernel code that handles the EFI_MEMORY_MORE_RELIABLE memmap bit even if your firmware doesn't include support. (Taku Izumi)
Note: there is a semantic conflict between the following two commits:
  8a53554e12e9 ("x86/efi: Fix multiple GOP device support")
  ae2ee627dc87 ("efifb: Add support for 64-bit frame buffer addresses")
I fixed up the interaction in the merge commit, changing the type of current_fb_base from u32 to u64.
Signed-off-by: Ingo Molnar <[email protected]>
2015-10-14  Merge branch 'x86/urgent' into core/efi, to pick up a pending EFI fix  (Ingo Molnar; 32 files, -863/+777)
Signed-off-by: Ingo Molnar <[email protected]>
2015-10-13  Merge branch 'linus' into irq/core  (Thomas Gleixner; 5 files, -17/+39)
Bring in upstream updates for patches which depend on them
2015-10-12  arm64: add KASAN support  (Andrey Ryabinin; 6 files, -3/+31)
This patch adds arch specific code for the kernel address sanitizer (see Documentation/kasan.txt). 1/8 of the kernel address space is reserved for shadow memory. There was no big enough hole for this, so virtual addresses for the shadow were stolen from the vmalloc area. At early boot stage the whole shadow region is populated with just one physical page (kasan_zero_page). Later, this page is reused as a readonly zero shadow for some memory that KASan currently doesn't track (vmalloc). After mapping the physical memory, pages for shadow memory are allocated and mapped. Functions like memset/memmove/memcpy do a lot of memory accesses. If a bad pointer is passed to one of these functions, it is important to catch this. Compiler instrumentation cannot do this since these functions are written in assembly, so KASan replaces the memory functions with manually instrumented variants. The original functions are declared as weak symbols so that strong definitions in mm/kasan/kasan.c can replace them. The original functions also have aliases with a '__' prefix in the name, so we can call the non-instrumented variant if needed. Some files are built without KASan instrumentation (e.g. mm/slub.c); there the original mem* functions are replaced (via #define) with the prefixed variants to disable memory access checks for such files. Signed-off-by: Andrey Ryabinin <[email protected]> Tested-by: Linus Walleij <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
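The aliasing pattern described above looks like this from the C side (a sketch of the effect; the kernel actually sets this up in the assembly sources):

  /* uninstrumented implementation, always callable via the '__' name */
  void *__memcpy(void *dest, const void *src, size_t n);

  /*
   * The plain name is only a weak alias, so the strong, instrumented
   * definition in mm/kasan/kasan.c replaces it when KASan is enabled.
   */
  void *memcpy(void *dest, const void *src, size_t n)
  	__attribute__((weak, alias("__memcpy")));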
2015-10-12  arm64/efi: isolate EFI stub from the kernel proper  (Ard Biesheuvel; 4 files, -14/+49)
Since arm64 does not use a builtin decompressor, the EFI stub is built into the kernel proper. So far, this has been working fine, but actually, since the stub is in fact a PE/COFF relocatable binary that is executed at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we should not be seamlessly sharing code with the kernel proper, which is a position dependent executable linked at a high virtual offset. So instead, separate the contents of libstub and its dependencies, by putting them into their own namespace by prefixing all of its symbols with __efistub. This way, we have tight control over what parts of the kernel proper are referenced by the stub. Signed-off-by: Ard Biesheuvel <[email protected]> Reviewed-by: Matt Fleming <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-12  efi/arm64: Clean up efi_get_fdt_params() interface  (Leif Lindholm; 1 file, -1/+1)
As we now have a common debug infrastructure between core and arm64 efi, drop the bit of the interface passing verbose output flags around. Signed-off-by: Leif Lindholm <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Cc: Mark Salter <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Matt Fleming <[email protected]>
2015-10-12  arm64: Use core efi=debug instead of uefi_debug command line parameter  (Leif Lindholm; 1 file, -14/+5)
Now that we have an efi=debug command line option in the core code, use this instead of the arm64-specific uefi_debug option. Signed-off-by: Leif Lindholm <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Cc: Mark Salter <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Matt Fleming <[email protected]>
2015-10-12  arm64: Fix missing #include in hw_breakpoint.c  (Catalin Marinas; 1 file, -0/+1)
A prior commit, which detects the hw breakpoint ABI behaviour based on the target state, missed the asm/compat.h include, so the build fails with !CONFIG_COMPAT. Fixes: 8f48c0629049 ("arm64: hw_breakpoint: use target state to determine ABI behaviour") Signed-off-by: Catalin Marinas <[email protected]>
2015-10-09  arm64: cpufeatures: Check ICC_EL1_SRE.SRE before enabling ARM64_HAS_SYSREG_GIC_CPUIF  (Marc Zyngier; 1 file, -1/+18)
As the firmware (or the hypervisor) may have disabled SRE access, check that SRE can actually be enabled before declaring that we do have that capability. Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2015-10-09  arm64: el2_setup: Make sure ICC_SRE_EL2.SRE sticks before using GICv3 sysregs  (Marc Zyngier; 1 file, -0/+2)
Contrary to what was originally expected, EL3 firmware can (for whatever reason) disable GICv3 system register access. In this case, the kernel explodes very early. Work around this by testing if the SRE bit sticks or not. If it doesn't, abort the GICv3 setup, and pray that the firmware has passed a DT that doesn't contain a GICv3 node. Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2015-10-09  arm64: fix a migrating irq bug when hotplug cpu  (Yang Yingliang; 2 files, -63/+2)
When a CPU is disabled, all of its IRQs are migrated to another CPU. In some cases the new affinity differs and the old affinity needs to be updated, but if irq_set_affinity()'s return value is IRQ_SET_MASK_OK_DONE the old affinity is never updated. Fix this by using irq_do_set_affinity(). Also, migrating interrupts is a core code matter, so use the generic function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c to migrate interrupts. Cc: Jiang Liu <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Will Deacon <[email protected]> Cc: Russell King - ARM Linux <[email protected]> Cc: Hanjun Guo <[email protected]> Acked-by: Marc Zyngier <[email protected]> Signed-off-by: Yang Yingliang <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-07  arm64: perf: add Cortex-A57 support  (Mark Rutland; 1 file, -0/+61)
The Cortex-A57 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-07  arm64: perf: add Cortex-A53 support  (Mark Rutland; 1 file, -2/+62)
The Cortex-A53 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-07  arm64: perf: move to shared arm_pmu framework  (Mark Rutland; 1 file, -861/+90)
Now that the arm_pmu framework has been factored out to drivers/perf we can make use of it for arm64, gaining support for heterogeneous PMUs and unifying the two codebases before they diverge further. The as yet unused PMU name for PMUv3 is changed to armv8_pmuv3, matching the style previously applied to the 32-bit PMUs. Signed-off-by: Mark Rutland <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-07  arm64: hw_breakpoint: use target state to determine ABI behaviour  (Will Deacon; 1 file, -2/+16)
The arm64 hw_breakpoint interface is slightly less flexible than its 32-bit counterpart, thanks to some changes in the architecture rendering unaligned watchpoint addresses obsolete for AArch64. However, in a multi-arch environment (i.e. debugging a 32-bit target with a 64-bit GDB under a 64-bit kernel), we need to provide a feature-compatible interface to GDB in order for debugging to function correctly. This patch adds a new helper, is_compat_bp, to our hw_breakpoint implementation, which changes the interface behaviour based on the architecture of the debug target as opposed to the debugger itself. This allows debugging to function as expected for multi-arch configurations without relying on deprecated architectural behaviours when debugging native applications. Cc: Yao Qi <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
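The helper is roughly the following (a simplified sketch based on the description above; exact field names may differ): decide by the debug target's ABI, falling back to the current task when the event has no specific target.

  static int is_compat_bp(struct perf_event *bp)
  {
  	struct task_struct *tsk = bp->hw.target;

  	/* the target's ABI matters, not the debugger's */
  	return tsk ? is_compat_thread(task_thread_info(tsk))
  		   : is_compat_task();
  }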
2015-10-07  arm64: mm: kill mm_cpumask usage  (Will Deacon; 1 file, -7/+0)
mm_cpumask isn't actually used for anything on arm64, so remove all the code trying to keep it up-to-date. Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-07  arm64: mm: rewrite ASID allocator and MM context-switching code  (Will Deacon; 2 files, -2/+1)
Our current switch_mm implementation suffers from a number of problems:
(1) The ASID allocator relies on IPIs to synchronise the CPUs on a rollover event
(2) Because of (1), we cannot allocate ASIDs with interrupts disabled and therefore make use of a TIF_SWITCH_MM flag to postpone the actual switch to finish_arch_post_lock_switch
(3) We run context switch with a reserved (invalid) TTBR0 value, even though the ASID and pgd are updated atomically
(4) We take a global spinlock (cpu_asid_lock) during context-switch
(5) We use h/w broadcast TLB operations when they are not required (e.g. in flush_context)
This patch addresses these problems by rewriting the ASID algorithm to match the bitmap-based arch/arm/ implementation more closely. This in turn allows us to remove much of the complication surrounding switch_mm, including the ugly thread flag. Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
2015-10-07  arm64: flush: use local TLB and I-cache invalidation  (Will Deacon; 3 files, -4/+4)
There are a number of places where a single CPU is running with a private page-table and we need to perform maintenance on the TLB and I-cache in order to ensure correctness, but do not require the operation to be broadcast to other CPUs. This patch adds local variants of tlb_flush_all and __flush_icache_all to support these use-cases and updates the callers respectively. __local_flush_icache_all also implies an isb, since it is intended to be used synchronously. Reviewed-by: Catalin Marinas <[email protected]> Acked-by: David Daney <[email protected]> Acked-by: Ard Biesheuvel <[email protected]> Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
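For the TLB side, a local variant only needs non-shareable barriers and the non-broadcast TLBI, along the lines of this sketch:

  static inline void local_flush_tlb_all(void)
  {
  	dsb(nshst);		/* complete prior stores, this CPU only */
  	asm("tlbi vmalle1");	/* no 'is' suffix: not broadcast to the inner shareable domain */
  	dsb(nsh);
  	isb();
  }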
2015-10-06  arm64: replace read_lock to rcu lock in call_break_hook  (Yang Shi; 1 file, -10/+11)
BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
in_atomic(): 0, irqs_disabled(): 128, pid: 342, name: perf
1 lock held by perf/342:
 #0: (break_hook_lock){+.+...}, at: [<ffffffc0000851ac>] call_break_hook+0x34/0xd0
irq event stamp: 62224
hardirqs last enabled at (62223): [<ffffffc00010b7bc>] __call_rcu.constprop.59+0x104/0x270
hardirqs last disabled at (62224): [<ffffffc0000fbe20>] vprintk_emit+0x68/0x640
softirqs last enabled at (0): [<ffffffc000097928>] copy_process.part.8+0x428/0x17f8
softirqs last disabled at (0): [< (null)>] (null)
CPU: 0 PID: 342 Comm: perf Not tainted 4.1.6-rt5 #4
Hardware name: linux,dummy-virt (DT)
Call trace:
[<ffffffc000089968>] dump_backtrace+0x0/0x128
[<ffffffc000089ab0>] show_stack+0x20/0x30
[<ffffffc0007030d0>] dump_stack+0x7c/0xa0
[<ffffffc0000c878c>] ___might_sleep+0x174/0x260
[<ffffffc000708ac8>] __rt_spin_lock+0x28/0x40
[<ffffffc000708db0>] rt_read_lock+0x60/0x80
[<ffffffc0000851a8>] call_break_hook+0x30/0xd0
[<ffffffc000085a70>] brk_handler+0x30/0x98
[<ffffffc000082248>] do_debug_exception+0x50/0xb8
Exception stack(0xffffffc00514fe30 to 0xffffffc00514ff50)
fe20: 00000000 00000000 c1594680 0000007f
fe40: ffffffff ffffffff 92063940 0000007f 0550dcd8 ffffffc0 00000000 00000000
fe60: 0514fe70 ffffffc0 000be1f8 ffffffc0 0514feb0 ffffffc0 0008948c ffffffc0
fe80: 00000004 00000000 0514fed0 ffffffc0 ffffffff ffffffff 9282a948 0000007f
fea0: 00000000 00000000 9282b708 0000007f c1592820 0000007f 00083914 ffffffc0
fec0: 00000000 00000000 00000010 00000000 00000064 00000000 00000001 00000000
fee0: 005101e0 00000000 c1594680 0000007f c1594740 0000007f ffffffd8 ffffff80
ff00: 00000000 00000000 00000000 00000000 c1594770 0000007f c1594770 0000007f
ff20: 00665e10 00000000 7f7f7f7f 7f7f7f7f 01010101 01010101 00000000 00000000
ff40: 928e4cc0 0000007f 91ff11e8 0000007f
call_break_hook is called in atomic context (hard IRQs disabled), so replace the sleepable lock with an RCU lock, convert the relevant list operations to their RCU versions, and call synchronize_rcu() in unregister_break_hook(). Also, replace the write lock with a spinlock in {un}register_break_hook. Signed-off-by: Yang Shi <[email protected]> Signed-off-by: Will Deacon <[email protected]>
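The reader side then becomes lockless, roughly (a simplified sketch of the pattern, with list and field names as assumptions):

  static DEFINE_SPINLOCK(break_hook_lock);	/* writers only */
  static LIST_HEAD(break_hook);

  static int call_break_hook(struct pt_regs *regs, unsigned int esr)
  {
  	struct break_hook *hook;
  	int (*fn)(struct pt_regs *regs, unsigned int esr) = NULL;

  	rcu_read_lock();			/* safe even with hard IRQs off */
  	list_for_each_entry_rcu(hook, &break_hook, node)
  		if ((esr & hook->esr_mask) == hook->esr_val)
  			fn = hook->fn;
  	rcu_read_unlock();

  	return fn ? fn(regs, esr) : DBG_HOOK_ERROR;
  }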
2015-10-06  arm64: Don't relocate non-existent initrd  (Mark Rutland; 1 file, -0/+2)
When booting a kernel without an initrd, the kernel reports that it moves -1 bytes worth, having gone through the motions with initrd_start equal to initrd_end:
  Moving initrd from [4080000000-407fffffff] to [9fff49000-9fff48fff]
Prevent this by bailing out early when the initrd size is zero (i.e. we have no initrd), avoiding the confusing message and other associated work. Fixes: 1570f0d7ab425c1e ("arm64: support initrd outside kernel linear map") Cc: Mark Salter <[email protected]> Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Will Deacon <[email protected]>
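A sketch of the early-out (illustrative shape only; the actual guard sits at the top of the relocation path in setup.c):

  static void __init relocate_initrd(void)
  {
  	phys_addr_t orig_start = __virt_to_phys(initrd_start);
  	phys_addr_t orig_end   = __virt_to_phys(initrd_end);

  	/* no initrd: start == end, so there is nothing to move */
  	if (orig_start == orig_end)
  		return;

  	/* ... reserve a new range in the linear map and copy as before ... */
  }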
2015-10-05  arm64: convert patch_lock to raw lock  (Yang Shi; 1 file, -3/+3)
When running the kprobe test on an arm64 rt kernel, it reports the below warning:
root@qemu7:~# modprobe kprobe_example
BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
in_atomic(): 0, irqs_disabled(): 128, pid: 484, name: modprobe
CPU: 0 PID: 484 Comm: modprobe Not tainted 4.1.6-rt5 #2
Hardware name: linux,dummy-virt (DT)
Call trace:
[<ffffffc0000891b8>] dump_backtrace+0x0/0x128
[<ffffffc000089300>] show_stack+0x20/0x30
[<ffffffc00061dae8>] dump_stack+0x1c/0x28
[<ffffffc0000bbad0>] ___might_sleep+0x120/0x198
[<ffffffc0006223e8>] rt_spin_lock+0x28/0x40
[<ffffffc000622b30>] __aarch64_insn_write+0x28/0x78
[<ffffffc000622e48>] aarch64_insn_patch_text_nosync+0x18/0x48
[<ffffffc000622ee8>] aarch64_insn_patch_text_cb+0x70/0xa0
[<ffffffc000622f40>] aarch64_insn_patch_text_sync+0x28/0x48
[<ffffffc0006236e0>] arch_arm_kprobe+0x38/0x48
[<ffffffc00010e6f4>] arm_kprobe+0x34/0x50
[<ffffffc000110374>] register_kprobe+0x4cc/0x5b8
[<ffffffbffc002038>] kprobe_init+0x38/0x7c [kprobe_example]
[<ffffffc000084240>] do_one_initcall+0x90/0x1b0
[<ffffffc00061c498>] do_init_module+0x6c/0x1cc
[<ffffffc0000fd0c0>] load_module+0x17f8/0x1db0
[<ffffffc0000fd8cc>] SyS_finit_module+0xb4/0xc8
Convert patch_lock to a raw lock to avoid this issue. Although the problem was found on an rt kernel, the fix should be applicable to the mainline kernel too. Acked-by: Steven Rostedt <[email protected]> Signed-off-by: Yang Shi <[email protected]> Signed-off-by: Will Deacon <[email protected]>
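The conversion pattern, sketched with a hypothetical helper name: a raw spinlock stays a true spinning lock even on PREEMPT_RT, so it is legal in the atomic patching path.

  static DEFINE_RAW_SPINLOCK(patch_lock);

  static void patch_text_locked(void *addr, u32 insn)
  {
  	unsigned long flags;

  	raw_spin_lock_irqsave(&patch_lock, flags);
  	/* ... map the text page and write insn ... */
  	raw_spin_unlock_irqrestore(&patch_lock, flags);
  }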