path: root/arch/arm64/include/asm
2017-05-09  arm64: ensure extension of smp_store_release value  [Mark Rutland, 1 file, -5/+15]
When an inline assembly operand's type is narrower than the register it is allocated to, the least significant bits of the register (up to the operand type's width) are valid, and any other bits are permitted to contain any arbitrary value. This aligns with the AAPCS64 parameter passing rules. Our __smp_store_release() implementation does not account for this, and implicitly assumes that operands have been zero-extended to the width of the type being stored to. Thus, we may store unknown values to memory when the value type is narrower than the pointer type (e.g. when storing a char to a long). This patch fixes the issue by casting the value operand to the same width as the pointer operand in all cases, which ensures that the value is zero-extended as we expect. We use the same union trickery as __smp_load_acquire and {READ,WRITE}_ONCE() to avoid GCC complaining that pointers are potentially cast to narrower width integers in unreachable paths. A whitespace issue at the top of __smp_store_release() is also corrected. No changes are necessary for __smp_load_acquire(). Load instructions implicitly clear any upper bits of the register, and the compiler will only consider the least significant bits of the register as valid regardless. Fixes: 47933ad41a86 ("arch: Introduce smp_load_acquire(), smp_store_release()") Fixes: 878a84d5a8a1 ("arm64: add missing data types in smp_load_acquire/smp_store_release") Cc: <stable@vger.kernel.org> # 3.14.x- Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
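The resulting pattern looks roughly like the sketch below (cases for 2- and 4-byte stores omitted). This is illustrative only, not the exact mainline macro: the value is stashed in a union and re-read at the width of the store, so the register handed to the STLR* instruction is zero-extended rather than carrying whatever the caller left in its upper bits.

    #define __smp_store_release(p, v)                                   \
    do {                                                                \
            union { typeof(*p) __val; char __c[1]; } __u =              \
                    { .__val = (v) };                                   \
            switch (sizeof(*p)) {                                       \
            case 1:                                                     \
                    /* re-read as u8: compiler zero-extends the load */ \
                    asm volatile ("stlrb %w1, %0"                       \
                                  : "=Q" (*p)                           \
                                  : "r" (*(__u8 *)__u.__c)              \
                                  : "memory");                          \
                    break;                                              \
            case 8:                                                     \
                    asm volatile ("stlr %1, %0"                         \
                                  : "=Q" (*p)                           \
                                  : "r" (*(__u64 *)__u.__c)             \
                                  : "memory");                          \
                    break;                                              \
            }                                                           \
    } while (0)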
2017-05-09  arm64: xchg: hazard against entire exchange variable  [Mark Rutland, 1 file, -1/+1]
The inline assembly in __XCHG_CASE() uses a +Q constraint to hazard against other accesses to the memory location being exchanged. However, the pointer passed to the constraint is a u8 pointer, and thus the hazard only applies to the first byte of the location. GCC can take advantage of this, assuming that other portions of the location are unchanged, as demonstrated with the following test case:

    union u {
            unsigned long l;
            unsigned int i[2];
    };

    unsigned long update_char_hazard(union u *u)
    {
            unsigned int a, b;

            a = u->i[1];
            asm ("str %1, %0" : "+Q" (*(char *)&u->l) : "r" (0UL));
            b = u->i[1];

            return a ^ b;
    }

    unsigned long update_long_hazard(union u *u)
    {
            unsigned int a, b;

            a = u->i[1];
            asm ("str %1, %0" : "+Q" (*(long *)&u->l) : "r" (0UL));
            b = u->i[1];

            return a ^ b;
    }

The linaro 15.08 GCC 5.1.1 toolchain compiles the above as follows when using -O2 or above:

    0000000000000000 <update_char_hazard>:
       0:   d2800001        mov     x1, #0x0    // #0
       4:   f9000001        str     x1, [x0]
       8:   d2800000        mov     x0, #0x0    // #0
       c:   d65f03c0        ret

    0000000000000010 <update_long_hazard>:
      10:   b9400401        ldr     w1, [x0,#4]
      14:   d2800002        mov     x2, #0x0    // #0
      18:   f9000002        str     x2, [x0]
      1c:   b9400400        ldr     w0, [x0,#4]
      20:   4a000020        eor     w0, w1, w0
      24:   d65f03c0        ret

This patch fixes the issue by passing an unsigned long pointer into the +Q constraint, as we do for our cmpxchg code. This may hazard against more than is necessary, but this is better than missing a necessary hazard. Fixes: 305d454aaa29 ("arm64: atomics: implement native {relaxed, acquire, release} atomics") Cc: <stable@vger.kernel.org> # 4.4.x- Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-05-09  arm64: entry: improve data abort handling of tagged pointers  [Kristina Martsenko, 1 file, -0/+9]
When handling a data abort from EL0, we currently zero the top byte of the faulting address, as we assume the address is a TTBR0 address, which may contain a non-zero address tag. However, the address may be a TTBR1 address, in which case we should not zero the top byte. This patch fixes that. The effect is that the full TTBR1 address is passed to the task's signal handler (or printed out in the kernel log). When handling a data abort from EL1, we leave the faulting address intact, as we assume it's either a TTBR1 address or a TTBR0 address with tag 0x00. This is true as far as I'm aware, we don't seem to access a tagged TTBR0 address anywhere in the kernel. Regardless, it's easy to forget about address tags, and code added in the future may not always remember to remove tags from addresses before accessing them. So add tag handling to the EL1 data abort handler as well. This also makes it consistent with the EL0 data abort handler. Fixes: d50240a5f6ce ("arm64: mm: permit use of tagged pointers at EL0") Cc: <stable@vger.kernel.org> # 3.12.x- Reviewed-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
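The logic described above amounts to clearing the tag byte only for TTBR0 (userspace) addresses, which have bit 55 clear; a minimal C sketch is shown below. The helper name untag_fault_address is hypothetical, used here purely for illustration of the check.

    /* Sketch: strip the address tag (bits 63:56) only for TTBR0 addresses. */
    static inline unsigned long untag_fault_address(unsigned long addr)
    {
            if (!(addr & (1UL << 55)))       /* bit 55 clear => TTBR0 address */
                    addr &= ~(0xffUL << 56); /* clear the tag byte */
            return addr;                     /* TTBR1 addresses pass through */
    }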
2017-05-09  arm64: hw_breakpoint: fix watchpoint matching for tagged pointers  [Kristina Martsenko, 1 file, -3/+3]
When we take a watchpoint exception, the address that triggered the watchpoint is found in FAR_EL1. We compare it to the address of each configured watchpoint to see which one was hit. The configured watchpoint addresses are untagged, while the address in FAR_EL1 will have an address tag if the data access was done using a tagged address. The tag needs to be removed to compare the address to the watchpoints. Currently we don't remove it, and as a result can report the wrong watchpoint as being hit (specifically, always either the highest TTBR0 watchpoint or lowest TTBR1 watchpoint). This patch removes the tag. Fixes: d50240a5f6ce ("arm64: mm: permit use of tagged pointers at EL0") Cc: <stable@vger.kernel.org> # 3.12.x- Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-26  arm64: module: split core and init PLT sections  [Ard Biesheuvel, 1 file, -2/+7]
The arm64 module PLT code allocates all PLT entries in a single core section, since the overhead of having a separate init PLT section is not justified by the small number of PLT entries usually required for init code. However, the core and init module regions are allocated independently, and there is a corner case where the core region may be allocated from the VMALLOC region if the dedicated module region is exhausted, but the init region, being much smaller, can still be allocated from the module region. This leads to relocation failures if the distance between those regions exceeds 128 MB. (In fact, this corner case is highly unlikely to occur on arm64, but the issue has been observed on ARM, whose module region is much smaller). So split the core and init PLT regions, and name the latter ".init.plt" so it gets allocated along with (and sufficiently close to) the .init sections that it serves. Also, given that init PLT entries may need to be emitted for branches that target the core module, modify the logic that disregards defined symbols to only disregard symbols that are defined in the same section as the relocated branch instruction. Since there may now be two PLT entries associated with each entry in the symbol table, we can no longer hijack the symbol::st_size fields to record the addresses of PLT entries as we emit them for zero-addend relocations. So instead, perform an explicit comparison to check for duplicate entries. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-24  arm64: Add CNTFRQ_EL0 trap handler  [Marc Zyngier, 1 file, -0/+4]
We now trap accesses to CNTVCT_EL0 when the counter is broken enough to require the kernel to mediate the access. But it turns out that some existing userspace (such as OpenMPI) do probe for the counter frequency, leading to an UNDEF exception as CNTVCT_EL0 and CNTFRQ_EL0 share the same control bit. The fix is to handle the exception the same way we do for CNTVCT_EL0. Fixes: a86bd139f2ae ("arm64: arch_timer: Enable CNTVCT_EL0 trap if workaround is enabled") Reported-by: Hanjun Guo <guohanjun@huawei.com> Tested-by: Hanjun Guo <guohanjun@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
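A trap handler of this kind ends up being a short routine that decodes the destination register from the ESR, supplies the real counter frequency, and skips the trapped instruction. The sketch below is an assumption based on the description above, not the exact mainline handler.

    static void cntfrq_read_handler(unsigned int esr, struct pt_regs *regs)
    {
            int rt = (esr >> 5) & 0x1f;      /* ISS.Rt: destination register */

            regs->regs[rt] = read_sysreg(cntfrq_el0);
            regs->pc += 4;                   /* step over the trapped MRS */
    }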
2017-04-12  Merge branch 'will/for-next/perf' into for-next/core  [Catalin Marinas, 1 file, -0/+2]
* will/for-next/perf:
  arm64: pmuv3: use arm_pmu ACPI framework
  arm64: pmuv3: handle !PMUv3 when probing
  drivers/perf: arm_pmu: add ACPI framework
  arm64: add function to get a cpu's MADT GICC table
  drivers/perf: arm_pmu: split out platform device probe logic
  drivers/perf: arm_pmu: move irq request/free into probe
  drivers/perf: arm_pmu: split cpu-local irq request/free
  drivers/perf: arm_pmu: rename irq request/free functions
  drivers/perf: arm_pmu: handle no platform_device
  drivers/perf: arm_pmu: simplify cpu_pmu_request_irqs()
  drivers/perf: arm_pmu: factor out pmu registration
  drivers/perf: arm_pmu: fold init into alloc
  drivers/perf: arm_pmu: define armpmu_init_fn
  drivers/perf: arm_pmu: remove pointless PMU disabling
  perf: qcom: Add L3 cache PMU driver
  drivers/perf: arm_pmu: split irq request from enable
  drivers/perf: arm_pmu: manage interrupts per-cpu
  drivers/perf: arm_pmu: rework per-cpu allocation
  MAINTAINERS: Add file patterns for perf device tree bindings
2017-04-11  arm64: add function to get a cpu's MADT GICC table  [Mark Rutland, 1 file, -0/+2]
Currently the ACPI parking protocol code needs to parse each CPU's MADT GICC table to extract the mailbox address and so on. Each time we parse a GICC table, we call back to the parking protocol code to parse it. This has been fine so far, but we're about to have more code that needs to extract data from the GICC tables, and adding a callback for each user is going to get unwieldy. Instead, this patch ensures that we stash a copy of each CPU's GICC table at boot time, such that anything needing to parse it can later request it. This will allow for other parsers of GICC, and for simplification to the ACPI parking protocol code. Note that we must store a copy, rather than a pointer, since the core ACPI code temporarily maps/unmaps tables while iterating over them. Since we parse the MADT before we know how many CPUs we have (and hence before we setup the percpu areas), we must use an NR_CPUS sized array. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
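The stash-and-accessor approach described above looks roughly like the sketch below. The accessor name matches the commit subject's intent; the stash helper and its call site are assumptions for illustration.

    /* One copy per possible CPU; the MADT mapping itself is transient. */
    static struct acpi_madt_generic_interrupt cpu_madt_gicc[NR_CPUS];

    /* Called while iterating the MADT: keep a copy of this CPU's GICC entry. */
    static void acpi_stash_madt_gicc(int cpu,
                                     struct acpi_madt_generic_interrupt *gicc)
    {
            cpu_madt_gicc[cpu] = *gicc;
    }

    /* Later users (e.g. the parking protocol code) fetch the stashed copy. */
    struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
    {
            return &cpu_madt_gicc[cpu];
    }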
2017-04-07  Merge tag 'arch-timer-errata-prereq' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core  [Catalin Marinas, 3 files, -1/+6]
Pre-requisites for the arch timer errata workarounds:
- Allow checking of a CPU-local erratum
- Add CNTVCT_EL0 trap handler
- Define Cortex-A73 MIDR
- Allow an erratum to be match for all revisions of a core
- Add capability to advertise Cortex-A73 erratum 858921

* tag 'arch-timer-errata-prereq' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
  arm64: cpu_errata: Add capability to advertise Cortex-A73 erratum 858921
  arm64: cpu_errata: Allow an erratum to be match for all revisions of a core
  arm64: Define Cortex-A73 MIDR
  arm64: Add CNTVCT_EL0 trap handler
  arm64: Allow checking of a CPU-local erratum
2017-04-07  arm64: cpu_errata: Add capability to advertise Cortex-A73 erratum 858921  [Marc Zyngier, 1 file, -1/+2]
In order to work around Cortex-A73 erratum 858921 in a subsequent patch, add the required capability that advertises the erratum. As the configuration option it depends on is not present yet, this has no immediate effect. Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2017-04-07  arm64: Define Cortex-A73 MIDR  [Marc Zyngier, 1 file, -0/+2]
As we're about to introduce a new workaround that is specific to Cortex-A73, let's define the corresponding MIDR. Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2017-04-07  arm64: Add CNTVCT_EL0 trap handler  [Marc Zyngier, 1 file, -0/+2]
Since people seem to make a point in breaking the userspace visible counter, we have no choice but to trap the access. Add the required handler. Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2017-04-05  arm64: kdump: implement machine_crash_shutdown()  [AKASHI Takahiro, 3 files, -2/+45]
Primary kernel calls machine_crash_shutdown() to shut down non-boot cpus and save registers' status in per-cpu ELF notes before starting crash dump kernel. See kernel_kexec(). Even if not all secondary cpus have shut down, we do kdump anyway. As we don't have to make non-boot(crashed) cpus offline (to preserve correct status of cpus at crash dump) before shutting down, this patch also adds a variant of smp_send_stop(). Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: James Morse <james.morse@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05  arm64: hibernate: preserve kdump image around hibernation  [AKASHI Takahiro, 1 file, -0/+10]
Since arch_kexec_protect_crashkres() removes a mapping for crash dump kernel image, the loaded data won't be preserved around hibernation. In this patch, helper functions, crash_prepare_suspend()/ crash_post_resume(), are additionally called before/after hibernation so that the relevant memory segments will be mapped again and preserved just as the others are. In addition, to minimize the size of hibernation image, crash_is_nosave() is added to pfn_is_nosave() in order to recognize only the pages that hold loaded crash dump kernel image as saveable. Hibernation excludes any pages that are marked as Reserved and yet "nosave." Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05  arm64: mm: add set_memory_valid()  [AKASHI Takahiro, 1 file, -0/+1]
This function validates and invalidates PTE entries, and will be utilized in kdump to protect loaded crash dump kernel image. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
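The interface is essentially a page-count based toggle of the valid bit; the prototype below is assumed from the description, and the protect_crash_image() wrapper is a hypothetical usage example of the kind the kdump code would need.

    /* Prototype sketch (assumed from the description above). */
    int set_memory_valid(unsigned long addr, int numpages, int enable);

    /* Hypothetical usage: make the loaded crash kernel image inaccessible
     * after loading, and accessible again just before kexec'ing into it. */
    static void protect_crash_image(unsigned long start, unsigned long size,
                                    bool protect)
    {
            set_memory_valid(start, size >> PAGE_SHIFT, protect ? 0 : 1);
    }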
2017-04-04  Merge branch 'arm64/common-sysreg' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux into for-next/core  [Catalin Marinas, 2 files, -75/+168]
* 'arm64/common-sysreg' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux:
  arm64: sysreg: add Set/Way sys encodings
  arm64: sysreg: add register encodings used by KVM
  arm64: sysreg: add physical timer registers
  arm64: sysreg: subsume GICv3 sysreg definitions
  arm64: sysreg: add performance monitor registers
  arm64: sysreg: add debug system registers
  arm64: sysreg: sort by encoding
2017-04-04  arm64: cpufeature: Make ID reg accessor naming less counterintuitive  [Dave Martin, 3 files, -5/+5]
read_system_reg() can readily be confused with read_sysreg(), whereas these are really quite different in their meaning. This patch attempts to reduce the ambiguity by reserving "sysreg" for the actual system register accessors. read_system_reg() is instead renamed to read_sanitised_ftr_reg(), to make it more obvious that the Linux-defined sanitised feature register cache is being accessed here, not the underlying architectural system registers. cpufeature.c's internal __raw_read_system_reg() function is renamed in line with its actual purpose: a form of read_sysreg() that indexes on (non-compiletime-constant) encoding rather than symbolic register name. Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
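A small sketch of how a caller reads after the rename; the register and feature field here (ID_AA64MMFR1_EL1 / PAN) are chosen only for illustration and are not taken from this commit.

    /* Old spelling: read_system_reg(SYS_ID_AA64MMFR1_EL1) */
    static bool example_cpu_has_pan(void)
    {
            u64 mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);

            return cpuid_feature_extract_unsigned_field(mmfr1,
                                        ID_AA64MMFR1_PAN_SHIFT) != 0;
    }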
2017-03-23  arm64: mm: set the contiguous bit for kernel mappings where appropriate  [Ard Biesheuvel, 1 file, -0/+10]
This is the third attempt at enabling the use of contiguous hints for kernel mappings. The most recent attempt 0bfc445dec9d was reverted after it turned out that updating permission attributes on live contiguous ranges may result in TLB conflicts. So this time, the contiguous hint is not set for .rodata or for the linear alias of .text/.rodata, both of which are mapped read-write initially, and remapped read-only at a later stage. (Note that the latter region could also be unmapped and remapped again with updated permission attributes, given that the region, while live, is only mapped for the convenience of the hibernation code, but that also means the TLB footprint is negligible anyway, so why bother) This enables the following contiguous range sizes for the virtual mapping of the kernel image, and for the linear mapping:

    granule size |  cont PTE  |  cont PMD  |
    -------------+------------+------------+
           4 KB  |      64 KB |      32 MB |
          16 KB  |       2 MB |      1 GB* |
          64 KB  |       2 MB |     16 GB* |

* Only when built for 3 or more levels of translation. This is due to the fact that a 2 level configuration only consists of PGDs and PTEs, and the added complexity of dealing with folded PMDs is not justified considering that 16 GB contiguous ranges are likely to be ignored by the hardware (and 16k/2 levels is a niche configuration) Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23  arm64: mmu: apply strict permissions to .init.text and .init.data  [Ard Biesheuvel, 1 file, -0/+2]
To avoid having mappings that are writable and executable at the same time, split the init region into a .init.text region that is mapped read-only, and a .init.data region that is mapped non-executable. This is possible now that the alternative patching occurs via the linear mapping, and the linear alias of the init region is always mapped writable (but never executable). Since the alternatives descriptions themselves are read-only data, move those into the .init.text region. Reviewed-by: Laura Abbott <labbott@redhat.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23  arm64: alternatives: apply boot time fixups via the linear mapping  [Ard Biesheuvel, 1 file, -0/+1]
One important rule of thumb when designing a secure software system is that memory should never be writable and executable at the same time. We mostly adhere to this rule in the kernel, except at boot time, when regions may be mapped RWX until after we are done applying alternatives or making other one-off changes. For the alternative patching, we can improve the situation by applying the fixups via the linear mapping, which is never mapped with executable permissions. So map the linear alias of .text with RW- permissions initially, and remove the write permissions as soon as alternative patching has completed. Reviewed-by: Laura Abbott <labbott@redhat.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23  arm64: Revert "arm64: kaslr: fix breakage with CONFIG_MODVERSIONS=y"  [Ard Biesheuvel, 1 file, -5/+0]
This reverts commit 9c0e83c371cf4696926c95f9c8c77cd6ea803426, which is no longer needed now that the modversions code plays nice with relocatable PIE kernels. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-22  arm64: struct debug_info: Check CONFIG_HAVE_HW_BREAKPOINT  [Chris Redmon, 1 file, -0/+2]
Check if CONFIG_HAVE_HW_BREAKPOINT is enabled before compiling in extra data required for hardware breakpoints. Compiling out this code when hw breakpoints are disabled saves about 272 bytes per struct task_struct. Signed-off-by: Chris Redmon <credmonster@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
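The shape of the change is roughly the sketch below: the breakpoint bookkeeping only exists when the config option is set, which is where the per-task savings come from. Field names follow the usual arm64 layout but are shown here as an approximation, not a verbatim copy of the header.

    struct debug_info {
    #ifdef CONFIG_HAVE_HW_BREAKPOINT
            /* Have we suspended stepping by a debugger? */
            int                     suspended_step;
            /* Allow breakpoints and watchpoints to be disabled for this thread. */
            int                     bps_disabled;
            int                     wps_disabled;
            /* Hardware breakpoints pinned to this task. */
            struct perf_event       *hbp_break[ARM_MAX_BRP];
            struct perf_event       *hbp_watch[ARM_MAX_WRP];
    #endif
    };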
2017-03-22  arm64: define BUG() instruction without CONFIG_BUG  [Arnd Bergmann, 1 file, -14/+19]
This mirrors commit e9c38ceba8d9 ("ARM: 8455/1: define __BUG as asm(BUG_INSTR) without CONFIG_BUG") to make the behavior of arm64 consistent with arm and x86, and avoids lots of warnings in randconfig builds, such as:

    kernel/seccomp.c: In function '__seccomp_filter':
    kernel/seccomp.c:666:1: error: no return statement in function returning non-void [-Werror=return-type]

Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
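The idea, sketched below under the assumption of a BRK-based BUG(), is that the trapping instruction is always emitted even without CONFIG_BUG, so the compiler's reachability analysis sees BUG() as not returning (as it does on arm and x86); this is an illustration, not the exact arm64 macro.

    /* Sketch: always emit the trap, even when CONFIG_BUG is disabled. */
    #define BUG() do {                                      \
            asm volatile ("brk #0x800");  /* BUG_BRK_IMM */ \
            unreachable();                                  \
    } while (0)

    #define HAVE_ARCH_BUG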
2017-03-20  arm64: v8.3: Support for weaker release consistency  [Suzuki K Poulose, 1 file, -0/+1]
ARMv8.3 adds new instructions to support Release Consistent processor consistent (RCpc) model, which is weaker than the RCsc model. Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20  arm64: v8.3: Support for complex number instructions  [Suzuki K Poulose, 1 file, -0/+1]
ARM v8.3 adds support for new instructions to aid floating-point multiplication and addition of complex numbers. Expose the support via HWCAP and MRS emulation Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20  arm64: v8.3: Support for Javascript conversion instruction  [Suzuki K Poulose, 1 file, -0/+3]
ARMv8.3 adds support for a new instruction to perform conversion from double precision floating point to integer to match the architected behaviour of the equivalent Javascript conversion. Expose the availability via HWCAP and MRS emulation. Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20  arm64: KVM: Add support for VPIPT I-caches  [Will Deacon, 1 file, -4/+5]
A VPIPT I-cache has two main properties: 1. Lines allocated into the cache are tagged by VMID and a lookup can only hit lines that were allocated with the current VMID. 2. I-cache invalidation from EL1/0 only invalidates lines that match the current VMID of the CPU doing the invalidation. This can cause issues with non-VHE configurations, where the host runs at EL1 and wants to invalidate I-cache entries for a guest running with a different VMID. VHE is not affected, because the host runs at EL2 and I-cache invalidation applies as expected. This patch solves the problem by invalidating the I-cache when unmapping a page at stage 2 on a system with a VPIPT I-cache but not running with VHE enabled. Hopefully this is an obscure enough configuration that the overhead isn't anything to worry about, although it does mean that the by-range I-cache invalidation currently performed when mapping at stage 2 can be elided on such systems, because the I-cache will be clean for the guest VMID following a rollover event. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20  arm64: cache: Identify VPIPT I-caches  [Will Deacon, 1 file, -0/+7]
Add support for detecting VPIPT I-caches, as introduced by ARMv8.2. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20  arm64: cache: Merge cachetype.h into cache.h  [Will Deacon, 3 files, -57/+31]
cachetype.h and cache.h are small and both obviously related to caches. Merge them together to reduce clutter. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20  arm64: cache: Remove support for ASID-tagged VIVT I-caches  [Will Deacon, 2 files, -9/+1]
As a recent change to ARMv8, ASID-tagged VIVT I-caches are removed retrospectively from the architecture. Consequently, we don't need to support them in Linux either. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20  arm64: cacheinfo: Remove CCSIDR-based cache information probing  [Will Deacon, 1 file, -24/+0]
The CCSIDR_EL1.{NumSets,Associativity,LineSize} fields are only for use in conjunction with set/way cache maintenance and are not guaranteed to represent the actual microarchitectural features of a design. The architecture explicitly states:

    | You cannot make any inference about the actual sizes of caches based
    | on these parameters.

Furthermore, CCSIDR_EL1.{WT,WB,RA,WA} have been removed retrospectively from ARMv8 and are now considered to be UNKNOWN. Since the kernel doesn't make use of set/way cache maintenance and it is not possible for userspace to execute these instructions, we have no need for the CCSIDR information in the kernel. This patch removes the accessors, along with the related portions of the cacheinfo support, which should instead be reintroduced when firmware has a mechanism to provide us with reliable information. Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20  arm64: cpuinfo: remove I-cache VIPT aliasing detection  [Will Deacon, 1 file, -13/+0]
The CCSIDR_EL1.{NumSets,Associativity,LineSize} fields are only for use in conjunction with set/way cache maintenance and are not guaranteed to represent the actual microarchitectural features of a design. The architecture explicitly states:

    | You cannot make any inference about the actual sizes of caches based
    | on these parameters.

We currently use these fields to determine whether or not the I-cache is aliasing, which is bogus and known to break on some platforms. Instead, assume the I-cache is always aliasing if it advertises a VIPT policy. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-16  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  [Linus Torvalds, 1 file, -1/+1]
Pull arm64 fixes/cleanups from Catalin Marinas:
"In Will's absence I'm sending the arm64 fixes he queued for 4.11-rc3:

 - fix arm64 kernel boot warning when DEBUG_VIRTUAL and KASAN are enabled
 - enable KEYS_COMPAT for keyctl compat support
 - use cpus_have_const_cap() for system_uses_ttbr0_pan() (slight performance improvement)
 - update kerneldoc for cpu_suspend() rename
 - remove the arm64-specific kprobe_exceptions_notify (weak generic variant defined)"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: kernel: Update kerneldoc for cpu_suspend() rename
  arm64: use const cap for system_uses_ttbr0_pan()
  arm64: support keyctl() system call in 32-bit mode
  arm64: kasan: avoid bad virt_to_pfn()
  arm64: kprobes: remove kprobe_exceptions_notify
2017-03-11  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  [Linus Torvalds, 1 file, -2/+1]
Pull KVM fixes from Radim Krčmář:
"ARM updates from Marc Zyngier:

 - vgic updates:
   - Honour disabling the ITS
   - Don't deadlock when deactivating own interrupts via MMIO
   - Correctly expose the lack of IRQ/FIQ bypass on GICv3

 - I/O virtualization:
   - Make KVM_CAP_NR_MEMSLOTS big enough for large guests with many PCIe devices

 - General bug fixes:
   - Gracefully handle exceptions generated with syndromes that the host doesn't understand
   - Properly invalidate TLBs on VHE systems

x86:
 - improvements in emulation of VMCLEAR, VMX MSR bitmaps, and VCPU reset"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: nVMX: do not warn when MSR bitmap address is not backed
  KVM: arm64: Increase number of user memslots to 512
  KVM: arm/arm64: Remove KVM_PRIVATE_MEM_SLOTS definition that are unused
  KVM: arm/arm64: Enable KVM_CAP_NR_MEMSLOTS on arm/arm64
  KVM: Add documentation for KVM_CAP_NR_MEMSLOTS
  KVM: arm/arm64: VGIC: Fix command handling while ITS being disabled
  arm64: KVM: Survive unknown traps from guests
  arm: KVM: Survive unknown traps from guests
  KVM: arm/arm64: Let vcpu thread modify its own active state
  KVM: nVMX: reset nested_run_pending if the vCPU is going to be reset
  kvm: nVMX: VMCLEAR should not cause the vCPU to shut down
  KVM: arm/arm64: vgic-v3: Don't pretend to support IRQ/FIQ bypass
  arm64: KVM: VHE: Clear HCR_TGE when invalidating guest TLBs
2017-03-10  arm64: use const cap for system_uses_ttbr0_pan()  [Mark Rutland, 1 file, -1/+1]
Since commit 4b65a5db362783ab ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1"), system_uses_ttbr0_pan() has used cpus_have_cap() to determine whether PAN is present. Since commit a4023f682739439b ("arm64: Add hypervisor safe helper for checking constant capabilities"), which was introduced around the same time, cpus_have_cap() doesn't try to use a static key, and must always perform a load, test, and conditional branch (likely a tbnz for the latter two). Elsewhere, we moved to using cpus_have_const_cap(), which can use a static key (i.e. a non-conditional branch), which is patched at runtime when the feature is detected. This patch makes system_uses_ttbr0_pan() use cpus_have_const_cap(). The static key is likely a win for hot-paths like the uaccess primitives, and this makes our usage consistent regardless. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
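After the change the helper reads roughly as below (sketch based on the description above): the capability check compiles down to a branch that is patched once at boot, rather than a load plus test on every call.

    static inline bool system_uses_ttbr0_pan(void)
    {
            return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
                   !cpus_have_const_cap(ARM64_HAS_PAN);
    }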
2017-03-09  arch, mm: convert all architectures to use 5level-fixup.h  [Kirill A. Shutemov, 1 file, -0/+4]
If an architecture uses 4level-fixup.h we don't need to do anything as it includes 5level-fixup.h. If an architecture uses pgtable-nop*d.h, define __ARCH_USE_5LEVEL_HACK before inclusion of the header. It makes asm-generic code use 5level-fixup.h. If an architecture has 4-level paging or folds levels on its own, include 5level-fixup.h directly. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Acked-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
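For an architecture like arm64 that folds levels depending on CONFIG_PGTABLE_LEVELS, the pattern described above looks roughly like the sketch below (placed in the page-table types header); this is an assumed illustration of the rule, not a copy of the actual diff.

    /* Define the hack before pulling in the nop*d helpers, so that
     * asm-generic switches over to 5level-fixup.h. */
    #if CONFIG_PGTABLE_LEVELS == 2
    #define __ARCH_USE_5LEVEL_HACK
    #include <asm-generic/pgtable-nopmd.h>
    #elif CONFIG_PGTABLE_LEVELS == 3
    #define __ARCH_USE_5LEVEL_HACK
    #include <asm-generic/pgtable-nopud.h>
    #endif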
2017-03-09  arm64: sysreg: add Set/Way sys encodings  [Mark Rutland, 1 file, -0/+6]
Cache maintenance ops fall in the SYS instruction class, and KVM needs to handle them. So as to keep all SYS encodings in one place, this patch adds them to sysreg.h. The encodings were taken from ARM DDI 0487A.k_iss10775, Table C5-2. To make it clear that these are instructions rather than registers, and to allow us to change the way these are handled in future, a new sys_insn() alias for sys_reg() is added and used for these new definitions. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com>
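The alias and the resulting definitions look roughly like the sketch below; the encodings shown are the usual DC ISW/CSW/CISW values from the ARM ARM, reproduced here for illustration rather than quoted from the patch.

    /* sys_insn() is a readability alias for sys_reg() when the encoding
     * names an instruction rather than a register. */
    #define sys_insn        sys_reg

    /* Set/Way D-cache maintenance encodings: op0, op1, CRn, CRm, op2 */
    #define SYS_DC_ISW      sys_insn(1, 0, 7, 6, 2)
    #define SYS_DC_CSW      sys_insn(1, 0, 7, 10, 2)
    #define SYS_DC_CISW     sys_insn(1, 0, 7, 14, 2)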
2017-03-09  arm64: sysreg: add register encodings used by KVM  [Mark Rutland, 1 file, -0/+37]
This patch adds sysreg definitions for registers which KVM needs the encodings for, which are not currently described in <asm/sysregs.h>. Subsequent patches will make use of these definitions. The encodings were taken from ARM DDI 0487A.k_iss10775, Table C5-6, but this is not an exhaustive addition. Additions are only made for registers used today by KVM. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com>
2017-03-09  arm64: sysreg: add physical timer registers  [Mark Rutland, 1 file, -0/+4]
This patch adds sysreg definitions for system registers used to control the architected physical timer. Subsequent patches will make use of these definitions. The encodings were taken from ARM DDI 0487A.k_iss10775, Table C5-6. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com>
2017-03-09  arm64: sysreg: subsume GICv3 sysreg definitions  [Mark Rutland, 2 files, -68/+65]
Unlike most sysreg definitions, the GICv3 definitions don't have a SYS_ prefix, and they don't live in <asm/sysreg.h>. Additionally, some definitions are duplicated elsewhere (e.g. in the KVM save/restore code). For consistency, and to make it possible to share a common definition for these sysregs, this patch moves the definitions to <asm/sysreg.h>, adding a SYS_ prefix, and sorting the registers per their encoding. Existing users of the definitions are fixed up so that this change is not problematic. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com>
2017-03-09  arm64: sysreg: add performance monitor registers  [Mark Rutland, 1 file, -0/+25]
This patch adds sysreg definitions for system registers which are part of the performance monitors extension. Subsequent patches will make use of these definitions. The set of registers is described in ARM DDI 0487A.k_iss10775, Table D5-9. The encodings were taken from Table C5-6 in the same document. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com>
2017-03-09  arm64: sysreg: add debug system registers  [Mark Rutland, 1 file, -0/+23]
This patch adds sysreg definitions for system registers in the debug and trace system register encoding space. Subsequent patches will make use of these definitions. The encodings were taken from ARM DDI 0487A.k_iss10775, Table C5-5. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com>
2017-03-09  arm64: sysreg: sort by encoding  [Mark Rutland, 1 file, -8/+9]
Our sysreg definitions are largely (but not entirely) in ascending order of op0:op1:CRn:CRm:op2. It would be preferable to enforce this sort, as this makes it easier to verify the set of encodings against documentation, and provides an obvious location for each addition in future, minimising conflicts. This patch enforces this order, by moving the few items that break it. There should be no functional change. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com>
2017-03-09  KVM: arm64: Increase number of user memslots to 512  [Linu Cherian, 1 file, -1/+1]
Having only 32 memslots is a real constraint for the maximum number of PCI devices that can be assigned to a single guest. Assuming each PCI device/virtual function having two memory BAR regions, we could assign only 15 devices/virtual functions to a guest. Hence increase KVM_USER_MEM_SLOTS to 512 as done in other archs like powerpc. Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Linu Cherian <linu.cherian@cavium.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2017-03-09  KVM: arm/arm64: Remove KVM_PRIVATE_MEM_SLOTS definition that are unused  [Linu Cherian, 1 file, -1/+0]
arm/arm64 architecture doesn't use private memslots, hence removing the KVM_PRIVATE_MEM_SLOTS macro definition. Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Linu Cherian <linu.cherian@cavium.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2017-03-02  sched/headers: Prepare to remove the <linux/mm_types.h> dependency from <linux/sched.h>  [Ingo Molnar, 1 file, -0/+1]
Update code that relied on sched.h including various MM types for them. This will allow us to remove the <linux/mm_types.h> include from <linux/sched.h>. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02  sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task_stack.h>  [Ingo Molnar, 1 file, -0/+1]
We are going to split <linux/sched/task_stack.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/task_stack.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02  sched/headers: Prepare for new header dependencies before moving code to <linux/sched/hotplug.h>  [Ingo Molnar, 1 file, -0/+1]
We are going to split <linux/sched/hotplug.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/hotplug.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-01  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  [Linus Torvalds, 1 file, -4/+10]
Pull arm64 fixes from Will Deacon:
"The main fix here addresses a kernel panic triggered on Qualcomm QDF2400 due to incorrect register usage in an erratum workaround introduced during the merge window.

 Summary:
 - Fix kernel panic on specific Qualcomm platform due to broken erratum workaround
 - Revert contiguous bit support due to TLB conflict aborts in simulation
 - Don't treat all CPU ID register fields as 4-bit quantities"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64/cpufeature: check correct field width when updating sys_val
  Revert "arm64: mm: set the contiguous bit for kernel mappings where appropriate"
  arm64: Avoid clobbering mm in erratum workaround on QDF2400
2017-02-27  kprobes: move kprobe declarations to asm-generic/kprobes.h  [Luis R. Rodriguez, 1 file, -0/+4]
Often all that is needed is these small helpers, instead of compiler.h or a full kprobes.h. This is important for asm helpers, in fact even some asm/kprobes.h make use of these helpers... instead just keep a generic asm file with helpers useful for asm code with as little clutter as possible. Likewise we need now to also address what to do about this file for both when architectures have CONFIG_HAVE_KPROBES, and when they do not. Then for when architectures have CONFIG_HAVE_KPROBES but have disabled CONFIG_KPROBES. Right now most asm/kprobes.h do not have guards against CONFIG_KPROBES, this means most architecture code cannot include asm/kprobes.h safely. Correct this and add guards for architectures missing them. Additionally provide architectures that do not have kprobes support with the default asm-generic solution. This lets us force asm/kprobes.h on the header include/linux/kprobes.h always, but most importantly we can now safely include just asm/kprobes.h on architecture code without bringing the full kitchen sink of header files. Two architectures already provided a guard against CONFIG_KPROBES on their kprobes.h: sh, arch. The rest of the architectures needed guards added. We avoid including any not-needed headers on asm/kprobes.h unless kprobes have been enabled. In a subsequent atomic change we can try now to remove compiler.h from include/linux/kprobes.h. During this sweep I've also identified a few architectures defining a common macro needed for both kprobes and ftrace, that of the definition of the breakpoint instruction. Some refer to this as BREAKPOINT_INSTRUCTION. This must be kept outside of the #ifdef CONFIG_KPROBES guard. [mcgrof@kernel.org: fix arm64 build] Link: http://lkml.kernel.org/r/CAB=NE6X1WMByuARS4mZ1g9+W=LuVBnMDnh_5zyN0CLADaVh=Jw@mail.gmail.com [sfr@canb.auug.org.au: fixup for kprobes declarations moving] Link: http://lkml.kernel.org/r/20170214165933.13ebd4f4@canb.auug.org.au Link: http://lkml.kernel.org/r/20170203233139.32682-1-mcgrof@kernel.org Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
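The guard pattern described above looks along the lines of the sketch below (an approximation of an asm-generic kprobes header, not a verbatim copy): helper annotations are only meaningful when CONFIG_KPROBES is enabled, and collapse to no-ops otherwise so the header is always safe to include.

    #ifndef _ASM_GENERIC_KPROBES_H
    #define _ASM_GENERIC_KPROBES_H

    #if defined(__KERNEL__) || defined(__KPROBES__)
    #ifdef CONFIG_KPROBES
    /* helpers only needed when kprobes are enabled */
    # define __kprobes        __attribute__((__section__(".kprobes.text")))
    # define nokprobe_inline  __always_inline
    #else
    # define __kprobes
    # define nokprobe_inline  inline
    #endif
    #endif /* defined(__KERNEL__) || defined(__KPROBES__) */

    #endif /* _ASM_GENERIC_KPROBES_H */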