path: root/arch/arm/include/asm/assembler.h
Age  Commit message  (Author; files changed, lines -/+)
2020-10-28  ARM: kernel: use relative references for UP/SMP alternatives  (Ard Biesheuvel; 1 file, -2/+2)
Currently, the .alt.smp.init section contains the virtual addresses of the patch sites. Since patching may occur both before and after switching into virtual mode, this requires some manual handling of the address when applying the UP alternative. Let's simplify this by using relative offsets in the table entries: this allows us to simply add each entry's address to its contents, regardless of whether we are running in virtual mode or not. Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
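For illustration only (a sketch of the scheme described above, not the exact in-tree ALT_SMP/ALT_UP macros), a table entry can record the patch site as an offset from the entry itself, so the fixup only has to add the entry's own address to its contents:

	9998:	dmb	ish			@ SMP version of the instruction
		.pushsection ".alt.smp.init", "a"
		.long	9998b - .		@ patch site stored as an offset from this entry
		nop				@ UP replacement
		.popsection
		@ The fixup computes: patch_site = &entry + *entry, which is correct
		@ whether it runs before or after the switch to virtual addressing.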
2020-10-28  ARM: assembler: introduce adr_l, ldr_l and str_l macros  (Ard Biesheuvel; 1 file, -0/+84)
Like arm64, ARM supports position independent code sequences that produce symbol references with a greater reach than the ordinary adr/ldr instructions. Since on ARM the adrl pseudo-instruction is only supported in ARM mode (and not at all when using Clang), having an adr_l macro like we do on arm64 is useful, and increases symmetry as well.

Currently, we use open coded instruction sequences involving literals and arithmetic operations. Instead, we can use movw/movt pairs on v7 CPUs, circumventing the D-cache entirely. E.g., on v7+ CPUs, we can emit a PC-relative reference as follows:

	movw	<reg>, #:lower16:<sym> - (1f + 8)
	movt	<reg>, #:upper16:<sym> - (1f + 8)
1:	add	<reg>, <reg>, pc

For older CPUs, we can emit the literal into a subsection, allowing it to be emitted out of line while retaining the ability to perform arithmetic on label offsets. E.g., on pre-v7 CPUs, we can emit a PC-relative reference as follows:

	ldr	<reg>, 2f
1:	add	<reg>, <reg>, pc
	.subsection 1
2:	.long	<sym> - (1b + 8)
	.previous

This is allowed by the assembler because, unlike ordinary sections, subsections are combined into a single section in the object file, and so the label references are not true cross-section references that are visible as relocations. (Subsections have been available in binutils since 2004 at least, so they should not cause any issues with older toolchains.)

So use the above to implement the macros mov_l, adr_l, ldr_l and str_l, all of which will use movw/movt pairs on v7 and later CPUs, and use PC-relative literals otherwise.

Reviewed-by: Nicolas Pitre <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
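A usage sketch of the resulting macros ("some_symbol" is a placeholder; operand forms inferred from the description above, so treat them as illustrative):

	adr_l	r0, some_symbol		@ r0 = &some_symbol, position independent
	ldr_l	r1, some_symbol		@ r1 = some_symbol
	mov_l	r2, some_symbol		@ r2 = absolute address of some_symbol (not PC-relative)
	str_l	r1, some_symbol, r3	@ some_symbol = r1, r3 used as a scratch register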
2020-06-01  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds; 1 file, -2/+1)
Pull ARM updates from Russell King:

 - remove a now unnecessary usage of the KERNEL_DS for sys_oabi_epoll_ctl()
 - update my email address in a number of drivers
 - decompressor EFI updates from Ard Biesheuvel
 - module unwind section handling updates
 - sparsemem Kconfig cleanups
 - make act_mm macro respect THREAD_SIZE

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8980/1: Allow either FLATMEM or SPARSEMEM on the multiplatform build
  ARM: 8979/1: Remove redundant ARCH_SPARSEMEM_DEFAULT setting
  ARM: 8978/1: mm: make act_mm() respect THREAD_SIZE
  ARM: decompressor: run decompressor in place if loaded via UEFI
  ARM: decompressor: move GOT into .data for EFI enabled builds
  ARM: decompressor: defer loading of the contents of the LC0 structure
  ARM: decompressor: split off _edata and stack base into separate object
  ARM: decompressor: move headroom variable out of LC0
  ARM: 8976/1: module: allow arch overrides for .init section names
  ARM: 8975/1: module: fix handling of unwind init sections
  ARM: 8974/1: use SPARSMEM_STATIC when SPARSEMEM is enabled
  ARM: 8971/1: replace the sole use of a symbol with its definition
  ARM: 8969/1: decompressor: simplify libfdt builds
  Update rmk's email address in various drivers
  ARM: compat: remove KERNEL_DS usage in sys_oabi_epoll_ctl()
2020-05-03  ARM: uaccess: consolidate uaccess asm to asm/uaccess-asm.h  (Russell King; 1 file, -74/+1)
Consolidate the user access assembly code to asm/uaccess-asm.h. This moves the csdb, check_uaccess, uaccess_mask_range_ptr, uaccess_enable, uaccess_disable, uaccess_save, uaccess_restore macros, and creates two new ones for exception entry and exit - uaccess_entry and uaccess_exit. This makes the uaccess_save and uaccess_restore macros private to asm/uaccess-asm.h. Signed-off-by: Russell King <[email protected]>
2020-04-29  ARM: 8971/1: replace the sole use of a symbol with its definition  (Jian Cai; 1 file, -2/+1)
The ALT_UP_B macro sets the symbol up_b_offset via .equ to an expression involving another symbol. The macro gets expanded twice when arch/arm/kernel/sleep.S is assembled, creating a scenario where up_b_offset is set to another expression involving symbols while its current value is already based on symbols. LLVM's integrated assembler does not allow such cases, and the binutils documentation says "Values that are based on expressions involving other symbols are allowed, but some targets may restrict this to only being done once per assembly", so it may be better to avoid such cases as it is not clearly stated which targets should support or disallow them.

The fix in this case is simple, as up_b_offset has only one use, so we can replace the use with the definition and get rid of up_b_offset.

Link: https://github.com/ClangBuiltLinux/linux/issues/920

Reviewed-by: Stefan Agner <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Signed-off-by: Jian Cai <[email protected]> Signed-off-by: Russell King <[email protected]>
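The shape of the fix, sketched from the description (the in-tree macro may differ in detail): fold the offset expression into the branch instead of naming it with .equ, so the macro can be expanded any number of times:

/* Sketch: no redefinable up_b_offset symbol; branch on the expression directly. */
#define ALT_UP_B(label)					\
	W(b)	. + (label - 9998b)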
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  (Thomas Gleixner; 1 file, -4/+1)
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-02-26  ARM: 8843/1: use unified assembler in headers  (Stefan Agner; 1 file, -6/+6)
Use unified assembler syntax (UAL) in headers. Divided syntax is considered deprecated. This will also allow to build the kernel using LLVM's integrated assembler. Signed-off-by: Stefan Agner <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Russell King <[email protected]>
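For reference, a generic sketch of the kind of change this implies (illustrative examples, not the exact lines touched): in UAL the condition code follows the size suffix rather than preceding it:

	.syntax	divided
	ldrneb	r0, [r1]		@ deprecated: condition, then size
	strcsh	r2, [r3]
	.syntax	unified
	ldrbne	r0, [r1]		@ UAL: size, then condition
	strhcs	r2, [r3]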
2018-11-12  ARM: 8812/1: Optimise copy_{from/to}_user for !CPU_USE_DOMAINS  (Vincent Whitchurch; 1 file, -2/+4)
ARMv6+ processors do not use CONFIG_CPU_USE_DOMAINS and use privileged ldr/str instructions in copy_{from/to}_user. They are currently unnecessarily using single ldr/str instructions and can use ldm/stm instructions instead like memcpy does (but with appropriate fixup tables).

This speeds up a "dd if=foo of=bar bs=32k" on a tmpfs filesystem by about 4% on my Cortex-A9.

before:134217728 bytes (128.0MB) copied, 0.543848 seconds, 235.4MB/s
before:134217728 bytes (128.0MB) copied, 0.538610 seconds, 237.6MB/s
before:134217728 bytes (128.0MB) copied, 0.544356 seconds, 235.1MB/s
before:134217728 bytes (128.0MB) copied, 0.544364 seconds, 235.1MB/s
before:134217728 bytes (128.0MB) copied, 0.537130 seconds, 238.3MB/s
before:134217728 bytes (128.0MB) copied, 0.533443 seconds, 240.0MB/s
before:134217728 bytes (128.0MB) copied, 0.545691 seconds, 234.6MB/s
before:134217728 bytes (128.0MB) copied, 0.534695 seconds, 239.4MB/s
before:134217728 bytes (128.0MB) copied, 0.540561 seconds, 236.8MB/s
before:134217728 bytes (128.0MB) copied, 0.541025 seconds, 236.6MB/s
after:134217728 bytes (128.0MB) copied, 0.520445 seconds, 245.9MB/s
after:134217728 bytes (128.0MB) copied, 0.527846 seconds, 242.5MB/s
after:134217728 bytes (128.0MB) copied, 0.519510 seconds, 246.4MB/s
after:134217728 bytes (128.0MB) copied, 0.527231 seconds, 242.8MB/s
after:134217728 bytes (128.0MB) copied, 0.525030 seconds, 243.8MB/s
after:134217728 bytes (128.0MB) copied, 0.524236 seconds, 244.2MB/s
after:134217728 bytes (128.0MB) copied, 0.523659 seconds, 244.4MB/s
after:134217728 bytes (128.0MB) copied, 0.525018 seconds, 243.8MB/s
after:134217728 bytes (128.0MB) copied, 0.519249 seconds, 246.5MB/s
after:134217728 bytes (128.0MB) copied, 0.518527 seconds, 246.9MB/s

Reviewed-by: Nicolas Pitre <[email protected]> Signed-off-by: Vincent Whitchurch <[email protected]> Signed-off-by: Russell King <[email protected]>
2018-10-10  Merge branches 'fixes', 'misc' and 'spectre' into for-next  (Russell King; 1 file, -0/+11)
2018-10-05  ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitization  (Julien Thierry; 1 file, -0/+11)
Introduce C and asm helpers to sanitize user address, taking the address range they target into account. Use asm helper for existing sanitization in __copy_from_user(). Signed-off-by: Julien Thierry <[email protected]> Signed-off-by: Russell King <[email protected]>
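A hedged sketch of what such an asm helper can look like (macro name hypothetical, close in spirit to the kernel's uaccess_mask_range_ptr but not copied from it): force the address to NULL, without branching, whenever addr + size would overrun the limit, and finish with a speculation barrier:

	.macro	mask_range_sketch, addr:req, size:req, limit:req, tmp:req
	sub	\tmp, \limit, #1	@ tmp = limit - 1
	subs	\tmp, \tmp, \addr	@ tmp = (limit - 1) - addr
	addhs	\tmp, \tmp, #1		@ no borrow: tmp = limit - addr
	subshs	\tmp, \tmp, \size	@ tmp = limit - (addr + size)
	movlo	\addr, #0		@ overrun, or addr already past limit: force NULL
	csdb				@ do not speculate past the check
	.endm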
2018-08-13  Merge branches 'fixes', 'misc' and 'spectre' into for-linus  (Russell King; 1 file, -0/+4)
Conflicts: arch/arm/include/asm/uaccess.h Signed-off-by: Russell King <[email protected]>
2018-08-02  ARM: spectre-v1: mitigate user accesses  (Russell King; 1 file, -0/+4)
Spectre variant 1 attacks are about this sequence of pseudo-code:

	index = load(user-manipulated pointer);
	access(base + index * stride);

In order for the cache side-channel to work, the access() must be made to memory which userspace can detect whether cache lines have been loaded. On 32-bit ARM, this must be either user accessible memory, or a kernel mapping of that same user accessible memory.

The problem occurs when the load() speculatively loads privileged data, and the subsequent access() is made to user accessible memory.

Any load() which makes use of a user-manipulated pointer is a potential problem if the data it has loaded is used in a subsequent access. This also applies for the access() if the data loaded by that access is used by a subsequent access.

Harden the get_user() accessors against Spectre attacks by forcing out of bounds addresses to a NULL pointer. This prevents get_user() being used as the load() step above. As a side effect, put_user() will also be affected even though it isn't implicated.

Also harden copy_from_user() by redoing the bounds check within the arm_copy_from_user() code, and NULLing the pointer if out of bounds.

Acked-by: Mark Rutland <[email protected]> Signed-off-by: Russell King <[email protected]>
2018-06-05  Merge branches 'fixes', 'misc' and 'spectre' into for-linus  (Russell King; 1 file, -0/+8)
2018-05-31  ARM: spectre-v1: add speculation barrier (csdb) macros  (Russell King; 1 file, -0/+8)
Add assembly and C macros for the new CSDB instruction. Signed-off-by: Russell King <[email protected]> Acked-by: Mark Rutland <[email protected]> Boot-tested-by: Tony Lindgren <[email protected]> Reviewed-by: Tony Lindgren <[email protected]>
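A sketch of how the asm-side macro can be defined when the assembler has no csdb mnemonic (the in-tree definition may differ, e.g. in how it caters for older binutils); the raw values are the architecturally defined CSDB hint encodings:

	.macro	csdb
#ifdef CONFIG_THUMB2_KERNEL
	.inst.w	0xf3af8014		@ CSDB, T32 encoding
#else
	.inst	0xe320f014		@ CSDB, A32 encoding
#endif
	.endm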
2018-05-19  ARM: 8772/1: kprobes: Prohibit kprobes on get_user functions  (Masami Hiramatsu; 1 file, -0/+10)
Since do_undefinstr() uses get_user to fetch the undefined instruction, it can be called before kprobes has processed its recursion check. This can cause an infinite recursive exception. Prohibit probing on the get_user functions.

Fixes: 24ba613c9d6c ("ARM kprobes: core code") Signed-off-by: Masami Hiramatsu <[email protected]> Cc: [email protected] Signed-off-by: Russell King <[email protected]>
2017-11-26  ARM: BUG if jumping to usermode address in kernel mode  (Russell King; 1 file, -0/+18)
Detect if we are returning to usermode via the normal kernel exit paths but the saved PSR value indicates that we are in kernel mode. This could occur due to corrupted stack state, which has been observed with "ftracetest". This ensures that we catch the problem case before we get to user code. Signed-off-by: Russell King <[email protected]>
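The sort of check this describes, sketched with the mode constants from asm/ptrace.h and the S_PSR offset from asm-offsets; register usage and the branch target are hypothetical:

	ldr	r1, [sp, #S_PSR]	@ PSR that will be restored on exception return
	and	r1, r1, #MODE_MASK	@ isolate the mode field
	teq	r1, #USR_MODE		@ does it really say "user mode"?
	bne	.Lnot_user_mode		@ no: kernel-mode PSR on a user return path -> BUG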
2017-06-30  ARM: Prepare for randomized task_struct  (Arnd Bergmann; 1 file, -0/+2)
With the new task struct randomization, we can run into a build failure for certain random seeds, which will place fields beyond the allowed immediate size in the assembly:

	arch/arm/kernel/entry-armv.S: Assembler messages:
	arch/arm/kernel/entry-armv.S:803: Error: bad immediate value for offset (4096)

Only two constants in asm-offsets.h are affected, and I'm changing both of them here to work correctly in all configurations. One more macro has the problem, but is currently unused, so this removes it instead of adding complexity.

Suggested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]> [kees: Adjust commit log slightly] Signed-off-by: Kees Cook <[email protected]>
2016-09-06  ARM: 8605/1: V7M: fix notrace variant of save_and_disable_irqs  (Vladimir Murzin; 1 file, -0/+4)
Commit 8e43a905 ("ARM: 7325/1: fix v7 boot with lockdep enabled") introduced a notrace variant of save_and_disable_irqs to balance the notrace variant of restore_irqs; however, the V7M case was missed. It went unnoticed because cache-v7.S is the only place where the notrace variant is used. So fix it, since we are going to extend the V7 cache routines to handle the V7M case too.

Signed-off-by: Vladimir Murzin <[email protected]> Tested-by: Andras Szemzo <[email protected]> Tested-by: Joachim Eastwood <[email protected]> Tested-by: Alexandre TORGUE <[email protected]> Signed-off-by: Russell King <[email protected]>
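Sketched shape of the fixed macro (close to what the description implies, not a verbatim quote): on V7M the interrupt state lives in PRIMASK rather than the CPSR:

	.macro	save_and_disable_irqs_notrace, oldcpsr
#ifdef CONFIG_CPU_V7M
	mrs	\oldcpsr, primask	@ V7M: read PRIMASK
#else
	mrs	\oldcpsr, cpsr		@ classic ARM: read CPSR
#endif
	disable_irq_notrace		@ mask IRQs without calling into lockdep
	.endm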
2016-06-22  ARM: introduce svc_pt_regs structure  (Russell King; 1 file, -2/+2)
Since the privileged mode pt_regs are an extended version of the saved userland pt_regs, introduce a new svc_pt_regs structure to describe this layout. Acked-by: Will Deacon <[email protected]> Signed-off-by: Russell King <[email protected]>
2016-06-22  ARM: rename S_FRAME_SIZE to PT_REGS_SIZE  (Russell King; 1 file, -2/+2)
S_FRAME_SIZE is no longer the size of the kernel stack frame, so this name is misleading. It is the size of the kernel pt_regs structure. Name it so. Acked-by: Will Deacon <[email protected]> Signed-off-by: Russell King <[email protected]>
2015-09-14  Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm  (Linus Torvalds; 1 file, -5/+0)
Pull ARM fixes from Russell King:
 "A number of fixes for the merge window, fixing a number of cases missed when testing the uaccess code, particularly cases which only show up with certain compiler versions"

* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
  ARM: 8431/1: fix alignement of __bug_table section entries
  arm/xen: Enable user access to the kernel before issuing a privcmd call
  ARM: domains: add memory dependencies to get_domain/set_domain
  ARM: domains: thread_info.h no longer needs asm/domains.h
  ARM: uaccess: fix undefined instruction on ARMv7M/noMMU
  ARM: uaccess: remove unneeded uaccess_save_and_disable macro
  ARM: swpan: fix nwfpe for uaccess changes
  ARM: 8429/1: disable GCC SRA optimization
2015-09-09  ARM: uaccess: remove unneeded uaccess_save_and_disable macro  (Russell King; 1 file, -5/+0)
This macro is never referenced, remove it. Signed-off-by: Russell King <[email protected]>
2015-09-03  Merge branches 'cleanup', 'fixes', 'misc', 'omap-barrier' and 'uaccess' into for-linus  (Russell King; 1 file, -0/+47)
2015-08-26  ARM: software-based privileged-no-access support  (Russell King; 1 file, -0/+30)
Provide a software-based implementation of the privileged-no-access support found in ARMv8.1.

Userspace pages are mapped using a different domain number from the kernel and IO mappings. If we switch the user domain to "no access" when we enter the kernel, we can prevent the kernel from touching userspace.

However, the kernel needs to be able to access userspace via the various user accessor functions. With the wrapping in the previous patch, we can temporarily enable access when the kernel needs user access, and re-disable it afterwards.

This allows us to trap non-intended accesses to userspace, eg, caused by an inadvertent dereference of the LIST_POISON* values, which, with appropriate user mappings setup, can be made to succeed. This in turn can allow use-after-free bugs to be further exploited than would otherwise be possible.

Signed-off-by: Russell King <[email protected]>
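A minimal sketch of the mechanism (macro names hypothetical; the DACR_UACCESS_* constants stand for whatever encodes "user domain = client" / "user domain = no access"): user access is toggled by rewriting the Domain Access Control Register (CP15 c3):

	.macro	uaccess_enable_sketch, tmp	@ hypothetical name
	mov	\tmp, #DACR_UACCESS_ENABLE	@ user domain set to "client"
	mcr	p15, 0, \tmp, c3, c0, 0		@ write DACR
	instr_sync				@ make the new permissions visible
	.endm

	.macro	uaccess_disable_sketch, tmp	@ hypothetical name
	mov	\tmp, #DACR_UACCESS_DISABLE	@ user domain set to "no access"
	mcr	p15, 0, \tmp, c3, c0, 0
	instr_sync
	.endm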
2015-08-26  ARM: entry: provide uaccess assembly macro hooks  (Russell King; 1 file, -0/+17)
Provide hooks into the kernel entry and exit paths to permit control of userspace visibility to the kernel. The intended use is:

 - on entry to kernel from user, uaccess_disable will be called to disable userspace visibility
 - on exit from kernel to user, uaccess_enable will be called to enable userspace visibility
 - on entry from a kernel exception, uaccess_save_and_disable will be called to save the current userspace visibility setting, and disable access
 - on exit from a kernel exception, uaccess_restore will be called to restore the userspace visibility as it was before the exception occurred.

These hooks allow us to keep userspace visibility disabled for the vast majority of the kernel, except for localised regions where we want to explicitly access userspace.

Signed-off-by: Russell King <[email protected]>
2015-08-25  ARM: entry: efficiency cleanups  (Russell King; 1 file, -4/+12)
Make the "fast" syscall return path fast again. The addition of IRQ tracing and context tracking has made this path grossly inefficient. We can do much better, even with these options enabled, if we save the syscall return code on the stack: we then don't need to save a bunch of registers around every single callout to C code.

Acked-by: Will Deacon <[email protected]> Signed-off-by: Russell King <[email protected]>
2015-08-25  ARM: entry: get rid of asm_trace_hardirqs_on_cond  (Russell King; 1 file, -6/+2)
There's no need for this macro, it can use a default for the condition argument. Acked-by: Will Deacon <[email protected]> Signed-off-by: Russell King <[email protected]>
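The technique, sketched (the real macro also saves and restores the caller-clobbered registers around the call): a defaulted condition argument makes the unconditional and conditional cases collapse into one macro, so the separate "_cond" wrapper can go:

	.macro	asm_trace_hardirqs_on, cond=al
#if defined(CONFIG_TRACE_IRQFLAGS)
	bl\cond	trace_hardirqs_on	@ emitted as bl, blne, ... depending on \cond
#endif
	.endm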
2015-05-08  ARM: replace BSYM() with badr assembly macro  (Russell King; 1 file, -1/+16)
BSYM() was invented to allow us to work around a problem with the assembler, where local symbols resolved by the assembler for the 'adr' instruction did not take account of their ISA. Since we don't want BSYM() used elsewhere, replace BSYM() with a new macro 'badr', which is like the 'adr' pseudo-op, but with the BSYM() mechanics integrated into it. This ensures that the BSYM()-ification is only used in conjunction with 'adr'. Acked-by: Dave Martin <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Russell King <[email protected]>
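A sketch of the idea, simplified to the unconditional form (the in-tree macro also generates the condition-code variants): in Thumb-2, the computed address needs bit 0 set so that a later branch-to-register stays in Thumb state:

	.macro	badr, rd, sym
#ifdef CONFIG_THUMB2_KERNEL
	adr	\rd, \sym + 1		@ set the Thumb bit on the resulting address
#else
	adr	\rd, \sym
#endif
	.endm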
2015-04-14  ARM: allow 16-bit instructions in ALT_UP()  (Russell King; 1 file, -0/+3)
Allow ALT_UP() to cope with a 16-bit Thumb instruction by automatically inserting a following nop instruction. This allows us to care less about getting the assembler to emit a 32-bit thumb instruction. Signed-off-by: Russell King <[email protected]>
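The padding trick, sketched as a fragment: measure what the UP replacement assembled to and top it up with a nop when it came out as a 16-bit Thumb encoding, so it still covers the 4-byte SMP instruction it replaces:

9997:	mov	r0, r0			@ stand-in for the UP replacement instruction
	.if . - 9997b == 2		@ assembled to only 16 bits?
	nop				@ pad to 4 bytes
	.endif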
2014-07-18  ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+  (Russell King; 1 file, -0/+21)
ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and this sequence is strongly recommended to be used by the ARM architecture manual (section A.4.1.1). We provide a new macro "ret" with all its variants for the condition code which will resolve to the appropriate instruction. Rather than doing this piecemeal, and miss some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up - and also gives us the possibility of deploying this for other registers depending on the CPU selection. Reported-by: Will Deacon <[email protected]> Tested-by: Stephen Warren <[email protected]> # Tegra Jetson TK1 Tested-by: Robert Jarzmik <[email protected]> # mioa701_bootresume.S Tested-by: Andrew Lunn <[email protected]> # Kirkwood Tested-by: Shawn Guo <[email protected]> Tested-by: Tony Lindgren <[email protected]> # OMAPs Tested-by: Gregory CLEMENT <[email protected]> # Armada XP, 375, 385 Acked-by: Sekhar Nori <[email protected]> # DaVinci Acked-by: Christoffer Dall <[email protected]> # kvm/hyp Acked-by: Haojian Zhuang <[email protected]> # PXA3xx Acked-by: Stefano Stabellini <[email protected]> # Xen Tested-by: Uwe Kleine-König <[email protected]> # ARMv7M Tested-by: Simon Horman <[email protected]> # Shmobile Signed-off-by: Russell King <[email protected]>
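A sketch of such a "ret" macro, unconditional form only (the real one also provides every condition-code variant): use "bx" for the common "ret lr" case on v6+, and keep "mov pc" elsewhere for now:

	.macro	ret, reg
#if __LINUX_ARM_ARCH__ < 6
	mov	pc, \reg		@ pre-v6: no bx-based return
#else
	.ifeqs	"\reg", "lr"
	bx	\reg			@ preferred return sequence on v6+
	.else
	mov	pc, \reg		@ other registers left as before (for now)
	.endif
#endif
	.endm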
2014-07-01  ARM: 8078/1: get rid of hardcoded assumptions about kernel stack size  (Andrey Ryabinin; 1 file, -3/+5)
Changing kernel stack size on arm is not as simple as it should be:

 1) THREAD_SIZE macro doesn't respect PAGE_SIZE and THREAD_SIZE_ORDER
 2) stack size is hardcoded in get_thread_info macro

This patch fixes it by calculating THREAD_SIZE and thread_info address taking into account PAGE_SIZE and THREAD_SIZE_ORDER. Now changing stack size becomes simply changing THREAD_SIZE_ORDER.

Signed-off-by: Andrey Ryabinin <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Russell King <[email protected]>
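The resulting thread_info lookup, sketched in its ARM-state form (the Thumb-2 variant needs separate shift instructions): clear the low THREAD_SIZE bits of sp by shifting down and back up, with the size derived from THREAD_SIZE_ORDER and PAGE_SHIFT instead of a hardcoded 8K mask:

	.macro	get_thread_info, rd
	mov	\rd, sp, lsr #THREAD_SIZE_ORDER + PAGE_SHIFT	@ drop the in-stack offset
	mov	\rd, \rd, lsl #THREAD_SIZE_ORDER + PAGE_SHIFT	@ back to the stack base
	.endm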
2014-05-25  ARM: 8053/1: kernel: sleep: restore HYP mode configuration in cpu_resume  (Lorenzo Pieralisi; 1 file, -1/+1)
On CPUs with virtualization extensions the kernel installs HYP mode configuration on both primary and secondary cpus upon cold boot. On platforms where CPUs are shutdown in idle paths (ie CPU core gating), when a CPU resumes from low-power states it currently does not execute code that reinstalls the HYP configuration, which means that the kernel cannot run eg KVM properly on such machines. This patch, mirroring cold-boot behaviour, executes position independent code that reinstalls HYP configuration and drops to SVC mode safely on warmboot, so that deep idle states can be enabled in kernel running as hosts on platforms with power management HW. Cc: Christoffer Dall <[email protected]> Cc: Dave Martin <[email protected]> Cc: Marc Zyngier <[email protected]> Cc: Nicolas Pitre <[email protected]> Signed-off-by: Lorenzo Pieralisi <[email protected]> Acked-by: Marc Zyngier <[email protected]> Reviewed-by: Dave Martin <[email protected]> Signed-off-by: Russell King <[email protected]>
2014-04-09  ARM: 8018/1: Add {inc,dec}_preempt_count asm macros  (Catalin Marinas; 1 file, -0/+32)
The patch adds asm macros for inc_preempt_count and dec_preempt_count_ti (which also gets the current thread_info) instead of open-coding them in arch/arm/vfp/*.S files. Signed-off-by: Catalin Marinas <[email protected]> Tested-by: Arun KS <[email protected]> Signed-off-by: Russell King <[email protected]>
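Sketch of the increment side (TI_PREEMPT is the asm-offsets constant for thread_info->preempt_count; the decrement and the "_ti" variant follow the same pattern):

	.macro	inc_preempt_count, ti, tmp
	ldr	\tmp, [\ti, #TI_PREEMPT]	@ current preempt count
	add	\tmp, \tmp, #1			@ bump it
	str	\tmp, [\ti, #TI_PREEMPT]	@ write it back
	.endm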
2014-04-09  ARM: 8017/1: Move asm macro get_thread_info to asm/assembler.h  (Catalin Marinas; 1 file, -0/+10)
asm/assembler.h is a better place for this macro since it is used by asm files outside arch/arm/kernel/ Signed-off-by: Catalin Marinas <[email protected]> Tested-by: Arun KS <[email protected]> Signed-off-by: Russell King <[email protected]>
2014-02-25  ARM: 7990/1: asm: rename logical shift macros push pull into lspush lspull  (Victor Kamensky; 1 file, -4/+4)
Rename the logical shift macros 'push' and 'pull', defined in arch/arm/include/asm/assembler.h, to 'lspush' and 'lspull'. That eliminates the name conflict between the 'push' logical shift macro and the 'push' instruction mnemonic, and allows assembler.h to be included in .S files that use the 'push' instruction.

Suggested-by: Will Deacon <[email protected]> Signed-off-by: Victor Kamensky <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Russell King <[email protected]>
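Illustration of why the rename matters (the #defines mirror the macros as described; exact definitions may differ): after the rename, a file can both use the logical-shift helpers and emit a real "push" instruction:

#ifndef __ARMEB__
#define lspull	lsr		/* shift towards the least significant end (LE) */
#define lspush	lsl
#else
#define lspull	lsl		/* reversed on big-endian */
#define lspush	lsr
#endif

	push	{r4, lr}			@ plain instruction, no longer shadowed by a macro
	orr	r3, r3, r4, lspush #24		@ logical-shift helper in a word-crossing copy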
2013-10-19  ARM: asm: Add ARM_BE8() assembly helper  (Ben Dooks; 1 file, -0/+7)
Add the ARM_BE8() helper to wrap any code that should only be compiled when CONFIG_ARM_ENDIAN_BE8 is selected, and convert the existing places where this is done to use it.

Acked-by: Nicolas Pitre <[email protected]> Reviewed-by: Will Deacon <[email protected]> Signed-off-by: Ben Dooks <[email protected]>
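A sketch of the helper and a typical use (guard shown as CONFIG_CPU_ENDIAN_BE8, the usual BE8 config symbol; treat the exact name as an assumption):

/* Emit the wrapped code only when building a BE8 (big-endian) kernel. */
#ifdef CONFIG_CPU_ENDIAN_BE8
#define ARM_BE8(code...)	code
#else
#define ARM_BE8(code...)
#endif

	ldr	r0, [r2]		@ instruction word is stored little-endian
ARM_BE8(rev	r0, r0)			@ byte-swap it only on BE8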
2013-08-12  ARM: barrier: allow options to be passed to memory barrier instructions  (Will Deacon; 1 file, -2/+2)
On ARMv7, the memory barrier instructions take an optional `option' field which can be used to constrain the effects of a memory barrier based on shareability and access type. This patch allows the caller to pass these options if required, and updates the smp_*() barriers to request inner-shareable barriers, affecting only stores for the _wmb variant. wmb() is also changed to use the -st version of dsb. Reported-by: Albin Tonnerre <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> Signed-off-by: Will Deacon <[email protected]>
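For illustration, the kinds of ARMv7 barrier forms this enables (standard architectural options, not a quote of the patch):

	dmb	ish		@ data memory barrier, inner-shareable, loads and stores
	dmb	ishst		@ inner-shareable, orders stores only (what smp_wmb() wants)
	dsb	st		@ full-system data synchronization barrier, stores only (wmb())
	dsb	sy		@ full-system, all accesses (the previous default)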
2013-04-17  ARM: Add base support for ARMv7-M  (Catalin Marinas; 1 file, -1/+16)
This patch adds the base support for the ARMv7-M architecture. It consists of the corresponding arch/arm/mm/ files and various #ifdef's around the kernel. Exception handling is implemented by a subsequent patch. [ukleinek: squash in some changes originating from commit b5717ba (Cortex-M3: Add support for the Microcontroller Prototyping System) from the v2.6.33-arm1 patch stack, port to post 3.6, drop zImage support, drop reorganisation of pt_regs, assert CONFIG_CPU_V7M doesn't leak into installed headers and a few cosmetic changes] Signed-off-by: Catalin Marinas <[email protected]> Reviewed-by: Jonathan Austin <[email protected]> Tested-by: Jonathan Austin <[email protected]> Signed-off-by: Uwe Kleine-König <[email protected]>
2013-01-10  ARM: virt: avoid clobbering lr when forcing svc mode  (Russell King; 1 file, -7/+3)
The safe_svcmode_maskall macro is used to ensure that we are running in svc mode, causing an exception return from hvc mode if required. This patch removes the unneeded lr clobber from the macro and operates entirely on the temporary parameter register instead. Signed-off-by: Russell King <[email protected]> [will: updated comment] Signed-off-by: Will Deacon <[email protected]>
2012-12-11  ARM: 7599/1: head: Remove boot-time HYP mode check for v5 and below  (Dave Martin; 1 file, -0/+8)
The kernel can only be entered on HYP mode on CPUs which actually support it, i.e. >= ARMv7. pre-v6 platform support cannot coexist in the same kernel as support for v7 and higher, so there is no advantage in having the HYP mode check on pre-v6 hardware. At least one pre-v6 board is known to fail when the HYP mode check code is present, although the exact cause remains unknown and may be unrelated. [1] This patch restores the old behaviour for pre-v6 platforms, whereby the CPSR is forced directly to SVC mode with IRQs and FIQs masked. All kernels capable of booting on v7 hardware will retain the check, so this should not impair functionality. [1] http://lists.arm.linux.org.uk/lurker/message/20121130.013814.19218413.en.html ([ARM] head.S change broke platform device registration?) Signed-off-by: Dave Martin <[email protected]> Signed-off-by: Russell King <[email protected]>
2012-10-09  ARM: 7549/1: HYP: fix boot on some ARM1136 cores  (Marc Zyngier; 1 file, -4/+5)
It appears that performing a "movs pc, lr" to force the kernel into SVC mode on the OMAP2420 (ARM1136) prevents the platform from booting correctly (change introduced in 80c59da [ARM: virt: allow the kernel to be entered in HYP mode]). While the reason it fails is not understood yet (the same code runs fine on the OMAP2430, ARM1136 as well), partially revert that change for platforms that do not enter in HYP mode, preserving the new feature and restoring a working kernel on the OMAP2420. Reported-by: Tony Lindgren <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Tested-by: Tony Lindgren <[email protected]> Signed-off-by: Marc Zyngier <[email protected]> Signed-off-by: Russell King <[email protected]>
2012-09-30  Merge branch 'hyp-boot-mode-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into devel-stable  (Russell King; 1 file, -0/+28)
2012-09-19  ARM: virt: allow the kernel to be entered in HYP mode  (Dave Martin; 1 file, -0/+28)
This patch does two things:

 * Ensure that asynchronous aborts are masked at kernel entry. The bootloader should be masking these anyway, but this reduces the damage window just in case it doesn't.

 * Enter svc mode via exception return to ensure that CPU state is properly serialised. This does not matter when switching from an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C parlance), but it potentially does matter when switching from another privileged mode such as hyp mode.

This should allow the kernel to boot safely either from svc mode or hyp mode, even if no support for use of the ARM Virtualization Extensions is built into the kernel.

Signed-off-by: Dave Martin <[email protected]> Signed-off-by: Marc Zyngier <[email protected]>
2012-09-09  ARM: 7527/1: uaccess: explicitly check __user pointer when !CPU_USE_DOMAINS  (Russell King; 1 file, -0/+8)
The {get,put}_user macros don't perform range checking on the provided __user address when !CPU_HAS_DOMAINS. This patch reworks the out-of-line assembly accessors to check the user address against a specified limit, returning -EFAULT if it is out of range.

[will: changed get_user register allocation to match put_user] [rmk: fixed building on older ARM architectures]

Reported-by: Catalin Marinas <[email protected]> Signed-off-by: Will Deacon <[email protected]> Cc: [email protected] Signed-off-by: Russell King <[email protected]>
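Sketch of such a range check (macro name and exact shape illustrative; the flag arithmetic is the point): compute the end of the access, catching wrap-around, and branch to the fault path when it lands beyond the limit:

	.macro	check_uaccess_sketch, addr:req, size:req, limit:req, tmp:req, bad:req
	adds	\tmp, \addr, #\size	@ end of the access; C set if the add wrapped
	sbcscc	\tmp, \tmp, \limit	@ if it did not wrap, compare end against the limit
	bcs	\bad			@ wrapped or past the limit: take the -EFAULT path
	.endm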
2012-03-29  Merge tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc  (Linus Torvalds; 1 file, -0/+2)
Pull "ARM: cleanups of io includes" from Olof Johansson:
 "Rob Herring has done a sweeping change cleaning up all of the mach/io.h includes, moving some of the oft-repeated macros to a common location and removing a bunch of boiler plate. This is another step closer to a common zImage for multiple platforms."

Fix up various fairly trivial conflicts (<mach/io.h> removal vs changes around it, tegra localtimer.o is *still* gone, yadda-yadda).

* tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (29 commits)
  ARM: tegra: Include assembler.h in sleep.S to fix build break
  ARM: pxa: use common IOMEM definition
  ARM: dma-mapping: convert ARCH_HAS_DMA_SET_COHERENT_MASK to kconfig symbol
  ARM: __io abuse cleanup
  ARM: create a common IOMEM definition
  ARM: iop13xx: fix missing declaration of iop13xx_init_early
  ARM: fix ioremap/iounmap for !CONFIG_MMU
  ARM: kill off __mem_pci
  ARM: remove bunch of now unused mach/io.h files
  ARM: make mach/io.h include optional
  ARM: clps711x: remove unneeded include of mach/io.h
  ARM: dove: add explicit include of dove.h to addr-map.c
  ARM: at91: add explicit include of hardware.h to uncompressor
  ARM: ep93xx: clean-up mach/io.h
  ARM: tegra: clean-up mach/io.h
  ARM: orion5x: clean-up mach/io.h
  ARM: davinci: remove unneeded mach/io.h include
  [media] davinci: remove includes of mach/io.h
  ARM: OMAP: Remove remaining includes for mach/io.h
  ARM: msm: clean-up mach/io.h
  ...
2012-03-13  ARM: create a common IOMEM definition  (Rob Herring; 1 file, -0/+2)
Several platforms create IOMEM defines for casting to 'void __iomem *', and other platforms are incorrectly using __io() macro for the same purpose. This creates a common definition and removes all the platform specific versions. Rather than try to make linux/io.h and asm/io.h assembly safe, the assembly version of IOMEM is moved into asm/assembler.h. Signed-off-by: Rob Herring <[email protected]> Cc: Russell King <[email protected]> Cc: Sekhar Nori <[email protected]> Cc: Kevin Hilman <[email protected]> Acked-by: H Hartley Sweeten <[email protected]> Cc: Ryan Mallon <[email protected]> Cc: Eric Miao <[email protected]> Cc: Haojian Zhuang <[email protected]> Acked-by: David Brown <[email protected]> Cc: Daniel Walker <[email protected]> Cc: Bryan Huntsman <[email protected]> Cc: Sascha Hauer <[email protected]> Cc: Shawn Guo <[email protected]> Acked-by: Tony Lindgren <[email protected]> Acked-by: Paul Walmsley <[email protected]> Acked-by: Viresh Kumar <[email protected]> Cc: Rajeev Kumar <[email protected]> Cc: Colin Cross <[email protected]> Cc: Olof Johansson <[email protected]> Cc: Stephen Warren <[email protected]> Acked-by: Linus Walleij <[email protected]> Acked-by: Arnd Bergmann <[email protected]>
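The two faces of the common definition, sketched (shown as excerpts from two different headers; exact wording of the C cast may differ): C code gets the __iomem-qualified cast, while assembly, which cannot express a cast, gets the bare value from asm/assembler.h:

/* C view (asm/io.h): */
#define IOMEM(x)	((void __force __iomem *)(x))

/* Assembly view (asm/assembler.h): the cast is meaningless to the assembler. */
#define IOMEM(x)	(x)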
2012-02-15  ARM: 7325/1: fix v7 boot with lockdep enabled  (Rabin Vincent; 1 file, -0/+5)
Bootup with lockdep enabled has been broken on v7 since b46c0f74657d ("ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR"). This is because v7_setup (which is called very early during boot) calls v7_flush_dcache_all, and the save_and_disable_irqs added by that patch ends up attempting to call into lockdep C code (trace_hardirqs_off()) when we are in no position to execute it (no stack, MMU off). Fix this by using a notrace variant of save_and_disable_irqs. The code already uses the notrace variant of restore_irqs. Reviewed-by: Nicolas Pitre <[email protected]> Acked-by: Stephen Boyd <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: [email protected] Signed-off-by: Rabin Vincent <[email protected]> Signed-off-by: Russell King <[email protected]>
2012-01-25  ARM: 7301/1: Rename the T() macro to TUSER() to avoid namespace conflicts  (Catalin Marinas; 1 file, -2/+2)
This macro is used to generate unprivileged accesses (LDRT/STRT) to user space. Signed-off-by: Catalin Marinas <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Signed-off-by: Russell King <[email protected]>
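A sketch of what the renamed macro does (the exact condition used in the headers is an assumption here): when CPU domains are in use, user accesses must use the unprivileged load/store forms, so TUSER() appends the 't' suffix; otherwise the plain instruction is emitted:

#ifdef CONFIG_CPU_USE_DOMAINS
#define TUSER(instr)	instr ## t	/* e.g. TUSER(ldr) -> ldrt */
#else
#define TUSER(instr)	instr		/* plain ldr/str */
#endif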
2011-12-08  ARM: LPAE: add ISBs around MMU enabling code  (Will Deacon; 1 file, -0/+11)
Before we enable the MMU, we must ensure that the TTBR registers contain sane values. After the MMU has been enabled, we jump to the *virtual* address of the following function, so we also need to ensure that the SCTLR write has taken effect. This patch adds ISB instructions around the SCTLR write to ensure the visibility of the above. Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]>
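A sketch of the kind of instruction-synchronization helper this needs, usable from assembly on both ARMv6 (CP15 prefetch flush) and ARMv7+ (a true isb); details of the in-tree macro may differ:

	.macro	instr_sync
#if __LINUX_ARM_ARCH__ >= 7
	isb				@ architectural instruction barrier
#elif __LINUX_ARM_ARCH__ == 6
	mcr	p15, 0, r0, c7, c5, 4	@ CP15 ISB (prefetch flush); the r0 value is ignored
#endif
	.endm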
2011-07-07  ARM: assembler.h: Add string declaration macro  (Dave Martin; 1 file, -0/+9)
Declaring strings in assembler source involves a certain amount of tedious boilerplate code in order to annotate the resulting symbol correctly. Encapsulating this boilerplate in a macro should help to avoid some duplication and the occasional mistake. Signed-off-by: Dave Martin <[email protected]> Acked-by: Nicolas Pitre <[email protected]>
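The boilerplate being wrapped, sketched, together with a usage line (symbol name and contents are illustrative):

	.macro	string name:req, string
	.type	\name , #object
\name:
	.asciz	"\string"
	.size	\name , . - \name
	.endm

	string	cpu_name_sketch, "ARMv7 Processor"	@ illustrative use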