path: root/arch/arm/kernel
Age    Commit message    Author    Files    Lines
2018-07-30ARM: 8781/1: Fix Thumb-2 syscall return for binutils 2.29+Vincent Whitchurch1-1/+3
When building the kernel as Thumb-2 with binutils 2.29 or newer, if the assembler has seen the .type directive (via ENDPROC()) for a symbol, it automatically handles the setting of the lowest bit when the symbol is used with ADR. The badr macro on the other hand handles this lowest bit manually. This leads to a jump to a wrong address in the wrong state in the syscall return path:

  Internal error: Oops - undefined instruction: 0 [#2] SMP THUMB2
  Modules linked in:
  CPU: 0 PID: 652 Comm: modprobe Tainted: G      D 4.18.0-rc3+ #8
  PC is at ret_fast_syscall+0x4/0x62
  LR is at sys_brk+0x109/0x128
  pc : [<80101004>]    lr : [<801c8a35>]    psr: 60000013
  Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
  Control: 50c5387d  Table: 9e82006a  DAC: 00000051
  Process modprobe (pid: 652, stack limit = 0x(ptrval))

  80101000 <ret_fast_syscall>:
  80101000: b672      cpsid i
  80101002: f8d9 2008 ldr.w r2, [r9, #8]
  80101006: f1b2 4ffe cmp.w r2, #2130706432 ; 0x7f000000

  80101184 <local_restart>:
  80101184: f8d9 a000 ldr.w sl, [r9]
  80101188: e92d 0030 stmdb sp!, {r4, r5}
  8010118c: f01a 0ff0 tst.w sl, #240 ; 0xf0
  80101190: d117      bne.n 801011c2 <__sys_trace>
  80101192: 46ba      mov sl, r7
  80101194: f5ba 7fc8 cmp.w sl, #400 ; 0x190
  80101198: bf28      it cs
  8010119a: f04f 0a00 movcs.w sl, #0
  8010119e: f3af 8014 nop.w {20}
  801011a2: f2af 1ea2 subw lr, pc, #418 ; 0x1a2

To fix this, add a new symbol name which doesn't have ENDPROC used on it and use that with badr. We can't remove the badr usage since that would cause breakage with older binutils.

Signed-off-by: Vincent Whitchurch <[email protected]>
Signed-off-by: Russell King <[email protected]>
2018-07-27Merge branch 'for-next/perf' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into aarch64/for-next/coreWill Deacon3-13/+34
Pull in arm perf updates, including support for 64-bit (chained) event counters and some non-critical fixes for some of the system PMU drivers. Signed-off-by: Will Deacon <[email protected]>
2018-07-26mm: use vma_init() to initialize VMAs on stack and data segmentsKirill A. Shutemov1-0/+1
Make sure to initialize all VMAs properly, not only those which come from vm_area_cachep. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Kirill A. Shutemov <[email protected]> Acked-by: Linus Torvalds <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
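A minimal sketch of the pattern this change enforces for on-stack VMAs; vma_init() is the helper named above, while the surrounding function and its use of the VMA are illustrative:

  #include <linux/mm.h>

  /* Sketch: initialize a stack-allocated VMA through vma_init() rather
   * than memset()/designated initializers, so fields such as vm_ops and
   * the anon_vma chain are set up the same way as for VMAs that come
   * from vm_area_cachep. */
  static void example_probe_vma(struct mm_struct *mm)
  {
  	struct vm_area_struct vma;

  	vma_init(&vma, mm);
  	vma.vm_flags = VM_READ;
  	/* ... pass &vma to code that expects a fully initialized VMA ... */
  }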
2018-07-26ARM: signal: copy registers using __copy_from_user()Russell King1-17/+21
__get_user_error() is used as a fast accessor to make copying structure members in the signal handling path as efficient as possible. However, with software PAN and the recent Spectre variant 1, the efficiency is reduced as these are no longer fast accessors. In the case of software PAN, it has to switch the domain register around each access, and with Spectre variant 1, it would have to repeat the access_ok() check for each access. It becomes much more efficient to use __copy_from_user() instead, so let's use this for the ARM integer registers. Acked-by: Mark Rutland <[email protected]> Signed-off-by: Russell King <[email protected]>
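A hedged sketch of the conversion described above; the sigframe pointer (user_sf) and surrounding code are illustrative, not the exact arm signal code:

  /* One __copy_from_user() pulls the contiguous run of integer registers
   * out of the user sigframe, paying the software-PAN domain switch and
   * the Spectre-v1 access_ok() cost once instead of once per register.
   * __copy_from_user() returns the number of bytes it could not copy,
   * so 0 means success. */
  struct sigcontext context;

  if (__copy_from_user(&context, &user_sf->uc.uc_mcontext,
  		     sizeof(context)) == 0) {
  	regs->ARM_r0 = context.arm_r0;
  	/* ... and so on for r1..pc and cpsr ... */
  }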
2018-07-25Merge branch 'perf/urgent' into perf/core, to pick up fixesIngo Molnar1-1/+1
Signed-off-by: Ingo Molnar <[email protected]>
2018-07-20ARM/time: Remove read_boot_clock64()Pavel Tatashin1-13/+2
Delete read_boot_clock64() and replace it with read_persistent_wall_and_boot_offset(). The default implementation of read_persistent_wall_and_boot_offset() provides a better fallback than ARM's current read_boot_clock64() stubs, which have no users, so remove the old code. Signed-off-by: Pavel Tatashin <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
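For reference, the replacement hook's shape — a sketch of the weak default per the description above; the exact fallback body is an assumption:

  #include <linux/timekeeping.h>
  #include <linux/sched/clock.h>

  /* One call now yields both the persistent wall-clock time and the
   * offset of boot relative to it; architectures override this weak
   * default when they have better information. */
  void __weak read_persistent_wall_and_boot_offset(struct timespec64 *wall_time,
  						 struct timespec64 *boot_offset)
  {
  	read_persistent_clock64(wall_time);
  	*boot_offset = ns_to_timespec64(local_clock());
  }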
2018-07-13Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-armLinus Torvalds1-1/+1
Pull ARM fixes from Russell King: "A couple of small fixes this time around from Steven for an interaction between ftrace and kernel read-only protection, and Vladimir for nommu" * 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm: ARM: 8780/1: ftrace: Only set kernel memory back to read-only after boot ARM: 8775/1: NOMMU: Use instr_sync instead of plain isb in common code
2018-07-11ARM: 8775/1: NOMMU: Use instr_sync instead of plain isb in common codeVladimir Murzin1-1/+1
Greg reported that commit 3c24121039c9d ("ARM: 8756/1: NOMMU: Postpone MPU activation till __after_proc_init") is causing breakage for the old Versatile platform in no-MMU mode (with out-of-tree patches):

  AS      arch/arm/kernel/head-nommu.o
  arch/arm/kernel/head-nommu.S: Assembler messages:
  arch/arm/kernel/head-nommu.S:180: Error: selected processor does not support `isb' in ARM mode
  scripts/Makefile.build:417: recipe for target 'arch/arm/kernel/head-nommu.o' failed
  make[2]: *** [arch/arm/kernel/head-nommu.o] Error 1
  Makefile:1034: recipe for target 'arch/arm/kernel' failed
  make[1]: *** [arch/arm/kernel] Error 2

Since the code is common to all NOMMU builds, using isb there was a bad idea (note that isb is also used in MPU-related code, which is fine because the MPU depends on CPU_V7/CPU_V7M); instead, use the more robust instr_sync assembler macro.

Fixes: 3c24121039c9 ("ARM: 8756/1: NOMMU: Postpone MPU activation till __after_proc_init")
Reported-by: Greg Ungerer <[email protected]>
Tested-by: Greg Ungerer <[email protected]>
Signed-off-by: Vladimir Murzin <[email protected]>
Signed-off-by: Russell King <[email protected]>
2018-07-10arm_pmu: Tidy up clear_event_idx call backsSuzuki K Poulose3-0/+25
The armpmu uses the get_event_idx callback to allocate an event counter for a given event, which marks the selected counter as "used". Now, when we delete the counter, the arm_pmu goes ahead and clears the "used" bit and then invokes the clear_event_idx callback, which kind of splits the job between the core code and the backend. To keep things tidy, mandate the implementation of clear_event_idx() and add it for existing backends, as sketched below. This will be useful for adding the chained event support, where we leave the event idx maintenance to the backend. Also, when an event is removed from the PMU, reset the hw.idx to indicate that a counter is not allocated for this event, to help the backends do better checks. This will also be used for the chain counter support. Cc: Will Deacon <[email protected]> Cc: Mark Rutland <[email protected]> Reviewed-by: Julien Thierry <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
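A sketch of what a mandated backend callback can look like once the bookkeeping moves here; the naming is illustrative:

  /* The backend now owns both sides: get_event_idx() sets the bit in
   * used_mask when a counter is allocated, and clear_event_idx() clears
   * it on removal (a chained event would clear two adjacent bits). */
  static void example_pmu_clear_event_idx(struct pmu_hw_events *cpuc,
  					struct perf_event *event)
  {
  	clear_bit(event->hw.idx, cpuc->used_mask);
  }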
2018-07-10arm_pmu: Change API to support 64bit counter valuesSuzuki K Poulose3-8/+8
Convert the {read/write}_counter APIs to handle 64bit values to enable supporting chained event counters. The backends still use 32bit values and we pass them 32bit values only. So in effect there are no functional changes. Cc: Will Deacon <[email protected]> Acked-by: Mark Rutland <[email protected]> Reviewed-by: Julien Thierry <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
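The shape of the prototype change, mirrored in an illustrative ops struct (the real members live in struct arm_pmu):

  struct example_arm_pmu_ops {
  	/* was: u32-based read/write; widened to u64 so two chained
  	 * 32-bit hardware counters can later be presented as one value */
  	u64  (*read_counter)(struct perf_event *event);
  	void (*write_counter)(struct perf_event *event, u64 val);
  };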
2018-07-10arm_pmu: Clean up maximum period handlingSuzuki K Poulose3-5/+0
Each PMU defines its counter's max_period as the maximum value that can be counted. Since all the PMU backends support 32bit counters by default, let us remove the redundant field. No functional changes. Cc: Will Deacon <[email protected]> Acked-by: Mark Rutland <[email protected]> Reviewed-by: Julien Thierry <[email protected]> Signed-off-by: Suzuki K Poulose <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2018-07-09arm: perf: prevent unbind/bind via sysfsStefan Agner1-0/+1
Unbinding and rebinding the ARM PMU driver via sysfs leads to a warning followed by more errors:

  WARNING: CPU: 0 PID: 217 at kernel/irq/chip.c:1034 irq_modify_status+0x150/0x16c
  ..
  genirq: Flags mismatch irq 19. 00010c04 (arm-pmu) vs. 00010c04 (arm-pmu)
  hw perfevents: unable to request IRQ19 for ARM PMU counters
  hw perfevents: /pmu: failed to register PMU devices!
  armv7-pmu: probe of pmu failed with error -16

The driver is clearly not designed to be removed. Disable bind/unbind for this driver.

Signed-off-by: Stefan Agner <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
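The mechanism used is the driver core's suppress_bind_attrs flag; a minimal sketch, with the driver name and probe function as placeholders:

  #include <linux/platform_device.h>

  static int example_pmu_probe(struct platform_device *pdev);

  static struct platform_driver example_pmu_driver = {
  	.driver = {
  		.name			= "example-pmu",
  		/* PMU IRQs and per-cpu state don't survive a remove/
  		 * re-probe cycle: hide the sysfs bind/unbind files */
  		.suppress_bind_attrs	= true,
  	},
  	.probe	= example_pmu_probe,
  };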
2018-06-26perf/arch/arm: Implement hw_breakpoint_arch_parse()Frederic Weisbecker1-35/+36
Migrate to the new API in order to remove arch_validate_hwbkpt_settings() that clumsily mixes up architecture validation and commit. Signed-off-by: Frederic Weisbecker <[email protected]> Acked-by: Mark Rutland <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Joel Fernandes <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rich Felker <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
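The new hook's signature, with an illustrative body (the parse logic shown is a placeholder):

  /* Parse the user-supplied attributes into the architecture's
   * breakpoint representation without committing anything to hardware —
   * validation and commit are no longer mixed in one callback. */
  int hw_breakpoint_arch_parse(struct perf_event *bp,
  			     const struct perf_event_attr *attr,
  			     struct arch_hw_breakpoint *hw)
  {
  	/* decode attr->bp_addr / attr->bp_len / attr->bp_type into *hw */
  	return 0;
  }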
2018-06-26perf/hw_breakpoint: Pass arch breakpoint struct to arch_check_bp_in_kernelspace()Frederic Weisbecker1-6/+5
We can't pass the breakpoint directly to arch_check_bp_in_kernelspace() anymore because its architecture-internal data (struct arch_hw_breakpoint) is not yet filled in by the time we call the function, and most implementations need this backend to be up to date. So arrange for the function to take the probing struct instead. Signed-off-by: Frederic Weisbecker <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Joel Fernandes <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rich Felker <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
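Roughly what the arm version looks like after the change — a sketch; get_hbp_len() stands in for the arm helper that turns the control-register length field into bytes:

  /* The check now receives the already-parsed arch struct, since the
   * perf_event's arch data isn't filled in when generic code calls us. */
  static int arch_check_bp_in_kernelspace(struct arch_hw_breakpoint *hw)
  {
  	unsigned int len;
  	unsigned long va;

  	va = hw->address;
  	len = get_hbp_len(hw->ctrl.len);

  	return (va >= TASK_SIZE) && ((va + len - 1) >= TASK_SIZE);
  }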
2018-06-22rseq: Avoid infinite recursion when delivering SIGSEGVWill Deacon1-2/+2
When delivering a signal to a task that is using rseq, we call into __rseq_handle_notify_resume() so that the registers pushed in the sigframe are updated to reflect the state of the restartable sequence (for example, ensuring that the signal returns to the abort handler if necessary). However, if the rseq management fails due to an unrecoverable fault when accessing userspace or certain combinations of RSEQ_CS_* flags, then we will attempt to deliver a SIGSEGV. This has the potential for infinite recursion if the rseq code continuously fails on signal delivery. Avoid this problem by using force_sigsegv() instead of force_sig(), which is explicitly designed to reset the SEGV handler to SIG_DFL in the case of a recursive fault. In doing so, remove rseq_signal_deliver() from the internal rseq API and have an optional struct ksignal * parameter to rseq_handle_notify_resume() instead. Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Mathieu Desnoyers <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
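A sketch of the failure path described above; the surrounding function is hypothetical, and the point is force_sigsegv() versus force_sig():

  #include <linux/sched/signal.h>

  /* If fixing up the rseq critical section faults unrecoverably during
   * signal delivery, raise SEGV with the handler reset to SIG_DFL, so a
   * fault while delivering that SEGV cannot recurse forever. */
  static void example_rseq_unrecoverable(void)
  {
  	force_sigsegv(SIGSEGV, current);
  }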
2018-06-14Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variablesLinus Torvalds3-4/+4
The changes to automatically test for working stack protector compiler support in the Kconfig files removed the special STACKPROTECTOR_AUTO option that picked the strongest stack protector that the compiler supported.

That was all a nice cleanup - it makes no sense to have the AUTO case now that the Kconfig phase can just determine the compiler support directly.

HOWEVER.

It also meant that doing "make oldconfig" would now _disable_ the strong stackprotector if you had AUTO enabled, because in a legacy config file, the sane stack protector configuration would look like

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_NONE is not set
  # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_STACKPROTECTOR_AUTO=y

and when you ran this through "make oldconfig" with the Kbuild changes, it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version used to be disabled (because it was really enabled by AUTO), and would disable it in the new config, resulting in:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

That's dangerously subtle - people could suddenly find themselves with the weaker stack protector setup without even realizing.

The solution here is to rename not just the old REGULAR stack protector option, but also the strong one. This does that by just removing the CC_ prefix entirely for the user choices, because it really is not about the compiler support (the compiler support now instead automatically impacts the _visibility_ of the options to users).

This results in "make oldconfig" actually asking the user for their choice, so that we don't have any silent subtle security model changes. The end result would generally look like this:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_STACKPROTECTOR=y
  CONFIG_STACKPROTECTOR_STRONG=y
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

where the "CC_" versions really are about internal compiler infrastructure, not the user selections.

Acked-by: Masahiro Yamada <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2018-06-12treewide: kmalloc() -> kmalloc_array()Kees Cook1-2/+2
The kmalloc() function has a 2-factor argument form, kmalloc_array(). This patch replaces cases of:

  kmalloc(a * b, gfp)

with:

  kmalloc_array(a, b, gfp)

as well as handling cases of:

  kmalloc(a * b * c, gfp)

with:

  kmalloc(array3_size(a, b, c), gfp)

as it's slightly less ugly than:

  kmalloc_array(array_size(a, b), c, gfp)

This does, however, attempt to ignore constant size factors like:

  kmalloc(4 * 1024, gfp)

though any constants defined via macros get caught up in the conversion.

Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant.

The tools/ directory was manually excluded, since it has its own implementation of kmalloc().

The Coccinelle script used for this was:

  // Fix redundant parens around sizeof().
  @@
  type TYPE;
  expression THING, E;
  @@
  (
    kmalloc(
  -	(sizeof(TYPE)) * E
  +	sizeof(TYPE) * E
    , ...)
  |
    kmalloc(
  -	(sizeof(THING)) * E
  +	sizeof(THING) * E
    , ...)
  )

  // Drop single-byte sizes and redundant parens.
  @@
  expression COUNT;
  typedef u8;
  typedef __u8;
  @@
  (
    kmalloc(
  -	sizeof(u8) * (COUNT)
  +	COUNT
    , ...)
  |
    kmalloc(
  -	sizeof(__u8) * (COUNT)
  +	COUNT
    , ...)
  |
    kmalloc(
  -	sizeof(char) * (COUNT)
  +	COUNT
    , ...)
  |
    kmalloc(
  -	sizeof(unsigned char) * (COUNT)
  +	COUNT
    , ...)
  |
    kmalloc(
  -	sizeof(u8) * COUNT
  +	COUNT
    , ...)
  |
    kmalloc(
  -	sizeof(__u8) * COUNT
  +	COUNT
    , ...)
  |
    kmalloc(
  -	sizeof(char) * COUNT
  +	COUNT
    , ...)
  |
    kmalloc(
  -	sizeof(unsigned char) * COUNT
  +	COUNT
    , ...)
  )

  // 2-factor product with sizeof(type/expression) and identifier or constant.
  @@
  type TYPE;
  expression THING;
  identifier COUNT_ID;
  constant COUNT_CONST;
  @@
  (
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(TYPE) * (COUNT_ID)
  +	COUNT_ID, sizeof(TYPE)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(TYPE) * COUNT_ID
  +	COUNT_ID, sizeof(TYPE)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(TYPE) * (COUNT_CONST)
  +	COUNT_CONST, sizeof(TYPE)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(TYPE) * COUNT_CONST
  +	COUNT_CONST, sizeof(TYPE)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(THING) * (COUNT_ID)
  +	COUNT_ID, sizeof(THING)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(THING) * COUNT_ID
  +	COUNT_ID, sizeof(THING)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(THING) * (COUNT_CONST)
  +	COUNT_CONST, sizeof(THING)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(THING) * COUNT_CONST
  +	COUNT_CONST, sizeof(THING)
    , ...)
  )

  // 2-factor product, only identifiers.
  @@
  identifier SIZE, COUNT;
  @@
  - kmalloc
  + kmalloc_array
    (
  -	SIZE * COUNT
  +	COUNT, SIZE
    , ...)

  // 3-factor product with 1 sizeof(type) or sizeof(expression), with
  // redundant parens removed.
  @@
  expression THING;
  identifier STRIDE, COUNT;
  type TYPE;
  @@
  (
    kmalloc(
  -	sizeof(TYPE) * (COUNT) * (STRIDE)
  +	array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
  |
    kmalloc(
  -	sizeof(TYPE) * (COUNT) * STRIDE
  +	array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
  |
    kmalloc(
  -	sizeof(TYPE) * COUNT * (STRIDE)
  +	array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
  |
    kmalloc(
  -	sizeof(TYPE) * COUNT * STRIDE
  +	array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
  |
    kmalloc(
  -	sizeof(THING) * (COUNT) * (STRIDE)
  +	array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
  |
    kmalloc(
  -	sizeof(THING) * (COUNT) * STRIDE
  +	array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
  |
    kmalloc(
  -	sizeof(THING) * COUNT * (STRIDE)
  +	array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
  |
    kmalloc(
  -	sizeof(THING) * COUNT * STRIDE
  +	array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
  )

  // 3-factor product with 2 sizeof(variable), with redundant parens removed.
  @@
  expression THING1, THING2;
  identifier COUNT;
  type TYPE1, TYPE2;
  @@
  (
    kmalloc(
  -	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
  +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
  |
    kmalloc(
  -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
  +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
  |
    kmalloc(
  -	sizeof(THING1) * sizeof(THING2) * COUNT
  +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
  |
    kmalloc(
  -	sizeof(THING1) * sizeof(THING2) * (COUNT)
  +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
  |
    kmalloc(
  -	sizeof(TYPE1) * sizeof(THING2) * COUNT
  +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
  |
    kmalloc(
  -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
  +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
  )

  // 3-factor product, only identifiers, with redundant parens removed.
  @@
  identifier STRIDE, SIZE, COUNT;
  @@
  (
    kmalloc(
  -	(COUNT) * STRIDE * SIZE
  +	array3_size(COUNT, STRIDE, SIZE)
    , ...)
  |
    kmalloc(
  -	COUNT * (STRIDE) * SIZE
  +	array3_size(COUNT, STRIDE, SIZE)
    , ...)
  |
    kmalloc(
  -	COUNT * STRIDE * (SIZE)
  +	array3_size(COUNT, STRIDE, SIZE)
    , ...)
  |
    kmalloc(
  -	(COUNT) * (STRIDE) * SIZE
  +	array3_size(COUNT, STRIDE, SIZE)
    , ...)
  |
    kmalloc(
  -	COUNT * (STRIDE) * (SIZE)
  +	array3_size(COUNT, STRIDE, SIZE)
    , ...)
  |
    kmalloc(
  -	(COUNT) * STRIDE * (SIZE)
  +	array3_size(COUNT, STRIDE, SIZE)
    , ...)
  |
    kmalloc(
  -	(COUNT) * (STRIDE) * (SIZE)
  +	array3_size(COUNT, STRIDE, SIZE)
    , ...)
  |
    kmalloc(
  -	COUNT * STRIDE * SIZE
  +	array3_size(COUNT, STRIDE, SIZE)
    , ...)
  )

  // Any remaining multi-factor products, first at least 3-factor products,
  // when they're not all constants...
  @@
  expression E1, E2, E3;
  constant C1, C2, C3;
  @@
  (
    kmalloc(C1 * C2 * C3, ...)
  |
    kmalloc(
  -	(E1) * E2 * E3
  +	array3_size(E1, E2, E3)
    , ...)
  |
    kmalloc(
  -	(E1) * (E2) * E3
  +	array3_size(E1, E2, E3)
    , ...)
  |
    kmalloc(
  -	(E1) * (E2) * (E3)
  +	array3_size(E1, E2, E3)
    , ...)
  |
    kmalloc(
  -	E1 * E2 * E3
  +	array3_size(E1, E2, E3)
    , ...)
  )

  // And then all remaining 2 factors products when they're not all constants,
  // keeping sizeof() as the second factor argument.
  @@
  expression THING, E1, E2;
  type TYPE;
  constant C1, C2, C3;
  @@
  (
    kmalloc(sizeof(THING) * C2, ...)
  |
    kmalloc(sizeof(TYPE) * C2, ...)
  |
    kmalloc(C1 * C2 * C3, ...)
  |
    kmalloc(C1 * C2, ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(TYPE) * (E2)
  +	E2, sizeof(TYPE)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(TYPE) * E2
  +	E2, sizeof(TYPE)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(THING) * (E2)
  +	E2, sizeof(THING)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	sizeof(THING) * E2
  +	E2, sizeof(THING)
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	(E1) * E2
  +	E1, E2
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	(E1) * (E2)
  +	E1, E2
    , ...)
  |
  - kmalloc
  + kmalloc_array
    (
  -	E1 * E2
  +	E1, E2
    , ...)
  )

Signed-off-by: Kees Cook <[email protected]>
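For orientation, the net effect of the conversion on a call site, sketched with a placeholder struct and count:

  #include <linux/slab.h>

  struct item { int v; };	/* placeholder element type */

  static struct item *alloc_items(size_t count)
  {
  	/* before: kmalloc(count * sizeof(struct item), GFP_KERNEL) —
  	 * the open-coded multiply can wrap and under-allocate;
  	 * kmalloc_array() checks the product and returns NULL on
  	 * overflow instead. */
  	return kmalloc_array(count, sizeof(struct item), GFP_KERNEL);
  }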
2018-06-10Merge branch 'core-rseq-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds2-6/+33
Pull restartable sequence support from Thomas Gleixner:
 "The restartable sequences syscall (finally):

  After a lot of back and forth discussion and massive delays caused by the speculative distraction of maintainers, the core set of restartable sequences has finally reached a consensus.

  It comes with the basic non disputed core implementation along with support for arm, powerpc and x86 and a full set of selftests.

  It was exposed to linux-next earlier this week, so it does not fully comply with the merge window requirements, but there is really no point to drag it out for yet another cycle"

* 'core-rseq-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  rseq/selftests: Provide Makefile, scripts, gitignore
  rseq/selftests: Provide parametrized tests
  rseq/selftests: Provide basic percpu ops test
  rseq/selftests: Provide basic test
  rseq/selftests: Provide rseq library
  selftests/lib.mk: Introduce OVERRIDE_TARGETS
  powerpc: Wire up restartable sequences system call
  powerpc: Add syscall detection for restartable sequences
  powerpc: Add support for restartable sequences
  x86: Wire up restartable sequence system call
  x86: Add support for restartable sequences
  arm: Wire up restartable sequences system call
  arm: Add syscall detection for restartable sequences
  arm: Add restartable sequences support
  rseq: Introduce restartable sequences system call
  uapi/headers: Provide types_32_64.h
2018-06-08Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linuxLinus Torvalds3-9/+4
Pull arm64 updates from Catalin Marinas:
 "Apart from the core arm64 and perf changes, the Spectre v4 mitigation touches the arm KVM code and the ACPI PPTT support touches drivers/ (acpi and cacheinfo). I should have the maintainers' acks in place.

  Summary:

  - Spectre v4 mitigation (Speculative Store Bypass Disable) support for arm64 using SMC firmware call to set a hardware chicken bit

  - ACPI PPTT (Processor Properties Topology Table) parsing support and enable the feature for arm64

  - Report signal frame size to user via auxv (AT_MINSIGSTKSZ). The primary motivation is Scalable Vector Extensions which requires more space on the signal frame than the currently defined MINSIGSTKSZ

  - ARM perf patches: allow building arm-cci as module, demote dev_warn() to dev_dbg() in arm-ccn event_init(), miscellaneous cleanups

  - cmpwait() WFE optimisation to avoid some spurious wakeups

  - L1_CACHE_BYTES reverted back to 64 (for performance reasons that have to do with some network allocations) while keeping ARCH_DMA_MINALIGN to 128. cache_line_size() returns the actual hardware Cache Writeback Granule

  - Turn LSE atomics on by default in Kconfig

  - Kernel fault reporting tidying

  - Some #include and miscellaneous cleanups"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (53 commits)
  arm64: Fix syscall restarting around signal suppressed by tracer
  arm64: topology: Avoid checking numa mask for scheduler MC selection
  ACPI / PPTT: fix build when CONFIG_ACPI_PPTT is not enabled
  arm64: cpu_errata: include required headers
  arm64: KVM: Move VCPU_WORKAROUND_2_FLAG macros to the top of the file
  arm64: signal: Report signal frame size to userspace via auxv
  arm64/sve: Thin out initialisation sanity-checks for sve_max_vl
  arm64: KVM: Add ARCH_WORKAROUND_2 discovery through ARCH_FEATURES_FUNC_ID
  arm64: KVM: Handle guest's ARCH_WORKAROUND_2 requests
  arm64: KVM: Add ARCH_WORKAROUND_2 support for guests
  arm64: KVM: Add HYP per-cpu accessors
  arm64: ssbd: Add prctl interface for per-thread mitigation
  arm64: ssbd: Introduce thread flag to control userspace mitigation
  arm64: ssbd: Restore mitigation status on CPU resume
  arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation
  arm64: ssbd: Add global mitigation state accessor
  arm64: Add 'ssbd' command-line option
  arm64: Add ARCH_WORKAROUND_2 probing
  arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2
  arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1
  ...
2018-06-06Merge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-armLinus Torvalds11-80/+315
Pull ARM updates from Russell King:

 - Initial round of Spectre variant 1 and variant 2 fixes for 32-bit ARM

 - Clang support improvements

 - nommu updates for v8 MPU

 - enable ARM_MODULE_PLTS by default to avoid problems loading modules with larger kernels

 - vmlinux.lds and dma-mapping cleanups

* 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (31 commits)
  ARM: spectre-v1: fix syscall entry
  ARM: spectre-v1: add array_index_mask_nospec() implementation
  ARM: spectre-v1: add speculation barrier (csdb) macros
  ARM: KVM: report support for SMCCC_ARCH_WORKAROUND_1
  ARM: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
  ARM: spectre-v2: KVM: invalidate icache on guest exit for Brahma B15
  ARM: KVM: invalidate icache on guest exit for Cortex-A15
  ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
  ARM: spectre-v2: warn about incorrect context switching functions
  ARM: spectre-v2: add firmware based hardening
  ARM: spectre-v2: harden user aborts in kernel space
  ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit
  ARM: spectre-v2: harden branch predictor on context switches
  ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre
  ARM: bugs: add support for per-processor bug checking
  ARM: bugs: hook processor bug checking into SMP and suspend paths
  ARM: bugs: prepare processor bug infrastructure
  ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs
  ARM: 8774/1: remove no-op macro VMLINUX_SYMBOL()
  ARM: 8773/1: amba: Export amba_bustype
  ...
2018-06-06arm: Add syscall detection for restartable sequencesMathieu Desnoyers2-6/+26
Syscalls are not allowed inside restartable sequences, so add a call to rseq_syscall() at the very beginning of the system call exit path for CONFIG_DEBUG_RSEQ=y kernels. This helps us detect whether a syscall was issued inside a restartable sequence. Signed-off-by: Mathieu Desnoyers <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Joel Fernandes <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dave Watson <[email protected]> Cc: Will Deacon <[email protected]> Cc: Andi Kleen <[email protected]> Cc: "H . Peter Anvin" <[email protected]> Cc: Chris Lameter <[email protected]> Cc: Russell King <[email protected]> Cc: Andrew Hunter <[email protected]> Cc: Michael Kerrisk <[email protected]> Cc: "Paul E . McKenney" <[email protected]> Cc: Paul Turner <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Josh Triplett <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Ben Maurer <[email protected]> Cc: [email protected] Cc: Andy Lutomirski <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Linus Torvalds <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
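A sketch of the hook-up; the exit function is illustrative, and rseq_syscall() is the debug helper named above, compiled in only for CONFIG_DEBUG_RSEQ=y:

  /* Called on the way out of every syscall: if the syscall was issued
   * from inside a registered rseq critical section, the task is killed
   * with SIGSEGV, making such bugs loud in debug builds. */
  static void example_syscall_exit(struct pt_regs *regs)
  {
  	rseq_syscall(regs);
  	/* ... the normal exit-to-user work continues here ... */
  }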
2018-06-06arm: Add restartable sequences supportMathieu Desnoyers1-0/+7
Call the rseq_handle_notify_resume() function on return to userspace if TIF_NOTIFY_RESUME thread flag is set. Perform fixup on the pre-signal frame when a signal is delivered on top of a restartable sequence critical section. Signed-off-by: Mathieu Desnoyers <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Joel Fernandes <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dave Watson <[email protected]> Cc: Will Deacon <[email protected]> Cc: Andi Kleen <[email protected]> Cc: "H . Peter Anvin" <[email protected]> Cc: Chris Lameter <[email protected]> Cc: Russell King <[email protected]> Cc: Andrew Hunter <[email protected]> Cc: Michael Kerrisk <[email protected]> Cc: "Paul E . McKenney" <[email protected]> Cc: Paul Turner <[email protected]> Cc: Boqun Feng <[email protected]> Cc: Josh Triplett <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Ben Maurer <[email protected]> Cc: [email protected] Cc: Andy Lutomirski <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Linus Torvalds <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
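A sketch of the resume path described above; the function and flag handling shown are illustrative of arm's do_work_pending:

  /* On TIF_NOTIFY_RESUME, let rseq fix up the critical section before
   * the task re-enters userspace; the NULL ksig distinguishes a plain
   * return to userspace from signal delivery. */
  static void example_notify_resume(struct pt_regs *regs,
  				  unsigned int thread_flags)
  {
  	if (thread_flags & _TIF_NOTIFY_RESUME) {
  		tracehook_notify_resume(regs);
  		rseq_handle_notify_resume(NULL, regs);
  	}
  }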
2018-06-05Merge branches 'fixes', 'misc' and 'spectre' into for-linusRussell King11-80/+315
2018-06-04Merge branch 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespaceLinus Torvalds3-0/+7
Pull siginfo updates from Eric Biederman:
 "This set of changes close the known issues with setting si_code to an invalid value, and with not fully initializing struct siginfo.

  There remains work to do on nds32, arc, unicore32, powerpc, arm, arm64, ia64 and x86 to get the code that generates siginfo into a simpler and more maintainable state. Most of that work involves refactoring the signal handling code and thus careful code review.

  Also not included is the work to shrink the in kernel version of struct siginfo. That depends on getting the number of places that directly manipulate struct siginfo under control, as it requires the introduction of struct kernel_siginfo for the in kernel things.

  Overall this set of changes looks like it is making good progress, and with a little luck I will be wrapping up the siginfo work next development cycle"

* 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace: (46 commits)
  signal/sh: Stop gcc warning about an impossible case in do_divide_error
  signal/mips: Report FPE_FLTUNK for undiagnosed floating point exceptions
  signal/um: More carefully relay signals in relay_signal.
  signal: Extend siginfo_layout with SIL_FAULT_{MCEERR|BNDERR|PKUERR}
  signal: Remove unncessary #ifdef SEGV_PKUERR in 32bit compat code
  signal/signalfd: Add support for SIGSYS
  signal/signalfd: Remove __put_user from signalfd_copyinfo
  signal/xtensa: Use force_sig_fault where appropriate
  signal/xtensa: Consistenly use SIGBUS in do_unaligned_user
  signal/um: Use force_sig_fault where appropriate
  signal/sparc: Use force_sig_fault where appropriate
  signal/sparc: Use send_sig_fault where appropriate
  signal/sh: Use force_sig_fault where appropriate
  signal/s390: Use force_sig_fault where appropriate
  signal/riscv: Replace do_trap_siginfo with force_sig_fault
  signal/riscv: Use force_sig_fault where appropriate
  signal/parisc: Use force_sig_fault where appropriate
  signal/parisc: Use force_sig_mceerr where appropriate
  signal/openrisc: Use force_sig_fault where appropriate
  signal/nios2: Use force_sig_fault where appropriate
  ...
2018-06-04Merge tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mappingLinus Torvalds1-1/+1
Pull dma-mapping updates from Christoph Hellwig:

 - replace the force_dma flag with a dma_configure bus method (Nipun Gupta, although one patch is incorrectly attributed to me due to a git rebase bug)

 - use GFP_DMA32 more aggressively in dma-direct (Takashi Iwai)

 - remove PCI_DMA_BUS_IS_PHYS and rely on the dma-mapping API to do the right thing for bounce buffering

 - move dma-debug initialization to common code, and apply a few cleanups to the dma-debug code

 - cleanup the Kconfig mess around swiotlb selection

 - swiotlb comment fixup (Yisheng Xie)

 - a trivial swiotlb fix (Dan Carpenter)

 - support swiotlb on RISC-V (based on a patch from Palmer Dabbelt)

 - add a new generic dma-noncoherent dma_map_ops implementation and use it for arc, c6x and nds32

 - improve scatterlist validity checking in dma-debug (Robin Murphy)

 - add a struct device quirk to limit the dma-mask to 32-bit due to bridge/system issues, and switch x86 to use it instead of a local hack for VIA bridges

 - handle devices without a dma_mask more gracefully in the dma-direct code

* tag 'dma-mapping-4.18' of git://git.infradead.org/users/hch/dma-mapping: (48 commits)
  dma-direct: don't crash on device without dma_mask
  nds32: use generic dma_noncoherent_ops
  nds32: implement the unmap_sg DMA operation
  nds32: consolidate DMA cache maintainance routines
  x86/pci-dma: switch the VIA 32-bit DMA quirk to use the struct device flag
  x86/pci-dma: remove the explicit nodac and allowdac option
  x86/pci-dma: remove the experimental forcesac boot option
  Documentation/x86: remove a stray reference to pci-nommu.c
  core, dma-direct: add a flag 32-bit dma limits
  dma-mapping: remove unused gfp_t parameter to arch_dma_alloc_attrs
  dma-debug: check scatterlist segments
  c6x: use generic dma_noncoherent_ops
  arc: use generic dma_noncoherent_ops
  arc: fix arc_dma_{map,unmap}_page
  arc: fix arc_dma_sync_sg_for_{cpu,device}
  arc: simplify arc_dma_sync_single_for_{cpu,device}
  dma-mapping: provide a generic dma-noncoherent implementation
  dma-mapping: simplify Kconfig dependencies
  riscv: add swiotlb support
  riscv: only enable ZONE_DMA32 for 64-bit
  ...
2018-06-04Merge branch 'hch.procfs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfsLinus Torvalds2-26/+3
Pull procfs updates from Al Viro:
 "Christoph's proc_create_... cleanups series"

* 'hch.procfs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (44 commits)
  xfs, proc: hide unused xfs procfs helpers
  isdn/gigaset: add back gigaset_procinfo assignment
  proc: update SIZEOF_PDE_INLINE_NAME for the new pde fields
  tty: replace ->proc_fops with ->proc_show
  ide: replace ->proc_fops with ->proc_show
  ide: remove ide_driver_proc_write
  isdn: replace ->proc_fops with ->proc_show
  atm: switch to proc_create_seq_private
  atm: simplify procfs code
  bluetooth: switch to proc_create_seq_data
  netfilter/x_tables: switch to proc_create_seq_private
  netfilter/xt_hashlimit: switch to proc_create_{seq,single}_data
  neigh: switch to proc_create_seq_data
  hostap: switch to proc_create_{seq,single}_data
  bonding: switch to proc_create_seq_data
  rtc/proc: switch to proc_create_single_data
  drbd: switch to proc_create_single
  resource: switch to proc_create_seq_data
  staging/rtl8192u: simplify procfs code
  jfs: simplify procfs code
  ...
2018-05-31ARM: spectre-v1: fix syscall entryRussell King2-11/+32
Prevent speculation at the syscall table decoding by clamping the index used to zero on invalid system call numbers, and using the csdb speculative barrier. Signed-off-by: Russell King <[email protected]> Acked-by: Mark Rutland <[email protected]> Boot-tested-by: Tony Lindgren <[email protected]> Reviewed-by: Tony Lindgren <[email protected]>
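The actual fix is in the entry assembly, but a C-level equivalent using the kernel's generic nospec helper conveys the idea; the dispatch shown is illustrative:

  #include <linux/nospec.h>

  static long example_invoke_syscall(unsigned int nr, struct pt_regs *regs)
  {
  	if (nr >= NR_syscalls)
  		return -ENOSYS;
  	/* clamp the index so it cannot be speculated out of bounds
  	 * before the bounds check above retires */
  	nr = array_index_nospec(nr, NR_syscalls);
  	return sys_call_table[nr](regs);
  }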
2018-05-31ARM: bugs: add support for per-processor bug checkingRussell King1-0/+4
Add support for per-processor bug checking - each processor function descriptor gains a function pointer for this check, which must not be an __init function. If non-NULL, this will be called whenever a CPU enters the kernel via whichever path (boot CPU, secondary CPU startup, CPU resuming, etc.) This allows processor specific bug checks to validate that workaround bits are properly enabled by firmware via all entry paths to the kernel. Signed-off-by: Russell King <[email protected]> Reviewed-by: Florian Fainelli <[email protected]> Boot-tested-by: Tony Lindgren <[email protected]> Reviewed-by: Tony Lindgren <[email protected]> Acked-by: Marc Zyngier <[email protected]>
2018-05-31ARM: bugs: hook processor bug checking into SMP and suspend pathsRussell King3-0/+11
Check for CPU bugs when secondary processors are being brought online, and also when CPUs are resuming from a low power mode. This gives an opportunity to check that processor specific bug workarounds are correctly enabled for all paths that a CPU re-enters the kernel. Signed-off-by: Russell King <[email protected]> Reviewed-by: Florian Fainelli <[email protected]> Boot-tested-by: Tony Lindgren <[email protected]> Reviewed-by: Tony Lindgren <[email protected]> Acked-by: Marc Zyngier <[email protected]>
2018-05-31ARM: bugs: prepare processor bug infrastructureRussell King2-0/+10
Prepare the processor bug infrastructure so that it can be expanded to check for per-processor bugs. Signed-off-by: Russell King <[email protected]> Reviewed-by: Florian Fainelli <[email protected]> Boot-tested-by: Tony Lindgren <[email protected]> Reviewed-by: Tony Lindgren <[email protected]> Acked-by: Marc Zyngier <[email protected]>
2018-05-21arm_pmu: simplify arm_pmu::handle_irqMark Rutland3-9/+4
The arm_pmu::handle_irq() callback has the same prototype as a generic IRQ handler, taking the IRQ number and a void pointer argument which it must convert to an arm_pmu pointer. This means that all arm_pmu::handle_irq() take an IRQ number they never use, and all must explicitly cast the void pointer to an arm_pmu pointer. Instead, let's change arm_pmu::handle_irq to take an arm_pmu pointer, allowing these casts to be removed. The redundant IRQ number parameter is also removed. Suggested-by: Hoeun Ryu <[email protected]> Signed-off-by: Mark Rutland <[email protected]> Cc: Will Deacon <[email protected]> Signed-off-by: Will Deacon <[email protected]>
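The callback change, mirrored in an illustrative struct (the real member lives in struct arm_pmu):

  struct example_arm_pmu_ops {
  	/* was: irqreturn_t (*handle_irq)(int irq_num, void *dev);
  	 * every backend ignored irq_num and cast dev back to its
  	 * arm_pmu, so pass the typed pointer directly instead */
  	irqreturn_t (*handle_irq)(struct arm_pmu *pmu);
  };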
2018-05-19ARM: 8774/1: remove no-op macro VMLINUX_SYMBOL()Masahiro Yamada1-8/+8
VMLINUX_SYMBOL() is a no-op unless CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX is defined, and that has only ever been selected by BLACKFIN and METAG. VMLINUX_SYMBOL() is therefore unneeded for ARM-specific code. Signed-off-by: Masahiro Yamada <[email protected]> Signed-off-by: Russell King <[email protected]>
2018-05-19ARM: 8765/1: smp: Move clear_tasks_mm_cpumask() call to __cpu_die()Grygorii Strashko1-2/+1
Suspending a CPU on a RT kernel results in the following backtrace:

  | Disabling non-boot CPUs ...
  | BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
  | in_atomic(): 1, irqs_disabled(): 128, pid: 18, name: migration/1
  | INFO: lockdep is turned off.
  | irq event stamp: 122
  | hardirqs last enabled at (121): [<c06ac0ac>] _raw_spin_unlock_irqrestore+0x88/0x90
  | hardirqs last disabled at (122): [<c06abed0>] _raw_spin_lock_irq+0x28/0x5c
  | CPU: 1 PID: 18 Comm: migration/1 Tainted: G W 4.1.4-rt3-01046-g96ac8da #204
  | Hardware name: Generic DRA74X (Flattened Device Tree)
  | [<c0019134>] (unwind_backtrace) from [<c0014774>] (show_stack+0x20/0x24)
  | [<c0014774>] (show_stack) from [<c06a70f4>] (dump_stack+0x88/0xdc)
  | [<c06a70f4>] (dump_stack) from [<c006cab8>] (___might_sleep+0x198/0x2a8)
  | [<c006cab8>] (___might_sleep) from [<c06ac4dc>] (rt_spin_lock+0x30/0x70)
  | [<c06ac4dc>] (rt_spin_lock) from [<c013f790>] (find_lock_task_mm+0x9c/0x174)
  | [<c013f790>] (find_lock_task_mm) from [<c00409ac>] (clear_tasks_mm_cpumask+0xb4/0x1ac)
  | [<c00409ac>] (clear_tasks_mm_cpumask) from [<c00166a4>] (__cpu_disable+0x98/0xbc)
  | [<c00166a4>] (__cpu_disable) from [<c06a2e8c>] (take_cpu_down+0x1c/0x50)
  | [<c06a2e8c>] (take_cpu_down) from [<c00f2600>] (multi_cpu_stop+0x11c/0x158)
  | [<c00f2600>] (multi_cpu_stop) from [<c00f2a9c>] (cpu_stopper_thread+0xc4/0x184)
  | [<c00f2a9c>] (cpu_stopper_thread) from [<c0069058>] (smpboot_thread_fn+0x18c/0x324)
  | [<c0069058>] (smpboot_thread_fn) from [<c00649c4>] (kthread+0xe8/0x104)
  | [<c00649c4>] (kthread) from [<c0010058>] (ret_from_fork+0x14/0x3c)
  | CPU1: shutdown

The root cause of the above backtrace is task_lock(), which takes a sleeping lock on -RT.

To fix the issue, move the clear_tasks_mm_cpumask() call from __cpu_disable() to __cpu_die(), which is called on the thread which is asking for a target CPU to be shut down. In addition, this change restores CPU hotplug functionality on ARM: CPU1 can be unplugged/plugged many times.

Link: http://lkml.kernel.org/r/[email protected]
[bigeasy: slightly edited the commit message]
Signed-off-by: Grygorii Strashko <[email protected]>
Cc: <[email protected]>
Cc: Sekhar Nori <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Russell King <[email protected]>
2018-05-19ARM: 8757/1: NOMMU: Support PMSAv8 MPUVladimir Murzin4-0/+176
The ARMv8R/M architecture defines a new memory protection scheme, PMSAv8, which is not compatible with PMSAv7.

Key differences from PMSAv7 are:
 - Region geometry is defined by base and limit addresses
 - Addresses need to be either 32 or 64 byte aligned
 - No region priority, since overlapping regions are not allowed
 - It is unified, i.e. no distinction between data/instruction regions
 - Memory attributes are controlled via MAIR

This patch implements support for the PMSAv8 MPU defined by the ARMv8R/M architecture.

Signed-off-by: Vladimir Murzin <[email protected]>
Signed-off-by: Russell King <[email protected]>
2018-05-19ARM: 8756/1: NOMMU: Postpone MPU activation till __after_proc_initVladimir Murzin1-23/+22
This patch postpones MPU activation till __after_proc_init (which is placed in the .text section) rather than doing it in __setup_mpu. It allows us to ignore the used-only-once .head.text section while programming the PMSAv8 MPU (for PMSAv7 it stays covered anyway). Tested-by: Szemző András <[email protected]> Tested-by: Alexandre TORGUE <[email protected]> Signed-off-by: Vladimir Murzin <[email protected]> Signed-off-by: Russell King <[email protected]>
2018-05-19ARM: 8755/1: NOMMU: Reorganise __setup_mpuVladimir Murzin1-2/+5
Currently, we have mixed code placement between .head.text and .text, depending on the configuration we are building:

  _text               M       R(UP)   R(SMP)
  ======================================================
  __setup_mpu         __HEAD  __HEAD  text
  __after_proc_init   __HEAD  __HEAD  text
  __mmap_switched     text    text    text

We are going to support another variant of MPU which differs from PMSAv7 in that overlapping MPU regions are not allowed, so this patch makes the boundaries between these sections precise and consistent:

  _text               M       R(UP)   R(SMP)
  ======================================================
  __setup_mpu         __HEAD  __HEAD  __HEAD
  __after_proc_init   text    text    text
  __mmap_switched     text    text    text

Additionally, it paves a path to postpone MPU activation till __after_proc_init, where we set SCTLR anyway and can return directly to __mmap_switched.

Tested-by: Szemző András <[email protected]>
Tested-by: Alexandre TORGUE <[email protected]>
Signed-off-by: Vladimir Murzin <[email protected]>
Signed-off-by: Russell King <[email protected]>
2018-05-19ARM: 8754/1: NOMMU: Move PMSAv7 MPU under it's own namespaceVladimir Murzin2-39/+51
We are going to support a different MPU whose programming model is not compatible with PMSAv7, so move the PMSAv7 MPU under its own namespace. Tested-by: Szemző András <[email protected]> Tested-by: Alexandre TORGUE <[email protected]> Signed-off-by: Vladimir Murzin <[email protected]> Signed-off-by: Russell King <[email protected]>
2018-05-19ARM: 8771/1: kprobes: Prohibit kprobes on do_undefinstrMasami Hiramatsu1-1/+4
Prohibit kprobes on do_undefinstr because kprobes on arm are implemented via undefined instructions. This means that if we probe do_undefinstr(), it can cause an infinite recursive exception. Fixes: 24ba613c9d6c ("ARM kprobes: core code") Signed-off-by: Masami Hiramatsu <[email protected]> Cc: [email protected] Signed-off-by: Russell King <[email protected]>
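A sketch of the mechanism — whether the fix uses NOKPROBE_SYMBOL() or an equivalent annotation, the effect is to blacklist the handler for probing:

  /* arm kprobes work by planting undefined instructions, so taking a
   * probe inside the undef-instruction handler itself would recurse. */
  asmlinkage void do_undefinstr(struct pt_regs *regs)
  {
  	/* ... decode and dispatch the undefined instruction ... */
  }
  NOKPROBE_SYMBOL(do_undefinstr);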
2018-05-19ARM: kexec: record parent context registers for non-crash CPUsRussell King1-1/+1
How we got to machine_crash_nonpanic_core() (iow, from an IPI, etc) is not interesting for debugging a crash. The more interesting context is the parent context prior to the IPI being received. Record the parent context register state rather than the register state in machine_crash_nonpanic_core(), which is more relevant to the failing condition. Signed-off-by: Russell King <[email protected]>
2018-05-19ARM: kexec: fix kdump register saving on panic()Russell King1-12/+22
When a panic() occurs, the kexec code uses smp_send_stop() to stop the other CPUs, but this results in the CPU register state not being saved, and gdb is unable to inspect the state of other CPUs. Commit 0ee59413c967 ("x86/panic: replace smp_send_stop() with kdump friendly version in panic path") addressed the issue on x86, but ignored other architectures. Address the issue on ARM by splitting out the crash stop implementation to crash_smp_send_stop() and adding the necessary protection. Signed-off-by: Russell King <[email protected]>
2018-05-16proc: introduce proc_create_single{,_data}Christoph Hellwig2-26/+3
Variants of proc_create{,_data} that directly take a seq_file show callback and drastically reduces the boilerplate code in the callers. All trivial callers converted over. Signed-off-by: Christoph Hellwig <[email protected]>
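What the new helper buys at a call site — a minimal sketch with placeholder names:

  #include <linux/proc_fs.h>
  #include <linux/seq_file.h>

  /* proc_create_single() takes the show callback directly; the
   * file_operations/seq_open() boilerplate is gone. */
  static int example_show(struct seq_file *m, void *v)
  {
  	seq_puts(m, "example\n");
  	return 0;
  }

  static int __init example_proc_init(void)
  {
  	proc_create_single("example", 0444, NULL, example_show);
  	return 0;
  }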
2018-05-09arch: remove the ARCH_PHYS_ADDR_T_64BIT config symbolChristoph Hellwig1-1/+1
Instead, select PHYS_ADDR_T_64BIT directly for the 32-bit architectures that need a 64-bit phys_addr_t type. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: James Hogan <[email protected]>
2018-04-25signal: Ensure every siginfo we send has all bits initializedEric W. Biederman3-0/+7
Call clear_siginfo to ensure every stack allocated siginfo is properly initialized before being passed to the signal sending functions. Note: It is not safe to depend on C initializers to initialize struct siginfo on the stack because C is allowed to skip holes when initializing a structure. The initialization of struct siginfo in tracehook_report_syscall_exit was moved from the helper user_single_step_siginfo into tracehook_report_syscall_exit itself, to make it clear that the local variable siginfo gets fully initialized. In a few cases the scope of struct siginfo has been reduced to make it clear that the siginfo is not used on other paths in the function in which it is declared. Instances of using memset to initialize siginfo have been replaced with calls to clear_siginfo for clarity. Signed-off-by: "Eric W. Biederman" <[email protected]>
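The pattern this enforces, sketched with a hypothetical fault path (field values are illustrative):

  /* Zero the whole structure first: a designated initializer may leave
   * padding holes uninitialized, leaking kernel stack to userspace. */
  static void example_send_fault_sig(unsigned long addr)
  {
  	struct siginfo info;

  	clear_siginfo(&info);
  	info.si_signo = SIGSEGV;
  	info.si_errno = 0;
  	info.si_code  = SEGV_MAPERR;
  	info.si_addr  = (void __user *)addr;
  	force_sig_info(SIGSEGV, &info, current);
  }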
2018-04-09Merge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-armLinus Torvalds3-317/+156
Pull ARM updates from Russell King: "A number of core ARM changes: - Refactoring linker script by Nicolas Pitre - Enable source fortification - Add support for Cortex R8" * 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: ARM: decompressor: fix warning introduced in fortify patch ARM: 8751/1: Add support for Cortex-R8 processor ARM: 8749/1: Kconfig: Add ARCH_HAS_FORTIFY_SOURCE ARM: simplify and fix linker script for TCM ARM: linker script: factor out TCM bits ARM: linker script: factor out vectors and stubs ARM: linker script: factor out unwinding table sections ARM: linker script: factor out stuff for the .text section ARM: linker script: factor out stuff for the DISCARD section ARM: linker script: factor out some common definitions between XIP and non-XIP
2018-04-02Merge branch 'syscalls-next' of git://git.kernel.org/pub/scm/linux/kernel/git/brodo/linuxLinus Torvalds1-1/+1
Pull removal of in-kernel calls to syscalls from Dominik Brodowski:
 "System calls are interaction points between userspace and the kernel. Therefore, system call functions such as sys_xyzzy() or compat_sys_xyzzy() should only be called from userspace via the syscall table, but not from elsewhere in the kernel.

  At least on 64-bit x86, it will likely be a hard requirement from v4.17 onwards to not call system call functions in the kernel: It is better to use a different calling convention for system calls there, where struct pt_regs is decoded on-the-fly in a syscall wrapper which then hands processing over to the actual syscall function. This means that only those parameters which are actually needed for a specific syscall are passed on during syscall entry, instead of filling in six CPU registers with random user space content all the time (which may cause serious trouble down the call chain). Those x86-specific patches will be pushed through the x86 tree in the near future.

  Moreover, rules on how data may be accessed may differ between kernel data and user data. This is another reason why calling sys_xyzzy() is generally a bad idea, and -- at most -- acceptable in arch-specific code.

  This patchset removes all in-kernel calls to syscall functions in the kernel with the exception of arch/. On top of this, it cleans up the three places where many syscalls are referenced or prototyped, namely kernel/sys_ni.c, include/linux/syscalls.h and include/linux/compat.h"

* 'syscalls-next' of git://git.kernel.org/pub/scm/linux/kernel/git/brodo/linux: (109 commits)
  bpf: whitelist all syscalls for error injection
  kernel/sys_ni: remove {sys_,sys_compat} from cond_syscall definitions
  kernel/sys_ni: sort cond_syscall() entries
  syscalls/x86: auto-create compat_sys_*() prototypes
  syscalls: sort syscall prototypes in include/linux/compat.h
  net: remove compat_sys_*() prototypes from net/compat.h
  syscalls: sort syscall prototypes in include/linux/syscalls.h
  kexec: move sys_kexec_load() prototype to syscalls.h
  x86/sigreturn: use SYSCALL_DEFINE0
  x86: fix sys_sigreturn() return type to be long, not unsigned long
  x86/ioport: add ksys_ioperm() helper; remove in-kernel calls to sys_ioperm()
  mm: add ksys_readahead() helper; remove in-kernel calls to sys_readahead()
  mm: add ksys_mmap_pgoff() helper; remove in-kernel calls to sys_mmap_pgoff()
  mm: add ksys_fadvise64_64() helper; remove in-kernel call to sys_fadvise64_64()
  fs: add ksys_fallocate() wrapper; remove in-kernel calls to sys_fallocate()
  fs: add ksys_p{read,write}64() helpers; remove in-kernel calls to syscalls
  fs: add ksys_truncate() wrapper; remove in-kernel calls to sys_truncate()
  fs: add ksys_sync_file_range helper(); remove in-kernel calls to syscall
  kernel: add ksys_setsid() helper; remove in-kernel call to sys_setsid()
  kernel: add ksys_unshare() helper; remove in-kernel calls to sys_unshare()
  ...
2018-04-02mm: add ksys_fadvise64_64() helper; remove in-kernel call to sys_fadvise64_64()Dominik Brodowski1-1/+1
Using the ksys_fadvise64_64() helper allows us to avoid the in-kernel calls to the sys_fadvise64_64() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_fadvise64_64(). Some compat stubs called sys_fadvise64(), which then just passed through the arguments to sys_fadvise64_64(). Get rid of this indirection, and call ksys_fadvise64_64() directly. This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/[email protected] Cc: Andrew Morton <[email protected]> Cc: [email protected] Signed-off-by: Dominik Brodowski <[email protected]>
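The shape of the split, sketched per the description above; the SYSCALL_DEFINE4 wrapper is illustrative:

  /* In-kernel callers use the ksys_ helper; the syscall itself becomes
   * a thin wrapper around it, so nothing inside the kernel needs to
   * call sys_fadvise64_64() anymore. */
  SYSCALL_DEFINE4(fadvise64_64, int, fd, loff_t, offset, loff_t, len,
  		int, advice)
  {
  	return ksys_fadvise64_64(fd, offset, len, advice);
  }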
2018-03-27Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-armLinus Torvalds1-5/+7
Pull ARM fixes from Russell King: "A small number of small fixes for ARM, mostly for some build issues. One fix for a regression caused by the cpu hotplug conversion from a few kernel versions ago" * 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm: ARM: 8750/1: deflate_xip_data.sh: minor fixes ARM: 8748/1: mm: Define vdso_start, vdso_end as array ARM: 8747/1: make CONFIG_DEBUG_WX depend on MMU ARM: 8746/1: vfp: Go back to clearing vfp_current_hw_state[]
2018-03-24ARM: 8748/1: mm: Define vdso_start, vdso_end as arrayJinbum Park1-5/+7
Define vdso_start and vdso_end as arrays to avoid a compile-time analysis error when building with CONFIG_FORTIFY_SOURCE, and, since vdso_start and vdso_end are used only in vdso.c, move their extern declarations from vdso.h to vdso.c. If the kernel is built with CONFIG_FORTIFY_SOURCE, a compile-time error is triggered by this code: - if (memcmp(&vdso_start, "\177ELF", 4)) The size of "&vdso_start" is recognized as 1 byte, but n is 4, so a compile-time error is reported. Acked-by: Kees Cook <[email protected]> Signed-off-by: Jinbum Park <[email protected]> Signed-off-by: Russell King <[email protected]>
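The resulting declaration pattern, sketched (the check function is illustrative):

  /* As arrays, vdso_start/vdso_end have unknown size rather than "one
   * char", so FORTIFY_SOURCE no longer flags the 4-byte memcmp. */
  extern char vdso_start[], vdso_end[];

  static int __init example_vdso_check(void)
  {
  	if (memcmp(vdso_start, "\177ELF", 4))
  		return -EINVAL;	/* not an ELF image */
  	return 0;
  }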
2018-03-09ARM: simplify and fix linker script for TCMNicolas Pitre3-54/+14
Let's put the TCM stuff in the __init section directly. No need for a separately freed memory area. Remove redundant linker sections, as well as comments that were more confusing than no comments at all. Finally make it XIP compatible by using LOAD_OFFSET in the section LMA specification. Signed-off-by: Nicolas Pitre <[email protected]> Tested-by: Chris Brandt <[email protected]>
2018-03-09ARM: linker script: factor out TCM bitsNicolas Pitre3-108/+62
This is a plain move with identical results, and therefore still broken in the XIP case. Signed-off-by: Nicolas Pitre <[email protected]> Tested-by: Chris Brandt <[email protected]>