path: root/arch/mips/kernel
Age | Commit message | Author | Files | Lines
2017-02-14 | MIPS: sysmips: Remove duplicated include from syscall.c | Wei Yongjun | 1 | -1/+0
Remove duplicated include. Fixes: 7c0f6ba682b9 ("Replace <asm/uaccess.h> with <linux/uaccess.h> globally") Signed-off-by: Wei Yongjun <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Markus Elfring <[email protected]> Cc: Wei Yongjun <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15213/ Signed-off-by: James Hogan <[email protected]>
2017-02-14 | MIPS: Unify perf counter register definitions | James Hogan | 1 | -32/+23
Unify definitions for MIPS performance counter register fields in mipsregs.h rather than duplicating them in perf_events and oprofile. This will allow future patches to use them to expose performance counters to KVM guests. Signed-off-by: James Hogan <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Robert Richter <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15212/ Signed-off-by: James Hogan <[email protected]>
2017-02-13 | MIPS: sync-r4k: Fix KERN_CONT fallout | Matt Redfearn | 1 | -2/+2
Since commit 4bcc595ccd80 ("printk: reinstate KERN_CONT for printing continuation lines") the output of counter synchronisation has been split across lines: [ 0.665181] Synchronize counters for CPU 1: [ 0.678578] done. Fix this by using pr_cont, and replace printk with pr_info. Signed-off-by: Matt Redfearn <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15195/ Signed-off-by: James Hogan <[email protected]>
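As an aside, the pr_info()/pr_cont() pairing the fix relies on looks roughly like the sketch below; the surrounding function and message text are illustrative, not the exact kernel code.

    void synchronise_count_master(int cpu)          /* illustrative context */
    {
            pr_info("Synchronize counters for CPU %d: ", cpu);
            /* ... perform the counter synchronisation ... */
            pr_cont("done.\n");     /* pr_cont() continues the current line */
    }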
2017-02-13 | MIPS: IRQ Stack: Fix erroneous jal to plat_irq_dispatch | Matt Redfearn | 1 | -1/+1
Commit dda45f701c9d ("MIPS: Switch to the irq_stack in interrupts") changed both the normal and vectored interrupt handlers. Unfortunately the vectored version, "except_vec_vi_handler", was incorrectly modified to unconditionally jal to plat_irq_dispatch, rather than doing a jalr to the vectored handler that has been set up. This is ok for many platforms which set the vectored handler to plat_irq_dispatch anyway, but will cause problems with platforms that use other handlers. Fixes: dda45f701c9d ("MIPS: Switch to the irq_stack in interrupts") Signed-off-by: Matt Redfearn <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15110/ Signed-off-by: James Hogan <[email protected]>
2017-02-13 | MIPS: Fix cacheinfo overflow | James Hogan | 1 | -1/+3
The recently added MIPS cacheinfo support used a macro populate_cache() to populate the cacheinfo structures depending on which caches are present. However the macro contains multiple statements without enclosing them in a do {} while (0) loop, so the L2 and L3 cache conditionals in populate_cache_leaves() only conditionalised the first statement in the macro. This overflows the buffer allocated by detect_cache_attributes(), resulting in boot failures under QEMU where neither the L2 nor L3 caches are present. Enclose the macro statements in a do {} while (0) block to keep the whole macro inside the conditionals. Fixes: ef462f3b64e9 ("MIPS: Add cacheinfo support") Reported-by: Guenter Roeck <[email protected]> Signed-off-by: James Hogan <[email protected]> Tested-by: Guenter Roeck <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Justin Chen <[email protected]> Cc: Florian Fainelli <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15276/
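The do {} while (0) idiom the fix applies makes a multi-statement macro expand to a single statement, so it stays entirely inside an un-braced if. A generic sketch follows; the macro and field names are made up for illustration, not the real populate_cache() definition.

    /* broken: only the first statement is guarded by the caller's "if" */
    #define POPULATE_CACHE_BROKEN(c)        \
            this_leaf->size = (c).size;     \
            this_leaf++;

    /* fixed: the whole body expands as a single statement */
    #define POPULATE_CACHE_FIXED(c) do {    \
            this_leaf->size = (c).size;     \
            this_leaf++;                    \
    } while (0)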
2017-02-01 | fs/binfmt: Convert obsolete cputime type to nsecs | Frederic Weisbecker | 2 | -20/+4
Use the new nsec based cputime accessors as part of the whole cputime conversion from cputime_t to nsecs. Signed-off-by: Frederic Weisbecker <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Fenghua Yu <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Stanislaw Gruszka <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Wanpeng Li <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2017-01-25 | MIPS: syscall: Return directly in mips_mmap() | Markus Elfring | 1 | -9/+2
* Return an error code without storing it in an intermediate variable. * Delete the local variable "result" which became unnecessary with this refactoring. Signed-off-by: Markus Elfring <[email protected]> Cc: Paul Gortmaker <[email protected]> Cc: [email protected] Cc: LKML <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15073/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-25 | MIPS: MT: Move an assignment for the variable "retval" in mipsmt_sys_sched_setaffinity() | Markus Elfring | 1 | -2/+3
A local variable was set to an error code in one case before a concrete error situation was detected. Thus move the corresponding assignment into an if branch so that the error code is only set when that failure is actually detected. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <[email protected]> Cc: Paul Gortmaker <[email protected]> Cc: [email protected] Cc: LKML <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15072/ Signed-off-by: Ralf Baechle <[email protected]>
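The pattern being described is the usual one of assigning the error code only on the failure path; an illustrative sketch with hypothetical surrounding code, not the literal diff:

    /* before: retval is set to -ENOMEM even when the allocation succeeds */
    retval = -ENOMEM;
    if (!alloc_cpumask_var(&allowed, GFP_KERNEL))
            goto out;

    /* after: the error code is assigned where the failure is detected */
    if (!alloc_cpumask_var(&allowed, GFP_KERNEL)) {
            retval = -ENOMEM;
            goto out;
    }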
2017-01-25 | MIPS: Return directly in 32_mmap2() | Markus Elfring | 1 | -8/+3
* Return a failure indication without storing it in an intermediate variable. * Delete the local variable "error" which became unnecessary with this refactoring. Signed-off-by: Markus Elfring <[email protected]> Cc: Paul Gortmaker <[email protected]> Cc: [email protected] Cc: LKML <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/15071/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-24 | MIPS: Fix printk continuations in cpu-bugs64.c | James Hogan | 1 | -12/+12
64-bit pre-r6 kernels output the following broken printk continuation lines during boot:
Checking for the multiply/shift bug...
no.
Checking for the daddiu bug...
no.
Checking for the daddi bug...
no.
Fix the printk continuations in cpu-bugs64.c to use pr_cont to restore the correct output:
Checking for the multiply/shift bug... no.
Checking for the daddiu bug... no.
Checking for the daddi bug... no.
Signed-off-by: James Hogan <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14916/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Export {copy, clear}_page functions alongside their definitions | Paul Burton | 2 | -25/+1
Now that EXPORT_SYMBOL can be used from assembly source, move the EXPORT_SYMBOL invocations for the copy_page & clear_page functions to be alongside their definitions. With this change there are no longer any symbols exported from mips_ksyms.c so remove the file. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14515/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Export memcpy & memset functions alongside their definitions | Paul Burton | 1 | -24/+0
Now that EXPORT_SYMBOL can be used from assembly source, move the EXPORT_SYMBOL invocations for the memcpy & memset functions & variants thereof to be alongside their definitions. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14514/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Export string functions alongside their definitions | Paul Burton | 1 | -24/+0
Now that EXPORT_SYMBOL can be used from assembly source, move the EXPORT_SYMBOL invocations for the strlen*, strnlen* & strncpy* functions to be alongside their definitions. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14513/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Export csum functions alongside their definitions | Paul Burton | 1 | -8/+0
Now that EXPORT_SYMBOL can be used from assembly source, move the EXPORT_SYMBOL invocations for the csum_partial_* functions to be alongside their definitions. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14512/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Export invalid_pte_table alongside its definition | Paul Burton | 1 | -2/+0
It's unclear to me why this wasn't always the case, but move the EXPORT_SYMBOL invocation for invalid_pte_table to be alongside its definition. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14511/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Export _mcount alongside its definition | Paul Burton | 2 | -4/+3
Now that EXPORT_SYMBOL can be used from assembly source, move the EXPORT_SYMBOL invocation for _mcount to be alongside its definition. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14525/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Export _save_fp & _save_msa alongside their definitions | Paul Burton | 3 | -8/+5
Now that EXPORT_SYMBOL can be used from assembly source, move the EXPORT_SYMBOL invocations for _save_fp & _save_msa to be alongside their definitions. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14509/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: uprobes: Remove __weak attribute from arch_uprobe_copy_ixol. | Marcin Nowakowski | 1 | -1/+1
Arch-specific implementation of arch_uprobe_copy_ixol is expected to override the weak implementation in generic code. As currently both implementations are marked as weak, it is up to the linker to chose one. Remove the __weak attribute from MIPS code to make sure the correct version is used. Fixes: 40e084a506eb ("MIPS: Add uprobes support.") Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14660/ Signed-off-by: Ralf Baechle <[email protected]>
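For reference, the weak-symbol arrangement being fixed looks roughly like this; the prototype is reproduced from memory, so treat the exact signature as an assumption.

    /* generic code (kernel/events/uprobes.c): default, overridable copy */
    void __weak arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
                                      void *src, unsigned long len)
    {
            /* plain copy into the instruction slot */
    }

    /* arch code: must NOT also be __weak, otherwise the linker is free to
     * pick either definition */
    void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
                               void *src, unsigned long len)
    {
            /* MIPS-specific copy plus cache maintenance */
    }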
2017-01-03 | MIPS: Add cacheinfo support | Justin Chen | 2 | -1/+86
Add cacheinfo support for MIPS architectures. Use information from the cpuinfo_mips struct to populate the cacheinfo struct. This allows an architecture-agnostic approach; however, it also means that if cache information is not properly populated within the cpuinfo_mips struct (e.g. c-r3k.c), there is nothing we can do. Signed-off-by: Justin Chen <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14650/ Signed-off-by: Ralf Baechle <[email protected]>
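A rough sketch of the translation involved, filling one struct cacheinfo leaf from a MIPS cache descriptor; the descriptor field names are recalled from memory and should be treated as assumptions rather than the exact patch.

    static void populate_leaf(struct cacheinfo *leaf, struct cache_desc *d,
                              enum cache_type type, unsigned int level)
    {
            leaf->type = type;                      /* data/instruction/unified */
            leaf->level = level;                    /* 1, 2 or 3 */
            leaf->coherency_line_size = d->linesz;
            leaf->ways_of_associativity = d->ways;
            leaf->number_of_sets = d->sets;
            leaf->size = d->linesz * d->sets * d->ways;
    }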
2017-01-03 | MIPS: kexec: add debug info about the new kexec'ed image | Marcin Nowakowski | 1 | -0/+22
Print details of the new kexec image loaded. Based on the original code from commit 221f2c770e10d ("arm64/kexec: Add pr_debug output") Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14614/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: kexec: Do not reserve invalid crashkernel memory on boot | Marcin Nowakowski | 1 | -0/+5
Do not reserve memory for the crashkernel if the commandline argument points to a wrong location. This can happen if the location is specified wrong or if the same commandline is reused when starting the crashkernel - in the latter case the reserved memory would point to the location from which the crashkernel is executing. Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14612/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: fix mem=X@Y commandline processing | Marcin Nowakowski | 1 | -0/+4
When a memory offset is specified through the commandline, add the memory in range PHYS_OFFSET:Y as a reserved memory area. Otherwise the bootmem allocator is initialised with the low page equal to min_low_pfn = PHYS_OFFSET, and free_all_bootmem will process pages starting from min_low_pfn instead of PFN(Y). Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14613/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: relocate: Optionally relocate the DTB | Marcin Nowakowski | 1 | -2/+36
If the DTB is located in the target memory area for the relocated kernel it needs to be relocated as well before kernel relocation takes place. After copying the DTB use the new plat_fdt_relocated() API from the relocated kernel to ensure the relocated kernel updates any information that it may have cached about the location of the DTB. plat_fdt_relocated is declared as a weak symbol so that platforms that do not require it do not need to implement the method. Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14616/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Use early_init_fdt_reserve_self to protect DTB location | Marcin Nowakowski | 2 | -0/+11
early_init_fdt_reserve_self is used to tell the boot memory allocator that a memory is occupied by the DTB, so add it in the MIPS init code to ensure information about the DTB is added to the boot memory array. Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14610/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: init: Ensure bootmem does not corrupt reserved memory | Marcin Nowakowski | 1 | -3/+71
Current init code initialises the bootmem allocator with all of the low memory that it assumes is available, but does not check for reserved memory blocks, which can lead to corruption of data that may be stored there. Move bootmem's allocation map to a location that does not cross any reserved regions. Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14609/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: init: Ensure reserved memory regions are not added to bootmem | Marcin Nowakowski | 1 | -0/+4
Memories managed through boot_mem_map are generally expected to define non-crossing areas. However, if part of a larger memory block is marked as reserved, it would still be added to the bootmem allocator as an available block and could end up being overwritten by the allocator. Prevent this by explicitly marking the memory as reserved if it exists in the range used by the bootmem allocator. Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14608/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Do not request resources for crashkernel if one isn't defined | Marcin Nowakowski | 1 | -0/+3
When KEXEC is enabled but crashkernel details are not passed through the kernel commandline, unnecessary resources are requested (start==end==0). Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14607/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Move register dump routines out of ptrace code | Marcin Nowakowski | 2 | -32/+46
Current register dump methods for MIPS are implemented inside ptrace methods, but there will be other uses in the kernel for them, so keep them separately in process.c and use those definitions for ptrace instead. Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14587/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: kexec: remove SMP_DUMP | Marcin Nowakowski | 2 | -18/+1
SMP_DUMP was added as a new IPI signal when kexec support was added for Cavium Octeon CPUs (commit 7aa1c8f47e7e "MIPS: kdump: Add support"). However, the new signal doesn't appear to ever have had a proper handler added (the octeon_message_functions[] array has an empty handler for it), and generic IPI handlers now trigger a BUG() on an unhandled signal. As the method is unused, remove it completely and replace its only invocation with a smp_call_function(). [[email protected]: Renumber SMP_ASK_C0COUNT to avoid numbering gaps.] Signed-off-by: Marcin Nowakowski <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14630/ Signed-off-by: Ralf Baechle <[email protected]>
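The replacement is simply the generic cross-CPU call API; a sketch with a hypothetical callback name, not the actual kdump callback:

    static void crash_dump_on_cpu(void *unused)
    {
            /* capture this CPU's register state for kdump */
    }

    /* at the former SMP_DUMP call site: run the callback on all other
     * online CPUs and wait for them to finish */
    smp_call_function(crash_dump_on_cpu, NULL, 1);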
2017-01-03 | MIPS: Remove r2_emul_return from struct thread_info | Paul Burton | 3 | -21/+0
The r2_emul_return field in struct thread_info was used in order to take an alternate codepath when returning to userland, which (besides not implementing certain features) effectively used the eretnc instruction in place of eret. The difference is that eretnc doesn't clear LLBit, and therefore doesn't cause a linked load & store sequence to fail due to emulation like eret would. The reason eret would usually be used to clear LLBit is so that after context switching we ensure that a load performed by one task doesn't influence another task. However commit 7c151d3d5d7a ("MIPS: Make use of the ERETNC instruction on MIPS R6") which introduced the r2_emul_return field and conditional use of eretnc also for some reason began explicitly clearing LLBit during context switches - despite retaining the use of eret for everything but returns from the pre-r6 instruction emulation code. As LLBit is cleared upon context switches anyway, simplify this by using eretnc unconditionally for MIPSr6 kernels. This allows us to remove the 4 byte r2_emul_return boolean from struct thread_info, simplify the return to user code in entry.S and avoid the overhead of tracking & checking state which we don't need. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14408/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: traps: Ensure L1 & L2 ECC checking match for CM3 systems | Paul Burton | 1 | -3/+60
On systems with CM3, we must ensure that the L1 & L2 ECC enables are set to the same value; the hardware presumes this, and cache corruption can occur when it is not the case. Support enabling & disabling the L2 ECC checking on CM3 systems where this is controlled via a GCR, and ensure that it matches the state of L1 ECC checking. Remove I6400 from the switch statement, which it will no longer hit and which was incorrect since the L2 ECC enable bit isn't in the CP0 ErrCtl register. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14413/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: R2-on-R6 MULTU/MADDU/MSUBU emulation bugfix | Leonid Yegoshin | 1 | -6/+6
Emulation of the MIPS MULTU, MADDU and MSUBU instructions requires the HI/LO registers to be converted to signed 32 bits before the 64-bit sign extension on MIPS64. The bug was found when running a MIPS32 R2 test application on a MIPS64 R6 kernel. Fixes: b0a668fb2038 ("MIPS: kernel: mips-r2-to-r6-emul: Add R2 emulator for MIPS R6") Signed-off-by: Leonid Yegoshin <[email protected]> Reported-by: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14043/ Signed-off-by: Ralf Baechle <[email protected]>
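Schematically, the fix amounts to truncating each 64-bit emulation result to 32 bits before sign-extending it into the saved HI/LO registers; this is a hedged sketch, not the literal emulator code:

    u64 res = (u64)rs * (u64)rt;            /* unsigned 32x32 -> 64 multiply */

    /* HI/LO architecturally hold 32-bit values here: sign-extend each half
     * as a signed 32-bit quantity when storing into the 64-bit registers */
    regs->lo = (s64)(s32)res;               /* low 32 bits of the product */
    regs->hi = (s64)(s32)(res >> 32);       /* high 32 bits of the product */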
2017-01-03 | MIPS: BMIPS: Migrate interrupts during bmips_cpu_disable | Florian Fainelli | 1 | -0/+1
While we properly disabled the per-CPU timer interrupt, we also need to make sure that all interrupts that can possibly have this CPU in their smp_affinity mask also have a chance to see this interrupt migrated to a CPU not being taken offline. [[email protected]: Fix merge conflict.] Fixes: 230b6ff57552 ("MIPS: BMIPS: Mask off timer IRQs when hot-unplugging a CPU") Signed-off-by: Florian Fainelli <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14488/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: SMP-CPS: Don't BUG if a CPU fails to start | Matt Redfearn | 1 | -1/+5
If there is no online CPU within a core which could receive the IPI to start another VP in that core, a BUG() is triggered. Instead print a warning and gracefully handle the failure such that the system remains usable, albeit without the requested secondary CPU. Signed-off-by: Matt Redfearn <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: Qais Yousef <[email protected]> Cc: Andrew Morton <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14504/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: SMP: Remove cpu_callin_map | Matt Redfearn | 3 | -4/+0
The previous commit made cpu_callin_map redundant, since it is no longer used to signal secondary CPUs starting, or going offline. Remove it now. Signed-off-by: Matt Redfearn <[email protected]> Cc: Sebastian Andrzej Siewior <[email protected]> Cc: Qais Yousef <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Kevin Cernekee <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Andrew Morton <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: Florian Fainelli <[email protected]> Cc: Anna-Maria Gleixner <[email protected]> Cc: Adam Buchbinder <[email protected]> Cc: Yang Shi <[email protected]> Cc: David Daney <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14503/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: SMP: Use a completion event to signal CPU up | Matt Redfearn | 2 | -9/+10
If a secondary CPU failed to start, for any reason, the CPU requesting the secondary to start would get stuck in the loop waiting for the secondary to be present in the cpu_callin_map. Rather than that, use a completion event to signal that the secondary CPU has started and is waiting to synchronise counters. Since the CPU presence will no longer be marked in cpu_callin_map, remove the redundant test from arch_cpu_idle_dead(). Signed-off-by: Matt Redfearn <[email protected]> Cc: Maciej W. Rozycki <[email protected]> Cc: Jiri Slaby <[email protected]> Cc: Paul Gortmaker <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Qais Yousef <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: Marcin Nowakowski <[email protected]> Cc: Andrew Morton <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14502/ Signed-off-by: Ralf Baechle <[email protected]>
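A minimal sketch of the completion-based handshake described above, assuming a single static completion shared by the boot and secondary CPUs (the timeout value is illustrative):

    static DECLARE_COMPLETION(cpu_running);

    /* boot CPU: after kicking the secondary, wait with a timeout instead
     * of spinning forever on cpu_callin_map */
    if (!wait_for_completion_timeout(&cpu_running, msecs_to_jiffies(5000)))
            pr_crit("CPU%d failed to start\n", cpu);

    /* secondary CPU: signal the boot CPU once it is up and about to
     * synchronise counters */
    complete(&cpu_running);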
2017-01-03 | MIPS: Handle microMIPS jumps in the same way as MIPS32/MIPS64 jumps | Paul Burton | 1 | -0/+2
is_jump_ins() checks for plain jump ("j") instructions since commit e7438c4b893e ("MIPS: Fix sibling call handling in get_frame_info") but that commit didn't make the same change to the microMIPS code, leaving it inconsistent with the MIPS32/MIPS64 code. Handle the microMIPS encoding of the jump instruction too such that it behaves consistently. Signed-off-by: Paul Burton <[email protected]> Fixes: e7438c4b893e ("MIPS: Fix sibling call handling in get_frame_info") Cc: Tony Wu <[email protected]> Cc: [email protected] Cc: <[email protected]> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14533/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Calculate microMIPS ra properly when unwinding the stack | Paul Burton | 1 | -20/+63
get_frame_info() calculates the offset of the return address within a stack frame simply by dividing the bottom 16 bits of the instruction, treated as a signed integer, by the size of a long. Whilst this works for the MIPS32 & MIPS64 ISAs where the sw or sd instructions are used, it's incorrect for microMIPS where encodings differ. The result is that we typically completely fail to unwind the stack on microMIPS. Fix this by adjusting is_ra_save_ins() to calculate the return address offset, and take into account the various different encodings there, in the same place as we consider whether an instruction is storing the ra/$31 register. With this we are now able to unwind the stack for kernels targeting the microMIPS ISA; for example we can produce:
Call Trace:
[<80109e1f>] show_stack+0x63/0x7c
[<8011ea17>] __warn+0x9b/0xac
[<8011ea45>] warn_slowpath_fmt+0x1d/0x20
[<8013fe53>] register_console+0x43/0x314
[<8067c58d>] of_setup_earlycon+0x1dd/0x1ec
[<8067f63f>] early_init_dt_scan_chosen_stdout+0xe7/0xf8
[<8066c115>] do_early_param+0x75/0xac
[<801302f9>] parse_args+0x1dd/0x308
[<8066c459>] parse_early_options+0x25/0x28
[<8066c48b>] parse_early_param+0x2f/0x38
[<8066e8cf>] setup_arch+0x113/0x488
[<8066c4f3>] start_kernel+0x57/0x328
---[ end trace 0000000000000000 ]---
Whereas previously we only produced:
Call Trace:
[<80109e1f>] show_stack+0x63/0x7c
---[ end trace 0000000000000000 ]---
Signed-off-by: Paul Burton <[email protected]> Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.") Cc: Leonid Yegoshin <[email protected]> Cc: [email protected] Cc: <[email protected]> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14532/ Signed-off-by: Ralf Baechle <[email protected]>
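For the MIPS32/MIPS64 case, the return-address offset comes straight from the signed 16-bit immediate of the sw/sd ra, offset(sp) instruction; a simplified sketch assuming the usual union mips_instruction and struct mips_frame_info layouts (microMIPS needs the extra encodings the commit describes):

    if (is_ra_save_ins(&insn)) {
            /* signed 16-bit store offset of ra within the frame */
            int offset = insn.i_format.simmediate;

            /* convert the byte offset into a word index into the frame */
            info->pc_offset = offset / (int)sizeof(long);
    }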
2017-01-03 | MIPS: Fix is_jump_ins() handling of 16b microMIPS instructions | Paul Burton | 1 | -3/+8
is_jump_ins() checks 16b instruction fields without verifying that the instruction is indeed 16b, as is done by is_ra_save_ins() & is_sp_move_ins(). Add the appropriate check. Signed-off-by: Paul Burton <[email protected]> Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.") Cc: Leonid Yegoshin <[email protected]> Cc: [email protected] Cc: <[email protected]> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14531/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Fix get_frame_info() handling of microMIPS function size | Paul Burton | 1 | -7/+5
get_frame_info() is meant to iterate over up to the first 128 instructions within a function, but for microMIPS kernels it will not reach that many instructions unless the function is 512 bytes long since we calculate the maximum number of instructions to check by dividing the function length by the 4 byte size of a union mips_instruction. In microMIPS kernels this won't do since instructions are variable length. Fix this by instead checking whether the pointer to the current instruction has reached the end of the function, and use max_insns as a simple constant to check the number of iterations against. Signed-off-by: Paul Burton <[email protected]> Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.") Cc: Leonid Yegoshin <[email protected]> Cc: [email protected] Cc: <[email protected]> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14530/ Signed-off-by: Ralf Baechle <[email protected]>
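Schematically, the fix swaps a count derived from the function size for a pointer bound plus a constant cap; the field names below follow struct mips_frame_info as recalled, so treat them as assumptions:

    const void *func_end = info->func + info->func_size;
    union mips_instruction *ip = info->func;
    unsigned int i, max_insns = 128;

    for (i = 0; i < max_insns && (const void *)ip < func_end; i++) {
            /* decode one instruction; on microMIPS it may be 16 or 32 bits
             * wide, so advance ip by the decoded size rather than by 4 */
    }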
2017-01-03 | MIPS: Prevent unaligned accesses during stack unwinding | Paul Burton | 1 | -35/+35
During stack unwinding we call a number of functions to determine what type of instruction we're looking at. The union mips_instruction pointer provided to them may be pointing at a 2 byte, but not 4 byte, aligned address & we thus cannot directly access the 4 byte wide members of the union mips_instruction. To avoid this is_ra_save_ins() copies the required half-words of the microMIPS instruction to a correctly aligned union mips_instruction on the stack, which it can then access safely. The is_jump_ins() & is_sp_move_ins() functions do not correctly perform this temporary copy, and instead attempt to directly dereference 4 byte fields which may be misaligned and lead to an address exception. Fix this by copying the instruction halfwords to a temporary union mips_instruction in get_frame_info() such that we can provide a 4 byte aligned union mips_instruction to the is_*_ins() functions and they do not need to deal with misalignment themselves. Signed-off-by: Paul Burton <[email protected]> Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.") Cc: Leonid Yegoshin <[email protected]> Cc: [email protected] Cc: <[email protected]> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14529/ Signed-off-by: Ralf Baechle <[email protected]>
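A sketch of the aligned-copy approach described, assuming the instruction stream is at least 2-byte aligned:

    union mips_instruction insn;

    /* ip may be only 2-byte aligned on microMIPS: copy both halfwords into
     * a properly aligned buffer before any 32-bit field is dereferenced */
    memcpy(&insn, ip, 2 * sizeof(u16));

    if (is_jump_ins(&insn))
            return 0;       /* handled as before, but without misaligned loads */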
2017-01-03 | MIPS: Clear ISA bit correctly in get_frame_info() | Paul Burton | 1 | -5/+2
get_frame_info() can be called in microMIPS kernels with the ISA bit already clear. For example this happens when unwind_stack_by_address() is called because we begin with a PC that has the ISA bit set & subtract the (odd) offset from the preceding symbol (which does not have the ISA bit set). Since get_frame_info() unconditionally subtracts 1 from the PC in microMIPS kernels it incorrectly misaligns the address it then attempts to access code at, leading to an address error exception. Fix this by using msk_isa16_mode() to clear the ISA bit, which allows get_frame_info() to function regardless of whether it is provided with a PC that has the ISA bit set or not. Signed-off-by: Paul Burton <[email protected]> Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.") Cc: Leonid Yegoshin <[email protected]> Cc: [email protected] Cc: <[email protected]> # v3.10+ Patchwork: https://patchwork.linux-mips.org/patch/14528/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Ensure bss section ends on a long-aligned address | Paul Burton | 1 | -1/+1
When clearing the .bss section in kernel_entry we do so using LONG_S instructions, and branch whilst the current write address doesn't equal the end of the .bss section minus the size of a long integer. The .bss section always begins at a long-aligned address and we always increment the write pointer by the size of a long integer - we therefore rely upon the .bss section ending at a long-aligned address. If this is not the case then the long-aligned write address can never be equal to the non-long-aligned end address & we will continue to increment past the end of the .bss section, attempting to zero the rest of memory. Despite this requirement that .bss end at a long-aligned address we pass 0 as the end alignment requirement to the BSS_SECTION macro and thus don't guarantee any particular alignment, allowing us to hit the error condition described above. Fix this by instead passing 8 bytes as the end alignment argument to the BSS_SECTION macro, ensuring that the end of the .bss section is always at least long-aligned. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14526/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Relocatable: Provide plat_post_relocation hook | Steven J. Hill | 1 | -0/+20
This hook provides the platform the chance to perform any required setup before the boot processor switches to the relocated kernel. The relocated kernel has been copied and fixed up ready for execution at this point. Secondary CPUs may wish to switch to it early. There is also the opportunity for the platform to abort jumping to the relocated kernel if there is anything wrong with the chosen offset. Signed-off-by: Matt Redfearn <[email protected]> Signed-off-by: Steven J. Hill <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14651/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Switch to the irq_stack in interrupts | Matt Redfearn | 1 | -5/+76
When entering interrupt context via handle_int or except_vec_vi, switch to the irq_stack of the current CPU if it is not already in use. The current stack pointer is masked with the thread size and compared to the base of the irq stack. If it does not match then the stack pointer is set to the top of that stack, otherwise this is a nested irq being handled on the irq stack so the stack pointer should be left as it was. The in-use stack pointer is placed in the callee saved register s1. It will be saved to the stack when plat_irq_dispatch is invoked and can be restored once control returns here. Signed-off-by: Matt Redfearn <[email protected]> Acked-by: Jason A. Donenfeld <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14743/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Stack unwinding while on IRQ stack | Matt Redfearn | 1 | -1/+14
Within unwind stack, check if the stack pointer being unwound is within the CPU's irq_stack and if so use that page rather than the task's stack page. Signed-off-by: Matt Redfearn <[email protected]> Acked-by: Jason A. Donenfeld <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Adam Buchbinder <[email protected]> Cc: Maciej W. Rozycki <[email protected]> Cc: Marcin Nowakowski <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: Jiri Slaby <[email protected]> Cc: Andrew Morton <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14741/ Signed-off-by: Ralf Baechle <[email protected]>
2017-01-03 | MIPS: Introduce irq_stack | Matt Redfearn | 2 | -0/+12
Allocate a per-cpu irq stack for use within interrupt handlers. Also add a utility function on_irq_stack to determine if a given stack pointer is within the irq stack for that cpu. Signed-off-by: Matt Redfearn <[email protected]> Acked-by: Jason A. Donenfeld <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: Petr Mladek <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: Aaron Tomlin <[email protected]> Cc: Andrew Morton <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/14740/ Signed-off-by: Ralf Baechle <[email protected]>
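A sketch of the helper described, assuming a per-CPU array of irq stack base pointers and a fixed IRQ_STACK_SIZE:

    extern void *irq_stack[NR_CPUS];

    static inline bool on_irq_stack(int cpu, unsigned long sp)
    {
            unsigned long low = (unsigned long)irq_stack[cpu];
            unsigned long high = low + IRQ_STACK_SIZE;

            return (low <= sp) && (sp <= high);
    }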
2016-12-25 | Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 5 | -7/+7
Pull timer type cleanups from Thomas Gleixner: "This series does a tree wide cleanup of types related to timers/timekeeping.
- Get rid of cycles_t and use a plain u64. The type is not really helpful and caused more confusion than clarity
- Get rid of the ktime union. The union has become useless as we use the scalar nanoseconds storage unconditionally now. The 32bit timespec alike storage got removed due to the Y2038 limitations some time ago. That leaves the odd union access around for no reason. Clean it up.
Both changes have been done with coccinelle and a small amount of manual mopping up"
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  ktime: Get rid of ktime_equal()
  ktime: Cleanup ktime_set() usage
  ktime: Get rid of the union
  clocksource: Use a plain u64 instead of cycle_t
2016-12-25 | Merge branch 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 | -1/+1
Pull SMP hotplug notifier removal from Thomas Gleixner: "This is the final cleanup of the hotplug notifier infrastructure. The series has been reintegrated in the last two days because a new driver using the old infrastructure came in via the SCSI tree.
Summary:
- convert the last leftover drivers utilizing notifiers
- fixup for a completely broken hotplug user
- prevent setup of already used states
- removal of the notifiers
- treewide cleanup of hotplug state names
- consolidation of state space
There is a sphinx based documentation pending, but that needs review from the documentation folks"
* 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/armada-xp: Consolidate hotplug state space
  irqchip/gic: Consolidate hotplug state space
  coresight/etm3/4x: Consolidate hotplug state space
  cpu/hotplug: Cleanup state names
  cpu/hotplug: Remove obsolete cpu hotplug register/unregister functions
  staging/lustre/libcfs: Convert to hotplug state machine
  scsi/bnx2i: Convert to hotplug state machine
  scsi/bnx2fc: Convert to hotplug state machine
  cpu/hotplug: Prevent overwriting of callbacks
  x86/msr: Remove bogus cleanup from the error path
  bus: arm-ccn: Prevent hotplug callback leak
  perf/x86/intel/cstate: Prevent hotplug callback leak
  ARM/imx/mmcd: Fix broken cpu hotplug handling
  scsi: qedi: Convert to hotplug state machine
2016-12-25 | clocksource: Use a plain u64 instead of cycle_t | Thomas Gleixner | 5 | -7/+7
There is no point in having an extra type for extra confusion. u64 is unambiguous. Conversion was done with the following coccinelle script:
@rem@
@@
-typedef u64 cycle_t;

@fix@
typedef cycle_t;
@@
-cycle_t
+u64
Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: John Stultz <[email protected]>