path: root/arch
2017-06-30  sparc64: Use indirect calls in hamming weight stubs  (David S. Miller; 1 file, -8/+8)
Otherwise, depending upon link order, the branch relocation limits could be exceeded. Signed-off-by: David S. Miller <[email protected]> Signed-off-by: Masahiro Yamada <[email protected]>
2017-06-29  ARM: 8685/1: ensure memblock-limit is pmd-aligned  (Doug Berger; 1 file, -4/+4)
The pmd containing memblock_limit is cleared by prepare_page_table() which creates the opportunity for early_alloc() to allocate unmapped memory if memblock_limit is not pmd aligned causing a boot-time hang. Commit 965278dcb8ab ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM") attempted to resolve this problem, but there is a path through the adjust_lowmem_bounds() routine where if all memory regions start and end on pmd-aligned addresses the memblock_limit will be set to arm_lowmem_limit. Since arm_lowmem_limit can be affected by the vmalloc early parameter, the value of arm_lowmem_limit may not be pmd-aligned. This commit corrects this oversight such that memblock_limit is always rounded down to pmd-alignment. Fixes: 965278dcb8ab ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM") Signed-off-by: Doug Berger <[email protected]> Suggested-by: Mark Rutland <[email protected]> Signed-off-by: Russell King <[email protected]>
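A minimal sketch of the rounding described above, using the kernel's round_down() helper and PMD_SIZE (the exact expression and surrounding code in the commit may differ):

    /* Always round the limit down to a PMD boundary so that early_alloc()
     * never hands out memory sitting above a PMD cleared by
     * prepare_page_table(). */
    memblock_limit = round_down(memblock_limit, PMD_SIZE);
    memblock_set_current_limit(memblock_limit);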
2017-06-29  x86/ftrace: Exclude functions in head64.c from function-tracing  (Kirill A. Shutemov; 1 file, -0/+1)
A recent commit moved most logic of early boot up from startup_64() written in assembly to __startup_64() written in C. Fengguang reported breakage due to the change. It was tracked down to CONFIG_FUNCTION_TRACER being enabled. Tracing this function is not possible because it's invoked from the earliest boot stage before the relocation fixups have been done. It is the function doing the relocation. Exclude it from being built with tracer stubs. Fixes: c88d71508e36 ("x86/boot/64: Rewrite startup_64() in C") Reported-by: Fengguang Wu <[email protected]> Signed-off-by: Kirill A. Shutemov <[email protected]> Acked-by: Steven Rostedt <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected]
2017-06-29  perf/x86/intel/uncore: Fix wrong box pointer check  (Kan Liang; 1 file, -1/+1)
A NULL box must not be initialized; doing so will crash the system. The issue looks like it was caused by a typo. It went unnoticed because in practice there is no NULL box, and since most boxes are enabled by default the init code path is not critical. Fixes: fff4b87e594a ("perf/x86/intel/uncore: Make package handling more robust") Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected]
2017-06-29  arm64: ptrace: Fix incorrect get_user() use in compat_vfp_set()  (Dave Martin; 1 file, -3/+5)
Now that compat_vfp_get() uses the regset API to copy the FPSCR value out to userspace, compat_vfp_set() looks inconsistent. In particular, compat_vfp_set() will fail if called with kbuf != NULL && ubuf == NULL (which is valid usage according to the regset API). This patch fixes compat_vfp_set() to use user_regset_copyin(), similarly to compat_vfp_get(). This also squashes a sparse warning triggered by the cast that drops __user when calling get_user(). Signed-off-by: Dave Martin <[email protected]> Signed-off-by: Will Deacon <[email protected]>
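A rough sketch of the copy-in pattern this converts to, assuming the user_regset_copyin() helper from the regset API; the FPSCR_OFFSET macro is only illustrative, and pos/count/kbuf/ubuf are the usual regset .set() parameters:

    compat_ulong_t fpscr;
    int ret;

    /* user_regset_copyin() accepts either a kernel buffer (kbuf) or a
     * user buffer (ubuf), so kbuf != NULL && ubuf == NULL works too. */
    ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &fpscr,
                             FPSCR_OFFSET, FPSCR_OFFSET + sizeof(fpscr));
    if (ret)
        return ret;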
2017-06-29  arm64: ptrace: Remove redundant overrun check from compat_vfp_set()  (Dave Martin; 1 file, -3/+0)
compat_vfp_set() checks for userspace trying to write an excessive amount of data to the regset. However this check is conspicuous for its absence from every other _set() in the arm64 ptrace implementation. In fact, the core ptrace_regset() already clamps userspace's iov_len to the regset size before the individual regset .{get,set}() methods get called. This patch removes the redundant check. Signed-off-by: Dave Martin <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2017-06-29  arm64: ptrace: Avoid setting compat FP[SC]R to garbage if get_user fails  (Dave Martin; 1 file, -2/+4)
If get_user() fails when reading the new FPSCR value from userspace in compat_vfp_get(), then garbage* will be written to the task's FPSR and FPCR registers. This patch prevents this by checking the return from get_user() first. [*] Actually, zero, due to the behaviour of get_user() on error, but that's still not what userspace expects. Fixes: 478fcb2cdb23 ("arm64: Debugging support") Signed-off-by: Dave Martin <[email protected]> Signed-off-by: Will Deacon <[email protected]>
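The fix amounts to checking get_user() before consuming the value; a sketch, with FPSR_MASK/FPCR_MASK standing in for the real mask definitions:

    compat_ulong_t fpscr;
    u32 new_fpsr, new_fpcr;

    if (get_user(fpscr, (compat_ulong_t __user *)ubuf))
        return -EFAULT;             /* leave FPSR/FPCR untouched on fault */

    new_fpsr = fpscr & FPSR_MASK;   /* placeholder masks, not the kernel's */
    new_fpcr = fpscr & FPCR_MASK;   /* actual definitions */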
2017-06-29  arm: sun8i: orangepi-2: use internal phy-mode  (LABBE Corentin; 1 file, -1/+1)
Since the PHY used is internal, simply set phy-mode as internal. Signed-off-by: Corentin Labbe <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-06-29  arm: sun8i: nanopi-neo: use internal phy-mode  (LABBE Corentin; 1 file, -1/+1)
Since the PHY used is internal, simply set phy-mode as internal. Signed-off-by: Corentin Labbe <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-06-29  arm: sun8i: orangepi-one: use internal phy-mode  (LABBE Corentin; 1 file, -1/+1)
Since the PHY used is internal, simply set phy-mode as internal. Signed-off-by: Corentin Labbe <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-06-29  arm: sun8i: orangepi-zero: use internal phy-mode  (LABBE Corentin; 1 file, -1/+1)
Since the PHY used is internal, simply set phy-mode as internal. Signed-off-by: Corentin Labbe <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-06-29  arm: sun8i: orangepipc: use internal phy-mode  (LABBE Corentin; 1 file, -1/+1)
Since the PHY used is internal, simply set phy-mode as internal. Signed-off-by: Corentin Labbe <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-06-29  KVM: LAPIC: Fix lapic timer injection delay  (Wanpeng Li; 2 files, -2/+6)
If the TSC deadline timer is programmed really close to the deadline, or even in the past, the computation in vmx_set_hv_timer() programs the absolute target TSC value into the VMCS preemption timer field with delta == 0. The result is a vmentry immediately followed by a VMX preemption-timer-fired vmexit, and the lapic timer injection is delayed by the duration of that round trip. The lapic timer, which is emulated by an hrtimer, can actually handle this correctly. This patch fixes it by firing the lapic timer and injecting a timer interrupt immediately during the next vmentry if the TSC deadline timer is programmed really close to the deadline or even in the past. This saves ~300 cycles on the tsc_deadline_timer test of apic.flat. Cc: Paolo Bonzini <[email protected]> Cc: Radim Krčmář <[email protected]> Signed-off-by: Wanpeng Li <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2017-06-29  KVM: lapic: reorganize restart_apic_timer  (Paolo Bonzini; 3 files, -50/+45)
Move the code to cancel the hv timer into the caller, just before it starts the hrtimer. Check availability of the hv timer in start_hv_timer. Signed-off-by: Paolo Bonzini <[email protected]>
2017-06-29  KVM: lapic: reorganize start_hv_timer  (Paolo Bonzini; 1 file, -13/+27)
There are many cases in which the hv timer must be canceled. Split out a new function to avoid duplication. Signed-off-by: Paolo Bonzini <[email protected]>
2017-06-29  Merge tag 'actions-arm-soc+sps-for-4.13' of ...  (Arnd Bergmann; 2 files, -2/+35)
git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions into next/soc Pull "Actions Semi ARM SoC for v4.13 #2" from Andreas Färber: This adds SMP code to bring up the remaining S500 CPU cores by reusing a helper factored out of the SPS power domains driver. * tag 'actions-arm-soc+sps-for-4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions: ARM: owl: smp: Implement SPS power-gating for CPU2 and CPU3 soc: actions: owl-sps: Factor out owl_sps_set_pg() for power-gating soc: actions: Add Owl SPS dt-bindings: power: Add Owl SPS power domains
2017-06-29  arm64: fix endianness annotation for __apply_alternatives()/get_alt_insn()  (Luc Van Oostenryck; 1 file, -4/+4)
get_alt_insn() is used to read and create ARM instructions, which are always stored in memory in little-endian order. These values are thus correctly converted to/from native order when processed but the pointers used to hold the address of these instructions are declared as for native order values. Fix this by declaring the pointers as __le32* instead of u32* and make the few appropriate needed changes like removing the unneeded cast '(u32*)' in front of __ALT_PTR()'s definition. Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2017-06-29  arm64: fix endianness annotation in get_kaslr_seed()  (Luc Van Oostenryck; 1 file, -1/+1)
In the flattened device tree format, all integer properties are in big-endian order. Here the property "kaslr-seed" is read from the fdt and then correctly converted to native order (via fdt64_to_cpu()) but the pointer used for this is not annotated as being for big-endian. Fix this by declaring the pointer as fdt64_t instead of u64 (fdt64_t being itself typedefed to __be64). Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
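A sketch of reading a big-endian FDT property with a matching annotation, using libfdt's fdt_getprop() and fdt64_to_cpu(); the fdt and node variables are assumed to come from the caller:

    const fdt64_t *prop;    /* fdt64_t is __be64, matching the FDT encoding */
    int len;
    u64 seed = 0;

    prop = fdt_getprop(fdt, node, "kaslr-seed", &len);
    if (prop && len == sizeof(*prop))
        seed = fdt64_to_cpu(*prop);     /* big-endian -> native, exactly once */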
2017-06-29  arm64: add missing conversion to __wsum in ip_fast_csum()  (Luc Van Oostenryck; 1 file, -1/+1)
The ARM64 implementation of ip_fast_csum() does most of the work in 128 or 64 bits and calls csum_fold() to finalize. csum_fold() itself takes a __wsum argument, to ensure that this value is always a 32bit native-order value. Fix this by adding the sadly needed '__force' to cast the native 'sum' to the type '__wsum'. Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
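The final fold then reads, schematically:

    unsigned long sum;
    /* ... 128/64-bit accumulation of the IP header words ... */
    return csum_fold((__force __wsum)sum);  /* __force marks the deliberate
                                             * cast to the bitwise __wsum type
                                             * and silences sparse */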
2017-06-29  multi_v7_defconfig: Enable OMAP MTD and DM816 AHCI  (Tom Rini; 1 file, -0/+3)
A wide variety of TI platforms support NAND via the CONFIG_MTD_NAND_OMAP2 driver (and related BCH options), so enable this. In addition, multi_v7_defconfig supports the dm8168-evm and that supports root being on a SATA drive, so build the DM816 AHCI driver into the resulting kernel as well. Cc: Russell King <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Mihail Grigorov <[email protected]> Signed-off-by: Tom Rini <[email protected]> Acked-by: Tony Lindgren <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]>
2017-06-29  Merge tag 'actions-arm64-soc-for-4.13' of ...  (Arnd Bergmann; 1 file, -0/+6)
git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions into next/arm64 Pull "Actions Semi ARM64 SoC for v4.13" from Andreas Färber: This adds a Kconfig symbol for DTs and drivers being added. * tag 'actions-arm64-soc-for-4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions: arm64: Prepare Actions Semi S900
2017-06-29  Merge tag 'actions-arm64-dt-for-4.13' of ...  (Arnd Bergmann; 4 files, -0/+205)
git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions into next/dt64 Pull "Actions Semi ARM64 based SoC DT for 4.13" from Andreas Färber: This adds an initial DT for the S900 SoC and a devboard based on it. * tag 'actions-arm64-dt-for-4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions: arm64: dts: Add Actions Semi S900 and Bubblegum-96 dt-bindings: Add vendor prefix for uCRobotics
2017-06-29  Merge tag 'actions-arm-dt-for-4.13' of ...  (Arnd Bergmann; 4 files, -0/+236)
git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions into next/dt Pull "Actions Semi ARM based SoC DT for v4.13" from Andreas Färber: This adds an initial DT for the S500 SoC and a devboard based on it. * tag 'actions-arm-dt-for-4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions: ARM: dts: owl-s500: Add SPS node ARM: dts: owl-s500: Set CPU enable-method dt-bindings: arm: cpus: Add S500 enable-method ARM: dts: Add Actions Semi S500 and LeMaker Guitar dt-bindings: arm: Document Actions Semi S900 dt-bindings: timer: Document Owl timer dt-bindings: arm: Document Actions Semi S500 dt-bindings: Add vendor prefix for Actions Semi
2017-06-29  Merge tag 'actions-arm-soc-for-4.13' of ...  (Arnd Bergmann; 7 files, -0/+284)
git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions into next/soc Pull "Actions Semi ARM SoC for v4.13" from Andreas Färber: This adds a Kconfig symbol and mach-actions with board and SMP code, plus a MAINTAINERS entry. * tag 'actions-arm-soc-for-4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/afaerber/linux-actions: MAINTAINERS: Update Actions Semi section with SPS ARM: owl: Implement CPU enable-method for S500 MAINTAINERS: Add Actions Semi Owl section ARM: Prepare Actions Semi S500
2017-06-29  Merge tag 'qcom-defconfig-for-4.13-2' of ...  (Arnd Bergmann; 1 file, -0/+1)
git://git.kernel.org/pub/scm/linux/kernel/git/agross/linux into next/defconfig Pull "Qualcomm ARM Based defconfig Updates for v4.13 - Part 2" from Andy Gross: * Enable RPMSG_QCOM_SMD to get boards working again * tag 'qcom-defconfig-for-4.13-2' of git://git.kernel.org/pub/scm/linux/kernel/git/agross/linux: ARM: qcom_defconfig: enable RPMSG_QCOM_SMD
2017-06-29  Merge tag 'amlogic-dt64-2' of ...  (Arnd Bergmann; 5 files, -0/+116)
git://git.kernel.org/pub/scm/linux/kernel/git/khilman/linux-amlogic into next/dt64 Pull "Amlogic 64-bit DT changes for v4.13 (round 2)" from Kevin Hilman: - support new SPI controller driver - several more leaf clocks exposed to DT - New board: S905x LibreTech CC board * tag 'amlogic-dt64-2' of git://git.kernel.org/pub/scm/linux/kernel/git/khilman/linux-amlogic: ARM64: dts: meson-gxl: Add Libre Technology CC support dt-bindings: arm: amlogic: Add Libre Technology CC board dt-bindings: add Libre Technology vendor prefix ARM64: dts: meson-gx: Add SPICC nodes clk: meson-gxbb: un-export the CPU clock clk: meson-gxbb: expose UART clocks clk: meson-gxbb: expose SPICC gate clk: meson-gxbb: expose spdif master clock clk: meson-gxbb: expose i2s master clock clk: meson-gxbb: expose spdif clock gates
2017-06-29  Merge tag 'amlogic-dt-2' of ...  (Arnd Bergmann; 5 files, -21/+232)
git://git.kernel.org/pub/scm/linux/kernel/git/khilman/linux-amlogic into next/dt Merge "Amlogic 32-bit DT changes for v4.13 (round 2)" from Kevin Hilman: - greatly expands DT clock support for meson8b * tag 'amlogic-dt-2' of git://git.kernel.org/pub/scm/linux/kernel/git/khilman/linux-amlogic: (22 commits) ARM: dts: meson: use the real ethernet clock on Meson8 and Meson8b ARM: dts: meson8b: add the SCU device node ARM: dts: meson: add USB support on Meson8 and Meson8b ARM: dts: meson: add the hardware random number generator ARM: dts: meson8: add reserved memory zones ARM: dts: meson: add the SAR ADC ARM: dts: meson8: add the pins for the SDIO controller ARM: dts: meson8: add the PWM_E and PWM_F pins ARM: dts: meson: use GIC_SPI and IRQ_TYPE_EDGE_RISING macros ARM: dts: meson: use C preprocessor friendly include syntax ARM: dts: meson8: fix the IR receiver pins clk: meson8b: export the ethernet gate clock clk: meson8b: export the USB clocks clk: meson8b: export the gate clock for the HW random number generator clk: meson8b: export the SDIO clock clk: meson8b: export the SAR ADC clocks clk: meson-gxbb: un-export the CPU clock clk: meson-gxbb: expose UART clocks clk: meson-gxbb: expose SPICC gate clk: meson-gxbb: expose spdif master clock ...
2017-06-29  Merge tag 'v4.12-rc7' into devel  (Linus Walleij; 166 files, -734/+1342)
Linux 4.12-rc7
2017-06-29  arm64: fix endianness annotation in acpi_parking_protocol.c  (Luc Van Oostenryck; 1 file, -2/+2)
Here both variables 'cpu_id' and 'entry_point' are read via read[lq]_relaxed() from a little-endian annotated pointer and then used as native-endian values. This is correct since the read[lq]() family of functions internally does a little-to-native endian conversion. But in this case it is wrong to declare these variables as little-endian, since they are native ones. Fix this by declaring these variables as 'u32' or 'u64' instead of '__le32' / '__le64'. Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2017-06-29  arm64: use readq() instead of readl() to read 64bit entry_point  (Luc Van Oostenryck; 1 file, -1/+1)
Here the entry point, declared as a 64-bit integer, is read from a pointer to a 64-bit integer, but the read is done via readl_relaxed(), which is for 32-bit quantities. All the high bits will thus be lost, which changes the meaning of the test against zero done later. Fix this by using readq_relaxed() instead, as is appropriate for 64-bit quantities. Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
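In short, a 64-bit mailbox field needs the 64-bit accessor; a sketch (the mailbox pointer and field name are illustrative):

    u64 entry_point;

    /* readl_relaxed() would truncate to 32 bits and could make a non-zero
     * entry point look like zero; readq_relaxed() keeps all 64 bits. */
    entry_point = readq_relaxed(&mailbox->entry_point);
    if (entry_point) {
        /* ... secondary CPU has somewhere to jump to ... */
    }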
2017-06-29  arm64: fix endianness annotation for reloc_insn_movw() & reloc_insn_imm()  (Luc Van Oostenryck; 1 file, -7/+7)
Here the functions reloc_insn_movw() & reloc_insn_imm() are used to read, modify and write back ARM instructions, which are always stored in memory in little-endian order. These values are thus correctly converted to/from native order but the pointers used to hold their addresses are declared as for native order values. Fix this by declaring the pointers as __le32* and remove the casts that are now unneeded. Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2017-06-29  arm64: fix endianness annotation for aarch64_insn_write()  (Luc Van Oostenryck; 1 file, -3/+2)
aarch64_insn_write() is used to write an instruction. As in-memory instructions on ARM64 are always stored in little-endian order, this function, taking the instruction opcode in native order, correctly converts it to little-endian before passing it to the helper function __aarch64_insn_write() which does the actual write. This is all good, but the variable and argument holding the converted value are not annotated as little-endian values and are left as native ones. Fix this by adjusting the prototype of the helper and using the result of cpu_to_le32() directly, without going through an intermediate variable (which was not a distinct one but the same as the one holding the native value). Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2017-06-29  arm64: fix endianness annotation in aarch64_insn_read()  (Luc Van Oostenryck; 1 file, -1/+1)
The function aarch64_insn_read() is used to read an instruction. On ARM64, instructions are always stored in little-endian order, and the function thus correctly does a little-to-native endian conversion on the value just read. However, the variable used to hold the value before the conversion is not declared as a little-endian value but as a native one. Fix this by using the correct type for the declaration: __le32. Note: This only works because the function reading the value, probe_kernel_read(), takes a void pointer and void pointers are endian-agnostic. Otherwise probe_kernel_read() would also need to be properly annotated (or worse, specialized). Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
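A sketch of the annotated read path, using the probe_kernel_read() interface available at the time:

    __le32 val;     /* in-memory instructions are little-endian */
    int ret;

    ret = probe_kernel_read(&val, addr, AARCH64_INSN_SIZE);
    if (!ret)
        *insnp = le32_to_cpu(val);      /* convert once, at the boundary */

    return ret;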
2017-06-29  arm64: fix endianness annotation in call_undef_hook()  (Luc Van Oostenryck; 1 file, -6/+8)
Here we're reading thumb or ARM instructions, which are always stored in memory in little-endian order. These values are thus correctly converted to native order but the intermediate value should be annotated as for little-endian values. Fix this by declaring the intermediate var as __le32 or __le16. Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2017-06-29  arm64: fix endianness annotation for debug-monitors.c  (Luc Van Oostenryck; 1 file, -6/+8)
Here we're reading thumb or ARM instructions, which are always stored in memory in little-endian order. These values are thus correctly converted to native order but the intermediate value should be annotated as for little-endian values. Fix this by declaring the intermediate var as __le32 or __le16. Signed-off-by: Luc Van Oostenryck <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2017-06-29  MIPS: VDSO: Fix a mismatch between comment and preprocessor constant  (Aleksandar Markovic; 1 file, -1/+1)
Sync the comment with its preprocessor constant counterpart. Signed-off-by: Aleksandar Markovic <[email protected]> Cc: James Hogan <[email protected]> Cc: Miodrag Dinic <[email protected]> Cc: Paul Burton <[email protected]> Cc: Petar Jovanovic <[email protected]> Cc: Raghu Gandham <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16641/ Signed-off-by: Ralf Baechle <[email protected]>
2017-06-29  MIPS: VDSO: Add implementation of gettimeofday() fallback  (Goran Ferenc; 1 file, -1/+23)
This patch adds a gettimeofday_fallback() function that wraps an assembly invocation of the gettimeofday() syscall using __NR_gettimeofday. This function is used if the pure VDSO implementation of gettimeofday() does not succeed for any reason. Its implementation is enclosed in "#ifdef CONFIG_MIPS_CLOCK_VSYSCALL" to be in sync with the similar arrangement for __vdso_gettimeofday(). If syscall invocation via __NR_gettimeofday fails, register a3 will be set. So, after the syscall, register a3 is tested and the return value is negated if it's set. Signed-off-by: Goran Ferenc <[email protected]> Signed-off-by: Miodrag Dinic <[email protected]> Signed-off-by: Aleksandar Markovic <[email protected]> Cc: Douglas Leung <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: Petar Jovanovic <[email protected]> Cc: Raghu Gandham <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16640/ Signed-off-by: Ralf Baechle <[email protected]>
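A sketch of such a fallback, following the MIPS convention described above (a3 is set on error, v0 holds the value to be negated); the register bindings follow the usual pattern for MIPS vDSO syscall wrappers, and the clobber list is abbreviated here:

    static __always_inline long gettimeofday_fallback(struct timeval *_tv,
                                                      struct timezone *_tz)
    {
        register struct timezone *tz asm("a1") = _tz;
        register struct timeval *tv asm("a0") = _tv;
        register long ret asm("v0");
        register long nr asm("v0") = __NR_gettimeofday;
        register long error asm("a3");

        asm volatile(
        "       syscall\n"
        : "=r" (ret), "=r" (error)
        : "r" (tv), "r" (tz), "r" (nr)
        : "memory");            /* real code also clobbers caller-saved regs */

        return error ? -ret : ret;
    }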
2017-06-29  MIPS: VDSO: Add implementation of clock_gettime() fallback  (Goran Ferenc; 1 file, -3/+22)
This patch adds a clock_gettime_fallback() function that wraps an assembly invocation of the clock_gettime() syscall using __NR_clock_gettime. This function is used if the pure VDSO implementation of clock_gettime() does not succeed for any reason. For example, it is called if the clkid parameter of clock_gettime() is not one of the clkids listed in the switch-case block of the function __vdso_clock_gettime() (one such clkid is CLOCK_BOOTTIME). If syscall invocation via __NR_clock_gettime fails, register a3 will be set. So, after the syscall, register a3 is tested and the return value is negated if it's set. Signed-off-by: Goran Ferenc <[email protected]> Signed-off-by: Miodrag Dinic <[email protected]> Signed-off-by: Aleksandar Markovic <[email protected]> Cc: Douglas Leung <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: Petar Jovanovic <[email protected]> Cc: Raghu Gandham <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16639/ Signed-off-by: Ralf Baechle <[email protected]>
2017-06-29  MIPS: VDSO: Fix conversions in do_monotonic()/do_monotonic_coarse()  (Goran Ferenc; 2 files, -6/+6)
Fix an incorrect calculation in the do_monotonic() and do_monotonic_coarse() functions that in turn caused incorrect values to be returned by the vdso version of the clock_gettime() system call on mips64 when its clock ID parameter was CLOCK_MONOTONIC or CLOCK_MONOTONIC_COARSE. Consider these variables and their types on mips32 and mips64: tk->wall_to_monotonic.tv_sec is s64, s64 (kernel/vdso.c); vdso_data.wall_to_mono_sec is u32, u32 (kernel/vdso.c); to_mono_sec is u32, u32 (vdso/gettimeofday.c); ts->tv_sec is s32, s64 (vdso/gettimeofday.c). In the mips64 case, the u32 vdso_data.wall_to_mono_sec variable is updated from the 64-bit signed variable tk->wall_to_monotonic.tv_sec (kernel/vdso.c:76), which is a negative number holding the time passed from 1970-01-01 to the time boot started. This 64-bit signed value is currently around 47+ years, in seconds. For instance, let this value be -1489757461, or 11111111111111111111111111111111 10100111001101000001101011101011 in binary. By updating the 32-bit vdso_data.wall_to_mono_sec variable, we lose the upper 32 bits (the sign 1's). The to_mono_sec variable is a parameter of the do_monotonic() and do_monotonic_coarse() functions which holds the vdso_data.wall_to_mono_sec value. Its value needs to be added (or subtracted, considering it holds a negative value taken from tk->wall_to_monotonic.tv_sec) to the current time passed from 1970-01-01 (ts->tv_sec), which is again something like 47+ years, but increased by the time passed from boot to the current time. ts->tv_sec is 32 bits long on a 32-bit architecture and 64 bits long on a 64-bit architecture. Consider the update of ts->tv_sec (vdso/gettimeofday.c:55 & 167): ts->tv_sec += to_mono_sec; mips32 case: this update is performed correctly, since both ts->tv_sec and to_mono_sec are 32 bits long and the sign in to_mono_sec is preserved; the implicit conversion from u32 to s32 is done correctly. mips64 case: this update is wrong, since the implicit conversion is not done correctly. The reason is that the conversion is from u32 to s64: to_mono_sec is 32 bits long for both the mips32 and mips64 cases, so the upper 32 bits of the converted to_mono_sec value will be zeros. So, in order to make the MIPS64 implementation work properly for the MONOTONIC and MONOTONIC_COARSE clock ids on mips64, the size of the wall_to_mono_sec variable in the mips_vdso_data union and the respective parameters of the do_monotonic() and do_monotonic_coarse() functions should be changed from u32 to u64. For consistency, this size change from u32 to u64 is also done for the wall_to_mono_nsec variable and the corresponding function parameters. As far as similar situations for other architectures are concerned, let's take a look at arm. Arm has two distinct vdso_data structures for the 32-bit and 64-bit cases, and arm's wall_to_mono_sec and wall_to_mono_nsec are u32 for 32-bit and u64 for 64-bit. On the other hand, MIPS has only one structure (mips_vdso_data), hence the need to change the size of the above mentioned parameters. Signed-off-by: Goran Ferenc <[email protected]> Signed-off-by: Miodrag Dinic <[email protected]> Signed-off-by: Aleksandar Markovic <[email protected]> Cc: Douglas Leung <[email protected]> Cc: James Hogan <[email protected]> Cc: Paul Burton <[email protected]> Cc: Petar Jovanovic <[email protected]> Cc: Raghu Gandham <[email protected]> Cc: [email protected] Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16638/ Signed-off-by: Ralf Baechle <[email protected]>
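The core of the problem is that a negative offset stored in a u32 zero-extends, rather than sign-extends, when added to a 64-bit tv_sec. A small standalone illustration (the constants are just the example values from the message):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t  wall_to_mono = -1489757461;        /* negative offset */
        uint32_t mono32 = (uint32_t)wall_to_mono;   /* what a u32 field keeps */
        int32_t  sec32 = 1489757461 + 100;          /* mips32-style tv_sec */
        int64_t  sec64 = 1489757461 + 100;          /* mips64-style tv_sec */

        sec32 += mono32;    /* 32-bit wrap-around recovers the right value */
        sec64 += mono32;    /* zero-extended to 64 bits: wildly wrong */

        printf("32-bit: %d, 64-bit: %lld\n", (int)sec32, (long long)sec64);
        return 0;
    }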
2017-06-29  MIPS: Use current_cpu_type() in m4kc_tlbp_war()  (Paul Burton; 1 file, -2/+1)
Use current_cpu_type() to check for 4Kc processors instead of checking the PRID directly. This will allow for the 4Kc case to be optimised out of kernels that can't run on 4KC processors, thanks to __get_cpu_type() and its unreachable() call. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16205/ Signed-off-by: Ralf Baechle <[email protected]>
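A sketch of the idea (the real check may cover several 4Kc variants):

    static int m4kc_tlbp_war(void)
    {
        /* current_cpu_type() lets the compiler discard this branch entirely
         * on kernels that cannot run on a 4Kc at all. */
        return current_cpu_type() == CPU_4KC;
    }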
2017-06-29  MIPS: Allow storing pgd in C0_CONTEXT for MIPSr6  (Paul Burton; 1 file, -1/+1)
CONFIG_MIPS_PGD_C0_CONTEXT, which allows a pointer to the page directory to be stored in the cop0 Context register when enabled, was previously only allowed for MIPSr2. MIPSr6 is just as able to make use of it, so allow it there too. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16204/ Signed-off-by: Ralf Baechle <[email protected]>
2017-06-29  MIPS: Handle tlbex-tlbp race condition  (Paul Burton; 1 file, -1/+37)
In systems where there are multiple actors updating the TLB, the potential exists for a race condition wherein a CPU hits a TLB exception but by the time it reaches a TLBP instruction the affected TLB entry may have been replaced. This can happen if, for example, a CPU shares the TLB between hardware threads (VPs) within a core and one of them replaces the entry that another has just taken a TLB exception for. We handle this race in the case of the Hardware Table Walker (HTW) being the other actor already, but didn't take into account the potential for multiple threads racing. Include the code for aborting TLB exception handling in affected multi-threaded systems, those being the I6400 & I6500 CPUs which share TLB entries between VPs. In the case of using RiXi without dedicated exceptions we have never handled this race even for HTW. This patch adds WARN()s to these cases which ought never to be hit because all CPUs with either HTW or shared FTLB RAMs also include dedicated RiXi exceptions, but the WARN()s will ensure this is always the case. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16203/ Signed-off-by: Ralf Baechle <[email protected]>
2017-06-29  MIPS: Add CPU shared FTLB feature detection  (Paul Burton; 3 files, -0/+56)
Some systems share FTLB RAMs or entries between sibling CPUs (ie. hardware threads, or VP(E)s, within a core). These properties require kernel handling in various places. As a start this patch introduces cpu_has_shared_ftlb_ram & cpu_has_shared_ftlb_entries feature macros which we set appropriately for I6400 & I6500 CPUs. Further patches will make use of these macros as appropriate. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16202/ Signed-off-by: Ralf Baechle <[email protected]>
2017-06-29  MIPS: CPS: Handle spurious VP starts more gracefully  (Paul Burton; 1 file, -1/+6)
On pre-r6 systems with the MT ASE the CPS SMP code included checks to halt the VPE running mips_cps_boot_vpes() if its bit in the struct core_boot_config vpe_mask field is clear. This was largely done in order to allow us to start arbitrary VPEs within a core despite the fact that hardware is typically configured to run only VPE0 after powering up a core. VPE0 would start the desired other VPEs, halt itself, and the fact that VPE0 started would be largely hidden & irrelevant. In MIPSr6 multithreading we have control over which VPs start executing when a core powers up via the cores CPC registers accessed remotely through the redirect block. For this reason the MIPSr6 multithreading path in mips_cps_boot_vpes() hasn't bothered up until now to handle halting the VP running it. However it is possible to power up cores entirely in hardware by using a pwr_up pin associated with the core. Unfortunately some systems wire this pin to a logic 1, which means that it is possible for a core to power up at a point that software doesn't expect. The result is that we generally go execute the kernel on a CPU that ought not to be running & the results can be unpredictable. Handle this case by stopping VPs that we don't expect to be running in mips_cps_boot_vpes() - with this change even if a core powers up it will do nothing useful & all VPs within it will stop running before they proceed to run general kernel code & do any damage. Ideally we would produce some sort of warning here, but given the stage of core bringup this happens at that would be non-trivial. We also will only hit this if a core starts up after being offlined via hotplug, and when that happens we will already produce a warning that the CPU didn't power down in cps_cpu_die() which seems sufficient. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16198/ Signed-off-by: Ralf Baechle <[email protected]>
2017-06-29  MIPS: CPS: Handle cores not powering down more gracefully  (Paul Burton; 1 file, -3/+24)
If we get into a state where a core that ought to power down isn't doing so then the current result is that another CPU gets stuck inside cps_cpu_die() waiting for CPU that ought to be powering down to do so. The best case scenario is that we then trigger RCU stall messages or lockup messages, but neither makes it particularly clear what's happening. Handle this more gracefully by introducing a timeout beyond which we warn the user that the core didn't power down & stop waiting for it. This at least allows the CPU running cps_cpu_die() to continue normally, and hopefully presuming the CPU that powered back up is doing nothing harmful the system will continue functioning as normal. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16197/ Signed-off-by: Ralf Baechle <[email protected]>
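A sketch of bounding the wait, assuming a hypothetical core_powered_down() poll helper and a 100ms budget; the real code polls the CM/CPC state registers:

    unsigned long timeout = jiffies + msecs_to_jiffies(100);

    while (!core_powered_down(core)) {          /* hypothetical helper */
        if (time_after(jiffies, timeout)) {
            pr_warn("CPU core %u didn't power down\n", core);
            break;      /* let cps_cpu_die() continue instead of hanging */
        }
        cpu_relax();
    }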
2017-06-29  MIPS: CPS: Prevent multi-core with dcache aliasing  (Paul Burton; 1 file, -3/+5)
Systems using the MIPS Coherence Manager (CM) cannot support multi-core SMP with dcache aliasing. This is because CPU caches are VIPT, but interventions in CM-based systems provide only the physical address to remote caches. This means that interventions may behave incorrectly in the presence of an aliasing dcache, since the physical address used when handling an intervention may lead to operation on an aliased cache line rather than the correct line. Prevent us from running into this issue by refusing to boot secondary cores in systems where dcache aliasing may occur. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16196/ Signed-off-by: Ralf Baechle <[email protected]>
2017-06-29  MIPS: CPS: Select CONFIG_SYS_SUPPORTS_SCHED_SMT for MIPSr6  (Paul Burton; 1 file, -0/+1)
Prior to MIPSr6 multithreading is only supported if CONFIG_MIPS_MT_SMP is enabled, so CONFIG_MIPS_MT_SMP selects CONFIG_SYS_SUPPORTS_SCHED_SMT. With MIPSr6 the CONFIG_MIPS_CPS SMP implementation always supports multithreading, so have it select CONFIG_SYS_SUPPORTS_SCHED_SMT in order to allow the scheduler to make better informed decisions on multithreaded MIPSr6 systems (for example those using I6400 or I6500 CPUs). Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16195/ Signed-off-by: Ralf Baechle <[email protected]>
2017-06-29  MIPS: CM: WARN on attempt to lock invalid VP, not BUG  (Paul Burton; 1 file, -1/+1)
Rather than using BUG_ON in the case of an invalid attempt to lock access to a non-zero VP on a pre-CM3 system, use WARN_ON so that we have even the slightest chance of recovery. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16194/ Signed-off-by: Ralf Baechle <[email protected]>
2017-06-29  MIPS: CM: Avoid per-core locking with CM3 & higher  (Paul Burton; 1 file, -6/+32)
CM3 provides a GCR_CL_OTHER register per VP, rather than only per core. This means that we don't need to prevent other VPs within a core from racing with code that makes use of the core-other register region. Reduce locking overhead by demoting the per-core spinlock providing protection for CM2.5 & lower to a per-CPU/per-VP spinlock for CM3 & higher. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16193/ Signed-off-by: Ralf Baechle <[email protected]>
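Schematically, the lock granularity follows the register granularity; a sketch with assumed lock names (a per-CPU cm_core_other_lock and a per-core per_core_lock array):

    spinlock_t *lock;

    /* CM3+: every VP has its own GCR_CL_OTHER, so a per-CPU lock suffices.
     * Older CMs share the core-other region within a core, so keep the
     * per-core lock there. */
    if (mips_cm_revision() >= CM_REV_CM3)
        lock = this_cpu_ptr(&cm_core_other_lock);
    else
        lock = &per_core_lock[core];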
2017-06-29  MIPS: Skip IPI setup if we only have 1 CPU  (Paul Burton; 1 file, -0/+3)
If we're running on a system with only 1 possible CPU then it makes no sense to reserve or initialise IPIs since we'll never use them. Avoid doing so. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Patchwork: https://patchwork.linux-mips.org/patch/16192/ Signed-off-by: Ralf Baechle <[email protected]>
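The guard is essentially one early return; a sketch:

    /* With a single possible CPU there is nobody to signal, so don't
     * reserve or wire up IPI IRQs at all. */
    if (num_possible_cpus() == 1)
        return;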