Currently we hardcode a list of files for which we specify that the
toolchain has DSP ASE support when building for MIPSr2 only. This has a
number of problems:
1) It doesn't actually ensure that the toolchain supports the DSP ASE
at all.
2) It's fragile if we try to use DSP ASE macros in other files.
3) It makes no provision for MIPSr6 & later systems which also support
the DSP ASE & end up using the .word directive implementation of
the DSP macros.
Fix this by detecting assembler support for the DSP ASE globally, not
just for a small set of files, and not just for MIPSr2. This now exposes
use of toolchain DSP support to kernel builds targeting MIPSr1 and
older, so we add .set MIPS_ISA_LEVEL directives prior to all .set dsp
directives in order to prevent the assembler from complaining that the
DSP ASE is only supported with MIPSr2 & higher.
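As an illustration of the directive ordering described above, a DSP control
register accessor might wrap its instruction roughly like this (a sketch only,
assuming the MIPS_ISA_LEVEL string macro from asm/mipsregs.h; not the literal
patch):
#include <asm/mipsregs.h>       /* assumed to provide the MIPS_ISA_LEVEL string */

/* Raise the ISA level before enabling the DSP ASE so the assembler
 * accepts "rddsp" even when the kernel itself targets MIPSr1. */
static inline unsigned int sketch_rddsp(void)
{
        unsigned int dspc;

        __asm__ __volatile__(
        "       .set    push                    \n"
        "       .set    " MIPS_ISA_LEVEL "      \n"
        "       .set    dsp                     \n"
        "       rddsp   %0, 0x3f                \n"
        "       .set    pop                     \n"
        : "=r" (dspc));

        return dspc;
}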
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20901/
Cc: [email protected]
|
|
Commit ea7e0480a4b6 ("MIPS: VDSO: Always map near top of user memory")
set VDSO_RANDOMIZE_SIZE to 256MB for 64-bit kernels. However, a look at
arch/mips/mm/mmap.c shows that MIN_GAP is 128MB, which means mmap_base
may be at (user_address_top - 128MB). This leaves the stack surrounded
by mmapped areas, so stack expansion fails and causes a segmentation
fault. Therefore, VDSO_RANDOMIZE_SIZE should be less than MIN_GAP; this
patch reduces it to 64MB.
Signed-off-by: Huacai Chen <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Fixes: ea7e0480a4b6 ("MIPS: VDSO: Always map near top of user memory")
Patchwork: https://patchwork.linux-mips.org/patch/20910/
Cc: Ralf Baechle <[email protected]>
Cc: James Hogan <[email protected]>
Cc: [email protected]
Cc: Fuxin Zhang <[email protected]>
Cc: Zhangjin Wu <[email protected]>
Cc: Huacai Chen <[email protected]>
|
|
All upper-case unit-addresses and hex constants are changed to lower
case, in accordance with DT conventions.
Signed-off-by: Songjun Wu <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20768/
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: James Hogan <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Thomas Gleixner <[email protected]>
Cc: Philippe Ombredanne <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Kate Stewart <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Ralf Baechle <[email protected]>
|
|
Add support for the integrated switch and the SPI and I2C controllers
found on the MSCC Ocelot.
Signed-off-by: Alexandre Belloni <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20345/
Cc: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
After commit e509bd7da149dc349160 ("genirq: Allow migration of chained
interrupts by installing default action") Loongson-3 fails here:
setup_irq(LOONGSON_HT1_IRQ, &cascade_irqaction);
This is because neither chained_action nor cascade_irqaction has the
IRQF_SHARED flag, which causes Loongson-3 resume to fail since the HPET
timer interrupt can't be delivered during S3. So we set the irqchip of
the chained irq to loongson_irq_chip, which doesn't disable the chained
irq in CP0.Status.
Cc: [email protected]
Signed-off-by: Huacai Chen <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20434/
Cc: Ralf Baechle <[email protected]>
Cc: James Hogan <[email protected]>
Cc: [email protected]
Cc: Fuxin Zhang <[email protected]>
Cc: Zhangjin Wu <[email protected]>
Cc: Huacai Chen <[email protected]>
|
|
Masking/unmasking the CPU UART irq in CP0_Status (and redirecting it to
other CPUs) may cause interrupts to be lost, especially on multi-package
machines (Package-0's UART irq cannot be delivered to the others). So
make mask_loongson_irq() and unmask_loongson_irq() no-ops.
The original problem (the UART IRQ could be delivered to any core) was
also caused by masking/unmasking the CPU UART irq in CP0_Status, so it
is safe to remove all of that code.
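The resulting shape is roughly the following (a minimal sketch using the
standard struct irq_chip callbacks, not the literal patch):
#include <linux/irq.h>

/* With CP0_Status masking gone, these callbacks become no-ops. */
static void mask_loongson_irq(struct irq_data *d) { }
static void unmask_loongson_irq(struct irq_data *d) { }

static struct irq_chip loongson_irq_chip = {
        .name           = "Loongson",
        .irq_mask       = mask_loongson_irq,
        .irq_unmask     = unmask_loongson_irq,
};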
Signed-off-by: Huacai Chen <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20433/
Cc: Ralf Baechle <[email protected]>
Cc: James Hogan <[email protected]>
Cc: [email protected]
Cc: Fuxin Zhang <[email protected]>
Cc: Zhangjin Wu <[email protected]>
Cc: Huacai Chen <[email protected]>
|
|
asm/asm.h provides PREF(), PREFE() & PREFX() macros which are now
entirely unused. Delete the dead code.
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20908/
Cc: [email protected]
|
|
memcpy() is the only user of the PREF() & PREFE() macros from asm/asm.h.
Switch to using the kernel_pref() & user_pref() macros from
asm/asm-eva.h which fit more consistently with other abstractions of EVA
vs non-EVA instructions.
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20907/
Cc: [email protected]
|
|
asm/asm.h provides a CAT macro which is unused throughout the tree, and
if anyone wanted it the generic CONCATENATE macro in linux/kernel.h
provides the same functionality. Delete the dead code.
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20905/
Cc: [email protected]
|
|
Add kernel_pref & user_pref macros to asm/asm-eva.h, providing an
abstraction around EVA & non-EVA pref instructions consistent with the
existing macros we have for cache & load/store instructions.
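A rough sketch of the assembler-visible half of the idea (the exact
definitions live in asm/asm-eva.h and may differ in spelling):
/* Kernel accesses always use the plain prefetch; user accesses need the
 * EVA form ("prefe") only when CONFIG_EVA is enabled. */
#define kernel_pref(hint, addr)         pref hint, addr

#ifdef CONFIG_EVA
# define user_pref(hint, addr)          prefe hint, addr
#else
# define user_pref(hint, addr)          pref hint, addr
#endif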
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20906/
Cc: [email protected]
|
|
asm/asm.h contains a TTABLE macro to generate "text tables" which would
appear to be arrays of pointers to strings. It is unused throughout the
kernel tree, so delete the dead code.
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20904/
Cc: [email protected]
|
|
asm/asm.h contains CPRESTORE, CPADD & CPLOAD macros that are intended
for use with position independent code, but are not used anywhere in the
kernel - along with a comment to that effect. Remove the dead code.
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20903/
Cc: [email protected]
|
|
We have macros in asm/asm.h to allow for use of the MOVN & MOVZ
instructions with compare-and-branch sequences providing compatibility
for ISA versions which don't include those instructions. However the
macros are unused, and appear to have always been unused. Delete the
dead code.
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20909/
Cc: [email protected]
|
|
Conflicts were mostly easy to resolve using the immediate context,
except the cls_u32.c one, where I simply took the entire HEAD chunk.
Signed-off-by: David S. Miller <[email protected]>
|
|
Improve performance for the relevant systems and remove the DMA ordering
barrier from `readX_relaxed' and `writeX_relaxed' MMIO accessors, where
it is not needed according to our requirements[1]. For consistency make
the same arrangement with low-level port I/O accessors, but do not
actually provide any accessors making use of it.
References:
[1] "LINUX KERNEL MEMORY BARRIERS", Documentation/memory-barriers.txt,
Section "KERNEL I/O BARRIER EFFECTS"
Signed-off-by: Maciej W. Rozycki <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20865/
Cc: Ralf Baechle <[email protected]>
|
|
Architecturally the MIPS ISA does not specify ordering requirements for
uncached bus accesses such as MMIO operations normally use and therefore
explicit barriers have to be inserted between MMIO accesses where
unspecified ordering of operations would cause unpredictable results.
For example the R2020 write buffer implements write gathering and
combining[1] and as used with the DECstation models 2100 and 3100 for
MMIO accesses it bypasses the read buffer entirely, because conflicts
are resolved by the memory controller for DRAM accesses only[2] (NB the
R2020 and R3020 buffers are the same except for the maximum clock rate).
Consequently if a device has say a 16-bit control register at offset 0,
a 16-bit event mask register at offset 2 and a 16-bit reset register at
offset 4, and the initial value of the control register is 0x1111, then
in the absence of barriers a hypothetical code sequence like this:
u16 init_dev(u16 __iomem *dev)
{
        u16 x;

        write16(dev + 2, 0xffff);
        write16(dev + 0, 0x2222);
        x = read16(dev + 0);
        write16(dev + 1, 0x3333);
        write16(dev + 0, 0x4444);
        return x;
}
will return 0x1111 and issue a single 32-bit write of 0x33334444 (in the
little-endian bus configuration) to offset 0 on the system bus.
This is because the read to set `x' from offset 0 bypasses the write of
0x2222 that is still in the write buffer pending the completion of the
write of 0xffff to the reset register. Then the write of 0x3333 to the
event mask register is merged with the preceding write to the control
register as they share the same word address, making it a 32-bit write
of 0x33332222 to offset 0. Finally the write of 0x4444 to the control
register is combined with the outstanding 32-bit write of 0x33332222 to
offset 0, because, again, it shares the same address.
This is an example from a legacy system, given here because it is well
documented and affects a machine we actually support. But likewise
modern MIPS systems may implement weak MMIO ordering, possibly even
without having it clearly documented except for being compliant with the
architecture specification with respect to the currently defined SYNC
instruction variants[3].
Considering the above and that we are required to implement MMIO
accessors such that individual accesses made with them are strongly
ordered with respect to each other[4], add the necessary barriers to our
`inX', `outX', `readX' and `writeX' handlers, as well as the associated
special-use variants. It's up to platforms then to possibly define the
respective barriers so as to expand to nil if no ordering enforcement is
actually needed for a given system; SYNC is supposed to be as cheap as
a NOP on strongly ordered MIPS implementations though.
Retain the option to generate weakly-ordered accessors, so that the
arrangement for `war_io_reorder_wmb' is not lost in case we need it for
fully raw accessors in the future. The reason for this is that it is
unclear from commit 1e820da3c9af ("MIPS: Loongson-3: Introduce
CONFIG_LOONGSON3_ENHANCEMENT") and especially commit 8faca49a6731
("MIPS: Modify core io.h macros to account for the Octeon Errata
Core-301.") why they are needed there under the previous assumption that
these accessors can be weakly ordered.
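As a rough illustration of where the enforcement ends up (a sketch only;
the real accessors are generated by macros in arch/mips/include/asm/io.h):
/* Sketch of a strongly-ordered MMIO write: issue the write-side MMIO
 * ordering barrier before the store so it cannot be merged with, or
 * overtaken by, earlier buffered accesses. */
static inline void sketch_writel(u32 val, volatile void __iomem *addr)
{
        iobarrier_w();
        *(volatile u32 __force *)addr = val;
}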
References:
[1] "LR3020 Write Buffer", LSI Logic Corporation, September 1988,
Section "Byte Gathering", pp. 6-7
[2] "DECstation 3100 Desktop Workstation Functional Specification",
Digital Equipment Corporation, Revision 1.3, August 28, 1990,
Section 6.1 "Processor", p. 4
[3] "MIPS Architecture For Programmers, Volume II-A: The MIPS32
Instruction Set Manual", Imagination Technologies LTD, Document
Number: MD00086, Revision 6.06, December 15, 2016, Table 5.5
"Encodings of the Bits[10:6] of the SYNC instruction; the SType
Field", p. 409
[4] "LINUX KERNEL MEMORY BARRIERS", Documentation/memory-barriers.txt,
Section "KERNEL I/O BARRIER EFFECTS"
Signed-off-by: Maciej W. Rozycki <[email protected]>
References: 8faca49a6731 ("MIPS: Modify core io.h macros to account for the Octeon Errata Core-301.")
References: 1e820da3c9af ("MIPS: Loongson-3: Introduce CONFIG_LOONGSON3_ENHANCEMENT")
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20864/
Cc: Ralf Baechle <[email protected]>
|
|
Redefine `mmiowb' in terms of `iobarrier_w' so that it works correctly
for MIPS I platforms, which have no SYNC machine instruction and use a
call to `wbflush' instead.
This doesn't change the semantics for CONFIG_CPU_CAVIUM_OCTEON, because
`iobarrier_w' expands to `wmb', which is ultimately the same as the
current arrangement. For MIPS I platforms this not only makes any code
that would happen to use `mmiowb' build and run, but it actually
enforces the ordering required as well, as `iobarrier_w' has it already
covered with the use of `wmb'.
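The change amounts to something like the following (sketch; the old
definition is paraphrased from memory):
/* Before: a bare SYNC, which MIPS I CPUs cannot execute. */
/* #define mmiowb()     asm volatile ("sync" ::: "memory") */

/* After: reuse the write-side MMIO barrier, which already falls back to
 * wbflush() on MIPS I and to a SYNC-based wmb() elsewhere. */
#define mmiowb()        iobarrier_w()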
Signed-off-by: Maciej W. Rozycki <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20863/
Cc: Ralf Baechle <[email protected]>
|
|
Define MMIO ordering barriers as separate operations so as to allow
making places where such a barrier is required distinct from places
where a memory or a DMA barrier is needed.
Architecturally MIPS does not specify ordering requirements for uncached
bus accesses such as MMIO operations normally use and therefore explicit
barriers have to be inserted between MMIO accesses where unspecified
ordering of operations would cause unpredictable results.
MIPS MMIO ordering barriers are implemented using the same underlying
mechanism that memory and DMA ordering barriers use, that is either a
suitable SYNC instruction or a platform-specific `wbflush' call. However,
platforms may implement different ordering rules for different kinds of
bus activity, so having a separate API makes it possible to remove
unnecessary barriers, and to avoid the performance hit they may cause
due to unrelated bus activity, by making their implementation expand to
nil while keeping the necessary ones.
Also, having distinct barriers for each kind of use makes it easier for
the reader to understand what the code is intended to do.
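A sketch of the kind of definitions this introduces (iobarrier_w is
referenced by the related mmiowb change; the other names here are
assumptions):
/* MMIO ordering barriers, kept distinct from mb()/rmb()/wmb() so that a
 * platform can later relax them independently of memory/DMA barriers. */
#define iobarrier_rw()  mb()
#define iobarrier_r()   rmb()
#define iobarrier_w()   wmb()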
Signed-off-by: Maciej W. Rozycki <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20862/
Cc: Ralf Baechle <[email protected]>
|
|
PCB120 and PCB123 are both development boards based on Microsemi Ocelot
so let's use the same fitImage for both.
Reviewed-by: Alexandre Belloni <[email protected]>
Signed-off-by: Quentin Schulz <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20871/
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
The Ocelot PCB120 evaluation board is different from the PCB123 in that
it has 4 external VSC8584 (or VSC8574) PHYs.
It uses the SoC's second MDIO bus for external PHYs which have a
reversed address on the bus (i.e. PHY4 is on address 3, PHY5 is on
address 2, PHY6 on 1 and PHY7 on 0).
Here is how the PHYs are connected to the switch ports:
port 0: phy0 (internal)
port 1: phy1 (internal)
port 2: phy2 (internal)
port 3: phy3 (internal)
port 4: phy7
port 5: phy4
port 6: phy6
port 9: phy5
Reviewed-by: Alexandre Belloni <[email protected]>
Signed-off-by: Quentin Schulz <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20869/
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
Rewrite to use the `reorder' assembly mode and remove manually scheduled
delay slots except where GAS cannot schedule a delay-slot instruction
due to a data dependency or a section switch (as is the case with the EX
macro). No change in machine code produced.
Signed-off-by: Maciej W. Rozycki <[email protected]>
[[email protected]:
Fix conflict with commit 932afdeec18b ("MIPS: Add Kconfig variable for
CPUs with unaligned load/store instructions")]
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20834/
Cc: Ralf Baechle <[email protected]>
|
|
Fix a commit 8a8158c85e1e ("MIPS: memset.S: EVA & fault support for
small_memset") regression and remove assembly warnings:
arch/mips/lib/memset.S: Assembler messages:
arch/mips/lib/memset.S:243: Warning: Macro instruction expanded into multiple instructions in a branch delay slot
triggering with the CPU_DADDI_WORKAROUNDS option set and this code:
PTR_SUBU a2, t1, a0
jr ra
PTR_ADDIU a2, 1
This is because with that option in place the DADDIU instruction, which
the PTR_ADDIU CPP macro expands to, becomes a GAS macro, which in turn
expands to an LI/DADDU (or actually ADDIU/DADDU) sequence:
13c: 01a4302f dsubu a2,t1,a0
140: 03e00008 jr ra
144: 24010001 li at,1
148: 00c1302d daddu a2,a2,at
...
Correct this by switching off the `noreorder' assembly mode and letting
GAS schedule this jump's delay slot, as there is nothing special about
it that would require manual scheduling. With this change in place
correct code is produced:
13c: 01a4302f dsubu a2,t1,a0
140: 24010001 li at,1
144: 03e00008 jr ra
148: 00c1302d daddu a2,a2,at
...
Signed-off-by: Maciej W. Rozycki <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Fixes: 8a8158c85e1e ("MIPS: memset.S: EVA & fault support for small_memset")
Patchwork: https://patchwork.linux-mips.org/patch/20833/
Cc: Ralf Baechle <[email protected]>
Cc: [email protected] # 4.17+
|
|
The Microsemi Ocelot has a set of registers for SerDes/switch port
muxing as well as PCIe muxing for a specific SerDes, so let's add the
device and all SerDes in the Device Tree.
Reviewed-by: Florian Fainelli <[email protected]>
Acked-by: Paul Burton <[email protected]>
Signed-off-by: Quentin Schulz <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
HSIO contains registers for PLL5 configuration, SerDes/switch port
muxing and a thermal sensor, hence we can't keep it in the switch DT
node.
Acked-by: Paul Burton <[email protected]>
Acked-by: Alexandre Belloni <[email protected]>
Signed-off-by: Quentin Schulz <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Rework the definition of struct siginfo so that the array padding
struct siginfo to SI_MAX_SIZE can be placed in a union alongside the
rest of the struct siginfo members. The result is that we no
longer need the __ARCH_SI_PREAMBLE_SIZE or SI_PAD_SIZE definitions.
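Schematically the layout becomes something like this (a sketch of the
shape, not the exact uapi text):
#define SI_MAX_SIZE     128

typedef struct siginfo {
        union {
                struct {
                        int si_signo;
                        int si_errno;
                        int si_code;
                        /* ... per-signal payload union ... */
                };
                int _si_pad[SI_MAX_SIZE / sizeof(int)]; /* pads the whole struct */
        };
} siginfo_t;
With the pad sized against the whole struct there is no preamble to
account for, which is what makes the old size definitions unnecessary.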
Signed-off-by: "Eric W. Biederman" <[email protected]>
|
|
The platform_nand_xxx definitions are only used by the plat_nand driver.
Let's move those definitions out of the core/driver-agnostic rawnand.h
header.
Signed-off-by: Boris Brezillon <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
|
|
We regularly have new NAND controller drivers that are making use of
fields/hooks that we want to get rid of but can't because of all the
legacy drivers that we might break if we do.
So, instead of removing those fields/hooks, let's move them to a
sub-struct which is clearly documented as deprecated.
We start with the ->IO_ADDR_{R,W} fields.
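Roughly, the direction is (a sketch, not the final header):
/* Deprecated hooks/fields gathered in one place so that new drivers do
 * not grow dependencies on them. */
struct nand_legacy {
        void __iomem *IO_ADDR_R;        /* data bus, read side */
        void __iomem *IO_ADDR_W;        /* data bus, write side */
};

struct nand_chip {
        /* ... */
        struct nand_legacy legacy;
        /* ... */
};
Accesses then read chip->legacy.IO_ADDR_R rather than chip->IO_ADDR_R,
making the deprecation visible at every call site.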
Signed-off-by: Boris Brezillon <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
|
|
Let's make the raw NAND API consistent by patching all helpers and
hooks to take a nand_chip object instead of an mtd_info one or
remove the mtd_info object when both are passed.
In order to do that, we first need to update the platform_nand_ctrl
hooks to take a nand_chip object instead of an mtd_info.
We add temporary plat_nand_xxx() wrappers to do the mtd -> chip
conversion, but those will be dropped when patching the nand_chip hooks
to take a nand_chip object.
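One of the temporary wrappers might look roughly like this (illustrative
only; it assumes the platform data was stashed as the chip's controller
data):
static void plat_nand_cmd_ctrl(struct nand_chip *chip, int dat,
                               unsigned int ctrl)
{
        /* Bridge the new chip-based hook to the old mtd-based
         * platform_nand_ctrl callback until the latter is converted. */
        struct platform_nand_data *pdata = nand_get_controller_data(chip);

        pdata->ctrl.cmd_ctrl(nand_to_mtd(chip), dat, ctrl);
}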
Signed-off-by: Boris Brezillon <[email protected]>
Reviewed-by: Alexander Sverdlin <[email protected]>
Acked-by: Alexander Sverdlin <[email protected]>
Acked-by: Robert Jarzmik <[email protected]>
Acked-by: Krzysztof Halasa <[email protected]>
Acked-by: Paul Burton <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
|
|
Add the ISO7816 ioctl and associated accessors and data structure.
Drivers can then use this common implementation to handle ISO7816
(smart cards).
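From user space the interface is then used roughly as follows (field and
flag names as found in include/uapi/linux/serial.h; treat the exact
spellings as assumptions of this sketch):
#include <string.h>
#include <sys/ioctl.h>
#include <linux/serial.h>       /* struct serial_iso7816, TIOCSISO7816 */

int enable_iso7816(int fd)
{
        struct serial_iso7816 iso;

        memset(&iso, 0, sizeof(iso));
        iso.flags = SER_ISO7816_ENABLED | SER_ISO7816_T(0); /* T=0 protocol */
        iso.clk = 3571200;                                  /* card clock, Hz */

        return ioctl(fd, TIOCSISO7816, &iso);
}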
Signed-off-by: Nicolas Ferre <[email protected]>
[[email protected]: squash and rebase, removal of gpios, checkpatch fixes]
Signed-off-by: Ludovic Desroches <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
There is nothing arch specific about building dtb files other than their
location under arch/*/boot/dts/. Keeping each arch aligned is a pain.
The dependencies and supported targets are all slightly different.
Also, a cross-compiler for each arch is needed, but really the host
compiler preprocessor is perfectly fine for building dtbs. Move the
build rules to a common location and remove the arch specific ones. This
is done in a single step to avoid warnings about overriding rules.
The build dependencies had been a mixture of 'scripts' and/or 'prepare'.
These pull in several dependencies some of which need a target compiler
(specifically devicetable-offsets.h) and aren't needed to build dtbs.
All that is really needed is dtc, so adjust the dependencies to only be
dtc.
This change enables support for 'dtbs_install' on some arches which were
previously missing the target.
Acked-by: Will Deacon <[email protected]>
Acked-by: Paul Burton <[email protected]>
Acked-by: Ley Foon Tan <[email protected]>
Acked-by: Masahiro Yamada <[email protected]>
Cc: Michal Marek <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Chris Zankel <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Rob Herring <[email protected]>
|
|
Commit 8ce355cf2e38 ("MIPS: Setup boot_command_line before
plat_mem_setup") fixed a problem for systems which have
CONFIG_CMDLINE_BOOL=y & use a DT with a chosen node that has either no
bootargs property or an empty one. In this configuration
early_init_dt_scan_chosen() copies CONFIG_CMDLINE into
boot_command_line, but the MIPS code doesn't know this so it appends
CONFIG_CMDLINE (via builtin_cmdline) to boot_command_line again. The
result is that boot_command_line contains the arguments from
CONFIG_CMDLINE twice.
That commit took the approach of simply setting up boot_command_line
from the MIPS code before early_init_dt_scan_chosen() runs, causing it
not to copy CONFIG_CMDLINE to boot_command_line if a chosen node with no
bootargs property is found.
Unfortunately this is problematic for systems which do have a non-empty
bootargs property & CONFIG_CMDLINE_BOOL=y. There
early_init_dt_scan_chosen() will overwrite boot_command_line with the
arguments from DT, which means we lose those from CONFIG_CMDLINE
entirely. This breaks CONFIG_MIPS_CMDLINE_DTB_EXTEND. If we have
CONFIG_MIPS_CMDLINE_FROM_BOOTLOADER or
CONFIG_MIPS_CMDLINE_BUILTIN_EXTEND selected and the DT has a bootargs
property which we should ignore, it will instead be honoured, breaking
those configurations too.
Fix this by reverting commit 8ce355cf2e38 ("MIPS: Setup
boot_command_line before plat_mem_setup") to restore the former
behaviour, and fixing the CONFIG_CMDLINE duplication issue by
initializing boot_command_line to a non-empty string that
early_init_dt_scan_chosen() will not overwrite with CONFIG_CMDLINE.
This is a little ugly, but cleanup in this area is on its way. In the
meantime this is at least easy to backport & contains the ugliness
within arch/mips/.
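Concretely, the trick is to seed boot_command_line with something
harmless before the DT is scanned, along these lines (sketch):
/*
 * early_init_dt_scan_chosen() only copies CONFIG_CMDLINE into the
 * command line buffer it is handed when that buffer is empty, so make
 * boot_command_line non-empty (a single space) before scanning the DT;
 * the MIPS code then appends builtin_cmdline itself as before.
 */
strlcpy(boot_command_line, " ", COMMAND_LINE_SIZE);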
Signed-off-by: Paul Burton <[email protected]>
Fixes: 8ce355cf2e38 ("MIPS: Setup boot_command_line before plat_mem_setup")
References: https://patchwork.linux-mips.org/patch/18804/
Patchwork: https://patchwork.linux-mips.org/patch/20813/
Cc: Frank Rowand <[email protected]>
Cc: Jaedon Shin <[email protected]>
Cc: Mathieu Malaterre <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected] # v4.16+
|
|
When using the legacy mmap layout, for example triggered using ulimit -s
unlimited, get_unmapped_area() fills memory from bottom to top starting
from a fairly low address near TASK_UNMAPPED_BASE.
This placement is suboptimal if the user application wishes to allocate
large amounts of heap memory using the brk syscall. With the VDSO being
located low in the user's virtual address space, the amount of space
available for access using brk is limited much more than it was prior to
the introduction of the VDSO.
For example:
# ulimit -s unlimited; cat /proc/self/maps
00400000-004ec000 r-xp 00000000 08:00 71436 /usr/bin/coreutils
004fc000-004fd000 rwxp 000ec000 08:00 71436 /usr/bin/coreutils
004fd000-0050f000 rwxp 00000000 00:00 0
00cc3000-00ce4000 rwxp 00000000 00:00 0 [heap]
2ab96000-2ab98000 r--p 00000000 00:00 0 [vvar]
2ab98000-2ab99000 r-xp 00000000 00:00 0 [vdso]
2ab99000-2ab9d000 rwxp 00000000 00:00 0
...
Resolve this by adjusting STACK_TOP to reserve space for the VDSO &
providing an address hint to get_unmapped_area() causing it to use this
space even when using the legacy mmap layout.
We reserve enough space for the VDSO, plus 1MB or 256MB for 32 bit & 64
bit systems respectively within which we randomize the VDSO base
address. Previously this randomization was taken care of by the mmap
base address randomization performed by arch_mmap_rnd(). The 1MB & 256MB
sizes are somewhat arbitrary but chosen such that we have some
randomization without taking up too much of the user's virtual address
space, which is often in short supply for 32 bit systems.
With this the VDSO is always mapped at a high address, leaving lots of
space for statically linked programs to make use of brk:
# ulimit -s unlimited; cat /proc/self/maps
00400000-004ec000 r-xp 00000000 08:00 71436 /usr/bin/coreutils
004fc000-004fd000 rwxp 000ec000 08:00 71436 /usr/bin/coreutils
004fd000-0050f000 rwxp 00000000 00:00 0
00c28000-00c49000 rwxp 00000000 00:00 0 [heap]
...
7f67c000-7f69d000 rwxp 00000000 00:00 0 [stack]
7f7fc000-7f7fd000 rwxp 00000000 00:00 0
7fcf1000-7fcf3000 r--p 00000000 00:00 0 [vvar]
7fcf3000-7fcf4000 r-xp 00000000 00:00 0 [vdso]
Signed-off-by: Paul Burton <[email protected]>
Reported-by: Huacai Chen <[email protected]>
Fixes: ebb5e78cc634 ("MIPS: Initial implementation of a VDSO")
Cc: Huacai Chen <[email protected]>
Cc: [email protected]
Cc: [email protected] # v4.4+
|
|
gcc 3.3 has been retired for a while; use PTRS_PER_PGD directly and
remove the asm-offsets.h inclusion.
Signed-off-by: Alexandre Belloni <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20814/
Cc: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
The crash utility initializes CPU state by reading the system kernel
memory, which is copied into the vmcore.
It is also natural to preserve the online state of the CPUs at crash
time. Failing to do so could make the analysis tool present info for
only 1 CPU by default, and leave it unable to find the panic task.
Signed-off-by: Dengcheng Zhu <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20809/
Cc: Paul Burton <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: "[email protected]" <[email protected]>
Cc: "[email protected]" <[email protected]>
|
|
In much the same vein as commit ac41f9c46282 ("MIPS: Remove a temporary
hack for debugging cache flushes in SMTC configuration") and commit
eb75ecb113f5 ("MIPS: MT: Remove unused MT single-threaded cache flush
code"), remove the long obsolete ndflush & niflush command line
arguments which provided a hack that should not be useful outside of
debug sessions performed long ago.
Signed-off-by: Paul Burton <[email protected]>
|
|
Commit ac41f9c46282 ("MIPS: Remove a temporary hack for debugging cache
flushes in SMTC configuration") removed an ugly hack that allowed cache
flushing to be performed single-threaded, something which should not be
necessary outside of debug sessions performed long ago.
Whilst the hack was removed from the cache flush code itself, the
mt_protdflush & mt_protiflush variables were left behind along with code
providing the protdflush & protiflush command line arguments. The
mt_cflush_lockdown() & mt_cflush_release() functions were also left
behind but are now entirely unused.
Remove all the unused code to complete the removal of the MT ASE
single-threaded cache flush hack.
Signed-off-by: Paul Burton <[email protected]>
|
|
MIPSR6 CPUs do not support unaligned load/store instructions
(LWL, LWR, SWL, SWR and LDL, LDR, SDL, SDR for 64bit).
Currently the MIPS tree has some special cases to avoid these
instructions, and the code is testing for !CONFIG_CPU_MIPSR6.
This patch declares a new Kconfig variable:
CONFIG_CPU_HAS_LOAD_STORE_LR.
This variable indicates that the CPU supports these instructions.
Then, the patch does the following:
- Carefully selects this option on all CPUs except MIPSR6.
- Switches all the special cases to test for the new variable,
and inverts the logic:
'#ifndef CONFIG_CPU_MIPSR6' turns into
'#ifdef CONFIG_CPU_HAS_LOAD_STORE_LR'
and vice-versa.
Also, when this variable is NOT selected (e.g. MIPSR6),
CONFIG_GENERIC_CSUM will default to 'y', to compile generic
C checksum code (instead of special assembly code that uses the
unsupported instructions).
This commit should not affect any existing CPU, and is required
for future Lexra CPU support, which lacks these instructions too.
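The mechanical part of the change looks like this (illustrative):
/* Before: tied to the ISA revision. */
#ifndef CONFIG_CPU_MIPSR6
        /* fast path using LWL/LWR/SWL/SWR ... */
#endif

/* After: tied to the capability, which MIPSr6 (and Lexra) simply do not
 * select. */
#ifdef CONFIG_CPU_HAS_LOAD_STORE_LR
        /* fast path using LWL/LWR/SWL/SWR ... */
#endif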
Signed-off-by: Yasha Cherikovsky <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20808/
Cc: Ralf Baechle <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: James Hogan <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
The ELF-appended dtb can now be accessed via 'fw_passed_dtb'.
Since the raw appended dtb is accessed via that variable too, this now
effectively allows booting with CONFIG_MIPS_RAW_APPENDED_DTB=y on
Octeon.
Signed-off-by: Yasha Cherikovsky <[email protected]>
[[email protected]: Fix trivial __dtb_octeon_*_begin conflict]
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20805/
Cc: Ralf Baechle <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: James Hogan <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
The ELF-appended dtb can now be accessed via 'fw_passed_dtb'.
Signed-off-by: Yasha Cherikovsky <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20804/
Cc: Kevin Cernekee <[email protected]>
Cc: Florian Fainelli <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: James Hogan <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Since commit 15f37e158892 ("MIPS: store the appended dtb address in a
variable"), kernels built with MIPS_RAW_APPENDED_DTB=y detect the dtb in
the early boot code and store it in the 'fw_passed_dtb' variable.
However, the dtb is not stored in 'fw_passed_dtb' in kernels built with
MIPS_ELF_APPENDED_DTB=y.
Under MIPS_ELF_APPENDED_DTB=y the dtb is also located in the
__appended_dtb section, so we just need to update the #ifdef.
This allows accessing the dtb in a more uniform way.
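The detection then covers both cases, along the lines of (sketch; the
real check sits in the MIPS setup code):
#if defined(CONFIG_MIPS_RAW_APPENDED_DTB) || \
    defined(CONFIG_MIPS_ELF_APPENDED_DTB)
        /* __appended_dtb marks where a dtb may have been glued onto the
         * kernel image; record it if a valid FDT header is found there. */
        if (!fdt_check_header(&__appended_dtb))
                fw_passed_dtb = (unsigned long)&__appended_dtb;
#endif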
Fixes: 15f37e158892 ("MIPS: store the appended dtb address in a variable")
Signed-off-by: Yasha Cherikovsky <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20803/
Cc: Ralf Baechle <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: James Hogan <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
It makes the code more readable, especially in the nested ifdefs.
Signed-off-by: Yasha Cherikovsky <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20802/
Cc: Ralf Baechle <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: James Hogan <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Out-of-tree platforms may not be based on Generic, as customer
communication has shown. Share the prepare method with all platforms
using the UHI boot protocol, and move it into machine_kexec.c.
The benefit is that, when hitting kexec_args-related problems,
developers will naturally look into machine_kexec.c, where
"CONFIG_UHI_BOOT" will be found, prompting them to add "select UHI_BOOT"
to the platform Kconfig. It would otherwise require a lot of debugging
or online searching to discover that the solution is in the Generic
code.
Tested-by: Rachel Mozes <[email protected]>
Reported-by: Rachel Mozes <[email protected]>
Signed-off-by: Dengcheng Zhu <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20569/
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
We can rely on the system kernel and the dump capture kernel themselves
to manage their memory usage.
Being restrictive with a 512MB limit may cause kexec tool failures on
some platforms.
Tested-by: Rachel Mozes <[email protected]>
Reported-by: Rachel Mozes <[email protected]>
Signed-off-by: Dengcheng Zhu <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20568/
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
Share code between play_dead() and cps_kexec_nonboot_cpu(). Register the
latter to mp_ops for kexec.
Signed-off-by: Dengcheng Zhu <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20567/
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
The existing implementation lets the CPU running machine_kexec() jump
to the reboot code buffer, while the other CPUs go to
relocated_kexec_smp_wait. The natural way to bring up an SMP new kernel
would be to let CPU0 do it while the others are halted. For platforms
failing to do so, fall back to the jumping method.
Signed-off-by: Dengcheng Zhu <[email protected]>
[[email protected]: Guard kexec_nonboot_cpu_jump with CONFIG_SMP]
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20570/
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
Once a CPU's online status has been changed, it will not be sent any
IPIs, such as those issued by __flush_cache_all() on software coherency
systems. So change the online status before disabling the local IRQ.
Signed-off-by: Dengcheng Zhu <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/20571/
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
The only functional difference (modulo a few missing fixes in the arch
code) is that architectures without coherent caches need a hook to
convert a virtual or dma address into a pfn, given that we don't have
the kernel linear mapping available for the otherwise easy virt_to_page
call. As a side effect we can support mmap of the per-device coherent
area even on architectures not providing the callback, and we make the
previously dangerous default dma_common_mmap method actually safe for
non-coherent architectures by rejecting it without the right helper.
In addition to that we need a hook so that some architectures can
override the protection bits when mmapping a dma coherent allocation.
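The two hooks have roughly these shapes (prototypes as I understand the
dma-noncoherent work; treat the exact names as assumptions):
/* Translate a coherent allocation back to a pfn for mmap/get_sgtable on
 * architectures where virt_to_page() cannot be used on the remapped,
 * uncached address. */
long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
                              dma_addr_t dma_addr);

/* Let an architecture adjust the page protection used when mmapping a
 * dma coherent allocation (e.g. to force an uncached mapping). */
pgprot_t arch_dma_mmap_pgprot(struct device *dev, pgprot_t prot,
                              unsigned long attrs);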
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Paul Burton <[email protected]> # MIPS parts
|
|
All the cache maintenance is already stubbed out when not enabled, but
merging the two allows us to nicely handle the case where cache
maintenance is required for some devices, but not others.
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Paul Burton <[email protected]> # MIPS parts
|
|
Various architectures support both coherent and non-coherent dma on a
per-device basis. Move the dma_noncoherent flag from the mips archdata
field to struct device proper to prepare the infrastructure for reuse on
other architectures.
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Paul Burton <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
|