|
Use a new config option to enable the R4600 V2 cacheop hit workaround
and remove the define from the different war.h files.
Signed-off-by: Thomas Bogendoerfer <[email protected]>
|
|
Use a new config option to enable the R4600 V1 cacheop hit workaround
and remove the define from the different war.h files.
Signed-off-by: Thomas Bogendoerfer <[email protected]>
|
|
Patch series "mm: consolidate definitions of page table accessors", v2.
The low level page table accessors (pXY_index(), pXY_offset()) are
duplicated across all architectures and sometimes more than once. For
instance, we have 31 definitions of pgd_offset() for 25 supported
architectures.
Most of these definitions are actually identical and typically it boils
down to, e.g.
static inline unsigned long pmd_index(unsigned long address)
{
	return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}

static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
{
	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
}
These definitions can be shared among 90% of the arches provided
XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.
For architectures that really need a custom version there is always
the possibility to override the generic version with the usual ifdef magic.
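As a concrete sketch of that sharing model, a generic definition can live in
include/linux/pgtable.h and be skipped when an architecture supplies its own;
the guard style below is illustrative, not the exact upstream code:

#ifndef pmd_offset
static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
{
	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
}
#define pmd_offset pmd_offset	/* record that a definition is now present */
#endif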
These patches introduce include/linux/pgtable.h that replaces
include/asm-generic/pgtable.h and add the definitions of the page table
accessors to the new header.
This patch (of 12):
The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the
functions involving page table manipulations, e.g. pte_alloc() and
pmd_alloc(). So there is no point in explicitly including <asm/pgtable.h>
in the files that include <linux/mm.h>.
The include statements in such cases are removed with a simple loop:
for f in $(git grep -l "include <linux/mm.h>") ; do
	sed -i -e '/include <asm\/pgtable.h>/ d' $f
done
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Cain <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chris Zankel <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Greentime Hu <[email protected]>
Cc: Greg Ungerer <[email protected]>
Cc: Guan Xuetao <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Ley Foon Tan <[email protected]>
Cc: Mark Salter <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Nick Hu <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Russell King <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vincent Chen <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
|
|
CPU_LOONGSON2 -> CPU_LOONGSON2EF
CPU_LOONGSON3 -> CPU_LOONGSON64
Newer Loongson-2 products (2G/2H/2K1000) can share a kernel
implementation with Loongson-3, while 2E/2F are less similar to the
other LOONGSON64 products.
Signed-off-by: Jiaxun Yang <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
Remove the function sb1_dma_init(), which is not used anywhere.
This was found in part by using the static code analysis tool cppcheck.
Signed-off-by: Rickard Strandqvist <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/8873/
Signed-off-by: Paul Burton <[email protected]>
Cc: John Crispin <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Now that EXPORT_SYMBOL can be used from assembly source, move the
EXPORT_SYMBOL invocations for the copy_page & clear_page functions to be
alongside their definitions.
With this change there are no longer any symbols exported from
mips_ksyms.c so remove the file.
Signed-off-by: Paul Burton <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/14515/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Historically a lot of these existed because we did not have
a distinction between what was modular code and what was providing
support to modules via EXPORT_SYMBOL and friends. That changed
when we forked out support for the latter into the export.h file.
This means we should be able to reduce the usage of module.h
in code that is built obj-y in a Makefile or bool in Kconfig. The advantage
in doing so is that module.h itself sources about 15 other headers,
adding significantly to what we feed cpp, and it can obscure what
headers we are effectively using.
Since module.h was the source for init.h (for __init) and for
export.h (for EXPORT_SYMBOL), we consider each obj-y/bool instance
for the presence of either and replace as needed.
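A minimal before/after illustration of that replacement for a hypothetical
obj-y file that only needs __init and EXPORT_SYMBOL (the file itself is an
assumption, not one from this series):

/* before: module.h drags in roughly 15 other headers */
#include <linux/module.h>

/* after: include only what the file actually uses */
#include <linux/export.h>	/* EXPORT_SYMBOL() */
#include <linux/init.h>		/* __init */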
Signed-off-by: Paul Gortmaker <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/14033/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Newer Loongson 3 CPUs (Loongson-3A R2 and later, as opposed to Loongson-3A R1,
Loongson-3B R1 and Loongson-3B R2) have many enhancements, such as FTLB,
L1-VCache, EI/DI/Wait/Prefetch instructions, DSP/DSPv2 ASE, User Local
register, Read-Inhibit/Execute-Inhibit, SFB (Store Fill Buffer), Fast
TLB refill support, etc.
This patch introduces a config option, CONFIG_LOONGSON3_ENHANCEMENT, to
enable those enhancements which are not probed at run time. If you want
a generic kernel to run on all Loongson 3 machines, please say 'N'
here. If you want a high-performance kernel to run on new Loongson 3
machines only, please say 'Y' here.
Some additional explanations:
1) The SFB sits between the core and the L1 cache; it can cause memory
   accesses to complete out of order, so writel()/outl() (and other similar
   functions) need an I/O reorder barrier (sketched after this list).
2) Loongson 3 has a bug where the di instruction cannot save the irq flag,
   so arch_local_irq_save() is modified. Since CPU_MIPSR2 is selected by
   CONFIG_LOONGSON3_ENHANCEMENT, the generic kernel doesn't use ei/di
   at all.
3) CPU_HAS_PREFETCH is selected by CONFIG_LOONGSON3_ENHANCEMENT, so
   MIPS_CPU_PREFETCH probing (used by uasm) is also added in this patch.
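A minimal sketch of note 1, assuming the fix amounts to a write barrier in
front of MMIO stores when the enhancement option is enabled; the macro and
the helper below are illustrative names, not the kernel's actual I/O
accessors:

#ifdef CONFIG_LOONGSON3_ENHANCEMENT
#define war_io_reorder_wmb()	wmb()		/* drain SFB stores before MMIO */
#else
#define war_io_reorder_wmb()	barrier()	/* compiler barrier is enough */
#endif

static inline void sample_writel(u32 val, volatile void __iomem *addr)
{
	war_io_reorder_wmb();	/* keep buffered stores ordered against this one */
	*(volatile u32 __iomem *)addr = val;
}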
Signed-off-by: Huacai Chen <[email protected]>
Cc: Aurelien Jarno <[email protected]>
Cc: Steven J. Hill <[email protected]>
Cc: Fuxin Zhang <[email protected]>
Cc: Zhangjin Wu <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/12755/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
This allows the kernel to correctly detect an R16000 MIPS CPU on systems that
have one. Otherwise, such systems detect the CPU as an R14000, due to
similarities in the CPU PRId value.
Signed-off-by: Joshua Kinard <[email protected]>
Cc: Linux MIPS List <[email protected]>
Patchwork: https://patchwork.linux-mips.org/patch/9092/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
The MIPS R6 pref instruction has only 9 bits for the immediate
field, so skip the micro-assembler PREF instruction if the offset
does not fit in 9 bits. Moreover, hint 30 (Pref_PrepareForStore) is
no longer valid in MIPS R6, so we change the default for all MIPS R6
processors to hint 5 (Pref_StoreStreamed).
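A sketch of that emission rule; the wrapper name is hypothetical and the
argument order merely follows the kernel's uasm convention, so treat the
exact signature as an assumption:

/* The MIPS R6 pref offset is a signed 9-bit field: -256..255. */
static void emit_pref_checked(u32 **buf, unsigned int hint, int off,
			      unsigned int base)
{
	if (cpu_has_mips_r6 && (off < -256 || off > 255))
		return;			/* offset does not encode: skip the prefetch */
	uasm_i_pref(buf, hint, off, base);
}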
Signed-off-by: Markos Chandras <[email protected]>
|
|
Fix a uasm warning that was triggered by the workaround for R4600 V2 CPUs.
Signed-off-by: Thomas Bogendoerfer <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/6716/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
None of these files are actually using any __init type directives
and hence don't need to include <linux/init.h>. Most are just
left over from the __devinit and __cpuinit removal, or simply due to
code getting copied from one driver to the next.
Signed-off-by: Paul Gortmaker <[email protected]>
Signed-off-by: John Crispin <[email protected]>
Patchwork: http://patchwork.linux-mips.org/patch/6320/
|
|
o Move current_cpu_type() to a separate header file
o #ifdefing on supported CPU types lets modern GCC know that certain
code in callers may be discarded, ideally turning current_cpu_type() into
a function returning a constant (see the sketch below).
o Use current_cpu_type() rather than direct access to struct cpuinfo_mips.
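A sketch of how that collapse to a constant can look; the Kconfig condition
is only an example of a configuration in which a single CPU type is enabled,
not the real set of symbols:

static inline int current_cpu_type(void)
{
#if defined(CONFIG_SYS_HAS_CPU_R4X00) && !defined(CONFIG_SYS_HAS_CPU_ANY_OTHER)
	/* exactly one CPU type configured in: let GCC discard dead callers */
	return CPU_R4000;
#else
	return current_cpu_data.cputype;
#endif
}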
Signed-off-by: Ralf Baechle <[email protected]>
Cc: Steven J. Hill <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/5833/
|
|
commit 3747069b25e419f6b51395f48127e9812abc3596 upstream.
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
Here, we remove all the MIPS __cpuinit from C code and __CPUINIT
from asm files. MIPS is interesting in this respect, because there
are also uasm users hiding behind their own renamed versions of the
__cpuinit macros.
[1] https://lkml.org/lkml/2013/5/20/589
[[email protected]: Folded in Paul's followup fix.]
Signed-off-by: Paul Gortmaker <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/5494/
Patchwork: https://patchwork.linux-mips.org/patch/5495/
Patchwork: https://patchwork.linux-mips.org/patch/5509/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Signed-off-by: Tony Wu <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/5536/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
This and the next patch resolve memory corruption problems during CPU
hotplug. Without these patches, memory corruption can be triggered easily
as below:
On a quad-core MIPS platform, use "spawn" of UnixBench-5.1.3
(http://code.google.com/p/byte-unixbench/) and a CPU hotplug script like this
(hotplug.sh):
while true; do
	echo 0 >/sys/devices/system/cpu/cpu1/online
	echo 0 >/sys/devices/system/cpu/cpu2/online
	echo 0 >/sys/devices/system/cpu/cpu3/online
	sleep 1
	echo 1 >/sys/devices/system/cpu/cpu1/online
	echo 1 >/sys/devices/system/cpu/cpu2/online
	echo 1 >/sys/devices/system/cpu/cpu3/online
	sleep 1
done
Run "hotplug.sh" and then run "spawn 10000", spawn will get segfault
after a few minutes.
This patch:
Currently, clear_page()/copy_page() are generated by Micro-assembler
dynamically. But they are unavailable until uasm_resolve_relocs() has
finished because jump labels are illegal before that. Since these
functions are shared by every CPU, we only call build_clear_page()/
build_copy_page() only once at boot time. Without this patch, programs
will get random memory corruption (segmentation fault, bus error, etc.)
while CPU Hotplug (e.g. one CPU is using clear_page() while another is
generating it in cpu_cache_init()).
For similar reasons we modify build_tlb_refill_handler()'s invocation.
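The resulting call pattern boils down to building the shared helpers a
single time; the guard variable and wrapper below are hypothetical, a sketch
of the idea rather than the patch's actual restructuring of cpu_cache_init():

#include <linux/types.h>	/* bool */

void build_clear_page(void);
void build_copy_page(void);

static void build_page_helpers_once(void)
{
	static bool done;	/* the generated code is shared by all CPUs */

	if (done)
		return;		/* a hotplugged CPU must not regenerate it */
	build_clear_page();
	build_copy_page();
	done = true;
}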
V2:
1. Rework the code so that CPU#0 can be taken online/offline.
2. Introduce the cpu_has_local_ebase feature, since some types of MIPS CPU
need a per-CPU tlb_refill_handler().
Signed-off-by: Huacai Chen <[email protected]>
Signed-off-by: Hongbing Hu <[email protected]>
Acked-by: David Daney <[email protected]>
Patchwork: http://patchwork.linux-mips.org/patch/4994/
Acked-by: John Crispin <[email protected]>
|
|
Having received another series of whitespace patches I decided to do this
once and for all rather than dealing with this kind of patch trickling
in forever.
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Nobody seems to be interested anymore and upstream also never had an
ethernet driver.
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Remove usage of the '__attribute__((alias("...")))' hack that aliased
to integer arrays containing micro-assembled instructions. This hack
breaks when building a microMIPS kernel. Removing it also makes the code
much easier to understand.
[[email protected]: Added back export of the clear_page and copy_page
symbols so certain modules will work again. Also fixed build with
CONFIG_SIBYTE_DMA_PAGEOPS enabled.]
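For reference, a sketch of the kind of construct being removed (the array
size and names are illustrative): a C function symbol aliased onto a bare
instruction buffer that uasm fills at boot.

static u32 clear_page_array[0x120 / 4];	/* filled with generated insns */
void clear_page(void *page) __attribute__((alias("clear_page_array")));
EXPORT_SYMBOL(clear_page);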
Signed-off-by: Steven J. Hill <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/3866/
Acked-by: David Daney <[email protected]>
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Disintegrate asm/system.h for MIPS.
Signed-off-by: David Howells <[email protected]>
Acked-by: Ralf Baechle <[email protected]>
cc: [email protected]
|
|
Signed-off-by: Florian Fainelli <[email protected]>
To: [email protected]
To: David Daney <[email protected]>
Patchwork: http://patchwork.linux-mips.org/patch/887/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Some of these were relying on smp.h being dragged in by another header,
which of course is fragile. <asm/cpu-info.h> uses smp_processor_id()
only in macros, and including smp.h there leads to an include loop, so
don't change cpu-info.h.
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Current VR5500 processor support lacks some functions which are
expected to be configured/synthesized at arch initialization.
Here are some VR5500A spec notes:
* All execution hazards are handled in hardware.
* Once the VR5500A stops the operation of the pipeline with the WAIT
  instruction, it returns from standby mode only when a reset, an NMI
  request, or an enabled interrupt is detected. In other words,
  if interrupts are disabled by Status.IE=0, it stays in standby mode
  even when interrupts are internally asserted.
  Notes on WAIT: the operation of the processor is undefined if the WAIT
  insn is in a branch delay slot. The operation is also undefined
  if the WAIT insn is executed when Status.EXL and Status.ERL are set to 1.
* The VR5500A core only implements the Load prefetch.
With these changes, it boots fine.
Signed-off-by: Shinya Kuribayashi <[email protected]>
Signed-off-by: Ralf Baechle <[email protected]>
|
|
The generated copy_page for R4k CPUs with a 128-byte cache line size used
Create Dirty Exclusive cache line operations even if only part of the
cache line was filled. This change avoids generating such cache operations
if only part of the cache line is copied in one loop iteration. It also
increases the maximum loop size, because the generated code still fits
into the available space for R4k CPUs with a 128-byte cache line size.
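The generator's decision reduces to a sketch like the following; the length
variables are illustrative, while uasm_i_cache() and Create_Dirty_Excl_D
follow the kernel's naming, and A0 stands in for the destination register:

/* Only create a dirty-exclusive line when this iteration fills a whole
 * destination cache line; otherwise skip the cacheop. */
if (bytes_per_iteration >= cache_line_size)
	uasm_i_cache(&buf, Create_Dirty_Excl_D, off, A0);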
Signed-off-by: Thomas Bogendoerfer <[email protected]>
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Signed-off-by: Yoichi Yuasa <[email protected]>
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Fold the SB-1 specific implementation of clear_page/copy_page into the
generic version, and rewrite that one in tlbex style. The immediate
benefits:
- It converts the compile-time workaround for SB-1 pass 1 prefetches
to a more efficient run-time check (sketched below).
- It allows adjustment of loop unrolling, which helps to reduce the
number of redundant cdex cache ops.
- It fixes some esoteric corner cases (the cache line length calculations
can go wrong, and support for 64k pages without prefetch instructions
will overflow the addiu immediate).
- Somewhat better guesses of "good" prefetch values.
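A sketch of that run-time check; the prefetch-bias variables mirror what a
generic generator might use and the pass 1 revision test is an assumption,
so read this as an outline rather than the actual code:

/* SB-1 pass 1 parts cannot use the prefetches the generator would normally
 * emit, so zero the bias values at run time instead of compiling the
 * workaround in unconditionally. */
if (current_cpu_type() == CPU_SB1 &&
    (read_c0_prid() & PRID_REV_MASK) == 0 /* pass 1 silicon, assumed encoding */) {
	pref_bias_clear_store = 0;
	pref_bias_copy_load   = 0;
	pref_bias_copy_store  = 0;
}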
Signed-off-by: Thiemo Seufer <[email protected]>
Signed-off-by: Ralf Baechle <[email protected]>
|