path: root/arch/csky
2020-07-31  csky: Add support for function error injection  (Guo Ren, 4 files, -0/+18)
Inspired by commit 42d038c4fb00 ("arm64: Add support for function error injection"), this patch supports function error injection for csky. It mainly adds two functions: regs_set_return_value(), which overwrites the return value, and override_function_with_return(), which makes the probed function return immediately and jump back to its caller.

Test log:

  cd /sys/kernel/debug/fail_function/
  echo sys_clone > inject
  echo 100 > probability
  echo 1 > interval
  ls /
  [ 108.644163] FAULT_INJECTION: forcing a failure.
  [ 108.644163] name fail_function, interval 1, probability 100, space 0, times 1
  [ 108.647799] CPU: 0 PID: 104 Comm: sh Not tainted 5.8.0-rc5+ #46
  [ 108.648384] Call Trace:
  [ 108.649339] [<8005eed4>] walk_stackframe+0x0/0xf0
  [ 108.649679] [<8005f16a>] show_stack+0x32/0x5c
  [ 108.649927] [<8040f9d2>] dump_stack+0x6e/0x9c
  [ 108.650271] [<80406f7e>] should_fail+0x15e/0x1ac
  [ 108.650720] [<80118ba8>] fei_kprobe_handler+0x28/0x5c
  [ 108.651519] [<80754110>] kprobe_breakpoint_handler+0x144/0x1cc
  [ 108.652289] [<8005d6da>] trap_c+0x8e/0x110
  [ 108.652816] [<8005ce8c>] csky_trap+0x5c/0x70
  -sh: can't fork: Invalid argument

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
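A minimal sketch of what the two helpers boil down to (the a0 return register and lr link register are assumptions taken from the csky ABI, not quoted from the patch):

  #include <asm/ptrace.h>

  /* sketch: overwrite the value the probed function appears to return */
  static inline void regs_set_return_value(struct pt_regs *regs,
                                           unsigned long val)
  {
          regs->a0 = val;         /* a0 carries return values on csky */
  }

  /* sketch: make the probed function return straight to its caller */
  void override_function_with_return(struct pt_regs *regs)
  {
          instruction_pointer_set(regs, regs->lr);
  }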
2020-07-31  csky: Fixup kprobes handler couldn't change pc  (Guo Ren, 1 file, -1/+3)
The "Changing Execution Path" section in Documentation/kprobes.txt says:

  Since kprobes can probe into a running kernel code, it can change the register set, including instruction pointer.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
2020-07-31  csky: Fixup duplicated restore sp in RESTORE_REGS_FTRACE  (Guo Ren, 1 file, -3/+0)
RESTORE_REGS_FTRACE never returns to user space, so there is no need to save sp into ss0 the way RESTORE_REGS_ALL does.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
2020-07-31  csky: Add cpu feature register hint for smp  (Guo Ren, 1 file, -0/+3)
CPU feature registers are set up by the customers' bootloader, but Linux must help transfer them from the primary core to the secondary cores. This patch adds support for the hint2 CPU feature register.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
2020-07-31  csky: Add SECCOMP_FILTER supported  (Guo Ren, 5 files, -3/+25)
secure_computing() is called first in syscall_trace_enter() so that a system call is aborted quickly, without any subsequent syscall tracing, if seccomp rules deny it; a sketch of that ordering follows below.

TODO:
 - Update https://github.com/seccomp/libseccomp csky support

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
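A hedged sketch of that entry-path ordering, using the generic helper names; the csky wiring is an assumption, not the literal patch:

  #include <linux/seccomp.h>
  #include <linux/tracehook.h>

  static long syscall_trace_enter(struct pt_regs *regs)
  {
          /* run seccomp first: a denied call is aborted before any
           * other syscall-entry work happens */
          if (secure_computing() == -1)
                  return -1;

          if (test_thread_flag(TIF_SYSCALL_TRACE) &&
              tracehook_report_syscall_entry(regs))
                  return -1;

          return syscall_get_nr(current, regs);
  }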
2020-07-31  csky: remove unused thread_saved_pc and *_segments functions/macros  (Tobias Klauser, 2 files, -16/+0)
These are used nowhere in the tree (except by some architectures which define them for their own use) and were already removed for other architectures in:

  commit 6474924e2b5d ("arch: remove unused macro/function thread_saved_pc()")
  commit c17c02040bf0 ("arch: remove unused *_segments() macros/functions")

Remove them from arch/csky as well.

Signed-off-by: Tobias Klauser <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
2020-07-27  csky: switch to ->regset_get()  (Al Viro, 1 file, -14/+10)
NB: WTF is fpregs_get() playing at???

Signed-off-by: Al Viro <[email protected]>
2020-07-04  arch: rename copy_thread_tls() back to copy_thread()  (Christian Brauner, 1 file, -1/+1)
Now that HAVE_COPY_THREAD_TLS has been removed, rename copy_thread_tls() back to simply copy_thread(). It's a simpler name, and doesn't imply that only tls is copied here. This finishes an outstanding chunk of internal process creation work since we've added clone3().

Cc: [email protected]
Acked-by: Thomas Bogendoerfer <[email protected]>
Acked-by: Stafford Horne <[email protected]>
Acked-by: Greentime Hu <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>
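For reference, the per-arch hook every architecture now implements has this shape (the 5-argument form from this series; the parameter names here are illustrative):

  int copy_thread(unsigned long clone_flags, unsigned long usp,
                  unsigned long kthread_arg, struct task_struct *p,
                  unsigned long tls);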
2020-07-04  arch: remove HAVE_COPY_THREAD_TLS  (Christian Brauner, 1 file, -1/+0)
All architectures support copy_thread_tls() now, so remove the legacy copy_thread() function and the HAVE_COPY_THREAD_TLS config option. Everyone uses the same process creation calling convention based on copy_thread_tls() and struct kernel_clone_args. This will make it easier to maintain the core process creation code under kernel/, simplifies the callpaths and makes them identical for all architectures.

Cc: [email protected]
Acked-by: Thomas Bogendoerfer <[email protected]>
Acked-by: Greentime Hu <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>
2020-06-17  maccess: rename probe_kernel_{read,write} to copy_{from,to}_kernel_nofault  (Christoph Hellwig, 1 file, -2/+3)
Better describe what these functions do.

Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
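The renamed pair, as I recall it from include/linux/uaccess.h (treat the exact signatures as an assumption):

  long copy_from_kernel_nofault(void *dst, const void *src, size_t size);
  long copy_to_kernel_nofault(void *dst, const void *src, size_t size);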
2020-06-09  mmap locking API: use coccinelle to convert mmap_sem rwsem call sites  (Michel Lespinasse, 2 files, -6/+6)
This change converts the existing mmap_sem rwsem calls to use the new mmap locking API instead. The change is generated using coccinelle with the following rule:

  // spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir .

  @@
  expression mm;
  @@
  (
  -init_rwsem
  +mmap_init_lock
  |
  -down_write
  +mmap_write_lock
  |
  -down_write_killable
  +mmap_write_lock_killable
  |
  -down_write_trylock
  +mmap_write_trylock
  |
  -up_write
  +mmap_write_unlock
  |
  -downgrade_write
  +mmap_write_downgrade
  |
  -down_read
  +mmap_read_lock
  |
  -down_read_killable
  +mmap_read_lock_killable
  |
  -down_read_trylock
  +mmap_read_trylock
  |
  -up_read
  +mmap_read_unlock
  )
  -(&mm->mmap_sem)
  +(mm)

Signed-off-by: Michel Lespinasse <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Daniel Jordan <[email protected]> Reviewed-by: Laurent Dufour <[email protected]> Reviewed-by: Vlastimil Babka <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: David Rientjes <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Jason Gunthorpe <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: John Hubbard <[email protected]> Cc: Liam Howlett <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Ying Han <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
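What the rule does at a typical call site (an illustrative before/after, not a specific file from the tree):

  /* before */
  down_read(&mm->mmap_sem);
  vma = find_vma(mm, addr);
  up_read(&mm->mmap_sem);

  /* after */
  mmap_read_lock(mm);
  vma = find_vma(mm, addr);
  mmap_read_unlock(mm);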
2020-06-09  mm: consolidate pte_index() and pte_offset_*() definitions  (Mike Rapoport, 1 file, -30/+0)
All architectures define pte_index() as (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1) and all architectures define pte_offset_kernel() as an entry in the array of PTEs indexed by the pte_index(). For most architectures the pte_offset_kernel() implementation relies on the availability of pmd_page_vaddr() that converts a PMD entry value to the virtual address of the page containing the PTEs array. Let's move x86 definitions of the PTE accessors to the generic place in <linux/pgtable.h> and then simply drop the respective definitions from the other architectures. The architectures that didn't provide pmd_page_vaddr() are updated to have that defined. The generic implementation of pte_offset_kernel() can be overridden by an architecture and alpha makes use of this because it has special ordering requirements for its version of pte_offset_kernel(). [[email protected]: v2] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: update] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: update] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: fix x86 warning] [[email protected]: fix powerpc build] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
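The consolidated generic forms, as a sketch built from the definitions quoted above:

  static inline unsigned long pte_index(unsigned long address)
  {
          return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
  }

  #ifndef pte_offset_kernel
  static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
  {
          return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
  }
  #endif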
2020-06-09  csky: replace definitions of __pXd_offset() with pXd_index()  (Mike Rapoport, 4 files, -8/+7)
All architectures use pXd_index() to get an entry in the page table page corresponding to a virtual address. Align csky with other architectures. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-09  mm: introduce include/linux/pgtable.h  (Mike Rapoport, 2 files, -3/+1)
The include/linux/pgtable.h is going to be the home of generic page table manipulation functions. Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and make the latter include asm/pgtable.h. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-09  mm: don't include asm/pgtable.h if linux/mm.h is already included  (Mike Rapoport, 4 files, -4/+0)
Patch series "mm: consolidate definitions of page table accessors", v2.

The low level page table accessors (pXY_index(), pXY_offset()) are duplicated across all architectures and sometimes more than once. For instance, we have 31 definitions of pgd_offset() for 25 supported architectures. Most of these definitions are actually identical and typically it boils down to, e.g.

  static inline unsigned long pmd_index(unsigned long address)
  {
          return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
  }

  static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
  {
          return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
  }

These definitions can be shared among 90% of the arches provided XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined. For architectures that really need a custom version there is always the possibility to override the generic version with the usual ifdefs magic.

These patches introduce include/linux/pgtable.h that replaces include/asm-generic/pgtable.h and add the definitions of the page table accessors to the new header.

This patch (of 12):

The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the functions involving page table manipulations, e.g. pte_alloc() and pmd_alloc(). So, there is no point to explicitly include <asm/pgtable.h> in the files that include <linux/mm.h>. The include statements in such cases are removed with a simple loop:

  for f in $(git grep -l "include <linux/mm.h>") ; do
          sed -i -e '/include <asm\/pgtable.h>/ d' $f
  done

Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-09  kernel: rename show_stack_loglvl() => show_stack()  (Dmitry Safonov, 2 files, -10/+3)
Now that the last users of show_stack() have been converted to use an explicit log level, show_stack_loglvl() can drop its redundant suffix and become once again the well-known show_stack().

Signed-off-by: Dmitry Safonov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
2020-06-09  csky: add show_stack_loglvl()  (Dmitry Safonov, 1 file, -2/+9)
Currently, the log level of show_stack() depends on the platform's implementation. This creates situations where the headers are printed at a lower or higher log level than the stacktrace itself (depending on the platform or user). Furthermore, it moves the logic decision from the user to the architecture side. As a result, some users such as sysrq/kdb/etc resort to tricks, temporarily raising console_loglevel while printing their messages, which may not only print unwanted messages from other CPUs but also omit printing altogether in the unlucky case where the printk() was deferred.

Introducing a log-level parameter and KERN_UNSUPPRESSED [1] seems an easier approach than introducing more printk buffers. It will also consolidate the printing of headers.

Introduce show_stack_loglvl(), which will eventually substitute show_stack().

[1]: https://lore.kernel.org/lkml/[email protected]/T/#u

Signed-off-by: Dmitry Safonov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Guo Ren <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04  kmap: consolidate kmap_prot definitions  (Ira Weiny, 1 file, -2/+0)
Most architectures define kmap_prot to be PAGE_KERNEL. Let sparc and xtensa define their own and define PAGE_KERNEL as the default if not overridden. [[email protected]: coding style fixes] Suggested-by: Christoph Hellwig <[email protected]> Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04  kmap: remove kmap_atomic_to_page()  (Ira Weiny, 2 files, -14/+0)
kmap_atomic_to_page() has no callers and is only defined on 1 arch and declared on another. Remove it. Suggested-by: Al Viro <[email protected]> Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04  arch/kmap: define kmap_atomic_prot() for all arch's  (Ira Weiny, 1 file, -3/+3)
To support kmap_atomic_prot(), all architectures need to support protections passed to their kmap_atomic_high() function. Pass protections into kmap_atomic_high() and change the name to kmap_atomic_high_prot() to match. Then define kmap_atomic_prot() as a core function which calls kmap_atomic_high_prot() when needed. Finally, redefine kmap_atomic() as a wrapper of kmap_atomic_prot() with the default kmap_prot exported by the architectures. Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04  arch/kmap: don't hard code kmap_prot values  (Ira Weiny, 1 file, -1/+1)
To support kmap_atomic_prot() on all architectures each arch must support protections passed in to them. Change csky, mips, nds32 and xtensa to use their global constant kmap_prot rather than a hard coded value which was equal. Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04  arch/kunmap_atomic: consolidate duplicate code  (Ira Weiny, 2 files, -7/+3)
Every single architecture (including !CONFIG_HIGHMEM) calls... pagefault_enable(); preempt_enable(); ... before returning from __kunmap_atomic(). Lift this code into the kunmap_atomic() macro. While we are at it rename __kunmap_atomic() to kunmap_atomic_high() to be consistent. [[email protected]: don't enable pagefault/preempt twice] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: coding style fixes] Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guenter Roeck <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04  arch/kmap_atomic: consolidate duplicate code  (Ira Weiny, 2 files, -8/+2)
Every arch has the same code to ensure atomic operations and a check for !HIGHMEM page. Remove the duplicate code by defining a core kmap_atomic() which only calls the arch specific kmap_atomic_high() when the page is high memory. [[email protected]: coding style fixes] Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
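A sketch of the consolidated core the text describes (kmap_atomic_high() is the arch-provided part; the body below mirrors the description, not the verbatim diff):

  #include <linux/highmem.h>

  static inline void *kmap_atomic(struct page *page)
  {
          preempt_disable();
          pagefault_disable();
          /* only high memory actually needs an arch-specific mapping */
          if (!PageHighMem(page))
                  return page_address(page);
          return kmap_atomic_high(page);
  }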
2020-06-04  arch/kunmap: remove duplicate kunmap implementations  (Ira Weiny, 2 files, -12/+0)
All architectures do exactly the same thing for kunmap(); remove all the duplicate definitions and lift the call to the core. This also has the benefit of changing kunmap() on a number of architectures to be an inline call rather than an actual function. [[email protected]: fix CONFIG_HIGHMEM=n build on various architectures] Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04  arch/kmap: remove redundant arch specific kmaps  (Ira Weiny, 2 files, -12/+6)
The kmap code for all the architectures is almost 100% identical. Lift the common code to the core. Use ARCH_HAS_KMAP_FLUSH_TLB to indicate if an arch defines kmap_flush_tlb() and call it if needed. This also has the benefit of changing kmap() on a number of architectures to be an inline call rather than an actual function. Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
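The lifted core kmap() with the optional arch TLB-flush hook, sketched from the description above (not the verbatim diff):

  #ifndef ARCH_HAS_KMAP_FLUSH_TLB
  static inline void kmap_flush_tlb(unsigned long addr) { }
  #endif

  static inline void *kmap(struct page *page)
  {
          void *addr;

          might_sleep();
          if (!PageHighMem(page))
                  addr = page_address(page);
          else
                  addr = kmap_high(page);
          /* only arches that define ARCH_HAS_KMAP_FLUSH_TLB do real work here */
          kmap_flush_tlb((unsigned long)addr);
          return addr;
  }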
2020-06-04  arch/kmap: remove BUG_ON()  (Ira Weiny, 1 file, -1/+1)
Patch series "Remove duplicated kmap code", v3. The kmap infrastructure has been copied almost verbatim to every architecture. This series consolidates obvious duplicated code by defining core functions which call into the architectures only when needed. Some of the k[un]map_atomic() implementations have some similarities but the similarities were not sufficient to warrant further changes. In addition we remove a duplicate implementation of kmap() in DRM. This patch (of 15): Replace the use of BUG_ON(in_interrupt()) in the kmap() and kunmap() in favor of might_sleep(). Besides the benefits of might_sleep(), this normalizes the implementations such that they can be made generic in subsequent patches. Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Dan Williams <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Christian König <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Helge Deller <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Max Filippov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-03  csky: simplify detection of memory zone boundaries  (Mike Rapoport, 1 file, -15/+11)
The free_area_init() function only requires the definition of the maximal PFN for each of the supported zones rather than calculation of actual zone sizes and the sizes of the holes between the zones. After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP the free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Tested-by: Hoan Tran <[email protected]> [arm64] Cc: Baoquan He <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-05-28  csky: Fixup CONFIG_DEBUG_RSEQ  (Guo Ren, 1 file, -5/+10)
Putting the rseq_syscall check point in the syscall prologue clobbers a0 ... a7, which causes system call bugs when DEBUG_RSEQ is enabled. So move it to the syscall epilogue, but before syscall_trace.

Signed-off-by: Guo Ren <[email protected]>
2020-05-28  csky: Coding convention in entry.S  (Guo Ren, 4 files, -40/+42)
There is no fixup or feature in the patch; it is only a cleanup:

 - Remove unnecessary registers (r11, r12); just use r9, r10 and the syscallid regs for temporary usage.
 - Add _TIF_SYSCALL_WORK and _TIF_WORK_MASK as gathering macros for the flags.

Signed-off-by: Guo Ren <[email protected]>
2020-05-28  csky: Fixup abiv2 syscall_trace break a4 & a5  (Guo Ren, 2 files, -2/+6)
The current implementation could destroy a4 & a5 under strace, so we need to get them from pt_regs via SAVE_ALL.

Signed-off-by: Guo Ren <[email protected]>
2020-05-28  csky: Fixup CONFIG_PREEMPT panic  (Guo Ren, 3 files, -27/+11)
log:

  [    0.13373200] Calibrating delay loop...
  [    0.14077600] ------------[ cut here ]------------
  [    0.14116700] WARNING: CPU: 0 PID: 0 at kernel/sched/core.c:3790 preempt_count_add+0xc8/0x11c
  [    0.14348000] DEBUG_LOCKS_WARN_ON((preempt_count() < 0))Modules linked in:
  [    0.14395100] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.6.0 #7
  [    0.14410800]
  [    0.14427400] Call Trace:
  [    0.14450700] [<807cd226>] dump_stack+0x8a/0xe4
  [    0.14473500] [<80072792>] __warn+0x10e/0x15c
  [    0.14495900] [<80072852>] warn_slowpath_fmt+0x72/0xc0
  [    0.14518600] [<800a5240>] preempt_count_add+0xc8/0x11c
  [    0.14544900] [<807ef918>] _raw_spin_lock+0x28/0x68
  [    0.14572600] [<800e0eb8>] vprintk_emit+0x84/0x2d8
  [    0.14599000] [<800e113a>] vprintk_default+0x2e/0x44
  [    0.14625100] [<800e2042>] vprintk_func+0x12a/0x1d0
  [    0.14651300] [<800e1804>] printk+0x30/0x48
  [    0.14677600] [<80008052>] lockdep_init+0x12/0xb0
  [    0.14703800] [<80002080>] start_kernel+0x558/0x7f8
  [    0.14730000] [<800052bc>] csky_start+0x58/0x94
  [    0.14756600] irq event stamp: 34
  [    0.14775100] hardirqs last  enabled at (33): [<80067370>] ret_from_exception+0x2c/0x72
  [    0.14793700] hardirqs last disabled at (34): [<800e0eae>] vprintk_emit+0x7a/0x2d8
  [    0.14812300] softirqs last  enabled at (32): [<800655b0>] __do_softirq+0x578/0x6d8
  [    0.14830800] softirqs last disabled at (25): [<8007b3b8>] irq_exit+0xec/0x128

The preempt_count held in a register could be destroyed after csky_do_IRQ unless it is reloaded from memory. Following other architectures (arm64, riscv), we move the preempt entry into ret_from_exception and disable irqs at the beginning of ret_from_exception instead of in RESTORE_ALL.

Signed-off-by: Guo Ren <[email protected]>
Reported-by: Lu Baoquan <[email protected]>
2020-05-15  csky: Fixup raw_copy_from_user()  (Al Viro, 2 files, -29/+28)
If raw_copy_from_user(to, from, N) returns K, callers expect the first N - K bytes starting at to to have been replaced with the contents of the corresponding area starting at from, and the last K bytes of the destination *left* *unmodified*. What arch/csky/lib/usercopy.c is doing is broken - it can lead to e.g. data corruption on write(2).

raw_copy_to_user() is inaccurate about its return value, which is a bug, but the consequences are less drastic than for raw_copy_from_user(). And just what are those access_ok() doing in there? I mean, look into linux/uaccess.h; that's where we do that check (as well as zeroing the tail on failure in the callers that need zeroing).

AFAICS, all of that shouldn't be hard to fix; something like the patch below might make a useful starting point. I would suggest moving these macros into usercopy.c (they are never used anywhere else) and possibly expanding them there; if you leave them alive, please at least rename __copy_user_zeroing(). Again, it must not zero anything on a failed read.

Said that, I'm not sure we won't be better off simply turning usercopy.c into usercopy.S - all that is left there is a couple of functions, each consisting only of inline asm.

Guo Ren reply:

Yes, raw_copy_from_user is wrong; there is no need for zeroing code.

  unsigned long _copy_from_user(void *to, const void __user *from,
                                unsigned long n)
  {
          unsigned long res = n;
          might_fault();
          if (likely(access_ok(from, n))) {
                  kasan_check_write(to, n);
                  res = raw_copy_from_user(to, from, n);
          }
          if (unlikely(res))
                  memset(to + (n - res), 0, res);
          return res;
  }
  EXPORT_SYMBOL(_copy_from_user);

You are right and access_ok() should be removed. But how about:

  do { ...
   "2:     stw     %3, (%1, 0)     \n" \
  +"       subi    %0, 4           \n" \
   "9:     stw     %4, (%1, 4)     \n" \
  +"       subi    %0, 4           \n" \
   "10:    stw     %5, (%1, 8)     \n" \
  +"       subi    %0, 4           \n" \
   "11:    stw     %6, (%1, 12)    \n" \
  +"       subi    %0, 4           \n" \
   "       addi    %2, 16          \n" \
   "       addi    %1, 16          \n" \

so we don't expand __ex_table?

Al Viro reply:

Hey, I've no idea about the instruction scheduling on csky - if that doesn't slow things down, all the better. It's just that copy_to_user() and friends are on fairly hot codepaths, and in quite a few situations they will dominate the speed of e.g. read(2). So I tried to keep the fast path unchanged. Up to the architecture maintainers, obviously. Which would be you...

As for the fixup size increase (__ex_table size is unchanged)... You have each of those macros expanded exactly once. So the size is not a serious argument, IMO - useless complexity would be, if it is, in fact, useless; the size... not really, especially since those extra subi will at least offset it.

Again, up to you - asm optimizations of (essentially) memcpy()-style loops are tricky and can depend upon fairly subtle details of the architecture. So even on something I know reasonably well I would resort to direct experiments if I can't pass the buck to the architecture maintainers. It *is* worth optimizing - this is where read() from a file that is already in page cache spends most of its time, etc.

Guo Ren reply:

Thx. After fixing the typo "sub %0, 4", the patch has been applied.

TODO:
 - The user copy from/to code still needs optimizing.

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
2020-05-15  csky: Fixup gdbmacros.txt with name sp in thread_struct  (Guo Ren, 4 files, -9/+9)
gdbmacros.txt uses sp in thread_struct, but csky uses ksp. This causes bttnobp to fail to execute.

TODO:
 - Still can't display the contents of the stack.

Signed-off-by: Guo Ren <[email protected]>
2020-05-13  csky: Fixup remove unnecessary save/restore PSR code  (Guo Ren, 3 files, -11/+2)
Every process's PSR can be inherited from SETUP_MMU, so there is no need to set it again in INIT_THREAD. Also, use a3 instead of r7 in __switch_to, for coding convention.

Signed-off-by: Guo Ren <[email protected]>
2020-05-13  csky: Fixup remove duplicate irq_disable  (Liu Yibin, 1 file, -2/+0)
Interrupts are disabled in __schedule() with local_irq_disable() and re-enabled in finish_task_switch->finish_lock_switch() with local_irq_enable(), so there is no need to disable irqs here.

Signed-off-by: Liu Yibin <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
2020-05-13  csky: Fixup calltrace panic  (Guo Ren, 8 files, -119/+159)
The implementation of show_stack will panic on a wrong fp in

  addr = *fp++;

because fp isn't checked properly. The current implementations of show_stack, wchan and stack_trace weren't designed properly, so just deprecate them. This patch follows riscv's approach; all code is adapted from arm's. The patch has been verified with:

 - cat /proc/<pid>/stack
 - cat /proc/<pid>/wchan
 - echo c > /proc/sysrq-trigger

Signed-off-by: Guo Ren <[email protected]>
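The flavor of bounds check such a walker needs before each dereference, as a sketch modeled on the arm/riscv unwinders rather than the literal csky code:

  /* fp must stay inside the task's kernel stack and be word aligned,
   * otherwise stop unwinding instead of dereferencing garbage */
  static bool fp_ok(unsigned long fp, unsigned long low, unsigned long high)
  {
          return fp >= low &&
                 fp <= high - sizeof(unsigned long) &&
                 !(fp & (sizeof(unsigned long) - 1));
  }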
2020-05-13  csky: Fixup perf callchain unwind  (Mao Han, 1 file, -2/+7)
  [ 5221.974084] Unable to handle kernel paging request at virtual address 0xfffff000, pc: 0x8002c18e
  [ 5221.985929] Oops: 00000000
  [ 5221.989488]
  [ 5221.989488] CURRENT PROCESS:
  [ 5221.989488]
  [ 5221.992877] COMM=callchain_test PID=11962
  [ 5221.995213] TEXT=00008000-000087e0 DATA=00009f1c-0000a018 BSS=0000a018-0000b000
  [ 5221.999037] USER-STACK=7fc18e20 KERNEL-STACK=be204680
  [ 5221.999037]
  [ 5222.003292] PC: 0x8002c18e (perf_callchain_kernel+0x3e/0xd4)
  [ 5222.007957] LR: 0x8002c198 (perf_callchain_kernel+0x48/0xd4)
  [ 5222.074873] Call Trace:
  [ 5222.074873] [<800a248e>] get_perf_callchain+0x20a/0x29c
  [ 5222.074873] [<8009d964>] perf_callchain+0x64/0x80
  [ 5222.074873] [<8009dc1c>] perf_prepare_sample+0x29c/0x4b8
  [ 5222.074873] [<8009de6e>] perf_event_output_forward+0x36/0x98
  [ 5222.074873] [<800497e0>] search_exception_tables+0x20/0x44
  [ 5222.074873] [<8002cbb6>] do_page_fault+0x92/0x378
  [ 5222.074873] [<80098608>] __perf_event_overflow+0x54/0xdc
  [ 5222.074873] [<80098778>] perf_swevent_hrtimer+0xe8/0x164
  [ 5222.074873] [<8002ddd0>] update_mmu_cache+0x0/0xd8
  [ 5222.074873] [<8002c014>] user_backtrace+0x58/0xc4
  [ 5222.074873] [<8002c0b4>] perf_callchain_user+0x34/0xd0
  [ 5222.074873] [<800a2442>] get_perf_callchain+0x1be/0x29c
  [ 5222.074873] [<8009d964>] perf_callchain+0x64/0x80
  [ 5222.074873] [<8009d834>] perf_output_sample+0x78c/0x858
  [ 5222.074873] [<8009dc1c>] perf_prepare_sample+0x29c/0x4b8
  [ 5222.074873] [<8009de94>] perf_event_output_forward+0x5c/0x98
  [ 5222.097846]
  [ 5222.097846] [<800a0300>] perf_event_exit_task+0x58/0x43c
  [ 5222.097846] [<8006c874>] hrtimer_interrupt+0x104/0x2ec
  [ 5222.097846] [<800a0300>] perf_event_exit_task+0x58/0x43c
  [ 5222.097846] [<80437bb6>] dw_apb_clockevent_irq+0x2a/0x4c
  [ 5222.097846] [<8006c770>] hrtimer_interrupt+0x0/0x2ec
  [ 5222.097846] [<8005f2e4>] __handle_irq_event_percpu+0xac/0x19c
  [ 5222.097846] [<80437bb6>] dw_apb_clockevent_irq+0x2a/0x4c
  [ 5222.097846] [<8005f408>] handle_irq_event_percpu+0x34/0x88
  [ 5222.097846] [<8005f480>] handle_irq_event+0x24/0x64
  [ 5222.097846] [<8006218c>] handle_level_irq+0x68/0xdc
  [ 5222.097846] [<8005ec76>] __handle_domain_irq+0x56/0xa8
  [ 5222.097846] [<80450e90>] ck_irq_handler+0xac/0xe4
  [ 5222.097846] [<80029012>] csky_do_IRQ+0x12/0x24
  [ 5222.097846] [<8002a3a0>] csky_irq+0x70/0x80
  [ 5222.097846] [<800ca612>] alloc_set_pte+0xd2/0x238
  [ 5222.097846] [<8002ddd0>] update_mmu_cache+0x0/0xd8
  [ 5222.097846] [<800a0340>] perf_event_exit_task+0x98/0x43c

The original fp check isn't based on the real kernel stack region; an invalid fp address may cause a kernel panic.

Signed-off-by: Mao Han <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
2020-05-13  csky: Fixup msa highest 3 bits mask  (Liu Yibin, 2 files, -4/+4)
Just as the comment mentions, the MSA register (cr<30/31, 15>) format is:

  31 - 29 | 28 - 9   | 8  | 7  | 6 | 5  | 4   | 3 | 2 | 1 | 0
  BA      | Reserved | SH | WA | B | SO | SEC | C | D | V

So we should shift by 29 bits, not 28, for the mask.

Signed-off-by: Liu Yibin <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
2020-05-13  csky: Fixup perf probe -x hungup  (Guo Ren, 2 files, -0/+11)
Test case:

  # perf probe -x /lib/libc-2.28.9000.so memcpy
  # perf record -e probe_libc:memcpy -aR sleep 1

The system hangs up and the cpu gets stuck in a trap_c loop, because our hardware single-step state can still take an interrupt signal. When we enter the uprobe_xol single-step slot, we should disable irqs in pt_regs->psr. Also, is_swbp_insn() needs a csky arch implementation with a low-16-bit mask; a sketch follows below.

Signed-off-by: Guo Ren <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
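A sketch of the described check. It compares only the low 16 bits, since the csky breakpoint is a 16-bit instruction; treat the exact masking as an assumption rather than the verbatim patch:

  #include <linux/uprobes.h>

  /* a probed site may hold a 32-bit instruction whose low halfword
   * was replaced, so only the low 16 bits identify the breakpoint */
  bool is_swbp_insn(uprobe_opcode_t *insn)
  {
          return (*insn & 0xffff) == (UPROBE_SWBP_INSN & 0xffff);
  }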
2020-05-13  csky: Fixup compile error for abiv1 entry.S  (Guo Ren, 1 file, -4/+4)
This bug comes from the uprobe flag definition in thread_info.h. The abiv1 andi instruction takes a smaller immediate than abiv2's, which causes:

  AS      arch/csky/kernel/entry.o
  arch/csky/kernel/entry.S: Assembler messages:
  arch/csky/kernel/entry.S:224: Error: Operand 2 immediate is overflow.

Signed-off-by: Guo Ren <[email protected]>
2020-05-13  csky/ftrace: Fixup error when disable CONFIG_DYNAMIC_FTRACE  (Guo Ren, 2 files, -0/+4)
When CONFIG_DYNAMIC_FTRACE is disabled, static ftrace fails to compile and boot. This was an oversight while developing "dynamic ftrace" and "ftrace with regs".

Signed-off-by: Guo Ren <[email protected]>
2020-04-10  mm/special: create generic fallbacks for pte_special() and pte_mkspecial()  (Anshuman Khandual, 1 file, -3/+0)
Currently there are many platforms that don't enable ARCH_HAS_PTE_SPECIAL but are required to define quite similar fallback stubs for special page table entry helpers such as pte_special() and pte_mkspecial(), as they get built into generic MM without a config check. This creates two generic fallback stub definitions for these helpers, eliminating much code duplication. The mips platform has a special case where pte_special() and pte_mkspecial() visibility is wider than what ARCH_HAS_PTE_SPECIAL enablement requires. This restricts those symbols' visibility in order to avoid the redefinitions, which would now be exposed through these new generic stubs as a build failure. The arm platform's set_pte_at() definition needs to be moved into a C file just to prevent a build failure. [[email protected]: use defined(CONFIG_ARCH_HAS_PTE_SPECIAL) in mips per Thomas] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Acked-by: Guo Ren <[email protected]> [csky] Acked-by: Geert Uytterhoeven <[email protected]> [m68k] Acked-by: Stafford Horne <[email protected]> [openrisc] Acked-by: Helge Deller <[email protected]> [parisc] Cc: Richard Henderson <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Russell King <[email protected]> Cc: Brian Cain <[email protected]> Cc: Tony Luck <[email protected]> Cc: Fenghua Yu <[email protected]> Cc: Sam Creasey <[email protected]> Cc: Michal Simek <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Nick Hu <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Stefan Kristiansson <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Jeff Dike <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Anton Ivanov <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Max Filippov <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
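The two generic stubs the commit describes, sketched with the usual guard so ARCH_HAS_PTE_SPECIAL platforms keep their own versions:

  #ifndef CONFIG_ARCH_HAS_PTE_SPECIAL
  /* fallback: no PTE is "special" on platforms without the feature */
  static inline int pte_special(pte_t pte)
  {
          return 0;
  }

  static inline pte_t pte_mkspecial(pte_t pte)
  {
          return pte;
  }
  #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */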
2020-04-10  mm/vma: define a default value for VM_DATA_DEFAULT_FLAGS  (Anshuman Khandual, 1 file, -3/+0)
There are many platforms with the exact same value for VM_DATA_DEFAULT_FLAGS. This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the existing VM_STACK_DEFAULT_FLAGS. While here, also define some more macros with standard VMA access flag combinations that are used frequently across many platforms. Apart from simplification, this reduces code duplication as well. Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Vlastimil Babka <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Russell King <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Mark Salter <[email protected]> Cc: Guo Ren <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Brian Cain <[email protected]> Cc: Tony Luck <[email protected]> Cc: Michal Simek <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Nick Hu <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Rich Felker <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Jeff Dike <[email protected]> Cc: Chris Zankel <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
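A plausible sketch of the pattern (the exact upstream macro spellings may differ; the flag combination below is the value most platforms used for their data segments):

  /* sketch: common access-flag combination shared across platforms */
  #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)

  /* sketch: a fallback default, overridable per architecture */
  #ifndef VM_DATA_DEFAULT_FLAGS
  #define VM_DATA_DEFAULT_FLAGS \
          (VM_READ | VM_WRITE | VM_EXEC | \
           VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
  #endif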
2020-04-07  mm/vma: append unlikely() while testing VMA access permissions  (Anshuman Khandual, 1 file, -1/+1)
It is unlikely that an inaccessible VMA without the required permission flags will get a page fault. Hence let's just append the unlikely() directive to such checks in order to improve performance while also standardizing them across various platforms. Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Cc: Guo Ren <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Mike Rapoport <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
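The pattern on a typical page-fault path (an illustrative before/after, not a specific file):

  /* before */
  if (!(vma->vm_flags & VM_WRITE))
          goto bad_area;

  /* after */
  if (unlikely(!(vma->vm_flags & VM_WRITE)))
          goto bad_area;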
2020-04-07  mm/vma: make vma_is_accessible() available for general use  (Anshuman Khandual, 1 file, -1/+1)
Let's move the vma_is_accessible() helper to include/linux/mm.h, which makes it available for general use. While here, this replaces all remaining open encodings of the VMA access check with vma_is_accessible(). Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Guo Ren <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Cc: Guo Ren <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Rich Felker <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Alexander Viro <[email protected]> Cc: "Aneesh Kumar K.V" <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Nick Piggin <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Will Deacon <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
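The helper as moved into include/linux/mm.h, sketched from the open-coded checks it replaces (the exact flag-mask macro name is an assumption):

  static inline bool vma_is_accessible(struct vm_area_struct *vma)
  {
          return vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
  }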
2020-04-06  Merge tag 'csky-for-linus-5.7-rc1' of git://github.com/c-sky/csky-linux  (Linus Torvalds, 34 files, -74/+1807)
Pull csky updates from Guo Ren:

 - Add kprobes/uprobes support
 - Add lockdep, rseq, gcov support
 - Fixup init_fpu
 - Fixup ftrace_modify deadlock
 - Fixup speculative execution on IO area

* tag 'csky-for-linus-5.7-rc1' of git://github.com/c-sky/csky-linux:
  csky: Fixup cpu speculative execution to IO area
  csky: Add uprobes support
  csky: Add kprobes supported
  csky: Enable LOCKDEP_SUPPORT
  csky: Enable the gcov function
  csky: Fixup get wrong psr value from phyical reg
  csky/ftrace: Fixup ftrace_modify_code deadlock without CPU_HAS_ICACHE_INS
  csky: Implement ftrace with regs
  csky: Add support for restartable sequence
  csky: Implement ptrace regs and stack API
  csky: Fixup init_fpu compile warning with __init
2020-04-03  csky: Fixup cpu speculative execution to IO area  (Guo Ren, 5 files, -58/+25)
For memory sizes (> 512MB, < 1GB), the MSA setting was:

 - SSEG0: PHY_START          , PHY_START + 512MB
 - SSEG1: PHY_START + 512MB  , PHY_START + 1GB

But when the real memory is less than 1GB, there is a gap between the end of memory and the 1GB border. The CPU could speculatively execute into that gap, and if the bus cannot respond to the CPU request there, a crash will happen.

Now make the setting:

 - SSEG0: PHY_START , PHY_START + 512MB (no change)
 - SSEG1: Disabled (we use highmem to use the 512MB~1GB memory)

We also deprecate the zhole_size[] settings; they are only used by arm-style CPUs. All memory gaps should use the reserved settings of the dts in csky systems.

Signed-off-by: Guo Ren <[email protected]>
2020-04-03  csky: Add uprobes support  (Guo Ren, 8 files, -1/+201)
This patch adds support for uprobes on the csky architecture. Just like kprobes, it supports single-step and instruction simulation.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
2020-04-03  csky: Add kprobes supported  (Guo Ren, 16 files, -2/+1197)
This patch enables the kprobes, kretprobes and ftrace interfaces. It utilizes the software breakpoint and single-step debug exceptions, plus instruction simulation, on csky.

We use USR_BKPT to replace the original instruction, and the kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed. Most instructions can be executed by single-step, but some instructions need the original pc value to execute, so we simulate these instructions in software; a sketch of the prepare step follows below.

Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
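A hedged sketch of that prepare step, using the generic kprobes API; the slot-field layout and the simulation check are assumptions, not the verbatim csky code:

  #include <linux/kprobes.h>

  int __kprobes arch_prepare_kprobe(struct kprobe *p)
  {
          /* keep the original instruction so it can be single-stepped
           * out of line (or simulated), then the probe point itself
           * gets armed with USR_BKPT */
          p->opcode = *p->addr;

          p->ainsn.api.insn = get_insn_slot();
          if (!p->ainsn.api.insn)
                  return -ENOMEM;

          return 0;
  }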
2020-04-02  asm-generic: make more kernel-space headers mandatory  (Masahiro Yamada, 1 file, -36/+0)
Change a header to mandatory-y if both of the following are met:

[1] At least one architecture (except um) specifies it as generic-y in arch/*/include/asm/Kbuild

[2] Every architecture (except um) either has its own implementation (arch/*/include/asm/*.h) or specifies it as generic-y in arch/*/include/asm/Kbuild

This commit was generated by the following shell script:

----------------------------------->8-----------------------------------

  arches=$(cd arch; ls -1 | sed -e '/Kconfig/d' -e '/um/d')

  tmpfile=$(mktemp)

  grep "^mandatory-y +=" include/asm-generic/Kbuild > $tmpfile

  find arch -path 'arch/*/include/asm/Kbuild' |
          xargs sed -n 's/^generic-y += \(.*\)/\1/p' | sort -u |
  while read header
  do
          mandatory=yes

          for arch in $arches
          do
                  if ! grep -q "generic-y += $header" arch/$arch/include/asm/Kbuild &&
                          ! [ -f arch/$arch/include/asm/$header ]; then
                          mandatory=no
                          break
                  fi
          done

          if [ "$mandatory" = yes ]; then
                  echo "mandatory-y += $header" >> $tmpfile

                  for arch in $arches
                  do
                          sed -i "/generic-y += $header/d" arch/$arch/include/asm/Kbuild
                  done
          fi
  done

  sed -i '/^mandatory-y +=/d' include/asm-generic/Kbuild

  LANG=C sort $tmpfile >> include/asm-generic/Kbuild

----------------------------------->8-----------------------------------

One obvious benefit is the diff stat:

  25 files changed, 52 insertions(+), 557 deletions(-)

It is tedious to list generic-y for each arch that needs it. So, mandatory-y works like a fallback default (by just wrapping the asm-generic one) when an arch does not have a specific header implementation. See the following commits:

  def3f7cefe4e81c296090e1722a76551142c227c
  a1b39bae16a62ce4aae02d958224f19316d98b24

It is tedious to convert headers one by one, so I processed them with a shell script.

Signed-off-by: Masahiro Yamada <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>