path: root/arch/csky
Age | Commit message | Author | Files | Lines
2020-06-04kmap: consolidate kmap_prot definitionsIra Weiny1-2/+0
Most architectures define kmap_prot to be PAGE_KERNEL. Let sparc and xtensa define their own and define PAGE_KERNEL as the default if not overridden. [[email protected]: coding style fixes] Suggested-by: Christoph Hellwig <[email protected]> Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
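The resulting arrangement in the generic header is roughly the following (a sketch of the idea, not the verbatim kernel code):

#ifndef kmap_prot
#define kmap_prot PAGE_KERNEL	/* default; sparc and xtensa provide their own */
#endif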
2020-06-04kmap: remove kmap_atomic_to_page()Ira Weiny2-14/+0
kmap_atomic_to_page() has no callers and is only defined on 1 arch and declared on another. Remove it. Suggested-by: Al Viro <[email protected]> Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04arch/kmap: define kmap_atomic_prot() for all arch'sIra Weiny1-3/+3
To support kmap_atomic_prot(), all architectures need to support protections passed to their kmap_atomic_high() function. Pass protections into kmap_atomic_high() and change the name to kmap_atomic_high_prot() to match. Then define kmap_atomic_prot() as a core function which calls kmap_atomic_high_prot() when needed. Finally, redefine kmap_atomic() as a wrapper of kmap_atomic_prot() with the default kmap_prot exported by the architectures. Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
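A sketch of the core wrapper described above, assuming CONFIG_HIGHMEM (the !HIGHMEM fallback is omitted):

static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
{
	preempt_disable();
	pagefault_disable();
	if (!PageHighMem(page))
		return page_address(page);
	return kmap_atomic_high_prot(page, prot);
}

#define kmap_atomic(page)	kmap_atomic_prot(page, kmap_prot)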
2020-06-04arch/kmap: don't hard code kmap_prot valuesIra Weiny1-1/+1
To support kmap_atomic_prot() on all architectures each arch must support protections passed in to them. Change csky, mips, nds32 and xtensa to use their global constant kmap_prot rather than a hard coded value which was equal. Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04arch/kunmap_atomic: consolidate duplicate codeIra Weiny2-7/+3
Every single architecture (including !CONFIG_HIGHMEM) calls... pagefault_enable(); preempt_enable(); ... before returning from __kunmap_atomic(). Lift this code into the kunmap_atomic() macro. While we are at it rename __kunmap_atomic() to kunmap_atomic_high() to be consistent. [[email protected]: don't enable pagefault/preempt twice] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: coding style fixes] Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guenter Roeck <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
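The lifted macro ends up looking roughly like this (sketch):

#define kunmap_atomic(addr)                                     \
do {                                                            \
	BUILD_BUG_ON(__same_type((addr), struct page *));      \
	kunmap_atomic_high(addr);                               \
	pagefault_enable();                                     \
	preempt_enable();                                       \
} while (0)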
2020-06-04arch/kmap_atomic: consolidate duplicate codeIra Weiny2-8/+2
Every arch has the same code to ensure atomic operations and a check for !HIGHMEM page. Remove the duplicate code by defining a core kmap_atomic() which only calls the arch specific kmap_atomic_high() when the page is high memory. [[email protected]: coding style fixes] Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-04arch/kunmap: remove duplicate kunmap implementationsIra Weiny2-12/+0
All architectures do exactly the same thing for kunmap(); remove all the duplicate definitions and lift the call to the core. This also has the benefit of changing kunmap() on a number of architectures to be an inline call rather than an actual function. [[email protected]: fix CONFIG_HIGHMEM=n build on various architectures] Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
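The lifted kunmap() is roughly (sketch):

static inline void kunmap(struct page *page)
{
	might_sleep();
	if (!PageHighMem(page))
		return;
	kunmap_high(page);
}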
2020-06-04arch/kmap: remove redundant arch specific kmapsIra Weiny2-12/+6
The kmap code for all the architectures is almost 100% identical. Lift the common code to the core. Use ARCH_HAS_KMAP_FLUSH_TLB to indicate if an arch defines kmap_flush_tlb() and call it if needed. This also has the benefit of changing kmap() on a number of architectures to be an inline call rather than an actual function. Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christian König <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dan Williams <[email protected]> Cc: Dave Hansen <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Helge Deller <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
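A sketch of the resulting core kmap(), with the optional arch hook:

#ifndef ARCH_HAS_KMAP_FLUSH_TLB
static inline void kmap_flush_tlb(unsigned long addr) { }
#endif

static inline void *kmap(struct page *page)
{
	void *addr;

	might_sleep();
	if (!PageHighMem(page))
		addr = page_address(page);
	else
		addr = kmap_high(page);
	kmap_flush_tlb((unsigned long)addr);
	return addr;
}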
2020-06-04arch/kmap: remove BUG_ON()Ira Weiny1-1/+1
Patch series "Remove duplicated kmap code", v3. The kmap infrastructure has been copied almost verbatim to every architecture. This series consolidates obvious duplicated code by defining core functions which call into the architectures only when needed. Some of the k[un]map_atomic() implementations have some similarities but the similarities were not sufficient to warrant further changes. In addition we remove a duplicate implementation of kmap() in DRM. This patch (of 15): Replace the use of BUG_ON(in_interrupt()) in the kmap() and kunmap() in favor of might_sleep(). Besides the benefits of might_sleep(), this normalizes the implementations such that they can be made generic in subsequent patches. Signed-off-by: Ira Weiny <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Dan Williams <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Al Viro <[email protected]> Cc: Christian König <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Helge Deller <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Max Filippov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-06-03csky: simplify detection of memory zone boundariesMike Rapoport1-15/+11
The free_area_init() function only requires the definition of the maximal PFN for each of the supported zones, rather than the calculation of actual zone sizes and the sizes of the holes between the zones. After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Tested-by: Hoan Tran <[email protected]> [arm64] Cc: Baoquan He <[email protected]> Cc: Brian Cain <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Ungerer <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Max Filippov <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Michal Simek <[email protected]> Cc: Nick Hu <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
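For csky the simplified pattern is roughly the following (sketch; the function name and zone details are illustrative):

static void __init zone_sizes_init(void)
{
	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

	max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_HIGHMEM
	max_zone_pfn[ZONE_HIGHMEM] = max_pfn;
#endif
	free_area_init(max_zone_pfn);
}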
2020-05-28csky: Fixup CONFIG_DEBUG_RSEQGuo Ren1-5/+10
Putting the rseq_syscall check point at the prologue of the syscall path will break a0 ... a7. This causes system call bugs when DEBUG_RSEQ is enabled. So move it to the epilogue of the syscall path, but before syscall_trace. Signed-off-by: Guo Ren <[email protected]>
2020-05-28csky: Coding convention in entry.SGuo Ren4-40/+42
There is no fixup or feature in this patch; we only clean up:
 - Remove unnecessary registers (r11, r12); just use r9, r10 and the syscallid regs as temporaries.
 - Add _TIF_SYSCALL_WORK and _TIF_WORK_MASK to gather macros.
Signed-off-by: Guo Ren <[email protected]>
2020-05-28csky: Fixup abiv2 syscall_trace break a4 & a5Guo Ren2-2/+6
The current implementation could destroy a4 & a5 under strace, so we need to get them from pt_regs via SAVE_ALL. Signed-off-by: Guo Ren <[email protected]>
2020-05-28csky: Fixup CONFIG_PREEMPT panicGuo Ren3-27/+11
log:
[    0.13373200] Calibrating delay loop...
[    0.14077600] ------------[ cut here ]------------
[    0.14116700] WARNING: CPU: 0 PID: 0 at kernel/sched/core.c:3790 preempt_count_add+0xc8/0x11c
[    0.14348000] DEBUG_LOCKS_WARN_ON((preempt_count() < 0))Modules linked in:
[    0.14395100] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.6.0 #7
[    0.14410800]
[    0.14427400] Call Trace:
[    0.14450700] [<807cd226>] dump_stack+0x8a/0xe4
[    0.14473500] [<80072792>] __warn+0x10e/0x15c
[    0.14495900] [<80072852>] warn_slowpath_fmt+0x72/0xc0
[    0.14518600] [<800a5240>] preempt_count_add+0xc8/0x11c
[    0.14544900] [<807ef918>] _raw_spin_lock+0x28/0x68
[    0.14572600] [<800e0eb8>] vprintk_emit+0x84/0x2d8
[    0.14599000] [<800e113a>] vprintk_default+0x2e/0x44
[    0.14625100] [<800e2042>] vprintk_func+0x12a/0x1d0
[    0.14651300] [<800e1804>] printk+0x30/0x48
[    0.14677600] [<80008052>] lockdep_init+0x12/0xb0
[    0.14703800] [<80002080>] start_kernel+0x558/0x7f8
[    0.14730000] [<800052bc>] csky_start+0x58/0x94
[    0.14756600] irq event stamp: 34
[    0.14775100] hardirqs last  enabled at (33): [<80067370>] ret_from_exception+0x2c/0x72
[    0.14793700] hardirqs last disabled at (34): [<800e0eae>] vprintk_emit+0x7a/0x2d8
[    0.14812300] softirqs last  enabled at (32): [<800655b0>] __do_softirq+0x578/0x6d8
[    0.14830800] softirqs last disabled at (25): [<8007b3b8>] irq_exit+0xec/0x128
The preempt_count held in a register could be stale after csky_do_IRQ because it is not reloaded from memory. Following other architectures (arm64, riscv), we move the preempt handling into ret_from_exception and disable irqs at the beginning of ret_from_exception instead of in RESTORE_ALL.
Signed-off-by: Guo Ren <[email protected]> Reported-by: Lu Baoquan <[email protected]>
2020-05-15csky: Fixup raw_copy_from_user()Al Viro2-29/+28
If raw_copy_from_user(to, from, N) returns K, callers expect the first N - K bytes starting at to to have been replaced with the contents of the corresponding area starting at from, and the last K bytes of the destination *left* *unmodified*. What arch/csky/lib/usercopy.c is doing is broken - it can lead to e.g. data corruption on write(2). raw_copy_to_user() is inaccurate about its return value, which is a bug, but the consequences are less drastic than for raw_copy_from_user(). And just what are those access_ok() calls doing in there? I mean, look into linux/uaccess.h; that's where we do that check (as well as zero the tail on failure in the callers that need zeroing). AFAICS, all of that shouldn't be hard to fix; something like the patch below might make a useful starting point. I would suggest moving these macros into usercopy.c (they are never used anywhere else) and possibly expanding them there; if you leave them alive, please at least rename __copy_user_zeroing(). Again, it must not zero anything on a failed read. That said, I'm not sure we won't be better off simply turning usercopy.c into usercopy.S - all that is left there is a couple of functions, each consisting only of inline asm.

Guo Ren reply:

Yes, raw_copy_from_user is wrong; it doesn't need the zeroing code.

unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
{
	unsigned long res = n;
	might_fault();
	if (likely(access_ok(from, n))) {
		kasan_check_write(to, n);
		res = raw_copy_from_user(to, from, n);
	}
	if (unlikely(res))
		memset(to + (n - res), 0, res);
	return res;
}
EXPORT_SYMBOL(_copy_from_user);

You are right and access_ok() should be removed. But how about:

do {
...
"2:     stw     %3, (%1, 0)     \n" \
+ "     subi    %0, 4           \n" \
"9:     stw     %4, (%1, 4)     \n" \
+ "     subi    %0, 4           \n" \
"10:    stw     %5, (%1, 8)     \n" \
+ "     subi    %0, 4           \n" \
"11:    stw     %6, (%1, 12)    \n" \
+ "     subi    %0, 4           \n" \
"       addi    %2, 16          \n" \
"       addi    %1, 16          \n" \

Don't expand __ex_table

Al Viro reply:

Hey, I've no idea about the instruction scheduling on csky - if that doesn't slow things down, all the better. It's just that copy_to_user() and friends are on fairly hot codepaths, and in quite a few situations they will dominate the speed of e.g. read(2). So I tried to keep the fast path unchanged. Up to the architecture maintainers, obviously. Which would be you...

As for the fixup size increase (__ex_table size is unchanged)... You have each of those macros expanded exactly once. So the size is not a serious argument, IMO - useless complexity would be, if it is, in fact, useless; the size... not really, especially since those extra subi will at least offset it.

Again, up to you - asm optimizations of (essentially) memcpy()-style loops are tricky and can depend upon fairly subtle details of the architecture. So even on something I know reasonably well I would resort to direct experiments if I can't pass the buck to the architecture maintainers. It *is* worth optimizing - this is where read() from a file that is already in page cache spends most of its time, etc.

Guo Ren reply:

Thx. After fixing up the typo "sub %0, 4", the patch is applied.

TODO:
 - user copy to/from code still needs optimizing.

Signed-off-by: Al Viro <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-05-15csky: Fixup gdbmacros.txt with name sp in thread_structGuo Ren4-9/+9
The gdbmacros.txt uses sp in thread_struct, but csky uses ksp. This causes bttnobp to fail to execute. TODO: - Still can't display the contents of the stack. Signed-off-by: Guo Ren <[email protected]>
2020-05-13csky: Fixup remove unnecessary save/restore PSR codeGuo Ren3-11/+2
Every process's PSR is inherited from SETUP_MMU, so there is no need to set it in INIT_THREAD again. Also use a3 instead of r7 in __switch_to for coding convention. Signed-off-by: Guo Ren <[email protected]>
2020-05-13csky: Fixup remove duplicate irq_disableLiu Yibin1-2/+0
Interrupts have already been disabled in __schedule() with local_irq_disable() and are re-enabled in finish_task_switch()->finish_lock_switch() with local_irq_enable(), so there is no need to disable irqs here. Signed-off-by: Liu Yibin <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-05-13csky: Fixup calltrace panicGuo Ren8-119/+159
The implementation of show_stack will panic on a wrong fp in: addr = *fp++; because fp isn't checked properly. The current implementations of show_stack, wchan and stack_trace weren't designed properly, so just deprecate them. This patch follows riscv's approach; all the code is adapted from arm's. The patch passes with:
 - cat /proc/<pid>/stack
 - cat /proc/<pid>/wchan
 - echo c > /proc/sysrq-trigger
Signed-off-by: Guo Ren <[email protected]>
2020-05-13csky: Fixup perf callchain unwindMao Han1-2/+7
[ 5221.974084] Unable to handle kernel paging request at virtual address 0xfffff000, pc: 0x8002c18e
[ 5221.985929] Oops: 00000000
[ 5221.989488]
[ 5221.989488] CURRENT PROCESS:
[ 5221.989488]
[ 5221.992877] COMM=callchain_test PID=11962
[ 5221.995213] TEXT=00008000-000087e0 DATA=00009f1c-0000a018 BSS=0000a018-0000b000
[ 5221.999037] USER-STACK=7fc18e20 KERNEL-STACK=be204680
[ 5221.999037]
[ 5222.003292] PC: 0x8002c18e (perf_callchain_kernel+0x3e/0xd4)
[ 5222.007957] LR: 0x8002c198 (perf_callchain_kernel+0x48/0xd4)
[ 5222.074873] Call Trace:
[ 5222.074873] [<800a248e>] get_perf_callchain+0x20a/0x29c
[ 5222.074873] [<8009d964>] perf_callchain+0x64/0x80
[ 5222.074873] [<8009dc1c>] perf_prepare_sample+0x29c/0x4b8
[ 5222.074873] [<8009de6e>] perf_event_output_forward+0x36/0x98
[ 5222.074873] [<800497e0>] search_exception_tables+0x20/0x44
[ 5222.074873] [<8002cbb6>] do_page_fault+0x92/0x378
[ 5222.074873] [<80098608>] __perf_event_overflow+0x54/0xdc
[ 5222.074873] [<80098778>] perf_swevent_hrtimer+0xe8/0x164
[ 5222.074873] [<8002ddd0>] update_mmu_cache+0x0/0xd8
[ 5222.074873] [<8002c014>] user_backtrace+0x58/0xc4
[ 5222.074873] [<8002c0b4>] perf_callchain_user+0x34/0xd0
[ 5222.074873] [<800a2442>] get_perf_callchain+0x1be/0x29c
[ 5222.074873] [<8009d964>] perf_callchain+0x64/0x80
[ 5222.074873] [<8009d834>] perf_output_sample+0x78c/0x858
[ 5222.074873] [<8009dc1c>] perf_prepare_sample+0x29c/0x4b8
[ 5222.074873] [<8009de94>] perf_event_output_forward+0x5c/0x98
[ 5222.097846]
[ 5222.097846] [<800a0300>] perf_event_exit_task+0x58/0x43c
[ 5222.097846] [<8006c874>] hrtimer_interrupt+0x104/0x2ec
[ 5222.097846] [<800a0300>] perf_event_exit_task+0x58/0x43c
[ 5222.097846] [<80437bb6>] dw_apb_clockevent_irq+0x2a/0x4c
[ 5222.097846] [<8006c770>] hrtimer_interrupt+0x0/0x2ec
[ 5222.097846] [<8005f2e4>] __handle_irq_event_percpu+0xac/0x19c
[ 5222.097846] [<80437bb6>] dw_apb_clockevent_irq+0x2a/0x4c
[ 5222.097846] [<8005f408>] handle_irq_event_percpu+0x34/0x88
[ 5222.097846] [<8005f480>] handle_irq_event+0x24/0x64
[ 5222.097846] [<8006218c>] handle_level_irq+0x68/0xdc
[ 5222.097846] [<8005ec76>] __handle_domain_irq+0x56/0xa8
[ 5222.097846] [<80450e90>] ck_irq_handler+0xac/0xe4
[ 5222.097846] [<80029012>] csky_do_IRQ+0x12/0x24
[ 5222.097846] [<8002a3a0>] csky_irq+0x70/0x80
[ 5222.097846] [<800ca612>] alloc_set_pte+0xd2/0x238
[ 5222.097846] [<8002ddd0>] update_mmu_cache+0x0/0xd8
[ 5222.097846] [<800a0340>] perf_event_exit_task+0x98/0x43c
The original fp check is not based on the real kernel stack region. An invalid fp address may cause a kernel panic.
Signed-off-by: Mao Han <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-05-13csky: Fixup msa highest 3 bits maskLiu Yibin2-4/+4
Just as the comment mentioned, the MSA format is: cr<30/31, 15> MSA register format:
  31 - 29 | 28 - 9   | 8  | 7  | 6 | 5  | 4   | 3 | 2 | 1 | 0
  BA        Reserved   SH   WA   B   SO   SEC   C   D   V
So we should shift by 29 bits, not 28 bits, for the mask.
Signed-off-by: Liu Yibin <[email protected]> Signed-off-by: Guo Ren <[email protected]>
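In other words, the base-address (BA) field is only the top three bits, so the mask must clear the low 29 bits (illustrative define, not the exact kernel symbol):

/* BA field is bits [31:29]; keep only the top three bits */
#define MSA_BA_MASK	(~((1UL << 29) - 1))	/* 0xE0000000, not (~((1UL << 28) - 1)) */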
2020-05-13csky: Fixup perf probe -x hungupGuo Ren2-0/+11
Test case:
 # perf probe -x /lib/libc-2.28.9000.so memcpy
 # perf record -e probe_libc:memcpy -aR sleep 1
The system hangs and the cpu loops in trap_c, because our hardware single-step state can still take an interrupt signal. When we are in the uprobe XOL single-step slot, we should disable irqs in pt_regs->psr. Also, is_swbp_insn() needs a csky arch implementation with a low 16-bit mask.
Signed-off-by: Guo Ren <[email protected]> Cc: Steven Rostedt (VMware) <[email protected]>
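A sketch of the csky-specific is_swbp_insn() mentioned above, assuming the breakpoint is a 16-bit encoding:

bool is_swbp_insn(uprobe_opcode_t *insn)
{
	/* csky breakpoint is a 16-bit instruction; compare only the low half */
	return (*insn & 0xffff) == (UPROBE_SWBP_INSN & 0xffff);
}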
2020-05-13csky: Fixup compile error for abiv1 entry.SGuo Ren1-4/+4
This bug comes from the uprobe signal definition in thread_info.h. The immediate range of the abiv1 andi instruction is smaller than abiv2's, which causes:
  AS      arch/csky/kernel/entry.o
arch/csky/kernel/entry.S: Assembler messages:
arch/csky/kernel/entry.S:224: Error: Operand 2 immediate is overflow.
Signed-off-by: Guo Ren <[email protected]>
2020-05-13csky/ftrace: Fixup error when disable CONFIG_DYNAMIC_FTRACEGuo Ren2-0/+4
With CONFIG_DYNAMIC_FTRACE disabled, static ftrace fails to compile and boot. This was an oversight while developing "dynamic ftrace" and "ftrace with regs". Signed-off-by: Guo Ren <[email protected]>
2020-04-10mm/special: create generic fallbacks for pte_special() and pte_mkspecial()Anshuman Khandual1-3/+0
Currently there are many platforms that don't enable ARCH_HAS_PTE_SPECIAL but are required to define quite similar fallback stubs for special page table entry helpers such as pte_special() and pte_mkspecial(), as they get built into generic MM without a config check. This creates two generic fallback stub definitions for these helpers, eliminating much code duplication. The mips platform has a special case where pte_special() and pte_mkspecial() visibility is wider than what ARCH_HAS_PTE_SPECIAL enablement requires. This restricts those symbols' visibility in order to avoid redefinitions, which would now be exposed through these new generic stubs and cause a build failure. The arm platform's set_pte_at() definition needs to be moved into a C file just to prevent a build failure. [[email protected]: use defined(CONFIG_ARCH_HAS_PTE_SPECIAL) in mips per Thomas] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Acked-by: Guo Ren <[email protected]> [csky] Acked-by: Geert Uytterhoeven <[email protected]> [m68k] Acked-by: Stafford Horne <[email protected]> [openrisc] Acked-by: Helge Deller <[email protected]> [parisc] Cc: Richard Henderson <[email protected]> Cc: Ivan Kokshaysky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Russell King <[email protected]> Cc: Brian Cain <[email protected]> Cc: Tony Luck <[email protected]> Cc: Fenghua Yu <[email protected]> Cc: Sam Creasey <[email protected]> Cc: Michal Simek <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Nick Hu <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Vincent Chen <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Stefan Kristiansson <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Jeff Dike <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Anton Ivanov <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Max Filippov <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
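The two generic fallback stubs amount to roughly (sketch):

#ifndef CONFIG_ARCH_HAS_PTE_SPECIAL
static inline int pte_special(pte_t pte)
{
	return 0;
}

static inline pte_t pte_mkspecial(pte_t pte)
{
	return pte;
}
#endif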
2020-04-10mm/vma: define a default value for VM_DATA_DEFAULT_FLAGSAnshuman Khandual1-3/+0
There are many platforms with exact same value for VM_DATA_DEFAULT_FLAGS This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the existing VM_STACK_DEFAULT_FLAGS. While here, also define some more macros with standard VMA access flag combinations that are used frequently across many platforms. Apart from simplification, this reduces code duplication as well. Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Vlastimil Babka <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Cc: Richard Henderson <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Russell King <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Mark Salter <[email protected]> Cc: Guo Ren <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Brian Cain <[email protected]> Cc: Tony Luck <[email protected]> Cc: Michal Simek <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Nick Hu <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Walmsley <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Rich Felker <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Jeff Dike <[email protected]> Cc: Chris Zankel <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
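The shape of the change is roughly the following (sketch; the exact macro names come from the patch):

#define VM_DATA_FLAGS_EXEC	(VM_READ | VM_WRITE | VM_EXEC | \
				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)

#ifndef VM_DATA_DEFAULT_FLAGS		/* arch can still override */
#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_EXEC
#endif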
2020-04-07mm/vma: append unlikely() while testing VMA access permissionsAnshuman Khandual1-1/+1
It is unlikely that an inaccessible VMA without the required permission flags will get a page fault. Hence let's just append the unlikely() directive to such checks in order to improve performance while also standardizing them across various platforms. Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Cc: Guo Ren <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Mike Rapoport <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-04-07mm/vma: make vma_is_accessible() available for general useAnshuman Khandual1-1/+1
Lets move vma_is_accessible() helper to include/linux/mm.h which makes it available for general use. While here, this replaces all remaining open encodings for VMA access check with vma_is_accessible(). Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Acked-by: Geert Uytterhoeven <[email protected]> Acked-by: Guo Ren <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Cc: Guo Ren <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Yoshinori Sato <[email protected]> Cc: Rich Felker <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Alexander Viro <[email protected]> Cc: "Aneesh Kumar K.V" <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Nick Piggin <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Will Deacon <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
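The helper and a typical caller look roughly like this (sketch; the flag set mirrors the open-coded checks it replaces, and the unlikely() wrapping is the previous commit's change):

static inline bool vma_is_accessible(struct vm_area_struct *vma)
{
	return vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
}

/* e.g. in an arch page fault handler */
if (unlikely(!vma_is_accessible(vma)))
	goto bad_area;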
2020-04-06Merge tag 'csky-for-linus-5.7-rc1' of git://github.com/c-sky/csky-linuxLinus Torvalds34-74/+1807
Pull csky updates from Guo Ren:
 - Add kprobes/uprobes support
 - Add lockdep, rseq, gcov support
 - Fixup init_fpu
 - Fixup ftrace_modify deadlock
 - Fixup speculative execution on IO area
* tag 'csky-for-linus-5.7-rc1' of git://github.com/c-sky/csky-linux:
  csky: Fixup cpu speculative execution to IO area
  csky: Add uprobes support
  csky: Add kprobes supported
  csky: Enable LOCKDEP_SUPPORT
  csky: Enable the gcov function
  csky: Fixup get wrong psr value from phyical reg
  csky/ftrace: Fixup ftrace_modify_code deadlock without CPU_HAS_ICACHE_INS
  csky: Implement ftrace with regs
  csky: Add support for restartable sequence
  csky: Implement ptrace regs and stack API
  csky: Fixup init_fpu compile warning with __init
2020-04-03csky: Fixup cpu speculative execution to IO areaGuo Ren5-58/+25
For a memory size (> 512MB, < 1GB), the MSA setting is:
 - SSEG0: PHY_START, PHY_START + 512MB
 - SSEG1: PHY_START + 512MB, PHY_START + 1GB
But when the real memory is less than 1GB, there is a gap between the end of memory and the 1GB boundary. The CPU could speculatively execute into that gap, and if the bus can't respond to the CPU request there, a crash will happen. Now make the setting:
 - SSEG0: PHY_START, PHY_START + 512MB (no change)
 - SSEG1: Disabled (we use highmem for the memory from 512MB to 1GB)
We also deprecate the zhole_size[] settings; they are only used by arm-style CPUs. All memory gaps should use the Reserved setting of the dts in a csky system.
Signed-off-by: Guo Ren <[email protected]>
2020-04-03csky: Add uprobes supportGuo Ren8-1/+201
This patch adds support for uprobes on the csky architecture. Just like kprobes, it supports single-step and instruction simulation. Signed-off-by: Guo Ren <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Steven Rostedt (VMware) <[email protected]>
2020-04-03csky: Add kprobes supportedGuo Ren16-2/+1197
This patch enables the kprobes, kretprobes and ftrace interfaces. It uses the software breakpoint and single-step debug exceptions, plus instruction simulation, on csky. We use USR_BKPT to replace the original instruction, and the kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed. Most instructions can be executed by single-step, but some instructions need the original pc value to execute, so we have to simulate those instructions in software. Signed-off-by: Guo Ren <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Steven Rostedt (VMware) <[email protected]>
2020-04-02asm-generic: make more kernel-space headers mandatoryMasahiro Yamada1-36/+0
Change a header to mandatory-y if both of the following are met:
 [1] At least one architecture (except um) specifies it as generic-y in arch/*/include/asm/Kbuild
 [2] Every architecture (except um) either has its own implementation (arch/*/include/asm/*.h) or specifies it as generic-y in arch/*/include/asm/Kbuild
This commit was generated by the following shell script.
----------------------------------->8-----------------------------------
arches=$(cd arch; ls -1 | sed -e '/Kconfig/d' -e '/um/d')
tmpfile=$(mktemp)
grep "^mandatory-y +=" include/asm-generic/Kbuild > $tmpfile
find arch -path 'arch/*/include/asm/Kbuild' |
	xargs sed -n 's/^generic-y += \(.*\)/\1/p' | sort -u |
while read header
do
	mandatory=yes
	for arch in $arches
	do
		if ! grep -q "generic-y += $header" arch/$arch/include/asm/Kbuild &&
			! [ -f arch/$arch/include/asm/$header ]; then
			mandatory=no
			break
		fi
	done
	if [ "$mandatory" = yes ]; then
		echo "mandatory-y += $header" >> $tmpfile
		for arch in $arches
		do
			sed -i "/generic-y += $header/d" arch/$arch/include/asm/Kbuild
		done
	fi
done
sed -i '/^mandatory-y +=/d' include/asm-generic/Kbuild
LANG=C sort $tmpfile >> include/asm-generic/Kbuild
----------------------------------->8-----------------------------------
One obvious benefit is the diff stat: 25 files changed, 52 insertions(+), 557 deletions(-). It is tedious to list generic-y for each arch that needs it. So, mandatory-y works like a fallback default (by just wrapping the asm-generic one) when an arch does not have a specific header implementation. See the following commits: def3f7cefe4e81c296090e1722a76551142c227c a1b39bae16a62ce4aae02d958224f19316d98b24 It is tedious to convert headers one by one, so I processed them with a shell script. Signed-off-by: Masahiro Yamada <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Michal Simek <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Arnd Bergmann <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-04-02csky: Enable LOCKDEP_SUPPORTGuo Ren2-0/+14
Lockdep is needed for proving the spinlocks and rwlocks. Currently, we only put trace_hardirqs_on/off in csky_irq and ret_from_exception. Signed-off-by: Guo Ren <[email protected]>
2020-04-01csky: Enable the gcov functionMa Jun1-0/+1
Support the gcov function in csky architecture. Signed-off-by: Ma Jun <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-04-01csky: Fixup get wrong psr value from phyical regGuo Ren3-1/+18
We should get the psr value from regs->psr on the stack, not directly from the physical register, and then save the vector number in tsk->trap_no. Signed-off-by: Guo Ren <[email protected]>
2020-03-31csky/ftrace: Fixup ftrace_modify_code deadlock without CPU_HAS_ICACHE_INSGuo Ren2-6/+70
If ICACHE_INS is not supported, we use an IPI to sync the icache on each core. But ftrace_modify_code() is called from stop_machine() by the default implementation of arch_ftrace_update_code(), and the stop_machine() callback runs with irqs disabled. Sending an IPI with irqs disabled causes a deadlock. We can't use icache_flush with irqs disabled, but the startup make_nop is a special case that doesn't need to IPI other cores. Signed-off-by: Guo Ren <[email protected]>
2020-03-21csky: Remove mm.h from asm/uaccess.hSebastian Andrzej Siewior1-1/+0
The defconfig compiles without linux/mm.h. With mm.h included the include chain leads to:
|   CC      kernel/locking/percpu-rwsem.o
| In file included from include/linux/huge_mm.h:8,
|                  from include/linux/mm.h:567,
|                  from arch/csky/include/asm/uaccess.h:,
|                  from include/linux/uaccess.h:11,
|                  from include/linux/sched/task.h:11,
|                  from include/linux/sched/signal.h:9,
|                  from include/linux/rcuwait.h:6,
|                  from include/linux/percpu-rwsem.h:8,
|                  from kernel/locking/percpu-rwsem.c:6:
| include/linux/fs.h:1422:29: error: array type has incomplete element type 'struct percpu_rw_semaphore'
|  1422 |  struct percpu_rw_semaphore rw_sem[SB_FREEZE_LEVELS];
once rcuwait.h includes linux/sched/signal.h. Remove the linux/mm.h include.
Reported-by: kbuild test robot <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-03-08csky: Implement ftrace with regsGuo Ren6-0/+123
This patch implements FTRACE_WITH_REGS for csky, which allows a traced function's arguments (and some other registers) to be captured into a struct pt_regs, allowing these to be inspected and/or modified. Signed-off-by: Guo Ren <[email protected]>
2020-03-08csky: Add support for restartable sequenceGuo Ren3-1/+8
Copied and adapted from Vincent's patch, but modified for csky. ref: https://lore.kernel.org/linux-riscv/[email protected]/raw
Add calls to rseq_signal_deliver(), rseq_handle_notify_resume() and rseq_syscall() to introduce RSEQ support.
 1. Call the rseq_handle_notify_resume() function on return to userspace if the TIF_NOTIFY_RESUME thread flag is set.
 2. Call the rseq_signal_deliver() function to fix up the pre-signal frame when a signal is delivered on top of a restartable sequence critical section.
 3. Check that system calls are not invoked from within rseq critical sections by invoking rseq_syscall() from ret_from_syscall(). With CONFIG_DEBUG_RSEQ, such behavior results in termination of the process with SIGSEGV.
Signed-off-by: Guo Ren <[email protected]>
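The three hook points amount to calls like the following (sketch; the surrounding csky signal and syscall code is omitted):

/* 1. on return to userspace with TIF_NOTIFY_RESUME set */
rseq_handle_notify_resume(NULL, regs);

/* 2. while setting up a signal frame (ksig is the struct ksignal being delivered) */
rseq_signal_deliver(&ksig, regs);

/* 3. on the syscall return path (ret_from_syscall), effective with CONFIG_DEBUG_RSEQ */
rseq_syscall(regs);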
2020-03-08csky: Implement ptrace regs and stack APIGuo Ren3-0/+145
Needed for kprobes support. Copied and adapted from Patrick's patch, but it has been modified for csky's pt_regs. ref: https://lore.kernel.org/linux-riscv/[email protected]/raw Signed-off-by: Guo Ren <[email protected]> Cc: Patrick Staehlin <[email protected]>
2020-03-08csky: Fixup init_fpu compile warning with __initGuo Ren3-6/+5
WARNING: vmlinux.o(.text+0x2366): Section mismatch in reference from the function csky_start_secondary() to the function .init.text:init_fpu() The function csky_start_secondary() references the function __init init_fpu(). This is often because csky_start_secondary lacks a __init annotation or the annotation of init_fpu is wrong. Reported-by: Lu Chongzhi <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-02-23csky: Replace <linux/clk-provider.h> by <linux/of_clk.h>Geert Uytterhoeven1-1/+1
The C-Sky platform code is not a clock provider, and just needs to call of_clk_init(). Hence it can include <linux/of_clk.h> instead of <linux/clk-provider.h>. Signed-off-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-02-21csky: Implement copy_thread_tlsGuo Ren2-3/+5
This is required for clone3 which passes the TLS value through a struct rather than a register. Cc: Amanieu d'Antras <[email protected]> Signed-off-by: Guo Ren <[email protected]>
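The visible difference is only the signature and where TLS comes from (sketch; where csky actually stores the TLS value is left illustrative):

int copy_thread_tls(unsigned long clone_flags, unsigned long usp,
		    unsigned long kthread_arg, struct task_struct *p,
		    unsigned long tls)
{
	struct pt_regs *childregs = task_pt_regs(p);

	/* ... the rest of the usual copy_thread() body ... */
	if (clone_flags & CLONE_SETTLS)
		childregs->tls = tls;	/* destination field is arch-specific; name illustrative */

	return 0;
}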
2020-02-21csky: Add PCI supportMaJun3-1/+39
Add the PCI-related code for the csky arch to support basic PCI virtual functions, such as qemu virt-pci-9pfs. Signed-off-by: MaJun <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-02-21csky: Minimize defconfig to support buildroot config.fragmentMa Jun1-7/+0
Some BSPs (e.g. buildroot) have a defconfig.fragment design to add more configs on top of the defconfig in the Linux source tree. For example, we could put different cpu configs into different defconfig.fragments, while they all use the same defconfig in Linux. Signed-off-by: Ma Jun <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-02-21csky: Add setup_initrd check codeGuo Ren2-3/+44
We should add the necessary checks for initrd just like other architectures, and it seems that setup_initrd() could become common code for all architectures. Signed-off-by: Guo Ren <[email protected]>
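A sketch of the kind of checks meant here, modelled on what other architectures do (exact messages and helpers may differ):

static void __init setup_initrd(void)
{
	unsigned long size;

	if (initrd_start >= initrd_end) {
		pr_err("initrd not found or empty");
		goto disable;
	}

	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
		pr_err("initrd extends beyond end of memory");
		goto disable;
	}

	size = initrd_end - initrd_start;

	if (memblock_is_region_reserved(__pa(initrd_start), size)) {
		pr_err("INITRD: 0x%08lx+0x%08lx overlaps in-use memory region",
		       __pa(initrd_start), size);
		goto disable;
	}

	memblock_reserve(__pa(initrd_start), size);
	initrd_below_start_ok = 1;
	pr_info("Initial ramdisk at: 0x%p (%lu bytes)",
		(void *)(initrd_start), size);
	return;

disable:
	initrd_start = initrd_end = 0;
	pr_err(" - disabling initrd");
}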
2020-02-21csky: Cleanup old Kconfig optionsKrzysztof Kozlowski2-3/+0
CONFIG_CLKSRC_OF is gone since commit bb0eb050a577 ("clocksource/drivers: Rename CLKSRC_OF to TIMER_OF"). The platform already selects TIMER_OF. CONFIG_HAVE_DMA_API_DEBUG is gone since commit 6e88628d03dd ("dma-debug: remove CONFIG_HAVE_DMA_API_DEBUG"). CONFIG_DEFAULT_DEADLINE is gone since commit f382fb0bcef4 ("block: remove legacy IO schedulers"). Signed-off-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-02-21arch/csky: fix some Kconfig typosRandy Dunlap1-1/+1
Fix wording in help text for the CPU_HAS_LDSTEX symbol. Signed-off-by: Randy Dunlap <[email protected]> Signed-off-by: Guo Ren <[email protected]> Signed-off-by: Guo Ren <[email protected]>
2020-02-21csky: Fixup compile warning for three unimplemented syscallsGuo Ren1-0/+3
Implement fstat64, fstatat64, clone3 syscalls to fixup checksyscalls.sh compile warnings. Signed-off-by: Guo Ren <[email protected]>