Merge tag 'trace-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing updates from Steven Rostedt:
"The major update to this release is that there's a new arch config
option called CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS.
Currently, only x86_64 enables it. All the ftrace callbacks now take a
struct ftrace_regs instead of a struct pt_regs. If the architecture
has HAVE_DYNAMIC_FTRACE_WITH_ARGS enabled, then the ftrace_regs will
have enough information to read the arguments of the function being
traced, as well as access to the stack pointer.
This way, if a user (like live kernel patching) only cares about the
arguments, it can avoid the heavier-weight "regs" callback, which
fills the struct ftrace_regs with enough information to simulate a
breakpoint exception (needed for kprobes).
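To make the two flavors concrete, here is a minimal sketch of
registering both kinds of ops; the callback names are hypothetical:

  #include <linux/ftrace.h>

  /* lightweight: ftrace_regs carries only the args and stack pointer */
  static struct ftrace_ops light_ops = {
          .func = my_args_callback,               /* hypothetical */
  };

  /* heavyweight: asks ftrace to fill in a complete pt_regs */
  static struct ftrace_ops heavy_ops = {
          .func  = my_regs_callback,              /* hypothetical */
          .flags = FTRACE_OPS_FL_SAVE_REGS,
  };

Either one is then attached with register_ftrace_function().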
A new config option audits the timestamps of the ftrace ring buffer,
at most once per recorded event.
Ftrace recursion protection has been cleaned up to move the protection
to the callback itself (this saves on an extra function call for those
callbacks).
Perf now handles its own RCU protection and does not depend on ftrace
to do it on its behalf (saving that extra function call).
A new debug option adds a "recursed_functions" file to tracefs that
lists all the places that triggered the recursion protection of the
function tracer. This shows where things need to be fixed, as
recursion slows down the function tracer.
The eval enum mapping updates done at boot up are now offloaded to a
work queue, as they caused a noticeable pause on slow embedded boards.
Various clean ups and last minute fixes"
* tag 'trace-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (33 commits)
tracing: Offload eval map updates to a work queue
Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"
ring-buffer: Add rb_check_bpage in __rb_allocate_pages
ring-buffer: Fix two typos in comments
tracing: Drop unneeded assignment in ring_buffer_resize()
tracing: Disable ftrace selftests when any tracer is running
seq_buf: Avoid type mismatch for seq_buf_init
ring-buffer: Fix a typo in function description
ring-buffer: Remove obsolete rb_event_is_commit()
ring-buffer: Add test to validate the time stamp deltas
ftrace/documentation: Fix RST C code blocks
tracing: Clean up after filter logic rewriting
tracing: Remove the useless value assignment in test_create_synth_event()
livepatch: Use the default ftrace_ops instead of REGS when ARGS is available
ftrace/x86: Allow for arguments to be passed in to ftrace_regs by default
ftrace: Have the callbacks receive a struct ftrace_regs instead of pt_regs
MAINTAINERS: assign ./fs/tracefs to TRACING
tracing: Fix some typos in comments
ftrace: Remove unused variable 'ret'
ring-buffer: Add recording of ring buffer recursion into recursed_functions
...
|
|
Pull TIF_NOTIFY_SIGNAL updates from Jens Axboe:
"This sits on top of the core entry/exit and x86 entry branch from
the tip tree, which contains the generic and x86 parts of this work.
Here we convert the rest of the archs to support TIF_NOTIFY_SIGNAL.
With that done, we can get rid of JOBCTL_TASK_WORK from task_work and
signal.c, and also remove a deadlock work-around in io_uring that
existed because signal-based task_work waking is invoked with the
sighand wait queue head lock held.
The motivation for this work is to decouple signal-notify based
task_work, of which io_uring is a heavy user, from sighand. The
sighand lock becomes a huge contention point, particularly for
threaded workloads where it's shared between threads. Even outside of
threaded applications it's slower than it needs to be.
Roman Gershman <[email protected]> reported that his networked
workload dropped from 1.6M QPS at 80% CPU to 1.0M QPS at 100% CPU
after io_uring was changed to use TIF_NOTIFY_SIGNAL. The time was all
spent hammering on the sighand lock, showing 57% of the CPU time there
[1].
There are further cleanups possible on top of this. One example is
TIF_PATCH_PENDING, where a patch already exists to use
TIF_NOTIFY_SIGNAL instead. Hopefully this will also lead to more
consolidation, but the work stands on its own as well"
[1] https://github.com/axboe/liburing/issues/215
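For reference, queueing task_work with signal-style notification looks
roughly like the sketch below; my_work_fn() and queue_it() are
illustrative names, not part of this series:

  #include <linux/task_work.h>

  static void my_work_fn(struct callback_head *head)
  {
          /* runs in the target task's context on its way out of the kernel */
  }

  static struct callback_head my_work;

  static int queue_it(struct task_struct *task)
  {
          init_task_work(&my_work, my_work_fn);
          /* TWA_SIGNAL now sets TIF_NOTIFY_SIGNAL instead of taking sighand */
          return task_work_add(task, &my_work, TWA_SIGNAL);
  }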
* tag 'tif-task_work.arch-2020-12-14' of git://git.kernel.dk/linux-block: (28 commits)
io_uring: remove 'twa_signal_ok' deadlock work-around
kernel: remove checking for TIF_NOTIFY_SIGNAL
signal: kill JOBCTL_TASK_WORK
io_uring: JOBCTL_TASK_WORK is no longer used by task_work
task_work: remove legacy TWA_SIGNAL path
sparc: add support for TIF_NOTIFY_SIGNAL
riscv: add support for TIF_NOTIFY_SIGNAL
nds32: add support for TIF_NOTIFY_SIGNAL
ia64: add support for TIF_NOTIFY_SIGNAL
h8300: add support for TIF_NOTIFY_SIGNAL
c6x: add support for TIF_NOTIFY_SIGNAL
alpha: add support for TIF_NOTIFY_SIGNAL
xtensa: add support for TIF_NOTIFY_SIGNAL
arm: add support for TIF_NOTIFY_SIGNAL
microblaze: add support for TIF_NOTIFY_SIGNAL
hexagon: add support for TIF_NOTIFY_SIGNAL
csky: add support for TIF_NOTIFY_SIGNAL
openrisc: add support for TIF_NOTIFY_SIGNAL
sh: add support for TIF_NOTIFY_SIGNAL
um: add support for TIF_NOTIFY_SIGNAL
...
|
|
Merge tag 'locking-urgent-2020-11-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking fixes from Thomas Gleixner:
"Two more places which invoke tracing from RCU-disabled regions in the
idle path.
Similar to the entry path, the low-level idle functions have to be
non-instrumentable"
* tag 'locking-urgent-2020-11-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
intel_idle: Fix intel_idle() vs tracing
sched/idle: Fix arch_cpu_idle() vs tracing
|
|
We call arch_cpu_idle() with RCU disabled, but then use
local_irq_{en,dis}able(), which invokes tracing, which relies on RCU.
Switch all arch_cpu_idle() implementations to use
raw_local_irq_{en,dis}able() and carefully manage the
lockdep, RCU and tracing state like we do in entry.
(XXX: we really should change arch_cpu_idle() to not return with
interrupts enabled)
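A sketch of the converted pattern (no particular architecture's code;
the wfi instruction stands in for the arch's idle primitive):

  void arch_cpu_idle(void)
  {
          /*
           * RCU is not watching here, so avoid instrumentation: the
           * raw_ variant skips the irq-on/off tracepoints that
           * local_irq_enable() would emit.
           */
          raw_local_irq_enable();
          asm volatile("wfi");    /* illustrative idle instruction */
          /* note: still returns with interrupts enabled, per the XXX */
  }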
Reported-by: Sven Schnelle <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
In preparation for having the arguments of a function passed to callbacks
attached to functions by default, change the default callback prototype to
receive a struct ftrace_regs as the fourth parameter instead of a pt_regs.
Callbacks that set the FL_SAVE_REGS flag in their ftrace_ops flags will now
need to get the pt_regs via a ftrace_get_regs() helper call. If that helper
is called from a callback whose ftrace_ops did not have the FL_SAVE_REGS
flag set, it will return NULL.
This allows the ftrace_regs to hold just enough to get the parameters and
stack pointer, without the worry that callbacks may have a pt_regs that is
not completely filled.
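A minimal sketch of a callback under the new prototype, using the
helper described above:

  static void my_callback(unsigned long ip, unsigned long parent_ip,
                          struct ftrace_ops *op, struct ftrace_regs *fregs)
  {
          struct pt_regs *regs = ftrace_get_regs(fregs);

          if (!regs)
                  return; /* our ftrace_ops did not set FL_SAVE_REGS */

          /* a fully filled pt_regs is available here */
  }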
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
|
|
struct perf_sample_data lives on-stack, so we should be careful about its
size. Furthermore, the pt_regs copy in there exists only because x86_64 is
a trainwreck; solve it differently.
Reported-by: Thomas Gleixner <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Steven Rostedt <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
Wire up TIF_NOTIFY_SIGNAL handling for csky.
Cc: [email protected]
Acked-by: Guo Ren <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
This adds CONFIG_FTRACE_RECORD_RECURSION that will record to a file
"recursed_functions" all the functions that caused recursion while a
callback to the function tracer was running.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: [email protected]
Cc: "H. Peter Anvin" <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Anton Vorontsov <[email protected]>
Cc: Colin Cross <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Miroslav Benes <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Joe Lawrence <[email protected]>
Cc: Kamalesh Babulal <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
|
|
If an ftrace callback does not supply its own recursion protection and
does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace will
make a helper trampoline to do so before calling the callback instead of
calling the callback directly.
The default for ftrace_ops is going to change: it will expect that handlers
provide their own recursion protection, unless their ftrace_ops states
otherwise.
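With the helpers added in this series, a callback that provides its own
protection looks roughly like this sketch (the body is illustrative):

  static void my_callback(unsigned long ip, unsigned long parent_ip,
                          struct ftrace_ops *op, struct ftrace_regs *fregs)
  {
          int bit = ftrace_test_recursion_trylock(ip, parent_ip);

          if (bit < 0)
                  return;         /* recursion detected, bail out */

          /* ... do the real work ... */

          ftrace_test_recursion_unlock(bit);
  }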
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Miroslav Benes <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: [email protected]
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Anil S Keshavamurthy <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Acked-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
|
|
All the callers currently clear the flag themselves; clean this up and
move the clearing into tracehook_notify_resume() instead.
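A simplified sketch of the consolidated helper after this change (the
barrier pairs with task_work_add()):

  static inline void tracehook_notify_resume(struct pt_regs *regs)
  {
          clear_thread_flag(TIF_NOTIFY_RESUME);   /* moved here from callers */
          /* make newly added work visible after the flag is cleared */
          smp_mb__after_atomic();
          if (unlikely(current->task_works))
                  task_work_run();
  }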
Reviewed-by: Oleg Nesterov <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
Pull dma-mapping updates from Christoph Hellwig:
- rework the non-coherent DMA allocator
- move private definitions out of <linux/dma-mapping.h>
- lower CMA_ALIGNMENT (Paul Cercueil)
- remove the omap1 dma address translation in favor of the common code
- make dma-direct aware of multiple dma offset ranges (Jim Quinlan)
- support per-node DMA CMA areas (Barry Song)
- increase the default seg boundary limit (Nicolin Chen)
- misc fixes (Robin Murphy, Thomas Tai, Xu Wang)
- various cleanups
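As one concrete example from the list below, the new dma_alloc_pages API
hands drivers non-coherent pages plus a matching DMA address. A hedged
usage sketch (alloc_buf() and the size are assumptions):

  #include <linux/dma-mapping.h>

  static int alloc_buf(struct device *dev)
  {
          dma_addr_t dma;
          struct page *page;

          page = dma_alloc_pages(dev, SZ_64K, &dma,
                                 DMA_BIDIRECTIONAL, GFP_KERNEL);
          if (!page)
                  return -ENOMEM;

          /* CPU uses page_address(page); the device uses dma */

          dma_free_pages(dev, SZ_64K, page, dma, DMA_BIDIRECTIONAL);
          return 0;
  }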
* tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
ARM/ixp4xx: add a missing include of dma-map-ops.h
dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
dma-direct: factor out a dma_direct_alloc_from_pool helper
dma-direct: check for highmem pages in dma_direct_alloc_pages
dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>
dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma
dma-mapping: move dma-debug.h to kernel/dma/
dma-mapping: remove <asm/dma-contiguous.h>
dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>
dma-contiguous: remove dma_contiguous_set_default
dma-contiguous: remove dev_set_cma_area
dma-contiguous: remove dma_declare_contiguous
dma-mapping: split <linux/dma-mapping.h>
cma: decrease CMA_ALIGNMENT lower limit to 2
firewire-ohci: use dma_alloc_pages
dma-iommu: implement ->alloc_noncoherent
dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
dma-mapping: add a new dma_alloc_pages API
dma-mapping: remove dma_cache_sync
53c700: convert to dma_alloc_noncoherent
...
|
|
Merge tag 'perf-kprobes-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf/kprobes updates from Ingo Molnar:
"This prepares to unify the kretprobe trampoline handler and make
kretprobe lockless (those patches are still work in progress)"
* tag 'perf-kprobes-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
kprobes: Fix to check probe enabled before disarm_kprobe_ftrace()
kprobes: Make local functions static
kprobes: Free kretprobe_instance with RCU callback
kprobes: Remove NMI context check
sparc: kprobes: Use generic kretprobe trampoline handler
sh: kprobes: Use generic kretprobe trampoline handler
s390: kprobes: Use generic kretprobe trampoline handler
powerpc: kprobes: Use generic kretprobe trampoline handler
parisc: kprobes: Use generic kretprobe trampoline handler
mips: kprobes: Use generic kretprobe trampoline handler
ia64: kprobes: Use generic kretprobe trampoline handler
csky: kprobes: Use generic kretprobe trampoline handler
arc: kprobes: Use generic kretprobe trampoline handler
arm64: kprobes: Use generic kretprobe trampoline handler
arm: kprobes: Use generic kretprobe trampoline handler
x86/kprobes: Use generic kretprobe trampoline handler
kprobes: Add generic kretprobe trampoline handler
|
|
Merge tag 'core-build-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull orphan section checking from Ingo Molnar:
"Orphan link sections were a long-standing source of obscure bugs,
because the heuristics that various linkers & compilers use to handle
them (include these bits into the output image vs discarding them
silently) are both highly idiosyncratic and also version dependent.
Instead of this historically problematic mess, this tree by Kees Cook
(et al) adds build time asserts and build time warnings if there's any
orphan section in the kernel or if a section is not sized as expected.
And because we relied on so many silent assumptions in this area, fix
a metric ton of dependencies and some outright bugs related to this,
before we can finally enable the checks on the x86, ARM and ARM64
platforms"
* tag 'core-build-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits)
x86/boot/compressed: Warn on orphan section placement
x86/build: Warn on orphan section placement
arm/boot: Warn on orphan section placement
arm/build: Warn on orphan section placement
arm64/build: Warn on orphan section placement
x86/boot/compressed: Add missing debugging sections to output
x86/boot/compressed: Remove, discard, or assert for unwanted sections
x86/boot/compressed: Reorganize zero-size section asserts
x86/build: Add asserts for unwanted sections
x86/build: Enforce an empty .got.plt section
x86/asm: Avoid generating unused kprobe sections
arm/boot: Handle all sections explicitly
arm/build: Assert for unwanted sections
arm/build: Add missing sections
arm/build: Explicitly keep .ARM.attributes sections
arm/build: Refactor linker script headers
arm64/build: Assert for unwanted sections
arm64/build: Add missing DWARF sections
arm64/build: Use common DISCARDS in linker script
arm64/build: Remove .eh_frame* sections due to unwind tables
...
|
|
Merge dma-contiguous.h into dma-map-ops.h, after moving the comment
describing the contiguous allocator into kernel/dma/contiguous.c.
Signed-off-by: Christoph Hellwig <[email protected]>
|
|
Use the generic kretprobe trampoline handler. Don't use
framepointer verification.
Signed-off-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Guo Ren <[email protected]>
Link: https://lore.kernel.org/r/159870605775.1229682.2627276871589951304.stgit@devnote2
|
|
The .comment section doesn't belong in STABS_DEBUG. Split it out into a
new macro named ELF_DETAILS. This will gain other non-debug sections
that need to be accounted for when linking with --orphan-handling=warn.
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
|
|
Replace the existing /* fall through */ comments and their variants with
the new pseudo-keyword macro fallthrough[1]. Also, remove fall-through
markings where they are unnecessary.
[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
Signed-off-by: Gustavo A. R. Silva <[email protected]>
|
|
Merge misc updates from Andrew Morton:
- a few MM hotfixes
- kthread, tools, scripts, ntfs and ocfs2
- some of MM
Subsystems affected by this patch series: kthread, tools, scripts, ntfs,
ocfs2 and mm (hotfixes, pagealloc, slab-generic, slab, slub, kcsan,
debug, pagecache, gup, swap, shmem, memcg, pagemap, mremap, mincore,
sparsemem, vmalloc, kasan, pagealloc, hugetlb and vmscan).
* emailed patches from Andrew Morton <[email protected]>: (162 commits)
mm: vmscan: consistent update to pgrefill
mm/vmscan.c: fix typo
khugepaged: khugepaged_test_exit() check mmget_still_valid()
khugepaged: retract_page_tables() remember to test exit
khugepaged: collapse_pte_mapped_thp() protect the pmd lock
khugepaged: collapse_pte_mapped_thp() flush the right range
mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible
mm: thp: replace HTTP links with HTTPS ones
mm/page_alloc: fix memalloc_nocma_{save/restore} APIs
mm/page_alloc.c: skip setting nodemask when we are in interrupt
mm/page_alloc: fallbacks at most has 3 elements
mm/page_alloc: silence a KASAN false positive
mm/page_alloc.c: remove unnecessary end_bitidx for [set|get]_pfnblock_flags_mask()
mm/page_alloc.c: simplify pageblock bitmap access
mm/page_alloc.c: extract the common part in pfn_to_bitidx()
mm/page_alloc.c: replace the definition of NR_MIGRATETYPE_BITS with PB_migratetype_bits
mm/shuffle: remove dynamic reconfiguration
mm/memory_hotplug: document why shuffle_zone() is relevant
mm/page_alloc: remove nr_free_pagecache_pages()
mm: remove vm_total_pages
...
|
|
Patch series "mm: cleanup usage of <asm/pgalloc.h>"
Most architectures have very similar versions of pXd_alloc_one() and
pXd_free_one() for intermediate levels of page table. These patches add
generic versions of these functions in <asm-generic/pgalloc.h> and enable
use of the generic functions where appropriate.
In addition, functions declared and defined in <asm/pgalloc.h> headers are
used mostly by core mm and early mm initialization in arch code, and there
is no actual reason to have <asm/pgalloc.h> included all over the place.
The first patch in this series removes unneeded includes of
<asm/pgalloc.h>.
In the end it didn't work out as neatly as I hoped and moving
pXd_alloc_track() definitions to <asm-generic/pgalloc.h> would require
unnecessary changes to arches that have custom page table allocations, so
I've decided to move lib/ioremap.c to mm/ and make pgalloc-track.h local
to mm/.
This patch (of 8):
In most cases <asm/pgalloc.h> header is required only for allocations of
page table memory. Most of the .c files that include that header do not
use symbols declared in <asm/pgalloc.h> and do not require that header.
As for the other header files that used to include <asm/pgalloc.h>, it is
possible to move that include into the .c file that actually uses symbols
from <asm/pgalloc.h> and drop the include from the header file.
The process was somewhat automated using
sed -i -E '/[<"]asm\/pgalloc\.h/d' \
$(grep -L -w -f /tmp/xx \
$(git grep -E -l '[<"]asm/pgalloc\.h'))
where /tmp/xx contains all the symbols defined in
arch/*/include/asm/pgalloc.h.
[[email protected]: fix powerpc warning]
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Pekka Enberg <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]> [m68k]
Cc: Abdul Haleem <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Satheesh Rajendran <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Merge branch 'work.regset' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull ptrace regset updates from Al Viro:
"Internal regset API changes:
- regularize copy_regset_{to,from}_user() callers
- switch to saner calling conventions for ->get()
- kill user_regset_copyout()
The ->put() side of things will have to wait for the next cycle,
unfortunately.
The balance is about -1KLoC and replacements for ->get() instances are
a lot saner"
* 'work.regset' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (41 commits)
regset: kill user_regset_copyout{,_zero}()
regset(): kill ->get_size()
regset: kill ->get()
csky: switch to ->regset_get()
xtensa: switch to ->regset_get()
parisc: switch to ->regset_get()
nds32: switch to ->regset_get()
nios2: switch to ->regset_get()
hexagon: switch to ->regset_get()
h8300: switch to ->regset_get()
openrisc: switch to ->regset_get()
riscv: switch to ->regset_get()
c6x: switch to ->regset_get()
ia64: switch to ->regset_get()
arc: switch to ->regset_get()
arm: switch to ->regset_get()
sh: convert to ->regset_get()
arm64: switch to ->regset_get()
mips: switch to ->regset_get()
sparc: switch to ->regset_get()
...
|
|
Pull arch/csky updates from Guo Ren:
"New features:
- seccomp-filter
- err-injection
- top-down&random mmap-layout
- irq_work
- show_ipi
- context-tracking
Fixes & Optimizations:
- kprobe_on_ftrace
- optimize panic print"
* tag 'csky-for-linus-5.9-rc1' of https://github.com/c-sky/csky-linux:
csky: Add context tracking support
csky: Add arch_show_interrupts for IPI interrupts
csky: Add irq_work support
csky: Fixup warning by EXPORT_SYMBOL(kmap)
csky: Set CONFIG_NR_CPU 4 as default
csky: Use top-down mmap layout
csky: Optimize the trap processing flow
csky: Add support for function error injection
csky: Fixup kprobes handler couldn't change pc
csky: Fixup duplicated restore sp in RESTORE_REGS_FTRACE
csky: Add cpu feature register hint for smp
csky: Add SECCOMP_FILTER support
csky: remove unused thread_saved_pc and *_segments functions/macros
|
|
This patch adds context tracking support for NO_HZ_FULL.
Here is the test result with dynticks-testing (see tick_stop):
cat /sys/kernel/debug/tracing/per_cpu/cpu0/trace
tracer: nop
entries-in-buffer/entries-written: 356/356 #P:1
                              _-----=> irqs-off
                             / _----=> need-resched
                            | / _---=> hardirq/softirq
                            || / _--=> preempt-depth
                            ||| /     delay
           TASK-PID   CPU#  ||||   TIMESTAMP  FUNCTION
              | |       |   ||||      |         |
...
sleep-192 [000] d.h. 167.088270: hrtimer_expire_entry: hrtimer=(ptrval) function=tick_sched_timer now=166436355700
sleep-192 [000] d.h. 167.092279: hrtimer_expire_entry: hrtimer=(ptrval) function=tick_sched_timer now=166440365700
<idle>-0 [000] d.h2 167.096492: hrtimer_expire_entry: hrtimer=(ptrval) function=tick_sched_timer now=166444578400
<idle>-0 [000] d..1 167.097876: tick_stop: success=1 dependency=NONE
^^^^^^^^^
<idle>-0 [000] d.h1 168.818206: hrtimer_expire_entry: hrtimer=(ptrval) function=tick_sched_timer now=168166280900
kworker/u2:0-7 [000] .... 168.821760: workqueue_execute_start: work struct (ptrval): function wb_workfn
kworker/u2:0-7 [000] d.h1 168.824464: hrtimer_expire_entry: hrtimer=(ptrval) function=tick_sched_timer now=168172547100
kworker/0:1-18 [000] .... 168.825053: workqueue_execute_start: work struct (ptrval): function vmstat_update
kworker/0:1-18 [000] .... 168.825238: workqueue_execute_start: work struct (ptrval): function vmstat_shepherd
kworker/0:1-18 [000] .... 168.825516: workqueue_execute_start: work struct (ptrval): function neigh_periodic_work
<idle>-0 [000] d..1 168.826121: tick_stop: success=1 dependency=NONE
kworker/u2:0-7 [000] .... 169.377327: workqueue_execute_start: work struct (ptrval): function flush_to_ldisc
<idle>-0 [000] d..1 169.379832: tick_stop: success=1 dependency=NONE
kworker/u2:0-7 [000] .... 169.607935: workqueue_execute_start: work struct (ptrval): function flush_to_ldisc
kworker/u2:0-7 [000] d.h1 169.608148: hrtimer_expire_entry: hrtimer=(ptrval) function=tick_sched_timer now=168956235500
<idle>-0 [000] d..1 169.608654: tick_stop: success=1 dependency=NONE
Signed-off-by: Guo Ren <[email protected]>
Cc: Greentime Hu <[email protected]>
Cc: Arnd Bergmann <[email protected]>
|
|
Here is the result:
cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
15: 1348 1299 952 1076 C-SKY SMP Intc 15 IPI Interrupt
16: 1203 1825 1598 1307 C-SKY SMP Intc 16 csky_mp_timer
43: 292 0 0 0 C-SKY SMP Intc 43 ttyS0
57: 106 0 0 0 C-SKY SMP Intc 57 virtio0
IPI0: 0 0 0 0 Empty interrupts
IPI1: 19 41 45 27 Rescheduling interrupts
IPI2: 1330 1259 908 1050 Function call interrupts
IPI3: 0 0 0 0 Irq work interrupts
Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
|
|
Run irq_work in hardware interrupt context on csky. Implement the following (a sketch follows the list):
- arch_irq_work_raise()
- arch_irq_work_has_interrupt()
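A rough sketch of the two hooks; send_ipi_message() and IPI_IRQ_WORK are
csky SMP internals named here only for illustration:

  void arch_irq_work_raise(void)
  {
          /* kick ourselves with an IPI so irq_work runs in hardirq context */
          send_ipi_message(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
  }

  /* asm/irq_work.h: the IPI provides a real interrupt for irq_work */
  static inline bool arch_irq_work_has_interrupt(void)
  {
          return true;
  }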
Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
|
|
- Separate the different trap functions
- Add trap_no()
- Remove the panic code print
- Redesign die_if_kernel to die, in line with riscv's
- Print exact trap info for application segment faults
[ 17.389321] gzip[126]: unhandled signal 11 code 0x3 at 0x0007835a in busybox[8000+d4000]
[ 17.393882]
[ 17.393882] CURRENT PROCESS:
[ 17.393882]
[ 17.394309] COMM=gzip PID=126
[ 17.394513] TEXT=00008000-000db2e4 DATA=000dcf14-000dd1ad BSS=000dd1ad-000ff000
[ 17.395499] USER-STACK=7f888e50 KERNEL-STACK=bf130300
[ 17.395499]
[ 17.396801] PC: 0x0007835a (0x7835a)
[ 17.397048] LR: 0x000058b4 (0x58b4)
[ 17.397285] SP: 0xbe519f68
[ 17.397555] orig_a0: 0x00002852
[ 17.397886] PSR: 0x00020341
[ 17.398356] a0: 0x00002852 a1: 0x000f2f5a a2: 0x0000d7ae a3: 0x0000005d
[ 17.399289] r4: 0x000de150 r5: 0x00000002 r6: 0x00000102 r7: 0x00007efa
[ 17.399800] r8: 0x7f888bc4 r9: 0x00000001 r10: 0x000002eb r11: 0x0000aac1
[ 17.400166] r12: 0x00002ef2 r13: 0x00000007 r15: 0x000058b4
[ 17.400531] r16: 0x0000004c r17: 0x00000031 r18: 0x000f5816 r19: 0x000e8068
[ 17.401006] r20: 0x000f5818 r21: 0x000e8068 r22: 0x000f5918 r23: 0x90000000
[ 17.401721] r24: 0x00000031 r25: 0x000000c8 r26: 0x00000000 r27: 0x00000000
[ 17.402199] r28: 0x2ac2a000 r29: 0x00000000 r30: 0x00000000 tls: 0x2aadbaa8
[ 17.402686] hi: 0x00120340 lo: 0x7f888bec
/etc/init.ci/ntfs3g_run: line 61: 126 Segmentation fault gzip -c -9 /mnt/test.bin > /mnt/test_bin.gz
Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
|
|
CPU feature registers are set up by the customer's bootloader, but
Linux must help transfer them from the primary core to the secondary
cores. This patch adds support for the hint2 CPU feature register.
Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
|
|
secure_computing() is called first in syscall_trace_enter() so that
a system call will be aborted quickly, without doing any subsequent
syscall tracing, if seccomp rules want to deny that system call.
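A minimal sketch of the ordering described above (not the exact csky
code; the tracing step is abbreviated):

  asmlinkage int syscall_trace_enter(struct pt_regs *regs)
  {
          /* seccomp runs first: a denied syscall aborts before any tracing */
          if (secure_computing() == -1)
                  return -1;

          if (test_thread_flag(TIF_SYSCALL_TRACE) &&
              tracehook_report_syscall_entry(regs))
                  return -1;

          return 0;
  }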
TODO:
- Update https://github.com/seccomp/libseccomp csky support
Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
|
|
These are used nowhere in the tree (except for some architectures which
define them for their own use) and were already removed for other
architectures in:
commit 6474924e2b5d ("arch: remove unused macro/function thread_saved_pc()")
commit c17c02040bf0 ("arch: remove unused *_segments() macros/functions")
Remove them from arch/csky as well.
Signed-off-by: Tobias Klauser <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
|
|
NB: WTF is fpregs_get() playing at???
Signed-off-by: Al Viro <[email protected]>
|
|
Now that HAVE_COPY_THREAD_TLS has been removed, rename copy_thread_tls()
back to simply copy_thread(). It's a simpler name, and doesn't imply that only
tls is copied here. This finishes an outstanding chunk of internal process
creation work since we've added clone3().
Cc: [email protected]
Acked-by: Thomas Bogendoerfer <[email protected]>
Acked-by: Stafford Horne <[email protected]>
Acked-by: Greentime Hu <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>
|
|
Better describe what these functions do.
Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
This change converts the existing mmap_sem rwsem calls to use the new mmap
locking API instead.
The change is generated using coccinelle with the following rule:
// spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir .
@@
expression mm;
@@
(
-init_rwsem
+mmap_init_lock
|
-down_write
+mmap_write_lock
|
-down_write_killable
+mmap_write_lock_killable
|
-down_write_trylock
+mmap_write_trylock
|
-up_write
+mmap_write_unlock
|
-downgrade_write
+mmap_write_downgrade
|
-down_read
+mmap_read_lock
|
-down_read_killable
+mmap_read_lock_killable
|
-down_read_trylock
+mmap_read_trylock
|
-up_read
+mmap_read_unlock
)
-(&mm->mmap_sem)
+(mm)
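After conversion, a typical reader-side sequence looks like this sketch
(mm and addr are assumptions):

  /* was: down_read(&mm->mmap_sem); */
  mmap_read_lock(mm);
  vma = find_vma(mm, addr);
  /* ... inspect the vma ... */
  /* was: up_read(&mm->mmap_sem); */
  mmap_read_unlock(mm);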
Signed-off-by: Michel Lespinasse <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Daniel Jordan <[email protected]>
Reviewed-by: Laurent Dufour <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Liam Howlett <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ying Han <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Patch series "mm: consolidate definitions of page table accessors", v2.
The low level page table accessors (pXY_index(), pXY_offset()) are
duplicated across all architectures and sometimes more than once. For
instance, we have 31 definitions of pgd_offset() for 25 supported
architectures.
Most of these definitions are actually identical and typically it boils
down to, e.g.
static inline unsigned long pmd_index(unsigned long address)
{
        return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}

static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
{
        return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
}
These definitions can be shared among 90% of the arches provided
XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.
For architectures that really need a custom version there is always the
possibility of overriding the generic version with the usual ifdef magic.
These patches introduce include/linux/pgtable.h that replaces
include/asm-generic/pgtable.h and add the definitions of the page table
accessors to the new header.
This patch (of 12):
The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the
functions involving page table manipulations, e.g. pte_alloc() and
pmd_alloc(). So, there is no point in explicitly including <asm/pgtable.h>
in the files that include <linux/mm.h>.
The include statements in such cases are removed with a simple loop:
for f in $(git grep -l "include <linux/mm.h>") ; do
sed -i -e '/include <asm\/pgtable.h>/ d' $f
done
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Cain <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chris Zankel <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Greentime Hu <[email protected]>
Cc: Greg Ungerer <[email protected]>
Cc: Guan Xuetao <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Ley Foon Tan <[email protected]>
Cc: Mark Salter <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Nick Hu <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Russell King <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vincent Chen <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Now that the last users of show_stack() have been converted to use an
explicit log level, show_stack_loglvl() can drop its redundant suffix and
become the well-known show_stack() once again.
Signed-off-by: Dmitry Safonov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Currently, the log level of show_stack() depends on the platform
implementation. This creates situations where the headers are printed with
a lower or higher log level than the stacktrace itself (depending on the
platform or the user).
Furthermore, it forces the policy decision from the caller onto the
architecture side. As a result, some users such as sysrq/kdb/etc resort to
tricks with temporarily raising console_loglevel while printing their
messages. That may not only print unwanted messages from other CPUs, but
can also omit the printing entirely in the unlucky case where the printk()
was deferred.
Introducing a log-level parameter and KERN_UNSUPPRESSED [1] seems an easier
approach than introducing more printk buffers. Also, it will consolidate
printings with headers.
Introduce show_stack_loglvl(), which will eventually substitute
show_stack().
[1]: https://lore.kernel.org/lkml/[email protected]/T/#u
Signed-off-by: Dmitry Safonov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Guo Ren <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The free_area_init() function only requires the definition of the maximal
PFN for each of the supported zones, rather than the calculation of actual
zone sizes and the sizes of the holes between the zones.
After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP, free_area_init() is
available to all architectures.
Using this function instead of free_area_init_node() simplifies the zone
detection.
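A sketch of the simplified arch pattern, assuming a flat-memory
architecture with only ZONE_NORMAL populated:

  void __init paging_init(void)
  {
          unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

          /* only the maximal PFN per zone: no hole/size arrays needed */
          max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
          free_area_init(max_zone_pfn);
  }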
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Tested-by: Hoan Tran <[email protected]> [arm64]
Cc: Baoquan He <[email protected]>
Cc: Brian Cain <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Greentime Hu <[email protected]>
Cc: Greg Ungerer <[email protected]>
Cc: Guan Xuetao <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Ley Foon Tan <[email protected]>
Cc: Mark Salter <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Nick Hu <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Russell King <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Putting the rseq_syscall check at the prologue of the syscall path breaks
a0 ... a7. This causes a system call bug when DEBUG_RSEQ is enabled.
So move it to the epilogue of the syscall path, but before syscall_trace.
Signed-off-by: Guo Ren <[email protected]>
|
|
There is no fixup or feature in the patch, only cleanup:
- Remove unnecessarily used registers (r11, r12); just use r9, r10 and
the syscallid regs as temporaries.
- Add _TIF_SYSCALL_WORK and _TIF_WORK_MASK as gathering macros.
Signed-off-by: Guo Ren <[email protected]>
|
|
The current implementation could destroy a4 & a5 under strace, so we need
to get them from pt_regs via SAVE_ALL.
Signed-off-by: Guo Ren <[email protected]>
|
|
log:
[ 0.13373200] Calibrating delay loop...
[ 0.14077600] ------------[ cut here ]------------
[ 0.14116700] WARNING: CPU: 0 PID: 0 at kernel/sched/core.c:3790 preempt_count_add+0xc8/0x11c
[ 0.14348000] DEBUG_LOCKS_WARN_ON((preempt_count() < 0))
Modules linked in:
[ 0.14395100] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.6.0 #7
[ 0.14410800]
[ 0.14427400] Call Trace:
[ 0.14450700] [<807cd226>] dump_stack+0x8a/0xe4
[ 0.14473500] [<80072792>] __warn+0x10e/0x15c
[ 0.14495900] [<80072852>] warn_slowpath_fmt+0x72/0xc0
[ 0.14518600] [<800a5240>] preempt_count_add+0xc8/0x11c
[ 0.14544900] [<807ef918>] _raw_spin_lock+0x28/0x68
[ 0.14572600] [<800e0eb8>] vprintk_emit+0x84/0x2d8
[ 0.14599000] [<800e113a>] vprintk_default+0x2e/0x44
[ 0.14625100] [<800e2042>] vprintk_func+0x12a/0x1d0
[ 0.14651300] [<800e1804>] printk+0x30/0x48
[ 0.14677600] [<80008052>] lockdep_init+0x12/0xb0
[ 0.14703800] [<80002080>] start_kernel+0x558/0x7f8
[ 0.14730000] [<800052bc>] csky_start+0x58/0x94
[ 0.14756600] irq event stamp: 34
[ 0.14775100] hardirqs last enabled at (33): [<80067370>] ret_from_exception+0x2c/0x72
[ 0.14793700] hardirqs last disabled at (34): [<800e0eae>] vprintk_emit+0x7a/0x2d8
[ 0.14812300] softirqs last enabled at (32): [<800655b0>] __do_softirq+0x578/0x6d8
[ 0.14830800] softirqs last disabled at (25): [<8007b3b8>] irq_exit+0xec/0x128
The preempt_count held in a register could be destroyed after csky_do_IRQ
without being reloaded from memory.
Following other architectures (arm64, riscv), we move the preemption entry
into ret_from_exception and disable irqs at the beginning of
ret_from_exception instead of in RESTORE_ALL.
Signed-off-by: Guo Ren <[email protected]>
Reported-by: Lu Baoquan <[email protected]>
|
|
gdbmacros.txt uses sp in thread_struct, but csky uses ksp. This causes
bttnobp to fail to execute.
TODO:
- Still can't display the contents of the stack.
Signed-off-by: Guo Ren <[email protected]>
|
|
Every process's PSR would be inherited from SETUP_MMU, so we need to set
it in INIT_THREAD again.
Also, use a3 instead of r7 in __switch_to, for code convention.
Signed-off-by: Guo Ren <[email protected]>
|
|
Interrupts are disabled in __schedule() with local_irq_disable() and
enabled in finish_task_switch()->finish_lock_switch() with
local_irq_enable(), so there is no need to disable irqs here.
Signed-off-by: Liu Yibin <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
|
|
The implementation of show_stack will panic with a wrong fp:
addr = *fp++;
because the fp isn't checked properly.
The current implementations of show_stack, wchan and stack_trace
weren't designed properly, so just deprecate them.
This patch follows riscv's approach, with all code adapted from
arm's. The patch passes with:
- cat /proc/<pid>/stack
- cat /proc/<pid>/wchan
- echo c > /proc/sysrq-trigger
Signed-off-by: Guo Ren <[email protected]>
|
|
[ 5221.974084] Unable to handle kernel paging request at virtual address 0xfffff000, pc: 0x8002c18e
[ 5221.985929] Oops: 00000000
[ 5221.989488]
[ 5221.989488] CURRENT PROCESS:
[ 5221.989488]
[ 5221.992877] COMM=callchain_test PID=11962
[ 5221.995213] TEXT=00008000-000087e0 DATA=00009f1c-0000a018 BSS=0000a018-0000b000
[ 5221.999037] USER-STACK=7fc18e20 KERNEL-STACK=be204680
[ 5221.999037]
[ 5222.003292] PC: 0x8002c18e (perf_callchain_kernel+0x3e/0xd4)
[ 5222.007957] LR: 0x8002c198 (perf_callchain_kernel+0x48/0xd4)
[ 5222.074873] Call Trace:
[ 5222.074873] [<800a248e>] get_perf_callchain+0x20a/0x29c
[ 5222.074873] [<8009d964>] perf_callchain+0x64/0x80
[ 5222.074873] [<8009dc1c>] perf_prepare_sample+0x29c/0x4b8
[ 5222.074873] [<8009de6e>] perf_event_output_forward+0x36/0x98
[ 5222.074873] [<800497e0>] search_exception_tables+0x20/0x44
[ 5222.074873] [<8002cbb6>] do_page_fault+0x92/0x378
[ 5222.074873] [<80098608>] __perf_event_overflow+0x54/0xdc
[ 5222.074873] [<80098778>] perf_swevent_hrtimer+0xe8/0x164
[ 5222.074873] [<8002ddd0>] update_mmu_cache+0x0/0xd8
[ 5222.074873] [<8002c014>] user_backtrace+0x58/0xc4
[ 5222.074873] [<8002c0b4>] perf_callchain_user+0x34/0xd0
[ 5222.074873] [<800a2442>] get_perf_callchain+0x1be/0x29c
[ 5222.074873] [<8009d964>] perf_callchain+0x64/0x80
[ 5222.074873] [<8009d834>] perf_output_sample+0x78c/0x858
[ 5222.074873] [<8009dc1c>] perf_prepare_sample+0x29c/0x4b8
[ 5222.074873] [<8009de94>] perf_event_output_forward+0x5c/0x98
[ 5222.097846]
[ 5222.097846] [<800a0300>] perf_event_exit_task+0x58/0x43c
[ 5222.097846] [<8006c874>] hrtimer_interrupt+0x104/0x2ec
[ 5222.097846] [<800a0300>] perf_event_exit_task+0x58/0x43c
[ 5222.097846] [<80437bb6>] dw_apb_clockevent_irq+0x2a/0x4c
[ 5222.097846] [<8006c770>] hrtimer_interrupt+0x0/0x2ec
[ 5222.097846] [<8005f2e4>] __handle_irq_event_percpu+0xac/0x19c
[ 5222.097846] [<80437bb6>] dw_apb_clockevent_irq+0x2a/0x4c
[ 5222.097846] [<8005f408>] handle_irq_event_percpu+0x34/0x88
[ 5222.097846] [<8005f480>] handle_irq_event+0x24/0x64
[ 5222.097846] [<8006218c>] handle_level_irq+0x68/0xdc
[ 5222.097846] [<8005ec76>] __handle_domain_irq+0x56/0xa8
[ 5222.097846] [<80450e90>] ck_irq_handler+0xac/0xe4
[ 5222.097846] [<80029012>] csky_do_IRQ+0x12/0x24
[ 5222.097846] [<8002a3a0>] csky_irq+0x70/0x80
[ 5222.097846] [<800ca612>] alloc_set_pte+0xd2/0x238
[ 5222.097846] [<8002ddd0>] update_mmu_cache+0x0/0xd8
[ 5222.097846] [<800a0340>] perf_event_exit_task+0x98/0x43c
The original fp check isn't based on the real kernel stack region.
An invalid fp address may cause a kernel panic.
Signed-off-by: Mao Han <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
|
|
case:
# perf probe -x /lib/libc-2.28.9000.so memcpy
# perf record -e probe_libc:memcpy -aR sleep 1
The system hangs and the CPU gets stuck in a trap_c loop, because our
hardware single-step state can still take an interrupt signal. When we
are in the uprobe_xol single-step slot, we should disable irqs in
pt_regs->psr.
Also, is_swbp_insn() needs a csky arch implementation with a low-16-bit
mask.
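A sketch of that arch override: csky instructions can be 16-bit wide, so
only the low 16 bits are compared against the breakpoint opcode (the
constant handling is illustrative):

  bool is_swbp_insn(uprobe_opcode_t *insn)
  {
          /* compare only the low 16 bits against the software breakpoint */
          return (*insn & 0xffff) == (UPROBE_SWBP_INSN & 0xffff);
  }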
Signed-off-by: Guo Ren <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
|
|
When CONFIG_DYNAMIC_FTRACE is enabled, static ftrace will fail to
boot up and compile. This was carelessness when developing "dynamic
ftrace" and "ftrace with regs".
Signed-off-by: Guo Ren <[email protected]>
|
|
For the memory size ( > 512MB, < 1GB), the MSA setting is:
- SSEG0: PHY_START , PHY_START + 512MB
- SSEG1: PHY_START + 512MB, PHY_START + 1GB
But when the real memory is less than 1GB, there is a gap between the
end of memory and the 1GB border. The CPU could speculatively execute
into that gap, and if the bus cannot respond to the CPU's request there,
a crash will happen.
Now make the setting:
- SSEG0: PHY_START , PHY_START + 512MB (no change)
- SSEG1: Disabled (we use highmem for the 512MB~1GB range)
We also deprecate the zhole_size[] settings; they are only used by
arm-style CPUs. All memory gaps should use the reserved settings of the
dts in a csky system.
Signed-off-by: Guo Ren <[email protected]>
|
|
This patch adds support for uprobes on the csky architecture.
Just like kprobes, it supports single-stepping and instruction simulation.
Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
|
|
This patch enables the kprobes, kretprobes and ftrace interfaces. It
utilizes software breakpoints, single-step debug exceptions and
instruction simulation on csky.
We use USR_BKPT to replace the original instruction, and the kprobe
handler prepares an executable memory slot for out-of-line execution
with a copy of the original instruction being probed. Most instructions
can be executed by single-stepping, but some instructions need the
original pc value to execute, so we have to simulate these instructions
in software.
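A rough sketch of the single-step vs. simulate decision; insn_uses_pc(),
simulate_insn and the return codes are illustrative names, not the exact
csky API:

  enum { INSN_REJECTED, INSN_GOOD, INSN_GOOD_NO_SLOT };

  static int csky_probe_decode_insn(u32 insn, struct arch_probe_insn *api)
  {
          if (insn_uses_pc(insn)) {               /* needs the original pc */
                  api->handler = simulate_insn;   /* software simulation */
                  return INSN_GOOD_NO_SLOT;       /* no out-of-line slot */
          }
          /* otherwise: copy into the out-of-line slot and single-step it */
          return INSN_GOOD;
  }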
Signed-off-by: Guo Ren <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
|