path: root/tools/perf/scripts/python/exported-sql-viewer.py
author    Davidlohr Bueso <[email protected]>    2020-09-06 18:33:26 -0700
committer Steven Rostedt (VMware) <[email protected]>    2020-09-21 21:06:02 -0400
commit    40d14da383670db21a09e63d52db8dee9b77741e (patch)
tree      4913385409f94966a7932c13a3c09f21c0109f2d /tools/perf/scripts/python/exported-sql-viewer.py
parent    eb8d8b4c9848b200586aa98e105b39f159656ba6 (diff)
fgraph: Convert ret_stack tasklist scanning to rcu
It seems that alloc_retstack_tasklist() can also take a lockless approach for scanning the tasklist, instead of using the big global tasklist_lock. For this we also kill another deprecated and rcu-unsafe tsk->thread_group user, replacing it with for_each_process_thread(), maintaining semantics.

Here tasklist_lock does not protect anything other than the list against concurrent fork/exit. And considering that the whole thing is capped by FTRACE_RETSTACK_ALLOC_SIZE (32), it should not be a problem to have a potentially stale, yet stable, list. The task cannot go away either, so we don't risk racing with ftrace_graph_exit_task(), which clears the retstack. The tsk->ret_stack management is not protected by tasklist_lock, being serialized with the corresponding publish/subscribe barriers against concurrent ftrace_push_return_trace().

In addition, this plays nicer with cachelines by avoiding two atomic ops in the uncontended case.

Link: https://lkml.kernel.org/r/[email protected]
Acked-by: Oleg Nesterov <[email protected]>
Signed-off-by: Davidlohr Bueso <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
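
A minimal before/after sketch of the scanning change described above, for illustration only and not the verbatim diff. The names (alloc_retstack_tasklist(), tasklist_lock, do_each_thread(), for_each_process_thread()) are taken from the upstream kernel source; the per-task assignment body is elided here.

	/*
	 * Before: hold the global tasklist_lock and walk tsk->thread_group
	 * via the deprecated do_each_thread()/while_each_thread() pair.
	 */
	read_lock(&tasklist_lock);
	do_each_thread(g, t) {
		if (t->ret_stack == NULL) {
			/* ... hand one of the pre-allocated ret_stacks to t ... */
		}
	} while_each_thread(g, t);
	read_unlock(&tasklist_lock);

	/*
	 * After: a lockless RCU walk. The snapshot may be slightly stale but
	 * is stable, tasks cannot vanish under rcu_read_lock(), and the two
	 * atomic ops of the rwlock are avoided in the uncontended case.
	 */
	rcu_read_lock();
	for_each_process_thread(g, t) {
		if (t->ret_stack == NULL) {
			/* ... hand one of the pre-allocated ret_stacks to t ... */
		}
	}
	rcu_read_unlock();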
Diffstat (limited to 'tools/perf/scripts/python/exported-sql-viewer.py')
0 files changed, 0 insertions, 0 deletions