path: root/tools/perf/util/scripting-engines/trace-event-python.c
author    Ming Lei <[email protected]>    2024-03-22 10:12:44 +0800
committer Jens Axboe <[email protected]>    2024-04-01 11:53:36 -0600
commit    a46c27026da10a126dd870f7b65380010bd20db5 (patch)
tree      0db399aa3791a62457c41f495cbd4708c5eb7a37 /tools/perf/util/scripting-engines/trace-event-python.c
parent    7d8d35791b1b87a503ebe1f2f48407ee05dbaf5e (diff)
blk-mq: don't schedule block kworker on isolated CPUs
The kernel parameters `isolcpus=` and `nohz_full=` are used to isolate CPUs for specific tasks, and block IO is not expected to disturb these CPUs; blk-mq kworkers should not be scheduled on isolated CPUs. If a blk-mq kworker does run on an isolated CPU, long block IO latency can result.

The kernel workqueue only respects CPU isolation for WQ_UNBOUND. For bound workqueues the responsibility is on the caller, because the CPU is passed explicitly as a workqueue API parameter, e.g. mod_delayed_work_on(cpu), queue_delayed_work_on(cpu) and queue_work_on(cpu).

So avoid running blk-mq kworkers on isolated CPUs by removing isolated CPUs from hctx->cpumask. Meanwhile, use the queue map instead of hctx->cpumask to check whether all CPUs in a hw queue are offline; this avoids any cost in the fast IO code path and is safe since hctx->cpumask is only used in these two cases.

Cc: Tim Chen <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Andrew Theurer <[email protected]>
Cc: Joe Mario <[email protected]>
Cc: Sebastian Jug <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Bart Van Assche <[email protected]>
Cc: Tejun Heo <[email protected]>
Tested-by: Joe Mario <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
Reviewed-by: Ewan D. Milne <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
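A minimal, illustrative C sketch of the cpumask filtering described above, not the actual patch: it assumes the standard housekeeping_cpumask()/HK_TYPE_DOMAIN helpers that back `isolcpus=` domain isolation, and the function name blk_mq_filter_isolated_cpus() is hypothetical.

    #include <linux/cpumask.h>
    #include <linux/sched/isolation.h>

    /*
     * Illustrative only: restrict a hw queue's cpumask to housekeeping
     * (non-isolated) CPUs, so the bound blk-mq kworker is never queued
     * on a CPU isolated via isolcpus=.
     */
    static void blk_mq_filter_isolated_cpus(struct cpumask *hctx_mask)
    {
    	cpumask_and(hctx_mask, hctx_mask,
    		    housekeeping_cpumask(HK_TYPE_DOMAIN));
    }

In this sketch the filtering happens once when hctx->cpumask is built, so the fast IO path pays no extra cost; the offline check is then done against the queue map rather than the (now reduced) hctx->cpumask.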
Diffstat (limited to 'tools/perf/util/scripting-engines/trace-event-python.c')
0 files changed, 0 insertions, 0 deletions