author     Ming Lei <[email protected]>      2019-04-30 09:52:24 +0800
committer  Jens Axboe <[email protected]>      2019-05-04 07:24:04 -0600
commit     fbc2a15e3433058582e5635aabe48a3011a644a8
tree       0d65a92b4719bfb308bc0b52744296c089a906ef
parent     e87eb301bee183d82bb3d04bd71b6660889a2588
blk-mq: move cancel of requeue_work into blk_mq_release
As long as the driver holds a reference on the queue's kobject, it is
safe for it to schedule a requeue. However, blk_mq_kick_requeue_list()
may be called after blk_sync_queue() has completed because of
concurrent requeue activity, so the requeue work may still be pending
when the queue is freed, which triggers a kernel oops.

Move the cancel of requeue_work into blk_mq_release() to avoid this
race between requeue and freeing the queue.
Cc: Dongli Zhang <[email protected]>
Cc: James Smart <[email protected]>
Cc: Bart Van Assche <[email protected]>
Cc: [email protected]
Cc: Martin K. Petersen <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Reviewed-by: Bart Van Assche <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Tested-by: James Smart <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
 block/blk-core.c | 1 -
 block/blk-mq.c   | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index b044829135c9..2af1040b2fa6 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -238,7 +238,6 @@ void blk_sync_queue(struct request_queue *q)
 		struct blk_mq_hw_ctx *hctx;
 		int i;
 
-		cancel_delayed_work_sync(&q->requeue_work);
 		queue_for_each_hw_ctx(q, hctx, i)
 			cancel_delayed_work_sync(&hctx->run_work);
 	}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c9bf9b92d2db..741cf8d55e9c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2635,6 +2635,8 @@ void blk_mq_release(struct request_queue *q)
 	struct blk_mq_hw_ctx *hctx;
 	unsigned int i;
 
+	cancel_delayed_work_sync(&q->requeue_work);
+
 	/* hctx kobj stays in hctx */
 	queue_for_each_hw_ctx(q, hctx, i) {
 		if (!hctx)
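
For context, an abbreviated sketch of the paths that race, as they look with this
patch applied. This is illustrative only: the body of blk_mq_kick_requeue_list(),
the queue_is_mq() structure of blk_sync_queue(), and the tail of blk_mq_release()
are recalled from the v5.1-era sources rather than taken from the hunks above, so
treat them as assumptions, not as part of the patch.

    #include <linux/blkdev.h>
    #include <linux/blk-mq.h>
    #include <linux/workqueue.h>

    /*
     * Requeue side: a driver that still holds a reference on the queue's
     * kobject may call this at any time, including after blk_sync_queue()
     * has already returned on the teardown side.
     */
    void blk_mq_kick_requeue_list(struct request_queue *q)
    {
    	/* (Re)arms q->requeue_work on the kblockd workqueue. */
    	kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND, &q->requeue_work, 0);
    }

    /*
     * Sync side, with this patch applied: requeue_work is no longer
     * cancelled here, because a concurrent kick could re-arm it right
     * after the cancel and leave it pending across queue teardown.
     */
    void blk_sync_queue(struct request_queue *q)
    {
    	del_timer_sync(&q->timeout);
    	cancel_work_sync(&q->timeout_work);

    	if (queue_is_mq(q)) {
    		struct blk_mq_hw_ctx *hctx;
    		int i;

    		queue_for_each_hw_ctx(q, hctx, i)
    			cancel_delayed_work_sync(&hctx->run_work);
    	}
    }

    /*
     * Release side, with this patch applied: blk_mq_release() runs from
     * the queue kobject's release path, i.e. only after the last
     * reference has been dropped, so no further blk_mq_kick_requeue_list()
     * call can re-arm requeue_work once it is cancelled here.
     */
    void blk_mq_release(struct request_queue *q)
    {
    	struct blk_mq_hw_ctx *hctx;
    	unsigned int i;

    	cancel_delayed_work_sync(&q->requeue_work);

    	/* hctx kobj stays in hctx */
    	queue_for_each_hw_ctx(q, hctx, i) {
    		if (!hctx)
    			continue;
    		kobject_put(&hctx->kobj);
    	}
    	/* ... remainder of the release path elided ... */
    }

The design point is that cancel_delayed_work_sync() only helps if nothing can
re-arm the work afterwards; tying the cancel to the point where the last kobject
reference is gone provides exactly that guarantee.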