Age | Commit message | Author | Files | Lines |
|
Was only used by the sysfs code; it can be reinstated if/when needed.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Ken Raeburn <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Expose control over dm-vdo's log level as a module parameter. It
can be read and written via /sys/module/dm_vdo/parameters/log_level.
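For illustration, a minimal sketch of how such a parameter could be
declared, assuming a plain module_param (dm-vdo's actual handling may
use custom parameter ops; the variable name is illustrative):

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Illustrative variable; dm-vdo's internal symbol may differ. */
static unsigned int log_level = 6;

/* Mode 0644 exposes the value read/write at
 * /sys/module/dm_vdo/parameters/log_level. */
module_param(log_level, uint, 0644);
MODULE_PARM_DESC(log_level, "dm-vdo log level");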
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Ken Raeburn <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Also update the target's major version number.
All of this information is (or will be) accessible through alternative
interfaces (e.g. "dmsetup message", module params, etc.).
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Ken Raeburn <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Most should be VDO_SUCCESS, but comparing the return from
kstrtouint() with UDS_SUCCESS (which happens to be 0) makes no sense.
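As a hedged sketch of the intended pattern (the helper name here is
hypothetical): kstrtouint() returns 0 on success or a negative errno,
so its result should be tested directly:

#include <linux/kernel.h>

/* Hypothetical table-argument parser illustrating the check. */
static int parse_thread_count(const char *arg, unsigned int *count)
{
        int result = kstrtouint(arg, 10, count);

        if (result)     /* negative errno; not UDS_SUCCESS/VDO_SUCCESS */
                return result;

        return 0;
}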
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Update indexer uses of ASSERT and ASSERT_LOG_ONLY to
VDO_ASSERT and VDO_ASSERT_LOG_ONLY, respectively. Remove
ASSERT and ASSERT_LOG_ONLY. Also rename uds_assertion_failed
to vdo_assertion_failed.
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Also rename ASSERT to VDO_ASSERT and ASSERT_LOG_ONLY to
VDO_ASSERT_LOG_ONLY.
But re-introduce ASSERT and ASSERT_LOG_ONLY as placeholders
for the benefit of dm-vdo/indexer.
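A hedged sketch of what such temporary placeholders might look like
(the in-tree definitions may differ):

/* Temporary aliases so dm-vdo/indexer keeps building until it is
 * converted to the VDO_-prefixed macros. */
#define ASSERT(expr, ...)               VDO_ASSERT(expr, __VA_ARGS__)
#define ASSERT_LOG_ONLY(expr, ...)      VDO_ASSERT_LOG_ONLY(expr, __VA_ARGS__)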
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Update all callers to check for VDO_SUCCESS.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Update all callers to check for VDO_SUCCESS (most already did).
Also fix whitespace for update_mapping() parameters.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
VDO_SUCCESS and UDS_SUCCESS were used interchangeably; update all
callers of VDO's memory-alloc functions to consistently check for
VDO_SUCCESS.
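A sketch of the standardized caller pattern, assuming the vdo_allocate()
macro from memory-alloc.h (the local names here are illustrative):

#include "memory-alloc.h"

static int allocate_name_buffer(size_t length, char **name_ptr)
{
        int result = vdo_allocate(length, char, "name buffer", name_ptr);

        if (result != VDO_SUCCESS)      /* never UDS_SUCCESS */
                return result;

        return VDO_SUCCESS;
}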
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Also define VDO_SUCCESS in a more central location, and
rename error block constants for clarity.
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
__vdo_do_allocation shouldn't be used outside of memory-alloc.h, so
add a hidden prefix.
Also, tabify the vdo_allocate_extended macro.
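For illustration only, the general shape of the pattern (the names here
are hypothetical, not the dm-vdo definitions): the double-underscore
helper is an implementation detail of the header, and the macro is the
only intended entry point:

#include <linux/slab.h>

static inline int __example_do_allocation(size_t count, size_t size,
                                          void **ptr)
{
        *ptr = kcalloc(count, size, GFP_KERNEL);
        return *ptr ? 0 : -ENOMEM;
}

#define example_allocate(COUNT, TYPE, PTR) \
        __example_do_allocation(COUNT, sizeof(TYPE), (void **) (PTR))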
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Signed-off-by: Bruce Johnston <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Update outdated comments referring to separate VDO and UDS
modules.
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
No functional modification involved.
Reported-by: Abaci Robot <[email protected]>
Signed-off-by: Jiapeng Chong <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
The goal is to assist high-level understanding of which code is
conceptually specific to VDO's indexer.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Ignore the scnprintf() return value since it is not needed. Change the
write_* functions' return type from int to void since they no longer
return any result. Also, clean up any code that checks or uses
scnprintf() return values.
Check the uds_allocate() return code, which was previously ignored;
return and log an error when uds_allocate() fails.
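A hedged sketch of the resulting shape of such a writer (the function
and field names are illustrative):

#include <linux/kernel.h>

/* Now returns void: no caller needs the formatted length. */
static void write_index_state(char *buf, size_t size, unsigned int state)
{
        /* scnprintf() cannot overflow buf; its return value is unused. */
        scnprintf(buf, size, "index state : %u\n", state);
}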
Reported-by: Dan Carpenter <[email protected]>
Signed-off-by: Chung Chung <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Reported when building on loongarch.
Reported-by: Randy Dunlap <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Bruce Johnston <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Must mutex_lock() after dm_bufio_read(), before the dm_bufio_read()
error handling; otherwise the process_entry() error path will return
without volume->read_threads_mutex held. This fixes a potential double
mutex_unlock.
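A simplified sketch of the corrected ordering (the types and names here
are illustrative; the real process_entry() is more involved):

#include <linux/dm-bufio.h>
#include <linux/err.h>
#include <linux/mutex.h>
#include <linux/types.h>

static int read_entry_page(struct dm_bufio_client *client, sector_t block,
                           struct mutex *read_threads_mutex,
                           struct dm_buffer **buffer_ptr)
{
        u8 *data = dm_bufio_read(client, block, buffer_ptr);

        /* Re-take the lock right after dm_bufio_read(), before its
         * error handling, so every return path leaves the mutex in
         * the locked state the caller expects to unlock. */
        mutex_lock(read_threads_mutex);
        if (IS_ERR(data))
                return PTR_ERR(data);   /* caller still does the unlock */

        /* ... process the page with read_threads_mutex held ... */
        return 0;
}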
Reported-by: Dan Carpenter <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Susan LeGendre-McGhee <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Otherwise, the error path could result in allocate_flush's subsequent
check for flush being non-NULL leading to a false positive.
Reported-by: Dan Carpenter <[email protected]>
Signed-off-by: Susan LeGendre-McGhee <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
This is a duplicate check so it can't be true. Delete it.
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: Susan LeGendre-McGhee <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Susan LeGendre-McGhee <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Signed-off-by: Susan LeGendre-McGhee <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.
This commit converts dm-verity from tasklet to BH workqueue. It
backfills the tasklet code that was removed with commit 0a9bab391e33
("dm-crypt, dm-verity: disable tasklets") and tweaks it to use a BH
workqueue (and does some renaming).
This is a minimal conversion which doesn't rename related identifiers,
including the "try_verify_in_tasklet" option. If this patch is applied,
a follow-up patch will be necessary; I couldn't decide whether the
option name should be updated too.
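As a rough illustration of the shape of the conversion (the identifiers
here are hypothetical, not dm-verity's actual ones): a work item queued
on the system BH workqueue runs in BH context much as the tasklet did:

#include <linux/container_of.h>
#include <linux/workqueue.h>

struct example_io {
        struct work_struct bh_work;     /* replaces the old tasklet */
};

/* Executed in BH (softirq) context because it is queued on system_bh_wq. */
static void example_verify_bh(struct work_struct *w)
{
        struct example_io *io = container_of(w, struct example_io, bh_work);

        /* ... try to verify without sleeping; fall back to the
         * regular workqueue if that is not possible ... */
        (void)io;
}

static void example_queue_verify(struct example_io *io)
{
        INIT_WORK(&io->bh_work, example_verify_bh);
        queue_work(system_bh_wq, &io->bh_work);
}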
Signed-off-by: Tejun Heo <[email protected]>
[snitzer: rename 'use_tasklet' to 'use_bh_wq' and 'in_tasklet' to 'in_bh']
Signed-off-by: Mike Snitzer <[email protected]>
|
|
The only generic interface to execute asynchronously in the BH context is
tasklet; however, it's marked deprecated and has some design flaws. To
replace tasklets, BH workqueue support was recently added. A BH workqueue
behaves similarly to regular workqueues except that the queued work items
are executed in the BH context.
This commit converts dm-crypt from tasklet to BH workqueue. It
backfills the tasklet code that was removed with commit 0a9bab391e33
("dm-crypt, dm-verity: disable tasklets") and tweaks it to use a BH
workqueue.
Like a regular workqueue, a BH workqueue allows freeing the currently
executing work item. Converting from tasklet to BH workqueue removes
the need for deferring bio_endio() again to a work item, which was
buggy anyway.
I tested this lightly with "--perf-no_read_workqueue
--perf-no_write_workqueue" + some code modifications, but would really
appreciate it if someone who knows the code base better could take a look.
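A hedged sketch of the property relied on above (the names are
illustrative, not dm-crypt's): unlike a tasklet, a work item queued on
a BH workqueue may be freed from within its own callback, so completion
no longer needs a second deferral:

#include <linux/container_of.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_completion {
        struct work_struct bh_work;
};

static void example_complete_bh(struct work_struct *w)
{
        struct example_completion *c =
                container_of(w, struct example_completion, bh_work);

        /* ... finish the I/O here, e.g. call bio_endio() ... */
        kfree(c);       /* legal: the workqueue no longer touches it */
}

static void example_queue_completion(struct example_completion *c)
{
        INIT_WORK(&c->bh_work, example_complete_bh);
        queue_work(system_bh_wq, &c->bh_work);
}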
Signed-off-by: Tejun Heo <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
[snitzer: rebase ontop of commit 0a9bab391e33 reduced this commit's changes]
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Use queue_limits_set(), which validates the limits and takes care of
updating the readahead settings, instead of directly assigning them to
the queue. For that, make sure all limits are actually updated before
the assignment.
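A minimal sketch of the call pattern, assuming the limits structure has
already been fully populated (the wrapper is illustrative):

#include <linux/blkdev.h>

static int example_apply_limits(struct request_queue *q,
                                struct queue_limits *limits)
{
        /* Every member of *limits must already be in its final state;
         * queue_limits_set() validates the whole set and updates the
         * readahead settings as part of applying it. */
        return queue_limits_set(q, limits);
}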
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Mike Snitzer <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
|
|
Also move vdo_init()'s call to vdo_initialize_thread_device_registry()
next to the other registry initialization.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Otherwise, the uds_ prefix is misleading (vdo_ is the new catch-all
prefix for code used by vdo only or by _both_ vdo and the indexer).
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Change thread function prefix from "uds_" to "vdo_" and fix
vdo_join_threads() to return void.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Just use mutex_init, mutex_lock and mutex_unlock.
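For illustration, the plain pattern the wrappers reduce to (the struct
and function names are illustrative):

#include <linux/mutex.h>

struct example_state {
        struct mutex lock;
};

static void example_init(struct example_state *state)
{
        mutex_init(&state->lock);
}

static void example_critical_section(struct example_state *state)
{
        mutex_lock(&state->lock);
        /* ... protected work ... */
        mutex_unlock(&state->lock);
}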
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Only used by indexer components. Also return void from
uds_init_cond(), remove uds_destroy_cond(), and fix up
all callers.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Further cleanup is needed for the thread-utils interfaces, given that
many functions should return void or be removed entirely because they
amount to obfuscation via wrappers.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Also remove unnecessary include from funnel-queue.c.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Rename 'barrier' to 'threads_barrier', remove the useless
uds_destroy_barrier(), return void from the remaining methods, and
clean up uds_make_sparse_cache() accordingly.
Also remove the uds_ prefix from the two remaining threads_barrier
functions.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
The sparse-cache is the only user of the 'barrier' data structure,
so just make it private to the sparse-cache.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
The implementation of the thread 'barrier' data structure does not
require overdone private semaphore wrappers. Also rename the barrier
structure's 'mutex' member (a semaphore) to 'lock'.
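A simplified sketch of such a barrier built directly on struct
semaphore (field and function names follow this description; the
in-tree code may differ):

#include <linux/semaphore.h>

struct threads_barrier {
        struct semaphore lock;          /* mutual exclusion (was 'mutex') */
        struct semaphore wait;          /* arriving threads sleep here */
        int arrived;
        int thread_count;
};

static void example_enter_barrier(struct threads_barrier *barrier)
{
        down(&barrier->lock);
        if (++barrier->arrived == barrier->thread_count) {
                /* Last thread in: release everyone else and reset. */
                while (--barrier->arrived > 0)
                        up(&barrier->wait);
                up(&barrier->lock);
                return;
        }

        up(&barrier->lock);
        down(&barrier->wait);
}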
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Only used for a log message, but there is no need for the "UDS_" prefix.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Susan LeGendre-McGhee <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
|
|
Use "==" instead of "=" in an ASSERT() statement in
start_restoring_volume_sub_index().
Fixes: ef074a31e88e ("dm vdo: implement the volume index")
Signed-off-by: Harshit Mogalapalli <[email protected]>
Signed-off-by: Susan LeGendre-McGhee <[email protected]>
Signed-off-by: Matthew Sakai <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
The way that the best rdev is chosen:
1) If the read is sequential from one rdev:
 - if rdev is rotational, use this rdev;
 - if rdev is non-rotational, use this rdev until the total read length
 exceeds the disk's optimal I/O size;
2) If the read is not sequential:
 - if there is an idle disk, use it; otherwise:
 - if the array has a non-rotational disk, choose the rdev with minimal
 inflight IO;
 - if all the underlying disks are rotational, choose the rdev with the
 closest IO position;
There are no functional changes; this just makes the code cleaner and
prepares for the following refactoring. A simplified sketch of the
selection policy follows.
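All types and names in this sketch are hypothetical; the real
read_balance() operates on struct r1conf and struct md_rdev and handles
many more conditions (bad blocks, write-mostly devices, resync, etc.):

#include <linux/types.h>

struct example_rdev {
        bool rotational;
        bool idle;                      /* no inflight IO */
        unsigned int inflight;
        sector_t next_seq_sector;       /* sector following the last read */
        sector_t seq_read_sectors;      /* length of the current streak */
        sector_t opt_io_sectors;
};

static sector_t example_distance(sector_t a, sector_t b)
{
        return a > b ? a - b : b - a;
}

static int example_choose_rdev(struct example_rdev *rdevs, int nr_rdevs,
                               sector_t sector, bool has_nonrot)
{
        int i, best = 0;

        /* 1) Sequential read from one rdev. */
        for (i = 0; i < nr_rdevs; i++) {
                struct example_rdev *rdev = &rdevs[i];

                if (rdev->next_seq_sector != sector)
                        continue;
                if (rdev->rotational ||
                    rdev->seq_read_sectors < rdev->opt_io_sectors)
                        return i;
        }

        /* 2) Not sequential: prefer an idle disk. */
        for (i = 0; i < nr_rdevs; i++)
                if (rdevs[i].idle)
                        return i;

        /* Otherwise: least inflight IO if any disk is non-rotational,
         * closest IO position if all disks are rotational. */
        for (i = 1; i < nr_rdevs; i++) {
                if (has_nonrot) {
                        if (rdevs[i].inflight < rdevs[best].inflight)
                                best = i;
                        continue;
                }
                if (example_distance(rdevs[i].next_seq_sector, sector) <
                    example_distance(rdevs[best].next_seq_sector, sector))
                        best = i;
        }

        return best;
}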
Co-developed-by: Paul Luse <[email protected]>
Signed-off-by: Paul Luse <[email protected]>
Signed-off-by: Yu Kuai <[email protected]>
Reviewed-by: Xiao Ni <[email protected]>
Signed-off-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
There is no functional change for now; this makes read_balance()
cleaner and prepares to fix problems and refactor the handling of
sequential IO.
Co-developed-by: Paul Luse <[email protected]>
Signed-off-by: Paul Luse <[email protected]>
Signed-off-by: Yu Kuai <[email protected]>
Reviewed-by: Xiao Ni <[email protected]>
Signed-off-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
read_balance() is hard to understand because there are too many states
and branches, and it is overlong.
This patch factors out, from read_balance(), the case of reading from
an rdev with bad blocks; there are no functional changes.
Co-developed-by: Paul Luse <[email protected]>
Signed-off-by: Paul Luse <[email protected]>
Signed-off-by: Yu Kuai <[email protected]>
Reviewed-by: Xiao Ni <[email protected]>
Signed-off-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|