2023-04-17  btrfs: calculate correct amount of space for delayed reference when evicting  (Filipe Manana, 1 file changed, -1/+1)
When evicting an inode, we are incorrectly calculating the amount of space required for a single delayed reference when the free space tree is enabled: we have to multiply the result of btrfs_calc_insert_metadata_size() by 2, matching the size update and space release logic of the delayed block reserve at btrfs_update_delayed_refs_rsv() and btrfs_delayed_refs_rsv_release(). Fix this by using the btrfs_calc_delayed_ref_bytes() helper at evict_refill_and_join() instead of btrfs_calc_insert_metadata_size().

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: add helper to calculate space for delayed references  (Filipe Manana, 3 files changed, -49/+26)
Instead of duplicating the logic for calculating how much space is required for a given number of delayed references, add an inline helper to encapsulate that logic and use it everywhere we are calculating the space required.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
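For illustration, a minimal standalone sketch of the shape such a helper can take (fs_info_sketch and the insert-cost model are made-up stand-ins, not the kernel's actual types or numbers):

#include <stdbool.h>
#include <stdint.h>

struct fs_info_sketch {
	uint32_t nodesize;	/* metadata node size */
	bool free_space_tree;	/* whether the free space tree is enabled */
};

/* Stand-in for btrfs_calc_insert_metadata_size(); the cost model is made up. */
static uint64_t calc_insert_metadata_size(const struct fs_info_sketch *fs_info,
					  unsigned int num_items)
{
	return (uint64_t)fs_info->nodesize * 2 * num_items;
}

/* The logic the helper centralizes: double the cost when the FST is in use. */
static uint64_t calc_delayed_ref_bytes(const struct fs_info_sketch *fs_info,
				       unsigned int num_delayed_refs)
{
	uint64_t num_bytes = calc_insert_metadata_size(fs_info, num_delayed_refs);

	if (fs_info->free_space_tree)
		num_bytes *= 2;	/* the free space tree must be updated too */
	return num_bytes;
}

With one helper holding the doubling rule, all the call sites mentioned in the entries above and below stay in sync by construction.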
2023-04-17  btrfs: constify fs_info argument for the reclaim items calculation helpers  (Filipe Manana, 1 file changed, -2/+2)
Now that btrfs_calc_insert_metadata_size() can take a const fs_info argument, make the fs_info argument of calc_reclaim_items_nr() and of calc_delayed_refs_nr() const as well.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: constify fs_info argument of the metadata size calculation helpers  (Filipe Manana, 1 file changed, -2/+2)
The fs_info argument of the helpers btrfs_calc_insert_metadata_size() and btrfs_calc_metadata_size() is not modified, so it can be const. This will also allow a new helper function in one of the next patches to have its fs_info argument as const.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
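A small sketch of why constness has to be propagated bottom-up through the call chain (fs_sketch and both function names are illustrative, not the kernel's):

#include <stdint.h>

struct fs_sketch {
	uint32_t nodesize;
};

/* A helper taking a const pointer promises not to modify the fs info. */
static uint64_t calc_metadata_size(const struct fs_sketch *fs,
				   unsigned int num_items)
{
	return (uint64_t)fs->nodesize * num_items;
}

/*
 * This new helper may only declare its argument const because the helper
 * it calls already accepts a const pointer; had calc_metadata_size()
 * taken a non-const pointer, this would not compile without a cast.
 */
static uint64_t calc_one_item(const struct fs_sketch *fs)
{
	return calc_metadata_size(fs, 1);
}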
2023-04-17  btrfs: accurately calculate number of delayed refs when flushing  (Filipe Manana, 1 file changed, -1/+25)
When flushing a limited number of delayed references (FLUSH_DELAYED_REFS_NR state), we assume each delayed reference is holding a number of bytes matching the space needed for inserting a single metadata item (the result of btrfs_calc_insert_metadata_size()). That is not correct when using the free space tree, as in that case we have to multiply that value by 2, since we need to touch the free space tree as well. This is the same computation we do at btrfs_update_delayed_refs_rsv() and at btrfs_delayed_refs_rsv_release(). So correct the computation for the amount of delayed references we need to flush in case we have the free space tree. This does not fix a functional issue; instead it makes the flush code flush fewer delayed references, only the minimum necessary to satisfy a ticket.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: calculate the right space for a single delayed ref when refilling  (Filipe Manana, 1 file changed, -0/+11)
When refilling the delayed block reserve we are incorrectly computing the amount of bytes for a single delayed reference if the free space tree is being used. In that case we should double the calculated amount. Everywhere else we compute the correct amount, like when updating the delayed block reserve, at btrfs_update_delayed_refs_rsv(), or when releasing space from the delayed block reserve, at btrfs_delayed_refs_rsv_release(). So fix btrfs_delayed_refs_rsv_refill() to multiply the amount of bytes for a single delayed reference by two in case the free space tree is used.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: don't throttle on delayed items when evicting deleted inode  (Filipe Manana, 1 file changed, -1/+6)
During inode eviction, if we are truncating a deleted inode, we don't add delayed items for our inode, so there's no need to throttle on delayed items on each iteration of the loop that truncates inode items from its subvolume tree. But we dirty extent buffers from its subvolume tree, so we only need to throttle on btree inode dirty pages. So use btrfs_btree_balance_dirty_nodelay() in the loop that truncates inode items.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: remove obsolete delayed ref throttling logic when truncating items  (Filipe Manana, 6 files changed, -51/+7)
We have this logic encapsulated in btrfs_should_throttle_delayed_refs(), where we try to estimate if running the current amount of delayed references will take more than half a second, and if so, the caller of btrfs_should_throttle_delayed_refs() should do something to prevent more and more delayed refs from being accumulated. This logic was added in commit 0a2b2a844af6 ("Btrfs: throttle delayed refs better") and then further refined in commit a79b7d4b3e81 ("Btrfs: async delayed refs"). The idea back then was that the caller of btrfs_should_throttle_delayed_refs() would release its transaction handle (by calling btrfs_end_transaction()) when that function returned true, then btrfs_end_transaction() would trigger an async job to run delayed references in a workqueue, and later start/join a transaction again and do more work.

However we don't run delayed references asynchronously anymore; that was removed in commit db2462a6ad3d ("btrfs: don't run delayed refs in the end transaction logic"). That makes the logic at btrfs_should_throttle_delayed_refs(), which tries to estimate how long we will take to run our current delayed references, pointless, as we don't take any action to run delayed references anymore. We do have other types of throttling, which consist of checking the size and reserved space of the delayed and global block reserves, as well as whether flushing delayed references for the current transaction was already started, etc. This is all done by btrfs_should_end_transaction(), and the only user of btrfs_should_throttle_delayed_refs() does periodically call btrfs_should_end_transaction().

So remove btrfs_should_throttle_delayed_refs() and the infrastructure that keeps track of the average time used for running delayed references, and adapt btrfs_truncate_inode_items() to call btrfs_check_space_for_delayed_refs() instead.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: simplify variables in btrfs_block_rsv_refill()  (Filipe Manana, 2 files changed, -4/+2)
At btrfs_block_rsv_refill(), there's no point in initializing the 'num_bytes' variable to 0 and then, after taking the block reserve's spinlock, initializing it to the value of the 'min_reserved' parameter. So just get rid of the 'num_bytes' local variable and rename the 'min_reserved' parameter to 'num_bytes'.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: remove redundant counter check at btrfs_truncate_inode_items()  (Filipe Manana, 1 file changed, -2/+1)
At btrfs_truncate_inode_items(), in the while loop when we decide that we are going to delete an item, it's pointless to check that 'pending_del_nr' is non-zero in an else clause, because the corresponding if statement is checking if 'pending_del_nr' has a value of zero. So just remove that condition from the else clause.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
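A hedged sketch of the shape of that redundancy (queue_deletion and batch_extends are hypothetical stand-ins for the real truncation-loop condition):

#include <stdbool.h>

static void queue_deletion(unsigned int pending_del_nr, bool batch_extends)
{
	if (pending_del_nr == 0) {
		/* start a new deletion batch */
	} else if (batch_extends) {
		/*
		 * Before the patch this branch tested
		 * "pending_del_nr && batch_extends", but pending_del_nr
		 * is already known to be non-zero in any else branch.
		 */
	}
}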
2023-04-17  btrfs: count extents before taking inode's spinlock when reserving metadata  (Filipe Manana, 1 file changed, -1/+1)
When reserving metadata space for delalloc (and direct IO too), at btrfs_delalloc_reserve_metadata(), there's no need to count the number of extents while holding the inode's spinlock, since that does not require access to any field of the inode. This section of code can be called concurrently, when we have direct IO writes against different file ranges that don't increase the inode's i_size, so it's beneficial to shorten the critical section by counting the number of extents before taking the inode's spinlock.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
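A minimal userspace sketch of the pattern, assuming a 128M max extent size as a placeholder (the mutex stands in for the inode's spinlock; none of these names are the kernel's):

#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t outstanding_extents;

/* Pure computation: needs no inode state, so it can run unlocked. */
static uint64_t count_max_extents(uint64_t num_bytes, uint64_t max_extent_size)
{
	return (num_bytes + max_extent_size - 1) / max_extent_size;
}

static void reserve_metadata(uint64_t num_bytes)
{
	/* Count extents before taking the lock to shorten the critical section. */
	uint64_t nr_extents = count_max_extents(num_bytes, 128ULL << 20);

	pthread_mutex_lock(&inode_lock);
	outstanding_extents += nr_extents;	/* only the update is protected */
	pthread_mutex_unlock(&inode_lock);
}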
2023-04-17  btrfs: remove bytes_used argument from btrfs_make_block_group()  (Filipe Manana, 3 files changed, -7/+4)
The only caller of btrfs_make_block_group() always passes 0 as the value for the bytes_used argument, so remove it.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: collapse should_end_transaction() into btrfs_should_end_transaction()  (Filipe Manana, 1 file changed, -11/+4)
The function should_end_transaction() is very short and only has one caller, which is btrfs_should_end_transaction(). So move the code from should_end_transaction() into btrfs_should_end_transaction().

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: simplify btrfs_should_throttle_delayed_refs()  (Filipe Manana, 2 files changed, -5/+3)
Currently btrfs_should_throttle_delayed_refs() returns 1 or 2 in case the delayed refs should be throttled; however, the only caller (the inode eviction and truncation path) does not care about those two different conditions and treats the return value as a boolean. This allows us to remove one of the conditions in btrfs_should_throttle_delayed_refs() and change its return value from 'int' to 'bool'. So just do that.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: initialize ret to -ENOSPC at __reserve_bytes()  (Filipe Manana, 1 file changed, -2/+1)
At space-info.c:__reserve_bytes(), instead of initializing 'ret' to 0 when it's declared and then shortly after setting it to -ENOSPC under the space info's spinlock, initialize it to -ENOSPC when declaring it.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: update flush method assertion when reserving space  (Filipe Manana, 1 file changed, -1/+12)
When reserving space, at space-info.c:__reserve_bytes(), we assert that either the current task is not holding a transaction handle, or, if it is, that the flush method is not BTRFS_RESERVE_FLUSH_ALL. This is because that flush method can trigger transaction commits, and therefore could lead to a deadlock. However there are two other flush methods that can trigger transaction commits:

1) BTRFS_RESERVE_FLUSH_ALL_STEAL
2) BTRFS_RESERVE_FLUSH_EVICT

So update the assertion to check that the flush method is also not one of those two methods if the current task is holding a transaction handle.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: update documentation for BTRFS_RESERVE_FLUSH_EVICT flush method  (Filipe Manana, 1 file changed, -0/+1)
The BTRFS_RESERVE_FLUSH_EVICT flush method can also commit transactions, see the definition of the evict_flush_states const array at space-info.c, but the documentation for it at space-info.h does not mention it. So update the documentation.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: remove check for NULL block reserve at btrfs_block_rsv_check()  (Filipe Manana, 1 file changed, -3/+0)
The block reserve passed to btrfs_block_rsv_check() is never NULL, so remove the check. If it ever becomes NULL in the future, we'll get a pretty obvious and clear NULL pointer dereference crash and stack trace.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: pass a bool size update argument to btrfs_block_rsv_add_bytes()  (Filipe Manana, 1 file changed, -1/+1)
At btrfs_delayed_refs_rsv_refill(), we are passing a value of 0 to the 'update_size' argument of btrfs_block_rsv_add_bytes(), which is defined as a boolean. Functionally this is fine because a 0 is implicitly converted to a boolean false value. However an explicit 'false' is easier to read, so just pass 'false' instead of 0.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: pass a bool to btrfs_block_rsv_migrate() at evict_refill_and_join()  (Filipe Manana, 1 file changed, -1/+1)
The last argument of btrfs_block_rsv_migrate() is a boolean, but we are passing an integer, with a value of 1, to it at evict_refill_and_join(). While this is not a bug, due to type conversion, it's a lot clearer to simply pass the boolean true value instead. So just do that.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: remove btrfs_lru_cache_is_full() inline function  (Filipe Manana, 1 file changed, -5/+0)
It's not used anywhere at the moment: it was used by an earlier version of a patch, but that use was dropped in the patch's second version. So just remove btrfs_lru_cache_is_full().

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: simplify adding pages in btrfs_add_compressed_bio_pages  (Christoph Hellwig, 1 file changed, -27/+7)
btrfs_add_compressed_bio_pages is needlessly complicated. Instead of iterating over the logical disk offset just to add pages to the bio, use a simple offset starting at 0, which also removes most of the clamping. Additionally, __bio_add_page already takes care of asserting that the bio is always properly sized, and btrfs_submit_bio, called right after, asserts that the bio size is non-zero.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: move the bi_sector assignment out of btrfs_add_compressed_bio_pages  (Christoph Hellwig, 1 file changed, -7/+6)
Adding pages to a bio has nothing to do with the sector. Move the assignment to the two callers in preparation for cleaning up btrfs_add_compressed_bio_pages.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: sysfs: relax bg_reclaim_threshold for debugging purposes  (Naohiro Aota, 1 file changed, -0/+5)
Currently, /sys/fs/btrfs/<UUID>/bg_reclaim_threshold is limited to 0 (disable) or [50 .. 100]%, so we need to fill 50% of a device to start the auto reclaim process. It is cumbersome to do so when we want to shake out possible race issues of normal write vs reclaim. Relax the threshold check under the BTRFS_DEBUG option.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Naohiro Aota <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
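A sketch of what such a relaxed bound check can look like (validate_bg_reclaim_threshold is an illustrative name, not the kernel function; only the bounds come from the entry above):

#include <errno.h>

static int validate_bg_reclaim_threshold(long val)
{
#ifdef CONFIG_BTRFS_DEBUG
	if (val < 0 || val > 100)	/* debug builds: any percentage */
		return -EINVAL;
#else
	if (val != 0 && (val < 50 || val > 100))	/* normal: 0 or [50..100] */
		return -EINVAL;
#endif
	return 0;
}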
2023-04-17  btrfs: make btrfs_split_bio work on struct btrfs_bio  (Christoph Hellwig, 1 file changed, -13/+14)
btrfs_split_bio expects a btrfs_bio as argument and always allocates one. Type both the orig_bio argument and the return value as struct btrfs_bio to improve type safety.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: return a btrfs_bio from btrfs_bio_alloc  (Christoph Hellwig, 4 files changed, -26/+28)
Return the containing struct btrfs_bio instead of the less type safe struct bio from btrfs_bio_alloc.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
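The type-safety win comes from the embedding pattern, sketched here in standalone form (btrfs_bio_sketch and the stub struct bio are illustrative; in the real layout the embedded bio must be the struct's last member because struct bio ends in a flexible array):

#include <stddef.h>

struct bio {	/* stand-in for the block layer's struct bio */
	unsigned int bi_size;
};

struct btrfs_bio_sketch {
	unsigned long file_offset;	/* ... btrfs-private fields ... */
	struct bio bio;			/* embedded generic bio, last member */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Recover the container; returning the container type from allocators
 * makes passing a plain bio where a btrfs_bio is required a type error. */
static struct btrfs_bio_sketch *btrfs_bio_from_bio(struct bio *bio)
{
	return container_of(bio, struct btrfs_bio_sketch, bio);
}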
2023-04-17  btrfs: store a pointer to a btrfs_bio in struct btrfs_bio_ctrl  (Christoph Hellwig, 1 file changed, -24/+24)
The bio in struct btrfs_bio_ctrl must be a btrfs_bio, so store a pointer to the btrfs_bio for better type checking.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: simplify finding the inode in submit_one_bio  (Christoph Hellwig, 1 file changed, -11/+4)
struct btrfs_bio now has an always valid inode pointer that can be used to find the inode in submit_one_bio, so use that and initialize all variables for which it is possible at declaration time.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: store a pointer to the original btrfs_bio in struct compressed_bio  (Christoph Hellwig, 2 files changed, -8/+9)
The original bio must be a btrfs_bio, so store a pointer to the btrfs_bio for better type checking.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: pass a btrfs_bio to btrfs_submit_compressed_read  (Christoph Hellwig, 3 files changed, -10/+10)
btrfs_submit_compressed_read expects the bio passed to it to be embedded into a btrfs_bio structure. Pass the btrfs_bio directly to increase type safety and make the code self-documenting.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: pass a btrfs_bio to btrfs_submit_bio  (Christoph Hellwig, 5 files changed, -14/+14)
btrfs_submit_bio expects the bio passed to it to be embedded into a btrfs_bio structure. Pass the btrfs_bio directly to increase type safety and make the code self-documenting.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: move zero filling of compressed read bios into common code  (Christoph Hellwig, 4 files changed, -12/+7)
All algorithms have to fill the remainder of the orig_bio with zeroes, so do it in common code.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: cleanup main loop in btrfs_encoded_read_regular_fill_pages  (Christoph Hellwig, 1 file changed, -28/+23)
btrfs_encoded_read_regular_fill_pages has a pretty odd control flow. Unwind it so that there is a single loop over the pages array.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: remove unused members from struct btrfs_encoded_read_private  (Christoph Hellwig, 1 file changed, -4/+0)
The inode and file_offset members in struct btrfs_encoded_read_private are unused, so remove them. Last used in commit 7959bd441176 ("btrfs: remove the start argument to check_data_csum and export") and commit 7609afac6775 ("btrfs: handle checksum validation and repair at the storage layer").

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: locking: use atomic for DREW lock writers  (David Sterba, 3 files changed, -33/+9)
The DREW lock uses a percpu variable to track the lock counters, and for that it needs to allocate the structure. In btrfs_read_tree_root() or btrfs_init_fs_root() this may add another error case or require the NOFS scope protection. One way is to preallocate the structure, as was suggested in https://lore.kernel.org/linux-btrfs/[email protected]/

We can avoid the allocation altogether if we don't use percpu variables but an atomic for the writer counter. This should not make any difference; the DREW lock is used for truncate and NOCOW writes along with other IO operations. The percpu counter for writers has been there since the original commit 8257b2dc3c1a1057 "Btrfs: introduce btrfs_{start, end}_nocow_write() for each subvolume". The reason could have been to avoid hammering the same cacheline from all the readers, but then the writers do that anyway.

Signed-off-by: David Sterba <[email protected]>
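A minimal sketch of a DREW-style (double reader-writer exclusion) write lock using one shared atomic writer counter; it omits the wait queues and memory-ordering details of the real lock, and all names are illustrative:

#include <stdatomic.h>
#include <stdbool.h>

struct drew_lock_sketch {
	atomic_int readers;
	atomic_int writers;	/* one shared atomic, no percpu allocation */
};

static bool try_write_lock(struct drew_lock_sketch *lock)
{
	atomic_fetch_add(&lock->writers, 1);	/* announce writer intent */
	if (atomic_load(&lock->readers) > 0) {	/* readers active: back off */
		atomic_fetch_sub(&lock->writers, 1);
		return false;
	}
	return true;
}

static void write_unlock(struct drew_lock_sketch *lock)
{
	atomic_fetch_sub(&lock->writers, 1);
}

Since a plain atomic needs no allocation, the setup paths mentioned above lose an error case and the NOFS concern entirely.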
2023-04-17  btrfs: remove redundant clearing of NODISCARD  (Anand Jain, 1 file changed, -1/+0)
If no discard mount option is specified (including the NODISCARD option), we make async discard the default, so there is no need to call clear_opt() again to clear the NODISCARD flag. Though this makes no functional difference, the redundant call has been pointed out several times, so we better remove it.

Signed-off-by: Anand Jain <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: avoid repetitive define BTRFS_FEATURE_INCOMPAT_SUPP  (Anand Jain, 1 file changed, -20/+15)
BTRFS_FEATURE_INCOMPAT_SUPP is defined twice, once under CONFIG_BTRFS_DEBUG and once without it, resulting in repetitive code. The reason for this is to add experimental features under CONFIG_BTRFS_DEBUG. To avoid repetitive code, add a common list BTRFS_FEATURE_INCOMPAT_SUPP_STABLE, and append experimental features only under CONFIG_BTRFS_DEBUG.

Signed-off-by: Anand Jain <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
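The deduplication has this shape (all FEAT_* bits are placeholders, not the real btrfs feature flags):

/* Placeholder feature bits. */
#define FEAT_A			(1ULL << 0)
#define FEAT_B			(1ULL << 1)
#define FEAT_EXPERIMENTAL	(1ULL << 2)

/* The common stable list, defined exactly once. */
#define FEATURE_INCOMPAT_SUPP_STABLE	(FEAT_A | FEAT_B)

#ifdef CONFIG_BTRFS_DEBUG
/* Debug builds append experimental features to the stable list. */
#define FEATURE_INCOMPAT_SUPP \
	(FEATURE_INCOMPAT_SUPP_STABLE | FEAT_EXPERIMENTAL)
#else
#define FEATURE_INCOMPAT_SUPP	FEATURE_INCOMPAT_SUPP_STABLE
#endif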
2023-04-17  btrfs: scrub: remove root and csum_root arguments from scrub_simple_mirror()  (Qu Wenruo, 1 file changed, -19/+9)
We don't need to pass the roots as arguments; reading them from the rb-tree is cheap, so there is really not much need to pre-fetch them and pass them all the way from scrub_stripe(). And we already have more than enough arguments in scrub_simple_mirror() and scrub_simple_stripe(); it's better to remove them and only grab those roots in scrub_simple_mirror().

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: scrub: remove unused path inside scrub_stripe()  (Qu Wenruo, 1 file changed, -15/+0)
The variable @path is no longer passed into any call sites after commit 18d30ab96149 ("btrfs: scrub: use scrub_simple_mirror() to handle RAID56 data stripe scrub"), thus we can remove the variable completely.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: do not use replace target device as an extra mirror  (Qu Wenruo, 1 file changed, -120/+7)
[BUG]
Currently btrfs can use the dev-replace target device as an extra mirror for read-repair, but this can lead to NODATASUM corruption in the following case. There is a RAID1 data chunk, and dev-replace is running from dev2 to dev0 (|//| = replaced data):

        X         X+1MB     X+2MB
Dev 2:  |         |         |        <- Source dev
Dev 0:  |/////////|         |        <- Target dev

Then a read on dev 2 at X+2MB happens, and something goes wrong inside devid 2, causing an -EIO. In that case, read-repair tries the next mirror, and since we can use the target device as an extra mirror, we use that mirror instead. But unfortunately, since the read is beyond the current replace cursor, we should not trust it at all; what we get is just uninitialized garbage. If this read is for a NODATASUM range, we trust the data anyway and cause data corruption.

[CAUSE]
We used to have checks to make sure we only return such an extra mirror when the range is before our left cursor. The first commit introducing this behavior is ad6d620e2a57 ("Btrfs: allow repair code to include target disk when searching mirrors"). But a later fix, 22ab04e814f4 ("Btrfs: fix race between device replace and chunk allocation"), changed the behavior to always let btrfs_map_block() include the extra mirror, to address a race in dev-replace which could cause missing writes to the target device. This means we lose the cursor tracking for the extra mirror, which can lead to the above corruption.

[FIX]
The extra mirror is never a reliable one: at the beginning of dev-replace its reliability is zero, and only at the end of the replace is it a fully reliable mirror. We either do the complex tracking, or never trust it. IMHO it's much easier to maintain if we don't trust it at all, and the extra mirror can only be a benefit for a limited period of time (during the replace). Thus this patch completely removes the ability to use the target device as an extra mirror.

Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: open_ctree() error handling cleanup  (Qu Wenruo, 1 file changed, -34/+31)
Currently open_ctree() still uses two variables for error handling, err and ret. This can be confusing, can miss some errors, and does not conform to the current coding style. Fix the problems by:

- Using only ret for error handling.

- Adding proper ret assignments. Originally we relied on the default value (-EINVAL) of err to handle errors, but that doesn't really reflect the error. Use the correct error number for the following call sites:
  * subpage_info allocation
  * btrfs_free_extra_devids()
  * btrfs_check_rw_degradable()
  * cleaner_kthread allocation
  * transaction_kthread allocation

- Adding an extra ASSERT() to make sure we error out instead of returning 0.

Reviewed-by: Anand Jain <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
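A standalone sketch of the single-variable pattern the cleanup moves toward (open_tree_sketch and its one setup step are hypothetical):

#include <errno.h>
#include <stdlib.h>

static int open_tree_sketch(void)
{
	void *subpage_info = NULL;
	int ret;

	subpage_info = malloc(64);
	if (!subpage_info) {
		ret = -ENOMEM;	/* set the real error, don't rely on a default */
		goto out;
	}

	/* ... more setup, each failure path setting ret before jumping ... */
	ret = 0;
out:
	free(subpage_info);	/* a single cleanup-and-return path */
	return ret;
}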
2023-04-17  btrfs: cleanup the main loop in btrfs_lookup_bio_sums  (Christoph Hellwig, 1 file changed, -24/+9)
Introduce a bio_offset variable for the current offset into the bio instead of recalculating it over and over. Remove the search_len and sector_offset variables, which are now used only once, and reduce the scope of count and cur_disk_bytenr.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: remove search_file_offset_in_bio  (Christoph Hellwig, 1 file changed, -49/+3)
There is no need to search for a file offset in a bio; it is now always provided in bbio->file_offset (set at bio allocation time since 0d495430db8d ("btrfs: set bbio->file_offset in alloc_new_bio")). Just use that with the offset into the bio.

Reviewed-by: Anand Jain <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: sink calc_bio_boundaries into its only caller  (Johannes Thumshirn, 1 file changed, -22/+15)
Nowadays calc_bio_boundaries() is a relatively simple function that only guarantees the "one bio equals one ordered extent" rule for uncompressed Zone Append bios. Sink it into its only caller, alloc_new_bio().

Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Johannes Thumshirn <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: simplify main loop in submit_extent_page  (Christoph Hellwig, 1 file changed, -86/+30)
bio_add_page always adds either the entire range passed to it or nothing. Based on that, btrfs_bio_add_page can only return a length smaller than the passed in one when hitting the ordered extent limit, which can only happen for writes. Given that compressed writes never even use this code path, this means that all the special cases for compressed extent offset handling are dead code. Reflow submit_extent_page to take advantage of this by inlining btrfs_bio_add_page and handling the ordered extent limit by decrementing it for each added range, thus significantly simplifying the loop.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: check for contiguity in submit_extent_page  (Christoph Hellwig, 1 file changed, -33/+36)
Different loop iterations in btrfs_bio_add_page not only have the same contiguity parameters, but any non-initial operation also operates on a fresh bio anyway. Factor out the contiguity check into a new btrfs_bio_is_contig and only call it once in submit_extent_page, before descending into the bio_add_page loop.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
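A sketch of the hoisted check (bio_ctrl_sketch and its fields are illustrative, not the kernel's struct btrfs_bio_ctrl):

#include <stdbool.h>
#include <stdint.h>

struct bio_ctrl_sketch {
	uint64_t end_disk_bytenr;	/* where the current bio ends */
	bool has_bio;			/* is a bio currently in flight? */
};

/* One check before the add-page loop replaces per-iteration checks:
 * a new range is contiguous iff it starts where the current bio ends. */
static bool bio_is_contig(const struct bio_ctrl_sketch *ctrl,
			  uint64_t disk_bytenr)
{
	return ctrl->has_bio && ctrl->end_disk_bytenr == disk_bytenr;
}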
2023-04-17  btrfs: simplify the error handling in __extent_writepage_io  (Christoph Hellwig, 1 file changed, -11/+7)
Remove the has_error and saved_ret variables, and just jump to a goto label for error handling from the only place returning an error from the main loop.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: remove the submit_extent_page return value  (Christoph Hellwig, 1 file changed, -120/+35)
submit_extent_page always returns 0 since commit d5e4377d5051 ("btrfs: split zone append bios in btrfs_submit_bio"). Change it to a void return type and remove all the unreachable error handling code in the callers.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: remove the compress_type argument to submit_extent_page  (Christoph Hellwig, 1 file changed, -18/+14)
Update the compress_type in the btrfs_bio_ctrl after forcing out the previous bio in btrfs_do_readpage, so that alloc_new_bio can just use the compress_type member in struct btrfs_bio_ctrl instead of passing the same information redundantly as a function argument.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2023-04-17  btrfs: rename the this_bio_flag variable in btrfs_do_readpage  (Christoph Hellwig, 1 file changed, -5/+5)
Rename this_bio_flag to compress_type to match the surrounding code and better document the intent. Also use the proper enum type instead of unsigned long.

Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>