path: root/fs/btrfs/raid56.c
2022-05-16  btrfs: raid56: make raid56_add_scrub_pages() subpage compatible  (Qu Wenruo, 1 file, -4/+6)

This requires one extra parameter @pgoff for the function. In the current
code base, scrub is still one page per sector, thus the new parameter will
always be 0. The extra subpage scrub optimization code is needed to take
full advantage of it.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

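A hedged sketch of the resulting prototype; the exact parameter order is
assumed from the description, not quoted from the patch:

    void raid56_add_scrub_pages(struct btrfs_raid_bio *rbio, struct page *page,
                                unsigned int pgoff, u64 logical);
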
2022-05-16  btrfs: raid56: open code rbio_stripe_page_index()  (Qu Wenruo, 1 file, -6/+1)

There is only one caller for that helper now, and we're definitely fine to
open-code it.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: raid56: make finish_rmw() subpage compatible  (Qu Wenruo, 1 file, -77/+32)

With this function converted to subpage compatible sector interfaces, the
following helper functions can be removed:

- rbio_stripe_page()
- rbio_pstripe_page()
- rbio_qstripe_page()
- page_in_rbio()

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: raid56: make __raid_recover_endio_io() subpage compatible  (Qu Wenruo, 1 file, -23/+28)

This involves:

- Use the sector_ptr interface to grab the pointers
- Add sector->pgoff to pointers[]
- Rebuild data using sectorsize instead of PAGE_SIZE
- Use memcpy() to replace copy_page()

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: raid56: make finish_parity_scrub() subpage compatible  (Qu Wenruo, 1 file, -24/+32)

The core is to convert direct page usage into sector_ptr usage, and to use
memcpy() to replace copy_page(). For the pointers[] usage, we need to
convert it to kmap_local_page() + sector->pgoff.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: raid56: make rbio_add_io_page() subpage compatible  (Qu Wenruo, 1 file, -89/+165)

Make rbio_add_io_page() subpage compatible, which involves:

- Rename rbio_add_io_page() to rbio_add_io_sector()

  We still rely on PAGE_SIZE == sectorsize, so a new ASSERT() is added
  inside rbio_add_io_sector() to make sure all pgoff values are 0.

- Introduce rbio_stripe_sector() helper

  The equivalent of rbio_stripe_page(). This new helper has extra
  ASSERT()s to validate the stripe and sector number.

- Introduce sector_in_rbio() helper

  The equivalent of page_in_rbio().

- Rename @pagenr variables to @sectornr

- Use rbio::stripe_nsectors when iterating the bitmap

Please note that this only changes the interface; the bios still use full
pages for IO.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

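A hedged sketch of what rbio_stripe_sector() plausibly looks like, given
the ASSERT()s described above (the exact body is assumed):

    static struct sector_ptr *rbio_stripe_sector(const struct btrfs_raid_bio *rbio,
                                                 unsigned int stripe_nr,
                                                 unsigned int sector_nr)
    {
            ASSERT(stripe_nr < rbio->real_stripes);
            ASSERT(sector_nr < rbio->stripe_nsectors);

            return &rbio->stripe_sectors[stripe_nr * rbio->stripe_nsectors +
                                         sector_nr];
    }
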
2022-05-16  btrfs: raid56: introduce btrfs_raid_bio::bio_sectors  (Qu Wenruo, 1 file, -2/+55)

This new member is going to fully replace bio_pages in the future, but for
now let's keep them co-existing until the full switch is done.

Currently cache_rbio_pages() and index_rbio_pages() will also populate the
new array. And cache_rbio_pages() needs to record which sectors are
uptodate, so we also need to introduce a sector_ptr::uptodate bit.

To avoid extra memory usage, we let the new @uptodate bit share bits with
@pgoff. Now pgoff only has at most 31 bits, which is already more than
enough, as even for 256K page size we only need 18 bits.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

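A hedged sketch of the layout the message describes, with @uptodate packed
into the same 32 bits as @pgoff (the exact bit split is assumed from the
"31 bits" remark):

    struct sector_ptr {
            struct page *page;
            unsigned int pgoff:31;    /* offset within the page */
            unsigned int uptodate:1;  /* content has been read or computed */
    };
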
2022-05-16  btrfs: raid56: introduce btrfs_raid_bio::stripe_sectors  (Qu Wenruo, 1 file, -4/+56)

The new member is an array of sector_ptr pointers, which will represent
all sectors inside a full stripe (including P/Q). They co-operate with
btrfs_raid_bio::stripe_pages:

stripe_pages:   | Page 0, range [0, 64K) | Page 1 ...
stripe_sectors: |  |  | ...              |  |
                |  |                     |  \- sector 15, page 0, pgoff=60K
                |  \- sector 1, page 0, pgoff=4K
                \---- sector 0, page 0, pgoff=0

With such a structure, we can represent subpage sectors without using
extra pages.

Here we introduce a new helper, index_stripe_sectors(), to update
stripe_sectors[] to point to the correct page and pgoff. So every time the
rbio::stripe_pages[] pointers get updated, the new helper should be
called.

The following functions have to call the new helper:

- steal_rbio()
- alloc_rbio_pages()
- alloc_rbio_parity_pages()
- alloc_rbio_essential_pages()

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

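A hedged sketch of the indexing the new helper performs; the body is
assumed from the description, but the idea is to walk the full stripe one
sectorsize step at a time and map each sector to a (page, pgoff) pair:

    static void index_stripe_sectors(struct btrfs_raid_bio *rbio)
    {
            const u32 sectorsize = rbio->bioc->fs_info->sectorsize;
            u32 offset = 0;
            int i;

            for (i = 0; i < rbio->nr_sectors; i++, offset += sectorsize) {
                    int page_index = offset >> PAGE_SHIFT;

                    ASSERT(page_index < rbio->nr_pages);
                    rbio->stripe_sectors[i].page = rbio->stripe_pages[page_index];
                    rbio->stripe_sectors[i].pgoff = offset_in_page(offset);
            }
    }
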
2022-05-16  btrfs: raid56: introduce new cached members for btrfs_raid_bio  (Qu Wenruo, 1 file, -6/+17)

The new members are all related to the number of sectors, but the existing
page-count members are kept as is:

- nr_sectors
  Total sectors of the full stripe including P/Q.

- stripe_nsectors
  The sectors of a single stripe.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: raid56: make btrfs_raid_bio more compact  (Qu Wenruo, 1 file, -19/+21)

There are a lot of members in btrfs_raid_bio using a much larger type than
necessary, like nr_pages, which represents the total number of pages in a
full stripe. Instead of int (which is at least 32 bits), u16 is already
enough (the max stripe length will be 256MiB, already beyond the current
RAID56 device number limit).

So this patch will reduce the width of the following members:

- stripe_len to u32
- nr_pages to u16
- nr_data to u8
- real_stripes to u8
- scrubp to u8
- faila/b to s8
  As -1 is used to indicate no corruption

This will slightly reduce the size of btrfs_raid_bio from 272 bytes to 256
bytes, saving 16 bytes. But please note that when using btrfs_raid_bio we
allocate extra space for it to cover various pointer arrays, so the
reduced memory is not really a big saving overall.

As we're modifying the comments already, update the existing comments to
the current code standard.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: raid56: open code rbio_nr_pages()  (Qu Wenruo, 1 file, -13/+5)

The function rbio_nr_pages() is only called once inside alloc_rbio(), so
there is no reason to keep it as a dedicated helper. Furthermore, the
return type doesn't match: the function returns "unsigned long", which may
not be necessary, while the only caller only uses "int".

Since we're doing cleanups here, also fix the type to "const unsigned int"
for all involved local variables.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: reduce width for stripe_len from u64 to u32  (Qu Wenruo, 1 file, -8/+8)

Currently btrfs uses a fixed stripe length (64K), thus u32 is wide enough
for the usage. Furthermore, even if in the future we choose to enlarge the
stripe length, I don't believe we would want stripes as large as 4G or
larger.

So this patch will reduce the width for all in-memory structures and
parameters. This involves:

- RAID56 related function argument lists
  This allows us to do direct division related to stripe_len, although we
  will use bit shifts to replace the division anyway.

- btrfs_io_geometry structure
  This involves one change to simplify the calculation of both @stripe_nr
  and @stripe_offset, using div64_u64_rem(). An extra sanity check is
  added to make sure @stripe_offset is always small enough for u32. This
  saves 8 bytes for the structure.

- map_lookup structure
  This converts @stripe_len to u32, which saves 8 bytes (4 bytes from the
  member, plus a removed 4-byte hole).

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

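A hedged sketch of the simplified geometry calculation with
div64_u64_rem(); variable names follow the message, not necessarily the
final code:

    u64 rem;

    stripe_nr = div64_u64_rem(offset, map->stripe_len, &rem);
    ASSERT(rem <= U32_MAX);   /* sanity check: offset into a stripe fits u32 */
    stripe_offset = (u32)rem;
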
2022-05-16  btrfs: stop using the btrfs_bio saved iter in index_rbio_pages  (Christoph Hellwig, 1 file, -3/+0)

The bios added to ->bio_list are the original bios fed into
btrfs_map_bio, which are never advanced. Just use the iter in the bio
itself.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: don't allocate a btrfs_bio for raid56 per-stripe bios  (Christoph Hellwig, 1 file, -5/+2)

Except for the spurious initialization of ->device just after allocation,
nothing uses the btrfs_bio, so just allocate a normal bio without extra
data.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: pass bio opf to rbio_add_io_page  (Christoph Hellwig, 1 file, -20/+17)

Prepare for further refactoring by moving this initialization to a single
place instead of setting it in the callers.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2022-05-16  btrfs: factor out allocating an array of pages  (Sweet Tea Dorminy, 1 file, -25/+4)

Several functions currently populate an array of page pointers one
allocated page at a time. Factor out the common code so as to allow
improvements to all of the sites at once.

Reviewed-by: Nikolay Borisov <[email protected]>
Signed-off-by: Sweet Tea Dorminy <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

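A hedged sketch of the factored-out pattern; the helper name and exact
unwinding behavior here are assumptions, not the patch's code:

    /* Fill @pages with @nr_pages freshly allocated pages, unwinding on
     * failure so the caller never sees a half-populated array. */
    static int alloc_page_array(unsigned int nr_pages, struct page **pages)
    {
            unsigned int i;

            for (i = 0; i < nr_pages; i++) {
                    pages[i] = alloc_page(GFP_NOFS);
                    if (!pages[i])
                            goto out_free;
            }
            return 0;

    out_free:
            while (i--)
                    __free_page(pages[i]);
            return -ENOMEM;
    }
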
2021-10-26  btrfs: remove btrfs_raid_bio::fs_info member  (Qu Wenruo, 1 file, -24/+24)

We can grab fs_info reliably from btrfs_raid_bio::bioc, as the bioc is
always passed into alloc_rbio() and only gets released when the raid bio
is released.

Remove the btrfs_raid_bio::fs_info member, and clean up all the @fs_info
parameters for alloc_rbio() callers.

Reviewed-by: Nikolay Borisov <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2021-10-26  btrfs: rename struct btrfs_io_bio to btrfs_bio  (Qu Wenruo, 1 file, -4/+4)

Previously we had "struct btrfs_bio", which records IO context for
mirrored IO and RAID56, and "struct btrfs_io_bio", which records extra
btrfs specific info for logical bytenr bios.

With "btrfs_bio" renamed to "btrfs_io_context", we are safe to rename
"btrfs_io_bio" to "btrfs_bio", which is a more suitable name now.

The struct btrfs_bio changes meaning with this commit. There was a
suggested name like btrfs_logical_bio, but it's a bit long and we'd prefer
to use a shorter name.

This could be a concern for backports to older kernels where the different
meaning could possibly cause confusion or bugs. Comparing the new and old
structures, there's no overlap among the struct members, so a build would
break in case of an incorrect backport. We haven't had many backports to
bio code anyway, so this is more of a theoretical cause of bugs and a
matter of precaution, but we'll need to keep the semantic change in mind.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2021-10-26  btrfs: rename btrfs_bio to btrfs_io_context  (Qu Wenruo, 1 file, -64/+63)

The structure btrfs_bio is used by two different sites:

- bio->bi_private for mirror based profiles
  For those profiles (SINGLE/DUP/RAID1*/RAID10), this structure records
  how many mirrors are still pending, and saves the original endio
  function of the bio.

- RAID56 code
  In that case, RAID56 only utilizes the stripe info, and no longer uses
  it to track the pending mirrors.

So btrfs_bio is not always bound to a bio, and contains more info for IO
context; thus renaming it will make the naming less confusing.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2021-08-23  btrfs: constify and cleanup variables in comparators  (David Sterba, 1 file, -4/+4)

Comparators just read the data and thus get const parameters. This should
also be preserved by the local variables; update all comparators passed to
sort or bsearch.

Cleanups:

- unnecessary casts are dropped
- btrfs_cmp_device_free_bytes is cleaned up to follow the common pattern
  and 'inline' is dropped as the function address is taken

Signed-off-by: David Sterba <[email protected]>

2021-08-23  btrfs: drop __GFP_HIGHMEM from all allocations  (David Sterba, 1 file, -5/+5)

The highmem flag is used for allocating pages for compression and for
raid56 pages. The high memory makes sense on 32bit systems but is not
without problems. On 64bit systems it's just another layer of wrappers.

The time the pages are allocated for compression or raid56 is relatively
short (about a transaction commit), so the pages are not blocked
indefinitely. As the number of pages depends on the amount of data being
written/read, there's a theoretical problem. A fast device on a 32bit
system could use most of the low memory pool, while with the highmem
allocation that would not happen. This was possibly the original idea a
long time ago, but nowadays we optimize for 64bit systems.

This patch removes all usage of the __GFP_HIGHMEM flag for page
allocation; the kmap/kunmap are still in place and will be removed in
followup patches. Remaining is masking out the bit in alloc_extent_state
and __lookup_free_space_inode, which can safely stay.

Signed-off-by: David Sterba <[email protected]>

2021-04-27  Merge tag 'cfi-v5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux  (Linus Torvalds, 1 file, -1/+2)

Pull CFI on arm64 support from Kees Cook:
"This builds on last cycle's LTO work, and allows the arm64 kernels to
be built with Clang's Control Flow Integrity feature. This feature has
happily lived in Android kernels for almost 3 years[1], so I'm excited
to have it ready for upstream.

The wide diffstat is mainly due to the treewide fixing of mismatched
list_sort prototypes. Other things in core kernel are to address
various CFI corner cases. The largest code portion is the CFI runtime
implementation itself (which will be shared by all architectures
implementing support for CFI). The arm64 pieces are Acked by arm64
maintainers rather than coming through the arm64 tree since carrying
this tree over there was going to be awkward.

CFI support for x86 is still under development, but is pretty close.
There are a handful of corner cases on x86 that need some improvements
to Clang and objtool, but otherwise works well.

Summary:

- Clean up list_sort prototypes (Sami Tolvanen)

- Introduce CONFIG_CFI_CLANG for arm64 (Sami Tolvanen)"

* tag 'cfi-v5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  arm64: allow CONFIG_CFI_CLANG to be selected
  KVM: arm64: Disable CFI for nVHE
  arm64: ftrace: use function_nocfi for ftrace_call
  arm64: add __nocfi to __apply_alternatives
  arm64: add __nocfi to functions that jump to a physical address
  arm64: use function_nocfi with __pa_symbol
  arm64: implement function_nocfi
  psci: use function_nocfi for cpu_resume
  lkdtm: use function_nocfi
  treewide: Change list_sort to use const pointers
  bpf: disable CFI in dispatcher functions
  kallsyms: strip ThinLTO hashes from static functions
  kthread: use WARN_ON_FUNCTION_MISMATCH
  workqueue: use WARN_ON_FUNCTION_MISMATCH
  module: ensure __cfi_check alignment
  mm: add generic function_nocfi macro
  cfi: add __cficanonical
  add support for Clang CFI

2021-04-19  btrfs: raid56: convert kmaps to kmap_local_page  (Ira Weiny, 1 file, -31/+34)

These kmaps are thread local and don't need to be atomic. So they can use
the more efficient kmap_local_page(). However, the mapping of pages in the
stripes and the additional parity and qstripe pages are a bit trickier
because the unmapping must occur in the opposite order from the mapping.
Furthermore, the pointer array in __raid_recover_end_io() may get
reordered.

Convert these calls to kmap_local_page(), taking care to reverse the
unmappings of any page arrays as well as being careful with the mappings
of any special pages such as the parity and qstripe pages.

Signed-off-by: Ira Weiny <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

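The constraint in practice, as a hedged sketch: kmap_local_page()
mappings must be undone in reverse (LIFO) order, so pages mapped first are
unmapped last (variable names illustrative):

    void *p = kmap_local_page(p_page);
    void *q = kmap_local_page(q_page);

    /* ... generate syndrome using p and q ... */

    kunmap_local(q);    /* last mapped, first unmapped */
    kunmap_local(p);
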
2021-04-19  btrfs: convert kmap to kmap_local_page, simple cases  (Ira Weiny, 1 file, -2/+2)

Use a simple coccinelle script to help convert the most common
kmap()/kunmap() patterns to kmap_local_page()/kunmap_local().

Note that some kmaps which were caught by this script needed to be handled
by hand because of the strict unmapping order of kunmap_local(), so they
are not included in this patch. But this script got us started.

There's another temp variable added for the final length write to the
first page so it does not interfere with cpage_out that is used for
mapping other pages.

The development of this patch was aided by the following script:

// <smpl>
// SPDX-License-Identifier: GPL-2.0-only
// Find kmap and replace with kmap_local_page then mark kunmap
//
// Confidence: Low
// Copyright: (C) 2021 Intel Corporation
// URL: http://coccinelle.lip6.fr/
@ catch_all @
expression e, e2;
@@

(
-kmap(e)
+kmap_local_page(e)
)
...
(
-kunmap(...)
+kunmap_local()
)

// </smpl>

Signed-off-by: Ira Weiny <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2021-04-19  btrfs: remove duplicated in_range() macro  (Johannes Thumshirn, 1 file, -0/+1)

The in_range() macro is defined twice in btrfs' source, once in ctree.h
and once in misc.h. Remove the definition in ctree.h and include misc.h in
the files depending on it.

Signed-off-by: Johannes Thumshirn <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2021-04-08  treewide: Change list_sort to use const pointers  (Sami Tolvanen, 1 file, -1/+2)

list_sort() internally casts the comparison function passed to it to a
different type with constant struct list_head pointers, and uses this
pointer to call the functions, which trips indirect call Control-Flow
Integrity (CFI) checking.

Instead of removing the consts, this change defines the list_cmp_func_t
type and changes the comparison function types of all list_sort() callers
to use const pointers, thus avoiding type mismatches.

Suggested-by: Nick Desaulniers <[email protected]>
Signed-off-by: Sami Tolvanen <[email protected]>
Reviewed-by: Nick Desaulniers <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Tested-by: Nick Desaulniers <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

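The new type from <linux/list_sort.h> (shown here without attribute
annotations), which every caller's comparator, raid56's among them, now
has to match:

    typedef int (*list_cmp_func_t)(void *priv,
                                   const struct list_head *a,
                                   const struct list_head *b);
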
2021-03-01  Merge branch 'kmap-conversion-for-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux  (Linus Torvalds, 1 file, -9/+1)

Pull kmap conversion updates from David Sterba:
"This contains changes regarding kmap API use and eg conversion from
kmap_atomic to kmap_local_page.

The API belongs to memory management but to save cross-tree dependency
headaches we've agreed to take it through the btrfs tree because there
are some trivial conversions possible, while the rest will need some
time and getting the easy cases out of the way would be convenient.

The changes can be grouped:

- function exports, new helpers

- new VM_BUG_ON for additional verification; it's been discussed if it
  should be VM_BUG_ON or BUG_ON, the former was chosen due to
  performance reasons

- code replaced by relevant helpers"

[ This is an updated version of a request that originally came in during
  the merge window, but I asked for some updates:
  https://lore.kernel.org/lkml/[email protected]/
  which is why this got merged after the merge window closed. - Linus ]

* 'kmap-conversion-for-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: use copy_highpage() instead of 2 kmaps()
  btrfs: use memcpy_[to|from]_page() and kmap_local_page()
  mm/highmem: Add VM_BUG_ON() to mem*_page() calls
  mm/highmem: Introduce memcpy_page(), memmove_page(), and memset_page()
  mm/highmem: Convert memcpy_[to|from]_page() to kmap_local_page()
  mm/highmem: Lift memcpy_[to|from]_page to core

2021-03-01  Merge tag 'for-5.12-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux  (Linus Torvalds, 1 file, -11/+10)

Pull btrfs fixes from David Sterba:
"This is the first batch of fixes that usually arrive during the merge
window code freeze. Regressions and stable material.

Regressions:

- fix deadlock in log sync in zoned mode

- fix bugs in subpage mode still wrongly assuming sectorsize == page
  size

Fixes:

- fix missing kunmap of the Q stripe in RAID6

- block group fixes:
    - fix race between extent freeing/allocation when using bitmaps
    - avoid double put of block group when emptying cluster

- swapfile fixes:
    - fix swapfile writes vs running scrub
    - fix swapfile activation vs snapshot creation

- fix stale data exposure after cloning a hole with NO_HOLES enabled

- remove tree-checker check that does not work in case information from
  other leaves is necessary"

* tag 'for-5.12-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: zoned: fix deadlock on log sync
  btrfs: avoid double put of block group when emptying cluster
  btrfs: fix stale data exposure after cloning a hole with NO_HOLES enabled
  btrfs: tree-checker: do not error out if extent ref hash doesn't match
  btrfs: fix race between swap file activation and snapshot creation
  btrfs: fix race between writes to swap files and scrub
  btrfs: avoid checking for RO block group twice during nocow writeback
  btrfs: fix race between extent freeing/allocation when using bitmaps
  btrfs: make check_compressed_csum() to be subpage compatible
  btrfs: make btrfs_submit_compressed_read() subpage compatible
  btrfs: fix raid6 qstripe kmap

2021-02-26  btrfs: use copy_highpage() instead of 2 kmaps()  (Ira Weiny, 1 file, -9/+1)

There are many places where kmap/memmove/kunmap patterns occur. This
pattern exists in the core common function copy_highpage().

Use copy_highpage() to avoid open coding the use of kmap and leverage the
core function's use of kmap_local_page().

Development of this patch was aided by the following coccinelle script:

// <smpl>
// SPDX-License-Identifier: GPL-2.0-only
// Find kmap/copypage/kunmap pattern and replace with copy_highpage calls
//
// NOTE: The expressions in the copy page version of this kmap pattern are
// overly complex and so these all need individual attention.
//
// Confidence: Low
// Copyright: (C) 2021 Intel Corporation
// URL: http://coccinelle.lip6.fr/
// Comments:
// Options:
//
// Then a copy_page where we have 2 pages involved.
//
@ copy_page_rule @
expression page, page2, To, From, Size;
identifier ptr, ptr2;
type VP, VP2;
@@

/* kmap */
(
-VP ptr = kmap(page);
...
-VP2 ptr2 = kmap(page2);
|
-VP ptr = kmap_atomic(page);
...
-VP2 ptr2 = kmap_atomic(page2);
|
-ptr = kmap(page);
...
-ptr2 = kmap(page2);
|
-ptr = kmap_atomic(page);
...
-ptr2 = kmap_atomic(page2);
)

// 1 or more copy versions of the entire page
<+...
(
-copy_page(To, From);
+copy_highpage(To, From);
|
-memmove(To, From, Size);
+memmoveExtra(To, From, Size);
)
...+>

/* kunmap */
(
-kunmap(page2);
...
-kunmap(page);
|
-kunmap(page);
...
-kunmap(page2);
|
-kmap_atomic(ptr2);
...
-kmap_atomic(ptr);
)

// Remove any pointers left unused
@
depends on copy_page_rule
@
identifier copy_page_rule.ptr;
identifier copy_page_rule.ptr2;
type VP, VP1;
type VP2, VP21;
@@

-VP ptr;
... when != ptr;
? VP1 ptr;
-VP2 ptr2;
... when != ptr2;
? VP21 ptr2;
// </smpl>

Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Ira Weiny <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2021-02-22  btrfs: fix raid6 qstripe kmap  (Ira Weiny, 1 file, -11/+10)

When a qstripe is required an extra page is allocated and mapped. There
were 3 problems:

1) There is no corresponding call of kunmap() for the qstripe page.
2) There is no reason to map the qstripe page more than once if the
   number of bits set in rbio->dbitmap is greater than one.
3) There is no reason to map the parity page and unmap it each time
   through the loop.

The page memory can continue to be reused with a single mapping on each
iteration by raid6_call.gen_syndrome() without remapping. So map the page
for the duration of the loop.

Similarly, improve the algorithm by mapping the parity page just 1 time.

Fixes: 5a6ac9eacb49 ("Btrfs, raid56: support parity scrub on raid56")
CC: [email protected] # 4.4.x: c17af96554a8: btrfs: raid56: simplify tracking of Q stripe presence
CC: [email protected] # 4.4.x
Signed-off-by: Ira Weiny <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

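A hedged sketch of the fixed flow (helper and variable names assumed; the
kmap() calls predate the later kmap_local_page() conversion):

    pointers[nr_data] = kmap(p_page);           /* parity: mapped once */
    if (has_qstripe)
            pointers[nr_data + 1] = kmap(q_page);  /* qstripe: mapped once */

    for_each_set_bit(pagenr, rbio->dbitmap, rbio->stripe_npages) {
            /* ... fill the data pointers, then reuse the mappings ... */
            raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE, pointers);
    }

    if (has_qstripe)
            kunmap(q_page);
    kunmap(p_page);
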
2021-02-21  Merge tag 'for-5.12/block-2021-02-17' of git://git.kernel.dk/linux-block  (Linus Torvalds, 1 file, -5/+2)

Pull core block updates from Jens Axboe:
"Another nice round of removing more code than what is added, mostly
due to Christoph's relentless pursuit of tech debt removal/cleanups.

This pull request contains:

- Two series of BFQ improvements (Paolo, Jan, Jia)

- Block iov_iter improvements (Pavel)

- bsg error path fix (Pan)

- blk-mq scheduler improvements (Jan)

- -EBUSY discard fix (Jan)

- bvec allocation improvements (Ming, Christoph)

- bio allocation and init improvements (Christoph)

- Store bdev pointer in bio instead of gendisk + partno (Christoph)

- Block trace point cleanups (Christoph)

- hard read-only vs read-only split (Christoph)

- Block based swap cleanups (Christoph)

- Zoned write granularity support (Damien)

- Various fixes/tweaks (Chunguang, Guoqing, Lei, Lukas, Huhai)"

* tag 'for-5.12/block-2021-02-17' of git://git.kernel.dk/linux-block: (104 commits)
  mm: simplify swapdev_block
  sd_zbc: clear zone resources for non-zoned case
  block: introduce blk_queue_clear_zone_settings()
  zonefs: use zone write granularity as block size
  block: introduce zone_write_granularity limit
  block: use blk_queue_set_zoned in add_partition()
  nullb: use blk_queue_set_zoned() to setup zoned devices
  nvme: cleanup zone information initialization
  block: document zone_append_max_bytes attribute
  block: use bi_max_vecs to find the bvec pool
  md/raid10: remove dead code in reshape_request
  block: mark the bio as cloned in bio_iov_bvec_set
  block: set BIO_NO_PAGE_REF in bio_iov_bvec_set
  block: remove a layer of indentation in bio_iov_iter_get_pages
  block: turn the nr_iovecs argument to bio_alloc* into an unsigned short
  block: remove the 1 and 4 vec bvec_slabs entries
  block: streamline bvec_alloc
  block: factor out a bvec_alloc_gfp helper
  block: move struct biovec_slab to bio.c
  block: reuse BIO_INLINE_VECS for integrity bvecs
  ...

2021-02-08  btrfs: remove redundant NULL check before kvfree  (Yang Li, 1 file, -2/+1)

Fix the below warning reported by coccicheck:

./fs/btrfs/raid56.c:237:2-8: WARNING: NULL check before some freeing
functions is not needed.

Reported-by: Abaci Robot <[email protected]>
Reviewed-by: Anand Jain <[email protected]>
Signed-off-by: Yang Li <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

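The change is a one-liner in spirit: kvfree(), like kfree(), is a no-op
on NULL, so the guard adds nothing. A hedged sketch with a hypothetical
variable name:

    -	if (table)
    -		kvfree(table);
    +	kvfree(table);
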
2021-01-24  block: store a block_device pointer in struct bio  (Christoph Hellwig, 1 file, -5/+2)

Replace the gendisk pointer in struct bio with a pointer to the newly
improved struct block_device. From that the gendisk can be trivially
accessed with an extra indirection, but it also allows us to directly look
up all information related to partition remapping.

Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>

2020-12-09  btrfs: drop casts of bio bi_sector  (David Sterba, 1 file, -4/+4)

Since commit 72deb455b5ec ("block: remove CONFIG_LBDAF") (5.2), the
sector_t type is u64 on all arches and configs, so we don't need to
typecast it. It used to be unsigned long, and the result of sector size
shifts was not guaranteed to fit in the type.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2020-07-27  btrfs: raid56: remove out label in __raid56_parity_recover  (Nikolay Borisov, 1 file, -2/+2)

There's no cleanup that occurs, so we can simply return 0 directly.

Signed-off-by: Nikolay Borisov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2020-07-27  btrfs: raid56: don't opencode swap() in __raid_recover_end_io  (Nikolay Borisov, 1 file, -5/+2)

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Nikolay Borisov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2020-07-27  btrfs: raid56: use in_range where applicable  (Nikolay Borisov, 1 file, -12/+5)

While at it, use the opportunity to simplify find_logical_bio_stripe by
reducing the scope of the 'stripe_start' variable and squashing the
sector-to-bytes conversion onto one line.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Nikolay Borisov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

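The macro, as defined in misc.h, plus a hedged sketch of its use in
find_logical_bio_stripe (variable names assumed):

    #define in_range(b, first, len)  ((b) >= (first) && (b) < (first) + (len))

    /* sketched: does this logical address fall inside the stripe? */
    if (in_range(logical, stripe_start, rbio->stripe_len))
            return i;
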
2020-07-27  btrfs: raid56: assign bio in while() when using bio_list_pop  (Nikolay Borisov, 1 file, -25/+5)

Unify the style in the file such that the return value of bio_list_pop()
is assigned directly in the while loop. This is in line with the rest of
the kernel.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Nikolay Borisov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

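The resulting pattern, as a hedged sketch (the endio function name is
taken from raid56.c's write path, the rest assumed):

    while ((bio = bio_list_pop(&bio_list))) {
            bio->bi_end_io = raid_write_end_io;
            submit_bio(bio);
    }
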
2020-07-27  btrfs: raid56: remove redundant device check in rbio_add_io_page  (Nikolay Borisov, 1 file, -4/+2)

The merging logic is always executed if the current stripe's device is not
missing, so there's no point in duplicating the check. Simply remove it,
and while at it reduce the scope of the 'last_end' variable. If the
current stripe's device is missing we fail the stripe early on.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Nikolay Borisov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2020-07-27  btrfs: record btrfs_device directly in btrfs_io_bio  (Nikolay Borisov, 1 file, -0/+1)

Instead of recording the stripe_index and using that to access the correct
btrfs_device from btrfs_bio::stripes, record the btrfs_device in
btrfs_io_bio. This will enable endio handlers to increment device error
counters on checksum errors.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Nikolay Borisov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2020-03-23  btrfs: use struct_size to calculate size of raid hash table  (David Sterba, 1 file, -3/+1)

The struct_size macro does the same calculation and is safe regarding
overflows. Though we're not expecting them to happen, use the helper for
clarity.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: David Sterba <[email protected]>

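struct_size(p, member, n) computes sizeof(*p) + n * sizeof(p->member[0])
with overflow checking. A hedged sketch of the call site, assuming the
hash table struct ends in a flexible array member named 'table':

    table = kvzalloc(struct_size(table, table, num_entries), GFP_KERNEL);
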
2020-03-23  btrfs: raid56: simplify tracking of Q stripe presence  (David Sterba, 1 file, -22/+15)

There are temporary variables tracking the index of P and Q stripes, but
none of them is really used as such, merely for determining if the Q
stripe is present. This leads to compiler warnings with
-Wunused-but-set-variable and has been reported several times.

fs/btrfs/raid56.c: In function 'finish_rmw':
fs/btrfs/raid56.c:1199:6: warning: variable 'p_stripe' set but not used [-Wunused-but-set-variable]
 1199 |  int p_stripe = -1;
      |      ^~~~~~~~

fs/btrfs/raid56.c: In function 'finish_parity_scrub':
fs/btrfs/raid56.c:2356:6: warning: variable 'p_stripe' set but not used [-Wunused-but-set-variable]
 2356 |  int p_stripe = -1;
      |      ^~~~~~~~

Replace the two variables with one that has a clear meaning and also get
rid of the warnings. The logic that verifies that there are only 2 valid
cases is unchanged.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: David Sterba <[email protected]>

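A hedged sketch of the replacement logic the message describes, keeping
the check that only two layouts are valid:

    bool has_qstripe;

    if (rbio->real_stripes - rbio->nr_data == 1)
            has_qstripe = false;   /* RAID5: P only */
    else if (rbio->real_stripes - rbio->nr_data == 2)
            has_qstripe = true;    /* RAID6: P and Q */
    else
            BUG();
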
2019-11-18  btrfs: remove pointless local variable in lock_stripe_add()  (Johannes Thumshirn, 1 file, -2/+3)

In lock_stripe_add() we're caching the bucket for the stripe hash table
just for a single call to dereference the stripe hash. If we just directly
call rbio_bucket() we can drop the pointless local variable.

Also move the dereferencing of the stripe hash outside of the variable
declaration block to not break the 80 character limit.

Reviewed-by: Nikolay Borisov <[email protected]>
Signed-off-by: Johannes Thumshirn <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2019-11-18  btrfs: raid56: reduce indentation in lock_stripe_add  (Johannes Thumshirn, 1 file, -47/+44)

In lock_stripe_add() we're traversing the stripe hash list and checking
whether the current list element's raid_map is equal to the raid bio's
raid_map. If both are equal we continue processing. If we instead check
for inequality, we can reduce one level of indentation.

Reviewed-by: Nikolay Borisov <[email protected]>
Signed-off-by: Johannes Thumshirn <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

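The shape of the transformation, as a hedged sketch; cur_matches() is a
hypothetical stand-in for the raid_map comparison:

    list_for_each_entry(cur, &h->hash_list, hash_list) {
            /* before: if (cur_matches(rbio)) { ...long body... } */
            if (!cur_matches(rbio))
                    continue;

            /* merge/steal logic, now one indentation level shallower */
    }
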
2019-11-18  btrfs: get rid of unique workqueue helper functions  (Omar Sandoval, 1 file, -3/+2)

Commit 9e0af2376434 ("Btrfs: fix task hang under heavy compressed write")
worked around the issue that a recycled work item could get a false
dependency on the original work item due to how the workqueue code
guarantees non-reentrancy. It did so by giving different work functions to
different types of work.

However, the fixes in the previous few patches are more complete, as they
prevent a work item from being recycled at all (except for a tiny window
that the kernel workqueue code handles for us). This obsoletes the
previous fix, so we don't need the unique helpers for correctness. The
only other reason to keep them would be so they show up in stack traces,
but they always seem to be optimized to a tail call, so they don't show up
anyways. So, let's just get rid of the extra indirection.

While we're here, rename normal_work_helper() to the more informative
btrfs_work_helper().

Reviewed-by: Nikolay Borisov <[email protected]>
Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Omar Sandoval <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2019-09-09  btrfs: move private raid56 definitions from ctree.h  (David Sterba, 1 file, -0/+16)

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2019-04-30  block: remove the i argument to bio_for_each_segment_all  (Christoph Hellwig, 1 file, -2/+1)

We only have two callers that need the integer loop iterator, and they can
easily maintain it themselves.

Suggested-by: Matthew Wilcox <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Acked-by: David Sterba <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Acked-by: Coly Li <[email protected]>
Reviewed-by: Matthew Wilcox <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>

2019-03-26  Merge tag 'for-5.1-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux  (Linus Torvalds, 1 file, -1/+2)

Pull btrfs fixes from David Sterba:

- fsync fixes: i_size for truncate vs fsync, dio vs buffered during
  snapshotting, remove complicated but incomplete assertion

- removed excessive warnings, misreported device stats updates

- fix raid56 page mapping for 32bit arch

- fixes reported by static analyzer

* tag 'for-5.1-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  Btrfs: fix assertion failure on fsync with NO_HOLES enabled
  btrfs: Avoid possible qgroup_rsv_size overflow in btrfs_calculate_inode_block_rsv_size
  btrfs: Fix bound checking in qgroup_trace_new_subtree_blocks
  btrfs: raid56: properly unmap parity page in finish_parity_scrub()
  btrfs: don't report readahead errors and don't update statistics
  Btrfs: fix file corruption after snapshotting due to mix of buffered/DIO writes
  btrfs: remove WARN_ON in log_dir_items
  Btrfs: fix incorrect file size after shrinking truncate and fsync

2019-03-18  btrfs: raid56: properly unmap parity page in finish_parity_scrub()  (Andrea Righi, 1 file, -1/+2)

The parity page is incorrectly unmapped in finish_parity_scrub(),
triggering a reference counter bug on i386, i.e.:

  [ 157.662401] kernel BUG at mm/highmem.c:349!
  [ 157.666725] invalid opcode: 0000 [#1] SMP PTI

The reason is that kunmap(p_page) was completely left out, so we never did
an unmap for the p_page, and the loop unmapping the rbio pages was
iterating over the wrong number of stripes: unmapping should be done with
nr_data instead of rbio->real_stripes.

Test case to reproduce the bug:

- create a raid5 btrfs filesystem:
  # mkfs.btrfs -m raid5 -d raid5 /dev/sdb /dev/sdc /dev/sdd /dev/sde

- mount it:
  # mount /dev/sdb /mnt

- run btrfs scrub in a loop:
  # while :; do btrfs scrub start -BR /mnt; done

BugLink: https://bugs.launchpad.net/bugs/1812845
Fixes: 5a6ac9eacb49 ("Btrfs, raid56: support parity scrub on raid56")
CC: [email protected] # 4.4+
Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Andrea Righi <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

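A hedged sketch of the corrected unmapping: only the nr_data data stripes
were mapped, plus the parity page itself (loop variables assumed):

    for (stripe = 0; stripe < nr_data; stripe++)
            kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
    kunmap(p_page);
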
2019-02-15  block: allow bio_for_each_segment_all() to iterate over multi-page bvec  (Ming Lei, 1 file, -1/+2)

This patch introduces one extra iterator variable to
bio_for_each_segment_all(), then we can allow bio_for_each_segment_all()
to iterate over multi-page bvecs.

Given it is just one mechanical & simple change on all
bio_for_each_segment_all() users, this patch does the tree-wide change in
one single patch, so that we can avoid using a temporary helper for this
conversion.

Reviewed-by: Omar Sandoval <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>