path: root/fs/btrfs
Age | Commit message | Author | Files | Lines

2024-07-11  btrfs: use delayed iput during extent map shrinking  (Filipe Manana, 1 file, -1/+1)

When putting an inode during extent map shrinking we do a standard iput(), but that may take a long time if the inode is dirty and we are doing the final iput that triggers eviction: the VFS will have to wait for writeback before calling the btrfs evict callback (see fs/inode.c:evict()). This slows down the task running the shrinker, which may have been triggered while updating some tree for example, meaning locks are held as well as an open transaction handle.

Also, if the iput() ends up triggering eviction and the inode has no links anymore, we trigger item truncation, which requires flushing delayed items and space reservation to start a transaction. That may trigger the space reclaim task and wait for it, resulting in deadlocks in case the reclaim task needs, for example, to commit a transaction while the shrinker is being triggered from a path holding a transaction handle.

Syzbot reported such a case with the following stack traces:

   ======================================================
   WARNING: possible circular locking dependency detected
   6.10.0-rc2-syzkaller-00010-g2ab795141095 #0 Not tainted
   ------------------------------------------------------
   kswapd0/111 is trying to acquire lock:
   ffff88801eae4610 (sb_internal#3){.+.+}-{0:0}, at: btrfs_commit_inode_delayed_inode+0x110/0x330 fs/btrfs/delayed-inode.c:1275

   but task is already holding lock:
   ffffffff8dd3a9a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xa88/0x1970 mm/vmscan.c:6924

   which lock already depends on the new lock.

   the existing dependency chain (in reverse order) is:

   -> #3 (fs_reclaim){+.+.}-{0:0}:
          __fs_reclaim_acquire mm/page_alloc.c:3783 [inline]
          fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3797
          might_alloc include/linux/sched/mm.h:334 [inline]
          slab_pre_alloc_hook mm/slub.c:3890 [inline]
          slab_alloc_node mm/slub.c:3980 [inline]
          kmem_cache_alloc_lru_noprof+0x58/0x2f0 mm/slub.c:4019
          btrfs_alloc_inode+0x118/0xb20 fs/btrfs/inode.c:8411
          alloc_inode+0x5d/0x230 fs/inode.c:261
          iget5_locked fs/inode.c:1235 [inline]
          iget5_locked+0x1c9/0x2c0 fs/inode.c:1228
          btrfs_iget_locked fs/btrfs/inode.c:5590 [inline]
          btrfs_iget_path fs/btrfs/inode.c:5607 [inline]
          btrfs_iget+0xfb/0x230 fs/btrfs/inode.c:5636
          create_reloc_inode+0x403/0x820 fs/btrfs/relocation.c:3911
          btrfs_relocate_block_group+0x471/0xe60 fs/btrfs/relocation.c:4114
          btrfs_relocate_chunk+0x143/0x450 fs/btrfs/volumes.c:3373
          __btrfs_balance fs/btrfs/volumes.c:4157 [inline]
          btrfs_balance+0x211a/0x3f00 fs/btrfs/volumes.c:4534
          btrfs_ioctl_balance fs/btrfs/ioctl.c:3675 [inline]
          btrfs_ioctl+0x12ed/0x8290 fs/btrfs/ioctl.c:4742
          __do_compat_sys_ioctl+0x2c3/0x330 fs/ioctl.c:1007
          do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
          __do_fast_syscall_32+0x73/0x120 arch/x86/entry/common.c:386
          do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
          entry_SYSENTER_compat_after_hwframe+0x84/0x8e

   -> #2 (btrfs_trans_num_extwriters){++++}-{0:0}:
          join_transaction+0x164/0xf40 fs/btrfs/transaction.c:315
          start_transaction+0x427/0x1a70 fs/btrfs/transaction.c:700
          btrfs_rebuild_free_space_tree+0xaa/0x480 fs/btrfs/free-space-tree.c:1323
          btrfs_start_pre_rw_mount+0x218/0xf60 fs/btrfs/disk-io.c:2999
          open_ctree+0x41ab/0x52e0 fs/btrfs/disk-io.c:3554
          btrfs_fill_super fs/btrfs/super.c:946 [inline]
          btrfs_get_tree_super fs/btrfs/super.c:1863 [inline]
          btrfs_get_tree+0x11e9/0x1b90 fs/btrfs/super.c:2089
          vfs_get_tree+0x8f/0x380 fs/super.c:1780
          fc_mount+0x16/0xc0 fs/namespace.c:1125
          btrfs_get_tree_subvol fs/btrfs/super.c:2052 [inline]
          btrfs_get_tree+0xa53/0x1b90 fs/btrfs/super.c:2090
          vfs_get_tree+0x8f/0x380 fs/super.c:1780
          do_new_mount fs/namespace.c:3352 [inline]
          path_mount+0x6e1/0x1f10 fs/namespace.c:3679
          do_mount fs/namespace.c:3692 [inline]
          __do_sys_mount fs/namespace.c:3898 [inline]
          __se_sys_mount fs/namespace.c:3875 [inline]
          __ia32_sys_mount+0x295/0x320 fs/namespace.c:3875
          do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
          __do_fast_syscall_32+0x73/0x120 arch/x86/entry/common.c:386
          do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
          entry_SYSENTER_compat_after_hwframe+0x84/0x8e

   -> #1 (btrfs_trans_num_writers){++++}-{0:0}:
          join_transaction+0x148/0xf40 fs/btrfs/transaction.c:314
          start_transaction+0x427/0x1a70 fs/btrfs/transaction.c:700
          btrfs_rebuild_free_space_tree+0xaa/0x480 fs/btrfs/free-space-tree.c:1323
          btrfs_start_pre_rw_mount+0x218/0xf60 fs/btrfs/disk-io.c:2999
          open_ctree+0x41ab/0x52e0 fs/btrfs/disk-io.c:3554
          btrfs_fill_super fs/btrfs/super.c:946 [inline]
          btrfs_get_tree_super fs/btrfs/super.c:1863 [inline]
          btrfs_get_tree+0x11e9/0x1b90 fs/btrfs/super.c:2089
          vfs_get_tree+0x8f/0x380 fs/super.c:1780
          fc_mount+0x16/0xc0 fs/namespace.c:1125
          btrfs_get_tree_subvol fs/btrfs/super.c:2052 [inline]
          btrfs_get_tree+0xa53/0x1b90 fs/btrfs/super.c:2090
          vfs_get_tree+0x8f/0x380 fs/super.c:1780
          do_new_mount fs/namespace.c:3352 [inline]
          path_mount+0x6e1/0x1f10 fs/namespace.c:3679
          do_mount fs/namespace.c:3692 [inline]
          __do_sys_mount fs/namespace.c:3898 [inline]
          __se_sys_mount fs/namespace.c:3875 [inline]
          __ia32_sys_mount+0x295/0x320 fs/namespace.c:3875
          do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
          __do_fast_syscall_32+0x73/0x120 arch/x86/entry/common.c:386
          do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
          entry_SYSENTER_compat_after_hwframe+0x84/0x8e

   -> #0 (sb_internal#3){.+.+}-{0:0}:
          check_prev_add kernel/locking/lockdep.c:3134 [inline]
          check_prevs_add kernel/locking/lockdep.c:3253 [inline]
          validate_chain kernel/locking/lockdep.c:3869 [inline]
          __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
          lock_acquire kernel/locking/lockdep.c:5754 [inline]
          lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
          percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
          __sb_start_write include/linux/fs.h:1655 [inline]
          sb_start_intwrite include/linux/fs.h:1838 [inline]
          start_transaction+0xbc1/0x1a70 fs/btrfs/transaction.c:694
          btrfs_commit_inode_delayed_inode+0x110/0x330 fs/btrfs/delayed-inode.c:1275
          btrfs_evict_inode+0x960/0xe80 fs/btrfs/inode.c:5291
          evict+0x2ed/0x6c0 fs/inode.c:667
          iput_final fs/inode.c:1741 [inline]
          iput.part.0+0x5a8/0x7f0 fs/inode.c:1767
          iput+0x5c/0x80 fs/inode.c:1757
          btrfs_scan_root fs/btrfs/extent_map.c:1118 [inline]
          btrfs_free_extent_maps+0xbd3/0x1320 fs/btrfs/extent_map.c:1189
          super_cache_scan+0x409/0x550 fs/super.c:227
          do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
          shrink_slab+0x18a/0x1310 mm/shrinker.c:662
          shrink_one+0x493/0x7c0 mm/vmscan.c:4790
          shrink_many mm/vmscan.c:4851 [inline]
          lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4951
          shrink_node mm/vmscan.c:5910 [inline]
          kswapd_shrink_node mm/vmscan.c:6720 [inline]
          balance_pgdat+0x1105/0x1970 mm/vmscan.c:6911
          kswapd+0x5ea/0xbf0 mm/vmscan.c:7180
          kthread+0x2c1/0x3a0 kernel/kthread.c:389
          ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
          ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

   other info that might help us debug this:

   Chain exists of:
     sb_internal#3 --> btrfs_trans_num_extwriters --> fs_reclaim

   Possible unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(fs_reclaim);
                                 lock(btrfs_trans_num_extwriters);
                                 lock(fs_reclaim);
    rlock(sb_internal#3);

    *** DEADLOCK ***

   2 locks held by kswapd0/111:
    #0: ffffffff8dd3a9a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xa88/0x1970 mm/vmscan.c:6924
    #1: ffff88801eae40e0 (&type->s_umount_key#62){++++}-{3:3}, at: super_trylock_shared fs/super.c:562 [inline]
    #1: ffff88801eae40e0 (&type->s_umount_key#62){++++}-{3:3}, at: super_cache_scan+0x96/0x550 fs/super.c:196

   stack backtrace:
   CPU: 0 PID: 111 Comm: kswapd0 Not tainted 6.10.0-rc2-syzkaller-00010-g2ab795141095 #0
   Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
   Call Trace:
    <TASK>
    __dump_stack lib/dump_stack.c:88 [inline]
    dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
    check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
    check_prev_add kernel/locking/lockdep.c:3134 [inline]
    check_prevs_add kernel/locking/lockdep.c:3253 [inline]
    validate_chain kernel/locking/lockdep.c:3869 [inline]
    __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
    lock_acquire kernel/locking/lockdep.c:5754 [inline]
    lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
    percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
    __sb_start_write include/linux/fs.h:1655 [inline]
    sb_start_intwrite include/linux/fs.h:1838 [inline]
    start_transaction+0xbc1/0x1a70 fs/btrfs/transaction.c:694
    btrfs_commit_inode_delayed_inode+0x110/0x330 fs/btrfs/delayed-inode.c:1275
    btrfs_evict_inode+0x960/0xe80 fs/btrfs/inode.c:5291
    evict+0x2ed/0x6c0 fs/inode.c:667
    iput_final fs/inode.c:1741 [inline]
    iput.part.0+0x5a8/0x7f0 fs/inode.c:1767
    iput+0x5c/0x80 fs/inode.c:1757
    btrfs_scan_root fs/btrfs/extent_map.c:1118 [inline]
    btrfs_free_extent_maps+0xbd3/0x1320 fs/btrfs/extent_map.c:1189
    super_cache_scan+0x409/0x550 fs/super.c:227
    do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
    shrink_slab+0x18a/0x1310 mm/shrinker.c:662
    shrink_one+0x493/0x7c0 mm/vmscan.c:4790
    shrink_many mm/vmscan.c:4851 [inline]
    lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4951
    shrink_node mm/vmscan.c:5910 [inline]
    kswapd_shrink_node mm/vmscan.c:6720 [inline]
    balance_pgdat+0x1105/0x1970 mm/vmscan.c:6911
    kswapd+0x5ea/0xbf0 mm/vmscan.c:7180
    kthread+0x2c1/0x3a0 kernel/kthread.c:389
    ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
    ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
    </TASK>

So fix this by using btrfs_add_delayed_iput() so that the final iput is delegated to the cleaner kthread.

Link: https://lore.kernel.org/linux-btrfs/[email protected]/
Reported-by: [email protected]
Fixes: 956a17d9d050 ("btrfs: add a shrinker for extent maps")
Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
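
A minimal sketch of the resulting pattern in the shrinker's per-root scan (the context is paraphrased, not the exact upstream code; only the substitution of the iput call is the point):

    /* Sketch: releasing an inode from the extent map shrinker. */
    static void release_scanned_inode(struct btrfs_inode *inode)
    {
            /*
             * A plain iput(&inode->vfs_inode) here may be the final iput and
             * trigger eviction (writeback wait, delayed item flushing, a
             * transaction start) from reclaim context, deadlocking against
             * fs_reclaim as in the report above.
             */
            btrfs_add_delayed_iput(inode);  /* final iput runs in the cleaner kthread */
    }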

2024-07-11  btrfs: fix extent map use-after-free when adding pages to compressed bio  (Filipe Manana, 1 file, -1/+1)

At add_ra_bio_pages() we are accessing the extent map to calculate 'add_size' after we dropped our reference on the extent map, resulting in a use-after-free. Fix this by computing 'add_size' before dropping our extent map reference.

Reported-by: [email protected]
Link: https://lore.kernel.org/linux-btrfs/[email protected]/
Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
CC: [email protected] # 6.1+
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
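
The fix follows the general rule of reading everything needed from a reference-counted object before dropping the reference; a simplified sketch (the variable names are illustrative):

    /* Sketch: compute add_size while we still hold a reference on 'em'. */
    u64 add_size = min(em->start + em->len, page_end + 1) - cur;

    free_extent_map(em);    /* drop our reference; 'em' must not be touched after this */

    /* ... only 'add_size' is used from here on, never 'em' ... */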

2024-07-11  btrfs: fix bitmap leak when loading free space cache on duplicate entry  (Filipe Manana, 1 file, -0/+1)

If we fail to link a free space entry because there's already a conflicting entry for the same offset, we free the free space entry but we don't free the associated bitmap that we had just allocated before. Fix that by freeing the bitmap before freeing the entry.

Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
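
A sketch of the corrected error path when loading the cache (identifiers follow the free space cache code, but treat the exact context and message as illustrative):

    /* Sketch: duplicate offset found while loading a bitmap entry. */
    ret = link_free_space(ctl, e);
    if (ret) {
            btrfs_err(fs_info, "duplicate entry in free space cache");
            /*
             * 'e->bitmap' was allocated just before the link attempt;
             * it was leaked here before this fix.
             */
            kmem_cache_free(btrfs_free_space_bitmap_cachep, e->bitmap);
            kmem_cache_free(btrfs_free_space_cachep, e);
            goto free_cache;
    }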

2024-07-11  btrfs: remove the BUG_ON() inside extent_range_clear_dirty_for_io()  (Qu Wenruo, 1 file, -6/+20)

Previously we had a BUG_ON() inside extent_range_clear_dirty_for_io(), as we expected all involved folios to still be locked, thus no folio should be missing. However extent_range_clear_dirty_for_io() itself can skip a missing folio, handle the remaining ones, and return an error if anything went wrong.

Remove the BUG_ON() and let the caller handle the error. In the caller we do not have a quick way to clean up after the error, but all the compression routines treat a missing folio as an error and properly error out, so we only need an ASSERT() for developers, while for non-debug builds the compression routine handles the error correctly.

Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
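
A sketch of the reworked helper under those assumptions (the folio iteration details are paraphrased):

    /* Sketch: tolerate a missing folio, remember the error, keep going. */
    static int extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end)
    {
            unsigned long index = start >> PAGE_SHIFT;
            const unsigned long end_index = end >> PAGE_SHIFT;
            int ret = 0;

            while (index <= end_index) {
                    struct folio *folio = filemap_get_folio(inode->i_mapping, index);

                    if (IS_ERR(folio)) {
                            ret = PTR_ERR(folio);   /* was a BUG_ON() before */
                            index++;
                            continue;
                    }
                    index = folio_next_index(folio);
                    folio_clear_dirty_for_io(folio);
                    folio_put(folio);
            }
            return ret;
    }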

2024-07-11  btrfs: move extent_range_clear_dirty_for_io() into inode.c  (Qu Wenruo, 3 files, -16/+15)

The function is only used inside inode.c by compress_file_range(), so move it to inode.c and unexport it.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: enhance compression error messages  (David Sterba, 3 files, -44/+125)

Add more verbose and specific messages to all main error points in the compression code, for all algorithms. Currently there's no way to know which inode is affected or where in the data the errors happened.

The messages follow a common format:

- what happened
- error code if relevant
- root and inode
- additional data like offsets or lengths

There's no helper for the messages as they differ in some details and that would be cumbersome to generalize into a single function. As all these errors are of the "almost never happens" kind, the checks are annotated with unlikely() since compression is a hot path.

Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: fix data race when accessing the last_trans field of a root  (Filipe Manana, 6 files, -13/+23)

KCSAN complains about a data race when accessing the last_trans field of a root:

   [ 199.553628] BUG: KCSAN: data-race in btrfs_record_root_in_trans [btrfs] / record_root_in_trans [btrfs]

   [ 199.555186] read to 0x000000008801e308 of 8 bytes by task 2812 on cpu 1:
   [ 199.555210]  btrfs_record_root_in_trans+0x9a/0x128 [btrfs]
   [ 199.555999]  start_transaction+0x154/0xcd8 [btrfs]
   [ 199.556780]  btrfs_join_transaction+0x44/0x60 [btrfs]
   [ 199.557559]  btrfs_dirty_inode+0x9c/0x140 [btrfs]
   [ 199.558339]  btrfs_update_time+0x8c/0xb0 [btrfs]
   [ 199.559123]  touch_atime+0x16c/0x1e0
   [ 199.559151]  pipe_read+0x6a8/0x7d0
   [ 199.559179]  vfs_read+0x466/0x498
   [ 199.559204]  ksys_read+0x108/0x150
   [ 199.559230]  __s390x_sys_read+0x68/0x88
   [ 199.559257]  do_syscall+0x1c6/0x210
   [ 199.559286]  __do_syscall+0xc8/0xf0
   [ 199.559318]  system_call+0x70/0x98

   [ 199.559431] write to 0x000000008801e308 of 8 bytes by task 2808 on cpu 0:
   [ 199.559464]  record_root_in_trans+0x196/0x228 [btrfs]
   [ 199.560236]  btrfs_record_root_in_trans+0xfe/0x128 [btrfs]
   [ 199.561097]  start_transaction+0x154/0xcd8 [btrfs]
   [ 199.561927]  btrfs_join_transaction+0x44/0x60 [btrfs]
   [ 199.562700]  btrfs_dirty_inode+0x9c/0x140 [btrfs]
   [ 199.563493]  btrfs_update_time+0x8c/0xb0 [btrfs]
   [ 199.564277]  file_update_time+0xb8/0xf0
   [ 199.564301]  pipe_write+0x8ac/0xab8
   [ 199.564326]  vfs_write+0x33c/0x588
   [ 199.564349]  ksys_write+0x108/0x150
   [ 199.564372]  __s390x_sys_write+0x68/0x88
   [ 199.564397]  do_syscall+0x1c6/0x210
   [ 199.564424]  __do_syscall+0xc8/0xf0
   [ 199.564452]  system_call+0x70/0x98

This is because we update and read last_trans concurrently without any type of synchronization. This should be generally harmless and in the worst case it can make us do extra locking (btrfs_record_root_in_trans()), trigger some warnings at ctree.c, or do extra work during relocation - this would probably only happen in case of load or store tearing.

So fix this by always reading and updating the field using READ_ONCE() and WRITE_ONCE(). This silences KCSAN and prevents load and store tearing.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: David Sterba <[email protected]>
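
The standard pattern for a lockless shared field is to funnel all access through annotated helpers; a sketch (the helper names are assumptions, not necessarily the upstream ones):

    /* Sketch: every read/write of root->last_trans goes through these. */
    static inline u64 btrfs_get_root_last_trans(const struct btrfs_root *root)
    {
            return READ_ONCE(root->last_trans);
    }

    static inline void btrfs_set_root_last_trans(struct btrfs_root *root, u64 transid)
    {
            WRITE_ONCE(root->last_trans, transid);
    }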

2024-07-11  btrfs: rename the extra_gfp parameter of btrfs_alloc_page_array()  (Qu Wenruo, 5 files, -16/+16)

There is only one caller utilizing the @extra_gfp parameter, alloc_eb_folio_array(), and in that case extra_gfp is only ever set to __GFP_NOFAIL. Rename the @extra_gfp parameter to @nofail to indicate that.

Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
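
A sketch of what the clarified signature looks like (the body is simplified; the real allocator may batch allocations, and leaving error cleanup to the caller is an assumption here):

    /* Sketch: a boolean instead of an open-ended gfp mask. */
    int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array,
                               bool nofail)
    {
            const gfp_t gfp = nofail ? (GFP_NOFS | __GFP_NOFAIL) : GFP_NOFS;
            unsigned int i;

            for (i = 0; i < nr_pages; i++) {
                    page_array[i] = alloc_page(gfp);
                    if (!page_array[i])
                            return -ENOMEM;  /* pages allocated so far are freed by the caller */
            }
            return 0;
    }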

2024-07-11  btrfs: remove the extra_gfp parameter from btrfs_alloc_folio_array()  (Qu Wenruo, 3 files, -8/+5)

The function btrfs_alloc_folio_array() is only utilized in btrfs_submit_compressed_read() and nowhere else, and that only caller does not use the @extra_gfp parameter, so remove it.

Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
2024-07-11btrfs: introduce new "rescue=ignoresuperflags" mount optionQu Wenruo4-5/+26
This new mount option allows the kernel to skip the super flags check, it's mostly to allow the kernel to do a rescue mount of an interrupted checksum conversion. Reviewed-by: Josef Bacik <[email protected]> Signed-off-by: Qu Wenruo <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2024-07-11btrfs: introduce new "rescue=ignoremetacsums" mount optionQu Wenruo8-12/+34
Introduce "rescue=ignoremetacsums" to ignore metadata csums, all the other metadata sanity checks are still kept as is. This new mount option is mostly to allow the kernel to mount an interrupted checksum conversion (at the metadata csum overwrite stage). And since the main part of metadata sanity checks is inside tree-checker, we shouldn't lose much safety, and the new mount option is rescue mount option it requires full read-only mount. Reviewed-by: Josef Bacik <[email protected]> Signed-off-by: Qu Wenruo <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: output the unrecognized super block flags as hex  (Qu Wenruo, 1 file, -1/+1)

Most of the extra super block flags are beyond 32 bits (from CHANGING_FSID_V2 to CHANGING_*_CSUMS), thus printing them with %llu is both too long and hard to read. Print the unrecognized flags as hex instead.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
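
For illustration, the kind of change involved (the exact message text and mask usage are assumptions based on the description):

    /* Sketch: report unknown superblock flags in hex so individual bits,
     * including those above bit 32, are readable at a glance. */
    if (btrfs_super_flags(sb) & ~BTRFS_SUPER_FLAG_SUPP)
            btrfs_warn(fs_info, "unrecognized super flag: 0x%llx",
                       btrfs_super_flags(sb) & ~BTRFS_SUPER_FLAG_SUPP);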

2024-07-11  btrfs: remove unused Opt enums  (Qu Wenruo, 1 file, -3/+0)

The following three Opt_* enums haven't been utilized since the port to the new mount API:

- Opt_ignorebadroots
- Opt_ignoredatacsums
- Opt_rescue_all

All those enums are from the old days when we had dedicated mount options; nowadays they have been moved into the "rescue=" mount option group and there are no more global tokens for them. So we can safely remove them now.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: tree-checker: add extra ram_bytes and disk_num_bytes check  (Qu Wenruo, 1 file, -0/+18)

This ensures that non-compressed file extents (both regular and prealloc) have matching ram_bytes and disk_num_bytes. The check is only enabled for the CONFIG_BTRFS_DEBUG and CONFIG_BTRFS_ASSERT cases; furthermore it does not return an error, only a kernel warning to inform developers.

Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
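
A sketch of such a debug-only check inside the file extent item checker (the guard and message wording are assumptions; the accessors are the usual extent buffer helpers):

    /* Sketch: warn on ram_bytes/disk_num_bytes mismatch, debug builds only. */
    #ifdef CONFIG_BTRFS_DEBUG
    if (btrfs_file_extent_compression(leaf, fi) == BTRFS_COMPRESS_NONE &&
        btrfs_file_extent_ram_bytes(leaf, fi) !=
        btrfs_file_extent_disk_num_bytes(leaf, fi))
            file_extent_err(leaf, slot,
                    "mismatched ram_bytes (%llu) and disk_num_bytes (%llu) for non-compressed extent",
                    btrfs_file_extent_ram_bytes(leaf, fi),
                    btrfs_file_extent_disk_num_bytes(leaf, fi));
    #endif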

2024-07-11  btrfs: fix the ram_bytes assignment for truncated ordered extents  (Qu Wenruo, 1 file, -3/+1)

[HICCUP]
After adding extra checks on btrfs_file_extent_item::ram_bytes to the tree-checker, running fsstress leads to a tree-checker warning at write time, as we created file extent items with an invalid ram_bytes.

All those offending file extents have offset 0, and a ram_bytes matching num_bytes and smaller than disk_num_bytes. This would also trigger the recently enhanced btrfs-check, which catches such mismatches and reports them as minor errors.

[CAUSE]
When a folio/page is invalidated and it is part of a submitted ordered extent (OE), we mark the OE truncated just to the beginning of the folio/page. And for a truncated OE, we insert the file extent item with an incorrect value for ram_bytes (using num_bytes instead of the usual value).

This is not a big deal for end users, as we do not utilize the ram_bytes field for regular non-compressed extents. The mismatch is just a small violation of the on-disk format.

[FIX]
Fix it by removing the override on btrfs_file_extent_item::ram_bytes.

Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: make validate_extent_map() catch ram_bytes mismatch  (Qu Wenruo, 1 file, -0/+5)

Previously validate_extent_map() was only meant to catch bugs related to extent_map member cleanups. But with the recent btrfs-check enhancement to catch ram_bytes mismatches with disk_num_bytes, it would be much better to catch such extent maps earlier.

So this patch adds extra ram_bytes validation for extent maps. Please note that older filesystems with such a mismatch won't trigger this error, since the previous patch already fixed up extent_map::ram_bytes for the affected file extents. So this enhanced sanity check should not affect end users.

Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: ignore incorrect btrfs_file_extent_item::ram_bytes  (Qu Wenruo, 1 file, -0/+7)

[HICCUP]
Kernels can create file extent items with an incorrect ram_bytes like this:

   item 6 key (257 EXTENT_DATA 0) itemoff 15816 itemsize 53
           generation 7 type 1 (regular)
           extent data disk byte 13631488 nr 32768
           extent data offset 0 nr 4096 ram 4096
           extent compression 0 (none)

Thankfully the kernel can handle them properly, as in that case ram_bytes is not utilized at all.

[ENHANCEMENT]
Since the hiccup is not going to cause any data loss and is only a minor violation of the on-disk format, here we only need to ignore the incorrect ram_bytes value and use the correct one from btrfs_file_extent_item::disk_num_bytes.

Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
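
A sketch of the read-side fallback described above (the surrounding function is assumed to be the file-extent-to-extent-map conversion):

    /* Sketch: distrust ram_bytes for non-compressed extents and fall back
     * to disk_num_bytes, which is what actually matters here. */
    u64 ram_bytes = btrfs_file_extent_ram_bytes(leaf, fi);
    const u64 disk_num_bytes = btrfs_file_extent_disk_num_bytes(leaf, fi);

    if (btrfs_file_extent_compression(leaf, fi) == BTRFS_COMPRESS_NONE &&
        ram_bytes != disk_num_bytes)
            ram_bytes = disk_num_bytes;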

2024-07-11  btrfs: cleanup the bytenr usage inside btrfs_extent_item_to_extent_map()  (Qu Wenruo, 1 file, -5/+4)

[HICCUP]
Before commit 85de2be7129c ("btrfs: remove extent_map::block_start member"), we utilized the @bytenr variable inside btrfs_extent_item_to_extent_map() to calculate block_start. But since that commit removed block_start completely, we have no need to advance @bytenr at all.

[ENHANCEMENT]
- Rename @bytenr to @disk_bytenr
- Only declare @disk_bytenr inside the if branch
- Make @disk_bytenr const and remove the modification of it

Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: fix typo in error message in btrfs_validate_super()  (Mark Harmstone, 1 file, -1/+1)

There's a typo in an error message when checking the block group tree feature: it mentions fres-space-tree instead of free-space-tree. Fix that.

Signed-off-by: Mark Harmstone <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: move the direct IO code into its own file  (Filipe Manana, 8 files, -1059/+1095)

The direct IO code is over a thousand lines and it's currently spread between file.c and inode.c, which makes it hard to locate some parts of it sometimes. Also, inode.c is about 11 thousand lines and file.c about 4 thousand lines, both too big.

So move all the direct IO code into a dedicated file, so that it's easy to locate all its code and to reduce the sizes of inode.c and file.c.

This is a pure move of code without any other changes, except for exporting a couple of functions from inode.c (get_extent_allocation_hint() and create_io_em()) because they are used in inode.c and the new direct-io.c file, and a couple of functions from file.c (btrfs_buffered_write() and btrfs_write_check()) because they are used both in file.c and in the new direct-io.c file.

Reviewed-by: Boris Burkov <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: pass a btrfs_inode to btrfs_set_prop()  (David Sterba, 4 files, -13/+13)

Pass a struct btrfs_inode to btrfs_set_prop() as it's an internal interface, allowing us to remove some uses of BTRFS_I.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: pass a btrfs_inode to btrfs_compress_heuristic()  (David Sterba, 3 files, -4/+4)

Pass a struct btrfs_inode to btrfs_compress_heuristic() as it's an internal interface, allowing us to remove some uses of BTRFS_I.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: switch btrfs_ordered_extent::inode to struct btrfs_inode  (David Sterba, 6 files, -20/+20)

The structure is internal, so we should use struct btrfs_inode for the member, allowing us to remove some uses of BTRFS_I.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: switch btrfs_pending_snapshot::dir to btrfs_inode  (David Sterba, 3 files, -3/+3)

The structure is internal so we should use struct btrfs_inode for that.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: pass a btrfs_inode to btrfs_ioctl_send()  (David Sterba, 3 files, -7/+7)

Pass a struct btrfs_inode to btrfs_ioctl_send() and _btrfs_ioctl_send() as it's an internal interface, allowing us to remove some uses of BTRFS_I.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: switch btrfs_block_group::inode to struct btrfs_inode  (David Sterba, 3 files, -5/+5)

The structure is internal so we should use struct btrfs_inode for that.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: pass a btrfs_inode to is_data_inode()  (David Sterba, 5 files, -7/+7)

Pass a struct btrfs_inode to is_data_inode() as it's an internal interface, allowing us to remove some uses of BTRFS_I.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: pass a btrfs_inode to btrfs_readdir_get_delayed_items()  (David Sterba, 3 files, -6/+6)

Pass a struct btrfs_inode to btrfs_readdir_get_delayed_items() as it's an internal interface, allowing us to remove some uses of BTRFS_I.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: pass a btrfs_inode to btrfs_readdir_put_delayed_items()  (David Sterba, 3 files, -4/+4)

Pass a struct btrfs_inode to btrfs_readdir_put_delayed_items() as it's an internal interface, allowing us to remove some uses of BTRFS_I.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: remove raid-stripe-tree encoding field from stripe_extent  (Johannes Thumshirn, 5 files, -42/+1)

Remove the encoding field from 'struct btrfs_stripe_extent'. It was originally intended to encode the RAID type as well as whether we're a data or a parity stripe. But the RAID type can be inferred from the block group, and the data vs. parity differentiation can be done more easily by adding a new key type for parity stripes in the RAID stripe tree.

Signed-off-by: Johannes Thumshirn <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: print-tree: add generation and type dump for EXTENT_DATA_KEY  (Qu Wenruo, 1 file, -0/+3)

When debugging the recent ram_bytes mismatch bug, I could hit it with the enhanced tree-checker for file extent items at write time. But the bug is not that easy to trigger (mostly triggered with btrfs/06*, which uses 20 fsstress threads), and when I hit it, the only info was the kernel leaf dump, which doesn't include things like the file extent type (REGULAR or PREALLOC).

Add the dump for generation and type (although only as numeric output) to make debugging a little easier.

Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: urgent periodic reclaim pass  (Boris Burkov, 1 file, -1/+34)

Periodic reclaim attempts to avoid reclaiming block groups that see active use, via a sweep mark that gets cleared on allocation and set on a sweep. In urgent conditions where we have very little unallocated space (less than one chunk used by the threshold calculation for the unallocated target), we want to be able to override this mechanism.

Introduce a second pass that only happens if we fail to find a reclaim candidate and reclaim is urgent. In that case, do a second pass where all block groups are eligible.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: prevent pathological periodic reclaim loops  (Boris Burkov, 3 files, -11/+71)

Periodic reclaim runs the risk of getting stuck in a state where it keeps reclaiming the same block group over and over. This can happen if:

1. reclaiming that block group fails, or
2. reclaiming that block group fails to move any extents into existing block groups and just allocates a fresh chunk and moves everything.

Currently, 1. is a very tight loop inside the reclaim worker. That is critical for edge-triggered reclaim, or else we risk forgetting about a reclaimable group. On the other hand, with level-triggered reclaim we can break out of that loop and get to it later.

With that fixed, 2. applies to both failures and "successes" with no progress. If we have done a periodic reclaim on a space_info and nothing has changed in that space_info, there is not much point in trying again, so don't, until enough space gets freed, which we capture with a heuristic of needing to net free 1 chunk.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: periodic block_group reclaim  (Boris Burkov, 5 files, -0/+95)

We currently employ an edge-triggered block group reclaim strategy which marks block groups for reclaim as they free down past a threshold. With a dynamic threshold, this is worse than doing it in a level-triggered fashion periodically. That is because the reclaim itself happens periodically, so the threshold at that point in time is what really matters, not the threshold at freeing time. If we mark for reclaim in a big pass, then sort by usage and do reclaim, we also benefit from a negative feedback loop preventing unnecessary reclaims as we crunch through the "best" candidates.

Since this is quite a different model, it requires some additional support. The edge-triggered reclaim has a good heuristic for not reclaiming fresh block groups, so we need to replace that with a typical GC sweep mark which skips block groups that have seen an allocation since the last sweep.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: dynamic block_group reclaim threshold  (Boris Burkov, 4 files, -27/+169)

We can currently recover allocated block groups by:
- explicitly starting balance operations
- "auto reclaim" via bg_reclaim_threshold

The latter works by checking against a fixed threshold on frees. If we pass from above the threshold to below, relocation triggers and the block group will get reclaimed by the cleaner thread (assuming it is still eligible).

Picking a threshold is challenging. Too high, and you end up trying to reclaim very full block groups, which is quite costly, and you don't do reclaim on block groups that don't get quite THAT full but could still be quite fragmented and strand a lot of space. Too low, and you similarly miss out on reclaim even if you badly need it to avoid running out of unallocated space, if you have heavily fragmented block groups living above the threshold. No matter the threshold, it suffers from a workload that happens to bounce around that threshold, which can introduce arbitrary amounts of reclaim waste.

To improve this situation, introduce a dynamic threshold. The basic idea behind this threshold is that it should be very lax when there is plenty of unallocated space, and increasingly aggressive as we approach zero unallocated space. To that end, it sets a target for unallocated space (10 chunks) and then linearly increases the threshold as the amount of space short of the target grows. The formula is:

   (target - unalloc) / target

I tested this by running it on three interesting workloads:
1. bounce allocations around X% full.
2. fill up all the way and introduce full fragmentation.
3. write in a fragmented way until the filesystem is just about full.

1. and 2. attack the weaknesses of a fixed threshold; fixed either works perfectly or fully falls apart, depending on the threshold. Dynamic always handles these cases well. 3. attacks dynamic by checking whether it is too zealous to reclaim in conditions with low unallocated and low unused. It tends to claw back 1GiB of unallocated fairly aggressively, but not much more. Early versions of the dynamic threshold struggled on this test.

Additional work could be done to intelligently ratchet up the urgency of reclaim in very low unallocated conditions. Existing mechanisms are already useless in that case anyway.

Reviewed-by: Josef Bacik <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
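
A sketch of the formula as code (the helper name and percentage scaling are illustrative; the upstream implementation may differ in detail):

    /* Sketch: threshold is 0 while unallocated space meets the target
     * (10 chunks), then grows linearly toward 100% as it approaches 0. */
    static int calc_dynamic_reclaim_threshold(u64 unalloc, u64 chunk_size)
    {
            const u64 target = 10 * chunk_size;

            if (unalloc >= target)
                    return 0;
            /* (target - unalloc) / target, expressed as a percentage */
            return div64_u64(100 * (target - unalloc), target);
    }

For example, with a 1GiB chunk size the target is 10GiB; at 5GiB unallocated the threshold is 50%, and at 1GiB unallocated it is 90%.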

2024-07-11  btrfs: store fs_info in space_info  (Boris Burkov, 2 files, -0/+2)

This is handy when computing space_info dynamic reclaim thresholds where we do not have access to a block group. We could add it to the various functions as a parameter, but it seems reasonable for space_info to have an fs_info pointer.

Reviewed-by: Josef Bacik <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: report reclaim stats in sysfs  (Boris Burkov, 3 files, -0/+34)

When evaluating various reclaim strategies/thresholds against each other, it is useful to collect data about the amount of reclaim happening. Expose a count, error count, and byte count via sysfs per space_info.

Note that this is only for automatic reclaim, not manually invoked balances or other codepaths that use "relocate_block_group".

Reviewed-by: Josef Bacik <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: qgroup: warn about inconsistent qgroups when relation update fails  (David Sterba, 1 file, -2/+3)

Calling btrfs_handle_fs_error() after btrfs_run_qgroups() fails to update the qgroup status is probably not necessary; this would turn the filesystem read-only. For the same reason, aborting the transaction is also not a good option.

The state is left inconsistent and can be fixed by a rescan, so printing a warning should be sufficient. The return code reflects the status of adding/deleting the relation and whether the transaction was ended properly.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: qgroup: preallocate memory before adding a relation  (David Sterba, 3 files, -19/+34)

There's a transaction joined in the qgroup relation add/remove ioctl and any error will lead to abort/error. We could lift the allocation from btrfs_add_qgroup_relation() and move it outside of the transaction context. The relation deletion does not need that.

The ownership of the structure is moved to the add relation handler.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: abort transaction on errors in btrfs_free_chunk()  (David Sterba, 1 file, -6/+9)

The errors during removing a chunk item are fatal: we expect to have a matching item in the chunk map from which the chunk_offset is taken. Handle that by transaction abort.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: only print error message when checking item size in print_extent_item()  (David Sterba, 1 file, -1/+1)

The extent item used to have a v0 format that was removed in 6.6. There's a check for the minimum expected size that could lead to btrfs_handle_fs_error(), which would make the filesystem read-only. As we don't have v0 anymore (and haven't seen any reports in the deprecation period), handle this in a less intrusive way: only print an error message.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: abort transaction if we don't find extref in btrfs_del_inode_extref()  (David Sterba, 1 file, -2/+2)

When an extended ref is deleted, we do a sanity check right before removing the item: if we can't find it, then handle the error. This used to be done by btrfs_handle_fs_error(), which dates from the time before we had the transaction abort infrastructure, so switch to that. The end result is the same: the error is reported and the filesystem is switched to read-only. We now return the -ENOENT error code, as this better represents what happened.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: avoid allocating and running pointless delayed extent operations  (Filipe Manana, 3 files, -39/+39)

We always allocate a delayed extent op structure when allocating a tree block (except for log trees), but most of the time we don't need it: we only need to set the BTRFS_BLOCK_FLAG_FULL_BACKREF flag if we're dealing with a relocation tree, and we only need to set the key of a tree block in a btrfs_tree_block_info structure if we are not using skinny metadata (a feature enabled by default since btrfs-progs 3.18 and available as of kernel 3.10).

In these cases, where we need neither to update flags nor to set the key, we only use the delayed extent op structure to set the tree block's level. This is a waste of memory, and besides that, the memory allocation can fail and can add additional latency.

Instead of using a delayed extent op structure to store the level of the tree block, use the delayed ref head to store it. This doesn't change the size of either structure and helps us avoid allocating delayed extent op structures when using the skinny metadata feature and there's no relocation going on. This also gets rid of a BUG_ON().

For example, for a fs_mark run with 5 iterations, 8 threads and 100K files per iteration, before this patch there were 118109 allocations of delayed extent op structures and after it there were none.

Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: preallocate ulist memory for qgroup rsv  (Boris Burkov, 4 files, -3/+29)

When qgroups are enabled, during data reservation we allocate the ulist_nodes that track the exact reserved extents with GFP_ATOMIC unconditionally. This is unnecessary, and we can follow the model already employed by the struct extent_state we preallocate in the non-qgroups case, which should reduce the risk of allocation failures with GFP_ATOMIC.

Add a prealloc node to struct ulist which ulist_add will grab when it is present, and try to allocate it before taking the tree lock while we can still take advantage of a less strict gfp mask. The lifetime of that node belongs to the new prealloc field until it is used, at which point it belongs to the ulist linked list.

Reviewed-by: Qu Wenruo <[email protected]>
Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
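
The general pattern, sketched (the helper name ulist_prealloc() and the field handling are assumptions based on the description above):

    /* Sketch: allocate with a lenient mask before taking the lock ... */
    ulist_prealloc(ulist, GFP_KERNEL);      /* fills ulist->prealloc, may sleep */

    spin_lock(&tree->lock);
    /*
     * ... so that ulist_add() under the lock can consume ulist->prealloc
     * instead of calling kmalloc(GFP_ATOMIC), which is far more likely
     * to fail under memory pressure.
     */
    ulist_add(ulist, bytenr, aux, GFP_ATOMIC);
    spin_unlock(&tree->lock);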

2024-07-11  btrfs: don't BUG_ON() when 0 reference count at btrfs_lookup_extent_info()  (Filipe Manana, 1 file, -4/+20)

Instead of doing a BUG_ON(), handle the error by returning -EUCLEAN, aborting the transaction and logging an error message.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: reduce nesting for extent processing at btrfs_lookup_extent_info()  (Filipe Manana, 1 file, -13/+9)

Instead of using an if-else statement when processing the extent item at btrfs_lookup_extent_info(), use a single if statement for the error case, since it does a goto at the end, and leave the success (expected) case following the if statement, reducing indentation and making the logic a bit easier to follow. Also mark the if statement's condition as unlikely(), since it's not expected to ever happen (it signals some corruption), making that clear and hinting the compiler to generate more efficient code.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: David Sterba <[email protected]>
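
The shape of the restructured flow, sketched (the size check and message are placeholders for the actual corruption check):

    /* Sketch: error case first, annotated unlikely(); happy path unindented. */
    if (unlikely(item_size < sizeof(*ei))) {
            ret = -EUCLEAN;
            btrfs_err(fs_info, "unexpected extent item size, has %u expect >= %zu",
                      item_size, sizeof(*ei));
            btrfs_abort_transaction(trans, ret);
            goto out_free;
    }

    /* Expected case continues at the original indentation level. */
    ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_extent_item);
    num_refs = btrfs_extent_refs(leaf, ei);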

2024-07-11  btrfs: remove superfluous metadata check at btrfs_lookup_extent_info()  (Filipe Manana, 1 file, -1/+1)

If we didn't find an extent item with the initial btrfs_search_slot() call, it's pointless to test if the "metadata" variable is "true", because right after that we check if the key type is BTRFS_METADATA_ITEM_KEY, and that is the case only when "metadata" is set to "true". So remove the redundant check.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: replace BUG_ON() with error handling at update_ref_for_cow()  (Filipe Manana, 1 file, -2/+10)

Instead of a BUG_ON(), just return an error, log an error message and abort the transaction in case we find an extent buffer belonging to the relocation tree that doesn't have the full backref flag set. This is unexpected and should never happen (save for bugs or bad memory).

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: simplify setting the full backref flag at update_ref_for_cow()  (Filipe Manana, 1 file, -7/+4)

We keep a "new_flags" variable only to keep track of whether we need to update the metadata extent's flags, and when we set BTRFS_BLOCK_FLAG_FULL_BACKREF in the variable we do it in an inner scope. Then we check in an outer scope if the variable is not 0, and if so call btrfs_set_disk_extent_flags(). The variable isn't used for anything else. This is somewhat confusing, so to make it more straightforward update the extent's flags where we are currently updating "new_flags" and remove the variable.

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: David Sterba <[email protected]>

2024-07-11  btrfs: remove NULL transaction support for btrfs_lookup_extent_info()  (Filipe Manana, 1 file, -14/+2)

There are no callers of btrfs_lookup_extent_info() that pass a NULL value for the transaction handle argument, so there's no point in having special logic to deal with the NULL. The last caller that passed a NULL value was removed in commit 19b546d7a1b2 ("btrfs: relocation: Use btrfs_find_all_leafs to locate data extent parent tree leaves"). So remove the NULL handling from btrfs_lookup_extent_info().

Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>