path: root/fs
Age | Commit message | Author | Files | Lines
2021-06-23 | f2fs: compress: add compress_inode to cache compressed blocks | Chao Yu | 10 | -14/+357
Support using the address space of an inner inode to cache compressed blocks, in order to improve the cache hit ratio of random reads. Signed-off-by: Chao Yu <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
2021-06-23 | f2fs: clean up /sys/fs/f2fs/<disk>/features | Jaegeuk Kim | 2 | -64/+134
Let's create /sys/fs/f2fs/<disk>/feature_list/ to meet the sysfs rule. Note that there are three feature list entries:
1) /sys/fs/f2fs/features : shows runtime features supported by in-kernel f2fs along with Kconfig. (ref. F2FS_FEATURE_RO_ATTR())
2) /sys/fs/f2fs/$s_id/features <deprecated> : shows on-disk features enabled by mkfs.f2fs, used for old kernels. This won't gain new features anymore, and thus users should check the entries in 3) instead of this 2).
3) /sys/fs/f2fs/$s_id/feature_list : shows on-disk features enabled by mkfs.f2fs per instance, following the sysfs entry rule where each entry should expose a single value. This list covers the old feature list provided by 2) and beyond; therefore, please add new on-disk features to this list only. (ref. F2FS_SB_FEATURE_RO_ATTR())
Reviewed-by: Chao Yu <[email protected]> Reviewed-by: Eric Biggers <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
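As a quick illustration of the layout described above, here is a minimal userspace sketch that walks the per-instance feature_list directory and prints one value per entry. The device name "sda1" is only a placeholder for $s_id and the program is an example, not part of the patch:

  #include <dirent.h>
  #include <stdio.h>

  int main(void)
  {
          const char *dir_path = "/sys/fs/f2fs/sda1/feature_list"; /* "sda1" is a placeholder for $s_id */
          char path[512], buf[128];
          struct dirent *de;
          DIR *dir = opendir(dir_path);

          if (!dir) {
                  perror("opendir");
                  return 1;
          }
          while ((de = readdir(dir)) != NULL) {
                  FILE *f;

                  if (de->d_name[0] == '.')
                          continue;
                  snprintf(path, sizeof(path), "%s/%s", dir_path, de->d_name);
                  f = fopen(path, "r");
                  if (!f)
                          continue;
                  if (fgets(buf, sizeof(buf), f))
                          printf("%s: %s", de->d_name, buf); /* each entry exposes a single value */
                  fclose(f);
          }
          closedir(dir);
          return 0;
  }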
2021-06-23 | f2fs: add pin_file in feature list | Jaegeuk Kim | 1 | -0/+2
This patch adds missing pin_file feature supported by kernel. Fixes: f5a53edcf01e ("f2fs: support aligned pinned file") Reviewed-by: Chao Yu <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
2021-06-23 | f2fs: Advertise encrypted casefolding in sysfs | Daniel Rosenberg | 1 | -0/+8
Older kernels don't support encryption with casefolding. This adds the sysfs entry encrypted_casefold to show support for those combined features. Support for this feature was originally added by commit 7ad08a58bf67 ("f2fs: Handle casefolding with Encryption") Fixes: 7ad08a58bf67 ("f2fs: Handle casefolding with Encryption") Cc: [email protected] # v5.11+ Signed-off-by: Daniel Rosenberg <[email protected]> Reviewed-by: Eric Biggers <[email protected]> Reviewed-by: Chao Yu <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
2021-06-23 | f2fs: Show casefolding support only when supported | Daniel Rosenberg | 1 | -0/+4
The casefolding feature is only supported when CONFIG_UNICODE is set. This modifies the feature list f2fs presents under sysfs accordingly. Fixes: 5aba54302a46 ("f2fs: include charset encoding information in the superblock") Cc: [email protected] # v5.4+ Signed-off-by: Daniel Rosenberg <[email protected]> Reviewed-by: Eric Biggers <[email protected]> Reviewed-by: Chao Yu <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
2021-06-23 | f2fs: support RO feature | Jaegeuk Kim | 4 | -6/+46
Given the RO feature in the superblock, we don't need to check the provisioning/reserved spaces and the SSA area. Reviewed-by: Chao Yu <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
2021-06-23 | f2fs: logging neatening | Joe Perches | 5 | -21/+17
Update the logging uses that have unnecessary newlines, as the f2fs_printk function (and so its f2fs_<level> macro callers) already adds one. This allows searching single-line logging entries with an easier grep and also avoids unnecessary blank lines in the logging. Miscellanea: coalesce formats; align to open parenthesis. Signed-off-by: Joe Perches <[email protected]> Reviewed-by: Chao Yu <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
2021-06-23 | f2fs: introduce FI_COMPRESS_RELEASED instead of using IMMUTABLE bit | Jaegeuk Kim | 3 | -7/+20
Once we release compressed blocks, we used to set the IMMUTABLE bit. But it turned out that this disallows every fs operation, which we don't need for compression. Let's just prevent writing data instead. Reviewed-by: Chao Yu <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
2021-06-23 | f2fs: compress: remove unneeded preallocation | Chao Yu | 2 | -28/+3
We reserve iblocks for the space saved by compression, so during a compressed cluster overwrite we don't need to preallocate blocks for the later write. In addition, this adds a BUG_ON to detect a wrong reserved iblock number in __f2fs_cluster_blocks(). Bug fix in the original patch by Jaegeuk: if we released compressed blocks that had the immutable bit set, we can see a smaller number of compressed block addresses. Let's fix the wrong BUG_ON. Signed-off-by: Chao Yu <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
2021-06-22 | ext4: use local variable ei instead of EXT4_I() macro | chenyichong | 1 | -1/+1
Signed-off-by: chenyichong <[email protected]> Reviewed-by: Ritesh Harjani <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
2021-06-22 | ext4: fix avefreec in find_group_orlov | Pan Dong | 1 | -6/+5
The avefreec should be the average free clusters instead of the average free blocks, otherwise Orlov's allocator will not work properly when bigalloc is enabled. Cc: [email protected] Signed-off-by: Pan Dong <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
2021-06-22 | ext4: correct the cache_nr in tracepoint ext4_es_shrink_exit | Zhang Yi | 1 | -0/+1
The cache_cnt parameter of the tracepoint ext4_es_shrink_exit means the remaining cache count after shrink, but now it is the cache count before shrink; fix it by reading sbi->s_extent_cache_cnt again. Fixes: 1ab6c4997e04 ("fs: convert fs shrinkers to new scan/count API") Cc: [email protected] # 3.12+ Signed-off-by: Zhang Yi <[email protected]> Reviewed-by: Jan Kara <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
2021-06-22 | ext4: remove check for zero nr_to_scan in ext4_es_scan() | Zhang Yi | 1 | -3/+0
After converting fs shrinkers to the new scan/count API, we no longer pass a zero nr_to_scan parameter to detect the number of objects to free; just remove this check. Fixes: 1ab6c4997e04 ("fs: convert fs shrinkers to new scan/count API") Cc: [email protected] # 3.12+ Signed-off-by: Zhang Yi <[email protected]> Reviewed-by: Jan Kara <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
2021-06-22 | ext4: remove set but rewrite variables | Tian Tao | 1 | -1/+1
In the ext4_dx_add_entry function, the 'at' variable is assigned but is reset right after the "again:" label, so delete the unnecessary assignment. This does not change the logic. Signed-off-by: Tian Tao <[email protected]> Reviewed-by: Artem Blagodarenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
2021-06-22 | ext4: add ioctl EXT4_IOC_CHECKPOINT | Leah Rumancik | 2 | -0/+64
ioctl EXT4_IOC_CHECKPOINT checkpoints and flushes the journal. This includes forcing all the transactions to the log, checkpointing the transactions, and flushing the log to disk. This ioctl takes a u32 "flags" argument. Three flags are supported. EXT4_IOC_CHECKPOINT_FLAG_DRY_RUN can be used to verify input to the ioctl. It returns an error if there is any invalid input, otherwise it returns success without performing any checkpointing. The other two flags, EXT4_IOC_CHECKPOINT_FLAG_DISCARD and EXT4_IOC_CHECKPOINT_FLAG_ZEROOUT, can be used to issue requests to discard or zero out the journal log blocks, respectively. At this point, EXT4_IOC_CHECKPOINT_FLAG_ZEROOUT is primarily added to enable testing of this codepath on devices that don't support discard. EXT4_IOC_CHECKPOINT_FLAG_DISCARD and EXT4_IOC_CHECKPOINT_FLAG_ZEROOUT cannot both be set.
Systems that wish to achieve a content deletion SLO can set up a daemon that calls this ioctl at a regular interval such that it matches the SLO requirement. Thus, with this patch, the ext4_dir_entry2 wipeout patch[1], and the Ext4 "-o discard" mount option set, Ext4 can now guarantee that all file contents, file metadata, and filenames will not be accessible through the filesystem and will have had discard or zeroout requests issued for the corresponding device blocks.
The __jbd2_journal_erase function could also be used to discard or zero-fill the journal during journal load after recovery. This would provide a potential solution to a journal replay bug reported earlier this year[2]. After a successful journal recovery, e2fsck can call this ioctl to discard the journal as well.
[1] https://lore.kernel.org/linux-ext4/[email protected]/ [2] https://lore.kernel.org/linux-ext4/[email protected]/ Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Leah Rumancik <[email protected]> Signed-off-by: Theodore Ts'o <[email protected]>
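A hedged userspace sketch of driving the new ioctl follows. The flag names come from the commit message; the ioctl number and flag values in the fallback defines are transcribed by hand and are assumptions that must be verified against the ext4 header of the running kernel before use:

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/types.h>

  /* Assumed values; verify against fs/ext4/ext4.h of your kernel. */
  #ifndef EXT4_IOC_CHECKPOINT
  #define EXT4_IOC_CHECKPOINT               _IOW('f', 43, __u32)
  #define EXT4_IOC_CHECKPOINT_FLAG_DISCARD  0x1
  #define EXT4_IOC_CHECKPOINT_FLAG_ZEROOUT  0x2
  #define EXT4_IOC_CHECKPOINT_FLAG_DRY_RUN  0x4
  #endif

  int main(int argc, char **argv)
  {
          __u32 flags = EXT4_IOC_CHECKPOINT_FLAG_DRY_RUN; /* validate input only */
          int fd;

          if (argc != 2) {
                  fprintf(stderr, "usage: %s <ext4 mountpoint>\n", argv[0]);
                  return 1;
          }
          fd = open(argv[1], O_RDONLY);
          if (fd < 0) {
                  perror("open");
                  return 1;
          }
          if (ioctl(fd, EXT4_IOC_CHECKPOINT, &flags) < 0)
                  perror("EXT4_IOC_CHECKPOINT");
          close(fd);
          return 0;
  }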
2021-06-22 | ext4: add discard/zeroout flags to journal flush | Leah Rumancik | 6 | -16/+129
Add a flags argument to jbd2_journal_flush to enable discarding or zero-filling the journal blocks while flushing the journal. Signed-off-by: Leah Rumancik <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
2021-06-22 | btrfs: remove unused btrfs_fs_info::total_pinned | Nikolay Borisov | 1 | -2/+0
This got added 14 years ago in 324ae4df00fd ("Btrfs: Add block group pinned accounting back") but it was not ever used. Subsequently its usage got gradually removed in 8790d502e440 ("Btrfs: Add support for mirroring across drives") and 11833d66be94 ("Btrfs: improve async block group caching"). Let's remove it for good! Reviewed-by: Johannes Thumshirn <[email protected]> Signed-off-by: Nikolay Borisov <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | smbdirect: missing rc checks while waiting for rdma events | Steve French | 1 | -2/+12
There were two places where we weren't checking for error (e.g. ERESTARTSYS) while waiting for rdma resolution. Addresses-Coverity: 1462165 ("Unchecked return value") Reviewed-by: Tom Talpey <[email protected]> Reviewed-by: Long Li <[email protected]> Signed-off-by: Steve French <[email protected]>
2021-06-22 | btrfs: rip out btrfs_space_info::total_bytes_pinned | Josef Bacik | 7 | -97/+0
We used this in may_commit_transaction() in order to determine if we needed to commit the transaction. However we no longer have that logic and thus have no use of this counter anymore, so delete it. Reviewed-by: Nikolay Borisov <[email protected]> Signed-off-by: Josef Bacik <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: rip the first_ticket_bytes logic from fail_all_tickets | Josef Bacik | 1 | -16/+0
This was a trick implemented to handle the case where we had a giant reservation in front of a bunch of little reservations in the ticket queue. If the giant reservation was too large for the transaction commit to make a difference we'd ENOSPC everybody out instead of committing the transaction. This logic was put in to force us to go back and re-try the transaction commit logic to see if we could make progress. Instead now we know we've committed the transaction, so any space that would have been recovered is now available, and would be caught by the btrfs_try_granting_tickets() in this loop, so we no longer need this code and can simply delete it. Reviewed-by: Nikolay Borisov <[email protected]> Signed-off-by: Josef Bacik <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: remove FLUSH_DELAYED_REFS from data ENOSPC flushing | Josef Bacik | 1 | -16/+0
Since we unconditionally commit the transaction now we no longer need to run the delayed refs to make sure our total_bytes_pinned value is uptodate, we can simply commit the transaction. Remove this stage from the data flushing list. Reviewed-by: Nikolay Borisov <[email protected]> Signed-off-by: Josef Bacik <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: rip out may_commit_transaction | Josef Bacik | 2 | -126/+11
may_commit_transaction was introduced before the ticketing infrastructure existed. There was a problem where we'd legitimately be out of space, but every reservation would trigger a transaction commit and then fail. Thus if you had 1000 things trying to make a reservation, they'd all do the flushing loop and thus commit the transaction 1000 times before they'd get their ENOSPC. This helper was introduced to short circuit this, if there wasn't space that could be reclaimed by committing the transaction then simply ENOSPC out. This made true ENOSPC tests much faster as we didn't waste a bunch of time. However many of our bugs over the years have been from cases where we didn't account for some space that would be reclaimed by committing a transaction. The delayed refs rsv space, delayed rsv, many pinned bytes miscalculations, etc. And in the meantime the original problem has been solved with ticketing. We no longer will commit the transaction 1000 times. Instead we'll get 1000 waiters, we will go through the flushing mechanisms, and if there's no progress after 2 loops we ENOSPC everybody out. The ticketing infrastructure gives us a deterministic way to see if we're making progress or not, thus we avoid a lot of extra work. So simplify this step by simply unconditionally committing the transaction. This removes what is arguably our most common source of early ENOSPC bugs and will allow us to drastically simplify many of the things we track because we simply won't need them with this stuff gone. Reviewed-by: Nikolay Borisov <[email protected]> Signed-off-by: Josef Bacik <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: send: fix crash when memory allocations trigger reclaim | Filipe Manana | 5 | -31/+2
When doing a send we don't expect the task to ever start a transaction after the initial check that verifies if commit roots match the regular roots. This is because after that we set current->journal_info with a stub (special value) that signals we are in send context, so that we take a read lock on an extent buffer when reading it from disk and verifying it is valid (its generation matches the generation stored in the parent). This stub was introduced in 2014 by commit a26e8c9f75b0bf ("Btrfs: don't clear uptodate if the eb is under IO") in order to fix a concurrency issue between send and balance. However there is one particular exception where we end up needing to start a transaction and when this happens it results in a crash with a stack trace like the following: [60015.902283] kernel: WARNING: CPU: 3 PID: 58159 at arch/x86/include/asm/kfence.h:44 kfence_protect_page+0x21/0x80 [60015.902292] kernel: Modules linked in: uinput rfcomm snd_seq_dummy (...) [60015.902384] kernel: CPU: 3 PID: 58159 Comm: btrfs Not tainted 5.12.9-300.fc34.x86_64 #1 [60015.902387] kernel: Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./F2A88XN-WIFI, BIOS F6 12/24/2015 [60015.902389] kernel: RIP: 0010:kfence_protect_page+0x21/0x80 [60015.902393] kernel: Code: ff 0f 1f 84 00 00 00 00 00 55 48 89 fd (...) [60015.902396] kernel: RSP: 0018:ffff9fb583453220 EFLAGS: 00010246 [60015.902399] kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff9fb583453224 [60015.902401] kernel: RDX: ffff9fb583453224 RSI: 0000000000000000 RDI: 0000000000000000 [60015.902402] kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000 [60015.902404] kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000002 [60015.902406] kernel: R13: ffff9fb583453348 R14: 0000000000000000 R15: 0000000000000001 [60015.902408] kernel: FS: 00007f158e62d8c0(0000) GS:ffff93bd37580000(0000) knlGS:0000000000000000 [60015.902410] kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [60015.902412] kernel: CR2: 0000000000000039 CR3: 00000001256d2000 CR4: 00000000000506e0 [60015.902414] kernel: Call Trace: [60015.902419] kernel: kfence_unprotect+0x13/0x30 [60015.902423] kernel: page_fault_oops+0x89/0x270 [60015.902427] kernel: ? search_module_extables+0xf/0x40 [60015.902431] kernel: ? search_bpf_extables+0x57/0x70 [60015.902435] kernel: kernelmode_fixup_or_oops+0xd6/0xf0 [60015.902437] kernel: __bad_area_nosemaphore+0x142/0x180 [60015.902440] kernel: exc_page_fault+0x67/0x150 [60015.902445] kernel: asm_exc_page_fault+0x1e/0x30 [60015.902450] kernel: RIP: 0010:start_transaction+0x71/0x580 [60015.902454] kernel: Code: d3 0f 84 92 00 00 00 80 e7 06 0f 85 63 (...) [60015.902456] kernel: RSP: 0018:ffff9fb5834533f8 EFLAGS: 00010246 [60015.902458] kernel: RAX: 0000000000000001 RBX: 0000000000000001 RCX: 0000000000000000 [60015.902460] kernel: RDX: 0000000000000801 RSI: 0000000000000000 RDI: 0000000000000039 [60015.902462] kernel: RBP: ffff93bc0a7eb800 R08: 0000000000000001 R09: 0000000000000000 [60015.902463] kernel: R10: 0000000000098a00 R11: 0000000000000001 R12: 0000000000000001 [60015.902464] kernel: R13: 0000000000000000 R14: ffff93bc0c92b000 R15: ffff93bc0c92b000 [60015.902468] kernel: btrfs_commit_inode_delayed_inode+0x5d/0x120 [60015.902473] kernel: btrfs_evict_inode+0x2c5/0x3f0 [60015.902476] kernel: evict+0xd1/0x180 [60015.902480] kernel: inode_lru_isolate+0xe7/0x180 [60015.902483] kernel: __list_lru_walk_one+0x77/0x150 [60015.902487] kernel: ? iput+0x1a0/0x1a0 [60015.902489] kernel: ? 
iput+0x1a0/0x1a0 [60015.902491] kernel: list_lru_walk_one+0x47/0x70 [60015.902495] kernel: prune_icache_sb+0x39/0x50 [60015.902497] kernel: super_cache_scan+0x161/0x1f0 [60015.902501] kernel: do_shrink_slab+0x142/0x240 [60015.902505] kernel: shrink_slab+0x164/0x280 [60015.902509] kernel: shrink_node+0x2c8/0x6e0 [60015.902512] kernel: do_try_to_free_pages+0xcb/0x4b0 [60015.902514] kernel: try_to_free_pages+0xda/0x190 [60015.902516] kernel: __alloc_pages_slowpath.constprop.0+0x373/0xcc0 [60015.902521] kernel: ? __memcg_kmem_charge_page+0xc2/0x1e0 [60015.902525] kernel: __alloc_pages_nodemask+0x30a/0x340 [60015.902528] kernel: pipe_write+0x30b/0x5c0 [60015.902531] kernel: ? set_next_entity+0xad/0x1e0 [60015.902534] kernel: ? switch_mm_irqs_off+0x58/0x440 [60015.902538] kernel: __kernel_write+0x13a/0x2b0 [60015.902541] kernel: kernel_write+0x73/0x150 [60015.902543] kernel: send_cmd+0x7b/0xd0 [60015.902545] kernel: send_extent_data+0x5a3/0x6b0 [60015.902549] kernel: process_extent+0x19b/0xed0 [60015.902551] kernel: btrfs_ioctl_send+0x1434/0x17e0 [60015.902554] kernel: ? _btrfs_ioctl_send+0xe1/0x100 [60015.902557] kernel: _btrfs_ioctl_send+0xbf/0x100 [60015.902559] kernel: ? enqueue_entity+0x18c/0x7b0 [60015.902562] kernel: btrfs_ioctl+0x185f/0x2f80 [60015.902564] kernel: ? psi_task_change+0x84/0xc0 [60015.902569] kernel: ? _flat_send_IPI_mask+0x21/0x40 [60015.902572] kernel: ? check_preempt_curr+0x2f/0x70 [60015.902576] kernel: ? selinux_file_ioctl+0x137/0x1e0 [60015.902579] kernel: ? expand_files+0x1cb/0x1d0 [60015.902582] kernel: ? __x64_sys_ioctl+0x82/0xb0 [60015.902585] kernel: __x64_sys_ioctl+0x82/0xb0 [60015.902588] kernel: do_syscall_64+0x33/0x40 [60015.902591] kernel: entry_SYSCALL_64_after_hwframe+0x44/0xae [60015.902595] kernel: RIP: 0033:0x7f158e38f0ab [60015.902599] kernel: Code: ff ff ff 85 c0 79 9b (...) [60015.902602] kernel: RSP: 002b:00007ffcb2519bf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [60015.902605] kernel: RAX: ffffffffffffffda RBX: 00007ffcb251ae00 RCX: 00007f158e38f0ab [60015.902607] kernel: RDX: 00007ffcb2519cf0 RSI: 0000000040489426 RDI: 0000000000000004 [60015.902608] kernel: RBP: 0000000000000004 R08: 00007f158e297640 R09: 00007f158e297640 [60015.902610] kernel: R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000 [60015.902612] kernel: R13: 0000000000000002 R14: 00007ffcb251aee0 R15: 0000558c1a83e2a0 [60015.902615] kernel: ---[ end trace 7bbc33e23bb887ae ]--- This happens because when writing to the pipe, by calling kernel_write(), we end up doing page allocations using GFP_HIGHUSER | __GFP_ACCOUNT as the gfp flags, which allow reclaim to happen if there is memory pressure. This allocation happens at fs/pipe.c:pipe_write(). If the reclaim is triggered, inode eviction can be triggered and that in turn can result in starting a transaction if the inode has a link count of 0. The transaction start happens early on during eviction, when we call btrfs_commit_inode_delayed_inode() at btrfs_evict_inode(). This happens if there is currently an open file descriptor for an inode with a link count of 0 and the reclaim task gets a reference on the inode before that descriptor is closed, in which case the reclaim task ends up doing the final iput that triggers the inode eviction. When we have assertions enabled (CONFIG_BTRFS_ASSERT=y), this triggers the following assertion at transaction.c:start_transaction(): /* Send isn't supposed to start transactions. 
*/ ASSERT(current->journal_info != BTRFS_SEND_TRANS_STUB); And when assertions are not enabled, it triggers a crash since after that assertion we cast current->journal_info into a transaction handle pointer and then dereference it: if (current->journal_info) { WARN_ON(type & TRANS_EXTWRITERS); h = current->journal_info; refcount_inc(&h->use_count); (...) Which obviously results in a crash due to an invalid memory access. The same type of issue can happen during other memory allocations we do directly in the send code with kmalloc (and friends) as they use GFP_KERNEL and therefore may trigger reclaim too, which started to happen since 2016 after commit e780b0d1c1523e ("btrfs: send: use GFP_KERNEL everywhere"). The issue could be solved by setting up a NOFS context for the entire send operation so that reclaim could not be triggered when allocating memory or pages through kernel_write(). However that is not very friendly and we can in fact get rid of the send stub because: 1) The stub was introduced way back in 2014 by commit a26e8c9f75b0bf ("Btrfs: don't clear uptodate if the eb is under IO") to solve an issue exclusive to when send and balance are running in parallel, however there were other problems between balance and send and we do not allow anymore to have balance and send run concurrently since commit 9e967495e0e0ae ("Btrfs: prevent send failures and crashes due to concurrent relocation"). More generically the issues are between send and relocation, and that last commit eliminated only the possibility of having send and balance run concurrently, but shrinking a device also can trigger relocation, and on zoned filesystems we have relocation of partially used block groups triggered automatically as well. The previous patch that has a subject of: "btrfs: ensure relocation never runs while we have send operations running" Addresses all the remaining cases that can trigger relocation. 2) We can actually allow starting and even committing transactions while in a send context if needed because send is not holding any locks that would block the start or the commit of a transaction. So get rid of all the logic added by commit a26e8c9f75b0bf ("Btrfs: don't clear uptodate if the eb is under IO"). We can now always call clear_extent_buffer_uptodate() at verify_parent_transid() since send is the only case that uses commit roots without having a transaction open or without holding the commit_root_sem. Reported-by: Chris Murphy <[email protected]> Link: https://lore.kernel.org/linux-btrfs/CAJCQCtRQ57=qXo3kygwpwEBOU_CA_eKvdmjP52sU=eFvuVOEGw@mail.gmail.com/ Signed-off-by: Filipe Manana <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
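For reference, a sketch of the alternative this commit message mentions and rejects (wrapping send in a NOFS allocation context) might look like the fragment below. The surrounding function and do_send_work() are hypothetical placeholders; only memalloc_nofs_save()/memalloc_nofs_restore() from <linux/sched/mm.h> are real kernel API, and this is not the fix that was merged:

  #include <linux/sched/mm.h>

  /* Hypothetical wrapper: every allocation in the region behaves as GFP_NOFS,
   * so reclaim triggered by kernel_write()/kmalloc() cannot recurse into the fs. */
  static long hypothetical_send_with_nofs(struct send_ctx *sctx)
  {
          unsigned int nofs_flags;
          long ret;

          nofs_flags = memalloc_nofs_save();
          ret = do_send_work(sctx);        /* placeholder for the send loop */
          memalloc_nofs_restore(nofs_flags);
          return ret;
  }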
2021-06-22 | btrfs: ensure relocation never runs while we have send operations running | Filipe Manana | 6 | -19/+32
Relocation and send do not play well together because while send is running a block group can be relocated, a transaction committed and the respective disk extents get re-allocated and written to or discarded while send is about to do something with the extents. This was explained in commit 9e967495e0e0ae ("Btrfs: prevent send failures and crashes due to concurrent relocation"), which prevented balance and send from running in parallel but it did not address one remaining case where chunk relocation can happen: shrinking a device (and device deletion which shrinks a device's size to 0 before deleting the device). We also have now one more case where relocation is triggered: on zoned filesystems partially used block groups get relocated by a background thread, introduced in commit 18bb8bbf13c183 ("btrfs: zoned: automatically reclaim zones"). So make sure that instead of preventing balance from running when there are ongoing send operations, we prevent relocation from happening. This uses the infrastructure recently added by a patch that has the subject: "btrfs: add cancellable chunk relocation support". Also it adds a spinlock used exclusively for the exclusivity between send and relocation, as before fs_info->balance_mutex was used, which would make an attempt to run send to block waiting for balance to finish, which can take a lot of time on large filesystems. Signed-off-by: Filipe Manana <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: shorten integrity checker extent data mount option | David Sterba | 3 | -6/+4
Subjectively, CHECK_INTEGRITY_INCLUDING_EXTENT_DATA is quite long and calling it CHECK_INTEGRITY_DATA still keeps the meaning and matches the mount option name. Reviewed-by: Anand Jain <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: switch mount option bits to enums and use wider type | David Sterba | 1 | -32/+33
Switch defines of BTRFS_MOUNT_* to an enum (the symbolic names are recorded in the debugging information for convenience). There are two more things done but separating them would not make much sense as it's touching the same lines: - Renumber shifts 18..31 to 17..30 to get rid of the hole in the sequence. - Use 1UL as the value that gets shifted because we're approaching the 32bit limit and due to integer promotions the value of (1 << 31) becomes 0xffffffff80000000 when cast to unsigned long (eg. the option manipulating helpers). This is not causing any problems yet as the operations are in-memory and masking the 31st bit works, we don't have more than 31 bits so the ill effects of not masking higher bits don't happen. But once we have more, the problems will emerge. Reviewed-by: Anand Jain <[email protected]> Signed-off-by: David Sterba <[email protected]>
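The integer-promotion point above can be checked with a small standalone program; the printed values assume a typical 64-bit Linux build (32-bit int, 64-bit long) and the same two's-complement behaviour of (1 << 31) that the commit message describes:

  #include <stdio.h>

  int main(void)
  {
          unsigned long from_int = (unsigned long)(1 << 31); /* shift done as int, then widened with sign extension */
          unsigned long from_ul  = 1UL << 31;                /* shift done in unsigned long, no sign extension */

          printf("(unsigned long)(1 << 31) = 0x%lx\n", from_int); /* 0xffffffff80000000 */
          printf("1UL << 31               = 0x%lx\n", from_ul);  /* 0x80000000 */
          return 0;
  }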
2021-06-22 | btrfs: props: change how empty value is interpreted | David Sterba | 1 | -0/+14
Based on user feedback and actual problems with compression property, there's no support to unset any compression options, or to force no compression flag. Note: This has changed recently in e2fsprogs 1.46.2, 'chattr +m' (setting NOCOMPRESS). In btrfs properties, the empty value should really mean reset to defaults, for all properties in general. Right now there's only the compression one, so this change should not cause too many problems. Old behaviour: $ lsattr file ---------------------- file # the NOCOMPRESS bit is set $ btrfs prop set file compression '' $ lsattr file ---------------------m file This is equivalent to 'btrfs prop set file compression no' in current btrfs-progs as the 'no' or 'none' values are translated to an empty string. This is where the new behaviour is different: empty string drops the compression flag (-c) and nocompress (-m): $ lsattr file ---------------------- file # No change $ btrfs prop set file compression '' $ lsattr file ---------------------- file $ btrfs prop set file compression lzo $ lsattr file --------c------------- file $ btrfs prop get file compression compression=lzo $ btrfs prop set file compression '' # Reset to the initial state $ lsattr file ---------------------- file # Set NOCOMPRESS bit $ btrfs prop set file compression no $ lsattr file ---------------------m file This obviously brings problems with backward compatibility, so this patch should not be backported without making sure the updated btrfs-progs are also used and that scripts have been updated to use the new semantics. Summary: - old kernel: no, none, "" - set NOCOMPRESS bit - new kernel: no, none - set NOCOMPRESS bit "" - drop all compression flags, ie. COMPRESS and NOCOMPRESS Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: compression: don't try to compress if we don't have enough pages | David Sterba | 1 | -1/+1
The early check if we should attempt compression does not take into account the number of input pages. It can happen that there's only one page, eg. a tail page after some ranges of the BTRFS_MAX_UNCOMPRESSED have been processed, or an isolated page that won't be converted to an inline extent. The single page would be compressed but a later check would drop it again because the result size must be at least one block shorter than the input. That can never work with just one page. CC: [email protected] # 4.4+ Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: fix unbalanced unlock in qgroup_account_snapshot() | Naohiro Aota | 1 | -1/+1
qgroup_account_snapshot() is trying to unlock the not-taken tree_log_mutex in an error path. Since ret != 0 in this case, we can just return from here. Fixes: 2a4d84c11a87 ("btrfs: move delayed ref flushing for qgroup into qgroup helper") CC: [email protected] # 5.12+ Reviewed-by: Qu Wenruo <[email protected]> Signed-off-by: Naohiro Aota <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: sysfs: export dev stats in devinfo directory | David Sterba | 1 | -0/+29
The device stats can be read by ioctl, wrapped by the command 'btrfs device stats'. Provide another source from which to read the information, in /sys/fs/btrfs/FSID/devinfo/DEVID/error_stats. The format is a list of 'key value' pairs, one per line, which is common in other stat files. The names are the same as used in other device stat outputs. The stats are all in one file as it's a snapshot of all available stats; the 'one value per file' format is not very suitable here. The stats should be valid right after the stats item is read from disk, shortly after initializing the device. In case the stats are not yet valid, print just 'invalid' as the file contents. Reviewed-by: Anand Jain <[email protected]> Signed-off-by: David Sterba <[email protected]>
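A minimal reader for the new file, assuming only what the commit message states (the path layout and the 'key value' one-pair-per-line format); the key names are not hardcoded and the path is passed in by the caller:

  #include <stdio.h>

  int main(int argc, char **argv)
  {
          /* argv[1]: e.g. /sys/fs/btrfs/<FSID>/devinfo/<DEVID>/error_stats */
          char key[64];
          unsigned long long value;
          FILE *f;

          if (argc != 2)
                  return 1;
          f = fopen(argv[1], "r");
          if (!f) {
                  perror("fopen");
                  return 1;
          }
          /* Parse generic "key value" pairs; stops if the file just says "invalid". */
          while (fscanf(f, "%63s %llu", key, &value) == 2)
                  printf("%-20s %llu\n", key, value);
          fclose(f);
          return 0;
  }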
2021-06-22 | btrfs: fix typos in comments | David Sterba | 19 | -28/+28
Fix typos that have snuck in since the last round. Found by codespell. Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: remove a stale comment for btrfs_decompress_bio() | Qu Wenruo | 1 | -14/+0
Since commit 8140dc30a432 ("btrfs: btrfs_decompress_bio() could accept compressed_bio instead"), btrfs_decompress_bio() accepts a "struct compressed_bio" rather than an open-coded parameter list. Thus the comments for the parameter list are no longer needed. Reviewed-by: Anand Jain <[email protected]> Signed-off-by: Qu Wenruo <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: send: use list_move_tail instead of list_del/list_add_tail | Baokun Li | 1 | -11/+7
Use list_move_tail() instead of list_del() + list_add_tail() as it's doing the same thing and allows further cleanups. Open code name_cache_used() as there is only one user. Reported-by: Hulk Robot <[email protected]> Signed-off-by: Baokun Li <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
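Illustration of the transformation (the entry and list-head names below are placeholders, not necessarily the identifiers touched in send.c); both forms detach the node and re-insert it at the tail of the list:

  /* before: two calls */
  list_del(&nce->list);
  list_add_tail(&nce->list, &cache_list);

  /* after: one helper from <linux/list.h> with the same effect */
  list_move_tail(&nce->list, &cache_list);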
2021-06-22 | btrfs: disable build on platforms having page size 256K | Christophe Leroy | 1 | -0/+2
With a config having PAGE_SIZE set to 256K, the BTRFS build fails with the following message: include/linux/compiler_types.h:326:38: error: call to '__compiletime_assert_791' declared with attribute error: BUILD_BUG_ON failed: (BTRFS_MAX_COMPRESSED % PAGE_SIZE) != 0 BTRFS_MAX_COMPRESSED being 128K, BTRFS cannot support platforms with 256K pages for the time being. There are two platforms that can select 256K pages: hexagon and powerpc. Disable BTRFS when a 256K page size is selected. Supporting this would require changes to the subpage mode that's currently being developed. Given that 256K is many times larger than commonly used page sizes and than what the algorithms and structures have been tuned for, it's out of scope and disabling the build is a reasonable option. Reported-by: kernel test robot <[email protected]> Signed-off-by: Christophe Leroy <[email protected]> [ update changelog ] Signed-off-by: David Sterba <[email protected]>
2021-06-22 | btrfs: send: fix invalid path for unlink operations after parent orphanization | Filipe Manana | 1 | -0/+11
During an incremental send operation, when processing the new references for the current inode, we might send an unlink operation for another inode that has a conflicting path and has more than one hard link. However this path was computed and cached before we processed previous new references for the current inode. We may have orphanized a directory of that path while processing a previous new reference, in which case the path will be invalid and cause the receiver process to fail. The following reproducer triggers the problem and explains how/why it happens in its comments: $ cat test-send-unlink.sh #!/bin/bash DEV=/dev/sdi MNT=/mnt/sdi mkfs.btrfs -f $DEV >/dev/null mount $DEV $MNT # Create our test files and directory. Inode 259 (file3) has two hard # links. touch $MNT/file1 touch $MNT/file2 touch $MNT/file3 mkdir $MNT/A ln $MNT/file3 $MNT/A/hard_link # Filesystem looks like: # # . (ino 256) # |----- file1 (ino 257) # |----- file2 (ino 258) # |----- file3 (ino 259) # |----- A/ (ino 260) # |---- hard_link (ino 259) # # Now create the base snapshot, which is going to be the parent snapshot # for a later incremental send. btrfs subvolume snapshot -r $MNT $MNT/snap1 btrfs send -f /tmp/snap1.send $MNT/snap1 # Move inode 257 into directory inode 260. This results in computing the # path for inode 260 as "/A" and caching it. mv $MNT/file1 $MNT/A/file1 # Move inode 258 (file2) into directory inode 260, with a name of # "hard_link", moving first inode 259 away since it currently has that # location and name. mv $MNT/A/hard_link $MNT/tmp mv $MNT/file2 $MNT/A/hard_link # Now rename inode 260 to something else (B for example) and then create # a hard link for inode 258 that has the old name and location of inode # 260 ("/A"). mv $MNT/A $MNT/B ln $MNT/B/hard_link $MNT/A # Filesystem now looks like: # # . (ino 256) # |----- tmp (ino 259) # |----- file3 (ino 259) # |----- B/ (ino 260) # | |---- file1 (ino 257) # | |---- hard_link (ino 258) # | # |----- A (ino 258) # Create another snapshot of our subvolume and use it for an incremental # send. btrfs subvolume snapshot -r $MNT $MNT/snap2 btrfs send -f /tmp/snap2.send -p $MNT/snap1 $MNT/snap2 # Now unmount the filesystem, create a new one, mount it and try to # apply both send streams to recreate both snapshots. umount $DEV mkfs.btrfs -f $DEV >/dev/null mount $DEV $MNT # First add the first snapshot to the new filesystem by applying the # first send stream. btrfs receive -f /tmp/snap1.send $MNT # The incremental receive operation below used to fail with the # following error: # # ERROR: unlink A/hard_link failed: No such file or directory # # This is because when send is processing inode 257, it generates the # path for inode 260 as "/A", since that inode is its parent in the send # snapshot, and caches that path. # # Later when processing inode 258, it first processes its new reference # that has the path of "/A", which results in orphanizing inode 260 # because there is a a path collision. This results in issuing a rename # operation from "/A" to "/o260-6-0". # # Finally when processing the new reference "B/hard_link" for inode 258, # it notices that it collides with inode 259 (not yet processed, because # it has a higher inode number), since that inode has the name # "hard_link" under the directory inode 260. It also checks that inode # 259 has two hardlinks, so it decides to issue a unlink operation for # the name "hard_link" for inode 259. 
However the path passed to the # unlink operation is "/A/hard_link", which is incorrect since currently # "/A" does not exists, due to the orphanization of inode 260 mentioned # before. The path is incorrect because it was computed and cached # before the orphanization. This results in the receiver to fail with # the above error. btrfs receive -f /tmp/snap2.send $MNT umount $MNT When running the test, it fails like this: $ ./test-send-unlink.sh Create a readonly snapshot of '/mnt/sdi' in '/mnt/sdi/snap1' At subvol /mnt/sdi/snap1 Create a readonly snapshot of '/mnt/sdi' in '/mnt/sdi/snap2' At subvol /mnt/sdi/snap2 At subvol snap1 At snapshot snap2 ERROR: unlink A/hard_link failed: No such file or directory Fix this by recomputing a path before issuing an unlink operation when processing the new references for the current inode if we previously have orphanized a directory. A test case for fstests will follow soon. CC: [email protected] # 4.4+ Signed-off-by: Filipe Manana <[email protected]> Signed-off-by: David Sterba <[email protected]>
2021-06-22 | ceph: fix error handling in ceph_atomic_open and ceph_lookup | Jeff Layton | 3 | -17/+21
Commit aa60cfc3f7ee broke the error handling in these functions such that they don't handle non-ENOENT errors from ceph_mdsc_do_request properly. Move the checking of -ENOENT out of ceph_handle_snapdir and into the callers, and if we get a different error, return it immediately. Fixes: aa60cfc3f7ee ("ceph: don't use d_add in ceph_handle_snapdir") Signed-off-by: Jeff Layton <[email protected]> Reviewed-by: Ilya Dryomov <[email protected]> Signed-off-by: Ilya Dryomov <[email protected]>
2021-06-22 | ceph: must hold snap_rwsem when filling inode for async create | Jeff Layton | 2 | -0/+5
...and add a lockdep assertion for it to ceph_fill_inode(). Cc: [email protected] # v5.7+ Fixes: 9a8d03ca2e2c3 ("ceph: attempt to do async create when possible") Signed-off-by: Jeff Layton <[email protected]> Reviewed-by: Ilya Dryomov <[email protected]> Signed-off-by: Ilya Dryomov <[email protected]>
2021-06-22 | quota: remove unnecessary oom message | Zhen Lei | 1 | -21/+12
Fixes scripts/checkpatch.pl warning: WARNING: Possible unnecessary 'out of memory' message. Removing it can help us save a bit of memory. After that, the static function getdqbuf() becomes unnecessary; get rid of it and instead call kmalloc() directly. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Zhen Lei <[email protected]> Signed-off-by: Jan Kara <[email protected]>
2021-06-22 | configfs: simplify configfs_release_bin_file | Christoph Hellwig | 1 | -6/+0
Remove the clearing of various fields just before freeing the buffer structure. Signed-off-by: Christoph Hellwig <[email protected]>
2021-06-22 | configfs: fix memleak in configfs_release_bin_file | Chung-Chiang Cheng | 1 | -5/+5
When a binary attribute read is in progress, buffer->bin_buffer is set up in configfs_read_bin_file() but never freed. Fixes: 03607ace807b4 ("configfs: implement binary attributes") Signed-off-by: Chung-Chiang Cheng <[email protected]> [hch: move the vfree rather than duplicating it] Signed-off-by: Christoph Hellwig <[email protected]>
2021-06-22 | ubifs: Set/Clear I_LINKABLE under i_lock for whiteout inode | Zhihao Cheng | 1 | -0/+7
xfstests-generic/476 reports a warning message as below:
WARNING: CPU: 2 PID: 30347 at fs/inode.c:361 inc_nlink+0x52/0x70
Call Trace:
do_rename+0x502/0xd40 [ubifs]
ubifs_rename+0x8b/0x180 [ubifs]
vfs_rename+0x476/0x1080
do_renameat2+0x67c/0x7b0
__x64_sys_renameat2+0x6e/0x90
do_syscall_64+0x66/0xe0
entry_SYSCALL_64_after_hwframe+0x44/0xae
The following race can cause this:
rename_whiteout (Thread 1)                  wb_workfn (Thread 2)
  ubifs_rename
    do_rename                               __writeback_single_inode
                                              spin_lock(&inode->i_lock)
      whiteout->i_state |= I_LINKABLE        inode->i_state &= ~dirty;
      ---- How the race happens on i_state:
      (tmp = whiteout->i_state | I_LINKABLE)
                                              (tmp = inode->i_state & ~dirty)
      (whiteout->i_state = tmp)
                                              (inode->i_state = tmp)
      ----
                                              spin_unlock(&inode->i_lock)
      inc_nlink(whiteout)
      WARN_ON(!(inode->i_state & I_LINKABLE)) !!!
Fix this by taking i_lock to avoid the i_state update race condition. Fixes: 9e0a1fff8db56ea ("ubifs: Implement RENAME_WHITEOUT") Signed-off-by: Zhihao Cheng <[email protected]> Signed-off-by: Richard Weinberger <[email protected]>
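A sketch of the fix as described above, for the rename/whiteout path: take i_lock around the read-modify-write of i_state so it cannot interleave with the writeback path updating the same field (an illustrative fragment, not the full patch):

  spin_lock(&whiteout->i_lock);
  whiteout->i_state |= I_LINKABLE;
  spin_unlock(&whiteout->i_lock);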
2021-06-22 | ubifs: Fix spelling mistakes | Zheng Yongjun | 6 | -6/+6
Fix some spelling mistakes in comments: withoug ==> without numer ==> number aswell ==> as well referes ==> refers childs ==> children unnecesarry ==> unnecessary Signed-off-by: Zheng Yongjun <[email protected]> Reviewed-by: Alexander Dahl <[email protected]> Signed-off-by: Richard Weinberger <[email protected]>
2021-06-22 | virtiofs: Fix spelling mistakes | Zheng Yongjun | 3 | -5/+5
Fix some spelling mistakes in comments: refernce ==> reference happnes ==> happens threhold ==> threshold splitted ==> split mached ==> matched Signed-off-by: Zheng Yongjun <[email protected]> Signed-off-by: Miklos Szeredi <[email protected]>
2021-06-22 | fuse: use DIV_ROUND_UP helper macro for calculations | Wu Bo | 1 | -1/+1
Replace open coded divisor calculations with the DIV_ROUND_UP kernel macro for better readability. Signed-off-by: Wu Bo <[email protected]> Signed-off-by: Miklos Szeredi <[email protected]>
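What the cleanup boils down to, with illustrative names (nbytes/npages are not necessarily the identifiers changed in the fuse patch); DIV_ROUND_UP(n, d) expands to ((n) + (d) - 1) / (d):

  /* open-coded rounding-up division */
  npages = (nbytes + PAGE_SIZE - 1) / PAGE_SIZE;

  /* same result, with the intent spelled out by the helper macro */
  npages = DIV_ROUND_UP(nbytes, PAGE_SIZE);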
2021-06-22 | fuse: fix illegal access to inode with reused nodeid | Amir Goldstein | 4 | -5/+15
Server responds to LOOKUP and other ops (READDIRPLUS/CREATE/MKNOD/...) with outarg containing nodeid and generation. If a fuse inode is found in the inode cache with the same nodeid but a different generation, the existing fuse inode should be unhashed and marked "bad" and a new inode with the new generation should be hashed instead. This can happen, for example, with a passthrough fuse filesystem that returns the real filesystem ino/generation on lookup and where real inode numbers can get recycled due to real files being unlinked not via the fuse passthrough filesystem. With the current code, this situation will not be detected and an old fuse dentry that used to point to an older-generation real inode can be used to access a completely new inode, which should be accessed only via the new dentry. Note that because the FORGET message carries the nodeid w/o generation, the server should wait to get FORGET counts for the nlookup counts of the old and reused inodes combined, before it can free the resources associated to that nodeid. Signed-off-by: Amir Goldstein <[email protected]> Signed-off-by: Miklos Szeredi <[email protected]>
2021-06-22 | fuse: allow fallocate(FALLOC_FL_ZERO_RANGE) | Richard W.M. Jones | 1 | -4/+6
The current fuse module filters out fallocate(FALLOC_FL_ZERO_RANGE) returning -EOPNOTSUPP. libnbd's nbdfuse would like to translate FALLOC_FL_ZERO_RANGE requests into the NBD command NBD_CMD_WRITE_ZEROES which allows NBD servers that support it to do zeroing efficiently. This commit treats this flag exactly like FALLOC_FL_PUNCH_HOLE. A way to test this, requiring fuse >= 3, nbdkit >= 1.8 and the latest nbdfuse from https://gitlab.com/nbdkit/libnbd/-/tree/master/fuse is to create a file containing some data and "mirror" it to a fuse file: $ dd if=/dev/urandom of=disk.img bs=1M count=1 $ nbdkit file disk.img $ touch mirror.img $ nbdfuse mirror.img nbd://localhost & (mirror.img -> nbdfuse -> NBD over loopback -> nbdkit -> disk.img) You can then run commands such as: $ fallocate -z -o 1024 -l 1024 mirror.img and check that the content of the original file ("disk.img") stays synchronized. To show NBD commands, export LIBNBD_DEBUG=1 before running nbdfuse. To clean up: $ fusermount3 -u mirror.img $ killall nbdkit Signed-off-by: Richard W.M. Jones <[email protected]> Signed-off-by: Miklos Szeredi <[email protected]>
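The "fallocate -z -o 1024 -l 1024 mirror.img" step above corresponds to the standalone C call below; with this patch the request is forwarded to the fuse server instead of failing with EOPNOTSUPP (the file name is the one from the example):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <linux/falloc.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          int fd = open("mirror.img", O_WRONLY);

          if (fd < 0) {
                  perror("open");
                  return 1;
          }
          /* Zero 1024 bytes starting at offset 1024 without changing the file size. */
          if (fallocate(fd, FALLOC_FL_ZERO_RANGE, 1024, 1024) < 0)
                  perror("fallocate");
          close(fd);
          return 0;
  }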
2021-06-22 | fuse: Make fuse_fill_super_submount() static | Greg Kurz | 2 | -11/+2
This function used to be called from fuse_dentry_automount(). This code was moved to fuse_get_tree_submount() in the same file since then. It is unlikely there will ever be another user. No need to be extern in this case. Signed-off-by: Greg Kurz <[email protected]> Reviewed-by: Max Reitz <[email protected]> Signed-off-by: Miklos Szeredi <[email protected]>
2021-06-22 | fuse: Switch to fc_mount() for submounts | Greg Kurz | 1 | -23/+6
fc_mount() already handles the vfs_get_tree(), sb->s_umount unlocking and vfs_create_mount() sequence. Using it greatly simplifies fuse_dentry_automount(). Signed-off-by: Greg Kurz <[email protected]> Reviewed-by: Max Reitz <[email protected]> Signed-off-by: Miklos Szeredi <[email protected]>
2021-06-22 | fuse: Call vfs_get_tree() for submounts | Greg Kurz | 2 | -48/+41
We recently fixed an infinite loop by setting the SB_BORN flag on submounts along with the write barrier needed by super_cache_count(). This is the job of vfs_get_tree() and FUSE shouldn't have to care about the barrier at all. Split out some code from fuse_dentry_automount() to the dedicated fuse_get_tree_submount() handler for submounts and call vfs_get_tree(). Signed-off-by: Greg Kurz <[email protected]> Reviewed-by: Max Reitz <[email protected]> Signed-off-by: Miklos Szeredi <[email protected]>
2021-06-22 | fuse: add dedicated filesystem context ops for submounts | Greg Kurz | 3 | -0/+24
The creation of a submount is open-coded in fuse_dentry_automount(). This brings a lot of complexity and we recently had to fix bugs because we weren't setting SB_BORN or because we were unlocking sb->s_umount before sb was fully configured. Most of these could have been avoided by using the mount API instead of open-coding. Basically, this means coming up with a proper ->get_tree() implementation for submounts and call vfs_get_tree(), or better fc_mount(). The creation of the superblock for submounts is quite different from the root mount. Especially, it doesn't require to allocate a FUSE filesystem context, nor to parse parameters. Introduce a dedicated context ops for submounts to make this clear. This is just a placeholder for now, fuse_get_tree_submount() will be populated in a subsequent patch. Only visible change is that we stop allocating/freeing a useless FUSE filesystem context with submounts. Signed-off-by: Greg Kurz <[email protected]> Reviewed-by: Max Reitz <[email protected]> Signed-off-by: Miklos Szeredi <[email protected]>