path: root/fs
2020-11-06  ext4: clean up the JBD2 API that initializes fast commits  [Harshad Shirwadkar, 4 files, -57/+64]
This patch removes the jbd2_fc_init() API and its related functions to simplify enabling fast commits. With this change, the number of fast commit blocks to use is solely determined by the JBD2 layer. So, we move the default value for the minimum number of fast commit blocks from ext4/fast_commit.h to include/linux/jbd2.h. However, whether or not to use fast commits is determined by the file system. The file system just sets the fast commit feature using jbd2_journal_set_features(). The JBD2 layer then determines how many blocks to use for fast commits (based on the value found in the JBD2 superblock). Note that the JBD2 feature flag for fast commits is just an indication that there are fast commit blocks present on disk. It doesn't tell the JBD2 layer whether the file system intends to use fast commits or not. That's why we blindly clear the fast commit flag in journal_reset() after recovery is done. Suggested-by: Jan Kara <[email protected]> Signed-off-by: Harshad Shirwadkar <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
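For illustration only, a minimal sketch of what opting in to fast commits from a file system could look like after this change. jbd2_journal_check_available_features(), jbd2_journal_set_features() and JBD2_FEATURE_INCOMPAT_FAST_COMMIT are the real JBD2 API; the function name and the error-code choices here are hypothetical:

    #include <linux/jbd2.h>

    /* Hedged sketch, not the actual ext4 patch: the file system only flips
     * the feature bit; JBD2 sizes the fast commit area itself from its
     * superblock. */
    static int example_enable_fast_commits(journal_t *journal)
    {
            if (!jbd2_journal_check_available_features(journal, 0, 0,
                            JBD2_FEATURE_INCOMPAT_FAST_COMMIT))
                    return -EOPNOTSUPP;     /* illustrative error choice */

            if (!jbd2_journal_set_features(journal, 0, 0,
                            JBD2_FEATURE_INCOMPAT_FAST_COMMIT))
                    return -EINVAL;         /* illustrative error choice */

            return 0;
    }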
2020-11-06  jbd2: rename j_maxlen to j_total_len and add jbd2_journal_max_txn_bufs  [Harshad Shirwadkar, 6 files, -13/+13]
The on-disk superblock field sb->s_maxlen represents the total size of the journal including the fast commit area, and is no longer the maximum number of blocks available for a transaction. The maximum number of blocks available to a transaction is reduced by the number of fast commit blocks. So, this patch renames j_maxlen to j_total_len to better represent its intent. Also, it adds a function to calculate the maximum number of buffers available for a transaction. Suggested-by: Jan Kara <[email protected]> Signed-off-by: Harshad Shirwadkar <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
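A hedged sketch of the idea behind such a helper: the per-transaction buffer budget comes out of the total journal length minus whatever is reserved for fast commits. j_total_len is the field introduced by this rename; the num_fc_blocks parameter and the divide-by-four heuristic are illustrative, not a quote of the upstream function:

    #include <linux/jbd2.h>

    /* Illustrative only: transaction buffer budget after carving out the
     * fast commit blocks from the journal. */
    static inline int example_max_txn_bufs(journal_t *journal, int num_fc_blocks)
    {
            return (journal->j_total_len - num_fc_blocks) / 4;
    }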
2020-11-06  ext4: fixup ext4_fc_track_* functions' signature  [Harshad Shirwadkar, 5 files, -63/+74]
Firstly, pass handle to all ext4_fc_track_* functions and use the transaction id found in handle->h_transaction->t_tid for tracking fast commit updates. Secondly, don't pass inode to the ext4_fc_track_link/create/unlink functions. The inode can be found inside these functions as d_inode(dentry). However, the rename path is an exception, because there we need an inode that's not the same as d_inode(dentry). To handle that, add a couple of low-level wrapper functions that take inode and dentry as arguments. Suggested-by: Jan Kara <[email protected]> Signed-off-by: Harshad Shirwadkar <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
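A hedged illustration of the wrapper idea, assuming ext4's internal declarations (handle_t from jbd2, __ext4_fc_track_unlink from ext4.h): the dentry-based entry point derives the inode itself, while the double-underscore variant still takes an explicit inode so the rename path can pass a different one. The body shown is a sketch, not a quote of the patch:

    /* Illustrative wrapper: common callers pass only the dentry ... */
    void ext4_fc_track_unlink(handle_t *handle, struct dentry *dentry)
    {
            __ext4_fc_track_unlink(handle, d_inode(dentry), dentry);
    }
    /* ... while the rename path calls __ext4_fc_track_unlink() directly
     * with the inode it actually cares about. */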
2020-11-06  ext4: drop redundant calls to ext4_fc_track_range  [Harshad Shirwadkar, 2 files, -9/+0]
ext4_fc_track_range() should only be called when blocks are added or removed from an inode. So, the only places from which we need to call this function are ext4_map_blocks(), punch hole, collapse / zero range, and truncate. Remove all the other redundant calls to this function. Signed-off-by: Harshad Shirwadkar <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
2020-11-06  ext4: mark fc ineligible if inode gets evicted due to mem pressure  [Harshad Shirwadkar, 3 files, -3/+5]
If an inode gets evicted due to memory pressure, we have to remove it from the fast commit list. However, that inode may have uncommitted changes that fast commits will lose. So, just fall back to full commits in this case. Also, rename the fast commit ineligibility reason from "EXT4_FC_REASON_MEM" to "EXT4_FC_REASON_MEM_NOMEM" to describe it better. Suggested-by: Jan Kara <[email protected]> Signed-off-by: Harshad Shirwadkar <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
2020-11-06  ext4: describe fast_commit feature flags  [Harshad Shirwadkar, 1 file, -0/+7]
The fast commit feature has flags in the file system as well as in JBD2. The meaning of the fast commit feature flags can get confusing. Update the docs and code to add more documentation about them. Suggested-by: Jan Kara <[email protected]> Signed-off-by: Harshad Shirwadkar <[email protected]> Reviewed-by: Jan Kara <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
2020-11-06  ext4: unlock xattr_sem properly in ext4_inline_data_truncate()  [Joseph Qi, 1 file, -0/+1]
ext4_inline_data_truncate() takes xattr_sem to re-check the inline data, but doesn't release it on the path where the inode no longer has inline data. So unlock it before returning. Fixes: aef1c8513c1f ("ext4: let ext4_truncate handle inline data correctly") Reported-by: Dan Carpenter <[email protected]> Cc: Tao Ma <[email protected]> Signed-off-by: Joseph Qi <[email protected]> Reviewed-by: Andreas Dilger <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]> Cc: [email protected]
2020-11-06  ext4: silence an uninitialized variable warning  [Dan Carpenter, 1 file, -1/+1]
Smatch complains that "i" can be uninitialized if we don't enter the loop. I don't know if that's possible, but we may as well silence the warning. [ Initialize i to sb->s_blocksize instead of 0. The only way the for loop could be skipped entirely is if the in-memory data structures, in particular the bh->b_data for the on-disk superblock, have gotten corrupted enough that the calculated value of group is >= ext4_get_groups_count(sb). In that case, we want to exit immediately without allocating a block. -- TYT ] Fixes: 8016e29f4362 ("ext4: fast commit recovery path") Signed-off-by: Dan Carpenter <[email protected]> Link: https://lore.kernel.org/r/20201030114620.GB3251003@mwanda Signed-off-by: Theodore Ts'o <[email protected]> Cc: [email protected]
2020-11-06  ext4: correctly report "not supported" for {usr,grp}jquota when !CONFIG_QUOTA  [Kaixu Xia, 1 file, -2/+2]
The macro MOPT_Q is used to indicate that a mount option is related to quota stuff, and is defined to be MOPT_NOSUPPORT when CONFIG_QUOTA is disabled. Normally the quota options are handled explicitly, so it didn't matter that the MOPT_STRING flag was missing, even though the usrjquota and grpjquota mount options take a string argument. It's important that it's present in the !CONFIG_QUOTA case, since without MOPT_STRING, the mount option matcher will match usrjquota= followed by an integer, and will otherwise skip the table entry, so the "mount option not supported" error message is never reported. [ Fixed up the commit description to better explain why the fix works. --TYT ] Fixes: 26092bf52478 ("ext4: use a table-driven handler for mount options") Signed-off-by: Kaixu Xia <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]> Cc: [email protected]
2020-11-06  Merge tag 'ceph-for-5.10-rc3' of git://github.com/ceph/ceph-client  [Linus Torvalds, 5 files, -18/+39]
Pull ceph fix from Ilya Dryomov: "A fix for a potential stall on umount caused by the MDS dropping our REQUEST_CLOSE message. The code that handled this case was inadvertently disabled in 5.9; this patch removes it entirely and fixes the problem in a way that is consistent with ceph-fuse."

* tag 'ceph-for-5.10-rc3' of git://github.com/ceph/ceph-client:
  ceph: check session state after bumping session->s_seq
2020-11-06  proc "seq files": switch to ->read_iter  [Christoph Hellwig, 1 file, -1/+1]
Implement ->read_iter for all proc "seq files" so that splice works on them. Suggested-by: Linus Torvalds <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-11-06  proc "single files": switch to ->read_iter  [Greg Kroah-Hartman, 1 file, -1/+1]
Implement ->read_iter for all proc "single files" so that more bionic test cases can pass when they call splice() on other fun files like /proc/version. Signed-off-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-11-06  proc/stat: switch to ->read_iter  [Christoph Hellwig, 1 file, -1/+1]
Implement ->read_iter so that splice can be used on this file. Suggested-by: Linus Torvalds <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-11-06  proc/cpuinfo: switch to ->read_iter  [Christoph Hellwig, 1 file, -1/+1]
Implement ->read_iter so that the Android bionic test suite can use this random proc file for its splice test case. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-11-06  proc: wire up generic_file_splice_read for iter ops  [Christoph Hellwig, 1 file, -0/+2]
Wire up generic_file_splice_read for the iter based proxy ops, so that splice reads from them work. Signed-off-by: Christoph Hellwig <[email protected]> Tested-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
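For illustration, the general pattern this series relies on: once a file provides ->read_iter, the stock generic_file_splice_read() can serve splice reads. This is a hedged, generic sketch (the ops name is made up), not the actual fs/proc change; seq_read_iter, seq_lseek and generic_file_splice_read are real kernel symbols:

    #include <linux/fs.h>
    #include <linux/seq_file.h>

    /* Illustrative only: a seq_file-backed file exposing read_iter so that
     * splice() works through generic_file_splice_read().  A real user would
     * also fill in .open/.release for the seq_file. */
    static const struct file_operations example_seq_fops = {
            .read_iter   = seq_read_iter,
            .splice_read = generic_file_splice_read,
            .llseek      = seq_lseek,
    };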
2020-11-06  seq_file: add seq_read_iter  [Christoph Hellwig, 1 file, -13/+32]
iov_iter based variant for reading a seq_file. seq_read is reimplemented on top of the iter variant. Signed-off-by: Christoph Hellwig <[email protected]> Tested-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
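Roughly how such a compatibility shim can look: build a one-segment iov_iter around the user buffer, wrap the file in a synchronous kiocb, and call the iter variant. This is a hedged sketch rather than a verbatim copy of fs/seq_file.c; the function name is illustrative, the helpers (init_sync_kiocb, iov_iter_init, seq_read_iter) are real:

    #include <linux/fs.h>
    #include <linux/seq_file.h>
    #include <linux/uio.h>

    /* Illustrative reimplementation of the classic read() path on top of
     * the iov_iter based variant. */
    ssize_t example_seq_read(struct file *file, char __user *buf,
                             size_t size, loff_t *ppos)
    {
            struct iovec iov = { .iov_base = buf, .iov_len = size };
            struct kiocb kiocb;
            struct iov_iter iter;
            ssize_t ret;

            init_sync_kiocb(&kiocb, file);
            kiocb.ki_pos = *ppos;
            iov_iter_init(&iter, READ, &iov, 1, size);

            ret = seq_read_iter(&kiocb, &iter);
            *ppos = kiocb.ki_pos;
            return ret;
    }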
2020-11-06  fscrypt: remove reachable WARN in fscrypt_setup_iv_ino_lblk_32_key()  [Eric Biggers, 1 file, -3/+1]
I_CREATING isn't actually set until the inode has been assigned an inode number and inserted into the inode hash table. So the WARN_ON() in fscrypt_setup_iv_ino_lblk_32_key() is wrong, and it can trigger when creating an encrypted file on ext4. Remove it. This was sometimes causing xfstest generic/602 to fail on ext4. I didn't notice it before because due to a separate oversight, new inodes that haven't been assigned an inode number yet don't necessarily have i_ino == 0 as I had thought, so by chance I never saw the test fail. Fixes: a992b20cd4ee ("fscrypt: add fscrypt_prepare_new_inode() and fscrypt_set_context()") Reported-by: Theodore Y. Ts'o <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Eric Biggers <[email protected]>
2020-11-05  io_uring: fix link lookup racing with link timeout  [Pavel Begunkov, 1 file, -1/+15]
We can't just go over linked requests because it may race with linked timeouts. Take ctx->completion_lock in that case. Cc: [email protected] # v5.7+ Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-11-05  NFSD: fix missing refcount in nfsd4_copy by nfsd4_do_async_copy  [Dai Ngo, 1 file, -0/+1]
Need to initialize nfsd4_copy's refcount to 1 to avoid use-after-free warning when nfs4_put_copy is called from nfsd4_cb_offload_release. Fixes: ce0887ac96d3 ("NFSD add nfs4 inter ssc to nfsd4_copy") Signed-off-by: Dai Ngo <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2020-11-05  NFSD: Fix use-after-free warning when doing inter-server copy  [Dai Ngo, 1 file, -1/+1]
The source file nfsd_file is not constructed the same as other nfsd_file's via nfsd_file_alloc. nfsd_file_put should not be called to free the object; nfsd_file_put is not the inverse of kzalloc, instead kfree is called by nfsd4_do_async_copy when done. Fixes: ce0887ac96d3 ("NFSD add nfs4 inter ssc to nfsd4_copy") Signed-off-by: Dai Ngo <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2020-11-05  NFSD: MKNOD should return NFSERR_BADTYPE instead of NFSERR_INVAL  [Chuck Lever, 1 file, -5/+1]
A late paragraph of RFC 1813 Section 3.3.11 states: | ... if the server does not support the target type or the | target type is illegal, the error, NFS3ERR_BADTYPE, should | be returned. Note that NF3REG, NF3DIR, and NF3LNK are | illegal types for MKNOD. The Linux NFS server incorrectly returns NFSERR_INVAL in these cases. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
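A hedged illustration of the RFC 1813 rule, not the actual nfsd3 proc code: device, socket and fifo types are creatable via MKNOD, while regular files, directories, symlinks and unknown values should yield NFS3ERR_BADTYPE. The helper name is made up; the NF3* enum values come from include/uapi/linux/nfs3.h:

    #include <linux/types.h>
    #include <linux/nfs3.h>

    /* Illustrative: callers would map a 'false' return to nfserr_badtype
     * rather than nfserr_inval. */
    static bool example_mknod_type_ok(u32 ftype)
    {
            switch (ftype) {
            case NF3CHR:
            case NF3BLK:
            case NF3SOCK:
            case NF3FIFO:
                    return true;
            default:        /* NF3REG, NF3DIR, NF3LNK, or garbage */
                    return false;
            }
    }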
2020-11-05  NFSD: NFSv3 PATHCONF Reply is improperly formed  [Chuck Lever, 1 file, -0/+1]
Commit cc028a10a48c ("NFSD: Hoist status code encoding into XDR encoder functions") missed a spot. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2020-11-05  Merge tag 'gfs2-v5.10-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2  [Linus Torvalds, 10 files, -48/+70]
Pull gfs2 fixes from Andreas Gruenbacher: "Various gfs2 fixes"

* tag 'gfs2-v5.10-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
  gfs2: Wake up when sd_glock_disposal becomes zero
  gfs2: Don't call cancel_delayed_work_sync from within delete work function
  gfs2: check for live vs. read-only file system in gfs2_fitrim
  gfs2: don't initialize statfs_change inodes in spectator mode
  gfs2: Split up gfs2_meta_sync into inode and rgrp versions
  gfs2: init_journal's undo directive should also undo the statfs inodes
  gfs2: Add missing truncate_inode_pages_final for sd_aspace
  gfs2: Free rd_bits later in gfs2_clear_rgrpd to fix use-after-free
2020-11-05  io_uring: use correct pointer for io_uring_show_cred()  [Jens Axboe, 1 file, -1/+2]
Previous commit changed how we index the registered credentials, but neglected to update one spot that is used when the personalities are iterated through ->show_fdinfo(). Ensure we use the right struct type for the iteration. Reported-by: [email protected] Fixes: 1e6fa5216a0e ("io_uring: COW io_identity on mismatch") Signed-off-by: Jens Axboe <[email protected]>
2020-11-05  io_uring: don't forget to task-cancel drained reqs  [Pavel Begunkov, 1 file, -2/+8]
If there is a long-standing request of one task locking up execution of deferred requests, and the defer list contains requests of another task (all files-less), then a potential execution of __io_uring_task_cancel() by that other task will sleep until that first long-standing request completes, and that may take long. E.g. tsk1: req1/read(empty_pipe) -> tsk2: req(DRAIN) Then __io_uring_task_cancel(tsk2) waits for req1 completion. It seems we can even manufacture a complicated case with many tasks sharing many rings that can lock them forever. Cancel deferred requests for __io_uring_task_cancel() as well. Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-11-05  btrfs: ref-verify: fix memory leak in btrfs_ref_tree_mod  [Dinghao Liu, 1 file, -0/+1]
There is one error handling path that does not free ref, which may cause a minor memory leak. CC: [email protected] # 4.19+ Reviewed-by: Josef Bacik <[email protected]> Signed-off-by: Dinghao Liu <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2020-11-05  btrfs: dev-replace: fail mount if we don't have replace item with target device  [Anand Jain, 2 files, -21/+31]
If there is a device BTRFS_DEV_REPLACE_DEVID without the device replace item, then it means the filesystem is in an inconsistent state. This is either corruption or a crafted image. Fail the mount, as this needs a closer look at what is actually wrong. As of now, if BTRFS_DEV_REPLACE_DEVID is present without the replace item, in __btrfs_free_extra_devids() we determine that there is an extra device, and free those extra devices, but continue to mount the device. However, we were wrong in keeping track of the rw_devices, so the syzbot testcase failed:

WARNING: CPU: 1 PID: 3612 at fs/btrfs/volumes.c:1166 close_fs_devices.part.0+0x607/0x800 fs/btrfs/volumes.c:1166
Kernel panic - not syncing: panic_on_warn set ...
CPU: 1 PID: 3612 Comm: syz-executor.2 Not tainted 5.9.0-rc4-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x198/0x1fd lib/dump_stack.c:118
 panic+0x347/0x7c0 kernel/panic.c:231
 __warn.cold+0x20/0x46 kernel/panic.c:600
 report_bug+0x1bd/0x210 lib/bug.c:198
 handle_bug+0x38/0x90 arch/x86/kernel/traps.c:234
 exc_invalid_op+0x14/0x40 arch/x86/kernel/traps.c:254
 asm_exc_invalid_op+0x12/0x20 arch/x86/include/asm/idtentry.h:536
RIP: 0010:close_fs_devices.part.0+0x607/0x800 fs/btrfs/volumes.c:1166
RSP: 0018:ffffc900091777e0 EFLAGS: 00010246
RAX: 0000000000040000 RBX: ffffffffffffffff RCX: ffffc9000c8b7000
RDX: 0000000000040000 RSI: ffffffff83097f47 RDI: 0000000000000007
RBP: dffffc0000000000 R08: 0000000000000001 R09: ffff8880988a187f
R10: 0000000000000000 R11: 0000000000000001 R12: ffff88809593a130
R13: ffff88809593a1ec R14: ffff8880988a1908 R15: ffff88809593a050
 close_fs_devices fs/btrfs/volumes.c:1193 [inline]
 btrfs_close_devices+0x95/0x1f0 fs/btrfs/volumes.c:1179
 open_ctree+0x4984/0x4a2d fs/btrfs/disk-io.c:3434
 btrfs_fill_super fs/btrfs/super.c:1316 [inline]
 btrfs_mount_root.cold+0x14/0x165 fs/btrfs/super.c:1672

The fix here is: when we determine that there isn't a replace item, fail the mount if there is a replace target device (devid 0). CC: [email protected] # 4.19+ Reported-by: [email protected] Signed-off-by: Anand Jain <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2020-11-05  btrfs: scrub: update message regarding read-only status  [David Sterba, 1 file, -2/+3]
Based on user feedback update the message printed when scrub fails to start due to write requirements. To make a distinction add a device id to the messages. Reviewed-by: Josef Bacik <[email protected]> Signed-off-by: David Sterba <[email protected]>
2020-11-05  btrfs: clean up NULL checks in qgroup_unreserve_range()  [Dan Carpenter, 1 file, -8/+4]
Smatch complains that this code dereferences "entry" before checking whether it's NULL on the next line. Fortunately, rb_entry() will never return NULL so it doesn't cause a problem. We can clean up the NULL checking a bit to silence the warning and make the code more clear. Reviewed-by: Qu Wenruo <[email protected]> Signed-off-by: Dan Carpenter <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
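A hedged illustration of the cleanup pattern described here, using a hypothetical struct: test the rb_node pointer rather than the rb_entry() result, since rb_entry() is just container_of() and never yields NULL for a non-NULL node:

    #include <linux/types.h>
    #include <linux/rbtree.h>

    /* struct example_range is hypothetical; only the NULL-check pattern is
     * the point. */
    struct example_range {
            struct rb_node rb_node;
            u64 start;
            u64 len;
    };

    static struct example_range *example_first_range(struct rb_root *root)
    {
            struct rb_node *node = rb_first(root);

            if (!node)
                    return NULL;
            return rb_entry(node, struct example_range, rb_node);
    }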
2020-11-05  btrfs: fix min reserved size calculation in merge_reloc_root  [Josef Bacik, 1 file, -1/+3]
The minimum reserve size was adjusted to take into account the height of the tree we are merging; however, we can have a root with level == 0. What we want is root_level + 1 to get the number of nodes we may have to cow. This fixes the enospc_debug warning that pops up with btrfs/101. Nikolay: this fixes failures on btrfs/060, btrfs/062, btrfs/063 and btrfs/195 that I was seeing; the call trace was:

[ 3680.515564] ------------[ cut here ]------------
[ 3680.515566] BTRFS: block rsv returned -28
[ 3680.515585] WARNING: CPU: 2 PID: 8339 at fs/btrfs/block-rsv.c:521 btrfs_use_block_rsv+0x162/0x180
[ 3680.515587] Modules linked in:
[ 3680.515591] CPU: 2 PID: 8339 Comm: btrfs Tainted: G W 5.9.0-rc8-default #95
[ 3680.515593] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1 04/01/2014
[ 3680.515595] RIP: 0010:btrfs_use_block_rsv+0x162/0x180
[ 3680.515600] RSP: 0018:ffffa01ac9753910 EFLAGS: 00010282
[ 3680.515602] RAX: 0000000000000000 RBX: ffff984b34200000 RCX: 0000000000000027
[ 3680.515604] RDX: 0000000000000027 RSI: 0000000000000000 RDI: ffff984b3bd19e28
[ 3680.515606] RBP: 0000000000004000 R08: ffff984b3bd19e20 R09: 0000000000000001
[ 3680.515608] R10: 0000000000000004 R11: 0000000000000046 R12: ffff984b264fdc00
[ 3680.515609] R13: ffff984b13149000 R14: 00000000ffffffe4 R15: ffff984b34200000
[ 3680.515613] FS: 00007f4e2912b8c0(0000) GS:ffff984b3bd00000(0000) knlGS:0000000000000000
[ 3680.515615] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3680.515617] CR2: 00007fab87122150 CR3: 0000000118e42000 CR4: 00000000000006e0
[ 3680.515620] Call Trace:
[ 3680.515627]  btrfs_alloc_tree_block+0x8b/0x340
[ 3680.515633]  ? __lock_acquire+0x51a/0xac0
[ 3680.515646]  alloc_tree_block_no_bg_flush+0x4f/0x60
[ 3680.515651]  __btrfs_cow_block+0x14e/0x7e0
[ 3680.515662]  btrfs_cow_block+0x144/0x2c0
[ 3680.515670]  merge_reloc_root+0x4d4/0x610
[ 3680.515675]  ? btrfs_lookup_fs_root+0x78/0x90
[ 3680.515686]  merge_reloc_roots+0xee/0x280
[ 3680.515695]  relocate_block_group+0x2ce/0x5e0
[ 3680.515704]  btrfs_relocate_block_group+0x16e/0x310
[ 3680.515711]  btrfs_relocate_chunk+0x38/0xf0
[ 3680.515716]  btrfs_shrink_device+0x200/0x560
[ 3680.515728]  btrfs_rm_device+0x1ae/0x6a6
[ 3680.515744]  ? _copy_from_user+0x6e/0xb0
[ 3680.515750]  btrfs_ioctl+0x1afe/0x28c0
[ 3680.515755]  ? find_held_lock+0x2b/0x80
[ 3680.515760]  ? do_user_addr_fault+0x1f8/0x418
[ 3680.515773]  ? __x64_sys_ioctl+0x77/0xb0
[ 3680.515775]  __x64_sys_ioctl+0x77/0xb0
[ 3680.515781]  do_syscall_64+0x31/0x70
[ 3680.515785]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Reported-by: Nikolay Borisov <[email protected]> Fixes: 44d354abf33e ("btrfs: relocation: review the call sites which can be interrupted by signal") CC: [email protected] # 5.4+ Reviewed-by: Nikolay Borisov <[email protected]> Tested-by: Nikolay Borisov <[email protected]> Signed-off-by: Josef Bacik <[email protected]> Signed-off-by: David Sterba <[email protected]>
2020-11-05  btrfs: print the block rsv type when we fail our reservation  [Josef Bacik, 1 file, -1/+2]
To help with debugging, print the type of the block rsv when we fail to use our target block rsv in btrfs_use_block_rsv. This now produces: [ 544.672035] BTRFS: block rsv 1 returned -28 which is still cryptic without consulting the enum in block-rsv.h but I guess it's better than nothing. Reviewed-by: Nikolay Borisov <[email protected]> Signed-off-by: Josef Bacik <[email protected]> Reviewed-by: David Sterba <[email protected]> [ add note from Nikolay ] Signed-off-by: David Sterba <[email protected]>
2020-11-05  btrfs: fix potential overflow in cluster_pages_for_defrag on 32bit arch  [Matthew Wilcox (Oracle), 1 file, -6/+4]
On 32-bit systems, this shift will overflow for files larger than 4GB as start_index is unsigned long while the calls to btrfs_delalloc_*_space expect u64. CC: [email protected] # 4.4+ Fixes: df480633b891 ("btrfs: extent-tree: Switch to new delalloc space reserve and release") Reviewed-by: Josef Bacik <[email protected]> Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Reviewed-by: David Sterba <[email protected]> [ define the variable instead of repeating the shift ] Signed-off-by: David Sterba <[email protected]>
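A hedged illustration of the bug class, not the btrfs patch itself: on 32-bit, shifting an unsigned long page index by PAGE_SHIFT wraps once the byte offset passes 4GiB, so the index has to be widened to u64 before shifting. The helper name is made up:

    #include <linux/types.h>
    #include <linux/mm.h>

    static inline u64 example_index_to_bytes(unsigned long index)
    {
            /* Widen first; "index << PAGE_SHIFT" alone would wrap on
             * 32-bit kernels for files larger than 4GB. */
            return (u64)index << PAGE_SHIFT;
    }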
2020-11-04  xfs: only flush the unshared range in xfs_reflink_unshare  [Darrick J. Wong, 1 file, -1/+2]
There's no reason to flush an entire file when we're unsharing part of a file. Therefore, only initiate writeback on the selected range. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Chandan Babu R <[email protected]>
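A hedged sketch of the idea: write back only the byte range being unshared instead of the whole file. filemap_write_and_wait_range() is the real kernel API; the wrapper and its parameters are illustrative, not the actual xfs_reflink_unshare() change:

    #include <linux/fs.h>
    #include <linux/pagemap.h>

    /* Illustrative only: flush just [offset, offset + len) of the file. */
    static int example_flush_unshared_range(struct inode *inode,
                                            loff_t offset, loff_t len)
    {
            return filemap_write_and_wait_range(inode->i_mapping, offset,
                                                offset + len - 1);
    }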
2020-11-04  ceph: check session state after bumping session->s_seq  [Jeff Layton, 5 files, -18/+39]
Some messages sent by the MDS entail a session sequence number increment, and the MDS will drop certain types of requests on the floor when the sequence numbers don't match. In particular, a REQUEST_CLOSE message can cross with one of the sequence morphing messages from the MDS, which can cause the client to stall, waiting for a response that will never come. Originally, this meant an up to 5s delay before the recurring workqueue job kicked in and resent the request, but a recent change made it so that the client would never resend, causing a 60s stall on unmount and sometimes a blocklisting event. Add a new helper for incrementing the session sequence and then testing to see whether a REQUEST_CLOSE needs to be resent, and move the handling of CEPH_MDS_SESSION_CLOSING into that function. Change all of the bare sequence counter increments to use the new helper. Reorganize check_session_state with a switch statement. It should no longer be called when the session is CLOSING, so throw a warning if it ever is (but still handle that case sanely). [ idryomov: whitespace, pr_err() call fixup ] URL: https://tracker.ceph.com/issues/47563 Fixes: fa9967734227 ("ceph: fix potential mdsc use-after-free crash") Reported-by: Patrick Donnelly <[email protected]> Signed-off-by: Jeff Layton <[email protected]> Reviewed-by: Ilya Dryomov <[email protected]> Reviewed-by: Xiubo Li <[email protected]> Signed-off-by: Ilya Dryomov <[email protected]>
2020-11-04  io_uring: fix overflowed cancel w/ linked ->files  [Pavel Begunkov, 1 file, -22/+21]
The current io_match_files() check in io_cqring_overflow_flush() is useless because requests drop ->files before going to the overflow list; however, requests linked to them do not, and we don't check those. Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-11-04  io_uring: drop req/tctx io_identity separately  [Jens Axboe, 1 file, -3/+6]
We can't bundle this into one operation, as the identity may not have originated from the tctx to begin with. Drop one ref for each of them separately, if they don't match the static assignment. If we don't, then if the identity is a lookup from registered credentials, we could be freeing that identity as we're dropping a reference assuming it came from the tctx. syzbot reports this as a use-after-free, as the identity is still referencable from idr lookup:

==================================================================
BUG: KASAN: use-after-free in instrument_atomic_read_write include/linux/instrumented.h:101 [inline]
BUG: KASAN: use-after-free in atomic_fetch_add_relaxed include/asm-generic/atomic-instrumented.h:142 [inline]
BUG: KASAN: use-after-free in __refcount_add include/linux/refcount.h:193 [inline]
BUG: KASAN: use-after-free in __refcount_inc include/linux/refcount.h:250 [inline]
BUG: KASAN: use-after-free in refcount_inc include/linux/refcount.h:267 [inline]
BUG: KASAN: use-after-free in io_init_req fs/io_uring.c:6700 [inline]
BUG: KASAN: use-after-free in io_submit_sqes+0x15a9/0x25f0 fs/io_uring.c:6774
Write of size 4 at addr ffff888011e08e48 by task syz-executor165/8487
CPU: 1 PID: 8487 Comm: syz-executor165 Not tainted 5.10.0-rc1-next-20201102-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:118
 print_address_description.constprop.0.cold+0xae/0x4c8 mm/kasan/report.c:385
 __kasan_report mm/kasan/report.c:545 [inline]
 kasan_report.cold+0x1f/0x37 mm/kasan/report.c:562
 check_memory_region_inline mm/kasan/generic.c:186 [inline]
 check_memory_region+0x13d/0x180 mm/kasan/generic.c:192
 instrument_atomic_read_write include/linux/instrumented.h:101 [inline]
 atomic_fetch_add_relaxed include/asm-generic/atomic-instrumented.h:142 [inline]
 __refcount_add include/linux/refcount.h:193 [inline]
 __refcount_inc include/linux/refcount.h:250 [inline]
 refcount_inc include/linux/refcount.h:267 [inline]
 io_init_req fs/io_uring.c:6700 [inline]
 io_submit_sqes+0x15a9/0x25f0 fs/io_uring.c:6774
 __do_sys_io_uring_enter+0xc8e/0x1b50 fs/io_uring.c:9159
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x440e19
Code: 18 89 d0 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 eb 0f fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fff644ff178 EFLAGS: 00000246 ORIG_RAX: 00000000000001aa
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 0000000000440e19
RDX: 0000000000000000 RSI: 000000000000450c RDI: 0000000000000003
RBP: 0000000000000004 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000022b4850
R13: 0000000000000010 R14: 0000000000000000 R15: 0000000000000000

Allocated by task 8487:
 kasan_save_stack+0x1b/0x40 mm/kasan/common.c:48
 kasan_set_track mm/kasan/common.c:56 [inline]
 __kasan_kmalloc.constprop.0+0xc2/0xd0 mm/kasan/common.c:461
 kmalloc include/linux/slab.h:552 [inline]
 io_register_personality fs/io_uring.c:9638 [inline]
 __io_uring_register fs/io_uring.c:9874 [inline]
 __do_sys_io_uring_register+0x10f0/0x40a0 fs/io_uring.c:9924
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Freed by task 8487:
 kasan_save_stack+0x1b/0x40 mm/kasan/common.c:48
 kasan_set_track+0x1c/0x30 mm/kasan/common.c:56
 kasan_set_free_info+0x1b/0x30 mm/kasan/generic.c:355
 __kasan_slab_free+0x102/0x140 mm/kasan/common.c:422
 slab_free_hook mm/slub.c:1544 [inline]
 slab_free_freelist_hook+0x5d/0x150 mm/slub.c:1577
 slab_free mm/slub.c:3140 [inline]
 kfree+0xdb/0x360 mm/slub.c:4122
 io_identity_cow fs/io_uring.c:1380 [inline]
 io_prep_async_work+0x903/0xbc0 fs/io_uring.c:1492
 io_prep_async_link fs/io_uring.c:1505 [inline]
 io_req_defer fs/io_uring.c:5999 [inline]
 io_queue_sqe+0x212/0xed0 fs/io_uring.c:6448
 io_submit_sqe fs/io_uring.c:6542 [inline]
 io_submit_sqes+0x14f6/0x25f0 fs/io_uring.c:6784
 __do_sys_io_uring_enter+0xc8e/0x1b50 fs/io_uring.c:9159
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

The buggy address belongs to the object at ffff888011e08e00 which belongs to the cache kmalloc-96 of size 96
The buggy address is located 72 bytes inside of 96-byte region [ffff888011e08e00, ffff888011e08e60)
The buggy address belongs to the page:
page:00000000a7104751 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x11e08
flags: 0xfff00000000200(slab)
raw: 00fff00000000200 ffffea00004f8540 0000001f00000002 ffff888010041780
raw: 0000000000000000 0000000080200020 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
 ffff888011e08d00: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
 ffff888011e08d80: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
> ffff888011e08e00: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
                    ^
 ffff888011e08e80: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
 ffff888011e08f00: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
==================================================================

Reported-by: [email protected] Fixes: 1e6fa5216a0e ("io_uring: COW io_identity on mismatch") Signed-off-by: Jens Axboe <[email protected]>
2020-11-04  io_uring: ensure consistent view of original task ->mm from SQPOLL  [Jens Axboe, 1 file, -7/+20]
Ensure we get a valid view of the task mm, by using task_lock() when attempting to grab the original task mm. Reported-by: [email protected] Fixes: 2aede0e417db ("io_uring: stash ctx task reference for SQPOLL") Signed-off-by: Jens Axboe <[email protected]>
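A hedged sketch of the pattern the fix relies on (essentially what the kernel's get_task_mm() does): hold task_lock() so ->mm cannot change underneath us while a reference is taken. The helper name is illustrative and this is not the actual io_uring diff:

    #include <linux/sched.h>
    #include <linux/sched/mm.h>
    #include <linux/sched/task.h>

    /* Illustrative only: grab a stable reference to a task's mm. */
    static struct mm_struct *example_grab_task_mm(struct task_struct *task)
    {
            struct mm_struct *mm = NULL;

            task_lock(task);
            if (task->mm && !(task->flags & PF_KTHREAD)) {
                    mm = task->mm;
                    mmget(mm);
            }
            task_unlock(task);
            return mm;
    }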
2020-11-04  io_uring: properly handle SQPOLL request cancelations  [Jens Axboe, 1 file, -12/+65]
Track if a given task io_uring context contains SQPOLL instances, so we can iterate those for cancelation (and request counts). This ensures that we properly wait on SQPOLL contexts, and find everything that needs canceling. Signed-off-by: Jens Axboe <[email protected]>
2020-11-04  io-wq: cancel request if it's asking for files and we don't have them  [Jens Axboe, 1 file, -0/+4]
This can't currently happen, but will be possible shortly. Handle missing files just like we do not being able to grab a needed mm, and mark the request as needing cancelation. Signed-off-by: Jens Axboe <[email protected]>
2020-11-04  xfs: fix scrub flagging rtinherit even if there is no rt device  [Darrick J. Wong, 1 file, -2/+1]
The kernel has always allowed directories to have the rtinherit flag set, even if there is no rt device, so this check is wrong. Fixes: 80e4e1268802 ("xfs: scrub inodes") Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]>
2020-11-04  xfs: fix missing CoW blocks writeback conversion retry  [Darrick J. Wong, 1 file, -2/+4]
In commit 7588cbeec6df, we tried to fix a race stemming from the lack of coordination between higher level code that wants to allocate and remap CoW fork extents into the data fork. Christoph cites as examples the always_cow mode, and a directio write completion racing with writeback. According to the comments before the goto retry, we want to restart the lookup to catch the extent in the data fork, but we don't actually reset whichfork or cow_fsb, which means the second try executes using stale information. Up until now I think we've gotten lucky that either there's something left in the CoW fork to cause cow_fsb to be reset, or the data/CoW fork sequence numbers have advanced enough to force a fresh lookup from the data fork. However, if we reach the retry with an empty stable CoW fork and a stable data fork, neither of those things happens. The retry foolishly re-calls xfs_convert_blocks on the CoW fork which fails again. This time, we toss the write. I've recently been working on extending reflink to the realtime device. When the realtime extent size is larger than a single block, we have to force the page cache to CoW the entire rt extent if a write (or fallocate) is not aligned with the rt extent size. The strategy I've chosen to deal with this is derived from Dave's blocksize > pagesize series: dirtying around the write range, and ensuring that writeback always starts mapping on an rt extent boundary. This has brought this race front and center, since generic/522 blows up immediately. However, I'm pretty sure this is a bug outright, independent of that. Fixes: 7588cbeec6df ("xfs: retry COW fork delalloc conversion when no extent was found") Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]>
2020-11-04  iomap: clean up writeback state logic on writepage error  [Brian Foster, 1 file, -13/+2]
The iomap writepage error handling logic is a mash of old and slightly broken XFS writepage logic. When keepwrite writeback state tracking was introduced in XFS in commit 0d085a529b42 ("xfs: ensure WB_SYNC_ALL writeback handles partial pages correctly"), XFS had an additional cluster writeback context that scanned ahead of ->writepage() to process dirty pages over the current ->writepage() extent mapping. This context expected a dirty page and required retention of the TOWRITE tag on partial page processing so the higher level writeback context would revisit the page (in contrast to ->writepage(), which passes a page with the dirty bit already cleared). The cluster writeback mechanism was eventually removed and some of the error handling logic folded into the primary writeback path in commit 150d5be09ce4 ("xfs: remove xfs_cancel_ioend"). This patch accidentally conflated the two contexts by using the keepwrite logic in ->writepage() without accounting for the fact that the page is not dirty. Further, the keepwrite logic has no practical effect on the core ->writepage() caller (write_cache_pages()) because it never revisits a page in the current function invocation. Technically, the page should be redirtied for the keepwrite logic to have any effect. Otherwise, write_cache_pages() may find the tagged page but will skip it since it is clean. Even if the page was redirtied, however, there is still no practical effect to keepwrite since write_cache_pages() does not wrap around within a single invocation of the function. Therefore, the dirty page would simply end up retagged on the next writeback sequence over the associated range. All that being said, none of this really matters because redirtying a partially processed page introduces a potential infinite redirty -> writeback failure loop that deviates from the current design principle of clearing the dirty state on writepage failure to avoid building up too much dirty, unreclaimable memory on the system. Therefore, drop the spurious keepwrite usage and dirty state clearing logic from iomap_writepage_map(), treat the partially processed page the same as a fully processed page, and let the imminent ioend failure clean up the writeback state. Signed-off-by: Brian Foster <[email protected]> Reviewed-by: Darrick J. Wong <[email protected]> Signed-off-by: Darrick J. Wong <[email protected]>
2020-11-04  iomap: support partial page discard on writeback block mapping failure  [Brian Foster, 2 files, -13/+16]
iomap writeback mapping failure only calls into ->discard_page() if the current page has not been added to the ioend. Accordingly, the XFS callback assumes a full page discard and invalidation. This is problematic for sub-page block size filesystems where some portion of a page might have been mapped successfully before a failure to map a delalloc block occurs. ->discard_page() is not called in that error scenario and the bio is explicitly failed by iomap via the error return from ->prepare_ioend(). As a result, the filesystem leaks delalloc blocks and corrupts the filesystem block counters. Since XFS is the only user of ->discard_page(), tweak the semantics to invoke the callback unconditionally on mapping errors and provide the file offset that failed to map. Update xfs_discard_page() to discard the corresponding portion of the file and pass the range along to iomap_invalidatepage(). The latter already properly handles both full and sub-page scenarios by not changing any iomap or page state on sub-page invalidations. Signed-off-by: Brian Foster <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Reviewed-by: Darrick J. Wong <[email protected]> Signed-off-by: Darrick J. Wong <[email protected]>
2020-11-04  xfs: flush new eof page on truncate to avoid post-eof corruption  [Brian Foster, 1 file, -0/+10]
It is possible to expose non-zeroed post-EOF data in XFS if the new EOF page is dirty, backed by an unwritten block and the truncate happens to race with writeback. iomap_truncate_page() will not zero the post-EOF portion of the page if the underlying block is unwritten. The subsequent call to truncate_setsize() will, but doesn't dirty the page. Therefore, if writeback happens to complete after iomap_truncate_page() (so it still sees the unwritten block) but before truncate_setsize(), the cached page becomes inconsistent with the on-disk block. A mapped read after the associated page is reclaimed or invalidated exposes non-zero post-EOF data. For example, consider the following sequence when run on a kernel modified to explicitly flush the new EOF page within the race window:

  $ xfs_io -fc "falloc 0 4k" -c fsync /mnt/file
  $ xfs_io -c "pwrite 0 4k" -c "truncate 1k" /mnt/file
  ...
  $ xfs_io -c "mmap 0 4k" -c "mread -v 1k 8" /mnt/file
  00000400: 00 00 00 00 00 00 00 00 ........
  $ umount /mnt/; mount <dev> /mnt/
  $ xfs_io -c "mmap 0 4k" -c "mread -v 1k 8" /mnt/file
  00000400: cd cd cd cd cd cd cd cd ........

Update xfs_setattr_size() to explicitly flush the new EOF page prior to the page truncate to ensure iomap has the latest state of the underlying block. Fixes: 68a9f5e7007c ("xfs: implement iomap based buffered write path") Signed-off-by: Brian Foster <[email protected]> Reviewed-by: Darrick J. Wong <[email protected]> Signed-off-by: Darrick J. Wong <[email protected]>
2020-11-04  erofs: fix setting up pcluster for temporary pages  [Gao Xiang, 1 file, -2/+5]
pcluster should only be set up for managed pages, not for temporary pages. Since the code currently uses page->mapping to identify them, the impact is minor for now. [ Update: Vladimir reported that the kernel log becomes polluted because PAGE_FLAGS_CHECK_AT_FREE flag(s) are set if the page allocation debug option is enabled. ] Link: https://lore.kernel.org/r/[email protected] Fixes: 5ddcee1f3a1c ("erofs: get rid of __stagingpage_alloc helper") Cc: <[email protected]> # 5.5+ Tested-by: Vladimir Zapolskiy <[email protected]> Reviewed-by: Chao Yu <[email protected]> Signed-off-by: Gao Xiang <[email protected]>
2020-11-04  erofs: derive atime instead of leaving it empty  [Gao Xiang, 1 file, -10/+11]
EROFS has _only one_ on-disk timestamp for each extended inode (ctime is what is currently documented and recorded; we might also record mtime instead with a new compat feature if needed), since EROFS isn't mainly for archival purposes, so there is no need to keep all timestamps on disk, especially for Android scenarios, due to security concerns. Also, romfs/cramfs don't have their own on-disk timestamp, and squashfs only records mtime instead. Let's also derive the access time from the on-disk timestamp rather than leaving it empty, and if mtime/atime for each file are really needed for specific scenarios as well, we can also use xattrs to record them then. Link: https://lore.kernel.org/r/[email protected] [ Gao Xiang: It'd be better to backport this for user-friendliness. ] Fixes: 431339ba9042 ("staging: erofs: add inode operations") Cc: stable <[email protected]> # 4.19+ Reported-by: nl6720 <[email protected]> Reviewed-by: Chao Yu <[email protected]> Signed-off-by: Gao Xiang <[email protected]>
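A hedged illustration of the change in VFS terms, with the erofs-specific plumbing omitted: every in-core timestamp, including atime, is derived from the single on-disk value instead of leaving atime zeroed. The helper name is made up:

    #include <linux/fs.h>

    /* Illustrative only: fill all three in-core timestamps from the one
     * on-disk timestamp. */
    static void example_fill_inode_times(struct inode *inode,
                                         struct timespec64 ondisk_time)
    {
            inode->i_ctime = ondisk_time;
            inode->i_mtime = ondisk_time;
            inode->i_atime = ondisk_time;   /* new: don't leave atime empty */
    }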
2020-11-03  afs: Fix incorrect freeing of the ACL passed to the YFS ACL store op  [David Howells, 1 file, -6/+1]
The cleanup for the yfs_store_opaque_acl2_operation calls the wrong function to destroy the ACL content buffer. It's an afs_acl struct, not a yfs_acl struct, and the free function for the latter may pass invalid pointers to kfree(). Fix this by using the afs_acl_put() function. The yfs_acl_put() function is then no longer used and can be removed.

general protection fault, probably for non-canonical address 0x7ebde00000000: 0000 [#1] SMP PTI
...
RIP: 0010:compound_head+0x0/0x11
...
Call Trace:
 virt_to_cache+0x8/0x51
 kfree+0x5d/0x79
 yfs_free_opaque_acl+0x16/0x29
 afs_put_operation+0x60/0x114
 __vfs_setxattr+0x67/0x72
 __vfs_setxattr_noperm+0x66/0xe9
 vfs_setxattr+0x67/0xce
 setxattr+0x14e/0x184
 __do_sys_fsetxattr+0x66/0x8f
 do_syscall_64+0x2d/0x3a
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: e49c7b2f6de7 ("afs: Build an abstraction around an "operation" concept") Signed-off-by: David Howells <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-11-03  afs: Fix warning due to unadvanced marshalling pointer  [David Howells, 1 file, -0/+1]
When using the afs.yfs.acl xattr to change an AuriStor ACL, a warning can be generated when the request is marshalled because the buffer pointer isn't increased after adding the last element, thereby triggering the check at the end if the ACL wasn't empty. This just causes something like the following warning, but doesn't stop the call from happening successfully: kAFS: YFS.StoreOpaqueACL2: Request buffer underflow (36<108) Fix this simply by increasing the count prior to the check. Fixes: f5e4546347bc ("afs: Implement YFS ACL setting") Signed-off-by: David Howells <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-11-03  gfs2: Wake up when sd_glock_disposal becomes zero  [Alexander Aring, 1 file, -1/+2]
Commit fc0e38dae645 ("GFS2: Fix glock deallocation race") fixed a sd_glock_disposal accounting bug by adding a missing atomic_dec statement, but it failed to wake up sd_glock_wait when that decrement causes sd_glock_disposal to reach zero. As a consequence, gfs2_gl_hash_clear can now run into a 10-minute timeout instead of being woken up. Add the missing wakeup. Fixes: fc0e38dae645 ("GFS2: Fix glock deallocation race") Cc: [email protected] # v2.6.39+ Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: Andreas Gruenbacher <[email protected]>
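A hedged sketch of the missing piece (assuming gfs2's internal struct gfs2_sbd declarations from incore.h): the decrement has to be paired with a wake_up() once the disposal count reaches zero, otherwise the waiter in gfs2_gl_hash_clear() sleeps until its timeout. The helper name is illustrative; sd_glock_disposal and sd_glock_wait are the fields named in the commit message:

    #include <linux/atomic.h>
    #include <linux/wait.h>

    /* Illustrative only: drop one glock from the disposal count and wake
     * the waiter when the last one goes away. */
    static void example_glock_disposed(struct gfs2_sbd *sdp)
    {
            if (atomic_dec_and_test(&sdp->sd_glock_disposal))
                    wake_up(&sdp->sd_glock_wait);
    }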
2020-11-02  gfs2: Don't call cancel_delayed_work_sync from within delete work function  [Andreas Gruenbacher, 1 file, -1/+2]
Right now, we can end up calling cancel_delayed_work_sync from within delete_work_func via gfs2_lookup_by_inum -> gfs2_inode_lookup -> gfs2_cancel_delete_work. When that happens, it will result in a deadlock. Instead, gfs2_inode_lookup should skip the call to gfs2_cancel_delete_work when called from delete_work_func (blktype == GFS2_BLKST_UNLINKED). Reported-by: Alexander Ahring Oder Aring <[email protected]> Fixes: a0e3cc65fa29 ("gfs2: Turn gl_delete into a delayed work") Cc: [email protected] # v5.8+ Signed-off-by: Andreas Gruenbacher <[email protected]>