path: root/fs
2023-08-10  xfs: fix agf_fllast when repairing an empty AGFL  (Darrick J. Wong; 1 file, -1/+4)
xfs/139 with parent pointers enabled occasionally pops up a corruption message when online fsck force-rebuild repairs an AGFL:

    XFS (sde): Metadata corruption detected at xfs_agf_verify+0x11e/0x220 [xfs], xfs_agf block 0x9e0001
    XFS (sde): Unmount and run xfs_repair
    XFS (sde): First 128 bytes of corrupted metadata buffer:
    00000000: 58 41 47 46 00 00 00 01 00 00 00 4f 00 00 40 00  XAGF.......O..@.
    00000010: 00 00 00 01 00 00 00 02 00 00 00 05 00 00 00 01  ................
    00000020: 00 00 00 01 00 00 00 01 00 00 00 00 ff ff ff ff  ................
    00000030: 00 00 00 00 00 00 00 05 00 00 00 05 00 00 00 00  ................
    00000040: 91 2e 6f b1 ed 61 4b 4d 8c 9b 6e 87 08 bb f6 36  ..o..aKM..n....6
    00000050: 00 00 00 01 00 00 00 01 00 00 00 06 00 00 00 01  ................
    00000060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00000070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................

The root cause of this failure is that prior to the repair, there were zero blocks in the AGFL. This scenario is set up by the test case, since it formats with 64MB AGs and tries to ENOSPC the whole filesystem.

In this case of flcount==0, we reset fllast to -1U, which then trips the write verifier's check that fllast is less than xfs_agfl_size(). Correct this code to set fllast to the last possible slot in the AGFL when flcount is zero, which mirrors the behavior of xfs_repair phase5 when it has to create a totally empty AGFL.

Fixes: 0e93d3f43ec7 ("xfs: repair the AGFL")
Signed-off-by: Darrick J. Wong <[email protected]>
Reviewed-by: Dave Chinner <[email protected]>
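As an aside, the failure mode is easy to picture in isolation. The following is a minimal userspace sketch, with a made-up AGFL size, of why an all-ones fllast fails a "fllast < xfs_agfl_size()" style bound check while the last valid slot index passes; it illustrates the arithmetic only and is not the kernel repair code.

    /* Illustrative sketch, not kernel code: why fllast = -1U trips a
     * "fllast < agfl size" verifier when flcount == 0, and why pointing
     * fllast at the last possible slot does not. */
    #include <stdint.h>
    #include <stdio.h>

    #define DEMO_AGFL_SIZE 119u   /* hypothetical xfs_agfl_size() result */

    int main(void)
    {
        uint32_t flcount = 0;

        /* Old repair behaviour for an empty AGFL: fllast wraps to 0xffffffff... */
        uint32_t fllast_old = (uint32_t)-1;
        /* ...and the bound check rejects it. */
        printf("flcount=%u fllast=%u verifier ok? %d\n",
               (unsigned)flcount, (unsigned)fllast_old,
               fllast_old < DEMO_AGFL_SIZE);

        /* Fixed behaviour: point fllast at the last possible slot, as
         * xfs_repair phase5 does for a totally empty AGFL. */
        uint32_t fllast_new = DEMO_AGFL_SIZE - 1;
        printf("flcount=%u fllast=%u verifier ok? %d\n",
               (unsigned)flcount, (unsigned)fllast_new,
               fllast_new < DEMO_AGFL_SIZE);
        return 0;
    }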
2023-08-10  xfs: clear pagf_agflreset when repairing the AGFL  (Darrick J. Wong; 1 file, -1/+4)
Clear the pagf_agflreset flag when we're repairing the AGFL because we fix all the same padding problems that xfs_agfl_reset does. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: allow userspace to rebuild metadata structures  (Darrick J. Wong; 3 files, -3/+17)
Add a new (superuser-only) flag to the online metadata repair ioctl to force it to rebuild structures, even if they're not broken. We will use this to move metadata structures out of the way during a free space defragmentation operation. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: don't complain about unfixed metadata when repairs were injected  (Darrick J. Wong; 2 files, -5/+14)
While debugging other parts of online repair, I noticed that if someone injects FORCE_SCRUB_REPAIR, starts an IFLAG_REPAIR scrub on a piece of metadata, and the metadata repair fails, we'll log a message about uncorrected errors in the filesystem. This isn't strictly true if the scrub function didn't set OFLAG_CORRUPT and we're only doing the repair because the error injection knob is set. Repair functions are allowed to abort the entire operation at any point before committing new metadata, in which case the piece of metadata is in the same state as it was before. Therefore, the log message should be gated on the results of the scrub. Refactor the predicate and rearrange the code flow to make this happen. Note: If the repair function errors out after it commits the new metadata, the transaction cancellation will shut down the filesystem, which is an obvious sign of corrupt metadata. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: allow the user to cancel repairs before we start writing  (Darrick J. Wong; 1 file, -0/+16)
All online repair functions have the same structure: walk filesystem metadata structures gathering enough data to rebuild the structure, stage a new copy, and then commit the new copy. The gathering steps do not write anything to disk, so they are peppered with xchk_should_terminate calls to avoid softlockup warnings and to provide an opportunity to abort the repair (by killing xfs_scrub). However, it's not clear in the code base when is the last chance to abort cleanly without having to undo a bunch of structure. Therefore, add one more call to xchk_should_terminate (along with a comment) providing the sysadmin with the ability to abort before it's too late and to make it clear in the source code when it's no longer convenient or safe to abort a repair. As there are only four repair functions right now, this patch exists more to establish a precedent for subsequent additions than to deliver practical functionality. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: always rescan allegedly healthy per-ag metadata after repair  (Darrick J. Wong; 1 file, -0/+10)
After an online repair function runs for a per-AG metadata structure, sc->sick_mask is supposed to reflect the per-AG metadata that the repair function fixed. Our next move is to re-check the metadata to assess the completeness of our repair, so we don't want the rebuilt structure to be excluded from the rescan just because the health system previously logged a problem with the data structure. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: implement online scrubbing of rtsummary info  (Darrick J. Wong; 7 files, -28/+298)
Finish the realtime summary scrubber by adding the functions we need to compute a fresh copy of the rtsummary info and comparing it to the copy on disk. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: move the realtime summary file scrubber to a separate source file  (Darrick J. Wong; 3 files, -38/+60)
Move the realtime summary file checking code to a separate file in preparation to actually implement it. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: wrap ilock/iunlock operations on sc->ip  (Darrick J. Wong; 8 files, -29/+53)
Scrub tracks the resources that it's holding onto in the xfs_scrub structure. This includes the inode being checked (if applicable) and the inode lock state of that inode. Replace the open-coded structure manipulation with a trivial helper to eliminate sources of error. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: get our own reference to inodes that we want to scrub  (Darrick J. Wong; 6 files, -13/+36)
When we want to scrub a file, get our own reference to the inode unconditionally. This will make disposal rules simpler in the long run. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: track usage statistics of online fsck  (Darrick J. Wong; 10 files, -9/+535)
Track the usage, outcomes, and run times of the online fsck code, and report these values via debugfs. The columns in the file are:

 * scrubber name
 * number of scrub invocations
 * clean objects found
 * corruptions found
 * optimizations found
 * cross referencing failures
 * inconsistencies found during cross referencing
 * incomplete scrubs
 * warnings
 * number of times scrub had to retry
 * cumulative amount of time spent scrubbing (microseconds)
 * number of repair invocations
 * successfully repaired objects
 * cumulative amount of time spent repairing (microseconds)

Signed-off-by: Darrick J. Wong <[email protected]>
Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: create scaffolding for creating debugfs entries  (Darrick J. Wong; 4 files, -2/+34)
Set up debugfs directories for xfs as a whole, and a subdirectory for each mounted filesystem. This will enable the creation of debugfs files in the next patch. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: improve xfarray quicksort pivot  (Darrick J. Wong; 2 files, -69/+148)
Now that we have the means to do insertion sorts of small in-memory subsets of an xfarray, use it to improve the quicksort pivot algorithm by reading 7 records into memory and finding the median of that. This should prevent bad partitioning when a[lo] and a[hi] end up next to each other in the final sort, which can happen when sorting for cntbt repair when the free space is extremely fragmented (e.g. generic/176). This doesn't speed up the average quicksort run by much, but it will (hopefully) avoid the quadratic time collapse for which quicksort is famous. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
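For illustration, the median-of-7 idea reads a handful of records into memory, insertion-sorts that small sample, and uses its middle element as the pivot. The sketch below works on a plain int array rather than the xfarray, and every name in it (pick_pivot, NSAMPLES) is invented for the example.

    /* Sketch of median-of-7 pivot selection over a plain int array;
     * illustrative only, not the xfarray implementation. */
    #include <stdio.h>

    #define NSAMPLES 7

    static int pick_pivot(const int *a, size_t lo, size_t hi)
    {
        int samp[NSAMPLES];
        size_t n = hi - lo + 1;
        size_t step = n > NSAMPLES ? n / NSAMPLES : 1;

        /* Read up to seven records spread across [lo, hi] into memory. */
        int ns = 0;
        for (size_t i = lo; i <= hi && ns < NSAMPLES; i += step)
            samp[ns++] = a[i];

        /* Insertion sort the small in-memory sample... */
        for (int i = 1; i < ns; i++) {
            int v = samp[i], j = i - 1;
            while (j >= 0 && samp[j] > v) {
                samp[j + 1] = samp[j];
                j--;
            }
            samp[j + 1] = v;
        }

        /* ...and use its median as the quicksort pivot value. */
        return samp[ns / 2];
    }

    int main(void)
    {
        int a[] = { 9, 1, 8, 2, 7, 3, 6, 4, 5, 0, 11, 10 };
        printf("pivot = %d\n",
               pick_pivot(a, 0, sizeof(a) / sizeof(a[0]) - 1));
        return 0;
    }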
2023-08-10  xfs: cache pages used for xfarray quicksort convergence  (Darrick J. Wong; 2 files, -10/+86)
After quicksort picks a pivot item for a particular subsort, it walks the records in that subset from the outside in, rearranging them so that every record less than the pivot comes before it, and every record greater than the pivot comes after it. This scan has a lot of locality, so we can speed it up quite a bit by grabbing the xfile backing page and holding onto it as long as we possibly can. Doing so reduces the runtime by another 5% on the author's computer. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: speed up xfarray sort by sorting xfile page contents directly  (Darrick J. Wong; 3 files, -0/+121)
If all the records in an xfarray subset live within the same memory page, we can short-circuit even more quicksort recursion by mapping that page into the local CPU and using the kernel's heapsort function to sort the subset. On the author's computer, this reduces the runtime by another 15% on a 500,000 element array. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: teach xfile to pass back direct-map pages to caller  (Darrick J. Wong; 3 files, -0/+120)
Certain xfile array operations (such as sorting) can be sped up quite a bit by allowing xfile users to grab a page to bulk-read the records contained within it. Create helper methods to facilitate this. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: convert xfarray insertion sort to heapsort using scratchpad memory  (Darrick J. Wong; 3 files, -120/+39)
In the previous patch, we created a very basic quicksort implementation for xfile arrays. While the use of an alternate sorting algorithm to avoid quicksort recursion on very small subsets reduces the runtime modestly, we could do better than a load and store-heavy insertion sort, particularly since each load and store requires a page mapping lookup in the xfile. For a small increase in kernel memory requirements, we could instead bulk load the xfarray records into memory, use the kernel's existing heapsort implementation to sort the records, and bulk store the memory buffer back into the xfile. On the author's computer, this reduces the runtime by about 5% on a 500,000 element array. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Kent Overstreet <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
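The bulk load / sort in memory / bulk store pattern described above can be pictured with ordinary userspace pieces. In the sketch below, libc qsort() stands in for the kernel's heapsort and a plain array stands in for the xfile, so it only demonstrates the data movement, not the xfarray code.

    /* Sketch of "copy a small run into a scratchpad, sort it in memory,
     * copy it back"; all names and the backing store are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    static void sort_run_via_scratchpad(int *backing, size_t lo, size_t hi,
                                        int *scratch)
    {
        size_t n = hi - lo + 1;

        /* Bulk load the records into the in-memory scratchpad... */
        memcpy(scratch, backing + lo, n * sizeof(*scratch));
        /* ...sort them entirely in memory... */
        qsort(scratch, n, sizeof(*scratch), cmp_int);
        /* ...and bulk store the sorted buffer back. */
        memcpy(backing + lo, scratch, n * sizeof(*scratch));
    }

    int main(void)
    {
        int backing[] = { 5, 3, 9, 1, 4, 8, 2 };
        int scratch[7];

        sort_run_via_scratchpad(backing, 0, 6, scratch);
        for (size_t i = 0; i < 7; i++)
            printf("%d ", backing[i]);
        printf("\n");
        return 0;
    }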
2023-08-10  xfs: enable sorting of xfile-backed arrays  (Darrick J. Wong; 3 files, -0/+750)
The btree bulk loading code requires that records be provided in the correct record sort order for the given btree type. In general, repair code cannot be required to collect records in order, and it is not feasible to insert new records in the middle of an array to maintain sort order. Implement a sorting algorithm so that we can sort the records just prior to bulk loading.

In principle, an xfarray could consume many gigabytes of memory and its backing pages can be sent out to disk at any time. This means that we cannot map the entire array into memory at once, so we must find a way to divide the work into smaller portions (e.g. a page) that /can/ be mapped into memory.

Quicksort seems like a reasonable fit for this purpose, since it uses a divide and conquer strategy to keep its average runtime logarithmic. The solution presented here is a port of the glibc implementation, which itself is derived from the median-of-three and tail call recursion strategies outlined by Sedgewick.

Subsequent patches will optimize the implementation further by utilizing the kernel's heapsort on directly-mapped memory whenever possible, and improving the quicksort pivot selection algorithm to try to avoid O(n^2) collapses.

Note: The sorting functionality gets its own patch because the basic big array mechanisms were plenty for a single code patch.

Signed-off-by: Darrick J. Wong <[email protected]>
Reviewed-by: Kent Overstreet <[email protected]>
Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: create a big array data structure  (Darrick J. Wong; 8 files, -1/+922)
Create a simple 'big array' data structure for storage of fixed-size metadata records that will be used to reconstruct a btree index. For repair operations, the most important operations are append, iterate, and sort.

Earlier implementations of the big array used linked lists and suffered from severe problems -- pinning all records in kernel memory was not a good idea and frequently led to OOM situations; random access was very inefficient; and record overhead for the lists was unacceptably high at 40-60%.

Therefore, the big memory array relies on the 'xfile' abstraction, which creates a memfd file and stores the records in page cache pages. Since the memfd is created in tmpfs, the memory pages can be pushed out to disk if necessary and we have a built-in usage limit of 50% of physical memory.

Signed-off-by: Darrick J. Wong <[email protected]>
Reviewed-by: Kent Overstreet <[email protected]>
Reviewed-by: Dave Chinner <[email protected]>
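The underlying trick -- fixed-size records stored through a memfd so the backing pages live in tmpfs and can be swapped out under memory pressure -- can be sketched from userspace. memfd_create() below is the ordinary Linux system call; the record layout, names, and error handling are simplified for illustration and do not mirror the kernel's xfile/xfarray API.

    /* Userspace sketch of a "big array backed by a memfd": append and read
     * fixed-size records via pwrite/pread so the pages sit in tmpfs. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct demo_rec {                  /* hypothetical fixed-size record */
        unsigned long long key;
        unsigned long long val;
    };

    int main(void)
    {
        int fd = memfd_create("demo_bigarray", 0);
        if (fd < 0) {
            perror("memfd_create");
            return 1;
        }

        /* "Append" records at index * record size. */
        for (unsigned long long i = 0; i < 4; i++) {
            struct demo_rec r = { .key = i, .val = i * 100 };
            if (pwrite(fd, &r, sizeof(r), (off_t)(i * sizeof(r))) !=
                (ssize_t)sizeof(r))
                return 1;
        }

        /* Random access read-back of record 2. */
        struct demo_rec r;
        if (pread(fd, &r, sizeof(r), (off_t)(2 * sizeof(r))) !=
            (ssize_t)sizeof(r))
            return 1;
        printf("rec[2] = { key=%llu, val=%llu }\n", r.key, r.val);

        close(fd);
        return 0;
    }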
2023-08-10  xfs: use per-AG bitmaps to reap unused AG metadata blocks during repair  (Darrick J. Wong; 5 files, -95/+45)
The AGFL repair code uses a series of bitmaps to figure out where there are OWN_AG blocks that are not claimed by the free space and rmap btrees. These blocks become the new AGFL, and any overflow is reaped. The bitmaps currently track xfs_fsblock_t even though we already know the AG number.

In the last patch, we introduced a new bitmap "type" for tracking xfs_agblock_t extents. Port the reaping code and the AGFL repair to use this new type, which makes it very obvious what we're tracking. This also eliminates a bunch of unnecessary agblock <-> fsblock conversions.

Signed-off-by: Darrick J. Wong <[email protected]>
Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: reap large AG metadata extents when possible  (Darrick J. Wong; 8 files, -121/+409)
When we're freeing extents that have been set in a bitmap, break the bitmap extent into multiple sub-extents organized by fate, and reap the extents. This enables us to dispose of old resources more efficiently than doing them block by block. While we're at it, rename the reaping functions to make it clear that they're reaping per-AG extents. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: allow scanning ranges of the buffer cache for live buffers  (Darrick J. Wong; 3 files, -2/+22)
After an online repair, we need to invalidate buffers representing the blocks from the old metadata that we're replacing. It's possible that parts of a tree that were previously cached in memory are no longer accessible due to media failure or other corruption on interior nodes, so repair figures out the old blocks from the reverse mapping data and scans the buffer cache directly. In other words, online fsck needs to find all the live (i.e. non-stale) buffers for a range of fsblocks so that it can invalidate them. Unfortunately, the current buffer cache code triggers asserts if the rhashtable lookup finds a non-stale buffer of a different length than the key we searched for. For regular operation this is desirable, but for this repair procedure, we don't care since we're going to forcibly stale the buffer anyway. Add an internal lookup flag to avoid the assert. Skip buffers that are already XBF_STALE. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: rearrange xrep_reap_block to make future code flow easier  (Darrick J. Wong; 3 files, -17/+25)
Rearrange the logic inside xrep_reap_block to make it more obvious that crosslinked metadata blocks are handled differently. Add a couple of tracepoints so that we can tell what's going on at the end of a btree rebuild operation. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: use deferred frees to reap old btree blocks  (Darrick J. Wong; 1 file, -4/+26)
Use deferred frees (EFIs) to reap the blocks of a btree that we just replaced. This helps us to shrink the window in which those old blocks could be lost due to a system crash, though we try to flush the EFIs every few hundred blocks so that we don't also overflow the transaction reservations during and after we commit the new btree. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: only allow reaping of per-AG blocks in xrep_reap_extents  (Darrick J. Wong; 1 file, -32/+13)
Now that we've refactored btree cursors to require the caller to pass in a perag structure, there are numerous problems in xrep_reap_extents if it's being called to reap extents for an inode metadata repair. We don't have any repair functions that can do that, so drop the support for now. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: only invalidate blocks if we're going to free them  (Darrick J. Wong; 2 files, -55/+42)
When we're discarding old btree blocks after a repair, only invalidate the buffers for the ones that we're freeing -- if the metadata was crosslinked with another data structure, we don't want to touch it. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: move the post-repair block reaping code to a separate file  (Darrick J. Wong; 6 files, -234/+283)
Reaping blocks after a repair is a complicated affair involving a lot of rmap btree lookups and figuring out if we're going to unmap or free old metadata blocks that might be crosslinked. Eventually, we will need to be able to reap per-AG metadata blocks, bmbt blocks from inode forks, garbage CoW staging extents, and (even later) blocks from btrees rooted in inodes. This results in a lot of reaping code, so we might as well split that off while it's easy. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-10  xfs: cull repair code that will never get used  (Darrick J. Wong; 3 files, -111/+0)
These two functions date from the era when I thought that we could rebuild btrees by creating an alternate root and adding records one by one. In other words, they predate the btree bulk loader. They're not necessary now, so remove them. Signed-off-by: Darrick J. Wong <[email protected]> Reviewed-by: Dave Chinner <[email protected]>
2023-08-06  Merge tag 'v6.5-rc5.vfs.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs  (Linus Torvalds; 13 files, -42/+79)

Pull vfs fixes from Christian Brauner:

 - Fix a wrong check for O_TMPFILE during RESOLVE_CACHED lookup

 - Clean up directory iterators and clarify file_needs_f_pos_lock()

* tag 'v6.5-rc5.vfs.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
  fs: rely on ->iterate_shared to determine f_pos locking
  vfs: get rid of old '->iterate' directory operation
  proc: fix missing conversion to 'iterate_shared'
  open: make RESOLVE_CACHED correctly test for O_TMPFILE
2023-08-06  fs: rely on ->iterate_shared to determine f_pos locking  (Christian Brauner; 1 file, -1/+1)
Now that we removed ->iterate we don't need to check for either ->iterate or ->iterate_shared in file_needs_f_pos_lock(). Simply check for ->iterate_shared instead. This will tell us whether we need to unconditionally take the lock. Not only does it allow us to avoid checking f_inode's mode, it also clearly shows that we're locking because of readdir.

Signed-off-by: Christian Brauner <[email protected]>
2023-08-06  vfs: get rid of old '->iterate' directory operation  (Linus Torvalds; 10 files, -39/+76)
All users now just use '->iterate_shared()', which only takes the directory inode lock for reading.

Filesystems that never got converted to shared mode now instead use a wrapper that drops the lock, re-takes it in write mode, calls the old function, and then downgrades the lock back to read mode. This way the VFS layer and other callers no longer need to care about filesystems that never got converted to the modern era.

The filesystems that use the new wrapper are ceph, coda, exfat, jfs, ntfs, ocfs2, overlayfs, and vboxsf.

Honestly, several of them look like they really could just iterate their directories in shared mode and skip the wrapper entirely, but the point of this change is to not change semantics or fix filesystems that haven't been fixed in the last 7+ years, but to finally get rid of the dual iterators.

Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>
2023-08-06  proc: fix missing conversion to 'iterate_shared'  (Linus Torvalds; 1 file, -1/+1)
I'm looking at the directory handling due to the discussion about f_pos locking (see commit 797964253d35: "file: reinstate f_pos locking optimization for regular files"), and wanting to clean that up.

And one source of ugliness is how we were supposed to move filesystems over to the '->iterate_shared()' function that only takes the inode lock for reading many many years ago, but several filesystems still use the bad old '->iterate()' that takes the inode lock for exclusive access.

See commit 6192269444eb ("introduce a parallel variant of ->iterate()") that also added some documentation stating

    Old method is only used if the new one is absent; eventually it will be removed. Switch while you still can; the old one won't stay.

and that was back in April 2016. Here we are, many years later, and the old version is still clearly sadly alive and well.

Now, some of those old style iterators are probably just because the filesystem may end up having per-inode mutable data that it uses for iterating a directory, but at least one case is just a mistake.

Al switched over most filesystems to use '->iterate_shared()' back when it was introduced. In particular, the /proc filesystem was converted as one of the first ones in commit f50752eaa0b0 ("switch all procfs directories ->iterate_shared()").

But then one new user of '->iterate()' was re-introduced by commit 6d9c939dbe4d ("procfs: add smack subdir to attrs"). And that's clearly not what we wanted, since that new case just uses the same 'proc_pident_readdir()' and 'proc_pident_lookup()' helper functions that other /proc pident directories use, and they are most definitely safe to use with the inode lock held shared. So just fix it.

This still leaves a fair number of oddball filesystems using the old-style directory iterator (ceph, coda, exfat, jfs, ntfs, ocfs2, overlayfs, and vboxsf), but at least we don't have any remaining in the core filesystems. I'm going to add a wrapper function that just drops the read-lock and takes it as a write lock, so that we can clean up the core vfs layer and make all the ugly 'this filesystem needs exclusive inode locking' be just filesystem-internal warts. I just didn't want to make that conversion when we still had a core user left.

Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>
2023-08-06  open: make RESOLVE_CACHED correctly test for O_TMPFILE  (Aleksa Sarai; 1 file, -1/+1)
O_TMPFILE is actually __O_TMPFILE|O_DIRECTORY. This means that the old fast-path check for RESOLVE_CACHED would reject all users passing O_DIRECTORY with -EAGAIN, when in fact the intended test was to check for __O_TMPFILE. Cc: [email protected] # v5.12+ Fixes: 99668f618062 ("fs: expose LOOKUP_CACHED through openat2() RESOLVE_CACHED") Signed-off-by: Aleksa Sarai <[email protected]> Message-Id: <[email protected]> Signed-off-by: Christian Brauner <[email protected]>
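The masking pitfall is reproducible with plain C. The flag values below are stand-ins invented for the demo; only the relationship the commit describes (O_TMPFILE being __O_TMPFILE|O_DIRECTORY) matters, so treat this as an illustration of the bug class rather than the kernel's actual check.

    /* Illustrative flag-mask bug: with a composite O_TMPFILE,
     * "flags & DEMO_O_TMPFILE" is non-zero for a plain O_DIRECTORY open too. */
    #include <stdio.h>

    #define DEMO_O_DIRECTORY   0200000
    #define DEMO___O_TMPFILE   020000000
    #define DEMO_O_TMPFILE     (DEMO___O_TMPFILE | DEMO_O_DIRECTORY)

    static int rejects_buggy(unsigned int flags)
    {
        return (flags & DEMO_O_TMPFILE) != 0;       /* old, too broad */
    }

    static int rejects_fixed(unsigned int flags)
    {
        return (flags & DEMO___O_TMPFILE) != 0;     /* intended test */
    }

    int main(void)
    {
        unsigned int dir_open = DEMO_O_DIRECTORY;   /* plain directory open */

        printf("buggy check rejects O_DIRECTORY: %d\n", rejects_buggy(dir_open));
        printf("fixed check rejects O_DIRECTORY: %d\n", rejects_fixed(dir_open));
        return 0;
    }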
2023-08-05  Merge tag '6.5-rc4-smb3-client-fix' of git://git.samba.org/sfrench/cifs-2.6  (Linus Torvalds; 1 file, -1/+5)

Pull smb client fix from Steve French:

 - Fix DFS interlink problem (different namespace)

* tag '6.5-rc4-smb3-client-fix' of git://git.samba.org/sfrench/cifs-2.6:
  smb: client: fix dfs link mount against w2k8
2023-08-04  Merge tag 'ceph-for-6.5-rc5' of https://github.com/ceph/ceph-client  (Linus Torvalds; 3 files, -2/+17)

Pull ceph fixes from Ilya Dryomov:
 "Two patches to improve RBD exclusive lock interaction with osd_request_timeout option and another fix to reduce the potential for erroneous blocklisting -- this time in CephFS. All going to stable"

* tag 'ceph-for-6.5-rc5' of https://github.com/ceph/ceph-client:
  libceph: fix potential hang in ceph_osdc_notify()
  rbd: prevent busy loop when requesting exclusive lock
  ceph: defer stopping mdsc delayed_work
2023-08-04  file: reinstate f_pos locking optimization for regular files  (Linus Torvalds; 1 file, -1/+17)
In commit 20ea1e7d13c1 ("file: always lock position for FMODE_ATOMIC_POS") we ended up always taking the file pos lock, because pidfd_getfd() could get a reference to the file even when it didn't have an elevated file count due to threading or other sharing cases.

But Mateusz Guzik reports that the extra locking is actually measurable, so let's re-introduce the optimization, and only force the locking for directory traversal. Directories need the lock for correctness reasons, while regular files only need it for "POSIX semantics".

Since pidfd_getfd() is about debuggers etc special things that are _way_ outside of POSIX, we can relax the rules for that case.

Reported-by: Mateusz Guzik <[email protected]>
Cc: Christian Brauner <[email protected]>
Link: https://lore.kernel.org/linux-fsdevel/20230803095311.ijpvhx3fyrbkasul@f/
Signed-off-by: Linus Torvalds <[email protected]>
2023-08-03  Merge tag 'nfsd-6.5-3' of git://git.kernel.org/pub/scm/linux/kernel/git/cel/linux  (Linus Torvalds; 1 file, -3/+6)

Pull nfsd fix from Chuck Lever:

 - Fix tmpfs splice read support

* tag 'nfsd-6.5-3' of git://git.kernel.org/pub/scm/linux/kernel/git/cel/linux:
  nfsd: Fix reading via splice
2023-08-03  Merge tag 'erofs-for-6.5-rc5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs  (Linus Torvalds; 2 files, -5/+4)

Pull erofs fixes from Gao Xiang:

 - Fix data corruption caused by insufficient decompression on deduplicated compressed extents

 - Drop a useless s_magic checking in erofs_kill_sb()

* tag 'erofs-for-6.5-rc5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs:
  erofs: drop unnecessary WARN_ON() in erofs_kill_sb()
  erofs: fix wrong primary bvec selection on deduplicated extents
2023-08-02  Merge tag 'exfat-for-6.5-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat  (Linus Torvalds; 2 files, -20/+22)

Pull exfat fixes from Namjae Jeon:

 - Fix page allocation failure from allocation bitmap by using kvmalloc_array/kvfree

 - Add the check to validate if filename entries exceed max filename length

 - Fix potential deadlock condition from dir_emit*()

* tag 'exfat-for-6.5-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat:
  exfat: release s_lock before calling dir_emit()
  exfat: check if filename entries exceeds max filename length
  exfat: use kvmalloc_array/kvfree instead of kmalloc_array/kfree
2023-08-02  smb: client: fix dfs link mount against w2k8  (Paulo Alcantara; 1 file, -1/+5)
Customer reported that they couldn't mount their DFS link that was seen by the client as a DFS interlink -- a special form of DFS link where its single target may point to a different DFS namespace -- and it turned out that it was just a regular DFS link where its referral header flags missed the StorageServers bit, thus making the client think it couldn't tree connect to the target directly without requiring further referrals.

When the DFS link referral header flags miss the StorageServers bit and its target doesn't respond to any referrals, then tree connect to it directly.

Fixes: a1c0d00572fc ("cifs: share dfs connections and supers")
Cc: [email protected]
Signed-off-by: Paulo Alcantara (SUSE) <[email protected]>
Signed-off-by: Steve French <[email protected]>
2023-08-02  ceph: defer stopping mdsc delayed_work  (Xiubo Li; 3 files, -2/+17)
Flushing the dirty buffer may take a long time if the cluster is overloaded or if there is a network issue. So we should ping the MDSs periodically to keep alive, else the MDS will blocklist the kclient.

Cc: [email protected]
Link: https://tracker.ceph.com/issues/61843
Signed-off-by: Xiubo Li <[email protected]>
Reviewed-by: Milind Changire <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
2023-08-01  erofs: drop unnecessary WARN_ON() in erofs_kill_sb()  (Gao Xiang; 1 file, -2/+0)
Previously, .kill_sb() would be called only after fill_super failed; that is going to change [1]. Besides, checking for s_magic in erofs_kill_sb() is unnecessary from any point of view. Let's get rid of it now.

[1] https://lore.kernel.org/r/20230731-flugbereit-wohnlage-78acdf95ab7e@brauner

Signed-off-by: Gao Xiang <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Christian Brauner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-08-01  erofs: fix wrong primary bvec selection on deduplicated extents  (Gao Xiang; 1 file, -3/+4)
When handling deduplicated compressed data, there can be multiple decompressed extents pointing to the same compressed data in one shot. In such cases, the bvecs which belong to the longest extent will be selected as the primary bvecs for real decompressors to decode and the other duplicated bvecs will be directly copied from the primary bvecs.

Previously, only relative offsets of the longest extent were checked to decompress the primary bvecs. On rare occasions, it can be incorrect if there are several extents with the same start relative offset. As a result, some short bvecs could be selected for decompression and then cause data corruption.

For example, as Shijie Sun reported off-list, consider the following extents of a file:

    117:   903345..  915250 |   11905 :  385024..  389120 |    4096
    ...
    119:   919729..  930323 |   10594 :  385024..  389120 |    4096
    ...
    124:   968881..  980786 |   11905 :  385024..  389120 |    4096

The start relative offset is the same: 2225, but extent 119 (919729..930323) is shorter than the others.

Let's restrict the bvec length in addition to the start offset if bvecs are not full.

Reported-by: Shijie Sun <[email protected]>
Fixes: 5c2a64252c5d ("erofs: introduce partial-referenced pclusters")
Tested-by: Shijie Sun <[email protected]>
Reviewed-by: Yue Hu <[email protected]>
Reviewed-by: Chao Yu <[email protected]>
Signed-off-by: Gao Xiang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2023-07-30  nfsd: Fix reading via splice  (David Howells; 1 file, -3/+6)
nfsd_splice_actor() has a clause in its loop that chops up a compound page into individual pages such that if the same page is seen twice in a row, it is discarded the second time. This is a problem with the advent of shmem_splice_read() as that inserts zero_pages into the pipe in lieu of pages that aren't present in the pagecache.

Fix this by assuming that the last page is being extended only if the currently stored length + starting offset is not currently on a page boundary.

This can be tested by NFS-exporting a tmpfs filesystem on the test machine and truncating it to more than a page in size (eg. truncate -s 8192) and then reading it by NFS. The first page will be all zeros, but thereafter garbage will be read.

Note: I wonder if we can ever get a situation now where we get a splice that gives us contiguous parts of a page in separate actor calls. As NFSD can only be splicing from a file (I think), there are only three sources of the page: copy_splice_read(), shmem_splice_read() and file_splice_read(). The first allocates pages for the data it reads, so the problem cannot occur; the second should never see a partial page; and the third waits for each page to become available before we're allowed to read from it.

Fixes: bd194b187115 ("shmem: Implement splice-read")
Reported-by: Chuck Lever <[email protected]>
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Reviewed-by: NeilBrown <[email protected]>
cc: Hugh Dickins <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
Signed-off-by: Chuck Lever <[email protected]>
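The boundary test described in the fix ("the last page is being extended only if the currently stored length + starting offset is not currently on a page boundary") reduces to a single modulo check. The helper below is hypothetical and exists only to spell that test out.

    /* Hypothetical helper spelling out the boundary test from the fix
     * description: the new chunk extends the previously stored page only
     * when the bytes accumulated so far do not end on a page boundary. */
    #include <stdbool.h>
    #include <stdio.h>

    #define DEMO_PAGE_SIZE 4096u

    static bool extends_last_page(unsigned int start_offset, unsigned int stored_len)
    {
        return ((start_offset + stored_len) % DEMO_PAGE_SIZE) != 0;
    }

    int main(void)
    {
        /* Ends exactly on a page boundary: the next chunk starts a new page (prints 0). */
        printf("%d\n", extends_last_page(0, 8192));
        /* Ends mid-page: the next chunk extends the last page (prints 1). */
        printf("%d\n", extends_last_page(0, 6000));
        return 0;
    }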
2023-07-29  Merge tag '6.5-rc3-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6  (Linus Torvalds; 8 files, -7/+19)

Pull smb client fixes from Steve French:
 "Four small SMB3 client fixes:

   - two reconnect fixes (to address the case where non-default iocharset gets incorrectly overridden at reconnect with the default charset)

   - fix for NTLMSSP_AUTH request setting a flag incorrectly

   - Add missing check for invalid tlink (tree connection) in ioctl"

* tag '6.5-rc3-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6:
  cifs: add missing return value check for cifs_sb_tlink
  smb3: do not set NTLMSSP_VERSION flag for negotiate not auth request
  cifs: fix charset issue in reconnection
  fs/nls: make load_nls() take a const parameter
2023-07-29  arch/*/configs/*defconfig: Replace AUTOFS4_FS by AUTOFS_FS  (Sven Joachim; 1 file, -12/+0)
Commit a2225d931f75 ("autofs: remove left-over autofs4 stubs") promised the removal of the fs/autofs/Kconfig fragment for AUTOFS4_FS within a couple of releases, but five years later this still has not happened yet, and AUTOFS4_FS is still enabled in 63 defconfigs.

Get rid of it mechanically:

    git grep -l CONFIG_AUTOFS4_FS -- '*defconfig' | xargs sed -i 's/AUTOFS4_FS/AUTOFS_FS/'

Also just remove the AUTOFS4_FS config option stub. Anybody who hasn't regenerated their config file in the last five years will need to just get the new name right when they do.

Signed-off-by: Sven Joachim <[email protected]>
Acked-by: Ian Kent <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2023-07-28  Merge tag 'mm-hotfixes-stable-2023-07-28-15-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds; 1 file, -1/+1)

Pull hotfixes from Andrew Morton:
 "11 hotfixes. Five are cc:stable and the remainder address post-6.4 issues or aren't considered serious enough to justify backporting"

* tag 'mm-hotfixes-stable-2023-07-28-15-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  mm/memory-failure: fix hardware poison check in unpoison_memory()
  proc/vmcore: fix signedness bug in read_from_oldmem()
  mailmap: update remaining active codeaurora.org email addresses
  mm: lock VMA in dup_anon_vma() before setting ->anon_vma
  mm: fix memory ordering for mm_lock_seq and vm_lock_seq
  scripts/spelling.txt: remove 'thead' as a typo
  mm/pagewalk: fix EFI_PGT_DUMP of espfix area
  shmem: minor fixes to splice-read implementation
  tmpfs: fix Documentation of noswap and huge mount options
  Revert "um: Use swap() to make code cleaner"
  mm/damon/core-test: initialise context before test in damon_test_set_attrs()
2023-07-28  Merge tag 'ceph-for-6.5-rc4' of https://github.com/ceph/ceph-client  (Linus Torvalds; 1 file, -1/+1)

Pull ceph fixes from Ilya Dryomov:
 "A patch to reduce the potential for erroneous RBD exclusive lock blocklisting (fencing) with a couple of prerequisites and a fixup to prevent metrics from being sent to the MDS even just once after that has been disabled by the user. All marked for stable"

* tag 'ceph-for-6.5-rc4' of https://github.com/ceph/ceph-client:
  rbd: retrieve and check lock owner twice before blocklisting
  rbd: harden get_lock_owner_info() a bit
  rbd: make get_lock_owner_info() return a single locker or NULL
  ceph: never send metrics if disable_send_metrics is set
2023-07-28  Merge tag '9p-fixes-6.5-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs  (Linus Torvalds; 7 files, -19/+8)

Pull 9p fixes from Eric Van Hensbergen:
 "Misc set of fixes for 9p. Most of these clean up warnings we've gotten out of compilation tools, but several of them were from inspection while hunting down a couple of regressions.

  The most important one is 75b396821cb7 ("fs/9p: remove unnecessary and overrestrictive check") which caused a regression for some folks by restricting mmap in any case where writeback caches weren't enabled.

  Most of the other bugs caught via inspection were type mismatches"

* tag '9p-fixes-6.5-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs:
  fs/9p: Remove unused extern declaration
  9p: remove dead stores (variable set again without being read)
  9p: virtio: skip incrementing unused variable
  9p: virtio: make sure 'offs' is initialized in zc_request
  9p: virtio: fix unlikely null pointer deref in handle_rerror
  9p: fix ignored return value in v9fs_dir_release
  fs/9p: remove unnecessary invalidate_inode_pages2
  fs/9p: fix type mismatch in file cache mode helper
  fs/9p: fix typo in comparison logic for cache mode
  fs/9p: remove unnecessary and overrestrictive check
  fs/9p: Fix a datatype used with V9FS_DIRECT_IO
2023-07-27  proc/vmcore: fix signedness bug in read_from_oldmem()  (Dan Carpenter; 1 file, -1/+1)
The bug is in the error handling:

    if (tmp < nr_bytes) {

"tmp" can hold negative error codes but because "nr_bytes" is type size_t the negative error codes are treated as very high positive values (success). Fix this by changing "nr_bytes" to type ssize_t. The "nr_bytes" variable is used to store values between 1 and PAGE_SIZE and they can fit in ssize_t without any issue.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 5d8de293c224 ("vmcore: convert copy_oldmem_page() to take an iov_iter")
Signed-off-by: Dan Carpenter <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Baoquan He <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
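The bug class is easy to demonstrate in isolation: comparing a signed "bytes-or-error" value against a size_t promotes the signed side to unsigned, so a negative error code compares as a huge positive number. The snippet below is a standalone demo with made-up values, not the vmcore code.

    /* Demo of the signed/unsigned comparison trap: a negative error value
     * compared against a size_t is converted to a huge unsigned number,
     * so the "error" branch is never taken.  (Compilers flag this with
     * -Wsign-compare, which is exactly the point of the demo.) */
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        ssize_t tmp = -12;            /* e.g. an -ENOMEM style error code */
        size_t nr_bytes = 4096;       /* buggy: unsigned */
        ssize_t nr_bytes_fixed = 4096;

        if (tmp < nr_bytes)
            printf("buggy compare: error detected\n");
        else
            printf("buggy compare: error missed (tmp promoted to %zu)\n",
                   (size_t)tmp);

        if (tmp < nr_bytes_fixed)
            printf("fixed compare: error detected\n");
        return 0;
    }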