path: root/fs/btrfs/compression.c
Age | Commit message | Author | Files | Lines
2017-11-15 | Btrfs: add write_flags for compression bio | Liu Bo | 1 | -3/+4
The compression code path has only flagged bios with REQ_OP_WRITE, no matter where the bios come from, but a write could be a sync write if fsync starts this writeback, or a normal writeback write if the wb kthread starts a periodic writeback. This breaks the rule that sync writes and writeback writes need to be differentiated from each other, because from the block layer's point of view all bios need to carry these flags so it can do its management, e.g. throttling. This patch passes writeback_control down the compression write path so that it can send bios with the proper flags to the block layer. Signed-off-by: Liu Bo <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
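Schematically, the write path can now derive the REQ_* flags from the writeback_control before submitting the compressed bio; the helper wbc_to_write_flags() is the one mentioned elsewhere in this log, while the wrapper function below is only an illustrative sketch, not the exact patch:

    /* sketch: pick sync vs. background write flags from the wbc */
    static void set_compressed_write_opf(struct bio *bio,
                                         struct writeback_control *wbc)
    {
        /* REQ_SYNC is set for fsync-driven writeback, nothing extra for
         * periodic background writeback, so the block layer can tell the
         * two apart (e.g. for throttling). */
        unsigned int write_flags = wbc_to_write_flags(wbc);

        bio->bi_opf = REQ_OP_WRITE | write_flags;
    }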
2017-11-01 | Btrfs: heuristic: add Shannon entropy calculation | Timofey Titovets | 1 | -1/+84
The byte distribution check in the heuristic filters edge data cases and sometimes fails to classify the input data. Fix that by adding a Shannon entropy calculation, which covers classification of most other data types. As Shannon entropy needs log2 with some precision to work, use ilog2(N), and for increased precision use ilog2(pow(N, 4)). The Shannon entropy computation has been slightly changed to avoid signed numbers and division: the calculation follows the formula directly, rather than a precalculated table or chains of if-else. The accuracy errors of ilog2 are compensated by lowering the thresholds: @ENTROPY_LVL_ACEPTABLE 70 -> 65, @ENTROPY_LVL_HIGH 85 -> 80. Signed-off-by: Timofey Titovets <[email protected]> Reviewed-by: David Sterba <[email protected]> [ update comments ] Signed-off-by: David Sterba <[email protected]>
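A sketch of the integer-only entropy described above, i.e. H = -sum(p_i * log2(p_i)) scaled to 0..100 without signs or division by p; the workspace fields and BUCKET_SIZE follow the heuristic workspace used in this series, but treat the code as illustrative:

    /* log2(n) with extra precision: ilog2(n^4) == 4 * log2(n) */
    static u32 ilog2_w(u64 n)
    {
        return ilog2(n * n * n * n);
    }

    static u32 shannon_entropy(const struct heuristic_ws *ws)
    {
        const u32 entropy_max = 8 * ilog2_w(2);   /* 8 bits/byte, scaled */
        u32 entropy_sum = 0;
        u32 p, p_base, sz_base;
        u32 i;

        sz_base = ilog2_w(ws->sample_size);
        for (i = 0; i < BUCKET_SIZE && ws->bucket[i].count > 0; i++) {
            p = ws->bucket[i].count;
            p_base = ilog2_w(p);
            /* p * (log2(sample_size) - log2(p)), summed unsigned */
            entropy_sum += p * (sz_base - p_base);
        }
        entropy_sum /= ws->sample_size;

        /* percentage of the maximum possible entropy */
        return entropy_sum * 100 / entropy_max;
    }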
2017-11-01 | Btrfs: heuristic: add byte core set calculation | Timofey Titovets | 1 | -0/+65
Calculate the byte core set for a data sample: sort the buckets' counters in decreasing order and count how many values cover 90% of the sample. If the core set size is low (<= 25%), the data is easily compressible. If the core set size is high (>= 80%), the data is not compressible. Signed-off-by: Timofey Titovets <[email protected]> Reviewed-by: David Sterba <[email protected]> [ update comments ] Signed-off-by: David Sterba <[email protected]>
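Roughly, with the 25%/80% thresholds later expressed as bucket counts (64 and 200 of 256 byte values); the comparator, field names and constants here are illustrative, not necessarily the exact patch:

    static int bucket_comp_rev(const void *lv, const void *rv)
    {
        const struct bucket_item *l = lv, *r = rv;

        if (r->count > l->count)
            return 1;
        if (r->count < l->count)
            return -1;
        return 0;
    }

    /* returns how many distinct byte values cover 90% of the sample */
    static u32 byte_core_set_size(struct heuristic_ws *ws)
    {
        u32 i;
        u32 coreset_sum = 0;
        const u32 core_set_threshold = ws->sample_size * 90 / 100;
        struct bucket_item *bucket = ws->bucket;

        /* sort the byte-value counters in decreasing order */
        sort(bucket, BUCKET_SIZE, sizeof(*bucket), bucket_comp_rev, NULL);

        for (i = 0; i < BUCKET_SIZE && bucket[i].count > 0; i++) {
            coreset_sum += bucket[i].count;
            if (coreset_sum > core_set_threshold)
                break;
        }

        return i;
    }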
2017-11-01 | Btrfs: heuristic: add byte set calculation | Timofey Titovets | 1 | -0/+45
Calculate the byte set size for a data sample: count how many unique byte values occur in the sample. If the count stays in the low range (~25% of all possible values), such data is easily compressible; otherwise further analysis is needed. Signed-off-by: Timofey Titovets <[email protected]> Reviewed-by: David Sterba <[email protected]> [ update comments ] Signed-off-by: David Sterba <[email protected]>
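For example, with 64 of the 256 possible byte values as the ~25% threshold (names and the early-exit detail are illustrative):

    #define BYTE_SET_THRESHOLD 64   /* ~25% of 256 possible byte values */

    static u32 byte_set_size(const struct heuristic_ws *ws)
    {
        u32 i;
        u32 byte_set = 0;

        for (i = 0; i < BUCKET_SIZE; i++) {
            if (ws->bucket[i].count > 0)
                byte_set++;
            /* once past the threshold, further counting is pointless */
            if (byte_set > BYTE_SET_THRESHOLD)
                return byte_set;
        }

        return byte_set;
    }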
2017-11-01 | Btrfs: heuristic: add detection of repeated data patterns | Timofey Titovets | 1 | -1/+14
Walk over data sample and use memcmp to detect repeated patterns, like zeros, but a bit more general. Signed-off-by: Timofey Titovets <[email protected]> Reviewed-by: David Sterba <[email protected]> [ minor coding style fixes ] Signed-off-by: David Sterba <[email protected]>
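One simple way such a check could look is comparing the two halves of the collected sample; this is only a sketch of the idea:

    static bool sample_repeated_patterns(const struct heuristic_ws *ws)
    {
        const u32 half = ws->sample_size / 2;
        const u8 *data = ws->sample;

        /* a sample whose first half equals its second half is treated as
         * a repeating pattern, all-zeros being the common case */
        return memcmp(&data[0], &data[half], half) == 0;
    }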
2017-11-01 | Btrfs: heuristic: implement sampling logic | Timofey Titovets | 1 | -9/+62
Copy sample data from the input data range into the sample buffer, then count the byte values of that sample into the bucket. Signed-off-by: Timofey Titovets <[email protected]> [ minor comment updates ] Signed-off-by: David Sterba <[email protected]>
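A sketch of the per-page sampling step; the sampling constants, the helper name and the kmap-based access are illustrative assumptions, and the caller is assumed to guarantee the sample buffer has room:

    #define SAMPLING_READ_SIZE 16    /* bytes copied per sampling point */
    #define SAMPLING_INTERVAL  256   /* distance between sampling points */

    static void heuristic_collect_sample(struct page *page, u32 len,
                                         struct heuristic_ws *ws)
    {
        u32 i, off;
        u8 *in_data = kmap(page);

        for (off = 0; off + SAMPLING_READ_SIZE <= len;
             off += SAMPLING_INTERVAL) {
            /* copy a small chunk into the sample buffer ... */
            memcpy(&ws->sample[ws->sample_size], &in_data[off],
                   SAMPLING_READ_SIZE);
            /* ... and count its byte values into the bucket */
            for (i = 0; i < SAMPLING_READ_SIZE; i++)
                ws->bucket[in_data[off + i]].count++;
            ws->sample_size += SAMPLING_READ_SIZE;
        }
        kunmap(page);
    }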
2017-11-01 | Btrfs: heuristic: add bucket and sample counters and other defines | Timofey Titovets | 1 | -1/+52
Add basic defines and structures for data sampling. Added macros: one for the future sampling algorithm and one for the bucket size. Heuristic workspace: add a bucket for storing byte type counters, a sample array for storing a partial copy of the input data range, and a counter that stores the current sample size. Signed-off-by: Timofey Titovets <[email protected]> Reviewed-by: David Sterba <[email protected]> [ minor coding style fixes, comments updated ] Signed-off-by: David Sterba <[email protected]>
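A sketch of what such a workspace could look like; the sizes and field names are illustrative:

    #define BUCKET_SIZE     256     /* one counter per possible byte value */
    #define MAX_SAMPLE_SIZE 8192    /* assumed upper bound on the sample */

    struct bucket_item {
        u32 count;                  /* occurrences of this byte value */
    };

    struct heuristic_ws {
        u8 *sample;                 /* partial copy of the input range */
        u32 sample_size;            /* current size of the sample */
        struct bucket_item *bucket; /* byte type counters */
        struct list_head list;      /* link in the idle-workspaces list */
    };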
2017-11-01 | Btrfs: compression: separate heuristic/compression workspaces | Timofey Titovets | 1 | -18/+121
The compression heuristic itself is not a compression type, and since the current infrastructure provides workspaces per compression type, it's difficult to just add a heuristic workspace. Refactor the code to support both compression and heuristic workspaces with maximum code sharing and minimal changes. Signed-off-by: Timofey Titovets <[email protected]> Reviewed-by: David Sterba <[email protected]> [ coding style fixes ] Signed-off-by: David Sterba <[email protected]>
2017-11-01 | btrfs: allow setting zlib compression level via :9 | Adam Borowski | 1 | -2/+3
This is bikeshedding, but it seems people are drastically more likely to understand "zlib:9" as a compression level rather than an algorithm version, compared to "zlib9". Based on feedback on the mailing list, ":9" will be the only accepted syntax. The level must be a single digit. An unrecognized format will result in the default, for forward compatibility, similar to how the compression algorithm specifier was relaxed in commit a7164fa4e055daf6368c ("btrfs: prepare for extensions in compression options"). Signed-off-by: Adam Borowski <[email protected]> Reviewed-by: David Sterba <[email protected]> [ tighten the accepted format ] Signed-off-by: David Sterba <[email protected]>
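A sketch of how the tightened ":9" suffix might be parsed; the helper name and the default-level constant are assumptions, not necessarily what the patch uses:

    static unsigned int compress_str2level(const char *str)
    {
        if (strncmp(str, "zlib", 4) != 0)
            return 0;

        /* only the "zlib:1" .. "zlib:9" form is recognized */
        if (strlen(str) == 6 && str[4] == ':' &&
            str[5] >= '1' && str[5] <= '9')
            return str[5] - '0';

        /* anything else falls back to the default level */
        return BTRFS_ZLIB_DEFAULT_LEVEL;    /* assumed constant */
    }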
2017-11-01 | btrfs: allow to set compression level for zlib | David Sterba | 1 | -1/+19
Preliminary support for setting the compression level for zlib; the following works:
  $ mount -o compress=zlib          # default
  $ mount -o compress=zlib0         # same
  $ mount -o compress=zlib9         # level 9, slower sync, less data
  $ mount -o compress=zlib1         # level 1, faster sync, more data
  $ mount -o remount,compress=zlib3 # level set by remount
The compress-force option works the same way as compress. The level is visible in the same format in /proc/mounts. Setting the level via the file property does not work yet. Required patch: "btrfs: prepare for extensions in compression options" Signed-off-by: David Sterba <[email protected]>
2017-10-30 | btrfs: use BLK_STS defines where needed | Anand Jain | 1 | -1/+2
In a few places we can use BLK_STS_OK and BLK_STS_NOSUPP. Signed-off-by: Anand Jain <[email protected]> Reviewed-by: Satoru Taekeuchi <[email protected]> Reviewed-by: David Sterba <[email protected]> [ dropped first hunk btrfs_endio_direct_read ] Signed-off-by: David Sterba <[email protected]>
2017-09-29 | Merge branch 'for-4.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux | Linus Torvalds | 1 | -1/+17
Pull btrfs fixes from David Sterba: "We've collected a bunch of isolated fixes, for crashes, user-visible behaviour or missing bits from other subsystem cleanups from the past. The overall number is not small but I was not able to make it significantly smaller. Most of the patches are supposed to go to stable"
* 'for-4.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: log csums for all modified extents
  Btrfs: fix unexpected result when dio reading corrupted blocks
  btrfs: Report error on removing qgroup if del_qgroup_item fails
  Btrfs: skip checksum when reading compressed data if some IO have failed
  Btrfs: fix kernel oops while reading compressed data
  Btrfs: use btrfs_op instead of bio_op in __btrfs_map_block
  Btrfs: do not backup tree roots when fsync
  btrfs: remove BTRFS_FS_QUOTA_DISABLING flag
  btrfs: propagate error to btrfs_cmp_data_prepare caller
  btrfs: prevent to set invalid default subvolid
  Btrfs: send: fix error number for unknown inode types
  btrfs: fix NULL pointer dereference from free_reloc_roots()
  btrfs: finish ordered extent cleaning if no progress is found
  btrfs: clear ordered flag on cleaning up ordered extents
  Btrfs: fix incorrect {node,sector}size endianness from BTRFS_IOC_FS_INFO
  Btrfs: do not reset bio->bi_ops while writing bio
  Btrfs: use the new helper wbc_to_write_flags
2017-09-26 | Btrfs: skip checksum when reading compressed data if some IO have failed | Liu Bo | 1 | -1/+8
Currently, even if the underlying disk reports an IO failure, the compressed read endio still verifies checksums and reports a checksum error. In fact, if some IO has failed while reading a compressed data extent, there's no way the checksum could match, so we can skip the verification in order to return the error quickly to the upper layer. Note that we need to do this after recording the failed mirror index so that read-repair in the upper layer's endio can work properly. Signed-off-by: Liu Bo <[email protected]> Tested-by: Paul Jones <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-09-26 | Btrfs: fix kernel oops while reading compressed data | Liu Bo | 1 | -0/+9
The kernel oops happens at: kernel BUG at fs/btrfs/extent_io.c:2104! ... RIP: clean_io_failure+0x263/0x2a0 [btrfs] It shows that the read-repair code is using an improper mirror index. This is due to the fact that the compressed read's endio hasn't recorded the failed mirror index in %cb->orig_bio. With this fixed, btrfs's read-repair can work properly when reading compressed data. Signed-off-by: Liu Bo <[email protected]> Reported-by: Paul Jones <[email protected]> Tested-by: Paul Jones <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-09-14 | Merge branch 'zstd-minimal' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs | Linus Torvalds | 1 | -0/+1
Pull zstd support from Chris Mason: "Nick Terrell's patch series to add zstd support to the kernel has been floating around for a while. After talking with Dave Sterba, Herbert and Phillip, we decided to send the whole thing in as one pull request. zstd is a big win in speed over zlib and in compression ratio over lzo, and the compression team here at FB has gotten great results using it in production. Nick will continue to update the kernel side with new improvements from the open source zstd userland code.
Nick has a number of benchmarks for the main zstd code in his lib/zstd commit:
  I ran the benchmarks on a Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM. The VM is running on a MacBook Pro with a 3.1 GHz Intel Core i7 processor, 16 GB of RAM, and a SSD. I benchmarked using `silesia.tar` [3], which is 211,988,480 B large. Run the following commands for the benchmark:
    sudo modprobe zstd_compress_test
    sudo mknod zstd_compress_test c 245 0
    sudo cp silesia.tar zstd_compress_test
  The time is reported by the time of the userland `cp`. The MB/s is computed with 1,536,217,008 B / time(buffer size, hash) which includes the time to copy from userland. The Adjusted MB/s is computed with 1,536,217,088 B / (time(buffer size, hash) - time(buffer size, none)). The memory reported is the amount of memory the compressor requests.
  | Method   | Size (B) | Time (s) | Ratio | MB/s    | Adj MB/s | Mem (MB) |
  |----------|----------|----------|-------|---------|----------|----------|
  | none     | 11988480 | 0.100    | 1     | 2119.88 | -        | -        |
  | zstd -1  | 73645762 | 1.044    | 2.878 | 203.05  | 224.56   | 1.23     |
  | zstd -3  | 66988878 | 1.761    | 3.165 | 120.38  | 127.63   | 2.47     |
  | zstd -5  | 65001259 | 2.563    | 3.261 | 82.71   | 86.07    | 2.86     |
  | zstd -10 | 60165346 | 13.242   | 3.523 | 16.01   | 16.13    | 13.22    |
  | zstd -15 | 58009756 | 47.601   | 3.654 | 4.45    | 4.46     | 21.61    |
  | zstd -19 | 54014593 | 102.835  | 3.925 | 2.06    | 2.06     | 60.15    |
  | zlib -1  | 77260026 | 2.895    | 2.744 | 73.23   | 75.85    | 0.27     |
  | zlib -3  | 72972206 | 4.116    | 2.905 | 51.50   | 52.79    | 0.27     |
  | zlib -6  | 68190360 | 9.633    | 3.109 | 22.01   | 22.24    | 0.27     |
  | zlib -9  | 67613382 | 22.554   | 3.135 | 9.40    | 9.44     | 0.27     |
  I benchmarked zstd decompression using the same method on the same machine. The benchmark file is located in the upstream zstd repo under `contrib/linux-kernel/zstd_decompress_test.c` [4]. The memory reported is the amount of memory required to decompress data compressed with the given compression level. If you know the maximum size of your input, you can reduce the memory usage of decompression irrespective of the compression level.
  | Method   | Time (s) | MB/s    | Adjusted MB/s | Memory (MB) |
  |----------|----------|---------|---------------|-------------|
  | none     | 0.025    | 8479.54 | -             | -           |
  | zstd -1  | 0.358    | 592.15  | 636.60        | 0.84        |
  | zstd -3  | 0.396    | 535.32  | 571.40        | 1.46        |
  | zstd -5  | 0.396    | 535.32  | 571.40        | 1.46        |
  | zstd -10 | 0.374    | 566.81  | 607.42        | 2.51        |
  | zstd -15 | 0.379    | 559.34  | 598.84        | 4.61        |
  | zstd -19 | 0.412    | 514.54  | 547.77        | 8.80        |
  | zlib -1  | 0.940    | 225.52  | 231.68        | 0.04        |
  | zlib -3  | 0.883    | 240.08  | 247.07        | 0.04        |
  | zlib -6  | 0.844    | 251.17  | 258.84        | 0.04        |
  | zlib -9  | 0.837    | 253.27  | 287.64        | 0.04        |
I ran a long series of tests and benchmarks on the btrfs side and the gains are very similar to the core benchmarks Nick ran"
* 'zstd-minimal' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
  squashfs: Add zstd support
  btrfs: Add zstd support
  lib: Add zstd modules
  lib: Add xxhash module
2017-08-16 | Btrfs: add skeleton code for compression heuristic | Timofey Titovets | 1 | -0/+33
Add skeleton code for compression heuristics. For now it iterates over all the pages, but in the end always says "yes, compress please", i.e. it does not change the current behaviour. In the future we're going to add various heuristics to analyze the data. This patch can be used as a baseline for measuring their effectiveness and performance. Signed-off-by: Timofey Titovets <[email protected]> Reviewed-by: David Sterba <[email protected]> [ enhanced changelog, modified comments ] Signed-off-by: David Sterba <[email protected]>
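The skeleton amounts to a walk over the pages of the range that currently always answers "compress"; schematically (treat this as a sketch, details such as error handling are omitted):

    /* return: 1 - compress the range, 0 - leave it uncompressed */
    int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
    {
        u64 index = start >> PAGE_SHIFT;
        const u64 end_index = end >> PAGE_SHIFT;
        struct page *page;
        int ret = 1;

        while (index <= end_index) {
            page = find_get_page(inode->i_mapping, index);
            kmap(page);
            /* data analysis will be plugged in here by later patches */
            kunmap(page);
            put_page(page);
            index++;
        }

        return ret;
    }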
2017-08-16 | btrfs: Keep one more workspace around | Nick Terrell | 1 | -1/+1
find_workspace() allocates up to num_online_cpus() + 1 workspaces. free_workspace() will only keep num_online_cpus() workspaces. When (de)compressing we will allocate num_online_cpus() + 1 workspaces, then free one, and repeat. Instead, we can just keep num_online_cpus() + 1 workspaces around, and never have to allocate/free another workspace in the common case. I tested on a Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM. I mounted a BtrFS partition with -o compress-force={lzo,zlib,zstd} and logged whenever a workspace was allocated or freed. Then I copied vmlinux (527 MB) to the partition. Before the patch, during the copy it would allocate and free 5-6 workspaces. After, it only allocated the initial 3. This held true for lzo, zlib, and zstd. The time it took to execute cp vmlinux /mnt/btrfs && sync dropped from 1.70s to 1.44s with lzo compression, and from 2.04s to 1.80s for zstd compression. Signed-off-by: Nick Terrell <[email protected]> Reviewed-by: Omar Sandoval <[email protected]> Signed-off-by: David Sterba <[email protected]>
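The change boils down to caching one more idle workspace before really freeing; a heavily simplified sketch of the post-patch behaviour, where the per-type bookkeeping structure and its field names are assumptions:

    static void free_workspace(int type, struct list_head *ws)
    {
        struct workspace_list *wsl = &comp_ws[type];    /* illustrative */

        spin_lock(&wsl->ws_lock);
        /* keep up to num_online_cpus() + 1 cached, so the "+1" workspace
         * allocated under load is reused instead of freed and reallocated */
        if (wsl->free_ws <= num_online_cpus()) {
            list_add(ws, &wsl->idle_ws);
            wsl->free_ws++;
            spin_unlock(&wsl->ws_lock);
            return;
        }
        spin_unlock(&wsl->ws_lock);

        btrfs_compress_op[type]->free_workspace(ws);    /* really free it */
    }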
2017-08-15 | btrfs: Add zstd support | Nick Terrell | 1 | -0/+1
Add zstd compression and decompression support to BtrFS. zstd at its fastest level compresses almost as well as zlib, while offering much faster compression and decompression, approaching lzo speeds. I benchmarked btrfs with zstd compression against no compression, lzo compression, and zlib compression. I benchmarked two scenarios. Copying a set of files to btrfs, and then reading the files. Copying a tarball to btrfs, extracting it to btrfs, and then reading the extracted files. After every operation, I call `sync` and include the sync time. Between every pair of operations I unmount and remount the filesystem to avoid caching. The benchmark files can be found in the upstream zstd source repository under `contrib/linux-kernel/{btrfs-benchmark.sh,btrfs-extract-benchmark.sh}` [1] [2]. I ran the benchmarks on a Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM. The VM is running on a MacBook Pro with a 3.1 GHz Intel Core i7 processor, 16 GB of RAM, and a SSD.
The first compression benchmark is copying 10 copies of the unzipped Silesia corpus [3] into a BtrFS filesystem mounted with `-o compress-force=Method`. The decompression benchmark times how long it takes to `tar` all 10 copies into `/dev/null`. The compression ratio is measured by comparing the output of `df` and `du`. See the benchmark file [1] for details. I benchmarked multiple zstd compression levels, although the patch uses zstd level 1.
| Method  | Ratio | Compression MB/s | Decompression speed |
|---------|-------|------------------|---------------------|
| None    | 0.99  | 504              | 686                 |
| lzo     | 1.66  | 398              | 442                 |
| zlib    | 2.58  | 65               | 241                 |
| zstd 1  | 2.57  | 260              | 383                 |
| zstd 3  | 2.71  | 174              | 408                 |
| zstd 6  | 2.87  | 70               | 398                 |
| zstd 9  | 2.92  | 43               | 406                 |
| zstd 12 | 2.93  | 21               | 408                 |
| zstd 15 | 3.01  | 11               | 354                 |
The next benchmark first copies `linux-4.11.6.tar` [4] to btrfs. Then it measures the compression ratio, extracts the tar, and deletes the tar. Then it measures the compression ratio again, and `tar`s the extracted files into `/dev/null`. See the benchmark file [2] for details.
| Method | Tar Ratio | Extract Ratio | Copy (s) | Extract (s) | Read (s) |
|--------|-----------|---------------|----------|-------------|----------|
| None   | 0.97      | 0.78          | 0.981    | 5.501       | 8.807    |
| lzo    | 2.06      | 1.38          | 1.631    | 8.458       | 8.585    |
| zlib   | 3.40      | 1.86          | 7.750    | 21.544      | 11.744   |
| zstd 1 | 3.57      | 1.85          | 2.579    | 11.479      | 9.389    |
[1] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/btrfs-benchmark.sh
[2] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/btrfs-extract-benchmark.sh
[3] http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia
[4] https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.11.6.tar.xz
zstd source repository: https://github.com/facebook/zstd
Signed-off-by: Nick Terrell <[email protected]> Signed-off-by: Chris Mason <[email protected]>
2017-07-14 | Merge branch 'for-4.13-part2' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux | Linus Torvalds | 1 | -0/+1
Pull btrfs fixes from David Sterba: "We've identified and fixed a silent corruption (introduced by code in the first pull), a fixup after the blk_status_t merge and two fixes to incremental send that Filipe has been hunting for some time"
* 'for-4.13-part2' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  Btrfs: fix unexpected return value of bio_readpage_error
  btrfs: btrfs_create_repair_bio never fails, skip error handling
  btrfs: cloned bios must not be iterated by bio_for_each_segment_all
  Btrfs: fix write corruption due to bio cloning on raid5/6
  Btrfs: incremental send, fix invalid memory access
  Btrfs: incremental send, fix invalid path for link commands
2017-07-14 | btrfs: cloned bios must not be iterated by bio_for_each_segment_all | David Sterba | 1 | -0/+1
We've started using cloned bios more in 4.13, and there are some specifics regarding their iteration. Filipe found [1] that raid56 iterated a cloned bio using bio_for_each_segment_all, which is incorrect: cloned bios have a wrong bi_vcnt and this could lead to silent corruption. This patch adds assertions to all remaining bio_for_each_segment_all call sites. [1] https://patchwork.kernel.org/patch/9838535/ Reviewed-by: Liu Bo <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-07-05 | Merge branch 'for-4.13-part1' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux | Linus Torvalds | 1 | -73/+21
Pull btrfs updates from David Sterba: "The core updates improve error handling (mostly related to bios), with the usual incremental work on the GFP_NOFS (mis)use removal, refactoring or cleanups. Except the two top patches, all have been in for-next for an extensive amount of time.
User visible changes:
 - statx support
 - quota override tunable
 - improved compression thresholds
 - obsoleted mount option alloc_start
Core updates:
 - bio-related updates:
   - faster bio cloning
   - no allocation failures
   - preallocated flush bios
 - more kvzalloc use, memalloc_nofs protections, GFP_NOFS updates
 - prep work for btree_inode removal
 - dir-item validation
 - qgroup fixes and updates
 - cleanups:
   - removed unused struct members, unused code, refactoring
   - argument refactoring (fs_info/root, caller -> callee sink)
 - SEARCH_TREE ioctl docs"
* 'for-4.13-part1' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux: (115 commits)
  btrfs: Remove false alert when fiemap range is smaller than on-disk extent
  btrfs: Don't clear SGID when inheriting ACLs
  btrfs: fix integer overflow in calc_reclaim_items_nr
  btrfs: scrub: fix target device intialization while setting up scrub context
  btrfs: qgroup: Fix qgroup reserved space underflow by only freeing reserved ranges
  btrfs: qgroup: Introduce extent changeset for qgroup reserve functions
  btrfs: qgroup: Fix qgroup reserved space underflow caused by buffered write and quotas being enabled
  btrfs: qgroup: Return actually freed bytes for qgroup release or free data
  btrfs: qgroup: Cleanup btrfs_qgroup_prepare_account_extents function
  btrfs: qgroup: Add quick exit for non-fs extents
  Btrfs: rework delayed ref total_bytes_pinned accounting
  Btrfs: return old and new total ref mods when adding delayed refs
  Btrfs: always account pinned bytes when dropping a tree block ref
  Btrfs: update total_bytes_pinned when pinning down extents
  Btrfs: make BUG_ON() in add_pinned_bytes() an ASSERT()
  Btrfs: make add_pinned_bytes() take an s64 num_bytes instead of u64
  btrfs: fix validation of XATTR_ITEM dir items
  btrfs: Verify dir_item in iterate_object_props
  btrfs: Check name_len before in btrfs_del_root_ref
  btrfs: Check name_len before reading btrfs_get_name
  ...
2017-06-21 | btrfs: use new block error code | Dan Carpenter | 1 | -1/+1
This function is supposed to return blk_status_t error codes now but there was a stray -ENOMEM left behind. Fixes: 4e4cbee93d56 ("block: switch bios to blk_status_t") Signed-off-by: Dan Carpenter <[email protected]> Acked-by: Christoph Hellwig <[email protected]> Acked-by: David Sterba <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2017-06-19 | btrfs: pass bytes to btrfs_bio_alloc | David Sterba | 1 | -4/+4
Most callers of btrfs_bio_alloc convert from bytes to sectors. Hide that in the helper and simplify the logic in the callers. Reviewed-by: Anand Jain <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-06-19 | btrfs: opencode trivial compressed_bio_alloc, simplify error handling | David Sterba | 1 | -19/+4
compressed_bio_alloc is now a trivial wrapper around btrfs_bio_alloc, no point keeping it. The error handling can be simplified, as we know btrfs_bio_alloc will never fail. Reviewed-by: Anand Jain <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-06-19 | btrfs: remove redundant parameters from btrfs_bio_alloc | David Sterba | 1 | -1/+1
All callers pass gfp_flags=GFP_NOFS and nr_vecs=BIO_MAX_PAGES. submit_extent_page adds __GFP_HIGH that does not make a difference in our case as it allows access to memory reserves but otherwise does not change the constraints. Signed-off-by: David Sterba <[email protected]>
2017-06-19 | btrfs: add memalloc_nofs protections around alloc_workspace callback | David Sterba | 1 | -0/+10
The workspaces are preallocated at the beginning, where we can safely use GFP_KERNEL, but in some cases find_workspace might reach the allocation again, now in a more restricted context when the bios or pages are being compressed. To avoid a potential lockup when alloc_workspace -> vmalloc would silently use GFP_KERNEL, add the memalloc_nofs helpers around the critical call site. Reviewed-by: Anand Jain <[email protected]> Signed-off-by: David Sterba <[email protected]>
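The protection looks roughly like this, using memalloc_nofs_save()/memalloc_nofs_restore() from linux/sched/mm.h; the exact call site shown is schematic:

    unsigned int nofs_flag;
    struct list_head *workspace;

    /*
     * Any allocation done inside alloc_workspace (including an implicit
     * GFP_KERNEL in vmalloc) is forced to behave as GFP_NOFS within this
     * scope, so it cannot recurse into the filesystem under memory
     * pressure while we are in the middle of compression.
     */
    nofs_flag = memalloc_nofs_save();
    workspace = btrfs_compress_op[idx]->alloc_workspace();
    memalloc_nofs_restore(nofs_flag);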
2017-06-19 | btrfs: reduce arguments for decompress_bio ops | Anand Jain | 1 | -44/+2
struct compressed_bio pointer can be used instead. Signed-off-by: Anand Jain <[email protected]> Reviewed-by: Nikolay Borisov <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-06-19 | btrfs: btrfs_decompress_bio() could accept compressed_bio instead | Anand Jain | 1 | -14/+9
Instead of sending each argument of struct compressed_bio, send the compressed_bio itself. Having struct compressed_bio in btrfs_decompress_bio() also helps tracing. Signed-off-by: Anand Jain <[email protected]> Reviewed-by: Nikolay Borisov <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-06-09 | block: switch bios to blk_status_t | Christoph Hellwig | 1 | -22/+22
Replace bi_error with a new bi_status to allow for a clear conversion. Note that device mapper overloaded bi_error with a private value, which we'll have to keep around at least for now and thus propagate to a proper blk_status_t value. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2017-04-18 | btrfs: convert compressed_bio.pending_bios from atomic_t to refcount_t | Elena Reshetova | 1 | -9/+9
The refcount_t type and its corresponding API should be used instead of atomic_t when the variable is used as a reference counter. This allows us to avoid accidental refcounter overflows that might lead to use-after-free situations. Signed-off-by: Elena Reshetova <[email protected]> Signed-off-by: Hans Liljestrand <[email protected]> Signed-off-by: Kees Cook <[email protected]> Signed-off-by: David Windsor <[email protected]> Signed-off-by: David Sterba <[email protected]>
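The conversion itself is mechanical; for example (the struct is trimmed down to the field in question, and the completion helper name is purely illustrative):

    #include <linux/refcount.h>

    struct compressed_bio {
        refcount_t pending_bios;    /* was: atomic_t pending_bios */
        /* ... */
    };

    /* was: atomic_set(&cb->pending_bios, 1); */
    refcount_set(&cb->pending_bios, 1);

    /* was: atomic_inc(&cb->pending_bios); */
    refcount_inc(&cb->pending_bios);

    /* was: if (atomic_dec_and_test(&cb->pending_bios)) ... */
    if (refcount_dec_and_test(&cb->pending_bios))
        finish_compressed_bio(cb);  /* illustrative completion path */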
2017-02-28 | btrfs: derive maximum output size in the compression implementation | David Sterba | 1 | -4/+2
The value of max_out can be calculated from the parameters passed to the compressors, which are the number of pages and the page size, so we don't have to needlessly pass it around. Signed-off-by: David Sterba <[email protected]>
2017-02-28 | btrfs: merge nr_pages input and output parameter in compress_pages | David Sterba | 1 | -4/+3
The parameter saying how many pages can be allocated at maximum can be merged with the output page counter, to save some stack space. The compression implementation will sink the parameter to a local variable so everything works as before. The nr_pages variables can also be simply merged in compress_file_range into one. Signed-off-by: David Sterba <[email protected]>
2017-02-28 | btrfs: merge length input and output parameter in compress_pages | David Sterba | 1 | -13/+11
The length parameter is basically duplicated for input and output in the top level caller of the compress_pages chain. We can simply use one variable for that and reduce stack consumption. The compression implementation will sink the parameter to a local variable so everything works as before. Signed-off-by: David Sterba <[email protected]>
2017-02-28 | btrfs: constify buffers used by compression helpers | David Sterba | 1 | -1/+1
Signed-off-by: David Sterba <[email protected]>
2017-02-28 | btrfs: make check_compressed_csum take btrfs_inode | Nikolay Borisov | 1 | -5/+4
Signed-off-by: Nikolay Borisov <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-02-28 | btrfs: make btrfs_print_data_csum_error take btrfs_inode | Nikolay Borisov | 1 | -2/+3
Signed-off-by: Nikolay Borisov <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-02-17 | btrfs: Better csum error message for data csum mismatch | Qu Wenruo | 1 | -4/+2
The original csum error message only outputs the inode number, offset, checksum and expected checksum. However, no root objectid is output, which sometimes makes debugging quite painful in the multi-subvolume case (including relocation). Also, the checksum output is decimal, which seldom makes sense for users/developers and is hard to read most of the time. This patch adds the root objectid, which will be %lld for rootids larger than LAST_FREE_OBJECTID, and hex csum output for better readability. Signed-off-by: Qu Wenruo <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2017-02-14 | btrfs: Make btrfs_ino take a struct btrfs_inode | Nikolay Borisov | 1 | -1/+1
Currently btrfs_ino takes a struct inode and this causes a lot of internal btrfs functions which consume this ino to take a VFS inode, rather than btrfs' own struct btrfs_inode. In order to fix this "leak" of VFS structs into the internals of btrfs, it's first necessary to eliminate all uses of struct inode for the purpose of getting the inode number. This patch does that by using BTRFS_I to convert an inode to btrfs_inode. With this problem eliminated, subsequent patches will start eliminating the passing of struct inode altogether, eventually resulting in much cleaner code. Signed-off-by: Nikolay Borisov <[email protected]> [ fix btrfs_get_extent tracepoint prototype ] Signed-off-by: David Sterba <[email protected]>
2017-02-10 | Btrfs: fix btrfs_decompress_buf2page() | Omar Sandoval | 1 | -15/+24
If btrfs_decompress_buf2page() is handed a bio with its page in the middle of the working buffer, then we adjust the offset into the working buffer. After we copy into the bio, we advance the iterator by the number of bytes we copied. Then, we have some logic to handle the case of discontiguous pages and adjust the offset into the working buffer again. However, if we didn't advance the bio to a new page, we may enter this case in error, essentially repeating the adjustment that we already made when we entered the function. The end result is bogus data in the bio. Previously, we only checked for this case when we advanced to a new page, but the conversion to bio iterators changed that. This restores the old, correct behavior.
A case I saw when testing with zlib was:
  buf_start = 42769
  total_out = 46865
  working_bytes = total_out - buf_start = 4096
  start_byte = 45056
The condition (total_out > start_byte && buf_start < start_byte) is true, so we adjust the offset:
  buf_offset = start_byte - buf_start = 2287
  working_bytes -= buf_offset = 1809
  current_buf_start = buf_start = 42769
Then, we copy:
  bytes = min(bvec.bv_len, PAGE_SIZE - buf_offset, working_bytes) = 1809
  buf_offset += bytes = 4096
  working_bytes -= bytes = 0
  current_buf_start += bytes = 44578
After bio_advance(), we are still in the same page, so start_byte is the same. Then, we check (total_out > start_byte && current_buf_start < start_byte), which is true! So, we adjust the values again:
  buf_offset = start_byte - buf_start = 2287
  working_bytes = total_out - start_byte = 1809
  current_buf_start = buf_start + buf_offset = 45056
But note that working_bytes was already zero before this, so we should have stopped copying.
Fixes: 974b1adc3b10 ("btrfs: use bio iterators for the decompression handlers") Reported-by: Pat Erley <[email protected]> Reviewed-by: Chris Mason <[email protected]> Signed-off-by: Omar Sandoval <[email protected]> Signed-off-by: Chris Mason <[email protected]> Reviewed-by: Liu Bo <[email protected]> Tested-by: Liu Bo <[email protected]>
2016-12-06 | btrfs: take an fs_info directly when the root is not used otherwise | Jeff Mahoney | 1 | -16/+12
There are loads of functions in btrfs that accept a root parameter but only use it to obtain an fs_info pointer. Let's convert those to just accept an fs_info pointer directly. Signed-off-by: Jeff Mahoney <[email protected]> Signed-off-by: David Sterba <[email protected]>
2016-12-06 | btrfs: root->fs_info cleanup, add fs_info convenience variables | Jeff Mahoney | 1 | -13/+15
In routines where someptr->fs_info is referenced multiple times, we introduce a convenience variable. This makes the code considerably more readable. Signed-off-by: Jeff Mahoney <[email protected]> Signed-off-by: David Sterba <[email protected]>
2016-12-06 | btrfs: pull node/sector/stripe sizes out of root and into fs_info | Jeff Mahoney | 1 | -2/+2
We track the node sizes per-root, but they never vary from the values in the superblock. This patch messes with the 80-column style a bit, but subsequent patches to factor out root->fs_info into a convenience variable fix it up again. Signed-off-by: Jeff Mahoney <[email protected]> Signed-off-by: David Sterba <[email protected]>
2016-11-30 | btrfs: calculate end of bio offset properly | Christoph Hellwig | 1 | -2/+8
Use the bvec offset and len members to prepare for multipage bvecs. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: David Sterba <[email protected]>
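The end offset is then derived from the last bvec's page, offset and length; roughly (a sketch of the helper, not necessarily the exact patch):

    /* file offset just past the data covered by this bio */
    static u64 bio_end_offset(struct bio *bio)
    {
        struct bio_vec *last = &bio->bi_io_vec[bio->bi_vcnt - 1];

        return page_offset(last->bv_page) + last->bv_len + last->bv_offset;
    }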
2016-11-30 | btrfs: use bi_size | Christoph Hellwig | 1 | -5/+2
Instead of using bi_vcnt to calculate it. Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Omar Sandoval <[email protected]> Signed-off-by: David Sterba <[email protected]>
2016-11-30 | btrfs: use bio iterators for the decompression handlers | Christoph Hellwig | 1 | -81/+42
Pass the full bio to the decompression routines and use bio iterators to iterate over the data in the bio. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: David Sterba <[email protected]>
2016-11-30 | btrfs: change btrfs_csum_final result param type to u8 | Domagoj Tršan | 1 | -1/+1
The csum member of struct btrfs_super_block is an array of u8, so it makes sense that btrfs_csum_final() should also be declared to accept u8 *. Change the declaration from void btrfs_csum_final(u32 crc, char *result); to void btrfs_csum_final(u32 crc, u8 *result); Signed-off-by: Domagoj Tršan <[email protected]> [ changed cast to u8 at several call sites ] Signed-off-by: David Sterba <[email protected]>
2016-10-17 | btrfs: assign error values to the correct bio structs | Junjie Mao | 1 | -2/+2
Fixes: 4246a0b63bd8 ("block: add a bi_error field to struct bio") Signed-off-by: Junjie Mao <[email protected]> Acked-by: David Sterba <[email protected]> Cc: [email protected] # 4.3+ Signed-off-by: Linus Torvalds <[email protected]>
2016-09-26 | btrfs: convert pr_* to btrfs_* where possible | Jeff Mahoney | 1 | -1/+1
For many printks, we want to know which file system issued the message. This patch converts most pr_* calls to use the btrfs_* versions instead. In some cases, this means adding plumbing to allow call sites access to an fs_info pointer. fs/btrfs/check-integrity.c is left alone for another day. Signed-off-by: Jeff Mahoney <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
2016-09-26 | btrfs: convert printk(KERN_* to use pr_* calls | Jeff Mahoney | 1 | -4/+2
This patch converts printk(KERN_* style messages to use the pr_* versions. One side effect is that anything that was KERN_DEBUG is now automatically a dynamic debug message. Signed-off-by: Jeff Mahoney <[email protected]> Signed-off-by: David Sterba <[email protected]>
2016-08-04 | Merge branch 'for-linus-4.8' of ↵ | Linus Torvalds | 1 | -2/+8
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs Pull more btrfs updates from Chris Mason: "This is part two of my btrfs pull, which is some cleanups and a batch of fixes. Most of the code here is from Jeff Mahoney, making the pointers we pass around internally more consistent and less confusing overall. I noticed a small problem right before I sent this out yesterday, so I fixed it up and re-tested overnight" * 'for-linus-4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (40 commits) Btrfs: fix __MAX_CSUM_ITEMS btrfs: btrfs_abort_transaction, drop root parameter btrfs: add btrfs_trans_handle->fs_info pointer btrfs: btrfs_relocate_chunk pass extent_root to btrfs_end_transaction btrfs: convert nodesize macros to static inlines btrfs: introduce BTRFS_MAX_ITEM_SIZE btrfs: cleanup, remove prototype for btrfs_find_root_ref btrfs: copy_to_sk drop unused root parameter btrfs: simpilify btrfs_subvol_inherit_props btrfs: tests, use BTRFS_FS_STATE_DUMMY_FS_INFO instead of dummy root btrfs: tests, require fs_info for root btrfs: tests, move initialization into tests/ btrfs: btrfs_test_opt and friends should take a btrfs_fs_info btrfs: prefix fsid to all trace events btrfs: plumb fs_info into btrfs_work btrfs: remove obsolete part of comment in statfs btrfs: hide test-only member under ifdef btrfs: Ratelimit "no csum found" info message btrfs: Add ratelimit to btrfs printing Btrfs: fix unexpected balance crash due to BUG_ON ...