Age | Commit message | Author | Files | Lines
2020-10-06 | scsi: hisi_sas: Switch v3 hw to MQ | John Garry | 3 | -70/+56
Now that the block layer provides a shared tag, we can switch the driver to expose all HW queues. Signed-off-by: John Garry <[email protected]> Tested-by: Douglas Gilbert <[email protected]> Acked-by: Martin K. Petersen <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-06 | scsi: core: Show nr_hw_queues in sysfs | John Garry | 1 | -0/+11
So that we don't show a value of 0 when Scsi_Host.nr_hw_queues is unset, use tag_set->nr_hw_queues instead (which holds 1 in that case). Signed-off-by: John Garry <[email protected]> Tested-by: Douglas Gilbert <[email protected]> Acked-by: Martin K. Petersen <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
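A minimal sketch of this kind of read-only sysfs attribute (hypothetical code, not the actual patch; the real attribute lives in the SCSI host sysfs code):

#include <linux/device.h>
#include <scsi/scsi_host.h>

/* Report the tag set's nr_hw_queues, which is at least 1 even when the
 * driver never set Scsi_Host.nr_hw_queues itself. */
static ssize_t nr_hw_queues_show(struct device *dev,
                                 struct device_attribute *attr, char *buf)
{
        struct Scsi_Host *shost = class_to_shost(dev);

        return snprintf(buf, PAGE_SIZE, "%u\n", shost->tag_set.nr_hw_queues);
}
static DEVICE_ATTR_RO(nr_hw_queues);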
2020-10-06 | scsi: Add host and host template flag 'host_tagset' | Hannes Reinecke | 3 | -1/+11
Add a Scsi_Host and host template flag 'host_tagset' so that a hostwide tag set can be shared across multiple reply queues after the SCSI device's reply queues are converted to blk-mq hw queues. [jpg: Update comment on .can_queue and add Scsi_Host.host_tagset] Signed-off-by: Hannes Reinecke <[email protected]> Signed-off-by: John Garry <[email protected]> Tested-by: Don Brace <[email protected]> #SCSI resv cmds patches used Tested-by: Douglas Gilbert <[email protected]> Acked-by: Martin K. Petersen <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
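A minimal sketch of how a driver might opt in (hypothetical driver, not from this patch):

#include <scsi/scsi_host.h>

/* Opt in to a hostwide tag set: request tags become unique across all
 * hardware queues, and .can_queue is the hostwide queue depth. */
static struct scsi_host_template example_sht = {
        .name        = "example_hba",
        .can_queue   = 1024,
        .host_tagset = 1,
};

static void example_expose_queues(struct Scsi_Host *shost, int nr_cqs)
{
        /* Expose every completion queue as a blk-mq hardware queue. */
        shost->nr_hw_queues = nr_cqs;
}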
2020-10-02 | block: scsi_ioctl: Avoid the use of one-element arrays | Gustavo A. R. Silva | 2 | -4/+7
One-element arrays are being deprecated[1]. Replace the one-element array with a simple object of type compat_caddr_t: 'compat_caddr_t unused'[2], since this field is never actually used. Also, update struct cdrom_generic_command in the UAPI by adding an anonymous union to avoid using the one-element array _reserved_. [1] https://www.kernel.org/doc/html/v5.9-rc1/process/deprecated.html#zero-length-and-one-element-arrays [2] https://github.com/KSPP/linux/issues/86 Signed-off-by: Gustavo A. R. Silva <[email protected]> Link: https://lore.kernel.org/lkml/5f76f5d0.qJ4t%2FHWuRzSW7bTa%[email protected]/ Build-tested-by: kernel test robot <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
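The general shape of the change, as a purely illustrative sketch (hypothetical struct, not the real UAPI header): an anonymous union lets the one-element array be retired without changing the layout.

/* Old and new spellings overlap in an anonymous union, so the struct
 * layout (and thus the ABI) is unchanged. */
struct example_ioctl_arg {
        int flags;
        union {
                void *reserved[1];  /* legacy one-element array */
                void *unused;       /* preferred plain member */
        };
};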
2020-10-02 | rsxx: Use fallthrough pseudo-keyword | Gustavo A. R. Silva | 1 | -1/+1
Replace /* Fall through. */ comment with the new pseudo-keyword macro fallthrough[1]. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Gustavo A. R. Silva <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
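For illustration, a minimal sketch of the pattern (hypothetical switch, not the rsxx code):

#include <linux/compiler_attributes.h> /* fallthrough */

static int example_classify(int code)
{
        switch (code) {
        case 1:
                code += 10;
                fallthrough; /* replaces the old "Fall through." comment */
        case 2:
                return code;
        default:
                return -1;
        }
}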
2020-10-02 | bcache: remove embedded struct cache_sb from struct cache_set | Coly Li | 11 | -59/+46
Since the bcache code was merged into the mainline kernel, each cache set has only had a single cache in it. The multiple-caches framework is there, but the code is far from complete. Considering that multiple copies of the cached data can also be stored on e.g. md raid1 devices, supporting multiple caches in one cache set is indeed unnecessary. The previous preparation patches fixed the dependencies so that a cache set explicitly has only a single cache. Now we don't have to maintain an embedded partial super block in struct cache_set; the in-memory super block can be referenced directly from struct cache. This patch removes the embedded struct cache_sb from struct cache_set, and fixes all locations that referenced the removed super block by referencing the in-memory super block of struct cache instead. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
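As a purely illustrative sketch of the refactoring pattern (hypothetical types, not the real bcache structures):

struct example_cache_sb {
        unsigned int block_size;
        unsigned int bucket_size;
};

struct example_cache {
        struct example_cache_sb sb;   /* authoritative in-memory super block */
};

struct example_cache_set {
        struct example_cache *cache;  /* a cache set now has a single cache */
        /* no embedded cache_sb any more: read it through cache->sb */
};

static inline unsigned int example_block_bytes(const struct example_cache_set *c)
{
        return c->cache->sb.block_size; /* was: c->sb.block_size */
}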
2020-10-02 | bcache: check and set sync status on cache's in-memory super block | Coly Li | 4 | -10/+7
Currently the cache's sync status is checked and set on the cache set's in-memory partial super block. After removing the embedded struct cache_sb from the cache set and referencing the cache's in-memory super block from struct cache_set, the sync status can be set and checked directly on the cache's super block. This patch checks and sets the cache sync status directly on the cache's in-memory super block. This is a preparation for later removing the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: remove can_attach_cache() | Coly Li | 1 | -10/+0
After removing the embedded struct cache_sb from struct cache_set, the cache set directly references the in-memory super block of struct cache. It is unnecessary to compare block_size, bucket_size and nr_in_set from the identical in-memory super block in can_attach_cache(). This is a preparation patch for later removing cache_set->sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: don't check seq numbers in register_cache_set() | Coly Li | 1 | -15/+0
In order to update the partial super block of the cache set, the seq numbers of the cache and the cache set are compared in register_cache_set(). If the cache's seq number is larger than the cache set's seq number, the cache set must update its partial super block from the cache's super block. This is unnecessary once the embedded struct cache_sb is removed from struct cache_set. This patch removes the seq number check from register_cache_set(), because later there will be no such partial super block in struct cache_set; the cache set will directly reference the in-memory super block of struct cache. This is a preparation patch for removing the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: only use bucket_bytes() on struct cache | Coly Li | 2 | -2/+2
Because struct cache_set and struct cache both have a struct cache_sb, the macro bucket_bytes() is currently used on both of them. When the embedded struct cache_sb is removed from struct cache_set, this macro won't be used on struct cache_set anymore. This patch unifies all bucket_bytes() usage to struct cache only; this is one of the preparations for removing the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: remove useless bucket_pages() | Coly Li | 1 | -1/+0
alloc_bucket_pages() was the only user of bucket_pages(). Since alloc_bucket_pages() has been removed from the bcache code, it is safe to remove the now-useless macro bucket_pages(). Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: remove useless alloc_bucket_pages() | Coly Li | 1 | -3/+0
Now no one uses alloc_bucket_pages() anymore, remove it from bcache.h. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: only use block_bytes() on struct cache | Coly Li | 7 | -24/+24
Because struct cache_set and struct cache both have a struct cache_sb, the macro block_bytes() can be used on both of them. When the embedded struct cache_sb is removed from struct cache_set, this macro won't be used on struct cache_set anymore. This patch unifies all block_bytes() usage to struct cache only; this is one of the preparations for removing the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: add set_uuid in struct cache_set | Coly Li | 4 | -15/+16
This patch adds a separate set_uuid[16] field to struct cache_set, to store the uuid of the cache set. This is a preparation for removing the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: remove for_each_cache() | Coly Li | 6 | -294/+237
Since each cache_set now explicitly has a single cache, for_each_cache() is unnecessary. This patch removes the macro, updates all locations where it was used, and makes sure the code logic remains consistent. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: explicitly make cache_set only have single cache | Coly Li | 3 | -17/+12
Although the bcache code has a framework for multiple caches in a cache set, the multiple-caches support was never completed, and users rely on md raid1 for multiple copies of the cached data. This patch makes the following changes in struct cache_set to explicitly make a cache_set have only a single cache:
- Change the pointer array "*cache[MAX_CACHES_PER_SET]" to a single pointer "*cache".
- Remove the pointer array "*cache_by_alloc[MAX_CACHES_PER_SET]".
- Remove "caches_loaded".
Now the code reflects exactly what it does in practice: only one cache is used in the cache set. Signed-off-by: Coly Li <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: remove 'int n' from parameter list of bch_bucket_alloc_set() | Coly Li | 4 | -24/+19
The parameter 'int n' of bch_bucket_alloc_set() is not clearly defined. From the code comments, n is the number of buckets to allocate, but from the code itself 'n' is the maximum number of caches to iterate over. Indeed, in all the locations where bch_bucket_alloc_set() is called, 'n' is always 1. This patch removes the confusing and unnecessary 'int n' from the parameter list of bch_bucket_alloc_set(), and explicitly allocates only 1 bucket for its callers. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: Convert to DEFINE_SHOW_ATTRIBUTE | Qinglang Miao | 1 | -13/+3
Use the DEFINE_SHOW_ATTRIBUTE macro to simplify the code. inode->i_private equals the data argument of debugfs_create_file(), which is NULL here, so this is equivalent to the original code logic. Signed-off-by: Qinglang Miao <[email protected]> Signed-off-by: Coly Li <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
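A minimal sketch of the conversion pattern (hypothetical names, not the bcache code):

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static int example_stats_show(struct seq_file *m, void *data)
{
        seq_puts(m, "example stats\n");
        return 0;
}
DEFINE_SHOW_ATTRIBUTE(example_stats); /* generates example_stats_fops */

static void example_debugfs_init(struct dentry *parent)
{
        /* The data argument becomes inode->i_private; NULL here, as in
         * the commit message. */
        debugfs_create_file("stats", 0444, parent, NULL, &example_stats_fops);
}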
2020-10-02 | bcache: check c->root with IS_ERR_OR_NULL() in mca_reserve() | Dongsheng Yang | 1 | -1/+1
In the mca_reserve(c) macro we check whether root is NULL or not, but that's not enough. When we read the root node in run_cache_set(), if we get an error in bch_btree_node_read_done(), ERR_PTR(-EIO) is assigned to c->root. We then go on to unregister, but before calling unregister_shrinker(&c->shrink) there is a window in which bch_mca_count() can be called, and we get a crash with a call trace like this:
[ 2149.876008] Unable to handle kernel NULL pointer dereference at virtual address 00000000000000b5
... ...
[ 2150.598931] Call trace:
[ 2150.606439] bch_mca_count+0x58/0x98 [escache]
[ 2150.615866] do_shrink_slab+0x54/0x310
[ 2150.624429] shrink_slab+0x248/0x2d0
[ 2150.632633] drop_slab_node+0x54/0x88
[ 2150.640746] drop_slab+0x50/0x88
[ 2150.648228] drop_caches_sysctl_handler+0xf0/0x118
[ 2150.657219] proc_sys_call_handler.isra.18+0xb8/0x110
[ 2150.666342] proc_sys_write+0x40/0x50
[ 2150.673889] __vfs_write+0x48/0x90
[ 2150.681095] vfs_write+0xac/0x1b8
[ 2150.688145] ksys_write+0x6c/0xd0
[ 2150.695127] __arm64_sys_write+0x24/0x30
[ 2150.702749] el0_svc_handler+0xa0/0x128
[ 2150.710296] el0_svc+0x8/0xc
Signed-off-by: Dongsheng Yang <[email protected]> Signed-off-by: Coly Li <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
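The pattern of the fix, as a generic sketch (not the exact mca_reserve() macro):

#include <linux/err.h>
#include <linux/types.h>

/* c->root can legitimately hold ERR_PTR(-EIO) after a failed read, so a
 * plain NULL check is not enough before dereferencing it. */
static bool example_root_usable(const void *root)
{
        return !IS_ERR_OR_NULL(root);
}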
2020-10-02 | bcache: share register sysfs with async register | Coly Li | 1 | -5/+7
Previously the experimental async registration used a separate sysfs file, register_async. The async registration code has now been working well for a while, so we can do further testing with it. This patch changes async bcache registration to share the same sysfs file /sys/fs/bcache/register (and register_quiet). Async registration will be the default behavior if BCACHE_ASYNC_REGISTRATION is set in the kernel configuration. By default, BCACHE_ASYNC_REGISTRATION is not enabled yet. Signed-off-by: Coly Li <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-09-29 | null_blk: add support for max open/active zone limit for zoned devices | Niklas Cassel | 3 | -58/+282
Add support for user space to set a max open zone and a max active zone limit via configfs. The default value for both is 0, i.e. no limit. Call the block layer API functions used for exposing the configured limits to sysfs. Add accounting in null_blk_zoned so that these new limits are respected. Performing an operation that would exceed these limits results in a standard I/O error. A max open zone limit exists in the ZBC standard. While null_blk_zoned is used to test the Zoned Block Device model in Linux, when it comes to differences between ZBC and ZNS, null_blk_zoned mostly follows ZBC. Therefore, implement the manage open zone resources function from ZBC, but additionally add support for max active zones. This enables user space not only to test against a device with an open zone limit, but also to test against a device with an active zone limit. Signed-off-by: Niklas Cassel <[email protected]> Reviewed-by: Damien Le Moal <[email protected]> Reviewed-by: Johannes Thumshirn <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
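A hypothetical sketch of the kind of resource check being described (names and shape are illustrative, not the null_blk code):

#include <linux/errno.h>

static int example_check_zone_resources(unsigned int nr_open,
                                        unsigned int nr_active,
                                        unsigned int max_open,
                                        unsigned int max_active)
{
        /* A limit of 0 means "no limit" for either knob. */
        if (max_active && nr_active >= max_active)
                return -EIO; /* would exceed the active zone limit */
        if (max_open && nr_open >= max_open)
                return -EIO; /* would exceed the open zone limit */
        return 0;
}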
2020-09-28 | Merge tag 'nvme-5.10-2020-09-27' of git://git.infradead.org/nvme into for-5.10/drivers | Jens Axboe | 10 | -117/+93
Pull NVMe updates from Christoph: "nvme updates for 5.10
- fix keep alive timer modification (Amit Engel)
- order the PCI ID list more sensibly (Andy Shevchenko)
- cleanup the open by controller helper (Chaitanya Kulkarni)
- use an xarray for the CSE log lookup (Chaitanya Kulkarni)
- support ZNS in nvmet passthrough mode (Chaitanya Kulkarni)
- fix nvme_ns_report_zones (me)
- add a sanity check to nvmet-fc (James Smart)
- fix interrupt allocation when too many polled queues are specified (Jeffle Xu)
- small nvmet-tcp optimization (Mark Wunderlich)"
* tag 'nvme-5.10-2020-09-27' of git://git.infradead.org/nvme:
  nvme-pci: allocate separate interrupt for the reserved non-polled I/O queue
  nvme: fix error handling in nvme_ns_report_zones
  nvmet-fc: fix missing check for no hostport struct
  nvmet: add passthru ZNS support
  nvmet: handle keep-alive timer when kato is modified by a set features cmd
  nvmet-tcp: have queue io_work context run on sock incoming cpu
  nvme-pci: Move enumeration by class to be last in the table
  nvme: use an xarray to lookup the Commands Supported and Effects log
  nvme: lift the file open code from nvme_ctrl_get_by_path
2020-09-27 | nvme-pci: allocate separate interrupt for the reserved non-polled I/O queue | Jeffle Xu | 1 | -17/+15
One queue will be reserved for non-polled IO when nvme.poll_queues is greater than or equal to the number of IO queues that the nvme controller can provide. Currently the reserved queue for non-polled IO reuses the interrupt used by the admin queue in this case, e.g. vector 0. This can work and performance may not be an issue since the admin queue is used infrequently. However, this behaviour is inconsistent with the case where nvme.poll_queues is smaller than the number of IO queues available. Thus, allocate a separate interrupt for this reserved queue to make the behaviour consistent. Signed-off-by: Jeffle Xu <[email protected]> [hch: minor cleanups, mostly to the pre-existing surrounding code] Signed-off-by: Christoph Hellwig <[email protected]>
2020-09-27 | nvme: fix error handling in nvme_ns_report_zones | Christoph Hellwig | 1 | -25/+16
nvme_submit_sync_cmd can return positive NVMe error codes in addition to the negative Linux error code, which are currently ignored. Fix this by removing __nvme_ns_report_zones and handling the errors from nvme_submit_sync_cmd in the caller instead of multiplexing the return value and the number of zones reported into a single return value. Fixes: 240e6ee272c0 ("nvme: support for zoned namespaces") Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Damien Le Moal <[email protected]>
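Illustrative only (hypothetical helper, not the actual fix): the caller now has to distinguish negative Linux errnos from positive NVMe status codes instead of folding them into a zone count.

#include <linux/errno.h>

static int example_check_sync_cmd_status(int ret)
{
        if (ret < 0)
                return ret;   /* negative Linux error code, pass through */
        if (ret > 0)
                return -EIO;  /* positive NVMe status code, map to an I/O error */
        return 0;             /* success */
}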
2020-09-27 | nvmet-fc: fix missing check for no hostport struct | James Smart | 1 | -1/+1
A hostport port pointer is allowed to be NULL as it is not allocated if the lldd does not support the new interfaces for NVME LS request support. The hostport free routine validates the handle but forgot to validate the hostport pointer. Validate the hostport pointer before using it to validate the handle. Signed-off-by: James Smart <[email protected]> Reviewed-by: Himanshu Madhani <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
2020-09-27 | nvmet: add passthru ZNS support | Chaitanya Kulkarni | 1 | -0/+16
In the default passthru implementation the NVMeOF target passthru ctrl is not capable of handling Zoned Namespaces (ZNS). Update nvmet_parse_passthru_admin_cmd() to allow NVME_ID_CNS_CS_CTRL/NVME_CSI_ZNS and NVME_ID_CNS_CS_NS/NVME_CSI_ZNS. With this addition NVMeOF passthru allows Zoned Namespaces. Signed-off-by: Chaitanya Kulkarni <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
2020-09-27 | nvmet: handle keep-alive timer when kato is modified by a set features cmd | Amit Engel | 3 | -2/+6
A user may modify the kato by a set features cmd. To properly deal with races or a kato value of 0 (no keep alive enabled) change nvmet_set_feat_kato to first disable the timer, then set the value and then re-enable the timer. Signed-off-by: Amit Engel <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
2020-09-27 | nvmet-tcp: have queue io_work context run on sock incoming cpu | Mark Wunderlich | 1 | -11/+10
There is no real need to spread queues artificially. Usually the target will serve multiple hosts, and it's better to run on the socket's incoming cpu for better affinitization rather than spread queues over all online cpus. We rely on RSS to spread the work around sufficiently. Signed-off-by: Mark Wunderlich <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
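A small sketch of the affinity choice (hypothetical wrapper, not the nvmet-tcp code):

#include <linux/net.h>
#include <linux/workqueue.h>
#include <net/sock.h>

/* Run the per-queue io_work on the CPU the socket reports as its incoming
 * CPU, instead of spreading the work across all online CPUs. */
static void example_queue_io_work(struct socket *sock,
                                  struct workqueue_struct *wq,
                                  struct work_struct *io_work)
{
        int cpu = READ_ONCE(sock->sk->sk_incoming_cpu);

        queue_work_on(cpu, wq, io_work);
}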
2020-09-27 | nvme-pci: Move enumeration by class to be last in the table | Andy Shevchenko | 1 | -1/+2
It's unusual that we have enumeration by class in the middle of the table. It might potentially be problematic in the future if we add another entry after it. So, move class matching entry to be the last in the ID table. Signed-off-by: Andy Shevchenko <[email protected]> Reviewed-by: Keith Busch <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Reviewed-by: Chaitanya Kulkarni <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
2020-09-27 | nvme: use an xarray to lookup the Commands Supported and Effects log | Chaitanya Kulkarni | 2 | -28/+5
When using a linked list we have to open-code the locking, search, and destroy operations as loops, even though this data structure is not on the fast path. One of the main advantages of using an XArray to store, search, and remove items is that it handles all the locking by itself, avoids the loops needed with linked lists, and provides a clear API to replace the linked list's search and destroy loops. This patch replaces the ctrl->cel list with an XArray and removes: a. the extra code needed to manage ctrl->cel items on a linked list, such as nvme_find_cel(); b. the destroy loop in nvme_free_ctrl(); c. the explicit insertion locking in nvme_get_effects_log(). Signed-off-by: Chaitanya Kulkarni <[email protected]> Reviewed-by: Keith Busch <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
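A minimal sketch of the xarray pattern being adopted (hypothetical log type and names, not the nvme code):

#include <linux/slab.h>
#include <linux/xarray.h>

/* Look up a cached effects log by command set index, allocating and
 * inserting one if it is not there yet; the xarray does the locking. */
static void *example_get_effects_log(struct xarray *cels, unsigned long csi)
{
        void *cel = xa_load(cels, csi);

        if (cel)
                return cel;

        cel = kzalloc(64, GFP_KERNEL); /* stand-in for the real log buffer */
        if (!cel)
                return NULL;

        if (xa_err(xa_store(cels, csi, cel, GFP_KERNEL))) {
                kfree(cel);
                return NULL;
        }
        return cel;
}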
2020-09-27 | nvme: lift the file open code from nvme_ctrl_get_by_path | Chaitanya Kulkarni | 3 | -32/+22
Lift the file open/close code from nvme_ctrl_get_by_path into the caller, keeping just a simple nvme_ctrl_from_file() helper. Signed-off-by: Chaitanya Kulkarni <[email protected]> [hch: refactored a bit, split the bug fixes into a separate prep patch] Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Logan Gunthorpe <[email protected]>
2020-09-25 | Merge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.10/drivers | Jens Axboe | 13 | -213/+967
Pull MD updates from Song.
* 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
  md/raid10: improve discard request for far layout
  md/raid10: improve raid10 discard request
  md/raid10: pull codes that wait for blocked dev into one function
  md/raid10: extend r10bio devs to raid disks
  md: add md_submit_discard_bio() for submitting discard bio
  md: Simplify code with existing definition RESYNC_SECTORS in raid10.c
  md/raid5: reallocate page array after setting new stripe_size
  md/raid5: resize stripe_head when reshape array
  md/raid5: let multiple devices of stripe_head share page
  md/raid6: let async recovery function support different page offset
  md/raid6: let syndrome computer support different page offset
  md/raid5: convert to new xor computation interface
  md/raid5: add new xor function to support different page offset
  md/raid5: make async_copy_data() to support different page offset
  md/raid5: add a new member of offset into r5dev
  md: only calculate blocksize once and use i_blocksize()
2020-09-24 | md/raid10: improve discard request for far layout | Xiao Ni | 2 | -23/+64
For the far layout, the discard region is not contiguous on the disks, so it needs 'far copies' r10bios to cover all regions. It needs a way to know whether all r10bios have finished or not. Similar to raid10_sync_request, only the first r10bio's master_bio records the discard bio; the other r10bios' master_bio record the first r10bio. The first r10bio can finish only after the other r10bios finish, and then it returns the discard bio. Signed-off-by: Xiao Ni <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid10: improve raid10 discard request | Xiao Ni | 1 | -1/+255
Currently the discard request is split by chunk size, so it takes a long time to finish mkfs on disks which support the discard function. This patch improves the handling of raid10 discard requests. It uses a similar approach to patch 29efc390b (md/md0: optimize raid0 discard handling), but it's a little more complex than raid0 because raid10 has a different layout. If raid10 has an offset layout and the discard request is smaller than the stripe size, there are some holes when we submit the discard bio to the underlying disks. For example, with five disks (disk1 - disk5):
D01 D02 D03 D04 D05
D05 D01 D02 D03 D04
D06 D07 D08 D09 D10
D10 D06 D07 D08 D09
The discard bio just wants to discard from D03 to D10. For disk3, there is a hole between D03 and D08. For disk4, there is a hole between D04 and D09. D03 is a chunk, and raid10_write_request can handle one chunk perfectly, so the part that is not aligned with the stripe size is still handled by raid10_write_request. If reshape is running when a discard bio comes in and the discard bio spans the reshape position, raid10_write_request is responsible for handling this discard bio. I did a test with this patch set. Without the patch:
time mkfs.xfs /dev/md0
real 4m39.775s
user 0m0.000s
sys 0m0.298s
With the patch:
time mkfs.xfs /dev/md0
real 0m0.105s
user 0m0.000s
sys 0m0.007s
nvme3n1     259:1    0  477G  0 disk
└─nvme3n1p1 259:10   0   50G  0 part
nvme4n1     259:2    0  477G  0 disk
└─nvme4n1p1 259:11   0   50G  0 part
nvme5n1     259:6    0  477G  0 disk
└─nvme5n1p1 259:12   0   50G  0 part
nvme2n1     259:9    0  477G  0 disk
└─nvme2n1p1 259:15   0   50G  0 part
nvme0n1     259:13   0  477G  0 disk
└─nvme0n1p1 259:14   0   50G  0 part
Reviewed-by: Coly Li <[email protected]> Reviewed-by: Guoqing Jiang <[email protected]> Signed-off-by: Xiao Ni <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid10: pull codes that wait for blocked dev into one function | Xiao Ni | 1 | -51/+67
The following patch will reuse this logic, so pull the shared code into one function. Signed-off-by: Xiao Ni <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid10: extend r10bio devs to raid disks | Xiao Ni | 1 | -5/+4
Currently it allocates r10bio->devs[conf->copies]. A discard bio needs to be submitted to all member disks and it needs to use r10bio, so extend this to r10bio->devs[geo.raid_disks]. Reviewed-by: Coly Li <[email protected]> Signed-off-by: Xiao Ni <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md: add md_submit_discard_bio() for submitting discard bio | Xiao Ni | 3 | -12/+24
Move this logic from raid0.c to md.c, so that we can also use it in raid10.c. Reviewed-by: Coly Li <[email protected]> Reviewed-by: Guoqing Jiang <[email protected]> Signed-off-by: Xiao Ni <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md: Simplify code with existing definition RESYNC_SECTORS in raid10.c | Zhen Lei | 1 | -4/+4
#define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9)
"RESYNC_BLOCK_SIZE/512" is equal to "RESYNC_BLOCK_SIZE >> 9", replace it with RESYNC_SECTORS. Signed-off-by: Zhen Lei <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid5: reallocate page array after setting new stripe_size | Yufen Yu | 1 | -0/+20
When trying to resize stripe_size, we also need to free the old shared page array and allocate a new one. Signed-off-by: Yufen Yu <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid5: resize stripe_head when reshape array | Yufen Yu | 1 | -0/+24
When reshaping the array, we try to reuse the shared pages of the old stripe_head, and allocate more for the new one if needed. Signed-off-by: Yufen Yu <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid5: let multiple devices of stripe_head share page | Yufen Yu | 2 | -3/+114
In the current implementation, grow_buffers() uses alloc_page() to allocate the buffers for each stripe_head, i.e. it allocates a page for each dev[i] in the stripe_head. After making stripe_size a configurable value via a sysfs entry, this means we always allocate 64K buffers but use only 4K of them when stripe_size is 4K on arm64 with a 64K page size. To avoid wasting memory, we let multiple sh->dev share one real page. That means multiple sh->dev[i].page will point to the same page at different offsets. Example for 64K PAGE_SIZE and 4K stripe_size:
                    64K PAGE_SIZE
            +---+---+---+---+------------------------------+
            |   |   |   |   |
            |   |   |   |   |
            +-+-+-+-+-+-+-+-+------------------------------+
              ^   ^   ^   ^
              |   |   |   +----------------------------+
              |   |   |                                |
              |   |   +-------------------+            |
              |   |                       |            |
              |   +----------+            |            |
              |              |            |            |
              +-+            |            |            |
                |            |            |            |
        +-----+-----+------+-----+------+-----+------+------+
sh      | offset(0) | offset(4K) | offset(8K) | offset(12K) |
 +      +-----------+------------+------------+-------------+
 +----> dev[0].page  dev[1].page  dev[2].page  dev[3].page
A new 'pages' array is added to stripe_head to record the shared pages used by this stripe_head. They are allocated in grow_buffers() and freed in shrink_buffers(). After sharing pages, the users of sh->dev[i].page need to take care of the related page offset: the page of an issued bio and the page passed to the xor computation functions. Thanks to the previous patches that added support for different page offsets, here we just need to set the correct dev[i].offset. Signed-off-by: Yufen Yu <[email protected]> Signed-off-by: Song Liu <[email protected]>
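An illustrative helper (hypothetical, not the raid5 code) for the addressing the diagram shows: with a 64K page and a 4K stripe_size, dev[i] uses the shared page at offset (i modulo devices-per-page) times stripe_size.

/* Offset of sh->dev[i]'s data inside a shared page of page_size bytes. */
static inline unsigned int example_dev_offset(unsigned int i,
                                              unsigned int stripe_size,
                                              unsigned int page_size)
{
        unsigned int devs_per_page = page_size / stripe_size; /* 16 for 64K/4K */

        return (i % devs_per_page) * stripe_size;
}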
2020-09-24 | md/raid6: let async recovery function support different page offset | Yufen Yu | 4 | -49/+128
For now, the asynchronous raid6 recovery calculation functions require a common offset for the pages. But we expect them to support different page offsets after introducing the stripe shared page. Do that by simply adding a page offset wherever each page address is referenced. Then, replace the old interfaces with the new ones in raid6 and raid6test. Signed-off-by: Yufen Yu <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid6: let syndrome computer support different page offset | Yufen Yu | 4 | -47/+91
For now, the syndrome compute functions require a common offset in the pages array. However, we expect them to support different offsets when we try to use shared pages later. Simply convert them by adding a page offset wherever each page address is referenced. Since the only callers of async_gen_syndrome() and async_syndrome_val() are in raid6, we don't want to preserve the old interface but modify it directly. After that, replace the old interfaces with the new ones in raid6 and raid6test. Signed-off-by: Yufen Yu <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid5: convert to new xor computation interface | Yufen Yu | 1 | -9/+34
We replace async_xor() and async_xor_val() with the newly introduced interfaces async_xor_offs() and async_xor_val_offs() for raid456. Signed-off-by: Yufen Yu <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid5: add new xor function to support different page offset | Yufen Yu | 2 | -17/+114
raid5 calls async_xor() and async_xor_val() to compute xor. For now, both of them require a common src/dst page offset. But we want them to support different src/dst page offsets for the upcoming shared-page support. Here, add two new functions, async_xor_offs() and async_xor_val_offs(), corresponding to async_xor() and async_xor_val(). Signed-off-by: Yufen Yu <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid5: make async_copy_data() to support different page offset | Yufen Yu | 1 | -3/+5
ops_run_biofill() and ops_run_biodrain() will call async_copy_data() to copy sh->dev[i].page from or to bio page. For now, it implies the offset of dev[i].page is 0. But we want to support different page offset in the following. Thus, pass page offset to these functions and replace 'page_offset' with 'page_offset + poff'. No functional change. Signed-off-by: Yufen Yu <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md/raid5: add a new member of offset into r5dev | Yufen Yu | 2 | -11/+40
Add a new member, offset, to struct r5dev. It indicates the offset of the related dev[i].page. For now, since each device has a private page, the value is always 0; thus, we set offset to 0 when allocating the page in grow_buffers() and resize_stripes(). To support different page offsets later, we use the page offset rather than '0' directly for async_memcpy() and ops_run_io(). Support for different page offsets in the xor computation functions follows in later patches. To avoid repeatedly allocating a new array each time, we add a memory region to the scribble buffer to record the offsets. No functional change. Signed-off-by: Yufen Yu <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | md: only calculate blocksize once and use i_blocksize() | Xianting Tian | 1 | -3/+4
We already have the interface i_blocksize(), which can be used to get the blocksize, so use it. Only calculate the blocksize once and reuse it within read_page(). Signed-off-by: Xianting Tian <[email protected]> Signed-off-by: Song Liu <[email protected]>
2020-09-24 | null_blk: Support shared tag bitmap | John Garry | 1 | -0/+6
Support a shared tag bitmap, whereby request tags are unique over all submission queues, and not just per submission queue. As such, the per-device total queue depth is normally hw_queue_depth * submit_queues, but just hw_queue_depth when the shared tag bitmap is enabled. A similar story applies when shared_tags is set, in which case that is the queue depth over all null_blk devices. Signed-off-by: John Garry <[email protected]> Tested-by: Douglas Gilbert <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
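A rough sketch of the tag-set setup this implies (hypothetical helper, not the null_blk source; the flag is assumed to be the blk-mq shared-tag flag from the 5.10 shared-sbitmap work):

#include <linux/blk-mq.h>

/* With a shared tag bitmap the tags (and so the effective queue depth) are
 * shared across all hw queues; otherwise each of the submit_queues gets
 * its own hw_queue_depth tags. */
static void example_init_tag_set(struct blk_mq_tag_set *set,
                                 unsigned int submit_queues,
                                 unsigned int hw_queue_depth,
                                 bool shared_tag_bitmap)
{
        set->nr_hw_queues = submit_queues;
        set->queue_depth  = hw_queue_depth;
        set->flags        = BLK_MQ_F_SHOULD_MERGE;
        if (shared_tag_bitmap)
                set->flags |= BLK_MQ_F_TAG_HCTX_SHARED;
}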
2020-09-24 | Merge branch 'for-5.10/block' into for-5.10/drivers | Jens Axboe | 117 | -2094/+2663
* for-5.10/block: (140 commits)
  bdi: replace BDI_CAP_NO_{WRITEBACK,ACCT_DIRTY} with a single flag
  bdi: invert BDI_CAP_NO_ACCT_WB
  bdi: replace BDI_CAP_STABLE_WRITES with a queue and a sb flag
  mm: use SWP_SYNCHRONOUS_IO more intelligently
  bdi: remove BDI_CAP_SYNCHRONOUS_IO
  bdi: remove BDI_CAP_CGROUP_WRITEBACK
  block: lift setting the readahead size into the block layer
  md: update the optimal I/O size on reshape
  bdi: initialize ->ra_pages and ->io_pages in bdi_init
  aoe: set an optimal I/O size
  bcache: inherit the optimal I/O size
  drbd: remove dead code in device_to_statistics
  fs: remove the unused SB_I_MULTIROOT flag
  block: mark blkdev_get static
  PM: mm: cleanup swsusp_swap_check
  mm: split swap_type_of
  PM: rewrite is_hibernate_resume_dev to not require an inode
  mm: cleanup claim_swapfile
  ocfs2: cleanup o2hb_region_dev_store
  dasd: cleanup dasd_scan_partitions
  ...