path: root/drivers/mmc/core/queue.c
2024-10-03  mmc: core: Only set maximum DMA segment size if DMA is supported  (Guenter Roeck, 1 file, -1/+2)
Since upstream commit 334304ac2bac ("dma-mapping: don't return errors from dma_set_max_seg_size") calling dma_set_max_seg_size() on a device not supporting DMA results in a warning traceback. This is seen when booting the sifive_u machine from SD. The underlying SPI controller (sifive,spi0 compatible) explicitly sets dma_mask to NULL. Avoid the backtrace by only calling dma_set_max_seg_size() if DMA is supported. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Fixes: 334304ac2bac ("dma-mapping: don't return errors from dma_set_max_seg_size") Link: https://lore.kernel.org/r/20240924210123.2288529-1-linux@roeck-us.net Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
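Illustratively, the guard amounts to something like the following sketch (variable names taken from the surrounding queue.c code; the exact upstream check may differ):

	struct device *dev = mmc_dev(host);

	/* Do not advertise a DMA segment limit for devices that cannot do DMA. */
	if (dev->dma_mask)
		dma_set_max_seg_size(dev, queue_max_segment_size(mq->queue));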
2024-06-19  block: move the stable_writes flag to queue_limits  (Christoph Hellwig, 1 file, -2/+3)
Move the stable_writes flag into the queue_limits feature field so that it can be set atomically with the queue frozen. The flag is now inherited by blk_stack_limits, which greatly simplifies the code in dm, and fixed md which previously did not pass on the flag set on lower devices. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/r/20240617060532.127975-18-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-06-19  block: move the add_random flag to queue_limits  (Christoph Hellwig, 1 file, -2/+0)
Move the add_random flag into the queue_limits feature field so that it can be set atomically with the queue frozen. Note that this also removes code from dm to clear the flag based on the underlying devices, which can't be reached as dm devices will always start out without the flag set. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/r/20240617060532.127975-16-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-06-19  block: move the nonrot flag to queue_limits  (Christoph Hellwig, 1 file, -1/+0)
Move the nonrot flag into the queue_limits feature field so that it can be set atomically with the queue frozen. Use the chance to switch to defaulting to non-rotational and require the driver to opt into rotational, which matches the polarity of the sysfs interface. For the z2ram, ps3vram, 2x memstick, ubiblock and dcssblk the new rotational flag is not set as they clearly are not rotational despite this being a behavior change. There are some other drivers that unconditionally set the rotational flag to keep the existing behavior as they arguably can be used on rotational devices even if that is probably not their main use today (e.g. virtio_blk and drbd). The flag is automatically inherited in blk_stack_limits matching the existing behavior in dm and md. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/r/20240617060532.127975-15-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-06-19  block: move cache control settings out of queue->flags  (Christoph Hellwig, 1 file, -4/+8)
Move the cache control settings into the queue_limits so that the flags can be set atomically with the device queue frozen. Add new features and flags field for the driver set flags, and internal (usually sysfs-controlled) flags in the block layer. Note that we'll eventually remove enough field from queue_limits to bring it back to the previous size. The disable flag is inverted compared to the previous meaning, which means it now survives a rescan, similar to the max_sectors and max_discard_sectors user limits. The FLUSH and FUA flags are now inherited by blk_stack_limits, which simplified the code in dm a lot, but also causes a slight behavior change in that dm-switch and dm-unstripe now advertise a write cache despite setting num_flush_bios to 0. The I/O path will handle this gracefully, but as far as I can tell the lack of num_flush_bios and thus flush support is a pre-existing data integrity bug in those targets that really needs fixing, after which a non-zero num_flush_bios should be required in dm for targets that map to underlying devices. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/r/20240617060532.127975-14-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-03-13  Merge tag 'mmc-v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc  (Linus Torvalds, 1 file, -3/+0)

Pull MMC updates from Ulf Hansson:

 "MMC core:
   - Drop the use of BLK_BOUNCE_HIGH
   - Fix partition switch for GP3
   - Remove usage of the deprecated ida_simple API

  MMC host:
   - cqhci: Update bouncing email-addresses in MAINTAINERS
   - davinci_mmc: Use sg_miter for PIO
   - dw_mmc-hi3798cv200: Convert the DT bindings to YAML
   - dw_mmc-hi3798mv200: Add driver for the new dw_mmc variant
   - fsl-imx-esdhc: A couple of corrections/updates to the DT bindings
   - meson-mx-sdhc: Drop use of the ->card_hw_reset() callback
   - moxart-mmc: Use sg_miter for PIO
   - moxart-mmc: Fix accounting for DMA transfers
   - mvsdio: Use sg_miter for PIO
   - mxcmmc: Use sg_miter for PIO
   - omap: Use sg_miter for PIO
   - renesas,sdhi: Add support for R-Car V4M variant
   - sdhci-esdhc-mcf: Use sg_miter for swapping
   - sdhci-of-dwcmshc: Add support for Sophgo CV1800B and SG2002 variants
   - sh_mmcif: Use sg_miter for PIO
   - tmio: Avoid concurrent runs of mmc_request_done()"

* tag 'mmc-v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (44 commits)
  mmc: core: make mmc_host_class constant
  mmc: core: Fix switch on gp3 partition
  mmc: tmio: comment the ERR_PTR usage in this driver
  mmc: mmc_spi: Don't mention DMA direction
  mmc: dw_mmc: Remove unused of_gpio.h
  mmc: dw_mmc: add support for hi3798mv200
  dt-bindings: mmc: hisilicon,hi3798cv200-dw-mshc: add Hi3798MV200 binding
  dt-bindings: mmc: dw-mshc-hi3798cv200: convert to YAML
  mmc: dw_mmc-hi3798cv200: remove MODULE_ALIAS()
  mmc: core: Use a struct device* as in-param to mmc_of_parse_clk_phase()
  mmc: wmt-sdmmc: remove an incorrect release_mem_region() call in the .remove function
  mmc: tmio: avoid concurrent runs of mmc_request_done()
  dt-bindings: mmc: fsl-imx-mmc: Document the required clocks
  mmc: sh_mmcif: Advance sg_miter before reading blocks
  mmc: sh_mmcif: sg_miter must not be atomic
  mmc: sdhci-esdhc-mcf: Flag the sg_miter as atomic
  dt-bindings: mmc: fsl-imx-esdhc: add default and 100mhz state
  mmc: core: constify the struct device_type usage
  mmc: sdhci-of-dwcmshc: Add support for Sophgo CV1800B and SG2002
  dt-bindings: mmc: sdhci-of-dwcmhsc: Add Sophgo CV1800B and SG2002 support
  ...
2024-02-19  mmc: pass queue_limits to blk_mq_alloc_disk  (Christoph Hellwig, 1 file, -45/+52)
Pass the queue limit set at initialization time directly to blk_mq_alloc_disk instead of updating it right after the allocation. This requires refactoring the code a bit so that what was mmc_setup_queue before also allocates the gendisk now and actually sets all limits. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Ulf Hansson <ulf.hansson@linaro.org> Link: https://lore.kernel.org/r/20240215070300.2200308-18-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-02-13  block: pass a queue_limits argument to blk_mq_alloc_disk  (Christoph Hellwig, 1 file, -1/+1)
Pass a queue_limits to blk_mq_alloc_disk and apply it if non-NULL. This will allow allocating queues with valid queue limits instead of setting the values one at a time later. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/r/20240213073425.1621680-11-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-02-13  mmc: core: Drop BLK_BOUNCE_HIGH  (Linus Walleij, 1 file, -2/+0)

The MMC core sets BLK_BOUNCE_HIGH for devices where dma_mask is unassigned. For the majority of MMC hosts this path is never taken: the OF core will unconditionally assign a 32-bit mask to any OF device, and most MMC hosts are probed from device tree, see drivers/of/platform.c:

  of_platform_device_create_pdata()
    dev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
    if (!dev->dev.dma_mask)
      dev->dev.dma_mask = &dev->dev.coherent_dma_mask;

  of_amba_device_create()
    dev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
    dev->dev.dma_mask = &dev->dev.coherent_dma_mask;

MMC devices that are probed from ACPI or PCI will likewise have a proper dma_mask assigned. The only remaining devices that could have a blank dma_mask are platform devices instantiated from board files. These are mostly used on systems without CONFIG_HIGHMEM enabled, which means the block layer will not bounce, and in the few cases where it is enabled it is not used anyway: for example some OMAP2 systems such as the Nokia n800/n810 will create a platform_device and not assign a dma_mask, however they do not have any highmem, so no bouncing will happen anyway: the block core checks if max_low_pfn >= max_pfn and this will always be false. Should it turn out there is a platform_device with a blank DMA mask actually using CONFIG_HIGHMEM somewhere out there, we should set dma_mask for it, not do this trickery.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240125-mmc-no-blk-bounce-high-v1-1-d0f92a30e085@linaro.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2023-09-27  mmc: core: Allow dynamical updates of the number of requests for hsq  (Wenchao Chen, 1 file, -5/+1)

To allow dynamic updates of the current number of in-flight requests in use, let's move away from using a hard-coded value to using a corresponding variable in the struct mmc_host. This can be valuable when optimizing for certain I/O request sequences, as shown by subsequent changes.

Signed-off-by: Wenchao Chen <wenchao.chen@unisoc.com>
Link: https://lore.kernel.org/r/20230919074707.25517-2-wenchao.chen@unisoc.com
[Ulf: Re-wrote the commitmsg to clarify the change]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2022-10-24  mmc: core: Fix WRITE_ZEROES CQE handling  (Vincent Whitchurch, 1 file, -0/+1)
WRITE_ZEROES requests use TRIM, so mark them as needing to be issued synchronously even when a CQE is being used. Without this, mmc_blk_mq_issue_rq() triggers a WARN_ON_ONCE() and fails the request since we don't have any handling for issuing this asynchronously. Fixes: f7b6fc327327 ("mmc: core: Support zeroout using TRIM for eMMC") Reported-by: Jon Hunter <jonathanh@nvidia.com> Tested-by: Jon Hunter <jonathanh@nvidia.com> Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com> Reviewed-by: Avri Altman <avri.altman@wdc.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20221020130123.4033218-1-vincent.whitchurch@axis.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
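A hedged sketch of the issue-type routing this implies, modeled on queue.c's CQE issue-type helper (treat the exact cases as an approximation of the real switch):

	static enum mmc_issue_type mmc_cqe_issue_type(struct mmc_host *host,
						      struct request *req)
	{
		switch (req_op(req)) {
		case REQ_OP_DRV_IN:
		case REQ_OP_DRV_OUT:
		case REQ_OP_DISCARD:
		case REQ_OP_SECURE_ERASE:
		case REQ_OP_WRITE_ZEROES:	/* TRIM-backed, keep it synchronous */
			return MMC_ISSUE_SYNC;
		case REQ_OP_FLUSH:
			return mmc_cqe_can_dcmd(host) ? MMC_ISSUE_DCMD : MMC_ISSUE_ASYNC;
		default:
			return MMC_ISSUE_ASYNC;
		}
	}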
2022-10-17  mmc: queue: Cancel recovery work on cleanup  (Christian Löhle, 1 file, -0/+7)

Cancel any recovery work so that it cannot run after the queue cleanup. Any recovery that runs post-cleanup would dereference mq->card, which is NULL by then, and was not meaningful to begin with.

Cc: stable@vger.kernel.org
Signed-off-by: Christian Loehle <cloehle@hyperstone.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Link: https://lore.kernel.org/r/c865c0c9789d428494b67b820a78923e@hyperstone.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
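The change described boils down to a single cancellation on the cleanup path; a sketch (placement in mmc_cleanup_queue() assumed):

	/* In mmc_cleanup_queue(), before the card/queue pointers go away: */
	cancel_work_sync(&mq->recovery_work);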
2022-07-06  blk-mq: Drop blk_mq_ops.timeout 'reserved' arg  (John Garry, 1 file, -2/+1)
With new API blk_mq_is_reserved_rq() we can tell if a request is from the reserved pool, so stop passing 'reserved' arg. There is actually only a single user of that arg for all the callback implementations, which can use blk_mq_is_reserved_rq() instead. This will also allow us to stop passing the same 'reserved' around the blk-mq iter functions next. Signed-off-by: John Garry <john.garry@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Hannes Reinecke <hare@suse.de> Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # For MMC Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Link: https://lore.kernel.org/r/1657109034-206040-4-git-send-email-john.garry@huawei.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-06-28  block: simplify disk shutdown  (Christoph Hellwig, 1 file, -1/+0)
Set the queue dying flag and call blk_mq_exit_queue from del_gendisk for all disks that do not have separately allocated queues, and thus remove the need to call blk_cleanup_queue for them. Rename blk_cleanup_disk to blk_mq_destroy_queue to make it clear that this function is intended only for separately allocated blk-mq queues. This saves an extra queue freeze for devices without a separately allocated queue. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/r/20220619060552.1850436-6-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-05-24  Merge tag 'mmc-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc  (Linus Torvalds, 1 file, -0/+2)

Pull MMC updates from Ulf Hansson:

 "MMC core:
   - Support zero-out using TRIM for eMMC
   - Allow to override the busy-timeout for the ioctl-cmds

  MMC host:
   - Continued the conversion of DT bindings into the JSON schema
   - jz4740: Apply DMA engine limits to maximum segment size
   - mmci_stm32: Use a buffer for unaligned DMA requests
   - mmc_spi: Enabled high-speed modes via parsing of DT
   - omap: Make clock management to be compliant with CCF
   - renesas_sdhi:
      - Support eMMC HS400 mode for R-Car V3H ES2.0
      - Don't allow support for eMMC HS400 for R-Car V3M/D3
   - sdhci_am654: Fix problem when SD card slot lacks the card detect line
   - sdhci-esdhc-imx: Add support for the imx8dxl variant
   - sdhci-brcmstb: Enable support for clock gating to save power
   - sdhci-msm:
      - Add support for the sdx65 variant
      - Add support for the sm8150 variant
   - sdhci-of-dwcmshc: Add support for the Rockchip rk3588 variant
   - sdhci-pci-gli: Add workaround to allow GL9755 to enter ASPM L1.2"

* tag 'mmc-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (52 commits)
  mmc: sdhci-of-arasan: Add NULL check for data field
  mmc: core: Support zeroout using TRIM for eMMC
  mmc: sdhci-brcmstb: Fix compiler warning
  mmc: sdhci-msm: Add compatible string check for sdx65
  dt-bindings: mmc: sdhci-msm: Document the SDX65 compatible
  mmc: sdhci-msm: Add compatible string check for sm8150
  dt-bindings: mmc: sdhci-msm: Add compatible string for sm8150
  mmc: sdhci-msm: Add SoC specific compatibles
  dt-bindings: mmc: sdhci-msm: Convert bindings to yaml
  dt-bindings: mmc: brcm,sdhci-brcmstb: cleanup example
  dt-bindings: mmc: brcm,sdhci-brcmstb: correct number of reg entries
  mmc: sdhci-brcmstb: Enable Clock Gating to save power
  mmc: sdhci-brcmstb: Re-organize flags
  mmc: mmci: Remove custom ios handler
  mmc: atmel-mci: Simplify if(chan) and if(!chan)
  mmc: core: use kobj_to_dev()
  dt-bindings: mmc: sdhci-of-dwcmhsc: Add rk3588
  mmc: core: Add CIDs for cards to the entropy pool
  mmc: core: Allows to override the timeout value for ioctl() path
  mmc: sdhci-omap: Use of_device_get_match_data() helper
  ...
2022-05-10  mmc: core: Support zeroout using TRIM for eMMC  (Vincent Whitchurch, 1 file, -0/+2)
If an eMMC card supports TRIM and indicates that it erases to zeros, we can use it to support hardware offloading of REQ_OP_WRITE_ZEROES, so let's add support for this. Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com> Reviewed-by: Avri Altman <Avri.Altman@wdc.com> Link: https://lore.kernel.org/r/20220429152118.3617303-1-vincent.whitchurch@axis.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
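Roughly, queue setup gains a branch like the following sketch (condition and placement assumed from the description above):

	/* If TRIM erases to zeros, advertise hardware WRITE_ZEROES support. */
	if (mmc_can_trim(card) && card->erased_byte == 0)
		blk_queue_max_write_zeroes_sectors(mq->queue, max_discard);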
2022-04-17  block: decouple REQ_OP_SECURE_ERASE from REQ_OP_DISCARD  (Christoph Hellwig, 1 file, -1/+1)
Secure erase is a very different operation from discard in that it is a data integrity operation vs hint. Fully split the limits and helper infrastructure to make the separation more clear. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Acked-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> [drbd] Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com> [nifs2] Acked-by: Jaegeuk Kim <jaegeuk@kernel.org> [f2fs] Acked-by: Coly Li <colyli@suse.de> [bcache] Acked-by: David Sterba <dsterba@suse.com> [btrfs] Acked-by: Chao Yu <chao@kernel.org> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Link: https://lore.kernel.org/r/20220415045258.199825-27-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-17  block: remove QUEUE_FLAG_DISCARD  (Christoph Hellwig, 1 file, -1/+0)
Just use a non-zero max_discard_sectors as an indicator for discard support, similar to what is done for write zeroes. The only places where needs special attention is the RAID5 driver, which must clear discard support for security reasons by default, even if the default stacking rules would allow for it. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Acked-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> [drbd] Acked-by: Jan Höppner <hoeppner@linux.ibm.com> [s390] Acked-by: Coly Li <colyli@suse.de> [bcache] Acked-by: David Sterba <dsterba@suse.com> [btrfs] Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Link: https://lore.kernel.org/r/20220415045258.199825-25-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-12-21  mmc: core: Fix blk_status_t handling  (Joel Stanley, 1 file, -1/+1)

Sparse spits out the following warnings:

  drivers/mmc/core/queue.c:311:21: warning: incorrect type in assignment (different base types)
  drivers/mmc/core/queue.c:311:21:    expected int ret
  drivers/mmc/core/queue.c:311:21:    got restricted blk_status_t [usertype]
  drivers/mmc/core/queue.c:314:21: warning: incorrect type in assignment (different base types)
  drivers/mmc/core/queue.c:314:21:    expected int ret
  drivers/mmc/core/queue.c:314:21:    got restricted blk_status_t [usertype]
  drivers/mmc/core/queue.c:336:16: warning: incorrect type in return expression (different base types)
  drivers/mmc/core/queue.c:336:16:    expected restricted blk_status_t
  drivers/mmc/core/queue.c:336:16:    got int [assigned] ret

ret is only used for blk_status_t values, so make it that type.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211215011336.194089-1-joel@jms.id.au
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-08-25  mmc: queue: Remove unused parameters(request_queue)  (ChanWoo Lee, 1 file, -24/+6)
In function mmc_exit_request, the request_queue structure(*q) is not used. I remove the unnecessary code related to the request_queue structure. Signed-off-by: ChanWoo Lee <cw9316.lee@samsung.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Link: https://lore.kernel.org/r/20210825074601.8881-1-cw9316.lee@samsung.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-08-24  mmc: queue: Match the data type of max_segments  (ChanWoo Lee, 1 file, -2/+2)

Each function has a different data type for max_segments. Modify them to match unsigned short (host->max_segs):

  unsigned short max_segs;  /* see blk_queue_max_segments */

1) Return type: unsigned int

  static unsigned int mmc_get_max_segments(struct mmc_host *host)
  {
          return host->can_dma_map_merge ? MMC_DMA_MAP_MERGE_SEGMENTS :
                                           host->max_segs;
  }

2) Parameter type: int

  mmc_alloc_sg(mmc_get_max_segments(host), gfp);
  -> static struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp)

3) Parameter type: unsigned short

  blk_queue_max_segments(mq->queue, mmc_get_max_segments(host));
  -> void blk_queue_max_segments(struct request_queue *q, unsigned short max_segments)

Signed-off-by: ChanWoo Lee <cw9316.lee@samsung.com>
Acked-by: Coly Li <colyli@suse.de>
Link: https://lore.kernel.org/r/20210824073934.19727-1-cw9316.lee@samsung.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-06-30  mmc: switch to blk_mq_alloc_disk  (Christoph Hellwig, 1 file, -13/+10)
Use the blk_mq_alloc_disk to allocate the request_queue and gendisk together. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Link: https://lore.kernel.org/r/20210616053934.880951-3-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-03-30  mmc: core: Remove mq->use_cqe from the struct mmc_queue  (Luca Porzio, 1 file, -6/+5)
The host->cqe_enabled is already containing the needed information about whether the CQE is enabled or not, hence there is no need to keep another copy of it around. Signed-off-by: Luca Porzio <lporzio@micron.com> Signed-off-by: Zhan Liu <zliua@micron.com> Link: https://lore.kernel.org/r/20210215003217.GA12240@lupo-laptop Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-02-08  mmc: queue: Remove unused define  (ChanWoo Lee, 1 file, -2/+0)
MMC_CQE_QUEUE_FULL is not set and is only cleared. Therefore, define is unnecessary. Signed-off-by: ChanWoo Lee <cw9316.lee@samsung.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Link: https://lore.kernel.org/r/20210203072014.30272-1-cw9316.lee@samsung.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-02-01  mmc: core: Exclude unnecessary header file  (ChanWoo Lee, 1 file, -1/+0)
From the 4.19 kernel, thread related code has been removed in queue.c. So we can exclude unnecessary header file. Signed-off-by: ChanWoo Lee <cw9316.lee@samsung.com> Acked-by: Coly Li <colyli@suse.de> Link: https://lore.kernel.org/r/20210125064355.28545-1-cw9316.lee@samsung.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-02-01  mmc: core: Add basic support for inline encryption  (Eric Biggers, 1 file, -0/+3)

In preparation for adding CQHCI crypto engine (inline encryption) support, add the code required to make mmc_core and mmc_block aware of inline encryption. Specifically:

 - Add a capability flag MMC_CAP2_CRYPTO to struct mmc_host. Drivers will set this if the host and driver support inline encryption.

 - Embed a blk_keyslot_manager in struct mmc_host. Drivers will initialize this (as a device-managed resource) if the host and driver support inline encryption. mmc_block registers this keyslot manager with the request_queue of any MMC card attached to the host.

 - Make mmc_block copy the crypto keyslot and crypto data unit number from struct request to struct mmc_request, so that drivers will have access to them.

 - If the MMC host is reset, reprogram all the keyslots to ensure that the software state stays in sync with the hardware state.

Co-developed-by: Satya Tangirala <satyat@google.com>
Signed-off-by: Satya Tangirala <satyat@google.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Satya Tangirala <satyat@google.com>
Reviewed-and-tested-by: Peng Zhou <peng.zhou@mediatek.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://lore.kernel.org/r/20210126001456.382989-2-ebiggers@kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-01-15  mmc: core: don't initialize block size from ext_csd if not present  (Peter Collingbourne, 1 file, -1/+3)
If extended CSD was not available, the eMMC driver would incorrectly set the block size to 0, as the data_sector_size field of ext_csd was never initialized. This issue was exposed by commit 817046ecddbc ("block: Align max_hw_sectors to logical blocksize") which caused max_sectors and max_hw_sectors to be set to 0 after setting the block size to 0, resulting in a kernel panic in bio_split when attempting to read from the device. Fix it by only reading the block size from ext_csd if it is available. Fixes: a5075eb94837 ("mmc: block: Allow disabling 512B sector size emulation") Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Link: https://linux-review.googlesource.com/id/If244d178da4d86b52034459438fec295b02d6e60 Acked-by: Adrian Hunter <adrian.hunter@intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210114201405.2934886-1-pcc@google.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
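The fix amounts to defaulting the logical block size and only overriding it when ext_csd provides one; a sketch with names as used in queue.c (exact form assumed):

	unsigned int block_size = 512;

	if (mmc_card_mmc(card) && card->ext_csd.data_sector_size)
		block_size = card->ext_csd.data_sector_size;

	blk_queue_logical_block_size(mq->queue, block_size);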
2020-10-13  Merge tag 'block-5.10-2020-10-12' of git://git.kernel.dk/linux-block  (Linus Torvalds, 1 file, -2/+1)

Pull block updates from Jens Axboe:

 - Series of merge handling cleanups (Baolin, Christoph)
 - Series of blk-throttle fixes and cleanups (Baolin)
 - Series cleaning up BDI, separating the block device from the backing_dev_info (Christoph)
 - Removal of bdget() as a generic API (Christoph)
 - Removal of blkdev_get() as a generic API (Christoph)
 - Cleanup of is-partition checks (Christoph)
 - Series reworking disk revalidation (Christoph)
 - Series cleaning up bio flags (Christoph)
 - bio crypt fixes (Eric)
 - IO stats inflight tweak (Gabriel)
 - blk-mq tags fixes (Hannes)
 - Buffer invalidation fixes (Jan)
 - Allow soft limits for zone append (Johannes)
 - Shared tag set improvements (John, Kashyap)
 - Allow IOPRIO_CLASS_RT for CAP_SYS_NICE (Khazhismel)
 - DM no-wait support (Mike, Konstantin)
 - Request allocation improvements (Ming)
 - Allow md/dm/bcache to use IO stat helpers (Song)
 - Series improving blk-iocost (Tejun)
 - Various cleanups (Geert, Damien, Danny, Julia, Tetsuo, Tian, Wang, Xianting, Yang, Yufen, yangerkun)

* tag 'block-5.10-2020-10-12' of git://git.kernel.dk/linux-block: (191 commits)
  block: fix uapi blkzoned.h comments
  blk-mq: move cancel of hctx->run_work to the front of blk_exit_queue
  blk-mq: get rid of the dead flush handle code path
  block: get rid of unnecessary local variable
  block: fix comment and add lockdep assert
  blk-mq: use helper function to test hw stopped
  block: use helper function to test queue register
  block: remove redundant mq check
  block: invoke blk_mq_exit_sched no matter whether have .exit_sched
  percpu_ref: don't refer to ref->data if it isn't allocated
  block: ratelimit handle_bad_sector() message
  blk-throttle: Re-use the throtl_set_slice_end()
  blk-throttle: Open code __throtl_de/enqueue_tg()
  blk-throttle: Move service tree validation out of the throtl_rb_first()
  blk-throttle: Move the list operation after list validation
  blk-throttle: Fix IO hang for a corner case
  blk-throttle: Avoid tracking latency if low limit is invalid
  blk-throttle: Avoid getting the current time if tg->last_finish_time is 0
  blk-throttle: Remove a meaningless parameter for throtl_downgrade_state()
  block: Remove redundant 'return' statement
  ...
2020-10-09  mmc: core: don't set limits.discard_granularity as 0  (Coly Li, 1 file, -1/+1)

In mmc_queue_setup_discard() the mmc driver queue's discard_granularity might be set as 0 (when card->pref_erase > max_discard) while the mmc device still declares to support the discard operation. This is buggy and triggered the following kernel warning:

  WARNING: CPU: 0 PID: 135 at __blkdev_issue_discard+0x200/0x294
  CPU: 0 PID: 135 Comm: f2fs_discard-17 Not tainted 5.9.0-rc6 #1
  Hardware name: Google Kevin (DT)
  pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
  pc : __blkdev_issue_discard+0x200/0x294
  lr : __blkdev_issue_discard+0x54/0x294
  sp : ffff800011dd3b10
  x29: ffff800011dd3b10 x28: 0000000000000000
  x27: ffff800011dd3cc4 x26: ffff800011dd3e18
  x25: 000000000004e69b x24: 0000000000000c40
  x23: ffff0000f1deaaf0 x22: ffff0000f2849200
  x21: 00000000002734d8 x20: 0000000000000008
  x19: 0000000000000000 x18: 0000000000000000
  x17: 0000000000000000 x16: 0000000000000000
  x15: 0000000000000000 x14: 0000000000000394
  x13: 0000000000000000 x12: 0000000000000000
  x11: 0000000000000000 x10: 00000000000008b0
  x9 : ffff800011dd3cb0 x8 : 000000000004e69b
  x7 : 0000000000000000 x6 : ffff0000f1926400
  x5 : ffff0000f1940800 x4 : 0000000000000000
  x3 : 0000000000000c40 x2 : 0000000000000008
  x1 : 00000000002734d8 x0 : 0000000000000000
  Call trace:
   __blkdev_issue_discard+0x200/0x294
   __submit_discard_cmd+0x128/0x374
   __issue_discard_cmd_orderly+0x188/0x244
   __issue_discard_cmd+0x2e8/0x33c
   issue_discard_thread+0xe8/0x2f0
   kthread+0x11c/0x120
   ret_from_fork+0x10/0x1c
  ---[ end trace e4c8023d33dfe77a ]---

This patch fixes the issue by setting discard_granularity as SECTOR_SIZE instead of 0 when (card->pref_erase > max_discard) is true. Now there is no more complaint from __blkdev_issue_discard() about the improper value of discard granularity.

This issue is exposed after commit b35fd7422c2f ("block: check queue's limits.discard_granularity in __blkdev_issue_discard()"); a "Fixes:" tag is also added for that commit to make sure people won't miss this patch after applying the change of __blkdev_issue_discard().

Fixes: e056a1b5b67b ("mmc: queue: let host controllers specify maximum discard timeout")
Fixes: b35fd7422c2f ("block: check queue's limits.discard_granularity in __blkdev_issue_discard()")
Reported-and-tested-by: Vicente Bergas <vicencb@gmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://lore.kernel.org/r/20201002013852.51968-1-colyli@suse.de
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
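A sketch of the resulting logic in mmc_queue_setup_discard(), paraphrased from the description above:

	q->limits.discard_granularity = card->pref_erase << 9;
	/* The granularity must not exceed max_discard, and must never be 0. */
	if (card->pref_erase > max_discard)
		q->limits.discard_granularity = SECTOR_SIZE;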
2020-09-24  bdi: replace BDI_CAP_STABLE_WRITES with a queue and a sb flag  (Christoph Hellwig, 1 file, -2/+1)

BDI_CAP_STABLE_WRITES is one of the few bits of information in the backing_dev_info shared between the block drivers and the writeback code. To help untangle the dependency, replace it with a queue flag and a superblock flag derived from it. This also helps with the case of e.g. a file system requiring stable writes due to its own checksumming, but not forcing it on other users of the block device like the swap code.

One downside is that we can't support the stable_pages_required bdi attribute in sysfs anymore. It is replaced with a queue attribute which also is writable for easier testing.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-07-13  mmc: core: Correct misspelling of 'mq' in mmc_init_request()'s docs  (Lee Jones, 1 file, -1/+1)

Correcting this misspelling squashes the following W=1 build warning(s):

  mmc/core/queue.c:212: warning: Function parameter or member 'mq' not described in '__mmc_init_request'
  mmc/core/queue.c:212: warning: Excess function parameter 'q' description in '__mmc_init_request'

Signed-off-by: Lee Jones <lee.jones@linaro.org>
Link: https://lore.kernel.org/r/20200701124702.908713-8-lee.jones@linaro.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2020-05-08  mmc: block: Fix request completion in the CQE timeout path  (Adrian Hunter, 1 file, -2/+1)
First, it should be noted that the CQE timeout (60 seconds) is substantial so a CQE request that times out is really stuck, and the race between timeout and completion is extremely unlikely. Nevertheless this patch fixes an issue with it. Commit ad73d6feadbd7b ("mmc: complete requests from ->timeout") preserved the existing functionality, to complete the request. However that had only been necessary because the block layer timeout handler had been marking the request to prevent it from being completed normally. That restriction was removed at the same time, the result being that a request that has gone will have been completed anyway. That is, the completion was unnecessary. At the time, the unnecessary completion was harmless because the block layer would ignore it, although that changed in kernel v5.0. Note for stable, this patch will not apply cleanly without patch "mmc: core: Fix recursive locking issue in CQE recovery path" Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Fixes: ad73d6feadbd7b ("mmc: complete requests from ->timeout") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200508062227.23144-1-adrian.hunter@intel.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2020-05-08  mmc: core: Fix recursive locking issue in CQE recovery path  (Sarthak Garg, 1 file, -9/+4)

Consider the following stack trace:

  -001|raw_spin_lock_irqsave
  -002|mmc_blk_cqe_complete_rq
  -003|__blk_mq_complete_request (inline)
  -003|blk_mq_complete_request(rq)
  -004|mmc_cqe_timed_out (inline)
  -004|mmc_mq_timed_out

mmc_mq_timed_out acquires the queue_lock for the first time. The mmc_blk_cqe_complete_rq function also tries to acquire the same queue lock, resulting in recursive locking where the task is spinning for the same lock which it has already acquired, leading to a watchdog bark. Fix this issue by holding the lock only for the required critical section.

Cc: <stable@vger.kernel.org>
Fixes: 1e8e55b67030 ("mmc: block: Add CQE support")
Suggested-by: Sahitya Tummala <stummala@codeaurora.org>
Signed-off-by: Sarthak Garg <sartgarg@codeaurora.org>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Link: https://lore.kernel.org/r/1588868135-31783-1-git-send-email-vbadigan@codeaurora.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
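A sketch of the narrowed critical section in mmc_mq_timed_out() that this describes (shape assumed, not the verbatim diff):

	static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req, bool reserved)
	{
		struct request_queue *q = req->q;
		struct mmc_queue *mq = q->queuedata;
		unsigned long flags;
		bool ignore_tout;

		spin_lock_irqsave(&mq->lock, flags);
		/* Only read state under the lock; never complete the request while holding it. */
		ignore_tout = mq->recovery_needed || !mq->use_cqe;
		spin_unlock_irqrestore(&mq->lock, flags);

		return ignore_tout ? BLK_EH_RESET_TIMER : mmc_cqe_timed_out(req);
	}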
2020-03-24  mmc: Add MMC host software queue support  (Baolin Wang, 1 file, -4/+18)
Now the MMC read/write stack will always wait for previous request is completed by mmc_blk_rw_wait(), before sending a new request to hardware, or queue a work to complete request, that will bring context switching overhead and spend some extra time to poll the card for busy completion for I/O writes via sending CMD13, especially for high I/O per second rates, to affect the IO performance. Thus this patch introduces MMC software queue interface based on the hardware command queue engine's interfaces, which is similar with the hardware command queue engine's idea, that can remove the context switching. Moreover we set the default queue depth as 64 for software queue, which allows more requests to be prepared, merged and inserted into IO scheduler to improve performance, but we only allow 2 requests in flight, that is enough to let the irq handler always trigger the next request without a context switch, as well as avoiding a long latency. Moreover the host controller should support HW busy detection for I/O operations when enabling the host software queue. That means, the host controller must not complete a data transfer request, until after the card stops signals busy. From the fio testing data in cover letter, we can see the software queue can improve some performance with 4K block size, increasing about 16% for random read, increasing about 90% for random write, though no obvious improvement for sequential read and write. Moreover we can expand the software queue interface to support MMC packed request or packed command in future. Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Baolin Wang <baolin.wang@linaro.org> Signed-off-by: Baolin Wang <baolin.wang7@gmail.com> Link: https://lore.kernel.org/r/4409c1586a9b3ed20d57ad2faf6c262fc3ccb6e2.1581478568.git.baolin.wang7@gmail.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2019-09-12  mmc: queue: Fix bigger segments usage  (Yoshihiro Shimoda, 1 file, -1/+7)

Commit 38c38cb73223 ("mmc: queue: use bigger segments if DMA MAP layer can merge the segments") always enables the bigger segments when the DMA MAP layer can merge the segments, but some controllers (SDHCI) have strict limitations on the segment size, so the commit breaks those controllers. To fix the issue, this patch adds a new flag MMC_CAP2_MERGE_CAPABLE to the struct mmc_host so that the bigger-segments usage is disabled by default.

Reported-by: Thierry Reding <treding@nvidia.com>
Fixes: 38c38cb73223 ("mmc: queue: use bigger segments if DMA MAP layer can merge the segments")
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
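A sketch of how the new opt-in capability could gate the merged-segment mode (helper name and exact wiring are assumptions based on the surrounding queue.c code):

	static inline bool mmc_merge_capable(struct mmc_host *host)
	{
		return host->caps2 & MMC_CAP2_MERGE_CAPABLE;
	}

	/* Only use the big merged segments when the host explicitly opted in. */
	host->can_dma_map_merge = mmc_merge_capable(host) &&
				  host->max_segs < MMC_DMA_MAP_MERGE_SEGMENTS &&
				  dma_get_merge_boundary(mmc_dev(host));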
2019-09-03  mmc: queue: use bigger segments if DMA MAP layer can merge the segments  (Yoshihiro Shimoda, 1 file, -3/+32)
When the max_segs of a mmc host is smaller than 512, the mmc subsystem tries to use 512 segments if DMA MAP layer can merge the segments, and then the mmc subsystem exposes such information to the block layer by using blk_queue_can_use_dma_map_merging(). Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-07-22  mmc: mmc_spi: Enable stable writes  (Andreas Koop, 1 file, -0/+5)

While using the mmc_spi driver, errors like this occasionally popped up:

  mmcblk0: error -84 transferring data
  end_request: I/O error, dev mmcblk0, sector 581756

I looked on the Internet for occurrences of the same problem and came across a helpful post [1]. It includes source code to reproduce the bug. There is also an analysis of the cause: during transmission, data in the supplied buffer is being modified, so the previously calculated checksum is not correct anymore.

After some digging I found out that device drivers are supposed to report when they need stable writes. To fix this, set the appropriate flag at queue initialization if CRC checksumming is enabled for that SPI host.

[1] https://groups.google.com/forum/#!msg/sim1/gLlzWeXGFr8/KevXinUXfc8J

Signed-off-by: Andreas Koop <andreas.koop@zf.com>
[shihpo: Rebase on top of v5.3-rc1]
Signed-off-by: ShihPo Hung <shihpo.hung@sifive.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
CC: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
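In 2019 terms this meant flagging the backing device as requiring stable pages when SPI CRC is enabled; a hedged sketch (the BDI-era API shown is an assumption about how it was wired up):

	/* The CRC is computed over the buffer before the transfer completes,
	 * so the pages backing a write must not change under us. */
	if (mmc_host_is_spi(host) && host->use_spi_crc)
		mq->queue->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;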
2019-07-11  Merge tag 'mmc-v5.3' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc  (Linus Torvalds, 1 file, -5/+2)

Pull MMC updates from Ulf Hansson:

 "MMC core:
   - Let the dma map ops deal with bouncing and drop dma_max_pfn() from the dma-mapping interface for ARM
   - Convert the generic MMC DT doc to YAML schemas
   - Drop questionable support for powered-on re-init of SDIO cards at runtime resume and for SDIO HW reset
   - Prevent questionable re-init of powered-on removable SDIO cards at system resume
   - Cleanup and clarify some SDIO core code

  MMC host:
   - tmio: Make runtime PM enablement more flexible for variants
   - tmio/renesas_sdhi: Rename DT doc tmio_mmc.txt to renesas,sdhi.txt to clarify
   - sdhci-pci: Add support for Intel EHL
   - sdhci-pci-o2micro: Enable support for 8-bit bus
   - sdhci-msm: Prevent acquiring a mutex while holding a spin_lock
   - sdhci-of-esdhc: Improve clock management and tuning
   - sdhci_am654: Enable support for 4 and 8-bit bus on J721E
   - sdhci-sprd: Use pinctrl for a proper signal voltage switch
   - sdhci-sprd: Add support for HS400 enhanced strobe mode
   - sdhci-sprd: Enable PHY DLL and allow delay config to stabilize the clock
   - sdhci-sprd: Add support for optional gate clock
   - sunxi-mmc: Convert DT doc to YAML schemas
   - meson-gx: Add support for broken DRAM access for DMA

  MEMSTICK core:
   - Fixup error path of memstick_init()"

* tag 'mmc-v5.3' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (52 commits)
  mmc: sdhci_am654: Add dependency on MMC_SDHCI_AM654
  mmc: alcor: remove a redundant greater or equal to zero comparison
  mmc: sdhci-msm: fix mutex while in spinlock
  mmc: sdhci_am654: Make some symbols static
  dma-mapping: remove dma_max_pfn
  mmc: core: let the dma map ops handle bouncing
  dt-binding: mmc: rename tmio_mmc.txt to renesas,sdhi.txt
  mmc: sdhci-sprd: Add pin control support for voltage switch
  dt-bindings: mmc: sprd: Add pinctrl support
  mmc: sdhci-sprd: Add start_signal_voltage_switch ops
  mmc: sdhci-pci: Add support for Intel EHL
  mmc: tmio: Use dma_max_mapping_size() instead of a workaround
  mmc: sdio: Drop unused in-parameter from mmc_sdio_init_card()
  mmc: sdio: Drop unused in-parameter to mmc_sdio_reinit_card()
  mmc: sdio: Don't re-initialize powered-on removable SDIO cards at resume
  mmc: sdio: Drop powered-on re-init at runtime resume and HW reset
  mmc: sdio: Move comment about re-initialization to mmc_sdio_reinit_card()
  mmc: sdio: Drop mmc_claim|release_host() in mmc_sdio_power_restore()
  mmc: sdio: Turn sdio_run_irqs() into static
  mmc: sdhci: Fix indenting on SDHCI_CTRL_8BITBUS
  ...
2019-07-10  mmc: core: let the dma map ops handle bouncing  (Christoph Hellwig, 1 file, -5/+2)

Just like we do for all other block drivers. Especially as the limit imposed at the moment might be way too pessimistic for IOMMUs.

This also means we are not going to set a bounce limit for the queue in case we have a dma mask. On most architectures it was never needed; the major holdout was x86-32 with PAE, but that has been fixed by now.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  (Thomas Gleixner, 1 file, -5/+1)
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation # extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Enrico Weigelt <info@metux.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-05  mmc: also set max_segment_size in the device  (Christoph Hellwig, 1 file, -0/+2)
If we only set the max_segment_size on the queue an IOMMU merge might create bigger segments again, so limit the IOMMU merges as well. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-05-06  mmc: core: Fix tag set memory leak  (Raul E Rangel, 1 file, -0/+1)
The tag set is allocated in mmc_init_queue but never freed. This results in a memory leak. This change makes sure we free the tag set when the queue is also freed. Signed-off-by: Raul E Rangel <rrangel@chromium.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Fixes: 81196976ed94 ("mmc: block: Add blk-mq support") Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
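The fix pairs the allocation with a free on the cleanup path; a sketch (placement assumed):

	/* In mmc_cleanup_queue(), after the request queue itself is gone: */
	blk_mq_free_tag_set(&mq->tag_set);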
2019-03-08  Merge tag 'for-5.1/block-20190302' of git://git.kernel.dk/linux-block  (Linus Torvalds, 1 file, -2/+1)

Pull block layer updates from Jens Axboe:
 "Not a huge amount of changes in this round, the biggest one is that we finally have Ming's multi-page bvec support merged. Apart from that, this pull request contains:

   - Small series that avoids quiescing the queue for sysfs changes that match what we currently have (Aleksei)
   - Series of bcache fixes (via Coly)
   - Series of lightnvm fixes (via Mathias)
   - NVMe pull request from Christoph. Nothing major, just SPDX/license cleanups, RR mp policy (Hannes), and little fixes (Bart, Chaitanya).
   - BFQ series (Paolo)
   - Save blk-mq cpu -> hw queue mapping, removing a pointer indirection for the fast path (Jianchao)
   - fops->iopoll() added for async IO polling, this is a feature that the upcoming io_uring interface will use (Christoph, me)
   - Partition scan loop fixes (Dongli)
   - mtip32xx conversion from managed resource API (Christoph)
   - cdrom registration race fix (Guenter)
   - MD pull from Song, two minor fixes.
   - Various documentation fixes (Marcos)
   - Multi-page bvec feature. This brings a lot of nice improvements with it, like more efficient splitting, larger IOs can be supported without growing the bvec table size, and so on. (Ming)
   - Various little fixes to core and drivers"

* tag 'for-5.1/block-20190302' of git://git.kernel.dk/linux-block: (117 commits)
  block: fix updating bio's front segment size
  block: Replace function name in string with __func__
  nbd: propagate genlmsg_reply return code
  floppy: remove set but not used variable 'q'
  null_blk: fix checking for REQ_FUA
  block: fix NULL pointer dereference in register_disk
  fs: fix guard_bio_eod to check for real EOD errors
  blk-mq: use HCTX_TYPE_DEFAULT but not 0 to index blk_mq_tag_set->map
  block: optimize bvec iteration in bvec_iter_advance
  block: introduce mp_bvec_for_each_page() for iterating over page
  block: optimize blk_bio_segment_split for single-page bvec
  block: optimize __blk_segment_map_sg() for single-page bvec
  block: introduce bvec_nth_page()
  iomap: wire up the iopoll method
  block: add bio_set_polled() helper
  block: wire up block device iopoll method
  fs: add an iopoll method to struct file_operations
  loop: set GENHD_FL_NO_PART_SCAN after blkdev_reread_part()
  loop: do not print warn message if partition scan is successful
  block: bounce: make sure that bvec table is updated
  ...
2019-02-27  mmc: core: align max segment size with logical block size  (Ming Lei, 1 file, -1/+8)
Logical block size is the lowest possible block size that the storage device can address. Max segment size is often related with controller's DMA capability. And it is reasonable to align max segment size with logical block size. SDHCI sets un-aligned max segment size, and causes ADMA error, so fix it by aligning max segment size with logical block size. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Cc: Christoph Hellwig <hch@lst.de> Cc: Naresh Kamboju <naresh.kamboju@linaro.org> Cc: Faiz Abbas <faiz_abbas@ti.com> Cc: linux-block@vger.kernel.org Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
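A sketch of the alignment being described (the rounding expression is an assumption consistent with the text):

	unsigned int block_size = queue_logical_block_size(mq->queue);

	/* Keep every segment a multiple of the logical block size. */
	blk_queue_max_segment_size(mq->queue,
			round_down(host->max_seg_size, block_size));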
2019-02-15  block: kill BLK_MQ_F_SG_MERGE  (Ming Lei, 1 file, -2/+1)
QUEUE_FLAG_NO_SG_MERGE has been killed, so kill BLK_MQ_F_SG_MERGE too. Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-11-17  mmc: stop abusing the request queue_lock pointer  (Christoph Hellwig, 1 file, -16/+15)
Replace the lock in mmc_blk_data that is only used through a pointer in struct mmc_queue and to protect fields in that structure with an actual lock in struct mmc_queue. Suggested-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-11-15  mmc: stop abusing the request queue_lock pointer  (Christoph Hellwig, 1 file, -13/+13)
mmc uses the block layer struct request pointer to indirect their own lock to the mmc_queue structure, given that the original lock isn't reachable outside of block.c. Add a lock pointer to struct mmc_queue instead and stop overriding the block layer lock which protects fields entirely separate from the mmc use. Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-11-15  mmc: simplify queue initialization  (Christoph Hellwig, 1 file, -56/+29)
Merge three functions initializing the queue into a single one, and drop an unused argument for it. Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-08-21  mmc: block: Fix unsupported parallel dispatch of requests  (Adrian Hunter, 1 file, -5/+7)

The mmc block driver does not support parallel dispatch of requests. In normal circumstances, all requests are anyway funneled through a single work item, so parallel dispatch never happens. However, it can happen if there is no elevator. Fix that by detecting if a dispatch is in progress and returning busy (BLK_STS_RESOURCE) in that case.

Fixes: 81196976ed94 ("mmc: block: Add blk-mq support")
Cc: stable@vger.kernel.org # v4.16+
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
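A sketch of the busy detection in the ->queue_rq() path as described (the mq->busy field and the lock shown are assumptions; the original fix used the request queue lock of that era):

	spin_lock_irq(&mq->lock);
	if (mq->busy) {
		spin_unlock_irq(&mq->lock);
		return BLK_STS_RESOURCE;	/* tell blk-mq to retry the dispatch later */
	}
	mq->busy = true;
	spin_unlock_irq(&mq->lock);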
2018-05-29  mmc: complete requests from ->timeout  (Christoph Hellwig, 1 file, -2/+3)
By completing the request entirely in the driver we can remove the BLK_EH_HANDLED return value and thus the split responsibility between the driver and the block layer that has been causing trouble. [While this keeps existing behavior it seems to mismatch the comment, maintainers please chime in!] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>