Age | Commit message | Author | Files | Lines
2020-10-03 | KVM: VMX: update PFEC_MASK/PFEC_MATCH together with PF intercept | Paolo Bonzini | 1 | -10/+12
The PFEC_MASK and PFEC_MATCH fields in the VMCS reverse the meaning of the #PF intercept bit in the exception bitmap when they do not match. This means that, if PFEC_MASK and/or PFEC_MATCH are set, the hypervisor can get a vmexit for #PF exceptions even when the corresponding bit is clear in the exception bitmap. This is unexpected and is promptly detected by a WARN_ON_ONCE. To fix it, reset PFEC_MASK and PFEC_MATCH when the #PF intercept is disabled (as is common with enable_ept && !allow_smaller_maxphyaddr). Reported-by: Qian Cai <[email protected]> Reported-by: Naresh Kamboju <[email protected]> Tested-by: Naresh Kamboju <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2020-10-03 | media: add Zoran cardlist | Mauro Carvalho Chehab | 2 | -0/+53
Now that Zoran was re-added, we need to add its cardlist to the admin guide. Signed-off-by: Mauro Carvalho Chehab <[email protected]>
2020-10-03 | security/keys: remove compat_keyctl_instantiate_key_iov | Christoph Hellwig | 3 | -40/+3
Now that import_iovec handles compat iovecs, the native version of keyctl_instantiate_key_iov can be used for the compat case as well. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-10-03 | mm: remove compat_process_vm_{readv,writev} | Christoph Hellwig | 17 | -109/+30
Now that import_iovec handles compat iovecs, the native syscalls can be used for the compat case as well. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-10-03 | fs: remove compat_sys_vmsplice | Christoph Hellwig | 17 | -62/+28
Now that import_iovec handles compat iovecs, the native vmsplice syscall can be used for the compat case as well. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-10-03 | fs: remove the compat readv/writev syscalls | Christoph Hellwig | 17 | -46/+30
Now that import_iovec handles compat iovecs, the native readv and writev syscalls can be used for the compat case as well. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-10-03 | fs: remove various compat readv/writev helpers | Christoph Hellwig | 2 | -159/+40
Now that import_iovec handles compat iovecs as well, all the duplicated code in the compat readv/writev helpers is not needed. Remove them and switch the compat syscall handlers to use the native helpers. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2020-10-03 | iov_iter: transparently handle compat iovecs in import_iovec | Christoph Hellwig | 11 | -65/+26
Use in_compat_syscall() to import either native or compat iovecs, and remove the now superfluous compat_import_iovec. This removes the need for special compat logic in most callers, and the remaining ones can still be simplified by using __import_iovec with a bool compat parameter. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
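A minimal sketch of the approach described above, assuming __import_iovec() takes the compat flag as its last argument (the exact signature may differ from the kernel's):

    /* Sketch: import_iovec() picks the iovec layout itself, so callers no
     * longer need a separate compat path. */
    ssize_t import_iovec(int type, const struct iovec __user *uvec,
                         unsigned nr_segs, unsigned fast_segs,
                         struct iovec **iovp, struct iov_iter *i)
    {
            /* in_compat_syscall() is true when invoked from a compat syscall */
            return __import_iovec(type, uvec, nr_segs, fast_segs, iovp, i,
                                  in_compat_syscall());
    }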
2020-10-03 | iov_iter: refactor rw_copy_check_uvector and import_iovec | Christoph Hellwig | 5 | -232/+143
Split rw_copy_check_uvector into two new helpers with more sensible calling conventions:
- iovec_from_user copies an iovec from userspace, either into the provided stack buffer if it fits or into a newly allocated buffer, and returns the actually used iovec. It also verifies that iov_len fits a signed type, and handles compat iovecs if the compat flag is set.
- __import_iovec consolidates the native and compat versions of import_iovec. It calls iovec_from_user, then validates that each iovec actually points to user addresses, and ensures the total length doesn't overflow.
This has two major implications:
- the access_process_vm case loses the total length checking, which wasn't required anyway, given that each call receives two iovecs for the local and remote side of the operation and already verifies the total length on the local side.
- instead of a single loop there are now two loops over the iovecs. Given that the iovecs are cache hot, this doesn't make a major difference.
Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
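A hedged sketch of the resulting helper split; the names come from the commit text, but the exact parameter lists are assumptions:

    /* Copies the user iovec array (native or compat layout), using the stack
     * buffer when it fits and allocating otherwise; also rejects iov_len
     * values that don't fit a signed type. Returns the iovec actually used. */
    struct iovec *iovec_from_user(const struct iovec __user *uvec,
                                  unsigned nr_segs, unsigned fast_segs,
                                  struct iovec *fast_iov, bool compat);

    /* Calls iovec_from_user(), then checks that every segment points to user
     * addresses and that the total length does not overflow. */
    ssize_t __import_iovec(int type, const struct iovec __user *uvec,
                           unsigned nr_segs, unsigned fast_segs,
                           struct iovec **iovp, struct iov_iter *i, bool compat);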
2020-10-02 | block: scsi_ioctl: Avoid the use of one-element arrays | Gustavo A. R. Silva | 2 | -4/+7
One-element arrays are being deprecated[1]. Replace the one-element array with a simple object of type compat_caddr_t: 'compat_caddr_t unused'[2], since it seems this field is actually never used. Also, update struct cdrom_generic_command in the UAPI by adding an anonymous union to avoid using the one-element array _reserved_. [1] https://www.kernel.org/doc/html/v5.9-rc1/process/deprecated.html#zero-length-and-one-element-arrays [2] https://github.com/KSPP/linux/issues/86 Signed-off-by: Gustavo A. R. Silva <[email protected]> Link: https://lore.kernel.org/lkml/5f76f5d0.qJ4t%2FHWuRzSW7bTa%[email protected]/ Build-tested-by: kernel test robot <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
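A minimal sketch of the UAPI pattern described above; the surrounding fields of struct cdrom_generic_command are omitted and the struct name here is illustrative, while the member names follow the commit text:

    struct example_generic_command {
            /* ... other fields ... */
            union {
                    void *reserved[1];  /* old one-element array name, kept for userspace */
                    void *unused;       /* equivalent single pointer, preferred in new code */
            };
    };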
2020-10-02 | rsxx: Use fallthrough pseudo-keyword | Gustavo A. R. Silva | 1 | -1/+1
Replace /* Fall through. */ comment with the new pseudo-keyword macro fallthrough[1]. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Gustavo A. R. Silva <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
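A minimal, self-contained illustration of the conversion (not the driver's actual switch statement):

    #include <linux/compiler_attributes.h>  /* provides the fallthrough macro */

    static int classify(int cmd)
    {
            int ret = 0;

            switch (cmd) {
            case 0:
                    ret += 1;
                    fallthrough;    /* replaces the old fall-through comment */
            case 1:
                    ret += 1;
                    break;
            default:
                    ret = -1;
            }
            return ret;
    }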
2020-10-02 | pata_cmd64x: Use fallthrough pseudo-keyword | Gustavo A. R. Silva | 1 | -1/+1
Replace /* FALL THRU */ comment with the new pseudo-keyword macro fallthrough[1]. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Gustavo A. R. Silva <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | Merge tag 'mlx5-fixes-2020-09-30' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux | David S. Miller | 13 | -119/+350
From: Saeed Mahameed <[email protected]>
====================
This series introduces some fixes to the mlx5 driver.
v1->v2:
- Patch #1: Don't return while mutex is held. (Dave)
v2->v3:
- Drop patch #1, will consider a better approach (Jakub)
- Use cpu_relax() instead of cond_resched() (Jakub)
- Use while (i--) to reverse a loop (Jakub)
- Drop old Mellanox email sign-off and change the committer email (Jakub)
Please pull and let me know if there is any problem.
For -stable v4.15:
  ('net/mlx5e: Fix VLAN cleanup flow')
  ('net/mlx5e: Fix VLAN create flow')
For -stable v4.16:
  ('net/mlx5: Fix request_irqs error flow')
For -stable v5.4:
  ('net/mlx5e: Add resiliency in Striding RQ mode for packets larger than MTU')
  ('net/mlx5: Avoid possible free of command entry while timeout comp handler')
For -stable v5.7:
  ('net/mlx5e: Fix return status when setting unsupported FEC mode')
For -stable v5.8:
  ('net/mlx5e: Fix race condition on nhe->n pointer in neigh update')
====================
Signed-off-by: David S. Miller <[email protected]>
2020-10-02 | tcp: fix syn cookied MPTCP request socket leak | Paolo Abeni | 1 | -1/+1
If a syn-cookies request socket doesn't pass the MPTCP-level validation done in syn_recv_sock(), we need to release it immediately, or it will be leaked. Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/89 Fixes: 9466a1ccebbe ("mptcp: enable JOIN requests even if cookies are in use") Reported-and-tested-by: Geliang Tang <[email protected]> Reviewed-by: Matthieu Baerts <[email protected]> Signed-off-by: Paolo Abeni <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-10-02 | Merge branch 'Introduce-sendpage_ok-to-detect-misused-sendpage-in-network-related-drivers' | David S. Miller | 7 | -10/+28
Coly Li says:
====================
Introduce sendpage_ok() to detect misused sendpage in network related drivers
As Sagi Grimberg suggested, the original fix is refined into a more common inline routine:
    static inline bool sendpage_ok(struct page *page)
    {
            return (!PageSlab(page) && page_count(page) >= 1);
    }
If sendpage_ok() returns true, the page being checked can be handled by the concrete zero-copy sendpage method in the network layer.
The v10 series has 7 patches and fixes a WARN_ONCE() usage from the v9 series:
- The 1st patch in this series introduces sendpage_ok() in header file include/linux/net.h.
- The 2nd patch adds WARN_ONCE() for improper zero-copy send in kernel_sendpage().
- The 3rd patch fixes the page checking issue in the nvme-over-tcp driver.
- The 4th patch adds a page_count check by using sendpage_ok() in do_tcp_sendpages(), as Eric Dumazet suggested.
- The 5th and 6th patches just replace existing open-coded checks with the inline sendpage_ok() routine.
====================
Signed-off-by: David S. Miller <[email protected]>
2020-10-02 | libceph: use sendpage_ok() in ceph_tcp_sendpage() | Coly Li | 1 | -1/+1
In libceph, ceph_tcp_sendpage() does the following check before handing the page to the network layer's zero-copy sendpage method: if (page_count(page) >= 1 && !PageSlab(page)) This check is exactly what sendpage_ok() does. This patch replaces the open-coded check with sendpage_ok() as a code cleanup. Signed-off-by: Coly Li <[email protected]> Acked-by: Jeff Layton <[email protected]> Cc: Ilya Dryomov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-10-02 | scsi: libiscsi: use sendpage_ok() in iscsi_tcp_segment_map() | Coly Li | 1 | -1/+1
In the iSCSI driver, iscsi_tcp_segment_map() uses the following code to check whether the page should or should not be handled by sendpage: if (!recv && page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg))) The "page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg))" part is to make sure the page can be sent to the network layer's zero-copy path. This part is exactly what sendpage_ok() does. This patch uses sendpage_ok() in iscsi_tcp_segment_map() to replace the original open-coded checks. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Lee Duncan <[email protected]> Acked-by: Martin K. Petersen <[email protected]> Cc: Vasily Averin <[email protected]> Cc: Cong Wang <[email protected]> Cc: Mike Christie <[email protected]> Cc: Chris Leech <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Hannes Reinecke <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-10-02 | drbd: code cleanup by using sendpage_ok() to check page for kernel_sendpage() | Coly Li | 1 | -1/+1
In _drbd_send_page() a page is checked by the following code before sending it by kernel_sendpage(): (page_count(page) < 1) || PageSlab(page) If the check is true, this page won't be sent by kernel_sendpage() and will be handled by sock_no_sendpage() instead. This kind of check is exactly what the macro sendpage_ok() does, which was introduced into include/linux/net.h to solve a similar send-page issue in the nvme-tcp code. This patch uses the macro sendpage_ok() to replace the open-coded checks of page type and refcount in _drbd_send_page(), as a code cleanup. Signed-off-by: Coly Li <[email protected]> Cc: Philipp Reisner <[email protected]> Cc: Sagi Grimberg <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-10-02 | tcp: use sendpage_ok() to detect misused .sendpage | Coly Li | 1 | -1/+2
Commit a10674bf2406 ("tcp: detecting the misuse of .sendpage for Slab objects") adds the checks for slab pages, but pages that don't have a page_count are still missing from the check. The network layer's sendpage method is not designed to send pages with a page_count of 0 either, therefore both PageSlab() and page_count() should be checked for the page being sent. This is exactly what sendpage_ok() does. This patch uses sendpage_ok() in do_tcp_sendpages() to detect misused .sendpage, to make the code more robust. Fixes: a10674bf2406 ("tcp: detecting the misuse of .sendpage for Slab objects") Suggested-by: Eric Dumazet <[email protected]> Signed-off-by: Coly Li <[email protected]> Cc: Vasily Averin <[email protected]> Cc: David S. Miller <[email protected]> Cc: [email protected] Signed-off-by: David S. Miller <[email protected]>
2020-10-02 | nvme-tcp: check page by sendpage_ok() before calling kernel_sendpage() | Coly Li | 1 | -4/+3
Currently nvme_tcp_try_send_data() doesn't use kernel_sendpage() to send slab pages. But pages allocated by __get_free_pages() without __GFP_COMP, which can also have a refcount of 0, are still sent to the remote end by kernel_sendpage(); this is problematic. The newly introduced helper sendpage_ok() checks both the PageSlab tag and the page_count counter, and returns true if the page being checked is OK to be sent by kernel_sendpage(). This patch fixes the page checking issue of nvme_tcp_try_send_data() with sendpage_ok(). If sendpage_ok() returns true, send this page by kernel_sendpage(), otherwise use sock_no_sendpage() to handle this page. Signed-off-by: Coly Li <[email protected]> Cc: Chaitanya Kulkarni <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Hannes Reinecke <[email protected]> Cc: Jan Kara <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Mikhail Skorzhinskii <[email protected]> Cc: Philipp Reisner <[email protected]> Cc: Sagi Grimberg <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: [email protected] Signed-off-by: David S. Miller <[email protected]>
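A hedged sketch of the selection the patch describes inside nvme_tcp_try_send_data(); the variable names are illustrative, not the driver's exact code:

    /* Only the zero-copy path may take an extra reference on the page, so
     * fall back to sock_no_sendpage() when sendpage_ok() says it is unsafe. */
    if (sendpage_ok(page))
            ret = kernel_sendpage(queue->sock, page, offset, len, flags);
    else
            ret = sock_no_sendpage(queue->sock, page, offset, len, flags);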
2020-10-02 | net: add WARN_ONCE in kernel_sendpage() for improper zero-copy send | Coly Li | 1 | -2/+4
If a page sent into kernel_sendpage() is a slab page or doesn't have a ref_count, the page is improper to send by the zero-copy sendpage() method. Otherwise such a page might be unexpectedly released in the network code path and cause an unpredictable panic due to kernel memory management data structure corruption. This patch adds a WARN_ONCE() on the sending page before it is handed to the concrete zero-copy sendpage() method; if the page is improper for the zero-copy sendpage() method, a warning message can be observed before the consequent unpredictable kernel panic. This patch does not change the existing kernel_sendpage() behavior for the improper page zero-copy send, it just provides a hint warning message for the potential panic due to kernel memory heap corruption that may follow. Signed-off-by: Coly Li <[email protected]> Cc: Cong Wang <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: David S. Miller <[email protected]> Cc: Sridhar Samudrala <[email protected]> Signed-off-by: David S. Miller <[email protected]>
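A hedged sketch of where such a warning sits in kernel_sendpage(); the exact message text is an assumption:

    /* Warn once if the caller hands the zero-copy path a page it must not
     * take a reference on (slab page, or page_count of 0), then proceed
     * exactly as before. */
    if (sock->ops->sendpage) {
            WARN_ONCE(!sendpage_ok(page),
                      "improper page for zero-copy send\n");
            ret = sock->ops->sendpage(sock, page, offset, size, flags);
    } else {
            ret = sock_no_sendpage(sock, page, offset, size, flags);
    }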
2020-10-02 | net: introduce helper sendpage_ok() in include/linux/net.h | Coly Li | 1 | -0/+16
The original problem was in the nvme-over-tcp code, which mistakenly uses kernel_sendpage() to send pages allocated by __get_free_pages() without the __GFP_COMP flag. Such pages don't have a refcount (page_count is 0) on their tail pages, and sending them by kernel_sendpage() may trigger a kernel panic from a corrupted kernel heap, because these pages are incorrectly freed in the network stack as page_count 0 pages. This patch introduces a helper sendpage_ok(); it returns true if the page being checked
- is not a slab page: PageSlab(page) is false, and
- has a page refcount: page_count(page) is not zero.
All drivers that want to send a page to the remote end by kernel_sendpage() may use this helper to check whether the page is OK. If the helper does not return true, the driver should try another non-sendpage method (e.g. sock_no_sendpage()) to handle the page. Signed-off-by: Coly Li <[email protected]> Cc: Chaitanya Kulkarni <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Hannes Reinecke <[email protected]> Cc: Jan Kara <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Mikhail Skorzhinskii <[email protected]> Cc: Philipp Reisner <[email protected]> Cc: Sagi Grimberg <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: [email protected] Signed-off-by: David S. Miller <[email protected]>
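The helper itself, as quoted in the series cover letter above (placed in include/linux/net.h by this patch):

    /* True only when the page is safe for the network layer's zero-copy
     * sendpage path: not a slab page and holding at least one reference. */
    static inline bool sendpage_ok(struct page *page)
    {
            return !PageSlab(page) && page_count(page) >= 1;
    }

Callers that get false back are expected to fall back to a copying path such as sock_no_sendpage().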
2020-10-02 | net: usb: pegasus: Proper error handling when setting pegasus' MAC address | Petko Manolov | 1 | -8/+27
v2: If reading the MAC address from EEPROM fails, don't throw an error; use a randomly generated MAC instead. Either way the adapter will soldier on, and the return type of set_ethernet_addr() can be reverted to void.
v1: Fix a bug in set_ethernet_addr() which does not take into account possible errors (or partial reads) returned by its helpers. This can potentially lead to writing random data into the device's MAC address registers.
Signed-off-by: Petko Manolov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
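A minimal sketch of the v2 fallback described above; get_node_id() is a placeholder for the driver's EEPROM read helper (not its real name), and pegasus->net stands for the driver's net_device:

    u8 node_id[ETH_ALEN];

    if (get_node_id(pegasus, node_id) < 0) {
            /* EEPROM read failed or was partial: soldier on with a random MAC. */
            eth_hw_addr_random(pegasus->net);
    } else {
            memcpy(pegasus->net->dev_addr, node_id, ETH_ALEN);
    }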
2020-10-02 | net: core: document two new elements of struct net_device | Mauro Carvalho Chehab | 1 | -0/+5
As warned by "make htmldocs", there are two new struct elements that aren't documented:
  ../include/linux/netdevice.h:2159: warning: Function parameter or member 'unlink_list' not described in 'net_device'
  ../include/linux/netdevice.h:2159: warning: Function parameter or member 'nested_level' not described in 'net_device'
Fixes: 1fc70edb7d7b ("net: core: add nested_level variable in net_device") Signed-off-by: Mauro Carvalho Chehab <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-10-02 | Merge tag 'pinctrl-v5.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl | Linus Torvalds | 4 | -3/+19
Pull pin control fixes from Linus Walleij: "Some pin control fixes here. All of them are driver fixes, the Intel Cherryview being the most interesting one.
- Fix a mux problem for I2C in the MVEBU driver.
- Fix a really hairy inversion problem in the Intel Cherryview driver.
- Fix the register for the sdc2_clk in the Qualcomm SM8250 driver.
- Check the virtual GPIO boot failure in the Mediatek driver"
* tag 'pinctrl-v5.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
  pinctrl: mediatek: check mtk_is_virt_gpio input parameter
  pinctrl: qcom: sm8250: correct sdc2_clk
  pinctrl: cherryview: Preserve CHV_PADCTRL1_INVRXTX_TXDATA flag on GPIOs
  pinctrl: mvebu: Fix i2c sda definition for 98DX3236
2020-10-02 | Merge tag 'pci-v5.9-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci | Linus Torvalds | 2 | -7/+5
Pull PCI fixes from Bjorn Helgaas:
- Fix rockchip regression in rockchip_pcie_valid_device() (Lorenzo Pieralisi)
- Add Pali Rohár as aardvark PCI maintainer (Pali Rohár)
* tag 'pci-v5.9-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
  MAINTAINERS: Add Pali Rohár as aardvark PCI maintainer
  PCI: rockchip: Fix bus checks in rockchip_pcie_valid_device()
2020-10-02 | Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi | Linus Torvalds | 2 | -8/+17
Pull SCSI fixes from James Bottomley: "Two patches in driver frameworks. The iscsi one corrects a bug induced by a BPF change to network locking and the other is a regression we introduced"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: iscsi: iscsi_tcp: Avoid holding spinlock while calling getpeername()
  scsi: target: Fix lun lookup for TARGET_SCF_LOOKUP_LUN_FROM_TAG case
2020-10-02 | Merge tag 'io_uring-5.9-2020-10-02' of git://git.kernel.dk/linux-block | Linus Torvalds | 2 | -6/+23
Pull io_uring fixes from Jens Axboe:
- fix for async buffered reads if read-ahead is fully disabled (Hao)
- double poll match fix
- ->show_fdinfo() potential ABBA deadlock complaint fix
* tag 'io_uring-5.9-2020-10-02' of git://git.kernel.dk/linux-block:
  io_uring: fix async buffered reads when readahead is disabled
  io_uring: fix potential ABBA deadlock in ->show_fdinfo()
  io_uring: always delete double poll wait entry on match
2020-10-02 | Merge tag 'block-5.9-2020-10-02' of git://git.kernel.dk/linux-block | Linus Torvalds | 1 | -9/+9
Pull block fix from Jens Axboe: "Single fix for a ->commit_rqs failure case"
* tag 'block-5.9-2020-10-02' of git://git.kernel.dk/linux-block:
  blk-mq: call commit_rqs while list empty but error happen
2020-10-02 | spi: spi-s3c64xx: Turn on interrupts upon resume | Łukasz Stelmach | 1 | -0/+4
s3c64xx_spi_hwinit() disables interrupts. In s3c64xx_spi_probe() after calling s3c64xx_spi_hwinit() they are enabled with the following call. Reviewed-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Łukasz Stelmach <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
2020-10-02 | spi: spi-s3c64xx: Increase transfer timeout | Łukasz Stelmach | 1 | -1/+2
Increase the timeout by 30 ms for some wiggle room and set the minimum value to 100 ms. This ensures a non-zero value for short transfers which may take less than 1 ms. The timeout value does not affect performance because it is used with a completion. A similar formula is used in other drivers, e.g. sun4i and sun6i. Reviewed-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Łukasz Stelmach <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
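A hedged sketch of the kind of calculation described; the transfer-time formula and variable names are assumptions, not the driver's exact code:

    /* Time to clock xfer->len bytes at the current speed, in milliseconds,
     * plus 30 ms of wiggle room, never below 100 ms. */
    ms = xfer->len * 8 * 1000 / sdd->cur_speed;
    ms += 30;
    ms = max(ms, 100U);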
2020-10-02 | spi: spi-s3c64xx: Ensure cur_speed holds actual clock value | Łukasz Stelmach | 1 | -0/+1
Make sure the cur_speed value used in s3c64xx_enable_datapath() to configure DMA channel and in s3c64xx_wait_for_*() to calculate the transfer timeout is set to the actual value of (half) the clock speed. Don't change non-CMU case, because no frequency calculation errors have been reported. Reviewed-by: Krzysztof Kozlowski <[email protected]> Suggested-by: Tomasz Figa <[email protected]> Signed-off-by: Łukasz Stelmach <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
2020-10-02 | spi: spi-s3c64xx: Fix doc comment for struct s3c64xx_spi_driver_data | Łukasz Stelmach | 1 | -4/+1
Remove descriptions for non-existent fields and fix indentation. Reviewed-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Łukasz Stelmach <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
2020-10-02 | spi: spi-s3c64xx: Rename S3C64XX_SPI_SLAVE_* to S3C64XX_SPI_CS_* | Łukasz Stelmach | 1 | -13/+13
Rename S3C64XX_SPI_SLAVE_* to S3C64XX_SPI_CS_* to match documentation. Signed-off-by: Łukasz Stelmach <[email protected]> Reviewed-by: Krzysztof Kozlowski <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
2020-10-02 | spi: spi-s3c64xx: Report more information when errors occur | Łukasz Stelmach | 1 | -4/+15
Report amount of pending data when a transfer stops due to errors. Report if DMA was used to transfer data and print the status code. Reviewed-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Łukasz Stelmach <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
2020-10-02 | spi: spi-s3c64xx: Check return values | Łukasz Stelmach | 1 | -9/+41
Check return values in prepare_dma() and s3c64xx_spi_config() and propagate errors upwards. Fixes: 788437273fa8 ("spi: s3c64xx: move to generic dmaengine API") Reviewed-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Łukasz Stelmach <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
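A hedged sketch of the error propagation described above, using the generic dmaengine API the driver was moved to; the surrounding structure is simplified and not the driver's exact code:

    ret = dmaengine_slave_config(dma->ch, &config);
    if (ret)
            return ret;

    desc = dmaengine_prep_slave_sg(dma->ch, sgt->sgl, sgt->nents,
                                   dma->direction, DMA_PREP_INTERRUPT);
    if (!desc)
            return -ENOMEM;

    cookie = dmaengine_submit(desc);
    ret = dma_submit_error(cookie);
    if (ret)
            return ret;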
2020-10-02 | spi: spi-s3s64xx: Add S3C64XX_SPI_QUIRK_CS_AUTO for Exynos3250 | Łukasz Stelmach | 1 | -0/+1
Fix issues with DMA transfers bigger than 512 bytes on Exynos3250. Without the patches such transfers fail. The vendor kernel for ARTIK5 handles CS in a similar way. Signed-off-by: Łukasz Stelmach <[email protected]> Reviewed-by: Krzysztof Kozlowski <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
2020-10-02 | spi: spi-s3c64xx: swap s3c64xx_spi_set_cs() and s3c64xx_enable_datapath() | Łukasz Stelmach | 1 | -2/+2
Fix issues with DMA transfers bigger than 512 bytes on Exynos3250. Without the patches such transfers fail to complete. This solution to the problem is found in the vendor kernel for ARTIK5 boards based on Exynos3250. Reviewed-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Łukasz Stelmach <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mark Brown <[email protected]>
2020-10-02 | ahci: qoriq: enable acpi support in qoriq ahci driver | Yuantian Tang | 1 | -3/+17
This patch enables ACPI support in the qoriq ahci driver. Signed-off-by: Udit Kumar <[email protected]> Signed-off-by: Yuantian Tang <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | sata, highbank: simplify the return expression of ahci_highbank_suspend | Liu Shixin | 1 | -6/+1
Simplify the return expression. Signed-off-by: Liu Shixin <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | ahci: Add Intel Rocket Lake PCH-H RAID PCI IDs | Mika Westerberg | 1 | -0/+4
Add Intel Rocket Lake PCH-H RAID PCI IDs to the list of supported controllers. Signed-off-by: Mika Westerberg <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | Documentation/x86: Fix incorrect references to zero-page.txt | Heinrich Schuchardt | 1 | -3/+3
The file zero-page.txt does not exist. Add links to zero-page.rst instead. [ bp: Massage a bit. ] Signed-off-by: Heinrich Schuchardt <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2020-10-02 | bcache: remove embedded struct cache_sb from struct cache_set | Coly Li | 11 | -59/+46
Since the bcache code was merged into the mainline kernel, each cache set has only one single cache in it. The multiple-caches framework is there but the code is far from complete. Considering that multiple copies of cached data can also be stored on e.g. md raid1 devices, it is unnecessary to support multiple caches in one cache set indeed. The previous preparation patches fix the dependencies of explicitly making a cache set only have a single cache. Now we don't have to maintain an embedded partial super block in struct cache_set; the in-memory super block can be directly referenced from struct cache. This patch removes the embedded struct cache_sb from struct cache_set, and fixes all locations that referenced this removed super block by referencing the in-memory super block of struct cache instead. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: check and set sync status on cache's in-memory super block | Coly Li | 4 | -10/+7
Currently the cache's sync status is checked and set on the cache set's in-memory partial super block. After removing the embedded struct cache_sb from the cache set and referencing the cache's in-memory super block from struct cache_set, the sync status can be set and checked directly on the cache's super block. This patch checks and sets the cache sync status directly on the cache's in-memory super block. This is a preparation for later removing the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: remove can_attach_cache() | Coly Li | 1 | -10/+0
After removing the embedded struct cache_sb from struct cache_set, the cache set will directly reference the in-memory super block of struct cache. It is unnecessary to compare block_size, bucket_size and nr_in_set from the identical in-memory super block in can_attach_cache(). This is a preparation patch for later removing cache_set->sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: don't check seq numbers in register_cache_set() | Coly Li | 1 | -15/+0
In order to update the partial super block of the cache set, the seq numbers of the cache and the cache set are checked in register_cache_set(). If the cache's seq number is larger than the cache set's seq number, the cache set must update its partial super block from the cache's super block. This is unnecessary once the embedded struct cache_sb is removed from struct cache_set. This patch removes the seq number checking from register_cache_set(), because later there will be no such partial super block in struct cache_set; the cache set will directly reference the in-memory super block from struct cache. This is a preparation patch for removing the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: only use bucket_bytes() on struct cache | Coly Li | 2 | -2/+2
Because struct cache_set and struct cache both have a struct cache_sb, the macro bucket_bytes() is currently used on both of them. When removing the embedded struct cache_sb from struct cache_set, this macro won't be used on struct cache_set anymore. This patch unifies all bucket_bytes() usage onto struct cache only; this is one of the preparations to remove the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: remove useless bucket_pages() | Coly Li | 1 | -1/+0
It seems alloc_bucket_pages() is the only user of bucket_pages(). Considering alloc_bucket_pages() is removed from bcache code, it is safe to remove the useless macro bucket_pages() now. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: remove useless alloc_bucket_pages() | Coly Li | 1 | -3/+0
Now no one uses alloc_bucket_pages() anymore, remove it from bcache.h. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2020-10-02 | bcache: only use block_bytes() on struct cache | Coly Li | 7 | -24/+24
Because struct cache_set and struct cache both have a struct cache_sb, the macro block_bytes() can be used on both of them. When removing the embedded struct cache_sb from struct cache_set, this macro won't be used on struct cache_set anymore. This patch unifies all block_bytes() usage onto struct cache only; this is one of the preparations to remove the embedded struct cache_sb from struct cache_set. Signed-off-by: Coly Li <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
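For reference, a hedged sketch of what such a macro looks like once it only takes a struct cache; the field name follows bcache's struct cache_sb, while the exact definition and the 512-byte sector shift are assumptions:

    /* Bytes per block, derived from the in-memory super block of the cache;
     * block_size is assumed to be stored in 512-byte sectors. */
    #define block_bytes(ca)         ((ca)->sb.block_size << 9)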