path: root/drivers
Age    Commit message    Author    Files    Lines
2022-09-08net: phy: lan87xx: change interrupt src of link_up to comm_readyArun Ramadoss1-4/+54
Currently the phy link up/down interrupt is enabled using the LAN87xx_INTERRUPT_MASK register. In the lan87xx_read_status function, the phy link is determined using the comm_ready bit of the T1_MODE_STAT_REG register. The comm_ready bit is set from loc_rcvr_status & rem_rcvr_status. Whenever the phy link comes up, the LAN87xx_INTERRUPT_SOURCE link_up bit is set first, but the comm_ready bit takes some time to be set based on the local and remote receiver status. With the current implementation, the interrupt is triggered by link_up while the comm_ready bit is still cleared in the read_status function, so the link is always reported down. This was initially tested with the shared interrupt mechanism between the switch and the internal phy, which worked, but it stopped working after the interrupt controller was implemented. It can be fixed either by updating the read_status function to read from the LAN87XX_INTERRUPT_SOURCE register or by enabling the interrupt mask for the comm_ready bit, and the validation team recommends the use of comm_ready for link detection. This patch fixes the issue by enabling the comm_ready bit for link_up in the LAN87XX_INTERRUPT_MASK_2 register (MISC bank) and link_down in the LAN87xx_INTERRUPT_MASK register. Fixes: 8a1b415d70b7 ("net: phy: added ethtool master-slave configuration support") Signed-off-by: Arun Ramadoss <[email protected]> Reviewed-by: Andrew Lunn <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Paolo Abeni <[email protected]>
2022-09-08drm/ttm: cleanup the resource of ghost objects after locking themChristian König1-5/+5
Otherwise lockdep will complain about cleaning up the bulk_move. Signed-off-by: Christian König <[email protected]> Reviewed-by: Matthew Auld <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected] Fixes: d91c411c744b ("drm/ttm: update bulk move object of ghost BO")
2022-09-07drm/amdgpu: prevent toc firmware memory leakGuchun Chen1-2/+5
Freeing the toc firmware is missed in psp fini. Signed-off-by: Guchun Chen <[email protected]> Reviewed-by: Alex Deucher <[email protected]> Signed-off-by: Alex Deucher <[email protected]>
2022-09-07drm/amdgpu: correct doorbell range/size value for CSDMA_DOORBELL_RANGEYifan Zhang1-6/+0
The current function mixes CSDMA_DOORBELL_RANGE and SDMA0_DOORBELL_RANGE range/size manipulation, while these two registers have different size field masks. Remove the range/size manipulation for SDMA0_DOORBELL_RANGE. Signed-off-by: Yifan Zhang <[email protected]> Reviewed-by: Xiaojian Du <[email protected]> Signed-off-by: Alex Deucher <[email protected]>
2022-09-07drm/amdkfd: print address in hex format rather than decimalYifan Zhang1-1/+1
Addresses should be printed in hex format. Signed-off-by: Yifan Zhang <[email protected]> Reviewed-by: Felix Kuehling <[email protected]> Signed-off-by: Alex Deucher <[email protected]>
2022-09-07drm/amd/display: fix memory leak when using debugfs_lookup()Greg Kroah-Hartman1-0/+1
When calling debugfs_lookup() the result must have dput() called on it, otherwise the memory will leak over time. Fix this up by properly calling dput(). Cc: Harry Wentland <[email protected]> Cc: Leo Li <[email protected]> Cc: Rodrigo Siqueira <[email protected]> Cc: Alex Deucher <[email protected]> Cc: "Christian König" <[email protected]> Cc: "Pan, Xinhui" <[email protected]> Cc: David Airlie <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Wayne Lin <[email protected]> Cc: hersen wu <[email protected]> Cc: Wenjing Liu <[email protected]> Cc: Patrik Jakobsson <[email protected]> Cc: Thelford Williams <[email protected]> Cc: Fangzhi Zuo <[email protected]> Cc: Yongzhi Liu <[email protected]> Cc: Mikita Lipski <[email protected]> Cc: Jiapeng Chong <[email protected]> Cc: Bhanuprakash Modem <[email protected]> Cc: Sean Paul <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Reviewed-by: Rodrigo Siqueira <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Rodrigo Siqueira <[email protected]> Signed-off-by: Alex Deucher <[email protected]>
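A minimal sketch of the debugfs_lookup()/dput() pattern the fix relies on, written as a tiny standalone module; the "demo" directory name is illustrative and this is not the amdgpu code itself:

#include <linux/dcache.h>
#include <linux/debugfs.h>
#include <linux/module.h>

static struct dentry *demo_dir;

static int __init demo_init(void)
{
        struct dentry *found;

        demo_dir = debugfs_create_dir("demo", NULL);

        /* debugfs_lookup() takes a reference on the dentry it returns */
        found = debugfs_lookup("demo", NULL);
        if (found)
                dput(found);    /* without this the reference leaks */

        return 0;
}

static void __exit demo_exit(void)
{
        debugfs_remove(demo_dir);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");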
2022-09-07drm/amd/pm: add missing SetMGpuFanBoostLimitRpm mapping for SMU 13.0.7Evan Quan1-0/+1
Missing SetMGpuFanBoostLimitRpm mapping leads to loading failure for SMU 13.0.7. Signed-off-by: Evan Quan <[email protected]> Reviewed-by: Hawking Zhang <[email protected]> Signed-off-by: Alex Deucher <[email protected]>
2022-09-07drm/amd/amdgpu: add rlc_firmware_header_v2_4 to amdgpu_firmware_headerChengming Gui1-0/+1
Add missing structure to avoid incorrect size and version check. Signed-off-by: Chengming Gui <[email protected]> Reviewed-by: Feifei Xu <[email protected]> Signed-off-by: Alex Deucher <[email protected]>
2022-09-07efi: capsule-loader: Fix use-after-free in efi_capsule_writeHyunwoo Kim1-24/+7
A race condition may occur if the user calls close() on another thread during a write() operation on the device node of the efi capsule. The race is between the efi_capsule_write() and efi_capsule_flush() functions of efi_capsule_fops and ultimately results in a use-after-free. So, the page freeing is moved from efi_capsule_flush() to efi_capsule_release(). Cc: <[email protected]> # v4.9+ Signed-off-by: Hyunwoo Kim <[email protected]> Link: https://lore.kernel.org/all/20220907102920.GA88602@ubuntu/ Signed-off-by: Ard Biesheuvel <[email protected]>
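A hedged sketch of the fix pattern described above (a simplified fragment, not the capsule-loader code; the demo_* names and the state struct are assumptions): teardown belongs in .release, which the VFS calls exactly once when the last reference to the file is dropped, rather than in .flush, which runs on every close() and can race with a write() still executing on another thread.

struct demo_state {                     /* assumed per-open state */
        void *buffer;
};

static int demo_release(struct inode *inode, struct file *file)
{
        struct demo_state *st = file->private_data;

        /* no write() can still be running against st at this point */
        kfree(st->buffer);
        kfree(st);
        return 0;
}

static const struct file_operations demo_fops = {
        .owner   = THIS_MODULE,
        .open    = demo_open,           /* assumed: allocates st */
        .write   = demo_write,          /* assumed: fills st->buffer */
        .release = demo_release,        /* freeing moved here, no .flush */
};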
2022-09-07arch_topology: Make cluster topology span at least SMT CPUsYicong Yang1-1/+1
Currently cpu_clustergroup_mask() will return the CPU mask if the cluster spans more CPUs than, or the same CPUs as, cpu_coregroup_mask(). This results in broken topology on non-cluster SMT machines when building with CONFIG_SCHED_CLUSTER=y. Test with:
qemu-system-aarch64 -enable-kvm -machine virt \
 -net none \
 -cpu host \
 -bios ./QEMU_EFI.fd \
 -m 2G \
 -smp 48,sockets=2,cores=12,threads=2 \
 -kernel $Image \
 -initrd $Rootfs \
 -nographic -append "rdinit=init console=ttyAMA0 sched_verbose loglevel=8"
We get the following error:
[ 3.084568] BUG: arch topology borken
[ 3.084570] the SMT domain not a subset of the CLS domain
Since cluster is a level higher than SMT, fix this by making the cluster span at least the SMT CPUs. Fixes: bfcc4397435d ("arch_topology: Limit span of cpu_clustergroup_mask()") Cc: Sudeep Holla <[email protected]> Cc: Vincent Guittot <[email protected]> Cc: Ionela Voinescu <[email protected]> Cc: Greg KH <[email protected]> Reviewed-by: Sudeep Holla <[email protected]> Signed-off-by: Yicong Yang <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
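The invariant being restored is that the CLS level must contain at least the SMT siblings, since the scheduler expects each topology level to be a superset of the level below it. A small self-contained C sketch of that mask check, with plain bitmasks standing in for cpumasks (this illustrates the invariant, not the exact arch_topology change):

#include <stdint.h>
#include <stdio.h>

/* return a cluster mask that spans at least the SMT siblings */
static uint64_t clustergroup_mask(uint64_t cluster, uint64_t smt)
{
        if ((cluster & smt) != smt)     /* cluster narrower than SMT */
                return cluster | smt;   /* widen it to cover the siblings */
        return cluster;
}

int main(void)
{
        uint64_t smt = 0x3;             /* CPUs 0-1 are SMT siblings */
        uint64_t cluster = 0x1;         /* broken: cluster spans CPU 0 only */

        printf("cluster mask used: %#llx\n",
               (unsigned long long)clustergroup_mask(cluster, smt));
        return 0;
}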
2022-09-07serial: tegra-tcu: Use uart_xmit_advance(), fixes icount.tx accountingIlpo Järvinen1-1/+1
Tx'ing does not correctly account Tx'ed characters into icount.tx. Using uart_xmit_advance() fixes the problem. Fixes: 2d908b38d409 ("serial: Add Tegra Combined UART driver") Cc: <[email protected]> # serial: Create uart_xmit_advance() Reviewed-by: Andy Shevchenko <[email protected]> Signed-off-by: Ilpo Järvinen <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2022-09-07serial: tegra: Use uart_xmit_advance(), fixes icount.tx accountingIlpo Järvinen1-3/+2
DMA complete & stop paths did not correctly account Tx'ed characters into icount.tx. Using uart_xmit_advance() fixes the problem. Fixes: e9ea096dd225 ("serial: tegra: add serial driver") Cc: <[email protected]> # serial: Create uart_xmit_advance() Reviewed-by: Andy Shevchenko <[email protected]> Signed-off-by: Ilpo Järvinen <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
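Both Tegra fixes above rely on the uart_xmit_advance() helper referenced in the Cc tags. Roughly, and as a sketch reconstructed from context rather than a verbatim copy of serial_core.h, it advances the circular TX buffer tail and updates icount.tx in one place so that callers cannot forget the accounting:

static inline void uart_xmit_advance(struct uart_port *up, unsigned int chars)
{
        struct circ_buf *xmit = &up->state->xmit;

        xmit->tail = (xmit->tail + chars) & (UART_XMIT_SIZE - 1);
        up->icount.tx += chars;         /* the accounting the drivers missed */
}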
2022-09-07usb: dwc3: core: leave default DMA if the controller does not support 64-bit DMAWilliam Wu1-6/+7
On some DWC3 controllers (e.g. Rockchip SoCs), the DWC3 core doesn't support a 64-bit DMA address width. In this case, this driver should use the default 32-bit mask. Otherwise, the DWC3 controller will break when running on a system with more than 4GB of physical memory. This patch reads the DWC_USB3_AWIDTH bits of GHWPARAMS0, which encode the DMA address width, and only configures the 64-bit DMA mask if DWC_USB3_AWIDTH is 64. Fixes: 45d39448b4d0 ("usb: dwc3: support 64 bit DMA in platform driver") Cc: stable <[email protected]> Reviewed-by: Sven Peter <[email protected]> Signed-off-by: William Wu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
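A hedged sketch of the described check (the field extraction macro and struct members are assumptions based on the commit text, not the exact dwc3 definitions): keep the default 32-bit mask and only request 64-bit DMA when GHWPARAMS0 reports a 64-bit address width.

/* assumed: AWIDTH field of a cached GHWPARAMS0 value */
#define DWC3_GHWPARAMS0_AWIDTH(n)       (((n) >> 24) & 0xff)

static int demo_dwc3_set_dma_mask(struct dwc3 *dwc)
{
        if (DWC3_GHWPARAMS0_AWIDTH(dwc->hwparams.hwparams0) == 64)
                return dma_set_mask_and_coherent(dwc->sysdev,
                                                 DMA_BIT_MASK(64));

        /* otherwise leave the default 32-bit DMA mask untouched */
        return 0;
}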
2022-09-07net: ethernet: mtk_eth_soc: check max allowed hash in mtk_ppe_check_skbLorenzo Bianconi1-0/+3
Even if the max hash configured in hw in mtk_ppe_hash_entry is MTK_PPE_ENTRIES - 1, check for theoretical OOB accesses in the mtk_ppe_check_skb routine. Fixes: c4f033d9e03e9 ("net: ethernet: mtk_eth_soc: rework hardware flow table management") Signed-off-by: Lorenzo Bianconi <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-09-07usb: dwc3: gadget: Submit endxfer command if delayed during disconnectWesley Cheng1-1/+12
During a cable disconnect sequence, if ep0state is not in the SETUP phase, then nothing will trigger any pending end transfer commands. Force stopping of any pending SETUP transaction, and move back to the SETUP phase. Reviewed-by: Thinh Nguyen <[email protected]> Signed-off-by: Wesley Cheng <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2022-09-07usb: dwc3: gadget: Skip waiting for CMDACT cleared during endxferWesley Cheng1-1/+3
For endxfer commands that do not require an endpoint complete interrupt, avoid having to wait for the command active bit to clear. This allows EP0 events to continue to be handled, which in turn allows the controller to complete the endxfer command. Otherwise, it is known that the endxfer command will fail if there is a pending SETUP token that needs to be read. Suggested-by: Thinh Nguyen <[email protected]> Reviewed-by: Thinh Nguyen <[email protected]> Signed-off-by: Wesley Cheng <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2022-09-07usb: dwc3: Increase DWC3 controller halt timeoutWesley Cheng1-1/+2
Since EP0 transactions need to be completed before the controller halt sequence is finished, this may take some time depending on the host and the enabled functions. Increase the controller halt timeout, so that we give the controller sufficient time to handle EP0 transfers. Signed-off-by: Wesley Cheng <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2022-09-07usb: dwc3: Remove DWC3 locking during gadget suspend/resumeWesley Cheng2-4/+5
Remove the need for dwc3_gadget_suspend() and dwc3_gadget_resume() to be called under a spinlock, as dwc3_gadget_run_stop() could potentially take some time to complete. Signed-off-by: Wesley Cheng <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2022-09-07usb: dwc3: Avoid unmapping USB requests if endxfer is not completeWesley Cheng3-2/+10
If DWC3_EP_DELAYED_STOP is set during stop active transfers, then do not continue attempting to unmap request buffers during dwc3_remove_requests(). This can lead to SMMU faults, as the controller has not stopped the processing of the TRB. Defer this sequence to the EP0 out start, which ensures that there are no pending SETUP transactions before issuing the endxfer. Reviewed-by: Thinh Nguyen <[email protected]> Signed-off-by: Wesley Cheng <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2022-09-07net: ethernet: mtk_eth_soc: fix typo in __mtk_foe_entry_clearLorenzo Bianconi1-1/+1
Set ib1 state to MTK_FOE_STATE_UNBIND in __mtk_foe_entry_clear routine. Fixes: 33fc42de33278 ("net: ethernet: mtk_eth_soc: support creating mac address based offload entries") Signed-off-by: Lorenzo Bianconi <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-09-07USB/ARM: Switch S3C2410 UDC to GPIO descriptorsLinus Walleij2-46/+35
This converts the S3C2410 UDC USB device controller to use GPIO descriptor tables and modern GPIO. Cc: Krzysztof Kozlowski <[email protected]> Cc: Alim Akhtar <[email protected]> Cc: [email protected] Cc: [email protected] Acked-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Linus Walleij <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2022-09-07usb: misc: uss720: fix uninitialized variable rlenDongliang Mu1-4/+4
Smatch reports an uninitialized symbol 'rlen' error in:
drivers/usb/misc/uss720.c:514 parport_uss720_epp_write_data()
drivers/usb/misc/uss720.c:575 parport_uss720_ecp_write_data()
drivers/usb/misc/uss720.c:593 parport_uss720_ecp_read_data()
drivers/usb/misc/uss720.c:626 parport_uss720_write_compat()
The root cause is that a failure of usb_bulk_msg leaves the variable rlen uninitialized when it is later used in the printk call. Fix this by initializing rlen to zero. Signed-off-by: Dongliang Mu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
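The underlying pattern, as a small self-contained C example (not the uss720 code): when the transfer helper can fail before writing its output length, the caller must initialize that length before logging it.

#include <stdio.h>

/* stand-in for usb_bulk_msg(): may fail without touching *actual_len */
static int fake_bulk_msg(int fail, int *actual_len)
{
        if (fail)
                return -5;              /* error path, *actual_len untouched */
        *actual_len = 6;
        return 0;
}

int main(void)
{
        int rlen = 0;                   /* the fix: initialize before the call */
        int ret = fake_bulk_msg(1, &rlen);

        /* without the initialization this would print stack garbage */
        printf("ret=%d rlen=%d\n", ret, rlen);
        return 0;
}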
2022-09-07usb: gadget: f_fs: stricter integer overflow checksDan Carpenter1-2/+2
This is from static analysis. The vla_item() takes a size and adds it to the total. It has a built-in integer overflow check, so if it encounters an integer overflow anywhere it records the total as SIZE_MAX. However, there is an issue here because the "lang_count*(needed_count+1)" multiplication can overflow. Technically the "lang_count + 1" addition could overflow too, but that would be detected and is harmless. Fix both using the new size_add() and size_mul() functions. Fixes: e6f3862fa1ec ("usb: gadget: FunctionFS: Remove VLAIS usage from gadget code") Signed-off-by: Dan Carpenter <[email protected]> Link: https://lore.kernel.org/r/YxDI3lMYomE7WCjn@kili Signed-off-by: Greg Kroah-Hartman <[email protected]>
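The size_add()/size_mul() helpers from the kernel's overflow header saturate to SIZE_MAX instead of wrapping, so a later bounds check still catches the overflow. A small user-space approximation of that behaviour:

#include <stdint.h>
#include <stdio.h>

static size_t size_add(size_t a, size_t b)
{
        size_t r;

        return __builtin_add_overflow(a, b, &r) ? SIZE_MAX : r;
}

static size_t size_mul(size_t a, size_t b)
{
        size_t r;

        return __builtin_mul_overflow(a, b, &r) ? SIZE_MAX : r;
}

int main(void)
{
        size_t lang_count = SIZE_MAX / 2, needed_count = 4;

        /* lang_count * (needed_count + 1) saturates instead of wrapping */
        printf("%zu\n", size_mul(lang_count, size_add(needed_count, 1)));
        return 0;
}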
2022-09-07iommu/virtio: Fix interaction with VFIOJean-Philippe Brucker1-0/+11
Commit e8ae0e140c05 ("vfio: Require that devices support DMA cache coherence") requires IOMMU drivers to advertise IOMMU_CAP_CACHE_COHERENCY, in order to be used by VFIO. Since VFIO does not provide to userspace the ability to maintain coherency through cache invalidations, it requires hardware coherency. Advertise the capability in order to restore VFIO support. The meaning of IOMMU_CAP_CACHE_COHERENCY also changed from "IOMMU can enforce cache coherent DMA transactions" to "IOMMU_CACHE is supported". While virtio-iommu cannot enforce coherency (of PCIe no-snoop transactions), it does support IOMMU_CACHE. We can distinguish different cases of non-coherent DMA:
(1) When accesses from a hardware endpoint are not coherent. The host would describe such a device using firmware methods ('dma-coherent' in device-tree, '_CCA' in ACPI), since they are also needed without a vIOMMU. In this case mappings are created without IOMMU_CACHE. virtio-iommu doesn't need any additional support. It sends the same requests as for coherent devices.
(2) When the physical IOMMU supports non-cacheable mappings. Supporting those would require a new feature in virtio-iommu, new PROBE request property and MAP flags. Device drivers would use a new API to discover this since it depends on the architecture and the physical IOMMU.
(3) When the hardware supports PCIe no-snoop. It is possible for assigned PCIe devices to issue no-snoop transactions, and the virtio-iommu specification is lacking any mention of this. Arm platforms don't necessarily support no-snoop, and those that do cannot enforce coherency of no-snoop transactions. Device drivers must be careful about assuming that no-snoop transactions won't end up cached; see commit e02f5c1bb228 ("drm: disable uncached DMA optimization for ARM and arm64"). On x86 platforms, the host may or may not enforce coherency of no-snoop transactions with the physical IOMMU. But according to the above commit, on x86 a driver which assumes that no-snoop DMA is compatible with uncached CPU mappings will also work if the host enforces coherency.
Although these issues are not specific to virtio-iommu, it could be used to facilitate discovery and configuration of no-snoop. This would require a new feature bit, PROBE property and ATTACH/MAP flags. Cc: [email protected] Fixes: e8ae0e140c05 ("vfio: Require that devices support DMA cache coherence") Signed-off-by: Jean-Philippe Brucker <[email protected]> Reviewed-by: Robin Murphy <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joerg Roedel <[email protected]>
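The advertisement itself typically amounts to the driver's capability callback answering yes for IOMMU_CAP_CACHE_COHERENCY; a hedged sketch (the exact callback prototype depends on the kernel version):

static bool viommu_capable(enum iommu_cap cap)
{
        switch (cap) {
        case IOMMU_CAP_CACHE_COHERENCY:
                return true;            /* IOMMU_CACHE mappings are supported */
        default:
                return false;
        }
}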
2022-09-07iommu/vt-d: Fix lockdep splat due to klist iteration in atomic contextLu Baolu1-28/+19
With CONFIG_INTEL_IOMMU_DEBUGFS enabled, the lockdep splat below is seen when an I/O fault occurs on a machine with an Intel IOMMU in it.
DMAR: DRHD: handling fault status reg 3
DMAR: [DMA Write NO_PASID] Request device [00:1a.0] fault addr 0x0 [fault reason 0x05] PTE Write access is not set
DMAR: Dump dmar0 table entries for IOVA 0x0
DMAR: root entry: 0x0000000127f42001
DMAR: context entry: hi 0x0000000000001502, low 0x000000012d8ab001
================================
WARNING: inconsistent lock state
5.20.0-0.rc0.20220812git7ebfc85e2cd7.10.fc38.x86_64 #1 Not tainted
--------------------------------
inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
rngd/1006 [HC1[1]:SC0[0]:HE0:SE1] takes:
ff177021416f2d78 (&k->k_lock){?.+.}-{2:2}, at: klist_next+0x1b/0x160
{HARDIRQ-ON-W} state was registered at:
lock_acquire+0xce/0x2d0
_raw_spin_lock+0x33/0x80
klist_add_tail+0x46/0x80
bus_add_device+0xee/0x150
device_add+0x39d/0x9a0
add_memory_block+0x108/0x1d0
memory_dev_init+0xe1/0x117
driver_init+0x43/0x4d
kernel_init_freeable+0x1c2/0x2cc
kernel_init+0x16/0x140
ret_from_fork+0x1f/0x30
irq event stamp: 7812
hardirqs last enabled at (7811): [<ffffffff85000e86>] asm_sysvec_apic_timer_interrupt+0x16/0x20
hardirqs last disabled at (7812): [<ffffffff84f16894>] irqentry_enter+0x54/0x60
softirqs last enabled at (7794): [<ffffffff840ff669>] __irq_exit_rcu+0xf9/0x170
softirqs last disabled at (7787): [<ffffffff840ff669>] __irq_exit_rcu+0xf9/0x170
The klist iterator functions use spin_*lock_irq*() while the klist insertion functions use spin_*lock(); this is combined with the Intel DMAR IOMMU driver iterating over klists from atomic (hardirq) context, where pci_get_domain_bus_and_slot() calls into bus_find_device(), which iterates over klists. As there is currently no plan to fix the klist to make it safe to use in atomic context, this fixes the lockdep splat by avoiding the call to pci_get_domain_bus_and_slot() in hardirq context. Fixes: 8ac0b64b9735 ("iommu/vt-d: Use pci_get_domain_bus_and_slot() in pgtable_walk()") Reported-by: Lennert Buytenhek <[email protected]> Link: https://lore.kernel.org/linux-iommu/Yvo2dfpEh%[email protected]/ Link: https://lore.kernel.org/linux-iommu/[email protected]/ Signed-off-by: Lu Baolu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joerg Roedel <[email protected]>
2022-09-07iommu/vt-d: Fix recursive lock issue in iommu_flush_dev_iotlb()Lu Baolu1-16/+23
The per domain spinlock is acquired in iommu_flush_dev_iotlb(), which can be called in interrupt context. For example, the drm-intel CI system got completely blocked with the error below:
WARNING: inconsistent lock state
6.0.0-rc1-CI_DRM_11990-g6590d43d39b9+ #1 Not tainted
--------------------------------
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
swapper/6/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
ffff88810440d678 (&domain->lock){+.?.}-{2:2}, at: iommu_flush_dev_iotlb.part.61+0x23/0x80
{SOFTIRQ-ON-W} state was registered at:
lock_acquire+0xd3/0x310
_raw_spin_lock+0x2a/0x40
domain_update_iommu_cap+0x20b/0x2c0
intel_iommu_attach_device+0x5bd/0x860
__iommu_attach_device+0x18/0xe0
bus_iommu_probe+0x1f3/0x2d0
bus_set_iommu+0x82/0xd0
intel_iommu_init+0xe45/0x102a
pci_iommu_init+0x9/0x31
do_one_initcall+0x53/0x2f0
kernel_init_freeable+0x18f/0x1e1
kernel_init+0x11/0x120
ret_from_fork+0x1f/0x30
irq event stamp: 162354
hardirqs last enabled at (162354): [<ffffffff81b59274>] _raw_spin_unlock_irqrestore+0x54/0x70
hardirqs last disabled at (162353): [<ffffffff81b5901b>] _raw_spin_lock_irqsave+0x4b/0x50
softirqs last enabled at (162338): [<ffffffff81e00323>] __do_softirq+0x323/0x48e
softirqs last disabled at (162349): [<ffffffff810c1588>] irq_exit_rcu+0xb8/0xe0
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&domain->lock);
<Interrupt>
lock(&domain->lock);
*** DEADLOCK ***
1 lock held by swapper/6/0:
This converts the spin_lock/unlock() calls into the irq save/restore variants to fix the recursive locking issue. Fixes: ffd5869d93530 ("iommu/vt-d: Replace spin_lock_irqsave() with spin_lock()") Signed-off-by: Lu Baolu <[email protected]> Acked-by: Lucas De Marchi <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joerg Roedel <[email protected]>
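The shape of the conversion, as an illustrative fragment rather than the exact vt-d hunk (the struct and field names are taken from the splat above): a lock that can also be taken from interrupt context must use the irqsave variants on the normal path.

static void demo_flush_dev_iotlb(struct dmar_domain *domain)
{
        unsigned long flags;

        /* a plain spin_lock(&domain->lock) trips lockdep here, because the
         * same lock can be taken again from (soft)irq context */
        spin_lock_irqsave(&domain->lock, flags);
        /* ... walk the domain's device list and flush each IOTLB ... */
        spin_unlock_irqrestore(&domain->lock, flags);
}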
2022-09-07iommu/vt-d: Correctly calculate sagaw value of IOMMULu Baolu1-3/+25
The Intel IOMMU driver possibly selects between the first-level and the second-level translation tables for DMA address translation. However, the levels of page-table walks for the 4KB base page size are calculated from the SAGAW field of the capability register, which is only valid for the second-level page table. This causes the IOMMU driver to stop working if the hardware (or the emulated IOMMU) advertises only first-level translation capability and reports the SAGAW field as 0. This solves the above problem by considering both the first level and the second level when calculating the supported page table levels. Fixes: b802d070a52a1 ("iommu/vt-d: Use iova over first level") Cc: [email protected] Signed-off-by: Lu Baolu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joerg Roedel <[email protected]>
2022-09-07iommu/vt-d: Fix kdump kernels boot failure with scalable modeLu Baolu2-59/+50
The translation table copying code for kdump kernels is currently based on the extended root/context entry formats of ECS mode defined in the older VT-d v2.5, and doesn't handle the scalable mode formats. This causes the kexec capture kernel to fail to boot with DMAR faults if the IOMMU was enabled in scalable mode by the previous kernel. The ECS mode has already been deprecated by the VT-d spec since v3.0 and the Intel IOMMU driver doesn't support this mode as there's no real hardware implementation. Hence this converts the ECS checking in the copying table code into scalable mode checking. The existing copying code consumes a bit in the context entry as a mark of a copied entry, and it needs to work for the old format as well as for the extended context entries. As it's hard to find such a common bit for both legacy and scalable mode context entries, this replaces it with a per-IOMMU bitmap. Fixes: 7373a8cc38197 ("iommu/vt-d: Setup context and enable RID2PASID support") Cc: [email protected] Reported-by: Jerry Snitselaar <[email protected]> Tested-by: Wen Jin <[email protected]> Signed-off-by: Lu Baolu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joerg Roedel <[email protected]>
2022-09-07net: dsa: felix: access QSYS_TAG_CONFIG under tas_lock in vsc9959_sched_speed_setVladimir Oltean1-2/+2
The read-modify-write of QSYS_TAG_CONFIG from vsc9959_sched_speed_set() runs unlocked with respect to the other functions that access it, which are vsc9959_tas_guard_bands_update(), vsc9959_qos_port_tas_set() and vsc9959_tas_clock_adjust(). All the others are under ocelot->tas_lock, so move the vsc9959_sched_speed_set() access under that lock as well, to resolve the concurrency. Fixes: 55a515b1f5a9 ("net: dsa: felix: drop oversized frames with tc-taprio instead of hanging the port") Signed-off-by: Vladimir Oltean <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-09-07net: dsa: felix: disable cut-through forwarding for frames oversized for tc-taprioVladimir Oltean1-43/+79
Experimentally, it looks like when QSYS_QMAXSDU_CFG_7 is set to 605, frames even way larger than 601 octets are transmitted even though these should be considered as oversized, according to the documentation, and dropped. Since oversized frame dropping depends on frame size, which is only known at the EOF stage, and therefore not at SOF when cut-through forwarding begins, it means that the switch cannot take QSYS_QMAXSDU_CFG_* into consideration for traffic classes that are cut-through. Since cut-through forwarding has no UAPI to control it, and the driver enables it based on the mantra "if we can, then why not", the strategy is to alter vsc9959_cut_through_fwd() to take into consideration which tc's have oversize frame dropping enabled, and disable cut-through for them. Then, from vsc9959_tas_guard_bands_update(), we re-trigger the cut-through determination process. There are 2 strategies for vsc9959_cut_through_fwd() to determine whether a tc has oversized dropping enabled or not. One is to keep a bit mask of traffic classes per port, and the other is to read back from the hardware registers (a non-zero value of QSYS_QMAXSDU_CFG_* means the feature is enabled). We choose reading back from registers, because struct ocelot_port is shared with drivers (ocelot, seville) that don't support either cut-through or tc-taprio, and we don't have a felix specific extension of struct ocelot_port. Furthermore, reading registers from the Felix hardware is quite cheap, since they are memory-mapped. Fixes: 55a515b1f5a9 ("net: dsa: felix: drop oversized frames with tc-taprio instead of hanging the port") Signed-off-by: Vladimir Oltean <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-09-07net: dsa: felix: tc-taprio intervals smaller than MTU should send at least one packetVladimir Oltean1-4/+31
The blamed commit broke tc-taprio schedules such as this one:
tc qdisc replace dev $swp1 root taprio \
 num_tc 8 \
 map 0 1 2 3 4 5 6 7 \
 queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
 base-time 0 \
 sched-entry S 0x7f 990000 \
 sched-entry S 0x80 10000 \
 flags 0x2
because the gate entry for TC 7 (S 0x80 10000 ns) now has a static guard band added earlier than its 'gate close' event, such that packet overruns won't occur in the worst case of the largest packet possible. Since guard bands are statically determined based on the per-tc QSYS_QMAXSDU_CFG_* with a fallback on the port-based QSYS_PORT_MAX_SDU, we need to discuss what happens with TC 7 depending on kernel version, since the driver, prior to commit 55a515b1f5a9 ("net: dsa: felix: drop oversized frames with tc-taprio instead of hanging the port"), did not touch QSYS_QMAXSDU_CFG_*, and therefore relied on QSYS_PORT_MAX_SDU.
1 (before vsc9959_tas_guard_bands_update): QSYS_PORT_MAX_SDU defaults to 1518, and at gigabit this introduces a static guard band (independent of packet sizes) of 12144 ns, plus QSYS::HSCH_MISC_CFG.FRM_ADJ (bit time of 20 octets => 160 ns). But this is larger than the time window itself, of 10000 ns. So, the queue system never considers a frame with TC 7 as eligible for transmission, since the gate practically never opens, and these frames are forever stuck in the TX queues and hang the port.
2 (after vsc9959_tas_guard_bands_update): Under the sole goal of enabling oversized frame dropping, we make an effort to set QSYS_QMAXSDU_CFG_7 to 1230 bytes. But QSYS_QMAXSDU_CFG_7 plays one more role, which we did not take into account: per-tc static guard band, expressed in L2 byte time (auto-adjusted for FCS and L1 overhead). There is a discrepancy between what the driver thinks (that there is no guard band, and 100% of min_gate_len[tc] is available for egress scheduling) and what the hardware actually does (crops the equivalent of QSYS_QMAXSDU_CFG_7 ns out of min_gate_len[tc]). In practice, this means that the hardware thinks it has exactly 0 ns for scheduling tc 7.
In both cases, even minimum sized Ethernet frames are stuck on egress rather than being considered for scheduling on TC 7, even if they would fit given a proper configuration. Considering the current situation, with vsc9959_tas_guard_bands_update(), frames between 60 octets and 1230 octets in size are not eligible for oversized dropping (because they are smaller than QSYS_QMAXSDU_CFG_7), but won't be considered as eligible for scheduling either, because the min_gate_len[7] (10000 ns) minus the guard band determined by QSYS_QMAXSDU_CFG_7 (1230 octets * 8 ns per octet == 9840 ns) minus the guard band auto-added for L1 overhead by QSYS::HSCH_MISC_CFG.FRM_ADJ (20 octets * 8 ns per octet == 160 ns) leaves 0 ns for scheduling in the queue system proper.
Investigating the hardware behavior, it becomes apparent that the queue system needs precisely 33 ns of 'gate open' time in order to consider a frame as eligible for scheduling to a tc. So the solution to this problem is to amend vsc9959_tas_guard_bands_update(), by giving the per-tc guard bands less space by exactly 33 ns, just enough for one frame to be scheduled in that interval. This allows the queue system to make forward progress for that port-tc, and prevents it from hanging. Fixes: 297c4de6f780 ("net: dsa: felix: re-enable TAS guard band mode") Reported-by: Xiaoliang Yang <[email protected]> Signed-off-by: Vladimir Oltean <[email protected]> Signed-off-by: David S. Miller <[email protected]>
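The arithmetic from the message, worked through in a small self-contained C program (constants copied from the text above; the 33 ns minimum is the experimentally determined value it quotes):

#include <stdio.h>

int main(void)
{
        int min_gate_len_ns = 10000;    /* S 0x80 10000: gate open time, TC 7 */
        int maxsdu_octets   = 1230;     /* QSYS_QMAXSDU_CFG_7 */
        int frm_adj_octets  = 20;       /* QSYS::HSCH_MISC_CFG.FRM_ADJ */
        int ns_per_octet    = 8;        /* gigabit: 8 ns per octet */
        int sched_min_ns    = 33;       /* minimum 'gate open' time needed */

        int guard_ns = maxsdu_octets * ns_per_octet +   /* 9840 ns */
                       frm_adj_octets * ns_per_octet;   /*  160 ns */

        /* before the fix: 10000 - 9840 - 160 = 0 ns left for scheduling */
        printf("time left for scheduling: %d ns\n", min_gate_len_ns - guard_ns);

        /* after the fix: shrink the guard band by 33 ns so one frame fits */
        printf("with 33 ns carved out:    %d ns\n",
               min_gate_len_ns - (guard_ns - sched_min_ns));
        return 0;
}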
2022-09-07gpio: mpc8xxx: Fix support for IRQ_TYPE_LEVEL_LOW flow_type in mpc85xxPali Rohár1-0/+1
Commit e39d5ef67804 ("powerpc/5xxx: extend mpc8xxx_gpio driver to support mpc512x gpios") implemented support for the IRQ_TYPE_LEVEL_LOW flow type on mpc512x via the falling edge type. Do the same for mpc85xx, support for which was added in commit 345e5c8a1cc3 ("powerpc: Add interrupt support to mpc8xxx_gpio"). This fixes probing of the lm90 hwmon driver on mpc85xx based boards which use a level interrupt. Without it, the kernel prints an error and refuses to let lm90 work:
[ 15.258370] genirq: Setting trigger mode 8 for irq 49 failed (mpc8xxx_irq_set_type+0x0/0xf8)
[ 15.267168] lm90 0-004c: cannot request IRQ 49
[ 15.272708] lm90: probe of 0-004c failed with error -22
Fixes: 345e5c8a1cc3 ("powerpc: Add interrupt support to mpc8xxx_gpio") Signed-off-by: Pali Rohár <[email protected]> Signed-off-by: Bartosz Golaszewski <[email protected]>
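A hedged fragment of the kind of change described (the function name comes from the error message above; the register programming is omitted): accept IRQ_TYPE_LEVEL_LOW in the .irq_set_type hook by treating it like a falling edge.

static int mpc8xxx_irq_set_type(struct irq_data *d, unsigned int flow_type)
{
        switch (flow_type) {
        case IRQ_TYPE_EDGE_FALLING:
        case IRQ_TYPE_LEVEL_LOW:        /* the fix: emulate via falling edge */
                /* ... set the falling-edge trigger bit for this GPIO ... */
                break;
        case IRQ_TYPE_EDGE_BOTH:
                /* ... set the both-edges trigger for this GPIO ... */
                break;
        default:
                return -EINVAL;
        }
        return 0;
}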
2022-09-07iommu/amd: use full 64-bit value in build_completion_wait()John Sperbeck1-1/+2
We started using a 64 bit completion value. Unfortunately, we only stored the low 32-bits, so a very large completion value would never be matched in iommu_completion_wait(). Fixes: c69d89aff393 ("iommu/amd: Use 4K page for completion wait write-back semaphore") Signed-off-by: John Sperbeck <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joerg Roedel <[email protected]>
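A self-contained illustration of the failure mode (not the AMD IOMMU code): storing a 64-bit completion cookie in a 32-bit field makes the later comparison fail whenever the upper half is non-zero.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t data = 0x123456789abcdef0ULL;  /* completion value written */

        uint32_t stored32 = (uint32_t)data;     /* bug: keeps low 32 bits only */
        uint64_t stored64 = data;               /* fix: keep all 64 bits */

        printf("32-bit store matches: %s\n",
               (uint64_t)stored32 == data ? "yes" : "no");      /* no */
        printf("64-bit store matches: %s\n",
               stored64 == data ? "yes" : "no");                /* yes */
        return 0;
}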
2022-09-07RDMA/irdma: Report RNR NAK generation in device capsSindhu-Devale1-1/+4
Report RNR NAK generation when device capabilities are queried Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Sindhu-Devale <[email protected]> Signed-off-by: Shiraz Saleem <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Leon Romanovsky <[email protected]>
2022-09-07RDMA/irdma: Use s/g array in post send only when its validSindhu-Devale1-1/+2
A send with invalidate verb call can pass in an uninitialized s/g array with 0 sge's, which is filled into the irdma WQE and causes a HW asynchronous event. Fix this by using the s/g array in irdma post send only when it's valid. Fixes: 551c46e ("RDMA/irdma: Add user/kernel shared libraries") Signed-off-by: Sindhu-Devale <[email protected]> Signed-off-by: Shiraz Saleem <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Leon Romanovsky <[email protected]>
2022-09-07RDMA/irdma: Return correct WC error for bind operation failureSindhu-Devale1-1/+3
When a QP and a MR on a local host are in different PDs, the HW generates an asynchronous event (AE). The same AE is generated when a QP and a MW are in different PDs during a bind operation. Return the more appropriate IBV_WC_MW_BIND_ERR for the latter case by checking the OP type from the CQE in error. Fixes: 551c46edc769 ("RDMA/irdma: Add user/kernel shared libraries") Signed-off-by: Sindhu-Devale <[email protected]> Signed-off-by: Shiraz Saleem <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Leon Romanovsky <[email protected]>
2022-09-07RDMA/irdma: Return error on MR deregister CQP failureSindhu-Devale2-6/+13
The MR deregister CQP can fail if an MW is bound to it. Return an appropriate error for this case. Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Sindhu-Devale <[email protected]> Signed-off-by: Shiraz Saleem <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Leon Romanovsky <[email protected]>
2022-09-07RDMA/irdma: Report the correct max cqes from query deviceSindhu-Devale1-1/+1
Report the correct max cqes available to an application taking into account a reserved entry to detect overflow. Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs") Signed-off-by: Sindhu-Devale <[email protected]> Signed-off-by: Shiraz Saleem <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Leon Romanovsky <[email protected]>
2022-09-07Merge tag 'phy-fixes-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy into char-misc-linusGreg Kroah-Hartman1-70/+17
Vinod writes: "phy: fixes for 6.0 Fix for broken reset in marvell a3700-comphy" * tag 'phy-fixes-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy: phy: marvell: phy-mvebu-a3700-comphy: Remove broken reset support
2022-09-07wifi: iwlwifi: don't spam logs with NSS>2 messagesJason A. Donenfeld1-2/+2
I get a log line like this every 4 seconds when connected to my AP: [15650.221468] iwlwifi 0000:09:00.0: Got NSS = 4 - trimming to 2 Looking at the code, this seems to be related to a hardware limitation, and there's nothing to be done. In an effort to keep my dmesg manageable, downgrade this error to "debug" rather than "info". Cc: Johannes Berg <[email protected]> Signed-off-by: Jason A. Donenfeld <[email protected]> Signed-off-by: Kalle Valo <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2022-09-07efi/x86: libstub: remove unused variablechen zhang1-1/+0
The variable "has_system_memory" is unused in function ‘adjust_memory_range_protection’, remove it. Signed-off-by: chen zhang <[email protected]> Signed-off-by: Ard Biesheuvel <[email protected]>
2022-09-07nvme: requeue aen after firmware activationKeith Busch1-3/+11
The driver prevents async event work while handling a processing paused event, but someone needs to restart it after the controller returns to a live state. Link: https://bugzilla.kernel.org/show_bug.cgi?id=216400 Signed-off-by: Keith Busch <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
2022-09-07nvmet: fix mar and mor off-by-one errorsDennis Maisenbacher1-2/+15
Maximum Active Resources (MAR) and Maximum Open Resources (MOR) are 0's based values where a value of 0xffffffff indicates that there is no limit. Decrement the values that are returned by bdev_max_open_zones and bdev_max_active_zones as the block layer helpers are not 0's based. A 0 returned by the block layer helpers indicates no limit, thus convert it to 0xffffffff (U32_MAX). Fixes: aaf2e048af27 ("nvmet: add ZBD over ZNS backend support") Suggested-by: Niklas Cassel <[email protected]> Signed-off-by: Dennis Maisenbacher <[email protected]> Reviewed-by: Chaitanya Kulkarni <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
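The conversion described, as a small self-contained C helper (the helper name is illustrative): the block layer helpers report 0 for 'no limit' and are 1-based, while the NVMe fields are 0's based with 0xffffffff meaning 'no limit'.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* map a block-layer zone limit (0 == unlimited) to a 0's based NVMe field */
static uint32_t to_0s_based_limit(unsigned int bdev_limit)
{
        if (bdev_limit == 0)
                return UINT32_MAX;      /* 0xffffffff: no limit */
        return bdev_limit - 1;          /* 0's based value */
}

int main(void)
{
        printf("mor for unlimited open zones: 0x%" PRIx32 "\n",
               to_0s_based_limit(0));
        printf("mar for 14 active zones:      %" PRIu32 "\n",
               to_0s_based_limit(14));
        return 0;
}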
2022-09-07thunderbolt: debugfs: Fix spelling mistakes in seq_puts textColin Ian King1-2/+2
There are a handful of spelling mistakes in seq_puts text. Fix them. Signed-off-by: Colin Ian King <[email protected]> Signed-off-by: Mika Westerberg <[email protected]>
2022-09-07thunderbolt: Add support for ASMedia NVM image formatSzuying Chen1-0/+37
Add support for ASMedia specific NVM image format. This makes it possible to upgrade the NVM firmware of ASMedia routers in addition to Intel ones. Signed-off-by: Szuying Chen <[email protected]> Signed-off-by: Mika Westerberg <[email protected]>
2022-09-07thunderbolt: Move vendor specific NVM handling into nvm.cSzuying Chen4-167/+410
As there will be more USB4 devices that support NVM firmware upgrade from various vendors, it makes sense to split out the Intel specific NVM image handling from the generic code. This moves the Intel specific NVM handling into a new structure that will be matched by the device type and the vendor ID. Do this for both routers and retimers. This makes it easier to extend the NVM support to cover new vendors and NVM image formats in the future. Signed-off-by: Szuying Chen <[email protected]> Signed-off-by: Mika Westerberg <[email protected]>
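A hedged sketch of the matching idea described above; every struct, symbol and vendor ID here is invented for illustration and is not the thunderbolt driver's: a table keyed by vendor ID selects the vendor-specific NVM operations for a router.

struct demo_nvm_ops {                           /* invented op set */
        int (*validate)(void *nvm);
};

static int demo_intel_validate(void *nvm)   { return 0; }
static int demo_asmedia_validate(void *nvm) { return 0; }

static const struct demo_nvm_ops demo_intel_nvm_ops   = { .validate = demo_intel_validate };
static const struct demo_nvm_ops demo_asmedia_nvm_ops = { .validate = demo_asmedia_validate };

struct demo_nvm_vendor {
        u16 vendor_id;
        const struct demo_nvm_ops *ops;
};

/* one entry per supported NVM image format, matched by router vendor ID */
static const struct demo_nvm_vendor demo_nvm_vendors[] = {
        { 0x0001, &demo_intel_nvm_ops },        /* placeholder vendor IDs */
        { 0x0002, &demo_asmedia_nvm_ops },
};

static const struct demo_nvm_ops *demo_match_nvm_vendor(u16 vendor_id)
{
        size_t i;

        for (i = 0; i < ARRAY_SIZE(demo_nvm_vendors); i++)
                if (demo_nvm_vendors[i].vendor_id == vendor_id)
                        return demo_nvm_vendors[i].ops;
        return NULL;                            /* NVM upgrade not supported */
}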
2022-09-07thunderbolt: Provide tb_retimer_nvm_read() analogous to tb_switch_nvm_read()Mika Westerberg2-7/+23
As we are moving the NVM vendor specifics into nvm.c, we need to deal with the retimer NVM formats too. For this reason provide a retimer specific function that can be used to read the contents of the NVM, and rename the internal ones accordingly, analogous to what we do with routers. Signed-off-by: Mika Westerberg <[email protected]>
2022-09-07thunderbolt: Rename and make nvm_read() available for other filesSzuying Chen2-18/+28
In order to support non-Intel NVM formats, the vendor specific NVM validation code that will live in nvm.c needs to be able to read various parts of the NVM, so make the function available outside of switch.c and rename it accordingly. Signed-off-by: Szuying Chen <[email protected]> Signed-off-by: Mika Westerberg <[email protected]>
2022-09-07thunderbolt: Extend NVM version fields to 32-bitsSzuying Chen3-6/+6
In order to support non-Intel NVM image formats, extend the NVM major and minor version to 32 bits to better accommodate different versioning schemes. No functional impact. Signed-off-by: Szuying Chen <[email protected]> Signed-off-by: Mika Westerberg <[email protected]>
2022-09-07thunderbolt: Allow NVM upgrade of USB4 host routersSzuying Chen1-1/+4
Intel pre-USB4 host routers required the firmware connection manager to be active in order to perform NVM firmware upgrade and for this reason it was disabled when software connection manager is active. However, this is not necessary for USB4 host routers as this functionality is part of router operations that the router implements if it wants to support this. Signed-off-by: Szuying Chen <[email protected]> Signed-off-by: Mika Westerberg <[email protected]>