path: root/drivers/infiniband/hw
2020-01-13  RDMA/core: Simplify destruction of FD uobjects  (Jason Gunthorpe; 1 file, -63/+48)
FD uobjects have a weird split between the struct file and uobject world. Simplify this to make them pure uobjects and use a generic release method for all struct file operations. This fixes the control flow so that mlx5_cmd_cleanup_async_ctx() is always called before erasing the linked list contents, making the concurrency simpler to understand. For this to work, the uobject destruction must fence anything that it is cleaning up - the design must not rely on struct file lifetime. Only deliver_event() relies on the struct file when adding new events to the queue; add an is_destroyed check under lock to block it. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Yishai Hadas <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-13  RDMA/mlx5: Use RCU and direct refcounts to keep memory alive  (Jason Gunthorpe; 1 file, -17/+17)
dispatch_event_fd() runs from a notifier with minimal locking, and relies on RCU and a file refcount to keep the uobject and eventfd alive. As the next patch wants to remove the file_operations release function from the drivers, re-organize things so that the devx_event_notifier() path uses the existing RCU to manage the lifetime of the uobject and eventfd. Move the refcount puts to a call_rcu so that the objects are guaranteed to exist and remove the indirect file refcount. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Yishai Hadas <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
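A rough sketch of the lifetime scheme described above (the struct layout and field names are hypothetical, not the driver's actual code): deferring the final puts through call_rcu() guarantees that the uobject and eventfd outlive any reader that found them under rcu_read_lock().

    /* Hypothetical sketch: free path deferred behind an RCU grace period. */
    struct async_event_file {
            struct rcu_head rcu;
            struct ib_uobject *uobj;        /* assumed fields, for illustration */
            struct eventfd_ctx *eventfd;
    };

    static void async_event_file_free_rcu(struct rcu_head *head)
    {
            struct async_event_file *ev =
                    container_of(head, struct async_event_file, rcu);

            /* No RCU reader can still see 'ev' here, so dropping is safe. */
            eventfd_ctx_put(ev->eventfd);
            uverbs_uobject_put(ev->uobj);
            kfree(ev);                      /* container freed last (sketch) */
    }

    /* Destruction path: call_rcu(&ev->rcu, async_event_file_free_rcu); */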
2020-01-12  IB/mlx5: Add mmap support for VAR  (Yishai Hadas; 1 file, -1/+4)
Add mmap support for VAR. It uses the 'offset' command mode, with involvement of the IB core APIs to find the previously allocated mmap entry. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Yishai Hadas <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-12  IB/mlx5: Introduce VAR object and its alloc/destroy methods  (Yishai Hadas; 2 files, -0/+164)
Introduce the VAR object and its alloc/destroy KABI methods. The internal implementation uses the IB core API to manage mmap/munmap calls. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Yishai Hadas <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-12  IB/mlx5: Extend caps stage to handle VAR capabilities  (Yishai Hadas; 2 files, -2/+48)
Extend caps stage to handle VAR capabilities. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Yishai Hadas <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-10  Merge branch 'x86/mm' into efi/core, to pick up dependencies  (Ingo Molnar; 1 file, -1/+1)
Signed-off-by: Ingo Molnar <[email protected]>
2020-01-10  RDMA/hns: Add support for reporting wc as software mode  (Xi Wang; 6 files, -34/+252)
When the hardware is in the resetting stage, we may not be able to poll back all of the expected work completions, as the hardware will no longer generate CQEs. This patch allows the driver to compose the expected WCs on behalf of the hardware during the resetting stage. Once the hardware has finished resetting, we can poll the CQ from hardware again. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Xi Wang <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
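A hedged sketch of the idea (helper, flag, and field names are illustrative, not the actual hns code): while the device is flagged as resetting, the driver fills in the work completion itself instead of reading a CQE from hardware.

    /* Illustrative software-composed completion used only while resetting. */
    static void sw_compose_wc(struct ib_wc *wc, u64 wr_id, struct ib_qp *qp,
                              enum ib_wc_status status)
    {
            memset(wc, 0, sizeof(*wc));
            wc->wr_id  = wr_id;     /* taken from the pending WQE bookkeeping */
            wc->status = status;    /* policy (success vs. flush) is driver-specific */
            wc->qp     = qp;
    }

    /* in poll_cq(): if (device_is_resetting(dev)) use sw_compose_wc() ... */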
2020-01-10  RDMA/hns: Bugfix for posting a wqe with sge  (Lijun Ou; 1 file, -16/+25)
The driver should first check whether the sge is valid, then fill the valid sge and the calculated total into the hardware; otherwise invalid sges will cause an error. Fixes: 52e3b42a2f58 ("RDMA/hns: Filter for zero length of sge in hip08 kernel mode") Fixes: 7bdee4158b37 ("RDMA/hns: Fill sq wqe context of ud type in hip08") Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
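A hedged sketch of the described ordering (struct and helper names such as hw_wqe, hw_sge and set_data_seg() are illustrative, not the actual hns code): skip zero-length sges, count only the valid ones, and write the totals into the WQE only after validation.

    /* Illustrative only: validate sges before filling the hardware WQE. */
    static int fill_valid_sges(struct hw_wqe *wqe, struct hw_sge *dseg,
                               const struct ib_send_wr *wr)
    {
            unsigned int valid_num = 0;
            u32 total_len = 0;
            int i;

            for (i = 0; i < wr->num_sge; i++) {
                    if (!wr->sg_list[i].length)             /* zero-length sge: skip */
                            continue;
                    set_data_seg(dseg++, &wr->sg_list[i]);  /* hypothetical helper */
                    total_len += wr->sg_list[i].length;
                    valid_num++;
            }

            /* totals reach the hardware only after the check above */
            wqe->msg_len = cpu_to_le32(total_len);
            wqe->sge_num = valid_num;
            return valid_num;
    }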
2020-01-10  IB/hfi1: Add RcvShortLengthErrCnt to hfi1stats  (Mike Marciniszyn; 3 files, -0/+3)
This counter, RxShrErr, is required for error analysis and debug. Fixes: 7724105686e7 ("IB/hfi1: add driver files") Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Kaike Wan <[email protected]> Signed-off-by: Mike Marciniszyn <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-10  IB/hfi1: Add software counter for ctxt0 seq drop  (Mike Marciniszyn; 4 files, -0/+14)
All other code paths increment some form of drop counter. This was missed in the original implementation. Fixes: 82c2611daaf0 ("staging/rdma/hfi1: Handle packets with invalid RHF on context 0") Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Kaike Wan <[email protected]> Signed-off-by: Mike Marciniszyn <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-10  IB/hfi1: Return void in packet receiving functions  (Grzegorz Andrejczuk; 2 files, -25/+18)
The packet receiving functions return an int value, yet the return values are not used at all. This patch converts the functions to return void. Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Mike Marciniszyn <[email protected]> Reviewed-by: Dennis Dalessandro <[email protected]> Signed-off-by: Grzegorz Andrejczuk <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-10  IB/hfi1: Decouple IRQ name from type  (Grzegorz Andrejczuk; 2 files, -48/+59)
The IRQ name was tied to the IRQ type; this is not sufficient, and it is better to pass the name as an argument to msix_request_irq() instead of assigning it to variables when the function is called. The index argument was only required to generate the name, so it can now be removed. Helper functions were added and updated to generate the name correctly. Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Dennis Dalessandro <[email protected]> Reviewed-by: Mike Marciniszyn <[email protected]> Reviewed-by: Michael J. Ruhl <[email protected]> Signed-off-by: Grzegorz Andrejczuk <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-10  IB/hfi1: Create API for auto activate  (Mike Marciniszyn; 1 file, -14/+23)
Add an auto activate routine for use by the interrupt handler. Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Dennis Dalessandro <[email protected]> Signed-off-by: Mike Marciniszyn <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-10  IB/hfi1: Add an API to handle special case drop  (Mike Marciniszyn; 3 files, -8/+23)
This patch pushes special case drop logic into an API to be shared by all interrupt handlers. Additionally, convert do_drop to a bool. Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Dennis Dalessandro <[email protected]> Signed-off-by: Mike Marciniszyn <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-10  IB/hfi1: Move common receive IRQ code to function  (Grzegorz Andrejczuk; 1 file, -30/+52)
Tracing interrupts, incrementing the interrupt counter, and ASPM handling are parts that will be reused by the HFI1 receive IRQ handlers. Create a common function so the shared code lives in one place. Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Dennis Dalessandro <[email protected]> Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Grzegorz Andrejczuk <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-10  IB/hfi1: Add fast and slow handlers for receive context  (Mike Marciniszyn; 4 files, -60/+58)
This patch eliminates special cases by adding a fast_handler member to the receive context and switching to the fast handler specified in the new variable. The variable is initialized as soon as the DMA tail setting is known, when the context is created. Setting the fast path is requested every time any context has entered the slow path; add a function that checks whether a context is already using the fast path and skips setting it again, to improve RCD fast-path handling. Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Dennis Dalessandro <[email protected]> Reviewed-by: Michael J. Ruhl <[email protected]> Signed-off-by: Grzegorz Andrejczuk <[email protected]> Signed-off-by: Sadanand Warrier <[email protected]> Signed-off-by: Mike Marciniszyn <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
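A rough sketch of the mechanism (struct and field names are hypothetical, not the actual hfi1 layout): the receive context stores its fast-path handler once, chosen when the DMA-tail setting is known, and the interrupt path simply calls through the member, so no per-packet special casing is needed.

    /* Hypothetical per-context handler members. */
    struct rcv_ctxt;
    typedef int (*rcv_handler_t)(struct rcv_ctxt *rcd);

    struct rcv_ctxt {
            rcv_handler_t do_interrupt;     /* currently installed handler */
            rcv_handler_t fast_handler;     /* chosen at context creation */
    };

    static inline bool ctxt_is_fastpath(const struct rcv_ctxt *rcd)
    {
            /* skip re-arming the fast path if it is already in use */
            return rcd->do_interrupt == rcd->fast_handler;
    }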
2020-01-10  IB/hfi1: Move chip specific functions to chip.c  (Mike Marciniszyn; 3 files, -69/+87)
Move routines and defines associated with hdrq size validation to a chip-specific routine, since the limits are specific to the device. Fix the incorrect value for the minimum size (2 -> 32). CSR writes should also be in chip.c, so create a chip routine to write the hdrq-specific CSRs and call it as appropriate. Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Dennis Dalessandro <[email protected]> Reviewed-by: Michael J. Ruhl <[email protected]> Signed-off-by: Mike Marciniszyn <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-07  IB/mlx5: Do reverse sequence during device removal  (Parav Pandit; 1 file, -0/+2)
When IB device profile initialization completes, the device is marked as active. However, the IB device is not marked inactive during the device removal flow, which should be the mirror of the add flow. Hence, mark it inactive during the remove sequence. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Parav Pandit <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-07  RDMA/hns: Fix coding style issues  (Lijun Ou; 2 files, -143/+92)
Fix some coding style issues without changing the logic of the code; most of the modifications address unreasonable line breaks and alignments. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Lang Cheng <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-07  RDMA/hns: Replace custom macros HNS_ROCE_ALIGN_UP  (Wenpeng Liang; 2 files, -26/+20)
HNS_ROCE_ALIGN_UP can be replaced by round_up() which is defined in kernel.h. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Wenpeng Liang <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
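For reference, a sketch of what the replacement amounts to. round_up() from include/linux/kernel.h assumes a power-of-two alignment, as the custom macro did; its definition is reproduced below for illustration, and the usage lines are hypothetical.

    /* Kernel definition of round_up() (power-of-two alignment assumed). */
    #define __round_mask(x, y) ((__typeof__(x))((y) - 1))
    #define round_up(x, y)     ((((x) - 1) | __round_mask(x, y)) + 1)

    /* before (hypothetical use of the custom macro):
     *      size = HNS_ROCE_ALIGN_UP(len, 16);
     * after:
     *      size = round_up(len, 16);       // len rounded up to a multiple of 16
     */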
2020-01-07  RDMA/hns: Remove redundant print information  (Yixing Liu; 1 file, -1/+0)
The necessary prints already exist in the outer function; the prints in hns_roce_function_clear() may confuse users, so they are removed. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Yixing Liu <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-07  RDMA/hns: Delete unnecessary parameters in hns_roce_v2_qp_modify()  (Lijun Ou; 1 file, -3/+1)
The current state and new state of the QP are not configured when modifying the QP, so these two redundant parameters should be removed. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-07  RDMA/hns: Update the value of qp type  (Lijun Ou; 1 file, -5/+7)
The values used to represent the service types of RC and UD should be interchanged according to the hardware design, and it is better to define these types in an enumeration than as macros. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-07  RDMA/hns: Remove unused function hns_roce_init_eq_table()  (Lijun Ou; 1 file, -1/+0)
hns_roce_init_eq_table() is an unused function; only its declaration remains in the driver. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-07  RDMA/hns: Avoid printing address of mtt page  (Wenpeng Liang; 1 file, -2/+2)
The address of a page should not be printed, to avoid security issues. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Wenpeng Liang <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-07  i40iw: Remove setting of VMA private data and use rdma_user_mmap_io  (Shiraz Saleem; 1 file, -8/+6)
vm_ops is now initialized in ib_uverbs_mmap() with the recent rdma mmap API changes. Earlier it was done in rdma_umap_priv_init() which would not be called unless a driver called rdma_user_mmap_io() in its mmap. i40iw does not use the rdma_user_mmap_io API but sets the vma's vm_private_data to a driver object. This now conflicts with the vm_op rdma_umap_close as priv pointer points to the i40iw driver object instead of the private data setup by core when rdma_user_mmap_io is called. This leads to a crash in rdma_umap_close with a mmap put being called when it should not have. Remove the redundant setting of the vma private_data in i40iw as it is not used. Also move i40iw over to use the rdma_user_mmap_io API. This gives the extra protection of having the mappings zapped when the context is detsroyed. BUG: unable to handle page fault for address: 0000000100000001 #PF: supervisor write access in kernel mode #PF: error_code(0x0002) - not-present page PGD 0 P4D 0 Oops: 0002 [#1] SMP PTI CPU: 6 PID: 9528 Comm: rping Kdump: loaded Not tainted 5.5.0-rc4+ #117 Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./Q87M-D2H, BIOS F7 01/17/2014 RIP: 0010:rdma_user_mmap_entry_put+0xa/0x30 [ib_core] RSP: 0018:ffffb340c04c7c38 EFLAGS: 00010202 RAX: 00000000ffffffff RBX: ffff9308e7be2a00 RCX: 000000000000cec0 RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000100000001 RBP: ffff9308dc7641f0 R08: 0000000000000001 R09: 0000000000000000 R10: 0000000000000001 R11: ffffffff8d4414d8 R12: ffff93075182c780 R13: 0000000000000001 R14: ffff93075182d2a8 R15: ffff9308e2ddc840 FS: 0000000000000000(0000) GS:ffff9308fdc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000100000001 CR3: 00000002e0412004 CR4: 00000000001606e0 Call Trace: rdma_umap_close+0x40/0x90 [ib_uverbs] remove_vma+0x43/0x80 exit_mmap+0xfd/0x1b0 mmput+0x6e/0x130 do_exit+0x290/0xcc0 ? get_signal+0x152/0xc40 do_group_exit+0x46/0xc0 get_signal+0x1bd/0xc40 ? prepare_to_wait_event+0x97/0x190 do_signal+0x36/0x630 ? remove_wait_queue+0x60/0x60 ? __audit_syscall_exit+0x1d9/0x290 ? rcu_read_lock_sched_held+0x52/0x90 ? kfree+0x21c/0x2e0 exit_to_usermode_loop+0x4f/0xc3 do_syscall_64+0x1ed/0x270 entry_SYSCALL_64_after_hwframe+0x49/0xbe RIP: 0033:0x7fae715a81fd Code: Bad RIP value. RSP: 002b:00007fae6e163cb0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001 RAX: fffffffffffffe00 RBX: 00007fae6e163d30 RCX: 00007fae715a81fd RDX: 0000000000000010 RSI: 00007fae6e163cf0 RDI: 0000000000000003 RBP: 00000000013413a0 R08: 00007fae68000000 R09: 0000000000000017 R10: 0000000000000001 R11: 0000000000000293 R12: 00007fae680008c0 R13: 00007fae6e163cf0 R14: 00007fae717c9804 R15: 00007fae6e163ed0 CR2: 0000000100000001 ---[ end trace b33d58d3a06782cb ]--- RIP: 0010:rdma_user_mmap_entry_put+0xa/0x30 [ib_core] Fixes: b86deba977a9 ("RDMA/core: Move core content from ib_uverbs to ib_core") Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Shiraz Saleem <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-06  remove ioremap_nocache and devm_ioremap_nocache  (Christoph Hellwig; 7 files, -10/+10)
ioremap has provided non-cached semantics by default since the Linux 2.6 days, so remove the additional ioremap_nocache interface. Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Arnd Bergmann <[email protected]>
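The conversion in the affected drivers is mechanical; a hedged before/after fragment (variable names illustrative):

    void __iomem *base;

    /* before: base = ioremap_nocache(pci_resource_start(pdev, 0), len); */
    base = ioremap(pci_resource_start(pdev, 0), len);   /* uncached by default */
    if (!base)
            return -ENOMEM;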
2020-01-03  RDMA/i40iw: fix a potential NULL pointer dereference  (Xiyu Yang; 1 file, -0/+2)
A NULL pointer can be returned by in_dev_get(). Thus add a corresponding check so that a NULL pointer dereference will be avoided at this place. Fixes: 8e06af711bf2 ("i40iw: add main, hdr, status") Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Xiyu Yang <[email protected]> Signed-off-by: Xin Tan <[email protected]> Reviewed-by: Leon Romanovsky <[email protected]> Reviewed-by: Jason Gunthorpe <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
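The fix follows the usual in_dev_get() pattern; a hedged fragment with illustrative surrounding names:

    /* in_dev_get() may return NULL and takes a reference when it succeeds. */
    struct in_device *idev = in_dev_get(netdev);

    if (!idev)
            return NULL;            /* bail out instead of dereferencing NULL */

    /* ... walk idev's addresses ... */

    in_dev_put(idev);               /* drop the reference taken above */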
2020-01-03  RDMA/mlx5: use true,false for bool variable  (zhengbin; 2 files, -3/+3)
Fixes coccicheck warning: drivers/infiniband/hw/mlx5/mr.c:150:2-26: WARNING: Assignment of 0/1 to bool variable drivers/infiniband/hw/mlx5/mr.c:1455:2-26: WARNING: Assignment of 0/1 to bool variable drivers/infiniband/hw/mlx5/qp.c:1874:6-20: WARNING: Assignment of 0/1 to bool variable Link: https://lore.kernel.org/r/[email protected] Reported-by: Hulk Robot <[email protected]> Signed-off-by: zhengbin <[email protected]> Acked-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-03  RDMA/mlx4: use true,false for bool variable  (zhengbin; 1 file, -2/+2)
Fixes coccicheck warning: drivers/infiniband/hw/mlx4/qp.c:852:2-14: WARNING: Assignment of 0/1 to bool variable drivers/infiniband/hw/mlx4/qp.c:3087:3-10: WARNING: Assignment of 0/1 to bool variable Link: https://lore.kernel.org/r/[email protected] Reported-by: Hulk Robot <[email protected]> Signed-off-by: zhengbin <[email protected]> Reviewed-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-03  IB/hfi1: use true,false for bool variable  (zhengbin; 1 file, -1/+1)
Fixes coccicheck warning: drivers/infiniband/hw/hfi1/rc.c:2602:1-8: WARNING: Assignment of 0/1 to bool variable Link: https://lore.kernel.org/r/[email protected] Reported-by: Hulk Robot <[email protected]> Signed-off-by: zhengbin <[email protected]> Reviewed-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
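The pattern across these three cleanups is identical; a minimal illustration (the variable name is arbitrary):

    bool use_umr;

    /* before: use_umr = 1;   -- coccicheck: assignment of 0/1 to bool variable */
    use_umr = true;             /* prefer the bool literals true/false */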
2020-01-03  IB/mlx5: Unify ODP MR code paths to allow extra flexibility  (Artemy Kovalyov; 4 files, -67/+67)
Building the MR translation table in the ODP case requires additional flexibility, namely random access to DMA addresses. Make both direct and indirect ODP MRs use the same code path, separated from the non-ODP MR code path. With the restructuring, the correct page_shift is now used around __mlx5_ib_populate_pas(). Fixes: d2183c6f1958 ("RDMA/umem: Move page_shift from ib_umem to ib_odp_umem") Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Artemy Kovalyov <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-03  IB/hfi1: Adjust flow PSN with the correct resync_psn  (Kaike Wan; 1 file, -0/+9)
When a TID RDMA ACK to RESYNC request is received, the flow PSNs for pending TID RDMA WRITE segments will be adjusted with the next flow generation number, based on the resync_psn value extracted from the flow PSN of the TID RDMA ACK packet. The resync_psn value indicates the last flow PSN for which a TID RDMA WRITE DATA packet has been received by the responder and the requester should resend TID RDMA WRITE DATA packets, starting from the next flow PSN. However, if resync_psn points to the last flow PSN for a segment and the next segment flow PSN starts with a new generation number, use of the old resync_psn to adjust the flow PSN for the next segment will lead to miscalculation, resulting in WARN_ON and sge rewinding errors: WARNING: CPU: 4 PID: 146961 at /nfs/site/home/phcvs2/gitrepo/ifs-all/components/Drivers/tmp/rpmbuild/BUILD/ifs-kernel-updates-3.10.0_957.el7.x86_64/hfi1/tid_rdma.c:4764 hfi1_rc_rcv_tid_rdma_ack+0x8f6/0xa90 [hfi1] Modules linked in: ib_ipoib(OE) hfi1(OE) rdmavt(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfsv3 nfs_acl nfs lockd grace fscache iTCO_wdt iTCO_vendor_support skx_edac intel_powerclamp coretemp intel_rapl iosf_mbi kvm irqbypass crc32_pclmul ghash_clmulni_intel ib_isert iscsi_target_mod target_core_mod aesni_intel lrw gf128mul glue_helper ablk_helper cryptd rpcrdma sunrpc opa_vnic ast ttm ib_iser libiscsi drm_kms_helper scsi_transport_iscsi ipmi_ssif syscopyarea sysfillrect sysimgblt fb_sys_fops drm joydev ipmi_si pcspkr sg drm_panel_orientation_quirks ipmi_devintf lpc_ich i2c_i801 ipmi_msghandler wmi rdma_ucm ib_ucm ib_uverbs acpi_cpufreq acpi_power_meter ib_umad rdma_cm ib_cm iw_cm ip_tables ext4 mbcache jbd2 sd_mod crc_t10dif crct10dif_generic crct10dif_pclmul i2c_algo_bit crct10dif_common crc32c_intel e1000e ib_core ahci libahci ptp libata pps_core nfit libnvdimm [last unloaded: rdmavt] CPU: 4 PID: 146961 Comm: kworker/4:0H Kdump: loaded Tainted: G W OE ------------ 3.10.0-957.el7.x86_64 #1 Hardware name: Intel Corporation S2600WFT/S2600WFT, BIOS SE5C620.86B.0X.02.0117.040420182310 04/04/2018 Workqueue: hfi0_0 _hfi1_do_tid_send [hfi1] Call Trace: <IRQ> [<ffffffff9e361dc1>] dump_stack+0x19/0x1b [<ffffffff9dc97648>] __warn+0xd8/0x100 [<ffffffff9dc9778d>] warn_slowpath_null+0x1d/0x20 [<ffffffffc05d28c6>] hfi1_rc_rcv_tid_rdma_ack+0x8f6/0xa90 [hfi1] [<ffffffffc05c21cc>] hfi1_kdeth_eager_rcv+0x1dc/0x210 [hfi1] [<ffffffffc05c23ef>] ? hfi1_kdeth_expected_rcv+0x1ef/0x210 [hfi1] [<ffffffffc0574f15>] kdeth_process_eager+0x35/0x90 [hfi1] [<ffffffffc0575b5a>] handle_receive_interrupt_nodma_rtail+0x17a/0x2b0 [hfi1] [<ffffffffc056a623>] receive_context_interrupt+0x23/0x40 [hfi1] [<ffffffff9dd4a294>] __handle_irq_event_percpu+0x44/0x1c0 [<ffffffff9dd4a442>] handle_irq_event_percpu+0x32/0x80 [<ffffffff9dd4a4cc>] handle_irq_event+0x3c/0x60 [<ffffffff9dd4d27f>] handle_edge_irq+0x7f/0x150 [<ffffffff9dc2e554>] handle_irq+0xe4/0x1a0 [<ffffffff9e3795dd>] do_IRQ+0x4d/0xf0 [<ffffffff9e36b362>] common_interrupt+0x162/0x162 <EOI> [<ffffffff9dfa0f79>] ? swiotlb_map_page+0x49/0x150 [<ffffffffc05c2ed1>] hfi1_verbs_send_dma+0x291/0xb70 [hfi1] [<ffffffffc05c2c40>] ? hfi1_wait_kmem+0xf0/0xf0 [hfi1] [<ffffffffc05c3f26>] hfi1_verbs_send+0x126/0x2b0 [hfi1] [<ffffffffc05ce683>] _hfi1_do_tid_send+0x1d3/0x320 [hfi1] [<ffffffff9dcb9d4f>] process_one_work+0x17f/0x440 [<ffffffff9dcbade6>] worker_thread+0x126/0x3c0 [<ffffffff9dcbacc0>] ? manage_workers.isra.25+0x2a0/0x2a0 [<ffffffff9dcc1c31>] kthread+0xd1/0xe0 [<ffffffff9dcc1b60>] ? 
insert_kthread_work+0x40/0x40 [<ffffffff9e374c1d>] ret_from_fork_nospec_begin+0x7/0x21 [<ffffffff9dcc1b60>] ? insert_kthread_work+0x40/0x40 This patch fixes the issue by adjusting the resync_psn first if the flow generation has been advanced for a pending segment. Fixes: 9e93e967f7b4 ("IB/hfi1: Add a function to receive TID RDMA ACK packet") Link: https://lore.kernel.org/r/[email protected] Cc: <[email protected]> Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-03  IB/hfi1: List all receive contexts from debugfs  (Michael J. Ruhl; 2 files, -4/+10)
The current debugfs output for receive contexts (rcds) stops after the kernel receive contexts have been displayed. This is not enough information to fully diagnose packet drops, so display all of the receive contexts and augment the output with some more context information. Limit the ring buffer header output to 5 entries to avoid overextending the sequential file output. Fixes: bf808b5039c ("IB/hfi1: Add kernel receive context info to debugfs") Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Michael J. Ruhl <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-03  IB/hfi1: Add accessor API routines to access context members  (Mike Marciniszyn; 9 files, -85/+183)
This patch adds a set of accessor routines to access context members. Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Michael J. Ruhl <[email protected]> Signed-off-by: Mike Marciniszyn <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-03  IB/hfi1: Don't cancel unused work item  (Kaike Wan; 1 file, -1/+3)
In the iowait structure, two iowait_work entries were included to queue a given object: one for normal IB operations, and the other for TID RDMA operations. For non-TID RDMA operations, the iowait_work structure for TID RDMA is initialized to contain a NULL function (not used). When the QP is reset, the function iowait_cancel_work will be called to cancel any pending work. The problem is that this function will call cancel_work_sync() for both iowait_work entries, even though the one for TID RDMA is not used at all. Eventually, the call cascades to __flush_work(), wherein a WARN_ON will be triggered due to the fact that work->func is NULL. The WARN_ON was introduced in commit 4d43d395fed1 ("workqueue: Try to catch flush_work() without INIT_WORK().") This patch fixes the issue by making sure that a work function is present for TID RDMA before calling cancel_work_sync in iowait_cancel_work. Fixes: 4d43d395fed1 ("workqueue: Try to catch flush_work() without INIT_WORK().") Fixes: 5da0fc9dbf89 ("IB/hfi1: Prepare resource waits for dual leg") Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Kaike Wan <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
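A hedged sketch of the guard described above (the layout mirrors the description, but field names may differ from the actual hfi1 code): only a work item that was actually initialized with a function gets cancelled.

    /* Illustrative: never call cancel_work_sync() on a work item whose
     * function was left NULL (the unused TID RDMA slot). */
    static void cancel_one_iowait_work(struct iowait_work *w)
    {
            if (!w->iowork.func)    /* not initialized for this QP type */
                    return;
            cancel_work_sync(&w->iowork);
    }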
2020-01-03  RDMA/mlx4: Redo TX checksum offload in line with docs  (Eugene Crosser; 1 file, -11/+7)
Ingress checksum offload was not working for IPv6 frames because the conditional expression that checks validation status passed from the hardware was not matching the algorithm described in the documentation. This patch defines L4_CSUM flag (which falls inside the badfcs_enc field in the existing definition of the CQE layout) and replaces the conditional expression with the one defined in the "ConnectX(r) Family Programmer's Manual" document. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Eugene Crosser <[email protected]> Reviewed-by: Jack Wang <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-03  IB/mlx5: Fix outstanding_pi index for GSI qps  (Prabhath Sajeepa; 1 file, -2/+1)
Commit b0ffeb537f3a ("IB/mlx5: Fix iteration overrun in GSI qps") changed the way outstanding WRs are tracked for the GSI QP, but the fix did not cover the case when a call to ib_post_send() fails and updates the index used to track outstanding WRs. Since the prior commit, outstanding_pi should not be bounded; otherwise the loop in generate_completions() will fail. Fixes: b0ffeb537f3a ("IB/mlx5: Fix iteration overrun in GSI qps") Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Prabhath Sajeepa <[email protected]> Acked-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-03  RDMA/hns: Simplify the calculation and usage of wqe idx for post verbs  (Yixian Liu; 3 files, -48/+35)
Currently, the wqe idx is calculated repeatedly everywhere it is used. This patch defines wqe_idx, calculates it only once, and then just uses it as needed. Fixes: 2d40788825ac ("RDMA/hns: Add support for processing send wr and receive wr") Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Yixian Liu <[email protected]> Signed-off-by: Weihang Li <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
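A hedged sketch of the simplification (names such as get_send_wqe() are illustrative; assumes the queue depth is a power of two): the slot is computed once per work request and then reused everywhere it is needed.

    /* Compute the index once, then reuse it for the WQE, the wrid array, etc. */
    u32 wqe_idx = (qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1);

    wqe = get_send_wqe(qp, wqe_idx);        /* hypothetical helper */
    qp->sq.wrid[wqe_idx] = wr->wr_id;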
2020-01-03  RDMA/bnxt_re: Report more number of completion vectors  (Selvin Xavier; 1 file, -1/+1)
Report the data path MSI-X vectors allocated by the driver as the number of completion vectors. One interrupt vector is used for the control path, so report one less than the total number of MSI-X vectors allocated by the driver. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Selvin Xavier <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
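In effect, the reported value is the allocated MSI-X count minus the one vector reserved for the control path; a hedged one-liner with illustrative field names:

    /* One vector serves the control path; the rest handle completions. */
    ibdev->num_comp_vectors = rdev->num_msix - 1;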
2020-01-03  RDMA/bnxt_re: Fix Send Work Entry state check while polling completions  (Selvin Xavier; 1 file, -6/+6)
Some adapters need a fence Work Entry to handle retransmission. Currently the driver checks for this condition only if the Send queue entry is signalled; implement the condition check irrespective of the signalled state of the Work queue entries. Failure to add the fence can result in access to memory that is already marked as completed, triggering data corruption, transmission failure, IOMMU failures, etc. Fixes: 9152e0b722b2 ("RDMA/bnxt_re: HW workarounds for handling specific conditions") Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Selvin Xavier <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2020-01-03  RDMA/bnxt_re: Avoid freeing MR resources if dereg fails  (Selvin Xavier; 1 file, -1/+3)
The driver returns an error code for MR dereg, but frees the MR structure. When the MR dereg is retried due to previous error, the system crashes as the structure is already freed. BUG: unable to handle kernel NULL pointer dereference at 00000000000001b8 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 7 PID: 12178 Comm: ib_send_bw Kdump: loaded Not tainted 4.18.0-124.el8.x86_64 #1 Hardware name: Dell Inc. PowerEdge R430/03XKDV, BIOS 1.1.10 03/10/2015 RIP: 0010:__dev_printk+0x2a/0x70 Code: 0f 1f 44 00 00 49 89 d1 48 85 f6 0f 84 f6 2b 00 00 4c 8b 46 70 4d 85 c0 75 04 4c 8b 46 10 48 8b 86 a8 00 00 00 48 85 c0 74 16 <48> 8b 08 0f be 7f 01 48 c7 c2 13 ac ac 83 83 ef 30 e9 10 fe ff ff RSP: 0018:ffffaf7c04607a60 EFLAGS: 00010006 RAX: 00000000000001b8 RBX: ffffa0010c91c488 RCX: 0000000000000246 RDX: ffffaf7c04607a68 RSI: ffffa0010c91caa8 RDI: ffffffff83a788eb RBP: ffffaf7c04607ac8 R08: 0000000000000000 R09: ffffaf7c04607a68 R10: 0000000000000000 R11: 0000000000000001 R12: ffffaf7c04607b90 R13: 000000000000000e R14: 0000000000000000 R15: 00000000ffffa001 FS: 0000146fa1f1cdc0(0000) GS:ffffa0012fac0000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00000000000001b8 CR3: 000000007680a003 CR4: 00000000001606e0 Call Trace: dev_err+0x6c/0x90 ? dev_printk_emit+0x4e/0x70 bnxt_qplib_rcfw_send_message+0x594/0x660 [bnxt_re] ? dev_err+0x6c/0x90 bnxt_qplib_free_mrw+0x80/0xe0 [bnxt_re] bnxt_re_dereg_mr+0x2e/0xd0 [bnxt_re] ib_dereg_mr+0x2f/0x50 [ib_core] destroy_hw_idr_uobject+0x20/0x70 [ib_uverbs] uverbs_destroy_uobject+0x2e/0x170 [ib_uverbs] __uverbs_cleanup_ufile+0x6e/0x90 [ib_uverbs] uverbs_destroy_ufile_hw+0x61/0x130 [ib_uverbs] ib_uverbs_close+0x1f/0x80 [ib_uverbs] __fput+0xb7/0x230 task_work_run+0x8a/0xb0 do_exit+0x2da/0xb40 ... RIP: 0033:0x146fa113a387 Code: Bad RIP value. RSP: 002b:00007fff945d1478 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff02 RAX: 0000000000000000 RBX: 000055a248908d70 RCX: 0000000000000000 RDX: 0000146fa1f2b000 RSI: 0000000000000001 RDI: 000055a248906488 RBP: 000055a248909630 R08: 0000000000010000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000000 R12: 000055a248906488 R13: 0000000000000001 R14: 0000000000000000 R15: 000055a2489095f0 Do not free the MR structures, when driver returns error to the stack. Fixes: 872f3578241d ("RDMA/bnxt_re: Add support for MRs with Huge pages") Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Selvin Xavier <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
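The shape of the fix is the common "release only on success" pattern; a hedged sketch in which hw_dereg_mr() and the struct names stand in for the driver's firmware deregistration path:

    /* Illustrative only: keep the MR structure alive if HW dereg fails. */
    static int dereg_mr_sketch(struct rdev *rdev, struct mr *mr)
    {
            int rc = hw_dereg_mr(rdev, mr); /* hypothetical teardown call */

            if (rc)
                    return rc;      /* leave the MR intact so a retry stays safe */

            kfree(mr);              /* free only after the HW side succeeded */
            return 0;
    }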
2020-01-03  RDMA/qedr: Add kernel capability flags for dpm enabled mode  (Michal Kalderon; 1 file, -1/+12)
The HW/FW support two types of latency-enhancement features. Until now, user space implemented only edpm (enhanced dpm). We add kernel capability flags to indicate that the current FW in the kernel supports both ldpm and edpm. Since edpm is not yet supported for iWARP, we add different flags for iWARP and RoCE. We also fix the bad practice of defining sizes in rdma-core, and pass the initialization to the kernel for forward compatibility. The capability flags are added for backward/forward compatibility between the kernel and rdma-core for qedr. Before this change there was a field called dpm_enabled, which could hold either 0 or 1, indicating whether RoCE edpm was enabled or not. We modified this field to be dpm_flags, and bit 1 still holds the same meaning of RoCE edpm being enabled or not. Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Michal Kalderon <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2019-12-25  Merge branch 'core/kprobes' into perf/core, to pick up a completed branch  (Ingo Molnar; 2 files, -5/+5)
Signed-off-by: Ingo Molnar <[email protected]>
2019-12-15  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma  (Linus Torvalds; 5 files, -63/+116)
Pull rdma fixes from Doug Ledford: "A small collection of -rc fixes. Mostly. One API addition, but that's because we wanted to use it in a fix. There's also a bug fix that is going to render the 5.5 kernel's soft-RoCE driver incompatible with all soft-RoCE versions prior, but it's required to actually implement the protocol according to the RoCE spec and required in order for the soft-RoCE driver to be able to successfully work with actual RoCE hardware. Summary: - Update Steve Wise info - Fix for soft-RoCE crc calculations (will break back compatibility, but only with the soft-RoCE driver, which has had this bug since it was introduced and it is an on-the-wire bug, but will make soft-RoCE fully compatible with real RoCE hardware) - cma init fixup - counters oops fix - fix for mlx4 init/teardown sequence - fix for mlx5 steering rules - introduce a cleanup API, which isn't a fix, but we want to use it in the next fix - fix for mlx5 memory management that uses API in previous patch" * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: IB/mlx5: Fix device memory flows IB/core: Introduce rdma_user_mmap_entry_insert_range() API IB/mlx5: Fix steering rule of drop and count IB/mlx4: Follow mirror sequence of device add during device removal RDMA/counter: Prevent auto-binding a QP which are not tracked with res rxe: correctly calculate iCRC for unaligned payloads Update mailmap info for Steve Wise RDMA/cma: add missed unregister_pernet_subsys in init failure
2019-12-12  IB/mlx5: Fix device memory flows  (Yishai Hadas; 4 files, -52/+105)
Fix the device memory flows so that the matching object is destroyed only once there is no live mmapped VA for a given allocation. This prevents a potential scenario in which an existing VA that was mmapped by one process might still be used after its deallocation, despite being owned now by another process. The above is achieved by integrating with the IB core APIs to manage mmap/munmap; only once the refcount becomes 0 are the DM object and its underlying area freed. Fixes: 3b113a1ec3d4 ("IB/mlx5: Support device memory type attribute") Signed-off-by: Yishai Hadas <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Doug Ledford <[email protected]>
2019-12-12  IB/mlx5: Fix steering rule of drop and count  (Maor Gottlieb; 1 file, -7/+6)
There are two flow rule destinations: QP and packet. When users set a DROP packet rule, the QP should not be set as a destination. Fixes: 3b3233fbf02e ("IB/mlx5: Add flow counters binding support") Signed-off-by: Maor Gottlieb <[email protected]> Reviewed-by: Raed Salem <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Doug Ledford <[email protected]>
2019-12-12  IB/mlx4: Follow mirror sequence of device add during device removal  (Parav Pandit; 1 file, -4/+5)
The current device add sequence is: ib_register_device(), ib_mad_init(), init_sriov_init(), register_netdev_notifier(). Therefore, the remove sequence should be: unregister_netdev_notifier(), close_sriov(), mad_cleanup(), ib_unregister_device(). However, the current removal flow does not follow this order; hence, make the remove sequence mirror the add sequence as above. Fixes: fa417f7b520ee ("IB/mlx4: Add support for IBoE") Signed-off-by: Parav Pandit <[email protected]> Reviewed-by: Maor Gottlieb <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Doug Ledford <[email protected]>
2019-12-10  x86/mm/pat: Rename <asm/pat.h> => <asm/memtype.h>  (Ingo Molnar; 1 file, -1/+1)
pat.h is a file whose main purpose is to provide the memtype_*() APIs. PAT is the low level hardware mechanism - but the high level abstraction is memtype. So name the header <memtype.h> as well - this goes hand in hand with memtype.c and memtype_interval.c. Signed-off-by: Ingo Molnar <[email protected]>
2019-12-10  Merge tag 'v5.5-rc1' into core/kprobes, to resolve conflicts  (Ingo Molnar; 111 files, -11846/+2760)
Signed-off-by: Ingo Molnar <[email protected]>