Age | Commit message | Author | Files | Lines
2018-05-16IB/cm: Avoid AV ah_attr overwriting during LAP message handlingParav Pandit1-8/+8
The AH attribute of the cm_id can be overwritten if a LAP message is received while a CM request is still in progress. This bug was introduced to avoid sleeping while a spin lock is held, as part of the commit in the Fixes tag. Therefore validate the cm_id state first and only then perform the AV ah_attr initialization. Given that alternative-path related messages are not supported for RoCE, init_av_from_response/path for such messages is safe to call from blocking context. Fixes: 33f93e1ebcf5 ("IB/cm: Fix sleeping while spin lock is held") Signed-off-by: Parav Pandit <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
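A rough userspace model of the "validate state under the spin lock, do the sleep-able AV init outside it" ordering described above (pthread spinlock stands in for the kernel lock; this is not the ib_cm code and the state names are hypothetical):

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    enum cm_state { CM_REQ_SENT, CM_REQ_RCVD, CM_ESTABLISHED };

    struct cm_id_sketch {
        pthread_spinlock_t lock;
        enum cm_state state;
    };

    /* Check the state while the spin lock is held, then perform the
     * (potentially sleeping) AV/ah_attr initialization with no lock held. */
    static int handle_lap_sketch(struct cm_id_sketch *id)
    {
        enum cm_state state;

        pthread_spin_lock(&id->lock);
        state = id->state;
        pthread_spin_unlock(&id->lock);

        if (state != CM_ESTABLISHED)
            return -EINVAL;      /* don't clobber the AV of an in-flight REQ */

        /* blocking init_av_from_response/path style work would go here */
        return 0;
    }

    int main(void)
    {
        struct cm_id_sketch id = { .state = CM_REQ_RCVD };

        pthread_spin_init(&id.lock, PTHREAD_PROCESS_PRIVATE);
        printf("%d\n", handle_lap_sketch(&id));   /* -22: rejected */
        id.state = CM_ESTABLISHED;
        printf("%d\n", handle_lap_sketch(&id));   /* 0: AV init allowed */
        return 0;
    }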
2018-05-16i40iw: Extend port reuse support for listenersShiraz Saleem3-42/+59
If two listeners are created with different IPs but the same port, the second rdma_listen fails due to a duplicate port entry being added from the CQP add APBVT OP. commit f16dc0aa5ea2 ("i40iw: Add support for port reuse on active side connections") does not account for listener-side port reuse. Check for a duplicate port before invoking the CQP command to add an APBVT entry, and delete the entry only if the port is not in use. Additionally, consolidate all port-reuse logic into i40iw_manage_apbvt. Fixes: f16dc0aa5ea2 ("i40iw: Add support for port reuse on active side connections") Signed-off-by: Shiraz Saleem <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
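A minimal userspace sketch of the consolidated port-reuse idea (a per-port use count so the hardware add/delete operation is only issued for the first and last user; names are illustrative, not the actual i40iw_manage_apbvt code):

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_PORTS 65536

    static unsigned int port_users[MAX_PORTS];   /* use count per TCP port */

    /* Returns true when the hardware add/delete op must actually be issued. */
    static bool apbvt_manage(unsigned short port, bool add)
    {
        if (add)
            return port_users[port]++ == 0;   /* only the first listener adds */
        if (port_users[port] == 0)
            return false;                     /* nothing to delete */
        return --port_users[port] == 0;       /* only the last user deletes */
    }

    int main(void)
    {
        printf("add issued: %d\n", apbvt_manage(50000, true));   /* 1: first user */
        printf("add issued: %d\n", apbvt_manage(50000, true));   /* 0: port reused */
        printf("del issued: %d\n", apbvt_manage(50000, false));  /* 0: still in use */
        printf("del issued: %d\n", apbvt_manage(50000, false));  /* 1: last user */
        return 0;
    }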
2018-05-15IB/umem: Use the correct mm during ib_umem_releaseLidong Chen2-7/+1
User-space may invoke ibv_reg_mr and ibv_dereg_mr in different threads. If ibv_dereg_mr is called after the thread which invoked ibv_reg_mr has exited, get_pid_task will return NULL and ib_umem_release will not decrease mm->pinned_vm. Instead of using threads to locate the mm, use the overall tgid from the ib_ucontext struct. This matches the behavior of ODP and disassociate in handling the mm of the process that called ibv_reg_mr. Cc: <[email protected]> Fixes: 87773dd56d54 ("IB: ib_umem_release() should decrement mm->pinned_vm from ib_umem_get") Signed-off-by: Lidong Chen <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2018-05-15IB/core: Remove redundant returnYuval Shaia1-2/+0
"return" statement at the end of void function is redundant, removing it. Signed-off-by: Yuval Shaia <[email protected]> Reviewed-by: Zhu Yanjun <[email protected]> Reviewed-by: Qing Huang <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2018-05-15iw_cxgb4: remove wr_id attributesSteve Wise1-55/+0
Remove sq/rq wr_id attributes because typically they are pointers and we don't want to pass up kernel pointers. Fixes: 056f9c7f39bf ("iw_cxgb4: dump detailed driver-specific QP information") Signed-off-by: Steve Wise <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2018-05-15RDMA/NLDEV: remove mr iova attributeSteve Wise1-3/+0
Remove mr iova attribute because we don't want to pass up kernel pointers. Fixes: fccec5b89ac6 ("RDMA/nldev: provide detailed MR information") Signed-off-by: Steve Wise <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2018-05-15iw_cxgb4: fix uninitialized variable warningsSteve Wise1-2/+2
Fixes: 056f9c7f39bf ("iw_cxgb4: dump detailed driver-specific QP information") Signed-off-by: Steve Wise <[email protected]> Signed-off-by: Jason Gunthorpe <[email protected]>
2018-05-15RDMA/uapi: Fix uapi breakageDoug Ledford1-11/+13
During this merge window, we added support for additional RDMA netlink operations. Unfortunately, we added the items in the middle of our uapi enum. Fix that before final release. Fixes: da5c85078215 ("RDMA/nldev: add driver-specific resource tracking") Signed-off-by: Doug Ledford <[email protected]>
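The underlying rule is that uapi enum values are ABI: existing entries keep their numeric values forever and new entries are only appended at the tail. A purely illustrative sketch (made-up names, not the actual rdma_nldev enum):

    /* Illustrative only. Existing values keep their positions; new commands
     * are appended before the sentinel so previously compiled userspace
     * keeps seeing the same numbers. */
    enum example_nldev_cmd {
        EXAMPLE_NLDEV_CMD_GET = 1,     /* existing, value must never change */
        EXAMPLE_NLDEV_CMD_PORT_GET,    /* existing */
        EXAMPLE_NLDEV_CMD_RES_GET,     /* existing */
        /* new commands go here, at the tail, never in the middle */
        EXAMPLE_NLDEV_CMD_DRIVER_GET,  /* newly added this cycle */
        EXAMPLE_NLDEV_NUM_CMDS,
    };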
2018-05-15RDMA/hfi1: Fix build error with debugfs disabledDoug Ledford1-15/+0
A recent patch set to rework the usage of debugfs and to add fault injection capabilities via debugfs files to the hfi1 driver introduced a build error that only shows up when debugfs is fully disabled. The patchset mistakenly defines some empty stub functions in two different headers when debugfs is disabled. Remove the set that shouldn't have been there to resolve the issue. Fixes: a74d5307caba ("IB/hfi1: Rework fault injection machinery") Signed-off-by: Doug Ledford <[email protected]>
2018-05-15IB: Fix RDMA_RXE and INFINIBAND_RDMAVT dependencies for DMA_VIRT_OPSBen Hutchings2-1/+2
DMA_VIRT_OPS requires that dma_addr_t is at least as wide as a pointer, which is expressed as a dependency on !64BIT || ARCH_DMA_ADDR_T_64BIT. For parisc64 this is not true, and if these IB modules are enabled, kconfig warns: WARNING: unmet direct dependencies detected for DMA_VIRT_OPS Depends on [n]: HAS_DMA [=y] && (!64BIT [=y] || ARCH_DMA_ADDR_T_64BIT) Selected by [m]: - INFINIBAND_RDMAVT [=m] && INFINIBAND [=m] && 64BIT [=y] && PCI [=y] - RDMA_RXE [=m] && INET [=y] && PCI [=y] && INFINIBAND [=m] Add dependencies to fix this. Signed-off-by: Ben Hutchings <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-15Merge tag 'mlx5-updates-2018-05-07' of ↵Doug Ledford6-27/+27
git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux into k.o/wip/dl-for-next mlx5-updates-2018-05-07 mlx5 core driver misc cleanups and updates: - fix spelling mistake: "modfiy" -> "modify" - Cleanup unused field in Work Queue parameters - dump_command mailbox length printed - Refactor num of blocks in mailbox calculation - Decrease level of prints about non-existent MKEY - remove some extraneous spaces in indentations Pulling the same update already pulled into net-next by Dave Miller. Signed-off-by: Doug Ledford <[email protected]>
2018-05-09IB/{hfi1, qib, rdmavt}: Move logic to allocate receive WQE into rdmavtBrian Welty12-324/+170
Moving receive-side WQE allocation logic into rdmavt will allow further code reuse between qib and hfi1 drivers. Reviewed-by: Mike Marciniszyn <[email protected]> Reviewed-by: Dennis Dalessandro <[email protected]> Signed-off-by: Brian Welty <[email protected]> Signed-off-by: Harish Chegondi <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09IB/{hfi1, rdmavt, qib}: Implement CQ completion vector supportSebastian Sanchez15-101/+534
Currently the driver doesn't support completion vectors. These are used to indicate which sets of CQs should be grouped together into the same vector. A vector is a CQ processing thread that runs on a specific CPU. If an application has several CQs bound to different completion vectors, and each completion vector runs on different CPUs, then the completion queue workload is balanced. This helps scale as more nodes are used. Implement CQ completion vector support using a global workqueue where a CQ entry is queued to the CPU corresponding to the CQ's completion vector. Since the workqueue is global, it's guaranteed to always be there when queueing CQ entries; therefore, the RCU locking for cq->rdi->worker in the hot path is superfluous. Each completion vector is assigned to a different CPU. The number of completion vectors available is computed by taking the number of online, physical CPUs from the local NUMA node and subtracting the CPUs used for kernel receive queues and the general interrupt. Special use cases: * If there are no CPUs left for completion vectors, the same CPU for the general interrupt is used; therefore, there would only be one completion vector available. * For multi-HFI systems, the number of completion vectors available for each device is the total number of completion vectors in the local NUMA node divided by the number of devices in the same NUMA node. If there's a division remainder, the first device to get initialized gets an extra completion vector. Upon CQ creation, an invalid completion vector could be specified. Handle it as follows: * If the completion vector is less than 0, set it to 0. * Set the completion vector to the result of the passed completion vector taken modulo the number of device completion vectors available. Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Sebastian Sanchez <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
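The user-supplied completion vector sanitizing described above is a clamp followed by a modulo; a small standalone sketch of just that step (hypothetical helper name, not rdmavt code):

    #include <stdio.h>

    /* Map an arbitrary user-supplied completion vector onto one of the
     * n_cvs vectors the device actually has. */
    static int sanitize_comp_vector(int requested, int n_cvs)
    {
        if (requested < 0)
            requested = 0;           /* negative values fall back to vector 0 */
        return requested % n_cvs;    /* out-of-range values wrap around */
    }

    int main(void)
    {
        printf("%d\n", sanitize_comp_vector(-5, 4));  /* 0 */
        printf("%d\n", sanitize_comp_vector(2, 4));   /* 2 */
        printf("%d\n", sanitize_comp_vector(9, 4));   /* 1 */
        return 0;
    }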
2018-05-09IB/hfi1: Create common functions for affinity CPU mask operationsSebastian Sanchez1-23/+60
CPU masks are used to keep track of affinity assignments for IRQs and processes. Operations performed on these affinity CPU masks are duplicated throughout the code. Create common functions for affinity CPU mask operations to remove duplicate code. Reviewed-by: Michael J. Ruhl <[email protected]> Signed-off-by: Sebastian Sanchez <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09IB/hfi1: Optimize kthread pointer locking when queuing CQ entriesSebastian Sanchez2-13/+20
All threads queuing CQ entries on different CQs are unnecessarily synchronized by a spin lock to check if the CQ kthread worker hasn't been destroyed before queuing a CQ entry. The lock used in 6efaf10f163d ("IB/rdmavt: Avoid queuing work into a destroyed cq kthread worker") is a device global lock and will have poor performance at scale as completions are entered from a large number of CPUs. Convert to use RCU, where the read side in rvt_cq_enter() determines that the worker is alive prior to triggering the completion event. Apply write side RCU semantics in rvt_driver_cq_init() and rvt_cq_exit(). Fixes: 6efaf10f163d ("IB/rdmavt: Avoid queuing work into a destroyed cq kthread worker") Cc: <[email protected]> # 4.14.x Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Sebastian Sanchez <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
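A simplified kernel-style sketch of the pattern (not the actual rdmavt code): the completion hot path checks the worker pointer under rcu_read_lock(), while teardown clears the pointer and waits for readers before destroying the worker.

    #include <linux/rcupdate.h>
    #include <linux/kthread.h>

    /* Sketch only; stand-in for the rvt_dev_info-like container. */
    struct cq_dev {
        struct kthread_worker __rcu *worker;
    };

    static void cq_enter_sketch(struct cq_dev *dev, struct kthread_work *work)
    {
        struct kthread_worker *w;

        rcu_read_lock();
        w = rcu_dereference(dev->worker);
        if (w)                              /* worker still alive */
            kthread_queue_work(w, work);
        rcu_read_unlock();
    }

    static void cq_exit_sketch(struct cq_dev *dev)
    {
        struct kthread_worker *w = rcu_dereference_protected(dev->worker, 1);

        rcu_assign_pointer(dev->worker, NULL);
        synchronize_rcu();                  /* wait for in-flight cq_enter_sketch() */
        if (w)
            kthread_destroy_worker(w);
    }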
2018-05-09IB/Hfi1: Read CCE Revision register to verify the device is responsiveKamenee Arumugam2-7/+8
When the Hfi1 device is unresponsive, reading the RcvArrayCnt register will return all 1's. This value is then used to remap the chip's RcvArray. The incorrect all-ones value used in remapping the RcvArray will cause a WARN_ON, as shown by the trace below: [<ffffffff81685eac>] dump_stack+0x19/0x1b [<ffffffff81085820>] warn_slowpath_common+0x70/0xb0 [<ffffffff810858bc>] warn_slowpath_fmt+0x5c/0x80 [<ffffffff81065c29>] __ioremap_caller+0x279/0x320 [<ffffffff8142873c>] ? _dev_info+0x6c/0x90 [<ffffffffa021d155>] ? hfi1_pcie_ddinit+0x1d5/0x330 [hfi1] [<ffffffff81065d62>] ioremap_wc+0x32/0x40 [<ffffffffa021d155>] hfi1_pcie_ddinit+0x1d5/0x330 [hfi1] [<ffffffffa0204851>] hfi1_init_dd+0x1d1/0x2440 [hfi1] [<ffffffff813503dc>] ? pci_write_config_word+0x1c/0x20 Read the CCE Revision register first to verify that the WFR device is responsive. If the read returns "all ones", bail out of init and fail the driver load. Reviewed-by: Mike Marciniszyn <[email protected]> Reviewed-by: Michael J. Ruhl <[email protected]> Signed-off-by: Kamenee Arumugam <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
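The check amounts to "a dead PCIe device reads back all 1's, so sanity-check a known register before trusting anything else". A hedged sketch with a stubbed register read (not the hfi1 CSR accessors):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for a CSR read; an unresponsive PCIe device typically
     * returns all 1's for any read. */
    static uint64_t read_csr_revision(void)
    {
        return ~0ull;   /* simulate an unresponsive device */
    }

    static int check_device_responsive(void)
    {
        if (read_csr_revision() == ~0ull) {
            fprintf(stderr, "device appears unresponsive, aborting init\n");
            return -1;  /* fail the driver load instead of trusting garbage */
        }
        return 0;
    }

    int main(void) { return check_device_responsive() ? 1 : 0; }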
2018-05-09IB/hfi1: Rework fault injection machineryMitko Haralanov10-358/+577
The packet fault injection code present in the HFI1 driver had some issues which not only fragment the code but also created user confusion. Furthermore, it suffered from the following issues: 1. The fault_packet method only worked for received packets. This meant that the only fault injection mode available for sent packets is fault_opcode, which did not allow for random packet drops on all egressing packets. 2. The mask available for the fault_opcode mode did not really work due to the fact that the opcode values are not bits in a bitmask but rather sequential integer values. Creating an opcode/mask pair that would successfully capture a set of packets was nearly impossible. 3. The code was fragmented and used too many debugfs entries to operate and control. This was confusing to users. 4. It did not allow filtering fault injection on a per direction basis - egress vs. ingress. In order to improve or fix the above issues, the following changes have been made: 1. The fault injection methods have been combined into a single fault injection facility. As such, the fault injection has been plugged into both the send and receive code paths. Regardless of method used the fault injection will operate on both egress and ingress packets. 2. The type of fault injection - by packet or by opcode - is now controlled by changing the boolean value of the file "opcode_mode". When the value is set to True, fault injection is done by opcode. Otherwise, by packet. 3. The masking ability has been removed in favor of a bitmap that holds opcodes of interest (one bit per opcode, a total of 256 bits). This works in tandem with the "opcode_mode" value. When the value of "opcode_mode" is False, this bitmap is ignored. When the value is True, the bitmap lists all opcodes to be considered for fault injection. By default, the bitmap is empty. When the user wants to filter by opcode, the user sets the corresponding bit in the bitmap by echo'ing the bit position into the 'opcodes' file. This gets around the issue that the set of opcodes does not lend itself to effective masks and allows for extremely fine-grained filtering by opcode. 4. fault_packet and fault_opcode methods have been combined. Hence, there is only one debugfs directory controlling the entire operation of the fault injection machinery. This reduces the number of debugfs entries and provides a more unified user experience. 5. A new control file - "direction" - is provided to allow the user to control the direction of packets that are subject to fault injection. 6. A new control file - "skip_usec" - is added that would allow the user to specify a "timeout" during which no fault injection will occur. In addition, the following bug fixes have been applied: 1. The fault injection code has been split into its own header and source files. This was done to better organize the code and support conditional compilation without littering the code with #ifdef's. 2. The method by which the TX PIO packets were being marked for drop conflicted with the way send contexts were being set up. As a result, the send context was repeatedly being reset. 3. The fault injection only makes sense when the user can control it through the debugfs entries. However, a kernel configuration can enable fault injection but keep fault injection debugfs entries disabled. Therefore, it makes sense that the HFI fault injection code depends on both. 4. Error suppression did not take into account the method by which PIO packets were being dropped. 
Therefore, even with error suppression turned on, errors would still be displayed to the screen. A large enough packet drop percentage would cause the kernel to crash because the driver would be stuck printing errors. Reviewed-by: Dennis Dalessandro <[email protected]> Reviewed-by: Don Hiatt <[email protected]> Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Mitko Haralanov <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
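A self-contained sketch of the "one bit per opcode" filtering idea described above (plain C bitmap and a random drop percentage; the names and the example opcode are illustrative, not the hfi1 debugfs interface):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* One bit per opcode (256 opcodes total): exact opcodes of interest,
     * no opcode/mask pairs. */
    static uint64_t fault_opcodes[4];
    static bool opcode_mode = true;     /* false: fault by packet instead */
    static unsigned int fault_pct = 10; /* drop ~10% of matching packets */

    static void fault_opcode_set(uint8_t opcode)
    {
        fault_opcodes[opcode / 64] |= 1ull << (opcode % 64);
    }

    static bool should_fault(uint8_t opcode)
    {
        if (opcode_mode &&
            !(fault_opcodes[opcode / 64] & (1ull << (opcode % 64))))
            return false;                     /* opcode not selected */
        return (unsigned int)(rand() % 100) < fault_pct;
    }

    int main(void)
    {
        fault_opcode_set(0x64);  /* select a single opcode of interest */
        printf("fault 0x64? %d\n", should_fault(0x64));
        printf("fault 0x0a? %d\n", should_fault(0x0a));  /* never selected */
        return 0;
    }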
2018-05-09IB/{hfi1, qib}: Add handling of kernel restartAlex Estrin4-0/+28
A warm restart will fail to unload the driver, leaving the link state potentially flapping until the BIOS resets the adapter. Correct the issue by hooking the PCI shutdown method, which will bring the port down. Cc: <[email protected]> # 4.9.x Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Alex Estrin <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
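A kernel-style sketch of the .shutdown hook idea. All example_* symbols are hypothetical placeholders assumed to exist elsewhere in a driver; this is not the hfi1/qib patch itself:

    #include <linux/pci.h>

    /* struct example_dev, example_probe, example_remove, example_pci_tbl
     * and example_quiet_link are assumed to be provided by the driver. */
    static void example_shutdown(struct pci_dev *pdev)
    {
        struct example_dev *dd = pci_get_drvdata(pdev);

        if (dd)
            example_quiet_link(dd);     /* bring the port down cleanly */
    }

    static struct pci_driver example_driver = {
        .name     = "example_hca",
        .id_table = example_pci_tbl,
        .probe    = example_probe,
        .remove   = example_remove,
        .shutdown = example_shutdown,   /* runs on reboot/kexec paths even
                                         * though .remove does not */
    };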
2018-05-09IB/hfi1: Reorder incorrect send context disableMichael J. Ruhl2-11/+35
User send context integrity bits are cleared before the context is disabled. If the send context is still processing data, any packets that need those integrity bits will cause an error and halt the send context. During the disable handling, the driver waits for the context to drain. If the context is halted, the driver will eventually time out because the context won't drain, and will then incorrectly bounce the link. Reorder the bit clearing and the context disable. Examine the software state and send context status as well as the egress status to determine if a send context is in the halted state. Promote the check macros to static functions for consistency with the new check and to follow kernel style. Remove an unused define that refers to the egress timeout. Cc: <[email protected]> # 4.9.x Reviewed-by: Mitko Haralanov <[email protected]> Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Michael J. Ruhl <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09IB/hfi1: Return correct value for device stateMichael J. Ruhl1-2/+2
The driver_pstate() function is used to map internal driver state information to externally defined states. The VERIFY_CAP and GOING_UP states are config/training states, but the mapping routine returns the POLLING value. Update the return values for VERIFY_CAP and GOING_UP to return the correct value: TRAINING. Reviewed-by: Sebastian Sanchez <[email protected]> Signed-off-by: Michael J. Ruhl <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
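The fix is a change in a state-mapping switch; a self-contained sketch of the idea (illustrative enum names, with the numeric values following the usual IB physical-state encoding of Polling=2, Training=4, LinkUp=5; not the hfi1 code):

    #include <stdio.h>

    enum example_phys_state { PHYS_POLLING = 2, PHYS_TRAINING = 4, PHYS_LINKUP = 5 };
    enum example_drv_state { DRV_POLLING, DRV_VERIFY_CAP, DRV_GOING_UP, DRV_LINKUP };

    /* Map internal driver states to the externally visible physical state;
     * the config/training states now report TRAINING instead of POLLING. */
    static enum example_phys_state driver_pstate_sketch(enum example_drv_state s)
    {
        switch (s) {
        case DRV_POLLING:
            return PHYS_POLLING;
        case DRV_VERIFY_CAP:
        case DRV_GOING_UP:
            return PHYS_TRAINING;   /* previously misreported as POLLING */
        case DRV_LINKUP:
            return PHYS_LINKUP;
        }
        return PHYS_POLLING;
    }

    int main(void)
    {
        printf("%d\n", driver_pstate_sketch(DRV_VERIFY_CAP));  /* 4 == TRAINING */
        return 0;
    }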
2018-05-09IB/hfi1: Fix fault injection init/exit issuesMike Marciniszyn1-2/+6
There are config dependent code paths that expose panics in unload paths both in this file and in debugfs_remove_recursive() because CONFIG_FAULT_INJECTION and CONFIG_FAULT_INJECTION_DEBUG_FS can be set independently. Having CONFIG_FAULT_INJECTION set and CONFIG_FAULT_INJECTION_DEBUG_FS reset causes fault_create_debugfs_attr() to return an error. The debugfs.c routines tolerate failures, but the module unload panics dereferencing a NULL in the two exit routines. If that is fixed, the dir passed to debugfs_remove_recursive comes from a memory location that was freed and potentially reused causing a segfault or corrupting memory. Here is an example of the NULL deref panic: [66866.286829] BUG: unable to handle kernel NULL pointer dereference at 0000000000000088 [66866.295602] IP: hfi1_dbg_ibdev_exit+0x2a/0x80 [hfi1] [66866.301138] PGD 858496067 P4D 858496067 PUD 8433a7067 PMD 0 [66866.307452] Oops: 0000 [#1] SMP [66866.310953] Modules linked in: hfi1(-) rdmavt rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm iw_cm ib_cm ib_core rpcsec_gss_krb5 nfsv4 dns_resolver nfsv3 nfs fscache sb_edac x86_pkg_temp_thermal intel_powerclamp vfat fat coretemp kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcbc aesni_intel iTCO_wdt iTCO_vendor_support crypto_simd mei_me glue_helper cryptd mxm_wmi ipmi_si pcspkr lpc_ich sg mei ioatdma ipmi_devintf i2c_i801 mfd_core shpchp ipmi_msghandler wmi acpi_power_meter acpi_cpufreq nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables ext4 mbcache jbd2 sd_mod mgag200 drm_kms_helper syscopyarea sysfillrect sysimgblt igb fb_sys_fops ttm ahci ptp crc32c_intel libahci pps_core drm dca libata i2c_algo_bit i2c_core [last unloaded: opa_vnic] [66866.385551] CPU: 8 PID: 7470 Comm: rmmod Not tainted 4.14.0-mam-tid-rdma #2 [66866.393317] Hardware name: Intel Corporation S2600WT2/S2600WT2, BIOS SE5C610.86B.01.01.0018.C4.072020161249 07/20/2016 [66866.405252] task: ffff88084f28c380 task.stack: ffffc90008454000 [66866.411866] RIP: 0010:hfi1_dbg_ibdev_exit+0x2a/0x80 [hfi1] [66866.417984] RSP: 0018:ffffc90008457da0 EFLAGS: 00010202 [66866.423812] RAX: 0000000000000000 RBX: ffff880857de0000 RCX: 0000000180040001 [66866.431773] RDX: 0000000180040002 RSI: ffffea0021088200 RDI: 0000000040000000 [66866.439734] RBP: ffffc90008457da8 R08: ffff88084220e000 R09: 0000000180040001 [66866.447696] R10: 000000004220e001 R11: ffff88084220e000 R12: ffff88085a31c000 [66866.455657] R13: ffffffffa07c9820 R14: ffffffffa07c9890 R15: ffff881059d78100 [66866.463618] FS: 00007f6876047740(0000) GS:ffff88085f800000(0000) knlGS:0000000000000000 [66866.472644] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [66866.479053] CR2: 0000000000000088 CR3: 0000000856357006 CR4: 00000000001606e0 [66866.487013] Call Trace: [66866.489747] remove_one+0x1f/0x220 [hfi1] [66866.494221] pci_device_remove+0x39/0xc0 [66866.498596] device_release_driver_internal+0x141/0x210 [66866.504424] driver_detach+0x3f/0x80 [66866.508409] bus_remove_driver+0x55/0xd0 [66866.512784] driver_unregister+0x2c/0x50 [66866.517164] pci_unregister_driver+0x2a/0xa0 [66866.521934] hfi1_mod_cleanup+0x10/0xaa2 [hfi1] [66866.526988] SyS_delete_module+0x171/0x250 [66866.531558] do_syscall_64+0x67/0x1b0 [66866.535644] entry_SYSCALL64_slow_path+0x25/0x25 [66866.540792] RIP: 0033:0x7f6875525c27 [66866.544777] RSP: 002b:00007ffd48528e78 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0 [66866.553224] RAX: ffffffffffffffda RBX: 0000000001cc01d0 RCX: 00007f6875525c27 [66866.561185] RDX: 00007f6875596000 RSI: 0000000000000800 RDI: 0000000001cc0238 [66866.569146] 
RBP: 0000000000000000 R08: 00007f68757e9060 R09: 00007f6875596000 [66866.577120] R10: 00007ffd48528c00 R11: 0000000000000206 R12: 00007ffd48529db4 [66866.585080] R13: 0000000000000000 R14: 0000000001cc01d0 R15: 0000000001cc0010 [66866.593040] Code: 90 0f 1f 44 00 00 48 83 3d a3 8b 03 00 00 55 48 89 e5 53 48 89 fb 74 4e 48 8d bf 18 0c 00 00 e8 9d f2 ff ff 48 8b 83 20 0c 00 00 <48> 8b b8 88 00 00 00 e8 2a 21 b3 e0 48 8b bb 20 0c 00 00 e8 0e [66866.614127] RIP: hfi1_dbg_ibdev_exit+0x2a/0x80 [hfi1] RSP: ffffc90008457da0 [66866.621885] CR2: 0000000000000088 [66866.625618] ---[ end trace c4817425783fb092 ]--- Fix by ensuring that upon failure from fault_create_debugfs_attr() the parent pointer for the routines is always set to NULL, and guards added in the exit routines to ensure that debugfs_remove_recursive() is not called when the parent pointer is NULL. Fixes: 0181ce31b260 ("IB/hfi1: Add receive fault injection feature") Cc: <[email protected]> # 4.14.x Reviewed-by: Michael J. Ruhl <[email protected]> Signed-off-by: Mike Marciniszyn <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
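A kernel-style sketch of the "NULL the parent on failure, guard the remove" pattern described in the fix (example_* names are hypothetical; this is not the hfi1 debugfs code):

    #include <linux/debugfs.h>
    #include <linux/err.h>
    #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
    #include <linux/fault-inject.h>
    static DECLARE_FAULT_ATTR(example_fault_attr);
    #endif

    struct example_ibdev_dbg {
        struct dentry *root;
        struct dentry *fault_dir;
    };

    static void example_dbg_init(struct example_ibdev_dbg *dbg)
    {
        dbg->root = debugfs_create_dir("example_ibdev", NULL);
        dbg->fault_dir = NULL;
    #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
        dbg->fault_dir = fault_create_debugfs_attr("fault", dbg->root,
                                                   &example_fault_attr);
        if (IS_ERR(dbg->fault_dir))
            dbg->fault_dir = NULL;     /* never keep an ERR_PTR around */
    #endif
    }

    static void example_dbg_exit(struct example_ibdev_dbg *dbg)
    {
        if (dbg->fault_dir)            /* guard: creation may have failed */
            debugfs_remove_recursive(dbg->fault_dir);
        dbg->fault_dir = NULL;
        debugfs_remove_recursive(dbg->root);
        dbg->root = NULL;
    }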
2018-05-09IB/hfi1: Complete check for locally terminated smpAlex Estrin1-16/+20
For LID-routed packets 'hop_cnt' is zero, therefore the current test is incomplete. Fix it by using the local MAD check for both LID-routed and directed-route MADs. Reviewed-by: Mike Mariciniszyn <[email protected]> Signed-off-by: Alex Estrin <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09IB/hfi1: Return actual error value from program_rcvarray()Michael J. Ruhl1-1/+0
A failure of program_rcvarray() is treated inconsistently by the calling function. In one case the error is returned, in a second case, the error is overwritten with EFAULT. In both cases the code path is doing the same thing, allocating memory for groups, so it should be consistent. Make the error path consistent and return the error generated by program_rcvarray(). Reviewed-by: Harish Chegondi <[email protected]> Fixes: 7e7a436ecb6e ("staging/hfi1: Add TID entry program function body") Signed-off-by: Michael J. Ruhl <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
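The fix is plain error propagation: return the callee's error instead of overwriting it. A minimal userspace sketch of the pattern (stubbed callee, not the hfi1 code):

    #include <errno.h>
    #include <stdio.h>

    static int program_rcvarray_stub(int fail) { return fail ? -ENOMEM : 0; }

    /* Propagate the callee's error instead of overwriting it with EFAULT,
     * so both call sites behave consistently. */
    static int setup_groups_sketch(int fail)
    {
        int ret = program_rcvarray_stub(fail);

        if (ret < 0) {
            fprintf(stderr, "failed to program RcvArray entries: %d\n", ret);
            return ret;              /* was: return -EFAULT */
        }
        return 0;
    }

    int main(void) { return setup_groups_sketch(1) == -ENOMEM ? 0 : 1; }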
2018-05-09IB/hfi1: Prevent LNI hang when LCB can't obtain lanesSebastian Sanchez3-20/+53
When the LCB isn't able to get any lanes operational on the first transition into mission mode, the link transfer active never happens and the LNI stays in the polling state indefinitely. Reset LCB upon receiving an 8051 interrupt for LCB to try to obtain lanes with firmware version 1.25.0 or later. Also, update the LCB reset value in other parts of the code with a macro defined to make the code more maintainable and rename functions with the link_width label to link_mode to reflect the fact that those functions set and read link related data not just the link width. Reviewed-by: Michael J. Ruhl <[email protected]> Reviewed-by: Mike Marciniszyn <[email protected]> Signed-off-by: Sebastian Sanchez <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09Merge branch 'k.o/for-rc' into k.o/wip/dl-for-nextDoug Ledford63-208/+407
Several items of conflict have arisen between the RDMA stack's for-rc branch and upcoming for-next work: 9fd4350ba895 ("IB/rxe: avoid double kfree_skb") directly conflicts with 2e47350789eb ("IB/rxe: optimize the function duplicate_request") Patches already submitted by Intel for the hfi1 driver will fail to apply cleanly without this merge Other people on the mailing list have notified that their upcoming patches also fail to apply cleanly without this merge Signed-off-by: Doug Ledford <[email protected]>
2018-05-09IB/mlx5: posting klm/mtt list inline in the send queue for reg_wrIdan Burstein1-7/+36
As with most kernel RDMA ULPs, NVMe over Fabrics (in its default "register_always=Y" mode) registers and invalidates the user buffer upon each IO. Today the mlx5 driver posts the registration work request using a scatter/gather entry for the MTT/KLM list. The fetch of the MTT/KLM list becomes the bottleneck in the number of IO operations that can be done by the NVMe over Fabrics host driver on a single adapter, as shown below. This patch adds support for an inline registration work request when the MTT/KLM list is of size <=64B. The result for NVMe over Fabrics is an increase of more than 3.5x for small IOs, as shown below; I expect other ULPs (e.g. iSER, SRP, NFS over RDMA) to see enhanced performance as well. The following results were taken against a single NVMe-oF (RoCE link layer) subsystem with a single namespace backed by null_blk using the fio benchmark (with rw=randread, numjobs=48, iodepth={16,64}, ioengine=libaio, direct=1):

ConnectX-5 (PCI width x16)
Block Size   s/g reg_wr        inline reg_wr
512B         1302.8K/34.82%    4951.9K/99.02%
1KB          1284.3K/33.86%    4232.7K/98.09%
2KB          1238.6K/34.1%     2797.5K/80.04%
4KB          1169.3K/32.46%    1941.3K/61.35%
8KB          1013.4K/30.08%    1236.6K/39.47%
16KB         695.7K/20.19%     696.9K/20.59%
32KB         350.3K/9.64%      350.6K/10.3%
64KB         175.86K/5.27%     175.9K/5.28%

ConnectX-4 (PCI width x8)
Block Size   s/g reg_wr        inline reg_wr
512B         1285.8K/42.66%    4242.7K/98.18%
1KB          1254.1K/41.74%    3569.2K/96.00%
2KB          1185.9K/39.83%    2173.9K/75.58%
4KB          1069.4K/36.46%    1343.3K/47.47%
8KB          755.1K/27.77%     748.7K/29.14%

Tested-by: Nitzan Carmi <[email protected]> Signed-off-by: Idan Burstein <[email protected]> Signed-off-by: Max Gurtovoy <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
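The decision the driver now makes is just a size threshold on the translation list; a trivial sketch of that decision only (not the WQE construction, and the threshold name is illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define MAX_INLINE_KLM_BYTES 64   /* threshold quoted in the commit message */

    /* Decide whether the MTT/KLM translation list can be embedded directly
     * in the registration WQE instead of being fetched via scatter/gather. */
    static bool use_inline_reg_wr(size_t klm_list_bytes)
    {
        return klm_list_bytes <= MAX_INLINE_KLM_BYTES;
    }

    int main(void)
    {
        printf("%d\n", use_inline_reg_wr(32));   /* 1: inline the list */
        printf("%d\n", use_inline_reg_wr(256));  /* 0: fall back to s/g */
        return 0;
    }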
2018-05-09RDMA/hns: Drop local zgid in favor of core defined variableLeon Romanovsky1-1/+0
The zgid is already provided by IB/core, so there is no need in locally defined variable, let's drop it and reuse common one. Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09IB/core: Reuse gid_table_release_one() in table allocation failureParav Pandit1-26/+16
_gid_table_setup_one() only performs GID table cache memory allocation, marks entries as invalid (free) and marks the reserved entries. At this point GID table is empty and no entries are added. On dual port device if _gid_table_setup_one() fails to allocate the gid table for 2nd port, there is no need to perform cleanup_gid_table_port() to delete GID entries, as GID table is empty. Therefore make use of existing gid_table_release_one() routine which frees the GID table memory and avoid code duplication. Reviewed-by: Daniel Jurgens <[email protected]> Signed-off-by: Parav Pandit <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Reviewed-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09IB/core: Make gid_table_reserve_default() return voidParav Pandit1-13/+5
gid_table_reserve_default() always returns zero. Make it return void and simplify error checking. rdma_port is already calculated, use that while calling gid_table_reserve_default() instead of recalculating it. Reviewed-by: Daniel Jurgens <[email protected]> Signed-off-by: Parav Pandit <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Reviewed-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09iw_cxgb4: Fix an error handling path in 'c4iw_get_dma_mr()'Christophe Jaillet1-2/+2
The error handling path of 'c4iw_get_dma_mr()' does not free resources in the correct order. If an error occurs, it can leak 'mhp->wr_waitp'. Fixes: a3f12da0e99a ("iw_cxgb4: allocate wait object for each memory object") Signed-off-by: Christophe JAILLET <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
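The general shape of the fix is the usual goto-unwind pattern, releasing resources in reverse allocation order; a small userspace sketch (illustrative structure, not the iw_cxgb4 c4iw_mr):

    #include <errno.h>
    #include <stdlib.h>

    struct mr_sketch { void *wr_waitp; void *dereg_skb; };

    static int get_dma_mr_sketch(struct mr_sketch *m)
    {
        m->wr_waitp = malloc(64);
        if (!m->wr_waitp)
            return -ENOMEM;

        m->dereg_skb = malloc(128);
        if (!m->dereg_skb)
            goto err_free_waitp;       /* unwind in reverse allocation order */

        return 0;

    err_free_waitp:
        free(m->wr_waitp);
        m->wr_waitp = NULL;
        return -ENOMEM;
    }

    int main(void)
    {
        struct mr_sketch m = { 0 };

        if (get_dma_mr_sketch(&m))
            return 1;
        free(m.dereg_skb);
        free(m.wr_waitp);
        return 0;
    }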
2018-05-09RDMA/i40iw: Avoid panic when reading back the IRQ affinity hintAndrew Boyer2-4/+4
The current code sets an affinity hint with a cpumask_t stored on the stack. This value can then be accessed through /proc/irq/*/affinity_hint/, causing a segfault or returning corrupt data. Move the cpumask_t into struct i40iw_msix_vector so it is available later. Backtrace: BUG: unable to handle kernel paging request at ffffb16e600e7c90 IP: irq_affinity_hint_proc_show+0x60/0xf0 PGD 17c0c6d067 PUD 17c0c6e067 PMD 15d4a0e067 PTE 0 Oops: 0000 [#1] SMP Modules linked in: ... CPU: 3 PID: 172543 Comm: grep Tainted: G OE ... #1 Hardware name: ... task: ffff9a5caee08000 task.stack: ffffb16e659d8000 RIP: 0010:irq_affinity_hint_proc_show+0x60/0xf0 RSP: 0018:ffffb16e659dbd20 EFLAGS: 00010086 RAX: 0000000000000246 RBX: ffffb16e659dbd20 RCX: 0000000000000000 RDX: ffffb16e600e7c90 RSI: 0000000000000003 RDI: 0000000000000046 RBP: ffffb16e659dbd88 R08: 0000000000000038 R09: 0000000000000001 R10: 0000000070803079 R11: 0000000000000000 R12: ffff9a59d1d97a00 R13: ffff9a5da47a6cd8 R14: ffff9a5da47a6c00 R15: ffff9a59d1d97a00 FS: 00007f946c31d740(0000) GS:ffff9a5dc1800000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: ffffb16e600e7c90 CR3: 00000016a4339000 CR4: 00000000007406e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: seq_read+0x12d/0x430 ? sched_clock_cpu+0x11/0xb0 proc_reg_read+0x48/0x70 __vfs_read+0x37/0x140 ? security_file_permission+0xa0/0xc0 vfs_read+0x96/0x140 SyS_read+0x58/0xc0 do_syscall_64+0x5a/0x190 entry_SYSCALL64_slow_path+0x25/0x25 RIP: 0033:0x7f946bbc97e0 RSP: 002b:00007ffdd0c4ae08 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 RAX: ffffffffffffffda RBX: 000000000096b000 RCX: 00007f946bbc97e0 RDX: 000000000096b000 RSI: 00007f946a2f0000 RDI: 0000000000000004 RBP: 0000000000001000 R08: 00007f946a2ef011 R09: 000000000000000a R10: 0000000000001000 R11: 0000000000000246 R12: 00007f946a2f0000 R13: 0000000000000004 R14: 0000000000000000 R15: 00007f946a2f0000 Code: b9 08 00 00 00 49 89 c6 48 89 df 31 c0 4d 8d ae d8 00 00 00 f3 48 ab 4c 89 ef e8 6c 9a 56 00 49 8b 96 30 01 00 00 48 85 d2 74 3f <48> 8b 0a 48 89 4d 98 48 8b 4a 08 48 89 4d a0 48 8b 4a 10 48 89 RIP: irq_affinity_hint_proc_show+0x60/0xf0 RSP: ffffb16e659dbd20 CR2: ffffb16e600e7c90 Fixes: 8e06af711bf2 ("i40iw: add main, hdr, status") Signed-off-by: Andrew Boyer <[email protected]> Reviewed-by: Shiraz Saleem <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
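A kernel-style sketch of the lifetime fix: the cpumask handed to irq_set_affinity_hint() must outlive the hint, so it lives in the long-lived vector struct rather than on the stack (example_* names are hypothetical, not the i40iw structures):

    #include <linux/interrupt.h>
    #include <linux/cpumask.h>
    #include <linux/types.h>

    struct example_msix_vector {
        u32 irq;
        u32 cpu_affinity;
        cpumask_t mask;          /* lives as long as the hint is registered */
    };

    static void example_set_hint(struct example_msix_vector *v)
    {
        cpumask_clear(&v->mask);
        cpumask_set_cpu(v->cpu_affinity, &v->mask);
        irq_set_affinity_hint(v->irq, &v->mask);   /* kernel keeps this pointer */
    }

    static void example_clear_hint(struct example_msix_vector *v)
    {
        irq_set_affinity_hint(v->irq, NULL);       /* drop it before freeing v */
    }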
2018-05-09RDMA/i40iw: Avoid reference leaks when processing the AEQAndrew Boyer1-2/+2
In this switch there is a reference held on the QP. 'continue' will grab the next event without releasing the reference, causing a leak. Change it to 'break' to drop the reference before grabbing the next event. Fixes: 4e9042e647ff ("i40iw: add hw and utils files") Signed-off-by: Andrew Boyer <[email protected]> Reviewed-by: Shiraz Saleem <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
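A minimal userspace sketch of the 'continue' vs 'break' difference inside a loop that holds a per-iteration reference (toy refcount, not the i40iw AEQ code):

    #include <stdio.h>

    struct qp { int refcnt; };

    static void qp_put(struct qp *qp) { qp->refcnt--; }

    /* Every arm of the switch must fall out through the common qp_put();
     * 'continue' would jump to the next event and skip the release. */
    static void process_events(struct qp *qp, const int *events, int n)
    {
        for (int i = 0; i < n; i++) {
            qp->refcnt++;            /* reference taken for this event */
            switch (events[i]) {
            case 1:
                /* handle event type 1 ... */
                break;               /* was: continue (leaked the reference) */
            default:
                break;
            }
            qp_put(qp);              /* always reached now */
        }
    }

    int main(void)
    {
        struct qp qp = { 0 };
        const int ev[] = { 1, 2, 1 };

        process_events(&qp, ev, 3);
        printf("refcnt=%d\n", qp.refcnt);   /* 0: no leak */
        return qp.refcnt;
    }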
2018-05-09RDMA/i40iw: Avoid panic when objects are being created and destroyedAndrew Boyer2-2/+10
A panic occurs when there is a newly-registered element on the QP/CQ MR list waiting to be attached, but a different MR is deregistered. The current code only checks for whether the list is empty, not whether the element being deregistered is actually on the list. Fix the panic by adding a boolean to track if the object is on the list. Fixes: d37498417947 ("i40iw: add files for iwarp interface") Signed-off-by: Andrew Boyer <[email protected]> Reviewed-by: Shiraz Saleem <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
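A small userspace sketch of the "track whether the element is actually on the list" idea (hand-rolled doubly linked list standing in for the kernel list; not the i40iw structures):

    #include <stdbool.h>
    #include <stdio.h>

    struct mr_entry {
        struct mr_entry *prev, *next;
        bool on_list;                 /* the flag added by the fix */
    };

    static void mr_attach(struct mr_entry *head, struct mr_entry *mr)
    {
        mr->next = head->next;
        mr->prev = head;
        if (head->next)
            head->next->prev = mr;
        head->next = mr;
        mr->on_list = true;
    }

    /* Only unlink entries that were actually attached; checking merely
     * "is the list non-empty" can remove the wrong element. */
    static void mr_detach(struct mr_entry *mr)
    {
        if (!mr->on_list)
            return;
        mr->prev->next = mr->next;
        if (mr->next)
            mr->next->prev = mr->prev;
        mr->on_list = false;
    }

    int main(void)
    {
        struct mr_entry head = { 0 }, a = { 0 }, b = { 0 };

        mr_attach(&head, &a);
        mr_detach(&b);                /* never attached: safely a no-op */
        mr_detach(&a);
        printf("list empty: %d\n", head.next == NULL);
        return 0;
    }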
2018-05-09RDMA/hns: Fix the bug with NULL pointeroulijun1-1/+1
When the last QP of the eight QPs does not exist in the hns_roce_v1_mr_free_work_fn function, printing the qpn of hr_qp may trigger a call trace due to a NULL pointer dereference. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/hns: Set NULL for __internal_mroulijun1-0/+1
This patch mainly configures the value of __internal_mr in mr_free_pd. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/hns: Enable inner_pa_vld filed of mptoulijun1-0/+2
When the inner_pa_vld field of the mpt is enabled, pa0 and pa1 will be valid and the hardware will use them directly instead of the base address of the pbl. As a result, it can reduce the delay. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/hns: Set desc_dma_addr for zero when free cmq descoulijun1-0/+2
In order to avoid illegal use of the desc_dma_addr of the ring, set it to zero when freeing the cmq desc. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/hns: Fix the bug with rq sgeoulijun1-2/+2
When multiple RQ SGEs are received, the invalid lkey should be tagged for the last non-zero length SGE when some of the SGEs have zero length. This patch fixes it. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/hns: Not support qp transition from reset to reset for hip06oulijun1-1/+8
Because the hip06 hardware does not support the QP transition from reset to reset state, an errno needs to be returned when a QP transitions from reset to reset. This patch fixes it. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/hns: Add return operation when configured global param failoulijun1-0/+1
When the configure global param function fails, it should return directly so that the initialization flow stops. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/hns: Update convert function of endian formatoulijun1-1/+1
Because the sys_image_guid of the ib_device_attr structure is __be64, cpu_to_be64 needs to be used for the conversion. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/hns: Load the RoCE driver automaticallyoulijun1-0/+2
To enable the kernel to load the hns-roce-hw-v2 driver automatically when a matching device is plugged into the PCI bus, create a MODULE_DEVICE_TABLE to expose the pci_table of hns-roce-hw-v2 to userspace. Signed-off-by: Lijun Ou <[email protected]> Reported-by: Zhou Wang <[email protected]> Tested-by: Xiaojun Tan <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
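A kernel-style sketch of the mechanism: exporting the PCI id table via MODULE_DEVICE_TABLE lets udev/modprobe autoload the module when a matching device appears (the vendor/device ids below are illustrative, not taken from the hns patch):

    #include <linux/module.h>
    #include <linux/pci.h>

    static const struct pci_device_id example_pci_tbl[] = {
        { PCI_DEVICE(0x19e5, 0xa222) },   /* hypothetical vendor/device ids */
        { 0 }
    };
    /* Emits the modalias information consumed by modprobe/udev. */
    MODULE_DEVICE_TABLE(pci, example_pci_tbl);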
2018-05-09RDMA/hns: Bugfix for rq record db for kerneloulijun1-0/+1
When the RQ record doorbell is used for kernel QPs, the rdb_en of hr_qp needs to be set to 1 and the DMA address of the RQ record doorbell needs to be configured in the QP context. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/hns: Add rq inline flags judgementoulijun1-6/+12
The rqie field of the QP context needs to be set according to the configured RQ inline flags. Besides, the RQ inline flags need to be checked to decide whether to post an inline RQ WQE. Signed-off-by: Lijun Ou <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09nvmet,rxe: defer ip datagram sending to taskletAlexandru Moise1-9/+1
This addresses 3 separate problems: 1. When using NVME over Fabrics we may end up sending IP packets in interrupt context, we should defer this work to a tasklet. [ 50.939957] WARNING: CPU: 3 PID: 0 at kernel/softirq.c:161 __local_bh_enable_ip+0x1f/0xa0 [ 50.942602] CPU: 3 PID: 0 Comm: swapper/3 Kdump: loaded Tainted: G W 4.17.0-rc3-ARCH+ #104 [ 50.945466] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-20171110_100015-anatol 04/01/2014 [ 50.948163] RIP: 0010:__local_bh_enable_ip+0x1f/0xa0 [ 50.949631] RSP: 0018:ffff88009c183900 EFLAGS: 00010006 [ 50.951029] RAX: 0000000080010403 RBX: 0000000000000200 RCX: 0000000000000001 [ 50.952636] RDX: 0000000000000000 RSI: 0000000000000200 RDI: ffffffff817e04ec [ 50.954278] RBP: ffff88009c183910 R08: 0000000000000001 R09: 0000000000000614 [ 50.956000] R10: ffffea00021d5500 R11: 0000000000000001 R12: ffffffff817e04ec [ 50.957779] R13: 0000000000000000 R14: ffff88009566f400 R15: ffff8800956c7000 [ 50.959402] FS: 0000000000000000(0000) GS:ffff88009c180000(0000) knlGS:0000000000000000 [ 50.961552] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 50.963798] CR2: 000055c4ec0ccac0 CR3: 0000000002209001 CR4: 00000000000606e0 [ 50.966121] Call Trace: [ 50.966845] <IRQ> [ 50.967497] __dev_queue_xmit+0x62d/0x690 [ 50.968722] dev_queue_xmit+0x10/0x20 [ 50.969894] neigh_resolve_output+0x173/0x190 [ 50.971244] ip_finish_output2+0x2b8/0x370 [ 50.972527] ip_finish_output+0x1d2/0x220 [ 50.973785] ? ip_finish_output+0x1d2/0x220 [ 50.975010] ip_output+0xd4/0x100 [ 50.975903] ip_local_out+0x3b/0x50 [ 50.976823] rxe_send+0x74/0x120 [ 50.977702] rxe_requester+0xe3b/0x10b0 [ 50.978881] ? ip_local_deliver_finish+0xd1/0xe0 [ 50.980260] rxe_do_task+0x85/0x100 [ 50.981386] rxe_run_task+0x2f/0x40 [ 50.982470] rxe_post_send+0x51a/0x550 [ 50.983591] nvmet_rdma_queue_response+0x10a/0x170 [ 50.985024] __nvmet_req_complete+0x95/0xa0 [ 50.986287] nvmet_req_complete+0x15/0x60 [ 50.987469] nvmet_bio_done+0x2d/0x40 [ 50.988564] bio_endio+0x12c/0x140 [ 50.989654] blk_update_request+0x185/0x2a0 [ 50.990947] blk_mq_end_request+0x1e/0x80 [ 50.991997] nvme_complete_rq+0x1cc/0x1e0 [ 50.993171] nvme_pci_complete_rq+0x117/0x120 [ 50.994355] __blk_mq_complete_request+0x15e/0x180 [ 50.995988] blk_mq_complete_request+0x6f/0xa0 [ 50.997304] nvme_process_cq+0xe0/0x1b0 [ 50.998494] nvme_irq+0x28/0x50 [ 50.999572] __handle_irq_event_percpu+0xa2/0x1c0 [ 51.000986] handle_irq_event_percpu+0x32/0x80 [ 51.002356] handle_irq_event+0x3c/0x60 [ 51.003463] handle_edge_irq+0x1c9/0x200 [ 51.004473] handle_irq+0x23/0x30 [ 51.005363] do_IRQ+0x46/0xd0 [ 51.006182] common_interrupt+0xf/0xf [ 51.007129] </IRQ> 2. Work must always be offloaded to tasklet for rxe_post_send_kernel() when using NVMEoF in order to solve lock ordering between neigh->ha_lock seqlock and the nvme queue lock: [ 77.833783] Possible interrupt unsafe locking scenario: [ 77.833783] [ 77.835831] CPU0 CPU1 [ 77.837129] ---- ---- [ 77.838313] lock(&(&n->ha_lock)->seqcount); [ 77.839550] local_irq_disable(); [ 77.841377] lock(&(&nvmeq->q_lock)->rlock); [ 77.843222] lock(&(&n->ha_lock)->seqcount); [ 77.845178] <Interrupt> [ 77.846298] lock(&(&nvmeq->q_lock)->rlock); [ 77.847986] [ 77.847986] *** DEADLOCK *** 3. 
Same goes for the lock ordering between sch->q.lock and nvme queue lock: [ 47.634271] Possible interrupt unsafe locking scenario: [ 47.634271] [ 47.636452] CPU0 CPU1 [ 47.637861] ---- ---- [ 47.639285] lock(&(&sch->q.lock)->rlock); [ 47.640654] local_irq_disable(); [ 47.642451] lock(&(&nvmeq->q_lock)->rlock); [ 47.644521] lock(&(&sch->q.lock)->rlock); [ 47.646480] <Interrupt> [ 47.647263] lock(&(&nvmeq->q_lock)->rlock); [ 47.648492] [ 47.648492] *** DEADLOCK *** Using NVMEoF after this patch seems to finally be stable; without it, rxe eventually deadlocks the whole system and causes RCU stalls. Signed-off-by: Alexandru Moise <[email protected]> Reviewed-by: Zhu Yanjun <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
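A kernel-style sketch of the deferral pattern: queue the skb and schedule a tasklet so the actual dev_queue_xmit() always runs in softirq context rather than hard-irq context (example_* names are illustrative, not the rxe code):

    #include <linux/interrupt.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    struct example_qp {
        struct sk_buff_head send_q;
        struct tasklet_struct send_task;
    };

    static void example_send_task(unsigned long data)
    {
        struct example_qp *qp = (struct example_qp *)data;
        struct sk_buff *skb;

        while ((skb = skb_dequeue(&qp->send_q)))
            dev_queue_xmit(skb);          /* now in softirq, BH-safe context */
    }

    static void example_qp_init(struct example_qp *qp)
    {
        skb_queue_head_init(&qp->send_q);
        tasklet_init(&qp->send_task, example_send_task, (unsigned long)qp);
    }

    /* Callable from any context, including an NVMe completion interrupt. */
    static void example_queue_send(struct example_qp *qp, struct sk_buff *skb)
    {
        skb_queue_tail(&qp->send_q, skb);
        tasklet_schedule(&qp->send_task); /* the actual xmit happens later */
    }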
2018-05-09i40iw: Use correct address in dst_neigh_lookup for IPv6Mustafa Ismail1-1/+1
Use of an incorrect structure address for the IPv6 neighbor lookup causes connections to IPv6 addresses to fail. Fix this by using the correct address in the call to dst_neigh_lookup. Fixes: f27b4746f378 ("i40iw: add connection management code") Signed-off-by: Mustafa Ismail <[email protected]> Signed-off-by: Shiraz Saleem <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09i40iw: Fix memory leak in error path of create QPMustafa Ismail1-1/+1
If i40iw_allocate_dma_mem fails when creating a QP, the memory allocated for the QP structure using kzalloc is not freed because iwqp->allocated_buffer is used to free the memory and it is not set up until later. Fix this by setting iwqp->allocated_buffer before allocating the DMA memory. Fixes: d37498417947 ("i40iw: add files for iwarp interface") Signed-off-by: Mustafa Ismail <[email protected]> Signed-off-by: Shiraz Saleem <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09RDMA/mlx5: Use proper spec flow label typeDaria Velikovsky1-1/+1
Flow label is defined as u32 in the ipv6 flow spec, but is used internally in the flow spec parsing as u8. That was causing loss of part of the flow_label value. Fixes: 2d1e697e9b716 ('IB/mlx5: Add support to match inner packet fields') Reviewed-by: Maor Gottlieb <[email protected]> Signed-off-by: Daria Velikovsky <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
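The bug class is a silent narrowing conversion; a two-line demonstration of what storing the 20-bit flow label in a u8 does (values are arbitrary examples):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t flow_label = 0x000fedcb;   /* 20-bit IPv6 flow label */
        uint8_t truncated = (uint8_t)flow_label;

        /* Parsing the spec with a u8 silently drops the upper bits. */
        printf("full: 0x%05x  truncated: 0x%02x\n", flow_label, truncated);
        return 0;
    }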
2018-05-09RDMA/mlx5: Don't assume that medium blueFlame register existsYishai Hadas1-7/+11
A user can leave the system without medium BlueFlame registers; however, the code assumed that at least one such register exists. This patch fixes that assumption. Fixes: c1be5232d21d ("IB/mlx5: Fix micro UAR allocator") Reported-by: Rohit Zambre <[email protected]> Signed-off-by: Yishai Hadas <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2018-05-09IB/hfi1: Use after free race condition in send context error pathMichael J. Ruhl1-0/+4
A pio send egress error can occur when the PSM library attempts to send a bad packet. That issue is still being investigated. The pio error interrupt handler then attempts to progress the recovery of the errored pio send context. Code inspection reveals that the handling lacks the necessary locking if that recovery interleaves with a PSM close of the "context" object that contains the pio send context. The lack of the locking can cause the recovery to access the already freed pio send context object and incorrectly deduce that the pio send context is actually a kernel pio send context as shown by the NULL deref stack below: [<ffffffff8143d78c>] _dev_info+0x6c/0x90 [<ffffffffc0613230>] sc_restart+0x70/0x1f0 [hfi1] [<ffffffff816ab124>] ? __schedule+0x424/0x9b0 [<ffffffffc06133c5>] sc_halted+0x15/0x20 [hfi1] [<ffffffff810aa3ba>] process_one_work+0x17a/0x440 [<ffffffff810ab086>] worker_thread+0x126/0x3c0 [<ffffffff810aaf60>] ? manage_workers.isra.24+0x2a0/0x2a0 [<ffffffff810b252f>] kthread+0xcf/0xe0 [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40 [<ffffffff816b8798>] ret_from_fork+0x58/0x90 [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40 This is the best case scenario and other scenarios can corrupt the already freed memory. Fix by adding the necessary locking in the pio send context error handler. Cc: <[email protected]> # 4.9.x Reviewed-by: Mike Marciniszyn <[email protected]> Reviewed-by: Dennis Dalessandro <[email protected]> Signed-off-by: Michael J. Ruhl <[email protected]> Signed-off-by: Dennis Dalessandro <[email protected]> Signed-off-by: Doug Ledford <[email protected]>