Date | Commit message | Author | Files changed, lines -/+
2015-10-28 | RDS/IW: Convert to new memory registration API | Sagi Grimberg | 3 files, -115/+75
Get rid of fast_reg page list and its construction. Instead, just pass the RDS sg list to ib_map_mr_sg and post the new ib_reg_wr. This is done both for server IW RDMA_READ registration and the client remote key registration.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Acked-by: Santosh Shilimkar <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
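A minimal sketch of the converted ULP-side flow these "Port to new memory registration API" patches drive at, assuming the 4.4-era ib_map_mr_sg/ib_reg_wr signatures recalled from memory; the ulp_fast_reg() helper, its access flags and surrounding context are hypothetical, not taken from the RDS patch:

#include <rdma/ib_verbs.h>

/* Hypothetical ULP helper: register an already DMA-mapped sg list
 * through a pre-allocated fast-registration MR and post the new
 * IB_WR_REG_MR work request. */
static int ulp_fast_reg(struct ib_qp *qp, struct ib_mr *mr,
			struct scatterlist *sg, int sg_nents)
{
	struct ib_reg_wr reg_wr = { };
	struct ib_send_wr *bad_wr;
	int n;

	/* The core and the provider build the page list for us. */
	n = ib_map_mr_sg(mr, sg, sg_nents, PAGE_SIZE);
	if (n < sg_nents)
		return n < 0 ? n : -EINVAL;

	/* The registration WR now carries only the MR, key and access. */
	reg_wr.wr.opcode = IB_WR_REG_MR;
	reg_wr.wr.send_flags = IB_SEND_SIGNALED;
	reg_wr.mr = mr;
	reg_wr.key = mr->rkey;
	reg_wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ |
			IB_ACCESS_REMOTE_WRITE;

	return ib_post_send(qp, &reg_wr.wr, &bad_wr);
}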
2015-10-28 | svcrdma: Port to new memory registration API | Sagi Grimberg | 3 files, -61/+55
Instead of maintaining a fastreg page list, keep an sg table and convert an array of pages to a sg list. Then call ib_map_mr_sg and construct ib_reg_wr.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Tested-by: Steve Wise <[email protected]>
Tested-by: Selvin Xavier <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | xprtrdma: Port to new memory registration API | Sagi Grimberg | 2 files, -52/+69
Instead of maintaining a fastreg page list, keep an sg table and convert an array of pages to a sg list. Then call ib_map_mr_sg and construct ib_reg_wr.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Tested-by: Steve Wise <[email protected]>
Tested-by: Selvin Xavier <[email protected]>
Reviewed-by: Chuck Lever <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | iser-target: Port to new memory registration API | Sagi Grimberg | 2 files, -104/+28
Remove fastreg page list allocation as the page vector is now private to the provider. Instead of constructing the page list and fast_reg work request, call ib_map_mr_sg and construct ib_reg_wr.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | IB/iser: Port to new fast registration API | Sagi Grimberg | 3 files, -52/+28
Remove fastreg page list allocation as the page vector is now private to the provider. Instead of constructing the page list and fast_reg work request, call ib_map_mr_sg and construct ib_reg_wr.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | RDMA/nes: Support the new memory registration API | Sagi Grimberg | 2 files, -1/+120
Support the new memory registration API by allocating a private page list array in nes_mr and populating it when nes_map_mr_sg is invoked. Also, support IB_WR_REG_MR by duplicating the IB_WR_FAST_REG_MR handling, taking the needed information from different places:
- page_size, iova, length (ib_mr)
- page array (nes_mr)
- key, access flags (ib_reg_wr)
The IB_WR_FAST_REG_MR handlers will be removed later, once all the ULPs are converted.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | IB/qib: Support the new memory registration API | Sagi Grimberg | 4 files, -1/+104
Support the new memory registration API by allocating a private page list array in qib_mr and populating it when qib_map_mr_sg is invoked. Also, support IB_WR_REG_MR by duplicating qib_fastreg_mr, taking the needed information from different places:
- page_size, iova, length (ib_mr)
- page array (qib_mr)
- key, access flags (ib_reg_wr)
The IB_WR_FAST_REG_MR handlers will be removed later, once all the ULPs are converted.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
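The provider-side half of the pattern described in these "Support the new memory registration API" commits looks roughly like the sketch below. drv_mr, drv_set_page and drv_map_mr_sg are hypothetical stand-ins for the per-driver versions (qib_map_mr_sg, nes_map_mr_sg and friends), and the ib_sg_to_pages signature is recalled from the 4.4-era header rather than quoted from these patches:

#include <rdma/ib_verbs.h>

/* Hypothetical driver-private MR holding the page list (stands in
 * for qib_mr, nes_mr, c4iw_mr, ...). */
struct drv_mr {
	struct ib_mr	ibmr;
	u64		*pages;		/* private page list array */
	u32		npages;
	u32		max_pages;
};

/* set_page callback: the core hands us one page address at a time. */
static int drv_set_page(struct ib_mr *ibmr, u64 addr)
{
	struct drv_mr *mr = container_of(ibmr, struct drv_mr, ibmr);

	if (mr->npages == mr->max_pages)
		return -ENOMEM;

	mr->pages[mr->npages++] = addr;
	return 0;
}

/* The driver's map_mr_sg verb: reset the private list and let the
 * generic helper walk the scatterlist. */
static int drv_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
			 int sg_nents)
{
	struct drv_mr *mr = container_of(ibmr, struct drv_mr, ibmr);

	mr->npages = 0;
	return ib_sg_to_pages(ibmr, sg, sg_nents, drv_set_page);
}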
2015-10-28 | iw_cxgb4: Support the new memory registration API | Sagi Grimberg | 4 files, -0/+120
Support the new memory registration API by allocating a private page list array in c4iw_mr and populating it when c4iw_map_mr_sg is invoked. Also, support IB_WR_REG_MR by duplicating build_fastreg, taking the needed information from different places:
- page_size, iova, length (ib_mr)
- page array (c4iw_mr)
- key, access flags (ib_reg_wr)
The IB_WR_FAST_REG_MR handlers will be removed later, once all the ULPs are converted.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Tested-by: Steve Wise <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | RDMA/cxgb3: Support the new memory registration API | Sagi Grimberg | 3 files, -0/+83
Support the new memory registration API by allocating a private page list array in iwch_mr and populating it when iwch_map_mr_sg is invoked. Also, support IB_WR_REG_MR by duplicating build_fastreg, taking the needed information from different places:
- page_size, iova, length (ib_mr)
- page array (iwch_mr)
- key, access flags (ib_reg_wr)
The IB_WR_FAST_REG_MR handlers will be removed later, once all the ULPs are converted.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | RDMA/ocrdma: Support the new memory registration API | Sagi Grimberg | 4 files, -0/+96
Support the new memory registration API by allocating a private page list array in ocrdma_mr and populating it when ocrdma_map_mr_sg is invoked. Also, support IB_WR_REG_MR by duplicating the IB_WR_FAST_REG_MR handling, but taking the needed information from different places:
- page_size, iova, length, access flags (ib_mr)
- page array (ocrdma_mr)
- key (ib_reg_wr)
The IB_WR_FAST_REG_MR handlers will be removed later, once all the ULPs are converted.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | IB/mlx4: Support the new memory registration API | Sagi Grimberg | 5 files, -6/+132
Support the new memory registration API by allocating a private page list array in mlx4_ib_mr and populating it when mlx4_ib_map_mr_sg is invoked. Also, support IB_WR_REG_MR by building the same WQE as for IB_WR_FAST_REG_MR, just taking the needed information from different places:
- page_size, iova, length, access flags (ib_mr)
- page array (mlx4_ib_mr)
- key (ib_reg_wr)
The IB_WR_FAST_REG_MR handlers will be removed later, once all the ULPs are converted.
Signed-off-by: Sagi Grimberg <[email protected]>
Tested-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | IB/mlx5: Support the new memory registration API | Sagi Grimberg | 5 files, -0/+189
Support the new memory registration API by allocating a private page list array in mlx5_ib_mr and populating it when mlx5_ib_map_mr_sg is invoked. Also, support IB_WR_REG_MR by building the same WQE as for IB_WR_FAST_REG_MR, just taking the needed information from different places:
- page_size, iova, length, access flags (ib_mr)
- page array (mlx5_ib_mr)
- key (ib_reg_wr)
The IB_WR_FAST_REG_MR handlers will be removed later, once all the ULPs are converted.
Signed-off-by: Sagi Grimberg <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | IB/mlx5: Remove dead fmr code | Sagi Grimberg | 1 file, -25/+0
Just function declarations - no need for those lying around. If for some reason someone wants FMR support in mlx5, it should be easy enough to restore a few structs.
Signed-off-by: Sagi Grimberg <[email protected]>
Reviewed-by: Bart Van Assche <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | IB/core: Introduce new fast registration API | Sagi Grimberg | 2 files, -0/+151
The new fast registration verb ib_map_mr_sg receives a scatterlist and converts it to a page list under the verbs API, thus hiding the specific HW mapping details away from the consumer. The provider drivers are given a generic helper, ib_sg_to_pages, that converts a scatterlist into a vector of page addresses. The drivers can still perform any HW-specific page address setting by passing a set_page function pointer, which will be invoked for each page address. This allows drivers to avoid keeping shadow page vectors and converting them to HW-specific translations with extra copies. This API will allow ULPs to remove the duplicated code of constructing a page vector from a given sg list. The send work request ib_reg_wr also shrinks, as it now contains only the mr, key and access flags.
Signed-off-by: Sagi Grimberg <[email protected]>
Tested-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
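To round out the two sketches above, here is a hedged sketch of the surrounding MR lifecycle under the new verb: the MR is allocated once, remapped per I/O with ib_map_mr_sg, and locally invalidated before reuse. The ulp_* names are hypothetical and the call signatures are recalled from the 4.4-era API, not quoted from this commit:

#include <rdma/ib_verbs.h>

/* Allocate one fast-registration MR that can cover up to max_pages
 * pages per registration (hypothetical helper). */
static struct ib_mr *ulp_alloc_reg_mr(struct ib_pd *pd, u32 max_pages)
{
	return ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, max_pages);
}

/* Locally invalidate the previous registration so the same MR can be
 * remapped with ib_map_mr_sg for the next I/O. */
static int ulp_invalidate_mr(struct ib_qp *qp, struct ib_mr *mr)
{
	struct ib_send_wr inv_wr = { };
	struct ib_send_wr *bad_wr;

	inv_wr.opcode = IB_WR_LOCAL_INV;
	inv_wr.send_flags = IB_SEND_SIGNALED;
	inv_wr.ex.invalidate_rkey = mr->rkey;

	return ib_post_send(qp, &inv_wr, &bad_wr);
}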
2015-10-28 | Merge branch 'wr-cleanup' into k.o/for-4.4 | Doug Ledford | 63 files, -988/+1152
2015-10-28 | Merge branch 'wr-cleanup' of git://git.infradead.org/users/hch/rdma into wr-cleanup | Doug Ledford | 98 files, -1247/+1698
Signed-off-by: Doug Ledford <[email protected]>
Conflicts:
  drivers/infiniband/ulp/isert/ib_isert.c - Commit 4366b19ca5eb (iser-target: Change the recv buffers posting logic) changed the logic in isert_put_datain() and had to be hand merged
2015-10-28 | IB/ucma: Take the network namespace from the process | Guy Shapiro | 1 file, -2/+3
Add support for network namespaces from user space. This is done by passing the network namespace of the process instead of init_net.
Signed-off-by: Haggai Eran <[email protected]>
Signed-off-by: Yotam Kenneth <[email protected]>
Signed-off-by: Shachar Raindel <[email protected]>
Signed-off-by: Guy Shapiro <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | IB/cma: Add support for network namespaces | Guy Shapiro | 14 files, -34/+52
Add support for network namespaces in the ib_cma module. This is accomplished by:
1. Adding a network namespace parameter to rdma_create_id. This parameter is used to populate the network namespace field in rdma_id_private. rdma_create_id keeps a reference on the network namespace.
2. Using the network namespace from the rdma_id instead of init_net inside of ib_cma, when listening on an ID and when looking for an ID for an incoming request.
3. Decrementing the reference count for the appropriate network namespace when calling rdma_destroy_id.
In order to preserve the current behavior, init_net is passed when calling from other modules.
Signed-off-by: Guy Shapiro <[email protected]>
Signed-off-by: Haggai Eran <[email protected]>
Signed-off-by: Yotam Kenneth <[email protected]>
Signed-off-by: Shachar Raindel <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
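A hedged sketch of what the new rdma_create_id signature implies for callers; the prototype is recalled from the 4.4-era <rdma/rdma_cm.h>, so treat it as an assumption, and cm_handler/example_create_id are hypothetical:

#include <rdma/rdma_cm.h>
#include <net/net_namespace.h>

/* Hypothetical no-op CM event handler. */
static int cm_handler(struct rdma_cm_id *id, struct rdma_cm_event *event)
{
	return 0;
}

static struct rdma_cm_id *example_create_id(void)
{
	/*
	 * Kernel-internal callers preserve the old behaviour by passing
	 * &init_net explicitly; ucma instead passes the calling
	 * process's namespace (current->nsproxy->net_ns).
	 */
	return rdma_create_id(&init_net, cm_handler, NULL,
			      RDMA_PS_TCP, IB_QPT_RC);
}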
2015-10-28 | IB/cma: Separate port allocation to network namespaces | Haggai Eran | 1 file, -24/+70
Keep a struct for each network namespace containing the IDRs for the RDMA CM port spaces. The struct is created dynamically using the generic_net mechanism. This patch is internal infrastructure work for the following patches. In this patch, init_net is statically used as the network namespace for the new port-space API.
Signed-off-by: Haggai Eran <[email protected]>
Signed-off-by: Yotam Kenneth <[email protected]>
Signed-off-by: Shachar Raindel <[email protected]>
Signed-off-by: Guy Shapiro <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
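A hedged sketch of per-namespace port-space storage built on the stock pernet ("generic_net") machinery; the struct layout and the cma_pernet_* names are illustrative guesses at what such infrastructure looks like, not lifted from the patch:

#include <linux/idr.h>
#include <net/net_namespace.h>
#include <net/netns/generic.h>

static unsigned int cma_pernet_id;

/* One instance of this struct is allocated per network namespace. */
struct cma_pernet {
	struct idr tcp_ps;
	struct idr udp_ps;
	struct idr ipoib_ps;
	struct idr ib_ps;
};

static int cma_init_net(struct net *net)
{
	struct cma_pernet *pernet = net_generic(net, cma_pernet_id);

	idr_init(&pernet->tcp_ps);
	idr_init(&pernet->udp_ps);
	idr_init(&pernet->ipoib_ps);
	idr_init(&pernet->ib_ps);
	return 0;
}

static void cma_exit_net(struct net *net)
{
	struct cma_pernet *pernet = net_generic(net, cma_pernet_id);

	idr_destroy(&pernet->tcp_ps);
	idr_destroy(&pernet->udp_ps);
	idr_destroy(&pernet->ipoib_ps);
	idr_destroy(&pernet->ib_ps);
}

static struct pernet_operations cma_pernet_ops = {
	.init = cma_init_net,
	.exit = cma_exit_net,
	.id = &cma_pernet_id,
	.size = sizeof(struct cma_pernet),
};

/* At module init: register_pernet_subsys(&cma_pernet_ops); */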
2015-10-28 | IB/addr: Pass network namespace as a parameter | Guy Shapiro | 3 files, -9/+25
Add network namespace support to the ib_addr module. For that, all the address resolution and matching should be done using the appropriate namespace instead of init_net. This is achieved by:
1. Adding an explicit network namespace argument to exported functions that require a namespace.
2. Saving the namespace in the rdma_addr_client structure.
3. Using it when calling networking functions.
In order to preserve the behavior of calling modules, &init_net is passed as the parameter in calls from other modules. This is modified as namespace support is added on more levels.
Signed-off-by: Haggai Eran <[email protected]>
Signed-off-by: Yotam Kenneth <[email protected]>
Signed-off-by: Shachar Raindel <[email protected]>
Signed-off-by: Guy Shapiro <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | IB/iser: Enable SG clustering | Sagi Grimberg | 1 file, -1/+1
iser is perfectly capable of supporting SG clustering, as it translates the SG list to a page vector. Enabling SG clustering can dramatically reduce the number of SG elements, which doesn't make much of a difference at this point, but with arbitrary SG list support, reducing the number of SG elements can benefit greatly as it would reduce the length of the HW descriptors array.
Signed-off-by: Sagi Grimberg <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-28 | IB/iser: set block queue_virt_boundary | Sagi Grimberg | 4 files, -326/+18
The block layer can reliably guarantee that SG lists won't contain gaps (page unaligned) if a driver sets the queue virt_boundary. With this setting the block layer will:
- refuse merges if bios are not aligned to the virtual boundary
- split bios/requests that are not aligned to the virtual boundary
- or, bounce buffer SG_IOs that are not aligned to the virtual boundary
Since iser is working with a 4K page size, set the virt_boundary to 4K pages. With this setting, we can now safely remove the bounce buffering logic in iser.
Signed-off-by: Sagi Grimberg <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
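A hedged sketch of what setting the boundary amounts to for a SCSI LLD such as iser; the hook name and wiring are illustrative rather than copied from the iser patch, blk_queue_virt_boundary is the block-layer setter this commit refers to, and SZ_4K comes from <linux/sizes.h>:

#include <linux/blkdev.h>
#include <linux/sizes.h>
#include <scsi/scsi_device.h>

/* Hypothetical slave_alloc-style hook: advertise a 4K virtual
 * boundary so the block layer never hands the driver a gapped
 * (page-unaligned) SG list. */
static int example_slave_alloc(struct scsi_device *sdev)
{
	/* Mask of SZ_4K - 1: segments must not straddle unaligned
	 * 4K virtual boundaries. */
	blk_queue_virt_boundary(sdev->request_queue, SZ_4K - 1);
	return 0;
}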
2015-10-22 | iser-target: Remove an unused variable | Bart Van Assche | 1 file, -3/+2
Detected this by compiling with W=1.
Signed-off-by: Bart Van Assche <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-22 | IB/iser: Remove an unused variable | Bart Van Assche | 1 file, -4/+0
Detected this by compiling with W=1.
Signed-off-by: Bart Van Assche <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/core: Remove smac and vlan id from path record | Matan Barak | 4 files, -8/+0
The GID cache accompanies every GID with attributes. The GID attributes link the GID with its netdevice, which could be resolved to smac and vlan id easily. Since we've added the netdevice (ifindex and net) to the path record, storing the L2 attributes is duplicated data and hence these attributes are removed.
Signed-off-by: Matan Barak <[email protected]>
Reviewed-By: Devesh Sharma <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/core: Remove smac and vlan id from qp_attr and ah_attr | Matan Barak | 4 files, -15/+4
Smac and vlan id could be resolved from the GID attribute, and thus these attributes aren't needed anymore. Removing them.
Signed-off-by: Matan Barak <[email protected]>
Reviewed-By: Devesh Sharma <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/cm: Remove the usage of smac and vid of qp_attr and cm_av | Matan Barak | 2 files, -36/+0
The cm and cma don't need to explicitly handle vlan and smac, as they are resolved from the GID index now. Removing this portion of code.
Signed-off-by: Matan Barak <[email protected]>
Reviewed-By: Devesh Sharma <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/core: Use GID table in AH creation and dmac resolution | Matan Barak | 12 files, -110/+193
Previously, vlan id and source MAC were used from QP attributes. Since the net device is now stored in the GID attributes, they could be used instead of getting this information from the QP attributes. IB_QP_SMAC, IB_QP_ALT_SMAC, IB_QP_VID and IB_QP_ALT_VID were removed because there is no known libibverbs that uses them. This commit also modifies the vendor (mlx4, ocrdma) drivers in order to use the new approach.
ocrdma driver changes were done by Somnath Kotur <[email protected]>
Signed-off-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/cache: Add ib_find_gid_by_filter cache API | Matan Barak | 2 files, -0/+101
GID cache API users might want to search for GIDs with specific attributes rather than just specifying GID, net device and port. This is used in a later patch, where we find the sgid index by L2 Ethernet attributes.
Signed-off-by: Matan Barak <[email protected]>
Reviewed-By: Devesh Sharma <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/cma: cma_validate_port should verify the port and netdevice | Matan Barak | 1 file, -8/+18
Previously, cma_validate_port searched for GIDs in IB cache and then tried to verify the found port. This could fail when there are identical GIDs on both ports. In addition, netdevice should be taken into account when searching the GID table. Fixing cma_validate_port to search only the relevant port's cache and netdevice.
Signed-off-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/cm: cm_init_av_by_path should find a GID by its netdevice | Matan Barak | 1 file, -2/+5
Previously, the CM has searched the cache for any sgid_index whose GID matches the path's GID. Since the path record stores the net device, the CM should now search only for GIDs which originated from this net device.
Signed-off-by: Matan Barak <[email protected]>
Reviewed-By: Devesh Sharma <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/core: Add netdev to path record | Matan Barak | 3 files, -2/+23
In order to find the sgid_index, one could just query the IB cache with the correct GID and netdevice. Therefore, instead of storing the L2 attributes directly in the path, we only store the ifindex and net and use them later to get the sgid_index. The vlan_id and smac L2 attributes are removed in a later patch.
Signed-off-by: Matan Barak <[email protected]>
Reviewed-By: Devesh Sharma <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/core: Expose and rename ib_find_cached_gid_by_port cache API | Matan Barak | 4 files, -11/+26
Sometimes consumers might want to search for a GID in a specific port. For example, when a WC arrives and we want to search for the GID that matches that port - it's better to search only the relevant port. Exposing and renaming ib_cache_gid_find_by_port in order to match the naming convention of the module.
Signed-off-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/core: Add netdev and gid attributes parameters to cache | Matan Barak | 19 files, -38/+60
Adding the ability to query the IB cache by a netdev and get the attributes of a GID. These parameters are necessary in order to successfully resolve the required GID (when the netdevice is known) and to get the Ethernet L2 attributes from a GID.
Signed-off-by: Matan Barak <[email protected]>
Reviewed-By: Devesh Sharma <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/mlx4: Add support for blocking multicast loopback QP creation user flag | Eran Ben Elisha | 2 files, -6/+10
MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK is now supported downstream. Previously, this flag was supported only for IB_QPT_UD; now, with the new implementation, it is supported for all QP types. Support IB_USER_VERBS_EX_CMD_CREATE_QP in order to get the flag from user space using the extended create qp command.
Signed-off-by: Eran Ben Elisha <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/mlx4: Add counter based implementation for QP multicast loopback block | Eran Ben Elisha | 2 files, -0/+68
The current implementation of MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK is not supported when the link layer is Ethernet. This patch adds a counter-based implementation for multicast loopback prevention. HW can drop multicast loopback packets if the sender QP counter index is equal to the receiver QP counter index. If the MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK qp flag is set and the link layer is Ethernet, create a new counter and attach it to the QP so it will continue receiving multicast loopback traffic, except its own. The decision whether to create a new counter is made at the QP's modification to RTR, after the QP's port is set. When the QP is destroyed or moved back to the reset state, delete the counter.
Signed-off-by: Eran Ben Elisha <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/mlx4: Add IB counters table | Eran Ben Elisha | 4 files, -24/+81
This is an infrastructure step for allocating and attaching more than one counter to QPs on the same port. Allocate a counters table and manage the insertion and removal of the counters on load and unload of mlx4 IB.
Signed-off-by: Eran Ben Elisha <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | net/mlx4_en: Implement mcast loopback prevention for ETH qps | Maor Gottlieb | 3 files, -1/+49
Set the mcast loopback prevention bit in the QPC for ETH MLX QPs (not RSS QPs) when the firmware supports this feature. In addition, all rx ring QPs need to be updated in order not to enforce loopback checks. This prevents getting packets we sent both from the network stack and from the HCA. Loopback prevention is done by comparing the counter indices of the sending and receiving QPs. If they're equal, packets aren't looped back.
Signed-off-by: Maor Gottlieb <[email protected]>
Signed-off-by: Eran Ben Elisha <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | net/mlx4_core: Add support for filtering multicast loopback | Maor Gottlieb | 5 files, -13/+68
Update device capabilities regarding HW support for filtering multicast loopback. Add the MLX4_UPDATE_QP_ETH_SRC_CHECK_MC_LB attribute to mlx4_update_qp to enable changing the QP context to support filtering incoming multicast loopback traffic according to the sender's counter index. Set the corresponding bits in the QP context to force the loopback source checks if the attribute is given and the HW supports it.
Signed-off-by: Maor Gottlieb <[email protected]>
Signed-off-by: Eran Ben Elisha <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/core: Allow setting create flags in QP init attribute | Eran Ben Elisha | 1 file, -1/+1
Allow setting IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK at create_flags in ib_uverbs_create_qp_ex.
Signed-off-by: Eran Ben Elisha <[email protected]>
Reviewed-by: Haggai Eran <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | IB/core: Extend ib_uverbs_create_qp | Eran Ben Elisha | 4 files, -65/+217
ib_uverbs_ex_create_qp follows the extension verbs mechanism. New features (for example, the QP creation flags field, which is added in a downstream patch) can be used by user-space libraries without breaking the ABI.
Signed-off-by: Eran Ben Elisha <[email protected]>
Reviewed-by: Haggai Eran <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | iw_cxgb4: Adds support for T6 adapter | Hariprasad S | 5 files, -146/+204
Signed-off-by: Hariprasad Shenai <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | cxgb4: T6 adapter lld support for iw_cxgb4 driver | Hariprasad S | 5 files, -40/+158
Signed-off-by: Hariprasad Shenai <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | RDMA/ocrdma: Bump up ocrdma version number to 11.0.0.0 | Selvin Xavier | 1 file, -1/+1
Updating the version number to 11.0.0.0
Signed-off-by: Selvin Xavier <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | RDMA/ocrdma: Prevent CQ-Doorbell floods | Devesh Sharma | 1 file, -8/+3
Change the CQ-Doorbell (DB) logic to prevent DB floods: the DB is supposed to be pressed only if a hw CQE was polled. If CQ-arm was requested previously, then don't bother about the number of hw CQEs polled and arm the CQ.
Signed-off-by: Devesh Sharma <[email protected]>
Signed-off-by: Selvin Xavier <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | RDMA/ocrdma: Check resource ids received in Async CQE | Naga Irrinki | 1 file, -4/+26
Some versions of the FW send wrong QP or CQ IDs in the Async CQE. Add a check to see whether the qp or cq structure associated with the CQE is valid.
Signed-off-by: Devesh Sharma <[email protected]>
Signed-off-by: Selvin Xavier <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | RDMA/ocrdma: Avoid a possible crash in ocrdma_rem_port_stats | Selvin Xavier | 1 file, -1/+1
debugfs_remove should be called before freeing the driver stats resources to avoid any crash during ocrdma_remove.
Signed-off-by: Devesh Sharma <[email protected]>
Signed-off-by: Selvin Xavier <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | RDMA/ocrdma: Cleanup unused device list and rcu variables | Selvin Xavier | 2 files, -15/+2
ocrdma_dev_list is not used by the driver, so remove the references to this variable. dev->rcu was introduced for the ipv6 notifier for GID management. This is no longer required, as the GID management is outside the HW driver.
Signed-off-by: Padmanabh Ratnakar <[email protected]>
Signed-off-by: Selvin Xavier <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | iw_cxgb4: reverse the ord/ird in the ESTABLISHED upcall | Hariprasad S | 1 file, -2/+2
The ESTABLISHED event should have the peer's ord/ird so swap the values in the event before the upcall.
Signed-off-by: Hariprasad Shenai <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2015-10-21 | iw_cxgb4: fix misuse of ep->ord for minimum ird calculation | Hariprasad S | 1 file, -1/+1
When calculating the minimum ird in c4iw_accept_cr(), we need to always have a value of at least 1 if the RTR message is a 0B read. The code was incorrectly using ep->ord for this logic, which was incorrectly adjusting the ird and causing incorrect ord/ird negotiation when using MPAv2 to negotiate these values.
Signed-off-by: Hariprasad Shenai <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>