path: root/net/sunrpc
Age    Commit message    Author    Files    Lines
2015-02-08SUNRPC: Do not clear the source port in xs_reset_transportTrond Myklebust1-2/+0
Now that we can reuse bound ports after a close, we never really want to clear the transport's source port after it has been set. Doing so really messes up the NFSv3 DRC on the server. Signed-off-by: Trond Myklebust <[email protected]>
2015-02-08SUNRPC: Handle EADDRINUSE on connectTrond Myklebust2-0/+5
Now that we're setting SO_REUSEPORT, we still need to handle the case where a connect() is attempted, but the old socket is still lingering. Essentially, all we want to do here is handle the error by waiting a few seconds and then retrying. Signed-off-by: Trond Myklebust <[email protected]>
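As an illustration of the wait-and-retry idea described above (not the kernel code itself), a minimal userspace sketch might look like this; the retry count and delay are assumptions chosen for the example:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* Hypothetical helper: retry connect() a few times when the old
     * connection is still lingering and the kernel reports EADDRINUSE. */
    static int connect_with_retry(int fd, const struct sockaddr *sa,
                                  socklen_t len, int max_tries)
    {
            int i;

            for (i = 0; i < max_tries; i++) {
                    if (connect(fd, sa, len) == 0)
                            return 0;
                    if (errno != EADDRINUSE)
                            return -1;      /* some other error: give up */
                    fprintf(stderr, "connect: %s, retrying\n", strerror(errno));
                    sleep(3);               /* wait a few seconds, then retry */
            }
            return -1;
    }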
2015-02-08SUNRPC: Set SO_REUSEPORT socket option for TCP connectionsTrond Myklebust1-4/+49
When using TCP, we need the ability to reuse port numbers after a disconnection, so that the NFSv3 server knows that we're the same client. Currently we use a hack to work around the TCP socket's TIME_WAIT: we send an RST instead of closing, which doesn't always work... The SO_REUSEPORT option added in Linux 3.9 allows us to bind multiple TCP connections to the same source address+port combination, and thus to use ordinary TCP close() instead of the current hack. Signed-off-by: Trond Myklebust <[email protected]>
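The userspace equivalent of the behaviour described above, as a hedged sketch: set SO_REUSEPORT before bind(), bind to a fixed source port, then connect(). The function name and port value are illustrative only, not the xprtsock.c implementation:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Sketch: bind a TCP socket to a fixed source port that may be reused
     * across reconnects, so the server sees the same client address+port. */
    static int bind_reusable_source_port(int fd, uint16_t src_port)
    {
            struct sockaddr_in src;
            int one = 1;

            if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0)
                    return -1;

            memset(&src, 0, sizeof(src));
            src.sin_family = AF_INET;
            src.sin_addr.s_addr = htonl(INADDR_ANY);
            src.sin_port = htons(src_port);  /* the port we want to keep */
            return bind(fd, (struct sockaddr *)&src, sizeof(src));
    }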
2015-02-08Merge tag 'nfs-rdma-for-3.20-part-2' of git://git.linux-nfs.org/projects/anna/nfs-rdmaTrond Myklebust1-3/+4
NFS: RDMA Client Sparse Fixes This patch fixes a sparse warning in the initial submission. Signed-off-by: Anna Schumaker <[email protected]> * tag 'nfs-rdma-for-3.20-part-2' of git://git.linux-nfs.org/projects/anna/nfs-rdma: xprtrdma: Address sparse complaint in rpcr_to_rdmar()
2015-02-05xprtrdma: Address sparse complaint in rpcr_to_rdmar()Chuck Lever1-3/+4
With "make ARCH=x86_64 allmodconfig make C=1 CF=-D__CHECK_ENDIAN__": linux-2.6/net/sunrpc/xprtrdma/xprt_rdma.h:273:30: warning: incorrect type in initializer (different base types) linux-2.6/net/sunrpc/xprtrdma/xprt_rdma.h:273:30: expected restricted __be32 [usertype] *buffer linux-2.6/net/sunrpc/xprtrdma/xprt_rdma.h:273:30: got unsigned int [usertype] *rq_buffer As far as I can tell this is a false positive. Reported-by: [email protected] Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-02-03SUNRPC: NULL utsname dereference on NFS umount during namespace cleanupTrond Myklebust2-7/+13
Fix an Oopsable condition when nsm_mon_unmon is called as part of the namespace cleanup, which now apparently happens after the utsname has been freed. Link: http://lkml.kernel.org/r/[email protected] Reported-by: Bruno Prémont <[email protected]> Cc: [email protected] # 3.18 Signed-off-by: Trond Myklebust <[email protected]>
2015-02-03Merge branch 'flexfiles'Trond Myklebust1-7/+19
* flexfiles: (53 commits) pnfs: lookup new lseg at lseg boundary nfs41: .init_read and .init_write can be called with valid pg_lseg pnfs: Update documentation on the Layout Drivers pnfs/flexfiles: Add the FlexFile Layout Driver nfs: count DIO good bytes correctly with mirroring nfs41: wait for LAYOUTRETURN before retrying LAYOUTGET nfs: add a helper to set NFS_ODIRECT_RESCHED_WRITES to direct writes nfs41: add NFS_LAYOUT_RETRY_LAYOUTGET to layout header flags nfs/flexfiles: send layoutreturn before freeing lseg nfs41: introduce NFS_LAYOUT_RETURN_BEFORE_CLOSE nfs41: allow async version layoutreturn nfs41: add range to layoutreturn args pnfs: allow LD to ask to resend read through pnfs nfs: add nfs_pgio_current_mirror helper nfs: only reset desc->pg_mirror_idx when mirroring is supported nfs41: add a debug warning if we destroy an unempty layout pnfs: fail comparison when bucket verifier not set nfs: mirroring support for direct io nfs: add mirroring support to pgio layer pnfs: pass ds_commit_idx through the commit path ... Conflicts: fs/nfs/pnfs.c fs/nfs/pnfs.h
2015-02-03sunrpc: add rpc_count_iostats_idxWeston Andros Adamson1-7/+19
Add a call to tally stats for a task under a different statsidx than what's contained in the task structure. This is needed to properly account for pnfs reads/writes when the DS nfs version != the MDS version. Signed-off-by: Weston Andros Adamson <[email protected]> Signed-off-by: Tom Haynes <[email protected]>
2015-02-03Merge tag 'nfs-rdma-for-3.20' of git://git.linux-nfs.org/projects/anna/nfs-rdmaTrond Myklebust4-344/+468
NFS: Client side changes for RDMA These patches improve the scalability of the NFSoRDMA client and take large variables off of the stack. Additionally, the GFP_* flags are updated to match what TCP uses. Signed-off-by: Anna Schumaker <[email protected]> * tag 'nfs-rdma-for-3.20' of git://git.linux-nfs.org/projects/anna/nfs-rdma: (21 commits) xprtrdma: Update the GFP flags used in xprt_rdma_allocate() xprtrdma: Clean up after adding regbuf management xprtrdma: Allocate zero pad separately from rpcrdma_buffer xprtrdma: Allocate RPC/RDMA receive buffer separately from struct rpcrdma_rep xprtrdma: Allocate RPC/RDMA send buffer separately from struct rpcrdma_req xprtrdma: Allocate RPC send buffer separately from struct rpcrdma_req xprtrdma: Add struct rpcrdma_regbuf and helpers xprtrdma: Refactor rpcrdma_buffer_create() and rpcrdma_buffer_destroy() xprtrdma: Simplify synopsis of rpcrdma_buffer_create() xprtrdma: Take struct ib_qp_attr and ib_qp_init_attr off the stack xprtrdma: Take struct ib_device_attr off the stack xprtrdma: Free the pd if ib_query_qp() fails xprtrdma: Remove rpcrdma_ep::rep_func and ::rep_xprt xprtrdma: Move credit update to RPC reply handler xprtrdma: Remove rl_mr field, and the mr_chunk union xprtrdma: Remove rpcrdma_ep::rep_ia xprtrdma: Rename "xprt" and "rdma_connect" fields in struct rpcrdma_xprt xprtrdma: Clean up hdrlen xprtrdma: Display XIDs in host byte order xprtrdma: Modernize htonl and ntohl ...
2015-01-30xprtrdma: Update the GFP flags used in xprt_rdma_allocate()Chuck Lever1-2/+5
Reflect the more conservative approach used in the socket transport's version of this transport method. An RPC buffer allocation should avoid forcing not just FS activity, but any I/O. In particular, two recent changes missed updating xprtrdma: - Commit c6c8fe79a83e ("net, sunrpc: suppress allocation warning ...") - Commit a564b8f03986 ("nfs: enable swap on NFS") Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
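For reference, the "more conservative approach" in the socket transport follows roughly the pattern below; this is a paraphrase in kernel-context C, not a quote of the patch, and the exact flag combination is an assumption:

    /* Sketch: pick GFP flags for an RPC buffer allocation so it never
     * forces I/O, stays quiet on failure, and only dips into emergency
     * reserves for swap-over-NFS tasks. */
    gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN;

    if (RPC_IS_SWAPPER(task))
            gfp |= __GFP_MEMALLOC;

    buf = kmalloc(size, gfp);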
2015-01-30xprtrdma: Clean up after adding regbuf managementChuck Lever2-11/+2
rpcrdma_{de}register_internal() are used only in verbs.c now. MAX_RPCRDMAHDR is no longer used and can be removed. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Allocate zero pad separately from rpcrdma_bufferChuck Lever3-23/+13
Use the new rpcrdma_alloc_regbuf() API to shrink the amount of contiguous memory needed for a buffer pool by moving the zero pad buffer into a regbuf. This is for consistency with the other uses of internally registered memory. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Allocate RPC/RDMA receive buffer separately from struct rpcrdma_repChuck Lever3-23/+23
The rr_base field is currently the buffer where RPC replies land. An RPC/RDMA reply header lands in this buffer. In some cases an RPC reply header also lands in this buffer, just after the RPC/RDMA header. The inline threshold is an agreed-on size limit for RDMA SEND operations that pass between client and server. The sum of the RPC/RDMA reply header size and the RPC reply header size must be less than this threshold. The largest RDMA RECV that the client should have to handle is the size of the inline threshold. The receive buffer should thus be the size of the inline threshold, and not related to RPCRDMA_MAX_SEGS. RPC replies received via RDMA WRITE (long replies) are caught in rq_rcv_buf, which is the second half of the RPC send buffer. I.e., such replies are not involved in any way with rr_base. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Allocate RPC/RDMA send buffer separately from struct rpcrdma_reqChuck Lever4-29/+19
The rl_base field is currently the buffer where each RPC/RDMA call header is built. The inline threshold is an agreed-on size limit for RDMA SEND operations that pass between client and server. The sum of the RPC/RDMA header size and the RPC header size must be less than or equal to this threshold. Increasing the r/wsize maximum will require MAX_SEGS to grow significantly, but the inline threshold size won't change (both sides agree on it). The server's inline threshold doesn't change. Since an RPC/RDMA header can never be larger than the inline threshold, make all RPC/RDMA header buffers the size of the inline threshold. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Allocate RPC send buffer separately from struct rpcrdma_reqChuck Lever4-104/+78
Because internal memory registration is an expensive and synchronous operation, xprtrdma pre-registers send and receive buffers at mount time, and then re-uses them for each RPC. A "hardway" allocation is a memory allocation and registration that replaces a send buffer during the processing of an RPC. Hardway must be done if the RPC send buffer is too small to accommodate an RPC's call and reply headers. For xprtrdma, each RPC send buffer is currently part of struct rpcrdma_req so that xprt_rdma_free(), which is passed nothing but the address of an RPC send buffer, can find its matching struct rpcrdma_req and rpcrdma_rep quickly via container_of / offsetof. That means that hardway currently has to replace a whole rpcrdma_req when it replaces an RPC send buffer. This is often a fairly hefty chunk of contiguous memory due to the size of the rl_segments array and the fact that both the send and receive buffers are part of struct rpcrdma_req. Some obscure re-use of fields in rpcrdma_req is done so that xprt_rdma_free() can detect replaced rpcrdma_req structs, and restore the original. This commit breaks apart the RPC send buffer and struct rpcrdma_req so that increasing the size of the rl_segments array does not change the alignment of each RPC send buffer. (Increasing rl_segments is needed to bump up the maximum r/wsize for NFS/RDMA). This change opens up some interesting possibilities for improving the design of xprt_rdma_allocate(). xprt_rdma_allocate() is now the one place where RPC send buffers are allocated or re-allocated, and they are now always left in place by xprt_rdma_free(). A large re-allocation that includes both the rl_segments array and the RPC send buffer is no longer needed. Send buffer re-allocation becomes quite rare. Good send buffer alignment is guaranteed no matter what the size of the rl_segments array is. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Add struct rpcrdma_regbuf and helpersChuck Lever2-0/+98
There are several spots that allocate a buffer via kmalloc (usually contiguously with another data structure) and then register that buffer internally. I'd like to split the buffers out of these data structures to allow the data structures to scale. Start by adding functions that can kmalloc and register a buffer, and can manage/preserve the buffer's associated ib_sge and ib_mr fields. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Refactor rpcrdma_buffer_create() and rpcrdma_buffer_destroy()Chuck Lever1-53/+95
Move the details of how to create and destroy rpcrdma_req and rpcrdma_rep structures into helper functions. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Simplify synopsis of rpcrdma_buffer_create()Chuck Lever3-7/+7
Clean up: There is one call site for rpcrdma_buffer_create(). All of the arguments there are fields of an rpcrdma_xprt. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Take struct ib_qp_attr and ib_qp_init_attr off the stackChuck Lever2-7/+10
Reduce stack footprint of the connection upcall handler function. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Take struct ib_device_attr off the stackChuck Lever2-24/+14
Device attributes are large, and are used in more than one place. Stash a copy in dynamically allocated memory. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Free the pd if ib_query_qp() failsChuck Lever1-3/+7
If ib_query_qp() fails or the memory registration mode isn't supported, don't leak the PD. An orphaned IB/core resource will cause IB module removal to hang. Fixes: bd7ed1d13304 ("RPC/RDMA: check selected memory registration ...") Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
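The shape of the fix, as a hedged sketch of a typical error path; the label, variable names, and surrounding code are illustrative rather than copied from verbs.c:

    /* Sketch: once the protection domain is allocated, later failures
     * must unwind through ib_dealloc_pd() instead of returning
     * directly, or the PD is leaked and module unload can hang. */
    pd = ib_alloc_pd(device);
    if (IS_ERR(pd))
            return PTR_ERR(pd);

    rc = ib_query_qp(qp, &qp_attr, IB_QP_STATE, &qp_init_attr);
    if (rc)
            goto out_dealloc_pd;    /* was: return rc; (leaked the PD) */

    /* ... verify the memory registration mode is supported ... */

    return 0;

    out_dealloc_pd:
            ib_dealloc_pd(pd);
            return rc;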
2015-01-30xprtrdma: Remove rpcrdma_ep::rep_func and ::rep_xprtChuck Lever4-8/+6
Clean up: The rep_func field always refers to rpcrdma_conn_func(). rep_func should have been removed by commit b45ccfd25d50 ("xprtrdma: Remove MEMWINDOWS registration modes"). Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Move credit update to RPC reply handlerChuck Lever3-16/+10
Reduce work in the receive CQ handler, which can be run at hardware interrupt level, by moving the RPC/RDMA credit update logic to the RPC reply handler. This has some additional benefits: More header sanity checking is done before trusting the incoming credit value, and the receive CQ handler no longer touches the RPC/RDMA header (the CPU stalls while waiting for the header contents to be brought into the cache). This further extends work begun by commit e7ce710a8802 ("xprtrdma: Avoid deadlock when credit window is reset"). Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Remove rl_mr field, and the mr_chunk unionChuck Lever2-17/+13
Clean up: Since commit 0ac531c18323 ("xprtrdma: Remove REGISTER memory registration mode"), the rl_mr pointer is no longer used anywhere. After removal, there's only a single member of the mr_chunk union, so mr_chunk can be removed as well, in favor of a single pointer field. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Remove rpcrdma_ep::rep_iaChuck Lever2-2/+0
Clean up: This field is not used. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Rename "xprt" and "rdma_connect" fields in struct rpcrdma_xprtChuck Lever2-12/+13
Clean up: Use consistent field names in struct rpcrdma_xprt. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Clean up hdrlenChuck Lever1-5/+7
Clean up: Replace naked integers with a documenting macro. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Display XIDs in host byte orderChuck Lever1-3/+5
xprtsock.c and the backchannel code display XIDs in host byte order. Follow suit in xprtrdma. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2015-01-30xprtrdma: Modernize htonl and ntohlChuck Lever1-22/+26
Clean up: Replace htonl and ntohl with the be32 equivalents. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
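A before/after sketch of this kind of conversion; the variable names (headerp, xid, credits) are illustrative, not taken from the patch:

    /* Before: plain u32 stores via htonl()/ntohl(); sparse cannot
     * verify byte order through these calls. */
    u32 *p = headerp;
    *p++ = htonl(xid);
    credits = ntohl(*p);

    /* After: declare the buffer as __be32 and use the be32 helpers,
     * so "make C=1 CF=-D__CHECK_ENDIAN__" can flag mismatches. */
    __be32 *q = headerp;
    *q++ = cpu_to_be32(xid);
    credits = be32_to_cpu(*q);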
2015-01-30xprtrdma: human-readable completion statusChuck Lever1-13/+57
Make it easier to grep the system log for specific error conditions. The wc.opcode field is not included because opcode numbers are sparse, and because wc.opcode is not necessarily valid when completion reports an error. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
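One common way to implement this, shown as a sketch rather than the patch itself, is a small lookup table keyed by wc->status; only a few enum ib_wc_status values appear here and the fallback string is an assumption:

    /* Sketch: map a few ib_wc_status codes to strings for logging.
     * The table is intentionally incomplete; unknown codes fall back
     * to a generic string. */
    static const char *const wc_status_name[] = {
            [IB_WC_SUCCESS]       = "success",
            [IB_WC_LOC_LEN_ERR]   = "local length error",
            [IB_WC_WR_FLUSH_ERR]  = "flushed",
            [IB_WC_RETRY_EXC_ERR] = "retry exceeded",
    };

    static const char *wc_status_str(enum ib_wc_status status)
    {
            if ((unsigned int)status < ARRAY_SIZE(wc_status_name) &&
                wc_status_name[status])
                    return wc_status_name[status];
            return "unknown";
    }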
2015-01-24SUNRPC: Allow waiting on memory allocationTrond Myklebust1-2/+2
We should be safe now, as long as we don't do GFP_IO or higher allocations Signed-off-by: Trond Myklebust <[email protected]>
2015-01-24SUNRPC: Adjust rpciod workqueue parametersTrond Myklebust1-1/+2
Increase the concurrency level for rpciod threads to allow for allocations etc that happen in the RPCSEC_GSS layer. Also note that the NFSv4 byte range locks may now need to allocate memory from inside rpciod. Add the WQ_HIGHPRI flag to improve latency guarantees while we're at it. Signed-off-by: Trond Myklebust <[email protected]>
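The knobs involved are the alloc_workqueue() flags and max_active; as a hedged sketch of what such an adjustment looks like (not a quote of the patch):

    /* Sketch: a reclaim-safe, high-priority workqueue with the default
     * per-CPU concurrency limit (max_active == 0) instead of a single
     * in-flight work item. */
    struct workqueue_struct *wq;

    wq = alloc_workqueue("rpciod", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
    if (!wq)
            return -ENOMEM;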
2015-01-23sunrpc/lockd: fix references to the BKLJeff Layton2-4/+3
The BKL is completely out of the picture in the lockd and sunrpc code these days. Update the antiquated comments that refer to it. Signed-off-by: Jeff Layton <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2015-01-15svcrdma: Handle additional inline contentChuck Lever1-0/+55
Most NFS RPCs place their large payload argument at the end of the RPC header (eg, NFSv3 WRITE). For NFSv3 WRITE and SYMLINK, RPC/RDMA sends the complete RPC header inline, and the payload argument in the read list. Data in the read list is the last part of the XDR stream. One important case is not like this, however. NFSv4 COMPOUND is a counted array of operations. A WRITE operation, with its large data payload, can appear in the middle of the compound's operations array. Thus NFSv4 WRITE compounds can have header content after the WRITE payload. The Linux client, for example, performs an NFSv4 WRITE like this: { PUTFH, WRITE, GETATTR } Though RFC 5667 is not precise about this, the proper way to convey this compound is to place the GETATTR inline, _after_ the front of the RPC header. The receiver inserts the read list payload into the XDR stream after the initial WRITE arguments, and before the GETATTR operation, thanks to the value of the read list "position" field. The Linux client currently sends the GETATTR at the end of the RPC/RDMA read list, which is incorrect. It will be corrected in the future. The Linux server currently rejects NFSv4 compounds with inline content after the read list. For the above NFSv4 WRITE compound, the NFS compound header indicates there are three operations, but the server finds nonsense when it looks in the XDR stream for the third operation, and the compound fails with OP_ILLEGAL. Move trailing inline content to the end of the XDR buffer's page list. This presents incoming NFSv4 WRITE compounds to NFSD in the same way the socket transport does. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2015-01-15svcrdma: Move read list XDR round-up logicChuck Lever1-28/+9
This is a pre-requisite for a subsequent patch. Read list XDR round-up needs to be done _before_ additional inline content is copied to the end of the XDR buffer's page list. Move the logic added by commit e560e3b510d2 ("svcrdma: Add zero padding if the client doesn't send it"). Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2015-01-15svcrdma: Support RDMA_NOMSG requestsChuck Lever1-3/+36
Currently the Linux server cannot decode RDMA_NOMSG type requests. Operations whose length exceeds the fixed size of RDMA SEND buffers, like large NFSv4 CREATE(NF4LNK) operations, must be conveyed via RDMA_NOMSG. For an RDMA_MSG type request, the client sends the RPC/RDMA header, RPC header, and some or all of the NFS arguments via RDMA SEND. For an RDMA_NOMSG type request, the client sends just the RPC/RDMA header via RDMA SEND. The request's read list contains elements for the entire RPC message, including the RPC header. NFSD expects the RPC/RDMA header and RPC header to be contiguous in page zero of the XDR buffer. Add logic in the RDMA READ path to make the read list contents land where the server prefers, when the incoming message is an RDMA_NOMSG type message. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2015-01-15svcrdma: rc_position sanity checkingChuck Lever1-4/+12
An RPC/RDMA client may send large RPC arguments via a read list. This is a list of scatter/gather elements which convey RPC call arguments too large to fit in a small RDMA SEND. Each entry in the read list has a "position" field, whose value is the byte offset in the XDR stream where the data in that entry is to be inserted. Entries which share the same "position" value make up the same RPC argument. The receiver inserts entries with the same position field value in list order into the XDR stream. Currently the Linux NFS/RDMA server cannot handle receiving read chunks in more than one position, mostly because no current client sends read lists with elements in more than one position. As a sanity check, ensure that all received chunks have the same "rc_position." Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
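The sanity check amounts to remembering the first chunk's position and rejecting any later chunk that differs; a hedged sketch follows, where the struct and field names follow the RPC-over-RDMA read-chunk layout but the helper itself is illustrative:

    /* Sketch: require every read-list entry to share one rc_position. */
    static bool read_chunks_share_position(struct rpcrdma_read_chunk *ch)
    {
            u32 first;

            if (ch->rc_discrim == xdr_zero)
                    return true;            /* empty read list */
            first = be32_to_cpu(ch->rc_position);
            for (; ch->rc_discrim != xdr_zero; ch++)
                    if (be32_to_cpu(ch->rc_position) != first)
                            return false;   /* mixed positions: reject */
            return true;
    }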
2015-01-15svcrdma: Plant reader function in struct svcxprt_rdmaChuck Lever2-44/+29
The RDMA reader function doesn't change once an svcxprt_rdma is instantiated. Instead of checking sc_devcap during every incoming RPC, set the reader function once when the connection is accepted. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2015-01-15svcrdma: Find rmsgp more reliablyChuck Lever1-14/+4
xdr_start() can return the wrong rmsgp address if an assumption about how the xdr_buf was constructed changes. When it gets it wrong, the client receives a reply that has gibberish in the RPC/RDMA header, preventing it from matching a waiting RPC request. Instead, make (and document) just one assumption: that the RDMA header for the client's RPC call is at the start of the first page in rq_pages. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2015-01-15svcrdma: Scrub BUG_ON() and WARN_ON() call sitesChuck Lever3-33/+49
Current convention is to avoid using BUG_ON() in places where an oops could cause complete system failure. Replace BUG_ON() call sites in svcrdma with an assertion error message and allow execution to continue safely. Some BUG_ON() calls are removed because they have never fired in production (that we are aware of). Some WARN_ON() calls are also replaced where a back trace is not helpful; e.g., in a workqueue task. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
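The pattern applied throughout, in sketch form: turn a fatal assertion into a logged error plus a recoverable failure path. The condition and message shown are illustrative, not lifted from svcrdma:

    /* Before: any unexpected state takes down the whole machine. */
    BUG_ON(wr_count == 0);

    /* After: report the condition and let the caller fail the request. */
    if (wr_count == 0) {
            pr_err("svcrdma: unexpected empty work request list\n");
            return -EINVAL;
    }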
2015-01-15svcrdma: Clean up read chunk countingChuck Lever2-19/+12
The byte_count argument is not used, and the function is called only from one place. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2015-01-15svcrdma: Remove unused variableChuck Lever1-2/+0
Nit: remove an unused variable to squelch a compiler warning. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2015-01-15svcrdma: Clean up dprintkChuck Lever1-4/+4
Nit: Fix inconsistent white space in dprintk messages. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2015-01-07rpc: fix xdr_truncate_encode to handle buffer ending on page boundaryJ. Bruce Fields1-3/+3
A struct xdr_stream at a page boundary might point to the end of one page or the beginning of the next, but xdr_truncate_encode isn't prepared to handle the former. This can cause corruption of NFSv4 READDIR replies in the case that a readdir entry that would have exceeded the client's dircount/maxcount limit would have ended exactly on a 4k page boundary. You're more likely to hit this case on large directories. Other xdr_truncate_encode callers are probably also affected. Reported-by: Holger Hoffstätte <[email protected]> Tested-by: Holger Hoffstätte <[email protected]> Fixes: 3e19ce762b53 "rpc: xdr_truncate_encode" Cc: [email protected] Signed-off-by: J. Bruce Fields <[email protected]>
2014-12-09sunrpc/cache: convert to use string_escape_str()Andy Shevchenko1-20/+6
There is a nice kernel helper to escape a given string according to provided rules. Let's use it instead of a custom approach. Signed-off-by: Andy Shevchenko <[email protected]> [[email protected]: fix length calculation] Signed-off-by: J. Bruce Fields <[email protected]>
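A hedged usage sketch of the helper against its current form in include/linux/string_helpers.h; the buffer size, flags, and escape set below are illustrative, and the helper's exact signature has changed over kernel versions:

    #include <linux/string_helpers.h>

    /* Sketch: escape backslash and whitespace bytes in 'word' into
     * 'buf' via the shared helper instead of an open-coded loop. */
    char buf[128];
    int len;

    len = string_escape_str(word, buf, sizeof(buf),
                            ESCAPE_OCTAL, "\\ \n\t");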
2014-12-09sunrpc: only call test_bit once in svc_xprt_receivedJeff Layton1-2/+4
...move the WARN_ON_ONCE inside the following if block since they use the same condition. Signed-off-by: Jeff Layton <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
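In sketch form, the change folds the warning into the existing test so the flag word is read only once; the flag and field names match svc_xprt but the snippet is an illustration, not the diff:

    /* Before: the same bit is tested twice in a row. */
    WARN_ON_ONCE(!test_bit(XPT_BUSY, &xprt->xpt_flags));
    if (!test_bit(XPT_BUSY, &xprt->xpt_flags))
            return;

    /* After: test it once and warn inside the error branch. */
    if (!test_bit(XPT_BUSY, &xprt->xpt_flags)) {
            WARN_ON_ONCE(1);
            return;
    }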
2014-12-09sunrpc: add some tracepoints around enqueue and dequeue of svc_xprtJeff Layton1-7/+15
These were useful when I was tracking down a race condition between svc_xprt_do_enqueue and svc_get_next_xprt. Signed-off-by: Jeff Layton <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2014-12-09sunrpc: convert to lockless lookup of queued server threadsJeff Layton2-100/+128
Testing has shown that the pool->sp_lock can be a bottleneck on a busy server. Every time data is received on a socket, the server must take that lock in order to dequeue a thread from the sp_threads list. Address this problem by eliminating the sp_threads list (which contains threads that are currently idle) and replacing it with an RQ_BUSY flag in svc_rqst. This allows us to walk the sp_all_threads list under the rcu_read_lock and find a suitable thread for the xprt by doing a test_and_set_bit. Note that we do still have a potential atomicity problem however with this approach. We don't want svc_xprt_do_enqueue to set the rqst->rq_xprt pointer unless a test_and_set_bit of RQ_BUSY returned zero (which indicates that the thread was idle). But, by the time we check that, the bit could be flipped by a waking thread. To address this, we acquire a new per-rqst spinlock (rq_lock) and take that before doing the test_and_set_bit. If that returns false, then we can set rq_xprt and drop the spinlock. Then, when the thread wakes up, it must set the bit under the same spinlock and can trust that if it was already set then the rq_xprt is also properly set. With this scheme, the case where we have an idle thread no longer needs to take the highly contended pool->sp_lock at all, and that removes the bottleneck. That still leaves one issue: What of the case where we walk the whole sp_all_threads list and don't find an idle thread? Because the search is lockless, it's possible for the queueing to race with a thread that is going to sleep. To address that, we queue the xprt and then search again. If we find an idle thread at that point, we can't attach the xprt to it directly since that might race with a different thread waking up and finding it. All we can do is wake the idle thread back up and let it attempt to find the now-queued xprt. Signed-off-by: Jeff Layton <[email protected]> Tested-by: Chris Worley <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
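The core of the scheme described above, as a hedged sketch; the locking order and field names follow the changelog text, and the surrounding context (pool, xprt, reference counting) is assumed rather than shown:

    /* Sketch: find an idle thread without taking pool->sp_lock.
     * RQ_BUSY is claimed under rq_lock so that setting rq_xprt and the
     * thread's own wakeup path cannot race. */
    struct svc_rqst *rqstp;

    rcu_read_lock();
    list_for_each_entry_rcu(rqstp, &pool->sp_all_threads, rq_all) {
            spin_lock_bh(&rqstp->rq_lock);
            if (test_and_set_bit(RQ_BUSY, &rqstp->rq_flags)) {
                    /* already busy; keep looking */
                    spin_unlock_bh(&rqstp->rq_lock);
                    continue;
            }
            rqstp->rq_xprt = xprt;  /* safe: bit was clear, lock held */
            spin_unlock_bh(&rqstp->rq_lock);
            wake_up_process(rqstp->rq_task);
            break;
    }
    rcu_read_unlock();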
2014-12-09sunrpc: fix potential races in pool_stats collectionJeff Layton1-6/+6
In a later patch, we'll be removing some spinlocking around the socket and thread queueing code in order to fix some contention problems. At that point, the stats counters will no longer be protected by the sp_lock. Change the counters to atomic_long_t fields, except for the "sockets_queued" counter which will still be manipulated under a spinlock. Signed-off-by: Jeff Layton <[email protected]> Tested-by: Chris Worley <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
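In sketch form, the counter updates change from lock-protected plain increments to lock-free atomics; the field name follows struct svc_pool_stats but the snippet is illustrative:

    /* Before: counter bumped under pool->sp_lock. */
    spin_lock_bh(&pool->sp_lock);
    pool->sp_stats.threads_woken++;
    spin_unlock_bh(&pool->sp_lock);

    /* After: atomic_long_t needs no lock here. */
    atomic_long_inc(&pool->sp_stats.threads_woken);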
2014-12-09sunrpc: add a rcu_head to svc_rqst and use kfree_rcu to free itJeff Layton1-4/+6
...also make the manipulation of sp_all_threads list use RCU-friendly functions. Signed-off-by: Jeff Layton <[email protected]> Tested-by: Chris Worley <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
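The mechanics, as a hedged sketch: embed an rcu_head in svc_rqst, unlink with the RCU list primitives, and defer the free with kfree_rcu(). Field names mirror the description above but the struct excerpt is illustrative:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct svc_rqst {
            struct list_head rq_all;        /* sp_all_threads linkage */
            struct rcu_head  rq_rcu_head;   /* for deferred kfree */
            /* ... */
    };

    /* Removal: readers traversing sp_all_threads under rcu_read_lock()
     * may still see rqstp, so the actual kfree waits for a grace period. */
    list_del_rcu(&rqstp->rq_all);
    kfree_rcu(rqstp, rq_rcu_head);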