path: root/include/linux/sunrpc
Age | Commit message | Author | Files | Lines
2024-10-02 | move asm/unaligned.h to linux/unaligned.h | Al Viro | 1 | -1/+1
asm/unaligned.h is always an include of asm-generic/unaligned.h; might as well move that thing to linux/unaligned.h and include that - there's nothing arch-specific in that header. auto-generated by the following:

for i in `git grep -l -w asm/unaligned.h`; do
	sed -i -e "s/asm\/unaligned.h/linux\/unaligned.h/" $i
done
for i in `git grep -l -w asm-generic/unaligned.h`; do
	sed -i -e "s/asm-generic\/unaligned.h/linux\/unaligned.h/" $i
done
git mv include/asm-generic/unaligned.h include/linux/unaligned.h
git mv tools/include/asm-generic/unaligned.h tools/include/linux/unaligned.h
sed -i -e "/unaligned.h/d" include/asm-generic/Kbuild
sed -i -e "s/__ASM_GENERIC/__LINUX/" include/linux/unaligned.h tools/include/linux/unaligned.h
2024-09-23 | SUNRPC: replace program list with program array | NeilBrown | 1 | -3/+4
A service created with svc_create_pooled() can be given a linked list of programs and all of these will be served. Using a linked list makes it cumbersome when there are several programs that can be optionally selected with CONFIG settings. After this patch is applied, API consumers must use only svc_create_pooled() when creating an RPC service that listens for more than one RPC program. Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Mike Snitzer <snitzer@kernel.org> Acked-by: Chuck Lever <chuck.lever@oracle.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Anna Schumaker <anna.schumaker@oracle.com>
2024-09-23 | SUNRPC: add svcauth_map_clnt_to_svc_cred_local | Weston Andros Adamson | 1 | -0/+5
Add a new function svcauth_map_clnt_to_svc_cred_local which maps a generic cred to a svc_cred suitable for use in nfsd. This is needed by the localio code to map nfs client creds to nfs server credentials. Following from net/sunrpc/auth_unix.c:unx_marshal() it is clear that ->fsuid and ->fsgid must be used (rather than ->uid and ->gid). In addition, these uid and gid values must be translated with from_kuid_munged() so that the local client uses the correct uid and gid when acting as the local server. Jeff Layton noted: This is where the magic happens. Since we're working in kuid_t/kgid_t, we don't need to worry about further idmapping. Suggested-by: NeilBrown <neilb@suse.de> # to approximate unx_marshal() Signed-off-by: Weston Andros Adamson <dros@primarydata.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Co-developed-by: Mike Snitzer <snitzer@kernel.org> Signed-off-by: Mike Snitzer <snitzer@kernel.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Reviewed-by: NeilBrown <neilb@suse.de> Signed-off-by: Anna Schumaker <anna.schumaker@oracle.com>
2024-09-23 | SUNRPC: convert RPC_TASK_* constants to enum | Stephen Brennan | 1 | -7/+9
The RPC_TASK_* constants are defined as macros, which means that most kernel builds will not contain their definitions in the debuginfo. However, it's quite useful for debuggers to be able to view the task state constant and interpret it correctly. Conversion to an enum will ensure the constants are present in debuginfo and can be interpreted by debuggers without needing to hard-code them and track their changes. Signed-off-by: Stephen Brennan <stephen.s.brennan@oracle.com> Signed-off-by: Anna Schumaker <anna.schumaker@oracle.com>
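To illustrate the pattern described above (a sketch only, not the patch itself; the constant names and values shown are illustrative):

/* Before: preprocessor macros carry no debuginfo */
#define RPC_TASK_RUNNING	0
#define RPC_TASK_QUEUED		1

/* After: an anonymous enum keeps the same values but emits DWARF
 * enumerators that debuggers such as gdb or drgn can decode
 * symbolically. */
enum {
	RPC_TASK_RUNNING = 0,
	RPC_TASK_QUEUED = 1,
};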
2024-09-20 | xdrgen: Fix return code checking in built-in XDR decoders | Chuck Lever | 1 | -2/+2
xdr_stream_encode_u32() returns XDR_UNIT on success. xdr_stream_decode_u32() returns zero or -EMSGSIZE, but never XDR_UNIT. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
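A hedged sketch of the corrected check (illustrative only, not the generated code verbatim): since the decode helper returns 0 or a negative errno, the generated decoders should test for a negative value rather than comparing against XDR_UNIT, which only the encode side returns.

static bool example_decode_count(struct xdr_stream *xdr, u32 *count)
{
	/* xdr_stream_decode_u32(): 0 on success, -EMSGSIZE on a short buffer */
	if (xdr_stream_decode_u32(xdr, count) < 0)
		return false;
	return true;
}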
2024-09-20 | tools: Add xdrgen | Chuck Lever | 2 | -0/+269
Add a Python-based tool for translating XDR specifications into XDR encoder and decoder functions written in the Linux kernel's C coding style. The generator attempts to match the usual C coding style of the Linux kernel's SunRPC consumers. This approach is similar to the netlink code generator in tools/net/ynl.

The maintainability benefits of machine-generated XDR code include:
- Stronger type checking
- Reduces the number of bugs introduced by human error
- Makes the XDR code easier to audit and analyze
- Enables rapid prototyping of new RPC-based protocols
- Hardens the layering between protocol logic and marshaling
- Makes it easier to add observability on demand
- Unit tests might be built for both the tool and (automatically) for the generated code

In addition, converting the XDR layer to use memory-safe languages such as Rust will be easier if much of the code can be converted automatically. Tested-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-09-20 | svcrdma: Handle device removal outside of the CM event handler | Chuck Lever | 1 | -0/+2
Synchronously wait for all disconnects to complete to ensure the transports have divested all hardware resources before the underlying RDMA device can safely be removed. Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-09-20 | sunrpc: allow svc threads to fail initialisation cleanly | NeilBrown | 1 | -0/+31
If an svc thread needs to perform some initialisation that might fail, it has no good way to handle the failure. Before the thread can exit it must call svc_exit_thread(), but that requires the service mutex to be held. The thread cannot simply take the mutex as that could deadlock if there is a concurrent attempt to shut down all threads (which is unlikely, but not impossible). nfsd currently calls svc_exit_thread() unprotected in the unlikely event that unshare_fs_struct() fails. We can clean this up by introducing svc_thread_init_status() by which an svc thread can report whether initialisation has succeeded. If it has, it continues normally into the action loop. If it has not, svc_thread_init_status() immediately aborts the thread. svc_start_kthread() waits for either of these to happen, and calls svc_exit_thread() (under the mutex) if the thread aborted. Signed-off-by: NeilBrown <neilb@suse.de> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
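A sketch of the resulting thread pattern (assumed shape only; the helper name follows the text above, and example_per_thread_setup() is hypothetical):

static int example_svc_thread(void *vrqstp)
{
	struct svc_rqst *rqstp = vrqstp;
	int err;

	err = example_per_thread_setup();	/* e.g. unshare_fs_struct() */
	svc_thread_init_status(rqstp, err);	/* on error, aborts the thread */

	/* initialisation succeeded: enter the normal action loop */
	while (!svc_thread_should_stop(rqstp))
		svc_recv(rqstp);
	/* ... normal exit path ... */
	return 0;
}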
2024-09-20 | sunrpc: change sp_nrthreads from atomic_t to unsigned int. | NeilBrown | 1 | -2/+2
sp_nrthreads is only ever accessed under the service mutex (nlmsvc_mutex, nfs_callback_mutex, or nfsd_mutex), so there is no need for it to be an atomic_t. The fact that all code using it is single-threaded means that we can simplify svc_pool_victim and remove the temporary elevation of sp_nrthreads. Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-09-01 | SUNRPC: make various functions static, or not exported. | NeilBrown | 3 | -12/+0
Various functions are only used within the sunrpc module, and several are used in only one file. So clean up.

These are marked static, and any EXPORT is removed:
  svc_rpcb_setup()
  svc_rqst_alloc()
  svc_rqst_free() - also moved before first use
  svc_rpcbind_set_version()
  svc_drop() - also moved to svc.c

These are now not EXPORTed, but are not static:
  svc_authenticate()
  svc_sock_update_bufs()

Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-07-18 | Merge tag 'nfs-for-6.11-1' of git://git.linux-nfs.org/projects/anna/linux-nfs | Linus Torvalds | 1 | -0/+27
Pull NFS client updates from Anna Schumaker:
 "New Features:
  - Add support for large folios
  - Implement rpcrdma generic device removal notification
  - Add client support for attribute delegations
  - Use a LAYOUTRETURN during reboot recovery to report layoutstats and errors
  - Improve throughput for random buffered writes
  - Add NVMe support to pnfs/blocklayout

  Bugfixes:
  - Fix rpcrdma_reqs_reset()
  - Avoid soft lockups when using UDP
  - Fix an nfs/blocklayout premature PR key unregistration
  - Another fix for EXCHGID4_FLAG_USE_PNFS_DS for DS server
  - Do not extend writes to the entire folio
  - Pass explicit offset and count values to tracepoints
  - Fix a race to wake up sleeping SUNRPC sync tasks
  - Fix gss_status tracepoint output

  Cleanups:
  - Add missing MODULE_DESCRIPTION() macros
  - Add blocklayout / SCSI layout tracepoints
  - Remove asm-generic headers from xprtrdma verbs.c
  - Remove unused 'struct mnt_fhstatus'
  - Other delegation related cleanups
  - Other folio related cleanups
  - Other pNFS related cleanups
  - Other xprtrdma cleanups"

* tag 'nfs-for-6.11-1' of git://git.linux-nfs.org/projects/anna/linux-nfs: (63 commits)
  SUNRPC: Fixup gss_status tracepoint error output
  SUNRPC: Fix a race to wake a sync task
  nfs: split nfs_read_folio
  nfs: pass explicit offset/count to trace events
  nfs: do not extend writes to the entire folio
  nfs/blocklayout: add support for NVMe
  nfs: remove nfs_page_length
  nfs: remove the unused max_deviceinfo_size field from struct pnfs_layoutdriver_type
  nfs: don't reuse partially completed requests in nfs_lock_and_join_requests
  nfs: move nfs_wait_on_request to write.c
  nfs: fold nfs_page_group_lock_subrequests into nfs_lock_and_join_requests
  nfs: fold nfs_folio_find_and_lock_request into nfs_lock_and_join_requests
  nfs: simplify nfs_folio_find_and_lock_request
  nfs: remove nfs_folio_private_request
  nfs: remove dead code for the old swap over NFS implementation
  NFSv4.1 another fix for EXCHGID4_FLAG_USE_PNFS_DS for DS server
  nfs: Block on write congestion
  nfs: Properly initialize server->writeback
  nfs: Drop pointless check from nfs_commit_release_pages()
  nfs/blocklayout: SCSI layout trace points for reservation key reg/unreg
  ...
2024-07-08 | sunrpc: refactor pool_mode setting code | Jeff Layton | 1 | -0/+2
Allow the pool_mode setting code to be called from internal callers so we can call it from a new netlink op. Add a new svc_pool_map_get function to return the current setting. Change the existing module parameter handling to use the new interfaces under the hood. Signed-off-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-07-08 | sunrpc: fix up the special handling of sv_nrpools == 1 | Jeff Layton | 1 | -0/+1
Only pooled services take a reference to the svc_pool_map. The sunrpc code has always used the sv_nrpools value to detect whether the service is pooled. The problem there is that nfsd is a pooled service, but when it's running in "global" pool_mode, it doesn't take a reference to the pool map because it has a sv_nrpools value of 1. This means that we have two separate codepaths for starting the server, depending on whether it's pooled or not. Fix this by adding a new flag to the svc_serv, that indicates whether the serv is pooled. With this we can have the nfsd service unconditionally take a reference, regardless of pool_mode. Note that this is a behavior change for /sys/module/sunrpc/parameters/pool_mode. Usually this file does not allow you to change the pool-mode while there are nfsd threads running, but if the pool-mode is "global" it's allowed. My assumption is that this is a bug, since it probably should never have worked this way. This patch changes the behavior such that you get back EBUSY even when nfsd is running in global mode. I think this is more reasonable behavior, and given that most people set this today using the module parameter, it's doubtful anyone will notice. Signed-off-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-07-08 | rpcrdma: Implement generic device removal | Chuck Lever | 1 | -0/+27
Commit e87a911fed07 ("nvme-rdma: use ib_client API to detect device removal") explains the benefits of handling device removal outside of the CM event handler. Sketch in an IB device removal notification mechanism that can be used by both the client and server side RPC-over-RDMA transport implementations. Suggested-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2024-05-06 | SUNRPC: add a new svc_find_listener helper | Jeff Layton | 1 | -0/+2
svc_find_listener will return the transport instance pointer for the endpoint accepting connections/peer traffic from the specified transport class and matching sockaddr. Signed-off-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-05-06 | SUNRPC: introduce svc_xprt_create_from_sa utility routine | Lorenzo Bianconi | 1 | -0/+3
Add the svc_xprt_create_from_sa() utility routine and refactor the svc_xprt_create() code in order to introduce the capability to create an svc port from a socket address. Reviewed-by: Jeff Layton <jlayton@kernel.org> Tested-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-04-22 | Merge tag 'nfsd-6.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/cel/linux | Linus Torvalds | 1 | -10/+3
Pull nfsd fix from Chuck Lever:
 - Fix an NFS/RDMA performance regression in v6.9-rc

* tag 'nfsd-6.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/cel/linux:
  Revert "svcrdma: Add Write chunk WRs to the RPC's Send WR chain"
2024-04-20 | Revert "svcrdma: Add Write chunk WRs to the RPC's Send WR chain" | Chuck Lever | 1 | -10/+3
Performance regression reported with NFS/RDMA using Omnipath, bisected to commit e084ee673c77 ("svcrdma: Add Write chunk WRs to the RPC's Send WR chain"). Tracing on the server reports: nfsd-7771 [060] 1758.891809: svcrdma_sq_post_err: cq.id=205 cid=226 sc_sq_avail=13643/851 status=-12 sq_post_err reports ENOMEM, and the rdma->sc_sq_avail (13643) is larger than rdma->sc_sq_depth (851). The number of available Send Queue entries is always supposed to be smaller than the Send Queue depth. That seems like a Send Queue accounting bug in svcrdma. As it's getting to be late in the 6.9-rc cycle, revert this commit. It can be revisited in a subsequent kernel release. Link: https://bugzilla.kernel.org/show_bug.cgi?id=218743 Fixes: e084ee673c77 ("svcrdma: Add Write chunk WRs to the RPC's Send WR chain") Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-16 | Merge tag 'nfs-for-6.9-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs | Linus Torvalds | 3 | -1/+3
Pull NFS client updates from Trond Myklebust:
 "Highlights include:

  Bugfixes:
  - Fix for an Oops in the NFSv4.2 listxattr handler
  - Correct an incorrect buffer size in listxattr
  - Fix for an Oops in the pNFS flexfiles layout
  - Fix a refcount leak in NFS O_DIRECT writes
  - Fix missing locking in NFS O_DIRECT
  - Avoid an infinite loop in pnfs_update_layout
  - Fix an overflow in the RPC waitqueue queue length counter
  - Ensure that pNFS I/O is also protected by TLS when xprtsec is specified by the mount options
  - Fix a leaked folio lock in the netfs read code
  - Fix a potential deadlock in fscache
  - Allow setting the fscache uniquifier in NFSv4
  - Fix an off by one in root_nfs_cat()
  - Fix another off by one in rpc_sockaddr2uaddr()
  - nfs4_do_open() can incorrectly trigger state recovery
  - Various fixes for connection shutdown

  Features and cleanups:
  - Ensure that containers only see their own RPC and NFS stats
  - Enable nconnect for RDMA
  - Remove dead code from nfs_writepage_locked()
  - Various tracepoint additions to track EXCHANGE_ID, GETDEVICEINFO, and mount options"

* tag 'nfs-for-6.9-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs: (29 commits)
  nfs: fix panic when nfs4_ff_layout_prepare_ds() fails
  NFS: trace the uniquifier of fscache
  NFS: Read unlock folio on nfs_page_create_from_folio() error
  NFS: remove unused variable nfs_rpcstat
  nfs: fix UAF in direct writes
  nfs: properly protect nfs_direct_req fields
  NFS: enable nconnect for RDMA
  NFSv4: nfs4_do_open() is incorrectly triggering state recovery
  NFS: avoid infinite loop in pnfs_update_layout.
  NFS: remove sync_mode test from nfs_writepage_locked()
  NFSv4.1/pnfs: fix NFS with TLS in pnfs
  NFS: Fix an off by one in root_nfs_cat()
  nfs: make the rpc_stat per net namespace
  nfs: expose /proc/net/sunrpc/nfs in net namespaces
  sunrpc: add a struct rpc_stats arg to rpc_create_args
  nfs: remove unused NFS_CALL macro
  NFSv4.1: add tracepoint to trunked nfs4_exchange_id calls
  NFS: Fix nfs_netfs_issue_read() xarray locking for writeback interrupt
  SUNRPC: increase size of rpc_wait_queue.qlen from unsigned short to unsigned int
  nfs: fix regression in handling of fsc= option in NFSv4
  ...
2024-03-09 | sunrpc: add a struct rpc_stats arg to rpc_create_args | Josef Bacik | 1 | -0/+1
We want to be able to have our rpc stats handled in a per network namespace manner, so add an option to rpc_create_args to specify a different rpc_stats struct instead of using the one on the rpc_program. Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
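A hedged sketch of how a caller might supply per-namespace counters (a fragment only; the field name follows the description above, and the nn->rpc_stats pointer is hypothetical):

	struct rpc_create_args args = {
		.net		= net,
		.program	= &nfs_program,
		.version	= clp->rpc_ops->version,
		.authflavor	= flavor,
		.stats		= &nn->rpc_stats,	/* per-netns counters instead of the program's */
	};

	clnt = rpc_create(&args);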
2024-03-01 | svcrdma: Add Write chunk WRs to the RPC's Send WR chain | Chuck Lever | 1 | -3/+10
Chain RDMA Writes that convey Write chunks onto the local Send chain. This means all WRs for an RPC Reply are now posted with a single ib_post_send() call, and there is a single Send completion when all of these are done. That reduces both the per-transport doorbell rate and completion rate. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01 | svcrdma: Post WRs for Write chunks in svc_rdma_sendto() | Chuck Lever | 1 | -3/+3
Refactor to eventually enable svcrdma to post the Write WRs for each RPC response using the same ib_post_send() as the Send WR (ie, as a single WR chain). svc_rdma_result_payload (originally svc_rdma_read_payload) was added so that the upper layer XDR encoder could identify a range of bytes to be possibly conveyed by RDMA (if a Write chunk was provided by the client). The purpose of commit f6ad77590a5d ("svcrdma: Post RDMA Writes while XDR encoding replies") was to post as much of the result payload outside of svc_rdma_sendto() as possible because svc_rdma_sendto() used to be called with the xpt_mutex held. However, since commit ca4faf543a33 ("SUNRPC: Move xpt_mutex into socket xpo_sendto methods"), the xpt_mutex is no longer held when calling svc_rdma_sendto(). Thus, that benefit is no longer an issue. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01 | svcrdma: Post the Reply chunk and Send WR together | Chuck Lever | 1 | -4/+9
Reduce the doorbell and Send completion rates when sending RPC/RDMA replies that have Reply chunks. NFS READDIR procedures typically return their result in a Reply chunk, for example. Instead of calling ib_post_send() to post the Write WRs for the Reply chunk, and then calling it again to post the Send WR that conveys the transport header, chain the Write WRs to the Send WR and call ib_post_send() only once. Thanks to the Send Queue completion ordering rules, when the Send WR completes, that guarantees that Write WRs posted before it have also completed successfully. Thus all Write WRs for the Reply chunk can remain unsignaled. Instead of handling a Write completion and then a Send completion, only the Send completion is seen, and it handles clean up for both the Writes and the Send. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
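The chaining idea, as a generic verbs sketch (function and variable names are illustrative, not the actual svcrdma structures):

/* Link the Write WRs ahead of the Send WR via ->next, then post the
 * whole chain with a single ib_post_send().  Only the Send WR is
 * signaled; Send Queue ordering implies the earlier Writes completed. */
static int example_post_reply_chain(struct ib_qp *qp,
				    struct ib_rdma_wr *write_wrs,
				    int num_writes,
				    struct ib_send_wr *send_wr)
{
	const struct ib_send_wr *bad_wr;
	struct ib_send_wr *first = send_wr;
	int i;

	for (i = num_writes - 1; i >= 0; i--) {
		write_wrs[i].wr.next = first;
		first = &write_wrs[i].wr;
	}
	return ib_post_send(qp, first, &bad_wr);
}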
2024-03-01 | svcrdma: Move write_info for Reply chunks into struct svc_rdma_send_ctxt | Chuck Lever | 1 | -0/+25
Since the RPC transaction's svc_rdma_send_ctxt will stay around for the duration of the RDMA Write operation, the write_info structure for the Reply chunk can reside in the request's svc_rdma_send_ctxt instead of being allocated separately. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01 | svcrdma: Post Send WR chain | Chuck Lever | 1 | -2/+4
Eventually I'd like the server to post the reply's Send WR along with any Write WRs using only a single call to ib_post_send(), in order to reduce the NIC's doorbell rate. To do this, add an anchor for a WR chain to svc_rdma_send_ctxt, and refactor svc_rdma_send() to post this WR chain to the Send Queue. For the moment, the posted chain will continue to contain a single Send WR. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01 | sunrpc: remove ->pg_stats from svc_program | Josef Bacik | 1 | -1/+0
Now that this isn't used anywhere, remove it. Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01 | sunrpc: pass in the sv_stats struct through svc_create_pooled | Josef Bacik | 1 | -1/+3
Since only one service actually reports the rpc stats there's not much of a reason to have a pointer to it in the svc_program struct. Adjust the svc_create_pooled function to take the sv_stats as an argument and pass the struct through there as desired instead of getting it from the svc_program->pg_stats. Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-02-28 | SUNRPC: increase size of rpc_wait_queue.qlen from unsigned short to unsigned int | Dai Ngo | 1 | -1/+1
When the NFS client is under extreme load the rpc_wait_queue.qlen counter can be overflowed. Here is an instance of the backlog queue overflow in a real world environment shown by a drgn helper:

rpc_task_stats(rpc_clnt):
-------------------------
rpc_clnt: 0xffff92b65d2bae00
rpc_xprt: 0xffff9275db64f000
Queue: sending[64887] pending[524] backlog[30441] binding[0]
XMIT task: 0xffff925c6b1d8e98
WRITE: 750654
    __dta_call_status_580: 65463
    __dta_call_transmit_status_579: 1
    call_reserveresult: 685189
    nfs_client_init_is_complete: 1
COMMIT: 584
    call_reserveresult: 573
    __dta_call_status_580: 11
ACCESS: 1
    __dta_call_status_580: 1
GETATTR: 10
    __dta_call_status_580: 4
    call_reserveresult: 6
751249 tasks for server 111.222.333.444
Total tasks: 751249

count_rpc_wait_queues(xprt):
----------------------------
**** rpc_xprt: 0xffff9275db64f000 num_reqs: 65511
wait_queue: xprt_binding[0] cnt: 0
wait_queue: xprt_binding[1] cnt: 0
wait_queue: xprt_binding[2] cnt: 0
wait_queue: xprt_binding[3] cnt: 0
rpc_wait_queue[xprt_binding].qlen: 0 maxpriority: 0
wait_queue: xprt_sending[0] cnt: 0
wait_queue: xprt_sending[1] cnt: 64887
wait_queue: xprt_sending[2] cnt: 0
wait_queue: xprt_sending[3] cnt: 0
rpc_wait_queue[xprt_sending].qlen: 64887 maxpriority: 3
wait_queue: xprt_pending[0] cnt: 524
wait_queue: xprt_pending[1] cnt: 0
wait_queue: xprt_pending[2] cnt: 0
wait_queue: xprt_pending[3] cnt: 0
rpc_wait_queue[xprt_pending].qlen: 524 maxpriority: 0
wait_queue: xprt_backlog[0] cnt: 0
wait_queue: xprt_backlog[1] cnt: 685801
wait_queue: xprt_backlog[2] cnt: 0
wait_queue: xprt_backlog[3] cnt: 0
rpc_wait_queue[xprt_backlog].qlen: 30441 maxpriority: 3 [task cnt mismatch]

There is no effect on operations when this overflow occurs. However it causes confusion when trying to diagnose the performance problem. Signed-off-by: Dai Ngo <dai.ngo@oracle.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2024-02-28 | SUNRPC: Add a transport callback to handle dequeuing of an RPC request | Trond Myklebust | 1 | -0/+1
Add a transport level callback to allow it to handle the consequences of dequeuing the request that was in the process of being transmitted. For something like a TCP connection, we may need to disconnect if the request was partially transmitted. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2024-01-10 | Merge tag 'nfs-for-6.8-1' of git://git.linux-nfs.org/projects/anna/linux-nfs | Linus Torvalds | 5 | -14/+17
Pull nfs client updates from Anna Schumaker:
 "New Features:
  - Always ask for type with READDIR
  - Remove nfs_writepage()

  Bugfixes:
  - Fix a suspicious RCU usage warning
  - Fix a blocklayoutdriver reference leak
  - Fix the block driver's calculation of layoutget size
  - Fix handling NFS4ERR_RETURNCONFLICT
  - Fix _xprt_switch_find_current_entry()
  - Fix v4.1 backchannel request timeouts
  - Don't add zero-length pnfs block devices
  - Use the parent cred in nfs_access_login_time()

  Cleanups:
  - A few improvements when dealing with referring calls from the server
  - Clean up various unused variables, struct fields, and function calls
  - Various tracepoint improvements"

* tag 'nfs-for-6.8-1' of git://git.linux-nfs.org/projects/anna/linux-nfs: (21 commits)
  NFSv4.1: Use the nfs_client's rpc timeouts for backchannel
  SUNRPC: Fixup v4.1 backchannel request timeouts
  rpc_pipefs: Replace one label in bl_resolve_deviceid()
  nfs: Remove writepage
  NFS: drop unused nfs_direct_req bytes_left
  pNFS: Fix the pnfs block driver's calculation of layoutget size
  nfs: print fileid in lookup tracepoints
  nfs: rename the nfs_async_rename_done tracepoint
  nfs: add new tracepoint at nfs4 revalidate entry point
  SUNRPC: fix _xprt_switch_find_current_entry logic
  NFSv4.1/pnfs: Ensure we handle the error NFS4ERR_RETURNCONFLICT
  NFSv4.1: if referring calls are complete, trust the stateid argument
  NFSv4: Track the number of referring calls in struct cb_process_state
  NFS: Use parent's objective cred in nfs_access_login_time()
  NFSv4: Always ask for type with READDIR
  pnfs/blocklayout: Don't add zero-length pnfs_block_dev
  blocklayoutdriver: Fix reference leak of pnfs_device_node
  SUNRPC: Fix a suspicious RCU usage warning
  SUNRPC: Create a helper function for accessing the rpc_clnt's xprt_switch
  SUNRPC: Remove unused function rpc_clnt_xprt_switch_put()
  ...
2024-01-07 | SUNRPC: discard sv_refcnt, and svc_get/svc_put | NeilBrown | 1 | -26/+1
sv_refcnt is no longer useful. lockd and nfs-cb only ever have the svc active when there are a non-zero number of threads, so sv_refcnt mirrors sv_nrthreads. nfsd also keeps the svc active between when a socket is added and when the first thread is started, but we don't really need a refcount for that. We can simply not destroy the svc while there are any permanent sockets attached. So remove sv_refcnt and the get/put functions. Instead of a final call to svc_put(), call svc_destroy() instead. This is changed to also store NULL in the passed-in pointer to make it easier to avoid use-after-free situations. Signed-off-by: NeilBrown <neilb@suse.de> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
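A minimal sketch of the new teardown pattern (assuming the post-patch svc_destroy() takes the address of the caller's serv pointer, as described above):

	mutex_lock(&nfsd_mutex);
	svc_destroy(&serv);	/* frees the serv and stores NULL in 'serv' */
	mutex_unlock(&nfsd_mutex);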
2024-01-07 | svc: don't hold reference for poolstats, only mutex. | NeilBrown | 1 | -1/+7
A future patch will remove refcounting on svc_serv as it is of little use. It is currently used to keep the svc around while the pool_stats file is open. Change this to get the pointer, protected by the mutex, only in seq_start, and then release the mutex in seq_stop. This means that if the nfsd server is stopped and restarted while the pool_stats file is open, then some pool stats info could be from the first instance and some from the second. This might appear odd, but is unlikely to be a problem in practice. Signed-off-by: NeilBrown <neilb@suse.de> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Implement multi-stage Read completion again | Chuck Lever | 1 | -2/+4
Having an nfsd thread waiting for an RDMA Read completion is problematic if the Read responder (ie, the client) stops responding. We need to go back to handling RDMA Reads by getting the svc scheduler to call svc_rdma_recvfrom() a second time to finish building an RPC message after a Read completion. This is the final patch, and makes several changes that have to happen concurrently:

1. svc_rdma_process_read_list no longer waits for a completion, but simply builds and posts the Read WRs.

2. svc_rdma_read_done() now queues a completed Read on sc_read_complete_q for later processing rather than calling complete().

3. The completed RPC message is no longer built in the svc_rdma_process_read_list() path. Finishing the message is now done in svc_rdma_recvfrom() when it notices work on the sc_read_complete_q. The "finish building this RPC message" code is removed from the svc_rdma_process_read_list() path.

This arrangement avoids the need for an nfsd thread to wait for an RDMA Read non-interruptibly without a timeout. It's basically the same code structure that Tom Tucker used for Read chunks along with some clean-up and modernization. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Add back svcxprt_rdma::sc_read_complete_q | Chuck Lever | 1 | -0/+1
Having an nfsd thread waiting for an RDMA Read completion is problematic if the Read responder (ie, the client) stops responding. We need to go back to handling RDMA Reads by allowing the nfsd thread to return to the svc scheduler, then waking a second thread to finish the RPC message once the Read completion fires. As a next step, add a list_head upon which completed Reads are queued. A subsequent patch will make use of this queue. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Add back svc_rdma_recv_ctxt::rc_pages | Chuck Lever | 1 | -1/+3
Having an nfsd thread waiting for an RDMA Read completion is problematic if the Read responder (the client) stops responding. We need to go back to handling RDMA Reads by allowing the nfsd thread to return to the svc scheduler, then waking a second thread to finish the RPC message once the Read completion fires. To start with, restore the rc_pages field so that RDMA Read pages can be managed across calls to svc_rdma_recvfrom(). Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: De-duplicate completion ID initialization helpers | Chuck Lever | 1 | -0/+24
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Move the svc_rdma_cc_init() call | Chuck Lever | 1 | -0/+2
Now that the chunk_ctxt for Reads is no longer dynamically allocated it can be initialized once for the life of the object that contains it (struct svc_rdma_recv_ctxt). Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Update synopsis of svc_rdma_build_read_segment() | Chuck Lever | 1 | -0/+7
Since the RDMA Read I/O state is now contained in the recv_ctxt, svc_rdma_build_read_segment() can use the recv_ctxt to derive that information rather than the other way around. This removes one usage of the ri_readctxt field, enabling its removal in a subsequent patch. At the same time, the use of ri_rqst can similarly be replaced with a passed-in function parameter. Start with build_read_segment() because it is a common utility function at the bottom of the Read chunk path. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Move read_info::ri_pageoff into struct svc_rdma_recv_ctxt | Chuck Lever | 1 | -0/+1
Further clean up: move the starting byte offset field into svc_rdma_recv_ctxt. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Move svc_rdma_read_info::ri_pageno to struct svc_rdma_recv_ctxt | Chuck Lever | 1 | -0/+1
Further clean up: move the page index field into svc_rdma_recv_ctxt. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Start moving fields out of struct svc_rdma_read_info | Chuck Lever | 1 | -0/+4
Since the request's svc_rdma_recv_ctxt will stay around for the duration of the RDMA Read operation, the contents of struct svc_rdma_read_info can reside in the request's svc_rdma_recv_ctxt rather than being allocated separately. This will eventually save a call to kmalloc() in a hot path. Start this clean-up by moving the Read chunk's svc_rdma_chunk_ctxt. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Move struct svc_rdma_chunk_ctxt to svc_rdma.h | Chuck Lever | 1 | -0/+15
Prepare for nestling these into the send and recv ctxts so they no longer have to be allocated dynamically. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Add an async version of svc_rdma_send_ctxt_put() | Chuck Lever | 1 | -0/+2
DMA unmapping can take quite some time, so it should not be handled in a single-threaded completion handler. Defer releasing send_ctxts to the recently-added workqueue. With this patch, DMA unmapping can be handled in parallel, and it does not cause head-of-queue blocking of Send completions. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | svcrdma: Add a utility workqueue to svcrdma | Chuck Lever | 1 | -0/+1
To handle work in the background, set up an UNBOUND workqueue for svcrdma. Subsequent patches will make use of it. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
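For background, a generic sketch of creating such a workqueue (the queue name and variable are illustrative; the patch's actual identifiers may differ):

static struct workqueue_struct *example_svcrdma_wq;

static int example_init(void)
{
	example_svcrdma_wq = alloc_workqueue("svcrdma", WQ_UNBOUND, 0);
	if (!example_svcrdma_wq)
		return -ENOMEM;
	return 0;
}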
2024-01-07 | svcrdma: Eliminate allocation of recv_ctxt objects in backchannel | Chuck Lever | 1 | -1/+2
The svc_rdma_recv_ctxt free list uses a lockless list to avoid the need for a spin lock in the fast path. llist_del_first(), which is used by svc_rdma_recv_ctxt_get(), requires serialization, however, when there are multiple list producers that are unserialized. I mistakenly thought there was only one caller of svc_rdma_recv_ctxt_get() (svc_rdma_refresh_recvs()), thus explicit serialization would not be necessary. But there is another caller: svc_rdma_bc_sendto(), and these two are not serialized against each other. I haven't seen ill effects that I could directly ascribe to a lack of serialization. It's just an observation based on code audit. When DMA-mapping before sending a Reply, the passed-in struct svc_rdma_recv_ctxt is used only for its write and reply PCLs. These are currently always empty in the backchannel case. So, instead of passing a full svc_rdma_recv_ctxt object to svc_rdma_map_reply_msg(), let's pass in just the Write and Reply PCLs. This change makes it unnecessary for the backchannel to acquire a dummy svc_rdma_recv_ctxt object when sending an RPC Call. The need for svc_rdma_recv_ctxt free list serialization is now completely avoided. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
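The llist rule being relied on, as a generic sketch (not the svcrdma code): llist_add() may be called concurrently without locking, but llist_del_first() callers must either be reduced to a single consumer or be serialized, for example with a spinlock.

static LLIST_HEAD(example_free_list);
static DEFINE_SPINLOCK(example_consumer_lock);

static struct llist_node *example_get_node(void)
{
	struct llist_node *node;

	spin_lock(&example_consumer_lock);	/* serialize consumers */
	node = llist_del_first(&example_free_list);
	spin_unlock(&example_consumer_lock);
	return node;
}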
2024-01-07 | SUNRPC: Remove RQ_SPLICE_OK | Chuck Lever | 1 | -2/+0
This flag is no longer used. Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-07 | SUNRPC: Add a server-side API for retrieving an RPC's pseudoflavor | Chuck Lever | 1 | -1/+6
NFSD will use this new API to determine whether nfsd_splice_read is safe to use. This avoids the need to add a dependency to NFSD for CONFIG_SUNRPC_GSS. Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-01-04 | NFSv4.1: Use the nfs_client's rpc timeouts for backchannel | Benjamin Coddington | 4 | -13/+17
For backchannel requests that lookup the appropriate nfs_client, use the state-management rpc_clnt's rpc_timeout parameters for the backchannel's response. When the nfs_client cannot be found, fall back to using the xprt's default timeout parameters. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Tested-by: Chuck Lever <chuck.lever@oracle.com> Tested-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2024-01-04 | SUNRPC: Remove unused function rpc_clnt_xprt_switch_put() | Anna Schumaker | 1 | -1/+0
Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2023-11-08 | Merge tag 'nfs-for-6.7-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs | Linus Torvalds | 1 | -0/+1
Pull NFS client updates from Trond Myklebust:
 "Bugfixes:
  - SUNRPC:
    - re-probe the target RPC port after an ECONNRESET error
    - handle allocation errors from rpcb_call_async()
    - fix a use-after-free condition in rpc_pipefs
    - fix up various checks for timeouts
  - NFSv4.1:
    - Handle NFS4ERR_DELAY errors during session trunking
    - fix SP4_MACH_CRED protection for pnfs IO
  - NFSv4:
    - Ensure that we test all delegations when the server notifies us that it may have revoked some of them

  Features:
  - Allow knfsd processes to break out of NFS4ERR_DELAY loops when re-exporting NFSv4.x by setting appropriate values for the 'delay_retrans' module parameter
  - nfs: Convert nfs_symlink() to use a folio"

* tag 'nfs-for-6.7-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
  nfs: Convert nfs_symlink() to use a folio
  SUNRPC: Fix RPC client cleaned up the freed pipefs dentries
  NFSv4.1: fix SP4_MACH_CRED protection for pnfs IO
  SUNRPC: Add an IS_ERR() check back to where it was
  NFSv4.1: fix handling NFS4ERR_DELAY when testing for session trunking
  nfs41: drop dependency between flexfiles layout driver and NFSv3 modules
  NFSv4: fairly test all delegations on a SEQ4_ revocation
  SUNRPC: SOFTCONN tasks should time out when on the sending list
  SUNRPC: Force close the socket when a hard error is reported
  SUNRPC: Don't skip timeout checks in call_connect_status()
  SUNRPC: ECONNRESET might require a rebind
  NFSv4/pnfs: Allow layoutget to return EAGAIN for softerr mounts
  NFSv4: Add a parameter to limit the number of retries after NFS4ERR_DELAY