Add the queue that will enforce the ordering of RPC task transmission.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
When storing a struct rpc_rqst on the slot allocation list, we currently
use the same field 'rq_list' as we use to store the request on the
receive queue. Since the structure is never on both lists at the same
time, this is OK.
However, for clarity, let's make that a union with different names for
the different lists so that we can more easily distinguish between
the two states.
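A minimal sketch of the idea (the name of the second union member is illustrative, not necessarily the one used in the patch):

    struct rpc_rqst {
            /* ... other fields ... */
            union {
                    struct list_head rq_list;   /* request is on the slot allocation list */
                    struct list_head rq_recv;   /* request is on the receive queue */
            };
            /* ... other fields ... */
    };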
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Allow the caller in clnt.c to call into the code to wait for a reply
after calling xprt_transmit(). Again, the reason is that the backchannel
code does not need this functionality.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Separate out the action of adding a request to the reply queue so that the
backchannel code can simply skip calling it altogether.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
We will use the same lock to protect both the transmit and receive queues.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Rather than waking up the entire queue of RPC messages a second time,
just wake up the task that was put to sleep.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
When asked to wake up an RPC task, it makes sense to test whether or not
the task is still queued.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Add a helper that will wake up a task that is sleeping on a specific
queue, and will set the value of task->tk_status. This is mainly
intended for use by the transport layer to notify the task of an
error condition.
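A rough sketch of what such a helper could look like (names and locking details are illustrative, not lifted from the patch):

    void rpc_wake_up_queued_task_set_status(struct rpc_wait_queue *queue,
                                            struct rpc_task *task, int status)
    {
            spin_lock_bh(&queue->lock);
            if (RPC_IS_QUEUED(task) && task->tk_waitqueue == queue) {
                    task->tk_status = status;
                    rpc_wake_up_task_queue_locked(queue, task);
            }
            spin_unlock_bh(&queue->lock);
    }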
Signed-off-by: Trond Myklebust <[email protected]>
|
|
We are going to need to pin the request for both send and receive.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
If the previous message was only partially transmitted, we need to close
the socket in order to avoid corruption of the message stream. To do so,
we currently hijack the unlocking of the socket in order to schedule
the close.
Now that we track the message offset in the socket state, we can move
that kind of checking out of the socket lock code, which is needed to
allow messages to remain queued after dropping the socket lock.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Rather than resetting state variables in the socket state_change() callback,
do it in the sunrpc TCP connect function itself.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Since we will want to introduce similar TCP state variables for the
transmission of requests, let's rename the existing ones to indicate
that they are for the receive side.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Currently, we grab the socket bit lock before we allow the message
to be XDR encoded. That significantly slows down the transmission
rate, since we serialise on a potentially blocking operation.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Add states to indicate that the message send and receive are not yet
complete.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
If a message has been encoded using RPCSEC_GSS, the server is
maintaining a window of sequence numbers that it considers valid.
The client should normally be tracking that window, and needs to
verify that the sequence number used by the message being transmitted
still lies inside the window of validity.
So far, we've been able to assume this condition would be realised
automatically, since the client has been encoding the message only
after taking the socket lock. Once we change that condition, we
will need the explicit check.
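For illustration, the check amounts to a standard sliding-window test; the helper and parameter names below are hypothetical, not from the patch:

    /* True if the request's GSS sequence number still lies within the
     * window of 'win' sequence numbers trailing 'seq_xmit', the most
     * recently assigned number.  Unsigned arithmetic handles wraparound.
     */
    static bool gss_seq_in_window(u32 req_seq, u32 seq_xmit, u32 win)
    {
            return seq_xmit - req_seq < win;
    }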
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Move the initialisation back into xprt.c.
Signed-off-by: Trond Myklebust <[email protected]>
|
|
We see the following scenario:
1) Link endpoint B on node 1 discovers that its peer endpoint is gone.
Since there is a second working link, failover procedure is started.
2) Link endpoint A on node 1 sends a FAILOVER message to peer endpoint
A on node 2. The node item 1->2 goes to state FAILINGOVER.
3) Link endpoint A/2 receives the failover, and is supposed to take
down its parallel link endpoint B/2, while producing a FAILOVER
message to send back to A/1.
4) However, B/2 has already been deleted, so no FAILOVER message can
be created.
5) Node 1->2 remains in state FAILINGOVER forever, refusing to receive
any messages that can bring B/1 up again. We are left with a non-
redundant link between node 1 and 2.
We fix this by letting endpoint A/2 build a dummy FAILOVER message
to send back to A/1, so that the situation can be resolved.
Signed-off-by: LUU Duc Canh <[email protected]>
Signed-off-by: Jon Maloy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This patch fixes the bug that all datapath and vport ops are returning
wrong values (OVS_FLOW_CMD_NEW or OVS_DP_CMD_NEW) in their replies.
Signed-off-by: Yifeng Sun <[email protected]>
Acked-by: Pravin B Shelar <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Structure 'tls_rec' contains sg_aead_in and sg_aead_out, which point
to an aad_space and then chain the scatterlists sg_plaintext_data and
sg_encrypted_data respectively. Rather than using chained scatterlists
for plaintext and encrypted data in aead_req, it is more efficient to
store aad_space in the first index of sg_plaintext_data and
sg_encrypted_data themselves and get rid of sg_aead_in, sg_aead_out and
the extra chaining.
This requires increasing the size of the sg_encrypted_data and
sg_plaintext_data arrays by 1 to accommodate the entry for aad_space.
The code which uses sg_encrypted_data and sg_plaintext_data has been
modified to skip the first index as it points to aad_space.
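A simplified sketch of the new layout (the entry count and exact call sites are assumptions, not taken verbatim from the patch):

    /* Index 0 of each array now carries the AAD; payload entries follow
     * from index 1, so aead_req can point straight at the arrays without
     * a separate chained scatterlist.
     */
    sg_init_table(rec->sg_plaintext_data, ARRAY_SIZE(rec->sg_plaintext_data));
    sg_set_buf(&rec->sg_plaintext_data[0], rec->aad_space,
               sizeof(rec->aad_space));
    /* plaintext pages are then added starting at sg_plaintext_data[1] */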
Signed-off-by: Vakul Garg <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Fixes
Here are some miscellaneous fixes for AF_RXRPC:
(1) Remove a duplicate variable initialisation.
(2) Fix one of the checks made when we decide to set up a new incoming
service call in which a flag is being checked in the wrong field of
the packet header. This check is abstracted out into helper
functions.
(3) Fix RTT gathering. The code has been trying to make use of socket
timestamps, but wasn't actually enabling them. The code has also been
recording a transmit time for the outgoing packet for which we're
going to measure the RTT after sending the message - but we can get
the incoming packet before we get to that and record a negative RTT.
(4) Fix the emission of BUSY packets (we are emitting ABORTs instead).
(5) Improve error checking on incoming packets.
(6) Try to fix a bug in new service call handling whereby a BUG we should
never be able to reach somehow got triggered. Do this by moving much
of the checking as early as possible and not repeating it later
(depends on (5) above).
(7) Fix the sockopts set on a UDP6 socket to include the ones set on a
UDP4 socket so that we receive UDP4 errors and packet-too-large
notifications too.
(8) Fix the distribution of errors so that we do it at the point of
receiving an error in the UDP callback rather than deferring it
thereby cutting short any transmissions that would otherwise occur in
the window.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
The default receive buffer size for a listener socket is 2MB. For
each arriving empty SYN, the Linux kernel allocates a 768-byte buffer.
This means that a listener socket can serve at most ~2700 simultaneous
empty connection setup requests before it hits a receive buffer
overflow, and far fewer if the SYN carries any significant amount
of data.
When this happens the setup request is rejected, and the client
receives an ECONNREFUSED error.
This commit mitigates this problem by letting the client socket try to
retransmit the SYN message multiple times when it sees it rejected with
the code TIPC_ERR_OVERLOAD. Retransmission is done at random intervals
in the range of [100 ms, setup_timeout / 4], as many times as there is
room for within the setup timeout limit.
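A sketch of how such a randomised retry delay could be computed (the helper name is hypothetical; it assumes setup_timeout / 4 is at least 100 ms):

    /* Pick a delay in [100 ms, setup_timeout / 4], expressed in jiffies. */
    static unsigned long setup_retry_delay(unsigned long setup_timeout)
    {
            unsigned long min_delay = msecs_to_jiffies(100);
            unsigned long max_delay = setup_timeout / 4;

            return min_delay + prandom_u32_max(max_delay - min_delay + 1);
    }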
Signed-off-by: Tung Nguyen <[email protected]>
Acked-by: Ying Xue <[email protected]>
Signed-off-by: Jon Maloy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Messages intended for initiating a connection are currently
indistinguishable from regular datagram messages. The TIPC
protocol specification defines bit 17 in word 0 as a SYN bit
to allow sanity check of such messages in the listening socket,
but this has so far never been implemented.
We do that in this commit.
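A sketch of the accessors this implies, modelled on the existing TIPC header helpers (the exact function names are an assumption):

    static inline int msg_is_syn(struct tipc_msg *m)
    {
            return msg_bits(m, 0, 17, 1);
    }

    static inline void msg_set_syn(struct tipc_msg *m, u32 d)
    {
            msg_set_bits(m, 0, 17, 1, d);
    }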
Acked-by: Ying Xue <[email protected]>
Signed-off-by: Jon Maloy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
We refactor the function tipc_sk_filter_connect(), both to make it
more readable and as a preparation for the next commit.
Acked-by: Ying Xue <[email protected]>
Signed-off-by: Jon Maloy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
We refactor this function as a preparation for the coming commits in
the same series.
Acked-by: Ying Xue <[email protected]>
Signed-off-by: Jon Maloy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The function tipc_msg_reverse() is reversing the header of a message
while reusing the original buffer. We have seen on several occasions
that this may have unfortunate side effects when the buffer to be
reversed is a clone.
In one of the following commits we will again need to reverse cloned
buffers, so this is the right time to permanently eliminate this
problem. In this commit we let the said function always consume the
original buffer and replace it with a new one when applicable.
Acked-by: Ying Xue <[email protected]>
Signed-off-by: Jon Maloy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Previously, the TCP initial receive buffer was ~87KB by default and
the initial receive window was ~29KB (20 MSS). This patch changes
the two numbers to 128KB and ~64KB (rounded down to a multiple of
MSS, as illustrated in the sketch below) respectively. The patch also
simplifies the calculations so that the two numbers are directly
controlled by sysctl tcp_rmem[1]:
1) Initial receiver buffer budget (sk_rcvbuf): while this should
be configured via sysctl tcp_rmem[1], previously tcp_fixup_rcvbuf()
would always override it and set a larger size when a new connection
was established.
2) Initial receive window in SYN: previously it was set to 20
packets if MSS <= 1460. The number 20 was based on the initial
congestion window of 10: the receiver needs twice that amount to
avoid being limited by the receive window upon out-of-order
delivery in the first window burst. But since this only
applies if the receiving MSS <= 1460, connections using a large MTU
(e.g. to utilize receiver zero-copy) may be limited by the
receive window.
With this patch, TCP memory configuration is more straightforward and
more properly sized for modern high-speed networks by default. Several
popular stacks have been announcing a 64KB rwin in SYNs as well.
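A minimal illustration of that rounding (not the actual tcp code):

    /* "~64KB rounded down to a multiple of MSS" */
    static u32 initial_rwnd(u32 mss)
    {
            u32 budget = 64 * 1024;

            return (budget / mss) * mss;    /* e.g. 44 * 1460 = 64240 for MSS 1460 */
    }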
Signed-off-by: Yuchung Cheng <[email protected]>
Signed-off-by: Wei Wang <[email protected]>
Signed-off-by: Neal Cardwell <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Reviewed-by: Soheil Hassas Yeganeh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Fix a crash caused by a NULL pointer dereference when the debugfs
functions le_max_key_size_read, le_max_key_size_write,
le_min_key_size_read or le_min_key_size_write were accessed while the
Bluetooth adapter was powered off.
The fix is to move max_key_size and min_key_size from smp_dev to
hci_dev. At the same time they were renamed to le_max_key_size and
le_min_key_size.
BUG: unable to handle kernel NULL pointer dereference at 00000000000002e8
PGD 0 P4D 0
Oops: 0000 [#24] SMP PTI
CPU: 2 PID: 6255 Comm: cat Tainted: G D OE 4.18.9-200.fc28.x86_64 #1
Hardware name: LENOVO 4286CTO/4286CTO, BIOS 8DET76WW (1.46 ) 06/21/2018
RIP: 0010:le_max_key_size_read+0x45/0xb0 [bluetooth]
Code: 00 00 00 48 83 ec 10 65 48 8b 04 25 28 00 00 00 48 89 44 24 08 31 c0 48 8b 87 c8 00 00 00 48 8d 7c 24 04 48 8b 80 48 0a 00 00 <48> 8b 80 e8 02 00 00 0f b6 48 52 e8 fb b6 b3 ed be 04 00 00 00 48
RSP: 0018:ffffab23c3ff3df0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00007f0b4ca2e000 RCX: ffffab23c3ff3f08
RDX: ffffffffc0ddb033 RSI: 0000000000000004 RDI: ffffab23c3ff3df4
RBP: 0000000000020000 R08: 0000000000000000 R09: 0000000000000000
R10: ffffab23c3ff3ed8 R11: 0000000000000000 R12: ffffab23c3ff3f08
R13: 00007f0b4ca2e000 R14: 0000000000020000 R15: ffffab23c3ff3f08
FS: 00007f0b4ca0f540(0000) GS:ffff91bd5e280000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000000002e8 CR3: 00000000629fa006 CR4: 00000000000606e0
Call Trace:
full_proxy_read+0x53/0x80
__vfs_read+0x36/0x180
vfs_read+0x8a/0x140
ksys_read+0x4f/0xb0
do_syscall_64+0x5b/0x160
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Signed-off-by: Matias Karhumaa <[email protected]>
Signed-off-by: Marcel Holtmann <[email protected]>
|
|
Since we no longer require NAPI drivers to provide
an ndo_poll_controller(), napi_schedule() is no longer called
before the poll_one_napi() invocation.
So testing NAPI_STATE_SCHED there is likely to cause early returns.
While we are at it, remove an outdated comment.
Note to future bisections: This change might surface prior
bugs in drivers. See commit 73f21c653f93 ("bnxt_en: Fix TX
timeout during netpoll.") for one occurrence.
Fixes: ac3d9dd034e5 ("netpoll: make ndo_poll_controller() optional")
Signed-off-by: Eric Dumazet <[email protected]>
Tested-by: Song Liu <[email protected]>
Cc: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Fixes the following sparse warning:
net/sched/sch_generic.c:944:6: warning:
symbol 'qdisc_free_cb' was not declared. Should it be static?
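The fix amounts to giving the symbol internal linkage; roughly as below (the body is an approximation shown only for context):

    static void qdisc_free_cb(struct rcu_head *head)
    {
            struct Qdisc *q = container_of(head, struct Qdisc, rcu);

            qdisc_free(q);
    }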
Fixes: 3a7d0d07a386 ("net: sched: extend Qdisc with rcu")
Signed-off-by: Wei Yongjun <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
We need to explicitly zero is_sticky when creating a new fdb, otherwise
we might get a stale value for a new entry.
Fixes: 435f2e7cc0b7 ("net: bridge: add support for sticky fdb entries")
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211
Johannes Berg says:
====================
More patches than I'd like perhaps, but each seems reasonable:
* two new spectre-v1 mitigations in nl80211
* TX status fix in general, and mesh in particular
* powersave vs. offchannel fix
* regulatory initialization fix
* fix for a queue hang due to a bad return value
* allocate TXQs for active monitor interfaces, fixing my
earlier patch to avoid unnecessary allocations where I
missed this case needed them
* fix TDLS data frames priority assignment
* fix scan results processing to take into account duplicate
channel numbers (over different operating classes, but we
don't necessarily know the operating class)
* various hwsim fixes for radio destruction and new radio
announcement messages
* remove an extraneous kernel-doc line
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
Fixes the following sparse warning:
net/tls/tls_sw.c:655:16: warning:
symbol 'get_rec' was not declared. Should it be static?
Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption of records for performance")
Signed-off-by: Wei Yongjun <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Fixes the following sparse warning:
net/core/gen_stats.c:166:1: warning:
symbol '___gnet_stats_copy_basic' was not declared. Should it be static?
Fixes: 5e111210a443 ("net/core: Add new basic hardware counter")
Signed-off-by: Wei Yongjun <[email protected]>
Acked-by: Eelco Chaudron <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The method ndo_start_xmit() is defined as returning an 'netdev_tx_t',
which is a typedef for an enum type, so make sure the implementation in
this driver returns a 'netdev_tx_t' value, and change the function
return type to netdev_tx_t.
Found by coccinelle.
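The shape of such a change, using a hypothetical foo_start_xmit() as the driver's transmit function:

    /* before: static int foo_start_xmit(struct sk_buff *skb, struct net_device *dev) */
    static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            /* ... queue the skb for transmission ... */
            return NETDEV_TX_OK;    /* a netdev_tx_t value, matching ndo_start_xmit */
    }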
Signed-off-by: YueHaibing <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Only check for the network namespace if the socket is available.
Fixes: f564650106a6 ("netfilter: check if the socket netns is correct.")
Reported-by: Guenter Roeck <[email protected]>
Tested-by: Guenter Roeck <[email protected]>
Signed-off-by: Flavio Leitner <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
|
nft_set_gc_batch_check() checks whether the gc buffer is full.
If the gc buffer is full, it is released internally by
nft_set_gc_batch_complete().
In the rbtree case, rb_erase() should be called before
nft_set_gc_batch_complete(); therefore rb_erase() should also be
called before nft_set_gc_batch_check().
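A simplified sketch of the required ordering in the rbtree gc loop (variable names follow the usual nft_rbtree code; batch handling is abbreviated):

    rb_erase(&rbe->node, &priv->root);      /* unlink from the tree first... */
    gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC);
                                            /* ...since this may complete and
                                             * release the previous batch */
    if (!gcb)
            break;
    nft_set_gc_batch_add(gcb, rbe);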
test commands:
table ip filter {
set set1 {
type ipv4_addr; flags interval, timeout;
gc-interval 10s;
timeout 1s;
elements = {
1-2,
3-4,
5-6,
...
10000-10001,
}
}
}
%nft -f test.nft
splat looks like:
[ 430.273885] kasan: GPF could be caused by NULL-ptr deref or user memory access
[ 430.282158] general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI
[ 430.283116] CPU: 1 PID: 190 Comm: kworker/1:2 Tainted: G B 4.18.0+ #7
[ 430.283116] Workqueue: events_power_efficient nft_rbtree_gc [nf_tables_set]
[ 430.313559] RIP: 0010:rb_next+0x81/0x130
[ 430.313559] Code: 08 49 bd 00 00 00 00 00 fc ff df 48 bb 00 00 00 00 00 fc ff df 48 85 c0 75 05 eb 58 48 89 d4
[ 430.313559] RSP: 0018:ffff88010cdb7680 EFLAGS: 00010207
[ 430.313559] RAX: 0000000000b84854 RBX: dffffc0000000000 RCX: ffffffff83f01973
[ 430.313559] RDX: 000000000017090c RSI: 0000000000000008 RDI: 0000000000b84864
[ 430.313559] RBP: ffff8801060d4588 R08: fffffbfff09bc349 R09: fffffbfff09bc349
[ 430.313559] R10: 0000000000000001 R11: fffffbfff09bc348 R12: ffff880100f081a8
[ 430.313559] R13: dffffc0000000000 R14: ffff880100ff8688 R15: dffffc0000000000
[ 430.313559] FS: 0000000000000000(0000) GS:ffff88011b400000(0000) knlGS:0000000000000000
[ 430.313559] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 430.313559] CR2: 0000000001551008 CR3: 000000005dc16000 CR4: 00000000001006e0
[ 430.313559] Call Trace:
[ 430.313559] nft_rbtree_gc+0x112/0x5c0 [nf_tables_set]
[ 430.313559] process_one_work+0xc13/0x1ec0
[ 430.313559] ? _raw_spin_unlock_irq+0x29/0x40
[ 430.313559] ? pwq_dec_nr_in_flight+0x3c0/0x3c0
[ 430.313559] ? set_load_weight+0x270/0x270
[ 430.313559] ? __switch_to_asm+0x34/0x70
[ 430.313559] ? __switch_to_asm+0x40/0x70
[ 430.313559] ? __switch_to_asm+0x34/0x70
[ 430.313559] ? __switch_to_asm+0x34/0x70
[ 430.313559] ? __switch_to_asm+0x40/0x70
[ 430.313559] ? __switch_to_asm+0x34/0x70
[ 430.313559] ? __switch_to_asm+0x40/0x70
[ 430.313559] ? __switch_to_asm+0x34/0x70
[ 430.313559] ? __switch_to_asm+0x34/0x70
[ 430.313559] ? __switch_to_asm+0x40/0x70
[ 430.313559] ? __switch_to_asm+0x34/0x70
[ 430.313559] ? __schedule+0x6d3/0x1f50
[ 430.313559] ? find_held_lock+0x39/0x1c0
[ 430.313559] ? __sched_text_start+0x8/0x8
[ 430.313559] ? cyc2ns_read_end+0x10/0x10
[ 430.313559] ? save_trace+0x300/0x300
[ 430.313559] ? sched_clock_local+0xd4/0x140
[ 430.313559] ? find_held_lock+0x39/0x1c0
[ 430.313559] ? worker_thread+0x353/0x1120
[ 430.313559] ? worker_thread+0x353/0x1120
[ 430.313559] ? lock_contended+0xe70/0xe70
[ 430.313559] ? __lock_acquire+0x4500/0x4500
[ 430.535635] ? do_raw_spin_unlock+0xa5/0x330
[ 430.535635] ? do_raw_spin_trylock+0x101/0x1a0
[ 430.535635] ? do_raw_spin_lock+0x1f0/0x1f0
[ 430.535635] ? _raw_spin_lock_irq+0x10/0x70
[ 430.535635] worker_thread+0x15d/0x1120
[ ... ]
Fixes: 8d8540c4f5e0 ("netfilter: nft_set_rbtree: add timeout support")
Signed-off-by: Taehee Yoo <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
|
rhashtable_lookup_fast()
Internally, rhashtable_lookup_fast() calls rcu_read_lock() and then
rhashtable_lookup(), so in places that are already guaranteed to be
under the RCU read lock, rhashtable_lookup() is enough.
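Illustration of the pattern (the table, key, params and handle() names are placeholders):

    rcu_read_lock();
    obj = rhashtable_lookup(&ht, &key, params); /* caller already holds the RCU
                                                 * read lock, so the _fast
                                                 * variant's extra
                                                 * rcu_read_lock() is redundant */
    if (obj)
            handle(obj);
    rcu_read_unlock();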
Signed-off-by: Taehee Yoo <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
|
nf_flow_offload_{ip/ipv6}_hook() checks the nat flag and then calls
nf_flow_nat_{ip/ipv6}(), which also checks the nat flag, so the
nat flag checks in nf_flow_offload_{ip/ipv6}_hook() are unnecessary.
Signed-off-by: Taehee Yoo <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
|
Add the ability to set the connection tracking secmark value.
Add the ability to set the meta secmark value.
Signed-off-by: Christian Göttsche <[email protected]>
Acked-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
|
Add the ability to set the security context of packets within the nf_tables framework.
Add a nft_object for holding security contexts in the kernel and manipulating packets on the wire.
Convert the security context strings at rule addition time to security identifiers.
This is the same behaviour as in xt_SECMARK and offers better performance than computing it per packet.
Set the maximum security context length to 256.
Signed-off-by: Christian Göttsche <[email protected]>
Acked-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
|
deleted on device
We configured iptables as below, which only allowed incoming data on
established connections:
iptables -t mangle -A PREROUTING -m state --state ESTABLISHED -j ACCEPT
iptables -t mangle -P PREROUTING DROP
When deleting a secondary address, the current masquerade implementation
would flush all conntracks on this device. All the established connections
on the primary address are also deleted, and subsequent incoming data on
those connections is then wrongly dropped because it is identified as
belonging to a NEW connection.
So when an address is deleted, we should only flush the connections
related to that address.
Signed-off-by: Tan Hu <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
|
Fix error distribution by immediately delivering the errors to all the
affected calls rather than deferring them to a worker thread. The problem
with the latter is that retries and other activity can happen in the
meantime, when we would rather stop them sooner.
To this end:
(1) Stop the error distributor from removing calls from the error_targets
list so that peer->lock isn't needed to synchronise against other adds
and removals.
(2) Require the peer's error_targets list to be accessed with RCU, thereby
avoiding the need to take peer->lock over distribution.
(3) Don't attempt to affect a call's state if it is already marked complete.
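A simplified sketch of the resulting distribution loop covering the points above (field and helper names are approximations of the rxrpc ones, not verbatim):

    rcu_read_lock();
    hlist_for_each_entry_rcu(call, &peer->error_targets, error_link) {
            if (call->state < RXRPC_CALL_COMPLETE)  /* point (3): skip completed calls */
                    rxrpc_set_call_completion(call, RXRPC_CALL_NETWORK_ERROR,
                                              0, -err);
    }
    rcu_read_unlock();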
Signed-off-by: David Howells <[email protected]>
|
|
It seems that enabling IPV6_RECVERR on an IPv6 socket doesn't also turn on
IP_RECVERR, so neither local errors nor ICMP-transported remote errors from
IPv4 peer addresses are returned to the AF_RXRPC protocol.
Make the sockopt setting code in rxrpc_open_socket() fall through from the
AF_INET6 case to the AF_INET case to turn on all the AF_INET options too in
the AF_INET6 case.
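The intent, shown with userspace-style setsockopt calls for brevity (the kernel code uses the in-kernel equivalent):

    int on = 1;

    switch (family) {
    case AF_INET6:
            setsockopt(fd, SOL_IPV6, IPV6_RECVERR, &on, sizeof(on));
            /* fall through: also enable the IPv4-level options so that
             * errors for IPv4 peer addresses are reported too */
    case AF_INET:
            setsockopt(fd, SOL_IP, IP_RECVERR, &on, sizeof(on));
            break;
    }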
Fixes: f2aeed3a591f ("rxrpc: Fix error reception on AF_INET6 sockets")
Signed-off-by: David Howells <[email protected]>
|
|
Make the following changes to improve the robustness of the code that sets
up a new service call:
(1) Cache the rxrpc_sock struct obtained in rxrpc_data_ready() to do a
service ID check and pass that along to rxrpc_new_incoming_call().
This means that I can remove the check from rxrpc_new_incoming_call()
without the need to worry about the socket attached to the local
endpoint getting replaced - which would invalidate the check.
(2) Cache the rxrpc_peer struct, thereby allowing the peer search to be
done once. The peer is passed to rxrpc_new_incoming_call(), thereby
saving the need to repeat the search.
This also reduces the possibility of rxrpc_publish_service_conn()
BUG()'ing due to the detection of a duplicate connection, despite the
initial search done by rxrpc_find_connection_rcu() having turned up
nothing.
This BUG() shouldn't ever get hit since rxrpc_data_ready() *should* be
non-reentrant and the result of the initial search should still hold
true, but it has proven possible to hit.
I *think* this may be due to __rxrpc_lookup_peer_rcu() cutting short
the iteration over the hash table if it finds a matching peer with a
zero usage count, but I don't know for sure since it's only ever been
hit once that I know of.
Another possibility is that a bug in rxrpc_data_ready() that checked
the wrong byte in the header for the RXRPC_CLIENT_INITIATED flag
might've let through a packet that caused a spurious and invalid call
to be set up. That is addressed in another patch.
(3) Fix __rxrpc_lookup_peer_rcu() to skip peer records that have a zero
usage count rather than stopping and returning not found, just in case
there's another peer record behind it in the bucket.
(4) Don't search the peer records in rxrpc_alloc_incoming_call(), but
rather either use the peer cached in (2) or, if one wasn't found,
preemptively install a new one.
Fixes: 8496af50eb38 ("rxrpc: Use RCU to access a peer's service connection tree")
Signed-off-by: David Howells <[email protected]>
|
|
Do more up-front checking on incoming packets to weed out invalid ones and
also ones aimed at services that we don't support.
Whilst we're at it, replace the clearing of call and skew if we don't find
a connection with just initialising the variables to zero at the top of the
function.
Signed-off-by: David Howells <[email protected]>
|
|
In the input path, a received sk_buff can be marked for rejection by
setting RXRPC_SKB_MARK_* in skb->mark and, if needed, some auxiliary data
(such as an abort code) in skb->priority. The rejection is handled by
queueing the sk_buff up for dealing with in process context. The output
code reads the mark and priority and, theoretically, generates an
appropriate response packet.
However, if RXRPC_SKB_MARK_BUSY is set, this isn't noticed and an ABORT
message with a random abort code is generated (since skb->priority wasn't
set to anything).
Fix this by outputting the appropriate sort of packet.
Also, whilst we're at it, most of the marks are no longer used, so remove
them and rename the remaining two to something more obvious.
Fixes: 248f219cb8bc ("rxrpc: Rewrite the data and ack handling code")
Signed-off-by: David Howells <[email protected]>
|
|
Fix RTT information gathering in AF_RXRPC by the following means:
(1) Enable Rx timestamping on the transport socket with SO_TIMESTAMPNS.
(2) If the sk_buff doesn't have a timestamp set when rxrpc_data_ready()
collects it, set it at that point (see the sketch after this list).
(3) Allow ACKs to be requested on the last packet of a client call, but
not a service call. We need to be careful lest we undo:
bf7d620abf22c321208a4da4f435e7af52551a21
Author: David Howells <[email protected]>
Date: Thu Oct 6 08:11:51 2016 +0100
rxrpc: Don't request an ACK on the last DATA packet of a call's Tx phase
but that only really applies to service calls that we're handling,
since the client side gets to send the final ACK (or not).
(4) When about to transmit an ACK or DATA packet, record the Tx timestamp
just before transmission only; don't update the timestamp afterwards.
(5) Switch the ordering between recording the serial and recording the
timestamp to always set the serial number first. The serial number
shouldn't be seen referenced by an ACK packet until we've transmitted
the packet bearing it - so in the Rx path, we don't need the timestamp
until we've checked the serial number.
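A sketch of points (1) and (2) above (simplified; the structure and field names are assumptions):

    int on = 1;

    /* (1) ask the transport socket to timestamp received packets */
    kernel_setsockopt(local->socket, SOL_SOCKET, SO_TIMESTAMPNS,
                      (char *)&on, sizeof(on));

    /* (2) in rxrpc_data_ready(): fall back to stamping the skb ourselves */
    if (skb->tstamp == 0)
            skb->tstamp = ktime_get_real();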
Fixes: cf1a6474f807 ("rxrpc: Add per-peer RTT tracker")
Signed-off-by: David Howells <[email protected]>
|