|
Don't assume that a 500 microsecond time limit should be used for
all receive endpoints that support aggregation. Instead, specify
the time limit to use in the configuration data.
Set a 500 microsecond limit for all existing RX endpoints, as before.
Checking for overflow for the time limit field is a bit complicated.
Rather than duplicate a lot of code in ipa_endpoint_data_valid_one(),
call WARN() if any value is found to be too large when encoding it.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add a new flag for AP receive endpoints that indicates whether
a "hard limit" is used as a criterion for closing aggregation.
Add comments explaining the difference between "hard" and "soft"
aggregation limits.
Pass a flag to ipa_aggr_size_kb() so it computes the proper
aggregation size value whether using hard or soft limits. Move
that function earlier in "ipa_endpoint.c" so it can be used
without a forward-reference.
Update ipa_endpoint_data_valid_one() so it validates endpoints whose
data indicate a hard aggregation limit is used, and so it reports
aggregation flags that are set for endpoints that do not have aggregation enabled.
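A minimal sketch of the hard/soft distinction (the buffer-size math and names
here are illustrative, not the driver's exact computation):

#include <linux/sizes.h>
#include <linux/types.h>

/* With a "soft" limit the hardware may accept one more packet after the
 * limit is crossed, so the programmed limit must leave room for that
 * packet; a "hard" limit can use the whole receive buffer. */
static u32 aggr_size_kb(u32 rx_buffer_size, u32 max_packet_size, bool hard_limit)
{
	if (!hard_limit)
		rx_buffer_size -= max_packet_size;

	return rx_buffer_size / SZ_1K;
}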
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add a new Boolean flag for RX endpoints defining whether HOLB drop
is initially enabled or disabled for the endpoint. All existing AP
endpoints should have HOLB drop disabled.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Spelling mistakes (triple letters) in comments.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Spelling mistake (triple letters) in comment.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <[email protected]>
Reviewed-by: Niklas Söderlund <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Spelling mistake (triple letters) in comment.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Spelling mistake (triple letters) in comment.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Spelling mistakes (triple letters) in comments.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Spelling mistake (triple letters) in comment.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Spelling mistake (triple letters) in comment.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Spelling mistake (triple letters) in comment.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Joanne Koong says:
====================
Add a bhash2 table hashed by port + address
This patchset proposes adding a bhash2 table that hashes by port and address.
The motivation behind bhash2 is to expedite bind requests in situations where
the port has many sockets in its bhash table entry, which makes checking bind
conflicts costly especially given that we acquire the table entry spinlock
while doing so, which can cause softirq cpu lockups and can prevent new tcp
connections.
We ran into this problem at Meta, where the traffic team binds a large
number of IPs to port 443 and the bind() call took a significant amount
of time, which led to cpu softirq lockups that caused packet drops and
other failures on the machine.
The patches are as follows:
1/2 - Adds a second bhash table (bhash2) hashed by port and address
2/2 - Adds a test for timing how long an additional bind request takes when
the bhash entry is populated
When experimentally testing this on a local server with ~24k sockets bound to
the port, the results seen were:
ipv4:
before - 0.002317 seconds
with bhash2 - 0.000018 seconds
ipv6:
before - 0.002431 seconds
with bhash2 - 0.000021 seconds
====================
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
This test populates the bhash table entry for a given port with
MAX_THREADS * MAX_CONNECTIONS sockets, and then times how long
a bind request on the port takes.
When populating the bhash table, we create the sockets and then bind
the sockets to the same address and port (SO_REUSEADDR and SO_REUSEPORT
are set). When timing how long a bind on the port takes, we bind on a
different address without SO_REUSEPORT set. We do not set SO_REUSEPORT
because we are interested in the case where the bind request does not
go through the tb->fastreuseport path, which is fragile (e.g. the
tb->fastreuseport path does not work if binding with a different uid).
To run the test locally, I did:
* ulimit -n 65535000
* ip addr add 2001:0db8:0:f101::1 dev eth0
* ./bind_bhash_test 443
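A stripped-down sketch of the timed portion (the real bind_bhash_test adds
threading and setup around this; names and error handling are simplified):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static double time_one_bind(int port, const char *addr_str)
{
	struct sockaddr_in6 addr = { .sin6_family = AF_INET6,
				     .sin6_port = htons(port) };
	struct timespec start, end;
	int fd;

	inet_pton(AF_INET6, addr_str, &addr.sin6_addr);

	/* No SO_REUSEPORT here, so this bind cannot take the
	 * tb->fastreuseport shortcut and must scan the bucket. */
	fd = socket(AF_INET6, SOCK_STREAM, 0);
	clock_gettime(CLOCK_MONOTONIC, &start);
	bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	clock_gettime(CLOCK_MONOTONIC, &end);
	close(fd);

	return (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
}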
Signed-off-by: Joanne Koong <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
We currently have one tcp bind table (bhash) which hashes by port
number only. In the socket bind path, we check for bind conflicts by
traversing the specified port's inet_bind_bucket while holding the
bucket's spinlock (see inet_csk_get_port() and inet_csk_bind_conflict()).
In instances where there are tons of sockets hashed to the same port
at different addresses, checking for a bind conflict is time-intensive
and can cause softirq cpu lockups, as well as stall new tcp connections,
since __inet_inherit_port() also contends for the spinlock.
This patch proposes adding a second bind table, bhash2, that hashes by
port and ip address. Searching the bhash2 table leads to significantly
faster conflict resolution and less time holding the spinlock.
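Conceptually (the hash mixing below is purely illustrative, not the kernel's
actual bhash2 hash function):

#include <linux/jhash.h>
#include <linux/types.h>

/* bhash keys on port alone, so every socket bound to port 443 shares one
 * bucket; bhash2 keys on (port, address), spreading those sockets across
 * buckets so a bind conflict scan only visits sockets bound to the same
 * address. */
static unsigned int bhash_bucket(u16 port, unsigned int mask)
{
	return port & mask;
}

static unsigned int bhash2_bucket(u16 port, __be32 addr, unsigned int mask)
{
	return (port ^ jhash_1word((__force u32)addr, 0)) & mask;
}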
Signed-off-by: Joanne Koong <[email protected]>
Reviewed-by: Eric Dumazet <[email protected]>
Acked-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
GCC array-bounds warns that ipc_coredump_get_list() under-allocates
the size of struct iosm_cd_table *cd_table.
This is avoidable - we just need a flexible array. Nothing calls
sizeof() on struct iosm_cd_list or anything that contains it.
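The shape of the change (struct and field names below are stand-ins, not the
exact iosm definitions):

#include <linux/types.h>

struct cd_entry {			/* stand-in for the list entry type */
	__le64 size;
	u8 filename[32];
};

struct cd_table {
	__le32 version;			/* fixed-size header fields ... */
	/* Flexible array member: the struct no longer claims a fixed
	 * trailing array, so the allocation is exactly what the caller
	 * computes and GCC's array-bounds checker has nothing to flag. */
	struct cd_entry entries[];
};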
Reviewed-by: M Chetan Kumar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Use memset_after() helper to simplify the code, there is no functional
change in this patch.
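For reference, memset_after() zeroes everything in an object that follows a
named member; roughly (the struct here is a made-up example):

#include <linux/string.h>
#include <linux/types.h>

struct example {
	u32 keep_me;		/* left untouched */
	u32 counter;		/* zeroed */
	u8 stats[16];		/* zeroed */
};

static void reset_tail(struct example *e)
{
	/* Replaces an open-coded memset(&e->counter, 0, ...) that needs
	 * manual offset/size arithmetic; everything after 'keep_me' is
	 * cleared. */
	memset_after(e, 0, keep_me);
}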
Signed-off-by: Xiu Jianfeng <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Align the extack argument of the ocelot_port_fdb_del()
function.
Signed-off-by: Alaa Mohamed <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
The continuation line does not align with the opening bracket
and this patch fixes it.
Signed-off-by: Alaa Mohamed <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
GCC array bounds checking complains that ulp_id is validated
only against upper bound. Make it unsigned.
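The reasoning, in miniature (the array and bound below are illustrative):

#define MAX_ULP 2

static int ulp_table[MAX_ULP];

static int ulp_lookup(unsigned int ulp_id)	/* was a signed int */
{
	/* An unsigned ulp_id cannot be negative, so this single
	 * upper-bound check is enough for the compiler to prove the
	 * array index is always in range. */
	if (ulp_id >= MAX_ULP)
		return -1;

	return ulp_table[ulp_id];
}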
Reviewed-by: Michael Chan <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Commit 49bb39bddad2 ("selftests: fib_nexthops: Make the test more robust")
increased the timeout of ping commands to 5 seconds, to make the test
more robust. Make the timeout configurable using the '-w' argument to allow
the user to change it depending on the system that runs the test. Some systems
suffer from slow forwarding performance, so they may need to change the
timeout.
Signed-off-by: Amit Cohen <[email protected]>
Reviewed-by: Shuah Khan <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Sometimes t7xx_cldma_gpd_set_next_ptr() is called under a spin lock,
so add a 'gfp_mask' parameter to t7xx_cldma_gpd_set_next_ptr() so the
caller can pass the appropriate allocation flag.
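The pattern being applied, sketched with a hypothetical helper (not the t7xx
code itself):

#include <linux/gfp.h>
#include <linux/skbuff.h>

/* The caller decides whether sleeping is allowed: GFP_ATOMIC when the
 * call happens under a spinlock, GFP_KERNEL from process context. */
static struct sk_buff *refill_rx_buffer(unsigned int len, gfp_t gfp_mask)
{
	return __dev_alloc_skb(len, gfp_mask);
}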
Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
Reported-by: Hulk Robot <[email protected]>
Signed-off-by: Yang Yingliang <[email protected]>
Reviewed-by: Loic Poulain <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Fix typo which breaks build for parisc.
Fixes: 3daebfbeb455 ("net: tulip: convert to devres")
Reported-by: Linux Kernel Functional Testing <[email protected]>
Link: https://lore.kernel.org/all/CA+G9fYuCzU5VZ_nc+6NEdBXJdVCH=J2SB1Na1G_NS_0BNdGYtg@mail.gmail.com/
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Rolf Eike Beer <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
When user sets skb_defer_max to 1 the kick threshold is 0
(half of 1). If we increment queue length before the check
the kick will never happen, and the skb may get stranded.
This is likely harmless but can be avoided by moving the
increment after the check. This way skb_defer_max == 1
will always kick. Still a silly config to have, but
somehow that feels more correct.
While at it drop a comment which seems to be outdated
or confusing, and wrap the defer_count write with
a WRITE_ONCE() since it's read on the fast path
that avoids taking the lock.
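Roughly, the ordering described above looks like this (structure and names are
simplified, not the actual softnet code):

#include <linux/compiler.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>

struct defer_queue {
	spinlock_t lock;
	unsigned int count;
	struct sk_buff *list;
};

static bool defer_skb(struct defer_queue *q, struct sk_buff *skb,
		      unsigned int defer_max)
{
	bool kick;

	spin_lock(&q->lock);
	/* Check the kick threshold (half of defer_max) before counting
	 * this skb, so defer_max == 1 (threshold 0) still kicks. */
	kick = q->count == (defer_max >> 1);
	/* The fast path reads count without taking the lock. */
	WRITE_ONCE(q->count, q->count + 1);
	skb->next = q->list;
	q->list = skb;
	spin_unlock(&q->lock);

	return kick;
}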
Reviewed-by: Eric Dumazet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Siena only supports software TSO. This means more code can be deleted,
as pointed out by the Smatch static checker warning:
drivers/net/ethernet/sfc/siena/tx.c:184 __efx_siena_enqueue_skb()
warn: duplicate check 'segments' (previous on line 158)
Fixes: 956f2d86cb37 ("sfc/siena: Remove build references to missing functionality")
Reported-by: Dan Carpenter <[email protected]>
Link: https://lore.kernel.org/kernel-janitors/YoH5tJMnwuGTrn1Z@kili/
Signed-off-by: Martin Habets <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Looks like the IPv6 version of the patch under Fixes was
a copy/paste of the IPv4 but hit the wrong spot.
It is tcp_v6_rcv() which uses drop_reason as a boolean, and
needs to be protected against reason == 0 before calling free.
tcp_v6_do_rcv() has a pretty straightforward flow.
The resulting warning looks like this:
WARNING: CPU: 1 PID: 0 at net/core/skbuff.c:775
Call Trace:
tcp_v6_rcv (net/ipv6/tcp_ipv6.c:1767)
ip6_protocol_deliver_rcu (net/ipv6/ip6_input.c:438)
ip6_input_finish (include/linux/rcupdate.h:726)
ip6_input (include/linux/netfilter.h:307)
Fixes: f8319dfd1b3b ("net: tcp: reset 'drop_reason' to NOT_SPCIFIED in tcp_v{4,6}_rcv()")
Tested-by: Matthieu Baerts <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Alex Elder says:
====================
net: ipa: a mix of patches
This series includes a mix of things that are generally
minor. The first four are sort of unrelated fixes, and summarizing
them here wouldn't be that helpful.
The last three together make it so only the "configuration data" we
need after initialization is saved for later use. Most such data is
used only during driver initialization. But endpoint configuration
is needed later, so the last patch saves a copy of that. Eventually
we'll want to support reconfiguring endpoints at runtime as well,
and this will facilitate that.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
All elements of the default endpoint configuration are used in the
code when programming an endpoint for use. But none of the other
configuration data is ever needed once things are initialized.
So rather than saving a pointer to *all* of the configuration data,
save a copy of only the endpoint configuration portion.
This will eventually allow endpoint configuration to be modifiable
at runtime. But even before that, it means we won't keep a pointer
to configuration data once it is no longer needed.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Rename the just-moved data structure types to drop the "_data"
suffix, to make it more obvious they are no longer meant to be used
just as read-only initialization data. Rename the fields and
variables of these types to use "config" instead of "data" in the
name. This is another small step meant to facilitate review.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Move the definitions of the structures defining endpoint-specific
configuration data out of "ipa_data.h" and into "ipa_endpoint.h".
This is a trivial movement of code without any other change, to
prepare for the next few patches.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
About half of the fields set by the call in ipa_modem_netdev_setup()
are overwritten after the call. Instead, just skip the call, and
open-code the (other) assignments it makes to the net_device
structure fields.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
If we program an RX endpoint to have no header (header length is 0),
header-related endpoint configuration values are meaningless and are
ignored.
The only case we support that defines a header is QMAP endpoints.
In ipa_endpoint_init_hdr_ext() we set the endianness mask value
unconditionally, but it should not be done if there is no header
(meaning it is not configured for QMAP).
Set the endianness conditionally, and rearrange the logic in that
function slightly to avoid testing the qmap flag twice.
Delete an incorrect comment in ipa_endpoint_init_aggr().
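The gist of the rearranged logic (the register field name below is a
placeholder, not the real IPA definition):

#include <linux/bits.h>
#include <linux/types.h>

#define HDR_ENDIANNESS_FMASK	BIT(0)	/* placeholder: 1 = big endian */

static u32 hdr_ext_config_val(bool qmap)
{
	u32 val = 0;

	/* Header-related fields only mean something when a header is
	 * defined, and only QMAP endpoints define one; skip the
	 * endianness bit (and a second qmap test) otherwise. */
	if (qmap)
		val |= HDR_ENDIANNESS_FMASK;

	return val;
}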
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The CHANNEL_NOT_RUNNING error condition has been generalized, so
rename it to be INCORRECT_CHANNEL_STATE.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
In gsi_channel_update(), a reference count is taken on the last
completed transaction "to keep it from completing" before we give
the event back to the hardware. Completion processing for that
transaction (and any other "new" ones) will not occur until after
this function returns, so there's no risk of it completing early. So
there's no need to take and drop the additional transaction
reference.
Use local variables in the call to gsi_evt_ring_doorbell().
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Pablo Neira Ayuso says:
====================
Netfilter updates for net-next
The following patchset contains Netfilter updates for net-next: misc
updates and fallout fixes from Florian's recent code rewriting (from
the last pull request):
1) Use new flowi4_l3mdev field in ip_route_me_harder(), from Martin Willi.
2) Avoid unnecessary GC with a timestamp in conncount, from William Tu
and Yifeng Sun.
3) Remove TCP conntrack debugging, from Florian Westphal.
4) Fix compilation warning in ctnetlink, from Florian.
* git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next:
netfilter: ctnetlink: fix up for "netfilter: conntrack: remove unconfirmed list"
netfilter: conntrack: remove pr_debug callsites from tcp tracker
netfilter: nf_conncount: reduce unnecessary GC
netfilter: Use l3mdev flow key when re-routing mangled packets
====================
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
I missed this in the barrage of GCC 12 warnings. Commit cf2df74e202d
("net: fix dev_fill_forward_path with pppoe + bridge") changed
the pointer into an array.
Fixes: d7e6f5836038 ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net")
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Mat Martineau says:
====================
mptcp: Miscellaneous fixes and a new test case
Patches 1 and 3 remove helpers that were iterating over the subflow
connection list without proper locking. Iteration was not needed in
either case.
Patch 2 fixes handling of MP_FAIL timeout, checking for orphaned
subflows instead of using the MPTCP socket data lock and connection
state.
Patch 4 adds a test for MP_FAIL timeout using tc pedit to induce checksum
failures.
====================
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Add the multiple subflows test case for MP_FAIL, to test the MP_FAIL
reset case. Use the test_linkfail value to make 1024KB test files.
Invoke reset_with_fail() to use 'iptables' and 'tc action pedit' rules
to produce the bit flips to trigger the checksum failures on ns2eth2.
Add delays on ns2eth1 to make sure more data can travel on ns2eth2.
The check_invert flag is enabled in reset_with_fail(), so this test
prints out the inverted bytes, instead of the file mismatch errors.
Invoke pedit_action_pkts() to get the number of packets edited
by the tc pedit actions, and print this number to the output.
Co-developed-by: Paolo Abeni <[email protected]>
Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
The MPTCP socket's conn_list (list of subflows) requires the socket lock
to access. The MP_FAIL timeout code added such an access, where it would
check the list of subflows both in timer context and (later) in workqueue
context where the socket lock is held.
Rather than check the list twice, remove the check in the timeout
handler and only depend on the check in the workqueue. Also remove the
MPTCP_FAIL_NO_RESPONSE flag, since mptcp_mp_fail_no_response() has
insignificant overhead and can be checked on each worker run.
Fixes: 49fa1919d6bc ("mptcp: reset subflow when MP_FAIL doesn't respond")
Reported-by: Paolo Abeni <[email protected]>
Reviewed-by: Paolo Abeni <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
MP_FAIL timeout (waiting for a peer to respond to an MP_FAIL with
another MP_FAIL) is implemented using the MPTCP socket's sk_timer. That
timer is also used at MPTCP socket close, so it's important to not have
the two timer users interfere with each other.
At MPTCP socket close, all subflows are orphaned before sk_timer is
manipulated. By checking the SOCK_DEAD flag on the subflows, each
subflow can determine if the timer is safe to alter without acquiring
any MPTCP-level lock. This replaces code that was using the
mptcp_data_lock and MPTCP-level socket state checks that did not
correctly protect the timer.
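The check itself is tiny; schematically (illustrative only, not the exact
mptcp code):

#include <net/sock.h>

/* At MPTCP socket close every subflow is orphaned before sk_timer is
 * touched, so a dead subflow means the timer now belongs to the close
 * path and the MP_FAIL timeout must leave it alone. */
static bool mp_fail_timer_safe(const struct sock *ssk)
{
	return !sock_flag(ssk, SOCK_DEAD);
}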
Fixes: 49fa1919d6bc ("mptcp: reset subflow when MP_FAIL doesn't respond")
Reviewed-by: Paolo Abeni <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
The mentioned helper requires the msk socket lock, and the
current callers don't own it nor can they acquire it, so the
access is racy.
All the current callers are really checking for infinite mapping
fallback, and the latter condition is explicitly tracked by
the relevant msk variable: we can safely remove the helper usage
in the callers - and the helper itself.
The issue has been present since the MP_FAIL implementation, but the
fix only applies since the infinite fallback support, hence the
somewhat unexpected Fixes tag.
Fixes: 0530020a7c8f ("mptcp: track and update contiguous data status")
Acked-and-tested-by: Geliang Tang <[email protected]>
Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
This patch improves TCP PRR loss recovery behavior for a corner
case. Previously, in PRR's conservation-bound mode, it strictly
sends an amount equal to the amount newly acked or sacked.
The patch changes this so that PRR may send an additional amount that
was banked previously (e.g. when application-limited) in the
conservation-bound mode, similar to the slow-start mode. This unifies
and simplifies the algorithm further and may improve recovery latency.
This change still follows the general packet conservation design
principle and always keeps inflight/cwnd below the slow start threshold
set by the congestion control module.
PRR is described in RFC 6937. We'll include this change in the
latest revision rfc6937-bis as well.
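For reference, the per-ACK RFC 6937 logic looks roughly like this (names
follow the RFC rather than the kernel's variables; the change described above
concerns the 'limit' used in the conservation-bound branch):

/* Per-ACK PRR send quota, following RFC 6937 (sketch only). */
static long prr_sndcnt(long pipe, long ssthresh, long prr_delivered,
		       long prr_out, long recover_fs, long newly_delivered,
		       long mss)
{
	long limit, sndcnt;

	if (pipe > ssthresh) {
		/* Proportional mode: pace sending to the delivery rate. */
		sndcnt = (prr_delivered * ssthresh + recover_fs - 1) /
			 recover_fs - prr_out;
	} else {
		/* Conservation / slow-start bound: send what was newly
		 * delivered, allow previously banked amounts (e.g. when
		 * application-limited), plus one MSS, but never let
		 * inflight exceed ssthresh. */
		limit = prr_delivered - prr_out;
		if (limit < newly_delivered)
			limit = newly_delivered;
		limit += mss;
		sndcnt = ssthresh - pipe < limit ? ssthresh - pipe : limit;
	}

	return sndcnt > 0 ? sndcnt : 0;
}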
Reported-by: Neal Cardwell <[email protected]>
Signed-off-by: Yuchung Cheng <[email protected]>
Signed-off-by: Neal Cardwell <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
The current title of our section of the documentation is
Linux Networking Documentation. Since we're describing
a section of Linux Documentation, repeating those two
words seems redundant.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
GCC 12 seems upset that we check ipa_irq against array bound
but then proceed, anyway:
drivers/net/ipa/ipa_interrupt.c: In function ‘ipa_interrupt_add’:
drivers/net/ipa/ipa_interrupt.c:196:27: warning: array subscript 30 is above array bounds of ‘void (*[30])(struct ipa *, enum ipa_irq_id)’ [-Warray-bounds]
196 | interrupt->handler[ipa_irq] = handler;
| ~~~~~~~~~~~~~~~~~~^~~~~~~~~
drivers/net/ipa/ipa_interrupt.c:42:27: note: while referencing ‘handler’
42 | ipa_irq_handler_t handler[IPA_IRQ_COUNT];
| ^~~~~~~
Reviewed-by: Alex Elder <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
GCC 12 warns:
drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c: In function ‘ipc_protocol_dl_td_process’:
drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c:406:13: warning: the comparison will always evaluate as ‘true’ for the address of ‘cb’ will never be NULL [-Waddress]
406 | if (!IPC_CB(skb)) {
| ^
Indeed the check seems entirely pointless. Hopefully the other
validation checks will catch if the cb is bad, but it can't be
NULL.
Reviewed-by: M Chetan Kumar <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Martin Blumenstingl says:
====================
lantiq_gswip: Two small fixes
While updating the Lantiq target in OpenWrt to Linux 5.15 I came across
an FDB related error message. While that still needs to be solved I
found two other small issues on the way.
This series fixes the two minor issues found while revisiting the FDB
code in the lantiq_gswip driver:
- The first patch fixes the start index used in gswip_port_fdb() to
find the entry with the matching bridge. The updated logic is now
consistent with the rest of the driver.
- The second patch fixes a typo in a dev_err() message.
[0] https://lore.kernel.org/netdev/[email protected]/
====================
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
gswip_port_fdb_dump() reads the MAC bridge entries. The error message
should say "failed to read mac bridge entry". While here, also add the
index to the error print so humans can get to the cause of the problem
more easily.
Acked-by: Hauke Mehrtens <[email protected]>
Reviewed-by: Vladimir Oltean <[email protected]>
Signed-off-by: Martin Blumenstingl <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
The first N entries in priv->vlans are reserved for managing ports which
are not part of a bridge. Use priv->hw_info->max_ports to consistently
access per-bridge entries starting at index 7. Starting at
priv->hw_info->cpu_port (6) is harmless in this case because
priv->vlans[6].bridge is always NULL so the comparison result is always
false (which results in this entry being skipped).
Acked-by: Hauke Mehrtens <[email protected]>
Signed-off-by: Martin Blumenstingl <[email protected]>
Reviewed-by: Vladimir Oltean <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
t7xx_request_irq() error: uninitialized symbol 'ret'.
t7xx_core_hk_handler() error: potentially dereferencing uninitialized 'event'.
If the condition to enter the loop that waits for the handshake event
is false on the first iteration then the uninitialized 'event' will be
dereferenced, fix this by initializing 'event' to NULL.
t7xx_port_proxy_recv_skb() warn: variable dereferenced before check 'skb'.
No need to check skb at t7xx_port_proxy_recv_skb() since we know it
is always called with a valid skb by t7xx_cldma_gpd_rx_from_q().
Reported-by: Dan Carpenter <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Russell King says:
====================
mtk_eth_soc phylink updates
This series ultimately updates mtk_eth_soc to use phylink_pcs, with some
fixes along the way.
Previous attempts to update this driver (which is now marked as legacy)
have failed due to lack of testing. I am hoping that this time will be
different; Marek can test RGMII modes, but not SGMII. So all that we
know is that this patch series probably doesn't break RGMII.
1) remove unused mac_mode and sgmii flags members from structures.
2) remove unnecessary interpretation of speed when configuring 1000
and 2500 Base-X
3) move configuration of SGMII duplex setting from mac_config() to
link_up()
4) only pass in interface mode to mtk_sgmii_setup_mode_force()
5) move decision about which mtk_sgmii_setup_mode_*() function to call
into mtk_sgmii.c
6) add a fixme comment for RGMII explaining why the call to
mtk_gmac0_rgmii_adjust() is completely wrong - this needs to be
addressed by someone who has the hardware and can test an appropriate
fix. This fixme means that the driver still can't become non-legacy.
7) move gmac setup from mac_config() to mac_finish() - this preserves
the order that we write to the hardware when we eventually convert to
phylink_pcs()
8) move configuration of syscfg0 in SGMII/802.3z mode to mac_finish()
for the same reasons as (7).
9) convert mtk_sgmii.c code structure and the mtk_sgmii structure to
suit conversion to phylink_pcs
10) finally convert to phylink_pcs
As there has been no feedback from mtk_eth_soc maintainers to my RFC
on April 6th, nor to my reminder on April 11th, it's now time to merge
this anyway. Mediatek code seems to be submitted to the kernel and
then the maintainers scarper...
====================
Link: https://lore.kernel.org/r/YoUIX+BN/[email protected]
Tested-by: Marek Behún <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Partially convert mtk_eth_soc to phylink_pcs, moving the configuration,
link up and AN restart over. However, it seems mac_pcs_get_state()
doesn't actually get the state from the PCS, so we can't convert that
over without a better understanding of the hardware.
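For context, a phylink_pcs conversion boils down to providing a set of PCS
operations; a minimal sketch follows (driver-specific register work omitted,
names invented, and pcs_get_state left out per the note above):

#include <linux/phylink.h>

struct sketch_pcs {
	struct phylink_pcs pcs;
	/* regmap / base address etc. would live here */
};

static int sketch_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
			     phy_interface_t interface,
			     const unsigned long *advertising,
			     bool permit_pause_to_mac)
{
	/* Program SGMII or 802.3z mode and enable AN when requested. */
	return 0;
}

static void sketch_pcs_an_restart(struct phylink_pcs *pcs)
{
	/* Poke the autoneg-restart bit. */
}

static void sketch_pcs_link_up(struct phylink_pcs *pcs, unsigned int mode,
			       phy_interface_t interface,
			       int speed, int duplex)
{
	/* For fixed/forced modes, apply speed and duplex here. */
}

static const struct phylink_pcs_ops sketch_pcs_ops = {
	.pcs_config	= sketch_pcs_config,
	.pcs_an_restart	= sketch_pcs_an_restart,
	.pcs_link_up	= sketch_pcs_link_up,
};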
Signed-off-by: Russell King (Oracle) <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|