Age | Commit message | Author | Files | Lines |
|
This looks a bit nicer than the open-coded "(x + 3) % 4" idiom.
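The commit subject is collapsed in this log, so this is only a hedged sketch of the kind of replacement meant: the usual kernel helper for open-coded rounding arithmetic is DIV_ROUND_UP() from <linux/kernel.h>. The helper name below is purely illustrative and not taken from the driver:

    #include <linux/kernel.h>

    /* Illustrative only (not the actual hunk): DIV_ROUND_UP() expresses the
     * round-up intent directly instead of open-coded arithmetic.
     */
    static inline unsigned int ifh_words(unsigned int len_bytes)
    {
        return DIV_ROUND_UP(len_bytes, 4);
    }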
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The ocelot_rx_frame_word() function can return a negative error code;
however, this isn't checked consistently. In practice, though, no
silently ignored errors have been observed.
Also, some constructs can be simplified by using "goto" instead of
repeated "break" statements.
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
It appears that the intention of this snippet of code is to not exit
ocelot_xtr_irq_handler() while in the middle of extracting a frame.
The problem with extracting the frame word by word is that future
extraction attempts can easily become desynchronized, since the IRQ
handler assumes that the first 16 bytes are the IFH, which gives further
information about the frame, such as the frame length.
But during normal operation, "err" will not be 0, but 4, set from here:
for (i = 0; i < OCELOT_TAG_LEN / 4; i++) {
    err = ocelot_rx_frame_word(ocelot, grp, true, &ifh[i]);
    if (err != 4)
        break;
}

if (err != 4)
    break;
In that case, draining the extraction queue is a no-op. So explicitly
make this code execute only on negative err.
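A hedged sketch of the adjusted check (the drain loop uses register helpers that appear elsewhere in the ocelot driver, but this is not the verbatim hunk):

    if (err < 0) {
        /* A negative error means we may be desynchronized from the
         * extraction group, so drain whatever is left in its queue.
         */
        while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp))
            ocelot_read_rix(ocelot, QS_XTR_RD, grp);
    }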
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Since the xtr (extraction) IRQ of the ocelot switch is not shared, the
fact that it fired means that some data must be present in the queues of
the CPU port module. So simplify the code accordingly.
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
We currently only log the error recovery settings when error recovery is enabled.
In some cases, firmware disables error recovery after it was
initially enabled. Without logging anything, the user will not be
aware of this change in setting.
Log it when error recovery is disabled. Also, change the reset count
value from hexadecimal to decimal.
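A hedged sketch of the intended logging behavior (the flag, variables, and message text below are illustrative, not copied from bnxt_en):

    if (!error_recovery_enabled) {
        netdev_info(dev, "error recovery disabled by firmware\n");
        return;
    }
    /* note the decimal (%u) rather than hexadecimal format for the count */
    netdev_info(dev, "error recovery enabled, max firmware reset count %u\n",
                max_fw_reset_count);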
Reviewed-by: Edwin Peer <[email protected]>
Reviewed-by: Pavan Chebbi <[email protected]>
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This is a new async message that the firmware can send to check if it
can communicate with the driver. This is an added error detection
scheme that firmware can use if it suspects errors in the PCIe
interface. When the driver receives this async message, it replies,
echoing some data from the async message. If the firmware does not get
a reply with the proper data after some retries, error recovery will
kick in.
Reviewed-by: Andy Gospodarek <[email protected]>
Reviewed-by: Edwin Peer <[email protected]>
Reviewed-by: Vasundhara Volam <[email protected]>
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
If firmware provides the offset to the "context kind" field of the
relevant context memory blocks, we'll initialize just that field for
each block instead of initializing all of context memory.
Populate the bnxt_mem_init structure with the proper offset returned
by firmware. If it is older firmware and the information is not
available, we set the offset to an invalid value and fall back to
the old behavior of initializing every byte. Otherwise, we initialize
only the "context kind" byte at the offset.
Reviewed-by: Edwin Peer <[email protected]>
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Currently, the driver calls memset() to set all relevant context memory
used by the chip to the initial value. This can take many milliseconds
with the potentially large number of context pages allocated for the
chip.
To make this faster, we only need to initialize the "context kind" field
of each block of context memory. This patch sets up the infrastructure
to do that with the bnxt_mem_init structure. In the next patch, we'll
add the logic to obtain the offset of the "context kind" from the
firmware. This patch is not changing the current behavior of calling
memset() to initialize all relevant context memory.
Reviewed-by: Pavan Chebbi <[email protected]>
Reviewed-by: Edwin Peer <[email protected]>
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
During some fatal firmware error conditions, the PCI config space
register 0x2e which normally contains the subsystem ID will become
0xffff. This register will revert back to the normal value after
the chip has completed core reset. If we detect this condition,
we can poll this config register immediately for the value to revert.
Because we use config read cycles to poll this register, there is no
possibility of Master Abort if we happen to read it during core reset.
This speeds up recovery significantly as we don't have to wait for the
conservative min_time before polling MMIO to see if the firmware has
come out of reset. As soon as this register changes value we can
proceed to re-initialize the device.
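A hedged sketch of the poll (PCI_SUBSYSTEM_ID is the standard 0x2e config-space offset; the retry budget, sleep interval, and surrounding logic are illustrative):

    u16 val;
    int retries = 100;              /* illustrative budget */

    do {
        pci_read_config_word(bp->pdev, PCI_SUBSYSTEM_ID, &val);
        if (val != 0xffff)
            break;                  /* chip has completed core reset */
        msleep(50);
    } while (--retries);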
Reviewed-by: Edwin Peer <[email protected]>
Reviewed-by: Vasundhara Volam <[email protected]>
Reviewed-by: Andy Gospodarek <[email protected]>
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Newer devices may have local context memory instead of relying on the
host for backing store. In these cases, HWRM_FUNC_BACKING_STORE_QCAPS
will return a zero entry size to indicate contexts for which the host
should not allocate backing store.
Selectively allocate context memory based on device capabilities and
only enable backing store for the appropriate contexts.
Signed-off-by: Edwin Peer <[email protected]>
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The main changes are the echo request/response from firmware for error
detection and the NO_FCS feature to transmit frames without FCS.
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Newer versions of the Xilinx AXI Ethernet core (specifically version 7.2 or
later) allow the core to be configured with a PHY interface mode of "Both",
so that either 1000BaseX or SGMII mode can be selected at runtime. Add
support for this in the driver to better serve applications which can use
both fiber and copper SFP modules.
Signed-off-by: Robert Hancock <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Hook up the nway_reset ethtool operation to the corresponding phylink
function so that "ethtool -r" can be supported.
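A minimal sketch of the hookup, assuming the driver keeps its phylink instance in its private data (function and field names are illustrative):

    static int example_nway_reset(struct net_device *ndev)
    {
        struct axienet_local *lp = netdev_priv(ndev);

        return phylink_ethtool_nway_reset(lp->phylink);
    }

    /* in the driver's ethtool_ops: .nway_reset = example_nway_reset, */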
Signed-off-by: Robert Hancock <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This driver is set up to use a clock mapping in the device tree if it is
present, but still work without one for backward compatibility. However,
if getting the clock returns -EPROBE_DEFER, then we need to abort and
return that error from our driver initialization so that the probe can
be retried later after the clock is set up.
Move clock initialization earlier in the process so we do not waste as
much effort if the clock is not yet available. Switch to
devm_clk_get_optional() and abort initialization on any error reported.
Also enable the clock regardless of whether the controller is using an MDIO
bus, as the clock is required in any case.
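A hedged sketch of the resulting probe-time pattern (field and label names are illustrative):

    lp->clk = devm_clk_get_optional(&pdev->dev, NULL);
    if (IS_ERR(lp->clk)) {
        ret = PTR_ERR(lp->clk);     /* propagates -EPROBE_DEFER as-is */
        goto cleanup;
    }

    ret = clk_prepare_enable(lp->clk);
    if (ret)
        goto cleanup;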
Fixes: 09a0354cadec267be7f ("net: axienet: Use clock framework to get device clock rate")
Signed-off-by: Robert Hancock <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
40GbE Intel Wired LAN Driver Updates 2021-02-12
This series contains updates to i40e, ice, and ixgbe drivers.
Maciej does cleanups on the following drivers.
For i40e, removes redundant check for XDP prog, cleans up no longer
relevant information, and removes an unused function argument.
For ice, removes local variable use, instead returning values directly.
Moves skb pointer from buffer to ring and removes an unneeded check for
xdp_prog in zero copy path. Also removes a redundant MTU check when
changing it.
For i40e, ice, and ixgbe, stores the rx_offset in the Rx ring as
the value is constant so there's no need for continual calls.
Bjorn folds a decrement into a while statement.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
of_dev_get() and of_dev_put() are just wrappers for get_device()/put_device()
on a platform_device. There are also already platform_device_{get,put}()
wrappers for this purpose. Let's update the few users and remove
of_dev_{get,put}().
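The conversion is a mechanical substitution; a small sketch:

    /* before */
    target = of_dev_get(pdev);
    /* ... use target ... */
    of_dev_put(target);

    /* after */
    target = platform_device_get(pdev);
    /* ... use target ... */
    platform_device_put(target);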
Cc: Michael Ellerman <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Jakub Kicinski <[email protected]>
Cc: Frank Rowand <[email protected]>
Cc: Patrice Chotard <[email protected]>
Cc: Felipe Balbi <[email protected]>
Cc: Julia Lawall <[email protected]>
Cc: Gilles Muller <[email protected]>
Cc: Nicolas Palix <[email protected]>
Cc: Michal Marek <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Rob Herring <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The number of supported indirect subcrq entries on Power8 is 16, while
Power9 supports 128. Redefine this value to 16 to reduce the chance of the
driver having to reset when migrating between Power9 and Power8. In our
rx/tx performance testing, we found no performance difference between 16
and 128 at this time.
Fixes: f019fb6392e5 ("ibmvnic: Introduce indirect subordinate Command Response Queue buffer")
Signed-off-by: Dany Madden <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The chip can configure unicast flooding, broadcast flooding and learning.
Learning is per port, while flooding is per {ingress, egress} port pair
and we need to configure the same value for all possible ingress ports
towards the requested one.
While multicast flooding is not officially supported, we can hack it by
using a feature of the second generation (P/Q/R/S) devices, which is that
FDB entries are maskable, and multicast addresses always have an odd
first octet. So by putting a match-all entry with address
01:00:00:00:00:00 and mask 01:00:00:00:00:00 at the end of the FDB, we
make sure that it is always checked last, and does not take precedence
over any other MDB entry. So it behaves effectively as an unknown
multicast entry.
For the first generation switches, this feature is not available, so
unknown multicast will always be treated the same as unknown unicast.
So the only thing we can do is request the user to offload the settings
for these 2 flags in tandem, i.e.
ip link set swp2 type bridge_slave flood off
Error: sja1105: This chip cannot configure multicast flooding independently of unicast.
ip link set swp2 type bridge_slave flood off mcast_flood off
ip link set swp2 type bridge_slave mcast_flood on
Error: sja1105: This chip cannot configure multicast flooding independently of unicast.
Signed-off-by: Vladimir Oltean <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
We should not be unconditionally enabling address learning, since doing
that is actively detrimental when a port is standalone and not offloading
a bridge. Namely, if a port in the switch is standalone and others are
offloading the bridge, then we could enter a situation where we learn an
address towards the standalone port, but the bridged ports could not
forward the packet there, because the CPU is the only path between the
standalone and the bridged ports. The solution of course is to not
enable address learning unless the bridge asks for it.
We need to set up the initial port flags for no learning and flooding
everything, and also when the port joins and leaves the bridge.
The flood configuration was already set up appropriately for standalone
mode in ocelot_init; we just need to disable learning in ocelot_init_port.
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Alexandre Belloni <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
In preparation for offloading the bridge port flags which have
independent settings for unknown multicast and for broadcast, we should
also start reserving one destination Port Group ID for the flooding of
broadcast packets, to allow configuring it individually.
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
ocelot_init sets up PGID_MC to include the CPU port module, and that is
fine, but the ocelot-8021q tagger removes the CPU port module from the
unknown multicast replicator. So after a transition from the default
ocelot tagger towards ocelot-8021q and then again towards ocelot,
multicast flooding towards the CPU port module will be disabled.
Fixes: e21268efbe26 ("net: dsa: felix: perform switch setup for tag_8021q")
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
There are multiple ways in which a PORT_BRIDGE_FLAGS attribute can be
expressed by the bridge through switchdev, and not all of them can be
emulated by DSA mid-layer API at the same time.
One possible configuration is when the bridge offloads the port flags
using a mask that has a single bit set - therefore only one feature
should change. However, DSA currently groups together unicast and
multicast flooding in the .port_egress_floods method, which limits our
options when we try to add support for turning off broadcast flooding:
do we extend .port_egress_floods with a third parameter which b53 and
mv88e6xxx will ignore? But that means that the DSA layer, which
currently implements the PRE_BRIDGE_FLAGS attribute all by itself, will
see that .port_egress_floods is implemented, and will report that all 3
types of flooding are supported - which is not necessarily true.
Another configuration is when the user specifies more than one flag at
the same time, in the same netlink message. If we were to create one
individual function per offloadable bridge port flag, we would limit the
switch driver's ability to refuse certain combinations of flag values.
For example, a switch may not have an explicit knob for
flooding of unknown multicast, just for flooding in general. In that
case, the only correct thing to do is to allow changes to BR_FLOOD and
BR_MCAST_FLOOD in tandem, and never allow mismatched values. But having
a separate .port_set_unicast_flood and .port_set_multicast_flood would
not allow the driver to reject such a request.
Also, DSA doesn't consider it necessary to inform the driver that a
SWITCHDEV_ATTR_ID_BRIDGE_MROUTER attribute was offloaded, because it
just calls .port_egress_floods for the CPU port. When we add support
for the plain SWITCHDEV_ATTR_ID_PORT_MROUTER, that will become a real
problem because the flood settings will need to be held statefully in
the DSA middle layer, otherwise changing the mrouter port attribute will
impact the flooding attribute. And that's _assuming_ that the underlying
hardware doesn't need to do anything else when a multicast router
attaches to a port than flood unknown traffic to it. If it does, there
will need to be a dedicated .port_set_mrouter anyway.
So we need to let the DSA drivers see the exact form that the bridge
passes this switchdev attribute in, otherwise we are standing in the
way. Therefore we also need to use this form of language when
communicating to the driver that it needs to configure its initial
(before bridge join) and final (after bridge leave) port flags.
The b53 and mv88e6xxx drivers are converted to the passthrough API and
their implementation of .port_egress_floods is split into two: a
function that configures unicast flooding and another for multicast.
The mv88e6xxx implementation is quite hairy, and it turns out that
the implementations of unknown unicast flooding are actually the same
for 6185 and for 6352. Behind the confusing names lie two individual
bits:
NO_UNKNOWN_MC -> FLOOD_UC = 0x4 = BIT(2)
NO_UNKNOWN_UC -> FLOOD_MC = 0x8 = BIT(3)
so there was no reason to entangle them in the first place.
Whereas the 6185 writes to MV88E6185_PORT_CTL0_FORWARD_UNKNOWN of
PORT_CTL0, which has the exact same bit index. I have left the
implementations separate though, for the only reason that the names are
different enough to confuse me, since I am not able to double-check with
a user manual. The multicast flooding setting for 6185 is in a different
register than for 6352 though.
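As a rough sketch of what the split looks like once unicast and multicast flooding are treated as independent settings (register and helper names are illustrative placeholders, not the actual mv88e6xxx defines; example_port_ctl0_rmw() stands in for a read-modify-write of the port control register):

    #define EXAMPLE_PORT_CTL0_FLOOD_UC  BIT(2)  /* illustrative */
    #define EXAMPLE_PORT_CTL0_FLOOD_MC  BIT(3)  /* illustrative */

    static int example_port_set_ucast_flood(int port, bool flood)
    {
        return example_port_ctl0_rmw(port, EXAMPLE_PORT_CTL0_FLOOD_UC,
                                     flood ? EXAMPLE_PORT_CTL0_FLOOD_UC : 0);
    }

    static int example_port_set_mcast_flood(int port, bool flood)
    {
        return example_port_ctl0_rmw(port, EXAMPLE_PORT_CTL0_FLOOD_MC,
                                     flood ? EXAMPLE_PORT_CTL0_FLOOD_MC : 0);
    }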
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This switchdev attribute offers a counterproductive API for a driver
writer, because although br_switchdev_set_port_flag gets passed a
"flags" and a "mask", those are passed piecemeal to the driver, so while
the PRE_BRIDGE_FLAGS listener knows what changed because it has the
"mask", the BRIDGE_FLAGS listener doesn't, because it only has the final
value. But certain drivers can offload only certain combinations of
settings, like for example they cannot change unicast flooding
independently of multicast flooding - they must be both on or both off.
The way the information is passed to switchdev does not give drivers
enough expressiveness, and leaves them unable to reject such a request
ahead of time, in the PRE_BRIDGE_FLAGS notifier, so they are forced to
reject it during the deferred BRIDGE_FLAGS attribute, where the rejection
is currently ignored.
This patch also changes drivers to make use of the "mask" field for edge
detection when possible.
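With both pieces of information, a handler can do edge detection; a hedged sketch of a SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS handler in the shape this series gives it (the priv_set_*() helpers are illustrative):

    struct switchdev_brport_flags flags = attr->u.brport_flags;

    if (flags.mask & BR_LEARNING)           /* did learning change at all? */
        priv_set_learning(port, !!(flags.val & BR_LEARNING));

    if (flags.mask & BR_FLOOD)              /* did unicast flooding change? */
        priv_set_ucast_flood(port, !!(flags.val & BR_FLOOD));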
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Grygorii Strashko <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
When a struct switchdev_attr is notified through switchdev, there is no
way to report informational messages, unlike for struct switchdev_obj.
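With an extack available, a PRE_BRIDGE_FLAGS handler can explain a rejection; a small illustrative sketch (the supported-flags set is made up):

    if (flags.mask & ~(BR_LEARNING | BR_FLOOD)) {
        NL_SET_ERR_MSG_MOD(extack,
                           "Only learning and unicast flooding can be offloaded");
        return -EINVAL;
    }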
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Ido Schimmel <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Reviewed-by: Nikolay Aleksandrov <[email protected]>
Reviewed-by: Grygorii Strashko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Fixes: 93efb0c656837 ("octeontx2-pf: Fix out-of-bounds read in otx2_get_fecparam()")
Signed-off-by: David S. Miller <[email protected]>
|
|
Create a simple helper function that indicates whether a channel has
been initialized. This abstracts/hides the details of how this is
determined.
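A hedged sketch of such a helper; whether the driver actually keys off a back-pointer like this is an assumption made only for illustration:

    /* True once gsi_channel_init() has set the channel up (assumption:
     * an initialized channel has its back-pointer to the GSI structure set).
     */
    static bool gsi_channel_initialized(struct gsi_channel *channel)
    {
        return !!channel->gsi;
    }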
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Introduce a new function to abstract the knowledge of whether hashed
routing and filter tables are supported for a given IPA instance.
IPA v4.2 is the only version that doesn't support hashed tables (now
and for the foreseeable future), but a named helper function explains
what's going on better than an open-coded version check.
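A hedged sketch of the helper (assuming the IPA version is recorded in the ipa structure; exact field and enum names may differ):

    static inline bool ipa_table_hash_support(struct ipa *ipa)
    {
        /* IPA v4.2 is the only version that lacks hashed table support */
        return ipa->version != IPA_VERSION_4_2;
    }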
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
In ipa_cmd_register_write_valid() we verify that values we will
supply to a REGISTER_WRITE IPA immediate command will fit in
the fields that need to hold them. This patch fixes some issues
in that function and ipa_cmd_register_write_offset_valid().
The dev_err() call in ipa_cmd_register_write_offset_valid() has
some printf format errors:
- The name of the register (corresponding to the string format
specifier) was not supplied.
- The IPA base offset and offset need to be supplied separately to
match the other format specifiers.
Also make the ~0 constant used there to compute the maximum
supported offset value explicitly unsigned.
There are two other issues in ipa_cmd_register_write_valid():
- There's no need to check the hash flush register for platforms
(like IPA v4.2) that do not support hashed tables
- The highest possible endpoint number, whose status register
offset is computed, is COUNT - 1, not COUNT.
Fix these problems, and add some additional commentary.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
When initializing the IPA core clock and interconnects, it's
possible we'll get an EPROBE_DEFER error. This isn't really an
error; it just means we need to be re-probed later.
Use dev_err_probe() to report the error rather than dev_err().
This avoids polluting the log with these "error" messages.
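dev_err_probe() returns the error passed to it and only logs loudly for errors other than -EPROBE_DEFER, so the call site stays compact; a minimal sketch (the clock name here is illustrative):

    clk = clk_get(dev, "core");
    if (IS_ERR(clk))
        return dev_err_probe(dev, PTR_ERR(clk),
                             "error getting core clock\n");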
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This patch actually fixes a bug, though it doesn't affect the two
platforms supported currently. The fix implements GSI memory
pointers a bit differently.
For IPA version 4.5 and above, the address space for almost all GSI
registers is adjusted downward by a fixed amount. This is currently
handled by adjusting the I/O virtual address pointer after it has
been mapped. The bug is that the pointer is not "de-adjusted" as it
should be when it's unmapped.
This patch fixes that error, but it does so by maintaining one "raw"
pointer for the mapped memory range. This is assigned when the
memory is mapped and used to unmap the memory. This pointer is also
used to access the two registers that do *not* sit in the "adjusted"
memory space.
Rather than adjusting *that* pointer, we maintain a separate pointer
that's an adjusted copy of the "raw" pointer, and that is used for
most GSI register accesses.
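A hedged sketch of the two-pointer scheme (field names follow the description above; error handling trimmed):

    gsi->virt_raw = ioremap(res->start, size);
    if (!gsi->virt_raw)
        return -ENOMEM;

    /* Most registers live in an address space shifted down by a fixed amount */
    gsi->virt = gsi->virt_raw - adjust;

    /* ... and teardown unmaps the raw, unadjusted pointer: */
    iounmap(gsi->virt_raw);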
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Code at line 967 implies that rsp->fwdata.supported_fec may be up to 4:
967: if (rsp->fwdata.supported_fec <= FEC_MAX_INDEX)
If rsp->fwdata.supported_fec evaluates to 4, then there is an
out-of-bounds read at line 971, because fec[] is an array of only
4 elements:
954 const int fec[] = {
955 ETHTOOL_FEC_OFF,
956 ETHTOOL_FEC_BASER,
957 ETHTOOL_FEC_RS,
958 ETHTOOL_FEC_BASER | ETHTOOL_FEC_RS};
959 #define FEC_MAX_INDEX 4
971: fecparam->fec = fec[rsp->fwdata.supported_fec];
Fix this by properly indexing fec[] with rsp->fwdata.supported_fec - 1.
In this case, the proper indexes 0 to 3 are used when
rsp->fwdata.supported_fec evaluates to a value in the range 1 to 4,
respectively.
Fixes: d0cf9503e908 ("octeontx2-pf: ethtool fec mode support")
Addresses-Coverity-ID: 1501722 ("Out-of-bounds read")
Signed-off-by: Gustavo A. R. Silva <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
There is a spelling mistake in the text in array rpm_rx_stats_fields,
fix it.
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers-next patches for v5.12
Second set of patches for v5.12. Last time there was a smaller pull
request so unsurprisingly this time we have a big one. mt76 has new
hardware support and lots of new features, iwlwifi getting new
features and rtw88 got NAPI support. And the usual cleanups and fixes
all over.
Major changes:
ath10k
* support setting SAR limits via nl80211
rtw88
* support 8821 RFE type2 devices
* NAPI support
iwlwifi
* add new FW API support
* support for new So devices
* support for RF interference mitigation (RFI)
* support for PNVM (Platform Non-Volatile Memory, a firmware data
file) from BIOS
mt76
* add new mt7921e driver
* 802.11 encap offload support
* support for multiple pcie gen1 host interfaces on 7915
* 7915 testmode support
* 7915 txbf support
brcmfmac
* support for CQM RSSI notifications
wil6210
* support for extended DMG MCS 12.1 rate
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
hclge_rm_vport_all_mac_table() is bloated, so split it into
separate functions for readability and maintainability.
Signed-off-by: Hao Chen <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
To make it more readable and maintainable, split
hclgevf_set_rss_tuple() into two parts.
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
To make it more readable and maintainable, split
hclge_set_rss_tuple() into two parts.
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
hclgevf_cmd_send() is bloated, so split it into separate
functions for readability and maintainability.
Signed-off-by: Yufeng Mo <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
hclge_cmd_send() is bloated, so split it into separate
functions for readability and maintainability.
Signed-off-by: Yufeng Mo <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
hclge_dbg_dump_qos_buf_cfg() is bloated, so split it into
separate functions for readability and maintainability.
Signed-off-by: Jian Shen <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
To improve code readability and maintainability, separate
the flow type parsing part and the converting part from
bloated hclgevf_get_rss_tuple().
Signed-off-by: Jian Shen <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
To improve code readability and maintainability, separate
the flow type parsing part and the converting part from
bloated hclge_get_rss_tuple().
Signed-off-by: Jian Shen <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
To improve code readability and maintainability, separate
the command handling part and the status parsing part from
bloated hclge_set_vf_vlan_common().
Signed-off-by: Peng Li <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Use the common ipv6_addr_any() helper to determine whether an address is
the IPv6 any (all-zero) address.
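A small sketch of the kind of replacement (the open-coded form and variable names are illustrative):

    /* open-coded check: */
    if (!ip6->s6_addr32[0] && !ip6->s6_addr32[1] &&
        !ip6->s6_addr32[2] && !ip6->s6_addr32[3])
        rule_matches_any = true;

    /* becomes: */
    if (ipv6_addr_any(ip6))
        rule_matches_any = true;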
Signed-off-by: Jiaran Zhang <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
As more commands are added, hns3_dbg_cmd_write() is going to
get more bloated, so move the command-checking part into
a separate function.
Signed-off-by: Peng Li <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
To improve code readability and maintainability, refactor
hclgevf_cmd_convert_err_code() to use an array mapping imp_errcode
to common_errno, instead of a bloated switch/case.
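A hedged sketch of the mapping-array approach (the table entries and names below are illustrative, not the driver's actual error codes):

    struct example_errcode {
        u32 imp_errcode;    /* status code returned by firmware */
        int common_errno;   /* corresponding Linux errno */
    };

    static const struct example_errcode example_errcode_map[] = {
        { 1, -EPERM },
        { 2, -EINVAL },
        { 3, -EOPNOTSUPP },
    };

    static int example_convert_err_code(u32 fw_errcode)
    {
        unsigned int i;

        for (i = 0; i < ARRAY_SIZE(example_errcode_map); i++)
            if (example_errcode_map[i].imp_errcode == fw_errcode)
                return example_errcode_map[i].common_errno;

        return -EIO;
    }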
Signed-off-by: Peng Li <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
To improve code readability and maintainability, refactor
hclge_cmd_convert_err_code() to use an array mapping imp_errcode
to common_errno, instead of a bloated switch/case.
Signed-off-by: Peng Li <[email protected]>
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Output of ixgbe_rx_offset() is based on ethtool's priv flag setting, which,
when changed, causes a PF reset (disables napi, frees irqs, loads a
different Rx mem model, etc.). This means that within napi its result is
constant and there is no reason to call it for each processed frame.
Add a new 'rx_offset' field to ixgbe_ring that is meant to hold the
ixgbe_rx_offset() result and use it within ixgbe_clean_rx_irq().
Furthermore, use it within ixgbe_alloc_mapped_page().
Last but not least, un-inline the function of interest, as it lives in a .c
file, and let the compiler decide about inlining.
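A hedged sketch of the caching pattern (placement and surrounding names are illustrative; only the 'rx_offset' field comes from the description above):

    /* once, when the ring is (re)configured: */
    rx_ring->rx_offset = ixgbe_rx_offset(rx_ring);

    /* hot path: use the cached value instead of recomputing it per frame */
    offset = rx_ring->rx_offset;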
Reviewed-by: Björn Töpel <[email protected]>
Signed-off-by: Maciej Fijalkowski <[email protected]>
Tested-by: Tony Brelinski <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>
|
|
Output of ice_rx_offset() is based on ethtool's priv flag setting, which,
when changed, causes a PF reset (disables napi, frees irqs, loads a
different Rx mem model, etc.). This means that within napi its result is
constant and there is no reason to call it for each processed frame.
Add a new 'rx_offset' field to ice_ring that is meant to hold the
ice_rx_offset() result and use it within ice_clean_rx_irq().
Furthermore, use it within ice_alloc_mapped_page().
Reviewed-by: Björn Töpel <[email protected]>
Signed-off-by: Maciej Fijalkowski <[email protected]>
Tested-by: Tony Brelinski <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>
|
|
Output of i40e_rx_offset() is based on ethtool's priv flag setting, which,
when changed, causes a PF reset (disables napi, frees irqs, loads a
different Rx mem model, etc.). This means that within napi its result is
constant and there is no reason to call it for each processed frame.
Add a new 'rx_offset' field to i40e_ring that is meant to hold the
i40e_rx_offset() result and use it within i40e_clean_rx_irq().
Furthermore, use it within i40e_alloc_mapped_page().
Last but not least, un-inline the function of interest, as it lives in a .c
file, and let the compiler decide about inlining.
Reviewed-by: Björn Töpel <[email protected]>
Signed-off-by: Maciej Fijalkowski <[email protected]>
Tested-by: Tony Brelinski <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>
|
|
Fold the count decrement into the while-statement.
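In pattern form (illustrative, not the exact hunk):

    /* before */
    do {
        /* refill one descriptor */
        count--;
    } while (count);

    /* after */
    do {
        /* refill one descriptor */
    } while (--count);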
Reviewed-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Björn Töpel <[email protected]>
Tested-by: Kiran Bhandare <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>
|