|
The AQCS109 supports a proprietary 2-pair 1Gbps mode. The standard
registers don't allow distinguishing between 1000BaseT and 1000BaseT2.
Add reporting of this proprietary mode based on a vendor register.
Signed-off-by: Heiner Kallweit <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add reporting of firmware details. These details are available only once
the firmware has finished initializing the chip. This can take some
time, so we need to poll for init completion.
v2:
- Propagate the timeout in aqr107_wait_reset_complete(). Don't bail out
  completely on timeout because the chip may be functional even without
  a firmware image.
Signed-off-by: Heiner Kallweit <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
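As a rough illustration of the polling described above, here is a minimal sketch; the vendor register and bit names (FW_INIT_STATUS_REG, FW_INIT_DONE) and the timing are assumptions, not the actual Aquantia definitions.

#include <linux/bits.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/mdio.h>
#include <linux/phy.h>

/* Hypothetical vendor register/bit names -- illustrative only. */
#define FW_INIT_STATUS_REG      0xc800
#define FW_INIT_DONE            BIT(0)

static int example_fw_wait_init_complete(struct phy_device *phydev)
{
        int i, val;

        /* Poll for up to ~2 seconds for the firmware to finish chip init. */
        for (i = 0; i < 100; i++) {
                val = phy_read_mmd(phydev, MDIO_MMD_VEND1, FW_INIT_STATUS_REG);
                if (val < 0)
                        return val;
                if (val & FW_INIT_DONE)
                        return 0;
                msleep(20);
        }

        /* Propagate the timeout; the caller may still proceed, since the
         * chip can be functional even without a firmware image.
         */
        return -ETIMEDOUT;
}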
|
|
If both link partners are Aquantia PHYs, then additional information is
exchanged as part of the auto-negotiation. Report the remote capabilities
if the link partner is an Aquantia PHY.
Signed-off-by: Heiner Kallweit <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Previously the green and amber LEDs on this quad PHY were solid, to
indicate an encoding of the link speed (10/100/1000).
This keeps the LEDs always on just as before, but now they flash on
Rx/Tx activity.
Signed-off-by: Vladimir Oltean <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add a new QNAP device based on the aqc111u.
Add this ID to the blacklist of the cdc_ether driver as well.
Signed-off-by: Dmitry Bezrukov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Simplify aqr_config_aneg().
Signed-off-by: Heiner Kallweit <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This patch implements accessors for the QCA8337 MDIO access
through the MDIO_MASTER register, which makes it possible to
access the PHYs on slave-bus through the switch. In cases
where the switch ports are already mapped via external
"phy-phandles", the internal mdio-bus is disabled in order to
prevent a duplicated discovery and enumeration of the same
PHYs. Don't use mixed external and internal mdio-bus
configurations, as this is not supported by the hardware.
Signed-off-by: Christian Lamparter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
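A rough sketch of what such an indirect MDIO_MASTER-style access tends to look like; the register offset and field layout below are placeholders (not the real QCA8337 definitions), and qca8k_read()/qca8k_write() stand in for the driver's internal register accessors.

#include <linux/bits.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Placeholder register layout -- not the actual QCA8337 definitions. */
#define EX_MDIO_MASTER_CTRL     0x3c
#define EX_MDIO_MASTER_BUSY     BIT(31)
#define EX_MDIO_MASTER_READ     BIT(27)
#define EX_MDIO_MASTER_PHY(x)   (((x) & 0x1f) << 21)
#define EX_MDIO_MASTER_REG(x)   (((x) & 0x1f) << 16)
#define EX_MDIO_MASTER_DATA(x)  ((x) & 0xffff)

/* Driver-internal helpers, declared here only to make the sketch read. */
struct qca8k_priv;
u32 qca8k_read(struct qca8k_priv *priv, u32 reg);
void qca8k_write(struct qca8k_priv *priv, u32 reg, u32 val);

static int example_mdio_master_read(struct qca8k_priv *priv, int phy, int reg)
{
        u32 val;
        int i;

        /* Kick off an indirect read of PHY 'phy', register 'reg'. */
        qca8k_write(priv, EX_MDIO_MASTER_CTRL,
                    EX_MDIO_MASTER_BUSY | EX_MDIO_MASTER_READ |
                    EX_MDIO_MASTER_PHY(phy) | EX_MDIO_MASTER_REG(reg));

        /* Wait for the switch to complete the access on the slave bus. */
        for (i = 0; i < 100; i++) {
                val = qca8k_read(priv, EX_MDIO_MASTER_CTRL);
                if (!(val & EX_MDIO_MASTER_BUSY))
                        return EX_MDIO_MASTER_DATA(val);
                usleep_range(100, 200);
        }

        return -ETIMEDOUT;
}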
|
|
This belated patch implements Andrew Lunn's request of
"remove the phy_read() and phy_write() functions."
<https://lore.kernel.org/patchwork/comment/902734/>
While seemingly harmless, this causes the switch's user
port PHYs to get registered twice. This is because the
DSA subsystem will create a slave mdio-bus not knowing
that the qca8k_phy_(read|write) accessors operate on
the external mdio-bus. So the same "bus" gets effectively
duplicated.
Cc: [email protected]
Fixes: 6b93fb46480a ("net-next: dsa: add new driver for qca8xxx family")
Signed-off-by: Christian Lamparter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
So far we effectively clear the BMCR register. Some PHYs can deal
with this (e.g. because they reset BMCR to a default as part of a
soft reset), whilst on others this causes issues because e.g. the
autoneg bit is cleared. Marvell is an example, see also thread [0].
So let's be a little more gentle and leave all bits we're not
interested in as-is. This change is needed for PHY drivers to
properly deal with the original patch.
[0] https://marc.info/?t=155264050700001&r=1&w=2
Fixes: 6e2d85ec0559 ("net: phy: Stop with excessive soft reset")
Tested-by: Phil Reid <[email protected]>
Tested-by: liweihang <[email protected]>
Signed-off-by: Heiner Kallweit <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
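In practice, "leave all bits we're not interested in as-is" means a read-modify-write of BMCR rather than a plain write. A minimal sketch using phy_modify(); the exact bit handling and reset polling in the real genphy_soft_reset() may differ.

#include <linux/delay.h>
#include <linux/mii.h>
#include <linux/phy.h>

static int example_gentle_soft_reset(struct phy_device *phydev)
{
        int ret;

        /* Set BMCR_RESET while preserving the remaining BMCR bits (e.g. the
         * autoneg enable bit), instead of writing BMCR_RESET alone and
         * thereby clearing the rest of the register.
         */
        ret = phy_modify(phydev, MII_BMCR, 0, BMCR_RESET);
        if (ret < 0)
                return ret;

        /* Wait for the reset bit to self-clear. */
        do {
                msleep(1);
                ret = phy_read(phydev, MII_BMCR);
                if (ret < 0)
                        return ret;
        } while (ret & BMCR_RESET);

        return 0;
}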
|
|
This structure is used to define the packet flags. These flags are
applicable to both TX and RX packets. Thus, this patch changes its name
from ice_rx_flag64_bits to ice_flg64_bits, and updates its member definitions.
Signed-off-by: Chinh T Cao <[email protected]>
Reviewed-by: Bruce Allan <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
|
There are numerous for() loops iterating over each of the max traffic
classes. Use a simple iterator macro instead to make the code cleaner.
Signed-off-by: Bruce Allan <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
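Such an iterator is conceptually just a for() wrapper. A sketch of what the macro and its usage look like; the exact name and bound used by the ice driver are assumed here.

#include <linux/types.h>

/* Assumed name/bound -- see the ice driver for the real definition. */
#define EX_MAX_TRAFFIC_CLASS    8
#define ex_for_each_traffic_class(i) \
        for ((i) = 0; (i) < EX_MAX_TRAFFIC_CLASS; (i)++)

/* Replaces open-coded "for (i = 0; i < EX_MAX_TRAFFIC_CLASS; i++)" loops. */
static void example_clear_tc_counters(u64 *tc_bytes)
{
        u8 i;

        ex_for_each_traffic_class(i)
                tc_bytes[i] = 0;
}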
|
|
Update VF VSI tc info along with vsi->num_txq/num_rxq when VF requests to
configure queues.
Signed-off-by: Preethi Banala <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
|
In the current implementation of ice_reset_subtask, if multiple reset
types are set in pf->state, only the most intrusive one is meant to be
performed, but the bits requesting the other types are not being
cleared. This would lead to another reset being performed the next time
the service task is scheduled.
Change the flow of ice_reset_subtask so that all reset request bits in
pf->state are cleared, while still performing the most intrusive of the
requested resets.
Signed-off-by: Dave Ertman <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
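A simplified, self-contained sketch of that flow; the bit positions, type names, and callback are hypothetical and stand in for the driver's pf->state bits and ice_do_reset().

#include <linux/bitops.h>

/* Hypothetical reset types, ordered from least to most intrusive. */
enum example_reset_type { EX_RESET_NONE, EX_RESET_PF, EX_RESET_CORE, EX_RESET_GLOBAL };

/* Hypothetical request-bit positions in the state bitmap. */
#define EX_PF_RESET_REQ         0
#define EX_CORE_RESET_REQ       1
#define EX_GLOBAL_RESET_REQ     2

static void example_reset_subtask(unsigned long *state,
                                  void (*do_reset)(enum example_reset_type))
{
        enum example_reset_type reset_type = EX_RESET_NONE;

        /* Clear *all* pending request bits while remembering the most
         * intrusive one, so a stale bit cannot trigger a second reset on
         * the next service-task run.
         */
        if (test_and_clear_bit(EX_PF_RESET_REQ, state))
                reset_type = EX_RESET_PF;
        if (test_and_clear_bit(EX_CORE_RESET_REQ, state))
                reset_type = EX_RESET_CORE;
        if (test_and_clear_bit(EX_GLOBAL_RESET_REQ, state))
                reset_type = EX_RESET_GLOBAL;

        /* Perform only the most intrusive of the requested resets. */
        if (reset_type != EX_RESET_NONE)
                do_reset(reset_type);
}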
|
|
Provide the DMA_ATTR_WEAK_ORDERING and DMA_ATTR_SKIP_CPU_SYNC attributes to
the DMA API during the mapping operations on the Rx side. With this change,
non-x86 platforms will be able to sync only what is actually being used
(the 2k buffer) instead of the entire page. This should yield a slight
performance improvement.
Furthermore, a DMA unmap may destroy the changes that were made to the
buffer by the CPU when the platform is not x86. Using the
DMA_ATTR_SKIP_CPU_SYNC attribute fixes this issue.
Also add a sync_single_for_device call during the Rx buffer assignment,
to make sure that the cache lines are cleared before the device attempts
to write to the buffer.
Signed-off-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
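A sketch of the mapping/sync pattern described above, assuming page-backed Rx buffers of which the hardware uses a 2k slice; the helper names and the attribute macro are illustrative.

#include <linux/dma-mapping.h>
#include <linux/mm.h>

#define EX_RX_DMA_ATTR  (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
#define EX_RX_BUF_LEN   2048

/* Map the whole Rx page, but skip the CPU sync and allow weak ordering. */
static dma_addr_t ex_map_rx_page(struct device *dev, struct page *page)
{
        return dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
                                  DMA_FROM_DEVICE, EX_RX_DMA_ATTR);
}

/* Before handing the buffer to the device, sync only the 2k slice the
 * hardware will actually write, not the entire page.
 */
static void ex_give_buf_to_device(struct device *dev, dma_addr_t dma,
                                  unsigned int offset)
{
        dma_sync_single_range_for_device(dev, dma, offset, EX_RX_BUF_LEN,
                                         DMA_FROM_DEVICE);
}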
|
|
Refactor ice_fetch_rx_buf and ice_add_rx_frag so that we have
standalone functions that do either the skb construction or the frag
addition to a previously constructed skb.
The skb handling between rx_bufs is spread among various functions:
ice_get_rx_buf retrieves the skb pointer from the rx_buf, and if it is a
NULL pointer we call ice_construct_skb, otherwise we add a frag
to the current skb via ice_add_rx_frag. Then, in ice_put_rx_buf, the
skb pointer that belongs to the rx_buf is cleared. Moving further, if
the current frame is not an EOP frame, we assign the current skb to the
rx_buf pointed to by the updated next_to_clean indicator.
What is more, during buffer reuse, assign each member of
ice_rx_buf individually so we avoid an unnecessary copy of the skb.
Last but not least, this logic split will allow for better code reuse
when adding support for build_skb.
Signed-off-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
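A condensed sketch of the resulting per-descriptor flow; the function names follow the commit message, but the signatures and the surrounding details are simplified assumptions, not the driver's actual code (it would live alongside these helpers in ice_txrx.c).

/* Simplified flow only -- real signatures and descriptor parsing differ. */
static struct sk_buff *ex_clean_one_buf(struct ice_ring *rx_ring,
                                        unsigned int size)
{
        struct ice_rx_buf *rx_buf;
        struct sk_buff *skb;

        /* Fetch the buffer, sync it for CPU use and take over its skb. */
        rx_buf = ice_get_rx_buf(rx_ring, &skb, size);

        if (!skb)                       /* first buffer of the frame */
                skb = ice_construct_skb(rx_ring, rx_buf, size);
        else                            /* continuation of the frame */
                ice_add_rx_frag(rx_buf, skb, size);

        /* Recycle or free the page and clear rx_buf->skb; for a non-EOP
         * frame the caller stores skb in the rx_buf at next_to_clean.
         */
        ice_put_rx_buf(rx_ring, rx_buf);

        return skb;
}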
|
|
Pull out the code responsible for page counting and buffer recycling so
that it will be possible to clean up the Rx buffers in cases where we
won't allocate an skb (e.g. XDP).
Signed-off-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
|
{get,put}_page are atomic operations which we use for page count
handling. The current logic for refcount handling is that we increment
it when passing an skb with the data from the first half of the page up
to the network stack and recycle the second half of the page. This
operation protects us from losing a page, since the network stack can
decrement the refcount of the page from the skb.
Performance can be slightly improved by doing bulk updates of the
refcount instead of doing it one by one. During buffer initialization,
maximize the page's refcount and don't allow the refcount to become
less than two.
Signed-off-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
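A sketch of the bulk-refcount pattern (as used by several Intel Ethernet drivers), assuming a pagecnt_bias field on the Rx buffer; the structure and constants are illustrative.

#include <linux/kernel.h>
#include <linux/mm.h>

struct ex_rx_buf {
        struct page *page;
        unsigned int pagecnt_bias;
};

/* At buffer init: take a large refcount once instead of get_page() per use. */
static void ex_rx_buf_init(struct ex_rx_buf *buf, struct page *page)
{
        buf->page = page;
        page_ref_add(page, USHRT_MAX - 1);      /* page starts with refcount 1 */
        buf->pagecnt_bias = USHRT_MAX;
}

/* Handing half a page up to the stack just consumes one unit of bias. */
static void ex_rx_buf_use(struct ex_rx_buf *buf)
{
        buf->pagecnt_bias--;
}

/* On reuse, top the bias back up in bulk when it runs low, keeping the
 * effective refcount from ever dropping below two.
 */
static void ex_rx_buf_reuse(struct ex_rx_buf *buf)
{
        if (unlikely(buf->pagecnt_bias == 1)) {
                page_ref_add(buf->page, USHRT_MAX - 1);
                buf->pagecnt_bias = USHRT_MAX;
        }
}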
|
|
Instead of adding a frag and later, when dealing with the EOP frame,
accessing that frag in order to copy the headers onto the linear part of
the skb, we can do this in ice_add_rx_frag in the case where data_len is
still 0 and the frame won't fit onto the linear part as a whole.
The function comment of ice_pull_tail was a bit misleading because of the
optimizations it mentioned (dropping a frag / maintaining an accurate
truesize of the skb); it seems that this part of the logic was dropped
and the comment was not updated to reflect the change.
Signed-off-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
|
Introduce ice_can_reuse_rx_page, which will verify whether the page can
be reused and return the boolean result to the caller.
Signed-off-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
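A sketch of the checks such a helper typically performs, reusing the ex_rx_buf layout from the refcount sketch above; the exact conditions in the ice driver may differ.

#include <linux/mm.h>
#include <linux/topology.h>
#include <linux/types.h>

static bool ex_can_reuse_rx_page(struct ex_rx_buf *rx_buf)
{
        struct page *page = rx_buf->page;

        /* Pages that ended up on a remote NUMA node are not worth recycling. */
        if (page_to_nid(page) != numa_mem_id())
                return false;

        /* Someone outside the driver (the stack) still holds a reference
         * beyond our own bias, so the page cannot be flipped and reused yet.
         */
        if (page_count(page) - rx_buf->pagecnt_bias > 1)
                return false;

        return true;
}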
|
|
Introduce ice_get_rx_buf, which will fetch the Rx buffer and do the DMA
synchronization. The length of the packet contained in the hardware Rx
descriptor is now read in ice_clean_rx_irq, so we can feed ice_get_rx_buf
with it and drop the rx_desc argument from ice_fetch_rx_buf
and ice_add_rx_frag.
Signed-off-by: Maciej Fijalkowski <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
|
The hardware now supports link events over the admin receive queue (ARQ),
so enable HW link events over the ARQ and remove code for link event
polling.
Signed-off-by: Brett Creeley <[email protected]>
Reviewed-by: Bruce Allan <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
|
Someone went through the effort of making this a variable so let's use
it instead of recalculating it again.
Signed-off-by: Alan Brady <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
|
The VLAN rule is lost when the VM starts or the AVF driver (iavf.ko) is
reloaded, so it is necessary to add this rule again.
Signed-off-by: Michal Swiatkowski <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
|
When a VSI increases its number of queues dynamically, the scheduler
just needs to add the new required nodes rather than readjusting with the
previously allocated number of nodes. Readjusting didn't provide enough
parents to add the upper-layer nodes and also couldn't place the LAN and
RDMA subtrees separately.
In the decrease case, always keep the VSI configuration with the maximum
number of queues. This will leave some extra nodes in the tree, but no
harm done.
Signed-off-by: Victor Raj <[email protected]>
Reviewed-by: Bruce Allan <[email protected]>
Signed-off-by: Anirudh Venkataramanan <[email protected]>
Tested-by: Andrew Bowers <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
|
|
Some drivers are becoming more dependent on NET_DEVLINK being selected
in configuration. With upcoming compat functions, the behavior would be
wrong in case devlink was not compiled in. So make the drivers select
NET_DEVLINK and rely on the functions being there, not just stubs.
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Move the setting of the devlink port type to after the port is registered.
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Similar to other drivers, move the port type set to after netdev registration
is done. Along with that, clear the type before unregistration.
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Since attrs are static during the existence of a devlink port, set them
before registration of the port.
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Call devlink_port_type_eth_set() before devlink_port_register(). Bnxt
instances won't change type during their lifetime. This avoids one extra
userspace devlink notification.
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
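A minimal sketch of the described call order, with error handling omitted; the surrounding bnxt structures are not shown and the helper name is hypothetical.

#include <net/devlink.h>

/* Set the type first so the registration notification already carries it,
 * avoiding a second devlink notification to userspace.
 */
static int ex_register_eth_port(struct devlink *devlink,
                                struct devlink_port *dl_port,
                                struct net_device *netdev,
                                unsigned int port_index)
{
        devlink_port_type_eth_set(dl_port, netdev);
        return devlink_port_register(devlink, dl_port, port_index);
}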
|
|
Set the attrs properly so devlink has enough info to generate physical
port names.
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
devlink functions are in use, so include the related header file.
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The driver is now constantly tested in our lab on aarch64 hardware:
Jetson TX2, and Pascal and Xavier Tegra based hardware.
Many of the Tegra SMMU-related HW bugs have already been fixed or worked
around. Thus, add ARM64 into Kconfig.
Also add a COMPILE_TEST dependency.
Signed-off-by: Igor Russkikh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The default LRO HW configuration was very conservative: a low number of
descriptors per LRO sequence, a small session timeout, and inefficient
settings in the interrupt generation logic.
Change the max number of LRO descriptors from 2 to 16 to
increase performance. Increase the maximum coalescing interval
in HW to 250us. Tune up the HW LRO interrupt generation settings
to prevent HW issues with long LRO sessions.
Signed-off-by: Nikita Danilov <[email protected]>
Signed-off-by: Igor Russkikh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
For multi-gigabit rates, a 1K ring size is often not enough and causes
extra packet drops in hardware.
Signed-off-by: Igor Russkikh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This correlates with the default internet MTU. This also allows page
flip/reuse to be activated, since each allocated RX page now serves
two frags/packets.
Signed-off-by: Igor Russkikh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Before this change, we refilled the ring even on a single descriptor move.
Under high packet load that caused the page allocation logic to be triggered
too often, which made overall ring processing slower.
Moreover, with page buffer reuse implemented, we should give the higher
networking layers a chance to process received packets faster, release
the pages they consumed and therefore give these pages a higher chance
of being reused.
The RX ring is now refilled only when AQ_CFG_RX_REFILL_THRES or more
descriptors have been processed (32 by default). Under regular traffic this
gives quite enough time for a packet to be consumed and its page to be reused.
Signed-off-by: Igor Russkikh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
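A sketch of the thresholded refill with hypothetical ring bookkeeping; only the gating logic is of interest, and the constant value follows the commit message.

#define AQ_CFG_RX_REFILL_THRES  32U

/* Hypothetical bookkeeping -- the real ring structure differs. */
struct ex_rx_ring {
        unsigned int descs_consumed;    /* processed since the last refill */
};

static void ex_ring_alloc_buffers(struct ex_rx_ring *ring, unsigned int count)
{
        /* allocate 'count' fresh Rx buffers/pages here */
}

static void ex_rx_refill(struct ex_rx_ring *ring)
{
        /* Refill only once a batch of descriptors has been consumed, giving
         * the stack time to release pages so they can be reused.
         */
        if (ring->descs_consumed < AQ_CFG_RX_REFILL_THRES)
                return;

        ex_ring_alloc_buffers(ring, ring->descs_consumed);
        ring->descs_consumed = 0;
}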
|
|
We introduce an internal aq_rxpage wrapper over a regular page,
which tracks an extra field: the rxpage offset inside the allocated page.
This offset allows one page to be reused for multiple packets.
When needed (for example with large frame processing), the allocated
page order can be customized. This gives even larger page reuse
efficiency.
page_ref_count is used to track page users. If during rx refill the
underlying page has users, we increase pg_off by the rx frame size,
thus the top half of the page is reused.
Signed-off-by: Igor Russkikh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
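A sketch of the wrapper and the reuse decision; the field names mirror the commit message, but the layout and the helper are assumptions, not the driver's actual aq_rxpage code.

#include <linux/mm.h>
#include <linux/types.h>

struct ex_rxpage {
        struct page *page;
        dma_addr_t daddr;
        unsigned int order;     /* allocated page order, customizable */
        unsigned int pg_off;    /* offset of the next frame inside the page */
};

/* Called at rx refill time: if the page still has other users, keep it and
 * advance the offset so the remaining part of the page serves the next
 * frame; if we are the only user, reset the offset and reuse the whole page.
 */
static bool ex_rxpage_reuse(struct ex_rxpage *rxpage, unsigned int frame_size)
{
        if (page_ref_count(rxpage->page) > 1) {
                if (rxpage->pg_off + frame_size >
                    (PAGE_SIZE << rxpage->order))
                        return false;   /* no room left, allocate a new page */
                rxpage->pg_off += frame_size;
        } else {
                rxpage->pg_off = 0;
        }

        return true;
}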
|
|
The Atlantic driver used a 14-byte preallocated skb size. That made L3
protocol processing inefficient because pskb_pull had to fetch all the
L3/L4 headers from extra fragments.
Especially on UDP flows, that caused extra packet drops because the CPU
was overloaded with pskb_pull.
This patch uses eth_get_headlen for skb preallocation.
Signed-off-by: Igor Russkikh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
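A sketch of headlen-based preallocation; note that in kernels of this vintage eth_get_headlen() took (data, len) and only later gained a net_device argument, so the signature is version dependent, and the helper name and size cap are illustrative.

#include <linux/etherdevice.h>
#include <linux/kernel.h>
#include <linux/skbuff.h>

#define EX_RX_HDR_SIZE  256     /* illustrative cap on the pulled header size */

static struct sk_buff *ex_alloc_rx_skb(struct napi_struct *napi,
                                       void *frame_va, unsigned int frame_len)
{
        unsigned int hdr_len;
        struct sk_buff *skb;

        /* Let flow dissection tell us how long the headers are, instead of
         * hardcoding 14 bytes (the Ethernet header only).
         */
        hdr_len = eth_get_headlen(frame_va,
                                  min_t(unsigned int, frame_len, EX_RX_HDR_SIZE));

        skb = napi_alloc_skb(napi, hdr_len);
        if (!skb)
                return NULL;

        /* Copy only the headers into the linear part; the payload stays in
         * page fragments added by the caller.
         */
        skb_put_data(skb, frame_va, hdr_len);

        return skb;
}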
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2019-03-20
This series includes updates to mlx5 driver,
1) Compiler warnings cleanup from Saeed Mahameed
2) Parav Pandit simplifies sriov enable/disables
3) Gustavo A. R. Silva, Removes a redundant assignment
4) Moshe Shemesh, Adds Geneve tunnel stateless offload support
5) Eli Britstein, Adds the Support for VLAN modify action and
Replaces TC VLAN pop and push actions with VLAN modify
Note: This series includes two simple non-mlx5 patches,
1) Declare IANA_VXLAN_UDP_PORT definition in include/net/vxlan.h,
and use it in some drivers.
2) Declare GENEVE_UDP_PORT definition in include/net/geneve.h,
and use it in mlx5 and nfp drivers.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
100GbE Intel Wired LAN Driver Updates 2019-03-22
This series contains updates to ice driver only.
Akeem enables MAC anti-spoofing by default when a new VSI is being
created. Fixes an issue when reclaiming VF resources back to the pool
after reset, by freeing VF resources separately using the first VF
vector index to traverse the list, instead of starting at the last
assigned vectors list. Added support for VF & PF promiscuous mode in
the ice driver. Fixed the PF driver from letting the VF know it is "not
trusted" when it attempts to add more than its permitted additional MAC
addresses. Altered how the driver gets the VF VSIs instances, instead
of using the mailbox messages to retrieve VSIs, get it directly via the
VF object in the PF data structure.
Bruce fixes return values to resolve static analysis warnings. Made
whitespace changes to increase readability and reduce code wrapping.
Anirudh cleans up code by removing a function prototype that was never
implemented and removed an unused field in the ice_sched_vsi_info
structure.
Kiran fixes a potential divide by zero issue by adding a check.
Victor cleans up the transmit scheduler by adjusting the stack variable
usage and added/modified debug prints to make them more useful.
Yashaswini updates the driver in VEB mode to ensure that the LAN_EN bit
is set if all the right conditions are met.
Christopher ensures the loopback enable bit is not set for prune switch
rules, since all transmit traffic would be looped back to the internal
switch and dropped.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
We prefer static_branch_unlikely() over static_key_false() these days.
Signed-off-by: Eric Dumazet <[email protected]>
Acked-by: Soheil Hassas Yeganeh <[email protected]>
Acked-by: Willem de Bruijn <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
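For reference, a minimal sketch of the preferred style (the key name is hypothetical).

#include <linux/jump_label.h>
#include <linux/types.h>

static DEFINE_STATIC_KEY_FALSE(ex_feature_key);

static inline bool ex_feature_enabled(void)
{
        /* Compiles to a patched-out branch while the key stays disabled. */
        return static_branch_unlikely(&ex_feature_key);
}

/* Flip the key at runtime, e.g. from a sysctl or setsockopt handler. */
static void ex_feature_set(bool on)
{
        if (on)
                static_branch_enable(&ex_feature_key);
        else
                static_branch_disable(&ex_feature_key);
}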
|
|
Bartek reported that after a few cable unplug/replug cycles, a replug
suddenly isn't detected any longer. His system uses a RTL8106; I wasn't
able to reproduce the issue with a RTL8168g. According to his bisect
the referenced commit caused the regression. As Realtek doesn't
release datasheets or errata, it's hard to say what the actual root
cause is, but this change was reported to fix the issue.
Fixes: 38caff5a445b ("r8169: handle all interrupt events in the hard irq handler")
Reported-by: Bartosz Skrzypczak <[email protected]>
Suggested-by: Bartosz Skrzypczak <[email protected]>
Tested-by: Bartosz Skrzypczak <[email protected]>
Signed-off-by: Heiner Kallweit <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The call to of_get_child_by_name returns a node pointer with refcount
incremented thus it must be explicitly decremented after the last
usage.
Detected by coccinelle with the following warnings:
./drivers/net/ethernet/ti/netcp_ethss.c:3661:2-8: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 3654, but without a corresponding object release within this function.
./drivers/net/ethernet/ti/netcp_ethss.c:3665:2-8: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 3654, but without a corresponding object release within this function.
Signed-off-by: Wen Yang <[email protected]>
Cc: Wingman Kwok <[email protected]>
Cc: Murali Karicheri <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: David S. Miller <[email protected]>
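The general fix pattern behind these coccinelle reports, sketched with of_get_child_by_name(); the same applies to of_parse_phandle() and other getters, and the node name used here is illustrative.

#include <linux/errno.h>
#include <linux/of.h>

static int ex_parse_interfaces(struct device_node *node)
{
        struct device_node *interfaces;
        int ret = 0;

        interfaces = of_get_child_by_name(node, "netcp-interfaces");
        if (!interfaces)
                return -ENODEV;

        /* ... walk and parse the child nodes here ... */

        /* The getter returned the node with its refcount incremented, so
         * drop the reference on every exit path once we are done with it.
         */
        of_node_put(interfaces);
        return ret;
}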
|
|
The call to ehea_get_eth_dn returns a node pointer with refcount
incremented thus it must be explicitly decremented after the last
usage.
Detected by coccinelle with the following warnings:
./drivers/net/ethernet/ibm/ehea/ehea_main.c:3163:2-8: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 3154, but without a corresponding object release within this function.
Signed-off-by: Wen Yang <[email protected]>
Cc: Douglas Miller <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: David S. Miller <[email protected]>
|
|
The call to of_parse_phandle returns a node pointer with refcount
incremented thus it must be explicitly decremented after the last
usage.
Detected by coccinelle with the following warnings:
./drivers/net/ethernet/xilinx/xilinx_axienet_main.c:1624:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 1569, but without a corresponding object release within this function.
Signed-off-by: Wen Yang <[email protected]>
Cc: Anirudha Sarangi <[email protected]>
Cc: John Linn <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: David S. Miller <[email protected]>
|
|
After 90eff9096c01 ("net: phy: Allow splitting MDIO bus/device support
from PHYs"), the various MDIO bus drivers were no longer parented by
config PHYLIB but by config MDIO_BUS, which is not a menuconfig. Fix
this by depending on MDIO_DEVICE, which is a menuconfig.
This is visually nicer and less confusing for users.
Fixes: 90eff9096c01 ("net: phy: Allow splitting MDIO bus/device support from PHYs")
Signed-off-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The tristate prompt should have been replaced rather than defined a few
lines below; this was a rebase mistake.
Fixes: 17cc9821766c ("net: phy: Move Omega PHY entry to Cygnus PHY driver")
Reported-by: Stephen Rothwell <[email protected]>
Signed-off-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Changing the VLAN header may be implemented by popping the existing
header and pushing a new one. Translate those operations as a VLAN modify.
This is applicable to use cases such as OVS, where the controller
translates a vlan modify meta (OF) rule to a DP pop+push actions rule.
Signed-off-by: Eli Britstein <[email protected]>
Reviewed-by: Roi Dayan <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
Support VLAN modify action by emulating a rewrite action for the VLAN
fields. Currently, the only supported field is the vid. The prio in the
action must be set to 0 to indicate no change.
Signed-off-by: Eli Britstein <[email protected]>
Reviewed-by: Roi Dayan <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
Add VLAN ID rewrite fields as a pre-step to support this rewrite.
Signed-off-by: Eli Britstein <[email protected]>
Reviewed-by: Roi Dayan <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|