path: root/drivers/net/ethernet
Age | Commit message | Author | Files | Lines
2020-07-22 | net: mscc: ocelot: fix non-initialized CPU port on VSC7514 | Vladimir Oltean | 1 | -14/+14
The VSC7514 is marketed as a 10-port switch, however it has 11 physical ports (0->10) in the block diagram: https://www.microsemi.com/product-directory/ethernet-switches/3992-vsc7514 (also in the device tree at arch/mips/boot/dts/mscc/ocelot.dtsi) Additionally, by architecture it has one more entry in the analyzer block, situated right after the physical ports, for the CPU port module. This is not a physical port, it only represents a channel for frame injection and extraction. That entry for the CPU port is at index 11 in the analyzer. When the register groups for QSYS_SWITCH_PORT_MODE, SYS_PORT_MODE and SYS_PAUSE_CFG are declared to be replicated 11 times, the 11th entry in the array of regfields is not initialized, so the CPU port module is not initialized either. The documentation of QSYS_SWITCH_PORT_MODE for VSC7514 also says that this register group is replicated 12 times, so this patch is simply reflecting that and not introducing any further inconsistency. Fixes: 886e1387c73d ("net: mscc: ocelot: convert QSYS_SWITCH_PORT_MODE and SYS_PORT_MODE to regfields") Fixes: 541132f0961a ("net: mscc: ocelot: convert SYS_PAUSE_CFG register access to regfield") Reported-by: Bryan Whitehead <[email protected]> Signed-off-by: Vladimir Oltean <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | ionic: interface file updates | Shannon Nelson | 1 | -20/+68
Add some new interface values and update a few more descriptions. Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | ionic: rearrange reset and bus-master control | Shannon Nelson | 1 | -5/+4
We can prevent potential incorrect DMA access attempts from the NIC by enabling bus-master after the reset, and by disabling bus-master earlier in cleanup. Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
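A minimal sketch of the ordering described above (the ionic_reset() helper and the surrounding structure are placeholders, not the actual ionic code; only pci_set_master()/pci_clear_master() are the real API):

    #include <linux/pci.h>

    struct ionic;                           /* opaque driver context for the sketch */
    int ionic_reset(struct ionic *ionic);   /* hypothetical reset helper */

    static int example_probe_order(struct pci_dev *pdev, struct ionic *ionic)
    {
            int err;

            /* reset the device while it is still not allowed to master the bus */
            err = ionic_reset(ionic);
            if (err)
                    return err;

            /* only now let the NIC issue DMA */
            pci_set_master(pdev);
            return 0;
    }

    static void example_remove_order(struct pci_dev *pdev)
    {
            /* cut off DMA as early as possible in teardown */
            pci_clear_master(pdev);
            /* ... rest of the cleanup ... */
    }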
2020-07-21 | ionic: update eid test for overflow | Shannon Nelson | 1 | -1/+1
Fix up our comparison to better handle a potential (but largely unlikely) wrap around. Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
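A minimal sketch of a wrap-safe "newer than" comparison of the kind described above (field and helper names are illustrative, not the actual ionic ones):

    #include <linux/types.h>

    /* the signed difference keeps the ordering test correct even if the
     * 64-bit event id counter ever wraps around */
    static bool eid_is_newer(u64 eid, u64 last_eid)
    {
            return (s64)(eid - last_eid) > 0;
    }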
2020-07-21 | ionic: remove unused ionic_coal_hw_to_usec | Shannon Nelson | 1 | -13/+0
Clean up some unused code. Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | ionic: set netdev default name | Shannon Nelson | 1 | -0/+1
If the host system's udev fails to set a new name for the network port, there is no NETDEV_CHANGENAME event to trigger the driver to send the name down to the firmware. It is safe to set the lif name multiple times, so we add a call early on to set the default netdev name to be sure the FW has something to use in its internal debug logging. Then when udev gets around to changing it we can update it to the actual name the system will be using. Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | ionic: get MTU from lif identity | Shannon Nelson | 3 | -5/+15
Change from using hardcoded MTU limits and instead use the firmware defined limits. The value from the LIF attributes is the frame size, so we take off the header size to convert to MTU size. Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ethernet: ti: add NETIF_F_HW_TC hw feature flag for taprio offload | Murali Karicheri | 1 | -1/+2
Currently the driver supports taprio offload, which is a tc feature offloaded to the cpsw hardware. So the driver has to set the hw feature flag, NETIF_F_HW_TC, in the net device to be compliant. This patch adds the flag. Fixes: 8127224c2708 ("ethernet: ti: am65-cpsw-qos: add TAPRIO offload support") Signed-off-by: Murali Karicheri <[email protected]> Signed-off-by: Grygorii Strashko <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ethernet: ave: Fix error returns in ave_init | Wang Hai | 1 | -1/+1
When regmap_update_bits() fails in ave_init(), the calls to reset_control_assert() and clk_disable_unprepare() are missed. Add a goto to the out_reset_assert label to fix this. Fixes: 57878f2f4697 ("net: ethernet: ave: add support for phy-mode setting of system controller") Reported-by: Hulk Robot <[email protected]> Signed-off-by: Wang Hai <[email protected]> Reviewed-by: Kunihiko Hayashi <[email protected]> Signed-off-by: David S. Miller <[email protected]>
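A minimal sketch of the error-unwind pattern this fix follows (the register offset, mask, value and the private struct are illustrative; only the goto structure mirrors the description above):

    #include <linux/clk.h>
    #include <linux/regmap.h>
    #include <linux/reset.h>

    struct example_priv {
            struct clk *clk;
            struct reset_control *rst;
            struct regmap *regmap;
    };

    #define EXAMPLE_REG  0x10
    #define EXAMPLE_MASK 0xff
    #define EXAMPLE_VAL  0x01

    static int example_init(struct example_priv *priv)
    {
            int ret;

            ret = clk_prepare_enable(priv->clk);
            if (ret)
                    return ret;

            ret = reset_control_deassert(priv->rst);
            if (ret)
                    goto out_clk_disable;

            ret = regmap_update_bits(priv->regmap, EXAMPLE_REG, EXAMPLE_MASK, EXAMPLE_VAL);
            if (ret)
                    goto out_reset_assert;  /* the unwind path added by this fix */

            return 0;

    out_reset_assert:
            reset_control_assert(priv->rst);
    out_clk_disable:
            clk_disable_unprepare(priv->clk);
            return ret;
    }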
2020-07-21 | dpaa2-eth: add support for TBF offload | Ioana Ciornei | 2 | -1/+48
React to TC_SETUP_QDISC_TBF and configure the egress shaper as appropriate with the maximum rate and burst size requested by the user. TBF can only be offloaded on DPAA2 when it is the root qdisc, i.e. it is a per-port shaper. Signed-off-by: Ioana Ciornei <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | dpaa2-eth: add API for Tx shaping | Ioana Ciornei | 3 | -0/+65
Add the necessary API (dpni_set_tx_shaping) for configuring the rate and burst size of a per port shaper in DPAA2. Signed-off-by: Ioana Ciornei <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | dpaa2-eth: move the mqprio setup into a separate function | Ioana Ciornei | 1 | -6/+13
Move the setup done for MQPRIO into a separate function so that with the addition of another offload we do not crowd dpaa2_eth_setup_tc(). After this restructuring it's easier to see what is supported in terms of Qdisc offloading. Signed-off-by: Ioana Ciornei <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | r8169: allow to enable ASPM on RTL8125A | Heiner Kallweit | 1 | -0/+2
For most chip versions this has been added already. Allow also for RTL8125A to enable ASPM. Signed-off-by: Heiner Kallweit <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | qed: suppress false-positive interrupt error messages on HW init | Alexander Lobakin | 3 | -24/+32
It was found that qed_pglueb_rbc_attn_handler() can produce a lot of false-positive error detections on driver load/reload (especially after crashes/recoveries) and spam the kernel log: [ 4.958275] [qed_pglueb_rbc_attn_handler:324()]ICPL error - 00d00ff0 [ 2079.146764] [qed_pglueb_rbc_attn_handler:324()]ICPL error - 00d80ff0 [ 2116.374631] [qed_pglueb_rbc_attn_handler:324()]ICPL error - 00d80ff0 [ 2135.250564] [qed_pglueb_rbc_attn_handler:324()]ICPL error - 00d80ff0 [...] Reduce the logging level of two false-positive prone error messages from notice to verbose on initialization (only) to not mix it with real error attentions while debugging. Fixes: 666db4862f2d ("qed: Revise load sequence to avoid PCI errors") Signed-off-by: Alexander Lobakin <[email protected]> Signed-off-by: Igor Russkikh <[email protected]> Signed-off-by: Michal Kalderon <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21qed: suppress "don't support RoCE & iWARP" flooding on HW initAlexander Lobakin1-2/+2
Change the verbosity of the "don't support RoCE & iWARP simultaneously" warning to debug level to stop flooding on driver/hardware initialization: [ 4.783230] qede 01:00.00: Storm FW 8.37.7.0, Management FW 8.52.9.0 [MBI 15.10.6] [eth0] [ 4.810020] [qed_rdma_set_pf_params:2076()]Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only [ 4.861186] qede 01:00.01: Storm FW 8.37.7.0, Management FW 8.52.9.0 [MBI 15.10.6] [eth1] [ 4.893311] [qed_rdma_set_pf_params:2076()]Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only [ 5.181713] qede a1:00.00: Storm FW 8.37.7.0, Management FW 8.52.9.0 [MBI 15.10.6] [eth2] [ 5.224740] [qed_rdma_set_pf_params:2076()]Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only [ 5.276449] qede a1:00.01: Storm FW 8.37.7.0, Management FW 8.52.9.0 [MBI 15.10.6] [eth3] [ 5.318671] [qed_rdma_set_pf_params:2076()]Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only [ 5.369548] qede a1:00.02: Storm FW 8.37.7.0, Management FW 8.52.9.0 [MBI 15.10.6] [eth4] [ 5.411645] [qed_rdma_set_pf_params:2076()]Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only Fixes: e0a8f9de16fc ("qed: Add iWARP enablement support") Signed-off-by: Alexander Lobakin <[email protected]> Signed-off-by: Igor Russkikh <[email protected]> Signed-off-by: Michal Kalderon <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ena: support new LLQ acceleration mode | Arthur Kiyanovski | 7 | -24/+109
New devices add a new hardware acceleration engine, which adds some restrictions to the driver. A metadata descriptor must be present for each packet, and the maximum burst size between two doorbells is now limited to a number advertised by the device. This patch adds:
1. A handshake protocol between the driver and the device, so the device will enable the accelerated queues only when both sides support it.
2. The driver support for the new acceleration engine:
2.1. Send a metadata descriptor for each Tx packet.
2.2. Limit the number of packets sent between doorbells. (*)
(*) A previous driver implementation of this feature was committed in commit 05d62ca218f8 ("net: ena: add handling of llq max tx burst size"), however the design of the interface between the driver and the device has changed since then. This change is reflected in this commit.
Signed-off-by: Netanel Belgazal <[email protected]> Signed-off-by: Arthur Kiyanovski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ena: move llq configuration from ena_probe to ena_device_init() | Arthur Kiyanovski | 1 | -63/+73
When the ENA device resets to recover from some error state, all LLQ configuration values are reset to their defaults, because LLQ is initialized only once during ena_probe(). Changes in this commit:
1. Move the LLQ configuration process into ena_device_init(), which is called from both ena_probe() and ena_restore_device(). This way, LLQ setup configurations that are different from the default values will survive resets.
2. Extract the LLQ bar mapping to ena_map_llq_bar(), and call it once in the lifetime of the driver, from ena_probe(), since there is no need to unmap and map the LLQ bar again on every reset.
3. Map the LLQ bar if it exists, regardless of whether initialization of the LLQ placement policy (ENA_ADMIN_PLACEMENT_POLICY_DEV) succeeded or not. Initialization might fail the first time, falling back to the ENA_ADMIN_PLACEMENT_POLICY_HOST placement policy, but later succeed after a device reset, in which case the LLQ bar needs to already be mapped.
Signed-off-by: Sameeh Jubran <[email protected]> Signed-off-by: Arthur Kiyanovski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ena: enable support of rss hash key and function changes | Arthur Kiyanovski | 2 | -2/+6
Add the rss_configurable_function_key bit to driver_supported_features. This bit tells the device that the driver in question supports retrieving and updating the RSS function and hash key, and therefore the device should allow RSS function and key manipulation. This commit turns on device support for hash key and RSS function management. Without this commit the feature is turned off at the device and appears to the user as unsupported. This commit concludes the following series of already merged commits:
commit 0af3c4e2eab8 ("net: ena: changes to RSS hash key allocation")
commit c1bd17e51c71 ("net: ena: change default RSS hash function to Toeplitz")
commit f66c2ea3b18a ("net: ena: allow setting the hash function without changing the key")
commit e9a1de378dd4 ("net: ena: fix error returning in ena_com_get_hash_function()")
commit 80f8443fcdaa ("net: ena: avoid unnecessary admin command when RSS function set fails")
commit 6a4f7dc82d1e ("net: ena: rss: do not allocate key when not supported")
commit 0d1c3de7b8c7 ("net: ena: fix incorrect default RSS key")
The above commits represent the last part of the implementation of this feature, and with them merged the feature can be enabled in the device. Signed-off-by: Arthur Kiyanovski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ena: add support for traffic mirroring | Arthur Kiyanovski | 2 | -7/+13
Add support for traffic mirroring, where the hardware reads the buffer from the instance memory directly. Traffic Mirroring needs access to the rx buffers in the instance. To have this access, this patch: 1. Changes the code to map and unmap the rx buffers bidirectionally. 2. Enables the relevant bit in driver_supported_features to indicate to the FW that this driver supports traffic mirroring. Rx completion is not generated until mirroring is done to avoid the situation where the driver changes the buffer before it is mirrored. Signed-off-by: Arthur Kiyanovski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ena: cosmetic: change ena_com_stats_admin stats to u64 | Arthur Kiyanovski | 2 | -7/+7
The size of the admin statistics in ena_com_stats_admin is changed from 32bit to 64bit so to align with the sizes of the other statistics in the driver (i.e. rx_stats, tx_stats and ena_stats_dev). This is done as part of an effort to create a unified API to read statistics. Signed-off-by: Shay Agroskin <[email protected]> Signed-off-by: Arthur Kiyanovski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ena: cosmetic: satisfy gcc warning | Arthur Kiyanovski | 1 | -1/+1
gcc 4.8 reports a warning when initializing with = {0}. Dropping the "0" from the braces fixes the issue. This fix is not ANSI compatible but is allowed by gcc. Signed-off-by: Sameeh Jubran <[email protected]> Signed-off-by: Arthur Kiyanovski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
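A minimal illustration of the difference, assuming the usual gcc 4.8 missing-braces case where the first member of the struct is itself an aggregate (the struct names are made up):

    struct inner { int a; int b; };
    struct outer { struct inner in; int c; };

    static void example(void)
    {
            struct outer warns = { 0 };     /* gcc 4.8 (-Wall): "missing braces around initializer" */
            struct outer clean = { };       /* no warning; still zero-initializes every member */

            (void)warns;
            (void)clean;
    }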
2020-07-21 | net: ena: add reserved PCI device ID | Arthur Kiyanovski | 1 | -0/+5
Add a reserved PCI device ID to the driver's table. It is used for internal testing purposes. Signed-off-by: Arthur Kiyanovski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ena: avoid unnecessary rearming of interrupt vector when busy-polling | Arthur Kiyanovski | 2 | -1/+8
For an overview of the race created by this patch, see the 'synchronization' section below.
In napi busy-poll mode, the kernel invokes the napi handler of the device repeatedly to poll the NIC's receive queues. This process repeats until a timeout, specific for each connection, is up. By polling packets in busy-poll mode the user may gain lower latency and higher throughput (since the kernel no longer waits for interrupts to poll the queues) at the expense of CPU usage.
Upon completing a napi routine, the driver checks whether the routine was called by an interrupt handler. If so, the driver re-enables interrupts for the device. This is needed since an interrupt routine invocation disables future invocations until explicitly re-enabled. The driver avoids re-enabling the interrupts if they were not disabled in the first place (e.g. if the driver is in busy-poll mode).
Originally, the driver checked whether interrupt re-enabling is needed by reading the 'ena_napi->unmask_interrupt' variable. This atomic variable was set upon interrupt and cleared after re-enabling it.
In the 4.10 Linux version, the 'napi_complete_done' call was changed so that it returns 'false' when the device should not re-enable interrupts, and 'true' otherwise. The change includes reading the "NAPIF_STATE_IN_BUSY_POLL" flag to check if the napi call is in busy-poll mode, and if so, return 'false'. The driver was changed to re-enable interrupts according to this routine's return value. The Linux community rejected the use of the 'ena_napi->unmask_interrupt' variable to determine whether unmasking is needed, and urged to use the napi_complete_done() return value solely. See https://lore.kernel.org/patchwork/patch/741149/ for more details.
As explained, a busy-poll session exists for a specified timeout value, after which it exits the busy-poll mode and re-enters it later. This leads to many invocations of the napi handler where napi_complete_done() falsely indicates that interrupts should be re-enabled. This creates a bug in which the interrupts are re-enabled unnecessarily.
To reproduce this bug:
1) echo 50 | sudo tee /proc/sys/net/core/busy_poll
2) echo 50 | sudo tee /proc/sys/net/core/busy_read
3) Add counters that check whether 'ena_unmask_interrupt(tx_ring, rx_ring);' is called without disabling the interrupts in the first place (i.e. without calling the interrupt routine ena_intr_msix_io())
Steps 1+2 enable busy-poll as the default mode for new connections.
The busy poll routine rearms the interrupts after every session by design, and so we need to add an extra check that the interrupts were masked in the first place.
synchronization:
This patch introduces a race between the interrupt handler ena_intr_msix_io() and the napi routine ena_io_poll(). Some macros and instructions were added to prevent this race from leaving the interrupts masked. The following specifies the different race scenarios in this patch:
1) interrupt handler and napi routine run sequentially
i) The interrupt handler is called, sets the 'interrupts_masked' flag and successfully schedules the napi handler via softirq. In this scenario the napi routine might not see the flag change for several reasons:
a) The flag is stored in a register by the compiler. For this case the WRITE_ONCE macro is used, which prevents this.
b) The compiler might reorder the instruction. For this the smp_wmb() instruction was used, which implies a compiler memory barrier.
c) On archs with a weak consistency model (like ARM64) the napi routine might be scheduled and start running before the flag STORE instruction is committed to cache/memory. To ensure this doesn't happen, the smp_wmb() instruction was added. It ensures that the flag set instruction is committed before scheduling napi.
ii) The compiler reorders the flag's value check in the 'if' with the flag set in the napi routine. This scenario is prevented by an smp_rmb() call after the flag check.
2) interrupt handler and napi routine run in parallel (can happen when the busy poll routine invokes the napi handler)
i) The interrupt handler sets the flag on one core, while the napi routine reads it on another core. This scenario is also divided into two cases:
a) napi_complete_done() doesn't finish running, in which case napi_schedule() would just set NAPIF_STATE_MISSED and the napi routine would reschedule itself without changing the flag's value.
b) napi_complete_done() finishes running. In this case the napi routine might override the flag's value. This doesn't present any risk since it later unmasks the interrupt vector.
Signed-off-by: Shay Agroskin <[email protected]> Signed-off-by: Arthur Kiyanovski <[email protected]> Cc: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
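A condensed sketch of the flag-plus-barrier scheme described above (the flag and helper names follow the commit message where given, otherwise they are illustrative):

    #include <linux/interrupt.h>
    #include <linux/netdevice.h>

    struct example_napi {
            struct napi_struct napi;
            bool interrupts_masked;
    };

    static irqreturn_t example_intr_msix_io(int irq, void *data)
    {
            struct example_napi *en = data;

            WRITE_ONCE(en->interrupts_masked, true);
            smp_wmb();      /* flag store must be visible before napi can run elsewhere */
            napi_schedule_irqoff(&en->napi);

            return IRQ_HANDLED;
    }

    /* tail of the napi poll routine */
    static void example_poll_tail(struct example_napi *en, int work_done, int budget)
    {
            if (work_done < budget && napi_complete_done(&en->napi, work_done)) {
                    /* only unmask if the interrupt handler actually masked */
                    if (READ_ONCE(en->interrupts_masked)) {
                            smp_rmb();      /* pairs with the smp_wmb() above */
                            WRITE_ONCE(en->interrupts_masked, false);
                            /* ena_unmask_interrupt(tx_ring, rx_ring) would go here */
                    }
            }
    }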
2020-07-21 | qed: Fix ILT and XRCD bitmap memory leaks | Yuval Basson | 2 | -0/+6
- Free ILT lines used for XRC-SRQ's contexts.
- Free XRCD bitmap.
Fixes: b8204ad878ce7 ("qed: changes to ILT to support XRC") Fixes: 7bfb399eca460 ("qed: Add XRC to RoCE") Signed-off-by: Michal Kalderon <[email protected]> Signed-off-by: Yuval Basson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: hns3: fix return value error when query MAC link status fail | Jian Shen | 2 | -27/+25
Currently, the PF queries the MAC link status once per second by calling hclge_get_mac_link_status(). This function returns the error code when it fails to send the cmdq command to the firmware. That is incorrect, because the return value is used as the MAC link status, where 0 means link down and non-zero means link up. So fix it. Fixes: 46a3df9f9718 ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support") Signed-off-by: Jian Shen <[email protected]> Signed-off-by: Huazhong Tan <[email protected]> Signed-off-by: David S. Miller <[email protected]>
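A hedged sketch of the corrected calling convention implied by the description above, separating the command error from the link state (the firmware-query helper and the device struct are hypothetical stand-ins, not the real hclge code):

    struct example_hdev;                                                  /* opaque device context */
    int example_query_link_from_fw(struct example_hdev *hdev, int *up);  /* hypothetical cmdq wrapper */

    static int example_get_mac_link_status(struct example_hdev *hdev, int *link_status)
    {
            int ret;

            ret = example_query_link_from_fw(hdev, link_status);
            if (ret) {
                    /* don't let a cmdq error code be misread as "link up" */
                    *link_status = 0;
                    return ret;
            }

            return 0;
    }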
2020-07-21 | net: hns3: fix error handling for desc filling | Yunsheng Lin | 2 | -0/+9
The content of the TX desc is automatically cleared by the HW when the HW has sent out the packet to the wire. When desc filling fails in hns3_nic_net_xmit(), it will call hns3_clear_desc() to do the error handling, which misses zeroing the TX desc and checking whether an unmapping is needed. So add the zeroing and checking in hns3_clear_desc() to avoid the above problem. Also add DESC_TYPE_UNKNOWN to indicate that the info in desc_cb is not valid, because hns3_nic_reclaim_desc() may treat a desc_cb->type of zero as a packet and add it to the sent pkt statistics accordingly. Fixes: 76ad4f0ee747 ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Yunsheng Lin <[email protected]> Signed-off-by: Huazhong Tan <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: hns3: fix for not calculating TX BD send size correctly | Yunsheng Lin | 2 | -3/+1
With GRO and fraglist support, the SKB can be aggregated to a total size of 65535, and when that SKB is forwarded through a bridge, its size may be pushed to exceed 65535 when br_dev_queue_push_xmit() is called. The max send size of a BD supported by the HW is 65535. When an SKB with a headlen of over 65535 is sent to the driver, the driver needs to use multiple BDs to send the linear data, and the send size of the last BD is calculated incorrectly by the driver, which uses the '&' operation; this causes a TX error. Use the '%' operation to fix this problem. Fixes: 3fe13ed95dd3 ("net: hns3: avoid mult + div op in critical data path") Signed-off-by: Yunsheng Lin <[email protected]> Signed-off-by: Huazhong Tan <[email protected]> Signed-off-by: David S. Miller <[email protected]>
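A small numeric illustration of the '&' vs '%' issue described above, assuming the maximum BD size of 65535 mentioned in the text (the macro name here is generic, not the driver's):

    #define EXAMPLE_MAX_BD_SIZE 65535U

    /* '&' with 65535 (0xffff) computes len % 65536, but the linear data is
     * split into 65535-byte BDs, so the last BD really needs len % 65535 */
    static unsigned int last_bd_size_wrong(unsigned int len)
    {
            return len & EXAMPLE_MAX_BD_SIZE;
    }

    static unsigned int last_bd_size_right(unsigned int len)
    {
            return len % EXAMPLE_MAX_BD_SIZE;
    }

    /* e.g. len = 65546 (one full 65535-byte BD plus 11 bytes):
     *   65546 & 65535 = 10   (wrong)
     *   65546 % 65535 = 11   (right)
     */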
2020-07-21 | net: hns3: fix for not unmapping TX buffer correctly | Yunsheng Lin | 1 | -11/+3
When a big TX buffer is sent using multiple BDs, the driver maps the whole TX buffer, and unmaps it using the info in the desc_cb corresponding to each BD, but only the info in the desc_cb of the first BD is correct; the other desc_cb info is wrong, which causes a TX unmapping problem when the SMMU is on. Only set the mapping and freeing info in the desc_cb of the first BD to fix this problem, because the TX buffer only needs to be unmapped and freed once. Fixes: 1e8a7977d09f ("net: hns3: add handling for big TX fragment") Signed-off-by: Yunsheng Lin <[email protected]> Signed-off-by: Huazhong Tan <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | enetc: Add adaptive interrupt coalescing | Claudiu Manoil | 4 | -7/+73
Use the generic dynamic interrupt moderation (dim) framework to implement adaptive interrupt coalescing on Rx. With the per-packet interrupt scheme, a high interrupt rate has been noted for moderate traffic flows, leading to high CPU utilization. The 'dim' scheme implemented by the current patch addresses this issue, improving CPU utilization while using minimal coalescing time thresholds in order to preserve a good latency. On the Tx side, use an optimal time threshold value by default. This value has been optimized for Tx TCP streams at a rate of around 85kpps on a 1G link, at which rate half of the Tx ring size (128) gets filled in 1500 usecs. Scaling this down to 2.5G links yields the current value of 600 usecs, which is conservative and gives good enough results for 1G links too (see next).
Below are some measurement results before and after this patch (and related dependencies), for a system with 2 ARM Cortex-A72 CPUs @ 1.3GHz (32 KB L1 data cache), using 60-second-long netperf TCP stream tests @ 1Gbit link (maximum throughput):
1) 1 Rx TCP flow, both Rx and Tx processed by the same NAPI thread on the same CPU:
        CPU utilization      int rate (ints/sec)
Before: 50%-60% (over 50%)   92k
After:  13%-22%              3.5k-12k
Comment: Major CPU utilization improvement for a single Rx TCP flow (i.e. netperf -t TCP_MAERTS) on a single CPU. Usually settles under 16% for longer tests.
2) 4 Rx TCP flows + 4 Tx TCP flows (+ pings to check the latency):
        Total CPU utilization     Total int rate (ints/sec)
Before: ~80% (spikes to 90%)      ~100k
After:  60% (more steady)         ~4k
Comment: Important improvement for this load test, while the ping test outcome does not show any notable difference compared to before.
Signed-off-by: Claudiu Manoil <[email protected]> Signed-off-by: David S. Miller <[email protected]>
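A hedged sketch of how a driver typically plugs into the kernel's dim library for Rx, as described above (the ring structure, field names and the register-programming step are illustrative; only the dim calls are the real API):

    #include <linux/dim.h>

    struct example_rx_ring {
            struct dim dim;
            u64 packets;
            u64 bytes;
            u16 events;
    };

    /* called at the end of the Rx napi poll with the latest counters */
    static void example_rx_net_dim(struct example_rx_ring *ring)
    {
            struct dim_sample sample;

            dim_update_sample(ring->events, ring->packets, ring->bytes, &sample);
            net_dim(&ring->dim, sample);
    }

    /* worker scheduled by net_dim() when it wants a new coalescing profile */
    static void example_dim_work(struct work_struct *w)
    {
            struct dim *dim = container_of(w, struct dim, work);
            struct dim_cq_moder moder;

            moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
            /* program moder.usec / moder.pkts into the Rx coalescing registers here */

            dim->state = DIM_START_MEASURE;
    }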
2020-07-21 | enetc: Add interrupt coalescing support | Claudiu Manoil | 4 | -10/+132
Enable programming of the interrupt coalescing registers and allow manual configuration of the coalescing time thresholds via ethtool. Packet thresholds have been fixed to predetermined values as there's no point in making them run-time configurable, also anticipating the dynamic interrupt moderation (DIM) algorithm which uses fixed packet thresholds as well. If the interface is up when the operation mode of traffic interrupt events is changed by the user (i.e. switching from default per-packet interrupts to coalesced interrupts), the traffic needs to be paused in the process. This patch also prepares the ground for introducing DIM on Rx. Signed-off-by: Claudiu Manoil <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | enetc: Drop redundant ____cacheline_aligned_in_smp | Claudiu Manoil | 1 | -1/+1
'struct enetc_bdr' is already '____cacheline_aligned_in_smp'. Signed-off-by: Claudiu Manoil <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | enetc: Fix interrupt coalescing register naming | Claudiu Manoil | 3 | -7/+7
Interrupt coalescing registers naming in the current revision of the Ref Man (RM) is ICR, deprecating the ICIR name used in earlier (draft) versions of the RM. Signed-off-by: Claudiu Manoil <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | enetc: Factor out the traffic start/stop procedures | Claudiu Manoil | 1 | -25/+49
A reliable traffic pause (and reconfiguration) procedure is needed to be able to safely make h/w configuration changes during run-time, like changing the mode in which the interrupts are operating (i.e. with or without coalescing), as opposed to making on-the-fly register updates that may be subject to h/w or s/w concurrency issues. To this end, the code responsible for the run-time device configuration that starts and stops, respectively, the traffic flow through the device has been extracted from the enetc_open/_close procedures into the separate standalone enetc_start/_stop procedures. Traffic stop should be as graceful as possible; it lets the executing napi threads finish while the interrupts stay disabled. But since the napi thread will try to re-enable interrupts by clearing the device's unmask register, the enable_irq()/disable_irq() API has been used to avoid this potential concurrency issue and make the traffic pause procedure more reliable. Signed-off-by: Claudiu Manoil <[email protected]> Signed-off-by: David S. Miller <[email protected]>
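A hedged sketch of the kind of pause sequence described above (the private structure and loop bounds are illustrative; only disable_irq()/napi_synchronize()/napi_disable() are the real kernel API):

    #include <linux/interrupt.h>
    #include <linux/netdevice.h>

    #define EXAMPLE_NUM_VECTORS 2

    struct example_priv {
            int irq[EXAMPLE_NUM_VECTORS];
            struct napi_struct napi[EXAMPLE_NUM_VECTORS];
    };

    static void example_stop_traffic(struct example_priv *priv)
    {
            int i;

            /* keep the interrupts hard-disabled so an in-flight napi poll
             * cannot unmask them behind our back */
            for (i = 0; i < EXAMPLE_NUM_VECTORS; i++)
                    disable_irq(priv->irq[i]);

            /* let the executing napi threads finish, then park them */
            for (i = 0; i < EXAMPLE_NUM_VECTORS; i++) {
                    napi_synchronize(&priv->napi[i]);
                    napi_disable(&priv->napi[i]);
            }
    }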
2020-07-21 | enetc: Refine buffer descriptor ring sizes | Claudiu Manoil | 2 | -4/+5
It's time to differentiate between Rx and Tx ring sizes. Not only Tx rings are processed differently than Rx rings, but their default number also differs - i.e. up to 8 Tx rings per device (8 traffic classes) vs. 2 Rx rings (one per CPU). So let's set Tx rings sizes to half the size of the Rx rings for now, to be conservative. The default ring sizes were decreased as well (to the next lower power of 2), to reduce the memory footprint, buffering etc., since the measurements I've made so far show that the rings are very unlikely to get full. This change also anticipates the introduction of the dynamic interrupt moderation (dim) algorithm which operates on maximum packet thresholds of 256 packets for Rx and 128 packets for Tx. Signed-off-by: Claudiu Manoil <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-21 | net: ethernet: ravb: exit if re-initialization fails in tx timeout | Yoshihiro Shimoda | 1 | -2/+24
According to the report in [1], this driver can cause the following error in ravb_tx_timeout_work(): ravb e6800000.ethernet ethernet: failed to switch device to config mode This error means that the hardware could not change the state from "Operation" to "Configuration" while some tx and/or rx queues are operating. After that, ravb_config() in ravb_dmac_init() will fail, and then no descriptors will be allocated anymore, so that a NULL pointer dereference happens later in ravb_start_xmit(). To fix the issue, ravb_tx_timeout_work() should check the return values of ravb_stop_dma() and ravb_dmac_init(). If ravb_stop_dma() fails, ravb_tx_timeout_work() re-enables TX and RX and just exits. If ravb_dmac_init() fails, it just exits. [1] https://lore.kernel.org/linux-renesas-soc/[email protected]/ Reported-by: Dirk Behme <[email protected]> Signed-off-by: Yoshihiro Shimoda <[email protected]> Reviewed-by: Sergei Shtylyov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net: neterion: vxge: reduce stack usage in VXGE_COMPLETE_VPATH_TX | Bixuan Cui | 1 | -1/+1
Fix the warning: [-Werror=-Wframe-larger-than=]
drivers/net/ethernet/neterion/vxge/vxge-main.c: In function 'VXGE_COMPLETE_VPATH_TX.isra.37':
drivers/net/ethernet/neterion/vxge/vxge-main.c:119:1: warning: the frame size of 1056 bytes is larger than 1024 bytes
Dropping NR_SKB_COMPLETED to 16 is appropriate and won't have much impact on performance or functionality. Signed-off-by: Bixuan Cui <[email protected]> Signed-off-by: Stephen Hemminger <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net: ag71xx: add missed clk_disable_unprepare in error path of probe | Huang Guobin | 1 | -1/+2
The ag71xx_mdio_probe() function forgets to call clk_disable_unprepare() when of_reset_control_get_exclusive() fails. Add the missed call to fix it. Fixes: d51b6ce441d3 ("net: ethernet: add ag71xx driver") Reported-by: Hulk Robot <[email protected]> Signed-off-by: Huang Guobin <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net/fealnx: switch from 'pci_' to 'dma_' API | Christophe JAILLET | 1 | -38/+53
The wrappers in include/linux/pci-dma-compat.h should go away. The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested. When memory is allocated, GFP_KERNEL can be used because it is called from the probe function (i.e. 'fealnx_init_one()') and no lock is taken. @@ @@ - PCI_DMA_BIDIRECTIONAL + DMA_BIDIRECTIONAL @@ @@ - PCI_DMA_TODEVICE + DMA_TO_DEVICE @@ @@ - PCI_DMA_FROMDEVICE + DMA_FROM_DEVICE @@ @@ - PCI_DMA_NONE + DMA_NONE @@ expression e1, e2, e3; @@ - pci_alloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3; @@ - pci_zalloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_) @@ expression e1, e2, e3, e4; @@ - pci_free_consistent(e1, e2, e3, e4) + dma_free_coherent(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_single(e1, e2, e3, e4) + dma_map_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_single(e1, e2, e3, e4) + dma_unmap_single(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4, e5; @@ - pci_map_page(e1, e2, e3, e4, e5) + dma_map_page(&e1->dev, e2, e3, e4, e5) @@ expression e1, e2, e3, e4; @@ - pci_unmap_page(e1, e2, e3, e4) + dma_unmap_page(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_map_sg(e1, e2, e3, e4) + dma_map_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_unmap_sg(e1, e2, e3, e4) + dma_unmap_sg(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_cpu(e1, e2, e3, e4) + dma_sync_single_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_device(e1, e2, e3, e4) + dma_sync_single_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_cpu(e1, e2, e3, e4) + dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4) @@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_device(e1, e2, e3, e4) + dma_sync_sg_for_device(&e1->dev, e2, e3, e4) @@ expression e1, e2; @@ - pci_dma_mapping_error(e1, e2) + dma_mapping_error(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_dma_mask(e1, e2) + dma_set_mask(&e1->dev, e2) @@ expression e1, e2; @@ - pci_set_consistent_dma_mask(e1, e2) + dma_set_coherent_mask(&e1->dev, e2) Signed-off-by: Christophe JAILLET <[email protected]> Signed-off-by: David S. Miller <[email protected]>
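An illustrative before/after for two of the call patterns rewritten by the script (variable names are generic, not from fealnx.c):

    /* streaming mapping: before */
    addr = pci_map_single(pdev, skb->data, len, PCI_DMA_TODEVICE);
    /* streaming mapping: after */
    addr = dma_map_single(&pdev->dev, skb->data, len, DMA_TO_DEVICE);

    /* coherent allocation: before */
    ring = pci_alloc_consistent(pdev, size, &ring_dma);
    /* coherent allocation: after (GFP_KERNEL filled in by hand, as noted above,
     * since the call site runs in probe context) */
    ring = dma_alloc_coherent(&pdev->dev, size, &ring_dma, GFP_KERNEL);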
2020-07-20 | ionic: use mutex to protect queue operations | Shannon Nelson | 3 | -25/+17
The ionic_wait_on_bit_lock() was an open-coded mutex knock-off used only for protecting the queue reset operations, and there was no reason not to use the real thing. We can use the lock more correctly and better protect the queue stop and start operations from cross-threading. We can also remove a useless and expensive bit operation from the Rx path. This fixes a case found where the link_status_check from a link flap could run into an MTU change and cause a crash. Fixes: beead698b173 ("ionic: Add the basic NDO callbacks for netdev support") Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | ionic: keep rss hash after fw update | Shannon Nelson | 1 | -3/+2
Make sure the RSS hash key is kept across a fw update by not de-initing it when an update is happening. Fixes: c672412f6172 ("ionic: remove lifs on fw reset") Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | ionic: update filter id after replay | Shannon Nelson | 1 | -0/+24
When we replay the rx filters after a fw-upgrade we get new filter_id values from the FW, which we need to save and update in our local filter list. This allows us to delete the filters with the correct filter_id when we're done. Fixes: 7e4d47596b68 ("ionic: replay filters after fw upgrade") Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | ionic: fix up filter locks and debug msgs | Shannon Nelson | 2 | -10/+12
Add in a couple of forgotten spinlocks and fix up some of the debug messages around filter management. Fixes: c1e329ebec8d ("ionic: Add management of rx filters") Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | ionic: use offset for ethtool regs data | Shannon Nelson | 1 | -2/+5
Use an offset to write the second half of the regs data into the second half of the buffer instead of overwriting the first half. Fixes: 4d03e00a2140 ("ionic: Add initial ethtool support") Signed-off-by: Shannon Nelson <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net: atlantic: add hwmon getter for MAC temperature | Mark Starovoytov | 10 | -22/+215
This patch adds the possibility to obtain MAC temperature via hwmon. On A1 there are two separate temperature sensors. On A2 there's only one temperature sensor, which is used for reporting both MAC and PHY temperature. Signed-off-by: Mark Starovoytov <[email protected]> Signed-off-by: Igor Russkikh <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net: atlantic: A0 ntuple filters | Dmitry Bogdanov | 1 | -28/+88
This patch adds support for ntuple filters on A0. Signed-off-by: Dmitry Bogdanov <[email protected]> Signed-off-by: Mark Starovoytov <[email protected]> Signed-off-by: Igor Russkikh <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net: atlantic: use intermediate variable to improve readability a bit | Nikita Danilov | 1 | -11/+10
This patch syncs up hw_atl_a0.c with an out-of-tree driver, where an intermediate variable was introduced in a couple of functions to improve the code readability a bit. Signed-off-by: Nikita Danilov <[email protected]> Signed-off-by: Mark Starovoytov <[email protected]> Signed-off-by: Igor Russkikh <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net: atlantic: use U32_MAX in aq_hw_utils.c | Mark Starovoytov | 1 | -3/+2
This patch replaces magic constant ~0U usage with U32_MAX in aq_hw_utils.c Signed-off-by: Mark Starovoytov <[email protected]> Signed-off-by: Igor Russkikh <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net: atlantic: add support for 64-bit reads/writes | Pavel Belous | 6 | -8/+33
This patch adds support for 64-bit reads/writes where applicable, e.g. A2 supports them. Signed-off-by: Pavel Belous <[email protected]> Signed-off-by: Mark Starovoytov <[email protected]> Signed-off-by: Igor Russkikh <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net: atlantic: enable ipv6 support for TCP LSO and UDP GSO | Igor Russkikh | 2 | -1/+2
This patch enables ipv6 support for TCP LSO and UDP GSO. The code itself (aq_nic_map_skb) was ready for this after the udp gso feature, but the corresponding NETIF_F_TSO6 flag wasn't enabled. We have now tested both tcp and udp v6 GSO and are enabling them safely. Signed-off-by: Igor Russkikh <[email protected]> Signed-off-by: Mark Starovoytov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2020-07-20 | net: atlantic: PTP statistics | Pavel Belous | 3 | -19/+105
This patch adds PTP ring statistics. Before this, these were missing from the overall stats, making debugging and analysis harder. Signed-off-by: Pavel Belous <[email protected]> Signed-off-by: Mark Starovoytov <[email protected]> Signed-off-by: Igor Russkikh <[email protected]> Signed-off-by: David S. Miller <[email protected]>