|
Convert list_for_each() to list_for_each_entry() where
applicable. This simplifies the code.
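For reference, a minimal sketch of the kind of conversion this describes (struct and list names are illustrative, not the driver's actual code):

	/* Before: walk the list heads and look up each entry by hand. */
	struct list_head *pos;
	list_for_each(pos, &rule_list) {
		struct foo_rule *rule = list_entry(pos, struct foo_rule, node);
		pr_debug("rule %u\n", rule->id);
	}

	/* After: list_for_each_entry() hands back the containing struct. */
	struct foo_rule *rule;
	list_for_each_entry(rule, &rule_list, node)
		pr_debug("rule %u\n", rule->id);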
Reported-by: Hulk Robot <[email protected]>
Signed-off-by: Wang Hai <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Currently the rx page is reused to receive a future packet when
the stack releases the previous skb quickly. If the old page
cannot be reused, a new page is allocated and mapped, which
consumes a lot of CPU when the IOMMU is in strict mode,
especially when the application and irq/NAPI happen to run on
the same CPU.
So allocate a new frag to memcpy the data to avoid the costly
IOMMU unmapping/mapping operation, and add "frag_alloc_err"
and "frag_alloc" stats in "ethtool -S ethX" cmd.
The throughput improves by over 50% when running a single iperf
TCP thread with the IOMMU in strict mode and iperf sharing the
same CPU with irq/NAPI (rx_copybreak = 2048 and mtu = 1500).
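A rough sketch of the rx copybreak path this describes, using hypothetical ring/desc_cb field and stat names rather than the driver's exact code:

	/* Copy small frames into a freshly allocated page frag so the
	 * DMA-mapped rx page can be handed back to the ring without an
	 * IOMMU unmap/remap cycle.
	 */
	if (len <= ring->rx_copybreak) {
		void *frag = napi_alloc_frag(SKB_DATA_ALIGN(len));

		if (unlikely(!frag)) {
			ring->stats.frag_alloc_err++;
			goto reuse_page;	/* fall back to consuming the rx page */
		}

		dma_sync_single_for_cpu(ring->dev,
					desc_cb->dma + desc_cb->page_offset,
					len, DMA_FROM_DEVICE);
		memcpy(frag, desc_cb->buf + desc_cb->page_offset, len);
		ring->stats.frag_alloc++;
		/* build the skb around 'frag'; the rx page stays mapped */
	}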
Signed-off-by: Yunsheng Lin <[email protected]>
Signed-off-by: Guangbin Huang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Currently the rx page offset is only reset to zero when all of
the below conditions are satisfied:
1. the rx page is only owned by the driver.
2. the rx page is reusable.
3. the page offset that is about to be given to the stack has
   reached the end of the page.
If the page offset is beyond hns3_buf_size(), the buffer below
that offset in the page is usable as long as conditions 1 & 2 are
satisfied, so the page offset can be reset to zero instead of
being increased further. This way we may be able to always reuse
the first 4K buffer of a 64K page, which keeps the hot buffer
size as small as possible.
The above optimization is a side effect of refactoring the
rx page reuse handling in order to support rx copybreak.
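In pseudo-C, the reuse rule above looks roughly like this (the desc_cb fields and helper names are illustrative, not the driver's exact code):

	/* Only the driver holds references to the page and it is reusable. */
	if (cb->reuse_flag && page_count(cb->priv) == cb->pagecnt_bias) {
		if (cb->page_offset + truesize >= hns3_buf_size(ring))
			cb->page_offset = 0;		/* wrap back and reuse the start */
		else
			cb->page_offset += truesize;	/* keep walking the page */
	}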
Signed-off-by: Yunsheng Lin <[email protected]>
Signed-off-by: Guangbin Huang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Using the queue based tx buffer, it is also possible to allocate an
sgl buffer and use skb_to_sgvec() to convert the skb to a sgvec,
so that dma_map_sg() can be used to decrease the overhead of
IOMMU mapping and unmapping.
Firstly, it reduces the number of buffers. For example, a TCP skb
may have a 66-byte header and 3 fragments of 4328, 32768, and 28064
bytes. With this patch, dma_map_sg() will combine them into two
buffers: the 66-byte header and one 65160-byte fragment, by using
the IOMMU.
Secondly, it reduces the number of DMA mappings and unmappings. All
of the original 4 buffers are mapped only once rather than 4 times.
The throughput improves by over 10% when running a single iperf
TCP thread with the IOMMU in strict mode.
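The mapping path it describes is roughly the following sketch (error handling trimmed; 'sgt' stands in for whatever sgl storage the queue's tx spare buffer provides in the real driver):

	/* Convert the skb (linear header + frags) into a scatterlist and
	 * let dma_map_sg() coalesce IOVA-contiguous pieces via the IOMMU.
	 */
	int nents, mapped;

	sg_init_table(sgt->sgl, skb_shinfo(skb)->nr_frags + 1);
	nents = skb_to_sgvec(skb, sgt->sgl, 0, skb->len);
	if (nents < 0)
		return nents;

	mapped = dma_map_sg(dev, sgt->sgl, nents, DMA_TO_DEVICE);
	if (!mapped)
		return -ENOMEM;
	/* 'mapped' may be smaller than 'nents' when the IOMMU merges entries. */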
Suggested-by: Barry Song <[email protected]>
Signed-off-by: Yunsheng Lin <[email protected]>
Signed-off-by: Guangbin Huang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add support to query the tx spare buffer size from the configuration
file, and use this info for spare buffer initialization when
the module parameter 'tx_spare_buf_size' is not specified.
Signed-off-by: Huazhong Tan <[email protected]>
Signed-off-by: Guangbin Huang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
When the packet or frag size is small, it causes both a security and
a performance issue. As DMA can't map a sub-page buffer, some extra
kernel data is visible to devices. On the other hand, the overhead
of DMA map and unmap is huge when the IOMMU is on.
So add a queue based tx shared bounce buffer and memcpy the small
packet into it when the len of the transmitted skb is below
tx_copybreak. Add a tx_spare_buf_size module param to set the size
of the tx spare buffer, and add set/get_tunable to set or query
tx_copybreak.
The throughput improves from 30 Gbps to 90+ Gbps when running 16
netperf threads with a 32KB UDP message size with the IOMMU in
strict mode (tx_copybreak = 2000 and mtu = 1500).
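A simplified sketch of the transmit-side check, where tx_spare_alloc()/tx_spare_dma() are hypothetical placeholders for the real driver's spare-buffer bookkeeping:

	/* Small packets: memcpy into the pre-mapped per-queue bounce buffer
	 * so no per-skb DMA mapping (and no sub-page exposure) is needed.
	 */
	if (skb->len <= ring->tx_copybreak) {
		void *dst = tx_spare_alloc(ring, skb->len);	/* hypothetical helper */

		skb_copy_bits(skb, 0, dst, skb->len);
		dma_sync_single_for_device(ring->dev,
					   tx_spare_dma(ring, dst),	/* hypothetical */
					   skb->len, DMA_TO_DEVICE);
		/* fill one descriptor that points into the spare buffer */
	} else {
		/* regular path: dma_map_single()/dma_map_sg() the skb */
	}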
Suggested-by: Barry Song <[email protected]>
Signed-off-by: Yunsheng Lin <[email protected]>
Signed-off-by: Guangbin Huang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Factor out hns3_fill_desc() so that it can be reused by the
upcoming tx bounce buffer support.
Signed-off-by: Yunsheng Lin <[email protected]>
Signed-off-by: Guangbin Huang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
desc_cb is used to store mapping and freeing info for the
corresponding desc, which is used in the cleaning process.
More desc_cb types will be added when supporting the tx bounce
buffer, so change the desc_cb type to a bit-wise value in order
to reduce the desc_cb type checking operations in the data path.
Also move the desc_cb type definition to hns3_enet.h because it
is only used in hns3_enet.c, and declare a local variable desc_cb
in hns3_clear_desc() to reduce lines of code.
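The bit-wise type encoding amounts to something like the following (names and values are illustrative, not the actual hns3 defines):

	#define DESC_TYPE_UNKNOWN	0U
	#define DESC_TYPE_SKB		BIT(0)
	#define DESC_TYPE_PAGE		BIT(1)
	#define DESC_TYPE_BOUNCE_HEAD	BIT(2)
	#define DESC_TYPE_BOUNCE_ALL	BIT(3)
	#define DESC_TYPE_BOUNCE	(DESC_TYPE_BOUNCE_HEAD | DESC_TYPE_BOUNCE_ALL)

	/* one mask test replaces a chain of equality checks in the clean path */
	if (desc_cb->type & DESC_TYPE_BOUNCE)
		return;	/* data lives in the shared spare buffer, nothing to unmap */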
Signed-off-by: Yunsheng Lin <[email protected]>
Signed-off-by: Guangbin Huang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The previous commit didn't fix the bug properly. By mistake, it replaced
the pointer of the next skb in the descriptor ring instead of the current
one. As a result, two descriptors were assigned the same SKB. The error
is seen during an iperf test when skb_put tries to insert a second packet
and exceeds the available buffer.
Fixes: c7718ee96dbc ("net: lantiq: fix memory corruption in RX ring ")
Signed-off-by: Aleksander Jan Bajkowski <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
As already done for mvneta and mvpp2, enable skb recycling for the
TI ethernet drivers; a sketch of the enabling hook follows the
numbers below.
ti driver on net-next:
----------------------
[perf top]
47.15% [kernel] [k] _raw_spin_unlock_irqrestore
11.77% [kernel] [k] __cpdma_chan_free
3.16% [kernel] [k] ___bpf_prog_run
2.52% [kernel] [k] cpsw_rx_vlan_encap
2.34% [kernel] [k] __netif_receive_skb_core
2.27% [kernel] [k] free_unref_page
2.26% [kernel] [k] kmem_cache_free
2.24% [kernel] [k] kmem_cache_alloc
1.69% [kernel] [k] __softirqentry_text_start
1.61% [kernel] [k] cpsw_rx_handler
1.19% [kernel] [k] page_pool_release_page
1.19% [kernel] [k] clear_bits_ll
1.15% [kernel] [k] page_frag_free
1.06% [kernel] [k] __dma_page_dev_to_cpu
0.99% [kernel] [k] memset
0.94% [kernel] [k] __alloc_pages_bulk
0.92% [kernel] [k] kfree_skb
0.85% [kernel] [k] packet_rcv
0.78% [kernel] [k] page_address
0.75% [kernel] [k] v7_dma_inv_range
0.71% [kernel] [k] __lock_text_start
[iperf3 tcp]
[ 5] 0.00-10.00 sec 873 MBytes 732 Mbits/sec 0 sender
[ 5] 0.00-10.01 sec 866 MBytes 726 Mbits/sec receiver
ti + skb recycling:
-------------------
[perf top]
40.58% [kernel] [k] _raw_spin_unlock_irqrestore
16.18% [kernel] [k] __softirqentry_text_start
10.33% [kernel] [k] __cpdma_chan_free
2.62% [kernel] [k] ___bpf_prog_run
2.05% [kernel] [k] cpsw_rx_vlan_encap
2.00% [kernel] [k] kmem_cache_alloc
1.86% [kernel] [k] __netif_receive_skb_core
1.80% [kernel] [k] kmem_cache_free
1.63% [kernel] [k] cpsw_rx_handler
1.12% [kernel] [k] cpsw_rx_mq_poll
1.11% [kernel] [k] page_pool_put_page
1.04% [kernel] [k] _raw_spin_unlock
0.97% [kernel] [k] clear_bits_ll
0.90% [kernel] [k] packet_rcv
0.88% [kernel] [k] __dma_page_dev_to_cpu
0.85% [kernel] [k] kfree_skb
0.80% [kernel] [k] memset
0.71% [kernel] [k] __lock_text_start
0.66% [kernel] [k] v7_dma_inv_range
0.64% [kernel] [k] gen_pool_free_owner
[iperf3 tcp]
[ 5] 0.00-10.00 sec 884 MBytes 742 Mbits/sec 0 sender
[ 5] 0.00-10.01 sec 878 MBytes 735 Mbits/sec receiver
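The enabling hook itself is small. Assuming the RX buffers already come from a page_pool (as in this driver), it is roughly the following; note that the skb_mark_for_recycle() signature differed slightly at the time this patch landed:

	skb = build_skb(page_address(page), PAGE_SIZE);
	if (!skb)
		return;
	/* Tag the skb so its page_pool pages go back to the pool on free,
	 * instead of returning to the page allocator.
	 */
	skb_mark_for_recycle(skb);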
Tested-by: Grygorii Strashko <[email protected]>
Reviewed-by: Grygorii Strashko <[email protected]>
Signed-off-by: Lorenzo Bianconi <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
There is a spelling mistake in a dev_err message. Fix it.
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2021-06-14
1) Trivial lag refactoring in preparation for the upcoming Single FDB lag feature
- First 3 patches
2) Scalable IRQ distribution for Sub-functions
A subfunction (SF) is a lightweight function that has a parent PCI
function (PF) on which it is deployed.
Currently, mlx5 subfunctions share the IRQs (MSI-X) with their
parent PCI function.
Before this series the PF allocated enough IRQs to cover
all the cores in a system, and newly created SFs re-used all the
IRQs that the PF had allocated for itself.
Hence, the more SFs are created, the more EQs there are per IRQ.
Therefore, whenever we handle an interrupt, we need to poll all SF
EQs in addition to the PF EQs, instead of only the PF EQs when there
are no SFs on the system. This has a hard impact on the performance
of the SFs and the PF.
For example, on a machine with:
Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz with 56 cores.
PCI Express 3 with BW of 126 Gb/s.
ConnectX-5 Ex; EDR IB (100Gb/s) and 100GbE; dual-port QSFP28; PCIe4.0 x16.
test case: iperf TX BW, single CPU, affinity of app and IRQ are the same.
PF only: no SFs on the system, 56 IRQs.
SF (before): 250 SFs sharing the same 56 IRQs.
SF (now): 250 SFs + 255 available IRQs for the NIC (please see IRQ spread scheme below).
             application  SF-IRQ   channel   BW(Gb/sec)   interrupts/sec
             iperf TX              affinity
PF only      cpu={0}      cpu={0}  cpu={0}   79           8200
SF (before)  cpu={0}      cpu={0}  cpu={0}   51.3 (-35%)  9500
SF (now)     cpu={0}      cpu={0}  cpu={0}   78 (-2%)     8200
command:
$ taskset -c 0 iperf -c 11.1.1.1 -P 3 -i 6 -t 30 | grep SUM
The difference between the SF examples is that before this series we
allocated num_cpus (56) IRQs, all of them shared among the PF and the
SFs. After this series, we allocate 255 IRQs and spread the SFs among
them. This has significantly decreased the load on each IRQ, and the
number of EQs per IRQ is down by 95% (251->11).
The solution proposed in this patchset is to have a dedicated IRQ
pool for SFs to use. The pool will allocate a large number of IRQs
for SFs to grab from, in order to minimize IRQ sharing between the
different SFs.
IRQs will not be requested from the OS until they are first requested
by an SF consumer, and will eventually be released when the last SF
consumer releases them.
For the detailed IRQ spread and allocation scheme please see last patch:
("net/mlx5: Round-Robin EQs over IRQs")
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
Added a police action for ingress TC flower hardware offload. With
this, rate limiting can be done per flow. Since rate limiting is
tied to RQs in hardware, the number of TC flower filters with a
police action is limited to the number of receive queues of the
interface. Both bps and pps modes are supported.
Examples to rate limit a flow:
$ ethtool -K eth0 hw-tc-offload on
$ tc qdisc add dev eth0 ingress
$ tc filter add dev eth0 parent ffff: protocol ip \
flower ip_proto udp dst_port 80 action \
police rate 100Mbit burst 32Kbit
$ tc filter add dev eth0 parent ffff: \
protocol ip flower dst_mac 5e:b2:34:ee:29:49 \
action police pkts_rate 5000 pkts_burst 2048
Signed-off-by: Subbaraya Sundeep <[email protected]>
Signed-off-by: Sunil Kovvuri Goutham <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This patch converts netdev_err messages in the tc code to
NL_SET_ERR_MSG_MOD. Since NL_SET_ERR_MSG_MOD does not support
format specifiers yet, only the netdev_err messages that are
plain strings are converted.
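The conversion is mechanical; a representative (made-up) example:

	/* before: only visible in the kernel log */
	netdev_err(nic->netdev, "Only one ingress MATCHALL ratelimiter can be offloaded");

	/* after: reported back to the user via the netlink extended ack */
	NL_SET_ERR_MSG_MOD(extack, "Only one ingress MATCHALL ratelimiter can be offloaded");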
Signed-off-by: Subbaraya Sundeep <[email protected]>
Signed-off-by: Sunil Kovvuri Goutham <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add TC_MATCHALL ingress ratelimiting offload support with POLICE
action for entire traffic coming into the interface.
E.g., to rate limit ingress traffic to 100Mbps:
$ ethtool -K eth0 hw-tc-offload on
$ tc qdisc add dev eth0 clsact
$ tc filter add dev eth0 ingress matchall skip_sw \
action police rate 100Mbit burst 32Kbit
To support this, a leaf-level bandwidth profile is allocated and all
RQ contexts used by this interface are updated to point to it.
The leaf-level bandwidth profile is then configured with the
user-specified rate and burst sizes.
Co-developed-by: Subbaraya Sundeep <[email protected]>
Signed-off-by: Subbaraya Sundeep <[email protected]>
Signed-off-by: Sunil Goutham <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Added support for dumping current resource status of bandwidth
profiles and contexts of allocated profiles via debugfs.
Signed-off-by: Sunil Goutham <[email protected]>
Signed-off-by: Subbaraya Sundeep <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
CN10K silicon supports hierarchical ingress packet rate limiting.
There are 3 levels of profilers supported: leaf, mid and top.
Rate limiting is done after the packet forwarding decision is taken
and a NIXLF's RQ is identified to DMA the packet. The RQ's context
points to a leaf bandwidth profile which can be configured
to achieve the desired rate limit.
This patch adds logic for management of these bandwidth profiles,
i.e. profile alloc, free, context update etc.
Signed-off-by: Sunil Goutham <[email protected]>
Signed-off-by: Subbaraya Sundeep <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
On RX an SKB is allocated and the received buffer is copied into it.
But on some architectures, memcpy() needs the source and destination
buffers to have the same alignment to be efficient.
This is not our case, because the SKB data pointer is misaligned by
two bytes to compensate for the Ethernet header.
Align the RX buffer the same way as the SKB one, so the copy is faster.
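Conceptually (illustrative, not the driver's exact code), the point is that both operands of the copy end up with the same two-byte offset:

	/* The skb data pointer is offset by NET_IP_ALIGN (2 on most arches)
	 * so the IP header lands on a 4-byte boundary.
	 */
	skb = netdev_alloc_skb(ndev, len + NET_IP_ALIGN);
	if (!skb)
		return;
	skb_reserve(skb, NET_IP_ALIGN);

	/* With the hardware RX buffer offset by the same 2 bytes, both
	 * memcpy() operands are equally aligned and the copy runs
	 * word-at-a-time.
	 */
	skb_put_data(skb, rx_buf + NET_IP_ALIGN, len);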
An iperf3 RX test gives a decent improvement on a RISC-V machine:
before:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 733 MBytes 615 Mbits/sec 88 sender
[ 5] 0.00-10.01 sec 730 MBytes 612 Mbits/sec receiver
after:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.10 GBytes 942 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 1.09 GBytes 940 Mbits/sec receiver
And the memcpy() overhead during the RX drops dramatically.
before:
Overhead Shared O Symbol
43.35% [kernel] [k] memcpy
33.77% [kernel] [k] __asm_copy_to_user
3.64% [kernel] [k] sifive_l2_flush64_range
after:
Overhead Shared O Symbol
45.40% [kernel] [k] __asm_copy_to_user
28.09% [kernel] [k] memcpy
4.27% [kernel] [k] sifive_l2_flush64_range
Signed-off-by: Matteo Croce <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Whenever users provide an affinity for an EQ creation request, map the
EQ to a matching IRQ.
A matching IRQ is an IRQ with the same affinity and type
(completion/control) as the EQ being created.
This mapping is done in an aggressive dedicated IRQ allocation scheme,
described below.
First, we check whether there is a matching IRQ whose min threshold
is not exhausted.
- min_eqs_threshold = 3 for control EQ.
- min_eqs_threshold = 1 for completion EQ.
In case no matching IRQ was found, try to request a new IRQ.
In case we can't request a new IRQ, reuse the least-used matching IRQ.
Signed-off-by: Shay Drory <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Reviewed-by: Tariq Toukan <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
Move mlx5_sf_max_functions() and friends from the private sf/sf.h
to the public lib/sf.h. This is done in order to have one-direction
include paths.
Signed-off-by: Shay Drory <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
SFs (Sub Functions) currently use IRQs from the global IRQ table of
their parent Physical Function. In order to scale better, we need to
allocate more IRQs and share them between different SFs.
The driver will maintain 3 separate IRQ pools:
1. A pool that serves the PF consumers (the PF's netdev and rdma
stacks), similar to what the driver had before this patch, i.e. this
pool shares IRQs between rdma and netdev and keeps the IRQ indexes
and allocation order. The latter is important for the PF netdev rmap
(aRFS).
2. A pool of control IRQs for SFs. The size of this pool is the number
of SFs that can be created divided by SFS_PER_IRQ. This pool serves
the control path EQs of the SFs.
3. A pool of completion data path IRQs for SF transport queues. The
size of this pool is:
num_irqs_allocated - pf_pool_size - sf_ctrl_pool_size.
This pool serves the netdev and rdma stacks. Moreover, rmap is not
supported on SFs.
The sharing methodology of the SF pools is explained in the next patch.
Important note: rmap is not supported on SFs because rmap mapping cannot
function correctly for IRQs that are shared between different core/netdev
RX rings.
Signed-off-by: Shay Drory <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Reviewed-by: Tariq Toukan <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
Store newly created IRQs in an xarray DB instead of a static array,
so that only IRQs which are actually in use are stored.
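The storage change uses the stock kernel xarray API; a minimal sketch (the variable names are illustrative):

	struct xarray irqs;
	int err;

	xa_init(&irqs);

	/* store only IRQs that are actually created, keyed by vector index */
	err = xa_err(xa_store(&irqs, vecidx, irq, GFP_KERNEL));
	if (err)
		return err;

	irq = xa_load(&irqs, vecidx);	/* lookup */
	xa_erase(&irqs, vecidx);	/* release */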
Signed-off-by: Shay Drory <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
IRQs are being simplified in order to ease their sharing, and any
feature-specific object is moved to the upper layer.
Hence, move the rmap object into eq_table.
Signed-off-by: Shay Drory <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
Extend mlx5_irq_request so that IRQs will be requested upon EQ creation,
and not on driver boot.
Signed-off-by: Shay Drory <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
In the next patches, IRQs will be requested on demand instead of
statically on driver boot.
Also, rmap is currently managed by the IRQ layer; rmap management
will move out of the IRQ layer in future patches.
Therefore, remove an IRQ from the rmap when that IRQ is destroyed,
instead of removing all the IRQs from the rmap only when the
irq_table is destroyed.
Signed-off-by: Shay Drory <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
The eq.[c|h] files are under a major rewrite, so use this opportunity
to update their copyright and license texts.
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
The users of EQ are running their code on different CPUs and with
various affinity patterns. Move the cpumask setting close to their
actual usage.
Signed-off-by: Leon Romanovsky <[email protected]>
Reviewed-by: Shay Drory <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
Introduce a new API that allows IRQ users to hold a pointer to an
mlx5_irq.
At the end of this series, IRQs will be allocated on demand, so this
will allow us to properly manage and use IRQs.
Signed-off-by: Shay Drory <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
Shared IRQs are consumed by multiple EQ users, and in order to
properly initialize and later release such IRQs, add kref counting
to the IRQ structure.
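Refcounting a shared IRQ follows the standard kref pattern; a minimal sketch with an illustrative struct, not the mlx5 one:

	struct shared_irq {
		struct kref kref;
		int irqn;
	};

	static void shared_irq_release(struct kref *kref)
	{
		struct shared_irq *irq = container_of(kref, struct shared_irq, kref);

		free_irq(irq->irqn, irq);
		kfree(irq);
	}

	/* first EQ user */
	kref_init(&irq->kref);
	/* each additional EQ sharing this IRQ */
	kref_get(&irq->kref);
	/* every user on teardown; the last put frees the IRQ */
	kref_put(&irq->kref, shared_irq_release);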
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
Lag is used to combine two PCI functions of the same HCA into a single
logical unit. This is core functionality and as such should be managed
by the core driver. Currently this isn't the case: while we store the
lag software structure inside the lower device, its lifetime (creation /
destruction) is dictated by the mlx5e part. Change the ownership model
so that lag is tied to the lifetime of the lower level driver instead
of the mlx5e part.
Signed-off-by: Mark Bloch <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
If MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV is set it means the device is going
down and mlx5_rescan_drivers_locked() shouldn't be called.
With this patch and the previous one in the series, unbinding a PCI
function when its netdev is part of a bond works and leaves the system in a
working state.
Signed-off-by: Mark Bloch <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
When a net device is removed (which can happen if the PCI function is
unbound from the system), it's not enough to destroy the hardware lag;
the system should recreate the original devices that were present
before the lag. As the same flow is done when a net device is removed
from the bond, refactor and reuse the code.
Signed-off-by: Mark Bloch <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
|
|
Zhou Yanjie says:
====================
Add Ingenic SoCs MAC support.
v2->v3:
1. Add "ingenic,mac.yaml" for Ingenic SoCs.
2. Change tx clk delay and rx clk delay from hardware values to ps.
3. Return -EINVAL when an unsupported value is encountered when
parsing the binding.
4. Simplify the code of the RGMII part of the X2000 SoC according to
Andrew Lunn's suggestion.
5. Follow the example of "dwmac-mediatek.c" to improve the code
that handles delays, according to Andrew Lunn's suggestion.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
Add Ingenic SoC MAC glue layer support to the stmmac device driver.
This driver is used for the MAC ethernet controller found in the
JZ4775 SoC, the X1000 SoC, the X1600 SoC, the X1830 SoC, and the
X2000 SoC.
Signed-off-by: 周琰杰 (Zhou Yanjie) <[email protected]>
Reviewed-by: Andrew Lunn <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add traps that have init_action set to DROP.
Add the 'trap_drop_counter_get' devlink API callback implementation,
which is used to get the number of packets that have been dropped by
the HW (traps with action 'DROP').
Add a new FW command, CPU_CODE_COUNTERS_GET.
Signed-off-by: Oleksandr Mazur <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add devlink traps registration (with corresponding groups) for
all the traffic types that the driver traps to the CPU.
In prestera_rxtx, report each packet trapped to the CPU (RX) to
prestera_devlink.
Signed-off-by: Oleksandr Mazur <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The 3rd argument is u32 in the function definition, while it is
__be32 in the function declaration.
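A simplified, made-up illustration of the kind of mismatch being fixed (sparse flags the endianness difference):

	/* header (declaration): third argument typed as __be32 */
	void foo_handle_event(struct foo_dev *dev, int id, __be32 param);

	/* source (definition): third argument typed as u32 */
	void foo_handle_event(struct foo_dev *dev, int id, u32 param)
	{
		dev->last_param = param;
	}

	/* Fix: use one type in both places, converting explicitly with
	 * be32_to_cpu()/cpu_to_be32() where needed.
	 */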
Signed-off-by: Lijun Pan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Some micrel devices share the same PHY register defines. This patch
moves them to one common header so other drivers can reuse them.
And reuse generic MII_* defines where possible.
Signed-off-by: Michael Grzeschik <[email protected]>
Signed-off-by: Oleksij Rempel <[email protected]>
Reviewed-by: Vladimir Oltean <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
A recent change tidied up some conditional code, avoiding the use of
some #ifdefs. Unfortunately, if CONFIG_IPV6 was not enabled, it
meant that two functions were referenced but never defined.
The easiest fix is to just define stubs for these functions if
CONFIG_IPV6 is not defined. This will soon be simplified further
by some other development in the works...
Reported-by: kernel test robot <[email protected]>
Fixes: 75db5b07f8c39 ("net: qualcomm: rmnet: eliminate some ifdefs")
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The current MCAM allocation scheme allocates a single lot of
MCAM entries for ntuple filters, unicast filters and VF VLAN
rules. This patch attempts to clean up this logic by segregating
MCAM rule allocation and management for ntuple rules versus unicast
and VF VLAN rules. This segregation will allow most of the logic
to be reused to support ntuple filters for VF devices.
Also added debug messages for MCAM entry allocation failures.
Signed-off-by: Sunil Goutham <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The TID returned during successful filter creation is relative to
the region in which the filter is created. Using it directly always
returns Hi Prio/Normal filter region's entry for the first couple of
entries, even though the rule is actually inserted in Hash region.
Fix by analyzing in which region the filter has been inserted and
save the absolute TID to be used for lookup later.
Fixes: db43b30cd89c ("cxgb4: add ethtool n-tuple filter deletion")
Signed-off-by: Rahul Lakkireddy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
If an error occurs after a 'pci_enable_pcie_error_reporting()' call, it
must be undone by a corresponding 'pci_disable_pcie_error_reporting()'
call, as already done in the remove function.
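The unwind follows the usual goto ladder; schematically (not the actual netxen code, names are placeholders):

	static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
	{
		int err;

		err = pci_enable_device(pdev);
		if (err)
			return err;

		pci_enable_pcie_error_reporting(pdev);

		err = pci_request_regions(pdev, "foo");
		if (err)
			goto err_disable_aer;

		return 0;

	err_disable_aer:
		/* any failure after enabling AER must undo it */
		pci_disable_pcie_error_reporting(pdev);
		pci_disable_device(pdev);
		return err;
	}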
Fixes: e87ad5539343 ("netxen: support pci error handlers")
Signed-off-by: Christophe JAILLET <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
If an error occurs after a 'pci_enable_pcie_error_reporting()' call, it
must be undone by a corresponding 'pci_disable_pcie_error_reporting()'
call, as already done in the remove function.
Fixes: 451724c821c1 ("qlcnic: aer support")
Signed-off-by: Christophe JAILLET <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The purpose of the loop using u64_stats_fetch_*_irq() is to ensure
statistics on a given CPU are collected atomically. If one of the
statistics values gets updated within the begin/retry window, the
loop will run again.
Currently the statistics totals are updated inside that window.
This means that if the loop ever retries, the statistics for the
CPU will be counted more than once.
Fix this by taking a snapshot of a CPU's statistics inside the
protected window, and then updating the counters with the snapshot
values after exiting the loop.
(Also add a newline at the end of this file...)
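The fixed pattern is the standard per-CPU stats loop; a generic sketch with illustrative struct names:

	/* Take a snapshot inside the retry window; add it to the totals only
	 * after the window closes, so a retry cannot double-count.
	 */
	u64 rx_pkts, rx_bytes;
	unsigned int start;

	do {
		start = u64_stats_fetch_begin_irq(&pcpu->syncp);
		rx_pkts  = pcpu->stats.rx_pkts;
		rx_bytes = pcpu->stats.rx_bytes;
	} while (u64_stats_fetch_retry_irq(&pcpu->syncp, start));

	total->rx_pkts  += rx_pkts;
	total->rx_bytes += rx_bytes;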
Fixes: 192c4b5d48f2a ("net: qualcomm: rmnet: Add support for 64 bit stats")
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
We need the fixes in here as well.
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
We don't support any extension headers for IPv6 packets. Extension
headers therefore contribute 0 bytes to the payload length. As a
result we can just use the IPv6 payload length as the length used to
compute the pseudo header checksum for both UDP and TCP messages.
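With no extension headers, the pseudo header checksum can be taken straight from the IPv6 header fields; roughly (illustrative, not the exact rmnet code):

	/* payload_len already equals the TCP/UDP length when there are no
	 * extension headers, so it feeds the pseudo header sum directly.
	 */
	__sum16 pseudo = csum_ipv6_magic(&ip6h->saddr, &ip6h->daddr,
					 ntohs(ip6h->payload_len),
					 ip6h->nexthdr, 0);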
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
We compare a payload checksum with a pseudo checksum value for
equality in rmnet_map_ipv4_dl_csum_trailer(). Both of those values
are computed with a unary NOT (~) operation. The result of the
comparison is the same if we omit that NOT for both values.
Remove these operations in rmnet_map_ipv6_dl_csum_trailer() also.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The csum_value field in the rmnet_map_dl_csum_trailer structure is a
"real" Internet checksum. It is a 16 bit value, in big endian format,
which represents an inverted ones' complement sum over pairs of bytes.
Make that clear by changing its type to __sum16.
This makes a typecast in rmnet_map_ipv4_dl_csum_trailer() and
another in rmnet_map_ipv6_dl_csum_trailer() unnecessary.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The previous patch makes rmnet_map_ipv4_dl_csum_trailer() return
early with an error if it is determined that the computed checksum
for the IP payload does not match what was expected.
If the computed checksum *does* match the expected value, the IP
payload (i.e., the transport message), can be considered good.
There is no need to do any further processing of the message.
This means a big block of code is unnecessary for validating the
transport checksum value, and can be removed.
Make comparable changes in rmnet_map_ipv6_dl_csum_trailer().
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
In rmnet_map_ipv4_dl_csum_trailer(), if the sum of the trailer
checksum and the pseudo checksum is non-zero, checksum validation
has failed. We can return an error as soon as we know that.
We can do the same thing in rmnet_map_ipv6_dl_csum_trailer().
Add some comments that explain where we're headed.
Signed-off-by: Alex Elder <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|