path: root/drivers/net/ethernet/intel
Age | Commit message | Author | Files | Lines
2022-02-09 | Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | Jakub Kicinski | 10 | -146/+308
Daniel Borkmann says:

====================
pull-request: bpf-next 2022-02-09

We've added 126 non-merge commits during the last 16 day(s) which contain a total of 201 files changed, 4049 insertions(+), 2215 deletions(-).

The main changes are:

1) Add custom BPF allocator for JITs that pack multiple programs into a huge page to reduce iTLB pressure, from Song Liu.
2) Add __user tagging support in vmlinux BTF and utilize it from BPF verifier when generating loads, from Yonghong Song.
3) Add per-socket fast path check guarding from cgroup/BPF overhead when used by only some sockets, from Pavel Begunkov.
4) Continued libbpf deprecation work of APIs/features and removal of their usage from samples, selftests, libbpf & bpftool, from Andrii Nakryiko and various others.
5) Improve BPF instruction set documentation by adding byte swap instructions and cleaning up load/store section, from Christoph Hellwig.
6) Switch BPF preload infra to light skeleton and remove libbpf dependency from it, from Alexei Starovoitov.
7) Fix architecture-agnostic macros in libbpf for accessing syscall arguments from BPF progs for non-x86 architectures, from Ilya Leoshkevich.
8) Rework port members in struct bpf_sk_lookup and struct bpf_sock to be of 16-bit field with anonymous zero padding, from Jakub Sitnicki.
9) Add new bpf_copy_from_user_task() helper to read memory from a different task than current. Add ability to create sleepable BPF iterator progs, from Kenny Yu.
10) Implement XSK batching for ice's zero-copy driver used by AF_XDP and utilize TX batching API from XSK buffer pool, from Maciej Fijalkowski.
11) Generate temporary netns names for BPF selftests to avoid naming collisions, from Hangbin Liu.
12) Implement bpf_core_types_are_compat() with limited recursion for in-kernel usage, from Matteo Croce.
13) Simplify pahole version detection and finally enable CONFIG_DEBUG_INFO_DWARF5 to be selected with CONFIG_DEBUG_INFO_BTF, from Nathan Chancellor.
14) Misc minor fixes to libbpf and selftests from various folks.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (126 commits)
  selftests/bpf: Cover 4-byte load from remote_port in bpf_sk_lookup
  bpf: Make remote_port field in struct bpf_sk_lookup 16-bit wide
  libbpf: Fix compilation warning due to mismatched printf format
  selftests/bpf: Test BPF_KPROBE_SYSCALL macro
  libbpf: Add BPF_KPROBE_SYSCALL macro
  libbpf: Fix accessing the first syscall argument on s390
  libbpf: Fix accessing the first syscall argument on arm64
  libbpf: Allow overriding PT_REGS_PARM1{_CORE}_SYSCALL
  selftests/bpf: Skip test_bpf_syscall_macro's syscall_arg1 on arm64 and s390
  libbpf: Fix accessing syscall arguments on riscv
  libbpf: Fix riscv register names
  libbpf: Fix accessing syscall arguments on powerpc
  selftests/bpf: Use PT_REGS_SYSCALL_REGS in bpf_syscall_macro
  libbpf: Add PT_REGS_SYSCALL_REGS macro
  selftests/bpf: Fix an endianness issue in bpf_syscall_macro test
  bpf: Fix bpf_prog_pack build HPAGE_PMD_SIZE
  bpf: Fix leftover header->pages in sparc and powerpc code.
  libbpf: Fix signedness bug in btf_dump_array_data()
  selftests/bpf: Do not export subtest as standalone test
  bpf, x86_64: Fail gracefully on bpf_jit_binary_pack_finalize failures
  ...
====================

Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2022-02-09 | ice: Add ability for PF admin to enable VF VLAN pruning | Brett Creeley | 4 | -2/+40
VFs by default are able to see all tagged traffic regardless of trust and VLAN filters. Based on legacy devices (i.e. ixgbe, i40e), customers expect VFs to receive all VLAN tagged traffic with a matching destination MAC. Add an ethtool private flag 'vf-vlan-pruning' and set the default to off so VFs will receive all VLAN traffic directed towards them. When the flag is turned on, a VF will only be able to receive untagged traffic or traffic with VLAN tags it has created interfaces for. Also, the flag cannot be changed while any VFs are allocated; this was done to simplify the implementation, so if this flag is needed, the PF admin must enable it before allocating VFs. If the user tries to enable the flag while VFs are active, print an unsupported message that includes the vf-vlan-pruning flag; in case multiple flags were specified, this makes it clear to the user which flag failed. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
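A minimal sketch of the policy described above, assuming illustrative names (example_pf, EXAMPLE_FLAG_VF_VLAN_PRUNING) rather than the driver's real identifiers: the private-flag change is rejected outright while any VFs are allocated.

#include <linux/bitops.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/types.h>

#define EXAMPLE_FLAG_VF_VLAN_PRUNING	0

struct example_pf {
	struct device *dev;
	unsigned long flags[1];		/* private-flag bitmap */
	u16 num_alloc_vfs;
};

static int example_set_vf_vlan_pruning(struct example_pf *pf, bool ena)
{
	/* Changing the flag with VFs allocated is not supported. */
	if (pf->num_alloc_vfs) {
		dev_err(pf->dev,
			"vf-vlan-pruning: cannot change while VFs are allocated\n");
		return -EOPNOTSUPP;
	}

	if (ena)
		set_bit(EXAMPLE_FLAG_VF_VLAN_PRUNING, pf->flags);
	else
		clear_bit(EXAMPLE_FLAG_VF_VLAN_PRUNING, pf->flags);

	return 0;
}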
2022-02-09 | ice: Add support for 802.1ad port VLANs VF | Brett Creeley | 1 | -7/+44
Currently there is only support for 802.1Q port VLANs on SR-IOV VFs. Add support to also allow 802.1ad port VLANs when double VLAN mode is enabled. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Konrad Jankowski <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-09 | ice: Advertise 802.1ad VLAN filtering and offloads for PF netdev | Brett Creeley | 2 | -49/+238
In order for the driver to support 802.1ad VLAN filtering and offloads, it needs to advertise those VLAN features and also support modifying those VLAN features, so make the necessary changes to ice_set_netdev_features(). By default, enable CTAG insertion/stripping and CTAG filtering for both Single and Double VLAN Modes (SVM/DVM). Also, in DVM, enable STAG filtering by default. This is done by setting the feature bits in netdev->features. Also, in DVM, support toggling of STAG insertion/stripping, but don't enable them by default. This is done by setting the feature bits in netdev->hw_features. Since 802.1ad VLAN filtering and offloads are only supported in DVM, make sure they are not enabled by default and that they cannot be enabled during runtime, when the device is in SVM. Add an implementation for the ndo_fix_features() callback. This is needed since the hardware cannot support multiple VLAN ethertypes for VLAN insertion/stripping simultaneously and all supported VLAN filtering must either be enabled or disabled together. Disable inner VLAN stripping by default when DVM is enabled. If a VSI supports stripping the inner VLAN in DVM, then it will have to configure that during runtime. For example if a VF is configured in a port VLAN while DVM is enabled it will be allowed to offload inner VLANs. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
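The mutual-exclusion rule mentioned above maps naturally onto ndo_fix_features(). A hedged sketch, not the driver's actual callback; the warning text and the choice to keep the 802.1Q bit are illustrative assumptions:

#include <linux/netdevice.h>

static netdev_features_t example_fix_features(struct net_device *netdev,
					      netdev_features_t features)
{
	/* The hardware offloads only one VLAN ethertype at a time, so 802.1Q
	 * (CTAG) and 802.1ad (STAG) stripping cannot both be active.
	 */
	if ((features & NETIF_F_HW_VLAN_CTAG_RX) &&
	    (features & NETIF_F_HW_VLAN_STAG_RX)) {
		netdev_warn(netdev,
			    "802.1Q and 802.1ad VLAN stripping cannot both be enabled; keeping 802.1Q\n");
		features &= ~NETIF_F_HW_VLAN_STAG_RX;
	}

	return features;
}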
2022-02-09 | ice: Support configuring the device to Double VLAN Mode | Brett Creeley | 17 | -59/+996
In order to support configuring the device in Double VLAN Mode (DVM), the DDP and FW have to support DVM. If both support DVM, the PF that downloads the package needs to update the default recipes, set the VLAN mode, and update boost TCAM entries. To support updating the default recipes in DVM, add support for updating an existing switch recipe's lkup_idx and mask. This is done by first calling the get recipe AQ (0x0292) with the desired recipe ID. Then, if that is successful, update one of the lookup indices (lkup_idx) and its associated mask if the mask is valid; otherwise the already existing mask will be used. The VLAN mode of the device has to be configured while the global configuration lock is held during DDP download, specifically right after the DDP has been downloaded. If supported, the device will default to DVM. Co-developed-by: Dan Nowlin <[email protected]> Signed-off-by: Dan Nowlin <[email protected]> Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-09 | ice: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 | Brett Creeley | 6 | -43/+1226
Add support for the VF driver to be able to request VIRTCHNL_VF_OFFLOAD_VLAN_V2, negotiate its VLAN capabilities via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, add/delete VLAN filters, and enable/disable VLAN offloads. VFs supporting VIRTCHNL_VF_OFFLOAD_VLAN_V2 will be able to use the following virtchnl opcodes:

  VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS
  VIRTCHNL_OP_ADD_VLAN_V2
  VIRTCHNL_OP_DEL_VLAN_V2
  VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2
  VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2
  VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2
  VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2

Legacy VF drivers may expect the initial VLAN stripping settings to be configured by the PF, so the PF initializes VLAN stripping based on the VIRTCHNL_OP_GET_VF_RESOURCES opcode. However, with VLAN support via VIRTCHNL_VF_OFFLOAD_VLAN_V2, this function is only expected to be used for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN, which will only be supported when a port VLAN is configured. Update the function based on the new expectations. Also, change the message when the PF can't enable/disable VLAN stripping to a dev_dbg() as this isn't fatal. When a VF isn't in a port VLAN and it only supports VIRTCHNL_VF_OFFLOAD_VLAN when Double VLAN Mode (DVM) is enabled, then the PF needs to reject the VIRTCHNL_VF_OFFLOAD_VLAN capability and configure the VF in software only VLAN mode. To do this add the new function ice_vf_vsi_cfg_legacy_vlan_mode(), which updates the VF's inner and outer ice_vsi_vlan_ops functions and sets up software only VLAN mode. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Konrad Jankowski <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-09 | ice: Add hot path support for 802.1Q and 802.1ad VLAN offloads | Brett Creeley | 9 | -22/+87
Currently the driver only supports 802.1Q VLAN insertion and stripping. However, once Double VLAN Mode (DVM) is fully supported, then both 802.1Q and 802.1ad VLAN insertion and stripping will be supported. Unfortunately the VSI context parameters only allow for one VLAN ethertype at a time for VLAN offloads, so only one or the other VLAN ethertype offload can be supported at once. To support this, multiple changes are needed.

Rx path changes:

[1] In DVM, the Rx queue context l2tagsel field needs to be cleared so the outermost tag shows up in the l2tag2_2nd field of the Rx flex descriptor. In Single VLAN Mode (SVM), the l2tagsel field should remain 1 to support SVM configurations.
[2] Modify the ice_test_staterr() function to take a __le16 instead of the ice_32b_rx_flex_desc union pointer so this function can be used for both rx_desc->wb.status_error0 and rx_desc->wb.status_error1.
[3] Add the new inline function ice_get_vlan_tag_from_rx_desc() that checks if there is a VLAN tag in l2tag1 or l2tag2_2nd.
[4] In ice_receive_skb(), add a check to see if NETIF_F_HW_VLAN_STAG_RX is enabled in netdev->features. If it is, then this is the VLAN ethertype that needs to be added to the stripped VLAN tag. Since ice_fix_features() prevents CTAG_RX and STAG_RX from being enabled simultaneously, the VLAN ethertype will only ever be 802.1Q or 802.1ad.

Tx path changes:

[1] In DVM, the VLAN tag needs to be placed in the l2tag2 field of the Tx context descriptor. The new define ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN was added to the list of tx_flags to handle this case.
[2] When the stack requests the VLAN tag to be offloaded on Tx, the driver needs to set either ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN or ICE_TX_FLAGS_HW_VLAN, so the tag is inserted in l2tag2 or l2tag1 respectively. To determine which location to use, set a bit in the Tx ring flags field during ring allocation that can be used to determine which field to use in the Tx descriptor. In DVM, always use l2tag2, and in SVM, always use l2tag1.

Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
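A rough sketch of the Rx-side helper from change [3] above, using a simplified write-back layout and assumed status-bit positions (the real driver has its own flex-descriptor definitions): return the tag from l2tag1 when present, otherwise from l2tag2_2nd.

#include <asm/byteorder.h>
#include <linux/bits.h>
#include <linux/types.h>

struct example_rx_flex_desc_wb {
	__le16 status_error0;
	__le16 l2tag1;		/* single/inner VLAN tag */
	__le16 status_error1;
	__le16 l2tag2_2nd;	/* outermost VLAN tag in DVM */
};

#define EXAMPLE_STATUS0_L2TAG1P		BIT(11)	/* assumed bit positions */
#define EXAMPLE_STATUS1_L2TAG2P		BIT(11)

static inline u16
example_get_vlan_tag_from_rx_desc(const struct example_rx_flex_desc_wb *wb)
{
	if (le16_to_cpu(wb->status_error0) & EXAMPLE_STATUS0_L2TAG1P)
		return le16_to_cpu(wb->l2tag1);

	if (le16_to_cpu(wb->status_error1) & EXAMPLE_STATUS1_L2TAG2P)
		return le16_to_cpu(wb->l2tag2_2nd);

	return 0;
}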
2022-02-09 | ice: Add outer_vlan_ops and VSI specific VLAN ops implementations | Brett Creeley | 16 | -86/+813
Add a new outer_vlan_ops member to the ice_vsi structure as outer VLAN ops are only available when the device is in Double VLAN Mode (DVM). Depending on the VSI type, the requirements for which operations to use/allow differ. By default all VSIs have unsupported inner and outer VSI VLAN ops. This implementation was chosen to prevent unexpected crashes due to NULL pointer dereferences; instead, if a VSI calls an unsupported op, it will just return -EOPNOTSUPP. Add implementations to support modifying outer VLAN fields for the VSI context. This includes the ability to modify VLAN stripping, insertion, and the port VLAN based on the outer VLAN handling fields of the VSI context. These functions should only ever be used if DVM is enabled because that means the firmware supports the outer VLAN fields in the VSI context. If the device is in DVM, then always use the outer_vlan_ops, else use the vlan_ops since the device is in Single VLAN Mode (SVM). Also, move adding the untagged VLAN 0 filter from ice_vsi_setup() to ice_vsi_vlan_setup() as the latter function is specific to the PF; all other VSI types that need an untagged VLAN 0 filter already do this in their specific flows. Without this change, Flow Director fails to initialize because it does not implement any VSI VLAN ops. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
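A sketch of the "unsupported by default" pattern described above, with simplified types and only two callbacks (the real ops tables carry many more): every slot points at a stub returning -EOPNOTSUPP instead of being left NULL.

#include <linux/errno.h>
#include <linux/types.h>

struct example_vsi;
struct example_vlan;

struct example_vsi_vlan_ops {
	int (*add_vlan)(struct example_vsi *vsi, struct example_vlan *vlan);
	int (*ena_stripping)(struct example_vsi *vsi, u16 tpid);
};

static int example_vlan_op_unsupported(struct example_vsi *vsi,
				       struct example_vlan *vlan)
{
	return -EOPNOTSUPP;	/* graceful failure, never a NULL deref */
}

static int example_tpid_op_unsupported(struct example_vsi *vsi, u16 tpid)
{
	return -EOPNOTSUPP;
}

static const struct example_vsi_vlan_ops example_unsupported_vlan_ops = {
	.add_vlan	= example_vlan_op_unsupported,
	.ena_stripping	= example_tpid_op_unsupported,
};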
2022-02-09 | ice: Adjust naming for inner VLAN operations | Brett Creeley | 6 | -142/+140
Current operations act on inner VLAN fields. To support double VLAN, outer VLAN operations and functions will be implemented. Add the "inner" naming to existing VLAN operations to distinguish them from the upcoming outer values and functions. Some spacing adjustments are made to align values. Note that "inner" does not refer to a tunneled VLAN, but to the second VLAN in the packet. For SVM the driver uses inner or single VLAN filtering and offloads; in Double VLAN Mode the driver uses the inner filtering and offloads for SR-IOV VFs in port VLANs in order to support offloading the guest VLAN while a port VLAN is configured. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-09 | ice: Use the proto argument for VLAN ops | Brett Creeley | 11 | -26/+78
Currently the proto argument is unused. This is because the driver only supports 802.1Q VLAN filtering. This policy is enforced via netdev features that the driver sets up when configuring the netdev, so the proto argument won't ever be anything other than 802.1Q. However, this will allow future iterations of the driver to seamlessly support 802.1ad filtering. Begin using the proto argument and extend the related structures to support its use. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-09 | ice: Refactor vf->port_vlan_info to use ice_vlan | Brett Creeley | 2 | -35/+44
The current vf->port_vlan_info variable is a packed u16 that contains the port VLAN ID and QoS/prio value. This is fine, but changes are incoming that allow for an 802.1ad port VLAN. Add flexibility by changing the vf->port_vlan_info member to be an ice_vlan structure. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-09 | ice: Introduce ice_vlan struct | Brett Creeley | 10 | -59/+82
Add a new struct for VLAN related information. Currently this holds VLAN ID and priority values, but will be expanded to hold TPID value. This reduces the changes necessary if any other values are added in future. Remove the action argument from these calls as it's always ICE_FWD_VSI. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
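A sketch of what such a container might look like; field names are assumptions for illustration, and the commented-out TPID marks the expansion the message mentions.

#include <linux/types.h>

struct example_vlan {
	u16 vid;	/* VLAN identifier, 0..4095 */
	u8 prio;	/* 802.1p priority / QoS bits */
	/* u16 tpid;	   to be added with 802.1ad support */
};

/* Convenience initializer so call sites stay short. */
#define EXAMPLE_VLAN(vlan_id, p) \
	((struct example_vlan){ .vid = (vlan_id), .prio = (p) })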
2022-02-09 | ice: Add new VSI VLAN ops | Brett Creeley | 14 | -334/+450
Incoming changes to support 802.1Q and/or 802.1ad VLAN filtering and offloads require more flexibility when configuring VLANs. The VSI VLAN interface will allow flexibility for configuring VLANs for all VSI types. Add new files to separate the VSI VLAN ops and move functions to make the code more organized. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-09 | ice: Add helper function for adding VLAN 0 | Brett Creeley | 4 | -5/+14
There are multiple places where VLAN 0 is being added. Create a function to be called in order to minimize changes as the implementation is expanded to support double VLAN and avoid duplicated code. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-09 | ice: Refactor spoofcheck configuration functions | Brett Creeley | 2 | -50/+128
Add functions to configure Tx VLAN antispoof based on iproute configuration and/or VLAN mode and VF driver support. This is needed later so the driver can control when it can be configured. Also, add functions that can be used to enable and disable MAC and VLAN spoofcheck. Move spoofchk configuration during VSI setup into the SR-IOV initialization path and into the post VSI rebuild flow for VF VSIs. Signed-off-by: Brett Creeley <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-09 | Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue | David S. Miller | 5 | -8/+42
Tony Nguyen says: ==================== 40GbE Intel Wired LAN Driver Updates 2022-02-08 Joe Damato says: This patch set makes several updates to the i40e driver stats collection and reporting code to help users of i40e get a better sense of how the driver is performing and interacting with the rest of the kernel. These patches include some new stats (like waived and busy) which were inspired by other drivers that track stats using the same nomenclature. The new stats and an existing stat, rx_reuse, are now accessible with ethtool to make harvesting this data more convenient for users. ==================== Signed-off-by: David S. Miller <[email protected]>
2022-02-09 | Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue | David S. Miller | 3 | -10/+16
Tony Nguyen says: ==================== 1GbE Intel Wired LAN Driver Updates 2022-02-07 Corinna Vinschen says: Fix the kernel warning "Missing unregister, handled but fix driver" when running, e.g., $ ethtool -G eth0 rx 1024 on igc. Remove memset hack from igb and align igb code to igc. ==================== Signed-off-by: David S. Miller <[email protected]>
2022-02-08 | Merge branch 'iwl-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux | Jakub Kicinski | 1 | -0/+5
Nguyen, Anthony L says: ==================== iwl-next Intel Wired LAN Driver Updates 2022-02-07 Dave adds support for ice driver to provide DSCP QoS mappings to irdma driver. [1] https://lore.kernel.org/netdev/[email protected]/ * 'iwl-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux: ice: add support for DSCP QoS for IDC ==================== Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2022-02-08 | i40e: Add a stat for tracking busy rx pages | Joe Damato | 5 | -5/+15
In some cases, pages cannot be reused by i40e because the page is busy. Add a counter for this event. Busy page count is accessible via ethtool. Signed-off-by: Joe Damato <[email protected]> Tested-by: Dave Switzer <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-08 | i40e: Add a stat for tracking pages waived | Joe Damato | 5 | -4/+17
In some cases, pages cannot be reused because they are not associated with the correct NUMA zone. Knowing how often pages are waived helps users understand the interaction between the driver's memory usage and their system. Pass rx_stats through to i40e_can_reuse_rx_page to allow tracking when pages are waived. The page waive count is accessible via ethtool. Signed-off-by: Joe Damato <[email protected]> Tested-by: Dave Switzer <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
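An illustrative sketch of the reuse check described in this and the previous entry, with assumed struct/field names and a simplified reference-count test (the real driver tracks a page-count bias): a page on the wrong NUMA node is waived, a page still referenced elsewhere is counted as busy.

#include <linux/mm.h>
#include <linux/topology.h>
#include <linux/types.h>

struct example_rx_stats {
	u64 page_waive_count;
	u64 page_busy_count;
};

static bool example_can_reuse_rx_page(struct page *page,
				      struct example_rx_stats *rx_stats)
{
	/* Page was allocated on a different NUMA node: waive it. */
	if (unlikely(page_to_nid(page) != numa_mem_id())) {
		rx_stats->page_waive_count++;
		return false;
	}

	/* Someone else still holds a reference: the page is busy. */
	if (unlikely(page_count(page) != 1)) {
		rx_stats->page_busy_count++;
		return false;
	}

	return true;
}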
2022-02-08 | i40e: Add a stat tracking new RX page allocations | Joe Damato | 5 | -1/+9
Add a counter for new page allocations in the i40e RX path. This stat is accessible with ethtool. Signed-off-by: Joe Damato <[email protected]> Tested-by: Dave Switzer <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-08 | i40e: Aggregate and export RX page reuse stat | Joe Damato | 3 | -1/+6
RX page reuse was already being tracked by the i40e driver per RX ring. Aggregate the counts and make them accessible via ethtool. Signed-off-by: Joe Damato <[email protected]> Tested-by: Dave Switzer <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-08 | i40e: Remove rx page reuse double count | Joe Damato | 1 | -2/+0
Page reuse was being tracked from two locations:
  - i40e_reuse_rx_page (via i40e_clean_rx_irq), and
  - i40e_alloc_mapped_page
Remove the double count and only count reuse from i40e_alloc_mapped_page when the page is about to be reused. Signed-off-by: Joe Damato <[email protected]> Tested-by: Dave Switzer <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-07 | igb: refactor XDP registration | Corinna Vinschen | 2 | -10/+13
On changing the RX ring parameters igb uses a hack to avoid a warning when calling xdp_rxq_info_reg via igb_setup_rx_resources. It just clears the struct xdp_rxq_info content. Instead, change this to unregister if we're already registered. Align code to the igc code. Fixes: 9cbc948b5a20c ("igb: add XDP support") Signed-off-by: Corinna Vinschen <[email protected]> Acked-by: Vinicius Costa Gomes <[email protected]> Tested-by: Sandeep Penigalapati <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-07 | igc: avoid kernel warning when changing RX ring parameters | Corinna Vinschen | 1 | -0/+3
Calling ethtool to change the RX ring parameters like this: $ ethtool -G eth0 rx 1024 on igc triggers kernel warnings like this:

[ 225.198467] ------------[ cut here ]------------
[ 225.198473] Missing unregister, handled but fix driver
[ 225.198485] WARNING: CPU: 7 PID: 959 at net/core/xdp.c:168 xdp_rxq_info_reg+0x79/0xd0
[...]
[ 225.198601] Call Trace:
[ 225.198604]  <TASK>
[ 225.198609]  igc_setup_rx_resources+0x3f/0xe0 [igc]
[ 225.198617]  igc_ethtool_set_ringparam+0x30e/0x450 [igc]
[ 225.198626]  ethnl_set_rings+0x18a/0x250
[ 225.198631]  genl_family_rcv_msg_doit+0xca/0x110
[ 225.198637]  genl_rcv_msg+0xce/0x1c0
[ 225.198640]  ? rings_prepare_data+0x60/0x60
[ 225.198644]  ? genl_get_cmd+0xd0/0xd0
[ 225.198647]  netlink_rcv_skb+0x4e/0xf0
[ 225.198652]  genl_rcv+0x24/0x40
[ 225.198655]  netlink_unicast+0x20e/0x330
[ 225.198659]  netlink_sendmsg+0x23f/0x480
[ 225.198663]  sock_sendmsg+0x5b/0x60
[ 225.198667]  __sys_sendto+0xf0/0x160
[ 225.198671]  ? handle_mm_fault+0xb2/0x280
[ 225.198676]  ? do_user_addr_fault+0x1eb/0x690
[ 225.198680]  __x64_sys_sendto+0x20/0x30
[ 225.198683]  do_syscall_64+0x38/0x90
[ 225.198687]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 225.198693] RIP: 0033:0x7f7ae38ac3aa

igc_ethtool_set_ringparam() copies the igc_ring structure but neglects to reset the xdp_rxq_info member before calling igc_setup_rx_resources(). This in turn calls xdp_rxq_info_reg() with an already registered xdp_rxq_info. Make sure to unregister the xdp_rxq_info structure first in igc_setup_rx_resources. Fixes: 73f1071c1d29 ("igc: Add support for XDP_TX action") Reported-by: Lennert Buytenhek <[email protected]> Signed-off-by: Corinna Vinschen <[email protected]> Acked-by: Vinicius Costa Gomes <[email protected]> Tested-by: Dvora Fuxbrumer <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
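A minimal sketch of the approach shared by this fix and the igb refactor above; the function name is illustrative, but xdp_rxq_info_is_reg()/xdp_rxq_info_unreg()/xdp_rxq_info_reg() are the core-kernel helpers involved: unregister any stale registration before registering again rather than memset-ing the struct away.

#include <linux/netdevice.h>
#include <net/xdp.h>

static int example_setup_rx_xdp_info(struct net_device *netdev,
				     struct xdp_rxq_info *xdp_rxq,
				     u32 queue_index)
{
	/* The copied ring may still carry a registered xdp_rxq_info,
	 * e.g. across an ethtool -G resize.
	 */
	if (xdp_rxq_info_is_reg(xdp_rxq))
		xdp_rxq_info_unreg(xdp_rxq);

	return xdp_rxq_info_reg(xdp_rxq, netdev, queue_index, 0);
}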
2022-02-04 | ixgbevf: Require large buffers for build_skb on 82599VF | Samuel Mendoza-Jonas | 1 | -6/+7
From 4.17 onwards the ixgbevf driver uses build_skb() to build an skb around new data in the page buffer shared with the ixgbe PF. This uses either a 2K or 3K buffer, and offsets the DMA mapping by NET_SKB_PAD + NET_IP_ALIGN. When using a smaller buffer, RXDCTL is set to ensure the PF does not write a full 2K bytes into the buffer, which is actually 2K minus the offset. However on the 82599 virtual function, the RXDCTL mechanism is not available. The driver attempts to work around this by using the SET_LPE mailbox method to lower the maximum frame size, but the ixgbe PF driver ignores this in order to keep the PF and all VFs in sync[0]. This means the PF will write up to the full 2K set in SRRCTL, causing it to write NET_SKB_PAD + NET_IP_ALIGN bytes past the end of the buffer. With 4K pages split into two buffers, this means it either writes NET_SKB_PAD + NET_IP_ALIGN bytes past the first buffer (and into the second), or NET_SKB_PAD + NET_IP_ALIGN bytes past the end of the DMA mapping. Avoid this by only enabling build_skb when using "large" buffers (3K). These are placed in each half of an order-1 page, preventing the PF from writing past the end of the mapping. [0]: Technically it only ever raises the max frame size, see ixgbe_set_vf_lpe() in ixgbe_sriov.c Fixes: f15c5ba5b6cd ("ixgbevf: add support for using order 1 pages to receive large frames") Signed-off-by: Samuel Mendoza-Jonas <[email protected]> Tested-by: Konrad Jankowski <[email protected]> Signed-off-by: Tony Nguyen <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2022-02-04 | Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue | David S. Miller | 6 | -50/+255
Tony Nguyen says: ==================== 40GbE Intel Wired LAN Driver Updates 2022-02-03 This series contains updates to the i40e client header file and driver. Mateusz disables HW TC offload by default. Joe Damato removes a no longer used statistic. Jakub Kicinski removes an unused enum from the client header file. Jedrzej changes some admin queue commands to occur under atomic context and adds new functions for admin queue MAC VLAN filters to avoid a potential race that could occur due to storing results in a structure that could be overwritten by the next admin queue call. ==================== Signed-off-by: David S. Miller <[email protected]>
2022-02-03 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 5 | -21/+74
No conflicts. Signed-off-by: Jakub Kicinski <[email protected]>
2022-02-03 | ice: add support for DSCP QoS for IDC | Dave Ertman | 1 | -0/+5
The ice driver provides QoS information to auxiliary drivers through the exported function ice_get_qos_params. This function doesn't currently support L3 DSCP QoS. Add the necessary defines, structure elements and code to support DSCP QoS through the IIDC functions. Signed-off-by: Dave Ertman <[email protected]> Signed-off-by: Shiraz Saleem <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-03 | i40e: Fix race condition while adding/deleting MAC/VLAN filters | Jedrzej Jagielski | 1 | -11/+13
There was a race condition in access to hw->aq.asq_last_status while adding and deleting MAC/VLAN filters causing incorrect error status to be printed as ERROR OK instead of the correct error. Change calls to i40e_aq_add_macvlan in i40e_aqc_add_filters and i40e_aq_remove_macvlan in i40e_aqc_del_filters to _v2 versions that return Admin Queue status on the stack to avoid race conditions in access to hw->aq.asq_last_status. Signed-off-by: Sylwester Dziedziuch <[email protected]> Signed-off-by: Jedrzej Jagielski <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-03 | i40e: Add new version of i40e_aq_add_macvlan function | Jedrzej Jagielski | 2 | -20/+77
ASQ send command functions are returning only i40e status codes, yet some calling functions also need the Admin Queue status that is stored in hw->aq.asq_last_status. Since the hw object is stored on the heap, this introduces a possibility for a race condition in access to hw if the calling function is not fast enough to read hw->aq.asq_last_status before the next send ASQ command is executed. Add a new _v2 version of i40e_aq_add_macvlan that uses the new _v2 versions of the ASQ send command functions and returns the Admin Queue status on the stack. Signed-off-by: Sylwester Dziedziuch <[email protected]> Signed-off-by: Jedrzej Jagielski <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-03 | i40e: Add new versions of send ASQ command functions | Jedrzej Jagielski | 3 | -8/+150
ASQ send command functions are returning only i40e status codes, yet some calling functions also need the Admin Queue status that is stored in hw->aq.asq_last_status. Since the hw object is stored on the heap, this introduces a possibility for a race condition in access to hw if the calling function is not fast enough to read hw->aq.asq_last_status before the next send ASQ command is executed. Add new versions of the send ASQ command functions that return the Admin Queue status on the stack to avoid race conditions in access to hw->aq.asq_last_status. Add a new _v2 version of i40e_aq_remove_macvlan that uses the new _v2 versions of the ASQ send command functions and returns the Admin Queue status on the stack. Signed-off-by: Sylwester Dziedziuch <[email protected]> Signed-off-by: Jedrzej Jagielski <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
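A sketch of the _v2 idea with simplified types and names (not the i40e signatures): the Admin Queue completion status is handed back through an out-parameter on the caller's stack, so a later command on the shared hw object cannot overwrite it before the caller reads it.

#include <linux/types.h>

enum example_aq_err {
	EXAMPLE_AQ_RC_OK	= 0,
	EXAMPLE_AQ_RC_EINVAL	= 14,
};

struct example_hw {
	enum example_aq_err asq_last_status;	/* shared field, racy to read later */
};

/* Assumed stand-in for the existing low-level ASQ send path. */
int example_asq_send_command(struct example_hw *hw, void *desc, void *buf,
			     u16 buf_size);

int example_asq_send_command_v2(struct example_hw *hw, void *desc, void *buf,
				u16 buf_size, enum example_aq_err *aq_status)
{
	int status = example_asq_send_command(hw, desc, buf, buf_size);

	/* Snapshot the AQ status before any other command can clobber it. */
	if (aq_status)
		*aq_status = hw->asq_last_status;

	return status;
}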
2022-02-03 | i40e: Add sending commands in atomic context | Jedrzej Jagielski | 1 | -9/+12
Change functions:
  - i40e_aq_add_macvlan
  - i40e_aq_remove_macvlan
  - i40e_aq_delete_element
  - i40e_aq_add_vsi
  - i40e_aq_update_vsi_params
to explicitly use i40e_asq_send_command_atomic(..., true) instead of i40e_asq_send_command, as they use mutexes and do some work in an atomic context. Without this change, setting a VLAN via netdev will fail with a call trace caused by the bug "BUG: scheduling while atomic". Signed-off-by: Witold Fijalkowski <[email protected]> Signed-off-by: Jedrzej Jagielski <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-03 | i40e: Remove unused RX realloc stat | Joe Damato | 2 | -3/+1
After commit 1a557afc4dd5 ("i40e: Refactor receive routine"), rx_stats.realloc_count is no longer being incremented, so remove it. The debugfs string was left, but hardcoded to 0. This is intended to prevent breaking any existing code / scripts that are parsing debugfs for i40e. Signed-off-by: Joe Damato <[email protected]> Reviewed-by: Jesse Brandeburg <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-03 | i40e: Disable hw-tc-offload feature on driver load | Mateusz Palczewski | 1 | -1/+4
After loading the driver, hw-tc-offload is enabled by default. Change the behaviour of the driver to disable hw-tc-offload by default, as this is the expected state. Additionally, since this impacts the ntuple feature state, change the way the NETIF_F_HW_TC flag is checked. Signed-off-by: Norbert Zulinski <[email protected]> Signed-off-by: Przemyslaw Patynowski <[email protected]> Signed-off-by: Mateusz Palczewski <[email protected]> Tested-by: Dave Switzer <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
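A small sketch of the default-off behaviour, assuming the usual netdev feature split (function name illustrative): the offload stays advertised in hw_features so ethtool can turn it on, but is cleared from the active features after load.

#include <linux/netdevice.h>

static void example_init_tc_offload_default(struct net_device *netdev)
{
	/* Keep HW TC offload available via "ethtool -K <dev> hw-tc-offload on" ... */
	netdev->hw_features |= NETIF_F_HW_TC;
	/* ... but leave it disabled in the active feature set by default. */
	netdev->features &= ~NETIF_F_HW_TC;
}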
2022-02-01 | Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue | Jakub Kicinski | 3 | -19/+44
Tony Nguyen says: ==================== Intel Wired LAN Driver Updates 2022-02-01 This series contains updates to e1000e driver only. Sasha removes CSME handshake with TGL platform as this is not supported and is causing hardware unit hangs to be reported. * '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue: e1000e: Handshake with CSME starts from ADL platforms e1000e: Separate ADP board type from TGP ==================== Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
2022-02-01 | e1000e: Handshake with CSME starts from ADL platforms | Sasha Neftin | 1 | -2/+4
Handshake with CSME/AMT on non-provisioned platforms during the S0ix flow is not supported on the TGL platform and can cause a HW unit hang. Update the handshake with CSME flow to start from the ADL platform. Fixes: 3e55d231716e ("e1000e: Add handshake with the CSME to support S0ix") Signed-off-by: Sasha Neftin <[email protected]> Tested-by: Nechama Kraus <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-02-01 | e1000e: Separate ADP board type from TGP | Sasha Neftin | 3 | -17/+40
We have the same LAN controller on different PCHs. Separate the ADP board type from TGP, which will allow specific fixes to be applied for ADP platforms. Suggested-by: Kai-Heng Feng <[email protected]> Suggested-by: Dima Ruinskiy <[email protected]> Signed-off-by: Sasha Neftin <[email protected]> Tested-by: Nechama Kraus <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | i40e: Fix reset path while removing the driver | Karen Sornek | 2 | -1/+19
Fix a crash in the kernel caused by dereferencing a NULL pointer when the driver is unloaded while the VSI rings are simultaneously being stopped. The hardware requires 50msec in order to finish RX queues disable. For this purpose the driver spins in the mdelay function for the operation to be completed. For example, changing the number of queues, which requires a reset, would fail in the following call stack:
  1) i40e_prep_for_reset
  2) i40e_pf_quiesce_all_vsi
  3) i40e_quiesce_vsi
  4) i40e_vsi_close
  5) i40e_down
  6) i40e_vsi_stop_rings
  7) i40e_vsi_control_rx -> disable requires the delay of 50msecs
  8) continue back in i40e_down function where i40e_clean_tx_ring(vsi->tx_rings[i]) is going to crash
While the driver was spinning, vsi_release called i40e_vsi_free_arrays, where the vsi->tx_rings resources were freed and the pointer was set to NULL. Fixes: 5b6d4a7f20b0 ("i40e: Fix crash during removing i40e driver") Signed-off-by: Slawomir Laba <[email protected]> Signed-off-by: Sylwester Dziedziuch <[email protected]> Signed-off-by: Karen Sornek <[email protected]> Tested-by: Gurucharan G <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | i40e: Fix reset bw limit when DCB enabled with 1 TC | Jedrzej Jagielski | 1 | -1/+11
There was an AQ error I40E_AQ_RC_EINVAL when trying to reset the bw limit as part of bw allocation setup. This was caused by trying to reset the bw limit with DCB enabled; the bw limit should not be reset when DCB is enabled. The code was relying on pf->flags to check if DCB is enabled, but if only 1 TC is available this flag will not be set even though DCB is enabled. Add a check for the number of TCs and, if it is 1, don't try to reset the bw limit even if pf->flags shows DCB as disabled. Fixes: fa38e30ac73f ("i40e: Fix for Tx timeouts when interface is brought up if DCB is enabled") Suggested-by: Alexander Lobakin <[email protected]> # Flatten the condition Signed-off-by: Sylwester Dziedziuch <[email protected]> Signed-off-by: Jedrzej Jagielski <[email protected]> Reviewed-by: Alexander Lobakin <[email protected]> Tested-by: Imam Hassan Reza Biswas <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | ixgbe: respect metadata on XSK Rx to skb | Alexander Lobakin | 1 | -4/+10
For now, if the XDP prog returns XDP_PASS on XSK, the metadata will be lost as it doesn't get copied to the skb. Copy it along with the frame headers. Account its size on skb allocation, and when copying just treat it as a part of the frame and do a pull after to "move" it to the "reserved" zone. net_prefetch() xdp->data_meta and align the copy size to speed-up memcpy() a little and better match ixgbe_construct_skb(). Fixes: d0bcacd0a130 ("ixgbe: add AF_XDP zero-copy Rx support") Suggested-by: Jesper Dangaard Brouer <[email protected]> Suggested-by: Maciej Fijalkowski <[email protected]> Signed-off-by: Alexander Lobakin <[email protected]> Reviewed-by: Michal Swiatkowski <[email protected]> Tested-by: Sandeep Penigalapati <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
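A condensed sketch of the metadata-preserving copy shared by this and the following ice/igc/i40e entries, not the exact driver helper: allocate for data_meta..data_end, copy from data_meta, then __skb_pull() the metadata into the headroom and record it with skb_metadata_set(). (The real patches additionally round the memcpy length up to a multiple of sizeof(long) as a micro-optimisation.)

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/string.h>
#include <net/xdp.h>

static struct sk_buff *example_construct_skb_zc(struct napi_struct *napi,
						struct xdp_buff *xdp)
{
	unsigned int metasize = xdp->data - xdp->data_meta;
	unsigned int totalsize = xdp->data_end - xdp->data_meta;
	struct sk_buff *skb;

	net_prefetch(xdp->data_meta);

	skb = __napi_alloc_skb(napi, totalsize, GFP_ATOMIC | __GFP_NOWARN);
	if (unlikely(!skb))
		return NULL;

	/* Copy headers and metadata in one go, then hide the metadata. */
	memcpy(__skb_put(skb, totalsize), xdp->data_meta, totalsize);
	if (metasize) {
		__skb_pull(skb, metasize);
		skb_metadata_set(skb, metasize);
	}

	return skb;
}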
2022-01-31 | ixgbe: don't reserve excessive XDP_PACKET_HEADROOM on XSK Rx to skb | Alexander Lobakin | 1 | -3/+1
{__,}napi_alloc_skb() allocates and reserves additional NET_SKB_PAD + NET_IP_ALIGN for any skb. OTOH, ixgbe_construct_skb_zc() currently allocates and reserves additional `xdp->data - xdp->data_hard_start`, which is XDP_PACKET_HEADROOM for XSK frames. There's no need for that at all as the frame is post-XDP and will go only to the networking stack core. Pass the size of the actual data only to __napi_alloc_skb() and don't reserve anything. This will give enough headroom for stack processing. Fixes: d0bcacd0a130 ("ixgbe: add AF_XDP zero-copy Rx support") Signed-off-by: Alexander Lobakin <[email protected]> Reviewed-by: Michal Swiatkowski <[email protected]> Tested-by: Sandeep Penigalapati <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | ixgbe: pass bi->xdp to ixgbe_construct_skb_zc() directly | Alexander Lobakin | 1 | -9/+10
To not dereference bi->xdp each time in ixgbe_construct_skb_zc(), pass bi->xdp as an argument instead of bi. We can also call xsk_buff_free() outside of the function as well as assign bi->xdp to NULL, which seems to make it closer to its name. Suggested-by: Maciej Fijalkowski <[email protected]> Signed-off-by: Alexander Lobakin <[email protected]> Tested-by: Sandeep Penigalapati <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | igc: don't reserve excessive XDP_PACKET_HEADROOM on XSK Rx to skb | Alexander Lobakin | 1 | -6/+7
{__,}napi_alloc_skb() allocates and reserves additional NET_SKB_PAD + NET_IP_ALIGN for any skb. OTOH, igc_construct_skb_zc() currently allocates and reserves additional `xdp->data_meta - xdp->data_hard_start`, which is about XDP_PACKET_HEADROOM for XSK frames. There's no need for that at all as the frame is post-XDP and will go only to the networking stack core. Pass the size of the actual data only (+ meta) to __napi_alloc_skb() and don't reserve anything. This will give enough headroom for stack processing. Also, net_prefetch() xdp->data_meta and align the copy size to speed-up memcpy() a little and better match igc_construct_skb(). Fixes: fc9df2a0b520 ("igc: Enable RX via AF_XDP zero-copy") Signed-off-by: Alexander Lobakin <[email protected]> Reviewed-by: Michal Swiatkowski <[email protected]> Tested-by: Nechama Kraus <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | ice: respect metadata on XSK Rx to skb | Alexander Lobakin | 1 | -4/+10
For now, if the XDP prog returns XDP_PASS on XSK, the metadata will be lost as it doesn't get copied to the skb. Copy it along with the frame headers. Account its size on skb allocation, and when copying just treat it as a part of the frame and do a pull after to "move" it to the "reserved" zone. net_prefetch() xdp->data_meta and align the copy size to speed-up memcpy() a little and better match ice_construct_skb(). Fixes: 2d4238f55697 ("ice: Add support for AF_XDP") Suggested-by: Jesper Dangaard Brouer <[email protected]> Suggested-by: Maciej Fijalkowski <[email protected]> Signed-off-by: Alexander Lobakin <[email protected]> Reviewed-by: Michal Swiatkowski <[email protected]> Tested-by: Kiran Bhandare <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | ice: don't reserve excessive XDP_PACKET_HEADROOM on XSK Rx to skb | Alexander Lobakin | 1 | -3/+1
{__,}napi_alloc_skb() allocates and reserves additional NET_SKB_PAD + NET_IP_ALIGN for any skb. OTOH, ice_construct_skb_zc() currently allocates and reserves additional `xdp->data - xdp->data_hard_start`, which is XDP_PACKET_HEADROOM for XSK frames. There's no need for that at all as the frame is post-XDP and will go only to the networking stack core. Pass the size of the actual data only to __napi_alloc_skb() and don't reserve anything. This will give enough headroom for stack processing. Fixes: 2d4238f55697 ("ice: Add support for AF_XDP") Signed-off-by: Alexander Lobakin <[email protected]> Reviewed-by: Michal Swiatkowski <[email protected]> Tested-by: Kiran Bhandare <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | ice: respect metadata in legacy-rx/ice_construct_skb() | Alexander Lobakin | 1 | -4/+11
In "legacy-rx" mode represented by ice_construct_skb(), we can still use XDP (and XDP metadata), but after XDP_PASS the metadata will be lost as it doesn't get copied to the skb. Copy it along with the frame headers. Account its size on skb allocation, and when copying just treat it as a part of the frame and do a pull after to "move" it to the "reserved" zone. Point net_prefetch() to xdp->data_meta instead of data. This won't change anything when the meta is not here, but will save some cache misses otherwise. Suggested-by: Jesper Dangaard Brouer <[email protected]> Suggested-by: Maciej Fijalkowski <[email protected]> Signed-off-by: Alexander Lobakin <[email protected]> Reviewed-by: Michal Swiatkowski <[email protected]> Tested-by: Kiran Bhandare <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | i40e: respect metadata on XSK Rx to skb | Alexander Lobakin | 1 | -4/+10
For now, if the XDP prog returns XDP_PASS on XSK, the metadata will be lost as it doesn't get copied to the skb. Copy it along with the frame headers. Account its size on skb allocation, and when copying just treat it as a part of the frame and do a pull after to "move" it to the "reserved" zone. net_prefetch() xdp->data_meta and align the copy size to speed-up memcpy() a little and better match i40e_construct_skb(). Fixes: 0a714186d3c0 ("i40e: add AF_XDP zero-copy Rx support") Suggested-by: Jesper Dangaard Brouer <[email protected]> Suggested-by: Maciej Fijalkowski <[email protected]> Signed-off-by: Alexander Lobakin <[email protected]> Reviewed-by: Michal Swiatkowski <[email protected]> Tested-by: Kiran Bhandare <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-31 | i40e: don't reserve excessive XDP_PACKET_HEADROOM on XSK Rx to skb | Alexander Lobakin | 1 | -3/+1
{__,}napi_alloc_skb() allocates and reserves additional NET_SKB_PAD + NET_IP_ALIGN for any skb. OTOH, i40e_construct_skb_zc() currently allocates and reserves additional `xdp->data - xdp->data_hard_start`, which is XDP_PACKET_HEADROOM for XSK frames. There's no need for that at all as the frame is post-XDP and will go only to the networking stack core. Pass the size of the actual data only to __napi_alloc_skb() and don't reserve anything. This will give enough headroom for stack processing. Fixes: 0a714186d3c0 ("i40e: add AF_XDP zero-copy Rx support") Signed-off-by: Alexander Lobakin <[email protected]> Reviewed-by: Michal Swiatkowski <[email protected]> Acked-by: Jesper Dangaard Brouer <[email protected]> Tested-by: Kiran Bhandare <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
2022-01-27 | igbvf: Remove useless DMA-32 fallback configuration | Christophe JAILLET | 1 | -16/+6
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. Therefore, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <[email protected]> Tested-by: Konrad Jankowski <[email protected]> Signed-off-by: Tony Nguyen <[email protected]>
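A sketch of the simplified probe-time DMA setup this implies (function name assumed): a single 64-bit dma_set_mask_and_coherent() call with no 32-bit fallback and no pci_using_dac bookkeeping.

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int example_probe_dma(struct pci_dev *pdev)
{
	int err;

	/* With a non-NULL dev->dma_mask this only fails when no DMA
	 * configuration is usable at all, so a 32-bit retry is pointless.
	 */
	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (err)
		dev_err(&pdev->dev, "No usable DMA configuration, aborting\n");

	return err;
}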